mirror of
https://github.com/LCTT/TranslateProject.git
synced 2024-12-29 21:41:00 +08:00
Merge remote-tracking branch 'LCTT/master'
commit 77bf531825
70 published/20171215 Linux Vs Unix.md Normal file
@@ -0,0 +1,70 @@
The Differences Between Linux and Unix
======================================

[![Linux vs. Unix](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix-vs-linux_orig.jpg)][1]

In the computer era, a considerable number of people wrongly believe that the **Unix** and **Linux** operating systems are one and the same. The opposite is true. Let's take a closer look.

### What is Unix?

[![what is unix](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix_orig.png)][2]

In IT, Unix is known as an operating system developed by AT&T in New Jersey, USA, in 1969 (its trademark is currently held by The Open Group). Most operating systems are inspired by Unix, and Unix itself was inspired by the unfinished Multics system. Another version of Unix is Plan 9 from Bell Labs.

#### Where is Unix used?

As an operating system, Unix is used mostly on servers and workstations, and nowadays on personal computers as well. It played a very important role in the creation of the Internet, computer networks, and the client/server model.

#### Characteristics of Unix systems

* Supports multitasking
* Simpler to operate than Multics
* All data is stored as plain text
* Uses a tree-like file hierarchy under a single root
* Multiple user accounts can access the system at the same time

#### Components of the Unix operating system

**a)** A monolithic kernel that handles low-level operations as well as operations initiated by users; communication with the kernel takes place through system calls.

**b)** System utilities

**c)** Other applications

### What is Linux?

[![what is linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux_orig.png)][4]

Linux is an open source operating system built on the principles of Unix. As "open source" implies, it can be downloaded freely. It can also be customized by editing, adding to, and extending its source code. That is one of its biggest advantages compared with other operating systems of today (Windows, Mac OS X, and so on), which have to be paid for. Unix was not the only template for the new system; another important factor was the MINIX system. Unlike Linux, this version was used by its creator (Andrew Tanenbaum) for commercial purposes.

Linux was developed by Linus Torvalds in 1991 as an operating system that started out as a personal hobby. One of the main reasons Linux drew on Unix was its simplicity. The first official version of Linux (0.01) was released on September 17, 1991. Although the system was far from perfect or complete, Linus took great interest in it, and within days he was sending out emails with ideas about extending the Linux source code.

#### Characteristics of Linux

The cornerstone of Linux is its Unix-like kernel, which builds on the basic characteristics of Unix as well as the **POSIX** standards and the **Single UNIX Specification**. The operating system's official name was apparently derived from **Linus**, with the "x" at the end of the name linking it to the **Unix system**.

#### Main features

* Runs multiple tasks at once (multitasking)
* A program can consist of one or more processes (a multi-purpose system), and each process can have one or more threads
* Multi-user, so it can run the programs of several users at once
* Personal accounts are protected by appropriate authorization
* Accounts therefore precisely define each user's rights on the system
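The account-based access control in the list above can be seen directly from a shell. A minimal sketch (assuming a POSIX shell with coreutils; the file path is just a scratch location):

```shell
# Create a file readable and writable only by the owning account.
f=/tmp/perm_demo.$$
touch "$f"
chmod 600 "$f"
# The mode string shows the rights of the owner, group, and everyone else.
ls -l "$f" | cut -c1-10   # → -rw-------
rm "$f"
```

Any other (non-root) account attempting to read this file would get a "Permission denied" error, which is exactly how accounts define access rights on the system.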

The **Tux** penguin logo was created by Larry Ewing, who chose the penguin as the mascot of the open source **Linux operating system**. **Linus Torvalds** originally proposed "Freax" as the name of the new operating system, a blend of "free" + "freak" + x (for Unix), rather than the name (Linux) it was given on the FTP server that hosted its first version.

--------------------------------------------------------------------------------

via: http://www.linuxandubuntu.com/home/linux-vs-unix

Author: [linuxandubuntu][a]
Translator: [HardworkFish](https://github.com/HardworkFish)
Proofreaders: [imquanquan](https://github.com/imquanquan), [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

[a]:http://www.linuxandubuntu.com
[1]:http://www.linuxandubuntu.com/home/linux-vs-unix
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix_orig.png
[3]:http://www.unix.org/what_is_unix.html
[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux_orig.png
[5]:https://www.linux.com
@@ -1,126 +0,0 @@
An introduction to the DomTerm terminal emulator for Linux
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah)

[DomTerm][1] is a modern terminal emulator that uses a browser engine as a "GUI toolkit." This enables some neat features, such as embeddable graphics and links, HTML rich text, and foldable (show/hide) commands. Otherwise it looks and feels like a feature-full, standalone terminal emulator, with excellent xterm compatibility (including mouse handling and 24-bit color), and appropriate "chrome" (menus). In addition, there is built-in support for session management and sub-windows (as in `tmux` and `GNU screen`), basic input editing (as in `readline`), and paging (as in `less`).

![](https://opensource.com/sites/default/files/u128651/domterm1.png)

Image 1: The DomTerm terminal emulator. View larger image.

Below we'll look more at these features. We'll assume you have `domterm` installed (skip to the end of this article if you need to get and build DomTerm). First, though, here's a quick overview of the technology.

### Frontend vs. backend

Most of DomTerm is written in JavaScript and runs in a browser engine. This can be a desktop web browser, such as Chrome or Firefox (see image 3), or it can be an embedded browser. Using a general web browser works fine, but the user experience isn't as nice (as the menus are designed for general browsing, not for a terminal emulator), and the security model gets in the way, so using an embedded browser is nicer.

The following are currently supported:

* `qtdomterm`, which uses the Qt toolkit and `QtWebEngine`
* An [Electron][2] embedding (see image 1)
* `atom-domterm`, which runs DomTerm as a package in the [Atom text editor][3] (which is also based on Electron) and integrates with the Atom pane system (see image 2)
* A wrapper for JavaFX's `WebEngine`, which is useful for code written in Java (see image 4)
* Previously, the preferred frontend used [Firefox-XUL][4], but Mozilla has since dropped XUL

![DomTerm terminal panes in Atom editor][6]

Image 2: DomTerm terminal panes in Atom editor. [View larger image.][7]

Currently, the Electron frontend is probably the nicest option, closely followed by the Qt frontend. If you use Atom, `atom-domterm` works pretty well.

The backend server is written in C. It manages pseudo terminals (PTYs) and sessions. It is also an HTTP server that provides the JavaScript and other files to the frontend. The `domterm` command starts terminal jobs and performs other requests. If there is no server running, `domterm` daemonizes itself. Communication between the backend and the frontend is normally done using WebSockets (with [libwebsockets][8] on the server). However, the JavaFX embedding uses neither WebSockets nor the DomTerm server; instead, Java applications communicate directly using the Java-JavaScript bridge.

### A solid xterm-compatible terminal emulator

DomTerm looks and feels like a modern terminal emulator. It handles mouse events, 24-bit color, Unicode, double-width (CJK) characters, and input methods. DomTerm does a very good job on the [vttest testsuite][9].

Unusual features include:

**Show/hide buttons ("folding"):** The little triangles (seen in image 2 above) are buttons that hide/show the corresponding output. To create the buttons, just add certain [escape sequences][10] in the [prompt text][11].

**Mouse-click support for `readline` and similar input editors:** If you click in the (yellow) input area, DomTerm will send the right sequence of arrow-key keystrokes to the application. (This is enabled by escape sequences in the prompt; you can also force it using Alt+Click.)

**Style the terminal using CSS:** This is usually done in `~/.domterm/settings.ini`, which is automatically reloaded when saved. For example, in image 2, terminal-specific background colors were set.

### A better REPL console

A classic terminal emulator works on rectangular grids of character cells. This works for a REPL (command shell), but it is not ideal. Here are some DomTerm features useful for REPLs that are not typically found in terminal emulators:

**A command can "print" an image, a graph, a mathematical formula, or a set of clickable links:** An application can send an escape sequence containing almost any HTML. (The HTML is scrubbed to remove JavaScript and other dangerous features.)

Image 3 shows a fragment from a [`gnuplot`][12] session. Gnuplot (2.1 or later) supports `domterm` as a terminal type. Graphical output is converted to an [SVG image][13], which is then printed to the terminal. My blog post [Gnuplot display on DomTerm][14] provides more information on this.

![](https://opensource.com/sites/default/files/dt-gnuplot.png)

Image 3: Gnuplot screenshot. View larger image.

The [Kawa][15] language has a library for creating and transforming [geometric picture values][16]. If you print such a picture value to a DomTerm terminal, the picture is converted to SVG and embedded in the output.

![](https://opensource.com/sites/default/files/dt-kawa1.png)

Image 4: Computable geometry in Kawa. View larger image.

**Rich text in output:** Help messages are more readable and look nicer with HTML styling. The lower pane of image 1 shows the output from `domterm help`. (The output is plain text if not running under DomTerm.) Note the `PAUSED` message from the built-in pager.

**Error messages can include clickable links:** DomTerm recognizes the syntax `filename:line:column:` and turns it into a link that opens the file and line in a configurable text editor. (This works for relative filenames if you use `PROMPT_COMMAND` or similar to track directories.)
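
As a quick illustration, any program that formats its diagnostics in the `filename:line:column:` shape described above produces output DomTerm can linkify. The file name and position below are made up for the example:

```shell
# Print a compiler-style diagnostic in the filename:line:column: form
# that DomTerm recognizes and turns into a clickable link.
printf '%s:%d:%d: warning: unused variable\n' "src/main.c" 42 7
# → src/main.c:42:7: warning: unused variable
```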

A compiler can detect that it is running under DomTerm and directly emit file links in an escape sequence. This is more robust than depending on DomTerm's pattern matching, as it handles spaces and other special characters, and it does not depend on directory tracking. In image 4, you can see error messages from the [Kawa compiler][15]. Hovering over the file position causes it to be underlined, and the `file:` URL shows in the `atom-domterm` message area (bottom of the window). (When not using `atom-domterm`, such messages are shown in an overlay box, as seen for the `PAUSED` message in image 1.)

The action when clicking on a link is configurable. The default action for a `file:` link with a `#position` suffix is to open the file in a text editor.

**Structured internal representation:** Commands, prompts, input lines, normal and error output, and tabs are all represented in the internal node structure, and that structure is preserved if you "Save as HTML." The HTML file is compatible with XML, so you can use XML tools to search or transform the output. The command `domterm view-saved` opens a saved HTML file in a way that enables command folding (show/hide buttons are active) and reflow on window resize.

**Built-in Lisp-style pretty-printing:** You can include pretty-printing directives (e.g., grouping) in the output such that line breaks are recalculated on window resize. See my article [Dynamic pretty-printing in DomTerm][17] for a deeper discussion.

**Basic built-in line editing** with history (like `GNU readline`): This uses the browser's built-in editor, so it has great mouse and selection handling. You can switch between character-mode (most characters typed are sent directly to the process) and line-mode (regular characters are inserted while control characters cause editing actions, with Enter sending the edited line to the process). The default is automatic mode, where DomTerm switches between character-mode and line-mode depending on whether the PTY is in raw or canonical mode.

**A built-in pager** (like a simplified `less`): Keyboard shortcuts control scrolling. In "paging mode," the output pauses after each new screen (or single line, if you move forward line-by-line). The paging mode is unobtrusive and smart about user input, so you can (if you wish) run it without it interfering with interactive programs.

### Multiplexing and sessions

**Tabs and tiling:** Not only can you create multiple terminal tabs, you can also tile them. You can use either the mouse or a keyboard shortcut to move between panes and tabs as well as create new ones. They can be rearranged and resized with the mouse. This is implemented using the [GoldenLayout][18] JavaScript library. [Image 1][19] shows a window with two panes. The top one has two tabs, with one running [Midnight Commander][20]; the bottom pane shows `domterm help` output as HTML. However, on Atom we instead use its built-in draggable tiles and tabs; you can see this in image 2.

**Detaching and reattaching to sessions:** DomTerm supports session management, similar to `tmux` and GNU `screen`. You can even attach multiple windows or panes to the same session. This supports multi-user session sharing and remote connections. (For security, all sessions of the same server need to be able to read a Unix domain socket and a local file containing a random key. This restriction will be lifted when we have a good, safe remote-access story.)

**The `domterm` command** is also like `tmux` or GNU `screen` in that it has multiple options for controlling or starting a server that manages one or more sessions. The major difference is that, if it's not already running under DomTerm, the `domterm` command creates a new top-level window, rather than running in the existing terminal.

The `domterm` command has a number of sub-commands, similar to `tmux` or `git`. Some sub-commands create windows or sessions. Others (such as "printing" an image) only work within an existing DomTerm session.

The command `domterm browse` opens a window or pane for browsing a specified URL, such as when browsing documentation.

### Getting and installing DomTerm

DomTerm is available from its [GitHub repository][21]. Currently, there are no prebuilt packages, but there are [detailed instructions][22]. All prerequisites are available on Fedora 27, which makes it especially easy to build.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/introduction-domterm-terminal-emulator

Author: [Per Bothner][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

[a]:https://opensource.com/users/perbothner
[1]:http://domterm.org/
[2]:https://electronjs.org/
[3]:https://atom.io/
[4]:https://en.wikipedia.org/wiki/XUL
[5]:/file/385346
[6]:https://opensource.com/sites/default/files/images/dt-atom1.png (DomTerm terminal panes in Atom editor)
[7]:https://opensource.com/sites/default/files/images/dt-atom1.png
[8]:https://libwebsockets.org/
[9]:http://invisible-island.net/vttest/
[10]:http://domterm.org/Wire-byte-protocol.html
[11]:http://domterm.org/Shell-prompts.html
[12]:http://www.gnuplot.info/
[13]:https://developer.mozilla.org/en-US/docs/Web/SVG
[14]:http://per.bothner.com/blog/2016/gnuplot-in-domterm/
[15]:https://www.gnu.org/software/kawa/
[16]:https://www.gnu.org/software/kawa/Composable-pictures.html
[17]:http://per.bothner.com/blog/2017/dynamic-prettyprinting/
[18]:https://golden-layout.com/
[19]:https://opensource.com/sites/default/files/u128651/domterm1.png
[20]:https://midnight-commander.org/
[21]:https://github.com/PerBothner/DomTerm
[22]:http://domterm.org/Downloading-and-building.html
@@ -1,138 +0,0 @@
A history of low-level Linux container runtimes
============================================================

### "Container runtime" is an overloaded term.

![two ships passing in the ocean](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/running-containers-two-ship-container-beach.png?itok=wr4zJC6p "two ships passing in the ocean")

Image credits: Rikki Endsley. [CC BY-SA 4.0][12]

At Red Hat we like to say, "Containers are Linux—Linux is Containers." Here is what this means. Traditional containers are processes on a system that usually have the following three characteristics:

### 1. Resource constraints

[ Related reading: [What are Linux containers?][1] | [What is Docker?][2] | [What is Kubernetes?][3] | [An introduction to container terminology][4] ]

When you run lots of containers on a system, you do not want to have any container monopolize the operating system, so we use resource constraints to control things like CPU, memory, network bandwidth, etc. The Linux kernel provides the cgroups feature, which can be configured to control the container process resources.
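
You can see the resource-constraint side from userspace by asking `/proc` which cgroup a process (here, the shell itself) has been placed in. A minimal sketch, assuming a Linux `/proc` (the output format differs between cgroup v1 and v2):

```shell
# Each line is hierarchy-ID:controller-list:cgroup-path.
cat /proc/self/cgroup
```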

### 2. Security constraints

Usually, you do not want your containers being able to attack each other or attack the host system. We take advantage of several features of the Linux kernel to set up security separation, such as SELinux, seccomp, capabilities, etc.

### 3. Virtual separation

Container processes should not have a view of any processes outside the container. They should be on their own network. Container processes in different containers need to be able to bind to the same port 80. Each container needs a different view of its image and needs its own root filesystem (rootfs). In Linux, we use kernel namespaces to provide virtual separation.
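
This virtual separation is observable from userspace: every process's namespaces appear as symlinks under `/proc/<pid>/ns`, and two processes share a namespace exactly when their links point at the same object. A small sketch, assuming a Linux `/proc`:

```shell
# Compare the UTS namespace of the current process with that of its
# shell; identical targets mean the two processes share the namespace.
readlink /proc/self/ns/uts
readlink /proc/$$/ns/uts
```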

Therefore, a process that runs in a cgroup, has security settings, and runs in namespaces can be called a container. Looking at PID 1, systemd, on a Red Hat Enterprise Linux 7 system, you see that systemd runs in a cgroup.

```
# tail -1 /proc/1/cgroup
1:name=systemd:/
```

The `ps` command shows you that the systemd process has an SELinux label ...

```
# ps -eZ | grep systemd
system_u:system_r:init_t:s0 1 ? 00:00:48 systemd
```

and capabilities.

```
# grep Cap /proc/1/status
...
CapEff: 0000001fffffffff
CapBnd: 0000001fffffffff
CapBnd: 0000003fffffffff
```

Finally, if you look at the `/proc/1/ns` subdir, you will see the namespaces that systemd runs in.

```
ls -l /proc/1/ns
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 mnt -> mnt:[4026531840]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 net -> net:[4026532009]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 pid -> pid:[4026531836]
...
```

If PID 1 (and really every other process on the system) has resource constraints, security settings, and namespaces, I argue that every process on the system is in a container.

Container runtime tools just modify these resource constraints, security settings, and namespaces. Then the Linux kernel executes the processes. After the container is launched, the container runtime can monitor PID 1 inside the container or the container's `stdin`/`stdout`—the container runtime manages the lifecycles of these processes.
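
That lifecycle duty can be sketched in a few lines of shell. This is a toy stand-in for what a runtime does (start, monitor, reap), not how any particular runtime is implemented:

```shell
# Launch a process, remember its PID, and reap its exit status,
# the way a runtime tracks PID 1 of a container.
sleep 1 &
child=$!          # the PID a runtime would monitor
wait "$child"     # block until the process exits
echo "process $child exited with status $?"
```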

### Container runtimes

You might say to yourself that systemd sounds pretty similar to a container runtime. Well, after having several email discussions about why container runtimes do not use `systemd-nspawn` as a tool for launching containers, I decided it would be worth discussing container runtimes and giving some historical context.

[Docker][13] is often called a container runtime, but "container runtime" is an overloaded term. When folks talk about a "container runtime," they're really talking about higher-level tools like Docker, [CRI-O][14], and [RKT][15] that come with developer functionality. They are API driven. They include concepts like pulling the container image from the container registry, setting up the storage, and finally launching the container. Launching the container often involves running a specialized tool that configures the kernel to run the container, and these are also referred to as "container runtimes." I will refer to them as "low-level container runtimes." Daemons like Docker and CRI-O, as well as command-line tools like [Podman][16] and [Buildah][17], should probably be called "container managers" instead.

When Docker was originally written, it launched containers using the `lxc` toolset, which predates `systemd-nspawn`. Red Hat's original work with Docker was to try to integrate [libvirt][6] (`libvirt-lxc`) into Docker as an alternative to the `lxc` tools, which were not supported in RHEL. `libvirt-lxc` also did not use `systemd-nspawn`. At that time, the systemd team was saying that `systemd-nspawn` was only a tool for testing, not for production.

At the same time, the upstream Docker developers, including some members of my Red Hat team, decided they wanted a golang-native way to launch containers, rather than launching a separate application. Work began on libcontainer, a native golang library for launching containers. Red Hat engineering decided that this was the best path forward and dropped `libvirt-lxc`.

Later, the [Open Container Initiative][18] (OCI) was formed, partly because people wanted to be able to launch containers in additional ways. Traditional namespace-separated containers were popular, but people also had the desire for virtual machine-level isolation. Intel and [Hyper.sh][19] were working on KVM-separated containers, and Microsoft was working on Windows-based containers. The OCI wanted a standard specification defining what a container is, so the [OCI Runtime Specification][20] was born.

The OCI Runtime Specification defines a JSON file format that describes what binary should be run, how it should be contained, and the location of the rootfs of the container. Tools can generate this JSON file. Then other tools can read this JSON file and execute a container on the rootfs. The libcontainer parts of Docker were broken out and donated to the OCI. The upstream Docker engineers and our engineers helped create a new frontend tool to read the OCI Runtime Specification JSON file and interact with libcontainer to run the container. This tool, called [`runc`][7], was also donated to the OCI. While `runc` can read the OCI JSON file, users are left to generate it themselves. `runc` has since become the most popular low-level container runtime. Almost all container-management tools support `runc`, including CRI-O, Docker, Buildah, Podman, and [Cloud Foundry Garden][21]. Since then, other tools have also implemented the OCI Runtime Spec to execute OCI-compliant containers.
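
To make that concrete, here is a heavily trimmed sketch of the kind of `config.json` the spec describes. The field names (`ociVersion`, `process.args`, `root.path`) come from the OCI Runtime Specification; the values are illustrative only, and a real file generated by a tool such as `runc spec` carries many more settings:

```shell
# Write a minimal, illustrative OCI-style config.json and pull out the
# two fields discussed above: the binary to run and the rootfs location.
cat > /tmp/config.json <<'EOF'
{
  "ociVersion": "1.0.0",
  "process": { "args": ["/bin/sh"], "cwd": "/" },
  "root": { "path": "rootfs" }
}
EOF
grep '"args"' /tmp/config.json   # what binary should be run
grep '"path"' /tmp/config.json   # location of the container rootfs
```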

Both [Clear Containers][22] and Hyper.sh's `runV` tools were created to use the OCI Runtime Specification to execute KVM-based containers, and they are combining their efforts in a new project called [Kata][23]. Last year, Oracle created a demonstration version of an OCI runtime tool called [RailCar][24], written in Rust. It has been two months since the GitHub project was updated, so it's unclear whether it is still in development. A couple of years ago, Vincent Batts worked on adding a tool, [`nspawn-oci`][8], that interpreted an OCI Runtime Specification file and launched `systemd-nspawn`, but no one really picked up on it, and it was not a native implementation.

If someone wants to implement a native `systemd-nspawn --oci OCI-SPEC.json` and get it accepted by the systemd team for support, then CRI-O, Docker, and eventually Podman would be able to use it in addition to `runc` and Clear Containers/runV ([Kata][25]). (No one on my team is working on this.)

The bottom line is that, three or four years back, the upstream developers wanted to write a low-level golang tool for launching containers, and this tool ended up becoming `runc`. Those developers at the time had a C-based tool for doing this called `lxc` and moved away from it. I am pretty sure that at the time they made the decision to build libcontainer, they would not have been interested in `systemd-nspawn` or any other non-native (golang) way of running "namespace" separated containers.

### About the author

[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/walsh1.jpg?itok=JbZWFm6J)][26] Daniel J Walsh - Daniel Walsh has worked in the computer security field for almost 30 years. Dan joined Red Hat in August 2001. Dan has led the RHEL Docker enablement team since August 2013, but has been working on container technology for several years. He has led the SELinux project, concentrating on the application space and policy development. Dan helped develop sVirt, Secure Virtualization. He also created the SELinux Sandbox, the Xguest user, and the Secure Kiosk. Previously, Dan worked at Netect/Bindview... [more about Daniel J Walsh][9] [More about me][10]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/history-low-level-container-runtimes

Author: [Daniel J Walsh][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

[a]:https://opensource.com/users/rhatdan
[1]:https://opensource.com/resources/what-are-linux-containers?utm_campaign=containers&intcmp=70160000000h1s6AAA
[2]:https://opensource.com/resources/what-docker?utm_campaign=containers&intcmp=70160000000h1s6AAA
[3]:https://opensource.com/resources/what-is-kubernetes?utm_campaign=containers&intcmp=70160000000h1s6AAA
[4]:https://developers.redhat.com/blog/2016/01/13/a-practical-introduction-to-docker-container-terminology/?utm_campaign=containers&intcmp=70160000000h1s6AAA
[5]:https://opensource.com/article/18/1/history-low-level-container-runtimes?rate=05T2m7ayQ7DRxtzQFjGcfBAlaTF5ffHN-EH1kEqSt9Q
[6]:https://libvirt.org/
[7]:https://github.com/opencontainers/runc
[8]:https://github.com/vbatts/nspawn-oci
[9]:https://opensource.com/users/rhatdan
[10]:https://opensource.com/users/rhatdan
[11]:https://opensource.com/user/16673/feed
[12]:https://creativecommons.org/licenses/by-sa/4.0/
[13]:https://github.com/docker
[14]:https://github.com/kubernetes-incubator/cri-o
[15]:https://github.com/rkt/rkt
[16]:https://github.com/projectatomic/libpod/tree/master/cmd/podman
[17]:https://github.com/projectatomic/buildah
[18]:https://www.opencontainers.org/
[19]:https://www.hyper.sh/
[20]:https://github.com/opencontainers/runtime-spec
[21]:https://github.com/cloudfoundry/garden
[22]:https://clearlinux.org/containers
[23]:https://clearlinux.org/containers
[24]:https://github.com/oracle/railcar
[25]:https://github.com/kata-containers
[26]:https://opensource.com/users/rhatdan
[27]:https://opensource.com/users/rhatdan
[28]:https://opensource.com/users/rhatdan
[29]:https://opensource.com/tags/containers
[30]:https://opensource.com/tags/linux
[31]:https://opensource.com/tags/containers-column
95 sources/talk/20180201 IT automation- How to make the case.md Normal file
@@ -0,0 +1,95 @@
IT automation: How to make the case
======

At the start of any significant project or change initiative, IT leaders face a proverbial fork in the road.

Path #1 might seem to offer the shortest route from A to B: Simply force-feed the project to everyone by executive mandate, essentially saying, “You’re going to do this – or else.”

Path #2 might appear less direct, because on this journey you take the time to explain the strategy and the reasons behind it. In fact, you’re going to be making pit stops along this route, rather than marathoning from start to finish: “Here’s what we’re doing – and why we’re doing it.”

Guess which path bears better results?

If you said #2, you’ve traveled both paths before – and experienced the results first-hand. Getting people on board with major changes beforehand is almost always the smarter choice.

IT leaders know as well as anyone that with significant change often comes [significant fear][1], skepticism, and other challenges. It may be especially true with IT automation. The term alone sounds scary to some people, and it is often tied to misconceptions. Helping people understand the what, why, and how of your company’s automation strategy is a necessary step to achieving your goals associated with that strategy.

[ **Read our related article,** [**IT automation best practices: 7 keys to long-term success**][2]. ]

With that in mind, we asked a variety of IT leaders for their advice on making the case for automation in your organization:

## 1. Show people what’s in it for them
|
||||
|
||||
Let’s face it: Self-interest and self-preservation are natural instincts. Tapping into that human tendency is a good way to get people on board: Show people how your automation strategy will benefit them and their jobs. Will automating a particular process in the software pipeline mean fewer middle-of-the-night calls for team members? Will it enable some people to dump low-skill, manual tasks in favor of more strategic, higher-order work – the sort that helps them take the next step in their career?
|
||||
|
||||
“Convey what’s in it for them, and how it will benefit clients and the whole company,” advises Vipul Nagrath, global CIO at [ADP][3]. “Compare the current state to a brighter future state, where the company enjoys greater stability, agility, efficiency, and security.”
|
||||
|
||||
The same approach holds true when making the case outside of IT; just lighten up on the jargon when explaining the benefits to non-technical stakeholders, Nagrath says.
|
||||
|
||||
Setting up a before-and-after picture is a good storytelling device for helping people see the upside.
|
||||
|
||||
“You want to paint a picture of the current state that people can relate to,” Nagrath says. “Present what’s working, but also highlight what’s causing teams to be less than agile.” Then explain how automating certain processes will improve that current state.
|
||||
|
||||
## 2. Connect automation to specific business goals
|
||||
|
||||
Part of making a strong case entails making sure people understand that you’re not just trend-chasing. If you’re automating simply for the sake of automating, people will sniff that out and become more resistant – perhaps especially within IT.
|
||||
|
||||
“The case for automation needs to be driven by a business demand signal, such as revenue or operating expense,” says David Emerson, VP and deputy CISO at [Cyxtera][4]. “No automation endeavor is self-justifying, and no technical feat, generally, should be a means unto itself, unless it’s a core competency of the company.”
|
||||
|
||||
Like Nagrath, Emerson recommends promoting the incentives associated with achieving the business goals of automation, and working toward these goals (and corresponding incentives) in an iterative, step-by-step fashion.
|
||||
|
||||
## 3. Break the automation plan into manageable pieces
|
||||
|
||||
Even if your automation strategy is literally “automate everything,” that’s a tough sell (and probably unrealistic) for most organizations. You’ll make a stronger case with a plan that approaches automation manageable piece by manageable piece, and that enables greater flexibility to adapt along the way.
|
||||
|
||||
“When making a case for automation, I recommend clearly illustrating the incentive to move to an automated process, and allowing iteration toward that goal to introduce and prove the benefits at lower risk,” Emerson says.
|
||||
|
||||
Sergey Zuev, founder at [GA Connector][5], shares an in-the-trenches account of why automating incrementally is crucial – and how it will help you build a stronger, longer-lasting argument for your strategy. Zuev should know: His company’s tool automates the import of data from CRM applications into Google Analytics. But it was actually the company’s internal experience automating its own customer onboarding process that led to a lightbulb moment.
|
||||
|
||||
“At first, we tried to build the whole onboarding funnel at once, and as a result, the project dragged [on] for months,” Zuev says. “After realizing that it [was] going nowhere, we decided to select small chunks that would have the biggest immediate effect, and start with that. As a result, we managed to implement one of the email sequences in just a week, and are already reaping the benefits of the decreased manual effort.”
|
||||
|
||||
## 4. Sell the big-picture benefits too
|
||||
|
||||
A step-by-step approach does not preclude painting a bigger picture. Just as it’s a good idea to make the case at the individual or team level, it’s also a good idea to help people understand the company-wide benefits.
|
||||
|
||||
“If we can accelerate the time it takes for the business to get what it needs, it will silence the skeptics.”
|
||||
|
||||
Eric Kaplan, CTO at [AHEAD][6], agrees that using small wins to show automation’s value is a smart strategy for winning people over. But the value those so-called “small” wins reveal can actually help you sharpen the big picture for people. Kaplan points to the value of individual and organizational time as an area everyone can connect with easily.
|
||||
|
||||
“The best place to do this is where you can show savings in terms of time,” Kaplan says. “If we can accelerate the time it takes for the business to get what it needs, it will silence the skeptics.”
|
||||
|
||||
Time and scalability are powerful benefits that business and IT colleagues, both charged with growing the business, can grasp.
|
||||
|
||||
“The result of automation is scalability – less effort per person to maintain and grow your IT environment,” as [Red Hat][7] VP of Global Services John Allessio recently [noted][8]. “If adding manpower is the only way to grow your business, then scalability is a pipe dream. Automation reduces your manpower requirements and provides the flexibility required for continued IT evolution.” (See his full article, [What DevOps teams really need from a CIO][8].)
|
||||
|
||||
## 5. Promote the heck out of your results
|
||||
|
||||
At the outset of your automation strategy, you’ll likely be making the case based on goals and the anticipated benefits of achieving those goals. But as your automation strategy evolves, there’s no case quite as convincing as one grounded in real-world results.
|
||||
|
||||
“Seeing is believing,” says Nagrath, ADP’s CIO. “Nothing quiets skeptics like a track record of delivery.”
|
||||
|
||||
That means, of course, not only achieving your goals, but also doing so on time – another good reason for the iterative, step-by-step approach.
|
||||
|
||||
While quantitative results such as percentage improvements or cost savings can speak loudly, Nagrath advises his fellow IT leaders not to stop there when telling your automation story.
|
||||
|
||||
“Making a case for automation is also a qualitative discussion, where we can promote the issues prevented, overall business continuity, reductions in failures/errors, and associates taking on [greater] responsibility as they tackle more value-added tasks.”
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
|
||||
|
||||
作者:[Kevin Casey][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://enterprisersproject.com/user/kevin-casey
|
||||
[1]:https://enterprisersproject.com/article/2017/10/how-beat-fear-and-loathing-it-change
|
||||
[2]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success?sc_cid=70160000000h0aXAAQ
|
||||
[3]:https://www.adp.com/
|
||||
[4]:https://www.cyxtera.com/
|
||||
[5]:http://gaconnector.com/
|
||||
[6]:https://www.thinkahead.com/
|
||||
[7]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
|
||||
[8]:https://enterprisersproject.com/article/2017/12/what-devops-teams-really-need-cio
|
||||
[9]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ
|
@ -0,0 +1,108 @@
|
||||
Open source is 20: How it changed programming and business forever
|
||||
======
|
||||
![][1]
|
||||
|
||||
Every company in the world now uses open-source software. Microsoft, once its greatest enemy, is [now an enthusiastic open-source supporter][2]. Even [Windows is now built using open-source techniques][3]. And if you ever searched on Google, bought a book from Amazon, watched a movie on Netflix, or looked at your friend's vacation pictures on Facebook, you're an open-source user. Not bad for a technology approach that turns 20 on February 3.
|
||||
|
||||
Now, free software has been around since the first computers, but the philosophies of free software and open source are much newer. In the 1970s and '80s, companies rose up which sought to profit by making proprietary software. In the nascent PC world, no one even knew about free software. But, on the Internet, which was dominated by Unix and ITS systems, it was a different story.
|
||||
|
||||
In the late '70s, [Richard M. Stallman][6], also known as RMS, then an MIT programmer, created a free printer utility based on the printer's source code. But then a new laser printer arrived on the campus, and he found he could no longer get the source code, so he couldn't recreate the utility. The angry [RMS created the concept of "Free Software."][7]
|
||||
|
||||
RMS's goal was to create a free operating system, [Hurd][8]. To make this happen, in September 1983 [he announced the creation of the GNU project][9] (GNU stands for GNU's Not Unix -- a recursive acronym). By January 1984, he was working full-time on the project. To help build it, he created the grandfather of all free-software/open-source compilers, [GCC][10], along with other operating system utilities. Early in 1985, he published "[The GNU Manifesto][11]," which was the founding charter of the free software movement and launched the [Free Software Foundation (FSF)][12].
|
||||
|
||||
This went well for a few years, but inevitably, [RMS collided with proprietary companies][13]. The company Unipress took the code to a variation of his [EMACS][14] programming editor and turned it into a proprietary program. RMS never wanted that to happen again so he created the [GNU General Public License (GPL)][15] in 1989. This was the first copyleft license. It gave users the right to use, copy, distribute, and modify a program's source code. But if you make source code changes and distribute it to others, you must share the modified code. While there had been earlier free licenses, such as [1980's four-clause BSD license][16], the GPL was the one that sparked the free-software, open-source revolution.
|
||||
|
||||
In 1997, [Eric S. Raymond][17] published his vital essay, "[The Cathedral and the Bazaar][18]." In it, he showed the advantages of the free-software development methodologies using GCC, the Linux kernel, and his experiences with his own [Fetchmail][19] project as examples. This essay did more than show the advantages of free software. The programming principles he described led the way for both [Agile][20] development and [DevOps][21]. Twenty-first century programming owes a large debt to Raymond.
|
||||
|
||||
Like all revolutions, free software quickly divided its supporters. On one side, as John Mark Walker, open-source expert and Strategic Advisor at Glyptodon, recently wrote, "[Free software is a social movement][22], with nary a hint of business interests -- it exists in the realm of religion and philosophy. Free software is a way of life with a strong moral code."
|
||||
|
||||
On the other side were numerous people who wanted to bring "free software" to business. They would become the founders of "open source." They argued that such phrases as "Free as in freedom" and "Free speech, not beer" left most people confused about what that really meant for software.
|
||||
|
||||
The [release of the Netscape web browser source code][23] sparked a meeting of free software leaders and experts at [a strategy session held on February 3rd][24], 1998 in Palo Alto, CA. There, Eric S. Raymond, Michael Tiemann, Todd Anderson, Jon "maddog" Hall, Larry Augustin, Sam Ockman, and Christine Peterson hammered out the first steps to open source.
|
||||
|
||||
Peterson coined the term "open source." She remembered:
|
||||
|
||||
> [The introduction of the term "open source software" was a deliberate effort][25] to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. The problem with the main earlier label, "free software," was not its political connotations, but that -- to newcomers -- its seeming focus on price is distracting. A term was needed that focuses on the key issue of source code and that does not immediately confuse those new to the concept. The first term that came along at the right time and fulfilled these requirements was rapidly adopted: open source.
|
||||
|
||||
To help clarify what open source was, and wasn't, Raymond and Bruce Perens founded the [Open Source Initiative (OSI)][26]. Its purpose was, and still is, to define what are real open-source software licenses and what aren't.
|
||||
|
||||
Stallman was enraged by open source. He wrote:
|
||||
|
||||
> The two terms describe almost the same method/category of software, but they stand for [views based on fundamentally different values][27]. Open source is a development methodology; free software is a social movement. For the free software movement, free software is an ethical imperative, essential respect for the users' freedom. By contrast, the philosophy of open source considers issues in terms of how to make software 'better' -- in a practical sense only. It says that non-free software is an inferior solution to the practical problem at hand. Most discussion of "open source" pays no attention to right and wrong, only to popularity and success.
|
||||
|
||||
He saw open source as kowtowing to business and taking the focus away from the personal freedom of being able to have free access to the code. Twenty years later, he's still angry about it.
|
||||
|
||||
In a recent e-mail to me, Stallman said a "common error is connecting me or my work or free software in general with the term 'Open Source.' That is the slogan adopted in 1998 by people who reject the philosophy of the Free Software Movement." In another message, he continued, "I rejected 'open source' because it was meant to bury the 'free software' ideas of freedom. Open source inspired the release of useful free programs, but what's missing is the idea that users deserve control of their computing. We libre-software activists say, 'Software you can't change and share is unjust, so let's escape to our free replacement.' Open source says only, 'If you let users change your code, they might fix bugs.' What it does say is not wrong, but weak; it avoids saying the deeper point."
|
||||
|
||||
Philosophical conflicts aside, open source has indeed become the model for practical software development. Larry Augustin, CEO of [SugarCRM][28], the open-source customer relationship management (CRM) Software-as-a-Service (SaaS), was one of the first to practice open-source in a commercial software business. Augustin showed that a successful business could be built on open-source software.
|
||||
|
||||
Other companies quickly embraced this model. Besides Linux companies such as [Canonical][29], [Red Hat][30] and [SUSE][31], technology businesses such as [IBM][32] and [Oracle][33] also adopted it. This, in turn, led to open source's commercial success. More recently companies you would never think of for a moment as open-source businesses like [Wal-Mart][34] and [Verizon][35], now rely on open-source programs and have their own open-source projects.
|
||||
|
||||
As Jim Zemlin, director of [The Linux Foundation][36], observed in 2014:
|
||||
|
||||
> A [new business model][37] has emerged in which companies are joining together across industries to share development resources and build common open-source code bases on which they can differentiate their own products and services.
|
||||
|
||||
Today, Hall looked back and said "I look at 'closed source' as a blip in time." Raymond is unsurprised at open-source's success. In an e-mail interview, Raymond said, "Oh, yeah, it *has* been 20 years -- and that's not a big deal because we won most of the fights we needed to quite a while ago, like in the first decade after 1998."
|
||||
|
||||
"Ever since," he continued, "we've been mainly dealing with the problems of success rather than those of failure. And a whole new class of issues, like IoT devices without upgrade paths -- doesn't help so much for the software to be open if you can't patch it."
|
||||
|
||||
In other words, he concludes, "The reward of victory is often another set of battles."
|
||||
|
||||
These are battles that open source is poised to win. Jim Whitehurst, Red Hat's CEO and president told me:
|
||||
|
||||
> The future of open source is bright. We are on the cusp of a new wave of innovation that will come about because information is being separated from physical objects thanks to the Internet of Things. Over the next decade, we will see entire industries based on open-source concepts, like the sharing of information and joint innovation, become mainstream. We'll see this impact every sector, from non-profits, like healthcare, education and government, to global corporations who realize sharing information leads to better outcomes. Open and participative innovation will become a key part of increasing productivity around the world.
|
||||
|
||||
Others see open source extending beyond software development methods. Nick Hopman, Red Hat's senior director of emerging technology practices, said:
|
||||
|
||||
> Open-source is much more than just a process to develop and expose technology. Open-source is a catalyst to drive change in every facet of society -- government, policy, medical diagnostics, process re-engineering, you name it -- and can leverage open principles that have been perfected through the experiences of open-source software development to create communities that drive change and innovation. Looking forward, open-source will continue to drive technology innovation, but I am even more excited to see how it changes the world in ways we have yet to even consider.
|
||||
|
||||
Indeed. Open source has turned twenty, but its influence, and not just on software and business, will continue on for decades to come.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.zdnet.com/article/open-source-turns-20/
|
||||
|
||||
作者:[Steven J. Vaughan-Nichols][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
|
||||
[1]:https://zdnet1.cbsistatic.com/hub/i/r/2018/01/08/d9527281-2972-4cb7-bd87-6464d8ad50ae/thumbnail/570x322/9d4ef9007b3a3ce34de0cc39d2b15b0c/5a4faac660b22f2aba08fc3f-1280x7201jan082018150043poster.jpg
|
||||
[2]:http://www.zdnet.com/article/microsoft-the-open-source-company/
|
||||
[3]:http://www.zdnet.com/article/microsoft-uses-open-source-software-to-create-windows/
|
||||
[4]:https://zdnet1.cbsistatic.com/hub/i/r/2016/11/18/a55b3c0c-7a8e-4143-893f-44900cb2767a/resize/220x165/6cd4e37b1904743ff1f579cb10d9e857/linux-open-source-money-penguin.jpg
|
||||
[5]:http://www.zdnet.com/article/how-do-linux-and-open-source-companies-make-money-from-free-software/
|
||||
[6]:https://stallman.org/
|
||||
[7]:https://opensource.com/article/18/2/pivotal-moments-history-open-source
|
||||
[8]:https://www.gnu.org/software/hurd/hurd.html
|
||||
[9]:https://groups.google.com/forum/#!original/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J
|
||||
[10]:https://gcc.gnu.org/
|
||||
[11]:https://www.gnu.org/gnu/manifesto.en.html
|
||||
[12]:https://www.fsf.org/
|
||||
[13]:https://www.free-soft.org/gpl_history/
|
||||
[14]:https://www.gnu.org/s/emacs/
|
||||
[15]:https://www.gnu.org/licenses/gpl-3.0.en.html
|
||||
[16]:http://www.linfo.org/bsdlicense.html
|
||||
[17]:http://www.catb.org/esr/
|
||||
[18]:http://www.catb.org/esr/writings/cathedral-bazaar/
|
||||
[19]:http://www.fetchmail.info/
|
||||
[20]:https://www.agilealliance.org/agile101/
|
||||
[21]:https://aws.amazon.com/devops/what-is-devops/
|
||||
[22]:https://opensource.com/business/16/11/open-source-not-free-software?sc_cid=70160000001273HAAQ
|
||||
[23]:http://www.zdnet.com/article/the-beginning-of-the-peoples-web-20-years-of-netscape/
|
||||
[24]:https://opensource.org/history
|
||||
[25]:https://opensource.com/article/18/2/coining-term-open-source-software
|
||||
[26]:https://opensource.org
|
||||
[27]:https://www.gnu.org/philosophy/open-source-misses-the-point.html
|
||||
[28]:https://www.sugarcrm.com/
|
||||
[29]:https://www.canonical.com/
|
||||
[30]:https://www.redhat.com/en
|
||||
[31]:https://www.suse.com/
|
||||
[32]:https://developer.ibm.com/code/open/
|
||||
[33]:http://www.oracle.com/us/technologies/open-source/overview/index.html
|
||||
[34]:http://www.zdnet.com/article/walmart-relies-on-openstack/
|
||||
[35]:https://www.networkworld.com/article/3195490/lan-wan/verizon-taps-into-open-source-white-box-fervor-with-new-cpe-offering.html
|
||||
[36]:http://www.linuxfoundation.org/
|
||||
[37]:http://www.zdnet.com/article/it-takes-an-open-source-village-to-make-commercial-software/
|
@ -1,160 +0,0 @@
|
||||
Linux Find Out Last System Reboot Time and Date Command
|
||||
======
|
||||
So, how do you find out when your Linux or Unix-like system was last rebooted? How do you display the system shutdown date and time? The last utility will either list the sessions of specified users, ttys, and hosts, in reverse time order, or list the users logged in at a specified date and time. Each line of output contains the user name, the tty from which the session was conducted, any hostname, the start and stop times for the session, and the duration of the session. To view Linux or Unix system reboot and shutdown date and time stamps, use the following commands:
|
||||
|
||||
* last command
|
||||
* who command
|
||||
|
||||
|
||||
|
||||
### Use who command to find last system reboot time/date
|
||||
|
||||
You need to use the [who command][1] to print who is logged on. It also displays the time of the last system boot. To display the last system boot time and date, run:
|
||||
`$ who -b`
|
||||
Sample outputs:
|
||||
```
|
||||
system boot 2017-06-20 17:41
|
||||
```
|
||||
|
||||
Use the last command to display a listing of last logged-in users and the system's last reboot time and date; enter:
|
||||
`$ last reboot | less`
|
||||
Sample outputs:
|
||||
[![Fig.01: last command in action][2]][2]
|
||||
Or, better try:
|
||||
`$ last reboot | head -1`
|
||||
Sample outputs:
|
||||
```
|
||||
reboot system boot 4.9.0-3-amd64 Sat Jul 15 19:19 still running
|
||||
```
|
||||
|
||||
The last command searches back through the file /var/log/wtmp and displays a list of all users logged in (and out) since that file was created. The pseudo-user reboot logs in each time the system is rebooted. Thus, the `last reboot` command will show a log of all reboots since the log file was created.
|
||||
|
||||
### Finding the system's last shutdown date and time
|
||||
|
||||
To display last shutdown date and time use the following command:
|
||||
`$ last -x|grep shutdown | head -1`
|
||||
Sample outputs:
|
||||
```
|
||||
shutdown system down 2.6.15.4 Sun Apr 30 13:31 - 15:08 (01:37)
|
||||
```
|
||||
|
||||
Where,
|
||||
|
||||
* **-x** : Display the system shutdown entries and run level changes.
|
||||
|
||||
|
||||
|
||||
Here is another session from my last command:
|
||||
```
|
||||
$ last
|
||||
$ last -x
|
||||
$ last -x reboot
|
||||
$ last -x shutdown
|
||||
```
|
||||
Sample outputs:
|
||||
![Fig.01: How to view last Linux System Reboot Date/Time ][3]
|
||||
|
||||
### Find out Linux system up since…
|
||||
|
||||
Another option as suggested by readers in the comments section below is to run the following command:
|
||||
`$ uptime -s`
|
||||
Sample outputs:
|
||||
```
|
||||
2017-06-20 17:41:51
|
||||
```
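Commands like `who -b` and `uptime -s` derive the boot time from kernel bookkeeping. On Linux you can compute the same timestamp yourself from `/proc/uptime`; the sketch below (an illustration, not part of the original article, assuming GNU `date` and a Linux `/proc` filesystem) subtracts the uptime in seconds from the current epoch time:

```shell
# Compute the last boot time from /proc/uptime (Linux-specific sketch).
# The first field of /proc/uptime is seconds since boot; subtracting it
# from the current epoch time yields the boot timestamp.
now=$(date +%s)
up=$(cut -d. -f1 /proc/uptime)
boot_epoch=$((now - up))
date -d "@${boot_epoch}" '+%Y-%m-%d %H:%M:%S'
```

The result should match `uptime -s` to within a second or so.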
|
||||
|
||||
### Finding the last reboot and shutdown time on OS X/Unix/FreeBSD
|
||||
|
||||
Type the following command:
|
||||
`$ last reboot`
|
||||
Sample outputs from OS X unix:
|
||||
```
|
||||
reboot ~ Fri Dec 18 23:58
|
||||
reboot ~ Mon Dec 14 09:54
|
||||
reboot ~ Wed Dec 9 23:21
|
||||
reboot ~ Tue Nov 17 21:52
|
||||
reboot ~ Tue Nov 17 06:01
|
||||
reboot ~ Wed Nov 11 12:14
|
||||
reboot ~ Sat Oct 31 13:40
|
||||
reboot ~ Wed Oct 28 15:56
|
||||
reboot ~ Wed Oct 28 11:35
|
||||
reboot ~ Tue Oct 27 00:00
|
||||
reboot ~ Sun Oct 18 17:28
|
||||
reboot ~ Sun Oct 18 17:11
|
||||
reboot ~ Mon Oct 5 09:35
|
||||
reboot ~ Sat Oct 3 18:57
|
||||
|
||||
|
||||
wtmp begins Sat Oct 3 18:57
|
||||
```
|
||||
|
||||
To see shutdown date and time, enter:
|
||||
`$ last shutdown`
|
||||
Sample outputs:
|
||||
```
|
||||
shutdown ~ Fri Dec 18 23:57
|
||||
shutdown ~ Mon Dec 14 09:53
|
||||
shutdown ~ Wed Dec 9 23:20
|
||||
shutdown ~ Tue Nov 17 14:24
|
||||
shutdown ~ Mon Nov 16 21:15
|
||||
shutdown ~ Tue Nov 10 13:15
|
||||
shutdown ~ Sat Oct 31 13:40
|
||||
shutdown ~ Wed Oct 28 03:10
|
||||
shutdown ~ Sun Oct 18 17:27
|
||||
shutdown ~ Mon Oct 5 09:23
|
||||
|
||||
|
||||
wtmp begins Sat Oct 3 18:57
|
||||
```
|
||||
|
||||
### How do I find who rebooted/shutdown the Linux box?
|
||||
|
||||
You need [to enable the psacct service and run the following command to see info][4] about executed commands, including the user name. Type the following [lastcomm command][5] to see who rebooted or shut down the box:
|
||||
```
|
||||
# lastcomm userNameHere
|
||||
# lastcomm commandNameHere
|
||||
# lastcomm | more
|
||||
# lastcomm reboot
|
||||
# lastcomm shutdown
|
||||
### OR see both reboot and shutdown time
|
||||
# lastcomm | egrep 'reboot|shutdown'
|
||||
```
|
||||
Sample outputs:
|
||||
```
|
||||
reboot S X root pts/0 0.00 secs Sun Dec 27 23:49
|
||||
shutdown S root pts/1 0.00 secs Sun Dec 27 23:45
|
||||
```
|
||||
|
||||
So the root user rebooted the box from 'pts/0' on Sun, Dec 27th at 23:49 local time.
|
||||
|
||||
### See also
|
||||
|
||||
* For more information read last(1) and [learn how to use the tuptime command on Linux server to see the historical and statistical uptime][6].
|
||||
|
||||
|
||||
|
||||
### About the author
|
||||
|
||||
|
||||
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][7], [Facebook][8], [Google+][9].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/tips/linux-last-reboot-time-and-date-find-out.html
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/faq/unix-linux-who-command-examples-syntax-usage/ (See Linux/Unix who command examples for more info)
|
||||
[2]:https://www.cyberciti.biz/tips/wp-content/uploads/2006/04/last-reboot.jpg
|
||||
[3]:https://www.cyberciti.biz/media/new/tips/2006/04/check-last-time-system-was-rebooted.jpg
|
||||
[4]:https://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounting.html
|
||||
[5]:https://www.cyberciti.biz/faq/linux-unix-lastcomm-command-examples-usage-syntax/ (See Linux/Unix lastcomm command examples for more info)
|
||||
[6]:https://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-on-linux-server/
|
||||
[7]:https://twitter.com/nixcraft
|
||||
[8]:https://facebook.com/nixcraft
|
||||
[9]:https://plus.google.com/+CybercitiBiz
|
@ -0,0 +1,82 @@
|
||||
How to use lftp to accelerate ftp/https download speed on Linux/UNIX
|
||||
======
|
||||
lftp is a file transfer program. It allows sophisticated FTP, HTTP/HTTPS, and other connections. If a site URL is specified, then lftp will connect to that site; otherwise a connection has to be established with the open command. It is an essential tool for all Linux/Unix command-line users. I have already written about [Linux ultra fast command line download accelerators][1] such as Axel and prozilla. lftp is another tool for the same job, with more features. lftp can handle eight file access methods:
|
||||
|
||||
1. ftp
|
||||
2. ftps
|
||||
3. http
|
||||
4. https
|
||||
5. hftp
|
||||
6. fish
|
||||
7. sftp
|
||||
8. file
|
||||
|
||||
|
||||
|
||||
### So what is unique about lftp?
|
||||
|
||||
  * Every operation in lftp is reliable: any non-fatal error is ignored and the operation is retried. So if a download breaks, it will be restarted from that point automatically. Even if the FTP server does not support the REST command, lftp will try to retrieve the file from the very beginning until the file is transferred completely.
|
||||
* lftp has shell-like command syntax allowing you to launch several commands in parallel in the background.
|
||||
  * lftp has a built-in mirror command which can download or update a whole directory tree. There is also a reverse mirror (`mirror -R`) which uploads or updates a directory tree on the server. The mirror can also synchronize directories between two remote servers, using FXP if available.
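As a concrete illustration of the mirror feature, here is a hedged sketch; the host `ftp.example.com` and the directory names are placeholders, and the command is printed rather than executed so the example stays runnable without network access:

```shell
# Hypothetical mirror invocation (host and paths are placeholders, not real).
# --continue resumes partial files; --parallel=4 transfers 4 files at once.
cmd="lftp -e 'mirror --continue --parallel=4 /pub/releases ./releases; exit' ftp.example.com"
echo "$cmd"
```

Replace the host and paths with real values to run it; adding `-R` after `mirror` uploads instead of downloads.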
|
||||
|
||||
|
||||
|
||||
### How to use lftp as download accelerator
|
||||
|
||||
lftp has a `pget` command. It allows you to download files in parallel. The syntax is:
|
||||
`lftp -e 'pget -n NUM -c url; exit'`
|
||||
For example, download <http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2> file using pget in 5 parts:
|
||||
```
|
||||
$ cd /tmp
|
||||
$ lftp -e 'pget -n 5 -c http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2'
|
||||
```
|
||||
Sample outputs:
|
||||
```
|
||||
45108964 bytes transferred in 57 seconds (775.3K/s)
|
||||
lftp :~>quit
|
||||
|
||||
```
|
||||
|
||||
Where,
|
||||
|
||||
1. pget – Download files in parallel
|
||||
2. -n 5 – Set maximum number of connections to 5
|
||||
3. -c – Continue broken transfer if lfile.lftp-pget-status exists in the current directory
|
||||
|
||||
|
||||
|
||||
### How to use lftp to accelerate ftp/https download on Linux/Unix
|
||||
|
||||
Another try, with the `exit` command added:
|
||||
`$ lftp -e 'pget -n 10 -c https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.15.tar.xz; exit'`
|
||||
|
||||
[Linux-lftp-command-demo](https://www.cyberciti.biz/tips/wp-content/uploads/2007/08/Linux-lftp-command-demo.mp4)
|
||||
|
||||
### A note about parallel downloading
|
||||
|
||||
Please note that by using a download accelerator you are going to put a load on the remote host. Also note that lftp may not work with sites that do not support multi-source downloads, or that block such requests at the firewall level.
|
||||
|
||||
The lftp command offers many other features. Refer to the [lftp][2] man page for more information:
|
||||
`man lftp`
|
||||
|
||||
### About the author
|
||||
|
||||
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][3], [Facebook][4], [Google+][5]. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via[my RSS/XML feed][6]**.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/tips/linux-unix-download-accelerator.html
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz
|
||||
[1]:https://www.cyberciti.biz/tips/download-accelerator-for-linux-command-line-tools.html
|
||||
[2]:https://lftp.yar.ru/
|
||||
[3]:https://twitter.com/nixcraft
|
||||
[4]:https://facebook.com/nixcraft
|
||||
[5]:https://plus.google.com/+CybercitiBiz
|
||||
[6]:https://www.cyberciti.biz/atom/atom.xml
|
@ -1,134 +0,0 @@
|
||||
Linux Check IDE / SATA SSD Hard Disk Transfer Speed
|
||||
======
|
||||
So how do you find out how fast your hard disk is under Linux? Is it running at SATA I (150 MB/s), SATA II (300 MB/s), or SATA III (6.0 Gb/s) speed, without opening the computer case or chassis?
|
||||
|
||||
You can use the **hdparm or dd command** to check hard disk speed. hdparm provides a command-line interface to various hard disk ioctls supported by the stock Linux ATA/IDE/SATA device driver subsystem. Some options may work correctly only with the latest kernels (make sure you have a cutting-edge kernel installed). I also recommend compiling hdparm with the include files from the most recent kernel source code.
|
||||
|
||||
### How to measure hard disk data transfer speed using hdparm
|
||||
|
||||
Log in as the root user or use sudo, and enter the following command:
|
||||
`$ sudo hdparm -tT /dev/sda`
|
||||
OR
|
||||
`$ sudo hdparm -tT /dev/hda`
|
||||
Sample outputs:
|
||||
```
|
||||
/dev/sda:
|
||||
Timing cached reads: 7864 MB in 2.00 seconds = 3935.41 MB/sec
|
||||
Timing buffered disk reads: 204 MB in 3.00 seconds = 67.98 MB/sec
|
||||
```
|
||||
|
||||
For meaningful results, this operation should be **repeated 2-3 times**. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the **throughput of the processor, cache, and memory** of the system under test. [Here is a for loop example][1] to run the test 3 times in a row:
|
||||
`for i in 1 2 3; do hdparm -tT /dev/hda; done`
|
||||
Where,
|
||||
|
||||
* **-t** :perform device read timings
|
||||
* **-T** : perform cache read timings
|
||||
* **/dev/sda** : Hard disk device file
|
||||
|
||||
|
||||
|
||||
To [find out SATA hard disk link speed][2], enter:
|
||||
`sudo hdparm -I /dev/sda | grep -i speed`
|
||||
Output:
|
||||
```
|
||||
* Gen1 signaling speed (1.5Gb/s)
|
||||
* Gen2 signaling speed (3.0Gb/s)
|
||||
* Gen3 signaling speed (6.0Gb/s)
|
||||
|
||||
```
|
||||
|
||||
The above output indicates that my hard disk can use 1.5 Gb/s, 3.0 Gb/s, or 6.0 Gb/s speeds. Please note that your BIOS/motherboard must have support for SATA-II/III:
|
||||
`$ dmesg | grep -i sata | grep 'link up'`
|
||||
[![Linux Check IDE SATA SSD Hard Disk Transfer Speed][3]][3]
|
||||
|
||||
### dd Command
|
||||
|
||||
You can use the dd command as follows to get speed info too:
|
||||
```
|
||||
dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
|
||||
rm /tmp/output.img
|
||||
```
|
||||
|
||||
Sample outputs:
|
||||
```
|
||||
262144+0 records in
|
||||
262144+0 records out
|
||||
2147483648 bytes (2.1 GB) copied, 23.6472 seconds, 90.8 MB/s
|
||||
|
||||
```
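As a quick sanity check of the arithmetic (dd counts a megabyte as 10^6 bytes), the rate reported in the sample above can be reproduced by hand:

```shell
# 2147483648 bytes over 23.6472 seconds, with MB meaning 10^6 bytes:
rate=$(awk 'BEGIN {printf "%.1f MB/s", 2147483648 / 23.6472 / 1000000}')
echo "$rate"
# → 90.8 MB/s
```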
|
||||
|
||||
The [recommended syntax for the dd command is as follows][4]:
|
||||
```
|
||||
dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync
|
||||
|
||||
## GNU dd syntax ##
|
||||
dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
|
||||
|
||||
## OR alternate syntax for GNU/dd ##
|
||||
dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
|
||||
```
|
||||
|
||||
|
||||
Sample outputs from the last dd command:
|
||||
```
|
||||
1+0 records in
|
||||
1+0 records out
|
||||
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.23889 s, 253 MB/s
|
||||
```
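Putting the pieces together, a minimal sketch that runs one timed write and pulls out the rate line might look like this (the file path and size are illustrative; adjust them for your system):

```shell
# Time a small synchronous write with GNU dd and show the rate line.
# conv=fdatasync forces a flush before dd reports, so the figure is honest.
out=$(dd if=/dev/zero of=/tmp/dd-speed-test.img bs=1M count=8 conv=fdatasync 2>&1)
rm -f /tmp/dd-speed-test.img
# GNU dd prints the rate on its last line, e.g. "... copied, 0.05 s, 168 MB/s"
printf '%s\n' "$out" | tail -n 1
```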
|
||||
|
||||
### Disks & storage - GUI tool
|
||||
|
||||
You can also use the disk utility located under the System > Administration > Disk Utility menu. Please note that in recent versions of GNOME it is simply called Disks.
|
||||
|
||||
#### How do I test the performance of my hard disk using Disks on Linux?
|
||||
|
||||
To test the speed of your hard disk:
|
||||
|
||||
1. Open **Disks** from the **Activities** overview (press the Super key on your keyboard and type Disks)
|
||||
2. Choose the **disk** from the list in the **left pane**
|
||||
3. Select the menu button and select **Benchmark disk …** from the menu
|
||||
4. Click **Start Benchmark …** and adjust the Transfer Rate and Access Time parameters as desired.
|
||||
5. Choose **Start Benchmarking** to test how fast data can be read from the disk. Administrative privileges are required; enter your password when prompted.
|
||||
|
||||
|
||||
|
||||
A quick video demo of the above procedure:
|
||||
|
||||
https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/disks-performance.mp4
|
||||
|
||||
|
||||
#### Read Only Benchmark (Safe option)
|
||||
|
||||
Then, select **Read only**:
|
||||
![Fig.01: Linux Benchmarking Hard Disk Read Only Test Speed][5]
|
||||
The above option will not destroy any data.
|
||||
|
||||
#### Read and Write Benchmark (All data will be lost so be careful)
|
||||
|
||||
Visit System > Administration > Disk utility menu > Click Benchmark > Click Start Read/Write Benchmark button:
|
||||
![Fig.02:Linux Measuring read rate, write rate and access time][6]
|
||||
|
||||
### About the author
|
||||
|
||||
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][7], [Facebook][8], [Google+][9].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/tips/how-fast-is-linux-sata-hard-disk.html
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/faq/bash-for-loop/
|
||||
[2]:https://www.cyberciti.biz/faq/linux-command-to-find-sata-harddisk-link-speed/
|
||||
[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/Linux-Check-IDE-SATA-SSD-Hard-Disk-Transfer-Speed.jpg
|
||||
[4]:https://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
|
||||
[5]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Speed-Benchmark.png (Linux Benchmark Hard Disk Speed)
|
||||
[6]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Read-Write-Benchmark.png (Linux Hard Disk Benchmark Read / Write Rate and Access Time)
|
||||
[7]:https://twitter.com/nixcraft
|
||||
[8]:https://facebook.com/nixcraft
|
||||
[9]:https://plus.google.com/+CybercitiBiz
|
|
||||
How to use yum-cron to automatically update RHEL/CentOS Linux
|
||||
======
|
||||
The yum command line tool is used to install and update software packages on RHEL / CentOS Linux servers. I know how to apply updates using the [yum update command line][1], but I would like to have them applied automatically via cron rather than manually. How do I configure yum to install software patches/updates [automatically with cron][2]?
|
||||
|
||||
You need to install the yum-cron package. It provides the files needed to run yum updates as a cron job. Install this package if you want nightly automatic yum updates via cron.
|
||||
|
||||
### How to install yum cron on a CentOS/RHEL 6.x/7.x
|
||||
|
||||
Type the following [yum command][3]:
|
||||
`$ sudo yum install yum-cron`
|
||||
![](https://www.cyberciti.biz/media/new/faq/2009/05/How-to-install-yum-cron-on-CentOS-RHEL-server.jpg)
|
||||
|
||||
Turn on the service using the systemctl command on **CentOS/RHEL 7.x**:
|
||||
```
|
||||
$ sudo systemctl enable yum-cron.service
|
||||
$ sudo systemctl start yum-cron.service
|
||||
$ sudo systemctl status yum-cron.service
|
||||
```
|
||||
If you are using **CentOS/RHEL 6.x**, run:
|
||||
```
|
||||
$ sudo chkconfig yum-cron on
|
||||
$ sudo service yum-cron start
|
||||
```
|
||||
![](https://www.cyberciti.biz/media/new/faq/2009/05/How-to-turn-on-yum-cron-service-on-CentOS-or-RHEL-server.jpg)
|
||||
|
||||
yum-cron is an alternate interface to yum and a very convenient way to call yum from cron. It provides methods to keep repository metadata up to date, and to check for, download, and apply updates. Rather than accepting many different command line arguments, the different functions of yum-cron are accessed through config files.
|
||||
|
||||
### How to configure yum-cron to automatically update RHEL/CentOS Linux
|
||||
|
||||
You need to edit the /etc/yum/yum-cron.conf and /etc/yum/yum-cron-hourly.conf files using a text editor such as vi:
|
||||
`$ sudo vi /etc/yum/yum-cron.conf`
|
||||
Make sure updates are applied when they are available:
|
||||
`apply_updates = yes`
|
||||
You can set the address to send email messages from. Please note that ‘localhost’ will be replaced with the value of system_name.
|
||||
`email_from = root@localhost`
|
||||
List of addresses to send messages to.
|
||||
`email_to = your-it-support@some-domain-name`
|
||||
Name of the host to connect to when sending email messages:
|
||||
`email_host = localhost`
|
||||
If you [do not want to update the kernel package, add the following on CentOS/RHEL 7.x][4]:
|
||||
`exclude=kernel*`
|
||||
For RHEL/CentOS 6.x add [the following to exclude kernel package from updating][5]:
|
||||
`YUM_PARAMETER=kernel*`
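For reference, a minimal /etc/yum/yum-cron.conf pulling these settings together might look like the sketch below. The email addresses and the `[base]` exclude line are illustrative, not package defaults; adjust them for your site:

```ini
[commands]
update_cmd = default
download_updates = yes
apply_updates = yes

[emitters]
emit_via = email

[email]
email_from = root@localhost
email_to = your-it-support@some-domain-name
email_host = localhost

[base]
; Overrides passed through to yum.conf; this keeps kernel packages back
exclude = kernel*
```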
|
||||
[Save and close the file in vi/vim][6]. You also need to update the /etc/yum/yum-cron-hourly.conf file if you want updates applied hourly. Otherwise /etc/yum/yum-cron.conf will be run daily via the following cron job (use the [cat command][7] to view it):
|
||||
`$ cat /etc/cron.daily/0yum-daily.cron`
|
||||
Sample outputs:
|
||||
```
#!/bin/bash

# Only run if this flag is set. The flag is created by the yum-cron init
# script when the service is started -- this allows one to use chkconfig and
# the standard "service stop|start" commands to enable or disable yum-cron.
if [[ ! -f /var/lock/subsys/yum-cron ]]; then
  exit 0
fi

# Action!
exec /usr/sbin/yum-cron
```

The hourly job, /etc/cron.hourly/0yum-hourly.cron, is identical except that its final line passes the hourly config file: `exec /usr/sbin/yum-cron /etc/yum/yum-cron-hourly.conf`.
|
||||
|
||||
That is all. Now your system will update automatically every day using yum-cron. See the yum-cron man page for more details:
|
||||
`$ man yum-cron`
|
||||
|
||||
### Method 2 – Use shell scripts
|
||||
|
||||
**Warning**: The following method is outdated. Do not use it on RHEL/CentOS 6.x/7.x. I kept it below for historical reasons only, from when I used it on CentOS/RHEL 4.x/5.x.
|
||||
|
||||
Let us see how to configure CentOS/RHEL for automatic yum update retrieval and installation of security packages. You could use the yum-updatesd service provided with CentOS / RHEL servers, but that service adds some overhead. Instead, you can apply daily or weekly updates with a small shell script. Create:
|
||||
|
||||
  * **/etc/cron.daily/yumupdate.sh** to apply updates once a day.
  * **/etc/cron.weekly/yumupdate.sh** to apply updates once a week.
|
||||
|
||||
|
||||
|
||||
#### Sample shell script to update system
|
||||
|
||||
A shell script that instructs yum to update any packages it finds via [cron][8]:
|
||||
```
|
||||
#!/bin/bash
|
||||
YUM=/usr/bin/yum
|
||||
$YUM -y -R 120 -d 0 -e 0 update yum
|
||||
$YUM -y -R 10 -e 0 -d 0 update
|
||||
```
|
||||
|
||||
(Code listing -01: /etc/cron.daily/yumupdate.sh)
|
||||
|
||||
Where,
|
||||
|
||||
  1. The first command updates yum itself; the next applies system updates.
  2. **-R 120** : Sets the maximum amount of time yum will wait before performing a command.
  3. **-e 0** : Sets the error level to 0 (range 0 - 10). 0 means print only critical errors about which you must be told.
  4. **-d 0** : Sets the debugging level to 0 (range 0 - 10), which turns up or down the amount of things that are printed.
  5. **-y** : Assume yes; assume that the answer to any question which would be asked is yes.
|
||||
|
||||
|
||||
|
||||
Make sure you set the executable permission:
|
||||
`# chmod +x /etc/cron.daily/yumupdate.sh`
|
||||
|
||||
|
||||
### About the author
|
||||
|
||||
|
||||
|
||||
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][9], [Facebook][10], [Google+][11]. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via[my RSS/XML feed][12]**.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/faq/fedora-automatic-update-retrieval-installation-with-cron/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
|
||||
[2]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses
|
||||
[3]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
|
||||
[4]:https://www.cyberciti.biz/faq/yum-update-except-kernel-package-command/
|
||||
[5]:https://www.cyberciti.biz/faq/redhat-centos-linux-yum-update-exclude-packages/
|
||||
[6]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/
|
||||
[7]:https://www.cyberciti.biz/faq/linux-unix-appleosx-bsd-cat-command-examples/ (See Linux/Unix cat command examples for more info)
|
||||
[8]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses
|
||||
[9]:https://twitter.com/nixcraft
|
||||
[10]:https://facebook.com/nixcraft
|
||||
[11]:https://plus.google.com/+CybercitiBiz
|
||||
[12]:https://www.cyberciti.biz/atom/atom.xml
|
|
||||
Create your first Ansible server (automation) setup
|
||||
======
|
||||
Automation and configuration management tools are the new craze in the IT world, and organizations are moving towards adopting them. There are many tools available in the market, such as Puppet, Chef, and Ansible, and in this tutorial we are going to learn about Ansible.
|
||||
|
||||
Ansible is an open source configuration tool that is used to deploy, configure & manage servers. It is one of the easiest automation tools to learn and master. It does not require you to learn a complicated programming language like Ruby (used in Puppet & Chef); it uses YAML, a very simple language. Also, it does not require any special agent to be installed on client machines; the clients only need Python and SSH, both of which are usually already available.
|
||||
|
||||
## Pre-requisites
|
||||
|
||||
Before we move on to the installation part, let's discuss the pre-requisites for Ansible:
|
||||
|
||||
1. For the server, we will need a machine with either CentOS or RHEL 7 installed & the EPEL repository enabled.

To enable the EPEL repository, use the commands below,
|
||||
|
||||
**RHEL/CentOS 7**
|
||||
|
||||
```
|
||||
$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm
|
||||
```
|
||||
|
||||
**RHEL/CentOS 6 (64 Bit)**
|
||||
|
||||
```
|
||||
$ rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
|
||||
```
|
||||
|
||||
**RHEL/CentOS 6 (32 Bit)**
|
||||
|
||||
```
|
||||
$ rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
|
||||
```
|
||||
|
||||
2. For client machines, OpenSSH & Python should be installed. We also need to configure password-less login for the ssh session (create public-private keys). To create public-private keys & configure password-less login for the ssh session, refer to our article "[Setting up SSH Server for Public/Private keys based Authentication (Password-less login)][1]".
|
||||
|
||||
|
||||
|
||||
## Installation
|
||||
|
||||
Once we have the EPEL repository enabled, we can install ansible using yum,
|
||||
|
||||
```
|
||||
$ yum install ansible
|
||||
```
|
||||
|
||||
## Configuring Ansible hosts
|
||||
|
||||
We will now configure the hosts that we want Ansible to manage. To do that, we need to edit the file **/etc/ansible/hosts** & add the clients in the following syntax,
|
||||
|
||||
```
|
||||
[group-name]
|
||||
alias ansible_ssh_host=host_IP_address
|
||||
```
|
||||
|
||||
where alias is the alias name given to the host we are adding (it can be anything) and host_IP_address is the IP address for the host.
|
||||
|
||||
For this tutorial, we are going to add 2 clients/hosts for ansible to manage, so let's create an entry for these two hosts in the configuration file,
|
||||
|
||||
```
|
||||
$ vi /etc/ansible/hosts
|
||||
[test_clients]
|
||||
client1 ansible_ssh_host=192.168.1.101
|
||||
client2 ansible_ssh_host=192.168.1.10
|
||||
```
|
||||
|
||||
Save the file & exit. Now, as mentioned in the pre-requisites, we should have password-less login to these clients from the ansible server. To check that this is the case, ssh into a client; we should be able to log in without a password,
|
||||
|
||||
```
|
||||
$ ssh root@192.168.1.101
|
||||
```
|
||||
|
||||
If that's working, then we can move further; otherwise we need to create public/private keys for the ssh session (refer to the article mentioned above in the pre-requisites).
|
||||
|
||||
We are using root to log in to the other servers, but we can use other local users as well, & we need to tell Ansible which user to use. To do so, we will first create a folder named 'group_vars' in '/etc/ansible',
|
||||
|
||||
```
|
||||
$ cd /etc/ansible
|
||||
$ mkdir group_vars
|
||||
```
|
||||
|
||||
Next, we will create a file named after the group we created in '/etc/ansible/hosts', i.e. test_clients,
|
||||
|
||||
```
|
||||
$ vi test_clients
|
||||
```
|
||||
|
||||
& add the following information about the user,
|
||||
|
||||
```
---
ansible_ssh_user: root
```

**Note:** The file must start with '---' (three hyphens), so take note of that.
|
||||
|
||||
If we want to use the same user for all the groups, then we can create a single file named 'all' with the ssh-login user details, instead of creating a file for every group.
|
||||
|
||||
```
|
||||
$ vi /etc/ansible/group_vars/all
|
||||
---
|
||||
ansible_ssh_user: root
|
||||
```
|
||||
|
||||
Similarly, we can set up files for individual hosts as well.
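As a sketch of that per-host variant, a file named after a single host can live under a 'host_vars' directory (the directory name follows Ansible's standard convention; the alias matches the example inventory above):

```
$ mkdir /etc/ansible/host_vars
$ vi /etc/ansible/host_vars/client1
---
ansible_ssh_user: root
```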
|
||||
|
||||
Now, the setup for the clients has been done. We will now push some simple commands to all the clients being managed by Ansible.
|
||||
|
||||
## Testing hosts
|
||||
|
||||
To check the connectivity of all the hosts, we will issue a command,
|
||||
|
||||
```
|
||||
$ ansible -m ping all
|
||||
```
|
||||
|
||||
If all the hosts are properly connected, it should return the following output,
|
||||
|
||||
```
|
||||
client1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
client2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
|
||||
```
|
||||
|
||||
We can also issue command to an individual host,
|
||||
|
||||
```
|
||||
$ ansible -m ping client1
|
||||
```
|
||||
|
||||
or to the multiple hosts,
|
||||
|
||||
```
|
||||
$ ansible -m ping client1:client2
|
||||
```
|
||||
|
||||
or even to a single group,
|
||||
|
||||
```
|
||||
$ ansible -m ping test_clients
|
||||
```
|
||||
|
||||
This completes our tutorial on setting up an Ansible server. In future posts we will further explore the functionality offered by Ansible. If you have any doubts or queries regarding this post, use the comment box below.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/create-first-ansible-server-automation-setup/
|
||||
|
||||
作者:[SHUSAIN][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/configure-ssh-server-publicprivate-key/
|
|
||||
How to Use the ZFS Filesystem on Ubuntu Linux
======
|
||||
There are a myriad of [filesystems available for Linux][1]. So why try a new one? They all work, right? They're not all the same, and some have some very distinct advantages, like ZFS.
|
||||
|
||||
|
sources/tech/20171101 -dev-[u]random- entropy explained.md
|
||||
/dev/[u]random: entropy explained
|
||||
======
|
||||
### Entropy
|
||||
|
||||
When the topic of /dev/random and /dev/urandom comes up, you always hear this word: “Entropy”. Everyone seems to have their own analogy for it, so why not me? I like to think of entropy as “random juice”: it is juice, required for random to be more random.
|
||||
|
||||
If you have ever generated an SSL certificate, or a GPG key, you may have seen something like:
|
||||
```
|
||||
We need to generate a lot of random bytes. It is a good idea to perform
|
||||
some other action (type on the keyboard, move the mouse, utilize the
|
||||
disks) during the prime generation; this gives the random number
|
||||
generator a better chance to gain enough entropy.
|
||||
++++++++++..+++++.+++++++++++++++.++++++++++...+++++++++++++++...++++++
|
||||
+++++++++++++++++++++++++++++.+++++..+++++.+++++.+++++++++++++++++++++++++>.
|
||||
++++++++++>+++++...........................................................+++++
|
||||
Not enough random bytes available. Please do some other work to give
|
||||
the OS a chance to collect more entropy! (Need 290 more bytes)
|
||||
|
||||
```
|
||||
|
||||
|
||||
By typing on the keyboard, and moving the mouse, you help generate Entropy, or Random Juice.
|
||||
|
||||
You might be asking yourself: why do I need entropy, and why is it so important for random to be actually random? Well, let’s say our entropy was limited to keyboard, mouse, and disk IO. But our system is a server, so I know there is no mouse and keyboard input. This means the only factor is your IO, and if it is a single disk that is barely used, you will have low entropy. This means your system’s ability to be random is weak. In other words, I could play the probability game and significantly decrease the amount of time it would take to crack things like your ssh keys, or to decrypt what you thought was an encrypted session.
|
||||
|
||||
Okay, but that is pretty unrealistic, right? No, actually it isn’t. Take a look at this [Debian OpenSSH vulnerability][1]. This particular issue was caused by someone removing some of the code responsible for adding entropy. Rumor has it they removed it because it was causing valgrind to throw warnings. However, in doing so, random became MUCH less random. In fact, so much less that brute-forcing the generated private ssh keys became a feasible attack vector.
|
||||
|
||||
Hopefully by now we understand how important entropy is to security, whether you realize you are using it or not.
|
||||
|
||||
### /dev/random & /dev/urandom
|
||||
|
||||
|
||||
/dev/urandom is a pseudo-random number generator, and it **does not** block when you run out of entropy.
/dev/random is a true random number generator, and it **does** block when you run out of entropy.
|
||||
|
||||
Most often, if we are dealing with something pragmatic that doesn’t contain the keys to your nukes, /dev/urandom is the right choice. Otherwise, if you go with /dev/random, then when the system runs out of entropy your application is going to behave strangely. Whether it outright fails, or just hangs until enough entropy is available, depends on how you wrote your application.
|
||||
|
||||
### Checking the Entropy
|
||||
|
||||
So, how much Entropy do you have?
|
||||
```
|
||||
[root@testbox test]# cat /proc/sys/kernel/random/poolsize
|
||||
4096
|
||||
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
|
||||
2975
|
||||
[root@testbox test]#
|
||||
|
||||
```
|
||||
|
||||
/proc/sys/kernel/random/poolsize, to state the obvious, is the size (in bits) of the entropy pool, i.e. how much random juice can be saved before we stop pumping more in. /proc/sys/kernel/random/entropy_avail is the amount (in bits) of random juice currently in the pool.
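A tiny sketch that turns those two files into a fill percentage (these are the standard Linux paths; the values vary from boot to boot and kernel to kernel):

```shell
# Read the pool size and available entropy, then report how full the pool is.
pool=$(cat /proc/sys/kernel/random/poolsize)
avail=$(cat /proc/sys/kernel/random/entropy_avail)
line=$(awk -v p="$pool" -v a="$avail" 'BEGIN {printf "entropy: %d/%d bits (%.0f%%)", a, p, 100 * a / p}')
echo "$line"
```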
|
||||
|
||||
### How can we influence this number?
|
||||
|
||||
The number is drained as we use it. The crudest example I can come up with is catting /dev/random into /dev/null:
|
||||
```
|
||||
[root@testbox test]# cat /dev/random > /dev/null &
|
||||
[1] 19058
|
||||
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
|
||||
0
|
||||
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
|
||||
1
|
||||
[root@testbox test]#
|
||||
|
||||
```
|
||||
|
||||
The easiest way to influence this is to run [Haveged][2]. Haveged is a daemon that uses processor “flutter” to add entropy to the system’s entropy pool. Installation and basic setup are pretty straightforward:
|
||||
```
|
||||
[root@b08s02ur ~]# systemctl enable haveged
|
||||
Created symlink from /etc/systemd/system/multi-user.target.wants/haveged.service to /usr/lib/systemd/system/haveged.service.
|
||||
[root@b08s02ur ~]# systemctl start haveged
|
||||
[root@b08s02ur ~]#
|
||||
|
||||
```
|
||||
|
||||
On a machine with relatively moderate traffic:
|
||||
```
|
||||
[root@testbox ~]# pv /dev/random > /dev/null
|
||||
40 B 0:00:15 [ 0 B/s] [ <=> ]
|
||||
52 B 0:00:23 [ 0 B/s] [ <=> ]
|
||||
58 B 0:00:25 [5.92 B/s] [ <=> ]
|
||||
64 B 0:00:30 [6.03 B/s] [ <=> ]
|
||||
^C
|
||||
[root@testbox ~]# systemctl start haveged
|
||||
[root@testbox ~]# pv /dev/random > /dev/null
|
||||
7.12MiB 0:00:05 [1.43MiB/s] [ <=> ]
|
||||
15.7MiB 0:00:11 [1.44MiB/s] [ <=> ]
|
||||
27.2MiB 0:00:19 [1.46MiB/s] [ <=> ]
|
||||
43MiB 0:00:30 [1.47MiB/s] [ <=> ]
|
||||
^C
|
||||
[root@testbox ~]#
|
||||
|
||||
```
|
||||
|
||||
Using pv, we are able to see how much data we are passing through the pipe. As you can see, before haveged we were getting only a few bytes per second (B/s), whereas after starting haveged and adding processor flutter to our entropy pool we get ~1.5MiB/sec.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://jhurani.com/linux/2017/11/01/entropy-explained.html
|
||||
|
||||
作者:[James J][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://jblevins.org/log/ssh-vulnkey
|
||||
[1]:http://jhurani.com/linux/2017/11/01/%22https://jblevins.org/log/ssh-vulnkey%22
|
||||
[2]:http://www.issihosts.com/haveged/
|
|
||||
translating-----geekpi
|
||||
|
||||
3 Essential Questions to Ask at Your Next Tech Interview
|
||||
======
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/os-jobs_0.jpg?itok=nDf5j7xC)
|
||||
|
||||
Interviewing can be stressful, but 58 percent of companies tell Dice and the Linux Foundation that they need to hire open source talent in the months ahead. Learn how to ask the right questions.
|
||||
|
||||
The Linux Foundation
|
||||
|
||||
The annual [Open Source Jobs Report][1] from Dice and The Linux Foundation reveals a lot about prospects for open source professionals and hiring activity in the year ahead. In this year's report, 86 percent of tech professionals said that knowing open source has advanced their careers. Yet what happens with all that experience when it comes time to advance within their own organization or apply for a new role elsewhere?
|
||||
|
||||
Interviewing for a new job is never easy. Aside from the complexities of juggling your current work while preparing for a new role, there's the added pressure of coming up with the necessary response when the interviewer asks "Do you have any questions for me?"
|
||||
|
||||
At Dice, we're in the business of careers, advice, and connecting tech professionals with employers. But we also hire tech talent at our organization to work on open source projects. In fact, the Dice platform is based on a number of Linux distributions and we leverage open source databases as the basis for our search functionality. In short, we couldn't run Dice without open source software, therefore it's vital that we hire professionals who understand, and love, open source.
|
||||
|
||||
Over the years, I've learned the importance of asking good questions during an interview. It's an opportunity to learn about your potential new employer, as well as better understand if they are a good match for your skills.
|
||||
|
||||
Here are three essential questions to ask and the reason they're important:
|
||||
|
||||
**1\. What is the company's position on employees contributing to open source projects or writing code in their spare time?**
|
||||
|
||||
The answer to this question will tell you a lot about the company you're interviewing with. In general, companies will want tech pros who contribute to websites or projects as long as they don't conflict with the work you're doing at that firm. Allowing this outside work also fosters an entrepreneurial spirit in the tech organization, and teaches tech skills that you may not otherwise get in the normal course of your day.
|
||||
|
||||
**2\. How are projects prioritized here?**
|
||||
|
||||
As all companies have become tech companies, there is often a division between innovative customer-facing tech projects and those that improve the platform itself. Will you be working on keeping the existing platform up to date? Or working on new products for the public? Depending on where your interests lie, the answer could determine if the company is a right fit for you.
|
||||
|
||||
**3\. Who primarily makes decisions on new products and how much input do developers have in the decision-making process?**
|
||||
|
||||
This question is one part understanding who is responsible for innovation at the company (and how close you'll be working with him/her) and one part discovering your career path at the firm. A good company will talk to its developers and open source talent ahead of developing new products. It seems like a no brainer, but it's a step that's sometimes missed and will mean the difference between a collaborative environment or chaotic process ahead of new product releases.
|
||||
|
||||
Interviewing can be stressful, however as 58 percent of companies tell Dice and The Linux Foundation that they need to hire open source talent in the months ahead, it's important to remember the heightened demand puts professionals like you in the driver's seat. Steer your career in the direction you desire.
|
||||
|
||||
[Download][2] the full 2017 Open Source Jobs Report now.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/os-jobs/2017/12/3-essential-questions-ask-your-next-tech-interview
|
||||
|
||||
作者:[Brian Hostetter][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/brianhostetter
|
||||
[1]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/
|
||||
[2]:http://bit.ly/2017OSSjobsreport
|
@ -1,61 +0,0 @@
|
||||
Containers, the GPL, and copyleft: No reason for concern
|
||||
============================================================
|
||||
|
||||
### Wondering how open source licensing affects Linux containers? Here's what you need to know.
|
||||
|
||||
|
||||
![Containers, the GPL, and copyleft: No reason for concern](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_patents4abstract_B.png?itok=6RHeRaYh "Containers, the GPL, and copyleft: No reason for concern")
|
||||
Image by: opensource.com
|
||||
|
||||
Though open source is thoroughly mainstream, new software technologies and old technologies that get newly popularized sometimes inspire hand-wringing about open source licenses. Most often the concern is about the GNU General Public License (GPL), and specifically the scope of its copyleft requirement, which is often described (somewhat misleadingly) as the GPL’s derivative work issue.
|
||||
|
||||
One imperfect way of framing the question is whether GPL-licensed code, when combined in some sense with proprietary code, forms a single modified work such that the proprietary code could be interpreted as being subject to the terms of the GPL. While we haven’t yet seen much of that concern directed to Linux containers, we expect more questions to be raised as adoption of containers continues to grow. But it’s fairly straightforward to show that containers do _not_ raise new or concerning GPL scope issues.
|
||||
|
||||
Statutes and case law provide little help in interpreting a license like the GPL. On the other hand, many of us give significant weight to the interpretive views of the Free Software Foundation (FSF), the drafter and steward of the GPL, even in the typical case where the FSF is not a copyright holder of the software at issue. In addition to being the author of the license text, the FSF has been engaged for many years in providing commentary and guidance on its licenses to the community. Its views have special credibility and influence based on its public interest mission and leadership in free software policy.
|
||||
|
||||
The FSF’s existing guidance on GPL interpretation has relevance for understanding the effects of including GPL and non-GPL code in containers. The FSF has placed emphasis on the process boundary when considering copyleft scope, and on the mechanism and semantics of the communication between multiple software components to determine whether they are closely integrated enough to be considered a single program for GPL purposes. For example, the [GNU Licenses FAQ][4] takes the view that pipes, sockets, and command-line arguments are mechanisms that are normally suggestive of separateness (in the absence of sufficiently "intimate" communications).

Consider the case of a container in which both GPL code and proprietary code might coexist and execute. A container is, in essence, an isolated userspace stack. In the [OCI container image format][5], code is packaged as a set of filesystem changeset layers, with the base layer normally being a stripped-down conventional Linux distribution without a kernel. As with the userspace of non-containerized Linux distributions, these base layers invariably contain many GPL-licensed packages (both GPLv2 and GPLv3), as well as packages under licenses considered GPL-incompatible, and commonly function as a runtime for proprietary as well as open source applications. The ["mere aggregation" clause][6] in GPLv2 (as well as its counterpart GPLv3 provision on ["aggregates"][7]) shows that this type of combination is generally acceptable, is specifically contemplated under the GPL, and has no effect on the licensing of the two programs, assuming incompatibly licensed components are separate and independent.

Of course, in a given situation, the relationship between two components may not be "mere aggregation," but the same is true of software running in non-containerized userspace on a Linux system. There is nothing in the technical makeup of containers or container images that suggests a need to apply a special form of copyleft scope analysis.

It follows that when looking at the relationship between code running in a container and code running outside a container, the "separate and independent" criterion is almost certainly met. The code will run as separate processes, and the whole technical point of using containers is isolation from other software running on the system.

Now consider the case where two components, one GPL-licensed and one proprietary, are running in separate but potentially interacting containers, perhaps as part of an application designed with a [microservices][8] architecture. In the absence of very unusual facts, we should not expect to see copyleft scope extending across multiple containers. Separate containers involve separate processes. Communication between containers by way of network interfaces is analogous to such mechanisms as pipes and sockets, and a multi-container microservices scenario would seem to preclude what the FSF calls "[intimate][9]" communication by definition. The composition of an application using multiple containers may not be dispositive of the GPL scope issue, but it makes the technical boundaries between the components more apparent and provides a strong basis for arguing separateness. Here, too, there is no technical feature of containers that suggests application of a different and stricter approach to copyleft scope analysis.

A company that is overly concerned with the potential effects of distributing GPL-licensed code might attempt to prohibit its developers from adding any such code to a container image that it plans to distribute. Insofar as the aim is to avoid distributing code under the GPL, this is a dubious strategy. As noted above, the base layers of conventional container images will contain multiple GPL-licensed components. If the company pushes a container image to a registry, there is normally no way it can guarantee that this will not include the base layer, even if it is widely shared.

On the other hand, the company might decide to embrace containerization as a means of limiting copyleft scope issues by isolating GPL and proprietary code—though one would hope that technical benefits would drive the decision, rather than legal concerns likely based on unfounded anxiety about the GPL. While in a non-containerized setting the relationship between two interacting software components will often be mere aggregation, the evidence of separateness that containers provide may be comforting to those who worry about GPL scope.

Open source license compliance obligations may arise when sharing container images. But there’s nothing technically different or unique about containers that changes the nature of these obligations or makes them harder to satisfy. With respect to copyleft scope, containerization should, if anything, ease the concerns of the extra-cautious.

### About the author

[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-216.jpg?itok=R8W7jae8)][10] Richard Fontana - Richard is Senior Commercial Counsel on the Products and Technologies team in Red Hat's legal department. Most of his work focuses on open source-related legal issues. [More about me][2]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/containers-gpl-and-copyleft

Author: [Richard Fontana][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

[a]:https://opensource.com/users/fontana
[1]:https://opensource.com/article/18/1/containers-gpl-and-copyleft?rate=qTlANxnuA2tf0hcGE6Po06RGUzcbB-cBxbU3dCuCt9w
[2]:https://opensource.com/users/fontana
[3]:https://opensource.com/user/10544/feed
[4]:https://www.gnu.org/licenses/gpl-faq.en.html#MereAggregation
[5]:https://github.com/opencontainers/image-spec/blob/master/spec.md
[6]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html#section2
[7]:https://www.gnu.org/licenses/gpl.html#section5
[8]:https://www.redhat.com/en/topics/microservices
[9]:https://www.gnu.org/licenses/gpl-faq.en.html#GPLPlugins
[10]:https://opensource.com/users/fontana
[11]:https://opensource.com/users/fontana
[12]:https://opensource.com/users/fontana
[13]:https://opensource.com/tags/licensing
[14]:https://opensource.com/tags/containers

@@ -1,140 +0,0 @@

Being open about data privacy
============================================================

### Regulations including GDPR notwithstanding, data privacy is something that's important for pretty much everybody.

![Being open about data privacy](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_opendata.png?itok=M8L2HGVx "Being open about data privacy")

Image by: opensource.com

Today is [Data Privacy Day][9] ("Data Protection Day" in Europe), and you might think that those of us in the open source world should believe that all data should be free, [as information supposedly wants to be][10], but life's not that simple. That's for two main reasons:

1. Most of us (and not just in open source) believe there's at least some data about us that we might not feel happy sharing (I compiled an example list in [a post][2] I published a while ago).
2. Many of us working in open source actually work for commercial companies or other organisations subject to legal requirements around what they can share.

So actually, data privacy is something that's important for pretty much everybody.

It turns out that the starting point for what data people and governments believe should be available for organisations to use is somewhat different between the U.S. and Europe, with the former generally providing more latitude for entities—particularly, the more cynical might suggest, large commercial entities—to use data they've collected about us as they will. Europe, on the other hand, has historically taken a more restrictive view, and on the 25th of May, Europe's view arguably will have triumphed.

### The impact of GDPR

That's a rather sweeping statement, but the fact remains that this is the date on which a piece of legislation called the General Data Protection Regulation (GDPR), enacted by the European Union in 2016, becomes enforceable. The GDPR basically provides a stringent set of rules about how personal data can be stored, what it can be used for, who can see it, and how long it can be kept. It also describes what personal data is—and it's a pretty broad set of items, from your name and home address to your medical records and on through to your computer's IP address.

What is important about the GDPR, though, is that it doesn't apply just to European companies, but to any organisation processing data about EU citizens. If you're an Argentinian, Japanese, U.S., or Russian company and you're collecting data about an EU citizen, you're subject to it.

"Pah!" you may say,[1][11] "I'm not based in the EU: what can they do to me?" The answer is simple: If you want to continue doing any business in the EU, you'd better comply, because if you breach GDPR rules, you could be liable for up to four percent of your _global_ revenues. Yes, that's global revenues: not just revenues in a particular country in Europe or across the EU, not just profits, but _global revenues_. Those are the sorts of numbers that should lead you to talk to your legal team, who will direct you to your exec team, who will almost immediately direct you to your IT group to make sure you're compliant in pretty short order.

This may seem like it's not particularly relevant to non-EU citizens, but it is. For most companies, it's going to be simpler and more efficient to implement the same protection measures for data associated with _all_ customers, partners, and employees they deal with, rather than just targeting specific measures at EU citizens. This has got to be a good thing.[2][12]

However, just because GDPR will soon be applied to organisations across the globe doesn't mean that everything's fine and dandy[3][13]: it's not. We give away information about ourselves all the time—and permission for companies to use it.

There's a telling (though disputed) saying: "If you're not paying, you're the product." What this suggests is that if you're not paying for a service, then somebody else is paying to use your data. Do you pay to use Facebook? Twitter? Gmail? How do you think they make their money? Well, partly through advertising, and some might argue that's a service they provide to you, but actually that's them using your data to get money from the advertisers. You're not really a customer of advertising—it's only once you buy something from the advertiser that you become their customer, but until you do, the relationship is between the owner of the advertising platform and the advertiser.

Some of these services allow you to pay to reduce or remove advertising (Spotify is a good example), but on the other hand, advertising may be enabled even for services that you think you do pay for (Amazon is apparently working to allow adverts via Alexa, for instance). Unless we want to start paying to use all of these "free" services, we need to be aware of what we're giving up, and making some choices about what we expose and what we don't.

### Who's the customer?

There's another issue around data that should be exercising us, and it's a direct consequence of the amounts of data that are being generated. There are many organisations out there—including "public" ones like universities, hospitals, or government departments[4][14]—who generate enormous quantities of data all the time, and who just don't have the capacity to store it. It would be a different matter if this data didn't have long-term value, but it does, as the tools for handling Big Data are developing, and organisations are realising they can be mining this now and in the future.

The problem they face, though, as the amount of data increases and their capacity to store it fails to keep up, is what to do with it. _Luckily_—and I use this word with a very heavy dose of irony[5][15]—big corporations are stepping in to help them. "Give us your data," they say, "and we'll host it for free. We'll even let you use the data you collected when you want to!" Sounds like a great deal, yes? A fantastic example of big corporations[6][16] taking a philanthropic stance and helping out public organisations that have collected all of that lovely data about us.

Sadly, philanthropy isn't the only reason. These hosting deals come with a price: in exchange for agreeing to host the data, these corporations get to sell access to it to third parties. And do you think the public organisations, or those whose data is collected, will get a say in who these third parties are or how they will use it? I'll leave this as an exercise for the reader.[7][17]

### Open and positive

It's not all bad news, however. There's a growing "open data" movement among governments to encourage departments to make much of their data available to the public and other bodies for free. In some cases, this is being specifically legislated. Many voluntary organisations—particularly those receiving public funding—are starting to do the same. There are glimmerings of interest even from commercial organisations. What's more, there are techniques becoming available, such as those around differential privacy and multi-party computation, that are beginning to allow us to mine data across data sets without revealing too much about individuals—a computing problem that has historically been much less tractable than you might otherwise expect.

What does this all mean to us? Well, I've written before on Opensource.com about the [commonwealth of open source][18], and I'm increasingly convinced that we need to look beyond just software to other areas: hardware, organisations, and, relevant to this discussion, data. Let's imagine that you're a company (A) that provides a service to another company, a customer (B).[8][19] There are four different types of data in play:

1. Data that's fully open: visible to A, B, and the rest of the world
2. Data that's known, shared, and confidential: visible to A and B, but nobody else
3. Data that's company-confidential: visible to A, but not B
4. Data that's customer-confidential: visible to B, but not A

First of all, maybe we should be a bit more open about data and default to putting it into bucket 1. That data—on self-driving cars, voice recognition, mineral deposits, demographic statistics—could be enormously useful if it were available to everyone.[9][20] Also, wouldn't it be great if we could find ways to make the data in buckets 2, 3, and 4—or at least some of it—available in bucket 1, whilst still keeping the details confidential? That's the hope for some of these new techniques being researched. They're a way off, though, so don't get too excited, and in the meantime, start thinking about making more of your data open by default.

### Some concrete steps

So, what can we do around data privacy and being open? Here are a few concrete steps that occurred to me: please use the comments to contribute more.

* Check to see whether your organisation is taking GDPR seriously. If it isn't, push for it.
* Default to encrypting sensitive data (or hashing where appropriate), and deleting it when it's no longer required—there's really no excuse for data to be in the clear these days except when it's actually being processed.
* Consider what information you disclose when you sign up for services, particularly social media.
* Discuss this with your non-technical friends.
* Educate your children, your friends' children, and their friends. Better yet, go and talk to their teachers about it and present something in their schools.
* Encourage the organisations you work for, volunteer for, or interact with to make data open by default. Rather than thinking, "why should I make this public?" start with "why _shouldn't_ I make this public?"
* Try accessing some of the open data sources out there. Mine it, create apps that use it, perform statistical analyses, draw pretty graphs,[10][3] make interesting music, but consider doing something with it. Tell the organisations that sourced it, thank them, and encourage them to do more.
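On the "hashing where appropriate" point above, here is a minimal Python sketch: store a salted, deliberately slow hash of a secret rather than the secret itself. The function names are mine, not from any particular framework.

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

def hash_secret(secret: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Return (salt, digest) using a deliberately slow key-derivation hash."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode("utf-8"), salt, 200_000)
    return salt, digest

def verify_secret(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison of a candidate against the stored digest."""
    return hmac.compare_digest(hash_secret(candidate, salt)[1], digest)

salt, digest = hash_secret("correct horse battery staple")
print(verify_secret("correct horse battery staple", salt, digest))  # True
print(verify_secret("wrong guess", salt, digest))                   # False
```

The point of the salt and the high iteration count is that a leaked table of digests is far less useful to an attacker than a table of plaintext secrets would be.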

* * *

1\. Though you probably won't, I admit.

2\. Assuming that you believe that your personal data should be protected.

3\. If you're wondering what "dandy" means, you're not alone at this point.

4\. Exactly how public these institutions seem to you will probably depend on where you live: [YMMV][21].

5\. And given that I'm British, that's a really very, very heavy dose.

6\. And they're likely to be big corporations: nobody else can afford all of that storage and the infrastructure to keep it available.

7\. No. The answer's "no."

8\. Although the example works for people, too. Oh, look: A could be Alice, B could be Bob…

9\. Not that we should be exposing personal data or data that actually needs to be confidential, of course—not that type of data.

10\. A friend of mine decided that it always seemed to rain when she picked her children up from school, so to avoid confirmation bias, she accessed rainfall information across the school year and created graphs that she shared on social media.

### About the author

[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/2017-05-10_0129.jpg?itok=Uh-eKFhx)][22] Mike Bursell - I've been in and around Open Source since around 1997, and have been running (GNU) Linux as my main desktop at home and work since then: [not always easy][4]... I'm a security bod and architect, and am currently employed as Chief Security Architect for Red Hat. I have a blog - "[Alice, Eve & Bob][5]" - where I write (sometimes rather parenthetically) about security. I live in the UK and... [more about Mike Bursell][6] [More about me][7]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/being-open-about-data-privacy

Author: [Mike Bursell][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

[a]:https://opensource.com/users/mikecamel
[1]:https://opensource.com/article/18/1/being-open-about-data-privacy?rate=oQCDAM0DY-P97d3pEEW_yUgoCV1ZXhv8BHYTnJVeHMc
[2]:https://aliceevebob.wordpress.com/2017/06/06/helping-our-governments-differently/
[3]:https://opensource.com/article/18/1/being-open-about-data-privacy#10
[4]:https://opensource.com/article/17/11/politics-linux-desktop
[5]:https://aliceevebob.com/
[6]:https://opensource.com/users/mikecamel
[7]:https://opensource.com/users/mikecamel
[8]:https://opensource.com/user/105961/feed
[9]:https://en.wikipedia.org/wiki/Data_Privacy_Day
[10]:https://en.wikipedia.org/wiki/Information_wants_to_be_free
[11]:https://opensource.com/article/18/1/being-open-about-data-privacy#1
[12]:https://opensource.com/article/18/1/being-open-about-data-privacy#2
[13]:https://opensource.com/article/18/1/being-open-about-data-privacy#3
[14]:https://opensource.com/article/18/1/being-open-about-data-privacy#4
[15]:https://opensource.com/article/18/1/being-open-about-data-privacy#5
[16]:https://opensource.com/article/18/1/being-open-about-data-privacy#6
[17]:https://opensource.com/article/18/1/being-open-about-data-privacy#7
[18]:https://opensource.com/article/17/11/commonwealth-open-source
[19]:https://opensource.com/article/18/1/being-open-about-data-privacy#8
[20]:https://opensource.com/article/18/1/being-open-about-data-privacy#9
[21]:http://www.outpost9.com/reference/jargon/jargon_40.html#TAG2036
[22]:https://opensource.com/users/mikecamel
[23]:https://opensource.com/users/mikecamel
[24]:https://opensource.com/users/mikecamel
[25]:https://opensource.com/tags/open-data

@@ -1,194 +1,126 @@

An introduction to the DomTerm terminal emulator for Linux
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah)

Image by: Jamie Cox. Modified by Opensource.com. CC BY 2.0.

### Learn about DomTerm, a terminal emulator and multiplexer with HTML graphics and other unusual features.

[DomTerm][1] is a modern terminal emulator that uses a browser engine as a "GUI toolkit." This enables some neat features, such as embeddable graphics and links, HTML rich text, and foldable (show/hide) commands. Otherwise it looks and feels like a feature-full, standalone terminal emulator, with excellent xterm compatibility (including mouse handling and 24-bit color), and appropriate "chrome" (menus). In addition, there is built-in support for session management and sub-windows (as in `tmux` and `GNU screen`), basic input editing (as in `readline`), and paging (as in `less`).

![](https://opensource.com/sites/default/files/u128651/domterm1.png)

Image 1: The DomTerm terminal emulator. View larger image.

Below we'll look more at these features. We'll assume you have `domterm` installed (skip to the end of this article if you need to get and build DomTerm). First, though, here's a quick overview of the technology.

### Frontend vs. backend

Most of DomTerm is written in JavaScript and runs in a browser engine. This can be a desktop web browser, such as Chrome or Firefox (see image 3), or it can be an embedded browser. Using a general web browser works fine, but the user experience isn't as nice (as the menus are designed for general browsing, not for a terminal emulator), and the security model gets in the way, so using an embedded browser is nicer.

The following are currently supported:

* `qtdomterm`, which uses the Qt toolkit and `QtWebEngine`
* An [Electron][2] embedding (see image 1)
* `atom-domterm` runs DomTerm as a package in the [Atom text editor][3] (which is also based on Electron) and integrates with the Atom pane system (see image 2)
* A wrapper for JavaFX's `WebEngine`, which is useful for code written in Java (see image 4)
* Previously, the preferred frontend used [Firefox-XUL][4], but Mozilla has since dropped XUL

![DomTerm terminal panes in Atom editor](https://opensource.com/sites/default/files/images/dt-atom1.png "DomTerm terminal panes in Atom editor")

Image 2: DomTerm terminal panes in Atom editor. [View larger image.][7]

Currently, the Electron frontend is probably the nicest option, closely followed by the Qt frontend. If you use Atom, `atom-domterm` works pretty well.

The backend server is written in C. It manages pseudo terminals (PTYs) and sessions. It is also an HTTP server that provides the JavaScript and other files to the frontend. The `domterm` command starts terminal jobs and performs other requests. If there is no server running, `domterm` daemonizes itself. Communication between the frontend and the server is normally done using WebSockets (with [libwebsockets][8] on the server). However, the JavaFX embedding uses neither WebSockets nor the DomTerm server; instead Java applications communicate directly using the Java-JavaScript bridge.

### A solid xterm-compatible terminal emulator

DomTerm looks and feels like a modern terminal emulator. It handles mouse events, 24-bit color, Unicode, double-width (CJK) characters, and input methods. DomTerm does a very good job on the [vttest testsuite][9].

Unusual features include:

**Show/hide buttons ("folding"):** The little triangles (seen in image 2 above) are buttons that hide/show the corresponding output. To create the buttons, just add certain [escape sequences][10] in the [prompt text][11].

**Mouse-click support for `readline` and similar input editors:** If you click in the (yellow) input area, DomTerm will send the right sequence of arrow-key keystrokes to the application. (This is enabled by escape sequences in the prompt; you can also force it using Alt+Click.)

**Style the terminal using CSS:** This is usually done in `~/.domterm/settings.ini`, which is automatically reloaded when saved. For example, in image 2, terminal-specific background colors were set.
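As an illustrative sketch only: the `style.user` key shown below is an assumption on my part, so check the DomTerm settings documentation for the exact option names your version supports. The CSS itself is ordinary CSS.

```ini
; Hypothetical fragment of ~/.domterm/settings.ini. The `style.user`
; key name is an assumption; consult DomTerm's documentation for the
; real option names. Saving the file reloads the style automatically.
style.user = div.domterm { background-color: #fdf6e3 }
```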

### A better REPL console

A classic terminal emulator works on rectangular grids of character cells. This works for a REPL (command shell), but it is not ideal. Here are some DomTerm features useful for REPLs that are not typically found in terminal emulators:

**A command can "print" an image, a graph, a mathematical formula, or a set of clickable links:** An application can send an escape sequence containing almost any HTML. (The HTML is scrubbed to remove JavaScript and other dangerous features.)

Image 3 shows a fragment from a [`gnuplot`][12] session. Gnuplot (2.1 or later) supports `domterm` as a terminal type. Graphical output is converted to an [SVG image][13], which is then printed to the terminal. My blog post [Gnuplot display on DomTerm][14] provides more information on this.

![](https://opensource.com/sites/default/files/dt-gnuplot.png)

Image 3: Gnuplot screenshot. View larger image.

The [Kawa][15] language has a library for creating and transforming [geometric picture values][16]. If you print such a picture value to a DomTerm terminal, the picture is converted to SVG and embedded in the output.

![](https://opensource.com/sites/default/files/dt-kawa1.png)

Image 4: Computable geometry in Kawa. View larger image.

**Rich text in output:** Help messages are more readable and look nicer with HTML styling. The lower pane of image 1 shows the output from `domterm help`. (The output is plaintext if not running under DomTerm.) Note the `PAUSED` message from the built-in pager.
|
||||
|
||||
**Error messages can include clickable links:** DomTerm recognizes the syntax `filename:line:column:` and turns it into a link that opens the file and line in a configurable text editor. (This works for relative filenames if you use `PROMPT_COMMAND` or similar to track directories.)
|
||||
|
||||
![Image 4: Computable geometry in Kawa](https://opensource.com/sites/default/files/dt-kawa1.png "Image 4: Computable geometry in Kawa")
|
||||
A compiler can detect that it is running under DomTerm and directly emit file links in an escape sequence. This is more robust than depending on DomTerm's pattern matching, as it handles spaces and other special characters, and it does not depend on directory tracking. In image 4, you can see error messages from the [Kawa compiler][15]. Hovering over the file position causes it to be underlined, and the `file:` URL shows in the `atom-domterm` message area (bottom of the window). (When not using `atom-domterm`, such messages are shown in an overlay box, as seen for the `PAUSED` message in image 1.)
|
||||
|
||||
The action when clicking on a link is configurable. The default action for a `file:` link with a `#position` suffix is to open the file in a text editor.
|
||||
|
||||
**Structured internal representation:** The following are all represented in the internal node structure: Commands, prompts, input lines, normal and error output, tabs, and preserving the structure if you "Save as HTML." The HTML file is compatible with XML, so you can use XML tools to search or transform the output. The command `domterm view-saved` opens a saved HTML file in a way that enables command folding (show/hide buttons are active) and reflow on window resize.
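As a sketch of the kind of XML processing this enables, Python's `xml.etree.ElementTree` can pull text out of an XHTML fragment by class attribute. Note that the element and class names below are hypothetical; the actual markup produced by "Save as HTML" will differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment in the spirit of a saved session; the real
# structure written by DomTerm's "Save as HTML" is not shown here.
saved = """<div>
  <div class="command">make check</div>
  <div class="output">All 42 tests passed.</div>
</div>"""

root = ET.fromstring(saved)

# Collect the text of every element marked as a command:
commands = [el.text for el in root.iter('div') if el.get('class') == 'command']
assert commands == ['make check']
```

The same approach works for transforming the output, since any XML-aware tool (XSLT, XPath, etc.) can consume the file.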
|
||||
|
||||
**Built-in Lisp-style pretty-printing:** You can include pretty-printing directives (e.g., grouping) in the output such that line breaks are recalculated on window resize. See my article [Dynamic pretty-printing in DomTerm][17] for a deeper discussion.
|
||||
|
||||
**Basic built-in line editing** with history (like `GNU readline`): This uses the browser's built-in editor, so it has great mouse and selection handling. You can switch between normal character-mode (most characters typed are sent directly to the process); or line-mode (regular characters are inserted while control characters cause editing actions, with Enter sending the edited line to the process). The default is automatic mode, where DomTerm switches between character-mode and line-mode depending on whether the PTY is in raw or canonical mode.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
**A built-in pager** (like a simplified `less`): Keyboard shortcuts will control scrolling. In "paging mode," the output pauses after each new screen (or single line, if you move forward line-by-line). The paging mode is unobtrusive and smart about user input, so you can (if you wish) run it without it interfering with interactive programs.
|
||||
|
||||
### Multiplexing and sessions
|
||||
|
||||
**Tabs and tiling:** Not only can you create multiple terminal tabs, you can also tile them. You can use either the mouse or a keyboard shortcut to move between panes and tabs as well as create new ones. They can be rearranged and resized with the mouse. This is implemented using the [GoldenLayout][18] JavaScript library. [Image 1][19] shows a window with two panes. The top one has two tabs, with one running [Midnight Commander][20]; the bottom pane shows `domterm help` output as HTML. However, on Atom we instead use its built-in draggable tiles and tabs; you can see this in image 2.
|
||||
|
||||
**Detaching and reattaching to sessions:** DomTerm supports session management, similar to `tmux` and GNU `screen`. You can even attach multiple windows or panes to the same session. This supports multi-user session sharing and remote connections. (For security, all sessions of the same server need to be able to read a Unix domain socket and a local file containing a random key. This restriction will be lifted when we have a good, safe remote-access story.)
|
||||
|
||||
**The** **`domterm`** **command** is also like `tmux` or GNU `screen` in that it has multiple options for controlling or starting a server that manages one or more sessions. The major difference is that, if it's not already running under DomTerm, the `domterm` command creates a new top-level window, rather than running in the existing terminal.
|
||||
|
||||
The `domterm` command has a number of sub-commands, similar to `tmux` or `git`. Some sub-commands create windows or sessions. Others (such as "printing" an image) only work within an existing DomTerm session.
|
||||
|
||||
The command `domterm browse` opens a window or pane for browsing a specified URL, such as when browsing documentation.
|
||||
|
||||
### Getting and installing DomTerm
|
||||
|
||||
DomTerm is available from its [GitHub repository][50]. Currently, there are no prebuilt packages, but there are [detailed instructions][51]. All prerequisites are available on Fedora 27, which makes it especially easy to build.
|
||||
|
||||
### About the author
|
||||
|
||||
[![Per Bothner](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/per180116a.jpg?itok=dNNCOoqX)][52] Per Bothner - Per has been involved with Open Source since before the term existed. He was an early employee of Cygnus (later purchased by Red Hat), which was the first company commercializing Free Software. There he worked on gcc, g++, libio (the precursor to GNU/Linux stdio), gdb, Kawa, and more. Later he worked in the Java group at Sun and Oracle. Per wrote the Emacs term mode. Currently, Per spends too much time on [Kawa][21] (a Scheme compiler...). [More about Per Bothner][22]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/introduction-domterm-terminal-emulator
|
||||
|
||||
作者:[Per Bothner][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/perbothner
|
||||
[1]:https://opensource.com/sites/default/files/u128651/domterm1.png
|
||||
[2]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[3]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[5]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[6]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[1]:http://domterm.org/
|
||||
[2]:https://electronjs.org/
|
||||
[3]:https://atom.io/
|
||||
[4]:https://en.wikipedia.org/wiki/XUL
|
||||
[5]:/file/385346
|
||||
[6]:https://opensource.com/sites/default/files/images/dt-atom1.png (DomTerm terminal panes in Atom editor)
|
||||
[7]:https://opensource.com/sites/default/files/images/dt-atom1.png
|
||||
[8]:https://opensource.com/sites/default/files/dt-gnuplot.png
|
||||
[9]:https://opensource.com/sites/default/files/dt-kawa1.png
|
||||
[10]:https://opensource.com/file/384931
|
||||
[11]:https://electronjs.org/
|
||||
[12]:https://opensource.com/file/385346
|
||||
[13]:https://opensource.com/file/385326
|
||||
[14]:https://opensource.com/file/385331
|
||||
[15]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator?rate=9HfSplqf1e4NohKkTld_881cH1hXTlSwU_2XKrnpTJQ
|
||||
[16]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image1
|
||||
[17]:https://atom.io/
|
||||
[18]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image2
|
||||
[19]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image4
|
||||
[20]:https://en.wikipedia.org/wiki/XUL
|
||||
[21]:https://www.gnu.org/software/kawa/
|
||||
[22]:https://opensource.com/users/perbothner
|
||||
[23]:https://opensource.com/users/perbothner
|
||||
[24]:https://opensource.com/user/205986/feed
|
||||
[25]:https://www.flickr.com/photos/15587432@N02/3281139507/
|
||||
[26]:http://creativecommons.org/licenses/by/2.0
|
||||
[27]:http://domterm.org/
|
||||
[28]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image3
|
||||
[29]:https://libwebsockets.org/
|
||||
[30]:http://invisible-island.net/vttest/
|
||||
[31]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image2
|
||||
[32]:http://domterm.org/Wire-byte-protocol.html
|
||||
[33]:http://domterm.org/Shell-prompts.html
|
||||
[34]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image2
|
||||
[35]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image3
|
||||
[36]:http://www.gnuplot.info/
|
||||
[37]:https://developer.mozilla.org/en-US/docs/Web/SVG
|
||||
[38]:http://per.bothner.com/blog/2016/gnuplot-in-domterm/
|
||||
[39]:https://www.gnu.org/software/kawa/
|
||||
[40]:https://www.gnu.org/software/kawa/Composable-pictures.html
|
||||
[41]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image1
|
||||
[42]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image4
|
||||
[43]:https://www.gnu.org/software/kawa/
|
||||
[44]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image1
|
||||
[45]:http://per.bothner.com/blog/2017/dynamic-prettyprinting/
|
||||
[46]:https://golden-layout.com/
|
||||
[47]:https://opensource.com/sites/default/files/u128651/domterm1.png
|
||||
[48]:https://midnight-commander.org/
|
||||
[49]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image2
|
||||
[50]:https://github.com/PerBothner/DomTerm
|
||||
[51]:http://domterm.org/Downloading-and-building.html
|
||||
[52]:https://opensource.com/users/perbothner
|
||||
[53]:https://opensource.com/users/perbothner
|
||||
[54]:https://opensource.com/users/perbothner
|
||||
[55]:https://opensource.com/tags/linux
|
||||
[8]:https://libwebsockets.org/
|
||||
[9]:http://invisible-island.net/vttest/
|
||||
[10]:http://domterm.org/Wire-byte-protocol.html
|
||||
[11]:http://domterm.org/Shell-prompts.html
|
||||
[12]:http://www.gnuplot.info/
|
||||
[13]:https://developer.mozilla.org/en-US/docs/Web/SVG
|
||||
[14]:http://per.bothner.com/blog/2016/gnuplot-in-domterm/
|
||||
[15]:https://www.gnu.org/software/kawa/
|
||||
[16]:https://www.gnu.org/software/kawa/Composable-pictures.html
|
||||
[17]:http://per.bothner.com/blog/2017/dynamic-prettyprinting/
|
||||
[18]:https://golden-layout.com/
|
||||
[19]:https://opensource.com/sites/default/files/u128651/domterm1.png
|
||||
[20]:https://midnight-commander.org/
|
||||
[21]:https://github.com/PerBothner/DomTerm
|
||||
[22]:http://domterm.org/Downloading-and-building.html
|
||||
|
@ -0,0 +1,239 @@
|
||||
Python + Memcached: Efficient Caching in Distributed Applications – Real Python
|
||||
======
|
||||
|
||||
When writing Python applications, caching is important. Using a cache to avoid recomputing data or accessing a slow database can provide you with a great performance boost.
|
||||
|
||||
Python offers built-in possibilities for caching, from a simple dictionary to a more complete data structure such as [`functools.lru_cache`][2]. The latter can cache any item using a [Least-Recently Used algorithm][3] to limit the cache size.
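As a quick illustration of local, in-process caching, `functools.lru_cache` can memoize a pure function in a few lines:

```python
import functools

@functools.lru_cache(maxsize=128)
def expensive_square(x):
    # Imagine this is slow; the body only runs on a cache miss.
    return x * x

assert expensive_square(4) == 16   # miss: computed
assert expensive_square(4) == 16   # hit: served from the local cache
assert expensive_square.cache_info().hits == 1
```

The cache here lives entirely inside one Python process, which is precisely the limitation discussed next.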
|
||||
|
||||
Those data structures are, however, by definition local to your Python process. When several copies of your application run across a large platform, using an in-memory data structure prevents sharing the cached content. This can be a problem for large-scale and distributed applications.
|
||||
|
||||
![](https://files.realpython.com/media/python-memcached.97e1deb2aa17.png)
|
||||
|
||||
Therefore, when a system is distributed across a network, it also needs a cache that is distributed across a network. Nowadays, there are plenty of network servers that offer caching capability—we already covered [how to use Redis for caching with Django][4].
|
||||
|
||||
As you’re going to see in this tutorial, [memcached][5] is another great option for distributed caching. After a quick introduction to basic memcached usage, you’ll learn about advanced patterns such as “check and set” and using fallback caches to avoid cold cache performance issues.
|
||||
|
||||
### Installing memcached
|
||||
|
||||
Memcached is [available for many platforms][6]:
|
||||
|
||||
  * If you run **Linux** , you can install it using `apt-get install memcached` or `yum install memcached`. This will install memcached from a pre-built package, but you can also build memcached from source, [as explained here][6].
|
||||
* For **macOS** , using [Homebrew][7] is the simplest option. Just run `brew install memcached` after you’ve installed the Homebrew package manager.
|
||||
* On **Windows** , you would have to compile memcached yourself or find [pre-compiled binaries][8].
|
||||
|
||||
|
||||
|
||||
Once installed, memcached can simply be launched by calling the `memcached` command:
|
||||
```
|
||||
$ memcached
|
||||
|
||||
```
|
||||
|
||||
Before you can interact with memcached from Python-land you’ll need to install a memcached client library. You’ll see how to do this in the next section, along with some basic cache access operations.
|
||||
|
||||
### Storing and Retrieving Cached Values Using Python
|
||||
|
||||
If you’ve never used memcached, it is pretty easy to understand. It basically provides a giant network-available dictionary. This dictionary has a few properties that are different from a classical Python dictionary, mainly:
|
||||
|
||||
* Keys and values have to be bytes
|
||||
* Keys and values are automatically deleted after an expiration time
|
||||
|
||||
|
||||
|
||||
Therefore, the two basic operations for interacting with memcached are `set` and `get`. As you might have guessed, they’re used to assign a value to a key or to get a value from a key, respectively.
|
||||
|
||||
My preferred Python library for interacting with memcached is [`pymemcache`][9]—I recommend using it. You can simply [install it using pip][10]:
|
||||
```
|
||||
$ pip install pymemcache
|
||||
|
||||
```
|
||||
|
||||
The following code shows how you can connect to memcached and use it as a network-distributed cache in your Python applications:
|
||||
```
|
||||
>>> from pymemcache.client import base
|
||||
|
||||
# Don't forget to run `memcached' before running this next line:
|
||||
>>> client = base.Client(('localhost', 11211))
|
||||
|
||||
# Once the client is instantiated, you can access the cache:
|
||||
>>> client.set('some_key', 'some value')
|
||||
|
||||
# Retrieve previously set data again:
|
||||
>>> client.get('some_key')
|
||||
'some value'
|
||||
|
||||
```
|
||||
|
||||
The memcached network protocol is really simple and its implementation extremely fast, which makes it useful for storing data that would otherwise be slow to retrieve from the canonical source of data or to compute again.
|
||||
|
||||
While straightforward enough, this example allows storing key/value tuples across the network and accessing them through multiple, distributed, running copies of your application. This is simplistic, yet powerful. And it’s a great first step towards optimizing your application.
|
||||
|
||||
### Automatically Expiring Cached Data
|
||||
|
||||
When storing data into memcached, you can set an expiration time—a maximum number of seconds for memcached to keep the key and value around. After that delay, memcached automatically removes the key from its cache.
|
||||
|
||||
What should you set this cache time to? There is no magic number for this delay, and it will entirely depend on the type of data and application that you are working with. It could be a few seconds, or it might be a few hours.
|
||||
|
||||
Cache invalidation, which defines when to remove the cache because it is out of sync with the current data, is also something that your application will have to handle, especially when presenting data that is too old or stale must be avoided.
|
||||
|
||||
Here again, there is no magical recipe; it depends on the type of application you are building. However, there are several edge cases that should be handled, which we haven’t yet covered in the above example.
|
||||
|
||||
A caching server cannot grow infinitely—memory is a finite resource. Therefore, keys will be flushed out by the caching server as soon as it needs more space to store other things.
|
||||
|
||||
Some keys might also be expired because they reached their expiration time (also sometimes called the “time-to-live” or TTL.) In those cases the data is lost, and the canonical data source must be queried again.
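With `pymemcache`, the TTL is passed as the `expire` argument to `set` (in seconds). The semantics are easy to see without a running server; the sketch below is a minimal in-process mimic of memcached's expiration behavior, for illustration only (it is not the real client):

```python
import time

class ToyTTLCache:
    """In-process mimic of memcached's TTL semantics (illustration only)."""

    def __init__(self):
        self._store = {}  # key -> (value, deadline or None)

    def set(self, key, value, expire=0):
        # expire=0 means "no expiration", as with memcached.
        deadline = time.monotonic() + expire if expire else None
        self._store[key] = (value, deadline)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, deadline = entry
        if deadline is not None and time.monotonic() >= deadline:
            del self._store[key]  # lazily evict the expired key
            return None
        return value

cache = ToyTTLCache()
cache.set('greeting', 'hello', expire=0.05)
assert cache.get('greeting') == 'hello'  # still fresh
time.sleep(0.06)
assert cache.get('greeting') is None     # expired, treated as a miss
```

Either way the key is gone, the caller sees `None` and must fall back to the canonical data source, which is exactly the pattern shown next.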
|
||||
|
||||
This sounds more complicated than it really is. You can generally work with the following pattern when working with memcached in Python:
|
||||
```
|
||||
from pymemcache.client import base
|
||||
|
||||
|
||||
def do_some_query():
|
||||
# Replace with actual querying code to a database,
|
||||
# a remote REST API, etc.
|
||||
return 42
|
||||
|
||||
|
||||
# Don't forget to run `memcached' before running this code
|
||||
client = base.Client(('localhost', 11211))
|
||||
result = client.get('some_key')
|
||||
|
||||
if result is None:
|
||||
# The cache is empty, need to get the value
|
||||
# from the canonical source:
|
||||
result = do_some_query()
|
||||
|
||||
# Cache the result for next time:
|
||||
client.set('some_key', result)
|
||||
|
||||
# Whether we needed to update the cache or not,
|
||||
# at this point you can work with the data
|
||||
# stored in the `result` variable:
|
||||
print(result)
|
||||
|
||||
```
|
||||
|
||||
> **Note:** Handling missing keys is mandatory because of normal flush-out operations. It is also obligatory to handle the cold cache scenario, i.e. when memcached has just been started. In that case, the cache will be entirely empty and the cache needs to be fully repopulated, one request at a time.
|
||||
|
||||
This means you should view any cached data as ephemeral. And you should never expect the cache to contain a value you previously wrote to it.
|
||||
|
||||
### Warming Up a Cold Cache
|
||||
|
||||
Some of the cold cache scenarios cannot be prevented, for example a memcached crash. But some can, for example migrating to a new memcached server.
|
||||
|
||||
When it is possible to predict that a cold cache scenario will happen, it is better to avoid it. A cache that needs to be refilled means that all of a sudden, the canonical storage of the cached data will be massively hit by all cache users who lack the cached data (also known as the [thundering herd problem][11].)
|
||||
|
||||
pymemcache provides a class named `FallbackClient` that helps in implementing this scenario as demonstrated here:
|
||||
```
|
||||
from pymemcache.client import base
|
||||
from pymemcache import fallback
|
||||
|
||||
|
||||
def do_some_query():
|
||||
# Replace with actual querying code to a database,
|
||||
# a remote REST API, etc.
|
||||
return 42
|
||||
|
||||
|
||||
# Set `ignore_exc=True` so it is possible to shut down
|
||||
# the old cache before removing its usage from
|
||||
# the program, if ever necessary.
|
||||
old_cache = base.Client(('localhost', 11211), ignore_exc=True)
|
||||
new_cache = base.Client(('localhost', 11212))
|
||||
|
||||
client = fallback.FallbackClient((new_cache, old_cache))
|
||||
|
||||
result = client.get('some_key')
|
||||
|
||||
if result is None:
|
||||
# The cache is empty, need to get the value
|
||||
# from the canonical source:
|
||||
result = do_some_query()
|
||||
|
||||
# Cache the result for next time:
|
||||
client.set('some_key', result)
|
||||
|
||||
print(result)
|
||||
|
||||
```
|
||||
|
||||
The `FallbackClient` queries the caches passed to its constructor, respecting their order. In this case, the new cache server will always be queried first, and in case of a cache miss, the old one will be queried, avoiding a possible round-trip to the primary source of data.
|
||||
|
||||
If any key is set, it will only be set to the new cache. After some time, the old cache can be decommissioned and the `FallbackClient` can be replaced directly with the `new_cache` client.
|
||||
|
||||
### Check And Set
|
||||
|
||||
When communicating with a remote cache, the usual concurrency problem comes back: there might be several clients trying to access the same key at the same time. memcached provides a check-and-set operation, abbreviated CAS, which helps solve this problem.
|
||||
|
||||
The simplest example is an application that wants to count the number of users it has. Each time a visitor connects, a counter is incremented by 1. Using memcached, a simple implementation would be:
|
||||
```
|
||||
def on_visit(client):
|
||||
result = client.get('visitors')
|
||||
if result is None:
|
||||
result = 1
|
||||
else:
|
||||
result += 1
|
||||
client.set('visitors', result)
|
||||
|
||||
```
|
||||
|
||||
However, what happens if two instances of the application try to update this counter at the same time?
|
||||
|
||||
The first call to `client.get('visitors')` will return the same number of visitors for both of them; let’s say it’s 42. Then both will add 1, compute 43, and set the number of visitors to 43. That number is wrong, and the result should be 44, i.e. 42 + 1 + 1.
|
||||
|
||||
To solve this concurrency issue, the CAS operation of memcached is handy. The following snippet implements a correct solution:
|
||||
```
|
||||
def on_visit(client):
|
||||
while True:
|
||||
result, cas = client.gets('visitors')
|
||||
if result is None:
|
||||
result = 1
|
||||
else:
|
||||
result += 1
|
||||
if client.cas('visitors', result, cas):
|
||||
break
|
||||
|
||||
```
|
||||
|
||||
The `gets` method returns the value, just like the `get` method, but it also returns a CAS value.
|
||||
|
||||
The content of this value is not relevant, but it is used for the next `cas` method call. This method is equivalent to the `set` operation, except that it fails if the value has changed since the `gets` operation. In case of success, the loop is broken. Otherwise, the operation is restarted from the beginning.
|
||||
|
||||
In the scenario where two instances of the application try to update the counter at the same time, only one succeeds in moving the counter from 42 to 43. The second instance gets a `False` value returned by the `client.cas` call and has to retry the loop. It will retrieve 43 as the value this time, increment it to 44, and its `cas` call will succeed, thus solving our problem.
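The mechanics can be seen in isolation with a toy in-memory client. The `gets`/`cas` signatures below only mimic pymemcache's; this is not the real library, just a sketch of why the stale token is rejected and the retry succeeds:

```python
class ToyCASClient:
    """In-memory stand-in for a memcached client's gets/cas (illustration only)."""

    def __init__(self):
        self._store = {}  # key -> (value, version)

    def set(self, key, value):
        _, version = self._store.get(key, (None, 0))
        self._store[key] = (value, version + 1)

    def gets(self, key):
        # Returns (value, cas_token), like pymemcache's gets.
        return self._store.get(key, (None, None))

    def cas(self, key, value, token):
        entry = self._store.get(key)
        if entry is None or entry[1] != token:
            return False  # someone else wrote in between
        self._store[key] = (value, entry[1] + 1)
        return True

client = ToyCASClient()
client.set('visitors', 42)

# Two "instances" read the same value and CAS token:
v1, t1 = client.gets('visitors')
v2, t2 = client.gets('visitors')

assert client.cas('visitors', v1 + 1, t1) is True   # first write wins: 43
assert client.cas('visitors', v2 + 1, t2) is False  # stale token rejected

# The loser retries with fresh data, as in the loop above:
v2, t2 = client.gets('visitors')
assert client.cas('visitors', v2 + 1, t2) is True   # 43 -> 44
assert client.gets('visitors')[0] == 44
```

The version counter plays the role of memcached's internal CAS token: any intervening write bumps it, so a `cas` based on outdated information fails instead of silently losing an update.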
|
||||
|
||||
Incrementing a counter is interesting as an example to explain how CAS works because it is simplistic. However, memcached also provides the `incr` and `decr` methods to increment or decrement an integer in a single request, rather than doing multiple `gets`/`cas` calls. In real-world applications, `gets` and `cas` are used for more complex data types or operations.
|
||||
|
||||
Most remote caching servers and data stores provide such a mechanism to prevent concurrency issues. It is critical to be aware of those cases to make proper use of their features.
|
||||
|
||||
### Beyond Caching
|
||||
|
||||
The simple techniques illustrated in this article showed you how easy it is to leverage memcached to speed up the performance of your Python application.
|
||||
|
||||
Just by using the two basic “set” and “get” operations you can often accelerate data retrieval or avoid recomputing results over and over again. With memcached, you can share the cache across a large number of distributed nodes.
|
||||
|
||||
Other, more advanced patterns you saw in this tutorial, like the check-and-set (CAS) operation, allow you to update data stored in the cache concurrently across multiple Python threads or processes while avoiding data corruption.
|
||||
|
||||
If you are interested in learning more about advanced techniques to write faster and more scalable Python applications, check out [Scaling Python][12]. It covers many advanced topics such as network distribution, queuing systems, distributed hashing, and code profiling.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://realpython.com/blog/python/python-memcache-efficient-caching/
|
||||
|
||||
作者:[Julien Danjou][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://realpython.com/team/jdanjou/
|
||||
[1]:https://realpython.com/blog/categories/python/
|
||||
[2]:https://docs.python.org/3/library/functools.html#functools.lru_cache
|
||||
[3]:https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_Recently_Used_(LRU)
|
||||
[4]:https://realpython.com/blog/python/caching-in-django-with-redis/
|
||||
[5]:http://memcached.org
|
||||
[6]:https://github.com/memcached/memcached/wiki/Install
|
||||
[7]:https://brew.sh/
|
||||
[8]:https://commaster.net/content/installing-memcached-windows
|
||||
[9]:https://pypi.python.org/pypi/pymemcache
|
||||
[10]:https://realpython.com/learn/python-first-steps/#11-pythons-power-packagesmodules
|
||||
[11]:https://en.wikipedia.org/wiki/Thundering_herd_problem
|
||||
[12]:https://scaling-python.com
|
@ -0,0 +1,262 @@
|
||||
Shallow vs Deep Copying of Python Objects – Real Python
|
||||
======
|
||||
|
||||
Assignment statements in Python do not create copies of objects; they only bind names to an object. For immutable objects, that usually doesn’t make a difference.
|
||||
|
||||
But for working with mutable objects or collections of mutable objects, you might be looking for a way to create “real copies” or “clones” of these objects.
|
||||
|
||||
Essentially, you’ll sometimes want copies that you can modify without automatically modifying the original at the same time. In this article I’m going to give you the rundown on how to copy or “clone” objects in Python 3 and some of the caveats involved.
|
||||
|
||||
> **Note:** This tutorial was written with Python 3 in mind but there is little difference between Python 2 and 3 when it comes to copying objects. When there are differences I will point them out in the text.
|
||||
|
||||
Let’s start by looking at how to copy Python’s built-in collections. Python’s built-in mutable collections like [lists, dicts, and sets][3] can be copied by calling their factory functions on an existing collection:
|
||||
```
|
||||
new_list = list(original_list)
|
||||
new_dict = dict(original_dict)
|
||||
new_set = set(original_set)
|
||||
|
||||
```
|
||||
|
||||
However, this method won’t work for custom objects and, on top of that, it only creates shallow copies. For compound objects like lists, dicts, and sets, there’s an important difference between shallow and deep copying:
|
||||
|
||||
* A **shallow copy** means constructing a new collection object and then populating it with references to the child objects found in the original. In essence, a shallow copy is only one level deep. The copying process does not recurse and therefore won’t create copies of the child objects themselves.
|
||||
|
||||
* A **deep copy** makes the copying process recursive. It means first constructing a new collection object and then recursively populating it with copies of the child objects found in the original. Copying an object this way walks the whole object tree to create a fully independent clone of the original object and all of its children.
|
||||
|
||||
|
||||
|
||||
|
||||
I know, that was a bit of a mouthful. So let’s look at some examples to drive home this difference between deep and shallow copies.
|
||||
|
||||
**Free Bonus:** [Click here to get access to a chapter from Python Tricks: The Book][4] that shows you Python's best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.
|
||||
|
||||
### Making Shallow Copies
|
||||
|
||||
In the example below, we’ll create a new nested list and then shallowly copy it with the `list()` factory function:
|
||||
```
|
||||
>>> xs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
|
||||
>>> ys = list(xs) # Make a shallow copy
|
||||
|
||||
```
|
||||
|
||||
This means `ys` will now be a new and independent object with the same contents as `xs`. You can verify this by inspecting both objects:
|
||||
```
|
||||
>>> xs
|
||||
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
|
||||
>>> ys
|
||||
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
|
||||
|
||||
```
|
||||
|
||||
To confirm `ys` really is independent from the original, let’s devise a little experiment. You could try and add a new sublist to the original (`xs`) and then check to make sure this modification didn’t affect the copy (`ys`):
|
||||
```
|
||||
>>> xs.append(['new sublist'])
|
||||
>>> xs
|
||||
[[1, 2, 3], [4, 5, 6], [7, 8, 9], ['new sublist']]
|
||||
>>> ys
|
||||
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
|
||||
|
||||
```
|
||||
|
||||
As you can see, this had the expected effect. Modifying the copied list at a “superficial” level was no problem at all.
|
||||
|
||||
However, because we only created a shallow copy of the original list, `ys` still contains references to the original child objects stored in `xs`.
|
||||
|
||||
These children were not copied. They were merely referenced again in the copied list.
|
||||
|
||||
Therefore, when you modify one of the child objects in `xs`, this modification will be reflected in `ys` as well—that’s because both lists share the same child objects. The copy is only a shallow, one level deep copy:
|
||||
```
|
||||
>>> xs[1][0] = 'X'
|
||||
>>> xs
|
||||
[[1, 2, 3], ['X', 5, 6], [7, 8, 9], ['new sublist']]
|
||||
>>> ys
|
||||
[[1, 2, 3], ['X', 5, 6], [7, 8, 9]]
|
||||
|
||||
```
|
||||
|
||||
In the above example we (seemingly) only made a change to `xs`. But it turns out that both sublists at index 1 in `xs` and `ys` were modified. Again, this happened because we had only created a shallow copy of the original list.
|
||||
|
||||
Had we created a deep copy of `xs` in the first step, both objects would’ve been fully independent. This is the practical difference between shallow and deep copies of objects.
|
||||
|
||||
Now you know how to create shallow copies of some of the built-in collection classes, and you know the difference between shallow and deep copying. The questions we still want answers for are:
|
||||
|
||||
* How can you create deep copies of built-in collections?
|
||||
* How can you create copies (shallow and deep) of arbitrary objects, including custom classes?
|
||||
|
||||
|
||||
|
||||
The answer to these questions lies in the `copy` module in the Python standard library. This module provides a simple interface for creating shallow and deep copies of arbitrary Python objects.
|
||||
|
||||
### Making Deep Copies
|
||||
|
||||
Let’s repeat the previous list-copying example, but with one important difference. This time we’re going to create a deep copy using the `deepcopy()` function defined in the `copy` module instead:
|
||||
```
|
||||
>>> import copy
|
||||
>>> xs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
|
||||
>>> zs = copy.deepcopy(xs)
|
||||
|
||||
```
|
||||
|
||||
When you inspect `xs` and its clone `zs` that we created with `copy.deepcopy()`, you’ll see that they both look identical again—just like in the previous example:
|
||||
```
|
||||
>>> xs
|
||||
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
|
||||
>>> zs
|
||||
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
|
||||
|
||||
```
|
||||
|
||||
However, if you make a modification to one of the child objects in the original object (`xs`), you’ll see that this modification won’t affect the deep copy (`zs`).
|
||||
|
||||
Both objects, the original and the copy, are fully independent this time. `xs` was cloned recursively, including all of its child objects:
|
||||
```
|
||||
>>> xs[1][0] = 'X'
|
||||
>>> xs
|
||||
[[1, 2, 3], ['X', 5, 6], [7, 8, 9]]
|
||||
>>> zs
|
||||
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
|
||||
|
||||
```
|
||||
|
||||
You might want to take some time to sit down with the Python interpreter and play through these examples right about now. Wrapping your head around copying objects is easier when you get to experience and play with the examples firsthand.
|
||||
|
||||
By the way, you can also create shallow copies using a function in the `copy` module. The `copy.copy()` function creates shallow copies of objects.
|
||||
|
||||
This is useful if you need to clearly communicate that you’re creating a shallow copy somewhere in your code. Using `copy.copy()` lets you indicate this fact. However, for built-in collections it’s considered more Pythonic to simply use the list, dict, and set factory functions to create shallow copies.
|
||||
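To make that equivalence concrete, here is a short sketch (my own illustration, not from the article) showing that `copy.copy()` on a list behaves just like the `list()` factory function: the copy is a new object, but the children are shared:

```python
import copy

xs = [[1, 2], [3, 4]]
ys = copy.copy(xs)  # for a list, equivalent to list(xs)

# The outer list is a new object...
assert ys is not xs

# ...but the child lists are shared, so mutating one shows up in both.
xs[0][0] = 'X'
assert ys[0][0] == 'X'
```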
|
||||
### Copying Arbitrary Python Objects
|
||||
|
||||
The question we still need to answer is how do we create copies (shallow and deep) of arbitrary objects, including custom classes. Let’s take a look at that now.
|
||||
|
||||
Again the `copy` module comes to our rescue. Its `copy.copy()` and `copy.deepcopy()` functions can be used to duplicate any object.
|
||||
|
||||
Once again, the best way to understand how to use these is with a simple experiment. I’m going to base this on the previous list-copying example. Let’s start by defining a simple 2D point class:
|
||||
```
|
||||
class Point:
|
||||
    def __init__(self, x, y):
|
||||
        self.x = x
|
||||
        self.y = y
|
||||
|
||||
    def __repr__(self):
|
||||
        return f'Point({self.x!r}, {self.y!r})'
|
||||
|
||||
```
|
||||
|
||||
I hope you agree that this was pretty straightforward. I added a `__repr__()` implementation so that we can easily inspect objects created from this class in the Python interpreter.
|
||||
|
||||
> **Note:** The above example uses a [Python 3.6 f-string][5] to construct the string returned by `__repr__`. On Python 2 and versions of Python 3 before 3.6 you’d use a different string formatting expression, for example:
|
||||
```
|
||||
> def __repr__(self):
|
||||
>     return 'Point(%r, %r)' % (self.x, self.y)
|
||||
>
|
||||
```
|
||||
|
||||
Next up, we’ll create a `Point` instance and then (shallowly) copy it, using the `copy` module:
|
||||
```
|
||||
>>> a = Point(23, 42)
|
||||
>>> b = copy.copy(a)
|
||||
|
||||
```
|
||||
|
||||
If we inspect the contents of the original `Point` object and its (shallow) clone, we see what we’d expect:
|
||||
```
|
||||
>>> a
|
||||
Point(23, 42)
|
||||
>>> b
|
||||
Point(23, 42)
|
||||
>>> a is b
|
||||
False
|
||||
|
||||
```
|
||||
|
||||
Here’s something else to keep in mind. Because our point object uses primitive types (ints) for its coordinates, there’s no difference between a shallow and a deep copy in this case. But I’ll expand the example in a second.
|
||||
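A related aside (my own illustration, not from the article): because immutable values can safely be shared, `copy.deepcopy()` doesn’t even bother duplicating fully immutable objects:

```python
import copy

t = (1, 2, 3)

# deepcopy may return the very same object when nothing is mutable.
assert copy.deepcopy(t) is t

# As soon as a mutable child is involved, a real clone is made.
u = ([1, 2], 3)
assert copy.deepcopy(u) is not u
```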
|
||||
Let’s move on to a more complex example. I’m going to define another class to represent 2D rectangles. I’ll do it in a way that allows us to create a more complex object hierarchy—my rectangles will use `Point` objects to represent their coordinates:
|
||||
```
|
||||
class Rectangle:
|
||||
    def __init__(self, topleft, bottomright):
|
||||
        self.topleft = topleft
|
||||
        self.bottomright = bottomright
|
||||
|
||||
    def __repr__(self):
|
||||
        return (f'Rectangle({self.topleft!r}, '
|
||||
                f'{self.bottomright!r})')
|
||||
|
||||
```
|
||||
|
||||
Again, first we’re going to attempt to create a shallow copy of a rectangle instance:
|
||||
```
|
||||
rect = Rectangle(Point(0, 1), Point(5, 6))
|
||||
srect = copy.copy(rect)
|
||||
|
||||
```
|
||||
|
||||
If you inspect the original rectangle and its copy, you’ll see how nicely the `__repr__()` override is working out, and that the shallow copy process worked as expected:
|
||||
```
|
||||
>>> rect
|
||||
Rectangle(Point(0, 1), Point(5, 6))
|
||||
>>> srect
|
||||
Rectangle(Point(0, 1), Point(5, 6))
|
||||
>>> rect is srect
|
||||
False
|
||||
|
||||
```
|
||||
|
||||
Remember how the previous list example illustrated the difference between deep and shallow copies? I’m going to use the same approach here. I’ll modify an object deeper in the object hierarchy, and then you’ll see this change reflected in the (shallow) copy as well:
|
||||
```
|
||||
>>> rect.topleft.x = 999
|
||||
>>> rect
|
||||
Rectangle(Point(999, 1), Point(5, 6))
|
||||
>>> srect
|
||||
Rectangle(Point(999, 1), Point(5, 6))
|
||||
|
||||
```
|
||||
|
||||
I hope this behaved how you expected it to. Next, I’ll create a deep copy of the original rectangle. Then I’ll apply another modification and you’ll see which objects are affected:
|
||||
```
|
||||
>>> drect = copy.deepcopy(srect)
|
||||
>>> drect.topleft.x = 222
|
||||
>>> drect
|
||||
Rectangle(Point(222, 1), Point(5, 6))
|
||||
>>> rect
|
||||
Rectangle(Point(999, 1), Point(5, 6))
|
||||
>>> srect
|
||||
Rectangle(Point(999, 1), Point(5, 6))
|
||||
|
||||
```
|
||||
|
||||
Voila! This time the deep copy (`drect`) is fully independent of the original (`rect`) and the shallow copy (`srect`).
|
||||
|
||||
We’ve covered a lot of ground here, and there are still some finer points to copying objects.
|
||||
|
||||
It pays to go deep (ha!) on this topic, so you may want to study up on the [`copy` module documentation][6]. For example, objects can control how they’re copied by defining the special methods `__copy__()` and `__deepcopy__()` on them.
|
||||
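As a quick sketch of that mechanism (the `Tagged` class and its `tag` attribute are purely illustrative, not from the article or the `copy` docs), a class can define `__copy__()` and `__deepcopy__()` to customize how the `copy` module clones it:

```python
import copy

class Tagged:
    def __init__(self, data, tag='original'):
        self.data = data
        self.tag = tag

    def __copy__(self):
        # Shallow: share self.data, but mark the clone.
        return Tagged(self.data, tag='shallow-copy')

    def __deepcopy__(self, memo):
        # Deep: recursively copy self.data, passing the memo dict along
        # so shared/cyclic references are handled correctly.
        return Tagged(copy.deepcopy(self.data, memo), tag='deep-copy')

t = Tagged([1, 2, 3])
s = copy.copy(t)
d = copy.deepcopy(t)
assert s.data is t.data      # shallow copy shares the list
assert d.data is not t.data  # deep copy cloned it
```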
|
||||
### 3 Things to Remember
|
||||
|
||||
* Making a shallow copy of an object won’t clone child objects. Therefore, the copy is not fully independent of the original.
|
||||
* A deep copy of an object will recursively clone child objects. The clone is fully independent of the original, but creating a deep copy is slower.
|
||||
* You can copy arbitrary objects (including custom classes) with the `copy` module.
|
||||
|
||||
|
||||
|
||||
If you’d like to dig deeper into other intermediate-level Python programming techniques, check out this free bonus:
|
||||
|
||||
**Free Bonus:** [Click here to get access to a chapter from Python Tricks: The Book][4] that shows you Python's best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://realpython.com/blog/python/copying-python-objects/
|
||||
|
||||
作者:[Dan Bader][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://realpython.com/team/dbader/
|
||||
[1]:https://realpython.com/blog/categories/fundamentals/
|
||||
[2]:https://realpython.com/blog/categories/python/
|
||||
[3]:https://realpython.com/learn/python-first-steps/
|
||||
[4]:https://realpython.com/blog/python/copying-python-objects/
|
||||
[5]:https://dbader.org/blog/python-string-formatting
|
||||
[6]:https://docs.python.org/3/library/copy.html
|
@ -0,0 +1,131 @@
|
||||
Error establishing a database connection
|
||||
======
|
||||
![Error establishing a database connection][1]
|
||||
|
||||
“Error establishing a database connection” is a very common error you may see when trying to access your WordPress site. The database stores all the important information for your website, including your posts, comments, site configuration, user accounts, and theme and plugin settings. If a connection to your database cannot be established, your WordPress website will not load, and more than likely will show you the error “Error establishing a database connection”. In this tutorial we will show you how to fix the “Error establishing a database connection” error in WordPress.
|
||||
|
||||
The most common causes of the “Error establishing a database connection” issue are:
|
||||
|
||||
* Your database has been corrupted
|
||||
* Incorrect login credentials in your WordPress configuration file (wp-config.php)
|
||||
* Your MySQL service stopped working, either due to insufficient memory on the server (caused by heavy traffic) or because of other server problems
|
||||
|
||||
![Error establishing a database connection][2]
|
||||
|
||||
### Requirements
|
||||
|
||||
In order to troubleshoot “Error establishing a database connection” issue, a few requirements must be met:
|
||||
|
||||
* SSH access to your server
|
||||
* The database is located on the same server
|
||||
* You need to know your database username, user password, and name of the database
|
||||
|
||||
|
||||
|
||||
Also before you try to fix “Error establishing a database connection” error, it is highly recommended that you make a backup of both your website and database.
|
||||
|
||||
### 1. Corrupted database
|
||||
|
||||
The first step in troubleshooting the “Error establishing a database connection” problem is to check whether the error is present on both the front-end and the back-end of your site. You can access the back-end via <http://www.yourdomain.com/wp-admin> (replace “yourdomain” with your actual domain name).
|
||||
|
||||
If the error remains the same for both your front-end and back-end then you should move to the next step.
|
||||
|
||||
If you are able to access the back-end via <https://www.yourdomain.com/wp-admin> and you see the following message:
|
||||
```
|
||||
“One or more database tables are unavailable. The database may need to be repaired”
|
||||
|
||||
```
|
||||
|
||||
it means that your database has been corrupted and you need to try to repair it.
|
||||
|
||||
To do this, you must first enable the repair option in your wp-config.php file, located inside the WordPress site root directory, by adding the following line:
|
||||
```
|
||||
define('WP_ALLOW_REPAIR', true);
|
||||
|
||||
```
|
||||
|
||||
Now you can navigate to this page: <https://www.yourdomain.com/wp-admin/maint/repair.php> and click the “Repair and Optimize Database” button.
|
||||
|
||||
For security reasons, remember to turn the repair option off afterwards by deleting the line we added to the wp-config.php file.
|
||||
|
||||
If this does not fix the problem, or the database cannot be repaired, you will probably need to restore it from a backup, if you have one available.
|
||||
|
||||
### 2. Check your wp-config.php file
|
||||
|
||||
Another, and probably the most common, reason for a failed database connection is incorrect database information in your WordPress configuration file.
|
||||
|
||||
The configuration file resides in your WordPress site root directory and is called wp-config.php.
|
||||
|
||||
Open the file and locate the following lines:
|
||||
```
|
||||
define('DB_NAME', 'database_name');
|
||||
define('DB_USER', 'database_username');
|
||||
define('DB_PASSWORD', 'database_password');
|
||||
define('DB_HOST', 'localhost');
|
||||
|
||||
```
|
||||
|
||||
Make sure the correct database name, username, and password are set. The database host is typically “localhost”, unless your hosting provider tells you otherwise.
|
||||
|
||||
If you ever change your database username and password you should always update this file as well.
|
||||
|
||||
If everything is set up properly and you are still getting the “Error establishing a database connection” error then the problem is probably on the server side and you should move on to the next step of this tutorial.
|
||||
|
||||
### 3. Check your server
|
||||
|
||||
Depending on the resources available, during high traffic hours, your server might not be able to handle all the load and it may stop your MySQL server.
|
||||
|
||||
You can either contact your hosting provider about this, or check for yourself whether the MySQL server is running properly.
|
||||
|
||||
To check the status of MySQL, log in to your server via [SSH][3] and use the following command:
|
||||
```
|
||||
systemctl status mysql
|
||||
|
||||
```
|
||||
|
||||
Or you can check if it is up in your active processes with:
|
||||
```
|
||||
ps aux | grep mysql
|
||||
|
||||
```
|
||||
|
||||
If your MySQL is not running you can start it with the following commands:
|
||||
```
|
||||
systemctl start mysql
|
||||
|
||||
```
|
||||
|
||||
You may also need to check the memory usage on your server.
|
||||
|
||||
To check how much RAM you have available you can use the following command:
|
||||
```
|
||||
free -m
|
||||
|
||||
```
|
||||
|
||||
If your server is running low on memory you may want to consider upgrading your server.
|
||||
|
||||
### 4. Conclusion
|
||||
|
||||
Most of the time, the “Error establishing a database connection” error can be fixed by following one of the steps above.
|
||||
|
||||
![How to Fix the Error Establishing a Database Connection in WordPress][4]Of course, you don’t have to fix the “Error establishing a database connection” error yourself if you use one of our [WordPress VPS Hosting Services][5], in which case you can simply ask our expert Linux admins to fix it for you. They are available 24×7 and will take care of your request immediately.
|
||||
|
||||
**PS**. If you liked this post on how to fix the “Error establishing a database connection” error in WordPress, please share it with your friends on the social networks using the buttons on the left, or simply leave a reply below. Thanks.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.rosehosting.com/blog/error-establishing-a-database-connection/
|
||||
|
||||
作者:[RoseHosting][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.rosehosting.com
|
||||
[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/02/error-establishing-a-database-connection.jpg
|
||||
[2]:https://www.rosehosting.com/blog/wp-content/uploads/2018/02/Error-establishing-a-database-connection-e1517474875180.png
|
||||
[3]:https://www.rosehosting.com/blog/connect-to-your-linux-vps-via-ssh/
|
||||
[4]:https://www.rosehosting.com/blog/wp-content/uploads/2018/02/How-to-Fix-the-Error-Establishing-a-Database-Connection-in-WordPress.jpg
|
||||
[5]:https://www.rosehosting.com/wordpress-hosting.html
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
How to reload .vimrc file without restarting vim on Linux/Unix
|
||||
======
|
||||
|
||||
|
211
sources/tech/20180202 CompositeAcceleration.md
Normal file
211
sources/tech/20180202 CompositeAcceleration.md
Normal file
@ -0,0 +1,211 @@
|
||||
CompositeAcceleration
|
||||
======
|
||||
### Composite acceleration in the X server
|
||||
|
||||
One of the persistent problems with the modern X desktop is the number of moving parts required to display application content. Consider a simple PresentPixmap call as made by the Vulkan WSI or GL using DRI3:
|
||||
|
||||
1. Application calls PresentPixmap with new contents for its window
|
||||
|
||||
2. X server receives that call and pends any operation until the target frame
|
||||
|
||||
3. At the target frame, the X server copies the new contents into the window pixmap and delivers a Damage event to the compositor
|
||||
|
||||
4. The compositor responds to the damage event by copying the window pixmap contents into the next screen pixmap
|
||||
|
||||
5. The compositor calls PresentPixmap with the new screen contents
|
||||
|
||||
6. The X server receives that call and either posts a Swap call to the kernel or delays any action until the target frame
|
||||
|
||||
|
||||
|
||||
|
||||
This sequence has a number of issues:
|
||||
|
||||
* The operation is serialized between three processes with at least three context switches involved.
|
||||
|
||||
* There is no traceable relation between when the application asked for the frame to be shown and when it is finally presented. Nor do we even have any way to tell the application what time that was.
|
||||
|
||||
* There are at least two copies of the application contents, from DRI3 buffer to window pixmap and from window pixmap to screen pixmap.
|
||||
|
||||
|
||||
|
||||
|
||||
We'd also like to be able to take advantage of the multi-plane capabilities in the display engine (where available) to directly display the application contents.
|
||||
|
||||
### Previous Attempts
|
||||
|
||||
I've tried to come up with solutions to this issue a couple of times in the past.
|
||||
|
||||
#### Composite Redirection
|
||||
|
||||
My first attempt to solve (some of) this problem was through composite redirection. The idea there was to directly pass the Present'd pixmap to the compositor and let it copy the contents directly from there in constructing the new screen pixmap image. With some additional hand waving, the idea was that we could associate that final presentation with all of the associated redirected compositing operations and at least provide applications with accurate information about when their images were presented.
|
||||
|
||||
This fell apart when I tried to figure out how to plumb the necessary events through to the compositor and back. With that, and the realization that we still weren't solving problems inherent with the three-process dance, nor providing any path to using overlays, this solution just didn't seem worth pursuing further.
|
||||
|
||||
#### Automatic Compositing
|
||||
|
||||
More recently, Eric Anholt and I have been discussing how to have the X server do all of the compositing work by natively supporting ARGB window content. By changing compositors to place all screen content in windows, the X server could then generate the screen image by itself and not require any external compositing manager assistance for each frame.
|
||||
|
||||
Given that a primitive form of automatic compositing is already supported, extending that to support ARGB windows and having the X server manage the stack seemed pretty tractable. We would extend the driver interface so that drivers could perform the compositing themselves using a mixture of GPU operations and overlays.
|
||||
|
||||
This runs up against five hard problems though.
|
||||
|
||||
1. Making transitions between Manual and Automatic compositing seamless. We've seen how well the current compositing environment works when flipping compositing on and off to allow full-screen applications to use page flipping. Lots of screen flashing and application repaints.
|
||||
|
||||
2. Dealing with RGB windows with ARGB decorations. Right now, the window frame can be an ARGB window with the client being RGB; painting the client into the frame yields an ARGB result with the A values being 1 everywhere the client window is present.
|
||||
|
||||
3. Mesa currently allocates buffers exactly the size of the target drawable and assumes that the upper left corner of the buffer is the upper left corner of the drawable. If we want to place window manager decorations in the same buffer as the client and not need to copy the client contents, we would need to allocate a buffer large enough for both client and decorations, and then offset the client within that larger buffer.
|
||||
|
||||
4. Synchronizing window configuration and content updates with the screen presentation. One of the major features of a compositing manager is that it can construct complete and consistent frames for display; partial updates to application windows need never be shown to the user, nor does the user ever need to see the window tree partially reconfigured. To make this work with automatic compositing, we'd need to both codify frame markers within the 2D rendering stream and provide some method for collecting window configuration operations together.
|
||||
|
||||
5. Existing compositing managers don't do this today. Compositing managers are currently free to paint whatever they like into the screen image; requiring that they place all screen content into windows would mean they'd have to buy in to the new mechanism completely. That could still work with older X servers, but the additional overhead of more windows containing decoration content would slow performance with those systems, making migration less attractive.
|
||||
|
||||
|
||||
|
||||
|
||||
I can think of plausible ways to solve the first three of these without requiring application changes, but the last two require significant systemic changes to compositing managers. Ick.
|
||||
|
||||
### Semi-Automatic Compositing
|
||||
|
||||
I was up visiting Pierre-Loup at Valve recently and we sat down for a few hours to consider how to help applications regularly present content at known times, and to always know precisely when content was actually presented. That names just one of the above issues, but when you consider the additional work required by pure manual compositing, solving that one issue is likely best achieved by solving all three.
|
||||
|
||||
I presented the Automatic Compositing plan and we discussed the range of issues. Pierre-Loup focused on the last problem -- getting existing Compositing Managers to adopt whatever solution we came up with. Without any easy migration path for them, it seemed like a lot to ask.
|
||||
|
||||
He suggested that we come up with a mechanism which would allow Compositing Managers to ease into the new architecture and slowly improve things for applications. Towards that, we focused on a much simpler problem:
|
||||
|
||||
> How can we get a single application at the top of the window stack to reliably display frames at the desired time, and to know when that doesn't occur.
|
||||
|
||||
Coming up with a solution for this led to a good discussion and a possible path to a broader solution in the future.
|
||||
|
||||
#### Steady-state Behavior
|
||||
|
||||
Let's start by ignoring how we start and stop this new mode and look at how we want applications to work when things are stable:
|
||||
|
||||
1. Windows not moving around
|
||||
2. Other applications idle
|
||||
|
||||
|
||||
|
||||
Let's get a picture I can use to describe this:
|
||||
|
||||
[![][1]][1]
|
||||
|
||||
In this picture, the compositing manager is triple buffered (as is normal for a page flipping application) with three buffers:
|
||||
|
||||
1. Scanout. The image currently on the screen
|
||||
|
||||
2. Queued. The image queued to be displayed next
|
||||
|
||||
3. Render. The image being constructed from various window pixmaps and other elements.
|
||||
|
||||
|
||||
|
||||
|
||||
The contents of the Scanout and Queued buffers are identical with the exception of the orange window.
|
||||
|
||||
The application is double buffered:
|
||||
|
||||
1. Current. What it has displayed for the last frame
|
||||
|
||||
2. Next. What it is constructing for the next frame
|
||||
|
||||
|
||||
|
||||
|
||||
Ok, so in the steady state, here's what we want to happen:
|
||||
|
||||
1. Application calls PresentPixmap with 'Next' for its window
|
||||
|
||||
2. X server receives that call and copies Next to Queued.
|
||||
|
||||
3. X server posts a Page Flip to the kernel with the Queued buffer
|
||||
|
||||
4. Once the flip happens, the X server swaps the names of the Scanout and Queued buffers.
|
||||
|
||||
|
||||
|
||||
|
||||
If the X server supports Overlays, then the sequence can look like:
|
||||
|
||||
1. Application calls PresentPixmap
|
||||
|
||||
2. X server receives that call and posts a Page Flip for the overlay
|
||||
|
||||
3. When the page flip completes, the X server notifies the client that the previous Current buffer is now idle.
|
||||
|
||||
|
||||
|
||||
|
||||
When the Compositing Manager has content to update outside of the orange window, it will:
|
||||
|
||||
1. Compositing Manager calls PresentPixmap
|
||||
|
||||
2. X server receives that call and paints the Current client image into the Render buffer
|
||||
|
||||
3. X server swaps Render and Queued buffers
|
||||
|
||||
4. X server posts Page Flip for the Queued buffer
|
||||
|
||||
5. When the page flip occurs, the server can mark the Scanout buffer as idle and notify the Compositing Manager
|
||||
|
||||
|
||||
|
||||
|
||||
If the Orange window is in an overlay, then the X server can skip step 2.
|
||||
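As a toy model (purely illustrative Python, not X server code; the class and buffer names are my own), the buffer bookkeeping in the compositor update sequence above can be sketched like this:

```python
# Toy model of the compositor's triple buffering described above.
# Buffer contents are stand-in strings; the real objects are pixmaps.
class CompositorBuffers:
    def __init__(self):
        self.scanout = 'frame-0'  # image currently on the screen
        self.queued = 'frame-1'   # image queued for the next flip
        self.render = 'frame-2'   # image being constructed

    def present(self):
        # Compositing Manager presents: Render and Queued swap, and the
        # (new) Queued buffer is posted to the kernel as a Page Flip.
        self.render, self.queued = self.queued, self.render

    def flip_complete(self):
        # After the flip, Queued is on screen; the old Scanout is idle
        # and available for rendering, so the two names swap.
        self.scanout, self.queued = self.queued, self.scanout


bufs = CompositorBuffers()
bufs.present()
assert bufs.queued == 'frame-2'   # render buffer was queued for display
bufs.flip_complete()
assert bufs.scanout == 'frame-2'  # ...and is now being scanned out
```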
|
||||
#### The Auto List
|
||||
|
||||
To give the Compositing Manager control over the presentation of all windows, each call to PresentPixmap by the Compositing Manager will be associated with the list of windows, the "Auto List", for which the X server will be responsible for providing suitable content. Transitioning from manual to automatic compositing can therefore be performed on a window-by-window basis, and each frame provided by the Compositing Manager will separately control how that happens.
|
||||
|
||||
The Steady State behavior above would be represented by having the same set of windows in the Auto List for the Scanout and Queued buffers, and when the Compositing Manager presents the Render buffer, it would also provide the same Auto List for that.
|
||||
|
||||
Importantly, the Auto List need not contain only children of the screen Root window. Any descendant window at all can be included, and the contents of that drawn into the image using appropriate clipping. This allows the Compositing Manager to draw the window manager frame while the client window is drawn by the X server.
|
||||
|
||||
Any window at all can be in the Auto List. Windows with PresentPixmap contents available would be drawn from those. Other windows would be drawn from their window pixmaps.
|
||||
|
||||
#### Transitioning from Manual to Auto
|
||||
|
||||
To transition a window from Manual mode to Auto mode, the Compositing Manager would add it to the Auto List for the Render image, and associate that Auto List with the PresentPixmap request for that image. For the first frame, the X server may not have received a PresentPixmap for the client window, and so the window contents would have to come from the Window Pixmap for the client.
|
||||
|
||||
I'm not sure how we'd get the Compositing Manager to provide another matching image that the X server can use for subsequent client frames; perhaps it would just create one itself?
|
||||
|
||||
#### Transitioning from Auto to Manual
|
||||
|
||||
To transition a window from Auto mode to Manual mode, the Compositing manager would remove it from the Auto List for the Render image and then paint the window contents into the render image itself. To do that, the X server would have to paint any PresentPixmap data from the client into the window pixmap; that would be done when the Compositing Manager called GetWindowPixmap.
|
||||
|
||||
### New Messages Required
|
||||
|
||||
For this to work, we need some way for the Compositing Manager to discover windows that are suitable for Auto compositing. Normally, these will be windows managed by the Window Manager, but it's possible for them to be nested further within the application hierarchy, depending on how the application is constructed.
|
||||
|
||||
I think what we want is to tag Damage events with the source window, and perhaps additional information to help Compositing Managers determine whether it should be automatically presenting those source windows or a parent of them. Perhaps it would be helpful to also know whether the Damage event was actually caused by a PresentPixmap for the whole window?
|
||||
|
||||
To notify the server about the Auto List, a new request will be needed in the Present extension to set the value for a subsequent PresentPixmap request.
|
||||
|
||||
### Actually Drawing Frames
|
||||
|
||||
The DRM module in the Linux kernel doesn't provide any mechanism to remove or replace a Page Flip request. While this may get fixed at some point, we need to deal with how it works today, if only to provide reasonable support for existing kernels.
|
||||
|
||||
I think about the best we can do is to set a timer to fire a suitable time before vblank and have the X server wake up and execute any necessary drawing and Page Flip kernel calls. We can use feedback from the kernel to know how much slack time there was between any drawing and the vblank and adjust the timer as needed.
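The adaptive timer described above can be sketched in a few lines (an illustrative model with hypothetical helper names, not actual X server code): measure the slack left between finishing the drawing and the vblank, then nudge the wake-up lead time so the slack stays near a small safety margin.

```python
# Sketch of the adaptive pre-vblank timer idea (hypothetical helper,
# not X server code). We track how much slack was left between the
# drawing work and the vblank, and adjust the wake-up lead time so
# the slack converges toward a small safety margin.

TARGET_SLACK_MS = 2.0   # desired safety margin before vblank


def next_wakeup(vblank_ms, lead_ms, measured_slack_ms):
    """Return (absolute wake-up time, adjusted lead) for the next frame.

    vblank_ms         -- absolute time of the upcoming vblank
    lead_ms           -- how long before vblank we currently wake up
    measured_slack_ms -- slack observed on the previous frame
    """
    # More slack than needed -> wake up later; cutting it close -> earlier.
    error = measured_slack_ms - TARGET_SLACK_MS
    lead_ms = max(0.5, lead_ms - 0.5 * error)  # damped adjustment
    return vblank_ms - lead_ms, lead_ms


# Previous frame finished 4 ms before vblank, so we can wake up later.
wakeup, lead = next_wakeup(vblank_ms=100.0, lead_ms=5.0, measured_slack_ms=4.0)
```

In a real implementation the measured slack would come from kernel feedback after each Page Flip, as described above.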
|
||||
|
||||
Given that the goal is to provide for reliable display of the client window, it might actually be sufficient to let the client PresentPixmap request drive the display; if the Compositing Manager provides new content for a frame where the client does not, we can schedule that for display using a timer before vblank. When the Compositing Manager provides new content after the client, it would be delayed until the next frame.
|
||||
|
||||
### Changes in Compositing Managers
|
||||
|
||||
As described above, one explicit goal is to ease the burden on Compositing Managers by making them able to opt-in to this new mechanism for a limited set of windows and only for a limited set of frames. Any time they need to take control over the screen presentation, a new frame can be constructed with an empty Auto List.
|
||||
|
||||
### Implementation Plans
|
||||
|
||||
This post is the first step in developing these ideas to the point where a prototype can be built. The next step will be to take feedback and adapt the design to suit. Of course, there's always the possibility that this design will also prove unworkable in practice, but I'm hoping that this third attempt will actually succeed.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://keithp.com/blogs/CompositeAcceleration/
|
||||
|
||||
Author: [keithp][a]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
|
||||
|
||||
[a]:https://keithp.com
|
||||
[1]:https://keithp.com/pictures/ca-steady.svg
|
191
sources/tech/20180202 How do I edit files on the command line.md
Normal file
@ -0,0 +1,191 @@
|
||||
How do I edit files on the command line?
|
||||
======
|
||||
|
||||
In this tutorial, we will show you how to edit files on the command line. This article covers three command line editors, vi (or vim), nano, and emacs.
|
||||
|
||||
#### Editing Files with Vi or Vim Command Line Editor
|
||||
|
||||
To edit files on the command line, you can use an editor such as vi. To open the file, run
|
||||
|
||||
```
|
||||
vi /path/to/file
|
||||
```
|
||||
|
||||
Now you see the contents of the file, if there are any. (Note that the file is created if it does not exist yet.)
|
||||
|
||||
The most important commands in vi are these:
|
||||
|
||||
Press `i` to enter the `Insert` mode. Now you can type in your text.
|
||||
|
||||
To leave the Insert mode press `ESC`.
|
||||
|
||||
To delete the character that is currently under the cursor you must press `x` (and you must not be in Insert mode because if you are you will insert the character `x` instead of deleting the character under the cursor). So if you have just opened the file with vi, you can immediately use `x` to delete characters. If you are in Insert mode you have to leave it first with `ESC`.
|
||||
|
||||
If you have made changes and want to save the file, press `:x` (again you must not be in Insert mode. If you are, press `ESC` to leave it).
|
||||
|
||||
If you haven't made any changes, press `:q` to leave the file (but you must not be in Insert mode).
|
||||
|
||||
If you have made changes, but want to leave the file without saving the changes, press `:q!` (but you must not be in Insert mode).
|
||||
|
||||
Please note that during all these operations you can use your keyboard's arrow keys to navigate the cursor through the text.
|
||||
|
||||
So that was all about the vi editor. Please note that the vim editor also works more or less in the same way, although if you'd like to know vim in depth, head [here][1].
|
||||
|
||||
#### Editing Files with Nano Command Line Editor
|
||||
|
||||
Next up is the Nano editor. You can invoke it simply by running the 'nano' command:
|
||||
|
||||
```
|
||||
nano
|
||||
```
|
||||
|
||||
Here's what the nano UI looks like:
|
||||
|
||||
[![Nano command line editor][2]][3]
|
||||
|
||||
You can also launch the editor directly with a file.
|
||||
|
||||
```
|
||||
nano [filename]
|
||||
```
|
||||
|
||||
For example:
|
||||
|
||||
```
|
||||
nano test.txt
|
||||
```
|
||||
|
||||
[![Open a file in nano][4]][5]
|
||||
|
||||
The UI, as you can see, is broadly divided into four parts. The line at the top shows the editor version, the file being edited, and the editing status. Then comes the actual edit area where you'll see the contents of the file. The highlighted line below the edit area shows important messages, and the last two lines are really helpful for beginners as they show the keyboard shortcuts used to perform basic tasks in nano.
|
||||
|
||||
So here's a quick list of some of the shortcuts that you should know upfront.
|
||||
|
||||
Use arrow keys to navigate the text, the Backspace key to delete text, and **Ctrl+o** to save the changes you make. When you try saving the changes, nano will ask you for confirmation (see the line below the main editor area in screenshot below):
|
||||
|
||||
[![Save file in nano][6]][7]
|
||||
|
||||
Note that at this stage, you also have the option to save in different OS formats. Pressing **Alt+d** enables the DOS format, while **Alt+m** enables the Mac format.
|
||||
|
||||
[![Save file ind DOS format][8]][9]
|
||||
|
||||
Press Enter and your changes will be saved.
|
||||
|
||||
[![File has been saved][10]][11]
|
||||
|
||||
Moving on, to cut and paste lines of text use **Ctrl+k** and **Ctrl+u**. These keyboard shortcuts can also be used to cut and paste individual words, but you'll have to select the words first, which you can do by pressing **Alt+a** (with the cursor under the first character of the word) and then using the arrow keys to select the complete word.
|
||||
|
||||
Now comes search operations. A simple search can be initiated using **Ctrl+w** , while a search and replace operation can be done using **Ctrl+\**.
|
||||
|
||||
[![Search in files with nano][12]][13]
|
||||
|
||||
So those were some of the basic features of nano that should give you a head start if you're new to the editor. For more details, read our comprehensive coverage [here][14].
|
||||
|
||||
#### Editing Files with Emacs Command Line Editor
|
||||
|
||||
Next comes **Emacs**. If it's not already installed, you can install the editor on your system using the following command:
|
||||
|
||||
```
|
||||
sudo apt-get install emacs
|
||||
```
|
||||
|
||||
Like nano, you can directly open a file to edit in emacs in the following way:
|
||||
|
||||
```
|
||||
emacs -nw [filename]
|
||||
```
|
||||
|
||||
**Note**: The **-nw** flag makes sure emacs launches in the terminal itself, instead of in a separate window, which is the default behavior.
|
||||
|
||||
For example:
|
||||
```
|
||||
emacs -nw test.txt
|
||||
|
||||
```
|
||||
|
||||
Here's the editor's UI:
|
||||
|
||||
[![Open file in emacs][15]][16]
|
||||
|
||||
Like nano, the emacs UI is also divided into several parts. The first part is the top menu area, which is similar to the one you'd see in graphical applications. Then comes the main edit area, where the text (of the file you've opened) is displayed.
|
||||
|
||||
Below the edit area sits another highlighted bar that shows things like the name of the file, the editing mode ('Text' in the screenshot above), and the status (** for modified, - for non-modified, and %% for read only). Then comes the final area, where you provide input instructions and see output as well.
|
||||
|
||||
Now coming to basic operations, after making changes, if you want to save them, use **Ctrl+x** followed by **Ctrl+s**. The last section will show you a message saying something along the lines of '**Wrote ...**'. Here's an example:
|
||||
|
||||
[![Save file in emacs][17]][18]
|
||||
|
||||
Now, if you want to discard changes and quit the editor, use **Ctrl+x** followed by **Ctrl+c**. The editor will confirm this through a prompt - see screenshot below:
|
||||
|
||||
[![Discard changes in emacs][19]][20]
|
||||
|
||||
Type 'n' followed by a 'yes' and the editor will quit without saving the changes.
|
||||
|
||||
Please note that Emacs represents 'Ctrl' as 'C' and 'Alt' as 'M'. So, for example, whenever you see something like C-x, it means Ctrl+x.
|
||||
|
||||
As for other basic editing operations, deleting is simple, as it works through the Backspace/Delete keys that most of us are already used to. However, there are shortcuts that make your deleting experience smooth. For example, use **Ctrl+k** to delete a complete line, **Alt+d** to delete a word, and **Alt+k** for a sentence.
|
||||
|
||||
Undoing is achieved through **Ctrl+x** followed by **u**, and to redo, press **Ctrl+g** followed by **Ctrl+_**. Use **Ctrl+s** for forward search and **Ctrl+r** for reverse search.
|
||||
|
||||
[![Search in files with emacs][21]][22]
|
||||
|
||||
Moving on, to launch a replace operation, use the **Alt+Shift+%** keyboard shortcut. You'll be asked for the word you want to replace. Enter it. The editor will then ask you for the replacement. For example, the following screenshot shows emacs asking the user for the replacement for the word 'This'.
|
||||
|
||||
[![Replace text with emacs][23]][24]
|
||||
|
||||
Input the replacement text and press Enter. For each replacement emacs carries out, it will seek your permission first:
|
||||
|
||||
[![Confirm text replacement][25]][26]
|
||||
|
||||
Press 'y' and the word will be replaced.
|
||||
|
||||
[![Press y to confirm][27]][28]
|
||||
|
||||
So that's pretty much all the basic editing operations that you should know to start using emacs. Oh, and yes, those menus at the top - we haven't discussed how to access them. Well, those can be accessed using the F10 key.
|
||||
|
||||
[![Basic editing operations][29]][30]
|
||||
|
||||
To come out of these menus, press the Esc key three times.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/faq/how-to-edit-files-on-the-command-line
|
||||
|
||||
Author: [falko][a]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
|
||||
|
||||
[a]:https://www.howtoforge.com
|
||||
[1]:https://www.howtoforge.com/vim-basics
|
||||
[2]:https://www.howtoforge.com/images/command-tutorial/nano-basic-ui.png
|
||||
[3]:https://www.howtoforge.com/images/command-tutorial/big/nano-basic-ui.png
|
||||
[4]:https://www.howtoforge.com/images/command-tutorial/nano-file-open.png
|
||||
[5]:https://www.howtoforge.com/images/command-tutorial/big/nano-file-open.png
|
||||
[6]:https://www.howtoforge.com/images/command-tutorial/nano-save-changes.png
|
||||
[7]:https://www.howtoforge.com/images/command-tutorial/big/nano-save-changes.png
|
||||
[8]:https://www.howtoforge.com/images/command-tutorial/nano-mac-format.png
|
||||
[9]:https://www.howtoforge.com/images/command-tutorial/big/nano-mac-format.png
|
||||
[10]:https://www.howtoforge.com/images/command-tutorial/nano-changes-saved.png
|
||||
[11]:https://www.howtoforge.com/images/command-tutorial/big/nano-changes-saved.png
|
||||
[12]:https://www.howtoforge.com/images/command-tutorial/nano-search-replace.png
|
||||
[13]:https://www.howtoforge.com/images/command-tutorial/big/nano-search-replace.png
|
||||
[14]:https://www.howtoforge.com/linux-nano-command/
|
||||
[15]:https://www.howtoforge.com/images/command-tutorial/nano-file-open1.png
|
||||
[16]:https://www.howtoforge.com/images/command-tutorial/big/nano-file-open1.png
|
||||
[17]:https://www.howtoforge.com/images/command-tutorial/emacs-save.png
|
||||
[18]:https://www.howtoforge.com/images/command-tutorial/big/emacs-save.png
|
||||
[19]:https://www.howtoforge.com/images/command-tutorial/emacs-quit-without-saving.png
|
||||
[20]:https://www.howtoforge.com/images/command-tutorial/big/emacs-quit-without-saving.png
|
||||
[21]:https://www.howtoforge.com/images/command-tutorial/emacs-search.png
|
||||
[22]:https://www.howtoforge.com/images/command-tutorial/big/emacs-search.png
|
||||
[23]:https://www.howtoforge.com/images/command-tutorial/emacs-search-replace.png
|
||||
[24]:https://www.howtoforge.com/images/command-tutorial/big/emacs-search-replace.png
|
||||
[25]:https://www.howtoforge.com/images/command-tutorial/emacs-replace-prompt.png
|
||||
[26]:https://www.howtoforge.com/images/command-tutorial/big/emacs-replace-prompt.png
|
||||
[27]:https://www.howtoforge.com/images/command-tutorial/emacs-replaced.png
|
||||
[28]:https://www.howtoforge.com/images/command-tutorial/big/emacs-replaced.png
|
||||
[29]:https://www.howtoforge.com/images/command-tutorial/emacs-accessing-menus.png
|
||||
[30]:https://www.howtoforge.com/images/command-tutorial/big/emacs-accessing-menus.png
|
@ -0,0 +1,73 @@
|
||||
Tips for success when getting started with Ansible
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-big-data.png?itok=L34b2exg)
|
||||
|
||||
Ansible is an open source automation tool used to configure servers, install software, and perform a wide variety of IT tasks from one central location. It is a one-to-many agentless mechanism where all instructions are run from a control machine that communicates with remote clients over SSH, although other protocols are also supported.
|
||||
|
||||
While targeted for system administrators with privileged access who routinely perform tasks such as installing and configuring applications, Ansible can also be used by non-privileged users. For example, a database administrator using the `mysql` login ID could use Ansible to create databases, add users, and define access-level controls.
|
||||
|
||||
Let's go over a very simple example where a system administrator provisions 100 servers each day and must run a series of Bash commands on each one before handing it off to users.
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/mapping-bash-commands-to-ansible.png)
|
||||
|
||||
This is a simple example, but it should illustrate how easily commands can be specified in yaml files and executed on remote servers. In a heterogeneous environment, conditional statements can be added so that certain commands are only executed on certain servers (e.g., "only execute `yum` commands on systems that are not Ubuntu or Debian").
|
||||
|
||||
One important feature in Ansible is that a playbook describes a desired state in a computer system, so a playbook can be run multiple times against a server without impacting its state. If a certain task has already been implemented (e.g., "user `sysman` already exists"), then Ansible simply ignores it and moves on.
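The desired-state idea can be illustrated with a tiny sketch (plain Python, not Ansible internals): applying the same "state" a second time changes nothing, because the task is ignored once the state is already achieved.

```python
# Minimal illustration of desired-state (idempotent) semantics — a
# stand-in for the concept, not Ansible code. A task describes what
# should exist; applying it is a no-op when the system already matches.

def ensure_user(system_users, name):
    """Make sure `name` exists; report whether anything was changed."""
    if name in system_users:
        return False          # already in the desired state -> ignored
    system_users.add(name)
    return True               # state changed

users = {"root"}
first = ensure_user(users, "sysman")   # creates the user
second = ensure_user(users, "sysman")  # user already exists -> no change
```

Running the same "play" twice leaves the system untouched the second time, which is exactly why an Ansible playbook can be run repeatedly against a server without impacting its state.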
|
||||
|
||||
### Definitions
|
||||
|
||||
* **Tasks:** A task is the smallest unit of work. It can be an action like "Install a database," "Install a web server," "Create a firewall rule," or "Copy this configuration file to that server."
* **Plays:** A play is made up of tasks. For example, the play "Prepare a database to be used by a web server" is made up of tasks: 1) Install the database package; 2) Set a password for the database administrator; 3) Create a database; and 4) Set access to the database.
* **Playbook:** A playbook is made up of plays. A playbook could be "Prepare my website with a database backend," and the plays would be 1) Set up the database server; and 2) Set up the web server.
* **Roles:** Roles are used to save and organize playbooks and allow sharing and reuse of playbooks. Following the previous examples, if you need to fully configure a web server, you can use a role that others have written and shared to do just that. Since roles are highly configurable (if written correctly), they can be easily reused to suit any given deployment requirements.
* **Ansible Galaxy:** Ansible [Galaxy][1] is an online repository where roles are uploaded so they can be shared with others. It is integrated with GitHub, so roles can be organized into Git repositories and then shared via Ansible Galaxy.
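One way to picture how these pieces nest is as simple containers (an illustrative sketch of the hierarchy above, not Ansible's actual data model):

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative containers mirroring the definitions above:
# a Playbook holds Plays, and a Play holds Tasks.

@dataclass
class Task:
    name: str                               # smallest unit of work

@dataclass
class Play:
    name: str
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Playbook:
    name: str
    plays: List[Play] = field(default_factory=list)

db_play = Play("Set up the database server",
               [Task("Install the database package"),
                Task("Set a password for the database administrator")])
web_play = Play("Set up the web server",
                [Task("Install a web server")])
site = Playbook("Prepare my website with a database backend",
                [db_play, web_play])
```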
|
||||
|
||||
|
||||
|
||||
These definitions and their relationships are depicted here:
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/ansible-definitions.png)
|
||||
|
||||
Please note this is just one way to organize the tasks that need to be executed. We could have split up the installation of the database and the web server into separate playbooks and into different roles. Most roles in Ansible Galaxy install and configure individual applications. You can see examples for installing [mysql][2] and installing [httpd][3].
|
||||
|
||||
### Tips for writing playbooks
|
||||
|
||||
The best source for learning Ansible is the official [documentation][4] site. And, as usual, online search is your friend. I recommend starting with simple tasks, like installing applications or creating users. Once you are ready, follow these guidelines:
|
||||
|
||||
* When testing, use a small subset of servers so that your plays execute faster. If they are successful on one server, they will be successful on others.
|
||||
* Always do a dry run to make sure all commands are working (run with the `--check` flag).
|
||||
* Test as often as you need to without fear of breaking things. Tasks describe a desired state, so if a desired state is already achieved, it will simply be ignored.
|
||||
* Be sure all host names defined in `/etc/ansible/hosts` are resolvable.
|
||||
* Because communication with remote hosts is done using SSH, keys have to be accepted by the control machine, so either 1) exchange keys with remote hosts prior to starting; or 2) be ready to type in "yes" to accept the host key prompt for each remote host you want to manage.
|
||||
* Although you can combine tasks for different Linux distributions in one playbook, it's cleaner to write a separate playbook for each distro.
|
||||
|
||||
|
||||
|
||||
### In the final analysis
|
||||
|
||||
Ansible is a great choice for implementing automation in your data center:
|
||||
|
||||
* It's agentless, so it is simpler to install than other automation tools.
|
||||
* Instructions are in YAML (though JSON is also supported) so it's easier than writing shell scripts.
|
||||
* It's open source software, so contribute back to it and make it even better!
|
||||
|
||||
|
||||
|
||||
How have you used Ansible to automate your data center? Share your experience in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/tips-success-when-getting-started-ansible
|
||||
|
||||
Author: [Jose Delarosa][a]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
|
||||
|
||||
[a]:https://opensource.com/users/jdelaros1
|
||||
[1]:https://galaxy.ansible.com/
|
||||
[2]:https://galaxy.ansible.com/bennojoy/mysql/
|
||||
[3]:https://galaxy.ansible.com/xcezx/httpd/
|
||||
[4]:http://docs.ansible.com/
|
@ -0,0 +1,259 @@
|
||||
API Star: Python 3 API Framework – Polyglot.Ninja()
|
||||
======
|
||||
For building quick APIs in Python, I have mostly depended on [Flask][1]. Recently I came across a new API framework for Python 3 named “API Star” which seemed really interesting to me for several reasons. Firstly the framework embraces modern Python features like type hints and asyncio. And then it goes ahead and uses these features to provide an awesome development experience for us, the developers. We will get into those features soon, but before we begin, I would like to thank Tom Christie for all the work he has put into Django REST Framework and now API Star.
|
||||
|
||||
Now back to API Star – I feel very productive in the framework. I can choose to write async code based on asyncio or I can choose a traditional backend like WSGI. It comes with a command line tool – `apistar` – to help us get things done faster. There’s (optional) support for both the Django ORM and SQLAlchemy. There’s a brilliant type system that enables us to define constraints on our input and output; from these, API Star can auto-generate API schemas (and docs), provide validation and serialization features, and a lot more. Although API Star is heavily focused on building APIs, you can also build web applications on top of it fairly easily. All of this might not make proper sense until we build something ourselves.
|
||||
|
||||
### Getting Started
|
||||
|
||||
We will start by installing API Star. It would be a good idea to create a virtual environment for this exercise. If you don’t know how to create a virtualenv, don’t worry and go ahead.
|
||||
```
|
||||
pip install apistar
|
||||
|
||||
```
|
||||
|
||||
If you’re not using a virtual environment or the `pip` command for your Python 3 is called `pip3`, then please use `pip3 install apistar` instead.
|
||||
|
||||
Once we have the package installed, we should have access to the `apistar` command line tool. We can create a new project with it. Let’s create a new project in our current directory.
|
||||
```
|
||||
apistar new .
|
||||
|
||||
```
|
||||
|
||||
Now we should have two files created – `app.py` – which contains the main application and then `test.py` for our tests. Let’s examine our `app.py` file:
|
||||
```
|
||||
from apistar import Include, Route
|
||||
from apistar.frameworks.wsgi import WSGIApp as App
|
||||
from apistar.handlers import docs_urls, static_urls
|
||||
|
||||
|
||||
def welcome(name=None):
|
||||
if name is None:
|
||||
return {'message': 'Welcome to API Star!'}
|
||||
return {'message': 'Welcome to API Star, %s!' % name}
|
||||
|
||||
|
||||
routes = [
|
||||
Route('/', 'GET', welcome),
|
||||
Include('/docs', docs_urls),
|
||||
Include('/static', static_urls)
|
||||
]
|
||||
|
||||
app = App(routes=routes)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
app.main()
|
||||
|
||||
```
|
||||
|
||||
Before we dive into the code, let’s run the app and see if it works. If we navigate to `http://127.0.0.1:8080/` we will get the following response:
|
||||
```
|
||||
{"message": "Welcome to API Star!"}
|
||||
|
||||
```
|
||||
|
||||
And if we navigate to: `http://127.0.0.1:8080/?name=masnun`
|
||||
```
|
||||
{"message": "Welcome to API Star, masnun!"}
|
||||
|
||||
```
|
||||
|
||||
Similarly if we navigate to: `http://127.0.0.1:8080/docs/`, we will see auto generated docs for our API.
|
||||
|
||||
Now let’s look at the code. We have a `welcome` function that takes a parameter named `name` which has a default value of `None`. API Star is a smart API framework: it will try to find the `name` key in the url path or query string and pass it to our function. It also generates the API docs based on it. Pretty nice, no?
|
||||
|
||||
We then create a list of `Route` and `Include` instances and pass the list to the `App` instance. `Route` objects are used to define custom user routing. `Include` , as the name suggests, includes/embeds other routes under the path provided to it.
|
||||
|
||||
### Routing
|
||||
|
||||
Routing is simple. When constructing the `App` instance, we need to pass a list as the `routes` argument. This list should consist of `Route` or `Include` objects, as we just saw above. For `Route`s, we pass a url path, an http method name, and the request handler callable (function or otherwise). For `Include` instances, we pass a url path and a list of `Route` instances.
|
||||
|
||||
##### Path Parameters
|
||||
|
||||
We can put a name inside curly braces to declare a url path parameter. For example `/user/{user_id}` defines a path where the `user_id` is a path parameter or a variable which will be injected into the handler function (actually callable). Here’s a quick example:
|
||||
```
|
||||
from apistar import Route
|
||||
from apistar.frameworks.wsgi import WSGIApp as App
|
||||
|
||||
|
||||
def user_profile(user_id: int):
|
||||
return {'message': 'Your profile id is: {}'.format(user_id)}
|
||||
|
||||
|
||||
routes = [
|
||||
Route('/user/{user_id}', 'GET', user_profile),
|
||||
]
|
||||
|
||||
app = App(routes=routes)
|
||||
|
||||
if __name__ == '__main__':
|
||||
app.main()
|
||||
|
||||
```
|
||||
|
||||
If we visit `http://127.0.0.1:8080/user/23` we will get a response like this:
|
||||
```
|
||||
{"message": "Your profile id is: 23"}
|
||||
|
||||
```
|
||||
|
||||
But if we try to visit `http://127.0.0.1:8080/user/some_string` – it will not match, because in the `user_profile` function we defined, we added an `int` type hint for the `user_id` parameter. If the value is not an integer, the path doesn’t match. But if we go ahead and delete the type hint and just use `user_profile(user_id)`, it will match this url. Again, API Star is being smart and taking advantage of typing.
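Under the hood, this kind of hint-aware routing can be sketched with a regex compiled from the path template and the handler's type hints (a simplified illustration of the idea, not API Star's actual implementation):

```python
import re
import typing

# Simplified sketch of hint-aware path matching (not API Star's code):
# an int-hinted path parameter only matches digits, while an unhinted
# one matches any path segment.

def compile_path(template, handler):
    hints = typing.get_type_hints(handler)

    def repl(match):
        name = match.group(1)
        pattern = r"\d+" if hints.get(name) is int else r"[^/]+"
        return f"(?P<{name}>{pattern})"

    return re.compile("^" + re.sub(r"\{(\w+)\}", repl, template) + "$")

def user_profile(user_id: int):
    return {"message": "Your profile id is: {}".format(user_id)}

route = compile_path("/user/{user_id}", user_profile)
match_ok = route.match("/user/23")            # digits satisfy the int hint
match_bad = route.match("/user/some_string")  # no match
```

Dropping the `int` hint would make the parameter match any segment, which mirrors the behavior described above.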
|
||||
|
||||
#### Including / Grouping Routes
|
||||
|
||||
Sometimes it might make sense to group certain urls together. Say we have a `user` module that deals with user related functionality. It might be better to group all the user related endpoints under the `/user` path. For example – `/user/new`, `/user/1`, `/user/1/update` and what not. We can easily create our handlers and routes even in a separate module or package and then include them in our own routes.
|
||||
|
||||
Let’s create a new module named `user`; the file name would be `user.py`. Let’s put this code in the file:
|
||||
```
|
||||
from apistar import Route
|
||||
|
||||
|
||||
def user_new():
|
||||
return {"message": "Create a new user"}
|
||||
|
||||
|
||||
def user_update(user_id: int):
|
||||
return {"message": "Update user #{}".format(user_id)}
|
||||
|
||||
|
||||
def user_profile(user_id: int):
|
||||
return {"message": "User Profile for: {}".format(user_id)}
|
||||
|
||||
|
||||
user_routes = [
|
||||
Route("/new", "GET", user_new),
|
||||
Route("/{user_id}/update", "GET", user_update),
|
||||
Route("/{user_id}/profile", "GET", user_profile),
|
||||
]
|
||||
|
||||
```
|
||||
|
||||
Now we can import our `user_routes` from within our main app file and use it like this:
|
||||
```
|
||||
from apistar import Include
|
||||
from apistar.frameworks.wsgi import WSGIApp as App
|
||||
|
||||
from user import user_routes
|
||||
|
||||
routes = [
|
||||
Include("/user", user_routes)
|
||||
]
|
||||
|
||||
app = App(routes=routes)
|
||||
|
||||
if __name__ == '__main__':
|
||||
app.main()
|
||||
|
||||
```
|
||||
|
||||
Now `/user/new` will delegate to `user_new` function.
|
||||
|
||||
### Accessing Query String / Query Parameters
|
||||
|
||||
Any parameter passed in the query string can be injected directly into the handler function. Say for the url `/call?phone=1234`, the handler function can define a `phone` parameter and it will receive the value from the query string. If the url query string doesn’t include a value for `phone`, it will get `None` instead. We can also set a default value for the parameter like this:
|
||||
```
|
||||
def welcome(name=None):
|
||||
if name is None:
|
||||
return {'message': 'Welcome to API Star!'}
|
||||
return {'message': 'Welcome to API Star, %s!' % name}
|
||||
|
||||
```
|
||||
|
||||
In the above example, we set a default value for `name`, which is `None` anyway.
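The injection itself can be sketched with `inspect.signature` (an illustration of the general idea, not API Star's internals): inspect the handler's signature and pass along only the query parameters it asks for, letting declared defaults cover the rest.

```python
import inspect

# Sketch of query-parameter injection (not API Star's actual code):
# look at the handler's signature and pass along only the query
# parameters it declares, falling back to the default values.

def call_with_query(handler, query):
    sig = inspect.signature(handler)
    kwargs = {name: query[name] for name in sig.parameters if name in query}
    return handler(**kwargs)

def welcome(name=None):
    if name is None:
        return {'message': 'Welcome to API Star!'}
    return {'message': 'Welcome to API Star, %s!' % name}

r1 = call_with_query(welcome, {})                  # default kicks in
r2 = call_with_query(welcome, {'name': 'masnun'})  # injected from the query
```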
|
||||
|
||||
### Injecting Objects
|
||||
|
||||
By type hinting a request handler, we can have different objects injected into our views. Injecting request-related objects can be helpful for accessing them directly from inside the handler. There are several built-in objects in the `http` package from API Star itself. We can also use its type system to create our own custom objects and have them injected into our functions. API Star also does data validation based on the constraints specified.
|
||||
|
||||
Let’s define our own `User` type and have it injected in our request handler:
|
||||
```
|
||||
from apistar import Include, Route
|
||||
from apistar.frameworks.wsgi import WSGIApp as App
|
||||
from apistar import typesystem
|
||||
|
||||
|
||||
class User(typesystem.Object):
|
||||
properties = {
|
||||
'name': typesystem.string(max_length=100),
|
||||
'email': typesystem.string(max_length=100),
|
||||
'age': typesystem.integer(maximum=100, minimum=18)
|
||||
}
|
||||
|
||||
required = ["name", "age", "email"]
|
||||
|
||||
|
||||
def new_user(user: User):
|
||||
return user
|
||||
|
||||
|
||||
routes = [
|
||||
Route('/', 'POST', new_user),
|
||||
]
|
||||
|
||||
app = App(routes=routes)
|
||||
|
||||
if __name__ == '__main__':
|
||||
app.main()
|
||||
|
||||
```
|
||||
|
||||
Now if we send this request:
|
||||
|
||||
```
|
||||
curl -X POST \
|
||||
http://127.0.0.1:8080/ \
|
||||
-H 'Cache-Control: no-cache' \
|
||||
-H 'Content-Type: application/json' \
|
||||
-d '{"name": "masnun", "email": "masnun@gmail.com", "age": 12}'
|
||||
```
|
||||
|
||||
Guess what happens? We get an error saying age must be equal to or greater than 18. The type system is giving us intelligent data validation as well. If we enable the `docs` url, we will also get these parameters automatically documented there.
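The kind of check the type system performs can be sketched in plain Python (a simplified stand-in for the `typesystem.Object` constraints above, not API Star's implementation):

```python
# Simplified stand-in for the constraint checking shown above
# (not API Star's typesystem implementation).

USER_SCHEMA = {
    "name":  {"type": str, "max_length": 100},
    "email": {"type": str, "max_length": 100},
    "age":   {"type": int, "minimum": 18, "maximum": 100},
}
REQUIRED = ["name", "age", "email"]

def validate_user(data):
    """Return a dict of field -> error message; empty means valid."""
    errors = {}
    for key in REQUIRED:
        if key not in data:
            errors[key] = "This field is required."
    for key, rules in USER_SCHEMA.items():
        if key not in data:
            continue
        value = data[key]
        if not isinstance(value, rules["type"]):
            errors[key] = "Wrong type."
        elif "max_length" in rules and len(value) > rules["max_length"]:
            errors[key] = "Too long."
        elif "minimum" in rules and value < rules["minimum"]:
            errors[key] = "Must be greater than or equal to %d." % rules["minimum"]
        elif "maximum" in rules and value > rules["maximum"]:
            errors[key] = "Must be less than or equal to %d." % rules["maximum"]
    return errors

# The same payload as the curl request above: age 12 violates minimum=18.
errors = validate_user({"name": "masnun", "email": "masnun@gmail.com", "age": 12})
```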
|
||||
|
||||
### Sending a Response
|
||||
|
||||
If you have noticed so far, we can just pass a dictionary and it will be JSON encoded and returned by default. However, we can set the status code and any additional headers by using the `Response` class from `apistar`. Here’s a quick example:
|
||||
```
|
||||
from apistar import Route, Response
|
||||
from apistar.frameworks.wsgi import WSGIApp as App
|
||||
|
||||
|
||||
def hello():
|
||||
return Response(
|
||||
content="Hello".encode("utf-8"),
|
||||
status=200,
|
||||
headers={"X-API-Framework": "API Star"},
|
||||
content_type="text/plain"
|
||||
)
|
||||
|
||||
|
||||
routes = [
|
||||
Route('/', 'GET', hello),
|
||||
]
|
||||
|
||||
app = App(routes=routes)
|
||||
|
||||
if __name__ == '__main__':
|
||||
app.main()
|
||||
|
||||
```
|
||||
|
||||
It should send a plain text response along with a custom header. Please note that the `content` should be bytes, not string. That’s why I encoded it.
|
||||
|
||||
### Moving On
|
||||
|
||||
I just walked through some of the features of API Star. There’s a lot more cool stuff in API Star. I recommend going through the [Github Readme][2] to learn more about the different features offered by this excellent framework. I shall also try to cover short, focused tutorials on API Star in the coming days.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://polyglot.ninja/api-star-python-3-api-framework/
|
||||
|
||||
作者:[MASNUN][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://polyglot.ninja/author/masnun/
|
||||
[1]:http://polyglot.ninja/rest-api-best-practices-python-flask-tutorial/
|
||||
[2]:https://github.com/encode/apistar
|
@ -0,0 +1,122 @@
|
||||
How to print filename with awk on Linux / Unix
|
||||
======
|
||||
|
||||
I would like to print filename with awk on Linux / Unix-like system. How do I print filename in BEGIN section of awk? Can I print the name of the current input file using gawk/awk?
|
||||
|
||||
The name of the current input file is set in the FILENAME variable. You can use FILENAME to display or print the current input file name. If no files are specified on the command line, the value of FILENAME is “-” (stdin). However, FILENAME is undefined inside the BEGIN rule unless set by getline.
|
||||
|
||||
### How to print filename with awk
|
||||
|
||||
The syntax is:
|
||||
```
|
||||
awk '{ print FILENAME }' fileNameHere
|
||||
awk '{ print FILENAME }' /etc/hosts
|
||||
```
|
||||
You might see the file name multiple times, as awk reads the file line-by-line. To avoid this problem, update your awk/gawk syntax as follows:
|
||||
```
|
||||
awk 'FNR == 1{ print FILENAME } ' /etc/passwd
|
||||
awk 'FNR == 1{ print FILENAME } ' /etc/hosts
|
||||
```
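The `FNR == 1` pattern also works across several files, printing each name exactly once. A quick self-contained demo (using hypothetical temp files):

```shell
# Create two throwaway sample files (hypothetical paths)
printf 'a\nb\n' > /tmp/demo1.txt
printf 'c\n' > /tmp/demo2.txt
# FNR resets to 1 at the start of each input file,
# so FILENAME prints once per file
awk 'FNR == 1 { print FILENAME }' /tmp/demo1.txt /tmp/demo2.txt
```

Expected output is the two file names, one per line.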
|
||||
![](https://www.cyberciti.biz/media/new/faq/2018/02/How-to-print-filename-using-awk-on-Linux-or-Unix.jpg)
|
||||
|
||||
### How to print filename in BEGIN section of awk
|
||||
|
||||
Use the following syntax:
|
||||
```
|
||||
awk 'BEGIN{print ARGV[1]}' fileNameHere
|
||||
awk 'BEGIN{print ARGV[1]}{ print "something or do something on data" }END{}' fileNameHere
|
||||
awk 'BEGIN{print ARGV[1]}' /etc/hosts
|
||||
```
|
||||
Sample outputs:
|
||||
```
|
||||
/etc/hosts
|
||||
|
||||
```
|
||||
|
||||
However, ARGV[1] might not always work. For example:
|
||||
`ls -l /etc/hosts | awk 'BEGIN{print ARGV[1]} { print }'`
|
||||
So you need to modify it as follows (assuming that `ls -l` produces only a single line of output):
|
||||
`ls -l /etc/hosts | awk '{ print "File: " $9 ", Owner:" $3 ", Group: " $4 }'`
|
||||
Sample outputs:
|
||||
```
|
||||
File: /etc/hosts, Owner:root, Group: root
|
||||
|
||||
```
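To see why ARGV[1] fails for piped input: when awk reads stdin there are no file operands, so ARGV[1] is simply empty. A minimal check:

```shell
# No file operands: ARGC is 1 and ARGV[1] is unset (empty)
echo hi | awk 'BEGIN { print "ARGV[1]=[" ARGV[1] "]" }'
```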
|
||||
|
||||
### How to deal with multiple filenames specified by a wild card
|
||||
|
||||
Use the following simple syntax:
|
||||
```
|
||||
awk '{ print FILENAME; nextfile } ' *.c
|
||||
awk 'BEGIN{ print "Starting..."} { print FILENAME; nextfile }END{ print "....DONE"} ' *.conf
|
||||
```
|
||||
Sample outputs:
|
||||
```
|
||||
Starting...
|
||||
blkid.conf
|
||||
cryptconfig.conf
|
||||
dhclient6.conf
|
||||
dhclient.conf
|
||||
dracut.conf
|
||||
gai.conf
|
||||
gnome_defaults.conf
|
||||
host.conf
|
||||
idmapd.conf
|
||||
idnalias.conf
|
||||
idn.conf
|
||||
insserv.conf
|
||||
iscsid.conf
|
||||
krb5.conf
|
||||
ld.so.conf
|
||||
logrotate.conf
|
||||
mke2fs.conf
|
||||
mtools.conf
|
||||
netscsid.conf
|
||||
nfsmount.conf
|
||||
nscd.conf
|
||||
nsswitch.conf
|
||||
openct.conf
|
||||
opensc.conf
|
||||
request-key.conf
|
||||
resolv.conf
|
||||
rsyncd.conf
|
||||
sensors3.conf
|
||||
slp.conf
|
||||
smartd.conf
|
||||
sysctl.conf
|
||||
vconsole.conf
|
||||
warnquota.conf
|
||||
wodim.conf
|
||||
xattr.conf
|
||||
xinetd.conf
|
||||
yp.conf
|
||||
....DONE
|
||||
|
||||
```
|
||||
|
||||
`nextfile` tells awk to stop processing the current input file. The next input record read comes from the next input file. For more information, see the awk/[gawk][1] man pages:
|
||||
```
|
||||
man awk
|
||||
man gawk
|
||||
```
|
||||
|
||||
### about the author
|
||||
|
||||
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][2], [Facebook][3], [Google+][4]. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via [my RSS/XML feed][5]**.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/faq/how-to-print-filename-with-awk-on-linux-unix/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.gnu.org/software/gawk/manual/
|
||||
[2]:https://twitter.com/nixcraft
|
||||
[3]:https://facebook.com/nixcraft
|
||||
[4]:https://plus.google.com/+CybercitiBiz
|
||||
[5]:https://www.cyberciti.biz/atom/atom.xml
|
@ -0,0 +1,131 @@
|
||||
Python Hello World and String Manipulation
|
||||
======
|
||||
|
||||
![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/eadkmsrBTcWSyCeA4qti)
|
||||
|
||||
Before starting, I should mention that the [code][1] used in this blog post and in the [video][2] below is available on my github.
|
||||
|
||||
With that, let’s get started! If you get lost, I recommend opening the [video][3] below in a separate tab.
|
||||
|
||||
[Hello World and String Manipulation Video using Python][2]
|
||||
|
||||
#### Get Started (Prerequisites)
|
||||
|
||||
Install Anaconda (Python) on your operating system. You can either download anaconda from the [official site][4] and install on your own or you can follow these anaconda installation tutorials below.
|
||||
|
||||
Install Anaconda on Windows: [Link][5]
|
||||
|
||||
Install Anaconda on Mac: [Link][6]
|
||||
|
||||
Install Anaconda on Ubuntu (Linux): [Link][7]
|
||||
|
||||
#### Open a Jupyter Notebook
|
||||
|
||||
Open your terminal (Mac) or command line and type the following ([see 1:16 in the video to follow along][8]) to open a Jupyter Notebook:
|
||||
```
|
||||
jupyter notebook
|
||||
|
||||
```
|
||||
|
||||
#### Print Statements/Hello World
|
||||
|
||||
Type the following into a cell in Jupyter and type **shift + enter** to execute code.
|
||||
```
|
||||
# This is a one line comment
|
||||
print('Hello World!')
|
||||
|
||||
```
|
||||
|
||||
![][9]
|
||||
Output of printing ‘Hello World!’
|
||||
|
||||
#### Strings and String Manipulation
|
||||
|
||||
Strings are a special type of Python class. As objects, you can call methods on string objects using the .methodName() notation. The string class is available by default in Python, so you do not need an import statement to use the object interface to strings.
|
||||
```
|
||||
# Create a variable
|
||||
# Variables are used to store information to be referenced
|
||||
# and manipulated in a computer program.
|
||||
firstVariable = 'Hello World'
|
||||
print(firstVariable)
|
||||
|
||||
```
|
||||
|
||||
![][9]
|
||||
Output of printing the variable firstVariable
|
||||
```
|
||||
# Explore various string methods
|
||||
print(firstVariable.lower())
|
||||
print(firstVariable.upper())
|
||||
print(firstVariable.title())
|
||||
|
||||
```
|
||||
|
||||
![][9]
|
||||
Output of using the .lower(), .upper(), and .title() methods
|
||||
```
|
||||
# Use the split method to convert your string into a list
|
||||
print(firstVariable.split(' '))
|
||||
|
||||
```
|
||||
|
||||
![][9]
|
||||
Output of using the split method (in this case, split on space)
|
||||
```
|
||||
# You can add strings together.
|
||||
a = "Fizz" + "Buzz"
|
||||
print(a)
|
||||
|
||||
```
|
||||
|
||||
![][9]
|
||||
string concatenation
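Beyond concatenation, strings support repetition with `*` and membership tests with `in` (a small supplementary sketch):

```python
a = "Fizz" + "Buzz"
print(a * 2)        # repetition: FizzBuzzFizzBuzz
print("Fizz" in a)  # membership test: True
```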
|
||||
|
||||
#### Look up what Methods Do
|
||||
|
||||
New programmers often ask how to find out what each method does. Python provides two ways to do this.
|
||||
|
||||
1. (works in and out of Jupyter Notebook) Use **help** to lookup what each method does.
|
||||
|
||||
|
||||
|
||||
![][9]
|
||||
Look up what each method does
|
||||
|
||||
2. (Jupyter Notebook exclusive) You can also look up what a method does by having a question mark after a method.
|
||||
|
||||
|
||||
```
|
||||
# To look up what each method does in jupyter (doesn't work outside of jupyter)
|
||||
firstVariable.lower?
|
||||
|
||||
```
|
||||
|
||||
![][9]
|
||||
Look up what each method does in Jupyter
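For instance, the help approach from option 1 can be run in any Python environment (a minimal sketch reusing the firstVariable string from earlier):

```python
firstVariable = 'Hello World'
# help() prints the documentation string of the method
help(firstVariable.lower)
```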
|
||||
|
||||
#### Closing Remarks
|
||||
|
||||
Please let me know if you have any questions either here or in the comments section of the [youtube video][2]. The code in the post is also available on my [github][1]. Part 2 of the tutorial series is [Simple Math][10].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.codementor.io/mgalarny/python-hello-world-and-string-manipulation-gdgwd8ymp
|
||||
|
||||
作者:[Michael][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.codementor.io/mgalarny
|
||||
[1]:https://github.com/mGalarnyk/Python_Tutorials/blob/master/Python_Basics/Intro/Python3Basics_Part1.ipynb
|
||||
[2]:https://www.youtube.com/watch?v=JqGjkNzzU4s
|
||||
[3]:https://www.youtube.com/watch?v=kApPBm1YsqU
|
||||
[4]:https://www.continuum.io/downloads
|
||||
[5]:https://medium.com/@GalarnykMichael/install-python-on-windows-anaconda-c63c7c3d1444
|
||||
[6]:https://medium.com/@GalarnykMichael/install-python-on-mac-anaconda-ccd9f2014072
|
||||
[7]:https://medium.com/@GalarnykMichael/install-python-on-ubuntu-anaconda-65623042cb5a
|
||||
[8]:https://youtu.be/JqGjkNzzU4s?t=1m16s
|
||||
[9]:data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==
|
||||
[10]:https://medium.com/@GalarnykMichael/python-basics-2-simple-math-4ac7cc928738
|
144
sources/test/20180204_1517810580494.md
Normal file
@ -0,0 +1,144 @@
|
||||
translating by cncuckoo
|
||||
|
||||
Linux/Unix App For Prevention Of RSI (Repetitive Strain Injury)
|
||||
======
|
||||
![workrave-image][1]
|
||||
|
||||
[A repetitive strain injury][2] (RSI) is an occupational overuse syndrome: non-specific arm pain or work-related upper limb disorder. RSI is caused by overusing the hands to perform a repetitive task, such as typing, writing, or clicking a mouse. Unfortunately, most people do not understand what RSI is or how dangerous it can be. You can easily help prevent RSI using open source software called Workrave.
|
||||
|
||||
|
||||
### What are the symptoms of RSI?
|
||||
|
||||
I'm quoting from this [page][3]. Do you experience:
|
||||
|
||||
1. Fatigue or lack of endurance?
|
||||
2. Weakness in the hands or forearms?
|
||||
3. Tingling, numbness, or loss of sensation?
|
||||
4. Heaviness: Do your hands feel like dead weight?
|
||||
5. Clumsiness: Do you keep dropping things?
|
||||
6. Lack of strength in your hands? Is it harder to open jars? Cut vegetables?
|
||||
7. Lack of control or coordination?
|
||||
8. Chronically cold hands?
|
||||
9. Heightened awareness? Just being slightly more aware of a body part can be a clue that something is wrong.
|
||||
10. Hypersensitivity?
|
||||
11. Frequent self-massage (subconsciously)?
|
||||
12. Sympathy pains? Do your hands hurt when someone else talks about their hand pain?
|
||||
|
||||
|
||||
|
||||
### How to reduce your risk of Developing RSI
|
||||
|
||||
  * Take a break every 30 minutes or so when using your computer. Use software such as Workrave to prevent RSI.
|
||||
  * Regular exercise can prevent all sorts of injuries, including RSI.
|
||||
* Use good posture. Adjust your computer desk and chair to support muscles necessary for good posture.
|
||||
|
||||
|
||||
|
||||
### Workrave
|
||||
|
||||
Workrave is a free open source software application intended to prevent computer users from developing RSI or myopia. The software periodically locks the screen while an animated character, "Miss Workrave," walks the user through various stretching exercises and urges them to take a coffee break. The program frequently alerts you to take micro-pauses and rest breaks, and restricts you to your daily limit. The program works under MS-Windows, Linux, and UNIX-like operating systems.
|
||||
|
||||
#### Install workrave
|
||||
|
||||
Type the following [apt command][4]/[apt-get command][5] on Debian / Ubuntu Linux:
|
||||
`$ sudo apt-get install workrave`
|
||||
Fedora Linux users should type the following dnf command:
|
||||
`$ sudo dnf install workrave`
|
||||
RHEL/CentOS Linux users should enable the EPEL repo and install it using the [yum command][6]:
|
||||
```
|
||||
### [ **tested on a CentOS/RHEL 7.x and clones** ] ###
|
||||
$ sudo yum install epel-release
|
||||
$ sudo yum install https://rpms.remirepo.net/enterprise/remi-release-7.rpm
|
||||
$ sudo yum install workrave
|
||||
```
|
||||
Arch Linux users can type the following pacman command to install it:
|
||||
`$ sudo pacman -S workrave`
|
||||
FreeBSD users can install it using the following pkg command:
|
||||
`# pkg install workrave`
|
||||
OpenBSD users can install it using the following pkg_add command:
|
||||
```
|
||||
$ doas pkg_add workrave
|
||||
```
|
||||
|
||||
#### How to configure workrave
|
||||
|
||||
Workrave works as an applet, which is a small application whose user interface resides within a panel. You need to add workrave to the panel to control the behavior and appearance of the software.
|
||||
|
||||
##### Adding a New Workrave Object To Panel
|
||||
|
||||
* Right-click on a vacant space on a panel to open the panel popup menu.
|
||||
* Choose Add to Panel.
|
||||
  * The Add to Panel dialog opens. The available panel objects are listed alphabetically, with launchers at the top. Select the workrave applet and click on the Add button.
|
||||
|
||||
![Fig.01: Adding an Object \(Workrave\) to a Panel][7]
|
||||
Fig.01: Adding an Object (Workrave) to a Panel
|
||||
|
||||
##### How Do I Modify Properties Of Workrave Software?
|
||||
|
||||
To modify the properties of the workrave object, perform the following steps:
|
||||
|
||||
* Right-click on the workrave object to open the panel object popup.
|
||||
  * Choose Preferences. Use the Preferences dialog to modify the properties as required.
|
||||
|
||||
![](https://www.cyberciti.biz/media/new/tips/2009/11/linux-gnome-workwave-preferences-.png)
|
||||
Fig.02: Modifying the Properties of The Workrave Software
|
||||
|
||||
#### Workrave in Action
|
||||
|
||||
The main window shows the time remaining until it suggests a pause. The window can be closed, and you will see the time remaining on the panel itself:
|
||||
![Fig.03: Time remaining counter ][8]
|
||||
Fig.03: Time remaining counter
|
||||
|
||||
![Fig.04: Miss Workrave - an animated character walks you through various stretching exercises][9]
|
||||
Fig.04: Miss Workrave - an animated character walks you through various stretching exercises
|
||||
|
||||
The break prelude window, bugging you to take a micro-pause:
|
||||
![Fig.05: Time for a micro-pause reminder ][10]
|
||||
Fig.05: Time for a micro-pause reminder
|
||||
|
||||
![Fig.06: You can skip Micro-break ][11]
|
||||
Fig.06: You can skip Micro-break
|
||||
|
||||
##### References:
|
||||
|
||||
1. [Workrave project][12] home page.
|
||||
2. [pokoy][13] lightweight daemon that helps prevent RSI and other computer related stress.
|
||||
3. [A Pomodoro][14] timer for GNOME 3.
|
||||
4. [RSI][2] from the wikipedia.
|
||||
|
||||
|
||||
|
||||
### about the author
|
||||
|
||||
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][15], [Facebook][16], [Google+][17].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/tips/repetitive-strain-injury-prevention-software.html
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/media/new/tips/2009/11/workrave-image.jpg (workrave-image)
|
||||
[2]:https://en.wikipedia.org/wiki/Repetitive_strain_injury
|
||||
[3]:https://web.eecs.umich.edu/~cscott/rsi.html##symptoms
|
||||
[4]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
|
||||
[5]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
|
||||
[6]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
|
||||
[7]:https://www.cyberciti.biz/media/new/tips/2009/11/add-workwave-to-panel.png (Adding an Object (Workrave) to a Gnome Panel)
|
||||
[8]:https://www.cyberciti.biz/media/new/tips/2009/11/screenshot-workrave.png (Workrave main window shows the time remaining until it suggests a pause.)
|
||||
[9]:https://www.cyberciti.biz/media/new/tips/2009/11/miss-workrave.png (Miss Workrave Sofrware character walks you through various RSI stretching exercises )
|
||||
[10]:https://www.cyberciti.biz/media/new/tips/2009/11/time-for-micro-pause.gif (Workrave RSI Software Time for a micro-pause remainder )
|
||||
[11]:https://www.cyberciti.biz/media/new/tips/2009/11/Micro-break.png (Workrave RSI Software Micro-break )
|
||||
[12]:http://www.workrave.org/
|
||||
[13]:https://github.com/ttygde/pokoy
|
||||
[14]:http://gnomepomodoro.org
|
||||
[15]:https://twitter.com/nixcraft
|
||||
[16]:https://facebook.com/nixcraft
|
||||
[17]:https://plus.google.com/+CybercitiBiz
|
||||
|
||||
1517810580494
|
@ -0,0 +1,183 @@
|
||||
Linux 系统查询机器最近重新启动的日期和时间的命令
|
||||
======
|
||||
|
||||
在你的 Linux 或类 UNIX 系统中,你是如何查询系统重新启动的日期和时间的?你又是如何查询系统关机的日期和时间的?last 命令不仅可以按照时间从近到远的顺序列出指定用户、终端和主机名的登录记录,而且还可以列出指定日期和时间登录的用户。输出到终端的每一行都包括用户名、会话终端、主机名、会话开始和结束的时间,以及会话持续的时间。使用下面的命令来查看 Linux 或类 UNIX 系统重启和关机的时间和日期。
|
||||
|
||||
- last 命令
|
||||
- who 命令
|
||||
|
||||
|
||||
### 使用 who 命令来查看系统重新启动的时间/日期
|
||||
|
||||
你需要在终端使用 [who][1] 命令来查看有哪些人登录了系统。who 命令同时也会显示上次系统启动的时间。要查看系统上次启动的日期和时间,运行:
|
||||
|
||||
`$ who -b`
|
||||
|
||||
示例输出:
|
||||
|
||||
`system boot 2017-06-20 17:41`
|
||||
|
||||
使用 last 命令来查询最近登录到系统的用户以及系统重启的时间和日期。输入:
|
||||
|
||||
`$ last reboot | less`
|
||||
|
||||
示例输出:
|
||||
|
||||
[![Fig.01: last command in action][2]][2]
|
||||
|
||||
或者,尝试输入:
|
||||
|
||||
`$ last reboot | head -1`
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
reboot system boot 4.9.0-3-amd64 Sat Jul 15 19:19 still running
|
||||
```
|
||||
|
||||
last 命令通过查看文件 /var/log/wtmp 来显示自 wtmp 文件被创建以来所有登录(和注销)的用户。每当系统重新启动时,伪用户 reboot 会把重启信息记录到日志。因此,`last reboot` 命令将会显示自日志文件被创建以来的所有重启信息。
|
||||
|
||||
### 查看系统上次关机的时间和日期
|
||||
|
||||
可以使用下面的命令来显示上次关机的日期和时间:
|
||||
|
||||
`$ last -x|grep shutdown | head -1`
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
shutdown system down 2.6.15.4 Sun Apr 30 13:31 - 15:08 (01:37)
|
||||
```
|
||||
|
||||
命令中,
|
||||
|
||||
* **-x**:显示系统开关机和运行等级改变信息
|
||||
|
||||
|
||||
这里是 last 命令的其它的一些选项:
|
||||
|
||||
```
|
||||
$ last
|
||||
$ last -x
|
||||
$ last -x reboot
|
||||
$ last -x shutdown
|
||||
```
|
||||
示例输出:
|
||||
|
||||
![Fig.01: How to view last Linux System Reboot Date/Time ][3]
|
||||
|
||||
### 查看系统正常的运行时间
|
||||
|
||||
评论区的读者建议的另一个命令如下:
|
||||
|
||||
`$ uptime -s`
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
2017-06-20 17:41:51
|
||||
```
|
||||
|
||||
### OS X/Unix/FreeBSD 查看最近重启和关机时间的命令示例
|
||||
|
||||
在终端输入下面的命令:
|
||||
|
||||
`$ last reboot`
|
||||
|
||||
在 OS X 示例输出结果如下:
|
||||
|
||||
```
|
||||
reboot ~ Fri Dec 18 23:58
|
||||
reboot ~ Mon Dec 14 09:54
|
||||
reboot ~ Wed Dec 9 23:21
|
||||
reboot ~ Tue Nov 17 21:52
|
||||
reboot ~ Tue Nov 17 06:01
|
||||
reboot ~ Wed Nov 11 12:14
|
||||
reboot ~ Sat Oct 31 13:40
|
||||
reboot ~ Wed Oct 28 15:56
|
||||
reboot ~ Wed Oct 28 11:35
|
||||
reboot ~ Tue Oct 27 00:00
|
||||
reboot ~ Sun Oct 18 17:28
|
||||
reboot ~ Sun Oct 18 17:11
|
||||
reboot ~ Mon Oct 5 09:35
|
||||
reboot ~ Sat Oct 3 18:57
|
||||
|
||||
|
||||
wtmp begins Sat Oct 3 18:57
|
||||
```
|
||||
|
||||
查看关机日期和时间,输入:
|
||||
|
||||
`$ last shutdown`
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
shutdown ~ Fri Dec 18 23:57
|
||||
shutdown ~ Mon Dec 14 09:53
|
||||
shutdown ~ Wed Dec 9 23:20
|
||||
shutdown ~ Tue Nov 17 14:24
|
||||
shutdown ~ Mon Nov 16 21:15
|
||||
shutdown ~ Tue Nov 10 13:15
|
||||
shutdown ~ Sat Oct 31 13:40
|
||||
shutdown ~ Wed Oct 28 03:10
|
||||
shutdown ~ Sun Oct 18 17:27
|
||||
shutdown ~ Mon Oct 5 09:23
|
||||
|
||||
|
||||
wtmp begins Sat Oct 3 18:57
|
||||
```
|
||||
|
||||
### 如何查看是谁重启和关闭机器?
|
||||
|
||||
你需要[启动 psacct 服务然后运行下面的命令][4]来查看执行过的命令以及对应的用户名。在终端输入 [lastcomm][5] 命令查看信息:
|
||||
|
||||
```
|
||||
# lastcomm userNameHere
|
||||
# lastcomm commandNameHere
|
||||
# lastcomm | more
|
||||
# lastcomm reboot
|
||||
# lastcomm shutdown
|
||||
### OR see both reboot and shutdown time
|
||||
# lastcomm | egrep 'reboot|shutdown'
|
||||
```
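上面用 egrep 同时匹配 reboot 和 shutdown 的写法,可以用一段人造输入快速演示(仅作示意,记录内容是虚构的):

```shell
# 构造三行示例记录,egrep 只保留匹配 reboot 或 shutdown 的行
printf 'reboot   root pts/0\nlogin    vivek pts/1\nshutdown root pts/1\n' \
  | egrep 'reboot|shutdown'
```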
|
||||
示例输出:
|
||||
|
||||
```
|
||||
reboot S X root pts/0 0.00 secs Sun Dec 27 23:49
|
||||
shutdown S root pts/1 0.00 secs Sun Dec 27 23:45
|
||||
```
|
||||
|
||||
我们可以看到 root 用户在当地时间 12 月 27 日星期日 23:49 在 pts/0 重新启动了机器。
|
||||
|
||||
### 参见
|
||||
|
||||
  * 更多信息可以查看 man 手册(man last)和参考文章 [如何在 Linux 服务器上使用 tuptime 命令查看历史和统计的正常运行时间][6]。
|
||||
|
||||
|
||||
### 关于作者
|
||||
|
||||
作者是 nixCraft 的创立者,同时也是一名经验丰富的系统管理员,也是 Linux、类 Unix 操作系统 shell 脚本的培训师。他曾与全球各行各业的客户合作过,包括 IT、教育、国防和空间研究以及非营利部门等。你可以在 [Twitter][7]、[Facebook][8]、[Google+][9] 上关注他。
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/tips/linux-last-reboot-time-and-date-find-out.html
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[amwps290](https://github.com/amwps290)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/faq/unix-linux-who-command-examples-syntax-usage/ "See Linux/Unix who command examples for more info"
|
||||
[2]:https://www.cyberciti.biz/tips/wp-content/uploads/2006/04/last-reboot.jpg
|
||||
[3]:https://www.cyberciti.biz/media/new/tips/2006/04/check-last-time-system-was-rebooted.jpg
|
||||
[4]:https://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounting.html
|
||||
[5]:https://www.cyberciti.biz/faq/linux-unix-lastcomm-command-examples-usage-syntax/ "See Linux/Unix lastcomm command examples for more info"
|
||||
[6]:https://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-on-linux-server/
|
||||
[7]:https://twitter.com/nixcraft
|
||||
[8]:https://facebook.com/nixcraft
|
||||
[9]:https://plus.google.com/+CybercitiBiz
|
@ -0,0 +1,136 @@
|
||||
Linux 检测 IDE / SATA SSD 硬盘的传输速度
|
||||
======
|
||||
你知道你的硬盘在 Linux 下的传输速度有多快吗?打开电脑的机箱或者机柜,你了解你的硬盘类型吗?不同类型的硬盘传输速度也不同:SATA I(150 MB/s)、SATA II(300 MB/s)、SATA III(6.0 Gb/s)。
|
||||
|
||||
你能够使用 **hdparm 和 dd 命令**来检测你的硬盘速度。hdparm 为 Linux 系统常见的 ATA/IDE/SATA 设备驱动子系统所支持的各种硬盘 ioctl 提供了命令行接口。有些选项只有在最新的内核上才能正常工作(请确保安装了最新的内核)。我也推荐使用与最新内核源代码一同编译的 hdparm 命令。
|
||||
|
||||
### 如何使用 hdparm 命令来检测硬盘的传输速度
|
||||
|
||||
以 root 管理员权限登录并执行命令:
|
||||
`$ sudo hdparm -tT /dev/sda`
|
||||
或者
|
||||
`$ sudo hdparm -tT /dev/hda`
|
||||
输出:
|
||||
```
|
||||
/dev/sda:
|
||||
Timing cached reads: 7864 MB in 2.00 seconds = 3935.41 MB/sec
|
||||
Timing buffered disk reads: 204 MB in 3.00 seconds = 67.98 MB/sec
|
||||
```
|
||||
|
||||
为了检测更精准,这个操作应该 **重复 2-3 次**。这显示了直接从 Linux 缓冲区缓存中读取的速度,而无需磁盘访问。这个测量实际上是被测系统的处理器、高速缓存和存储器吞吐量的指标。这里是一个 [for 循环的例子][1],连续运行测试 3 次:
|
||||
`for i in 1 2 3; do hdparm -tT /dev/hda; done`
|
||||
其中:
|
||||
|
||||
* **-t** : 执行设备读取时序
|
||||
* **-T** : 执行缓存读取时间
|
||||
* **/dev/sda** : 硬盘设备文件
|
||||
|
||||
|
||||
|
||||
要[找出 SATA 硬盘的链接速度][2],请输入:
|
||||
`sudo hdparm -I /dev/sda | grep -i speed`
|
||||
输出:
|
||||
```
|
||||
* Gen1 signaling speed (1.5Gb/s)
|
||||
* Gen2 signaling speed (3.0Gb/s)
|
||||
* Gen3 signaling speed (6.0Gb/s)
|
||||
|
||||
```
|
||||
|
||||
以上输出表明我的硬盘可以使用 1.5 Gb/s、3.0 Gb/s 或 6.0 Gb/s 的速度。请注意,您的 BIOS/主板必须支持 SATA-II/III:
|
||||
`$ dmesg | grep -i sata | grep 'link up'`
|
||||
[![Linux Check IDE SATA SSD Hard Disk Transfer Speed][3]][3]
|
||||
|
||||
### dd 命令
|
||||
|
||||
你使用 dd 命令也可以获取到相应的速度信息:
|
||||
```
|
||||
dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
|
||||
rm /tmp/output.img
|
||||
```
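在跑完整的 2 GB 测试之前,可以先用小得多的数据量验证命令的用法(仅作示意,写入的是临时文件,完成后删除):

```shell
# 写入 1 MB(8k x 128)到临时文件;dd 会在 stderr 上报告耗时和速度
dd if=/dev/zero of=/tmp/ddtest.img bs=8k count=128
rm -f /tmp/ddtest.img
```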
|
||||
|
||||
输出:
|
||||
```
|
||||
262144+0 records in
|
||||
262144+0 records out
|
||||
2147483648 bytes (2.1 GB) copied, 23.6472 seconds, **90.8 MB/s**
|
||||
|
||||
```
|
||||
|
||||
下面是 [dd 命令推荐的输入参数][4]:
|
||||
```
|
||||
dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync
|
||||
|
||||
## GNU dd syntax ##
|
||||
dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
|
||||
|
||||
## OR alternate syntax for GNU/dd ##
|
||||
dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
|
||||
```
|
||||
|
||||
|
||||
这是上面第三个命令的输出结果:
|
||||
```
|
||||
1+0 records in
|
||||
1+0 records out
|
||||
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.23889 s, 253 MB/s
|
||||
```
|
||||
|
||||
### 磁盘存储 - GUI 工具
|
||||
|
||||
您还可以使用位于“系统 > 管理 > 磁盘实用程序”菜单中的磁盘实用程序。请注意,在最新版本的 GNOME 中,它简称为“磁盘”。
|
||||
|
||||
#### 如何使用 Linux 上的“磁盘”测试我的硬盘的性能?
|
||||
|
||||
要测试硬盘的速度:
|
||||
|
||||
1. 从 **活动概览** 中打开 **磁盘**(按键盘上的超级键并键入磁盘)
|
||||
2. 从 **左侧窗格** 的列表中选择 **磁盘**
|
||||
3. 选择菜单按钮并从菜单中选择 **Benchmark disk** ...
|
||||
4. 单击 **开始 benchmark...** 并根据需要调整传输速率和访问时间参数。
|
||||
5. 选择 **Start Benchmarking** 来测试从磁盘读取数据的速度。需要管理权限请输入密码。
|
||||
|
||||
|
||||
以上操作的快速视频演示:
|
||||
|
||||
https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/disks-performance.mp4
|
||||
|
||||
|
||||
#### 只读 Benchmark (安全模式下)
|
||||
|
||||
然后,选择 > 只读:
|
||||
|
||||
![Fig.01: Linux Benchmarking Hard Disk Read Only Test Speed][5]
|
||||
|
||||
上述选项不会销毁任何数据。
|
||||
|
||||
#### 读写的 Benchmark(所有数据将丢失,所以要小心)
|
||||
|
||||
访问“系统 > 管理 > 磁盘实用程序”菜单,单击 benchmark,然后单击“开始读/写 benchmark”按钮:
|
||||
|
||||
![Fig.02:Linux Measuring read rate, write rate and access time][6]
|
||||
|
||||
### 作者
|
||||
|
||||
作者是 nixCraft 的创造者,是经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本的培训师。他曾与全球客户以及 IT、教育、国防和空间研究以及非营利部门等多个行业合作。你可以在 [Twitter][7]、[Facebook][8] 和 [Google+][9] 上关注他。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/tips/how-fast-is-linux-sata-hard-disk.html
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[MonkeyDEcho](https://github.com/MonkeyDEcho)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/faq/bash-for-loop/
|
||||
[2]:https://www.cyberciti.biz/faq/linux-command-to-find-sata-harddisk-link-speed/
|
||||
[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/Linux-Check-IDE-SATA-SSD-Hard-Disk-Transfer-Speed.jpg
|
||||
[4]:https://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
|
||||
[5]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Speed-Benchmark.png (Linux Benchmark Hard Disk Speed)
|
||||
[6]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Read-Write-Benchmark.png (Linux Hard Disk Benchmark Read / Write Rate and Access Time)
|
||||
[7]:https://twitter.com/nixcraft
|
||||
[8]:https://facebook.com/nixcraft
|
||||
[9]:https://plus.google.com/+CybercitiBiz
|
@ -0,0 +1,47 @@
|
||||
在你下一次技术面试的时候要提的 3 个基本问题
|
||||
======
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/os-jobs_0.jpg?itok=nDf5j7xC)
|
||||
|
||||
面试可能会有压力,但 58% 的公司告诉 Dice 和 Linux 基金会,他们需要在未来几个月内聘请开源人才。学习如何提出正确的问题。
|
||||
|
||||
Linux 基金会
|
||||
|
||||
Dice 和 Linux 基金会的年度[开源工作报告][1]揭示了开源专业人士的前景以及未来一年的招聘活动。在今年的报告中,86% 的科技专业人士表示,了解开源推动了他们的职业生涯。然而,当在他们自己的组织内推进或在别处申请新职位的时候,有这些经历会发生什么呢?
|
||||
|
||||
面试新工作绝非易事。除了在准备新职位时还要应付复杂的工作,当面试官问“你对我有什么问题吗?”时适当的回答更增添了压力。
|
||||
|
||||
在 Dice,我们从事职业、建议,并将技术专家与雇主连接起来。但是我们也在公司雇佣技术人才来开发开源项目。实际上,Dice 平台基于许多 Linux 发行版,我们利用开源数据库作为我们搜索功能的基础。总之,如果没有开源软件,我们就无法运行 Dice,因此聘请了解和热爱开源软件的专业人士至关重要。
|
||||
|
||||
多年来,我在面试中了解到提出好问题的重要性。这是一个了解你的潜在新雇主的机会,以及更好地了解他们是否与你的技能相匹配。
|
||||
|
||||
这里有三个需要提出的重要问题,以及它们重要的原因:
|
||||
|
||||
**1\. 公司对员工在空闲时间致力于开源项目或编写代码的立场是什么?**
|
||||
|
||||
这个问题的答案会告诉你很多关于正在面试的公司的信息。一般来说,只要与你在该公司所从事的工作没有冲突,公司都会希望技术专家为网站或项目做出贡献。允许在公司之外做这些事情,也会在技术组织中培养出一种创业精神,并让你学到在正常的日常工作中可能无法获得的技术技能。
|
||||
|
||||
**2\. 项目在这如何分优先级?**
|
||||
|
||||
由于所有公司都在成为科技公司,在面向客户的创新技术项目与改进平台本身之间往往存在分歧。你会努力保持现有的平台最新吗?还是致力于面向公众开发新产品?根据你的兴趣,答案可以决定该公司是否适合你。
|
||||
|
||||
**3\. 新产品主要由谁来决定?开发者在决策过程中有多少话语权?**
|
||||
|
||||
这个问题一方面可以了解谁负责公司的创新(以及你与他/她有多少联系),另一方面可以了解你在公司的职业道路。一个好的公司会在开发新产品之前与开发人员和开源人才交流。这看起来理所当然,但有时会错过这一步,这意味着在新产品发布之前,开发过程可能是协作的,也可能是混乱的。
|
||||
|
||||
面试可能会有压力,但是 58% 的公司告诉 Dice 和 Linux 基金会,他们需要在未来几个月内聘用开源人才,所以请记住,高需求让像你这样的专业人士占据主动。以你想要的方向引导你的事业。
|
||||
|
||||
现在[下载][2]完整的 2017 年开源工作报告。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/os-jobs/2017/12/3-essential-questions-ask-your-next-tech-interview
|
||||
|
||||
作者:[Brian Hostetter][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/brianhostetter
|
||||
[1]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/
|
||||
[2]:http://bit.ly/2017OSSjobsreport
|
@ -1,76 +0,0 @@
|
||||
|
||||
[![Linux vs. Unix](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix-vs-linux_orig.jpg)][1]
|
||||
|
||||
在计算机时代,相当一部分的人错误地认为 **Unix** 和 **Linux** 操作系统是一样的。然而,事实恰好相反。让我们仔细看看。
|
||||
|
||||
### 什么是 Unix?
|
||||
|
||||
[![what is unix](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix_orig.png)][2]
|
||||
|
||||
在 IT 领域,作为操作系统为我们所知的 Unix,是 1969 年 AT&T 公司在美国新泽西所开发的,目前它的商标权由国际开放标准组织所拥有。大多数的操作系统都受 Unix 的启发,而 Unix 也受到了未完成的 Multics 系统的启发。Unix 的另一版本是来自贝尔实验室的 Plan 9。
|
||||
|
||||
### Unix 被用于哪里?
|
||||
|
||||
作为一个操作系统,Unix 大多被用在服务器、工作站且现在也有用在个人计算机上。它在创建互联网、创建计算机网络或客户端/服务器模型方面发挥着非常重要的作用。
|
||||
|
||||
#### Unix 系统的特点
|
||||
|
||||
* 支持多任务(multitasking)
|
||||
|
||||
* 相比 Multics 操作更加简单
|
||||
|
||||
* 所有数据以纯文本形式存储
|
||||
|
||||
* 采用单根文件的树状存储
|
||||
|
||||
* 能够同时访问多用户账户
|
||||
|
||||
#### Unix 操作系统的组成:
|
||||
|
||||
**a)** 单核操作系统,负责低级操作以及由用户发起的操作,与内核的通信通过系统调用进行。
|
||||
|
||||
**b)** 系统工具
|
||||
|
||||
**c)** 其他应用程序
|
||||
|
||||
### 什么是 Linux?
|
||||
|
||||
[![what is linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux_orig.png)][4]
|
||||
|
||||
这是一个基于 Unix 系统原理的开源操作系统。正如开源的含义一样,它是一个可以自由下载的系统。它也可以通过编辑、添加及扩充其源代码而定制该系统。这是它最大的好处之一,而不像今天的其他操作系统(Windows、Mac OS X 等)需要付费。Unix 系统不是创建 Linux 的唯一模板,另外一个重要的因素是 MINIX 系统的启发,不像 Linux,此版本被其缔造者(Andrew Tanenbaum)用于商业系统。1991 年,**Linus Torvalds** 开始把对 Linux 系统的开发当做个人兴趣。Linux 借鉴 Unix 的一个主要原因是其简洁性。Linux 第一个官方版本(0.01)发布于 1991 年 9 月 17 日。虽然这个系统并不是很完美和完整,但 Linus 对它产生很大的兴趣,并在接下来的几天里发出了一些关于 Linux 源代码扩展以及其他想法的电子邮件。
|
||||
|
||||
### Linux 的特点
|
||||
|
||||
Linux 的基石是 Unix 内核,其基于 Unix 的基本特点以及 **POSIX** 和单一 **UNIX 规范标准**。看起来,该操作系统的官方名字取自于 **Linus**,其名称尾部的 “x” 与 **Unix 系统**相联系。
|
||||
|
||||
#### 主要特点:
|
||||
|
||||
  * 一次运行多个任务(多任务)
|
||||
|
||||
* 程序可以包含一个或多个进程(multipurpose system),且每个进程可能有一个或多个线程。
|
||||
|
||||
* 多用户,因此它可以运行多个用户程序。
|
||||
|
||||
  * 个人账户受适当授权的保护。
|
||||
|
||||
* 因此账户准确地定义了系统控制权。
|
||||
|
||||
**企鹅 Tux** Logo 的作者是 Larry Ewing,他选择这个企鹅作为他的开源 **Linux 操作系统**的吉祥物。**Linus Torvalds** 最初提议将这个新的操作系统命名为 “Freax”,即 “自由(free)” + “奇异(freak)” + x(来自 UNIX 系统)的组合词,但后来它在 **FTP 服务器**上发布时采用了 “Linux” 这个名字。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxandubuntu.com/home/linux-vs-unix
|
||||
|
||||
作者:[linuxandubuntu][a]
|
||||
译者:[HardworkFish](https://github.com/HardworkFish)
|
||||
校对:[imquanquan](https://github.com/imquanquan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxandubuntu.com
|
||||
[1]:http://www.linuxandubuntu.com/home/linux-vs-unix
|
||||
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix_orig.png
|
||||
[3]:http://www.unix.org/what_is_unix.html
|
||||
[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux_orig.png
|
||||
[5]:https://www.linux.com
|