From c91f7e0c107e1e7638285bb4687c2f193699c7fe Mon Sep 17 00:00:00 2001 From: qhwdw Date: Sun, 3 Dec 2017 20:44:35 +0800 Subject: [PATCH 001/236] modified by qhwdw --- core.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/core.md b/core.md index da45c009fc..3093f4ae52 100644 --- a/core.md +++ b/core.md @@ -36,4 +36,4 @@ - 除非必要,合并 PR 时不要 squash-merge wxy@LCTT -2016/12/24 \ No newline at end of file +2017/12/24 From 5c74048e32e9b5b426fec5f80b6d23a4f5741a04 Mon Sep 17 00:00:00 2001 From: qhwdw <33189910+qhwdw@users.noreply.github.com> Date: Sun, 3 Dec 2017 20:47:57 +0800 Subject: [PATCH 002/236] =?UTF-8?q?=E6=9B=B4=E6=96=B0=E6=97=A5=E6=9C=9F?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- core.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/core.md b/core.md index 3093f4ae52..2ec8aa89cf 100644 --- a/core.md +++ b/core.md @@ -36,4 +36,4 @@ - 除非必要,合并 PR 时不要 squash-merge wxy@LCTT -2017/12/24 +2016/12/24 From 46d6e7e4a63873835e64e5302d3c02f1618820e5 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Mon, 4 Dec 2017 15:30:49 +0800 Subject: [PATCH 003/236] Translated by qhwdw --- ...1109 Concurrent Servers- Part 4 - libuv.md | 492 ++++++++++++++++++ 1 file changed, 492 insertions(+) create mode 100644 translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md diff --git a/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md b/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md new file mode 100644 index 0000000000..b4db491e4e --- /dev/null +++ b/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md @@ -0,0 +1,492 @@ +[并发服务器:第四部分 - libuv][17] +============================================================ + +这是写并发网络服务器系列文章的第四部分。在这一部分中,我们将使用 libuv 去再次重写我们的服务器,并且也讨论关于使用一个线程池在回调中去处理耗时任务。最终,我们去看一下底层的 libuv,花一点时间去学习如何用异步 API 对文件系统阻塞操作进行封装。 + +这一系列的所有文章包括: + +* [第一部分 - 简介][7] + +* [第二部分 - 线程][8] + +* [第三部分 - 事件驱动][9] + +* [第四部分 - libuv][10] + +### 使用 Linux 抽象出事件驱动循环 + +在 [第三部分][11] 中,我们看到了基于 `select` 和 `epoll` 的相似之处,并且,我说过,在它们之间抽象出细微的差别是件很有魅力的事。Numerous 库已经做到了这些,但是,因为在这一部分中,我将去选一个并使用它。我选的这个库是 [libuv][12],它最初设计用于 Node.js 底层的轻便的平台层,并且,后来发现在其它的项目中已有使用。libuv 是用 C 写的,因此,它具有很高的可移植性,非常适用嵌入到像 JavaScript 和 Python 这样的高级语言中。 + +虽然 libuv 为抽象出底层平台细节已经有了一个非常大的框架,但它仍然是一个以 _事件循环_ 思想为中心的。在我们第三部分的事件驱动服务器中,事件循环在 main 函数中是很明确的;当使用 libuv 时,循环通常隐藏在库自身中,而用户代码仅需要注册事件句柄(作为一个回调函数)和运行这个循环。此外,libuv 将为给定的平台实现更快的事件循环实现。对于 Linux 它是 epoll,等等。 + +![libuv loop](https://eli.thegreenplace.net/images/2017/libuvloop.png) + +libuv 支持多路事件循环,并且,因此一个事件循环在库中是非常重要的;它有一个句柄 - `uv_loop_t`,和创建/杀死/启动/停止循环的函数。也就是说,在这篇文章中,我将仅需要使用 “默认的” 循环,libuv 可通过 `uv_default_loop()` 提供它;多路循环大多用于多线程事件驱动的服务器,这是一个更高级别的话题,我将留在这一系列文章的以后部分。 + +### 使用 libuv 的并发服务器 + +为了对 libuv 有一个更深的印象,让我们跳转到我们的可靠的协议服务器,它通过我们的这个系列已经有了一个强大的重新实现。这个服务器的结构与第三部分中的基于 select 和 epoll 的服务器有一些相似之处。因为,它也依赖回调。完整的 [示例代码在这里][13];我们开始设置这个服务器的套接字绑定到一个本地端口: + +``` +int portnum = 9090; +if (argc >= 2) { + portnum = atoi(argv[1]); +} +printf("Serving on port %d\n", portnum); + +int rc; +uv_tcp_t server_stream; +if ((rc = uv_tcp_init(uv_default_loop(), &server_stream)) < 0) { + die("uv_tcp_init failed: %s", uv_strerror(rc)); +} + +struct sockaddr_in server_address; +if ((rc = uv_ip4_addr("0.0.0.0", portnum, &server_address)) < 0) { + die("uv_ip4_addr failed: %s", uv_strerror(rc)); +} + +if ((rc = uv_tcp_bind(&server_stream, (const struct sockaddr*)&server_address, 0)) < 0) { + die("uv_tcp_bind failed: %s", uv_strerror(rc)); +} +``` + +除了它被封装进 libuv APIs 中之外,你看到的是一个相当标准的套接字。在它的返回中,我们取得一个可工作于任何 libuv 
支持的平台上的可移植接口。

这些代码也演示了严谨的错误处理:大多数 libuv 函数返回一个整数状态码,负数意味着出现了错误。在我们的服务器中,我们将这类错误视为致命错误处理,但也可以设想实现更优雅的错误恢复。

现在套接字已经绑定,是时候监听它了。这里出现了我们的第一个回调注册:

```
// Listen on the socket for new peers to connect. When a new peer connects,
// the on_peer_connected callback will be invoked.
if ((rc = uv_listen((uv_stream_t*)&server_stream, N_BACKLOG, on_peer_connected)) < 0) {
  die("uv_listen failed: %s", uv_strerror(rc));
}
```

`uv_listen` 注册了一个回调,当有新的对端连接到套接字时,事件循环会调用它。这里我们的回调叫做 `on_peer_connected`,我们稍后会查看它。

最后,main 运行 libuv 循环,直到它被停止(`uv_run` 仅在循环已停止或发生错误时才返回)。

```
// Run the libuv event loop.
uv_run(uv_default_loop(), UV_RUN_DEFAULT);

// If uv_run returned, close the default loop before exiting.
return uv_loop_close(uv_default_loop());
```

注意,在运行事件循环之前,main 只注册了一个回调;我们很快就会看到其它回调是如何添加进来的。在事件循环运行期间随时添加和删除回调并不是问题,事实上,大多数服务器正是这样写的。

这是 `on_peer_connected`,它处理连接到服务器的新客户端:

```
void on_peer_connected(uv_stream_t* server_stream, int status) {
  if (status < 0) {
    fprintf(stderr, "Peer connection error: %s\n", uv_strerror(status));
    return;
  }

  // client will represent this peer; it's allocated on the heap and only
  // released when the client disconnects. The client holds a pointer to
  // peer_state_t in its data field; this peer state tracks the protocol state
  // with this client throughout interaction.
  uv_tcp_t* client = (uv_tcp_t*)xmalloc(sizeof(*client));
  int rc;
  if ((rc = uv_tcp_init(uv_default_loop(), client)) < 0) {
    die("uv_tcp_init failed: %s", uv_strerror(rc));
  }
  client->data = NULL;

  if (uv_accept(server_stream, (uv_stream_t*)client) == 0) {
    struct sockaddr_storage peername;
    int namelen = sizeof(peername);
    if ((rc = uv_tcp_getpeername(client, (struct sockaddr*)&peername,
                                 &namelen)) < 0) {
      die("uv_tcp_getpeername failed: %s", uv_strerror(rc));
    }
    report_peer_connected((const struct sockaddr_in*)&peername, namelen);

    // Initialize the peer state for a new client: we start by sending the peer
    // the initial '*' ack.
    peer_state_t* peerstate = (peer_state_t*)xmalloc(sizeof(*peerstate));
    peerstate->state = INITIAL_ACK;
    peerstate->sendbuf[0] = '*';
    peerstate->sendbuf_end = 1;
    peerstate->client = client;
    client->data = peerstate;

    // Enqueue the write request to send the ack; when it's done,
    // on_wrote_init_ack will be called. The peer state is passed to the write
    // request via the data pointer; the write request does not own this peer
    // state - it's owned by the client handle.
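    // Note that uv_buf_init() only wraps the (base, len) pair; uv_write()
    // does not copy the bytes, so sendbuf must remain valid until the write
    // callback runs. That holds here because it lives inside the
    // heap-allocated peer state.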
    uv_buf_t writebuf = uv_buf_init(peerstate->sendbuf, peerstate->sendbuf_end);
    uv_write_t* req = (uv_write_t*)xmalloc(sizeof(*req));
    req->data = peerstate;
    if ((rc = uv_write(req, (uv_stream_t*)client, &writebuf, 1,
                       on_wrote_init_ack)) < 0) {
      die("uv_write failed: %s", uv_strerror(rc));
    }
  } else {
    uv_close((uv_handle_t*)client, on_client_closed);
  }
}
```

这些代码注释得很好,但这里有几个重要的 libuv 惯用法我想强调一下:

* 向回调传递自定义数据:由于 C 语言没有闭包,这可能是个挑战。libuv 的所有句柄类型中都有一个 `void*` 的 data 域,这些域可以用来传递用户数据。例如,注意 `client->data` 是如何指向一个 `peer_state_t` 结构的,这样通过 `uv_write` 和 `uv_read_start` 注册的回调就能知道自己正在和哪个客户端的数据打交道。

* 内存管理:在带有垃圾回收的语言中进行事件驱动编程要容易得多,因为回调运行的栈帧通常与它被注册时的栈帧完全不同,这使得基于栈的内存管理非常困难。几乎总是需要把堆上分配的数据传递给 libuv 的回调(main 除外,当所有回调运行时它仍然在栈上存活);而为了避免泄漏,还需要非常小心地确定这些数据何时可以安全地 free()。这些都是需要一些实践才能掌握的内容 [[1]][6]。

这个服务器的对端状态如下:

```
typedef struct {
  ProcessingState state;
  char sendbuf[SENDBUF_SIZE];
  int sendbuf_end;
  uv_tcp_t* client;
} peer_state_t;
```

它与第三部分中的状态非常类似;我们不再需要 sendptr,因为 `uv_write` 会确保在调用“写完成”回调之前,发送完它拿到的整个缓冲区。我们还保留了一个指向客户端的指针,供其它回调使用。这里是 `on_wrote_init_ack`:

```
void on_wrote_init_ack(uv_write_t* req, int status) {
  if (status) {
    die("Write error: %s\n", uv_strerror(status));
  }
  peer_state_t* peerstate = (peer_state_t*)req->data;
  // Flip the peer state to WAIT_FOR_MSG, and start listening for incoming data
  // from this peer.
  peerstate->state = WAIT_FOR_MSG;
  peerstate->sendbuf_end = 0;

  int rc;
  if ((rc = uv_read_start((uv_stream_t*)peerstate->client, on_alloc_buffer,
                          on_peer_read)) < 0) {
    die("uv_read_start failed: %s", uv_strerror(rc));
  }

  // Note: the write request doesn't own the peer state, hence we only free the
  // request itself, not the state.
  free(req);
}
```

现在我们可以确定初始的 '*' 已经发送给了对端,接着通过调用 `uv_read_start` 开始监听来自该对端的入站数据。它注册了一个回调(`on_peer_read`),每当事件循环在套接字上收到来自客户端的新数据时,就会调用这个回调:

```
void on_peer_read(uv_stream_t* client, ssize_t nread, const uv_buf_t* buf) {
  if (nread < 0) {
    if (nread != UV_EOF) {
      fprintf(stderr, "Read error: %s\n", uv_strerror(nread));
    }
    uv_close((uv_handle_t*)client, on_client_closed);
  } else if (nread == 0) {
    // From the documentation of uv_read_cb: nread might be 0, which does not
    // indicate an error or EOF. This is equivalent to EAGAIN or EWOULDBLOCK
    // under read(2).
  } else {
    // nread > 0
    assert(buf->len >= nread);

    peer_state_t* peerstate = (peer_state_t*)client->data;
    if (peerstate->state == INITIAL_ACK) {
      // If the initial ack hasn't been sent for some reason, ignore whatever
      // the client sends in.
      free(buf->base);
      return;
    }

    // Run the protocol state machine.
    for (int i = 0; i < nread; ++i) {
      switch (peerstate->state) {
      case INITIAL_ACK:
        assert(0 && "can't reach here");
        break;
      case WAIT_FOR_MSG:
        if (buf->base[i] == '^') {
          peerstate->state = IN_MSG;
        }
        break;
      case IN_MSG:
        if (buf->base[i] == '$') {
          peerstate->state = WAIT_FOR_MSG;
        } else {
          assert(peerstate->sendbuf_end < SENDBUF_SIZE);
          peerstate->sendbuf[peerstate->sendbuf_end++] = buf->base[i] + 1;
        }
        break;
      }
    }

    if (peerstate->sendbuf_end > 0) {
      // We have data to send. The write buffer will point to the buffer stored
      // in the peer state for this client.
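      // As with the initial ack, uv_write() does not copy the buffer: sendbuf
      // must stay untouched until the completion callback (on_wrote_buf) runs,
      // which is also where the heap-allocated write request gets freed.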
      uv_buf_t writebuf =
          uv_buf_init(peerstate->sendbuf, peerstate->sendbuf_end);
      uv_write_t* writereq = (uv_write_t*)xmalloc(sizeof(*writereq));
      writereq->data = peerstate;
      int rc;
      if ((rc = uv_write(writereq, (uv_stream_t*)client, &writebuf, 1,
                         on_wrote_buf)) < 0) {
        die("uv_write failed: %s", uv_strerror(rc));
      }
    }
  }
  free(buf->base);
}
```

这个服务器的运行时行为与第三部分的事件驱动服务器非常相似:所有客户端都在单个线程中被并发处理。同样,服务器代码也必须遵守一定的纪律:服务器的逻辑被实现为一组回调,并且绝不能执行长时间运行的操作,因为那会阻塞事件循环。让我们进一步探讨这个问题。

### 事件驱动循环中的长时间运行操作

单线程事件驱动代码天生容易受到一类常见问题的影响:长时间运行的代码会阻塞整个循环。参见如下程序:

```
void on_timer(uv_timer_t* timer) {
  uint64_t timestamp = uv_hrtime();
  printf("on_timer [%" PRIu64 " ms]\n", (timestamp / 1000000) % 100000);

  // "Work"
  if (random() % 5 == 0) {
    printf("Sleeping...\n");
    sleep(3);
  }
}

int main(int argc, const char** argv) {
  uv_timer_t timer;
  uv_timer_init(uv_default_loop(), &timer);
  uv_timer_start(&timer, on_timer, 0, 1000);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```

它运行一个只注册了一个回调的 libuv 事件循环:`on_timer`,循环每秒调用它一次。该回调报告一个时间戳,并且偶尔通过睡眠 3 秒来模拟一个长时间运行的任务。运行示例如下:

```
$ ./uv-timer-sleep-demo
on_timer [4840 ms]
on_timer [5842 ms]
on_timer [6843 ms]
on_timer [7844 ms]
Sleeping...
on_timer [11845 ms]
on_timer [12846 ms]
Sleeping...
on_timer [16847 ms]
on_timer [17849 ms]
on_timer [18850 ms]
...
+``` + +即便在 sleep 函数被调用时,定时器也每秒钟滴答一下,睡眠(sleeping)现在运行在一个单独的线程中,并且不会阻塞事件循环。 + +### 一个用于练习的素数测试服务器 + +因为通过睡眼去模拟工作并不是件让人兴奋的事,我有一个事先准备好的更综合的一个示例 - 一个基于套接字接受来自客户端的数字的服务器,检查这个数字是否是素数,然后去返回一个 “prime" 或者 “composite”。完整的 [服务器代码在这里][15] - 我不在这里粘贴了,因为它太长了,更希望读者在一些自己的练习中去体会它。 + +这个服务器使用了一个原生的素数测试算法,因此,对于大的素数可能花很长时间才返回一个回答。在我的机器中,对于 2305843009213693951,它花了 ~5 秒钟去计算,但是,你的方法可能不同。 + +练习 1:服务器有一个设置(通过一个名为 MODE 的环境变量)要么去在套接字回调(意味着在主线程上)中运行素数测试,要么在 libuv 工作队列中。当多个客户端同时连接时,使用这个设置来观察服务器的行为。当它计算一个大的任务时,在阻塞模式中,服务器将不回复其它客户端,而在非阻塞模式中,它会回复。 + +练习 2;libuv 有一个缺省大小的线程池,并且线程池的大小可以通过环境变量配置。你可以通过使用多个客户端去实验找出它的缺省值是多少?找到线程池缺省值后,使用不同的设置去看一下,在重负载下怎么去影响服务器的响应能力。 + +### 在非阻塞文件系统中使用工作队列 + +对于仅傻傻的演示和 CPU 密集型的计算来说,将可能的阻塞操作委托给一个线程池并不是明智的;libuv 在它的文件系统 APIs 中本身就大量使用了这种性能。通过这种方式,libuv 使用一个异步 API,在一个轻便的方式中,显示出它强大的文件系统的处理能力。 + +让我们使用 `uv_fs_read()`,例如,这个函数从一个文件中(以一个 `uv_fs_t` 句柄为代表)读取一个文件到一个缓冲中 [[3]][16],并且当读取完成后调用一个回调。换句话说,`uv_fs_read()` 总是立即返回,甚至如果文件在一个类似 NFS 的系统上,并且,数据到达缓冲区可能需要一些时间。换句话说,这个 API 与这种方式中其它的 libuv APIs 是异步的。这是怎么工作的呢? + +在这一点上,我们看一下 libuv 的底层;内部实际上非常简单,并且它是一个很好的练习。作为一个便携的库,libuv 对于 Windows 和 Unix 系统在它的许多函数上有不同的实现。我们去看一下在 libuv 源树中的 src/unix/fs.c。 + +这是 `uv_fs_read` 的代码: + +``` +int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, + uv_file file, + const uv_buf_t bufs[], + unsigned int nbufs, + int64_t off, + uv_fs_cb cb) { + if (bufs == NULL || nbufs == 0) + return -EINVAL; + + INIT(READ); + req->file = file; + + req->nbufs = nbufs; + req->bufs = req->bufsml; + if (nbufs > ARRAY_SIZE(req->bufsml)) + req->bufs = uv__malloc(nbufs * sizeof(*bufs)); + + if (req->bufs == NULL) { + if (cb != NULL) + uv__req_unregister(loop, req); + return -ENOMEM; + } + + memcpy(req->bufs, bufs, nbufs * sizeof(*bufs)); + + req->off = off; + POST; +} +``` + +第一次看可能觉得很困难,因为它延缓真实的工作到 INIT 和 POST 宏中,在 POST 中与一些本地变量一起设置。这样做可以避免了文件中的许多重复代码。 + +这是 INIT 宏: + +``` +#define INIT(subtype) \ + do { \ + req->type = UV_FS; \ + if (cb != NULL) \ + uv__req_init(loop, req, UV_FS); \ + req->fs_type = UV_FS_ ## subtype; \ + req->result = 0; \ + req->ptr = NULL; \ + req->loop = loop; \ + req->path = NULL; \ + req->new_path = NULL; \ + req->cb = cb; \ + } \ + while (0) +``` + +它设置了请求,并且更重要的是,设置 `req->fs_type` 域为真实的 FS 请求类型。因为 `uv_fs_read` 调用 invokes INIT(READ),它意味着 `req->fs_type` 被分配一个常数 `UV_FS_READ`。 + +这是 POST 宏: + +``` +#define POST \ + do { \ + if (cb != NULL) { \ + uv__work_submit(loop, &req->work_req, uv__fs_work, uv__fs_done); \ + return 0; \ + } \ + else { \ + uv__fs_work(&req->work_req); \ + return req->result; \ + } \ + } \ + while (0) +``` + +它做什么取决于回调是否为 NULL。在 libuv 文件系统 APIs 中,一个 NULL 回调意味着我们真实地希望去执行一个 _同步_ 操作。在这种情况下,POST 直接调用 `uv__fs_work`(我们需要了解一下这个函数的功能),而对于一个 non-NULL 回调,它提交 `uv__fs_work` 作为一个工作事项到工作队列(指的是线程池),然后,注册 `uv__fs_done` 作为回调;该函数执行一些登记并调用用户提供的回调。 + +如果我们去看 `uv__fs_work` 的代码,我们将看到它使用很多宏去按需路由工作到真实的文件系统调用。在我们的案例中,对于 `UV_FS_READ` 这个调用将被 `uv__fs_read` 生成,它(最终)使用普通的 POSIX APIs 去读取。这个函数可以在一个 _阻塞_ 方式中很安全地实现。因为,它通过异步 API 调用时被置于一个线程池中。 + +在 Node.js 中,fs.readFile 函数是映射到 `uv_fs_read` 上。因此,可以在一个非阻塞模式中读取文件,甚至是当底层文件系统 API 是阻塞方式时。 + +* * * + + +[[1]][1] 为确保服务器不泄露内存,我在一个启用泄露检查的 Valgrind 中运行它。因为服务器经常是被设计为永久运行,这是一个挑战;为克服这个问题,我在服务器上添加了一个 “kill 开关” - 一个从客户端接收的特定序列,以使它可以停止事件循环并退出。这个代码在 `theon_wrote_buf` 句柄中。 + + +[[2]][2] 在这里我们不过多地使用 `work_req`;讨论的素数测试服务器接下来将展示怎么被用于去传递上下文信息到回调中。 + + +[[3]][3] `uv_fs_read()` 提供了一个类似于 preadv Linux 系统调用的通用 API:它使用多缓冲区用于排序,并且支持一个到文件中的偏移。基于我们讨论的目的可以忽略这些特性。 + + +-------------------------------------------------------------------------------- + +via: https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/ + +作者:[Eli Bendersky 
][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://eli.thegreenplace.net/ +[1]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id1 +[2]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id2 +[3]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id3 +[4]:https://eli.thegreenplace.net/tag/concurrency +[5]:https://eli.thegreenplace.net/tag/c-c +[6]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id4 +[7]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/ +[8]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/ +[9]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/ +[10]:http://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/ +[11]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/ +[12]:http://libuv.org/ +[13]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/uv-server.c +[14]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id5 +[15]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/uv-isprime-server.c +[16]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id6 +[17]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/ From fb45dbb3d739a5ad577f34bfb7b74af9c5d96686 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Mon, 4 Dec 2017 15:38:10 +0800 Subject: [PATCH 004/236] Translated by qhwdw --- ...1109 Concurrent Servers- Part 4 - libuv.md | 492 ------------------ ...1109 Concurrent Servers- Part 4 - libuv.md | 1 + 2 files changed, 1 insertion(+), 492 deletions(-) delete mode 100644 sources/tech/20171109 Concurrent Servers- Part 4 - libuv.md diff --git a/sources/tech/20171109 Concurrent Servers- Part 4 - libuv.md b/sources/tech/20171109 Concurrent Servers- Part 4 - libuv.md deleted file mode 100644 index 94b98cf5c2..0000000000 --- a/sources/tech/20171109 Concurrent Servers- Part 4 - libuv.md +++ /dev/null @@ -1,492 +0,0 @@ -Translating by qhwdw [Concurrent Servers: Part 4 - libuv][17] -============================================================ - -This is part 4 of a series of posts on writing concurrent network servers. In this part we're going to use libuv to rewrite our server once again, and also talk about handling time-consuming tasks in callbacks using a thread pool. Finally, we're going to look under the hood of libuv for a bit to study how it wraps blocking file-system operations with an asynchronous API. - -All posts in the series: - -* [Part 1 - Introduction][7] - -* [Part 2 - Threads][8] - -* [Part 3 - Event-driven][9] - -* [Part 4 - libuv][10] - -### Abstracting away event-driven loops with libuv - -In [part 3][11], we've seen how similar select-based and epoll-based servers are, and I mentioned it's very tempting to abstract away the minor differences between them. Numerous libraries are already doing this, however, so in this part I'm going to pick one and use it. The library I'm picking is [libuv][12], which was originally designed to serve as the underlying portable platform layer for Node.js, and has since found use in additional projects. libuv is written in C, which makes it highly portable and very suitable for tying into high-level languages like JavaScript and Python. 
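As a first taste of the API, here is roughly the smallest complete libuv program - an event loop with nothing registered on it (a sketch; the calls it uses are introduced below):

```
#include <uv.h>

int main() {
  uv_loop_t* loop = uv_default_loop();
  // Nothing is registered on the loop, so uv_run returns immediately.
  uv_run(loop, UV_RUN_DEFAULT);
  return uv_loop_close(loop);
}
```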
- -While libuv has grown to be a fairly large framework for abstracting low-level platform details, it remains centered on the concept of an  _event loop_ . In our event-driven servers in part 3, the event loop was explicit in the main function; when using libuv, the loop is usually hidden inside the library itself, and user code just registers event handlers (as callback functions) and runs the loop. Furthermore, libuv will use the fastest event loop implementation for a given platform: for Linux this is epoll, etc. - -![libuv loop](https://eli.thegreenplace.net/images/2017/libuvloop.png) - -libuv supports multiple event loops, and thus an event loop is a first class citizen within the library; it has a handle - uv_loop_t, and functions for creating/destroying/starting/stopping loops. That said, I will only use the "default" loop in this post, which libuv makes available via uv_default_loop(); multiple loops are mosly useful for multi-threaded event-driven servers, a more advanced topic I'll leave for future parts in the series. - -### A concurrent server using libuv - -To get a better feel for libuv, let's jump to our trusty protocol server that we've been vigorously reimplementing throughout the series. The structure of this server is going to be somewhat similar to the select and epoll-based servers of part 3, since it also relies on callbacks. The full [code sample is here][13]; we start with setting up the server socket bound to a local port: - -``` -int portnum = 9090; -if (argc >= 2) { - portnum = atoi(argv[1]); -} -printf("Serving on port %d\n", portnum); - -int rc; -uv_tcp_t server_stream; -if ((rc = uv_tcp_init(uv_default_loop(), &server_stream)) < 0) { - die("uv_tcp_init failed: %s", uv_strerror(rc)); -} - -struct sockaddr_in server_address; -if ((rc = uv_ip4_addr("0.0.0.0", portnum, &server_address)) < 0) { - die("uv_ip4_addr failed: %s", uv_strerror(rc)); -} - -if ((rc = uv_tcp_bind(&server_stream, (const struct sockaddr*)&server_address, 0)) < 0) { - die("uv_tcp_bind failed: %s", uv_strerror(rc)); -} -``` - -Fairly standard socket fare here, except that it's all wrapped in libuv APIs. In return we get a portable interface that should work on any platform libuv supports. - -This code also demonstrates conscientious error handling; most libuv functions return an integer status, with a negative number meaning an error. In our server we treat these errors as fatals, but one may imagine a more graceful recovery. - -Now that the socket is bound, it's time to listen on it. Here we run into our first callback registration: - -``` -// Listen on the socket for new peers to connect. When a new peer connects, -// the on_peer_connected callback will be invoked. -if ((rc = uv_listen((uv_stream_t*)&server_stream, N_BACKLOG, on_peer_connected)) < 0) { - die("uv_listen failed: %s", uv_strerror(rc)); -} -``` - -uv_listen registers a callback that the event loop will invoke when new peers connect to the socket. Our callback here is called on_peer_connected, and we'll examine it soon. - -Finally, main runs the libuv loop until it's stopped (uv_run only returns when the loop has stopped or some error occurred). - -``` -// Run the libuv event loop. -uv_run(uv_default_loop(), UV_RUN_DEFAULT); - -// If uv_run returned, close the default loop before exiting. -return uv_loop_close(uv_default_loop()); -``` - -Note that only a single callback was registered by main prior to running the event loop; we'll soon see how additional callbacks are added. 
It's not a problem to add and remove callbacks throughout the runtime of the event loop - in fact, this is how most servers are expected to be written. - -This is on_peer_connected, which handles new client connections to the server: - -``` -void on_peer_connected(uv_stream_t* server_stream, int status) { - if (status < 0) { - fprintf(stderr, "Peer connection error: %s\n", uv_strerror(status)); - return; - } - - // client will represent this peer; it's allocated on the heap and only - // released when the client disconnects. The client holds a pointer to - // peer_state_t in its data field; this peer state tracks the protocol state - // with this client throughout interaction. - uv_tcp_t* client = (uv_tcp_t*)xmalloc(sizeof(*client)); - int rc; - if ((rc = uv_tcp_init(uv_default_loop(), client)) < 0) { - die("uv_tcp_init failed: %s", uv_strerror(rc)); - } - client->data = NULL; - - if (uv_accept(server_stream, (uv_stream_t*)client) == 0) { - struct sockaddr_storage peername; - int namelen = sizeof(peername); - if ((rc = uv_tcp_getpeername(client, (struct sockaddr*)&peername, - &namelen)) < 0) { - die("uv_tcp_getpeername failed: %s", uv_strerror(rc)); - } - report_peer_connected((const struct sockaddr_in*)&peername, namelen); - - // Initialize the peer state for a new client: we start by sending the peer - // the initial '*' ack. - peer_state_t* peerstate = (peer_state_t*)xmalloc(sizeof(*peerstate)); - peerstate->state = INITIAL_ACK; - peerstate->sendbuf[0] = '*'; - peerstate->sendbuf_end = 1; - peerstate->client = client; - client->data = peerstate; - - // Enqueue the write request to send the ack; when it's done, - // on_wrote_init_ack will be called. The peer state is passed to the write - // request via the data pointer; the write request does not own this peer - // state - it's owned by the client handle. - uv_buf_t writebuf = uv_buf_init(peerstate->sendbuf, peerstate->sendbuf_end); - uv_write_t* req = (uv_write_t*)xmalloc(sizeof(*req)); - req->data = peerstate; - if ((rc = uv_write(req, (uv_stream_t*)client, &writebuf, 1, - on_wrote_init_ack)) < 0) { - die("uv_write failed: %s", uv_strerror(rc)); - } - } else { - uv_close((uv_handle_t*)client, on_client_closed); - } -} -``` - -This code is well commented, but there are a couple of important libuv idioms I'd like to highlight: - -* Passing custom data into callbacks: since C has no closures, this can be challenging. libuv has a void* datafield in all its handle types; these fields can be used to pass user data. For example, note how client->data is made to point to a peer_state_t structure so that the callbacks registered by uv_write and uv_read_start can know which peer data they're dealing with. - -* Memory management: event-driven programming is much easier in languages with garbage collection, because callbacks usually run in a completely different stack frame from where they were registered, making stack-based memory management difficult. It's almost always necessary to pass heap-allocated data to libuv callbacks (except in main, which remains alive on the stack when all callbacks run), and to avoid leaks much care is required about when these data are safe to free(). This is something that comes with a bit of practice [[1]][6]. 
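To make both idioms concrete, here is a minimal sketch; my_state_t, on_close and attach_state are illustrative names rather than parts of the server above:

```
typedef struct {
  int id;
} my_state_t;

void on_close(uv_handle_t* handle) {
  free(handle->data);  // The state is released together with its handle.
  free(handle);
}

void attach_state(uv_loop_t* loop, uv_tcp_t* client) {
  uv_tcp_init(loop, client);
  my_state_t* s = (my_state_t*)xmalloc(sizeof(*s));
  s->id = 42;
  // Any callback that later receives this handle can recover the state with
  // (my_state_t*)handle->data, just as the server does with peer_state_t.
  client->data = s;
}
```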
- -The peer state for this server is: - -``` -typedef struct { - ProcessingState state; - char sendbuf[SENDBUF_SIZE]; - int sendbuf_end; - uv_tcp_t* client; -} peer_state_t; -``` - -It's fairly similar to the state in part 3; we no longer need sendptr, since uv_write will make sure to send the whole buffer it's given before invoking the "done writing" callback. We also keep a pointer to the client for other callbacks to use. Here's on_wrote_init_ack: - -``` -void on_wrote_init_ack(uv_write_t* req, int status) { - if (status) { - die("Write error: %s\n", uv_strerror(status)); - } - peer_state_t* peerstate = (peer_state_t*)req->data; - // Flip the peer state to WAIT_FOR_MSG, and start listening for incoming data - // from this peer. - peerstate->state = WAIT_FOR_MSG; - peerstate->sendbuf_end = 0; - - int rc; - if ((rc = uv_read_start((uv_stream_t*)peerstate->client, on_alloc_buffer, - on_peer_read)) < 0) { - die("uv_read_start failed: %s", uv_strerror(rc)); - } - - // Note: the write request doesn't own the peer state, hence we only free the - // request itself, not the state. - free(req); -} -``` - -Then we know for sure that the initial '*' was sent to the peer, we start listening to incoming data from this peer by calling uv_read_start, which registers a callback (on_peer_read) that will be invoked by the event loop whenever new data is received on the socket from the client: - -``` -void on_peer_read(uv_stream_t* client, ssize_t nread, const uv_buf_t* buf) { - if (nread < 0) { - if (nread != uv_eof) { - fprintf(stderr, "read error: %s\n", uv_strerror(nread)); - } - uv_close((uv_handle_t*)client, on_client_closed); - } else if (nread == 0) { - // from the documentation of uv_read_cb: nread might be 0, which does not - // indicate an error or eof. this is equivalent to eagain or ewouldblock - // under read(2). - } else { - // nread > 0 - assert(buf->len >= nread); - - peer_state_t* peerstate = (peer_state_t*)client->data; - if (peerstate->state == initial_ack) { - // if the initial ack hasn't been sent for some reason, ignore whatever - // the client sends in. - free(buf->base); - return; - } - - // run the protocol state machine. - for (int i = 0; i < nread; ++i) { - switch (peerstate->state) { - case initial_ack: - assert(0 && "can't reach here"); - break; - case wait_for_msg: - if (buf->base[i] == '^') { - peerstate->state = in_msg; - } - break; - case in_msg: - if (buf->base[i] == '$') { - peerstate->state = wait_for_msg; - } else { - assert(peerstate->sendbuf_end < sendbuf_size); - peerstate->sendbuf[peerstate->sendbuf_end++] = buf->base[i] + 1; - } - break; - } - } - - if (peerstate->sendbuf_end > 0) { - // we have data to send. the write buffer will point to the buffer stored - // in the peer state for this client. - uv_buf_t writebuf = - uv_buf_init(peerstate->sendbuf, peerstate->sendbuf_end); - uv_write_t* writereq = (uv_write_t*)xmalloc(sizeof(*writereq)); - writereq->data = peerstate; - int rc; - if ((rc = uv_write(writereq, (uv_stream_t*)client, &writebuf, 1, - on_wrote_buf)) < 0) { - die("uv_write failed: %s", uv_strerror(rc)); - } - } - } - free(buf->base); -} -``` - -The runtime behavior of this server is very similar to the event-driven servers of part 3: all clients are handled concurrently in a single thread. Also similarly, a certain discipline has to be maintained in the server's code: the server's logic is implemented as an ensemble of callbacks, and long-running operations are a big no-no since they block the event loop. Let's explore this issue a bit further. 
- -### Long-running operations in event-driven loops - -The single-threaded nature of event-driven code makes it very susceptible to a common issue: long-running code blocks the entire loop. Consider this program: - -``` -void on_timer(uv_timer_t* timer) { - uint64_t timestamp = uv_hrtime(); - printf("on_timer [%" PRIu64 " ms]\n", (timestamp / 1000000) % 100000); - - // "Work" - if (random() % 5 == 0) { - printf("Sleeping...\n"); - sleep(3); - } -} - -int main(int argc, const char** argv) { - uv_timer_t timer; - uv_timer_init(uv_default_loop(), &timer); - uv_timer_start(&timer, on_timer, 0, 1000); - return uv_run(uv_default_loop(), UV_RUN_DEFAULT); -} -``` - -It runs a libuv event loop with a single registered callback: on_timer, which is invoked by the loop every second. The callback reports a timestamp, and once in a while simulates some long-running task by sleeping for 3 seconds. Here's a sample run: - -``` -$ ./uv-timer-sleep-demo -on_timer [4840 ms] -on_timer [5842 ms] -on_timer [6843 ms] -on_timer [7844 ms] -Sleeping... -on_timer [11845 ms] -on_timer [12846 ms] -Sleeping... -on_timer [16847 ms] -on_timer [17849 ms] -on_timer [18850 ms] -... -``` - -on_timer dutifully fires every second, until the random sleep hits in. At that point, on_timer is not invoked again until the sleep is over; in fact,  _no other callbacks_  will be invoked in this time frame. The sleep call blocks the current thread, which is the only thread involved and is also the thread the event loop uses. When this thread is blocked, the event loop is blocked. - -This example demonstrates why it's so important for callbacks to never block in event-driven calls, and applies equally to Node.js servers, client-side Javascript, most GUI programming frameworks, and many other asynchronous programming models. - -But sometimes running time-consuming tasks is unavoidable. Not all tasks have asynchronous APIs; for example, we may be dealing with some library that only has a synchronous API, or just have to perform a potentially long computation. How can we combine such code with event-driven programming? Threads to the rescue! - -### Threads for "converting" blocking calls into asynchronous calls - -A thread pool can be used to turn blocking calls into asynchronous calls, by running alongside the event loop and posting events onto it when tasks are completed. Here's how it works, for a given blocking function do_work(): - -1. Instead of directly calling do_work() in a callback, we package it into a "task" and ask the thread pool to execute the task. We also register a callback for the loop to invoke when the task has finished; let's call iton_work_done(). - -2. At this point our callback can return and the event loop keeps spinning; at the same time, a thread in the pool is executing the task. - -3. Once the task has finished executing, the main thread (the one running the event loop) is notified and on_work_done() is invoked by the event loop. 
- -Let's see how this solves our previous timer/sleep example, using libuv's work scheduling API: - -``` -void on_after_work(uv_work_t* req, int status) { - free(req); -} - -void on_work(uv_work_t* req) { - // "Work" - if (random() % 5 == 0) { - printf("Sleeping...\n"); - sleep(3); - } -} - -void on_timer(uv_timer_t* timer) { - uint64_t timestamp = uv_hrtime(); - printf("on_timer [%" PRIu64 " ms]\n", (timestamp / 1000000) % 100000); - - uv_work_t* work_req = (uv_work_t*)malloc(sizeof(*work_req)); - uv_queue_work(uv_default_loop(), work_req, on_work, on_after_work); -} - -int main(int argc, const char** argv) { - uv_timer_t timer; - uv_timer_init(uv_default_loop(), &timer); - uv_timer_start(&timer, on_timer, 0, 1000); - return uv_run(uv_default_loop(), UV_RUN_DEFAULT); -} -``` - -Instead of calling sleep directly in on_timer, we enqueue a task, represented by a handle of type work_req [[2]][14], the function to run in the task (on_work) and the function to invoke once the task is completed (on_after_work). on_workis where the "work" (the blocking/time-consuming operation) happens. Note a crucial difference between the two callbacks passed into uv_queue_work: on_work runs in the thread pool, while on_after_work runs on the main thread which also runs the event loop - just like any other callback. - -Let's see this version run: - -``` -$ ./uv-timer-work-demo -on_timer [89571 ms] -on_timer [90572 ms] -on_timer [91573 ms] -on_timer [92575 ms] -Sleeping... -on_timer [93576 ms] -on_timer [94577 ms] -Sleeping... -on_timer [95577 ms] -on_timer [96578 ms] -on_timer [97578 ms] -... -``` - -The timer ticks every second, even though the sleeping function is still invoked; sleeping is now done on a separate thread and doesn't block the event loop. - -### A primality-testing server, with exercises - -Since sleep isn't a very exciting way to simulate work, I've prepared a more comprehensive example - a server that accepts numbers from clients over a socket, checks whether these numbers are prime and sends back either "prime" or "composite". The full [code for this server is here][15] - I won't post it here since it's long, but will rather give readers the opportunity to explore it on their own with a couple of exercises. - -The server deliberatly uses a naive primality test algorithm, so for large primes it can take quite a while to return an answer. On my machine it takes ~5 seconds to compute the answer for 2305843009213693951, but YMMV. - -Exercise 1: the server has a setting (via an environment variable named MODE) to either run the primality test in the socket callback (meaning on the main thread) or in the libuv work queue. Play with this setting to observe the server's behavior when multiple clients are connecting simultaneously. In blocking mode, the server will not answer other clients while it's computing a big task; in non-blocking mode it will. - -Exercise 2: libuv has a default thread-pool size, and it can be configured via an environment variable. Can you use multiple clients to discover experimentally what the default size is? Having found the default thread-pool size, play with different settings to see how it affects the server's responsiveness under heavy load. - -### Non-blocking file-system operations using work queues - -Delegating potentially-blocking operations to a thread pool isn't good for just silly demos and CPU-intensive computations; libuv itself makes heavy use of this capability in its file-system APIs. 
This way, libuv accomplishes the superpower of exposing the file-system with an asynchronous API, in a portable way. - -Let's take uv_fs_read(), for example. This function reads from a file (represented by a uv_fs_t handle) into a buffer [[3]][16], and invokes a callback when the reading is completed. That is, uv_fs_read() always returns immediately, even if the file sits on an NFS-like system and it may take a while for the data to get to the buffer. In other words, this API is asynchronous in the way other libuv APIs are. How does this work? - -At this point we're going to look under the hood of libuv; the internals are actually fairly straightforward, and it's a good exercise. Being a portable library, libuv has different implementations of many of its functions for Windows and Unix systems. We're going to be looking at src/unix/fs.c in the libuv source tree. - -The code for uv_fs_read is: - -``` -int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, - uv_file file, - const uv_buf_t bufs[], - unsigned int nbufs, - int64_t off, - uv_fs_cb cb) { - if (bufs == NULL || nbufs == 0) - return -EINVAL; - - INIT(READ); - req->file = file; - - req->nbufs = nbufs; - req->bufs = req->bufsml; - if (nbufs > ARRAY_SIZE(req->bufsml)) - req->bufs = uv__malloc(nbufs * sizeof(*bufs)); - - if (req->bufs == NULL) { - if (cb != NULL) - uv__req_unregister(loop, req); - return -ENOMEM; - } - - memcpy(req->bufs, bufs, nbufs * sizeof(*bufs)); - - req->off = off; - POST; -} -``` - -It may seem puzzling at first, because it defers the real work to the INIT and POST macros, with some local variable setup for POST. This is done to avoid too much code duplication within the file. - -The INIT macro is: - -``` -#define INIT(subtype) \ - do { \ - req->type = UV_FS; \ - if (cb != NULL) \ - uv__req_init(loop, req, UV_FS); \ - req->fs_type = UV_FS_ ## subtype; \ - req->result = 0; \ - req->ptr = NULL; \ - req->loop = loop; \ - req->path = NULL; \ - req->new_path = NULL; \ - req->cb = cb; \ - } \ - while (0) -``` - -It sets up the request, and most importantly sets the req->fs_type field to the actual FS request type. Since uv_fs_read invokes INIT(READ), it means req->fs_type gets assigned the constant UV_FS_READ. - -The POST macro is: - -``` -#define POST \ - do { \ - if (cb != NULL) { \ - uv__work_submit(loop, &req->work_req, uv__fs_work, uv__fs_done); \ - return 0; \ - } \ - else { \ - uv__fs_work(&req->work_req); \ - return req->result; \ - } \ - } \ - while (0) -``` - -What it does depends on whether the callback is NULL. In libuv file-system APIs, a NULL callback means we actually want to perform the operation  _synchronously_ . In this case POST invokes uv__fs_work directly (we'll get to what this function does in just a bit), whereas for a non-NULL callback, it submits uv__fs_work as a work item to the work queue (which is the thread pool), and registers uv__fs_done as the callback; that function does a bit of book-keeping and invokes the user-provided callback. - -If we look at the code of uv__fs_work, we'll see it uses more macros to route work to the actual file-system call as needed. In our case, for UV_FS_READ the call will be made to uv__fs_read, which (at last!) does the reading using regular POSIX APIs. This function can be safely implemented in a  _blocking_  manner, since it's placed on a thread-pool when called through the asynchronous API. - -In Node.js, the fs.readFile function is mapped to uv_fs_read. 
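From C, the caller's side of the same path looks roughly like this (a sketch with illustrative names; it assumes a uv_file obtained earlier, e.g. from uv_fs_open, and omits error handling):

```
static char readbuf[1024];

void on_read(uv_fs_t* req) {
  // req->result holds the number of bytes read, or a negative error code.
  printf("read %zd bytes\n", (ssize_t)req->result);
  uv_fs_req_cleanup(req);
}

void start_read(uv_loop_t* loop, uv_fs_t* req, uv_file file) {
  uv_buf_t iov = uv_buf_init(readbuf, sizeof(readbuf));
  // Returns immediately; on_read fires on the loop thread once a thread-pool
  // worker has completed the (potentially blocking) read.
  uv_fs_read(loop, req, file, &iov, 1, -1, on_read);
}
```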
Thus, reading files can be done in a non-blocking fashion even though the underlying file-system API is blocking. - -* * * - - -[[1]][1] To ensure that this server doesn't leak memory, I ran it under Valgrind with the leak checker enabled. Since servers are often designed to run forever, this was a bit challenging; to overcome this issue I've added a "kill switch" to the server - a special sequence received from a client makes it stop the event loop and exit. The code for this is in theon_wrote_buf handler. - - -[[2]][2] Here we don't use work_req for much; the primality testing server discussed next will show how it's used to pass context information into the callback. - - -[[3]][3] uv_fs_read() provides a generalized API similar to the preadv Linux system call: it takes multiple buffers which it fills in order, and supports an offset into the file. We can ignore these features for the sake of our discussion. - - --------------------------------------------------------------------------------- - -via: https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/ - -作者:[Eli Bendersky ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://eli.thegreenplace.net/ -[1]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id1 -[2]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id2 -[3]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id3 -[4]:https://eli.thegreenplace.net/tag/concurrency -[5]:https://eli.thegreenplace.net/tag/c-c -[6]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id4 -[7]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/ -[8]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/ -[9]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/ -[10]:http://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/ -[11]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/ -[12]:http://libuv.org/ -[13]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/uv-server.c -[14]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id5 -[15]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/uv-isprime-server.c -[16]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id6 -[17]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/ diff --git a/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md b/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md index b4db491e4e..07994c67b1 100644 --- a/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md +++ b/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md @@ -490,3 +490,4 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/ [15]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/uv-isprime-server.c [16]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id6 [17]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/ + From 11e1c8c450f35378d5e24449e15628748ad98053 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 4 Dec 2017 17:21:41 +0800 Subject: [PATCH 005/236] Revert "Merge branch 'master' of https://github.com/LCTT/TranslateProject" This reverts commit 59837a2677e18ebf7eb3c0a586662c72098fd1e1, reversing changes made to 
fb45dbb3d739a5ad577f34bfb7b74af9c5d96686. --- .../20141028 When Does Your OS Run.md | 0 ... Firewalld in Multi-Zone Configurations.md | 0 .../20170227 Ubuntu Core in LXD containers.md | 0 ... THE SOFTWARE CONTAINERIZATION MOVEMENT.md | 0 ...ner OS for Linux and Windows Containers.md | 0 ... Life-Changing Magic of Tidying Up Code.md | 0 ... guide to links in the Linux filesystem.md | 300 -------- ...ldcard Certificates Coming January 2018.md | 0 ...andy Tool for Every Level of Linux User.md | 0 ...GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md | 0 ...0928 3 Python web scrapers and crawlers.md | 0 .../20171002 Scaling the GitLab database.md | 0 ...3 PostgreSQL Hash Indexes Are Now Cool.md | 0 ...inux desktop hasnt jumped in popularity.md | 0 ...ant 100 command line productivity boost.md | 0 ...20171008 8 best languages to blog about.md | 0 ...ext Generation of Cybersecurity Experts.md | 0 ...itter Data in Apache Kafka through KSQL.md | 0 ...p a Postgres database on a Raspberry Pi.md | 0 .../{201711 => }/20171011 Why Linux Works.md | 0 ...easons open source is good for business.md | 0 ...71013 Best of PostgreSQL 10 for the DBA.md | 0 ... cloud-native computing with Kubernetes.md | 0 ...5 Monitoring Slow SQL Queries via Slack.md | 0 ... Use Docker with R A DevOps Perspective.md | 0 .../20171016 Introducing CRI-O 1.0.md | 0 ...20171017 A tour of Postgres Index Types.md | 0 .../20171017 Image Processing on Linux.md | 0 ...iners and microservices change security.md | 0 ...n Python by building a simple dice game.md | 0 ...ecure Your Network in the Wake of KRACK.md | 0 ...Simple Excellent Linux Network Monitors.md | 0 ...cker containers in Kubernetes with Java.md | 0 ...ols to Help You Remember Linux Commands.md | 0 ...ndroid on Top of a Linux Graphics Stack.md | 0 ...0171024 Top 5 Linux pain points in 2017.md | 0 ...et s analyze GitHub’s data and find out.md | 0 ...u Drop Unity Mark Shuttleworth Explains.md | 0 ...Backup, Rclone and Wasabi cloud storage.md | 0 ...26 But I dont know what a container is .md | 0 .../20171026 Why is Kubernetes so popular.md | 0 .../20171101 How to use cron in Linux.md | 0 ... to a DCO for source code contributions.md | 0 ...nage EXT2 EXT3 and EXT4 Health in Linux.md | 0 .../20171106 Finding Files with mlocate.md | 0 ...Publishes Enterprise Open Source Guides.md | 0 ...mmunity clue. Here s how to do it right.md | 0 ...dopts home-brewed KVM as new hypervisor.md | 0 ... created my first RPM package in Fedora.md | 0 ...est applications with Ansible Container.md | 0 ...71110 File better bugs with coredumpctl.md | 0 ... ​Linux totally dominates supercomputers.md | 0 ...1116 5 Coolest Linux Terminal Emulators.md | 0 ...7 How to Easily Remember Linux Commands.md | 0 ...tting started with OpenFaaS on minikube.md | 0 ...20 Containers and Kubernetes Whats next.md | 81 -- ...Install Android File Transfer for Linux.md | 75 -- ...and Certification Are Key for SysAdmins.md | 72 -- ...our Terminal Session To Anyone In Seconds.md | 0 ...Search DuckDuckGo from the Command Line.md | 97 --- ...The One in Which I Call Out Hacker News.md | 86 +++ ...nject features and investigate programs.md | 211 ------ ...an event Introducing eBPF Kernel probes.md | 361 --------- ...sers guide to Logical Volume Management.md | 233 ------ ...9 INTRODUCING DOCKER SECRETS MANAGEMENT.md | 110 --- ...170530 How to Improve a Legacy Codebase.md | 108 +++ ...es Are Hiring Computer Security Experts.md | 91 --- ... 
guide to links in the Linux filesystem.md | 314 ++++++++ ...ow to answer questions in a helpful way.md | 172 ----- ...Linux containers with Ansible Container.md | 114 --- .../20171005 Reasons Kubernetes is cool.md | 148 ---- ...20171010 Operating a Kubernetes network.md | 216 ------ ...LEAST PRIVILEGE CONTAINER ORCHESTRATION.md | 174 ----- ...ow Eclipse is advancing IoT development.md | 83 ++ ...ive into BPF a list of reading material.md | 711 ------------------ .../20171107 GitHub welcomes all CI tools.md | 95 --- sources/tech/20171112 Love Your Bugs.md | 311 -------- ... write fun small web projects instantly.md | 76 -- .../20171114 Sysadmin 101 Patch Management.md | 61 -- .../20171114 Take Linux and Run With It.md | 68 -- ...obs Are Hot Get Trained and Get Noticed.md | 58 -- ... and How to Set an Open Source Strategy.md | 120 --- ...ux Programs for Drawing and Image Editing.md | 130 ---- ...171120 Adopting Kubernetes step by step.md | 93 --- ...20 Containers and Kubernetes Whats next.md | 98 +++ ... Why microservices are a security issue.md | 116 --- ...and Certification Are Key for SysAdmins.md | 70 ++ ...Could Be Your New Favorite Container OS.md | 7 +- ...Help Build ONNX Open Source AI Platform.md | 76 -- ... Your Linux Server Has Been Compromised.md | 156 ---- ...71128 The politics of the Linux desktop.md | 110 --- ... a great pair for beginning programmers.md | 142 ---- ... open source technology trends for 2018.md | 143 ---- ...actices for getting started with DevOps.md | 94 --- ...eshark on Debian and Ubuntu 16.04_17.10.md | 185 ----- ...n Source Components Ease Learning Curve.md | 70 -- ...eractive Workflows for Cpp with Jupyter.md | 301 -------- ...Unity from the Dead as an Official Spin.md | 41 - ...usiness Software Alternatives For Linux.md | 116 --- ...x command-line screen grabs made simple.md | 108 --- ...Search DuckDuckGo from the Command Line.md | 103 +++ ...Long Running Terminal Commands Complete.md | 156 ---- ...ke up and Shut Down Linux Automatically.md | 135 ---- ...1 Fedora Classroom Session: Ansible 101.md | 71 -- ...ow to Manage Users with Groups in Linux.md | 168 ----- ... to find a publisher for your tech book.md | 76 -- ...e your WiFi MAC address on Ubuntu 16.04.md | 160 ---- ... 
millions of Linux users with Snapcraft.md | 321 -------- ...inux command-line screen grabs made simple | 72 -- ...0171202 docker - Use multi-stage builds.md | 127 ---- ...The One in Which I Call Out Hacker News.md | 99 --- ...20161216 Kprobes Event Tracing on ARMv8.md | 16 +- ...170530 How to Improve a Legacy Codebase.md | 104 --- .../20170910 Cool vim feature sessions.md | 44 -- ...ng network connections on Linux systems.md | 0 ...ow Eclipse is advancing IoT development.md | 77 -- ...layer introduction part 1 the bio layer.md | 13 +- .../tech/20171108 Archiving repositories.md | 37 - ...6 Introducing security alerts on GitHub.md | 48 -- ...stem Logs: Understand Your Linux System.md | 68 -- ...Install Android File Transfer for Linux.md | 82 ++ ...Could Be Your New Favorite Container OS.md | 147 ---- ...every domain someone owns automatically.md | 49 -- ...ogle Translate From Command Line In Linux.md | 400 ---------- ...171201 Linux Journal Ceases Publication.md | 34 - ...ing Hardware for Beginners: Think Software | 89 --- 126 files changed, 960 insertions(+), 8338 deletions(-) rename published/{201711 => }/20141028 When Does Your OS Run.md (100%) rename published/{201711 => }/20170202 Understanding Firewalld in Multi-Zone Configurations.md (100%) rename published/{201711 => }/20170227 Ubuntu Core in LXD containers.md (100%) rename published/{201711 => }/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md (100%) rename published/{201711 => }/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md (100%) rename published/{201711 => }/20170608 The Life-Changing Magic of Tidying Up Code.md (100%) delete mode 100644 published/20170622 A users guide to links in the Linux filesystem.md rename published/{201711 => }/20170706 Wildcard Certificates Coming January 2018.md (100%) rename published/{201711 => }/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md (100%) rename published/{201711 => }/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md (100%) rename published/{201711 => }/20170928 3 Python web scrapers and crawlers.md (100%) rename published/{201711 => }/20171002 Scaling the GitLab database.md (100%) rename published/{201711 => }/20171003 PostgreSQL Hash Indexes Are Now Cool.md (100%) rename published/{201711 => }/20171004 No the Linux desktop hasnt jumped in popularity.md (100%) rename published/{201711 => }/20171007 Instant 100 command line productivity boost.md (100%) rename published/{201711 => }/20171008 8 best languages to blog about.md (100%) rename published/{201711 => }/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md (100%) rename published/{201711 => }/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md (100%) rename published/{201711 => }/20171011 How to set up a Postgres database on a Raspberry Pi.md (100%) rename published/{201711 => }/20171011 Why Linux Works.md (100%) rename published/{201711 => }/20171013 6 reasons open source is good for business.md (100%) rename published/{201711 => }/20171013 Best of PostgreSQL 10 for the DBA.md (100%) rename published/{201711 => }/20171015 How to implement cloud-native computing with Kubernetes.md (100%) rename published/{201711 => }/20171015 Monitoring Slow SQL Queries via Slack.md (100%) rename published/{201711 => }/20171015 Why Use Docker with R A DevOps Perspective.md (100%) rename published/{201711 => }/20171016 Introducing CRI-O 1.0.md (100%) 
rename published/{201711 => }/20171017 A tour of Postgres Index Types.md (100%) rename published/{201711 => }/20171017 Image Processing on Linux.md (100%) rename published/{201711 => }/20171018 How containers and microservices change security.md (100%) rename published/{201711 => }/20171018 Learn how to program in Python by building a simple dice game.md (100%) rename published/{201711 => }/20171018 Tips to Secure Your Network in the Wake of KRACK.md (100%) rename published/{201711 => }/20171019 3 Simple Excellent Linux Network Monitors.md (100%) rename published/{201711 => }/20171019 How to manage Docker containers in Kubernetes with Java.md (100%) rename published/{201711 => }/20171020 3 Tools to Help You Remember Linux Commands.md (100%) rename published/{201711 => }/20171020 Running Android on Top of a Linux Graphics Stack.md (100%) rename published/{201711 => }/20171024 Top 5 Linux pain points in 2017.md (100%) rename published/{201711 => }/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md (100%) rename published/{201711 => }/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md (100%) rename published/{201711 => }/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md (100%) rename published/{201711 => }/20171026 But I dont know what a container is .md (100%) rename published/{201711 => }/20171026 Why is Kubernetes so popular.md (100%) rename published/{201711 => }/20171101 How to use cron in Linux.md (100%) rename published/{201711 => }/20171101 We re switching to a DCO for source code contributions.md (100%) rename published/{201711 => }/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md (100%) rename published/{201711 => }/20171106 Finding Files with mlocate.md (100%) rename published/{201711 => }/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md (100%) rename published/{201711 => }/20171106 Most companies can t buy an open source community clue. 
Here s how to do it right.md (100%) rename published/{201711 => }/20171107 AWS adopts home-brewed KVM as new hypervisor.md (100%) rename published/{201711 => }/20171107 How I created my first RPM package in Fedora.md (100%) rename published/{201711 => }/20171108 Build and test applications with Ansible Container.md (100%) rename published/{201711 => }/20171110 File better bugs with coredumpctl.md (100%) rename published/{201711 => }/20171114 ​Linux totally dominates supercomputers.md (100%) rename published/{201711 => }/20171116 5 Coolest Linux Terminal Emulators.md (100%) rename published/{201711 => }/20171117 How to Easily Remember Linux Commands.md (100%) rename published/{201711 => }/20171118 Getting started with OpenFaaS on minikube.md (100%) delete mode 100644 published/20171120 Containers and Kubernetes Whats next.md delete mode 100644 published/20171124 How to Install Android File Transfer for Linux.md delete mode 100644 published/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md rename published/{201711 => }/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md (100%) delete mode 100644 published/20171130 Search DuckDuckGo from the Command Line.md create mode 100644 sources/tech/20090701 The One in Which I Call Out Hacker News.md delete mode 100644 sources/tech/20130402 Dynamic linker tricks Using LD_PRELOAD to cheat inject features and investigate programs.md delete mode 100644 sources/tech/20160330 How to turn any syscall into an event Introducing eBPF Kernel probes.md delete mode 100644 sources/tech/20160922 A Linux users guide to Logical Volume Management.md delete mode 100644 sources/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md create mode 100644 sources/tech/20170530 How to Improve a Legacy Codebase.md delete mode 100644 sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md create mode 100644 sources/tech/20170622 A users guide to links in the Linux filesystem.md delete mode 100644 sources/tech/20170921 How to answer questions in a helpful way.md delete mode 100644 sources/tech/20171005 How to manage Linux containers with Ansible Container.md delete mode 100644 sources/tech/20171005 Reasons Kubernetes is cool.md delete mode 100644 sources/tech/20171010 Operating a Kubernetes network.md delete mode 100644 sources/tech/20171011 LEAST PRIVILEGE CONTAINER ORCHESTRATION.md create mode 100644 sources/tech/20171020 How Eclipse is advancing IoT development.md delete mode 100644 sources/tech/20171102 Dive into BPF a list of reading material.md delete mode 100644 sources/tech/20171107 GitHub welcomes all CI tools.md delete mode 100644 sources/tech/20171112 Love Your Bugs.md delete mode 100644 sources/tech/20171113 Glitch write fun small web projects instantly.md delete mode 100644 sources/tech/20171114 Sysadmin 101 Patch Management.md delete mode 100644 sources/tech/20171114 Take Linux and Run With It.md delete mode 100644 sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md delete mode 100644 sources/tech/20171115 Why and How to Set an Open Source Strategy.md delete mode 100644 sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md delete mode 100644 sources/tech/20171120 Adopting Kubernetes step by step.md create mode 100644 sources/tech/20171120 Containers and Kubernetes Whats next.md delete mode 100644 sources/tech/20171123 Why microservices are a security issue.md create mode 100644 sources/tech/20171124 Open Source Cloud Skills 
and Certification Are Key for SysAdmins.md delete mode 100644 sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md delete mode 100644 sources/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md delete mode 100644 sources/tech/20171128 The politics of the Linux desktop.md delete mode 100644 sources/tech/20171128 Why Python and Pygame are a great pair for beginning programmers.md delete mode 100644 sources/tech/20171129 10 open source technology trends for 2018.md delete mode 100644 sources/tech/20171129 5 best practices for getting started with DevOps.md delete mode 100644 sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md delete mode 100644 sources/tech/20171129 Inside AGL Familiar Open Source Components Ease Learning Curve.md delete mode 100644 sources/tech/20171129 Interactive Workflows for Cpp with Jupyter.md delete mode 100644 sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md delete mode 100644 sources/tech/20171130 Excellent Business Software Alternatives For Linux.md delete mode 100644 sources/tech/20171130 Scrot Linux command-line screen grabs made simple.md create mode 100644 sources/tech/20171130 Search DuckDuckGo from the Command Line.md delete mode 100644 sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md delete mode 100644 sources/tech/20171130 Wake up and Shut Down Linux Automatically.md delete mode 100644 sources/tech/20171201 Fedora Classroom Session: Ansible 101.md delete mode 100644 sources/tech/20171201 How to Manage Users with Groups in Linux.md delete mode 100644 sources/tech/20171201 How to find a publisher for your tech book.md delete mode 100644 sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md delete mode 100644 sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md delete mode 100644 sources/tech/20171202 Scrot Linux command-line screen grabs made simple delete mode 100644 sources/tech/20171202 docker - Use multi-stage builds.md delete mode 100644 translated/tech/20090701 The One in Which I Call Out Hacker News.md rename {published => translated/tech}/20161216 Kprobes Event Tracing on ARMv8.md (98%) delete mode 100644 translated/tech/20170530 How to Improve a Legacy Codebase.md delete mode 100644 translated/tech/20170910 Cool vim feature sessions.md rename {published => translated/tech}/20171009 Examining network connections on Linux systems.md (100%) delete mode 100644 translated/tech/20171020 How Eclipse is advancing IoT development.md rename {published => translated/tech}/20171029 A block layer introduction part 1 the bio layer.md (95%) delete mode 100644 translated/tech/20171108 Archiving repositories.md delete mode 100644 translated/tech/20171116 Introducing security alerts on GitHub.md delete mode 100644 translated/tech/20171117 System Logs: Understand Your Linux System.md create mode 100644 translated/tech/20171124 How to Install Android File Transfer for Linux.md delete mode 100644 translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md delete mode 100644 translated/tech/20171130 New Feature Find every domain someone owns automatically.md delete mode 100644 translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md delete mode 100644 translated/tech/20171201 Linux Journal Ceases Publication.md delete mode 100644 
translated/tech/Linux Networking Hardware for Beginners: Think Software
diff --git a/published/201711/20141028 When Does Your OS Run.md b/published/20141028 When Does Your OS Run.md
similarity index 100%
rename from published/201711/20141028 When Does Your OS Run.md
rename to published/20141028 When Does Your OS Run.md
diff --git a/published/201711/20170202 Understanding Firewalld in Multi-Zone Configurations.md b/published/20170202 Understanding Firewalld in Multi-Zone Configurations.md
similarity index 100%
rename from published/201711/20170202 Understanding Firewalld in Multi-Zone Configurations.md
rename to published/20170202 Understanding Firewalld in Multi-Zone Configurations.md
diff --git a/published/201711/20170227 Ubuntu Core in LXD containers.md b/published/20170227 Ubuntu Core in LXD containers.md
similarity index 100%
rename from published/201711/20170227 Ubuntu Core in LXD containers.md
rename to published/20170227 Ubuntu Core in LXD containers.md
diff --git a/published/201711/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md b/published/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md
similarity index 100%
rename from published/201711/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md
rename to published/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md
diff --git a/published/201711/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md b/published/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md
similarity index 100%
rename from published/201711/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md
rename to published/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md
diff --git a/published/201711/20170608 The Life-Changing Magic of Tidying Up Code.md b/published/20170608 The Life-Changing Magic of Tidying Up Code.md
similarity index 100%
rename from published/201711/20170608 The Life-Changing Magic of Tidying Up Code.md
rename to published/20170608 The Life-Changing Magic of Tidying Up Code.md
diff --git a/published/20170622 A users guide to links in the Linux filesystem.md b/published/20170622 A users guide to links in the Linux filesystem.md
deleted file mode 100644
index 7d731693d8..0000000000
--- a/published/20170622 A users guide to links in the Linux filesystem.md
+++ /dev/null
@@ -1,300 +0,0 @@
-A user's guide to links in the Linux filesystem
-============================================================
-
-> Learn how to use links, which make everyday tasks easier by letting you access files from multiple places in the Linux filesystem.
-
-![A user's guide to links in the Linux filesystem](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/links.png?itok=enaPOi4L "A user's guide to links in the Linux filesystem")
-
-Image by : [Paul Lewin][8]. Modified by Opensource.com. [CC BY-SA 2.0][9]
-
-In the articles I have written for opensource.com about various aspects of Linux filesystems, including [An introduction to Linux's EXT4 filesystem][10], [Managing devices in Linux][11], [An overview of Linux filesystems][12], and [A user's guide to Logical Volume Management][13], I briefly mentioned an interesting feature of Linux filesystems: it lets users simplify some tasks by providing access to files from multiple locations in the Linux directory tree.
-
-Linux filesystems have two kinds of links: hard links and soft links. Although the two differ significantly, both are used to solve similar problems. Each provides multiple directory entries (references) to a single file, but they do so quite differently. The power of links gives Linux filesystems real flexibility, because [everything is a file][14].
-
-As an example, I once found that some programs required a particular version of a library in order to run. After an upgrade replaced the old library, the program crashed, complaining that the old, now-missing library was required. Usually the only change in the library name was the version number. Acting on a hunch, I simply added a link to the new library but named the link after the old library. I tried the program again and it ran perfectly. The program, as it happens, was a game, and everyone knows how far gamers will go to keep their games running.
-
-In fact, almost all applications link to libraries using a generic naming convention in which the link name contains the major version number, while the file the link points to also carries the minor version number in its name. In other cases, files that programs require have been moved from one directory to another to comply with the Linux filesystem specification, and links to those files were left in the old directory for backward compatibility with programs that cannot yet find them in their new location. If you do a long listing of the `/lib64` directory, you will find many examples of both.
-
-```
-lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.hwm -> ../../usr/share/cracklib/pw_dict.hwm
-lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwd -> ../../usr/share/cracklib/pw_dict.pwd
-lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwi -> ../../usr/share/cracklib/pw_dict.pwi
-lrwxrwxrwx. 1 root root 27 Jun 9 2016 libaccountsservice.so.0 -> libaccountsservice.so.0.0.0
--rwxr-xr-x. 1 root root 288456 Jun 9 2016 libaccountsservice.so.0.0.0
-lrwxrwxrwx 1 root root 15 May 17 11:47 libacl.so.1 -> libacl.so.1.1.0
--rwxr-xr-x 1 root root 36472 May 17 11:47 libacl.so.1.1.0
-lrwxrwxrwx. 1 root root 15 Feb 4 2016 libaio.so.1 -> libaio.so.1.0.1
--rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.0
--rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.1
-lrwxrwxrwx. 1 root root 30 Jan 16 16:39 libakonadi-calendar.so.4 -> libakonadi-calendar.so.4.14.26
--rwxr-xr-x. 1 root root 816160 Jan 16 16:39 libakonadi-calendar.so.4.14.26
-lrwxrwxrwx. 1 root root 29 Jan 16 16:39 libakonadi-contact.so.4 -> libakonadi-contact.so.4.14.26
-```
-
-A few of the links in the `/lib64` directory
-
-In the listing of `/lib64` shown above, the letter `l` (lowercase L) at the start of the file mode string indicates that the entry is a soft (symbolic) link.
-
-### Hard links
-
-In [An introduction to Linux's EXT4 filesystem][15], I discussed the fact that each file has one inode that contains information about that file, including the location of the data belonging to it. [Figure 2][16] in that article shows a single directory entry pointing to the inode. Every file has at least one directory entry that points to the inode describing it, and a directory entry is a hard link; thus, every file has at least one hard link.
-
-In Figure 1 below, multiple directory entries point to a single inode. These directory entries are all hard links. I have abbreviated three of their locations using the tilde (`~`) convention for the home directory, so in this example `~` is equivalent to `/home/user`. Note that the fourth directory entry is in a completely different directory, `/home/shared`, which might be a location for sharing files among the computer's users.
-
-![fig1directory_entries.png](https://opensource.com/sites/default/files/images/life/fig1directory_entries.png)
-
-*Figure 1*
-
-Hard links are limited to files contained within a single filesystem. "Filesystem" here means a partition or logical volume mounted on a specified mount point, in this case `/home`. That is because inode numbers are unique only within each filesystem; a different filesystem, `/var` or `/opt` for example, will have inode numbers identical to those in `/home`.
-
-Because all the hard links point to the single inode that contains the file's metadata, attributes such as ownership, permissions, and the number of hard links to the inode are part of the file itself and cannot differ from one hard link to another. It is one file with one set of attributes. The only thing that distinguishes the links is the file name, which lives in the directory entry rather than in the inode itself. Hard links to a single file/inode that are located in the same directory must have different names, because there can be no duplicate file names within a single directory.
-
-The number of hard links for a file is displayed by the `ls -l` command. If you want to see the actual inode numbers, use `ls -li` instead.
-
-### Symbolic (soft) links
-
-The difference between a hard link and a soft link, also known as a symbolic link (or symlink), is that while a hard link points directly to the inode belonging to the file, a soft link points to a directory entry, that is, to one of the file's hard links. Because soft links point to a hard link for the file and not to the inode, they do not depend on the inode number and can work across filesystems, spanning partitions and logical volumes.
-
-The downside is that once the hard link a soft link points to is deleted or renamed, the soft link is broken. The soft link is still there, but the hard link it points to no longer exists. Fortunately, the `ls` command highlights broken soft links in its listings with white text on a red background.
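-Before moving on to the hands-on project, note that you can also inspect these structures directly with two small tools. The sketch below is an editor's addition rather than part of the original article; it assumes GNU coreutils, and `somefile` and `somelink` are placeholder names.
-
-```
-# Print a file's metadata; the "Inode:" and "Links:" fields show
-# the inode number and the current hard link count.
-stat somefile
-
-# The same two fields in a compact, script-friendly form.
-stat -c 'inode=%i links=%h' somefile
-
-# For a symbolic link, print the target text stored in the link itself.
-readlink somelink
-```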
-
-### Lab project: experimenting with links
-
-I think the easiest way to understand the use of, and differences between, hard and soft links is with a lab project you can do yourself. This project should be done in an empty directory as a non-root user. I created the `~/temp` directory for this experiment, and you should, too. It provides a safe place for the project and a new, empty directory to work in, so that only files associated with this project will be located there.
-
-#### Initial setup
-
-First, create the temporary directory in which you will perform the tasks for this project. Ensure that the present working directory (PWD) is your home directory, then enter the following command.
-
-```
-mkdir temp
-```
-
-Change into `~/temp` to make it the PWD with this command.
-
-```
-cd temp
-```
-
-To get started, we need to create a file we can link to. The following command does that and provides some content as well.
-
-```
-du -h > main.file.txt
-```
-
-Use the `ls -l` long listing to verify that the file was created correctly. The results should look similar to mine. Note that the file size is only 7 bytes, but yours may vary by a byte or two.
-
-```
-[dboth@david temp]$ ls -l
-total 4
--rw-rw-r-- 1 dboth dboth 7 Jun 13 07:34 main.file.txt
-```
-
-In the listing, the number `1` following the file mode string is the number of hard links that exist for the file. For now it should be 1, because we have not yet created any additional hard links to this test file.
-
-#### Experimenting with hard links
-
-Creating a hard link creates a new directory entry pointing to the same inode, so when hard links are added to a file, you will see the link count increase. Ensure that the PWD is still `~/temp`. Create a hard link to `main.file.txt`, then do another long listing of the directory.
-
-```
-[dboth@david temp]$ ln main.file.txt link1.file.txt
-[dboth@david temp]$ ls -l
-total 8
--rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 link1.file.txt
--rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 main.file.txt
-```
-
-Both files in the directory now have two links and are exactly the same size. The date stamps are also the same. This is really one file with one inode and two hard links, i.e., two directory entries pointing to it. Create a second hard link to this file and list the directory contents. You can create the link to either of the existing names: `link1.file.txt` or `main.file.txt`.
-
-```
-[dboth@david temp]$ ln link1.file.txt link2.file.txt ; ls -l
-total 16
--rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link1.file.txt
--rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link2.file.txt
--rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 main.file.txt
-```
-
-Notice that each hard link in this directory must have a different name, because two files in the same directory cannot have the same name. Try to create another hard link with the same name as an existing one.
-
-```
-[dboth@david temp]$ ln main.file.txt link2.file.txt
-ln: failed to create hard link 'link2.file.txt': File exists
-```
-
-Clearly that does not work, because `link2.file.txt` already exists. So far, we have created hard links only within the same directory. Now create a link in your home directory, the parent of the temp directory.
-
-```
-[dboth@david temp]$ ln main.file.txt ../main.file.txt ; ls -l ../main*
--rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
-```
-
-The `ls` command above shows that the file `main.file.txt` does exist in the home directory, with the same name as the file in the temp directory. Of course these are not different files; they are two links, two directory entries, pointing to the same file. To help illustrate the next point, add a file that is not a link to the temp directory.
-
-```
-[dboth@david temp]$ touch unlinked.file ; ls -l
-total 12
--rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
--rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
--rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
--rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
-```
-
-Look at the inode numbers of the hard links and of the newly created file, using the `-i` option to the `ls` command.
-
-```
-[dboth@david temp]$ ls -li
-total 12
-657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
-657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
-657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
-657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
-```
-
-Notice the number `657024` to the left of the file mode in the listing above. That is the inode number of the one file to which the three hard links point. You can also use the `-i` option on the link you created in your home directory, and you will see the same value there. The inode of the file that has only one link is different from the others. Note that the inode numbers on your system will differ from mine.
-
-Now let's change the size of one of the hard-linked files.
-
-```
-[dboth@david temp]$ df -h > link2.file.txt ; ls -li
-total 12
-657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
-657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
-657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
-657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
-```
-
-All of the hard-linked files are now larger than before, because the multiple directory entries all link to the same file.
-
-I know the next experiment will work on my computer because my `/tmp` directory is on a separate logical volume. If you have a separate logical volume, or a filesystem on a different partition (if you are not using logical volumes), determine whether you have access to that partition or logical volume. If you do not, try mounting a USB memory stick on your computer. If one of those options works for you, you can do this experiment.
-
-Try to create a link in the `/tmp` directory (or wherever your other filesystem is located) to one of the files in `~/temp`.
-
-```
-[dboth@david temp]$ ln link2.file.txt /tmp/link3.file.txt
-ln: failed to create hard link '/tmp/link3.file.txt' => 'link2.file.txt':
-Invalid cross-device link
-```
-
-Why does this error occur? The reason is that each separately mountable filesystem has its own set of inode numbers. Simply referring to a file by its inode number across the entire Linux directory structure would confuse the system, because the same inode numbers exist in every mounted filesystem.
-
-There may be a time when you want to locate all the hard links that belong to a single inode. You can find the inode number with `ls -li`, then use the `find` command to locate all links with that inode number.
-
-```
-[dboth@david temp]$ find . -inum 657024
-./main.file.txt
-./link1.file.txt
-./link2.file.txt
-```
-
-Notice that the `find` command did not locate all four of the hard links to this inode, because we started the search in the `~/temp` directory. The `find` command only finds files in the PWD and its subdirectories. To find all the links, use the following command, which specifies your home directory as the starting place for the search.
-
-```
-[dboth@david temp]$ find ~ -samefile main.file.txt
-/home/dboth/temp/main.file.txt
-/home/dboth/temp/link1.file.txt
-/home/dboth/temp/link2.file.txt
-/home/dboth/main.file.txt
-```
-
-You may see error messages if you do not have permission, as a non-root user, to read some directories. This command also used the `-samefile` option instead of specifying the inode number. It works the same as using the inode number and can be easier, if you know the name of one of the hard links.
-
-#### Experimenting with soft links
-
-As you have just seen, creating hard links across filesystem boundaries is not possible, that is, from a filesystem on one logical volume or partition to a filesystem on another. Soft links offer a way around that problem. Although they can accomplish the same ends as hard links, they are very different, and knowing the differences is important.
-
-Let's start our exploration by creating a symbolic link in the `~/temp` directory.
-
-```
-[dboth@david temp]$ ln -s link2.file.txt link3.file.txt ; ls -li
-total 12
-657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
-657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
-658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
-link2.file.txt
-657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
-657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
-```
-
-The hard links, those with inode number `657024`, are unchanged, and the number of hard links has not changed. The newly created symlink has a different inode, number `658270`. The soft link named `link3.file.txt` points to the file `link2.file.txt`. Use the `cat` command to display the contents of `link3.file.txt`. The file-mode information for the symlink starts with the letter `l` (lowercase L), which indicates that this file is actually a symbolic link.
-
-The size of the symlink `link3.file.txt` in the listing above is only 14 bytes. That is the size of the text `link2.file.txt`, which is the actual content of the directory entry. The directory entry `link3.file.txt` does not point to an inode; it points to another directory entry, which makes it useful for creating links that span filesystem boundaries. So now let's create the link into `/tmp` that we tried before.
-
-```
-[dboth@david temp]$ ln -s /home/dboth/temp/link2.file.txt
-/tmp/link3.file.txt ; ls -l /tmp/link*
-lrwxrwxrwx 1 dboth dboth 31 Jun 14 21:53 /tmp/link3.file.txt ->
-/home/dboth/temp/link2.file.txt
-```
-
-#### Deleting links
-
-There are some things you should consider when deleting hard links, or the files the hard links point to.
-
-First, let's delete the hard link file `main.file.txt`. Remember that every directory entry pointing to an inode is simply a hard link.
-
-```
-[dboth@david temp]$ rm main.file.txt ; ls -li
-total 8
-657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link1.file.txt
-657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link2.file.txt
-658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
-link2.file.txt
-657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
-```
-
-The link `main.file.txt` was the first hard link created when the file itself was created. Deleting it now still leaves the original file and its data on the hard drive, along with all the remaining hard links. To delete the file and its data, you would have to delete all of its hard links.
-
-Now delete the `link2.file.txt` hard link.
-
-```
-[dboth@david temp]$ rm link2.file.txt ; ls -li
-total 8
-657024 -rw-rw-r-- 2 dboth dboth 1157 Jun 14 14:14 link1.file.txt
-658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
-link2.file.txt
-657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
-```
-
-Notice what happened to the soft link. Deleting the hard link it pointed to leaves a broken link. On my system, the broken link is highlighted and the target hard link flashes. If the broken link needs to be fixed, you can create another hard link in the same directory with the same name as the old one, so long as not all of the hard links have been deleted. You could also recreate the link itself, keeping the same name but pointing it at one of the remaining hard links. Of course, if the soft link is no longer needed, it can be deleted with the `rm` command.
-
-The `unlink` command can also be used to delete files and links. It is very simple and has no options, as the `rm` command does. It does, however, more accurately reflect the underlying process of deletion, in that it removes the link, the directory entry, to the file being deleted.
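-To make that repair concrete, here is a quick sketch (an editor's addition, not part of the original lab) that retargets the broken symlink at a surviving hard link; it assumes GNU `ln` and the file names used above.
-
-```
-# -s: make a symbolic link; -f: replace the existing destination;
-# -n: treat an existing symlink destination as a plain file rather
-#     than following it.
-ln -sfn link1.file.txt link3.file.txt
-
-# Verify the repair: the listing no longer flags the link as broken,
-# and cat follows it to the file's contents again.
-ls -l link3.file.txt
-cat link3.file.txt
-```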
-
-### Final thoughts
-
-After working with both types of links for a long time, I have come to understand their capabilities and their quirks. I wrote a lab project for a Linux class I taught so that students could fully understand how links work, and I hope it has improved your understanding as well.
-
--------------------------------------------------------------------------------
-
-About the author:
-
-David Both - David Both is a Linux and open source advocate who lives in Raleigh, North Carolina. He has been in the IT industry for over forty years, including more than 20 years working on OS/2 at IBM. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and open source software for nearly 20 years.
-
---------------------------------
-
-via: https://opensource.com/article/17/6/linking-linux-filesystem
-
-Author: [David Both][a]
-Translator: [yongshouzhang](https://github.com/yongshouzhang)
-Proofreader: [wxy](https://github.com/wxy)
-
-This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/)
-
-[a]:https://opensource.com/users/dboth
-[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
-[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
-[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
-[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?src=linux_resource_menu&intcmp=7016000000127cYAAQ
-[5]:https://opensource.com/tags/linux?src=linux_resource_menu
-[6]:https://opensource.com/article/17/6/linking-linux-filesystem?rate=YebHxA-zgNopDQKKOyX3_r25hGvnZms_33sYBUq-SMM
-[7]:https://opensource.com/user/14106/feed
-[8]:https://www.flickr.com/photos/digypho/7905320090
-[9]:https://creativecommons.org/licenses/by/2.0/
-[10]:https://linux.cn/article-8685-1.html
-[11]:https://linux.cn/article-8099-1.html
-[12]:https://linux.cn/article-8887-1.html
-[13]:https://opensource.com/business/16/9/linux-users-guide-lvm
-[14]:https://opensource.com/life/15/9/everything-is-a-file
-[15]:https://linux.cn/article-8685-1.html
-[16]:https://linux.cn/article-8685-1.html#3_19182
-[17]:https://opensource.com/users/dboth
-[18]:https://opensource.com/article/17/6/linking-linux-filesystem#comments
diff --git a/published/201711/20170706 Wildcard Certificates Coming January 2018.md b/published/20170706 Wildcard Certificates Coming January 2018.md
similarity index 100%
rename from published/201711/20170706 Wildcard Certificates Coming January 2018.md
rename to published/20170706 Wildcard Certificates Coming January 2018.md
diff --git a/published/201711/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md b/published/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md
similarity index 100%
rename from published/201711/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md
rename to published/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md
diff --git a/published/201711/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md b/published/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md
similarity index 100%
rename from published/201711/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md
rename to published/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md
diff --git a/published/201711/20170928 3 Python web scrapers and crawlers.md b/published/20170928 3 Python web scrapers and crawlers.md
similarity index 100%
rename from published/201711/20170928 3 Python web scrapers and crawlers.md
rename to published/20170928 3 Python web scrapers and crawlers.md
diff --git a/published/201711/20171002 Scaling the GitLab database.md b/published/20171002 Scaling the GitLab database.md
similarity index 100%
rename from published/201711/20171002 Scaling the GitLab database.md
rename to published/20171002 Scaling the GitLab database.md
diff --git a/published/201711/20171003 PostgreSQL Hash Indexes Are Now Cool.md b/published/20171003 PostgreSQL Hash Indexes Are Now Cool.md
similarity index 100%
rename from published/201711/20171003 PostgreSQL Hash Indexes Are Now Cool.md
rename to published/20171003 PostgreSQL Hash Indexes Are Now Cool.md
diff --git a/published/201711/20171004 No the Linux desktop hasnt jumped in popularity.md b/published/20171004 No the Linux desktop hasnt jumped in popularity.md
similarity index 100%
rename from published/201711/20171004 No the Linux desktop hasnt jumped in popularity.md
rename to published/20171004 No the Linux desktop hasnt jumped in
popularity.md diff --git a/published/201711/20171007 Instant 100 command line productivity boost.md b/published/20171007 Instant 100 command line productivity boost.md similarity index 100% rename from published/201711/20171007 Instant 100 command line productivity boost.md rename to published/20171007 Instant 100 command line productivity boost.md diff --git a/published/201711/20171008 8 best languages to blog about.md b/published/20171008 8 best languages to blog about.md similarity index 100% rename from published/201711/20171008 8 best languages to blog about.md rename to published/20171008 8 best languages to blog about.md diff --git a/published/201711/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md b/published/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md similarity index 100% rename from published/201711/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md rename to published/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md diff --git a/published/201711/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md b/published/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md similarity index 100% rename from published/201711/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md rename to published/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md diff --git a/published/201711/20171011 How to set up a Postgres database on a Raspberry Pi.md b/published/20171011 How to set up a Postgres database on a Raspberry Pi.md similarity index 100% rename from published/201711/20171011 How to set up a Postgres database on a Raspberry Pi.md rename to published/20171011 How to set up a Postgres database on a Raspberry Pi.md diff --git a/published/201711/20171011 Why Linux Works.md b/published/20171011 Why Linux Works.md similarity index 100% rename from published/201711/20171011 Why Linux Works.md rename to published/20171011 Why Linux Works.md diff --git a/published/201711/20171013 6 reasons open source is good for business.md b/published/20171013 6 reasons open source is good for business.md similarity index 100% rename from published/201711/20171013 6 reasons open source is good for business.md rename to published/20171013 6 reasons open source is good for business.md diff --git a/published/201711/20171013 Best of PostgreSQL 10 for the DBA.md b/published/20171013 Best of PostgreSQL 10 for the DBA.md similarity index 100% rename from published/201711/20171013 Best of PostgreSQL 10 for the DBA.md rename to published/20171013 Best of PostgreSQL 10 for the DBA.md diff --git a/published/201711/20171015 How to implement cloud-native computing with Kubernetes.md b/published/20171015 How to implement cloud-native computing with Kubernetes.md similarity index 100% rename from published/201711/20171015 How to implement cloud-native computing with Kubernetes.md rename to published/20171015 How to implement cloud-native computing with Kubernetes.md diff --git a/published/201711/20171015 Monitoring Slow SQL Queries via Slack.md b/published/20171015 Monitoring Slow SQL Queries via Slack.md similarity index 100% rename from published/201711/20171015 Monitoring Slow SQL Queries via Slack.md rename to published/20171015 Monitoring Slow SQL Queries via Slack.md diff --git a/published/201711/20171015 Why Use Docker with R A DevOps Perspective.md b/published/20171015 Why Use Docker with R A 
DevOps Perspective.md similarity index 100% rename from published/201711/20171015 Why Use Docker with R A DevOps Perspective.md rename to published/20171015 Why Use Docker with R A DevOps Perspective.md diff --git a/published/201711/20171016 Introducing CRI-O 1.0.md b/published/20171016 Introducing CRI-O 1.0.md similarity index 100% rename from published/201711/20171016 Introducing CRI-O 1.0.md rename to published/20171016 Introducing CRI-O 1.0.md diff --git a/published/201711/20171017 A tour of Postgres Index Types.md b/published/20171017 A tour of Postgres Index Types.md similarity index 100% rename from published/201711/20171017 A tour of Postgres Index Types.md rename to published/20171017 A tour of Postgres Index Types.md diff --git a/published/201711/20171017 Image Processing on Linux.md b/published/20171017 Image Processing on Linux.md similarity index 100% rename from published/201711/20171017 Image Processing on Linux.md rename to published/20171017 Image Processing on Linux.md diff --git a/published/201711/20171018 How containers and microservices change security.md b/published/20171018 How containers and microservices change security.md similarity index 100% rename from published/201711/20171018 How containers and microservices change security.md rename to published/20171018 How containers and microservices change security.md diff --git a/published/201711/20171018 Learn how to program in Python by building a simple dice game.md b/published/20171018 Learn how to program in Python by building a simple dice game.md similarity index 100% rename from published/201711/20171018 Learn how to program in Python by building a simple dice game.md rename to published/20171018 Learn how to program in Python by building a simple dice game.md diff --git a/published/201711/20171018 Tips to Secure Your Network in the Wake of KRACK.md b/published/20171018 Tips to Secure Your Network in the Wake of KRACK.md similarity index 100% rename from published/201711/20171018 Tips to Secure Your Network in the Wake of KRACK.md rename to published/20171018 Tips to Secure Your Network in the Wake of KRACK.md diff --git a/published/201711/20171019 3 Simple Excellent Linux Network Monitors.md b/published/20171019 3 Simple Excellent Linux Network Monitors.md similarity index 100% rename from published/201711/20171019 3 Simple Excellent Linux Network Monitors.md rename to published/20171019 3 Simple Excellent Linux Network Monitors.md diff --git a/published/201711/20171019 How to manage Docker containers in Kubernetes with Java.md b/published/20171019 How to manage Docker containers in Kubernetes with Java.md similarity index 100% rename from published/201711/20171019 How to manage Docker containers in Kubernetes with Java.md rename to published/20171019 How to manage Docker containers in Kubernetes with Java.md diff --git a/published/201711/20171020 3 Tools to Help You Remember Linux Commands.md b/published/20171020 3 Tools to Help You Remember Linux Commands.md similarity index 100% rename from published/201711/20171020 3 Tools to Help You Remember Linux Commands.md rename to published/20171020 3 Tools to Help You Remember Linux Commands.md diff --git a/published/201711/20171020 Running Android on Top of a Linux Graphics Stack.md b/published/20171020 Running Android on Top of a Linux Graphics Stack.md similarity index 100% rename from published/201711/20171020 Running Android on Top of a Linux Graphics Stack.md rename to published/20171020 Running Android on Top of a Linux Graphics Stack.md diff --git 
a/published/201711/20171024 Top 5 Linux pain points in 2017.md b/published/20171024 Top 5 Linux pain points in 2017.md similarity index 100% rename from published/201711/20171024 Top 5 Linux pain points in 2017.md rename to published/20171024 Top 5 Linux pain points in 2017.md diff --git a/published/201711/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md b/published/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md similarity index 100% rename from published/201711/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md rename to published/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md diff --git a/published/201711/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md b/published/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md similarity index 100% rename from published/201711/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md rename to published/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md diff --git a/published/201711/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md b/published/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md similarity index 100% rename from published/201711/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md rename to published/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md diff --git a/published/201711/20171026 But I dont know what a container is .md b/published/20171026 But I dont know what a container is .md similarity index 100% rename from published/201711/20171026 But I dont know what a container is .md rename to published/20171026 But I dont know what a container is .md diff --git a/published/201711/20171026 Why is Kubernetes so popular.md b/published/20171026 Why is Kubernetes so popular.md similarity index 100% rename from published/201711/20171026 Why is Kubernetes so popular.md rename to published/20171026 Why is Kubernetes so popular.md diff --git a/published/201711/20171101 How to use cron in Linux.md b/published/20171101 How to use cron in Linux.md similarity index 100% rename from published/201711/20171101 How to use cron in Linux.md rename to published/20171101 How to use cron in Linux.md diff --git a/published/201711/20171101 We re switching to a DCO for source code contributions.md b/published/20171101 We re switching to a DCO for source code contributions.md similarity index 100% rename from published/201711/20171101 We re switching to a DCO for source code contributions.md rename to published/20171101 We re switching to a DCO for source code contributions.md diff --git a/published/201711/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md b/published/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md similarity index 100% rename from published/201711/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md rename to published/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md diff --git a/published/201711/20171106 Finding Files with mlocate.md b/published/20171106 Finding Files with mlocate.md similarity index 100% rename from published/201711/20171106 Finding Files with mlocate.md rename to published/20171106 Finding Files with mlocate.md diff --git 
a/published/201711/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md b/published/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md similarity index 100% rename from published/201711/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md rename to published/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md diff --git a/published/201711/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md b/published/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md similarity index 100% rename from published/201711/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md rename to published/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md diff --git a/published/201711/20171107 AWS adopts home-brewed KVM as new hypervisor.md b/published/20171107 AWS adopts home-brewed KVM as new hypervisor.md similarity index 100% rename from published/201711/20171107 AWS adopts home-brewed KVM as new hypervisor.md rename to published/20171107 AWS adopts home-brewed KVM as new hypervisor.md diff --git a/published/201711/20171107 How I created my first RPM package in Fedora.md b/published/20171107 How I created my first RPM package in Fedora.md similarity index 100% rename from published/201711/20171107 How I created my first RPM package in Fedora.md rename to published/20171107 How I created my first RPM package in Fedora.md diff --git a/published/201711/20171108 Build and test applications with Ansible Container.md b/published/20171108 Build and test applications with Ansible Container.md similarity index 100% rename from published/201711/20171108 Build and test applications with Ansible Container.md rename to published/20171108 Build and test applications with Ansible Container.md diff --git a/published/201711/20171110 File better bugs with coredumpctl.md b/published/20171110 File better bugs with coredumpctl.md similarity index 100% rename from published/201711/20171110 File better bugs with coredumpctl.md rename to published/20171110 File better bugs with coredumpctl.md diff --git a/published/201711/20171114 ​Linux totally dominates supercomputers.md b/published/20171114 ​Linux totally dominates supercomputers.md similarity index 100% rename from published/201711/20171114 ​Linux totally dominates supercomputers.md rename to published/20171114 ​Linux totally dominates supercomputers.md diff --git a/published/201711/20171116 5 Coolest Linux Terminal Emulators.md b/published/20171116 5 Coolest Linux Terminal Emulators.md similarity index 100% rename from published/201711/20171116 5 Coolest Linux Terminal Emulators.md rename to published/20171116 5 Coolest Linux Terminal Emulators.md diff --git a/published/201711/20171117 How to Easily Remember Linux Commands.md b/published/20171117 How to Easily Remember Linux Commands.md similarity index 100% rename from published/201711/20171117 How to Easily Remember Linux Commands.md rename to published/20171117 How to Easily Remember Linux Commands.md diff --git a/published/201711/20171118 Getting started with OpenFaaS on minikube.md b/published/20171118 Getting started with OpenFaaS on minikube.md similarity index 100% rename from published/201711/20171118 Getting started with OpenFaaS on minikube.md rename to published/20171118 Getting started with OpenFaaS on minikube.md diff --git a/published/20171120 Containers and Kubernetes Whats next.md 
b/published/20171120 Containers and Kubernetes Whats next.md
deleted file mode 100644
index 57f9379f7b..0000000000
--- a/published/20171120 Containers and Kubernetes Whats next.md
+++ /dev/null
@@ -1,81 +0,0 @@
-Containers and Kubernetes: What's next?
-============================================================
-> Want to know where container orchestration and K8S are headed? Here is what the experts say.
-
-![CIO_Big Data Decisions_2](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_Big%20Data%20Decisions_2.png?itok=Y5zMHxf8 "CIO_Big Data Decisions_2")
-
-If you want a big-picture sense of where containers are headed, follow the money. There is plenty of it: research projects that container technology will account for a [$2.7 billion][4] market by 2020, versus total container-related spending of $762 million in 2016, only about a third of the 2020 estimate. Behind that huge investment are some obvious fundamentals, including the rapid growth of containerization itself and the broader trend toward parallelization. As containers are adopted at scale, container orchestration naturally gets adopted right along with them.
-
-Survey data from [The New Stack][5] shows that container adoption is the main catalyst for orchestration adoption. Among respondents already running containers in production, 60 percent are using Kubernetes (K8S) extensively in production, and another 19 percent say they are in the early stages of deploying K8S. Among those at the initial stage of container deployment, only 5 percent say they are using K8S, but 58 percent say they are planning and preparing to use it. All in all, containers and Kubernetes go together like the chicken and the egg, each reinforcing the other. Experts broadly agree that orchestration tools are vital to the [long-term management][6] of containers and to the market's growth. As Alex Robinson of [Cockroach Labs][7] puts it, the broader adoption of container orchestration is the overall trend. This is without question a rapidly evolving field with enormous potential ahead. With that in mind, we interviewed Robinson and other practitioners and advocates of container technology for their view, as hands-on users, of what comes next for container orchestration and K8S.
-
-### Container orchestration goes mainstream
-
-As with any important technology transition, we are standing at the top of a cliff, about to cross from the stumbling early trek onto a vast, open plain. That broad new territory, and plain, practical application demand, will push the technology quickly into mainstream use, especially in large enterprise environments. As Alex Robinson says, the gold-rush phase of container technology has passed; the early burst of innovation is slowing, and what follows is strong market demand for stability and usability. That means we will not see a flood of brand-new orchestration systems, but rather more security solutions for containers, richer management tooling, and new features built on today's mainstream orchestrators.
-
-### Better usability
-
-People will invest heavily in simplifying container deployment, because the initial work is still fairly complex for many companies and organizations, and the [long-term management][8] of containers in particular takes a great deal of effort. As My Karlsson of [Codemill AB][9] says, container orchestration is still too complex, which keeps many users from confidently driving it and taking full advantage of its features. Many newcomers to containers have to spend a lot of effort, and take many detours, just to stand up small-scale or single, isolated containers. This is even more pronounced with applications that were not designed or optimized for containers. There are many improvements that could simplify orchestration, and they will make the technology considerably more usable.
-
-### More focus on hybrid cloud and multi-cloud
-
-As containers and orchestration see wider use, more organizations will expand their existing container deployments, moving from non-critical systems deployed in a single environment toward more [complex use cases][10]. For many companies, that means learning to manage containerized applications and microservices globally across [hybrid cloud][11] and [multi-cloud][12] environments. As [Brian Gracely][13], [director of product strategy for Red Hat OpenShift][14], says: "Containers and K8S have let us achieve hybrid cloud and application portability. Combined with the Open Service Broker API, more and more new applications that combine private and public cloud resources will emerge."
-According to Carlos Sanchez, senior engineer at [CloudBees][15], Federation will get a major push, making long-awaited features such as multi-region and multi-cloud deployments possible.
-
-**[ Want to know how CIOs are thinking about hybrid cloud and multi-cloud strategy? See our related resource, [Hybrid Cloud: The IT leader's guide][16]. ]**
-
-### Ongoing consolidation and hardening of platforms and tools
-
-Ongoing consolidation and hardening are the norm for any technology, and container orchestration is no exception. As Ben Newton, lead analyst at [Sumo Logic][17], puts it, as containerization goes mainstream, software engineers are consolidating on a very small number of technologies to run their microservices. Containers and K8S will, without question, become the mainstream orchestration platforms and easily outpace the smaller niche alternatives. Because K8S offers a fairly clear way out of any particular cloud vendor's ecosystem, it will be adopted by a great many companies and gradually form a "cloud-neutral" layer that does not depend on any single cloud service.
-
-### What's next for K8S
-
-Gadi Naor, CTO and co-founder of [Alcide][18], says K8S is a technology with a long, promising future ahead of it; even with the community pushing it forward, K8S still has a long way to go.
-
-Experts also made the following predictions about the [increasingly popular K8S platform][19]:
-
-**_Gadi Naor of Alcide:_** "Operators will keep evolving and maturing, to the point where applications running on K8S can be fully self-operating. With [OpenTracing][20] and service-mesh architectures built on technologies such as [istio][21], deploying and monitoring microservices on K8S will open up many new possibilities."
-
-**_Brian Gracely of Red Hat:_** "K8S keeps broadening the kinds of applications it can support. In the future you will be able to run not only traditional applications on K8S, but also cloud-native applications, big-data applications, and HPC or GPU-based applications, which opens endless possibilities for flexible architectures."
-
-**_Ben Newton of Sumo Logic:_** "As K8S becomes the dominant platform, I expect more operational mechanisms to become unified, especially K8S integrations with third-party management and monitoring platforms."
-
-**_Carlos Sanchez of CloudBees:_** "In the near future we will see systems that do not depend on Docker and use other runtimes, which will help eliminate any potential lock-in." [Editor's note: [CRI-O][22] is one example to watch.] "And I expect more new storage features aimed at enterprise environments, including data snapshots and online expansion of disk volumes."
-
-**_Alex Robinson of Cockroach Labs:_** "One major topic the K8S community is discussing is strengthening the management of [stateful applications][23]. Today, managing state on K8S is still very difficult unless your cloud provider offers remote persistent disks. Many efforts are under way on multiple fronts to improve this, both inside the K8S platform and from external vendors."
-
-------------------------------------------------------------------------------
-
-via: https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next
-
-Author: [Kevin Casey][a]
-Translator: [yunfengHe](https://github.com/yunfengHe)
-Proofreader: [wxy](https://github.com/wxy)
-
-This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/)
-
-[a]:https://enterprisersproject.com/user/kevin-casey
-[1]:https://enterprisersproject.com/article/2017/11/kubernetes-numbers-10-compelling-stats
-[2]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity
-[3]:https://enterprisersproject.com/article/2017/11/5-kubernetes-success-tips-start-smart?sc_cid=70160000000h0aXAAQ
-[4]:https://451research.com/images/Marketing/press_releases/Application-container-market-will-reach-2-7bn-in-2020_final_graphic.pdf
-[5]:https://thenewstack.io/
-[6]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul
-[7]:https://www.cockroachlabs.com/
-[8]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul
-[9]:https://codemill.se/
-[10]:https://www.redhat.com/en/challenges/integration?intcmp=701f2000000tjyaAAA
-[11]:https://enterprisersproject.com/hybrid-cloud
-[12]:https://enterprisersproject.com/article/2017/7/multi-cloud-vs-hybrid-cloud-whats-difference
-[13]:https://enterprisersproject.com/user/brian-gracely
-[14]:https://www.redhat.com/en
-[15]:https://www.cloudbees.com/
-[16]:https://enterprisersproject.com/hybrid-cloud?sc_cid=70160000000h0aXAAQ
-[17]:https://www.sumologic.com/
-[18]:http://alcide.io/
-[19]:https://enterprisersproject.com/article/2017/10/how-explain-kubernetes-plain-english
-[20]:http://opentracing.io/
-[21]:https://istio.io/
-[22]:http://cri-o.io/
-[23]:https://opensource.com/article/17/2/stateful-applications
-[24]:https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next?rate=PBQHhF4zPRHcq2KybE1bQgMkS2bzmNzcW2RXSVItmw8
-[25]:https://enterprisersproject.com/user/kevin-casey
diff --git a/published/20171124 How to Install Android File Transfer for Linux.md b/published/20171124 How to Install Android File Transfer for Linux.md
deleted file mode 100644
index 3cdb372c93..0000000000
--- a/published/20171124 How to Install Android File Transfer for Linux.md
+++ /dev/null
@@ -1,75 +0,0 @@
-How to Install Android File Transfer for Linux
-===============
-
-If you are trying to connect your Android phone to Ubuntu, you may want to give Android File Transfer for Linux a try.
-
-Essentially, this app is a clone of Google's macOS-only Android File Transfer tool. It is written in Qt, and its clean, simple user interface makes it easy to transfer files and folders between Ubuntu and an Android phone.
-
-Now, some of you may be wondering what this app can do that Nautilus (Ubuntu's default file manager) cannot, and the answer is: nothing.
-
-When I connect my Nexus 5X (remember to choose the [Media Transfer Protocol (MTP)][7] option) to Ubuntu, then with the help of [GVfs][8] (LCTT translator's note: the virtual filesystem for the GNOME desktop) I can open, browse, and manage my phone as if it were an ordinary USB drive.
-
-[![Nautilus MTP integration with a Nexus 5X](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg)][9]
-
-But *some* users run into problems with certain MTP features in the default file manager: folders that do not load correctly, newly created folders that do not persist, or phones that cannot be used from a media player.
-
-That is the reason an Android File Transfer app for Linux exists, as another option for mounting MTP devices on Linux. If everything works fine with the Linux defaults, you may not need to try it (unless you really enjoy trying new things).
-
-![Android File Transfer Linux App](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/android-file-transfer-for-linux-750x662.jpg)
-
-App features:
-
-*   A simple, intuitive user interface
-*   Drag-and-drop support (from Linux to the phone)
-*   Batch downloading (from the phone to Linux)
-*   Transfer progress dialogs
-*   FUSE module support
-*   No file size limits
-*   An optional command-line tool
-
-### How to install Android File Transfer on Ubuntu
-
-So much for the introduction; here is how to install it.
-
-A PPA (personal package archive) provides builds for Ubuntu 14.04 LTS, 16.04 LTS, and Ubuntu 17.10.
-
-To add the PPA to your list of software sources, run this command:
-
-```
-sudo add-apt-repository ppa:samoilov-lex/aftl-stable
-```
-
-Then, to install Android File Transfer for Linux on Ubuntu, run:
-
-```
-sudo apt-get update && sudo apt install android-file-transfer
-```
-
-That's it.
-
-You will find a launcher for the app in your application list.
-
-Before you launch it, make sure that no other app (such as Nautilus) already has your phone mounted. If another app is using the phone, you will see "Could not find MTP device". To fix this, unmount your phone from Nautilus (or whichever app is using it) and then start Android File Transfer again.
-
--------------------------------------------------------------------------------
-
-via: http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux
-
-Author: [JOEY SNEDDON][a]
-Translator: [wenwensnow](https://github.com/wenwensnow)
-Proofreader: [wxy](https://github.com/wxy)
-
-This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/)
-
-[a]:https://plus.google.com/117485690627814051450/?rel=author
-[1]:https://plus.google.com/117485690627814051450/?rel=author
-[2]:http://www.omgubuntu.co.uk/category/app
-[3]:http://www.omgubuntu.co.uk/category/download
-[4]:https://github.com/whoozle/android-file-transfer-linux
-[5]:http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux
-[6]:http://android.com/filetransfer?linkid=14270770
-[7]:https://en.wikipedia.org/wiki/Media_Transfer_Protocol
-[8]:https://en.wikipedia.org/wiki/GVfs
-[9]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg
-[10]:https://launchpad.net/~samoilov-lex/+archive/ubuntu/aftl-stable
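-If you prefer to do that last unmounting step from the terminal, the sketch below is one way to release the phone from GVfs. This is an editor's addition, not from the original article; it assumes a GLib-based desktop that ships the `gio` tool (older releases used `gvfs-mount` instead), and the MTP URI shown is a hypothetical example, so use the one that `gio mount -l` actually prints for your phone.
-
-```
-# List active GVfs mounts and look for the phone's MTP entry.
-gio mount -l | grep -i mtp
-
-# Unmount it so Android File Transfer can claim the device
-# (the usb bus/device numbers here are made up).
-gio mount -u 'mtp://[usb:001,004]/'
-```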
diff --git a/published/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md b/published/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md
deleted file mode 100644
index 9b6a4f242c..0000000000
--- a/published/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md
+++ /dev/null
@@ -1,72 +0,0 @@
-Open Source Cloud Skills and Certification Are Key for SysAdmins
-=========
-
-![os jobs](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-house-sysadmin.jpg?itok=i5FHc3lu "os jobs")
-
-> The [2017 Open Source Jobs Report][1] (referred to below as "the report") shows that system administrators with open source cloud certifications tend to command higher pay.
-
-Among the report's respondents, 53 percent said that system administrator is one of the positions employers most want to fill. Skilled sysadmins are therefore favored for well-paid roles, yet the position is not as easy to fill as you might imagine.
-
-System administrators are mainly responsible for installing, supporting, and maintaining servers and other computer operating systems, responding promptly to service outages, and preventing other problems from arising.
-
-Overall, this year's report identifies the greatest demand for open source talent in open source cloud (47 percent), application development (44 percent), big data (43 percent), and DevOps and security (42 percent).
-
-In addition, the report's survey of hiring managers shows that 58 percent expect to hire more open source talent, and 67 percent believe demand for open source talent will grow faster than in other areas of the industry. Organizations that make open source talent their top hiring priority are up 2 percentage points over last year.
-
-At the same time, 89 percent of hiring managers say it is difficult to find talented open source professionals.
-
-### Why get certified
-
-The report shows that demand for system administrators is driving hiring managers at 53 percent of organizations to offer formal training and professional certification, up from 47 percent last year.
-
-It has become standard practice for IT professionals interested in system administration to pursue Linux certification. A quick look at a few well-known job boards shows that the [CompTIA Linux+][3] certification is the top certification for entry-level Linux sysadmins, while the [Red Hat Certified Engineer (RHCE)][4] and [Red Hat Certified System Administrator (RHCSA)][5] certifications are essential for more senior sysadmin positions.
-
-Dice's [2017 Tech Salary Survey][6] put a system administrator's salary at $79,538 for 2016, down 0.8 percent from the previous year; systems architects earned $125,946, down 4.7 percent. Even so, the survey found that "highly skilled professionals remain in the greatest demand, especially those skilled in the technologies needed to support industry transformation."
-
-Among open source technologies, skills in HBase (an open source distributed database) ranked first in Dice's 2017 salary survey. In the networking and database fields, expertise in the OpenVMS operating system also commands a high salary.
-
-### Becoming a great sysadmin
-
-A great system administrator has to deal with problems the moment they appear, which means always being ready for whatever might come up. The role calls for a "blameless, lean, iterative improvement in process or technology" mindset and a self-improving character. Being a sysadmin means "you will be married to open source software such as Linux, BSD, and even open source Solaris," Paul English (see translator's note 1) wrote on [opensource.com][7].
-
-Paul English argues that today's system administrators work with software far more than their predecessors did, and must be able to write scripts to help manage systems.
-
->Translator's note 1: Paul English holds a BS in computer science, is a UNIX/Linux system administrator and CEO of PreOS Security Inc., and served from 2015 to 2017 on the board of the League of Professional System Administrators, a nonprofit that advances the practice of system administration.
-
-### Looking ahead to 2018
-
-The [Robert Half 2018 Salary Guide for Technology Professionals][8] predicts that many organizations across North America will hire system administration professionals in 2018, and that soft skills and leadership will be weighed more and more heavily when evaluating candidates.
-
-The guide notes: "Good listening and critical-thinking skills are essential for understanding and resolving users' problems and concerns, and are important skills for all IT professionals, especially those in help desk and desktop support roles."
-
-This squares with the skills [The Linux Foundation][9] (see translator's note 2) lists as essential for sysadmins at different stages, all of which emphasize strong analytical skills and the ability to resolve problems quickly.
-
->Translator's note 2: The Linux Foundation, founded in 2000, is dedicated to building sustainable ecosystems around open source projects to accelerate their development and commercial adoption; it is the world's largest open source nonprofit, distinguished in promoting, protecting, and advancing Linux, fostering collaborative development, and maintaining "the largest shared technology resource in history."
-
-If you want to climb toward the top of the sysadmin pyramid, you should also cultivate an interest in structured approaches to system configuration, along with experience resolving system security issues, experience managing user authentication, the ability to discuss non-technical matters with non-technical people, and the ability to optimize systems to meet the latest security requirements.
-
-- [Download][10] the full 2017 Open Source Jobs Report for more information.
-
------------------------
-
-via: https://www.linux.com/blog/open-source-cloud-skills-and-certification-are-key-sysadmins
-
-Author: [linux.com][a]
-Translator: [wangy325](https://github.com/wangy325)
-Proofreader: [wxy](https://github.com/wxy)
-
-This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/)
-
-[a]:https://www.linux.com/blog/open-source-cloud-skills-and-certification-are-key-sysadmins
-[1]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/
-[2]:https://www.linux.com/licenses/category/creative-commons-zero
-[3]:https://certification.comptia.org/certifications/linux?tracking=getCertified/certifications/linux.aspx
-[4]:https://www.redhat.com/en/services/certification/rhce
-[5]:https://www.redhat.com/en/services/certification/rhcsa
-[6]:http://marketing.dice.com/pdf/Dice_TechSalarySurvey_2017.pdf?aliId=105832232
-[7]:https://opensource.com/article/17/7/truth-about-sysadmins
-[8]:https://www.roberthalf.com/salary-guide/technology
-[9]:https://www.linux.com/learn/10-essential-skills-novice-junior-and-senior-sysadmins%20%20
-[10]:http://bit.ly/2017OSSjobsreport
\ No newline at end of file
diff --git a/published/201711/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md b/published/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md
similarity index 100%
rename from published/201711/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md
rename to published/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md
diff --git a/published/20171130 Search DuckDuckGo from the Command Line.md b/published/20171130 Search DuckDuckGo from the Command Line.md
deleted file mode 100644
index 48b6fdd830..0000000000
--- a/published/20171130 Search DuckDuckGo from the Command Line.md
+++ /dev/null
@@ -1,97 +0,0 @@
-Search DuckDuckGo from the Command Line
-=============
-
-![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/duckduckgo.png)
-
-When we covered [how to search Google from the command line][3] a while back, many readers said they normally use [Duck Duck Go][4], a powerful search engine with a strong focus on privacy.
-
-As it happens, a tool for searching DuckDuckGo from the command line has since appeared. It is called ddgr (which I pronounce "dodger"), and it is very handy.
-
-Like [Googler][7], ddgr is fully open source and entirely unofficial. That's right, it is not affiliated with DuckDuckGo in any way. So if its results ever look strange, take it up with the tool's developers first, not the search engine's.
-
-### A DuckDuckGo command line app
-
-![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/ddgr-gif.gif)
-
-[DuckDuckGo Bangs][8] make it easy to jump straight to the information you want on DuckDuckGo (even _this very site, omgubuntu,_ has one), and ddgr supports them faithfully.
-
-Unlike the web interface, you can change how many results are returned per page. That is more convenient than wading through thirty-odd results for every query. The default interface is carefully designed to minimize the space it uses without hurting readability.
-
-`ddgr` has a good number of features and highlights, including:
-
-*   Change the number of search results
-*   Bash autocompletion support
-*   Use DuckDuckGo Bangs
-*   Open links in a browser
-*   An "I'm feeling lucky" option (open the first result directly)
-*   Filters based on time, region, file type, etc.
-*   Minimal dependencies
-
-You can download `ddgr` builds for various systems from the project's GitHub page:
-
-- [Download "ddgr" from Github][9]
-
-On Ubuntu 16.04 LTS and newer, you can also install ddgr from a PPA. The repository is maintained by the ddgr developer, and installing from it is recommended if you want to stay on the latest release.
-
-Be aware that, at the time of writing, the ddgr in this PPA is _not_ the latest version, but a slightly older one (lacking the -num option).
-
-Add the PPA with these commands:
-
-```
-sudo add-apt-repository ppa:twodopeshaggy/jarun
-sudo apt-get update
-```
-
-### How to search DuckDuckGo from the command line with ddgr
-
-Once installed, just open your terminal emulator and run:
-
-```
-ddgr
-```
-
-followed by your query:
-
-```
-search-term
-```
-
-You can limit the number of results:
-
-```
-ddgr --num 5 search-term
-```
-
-Or open the first result directly in your browser:
-
-```
-ddgr -j search-term
-```
-
-You can use arguments and options to refine your searches. See them all with:
-
-```
-ddgr -h
-```
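-If you find yourself typing the same options every time, a tiny shell wrapper can help. This is an editor's sketch, not from the original article; it uses only the flags shown above (`--num` and `-j`), and the function names `ddg` and `ddg1` are arbitrary.
-
-```
-# Add to ~/.bashrc: five compact results by default...
-ddg()  { ddgr --num 5 "$@"; }
-
-# ...and a variant that jumps straight to the first result.
-ddg1() { ddgr -j "$@"; }
-```
-
-After reloading your shell, `ddg linux kernel` searches, and `ddg1 omgubuntu` opens the top hit in your browser.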
-
--------------------------------------------------------------------------------
-
-via: http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app
-
-Author: [JOEY SNEDDON][a]
-Translator: [yixunx](https://github.com/yixunx)
-Proofreader: [wxy](https://github.com/wxy)
-
-This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/)
-
-[a]:https://plus.google.com/117485690627814051450/?rel=author
-[1]:https://plus.google.com/117485690627814051450/?rel=author
-[2]:http://www.omgubuntu.co.uk/category/download
-[3]:http://www.omgubuntu.co.uk/2017/08/search-google-from-the-command-line
-[4]:http://duckduckgo.com/
-[5]:http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app
-[6]:https://github.com/jarun/ddgr
-[7]:https://github.com/jarun/googler
-[8]:https://duckduckgo.com/bang
-[9]:https://github.com/jarun/ddgr/releases/tag/v1.1
diff --git a/sources/tech/20090701 The One in Which I Call Out Hacker News.md b/sources/tech/20090701 The One in Which I Call Out Hacker News.md
new file mode 100644
index 0000000000..44c751dd5a
--- /dev/null
+++ b/sources/tech/20090701 The One in Which I Call Out Hacker News.md
@@ -0,0 +1,86 @@
+translating by hopefully2333
+
+# [The One in Which I Call Out Hacker News][14]
+
+
+> “Implementing caching would take thirty hours. Do you have thirty extra hours? No, you don’t. I actually have no idea how long it would take. Maybe it would take five minutes. Do you have five minutes? No. Why? Because I’m lying. It would take much longer than five minutes. That’s the eternal optimism of programmers.”
+>
+> — Professor [Owen Astrachan][1] during 23 Feb 2004 lecture for [CPS 108][2]
+
+[Accusing open-source software of being a royal pain to use][5] is not a new argument; it’s been said before, by those much more eloquent than I, and even by some who are highly sympathetic to the open-source movement. Why go over it again?
+
+On Hacker News on Monday, I was amused to read some people saying that [writing StackOverflow was hilariously easy][6]—and proceeding to back up their claim by [promising to clone it over July 4th weekend][7].
Others chimed in, pointing to [existing][8] [clones][9] as a good starting point. + +Let’s assume, for sake of argument, that you decide it’s okay to write your StackOverflow clone in ASP.NET MVC, and that I, after being hypnotized with a pocket watch and a small club to the head, have decided to hand you the StackOverflow source code, page by page, so you can retype it verbatim. We’ll also assume you type like me, at a cool 100 WPM ([a smidge over eight characters per second][10]), and unlike me,  _you_  make zero mistakes. StackOverflow’s *.cs, *.sql, *.css, *.js, and *.aspx files come to 2.3 MB. So merely typing the source code back into the computer will take you about eighty hours if you make zero mistakes. + +Except, of course, you’re not doing that; you’re going to implement StackOverflow from scratch. So even assuming that it took you a mere ten times longer to design, type out, and debug your own implementation than it would take you to copy the real one, that already has you coding for several weeks straight—and I don’t know about you, but I am okay admitting I write new code  _considerably_  less than one tenth as fast as I copy existing code. + + _Well, okay_ , I hear you relent. *So not the whole thing. But I can do **most** of it.* + +Okay, so what’s “most”? There’s simply asking and responding to questions—that part’s easy. Well, except you have to implement voting questions and answers up and down, and the questioner should be able to accept a single answer for each question. And you can’t let people upvote or accept their own answers, so you need to block that. And you need to make sure that users don’t upvote or downvote another user too many times in a certain amount of time, to prevent spambots. Probably going to have to implement a spam filter, too, come to think of it, even in the basic design, and you also need to support user icons, and you’re going to have to find a sanitizing HTML library you really trust and that interfaces well with Markdown (provided you do want to reuse [that awesome editor][11] StackOverflow has, of course). You’ll also need to purchase, design, or find widgets for all the controls, plus you need at least a basic administration interface so that moderators can moderate, and you’ll need to implement that scaling karma thing so that you give users steadily increasing power to do things as they go. + +But if you do  _all that_ , you  _will_  be done. + +Except…except, of course, for the full-text search, especially its appearance in the search-as-you-ask feature, which is kind of indispensable. And user bios, and having comments on answers, and having a main page that shows you important questions but that bubbles down steadily à la reddit. Plus you’ll totally need to implement bounties, and support multiple OpenID logins per user, and send out email notifications for pertinent events, and add a tagging system, and allow administrators to configure badges by a nice GUI. And you’ll need to show users’ karma history, upvotes, and downvotes. And the whole thing has to scale really well, since it could be slashdotted/reddited/StackOverflown at any moment. + +But  _then_ ! **Then** you’re done! + +…right after you implement upgrades, internationalization, karma caps, a CSS design that makes your site not look like ass, AJAX versions of most of the above, and G-d knows what else that’s lurking just beneath the surface that you currently take for granted, but that will come to bite you when you start to do a real clone. 
+ +Tell me: which of those features do you feel you can cut and still have a compelling offering? Which ones go under “most” of the site, and which can you punt? + +Developers think cloning a site like StackOverflow is easy for the same reason that open-source software remains such a horrible pain in the ass to use. When you put a developer in front of StackOverflow, they don’t really  _see_ StackOverflow. What they actually  _see_  is this: + +``` +create table QUESTION (ID identity primary key, + TITLE varchar(255), --- why do I know you thought 255? + BODY text, + UPVOTES integer not null default 0, + DOWNVOTES integer not null default 0, + USER integer references USER(ID)); +create table RESPONSE (ID identity primary key, + BODY text, + UPVOTES integer not null default 0, + DOWNVOTES integer not null default 0, + QUESTION integer references QUESTION(ID)) +``` + +If you then tell a developer to replicate StackOverflow, what goes into his head are the above two SQL tables and enough HTML to display them without formatting, and that really  _is_  completely doable in a weekend. The smarter ones will realize that they need to implement login and logout, and comments, and that the votes need to be tied to a user, but that’s still totally doable in a weekend; it’s just a couple more tables in a SQL back-end, and the HTML to show their contents. Use a framework like Django, and you even get basic users and comments for free. + +But that’s  _not_  what StackOverflow is about. Regardless of what your feelings may be on StackOverflow in general, most visitors seem to agree that the user experience is smooth, from start to finish. They feel that they’re interacting with a polished product. Even if I didn’t know better, I would guess that very little of what actually makes StackOverflow a continuing success has to do with the database schema—and having had a chance to read through StackOverflow’s source code, I know how little really does. There is a  _tremendous_  amount of spit and polish that goes into making a major website highly usable. A developer, asked how hard something will be to clone, simply  _does not think about the polish_ , because  _the polish is incidental to the implementation._ + +That is why an open-source clone of StackOverflow will fail. Even if someone were to manage to implement most of StackOverflow “to spec,” there are some key areas that would trip them up. Badges, for example, if you’re targeting end-users, either need a GUI to configure rules, or smart developers to determine which badges are generic enough to go on all installs. What will actually happen is that the developers will bitch and moan about how you can’t implement a really comprehensive GUI for something like badges, and then bikeshed any proposals for standard badges so far into the ground that they’ll hit escape velocity coming out the other side. They’ll ultimately come up with the same solution that bug trackers like Roundup use for their workflow: the developers implement a generic mechanism by which anyone, truly anyone at all, who feels totally comfortable working with the system API in Python or PHP or whatever, can easily add their own customizations. And when PHP and Python are so easy to learn and so much more flexible than a GUI could ever be, why bother with anything else? + +Likewise, the moderation and administration interfaces can be punted. If you’re an admin, you have access to the SQL server, so you can do anything really genuinely administrative-like that way. 
Moderators can get by with whatever django-admin and similar systems afford you, since, after all, few users are mods, and mods should understand how the sites _work_, dammit. And, certainly, none of StackOverflow's interface failings will be rectified. Even if StackOverflow's stupid requirement that you have to have and know how to use an OpenID (its worst failing) eventually gets fixed, I'm sure any open-source clones will rabidly follow it—just as GNOME and KDE for years slavishly copied off Windows, instead of trying to fix its most obvious flaws.

Developers may not care about these parts of the application, but end-users do, and they take that into consideration when trying to decide which application to use. Much as a good software company wants to minimize its support costs by ensuring that its products are top-notch before shipping, so, too, savvy consumers want to ensure products are good before they purchase them so that they won't _have_ to call support. Open-source products fail hard here. Proprietary solutions, as a rule, do better.

That's not to say that open-source doesn't have its place. This blog runs on Apache, [Django][12], [PostgreSQL][13], and Linux. But let me tell you, configuring that stack is _not_ for the faint of heart. PostgreSQL needs vacuuming configured on older versions, and, as of recent versions of Ubuntu and FreeBSD, still requires the user to set up the first database cluster. MS SQL requires neither of those things. Apache…dear heavens, don't even get me _started_ on trying to explain to a novice user how to get virtual hosting, MovableType, a couple of Django apps, and WordPress all running comfortably under a single install. Hell, just trying to explain the forking vs. threading variants of Apache to a technically astute non-developer can be a nightmare. IIS 7 and Apache with OS X Server's very much closed-source GUI manager make setting up those same stacks vastly simpler. Django's a great product, but it's nothing _but_ infrastructure—exactly the thing that I happen to think open-source _does_ do well, _precisely_ because of the motivations that drive developers to contribute.

The next time you see an application you like, think very long and hard about all the user-oriented details that went into making it a pleasure to use, before decrying how you could trivially reimplement the entire damn thing in a weekend. Nine times out of ten, when you think an application was ridiculously easy to implement, you're completely missing the user side of the story.
--------------------------------------------------------------------------------

via: https://bitquabit.com/post/one-which-i-call-out-hacker-news/

作者:[Benjamin Pollack][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://bitquabit.com/meta/about/
[1]:http://www.cs.duke.edu/~ola/
[2]:http://www.cs.duke.edu/courses/cps108/spring04/
[3]:https://bitquabit.com/categories/programming
[4]:https://bitquabit.com/categories/technology
[5]:http://blog.bitquabit.com/2009/06/30/one-which-i-say-open-source-software-sucks/
[6]:http://news.ycombinator.com/item?id=678501
[7]:http://news.ycombinator.com/item?id=678704
[8]:http://code.google.com/p/cnprog/
[9]:http://code.google.com/p/soclone/
[10]:http://en.wikipedia.org/wiki/Words_per_minute
[11]:http://github.com/derobins/wmd/tree/master
[12]:http://www.djangoproject.com/
[13]:http://www.postgresql.org/
[14]:https://bitquabit.com/post/one-which-i-call-out-hacker-news/

diff --git a/sources/tech/20130402 Dynamic linker tricks Using LD_PRELOAD to cheat inject features and investigate programs.md b/sources/tech/20130402 Dynamic linker tricks Using LD_PRELOAD to cheat inject features and investigate programs.md
deleted file mode 100644
index 2329fadd41..0000000000
--- a/sources/tech/20130402 Dynamic linker tricks Using LD_PRELOAD to cheat inject features and investigate programs.md
+++ /dev/null
@@ -1,211 +0,0 @@
# Dynamic linker tricks: Using LD_PRELOAD to cheat, inject features and investigate programs

**This post assumes some basic C skills.**

Linux puts you in full control. This is not always seen from everyone's perspective, but a power user loves to be in control. I'm going to show you a basic trick that lets you heavily influence the behavior of most applications, which is not only fun but also, at times, useful.

#### A motivational example

Let us begin with a simple example. Fun first, science later.

random_num.c:
```
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(){
    srand(time(NULL));
    int i = 10;
    while(i--) printf("%d\n", rand() % 100);
    return 0;
}
```

Simple enough, I believe. I compiled it with no special flags, just

> ```
> gcc random_num.c -o random_num
> ```

I hope the resulting output is obvious – ten randomly selected numbers 0-99, hopefully different each time you run this program.

Now let's pretend we don't really have the source of this executable. Either delete the source file, or move it somewhere – we won't need it. We will significantly modify this program's behavior, yet without touching its source code or recompiling it.

For this, let's create another simple C file:

unrandom.c:
```
int rand(){
    return 42; //the most random number in the universe
}
```

We'll compile it into a shared library.

> ```
> gcc -shared -fPIC unrandom.c -o unrandom.so
> ```

So what we have now is an application that outputs some random data, and a custom library, which implements the rand() function as a constant value of 42. Now… just run _random_num_ this way, and watch the result:

> ```
> LD_PRELOAD=$PWD/unrandom.so ./random_num
> ```

If you are lazy and did not do it yourself (and somehow failed to guess what might have happened), I'll let you know – the output consists of ten 42's.

This may be even more impressive if you first:

> ```
> export LD_PRELOAD=$PWD/unrandom.so
> ```

and then run the program normally.
An unchanged app, run in an apparently usual manner, seems to be affected by what we did in our tiny library…

###### **Wait, what? What just happened?**

Yup, you are right, our program failed to generate random numbers, because it did not use the "real" rand(), but the one we provided – which returns 42 every time.

###### **But we *told* it to use the real one. We programmed it to use the real one. Besides, at the time we created that program, the fake rand() did not even exist!**

This is not entirely true. We did not choose which rand() we want our program to use. We told it just to use rand().

When our program is started, certain libraries (that provide functionality needed by the program) are loaded. We can learn which these are using _ldd_:

> ```
> $ ldd random_num
> linux-vdso.so.1 => (0x00007fff4bdfe000)
> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f48c03ec000)
> /lib64/ld-linux-x86-64.so.2 (0x00007f48c07e3000)
> ```

What you see as the output is the list of libs that are needed by _random_num_. This list is built into the executable, and is determined at compile time. The exact output might slightly differ on your machine, but a **libc.so** must be there – this is the file which provides core C functionality. That includes the "real" rand().

We can have a peek at what functions libc provides. I used the following to get a full list:

> ```
> nm -D /lib/libc.so.6
> ```

The _nm_ command lists symbols found in a binary file. The -D flag tells it to look for dynamic symbols, which makes sense, as libc.so.6 is a dynamic library. The output is very long, but it indeed lists rand() among many other standard functions.

Now what happens when we set up the environment variable LD_PRELOAD? This variable **forces some libraries to be loaded for a program**. In our case, it loads _unrandom.so_ for _random_num_, even though the program itself does not ask for it. The following command may be interesting:

> ```
> $ LD_PRELOAD=$PWD/unrandom.so ldd random_num
> linux-vdso.so.1 => (0x00007fff369dc000)
> /some/path/to/unrandom.so (0x00007f262b439000)
> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f262b044000)
> /lib64/ld-linux-x86-64.so.2 (0x00007f262b63d000)
> ```

Note that it lists our custom library. And indeed this is the reason why its code gets executed: _random_num_ calls rand(), but if _unrandom.so_ is loaded it is our library that provides the implementation for rand(). Neat, isn't it?

#### Being transparent

This is not enough. I'd like to be able to inject some code into an application in a similar manner, but in such a way that it will be able to function normally. It's clear that if we implemented open() with a simple "_return 0;_", the application we would like to hack would malfunction. The point is to be **transparent**, and to actually call the original open:

inspect_open.c:
```
int open(const char *pathname, int flags){
    /* Some evil injected code goes here. */
    return open(pathname, flags); // Here we call the "real" open function, that is provided to us by libc.so
}
```

Hm. Not really. This won't call the "original" open(…). Obviously, this is an endless recursive call.

How do we access the "real" open function? We need to use the programming interface to the dynamic linker. It's simpler than it sounds.
Have a look at this complete example, and then I'll explain what happens there:

inspect_open.c:

```
#define _GNU_SOURCE
#include <dlfcn.h>

typedef int (*orig_open_f_type)(const char *pathname, int flags);

int open(const char *pathname, int flags, ...)
{
    /* Some evil injected code goes here. */

    orig_open_f_type orig_open;
    orig_open = (orig_open_f_type)dlsym(RTLD_NEXT, "open");
    return orig_open(pathname, flags);
}
```

The _dlfcn.h_ is needed for the _dlsym_ function we use later. That strange _#define_ directive instructs the compiler to enable some non-standard stuff; we need it to enable _RTLD_NEXT_ in _dlfcn.h_. That typedef just creates an alias to a complicated pointer-to-function type, with arguments just as the original open – the alias name is _orig_open_f_type_, which we'll use later.

The body of our custom open(…) consists of some custom code. The last part of it creates a new function pointer _orig_open_ which will point to the original open(…) function. In order to get the address of that function, we ask _dlsym_ to find the next "open" function on the dynamic libraries stack. Finally, we call that function (passing the same arguments as were passed to our fake "open"), and return its return value as ours.

As the "evil injected code" I simply used:

inspect_open.c (fragment):

```
printf("The victim used open(...) to access '%s'!!!\n", pathname); //remember to include stdio.h!
```

To compile it, I needed to slightly adjust the compiler flags:

> ```
> gcc -shared -fPIC inspect_open.c -o inspect_open.so -ldl
> ```

I had to append _-ldl_, so that this shared library is linked to _libdl_, which provides the _dlsym_ function. (Nah, I am not going to create a fake version of _dlsym_, though this might be fun.)

So what do I have as a result? A shared library, which implements the open(…) function so that it behaves **exactly** as the real open(…)… except it has a side effect of _printf_-ing the file path :-)

If you are not convinced this is a powerful trick, it's time you tried the following:

> ```
> LD_PRELOAD=$PWD/inspect_open.so gnome-calculator
> ```

I encourage you to see the result yourself, but basically it lists every file this application accesses. In real time.

I believe it's not that hard to imagine why this might be useful for debugging or investigating unknown applications. Please note, however, that this particular trick is not quite complete, because _open()_ is not the only function that opens files… For example, there is also _open64()_ in the standard library, and for a full investigation you would need to create a fake one too.

#### **Possible uses**

If you are still with me and enjoyed the above, let me suggest a bunch of ideas of what can be achieved using this trick. Keep in mind that you can do all of the above without the source of the affected app!

1. ~~Gain root privileges.~~ Not really, don't even bother, you won't bypass any security this way. (A quick explanation for pros: no libraries will be preloaded this way if ruid != euid.)

2. Cheat games: **Unrandomize.** This is what I did in the first example. For a fully working case you would also need to implement a custom _random()_, _rand_r()_, _random_r()_. Also some apps may be reading from _/dev/urandom_ or so; you might redirect them to _/dev/null_ by running the original _open()_ with a modified file path.
Furthermore, some apps may have their own random number generation algorithm; there is little you can do about that (unless: point 10 below). But this looks like an easy exercise for beginners.

3. Cheat games: **Bullet time.** Implement all the standard time-related functions so that they pretend the time flows two times slower. Or ten times slower. If you correctly calculate new values for time measurement, timed _sleep_ functions, and others, the affected application will believe the time runs slower (or faster, if you wish), and you can experience awesome bullet-time action.
   Or go **even one step further** and let your shared library also be a DBus client, so that you can communicate with it in real time. Bind some shortcuts to custom commands, and with some additional calculations in your fake timing functions you will be able to enable & disable the slow-mo or fast-forward anytime you wish.

4. Investigate apps: **List accessed files.** That's what my second example does, but this could also be pushed further, by recording and monitoring all of an app's file I/O.

5. Investigate apps: **Monitor internet access.** You might do this with Wireshark or similar software, but with this trick you could actually gain control of what an app sends over the web, and not just look, but also affect the exchanged data. Lots of possibilities here, from detecting spyware, to cheating in multiplayer games, or analyzing & reverse-engineering protocols of closed-source applications.

6. Investigate apps: **Inspect GTK structures.** Why limit ourselves to the standard library? Let's inject code into all GTK calls, so that we can learn what widgets an app uses, and how they are structured. This might then be rendered either to an image or even to a gtkbuilder file! Super useful if you want to learn how some app manages its interface!

7. **Sandbox unsafe applications.** If you don't trust some app and are afraid that it may wish to _rm -rf /_ or do some other unwanted file activities, you might potentially redirect all its file I/O to e.g. /tmp by appropriately modifying the arguments it passes to all file-related functions (not just _open_, but also e.g. removing directories etc.). It's a more difficult trick than a chroot, but it gives you more control. It is only as safe as your "wrapper" is complete, and unless you really know what you're doing, don't actually run any malicious software this way.

8. **Implement features.** [zlibc][1] is an actual library which works in this precise way; it uncompresses files on the go as they are accessed, so that any application can work on compressed data without even realizing it.

9. **Fix bugs.** Another real-life example: some time ago (I am not sure this is still the case) Skype – which is closed-source – had problems capturing video from certain webcams. Because the source could not be modified, as Skype is not free software, this was fixed by preloading a library that would correct these problems with video.

10. Manually **access an application's own memory**. Do note that you can access all app data this way. This may not be impressive if you are familiar with software like CheatEngine/scanmem/GameConqueror, but they all require root privileges to work. LD_PRELOAD does not. In fact, with a number of clever tricks your injected code might access all app memory, because, in fact, it gets executed by that application itself. You might modify everything this application can.
You can probably imagine this allows a lot of low-level hacks… but I'll post an article about it another time.

These are only the ideas I came up with. I bet you can find some too; if you do – share them by commenting!

--------------------------------------------------------------------------------

via: https://rafalcieslak.wordpress.com/2013/04/02/dynamic-linker-tricks-using-ld_preload-to-cheat-inject-features-and-investigate-programs/

作者:[Rafał Cieślak][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://rafalcieslak.wordpress.com/
[1]:http://www.zlibc.linux.lu/index.html

diff --git a/sources/tech/20160330 How to turn any syscall into an event Introducing eBPF Kernel probes.md b/sources/tech/20160330 How to turn any syscall into an event Introducing eBPF Kernel probes.md
deleted file mode 100644
index a53270f2d7..0000000000
--- a/sources/tech/20160330 How to turn any syscall into an event Introducing eBPF Kernel probes.md
+++ /dev/null
@@ -1,361 +0,0 @@
How to turn any syscall into an event: Introducing eBPF Kernel probes
============================================================

TL;DR: Using eBPF in a recent (>= 4.4) Linux kernel, you can turn any kernel function call into a user-land event with arbitrary data. This is made easy by bcc. The probe is written in C while the data is handled by Python.

If you are not familiar with eBPF or Linux tracing, you really should read the full post. It tries to progressively go through the pitfalls I stumbled upon while playing around with bcc / eBPF, while saving you a lot of the time I spent searching and digging.

### A note on push vs pull in a Linux world

When I started to work on containers, I was wondering how we could update a load balancer configuration dynamically based on actual system state. A common strategy, which works, is to let the container orchestrator trigger a load balancer configuration update whenever it starts a container and then let the load balancer poll the container until some health check passes. It may be a simple "SYN" test.

While this configuration works, it has the downside of making your load balancer wait for some system to be available while it should be… load balancing.

Can we do better?

When you want a program to react to some change in a system there are 2 possible strategies. The program may _poll_ the system to detect changes or, if the system supports it, the system may _push_ events and let the program react to them. Whether you want to use push or poll depends on the context. A good rule of thumb is to use push events when the event rate is low with respect to the processing time and switch to polling when the events are coming fast or the system may become unusable. For example, a typical network driver will wait for events from the network card while frameworks like dpdk will actively poll the card for events to achieve the highest throughput and lowest latency.

In an ideal world, we'd have some kernel interface telling us:

> * "Hey Mr. ContainerManager, I've just created a socket for the Nginx-ware of container _servestaticfiles_, maybe you want to update your state?"
>
> * "Sure Mr. OS, thanks for letting me know"

While Linux has a wide range of interfaces to deal with events, up to 3 for file events, there is no dedicated interface to get socket event notifications.
You can get routing table events, neighbor table events, conntrack events, interface change events. Just not socket events. Or maybe there is a way, deep hidden in a Netlink interface.

Ideally, we'd need a generic way to do it. How?

### Kernel tracing and eBPF, a bit of history

Until recently the only way was to patch the kernel or resort to SystemTap. [SystemTap][5] is a tracing Linux system. In a nutshell, it provides a DSL which is then compiled into a kernel module which is then live-loaded into the running kernel. Except that some production systems disable dynamic module loading for security reasons. Including the one I was working on at that time. The other way would be to patch the kernel to trigger some events, probably based on netlink. This is not really convenient. Kernel hacking comes with downsides including "interesting" new "features" and an increased maintenance burden.

Fortunately, starting with Linux 3.15 the ground was laid to safely transform any traceable kernel function into a userland event. "Safely" is a common computer science expression referring to "some virtual machine". This case is no exception. Linux has had one for years. Since Linux 2.1.75, released in 1997, actually. It's called Berkeley Packet Filter, or BPF for short. As its name suggests, it was originally developed for the BSD firewalls. It had only 2 registers and only allowed forward jumps, meaning that you could not write loops with it (well, you can, if you know the maximum iterations and you manually unroll them). The point was to guarantee the program would always terminate and hence never hang the system. Still not sure if it has any use when you have iptables? It serves as the [foundation of CloudFlare's AntiDDos protection][6].

OK, so, with Linux 3.15, [BPF was extended][7], turning it into eBPF, for "extended" BPF. It upgrades from two 32-bit registers to ten 64-bit registers and adds backward jumping among other things. It has then been [further extended in Linux 3.18][8], moving it out of the networking subsystem and adding tools like maps. To preserve the safety guarantees, it [introduces a checker][9] which validates all memory accesses and possible code paths. If the checker can't guarantee the code will terminate within fixed boundaries, it will deny the initial insertion of the program.

For more history, there is [an excellent Oracle presentation on eBPF][10].

Let's get started.

### Hello from `inet_listen`

As writing assembly is not the most convenient task, even for the best of us, we'll use [bcc][11]. bcc is a collection of tools based on LLVM and Python abstracting the underlying machinery. Probes are written in C and the results can be exploited from Python, allowing you to easily write non-trivial applications.

Start by installing bcc. For some of these examples, you may require a recent (read >= 4.4) version of the kernel. If you are willing to actually try these examples, I highly recommend that you set up a VM. _NOT_ a docker container. You can't change the kernel in a container. As this is a young and dynamic project, install instructions are highly platform/version dependent. You can find up to date instructions on [https://github.com/iovisor/bcc/blob/master/INSTALL.md][12]

So, we want to get an event whenever a program starts to listen on a TCP socket. When calling the `listen()` syscall on an `AF_INET` + `SOCK_STREAM` socket, the underlying kernel function is [`inet_listen`][13]. We'll start by hooking a "Hello World" `kprobe` on its entry point.

```
from bcc import BPF

# Hello BPF Program
bpf_text = """
#include <net/sock.h>
#include <bcc/proto.h>

// 1. Attach kprobe to "inet_listen"
int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
{
    bpf_trace_printk("Hello World!\\n");
    return 0;
};
"""

# 2. Build and Inject program
b = BPF(text=bpf_text)

# 3. Print debug output
while True:
    print b.trace_readline()
```

This program does 3 things:

1. It attaches a kernel probe to "inet_listen" using a naming convention. If the function was called, say, "my_probe", it could be explicitly attached with `b.attach_kprobe("inet_listen", "my_probe")`.

2. It builds the program using LLVM's new BPF backend, injects the resulting bytecode using the (new) `bpf()` syscall and automatically attaches the probes matching the naming convention.

3. It reads the raw output from the kernel pipe.

Note: the eBPF backend of LLVM is still young. If you think you've hit a bug, you may want to upgrade.

Noticed the `bpf_trace_printk` call? This is a stripped down version of the kernel's `printk()` debug function. When used, it produces tracing information to a special kernel pipe in `/sys/kernel/debug/tracing/trace_pipe`. As the name implies, this is a pipe. If multiple readers are consuming it, only 1 will get a given line. This makes it unsuitable for production.

Fortunately, Linux 3.19 introduced maps for message passing and Linux 4.4 brings arbitrary perf events support. I'll demo the perf event based approach later in this post.

```
# From a first console
ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
      nc-4940  [000] d... 22666.991714: : Hello World!

# From a second console
ubuntu@bcc:~$ nc -l 0 4242
^C
```

Yay!

### Grab the backlog

Now, let's print some easily accessible data. Say the "backlog". The backlog is the number of pending established TCP connections, pending to be `accept()`ed.

Just tweak the `bpf_trace_printk` a bit:

```
bpf_trace_printk("Listening with up to %d pending connections!\\n", backlog);
```

If you re-run the example with this world-changing improvement, you should see something like:

```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
      nc-5020  [000] d... 25497.154070: : Listening with up to 1 pending connections!
```

`nc` is a single connection program, hence the backlog of 1. Nginx or Redis would output 128 here. But that's another story.

Easy, huh? Now let's get the port.

### Grab the port and IP

Studying the `inet_listen` source from the kernel, we know that we need to get the `inet_sock` from the `socket` object. Just copy from the sources, and insert at the beginning of the tracer:

```
// cast types. Intermediate cast not needed, kept for readability
struct sock *sk = sock->sk;
struct inet_sock *inet = inet_sk(sk);
```

The port can now be accessed from `inet->inet_sport` in network byte order (aka: big endian). Easy! So, we could just replace the `bpf_trace_printk` with:

```
bpf_trace_printk("Listening on port %d!\\n", inet->inet_sport);
```

Then run:

```
ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
...
R1 invalid mem access 'inv'
...
Exception: Failed to load BPF program kprobe__inet_listen
```

Except that it's not (yet) so simple. bcc is improving a _lot_ currently. While writing this post, a couple of pitfalls had already been addressed. But not all of them yet. This error means the in-kernel checker could not prove that the memory accesses in the program are correct.
See the explicit cast. We need to help it a little by making the accesses more explicit. We'll use the `bpf_probe_read` trusted function to read an arbitrary memory location while guaranteeing all necessary checks are done, with something like:

```
// Explicit initialization. The "=0" part is needed to "give life" to the variable on the stack
u16 lport = 0;

// Explicit arbitrary memory access. Read it:
// Read into 'lport', 'sizeof(lport)' bytes from 'inet->inet_sport' memory location
bpf_probe_read(&lport, sizeof(lport), &(inet->inet_sport));
```

Reading the bound address for IPv4 is basically the same, using `inet->inet_rcv_saddr`. If we put it all together, we should get the backlog, the port and the bound IP:

```
from bcc import BPF

# BPF Program
bpf_text = """
#include <net/sock.h>
#include <net/inet_sock.h>
#include <bcc/proto.h>

// Send an event for each IPv4 listen with PID, bound address and port
int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
{
    // Cast types. Intermediate cast not needed, kept for readability
    struct sock *sk = sock->sk;
    struct inet_sock *inet = inet_sk(sk);

    // Working values. You *need* to initialize them to give them "life" on the stack and use them afterward
    u32 laddr = 0;
    u16 lport = 0;

    // Pull in details. As 'inet_sk' is internally a type cast, we need to use 'bpf_probe_read'
    // read: load into 'laddr' 'sizeof(laddr)' bytes from address 'inet->inet_rcv_saddr'
    bpf_probe_read(&laddr, sizeof(laddr), &(inet->inet_rcv_saddr));
    bpf_probe_read(&lport, sizeof(lport), &(inet->inet_sport));

    // Push event
    bpf_trace_printk("Listening on %x %d with %d pending connections\\n", ntohl(laddr), ntohs(lport), backlog);
    return 0;
};
"""

# Build and Inject BPF
b = BPF(text=bpf_text)

# Print debug output
while True:
    print b.trace_readline()
```

A test run should output something like:

```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
      nc-5024  [000] d... 25821.166286: : Listening on 7f000001 4242 with 1 pending connections
```

Provided that you listen on localhost. The address is displayed as hex here to avoid dealing with the IP pretty printing, but that's all wired. And that's cool.

Note: you may wonder why `ntohs` and `ntohl` can be called from BPF while they are not trusted. This is because they are macros and inline functions from ".h" files and a small bug was [fixed][14] while writing this post.

All done, one more piece: we want to get the related container. In the context of networking, that means we want the network namespace. The network namespace is the building block of containers, allowing them to have isolated networks.

### Grab the network namespace: a forced introduction to perf events

On the userland side, the network namespace can be determined by checking the target of `/proc/PID/ns/net`. It should look like `net:[4026531957]`. The number between brackets is the inode number of the network namespace. This said, we could grab it by scraping '/proc', but this is racy; we may be dealing with short-lived processes. And races are never good. We'll grab the inode number directly from the kernel. Fortunately, that's an easy one:

```
// Create and populate the variable
u32 netns = 0;

// Read the netns inode number, like /proc does
netns = sk->__sk_common.skc_net.net->ns.inum;
```

Easy. And it works.

But if you've read so far, you may guess there is something wrong somewhere.
And there is:

```
bpf_trace_printk("Listening on %x %d with %d pending connections in container %d\\n", ntohl(laddr), ntohs(lport), backlog, netns);
```

If you try to run it, you'll get some cryptic error message:

```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
error: in function kprobe__inet_listen i32 (%struct.pt_regs*, %struct.socket*, i32)
too many args to 0x1ba9108: i64 = Constant<6>
```

What clang is trying to tell you is "Hey pal, `bpf_trace_printk` can only take 4 arguments, you've just used 5." I won't dive into the details here, but that's a BPF limitation. If you want to dig into it, [here is a good starting point][15].

The only way to fix it is to… stop debugging and make it production ready. So let's get started (and make sure you run at least Linux 4.4). We'll use perf events, which support passing arbitrary-sized structures to userland. Additionally, only our reader will get it, so that multiple unrelated eBPF programs can produce data concurrently without issues.

To use it, we need to:

1. define a structure

2. declare the event

3. push the event

4. re-declare the event on Python's side (this step should go away in the future)

5. consume and format the event

This may seem like a lot, but it ain't. See:

```
// At the beginning of the C program, declare our event
struct listen_evt_t {
    u64 laddr;
    u64 lport;
    u64 netns;
    u64 backlog;
};
BPF_PERF_OUTPUT(listen_evt);

// In kprobe__inet_listen, replace the printk with
struct listen_evt_t evt = {
    .laddr = ntohl(laddr),
    .lport = ntohs(lport),
    .netns = netns,
    .backlog = backlog,
};
listen_evt.perf_submit(ctx, &evt, sizeof(evt));
```

The Python side will require a little more work, though:

```
# We need ctypes to parse the event structure
import ctypes

# Declare data format
class ListenEvt(ctypes.Structure):
    _fields_ = [
        ("laddr", ctypes.c_ulonglong),
        ("lport", ctypes.c_ulonglong),
        ("netns", ctypes.c_ulonglong),
        ("backlog", ctypes.c_ulonglong),
    ]

# Declare event printer
def print_event(cpu, data, size):
    event = ctypes.cast(data, ctypes.POINTER(ListenEvt)).contents
    print("Listening on %x %d with %d pending connections in container %d" % (
        event.laddr,
        event.lport,
        event.backlog,
        event.netns,
    ))

# Replace the event loop
b["listen_evt"].open_perf_buffer(print_event)
while True:
    b.kprobe_poll()
```

Give it a try. In this example, I have a redis running in a docker container and nc on the host:

```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
Listening on 0 6379 with 128 pending connections in container 4026532165
Listening on 0 6379 with 128 pending connections in container 4026532165
Listening on 7f000001 6588 with 1 pending connections in container 4026531957
```

### Last word

Absolutely everything is now set up to trigger events from arbitrary function calls in the kernel using eBPF, and you should have seen most of the common pitfalls I hit while learning eBPF. If you want to see the full version of this tool, along with some more tricks like IPv6 support, have a look at [https://github.com/iovisor/bcc/blob/master/tools/solisten.py][16]. It's now an official tool, thanks to the support of the bcc team.

To go further, you may want to check out Brendan Gregg's blog, in particular [the post about eBPF maps and statistics][17]. He is one of the project's main contributors.
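By the way, if you would rather run the finished tool than retype the listings above, it ships with bcc itself. A minimal invocation sketch, assuming a packaged install that puts the tools under /usr/share/bcc/tools (the usual location on Ubuntu; the exact path varies by distribution):

```
# Path assumed from common bcc packaging; adjust for your distribution
sudo /usr/share/bcc/tools/solisten
```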
- - --------------------------------------------------------------------------------- - -via: https://blog.yadutaf.fr/2016/03/30/turn-any-syscall-into-event-introducing-ebpf-kernel-probes/ - -作者:[Jean-Tiare Le Bigot ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://blog.yadutaf.fr/about -[1]:https://blog.yadutaf.fr/tags/linux -[2]:https://blog.yadutaf.fr/tags/tracing -[3]:https://blog.yadutaf.fr/tags/ebpf -[4]:https://blog.yadutaf.fr/tags/bcc -[5]:https://en.wikipedia.org/wiki/SystemTap -[6]:https://blog.cloudflare.com/bpf-the-forgotten-bytecode/ -[7]:https://blog.yadutaf.fr/2016/03/30/turn-any-syscall-into-event-introducing-ebpf-kernel-probes/TODO -[8]:https://lwn.net/Articles/604043/ -[9]:http://lxr.free-electrons.com/source/kernel/bpf/verifier.c#L21 -[10]:http://events.linuxfoundation.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf -[11]:https://github.com/iovisor/bcc -[12]:https://github.com/iovisor/bcc/blob/master/INSTALL.md -[13]:http://lxr.free-electrons.com/source/net/ipv4/af_inet.c#L194 -[14]:https://github.com/iovisor/bcc/pull/453 -[15]:http://lxr.free-electrons.com/source/kernel/trace/bpf_trace.c#L86 -[16]:https://github.com/iovisor/bcc/blob/master/tools/solisten.py -[17]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html diff --git a/sources/tech/20160922 A Linux users guide to Logical Volume Management.md b/sources/tech/20160922 A Linux users guide to Logical Volume Management.md deleted file mode 100644 index ff0e390f38..0000000000 --- a/sources/tech/20160922 A Linux users guide to Logical Volume Management.md +++ /dev/null @@ -1,233 +0,0 @@ -A Linux user's guide to Logical Volume Management -============================================================ - -![Logical Volume Management (LVM)](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other11x_cc.png?itok=I_kCDYj0 "Logical Volume Management (LVM)") -Image by : opensource.com - -Managing disk space has always been a significant task for sysadmins. Running out of disk space used to be the start of a long and complex series of tasks to increase the space available to a disk partition. It also required taking the system off-line. This usually involved installing a new hard drive, booting to recovery or single-user mode, creating a partition and a filesystem on the new hard drive, using temporary mount points to move the data from the too-small filesystem to the new, larger one, changing the content of the /etc/fstab file to reflect the correct device name for the new partition, and rebooting to remount the new filesystem on the correct mount point. - -I have to tell you that, when LVM (Logical Volume Manager) first made its appearance in Fedora Linux, I resisted it rather strongly. My initial reaction was that I did not need this additional layer of abstraction between me and the hard drives. It turns out that I was wrong, and that logical volume management is very useful. - -LVM allows for very flexible disk space management. It provides features like the ability to add disk space to a logical volume and its filesystem while that filesystem is mounted and active and it allows for the collection of multiple physical hard drives and partitions into a single volume group which can then be divided into logical volumes. 
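For example, growing a mounted EXT4 filesystem takes just two commands once the volume group has free space. This is a minimal sketch; the volume group name vg01 and the logical volume name home are placeholders for whatever your system actually uses:

```
# Grow the logical volume by 10GB, then grow the filesystem into it.
# Both steps are safe to perform while the EXT4 filesystem is mounted.
lvextend -L +10G /dev/vg01/home
resize2fs /dev/vg01/home
```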
The volume manager also allows reducing the amount of disk space allocated to a logical volume, but there are a couple of requirements. First, the volume must be unmounted. Second, the filesystem itself must be reduced in size before the volume on which it resides can be reduced.

It is important to note that the filesystem itself must allow resizing for this feature to work. The EXT2, 3, and 4 filesystems all allow both offline (unmounted) and online (mounted) resizing when increasing the size of a filesystem, and offline resizing when reducing the size. You should check the details of the filesystems you intend to use in order to verify whether they can be resized at all and especially whether they can be resized while online.

### Expanding a filesystem on the fly

I always like to run new distributions in a VirtualBox virtual machine for a few days or weeks to ensure that I will not run into any devastating problems when I start installing it on my production machines. One morning a couple of years ago I started installing a newly released version of Fedora in a virtual machine on my primary workstation. I thought that I had enough disk space allocated to the host filesystem in which the VM was being installed. I did not. About a third of the way through the installation I ran out of space on that filesystem. Fortunately, VirtualBox detected the out-of-space condition and paused the virtual machine, and even displayed an error message indicating the exact cause of the problem.

Note that this problem was not due to the fact that the virtual disk was too small; it was rather the logical volume on the host computer that was running out of space, so that the virtual disk belonging to the virtual machine did not have enough space to expand on the host's logical volume.

Since most modern distributions use Logical Volume Management by default, and I had some free space available on the volume group, I was able to assign additional disk space to the appropriate logical volume and then expand the filesystem of the host on the fly. This means that I did not have to reformat the entire hard drive and reinstall the operating system or even reboot. I simply assigned some of the available space to the appropriate logical volume and resized the filesystem—all while the filesystem was online and the running program, the virtual machine, was still using the host filesystem. After resizing the logical volume and the filesystem I resumed running the virtual machine and the installation continued as if no problems had occurred.

Although this type of problem may never have happened to you, running out of disk space while a critical program is running has happened to many people. And while many programs, especially Windows programs, are not as well written and resilient as VirtualBox, Linux Logical Volume Management made it possible to recover without losing any data and without having to restart the time-consuming installation.

### LVM Structure

The structure of a Logical Volume Manager disk environment is illustrated by Figure 1, below. Logical Volume Management enables the combining of multiple individual hard drives and/or disk partitions into a single volume group (VG). That volume group can then be subdivided into logical volumes (LV) or used as a single large volume. Regular file systems, such as EXT3 or EXT4, can then be created on a logical volume.

In Figure 1, two complete physical hard drives and one partition from a third hard drive have been combined into a single volume group.
Two logical volumes have been created from the space in the volume group, and a filesystem, such as an EXT3 or EXT4 filesystem, has been created on each of the two logical volumes.

![lvm.png](https://opensource.com/sites/default/files/resize/images/life-uploads/lvm-520x222.png)

_Figure 1: LVM allows combining partitions and entire hard drives into Volume Groups._

Adding disk space to a host is fairly straightforward but, in my experience, is done relatively infrequently. The basic steps needed are listed below. You can either create an entirely new volume group or you can add the new space to an existing volume group and either expand an existing logical volume or create a new one.

### Adding a new logical volume

There are times when it is necessary to add a new logical volume to a host. For example, after noticing that the directory containing virtual disks for my VirtualBox virtual machines was filling up the /home filesystem, I decided to create a new logical volume in which to store the virtual machine data, including the virtual disks. This would free up a great deal of space in my /home filesystem and also allow me to manage the disk space for the VMs independently.

The basic steps for adding a new logical volume are as follows.

1. If necessary, install a new hard drive.

2. Optional: Create a partition on the hard drive.

3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.

4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.

5. Create a new logical volume (LV) from the space in the volume group.

6. Create a filesystem on the new logical volume.

7. Add appropriate entries to /etc/fstab for mounting the filesystem.

8. Mount the filesystem.

Now for the details. The following sequence is taken from an example I used as a lab project when teaching about Linux filesystems.

### Example

This example shows how to use the CLI to extend an existing volume group to add more space to it, create a new logical volume in that space, and create a filesystem on the logical volume. This procedure can be performed on a running, mounted filesystem.

WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems including BTRFS and ZFS cannot be resized.

### Install hard drive

If there is not enough space in the volume group on the existing hard drive(s) in the system to add the desired amount of space, it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive, and then perform the following steps.

### Create Physical Volume from hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.

```
pvcreate /dev/hdd
```

It is not necessary to create a partition of any kind on the new hard drive. This creation of the Physical Volume, which will be recognized by the Logical Volume Manager, can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

### Extend the existing Volume Group

In this example we will extend an existing volume group rather than creating a new one; you can choose to do it either way.
After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example the existing Volume Group is named MyVG01. - -``` -vgextend /dev/MyVG01 /dev/hdd -``` - -### Create the Logical Volume - -First create the Logical Volume (LV) from existing free space within the Volume Group. The command below creates a LV with a size of 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff. - -``` -lvcreate -L +50G --name Stuff MyVG01 -``` - -### Create the filesystem - -Creating the Logical Volume does not create the filesystem. That task must be performed separately. The command below creates an EXT4 filesystem that fits the newly created Logical Volume. - -``` -mkfs -t ext4 /dev/MyVG01/Stuff -``` - -### Add a filesystem label - -Adding a filesystem label makes it easy to identify the filesystem later in case of a crash or other disk related problems. - -``` -e2label /dev/MyVG01/Stuff Stuff -``` - -### Mount the filesystem - -At this point you can create a mount point, add an appropriate entry to the /etc/fstab file, and mount the filesystem. - -You should also check to verify the volume has been created correctly. You can use the **df**, **lvs,** and **vgs** commands to do this. - -### Resizing a logical volume in an LVM filesystem - -The need to resize a filesystem has been around since the beginning of the first versions of Unix and has not gone away with Linux. It has gotten easier, however, with Logical Volume Management. - -1. If necessary, install a new hard drive. - -2. Optional: Create a partition on the hard drive. - -3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive. - -4. Assign the new physical volume to an existing volume group (VG) or create a new volume group. - -5. Create one or more logical volumes (LV) from the space in the volume group, or expand an existing logical volume with some or all of the new space in the volume group. - -6. If you created a new logical volume, create a filesystem on it. If adding space to an existing logical volume, use the resize2fs command to enlarge the filesystem to fill the space in the logical volume. - -7. Add appropriate entries to /etc/fstab for mounting the filesystem. - -8. Mount the filesystem. - -### Example - -This example describes how to resize an existing Logical Volume in an LVM environment using the CLI. It adds about 50GB of space to the /Stuff filesystem. This procedure can be used on a mounted, live filesystem only with the Linux 2.6 Kernel (and higher) and EXT3 and EXT4 filesystems. I do not recommend that you do so on any critical system, but it can be done and I have done so many times; even on the root (/) filesystem. Use your judgment. - -WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems including BTRFS and ZFS cannot be resized. - -### Install the hard drive - -If there is not enough space on the existing hard drive(s) in the system to add the desired amount of space it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive and then perform the following steps. - -### Create a Physical Volume from the hard drive - -It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd. 
```
pvcreate /dev/hdd
```

It is not necessary to create a partition of any kind on the new hard drive. This creation of the Physical Volume, which will be recognized by the Logical Volume Manager, can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

### Add PV to existing Volume Group

For this example, we will use the new PV to extend an existing Volume Group. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example, the existing Volume Group is named MyVG01.

```
vgextend /dev/MyVG01 /dev/hdd
```

### Extend the Logical Volume

Extend the Logical Volume (LV) from existing free space within the Volume Group. The command below expands the LV by 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.

```
lvextend -L +50G /dev/MyVG01/Stuff
```

### Expand the filesystem

Extending the Logical Volume will also expand the filesystem if you use the -r option. If you do not use the -r option, that task must be performed separately. The command below resizes the filesystem to fit the newly resized Logical Volume.

```
resize2fs /dev/MyVG01/Stuff
```

You should check to verify the resizing has been performed correctly. You can use the **df**, **lvs,** and **vgs** commands to do this.

### Tips

Over the years I have learned a few things that can make logical volume management even easier than it already is. Hopefully these tips can prove of some value to you.

* Use the Extended file systems unless you have a clear reason to use another filesystem. Not all filesystems support resizing, but EXT2, 3, and 4 do. The EXT filesystems are also very fast and efficient. In any event, they can be tuned by a knowledgeable sysadmin to meet the needs of most environments if the default tuning parameters do not.

* Use meaningful volume and volume group names.

* Use EXT filesystem labels.

I know that, like me, many sysadmins have resisted the change to Logical Volume Management. I hope that this article will encourage you to at least try LVM. I am really glad that I did; my disk management tasks are much easier since I made the switch.

### About the author

[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/david-crop.jpg?itok=oePpOpyV)][10]

David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for...
[more about David Both][7][More about me][8] - --------------------------------------------------------------------------------- - -via: https://opensource.com/business/16/9/linux-users-guide-lvm - -作者:[ David Both][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/dboth -[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[6]:https://opensource.com/business/16/9/linux-users-guide-lvm?rate=79vf1js7A7rlp-I96YFneopUQqsa2SuB-g-og7eiF1U -[7]:https://opensource.com/users/dboth -[8]:https://opensource.com/users/dboth -[9]:https://opensource.com/user/14106/feed -[10]:https://opensource.com/users/dboth -[11]:https://opensource.com/users/dboth -[12]:https://opensource.com/users/dboth -[13]:https://opensource.com/business/16/9/linux-users-guide-lvm#comments -[14]:https://opensource.com/tags/business -[15]:https://opensource.com/tags/linux -[16]:https://opensource.com/tags/how-tos-and-tutorials -[17]:https://opensource.com/tags/sysadmin diff --git a/sources/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md b/sources/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md deleted file mode 100644 index a3fc2c886e..0000000000 --- a/sources/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md +++ /dev/null @@ -1,110 +0,0 @@ -INTRODUCING DOCKER SECRETS MANAGEMENT -============================================================ - -Containers are changing how we view apps and infrastructure. Whether the code inside containers is big or small, container architecture introduces a change to how that code behaves with hardware – it fundamentally abstracts it from the infrastructure. Docker believes that there are three key components to container security and together they result in inherently safer apps. - - ![Docker Security](https://i2.wp.com/blog.docker.com/wp-content/uploads/e12387a1-ab21-4942-8760-5b1677bc656d-1.jpg?w=1140&ssl=1) - -A critical element of building safer apps is having a secure way of communicating with other apps and systems, something that often requires credentials, tokens, passwords and other types of confidential information—usually referred to as application secrets. We are excited to introduce Docker Secrets, a container native solution that strengthens the Trusted Delivery component of container security by integrating secret distribution directly into the container platform. - -With containers, applications are now dynamic and portable across multiple environments. This  made existing secrets distribution solutions inadequate because they were largely designed for static environments. 
Unfortunately, this led to an increase in mismanagement of application secrets, making it common to find insecure, home-grown solutions, such as embedding secrets into version control systems like GitHub, or other equally bad, bolted-on point solutions added as an afterthought.

### Introducing Docker Secrets Management

We fundamentally believe that apps are safer if there is a standardized interface for accessing secrets. Any good solution will also have to follow security best practices, such as encrypting secrets while in transit; encrypting secrets at rest; preventing secrets from unintentionally leaking when consumed by the final application; and strictly adhering to the principle of least privilege, where an application only has access to the secrets that it needs—no more, no less.

By integrating secrets into Docker orchestration, we are able to deliver a solution for the secrets management problem that follows these exact principles.

The following diagram provides a high-level view of how the Docker swarm mode architecture is applied to securely deliver a new type of object to our containers: a secret object.

![Docker Secrets Management](https://i0.wp.com/blog.docker.com/wp-content/uploads/b69d2410-9e25-44d8-aa2d-f67b795ff5e3.jpg?w=1140&ssl=1)

In Docker, a secret is any blob of data, such as a password, SSH private key, TLS certificate, or any other piece of data that is sensitive in nature. When you add a secret to the swarm (by running `docker secret create`), Docker sends the secret over to the swarm manager over a mutually authenticated TLS connection, making use of the [built-in Certificate Authority][17] that gets automatically created when bootstrapping a new swarm.

```
$ echo "This is a secret" | docker secret create my_secret_data -
```

Once the secret reaches a manager node, it gets saved to the internal Raft store, which uses NACL's Salsa20Poly1305 with a 256-bit key to ensure no data is ever written to disk unencrypted. Writing to the internal store gives secrets the same high availability guarantees that the rest of the swarm management data gets.

When a swarm manager starts up, the encrypted Raft logs containing the secrets are decrypted using a data encryption key that is unique per node. This key, and the node's TLS credentials used to communicate with the rest of the cluster, can be encrypted with a cluster-wide key encryption key, called the unlock key, which is also propagated using Raft and will be required when the manager starts.

When you grant a newly-created or running service access to a secret, one of the manager nodes (only managers have access to the stored secrets) will send it over the already established TLS connection exclusively to the nodes that will be running that specific service. This means that nodes cannot request the secrets themselves, and will only gain access to the secrets when provided to them by a manager – strictly for the services that require them.

```
$ docker service create --name="redis" --secret="my_secret_data" redis:alpine
```

The unencrypted secret is mounted into the container in an in-memory filesystem at /run/secrets/.
-
-```
-$ docker exec $(docker ps --filter name=redis -q) ls -l /run/secrets
-total 4
--r--r--r--    1 root     root            17 Dec 13 22:48 my_secret_data
-```
-
-If a service gets deleted, or rescheduled somewhere else, the manager will immediately notify all the nodes that no longer require access to that secret to erase it from memory, and the node will no longer have any access to that application secret.
-
-```
-$ docker service update --secret-rm="my_secret_data" redis
-
-$ docker exec -it $(docker ps --filter name=redis -q) cat /run/secrets/my_secret_data
-
-cat: can't open '/run/secrets/my_secret_data': No such file or directory
-```
-
-Check out the [Docker secrets docs][18] for more information and examples on how to create and manage your secrets. And a special shout out to Laurens Van Houtven ([https://www.lvh.io/][19]) for collaborating with the Docker security and core engineering team to help make this feature a reality.
-
-### Safer Apps with Docker
-
-Docker secrets is designed to be easily usable by developers and IT ops teams to build and run safer apps. It is a container-first architecture designed to keep secrets safe and used only when needed by the exact container that needs that secret to operate. From defining apps and secrets with Docker Compose through an IT admin deploying that Compose file directly in Docker Datacenter, the services, secrets, networks and volumes will travel securely and safely with the application.
-
-Resources to learn more:
-
-* [Docker Datacenter on 1.13 with Secrets, Security Scanning, Content Cache and More][7]
-
-* [Download Docker][8] and get started today
-
-* [Try secrets in Docker Datacenter][9]
-
-* [Read the Documentation][10]
-
-* Attend an [upcoming webinar][11]
-
--------------------------------------------------------------------------------

-
-via: https://blog.docker.com/2017/02/docker-secrets-management/
-
-作者:[ Ying Li][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://blog.docker.com/author/yingli/
-[1]:http://www.linkedin.com/shareArticle?mini=true&url=http://dockr.ly/2k6gnOB&title=Introducing%20Docker%20Secrets%20Management&summary=Containers%20are%20changing%20how%20we%20view%20apps%20and%20infrastructure.%20Whether%20the%20code%20inside%20containers%20is%20big%20or%20small,%20container%20architecture%20introduces%20a%20change%20to%20how%20that%20code%20behaves%20with%20hardware%20-%20it%20fundamentally%20abstracts%20it%20from%20the%20infrastructure.%20Docker%20believes%20that%20there%20are%20three%20key%20components%20to%20container%20security%20and%20... 
[2]:http://www.reddit.com/submit?url=http://dockr.ly/2k6gnOB&title=Introducing%20Docker%20Secrets%20Management
-[3]:https://plus.google.com/share?url=http://dockr.ly/2k6gnOB
-[4]:http://news.ycombinator.com/submitlink?u=http://dockr.ly/2k6gnOB&t=Introducing%20Docker%20Secrets%20Management
-[5]:https://twitter.com/share?text=Get+safer+apps+for+dev+and+ops+w%2F+new+%23Docker+secrets+management+&via=docker&related=docker&url=http://dockr.ly/2k6gnOB
-[6]:https://twitter.com/share?text=Get+safer+apps+for+dev+and+ops+w%2F+new+%23Docker+secrets+management+&via=docker&related=docker&url=http://dockr.ly/2k6gnOB
-[7]:http://dockr.ly/AppSecurity
-[8]:https://www.docker.com/getdocker
-[9]:http://www.docker.com/trial
-[10]:https://docs.docker.com/engine/swarm/secrets/
-[11]:http://www.docker.com/webinars
-[12]:https://blog.docker.com/author/yingli/
-[13]:https://blog.docker.com/tag/container-security/
-[14]:https://blog.docker.com/tag/docker-security/
-[15]:https://blog.docker.com/tag/secrets-management/
-[16]:https://blog.docker.com/tag/security/
-[17]:https://docs.docker.com/engine/swarm/how-swarm-mode-works/pki/
-[18]:https://docs.docker.com/engine/swarm/secrets/
-[19]:https://www.lvh.io/
diff --git a/sources/tech/20170530 How to Improve a Legacy Codebase.md b/sources/tech/20170530 How to Improve a Legacy Codebase.md
new file mode 100644
index 0000000000..cff5e70538
--- /dev/null
+++ b/sources/tech/20170530 How to Improve a Legacy Codebase.md
@@ -0,0 +1,108 @@
+Translating by aiwhj
+# How to Improve a Legacy Codebase
+
+
+It happens at least once in the lifetime of every programmer, project manager or team leader. You get handed a steaming pile of manure, if you’re lucky only a few million lines worth, the original programmers have long ago left for sunnier places and the documentation - if there is any to begin with - is hopelessly out of sync with what is presently keeping the company afloat.
+
+Your job: get us out of this mess.
+
+After your first instinctive response (run for the hills) has passed you start on the project knowing full well that the eyes of the company senior leadership are on you. Failure is not an option. And yet, by the looks of what you’ve been given failure is very much in the cards. So what to do?
+
+I’ve been (un)fortunate enough to be in this situation several times, and a small band of friends and I have found that it is a lucrative business to be able to take these steaming piles of misery and to turn them into healthy maintainable projects. Here are some of the tricks that we employ:
+
+### Backup
+
+Before you start to do anything at all make a backup of  _everything_  that might be relevant. This is to make sure that no information is lost that might be of crucial importance somewhere down the line. All it takes is a silly question that you can’t answer to eat up a day or more once the change has been made. Configuration data especially is susceptible to this kind of problem; it is usually not versioned and you’re lucky if it is taken along in the periodic backup scheme. So better safe than sorry, copy everything to a very safe place and never ever touch that unless it is in read-only mode.
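+
+A minimal sketch of that first step, assuming a POSIX-ish system (the paths and names here are purely illustrative):
+
+```
+# snapshot code, data and - especially - configuration into one dated archive
+$ tar czf /safe/backups/legacy-$(date +%F).tar.gz /srv/app /etc/app
+# strip the write bits so nobody can touch the archive by accident
+$ chmod a-w /safe/backups/legacy-$(date +%F).tar.gz
+```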
+
+### Important prerequisite: make sure you have a build process that actually produces what runs in production
+
+I totally missed this step on the assumption that it is obvious and likely already in place, but many HN commenters pointed this out and they are absolutely right: step one is to make sure that you know what is running in production right now, and that means that you need to be able to build a version of the software that is - if your platform works that way - byte-for-byte identical with the current production build. If you can’t find a way to achieve this then likely you will be in for some unpleasant surprises once you commit something to production. Test this to the best of your ability to make sure that you have all the pieces in place and then, after you’ve gained sufficient confidence that it will work, move it to production. Be prepared to switch back immediately to whatever was running before, and make sure that you log everything and anything that might come in handy during the - inevitable - post-mortem.
+
+### Freeze the DB
+
+If at all possible, freeze the database schema until you are done with the first level of improvements; by the time you have a solid understanding of the codebase and the legacy code has been fully left behind, you are ready to modify the database schema. Change it any earlier than that and you may have a real problem on your hands: now you’ve lost the ability to run an old and a new codebase side-by-side with the database as the steady foundation to build on. Keeping the DB totally unchanged allows you to compare the effect your new business logic code has against the old business logic code; if it all works as advertised, there should be no differences.
+
+### Write your tests
+
+Before you make any changes at all write as many end-to-end and integration tests as you can. Make sure these tests produce the right output and test any and all assumptions that you can come up with about how you  _think_  the old stuff works (be prepared for surprises here). These tests will have two important functions: they will help to clear up any misconceptions at a very early stage and they will function as guardrails once you start writing new code to replace old code.
+
+Automate all your testing; if you’re already experienced with CI then use it, and make sure your tests run fast enough to run the full set of tests after every commit.
+
+### Instrumentation and logging
+
+If the old platform is still available for development, add instrumentation. Do this in a completely new database table: add a simple counter for every event that you can think of, and add a single function to increment these counters based on the name of the event. That way you can implement a time-stamped event log with a few extra lines of code and you’ll get a good idea of how many events of one kind lead to events of another kind. One example: User opens app, User closes app. If two events should result in some back-end calls, those two counters should over the long term remain at a constant difference; the difference is the number of apps currently open. If you see many more app opens than app closes you know there has to be a way in which apps end (for instance a crash). For each and every event you’ll find there is some kind of relationship to other events; usually you will strive for constant relationships unless there is an obvious error somewhere in the system.
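+
+A sketch of what this could look like, with sqlite3 standing in for whatever database the project actually uses (the table and event names are hypothetical):
+
+```
+$ sqlite3 metrics.db 'CREATE TABLE IF NOT EXISTS counters (
+    bucket    INTEGER PRIMARY KEY,  -- unix time / 300, i.e. one row per 5 minutes
+    app_open  INTEGER NOT NULL DEFAULT 0,
+    app_close INTEGER NOT NULL DEFAULT 0);'
+$ bucket=$(( $(date +%s) / 300 ))
+$ sqlite3 metrics.db "INSERT OR IGNORE INTO counters (bucket) VALUES ($bucket);
+    UPDATE counters SET app_open = app_open + 1 WHERE bucket = $bucket;"
+```
+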
+You’ll aim to reduce those counters that indicate errors and you’ll aim to maximize counters further down in the chain to the level indicated by the counters at the beginning. (For instance: customers attempting to pay should result in an equal number of actual payments received.)
+
+This very simple trick turns every backend application into a bookkeeping system of sorts, and just like with a real bookkeeping system the numbers have to match; as long as they don’t, you have a problem somewhere.
+
+This system will over time become invaluable in establishing the health of the system and will be a great companion next to the source code control system revision log, where you can determine the point in time that a bug was introduced and what the effect was on the various counters.
+
+I usually keep these counters at a 5 minute resolution (so 12 buckets for an hour), but if you have an application that generates fewer or more events then you might decide to change the interval at which new buckets are created. All counters share the same database table and so each counter is simply a column in that table.
+
+### Change only one thing at a time
+
+Do not fall into the trap of improving the maintainability of the code or the platform it runs on at the same time as adding new features or fixing bugs. This will cause you huge headaches because you now have to ask yourself every step of the way what the desired outcome of an action is, and it will invalidate some of the tests you made earlier.
+
+### Platform changes
+
+If you’ve decided to migrate the application to another platform then do this first  _but keep everything else exactly the same_ . If you want you can add more documentation or tests, but no more than that; all business logic and interdependencies should remain as before.
+
+### Architecture changes
+
+The next thing to tackle is to change the architecture of the application (if desired). At this point in time you are free to change the higher level structure of the code, usually by reducing the number of horizontal links between modules, and thus reducing the scope of the code active during any one interaction with the end-user. If the old code was monolithic in nature, now would be a good time to make it more modular: break up large functions into smaller ones, but leave the names of variables and data structures as they were.
+
+HN user [mannykannot][1] points out - rightly - that this is not always an option; if you’re particularly unlucky, you may have to dig in deep in order to be able to make any architecture changes. I agree with that and I should have included it here, hence this little update. What I would further like to add is that if you do both high-level and low-level changes, at least try to limit them to one file or, worst case, one subsystem, so that you limit the scope of your changes as much as possible. Otherwise you might have a very hard time debugging the change you just made.
+
+### Low level refactoring
+
+By now you should have a very good understanding of what each module does and you are ready for the real work: refactoring the code to improve maintainability and to make the code ready for new functionality. This will likely be the part of the project that consumes the most time. Document as you go; do not make changes to a module until you have thoroughly documented it and feel you understand it.
Feel free to rename variables and functions as well as data structures to improve clarity and consistency, and add tests (also unit tests, if the situation warrants them).
+
+### Fix bugs
+
+Now you’re ready to take on actual end-user visible changes. The first order of business will be the long list of bugs that have accumulated over the years in the ticket queue. As usual, first confirm the problem still exists, write a test to that effect, and then fix the bug; your CI and the end-to-end tests you’ve written should keep you safe from any mistakes you make due to a lack of understanding or some peripheral issue.
+
+### Database Upgrade
+
+If required, after all this is done and you are on a solid and maintainable codebase again, you have the option to change the database schema or to replace the database with a different make/model altogether if that is what you had planned to do. All the work you’ve done up to this point will help you make that change in a responsible manner without any surprises; you can completely test the new DB with the new code and all the tests in place to make sure your migration goes off without a hitch.
+
+### Execute on the roadmap
+
+Congratulations, you are out of the woods and are now ready to implement new functionality.
+
+### Do not ever even attempt a big-bang rewrite
+
+A big-bang rewrite is the kind of project that is pretty much guaranteed to fail. For one, you are in uncharted territory to begin with, so how would you even know what to build? For another, you are pushing  _all_  the problems to the very last day, the day just before you go ‘live’ with your new system. And that’s when you’ll fail, miserably. Business logic assumptions will turn out to be faulty, suddenly you’ll gain insight into why that old system did certain things the way it did, and in general you’ll end up realizing that the guys that put the old system together maybe weren’t idiots after all. If you really do want to wreck the company (and your own reputation to boot), by all means do a big-bang rewrite, but if you’re smart about it this is not even on the table as an option.
+
+### So, the alternative, work incrementally
+
+To untangle one of these hairballs, the quickest path to safety is to take any element of the code that you do understand (it could be a peripheral bit, but it might also be some core module) and try to incrementally improve it, still within the old context. If the old build tools are no longer available you will have to use some tricks (see below), but at least try to leave as much of what is known to work alive while you start with your changes. That way, as the codebase improves, so does your understanding of what it actually does. A typical commit should be at most a couple of lines.
+
+### Release!
+
+Every change along the way gets released into production; even if the changes are not end-user visible, it is important to make the smallest possible steps, because as long as you lack understanding of the system there is a fair chance that only the production environment will tell you there is a problem. If that problem arises right after you make a small change you will gain several advantages:
+
+* it will probably be trivial to figure out what went wrong
+
+* you will be in an excellent position to improve the process
+
+* and you should immediately update the documentation to show the new insights gained
+
+### Use proxies to your advantage
+
+If you are doing web development, praise the gods and insert a proxy between the end-users and the old system.
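+
+As a sketch of the idea, an nginx configuration for this kind of routing could look roughly like the following (the backend names and the URL are made up for illustration):
+
+```
+$ cat /etc/nginx/conf.d/migration.conf
+server {
+    listen 80;
+    # requests for one migrated URL go to the new system ...
+    location /api/payments/ {
+        proxy_pass http://new-backend:8080;
+    }
+    # ... and everything else still goes to the untouched old system
+    location / {
+        proxy_pass http://old-backend:8080;
+    }
+}
+```
+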
+Now you have per-URL control over which requests go to the old system and which you will re-route to the new system, allowing much easier and more granular control over what is run and who gets to see it. If your proxy is clever enough, you could probably use it to send a percentage of the traffic to the new system for an individual URL until you are satisfied that things work the way they should. If your integration tests also connect to this interface it is even better.
+
+### Yes, but all this will take too much time!
+
+Well, that depends on how you look at it. It’s true there is a bit of re-work involved in following these steps. But it  _does_  work, and any kind of optimization of this process makes the assumption that you know more about the system than you probably do. I’ve got a reputation to maintain and I  _really_  do not like negative surprises during work like this. With some luck the company is already on the skids, or maybe there is a real danger of messing things up for the customers. In a situation like that I prefer total control and an ironclad process over saving a couple of days or weeks if that imperils a good outcome. If you’re more into cowboy stuff - and your bosses agree - then maybe it would be acceptable to take more risk, but most companies would rather take the slightly slower but much more sure road to victory.
+
+--------------------------------------------------------------------------------
+
+via: https://jacquesmattheij.com/improving-a-legacy-codebase
+
+作者:[Jacques Mattheij ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jacquesmattheij.com/
+[1]:https://news.ycombinator.com/item?id=14445661
diff --git a/sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md b/sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md
deleted file mode 100644
index 4a7d23e5f0..0000000000
--- a/sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md
+++ /dev/null
@@ -1,91 +0,0 @@
-Why Car Companies Are Hiring Computer Security Experts
-============================================================
-
-![](https://static01.nyt.com/images/2017/06/08/business/08BITS-GURUS1/08BITS-GURUS1-superJumbo.jpg)
-The cybersecurity experts Marc Rogers, left, of CloudFlare and Kevin Mahaffey of Lookout were able to control various Tesla functions from their physically connected laptop. They pose in CloudFlare’s lobby in front of Lava Lamps used to generate numbers for encryption. Credit: Christie Hemm Klok for The New York Times
-
-It started about seven years ago. Iran’s top nuclear scientists were being assassinated in a string of similar attacks: Assailants on motorcycles were pulling up to their moving cars, attaching magnetic bombs and detonating them after the motorcyclists had fled the scene.
-
-In another seven years, security experts warn, assassins won’t need motorcycles or magnetic bombs. All they’ll need is a laptop and code to send driverless cars careering off a bridge, colliding with a driverless truck or coming to an unexpected stop in the middle of fast-moving traffic.
-
-Automakers may call them self-driving cars. But hackers call them computers that travel over 100 miles an hour.
-
-“These are no longer cars,” said Marc Rogers, the principal security researcher at the cybersecurity firm CloudFlare. “These are data centers on wheels. 
Any part of the car that talks to the outside world is a potential inroad for attackers.”
-
-Those fears came into focus two years ago when two “white hat” hackers — researchers who look for computer vulnerabilities to spot problems and fix them, rather than to commit a crime or cause problems — successfully gained access to a Jeep Cherokee from their computer miles away. They rendered their crash-test dummy (in this case a nervous reporter) powerless over his vehicle, disabling his transmission in the middle of a highway.
-
-The hackers, Chris Valasek and Charlie Miller (now security researchers respectively at Uber and Didi, an Uber competitor in China), discovered an [electronic route from the Jeep’s entertainment system to its dashboard][10]. From there, they had control of the vehicle’s steering, brakes and transmission — everything they needed to paralyze their crash test dummy in the middle of a highway.
-
-“Car hacking makes great headlines, but remember: No one has ever had their car hacked by a bad guy,” Mr. Miller wrote on Twitter last Sunday. “It’s only ever been performed by researchers.”
-
-Still, the research by Mr. Miller and Mr. Valasek came at a steep price for Jeep’s manufacturer, Fiat Chrysler, which was forced to recall 1.4 million of its vehicles as a result of the hacking experiment.
-
-It is no wonder that Mary Barra, the chief executive of General Motors, called cybersecurity her company’s top priority last year. Now the skills of researchers and so-called white hat hackers are in high demand among automakers and tech companies pushing ahead with driverless car projects.
-
-Uber, [Tesla][11], Apple and Didi in China have been actively recruiting white hat hackers like Mr. Miller and Mr. Valasek from one another as well as from traditional cybersecurity firms and academia.
-
-Last year, Tesla poached Aaron Sigel, Apple’s manager of security for its iOS operating system. Uber poached Chris Gates, formerly a white hat hacker at Facebook. Didi poached Mr. Miller from Uber, where he had gone to work after the Jeep hack. And security firms have seen dozens of engineers leave their ranks for autonomous-car projects.
-
-Mr. Miller said he left Uber for Didi, in part, because his new Chinese employer has given him more freedom to discuss his work.
-
-“Carmakers seem to be taking the threat of cyberattack more seriously, but I’d still like to see more transparency from them,” Mr. Miller wrote on Twitter on Saturday.
-
-Like a number of big tech companies, Tesla and Fiat Chrysler started paying out rewards to hackers who turn over flaws the hackers discover in their systems. GM has done something similar, though critics say GM’s program is limited when compared with the ones offered by tech companies, and so far no rewards have been paid out.
-
-One year after the Jeep hack by Mr. Miller and Mr. Valasek, they demonstrated all the other ways they could mess with a Jeep driver, including hijacking the vehicle’s cruise control, swerving the steering wheel 180 degrees or slamming on the parking brake in high-speed traffic — all from a computer in the back of the car. (Those exploits ended with their test Jeep in a ditch and calls to a local tow company.)
-
-Granted, they had to be in the Jeep to make all that happen. But it was evidence of what is possible. 
- -The Jeep penetration was preceded by a [2011 hack by security researchers at the University of Washington][12] and the University of California, San Diego, who were the first to remotely hack a sedan and ultimately control its brakes via Bluetooth. The researchers warned car companies that the more connected cars become, the more likely they are to get hacked. - -Security researchers have also had their way with Tesla’s software-heavy Model S car. In 2015, Mr. Rogers, together with Kevin Mahaffey, the chief technology officer of the cybersecurity company Lookout, found a way to control various Tesla functions from their physically connected laptop. - -One year later, a team of Chinese researchers at Tencent took their research a step further, hacking a moving Tesla Model S and controlling its brakes from 12 miles away. Unlike Chrysler, Tesla was able to dispatch a remote patch to fix the security holes that made the hacks possible. - -In all the cases, the car hacks were the work of well meaning, white hat security researchers. But the lesson for all automakers was clear. - -The motivations to hack vehicles are limitless. When it learned of Mr. Rogers’s and Mr. Mahaffey’s investigation into Tesla’s Model S, a Chinese app-maker asked Mr. Rogers if he would be interested in sharing, or possibly selling, his discovery, he said. (The app maker was looking for a backdoor to secretly install its app on Tesla’s dashboard.) - -Criminals have not yet shown they have found back doors into connected vehicles, though for years, they have been actively developing, trading and deploying tools that can intercept car key communications. - -But as more driverless and semiautonomous cars hit the open roads, they will become a more worthy target. Security experts warn that driverless cars present a far more complex, intriguing and vulnerable “attack surface” for hackers. Each new “connected” car feature introduces greater complexity, and with complexity inevitably comes vulnerability. - -Twenty years ago, cars had, on average, one million lines of code. The General Motors 2010 [Chevrolet Volt][13] had about 10 million lines of code — more than an [F-35 fighter jet][14]. - -Today, an average car has more than 100 million lines of code. Automakers predict it won’t be long before they have 200 million. When you stop to consider that, on average, there are 15 to 50 defects per 1,000 lines of software code, the potentially exploitable weaknesses add up quickly. - -The only difference between computer code and driverless car code is that, “Unlike data center enterprise security — where the biggest threat is loss of data — in automotive security, it’s loss of life,” said David Barzilai, a co-founder of Karamba Security, an Israeli start-up that is working on addressing automotive security. - -To truly secure autonomous vehicles, security experts say, automakers will have to address the inevitable vulnerabilities that pop up in new sensors and car computers, address inherent vulnerabilities in the base car itself and, perhaps most challenging of all, bridge the cultural divide between automakers and software companies. - -“The genie is out of the bottle, and to solve this problem will require a major cultural shift,” said Mr. Mahaffey of the cybersecurity company Lookout. “And an automaker that truly values cybersecurity will treat security vulnerabilities the same they would an airbag recall. We have not seen that industrywide shift yet.” - -There will be winners and losers, Mr. 
Mahaffey added: “Automakers that transform themselves into software companies will win. Others will get left behind.” - --------------------------------------------------------------------------------- - -via: https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html - -作者:[NICOLE PERLROTH ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.nytimes.com/by/nicole-perlroth -[1]:https://www.nytimes.com/2016/06/09/technology/software-as-weaponry-in-a-computer-connected-world.html -[2]:https://www.nytimes.com/2015/08/29/technology/uber-hires-two-engineers-who-showed-cars-could-be-hacked.html -[3]:https://www.nytimes.com/2015/08/11/opinion/zeynep-tufekci-why-smart-objects-may-be-a-dumb-idea.html -[4]:https://www.nytimes.com/by/nicole-perlroth -[5]:https://www.nytimes.com/column/bits -[6]:https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#story-continues-1 -[7]:http://www.nytimes.com/newsletters/sample/bits?pgtype=subscriptionspage&version=business&contentId=TU&eventName=sample&module=newsletter-sign-up -[8]:https://www.nytimes.com/privacy -[9]:https://www.nytimes.com/help/index.html -[10]:https://bits.blogs.nytimes.com/2015/07/21/security-researchers-find-a-way-to-hack-cars/ -[11]:http://www.nytimes.com/topic/company/tesla-motors-inc?inline=nyt-org -[12]:http://www.autosec.org/pubs/cars-usenixsec2011.pdf -[13]:http://autos.nytimes.com/2011/Chevrolet/Volt/238/4117/329463/researchOverview.aspx?inline=nyt-classifier -[14]:http://topics.nytimes.com/top/reference/timestopics/subjects/m/military_aircraft/f35_airplane/index.html?inline=nyt-classifier -[15]:https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#story-continues-3 diff --git a/sources/tech/20170622 A users guide to links in the Linux filesystem.md b/sources/tech/20170622 A users guide to links in the Linux filesystem.md new file mode 100644 index 0000000000..3cb59aaacb --- /dev/null +++ b/sources/tech/20170622 A users guide to links in the Linux filesystem.md @@ -0,0 +1,314 @@ +Translating by yongshouzhang + + +A user's guide to links in the Linux filesystem +============================================================ + +### Learn how to use links, which make tasks easier by providing access to files from multiple locations in the Linux filesystem directory tree. + + +![A user's guide to links in the Linux filesystem](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/links.png?itok=AumNmse7 "A user's guide to links in the Linux filesystem") +Image by : [Paul Lewin][8]. Modified by Opensource.com. [CC BY-SA 2.0][9] + +In articles I have written about various aspects of Linux filesystems for Opensource.com, including [An introduction to Linux's EXT4 filesystem][10]; [Managing devices in Linux][11]; [An introduction to Linux filesystems][12]; and [A Linux user's guide to Logical Volume Management][13], I have briefly mentioned an interesting feature of Linux filesystems that can make some tasks easier by providing access to files from multiple locations in the filesystem directory tree. + +There are two types of Linux filesystem links: hard and soft. 
The difference between the two types of links is significant, but both types are used to solve similar problems. They both provide multiple directory entries (or references) to a single file, but they do it quite differently. Links are powerful and add flexibility to Linux filesystems because [everything is a file][14].
+
+I have found, for instance, that some programs required a particular version of a library. When a library upgrade replaced the old version, the program would crash with an error specifying the name of the old, now-missing library. Usually, the only change in the library name was the version number. Acting on a hunch, I simply added a link to the new library but named the link after the old library name. I tried the program again and it worked perfectly. And, okay, the program was a game, and everyone knows the lengths that gamers will go to in order to keep their games running.
+
+In fact, almost all applications are linked to libraries using a generic name with only a major version number in the link name, while the link points to the actual library file that also has a minor version number. In other instances, required files have been moved from one directory to another to comply with the Linux file specification, and there are links in the old directories for backwards compatibility with those programs that have not yet caught up with the new locations. If you do a long listing of the **/lib64** directory, you can find many examples of both.
+
+```
+lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.hwm -> ../../usr/share/cracklib/pw_dict.hwm
+lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwd -> ../../usr/share/cracklib/pw_dict.pwd
+lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwi -> ../../usr/share/cracklib/pw_dict.pwi
+lrwxrwxrwx. 1 root root 27 Jun 9 2016 libaccountsservice.so.0 -> libaccountsservice.so.0.0.0
+-rwxr-xr-x. 1 root root 288456 Jun 9 2016 libaccountsservice.so.0.0.0
+lrwxrwxrwx 1 root root 15 May 17 11:47 libacl.so.1 -> libacl.so.1.1.0
+-rwxr-xr-x 1 root root 36472 May 17 11:47 libacl.so.1.1.0
+lrwxrwxrwx. 1 root root 15 Feb 4 2016 libaio.so.1 -> libaio.so.1.0.1
+-rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.0
+-rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.1
+lrwxrwxrwx. 1 root root 30 Jan 16 16:39 libakonadi-calendar.so.4 -> libakonadi-calendar.so.4.14.26
+-rwxr-xr-x. 1 root root 816160 Jan 16 16:39 libakonadi-calendar.so.4.14.26
+lrwxrwxrwx. 1 root root 29 Jan 16 16:39 libakonadi-contact.so.4 -> libakonadi-contact.so.4.14.26
+```
+
+A few of the links in the **/lib64** directory
+
+The long listing of the **/lib64** directory above shows that the first character in the file mode is the letter "l," which means that each is a soft or symbolic link.
+
+### Hard links
+
+In [An introduction to Linux's EXT4 filesystem][15], I discussed the fact that each file has one inode that contains information about that file, including the location of the data belonging to that file. [Figure 2][16] in that article shows a single directory entry that points to the inode. Every file must have at least one directory entry that points to the inode that describes the file. The directory entry is a hard link; thus, every file has at least one hard link.
+
+In Figure 1 below, multiple directory entries point to a single inode. 
These are all hard links. I have abbreviated the locations of three of the directory entries using the tilde (**~**) convention for the home directory, so that **~** is equivalent to **/home/user** in this example. Note that the fourth directory entry is in a completely different directory, **/home/shared**, which might be a location for sharing files between users of the computer.
+
+![fig1directory_entries.png](https://opensource.com/sites/default/files/images/life/fig1directory_entries.png)
+Figure 1
+
+Hard links are limited to files contained within a single filesystem. "Filesystem" is used here in the sense of a partition or logical volume (LV) that is mounted on a specified mount point, in this case **/home**. This is because inode numbers are unique only within each filesystem, and a different filesystem, for example, **/var** or **/opt**, will have inodes with the same number as the inode for our file.
+
+Because all the hard links point to the single inode that contains the metadata about the file, attributes such as ownership, permissions, and the total number of hard links to the inode are part of the file and cannot be different for each hard link. It is one file with one set of attributes. The only attribute that can be different is the file name, which is not contained in the inode. Hard links to a single **file/inode** located in the same directory must have different names, because there can be no duplicate file names within a single directory.
+
+The number of hard links for a file is displayed with the **ls -l** command. If you want to display the actual inode numbers, the command **ls -li** does that.
+
+### Symbolic (soft) links
+
+The difference between a hard link and a soft link, also known as a symbolic link (or symlink), is that, while hard links point directly to the inode belonging to the file, soft links point to a directory entry, i.e., one of the hard links. Because soft links point to a hard link for the file and not the inode, they are not dependent upon the inode number and can work across filesystems, spanning partitions and LVs.
+
+The downside to this is that if the hard link to which the symlink points is deleted or renamed, the symlink is broken. The symlink is still there, but it points to a hard link that no longer exists. Fortunately, the **ls** command highlights broken links with flashing white text on a red background in a long listing.
+
+### Lab project: experimenting with links
+
+I think the easiest way to understand the use of and differences between hard and soft links is with a lab project that you can do. This project should be done in an empty directory as a  _non-root user_ . I created the **~/temp** directory for this project, and you should, too. It creates a safe place to do the project and provides a new, empty directory to work in so that only files associated with this project will be located there.
+
+### **Initial setup**
+
+First, create the temporary directory in which you will perform the tasks needed for this project. Ensure that the present working directory (PWD) is your home directory, then enter the following command.
+
+```
+mkdir temp
+```
+
+Change into **~/temp** to make it the PWD with this command.
+
+```
+cd temp
+```
+
+To get started, we need to create a file we can link to. The following command does that and provides some content as well.
+
+```
+du -h > main.file.txt
+```
+
+Use the **ls -l** long listing to verify that the file was created correctly. 
It should look similar to my results. Note that the file size is only 7 bytes, but yours may vary by a byte or two.
+
+```
+[dboth@david temp]$ ls -l
+total 4
+-rw-rw-r-- 1 dboth dboth 7 Jun 13 07:34 main.file.txt
+```
+
+Notice the number "1" following the file mode in the listing. That number represents the number of hard links that exist for the file. For now, it should be 1 because we have not created any additional links to our test file.
+
+### **Experimenting with hard links**
+
+Hard links create a new directory entry pointing to the same inode, so when hard links are added to a file, you will see the number of links increase. Ensure that the PWD is still **~/temp**. Create a hard link to the file **main.file.txt**, then do another long listing of the directory.
+
+```
+[dboth@david temp]$ ln main.file.txt link1.file.txt
+[dboth@david temp]$ ls -l
+total 8
+-rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 link1.file.txt
+-rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 main.file.txt
+```
+
+Notice that both files have two links and are exactly the same size. The date stamp is also the same. This is really one file with one inode and two links, i.e., directory entries to it. Create a second hard link to this file and list the directory contents. You can create the link to either of the existing ones: **link1.file.txt** or **main.file.txt**.
+
+```
+[dboth@david temp]$ ln link1.file.txt link2.file.txt ; ls -l
+total 16
+-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link1.file.txt
+-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link2.file.txt
+-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 main.file.txt
+```
+
+Notice that each new hard link in this directory must have a different name because two files—really directory entries—cannot have the same name within the same directory. Try to create another link with a target name the same as one of the existing ones.
+
+```
+[dboth@david temp]$ ln main.file.txt link2.file.txt
+ln: failed to create hard link 'link2.file.txt': File exists
+```
+
+Clearly that does not work, because **link2.file.txt** already exists. So far, we have created only hard links in the same directory. So, create a link in your home directory, the parent of the temp directory in which we have been working so far.
+
+```
+[dboth@david temp]$ ln main.file.txt ../main.file.txt ; ls -l ../main*
+-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
+```
+
+The **ls** command in the above listing shows that the **main.file.txt** file does exist in the home directory with the same name as the file in the temp directory. Of course, these are not different files; they are the same file with multiple links—directory entries—to the same inode. To help illustrate the next point, add a file that is not a link.
+
+```
+[dboth@david temp]$ touch unlinked.file ; ls -l
+total 12
+-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
+-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
+-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
+-rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+Look at the inode number of the hard links and that of the new file using the **-i** option to the **ls** command.
+
+```
+[dboth@david temp]$ ls -li
+total 12
+657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
+657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+Notice the number **657024** to the left of the file mode in the example above. 
That is the inode number, and all three file links point to the same inode. You can use the **-i** option to view the inode number for the link we created in the home directory as well, and that will also show the same value. The inode number of the file that has only one link is different from the others. Note that the inode numbers will be different on your system.
+
+Let's change the size of one of the hard-linked files.
+
+```
+[dboth@david temp]$ df -h > link2.file.txt ; ls -li
+total 12
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
+657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+The file size of all the hard-linked files is now larger than before. That is because there is really only one file that is linked to by multiple directory entries.
+
+I know this next experiment will work on my computer because my **/tmp** directory is on a separate LV. If you have a separate LV or a filesystem on a different partition (if you're not using LVs), determine whether or not you have access to that LV or partition. If you don't, you can try to insert a USB memory stick and mount it. If one of those options works for you, you can do this experiment.
+
+Try to create a link to one of the files in your **~/temp** directory in **/tmp** (or wherever your different filesystem directory is located).
+
+```
+[dboth@david temp]$ ln link2.file.txt /tmp/link3.file.txt
+ln: failed to create hard link '/tmp/link3.file.txt' => 'link2.file.txt':
+Invalid cross-device link
+```
+
+Why does this error occur? The reason is that each separately mountable filesystem has its own set of inode numbers. Simply referring to a file by an inode number across the entire Linux directory structure can result in confusion because the same inode number can exist in each mounted filesystem.
+
+There may be a time when you will want to locate all the hard links that belong to a single inode. You can find the inode number using the **ls -li** command. Then you can use the **find** command to locate all links with that inode number.
+
+```
+[dboth@david temp]$ find . -inum 657024
+./main.file.txt
+./link1.file.txt
+./link2.file.txt
+```
+
+Note that the **find** command did not find all four of the hard links to this inode because we started at the current directory of **~/temp**. The **find** command only finds files in the PWD and its subdirectories. To find all the links, we can use the following command, which specifies your home directory as the starting place for the search.
+
+```
+[dboth@david temp]$ find ~ -samefile main.file.txt
+/home/dboth/temp/main.file.txt
+/home/dboth/temp/link1.file.txt
+/home/dboth/temp/link2.file.txt
+/home/dboth/main.file.txt
+```
+
+You may see error messages if you do not have permissions as a non-root user. This command also uses the **-samefile** option instead of specifying the inode number. This works the same as using the inode number and can be easier if you know the name of one of the hard links.
+
+### **Experimenting with soft links**
+
+As you have just seen, creating hard links is not possible across filesystem boundaries; that is, from a filesystem on one LV or partition to a filesystem on another. Soft links are a means to answer that problem with hard links. Although they can accomplish the same end, they are very different, and knowing these differences is important. 
+
+Let's start by creating a symlink in our **~/temp** directory to start our exploration.
+
+```
+[dboth@david temp]$ ln -s link2.file.txt link3.file.txt ; ls -li
+total 12
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
+658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
+link2.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
+657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+The hard links, those that have the inode number **657024**, are unchanged, and the number of hard links shown for each has not changed. The newly created symlink has a different inode, number **658270**. The soft link named **link3.file.txt** points to **link2.file.txt**. Use the **cat** command to display the contents of **link3.file.txt**. The file mode information for the symlink starts with the letter "**l**" which indicates that this file is actually a symbolic link.
+
+The size of the symlink **link3.file.txt** is only 14 bytes in the example above. That is the size of the text **link3.file.txt -> link2.file.txt**, which is the actual content of the directory entry. The directory entry **link3.file.txt** does not point to an inode; it points to another directory entry, which makes it useful for creating links that span file system boundaries. So, let's create that link we tried before from the **/tmp** directory.
+
+```
+[dboth@david temp]$ ln -s /home/dboth/temp/link2.file.txt
+/tmp/link3.file.txt ; ls -l /tmp/link*
+lrwxrwxrwx 1 dboth dboth 31 Jun 14 21:53 /tmp/link3.file.txt ->
+/home/dboth/temp/link2.file.txt
+```
+
+### **Deleting links**
+
+There are some other things that you should consider when you need to delete links or the files to which they point.
+
+First, let's delete the link **main.file.txt**. Remember that every directory entry that points to an inode is simply a hard link.
+
+```
+[dboth@david temp]$ rm main.file.txt ; ls -li
+total 8
+657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link1.file.txt
+657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link2.file.txt
+658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
+link2.file.txt
+657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+The link **main.file.txt** was the first link created when the file was created. Deleting it now still leaves the original file and its data on the hard drive along with all the remaining hard links. To delete the file and its data, you would have to delete all the remaining hard links.
+
+Now delete the **link2.file.txt** hard link.
+
+```
+[dboth@david temp]$ rm link2.file.txt ; ls -li
+total 4
+657024 -rw-rw-r-- 2 dboth dboth 1157 Jun 14 14:14 link1.file.txt
+658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
+link2.file.txt
+657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+Notice what happens to the soft link. Deleting the hard link to which the soft link points leaves a broken link. On my system, the broken link is highlighted in colors and the target hard link is flashing. If the broken link needs to be fixed, you can create another hard link in the same directory with the same name as the old one, so long as not all the hard links have been deleted. You could also recreate the link itself, with the link maintaining the same name but pointing to one of the remaining hard links.
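+
+One practical consequence is that a script can tell a broken symlink apart from a missing file. A short illustration using the lab files above (**test -L** checks the link itself, while **test -e** follows it to the target):
+
+```
+[dboth@david temp]$ readlink link3.file.txt
+link2.file.txt
+[dboth@david temp]$ test -L link3.file.txt && echo "is a symlink"
+is a symlink
+[dboth@david temp]$ test -e link3.file.txt || echo "target is missing"
+target is missing
+```
+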
+Of course, if the soft link is no longer needed, it can be deleted with the **rm** command.
+
+The **unlink** command can also be used to delete files and links. It is very simple and has no options, unlike the **rm** command. It does, however, more accurately reflect the underlying process of deletion, in that it removes the link—the directory entry—to the file being deleted.
+
+### Final thoughts
+
+I worked with both types of links for a long time before I began to understand their capabilities and idiosyncrasies. It took writing a lab project for a Linux class I taught to fully appreciate how links work. This article is a simplification of what I taught in that class, and I hope it speeds your learning curve.
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years.
+
+---------------------------------
+
+via: https://opensource.com/article/17/6/linking-linux-filesystem
+
+作者:[David Both ][a]
+译者:[runningwater](https://github.com/runningwater)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/dboth
+[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
+[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
+[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
+[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?src=linux_resource_menu&intcmp=7016000000127cYAAQ
+[5]:https://opensource.com/tags/linux?src=linux_resource_menu
+[6]:https://opensource.com/article/17/6/linking-linux-filesystem?rate=YebHxA-zgNopDQKKOyX3_r25hGvnZms_33sYBUq-SMM
+[7]:https://opensource.com/user/14106/feed
+[8]:https://www.flickr.com/photos/digypho/7905320090
+[9]:https://creativecommons.org/licenses/by/2.0/
+[10]:https://opensource.com/article/17/5/introduction-ext4-filesystem
+[11]:https://opensource.com/article/16/11/managing-devices-linux
+[12]:https://opensource.com/life/16/10/introduction-linux-filesystems
+[13]:https://opensource.com/business/16/9/linux-users-guide-lvm
+[14]:https://opensource.com/life/15/9/everything-is-a-file
+[15]:https://opensource.com/article/17/5/introduction-ext4-filesystem
+[16]:https://opensource.com/article/17/5/introduction-ext4-filesystem#fig2
+[17]:https://opensource.com/users/dboth
+[18]:https://opensource.com/article/17/6/linking-linux-filesystem#comments
diff --git a/sources/tech/20170921 How to answer questions in a helpful way.md b/sources/tech/20170921 How to answer questions in a helpful way.md
deleted file mode 100644
index 8a3601ed06..0000000000
--- a/sources/tech/20170921 How to answer questions in a helpful way.md
+++ /dev/null
@@ -1,172 +0,0 @@
-How to answer questions in a helpful way
-============================================================
-
-Your coworker asks you a slightly unclear question. How do you answer? 
I think asking questions is a skill (see [How to ask good questions][1]) and that answering questions in a helpful way is also a skill! Both of them are super useful.
-
-To start out with – sometimes the people asking you questions don’t respect your time, and that sucks. I’m assuming here throughout that that’s not what’s happening – we’re going to assume that the person asking you questions is a reasonable person who is trying their best to figure something out and that you want to help them out. Everyone I work with is like that and so that’s the world I live in :)
-
-Here are a few strategies for answering questions in a helpful way!
-
-### If they’re not asking clearly, help them clarify
-
-Often beginners don’t ask clear questions, or ask questions that don’t include the information needed to answer them. Here are some strategies you can use to help them clarify.
-
-* **Rephrase a more specific question** back at them (“Are you asking X?”)
-
-* **Ask them for more specific information** they didn’t provide (“are you using IPv6?”)
-
-* **Ask what prompted their question**. For example, sometimes people come into my team’s channel with questions about how our service discovery works. Usually this is because they’re trying to set up/reconfigure a service. In that case it’s helpful to ask “which service are you working with? Can I see the pull request you’re working on?”
-
-A lot of these strategies come from the [how to ask good questions][2] post. (though I would never say to someone “oh you need to read this Document On How To Ask Good Questions before asking me a question”)
-
-### Figure out what they know already
-
-Before answering a question, it’s very useful to know what the person knows already!
-
-Harold Treen gave me a great example of this:
-
-> Someone asked me the other day to explain “Redux Sagas”. Rather than dive in and say “They are like worker threads that listen for actions and let you update the store!” 
-> I started figuring out how much they knew about Redux, actions, the store and all these other fundamental concepts. From there it was easier to explain the concept that ties those other concepts together.
-
-Figuring out what your question-asker knows already is important because they may be confused about fundamental concepts (“What’s Redux?”), or they may be an expert who’s getting at a subtle corner case. An answer building on concepts they don’t know is confusing, and an answer that recaps things they know is tedious.
-
-One useful trick for asking what people know – instead of “Do you know X?”, maybe try “How familiar are you with X?”.
-
-### Point them to the documentation
-
-“RTFM” is the classic unhelpful answer to a question, but pointing someone to a specific piece of documentation can actually be really helpful! When I’m asking a question, I’d honestly rather be pointed to documentation that actually answers my question, because it’s likely to answer other questions I have too.
-
-I think it’s important here to make sure you’re linking to documentation that actually answers the question, or at least check in afterwards to make sure it helped. Otherwise you can end up with this (pretty common) situation:
-
-* Ali: How do I do X?
-
-* Jada: (posts a link to the documentation)
-
-* Ali: That doesn’t actually explain how to X, it only explains Y!
-
-If the documentation I’m linking to is very long, I like to point out the specific part of the documentation I’m talking about. 
The [bash man page][3] is 44,000 words (really!), so just saying “it’s in the bash man page” is not that helpful :) - -### Point them to a useful search - -Often I find things at work by searching for some Specific Keyword that I know will find me the answer. That keyword might not be obvious to a beginner! So saying “this is the search I’d use to find the answer to that question” can be useful. Again, check in afterwards to make sure the search actually gets them the answer they need :) - -### Write new documentation - -People often come and ask my team the same questions over and over again. This is obviously not the fault of the people (how should  _they_  know that 10 people have asked this already, or what the answer is?). So we’re trying to, instead of answering the questions directly, - -1. Immediately write documentation - -2. Point the person to the new documentation we just wrote - -3. Celebrate! - -Writing documentation sometimes takes more time than just answering the question, but it’s often worth it! Writing documentation is especially worth it if: - -a. It’s a question which is being asked again and again b. The answer doesn’t change too much over time (if the answer changes every week or month, the documentation will just get out of date and be frustrating) - -### Explain what you did - -As a beginner to a subject, it’s really frustrating to have an exchange like this: - -* New person: “hey how do you do X?” - -* More Experienced Person: “I did it, it is done.” - -* New person: ….. but what did you DO?! - -If the person asking you is trying to learn how things work, it’s helpful to: - -* Walk them through how to accomplish a task instead of doing it yourself - -* Tell them the steps for how you got the answer you gave them! - -This might take longer than doing it yourself, but it’s a learning opportunity for the person who asked, so that they’ll be better equipped to solve such problems in the future. - -Then you can have WAY better exchanges, like this: - -* New person: “I’m seeing errors on the site, what’s happening?” - -* More Experienced Person: (2 minutes later) “oh that’s because there’s a database failover happening” - -* New person: how did you know that??!?!? - -* More Experienced Person: “Here’s what I did!”: - 1. Often these errors are due to Service Y being down. I looked at $PLACE and it said Service Y was up. So that wasn’t it. - - 2. Then I looked at dashboard X, and this part of that dashboard showed there was a database failover happening. - - 3. Then I looked in the logs for the service and it showed errors connecting to the database, here’s what those errors look like. - -If you’re explaining how you debugged a problem, it’s useful both to explain how you found out what the problem was, and how you found out what the problem wasn’t. While it might feel good to look like you knew the answer right off the top of your head, it feels even better to help someone improve at learning and diagnosis, and understand the resources available. - -### Solve the underlying problem - -This one is a bit tricky. Sometimes people think they’ve got the right path to a solution, and they just need one more piece of information to implement that solution. But they might not be quite on the right path! For example: - -* George: I’m doing X, and I got this error, how do I fix it - -* Jasminda: Are you actually trying to do Y? If so, you shouldn’t do X, you should do Z instead - -* George: Oh, you’re right!!! Thank you! I will do Z instead. 
- -Jasminda didn’t answer George’s question at all! Instead she guessed that George didn’t actually want to be doing X, and she was right. That is helpful! - -It’s possible to come off as condescending here though, like - -* George: I’m doing X, and I got this error, how do I fix it? - -* Jasminda: Don’t do that, you’re trying to do Y and you should do Z to accomplish that instead. - -* George: Well, I am not trying to do Y, I actually want to do X because REASONS. How do I do X? - -So don’t be condescending, and keep in mind that some questioners might be attached to the steps they’ve taken so far! It might be appropriate to answer both the question they asked and the one they should have asked: “Well, if you want to do X then you might try this, but if you’re trying to solve problem Y with that, you might have better luck doing this other thing, and here’s why that’ll work better”. - -### Ask “Did that answer your question?” - -I always like to check in after I  _think_  I’ve answered the question and ask “did that answer your question? Do you have more questions?”. - -It’s good to pause and wait after asking this because often people need a minute or two to know whether or not they’ve figured out the answer. I especially find this extra “did this answer your questions?” step helpful after writing documentation! Often when writing documentation about something I know well I’ll leave out something very important without realizing it. - -### Offer to pair program/chat in real life - -I work remote, so many of my conversations at work are text-based. I think of that as the default mode of communication. - -Today, we live in a world of easy video conferencing & screensharing! At work I can at any time click a button and immediately be in a video call/screensharing session with someone. Some problems are easier to talk about using your voices! - -For example, recently someone was asking about capacity planning/autoscaling for their service. I could tell there were a few things we needed to clear up but I wasn’t exactly sure what they were yet. We got on a quick video call and 5 minutes later we’d answered all their questions. - -I think especially if someone is really stuck on how to get started on a task, pair programming for a few minutes can really help, and it can be a lot more efficient than email/instant messaging. - -### Don’t act surprised - -This one’s a rule from the Recurse Center: [no feigning surprise][4]. Here’s a relatively common scenario - -* Human 1: “what’s the Linux kernel?” - -* Human 2: “you don’t know what the LINUX KERNEL is?!!!!?!!!???” - -Human 2’s reaction (regardless of whether they’re  _actually_  surprised or not) is not very helpful. It mostly just serves to make Human 1 feel bad that they don’t know what the Linux kernel is. - -I’ve worked on actually pretending not to be surprised even when I actually am a bit surprised the person doesn’t know the thing and it’s awesome. - -### Answering questions well is awesome - -Obviously not all these strategies are appropriate all the time, but hopefully you will find some of them helpful! I find taking the time to answer questions and teach people can be really rewarding. - -Special thanks to Josh Triplett for suggesting this post and making many helpful additions, and to Harold Treen, Vaibhav Sagar, Peter Bhat Harkins, Wesley Aptekar-Cassels, and Paul Gowder for reading/commenting. 
- --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/answer-questions-well/ - -作者:[ Julia Evans][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/about -[1]:https://jvns.ca/blog/good-questions/ -[2]:https://jvns.ca/blog/good-questions/ -[3]:https://linux.die.net/man/1/bash -[4]:https://jvns.ca/blog/2017/04/27/no-feigning-surprise/ diff --git a/sources/tech/20171005 How to manage Linux containers with Ansible Container.md b/sources/tech/20171005 How to manage Linux containers with Ansible Container.md deleted file mode 100644 index 897b793a86..0000000000 --- a/sources/tech/20171005 How to manage Linux containers with Ansible Container.md +++ /dev/null @@ -1,114 +0,0 @@ -How to manage Linux containers with Ansible Container -============================================================ - -### Ansible Container addresses Dockerfile shortcomings and offers complete management for containerized projects. - -![Ansible Container: A new way to manage containers](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/container-ship.png?itok=pqZYgQ7K "Ansible Container: A new way to manage containers") -Image by : opensource.com - -I love containers and use the technology every day. Even so, containers aren't perfect. Over the past couple of months, however, a set of projects has emerged that addresses some of the problems I've experienced. - -I started using containers with [Docker][11], since this project made the technology so popular. Aside from using the container engine, I learned how to use **[docker-compose][6]** and started managing my projects with it. My productivity skyrocketed! One command to run my project, no matter how complex it was. I was so happy. - -After some time, I started noticing issues. The most apparent were related to the process of creating container images. The Docker tool uses a custom file format as a recipe to produce container images—Dockerfiles. This format is easy to learn, and after a short time you are ready to produce container images on your own. The problems arise once you want to master best practices or have complex scenarios in mind. - -More on Ansible - -* [How Ansible works][1] - -* [Free Ansible eBooks][2] - -* [Ansible quick start video][3] - -* [Download and install Ansible][4] - -Let's take a break and travel to a different land: the world of [Ansible][22]. You know it? It's awesome, right? You don't? Well, it's time to learn something new. Ansible is a project that allows you to manage your infrastructure by writing tasks and executing them inside environments of your choice. No need to install and set up any services; everything can easily run from your laptop. Many people already embrace Ansible. - -Imagine this scenario: You invested in Ansible, you wrote plenty of Ansible roles and playbooks that you use to manage your infrastructure, and you are thinking about investing in containers. What should you do? Start writing container image definitions via shell scripts and Dockerfiles? That doesn't sound right. - -Some people from the Ansible development team asked this question and realized that those same Ansible roles and playbooks that people wrote and use daily can also be used to produce container images. But not just that—they can be used to manage the complete lifecycle of containerized projects. 
From these ideas, the [Ansible Container][12] project was born. It utilizes existing Ansible roles that can be turned into container images and can even be used for the complete application lifecycle, from build to deploy in production. - -Let’s talk about the problems I mentioned regarding best practices in the context of Dockerfiles. A word of warning: This is going to be very specific and technical. Here are the top three issues I have: - -### 1\. Shell scripts embedded in Dockerfiles. - -When writing Dockerfiles, you can specify a script that will be interpreted via **/bin/sh -c**. It can be something like: - -``` -RUN dnf install -y nginx -``` - -where RUN is a Dockerfile instruction and the rest are its arguments (which are passed to the shell). But imagine a more complex scenario: - -``` -RUN set -eux; \ -    \ -# this "case" statement is generated via "update.sh" -    %%ARCH-CASE%%; \ -    \ -    url="https://golang.org/dl/go${GOLANG_VERSION}.${goRelArch}.tar.gz"; \ -    wget -O go.tgz "$url"; \ -    echo "${goRelSha256} *go.tgz" | sha256sum -c -; \ -``` - -This one is taken from [the official golang image][13]. It doesn't look pretty, right? - -### 2\. You can't parse Dockerfiles easily. - -Dockerfiles are a new format without a formal specification. This is tricky if you need to process Dockerfiles in your infrastructure (e.g., automate the build process a bit). The only specification is [the code][14] that is part of **dockerd**. The problem is that you can't use it as a library. The easiest solution is to write a parser on your own and hope for the best. Wouldn't it be better to use some well-known markup language, such as YAML or JSON? - -### 3\. It's hard to control. - -If you are familiar with the internals of container images, you may know that every image is composed of layers. Once the container is created, the layers are stacked onto each other (like pancakes) using union filesystem technology. The problem is that you cannot explicitly control this layering—you can't say, "here starts a new layer." You are forced to change your Dockerfile in a way that may hurt readability. The bigger problem is that a set of best practices has to be followed to achieve optimal results—newcomers have a really hard time here. - -### Comparing Ansible language and Dockerfiles - -The biggest shortcoming of Dockerfiles in comparison to Ansible is that Ansible, as a language, is much more powerful. For example, Dockerfiles have no direct concept of variables, whereas Ansible has a complete templating system (variables are just one of its features). Ansible contains a large number of modules that can be easily utilized, such as [**wait_for**][15], which can be used for service readiness checks—e.g., wait until a service is ready before proceeding. With Dockerfiles, everything is a shell script. So if you need to figure out service readiness, it has to be done with shell (or installed separately). The other problem with shell scripts is that, with growing complexity, maintenance becomes a burden. Plenty of people have already figured this out and turned those shell scripts into Ansible (there's a short sketch of what that looks like at the end of this article). - -If you are interested in this topic and would like to know more, please come to [Open Source Summit][16] in Prague to see [my presentation][17] on Monday, Oct. 23, at 4:20 p.m. in Palmovka room. 
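
To make the comparison above concrete, here is a rough sketch of what the earlier `RUN dnf install -y nginx` step could look like as Ansible tasks. The role layout, variable name, and port below are illustrative assumptions, not taken from any real project; `dnf` and `wait_for` are standard Ansible modules:

```yaml
# roles/web/tasks/main.yml — hypothetical role, for illustration only
- name: Install the web server (package name comes from a variable)
  dnf:
    name: "{{ web_package | default('nginx') }}"
    state: present

- name: Wait until the service is actually accepting connections
  wait_for:
    port: 80
    timeout: 60
```

The point is not that this is shorter than the Dockerfile version, but that it is declarative, parseable YAML with variables and readiness checks built in—exactly the properties the article argues Dockerfiles lack.
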
- - - _Learn more in Tomas Tomecek's talk, [From Dockerfiles to Ansible Container][7], at [Open Source Summit EU][8], which will be held October 23-26 in Prague._ - - - -### About the author - - [![human](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/ja.jpeg?itok=4ATUEAbd)][18] Tomas Tomecek - Engineer. Hacker. Speaker. Tinker. Red Hatter. Likes containers, linux, open source, python 3, rust, zsh, tmux.[More about me][9] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/10/dockerfiles-ansible-container - -作者:[Tomas Tomecek ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/tomastomecek -[1]:https://www.ansible.com/how-ansible-works?intcmp=701f2000000h4RcAAI -[2]:https://www.ansible.com/ebooks?intcmp=701f2000000h4RcAAI -[3]:https://www.ansible.com/quick-start-video?intcmp=701f2000000h4RcAAI -[4]:https://docs.ansible.com/ansible/latest/intro_installation.html?intcmp=701f2000000h4RcAAI -[5]:https://opensource.com/article/17/10/dockerfiles-ansible-container?imm_mid=0f9013&cmp=em-webops-na-na-newsltr_20171201&rate=Wiw_0D6PK_CAjqatYu_YQH0t1sNHEF6q09_9u3sYkCY -[6]:https://github.com/docker/compose -[7]:http://sched.co/BxIW -[8]:http://events.linuxfoundation.org/events/open-source-summit-europe -[9]:https://opensource.com/users/tomastomecek -[10]:https://opensource.com/user/175651/feed -[11]:https://opensource.com/tags/docker -[12]:https://www.ansible.com/ansible-container -[13]:https://github.com/docker-library/golang/blob/master/Dockerfile-debian.template#L14 -[14]:https://github.com/moby/moby/tree/master/builder/dockerfile -[15]:http://docs.ansible.com/wait_for_module.html -[16]:http://events.linuxfoundation.org/events/open-source-summit-europe -[17]:http://events.linuxfoundation.org/events/open-source-summit-europe/program/schedule -[18]:https://opensource.com/users/tomastomecek -[19]:https://opensource.com/users/tomastomecek -[20]:https://opensource.com/users/tomastomecek -[21]:https://opensource.com/article/17/10/dockerfiles-ansible-container?imm_mid=0f9013&cmp=em-webops-na-na-newsltr_20171201#comments -[22]:https://opensource.com/tags/ansible -[23]:https://opensource.com/tags/containers -[24]:https://opensource.com/tags/ansible -[25]:https://opensource.com/tags/docker -[26]:https://opensource.com/tags/open-source-summit -diff --git a/sources/tech/20171005 Reasons Kubernetes is cool.md b/sources/tech/20171005 Reasons Kubernetes is cool.md -deleted file mode 100644 -index a9d10b9cdb..0000000000 ---- a/sources/tech/20171005 Reasons Kubernetes is cool.md -+++ /dev/null -@@ -1,148 +0,0 @@ -Reasons Kubernetes is cool -============================================================ - -When I first learned about Kubernetes (a year and a half ago?) I really didn’t understand why I should care about it. - -I’ve been working full time with Kubernetes for 3 months or so and now have some thoughts about why I think it’s useful. (I’m still very far from being a Kubernetes expert!) Hopefully this will help a little in your journey to understand what even is going on with Kubernetes! - -I will try to explain some reasons I think Kubernetes is interesting without using the words “cloud native”, “orchestration”, “container”, or any Kubernetes-specific terminology :). 
I’m going to explain this mostly from the perspective of a kubernetes operator / infrastructure engineer, since my job right now is to set up Kubernetes and make it work well. - -I’m not going to try to address the question of “should you use kubernetes for your production systems?” at all, that is a very complicated question. (not least because “in production” has totally different requirements depending on what you’re doing) - -### Kubernetes lets you run code in production without setting up new servers - -The first pitch I got for Kubernetes was the following conversation with my partner Kamal: - -Here’s an approximate transcript: - -* Kamal: With Kubernetes you can set up a new service with a single command - -* Julia: I don’t understand how that’s possible. - -* Kamal: Like, you just write 1 configuration file, apply it, and then you have a HTTP service running in production - -* Julia: But today I need to create new AWS instances, write a puppet manifest, set up service discovery, configure my load balancers, configure our deployment software, and make sure DNS is working, it takes at least 4 hours if nothing goes wrong. - -* Kamal: Yeah. With Kubernetes you don’t have to do any of that, you can set up a new HTTP service in 5 minutes and it’ll just automatically run. As long as you have spare capacity in your cluster it just works! - -* Julia: There must be a trap - -There kind of is a trap: setting up a production Kubernetes cluster is (in my experience) definitely not easy. (see [Kubernetes The Hard Way][3] for what’s involved to get started). But we’re not going to go into that right now! - -So the first cool thing about Kubernetes is that it has the potential to make life way easier for developers who want to deploy new software into production. That’s cool, and it’s actually true, once you have a working Kubernetes cluster you really can set up a production HTTP service (“run 5 of this application, set up a load balancer, give it this DNS name, done”) with just one configuration file. It’s really fun to see. - -### Kubernetes gives you easy visibility & control of what code you have running in production - -IMO you can’t understand Kubernetes without understanding etcd. So let’s talk about etcd! - -Imagine that I asked you today “hey, tell me every application you have running in production, what host it’s running on, whether it’s healthy or not, and whether or not it has a DNS name attached to it”. I don’t know about you but I would need to go look in a bunch of different places to answer this question and it would take me quite a while to figure out. I definitely can’t query just one API. - -In Kubernetes, all the state in your cluster – applications running (“pods”), nodes, DNS names, cron jobs, and more – is stored in a single database (etcd). Every Kubernetes component is stateless, and basically works by - -* Reading state from etcd (eg “the list of pods assigned to node 1”) - -* Making changes (eg “actually start running pod A on node 1”) - -* Updating the state in etcd (eg “set the state of pod A to ‘running’”) - -This means that if you want to answer a question like “hey, how many nginx pods do I have running right now in that availability zone?” you can answer it by querying a single unified API (the Kubernetes API!). And you have exactly the same access to that API that every other Kubernetes component does. - -This also means that you have easy control of everything running in Kubernetes. 
If you want to, say, - -* Implement a complicated custom rollout strategy for deployments (deploy 1 thing, wait 2 minutes, deploy 5 more, wait 3.7 minutes, etc) - -* Automatically [start a new webserver][1] every time a branch is pushed to github - -* Monitor all your running applications to make sure all of them have a reasonable cgroups memory limit - -all you need to do is to write a program that talks to the Kubernetes API. (a “controller”) - -Another very exciting thing about the Kubernetes API is that you’re not limited to just functionality that Kubernetes provides! If you decide that you have your own opinions about how your software should be deployed / created / monitored, then you can write code that uses the Kubernetes API to do it! It lets you do everything you need. - -### If every Kubernetes component dies, your code will still keep running - -One thing I was originally promised (by various blog posts :)) about Kubernetes was “hey, if the Kubernetes apiserver and everything else dies, it’s ok, your code will just keep running”. I thought this sounded cool in theory but I wasn’t sure if it was actually true. - -So far it seems to be actually true! - -I’ve been through some etcd outages now, and what happens is - -1. All the code that was running keeps running - -2. Nothing  _new_  happens (you can’t deploy new code or make changes, cron jobs will stop working) - -3. When everything comes back, the cluster will catch up on whatever it missed - -This does mean that if etcd goes down and one of your applications crashes or something, it can’t come back up until etcd returns. - -### Kubernetes’ design is pretty resilient to bugs - -Like any piece of software, Kubernetes has bugs. For example right now in our cluster the controller manager has a memory leak, and the scheduler crashes pretty regularly. Bugs obviously aren’t good but so far I’ve found that Kubernetes’ design helps mitigate a lot of the bugs in its core components really well. - -If you restart any component, what happens is: - -* It reads all its relevant state from etcd - -* It starts doing the necessary things it’s supposed to be doing based on that state (scheduling pods, garbage collecting completed pods, scheduling cronjobs, deploying daemonsets, whatever) - -Because all the components don’t keep any state in memory, you can just restart them at any time and that can help mitigate a variety of bugs. - -For example! Let’s say you have a memory leak in your controller manager. Because the controller manager is stateless, you can just periodically restart it every hour or something and feel confident that you won’t cause any consistency issues. Or we ran into a bug in the scheduler where it would sometimes just forget about pods and never schedule them. You can sort of mitigate this just by restarting the scheduler every 10 minutes. (we didn’t do that, we fixed the bug instead, but you  _could_  :) ) - -So I feel like I can trust Kubernetes’ design to help make sure the state in the cluster is consistent even when there are bugs in its core components. And in general I think the software is generally improving over time. The only stateful thing you have to operate is etcd - -Not to harp on this “state” thing too much but – I think it’s cool that in Kubernetes the only thing you have to come up with backup/restore plans for is etcd (unless you use persistent volumes for your pods). I think it makes kubernetes operations a lot easier to think about. 
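
Before moving on, here’s roughly what the “one configuration file” from the Kamal conversation looks like in practice. All names and the image below are invented for illustration; the manifest shape is standard Kubernetes (`apps/v1` on current clusters; older clusters used beta API groups):

```yaml
# hello-web.yaml — hypothetical names and image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 5                  # "run 5 of this application"
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: example.com/hello-web:1.0   # hypothetical image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service                  # a stable virtual IP + DNS name for the pods
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web
  ports:
  - port: 80
    targetPort: 8080
```

`kubectl apply -f hello-web.yaml` writes this desired state into etcd, and the stateless controllers described above notice it and converge the cluster to match.
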
- -### Implementing new distributed systems on top of Kubernetes is relatively easy - -Suppose you want to implement a distributed cron job scheduling system! Doing that from scratch is a ton of work. But implementing a distributed cron job scheduling system inside Kubernetes is much easier! (still not trivial, it’s still a distributed system) - -The first time I read the code for the Kubernetes cronjob controller I was really delighted by how simple it was. Here, go read it! The main logic is like 400 lines of Go. Go ahead, read it! => [cronjob_controller.go][4] <= - -Basically what the cronjob controller does is (there’s a sketch of the kind of resource it watches at the end of this post): - -* Every 10 seconds: - * Lists all the cronjobs that exist - - * Checks if any of them need to run right now - - * If so, creates a new Job object to be scheduled & actually run by other Kubernetes controllers - - * Cleans up finished jobs - - * Repeat - -The Kubernetes model is pretty constrained (it has this pattern: resources are defined in etcd, and controllers read those resources and update etcd), and I think having this relatively opinionated/constrained model makes it easier to develop your own distributed systems inside the Kubernetes framework. - -Kamal introduced me to this idea of “Kubernetes is a good platform for writing your own distributed systems” instead of just “Kubernetes is a distributed system you can use” and I think it’s really interesting. He has a prototype of a [system to run an HTTP service for every branch you push to github][5]. It took him a weekend and is like 800 lines of Go, which I thought was impressive! - -### Kubernetes lets you do some amazing things (but isn’t easy) - -I started out by saying “kubernetes lets you do these magical things, you can just spin up so much infrastructure with a single configuration file, it’s amazing”. And that’s true! - -What I mean by “Kubernetes isn’t easy” is that Kubernetes has a lot of moving parts, and learning how to successfully operate a highly available Kubernetes cluster is a lot of work. Like I find that with a lot of the abstractions it gives me, I need to understand what is underneath those abstractions in order to debug issues and configure things properly. I love learning new things so this doesn’t make me angry or anything, I just think it’s important to know :) - -One specific example of “I can’t just rely on the abstractions” that I’ve struggled with is that I needed to learn a LOT [about how networking works on Linux][6] to feel confident with setting up Kubernetes networking, way more than I’d ever had to learn about networking before. This was very fun but pretty time consuming. I might write more about what is hard/interesting about setting up Kubernetes networking at some point. - -Or I wrote a [2000 word blog post][7] about everything I had to learn about Kubernetes’ different options for certificate authorities to be able to set up my Kubernetes CAs successfully. - -I think some of these managed Kubernetes systems like GKE (google’s kubernetes product) may be simpler since they make a lot of decisions for you but I haven’t tried any of them. 
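
As a companion to the cronjob controller walkthrough above, here is a sketch of the kind of resource that controller lists and acts on. The name, schedule, and image are made up; the apiVersion lived in an alpha/beta `batch` group when this post was written and is `batch/v1` on modern clusters:

```yaml
# hypothetical cronjob resource the controller would pick up
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"        # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: example.com/report:1.0   # hypothetical image
```

Every pass through its loop, the controller lists objects like this one, and when a schedule fires it creates a Job object; the rest of Kubernetes takes over from there.
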
- --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2017/10/05/reasons-kubernetes-is-cool/ - -作者:[ Julia Evans][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/about -[1]:https://github.com/kamalmarhubi/kubereview -[2]:https://jvns.ca/categories/kubernetes -[3]:https://github.com/kelseyhightower/kubernetes-the-hard-way -[4]:https://github.com/kubernetes/kubernetes/blob/e4551d50e57c089aab6f67333412d3ca64bc09ae/pkg/controller/cronjob/cronjob_controller.go -[5]:https://github.com/kamalmarhubi/kubereview -[6]:https://jvns.ca/blog/2016/12/22/container-networking/ -[7]:https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/ diff --git a/sources/tech/20171010 Operating a Kubernetes network.md b/sources/tech/20171010 Operating a Kubernetes network.md deleted file mode 100644 index 9c85e9aa70..0000000000 --- a/sources/tech/20171010 Operating a Kubernetes network.md +++ /dev/null @@ -1,216 +0,0 @@ -Operating a Kubernetes network -============================================================ - -I’ve been working on Kubernetes networking a lot recently. One thing I’ve noticed is, while there’s a reasonable amount written about how to **set up** your Kubernetes network, I haven’t seen much about how to **operate** your network and be confident that it won’t create a lot of production incidents for you down the line. - -In this post I’m going to try to convince you of three things: (all I think pretty reasonable :)) - -* Avoiding networking outages in production is important - -* Operating networking software is hard - -* It’s worth thinking critically about major changes to your networking infrastructure and the impact that will have on your reliability, even if very fancy Googlers say “this is what we do at Google”. (google engineers are doing great work on Kubernetes!! But I think it’s important to still look at the architecture and make sure it makes sense for your organization.) - -I’m definitely not a Kubernetes networking expert by any means, but I have run into a few issues while setting things up and definitely know a LOT more about Kubernetes networking than I used to. - -### Operating networking software is hard - -Here I’m not talking about operating physical networks (I don’t know anything about that), but instead about keeping software like DNS servers & load balancers & proxies working correctly. - -I have been working on a team that’s responsible for a lot of networking infrastructure for a year, and I have learned a few things about operating networking infrastructure! (though I still have a lot to learn obviously). 3 overall thoughts before we start: - -* Networking software often relies very heavily on the Linux kernel. So in addition to configuring the software correctly you also need to make sure that a bunch of different sysctls are set correctly, and a misconfigured sysctl can easily be the difference between “everything is 100% fine” and “everything is on fire”. - -* Networking requirements change over time (for example maybe you’re doing 5x more DNS lookups than you were last year! Maybe your DNS server suddenly started returning TCP DNS responses instead of UDP which is a totally different kernel workload!). This means software that was working fine before can suddenly start having issues. 
- -* To fix production networking issues you often need a lot of expertise. (for example see this [great post by Sophie Haskins on debugging a kube-dns issue][1]) I’m a lot better at debugging networking issues than I was, but that’s only after spending a huge amount of time investing in my knowledge of Linux networking. - -I am still far from an expert at networking operations but I think it seems important to: - -1. Very rarely make major changes to the production networking infrastructure (because it’s super disruptive) - -2. When you  _are_  making major changes, think really carefully about what the failure modes for the new network architecture are - -3. Have multiple people who are able to understand your networking setup - -Switching to Kubernetes is obviously a pretty major networking change! So let’s talk about some of the things that can go wrong! - -### Kubernetes networking components - -The Kubernetes networking components we’re going to talk about in this post are: - -* Your overlay network backend (like flannel/calico/weave net/romana) - -* `kube-dns` - -* `kube-proxy` - -* Ingress controllers / load balancers - -* The `kubelet` - -If you’re going to set up HTTP services you probably need all of these. I’m not using most of these components yet but I’m trying to understand them, so that’s what this post is about. - -### The simplest way: Use host networking for all your containers - -Let’s start with the simplest possible thing you can do. This won’t let you run HTTP services in Kubernetes. I think it’s pretty safe because there are fewer moving parts. - -If you use host networking for all your containers I think all you need to do is: - -1. Configure the kubelet to configure DNS correctly inside your containers - -2. That’s it - -If you use host networking for literally every pod you don’t need kube-dns or kube-proxy. You don’t even need a working overlay network. - -In this setup your pods can connect to the outside world (the same way any process on your hosts would talk to the outside world) but the outside world can’t connect to your pods. - -This isn’t super important (I think most people want to run HTTP services inside Kubernetes and actually communicate with those services) but I do think it’s interesting to realize that at some level all of this networking complexity isn’t strictly required and sometimes you can get away without using it. Avoiding networking complexity seems like a good idea to me if you can. - -### Operating an overlay network - -The first networking component we’re going to talk about is your overlay network. Kubernetes assumes that every pod has an IP address and that you can communicate with services inside that pod by using that IP address. When I say “overlay network” this is what I mean (“the system that lets you refer to a pod by its IP address”). - -All other Kubernetes networking stuff relies on the overlay network working correctly. You can read more about the [kubernetes networking model here][10]. - -The approach Kelsey Hightower describes in [kubernetes the hard way][11] seems pretty good but it’s not really viable on AWS for clusters more than 50 nodes or so, so I’m not going to talk about that. - -There are a lot of overlay network backends (calico, flannel, weaveworks, romana) and the landscape is pretty confusing. But as far as I’m concerned an overlay network has 2 responsibilities: - -1. Make sure your pods can send network requests outside your cluster - -2. 
Keep a stable mapping of nodes to subnets and keep every node in your cluster updated with that mapping. Do the right thing when nodes are added & removed. - -Okay! So! What can go wrong with your overlay network? - -* The overlay network is responsible for setting up iptables rules (basically `iptables -t nat -A POSTROUTING -s $SUBNET -j MASQUERADE`) to ensure that containers can make network requests outside Kubernetes. If something goes wrong with this rule then your containers can’t connect to the external network. This isn’t that hard (it’s just a few iptables rules) but it is important. I made a [pull request][2] because I wanted to make sure this was resilient - -* Something can go wrong with adding or deleting nodes. We’re using the flannel hostgw backend and at the time we started using it, node deletion [did not work][3]. - -* Your overlay network is probably dependent on a distributed database (etcd). If that database has an incident, this can cause issues. For example [https://github.com/coreos/flannel/issues/610][4] says that if you have data loss in your flannel etcd cluster it can result in containers losing network connectivity. (this has now been fixed) - -* You upgrade Docker and everything breaks - -* Probably more things! - -I’m mostly talking about past issues in Flannel here but I promise I’m not picking on Flannel – I actually really **like** Flannel because I feel like it’s relatively simple (for instance the [vxlan backend part of it][12] is like 500 lines of code) and I feel like it’s possible for me to reason through any issues with it. And it’s obviously continuously improving. They’ve been great about reviewing pull requests. - -My approach to operating an overlay network so far has been: - -* Learn how it works in detail and how to debug it (for example the hostgw network backend for Flannel works by creating routes, so you mostly just need to do `sudo ip route list` to see whether it’s doing the correct thing) - -* Maintain an internal build so it’s easy to patch it if needed - -* When there are issues, contribute patches upstream - -I think it’s actually really useful to go through the list of merged PRs and see bugs that have been fixed in the past – it’s a bit time consuming but is a great way to get a concrete list of kinds of issues other people have run into. - -It’s possible that for other people their overlay networks just work but that hasn’t been my experience and I’ve heard other folks report similar issues. If you have an overlay network setup that is a) on AWS and b) works on a cluster of more than 50-100 nodes, and that you feel confident about operating, I would like to know. - -### Operating kube-proxy and kube-dns? - -Now that we have some thoughts about operating overlay networks, let’s talk about kube-proxy and kube-dns. - -There’s a question mark next to this one because I haven’t done this. Here I have more questions than answers. - -Here’s how Kubernetes services work! A service is a collection of pods, which each have their own IP address (like 10.1.0.3, 10.2.3.5, 10.3.5.6) - -1. Every Kubernetes service gets an IP address (like 10.23.1.2) - -2. `kube-dns` resolves Kubernetes service DNS names to IP addresses (so my-svc.my-namespace.svc.cluster.local might map to 10.23.1.2) - -3. `kube-proxy` sets up iptables rules in order to do random load balancing between them. Kube-proxy also has a userspace round-robin load balancer but my impression is that they don’t recommend using it. 
- -So when you make a request to `my-svc.my-namespace.svc.cluster.local`, it resolves to 10.23.1.2, and then iptables rules on your local host (generated by kube-proxy) redirect it to one of 10.1.0.3 or 10.2.3.5 or 10.3.5.6 at random. - -Some things that I can imagine going wrong with this: - -* `kube-dns` is misconfigured - -* `kube-proxy` dies and your iptables rules don’t get updated - -* Some issue related to maintaining a large number of iptables rules - -Let’s talk about the iptables rules a bit, since doing load balancing by creating a bajillion iptables rules is something I had never heard of before! - -kube-proxy creates one iptables rule per target host like this: (these rules are from [this github issue][13]) - -``` --A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-E4QKA7SLJRFZZ2DD --A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LZ7EGMG4DRXMY26H --A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI --A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS --A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y - -``` - -(The probabilities in these rules look strange, but there’s a reason for them—see the short derivation at the end of this section.) - -So kube-proxy creates a **lot** of iptables rules. What does that mean? What are the implications of that for my network? There’s a great talk from Huawei called [Scale Kubernetes to Support 50,000 services][14] that says if you have 5,000 services in your kubernetes cluster, it takes **11 minutes** to add a new rule. If that happened to your real cluster I think it would be very bad. - -I definitely don’t have 5,000 services in my cluster, but 5,000 isn’t SUCH a big number. The proposal they give to solve this problem is to replace this iptables backend for kube-proxy with IPVS which is a load balancer that lives in the Linux kernel. - -It seems like kube-proxy is going in the direction of various Linux kernel based load balancers. I think this is partly because they support UDP load balancing, and other load balancers (like HAProxy) don’t support UDP load balancing. - -But I feel comfortable with HAProxy! Is it possible to replace kube-proxy with HAProxy? I googled this and I found this [thread on kubernetes-sig-network][15] saying: - -> kube-proxy is so awesome, we have used in production for almost a year, it works well most of time, but as we have more and more services in our cluster, we found it was getting hard to debug and maintain. There is no iptables expert in our team, we do have HAProxy&LVS experts, as we have used these for several years, so we decided to replace this distributed proxy with a centralized HAProxy. I think this maybe useful for some other people who are considering using HAProxy with kubernetes, so we just update this project and make it open source: [https://github.com/AdoHe/kube2haproxy][5]. If you found it’s useful , please take a look and give a try. - -So that’s an interesting option! 
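
One note on the probabilities in the iptables rules shown earlier (0.2, 0.25, 0.333…, 0.5, and a final rule with no probability at all): rule $k$ is only reached if the previous rules didn't match, and assuming $n$ endpoints it matches with probability $1/(n-k+1)$, which telescopes to a uniform choice:

$$P(\text{endpoint } k) \;=\; \frac{1}{n-k+1}\prod_{j=1}^{k-1}\left(1-\frac{1}{n-j+1}\right) \;=\; \frac{1}{n-k+1}\cdot\frac{n-k+1}{n} \;=\; \frac{1}{n}$$

With the five rules above that's 1/5, 1/4, 1/3, 1/2, and "always", so each endpoint gets an equal 20% of new connections.
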
I definitely don’t have answers here, but, some thoughts: - -* Load balancers are complicated - -* DNS is also complicated - -* If you already have a lot of experience operating one kind of load balancer (like HAProxy), it might make sense to do some extra work to use that instead of starting to use an entirely new kind of load balancer (like kube-proxy) - -* I’ve been thinking about whether we want to be using kube-proxy or kube-dns at all – I think instead it might be better to just invest in Envoy and rely entirely on Envoy for all load balancing & service discovery. So then you just need to be good at operating Envoy. - -As you can see my thoughts on how to operate your Kubernetes internal proxies are still pretty confused and I’m still not super experienced with them. It’s totally possible that kube-proxy and kube-dns are fine and will just work, but I still find it helpful to think through what some of the implications of using them are (for example “you can’t have 5,000 Kubernetes services”). - -### Ingress - -If you’re running a Kubernetes cluster, it’s pretty likely that you actually need HTTP requests to get into your cluster somehow. This blog post is already too long and I don’t know much about ingress yet so we’re not going to talk about that. - -### Useful links - -A couple of useful links, to summarize: - -* [The Kubernetes networking model][6] - -* How GKE networking works: [https://www.youtube.com/watch?v=y2bhV81MfKQ][7] - -* The aforementioned talk on `kube-proxy` performance: [https://www.youtube.com/watch?v=4-pawkiazEg][8] - -### I think networking operations is important - -My sense of all this Kubernetes networking software is that it’s all still quite new and I’m not sure we (as a community) really know how to operate all of it well. This makes me worried as an operator because I really want my network to keep working! :) Also I feel like as an organization running your own Kubernetes cluster you need to make a pretty large investment into making sure you understand all the pieces so that you can fix things when they break. Which isn’t a bad thing, it’s just a thing. - -My plan right now is just to keep learning about how things work and reduce the number of moving parts I need to worry about as much as possible. - -As usual I hope this was helpful and I would very much like to know what I got wrong in this post! 
- --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/ - -作者:[Julia Evans ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/about -[1]:http://blog.sophaskins.net/blog/misadventures-with-kube-dns/ -[2]:https://github.com/coreos/flannel/pull/808 -[3]:https://github.com/coreos/flannel/pull/803 -[4]:https://github.com/coreos/flannel/issues/610 -[5]:https://github.com/AdoHe/kube2haproxy -[6]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model -[7]:https://www.youtube.com/watch?v=y2bhV81MfKQ -[8]:https://www.youtube.com/watch?v=4-pawkiazEg -[9]:https://jvns.ca/categories/kubernetes -[10]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model -[11]:https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/11-pod-network-routes.md -[12]:https://github.com/coreos/flannel/tree/master/backend/vxlan -[13]:https://github.com/kubernetes/kubernetes/issues/37932 -[14]:https://www.youtube.com/watch?v=4-pawkiazEg -[15]:https://groups.google.com/forum/#!topic/kubernetes-sig-network/3NlBVbTUUU0 diff --git a/sources/tech/20171011 LEAST PRIVILEGE CONTAINER ORCHESTRATION.md b/sources/tech/20171011 LEAST PRIVILEGE CONTAINER ORCHESTRATION.md deleted file mode 100644 index 7a9b6e817c..0000000000 --- a/sources/tech/20171011 LEAST PRIVILEGE CONTAINER ORCHESTRATION.md +++ /dev/null @@ -1,174 +0,0 @@ -# LEAST PRIVILEGE CONTAINER ORCHESTRATION - - -The Docker platform and the container has become the standard for packaging, deploying, and managing applications. In order to coordinate running containers across multiple nodes in a cluster, a key capability is required: a container orchestrator. - -![container orchestrator](https://i0.wp.com/blog.docker.com/wp-content/uploads/f753d4e8-9e22-4fe2-be9a-80661ef696a8-3.jpg?resize=536%2C312&ssl=1) - -Orchestrators are responsible for critical clustering and scheduling tasks, such as: - -* Managing container scheduling and resource allocation. - -* Support service discovery and hitless application deploys. - -* Distribute the necessary resources that applications need to run. - -Unfortunately, the distributed nature of orchestrators and the ephemeral nature of resources in this environment makes securing orchestrators a challenging task. In this post, we will describe in detail the less-considered—yet vital—aspect of the security model of container orchestrators, and how Docker Enterprise Edition with its built-in orchestration capability, Swarm mode, overcomes these difficulties. - -Motivation and threat model -============================================================ - -One of the primary objectives of Docker EE with swarm mode is to provide an orchestrator with security built-in. To achieve this goal, we developed the first container orchestrator designed with the principle of least privilege in mind. - -In computer science,the principle of least privilege in a distributed system requires that each participant of the system must only have access to  the information and resources that are necessary for its legitimate purpose. No more, no less. 
- -> #### ”A process must be able to access only the information and resources that are necessary for its legitimate purpose.” - -#### Principle of Least Privilege - -Each node in a Docker EE swarm is assigned a role: either manager or worker. These roles define a coarse-grained level of privilege to the nodes: administration and task execution, respectively. However, regardless of its role, a node has access only to the information and resources it needs to perform the necessary tasks, with cryptographically enforced guarantees. As a result, it becomes easier to secure clusters against even the most sophisticated attacker models: attackers that control the underlying communication networks or even compromised cluster nodes. - -# Secure-by-default core - -There is an old security maxim that states: if it doesn’t come by default, no one will use it. Docker Swarm mode takes this notion to heart, and ships with secure-by-default mechanisms to solve three of the hardest and most important aspects of the orchestration lifecycle: - -1. Trust bootstrap and node introduction. - -2. Node identity issuance and management. - -3. Authenticated, Authorized, Encrypted information storage and dissemination. - -Let’s look at each of these aspects individually. - -### Trust Bootstrap and Node Introduction - -The first step to a secure cluster is tight control over membership and identity. Without it, administrators cannot rely on the identities of their nodes and enforce strict workload separation between nodes. This means that unauthorized nodes can’t be allowed to join the cluster, and nodes that are already part of the cluster aren’t able to change identities, suddenly pretending to be another node. - -To address this need, nodes managed by Docker EE’s Swarm mode maintain strong, immutable identities. The desired properties are cryptographically guaranteed by using two key building blocks: - -1. Secure join tokens for cluster membership. - -2. Unique identities embedded in certificates issued from a central certificate authority. - -### Joining the Swarm - -To join the swarm, a node needs a copy of a secure join token. The token is unique to each operational role within the cluster—there are currently two types of nodes: workers and managers. Due to this separation, a node with a copy of a worker token will not be allowed to join the cluster as a manager. The only way to get this special token is for a cluster administrator to interactively request it from the cluster’s manager through the swarm administration API. - -The token is securely and randomly generated, but it also has a special syntax that makes leaks of this token easier to detect: a special prefix that you can easily monitor for in your logs and repositories. Fortunately, even if a leak does occur, tokens are easy to rotate, and we recommend that you rotate them often—particularly in the case where your cluster will not be scaling up for a while. - -![Docker Swarm](https://i1.wp.com/blog.docker.com/wp-content/uploads/92d171d4-52c7-4702-8143-110c6f52017c-2.jpg?resize=547%2C208&ssl=1) - -### Bootstrapping trust - -As part of establishing its identity, a new node will ask for a new identity to be issued by any of the network managers. However, under our threat model, all communications can be intercepted by a third-party. This begs the question: how does a node know that it is talking to a legitimate manager? 
- -![Docker Security](https://i0.wp.com/blog.docker.com/wp-content/uploads/94e3fef0-5bd2-4970-b9e9-25b566d926ad-2.jpg?resize=528%2C348&ssl=1) - -Fortunately, Docker has a built-in mechanism for preventing this from happening. The join token, which the host uses to join the swarm, includes a hash of the root CA’s certificate. The host can therefore use one-way TLS and use the hash to verify that it’s joining the right swarm: if the manager presents a certificate not signed by a CA that matches the hash, the node knows not to trust it. - -### Node identity issuance and management - -Identities in a swarm are embedded in x509 certificates held by each individual node. In a manifestation of the least privilege principle, the certificates’ private keys are restricted strictly to the hosts where they originate. In particular, managers do not have access to private keys of any certificate but their own. - -### Identity Issuance - -To receive their certificates without sharing their private keys, new hosts begin by issuing a certificate signing request (CSR), which the managers then convert into a certificate. This certificate now becomes the new host’s identity, making the node a full-fledged member of the swarm! - -![](https://i0.wp.com/blog.docker.com/wp-content/uploads/415ae6cf-7e76-4ba8-9d84-6d49bf327d8f-2.jpg?resize=548%2C350&ssl=1) - -When used alongside the secure bootstrapping mechanism, this mechanism for issuing identities to joining nodes is secure by default: all communicating parties are authenticated, authorized and no sensitive information is ever exchanged in clear-text. - -### Identity Renewal - -However, securely joining nodes to a swarm is only part of the story. To minimize the impact of leaked or stolen certificates and to remove the complexity of managing certificate revocation lists (CRLs), Swarm mode uses short-lived certificates for the identities. These certificates have a default expiration of three months, but can be configured to expire every hour! - -![Docker secrets](https://i0.wp.com/blog.docker.com/wp-content/uploads/55e2ab9a-19cd-465d-82c6-fa76110e7ecd-2.jpg?resize=556%2C365&ssl=1) - -This short certificate expiration time means that certificate rotation can’t be a manual process, as it usually is for most PKI systems. With swarm, all certificates are rotated automatically and in a hitless fashion. The process is simple: using a mutually authenticated TLS connection to prove ownership over a particular identity, a Swarm node regularly generates a new public/private key pair and sends the corresponding CSR to be signed, creating a completely new certificate, but maintaining the same identity. - -### Authenticated, Authorized, Encrypted information storage and dissemination. - -During the normal operation of a swarm, information about the tasks has to be sent to the worker nodes for execution. This includes not only information on which containers are to be executed by a node, but also all the resources that are necessary for the successful execution of that container, including sensitive secrets such as private keys, passwords, and API tokens. - -### Transport Security - -Because every node participating in a swarm possesses a unique identity in the form of an X509 certificate, communicating securely between nodes is trivial: nodes can use their respective certificates to establish mutually authenticated connections between one another, inheriting the confidentiality, authenticity and integrity properties of TLS. 
- -![Swarm Mode](https://i0.wp.com/blog.docker.com/wp-content/uploads/972273a3-d9e5-4053-8fcb-a407c8cdcbf6-2.jpg?resize=347%2C271&ssl=1) - -One interesting detail about Swarm mode is the fact that it uses a push model: only managers are allowed to send information to workers—significantly reducing the attack surface that manager nodes expose to the less privileged worker nodes. - -### Strict Workload Separation Into Security Zones - -One of the responsibilities of manager nodes is deciding which tasks to send to each of the workers. Managers make this determination using a variety of strategies, scheduling the workloads across the swarm depending on the unique properties of each node and of each workload. - -In Docker EE with Swarm mode, administrators have the ability to influence these scheduling decisions by using labels that are securely attached to the individual node identities. These labels allow administrators to group nodes together into different security zones, limiting the exposure of particularly sensitive workloads and any secrets related to them. - -![Docker Swarm Security](https://i0.wp.com/blog.docker.com/wp-content/uploads/67ffa551-d4ae-4522-ba13-4a646a158592-2.jpg?resize=546%2C375&ssl=1) - -### Secure Secret Distribution - -In addition to facilitating the identity issuance process, manager nodes have the important task of storing and distributing any resources needed by a worker. Secrets are treated like any other type of resource, and are pushed down from the manager to the worker over the secure mTLS connection. - -![Docker Secrets](https://i1.wp.com/blog.docker.com/wp-content/uploads/4341da98-2f8c-4aed-bb40-607246344dd8-2.jpg?resize=508%2C326&ssl=1) - -On the hosts, Docker EE ensures that secrets are provided only to the containers they are destined for. Other containers on the same host will not have access to them. Docker exposes secrets to a container as a temporary file system, ensuring that secrets are always stored in memory and never written to disk. This method is more secure than competing alternatives, such as [storing them in environment variables][12]. Once a task completes the secret is gone forever. - -### Storing secrets - -On manager hosts secrets are always encrypted at rest. By default, the key that encrypts these secrets (known as the Data Encryption Key, DEK) is also stored in plaintext on disk. This makes it easy for those with minimal security requirements to start using Docker Swarm mode. - -However, once you are running a production cluster, we recommend you enable auto-lock mode. When auto-lock mode is enabled, a newly rotated DEK is encrypted with a separate Key Encryption Key (KEK). This key is never stored on the cluster; the administrator is responsible for storing it securely and providing it when the cluster starts up. This is known as unlocking the swarm. - -Swarm mode supports multiple managers, relying on the Raft Consensus Algorithm for fault tolerance. Secure secret storage scales seamlessly in this scenario. Each manager host has a unique disk encryption key, in addition to the shared key. Furthermore, Raft logs are encrypted on disk and are similarly unavailable without the KEK when in autolock mode. - -### What happens when a node is compromised? - -![Docker Secrets](https://i0.wp.com/blog.docker.com/wp-content/uploads/2a78b37d-bbf0-40ee-a282-eb0900f71ba9-2.jpg?resize=502%2C303&ssl=1) - -In traditional orchestrators, recovering from a compromised host is a slow and complicated process. 
With Swarm mode, recovery is as easy as running the docker node rm command. This removes the affected node from the cluster, and Docker will take care of the rest, namely re-balancing services and making sure other hosts know not to talk to the affected node. - -As we have seen, thanks to least privilege orchestration, even if the attacker were still active on the host, they would be cut off from the rest of the network. The host’s certificate — its identity — is blacklisted, so the managers will not accept it as valid. - -# Conclusion - -Docker EE with Swarm mode ensures security by default in all key areas of orchestration: - -* Joining the cluster. Prevents malicious nodes from joining the cluster. - -* Organizing hosts into security zones. Prevents lateral movement by attackers. - -* Scheduling tasks. Tasks will be issued only to designated and allowed nodes. - -* Allocating resources. A malicious node cannot “steal” another’s workload or resources. - -* Storing secrets. Never stored in plaintext and never written to disk on worker nodes. - -* Communicating with the workers. Encrypted using mutually authenticated TLS. - -As Swarm mode continues to improve, the Docker team is working to take the principle of least privilege orchestration even further. The task we are tackling is: how can systems remain secure if a manager is compromised? The roadmap is in place, with some of the features already available such as the ability of whitelisting only specific Docker images, preventing managers from executing arbitrary workloads. This is achieved quite naturally using Docker Content Trust. - --------------------------------------------------------------------------------- - -via: https://blog.docker.com/2017/10/least-privilege-container-orchestration/ - -作者:[Diogo Mónica ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://blog.docker.com/author/diogo/ -[1]:http://www.linkedin.com/shareArticle?mini=true&url=http://dockr.ly/2yZoNdy&title=Least%20Privilege%20Container%20Orchestration&summary=The%20Docker%20platform%20and%20the%20container%20has%20become%20the%20standard%20for%20packaging,%20deploying,%20and%20managing%20applications.%20In%20order%20to%20coordinate%20running%20containers%20across%20multiple%20nodes%20in%20a%20cluster,%20a%20key%20capability%20is%20required:%20a%20container%20orchestrator.Orchestrators%20are%20responsible%20for%20critical%20clustering%20and%20scheduling%20tasks,%20such%20as:%20%20%20%20Managing%20... 
-[2]:http://www.reddit.com/submit?url=http://dockr.ly/2yZoNdy&title=Least%20Privilege%20Container%20Orchestration -[3]:https://plus.google.com/share?url=http://dockr.ly/2yZoNdy -[4]:http://news.ycombinator.com/submitlink?u=http://dockr.ly/2yZoNdy&t=Least%20Privilege%20Container%20Orchestration -[5]:https://blog.docker.com/author/diogo/ -[6]:https://blog.docker.com/tag/docker-orchestration/ -[7]:https://blog.docker.com/tag/docker-secrets/ -[8]:https://blog.docker.com/tag/docker-security/ -[9]:https://blog.docker.com/tag/docker-swarm/ -[10]:https://blog.docker.com/tag/least-privilege-orchestrator/ -[11]:https://blog.docker.com/tag/tls/ -[12]:https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/ diff --git a/sources/tech/20171020 How Eclipse is advancing IoT development.md b/sources/tech/20171020 How Eclipse is advancing IoT development.md new file mode 100644 index 0000000000..30fd8eb64d --- /dev/null +++ b/sources/tech/20171020 How Eclipse is advancing IoT development.md @@ -0,0 +1,83 @@ +apply for translating + +How Eclipse is advancing IoT development +============================================================ + +### Open source organization's modular approach to development is a good match for the Internet of Things. + +![How Eclipse is advancing IoT development](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_BUS_ArchitectureOfParticipation_520x292.png?itok=FA0Uuwzv "How Eclipse is advancing IoT development") +Image by : opensource.com + +[Eclipse][3] may not be the first open source organization that pops to mind when thinking about Internet of Things (IoT) projects. After all, the foundation has been around since 2001, long before IoT was a household word, supporting a community for commercially viable open source software development. + +September's Eclipse IoT Day, held in conjunction with RedMonk's [ThingMonk 2017][4] event, emphasized the big role Eclipse is taking in [IoT development][5]. It currently hosts 28 projects that touch a wide range of IoT needs and projects. While at the conference, I talked with [Ian Skerritt][6], who heads marketing for Eclipse, about Eclipse's IoT projects and how Eclipse thinks about IoT more broadly. + +### What's new about IoT? + +I asked Ian how IoT is different from traditional industrial automation, given that sensors and tools have been connected in factories for the past several decades. Ian notes that many factories still are not connected. + +Additionally, he says, "SCADA [supervisory control and data analysis] systems and even the factory floor technology are very proprietary, very siloed. It's hard to change it. It's hard to adapt to it… Right now, when you set up a manufacturing run, you need to manufacture hundreds of thousands of that piece, of that unit. What [manufacturers] want to do is to meet customer demand, to have manufacturing processes that are very flexible, that you can actually do a lot size of one." That's a big piece of what IoT is bringing to manufacturing. + +### Eclipse's approach to IoT + +He describes Eclipse's involvement in IoT by saying: "There's core fundamental technology that every IoT solution needs," and by using open source, "everyone can use it so they can get broader adoption." He says Eclipse see IoT as consisting of three connected software stacks. At a high level, these stacks mirror the (by now familiar) view that IoT can usually be described as spanning three layers. 
A given implementation may have even more layers, but they still generally map to the functions of this three-layer model: + +* A stack of software for constrained devices (e.g., the device, endpoint, microcontroller unit (MCU), sensor hardware). + +* Some type of gateway that aggregates information and data from the different sensors and sends it to the network. This layer also may take real-time actions based on what the sensors are observing. + +* A software stack for the IoT platform on the backend. This backend cloud stores the data and can provide services based on collected data, such as analysis of historical trends and predictive analytics. + +The three stacks are described in greater detail in Eclipse's whitepaper "[The Three Software Stacks Required for IoT Architectures][7]." + +Ian says that, when developing a solution within those architectures, "there's very specific things that need to be built, but there's a lot of underlying technology that can be used, like messaging protocols, like gateway services. It needs to be a modular approach to scale up to the different use cases that are up there." This encapsulates Eclipse's activities around IoT: Developing modular open source components that can be used to build a range of business-specific services and solutions. + +### Eclipse's IoT projects + +Of Eclipse's many IoT projects currently in use, Ian says two of the most prominent relate to [MQTT][8], a machine-to-machine (M2M) messaging protocol for IoT. Ian describes it as "a publish‑subscribe messaging protocol that was designed specifically for oil and gas pipeline monitoring where power-management network latency is really important. MQTT has been a great success in terms of being a standard that's being widely adopted in IoT." [Eclipse Mosquitto][9] is MQTT's broker and [Eclipse Paho][10] its client. + +[Eclipse Kura][11] is an IoT gateway that, in Ian's words, "provides northbound and southbound connectivity [for] a lot of different protocols" including Bluetooth, Modbus, controller-area network (CAN) bus, and OPC Unified Architecture, with more being added all the time. One benefit, he says, is "instead of you writing your own connectivity, Kura provides that and then connects you to the network via satellite, via Ethernet, or anything." In addition, it handles firewall configuration, network latency, and other functions. "If the network goes down, it will store messages until it comes back up," Ian says. + +A newer project, [Eclipse Kapua][12], is taking a microservices approach to providing different services for an IoT cloud platform. For example, it handles aspects of connectivity, integration, management, storage, and analysis. Ian describes it as "up and coming. It's not being deployed yet, but Eurotech and Red Hat are very active in that." + +Ian says [Eclipse hawkBit][13], which manages software updates, is one of the "most intriguing projects. From a security perspective, if you can't update your device, you've got a huge security hole." Most IoT security disasters are related to non-updated devices, he says. "HawkBit basically manages the backend of how you do scalable updates across your IoT system." + +Indeed, the difficulty of updating software in IoT devices is regularly cited as one of its biggest security challenges. IoT devices aren't always connected and may be numerous, plus update processes for constrained devices can be hard to consistently get right. 
For this reason, projects relating to updating IoT software are likely to be important going forward. + +### Why IoT is a good fit for Eclipse + +One of the trends we've seen in IoT development has been around building blocks that are integrated and applied to solve particular business problems, rather than monolithic IoT platforms that apply across industries and companies. This is a good fit with Eclipse's approach to IoT, which focuses on a number of modular stacks; projects that provide specific and commonly needed functions; and brokers, gateways, and protocols that can tie together the components needed for a given implementation. + +-------------------------------------------------------------------------------- + +作者简介: + +Gordon Haff - Gordon Haff is Red Hat’s cloud evangelist, is a frequent and highly acclaimed speaker at customer and industry events, and helps develop strategy across Red Hat’s full portfolio of cloud solutions. He is the author of Computing Next: How the Cloud Opens the Future in addition to numerous other publications. Prior to Red Hat, Gordon wrote hundreds of research notes, was frequently quoted in publications like The New York Times on a wide range of IT topics, and advised clients on product and... + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/10/eclipse-and-iot + +作者:[Gordon Haff ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/ghaff +[1]:https://opensource.com/article/17/10/eclipse-and-iot?rate=u1Wr-MCMFCF4C45IMoSPUacCatoqzhdKz7NePxHOvwg +[2]:https://opensource.com/user/21220/feed +[3]:https://www.eclipse.org/home/ +[4]:http://thingmonk.com/ +[5]:https://iot.eclipse.org/ +[6]:https://twitter.com/ianskerrett +[7]:https://iot.eclipse.org/resources/white-papers/Eclipse%20IoT%20White%20Paper%20-%20The%20Three%20Software%20Stacks%20Required%20for%20IoT%20Architectures.pdf +[8]:http://mqtt.org/ +[9]:https://projects.eclipse.org/projects/technology.mosquitto +[10]:https://projects.eclipse.org/projects/technology.paho +[11]:https://www.eclipse.org/kura/ +[12]:https://www.eclipse.org/kapua/ +[13]:https://eclipse.org/hawkbit/ +[14]:https://opensource.com/users/ghaff +[15]:https://opensource.com/users/ghaff +[16]:https://opensource.com/article/17/10/eclipse-and-iot#comments diff --git a/sources/tech/20171102 Dive into BPF a list of reading material.md b/sources/tech/20171102 Dive into BPF a list of reading material.md deleted file mode 100644 index f4b90bd09d..0000000000 --- a/sources/tech/20171102 Dive into BPF a list of reading material.md +++ /dev/null @@ -1,711 +0,0 @@ -Dive into BPF: a list of reading material -============================================================ - -* [What is BPF?][143] - -* [Dive into the bytecode][144] - -* [Resources][145] - * [Generic presentations][23] - * [About BPF][1] - - * [About XDP][2] - - * [About other components related or based on eBPF][3] - - * [Documentation][24] - * [About BPF][4] - - * [About tc][5] - - * [About XDP][6] - - * [About P4 and BPF][7] - - * [Tutorials][25] - - * [Examples][26] - * [From the kernel][8] - - * [From package iproute2][9] - - * [From bcc set of tools][10] - - * [Manual pages][11] - - * [The code][27] - * [BPF code in the kernel][12] - - * [XDP hooks code][13] - - * [BPF logic in bcc][14] - - * [Code to manage BPF with tc][15] - - * [BPF utilities][16] - 
-  * [Other interesting chunks][17]
-
-  * [LLVM backend][18]
-
-  * [Running in userspace][19]
-
-  * [Commit logs][20]
-
- * [Troubleshooting][28]
-  * [Errors at compilation time][21]
-
-  * [Errors at load and run time][22]
-
- * [And still more!][29]
-
- _~ [Updated][146] 2017-11-02 ~_
-
-# What is BPF?
-
-BPF, as in **B**erkeley **P**acket **F**ilter, was initially conceived in 1992 so as to provide a way to filter packets and to avoid useless packet copies from kernel to userspace. It initially consisted of a simple bytecode that is injected from userspace into the kernel, where it is checked by a verifier—to prevent kernel crashes or security issues—and attached to a socket, then run on each received packet. It was ported to Linux a couple of years later, and used for a small number of applications (tcpdump for example). The simplicity of the language as well as the existence of an in-kernel Just-In-Time (JIT) compiling machine for BPF were factors for the excellent performance of this tool.
-
-Then in 2013, Alexei Starovoitov completely reshaped it, and started to add new functionalities and to improve the performance of BPF. This new version is designated as eBPF (for “extended BPF”), while the former becomes cBPF (“classic” BPF). New features such as maps and tail calls appeared. The JIT machines were rewritten. The new language is even closer to native machine language than cBPF was. And also, new attach points in the kernel have been created.
-
-Thanks to those new hooks, eBPF programs can be designed for a variety of use cases, which divide into two fields of application. One of them is the domain of kernel tracing and event monitoring. BPF programs can be attached to kprobes, and they compare favorably with other tracing methods, with many advantages (and sometimes some drawbacks).
-
-The other application domain remains network programming. In addition to socket filters, eBPF programs can be attached to tc (the Linux traffic control tool) ingress or egress interfaces and perform a variety of packet processing tasks, in an efficient way. This opens new perspectives in the domain.
-
-And eBPF performance is further leveraged through the technologies developed for the IO Visor project: new hooks have also been added for XDP (“eXpress Data Path”), a new fast path recently added to the kernel. XDP works in conjunction with the Linux stack, and relies on BPF to perform very fast packet processing.
-
-Even some projects such as P4 and Open vSwitch [consider][155] or have started to approach BPF. Some others, such as CETH and Cilium, are entirely based on it. BPF is buzzing, so we can expect a lot of tools and projects to orbit around it soon…
-
-# Dive into the bytecode
-
-As for me: some of my work (including for [BEBA][156]) is closely related to eBPF, and several future articles on this site will focus on this topic. Logically, I wanted to somehow introduce BPF on this blog before going down to the details—I mean, a real introduction, more developed on BPF functionalities than the brief abstract provided in the first section: What are BPF maps? Tail calls? What do the internals look like? And so on. But there are a lot of presentations on this topic available on the web already, and I do not wish to create “yet another BPF introduction” that would come as a duplicate of existing documents.
-
-So instead, here is what we will do.
After all, I spent some time reading and learning about BPF, and while doing so, I gathered a fair amount of material about BPF: introductions, documentation, but also tutorials or examples. There is a lot to read, but in order to read it, one has to _find_ it first. Therefore, as an attempt to help people who wish to learn and use BPF, the present article introduces a list of resources. These are various kinds of readings that will hopefully help you dive into the mechanics of this kernel bytecode.
-
-# Resources
-
-![](https://qmonnet.github.io/whirl-offload/img/icons/pic.svg)
-
-### Generic presentations
-
-The documents linked below provide a generic overview of BPF, or of some closely related topics. If you are very new to BPF, you can try picking a couple of presentations among the first ones and reading the ones you like most. If you know eBPF already, you probably want to target specific topics instead, lower down in the list.
-
-### About BPF
-
-Generic presentations about eBPF:
-
-* [_Making the Kernel’s Networking Data Path Programmable with BPF and XDP_][53]  (Daniel Borkmann, OSSNA17, Los Angeles, September 2017):
-  One of the best sets of slides available to understand quickly all the basics about eBPF and XDP (mostly for network processing).
-
-* [The BSD Packet Filter][54] (Suchakra Sharma, June 2017):
-  A very nice introduction, mostly about the tracing aspects.
-
-* [_BPF: tracing and more_][55]  (Brendan Gregg, January 2017):
-  Mostly about the tracing use cases.
-
-* [_Linux BPF Superpowers_][56]  (Brendan Gregg, March 2016):
-  With a first part on the use of **flame graphs**.
-
-* [_IO Visor_][57]  (Brenden Blanco, SCaLE 14x, January 2016):
-  Also introduces **IO Visor project**.
-
-* [_eBPF on the Mainframe_][58]  (Michael Holzheu, LinuxCon, Dublin, October 2015)
-
-* [_New (and Exciting!) Developments in Linux Tracing_][59]  (Elena Zannoni, LinuxCon, Japan, 2015)
-
-* [_BPF — in-kernel virtual machine_][60]  (Alexei Starovoitov, February 2015):
-  Presentation by the author of eBPF.
-
-* [_Extending extended BPF_][61]  (Jonathan Corbet, July 2014)
-
-**BPF internals**:
-
-* Daniel Borkmann has been doing amazing work to present **the internals** of eBPF, in particular about **its use with tc**, through several talks and papers.
-  * [_Advanced programmability and recent updates with tc’s cls_bpf_][30]  (netdev 1.2, Tokyo, October 2016):
-    Daniel provides details on eBPF, its use for tunneling and encapsulation, direct packet access, and other features.
-
-  * [_cls_bpf/eBPF updates since netdev 1.1_][31]  (netdev 1.2, Tokyo, October 2016, part of [this tc workshop][32])
-
-  * [_On getting tc classifier fully programmable with cls_bpf_][33]  (netdev 1.1, Sevilla, February 2016):
-    After introducing eBPF, this presentation provides insights on many internal BPF mechanisms (map management, tail calls, verifier). A must-read! For the most ambitious, [the full paper is available here][34].
-
-  * [_Linux tc and eBPF_][35]  (fosdem16, Brussels, Belgium, January 2016)
-
-  * [_eBPF and XDP walkthrough and recent updates_][36]  (fosdem17, Brussels, Belgium, February 2017)
-
-  These presentations are probably one of the best sources of documentation to understand the design and implementation of internal mechanisms of eBPF.
-
-The [**IO Visor blog**][157] has some interesting technical articles about BPF. Some of them contain a bit of marketing talk.
-
-**Kernel tracing**: summing up all existing methods, including BPF:
-
-* [_Meet-cute between eBPF and Kernel Tracing_][62]  (Viller Hsiao, July 2016):
-  Kprobes, uprobes, ftrace
-
-* [_Linux Kernel Tracing_][63]  (Viller Hsiao, July 2016):
-  Systemtap, Kernelshark, trace-cmd, LTTng, perf-tool, ftrace, hist-trigger, perf, function tracer, tracepoint, kprobe/uprobe…
-
-Regarding **event tracing and monitoring**, Brendan Gregg uses eBPF a lot and does an excellent job at documenting some of his use cases. If you are into kernel tracing, you should see his blog articles related to eBPF or to flame graphs. Most of it is accessible [from this article][158] or by browsing his blog.
-
-Introducing BPF, but also presenting **generic concepts of Linux networking**:
-
-* [_Linux Networking Explained_][64]  (Thomas Graf, LinuxCon, Toronto, August 2016)
-
-* [_Kernel Networking Walkthrough_][65]  (Thomas Graf, LinuxCon, Seattle, August 2015)
-
-**Hardware offload**:
-
-* eBPF with tc or XDP supports hardware offload, starting with Linux kernel version 4.9 and introduced by Netronome. Here is a presentation about this feature:
-  [eBPF/XDP hardware offload to SmartNICs][147] (Jakub Kicinski and Nic Viljoen, netdev 1.2, Tokyo, October 2016)
-
-About **cBPF**:
-
-* [_The BSD Packet Filter: A New Architecture for User-level Packet Capture_][66]  (Steven McCanne and Van Jacobson, 1992):
-  The original paper about (classic) BPF.
-
-* [The FreeBSD manual page about BPF][67] is a useful resource to understand cBPF programs.
-
-* Daniel Borkmann has given at least two presentations on cBPF, [one in 2013 on mmap, BPF and Netsniff-NG][68], and [a very complete one in 2014 on tc and cls_bpf][69].
-
-* On Cloudflare’s blog, Marek Majkowski presented his [use of BPF bytecode with the `xt_bpf` module for **iptables**][70]. It is worth mentioning that eBPF is also supported by this module, starting with Linux kernel 4.10 (I do not know of any talk or article about this, though).
-
-* [Libpcap filters syntax][71]
-
-### About XDP
-
-* [XDP overview][72] on the IO Visor website.
-
-* [_eXpress Data Path (XDP)_][73]  (Tom Herbert, Alexei Starovoitov, March 2016):
-  The first presentation about XDP.
-
-* [_BoF - What Can BPF Do For You?_][74]  (Brenden Blanco, LinuxCon, Toronto, August 2016).
-
-* [_eXpress Data Path_][148]  (Brenden Blanco, Linux Meetup at Santa Clara, July 2016):
-  Contains some (somewhat marketing?) **benchmark results**! With a single core:
-  * ip routing drop: ~3.6 million packets per second (Mpps)
-
-  * tc (with clsact qdisc) drop using BPF: ~4.2 Mpps
-
-  * XDP drop using BPF: 20 Mpps (<10 % CPU utilization)
-
-  * XDP forward (on the port on which the packet was received) with rewrite: 10 Mpps
-
-  (Tests performed with the mlx4 driver).
-
-* Jesper Dangaard Brouer has several excellent sets of slides, which are essential to fully understand the internals of XDP.
-  * [_XDP − eXpress Data Path, Intro and future use-cases_][37]  (September 2016):
-    _“Linux Kernel’s fight against DPDK”_. **Future plans** (as of this writing) for XDP and comparison with DPDK.
-
-  * [_Network Performance Workshop_][38]  (netdev 1.2, Tokyo, October 2016):
-    Additional hints about XDP internals and expected evolution.
-
-  * [_XDP – eXpress Data Path, Used for DDoS protection_][39]  (OpenSourceDays, March 2017):
-    Contains details and use cases about XDP, with **benchmark results**, and **code snippets** for **benchmarking** as well as for **basic DDoS protection** with eBPF/XDP (based on an IP blacklisting scheme).
-
-  * [_Memory vs. Networking, Provoking and fixing memory bottlenecks_][40]  (LSF Memory Management Summit, March 2017):
-    Provides a lot of details about current **memory issues** faced by XDP developers. Do not start with this one, but if you already know XDP and want to see how it really works on the page allocation side, this is a very helpful resource.
-
-  * [_XDP for the Rest of Us_][41]  (netdev 2.1, Montreal, April 2017), with Andy Gospodarek:
-    How to get started with eBPF and XDP for normal humans. This presentation was also summarized by Julia Evans on [her blog][42].
-
-  (Jesper also created and tries to extend some documentation about eBPF and XDP, see [related section][75].)
-
-* [_XDP workshop — Introduction, experience, and future development_][76]  (Tom Herbert, netdev 1.2, Tokyo, October 2016) — as of this writing, only the video is available; I don’t know if the slides will be added.
-
-* [_High Speed Packet Filtering on Linux_][149]  (Gilberto Bertin, DEF CON 25, Las Vegas, July 2017) — an excellent introduction to state-of-the-art packet filtering on Linux, oriented towards DDoS protection, talking about packet processing in the kernel, kernel bypass, XDP and eBPF.
-
-### About other components related or based on eBPF
-
-* [_P4 on the Edge_][77]  (John Fastabend, May 2016):
-  Presents the use of **P4**, a description language for packet processing, with BPF to create high-performance programmable switches.
-
-* If you like audio presentations, there is an associated [OvS Orbit episode (#11), called  _**P4** on the Edge_][78], dating from August 2016. OvS Orbit episodes are interviews conducted by Ben Pfaff, who is one of the core maintainers of Open vSwitch. In this case, John Fastabend is interviewed.
-
-* [_P4, EBPF and Linux TC Offload_][79]  (Dinan Gunawardena and Jakub Kicinski, August 2016):
-  Another presentation on **P4**, with some elements related to eBPF hardware offload on Netronome’s **NFP** (Network Flow Processor) architecture.
-
-* **Cilium** is a technology initiated by Cisco and relying on BPF and XDP to provide “fast in-kernel networking and security policy enforcement for containers based on eBPF programs generated on the fly”. [The code of this project][150] is available on GitHub. Thomas Graf has given a number of presentations on this topic:
-  * [_Cilium: Networking & Security for Containers with BPF & XDP_][43], also featuring a load balancer use case (Linux Plumbers conference, Santa Fe, November 2016)
-
-  * [_Cilium: Networking & Security for Containers with BPF & XDP_][44]  (Docker Distributed Systems Summit, October 2016 — [video][45])
-
-  * [_Cilium: Fast IPv6 container Networking with BPF and XDP_][46]  (LinuxCon, Toronto, August 2016)
-
-  * [_Cilium: BPF & XDP for containers_][47]  (fosdem17, Brussels, Belgium, February 2017)
-
-  A good deal of content is repeated across the different presentations; if in doubt, just pick the most recent one. Daniel Borkmann has also written [a generic introduction to Cilium][80] as a guest author on the Google Open Source blog.
-
-* There are also podcasts about **Cilium**: an [OvS Orbit episode (#4)][81], in which Ben Pfaff interviews Thomas Graf (May 2016), and [another podcast by Ivan Pepelnjak][82], still with Thomas Graf, about eBPF, P4, XDP and Cilium (October 2016).
-
-* **Open vSwitch** (OvS) and its related project **Open Virtual Network** (OVN, an open source network virtualization solution) are considering using eBPF at various levels, with several proof-of-concept prototypes already implemented:
-
-  * [Offloading OVS Flow Processing using eBPF][48] (William (Cheng-Chun) Tu, OvS conference, San Jose, November 2016)
-
-  * [Coupling the Flexibility of OVN with the Efficiency of IOVisor][49] (Fulvio Risso, Matteo Bertrone and Mauricio Vasquez Bernal, OvS conference, San Jose, November 2016)
-
-  These use cases for eBPF seem to be only at the stage of proposals (nothing merged to the OvS main branch) as far as I know, but it will be very interesting to see what comes out of it.
-
-* XDP is envisioned to be of great help for protection against Distributed Denial-of-Service (DDoS) attacks. More and more presentations focus on this. For example, the talks from people from Cloudflare ([_XDP in practice: integrating XDP in our DDoS mitigation pipeline_][83]) or from Facebook ([_Droplet: DDoS countermeasures powered by BPF + XDP_][84]) at the netdev 2.1 conference in Montreal, Canada, in April 2017, present such use cases.
-
-* [_CETH for XDP_][85]  (Yan Chan and Yunsong Lu, Linux Meetup, Santa Clara, July 2016):
-  **CETH** stands for Common Ethernet Driver Framework for faster network I/O, a technology initiated by Mellanox.
-
-* [**The VALE switch**][86], another virtual switch that can be used in conjunction with the netmap framework, has [a BPF extension module][87].
-
-* **Suricata**, an open source intrusion detection system, [seems to rely on eBPF components][88] for its “capture bypass” features:
-  [_The adventures of a Suricate in eBPF land_][89]  (Éric Leblond, netdev 1.2, Tokyo, October 2016)
-  [_eBPF and XDP seen from the eyes of a meerkat_][90]  (Éric Leblond, Kernel Recipes, Paris, September 2017)
-
-* [InKeV: In-Kernel Distributed Network Virtualization for DCN][91] (Z. Ahmed, M. H. Alizai and A. A. Syed, SIGCOMM, August 2016):
-  **InKeV** is an eBPF-based datapath architecture for virtual networks, targeting data center networks. It was initiated by PLUMgrid, and claims to achieve better performance than OvS-based OpenStack solutions.
-
-* [_**gobpf** - utilizing eBPF from Go_][92]  (Michael Schubert, fosdem17, Brussels, Belgium, February 2017):
-  A “library to create, load and use eBPF programs from Go”.
-
-* [**ply**][93] is a small but flexible open source dynamic **tracer** for Linux, with some features similar to the bcc tools, but with a simpler language inspired by awk and dtrace, written by Tobias Waldekranz.
-
-* If you read my previous article, you might be interested in this talk I gave about [implementing the OpenState interface with eBPF][151], for stateful packet processing, at fosdem17.
-
-![](https://qmonnet.github.io/whirl-offload/img/icons/book.svg)
-
-### Documentation
-
-Once you have managed to get a broad idea of what BPF is, you can put aside generic presentations and start diving into the documentation. Below are the most complete documents about BPF specifications and functioning. Pick the ones you need and read them carefully!
-
-### About BPF
-
-* The **specification of BPF** (both classic and extended versions) can be found within the documentation of the Linux kernel, and in particular in file [linux/Documentation/networking/filter.txt][94]. The use of BPF as well as its internals are documented there. Also, this is where you can find **information about errors thrown by the verifier** when loading BPF code fails.
This can be helpful to troubleshoot obscure error messages.
-
-* Also in the kernel tree, there is a document about **frequent Questions & Answers** on eBPF design in file [linux/Documentation/bpf/bpf_design_QA.txt][95].
-
-* … But the kernel documentation is dense and not especially easy to read. If you look for a simple description of the eBPF language, head for [its **summarized description**][96] on the IO Visor GitHub repository instead.
-
-* By the way, the IO Visor project gathered a lot of **resources about BPF**. Mostly, it is split between [the documentation directory][97] of its bcc repository, and the whole content of [the bpf-docs repository][98], both on GitHub. Note the existence of this excellent [BPF **reference guide**][99] containing a detailed description of BPF C and bcc Python helpers.
-
-* To hack with BPF, there are some essential **Linux manual pages**. The first one is [the `bpf(2)` man page][100] about the `bpf()` **system call**, which is used to manage BPF programs and maps from userspace. It also contains a description of BPF advanced features (program types, maps and so on). The second one is mostly addressed to people wanting to attach BPF programs to tc interfaces: it is [the `tc-bpf(8)` man page][101], which is a reference for **using BPF with tc**, and includes some example commands and samples of code.
-
-* Jesper Dangaard Brouer initiated an attempt to **update eBPF Linux documentation**, including **the different kinds of maps**. [He has a draft][102] to which contributions are welcome. Once ready, this document should be merged into the man pages and into kernel documentation.
-
-* The Cilium project also has an excellent [**BPF and XDP Reference Guide**][103], written by core eBPF developers, that should prove immensely useful to any eBPF developer.
-
-* David Miller has sent several enlightening emails about eBPF/XDP internals on the [xdp-newbies][152] mailing list. I could not find a link that gathers them at a single place, so here is a list:
-  * [bpf.h and you…][50]
-
-  * [Contextually speaking…][51]
-
-  * [BPF Verifier Overview][52]
-
-  The last one is possibly the best existing summary about the verifier at this date.
-
-* Ferris Ellis started [a **blog post series about eBPF**][104]. As I write this paragraph, the first article is out, with some historical background and future expectations for eBPF. The next posts should be more technical, and look promising.
-
-* [A **list of BPF features per kernel version**][153] is available in the bcc repository. Useful if you want to know the minimal kernel version that is required to run a given feature. I contributed and added the links to the commits that introduced each feature, so you can also easily access the commit logs from there.
-
-### About tc
-
-When using BPF for networking purposes in conjunction with tc, the Linux tool for **t**raffic **c**ontrol, one may wish to gather information about tc’s generic functioning. Here are a couple of resources about it.
-
-* It is difficult to find simple tutorials about **QoS on Linux**. The two links I have are long and quite dense, but if you can find the time to read them you will learn nearly everything there is to know about tc (nothing about BPF, though). There they are:  [_Traffic Control HOWTO_  (Martin A. Brown, 2006)][105], and the  [_Linux Advanced Routing & Traffic Control HOWTO_  (“LARTC”) (Bert Hubert & al., 2002)][106].
-
-* **tc manual pages** may not be up-to-date on your system, since several of them have been added lately.
If you cannot find the documentation for a particular queuing discipline (qdisc), class or filter, it may be worth checking the latest [manual pages for tc components][107].
-
-* Some additional material can be found within the files of the iproute2 package itself: the package contains [some documentation][108], including some files that helped me better understand [the functioning of **tc’s actions**][109].
-  **Edit:** While still available from the Git history, these files have been deleted from iproute2 in October 2017.
-
-* Not exactly documentation: there was [a workshop about several tc features][110] (including filtering, BPF, tc offload, …) organized by Jamal Hadi Salim during the netdev 1.2 conference (October 2016).
-
-* Bonus information—If you use `tc` a lot, here is some good news: I [wrote a bash completion function][111] for this tool, and it should be shipped with the iproute2 package coming with kernel version 4.6 and higher!
-
-### About XDP
-
-* Some [work-in-progress documentation (including specifications)][112] for XDP started by Jesper Dangaard Brouer, but meant to be a collaborative work. Under progress (September 2016): you should expect it to change, and maybe to be moved at some point (Jesper [called for contribution][113], if you feel like improving it).
-
-* The [BPF and XDP Reference Guide][114] from the Cilium project… Well, the name says it all.
-
-### About P4 and BPF
-
-[P4][159] is a language used to specify the behavior of a switch. It can be compiled for a number of hardware or software targets. As you may have guessed, one of these targets is BPF… The support is only partial: some P4 features cannot be translated towards BPF, and in a similar way there are things that BPF can do but that would not be possible to express with P4. Anyway, the documentation related to **P4 use with BPF** [used to be hidden in the bcc repository][160]. This changed with the P4_16 version, the p4c reference compiler including [a backend for eBPF][161].
-
-![](https://qmonnet.github.io/whirl-offload/img/icons/flask.svg)
-
-### Tutorials
-
-Brendan Gregg has produced excellent **tutorials** intended for people who want to **use bcc tools** for tracing and monitoring events in the kernel. [The first tutorial about using bcc itself][162] comes with eleven steps (as of today) to understand how to use the existing tools, while [the one **intended for Python developers**][163] focuses on developing new tools, across seventeen “lessons”.
-
-Sasha Goldshtein also has some  [_**Linux Tracing Workshops Materials**_][164]  involving the use of several BPF tools for tracing.
-
-Another post by Jean-Tiare Le Bigot provides a detailed (and instructive!) example of [using perf and eBPF to set up a low-level tracer][165] for ping requests and replies.
-
-Few tutorials exist for network-related eBPF use cases. There are some interesting documents, including an  _eBPF Offload Starting Guide_, on the [Open NFP][166] platform operated by Netronome. Other than these, the talk from Jesper,  [_XDP for the Rest of Us_][167], is probably one of the best ways to get started with XDP.
-
-![](https://qmonnet.github.io/whirl-offload/img/icons/gears.svg)
-
-### Examples
-
-It is always nice to have examples, to see how things really work. But BPF program samples are scattered across several projects, so I listed all the ones I know of. The examples do not always use the same helpers (for instance, tc and bcc both have their own set of helpers to make it easier to write BPF programs in the C language).
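-
-To make the workflow that many of these examples assume a little more concrete, here is a minimal, hedged sketch of compiling a restricted-C program and attaching it with tc. The interface name, file names and ELF section name below are placeholders of mine, not taken from any particular example tree:
-
-```
-# Compile a restricted-C BPF program into an ELF object file, using clang
-# and the LLVM eBPF backend.
-$ clang -O2 -target bpf -c bpf_prog.c -o bpf_prog.o
-
-# As root: add the BPF-friendly clsact qdisc, then attach the program found
-# in the "ingress" ELF section as a tc classifier, in direct-action ("da")
-# mode, on the ingress side of the interface.
-# tc qdisc add dev eth0 clsact
-# tc filter add dev eth0 ingress bpf da obj bpf_prog.o sec ingress
-```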
-
-### From the kernel
-
-The kernel contains examples for most types of program: filters to bind to sockets or to tc interfaces, event tracing/monitoring, and even XDP. You can find these examples under the [linux/samples/bpf/][168] directory.
-
-Also do not forget to have a look at the logs related to the (git) commits that introduced a particular feature; they may contain some detailed examples of the feature.
-
-### From package iproute2
-
-The iproute2 package provides several examples as well. They are obviously oriented towards network programming, since the programs are to be attached to tc ingress or egress interfaces. The examples dwell under the [iproute2/examples/bpf/][169] directory.
-
-### From bcc set of tools
-
-Many examples are [provided with bcc][170]:
-
-* Some are networking example programs, under the associated directory. They include socket filters, tc filters, and an XDP program.
-
-* The `tracing` directory includes a lot of example **tracing programs**. The tutorials mentioned earlier are based on these. These programs cover a wide range of event monitoring functions, and some of them are production-oriented. Note that on certain Linux distributions (at least for Debian, Ubuntu, Fedora, Arch Linux), these programs have been [packaged][115] and can be “easily” installed by typing e.g. `# apt install bcc-tools`, but as of this writing (and except for Arch Linux), this first requires setting up IO Visor’s own package repository.
-
-* There are also some examples **using Lua** as a different BPF back-end (that is, BPF programs are written with Lua instead of a subset of C, making it possible to use the same language for front-end and back-end), in the third directory.
-
-### Manual pages
-
-While bcc is generally the easiest way to inject and run a BPF program in the kernel, attaching programs to tc interfaces can also be performed by the `tc` tool itself. So if you intend to **use BPF with tc**, you can find some example invocations in the [`tc-bpf(8)` manual page][171].
-
-![](https://qmonnet.github.io/whirl-offload/img/icons/srcfile.svg)
-
-### The code
-
-Sometimes, BPF documentation or examples are not enough, and you may have no other solution than to display the code in your favorite text editor (which should be Vim of course) and to read it. Or you may want to hack into the code so as to patch or add features to the machine. So here are a few pointers to the relevant files; finding the functions you want is up to you!
-
-### BPF code in the kernel
-
-* The file [linux/include/linux/bpf.h][116] and its counterpart [linux/include/uapi/linux/bpf.h][117] contain **definitions** related to eBPF, to be used respectively in the kernel and to interface with userspace programs.
-
-* On the same pattern, files [linux/include/linux/filter.h][118] and [linux/include/uapi/linux/filter.h][119] contain information used to **run the BPF programs**.
-
-* The **main pieces of code** related to BPF are under the [linux/kernel/bpf/][120] directory. **The different operations permitted by the system call**, such as program loading or map management, are implemented in file `syscall.c`, while `core.c` contains the **interpreter**. The other files have self-explanatory names: `verifier.c` contains the **verifier** (no kidding), `arraymap.c` the code used to interact with **maps** of type array, and so on.
-
-* The **helpers**, as well as several functions related to networking (with tc, XDP…) and available to the user, are implemented in [linux/net/core/filter.c][121].
It also contains the code to migrate cBPF bytecode to eBPF (since all cBPF programs are now translated to eBPF in the kernel before being run).
-
-* The **JIT compilers** are under the directory of their respective architectures, such as file [linux/arch/x86/net/bpf_jit_comp.c][122] for x86.
-
-* You will find the code related to **the BPF components of tc** in the [linux/net/sched/][123] directory, and in particular in files `act_bpf.c` (action) and `cls_bpf.c` (filter).
-
-* I have not hacked with **event tracing** in BPF, so I do not really know about the hooks for such programs. There is some stuff in [linux/kernel/trace/bpf_trace.c][124]. If you are interested in this and want to know more, you may dig on the side of Brendan Gregg’s presentations or blog posts.
-
-* Nor have I used **seccomp-BPF**. But the code is in [linux/kernel/seccomp.c][125], and some example use cases can be found in [linux/tools/testing/selftests/seccomp/seccomp_bpf.c][126].
-
-### XDP hooks code
-
-Once loaded into the in-kernel BPF virtual machine, **XDP** programs are hooked from userspace into the kernel network path thanks to a Netlink command. On reception, the function `dev_change_xdp_fd()` in file [linux/net/core/dev.c][172] is called and sets an XDP hook. Such hooks are located in the drivers of supported NICs. For example, the mlx4 driver used for some Mellanox hardware has hooks implemented in files under the [drivers/net/ethernet/mellanox/mlx4/][173] directory. File en_netdev.c receives Netlink commands and calls `mlx4_xdp_set()`, which in turn calls, for instance, `mlx4_en_process_rx_cq()` (for the RX side) implemented in file en_rx.c.
-
-### BPF logic in bcc
-
-One can find the code for the **bcc** set of tools [on the bcc GitHub repository][174]. The **Python code**, including the `BPF` class, is initiated in file [bcc/src/python/bcc/__init__.py][175]. But most of the interesting stuff—in my opinion—such as loading the BPF program into the kernel, happens [in the libbcc **C library**][176].
-
-### Code to manage BPF with tc
-
-The code related to BPF **in tc** comes with the iproute2 package, of course. Some of it is under the [iproute2/tc/][177] directory. The files f_bpf.c and m_bpf.c (and e_bpf.c) are used respectively to handle BPF filters and actions (and the tc `exec` command, whatever this may be). File q_clsact.c defines the `clsact` qdisc especially created for BPF. But **most of the BPF userspace logic** is implemented in the [iproute2/lib/bpf.c][178] library, so this is probably where you should head to if you want to mess with BPF and tc (it was moved from file iproute2/tc/tc_bpf.c, where you may find the same code in older versions of the package).
-
-### BPF utilities
-
-The kernel also ships the sources of several tools (`bpf_asm.c`, `bpf_dbg.c`, `bpf_jit_disasm.c`) related to BPF, under the [linux/tools/net/][179] or [linux/tools/bpf/][180] directory depending on your version:
-
-* `bpf_asm` is a minimal cBPF assembler.
-
-* `bpf_dbg` is a small debugger for cBPF programs.
-
-* `bpf_jit_disasm` is generic for both BPF flavors and could be highly useful for JIT debugging.
-
-* `bpftool` is a generic utility written by Jakub Kicinski that can be used to interact with eBPF programs and maps from userspace, for example to show, dump, pin programs, or to show, create, pin, update, delete maps.
-
-Read the comments at the top of the source files to get an overview of their usage.
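-
-As a hedged illustration of the kind of interaction `bpftool` permits (the id values below are made up, and the exact set of subcommands available depends on the version you ship):
-
-```
-# List the eBPF programs currently loaded, with their ids and types.
-# bpftool prog show
-
-# Dump the translated (post-verifier) instructions of one program.
-# bpftool prog dump xlated id 42
-
-# List the maps, then dump the contents of one of them.
-# bpftool map show
-# bpftool map dump id 7
-```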
-
-### Other interesting chunks
-
-If you are interested in the use of less common languages with BPF, bcc contains [a **P4 compiler** for BPF targets][181] as well as [a **Lua front-end**][182] that can be used as alternatives to the C subset and (in the case of Lua) to the Python tools.
-
-### LLVM backend
-
-The BPF backend used by clang / LLVM for compiling C into eBPF was added to the LLVM sources in [this commit][183] (and can also be accessed on [the GitHub mirror][184]).
-
-### Running in userspace
-
-As far as I know there are at least two eBPF userspace implementations. The first one, [uBPF][185], is written in C. It contains an interpreter, a JIT compiler for the x86_64 architecture, an assembler and a disassembler.
-
-The code of uBPF seems to have been reused to produce a [generic implementation][186], which claims to support FreeBSD kernel, FreeBSD userspace, Linux kernel, Linux userspace and MacOSX userspace. It is used for the [BPF extension module for VALE switch][187].
-
-The other userspace implementation is my own work: [rbpf][188], based on uBPF, but written in Rust. The interpreter and JIT-compiler work (both under Linux, only the interpreter for MacOSX and Windows); there may be more in the future.
-
-### Commit logs
-
-As stated earlier, do not hesitate to have a look at the commit log that introduced a particular BPF feature if you want to have more information about it. You can search the logs in many places, such as on [git.kernel.org][189], [on GitHub][190], or on your local repository if you have cloned it. If you are not familiar with git, try things like `git blame <file>` to see what commit introduced a particular line of code, then `git show <commit>` to have details (or search by keyword in `git log` results, but this may be tedious). See also [the list of eBPF features per kernel version][191] in the bcc repository, which links to relevant commits.
-
-![](https://qmonnet.github.io/whirl-offload/img/icons/wand.svg)
-
-### Troubleshooting
-
-The enthusiasm about eBPF is quite recent, and so far I have not found a lot of resources intended to help with troubleshooting. So here are the few I have, augmented with my own recollection of pitfalls encountered while working with BPF.
-
-### Errors at compilation time
-
-* Make sure you have a recent enough version of the Linux kernel (see also [this document][127]).
-
-* If you compiled the kernel yourself: make sure you correctly installed all components, including kernel image, headers and libc.
-
-* When using the `bcc` shell function provided by the `tc-bpf` man page (to compile C code into BPF): I once had to add includes to the header for the clang call:
-
-  ```
-  __bcc() {
-      clang -O2 -I "/usr/src/linux-headers-$(uname -r)/include/" \
-            -I "/usr/src/linux-headers-$(uname -r)/arch/x86/include/" \
-            -emit-llvm -c $1 -o - | \
-      llc -march=bpf -filetype=obj -o "`basename $1 .c`.o"
-  }
-  ```
-
-  (seems fixed as of today).
-
-* For other problems with `bcc`, do not forget to have a look at [the FAQ][128] of the tool set.
-
-* If you downloaded the examples from the iproute2 package in a version that does not exactly match your kernel, some errors can be triggered by the headers included in the files. The example snippets indeed assume that the same version of the iproute2 package and kernel headers are installed on the system.
If this is not the case, download the correct version of iproute2, or edit the path of included files in the examples to point to the headers included in iproute2 (some problems may or may not occur at runtime, depending on the features in use).
-
-### Errors at load and run time
-
-* To load a program with tc, make sure you use a tc binary coming from an iproute2 version equivalent to the kernel in use.
-
-* To load a program with bcc, make sure you have bcc installed on the system (just downloading the sources to run the Python script is not enough).
-
-* With tc, if the BPF program does not return the expected values, check that you called it in the correct fashion: filter, or action, or filter with “direct-action” mode.
-
-* With tc still, note that actions cannot be attached directly to qdiscs or interfaces without the use of a filter.
-
-* The errors thrown by the in-kernel verifier may be hard to interpret. [The kernel documentation][129] may help, so may [the reference guide][130] or, as a last resort, the source code (see above) (good luck!). For this kind of error it is also important to keep in mind that the verifier  _does not run_  the program. If you get an error about an invalid memory access or about uninitialized data, it does not mean that these problems actually occurred (or sometimes, that they can possibly occur at all). It means that your program is written in such a way that the verifier estimates that such errors could happen, and therefore it rejects the program.
-
-* Note that the `tc` tool has a verbose mode, and that it works well with BPF: try appending `verbose` at the end of your command line.
-
-* bcc also has verbose options: the `BPF` class has a `debug` argument that can take any combination of the three flags `DEBUG_LLVM_IR`, `DEBUG_BPF` and `DEBUG_PREPROCESSOR` (see details in [the source file][131]). It even embeds [some facilities to print output messages][132] for debugging the code.
-
-* LLVM v4.0+ [embeds a disassembler][133] for eBPF programs. So if you compile your program with clang, adding the `-g` flag for compiling enables you to later dump your program in the rather human-friendly format used by the kernel verifier. To proceed to the dump, use:
-
-  ```
-  $ llvm-objdump -S -no-show-raw-insn bpf_program.o
-  ```
-
-* Working with maps? You want to have a look at [bpf-map][134], a very useful tool in Go created for the Cilium project, which can be used to dump the contents of kernel eBPF maps. There also exists [a clone][135] in Rust.
-
-* There is an old [`bpf` tag on **StackOverflow**][136], but as of this writing it has been hardly used—ever (and there is nearly nothing related to the new eBPF version). If you are a reader from the Future though, you may want to check whether there has been more activity on this side.
-
-![](https://qmonnet.github.io/whirl-offload/img/icons/zoomin.svg)
-
-### And still more!
-
-* In case you would like to easily **test XDP**, there is [a Vagrant setup][137] available. You can also **test bcc** [in a Docker container][138].
-
-* Wondering where the **development and activities** around BPF occur? Well, the kernel patches always end up [on the netdev mailing list][139] (related to the Linux kernel networking stack development): search for “BPF” or “XDP” keywords. Since April 2017, there is also [a mailing list specially dedicated to XDP programming][140] (both for architecture discussions and for asking for help).
Many discussions and debates also occur [on the IO Visor mailing list][141], since BPF is at the heart of the project. If you only want to keep informed from time to time, there is also an [@IOVisor Twitter account][142].
-
-And come back to this blog from time to time to see if there are new articles [about BPF][192]!
-
- _Special thanks to Daniel Borkmann for the numerous [additional documents][154] he pointed me to so that I could complete this collection._
-
--------------------------------------------------------------------------------
-
-via: https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/
-
-作者:[Quentin Monnet ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://qmonnet.github.io/whirl-offload/about/
-[1]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf
-[2]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp
-[3]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-other-components-related-or-based-on-ebpf
-[4]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf-1
-[5]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-tc
-[6]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1
-[7]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-p4-and-bpf
-[8]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-the-kernel
-[9]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-package-iproute2
-[10]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-bcc-set-of-tools
-[11]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#manual-pages
-[12]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-code-in-the-kernel
-[13]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#xdp-hooks-code
-[14]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-logic-in-bcc
-[15]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#code-to-manage-bpf-with-tc
-[16]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-utilities
-[17]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#other-interesting-chunks
-[18]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#llvm-backend
-[19]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#running-in-userspace
-[20]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#commit-logs
-[21]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-compilation-time
-[22]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-load-and-run-time
-[23]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#generic-presentations
-[24]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#documentation
-[25]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#tutorials
-[26]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#examples
-[27]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#the-code
-[28]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#troubleshooting
-[29]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#and-still-more
-[30]:http://netdevconf.org/1.2/session.html?daniel-borkmann
-[31]:http://netdevconf.org/1.2/slides/oct5/07_tcws_daniel_borkmann_2016_tcws.pdf -[32]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop -[33]:http://www.netdevconf.org/1.1/proceedings/slides/borkmann-tc-classifier-cls-bpf.pdf -[34]:http://www.netdevconf.org/1.1/proceedings/papers/On-getting-tc-classifier-fully-programmable-with-cls-bpf.pdf -[35]:https://archive.fosdem.org/2016/schedule/event/ebpf/attachments/slides/1159/export/events/attachments/ebpf/slides/1159/ebpf.pdf -[36]:https://fosdem.org/2017/schedule/event/ebpf_xdp/ -[37]:http://people.netfilter.org/hawk/presentations/xdp2016/xdp_intro_and_use_cases_sep2016.pdf -[38]:http://netdevconf.org/1.2/session.html?jesper-performance-workshop -[39]:http://people.netfilter.org/hawk/presentations/OpenSourceDays2017/XDP_DDoS_protecting_osd2017.pdf -[40]:http://people.netfilter.org/hawk/presentations/MM-summit2017/MM-summit2017-JesperBrouer.pdf -[41]:http://netdevconf.org/2.1/session.html?gospodarek -[42]:http://jvns.ca/blog/2017/04/07/xdp-bpf-tutorial/ -[43]:http://www.slideshare.net/ThomasGraf5/clium-container-networking-with-bpf-xdp -[44]:http://www.slideshare.net/Docker/cilium-bpf-xdp-for-containers-66969823 -[45]:https://www.youtube.com/watch?v=TnJF7ht3ZYc&list=PLkA60AVN3hh8oPas3cq2VA9xB7WazcIgs -[46]:http://www.slideshare.net/ThomasGraf5/cilium-fast-ipv6-container-networking-with-bpf-and-xdp -[47]:https://fosdem.org/2017/schedule/event/cilium/ -[48]:http://openvswitch.org/support/ovscon2016/7/1120-tu.pdf -[49]:http://openvswitch.org/support/ovscon2016/7/1245-bertrone.pdf -[50]:https://www.spinics.net/lists/xdp-newbies/msg00179.html -[51]:https://www.spinics.net/lists/xdp-newbies/msg00181.html -[52]:https://www.spinics.net/lists/xdp-newbies/msg00185.html -[53]:http://schd.ws/hosted_files/ossna2017/da/BPFandXDP.pdf -[54]:https://speakerdeck.com/tuxology/the-bsd-packet-filter -[55]:http://www.slideshare.net/brendangregg/bpf-tracing-and-more -[56]:http://fr.slideshare.net/brendangregg/linux-bpf-superpowers -[57]:https://www.socallinuxexpo.org/sites/default/files/presentations/Room%20211%20-%20IOVisor%20-%20SCaLE%2014x.pdf -[58]:https://events.linuxfoundation.org/sites/events/files/slides/ebpf_on_the_mainframe_lcon_2015.pdf -[59]:https://events.linuxfoundation.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf -[60]:https://events.linuxfoundation.org/sites/events/files/slides/bpf_collabsummit_2015feb20.pdf -[61]:https://lwn.net/Articles/603983/ -[62]:http://www.slideshare.net/vh21/meet-cutebetweenebpfandtracing -[63]:http://www.slideshare.net/vh21/linux-kernel-tracing -[64]:http://www.slideshare.net/ThomasGraf5/linux-networking-explained -[65]:http://www.slideshare.net/ThomasGraf5/linuxcon-2015-linux-kernel-networking-walkthrough -[66]:http://www.tcpdump.org/papers/bpf-usenix93.pdf -[67]:http://www.gsp.com/cgi-bin/man.cgi?topic=bpf -[68]:http://borkmann.ch/talks/2013_devconf.pdf -[69]:http://borkmann.ch/talks/2014_devconf.pdf -[70]:https://blog.cloudflare.com/introducing-the-bpf-tools/ -[71]:http://biot.com/capstats/bpf.html -[72]:https://www.iovisor.org/technology/xdp -[73]:https://github.com/iovisor/bpf-docs/raw/master/Express_Data_Path.pdf -[74]:https://events.linuxfoundation.org/sites/events/files/slides/iovisor-lc-bof-2016.pdf -[75]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1 -[76]:http://netdevconf.org/1.2/session.html?herbert-xdp-workshop -[77]:https://schd.ws/hosted_files/2016p4workshop/1d/Intel%20Fastabend-P4%20on%20the%20Edge.pdf 
-[78]:https://ovsorbit.benpfaff.org/#e11 -[79]:http://open-nfp.org/media/pdfs/Open_NFP_P4_EBPF_Linux_TC_Offload_FINAL.pdf -[80]:https://opensource.googleblog.com/2016/11/cilium-networking-and-security.html -[81]:https://ovsorbit.benpfaff.org/ -[82]:http://blog.ipspace.net/2016/10/fast-linux-packet-forwarding-with.html -[83]:http://netdevconf.org/2.1/session.html?bertin -[84]:http://netdevconf.org/2.1/session.html?zhou -[85]:http://www.slideshare.net/IOVisor/ceth-for-xdp-linux-meetup-santa-clara-july-2016 -[86]:http://info.iet.unipi.it/~luigi/vale/ -[87]:https://github.com/YutaroHayakawa/vale-bpf -[88]:https://www.stamus-networks.com/2016/09/28/suricata-bypass-feature/ -[89]:http://netdevconf.org/1.2/slides/oct6/10_suricata_ebpf.pdf -[90]:https://www.slideshare.net/ennael/kernel-recipes-2017-ebpf-and-xdp-eric-leblond -[91]:https://github.com/iovisor/bpf-docs/blob/master/university/sigcomm-ccr-InKev-2016.pdf -[92]:https://fosdem.org/2017/schedule/event/go_bpf/ -[93]:https://wkz.github.io/ply/ -[94]:https://www.kernel.org/doc/Documentation/networking/filter.txt -[95]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/Documentation/bpf/bpf_design_QA.txt?id=2e39748a4231a893f057567e9b880ab34ea47aef -[96]:https://github.com/iovisor/bpf-docs/blob/master/eBPF.md -[97]:https://github.com/iovisor/bcc/tree/master/docs -[98]:https://github.com/iovisor/bpf-docs/ -[99]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md -[100]:http://man7.org/linux/man-pages/man2/bpf.2.html -[101]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html -[102]:https://prototype-kernel.readthedocs.io/en/latest/bpf/index.html -[103]:http://docs.cilium.io/en/latest/bpf/ -[104]:https://ferrisellis.com/tags/ebpf/ -[105]:http://linux-ip.net/articles/Traffic-Control-HOWTO/ -[106]:http://lartc.org/lartc.html -[107]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/man/man8 -[108]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc?h=v4.13.0 -[109]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc/actions?h=v4.13.0 -[110]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop -[111]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/commit/bash-completion/tc?id=27d44f3a8a4708bcc99995a4d9b6fe6f81e3e15b -[112]:https://prototype-kernel.readthedocs.io/en/latest/networking/XDP/index.html -[113]:https://marc.info/?l=linux-netdev&m=147436253625672 -[114]:http://docs.cilium.io/en/latest/bpf/ -[115]:https://github.com/iovisor/bcc/blob/master/INSTALL.md -[116]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/bpf.h -[117]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/bpf.h -[118]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/filter.h -[119]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/filter.h -[120]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/bpf -[121]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/filter.c -[122]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/x86/net/bpf_jit_comp.c -[123]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/sched -[124]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/trace/bpf_trace.c -[125]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/seccomp.c 
-[126]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/seccomp/seccomp_bpf.c -[127]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md -[128]:https://github.com/iovisor/bcc/blob/master/FAQ.txt -[129]:https://www.kernel.org/doc/Documentation/networking/filter.txt -[130]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md -[131]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py -[132]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md#output -[133]:https://www.spinics.net/lists/netdev/msg406926.html -[134]:https://github.com/cilium/bpf-map -[135]:https://github.com/badboy/bpf-map -[136]:https://stackoverflow.com/questions/tagged/bpf -[137]:https://github.com/iovisor/xdp-vagrant -[138]:https://github.com/zlim/bcc-docker -[139]:http://lists.openwall.net/netdev/ -[140]:http://vger.kernel.org/vger-lists.html#xdp-newbies -[141]:http://lists.iovisor.org/pipermail/iovisor-dev/ -[142]:https://twitter.com/IOVisor -[143]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#what-is-bpf -[144]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#dive-into-the-bytecode -[145]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#resources -[146]:https://github.com/qmonnet/whirl-offload/commits/gh-pages/_posts/2016-09-01-dive-into-bpf.md -[147]:http://netdevconf.org/1.2/session.html?jakub-kicinski -[148]:http://www.slideshare.net/IOVisor/express-data-path-linux-meetup-santa-clara-july-2016 -[149]:https://cdn.shopify.com/s/files/1/0177/9886/files/phv2017-gbertin.pdf -[150]:https://github.com/cilium/cilium -[151]:https://fosdem.org/2017/schedule/event/stateful_ebpf/ -[152]:http://vger.kernel.org/vger-lists.html#xdp-newbies -[153]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md -[154]:https://github.com/qmonnet/whirl-offload/commit/d694f8081ba00e686e34f86d5ee76abeb4d0e429 -[155]:http://openvswitch.org/pipermail/dev/2014-October/047421.html -[156]:https://qmonnet.github.io/whirl-offload/2016/07/15/beba-research-project/ -[157]:https://www.iovisor.org/resources/blog -[158]:http://www.brendangregg.com/blog/2016-03-05/linux-bpf-superpowers.html -[159]:http://p4.org/ -[160]:https://github.com/iovisor/bcc/tree/master/src/cc/frontends/p4 -[161]:https://github.com/p4lang/p4c/blob/master/backends/ebpf/README.md -[162]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md -[163]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md -[164]:https://github.com/goldshtn/linux-tracing-workshop -[165]:https://blog.yadutaf.fr/2017/07/28/tracing-a-packet-journey-using-linux-tracepoints-perf-ebpf/ -[166]:https://open-nfp.org/dataplanes-ebpf/technical-papers/ -[167]:http://netdevconf.org/2.1/session.html?gospodarek -[168]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/samples/bpf -[169]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/examples/bpf -[170]:https://github.com/iovisor/bcc/tree/master/examples -[171]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html -[172]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/dev.c -[173]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/mellanox/mlx4/ -[174]:https://github.com/iovisor/bcc/ -[175]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py -[176]:https://github.com/iovisor/bcc/blob/master/src/cc/libbpf.c 
diff --git a/sources/tech/20171107 GitHub welcomes all CI tools.md b/sources/tech/20171107 GitHub welcomes all CI tools.md
deleted file mode 100644
index 7bef351bd6..0000000000
--- a/sources/tech/20171107 GitHub welcomes all CI tools.md
+++ /dev/null
@@ -1,95 +0,0 @@
-translating---geekpi
-
-GitHub welcomes all CI tools
-====================
-
-
-[![GitHub and all CI tools](https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png)][11]
-
-Continuous Integration ([CI][12]) tools help you stick to your team's quality standards by running tests every time you push a new commit and [reporting the results][13] to a pull request. Combined with continuous delivery ([CD][14]) tools, you can also test your code on multiple configurations, run additional performance tests, and automate every step [until production][15].
-
-There are several CI and CD tools that [integrate with GitHub][16], some of which you can install in a few clicks from [GitHub Marketplace][17]. With so many options, you can pick the best tool for the job—even if it's not the one that comes pre-integrated with your system.
-
-The tools that will work best for you depend on many factors, including:
-
-* Programming language and application architecture
-
-* Operating system and browsers you plan to support
-
-* Your team's experience and skills
-
-* Scaling capabilities and plans for growth
-
-* Geographic distribution of dependent systems and the people who use them
-
-* Packaging and delivery goals
-
-Of course, it isn't possible to optimize your CI tool for all of these scenarios. The people who build them have to choose which use cases to serve best—and when to prioritize complexity over simplicity. For example, if you like to test small applications written in a particular programming language for one platform, you won't need the complexity of a tool that tests embedded software controllers on dozens of platforms with a broad mix of programming languages and frameworks.
-
-If you need a little inspiration for which CI tool might work best, take a look at [popular GitHub projects][18]. Many show the status of their integrated CI/CD tools as badges in their README.md. We've also analyzed the use of CI tools across more than 50 million repositories in the GitHub community, and found a lot of variety.
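-
-A quick note on how that analysis works: CI tools report their results to GitHub by posting a commit status, and the status's `context` string is what identifies the tool that posted it. Purely as an illustration (hypothetical repository, commit SHA, and token), creating one from Python looks roughly like this:
-
-```
-import requests
-
-# Illustrative only: substitute a real repo, commit SHA, and token.
-url = "https://api.github.com/repos/octocat/example/statuses/<commit-sha>"
-status = {
-    "state": "success",  # one of: pending, success, failure, error
-    "context": "continuous-integration/travis-ci",  # identifies the CI tool
-    "description": "The build passed",
-}
-r = requests.post(url, json=status,
-                  headers={"Authorization": "token <api-token>"})
-r.raise_for_status()
-```
-
-Counting the `context` values attached to pull requests is what produced this analysis.
-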
The following diagram shows the relative percentage of the top 10 CI tools used with GitHub.com, based on the most-used [commit status contexts][19] within our pull requests.
-
- _Our analysis also showed that many teams use more than one CI tool in their projects, allowing them to emphasize what each tool does best._
-
- [![Top 10 CI systems used with GitHub.com based on most used commit status contexts](https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png)][20]
-
-If you'd like to check them out, here are the top 10 tools teams use:
-
-* [Travis CI][1]
-
-* [Circle CI][2]
-
-* [Jenkins][3]
-
-* [AppVeyor][4]
-
-* [CodeShip][5]
-
-* [Drone][6]
-
-* [Semaphore CI][7]
-
-* [Buildkite][8]
-
-* [Wercker][9]
-
-* [TeamCity][10]
-
-It's tempting to just pick the default, pre-integrated tool without taking the time to research and choose the best one for the job, but there are plenty of [excellent choices][21] built for your specific use cases. And if you change your mind later, no problem. When you choose the best tool for a specific situation, you're guaranteeing tailored performance and the freedom of interchangeability when it no longer fits.
-
-Ready to see how CI tools can fit into your workflow?
-
-[Browse GitHub Marketplace][22]
-
---------------------------------------------------------------------------------
-
-via: https://github.com/blog/2463-github-welcomes-all-ci-tools
-
-作者:[jonico ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://github.com/jonico
-[1]:https://travis-ci.org/
-[2]:https://circleci.com/
-[3]:https://jenkins.io/
-[4]:https://www.appveyor.com/
-[5]:https://codeship.com/
-[6]:http://try.drone.io/
-[7]:https://semaphoreci.com/
-[8]:https://buildkite.com/
-[9]:http://www.wercker.com/
-[10]:https://www.jetbrains.com/teamcity/
-[11]:https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png
-[12]:https://en.wikipedia.org/wiki/Continuous_integration
-[13]:https://github.com/blog/2051-protected-branches-and-required-status-checks
-[14]:https://en.wikipedia.org/wiki/Continuous_delivery
-[15]:https://developer.github.com/changes/2014-01-09-preview-the-new-deployments-api/
-[16]:https://github.com/works-with/category/continuous-integration
-[17]:https://github.com/marketplace/category/continuous-integration
-[18]:https://github.com/explore?trending=repositories#trending
-[19]:https://developer.github.com/v3/repos/statuses/
-[20]:https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png
-[21]:https://github.com/works-with/category/continuous-integration
-[22]:https://github.com/marketplace/category/continuous-integration
diff --git a/sources/tech/20171112 Love Your Bugs.md b/sources/tech/20171112 Love Your Bugs.md
deleted file mode 100644
index bf79f27cf7..0000000000
--- a/sources/tech/20171112 Love Your Bugs.md
+++ /dev/null
@@ -1,311 +0,0 @@
-Love Your Bugs
-============================================================
-
-In early October I gave a keynote at [Python Brasil][1] in Belo Horizonte. Here is an aspirational and lightly edited transcript of the talk. There is also a video available [here][2].
-
-### I love bugs
-
-I’m currently a senior engineer at [Pilot.com][3], working on automating bookkeeping for startups.
Before that, I worked for [Dropbox][4] on the desktop client team, and I’ll have a few stories about my work there. Earlier, I was a facilitator at the [Recurse Center][5], a writers retreat for programmers in NYC. I studied astrophysics in college and worked in finance for a few years before becoming an engineer. - -But none of that is really important to remember – the only thing you need to know about me is that I love bugs. I love bugs because they’re entertaining. They’re dramatic. The investigation of a great bug can be full of twists and turns. A great bug is like a good joke or a riddle – you’re expecting one outcome, but the result veers off in another direction. - -Over the course of this talk I’m going to tell you about some bugs that I have loved, explain why I love bugs so much, and then convince you that you should love bugs too. - -### Bug #1 - -Ok, straight into bug #1\. This is a bug that I encountered while working at Dropbox. As you may know, Dropbox is a utility that syncs your files from one computer to the cloud and to your other computers. - - - -``` - +--------------+ +---------------+ - | | | | - | METASERVER | | BLOCKSERVER | - | | | | - +-+--+---------+ +---------+-----+ - ^ | ^ - | | | - | | +----------+ | - | +---> | | | - | | CLIENT +--------+ - +--------+ | - +----------+ -``` - - -Here’s a vastly simplified diagram of Dropbox’s architecture. The desktop client runs on your local computer listening for changes in the file system. When it notices a changed file, it reads the file, then hashes the contents in 4MB blocks. These blocks are stored in the backend in a giant key-value store that we call blockserver. The key is the digest of the hashed contents, and the values are the contents themselves. - -Of course, we want to avoid uploading the same block multiple times. You can imagine that if you’re writing a document, you’re probably mostly changing the end – we don’t want to upload the beginning over and over. So before uploading a block to the blockserver the client talks to a different server that’s responsible for managing metadata and permissions, among other things. The client asks metaserver whether it needs the block or has seen it before. The “metaserver” responds with whether or not each block needs to be uploaded. - -So the request and response look roughly like this: The client says, “I have a changed file made up of blocks with hashes `'abcd,deef,efgh'`”. The server responds, “I have those first two, but upload the third.” Then the client sends the block up to the blockserver. - - -``` - +--------------+ +---------------+ - | | | | - | METASERVER | | BLOCKSERVER | - | | | | - +-+--+---------+ +---------+-----+ - ^ | ^ - | | 'ok, ok, need' | -'abcd,deef,efgh' | | +----------+ | efgh: [contents] - | +---> | | | - | | CLIENT +--------+ - +--------+ | - +----------+ -``` - - - -That’s the setup. So here’s the bug. - - - -``` - +--------------+ - | | - | METASERVER | - | | - +-+--+---------+ - ^ | - | | '???' -'abcdldeef,efgh' | | +----------+ - ^ | +---> | | - ^ | | CLIENT + - +--------+ | - +----------+ -``` - -Sometimes the client would make a weird request: each hash value should have been sixteen characters long, but instead it was thirty-three characters long – twice as many plus one. The server wouldn’t know what to do with this and would throw an exception. 
We’d see this exception get reported, and we’d go look at the log files from the desktop client, and really weird stuff would be going on – the client’s local database had gotten corrupted, or python would be throwing MemoryErrors, and none of it would make sense.
-
-If you’ve never seen this problem before, it’s totally mystifying. But once you’ve seen it once, you can recognize it every time thereafter. Here’s a hint: the character we’d most often see in the middle of each 33-character string, right where a comma should have been, was `l`. These are the other characters we’d see in the middle position:
-
-
-```
-l \x0c < $ ( . -
-```
-
-The ordinal value for an ascii comma – `,` – is 44\. The ordinal value for `l` is 108\. In binary, here’s how those two are represented:
-
-```
-bin(ord(',')): 0101100
-bin(ord('l')): 1101100
-```
-
-You’ll notice that an `l` is exactly one bit away from a comma. And herein lies your problem: a bitflip. One bit of memory that the desktop client is using has gotten corrupted, and now the desktop client is sending a request to the server that is garbage.
-
-And here are the other characters we’d frequently see instead of the comma when a different bit had been flipped.
-
-
-```
-,    : 0101100
-l    : 1101100
-\x0c : 0001100
-<    : 0111100
-$    : 0100100
-(    : 0101000
-.    : 0101110
--    : 0101101
-```
-
-### Bitflips are real!
-
-I love this bug because it shows that bitflips are a real thing that can happen, not just a theoretical concern. In fact, there are some domains where they’re more common than others. One such domain is if you’re getting requests from users with low-end or old hardware, which is true for a lot of laptops running Dropbox. Another domain with lots of bitflips is outer space – there’s no atmosphere in space to protect your memory from energetic particles and radiation, so bitflips are pretty common.
-
-You probably really care about correctness in space – your code might be keeping astronauts alive on the ISS, for example, but even if it’s not mission-critical, it’s hard to do software updates to space. If you really need your application to defend against bitflips, there are a variety of hardware & software approaches you can take, and there’s a [very interesting talk][6] by Katie Betchold about this.
-
-Dropbox in this context doesn’t really need to protect against bitflips. The machine that is corrupting memory is a user’s machine, so we can detect if the bitflip happens to fall in the comma – but if it’s in a different character we don’t necessarily know it, and if the bitflip is in the actual file data read off of disk, then we have no idea. There’s a pretty limited set of places where we could address this, and instead we decide to basically silence the exception and move on. Often this kind of bug resolves after the client restarts.
-
-### Unlikely bugs aren’t impossible
-
-This is one of my favorite bugs for a couple of reasons. The first is that it’s a reminder of the difference between unlikely and impossible. At sufficient scale, unlikely events start to happen at a noticeable rate.
-
-### Social bugs
-
-My second favorite thing about this bug is that it’s a tremendously social one. This bug can crop up anywhere that the desktop client talks to the server, which is a lot of different endpoints and components in the system. This meant that a lot of different engineers at Dropbox would see versions of the bug.
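-
-The check for it is tiny, too. Here’s a Python sketch of it – my own illustration, not code from the talk – testing whether a character is exactly one flipped bit away from an ASCII comma:
-
-```
-def one_bitflip_from_comma(ch):
-    """True if ch differs from ',' in exactly one bit."""
-    diff = ord(ch) ^ ord(',')
-    # A power of two has exactly one bit set.
-    return diff != 0 and diff & (diff - 1) == 0
-
-# Every character from the table above passes; a random one does not.
-for ch in ['l', '\x0c', '<', '$', '(', '.', '-', 'x']:
-    print(repr(ch), one_bitflip_from_comma(ch))
-```
-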
The first time you see it, you can  _really_  scratch your head, but after that it’s easy to diagnose, and the investigation is really quick: you look at the middle character and see if it’s an `l`.
-
-### Cultural differences
-
-One interesting side-effect of this bug was that it exposed a cultural difference between the server and client teams. Occasionally this bug would be spotted by a member of the server team and investigated from there. If one of your  _servers_  is flipping bits, that’s probably not random chance – it’s probably memory corruption, and you need to find the affected machine and get it out of the pool as fast as possible or you risk corrupting a lot of user data. That’s an incident, and you need to respond quickly. But if the user’s machine is corrupting data, there’s not a lot you can do.
-
-### Share your bugs
-
-So if you’re investigating a confusing bug, especially one in a big system, don’t forget to talk to people about it. Maybe your colleagues have seen a bug shaped like this one before. If they have, you might save a lot of time. And if they haven’t, don’t forget to tell people about the solution once you’ve figured it out – write it up or tell the story in your team meeting. Then the next time your team hits something similar, you’ll all be more prepared.
-
-### How bugs can help you learn
-
-### Recurse Center
-
-Before I joined Dropbox, I worked for the Recurse Center. The idea behind RC is that it’s a community of self-directed learners spending time together getting better as programmers. That is the full extent of the structure of RC: there’s no curriculum or assignments or deadlines. The only scoping is a shared goal of getting better as a programmer. We’d see people come to participate in the program who had gotten CS degrees but didn’t feel like they had a solid handle on practical programming, or people who had been writing Java for ten years and wanted to learn Clojure or Haskell, and many other profiles as well.
-
-My job there was as a facilitator, helping people make the most of the lack of structure and providing guidance based on what we’d learned from earlier participants. So my colleagues and I were very interested in the best techniques for learning for self-motivated adults.
-
-### Deliberate Practice
-
-There’s a lot of different research in this space, and one of the ideas I think is most interesting is deliberate practice. Deliberate practice is an attempt to explain the difference in performance between experts & amateurs. And the guiding principle here is that if you look just at innate characteristics – genetic or otherwise – they don’t go very far towards explaining the difference in performance. So the researchers, originally Ericsson, Krampe, and Tesch-Romer, set out to discover what did explain the difference. And what they settled on was time spent in deliberate practice.
-
-Deliberate practice is pretty narrow in their definition: it’s not work for pay, and it’s not playing for fun. You have to be operating on the edge of your ability, doing a project appropriate for your skill level (not so easy that you don’t learn anything and not so hard that you don’t make any progress). You also have to get immediate feedback on whether or not you’ve done the thing correctly.
-
-This is really exciting, because it’s a framework for how to build expertise. But the challenge is that as programmers this is really hard advice to apply. It’s hard to know whether you’re operating at the edge of your ability.
Immediate corrective feedback is very rare – in some cases you’re lucky to get feedback ever, and in other cases maybe it takes months. You can get quick feedback on small things in the REPL and so on, but if you’re making a design decision or picking a technology, you’re not going to get feedback on those things for quite a long time.
-
-But one category of programming where deliberate practice is a useful model is debugging. If you wrote code, then you had a mental model of how it worked when you wrote it. But your code has a bug, so your mental model isn’t quite right. By definition you’re on the boundary of your understanding – so, great! You’re about to learn something new. And if you can reproduce the bug, that’s a rare case where you can get immediate feedback on whether or not your fix is correct.
-
-A bug like this might teach you something small about your program, or you might learn something larger about the system your code is running in. Now I’ve got a story for you about a bug like that.
-
-### Bug #2
-
-This bug is also one that I encountered at Dropbox. At the time, I was investigating why some desktop clients weren’t sending logs as consistently as we expected. I’d started digging into the client logging system and discovered a bunch of interesting bugs. I’ll tell you only the subset of those bugs that is relevant to this story.
-
-Again, here’s a very simplified architecture of the system.
-
-
-```
-                          +--------------+
-                          |              |
-  +---+      +----------> |  LOG SERVER  |
-  |log|      |            |              |
-  +---+      |            +------+-------+
-    |        |                   |
- +--+--------+--+                |  200 ok
- |              |                |
- |    CLIENT    | <--------------+
- |              |
- +------+-------+
-        ^
-        +--------+--------+--------+
-        |        ^        ^        |
-     +--+--+  +--+--+  +--+--+  +--+--+
-     | log |  | log |  | log |  | log |
-     |     |  |     |  |     |  |     |
-     |     |  |     |  |     |  |     |
-     +-----+  +-----+  +-----+  +-----+
-```
-
-The desktop client would generate logs. Those logs were compressed, encrypted, and written to disk. Then every so often the client would send them up to the server. The client would read a log off of disk and send it to the log server. The server would decrypt it and store it, then respond with a 200.
-
-If the client couldn’t reach the log server, it wouldn’t let the log directory grow unbounded. After a certain point it would start deleting logs to keep the directory under a maximum size.
-
-The first two bugs were not a big deal on their own. The first one was that the desktop client sent logs up to the server starting with the oldest one instead of starting with the newest. This isn’t really what you want – for example, the server would tell the client to send logs if the client reported an exception, so probably you care about the logs that just happened and not the oldest logs that happen to be on disk.
-
-The second bug was similar to the first: if the log directory hit its maximum size, the client would delete the logs starting with the newest instead of starting with the oldest. Again, you lose log files either way, but you probably care less about the older ones.
-
-The third bug had to do with the encryption. Sometimes, the server would be unable to decrypt a log file. (We generally didn’t figure out why – maybe it was a bitflip.) We weren’t handling this error correctly on the backend, so the server would reply with a 500\. The client would behave reasonably in the face of a 500: it would assume that the server was down. So it would stop sending log files and not try to send up any of the others.
-
-Returning a 500 on a corrupted log file is clearly not the right behavior.
You could consider returning a 400, since it’s a problem with the client request. But the client also can’t fix the problem – if the log file can’t be decrypted now, we’ll never be able to decrypt it in the future. What you really want the client to do is just delete the log and move on. In fact, that’s the default behavior when the client gets a 200 back from the server for a log file that was successfully stored. So we said, ok – if the log file can’t be decrypted, just return a 200.
-
-All of these bugs were straightforward to fix. The first two bugs were on the client, so we’d fixed them on the alpha build but they hadn’t gone out to the majority of clients. The third bug we fixed on the server and deployed.
-
-### 📈
-
-Suddenly traffic to the log cluster spikes. The serving team reaches out to us to ask if we know what’s going on. It takes me a minute to put all the pieces together.
-
-Before these fixes, there were four things going on:
-
-1. Log files were sent up starting with the oldest
-
-2. Log files were deleted starting with the newest
-
-3. If the server couldn’t decrypt a log file it would 500
-
-4. If the client got a 500 it would stop sending logs
-
-A client with a corrupted log file would try to send it, the server would 500, the client would give up sending logs. On its next run, it would try to send the same file again, fail again, and give up again. Eventually the log directory would get full, at which point the client would start deleting its newest files, leaving the corrupted one on disk.
-
-The upshot of these three bugs: if a client ever had a corrupted log file, we would never see logs from that client again.
-
-The problem is that there were a lot more clients in this state than we thought. Any client with a single corrupted file had been dammed up from sending logs to the server. Now that dam was cleared, and all of them were sending up the rest of the contents of their log directories.
-
-### Our options
-
-Ok, there’s a huge flood of traffic coming from machines around the world. What can we do? (This is a fun thing about working at a company with Dropbox’s scale, and particularly Dropbox’s scale of desktop clients: you can trigger a self-DDOS very easily.)
-
-The first option when you do a deploy and things start going sideways is to roll back. Totally reasonable choice, but in this case, it wouldn’t have helped us. The state that we’d transformed wasn’t the state on the server but the state on the client – we’d deleted those files. Rolling back the server would prevent additional clients from entering this state, but it wouldn’t solve the problem.
-
-What about increasing the size of the logging cluster? We did that – and started getting even more requests, now that we’d increased our capacity. We increased it again, but you can’t do that forever. Why not? This cluster isn’t isolated. It’s making requests into another cluster, in this case to handle exceptions. If you have a DDOS pointed at one cluster, and you keep scaling that cluster, you’re going to knock over its dependencies too, and now you have two problems.
-
-Another option we considered was shedding load – you don’t need every single log file, so could we just drop requests? One of the challenges here was that we didn’t have an easy way to tell good traffic from bad. We couldn’t quickly differentiate which log files were old and which were new.
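-
-To see how thoroughly those four behaviors interact to dam a client up, here’s a toy simulation – my own sketch for illustration, not Dropbox’s code:
-
-```
-# Toy model of the four behaviors listed above (illustrative only).
-MAX_LOGS = 5
-
-def server_response(log):
-    return 500 if log["corrupted"] else 200          # behavior 3
-
-def client_send_logs(logs):
-    for log in sorted(logs, key=lambda l: l["ts"]):  # behavior 1: oldest first
-        if server_response(log) != 200:
-            return                                   # behavior 4: give up entirely
-        logs.remove(log)
-
-def write_log(logs, ts, corrupted=False):
-    logs.append({"ts": ts, "corrupted": corrupted})
-    while len(logs) > MAX_LOGS:
-        logs.remove(max(logs, key=lambda l: l["ts"]))  # behavior 2: delete newest
-
-logs = [{"ts": 0, "corrupted": True}]  # one old, corrupted log file
-for ts in range(1, 50):
-    write_log(logs, ts)
-    client_send_logs(logs)
-print(len(logs))  # pinned at MAX_LOGS: nothing is ever uploaded again
-```
-
-Every client stuck in this state had a full directory waiting to go the moment the dam cleared, and nothing about an individual request cheaply distinguished that backlog from fresh traffic.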
-
-The solution we hit on is one that’s been used at Dropbox on a number of different occasions: we have a custom header, `chillout`, which every client in the world respects. If the client gets a response with this header, then it doesn’t make any requests for the provided number of seconds. Someone very wise added this to the Dropbox client very early on, and it’s come in handy more than once over the years. The logging server didn’t have the ability to set that header, but that’s an easy problem to solve. So two of my colleagues, Isaac Goldberg and John Lai, implemented support for it. We set the logging cluster chillout to two minutes initially and then managed it down as the deluge subsided over the next couple of days.
-
-### Know your system
-
-The first lesson from this bug is to know your system. I had a good mental model of the interaction between the client and the server, but I wasn’t thinking about what would happen when the server was interacting with all the clients at once. There was a level of complexity that I hadn’t thought all the way through.
-
-### Know your tools
-
-The second lesson is to know your tools. If things go sideways, what options do you have? Can you reverse your migration? How will you know if things are going sideways, and how can you discover more? All of those things are great to know before a crisis – but if you don’t, you’ll learn them during a crisis and then never forget.
-
-### Feature flags & server-side gating
-
-The third lesson is for you if you’re writing a mobile or a desktop application:  _You need server-side feature gating and server-side flags._  When you discover a problem and you don’t have server-side controls, the resolution might take days or weeks as you push out a new release or submit a new version to the app store. That’s a bad situation to be in. The Dropbox desktop client isn’t going through an app store review process, but just pushing out a build to tens of millions of clients takes time. Compare that to hitting a problem in your feature and flipping a switch on the server: ten minutes later your problem is resolved.
-
-This strategy is not without its costs. Having a bunch of feature flags in your code adds to the complexity dramatically. You get a combinatorial problem with your testing: what if feature A is enabled and feature B, or just one, or neither – multiplied across N features. It’s extremely difficult to get engineers to clean up their feature flags after the fact (and I was also guilty of this). Then for the desktop client there are multiple versions in the wild at the same time, so it gets pretty hard to reason about.
-
-But the benefit – man, when you need it, you really need it.
-
-# How to love bugs
-
-I’ve talked about some bugs that I love and I’ve talked about why to love bugs. Now I want to tell you how to love bugs. If you don’t love bugs yet, I know of exactly one way to learn, and that’s to have a growth mindset.
-
-The psychologist Carol Dweck has done a ton of interesting research about how people think about intelligence. She’s found that there are two different frameworks for thinking about intelligence. The first, which she calls the fixed mindset, holds that intelligence is a fixed trait, and people can’t change how much of it they have. The other mindset is a growth mindset. Under a growth mindset, people believe that intelligence is malleable and can increase with effort.
-
-Dweck found that a person’s theory of intelligence – whether they hold a fixed or growth mindset – can significantly influence the way they select tasks to work on, the way they respond to challenges, their cognitive performance, and even their honesty.
-
-[I also talked about a growth mindset in my Kiwi PyCon keynote, so here are just a few excerpts. You can read the full transcript [here][7].]
-
-Findings about honesty:
-
-> After this, they had the students write letters to pen pals about the study, saying “We did this study at school, and here’s the score that I got.” They found that  _almost half of the students praised for intelligence lied about their scores_ , and almost no one who was praised for working hard was dishonest.
-
-On effort:
-
-> Several studies found that people with a fixed mindset can be reluctant to really exert effort, because they believe it means they’re not good at the thing they’re working hard on. Dweck notes, “It would be hard to maintain confidence in your ability if every time a task requires effort, your intelligence is called into question.”
-
-On responding to confusion:
-
-> They found that students with a growth mindset mastered the material about 70% of the time, regardless of whether there was a confusing passage in it. Among students with a fixed mindset, if they read the booklet without the confusing passage, again about 70% of them mastered the material. But the fixed-mindset students who encountered the confusing passage saw their mastery drop to 30%. Students with a fixed mindset were pretty bad at recovering from being confused.
-
-These findings show that a growth mindset is critical while debugging. We have to recover from confusion, be candid about the limitations of our understanding, and at times really struggle on the way to finding solutions – all of which is easier and less painful with a growth mindset.
-
-### Love your bugs
-
-I learned to love bugs by explicitly celebrating challenges while working at the Recurse Center. A participant would sit down next to me and say, “[sigh] I think I’ve got a weird Python bug,” and I’d say, “Awesome, I  _love_  weird Python bugs!” First of all, this is definitely true, but more importantly, it emphasized to the participant that finding something where they struggled was an accomplishment, and it was a good thing for them to have done that day.
-
-As I mentioned, at the Recurse Center there are no deadlines and no assignments, so this attitude is pretty much free. I’d say, “You get to spend a day chasing down this weird bug in Flask, how exciting!” At Dropbox and later at Pilot, where we have a product to ship, deadlines, and users, I’m not always uniformly delighted about spending a day on a weird bug. So I’m sympathetic to the reality of the world where there are deadlines. However, if I have a bug to fix, I have to fix it, and being grumbly about the existence of the bug isn’t going to help me fix it faster. I think that even in a world where deadlines loom, you can still apply this attitude.
-
-If you love your bugs, you can have more fun while you’re working on a tough problem. You can be less worried and more focused, and end up learning more from them. Finally, you can share a bug with your friends and colleagues, which helps you and your teammates.
-
-### Obrigada!
-
-My thanks to folks who gave me feedback on this talk and otherwise contributed to my being there:
-
-* Sasha Laundy
-
-* Amy Hanlon
-
-* Julia Evans
-
-* Julian Cooper
-
-* Raphael Passini Diniz and the rest of the Python Brasil organizing team
-
---------------------------------------------------------------------------------
-
-via: http://akaptur.com/blog/2017/11/12/love-your-bugs/
-
-作者:[Allison Kaptur ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://akaptur.com/about/
-[1]:http://2017.pythonbrasil.org.br/#
-[2]:http://www.youtube.com/watch?v=h4pZZOmv4Qs
-[3]:http://www.pilot.com/
-[4]:http://www.dropbox.com/
-[5]:http://www.recurse.com/
-[6]:http://www.youtube.com/watch?v=ETgNLF_XpEM
-[7]:http://akaptur.com/blog/2015/10/10/effective-learning-strategies-for-programmers/
diff --git a/sources/tech/20171113 Glitch write fun small web projects instantly.md b/sources/tech/20171113 Glitch write fun small web projects instantly.md
deleted file mode 100644
index 734853ce51..0000000000
--- a/sources/tech/20171113 Glitch write fun small web projects instantly.md
+++ /dev/null
@@ -1,76 +0,0 @@
-translating---geekpi
-
-Glitch: write fun small web projects instantly
-============================================================
-
-I just wrote about Jupyter Notebooks, which are a fun interactive way to write Python code. That reminded me I learned about Glitch recently, which I also love!! I built a small app to [turn off twitter retweets][2] with it. So!
-
-[Glitch][3] is an easy way to make Javascript webapps. (javascript backend, javascript frontend)
-
-The fun thing about glitch is:
-
-1. you start typing Javascript code into their web interface
-
-2. as soon as you type something, it automagically reloads the backend of your website with the new code. You don’t even have to save!! It autosaves.
-
-So it’s like Heroku, but even more magical!! Coding like this (you type, and the code runs on the public internet immediately) just feels really **fun** to me.
-
-It’s kind of like sshing into a server and editing PHP/HTML code on your server and having it instantly available, which I kind of also loved. Now we have “better deployment practices” than “just edit the code and it is instantly on the internet” but we are not talking about Serious Development Practices, we are talking about writing tiny programs for fun.
-
-### glitch has awesome example apps
-
-Glitch seems like a fun, nice way to learn programming!
-
-For example, there’s a space invaders game (code by [Mary Rose Cook][4]) at [https://space-invaders.glitch.me/][5]. The thing I love about this is that in just a few clicks I can
-
-1. click “remix this”
-
-2. start editing the code to make the boxes orange instead of black
-
-3. have my own space invaders game!! Mine is at [http://julias-space-invaders.glitch.me/][1]. (i just made very tiny edits to make it orange, nothing fancy)
-
-They have tons of example apps that you can start from – for instance [bots][6], [games][7], and more.
-
-### awesome actually useful app: tweetstorms
-
-The way I learned about Glitch was from this app which shows you tweetstorms from a given user: [https://tweetstorms.glitch.me/][8].
-
-For example, you can see [@sarahmei][9]’s tweetstorms at [https://tweetstorms.glitch.me/sarahmei][10] (she tweets a lot of good tweetstorms!).
-
-### my glitch app: turn off retweets
-
-When I learned about Glitch I wanted to turn off retweets for everyone I follow on Twitter (I know you can do it in Tweetdeck!) and doing it manually was a pain – I had to do it one person at a time. So I wrote a tiny Glitch app to do it for me!
-
-I liked that I didn’t have to set up a local development environment, I could just start typing and go!
-
-Glitch only supports Javascript and I don’t really know Javascript that well (I think I’ve never written a Node program before), so the code isn’t awesome. But I had a really good time writing it – being able to type and just see my code running instantly was delightful. Here it is: [https://turn-off-retweets.glitch.me/][11].
-
-### that’s all!
-
-Using Glitch feels really fun and democratic. Usually if I want to fork someone’s web project and make changes I wouldn’t do it – I’d have to fork it, figure out hosting, set up a local dev environment or Heroku or whatever, install the dependencies, etc. I think tasks like installing node.js dependencies used to be interesting, like “cool i am learning something new” and now I just find them tedious.
-
-So I love being able to just click “remix this!” and have my version on the internet instantly.
-
-
---------------------------------------------------------------------------------
-
-via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/
-
-作者:[Julia Evans ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://jvns.ca/
-[1]:http://julias-space-invaders.glitch.me/
-[2]:https://turn-off-retweets.glitch.me/
-[3]:https://glitch.com/
-[4]:https://maryrosecook.com/
-[5]:https://space-invaders.glitch.me/
-[6]:https://glitch.com/handy-bots
-[7]:https://glitch.com/games
-[8]:https://tweetstorms.glitch.me/
-[9]:https://twitter.com/sarahmei
-[10]:https://tweetstorms.glitch.me/sarahmei
-[11]:https://turn-off-retweets.glitch.me/
diff --git a/sources/tech/20171114 Sysadmin 101 Patch Management.md b/sources/tech/20171114 Sysadmin 101 Patch Management.md
deleted file mode 100644
index 55ca09da87..0000000000
--- a/sources/tech/20171114 Sysadmin 101 Patch Management.md
+++ /dev/null
@@ -1,61 +0,0 @@
-[Translating by @haoqixu] Sysadmin 101: Patch Management
-============================================================
-
-* [HOW-TOs][1]
-
-* [Servers][2]
-
-* [SysAdmin][3]
-
-
-A few articles ago, I started a Sysadmin 101 series to pass down some fundamental knowledge about systems administration that the current generation of junior sysadmins, DevOps engineers or "full stack" developers might not learn otherwise. I had thought that I was done with the series, but then the WannaCry malware came out and exposed some of the poor patch management practices still in place in Windows networks. I imagine some readers that are still stuck in the Linux versus Windows wars of the 2000s might have even smiled with a sense of superiority when they heard about this outbreak.
-
-The reason I decided to revive my Sysadmin 101 series so soon is that I realized most Linux system administrators are no different from Windows sysadmins when it comes to patch management. Honestly, in some areas (in particular, uptime pride), some Linux sysadmins are even worse than Windows sysadmins regarding patch management.
So in this article, I cover some of the fundamentals of patch management under Linux, including what a good patch management system looks like, the tools you will want to put in place and how the overall patching process should work.
-
-### What Is Patch Management?
-
-When I say patch management, I'm referring to the systems you have in place to update software already on a server. I'm not just talking about keeping up with the latest-and-greatest bleeding-edge version of a piece of software. Even more conservative distributions like Debian that stick with a particular version of software for its "stable" release still release frequent updates that patch bugs or security holes.
-
-Of course, if your organization decided to roll its own version of a particular piece of software, either because developers demanded the latest and greatest, you needed to fork the software to apply a custom change, or you just like giving yourself extra work, you now have a problem. Ideally you have put in a system that automatically packages up the custom version of the software for you in the same continuous integration system you use to build and package any other software, but many sysadmins still rely on the outdated method of packaging the software on their local machine based on (hopefully up to date) documentation on their wiki. In either case, you will need to confirm that your particular version has the security flaw, and if so, make sure that the new patch applies cleanly to your custom version.
-
-### What Good Patch Management Looks Like
-
-Patch management starts with knowing that there is a software update to begin with. First, for your core software, you should be subscribed to your Linux distribution's security mailing list, so you're notified immediately when there are security patches. If you use any software that doesn't come from your distribution, you must find out how to be kept up to date on security patches for that software as well. When new security notifications come in, you should review the details so you understand how severe the security flaw is, whether you are affected, and how urgent the patch is.
-
-Some organizations have a purely manual patch management system. With such a system, when a security patch comes along, the sysadmin figures out which servers are running the software, generally by relying on memory and by logging in to servers and checking. Then the sysadmin uses the server's built-in package management tool to update the software with the latest from the distribution. Then the sysadmin moves on to the next server, and the next, until all of the servers are patched.
-
-There are many problems with manual patch management. First is the fact that it makes patching a laborious chore. The more work patching is, the more likely a sysadmin will put it off or skip doing it entirely. The second problem is that manual patch management relies too much on the sysadmin's ability to remember and recall all of the servers he or she is responsible for and keep track of which are patched and which aren't. This makes it easy for servers to be forgotten and sit unpatched.
-
-The faster and easier patch management is, the more likely you are to do it. You should have a system in place that can quickly tell you which servers are running a particular piece of software at which version. Ideally, that system can also push out updates.
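-
-Even a small amount of automation removes the reliance on memory. As a purely illustrative sketch (hypothetical host names, assuming SSH key access to dpkg-based systems), a fleet-wide version check can be as simple as:
-
-```
-import subprocess
-
-# Hypothetical fleet; in practice, pull this list from your inventory system.
-HOSTS = ["web1.example.com", "web2.example.com", "db1.example.com"]
-PACKAGE = "openssl"
-
-for host in HOSTS:
-    try:
-        version = subprocess.check_output(
-            ["ssh", host, f"dpkg-query -W -f='${{Version}}' {PACKAGE}"],
-            text=True, timeout=30)
-        print(f"{host}: {PACKAGE} {version}")
-    except subprocess.SubprocessError as err:
-        print(f"{host}: check failed ({err})")
-```
-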
Personally, I prefer orchestration tools like MCollective for this task, but Red Hat provides Satellite, and Canonical provides Landscape as central tools that let you view software versions across your fleet of servers and apply patches all from a central place. - -Patching should be fault-tolerant as well. You should be able to patch a service and restart it without any overall down time. The same idea goes for kernel patches that require a reboot. My approach is to divide my servers into different high availability groups so that lb1, app1, rabbitmq1 and db1 would all be in one group, and lb2, app2, rabbitmq2 and db2 are in another. Then, I know I can patch one group at a time without it causing downtime anywhere else. - -So, how fast is fast? Your system should be able to roll out a patch to a minor piece of software that doesn't have an accompanying service (such as bash in the case of the ShellShock vulnerability) within a few minutes to an hour at most. For something like OpenSSL that requires you to restart services, the careful process of patching and restarting services in a fault-tolerant way probably will take more time, but this is where orchestration tools come in handy. I gave examples of how to use MCollective to accomplish this in my recent MCollective articles (see the December 2016 and January 2017 issues), but ideally, you should put a system in place that makes it easy to patch and restart services in a fault-tolerant and automated way. - -When patching requires a reboot, such as in the case of kernel patches, it might take a bit more time, but again, automation and orchestration tools can make this go much faster than you might imagine. I can patch and reboot the servers in an environment in a fault-tolerant way within an hour or two, and it would be much faster than that if I didn't need to wait for clusters to sync back up in between reboots. - -Unfortunately, many sysadmins still hold on to the outdated notion that uptime is a badge of pride—given that serious kernel patches tend to come out at least once a year if not more often, to me, it's proof you don't take security seriously. - -Many organizations also still have that single point of failure server that can never go down, and as a result, it never gets patched or rebooted. If you want to be secure, you need to remove these outdated liabilities and create systems that at least can be rebooted during a late-night maintenance window. - -Ultimately, fast and easy patch management is a sign of a mature and professional sysadmin team. Updating software is something all sysadmins have to do as part of their jobs, and investing time into systems that make that process easy and fast pays dividends far beyond security. For one, it helps identify bad architecture decisions that cause single points of failure. For another, it helps identify stagnant, out-of-date legacy systems in an environment and provides you with an incentive to replace them. Finally, when patching is managed well, it frees up sysadmins' time and turns their attention to the things that truly require their expertise. - -______________________ - -Kyle Rankin is senior security and infrastructure architect, the author of many books including Linux Hardening in Hostile Networks, DevOps Troubleshooting and The Official Ubuntu Server Book, and a columnist for Linux Journal. 
Follow him @kylerankin - --------------------------------------------------------------------------------- - -via: https://www.linuxjournal.com/content/sysadmin-101-patch-management - -作者:[Kyle Rankin ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxjournal.com/users/kyle-rankin -[1]:https://www.linuxjournal.com/tag/how-tos -[2]:https://www.linuxjournal.com/tag/servers -[3]:https://www.linuxjournal.com/tag/sysadmin -[4]:https://www.linuxjournal.com/users/kyle-rankin diff --git a/sources/tech/20171114 Take Linux and Run With It.md b/sources/tech/20171114 Take Linux and Run With It.md deleted file mode 100644 index b7b6cb9663..0000000000 --- a/sources/tech/20171114 Take Linux and Run With It.md +++ /dev/null @@ -1,68 +0,0 @@ -Take Linux and Run With It -============================================================ - -![](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2016-linux-1.jpg) - -![](https://www.linuxinsider.com/images/2015/image-credit-adobe-stock_130x15.gif) - - -"How do you run an operating system?" may seem like a simple question, since most of us are accustomed to turning on our computers and seeing our system spin up. However, this common model is only one way of running an operating system. As one of Linux's greatest strengths is versatility, Linux offers the most methods and environments for running it. - -To unleash the full power of Linux, and maybe even find a use for it you hadn't thought of, consider some less conventional ways of running it -- specifically, ones that don't even require installation on a computer's hard drive. - -### We'll Do It Live! - -Live-booting is a surprisingly useful and popular way to get the full Linux experience on the fly. While hard drives are where OSes reside most of the time, they actually can be installed to most major storage media, including CDs, DVDs and USB flash drives. - -When an OS is installed to some device other than a computer's onboard hard drive and subsequently booted instead of that onboard drive, it's called "live-booting" or running a "live session." - -At boot time, the user simply selects an external storage source for the hardware to look for boot information. If found, the computer follows the external device's boot instructions, essentially ignoring the onboard drive until the next time the user boots normally. Optical media are increasingly rare these days, so by far the most typical form that an external OS-carrying device takes is a USB stick. - -Most mainstream Linux distributions offer a way to run a live session as a way of trying them out. The live session doesn't save any user activity, and the OS resets to the clean default state after every shutdown. - -Live Linux sessions can be used for more than testing a distro, though. One application is for executing system repair for critically malfunctioning onboard (usually also Linux) systems. If an update or configuration made the onboard system unbootable, a full system backup is required, or the hard drive has sustained serious file corruption, the only recourse is to start up a live system and perform maintenance on the onboard drive. - -In these and similar scenarios, the onboard drive cannot be manipulated or corrected while also keeping the system stored on it running, so a live system takes on those burdens instead, leaving all but the problematic files on the onboard drive at rest. 
- -Live sessions also are perfectly suited for handling sensitive information. If you don't want a computer to retain any trace of the operations executed or information handled on it, especially if you are using hardware you can't vouch for -- like a public library or hotel business center computer -- a live session will provide you all the desktop computing functions to complete your task while retaining no trace of your session once you're finished. This is great for doing online banking or password input that you don't want a computer to remember. - -### Linux Virtually Anywhere - -Another approach for implementing Linux for more on-demand purposes is to run a virtual machine on another host OS. A virtual machine, or VM, is essentially a small computer running inside another computer and contained in a single large file. - -To run a VM, users simply install a hypervisor program (a kind of launcher for the VM), select a downloaded Linux OS image file (usually ending with a ".iso" file extension), and walk through the setup process. - -Most of the settings can be left at their defaults, but the key ones to configure are the amount of RAM and hard drive storage to lease to the VM. Fortunately, since Linux has a light footprint, you don't have to set these very high: 2 GB of RAM and 16 GB of storage should be plenty for the VM while still letting your host OS thrive. - -So what does this offer that a live system doesn't? First, whereas live systems are ephemeral, VMs can retain the data stored on them. This is great if you want to set up your Linux VM for a special use case, like software development or even security. - -When used for development, a Linux VM gives you the solid foundation of Linux's programming language suites and coding tools, and it lets you save your projects right in the VM to keep everything organized. - -If security is your goal, Linux VMs allow you to impose an extra layer between a potential hazard and your system. If you do your browsing from the VM, a malicious program would have to compromise not only your virtual Linux system, but also the hypervisor -- and  _then_ your host OS, a technical feat beyond all but the most skilled and determined adversaries. - -Second, you can start up your VM on demand from your host system, without having to power it down and start it up again as you would have to with a live session. When you need it, you can quickly bring up the VM, and when you're finished, you just shut it down and go back to what you were doing before. - -Your host system continues running normally while the VM is on, so you can attend to tasks simultaneously in each system. - -### Look Ma, No Installation! - -Just as there is no one form that Linux takes, there's also no one way to run it. Hopefully, this brief primer on the kinds of systems you can run has given you some ideas to expand your use models. - -The best part is that if you're not sure how these can help, live booting and virtual machines don't hurt to try!  
-![](https://www.ectnews.com/images/end-enn.gif) - --------------------------------------------------------------------------------- - -via: https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html - -作者:[ Jonathan Terrasi ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html#searchbyline -[1]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html# -[2]:https://www.linuxinsider.com/perl/mailit/?id=84951 -[3]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html -[4]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html diff --git a/sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md b/sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md deleted file mode 100644 index a0a6b1ed60..0000000000 --- a/sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md +++ /dev/null @@ -1,58 +0,0 @@ -Security Jobs Are Hot: Get Trained and Get Noticed -============================================================ - -![security skills](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/security-skills.png?itok=IrwppCUw "security skills") -The Open Source Jobs Report, from Dice and The Linux Foundation, found that professionals with security experience are in high demand for the future.[Used with permission][1] - -The demand for security professionals is real. On [Dice.com][4], 15 percent of the more than 75K jobs are security positions. “Every year in the U.S., 40,000 jobs for information security analysts go unfilled, and employers are struggling to fill 200,000 other cyber-security related roles, according to cyber security data tool [CyberSeek][5]” ([Forbes][6]). We know that there is a fast-increasing need for security specialists, but that the interest level is low. - -### Security is the place to be - -In my experience, few students coming out of college are interested in roles in security; so many people see security as niche. Entry-level tech pros are interested in business analyst or system analyst roles, because of a belief that if you want to learn and apply core IT concepts, you have to stick to analyst roles or those closer to product development. That’s simply not the case. - -In fact, if you’re interested in getting in front of your business leaders, security is the place to be – as a security professional, you have to understand the business end-to-end; you have to look at the big picture to give your company the advantage. - -### Be fearless - -Analyst and security roles are not all that different. Companies continue to merge engineering and security roles out of necessity. Businesses are moving faster than ever with infrastructure and code being deployed through automation, which increases the importance of security being a part of all tech pros day to day lives. In our [Open Source Jobs Report with The Linux Foundation][7], 42 percent of hiring managers said professionals with security experience are in high demand for the future. - -There has never been a more exciting time to be in security. If you stay up-to-date with tech news, you’ll see that a huge number of stories are related to security – data breaches, system failures and fraud. The security teams are working in ever-changing, fast-paced environments. 
A real challenge lies in the proactive side of security: finding and eliminating vulnerabilities while maintaining or even improving the end-user experience.
-
-### Growth is imminent
-
-Of any aspect of tech, security is the one that will continue to grow with the cloud. Businesses are moving more and more to the cloud, and that's exposing more security vulnerabilities than organizations are used to. As the cloud matures, security becomes increasingly important.
-
-Regulations are also growing – Personally Identifiable Information (PII) is getting broader all the time. Many companies are finding that they must invest in security to stay in compliance and avoid being in the headlines. Companies are beginning to budget more and more for security tooling and staffing due to the risk of heavy fines, reputational damage, and, to be honest, executive job security.
-
-### Training and support
-
-Even if you don't choose a security-specific role, you're bound to find yourself needing to code securely, and if you don't have the skills to do that, you'll start fighting an uphill battle. There are certainly ways to learn on the job if your company offers that option – that's encouraged – but I recommend a combination of training, mentorship, and constant practice. If you don't use your security skills, you'll lose them fast, given how quickly the complexity of malicious attacks evolves.
-
-My recommendation for those seeking security roles is to find the people in your organization that are the strongest in engineering, development, or architecture areas – interface with them and other teams, do hands-on work, and be sure to keep the big picture in mind. Be an asset to your organization that stands out – someone who can code securely and also consider strategy and overall infrastructure health.
-
-### The end game
-
-More and more companies are investing in security and trying to fill open roles in their tech teams. If you're interested in management, security is the place to be. Executive leadership wants to know that their company is playing by the rules, that their data is secure, and that they're safe from breaches and loss.
-
-Security that is implemented wisely and with strategy in mind will get noticed. Security is paramount for executives and consumers alike – I'd encourage anyone interested in security to train up and contribute.
-
- _[Download ][2]the full 2017 Open Source Jobs Report now._
-
--------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/os-jobs-report/2017/11/security-jobs-are-hot-get-trained-and-get-noticed
-
-作者:[ BEN COLLEN][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/bencollen
-[1]:https://www.linux.com/licenses/category/used-permission
-[2]:http://bit.ly/2017OSSjobsreport
-[3]:https://www.linux.com/files/images/security-skillspng
-[4]:http://www.dice.com/
-[5]:http://cyberseek.org/index.html#about
-[6]:https://www.forbes.com/sites/jeffkauflin/2017/03/16/the-fast-growing-job-with-a-huge-skills-gap-cyber-security/#292f0a675163
-[7]:http://media.dice.com/report/the-2017-open-source-jobs-report-employers-prioritize-hiring-open-source-professionals-with-latest-skills/
diff --git a/sources/tech/20171115 Why and How to Set an Open Source Strategy.md b/sources/tech/20171115 Why and How to Set an Open Source Strategy.md
deleted file mode 100644
index 79ec071b4d..0000000000
--- a/sources/tech/20171115 Why and How to Set an Open Source Strategy.md
+++ /dev/null
@@ -1,120 +0,0 @@
-Why and How to Set an Open Source Strategy
-============================================================
-
-![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/open-source-strategy-1024x576.jpg)
-
-This article explains how to walk through, measure, and define strategies collaboratively in an open source community.
-
- _“If you don’t know where you are going, you’ll end up someplace else.”_ — Yogi Berra
-
-Open source projects are generally started as a way to scratch one’s itch — and frankly that’s one of their greatest attributes. Getting code down provides a tangible method to express an idea, showcase a need, and solve a problem. It avoids overthinking and getting a project stuck in analysis-paralysis, letting the project pragmatically solve the problem at hand.
-
-Next, a project starts to scale up and gets many varied users and contributions, with plenty of opinions along the way. That leads to the next big challenge — how does a project start to build a strategic vision? In this article, I’ll describe how to walk through, measure, and define strategies collaboratively, in a community.
-
-Strategy may seem like a buzzword of the corporate world rather than something an open source community would embrace, so I suggest stripping away the negative actions that are sometimes associated with this word (e.g., staff reductions, discontinuations, office closures). Strategy done right isn’t a tool to justify unfortunate actions but to help show focus and where each community member can contribute.
-
-A good application of strategy answers the following questions:
-
-* Why does the project exist?
-
-* What does the project look to achieve?
-
-* What is the ideal end state for the project?
-
-The key to success is answering these questions as simply as possible, with consensus from your community. Let’s look at some ways to do this.
-
-### Setting a mission and vision
-
- _“Efforts and courage are not enough without purpose and direction.”_ — John F. Kennedy
-
-All strategic planning starts off with setting a course for where the project wants to go. The two tools used here are  _Mission_  and  _Vision_ . 
They are complementary terms, describing both the reason a project exists (mission) and the ideal end state for a project (vision).
-
-A great way to start this exercise with the intent of driving consensus is by asking each key community member the following questions:
-
-* What drove you to join and/or contribute to the project?
-
-* How do you define success for your participation?
-
-In a company, you’d usually ask your customers these questions. But in open source projects, the customers are the project participants — and their time investment is what makes the project a success.
-
-Driving consensus means capturing the answers to these questions and looking for themes across them. At R Consortium, for example, I created a shared doc for the board to review each member’s answers to the above questions, and followed up with a meeting to review for specific themes that came from those insights.
-
-Building a mission flows really well from this exercise. The key thing is to keep the wording of your mission short and concise. Open Mainframe Project has done this really well. Here’s their mission:
-
- _Build community and adoption of Open Source on the mainframe by:_
-
-* _Eliminating barriers to Open Source adoption on the mainframe_
-
-* _Demonstrating value of the mainframe on technical and business levels_
-
-* _Strengthening collaboration points and resources for the community to thrive_
-
-At 40 words, it passes the key eye tests of a good mission statement; it’s clear, concise, and demonstrates the useful value the project aims for.
-
-The next stage is to reflect on the mission statement and ask yourself this question: What is the ideal outcome if the project accomplishes its mission? That can be a tough one to tackle. Open Mainframe Project put together its vision really well:
-
- _Linux on the Mainframe as the standard for enterprise class systems and applications._
-
-You could read that as a [BHAG][1], but it’s really more of a vision, because it describes the future state that would be created by the mission being fully accomplished. It also hits the key pieces of an effective vision — it’s only 13 words, inspirational, clear, memorable, and concise.
-
-Mission and vision add clarity on the who, what, why, and how for your project. But how do you set a course for getting there?
-
-### Goals, Objectives, Actions, and Results
-
- _“I don’t focus on what I’m up against. I focus on my goals and I try to ignore the rest.”_  — Venus Williams
-
-Looking at a mission and vision can get overwhelming, so breaking them down into smaller chunks can help the project determine how to get started. This also helps prioritize actions, either by importance or by opportunity. Most importantly, this step gives you guidance on what things to focus on for a period of time, and which to put off.
-
-There are lots of methods of time-bound planning, but the method I think works the best for projects is what I’ve dubbed the GOAR method. It’s an acronym that stands for:
-
-* Goals define what the project is striving for and would likely align with and support the mission. Examples might be “Grow a diverse contributor base” or “Become the leading project for X.” Goals are aspirational and set direction.
-
-* Objectives show how you measure a goal’s completion, and should be clear and measurable. You might also have multiple objectives to measure the completion of a goal. 
For example, the goal “Grow a diverse contributor base” might have objectives such as “Have X total contributors monthly” and “Have contributors representing Y different organizations.”
-
-* Actions are what the project plans to do to complete an objective. This is where you get tactical about exactly what needs to be done. For example, the objective “Have contributors representing Y different organizations” would likely have actions such as reaching out to interested organizations using the project, having existing contributors mentor new contributors, and providing incentives for first-time contributors.
-
-* Results come along the way, showing both positive and negative progress from the actions.
-
-You can put these into a table like this:
-
-| Goals | Objectives | Actions | Results |
-|:--|:--|:--|:--|
-| Grow a diverse contributor base | Have X total contributors monthly | Existing contributors mentor new contributors; provide incentives for first-time contributors | |
-| | Have contributors representing Y different organizations | Reach out to interested organizations using the project | |
-
-
-In large organizations, monthly or quarterly goals and objectives often make sense; however, on open source projects, these time frames are unrealistic. Six- or even 12-month tracking allows the project leadership to focus on driving efforts at a high level by nurturing the community along.
-
-The end result is a rubric that provides clear vision on where the project is going. It also lets community members more easily find ways to contribute. For example, your project may include someone who knows a few organizations using the project — this person could help introduce those developers to the codebase and guide them through their first commit.
-
-### What happens if the project doesn’t hit the goals?
-
- _“I have not failed. I’ve just found 10,000 ways that won’t work.”_  — Thomas A. Edison
-
-Figuring out what is within the capability of an organization — whether Fortune 500 or a small open source project — is hard. And, sometimes the expectations or market conditions change along the way. Does that make the strategy planning process a failure? Absolutely not!
-
-Instead, you can use this experience as a way to better understand your project’s velocity, its impact, and its community, and perhaps as a way to prioritize what is important and what’s not. 
- --------------------------------------------------------------------------------- - -via: https://www.linuxfoundation.org/blog/set-open-source-strategy/ - -作者:[ John Mertic][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxfoundation.org/author/jmertic/ -[1]:https://en.wikipedia.org/wiki/Big_Hairy_Audacious_Goal -[2]:https://www.linuxfoundation.org/author/jmertic/ -[3]:https://www.linuxfoundation.org/category/blog/ -[4]:https://www.linuxfoundation.org/category/audience/c-level/ -[5]:https://www.linuxfoundation.org/category/audience/developer-influencers/ -[6]:https://www.linuxfoundation.org/category/audience/entrepreneurs/ -[7]:https://www.linuxfoundation.org/category/campaigns/membership/how-to/ -[8]:https://www.linuxfoundation.org/category/campaigns/events-campaigns/linux-foundation/ -[9]:https://www.linuxfoundation.org/category/audience/open-source-developers/ -[10]:https://www.linuxfoundation.org/category/audience/open-source-professionals/ -[11]:https://www.linuxfoundation.org/category/audience/open-source-users/ -[12]:https://www.linuxfoundation.org/category/blog/thought-leadership/ diff --git a/sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md b/sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md deleted file mode 100644 index c6c50d9b25..0000000000 --- a/sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md +++ /dev/null @@ -1,130 +0,0 @@ -### Unleash Your Creativity – Linux Programs for Drawing and Image Editing - - By: [chabowski][1] - -The following article is part of a series of articles that provide tips and tricks for Linux newbies – or Desktop users that are not yet experienced with regard to certain topics. This series intends to complement the special edition #30 “[Getting Started with Linux][2]” based on [openSUSE Leap][3], recently published by the [Linux Magazine,][4] with valuable additional information. - -![](https://www.suse.com/communities/blog/files/2017/11/DougDeMaio-450x450.jpeg) - -This article has been contributed by Douglas DeMaio, openSUSE PR Expert at SUSE. - -Both Mac OS or Window offer several popular programs for graphics editing, vector drawing and creating and manipulating Portable Document Format (PDF). The good news: users familiar with the Adobe Suite can transition with ease to free, open-source programs available on Linux. - -Programs like [GIMP][5], [InkScape][6] and [Okular][7] are cross platform programs that are available by default in Linux/GNU distributions and are persuasive alternatives to expensive Adobe programs like [Photoshop][8], [Illustrator][9] and [Acrobat][10]. - -These creativity programs on Linux distributions are just as powerful as those for macOS or Window. This article will explain some of the differences and how the programs can be used to make your transition to Linux comfortable. - -### Krita - -The KDE desktop environment comes with tons of cool applications. [Krita][11] is a professional open source painting program. It gives users the freedom to create any artistic image they desire. Krita features tools that are much more extensive than the tool sets of most proprietary programs you might be familiar with. From creating textures to comics, Krita is a must have application for Linux users. 
-
-![](https://www.suse.com/communities/blog/files/2017/11/krita-450x267.png)
-
-### GIMP
-
-GNU Image Manipulation Program (GIMP) is a cross-platform image editor. Users of Photoshop will find GIMP’s user interface similar to Photoshop’s. The drop-down menu offers colors, layers, filters and tools to help the user with editing graphics. Rulers are located both horizontally and vertically, and guides can be dragged across the screen to give exact measurements. The drop-down menu gives tool options for resizing or cropping photos; adjustments can be made to the color balance, color levels, brightness and contrast as well as hue and saturation.
-
-![](https://www.suse.com/communities/blog/files/2017/11/gimp-450x281.png)
-
-There are multiple filters in GIMP to enhance or distort your images. Filters for artistic expression and animation are available and are more powerful tool options than those found in some proprietary applications. Gradients can be applied through additional layers and the Text Tool offers many fonts, which can be altered in shape and size through the Perspective Tool.
-
-The cloning tool works exactly like those in other graphics editors, so manipulating images is simple and accurate given the selection of brush sizes to do the job.
-
-Perhaps one of the best options available with GIMP is that images can be saved in a variety of formats like .jpg, .png, .pdf, .eps and .svg. These options provide high-quality images in a small file.
-
-### InkScape
-
-Designing vector imagery with InkScape is simple and free. This cross-platform program allows for the creation of logos and illustrations that are highly scalable. Whether designing cartoons or creating images for branding, InkScape is a powerful application to get the job done. Like GIMP, InkScape lets you save files in various formats and allows for object manipulation like moving, rotating and skewing text and objects. Shape tools are available with InkScape so making stars, hexagons and other elements will meet the needs of your creative mind.
-
-![](https://www.suse.com/communities/blog/files/2017/11/inkscape-450x273.png)
-
-InkScape offers a comprehensive tool set, including a drawing tool, a pen tool and the freehand calligraphy tool that allows for object creation with your own personal style. The color selector gives you the choice of RGB, CMYK and RGBA – using specific colors for branding logos, icons and advertisement is definitely convincing.
-
-Shortcut commands are similar to what users experience in Adobe Illustrator. Making layers and grouping or ungrouping the design elements can turn a blank page into a full-fledged image that can be used for designing technical diagrams for presentations, importing images into a multimedia program or for creating web graphics and software design.
-
-Inkscape can import vector graphics from multiple other programs. It can even import bitmap images. Inkscape is one of those cross-platform, open-source programs that allow users to operate across different operating systems, no matter if they work with macOS, Windows or Linux.
-
-### Okular and LibreOffice
-
-LibreOffice, which is a free, open-source Office Suite, allows users to collaborate and interact with documents and important files on Linux, but also on macOS and Windows. You can also create PDF files via LibreOffice, and LibreOffice Draw lets you view (and edit) PDF files as images. 
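-
-As a quick aside, this PDF export does not even need the GUI; a minimal sketch, where the file name is just an example:
-
-```
-# Convert a document to PDF from the command line
-soffice --headless --convert-to pdf report.odt
-```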
-
-![](https://www.suse.com/communities/blog/files/2017/11/draw-450x273.png)
-
-However, handling of the Portable Document Format (PDF) is quite different on the three operating systems. macOS offers [Preview][12] by default; Windows has [Edge][13]. Of course, Adobe Reader can also be used on both macOS and Windows. With Linux, and especially the KDE desktop, [Okular][14] is the default program for viewing PDF files.
-
-![](https://www.suse.com/communities/blog/files/2017/11/okular-450x273.png)
-
-Okular supports many document types, like PDF, Postscript, [DjVu][15], [CHM][16], [XPS][17], [ePub][18] and others. Yet the universal document viewer also offers some powerful features that make interacting with a document different from other programs on macOS and Windows. Okular provides selection and search tools that make accessing the text in PDFs fluid. A magnification tool also allows a quick look at small text in a document.
-
-Okular also provides users with the option to configure it to use more memory if a document is so large that it freezes the operating system. This functionality is convenient for users accessing high-quality print documents, for example, for advertising.
-
-For those who want to change locked images and documents, it’s rather easy to do so with LibreOffice Draw. A hypothetical situation would be to take a locked IRS (or tax) form and change it to make the uneditable document editable. Imagine how much fun it could be to transform it to some humorous kind of tax form …
-
-And indeed, the sky’s the limit on how creative a user wants to be when using programs that are available on Linux distributions.
-
-Tags: [drawing][19], [Getting Started with Linux][20], [GIMP][21], [image editing][22], [Images][23], [InkScape][24], [KDE][25], [Krita][26], [Leap 42.3][27], [LibreOffice][28], [Linux Magazine][29], [Okular][30], [openSUSE][31], [PDF][32] Categories: [Desktop][33], [Expert Views][34], [LibreOffice][35], [openSUSE][36]
-
--------------------------------------------------------------------------------
-
-via: https://www.suse.com/communities/blog/unleash-creativity-linux-programs-drawing-image-editing/
-
-作者:[chabowski ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[1]:https://www.suse.com/communities/blog/author/chabowski/
-[2]:http://www.linux-magazine.com/Resources/Special-Editions/30-Getting-Started-with-Linux
-[3]:https://en.opensuse.org/Portal:42.3 
-[4]:http://www.linux-magazine.com/ -[5]:https://www.gimp.org/ -[6]:https://inkscape.org/en/ -[7]:https://okular.kde.org/ -[8]:http://www.adobe.com/products/photoshop.html -[9]:http://www.adobe.com/products/illustrator.html -[10]:https://acrobat.adobe.com/us/en/acrobat/acrobat-pro-cc.html -[11]:https://krita.org/en/ -[12]:https://en.wikipedia.org/wiki/Preview_(macOS) -[13]:https://en.wikipedia.org/wiki/Microsoft_Edge -[14]:https://okular.kde.org/ -[15]:http://djvu.org/ -[16]:https://fileinfo.com/extension/chm -[17]:https://fileinfo.com/extension/xps -[18]:http://idpf.org/epub -[19]:https://www.suse.com/communities/blog/tag/drawing/ -[20]:https://www.suse.com/communities/blog/tag/getting-started-with-linux/ -[21]:https://www.suse.com/communities/blog/tag/gimp/ -[22]:https://www.suse.com/communities/blog/tag/image-editing/ -[23]:https://www.suse.com/communities/blog/tag/images/ -[24]:https://www.suse.com/communities/blog/tag/inkscape/ -[25]:https://www.suse.com/communities/blog/tag/kde/ -[26]:https://www.suse.com/communities/blog/tag/krita/ -[27]:https://www.suse.com/communities/blog/tag/leap-42-3/ -[28]:https://www.suse.com/communities/blog/tag/libreoffice/ -[29]:https://www.suse.com/communities/blog/tag/linux-magazine/ -[30]:https://www.suse.com/communities/blog/tag/okular/ -[31]:https://www.suse.com/communities/blog/tag/opensuse/ -[32]:https://www.suse.com/communities/blog/tag/pdf/ -[33]:https://www.suse.com/communities/blog/category/desktop/ -[34]:https://www.suse.com/communities/blog/category/expert-views/ -[35]:https://www.suse.com/communities/blog/category/libreoffice/ -[36]:https://www.suse.com/communities/blog/category/opensuse/ diff --git a/sources/tech/20171120 Adopting Kubernetes step by step.md b/sources/tech/20171120 Adopting Kubernetes step by step.md deleted file mode 100644 index 05faf304c8..0000000000 --- a/sources/tech/20171120 Adopting Kubernetes step by step.md +++ /dev/null @@ -1,93 +0,0 @@ -Adopting Kubernetes step by step -============================================================ - -Why Docker and Kubernetes? - -Containers allow us to build, ship and run distributed applications. They remove the machine constraints from applications and lets us create a complex application in a deterministic fashion. - -Composing applications with containers allows us to make development, QA and production environments closer to each other (if you put the effort in to get there). By doing so, changes can be shipped faster and testing a full system can happen sooner. - -[Docker][1] — the containerization platform — provides this, making software  _independent_  of cloud providers. - -However, even with containers the amount of work needed for shipping your application through any cloud provider (or in a private cloud) is significant. An application usually needs auto scaling groups, persistent remote discs, auto discovery, etc. But each cloud provider has different mechanisms for doing this. If you want to support these features, you very quickly become cloud provider dependent. - -This is where [Kubernetes][2] comes in to play. It is an orchestration system for containers that allows you to manage, scale and deploy different pieces of your application — in a standardised way — with great tooling as part of it. It’s a portable abstraction that’s compatible with the main cloud providers (Google Cloud, Amazon Web Services and Microsoft Azure all have support for Kubernetes). 
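-
-To make that abstraction concrete, here is a minimal sketch of the standardised workflow it gives you; the image and resource names are hypothetical, but the same commands work whichever cloud (or local machine) hosts the cluster:
-
-```
-# Run the application on any Kubernetes cluster from a single image
-kubectl run my-app --image=example/my-app:1.0 --port=8080
-
-# Scale it without touching any provider-specific API
-kubectl scale deployment my-app --replicas=3
-
-# Expose it; the cloud provider supplies the load balancer
-kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080
-```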
-
-A way to visualise your application, containers and Kubernetes is to think about your application as a shark — stay with me — that exists in the ocean (in this example, the ocean is your machine). The ocean may have other precious things you don’t want your shark to interact with, like [clown fish][3]. So you move your shark (your application) into a sealed aquarium (a container). This is great but not very robust. Your aquarium can break, or maybe you want to build a tunnel to another aquarium where other fish live. Or maybe you want many copies of that aquarium in case one needs cleaning or maintenance… this is where Kubernetes clusters come into play.
-
-
-![](https://cdn-images-1.medium.com/max/1600/1*OVt8cnY1WWOqdLFycCgdFg.jpeg)
-Evolution to Kubernetes
-
-With Kubernetes supported by the main cloud providers, it is easier for you and your team to have environments from  _development_  to  _production_  that are almost identical to each other. This is because Kubernetes has no reliance on proprietary software, services or infrastructure.
-
-The fact that you can start your application in your machine with the same pieces as in production closes the gap between a development and a production environment. This makes developers more aware of how an application is structured together even though they might only be responsible for one piece of it. It also makes it easier for your application to be fully tested earlier in the pipeline.
-
-How do you work with Kubernetes?
-
-With more people adopting Kubernetes, new questions arise: how should I develop against a cluster-based environment? Suppose you have 3 environments — development, QA and production — how do I fit Kubernetes into them? Differences across these environments will still exist, either in terms of development cycle (e.g. time spent to see my code changes in the application I’m running) or in terms of data (e.g. I probably shouldn’t test with production data in my QA environment as it has sensitive information).
-
-So, should I always try to work inside a Kubernetes cluster, building images, recreating deployments and services while I code? Or maybe I should not try too hard to make my development environment be a Kubernetes cluster (or set of clusters) in development? Or maybe I should work in a hybrid way?
-
-
-![](https://cdn-images-1.medium.com/max/1600/1*MXokxD8Ktte4_vWvTas9uw.jpeg)
-Development with a local cluster
-
-If we carry on with our metaphor, the holes on the side represent a way to make changes to our app while keeping it in a development cluster. This is usually achieved via [volumes][4].
-
-A Kubernetes series
-
-The Kubernetes series repository is open source and available here:
-
-### [https://github.com/red-gate/ks][5]
-
-We’ve written this series as we experiment with different ways to build software. We’ve tried to constrain ourselves to use Kubernetes in all environments so that we can explore the impact these technologies will have on the development and management of data and the database.
-
-The series starts with the basic creation of a React application hooked up to Kubernetes, and evolves to encompass more of our development requirements. By the end we’ll have covered all of our application development needs  _and_  have understood how best to cater for the database lifecycle in this world of containers and clusters.
-
-Here are the first 5 episodes of this series:
-
-1. ks1: build a React app with Kubernetes
-
-2. ks2: make minikube detect React code changes
-
-3. 
ks3: add a python web server that hosts an API - -4. ks4: make minikube detect Python code changes - -5. ks5: create a test environment - -The second part of the series will add a database and try to work out the best way to evolve our application alongside it. - -By running Kubernetes in all environments, we’ve been forced to solve new problems as we try to keep the development cycle as fast as possible. The trade-off being that we are constantly exposed to Kubernetes and become more accustomed to it. By doing so, development teams become responsible for production environments, which is no longer difficult as all environments (development through production) are all managed in the same way. - -What’s next? - -We will continue this series by incorporating a database and experimenting to find the best way to have a seamless database lifecycle experience with Kubernetes. - - _This Kubernetes series is brought to you by Foundry, Redgate’s R&D division. We’re working on making it easier to manage data alongside containerised environments, so if you’re working with data and containerised environments, we’d like to hear from you — reach out directly to the development team at _ [_foundry@red-gate.com_][6] - -* * * - - _We’re hiring_ _. Are you interested in uncovering product opportunities, building _ [_future technology_][7] _ and taking a startup-like approach (without the risk)? Take a look at our _ [_Software Engineer — Future Technologies_][8] _ role and read more about what it’s like to work at Redgate in _ [_Cambridge, UK_][9] _._ - --------------------------------------------------------------------------------- - -via: https://medium.com/ingeniouslysimple/adopting-kubernetes-step-by-step-f93093c13dfe - -作者:[santiago arias][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://medium.com/@santiaago?source=post_header_lockup -[1]:https://www.docker.com/what-docker -[2]:https://kubernetes.io/ -[3]:https://www.google.co.uk/search?biw=723&bih=753&tbm=isch&sa=1&ei=p-YCWpbtN8atkwWc8ZyQAQ&q=nemo+fish&oq=nemo+fish&gs_l=psy-ab.3..0i67k1l2j0l2j0i67k1j0l5.5128.9271.0.9566.9.9.0.0.0.0.81.532.9.9.0....0...1.1.64.psy-ab..0.9.526...0i7i30k1j0i7i10i30k1j0i13k1j0i10k1.0.FbAf9xXxTEM -[4]:https://kubernetes.io/docs/concepts/storage/volumes/ -[5]:https://github.com/red-gate/ks -[6]:mailto:foundry@red-gate.com -[7]:https://www.red-gate.com/foundry/ -[8]:https://www.red-gate.com/our-company/careers/current-opportunities/software-engineer-future-technologies -[9]:https://www.red-gate.com/our-company/careers/living-in-cambridge diff --git a/sources/tech/20171120 Containers and Kubernetes Whats next.md b/sources/tech/20171120 Containers and Kubernetes Whats next.md new file mode 100644 index 0000000000..b73ccb21c2 --- /dev/null +++ b/sources/tech/20171120 Containers and Kubernetes Whats next.md @@ -0,0 +1,98 @@ +YunfengHe Translating +Containers and Kubernetes: What's next? +============================================================ + +### What's ahead for container orchestration and Kubernetes? Here's an expert peek + +![CIO_Big Data Decisions_2](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_Big%20Data%20Decisions_2.png?itok=Y5zMHxf8 "CIO_Big Data Decisions_2") + +If you want a basic idea of where containers are headed in the near future, follow the money. 
There’s a lot of it: 451 Research projects that the overall market for containers will hit roughly [$2.7 billion in 2020][4], a 3.5-fold increase from the $762 million spent on container-related technology in 2016. + +There’s an obvious fundamental factor behind such big numbers: Rapidly increasing containerization. The parallel trend: As container adoption grows, so will container  _orchestration_  adoption. + +As recent survey data from  [_The New Stack_][5]  indicates, container adoption is the most significant catalyst of orchestration adoption: 60 percent of respondents who’ve deployed containers broadly in production report they’re also using Kubernetes widely in production. Another 19 percent of respondents with broad container deployments in production were in the initial stages of broad Kubernetes adoption. Meanwhile, just 5 percent of those in the initial phases of deploying containers in production environments were using Kubernetes broadly – but 58 percent said they were preparing to do so. It’s a chicken-and-egg relationship. + + +Most experts agree that an orchestration tool is essential to the scalable [long-term management of containers][6] – and corresponding developments in the marketplace. “The next trends in container orchestration are all focused on broadening adoption,” says Alex Robinson, software engineer at [Cockroach Labs][7]. + +This is a quickly shifting landscape, one that is just starting to realize its future potential. So we checked in with Robinson and other practitioners to get their boots-on-the-ground perspective on what’s next in container orchestration – and for Kubernetes itself. + +### **Container orchestration shifts to mainstream** + +We’re at the precipice common to most major technology shifts, where we transition from the careful steps of early adoption to cliff-diving into commonplace use. That will create new demand for the plain-vanilla requirements that make mainstream adoption easier, especially in large enterprises. + +“The gold rush phase of early innovation has slowed down and given way to a much stronger focus on stability and usability,” Robinson says. “This means we'll see fewer major announcements of new orchestration systems, and more security options, management tools, and features that make it easier to take advantage of the flexibility already inherent in the major orchestration systems.” + +### **Reduced complexity** + +On a related front, expect an intensifying effort to cut back on the complexity that some organizations face when taking their first plunge into container orchestration. As we’ve covered before, deploying a container might be “easy,” but [managing containers long-term ][8]requires more care. + +“Today, container orchestration is too complex for many users to take full advantage,” says My Karlsson, developer at [Codemill AB][9]. “New users are often struggling just to get single or small-size container configurations running in isolation, especially when applications are not originally designed for it. There are plenty of opportunities to simplify the orchestration of non-trivial applications and make the technology more accessible.” + +### **Increasing focus on hybrid cloud and multi-cloud** + +As adoption of containers and container orchestration grows, more organizations will scale from a starting point of, say, running non-critical workloads in a single environment to more [complex use cases][10] across multiple environments. 
For many companies, that will mean managing containerized applications (and particularly containerized microservices) across [hybrid cloud][11] and [multi-cloud][12] environments, often globally.
+
+"Containers and Kubernetes have made hybrid cloud and application portability a reality,” says [Brian Gracely][13], director of [Red Hat][14] OpenShift product strategy. “Combined with the Open Service Broker, we expect to see an explosion of new applications that combine private and public cloud resources."
+
+“I believe that federation will get a push, enabling much-wanted features such as seamless multi-region and multi-cloud deployments,” says Carlos Sanchez, senior software engineer at [CloudBees][15].  
+
+**[ Want CIO wisdom on hybrid cloud and multi-cloud strategy? See our related resource, **[**Hybrid Cloud: The IT leader's guide**][16]**. ]**
+
+### **Continued consolidation of platforms and tools**
+
+Technology consolidation is a common trend; container orchestration is no exception.
+
+“As containerization goes mainstream, engineers are consolidating on a very small number of technologies to run their [microservices and] containers, and Kubernetes will become the dominant container orchestration platform, far outstripping other platforms,” says Ben Newton, analytics lead at [Sumo Logic][17]. “Companies will adopt Kubernetes to drive a cloud-neutral approach as Kubernetes provides a reasonably clear path to reduce dependence on [specific] cloud ecosystems.”
+
+### **Speaking of Kubernetes, what’s next?**
+
+"Kubernetes is here for the long haul, and the community driving it is doing a great job – but there's lots ahead,” says Gadi Naor, CTO and co-founder of [Alcide][18]. Our experts shared several predictions specific to [the increasingly popular Kubernetes platform][19]: 
+
+ **_Gadi Naor at Alcide:_**  “Operators will continue to evolve and mature, to a point where applications running on Kubernetes will become fully self-managed. Deploying and monitoring microservices on top of Kubernetes with [OpenTracing][20] and service mesh frameworks such as [istio][21] will help shape new possibilities.”
+
+ **_Brian Gracely at Red Hat:_**  “Kubernetes continues to expand in terms of the types of applications it can support. When you can run traditional applications, cloud-native applications, big data applications, and HPC or GPU-centric applications on the same platform, it unlocks a ton of architectural flexibility.”
+
+ **_Ben Newton at Sumo Logic:_**  “As Kubernetes becomes more dominant, I would expect to see more normalization of the operational mechanisms – particularly integrations into third-party management and monitoring platforms.”
+
+ **_Carlos Sanchez at CloudBees:_**  “In the immediate future there is the ability to run without Docker, using other runtimes...to remove any lock-in. [Editor’s note: [CRI-O][22], for example, offers this ability.] Also, [look for] storage improvements to support enterprise features like data snapshotting and online volume resizing.”
+
+ **_Alex Robinson at Cockroach Labs:_**  “One of the bigger developments happening in the Kubernetes community right now is the increased focus on managing [stateful applications][23]. 
Managing state in Kubernetes right now is very difficult if you aren't running in a cloud that offers remote persistent disks, but there's work being done on multiple fronts [both inside Kubernetes and by external vendors] to improve this.” + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next + +作者:[Kevin Casey ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/kevin-casey +[1]:https://enterprisersproject.com/article/2017/11/kubernetes-numbers-10-compelling-stats +[2]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity +[3]:https://enterprisersproject.com/article/2017/11/5-kubernetes-success-tips-start-smart?sc_cid=70160000000h0aXAAQ +[4]:https://451research.com/images/Marketing/press_releases/Application-container-market-will-reach-2-7bn-in-2020_final_graphic.pdf +[5]:https://thenewstack.io/ +[6]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul +[7]:https://www.cockroachlabs.com/ +[8]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul +[9]:https://codemill.se/ +[10]:https://www.redhat.com/en/challenges/integration?intcmp=701f2000000tjyaAAA +[11]:https://enterprisersproject.com/hybrid-cloud +[12]:https://enterprisersproject.com/article/2017/7/multi-cloud-vs-hybrid-cloud-whats-difference +[13]:https://enterprisersproject.com/user/brian-gracely +[14]:https://www.redhat.com/en +[15]:https://www.cloudbees.com/ +[16]:https://enterprisersproject.com/hybrid-cloud?sc_cid=70160000000h0aXAAQ +[17]:https://www.sumologic.com/ +[18]:http://alcide.io/ +[19]:https://enterprisersproject.com/article/2017/10/how-explain-kubernetes-plain-english +[20]:http://opentracing.io/ +[21]:https://istio.io/ +[22]:http://cri-o.io/ +[23]:https://opensource.com/article/17/2/stateful-applications +[24]:https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next?rate=PBQHhF4zPRHcq2KybE1bQgMkS2bzmNzcW2RXSVItmw8 +[25]:https://enterprisersproject.com/user/kevin-casey diff --git a/sources/tech/20171123 Why microservices are a security issue.md b/sources/tech/20171123 Why microservices are a security issue.md deleted file mode 100644 index d5868faa9e..0000000000 --- a/sources/tech/20171123 Why microservices are a security issue.md +++ /dev/null @@ -1,116 +0,0 @@ -Why microservices are a security issue -============================================================ - -### Maybe you don't want to decompose all your legacy applications into microservices, but you might consider starting with your security functions. - -![Why microservices are a security issue](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_security_cc.png?itok=3V07Lpko "Why microservices are a security issue") -Image by : Opensource.com - -I struggled with writing the title for this post, and I worry that it comes across as clickbait. If you've come to read this because it looked like clickbait, then sorry.[1][5]I hope you'll stay anyway: there are lots of fascinating[2][6] points and many[3][7]footnotes. 
What I  _didn't_  mean to suggest is that microservices cause [security][15]problems—though like any component, of course, they can—but that microservices are appropriate objects of interest to those involved with security. I'd go further than that: I think they are an excellent architectural construct for those concerned with security. - -And why is that? Well, for those of us with a [systems security][16] bent, the world is an interesting place at the moment. We're seeing a growth in distributed systems, as bandwidth is cheap and latency low. Add to this the ease of deploying to the cloud, and more architects are beginning to realise that they can break up applications, not just into multiple layers, but also into multiple components within the layer. Load balancers, of course, help with this when the various components in a layer are performing the same job, but the ability to expose different services as small components has led to a growth in the design, implementation, and deployment of  _microservices_ . - -More on Microservices - -* [How to explain microservices to your CEO][1] - -* [Free eBook: Microservices vs. service-oriented architecture][2] - -* [Secured DevOps for microservices][3] - -So, [what exactly is a microservice][23]? I quite like [Wikipedia's definition][24], though it's interesting that security isn't mentioned there.[4][17] One of the points that I like about microservices is that, when well-designed, they conform to the first two points of Peter H. Salus' description of the [Unix philosophy][25]: - -1. Write programs that do one thing and do it well. - -2. Write programs to work together. - -3. Write programs to handle text streams, because that is a universal interface. - -The last of the three is slightly less relevant, because the Unix philosophy is generally used to refer to standalone applications, which often have a command instantiation. It does, however, encapsulate one of the basic requirements of microservices: that they must have well-defined interfaces. - -By "well-defined," I don't just mean a description of any externally accessible APIs' methods, but also of the normal operation of the microservice: inputs and outputs—and, if there are any, side-effects. As I described in a previous post, "[5 traits of good systems architecture][18]," data and entity descriptions are crucial if you're going to be able to design a system. Here, in our description of microservices, we get to see why these are so important, because, for me, the key defining feature of a microservices architecture is decomposability. And if you're going to decompose[5][8] your architecture, you need to be very, very clear which "bits" (components) are going to do what. - -And here's where security starts to come in. A clear description of what a particular component should be doing allows you to: - -* Check your design - -* Ensure that your implementation meets the description - -* Come up with reusable unit tests to check functionality - -* Track mistakes in implementation and correct them - -* Test for unexpected outcomes - -* Monitor for misbehaviour - -* Audit actual behaviour for future scrutiny - -Now, are all these things possible in a larger architecture? Yes, they are. But they become increasingly difficult where entities are chained together or combined in more complex configurations. Ensuring  _correct_  implementation and behaviour is much, much easier when you've got smaller pieces to work together. 
And deriving complex systems behaviours—and misbehaviours—is much more difficult if you can’t be sure that the individual components are doing what they ought to be.
-
-It doesn’t stop here, however. As I’ve mentioned on many [previous occasions][19], writing good security code is difficult.[7][9] Proving that it does what it should do is even more difficult. There is every reason, therefore, to restrict code that has particular security requirements—password checking, encryption, cryptographic key management, authorisation, etc.—to small, well-defined blocks. You can then do all the things that I’ve mentioned above to try to make sure it’s done correctly.
-
-And yet there’s more. We all know that not everybody is great at writing security-related code. By decomposing your architecture such that all security-sensitive code is restricted to well-defined components, you get the chance to put your best security people on that and restrict the danger that J. Random Coder[8][10] will put something in that bypasses or downgrades a key security control.
-
-It can also act as an opportunity for learning: It’s always good to be able to point to a design/implementation/test/monitoring tuple and say: “That’s how it should be done. Hear, read, mark, learn, and inwardly digest.[9][11]”
-
-Should you go about decomposing all of your legacy applications into microservices? Probably not. But given all the benefits you can accrue, you might consider starting with your security functions.
-
-* * *
-
-1. Well, a little bit—it’s always nice to have readers.
-
-2. I know they are: I wrote them.
-
-3. Probably less fascinating.
-
-4. At the time this article was written. It’s entirely possible that I—or one of you—may edit the article to change that.
-
-5. This sounds like a gardening term, which is interesting. Not that I really like gardening, but still.[6][12]
-
-6. Amusingly, I first wrote, “…if you’re going to decompose your architect…,” which sounds like the strapline for an IT-themed murder film.
-
-7. Regular readers may remember a reference to the excellent film  _The Thick of It_ .
-
-8. Other generic personae exist; please take your pick.
-
-9. Not a cryptographic digest: I don’t think that’s what the original writers had in mind. 
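-
-As an aside on the decomposition argument above, the narrowness of such an interface is easy to see in practice. The sketch below assumes an entirely hypothetical password-checking service on an internal host and port; nothing here is a real API:
-
-```
-# The component's expected behaviour is described by one narrow endpoint
-curl -s -X POST https://auth.internal:8443/v1/check-password \
-     -d '{"user": "alice", "password": "correct horse"}'
-# => {"ok": true}
-
-# The same narrow interface makes reusable tests and monitoring probes trivial
-curl -s -X POST https://auth.internal:8443/v1/check-password \
-     -d '{"user": "alice", "password": "wrong"}'
-# => {"ok": false}
-```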
- - _This article originally appeared on [Alice, Eve, and Bob—a security blog][13] and is republished with permission._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/11/microservices-are-security-issue - -作者:[Mike Bursell ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/mikecamel -[1]:https://blog.openshift.com/microservices-how-to-explain-them-to-your-ceo/?intcmp=7016000000127cYAAQ&src=microservices_resource_menu1 -[2]:https://www.openshift.com/promotions/microservices.html?intcmp=7016000000127cYAAQ&src=microservices_resource_menu2 -[3]:https://opensource.com/business/16/11/secured-devops-microservices?src=microservices_resource_menu3 -[4]:https://opensource.com/article/17/11/microservices-are-security-issue?rate=GDH4xOWsgYsVnWbjEIoAcT_92b8gum8XmgR6U0T04oM -[5]:https://opensource.com/article/17/11/microservices-are-security-issue#1 -[6]:https://opensource.com/article/17/11/microservices-are-security-issue#2 -[7]:https://opensource.com/article/17/11/microservices-are-security-issue#3 -[8]:https://opensource.com/article/17/11/microservices-are-security-issue#5 -[9]:https://opensource.com/article/17/11/microservices-are-security-issue#7 -[10]:https://opensource.com/article/17/11/microservices-are-security-issue#8 -[11]:https://opensource.com/article/17/11/microservices-are-security-issue#9 -[12]:https://opensource.com/article/17/11/microservices-are-security-issue#6 -[13]:https://aliceevebob.com/2017/10/31/why-microservices-are-a-security-issue/ -[14]:https://opensource.com/user/105961/feed -[15]:https://opensource.com/tags/security -[16]:https://aliceevebob.com/2017/03/14/systems-security-why-it-matters/ -[17]:https://opensource.com/article/17/11/microservices-are-security-issue#4 -[18]:https://opensource.com/article/17/10/systems-architect -[19]:https://opensource.com/users/mikecamel -[20]:https://opensource.com/users/mikecamel -[21]:https://opensource.com/users/mikecamel -[22]:https://opensource.com/article/17/11/microservices-are-security-issue#comments -[23]:https://opensource.com/resources/what-are-microservices -[24]:https://en.wikipedia.org/wiki/Microservices -[25]:https://en.wikipedia.org/wiki/Unix_philosophy diff --git a/sources/tech/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md b/sources/tech/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md new file mode 100644 index 0000000000..27379cbe40 --- /dev/null +++ b/sources/tech/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md @@ -0,0 +1,70 @@ +translating by wangy325... + + +Open Source Cloud Skills and Certification Are Key for SysAdmins +============================================================ + + +![os jobs](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-house-sysadmin.jpg?itok=i5FHc3lu "os jobs") +Sysadmins with open source skills and certification can command higher pay, according to the 2017 Open Source Jobs Report.[Creative Commons Zero][1] + +System administrator is one of the most common positions employers are looking to fill among 53 percent of respondents to the [2017 Open Source Jobs Report][3]. Consequently, sysadmins with skills in engineering can command higher salaries, as these positions are among the hardest to fill, the report finds. 
+ +Sysadmins are generally responsible for installing, supporting, and maintaining servers or other computer systems, and planning for and responding to service outages and other problems. + +Overall, this year’s report finds the skills most in demand are open source cloud (47 percent), application development (44 percent), Big Data (43 percent) and both DevOps and security (42 percent). + +The report also finds that 58 percent of hiring managers are planning to hire more open source professionals, and 67 percent say hiring of open source professionals will increase more than in other areas of the business. This represents a two-point increase over last year among employers who said open source hiring would be their top field of recruitment. + +At the same time, 89 percent of hiring managers report it is difficult to find open source talent. + +### Why get certified + +The desire for sysadmins is incentivizing hiring managers to offer formal training and/or certifications in the discipline in 53 percent of organizations, compared to 47 percent last year, the Open Source Jobs Report finds. + +IT professionals interested in sysadmin positions should consider Linux certifications. Searches on several of the more well-known job posting sites reveal that the [CompTIA Linux+][4]certification is the top certification for entry-level Linux sysadmin, while [Red Hat Certified Engineer (RHCE)][5] and [Red Hat Certified System Administrator (RHCSA)][6] are the main certifications for higher-level positions. + +In 2016, a sysadmin commanded a salary of $79,583, a change of -0.8 percent from the previous year, according to Dice’s [2017 Tech Salary Survey][7]. The systems architect position paid $125,946, a year-over-year change of -4.7 percent. Yet, the survey observes that “Highly skilled technology professionals remain in the most demand, especially those candidates proficient in the technologies needed to support industry transformation and growth.” + +When it comes to open source skills, HBase (an open-source distributed database), ranked as one that garners among the highest pay for tech pros in the Dice survey. In the networking and database category, the OpenVMS operating system ranked as another high-paying skill. + +### The sysadmin role + +One of a sysadmin’s responsibilities is to be available 24/7 when a problem occurs. The position calls for a mindset that is about “zero-blame, lean, iterative improvement in process or technology,’’ and one that is open to change, writes Paul English, a board member for the League of Professional System Administrators, a non-profit professional association for the advancement of the practice of system administration, in  [opensource.com][8]. He adds that being a sysadmin means “it’s almost a foregone conclusion that you’ll work with open source software like Linux, BSD, and even open source Solaris.” + +Today’s sysadmins will more often work with software rather than hardware, and should be prepared to write small scripts, according to English. + +### Outlook for 2018 + +Expect to see sysadmins among the tech professionals many employers in North America will be hiring in 2018, according to [Robert Half’s 2018 Salary Guide for Technology Professionals][9]. Increasingly, soft skills and leadership qualities are also highly valued. 
+ +“Good listening and critical-thinking skills, which are essential to understanding and resolving customers’ issues and concerns, are important for almost any IT role today, but especially for help desk and desktop support professionals,’’ the report states. + +This jibes with some of the essential skills needed at various stages of the sysadmin position, including strong analytical skills and an ability to solve problems quickly, according to [The Linux Foundation][10]. + +Other skills sysadmins should have as they move up the ladder are: interest in structured approaches to system configuration management; experience in resolving security issues; experience with user identity management; ability to communicate in non-technical terms to non-technical people; and ability to modify system to meet new security requirements. + + _[Download ][11]the full 2017 Open Source Jobs Report now._ + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/open-source-cloud-skills-and-certification-are-key-sysadmins + +作者:[ ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[1]:https://www.linux.com/licenses/category/creative-commons-zero +[2]:https://www.linux.com/files/images/open-house-sysadminjpg +[3]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/ +[4]:https://certification.comptia.org/certifications/linux?tracking=getCertified/certifications/linux.aspx +[5]:https://www.redhat.com/en/services/certification/rhce +[6]:https://www.redhat.com/en/services/certification/rhcsa +[7]:http://marketing.dice.com/pdf/Dice_TechSalarySurvey_2017.pdf?aliId=105832232 +[8]:https://opensource.com/article/17/7/truth-about-sysadmins +[9]:https://www.roberthalf.com/salary-guide/technology +[10]:https://www.linux.com/learn/10-essential-skills-novice-junior-and-senior-sysadmins%20%20 +[11]:http://bit.ly/2017OSSjobsreport diff --git a/sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md b/sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md index d282ef5445..147a2266cc 100644 --- a/sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md +++ b/sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md @@ -1,6 +1,9 @@ +KeyLD Translating + Photon Could Be Your New Favorite Container OS ============================================================ + ![Photon OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon-linux.jpg?itok=jUFHPR_c "Photon OS") Jack Wallen says Photon OS is an outstanding platform, geared specifically for containers.[Creative Commons Zero][5]Pixabay @@ -106,9 +109,9 @@ Give Photon a try and see if it doesn’t make deploying Docker containers and/o -------------------------------------------------------------------------------- -via: 网址 +via: https://www.linux.com/learn/intro-to-linux/2017/11/photon-could-be-your-new-favorite-container-os -作者:[ JACK WALLEN][a] +作者:[JACK WALLEN ][a] 译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) diff --git a/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md b/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md deleted file mode 100644 index c09d66bc57..0000000000 --- a/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md +++ /dev/null @@ -1,76 +0,0 @@ -AWS to Help Build 
ONNX Open Source AI Platform -============================================================ -![onnx-open-source-ai-platform](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-onnx-1.jpg) - - -Amazon Web Services has become the latest tech firm to join the deep learning community's collaboration on the Open Neural Network Exchange, recently launched to advance artificial intelligence in a frictionless and interoperable environment. Facebook and Microsoft led the effort. - -As part of that collaboration, AWS made its open source Python package, ONNX-MxNet, available as a deep learning framework that offers application programming interfaces across multiple languages including Python, Scala and open source statistics software R. - -The ONNX format will help developers build and train models for other frameworks, including PyTorch, Microsoft Cognitive Toolkit or Caffe2, AWS Deep Learning Engineering Manager Hagay Lupesko and Software Developer Roshani Nagmote wrote in an online post last week. It will let developers import those models into MXNet, and run them for inference. - -### Help for Developers - -Facebook and Microsoft this summer launched ONNX to support a shared model of interoperability for the advancement of AI. Microsoft committed its Cognitive Toolkit, Caffe2 and PyTorch to support ONNX. - -Cognitive Toolkit and other frameworks make it easier for developers to construct and run computational graphs that represent neural networks, Microsoft said. - -Initial versions of [ONNX code and documentation][4] were made available on Github. - -AWS and Microsoft last month announced plans for Gluon, a new interface in Apache MXNet that allows developers to build and train deep learning models. - -Gluon "is an extension of their partnership where they are trying to compete with Google's Tensorflow," observed Aditya Kaul, research director at [Tractica][5]. - -"Google's omission from this is quite telling but also speaks to their dominance in the market," he told LinuxInsider. - -"Even Tensorflow is open source, and so open source is not the big catch here -- but the rest of the ecosystem teaming up to compete with Google is what this boils down to," Kaul said. - -The Apache MXNet community earlier this month introduced version 0.12 of MXNet, which extends Gluon functionality to allow for new, cutting-edge research, according to AWS. Among its new features are variational dropout, which allows developers to apply the dropout technique for mitigating overfitting to recurrent neural networks. - -Convolutional RNN, Long Short-Term Memory and gated recurrent unit cells allow datasets to be modeled using time-based sequence and spatial dimensions, AWS noted. - -### Framework-Neutral Method - -"This looks like a great way to deliver inference regardless of which framework generated a model," said Paul Teich, principal analyst at [Tirias Research][6]. - -"This is basically a framework-neutral way to deliver inference," he told LinuxInsider. - -Cloud providers like AWS, Microsoft and others are under pressure from customers to be able to train on one network while delivering on another, in order to advance AI, Teich pointed out. - -"I see this as kind of a baseline way for these vendors to check the interoperability box," he remarked. - -"Framework interoperability is a good thing, and this will only help developers in making sure that models that they build on MXNet or Caffe or CNTK are interoperable," Tractica's Kaul pointed out. 

As to how this interoperability might apply in the real world, Teich noted that technologies such as natural language translation or speech recognition would require that Alexa's voice recognition technology be packaged and delivered to another developer's embedded environment.

### Thanks, Open Source

"Despite their competitive differences, these companies all recognize they owe a significant amount of their success to the software development advancements generated by the open source movement," said Jeff Kaplan, managing director of [ThinkStrategies][7].

"The Open Neural Network Exchange is committed to producing similar benefits and innovations in AI," he told LinuxInsider.

A growing number of major technology companies have announced plans to use open source to speed the development of AI collaboration, in order to create more uniform platforms for development and research.

AT&T just a few weeks ago announced plans [to launch the Acumos Project][8] with TechMahindra and The Linux Foundation. The platform is designed to open up efforts for collaboration in telecommunications, media and technology.

--------------------------------------------------------------------------------

via: https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html

作者:[ David Jones ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#searchbyline
[1]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#
[2]:https://www.linuxinsider.com/perl/mailit/?id=84971
[3]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
[4]:https://github.com/onnx/onnx
[5]:https://www.tractica.com/
[6]:http://www.tiriasresearch.com/
[7]:http://www.thinkstrategies.com/
[8]:https://www.linuxinsider.com/story/84926.html
[9]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
diff --git a/sources/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md b/sources/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md
deleted file mode 100644
index dd61ad7a95..0000000000
--- a/sources/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md
+++ /dev/null
@@ -1,156 +0,0 @@
-translating by lujun9972
How To Tell If Your Linux Server Has Been Compromised
--------------

For the purposes of this guide, a compromised or hacked server is one that an unauthorized person or bot has logged into in order to use it for their own, usually negative, ends.

Disclaimer: If your server has been compromised by a state organization like the NSA or a serious criminal group then you will not notice any problems and the following techniques will not register their presence.

However, the majority of compromises are carried out by bots, i.e. automated attack programs, by inexperienced attackers, e.g. “script kiddies”, or by dumb criminals.

These sorts of attackers will abuse the server for all it’s worth whilst they have access to it and take few precautions to hide what they are doing.

### Symptoms of a compromised server

When a server has been compromised by an inexperienced or automated attacker, they will usually do something with it that consumes 100% of a resource. This resource will usually be either the CPU for something like cryptocurrency mining or email spamming, or bandwidth for launching a DoS attack.

This means that the first indication that something is amiss is that the server is “going slow”. This could manifest as the website serving pages much slower than usual, or email taking many minutes to deliver or send.

So what should you look for?

### Check 1 - Who’s currently logged in?

The first thing you should look for is who is currently logged into the server. It is not uncommon to find the attacker actually logged into the server and working on it.

The shell command to do this is w. Running w gives the following output:

```
 08:32:55 up 98 days, 5:43, 2 users, load average: 0.05, 0.03, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 113.174.161.1 08:26 0.00s 0.03s 0.02s ssh root@coopeaa12
root pts/1 78.31.109.1 08:26 0.00s 0.01s 0.00s w

```

One of those IPs is a UK IP and the second is Vietnamese. That’s probably not a good thing.

Stop and take a breath. Don’t panic and simply kill their SSH connection: unless you can stop them re-entering the server, they will do so quickly and quite likely kick you off and stop you getting back in.

Please see the “What should I do if I’ve been compromised?” section at the end of this guide on how to proceed if you do find evidence of compromise.

The whois command can be run on IP addresses and will tell you all the information about the organization that the IP is registered to, including the country.

### Check 2 - Who has logged in?

Linux servers keep a record of which users logged in, from what IP, when and for how long. This information is accessed with the last command.

The output looks like this:

```
root pts/1 78.31.109.1 Thu Nov 30 08:26 still logged in
root pts/0 113.174.161.1 Thu Nov 30 08:26 still logged in
root pts/1 78.31.109.1 Thu Nov 30 08:24 - 08:26 (00:01)
root pts/0 113.174.161.1 Wed Nov 29 12:34 - 12:52 (00:18)
root pts/0 14.176.196.1 Mon Nov 27 13:32 - 13:53 (00:21)

```

There is a mix of my UK IPs and some Vietnamese ones, with the top two still logged in. If you see any IPs that are not authorized then refer to the final section.

The login history is contained in a binary file at /var/log/wtmp and is easily removable. Often, attackers will simply delete this file to try to cover their tracks. Consequently, if you run last and only see your current login, this is a Bad Sign.

If there is no login history be very, very suspicious and continue looking for indications of compromise.

### Check 3 - Review the command history

This level of attacker frequently takes no precautions to clear the command history, so running the history command will show you everything they have done. Be on the lookout for wget or curl commands used to download out-of-repo software such as spam bots or crypto miners.

The command history is contained in the ~/.bash_history file, so some attackers will delete this file to cover what they have done. Just as with the login history, if you run history and don’t see anything then the history file has been deleted. Again, this is a Bad Sign and you should review the server very carefully.

### Check 4 - What’s using all the CPU?

The sorts of attackers that you will encounter usually don’t take too many precautions to hide what they are doing. So they will run processes that consume all the CPU. This generally makes it pretty easy to spot them. Simply run top and look at the highest process.

This will also show people exploiting your server without having logged in. This could be, for example, someone using an unprotected form-mail script to relay spam.

If you don’t recognize the top process then either Google its name or investigate what it’s doing with lsof or strace.

To use these tools, first copy the process’s PID from top and run:

```
strace -p PID

```

This will display all the system calls the process is making. It’s a lot of information but looking through it will give you a good idea what’s going on.

```
lsof -p PID

```

This program will list the open files that the process has. Again, this will give you a good idea what it’s doing by showing you what files it is accessing.

### Check 5 - Review all the system processes

If an unauthorized process is not consuming enough CPU to get listed noticeably on top, it will still get displayed in a full process listing with ps. My preferred command is ps auxf for providing the most information clearly.

You should be looking for any processes that you don’t recognize. The more times you run ps on your servers (which is a good habit to get into) the more obviously an alien process will stand out.

### Check 6 - Review network usage by process

The command iftop functions like top to show a ranked list of processes that are sending and receiving network data along with their source and destination. A process like a DoS attack or spam bot will immediately show itself at the top of the list.

### Check 7 - What processes are listening for network connections?

Often an attacker will install a program that doesn’t do anything except listen on a network port for instructions. This does not consume CPU or bandwidth whilst it is waiting, so it can get overlooked in top-style commands.

The commands lsof and netstat will both list all networked processes. I use them with the following options:

```
lsof -i

```

```
netstat -plunt

```

You should look for any process that is listed as being in the LISTEN or ESTABLISHED state, as these processes are either waiting for a connection (LISTEN) or have a connection open (ESTABLISHED). If you don’t recognize these processes use strace or lsof to try to see what they are doing.

### What should I do if I’ve been compromised?

The first thing to do is not to panic, especially if the attacker is currently logged in. You need to be able to take back control of the machine before the attacker is aware that you know about them. If they realize you know about them they may well lock you out of your server and start destroying any assets out of spite.

If you are not very technical then simply shut down the server, either from the server itself with shutdown -h now or systemctl poweroff, or by logging into your hosting provider’s control panel and shutting down the server. Once it’s powered off you can work on the needed firewall rules and consult with your provider in your own time.

If you’re feeling a bit more confident and your hosting provider has an upstream firewall then create and enable the following two rules in this order:

1. Allow SSH traffic from only your IP address.

2. Block everything else, not just SSH but every protocol on every port (see the example rules just below).
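
As a concrete illustration, here is a minimal sketch of those two rules expressed as iptables commands, which is also how you would apply them on the server itself if no upstream firewall is available. The address 203.0.113.10 is a placeholder for your own IP; substitute your real one, and keep a working session open while you confirm you can still get in:

```
# Rule 1: allow SSH traffic, but only from your own IP address
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT

# Rule 2: block everything else - every protocol, on every port
iptables -A INPUT -j DROP
```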

This will immediately kill their SSH session and give only you access to the server.

If you don’t have access to an upstream firewall then you will have to create and enable these firewall rules on the server itself and then, when they are in place, kill the attacker’s SSH session with the kill command.

A final method, where available, is to log into the server via an out-of-band connection such as the serial console and stop networking with systemctl stop network.service. This will completely stop any network access so you can now enable the firewall rules in your own time.

Once you have regained control of the server do not trust it.

Do not attempt to fix things up and continue using the server. You can never be sure what the attacker did and so you can never be sure the server is secure.

The only sensible course of action is to copy off all the data that you need and start again from a fresh install.

--------------------------------------------------------------------------------

via: https://bash-prompt.net/guides/server-hacked/

作者:[Elliot Cooper][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://bash-prompt.net
diff --git a/sources/tech/20171128 The politics of the Linux desktop.md b/sources/tech/20171128 The politics of the Linux desktop.md
deleted file mode 100644
index c9117dacfe..0000000000
--- a/sources/tech/20171128 The politics of the Linux desktop.md
+++ /dev/null
@@ -1,110 +0,0 @@
-The politics of the Linux desktop
============================================================

### If you're working in open source, why would you use anything but Linux as your main desktop?


![The politics of the Linux desktop](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_networks.png?itok=XasNXxKs "The politics of the Linux desktop")
Image by : opensource.com

At some point in 1997 or 1998—history does not record exactly when—I made the leap from Windows to the Linux desktop. I went through quite a few distributions, from Red Hat to SUSE to Slackware, then Debian, Debian Experimental, and (for a long time thereafter) Ubuntu. When I accepted a role at Red Hat, I moved to Fedora, and migrated both my kids (then 9 and 11) to Fedora as well.

More Linux resources

* [What is Linux?][1]

* [What are Linux containers?][2]

* [Download Now: Linux commands cheat sheet][3]

* [Advanced Linux commands cheat sheet][4]

* [Our latest Linux articles][5]

For a few years, I kept Windows as a dual-boot option, and then realised that, if I was going to commit to Linux, then I ought to go for it properly. In losing Windows, I didn't miss much; there were a few games that I couldn't play, but it was around the time that the Civilization franchise was embracing Linux, so that kept me happy.

The move to Linux wasn't plain sailing, by any stretch of the imagination. If you wanted to use fairly new hardware in the early days, you had to first ensure that there were  _any_  drivers for Linux, then learn how to compile and install them. If they were not quite my friends, **lsmod** and **modprobe** became at least close companions. I taught myself to compile a kernel and tweak the options to make use of (sometimes disastrous) new, "EXPERIMENTAL" features as they came out.
Early on, I learned the lesson that you should always keep at least one kernel in your [LILO][12] list that you were  _sure_  booted fully. I cursed NVidia and grew horrified by SCSI. I flirted with early journalling filesystem options and tried to work out whether the different preempt parameters made any noticeable difference to my user experience or not. I began to accept that printers would never print—and then they started to. I discovered that the Bluetooth stack suddenly started to connect to things.

Over the years, using Linux moved from being an uphill struggle to something that just worked. I moved my mother-in-law and then my father over to Linux so I could help administer their machines. And then I moved them off Linux so they could no longer ask me to help administer their machines.

It wasn't just at home, either: I decided that I would use Linux as my desktop for work, as well. I even made it a condition of employment for at least one role. Linux desktop support in the workplace caused different sets of problems. The first was the "well, you're on your own: we're not going to support you" email from IT support. VPNs were touch and go, but in the end, usually go.

The biggest hurdle was Microsoft Office, until I discovered [CrossOver][13], which I bought with my own money, and which allowed me to run company-issued copies of Word, PowerPoint, and the rest on my Linux desktop. Fonts were sometimes a problem, and one company I worked for required Microsoft Lync. For this, and for a few other applications, I would sometimes have to run a Windows virtual machine (VM) on my Linux desktop. Was this a cop out? Well, a little bit: but I've always tried to restrict my usage of this approach to the bare minimum.

### But why?

"Why?" colleagues would ask. "Why do you bother? Why not just run Windows?"

"Because I enjoy pain," was usually my initial answer, and then the more honest, "because of the principle of the thing."

So this is it: I believe in open source. We have a number of very, very good desktop-compatible distributions these days, and most of the time they just work. If you use well-known or supported hardware, they're likely to "just work" pretty much as well as the two obvious alternatives, Windows or Mac. And they just work because many people have put much time into using them, testing them, and improving them. So it's not a case of why wouldn't I use Windows or Mac, but why would I ever consider  _not_  using Linux? If, as I do, you believe in open source, and particularly if you work within the open source community or are employed by an open source organisation, I struggle to see why you would even consider not using Linux.

I've spoken to people about this (of course I have), and here are the most common reasons—or excuses—I've heard.

1. I'm more productive on Windows/Mac.

2. I can't use app X on Linux, and I need it for my job.

3. I can't game on Linux.

4. It's what our customers use, so why would we alienate them?

5. "Open" means choice, and I prefer a proprietary desktop, so I use that.

Interestingly, I don't hear "Linux isn't good enough" much anymore, because it's manifestly untrue, and I can show that my own experience—and that of many colleagues—belies that.

### Rebuttals

Let's go through those answers and rebut them.

1. **I'm more productive on Windows/Mac.** I'm sure you are. Anyone is more productive when they're using a platform or a system they're used to. If you believe in open source, then I contest that you should take the time to learn how to use a Linux desktop and the associated applications. If you're working for an open source organisation, they'll probably help you along, and you're unlikely to find you're much less productive in the long term. And, you know what? If you are less productive in the long term, then get in touch with the maintainers of the apps that are causing you to be less productive and help improve them. You don't have to be a coder. You could submit bug reports, suggest improvements, write documentation, or just test the most recent versions of the software. And then you're helping yourself and the rest of the community. Welcome to open source.

2. **I can't use app X on Linux, and I need it for my job.** This may be true. But it's probably less true than you think. The people most often saying this with conviction are audio, video, or graphics experts. It was certainly the case for many years that Linux lagged behind in those areas, but have a look and see what the other options are. And try them, even if they're not perfect, and see how you can improve them. Alternatively, use a VM for that particular app.

3. **I can't game on Linux.** Well, you probably can, but not all the games that you enjoy. This, to be clear, shouldn't really be an excuse not to use Linux for most of what you do. It might be a reason to keep a dual-boot system or to do what I did (after much soul-searching) and buy a games console (because Elite Dangerous really  _doesn't_  work on Linux, more's the pity). It should also be an excuse to lobby for your favourite games to be ported to Linux.

4. **It's what our customers use, so why would we alienate them?** I don't get this one. Does Microsoft ban visitors with Macs from their buildings? Does Apple ban Windows users? Does Google allow non-Android phones through their doors? You don't kowtow to the majority when you're the little guy or gal; if you're working in open source, surely you should be proud of that. You're not going to alienate your customer—you're really not.

5. **"Open" means choice, and I prefer a proprietary desktop, so I use that.** Being open certainly does mean you have a choice. You made that choice by working in open source. For many, including me, that's a moral and philosophical choice. Saying you embrace open source, but rejecting it in practice, seems mealy-mouthed, even insulting. Using openness to justify your choice is the wrong approach. Saying "I prefer a proprietary desktop, and company policy allows me to do so" is better. I don't agree with your decision, but at least you're not using the principle of openness to justify it.

Is using open source easy? Not always. But it's getting easier. I think that we should stand up for what we believe in, and if you're reading [Opensource.com][14], then you probably believe in open source. And that, I believe, means that you should run Linux as your main desktop.

 _Note: I welcome comments, and would love to hear different points of view. I would ask that comments don't just list application X or application Y as not working on Linux.
I concede that not all apps do. I'm more interested in justifications that I haven't covered above, or (perceived) flaws in my argument. Oh, and support for it, of course._ - - -### About the author - - [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/2017-05-10_0129.jpg?itok=Uh-eKFhx)][15] - - Mike Bursell - I've been in and around Open Source since around 1997, and have been running (GNU) Linux as my main desktop at home and work since then: [not always easy][7]...  I'm a security bod and architect, and am currently employed as Chief Security Architect for Red Hat.  I have a blog - "[Alice, Eve & Bob][8]" - where I write (sometimes rather parenthetically) about security.  I live in the UK and... [more about Mike Bursell][9][More about me][10] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/11/politics-linux-desktop - -作者:[Mike Bursell ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/mikecamel -[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[6]:https://opensource.com/article/17/11/politics-linux-desktop?rate=do69ixoNzK0yg3jzFk0bc6ZOBsIUcqTYv6FwqaVvzUA -[7]:https://opensource.com/article/17/11/politics-linux-desktop -[8]:https://aliceevebob.com/ -[9]:https://opensource.com/users/mikecamel -[10]:https://opensource.com/users/mikecamel -[11]:https://opensource.com/user/105961/feed -[12]:https://en.wikipedia.org/wiki/LILO_(boot_loader) -[13]:https://en.wikipedia.org/wiki/CrossOver_(software) -[14]:https://opensource.com/ -[15]:https://opensource.com/users/mikecamel -[16]:https://opensource.com/users/mikecamel -[17]:https://opensource.com/users/mikecamel -[18]:https://opensource.com/article/17/11/politics-linux-desktop#comments -[19]:https://opensource.com/tags/linux diff --git a/sources/tech/20171128 Why Python and Pygame are a great pair for beginning programmers.md b/sources/tech/20171128 Why Python and Pygame are a great pair for beginning programmers.md deleted file mode 100644 index 479bfb1232..0000000000 --- a/sources/tech/20171128 Why Python and Pygame are a great pair for beginning programmers.md +++ /dev/null @@ -1,142 +0,0 @@ -Why Python and Pygame are a great pair for beginning programmers -============================================================ - -### We look at three reasons Pygame is a good choice for learning to program. 

![What's the best game platform for beginning programmers?](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_development_programming.png?itok=M_QDcgz5 "What's the best game platform for beginning programmers?")
Image by :

opensource.com

Last month, [Scott Nesbitt][10] wrote about [Mozilla awarding $500K to support open source projects][11]. Phaser, an HTML/JavaScript game platform, was [awarded $50,000][12]. I’ve been teaching Phaser to my pre-teen daughter for a year, and it's one of the best and easiest HTML game development platforms to learn. [Pygame][13], however, may be a better choice for beginners. Here's why.

### 1\. One long block of code

Pygame is based on Python, the [most popular language for introductory computer courses][14]. Python is great for writing out ideas in one long block of code. Kids start off with a single file and a single block of code. Before they can get to functions or classes, they start with code that will soon resemble spaghetti. It’s like finger-painting, as they throw thoughts onto the page.

More Python Resources

* [What is Python?][1]

* [Top Python IDEs][2]

* [Top Python GUI frameworks][3]

* [Latest Python content][4]

* [More developer resources][5]

This approach to learning works. Kids will naturally start to break things into functions and classes as their code gets more difficult to manage. By learning the syntax of a language like Python prior to learning about functions, the student will gain basic programming knowledge before dealing with global and local scope.

Most HTML games separate the structure, style, and programming logic into HTML, CSS, and JavaScript to some degree and require knowledge of CSS and HTML. While the separation is better in the long term, it can be a barrier for beginners. Once kids realize that they can quickly build web pages with HTML and CSS, they may get distracted by the visual excitement of colors, fonts, and graphics. Even those who stay focused on JavaScript coding will still need to learn the basic document structure that the JavaScript code sits in.

### 2\. Global variables are more obvious

Both Python and JavaScript use dynamically typed variables, meaning that a variable becomes a string, an integer, or a float when it’s assigned; however, making mistakes is easier in JavaScript. As with typing, both JavaScript and Python have global and local variable scopes. In Python, global variables inside a function are identified with the global keyword.

Let’s look at the basic [Making your first Phaser game tutorial][15], by Alvin Ourrad and Richard Davey, to understand the challenge of using Phaser to teach programming to beginners. In JavaScript, global variables—variables that can be accessed anywhere in the program—are difficult to keep track of and often are the source of bugs that are challenging to solve. Richard and Alvin are expert programmers and use global variables intentionally to keep things concise.
- -``` -var game = new Phaser.Game(800, 600, Phaser.AUTO, '', { preload: preload, create: create, update: update }); - -function preload() { - -    game.load.image('sky', 'assets/sky.png'); - -} - -var player; -var platforms; - -function create() { -    game.physics.startSystem(Phaser.Physics.ARCADE); -… -``` - -In their Phaser programming book  [_Interphase_ ,][16] Richard Davey and Ilija Melentijevic explain that global variables are commonly used in many Phaser projects because they make it easier to get things done quickly. - -> “If you’ve ever worked on a game of any significant size then this approach is probably already making you cringe slightly... So why do we do it? The reason is simply because it’s the most concise and least complicated way to demonstrate what Phaser can do.” - -Although structuring a Phaser application to use local variables and split things up nicely into separation of concerns is possible, that’s tough for kids to understand when they’re first learning to program. - -If you’re set on teaching your kids to code with JavaScript, or if they already know how to code in another language like Python, a good Phaser course is [The Complete Mobile Game Development Course][17], by [Pablo Farias Navarro][18]. Although the title focuses on mobile games, the actual course focuses on JavaScript and Phaser. The JavaScript and Phaser apps are moved to a mobile phone with [PhoneGap][19]. - -### 3\. Pygame comes with less assembly required - -Thanks to [Python Wheels][20], Pygame is now super [easy to install][21]. You can also install it on Fedora/Red Hat with the **yum** package manager: - -``` -sudo yum install python3-pygame -``` - -See the official [Pygame installation documentation][22] for more information. - -Although Phaser itself is even easier to install, it does require more knowledge to use. As mentioned previously, the student will need to assemble their JavaScript code within an HTML document with some CSS. In addition to the three languages—HTML, CSS, and JavaScript—Phaser also requires the use of Firefox or Chrome development tools and an editor. The most common editors for JavaScript are Sublime, Atom, VS Code (probably in that order). - -Phaser applications will not run if you open the HTML file in a browser directly, due to [same-origin policy][23]. You must run a web server and access the files by connecting to the web server. Fortunately, you don’t need to run Apache on your local computer; you can run something lightweight like [httpster][24] for most projects. - -### Advantages of Phaser and JavaScript - -With all the challenges of JavaScript and Phaser, why am I teaching them? Honestly, I held off for a long time. I worried about students learning variable hoisting and scope. I developed my own curriculum based on Pygame and Python, then I developed one based on Phaser. Eventually, I decided to use Pablo’s pre-made curriculum as a starting point.  - -There are really two reasons that I moved to JavaScript. First, JavaScript has emerged as a serious language used in serious applications. In addition to web applications, it’s used for mobile and server applications. JavaScript is everywhere, and it’s used widely in applications kids see every day. If their friends code in JavaScript, they'll likely want to as well. As I saw the momentum behind JavaScript, I looked into alternatives that could compile into JavaScript, primarily Dart and TypeScript. I didn’t mind the extra conversion step, but I still looked at JavaScript. 
- -In the end, I chose to use Phaser and JavaScript because I realized that the problems could be solved with JavaScript and a bit of work. High-quality debugging tools and the work of some exceptionally smart people have made JavaScript a language that is both accessible and useful for teaching kids to code. - -### Final word: Python vs. JavaScript - -When people ask me what language to start their kids with, I immediately suggest Python and Pygame. There are tons of great curriculum options, many of which are free. I used ["Making Games with Python & Pygame"][25] by Al Sweigart with my son. I also used  _[Think Python: How to Think Like a Computer Scientist][7]_ by Allen B. Downey. You can get Pygame on your Android phone with [RAPT Pygame][26] by [Tom Rothamel][27]. - -Despite my recommendation, I always suspect that kids soon move to JavaScript. And that’s okay—JavaScript is a mature language with great tools. They’ll have fun with JavaScript and learn a lot. But after years of helping my daughter’s older brother create cool games in Python, I’ll always have an emotional attachment to Python and Pygame. - -### About the author - - [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/craig-head-crop.png?itok=LlMnIq8m)][28] - - Craig Oda - First elected president and co-founder of Tokyo Linux Users Group. Co-author of "Linux Japanese Environment" book published by O'Reilly Japan. Part of core team that established first ISP in Asia. Former VP of product management and product marketing for major Linux company. Partner at Oppkey, developer relations consulting firm in Silicon Valley.[More about me][8] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/11/pygame - -作者:[Craig Oda ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/codetricity -[1]:https://opensource.com/resources/python?intcmp=7016000000127cYAAQ -[2]:https://opensource.com/resources/python/ides?intcmp=7016000000127cYAAQ -[3]:https://opensource.com/resources/python/gui-frameworks?intcmp=7016000000127cYAAQ -[4]:https://opensource.com/tags/python?intcmp=7016000000127cYAAQ -[5]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ -[6]:https://opensource.com/article/17/11/pygame?rate=PV7Af00S0QwicZT2iv8xSjJrmJPdpfK1Kcm7LXxl_Xc -[7]:http://greenteapress.com/thinkpython/html/index.html -[8]:https://opensource.com/users/codetricity -[9]:https://opensource.com/user/46031/feed -[10]:https://opensource.com/users/scottnesbitt -[11]:https://opensource.com/article/17/10/news-october-14 -[12]:https://www.patreon.com/photonstorm/posts -[13]:https://www.pygame.org/news -[14]:https://cacm.acm.org/blogs/blog-cacm/176450-python-is-now-the-most-popular-introductory-teaching-language-at-top-u-s-universities/fulltext -[15]:http://phaser.io/tutorials/making-your-first-phaser-game -[16]:https://phaser.io/interphase -[17]:https://academy.zenva.com/product/the-complete-mobile-game-development-course-platinum-edition/ -[18]:https://gamedevacademy.org/author/fariazz/ -[19]:https://phonegap.com/ -[20]:https://pythonwheels.com/ -[21]:https://pypi.python.org/pypi/Pygame -[22]:http://www.pygame.org/wiki/GettingStarted#Pygame%20Installation -[23]:https://blog.chromium.org/2008/12/security-in-depth-local-web-pages.html -[24]:https://simbco.github.io/httpster/ 
-[25]:https://inventwithpython.com/makinggames.pdf -[26]:https://github.com/renpytom/rapt-pygame-example -[27]:https://github.com/renpytom -[28]:https://opensource.com/users/codetricity -[29]:https://opensource.com/users/codetricity -[30]:https://opensource.com/users/codetricity -[31]:https://opensource.com/article/17/11/pygame#comments -[32]:https://opensource.com/tags/python -[33]:https://opensource.com/tags/programming diff --git a/sources/tech/20171129 10 open source technology trends for 2018.md b/sources/tech/20171129 10 open source technology trends for 2018.md deleted file mode 100644 index eb21c62ec9..0000000000 --- a/sources/tech/20171129 10 open source technology trends for 2018.md +++ /dev/null @@ -1,143 +0,0 @@ -translating by wangy325... - - -10 open source technology trends for 2018 -============================================================ - -### What do you think will be the next open source tech trends? Here are 10 predictions. - -![10 open source technology trends for 2018](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fireworks-newyear-celebrate.png?itok=6gXaznov "10 open source technology trends for 2018") -Image by : [Mitch Bennett][10]. Modified by Opensource.com. [CC BY-SA 4.0][11] - -Technology is always evolving. New developments, such as OpenStack, Progressive Web Apps, Rust, R, the cognitive cloud, artificial intelligence (AI), the Internet of Things, and more are putting our usual paradigms on the back burner. Here is a rundown of the top open source trends expected to soar in popularity in 2018. - -### 1\. OpenStack gains increasing acceptance - -[OpenStack][12] is essentially a cloud operating system that offers admins the ability to provision and control huge compute, storage, and networking resources through an intuitive and user-friendly dashboard. - -Many enterprises are using the OpenStack platform to build and manage cloud computing systems. Its popularity rests on its flexible ecosystem, transparency, and speed. It supports mission-critical applications with ease and lower costs compared to alternatives. But, OpenStack's complex structure and its dependency on virtualization, servers, and extensive networking resources has inhibited its adoption by a wider range of enterprises. Using OpenStack also requires a well-oiled machinery of skilled staff and resources. - -The OpenStack Foundation is working overtime to fill the voids. Several innovations, either released or on the anvil, would resolve many of its underlying challenges. As complexities decrease, OpenStack will surge in acceptance. The fact that OpenStack is already backed by many big software development and hosting companies, in addition to thousands of individual members, makes it the future of cloud computing. - -### 2\. Progressive Web Apps become popular - -[Progressive Web Apps][13] (PWA), an aggregation of technologies, design concepts, and web APIs, offer an app-like experience in the mobile browser. - -Traditional websites suffer from many inherent shortcomings. Apps, although offering a more personal and focused engagement than websites, place a huge demand on resources, including needing to be downloaded upfront. PWA delivers the best of both worlds. It delivers an app-like experience to users while being accessible on browsers, indexable on search engines, and responsive to fit any form factor. 
Like an app, a PWA updates itself to always display the latest real-time information, and, like a website, it is delivered in an ultra-safe HTTPS model. It runs in a standard container and is accessible to anyone who types in the URL, without having to install anything. - -PWAs perfectly suit the needs of today's mobile users, who value convenience and personal engagement over everything else. That this technology is set to soar in popularity is a no-brainer. - -### 3\. Rust to rule the roost - -Most programming languages come with safety vs. control tradeoffs. [Rust][14] is an exception. The language co-opts extensive compile-time checking to offer 100% control without compromising safety. The last [Pwn2Own][15] competition threw up many serious vulnerabilities in Firefox on account of its underlying C++ language. If Firefox had been written in Rust, many of those errors would have manifested as compile-time bugs and resolved before the product rollout stage. - -Rust's unique approach of built-in unit testing has led developers to consider it a viable first-choice open source language. It offers an effective alternative to languages such as C and Python to write secure code without sacrificing expressiveness. Rust has bright days ahead in 2018. - -### 4\. R user community grows - -The [R][16] programming language, a GNU project, is associated with statistical computing and graphics. It offers a wide array of statistical and graphical techniques and is extensible to boot. It starts where [S][17] ends. With the S language already the vehicle of choice for research in statistical methodology, R offers a viable open source route for data manipulation, calculation, and graphical display. An added benefit is R's attention to detail and care for the finer nuances. - -Like Rust, R's fortunes are on the rise. - -### 5\. XaaS expands in scope - -XaaS, an acronym for "anything as a service," stands for the increasing number of services delivered over the internet, rather than on premises. Although software as a service (SaaS), infrastructure as a service (IaaS), and platform as a service (PaaS) are well-entrenched, new cloud-based models, such as network as a service (NaaS), storage as a service (SaaS or StaaS), monitoring as a service (MaaS), and communications as a service (CaaS), are soaring in popularity. A world where anything and everything is available "as a service" is not far away. - -The scope of XaaS now extends to bricks-and-mortar businesses, as well. Good examples are companies such as Uber and Lyft leveraging digital technology to offer transportation as a service and Airbnb offering accommodations as a service. - -High-speed networks and server virtualization that make powerful computing affordable have accelerated the popularity of XaaS, to the point that 2018 may become the "year of XaaS." The unmatched flexibility, agility, and scalability will propel the popularity of XaaS even further. - -### 6\. Containers gain even more acceptance - -Container technology is the approach of packaging pieces of code in a standardized way so they can be "plugged and run" quickly in any environment. Container technology allows enterprises to cut costs and implementation times. While the potential of containers to revolutionize IT infrastructure has been evident for a while, actual container use has remained complex. - -Container technology is still evolving, and the complexities associated with the technology decrease with every advancement. 
The latest developments make containers quite intuitive and as easy as using a smartphone, not to mention tuned for today's needs, where speed and agility can make or break a business. - -### 7\. Machine learning and artificial intelligence expand in scope - -[Machine learning and AI][18] give machines the ability to learn and improve from experience without a programmer explicitly coding the instruction. - -These technologies are already well entrenched, with several open source technologies leveraging them for cutting-edge services and applications. - -[Gartner predicts][19] the scope of machine learning and artificial intelligence will expand in 2018\. Several greenfield areas, such as data preparation, integration, algorithm selection, training methodology selection, and model creation are all set for big-time enhancements through the infusion of machine learning. - -New open source intelligent solutions are set to change the way people interact with systems and transform the very nature of work. - -* Conversational platforms, such as chatbots, make the question-and-command experience, where a user asks a question and the platform responds, the default medium of interacting with machines. - -* Autonomous vehicles and drones, fancy fads today, are expected to become commonplace by 2018. - -* The scope of immersive experience will expand beyond video games and apply to real-life scenarios such as design, training, and visualization processes. - -### 8\. Blockchain becomes mainstream - -Blockchain has come a long way from Bitcoin. The technology is already in widespread use in finance, secure voting, authenticating academic credentials, and more. In the coming year, healthcare, manufacturing, supply chain logistics, and government services are among the sectors most likely to embrace blockchain technology. - -Blockchain distributes digital information. The information resides on millions of nodes, in shared and reconciled databases. The fact that it's not controlled by any single authority and has no single point of failure makes it very robust, transparent, and incorruptible. It also solves the threat of a middleman manipulating the data. Such inherent strengths account for blockchain's soaring popularity and explain why it is likely to emerge as a mainstream technology in the immediate future. - -### 9\. Cognitive cloud moves to center stage - -Cognitive technologies, such as machine learning and artificial intelligence, are increasingly used to reduce complexity and personalize experiences across multiple sectors. One case in point is gamification apps in the financial sector, which offer investors critical investment insights and reduce the complexities of investment models. Digital trust platforms reduce the identity-verification process for financial institutions by about 80%, improving compliance and reducing chances of fraud. - -Such cognitive cloud technologies are now moving to the cloud, making it even more potent and powerful. IBM Watson is the most well-known example of the cognitive cloud in action. IBM's UIMA architecture was made open source and is maintained by the Apache Foundation. DARPA's DeepDive project mirrors Watson's machine learning abilities to enhance decision-making capabilities over time by learning from human interactions. OpenCog, another open source platform, allows developers and data scientists to develop artificial intelligence apps and programs. 
- -Considering the high stakes of delivering powerful and customized experiences, these cognitive cloud platforms are set to take center stage over the coming year. - -### 10\. The Internet of Things connects more things - -At its core, the Internet of Things (IoT) is the interconnection of devices through embedded sensors or other computing devices that enable the devices (the "things") to send and receive data. IoT is already predicted to be the next big major disruptor of the tech space, but IoT itself is in a continuous state of flux. - -One innovation likely to gain widespread acceptance within the IoT space is Autonomous Decentralized Peer-to-Peer Telemetry ([ADEPT][20]), which is propelled by IBM and Samsung. It uses a blockchain-type technology to deliver a decentralized network of IoT devices. Freedom from a central control system facilitates autonomous communications between "things" in order to manage software updates, resolve bugs, manage energy, and more. - -### Open source drives innovation - -Digital disruption is the norm in today's tech-centric era. Within the technology space, open source is now pervasive, and in 2018, it will be the driving force behind most of the technology innovations. - -Which open source trends and technologies would you add to this list? Let us know in the comments. - -### Topics - - [Business][25][Yearbook][26][2017 Open Source Yearbook][27] - -### About the author - - [![Sreejith@Fingent](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/sreejith.jpg?itok=sdYNV49V)][21] Sreejith - I have been programming since 2000, and professionally since 2007\. I currently lead the Open Source team at [Fingent][6] as we work on different technology stacks, ranging from the "boring"(read tried and trusted) to the bleeding edge. I like building, tinkering with and breaking things, not necessarily in that order. 
Hit me up at: [https://www.linkedin.com/in/futuregeek/][7][More about me][8] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/11/10-open-source-technology-trends-2018 - -作者:[Sreejith ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/sreejith -[1]:https://opensource.com/resources/what-is-openstack?intcmp=7016000000127cYAAQ -[2]:https://opensource.com/resources/openstack/tutorials?intcmp=7016000000127cYAAQ -[3]:https://opensource.com/tags/openstack?intcmp=7016000000127cYAAQ -[4]:https://www.rdoproject.org/?intcmp=7016000000127cYAAQ -[5]:https://opensource.com/article/17/11/10-open-source-technology-trends-2018?rate=GJqOXhiWvZh0zZ6WVTUzJ2TDJBpVpFhngfuX9V-dz4I -[6]:https://www.fingent.com/ -[7]:https://www.linkedin.com/in/futuregeek/ -[8]:https://opensource.com/users/sreejith -[9]:https://opensource.com/user/185026/feed -[10]:https://www.flickr.com/photos/mitchell3417/9206373620 -[11]:https://creativecommons.org/licenses/by-sa/4.0/ -[12]:https://www.openstack.org/ -[13]:https://developers.google.com/web/progressive-web-apps/ -[14]:https://www.rust-lang.org/ -[15]:https://en.wikipedia.org/wiki/Pwn2Own -[16]:https://en.wikipedia.org/wiki/R_(programming_language) -[17]:https://en.wikipedia.org/wiki/S_(programming_language) -[18]:https://opensource.com/tags/artificial-intelligence -[19]:https://sdtimes.com/gartners-top-10-technology-trends-2018/ -[20]:https://insights.samsung.com/2016/03/17/block-chain-mobile-and-the-internet-of-things/ -[21]:https://opensource.com/users/sreejith -[22]:https://opensource.com/users/sreejith -[23]:https://opensource.com/users/sreejith -[24]:https://opensource.com/article/17/11/10-open-source-technology-trends-2018#comments -[25]:https://opensource.com/tags/business -[26]:https://opensource.com/tags/yearbook -[27]:https://opensource.com/yearbook/2017 diff --git a/sources/tech/20171129 5 best practices for getting started with DevOps.md b/sources/tech/20171129 5 best practices for getting started with DevOps.md deleted file mode 100644 index 962f37aaf4..0000000000 --- a/sources/tech/20171129 5 best practices for getting started with DevOps.md +++ /dev/null @@ -1,94 +0,0 @@ -5 best practices for getting started with DevOps -============================================================ - -### Are you ready to implement DevOps, but don't know where to begin? Try these five best practices. - - -![5 best practices for getting started with DevOps](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops-gears.png?itok=rUejbLQX "5 best practices for getting started with DevOps") -Image by :  - -[Andrew Magill][8]. Modified by Opensource.com. [CC BY 4.0][9] - -DevOps often stymies early adopters with its ambiguity, not to mention its depth and breadth. By the time someone buys into the idea of DevOps, their first questions usually are: "How do I get started?" and "How do I measure success?" These five best practices are a great road map to starting your DevOps journey. - -### 1\. Measure all the things - -You don't know for sure that your efforts are even making things better unless you can quantify the outcomes. Are my features getting out to customers more rapidly? Are fewer defects escaping to them? Are we responding to and recovering more quickly from failure? 

Before you change anything, think about what kinds of outcomes you expect from your DevOps transformation. When you're further into your DevOps journey, you'll enjoy a rich array of near-real-time reports on everything about your service. But consider starting with these two metrics:

* **Time to market** measures the end-to-end, often customer-facing, business experience. It usually begins when a feature is formally conceived and ends when the customer can consume the feature in production. Time to market is not mainly an engineering team metric; more importantly, it shows your business' complete end-to-end efficiency in bringing valuable new features to market and isolates opportunities for system-wide improvement.

* **Cycle time** measures the engineering team process. Once work on a new feature starts, when does it become available in production? This metric is very useful for understanding the efficiency of the engineering team and isolating opportunities for team-level improvement.

### 2\. Get your process off the ground

DevOps success requires an organization to put a regular (and hopefully effective) process in place and relentlessly improve upon it. It doesn't have to start out being effective, but it must be a regular process. Usually it's some flavor of agile methodology like Scrum or Scrumban; sometimes it's a Lean derivative. Whichever way you go, pick a formal process, start using it, and get the basics right.

Regular inspect-and-adapt behaviors are key to your DevOps success. Make good use of opportunities like the stakeholder demo, team retrospectives, and daily standups to find opportunities to improve your process.

A lot of your DevOps success hinges on people working effectively together. People on a team need to work from a common process that they are empowered to improve upon. They also need regular opportunities to share what they are learning with other stakeholders, both upstream and downstream, in the process.

Good process discipline will help your organization consume the other benefits of DevOps at the great speed that comes as your success builds.

Although it's common for more development-oriented teams to successfully adopt processes like Scrum, operations-focused teams (or others that are more interrupt-driven) may opt for a process with a more near-term commitment horizon, such as Kanban.

### 3\. Visualize your end-to-end workflow

There is tremendous power in being able to see who's working on what part of your service at any given time. Visualizing your workflow will help people know what they need to work on next, how much work is in progress, and where the bottlenecks are in the process.

You can't effectively limit work in process until you can see it and quantify it. Likewise, you can't effectively eliminate bottlenecks until you can clearly see them.

Visualizing the entire workflow will help people in all parts of the organization understand how their work contributes to the success of the whole. It can catalyze relationship-building across organizational boundaries to help your teams collaborate more effectively towards a shared sense of success.

### 4\. Continuous all the things

DevOps promises a dizzying array of compelling automation. But Rome wasn't built in a day. One of the first areas you can focus your efforts on is [continuous integration][10] (CI). But don't stop there; you'll want to follow quickly with [continuous delivery][11] (CD) and eventually continuous deployment.
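
At its core, a CI stage is just an automated gate that fails loudly when any step fails. Here is a minimal sketch, assuming a hypothetical project whose test suite and build are wired up behind make targets (substitute your own commands):

```
#!/bin/sh
# Minimal CI gate, run on every commit
set -e        # abort at the first command that exits non-zero

make test     # run the automated test suite
make build    # produce the build artifact

echo "All checks passed; the artifact can move on to the CD pipeline."
```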

Your CD pipeline is your opportunity to inject all manner of automated quality testing into your process. The moment new code is committed, your CD pipeline should run a battery of tests against the code and the successfully built artifact. The artifact that comes out at the end of this gauntlet is what progresses along your process until eventually it's seen by customers in production.

Another "continuous" that doesn't get enough attention is continuous improvement. That's as simple as setting some time aside each day to ask your colleagues: "What small thing can we do today to get better at how we do our work?" These small, daily changes compound over time into more profound results. You'll be pleasantly surprised! But it also gets people thinking all the time about how to improve things.

### 5\. Gherkinize

Fostering more effective communication across your organization is crucial to fostering the sort of systems thinking prevalent in successful DevOps journeys. One way to help that along is to use a shared language between the business and the engineers to express the desired acceptance criteria for new features. A good product manager can learn [Gherkin][12] in a day and begin using it to express acceptance criteria in an unambiguous, structured form of plain English. Engineers can use these Gherkinized acceptance criteria to write acceptance tests against the criteria, and then develop their feature code until the tests pass. This is a simplification of [acceptance test-driven development][13] (ATDD) that can also help kick-start your DevOps culture and engineering practice.

### Start on your journey

Don't be discouraged by getting started with your DevOps practice. It's a journey. And hopefully these five ideas give you solid ways to get started.


### About the author

 [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot_4.jpg?itok=jntfDCfX)][14]

 Magnus Hedemark - Magnus has been in the IT industry for over 20 years, and a technology enthusiast for most of his life. He's presently Manager of DevOps Engineering at UnitedHealth Group. In his spare time, Magnus enjoys photography and paddling canoes.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/11/5-keys-get-started-devops

作者:[Magnus Hedemark ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/magnus919
[1]:https://opensource.com/tags/devops?src=devops_resource_menu1
[2]:https://opensource.com/resources/devops?src=devops_resource_menu2
[3]:https://www.openshift.com/promotions/devops-with-openshift.html?intcmp=7016000000127cYAAQ&src=devops_resource_menu3
[4]:https://enterprisersproject.com/article/2017/5/9-key-phrases-devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu4
[5]:https://www.redhat.com/en/insights/devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu5
[6]:https://opensource.com/article/17/11/5-keys-get-started-devops?rate=oEOzMXx1ghbkfl2a5ae6AnvO88iZ3wzkk53K2CzbDWI
[7]:https://opensource.com/user/25739/feed
[8]:https://ccsearch.creativecommons.org/image/detail/7qRx_yrcN5isTMS0u9iKMA==
[9]:https://creativecommons.org/licenses/by-sa/4.0/
[10]:https://martinfowler.com/articles/continuousIntegration.html
[11]:https://martinfowler.com/bliki/ContinuousDelivery.html
[12]:https://cucumber.io/docs/reference
[13]:https://en.wikipedia.org/wiki/Acceptance_test%E2%80%93driven_development
[14]:https://opensource.com/users/magnus919
[15]:https://opensource.com/users/magnus919
[16]:https://opensource.com/users/magnus919
[17]:https://opensource.com/tags/devops
diff --git a/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md b/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md
deleted file mode 100644
index d3ba75da14..0000000000
--- a/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md
+++ /dev/null
@@ -1,185 +0,0 @@
-Translating by filefi


How to Install and Use Wireshark on Debian 9 / Ubuntu 16.04 / 17.10
============================================================

by [Pradeep Kumar][1] · Published November 29, 2017 · Updated November 29, 2017

 [![wireshark-Debian-9-Ubuntu 16.04 -17.10](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)][2]

Wireshark is a free and open source, cross-platform, GUI-based network packet analyzer that is available for Linux, Windows, macOS, Solaris, etc. It captures network packets in real time and presents them in a human-readable format. Wireshark allows us to monitor network packets down to a microscopic level. Wireshark also has a command-line utility called ‘tshark‘ that performs the same functions as Wireshark but through the terminal rather than the GUI.

Wireshark can be used for network troubleshooting and analysis, software and communication protocol development, and also for educational purposes. Wireshark uses a library called ‘pcap‘ for capturing the network packets.
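
Since tshark shares Wireshark's capture engine, it offers a quick way to sanity-check a capture from the terminal before moving to the GUI. A small sketch (the interface name enp0s3 is only an example; substitute one from your own system):

```
# Capture 10 packets from the enp0s3 interface and print a one-line summary of each
sudo tshark -i enp0s3 -c 10

# Capture 100 packets to a file for later analysis in the Wireshark GUI
sudo tshark -i enp0s3 -c 100 -w /tmp/sample.pcap
```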
-
-Wireshark comes with a lot of features; some of those features are:
-
-* Support for hundreds of protocols for inspection,
-
-* Ability to capture packets in real time and save them for later offline analysis,
-
-* A number of filters for analyzing data,
-
-* Captured data can be compressed and uncompressed on the fly,
-
-* Support for various file formats for data analysis; output can also be saved to XML, CSV, and plain-text formats,
-
-* Data can be captured from a number of interfaces such as Ethernet, Wi-Fi, Bluetooth, USB, Frame Relay, Token Ring, etc.
-
-In this article, we will discuss how to install Wireshark on Ubuntu/Debian machines and will also learn to use Wireshark for capturing network packets.
-
-#### Installation of Wireshark on Ubuntu 16.04 / 17.10
-
-Wireshark is available in the default Ubuntu repositories and can be installed simply using the following command. However, you might not get the latest version of Wireshark this way.
-
-```
-linuxtechi@nixworld:~$ sudo apt-get update
-linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
-```
-
-So, to install the latest version of Wireshark, we have to enable or configure the official Wireshark repository.
-
-Use the commands below, one after another, to configure the repository and install the latest version of the Wireshark utility:
-
-```
-linuxtechi@nixworld:~$ sudo add-apt-repository ppa:wireshark-dev/stable
-linuxtechi@nixworld:~$ sudo apt-get update
-linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
-```
-
-Once Wireshark is installed, execute the command below so that non-root users can capture live packets on the interfaces:
-
-```
-linuxtechi@nixworld:~$ sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap
-```
-
-#### Installation of Wireshark on Debian 9
-
-The Wireshark package and its dependencies are already present in the default Debian 9 repositories, so to install the latest stable version of Wireshark on Debian 9, use the following command:
-
-```
-linuxtechi@nixhome:~$ sudo apt-get update
-linuxtechi@nixhome:~$ sudo apt-get install wireshark -y
-```
-
-During the installation, it will prompt us to configure dumpcap for non-superusers.
-
-Select 'yes' and then hit Enter.
-
- [![Configure-Wireshark-Debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9-1024x542.jpg)][3]
-
-Once the installation is complete, execute the command below so that non-root users can also capture live packets on the interfaces:
-
-```
-linuxtechi@nixhome:~$ sudo chmod +x /usr/bin/dumpcap
-```
-
-We can also use the latest source package to install Wireshark on Ubuntu, Debian, and many other Linux distributions.
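-
-Whichever of the two package-based routes you took, it is worth sanity-checking the result before moving on. A short sketch (getcap comes with the libcap tools, typically packaged as libcap2-bin on both distributions):
-
-```
-# confirm the installed version
-wireshark --version
-
-# confirm dumpcap carries the capture capabilities granted above
-getcap /usr/bin/dumpcap
-```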
-
-#### Installing Wireshark using source code on Debian / Ubuntu Systems
-
-First, download the latest source package (2.4.2 at the time of writing this article) using the following command:
-
-```
-linuxtechi@nixhome:~$ wget https://1.as.dl.wireshark.org/src/wireshark-2.4.2.tar.xz
-```
-
-Next, extract the package and enter the extracted directory:
-
-```
-linuxtechi@nixhome:~$ tar -xf wireshark-2.4.2.tar.xz -C /tmp
-linuxtechi@nixhome:~$ cd /tmp/wireshark-2.4.2
-```
-
-Now we will compile the code with the following commands:
-
-```
-linuxtechi@nixhome:/tmp/wireshark-2.4.2$ ./configure --enable-setcap-install
-linuxtechi@nixhome:/tmp/wireshark-2.4.2$ make
-```
-
-Lastly, install the compiled package onto the system:
-
-```
-linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install
-linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig
-```
-
-Upon installation, a separate group for Wireshark is also created. We will now add our user to that group so that it can work with Wireshark; otherwise, you might get a 'permission denied' error when starting it.
-
-To add the user to the wireshark group, execute the following command:
-
-```
-linuxtechi@nixhome:~$ sudo usermod -a -G wireshark linuxtechi
-```
-
-Now we can start Wireshark either from the GUI menu or from the terminal with this command:
-
-```
-linuxtechi@nixhome:~$ wireshark
-```
-
-#### Access Wireshark on Debian 9 System
-
- [![Access-wireshark-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9-1024x664.jpg)][4]
-
-Click on the Wireshark icon.
-
- [![Wireshark-window-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9-1024x664.jpg)][5]
-
-#### Access Wireshark on Ubuntu 16.04 / 17.10
-
- [![Access-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu-1024x664.jpg)][6]
-
-Click on the Wireshark icon.
-
- [![Wireshark-window-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu-1024x664.jpg)][7]
-
-#### Capturing and Analyzing Packets
-
-Once Wireshark has started, we should be presented with the Wireshark window; examples are shown above for the Ubuntu and Debian systems.
-
- [![wireshark-Linux-system](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)][8]
-
-These are all the interfaces from which we can capture network packets. Based on the interfaces you have on your system, this screen might look different for you.
-
-We are selecting 'enp0s3' to capture the network traffic on that interface. After selecting the interface, network packets for all the devices on our network start to populate (refer to the screenshot below):
-
- [![Capturing-Packet-from-enp0s3-Ubuntu-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark-1024x727.jpg)][9]
-
-The first time we see this screen, we might be overwhelmed by the data presented and wonder how to sort it all out. But worry not: one of the best features of Wireshark is its filters.
-
-We can sort/filter the data based on IP address, port number, source and destination filters, packet size, etc., and can also combine two or more filters to create more comprehensive searches. We can either write our filters in the 'Apply a Display Filter' tab or select one of the already-created rules.
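-
-A few example display-filter expressions may help here. These are standard Wireshark filter strings (the address and port below are made-up values); you can type them into the 'Apply a Display Filter' tab, and the same strings work with tshark on the command line:
-
-```
-# show only traffic to or from one host on one port
-sudo tshark -i enp0s3 -Y 'ip.addr == 192.168.1.10 && tcp.port == 443'
-
-# show only DNS traffic
-sudo tshark -i enp0s3 -Y 'dns'
-```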
-To select a pre-built filter, click on the 'flag' icon next to the 'Apply a Display Filter' tab.
-
- [![Filter-in-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu-1024x727.jpg)][10]
-
-We can also filter data based on the color coding. By default, light purple is TCP traffic, light blue is UDP traffic, and black identifies packets with errors. To see what these codes mean, click View -> Coloring Rules; we can also change these codes.
-
- [![Packet-Colouring-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark-1024x682.jpg)][11]
-
-After we have the results that we need, we can click on any of the captured packets to get more details about that packet; this will show all the data about that network packet.
-
-Wireshark is an extremely powerful tool that takes some time to get used to and master; this tutorial will help you get started. Please feel free to drop your queries or suggestions in the comment box below.
-
---------------------------------------------------------------------------------
-
-via: https://www.linuxtechi.com
-
-作者:[Pradeep Kumar][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linuxtechi.com/author/pradeep/
-[1]:https://www.linuxtechi.com/author/pradeep/
-[2]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg
-[3]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9.jpg
-[4]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9.jpg
-[5]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9.jpg
-[6]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu.jpg
-[7]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu.jpg
-[8]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg
-[9]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark.jpg
-[10]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu.jpg
-[11]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark.jpg
diff --git a/sources/tech/20171129 Inside AGL Familiar Open Source Components Ease Learning Curve.md b/sources/tech/20171129 Inside AGL Familiar Open Source Components Ease Learning Curve.md
deleted file mode 100644
index 9eee39888a..0000000000
--- a/sources/tech/20171129 Inside AGL Familiar Open Source Components Ease Learning Curve.md
+++ /dev/null
@@ -1,70 +0,0 @@
-Inside AGL: Familiar Open Source Components Ease Learning Curve
-============================================================
-
-![Matt Porter](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/porter-elce-agl.png?itok=E-5xG98S "Matt Porter")
-Konsulko's Matt Porter (pictured) and Scott Murray ran through the major components of AGL's Unified Code Base at Embedded Linux Conference Europe. [The Linux Foundation][1]
-
-Among the sessions at the recent [Embedded Linux Conference Europe (ELCE)][5] — 57 of which are [available on YouTube][2] — are several reports on the Linux Foundation's [Automotive Grade Linux project][6].
These include [an overview from AGL Community Manager Walt Miner ][3]showing how AGL’s Unified Code Base (UCB) Linux distribution is expanding from in-vehicle infotainment (IVI) to ADAS. There was even a presentation on using AGL to build a remote-controlled robot (see links below). - -Here we look at the “State of AGL: Plumbing and Services,” from Konsulko Group’s CTO Matt Porter and senior staff software engineer Scott Murray. Porter and Murray ran through the components of the current [UCB 4.0 “Daring Dab”][7] and detailed major upstream components and API bindings, many of which will be appear in the Electric Eel release due in Jan. 2018. - -Despite the automotive focus of the AGL stack, most of the components are already familiar to Linux developers. “It looks a lot like a desktop distro,” Porter told the ELCE attendees in Prague. “All these familiar friends.” - -Some of those friends include the underlying Yocto Project “Poky” with OpenEmbedded foundation, which is topped with layers like oe-core, meta-openembedded, and metanetworking. Other components are based on familiar open source software like systemd (application control), Wayland and Weston (graphics), BlueZ (Bluetooth), oFono (telephony), PulseAudio and ALSA (audio), gpsd (location), ConnMan (Internet), and wpa-supplicant (WiFi), among others. - -UCB’s application framework is controlled through a WebSocket interface to the API bindings, thereby enabling apps to talk to each other. There’s also a new W3C widget for an alternative application packaging scheme, as well as support for SmartDeviceLink, a technology developed at Ford that automatically syncs up IVI systems with mobile phones.  - -AGL UCB’s Wayland/Weston graphics layer is augmented with an “IVI shell” that works with the layer manager. “One of the unique requirements of automotive is the ability to separate aspects of the application in the layers,” said Porter. “For example, in a navigation app, the graphics rendering for the map may be completely different than the engine used for the UI decorations. One engine layers to a surface in Wayland to expose the map while the decorations and controls are handled by another layer.” - -For audio, ALSA and PulseAudio are joined by GENIVI AudioManager, which works together with PulseAudio. “We use AudioManager for policy driven audio routing,” explained Porter. “It allows you to write a very complex XML-based policy using a rules engine with audio routing.” - -UCB leans primarily on the well-known [Smack Project][8] for security, and also incorporates Tizen’s [Cynara][9] safe policy-checker service. A Cynara-enabled D-Bus daemon is used to control Cynara security policies. - -Porter and Murray went on to explain AGL’s API binding mechanism, which according to Murray “abstracts the UI from its back-end logic so you can replace it with your own custom UI.” You can re-use application logic with different UI implementations, such as moving from the default Qt to HTML5 or a native toolkit. Application binding requests and responses use JSON via HTTP or WebSocket. Binding calls can be made from applications or from other bindings, thereby enabling “stacking” of bindings. - -Porter and Murray concluded with a detailed description of each binding. These include upstream bindings currently in various stages of development. The first is a Master binding that manages the application lifecycle, including tasks such as install, uninstall, start, and terminate. 
Other upstream bindings include the WiFi binding and the BlueZ-based Bluetooth binding, which in the future will be upgraded with Bluetooth [PBAP][10] (Phone Book Access Profile). PBAP can connect with contacts databases on your phone, and links to the Telephony binding to replicate caller ID. - -The oFono-based Telephony binding also makes calls to the Bluetooth binding for Bluetooth Hands-Free-Profile (HFP) support. In the future, Telephony binding will add support for sent dial tones, call waiting, call forwarding, and voice modem support. - -Support for AM/FM radio is not well developed in the Linux world, so for its Radio binding, AGL started by supporting [RTL-SDR][11] code for low-end radio dongles. Future plans call for supporting specific automotive tuner devices. - -The MediaPlayer binding is in very early development, and is currently limited to GStreamer based audio playback and control. Future plans call for adding playlist controls, as well as one of the most actively sought features among manufacturers: video playback support. - -Location bindings include the [gpsd][12] based GPS binding, as well as GeoClue and GeoFence. GeoClue, which is built around the [GeoClue][13] D-Bus geolocation service, “overlaps a little with GPS, which uses the same location data,” says Porter. GeoClue also gathers location data from WiFi AP databases, 3G/4G tower info, and the GeoIP database — sources that are useful “if you’re inside or don’t have a good fix,” he added. - -GeoFence depends on the GPS binding, as well. It lets you establish a bounding box, and then track ingress and egress events. GeoFence also tracks “dwell” status, which is determined by arriving at home and staying for 10 minutes. “It then triggers some behavior based on a timeout,” said Porter. Future plans call for a customizable dwell transition time. - -While most of these Upstream bindings are well established, there are also Work in Progress (WIP) bindings that are still in the early stages, including CAN, HomeScreen, and WindowManager bindings. Farther out, there are plans to add speech recognition and text-to-speech bindings, as well as a WWAN modem binding. - -In conclusion, Porter noted: “Like any open source project, we desperately need more developers.” The Automotive Grade Linux project may seem peripheral to some developers, but it offers a nice mix of familiarity — grounded in many widely used open source projects -- along with the excitement of expanding into a new and potentially game changing computing form factor: your automobile. AGL has also demonstrated success — you can now [check out AGL in action in the 2018 Toyota Camry][14], followed in the coming month by most Toyota and Lexus vehicles sold in North America. 
-
-Watch the complete video below:
-
-[Video][15]
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/event/elce/2017/11/inside-agl-familiar-open-source-components-ease-learning-curve
-
-作者:[ERIC BROWN][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/ericstephenbrown
-[1]:https://www.linux.com/licenses/category/linux-foundation
-[2]:https://www.youtube.com/playlist?list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk
-[3]:https://www.youtube.com/watch?v=kfwEmjSjAzM&index=14&list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk
-[4]:https://www.linux.com/files/images/porter-elce-aglpng
-[5]:http://events.linuxfoundation.org/events/embedded-linux-conference-europe
-[6]:https://www.automotivelinux.org/
-[7]:https://www.linux.com/blog/2017/8/automotive-grade-linux-moves-ucb-40-launches-virtualization-workgroup
-[8]:http://schaufler-ca.com/
-[9]:https://wiki.tizen.org/Security:Cynara
-[10]:https://wiki.maemo.org/Bluetooth_PBAP
-[11]:https://www.rtl-sdr.com/about-rtl-sdr/
-[12]:http://www.catb.org/gpsd/
-[13]:https://www.freedesktop.org/wiki/Software/GeoClue/
-[14]:https://www.linux.com/blog/event/automotive-linux-summit/2017/6/linux-rolls-out-toyota-and-lexus-vehicles
-[15]:https://youtu.be/RgI-g5h1t8I
diff --git a/sources/tech/20171129 Interactive Workflows for Cpp with Jupyter.md b/sources/tech/20171129 Interactive Workflows for Cpp with Jupyter.md
deleted file mode 100644
index 395c901618..0000000000
--- a/sources/tech/20171129 Interactive Workflows for Cpp with Jupyter.md
+++ /dev/null
@@ -1,301 +0,0 @@
-Interactive Workflows for C++ with Jupyter
-============================================================
-
-Scientists, educators, and engineers use programming languages not only to build software systems, but also in interactive workflows, using the tools available to _explore_ a problem and _reason_ about it.
-
-Running some code, looking at a visualization, loading data, and running more code. Quick iteration is especially important during the exploratory phase of a project.
-
-For this kind of workflow, users of the C++ programming language currently have no choice but to use a heterogeneous set of tools that don't play well with each other, making the whole process cumbersome and difficult to reproduce.
-
- _We currently lack a good story for interactive computing in C++._
-
-In our opinion, this hurts the productivity of C++ developers:
-
-* Most of the progress made in software projects comes from incrementalism. Obstacles to fast iteration hinder progress.
-
-* This also makes C++ more difficult to teach. The first hours of a C++ class are rarely rewarding, as the students must learn how to set up a small project before writing any code. And then, a lot more time is required before their work can result in any visual outcome.
-
-### Project Jupyter and Interactive Computing
-
-
-
-![](https://cdn-images-1.medium.com/max/1200/1*wOHyKy6fl3ltcBMNpCvC6Q.png)
-
-The goal of Project Jupyter is to provide a consistent set of tools for scientific computing and data science workflows, from the exploratory phase of the analysis to the presentation and the sharing of the results.
The Jupyter stack was designed to be agnostic of the programming language, and also to allow alternative implementations of any component of the layered architecture (back-ends for programming languages, custom renderers for file types associated with Jupyter). The stack consists of
-
-* a low-level specification for messaging protocols, standardized file formats,
-
-* a reference implementation of these standards,
-
-* applications built on top of these libraries: the Notebook, JupyterLab, Binder, JupyterHub,
-
-* and visualization libraries integrated into the Notebook and JupyterLab.
-
-Adoption of the Jupyter ecosystem has skyrocketed in recent years, with millions of users worldwide, over a million Jupyter notebooks shared on GitHub, and large-scale deployments of Jupyter in universities, companies, and high-performance computing centers.
-
-### Jupyter and C++
-
-One of the main extension points of the Jupyter stack is the _kernel_, the part of the infrastructure responsible for executing the user's code. Jupyter kernels exist for [numerous programming languages][14].
-
-Most Jupyter kernels are implemented in the target programming language: the reference implementation [ipykernel][15] in Python, [IJulia][16] in Julia, leading to a duplication of effort for the implementation of the protocol. A common denominator of many of these interpreted languages is that the interpreter generally exposes a C API, allowing embedding into a native application. In an effort to consolidate these commonalities and save work for future kernel builders, we developed _xeus_.
-
-
-
-![](https://cdn-images-1.medium.com/max/1200/1*TKrPv5AvFM3NJ6a7VMu8Tw.png)
-
-[Xeus][17] is a C++ implementation of the Jupyter kernel protocol. It is not a kernel itself but a library that facilitates the authoring of kernels, and of other applications making use of the Jupyter kernel protocol.
-
-A typical kernel implementation using xeus would in fact make use of the target interpreter _as a library_.
-
-There are a number of benefits of using xeus over implementing your kernel in the target language:
-
-* Xeus provides a complete implementation of the protocol, enabling a lot of features from the start for kernel authors, who only need to deal with the language bindings.
-
-* Xeus-based kernels can very easily provide a back-end for Jupyter interactive widgets.
-
-* Finally, xeus can be used to implement kernels for domain-specific languages such as SQL flavors. Existing approaches use a Python wrapper. With xeus, the resulting kernel won't require Python at run-time, leading to large performance benefits.
-
-
-
-![](https://cdn-images-1.medium.com/max/1200/1*Cr_cfHdrgFXHlO15qdNK7w.png)
-
-Interpreted C++ is already a reality at CERN with the [Cling][18] C++ interpreter, in the context of the [ROOT][19] data analysis environment.
-
-As a first example of a kernel based on xeus, we have implemented [xeus-cling][20], a pure C++ kernel.
-
-
-
-![](https://cdn-images-1.medium.com/max/1600/1*NnjISpzZtpy5TOurg0S89A.gif)
-Redirection of outputs to the Jupyter front-end, with different styling in the front-end.
-
-Complex features of the C++ programming language such as polymorphism, templates, and lambdas are supported by the cling interpreter, making the C++ Jupyter notebook a great prototyping and learning platform for C++ users.
See the image below for a demonstration: - - - -![](https://cdn-images-1.medium.com/max/1600/1*lGVLY4fL1ytMfT-eWtoXkw.gif) -Features of the C++ programming language supported by the cling interpreter - -Finally, xeus-cling supports live quick-help, fetching the content on [cppreference][21] in the case of the standard library. - - - -![](https://cdn-images-1.medium.com/max/1600/1*Igegq0xBebuJV8hy0TGpfg.png) -Live help for the C++standard library in the Jupyter notebook - -> We realized that we started using the C++ kernel ourselves very early in the development of the project. For quick experimentation, or reproducing bugs. No need to set up a project with a cpp file and complicated project settings for finding the dependencies… Just write some code and hit Shift+Enter. - -Visual output can also be displayed using the rich display mechanism of the Jupyter protocol. - - - -![](https://cdn-images-1.medium.com/max/1600/1*t_9qAXtdkSXr-0tO9VvOzQ.png) -Using Jupyter's rich display mechanism to display an image inline in the notebook - - -![](https://cdn-images-1.medium.com/max/1200/1*OVfmXFAbfjUtGFXYS9fKRA.png) - -Another important feature of the Jupyter ecosystem are the [Jupyter Interactive Widgets][22]. They allow the user to build graphical interfaces and interactive data visualization inline in the Jupyter notebook. Moreover it is not just a collection of widgets, but a framework that can be built upon, to create arbitrary visual components. Popular interactive widget libraries include - -* [bqplot][1] (2-D plotting with d3.js) - -* [pythreejs][2] (3-D scene visualization with three.js) - -* [ipyleaflet][3] (maps visualization with leaflet.js) - -* [ipyvolume][4] (3-D plotting and volume rendering with three.js) - -* [nglview][5] (molecular visualization) - -Just like the rest of the Jupyter ecosystem, Jupyter interactive widgets were designed as a language-agnostic framework. Other language back-ends can be created reusing the front-end component, which can be installed separately. - -[xwidgets][23], which is still at an early stage of development, is a native C++ implementation of the Jupyter widgets protocol. It already provides an implementation for most of the widget types available in the core Jupyter widgets package. - - - -![](https://cdn-images-1.medium.com/max/1600/1*ro5Ggdstnf0DoqhTUWGq3A.gif) -C++ back-end to the Jupyter interactive widgets - -Just like with ipywidgets, one can build upon xwidgets and implement C++ back-ends for the Jupyter widget libraries listed earlier, effectively enabling them for the C++ programming language and other xeus-based kernels: xplot, xvolume, xthreejs… - - - -![](https://cdn-images-1.medium.com/max/1200/1*yCRYoJFnbtxYkYMRc9AioA.png) - -[xplot][24] is an experimental C++ back-end for the [bqplot][25] 2-D plotting library. It enables an API following the constructs of the  [_Grammar of Graphics_][26]  in C++. - -In xplot, every item in a chart is a separate object that can be modified from the back-end,  _dynamically_ . - -Changing a property of a plot item, a scale, an axis or the figure canvas itself results in the communication of an update message to the front-end, which reflects the new state of the widget visually. - - - -![](https://cdn-images-1.medium.com/max/1600/1*Mx2g3JuTG1Cfvkkv0kqtLA.gif) -Changing the data of a scatter plot dynamically to update the chart - -> Warning: the xplot and xwidgets projects are still at an early stage of development and are changing drastically at each release. 
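-
-If you would like to try the xeus-cling kernel locally rather than through the binder links provided further below, it can typically be installed with the conda package manager. This is a hedged sketch: the channel and package names are assumptions based on current packaging and may change:
-
-```
-# install the C++ kernel and the notebook front-end
-conda install -c conda-forge xeus-cling notebook
-
-# confirm the kernels are registered
-jupyter kernelspec list
-```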
- -Interactive computing environments like Jupyter are not the only missing tool in the C++ world. Two key ingredients to the success of Python as the  _lingua franca_  of data science is the existence of libraries like [NumPy][27] and [Pandas][28] at the foundation of the ecosystem. - - - -![](https://cdn-images-1.medium.com/max/1200/1*HsU43Jzp1vJZpX2g8XPJsg.png) - -[xtensor][29] is a C++ library meant for numerical analysis with multi-dimensional array expressions. - -xtensor provides - -* an extensible expression system enabling lazy NumPy-style broadcasting. - -* an API following the  _idioms_  of the C++ standard library. - -* tools to manipulate array expressions and build upon xtensor. - -xtensor exposes an API similar to that of NumPy covering a growing portion of the functionalities. A cheat sheet can be [found in the documentation][30]: - - - -![](https://cdn-images-1.medium.com/max/1600/1*PBrf5vWYC8VTq_7VUOZCpA.gif) -Scrolling the NumPy to xtensor cheat sheet - -However, xtensor internals are very different from NumPy. Using modern C++ techniques (template expressions, closure semantics) xtensor is a lazily evaluated library, avoiding the creation of temporary variables and unnecessary memory allocations, even in the case complex expressions involving broadcasting and language bindings. - -Still, from a user perspective, the combination of xtensor with the C++ notebook provides an experience very similar to that of NumPy in a Python notebook. - - - -![](https://cdn-images-1.medium.com/max/1600/1*ULFpg-ePkdUbqqDLJ9VrDw.png) -Using the xtensor array expression library in a C++ notebook - -In addition to the core library, the xtensor ecosystem has a number of other components - -* [xtensor-blas][6]: the counterpart to the numpy.linalg module. - -* [xtensor-fftw][7]: bindings to the [fftw][8] library. - -* [xtensor-io][9]: APIs to read and write various file formats (images, audio, NumPy's NPZ format). - -* [xtensor-ros][10]: bindings for ROS, the robot operating system. - -* [xtensor-python][11]: bindings for the Python programming language, allowing the use of NumPy arrays in-place, using the NumPy C API and the pybind11 library. - -* [xtensor-julia][12]: bindings for the Julia programming language, allowing the use of Julia arrays in-place, using the C API of the Julia interpreter, and the CxxWrap library. - -* [xtensor-r][13]: bindings for the R programming language, allowing the use of R arrays in-place. - -Detailing further the features of the xtensor framework would be beyond the scope of this post. - -If you are interested in trying the various notebooks presented in this post, there is no need to install anything. You can just use  _binder_ : - -![](https://cdn-images-1.medium.com/max/1200/1*9cy5Mns_I0eScsmDBjvxDQ.png) - -[The Binder project][31], which is part of Project Jupyter, enables the deployment of containerized Jupyter notebooks, from a GitHub repository together with a manifest listing the dependencies (as conda packages). - -All the notebooks in the screenshots above can be run online, by just clicking on one of the following links: - -[xtensor][32]: the C++ N-D array expression library in a C++ notebook - -[xwidgets][33]: the C++ back-end for Jupyter interactive widgets - -[xplot][34]: the C++ back-end to the bqplot 2-D plotting library for Jupyter. 
-
-
-![](https://cdn-images-1.medium.com/max/1200/1*JwqhpMxMJppEepj7U4fV-g.png)
-
-[JupyterHub][35] is the multi-user infrastructure underlying large, open deployments of Jupyter like Binder, as well as smaller deployments for authenticated users.
-
-The modular architecture of JupyterHub enables a great variety of scenarios for how users are authenticated and what services are made available to them. JupyterHub deployments for several hundred users have been done at various universities and institutions, including Paris-Sud University, where the C++ kernel was also installed for the students to use.
-
-> In September 2017, the 350 first-year students at Paris-Sud University who took the "[Info 111: Introduction to Computer Science][36]" class wrote their first lines of C++ in a Jupyter notebook.
-
-The use of Jupyter notebooks in the context of teaching C++ proved especially useful for the first classes, where students can focus on the syntax of the language without distractions such as compiling and linking.
-
-### Acknowledgements
-
-The software presented in this post was built upon the work of a large number of people, including the Jupyter team and the Cling developers.
-
-We are especially grateful to [Patrick Bos][37] (who authored xtensor-fftw), Nicolas Thiéry, Min Ragan-Kelley, Thomas Kluyver, Yuvi Panda, Kyle Cranmer, Axel Naumann, and Vassil Vassilev.
-
-We thank the [DIANA/HEP][38] organization for supporting travel to CERN and encouraging the collaboration between Project Jupyter and the ROOT team.
-
-We are also grateful to the team at Paris-Sud University who worked on the JupyterHub deployment and the class materials, notably [Viviane Pons][39].
-
-The development of xeus, xtensor, xwidgets, and related packages at [QuantStack][40] is sponsored by [Bloomberg][41].
-
-### About the authors (alphabetical order)
-
- [_Sylvain Corlay_][42], Scientific Software Developer at [QuantStack][43]
-
- [_Loic Gouarin_][44], Research Engineer at [Laboratoire de Mathématiques at Orsay][45]
-
- [_Johan Mabille_][46], Scientific Software Developer at [QuantStack][47]
-
- [_Wolf Vollprecht_][48], Scientific Software Developer at [QuantStack][49]
-
-Thanks to [Maarten Breddels][50], [Wolf Vollprecht][51], [Brian E. Granger][52], and [Patrick Bos][53].
- --------------------------------------------------------------------------------- - -via: https://blog.jupyter.org/interactive-workflows-for-c-with-jupyter-fe9b54227d92 - -作者:[QuantStack ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://blog.jupyter.org/@QuantStack?source=post_header_lockup -[1]:https://github.com/bloomberg/bqplot -[2]:https://github.com/jovyan/pythreejs -[3]:https://github.com/ellisonbg/ipyleaflet -[4]:https://github.com/maartenbreddels/ipyvolume -[5]:https://github.com/arose/nglview -[6]:https://github.com/QuantStack/xtensor-blas -[7]:https://github.com/egpbos/xtensor-fftw -[8]:http://www.fftw.org/ -[9]:https://github.com/QuantStack/xtensor-io -[10]:https://github.com/wolfv/xtensor_ros -[11]:https://github.com/QuantStack/xtensor-python -[12]:https://github.com/QuantStack/Xtensor.jl -[13]:https://github.com/QuantStack/xtensor-r -[14]:https://github.com/jupyter/jupyter/wiki/Jupyter-kernels -[15]:https://github.com/ipython/ipykernel -[16]:https://github.com/JuliaLang/IJulia.jl -[17]:https://github.com/QuantStack/xeus -[18]:https://root.cern.ch/cling -[19]:https://root.cern.ch/ -[20]:https://github.com/QuantStack/xeus-cling -[21]:http://en.cppreference.com/w/ -[22]:http://jupyter.org/widgets -[23]:https://github.com/QUantStack/xwidgets -[24]:https://github.com/QuantStack/xplot -[25]:https://github.com/bloomberg/bqplot -[26]:https://dl.acm.org/citation.cfm?id=1088896 -[27]:http://www.numpy.org/ -[28]:https://pandas.pydata.org/ -[29]:https://github.com/QuantStack/xtensor/ -[30]:http://xtensor.readthedocs.io/en/latest/numpy.html -[31]:https://mybinder.org/ -[32]:https://beta.mybinder.org/v2/gh/QuantStack/xtensor/0.14.0-binder2?filepath=notebooks/xtensor.ipynb -[33]:https://beta.mybinder.org/v2/gh/QuantStack/xwidgets/0.6.0-binder?filepath=notebooks/xwidgets.ipynb -[34]:https://beta.mybinder.org/v2/gh/QuantStack/xplot/0.3.0-binder?filepath=notebooks -[35]:https://github.com/jupyterhub/jupyterhub -[36]:http://nicolas.thiery.name/Enseignement/Info111/ -[37]:https://twitter.com/egpbos -[38]:http://diana-hep.org/ -[39]:https://twitter.com/pyviv -[40]:https://twitter.com/QuantStack -[41]:http://www.techatbloomberg.com/ -[42]:https://twitter.com/SylvainCorlay -[43]:https://github.com/QuantStack/ -[44]:https://twitter.com/lgouarin -[45]:https://www.math.u-psud.fr/ -[46]:https://twitter.com/johanmabille?lang=en -[47]:https://github.com/QuantStack/ -[48]:https://twitter.com/wuoulf -[49]:https://github.com/QuantStack/ -[50]:https://medium.com/@maartenbreddels?source=post_page -[51]:https://medium.com/@wolfv?source=post_page -[52]:https://medium.com/@ellisonbg?source=post_page -[53]:https://medium.com/@egpbos?source=post_page diff --git a/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md b/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md deleted file mode 100644 index 0e38373c3f..0000000000 --- a/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md +++ /dev/null @@ -1,41 +0,0 @@ -Someone Tries to Bring Back Ubuntu's Unity from the Dead as an Official Spin -============================================================ - - - -> The Ubuntu Unity remix would be supported for nine months - -Canonical's sudden decision of killing its Unity user interface after seven years affected many Ubuntu users, and it 
looks like someone is now trying to bring it back from the dead as an unofficial spin.
-
-Long-time [Ubuntu][1] member Dale Beaudoin [ran a poll][2] last week on the official Ubuntu forums to take the pulse of the community and see if people are interested in an Ubuntu Unity Remix that would be released alongside Ubuntu 18.04 LTS (Bionic Beaver) next year and be supported for nine months or five years.
-
-Thirty people voted in the poll, with 67 percent of them opting for an LTS (Long Term Support) release of the so-called Ubuntu Unity Remix, while 33 percent voted for the 9-month supported release. It also appears that this upcoming Ubuntu Unity spin [looks to become an official flavor][3], yet that would require commitment from those developing it.
-
-"A recent poll voted 2/3rds in favor of Ubuntu Unity to become an LTS distribution. We should try to work this cycle assuming that it will be LTS and an official flavor," said Dale Beaudoin. "We will try and release an updated ISO once every week or 10 days using the current 18.04 daily builds of default Ubuntu Bionic Beaver as a platform."
-
-### Is Ubuntu Unity making a comeback?
-
-The last Ubuntu version to ship with Unity by default was Ubuntu 17.04 (Zesty Zapus), which will reach end of life in January 2018\. Ubuntu 17.10 (Artful Aardvark), the current stable release of the popular operating system, is the first to use the GNOME desktop environment by default for the main Desktop edition, as Canonical's CEO [announced][4] earlier this year that Unity would no longer be developed.
-
-However, Canonical is still offering the Unity desktop environment from the official software repositories, so if someone wants to install it, it's one click away. The bad news is that it will be supported only up until the release of Ubuntu 18.04 LTS (Bionic Beaver) in April 2018, so the developers of the Ubuntu Unity Remix would have to keep it on life support in a separate repository.
-
-On the other hand, we don't believe Canonical will change their mind and accept this Ubuntu Unity spin as an official flavor, as that would amount to admitting that they failed to continue development of Unity and that a handful of people can now do it. Most probably, if interest in this Ubuntu Unity Remix doesn't fade away soon, it will be an unofficial spin supported by the nostalgic community.
-
-The question is, would you be interested in an Ubuntu Unity spin, official or not?
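-
-For readers who want to try Unity on Ubuntu 17.10 in the meantime, the repositories mentioned above make it a single command away. This is a hedged sketch: the package name is assumed from the current Ubuntu archive and may change in future releases:
-
-```
-sudo apt update
-sudo apt install unity
-```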
-
---------------------------------------------------------------------------------
-
-via: http://news.softpedia.com/news/someone-tries-to-bring-back-ubuntu-s-unity-from-the-dead-as-an-unofficial-spin-518778.shtml
-
-作者:[Marius Nestor][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://news.softpedia.com/editors/browse/marius-nestor
-[1]:http://linux.softpedia.com/downloadTag/Ubuntu
-[2]:https://community.ubuntu.com/t/poll-unity-7-distro-9-month-spin-or-lts-for-18-04/2066
-[3]:https://community.ubuntu.com/t/unity-maintenance-roadmap/2223
-[4]:http://news.softpedia.com/news/canonical-to-stop-developing-unity-8-ubuntu-18-04-lts-ships-with-gnome-desktop-514604.shtml
-[5]:http://news.softpedia.com/editors/browse/marius-nestor
diff --git a/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md b/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md
deleted file mode 100644
index 195b51423a..0000000000
--- a/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md
+++ /dev/null
@@ -1,116 +0,0 @@
-Yoliver is translating.
-Excellent Business Software Alternatives For Linux
--------
-
-Many business owners choose to use Linux as the operating system for their operations for a variety of reasons.
-
-1. Firstly, they don't have to pay anything for the privilege, and that is a massive bonus during the early stages of a company when money is tight.
-
-2. Secondly, Linux is a light alternative compared to Windows and other popular operating systems available today.
-
-Of course, lots of entrepreneurs worry they won't have access to some of the essential software packages if they make that move. However, as you will discover throughout this post, there are plenty of similar tools that will cover all the bases.
-
- [![](https://4.bp.blogspot.com/-xwLuDRdB6sw/Whxx0Z5pI5I/AAAAAAAADhU/YWHID8GU9AgrXRfeTz4HcDZkG-XWZNbSgCLcBGAs/s400/4444061098_6eeaa7dc1a_z.jpg)][3]
-
-### Alternatives to Microsoft Word
-
-All company bosses will require access to a word processing tool if they want to ensure the smooth running of their operation, according to [the latest article from Fareed Siddiqui][4]. You'll need that software to write business plans, letters, and many other documents within your firm. Thankfully, there are a variety of alternatives you might like to select from if you opt for the Linux operating system. Some of the most popular ones include:
-
-* LibreOffice Writer
-
-* AbiWord
-
-* KWord
-
-* LaTeX
-
-So, you just need to read some online reviews and then download the best word processor based on your findings. Of course, if you're not satisfied with the solution, you should take a look at some of the other ones on that list. In many instances, any of the programs mentioned above should work well.
-
-### Alternatives to Microsoft Excel
-
- [![](https://4.bp.blogspot.com/-XdS6bSLQbOU/WhxyeWZeeCI/AAAAAAAADhc/C3hGY6rgzX4m2emunot80-4URu9-aQx8wCLcBGAs/s400/28929069495_e85d2626ba_z.jpg)][5]
-
-You need a spreadsheet tool if you want to ensure your business doesn't get into trouble when it comes to bookkeeping and inventory control. There are specialist software packages on the market for both of those tasks, but [open-source alternatives][6] to Microsoft Excel will give you the most freedom when creating and editing your spreadsheets.
While there are other packages out there, some of the best ones for Linux users include:
-
-* [LibreOffice Calc][1]
-
-* KSpread
-
-* Gnumeric
-
-Those programs work in much the same way as Microsoft Excel, and so you can use them for tasks like accounting and stock control. You might also use that software to monitor employee earnings or punctuality. The possibilities are endless and only limited by your imagination.
-
-### Alternatives to Adobe Photoshop
-
- [![](https://3.bp.blogspot.com/-Id9Dm3CIXmc/WhxzGIlv3zI/AAAAAAAADho/VfIRCAbJMjMZzG2M97-uqLV9mOhqN7IWACLcBGAs/s400/32206185926_c69accfcef_z.jpg)][7]
-
-Company bosses require access to design programs when developing their marketing materials and creating graphics for their websites. You might also use software of that nature to come up with a new business logo at some point. Lots of entrepreneurs spend a fortune on [Training Connections Photoshop classes][8] and those available from other providers. They do that in the hope of educating their teams and getting the best results. However, people who use Linux can still benefit from that expertise if they select one of the following [alternatives][9]:
-
-* GIMP
-
-* Krita
-
-* Pixel
-
-* LightZone
-
-The last two suggestions on that list require a substantial investment. Still, they function in much the same way as Adobe Photoshop, and so you should manage to achieve the same quality of work.
-
-### Other software solutions that you'll want to consider
-
-Alongside those alternatives to some of the most widely-used software packages around today, business owners should take a look at the full range of products they could use with the Linux operating system. Here are some tools you might like to research and consider:
-
-* Inkscape - similar to CorelDRAW
-
-* LibreOffice Base - similar to Microsoft Access
-
-* LibreOffice Impress - similar to Microsoft PowerPoint
-
-* File Roller - similar to WinZip
-
-* Linphone - similar to Skype
-
-There are [lots of other programs][10] you'll also want to research, and so the best solution is to use the internet to learn more. You will find lots of reviews from people who've used the software in the past, and many of them will compare the tool to its Windows or iOS alternative. So, you shouldn't have to work too hard to identify the best ones and sort the wheat from the chaff.
-
-Now that you have all the right information, it's time to weigh all the pros and cons of Linux and work out if it's suitable for your operation. In most instances, that operating system does not place any limits on your business activities. It's just that you need to use different software compared to some of your competitors. People who use Linux tend to benefit from improved security, speed, and performance. Also, the solution gets regular updates, and so it's growing every single day. Unlike Windows and other solutions, you can customize Linux to meet your requirements. With that in mind, do not make the mistake of overlooking this fantastic system!
-
-_This is a contributed post._
-
---------------------------------------------------------------------------------
-
-via: http://linuxblog.darkduck.com/2017/11/excellent-business-software.html
-
-作者:[DarkDuck][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linuxblog.darkduck.com/
-[1]:http://linuxblog.darkduck.com/2015/08/pivot-tables-in-libreoffice-calc.html
-[3]:https://4.bp.blogspot.com/-xwLuDRdB6sw/Whxx0Z5pI5I/AAAAAAAADhU/YWHID8GU9AgrXRfeTz4HcDZkG-XWZNbSgCLcBGAs/s1600/4444061098_6eeaa7dc1a_z.jpg
-[4]:https://www.linkedin.com/pulse/benefits-using-microsoft-word-fareed/
-[5]:https://4.bp.blogspot.com/-XdS6bSLQbOU/WhxyeWZeeCI/AAAAAAAADhc/C3hGY6rgzX4m2emunot80-4URu9-aQx8wCLcBGAs/s1600/28929069495_e85d2626ba_z.jpg
-[6]:http://linuxblog.darkduck.com/2014/03/why-open-software-and-what-are-benefits.html
-[7]:https://3.bp.blogspot.com/-Id9Dm3CIXmc/WhxzGIlv3zI/AAAAAAAADho/VfIRCAbJMjMZzG2M97-uqLV9mOhqN7IWACLcBGAs/s1600/32206185926_c69accfcef_z.jpg
-[8]:https://www.trainingconnection.com/photoshop-training.php
-[9]:http://linuxblog.darkduck.com/2011/10/photoshop-alternatives-for-linux.html
-[10]:http://www.makeuseof.com/tag/best-linux-software/
diff --git a/sources/tech/20171130 Scrot Linux command-line screen grabs made simple.md b/sources/tech/20171130 Scrot Linux command-line screen grabs made simple.md
deleted file mode 100644
index 2b4d2248b2..0000000000
--- a/sources/tech/20171130 Scrot Linux command-line screen grabs made simple.md
+++ /dev/null
@@ -1,108 +0,0 @@
-Scrot: Linux command-line screen grabs made simple
-============================================================
-
-### Scrot is a basic, flexible tool that offers a number of handy options for taking screen captures from the Linux command line.
-
-![Scrot: Screen grabs made simple](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A "Scrot: Screen grabs made simple")
-Image credits: Original photo by Rikki Endsley. [CC BY-SA 4.0][13]
-
-There are great tools on the Linux desktop for taking screen captures, such as [KSnapshot][14] and [Shutter][15]. Even the simple utility that comes with the GNOME desktop does a pretty good job of capturing screens. But what if you rarely need to take screen captures? Or you use a Linux distribution without a built-in capture tool, or an older computer with limited resources?
-
-Turn to the command line and a little utility called [Scrot][16]. It does a fine job of taking simple screen captures, and it includes a few features that might surprise you.
-
-### Getting started with Scrot
-
-More Linux resources
-
-* [What is Linux?][1]
-
-* [What are Linux containers?][2]
-
-* [Download Now: Linux commands cheat sheet][3]
-
-* [Advanced Linux commands cheat sheet][4]
-
-* [Our latest Linux articles][5]
-
-Many Linux distributions come with Scrot already installed—to check, type `which scrot`. If it isn't there, you can install Scrot using your distro's package manager. If you're willing to compile the code, grab it [from GitHub][22].
-
-To take a screen capture, crack open a terminal window and type `scrot [filename]`, where `[filename]` is the name of the file to which you want to save the image (for example, `desktop.png`). If you don't include a name for the file, Scrot will create one for you, such as `2017-09-24-185009_1687x938_scrot.png`. (That filename isn't as descriptive as it could be, is it?
That's why it's better to add one to the command.)
-
-Running Scrot with no options takes a screen capture of your entire desktop. If you don't want to do that, Scrot lets you focus on smaller portions of your screen.
-
-### Taking a screen capture of a single window
-
-Tell Scrot to take a screen capture of a single window by typing `scrot -u [filename]`.
-
-The `-u` option tells Scrot to grab the window currently in focus. That's usually the terminal window you're working in, which might not be the one you want.
-
-To grab another window on your desktop, type `scrot -s [filename]`.
-
-The `-s` option lets you do one of two things:
-
-* select an open window, or
-
-* draw a rectangle around a window or a portion of a window to capture it.
-
-You can also set a delay, which gives you a little more time to select the window you want to capture. To do that, type `scrot -u -d [num] [filename]`.
-
-The `-d` option tells Scrot to wait before grabbing the window, and `[num]` is the number of seconds to wait. Specifying `-d 5` (wait five seconds) should give you enough time to choose a window.
-
-### More useful options
-
-Scrot offers a number of additional features (most of which I never use). The ones I find most useful include:
-
-* `-b` also grabs the window's border
-
-* `-t` grabs a window and creates a thumbnail of it. This can be useful when you're posting screen captures online.
-
-* `-c` creates a countdown in your terminal when you use the `-d` option.
-
-To learn about Scrot's other options, check out its documentation by typing `man scrot` in a terminal window, or [read it online][17]. Then start snapping images of your screen.
-
-It's basic, but Scrot gets the job done nicely.
-
-### Topics
-
- [Linux][23]
-
-### About the author
-
- [![That idiot Scott Nesbitt ...](https://opensource.com/sites/default/files/styles/profile_pictures/public/scottn-cropped.jpg?itok=q4T2J4Ai)][18]
-
- Scott Nesbitt - I'm a long-time user of free/open source software, and write various things for both fun and profit. I don't take myself too seriously and I do all of my own stunts. You can find me at these fine establishments on the web: [Twitter][7], [Mastodon][8], [GitHub][9], and...
[more about Scott Nesbitt][10][More about me][11] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot - -作者:[ Scott Nesbitt  ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/scottnesbitt -[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[6]:https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot?rate=H43kUdawjR0GV9D0dCbpnmOWcqw1WekfrAI_qKo8UwI -[7]:http://www.twitter.com/ScottWNesbitt -[8]:https://mastodon.social/@scottnesbitt -[9]:https://github.com/ScottWNesbitt -[10]:https://opensource.com/users/scottnesbitt -[11]:https://opensource.com/users/scottnesbitt -[12]:https://opensource.com/user/14925/feed -[13]:https://creativecommons.org/licenses/by-sa/4.0/ -[14]:https://www.kde.org/applications/graphics/ksnapshot/ -[15]:https://launchpad.net/shutter -[16]:https://github.com/dreamer/scrot -[17]:http://manpages.ubuntu.com/manpages/precise/man1/scrot.1.html -[18]:https://opensource.com/users/scottnesbitt -[19]:https://opensource.com/users/scottnesbitt -[20]:https://opensource.com/users/scottnesbitt -[21]:https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot#comments -[22]:https://github.com/dreamer/scrot -[23]:https://opensource.com/tags/linux diff --git a/sources/tech/20171130 Search DuckDuckGo from the Command Line.md b/sources/tech/20171130 Search DuckDuckGo from the Command Line.md new file mode 100644 index 0000000000..ee451a6172 --- /dev/null +++ b/sources/tech/20171130 Search DuckDuckGo from the Command Line.md @@ -0,0 +1,103 @@ +translating---geekpi + +# Search DuckDuckGo from the Command Line + + ![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/duckduckgo.png) +When we showed you how to [search Google from the command line][3] a lot of you to say you use [Duck Duck Go][4], the awesome privacy-focused search engine. + +Well, now there’s a tool to search DuckDuckGo from the command line. It’s called [ddgr][6] (pronounced, in my head, as  _dodger_ ) and it’s pretty neat. + +Like [Googler][7], ddgr is totally open-source and totally unofficial. Yup, the app is unaffiliated with DuckDuckGo in any way. So, should it start returning unsavoury search results for innocent terms, make sure you quack in this dev’s direction, and not the search engine’s! + +### DuckDuckGo Terminal App + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/ddgr-gif.gif) + +[DuckDuckGo Bangs][8] makes finding stuff on DuckDuckGo super easy (there’s even a bang for  _this_  site) and, dutifully, ddgr supports them. + +Unlike the web interface, you can specify the number of search results you would like to see per page. 
It's more convenient than skimming through 30-odd search results per page. The default interface is carefully designed to use minimum space without sacrificing readability.
+
+`ddgr` has a number of features, including:
+
+* Choose the number of search results to fetch
+
+* Support for Bash autocomplete
+
+* Use !bangs
+
+* Open URLs in a browser
+
+* "I'm feeling lucky" option
+
+* Filter by time, region, file type, etc.
+
+* Minimal dependencies
+
+You can download `ddgr` for various systems directly from the GitHub project page:
+
+[Download 'ddgr' from Github][9]
+
+You can also install ddgr on Ubuntu 16.04 LTS and up from a PPA. This repo is maintained by the developer of ddgr and is recommended should you want to stay up-to-date with new releases as and when they appear.
+
+Do note that at the time of writing the latest version of ddgr is _not_ in the PPA, but an older version (lacking `--num` support) is:
+
+```
+sudo add-apt-repository ppa:twodopeshaggy/jarun
+```
+
+```
+sudo apt-get update
+```
+
+### How To Use ddgr to Search DuckDuckGo from the Command Line
+
+To use ddgr once you've installed it, all you need to do is pop open your terminal emulator of choice and run:
+
+```
+ddgr
+```
+
+Next, enter a search term:
+
+```
+search-term
+```
+
+To limit the number of results returned, run:
+
+```
+ddgr --num 5 search-term
+```
+
+To instantly open the first matching result for a search term in your browser, run:
+
+```
+ddgr -j search-term
+```
+
+You can pass arguments and flags to narrow down your search. To see a comprehensive list inside the terminal, run:
+
+```
+ddgr -h
+```
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app
+
+作者:[JOEY SNEDDON][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:https://plus.google.com/117485690627814051450/?rel=author
+[2]:http://www.omgubuntu.co.uk/category/download
+[3]:http://www.omgubuntu.co.uk/2017/08/search-google-from-the-command-line
+[4]:http://duckduckgo.com/
+[5]:http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app
+[6]:https://github.com/jarun/ddgr
+[7]:https://github.com/jarun/googler
+[8]:https://duckduckgo.com/bang
+[9]:https://github.com/jarun/ddgr/releases/tag/v1.1
diff --git a/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md b/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md
deleted file mode 100644
index 46afe9b893..0000000000
--- a/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md
+++ /dev/null
@@ -1,156 +0,0 @@
-translating---geekpi
-
-Undistract-me: Get Notification When Long Running Terminal Commands Complete
-============================================================
-
-by [sk][2] · November 30, 2017
-
-![Undistract-me](https://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2-720x340.png)
-
-A while ago, we published how to [get a notification when a Terminal activity is done][3]. Today, I found a similar utility called "undistract-me" that notifies you when long-running terminal commands complete. Picture this scenario. You run a command that takes a while to finish. In the meantime, you check your Facebook and get absorbed in it.
After a while, you remember that you ran a command a few minutes ago. You go back to the Terminal and notice that the command has already finished. But you have no idea when the command completed. Have you ever been in this situation? I bet most of you have been in this situation many times. This is where “undistract-me” comes in handy. You don’t need to constantly check the terminal to see if a command is completed or not. The undistract-me utility will notify you when a long-running command is completed. It will work on Arch Linux, Debian, Ubuntu and other Ubuntu-derivatives. - -#### Installing Undistract-me - -Undistract-me is available in the default repositories of Debian and its variants such as Ubuntu. All you have to do is run the following command to install it. - -``` -sudo apt-get install undistract-me -``` - -Arch Linux users can install it from the AUR using any AUR helper program. - -Using [Pacaur][4]: - -``` -pacaur -S undistract-me-git -``` - -Using [Packer][5]: - -``` -packer -S undistract-me-git -``` - -Using [Yaourt][6]: - -``` -yaourt -S undistract-me-git -``` - -Then, run the following command to add “undistract-me” to your Bash. - -``` -echo 'source /etc/profile.d/undistract-me.sh' >> ~/.bashrc -``` - -Alternatively, you can run this command to add it to your Bash (the -e flag makes echo write the \n as a real newline): - -``` -echo -e "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .bashrc -``` - -If you are in the Zsh shell, run this command: - -``` -echo -e "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .zshrc -``` - -Finally, update the changes: - -For Bash: - -``` -source ~/.bashrc -``` - -For Zsh: - -``` -source ~/.zshrc -``` - -#### Configure Undistract-me - -By default, Undistract-me will consider any command that takes more than 10 seconds to complete as a long-running command. You can change this time interval by editing the /usr/share/undistract-me/long-running.bash file. - -``` -sudo nano /usr/share/undistract-me/long-running.bash -``` - -Find the “LONG_RUNNING_COMMAND_TIMEOUT” variable and change the default value (10 seconds) to something else of your choice. - - [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png)][7] - -Save and close the file. Do not forget to update the changes: - -``` -source ~/.bashrc -``` - -Also, you can disable notifications for particular commands. To do so, find the “LONG_RUNNING_IGNORE_LIST” variable and add the commands to it, separated by spaces. - -By default, the notification will only show if the active window is not the window the command is running in. That means, it will notify you only if the command is running in a background Terminal window. If the command is running in the active Terminal window, you will not be notified. If you want undistract-me to send notifications whether the Terminal window is visible or in the background, you can set IGNORE_WINDOW_CHECK to 1 to skip the window check. - -The other cool feature of Undistract-me is that you can have an audio notification along with the visual notification when a command is done. By default, it will only send a visual notification. You can change this behavior by setting the variable UDM_PLAY_SOUND to a non-zero integer on the command line. However, your Ubuntu system should have the pulseaudio-utils and sound-theme-freedesktop utilities installed to enable this functionality.
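- -As a rough sketch, a customized setup combining these settings might look like the following; the timeout value, the ignore-list entries and the variable values here are all illustrative, not defaults: - -``` -# In /usr/share/undistract-me/long-running.bash: -LONG_RUNNING_COMMAND_TIMEOUT=30 -LONG_RUNNING_IGNORE_LIST="vim top ssh" - -# In ~/.bashrc, before the script is sourced: -export IGNORE_WINDOW_CHECK=1 -export UDM_PLAY_SOUND=1 -```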
- -Please remember that you need to run the following command to update the changes made. - -For Bash: - -``` -source ~/.bashrc -``` - -For Zsh: - -``` -source ~/.zshrc -``` - -It is time to verify if this really works. - -#### Get Notification When Long Running Terminal Commands Complete - -Now, run any command that takes longer than 10 seconds, or whatever time duration you defined in the Undistract-me script. - -I ran the following command on my Arch Linux desktop. - -``` -sudo pacman -Sy -``` - -This command took 32 seconds to complete. After the completion of the above command, I got the following notification. - - [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png)][8] - -Please remember, the Undistract-me script notifies you only if the given command took more than 10 seconds to complete. If the command completes in less than 10 seconds, you will not be notified. Of course, you can change this time interval setting as I described in the Configuration section above. - -I find this tool very useful. It helped me get back to business after I got completely lost in some other tasks. I hope this tool will be helpful to you too. - -More good stuff to come. Stay tuned! - -Cheers! - -Resource: - -* [Undistract-me GitHub Repository][1] - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/undistract-get-notification-long-running-terminal-commands-complete/ - -作者:[sk][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://github.com/jml/undistract-me -[2]:https://www.ostechnix.com/author/sk/ -[3]:https://www.ostechnix.com/get-notification-terminal-task-done/ -[4]:https://www.ostechnix.com/install-pacaur-arch-linux/ -[5]:https://www.ostechnix.com/install-packer-arch-linux-2/ -[6]:https://www.ostechnix.com/install-yaourt-arch-linux/ -[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png -[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png diff --git a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md deleted file mode 100644 index 3a2c20ad52..0000000000 --- a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md +++ /dev/null @@ -1,135 +0,0 @@ - - translating by HardworkFish - -Wake up and Shut Down Linux Automatically -============================================================ - -### [banner.jpg][1] - -![time keeper](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner.jpg?itok=zItspoSb) - -Learn how to configure your Linux computers to watch the time for you, then wake up and shut down automatically. - -[Creative Commons Attribution][6][The Observatory at Delhi][7] - -Don't be a watt-waster. If your computers don't need to be on, then shut them down. For convenience and nerd creds, you can configure your Linux computers to wake up and shut down automatically. - -### Precious Uptimes - -Some computers need to be on all the time, which is fine as long as it's not about satisfying an uptime compulsion. Some people are very proud of their lengthy uptimes, but now that we have kernel hot-patching, only hardware failures require shutdowns. I think it's better to be practical. Save electricity as well as wear on your moving parts, and shut them down when they're not needed.
For example, you can wake up a backup server at a scheduled time, run your backups, and then shut it down until it's time for the next backup. Or, you can configure your Internet gateway to be on only at certain times. Anything that doesn't need to be on all the time can be configured to turn on, do a job, and then shut down. - -### Sleepies - -For computers that don't need to be on all the time, good old cron will shut them down reliably. Use either root's cron, or /etc/crontab. This example creates a root cron job to shut down every night at 11:15 p.m. - -``` -# crontab -e -u root -# m h dom mon dow command -15 23 * * * /sbin/shutdown -h now -``` - -To shut down only on weekdays (Monday through Friday), restrict the day-of-week field: - -``` -15 23 * * 1-5 /sbin/shutdown -h now -``` - -You may also use /etc/crontab, which is fast and easy, and everything is in one file. You have to specify the user: - -``` -15 23 * * 1-5 root shutdown -h now -``` - -Auto-wakeups are very cool; most of my SUSE colleagues are in Nuremberg, so I am crawling out of bed at 5 a.m. to have a few hours of overlap with their schedules. My work computer turns itself on at 5:30 a.m., and then all I have to do is drag my coffee and myself to my desk to start work. It might not seem like pressing a power button is a big deal, but at that time of day every little thing looms large. - -Waking up your Linux PC can be less reliable than shutting it down, so you may want to try different methods. You can use wakeonlan, RTC wakeups, or your PC's BIOS to set scheduled wakeups. These all work because, when you power off your computer, it's not really all the way off; it is in an extremely low-power state and can receive and respond to signals. You need to use the power supply switch to turn it off completely. - -### BIOS Wakeup - -A BIOS wakeup is the most reliable. My system BIOS has an easy-to-use wakeup scheduler (Figure 1). Chances are yours does, too. Easy peasy. - -### [fig-1.png][2] - -![wake up](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_11.png?itok=8qAeqo1I) - -Figure 1: My system BIOS has an easy-to-use wakeup scheduler. - -[Used with permission][8] - -### wakeonlan - -wakeonlan is the next most reliable method. This requires sending a signal from a second computer to the computer you want to power on. You could use an Arduino or Raspberry Pi to send the wakeup signal, a Linux-based router, or any Linux PC. First, look in your system BIOS to see if wakeonlan is supported -- which it should be -- and then enable it, as it should be disabled by default. - -Then, you'll need an Ethernet network adapter that supports wakeonlan; wireless adapters won't work. You'll need to verify that your Ethernet card supports wakeonlan: - -``` -# ethtool eth0 | grep -i wake-on - Supports Wake-on: pumbg - Wake-on: g -``` - -The letters in the output have these meanings: - -* d -- all wake ups disabled - -* p -- wake up on physical activity - -* u -- wake up on unicast messages - -* m -- wake up on multicast messages - -* b -- wake up on broadcast messages - -* a -- wake up on ARP messages - -* g -- wake up on magic packet - -* s -- set the Secure On password for the magic packet - -man ethtool is not clear on what the p switch does; it suggests that any signal will cause a wake up. In my testing, however, it doesn't do that. The one that must be enabled is g -- wake up on magic packet, and the Wake-on line shows that it is already enabled.
If it is not enabled, you can use ethtool to enable it, using your own device name, of course: - -``` -# ethtool -s eth0 wol g -``` - -This setting may not persist across reboots, so you can reapply it at every boot from root's crontab: - -``` -@reboot /usr/bin/ethtool -s eth0 wol g -``` - -### [fig-2.png][3] - -![wakeonlan](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_7.png?itok=XQAwmHoQ) - -Figure 2: Enable Wake on LAN. - -[Used with permission][9] - -Another option: recent Network Manager versions have a nice little checkbox to enable wakeonlan (Figure 2). - -There is a field for setting a password, but if your network interface doesn't support the Secure On password, it won't work. - -Now you need to configure a second PC to send the wakeup signal. You don't need root privileges, so create a cron job for your user. You need the MAC address of the network interface on the machine you're waking up: - -``` -30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B -``` - -Using the real-time clock for wakeups is the least reliable method. Check out [Wake Up Linux With an RTC Alarm Clock][4]; this is a bit outdated as most distros use systemd now. Come back next week to learn more about updated ways to use RTC wakeups. - -Learn more about Linux through the free ["Introduction to Linux"][5] course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux-automatically - -作者:[Carla Schroder] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.linux.com/files/images/bannerjpg -[2]:https://www.linux.com/files/images/fig-1png-11 -[3]:https://www.linux.com/files/images/fig-2png-7 -[4]:https://www.linux.com/learn/wake-linux-rtc-alarm-clock -[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux -[6]:https://www.linux.com/licenses/category/creative-commons-attribution -[7]:http://www.columbia.edu/itc/mealac/pritchett/00routesdata/1700_1799/jaipur/delhijantarearly/delhijantarearly.html -[8]:https://www.linux.com/licenses/category/used-permission -[9]:https://www.linux.com/licenses/category/used-permission diff --git a/sources/tech/20171201 Fedora Classroom Session: Ansible 101.md b/sources/tech/20171201 Fedora Classroom Session: Ansible 101.md deleted file mode 100644 index a74b196663..0000000000 --- a/sources/tech/20171201 Fedora Classroom Session: Ansible 101.md +++ /dev/null @@ -1,71 +0,0 @@ -### [Fedora Classroom Session: Ansible 101][2] - -### By Sachin S Kamath - -![](https://fedoramagazine.org/wp-content/uploads/2017/07/fedora-classroom-945x400.jpg) - -Fedora Classroom sessions continue this week with an Ansible session. The general schedule for sessions appears [on the wiki][3]. You can also find [resources and recordings from previous sessions][4] there. Here are details about this week’s session on [Thursday, 30th November at 1600 UTC][5]. That link allows you to convert the time to your timezone. - -### Topic: Ansible 101 - -As the Ansible [documentation][6] explains, Ansible is an IT automation tool. It’s primarily used to configure systems, deploy software, and orchestrate more advanced IT tasks. Examples include continuous deployments or zero downtime rolling updates. - -This Classroom session covers the topics listed below: - -1. Introduction to SSH - -2. Understanding different terminologies - -3. Introduction to Ansible - -4.
Ansible installation and setup - -5. Establishing password-less connection - -6. Ad-hoc commands - -7. Managing inventory - -8. Playbooks examples - -There will also be a follow-up Ansible 102 session later. That session will cover complex playbooks, roles, dynamic inventory files, control flow and Galaxy. - -### Instructors - -We have two experienced instructors handling this session. - -[Geoffrey Marr][7], also known by his IRC name as “coremodule,” is a Red Hat employee and Fedora contributor with a background in Linux and cloud technologies. While working, he spends his time lurking in the [Fedora QA][8] wiki and test pages. Away from work, he enjoys RaspberryPi projects, especially those focusing on software-defined radio. - -[Vipul Siddharth][9] is an intern at Red Hat who also works on Fedora. He loves to contribute to open source and seeks opportunities to spread the word of free and open source software. - -### Joining the session - -This session takes place on [BlueJeans][10]. The following information will help you join the session: - -* URL: [https://bluejeans.com/3466040121][1] - -* Meeting ID (for Desktop App): 3466040121 - -We hope you attend, learn from, and enjoy this session! If you have any feedback about the sessions, have ideas for a new one or want to host a session, please feel free to comment on this post or edit the [Classroom wiki page][11]. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/fedora-classroom-session-ansible-101/ - -作者:[Sachin S Kamath] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://bluejeans.com/3466040121 -[2]:https://fedoramagazine.org/fedora-classroom-session-ansible-101/ -[3]:https://fedoraproject.org/wiki/Classroom -[4]:https://fedoraproject.org/wiki/Classroom#Previous_Sessions -[5]:https://www.timeanddate.com/worldclock/fixedtime.html?msg=Fedora+Classroom+-+Ansible+101&iso=20171130T16&p1=%3A -[6]:http://docs.ansible.com/ansible/latest/index.html -[7]:https://fedoraproject.org/wiki/User:Coremodule -[8]:https://fedoraproject.org/wiki/QA -[9]:https://fedoraproject.org/wiki/User:Siddharthvipul1 -[10]:https://www.bluejeans.com/downloads -[11]:https://fedoraproject.org/wiki/Classroom diff --git a/sources/tech/20171201 How to Manage Users with Groups in Linux.md b/sources/tech/20171201 How to Manage Users with Groups in Linux.md deleted file mode 100644 index 35350c819f..0000000000 --- a/sources/tech/20171201 How to Manage Users with Groups in Linux.md +++ /dev/null @@ -1,168 +0,0 @@ -translating---imquanquan - -How to Manage Users with Groups in Linux -============================================================ - -### [group-of-people-1645356_1920.jpg][1] - -![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/group-of-people-1645356_1920.jpg?itok=rJlAxBSV) - -Learn how to work with users, via groups and access control lists in this tutorial. - -[Creative Commons Zero][4] - -Pixabay - -When you administer a Linux machine that houses multiple users, there might be times when you need to take more control over those users than the basic user tools offer. This idea comes to the fore especially when you need to manage permissions for certain users. Say, for example, you have a directory that needs to be accessed with read/write permissions by one group of users and only read permissions for another group. 
With Linux, this is entirely possible. To make this happen, however, you must first understand how to work with users, via groups and access control lists (ACLs). - -We'll start from the beginning with users and work our way to the more complex ACLs. Everything you need to make this happen will be included in your Linux distribution of choice. We won't touch on the basics of users, as the focus of this article is groups. - -For the purpose of this piece, I'm going to assume the following: - -You need to create two users with usernames: - -* olivia - -* nathan - -You need to create two groups: - -* readers - -* editors - -Olivia needs to be a member of the group editors, while nathan needs to be a member of the group readers. The group readers needs to only have read permission to the directory /DATA, whereas the group editors needs to have both read and write permission to the /DATA directory. This, of course, is very minimal, but it will give you the basic information you need to expand the tasks to fit your much larger needs. - -I'll be demonstrating on the Ubuntu 16.04 Server platform. The commands will be universal—the only difference would be if your distribution of choice doesn't make use of sudo. If this is the case, you'll have to first su to the root user to issue the commands that require sudo in the demonstrations. - -### Creating the users - -The first thing we need to do is create the two users for our experiment. User creation is handled with the useradd command. Instead of just simply creating the users, we'll create them both with their own home directories and then give them passwords. - -The first thing we do is create the users. To do this, issue the commands: - -``` -sudo useradd -m olivia - -sudo useradd -m nathan -``` - -Next, each user must have a password. To add passwords into the mix, you'd issue the following commands: - -``` -sudo passwd olivia - -sudo passwd nathan -``` - -That's it, your users are created. - -### Creating groups and adding users - -Now we're going to create the groups readers and editors and then add users to them. The commands to create our groups are: - -``` -addgroup readers - -addgroup editors -``` - -### [groups_1.jpg][2] - -![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/groups_1.jpg?itok=BKwL89BB) - -Figure 1: Our new groups ready to be used. - -[Used with permission][5] - -With our groups created, we need to add our users. We'll add user nathan to group readers with the command: - -``` -sudo usermod -a -G readers nathan -``` - -Then add user olivia to group editors with the command: - -``` -sudo usermod -a -G editors olivia -``` - -### Giving groups permissions to directories - -Let's say you have the directory /READERS and you need to allow all members of the readers group access to that directory. First, change the group of the folder with the command: - -``` -sudo chown -R :readers /READERS -``` - -Next, remove write permission from the group, so that members of readers can only read the contents: - -``` -sudo chmod -R g-w /READERS -``` - -Then remove the execute bit for others, so that users outside the group cannot enter the directory: - -``` -sudo chmod -R o-x /READERS -``` - -Let's say you have the directory /EDITORS and you need to give members of the editors group read and write permission to its contents. To do that, the following commands would be necessary: - -``` -sudo chown -R :editors /EDITORS - -sudo chmod -R g+w /EDITORS - -sudo chmod -R o-x /EDITORS -``` - -The problem with using this method is you can only add one group to a directory at a time. This is where access control lists come in handy. - -### Using access control lists - -Now, let's get tricky.
Say you have a single folder—/DATA—and you want to give members of the readers group read permission and members of the group editors read/write permissions. To do that, you must take advantage of the setfacl command. The setfacl command sets file access control lists for files and folders. - -The structure of this command looks like this: - -``` -setfacl OPTION X:NAME:Y /DIRECTORY -``` - -where OPTION is one or more setfacl options, X is either u (for a user) or g (for a group), NAME is the name of that user or group, and Y is the permission string to apply. To give members of the readers group read access to /DATA (the x permission lets them traverse into subdirectories), we'd issue the command: - -``` -sudo setfacl -m g:readers:rx -R /DATA -``` - -To give members of the editors group read/write permissions (while retaining read permissions for the readers group), we'd issue the command: - -``` -sudo setfacl -m g:editors:rwx -R /DATA -``` - -### All the control you need - -And there you have it. You can now add members to groups and control those groups' access to various directories with all the power and flexibility you need. To read more about the above tools, issue the commands: - -* man useradd - -* man addgroup - -* man usermod - -* man setfacl - -* man chown - -* man chmod - -Learn more about Linux through the free ["Introduction to Linux"][3] course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2017/12/how-manage-users-groups-linux - -作者:[Jack Wallen] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.linux.com/files/images/group-people-16453561920jpg -[2]:https://www.linux.com/files/images/groups1jpg -[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux -[4]:https://www.linux.com/licenses/category/creative-commons-zero -[5]:https://www.linux.com/licenses/category/used-permission diff --git a/sources/tech/20171201 How to find a publisher for your tech book.md b/sources/tech/20171201 How to find a publisher for your tech book.md deleted file mode 100644 index 76dc8112ca..0000000000 --- a/sources/tech/20171201 How to find a publisher for your tech book.md +++ /dev/null @@ -1,76 +0,0 @@ -How to find a publisher for your tech book -============================================================ - -### Writing a technical book takes more than a good idea. You need to know a bit about how the publishing industry works. - - -![How to find a publisher for your tech book](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_colorbooks.png?itok=vNhsYYyC "How to find a publisher for your tech book") -Image by : opensource.com - -You've got an idea for a technical book—congratulations! Like hiking the Appalachian Trail, or learning to cook a soufflé, writing a book is one of those things that people talk about, but never take beyond the idea stage. That makes sense, because the failure rate is pretty high. Making it real involves putting your idea in front of a publisher, and finding out whether it's good enough to become a book. That step is scary enough, but the lack of information about how to do it complicates matters. - -If you want to work with a traditional publisher, you'll need to get your book in front of them and hopefully start on the path to publication. I'm the Managing Editor at the [Pragmatic Bookshelf][4], so I see proposals all the time, as well as helping authors to craft good ones. Some are good, others are bad, but I often see proposals that just aren't right for Pragmatic.
I'll help you with the process of finding the right publisher, and how to get your idea noticed. - -### Identify your target - -Your first step is to figure out which publisher is a good fit for your idea. To start, think about the publishers that you buy books from, and that you enjoy. The odds are pretty good that your book will appeal to people like you, so starting with your favorites makes for a pretty good short list. If you don't have much of a book collection, you can visit a bookstore, or take a look on Amazon. Make a list of a handful of publishers that you personally like to start with. - -Next, winnow your prospects. Although most technical publishers look alike from a distance, they often have distinctive audiences. Some publishers go for broadly popular topics, such as C++ or Java. Your book on Elixir may not be a good fit for that publisher. If your prospective book is about teaching programming to kids, you probably don't want to go with a traditional academic publisher. - -Once you've identified a few targets, do some more research into the publishers' catalogs, either on their own site, or on Amazon. See what books they have that are similar to your idea. If they have a book that's identical, or nearly so, you'll have a tough time convincing them to sign yours. That doesn't necessarily mean you should drop that publisher from your list. You can make some changes to your proposal to differentiate it from the existing book: target a different audience, or a different skill level. Maybe the existing book is outdated, and you could focus on new approaches to the technology. Make your proposal into a book that complements the existing one, rather than competes. - -If your target publisher has no books that are similar, that can be a good sign, or a very bad one. Sometimes publishers choose not to publish on specific technologies, either because they don't believe their audience is interested, or they've had trouble with that technology in the past. New languages and libraries pop up all the time, and publishers have to make informed guesses about which will appeal to their readers. Their assessment may not be the same as yours. Their decision might be final, or they might be waiting for the right proposal. The only way to know is to propose and find out. - -### Work your network - -Identifying a publisher is the first step; now you need to make contact. Unfortunately, publishing is still about _who_ you know, more than _what_ you know. The person you want to know is an _acquisitions editor_, the editor whose job is to find new markets, authors, and proposals. If you know someone who has connections with a publisher, ask for an introduction to an acquisitions editor. These editors often specialize in particular subject areas, particularly at larger publishers, but you don't need to find the right one yourself. They're usually happy to connect you with the correct person. - -Sometimes you can find an acquisitions editor at a technical conference, especially one where the publisher is a sponsor, and has a booth. Even if there's not an acquisitions editor on site at the time, the staff at the booth can put you in touch with one. If conferences aren't your thing, you'll need to work your network to get an introduction. Use LinkedIn, or your informal contacts, to get in touch with an editor. - -For smaller publishers, you may find acquisitions editors listed on the company website, with contact information if you're lucky.
If not, search for the publisher's name on Twitter, and see if you can turn up their editors. You might be nervous about trying to reach out to a stranger over social media to show them your book, but don't worry about it. Making contact is what acquisitions editors do. The worst-case result is they ignore you. - -Once you've made contact, the acquisitions editor will assist you with the next steps. They may have some feedback on your proposal right away, or they may want you to flesh it out according to their guidelines before they'll consider it. After you've put in the effort to find an acquisitions editor, listen to their advice. They know their system better than you do. - -### If all else fails - -If you can't find an acquisitions editor to contact, the publisher almost certainly has a blind proposal alias, usually of the form `proposals@[publisher].com`. Check the web site for instructions on what to send to a proposal alias; some publishers have specific requirements. Follow these instructions. If you don't, you have a good chance of your proposal getting thrown out before anybody looks at it. If you have questions, or aren't sure what the publisher wants, you'll need to try again to find an editor to talk to, because the proposal alias is not the place to get questions answered. Put together what they've asked for (which is a topic for a separate article), send it in, and hope for the best. - -### And ... wait - -No matter how you've gotten in touch with a publisher, you'll probably have to wait. If you submitted to the proposals alias, it's going to take a while before somebody does anything with that proposal, especially at a larger company. Even if you've found an acquisitions editor to work with, you're probably one of many prospects she's working with simultaneously, so you might not get rapid responses. Almost all publishers have a committee that decides on which proposals to accept, so even if your proposal is awesome and ready to go, you'll still need to wait for the committee to meet and discuss it. You might be waiting several weeks, or even a month before you hear anything. - -After a couple of weeks, it's fine to check back in with the editor to see if they need any more information. You want to be polite in this e-mail; if they haven't answered because they're swamped with proposals, being pushy isn't going to get you to the front of the line. It's possible that some publishers will never respond at all instead of sending a rejection notice, but that's uncommon. There's not a lot to do at this point other than be patient. Of course, if it's been months and nobody's returning your e-mails, you're free to approach a different publisher or consider self-publishing. - -### Good luck - -If this process seems somewhat scattered and unscientific, you're right; it is. Getting published depends on being in the right place, at the right time, talking to the right person, and hoping they're in the right mood. You can't control all of those variables, but having a better knowledge of how the industry works, and what publishers are looking for, can help you optimize the ones you can control. - -Finding a publisher is one step in a lengthy process. You need to refine your idea and create the proposal, as well as other considerations. At SeaGL this year [I presented][5] an introduction to the entire process. Check out [the video][6] for more detailed information. 
- -### About the author - - [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/portrait.jpg?itok=b77dlNC4)][7] - - Brian MacDonald - Brian MacDonald is Managing Editor at the Pragmatic Bookshelf. Over the last 20 years in tech publishing, he's been an editor, author, and occasional speaker and trainer. He currently spends a lot of his time talking to new authors about how they can best present their ideas. You can follow him on Twitter at @bmac_editor.[More about me][2] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/12/how-find-publisher-your-book - -作者:[Brian MacDonald ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/bmacdonald -[1]:https://opensource.com/article/17/12/how-find-publisher-your-book?rate=o42yhdS44MUaykAIRLB3O24FvfWxAxBKa5WAWSnSY0s -[2]:https://opensource.com/users/bmacdonald -[3]:https://opensource.com/user/190176/feed -[4]:https://pragprog.com/ -[5]:https://archive.org/details/SeaGL2017WritingTheNextGreatTechBook -[6]:https://archive.org/details/SeaGL2017WritingTheNextGreatTechBook -[7]:https://opensource.com/users/bmacdonald -[8]:https://opensource.com/users/bmacdonald -[9]:https://opensource.com/users/bmacdonald -[10]:https://opensource.com/article/17/12/how-find-publisher-your-book#comments diff --git a/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md b/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md deleted file mode 100644 index b0f8e72018..0000000000 --- a/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md +++ /dev/null @@ -1,160 +0,0 @@ -Randomize your WiFi MAC address on Ubuntu 16.04 -============================================================ - - _Your device’s MAC address can be used to track you across the WiFi networks you connect to. That data can be shared and sold, and often identifies you as an individual. It’s possible to limit this tracking by using pseudo-random MAC addresses._ - -![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif) - - _Image courtesy of [Cloudessa][4]_ - -Every network device like a WiFi or Ethernet card has a unique identifier called a MAC address, for example `b4:b6:76:31:8c:ff`. It’s how networking works: any time you connect to a WiFi network, the router uses that address to send and receive packets to your machine and distinguish it from other devices in the area. - -The snag with this design is that your unique, unchanging MAC address is just perfect for tracking you. Logged into Starbucks WiFi? Noted. London Underground? Logged. - -If you’ve ever put your real name into one of those Craptive Portals on a WiFi network you’ve now tied your identity to that MAC address. Didn’t read the terms and conditions? You might assume that free airport WiFi is subsidised by flogging ‘customer analytics’ (your personal information) to hotels, restaurant chains and whomever else wants to know about you. - -I don’t subscribe to being tracked and sold by mega-corps, so I spent a few hours hacking a solution. - -### MAC addresses don’t need to stay the same - -Fortunately, it’s possible to spoof your MAC address to a random one without fundamentally breaking networking. 
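- -As a minimal sketch of the idea (the interface name and the address below are only examples), you can assign a new MAC by hand with the `ip` tool, although, as the next section shows, NetworkManager may simply override it: - -``` -sudo ip link set dev wlp1s0 down -sudo ip link set dev wlp1s0 address 02:12:34:56:78:9a -sudo ip link set dev wlp1s0 up -```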
- -I wanted to randomize my MAC address, but with three particular caveats: - -1. The MAC should be different across different networks. This means Starbucks WiFi sees a different MAC from London Underground, preventing linking my identity across different providers. - -2. The MAC should change regularly to prevent a network knowing that I’m the same person who walked past 75 times over the last year. - -3. The MAC stays the same throughout each working day. When the MAC address changes, most networks will kick you off, and those with Craptive Portals will usually make you sign in again - annoying. - -### Manipulating NetworkManager - -My first attempt at using the `macchanger` tool was unsuccessful, as NetworkManager would override the MAC address according to its own configuration. - -I learned that NetworkManager 1.4.1+ can do MAC address randomization right out of the box. If you’re using Ubuntu 17.04 upwards, you can get most of the way with [this config file][7]. You can’t quite achieve all three of my requirements (you must choose _random_ or _stable_ but it seems you can’t do _stable-for-one-day_). - -Since I’m sticking with Ubuntu 16.04, which ships with NetworkManager 1.2, I couldn’t make use of the new functionality. Supposedly there is some randomization support but I failed to actually make it work, so I scripted up a solution instead. - -Fortunately NetworkManager 1.2 does allow for spoofing your MAC address. You can see this in the ‘Edit connections’ dialog for a given network: - -![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png) - -NetworkManager also supports hooks - any script placed in `/etc/NetworkManager/dispatcher.d/pre-up.d/` is run before a connection is brought up. - -### Assigning pseudo-random MAC addresses - -To recap, I wanted to generate random MAC addresses based on the _network_ and the _date_. We can use the NetworkManager command line, nmcli, to show a full list of networks: - -``` -> nmcli connection -NAME UUID TYPE DEVICE -Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0 -DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless -- -PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless -- -virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless -- - -``` - -Since each network has a unique identifier, to achieve my scheme I just concatenated the UUID with today’s date and hashed the result: - -``` - -# eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03 - -> echo -n "${UUID}-$(date +%F)" | md5sum - -53594de990e92f9b914a723208f22b3f - - -``` - -That produced bytes which can be substituted in for the last octets of the MAC address. - -Note that the first byte `02` signifies the address is [locally administered][8]. Real, burned-in MAC addresses start with 3 bytes designating their manufacturer, for example `b4:b6:76` for Intel. - -It’s possible that some routers may reject locally administered MACs but I haven’t encountered that yet.
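- -Condensed into a few lines of shell, the whole derivation looks like this (the UUID is the example one from the listing above, and the `sed` expression simply picks out the first five byte pairs of the hash): - -``` -UUID=618545ca-d81a-11e7-a2a4-271245e11a45 -UUID_DAILY_HASH=$(echo -n "${UUID}-$(date +%F)" | md5sum) -MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')" -echo ${MAC}  # for 2017-12-03 this would print 02:53:59:4d:e9:90 -```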
- -On every connection up, the script calls `nmcli` to set the spoofed MAC address for every connection: - -![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png) - -As a final check, if I look at `ifconfig` I can see that the `HWaddr` is the spoofed one, not my real MAC address: - -``` -> ifconfig -wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d - inet addr:192.168.0.86 Bcast:192.168.0.255 Mask:255.255.255.0 - inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link - UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 - RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0 - TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0 - collisions:0 txqueuelen:1000 - RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB) - -``` - -The full script is [available on Github][9]. - -``` -#!/bin/sh - -# /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses - -# Configure every saved WiFi connection in NetworkManager with a spoofed MAC -# address, seeded from the UUID of the connection and the date eg: -# 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20' - -# This makes your MAC impossible(?) to track across WiFi providers, and -# for one provider to track across days. - -# For craptive portals that authenticate based on MAC, you might want to -# automate logging in :) - -# Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar -# automatically. - -export PATH=$PATH:/usr/bin:/bin - -LOG_FILE=/var/log/randomize-mac-addresses - -echo "$(date): $*" > ${LOG_FILE} - -WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3) - -for UUID in ${WIFI_UUIDS} -do - UUID_DAILY_HASH=$(echo -n "${UUID}-$(date +%F)" | md5sum) - - RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')" - - CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}" - - echo "$CMD" >> ${LOG_FILE} - $CMD & -done - -wait -``` -Enjoy! - - _Update: [Use locally administered MAC addresses][5] to avoid clashing with real Intel ones.
Thanks [@_fink][6]_ - --------------------------------------------------------------------------------- - -via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/ - -作者:[Paul M Furley ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.paulfurley.com/ -[1]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/raw/5f02fc8f6ff7fca5bca6ee4913c63bf6de15abca/randomize-mac-addresses -[2]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f#file-randomize-mac-addresses -[3]:https://github.com/ -[4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/ -[5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef -[6]:https://twitter.com/fink_/status/937305600005943296 -[7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/ -[8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local -[9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f diff --git a/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md b/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md deleted file mode 100644 index dbdebf63e3..0000000000 --- a/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md +++ /dev/null @@ -1,321 +0,0 @@ -Python -============================================================ - -Python has rich tools for packaging, distributing and sandboxing applications. Snapcraft builds on top of these familiar tools such as `pip`, `setup.py` and `requirements.txt` to create snaps for people to install on Linux. - -### What problems do snaps solve for Python applications? - -Linux install instructions for Python applications often get complicated. System dependencies, which differ from distribution to distribution, must be separately installed. To prevent modules from different Python applications clashing with each other, developer tools like `virtualenv` or `venv` must be used. With snapcraft it’s one command to produce a bundle that works anywhere. - -Here are some snap advantages that will benefit many Python projects: - -* Bundle all the runtime requirements, including the exact versions of system libraries and the Python interpreter. - -* Simplify installation instructions, regardless of distribution, to `snap install mypythonapp`. - -* Directly control the delivery of automatic application updates. - -* Extremely simple creation of daemons. - -### Getting started - -Let’s take a look at offlineimap and youtube-dl by way of examples. Both are command line applications. offlineimap uses Python 2 and only has Python module requirements. youtube-dl uses Python 3 and has system package requirements, in this case `ffmpeg`. - -### offlineimap - -Snaps are defined in a single yaml file placed in the root of your project. The offlineimap example shows the entire `snapcraft.yaml` for an existing project. We’ll break this down. - -``` -name: offlineimap -version: git -summary: OfflineIMAP -description: | - OfflineIMAP is software that downloads your email mailbox(es) as local - Maildirs. OfflineIMAP will synchronize both sides via IMAP. 
- -grade: devel -confinement: devmode - -apps: - offlineimap: - command: bin/offlineimap - -parts: - offlineimap: - plugin: python - python-version: python2 - source: . - -``` - -#### Metadata - -The `snapcraft.yaml` starts with a small amount of human-readable metadata, which usually can be lifted from the GitHub description or project README.md. This data is used in the presentation of your app in the Snap Store. The `summary:` cannot exceed 79 characters. You can use a pipe with the `description:` to declare a multi-line description. - -``` -name: offlineimap -version: git -summary: OfflineIMAP -description: | - OfflineIMAP is software that downloads your email mailbox(es) as local - Maildirs. OfflineIMAP will synchronize both sides via IMAP. - -``` - -#### Confinement - -To get started we won’t confine this application. Unconfined applications, specified with `devmode`, can only be released to the hidden “edge” channel where you and other developers can install them. - -``` -confinement: devmode - -``` - -#### Parts - -Parts define how to build your app. Parts can be anything: programs, libraries, or other assets needed to create and run your application. In this case we have one: the offlineimap source code. In other cases these can point to local directories, remote git repositories, or tarballs. - -The Python plugin will also bundle Python in the snap, so you can be sure that the version of Python you test against is included with your app. Dependencies from `install_requires` in your `setup.py` will also be bundled. Dependencies from a `requirements.txt` file can also be bundled using the `requirements:` option. - -``` -parts: - offlineimap: - plugin: python - python-version: python2 - source: . - -``` - -#### Apps - -Apps are the commands and services exposed to end users. If your command name matches the snap `name`, users will be able to run the command directly. If the names differ, then apps are prefixed with the snap `name` (`offlineimap.command-name`, for example). This is to avoid conflicting with apps defined by other installed snaps. - -If you don’t want your command prefixed you can request an alias for it on the [Snapcraft forum][1]. These command aliases are set up automatically when your snap is installed from the Snap Store. - -``` -apps: - offlineimap: - command: bin/offlineimap - -``` - -If your application is intended to run as a service, add the line `daemon: simple` after the command keyword. This will automatically keep the service running on install, update and reboot. - -### Building the snap - -You’ll first need to [install snap support][2], and then install the snapcraft tool: - -``` -sudo snap install --beta --classic snapcraft - -``` - -If you have just installed snap support, start a new shell so your `PATH` is updated to include `/snap/bin`. You can then build this example yourself: - -``` -git clone https://github.com/snapcraft-docs/offlineimap -cd offlineimap -snapcraft - -``` - -The resulting snap can be installed locally. This requires the `--dangerous` flag because the snap is not signed by the Snap Store. The `--devmode` flag acknowledges that you are installing an unconfined application: - -``` -sudo snap install offlineimap_*.snap --devmode --dangerous - -``` - -You can then try it out: - -``` -offlineimap - -``` - -Removing the snap is simple too: - -``` -sudo snap remove offlineimap - -```
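- -If you tweak the `snapcraft.yaml` while experimenting, one possible rebuild loop is the sketch below; `snapcraft clean` with a part name clears that part's build state so it is rebuilt from scratch: - -``` -snapcraft clean offlineimap -snapcraft -sudo snap install offlineimap_*.snap --devmode --dangerous -```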
- -Jump ahead to [Share with your friends][3] or continue to read another example. - -### youtube-dl - -The youtube-dl example shows a `snapcraft.yaml` using a tarball of a Python application and `ffmpeg` bundled in the snap to satisfy the runtime requirements. Here is the entire `snapcraft.yaml` for youtube-dl. We’ll break this down. - -``` -name: youtube-dl -version: 2017.06.18 -summary: YouTube Downloader. -description: | - youtube-dl is a small command-line program to download videos from - YouTube.com and a few more sites. - -grade: devel -confinement: devmode - -parts: - youtube-dl: - source: https://github.com/rg3/youtube-dl/archive/$SNAPCRAFT_PROJECT_VERSION.tar.gz - plugin: python - python-version: python3 - after: [ffmpeg] - -apps: - youtube-dl: - command: bin/youtube-dl - -``` - -#### Parts - -The `$SNAPCRAFT_PROJECT_VERSION` variable is derived from the `version:` stanza and used here to reference the matching release tarball. Because the `python` plugin is used, snapcraft will bundle a copy of Python in the snap using the version specified in the `python-version:` stanza, in this case Python 3. - -youtube-dl makes use of `ffmpeg` to transcode or otherwise convert the audio and video file it downloads. In this example, youtube-dl is told to build after the `ffmpeg` part. Because the `ffmpeg` part specifies no plugin, it will be fetched from the parts repository. This is a collection of community-contributed definitions which can be used by anyone when building a snap, saving you from needing to specify the source and build rules for each system dependency. You can use `snapcraft search` to find more parts to use and `snapcraft define <part-name>` to verify how the part is defined. - -``` -parts: - youtube-dl: - source: https://github.com/rg3/youtube-dl/archive/$SNAPCRAFT_PROJECT_VERSION.tar.gz - plugin: python - python-version: python3 - after: [ffmpeg] - -``` - -### Building the snap - -You can build this example yourself by running the following: - -``` -git clone https://github.com/snapcraft-docs/youtube-dl -cd youtube-dl -snapcraft - -``` - -The resulting snap can be installed locally. This requires the `--dangerous` flag because the snap is not signed by the Snap Store. The `--devmode` flag acknowledges that you are installing an unconfined application: - -``` -sudo snap install youtube-dl_*.snap --devmode --dangerous - -``` - -Run the command: - -``` -youtube-dl "https://www.youtube.com/watch?v=k-laAxucmEQ" - -``` - -Removing the snap is simple too: - -``` -sudo snap remove youtube-dl - -``` - -### Share with your friends - -To share your snaps you need to publish them in the Snap Store. First, create an account on [the dashboard][4]. Here you can customize how your snaps are presented, review your uploads and control publishing. - -You’ll need to choose a unique “developer namespace” as part of the account creation process. This name will be visible to users and associated with your published snaps. - -Make sure the `snapcraft` command is authenticated using the email address attached to your Snap Store account: - -``` -snapcraft login - -``` - -### Reserve a name for your snap - -You can publish your own version of a snap, provided you do so under a name you have rights to. - -``` -snapcraft register mypythonsnap - -``` - -Be sure to update the `name:` in your `snapcraft.yaml` to match this registered name, then run `snapcraft` again. - -### Upload your snap - -Use snapcraft to push the snap to the Snap Store.
- -``` -snapcraft push --release=edge mypythonsnap_*.snap - -``` - -If you’re happy with the result, you can commit the snapcraft.yaml to your GitHub repo and [turn on automatic builds][5] so any further commits automatically get released to edge, without requiring you to manually build locally. - -### Further customisations - -Here are all the Python plugin-specific keywords: - -``` -- requirements: - (string) - Path to a requirements.txt file -- constraints: - (string) - Path to a constraints file -- process-dependency-links: - (bool; default: false) - Enable the processing of dependency links in pip, which allow one project - to provide places to look for another project -- python-packages: - (list) - A list of dependencies to get from PyPI -- python-version: - (string; default: python3) - The python version to use. Valid options are: python2 and python3 - -``` - -You can view them locally by running: - -``` -snapcraft help python - -``` - -### Extending and overriding behaviour - -You can [extend the behaviour][6] of any part in your `snapcraft.yaml` with shell commands. These can be run after pulling the source code but before building by using the `prepare` keyword. The build process can be overridden entirely using the `build` keyword and shell commands. The `install` keyword is used to run shell commands after building your code, useful for making post-build modifications such as relocating build assets. - -Using the youtube-dl example above, we can run the test suite at the end of the build. If this fails, the snap creation will be terminated: - -``` -parts: - youtube-dl: - source: https://github.com/rg3/youtube-dl/archive/$SNAPCRAFT_PROJECT_VERSION.tar.gz - plugin: python - python-version: python3 - stage-packages: [ffmpeg, python-nose] - install: | - nosetests -``` - --------------------------------------------------------------------------------- - -via: https://docs.snapcraft.io/build-snaps/python - -作者:[Snapcraft.io][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:Snapcraft.io - -[1]:https://forum.snapcraft.io/t/process-for-reviewing-aliases-auto-connections-and-track-requests/455 -[2]:https://docs.snapcraft.io/core/install -[3]:https://docs.snapcraft.io/build-snaps/python#share-with-your-friends -[4]:https://dashboard.snapcraft.io/openid/login/?next=/dev/snaps/ -[5]:https://build.snapcraft.io/ -[6]:https://docs.snapcraft.io/build-snaps/scriptlets diff --git a/sources/tech/20171202 Scrot Linux command-line screen grabs made simple b/sources/tech/20171202 Scrot Linux command-line screen grabs made simple deleted file mode 100644 index 979ed86b3c..0000000000 --- a/sources/tech/20171202 Scrot Linux command-line screen grabs made simple +++ /dev/null @@ -1,72 +0,0 @@ -Translating by filefi - -# Scrot: Linux command-line screen grabs made simple - -by [Scott Nesbitt][a] · November 30, 2017 - -> Scrot is a basic, flexible tool that offers a number of handy options for taking screen captures from the Linux command line. - -[![Original photo by Rikki Endsley. CC BY-SA 4.0](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A)][1] - -There are great tools on the Linux desktop for taking screen captures, such as [KSnapshot][2] and [Shutter][3]. Even the simple utility that comes with the GNOME desktop does a pretty good job of capturing screens.
But what if you rarely need to take screen captures? Or you use a Linux distribution without a built-in capture tool, or an older computer with limited resources? - -Turn to the command line and a little utility called [Scrot][4]. It does a fine job of taking simple screen captures, and it includes a few features that might surprise you. - -### Getting started with Scrot - -Many Linux distributions come with Scrot already installed—to check, type `which scrot`. If it isn't there, you can install Scrot using your distro's package manager. If you're willing to compile the code, grab it [from GitHub][5]. - -To take a screen capture, crack open a terminal window and type `scrot [filename]`, where `[filename]` is the name of the file to which you want to save the image (for example, `desktop.png`). If you don't include a name for the file, Scrot will create one for you, such as `2017-09-24-185009_1687x938_scrot.png`. (That filename isn't as descriptive as it could be, is it? That's why it's better to add one to the command.) - -Running Scrot with no options takes a screen capture of your entire desktop. If you don't want to do that, Scrot lets you focus on smaller portions of your screen. - -### Taking a screen capture of a single window - -Tell Scrot to take a screen capture of a single window by typing `scrot -u [filename]`. - -The `-u` option tells Scrot to grab the window currently in focus. That's usually the terminal window you're working in, which might not be the one you want. - -To grab another window on your desktop, type `scrot -s [filename]`. - -The `-s` option lets you do one of two things: - -* select an open window, or - -* draw a rectangle around a window or a portion of a window to capture it. - -You can also set a delay, which gives you a little more time to select the window you want to capture. To do that, type `scrot -u -d [num] [filename]`. - -The `-d` option tells Scrot to wait before grabbing the window, and `[num]` is the number of seconds to wait. Specifying `-d 5` (wait five seconds) should give you enough time to choose a window. - -### More useful options - -Scrot offers a number of additional features (most of which I never use). The ones I find most useful include: - -* `-b` also grabs the window's border - -* `-t` grabs a window and creates a thumbnail of it. This can be useful when you're posting screen captures online. - -* `-c` creates a countdown in your terminal when you use the `-d` option. - -To learn about Scrot's other options, check out its documentation by typing `man scrot` in a terminal window, or [read it online][6]. Then start snapping images of your screen. - -It's basic, but Scrot gets the job done nicely.
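- -As a quick illustration, here is one way the options above can be combined (the filenames are just examples, and `-t` takes the thumbnail size as a percentage of the original image): - -``` -# Grab the focused window with its border after a five-second countdown: -scrot -u -b -d 5 -c focused-window.png - -# Grab the whole desktop and also save a thumbnail at 25% size: -scrot -t 25 desktop.png -```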
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot - -作者:[Scott Nesbitt][a] -译者:[filefi](https://github.com/filefi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/scottnesbitt -[1]:https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A -[2]:https://www.kde.org/applications/graphics/ksnapshot/ -[3]:https://launchpad.net/shutter -[4]:https://github.com/dreamer/scrot -[5]:http://manpages.ubuntu.com/manpages/precise/man1/scrot.1.html -[6]:https://github.com/dreamer/scrot diff --git a/sources/tech/20171202 docker - Use multi-stage builds.md b/sources/tech/20171202 docker - Use multi-stage builds.md deleted file mode 100644 index e1a6414862..0000000000 --- a/sources/tech/20171202 docker - Use multi-stage builds.md +++ /dev/null @@ -1,127 +0,0 @@ -Use multi-stage builds -============================================================ - -Multi-stage builds are a new feature requiring Docker 17.05 or higher on the daemon and client. Multistage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain. - -> Acknowledgment: Special thanks to [Alex Ellis][1] for granting permission to use his blog post [Builder pattern vs. Multi-stage builds in Docker][2] as the basis of the examples below. - -### Before multi-stage builds - -One of the most challenging things about building images is keeping the image size down. Each instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you don’t need before moving on to the next layer. To write a really efficient Dockerfile, you have traditionally needed to employ shell tricks and other logic to keep the layers as small as possible and to ensure that each layer has the artifacts it needs from the previous layer and nothing else. - -It was actually very common to have one Dockerfile to use for development (which contained everything needed to build your application), and a slimmed-down one to use for production, which only contained your application and exactly what was needed to run it. This has been referred to as the “builder pattern”. Maintaining two Dockerfiles is not ideal. - -Here’s an example of a `Dockerfile.build` and `Dockerfile` which adhere to the builder pattern above: - -`Dockerfile.build`: - -``` -FROM golang:1.7.3 -WORKDIR /go/src/github.com/alexellis/href-counter/ -RUN go get -d -v golang.org/x/net/html -COPY app.go . -RUN go get -d -v golang.org/x/net/html \ - && CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app . - -``` - -Notice that this example also artificially compresses two `RUN` commands together using the Bash `&&` operator, to avoid creating an additional layer in the image. This is failure-prone and hard to maintain. It’s easy to insert another command and forget to continue the line using the `\` character, for example. - -`Dockerfile`: - -``` -FROM alpine:latest -RUN apk --no-cache add ca-certificates -WORKDIR /root/ -COPY app . -CMD ["./app"] - -``` - -`build.sh`: - -``` -#!/bin/sh -echo Building alexellis2/href-counter:build - -docker build --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \ - -t alexellis2/href-counter:build . 
-f Dockerfile.build
-
-docker create --name extract alexellis2/href-counter:build
-docker cp extract:/go/src/github.com/alexellis/href-counter/app ./app
-docker rm -f extract
-
-echo Building alexellis2/href-counter:latest
-
-docker build --no-cache -t alexellis2/href-counter:latest .
-rm ./app
-
-```
-
-When you run the `build.sh` script, it needs to build the first image, create a container from it in order to copy the artifact out, then build the second image. Both images take up room on your system and you still have the `app` artifact on your local disk as well.
-
-Multi-stage builds vastly simplify this situation!
-
-### Use multi-stage builds
-
-With multi-stage builds, you use multiple `FROM` statements in your Dockerfile. Each `FROM` instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image. To show how this works, let’s adapt the Dockerfile from the previous section to use multi-stage builds.
-
-`Dockerfile`:
-
-```
-FROM golang:1.7.3
-WORKDIR /go/src/github.com/alexellis/href-counter/
-RUN go get -d -v golang.org/x/net/html
-COPY app.go .
-RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
-
-FROM alpine:latest
-RUN apk --no-cache add ca-certificates
-WORKDIR /root/
-COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
-CMD ["./app"]
-
-```
-
-You only need the single Dockerfile. You don’t need a separate build script, either. Just run `docker build`.
-
-```
-$ docker build -t alexellis2/href-counter:latest .
-
-```
-
-The end result is the same tiny production image as before, with a significant reduction in complexity. You don’t need to create any intermediate images and you don’t need to extract any artifacts to your local system at all.
-
-How does it work? The second `FROM` instruction starts a new build stage with the `alpine:latest` image as its base. The `COPY --from=0` line copies just the built artifact from the previous stage into this new stage. The Go SDK and any intermediate artifacts are left behind, and not saved in the final image.
-
-### Name your build stages
-
-By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first `FROM` instruction. However, you can name your stages, by adding an `as <NAME>` to the `FROM` instruction. This example improves the previous one by naming the stages and using the name in the `COPY` instruction. This means that even if the instructions in your Dockerfile are re-ordered later, the `COPY` won’t break.
-
-```
-FROM golang:1.7.3 as builder
-WORKDIR /go/src/github.com/alexellis/href-counter/
-RUN go get -d -v golang.org/x/net/html
-COPY app.go .
-RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
-
-FROM alpine:latest
-RUN apk --no-cache add ca-certificates
-WORKDIR /root/
-COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
-CMD ["./app"] -``` - --------------------------------------------------------------------------------- - -via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/#name-your-build-stages - -作者:[docker docs ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://docs.docker.com/engine/userguide/eng-image/multistage-build/ -[1]:https://twitter.com/alexellisuk -[2]:http://blog.alexellis.io/mutli-stage-docker-builds/ diff --git a/translated/tech/20090701 The One in Which I Call Out Hacker News.md b/translated/tech/20090701 The One in Which I Call Out Hacker News.md deleted file mode 100644 index 670be95353..0000000000 --- a/translated/tech/20090701 The One in Which I Call Out Hacker News.md +++ /dev/null @@ -1,99 +0,0 @@ -我号召黑客新闻的理由之一 -实现高速缓存会花费 30 个小时,你有额外的 30 个小时吗? -不,你没有。 -我实际上并不知道它会花多少时间,可能它会花五分钟,你有五分钟吗?不,你还是没有。为什么?因为我在撒谎。它会消耗远超五分钟的时间,这是程序员永远的 -乐观主义。 -- Owen Astrachan 教授于 2004 年 2 月 23 日在 CPS 108 上的讲座 - -指责开源软件的使用存在着高昂的代价已经不是一个新论点了,它之前就被提过,而且说的比我更有信服力,即使一些人已经在高度赞扬开源软件的运作。 -这种事为什么会重复发生? - -在周一的黑客新闻上,我愉悦地看着某些人一边说写 Stack Overflow 简单的简直搞笑,一边通过允许七月第四个周末之后的克隆来开始备份他们的提问。 -其他的声明中也指出现存的克隆是一个好的出发点。 - -让我们假设,为了争辩,你觉得将自己的 Stack Overflow 通过 ASP.NET 和 MVC 克隆是正确的,然后被一块廉价的手表和一个小型俱乐部头领忽悠之后, -决定去手动拷贝你 Stack Overflow 的源代码,一页又一页,所以你可以逐字逐句地重新输入,我们同样会假定你像我一样打字,很酷的有 100 WPM -(差不多每秒8个字符),不和我一样的话,你不会犯错。 - - Stack Overflow 的 *.cs、*.sql、*.css、*.js 和 *.aspx 文件大约 2.3 MB,因此如果你想将这些源代码输进电脑里去的话,即使你不犯错也需要大约 80 个小时。 - -除非......当然,你是不会那样做的:你打算从头开始实现 Stack Overflow 。所以即使我们假设,你花了十倍的时间去设计、输出,然后调试你自己的实现而不是去拷 -贝已有的那份,那已经让你已经编译了好几个星期。我不知道你,但是我可以承认我写的新代码大大小于我复制的现有代码的十分之一。 - -好,ok,我听见你松了口气。所以不是全部。但是我可以做大部分。 - -行,所以什么是大部分?这只是询问和回答问题,这个部分很简单。那么,除了你必须实现对问题和答案投票、赞同还是反对,而且提问者应该能够去接收每一个问题的 -单一答案。你不能让人们赞同或者反对他们自己的回答。所以你需要去阻止。你需要去确保用户在一定的时间内不会赞同或反对其他用户太多次。以预防垃圾邮件, -你可能也需要去实现一个垃圾邮件过滤器,即使在一个基本的设计里,也要考虑到这一点。而且还需要去支持用户图标。并且你将不得不寻找一个自己真正信任的并且 -与 markdown 接合很好的 HTML 库(当然,你确实希望重新使用那个令人敬畏的编辑器 Stack Overflow ),你还需要为所有控件购买,设计或查找小部件,此外 -你至少需要一个基本的管理界面,以便用户可以调节,并且你需要实现可扩展的业务量,以便能稳定地给用户越来越多的功能去实现他们想做的。 - -如果你这样做了,你可以完成它。 - -除了...除了全文检索外,特别是它在“寻找问题”功能中的表现,这是必不可少的。然后用户的基本信息,和回答的意见,然后有一个主要展示你的重要问题, -但是它会稳定的冒泡式下降。另外你需要去实现奖励,并支持每个用户的多个 OpenID 登录,然后为相关的事件发送邮件通知,并添加一个标签系统, -接着允许管理员通过一个不错的图形界面配置徽章。你需要去显示用户的 karma 历史,点赞和差评。整个事情的规模都非常好,因为它随时都可以被 - slashdotted、reddited 或是 Stack Overflow 。 - -在这之后!你就已经完成了! - -...在正确地实现升级、国际化、业绩上限和一个 css 设计之后,使你的站点看起来不像是一个屁股,上面的大部分 AJAX 版本和 G-d 知道什么会同样潜伏 -在你所信任的界面下,但是当你开始做一个真正的克隆的时候,就会遇到它。 - -告诉我:这些功能中哪个是你感觉可以削减而让它仍然是一个引人注目的产品,哪些是大部分网站之下的呢?哪个你可以剔除呢? 
- -开发者因为开源软件的使用是一个可怕的痛苦这样一个相同的理由认为克隆一个像 Stack Overflow 的站点很简单。当你把一个开发者放在 Stack Overflow 前面, -他们并不真的看到 Stack Overflow,他们实际上看的是这些: - -create table QUESTION (ID identity primary key, - TITLE varchar(255), --- 为什么我知道你认为是 255 - BODY text, - UPVOTES integer not null default 0, - DOWNVOTES integer not null default 0, - USER integer references USER(ID)); -create table RESPONSE (ID identity primary key, - BODY text, - UPVOTES integer not null default 0, - DOWNVOTES integer not null default 0, - QUESTION integer references QUESTION(ID)) - -如果你告诉一个开发者去复制 Stack Overflow ,进入他脑海中的就是上面的两个 SQL 表和足够的 HTML 文件来显示它们,而不用格式化,这在一个周末里是完全 -可以实现的,聪明的人会意识到他们需要实现登陆、注销和评论,点赞需要绑定到用户。但是这在一个周末内仍然是完全可行的。这仅仅是在 SQL 后端里加上两张 -左右的表,而 HTML 则用来展示内容,使用像 Django 这样的框架,你甚至可以免费获得基本的用户和评论。 - -但是那不是和 Stack Overflow 相关的,无论你对 Stack Overflow 的感受如何,大多数访问者似乎都认为用户体验从头到尾都很流畅,他们感觉他们和一个 -好产品相互影响。即使我没有更好的了解,我也会猜测 Stack Overflow 在数据库模式方面取得了持续的成功-并且有机会去阅读 Stack Overflow 的源代码, -我知道它实际上有多么的小,这些是一个极大的 spit 和 Polish 的集合,成为了一个具有高可用性的主要网站,一个开发者,问一个东西被克隆有多难, -仅仅不认为和 Polish 相关,因为 Polish 是实现结果附带的。 - -这就是为什么 Stack Overflow 的开放源代码克隆会失败,即使一些人在设法实现大部分 Stack Overflow 的“规范”,也会有一些关键区域会将他们绊倒, -举个例子,如果你把目标市场定在了终端用户上,你要么需要一个图形界面去配置规则,要么聪明的开发者会决定哪些徽章具有足够的通用性,去继续所有的 -安装,实际情况是,开发者发牢骚和抱怨你不能实现一个真实的综合性的像 badges 的图形用户界面,然后 bikeshed 任何的建议,为因为标准的 badges -在范围内太远,他们会迅速避开选择其他方向,他们最后会带着相同的有 bug 追踪器的解决方案赶上,就像他们工作流程的概要使用一样: -开发者通过任意一种方式实现一个通用的机制,任何一个人完全都能轻松地使用 Python、PHP 或任意一门语言中的系统 API 来工作,能简单为他们自己增加 -自定义设置,PHP 和 Python 是学起来很简单的,并且比起曾经的图形界面更加的灵活,为什么还要操心其他事呢? - -同样的,节制和管理界面可以被削减。如果你是一个管理员,你可以进入 SQL 服务器,所以你可以做任何真正的管理-就像这样,管理员可以通过任何的 Django -管理和类似的系统给你提供支持,因为,毕竟只有少数用户是 mods,mods 应该理解网站是怎么运作、停止的。当然,没有 Stack Overflow 的接口失败会被纠正 -,即使 Stack Overflow 的愚蠢的要求,你必须知道如何去使用 openID (它是最糟糕的缺点)最后得到修复。我确信任何的开源的克隆都会狂热地跟随它- -即使 GNOME 和 KDE 多年来亦步亦趋地复制 windows ,而不是尝试去修复它自己最明显的缺陷。 - -开发者可能不会关心应用的这些部分,但是最终用户会,当他们尝试去决定使用哪个应用时会去考虑这些。就好像一家好的软件公司希望通过确保其产品在出货之前 -是一流的来降低其支持成本一样,所以,同样的,懂行的消费者想在他们购买这些产品之前确保产品好用,以便他们不需要去寻求帮助,开源产品就失败在这种地方 -,一般来说,专有解决方案会做得更好。 - -这不是说开源软件没有他们自己的立足之地,这个博客运行在 Apache,Django,PostgreSQL 和 Linux 上。但是让我告诉你,配置这些堆栈不是为了让人心灰意懒 -,PostgreSQL 需要在老版本上移除设置。然后,在 Ubuntu 和 FreeBSD 最新的版本上,仍然要求用户搭建第一个数据库集群,MS SQL不需要这些东西,Apache... 
-天啊,甚至没有让我开始尝试去向一个初学者用户解释如何去得到虚拟机,MovableType,一对 Django 应用程序,而且所有的 WordPress 都可以在一个单一的安装下 -顺利运行,像在地狱一样,只是试图解释 Apache 的分叉线程变换给技术上精明的非开发人员就是一个噩梦,IIS 7 和操作系统的 Apache 服务器是非常闭源的, -图形界面管理程序配置这些这些相同的堆栈非常的简单,Django 是一个伟大的产品,但是它只是基础架构而已,我认为开源软件做的很好,恰恰是因为推动开发者去 -贡献的动机 - -下次你看见一个你喜欢的应用,认为所有面向用户的细节非常长和辛苦,就会去让它用起来更令人开心,在谴责你如何能普通的实现整个的可恶的事在一个周末, -十分之九之后,当你认为一个应用的实现简单地简直可笑,你就完全的错失了故事另一边的用户 - -via: https://bitquabit.com/post/one-which-i-call-out-hacker-news/ - -作者:Benjamin Pollack 译者:hopefully2333 校对:校对者ID - -本文由 LCTT 原创编译,Linux中国 荣誉推出 diff --git a/published/20161216 Kprobes Event Tracing on ARMv8.md b/translated/tech/20161216 Kprobes Event Tracing on ARMv8.md similarity index 98% rename from published/20161216 Kprobes Event Tracing on ARMv8.md rename to translated/tech/20161216 Kprobes Event Tracing on ARMv8.md index 3985f064dc..3c3ab0de5b 100644 --- a/published/20161216 Kprobes Event Tracing on ARMv8.md +++ b/translated/tech/20161216 Kprobes Event Tracing on ARMv8.md @@ -29,19 +29,19 @@ jprobes 允许通过提供一个具有相同调用签名call signature kprobes 提供一系列能从内核代码中调用的 API 来设置探测点和当探测点被命中时调用的注册函数。在不往内核中添加代码的情况下,kprobes 也是可用的,这是通过写入特定事件追踪的 debugfs 文件来实现的,需要在文件中设置探针地址和信息,以便在探针被命中时记录到追踪日志中。后者是本文将要讨论的重点。最后 kprobes 可以通过 perl 命令来使用。 -#### kprobes API +### kprobes API 内核开发人员可以在内核中编写函数(通常在专用的调试模块中完成)来设置探测点,并且在探测指令执行前和执行后立即执行任何所需操作。这在 kprobes.txt 中有很好的解释。 -#### 事件追踪 +### 事件追踪 事件追踪子系统有自己的自己的文档^注2 ,对于了解一般追踪事件的背景可能值得一读。事件追踪子系统是追踪点tracepoints和 kprobes 事件追踪的基础。事件追踪文档重点关注追踪点,所以请在查阅文档时记住这一点。kprobes 与追踪点不同的是没有预定义的追踪点列表,而是采用动态创建的用于触发追踪事件信息收集的任意探测点。事件追踪子系统通过一系列 debugfs 文件来控制和监视。事件追踪(`CONFIG_EVENT_TRACING`)将在被如 kprobe 事件追踪子系统等需要时自动选择。 -##### kprobes 事件 +#### kprobes 事件 使用 kprobes 事件追踪子系统,用户可以在内核任意断点处指定要报告的信息,只需要指定任意现有可探测指令的地址以及格式化信息即可确定。在执行过程中遇到断点时,kprobes 将所请求的信息传递给事件追踪子系统的公共部分,这些部分将数据格式化并追加到追踪日志中,就像追踪点的工作方式一样。kprobes 使用一个类似的但是大部分是独立的 debugfs 文件来控制和显示追踪事件信息。该功能可使用 `CONFIG_KPROBE_EVENT` 来选择。Kprobetrace 文档^ 注3 提供了如何使用 kprobes 事件追踪的基本信息,并且应当被参考用以了解以下介绍示例的详细信息。 -#### kprobes 和 perf +### kprobes 和 perf perf 工具为 kprobes 提供了另一个命令行接口。特别地,`perf probe` 允许探测点除了由函数名加偏移量和地址指定外,还可由源文件和行号指定。perf 接口实际上是使用 kprobes 的 debugfs 接口的封装器。 @@ -60,7 +60,7 @@ perf 工具为 kprobes 提供了另一个命令行接口。特别地,`perf pro kprobes 的一个常用例子是检测函数入口和/或出口。因为只需要使用函数名来作为探针地址,它安装探针特别简单。kprobes 事件追踪将查看符号名称并且确定地址。ARMv8 调用标准定义了函数参数和返回值的位置,并且这些可以作为 kprobes 事件处理的一部分被打印出来。 -#### 例子: 函数入口探测 +### 例子: 函数入口探测 检测 USB 以太网驱动程序复位功能: @@ -94,7 +94,7 @@ kworker/0:0-4 [000] d… 10972.102939: p_ax88772_reset_0: 这里我们可以看见传入到我们的探测函数的指针参数的值。由于我们没有使用 kprobes 事件追踪的可选标签功能,我们需要的信息自动被标注为 `arg1`。注意这指向我们需要 kprobes 记录这个探针的一组值的第一个,而不是函数参数的实际位置。在这个例子中它也只是碰巧是我们探测函数的第一个参数。 -#### 例子: 函数入口和返回探测 +### 例子: 函数入口和返回探测 kretprobe 功能专门用于探测函数返回。在函数入口 kprobes 子系统将会被调用并且建立钩子以便在函数返回时调用,钩子将记录需求事件信息。对最常见情况,返回信息通常在 `X0` 寄存器中,这是非常有用的。在 `%x0` 中返回值也可以被称为 `$retval`。以下例子也演示了如何提供一个可读的标签来展示有趣的信息。 @@ -132,7 +132,7 @@ _$ cat trace bash-1671 [001] d..1 214.401975: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x726_ ``` -#### 例子: 解引用指针参数 +### 例子: 解引用指针参数 对于指针值,kprobes 事件处理子系统也允许解引用和打印所需的内存内容,适用于各种基本数据类型。为了展示所需字段,手动计算结构的偏移量是必要的。 @@ -173,7 +173,7 @@ $ cat trace bash-1702 [002] d..1 175.347349: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0xfffffffffffffff6 ``` -#### 例子: 探测任意指令地址 +### 例子: 探测任意指令地址 在前面的例子中,我们已经为函数的入口和出口插入探针,然而探测一个任意指令(除少数例外)是可能的。如果我们正在 C 函数中放置一个探针,第一步是查看代码的汇编版本以确定我们要放置探针的位置。一种方法是在 vmlinux 文件上使用 gdb,并在要放置探针的函数中展示指令。下面是一个在 `arch/arm64/kernel/modules.c` 中 `module_alloc` 函数执行此操作的示例。在这种情况下,因为 gdb 似乎更喜欢使用弱符号定义,并且它是与这个函数关联的存根代码,所以我们从 System.map 中来获取符号值: diff --git a/translated/tech/20170530 How to Improve a Legacy Codebase.md 
b/translated/tech/20170530 How to Improve a Legacy Codebase.md deleted file mode 100644 index a1869b0449..0000000000 --- a/translated/tech/20170530 How to Improve a Legacy Codebase.md +++ /dev/null @@ -1,104 +0,0 @@ -# 如何改善遗留的代码库 - -这在每一个程序员,项目管理员,团队领导的一生中都会至少发生一次。原来的程序员早已离职去度假了,留下了一坨几百万行屎一样的代码和文档(如果有的话),一旦接手这些代码,想要跟上公司的进度简直让人绝望。 - -你的工作是带领团队摆脱这个混乱的局面 - -当你的第一反应过去之后,你开始去熟悉这个项目,公司的管理层都在关注着你,所以项目只能成功,然而,看了一遍代码之后却发现很大的可能会失败。那么该怎么办呢? - -幸运(不幸)的是我已经遇到好几次这种情况了,我和我的小伙伴发现将这坨热气腾腾的屎变成一个健康可维护的项目是非常值得一试的。下面这些是我们的一些经验: - -### 备份 - -在开始做任何事情之前备份与之可能相关的所有文件。这样可以确保不会丢失任何可能会在另外一些地方很重要的信息。一旦修改其中一些文件,你可能花费一天或者更多天都解决不了这个愚蠢的问题,配置数据通常不受版本控制,所以特别容易受到这方面影响,如果定期备份数据时连带着它一起备份了,还是比较幸运的。所以谨慎总比后悔好,复制所有东西到一个绝对安全的地方吧,除非这些文件是只读模式否则不要轻易碰它。 - -### 必须确保代码能够在生产环境下构建运行并产出,这是重要的先决条件。 - -之前我假设环境已经存在,所以完全丢了这一步,Hacker News 的众多网友指出了这一点并且证明他们是对的:第一步是确认你知道在生产环境下运行着什么东西,也意味着你需要在你的设备上构建一个跟生产环境上运行的版本每一个字节都一模一样的版本。如果你找不到实现它的办法,一旦你将它投入生产环境,你很可能会遭遇一些很糟糕的事情。确保每一部分都尽力测试,之后在你足够信任它能够很好的运行的时候将它部署生产环境下。无论它运行的怎么样都要做好能够马上切换回旧版本的准备,确保日志记录下了所有情况,以便于接下来不可避免的 “验尸” 。 - -### 冻结数据库 - -直到你修改代码之前尽可能冻结你的数据库,在你特别熟悉代码库和遗留代码之后再去修改数据库。在这之前过早的修改数据库的话,你可能会碰到大问题,你会失去让新旧代码和数据库一起构建稳固的基础的能力。保持数据库完全不变,就能比较新的逻辑代码和旧的逻辑代码运行的结果,比较的结果应该跟预期的没有差别。 - -### 写测试 - -在你做任何改变之前,尽可能多的写下端到端测试和集成测试。在你能够清晰的知道旧的是如何工作的情况下确保这些测试能够正确的输出(准备好应对一些突发状况)。这些测试有两个重要的作用,其一,他们能够在早期帮助你抛弃一些错误观念,其二,在你写新代码替换旧代码的时候也有一定防护作用。 - -自动化测试,如果你也有 CI 的使用经验请使用它,并且确保在你提交代码之后能够快速的完成所有测试。 - -### 日志监控 - -如果旧设备依然可用,那么添加上监控功能。使用一个全新的数据库,为每一个你能想到的事件都添加一个简单的计数器,并且根据这些事件的名字添加一个函数增加这些计数器。用一些额外的代码实现一个带有时间戳的事件日志,这是一个好办法知道有多少事件导致了另外一些种类的事件。例如:用户打开 APP ,用户关闭 APP 。如果这两个事件导致后端调用的数量维持长时间的不同,这个数量差就是当前打开的 APP 的数量。如果你发现打开 APP 比关闭 APP 多的时候,你就必须要知道是什么原因导致 APP 关闭了(例如崩溃)。你会发现每一个事件都跟其他的一些事件有许多不同种类的联系,通常情况下你应该尽量维持这些固定的联系,除非在系统上有一个明显的错误。你的目标是减少那些错误的事件,尽可能多的在开始的时候通过使用计数器在调用链中降低到指定的级别。(例如:用户支付应该得到相同数量的支付回调)。 - -这是简单的技巧去将每一个后端应用变成一个就像真实的簿记系统一样,所有数字必须匹配,只要他们在某个地方都不会有什么问题。 - -随着时间的推移,这个系统在监控健康方面变得非常宝贵,而且它也是使用源码控制修改系统日志的一个好伙伴,你可以使用它确认 BUG 出现的位置,以及对多种计数器造成的影响。 - -我通常保持 5 分钟(一小时 12 次)记录一次计数器,如果你的应用生成了更多或者更少的事件,你应该修改这个时间间隔。所有的计数器公用一个数据表,每一个记录都只是简单的一行。 - -### 一次只修改一处 - -不要完全陷入在提高代码或者平台可用性的同时添加新特性或者是修复 BUG 的陷阱。这会让你头大而且将会使你之前建立的测试失效,现在必须问问你自己,每一步的操作想要什么样的结果。 - -### 修改平台 - -如果你决定转移你的应用到另外一个平台,最主要的是跟之前保持一样。如果你觉得你会添加更多的文档和测试,但是不要忘记这一点,所有的业务逻辑和相互依赖跟从前一样保持不变。 - -### 修改架构 - -接下来处理的是改变应用的结构(如果需要)。这一点上,你可以自由的修改高层的代码,通常是降低模块间的横向联系,这样可以降低代码活动期间对终端用户造成的影响范围。如果老代码是庞大的,那么现在正是让他模块化的时候,将大段代码分解成众多小的,不过不要把变量的名字和他的数据结构分开。 - -Hacker News [mannykannot][1] 网友指出,修改架构并不总是可行,如果你特别不幸的话,你可能为了改变一些架构必须付出沉重的代价。我也赞同这一点,我应该加上这一点,因此这里有一些补充。我非常想补充的是如果你修改高级代码的时候修改了一点点底层代码,那么试着限制只修改一个文件或者最坏的情况是只修改一个子系统,所以尽可能限制修改的范围。否则你可能很难调试刚才所做的更改。 - -### 底层代码的重构 - -现在,你应该非常理解每一个模块的作用了,准备做一些真正的工作吧:重构代码以提高其可维护性并且使代码做好添加新功能的准备。这很可能是项目中最消耗时间的部分,记录你所做的任何操作,在你彻底的记录模块并且理解之前不要对它做任何修改。之后你可以自由的修改变量名、函数名以及数据结构以提高代码的清晰度和统一性,然后请做测试(情况允许的话,包括单元测试)。 - -### 修复 bugs - -现在准备做一些用户可见的修改,战斗的第一步是修复很多积累了一整年的bugs,像往常一样,首先证实 bug 仍然存在,然后编写测试并修复这个 bug,你的 CI 和端对端测试应该能避免一些由于不太熟悉或者一些额外的事情而犯的错误。 - -### 升级数据库 - - -如果在一个坚实且可维护的代码库上完成所有工作,如果你有更改数据库模式的计划,可以使用不同的完全替换数据库。 -把所有的这些都做完将能够帮助你更可靠的修改而不会碰到问题,你会完全的测试新数据库和新代码,所有测试可以确保你顺利的迁移。 - -### 按着路线图执行 - -祝贺你脱离的困境并且可以准备添加新功能了。 - -### 任何时候都不要尝试彻底重写 - -彻底重写是那种注定会失败的项目,一方面,你在一个未知的领域开始,所以你甚至不知道构建什么,另一方面,你会把所以的问题都推到新系统马上就要上线的前一天,非常不幸的是,这也是你失败的时候,假设业务逻辑存在问题,你会得到异样的眼光,那时您会突然明白为什么旧系统会用某种奇怪的方式来工作,最终也会意识到能将旧系统放在一起工作的人也不都是白痴。在那之后。如果你真的想破坏公司(和你自己的声誉),那就重写吧,但如果你足够聪明,彻底重写系统通常不会成为一个摆到桌上讨论的选项。 - -### 所以,替代方法是增量迭代工作 - -要解开这些线团最快方法是,使用你熟悉的代码中任何的元素(它可能是外部的,他可以是内核模块),试着使用旧的上下文去增量提升,如果旧的构建工具已经不能用了,你将必须使用一些技巧(看下面)至少当你开始做修改的时候,试着尽力保留已知的工作。那样随着代码库的提升你也对代码的作用更加理解。一个典型的代码提交应该最多两行。 - -### 发布! 
- -每一次的修改都发布到生产环境,即使一些修改不是用户可见的。使用最少的步骤也是很重要的,因为当你缺乏对系统的了解时,只有生产环境能够告诉你问题在哪里,如果你只做了一个很小的修改之后出了问题,会有一些好处: - -* 很容易弄清楚出了什么问题 -* 这是一个改进流程的好位置 -* 你应该马上更新文档展示你的新见解 - -### 使用代理的好处 -如果你做 web 开发时在旧系统和用户之间加了代理。你能很容易的控制每一个网址哪些请求旧系统,哪些重定向到新系统,从而更轻松更精确的控制运行的内容以及谁能够看到。如果你的代理足够的聪明,你可以使用它发送一定比例的流量到个人的 URL,直到你满意为止,如果你的集成测试也连接到这个接口那就更好了。 - -### 是的,这会花费很多时间 -这取决于你怎样看待它的,这是事实会有一些重复的工作涉及到这些步骤中。但是它确实有效,对于进程的任何一个优化都将使你对这样系统更加熟悉。我会保持声誉,并且我真的不喜欢在工作期间有负面的意外。如果运气好的话,公司系统已经出现问题,而且可能会影响客户。在这样的情况下,如果你更多地是牛仔的做事方式,并且你的老板同意可以接受冒更大的风险,我比较喜欢完全控制整个流程得到好的结果而不是节省两天或者一星期,但是大多数公司宁愿采取稍微慢一点但更确定的胜利之路。 - --------------------------------------------------------------------------------- - -via: https://jacquesmattheij.com/improving-a-legacy-codebase - -作者:[Jacques Mattheij][a] -译者:[aiwhj](https://github.com/aiwhj) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jacquesmattheij.com/ -[1]:https://news.ycombinator.com/item?id=14445661 diff --git a/translated/tech/20170910 Cool vim feature sessions.md b/translated/tech/20170910 Cool vim feature sessions.md deleted file mode 100644 index 49ee43fda1..0000000000 --- a/translated/tech/20170910 Cool vim feature sessions.md +++ /dev/null @@ -1,44 +0,0 @@ -vim 的酷功能:会话! -============================================================• - -昨天我在编写我的[vimrc][5]的时候了解到一个很酷的 vim 功能!(主要为了添加 fzf 和 ripgrep 插件)。这是一个内置功能,不需要特别的插件。 - -所以我画了一个漫画。 - -基本上你可以用下面的命令保存所有你打开的文件和当前的状态 - -``` -:mksession ~/.vim/sessions/foo.vim - -``` - -接着用 `:source ~/.vim/sessions/foo.vim` 或者  `vim -S ~/.vim/sessions/foo.vim` 还原会话。非常酷! - -一些 vim 插件给 vim 会话添加了额外的功能: - -* [https://github.com/tpope/vim-obsession][1] - -* [https://github.com/mhinz/vim-startify][2] - -* [https://github.com/xolox/vim-session][3] - -这是漫画: - -![](https://jvns.ca/images/vimsessions.png) - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2017/09/10/vim-sessions/ - -作者:[Julia Evans ][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/about -[1]:https://github.com/tpope/vim-obsession -[2]:https://github.com/mhinz/vim-startify -[3]:https://github.com/xolox/vim-session -[4]:https://jvns.ca/categories/vim -[5]:https://github.com/jvns/vimconfig/blob/master/vimrc diff --git a/published/20171009 Examining network connections on Linux systems.md b/translated/tech/20171009 Examining network connections on Linux systems.md similarity index 100% rename from published/20171009 Examining network connections on Linux systems.md rename to translated/tech/20171009 Examining network connections on Linux systems.md diff --git a/translated/tech/20171020 How Eclipse is advancing IoT development.md b/translated/tech/20171020 How Eclipse is advancing IoT development.md deleted file mode 100644 index 0de4f38ea1..0000000000 --- a/translated/tech/20171020 How Eclipse is advancing IoT development.md +++ /dev/null @@ -1,77 +0,0 @@ -translated by smartgrids -Eclipse 如何助力 IoT 发展 -============================================================ - -### 开源组织的模块发开发方式非常适合物联网。 - -![How Eclipse is advancing IoT development](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_BUS_ArchitectureOfParticipation_520x292.png?itok=FA0Uuwzv "How Eclipse is advancing IoT development") -图片来源: opensource.com - -[Eclipse][3] 可能不是第一个去研究物联网的开源组织。但是,远在 IoT 家喻户晓之前,该基金会在 2001 
年左右就开始支持开源软件发展商业化。九月 Eclipse 物联网日和 RedMonk 的 [ThingMonk 2017][4] 一块举行,着重强调了 Eclipse 在 [物联网发展][5] 中的重要作用。它现在已经包含了 28 个项目,覆盖了大部分物联网项目需求。会议过程中,我和负责 Eclipse 市场化运作的 [Ian Skerritt][6] 讨论了 Eclipse 的物联网项目以及如何拓展它。 - -###物联网的最新进展? -我问 Ian 物联网同传统工业自动化,也就是前几十年通过传感器和相应工具来实现工厂互联的方式有什么不同。 Ian 指出很多工厂是还没有互联的。 -另外,他说“ SCADA[监控和数据分析] 系统以及工厂底层技术都是私有、独立性的。我们很难去改变它,也很难去适配它们…… 现在,如果你想运行一套生产系统,你需要设计成百上千的单元。生产线想要的是满足用户需求,使制造过程更灵活,从而可以不断产出。” 这也就是物联网会带给制造业的一个很大的帮助。 - - -###Eclipse 物联网方面的研究 -Ian 对于 Eclipse 在物联网的研究是这样描述的:“满足任何物联网解决方案的核心基础技术” ,通过使用开源技术,“每个人都可以使用从而可以获得更好的适配性。” 他说,Eclipse 将物联网视为包括三层互联的软件栈。从更高的层面上看,这些软件栈(按照大家常见的说法)将物联网描述为跨越三个层面的网络。特定的观念可能认为含有更多的层面,但是他们一直符合这个三层模型的功能的: - -* 一种可以装载设备(例如设备、终端、微控制器、传感器)用软件的堆栈。 -* 将不同的传感器采集到的数据信息聚合起来并传输到网上的一类网关。这一层也可能会针对传感器数据检测做出实时反映。 -* 物联网平台后端的一个软件栈。这个后端云存储数据并能根据采集的数据比如历史趋势、预测分析提供服务。 - -这三个软件栈在 Eclipse 的白皮书 “ [The Three Software Stacks Required for IoT Architectures][7] ”中有更详细的描述。 - -Ian 说在这些架构中开发一种解决方案时,“需要开发一些特殊的东西,但是很多底层的技术是可以借用的,像通信协议、网关服务。需要一种模块化的方式来满足不用的需求场合。” Eclipse 关于物联网方面的研究可以概括为:开发模块化开源组件从而可以被用于开发大量的特定性商业服务和解决方案。 - -###Eclipse 的物联网项目 - -在众多一杯应用的 Eclipse 物联网应用中, Ian 举了两个和 [MQTT][8] 有关联的突出应用,一个设备与设备互联(M2M)的物联网协议。 Ian 把它描述成“一个专为重视电源管理工作的油气传输线监控系统的信息发布/订阅协议。MQTT 已经是众多物联网广泛应用标准中很成功的一个。” [Eclipse Mosquitto][9] 是 MQTT 的代理,[Eclipse Paho][10] 是他的客户端。 -[Eclipse Kura][11] 是一个物联网网关,引用 Ian 的话,“它连接了很多不同的协议间的联系”包括蓝牙、Modbus、CANbus 和 OPC 统一架构协议,以及一直在不断添加的协议。一个优势就是,他说,取代了你自己写你自己的协议, Kura 提供了这个功能并将你通过卫星、网络或其他设备连接到网络。”另外它也提供了防火墙配置、网络延时以及其它功能。Ian 也指出“如果网络不通时,它会存储信息直到网络恢复。” - -最新的一个项目中,[Eclipse Kapua][12] 正尝试通过微服务来为物联网云平台提供不同的服务。比如,它集成了通信、汇聚、管理、存储和分析功能。Ian 说“它正在不断前进,虽然还没被完全开发出来,但是 Eurotech 和 RedHat 在这个项目上非常积极。” -Ian 说 [Eclipse hawkBit][13] ,软件更新管理的软件,是一项“非常有趣的项目。从安全的角度说,如果你不能更新你的设备,你将会面临巨大的安全漏洞。”很多物联网安全事故都和无法更新的设备有关,他说,“ HawkBit 可以基本负责通过物联网系统来完成扩展性更新的后端管理。” - -物联网设备软件升级的难度一直被看作是难度最高的安全挑战之一。物联网设备不是一直连接的,而且数目众多,再加上首先设备的更新程序很难完全正常。正因为这个原因,关于无赖女王软件升级的项目一直是被当作重要内容往前推进。 - -###为什么物联网这么适合 Eclipse - -在物联网发展趋势中的一个方面就是关于构建模块来解决商业问题,而不是宽约工业和公司的大物联网平台。 Eclipse 关于物联网的研究放在一系列模块栈、提供特定和大众化需求功能的项目,还有就是指定目标所需的可捆绑式中间件、网关和协议组件上。 - - --------------------------------------------------------------------------------- - - - -作者简介: - -Gordon Haff - Gordon Haff 是红帽公司的云营销员,经常在消费者和工业会议上讲话,并且帮助发展红帽全办公云解决方案。他是 计算机前言:云如何如何打开众多出版社未来之门 的作者。在红帽之前, Gordon 写了成百上千的研究报告,经常被引用到公众刊物上,像纽约时报关于 IT 的议题和产品建议等…… - --------------------------------------------------------------------------------- - -转自: https://opensource.com/article/17/10/eclipse-and-iot - -作者:[Gordon Haff ][a] -译者:[smartgrids](https://github.com/smartgrids) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/ghaff -[1]:https://opensource.com/article/17/10/eclipse-and-iot?rate=u1Wr-MCMFCF4C45IMoSPUacCatoqzhdKz7NePxHOvwg -[2]:https://opensource.com/user/21220/feed -[3]:https://www.eclipse.org/home/ -[4]:http://thingmonk.com/ -[5]:https://iot.eclipse.org/ -[6]:https://twitter.com/ianskerrett -[7]:https://iot.eclipse.org/resources/white-papers/Eclipse%20IoT%20White%20Paper%20-%20The%20Three%20Software%20Stacks%20Required%20for%20IoT%20Architectures.pdf -[8]:http://mqtt.org/ -[9]:https://projects.eclipse.org/projects/technology.mosquitto -[10]:https://projects.eclipse.org/projects/technology.paho -[11]:https://www.eclipse.org/kura/ -[12]:https://www.eclipse.org/kapua/ -[13]:https://eclipse.org/hawkbit/ -[14]:https://opensource.com/users/ghaff -[15]:https://opensource.com/users/ghaff -[16]:https://opensource.com/article/17/10/eclipse-and-iot#comments diff 
--git a/published/20171029 A block layer introduction part 1 the bio layer.md b/translated/tech/20171029 A block layer introduction part 1 the bio layer.md similarity index 95% rename from published/20171029 A block layer introduction part 1 the bio layer.md rename to translated/tech/20171029 A block layer introduction part 1 the bio layer.md index 96374c2302..bc3f582259 100644 --- a/published/20171029 A block layer introduction part 1 the bio layer.md +++ b/translated/tech/20171029 A block layer introduction part 1 the bio layer.md @@ -1,4 +1,4 @@ -回复:块层介绍第一部分 - 块 I/O 层 +块层介绍第一部分:块 I/O 层 ============================================================ ### 块层介绍第一部分:块 I/O 层 @@ -6,14 +6,9 @@ 回复:amarao 在[块层介绍第一部分:块 I/O 层][1] 中提的问题 先前的文章:[块层介绍第一部分:块 I/O 层][2] -![](https://static.lwn.net/images/2017/neil-blocklayer.png) - 嗨, - 你在这里描述的问题与块层不直接相关。这可能是一个驱动错误、可能是一个 SCSI 层错误,但绝对不是一个块层的问题。 - 不幸的是,报告针对 Linux 的错误是一件难事。有些开发者拒绝去看 bugzilla,有些开发者喜欢它,有些(像我这样)只能勉强地使用它。 - 另一种方法是发送电子邮件。为此,你需要选择正确的邮件列表,还有也许是正确的开发人员,当他们心情愉快,或者不是太忙或者不是假期时找到它们。有些人会努力回复所有,有些是完全不可预知的 - 这对我来说通常会发送一个补丁,包含一些错误报告。如果你只是有一个你自己几乎都不了解的 bug,那么你的预期响应率可能会更低。很遗憾,但这是是真的。 许多 bug 都会得到回应和处理,但很多 bug 都没有。 @@ -21,20 +16,18 @@ 我不认为说没有人关心是公平的,但是没有人认为它如你想的那样重要是有可能的。如果你想要一个解决方案,那么你需要驱动它。一个驱动它的方法是花钱请顾问或者与经销商签订支持合同。我怀疑你的情况没有上面的可能。另一种方法是了解代码如何工作,并自己找到解决方案。很多人都这么做,但是这对你来说可能不是一种选择。另一种方法是在不同的相关论坛上不断提出问题,直到得到回复。坚持可以见效。你需要做好准备去执行任何你所要求的测试,可能包括建立一个新的内核来测试。 如果你能在最近的内核(4.12 或者更新)上复现这个 bug,我建议你邮件报告给 linux-kernel@vger.kernel.org、linux-scsi@vger.kernel.org 和我(neilb@suse.com)(注意你不必订阅这些列表来发送邮件,只需要发送就行)。描述你的硬件以及如何触发问题的。 - 包含所有进程状态是 “D” 的栈追踪。你可以用 “cat /proc/$PID/stack” 来得到它,这里的 “$PID” 是进程的 pid。 确保避免抱怨或者说这个已经坏了好几年了以及这是多么严重不足。没有人关心这个。我们关心的是 bug 以及如何修复它。因此只要报告相关的事实就行。 - 尝试在邮件中而不是链接到其他地方的链接中包含所有事实。有时链接是需要的,但是对于你的脚本,它只有 8 行,所以把它包含在邮件中就行(并避免像 “fuckup” 之类的描述。只需称它为“坏的”(broken)或者类似的)。同样确保你的邮件发送的不是 HTML 格式。我们喜欢纯文本。HTML 被所有的 @vger.kernel.org 邮件列表拒绝。你或许需要配置你的邮箱程序不发送 HTML。 -------------------------------------------------------------------------------- via: https://lwn.net/Articles/737655/ -作者:[neilbrown][a] +作者:[ neilbrown][a] 译者:[geekpi](https://github.com/geekpi) -校对:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20171108 Archiving repositories.md b/translated/tech/20171108 Archiving repositories.md deleted file mode 100644 index 3d1a328541..0000000000 --- a/translated/tech/20171108 Archiving repositories.md +++ /dev/null @@ -1,37 +0,0 @@ -归档仓库 -==================== - - -因为仓库不再活跃开发或者你不想接受额外的贡献并不意味着你想要删除它。现在在 Github 上归档仓库让它变成只读。 - - [![archived repository banner](https://user-images.githubusercontent.com/7321362/32558403-450458dc-c46a-11e7-96f9-af31d2206acb.png)][1] - -归档一个仓库让它对所有人只读(包括仓库拥有者)。这包括编辑仓库、问题、合并请求、标记、里程碑、维基、发布、提交、标签、分支、反馈和评论。没有人可以在一个归档的仓库上创建新的问题、合并请求或者评论,但是你仍可以 fork 仓库-允许归档的仓库在其他地方继续开发。 - -要归档一个仓库,进入仓库设置页面并点在这个仓库上点击归档。 - - [![archive repository button](https://user-images.githubusercontent.com/125011/32273119-0fc5571e-bef9-11e7-9909-d137268a1d6d.png)][2] - -在归档你的仓库前,确保你已经更改了它的设置并考虑关闭所有的开放问题和合并请求。你还应该更新你的 README 和描述来让它让访问者了解他不再能够贡献。 - -如果你改变了主意想要解除归档你的仓库,在相同的地方点击解除归档。请注意大多数归档仓库的设置是隐藏的,并且你需要解除归档来改变它们。 - - [![archived labelled repository](https://user-images.githubusercontent.com/125011/32541128-9d67a064-c466-11e7-857e-3834054ba3c9.png)][3] - -要了解更多,请查看[这份文档][4]中的归档仓库部分。归档快乐! 
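
最后再补充一点:如果你需要批量归档多个仓库,除了上面介绍的设置页面之外,也可以考虑使用 GitHub 的 REST API。下面是一个示意性的写法(假设当时的 v3 API 已支持 `archived` 字段;其中的令牌和 `OWNER/REPO` 都只是占位符,并非真实值):

```
# 通过 REST API 将仓库标记为已归档(示意;令牌与仓库名均为占位符)
curl -X PATCH \
  -H "Authorization: token YOUR_TOKEN" \
  -H "Accept: application/vnd.github.v3+json" \
  -d '{"archived": true}' \
  "https://api.github.com/repos/OWNER/REPO"
```

能否通过 API 解除归档取决于接口当时的支持情况,操作前请以官方文档为准。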
- --------------------------------------------------------------------------------- - -via: https://github.com/blog/2460-archiving-repositories - -作者:[MikeMcQuaid ][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://github.com/MikeMcQuaid -[1]:https://user-images.githubusercontent.com/7321362/32558403-450458dc-c46a-11e7-96f9-af31d2206acb.png -[2]:https://user-images.githubusercontent.com/125011/32273119-0fc5571e-bef9-11e7-9909-d137268a1d6d.png -[3]:https://user-images.githubusercontent.com/125011/32541128-9d67a064-c466-11e7-857e-3834054ba3c9.png -[4]:https://help.github.com/articles/about-archiving-repositories/ diff --git a/translated/tech/20171116 Introducing security alerts on GitHub.md b/translated/tech/20171116 Introducing security alerts on GitHub.md deleted file mode 100644 index b8f0afba17..0000000000 --- a/translated/tech/20171116 Introducing security alerts on GitHub.md +++ /dev/null @@ -1,48 +0,0 @@ -介绍 GitHub 上的安全警报 -==================================== - - -上个月,我们用依赖关系图让你更容易跟踪你代码依赖的的项目,目前支持 Javascript 和 Ruby。如今,超过 75% 的 GitHub 项目有依赖,我们正在帮助你做更多的事情,而不只是关注那些重要的项目。在启用依赖关系图后,当我们检测到你的依赖中有漏洞或者来自 Github 社区中建议的已知修复时通知你。 - - [![Security Alerts & Suggested Fix](https://user-images.githubusercontent.com/594029/32851987-76c36e4a-c9eb-11e7-98fc-feb39fddaadb.gif)][1] - -### 如何开始使用安全警报 - -无论你的项目时私有还是公有的,安全警报都会为团队中的正确人员提供重要的漏洞信息。 - -启用你的依赖图 - -公开仓库将自动启用依赖关系图和安全警报。对于私人仓库,你需要在仓库设置中添加安全警报,或者在 “Insights” 选项卡中允许访问仓库的 “依赖关系图” 部分。 - -设置通知选项 - -启用依赖关系图后,管理员将默认收到安全警报。管理员还可以在依赖关系图设置中将团队或个人添加为安全警报的收件人。 - -警报响应 - -当我们通知你潜在的漏洞时,我们将突出显示我们建议更新的任何依赖关系。如果存在已知的安全版本,我们将使用机器学习和公开数据中选择一个,并将其包含在我们的建议中。 - -### 漏洞覆盖率 - -有 [CVE ID][2](公开披露的[国家漏洞数据库][3]中的漏洞)的漏洞将包含在安全警报中。但是,并非所有漏洞都有 CVE ID,甚至许多公开披露的漏洞也没有。随着安全数据的增长,我们将继续更好地识别漏洞。如需更多帮助来管理安全问题,请查看我们的[ GitHub Marketplace 中的安全合作伙伴][4]。 - -这是使用世界上最大的开源数据集的下一步,可以帮助你保持代码安全并做到最好。依赖关系图和安全警报目前支持 JavaScript 和 Ruby,并将在 2018 年提供 Python 支持。 - -[了解更多关于安全警报][5] - --------------------------------------------------------------------------------- - -via: https://github.com/blog/2470-introducing-security-alerts-on-github - -作者:[mijuhan ][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://github.com/mijuhan -[1]:https://user-images.githubusercontent.com/594029/32851987-76c36e4a-c9eb-11e7-98fc-feb39fddaadb.gif -[2]:https://cve.mitre.org/ -[3]:https://nvd.nist.gov/ -[4]:https://github.com/marketplace/category/security -[5]:https://help.github.com/articles/about-security-alerts-for-vulnerable-dependencies/ diff --git a/translated/tech/20171117 System Logs: Understand Your Linux System.md b/translated/tech/20171117 System Logs: Understand Your Linux System.md deleted file mode 100644 index dceea12a63..0000000000 --- a/translated/tech/20171117 System Logs: Understand Your Linux System.md +++ /dev/null @@ -1,68 +0,0 @@ -### 系统日志: 了解你的Linux系统 - -![chabowski](https://www.suse.com/communities/blog/files/2016/03/chabowski_avatar_1457537819-100x100.jpg) - By: [chabowski][1] - -本文摘自教授Linux小白(或者非资深桌面用户)技巧的系列文章. 该系列文章旨在为由LinuxMagazine基于 [openSUSE Leap][3] 发布的第30期特别版 “[Getting Started with Linux][2]” 提供补充说明. - -本文作者是 Romeo S. Romeo, 他是一名 PDX-based enterprise Linux 专家,转为创新企业提供富有伸缩性的解决方案. - -Linux系统日志非常重要. 后台运行的程序(通常被称为守护进程或者服务进程)处理了你Linux系统中的大部分任务. 当这些守护进程工作时,它们将任务的详细信息记录进日志文件中,作为他们做过什么的历史信息. 这些守护进程的工作内容涵盖从使用原子钟同步时钟到管理网络连接. 
所有这些都被记录进日志文件,这样当有错误发生时,你可以通过查阅特定的日志文件来看出发生了什么.
-
-![](https://www.suse.com/communities/blog/files/2017/11/markus-spiske-153537-300x450.jpg)
-
-Photo by Markus Spiske on Unsplash
-
-有很多不同的日志. 历史上, 它们一般以纯文本的格式存储到 `/var/log` 目录中. 现在依然有很多日志这样做, 你可以很方便地使用 `less` 来查看它们.
-在新装的 `openSUSE Leap 42.3` 以及大多数现代操作系统上,重要的日志由 `systemd` 初始化系统存储. `systemd` 这套系统负责启动守护进程并在系统启动时让计算机做好被使用的准备。
-由 `systemd` 记录的日志以二进制格式存储, 这使得它们消耗的空间更小,更容易被浏览,也更容易被导出成其他各种格式,不过坏处就是你必须使用特定的工具才能查看.
-好在, 这个工具已经预安装在你的系统上了: 它的名字叫 `journalctl`,而且默认情况下, 它会将每个守护进程的所有日志都记录到一个地方.
-
-只需要运行 `journalctl` 命令就能查看你的 `systemd` 日志了. 它会用 `less` 分页器显示各种日志. 为了让你有个直观的感受, 下面是 `journalctl` 中摘录的一条日志记录:
-
-```
-Jul 06 11:53:47 aaathats3as pulseaudio[2216]: [pulseaudio] alsa-util.c: Disabling timer-based scheduling because running inside a VM.
-```
-
-这条独立的日志记录依次包含了记录的日期和时间, 计算机名, 记录日志的进程名, 记录日志的进程PID, 以及日志内容本身.
-
-若系统中某个程序运行出问题了, 则可以查看日志文件并搜索(使用 "/" 加上要搜索的关键字)程序名称. 有可能导致该程序出问题的错误会记录到系统日志中.
-有时,错误信息会足够详细, 让你能够修复该问题. 其他时候, 你需要在Web上搜索解决方案. Google就很适合来搜索奇怪的Linux问题.
-![](https://www.suse.com/communities/blog/files/2017/09/Sunglasses_Emoji-450x450.png)
-不过搜索时请注意只输入日志的内容部分, 行首的那些信息(日期, 主机名, 进程ID)是没有意义的, 会干扰搜索结果.
-
-解决方法一般在搜索结果的前几个链接中就会有了. 当然,你不能只是无脑地运行从互联网上找到的那些命令: 请一定先搞清楚你要做的事情是什么,它的效果会是什么.
-一般来说, 用系统日志的内容去搜索, 要比直接搜索描述故障表现的关键字有用得多. 因为程序出错有很多原因, 而且同样的故障表现也可能是由多种问题引发的.
-
-比如, 系统无法发声的原因有很多, 可能是播放器没有插好, 也可能是声音系统出故障了, 还可能是缺少合适的驱动程序.
-如果你只是泛泛地描述故障表现, 你会找到很多无关的解决方法, 而你也会浪费大量的时间. 而指定搜索日志文件中的内容, 你只会查询出他人也有相同日志内容的结果.
-你可以对比一下图1和图2.
-
-![](https://www.suse.com/communities/blog/files/2017/11/picture1-450x450.png)
-
-图 1 搜索系统的故障表现只会显示泛泛的,不精确的结果. 这种搜索通常没什么用.
-
-![](https://www.suse.com/communities/blog/files/2017/11/picture2-450x450.png)
-
-图 2 搜索特定的日志行会显示出精确的,有用的结果. 这种搜索通常很有用.
-
-也有一些系统不用 `journalctl` 来记录日志. 在桌面系统中, 最常见的这类日志包括: `/var/log/zypper.log`, 记录 openSUSE 包管理器的行为; `/var/log/boot.log`, 记录系统启动时的消息, 这类消息往往滚动得特别快, 根本看不过来; `/var/log/ntp`, 用来记录 Network Time Protocol 守护进程同步时间时发生的错误.
-另一个存放硬件故障信息的地方是 `Kernel Ring Buffer`(内核环状缓冲区), 你可以输入 `dmesg -H` 命令来查看(这条命令也会调用 `less` 分页器来查看).
-`Kernel Ring Buffer` 存储在内存中, 因此会在重启电脑后丢失. 不过它包含了Linux内核中的重要事件, 比如新增了硬件, 加载了模块, 以及奇怪的网络错误.
-希望你已经准备好深入了解你的Linux系统了! 祝你玩得开心!
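
在开始探索之前, 这里再附上几条基于上文内容的常用过滤示例, 演示如何缩小 `journalctl` 的查看范围(其中的 `sshd.service` 只是一个假设的服务名, 请换成你系统上实际存在的服务):

```
# 只查看某一个服务的日志(服务名仅为示例)
journalctl -u sshd.service

# 只看本次开机以来、优先级在 err 及以上的日志
journalctl -b -p err

# 实时跟踪新产生的日志, 效果类似 tail -f
journalctl -f

# 查看内核环状缓冲区, -H 会输出更易读的时间戳并调用分页器
dmesg -H
```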
- --------------------------------------------------------------------------------- - -via: https://www.suse.com/communities/blog/system-logs-understand-linux-system/ - -作者:[chabowski] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.suse.com/communities/blog/author/chabowski/ -[2]:http://www.linux-magazine.com/Resources/Special-Editions/30-Getting-Started-with-Linux -[3]:https://en.opensuse.org/Portal:42.3 -[4]:http://www.linux-magazine.com/ diff --git a/translated/tech/20171124 How to Install Android File Transfer for Linux.md b/translated/tech/20171124 How to Install Android File Transfer for Linux.md new file mode 100644 index 0000000000..b93429f509 --- /dev/null +++ b/translated/tech/20171124 How to Install Android File Transfer for Linux.md @@ -0,0 +1,82 @@ +Translating by wenwensnow + +# 如何在Linux下安装安卓文件传输助手 + +如果你尝试在Ubuntu下安装你的安卓手机,你也许可以试试Linux下的安卓文件传输助手 + +本质上来说,这个应用是谷歌mac版本的一个复制。它是用Qt编写的,用户界面非常简洁,使得你能轻松在Ubuntu和安卓手机之间传输文件。 + +现在,有可能一部分人想知道有什么是这个应用可以做,而Nautilus(Ubuntu默认的文件资源管理器)不能做的,答案是没有。 + +当我将我的 Nexus 5X(记得选择[MTP][7] 选项)连接在Ubuntu上时,在[GVfs][8](Gnome桌面下的虚拟文件系统)的帮助下,我可以打开,浏览和管理我的手机, 就像它是一个普通的U盘一样。 + + [![Nautilus MTP integration with a Nexus 5X](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg)][9] + +但是一些用户在使用默认的文件管理器时,在MTP的某些功能上会出现问题:比如文件夹没有正确加载,创建新文件夹后此文件夹不存在,或者无法在媒体播放器中使用自己的手机。 + +这就是要为Linux系统用户设计一个安卓文件传输助手应用的原因。将这个应用当做将MTP设备安装在Linux下的另一种选择。如果你使用Linux下的默认应用时一切正常,你也许并不需要尝试使用它 (除非你真的很想尝试新鲜事物)。 + + +![Android File Transfer Linux App](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/android-file-transfer-for-linux-750x662.jpg) + +app特点: + +*   简洁直观的用户界面 + +*   支持文件拖放功能(从Linux系统到手机) + +*   支持批量下载 (从手机到Linux系统) + +*   显示传输进程对话框 + +*   FUSE模块支持 + +*   没有文件大小限制 + +*   可选命令行工具 + +### Ubuntu下安装安卓手机文件助手的步骤 + +以上就是对这个应用的介绍,下面是如何安装它的具体步骤。 + +这有一个[PPA](个人软件包集)源为Ubuntu 14.04 LTS(长期支持版本),16.04LTS 和 Ubuntu17.10 提供可用应用 + +为了将这一PPA加入你的软件资源列表中,执行这条命令: + +``` +sudo add-apt-repository ppa:samoilov-lex/aftl-stable +``` + +接着,为了在Ubuntu下安装Linux版本的安卓文件传输助手,执行: + +``` +sudo apt-get update && sudo apt install android-file-transfer +``` + +这样就行了。 + +你会在你的应用列表中发现这一应用的启动图标。 + +在你启动这一应用之前,要确保没有其他应用(比如Nautilus)已经加载了你的手机.如果其他应用正在使用你的手机,就会显示“无法找到MTP设备”。为了解决这一问题,将你的手机从Nautilus(或者任何正在使用你的手机的应用)上移除,然后再重新启动安卓文件传输助手。 + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux + +作者:[ JOEY SNEDDON ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:https://plus.google.com/117485690627814051450/?rel=author +[2]:http://www.omgubuntu.co.uk/category/app +[3]:http://www.omgubuntu.co.uk/category/download +[4]:https://github.com/whoozle/android-file-transfer-linux +[5]:http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux +[6]:http://android.com/filetransfer?linkid=14270770 +[7]:https://en.wikipedia.org/wiki/Media_Transfer_Protocol +[8]:https://en.wikipedia.org/wiki/GVfs +[9]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg +[10]:https://launchpad.net/~samoilov-lex/+archive/ubuntu/aftl-stable diff --git a/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md 
b/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md deleted file mode 100644 index e51c580da9..0000000000 --- a/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md +++ /dev/null @@ -1,147 +0,0 @@ -Photon也许能成为你最喜爱的容器操作系统 -============================================================ - -![Photon OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon-linux.jpg?itok=jUFHPR_c "Photon OS") - -Phonton OS专注于容器,是一个非常出色的平台。 —— Jack Wallen - -容器在当下的火热,并不是没有原因的。正如[之前][13]讨论的,容器可以使您轻松快捷地将新的服务与应用部署到您的网络上,而且并不耗费太多的系统资源。比起专用硬件和虚拟机,容器都是更加划算的,除此之外,他们更容易更新与重用。 - -更重要的是,容器喜欢Linux(反之亦然)。不需要太多时间和麻烦,你就可以启动一台Linux服务器,运行[Docker][14],再是部署容器。但是,哪种Linux发行版最适合部署容器呢?我们的选择很多。你可以使用标准的Ubuntu服务器平台(更容易安装Docker并部署容器)或者是更轻量级的发行版 —— 专门用于部署容器。 - -[Photon][15]就是这样的一个发行版。这个特殊的版本是由[VMware][16]于2005年创建的,它包含了Docker的守护进程,并与容器框架(如Mesos和Kubernetes)一起使用。Photon经过优化可与[VMware vSphere][17]协同工作,而且可用于裸机,[Microsoft Azure][18], [Google Compute Engine][19], [Amazon Elastic Compute Cloud][20], 或者 [VirtualBox][21]等。 - -Photon通过只安装Docker守护进程所必需的东西来保持它的轻量。而这样做的结果是,这个发行版的大小大约只有300MB。但这足以让Linux的运行一切正常。除此之外,Photon的主要特点还有: - -* 内核调整为性能模式。 - -* 内核根据[内核自防护项目][6](KSPP)进行了加固。 - -* 所有安装的软件包都根据加固的安全标识来构建。 - -* 操作系统在信任验证后启动。 - -* Photon管理进程管理防火墙,网络,软件包,和远程登录在Photon机子上的用户。 - -* 支持持久卷。 - -* [Project Lightwave][7] 整合。 - -* 及时的安全补丁与更新。 - -Photon可以通过[ISO][22],[OVA][23],[Amazon Machine Image][24],[Google Compute Engine image][25]和[Azure VHD][26]安装使用。现在我将向您展示如何使用ISO镜像在VirtualBox上安装Photon。整个安装过程大概需要五分钟,在最后您将有一台随时可以部署容器的虚拟机。 - -### 创建虚拟机 - -在部署第一台容器之前,您必须先创建一台虚拟机并安装Photon。为此,打开VirtualBox并点击“新建”按钮。跟着创建虚拟机向导进行配置(根据您的容器将需要的用途,为Photon提供必要的资源)。在创建好虚拟机后,您所需要做的第一件事就是更改配置。选择新建的虚拟机(在VirtualBox主窗口的左侧面板中),然后单击“设置”。在弹出的窗口中,点击“网络”(在左侧的导航中)。 - -在“网络”窗口(图1)中,你需要在“连接”的下拉窗口中选择桥接。这可以确保您的Photon服务与您的网络相连。完成更改后,单击确定。 - -### [photon_0.jpg][8] - -![change settings](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_0.jpg?itok=Q0yhOhsZ "change setatings") -图 1: 更改Photon在VirtualBox中的网络设置。[经许可使用][1] - -从左侧的导航选择您的Photon虚拟机,点击启动。系统会提示您去加载IOS镜像。当您完成之后,Photon安装程序将会启动并提示您按回车后开始安装。安装过程基于ncurses(没有GUI),但它非常简单。 - -接下来(图2),系统会询问您是要最小化安装,完整安装还是安装OSTree服务器。我选择了完整安装。选择您所需要的任意选项,然后按回车继续。 - -### [photon_1.jpg][9] - -![installation type](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_2.jpg?itok=QL1Rs-PH "Photon") -图 2: 选择您的安装类型.[经许可使用][2] - -在下一个窗口,选择您要安装Photon的磁盘。由于我们将其安装在虚拟机,因此只有一块磁盘会被列出(图3)。选择“自动”按下回车。然后安装程序会让您输入(并验证)管理员密码。在这之后镜像开始安装在您的磁盘上并在不到5分钟的时间内结束。 - -### [photon_2.jpg][] - -![Photon](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_1.jpg?itok=OdnMVpaA "installation type") -图 3: 选择安装Photon的硬盘.[经许可使用][3] - -安装完成后,重启虚拟机并使用安装时创建的用户root和它的密码登录。一切就绪,你准备好开始工作了。 - -在开始使用Docker之前,您需要更新一下Photon。Photon使用 _yum_ 软件包管理器,因此在以root用户登录后输入命令 _yum update_。如果有任何可用更新,则会询问您是否确认(图4)。 - -### [photon_3.jpg][11] - -![Updating](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_3.jpg?itok=vjqrspE2 "Updating") -图 4: 更新 Photon.[经许可使用][4] - -用法 - -正如我所说的,Photon提供了部署容器甚至创建Kubernetes集群所需要的所有包。但是,在使用之前还要做一些事情。首先要启动Docker守护进程。为此,执行以下命令: - -``` -systemctl start docker - -systemctl enable docker -``` - -现在我们需要创建一个标准用户,因此我们没有以root去运行docker命令。为此,执行以下命令: - -``` -useradd -m USERNAME - -passwd USERNAME -``` - -其中USERNAME是我们新增的用户的名称。 - -接下来,我们需要将这个新用户添加到 _docker_ 组,执行命令: - -``` -usermod -a -G docker USERNAME -``` - -其中USERNAME是刚刚创建的用户的名称。 - -注销root用户并切换为新增的用户。现在,您已经可以不必使用 _sudo_ 命令或者是切换到root用户来使用 _docker_命令了。从Docker Hub中取出一个镜像开始部署容器吧。 - -### 一个优秀的容器平台 - 
-在专注于容器方面,Photon毫无疑问是一个出色的平台。请注意,Photon是一个开源项目,因此没有任何付费支持。如果您对Photon有任何的问题,请移步Photon项目的Github下的[Issues][27],那里可以供您阅读相关问题,或者提交您的问题。如果您对Photon感兴趣,您也可以在项目的官方[Github][28]中找到源码。 - -尝试一下Photon吧,看看它是否能够使得Docker容器和Kubernetes集群的部署更加容易。 - -欲了解Linux的更多信息,可以通过学习Linux基金会和edX的免费课程,[“Linux 入门”][29]。 - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2017/11/photon-could-be-your-new-favorite-container-os - -作者:[JACK WALLEN][a] -译者:[KeyLD](https://github.com/KeyLd) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/jlwallen -[1]:https://www.linux.com/licenses/category/used-permission -[2]:https://www.linux.com/licenses/category/used-permission -[3]:https://www.linux.com/licenses/category/used-permission -[4]:https://www.linux.com/licenses/category/used-permission -[5]:https://www.linux.com/licenses/category/creative-commons-zero -[6]:https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project -[7]:http://vmware.github.io/lightwave/ -[8]:https://www.linux.com/files/images/photon0jpg -[9]:https://www.linux.com/files/images/photon1jpg -[10]:https://www.linux.com/files/images/photon2jpg -[11]:https://www.linux.com/files/images/photon3jpg -[12]:https://www.linux.com/files/images/photon-linuxjpg -[13]:https://www.linux.com/learn/intro-to-linux/2017/11/how-install-and-use-docker-linux -[14]:https://www.docker.com/ -[15]:https://vmware.github.io/photon/ -[16]:https://www.vmware.com/ -[17]:https://www.vmware.com/products/vsphere.html -[18]:https://azure.microsoft.com/ -[19]:https://cloud.google.com/compute/ -[20]:https://aws.amazon.com/ec2/ -[21]:https://www.virtualbox.org/ -[22]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS -[23]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS -[24]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS -[25]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS -[26]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS -[27]:https://github.com/vmware/photon/issues -[28]:https://github.com/vmware/photon -[29]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20171130 New Feature Find every domain someone owns automatically.md b/translated/tech/20171130 New Feature Find every domain someone owns automatically.md deleted file mode 100644 index 4b72eaae5e..0000000000 --- a/translated/tech/20171130 New Feature Find every domain someone owns automatically.md +++ /dev/null @@ -1,49 +0,0 @@ -新功能:自动找出每个域名的拥有者 -============================================================ - - -今天,我们很高兴地宣布我们最近几周做的新功能。它是 Whois 聚合工具,现在可以在 [DNSTrails][1] 上获得。 - -在过去,查找一个域名的所有者会花费很多时间,因为大部分时间你都需要把域名指向一个 IP 地址,以便找到同一个人拥有的其他域名。 - -使用老的方法,你会很轻易地在一个工具和另外一个工具的研究和交叉比较结果中花费数个小时,直到得到你想要的域名。 - -感谢这个新工具和我们的智能[WHOIS 数据库][2],现在你可以搜索任何域名,并获得组织或个人注册的域名的完整列表,并在几秒钟内获得准确的结果。 - -### 我如何使用Whois聚合功能? 
- -第一步:打开 [DNSTrails.com][3] - -第二步:搜索任何域名,比如:godaddy.com - -第三步:在得到域名的结果后,如下所见,定位下面的 Whois 信息: - -![Domain name search results](https://securitytrails.com/images/a/a/1/3/f/aa13fa3616b8dc313f925bdbf1da43a54856d463-image1.png) - -第四步:你会看到那里有有关域名的电话和电子邮箱地址。 - -第五步:点击右边的链接,你会轻松地找到用相同电话和邮箱注册的域名。 - -![All domain names by the same owner](https://securitytrails.com/images/1/3/4/0/3/134037822d23db4907d421046b11f3cbb872f94f-image2.png) - -如果你正在调查互联网上任何个人的域名所有权,这意味着即使域名甚至没有指向注册服务商的 IP,如果他们使用相同的电话和邮件地址,我们仍然可以发现其他域名。 - -想知道一个人拥有的其他域名么?亲自试试 [DNStrails][5] 的[ WHOIS 聚合功能][4]或者[使用我们的 API 访问][6]。 - --------------------------------------------------------------------------------- - -via: https://securitytrails.com/blog/find-every-domain-someone-owns - -作者:[SECURITYTRAILS TEAM ][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://securitytrails.com/blog/find-every-domain-someone-owns -[1]:https://dnstrails.com/ -[2]:https://securitytrails.com/forensics -[3]:https://dnstrails.com/ -[4]:http://dnstrails.com/#/domain/domain/ueland.com -[5]:https://dnstrails.com/ -[6]:https://securitytrails.com/contact diff --git a/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md b/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md deleted file mode 100644 index 9f905bd496..0000000000 --- a/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md +++ /dev/null @@ -1,400 +0,0 @@ -Translate Shell: 一款在 Linux 命令行中使用 Google Translate的工具 -============================================================ - -我对 CLI 应用非常感兴趣,因此热衷于使用并分享 CLI 应用。 我之所以更喜欢 CLI 很大原因是因为我在大多数的时候都使用的是字符界面(black screen),已经习惯了使用 CLI 应用而不是 GUI 应用. - -我写过很多关于 CLI 应用的文章。 最近我发现了一些 google 的 CLI 工具,像 “Google Translator”, “Google Calendar”, 和 “Google Contacts”。 这里,我想在给大家分享一下。 - -今天我们要介绍的是 “Google Translator” 工具。 由于母语是泰米尔语,我在一天内用了很多次才理解了它的意义。 - -`Google translate` 为其他语系的人们所广泛使用。 - -### 什么是 Translate Shell - -[Translate Shell][2] (之前叫做 Google Translate CLI) 是一款借助 `Google Translate`(默认), `Bing Translator`, `Yandex.Translate` 以及 `Apertium` 来翻译的命令行翻译器。 -它让你可以在终端访问这些翻译引擎. 
`Translate Shell` 在大多数Linux发行版中都能使用。 - -### 如何安装 Translate Shell - -有三种方法安装 `Translate Shell`。 - -* 下载自包含的可执行文件 - -* 手工安装 - -* 通过包挂力气安装 - -#### 方法-1 : 下载自包含的可执行文件 - -下载自包含的可执行文件放到 `/usr/bin` 目录中。 - -```shell -$ wget git.io/trans -$ chmod +x ./trans -$ sudo mv trans /usr/bin/ -``` - -#### 方法-2 : 手工安装 - -克隆 `Translate Shell` github 仓库然后手工编译。 - -```shell -$ git clone https://github.com/soimort/translate-shell && cd translate-shell -$ make -$ sudo make install -``` - -#### 方法-3 : Via Package Manager - -有些发行版的官方仓库中包含了 `Translate Shell`,可以通过包管理器来安装。 - -对于 Debian/Ubuntu, 使用 [APT-GET Command][3] 或者 [APT Command][4]来安装。 - -```shell -$ sudo apt-get install translate-shell -``` - -对于 Fedora, 使用 [DNF Command][5] 来安装。 - -```shell -$ sudo dnf install translate-shell -``` - -对于基于 Arch Linux 的系统, 使用 [Yaourt Command][6] 或 [Packer Command][7] 来从 AUR 仓库中安装。 - -```shell -$ yaourt -S translate-shell -or -$ packer -S translate-shell -``` - -### 如何使用 Translate Shell - -安装好后,打开终端闭关输入下面命令。 `Google Translate` 会自动探测源文本是哪种语言,并且在默认情况下将之翻译成你的 `locale` 所对应的语言。 - -``` -$ trans [Words] -``` - -下面我将泰米尔语中的单词 “நன்றி” (Nanri) 翻译成英语。 这个单词的意思是感谢别人。 - -``` -$ trans நன்றி -நன்றி -(Naṉṟi) - -Thanks - -Definitions of நன்றி -[ தமிழ் -> English ] - -noun - gratitude - நன்றி - thanks - நன்றி - -நன்றி - Thanks -``` - -使用下面命令也能将英语翻译成泰米尔语。 - -``` -$ trans :ta thanks -thanks -/THaNGks/ - -நன்றி -(Naṉṟi) - -Definitions of thanks -[ English -> தமிழ் ] - -noun - நன்றி - gratitude, thanks - -thanks - நன்றி -``` - -要将一个单词翻译到多个语种可以使用下面命令(本例中, 我将单词翻译成泰米尔语以及印地语)。 - -``` -$ trans :ta+hi thanks -thanks -/THaNGks/ - -நன்றி -(Naṉṟi) - -Definitions of thanks -[ English -> தமிழ் ] - -noun - நன்றி - gratitude, thanks - -thanks - நன்றி - -thanks -/THaNGks/ - -धन्यवाद -(dhanyavaad) - -Definitions of thanks -[ English -> हिन्दी ] - -noun - धन्यवाद - thanks, thank, gratitude, thankfulness, felicitation - -thanks - धन्यवाद, शुक्रिया -``` - -使用下面命令可以将多个单词当成一个参数(句子)来进行翻译。(只需要把句子应用起来作为一个参数就行了)。 - -``` -$ trans :ta "what is going on your life?" -what is going on your life? - -உங்கள் வாழ்க்கையில் என்ன நடக்கிறது? -(Uṅkaḷ vāḻkkaiyil eṉṉa naṭakkiṟatu?) - -Translations of what is going on your life? -[ English -> தமிழ் ] - -what is going on your life? - உங்கள் வாழ்க்கையில் என்ன நடக்கிறது? -``` - -下面命令独立地翻译各个单词。 - -``` -$ trans :ta curios happy -curios - -ஆர்வம் -(Ārvam) - -Translations of curios -[ Română -> தமிழ் ] - -curios - ஆர்வம், அறிவாளிகள், ஆர்வமுள்ள, அறிய, ஆர்வமாக -happy -/ˈhapē/ - -சந்தோஷமாக -(Cantōṣamāka) - -Definitions of happy -[ English -> தமிழ் ] - - மகிழ்ச்சியான - happy, convivial, debonair, gay - திருப்தி உடைய - happy - -adjective - இன்பமான - happy - -happy - சந்தோஷமாக, மகிழ்ச்சி, இனிய, சந்தோஷமா -``` - -简洁模式: 默认情况下,`Translate Shell` 尽可能多的显示翻译信息. 如果你希望只显示简要信息,只需要加上`-b`选项。 - -``` -$ trans -b :ta thanks -நன்றி -``` - -字典模式: 加上 `-d` 可以把 `Translate Shell` 当成字典来用. - -``` -$ trans -d :en thanks -thanks -/THaNGks/ - -Synonyms - noun - - gratitude, appreciation, acknowledgment, recognition, credit - - exclamation - - thank you, many thanks, thanks very much, thanks a lot, thank you kindly, much obliged, much appreciated, bless you, thanks a million - -Examples - - In short, thanks for everything that makes this city great this Thanksgiving. - - - many thanks - - - There were no thanks in the letter from him, just complaints and accusations. - - - It is a joyful celebration in which Bolivians give thanks for their freedom as a nation. 
- - - festivals were held to give thanks for the harvest - - - The collection, as usual, received a great response and thanks is extended to all who subscribed. - - - It would be easy to dwell on the animals that Tasmania has lost, but I prefer to give thanks for what remains. - - - thanks for being so helpful - - - It came back on about half an hour earlier than predicted, so I suppose I can give thanks for that. - - - Many thanks for the reply but as much as I tried to follow your advice, it's been a bad week. - - - To them and to those who have supported the office I extend my grateful thanks . - - - We can give thanks and words of appreciation to others for their kind deeds done to us. - - - Adam, thanks for taking time out of your very busy schedule to be with us tonight. - - - a letter of thanks - - - Thank you very much for wanting to go on reading, and thanks for your understanding. - - - Gerry has received a letter of thanks from the charity for his part in helping to raise this much needed cash. - - - So thanks for your reply to that guy who seemed to have a chip on his shoulder about it. - - - Suzanne, thanks for being so supportive with your comments on my blog. - - - She has never once acknowledged my thanks , or existence for that matter. - - - My grateful thanks go to the funders who made it possible for me to travel. - - - festivals were held to give thanks for the harvest - - - All you secretaries who made it this far into the article… thanks for your patience. - - - So, even though I don't think the photos are that good, thanks for the compliments! - - - And thanks for warning us that your secret service requires a motorcade of more than 35 cars. - - - Many thanks for your advice, which as you can see, I have passed on to our readers. - - - Tom Ryan was given a bottle of wine as a thanks for his active involvement in the twinning project. - - - Mr Hill insists he has received no recent complaints and has even been sent a letter of thanks from the forum. - - - Hundreds turned out to pay tribute to a beloved former headteacher at a memorial service to give thanks for her life. - - - Again, thanks for a well written and much deserved tribute to our good friend George. - - - I appreciate your doing so, and thanks also for the compliments about the photos! - -See also - Thanks!, thank, many thanks, thanks to, thanks to you, special thanks, give thanks, thousand thanks, Many thanks!, render thanks, heartfelt thanks, thanks to this -``` - -使用下面格式可以使用 `Translate Shell` 来翻译文件。 - -```shell -$ trans :ta file:///home/magi/gtrans.txt -உங்கள் வாழ்க்கையில் என்ன நடக்கிறது? -``` - -下面命令可以让 `Translate Shell` 进入交互模式. 
在进入交互模式之前你需要明确指定源语言和目标语言。本例中,我将英文单词翻译成泰米尔语。 - -``` -$ trans -shell en:ta thanks -Translate Shell -(:q to quit) -thanks -/THaNGks/ - -நன்றி -(Naṉṟi) - -Definitions of thanks -[ English -> தமிழ் ] - -noun - நன்றி - gratitude, thanks - -thanks - நன்றி -``` - -想知道语言代码,可以执行下面语言。 - -```shell -$ trans -R -``` -或者 -```shell -$ trans -T -┌───────────────────────┬───────────────────────┬───────────────────────┐ -│ Afrikaans - af │ Hindi - hi │ Punjabi - pa │ -│ Albanian - sq │ Hmong - hmn │ Querétaro Otomi- otq │ -│ Amharic - am │ Hmong Daw - mww │ Romanian - ro │ -│ Arabic - ar │ Hungarian - hu │ Russian - ru │ -│ Armenian - hy │ Icelandic - is │ Samoan - sm │ -│ Azerbaijani - az │ Igbo - ig │ Scots Gaelic - gd │ -│ Basque - eu │ Indonesian - id │ Serbian (Cyr...-sr-Cyrl -│ Belarusian - be │ Irish - ga │ Serbian (Latin)-sr-Latn -│ Bengali - bn │ Italian - it │ Sesotho - st │ -│ Bosnian - bs │ Japanese - ja │ Shona - sn │ -│ Bulgarian - bg │ Javanese - jv │ Sindhi - sd │ -│ Cantonese - yue │ Kannada - kn │ Sinhala - si │ -│ Catalan - ca │ Kazakh - kk │ Slovak - sk │ -│ Cebuano - ceb │ Khmer - km │ Slovenian - sl │ -│ Chichewa - ny │ Klingon - tlh │ Somali - so │ -│ Chinese Simp...- zh-CN│ Klingon (pIqaD)tlh-Qaak Spanish - es │ -│ Chinese Trad...- zh-TW│ Korean - ko │ Sundanese - su │ -│ Corsican - co │ Kurdish - ku │ Swahili - sw │ -│ Croatian - hr │ Kyrgyz - ky │ Swedish - sv │ -│ Czech - cs │ Lao - lo │ Tahitian - ty │ -│ Danish - da │ Latin - la │ Tajik - tg │ -│ Dutch - nl │ Latvian - lv │ Tamil - ta │ -│ English - en │ Lithuanian - lt │ Tatar - tt │ -│ Esperanto - eo │ Luxembourgish - lb │ Telugu - te │ -│ Estonian - et │ Macedonian - mk │ Thai - th │ -│ Fijian - fj │ Malagasy - mg │ Tongan - to │ -│ Filipino - tl │ Malay - ms │ Turkish - tr │ -│ Finnish - fi │ Malayalam - ml │ Udmurt - udm │ -│ French - fr │ Maltese - mt │ Ukrainian - uk │ -│ Frisian - fy │ Maori - mi │ Urdu - ur │ -│ Galician - gl │ Marathi - mr │ Uzbek - uz │ -│ Georgian - ka │ Mongolian - mn │ Vietnamese - vi │ -│ German - de │ Myanmar - my │ Welsh - cy │ -│ Greek - el │ Nepali - ne │ Xhosa - xh │ -│ Gujarati - gu │ Norwegian - no │ Yiddish - yi │ -│ Haitian Creole - ht │ Pashto - ps │ Yoruba - yo │ -│ Hausa - ha │ Persian - fa │ Yucatec Maya - yua │ -│ Hawaiian - haw │ Polish - pl │ Zulu - zu │ -│ Hebrew - he │ Portuguese - pt │ │ -└───────────────────────┴───────────────────────┴───────────────────────┘ -``` - -想了解更多选项的内容,可以查看 `man` 页. 
- -```shell -$ man trans -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/translate-shell-a-tool-to-use-google-translate-from-command-line-in-linux/ - -作者:[Magesh Maruthamuthu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/magesh/ -[2]:https://github.com/soimort/translate-shell -[3]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ -[4]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ -[5]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ -[6]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/ -[7]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/ diff --git a/translated/tech/20171201 Linux Journal Ceases Publication.md b/translated/tech/20171201 Linux Journal Ceases Publication.md deleted file mode 100644 index 2eb5c82f51..0000000000 --- a/translated/tech/20171201 Linux Journal Ceases Publication.md +++ /dev/null @@ -1,34 +0,0 @@ -Linux Journal 停止发行 -============================================================ - -EOF - -伙计们,看起来我们要到终点了。如果按照计划而且没有什么其他的话,十一月份的 Linux Journal 将是我们的最后一期。 - -简单的事实是,我们已经用完了钱和期权。我们从来没有一个富有的母公司或者自己深厚的资金,从开始到结束,这使得我们变成一个反常的出版商。虽然我们在很长的一段时间内运营着,但当天平不可恢复地最终向相反方向倾斜时,我们在十一月份失去了最后一点支持。 - -虽然我们像看到出版业的过去那样看到出版业的未来 - 广告商赞助出版物的时代,因为他们重视品牌和读者 - 我们如今的广告宁愿追逐眼球,最好是在读者的浏览器中植入跟踪标记,并随时随地展示那些广告。但是,未来不是这样,过去的已经过去了。 - -我们猜想,有一个希望,那就是救世主可能会会来。但除了我们的品牌、我们的档案,我们的域名、我们的用户和读者之外,还必须是愿意承担我们一部分债务的人。如果你认识任何人能够提供认真的报价,请告诉我们。不然,请观看 LinuxJournal.com,并希望至少我们的遗留归档(可以追溯到 Linux Journal 诞生的 1994 年 4 月,当 Linux 命中 1.0 发布时)将不会消失。这里有很多很棒的东西,还有很多我们会痛恨世界失去的历史。 - -我们最大的遗憾是,我们甚至没有足够的钱回馈最看重我们的人:我们的用户。为此,我们不能更深刻或真诚地道歉。我们对订阅者而言有什么: - -Linux Pro Magazine 为我们的用户提供了六本免费的杂志,我们在 Linux Journal 上一直赞叹这点。在我们需要的时候,他们是我们的第一批人,我们感谢他们的恩惠。我们今天刚刚完成了我们的 2017 年归档,其中包括我们曾经发表过的每一个问题,包括第一个和最后一个。通常我们以 25 美元的价格出售,但显然用户将免费获得。订阅者请注意有关两者的详细信息的电子邮件。 - -我们也希望在知道我们非常非常努力地让 Linux Journal 进行下去后能有一些安慰 ,而且我们已经用最精益、小的可能运营了很长一段时间。我们是一个大多数是自愿者的组织,有些员工已经几个月没有收到工资。我们还欠钱给自由职业者。这时一个限制发行商能够维持多长时间的限制,现在这个限制已经到头了。 - -伙计们,这是一个伟大的运营。乡亲。对每一个为我们的诞生、我们的成功和我们多年的坚持作出贡献的人致敬。我们列了一份名单,但是列表太长了,并且漏掉有价值的人的风险很高。你知道你是谁。我们再次感谢。 - --------------------------------------------------------------------------------- - -via: https://www.linuxjournal.com/content/linux-journal-ceases-publication - -作者:[ Carlie Fairchild][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxjournal.com/users/carlie-fairchild -[1]:https://www.linuxjournal.com/taxonomy/term/29 -[2]:https://www.linuxjournal.com/users/carlie-fairchild diff --git a/translated/tech/Linux Networking Hardware for Beginners: Think Software b/translated/tech/Linux Networking Hardware for Beginners: Think Software deleted file mode 100644 index a236a80e97..0000000000 --- a/translated/tech/Linux Networking Hardware for Beginners: Think Software +++ /dev/null @@ -1,89 +0,0 @@ -Translating by FelixYFZ - -面向初学者的Linux网络硬件: 软件工程思想 -============================================================ - -![island network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/soderskar-island.jpg?itok=wiMaF66b "island network") - 没有路由和桥接,我们将会成为孤独的小岛,你将会在这个网络教程中学到更多知识。 -Commons Zero][3]Pixabay - - 
上周,我们学习了本地网络硬件知识;本周,我们将学习网络互联技术,以及在移动网络中的一些很酷的黑客技术。
-### Routers:路由器
-
-
-网络路由器就是计算机网络中的一切,因为路由器连接着网络。没有路由器,我们就会成为孤岛。
-
-图一展示了一个简单的有线本地网络和一个无线接入点,所有设备都接入到 Internet 上。本地局域网的计算机连接到一个连接着防火墙或者路由器的以太网交换机上,防火墙或者路由器连接到网络服务供应商提供的电缆箱、调制解调器、卫星上行系统……好像一切都在计算中,就像是一个带着不停闪烁的小灯的盒子。当你的网络数据包离开你的局域网,进入广阔的互联网,它们穿过一个又一个路由器,直到到达自己的目的地。
-
-
-### [fig-1.png][4]
-
-![simple LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_7.png?itok=lsazmf3- "simple LAN")
-
-图一:一个简单的有线局域网和一个无线接入点。
-
-一台路由器能连接一切:一个小巧特殊的盒子可以只专注于路由,一个大点的盒子可以提供路由、防火墙、域名服务以及 VPN 网关功能;一台重新改装的台式电脑或者笔记本、一个树莓派计算机或者一个小型的单板计算机,除了苛刻的用途以外,普通的商品硬件都能良好地工作运行。高端的路由器使用特殊设计的硬件,每秒能够传输最大量的数据包。它们有多路数据总线、多个中央处理器和极快的存储。
-可以通过查阅 Juniper 和思科的路由器来感受一下高端路由器是什么样子的,而且能看看里面是什么样的构造。
-一个接入你的局域网的无线接入点要么作为一个以太网网桥,要么作为一个路由器。一个桥接器扩展了这个网络,所以在这个桥接器上的任意一个端口上的主机都连接在同一个网络中。
-一台路由器连接的是两个不同的网络。
-### Network Topology:网络拓扑
-
-
-有多种设置你的局域网的方式。你可以把所有主机接入到一个单独的平面网络;如果你的交换机支持的话,你也可以把它们分配到不同的子网中。
-平面网络是最简单的网络,只需把每一台设备接入到同一个交换机上即可;如果一台交换机上的端口不够使用,你可以将更多的交换机连接在一起。
-有些交换机有特殊的上行端口,有些则没有这种特殊限制,你可以连接其中的任意端口;你可能需要使用交叉类型的以太网线,所以你要查阅你的交换机的说明文档来设置。平面网络是最容易管理的,你不需要路由器,也不需要计算子网,但它也有一些缺点:它们的伸缩性不好,当网络规模变得越来越大的时候,就会被广播流量所阻塞。
-将你的局域网进行分段将会提升安全保障,把局域网分成可管理的不同网段也将有助于管理更大的网络。图 2 展示了一个分成两个子网的局域网络:内部的有线和无线主机,以及一个非军事区(DMZ,一直没搞明白为什么计算机领域里到处是这类军事术语?),它被阻挡了对所有内部网络的访问。
-
-
-### [fig-2.png][5]
-
-![LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_4.png?itok=LpXq7bLf "LAN")
-
-图2:一个分成两个子网的简单局域网。
-即使像图 2 那样的小型网络也可以有不同的配置方法。你可以将防火墙和路由器放置在同一台单独的设备上。
-你可以为你的非军事区设置一个专用的网络连接,把它完全从你的内部网络隔离,这将引导我们进入下一个主题:一切基于软件。
-
-
-### Think Software:软件思维
-
-
-你可能已经注意到,在这个简短的系列中我们所讨论的硬件里,只有网络接口、交换机和线缆是特殊用途的硬件。
-其它的都是通用的商用硬件,而且都是由软件来定义它的用途。
-网关、虚拟专用网关、以太网桥,网页、邮箱以及文件服务器,负载均衡、代理、大量的服务、各种各样的认证、中继、故障转移……你可以在运行着 Linux 系统的标准硬件上运行你的整个网络。
-你甚至可以使用 Linux 交换应用和 VDE2 协议来模拟以太网交换机。像 DD-WRT、openWRT 和 Raspberry Pi 发行版,都是为小型硬件专门定制的系统;也要记住各种 BSD 和它们的特殊衍生版本,如防火墙、路由器和网络附加存储专用系统。
-你知道有些人坚持认为硬件防火墙和软件防火墙有区别?其实是没有区别的,就像说有一台硬件计算机和一台软件计算机。
-### Port Trunking and Ethernet Bonding:端口聚合和以太网绑定
-聚合和绑定,也称链路聚合,是把两条以太网通道绑定在一起成为一条通道。一些交换机支持端口聚合,就是把两个交换机端口绑定在一起,成为一个是它们原来带宽之和的一条新的连接。对于一台承载很多业务的服务器来说,这是一个增加通道带宽的有效方式。
-你也可以在以太网口进行同样的配置,而且绑定汇聚的驱动是内置在 Linux 内核中的,所以不需要任何其他的专门的硬件。
-
-
-### Bending Mobile Broadband to Your Will:随心所欲地使用你的移动带宽
-
-我期望移动带宽能够迅速增长来替代 DSL 和有线网络。我居住在一个有 250,000 人口、靠近一个城市的地方,但是在城市以外,要想接入互联网就要靠运气了,即使那里有很大的用户上网需求。我居住的小角落离城镇有 20 分钟的距离,但对于网络服务供应商来说,他们几乎不会考虑到为这个地方提供网络。我唯一的选择就是移动带宽;这里没有拨号网络、卫星网络(即使它很糟糕)或者是 DSL、电缆、光纤,但这并没有阻止网络供应商把那些我在这个区域从没见过的无限制通信和其他高速网络服务的传单塞进我的邮箱。
-我试用了 AT&T、Verizon 和 T-Mobile。Verizon 的信号覆盖范围最广,但是 Verizon 和 AT&T 是最昂贵的。
-我居住的地方在 T-Mobile 信号覆盖的边缘,但迄今为止他们给了最大的优惠。为了能够有效地使用,我必须购买一个 WeBoost 信号放大器和
-一台中兴的移动热点设备。当然你也可以使用一部手机作为热点,但是专用的热点设备有着最强的信号。如果你正在考虑购买一台信号放大器,最好的选择就是 WeBoost,因为他们的服务支持最棒,而且他们会尽最大努力去帮助你。在一个小小的 APP 的协助下去设置,将会精准地增强你的网络信号。他们有一个功能较少的免费版本,但你一点都不会后悔去花两美元使用专业版。
-那个小巧的中兴热点设备能够支持 15 台主机,而且还拥有基本的防火墙功能。但如果你使用像 Linksys WRT54GL 这样的设备,使用 Tomato、openWRT 或者 DD-WRT 来替代普通的固件,这样你就能完全控制你的防火墙规则、路由配置,以及任何其他你想要设置的服务。
-
--------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-think-software
-
-作者:[CARLA SCHRODER][a]
-译者:[FelixYFZ](https://github.com/FelixYFZ)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/cschroder
-[1]:https://www.linux.com/licenses/category/used-permission
-[2]:https://www.linux.com/licenses/category/used-permission
-[3]:https://www.linux.com/licenses/category/creative-commons-zero -[4]:https://www.linux.com/files/images/fig-1png-7 -[5]:https://www.linux.com/files/images/fig-2png-4 -[6]:https://www.linux.com/files/images/soderskar-islandjpg -[7]:https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-lan-hardware -[8]:http://www.bluelinepc.com/signalcheck/ From 5d8ab1f319782bf857b58e73cbdc5fa942176a68 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 4 Dec 2017 17:23:04 +0800 Subject: [PATCH 006/236] Revert "Revert "Merge branch 'master' of https://github.com/LCTT/TranslateProject"" This reverts commit 11e1c8c450f35378d5e24449e15628748ad98053. --- ...20161216 Kprobes Event Tracing on ARMv8.md | 16 +- ... guide to links in the Linux filesystem.md | 300 ++++++++ ...ng network connections on Linux systems.md | 0 ...layer introduction part 1 the bio layer.md | 13 +- .../20141028 When Does Your OS Run.md | 0 ... Firewalld in Multi-Zone Configurations.md | 0 .../20170227 Ubuntu Core in LXD containers.md | 0 ... THE SOFTWARE CONTAINERIZATION MOVEMENT.md | 0 ...ner OS for Linux and Windows Containers.md | 0 ... Life-Changing Magic of Tidying Up Code.md | 0 ...ldcard Certificates Coming January 2018.md | 0 ...andy Tool for Every Level of Linux User.md | 0 ...GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md | 0 ...0928 3 Python web scrapers and crawlers.md | 0 .../20171002 Scaling the GitLab database.md | 0 ...3 PostgreSQL Hash Indexes Are Now Cool.md | 0 ...inux desktop hasnt jumped in popularity.md | 0 ...ant 100 command line productivity boost.md | 0 ...20171008 8 best languages to blog about.md | 0 ...ext Generation of Cybersecurity Experts.md | 0 ...itter Data in Apache Kafka through KSQL.md | 0 ...p a Postgres database on a Raspberry Pi.md | 0 .../{ => 201711}/20171011 Why Linux Works.md | 0 ...easons open source is good for business.md | 0 ...71013 Best of PostgreSQL 10 for the DBA.md | 0 ... cloud-native computing with Kubernetes.md | 0 ...5 Monitoring Slow SQL Queries via Slack.md | 0 ... Use Docker with R A DevOps Perspective.md | 0 .../20171016 Introducing CRI-O 1.0.md | 0 ...20171017 A tour of Postgres Index Types.md | 0 .../20171017 Image Processing on Linux.md | 0 ...iners and microservices change security.md | 0 ...n Python by building a simple dice game.md | 0 ...ecure Your Network in the Wake of KRACK.md | 0 ...Simple Excellent Linux Network Monitors.md | 0 ...cker containers in Kubernetes with Java.md | 0 ...ols to Help You Remember Linux Commands.md | 0 ...ndroid on Top of a Linux Graphics Stack.md | 0 ...0171024 Top 5 Linux pain points in 2017.md | 0 ...et s analyze GitHub’s data and find out.md | 0 ...u Drop Unity Mark Shuttleworth Explains.md | 0 ...Backup, Rclone and Wasabi cloud storage.md | 0 ...26 But I dont know what a container is .md | 0 .../20171026 Why is Kubernetes so popular.md | 0 .../20171101 How to use cron in Linux.md | 0 ... to a DCO for source code contributions.md | 0 ...nage EXT2 EXT3 and EXT4 Health in Linux.md | 0 .../20171106 Finding Files with mlocate.md | 0 ...Publishes Enterprise Open Source Guides.md | 0 ...mmunity clue. Here s how to do it right.md | 0 ...dopts home-brewed KVM as new hypervisor.md | 0 ... created my first RPM package in Fedora.md | 0 ...est applications with Ansible Container.md | 0 ...71110 File better bugs with coredumpctl.md | 0 ... 
​Linux totally dominates supercomputers.md | 0 ...1116 5 Coolest Linux Terminal Emulators.md | 0 ...7 How to Easily Remember Linux Commands.md | 0 ...tting started with OpenFaaS on minikube.md | 0 ...our Terminal Session To Anyone In Seconds.md | 0 ...20 Containers and Kubernetes Whats next.md | 81 ++ ...Install Android File Transfer for Linux.md | 75 ++ ...and Certification Are Key for SysAdmins.md | 72 ++ ...Search DuckDuckGo from the Command Line.md | 97 +++ ...The One in Which I Call Out Hacker News.md | 86 --- ...nject features and investigate programs.md | 211 ++++++ ...an event Introducing eBPF Kernel probes.md | 361 +++++++++ ...sers guide to Logical Volume Management.md | 233 ++++++ ...9 INTRODUCING DOCKER SECRETS MANAGEMENT.md | 110 +++ ...170530 How to Improve a Legacy Codebase.md | 108 --- ...es Are Hiring Computer Security Experts.md | 91 +++ ... guide to links in the Linux filesystem.md | 314 -------- ...ow to answer questions in a helpful way.md | 172 +++++ ...Linux containers with Ansible Container.md | 114 +++ .../20171005 Reasons Kubernetes is cool.md | 148 ++++ ...20171010 Operating a Kubernetes network.md | 216 ++++++ ...LEAST PRIVILEGE CONTAINER ORCHESTRATION.md | 174 +++++ ...ow Eclipse is advancing IoT development.md | 83 -- ...ive into BPF a list of reading material.md | 711 ++++++++++++++++++ .../20171107 GitHub welcomes all CI tools.md | 95 +++ sources/tech/20171112 Love Your Bugs.md | 311 ++++++++ ... write fun small web projects instantly.md | 76 ++ .../20171114 Sysadmin 101 Patch Management.md | 61 ++ .../20171114 Take Linux and Run With It.md | 68 ++ ...obs Are Hot Get Trained and Get Noticed.md | 58 ++ ... and How to Set an Open Source Strategy.md | 120 +++ ...ux Programs for Drawing and Image Editing.md | 130 ++++ ...171120 Adopting Kubernetes step by step.md | 93 +++ ...20 Containers and Kubernetes Whats next.md | 98 --- ... Why microservices are a security issue.md | 116 +++ ...and Certification Are Key for SysAdmins.md | 70 -- ...Could Be Your New Favorite Container OS.md | 7 +- ...Help Build ONNX Open Source AI Platform.md | 76 ++ ... Your Linux Server Has Been Compromised.md | 156 ++++ ...71128 The politics of the Linux desktop.md | 110 +++ ... a great pair for beginning programmers.md | 142 ++++ ... open source technology trends for 2018.md | 143 ++++ ...actices for getting started with DevOps.md | 94 +++ ...eshark on Debian and Ubuntu 16.04_17.10.md | 185 +++++ ...n Source Components Ease Learning Curve.md | 70 ++ ...eractive Workflows for Cpp with Jupyter.md | 301 ++++++++ ...Unity from the Dead as an Official Spin.md | 41 + ...usiness Software Alternatives For Linux.md | 116 +++ ...x command-line screen grabs made simple.md | 108 +++ ...Search DuckDuckGo from the Command Line.md | 103 --- ...Long Running Terminal Commands Complete.md | 156 ++++ ...ke up and Shut Down Linux Automatically.md | 135 ++++ ...1 Fedora Classroom Session: Ansible 101.md | 71 ++ ...ow to Manage Users with Groups in Linux.md | 168 +++++ ... to find a publisher for your tech book.md | 76 ++ ...e your WiFi MAC address on Ubuntu 16.04.md | 160 ++++ ... 
millions of Linux users with Snapcraft.md | 321 ++++++++ ...inux command-line screen grabs made simple | 72 ++ ...0171202 docker - Use multi-stage builds.md | 127 ++++ ...The One in Which I Call Out Hacker News.md | 99 +++ ...170530 How to Improve a Legacy Codebase.md | 104 +++ .../20170910 Cool vim feature sessions.md | 44 ++ ...ow Eclipse is advancing IoT development.md | 77 ++ .../tech/20171108 Archiving repositories.md | 37 + ...6 Introducing security alerts on GitHub.md | 48 ++ ...stem Logs: Understand Your Linux System.md | 68 ++ ...Install Android File Transfer for Linux.md | 82 -- ...Could Be Your New Favorite Container OS.md | 147 ++++ ...every domain someone owns automatically.md | 49 ++ ...ogle Translate From Command Line In Linux.md | 400 ++++++++++ ...171201 Linux Journal Ceases Publication.md | 34 + ...ing Hardware for Beginners: Think Software | 89 +++ 126 files changed, 8338 insertions(+), 960 deletions(-) rename {translated/tech => published}/20161216 Kprobes Event Tracing on ARMv8.md (98%) create mode 100644 published/20170622 A users guide to links in the Linux filesystem.md rename {translated/tech => published}/20171009 Examining network connections on Linux systems.md (100%) rename {translated/tech => published}/20171029 A block layer introduction part 1 the bio layer.md (95%) rename published/{ => 201711}/20141028 When Does Your OS Run.md (100%) rename published/{ => 201711}/20170202 Understanding Firewalld in Multi-Zone Configurations.md (100%) rename published/{ => 201711}/20170227 Ubuntu Core in LXD containers.md (100%) rename published/{ => 201711}/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md (100%) rename published/{ => 201711}/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md (100%) rename published/{ => 201711}/20170608 The Life-Changing Magic of Tidying Up Code.md (100%) rename published/{ => 201711}/20170706 Wildcard Certificates Coming January 2018.md (100%) rename published/{ => 201711}/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md (100%) rename published/{ => 201711}/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md (100%) rename published/{ => 201711}/20170928 3 Python web scrapers and crawlers.md (100%) rename published/{ => 201711}/20171002 Scaling the GitLab database.md (100%) rename published/{ => 201711}/20171003 PostgreSQL Hash Indexes Are Now Cool.md (100%) rename published/{ => 201711}/20171004 No the Linux desktop hasnt jumped in popularity.md (100%) rename published/{ => 201711}/20171007 Instant 100 command line productivity boost.md (100%) rename published/{ => 201711}/20171008 8 best languages to blog about.md (100%) rename published/{ => 201711}/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md (100%) rename published/{ => 201711}/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md (100%) rename published/{ => 201711}/20171011 How to set up a Postgres database on a Raspberry Pi.md (100%) rename published/{ => 201711}/20171011 Why Linux Works.md (100%) rename published/{ => 201711}/20171013 6 reasons open source is good for business.md (100%) rename published/{ => 201711}/20171013 Best of PostgreSQL 10 for the DBA.md (100%) rename published/{ => 201711}/20171015 How to implement cloud-native computing with Kubernetes.md (100%) rename published/{ => 201711}/20171015 Monitoring Slow SQL Queries via Slack.md (100%) rename published/{ => 
201711}/20171015 Why Use Docker with R A DevOps Perspective.md (100%) rename published/{ => 201711}/20171016 Introducing CRI-O 1.0.md (100%) rename published/{ => 201711}/20171017 A tour of Postgres Index Types.md (100%) rename published/{ => 201711}/20171017 Image Processing on Linux.md (100%) rename published/{ => 201711}/20171018 How containers and microservices change security.md (100%) rename published/{ => 201711}/20171018 Learn how to program in Python by building a simple dice game.md (100%) rename published/{ => 201711}/20171018 Tips to Secure Your Network in the Wake of KRACK.md (100%) rename published/{ => 201711}/20171019 3 Simple Excellent Linux Network Monitors.md (100%) rename published/{ => 201711}/20171019 How to manage Docker containers in Kubernetes with Java.md (100%) rename published/{ => 201711}/20171020 3 Tools to Help You Remember Linux Commands.md (100%) rename published/{ => 201711}/20171020 Running Android on Top of a Linux Graphics Stack.md (100%) rename published/{ => 201711}/20171024 Top 5 Linux pain points in 2017.md (100%) rename published/{ => 201711}/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md (100%) rename published/{ => 201711}/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md (100%) rename published/{ => 201711}/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md (100%) rename published/{ => 201711}/20171026 But I dont know what a container is .md (100%) rename published/{ => 201711}/20171026 Why is Kubernetes so popular.md (100%) rename published/{ => 201711}/20171101 How to use cron in Linux.md (100%) rename published/{ => 201711}/20171101 We re switching to a DCO for source code contributions.md (100%) rename published/{ => 201711}/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md (100%) rename published/{ => 201711}/20171106 Finding Files with mlocate.md (100%) rename published/{ => 201711}/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md (100%) rename published/{ => 201711}/20171106 Most companies can t buy an open source community clue. 
Here s how to do it right.md (100%) rename published/{ => 201711}/20171107 AWS adopts home-brewed KVM as new hypervisor.md (100%) rename published/{ => 201711}/20171107 How I created my first RPM package in Fedora.md (100%) rename published/{ => 201711}/20171108 Build and test applications with Ansible Container.md (100%) rename published/{ => 201711}/20171110 File better bugs with coredumpctl.md (100%) rename published/{ => 201711}/20171114 ​Linux totally dominates supercomputers.md (100%) rename published/{ => 201711}/20171116 5 Coolest Linux Terminal Emulators.md (100%) rename published/{ => 201711}/20171117 How to Easily Remember Linux Commands.md (100%) rename published/{ => 201711}/20171118 Getting started with OpenFaaS on minikube.md (100%) rename published/{ => 201711}/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md (100%) create mode 100644 published/20171120 Containers and Kubernetes Whats next.md create mode 100644 published/20171124 How to Install Android File Transfer for Linux.md create mode 100644 published/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md create mode 100644 published/20171130 Search DuckDuckGo from the Command Line.md delete mode 100644 sources/tech/20090701 The One in Which I Call Out Hacker News.md create mode 100644 sources/tech/20130402 Dynamic linker tricks Using LD_PRELOAD to cheat inject features and investigate programs.md create mode 100644 sources/tech/20160330 How to turn any syscall into an event Introducing eBPF Kernel probes.md create mode 100644 sources/tech/20160922 A Linux users guide to Logical Volume Management.md create mode 100644 sources/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md delete mode 100644 sources/tech/20170530 How to Improve a Legacy Codebase.md create mode 100644 sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md delete mode 100644 sources/tech/20170622 A users guide to links in the Linux filesystem.md create mode 100644 sources/tech/20170921 How to answer questions in a helpful way.md create mode 100644 sources/tech/20171005 How to manage Linux containers with Ansible Container.md create mode 100644 sources/tech/20171005 Reasons Kubernetes is cool.md create mode 100644 sources/tech/20171010 Operating a Kubernetes network.md create mode 100644 sources/tech/20171011 LEAST PRIVILEGE CONTAINER ORCHESTRATION.md delete mode 100644 sources/tech/20171020 How Eclipse is advancing IoT development.md create mode 100644 sources/tech/20171102 Dive into BPF a list of reading material.md create mode 100644 sources/tech/20171107 GitHub welcomes all CI tools.md create mode 100644 sources/tech/20171112 Love Your Bugs.md create mode 100644 sources/tech/20171113 Glitch write fun small web projects instantly.md create mode 100644 sources/tech/20171114 Sysadmin 101 Patch Management.md create mode 100644 sources/tech/20171114 Take Linux and Run With It.md create mode 100644 sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md create mode 100644 sources/tech/20171115 Why and How to Set an Open Source Strategy.md create mode 100644 sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md create mode 100644 sources/tech/20171120 Adopting Kubernetes step by step.md delete mode 100644 sources/tech/20171120 Containers and Kubernetes Whats next.md create mode 100644 sources/tech/20171123 Why microservices are a security issue.md delete mode 100644 sources/tech/20171124 Open Source Cloud Skills 
and Certification Are Key for SysAdmins.md create mode 100644 sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md create mode 100644 sources/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md create mode 100644 sources/tech/20171128 The politics of the Linux desktop.md create mode 100644 sources/tech/20171128 Why Python and Pygame are a great pair for beginning programmers.md create mode 100644 sources/tech/20171129 10 open source technology trends for 2018.md create mode 100644 sources/tech/20171129 5 best practices for getting started with DevOps.md create mode 100644 sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md create mode 100644 sources/tech/20171129 Inside AGL Familiar Open Source Components Ease Learning Curve.md create mode 100644 sources/tech/20171129 Interactive Workflows for Cpp with Jupyter.md create mode 100644 sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md create mode 100644 sources/tech/20171130 Excellent Business Software Alternatives For Linux.md create mode 100644 sources/tech/20171130 Scrot Linux command-line screen grabs made simple.md delete mode 100644 sources/tech/20171130 Search DuckDuckGo from the Command Line.md create mode 100644 sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md create mode 100644 sources/tech/20171130 Wake up and Shut Down Linux Automatically.md create mode 100644 sources/tech/20171201 Fedora Classroom Session: Ansible 101.md create mode 100644 sources/tech/20171201 How to Manage Users with Groups in Linux.md create mode 100644 sources/tech/20171201 How to find a publisher for your tech book.md create mode 100644 sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md create mode 100644 sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md create mode 100644 sources/tech/20171202 Scrot Linux command-line screen grabs made simple create mode 100644 sources/tech/20171202 docker - Use multi-stage builds.md create mode 100644 translated/tech/20090701 The One in Which I Call Out Hacker News.md create mode 100644 translated/tech/20170530 How to Improve a Legacy Codebase.md create mode 100644 translated/tech/20170910 Cool vim feature sessions.md create mode 100644 translated/tech/20171020 How Eclipse is advancing IoT development.md create mode 100644 translated/tech/20171108 Archiving repositories.md create mode 100644 translated/tech/20171116 Introducing security alerts on GitHub.md create mode 100644 translated/tech/20171117 System Logs: Understand Your Linux System.md delete mode 100644 translated/tech/20171124 How to Install Android File Transfer for Linux.md create mode 100644 translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md create mode 100644 translated/tech/20171130 New Feature Find every domain someone owns automatically.md create mode 100644 translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md create mode 100644 translated/tech/20171201 Linux Journal Ceases Publication.md create mode 100644 translated/tech/Linux Networking Hardware for Beginners: Think Software diff --git a/translated/tech/20161216 Kprobes Event Tracing on ARMv8.md b/published/20161216 Kprobes Event Tracing on ARMv8.md similarity index 98% rename from translated/tech/20161216 Kprobes Event Tracing on ARMv8.md rename to 
published/20161216 Kprobes Event Tracing on ARMv8.md
index 3c3ab0de5b..3985f064dc 100644
--- a/translated/tech/20161216 Kprobes Event Tracing on ARMv8.md
+++ b/published/20161216 Kprobes Event Tracing on ARMv8.md
@@ -29,19 +29,19 @@ jprobes 允许通过提供一个具有相同调用签名call signature
 kprobes 提供一系列能从内核代码中调用的 API 来设置探测点和当探测点被命中时调用的注册函数。在不往内核中添加代码的情况下,kprobes 也是可用的,这是通过写入特定事件追踪的 debugfs 文件来实现的,需要在文件中设置探针地址和信息,以便在探针被命中时记录到追踪日志中。后者是本文将要讨论的重点。最后,kprobes 可以通过 perf 命令来使用。
 
-### kprobes API
+#### kprobes API
 
 内核开发人员可以在内核中编写函数(通常在专用的调试模块中完成)来设置探测点,并且在探测指令执行前和执行后立即执行任何所需操作。这在 kprobes.txt 中有很好的解释。
 
-### 事件追踪
+#### 事件追踪
 
 事件追踪子系统有自己的文档^注2 ,对于了解一般追踪事件的背景可能值得一读。事件追踪子系统是追踪点tracepoints和 kprobes 事件追踪的基础。事件追踪文档重点关注追踪点,所以请在查阅文档时记住这一点。kprobes 与追踪点不同的是没有预定义的追踪点列表,而是采用动态创建的用于触发追踪事件信息收集的任意探测点。事件追踪子系统通过一系列 debugfs 文件来控制和监视。事件追踪(`CONFIG_EVENT_TRACING`)将在被如 kprobe 事件追踪子系统等需要时自动选择。
 
-#### kprobes 事件
+##### kprobes 事件
 
 使用 kprobes 事件追踪子系统,用户可以在内核任意断点处指定要报告的信息,只需要指定任意现有可探测指令的地址以及格式化信息即可确定。在执行过程中遇到断点时,kprobes 将所请求的信息传递给事件追踪子系统的公共部分,这些部分将数据格式化并追加到追踪日志中,就像追踪点的工作方式一样。kprobes 使用一个类似的但是大部分是独立的 debugfs 文件来控制和显示追踪事件信息。该功能可使用 `CONFIG_KPROBE_EVENT` 来选择。Kprobetrace 文档^注3 提供了如何使用 kprobes 事件追踪的基本信息,要了解下面介绍的示例的详细信息,应当参考该文档。
 
-### kprobes 和 perf
+#### kprobes 和 perf
 
 perf 工具为 kprobes 提供了另一个命令行接口。特别地,`perf probe` 允许探测点除了由函数名加偏移量和地址指定外,还可由源文件和行号指定。perf 接口实际上是使用 kprobes 的 debugfs 接口的封装器。
@@ -60,7 +60,7 @@ perf 工具为 kprobes 提供了另一个命令行接口。特别地,`perf pro
 kprobes 的一个常用例子是检测函数入口和/或出口。因为只需要使用函数名来作为探针地址,它安装探针特别简单。kprobes 事件追踪将查看符号名称并且确定地址。ARMv8 调用标准定义了函数参数和返回值的位置,并且这些可以作为 kprobes 事件处理的一部分被打印出来。
 
-### 例子: 函数入口探测
+#### 例子: 函数入口探测
 
 检测 USB 以太网驱动程序复位功能:
@@ -94,7 +94,7 @@ kworker/0:0-4 [000] d… 10972.102939: p_ax88772_reset_0:
 
 这里我们可以看见传入到我们的探测函数的指针参数的值。由于我们没有使用 kprobes 事件追踪的可选标签功能,我们需要的信息自动被标注为 `arg1`。注意这指向我们需要 kprobes 记录这个探针的一组值的第一个,而不是函数参数的实际位置。在这个例子中它也只是碰巧是我们探测函数的第一个参数。
 
-### 例子: 函数入口和返回探测
+#### 例子: 函数入口和返回探测
 
 kretprobe 功能专门用于探测函数返回。在函数入口 kprobes 子系统将会被调用并且建立钩子以便在函数返回时调用,钩子将记录需求事件信息。对最常见情况,返回信息通常在 `X0` 寄存器中,这是非常有用的。在 `%x0` 中返回值也可以被称为 `$retval`。以下例子也演示了如何提供一个可读的标签来展示有趣的信息。
@@ -132,7 +132,7 @@ _$ cat trace
 bash-1671 [001] d..1 214.401975: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x726_
 ```
 
-### 例子: 解引用指针参数
+#### 例子: 解引用指针参数
 
 对于指针值,kprobes 事件处理子系统也允许解引用和打印所需的内存内容,适用于各种基本数据类型。为了展示所需字段,手动计算结构的偏移量是必要的。
@@ -173,7 +173,7 @@ $ cat trace
 bash-1702 [002] d..1 175.347349: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0xfffffffffffffff6
 
-### 例子: 探测任意指令地址
+#### 例子: 探测任意指令地址
 
 在前面的例子中,我们已经为函数的入口和出口插入探针,然而探测一个任意指令(除少数例外)是可能的。如果我们正在 C 函数中放置一个探针,第一步是查看代码的汇编版本以确定我们要放置探针的位置。一种方法是在 vmlinux 文件上使用 gdb,并在要放置探针的函数中展示指令。下面是一个在 `arch/arm64/kernel/modules.c` 中 `module_alloc` 函数执行此操作的示例。在这种情况下,因为 gdb 似乎更喜欢使用弱符号定义,并且它是与这个函数关联的存根代码,所以我们从 System.map 中来获取符号值:
diff --git a/published/20170622 A users guide to links in the Linux filesystem.md b/published/20170622 A users guide to links in the Linux filesystem.md
new file mode 100644
index 0000000000..7d731693d8
--- /dev/null
+++ b/published/20170622 A users guide to links in the Linux filesystem.md
@@ -0,0 +1,300 @@
+用户指南:Linux 文件系统的链接
+============================================================
+
+> 学习如何使用链接,通过从 Linux 文件系统多个位置来访问文件,可以让日常工作变得轻松。
+
+![linux 文件链接用户指南](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/links.png?itok=enaPOi4L "A user's guide to links in the Linux filesystem")
+
+Image by : [Paul Lewin][8]. Modified by Opensource.com. 
[CC BY-SA 2.0][9] + +在我为 opensource.com 写过的关于 Linux 文件系统方方面面的文章中,包括 [Linux 的 EXT4 文件系统的历史、特性以及最佳实践][10]; [在 Linux 中管理设备][11];[Linux 文件系统概览][12] 和 [用户指南:逻辑卷管理][13],我曾简要的提到过 Linux 文件系统一个有趣的特性,它允许用户从多个位置来访问 Linux 文件目录树中的文件来简化一些任务。 + +Linux 文件系统中有两种链接link硬链接hard link软链接soft link。虽然二者差别显著,但都用来解决相似的问题。它们都提供了对单个文件的多个目录项(引用)的访问,但实现却大为不同。链接的强大功能赋予了 Linux 文件系统灵活性,因为[一切皆是文件][14]。 + +举个例子,我曾发现一些程序要求特定的版本库方可运行。 当用升级后的库替代旧库后,程序会崩溃,提示旧版本库缺失。通常,库名的唯一变化就是版本号。出于直觉,我仅仅给程序添加了一个新的库链接,并以旧库名称命名。我试着再次启动程序,运行良好。程序就是一个游戏,人人都明白,每个玩家都会尽力使游戏进行下去。 + +事实上,几乎所有的应用程序链接库都使用通用的命名规则,链接名称中包含了主版本号,链接所指向的文件的文件名中同样包含了小版本号。再比如,程序的一些必需文件为了迎合 Linux 文件系统规范,从一个目录移动到另一个目录中,系统为了向后兼容那些不能获取这些文件新位置的程序在旧的目录中存放了这些文件的链接。如果你对 `/lib64` 目录做一个长清单列表,你会发现很多这样的例子。 + +``` +lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.hwm -> ../../usr/share/cracklib/pw_dict.hwm +lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwd -> ../../usr/share/cracklib/pw_dict.pwd +lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwi -> ../../usr/share/cracklib/pw_dict.pwi +lrwxrwxrwx. 1 root root 27 Jun 9 2016 libaccountsservice.so.0 -> libaccountsservice.so.0.0.0 +-rwxr-xr-x. 1 root root 288456 Jun 9 2016 libaccountsservice.so.0.0.0 +lrwxrwxrwx 1 root root 15 May 17 11:47 libacl.so.1 -> libacl.so.1.1.0 +-rwxr-xr-x 1 root root 36472 May 17 11:47 libacl.so.1.1.0 +lrwxrwxrwx. 1 root root 15 Feb 4 2016 libaio.so.1 -> libaio.so.1.0.1 +-rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.0 +-rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.1 +lrwxrwxrwx. 1 root root 30 Jan 16 16:39 libakonadi-calendar.so.4 -> libakonadi-calendar.so.4.14.26 +-rwxr-xr-x. 1 root root 816160 Jan 16 16:39 libakonadi-calendar.so.4.14.26 +lrwxrwxrwx. 1 root root 29 Jan 16 16:39 libakonadi-contact.so.4 -> libakonadi-contact.so.4.14.26 +``` + +`/lib64` 目录下的一些链接 + +在上面展示的 `/lib64` 目录清单列表中,文件模式第一个字母 `l` (小写字母 l)表示这是一个软链接(又称符号链接)。 + +### 硬链接 + +在 [Linux 的 EXT4 文件系统的历史、特性以及最佳实践][15]一文中,我曾探讨过这样一个事实,每个文件都有一个包含该文件信息的 inode,包含了该文件的位置信息。上述文章中的[图2][16]展示了一个指向 inode 的单一目录项。每个文件都至少有一个目录项指向描述该文件信息的 inode ,目录项是一个硬链接,因此每个文件至少都有一个硬链接。 + +如下图 1 所示,多个目录项指向了同一 inode 。这些目录项都是硬链接。我曾在三个目录项中使用波浪线 (`~`) 的缩写,这是用户目录的惯例表示,因此在该例中波浪线等同于 `/home/user` 。值得注意的是,第四个目录项是一个完全不同的目录,`/home/shared`,可能是该计算机上用户的共享文件目录。 + +![fig1directory_entries.png](https://opensource.com/sites/default/files/images/life/fig1directory_entries.png) + +*图 1* + +硬链接被限制在一个单一的文件系统中。此处的“文件系统” 是指挂载在特定挂载点上的分区或逻辑卷,此例中是 `/home`。这是因为在每个文件系统中的 inode 号都是唯一的。而在不同的文件系统中,如 `/var` 或 `/opt`,会有和 `/home` 中相同的 inode 号。 + +因为所有的硬链接都指向了包含文件元信息的单一 inode ,这些属性都是文件的一部分,像所属关系、权限、到该 inode 的硬链接数目,对每个硬链接来说这些特性没有什么不同的。这是一个文件所具有的一组属性。唯一能区分这些文件的是包含在 inode 信息中的文件名。链接到同一目录中的单一文件/ inode 的硬链接必须拥有不同的文件名,这是基于同一目录下不能存在重复的文件名的事实的。 + +文件的硬链接数目可通过 `ls -l` 来查看,如果你想查看实际节点号,可使用 `ls -li` 命令。 + +### 符号(软)链接 + +硬链接和软链接(也称为符号链接symlink)的区别在于,硬链接直接指向属于该文件的 inode ,而软链接直接指向一个目录项,即指向一个硬链接。因为软链接指向的是一个文件的硬链接而非该文件的 inode ,所以它们并不依赖于 inode 号,这使得它们能跨越不同的文件系统、分区和逻辑卷起作用。 + +软链接的缺点是,一旦它所指向的硬链接被删除或重命名后,该软链接就失效了。软链接虽然还在,但所指向的硬链接已不存在。所幸的是,`ls` 命令能以红底白字的方式在其列表中高亮显示失效的软链接。 + +### 实验项目: 链接实验 + +我认为最容易理解链接用法及其差异的方法是动手搭建一个项目。这个项目应以非超级用户的身份在一个空目录下进行。我创建了 `~/temp` 目录做这个实验,你也可以这么做。这么做可为项目创建一个安全的环境且提供一个新的空目录让程序运作,如此以来这儿仅存放和程序有关的文件。 + +#### 初始工作 + +首先,在你要进行实验的目录下为该项目中的任务创建一个临时目录,确保当前工作目录(PWD)是你的主目录,然后键入下列命令。 + +``` +mkdir temp +``` + +使用这个命令将当前工作目录切换到 `~/temp`。 + +``` +cd temp +``` + +实验开始,我们需要创建一个能够链接到的文件,下列命令可完成该工作并向其填充内容。 + +``` +du -h > main.file.txt +``` + +使用 `ls -l` 长列表命名确认文件正确地创建了。运行结果应类似于我的。注意文件大小只有 7 字节,但你的可能会有 1~2 字节的变动。 + +``` +[dboth@david temp]$ ls -l +total 4 +-rw-rw-r-- 1 dboth dboth 7 Jun 13 07:34 
main.file.txt
+```
+
+在列表中,文件模式串后的数字 `1` 代表存在于该文件上的硬链接数。现在应该是 1,因为我们还没有为这个测试文件建立任何硬链接。
+
+#### 对硬链接进行实验
+
+硬链接创建一个指向同一 inode 的新目录项,当为文件添加一个硬链接时,你会看到链接数目的增加。确保当前工作目录仍为 `~/temp`。创建一个指向 `main.file.txt` 的硬链接,然后查看该目录下文件列表。
+
+```
+[dboth@david temp]$ ln main.file.txt link1.file.txt
+[dboth@david temp]$ ls -l
+total 8
+-rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 link1.file.txt
+-rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 main.file.txt
+```
+
+目录中两个文件都有两个链接且大小相同,时间戳也一样。这就是有一个 inode 和两个硬链接(即该文件的目录项)的一个文件。再建立一个该文件的硬链接,并列出目录清单内容。新的硬链接可以指向 `link1.file.txt`,也可以指向 `main.file.txt`,二者效果相同。
+
+```
+[dboth@david temp]$ ln link1.file.txt link2.file.txt ; ls -l
+total 16
+-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link1.file.txt
+-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link2.file.txt
+-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 main.file.txt
+```
+
+注意,该目录下的每个硬链接必须使用不同的名称,因为同一目录下的两个文件不能拥有相同的文件名。试着创建一个和现存链接名称相同的硬链接。
+
+```
+[dboth@david temp]$ ln main.file.txt link2.file.txt
+ln: failed to create hard link 'link2.file.txt': File exists
+```
+
+显然不行,因为 `link2.file.txt` 已经存在。目前为止我们只在同一目录下创建硬链接,接着在临时目录的父目录(你的主目录)中创建一个链接。
+
+```
+[dboth@david temp]$ ln main.file.txt ../main.file.txt ; ls -l ../main*
+-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
+```
+
+上面的 `ls` 命令显示 `main.file.txt` 文件确实存在于主目录中,且与该文件在 `temp` 目录中的名称一致。当然它们并不是两个不同的文件,而是同一个文件的两个链接,即指向同一个 inode 的两个目录项。为了帮助说明下一点,在 `temp` 目录中添加一个非链接文件。
+
+```
+[dboth@david temp]$ touch unlinked.file ; ls -l
+total 12
+-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
+-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
+-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
+-rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+使用 `ls` 命令的 `-i` 选项查看这些硬链接以及新创建文件的 inode 号。
+
+```
+[dboth@david temp]$ ls -li
+total 12
+657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
+657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+注意上面文件模式左边的数字 `657024`,这是三个硬链接文件所指的同一文件的 inode 号。你也可以使用 `-i` 选项查看主目录中所创建的那个链接的节点号,会发现和该值相同;而那个只有一个链接的文件的 inode 号则和其他的不同。在你的系统上看到的 inode 号或许不同于本文中的。
+
+接着改变其中一个硬链接文件的大小。
+
+```
+[dboth@david temp]$ df -h > link2.file.txt ; ls -li
+total 12
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
+657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+现在所有的硬链接文件大小都比原来大了,因为多个目录项都链接着同一文件。
+
+下个实验在我的电脑上会出现这样的结果,是因为我的 `/tmp` 目录在一个独立的逻辑卷上。如果你有单独的逻辑卷,或者有位于不同分区上的文件系统(如果未使用逻辑卷),确定你是否能访问那个分区或逻辑卷;如果不能,你可以在电脑上挂载一个 U 盘来进行这个实验。
+
+试着在 `/tmp` 目录中建立一个指向 `~/temp` 目录下文件的链接(或者指向你的独立文件系统所在的其它位置)。
+
+```
+[dboth@david temp]$ ln link2.file.txt /tmp/link3.file.txt
+ln: failed to create hard link '/tmp/link3.file.txt' => 'link2.file.txt':
+Invalid cross-device link
+```
+
+为什么会出现这个错误呢?原因是每一个单独的可挂载文件系统都有一套自己的 inode 号。简单地通过 inode 号来跨越整个 Linux 文件系统结构引用一个文件会使系统困惑,因为相同的节点号会存在于每个已挂载的文件系统中。
+
+有时你可能会想找到一个 inode 的所有硬链接。你可以先使用 `ls -li` 命令查到节点号,然后使用 `find` 命令找到指向该节点号的所有硬链接。
+
+```
+[dboth@david temp]$ find . -inum 657024
+./main.file.txt
+./link1.file.txt
+./link2.file.txt
+```
+
+注意,`find` 命令没能找到属于这个 inode 的全部四个硬链接,因为我们只在 `~/temp` 目录中查找。`find` 命令仅在当前工作目录及其子目录中查找文件。要找到所有的硬链接,我们可以使用下列命令,指定你的主目录作为起始查找条件。
+
+```
+[dboth@david temp]$ find ~ -samefile main.file.txt
+/home/dboth/temp/main.file.txt
+/home/dboth/temp/link1.file.txt
+/home/dboth/temp/link2.file.txt
+/home/dboth/main.file.txt
+```
+
+如果你是非超级用户,没有相应权限,可能会看到一些错误信息。这个命令使用了 `-samefile` 选项,而不需要指定文件的 inode 号。如果你知道其中一个硬链接的名称,它的效果和指定 inode 号一样,而且更容易使用。
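+
+(LCTT 译注:除了 `ls -li` 和 `find`,也可以用 `stat` 命令一次查看某个文件的 inode 号和硬链接数。下面是一个简单的示例,这里假设你的系统使用的是 GNU coreutils 的 `stat`;inode 号请以你自己系统上的输出为准:)
+
+```
+# %n 输出文件名,%i 输出 inode 号,%h 输出硬链接数
+[dboth@david temp]$ stat -c '%n: inode=%i links=%h' link1.file.txt
+link1.file.txt: inode=657024 links=4
+```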
+#### 对软链接进行实验
+
+如你刚才看到的,硬链接不能跨越文件系统的边界,即不能从一个逻辑卷或文件系统链接到另一个文件系统。软链接给出了这个问题的解决方案。虽然二者可以达到相同的目的,但它们是非常不同的,知道这些差异是很重要的。
+
+让我们在 `~/temp` 目录中创建一个符号链接来开始我们的探索。
+
+```
+[dboth@david temp]$ ln -s link2.file.txt link3.file.txt ; ls -li
+total 12
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
+658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
+link2.file.txt
+657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
+657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+拥有节点号 `657024` 的那些硬链接没有变化,且硬链接的数目也没有变化。新创建的符号链接有不同的 inode 号 `658270`。名为 `link3.file.txt` 的软链接指向了 `link2.file.txt` 文件,可以使用 `cat` 命令查看 `link3.file.txt` 文件的内容。符号链接的文件模式信息以字母 `l`(小写字母 l)开头,意味着这个文件实际是个符号链接。
+
+上例中软链接文件 `link3.file.txt` 的大小只有 14 字节,这是其目标文本 `link2.file.txt` 的长度,即该目录项的实际内容。目录项 `link3.file.txt` 并不指向一个 inode;它指向的是另一个目录项,这在跨越文件系统建立链接时很有帮助。现在,让我们试着创建一个之前在 `/tmp` 目录中尝试过的那种链接,不过这次用软链接。
+
+```
+[dboth@david temp]$ ln -s /home/dboth/temp/link2.file.txt
+/tmp/link3.file.txt ; ls -l /tmp/link*
+lrwxrwxrwx 1 dboth dboth 31 Jun 14 21:53 /tmp/link3.file.txt ->
+/home/dboth/temp/link2.file.txt
+```
+
+#### 删除链接
+
+当你删除硬链接或硬链接所指的文件时,需要考虑一些问题。
+
+首先,让我们删除硬链接文件 `main.file.txt`。注意,指向 inode 的每个目录项就是一个硬链接。
+
+```
+[dboth@david temp]$ rm main.file.txt ; ls -li
+total 8
+657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link1.file.txt
+657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link2.file.txt
+658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
+link2.file.txt
+657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+`main.file.txt` 是该文件创建时产生的第一个硬链接。删除它之后,文件本身和它在硬盘上的数据仍然保留,所有剩余的硬链接也都还在。要删除这个文件本身,你必须删除它的所有硬链接。
+
+现在删除 `link2.file.txt` 硬链接文件。
+
+```
+[dboth@david temp]$ rm link2.file.txt ; ls -li
+total 8
+657024 -rw-rw-r-- 2 dboth dboth 1157 Jun 14 14:14 link1.file.txt
+658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
+link2.file.txt
+657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
+```
+
+注意软链接的变化。删除软链接所指的硬链接会使该软链接失效。在我的系统中,断开的链接用颜色高亮显示,其目标文件名会闪烁显示。如果需要修复这个损坏的软链接,你可以在同一目录下建立一个和旧链接相同名字的硬链接,只要不是所有硬链接都已删除就行。您还可以重新创建链接本身,链接保持相同的名称,但指向剩余的硬链接中的一个。当然,如果软链接不再需要,可以使用 `rm` 命令删除它们。
+
+`unlink` 命令在删除文件和链接时也有用。它非常简单,没有 `rm` 命令那样的选项。然而,它更准确地反映了删除的底层过程,因为它删除的是目录项,即指向被删除文件的链接。
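+
+(LCTT 译注:如果想找出某个目录下所有失效的软链接,可以借助 GNU `find` 的 `-xtype` 测试;`readlink` 命令则可以查看一个软链接所指向的目标。下面是一个简单的示意,这里假设你使用的是 GNU findutils:)
+
+```
+# -xtype l 匹配那些解引用之后目标已不存在的链接,即失效的软链接
+[dboth@david temp]$ find . -xtype l
+./link3.file.txt
+[dboth@david temp]$ readlink link3.file.txt
+link2.file.txt
+```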
+### 写在最后
+
+我用过这两种类型的链接很长一段时间后,我开始了解它们的能力和特质。我为我所教的 Linux 课程编写了一个实验室项目,以充分理解链接是如何工作的,并且我希望增进你的理解。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+戴维·布斯 - 戴维·布斯是 Linux 和开源倡导者,居住在北卡罗来纳州的罗利。他在 IT 行业工作了四十年,在 IBM 从事 OS/2 方面的工作 20 多年。在 IBM 时,他在 1981 年编写了最初的 IBM PC 的第一个培训课程。他为 RedHat 教授过 RHCE 班,并曾在 MCI Worldcom、思科和北卡罗来纳州工作。他已经用 Linux 和开源软件工作将近 20 年了。
+
+---------------------------------
+
+via: https://opensource.com/article/17/6/linking-linux-filesystem
+
+作者:[David Both][a]
+译者:[yongshouzhang](https://github.com/yongshouzhang)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/dboth
+[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
+[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
+[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
+[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?src=linux_resource_menu&intcmp=7016000000127cYAAQ
+[5]:https://opensource.com/tags/linux?src=linux_resource_menu
+[6]:https://opensource.com/article/17/6/linking-linux-filesystem?rate=YebHxA-zgNopDQKKOyX3_r25hGvnZms_33sYBUq-SMM
+[7]:https://opensource.com/user/14106/feed
+[8]:https://www.flickr.com/photos/digypho/7905320090
+[9]:https://creativecommons.org/licenses/by/2.0/
+[10]:https://linux.cn/article-8685-1.html
+[11]:https://linux.cn/article-8099-1.html
+[12]:https://linux.cn/article-8887-1.html
+[13]:https://opensource.com/business/16/9/linux-users-guide-lvm
+[14]:https://opensource.com/life/15/9/everything-is-a-file
+[15]:https://linux.cn/article-8685-1.html
+[16]:https://linux.cn/article-8685-1.html#3_19182
+[17]:https://opensource.com/users/dboth
+[18]:https://opensource.com/article/17/6/linking-linux-filesystem#comments
diff --git a/translated/tech/20171009 Examining network connections on Linux systems.md b/published/20171009 Examining network connections on Linux systems.md
similarity index 100%
rename from translated/tech/20171009 Examining network connections on Linux systems.md
rename to published/20171009 Examining network connections on Linux systems.md
diff --git a/translated/tech/20171029 A block layer introduction part 1 the bio layer.md b/published/20171029 A block layer introduction part 1 the bio layer.md
similarity index 95%
rename from translated/tech/20171029 A block layer introduction part 1 the bio layer.md
rename to published/20171029 A block layer introduction part 1 the bio layer.md
index bc3f582259..96374c2302 100644
--- a/translated/tech/20171029 A block layer introduction part 1 the bio layer.md
+++ b/published/20171029 A block layer introduction part 1 the bio layer.md
@@ -1,4 +1,4 @@
-块层介绍第一部分:块 I/O 层
+回复:块层介绍第一部分 - 块 I/O 层
 ============================================================
 
 ### 块层介绍第一部分:块 I/O 层
@@ -6,9 +6,14 @@
 回复:amarao 在[块层介绍第一部分:块 I/O 层][1] 中提的问题
 先前的文章:[块层介绍第一部分:块 I/O 层][2]
 
+![](https://static.lwn.net/images/2017/neil-blocklayer.png)
+
 嗨,
+
 你在这里描述的问题与块层不直接相关。这可能是一个驱动错误、可能是一个 SCSI 层错误,但绝对不是一个块层的问题。
+
 不幸的是,报告针对 Linux 的错误是一件难事。有些开发者拒绝去看 bugzilla,有些开发者喜欢它,有些(像我这样)只能勉强地使用它。
+
 另一种方法是发送电子邮件。为此,你需要选择正确的邮件列表,还有也许是正确的开发人员,当他们心情愉快,或者不是太忙、不是在假期时找到他们。有些人会努力回复所有邮件,有些则完全不可预知;就我的经验而言,随错误报告一起发送一个补丁通常最容易得到回应。如果你只是有一个你自己几乎都不了解的 bug,那么你的预期响应率可能会更低。很遗憾,但这是真的。
 
 许多 bug 都会得到回应和处理,但很多 bug 都没有。
@@ -16,18 +16,20 @@
 我不认为说没有人关心是公平的,但是没有人认为它如你想的那样重要是有可能的。如果你想要一个解决方案,那么你需要驱动它。一个驱动它的方法是花钱请顾问或者与经销商签订支持合同。我怀疑你的情况没有上面的可能。另一种方法是了解代码如何工作,并自己找到解决方案。很多人都这么做,但是这对你来说可能不是一种选择。另一种方法是在不同的相关论坛上不断提出问题,直到得到回复。坚持可以见效。你需要做好准备去执行任何你所要求的测试,可能包括建立一个新的内核来测试。
 
 如果你能在最近的内核(4.12 或者更新)上复现这个 bug,我建议你邮件报告给 linux-kernel@vger.kernel.org、linux-scsi@vger.kernel.org 和我(neilb@suse.com)(注意你不必订阅这些列表来发送邮件,只需要发送就行)。描述你的硬件以及问题是如何触发的。
+
 请附上所有处于 “D” 状态的进程的栈追踪。你可以用 “cat /proc/$PID/stack” 来得到它,这里的 “$PID” 是进程的 pid。
 
 确保避免抱怨,或者说这个问题已经存在好几年了、这是多么严重的缺陷之类的话。没有人关心这个。我们关心的是 bug 以及如何修复它。因此只要报告相关的事实就行。
+
 尽量把所有事实直接写在邮件里,而不是放在指向其他地方的链接中。有时链接是需要的,但是对于你的脚本,它只有 8 行,所以把它包含在邮件中就行(并避免像 “fuckup” 之类的描述,只需称它为“坏的”(broken)或者类似的)。同样确保你的邮件发送的不是 HTML 格式。我们喜欢纯文本。HTML 被所有的 @vger.kernel.org 邮件列表拒绝。你或许需要配置你的邮箱程序不发送 HTML。
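+
+(LCTT 译注:下面是一个收集所有处于 “D” 状态进程的内核栈的示意脚本,报告问题时可以参考。这里假设系统提供 `ps` 命令和 `/proc` 文件系统;读取其他用户进程的栈通常需要 root 权限:)
+
+```
+# 找出所有状态以 D 开头的进程,并打印它们的内核栈
+for pid in $(ps -eo pid=,stat= | awk '$2 ~ /^D/ {print $1}'); do
+    echo "== PID $pid =="
+    cat "/proc/$pid/stack"
+done
+```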
-------------------------------------------------------------------------------- via: https://lwn.net/Articles/737655/ -作者:[ neilbrown][a] +作者:[neilbrown][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20141028 When Does Your OS Run.md b/published/201711/20141028 When Does Your OS Run.md similarity index 100% rename from published/20141028 When Does Your OS Run.md rename to published/201711/20141028 When Does Your OS Run.md diff --git a/published/20170202 Understanding Firewalld in Multi-Zone Configurations.md b/published/201711/20170202 Understanding Firewalld in Multi-Zone Configurations.md similarity index 100% rename from published/20170202 Understanding Firewalld in Multi-Zone Configurations.md rename to published/201711/20170202 Understanding Firewalld in Multi-Zone Configurations.md diff --git a/published/20170227 Ubuntu Core in LXD containers.md b/published/201711/20170227 Ubuntu Core in LXD containers.md similarity index 100% rename from published/20170227 Ubuntu Core in LXD containers.md rename to published/201711/20170227 Ubuntu Core in LXD containers.md diff --git a/published/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md b/published/201711/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md similarity index 100% rename from published/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md rename to published/201711/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md diff --git a/published/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md b/published/201711/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md similarity index 100% rename from published/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md rename to published/201711/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md diff --git a/published/20170608 The Life-Changing Magic of Tidying Up Code.md b/published/201711/20170608 The Life-Changing Magic of Tidying Up Code.md similarity index 100% rename from published/20170608 The Life-Changing Magic of Tidying Up Code.md rename to published/201711/20170608 The Life-Changing Magic of Tidying Up Code.md diff --git a/published/20170706 Wildcard Certificates Coming January 2018.md b/published/201711/20170706 Wildcard Certificates Coming January 2018.md similarity index 100% rename from published/20170706 Wildcard Certificates Coming January 2018.md rename to published/201711/20170706 Wildcard Certificates Coming January 2018.md diff --git a/published/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md b/published/201711/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md similarity index 100% rename from published/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md rename to published/201711/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md diff --git a/published/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md b/published/201711/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR 
TIME.md similarity index 100% rename from published/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md rename to published/201711/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md diff --git a/published/20170928 3 Python web scrapers and crawlers.md b/published/201711/20170928 3 Python web scrapers and crawlers.md similarity index 100% rename from published/20170928 3 Python web scrapers and crawlers.md rename to published/201711/20170928 3 Python web scrapers and crawlers.md diff --git a/published/20171002 Scaling the GitLab database.md b/published/201711/20171002 Scaling the GitLab database.md similarity index 100% rename from published/20171002 Scaling the GitLab database.md rename to published/201711/20171002 Scaling the GitLab database.md diff --git a/published/20171003 PostgreSQL Hash Indexes Are Now Cool.md b/published/201711/20171003 PostgreSQL Hash Indexes Are Now Cool.md similarity index 100% rename from published/20171003 PostgreSQL Hash Indexes Are Now Cool.md rename to published/201711/20171003 PostgreSQL Hash Indexes Are Now Cool.md diff --git a/published/20171004 No the Linux desktop hasnt jumped in popularity.md b/published/201711/20171004 No the Linux desktop hasnt jumped in popularity.md similarity index 100% rename from published/20171004 No the Linux desktop hasnt jumped in popularity.md rename to published/201711/20171004 No the Linux desktop hasnt jumped in popularity.md diff --git a/published/20171007 Instant 100 command line productivity boost.md b/published/201711/20171007 Instant 100 command line productivity boost.md similarity index 100% rename from published/20171007 Instant 100 command line productivity boost.md rename to published/201711/20171007 Instant 100 command line productivity boost.md diff --git a/published/20171008 8 best languages to blog about.md b/published/201711/20171008 8 best languages to blog about.md similarity index 100% rename from published/20171008 8 best languages to blog about.md rename to published/201711/20171008 8 best languages to blog about.md diff --git a/published/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md b/published/201711/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md similarity index 100% rename from published/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md rename to published/201711/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md diff --git a/published/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md b/published/201711/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md similarity index 100% rename from published/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md rename to published/201711/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md diff --git a/published/20171011 How to set up a Postgres database on a Raspberry Pi.md b/published/201711/20171011 How to set up a Postgres database on a Raspberry Pi.md similarity index 100% rename from published/20171011 How to set up a Postgres database on a Raspberry Pi.md rename to published/201711/20171011 How to set up a Postgres database on a Raspberry Pi.md diff --git a/published/20171011 Why Linux Works.md b/published/201711/20171011 Why Linux Works.md similarity index 100% rename from published/20171011 Why Linux Works.md rename to published/201711/20171011 Why Linux Works.md diff --git a/published/20171013 6 reasons open source is 
good for business.md b/published/201711/20171013 6 reasons open source is good for business.md similarity index 100% rename from published/20171013 6 reasons open source is good for business.md rename to published/201711/20171013 6 reasons open source is good for business.md diff --git a/published/20171013 Best of PostgreSQL 10 for the DBA.md b/published/201711/20171013 Best of PostgreSQL 10 for the DBA.md similarity index 100% rename from published/20171013 Best of PostgreSQL 10 for the DBA.md rename to published/201711/20171013 Best of PostgreSQL 10 for the DBA.md diff --git a/published/20171015 How to implement cloud-native computing with Kubernetes.md b/published/201711/20171015 How to implement cloud-native computing with Kubernetes.md similarity index 100% rename from published/20171015 How to implement cloud-native computing with Kubernetes.md rename to published/201711/20171015 How to implement cloud-native computing with Kubernetes.md diff --git a/published/20171015 Monitoring Slow SQL Queries via Slack.md b/published/201711/20171015 Monitoring Slow SQL Queries via Slack.md similarity index 100% rename from published/20171015 Monitoring Slow SQL Queries via Slack.md rename to published/201711/20171015 Monitoring Slow SQL Queries via Slack.md diff --git a/published/20171015 Why Use Docker with R A DevOps Perspective.md b/published/201711/20171015 Why Use Docker with R A DevOps Perspective.md similarity index 100% rename from published/20171015 Why Use Docker with R A DevOps Perspective.md rename to published/201711/20171015 Why Use Docker with R A DevOps Perspective.md diff --git a/published/20171016 Introducing CRI-O 1.0.md b/published/201711/20171016 Introducing CRI-O 1.0.md similarity index 100% rename from published/20171016 Introducing CRI-O 1.0.md rename to published/201711/20171016 Introducing CRI-O 1.0.md diff --git a/published/20171017 A tour of Postgres Index Types.md b/published/201711/20171017 A tour of Postgres Index Types.md similarity index 100% rename from published/20171017 A tour of Postgres Index Types.md rename to published/201711/20171017 A tour of Postgres Index Types.md diff --git a/published/20171017 Image Processing on Linux.md b/published/201711/20171017 Image Processing on Linux.md similarity index 100% rename from published/20171017 Image Processing on Linux.md rename to published/201711/20171017 Image Processing on Linux.md diff --git a/published/20171018 How containers and microservices change security.md b/published/201711/20171018 How containers and microservices change security.md similarity index 100% rename from published/20171018 How containers and microservices change security.md rename to published/201711/20171018 How containers and microservices change security.md diff --git a/published/20171018 Learn how to program in Python by building a simple dice game.md b/published/201711/20171018 Learn how to program in Python by building a simple dice game.md similarity index 100% rename from published/20171018 Learn how to program in Python by building a simple dice game.md rename to published/201711/20171018 Learn how to program in Python by building a simple dice game.md diff --git a/published/20171018 Tips to Secure Your Network in the Wake of KRACK.md b/published/201711/20171018 Tips to Secure Your Network in the Wake of KRACK.md similarity index 100% rename from published/20171018 Tips to Secure Your Network in the Wake of KRACK.md rename to published/201711/20171018 Tips to Secure Your Network in the Wake of KRACK.md diff --git 
a/published/20171019 3 Simple Excellent Linux Network Monitors.md b/published/201711/20171019 3 Simple Excellent Linux Network Monitors.md similarity index 100% rename from published/20171019 3 Simple Excellent Linux Network Monitors.md rename to published/201711/20171019 3 Simple Excellent Linux Network Monitors.md diff --git a/published/20171019 How to manage Docker containers in Kubernetes with Java.md b/published/201711/20171019 How to manage Docker containers in Kubernetes with Java.md similarity index 100% rename from published/20171019 How to manage Docker containers in Kubernetes with Java.md rename to published/201711/20171019 How to manage Docker containers in Kubernetes with Java.md diff --git a/published/20171020 3 Tools to Help You Remember Linux Commands.md b/published/201711/20171020 3 Tools to Help You Remember Linux Commands.md similarity index 100% rename from published/20171020 3 Tools to Help You Remember Linux Commands.md rename to published/201711/20171020 3 Tools to Help You Remember Linux Commands.md diff --git a/published/20171020 Running Android on Top of a Linux Graphics Stack.md b/published/201711/20171020 Running Android on Top of a Linux Graphics Stack.md similarity index 100% rename from published/20171020 Running Android on Top of a Linux Graphics Stack.md rename to published/201711/20171020 Running Android on Top of a Linux Graphics Stack.md diff --git a/published/20171024 Top 5 Linux pain points in 2017.md b/published/201711/20171024 Top 5 Linux pain points in 2017.md similarity index 100% rename from published/20171024 Top 5 Linux pain points in 2017.md rename to published/201711/20171024 Top 5 Linux pain points in 2017.md diff --git a/published/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md b/published/201711/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md similarity index 100% rename from published/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md rename to published/201711/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md diff --git a/published/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md b/published/201711/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md similarity index 100% rename from published/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md rename to published/201711/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md diff --git a/published/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md b/published/201711/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md similarity index 100% rename from published/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md rename to published/201711/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md diff --git a/published/20171026 But I dont know what a container is .md b/published/201711/20171026 But I dont know what a container is .md similarity index 100% rename from published/20171026 But I dont know what a container is .md rename to published/201711/20171026 But I dont know what a container is .md diff --git a/published/20171026 Why is Kubernetes so popular.md b/published/201711/20171026 Why is Kubernetes so popular.md similarity index 100% rename from 
published/20171026 Why is Kubernetes so popular.md rename to published/201711/20171026 Why is Kubernetes so popular.md diff --git a/published/20171101 How to use cron in Linux.md b/published/201711/20171101 How to use cron in Linux.md similarity index 100% rename from published/20171101 How to use cron in Linux.md rename to published/201711/20171101 How to use cron in Linux.md diff --git a/published/20171101 We re switching to a DCO for source code contributions.md b/published/201711/20171101 We re switching to a DCO for source code contributions.md similarity index 100% rename from published/20171101 We re switching to a DCO for source code contributions.md rename to published/201711/20171101 We re switching to a DCO for source code contributions.md diff --git a/published/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md b/published/201711/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md similarity index 100% rename from published/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md rename to published/201711/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md diff --git a/published/20171106 Finding Files with mlocate.md b/published/201711/20171106 Finding Files with mlocate.md similarity index 100% rename from published/20171106 Finding Files with mlocate.md rename to published/201711/20171106 Finding Files with mlocate.md diff --git a/published/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md b/published/201711/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md similarity index 100% rename from published/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md rename to published/201711/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md diff --git a/published/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md b/published/201711/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md similarity index 100% rename from published/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md rename to published/201711/20171106 Most companies can t buy an open source community clue. 
Here s how to do it right.md diff --git a/published/20171107 AWS adopts home-brewed KVM as new hypervisor.md b/published/201711/20171107 AWS adopts home-brewed KVM as new hypervisor.md similarity index 100% rename from published/20171107 AWS adopts home-brewed KVM as new hypervisor.md rename to published/201711/20171107 AWS adopts home-brewed KVM as new hypervisor.md diff --git a/published/20171107 How I created my first RPM package in Fedora.md b/published/201711/20171107 How I created my first RPM package in Fedora.md similarity index 100% rename from published/20171107 How I created my first RPM package in Fedora.md rename to published/201711/20171107 How I created my first RPM package in Fedora.md diff --git a/published/20171108 Build and test applications with Ansible Container.md b/published/201711/20171108 Build and test applications with Ansible Container.md similarity index 100% rename from published/20171108 Build and test applications with Ansible Container.md rename to published/201711/20171108 Build and test applications with Ansible Container.md diff --git a/published/20171110 File better bugs with coredumpctl.md b/published/201711/20171110 File better bugs with coredumpctl.md similarity index 100% rename from published/20171110 File better bugs with coredumpctl.md rename to published/201711/20171110 File better bugs with coredumpctl.md diff --git a/published/20171114 ​Linux totally dominates supercomputers.md b/published/201711/20171114 ​Linux totally dominates supercomputers.md similarity index 100% rename from published/20171114 ​Linux totally dominates supercomputers.md rename to published/201711/20171114 ​Linux totally dominates supercomputers.md diff --git a/published/20171116 5 Coolest Linux Terminal Emulators.md b/published/201711/20171116 5 Coolest Linux Terminal Emulators.md similarity index 100% rename from published/20171116 5 Coolest Linux Terminal Emulators.md rename to published/201711/20171116 5 Coolest Linux Terminal Emulators.md diff --git a/published/20171117 How to Easily Remember Linux Commands.md b/published/201711/20171117 How to Easily Remember Linux Commands.md similarity index 100% rename from published/20171117 How to Easily Remember Linux Commands.md rename to published/201711/20171117 How to Easily Remember Linux Commands.md diff --git a/published/20171118 Getting started with OpenFaaS on minikube.md b/published/201711/20171118 Getting started with OpenFaaS on minikube.md similarity index 100% rename from published/20171118 Getting started with OpenFaaS on minikube.md rename to published/201711/20171118 Getting started with OpenFaaS on minikube.md diff --git a/published/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md b/published/201711/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md similarity index 100% rename from published/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md rename to published/201711/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md diff --git a/published/20171120 Containers and Kubernetes Whats next.md b/published/20171120 Containers and Kubernetes Whats next.md new file mode 100644 index 0000000000..57f9379f7b --- /dev/null +++ b/published/20171120 Containers and Kubernetes Whats next.md @@ -0,0 +1,81 @@ +容器技术和 K8S 的下一站 +============================================================ +> 想知道容器编排管理和 K8S 的最新展望么?来看看专家怎么说。 + +![CIO_Big Data 
Decisions_2](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_Big%20Data%20Decisions_2.png?itok=Y5zMHxf8 "CIO_Big Data Decisions_2") + +如果你想对容器在未来的发展方向有一个整体把握,那么你一定要跟着钱走,看看钱都投在了哪里。当然了,有很多很多的钱正在投入容器的进一步发展。相关研究预计 2020 年容器技术的投入将占有 [27 亿美元][4] 的市场份额。而在 2016 年,容器相关技术投入的总额为 7.62 亿美元,只有 2020 年投入预计的三分之一。巨额投入的背后是一些显而易见的基本因素,包括容器化的迅速增长以及并行化的大趋势。随着容器被大面积推广和使用,容器编排管理也会被理所当然的推广应用起来。 + +来自 [The new stack][5] 的调研数据表明,容器的推广使用是编排管理被推广的主要的催化剂。根据调研参与者的反馈数据,在已经将容器技术使用到生产环境中的使用者里,有六成使用者正在将 Kubernetes(K8S)编排管理广泛的应用在生产环境中,另外百分之十九的人员则表示他们已经处于部署 K8S 的初级阶段。在容器部署初期的使用者当中,虽然只有百分之五的人员表示已经在使用 K8S ,但是百分之五十八的人员表示他们正在计划和准备使用 K8S。总而言之,容器和 Kubernetes 的关系就好比是鸡和蛋一样,相辅相成紧密关联。众多专家一致认为编排管理工具对容器的[长周期管理][6] 以及其在市场中的发展有至关重要的作用。正如 [Cockroach 实验室][7] 的 Alex Robinson 所说,容器编排管理被更广泛的拓展和应用是一个总体的大趋势。毫无疑问,这是一个正在快速演变的领域,且未来潜力无穷。鉴于此,我们对 Robinson 和其他的一些容器的实际使用和推介者做了采访,来从他们作为容器技术的践行者的视角上展望一下容器编排以及 K8S 的下一步发展。 + +### 容器编排将被主流接受 + +像任何重要技术的转型一样,我们就像是处在一个高崖之上一般,在经过了初期步履蹒跚的跋涉之后将要来到一望无际的广袤平原。广大的新天地和平实真切的应用需求将会让这种新技术在主流应用中被迅速推广,尤其是在大企业环境中。正如 Alex Robinson 说的那样,容器技术的淘金阶段已经过去,早期的技术革新创新正在减速,随之而来的则是市场对容器技术的稳定性和可用性的强烈需求。这意味着未来我们将不会再见到大量的新的编排管理系统的涌现,而是会看到容器技术方面更多的安全解决方案,更丰富的管理工具,以及基于目前主流容器编排系统的更多的新特性。 + +### 更好的易用性 + +人们将在简化容器的部署方面下大功夫,因为容器部署的初期工作对很多公司和组织来说还是比较复杂的,尤其是容器的[长期管理维护][8]更是需要投入大量的精力。正如 [Codemill AB][9] 公司的 My Karlsson 所说,容器编排技术还是太复杂了,这导致很多使用者难以娴熟驾驭和充分利用容器编排的功能。很多容器技术的新用户都需要花费很多精力,走很多弯路,才能搭建小规模的或单个的以隔离方式运行的容器系统。这种现象在那些没有针对容器技术设计和优化的应用中更为明显。在简化容器编排管理方面有很多优化可以做,这些优化和改造将会使容器技术更加具有可用性。 + +### 在混合云以及多云技术方面会有更多侧重 + +随着容器和容器编排技术被越来越多的使用,更多的组织机构会选择扩展他们现有的容器技术的部署,从之前的把非重要系统部署在单一环境的使用情景逐渐过渡到更加[复杂的使用情景][10]。对很多公司来说,这意味着他们必须开始学会在 [混合云][11] 和 [多云][12] 的环境下,全局化的去管理那些容器化的应用和微服务。正如红帽 [Openshift 部门产品战略总监][14] [Brian Gracely][13] 所说,“容器和 K8S 技术的使用使得我们成功的实现了混合云以及应用的可移植性。结合 Open Service Broker API 的使用,越来越多的结合私有云和公有云资源的新应用将会涌现出来。” +据 [CloudBees][15] 公司的高级工程师 Carlos Sanchez 分析,联合服务(Federation)将会得到极大推动,使一些诸如多地区部署和多云部署等的备受期待的新特性成为可能。 + +**[ 想知道 CIO 们对混合云和多云的战略构想么? 
请参看我们的这条相关资源, [Hybrid Cloud: The IT leader's guide][16]。 ]** + +### 平台和工具的持续整合及加强 + +对任何一种科技来说,持续的整合和加强从来都是大势所趋;容器编排管理技术在这方面也不例外。来自 [Sumo Logic][17] 的首席分析师 Ben Newton 表示,随着容器化渐成主流,软件工程师们正在很少数的一些技术上做持续整合加固的工作,来满足他们的一些微应用的需求。容器和 K8S 将会毫无疑问的成为容器编排管理方面的主流平台,并轻松碾压其它的一些小众平台方案。因为 K8S 提供了一个相当清晰的可以摆脱各种特有云生态的途径,K8S 将被大量公司使用,逐渐形成一个不依赖于某个特定云服务的“中立云”cloud-neutral。 + +### K8S 的下一站 + +来自 [Alcide][18] 的 CTO 和联合创始人 Gadi Naor 表示,K8S 将会是一个有长期和远景发展的技术,虽然我们的社区正在大力推广和发展 K8S,K8S 仍有很长的路要走。 + +专家们对[日益流行的 K8S 平台][19]也作出了以下一些预测: + +**_来自 Alcide 的 Gadi Naor 表示:_** “运营商会持续演进并趋于成熟,直到在 K8S 上运行的应用可以完全自治。利用 [OpenTracing][20] 和诸如 [istio][21] 技术的 service mesh 架构,在 K8S 上部署和监控微应用将会带来很多新的可能性。” + +**_来自 Red Hat 的 Brian Gracely 表示:_** “K8S 所支持的应用的种类越来越多。今后在 K8S 上,你不仅可以运行传统的应用程序,还可以运行原生的云应用、大数据应用以及 HPC 或者基于 GPU 运算的应用程序,这将为灵活的架构设计带来无限可能。” + +**_来自 Sumo Logic 的 Ben Newton 表示:_** “随着 K8S 成为一个具有统治地位的平台,我预计更多的操作机制将会被统一化,尤其是 K8S 将和第三方管理和监控平台融合起来。” + +**_来自 CloudBees 的 Carlos Sanchez 表示:_** “在不久的将来我们就能看到不依赖于 Docker 而使用其它运行时环境的系统,这将会有助于消除任何可能的 lock-in 情景“ [编辑提示:[CRI-O][22] 就是一个可以借鉴的例子。]“而且我期待将来会出现更多的针对企业环境的存储服务新特性,包括数据快照以及在线的磁盘容量的扩展。” + +**_来自 Cockroach Labs 的 Alex Robinson 表示:_** “ K8S 社区正在讨论的一个重大发展议题就是加强对[有状态程序][23]的管理。目前在 K8S 平台下,实现状态管理仍然非常困难,除非你所使用的云服务商可以提供远程固定磁盘。现阶段也有很多人在多方面试图改善这个状况,包括在 K8S 平台内部以及在外部服务商一端做出的一些改进。” + +------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next + +作者:[Kevin Casey][a] +译者:[yunfengHe](https://github.com/yunfengHe) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/kevin-casey +[1]:https://enterprisersproject.com/article/2017/11/kubernetes-numbers-10-compelling-stats +[2]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity +[3]:https://enterprisersproject.com/article/2017/11/5-kubernetes-success-tips-start-smart?sc_cid=70160000000h0aXAAQ +[4]:https://451research.com/images/Marketing/press_releases/Application-container-market-will-reach-2-7bn-in-2020_final_graphic.pdf +[5]:https://thenewstack.io/ +[6]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul +[7]:https://www.cockroachlabs.com/ +[8]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul +[9]:https://codemill.se/ +[10]:https://www.redhat.com/en/challenges/integration?intcmp=701f2000000tjyaAAA +[11]:https://enterprisersproject.com/hybrid-cloud +[12]:https://enterprisersproject.com/article/2017/7/multi-cloud-vs-hybrid-cloud-whats-difference +[13]:https://enterprisersproject.com/user/brian-gracely +[14]:https://www.redhat.com/en +[15]:https://www.cloudbees.com/ +[16]:https://enterprisersproject.com/hybrid-cloud?sc_cid=70160000000h0aXAAQ +[17]:https://www.sumologic.com/ +[18]:http://alcide.io/ +[19]:https://enterprisersproject.com/article/2017/10/how-explain-kubernetes-plain-english +[20]:http://opentracing.io/ +[21]:https://istio.io/ +[22]:http://cri-o.io/ +[23]:https://opensource.com/article/17/2/stateful-applications +[24]:https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next?rate=PBQHhF4zPRHcq2KybE1bQgMkS2bzmNzcW2RXSVItmw8 +[25]:https://enterprisersproject.com/user/kevin-casey diff --git a/published/20171124 How to Install Android File Transfer for Linux.md b/published/20171124 How to Install Android File Transfer for Linux.md 
new file mode 100644 index 0000000000..3cdb372c93 --- /dev/null +++ b/published/20171124 How to Install Android File Transfer for Linux.md @@ -0,0 +1,75 @@ +如何在 Linux 下安装安卓文件传输助手 +=============== + +如果你尝试在 Ubuntu 下连接你的安卓手机,你也许可以试试 Linux 下的安卓文件传输助手。 + +本质上来说,这个应用是谷歌 macOS 版本的一个克隆。它是用 Qt 编写的,用户界面非常简洁,使得你能轻松在 Ubuntu 和安卓手机之间传输文件和文件夹。 + +现在,有可能一部分人想知道有什么是这个应用可以做,而 Nautilus(Ubuntu 默认的文件资源管理器)不能做的,答案是没有。 + +当我将我的 Nexus 5X(记得选择 [媒体传输协议 MTP][7] 选项)连接在 Ubuntu 上时,在 [GVfs][8](LCTT 译注: GNOME 桌面下的虚拟文件系统)的帮助下,我可以打开、浏览和管理我的手机,就像它是一个普通的 U 盘一样。 + +[![Nautilus MTP integration with a Nexus 5X](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg)][9] + +但是*一些*用户在使用默认的文件管理器时,在 MTP 的某些功能上会出现问题:比如文件夹没有正确加载,创建新文件夹后此文件夹不存在,或者无法在媒体播放器中使用自己的手机。 + +这就是要为 Linux 系统用户设计一个安卓文件传输助手应用的原因,将这个应用当做将 MTP 设备安装在 Linux 下的另一种选择。如果你使用 Linux 下的默认应用时一切正常,你也许并不需要尝试使用它 (除非你真的很想尝试新鲜事物)。 + + +![Android File Transfer Linux App](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/android-file-transfer-for-linux-750x662.jpg) + +该 app 特点: + +*   简洁直观的用户界面 +*   支持文件拖放功能(从 Linux 系统到手机) +*   支持批量下载 (从手机到 Linux系统) +*   显示传输进程对话框 +*   FUSE 模块支持 +*   没有文件大小限制 +*   可选命令行工具 + +### Ubuntu 下安装安卓手机文件助手的步骤 + +以上就是对这个应用的介绍,下面是如何安装它的具体步骤。 + +这有一个 [PPA](个人软件包集)源为 Ubuntu 14.04 LTS、16.04 LTS 和 Ubuntu 17.10 提供可用应用。 + +为了将这一 PPA 加入你的软件资源列表中,执行这条命令: + +``` +sudo add-apt-repository ppa:samoilov-lex/aftl-stable +``` + +接着,为了在 Ubuntu 下安装 Linux版本的安卓文件传输助手,执行: + +``` +sudo apt-get update && sudo apt install android-file-transfer +``` + +这样就行了。 + +你会在你的应用列表中发现这一应用的启动图标。 + +在你启动这一应用之前,要确保没有其他应用(比如 Nautilus)已经挂载了你的手机。如果其它应用正在使用你的手机,就会显示“无法找到 MTP 设备”。要解决这一问题,将你的手机从 Nautilus(或者任何正在使用你的手机的应用)上移除,然后再重新启动安卓文件传输助手。 + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux + +作者:[JOEY SNEDDON][a] +译者:[wenwensnow](https://github.com/wenwensnow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:https://plus.google.com/117485690627814051450/?rel=author +[2]:http://www.omgubuntu.co.uk/category/app +[3]:http://www.omgubuntu.co.uk/category/download +[4]:https://github.com/whoozle/android-file-transfer-linux +[5]:http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux +[6]:http://android.com/filetransfer?linkid=14270770 +[7]:https://en.wikipedia.org/wiki/Media_Transfer_Protocol +[8]:https://en.wikipedia.org/wiki/GVfs +[9]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg +[10]:https://launchpad.net/~samoilov-lex/+archive/ubuntu/aftl-stable diff --git a/published/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md b/published/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md new file mode 100644 index 0000000000..9b6a4f242c --- /dev/null +++ b/published/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md @@ -0,0 +1,72 @@ +开源云技能认证:系统管理员的核心竞争力 +========= + +![os jobs](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-house-sysadmin.jpg?itok=i5FHc3lu "os jobs") + +> [2017年开源工作报告][1](以下简称“报告”)显示,具有开源云技术认证的系统管理员往往能获得更高的薪酬。 + + +报告调查的受访者中,53% 认为系统管理员是雇主们最期望被填补的职位空缺之一,因此,技术娴熟的系统管理员更受青睐而收获高薪职位,但这一职位,并没想象中那么容易填补。 + +系统管理员主要负责服务器和其他电脑操作系统的安装、服务支持和维护,及时处理服务中断和预防其他问题的出现。 + +总的来说,今年的报告指出开源领域人才需求最大的有开源云(47%),应用开发(44%),大数据(43%),开发运营和安全(42%)。 + +此外,报告对人事经理的调查显示,58% 
期望招揽更多的开源人才,67% 认为开源人才的需求增长会比业内其他领域更甚。有些单位视开源人才为招聘最优选则,它们招聘的开源人才较上年增长了 2 个百分点。 + +同时,89% 的人事经理认为很难找到颇具天赋的开源人才。 + +### 为什么要获取认证 + +报告显示,对系统管理员的需求刺激着人事经理为 53% 的组织/机构提供正规的培训和专业技术认证,而这一比例去年为 47%。 + +对系统管理方面感兴趣的 IT 人才考虑获取 Linux 认证已成为行业规律。随便查看几个知名的招聘网站,你就能发现:[CompTIA Linux+][3] 认证是入门级 Linux 系统管理员的最高认证;如果想胜任高级别的系统管理员职位,获取[红帽认证工程师(RHCE)][4]和[红帽认证系统管理员(RHCSA)][5]则是不可或缺的。 + +戴士(Dice)[2017 技术行业薪资调查][6]显示,2016 年系统管理员的薪水为 79,538 美元,较上年下降了 0.8%;系统架构师的薪水为 125,946 美元,同比下降 4.7%。尽管如此,该调查发现“高水平专业人才仍最受欢迎,特别是那些精通支持产业转型发展所需技术的人才”。 + +在开源技术方面,HBase(一个开源的分布式数据库)技术人才的薪水在戴士 2017 技术行业薪资调查中排第一。在计算机网络和数据库领域,掌握 OpenVMS 操作系统技术也能获得高薪。 + +### 成为出色的系统管理员 + +出色的系统管理员须在问题出现时马上处理,这意味着你必须时刻准备应对可能出现的状况。这个职位追求“零责备的、精益的、流程或技术上交互式改进的”思维方式和善于自我完善的人格,成为一个系统管理员意味着“你必将与开源软件如 Linux、BSD 甚至开源 Solaris 等结下不解之缘”,Paul English ^译注1 在 [opensource.com][7] 上发文指出。 + +Paul English 认为,现在的系统管理员较以前而言,要更多地与软件打交道,而且要能够编写脚本来协助系统管理。 + +>译注1:Paul English,计算机科学学士,UNIX/Linux 系统管理员,PreOS Security Inc. 公司 CEO,2015-2017 年于为推动系统管理员发展实践的非盈利组织——专业系统管理员联盟League of Professional System Administrator担任董事会成员。 + +### 展望 2018 + +[Robert Half 2018 年技术人才薪资导览][8]预测 2018 年北美地区许多单位将聘用大量系统管理方面的专业人才,同时个人软实力和领导力水平作为优秀人才的考量因素,越来越受到重视。 + +该报告指出:“良好的聆听能力和批判性思维能力对于理解和解决用户的问题和担忧至关重要,也是 IT 从业者必须具备的重要技能,特别是从事服务台和桌面支持工作相关的技术人员。” + +这与[Linux基金会][9]^译注2 提出的不同阶段的系统管理员必备技能相一致,都强调了强大的分析能力和快速处理问题的能力。 + +>译注2:Linux 基金会The Linux Foundation,成立于 2000 年,致力于围绕开源项目构建可持续发展的生态系统,以加速开源项目的技术开发和商业应用;它是世界上最大的开源非盈利组织,在推广、保护和推进 Linux 发展,协同开发,维护“历史上最大的共享资源”上功勋卓越。 + +如果想逐渐爬上系统管理员职位的金字塔上层,还应该对系统配置的结构化方法充满兴趣;且拥有解决系统安全问题的经验;用户身份验证管理的经验;与非技术人员进行非技术交流的能力;以及优化系统以满足最新的安全需求的能力。 + +- [下载][10]2017年开源工作报告全文,以获取更多信息。 + + +----------------------- + +via: https://www.linux.com/blog/open-source-cloud-skills-and-certification-are-key-sysadmins + +作者:[linux.com][a] +译者:[wangy325](https://github.com/wangy325) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/blog/open-source-cloud-skills-and-certification-are-key-sysadmins +[1]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/ +[2]:https://www.linux.com/licenses/category/creative-commons-zero +[3]:https://certification.comptia.org/certifications/linux?tracking=getCertified/certifications/linux.aspx +[4]:https://www.redhat.com/en/services/certification/rhce +[5]:https://www.redhat.com/en/services/certification/rhcsa +[6]:http://marketing.dice.com/pdf/Dice_TechSalarySurvey_2017.pdf?aliId=105832232 +[7]:https://opensource.com/article/17/7/truth-about-sysadmins +[8]:https://www.roberthalf.com/salary-guide/technology +[9]:https://www.linux.com/learn/10-essential-skills-novice-junior-and-senior-sysadmins%20%20 +[10]:http://bit.ly/2017OSSjobsreport \ No newline at end of file diff --git a/published/20171130 Search DuckDuckGo from the Command Line.md b/published/20171130 Search DuckDuckGo from the Command Line.md new file mode 100644 index 0000000000..48b6fdd830 --- /dev/null +++ b/published/20171130 Search DuckDuckGo from the Command Line.md @@ -0,0 +1,97 @@ +在命令行中使用 DuckDuckGo 搜索 +============= + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/duckduckgo.png) + +此前我们介绍了[如何在命令行中使用 Google 搜索][3]。许多读者反馈说他们平时使用 [Duck Duck Go][4],这是一个功能强大而且保密性很强的搜索引擎。 + +正巧,最近出现了一款能够从命令行搜索 DuckDuckGo 的工具。它叫做 ddgr(我把它读作 “dodger”),非常好用。 + +像 [Googler][7] 一样,ddgr 是一个完全开源而且非官方的工具。没错,它并不属于 DuckDuckGo。所以,如果你发现它返回的结果有些奇怪,请先询问这个工具的开发者,而不是搜索引擎的开发者。 + +### DuckDuckGo 命令行应用 + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/ddgr-gif.gif) + 
+[DuckDuckGo Bangs(DuckDuckGo 快捷搜索)][8] 可以帮助你轻易地在 DuckDuckGo 上找到想要的信息(甚至 _本网站 omgubuntu_ 都有快捷搜索)。ddgr 非常忠实地呈现了这个功能。 + +和网页版不同的是,你可以更改每页返回多少结果。这比起每次查询都要看三十多条结果要方便一些。默认界面经过了精心设计,在不影响可读性的情况下尽量减少了占用空间。 + +`ddgr` 有许多功能和亮点,包括: + +* 更改搜索结果数 +* 支持 Bash 自动补全 +* 使用 DuckDuckGo Bangs +* 在浏览器中打开链接 +* ”手气不错“选项 +* 基于时间、地区、文件类型等的筛选功能 +* 极少的依赖项 + +你可以从 Github 的项目页面上下载支持各种系统的 `ddgr`: + +- [从 Github 下载 “ddgr”][9] + +另外,在 Ubuntu 16.04 LTS 或更新版本中,你可以使用 PPA 安装 ddgr。这个仓库由 ddgr 的开发者维护。如果你想要保持在最新版本的话,推荐使用这种方式安装。 + +需要提醒的是,在本文创作时,这个 PPA 中的 ddgr _并不是_ 最新版本,而是一个稍旧的版本(缺少 -num 选项)。 + +使用以下命令添加 PPA: + +``` +sudo add-apt-repository ppa:twodopeshaggy/jarun +sudo apt-get update +``` + +### 如何使用 ddgr 在命令行中搜索 DuckDuckGo + +安装完毕后,你只需打开你的终端模拟器,并运行: + +``` +ddgr +``` + +然后输入查询内容: + +``` +search-term +``` + +你可以限制搜索结果数: + +``` +ddgr --num 5 search-term +``` + +或者自动在浏览器中打开第一条搜索结果: + + +``` +ddgr -j search-term +``` + +你可以使用参数和选项来提高搜索精确度。使用以下命令来查看所有的参数: + +``` +ddgr -h +``` + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app + +作者:[JOEY SNEDDON][a] +译者:[yixunx](https://github.com/yixunx) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:https://plus.google.com/117485690627814051450/?rel=author +[2]:http://www.omgubuntu.co.uk/category/download +[3]:http://www.omgubuntu.co.uk/2017/08/search-google-from-the-command-line +[4]:http://duckduckgo.com/ +[5]:http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app +[6]:https://github.com/jarun/ddgr +[7]:https://github.com/jarun/googler +[8]:https://duckduckgo.com/bang +[9]:https://github.com/jarun/ddgr/releases/tag/v1.1 diff --git a/sources/tech/20090701 The One in Which I Call Out Hacker News.md b/sources/tech/20090701 The One in Which I Call Out Hacker News.md deleted file mode 100644 index 44c751dd5a..0000000000 --- a/sources/tech/20090701 The One in Which I Call Out Hacker News.md +++ /dev/null @@ -1,86 +0,0 @@ -translating by hopefully2333 - -# [The One in Which I Call Out Hacker News][14] - - -> “Implementing caching would take thirty hours. Do you have thirty extra hours? No, you don’t. I actually have no idea how long it would take. Maybe it would take five minutes. Do you have five minutes? No. Why? Because I’m lying. It would take much longer than five minutes. That’s the eternal optimism of programmers.” -> -> — Professor [Owen Astrachan][1] during 23 Feb 2004 lecture for [CPS 108][2] - -[Accusing open-source software of being a royal pain to use][5] is not a new argument; it’s been said before, by those much more eloquent than I, and even by some who are highly sympathetic to the open-source movement. Why go over it again? - -On Hacker News on Monday, I was amused to read some people saying that [writing StackOverflow was hilariously easy][6]—and proceeding to back up their claim by [promising to clone it over July 4th weekend][7]. Others chimed in, pointing to [existing][8] [clones][9] as a good starting point. - -Let’s assume, for sake of argument, that you decide it’s okay to write your StackOverflow clone in ASP.NET MVC, and that I, after being hypnotized with a pocket watch and a small club to the head, have decided to hand you the StackOverflow source code, page by page, so you can retype it verbatim. 
We’ll also assume you type like me, at a cool 100 WPM ([a smidge over eight characters per second][10]), and unlike me,  _you_  make zero mistakes. StackOverflow’s *.cs, *.sql, *.css, *.js, and *.aspx files come to 2.3 MB. So merely typing the source code back into the computer will take you about eighty hours if you make zero mistakes. - -Except, of course, you’re not doing that; you’re going to implement StackOverflow from scratch. So even assuming that it took you a mere ten times longer to design, type out, and debug your own implementation than it would take you to copy the real one, that already has you coding for several weeks straight—and I don’t know about you, but I am okay admitting I write new code  _considerably_  less than one tenth as fast as I copy existing code. - - _Well, okay_ , I hear you relent. *So not the whole thing. But I can do **most** of it.* - -Okay, so what’s “most”? There’s simply asking and responding to questions—that part’s easy. Well, except you have to implement voting questions and answers up and down, and the questioner should be able to accept a single answer for each question. And you can’t let people upvote or accept their own answers, so you need to block that. And you need to make sure that users don’t upvote or downvote another user too many times in a certain amount of time, to prevent spambots. Probably going to have to implement a spam filter, too, come to think of it, even in the basic design, and you also need to support user icons, and you’re going to have to find a sanitizing HTML library you really trust and that interfaces well with Markdown (provided you do want to reuse [that awesome editor][11] StackOverflow has, of course). You’ll also need to purchase, design, or find widgets for all the controls, plus you need at least a basic administration interface so that moderators can moderate, and you’ll need to implement that scaling karma thing so that you give users steadily increasing power to do things as they go. - -But if you do  _all that_ , you  _will_  be done. - -Except…except, of course, for the full-text search, especially its appearance in the search-as-you-ask feature, which is kind of indispensable. And user bios, and having comments on answers, and having a main page that shows you important questions but that bubbles down steadily à la reddit. Plus you’ll totally need to implement bounties, and support multiple OpenID logins per user, and send out email notifications for pertinent events, and add a tagging system, and allow administrators to configure badges by a nice GUI. And you’ll need to show users’ karma history, upvotes, and downvotes. And the whole thing has to scale really well, since it could be slashdotted/reddited/StackOverflown at any moment. - -But  _then_ ! **Then** you’re done! - -…right after you implement upgrades, internationalization, karma caps, a CSS design that makes your site not look like ass, AJAX versions of most of the above, and G-d knows what else that’s lurking just beneath the surface that you currently take for granted, but that will come to bite you when you start to do a real clone. - -Tell me: which of those features do you feel you can cut and still have a compelling offering? Which ones go under “most” of the site, and which can you punt? - -Developers think cloning a site like StackOverflow is easy for the same reason that open-source software remains such a horrible pain in the ass to use. When you put a developer in front of StackOverflow, they don’t really  _see_ StackOverflow. 
What they actually  _see_  is this: - -``` -create table QUESTION (ID identity primary key, - TITLE varchar(255), --- why do I know you thought 255? - BODY text, - UPVOTES integer not null default 0, - DOWNVOTES integer not null default 0, - USER integer references USER(ID)); -create table RESPONSE (ID identity primary key, - BODY text, - UPVOTES integer not null default 0, - DOWNVOTES integer not null default 0, - QUESTION integer references QUESTION(ID)) -``` - -If you then tell a developer to replicate StackOverflow, what goes into his head are the above two SQL tables and enough HTML to display them without formatting, and that really  _is_  completely doable in a weekend. The smarter ones will realize that they need to implement login and logout, and comments, and that the votes need to be tied to a user, but that’s still totally doable in a weekend; it’s just a couple more tables in a SQL back-end, and the HTML to show their contents. Use a framework like Django, and you even get basic users and comments for free. - -But that’s  _not_  what StackOverflow is about. Regardless of what your feelings may be on StackOverflow in general, most visitors seem to agree that the user experience is smooth, from start to finish. They feel that they’re interacting with a polished product. Even if I didn’t know better, I would guess that very little of what actually makes StackOverflow a continuing success has to do with the database schema—and having had a chance to read through StackOverflow’s source code, I know how little really does. There is a  _tremendous_  amount of spit and polish that goes into making a major website highly usable. A developer, asked how hard something will be to clone, simply  _does not think about the polish_ , because  _the polish is incidental to the implementation._ - -That is why an open-source clone of StackOverflow will fail. Even if someone were to manage to implement most of StackOverflow “to spec,” there are some key areas that would trip them up. Badges, for example, if you’re targeting end-users, either need a GUI to configure rules, or smart developers to determine which badges are generic enough to go on all installs. What will actually happen is that the developers will bitch and moan about how you can’t implement a really comprehensive GUI for something like badges, and then bikeshed any proposals for standard badges so far into the ground that they’ll hit escape velocity coming out the other side. They’ll ultimately come up with the same solution that bug trackers like Roundup use for their workflow: the developers implement a generic mechanism by which anyone, truly anyone at all, who feels totally comfortable working with the system API in Python or PHP or whatever, can easily add their own customizations. And when PHP and Python are so easy to learn and so much more flexible than a GUI could ever be, why bother with anything else? - -Likewise, the moderation and administration interfaces can be punted. If you’re an admin, you have access to the SQL server, so you can do anything really genuinely administrative-like that way. Moderators can get by with whatever django-admin and similar systems afford you, since, after all, few users are mods, and mods should understand how the sites  _work_ , dammit. And, certainly, none of StackOverflow’s interface failings will be rectified. 
Even if StackOverflow’s stupid requirement that you have to have and know how to use an OpenID (its worst failing) eventually gets fixed, I’m sure any open-source clones will rabidly follow it—just as GNOME and KDE for years slavishly copied off Windows, instead of trying to fix its most obvious flaws. - -Developers may not care about these parts of the application, but end-users do, and take it into consideration when trying to decide what application to use. Much as a good software company wants to minimize its support costs by ensuring that its products are top-notch before shipping, so, too, savvy consumers want to ensure products are good before they purchase them so that they won’t  _have_  to call support. Open-source products fail hard here. Proprietary solutions, as a rule, do better. - -That’s not to say that open-source doesn’t have its place. This blog runs on Apache, [Django][12], [PostgreSQL][13], and Linux. But let me tell you, configuring that stack is  _not_  for the faint of heart. PostgreSQL needs vacuuming configured on older versions, and, as of recent versions of Ubuntu and FreeBSD, still requires the user set up the first database cluster. MS SQL requires neither of those things. Apache…dear heavens, don’t even get me  _started_  on trying to explain to a novice user how to get virtual hosting, MovableType, a couple Django apps, and WordPress all running comfortably under a single install. Hell, just trying to explain the forking vs. threading variants of Apache to a technically astute non-developer can be a nightmare. IIS 7 and Apache with OS X Server’s very much closed-source GUI manager make setting up those same stacks vastly simpler. Django’s a great a product, but it’s nothing  _but_  infrastructure—exactly the thing that I happen to think open-source  _does_  do well,  _precisely_  because of the motivations that drive developers to contribute. - -The next time you see an application you like, think very long and hard about all the user-oriented details that went into making it a pleasure to use, before decrying how you could trivially reimplement the entire damn thing in a weekend. Nine times out of ten, when you think an application was ridiculously easy to implement, you’re completely missing the user side of the story. 
--------------------------------------------------------------------------------
-
-via: https://bitquabit.com/post/one-which-i-call-out-hacker-news/
-
-作者:[Benjamin Pollack][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://bitquabit.com/meta/about/
-[1]:http://www.cs.duke.edu/~ola/
-[2]:http://www.cs.duke.edu/courses/cps108/spring04/
-[3]:https://bitquabit.com/categories/programming
-[4]:https://bitquabit.com/categories/technology
-[5]:http://blog.bitquabit.com/2009/06/30/one-which-i-say-open-source-software-sucks/
-[6]:http://news.ycombinator.com/item?id=678501
-[7]:http://news.ycombinator.com/item?id=678704
-[8]:http://code.google.com/p/cnprog/
-[9]:http://code.google.com/p/soclone/
-[10]:http://en.wikipedia.org/wiki/Words_per_minute
-[11]:http://github.com/derobins/wmd/tree/master
-[12]:http://www.djangoproject.com/
-[13]:http://www.postgresql.org/
-[14]:https://bitquabit.com/post/one-which-i-call-out-hacker-news/
diff --git a/sources/tech/20130402 Dynamic linker tricks Using LD_PRELOAD to cheat inject features and investigate programs.md b/sources/tech/20130402 Dynamic linker tricks Using LD_PRELOAD to cheat inject features and investigate programs.md
new file mode 100644
index 0000000000..2329fadd41
--- /dev/null
+++ b/sources/tech/20130402 Dynamic linker tricks Using LD_PRELOAD to cheat inject features and investigate programs.md
@@ -0,0 +1,211 @@
+# Dynamic linker tricks: Using LD_PRELOAD to cheat, inject features and investigate programs
+
+**This post assumes some basic C skills.**
+
+Linux puts you in full control. This is not always seen from everyone’s perspective, but a power user loves to be in control. I’m going to show you a basic trick that lets you heavily influence the behavior of most applications, which is not only fun, but also, at times, useful.
+
+#### A motivational example
+
+Let us begin with a simple example. Fun first, science later.
+
+random_num.c:
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <time.h>
+
+int main(){
+    srand(time(NULL));
+    int i = 10;
+    while(i--) printf("%d\n",rand()%100);
+    return 0;
+}
+```
+
+Simple enough, I believe. I compiled it with no special flags, just
+
+> ```
+> gcc random_num.c -o random_num
+> ```
+
+I hope the resulting output is obvious – ten randomly selected numbers 0-99, hopefully different each time you run this program.
+
+Now let’s pretend we don’t really have the source of this executable. Either delete the source file, or move it somewhere – we won’t need it. We will significantly modify this program’s behavior, yet without touching its source code or recompiling it.
+
+For this, let’s create another simple C file:
+
+unrandom.c:
+```
+int rand(){
+    return 42; //the most random number in the universe
+}
+```
+
+We’ll compile it into a shared library.
+
+> ```
+> gcc -shared -fPIC unrandom.c -o unrandom.so
+> ```
+
+So what we have now is an application that outputs some random data, and a custom library, which implements the rand() function as a constant value of 42. Now… just run _random_num_ this way, and watch the result:
+
+> ```
+> LD_PRELOAD=$PWD/unrandom.so ./random_num
+> ```
+
+If you are lazy and did not do it yourself (and somehow fail to guess what might have happened), I’ll let you know – the output consists of ten 42’s.
+
+This may be even more impressive if you first:
+
+> ```
+> export LD_PRELOAD=$PWD/unrandom.so
+> ```
+
+and then run the program normally.
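+
+To make the effect concrete, here is a sketch of such a session (hypothetical transcript; it assumes both files sit in the current directory, and the article tells us the program prints ten numbers):
+
+> ```
+> $ export LD_PRELOAD=$PWD/unrandom.so
+> $ ./random_num
+> 42
+> 42
+> 42
+> ... (ten 42s in total)
+> $ unset LD_PRELOAD    # back to genuinely random output
+> ```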
+
+An unchanged app run in an apparently usual manner seems to be affected by what we did in our tiny library…
+
+###### **Wait, what? What did just happen?**
+
+Yup, you are right, our program failed to generate random numbers, because it did not use the “real” rand(), but the one we provided – which returns 42 every time.
+
+###### **But we *told* it to use the real one. We programmed it to use the real one. Besides, at the time we created that program, the fake rand() did not even exist!**
+
+This is not entirely true. We did not choose which rand() we want our program to use. We told it just to use rand().
+
+When our program is started, certain libraries (that provide functionality needed by the program) are loaded. We can learn which ones they are using _ldd_:
+
+> ```
+> $ ldd random_num
+> linux-vdso.so.1 => (0x00007fff4bdfe000)
+> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f48c03ec000)
+> /lib64/ld-linux-x86-64.so.2 (0x00007f48c07e3000)
+> ```
+
+What you see as the output is the list of libs that are needed by _random_num_. This list is built into the executable, and is determined at compile time. The exact output might differ slightly on your machine, but a **libc.so** must be there – this is the file which provides core C functionality. That includes the “real” rand().
+
+We can have a peek at what functions libc provides. I used the following to get a full list:
+
+> ```
+> nm -D /lib/libc.so.6
+> ```
+
+The _nm_ command lists symbols found in a binary file. The -D flag tells it to look for dynamic symbols, which makes sense, as libc.so.6 is a dynamic library. The output is very long, but it indeed lists rand() among many other standard functions.
+
+Now what happens when we set the environment variable LD_PRELOAD? This variable **forces some libraries to be loaded for a program**. In our case, it loads _unrandom.so_ for _random_num_, even though the program itself does not ask for it. The following command may be interesting:
+
+> ```
+> $ LD_PRELOAD=$PWD/unrandom.so ldd random_num
+> linux-vdso.so.1 => (0x00007fff369dc000)
+> /some/path/to/unrandom.so (0x00007f262b439000)
+> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f262b044000)
+> /lib64/ld-linux-x86-64.so.2 (0x00007f262b63d000)
+> ```
+
+Note that it lists our custom library. And indeed this is the reason why its code gets executed: _random_num_ calls rand(), but if _unrandom.so_ is loaded it is our library that provides the implementation of rand(). Neat, isn’t it?
+
+#### Being transparent
+
+This is not enough. I’d like to be able to inject some code into an application in a similar manner, but in such a way that it will still be able to function normally. Clearly, if we implemented open() with a simple “_return 0;_”, the application we would like to hack would malfunction. The point is to be **transparent**, and to actually call the original open:
+
+inspect_open.c:
+```
+int open(const char *pathname, int flags){
+    /* Some evil injected code goes here. */
+    return open(pathname,flags); // Here we call the "real" open function, that is provided to us by libc.so
+}
+```
+
+Hm. Not really. This won’t call the “original” open(…). Obviously, this is an endless recursive call.
+
+How do we access the “real” open function? We need to use the programming interface to the dynamic linker. It’s simpler than it sounds. Have a look at this complete example, and then I’ll explain what happens there:
+
+inspect_open.c:
+
+```
+#define _GNU_SOURCE
+#include <dlfcn.h>
+
+typedef int (*orig_open_f_type)(const char *pathname, int flags);
+
+int open(const char *pathname, int flags, ...)
+{
+    /* Some evil injected code goes here. */
+
+    orig_open_f_type orig_open;
+    orig_open = (orig_open_f_type)dlsym(RTLD_NEXT,"open");
+    return orig_open(pathname,flags);
+}
+```
+
+The _dlfcn.h_ header is needed for the _dlsym_ function we use later. That strange _#define_ directive instructs the compiler to enable some non-standard stuff; we need it to enable _RTLD_NEXT_ in _dlfcn.h_. That typedef just creates an alias to a complicated pointer-to-function type, with arguments just like the original open – the alias name is _orig_open_f_type_, which we’ll use later.
+
+The body of our custom open(…) consists of some custom code. The last part of it creates a new function pointer _orig_open_ which will point to the original open(…) function. In order to get the address of that function, we ask _dlsym_ to find for us the next “open” function on the dynamic libraries stack. Finally, we call that function (passing the same arguments as were passed to our fake “open”), and return its return value as ours.
+
+As the “evil injected code” I simply used:
+
+inspect_open.c (fragment):
+
+```
+printf("The victim used open(...) to access '%s'!!!\n",pathname); //remember to include stdio.h!
+```
+
+To compile it, I needed to slightly adjust the compiler flags:
+
+> ```
+> gcc -shared -fPIC inspect_open.c -o inspect_open.so -ldl
+> ```
+
+I had to append _-ldl_, so that this shared library is linked to _libdl_, which provides the _dlsym_ function. (Nah, I am not going to create a fake version of _dlsym_, though this might be fun.)
+
+So what do I have as a result? A shared library, which implements the open(…) function so that it behaves **exactly** like the real open(…)… except it has a side effect of _printf_ing the file path :-)
+
+If you are not convinced this is a powerful trick, it’s time you tried the following:
+
+> ```
+> LD_PRELOAD=$PWD/inspect_open.so gnome-calculator
+> ```
+
+I encourage you to see the result yourself, but basically it lists every file this application accesses. In real time.
+
+I believe it’s not that hard to imagine why this might be useful for debugging or investigating unknown applications. Please note, however, that this particular trick is not quite complete, because _open()_ is not the only function that opens files… For example, there is also _open64()_ in the standard library, and for a full investigation you would need to create a fake one too.
+
+#### **Possible uses**
+
+If you are still with me and enjoyed the above, let me suggest a bunch of ideas of what can be achieved using this trick. Keep in mind that you can do all of the above without the source of the affected app!
+
+1. ~~Gain root privileges.~~ Not really, don’t even bother, you won’t bypass any security this way. (A quick explanation for pros: no libraries will be preloaded this way if ruid != euid)
+
+2. Cheat games: **Unrandomize.** This is what I did in the first example. For a fully working case you would also need to implement a custom _random()_, _rand_r()_, _random_r()_. Also some apps may be reading from _/dev/urandom_ or so; you might redirect them to _/dev/null_ by running the original _open()_ with a modified file path. Furthermore, some apps may have their own random number generation algorithm; there is little you can do about that (unless: point 10 below). But this looks like an easy exercise for beginners.
+
+3. Cheat games: **Bullet time.** Implement all standard time-related functions so that they pretend the time flows two times slower. Or ten times slower. If you correctly calculate new values for time measurement, timed _sleep_ functions, and others, the affected application will believe the time runs slower (or faster, if you wish), and you can experience awesome bullet-time action.
+   Or go **even one step further** and let your shared library also be a DBus client, so that you can communicate with it in real time. Bind some shortcuts to custom commands, and with some additional calculations in your fake timing functions you will be able to enable and disable the slow-mo or fast-forward anytime you wish.
+
+4. Investigate apps: **List accessed files.** That’s what my second example does, but this could also be pushed further, by recording and monitoring all of the app’s file I/O.
+
+5. Investigate apps: **Monitor internet access.** You might do this with Wireshark or similar software, but with this trick you could actually gain control of what an app sends over the web, and not just look, but also affect the exchanged data. Lots of possibilities here, from detecting spyware, to cheating in multiplayer games, or analyzing & reverse-engineering protocols of closed-source applications.
+
+6. Investigate apps: **Inspect GTK structures.** Why limit ourselves to the standard library? Let’s inject code into all GTK calls, so that we can learn which widgets an app uses, and how they are structured. This might then be rendered either to an image or even to a gtkbuilder file! Super useful if you want to learn how some app manages its interface!
+
+7. **Sandbox unsafe applications.** If you don’t trust some app and are afraid that it may wish to _rm -rf /_ or do some other unwanted file activities, you might potentially redirect all its file IO to e.g. /tmp by appropriately modifying the arguments it passes to all file-related functions (not just _open_, but also e.g. removing directories etc.). It’s a more difficult trick than a chroot, but it gives you more control. It would be only as safe as your “wrapper” is complete, and unless you really know what you’re doing, don’t actually run any malicious software this way.
+
+8. **Implement features.** [zlibc][1] is an actual library which works in precisely this way; it uncompresses files on the go as they are accessed, so that any application can work on compressed data without even realizing it.
+
+9. **Fix bugs.** Another real-life example: some time ago (I am not sure this is still the case) Skype – which is closed-source – had problems capturing video from certain webcams. Because the source could not be modified, as Skype is not free software, this was fixed by preloading a library that would correct these problems with the video.
+
+10. Manually **access an application’s own memory**. Do note that you can access all app data this way. This may not be impressive if you are familiar with software like CheatEngine/scanmem/GameConqueror, but they all require root privileges to work. LD_PRELOAD does not. In fact, with a number of clever tricks your injected code might access all app memory, because, in fact, it gets executed by that application itself. You might modify everything this application can. You can probably imagine this allows a lot of low-level hacks… but I’ll post an article about it another time.
+
+These are only the ideas I came up with. I bet you can find some too. If you do – share them by commenting!
+
+--------------------------------------------------------------------------------
+
+via: https://rafalcieslak.wordpress.com/2013/04/02/dynamic-linker-tricks-using-ld_preload-to-cheat-inject-features-and-investigate-programs/
+
+作者:[Rafał Cieślak][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://rafalcieslak.wordpress.com/
+[1]:http://www.zlibc.linux.lu/index.html
diff --git a/sources/tech/20160330 How to turn any syscall into an event Introducing eBPF Kernel probes.md b/sources/tech/20160330 How to turn any syscall into an event Introducing eBPF Kernel probes.md
new file mode 100644
index 0000000000..a53270f2d7
--- /dev/null
+++ b/sources/tech/20160330 How to turn any syscall into an event Introducing eBPF Kernel probes.md
@@ -0,0 +1,361 @@
+How to turn any syscall into an event: Introducing eBPF Kernel probes
+============================================================
+
+
+TL;DR: Using eBPF in a recent (>= 4.4) Linux kernel, you can turn any kernel function call into a userland event with arbitrary data. This is made easy by bcc. The probe is written in C while the data is handled by Python.
+
+If you are not familiar with eBPF or Linux tracing, you really should read the full post. It tries to progressively go through the pitfalls I stumbled upon while playing around with bcc / eBPF, while saving you a lot of the time I spent searching and digging.
+
+### A note on push vs pull in a Linux world
+
+When I started to work on containers, I was wondering how we could update a load balancer configuration dynamically based on actual system state. A common strategy, which works, is to let the container orchestrator trigger a load balancer configuration update whenever it starts a container and then let the load balancer poll the container until some health check passes. It may be a simple “SYN” test.
+
+While this configuration works, it has the downside of making your load balancer wait for some system to be available while it should be… load balancing.
+
+Can we do better?
+
+When you want a program to react to some change in a system there are 2 possible strategies. The program may _poll_ the system to detect changes or, if the system supports it, the system may _push_ events and let the program react to them. Whether you want to use push or poll depends on the context. A good rule of thumb is to use push events when the event rate is low with respect to the processing time, and to switch to polling when the events are coming fast or the system may become unusable. For example, a typical network driver will wait for events from the network card, while frameworks like dpdk will actively poll the card for events to achieve the highest throughput and lowest latency.
+
+In an ideal world, we’d have some kernel interface telling us:
+
+> * “Hey Mr. ContainerManager, I’ve just created a socket for the Nginx-ware of container _servestaticfiles_, maybe you want to update your state?”
+>
+> * “Sure Mr. OS, thanks for letting me know”
+
+While Linux has a wide range of interfaces to deal with events, up to 3 for file events, there is no dedicated interface to get socket event notifications. You can get routing table events, neighbor table events, conntrack events, interface change events. Just not socket events. Or maybe there is one, deep hidden in a Netlink interface.
+
+Ideally, we’d need a generic way to do it. How?
+
+### Kernel tracing and eBPF, a bit of history
+
+Until recently the only way was to patch the kernel or resort to SystemTap. [SystemTap][5] is a Linux tracing system. In a nutshell, it provides a DSL which is compiled into a kernel module that is then live-loaded into the running kernel. Except that some production systems disable dynamic module loading for security reasons. Including the one I was working on at that time. The other way would be to patch the kernel to trigger some events, probably based on netlink. This is not really convenient. Kernel hacking comes with downsides including “interesting” new “features” and an increased maintenance burden.
+
+Hopefully, starting with Linux 3.15 the ground was laid to safely transform any traceable kernel function into a userland event. “Safely” is a common computer science expression referring to “some virtual machine”. This case is no exception. Linux has had one for years. Since Linux 2.1.75, released in 1997, actually. It’s called the Berkeley Packet Filter, or BPF for short. As its name suggests, it was originally developed for the BSD firewalls. It had only 2 registers and only allowed forward jumps, meaning that you could not write loops with it (well, you can, if you know the maximum iterations and you manually unroll them). The point was to guarantee the program would always terminate and hence never hang the system. Still not sure it has any use while you have iptables? It serves as the [foundation of CloudFlare’s AntiDDos protection][6].
+
+OK, so, with Linux 3.15, [BPF was extended][7], turning it into eBPF, for “extended” BPF. It upgrades from two 32-bit registers to ten 64-bit registers and adds backward jumping among other things. It has then been [further extended in Linux 3.18][8], moving it out of the networking subsystem and adding tools like maps. To preserve the safety guarantees, it [introduces a checker][9] which validates all memory accesses and possible code paths. If the checker can’t guarantee the code will terminate within fixed boundaries, it will deny the initial insertion of the program.
+
+For more history, there is [an excellent Oracle presentation on eBPF][10].
+
+Let’s get started.
+
+### Hello from `inet_listen`
+
+As writing assembly is not the most convenient task, even for the best of us, we’ll use [bcc][11]. bcc is a collection of tools based on LLVM and Python abstracting the underlying machinery. Probes are written in C and the results can be exploited from Python, allowing you to easily write non-trivial applications.
+
+Start by installing bcc. For some of these examples, you may require a recent (read >= 4.4) version of the kernel. If you are willing to actually try these examples, I highly recommend that you set up a VM. _NOT_ a docker container. You can’t change the kernel in a container. As this is a young and dynamic project, install instructions are highly platform/version dependent. You can find up to date instructions on [https://github.com/iovisor/bcc/blob/master/INSTALL.md][12]
+
+So, we want to get an event whenever a program starts to listen on a TCP socket. When calling the `listen()` syscall on an `AF_INET` + `SOCK_STREAM` socket, the underlying kernel function is [`inet_listen`][13]. We’ll start by hooking a “Hello World” `kprobe` on its entry point.
+
+```
+from bcc import BPF
+
+# Hello BPF Program
+bpf_text = """
+#include <uapi/linux/ptrace.h>
+#include <net/sock.h>
+
+// 1. Attach kprobe to "inet_listen"
+int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
+{
+    bpf_trace_printk("Hello World!\\n");
+    return 0;
+};
+"""
+
+# 2. Build and Inject program
+b = BPF(text=bpf_text)
+
+# 3. Print debug output
+while True:
+    print b.trace_readline()
+
+```
+
+This program does 3 things: 1. It attaches a kernel probe to “inet_listen” using a naming convention. If the function were called, say, “my_probe”, it could be explicitly attached with `b.attach_kprobe("inet_listen", "my_probe")`. 2. It builds the program using LLVM’s new BPF backend, injects the resulting bytecode using the (new) `bpf()` syscall and automatically attaches the probes matching the naming convention. 3. It reads the raw output from the kernel pipe.
+
+Note: the eBPF backend of LLVM is still young. If you think you’ve hit a bug, you may want to upgrade.
+
+Noticed the `bpf_trace_printk` call? This is a stripped down version of the kernel’s `printk()` debug function. When used, it produces tracing information to a special kernel pipe in `/sys/kernel/debug/tracing/trace_pipe`. As the name implies, this is a pipe. If multiple readers are consuming it, only 1 will get a given line. This makes it unsuitable for production.
+
+Fortunately, Linux 3.19 introduced maps for message passing and Linux 4.4 brings arbitrary perf events support. I’ll demo the perf event based approach later in this post.
+
+```
+# From a first console
+ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
+      nc-4940  [000] d... 22666.991714: : Hello World!
+
+# From a second console
+ubuntu@bcc:~$ nc -l 0 4242
+^C
+
+```
+
+Yay!
+
+### Grab the backlog
+
+Now, let’s print some easily accessible data. Say the “backlog”. The backlog is the number of pending established TCP connections, pending to be `accept()`ed.
+
+Just tweak the `bpf_trace_printk` a bit:
+
+```
+bpf_trace_printk("Listening with up to %d pending connections!\\n", backlog);
+
+```
+
+If you re-run the example with this world-changing improvement, you should see something like:
+
+```
+(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
+      nc-5020  [000] d... 25497.154070: : Listening with up to 1 pending connections!
+
+```
+
+`nc` is a single connection program, hence the backlog of 1. Nginx or Redis would output 128 here. But that’s another story.
+
+Easy, huh? Now let’s get the port.
+
+### Grab the port and IP
+
+Studying the `inet_listen` source from the kernel, we know that we need to get the `inet_sock` from the `socket` object. Just copy from the sources, and insert at the beginning of the tracer:
+
+```
+// cast types. Intermediate cast not needed, kept for readability
+struct sock *sk = sock->sk;
+struct inet_sock *inet = inet_sk(sk);
+
+```
+
+The port can now be accessed from `inet->inet_sport` in network byte order (aka: Big Endian). Easy! So, we could just replace the `bpf_trace_printk` with:
+
+```
+bpf_trace_printk("Listening on port %d!\\n", inet->inet_sport);
+
+```
+
+Then run:
+
+```
+ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
+...
+R1 invalid mem access 'inv'
+...
+Exception: Failed to load BPF program kprobe__inet_listen
+
+```
+
+Except that it’s not (yet) so simple. Bcc is improving a _lot_ currently. While writing this post, a couple of pitfalls had already been addressed. But not yet all. This error means the in-kernel checker could not prove that the memory accesses in the program are correct. See the explicit cast. We need to help it a little by making the accesses more explicit. We’ll use the trusted function `bpf_probe_read` to read an arbitrary memory location while guaranteeing all necessary checks are done, with something like:
+
+```
+// Explicit initialization. The "=0" part is needed to "give life" to the variable on the stack
+u16 lport = 0;
+
+// Explicit arbitrary memory access. Read it:
+// Read into 'lport', 'sizeof(lport)' bytes from 'inet->inet_sport' memory location
+bpf_probe_read(&lport, sizeof(lport), &(inet->inet_sport));
+
+```
+
+Reading the bound address for IPv4 is basically the same, using `inet->inet_rcv_saddr`. If we put it all together, we should get the backlog, the port and the bound IP:
+
+```
+from bcc import BPF
+
+# BPF Program
+bpf_text = """
+#include <uapi/linux/ptrace.h>
+#include <net/sock.h>
+#include <net/inet_sock.h>
+
+// Send an event for each IPv4 listen with PID, bound address and port
+int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
+{
+    // Cast types. Intermediate cast not needed, kept for readability
+    struct sock *sk = sock->sk;
+    struct inet_sock *inet = inet_sk(sk);
+
+    // Working values. You *need* to initialize them to give them "life" on the stack and use them afterward
+    u32 laddr = 0;
+    u16 lport = 0;
+
+    // Pull in details. As 'inet_sk' is internally a type cast, we need to use 'bpf_probe_read'
+    // read: load into 'laddr' 'sizeof(laddr)' bytes from address 'inet->inet_rcv_saddr'
+    bpf_probe_read(&laddr, sizeof(laddr), &(inet->inet_rcv_saddr));
+    bpf_probe_read(&lport, sizeof(lport), &(inet->inet_sport));
+
+    // Push event
+    bpf_trace_printk("Listening on %x %d with %d pending connections\\n", ntohl(laddr), ntohs(lport), backlog);
+    return 0;
+};
+"""
+
+# Build and Inject BPF
+b = BPF(text=bpf_text)
+
+# Print debug output
+while True:
+    print b.trace_readline()
+
+```
+
+A test run should output something like:
+
+```
+(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
+      nc-5024  [000] d... 25821.166286: : Listening on 7f000001 4242 with 1 pending connections
+
+```
+
+Provided that you listen on localhost. The address is displayed as hex here to avoid dealing with IP pretty printing, but it’s all wired up correctly. And that’s cool.
+
+Note: you may wonder why `ntohs` and `ntohl` can be called from BPF while they are not trusted. This is because they are macros and inline functions from “.h” files, and a small bug was [fixed][14] while writing this post.
+
+All done, one more piece: we want to get the related container. In the context of networking, that means we want the network namespace. The network namespace is the building block of containers, allowing them to have isolated networks.
+
+### Grab the network namespace: a forced introduction to perf events
+
+In userland, the network namespace can be determined by checking the target of `/proc/PID/ns/net`. It should look like `net:[4026531957]`. The number between brackets is the inode number of the network namespace. This said, we could grab it by scraping ‘/proc’, but this is racy; we may be dealing with short-lived processes. And races are never good. We’ll grab the inode number directly from the kernel. Fortunately, that’s an easy one:
+
+```
+// Create and populate the variable
+u32 netns = 0;
+
+// Read the netns inode number, like /proc does
+netns = sk->__sk_common.skc_net.net->ns.inum;
+
+```
+
+Easy. And it works.
+
+But if you’ve read so far, you may guess there is something wrong somewhere.
+And there is:
+
+```
+bpf_trace_printk("Listening on %x %d with %d pending connections in container %d\\n", ntohl(laddr), ntohs(lport), backlog, netns);
+
+```
+
+If you try to run it, you’ll get some cryptic error message:
+
+```
+(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
+error: in function kprobe__inet_listen i32 (%struct.pt_regs*, %struct.socket*, i32)
+too many args to 0x1ba9108: i64 = Constant<6>
+
+```
+
+What clang is trying to tell you is “Hey pal, `bpf_trace_printk` can only take 4 arguments, you’ve just used 5.”. I won’t dive into the details here, but that’s a BPF limitation. If you want to dig into it, [here is a good starting point][15].
+
+The only way to fix it is to… stop debugging and make it production ready. So let’s get started (and make sure you run at least Linux 4.4). We’ll use perf events, which support passing arbitrary sized structures to userland. Additionally, only our reader will get it, so that multiple unrelated eBPF programs can produce data concurrently without issues.
+
+To use it, we need to:
+
+1. define a structure
+
+2. declare the event
+
+3. push the event
+
+4. re-declare the event on Python’s side (This step should go away in the future)
+
+5. consume and format the event
+
+This may seem like a lot, but it ain’t. See:
+
+```
+// At the beginning of the C program, declare our event
+struct listen_evt_t {
+    u64 laddr;
+    u64 lport;
+    u64 netns;
+    u64 backlog;
+};
+BPF_PERF_OUTPUT(listen_evt);
+
+// In kprobe__inet_listen, replace the printk with
+struct listen_evt_t evt = {
+    .laddr = ntohl(laddr),
+    .lport = ntohs(lport),
+    .netns = netns,
+    .backlog = backlog,
+};
+listen_evt.perf_submit(ctx, &evt, sizeof(evt));
+
+```
+
+The Python side will require a little more work, though:
+
+```
+# We need ctypes to parse the event structure
+import ctypes
+
+# Declare data format
+class ListenEvt(ctypes.Structure):
+    _fields_ = [
+        ("laddr", ctypes.c_ulonglong),
+        ("lport", ctypes.c_ulonglong),
+        ("netns", ctypes.c_ulonglong),
+        ("backlog", ctypes.c_ulonglong),
+    ]
+
+# Declare event printer
+def print_event(cpu, data, size):
+    event = ctypes.cast(data, ctypes.POINTER(ListenEvt)).contents
+    print("Listening on %x %d with %d pending connections in container %d" % (
+        event.laddr,
+        event.lport,
+        event.backlog,
+        event.netns,
+    ))
+
+# Replace the event loop
+b["listen_evt"].open_perf_buffer(print_event)
+while True:
+    b.kprobe_poll()
+
+```
+
+Give it a try. In this example, I have a redis running in a docker container and nc on the host:
+
+```
+(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
+Listening on 0 6379 with 128 pending connections in container 4026532165
+Listening on 0 6379 with 128 pending connections in container 4026532165
+Listening on 7f000001 6588 with 1 pending connections in container 4026531957
+
+```
+
+### Last word
+
+Absolutely everything is now set up to trigger events from arbitrary function calls in the kernel using eBPF, and you should have seen most of the common pitfalls I hit while learning eBPF. If you want to see the full version of this tool, along with some more tricks like IPv6 support, have a look at [https://github.com/iovisor/bcc/blob/master/tools/solisten.py][16]. It’s now an official tool, thanks to the support of the bcc team.
+
+To go further, you may want to check out Brendan Gregg’s blog, in particular [the post about eBPF maps and statistics][17]. He is one of the project’s main contributors.
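+
+If you would rather start from the finished tool than rebuild it from the snippets above, a minimal session (a sketch, assuming a >= 4.4 kernel and a working bcc install; adjust paths to your setup) could look like:
+
+```
+git clone https://github.com/iovisor/bcc
+cd bcc/tools
+sudo ./solisten.py    # prints one line per listen(), much like the examples above
+```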
+ + +-------------------------------------------------------------------------------- + +via: https://blog.yadutaf.fr/2016/03/30/turn-any-syscall-into-event-introducing-ebpf-kernel-probes/ + +作者:[Jean-Tiare Le Bigot ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.yadutaf.fr/about +[1]:https://blog.yadutaf.fr/tags/linux +[2]:https://blog.yadutaf.fr/tags/tracing +[3]:https://blog.yadutaf.fr/tags/ebpf +[4]:https://blog.yadutaf.fr/tags/bcc +[5]:https://en.wikipedia.org/wiki/SystemTap +[6]:https://blog.cloudflare.com/bpf-the-forgotten-bytecode/ +[7]:https://blog.yadutaf.fr/2016/03/30/turn-any-syscall-into-event-introducing-ebpf-kernel-probes/TODO +[8]:https://lwn.net/Articles/604043/ +[9]:http://lxr.free-electrons.com/source/kernel/bpf/verifier.c#L21 +[10]:http://events.linuxfoundation.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf +[11]:https://github.com/iovisor/bcc +[12]:https://github.com/iovisor/bcc/blob/master/INSTALL.md +[13]:http://lxr.free-electrons.com/source/net/ipv4/af_inet.c#L194 +[14]:https://github.com/iovisor/bcc/pull/453 +[15]:http://lxr.free-electrons.com/source/kernel/trace/bpf_trace.c#L86 +[16]:https://github.com/iovisor/bcc/blob/master/tools/solisten.py +[17]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html diff --git a/sources/tech/20160922 A Linux users guide to Logical Volume Management.md b/sources/tech/20160922 A Linux users guide to Logical Volume Management.md new file mode 100644 index 0000000000..ff0e390f38 --- /dev/null +++ b/sources/tech/20160922 A Linux users guide to Logical Volume Management.md @@ -0,0 +1,233 @@ +A Linux user's guide to Logical Volume Management +============================================================ + +![Logical Volume Management (LVM)](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other11x_cc.png?itok=I_kCDYj0 "Logical Volume Management (LVM)") +Image by : opensource.com + +Managing disk space has always been a significant task for sysadmins. Running out of disk space used to be the start of a long and complex series of tasks to increase the space available to a disk partition. It also required taking the system off-line. This usually involved installing a new hard drive, booting to recovery or single-user mode, creating a partition and a filesystem on the new hard drive, using temporary mount points to move the data from the too-small filesystem to the new, larger one, changing the content of the /etc/fstab file to reflect the correct device name for the new partition, and rebooting to remount the new filesystem on the correct mount point. + +I have to tell you that, when LVM (Logical Volume Manager) first made its appearance in Fedora Linux, I resisted it rather strongly. My initial reaction was that I did not need this additional layer of abstraction between me and the hard drives. It turns out that I was wrong, and that logical volume management is very useful. + +LVM allows for very flexible disk space management. It provides features like the ability to add disk space to a logical volume and its filesystem while that filesystem is mounted and active and it allows for the collection of multiple physical hard drives and partitions into a single volume group which can then be divided into logical volumes. 
The volume manager also allows reducing the amount of disk space allocated to a logical volume, but there are a couple of requirements. First, the volume must be unmounted. Second, the filesystem itself must be reduced in size before the volume on which it resides can be reduced.

It is important to note that the filesystem itself must allow resizing for this feature to work. The EXT2, 3, and 4 filesystems all allow both offline (unmounted) and online (mounted) resizing when increasing the size of a filesystem, and offline resizing when reducing the size. You should check the details of the filesystems you intend to use in order to verify whether they can be resized at all and especially whether they can be resized while online.

### Expanding a filesystem on the fly

I always like to run new distributions in a VirtualBox virtual machine for a few days or weeks to ensure that I will not run into any devastating problems when I start installing it on my production machines. One morning a couple years ago I started installing a newly released version of Fedora in a virtual machine on my primary workstation. I thought that I had enough disk space allocated to the host filesystem in which the VM was being installed. I did not. About a third of the way through the installation I ran out of space on that filesystem. Fortunately, VirtualBox detected the out-of-space condition and paused the virtual machine, and even displayed an error message indicating the exact cause of the problem.

Note that this problem was not due to the virtual disk being too small; rather, the logical volume on the host computer was running out of space, so the virtual disk belonging to the virtual machine did not have enough room to expand on the host's logical volume.

Since most modern distributions use Logical Volume Management by default, and I had some free space available on the volume group, I was able to assign additional disk space to the appropriate logical volume and then expand the host's filesystem on the fly. This means that I did not have to reformat the entire hard drive and reinstall the operating system or even reboot. I simply assigned some of the available space to the appropriate logical volume and resized the filesystem, all while the filesystem was online and the running program, the virtual machine, was still using the host filesystem. After resizing the logical volume and the filesystem I resumed running the virtual machine and the installation continued as if no problems had occurred.

Although this type of problem may never have happened to you, running out of disk space while a critical program is running has happened to many people. And while many programs, especially Windows programs, are not as well written and resilient as VirtualBox, Linux Logical Volume Management made it possible to recover without losing any data and without having to restart the time-consuming installation.

### LVM Structure

The structure of a Logical Volume Manager disk environment is illustrated by Figure 1, below. Logical Volume Management enables the combining of multiple individual hard drives and/or disk partitions into a single volume group (VG). That volume group can then be subdivided into logical volumes (LV) or used as a single large volume. Regular file systems, such as EXT3 or EXT4, can then be created on a logical volume.
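If you want to see this structure on a live system, the LVM reporting commands display each layer; a quick sketch (output will vary with your setup):

```
pvs     # physical volumes and the volume group each one belongs to
vgs     # volume groups, their sizes, and their free space
lvs     # logical volumes carved out of each volume group
lsblk   # the same hierarchy as a tree, including mount points
```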
In Figure 1, two complete physical hard drives and one partition from a third hard drive have been combined into a single volume group. Two logical volumes have been created from the space in the volume group, and a filesystem, such as EXT3 or EXT4, has been created on each of the two logical volumes.

![lvm.png](https://opensource.com/sites/default/files/resize/images/life-uploads/lvm-520x222.png)

 _Figure 1: LVM allows combining partitions and entire hard drives into Volume Groups._

Adding disk space to a host is fairly straightforward but, in my experience, is done relatively infrequently. The basic steps needed are listed below. You can either create an entirely new volume group or you can add the new space to an existing volume group and either expand an existing logical volume or create a new one.

### Adding a new logical volume

There are times when it is necessary to add a new logical volume to a host. For example, after noticing that the directory containing virtual disks for my VirtualBox virtual machines was filling up the /home filesystem, I decided to create a new logical volume in which to store the virtual machine data, including the virtual disks. This would free up a great deal of space in my /home filesystem and also allow me to manage the disk space for the VMs independently.

The basic steps for adding a new logical volume are as follows.

1. If necessary, install a new hard drive.

2. Optional: Create a partition on the hard drive.

3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.

4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.

5. Create a new logical volume (LV) from the space in the volume group.

6. Create a filesystem on the new logical volume.

7. Add appropriate entries to /etc/fstab for mounting the filesystem.

8. Mount the filesystem.

Now for the details. The following sequence is taken from an example I used as a lab project when teaching about Linux filesystems.

### Example

This example shows how to use the CLI to extend an existing volume group to add more space to it, create a new logical volume in that space, and create a filesystem on the logical volume. This procedure can be performed on a running, mounted filesystem.

WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems do not support online resizing; check the documentation before attempting this with filesystems such as BTRFS or ZFS.

### Install hard drive

If there is not enough space in the volume group on the existing hard drive(s) in the system to add the desired amount of space, it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive, and then perform the following steps.

### Create Physical Volume from hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.

```
pvcreate /dev/hdd
```

It is not necessary to create a partition of any kind on the new hard drive. Creating the Physical Volume, which will be recognized by the Logical Volume Manager, can be done on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

### Extend the existing Volume Group

In this example we will extend an existing volume group rather than creating a new one; you can choose to do it either way.
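For completeness, the new-volume-group alternative is a single command; a minimal sketch, where `MyVG02` is a made-up name for the new group:

```
# Create a brand-new volume group from the new PV
# instead of extending MyVG01; MyVG02 is illustrative.
vgcreate MyVG02 /dev/hdd
```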
After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example the existing Volume Group is named MyVG01.

```
vgextend /dev/MyVG01 /dev/hdd
```

### Create the Logical Volume

First create the Logical Volume (LV) from existing free space within the Volume Group. The command below creates an LV with a size of 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.

```
lvcreate -L 50G --name Stuff MyVG01
```

### Create the filesystem

Creating the Logical Volume does not create the filesystem. That task must be performed separately. The command below creates an EXT4 filesystem that fits the newly created Logical Volume.

```
mkfs -t ext4 /dev/MyVG01/Stuff
```

### Add a filesystem label

Adding a filesystem label makes it easy to identify the filesystem later in case of a crash or other disk-related problems.

```
e2label /dev/MyVG01/Stuff Stuff
```

### Mount the filesystem

At this point you can create a mount point, add an appropriate entry to the /etc/fstab file, and mount the filesystem.

You should also check to verify the volume has been created correctly. You can use the **df**, **lvs**, and **vgs** commands to do this.

### Resizing a logical volume in an LVM filesystem

The need to resize a filesystem has been around since the beginning of the first versions of Unix and has not gone away with Linux. It has gotten easier, however, with Logical Volume Management.

1. If necessary, install a new hard drive.

2. Optional: Create a partition on the hard drive.

3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.

4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.

5. Create one or more logical volumes (LV) from the space in the volume group, or expand an existing logical volume with some or all of the new space in the volume group.

6. If you created a new logical volume, create a filesystem on it. If adding space to an existing logical volume, use the resize2fs command to enlarge the filesystem to fill the space in the logical volume.

7. Add appropriate entries to /etc/fstab for mounting the filesystem.

8. Mount the filesystem.

### Example

This example describes how to resize an existing Logical Volume in an LVM environment using the CLI. It adds about 50GB of space to the /Stuff filesystem. This procedure can be used on a mounted, live filesystem only with the Linux 2.6 Kernel (and higher) and EXT3 and EXT4 filesystems. I do not recommend that you do so on any critical system, but it can be done and I have done so many times; even on the root (/) filesystem. Use your judgment.

WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems do not support online resizing; check the documentation before attempting this with filesystems such as BTRFS or ZFS.

### Install the hard drive

If there is not enough space on the existing hard drive(s) in the system to add the desired amount of space, it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive and then perform the following steps.

### Create a Physical Volume from the hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.

```
pvcreate /dev/hdd
```

It is not necessary to create a partition of any kind on the new hard drive. Creating the Physical Volume, which will be recognized by the Logical Volume Manager, can be done on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

### Add PV to existing Volume Group

For this example, we will use the new PV to extend an existing Volume Group. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example, the existing Volume Group is named MyVG01.

```
vgextend /dev/MyVG01 /dev/hdd
```

### Extend the Logical Volume

Extend the Logical Volume (LV) from existing free space within the Volume Group. The command below expands the LV by 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.

```
lvextend -L +50G /dev/MyVG01/Stuff
```

### Expand the filesystem

Extending the Logical Volume will also expand the filesystem if you use the -r option. If you do not use the -r option, that task must be performed separately. The command below resizes the filesystem to fit the newly resized Logical Volume.

```
resize2fs /dev/MyVG01/Stuff
```

You should check to verify the resizing has been performed correctly. You can use the **df**, **lvs**, and **vgs** commands to do this.

### Tips

Over the years I have learned a few things that can make logical volume management even easier than it already is. Hopefully these tips can prove of some value to you.

* Use the Extended file systems unless you have a clear reason to use another filesystem. Not all filesystems support resizing, but EXT2, 3, and 4 do. The EXT filesystems are also very fast and efficient. In any event, they can be tuned by a knowledgeable sysadmin to meet the needs of most environments if the default tuning parameters do not.

* Use meaningful volume and volume group names.

* Use EXT filesystem labels.

I know that, like me, many sysadmins have resisted the change to Logical Volume Management. I hope that this article will encourage you to at least try LVM. I am really glad that I did; my disk management tasks are much easier since I made the switch.

### About the author

 [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/david-crop.jpg?itok=oePpOpyV)][10]

 David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for...
[more about David Both][7][More about me][8] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/business/16/9/linux-users-guide-lvm + +作者:[ David Both][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/dboth +[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[6]:https://opensource.com/business/16/9/linux-users-guide-lvm?rate=79vf1js7A7rlp-I96YFneopUQqsa2SuB-g-og7eiF1U +[7]:https://opensource.com/users/dboth +[8]:https://opensource.com/users/dboth +[9]:https://opensource.com/user/14106/feed +[10]:https://opensource.com/users/dboth +[11]:https://opensource.com/users/dboth +[12]:https://opensource.com/users/dboth +[13]:https://opensource.com/business/16/9/linux-users-guide-lvm#comments +[14]:https://opensource.com/tags/business +[15]:https://opensource.com/tags/linux +[16]:https://opensource.com/tags/how-tos-and-tutorials +[17]:https://opensource.com/tags/sysadmin diff --git a/sources/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md b/sources/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md new file mode 100644 index 0000000000..a3fc2c886e --- /dev/null +++ b/sources/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md @@ -0,0 +1,110 @@ +INTRODUCING DOCKER SECRETS MANAGEMENT +============================================================ + +Containers are changing how we view apps and infrastructure. Whether the code inside containers is big or small, container architecture introduces a change to how that code behaves with hardware – it fundamentally abstracts it from the infrastructure. Docker believes that there are three key components to container security and together they result in inherently safer apps. + + ![Docker Security](https://i2.wp.com/blog.docker.com/wp-content/uploads/e12387a1-ab21-4942-8760-5b1677bc656d-1.jpg?w=1140&ssl=1) + +A critical element of building safer apps is having a secure way of communicating with other apps and systems, something that often requires credentials, tokens, passwords and other types of confidential information—usually referred to as application secrets. We are excited to introduce Docker Secrets, a container native solution that strengthens the Trusted Delivery component of container security by integrating secret distribution directly into the container platform. + +With containers, applications are now dynamic and portable across multiple environments. This  made existing secrets distribution solutions inadequate because they were largely designed for static environments. 
Unfortunately, this led to an increase in the mismanagement of application secrets, making it common to find insecure, home-grown solutions, such as embedding secrets into version control systems like GitHub, or other equally bad, bolted-on point solutions added as an afterthought.

### Introducing Docker Secrets Management

We fundamentally believe that apps are safer if there is a standardized interface for accessing secrets. Any good solution will also have to follow security best practices, such as encrypting secrets while in transit; encrypting secrets at rest; preventing secrets from unintentionally leaking when consumed by the final application; and strictly adhering to the principle of least privilege, where an application only has access to the secrets that it needs—no more, no less.

By integrating secrets into Docker orchestration, we are able to deliver a solution for the secrets management problem that follows these exact principles.

The following diagram provides a high-level view of how the Docker swarm mode architecture is applied to securely deliver a new type of object to our containers: a secret object.

 ![Docker Secrets Management](https://i0.wp.com/blog.docker.com/wp-content/uploads/b69d2410-9e25-44d8-aa2d-f67b795ff5e3.jpg?w=1140&ssl=1)

In Docker, a secret is any blob of data, such as a password, SSH private key, TLS Certificate, or any other piece of data that is sensitive in nature. When you add a secret to the swarm (by running `docker secret create`), Docker sends the secret over to the swarm manager over a mutually authenticated TLS connection, making use of the [built-in Certificate Authority][17] that gets automatically created when bootstrapping a new swarm.

```
$ echo "This is a secret" | docker secret create my_secret_data -
```

Once the secret reaches a manager node, it gets saved to the internal Raft store, which uses NaCl's Salsa20Poly1305 with a 256-bit key to ensure no data is ever written to disk unencrypted. Writing to the internal store gives secrets the same high availability guarantees that the rest of the swarm management data gets.

When a swarm manager starts up, the encrypted Raft logs containing the secrets are decrypted using a data encryption key that is unique per node. This key, and the node's TLS credentials used to communicate with the rest of the cluster, can be encrypted with a cluster-wide key encryption key, called the unlock key, which is also propagated using Raft and will be required on manager start.

When you grant a newly created or running service access to a secret, one of the manager nodes (only managers have access to all the stored secrets) will send it over the already established TLS connection exclusively to the nodes that will be running that specific service. This means that nodes cannot request the secrets themselves, and will only gain access to the secrets when provided to them by a manager – strictly for the services that require them.

```
$ docker service create --name="redis" --secret="my_secret_data" redis:alpine
```

The unencrypted secret is mounted into the container in an in-memory filesystem at /run/secrets/.
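As an aside, the long form of the `--secret` flag also lets you choose the filename the secret is mounted under; a small sketch, where the target name `redis_password` is made up for illustration:

```
# Mount my_secret_data under a custom name inside the container;
# it will appear at /run/secrets/redis_password instead.
$ docker service create --name="redis" \
    --secret source=my_secret_data,target=redis_password \
    redis:alpine
```

With the default short form used above, the secret simply appears under its own name: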

```
$ docker exec $(docker ps --filter name=redis -q) ls -l /run/secrets
total 4
-r--r--r--    1 root     root            17 Dec 13 22:48 my_secret_data
```

If a service gets deleted, or rescheduled somewhere else, the manager will immediately notify all the nodes that no longer require access to that secret to erase it from memory, and the node will no longer have any access to that application secret.

```
$ docker service update --secret-rm="my_secret_data" redis

$ docker exec -it $(docker ps --filter name=redis -q) cat /run/secrets/my_secret_data

cat: can't open '/run/secrets/my_secret_data': No such file or directory
```

Check out the [Docker secrets docs][18] for more information and examples on how to create and manage your secrets. And a special shout-out to Laurens Van Houtven ([https://www.lvh.io/][19]), who collaborated with the Docker security and core engineering team to help make this feature a reality.

### Safer Apps with Docker

Docker secrets is designed to be easily usable by developers and IT ops teams to build and run safer apps. It is a container-first architecture designed to keep secrets safe and used only when needed by the exact container that needs that secret to operate. From defining apps and secrets with Docker Compose through an IT admin deploying that Compose file directly in Docker Datacenter, the services, secrets, networks, and volumes will travel securely and safely with the application.

Resources to learn more:

* [Docker Datacenter on 1.13 with Secrets, Security Scanning, Content Cache and More][7]

* [Download Docker][8] and get started today

* [Try secrets in Docker Datacenter][9]

* [Read the Documentation][10]

* Attend an [upcoming webinar][11]

--------------------------------------------------------------------------------

via: https://blog.docker.com/2017/02/docker-secrets-management/

作者:[ Ying Li][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://blog.docker.com/author/yingli/
[1]:http://www.linkedin.com/shareArticle?mini=true&url=http://dockr.ly/2k6gnOB&title=Introducing%20Docker%20Secrets%20Management&summary=Containers%20are%20changing%20how%20we%20view%20apps%20and%20infrastructure.%20Whether%20the%20code%20inside%20containers%20is%20big%20or%20small,%20container%20architecture%20introduces%20a%20change%20to%20how%20that%20code%20behaves%20with%20hardware%20-%20it%20fundamentally%20abstracts%20it%20from%20the%20infrastructure.%20Docker%20believes%20that%20there%20are%20three%20key%20components%20to%20container%20security%20and%20...
+[2]:http://www.reddit.com/submit?url=http://dockr.ly/2k6gnOB&title=Introducing%20Docker%20Secrets%20Management +[3]:https://plus.google.com/share?url=http://dockr.ly/2k6gnOB +[4]:http://news.ycombinator.com/submitlink?u=http://dockr.ly/2k6gnOB&t=Introducing%20Docker%20Secrets%20Management +[5]:https://twitter.com/share?text=Get+safer+apps+for+dev+and+ops+w%2F+new+%23Docker+secrets+management+&via=docker&related=docker&url=http://dockr.ly/2k6gnOB +[6]:https://twitter.com/share?text=Get+safer+apps+for+dev+and+ops+w%2F+new+%23Docker+secrets+management+&via=docker&related=docker&url=http://dockr.ly/2k6gnOB +[7]:http://dockr.ly/AppSecurity +[8]:https://www.docker.com/getdocker +[9]:http://www.docker.com/trial +[10]:https://docs.docker.com/engine/swarm/secrets/ +[11]:http://www.docker.com/webinars +[12]:https://blog.docker.com/author/yingli/ +[13]:https://blog.docker.com/tag/container-security/ +[14]:https://blog.docker.com/tag/docker-security/ +[15]:https://blog.docker.com/tag/secrets-management/ +[16]:https://blog.docker.com/tag/security/ +[17]:https://docs.docker.com/engine/swarm/how-swarm-mode-works/pki/ +[18]:https://docs.docker.com/engine/swarm/secrets/ +[19]:https://lvh.io%29/ diff --git a/sources/tech/20170530 How to Improve a Legacy Codebase.md b/sources/tech/20170530 How to Improve a Legacy Codebase.md deleted file mode 100644 index cff5e70538..0000000000 --- a/sources/tech/20170530 How to Improve a Legacy Codebase.md +++ /dev/null @@ -1,108 +0,0 @@ -Translating by aiwhj -# How to Improve a Legacy Codebase - - -It happens at least once in the lifetime of every programmer, project manager or teamleader. You get handed a steaming pile of manure, if you’re lucky only a few million lines worth, the original programmers have long ago left for sunnier places and the documentation - if there is any to begin with - is hopelessly out of sync with what is presently keeping the company afloat. - -Your job: get us out of this mess. - -After your first instinctive response (run for the hills) has passed you start on the project knowing full well that the eyes of the company senior leadership are on you. Failure is not an option. And yet, by the looks of what you’ve been given failure is very much in the cards. So what to do? - -I’ve been (un)fortunate enough to be in this situation several times and me and a small band of friends have found that it is a lucrative business to be able to take these steaming piles of misery and to turn them into healthy maintainable projects. Here are some of the tricks that we employ: - -### Backup - -Before you start to do anything at all make a backup of  _everything_  that might be relevant. This to make sure that no information is lost that might be of crucial importance somewhere down the line. All it takes is a silly question that you can’t answer to eat up a day or more once the change has been made. Especially configuration data is susceptible to this kind of problem, it is usually not versioned and you’re lucky if it is taken along in the periodic back-up scheme. So better safe than sorry, copy everything to a very safe place and never ever touch that unless it is in read-only mode. 
- -### Important pre-requisite, make sure you have a build process and that it actually produces what runs in production - -I totally missed this step on the assumption that it is obvious and likely already in place but many HN commenters pointed this out and they are absolutely right: step one is to make sure that you know what is running in production right now and that means that you need to be able to build a version of the software that is - if your platform works that way - byte-for-byte identical with the current production build. If you can’t find a way to achieve this then likely you will be in for some unpleasant surprises once you commit something to production. Make sure you test this to the best of your ability to make sure that you have all the pieces in place and then, after you’ve gained sufficient confidence that it will work move it to production. Be prepared to switch back immediately to whatever was running before and make sure that you log everything and anything that might come in handy during the - inevitable - post mortem. - -### Freeze the DB - -If at all possible freeze the database schema until you are done with the first level of improvements, by the time you have a solid understanding of the codebase and the legacy code has been fully left behind you are ready to modify the database schema. Change it any earlier than that and you may have a real problem on your hand, now you’ve lost the ability to run an old and a new codebase side-by-side with the database as the steady foundation to build on. Keeping the DB totally unchanged allows you to compare the effect your new business logic code has compared to the old business logic code, if it all works as advertised there should be no differences. - -### Write your tests - -Before you make any changes at all write as many end-to-end and integration tests as you can. Make sure these tests produce the right output and test any and all assumptions that you can come up with about how you  _think_  the old stuff works (be prepared for surprises here). These tests will have two important functions: they will help to clear up any misconceptions at a very early stage and they will function as guardrails once you start writing new code to replace old code. - -Automate all your testing, if you’re already experienced with CI then use it and make sure your tests run fast enough to run the full set of tests after every commit. - -### Instrumentation and logging - -If the old platform is still available for development add instrumentation. Do this in a completely new database table, add a simple counter for every event that you can think of and add a single function to increment these counters based on the name of the event. That way you can implement a time-stamped event log with a few extra lines of code and you’ll get a good idea of how many events of one kind lead to events of another kind. One example: User opens app, User closes app. If two events should result in some back-end calls those two counters should over the long term remain at a constant difference, the difference is the number of apps currently open. If you see many more app opens than app closes you know there has to be a way in which apps end (for instance a crash). For each and every event you’ll find there is some kind of relationship to other events, usually you will strive for constant relationships unless there is an obvious error somewhere in the system. 
You’ll aim to reduce those counters that indicate errors and you’ll aim to maximize counters further down in the chain to the level indicated by the counters at the beginning. (For instance: customers attempting to pay should result in an equal number of actual payments received). - -This very simple trick turns every backend application into a bookkeeping system of sorts and just like with a real bookkeeping system the numbers have to match, as long as they don’t you have a problem somewhere. - -This system will over time become invaluable in establishing the health of the system and will be a great companion next to the source code control system revision log where you can determine the point in time that a bug was introduced and what the effect was on the various counters. - -I usually keep these counters at a 5 minute resolution (so 12 buckets for an hour), but if you have an application that generates fewer or more events then you might decide to change the interval at which new buckets are created. All counters share the same database table and so each counter is simply a column in that table. - -### Change only one thing at the time - -Do not fall into the trap of improving both the maintainability of the code or the platform it runs on at the same time as adding new features or fixing bugs. This will cause you huge headaches because you now have to ask yourself every step of the way what the desired outcome is of an action and will invalidate some of the tests you made earlier. - -### Platform changes - -If you’ve decided to migrate the application to another platform then do this first  _but keep everything else exactly the same_ . If you want you can add more documentation or tests, but no more than that, all business logic and interdependencies should remain as before. - -### Architecture changes - -The next thing to tackle is to change the architecture of the application (if desired). At this point in time you are free to change the higher level structure of the code, usually by reducing the number of horizontal links between modules, and thus reducing the scope of the code active during any one interaction with the end-user. If the old code was monolithic in nature now would be a good time to make it more modular, break up large functions into smaller ones but leave names of variables and data-structures as they were. - -HN user [mannykannot][1] points - rightfully - out that this is not always an option, if you’re particularly unlucky then you may have to dig in deep in order to be able to make any architecture changes. I agree with that and I should have included it here so hence this little update. What I would further like to add is if you do both do high level changes and low level changes at least try to limit them to one file or worst case one subsystem so that you limit the scope of your changes as much as possible. Otherwise you might have a very hard time debugging the change you just made. - -### Low level refactoring - -By now you should have a very good understanding of what each module does and you are ready for the real work: refactoring the code to improve maintainability and to make the code ready for new functionality. This will likely be the part of the project that consumes the most time, document as you go, do not make changes to a module until you have thoroughly documented it and feel you understand it. 
Feel free to rename variables and functions as well as datastructures to improve clarity and consistency, add tests (also unit tests, if the situation warrants them). - -### Fix bugs - -Now you’re ready to take on actual end-user visible changes, the first order of battle will be the long list of bugs that have accumulated over the years in the ticket queue. As usual, first confirm the problem still exists, write a test to that effect and then fix the bug, your CI and the end-to-end tests written should keep you safe from any mistakes you make due to a lack of understanding or some peripheral issue. - -### Database Upgrade - -If required after all this is done and you are on a solid and maintainable codebase again you have the option to change the database schema or to replace the database with a different make/model altogether if that is what you had planned to do. All the work you’ve done up to this point will help to assist you in making that change in a responsible manner without any surprises, you can completely test the new DB with the new code and all the tests in place to make sure your migration goes off without a hitch. - -### Execute on the roadmap - -Congratulations, you are out of the woods and are now ready to implement new functionality. - -### Do not ever even attempt a big-bang rewrite - -A big-bang rewrite is the kind of project that is pretty much guaranteed to fail. For one, you are in uncharted territory to begin with so how would you even know what to build, for another, you are pushing  _all_  the problems to the very last day, the day just before you go ‘live’ with your new system. And that’s when you’ll fail, miserably. Business logic assumptions will turn out to be faulty, suddenly you’ll gain insight into why that old system did certain things the way it did and in general you’ll end up realizing that the guys that put the old system together weren’t maybe idiots after all. If you really do want to wreck the company (and your own reputation to boot) by all means, do a big-bang rewrite, but if you’re smart about it this is not even on the table as an option. - -### So, the alternative, work incrementally - -To untangle one of these hairballs the quickest path to safety is to take any element of the code that you do understand (it could be a peripheral bit, but it might also be some core module) and try to incrementally improve it still within the old context. If the old build tools are no longer available you will have to use some tricks (see below) but at least try to leave as much of what is known to work alive while you start with your changes. That way as the codebase improves so does your understanding of what it actually does. A typical commit should be at most a couple of lines. - -### Release! - -Every change along the way gets released into production, even if the changes are not end-user visible it is important to make the smallest possible steps because as long as you lack understanding of the system there is a fair chance that only the production environment will tell you there is a problem. If that problem arises right after you make a small change you will gain several advantages: - -* it will probably be trivial to figure out what went wrong - -* you will be in an excellent position to improve the process - -* and you should immediately update the documentation to show the new insights gained - -### Use proxies to your advantage - -If you are doing web development praise the gods and insert a proxy between the end-users and the old system. 
Now you have per-url control over which requests go to the old system and which you will re-route to the new system allowing much easier and more granular control over what is run and who gets to see it. If your proxy is clever enough you could probably use it to send a percentage of the traffic to the new system for an individual URL until you are satisfied that things work the way they should. If your integration tests also connect to this interface it is even better. - -### Yes, but all this will take too much time! - -Well, that depends on how you look at it. It’s true there is a bit of re-work involved in following these steps. But it  _does_  work, and any kind of optimization of this process makes the assumption that you know more about the system than you probably do. I’ve got a reputation to maintain and I  _really_  do not like negative surprises during work like this. With some luck the company is already on the skids, or maybe there is a real danger of messing things up for the customers. In a situation like that I prefer total control and an iron clad process over saving a couple of days or weeks if that imperils a good outcome. If you’re more into cowboy stuff - and your bosses agree - then maybe it would be acceptable to take more risk, but most companies would rather take the slightly slower but much more sure road to victory. - --------------------------------------------------------------------------------- - -via: https://jacquesmattheij.com/improving-a-legacy-codebase - -作者:[Jacques Mattheij ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jacquesmattheij.com/ -[1]:https://news.ycombinator.com/item?id=14445661 diff --git a/sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md b/sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md new file mode 100644 index 0000000000..4a7d23e5f0 --- /dev/null +++ b/sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md @@ -0,0 +1,91 @@ +Why Car Companies Are Hiring Computer Security Experts +============================================================ + +Photo +![](https://static01.nyt.com/images/2017/06/08/business/08BITS-GURUS1/08BITS-GURUS1-superJumbo.jpg) +The cybersecurity experts Marc Rogers, left, of CloudFlare and Kevin Mahaffey of Lookout were able to control various Tesla functions from their physically connected laptop. They pose in CloudFlare’s lobby in front of Lava Lamps used to generate numbers for encryption.CreditChristie Hemm Klok for The New York Times + +It started about seven years ago. Iran’s top nuclear scientists were being assassinated in a string of similar attacks: Assailants on motorcycles were pulling up to their moving cars, attaching magnetic bombs and detonating them after the motorcyclists had fled the scene. + +In another seven years, security experts warn, assassins won’t need motorcycles or magnetic bombs. All they’ll need is a laptop and code to send driverless cars careering off a bridge, colliding with a driverless truck or coming to an unexpected stop in the middle of fast-moving traffic. + +Automakers may call them self-driving cars. But hackers call them computers that travel over 100 miles an hour. + +“These are no longer cars,” said Marc Rogers, the principal security researcher at the cybersecurity firm CloudFlare. “These are data centers on wheels. 
Any part of the car that talks to the outside world is a potential inroad for attackers.”

Those fears came into focus two years ago when two “white hat” hackers — researchers who look for computer vulnerabilities to spot problems and fix them, rather than to commit a crime or cause problems — successfully gained access to a Jeep Cherokee from their computer miles away. They rendered their crash-test dummy (in this case a nervous reporter) powerless over his vehicle and disabled his transmission in the middle of a highway.

The hackers, Chris Valasek and Charlie Miller (now security researchers respectively at Uber and Didi, an Uber competitor in China), discovered an [electronic route from the Jeep’s entertainment system to its dashboard][10]. From there, they had control of the vehicle’s steering, brakes and transmission — everything they needed to paralyze their crash test dummy in the middle of a highway.

“Car hacking makes great headlines, but remember: No one has ever had their car hacked by a bad guy,” Mr. Miller wrote on Twitter last Sunday. “It’s only ever been performed by researchers.”

Still, the research by Mr. Miller and Mr. Valasek came at a steep price for Jeep’s manufacturer, Fiat Chrysler, which was forced to recall 1.4 million of its vehicles as a result of the hacking experiment.

It is no wonder that Mary Barra, the chief executive of General Motors, called cybersecurity her company’s top priority last year. Now the skills of researchers and so-called white hat hackers are in high demand among automakers and tech companies pushing ahead with driverless car projects.

Uber, [Tesla][11], Apple and Didi in China have been actively recruiting white hat hackers like Mr. Miller and Mr. Valasek from one another as well as from traditional cybersecurity firms and academia.

Last year, Tesla poached Aaron Sigel, Apple’s manager of security for its iOS operating system. Uber poached Chris Gates, formerly a white hat hacker at Facebook. Didi poached Mr. Miller from Uber, where he had gone to work after the Jeep hack. And security firms have seen dozens of engineers leave their ranks for autonomous-car projects.

Mr. Miller said he left Uber for Didi, in part, because his new Chinese employer has given him more freedom to discuss his work.

“Carmakers seem to be taking the threat of cyberattack more seriously, but I’d still like to see more transparency from them,” Mr. Miller wrote on Twitter on Saturday.

Like a number of big tech companies, Tesla and Fiat Chrysler started paying out rewards to hackers who turn over flaws the hackers discover in their systems. GM has done something similar, though critics say GM’s program is limited when compared with the ones offered by tech companies, and so far no rewards have been paid out.

One year after the Jeep hack by Mr. Miller and Mr. Valasek, they demonstrated all the other ways they could mess with a Jeep driver, including hijacking the vehicle’s cruise control, swerving the steering wheel 180 degrees or slamming on the parking brake in high-speed traffic — all from a computer in the back of the car. (Those exploits ended with their test Jeep in a ditch and calls to a local tow company.)

Granted, they had to be in the Jeep to make all that happen. But it was evidence of what is possible.
+ +The Jeep penetration was preceded by a [2011 hack by security researchers at the University of Washington][12] and the University of California, San Diego, who were the first to remotely hack a sedan and ultimately control its brakes via Bluetooth. The researchers warned car companies that the more connected cars become, the more likely they are to get hacked. + +Security researchers have also had their way with Tesla’s software-heavy Model S car. In 2015, Mr. Rogers, together with Kevin Mahaffey, the chief technology officer of the cybersecurity company Lookout, found a way to control various Tesla functions from their physically connected laptop. + +One year later, a team of Chinese researchers at Tencent took their research a step further, hacking a moving Tesla Model S and controlling its brakes from 12 miles away. Unlike Chrysler, Tesla was able to dispatch a remote patch to fix the security holes that made the hacks possible. + +In all the cases, the car hacks were the work of well meaning, white hat security researchers. But the lesson for all automakers was clear. + +The motivations to hack vehicles are limitless. When it learned of Mr. Rogers’s and Mr. Mahaffey’s investigation into Tesla’s Model S, a Chinese app-maker asked Mr. Rogers if he would be interested in sharing, or possibly selling, his discovery, he said. (The app maker was looking for a backdoor to secretly install its app on Tesla’s dashboard.) + +Criminals have not yet shown they have found back doors into connected vehicles, though for years, they have been actively developing, trading and deploying tools that can intercept car key communications. + +But as more driverless and semiautonomous cars hit the open roads, they will become a more worthy target. Security experts warn that driverless cars present a far more complex, intriguing and vulnerable “attack surface” for hackers. Each new “connected” car feature introduces greater complexity, and with complexity inevitably comes vulnerability. + +Twenty years ago, cars had, on average, one million lines of code. The General Motors 2010 [Chevrolet Volt][13] had about 10 million lines of code — more than an [F-35 fighter jet][14]. + +Today, an average car has more than 100 million lines of code. Automakers predict it won’t be long before they have 200 million. When you stop to consider that, on average, there are 15 to 50 defects per 1,000 lines of software code, the potentially exploitable weaknesses add up quickly. + +The only difference between computer code and driverless car code is that, “Unlike data center enterprise security — where the biggest threat is loss of data — in automotive security, it’s loss of life,” said David Barzilai, a co-founder of Karamba Security, an Israeli start-up that is working on addressing automotive security. + +To truly secure autonomous vehicles, security experts say, automakers will have to address the inevitable vulnerabilities that pop up in new sensors and car computers, address inherent vulnerabilities in the base car itself and, perhaps most challenging of all, bridge the cultural divide between automakers and software companies. + +“The genie is out of the bottle, and to solve this problem will require a major cultural shift,” said Mr. Mahaffey of the cybersecurity company Lookout. “And an automaker that truly values cybersecurity will treat security vulnerabilities the same they would an airbag recall. We have not seen that industrywide shift yet.” + +There will be winners and losers, Mr. 
Mahaffey added: “Automakers that transform themselves into software companies will win. Others will get left behind.” + +-------------------------------------------------------------------------------- + +via: https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html + +作者:[NICOLE PERLROTH ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.nytimes.com/by/nicole-perlroth +[1]:https://www.nytimes.com/2016/06/09/technology/software-as-weaponry-in-a-computer-connected-world.html +[2]:https://www.nytimes.com/2015/08/29/technology/uber-hires-two-engineers-who-showed-cars-could-be-hacked.html +[3]:https://www.nytimes.com/2015/08/11/opinion/zeynep-tufekci-why-smart-objects-may-be-a-dumb-idea.html +[4]:https://www.nytimes.com/by/nicole-perlroth +[5]:https://www.nytimes.com/column/bits +[6]:https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#story-continues-1 +[7]:http://www.nytimes.com/newsletters/sample/bits?pgtype=subscriptionspage&version=business&contentId=TU&eventName=sample&module=newsletter-sign-up +[8]:https://www.nytimes.com/privacy +[9]:https://www.nytimes.com/help/index.html +[10]:https://bits.blogs.nytimes.com/2015/07/21/security-researchers-find-a-way-to-hack-cars/ +[11]:http://www.nytimes.com/topic/company/tesla-motors-inc?inline=nyt-org +[12]:http://www.autosec.org/pubs/cars-usenixsec2011.pdf +[13]:http://autos.nytimes.com/2011/Chevrolet/Volt/238/4117/329463/researchOverview.aspx?inline=nyt-classifier +[14]:http://topics.nytimes.com/top/reference/timestopics/subjects/m/military_aircraft/f35_airplane/index.html?inline=nyt-classifier +[15]:https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#story-continues-3 diff --git a/sources/tech/20170622 A users guide to links in the Linux filesystem.md b/sources/tech/20170622 A users guide to links in the Linux filesystem.md deleted file mode 100644 index 3cb59aaacb..0000000000 --- a/sources/tech/20170622 A users guide to links in the Linux filesystem.md +++ /dev/null @@ -1,314 +0,0 @@ -Translating by yongshouzhang - - -A user's guide to links in the Linux filesystem -============================================================ - -### Learn how to use links, which make tasks easier by providing access to files from multiple locations in the Linux filesystem directory tree. - - -![A user's guide to links in the Linux filesystem](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/links.png?itok=AumNmse7 "A user's guide to links in the Linux filesystem") -Image by : [Paul Lewin][8]. Modified by Opensource.com. [CC BY-SA 2.0][9] - -In articles I have written about various aspects of Linux filesystems for Opensource.com, including [An introduction to Linux's EXT4 filesystem][10]; [Managing devices in Linux][11]; [An introduction to Linux filesystems][12]; and [A Linux user's guide to Logical Volume Management][13], I have briefly mentioned an interesting feature of Linux filesystems that can make some tasks easier by providing access to files from multiple locations in the filesystem directory tree. - -There are two types of Linux filesystem links: hard and soft. 
The difference between the two types of links is significant, but both types are used to solve similar problems. They both provide multiple directory entries (or references) to a single file, but they do it quite differently. Links are powerful and add flexibility to Linux filesystems because [everything is a file][14]. - -More Linux resources - -* [What is Linux?][1] - -* [What are Linux containers?][2] - -* [Download Now: Linux commands cheat sheet][3] - -* [Advanced Linux commands cheat sheet][4] - -* [Our latest Linux articles][5] - -I have found, for instance, that some programs required a particular version of a library. When a library upgrade replaced the old version, the program would crash with an error specifying the name of the old, now-missing library. Usually, the only change in the library name was the version number. Acting on a hunch, I simply added a link to the new library but named the link after the old library name. I tried the program again and it worked perfectly. And, okay, the program was a game, and everyone knows the lengths that gamers will go to in order to keep their games running. - -In fact, almost all applications are linked to libraries using a generic name with only a major version number in the link name, while the link points to the actual library file that also has a minor version number. In other instances, required files have been moved from one directory to another to comply with the Linux file specification, and there are links in the old directories for backwards compatibility with those programs that have not yet caught up with the new locations. If you do a long listing of the **/lib64** directory, you can find many examples of both. - -``` -lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.hwm -> ../../usr/share/cracklib/pw_dict.hwm -lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwd -> ../../usr/share/cracklib/pw_dict.pwd -lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwi -> ../../usr/share/cracklib/pw_dict.pwi -lrwxrwxrwx. 1 root root 27 Jun 9 2016 libaccountsservice.so.0 -> libaccountsservice.so.0.0.0 --rwxr-xr-x. 1 root root 288456 Jun 9 2016 libaccountsservice.so.0.0.0 -lrwxrwxrwx 1 root root 15 May 17 11:47 libacl.so.1 -> libacl.so.1.1.0 --rwxr-xr-x 1 root root 36472 May 17 11:47 libacl.so.1.1.0 -lrwxrwxrwx. 1 root root 15 Feb 4 2016 libaio.so.1 -> libaio.so.1.0.1 --rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.0 --rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.1 -lrwxrwxrwx. 1 root root 30 Jan 16 16:39 libakonadi-calendar.so.4 -> libakonadi-calendar.so.4.14.26 --rwxr-xr-x. 1 root root 816160 Jan 16 16:39 libakonadi-calendar.so.4.14.26 -lrwxrwxrwx. 1 root root 29 Jan 16 16:39 libakonadi-contact.so.4 -> libakonadi-contact.so.4.14.26 -``` - -A few of the links in the **/lib64** directory - -The long listing of the **/lib64** directory above shows that the first character in the filemode is the letter "l," which means that each is a soft or symbolic link. - -### Hard links - -In [An introduction to Linux's EXT4 filesystem][15], I discussed the fact that each file has one inode that contains information about that file, including the location of the data belonging to that file. [Figure 2][16] in that article shows a single directory entry that points to the inode. Every file must have at least one directory entry that points to the inode that describes the file. The directory entry is a hard link, thus every file has at least one hard link. - -In Figure 1 below, multiple directory entries point to a single inode. 
These are all hard links. I have abbreviated the locations of three of the directory entries using the tilde (**~**) convention for the home directory, so that **~** is equivalent to **/home/user** in this example. Note that the fourth directory entry is in a completely different directory, **/home/shared**, which might be a location for sharing files between users of the computer. - -![fig1directory_entries.png](https://opensource.com/sites/default/files/images/life/fig1directory_entries.png) -Figure 1 - -Hard links are limited to files contained within a single filesystem. "Filesystem" is used here in the sense of a partition or logical volume (LV) that is mounted on a specified mount point, in this case **/home**. This is because inode numbers are unique only within each filesystem, and a different filesystem, for example, **/var**or **/opt**, will have inodes with the same number as the inode for our file. - -Because all the hard links point to the single inode that contains the metadata about the file, all of these attributes are part of the file, such as ownerships, permissions, and the total number of hard links to the inode, and cannot be different for each hard link. It is one file with one set of attributes. The only attribute that can be different is the file name, which is not contained in the inode. Hard links to a single **file/inode** located in the same directory must have different names, due to the fact that there can be no duplicate file names within a single directory. - -The number of hard links for a file is displayed with the **ls -l** command. If you want to display the actual inode numbers, the command **ls -li** does that. - -### Symbolic (soft) links - -The difference between a hard link and a soft link, also known as a symbolic link (or symlink), is that, while hard links point directly to the inode belonging to the file, soft links point to a directory entry, i.e., one of the hard links. Because soft links point to a hard link for the file and not the inode, they are not dependent upon the inode number and can work across filesystems, spanning partitions and LVs. - -The downside to this is: If the hard link to which the symlink points is deleted or renamed, the symlink is broken. The symlink is still there, but it points to a hard link that no longer exists. Fortunately, the **ls** command highlights broken links with flashing white text on a red background in a long listing. - -### Lab project: experimenting with links - -I think the easiest way to understand the use of and differences between hard and soft links is with a lab project that you can do. This project should be done in an empty directory as a  _non-root user_ . I created the **~/temp** directory for this project, and you should, too. It creates a safe place to do the project and provides a new, empty directory to work in so that only files associated with this project will be located there. - -### **Initial setup** - -First, create the temporary directory in which you will perform the tasks needed for this project. Ensure that the present working directory (PWD) is your home directory, then enter the following command. - -``` -mkdir temp -``` - -Change into **~/temp** to make it the PWD with this command. - -``` -cd temp -``` - -To get started, we need to create a file we can link to. The following command does that and provides some content as well. - -``` -du -h > main.file.txt -``` - -Use the **ls -l** long list to verify that the file was created correctly. 
It should look similar to my results. Note that the file size is only 7 bytes, but yours may vary by a byte or two. - -``` -[dboth@david temp]$ ls -l -total 4 --rw-rw-r-- 1 dboth dboth 7 Jun 13 07:34 main.file.txt -``` - -Notice the number "1" following the file mode in the listing. That number represents the number of hard links that exist for the file. For now, it should be 1 because we have not created any additional links to our test file. - -### **Experimenting with hard links** - -Hard links create a new directory entry pointing to the same inode, so when hard links are added to a file, you will see the number of links increase. Ensure that the PWD is still **~/temp**. Create a hard link to the file **main.file.txt**, then do another long list of the directory. - -``` -[dboth@david temp]$ ln main.file.txt link1.file.txt -[dboth@david temp]$ ls -l -total 8 --rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 link1.file.txt --rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 main.file.txt -``` - -Notice that both files have two links and are exactly the same size. The date stamp is also the same. This is really one file with one inode and two links, i.e., directory entries to it. Create a second hard link to this file and list the directory contents. You can create the link to either of the existing ones: **link1.file.txt** or **main.file.txt**. - -``` -[dboth@david temp]$ ln link1.file.txt link2.file.txt ; ls -l -total 16 --rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link1.file.txt --rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link2.file.txt --rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 main.file.txt -``` - -Notice that each new hard link in this directory must have a different name because two files—really directory entries—cannot have the same name within the same directory. Try to create another link with a target name the same as one of the existing ones. - -``` -[dboth@david temp]$ ln main.file.txt link2.file.txt -ln: failed to create hard link 'link2.file.txt': File exists -``` - -Clearly that does not work, because **link2.file.txt** already exists. So far, we have created only hard links in the same directory. So, create a link in your home directory, the parent of the temp directory in which we have been working. - -``` -[dboth@david temp]$ ln main.file.txt ../main.file.txt ; ls -l ../main* --rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt -``` - -The **ls** command in the above listing shows that the **main.file.txt** file does exist in the home directory with the same name as the file in the temp directory. Of course, these are not different files; they are the same file with multiple links—directory entries—to the same inode. To help illustrate the next point, add a file that is not a link. - -``` -[dboth@david temp]$ touch unlinked.file ; ls -l -total 12 --rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt --rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt --rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt --rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file -``` - -Look at the inode number of the hard links and that of the new file using the **-i** option to the **ls** command. - -``` -[dboth@david temp]$ ls -li -total 12 -657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt -657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt -657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt -657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file -``` - -Notice the number **657024** to the left of the file mode in the example above.
That is the inode number, and all three file links point to the same inode. You can use the **-i** option to view the inode number for the link we created in the home directory as well, and that will also show the same value. The inode number of the file that has only one link is different from the others. Note that the inode numbers will be different on your system. - -Let's change the size of one of the hard-linked files. - -``` -[dboth@david temp]$ df -h > link2.file.txt ; ls -li -total 12 -657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt -657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt -657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt -657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file -``` - -The file size of all the hard-linked files is now larger than before. That is because there is really only one file that is linked to by multiple directory entries. - -I know this next experiment will work on my computer because my **/tmp** directory is on a separate LV. If you have a separate LV or a filesystem on a different partition (if you're not using LVs), determine whether or not you have access to that LV or partition. If you don't, you can try to insert a USB memory stick and mount it. If one of those options works for you, you can do this experiment. - -Try to create a link to one of the files in your **~/temp** directory in **/tmp** (or wherever your different filesystem directory is located). - -``` -[dboth@david temp]$ ln link2.file.txt /tmp/link3.file.txt -ln: failed to create hard link '/tmp/link3.file.txt' => 'link2.file.txt': Invalid cross-device link -``` - -Why does this error occur? The reason is that each separately mountable filesystem has its own set of inode numbers. Referring to a file by inode number across the entire Linux directory tree would be ambiguous, because the same inode number can exist in each mounted filesystem. - -There may be a time when you will want to locate all the hard links that belong to a single inode. You can find the inode number using the **ls -li** command. Then you can use the **find** command to locate all links with that inode number. - -``` -[dboth@david temp]$ find . -inum 657024 -./main.file.txt -./link1.file.txt -./link2.file.txt -``` - -Note that the **find** command did not find all four of the hard links to this inode because we started at the current directory of **~/temp**. The **find** command only finds files in the PWD and its subdirectories. To find all the links, we can use the following command, which specifies your home directory as the starting place for the search. - -``` -[dboth@david temp]$ find ~ -samefile main.file.txt -/home/dboth/temp/main.file.txt -/home/dboth/temp/link1.file.txt -/home/dboth/temp/link2.file.txt -/home/dboth/main.file.txt -``` - -As a non-root user, you may see error messages for directories you do not have permission to search. This command also uses the **-samefile** option instead of specifying the inode number. This works the same as using the inode number and can be easier if you know the name of one of the hard links. - -### **Experimenting with soft links** - -As you have just seen, creating hard links is not possible across filesystem boundaries; that is, from a filesystem on one LV or partition to a filesystem on another. Soft links provide a way around this limitation of hard links. Although they can accomplish the same end, they are very different, and knowing these differences is important.
- -Let's start by creating a symlink in our **~/temp** directory to start our exploration. - -``` -[dboth@david temp]$ ln -s link2.file.txt link3.file.txt ; ls -li -total 12 -657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt -657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt -658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt -> link2.file.txt -657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt -657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file -``` - -The hard links, those that have the inode number **657024**, are unchanged, and the number of hard links shown for each has not changed. The newly created symlink has a different inode, number **658270**. The soft link named **link3.file.txt** points to **link2.file.txt**. Use the **cat** command to display the contents of **link3.file.txt**. The file mode information for the symlink starts with the letter "**l**" which indicates that this file is actually a symbolic link. - -The size of the symlink **link3.file.txt** is only 14 bytes in the example above. That is the length of its target name, **link2.file.txt** (14 characters), which is the actual content of the directory entry. The directory entry **link3.file.txt** does not point to an inode; it points to another directory entry, which makes it useful for creating links that span file system boundaries. So, let's create that link we tried before from the **/tmp** directory. - -``` -[dboth@david temp]$ ln -s /home/dboth/temp/link2.file.txt /tmp/link3.file.txt ; ls -l /tmp/link* -lrwxrwxrwx 1 dboth dboth 31 Jun 14 21:53 /tmp/link3.file.txt -> /home/dboth/temp/link2.file.txt -``` - -### **Deleting links** - -There are some other things that you should consider when you need to delete links or the files to which they point. - -First, let's delete the link **main.file.txt**. Remember that every directory entry that points to an inode is simply a hard link. - -``` -[dboth@david temp]$ rm main.file.txt ; ls -li -total 8 -657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link1.file.txt -657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link2.file.txt -658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt -> link2.file.txt -657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file -``` - -The link **main.file.txt** was the first link created when the file was created. Deleting it now still leaves the original file and its data on the hard drive along with all the remaining hard links. To delete the file and its data, you would have to delete all the remaining hard links. - -Now delete the **link2.file.txt** hard link. - -``` -[dboth@david temp]$ rm link2.file.txt ; ls -li -total 4 -657024 -rw-rw-r-- 2 dboth dboth 1157 Jun 14 14:14 link1.file.txt -658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt -> link2.file.txt -657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file -``` - -Notice what happens to the soft link. Deleting the hard link to which the soft link points leaves a broken link. On my system, the broken link is highlighted in colors and the target hard link is flashing. The hard link count for **link1.file.txt** also drops to 2, because only it and the copy in the home directory remain. If the broken link needs to be fixed, you can create another hard link in the same directory with the same name as the old one, so long as not all the hard links have been deleted. You could also recreate the link itself, with the link maintaining the same name but pointing to one of the remaining hard links.
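For example, here is a minimal sketch of that second repair, using the **-f** option of **ln** to replace the existing symlink in place and repoint it at one of the remaining hard links (the listing shown is illustrative, so your dates will differ):

```
[dboth@david temp]$ ln -sf link1.file.txt link3.file.txt ; ls -l link3.file.txt
lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:30 link3.file.txt -> link1.file.txt
```

The symlink is no longer broken, because its new target, **link1.file.txt**, is a hard link that still exists.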
Of course, if the soft link is no longer needed, it can be deleted with the **rm** command. - -The **unlink** command can also be used to delete files and links. It is very simple and has no options, unlike the **rm** command. It does, however, more accurately reflect the underlying process of deletion, in that it removes the link—the directory entry—to the file being deleted. - -### Final thoughts - -I worked with both types of links for a long time before I began to understand their capabilities and idiosyncrasies. It took writing a lab project for a Linux class I taught to fully appreciate how links work. This article is a simplification of what I taught in that class, and I hope it speeds your learning curve. - --------------------------------------------------------------------------------- - -作者简介: - -David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. - ---------------------------------- - -via: https://opensource.com/article/17/6/linking-linux-filesystem - -作者:[David Both ][a] -译者:[runningwater](https://github.com/runningwater) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/dboth -[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu -[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu -[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ -[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?src=linux_resource_menu&intcmp=7016000000127cYAAQ -[5]:https://opensource.com/tags/linux?src=linux_resource_menu -[6]:https://opensource.com/article/17/6/linking-linux-filesystem?rate=YebHxA-zgNopDQKKOyX3_r25hGvnZms_33sYBUq-SMM -[7]:https://opensource.com/user/14106/feed -[8]:https://www.flickr.com/photos/digypho/7905320090 -[9]:https://creativecommons.org/licenses/by/2.0/ -[10]:https://opensource.com/article/17/5/introduction-ext4-filesystem -[11]:https://opensource.com/article/16/11/managing-devices-linux -[12]:https://opensource.com/life/16/10/introduction-linux-filesystems -[13]:https://opensource.com/business/16/9/linux-users-guide-lvm -[14]:https://opensource.com/life/15/9/everything-is-a-file -[15]:https://opensource.com/article/17/5/introduction-ext4-filesystem -[16]:https://opensource.com/article/17/5/introduction-ext4-filesystem#fig2 -[17]:https://opensource.com/users/dboth -[18]:https://opensource.com/article/17/6/linking-linux-filesystem#comments diff --git a/sources/tech/20170921 How to answer questions in a helpful way.md b/sources/tech/20170921 How to answer questions in a helpful way.md new file mode 100644 index 0000000000..8a3601ed06 --- /dev/null +++ b/sources/tech/20170921 How to answer questions in a helpful way.md @@ -0,0 +1,172 @@ +How to answer questions in a helpful way +============================================================ + +Your coworker asks you a slightly unclear question. How do you answer?
I think asking questions is a skill (see [How to ask good questions][1]) and that answering questions in a helpful way is also a skill! Both of them are super useful. + +To start out with – sometimes the people asking you questions don’t respect your time, and that sucks. I’m assuming here throughout that that’s not what’s happening – we’re going to assume that the person asking you questions is a reasonable person who is trying their best to figure something out and that you want to help them out. Everyone I work with is like that and so that’s the world I live in :) + +Here are a few strategies for answering questions in a helpful way! + +### If they’re not asking clearly, help them clarify + +Often beginners don’t ask clear questions, or ask questions that don’t have the necessary information needed to answer them. Here are some strategies you can use to help them clarify. + +* **Rephrase a more specific question** back at them (“Are you asking X?”) + +* **Ask them for more specific information** they didn’t provide (“are you using IPv6?”) + +* **Ask what prompted their question**. For example, sometimes people come into my team’s channel with questions about how our service discovery works. Usually this is because they’re trying to set up/reconfigure a service. In that case it’s helpful to ask “which service are you working with? Can I see the pull request you’re working on?” + +A lot of these strategies come from the [how to ask good questions][2] post. (though I would never say to someone “oh you need to read this Document On How To Ask Good Questions before asking me a question”) + +### Figure out what they know already + +Before answering a question, it’s very useful to know what the person knows already! + +Harold Treen gave me a great example of this: + +> Someone asked me the other day to explain “Redux Sagas”. Rather than dive in and say “They are like worker threads that listen for actions and let you update the store!”  +> I started figuring out how much they knew about Redux, actions, the store and all these other fundamental concepts. From there it was easier to explain the concept that ties those other concepts together. + +Figuring out what your question-asker knows already is important because they may be confused about fundamental concepts (“What’s Redux?”), or they may be an expert who’s getting at a subtle corner case. An answer building on concepts they don’t know is confusing, and an answer that recaps things they know is tedious. + +One useful trick for asking what people know – instead of “Do you know X?”, maybe try “How familiar are you with X?”. + +### Point them to the documentation + +“RTFM” is the classic unhelpful answer to a question, but pointing someone to a specific piece of documentation can actually be really helpful! When I’m asking a question, I’d honestly rather be pointed to documentation that actually answers my question, because it’s likely to answer other questions I have too. + +I think it’s important here to make sure you’re linking to documentation that actually answers the question, or at least check in afterwards to make sure it helped. Otherwise you can end up with this (pretty common) situation: + +* Ali: How do I do X? + +* Jada: _(a link to some documentation)_ + +* Ali: That doesn’t actually explain how to X, it only explains Y! + +If the documentation I’m linking to is very long, I like to point out the specific part of the documentation I’m talking about.
The [bash man page][3] is 44,000 words (really!), so just saying “it’s in the bash man page” is not that helpful :) + +### Point them to a useful search + +Often I find things at work by searching for some Specific Keyword that I know will find me the answer. That keyword might not be obvious to a beginner! So saying “this is the search I’d use to find the answer to that question” can be useful. Again, check in afterwards to make sure the search actually gets them the answer they need :) + +### Write new documentation + +People often come and ask my team the same questions over and over again. This is obviously not the fault of the people (how should  _they_  know that 10 people have asked this already, or what the answer is?). So we’re trying to, instead of answering the questions directly, + +1. Immediately write documentation + +2. Point the person to the new documentation we just wrote + +3. Celebrate! + +Writing documentation sometimes takes more time than just answering the question, but it’s often worth it! Writing documentation is especially worth it if: + +a. It’s a question which is being asked again and again + +b. The answer doesn’t change too much over time (if the answer changes every week or month, the documentation will just get out of date and be frustrating) + +### Explain what you did + +As a beginner to a subject, it’s really frustrating to have an exchange like this: + +* New person: “hey how do you do X?” + +* More Experienced Person: “I did it, it is done.” + +* New person: ….. but what did you DO?! + +If the person asking you is trying to learn how things work, it’s helpful to: + +* Walk them through how to accomplish a task instead of doing it yourself + +* Tell them the steps for how you got the answer you gave them! + +This might take longer than doing it yourself, but it’s a learning opportunity for the person who asked, so that they’ll be better equipped to solve such problems in the future. + +Then you can have WAY better exchanges, like this: + +* New person: “I’m seeing errors on the site, what’s happening?” + +* More Experienced Person: (2 minutes later) “oh that’s because there’s a database failover happening” + +* New person: how did you know that??!?!? + +* More Experienced Person: “Here’s what I did!”: + 1. Often these errors are due to Service Y being down. I looked at $PLACE and it said Service Y was up. So that wasn’t it. + + 2. Then I looked at dashboard X, and this part of that dashboard showed there was a database failover happening. + + 3. Then I looked in the logs for the service and it showed errors connecting to the database, here’s what those errors look like. + +If you’re explaining how you debugged a problem, it’s useful both to explain how you found out what the problem was, and how you found out what the problem wasn’t. While it might feel good to look like you knew the answer right off the top of your head, it feels even better to help someone improve at learning and diagnosis, and understand the resources available. + +### Solve the underlying problem + +This one is a bit tricky. Sometimes people think they’ve got the right path to a solution, and they just need one more piece of information to implement that solution. But they might not be quite on the right path! For example: + +* George: I’m doing X, and I got this error, how do I fix it + +* Jasminda: Are you actually trying to do Y? If so, you shouldn’t do X, you should do Z instead + +* George: Oh, you’re right!!! Thank you! I will do Z instead.
+ +Jasminda didn’t answer George’s question at all! Instead she guessed that George didn’t actually want to be doing X, and she was right. That is helpful! + +It’s possible to come off as condescending here though, like + +* George: I’m doing X, and I got this error, how do I fix it? + +* Jasminda: Don’t do that, you’re trying to do Y and you should do Z to accomplish that instead. + +* George: Well, I am not trying to do Y, I actually want to do X because REASONS. How do I do X? + +So don’t be condescending, and keep in mind that some questioners might be attached to the steps they’ve taken so far! It might be appropriate to answer both the question they asked and the one they should have asked: “Well, if you want to do X then you might try this, but if you’re trying to solve problem Y with that, you might have better luck doing this other thing, and here’s why that’ll work better”. + +### Ask “Did that answer your question?” + +I always like to check in after I  _think_  I’ve answered the question and ask “did that answer your question? Do you have more questions?”. + +It’s good to pause and wait after asking this because often people need a minute or two to know whether or not they’ve figured out the answer. I especially find this extra “did this answer your questions?” step helpful after writing documentation! Often when writing documentation about something I know well I’ll leave out something very important without realizing it. + +### Offer to pair program/chat in real life + +I work remote, so many of my conversations at work are text-based. I think of that as the default mode of communication. + +Today, we live in a world of easy video conferencing & screensharing! At work I can at any time click a button and immediately be in a video call/screensharing session with someone. Some problems are easier to talk about using your voices! + +For example, recently someone was asking about capacity planning/autoscaling for their service. I could tell there were a few things we needed to clear up but I wasn’t exactly sure what they were yet. We got on a quick video call and 5 minutes later we’d answered all their questions. + +I think especially if someone is really stuck on how to get started on a task, pair programming for a few minutes can really help, and it can be a lot more efficient than email/instant messaging. + +### Don’t act surprised + +This one’s a rule from the Recurse Center: [no feigning surprise][4]. Here’s a relatively common scenario: + +* Human 1: “what’s the Linux kernel?” + +* Human 2: “you don’t know what the LINUX KERNEL is?!!!!?!!!???” + +Human 2’s reaction (regardless of whether they’re  _actually_  surprised or not) is not very helpful. It mostly just serves to make Human 1 feel bad that they don’t know what the Linux kernel is. + +I’ve worked on pretending not to be surprised even when I actually am a bit surprised that the person doesn’t know the thing, and it’s awesome. + +### Answering questions well is awesome + +Obviously not all these strategies are appropriate all the time, but hopefully you will find some of them helpful! I find taking the time to answer questions and teach people can be really rewarding. + +Special thanks to Josh Triplett for suggesting this post and making many helpful additions, and to Harold Treen, Vaibhav Sagar, Peter Bhat Harkins, Wesley Aptekar-Cassels, and Paul Gowder for reading/commenting.
+ +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/answer-questions-well/ + +作者:[ Julia Evans][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/about +[1]:https://jvns.ca/blog/good-questions/ +[2]:https://jvns.ca/blog/good-questions/ +[3]:https://linux.die.net/man/1/bash +[4]:https://jvns.ca/blog/2017/04/27/no-feigning-surprise/ diff --git a/sources/tech/20171005 How to manage Linux containers with Ansible Container.md b/sources/tech/20171005 How to manage Linux containers with Ansible Container.md new file mode 100644 index 0000000000..897b793a86 --- /dev/null +++ b/sources/tech/20171005 How to manage Linux containers with Ansible Container.md @@ -0,0 +1,114 @@ +How to manage Linux containers with Ansible Container +============================================================ + +### Ansible Container addresses Dockerfile shortcomings and offers complete management for containerized projects. + +![Ansible Container: A new way to manage containers](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/container-ship.png?itok=pqZYgQ7K "Ansible Container: A new way to manage containers") +Image by : opensource.com + +I love containers and use the technology every day. Even so, containers aren't perfect. Over the past couple of months, however, a set of projects has emerged that addresses some of the problems I've experienced. + +I started using containers with [Docker][11], since this project made the technology so popular. Aside from using the container engine, I learned how to use **[docker-compose][6]** and started managing my projects with it. My productivity skyrocketed! One command to run my project, no matter how complex it was. I was so happy. + +After some time, I started noticing issues. The most apparent were related to the process of creating container images. The Docker tool uses a custom file format as a recipe to produce container images—Dockerfiles. This format is easy to learn, and after a short time you are ready to produce container images on your own. The problems arise once you want to master best practices or have complex scenarios in mind. + +Let's take a break and travel to a different land: the world of [Ansible][22]. You know it? It's awesome, right? You don't? Well, it's time to learn something new. Ansible is a project that allows you to manage your infrastructure by writing tasks and executing them inside environments of your choice. No need to install and set up any services; everything can easily run from your laptop. Many people already embrace Ansible. + +Imagine this scenario: You invested in Ansible, you wrote plenty of Ansible roles and playbooks that you use to manage your infrastructure, and you are thinking about investing in containers. What should you do? Start writing container image definitions via shell scripts and Dockerfiles? That doesn't sound right. + +Some people from the Ansible development team asked this question and realized that those same Ansible roles and playbooks that people wrote and use daily can also be used to produce container images. But not just that—they can be used to manage the complete lifecycle of containerized projects.
From these ideas, the [Ansible Container][12] project was born. It utilizes existing Ansible roles that can be turned into container images and can even be used for the complete application lifecycle, from build to deploy in production. + +Let's talk about the problems I mentioned regarding best practices in the context of Dockerfiles. A word of warning: This is going to be very specific and technical. Here are the top three issues I have: + +### 1\. Shell scripts embedded in Dockerfiles. + +When writing Dockerfiles, you can specify a script that will be interpreted via **/bin/sh -c**. It can be something like: + +``` +RUN dnf install -y nginx +``` + +where RUN is a Dockerfile instruction and the rest are its arguments (which are passed to shell). But imagine a more complex scenario: + +``` +RUN set -eux; \ +    \ +# this "case" statement is generated via "update.sh" +    %%ARCH-CASE%%; \ +    \ +    url="https://golang.org/dl/go${GOLANG_VERSION}.${goRelArch}.tar.gz"; \ +    wget -O go.tgz "$url"; \ +    echo "${goRelSha256} *go.tgz" | sha256sum -c -; \ +``` + +This one is taken from [the official golang image][13]. It doesn't look pretty, right? + +### 2\. You can't parse Dockerfiles easily. + +Dockerfiles are a new format without a formal specification. This is tricky if you need to process Dockerfiles in your infrastructure (e.g., automate the build process a bit). The only specification is [the code][14] that is part of **dockerd**. The problem is that you can't use it as a library. The easiest solution is to write a parser on your own and hope for the best. Wouldn't it be better to use some well-known markup language, such as YAML or JSON? + +### 3\. It's hard to control. + +If you are familiar with the internals of container images, you may know that every image is composed of layers. Once the container is created, the layers are stacked onto each other (like pancakes) using union filesystem technology. The problem is that you cannot explicitly control this layering—you can't say, "here starts a new layer." You are forced to change your Dockerfile in a way that may hurt readability. The bigger problem is that a set of best practices has to be followed to achieve optimal results—newcomers have a really hard time here. + +### Comparing Ansible language and Dockerfiles + +The biggest shortcoming of Dockerfiles in comparison to Ansible is that Ansible, as a language, is much more powerful. For example, Dockerfiles have no direct concept of variables, whereas Ansible has a complete templating system (variables are just one of its features). Ansible contains a large number of modules that can be easily utilized, such as [**wait_for**][15], which can be used for service readiness checks—e.g., wait until a service is ready before proceeding. With Dockerfiles, everything is a shell script. So if you need to figure out service readiness, it has to be done with shell (or installed separately); a sketch of what that looks like in practice follows below. The other problem with shell scripts is that, with growing complexity, maintenance becomes a burden. Plenty of people have already figured this out and turned those shell scripts into Ansible. + +If you are interested in this topic and would like to know more, please come to [Open Source Summit][16] in Prague to see [my presentation][17] on Monday, Oct. 23, at 4:20 p.m. in Palmovka room.
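To make the service-readiness comparison above concrete, here is a sketch of the kind of hand-rolled wait loop that ends up inline in a Dockerfile or its entrypoint script when shell is all you have. The host and port are made up for illustration; this is the sort of logic that a single declarative **wait_for** task replaces:

```
#!/bin/sh
# Poll a (hypothetical) database endpoint until it accepts TCP connections.
until nc -z db.example.com 5432; do
    echo "waiting for db.example.com:5432 ..."
    sleep 2
done
echo "database is ready"
```

Every such loop also has to reinvent its own timeout and error handling, which the Ansible module already provides as options.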
+ + _Learn more in Tomas Tomecek's talk, [From Dockerfiles to Ansible Container][7], at [Open Source Summit EU][8], which will be held October 23-26 in Prague._ + + + +### About the author + + [![human](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/ja.jpeg?itok=4ATUEAbd)][18] Tomas Tomecek - Engineer. Hacker. Speaker. Tinker. Red Hatter. Likes containers, linux, open source, python 3, rust, zsh, tmux.[More about me][9] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/10/dockerfiles-ansible-container + +作者:[Tomas Tomecek ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/tomastomecek +[1]:https://www.ansible.com/how-ansible-works?intcmp=701f2000000h4RcAAI +[2]:https://www.ansible.com/ebooks?intcmp=701f2000000h4RcAAI +[3]:https://www.ansible.com/quick-start-video?intcmp=701f2000000h4RcAAI +[4]:https://docs.ansible.com/ansible/latest/intro_installation.html?intcmp=701f2000000h4RcAAI +[5]:https://opensource.com/article/17/10/dockerfiles-ansible-container?imm_mid=0f9013&cmp=em-webops-na-na-newsltr_20171201&rate=Wiw_0D6PK_CAjqatYu_YQH0t1sNHEF6q09_9u3sYkCY +[6]:https://github.com/docker/compose +[7]:http://sched.co/BxIW +[8]:http://events.linuxfoundation.org/events/open-source-summit-europe +[9]:https://opensource.com/users/tomastomecek +[10]:https://opensource.com/user/175651/feed +[11]:https://opensource.com/tags/docker +[12]:https://www.ansible.com/ansible-container +[13]:https://github.com/docker-library/golang/blob/master/Dockerfile-debian.template#L14 +[14]:https://github.com/moby/moby/tree/master/builder/dockerfile +[15]:http://docs.ansible.com/wait_for_module.html +[16]:http://events.linuxfoundation.org/events/open-source-summit-europe +[17]:http://events.linuxfoundation.org/events/open-source-summit-europe/program/schedule +[18]:https://opensource.com/users/tomastomecek +[19]:https://opensource.com/users/tomastomecek +[20]:https://opensource.com/users/tomastomecek +[21]:https://opensource.com/article/17/10/dockerfiles-ansible-container?imm_mid=0f9013&cmp=em-webops-na-na-newsltr_20171201#comments +[22]:https://opensource.com/tags/ansible +[23]:https://opensource.com/tags/containers +[24]:https://opensource.com/tags/ansible +[25]:https://opensource.com/tags/docker +[26]:https://opensource.com/tags/open-source-summit diff --git a/sources/tech/20171005 Reasons Kubernetes is cool.md b/sources/tech/20171005 Reasons Kubernetes is cool.md new file mode 100644 index 0000000000..a9d10b9cdb --- /dev/null +++ b/sources/tech/20171005 Reasons Kubernetes is cool.md @@ -0,0 +1,148 @@ +Reasons Kubernetes is cool +============================================================ + +When I first learned about Kubernetes (a year and a half ago?) I really didn’t understand why I should care about it. + +I’ve been working full time with Kubernetes for 3 months or so and now have some thoughts about why I think it’s useful. (I’m still very far from being a Kubernetes expert!) Hopefully this will help a little in your journey to understand what even is going on with Kubernetes! + +I will try to explain some reasons I think Kubernetes is interesting without using the words “cloud native”, “orchestration”, “container”, or any Kubernetes-specific terminology :).
I’m going to explain this mostly from the perspective of a kubernetes operator / infrastructure engineer, since my job right now is to set up Kubernetes and make it work well. + +I’m not going to try to address the question of “should you use kubernetes for your production systems?” at all, that is a very complicated question. (not least because “in production” has totally different requirements depending on what you’re doing) + +### Kubernetes lets you run code in production without setting up new servers + +The first pitch I got for Kubernetes was the following conversation with my partner Kamal: + +Here’s an approximate transcript: + +* Kamal: With Kubernetes you can set up a new service with a single command + +* Julia: I don’t understand how that’s possible. + +* Kamal: Like, you just write 1 configuration file, apply it, and then you have a HTTP service running in production + +* Julia: But today I need to create new AWS instances, write a puppet manifest, set up service discovery, configure my load balancers, configure our deployment software, and make sure DNS is working, it takes at least 4 hours if nothing goes wrong. + +* Kamal: Yeah. With Kubernetes you don’t have to do any of that, you can set up a new HTTP service in 5 minutes and it’ll just automatically run. As long as you have spare capacity in your cluster it just works! + +* Julia: There must be a trap + +There kind of is a trap: setting up a production Kubernetes cluster is (in my experience) definitely not easy. (see [Kubernetes The Hard Way][3] for what’s involved to get started). But we’re not going to go into that right now! + +So the first cool thing about Kubernetes is that it has the potential to make life way easier for developers who want to deploy new software into production. That’s cool, and it’s actually true, once you have a working Kubernetes cluster you really can set up a production HTTP service (“run 5 of this application, set up a load balancer, give it this DNS name, done”) with just one configuration file. It’s really fun to see. + +### Kubernetes gives you easy visibility & control of what code you have running in production + +IMO you can’t understand Kubernetes without understanding etcd. So let’s talk about etcd! + +Imagine that I asked you today “hey, tell me every application you have running in production, what host it’s running on, whether it’s healthy or not, and whether or not it has a DNS name attached to it”. I don’t know about you but I would need to go look in a bunch of different places to answer this question and it would take me quite a while to figure out. I definitely can’t query just one API. + +In Kubernetes, all the state in your cluster – applications running (“pods”), nodes, DNS names, cron jobs, and more – is stored in a single database (etcd). Every Kubernetes component is stateless, and basically works by + +* Reading state from etcd (eg “the list of pods assigned to node 1”) + +* Making changes (eg “actually start running pod A on node 1”) + +* Updating the state in etcd (eg “set the state of pod A to ‘running’”) + +This means that if you want to answer a question like “hey, how many nginx pods do I have running right now in that availability zone?” you can answer it by querying a single unified API (the Kubernetes API!). And you have exactly the same access to that API that every other Kubernetes component does. + +This also means that you have easy control of everything running in Kubernetes.
If you want to, say, + +* Implement a complicated custom rollout strategy for deployments (deploy 1 thing, wait 2 minutes, deploy 5 more, wait 3.7 minutes, etc) + +* Automatically [start a new webserver][1] every time a branch is pushed to github + +* Monitor all your running applications to make sure all of them have a reasonable cgroups memory limit + +all you need to do is write a program that talks to the Kubernetes API (a “controller”). + +Another very exciting thing about the Kubernetes API is that you’re not limited to just functionality that Kubernetes provides! If you decide that you have your own opinions about how your software should be deployed / created / monitored, then you can write code that uses the Kubernetes API to do it! It lets you do everything you need. + +### If every Kubernetes component dies, your code will still keep running + +One thing I was originally promised (by various blog posts :)) about Kubernetes was “hey, if the Kubernetes apiserver and everything else dies, it’s ok, your code will just keep running”. I thought this sounded cool in theory but I wasn’t sure if it was actually true. + +So far it seems to be actually true! + +I’ve been through some etcd outages now, and what happens is + +1. All the code that was running keeps running + +2. Nothing  _new_  happens (you can’t deploy new code or make changes, cron jobs will stop working) + +3. When everything comes back, the cluster will catch up on whatever it missed + +This does mean that if etcd goes down and one of your applications crashes or something, it can’t come back up until etcd returns. + +### Kubernetes’ design is pretty resilient to bugs + +Like any piece of software, Kubernetes has bugs. For example right now in our cluster the controller manager has a memory leak, and the scheduler crashes pretty regularly. Bugs obviously aren’t good but so far I’ve found that Kubernetes’ design helps mitigate a lot of the bugs in its core components really well. + +If you restart any component, what happens is: + +* It reads all its relevant state from etcd + +* It starts doing the necessary things it’s supposed to be doing based on that state (scheduling pods, garbage collecting completed pods, scheduling cronjobs, deploying daemonsets, whatever) + +Because the components don’t keep any state in memory, you can just restart them at any time and that can help mitigate a variety of bugs. + +For example! Let’s say you have a memory leak in your controller manager. Because the controller manager is stateless, you can just periodically restart it every hour or something and feel confident that you won’t cause any consistency issues. Or we ran into a bug in the scheduler where it would sometimes just forget about pods and never schedule them. You can sort of mitigate this just by restarting the scheduler every 10 minutes. (we didn’t do that, we fixed the bug instead, but you  _could_  :) ) + +So I feel like I can trust Kubernetes’ design to help make sure the state in the cluster is consistent even when there are bugs in its core components. And in general I think the software is generally improving over time. + +### The only stateful thing you have to operate is etcd + +Not to harp on this “state” thing too much but – I think it’s cool that in Kubernetes the only thing you have to come up with backup/restore plans for is etcd (unless you use persistent volumes for your pods). I think it makes kubernetes operations a lot easier to think about.
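As a small illustration of the "single unified API" point from earlier, here is a rough sketch of answering the nginx question with one query. The `app=nginx` label is an assumption about how the pods happen to be labeled, and counting by grepping the STATUS column is deliberately crude:

```
# Ask the one API for every pod carrying the app=nginx label, including node placement,
# then count the entries whose STATUS column reports Running.
kubectl get pods --all-namespaces -l app=nginx -o wide | grep -c Running
```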
+ +### Implementing new distributed systems on top of Kubernetes is relatively easy + +Suppose you want to implement a distributed cron job scheduling system! Doing that from scratch is a ton of work. But implementing a distributed cron job scheduling system inside Kubernetes is much easier! (still not trivial, it’s still a distributed system) + +The first time I read the code for the Kubernetes cronjob controller I was really delighted by how simple it was. Here, go read it! The main logic is like 400 lines of Go. Go ahead, read it! => [cronjob_controller.go][4] <= + +Basically what the cronjob controller does is: + +* Every 10 seconds: + * Lists all the cronjobs that exist + + * Checks if any of them need to run right now + + * If so, creates a new Job object to be scheduled & actually run by other Kubernetes controllers + + * Cleans up finished jobs + + * Repeats + +The Kubernetes model is pretty constrained (it has this pattern of resources are defined in etcd, controllers read those resources and update etcd), and I think having this relatively opinionated/constrained model makes it easier to develop your own distributed systems inside the Kubernetes framework. + +Kamal introduced me to this idea of “Kubernetes is a good platform for writing your own distributed systems” instead of just “Kubernetes is a distributed system you can use” and I think it’s really interesting. He has a prototype of a [system to run an HTTP service for every branch you push to github][5]. It took him a weekend and is like 800 lines of Go, which I thought was impressive! + +### Kubernetes lets you do some amazing things (but isn’t easy) + +I started out by saying “kubernetes lets you do these magical things, you can just spin up so much infrastructure with a single configuration file, it’s amazing”. And that’s true! + +What I mean by “Kubernetes isn’t easy” is that Kubernetes has a lot of moving parts, and learning how to successfully operate a highly available Kubernetes cluster is a lot of work. Like I find that with a lot of the abstractions it gives me, I need to understand what is underneath those abstractions in order to debug issues and configure things properly. I love learning new things so this doesn’t make me angry or anything, I just think it’s important to know :) + +One specific example of “I can’t just rely on the abstractions” that I’ve struggled with is that I needed to learn a LOT [about how networking works on Linux][6] to feel confident with setting up Kubernetes networking, way more than I’d ever had to learn about networking before. This was very fun but pretty time consuming. I might write more about what is hard/interesting about setting up Kubernetes networking at some point. + +Or I wrote a [2000 word blog post][7] about everything I had to learn about Kubernetes’ different options for certificate authorities to be able to set up my Kubernetes CAs successfully.
+ +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/10/05/reasons-kubernetes-is-cool/ + +作者:[ Julia Evans][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/about +[1]:https://github.com/kamalmarhubi/kubereview +[2]:https://jvns.ca/categories/kubernetes +[3]:https://github.com/kelseyhightower/kubernetes-the-hard-way +[4]:https://github.com/kubernetes/kubernetes/blob/e4551d50e57c089aab6f67333412d3ca64bc09ae/pkg/controller/cronjob/cronjob_controller.go +[5]:https://github.com/kamalmarhubi/kubereview +[6]:https://jvns.ca/blog/2016/12/22/container-networking/ +[7]:https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/ diff --git a/sources/tech/20171010 Operating a Kubernetes network.md b/sources/tech/20171010 Operating a Kubernetes network.md new file mode 100644 index 0000000000..9c85e9aa70 --- /dev/null +++ b/sources/tech/20171010 Operating a Kubernetes network.md @@ -0,0 +1,216 @@ +Operating a Kubernetes network +============================================================ + +I’ve been working on Kubernetes networking a lot recently. One thing I’ve noticed is, while there’s a reasonable amount written about how to **set up** your Kubernetes network, I haven’t seen much about how to **operate** your network and be confident that it won’t create a lot of production incidents for you down the line. + +In this post I’m going to try to convince you of three things: (all I think pretty reasonable :)) + +* Avoiding networking outages in production is important + +* Operating networking software is hard + +* It’s worth thinking critically about major changes to your networking infrastructure and the impact that will have on your reliability, even if very fancy Googlers say “this is what we do at Google”. (google engineers are doing great work on Kubernetes!! But I think it’s important to still look at the architecture and make sure it makes sense for your organization.) + +I’m definitely not a Kubernetes networking expert by any means, but I have run into a few issues while setting things up and definitely know a LOT more about Kubernetes networking than I used to. + +### Operating networking software is hard + +Here I’m not talking about operating physical networks (I don’t know anything about that), but instead about keeping software like DNS servers & load balancers & proxies working correctly. + +I have been working on a team that’s responsible for a lot of networking infrastructure for a year, and I have learned a few things about operating networking infrastructure! (though I still have a lot to learn obviously). 3 overall thoughts before we start: + +* Networking software often relies very heavily on the Linux kernel. So in addition to configuring the software correctly you also need to make sure that a bunch of different sysctls are set correctly, and a misconfigured sysctl can easily be the difference between “everything is 100% fine” and “everything is on fire”. + +* Networking requirements change over time (for example maybe you’re doing 5x more DNS lookups than you were last year! Maybe your DNS server suddenly started returning TCP DNS responses instead of UDP which is a totally different kernel workload!). This means software that was working fine before can suddenly start having issues. + +* To fix a production networking issues you often need a lot of expertise. 
(for example see this [great post by Sophie Haskins on debugging a kube-dns issue][1]) I’m a lot better at debugging networking issues than I was, but that’s only after spending a huge amount of time investing in my knowledge of Linux networking. + +I am still far from an expert at networking operations but I think it seems important to: + +1. Very rarely make major changes to the production networking infrastructure (because it’s super disruptive) + +2. When you  _are_  making major changes, think really carefully about what the failure modes for the new network architecture are + +3. Have multiple people who are able to understand your networking setup + +Switching to Kubernetes is obviously a pretty major networking change! So let’s talk about what some of the things that can go wrong are! + +### Kubernetes networking components + +The Kubernetes networking components we’re going to talk about in this post are: + +* Your overlay network backend (like flannel/calico/weave net/romana) + +* `kube-dns` + +* `kube-proxy` + +* Ingress controllers / load balancers + +* The `kubelet` + +If you’re going to set up HTTP services you probably need all of these. I’m not using most of these components yet but I’m trying to understand them, so that’s what this post is about. + +### The simplest way: Use host networking for all your containers + +Let’s start with the simplest possible thing you can do. This won’t let you run HTTP services in Kubernetes. I think it’s pretty safe because there are fewer moving parts. + +If you use host networking for all your containers I think all you need to do is: + +1. Configure the kubelet to configure DNS correctly inside your containers + +2. That’s it + +If you use host networking for literally every pod you don’t need kube-dns or kube-proxy. You don’t even need a working overlay network. + +In this setup your pods can connect to the outside world (the same way any process on your hosts would talk to the outside world) but the outside world can’t connect to your pods. + +This isn’t super important (I think most people want to run HTTP services inside Kubernetes and actually communicate with those services) but I do think it’s interesting to realize that at some level all of this networking complexity isn’t strictly required and sometimes you can get away without using it. Avoiding networking complexity seems like a good idea to me if you can. + +### Operating an overlay network + +The first networking component we’re going to talk about is your overlay network. Kubernetes assumes that every pod has an IP address and that you can communicate with services inside that pod by using that IP address. When I say “overlay network” this is what I mean (“the system that lets you refer to a pod by its IP address”). + +All other Kubernetes networking stuff relies on the overlay networking working correctly. You can read more about the [kubernetes networking model here][10]. + +The way Kelsey Hightower describes in [kubernetes the hard way][11] seems pretty good but it’s not really viable on AWS for clusters more than 50 nodes or so, so I’m not going to talk about that. + +There are a lot of overlay network backends (calico, flannel, weaveworks, romana) and the landscape is pretty confusing. But as far as I’m concerned an overlay network has 2 responsibilities: + +1. Make sure your pods can send network requests outside your cluster + +2. Keep a stable mapping of nodes to subnets and keep every node in your cluster updated with that mapping.
Do the right thing when nodes are added & removed. + +Okay! So! What can go wrong with your overlay network? + +* The overlay network is responsible for setting up iptables rules (basically `iptables -t nat -A POSTROUTING -s $SUBNET -j MASQUERADE`) to ensure that containers can make network requests outside Kubernetes. If something goes wrong with this rule then your containers can’t connect to the external network. This isn’t that hard (it’s just a few iptables rules) but it is important. I made a [pull request][2] because I wanted to make sure this was resilient + +* Something can go wrong with adding or deleting nodes. We’re using the flannel hostgw backend and at the time we started using it, node deletion [did not work][3]. + +* Your overlay network is probably dependent on a distributed database (etcd). If that database has an incident, this can cause issues. For example [https://github.com/coreos/flannel/issues/610][4] says that if you have data loss in your flannel etcd cluster it can result in containers losing network connectivity. (this has now been fixed) + +* You upgrade Docker and everything breaks + +* Probably more things! + +I’m mostly talking about past issues in Flannel here but I promise I’m not picking on Flannel – I actually really **like** Flannel because I feel like it’s relatively simple (for instance the [vxlan backend part of it][12] is like 500 lines of code) and I feel like it’s possible for me to reason through any issues with it. And it’s obviously continuously improving. They’ve been great about reviewing pull requests. + +My approach to operating an overlay network so far has been: + +* Learn how it works in detail and how to debug it (for example the hostgw network backend for Flannel works by creating routes, so you mostly just need to do `sudo ip route list` to see whether it’s doing the correct thing) + +* Maintain an internal build so it’s easy to patch it if needed + +* When there are issues, contribute patches upstream + +I think it’s actually really useful to go through the list of merged PRs and see bugs that have been fixed in the past – it’s a bit time consuming but is a great way to get a concrete list of kinds of issues other people have run into. + +It’s possible that for other people their overlay networks just work but that hasn’t been my experience and I’ve heard other folks report similar issues. If you have an overlay network setup that is a) on AWS and b) works on a cluster of more than 50-100 nodes, and you feel confident about operating it, I would like to know. + +### Operating kube-proxy and kube-dns? + +Now that we have some thoughts about operating overlay networks, let’s talk about kube-proxy and kube-dns. + +There’s a question mark next to this one because I haven’t done this. Here I have more questions than answers. + +Here’s how Kubernetes services work! A service is a collection of pods, which each have their own IP address (like 10.1.0.3, 10.2.3.5, 10.3.5.6) + +1. Every Kubernetes service gets an IP address (like 10.23.1.2) + +2. `kube-dns` resolves Kubernetes service DNS names to IP addresses (so my-svc.my-namespace.svc.cluster.local might map to 10.23.1.2) + +3. `kube-proxy` sets up iptables rules in order to do random load balancing between them. Kube-proxy also has a userspace round-robin load balancer but my impression is that they don’t recommend using it.
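To poke at both halves of this machinery from a cluster node, here is a quick sketch. The service name is made up, and 10.96.0.10 is just one common default for the kube-dns service IP, so yours may differ:

```
# DNS half: ask kube-dns directly for the cluster IP behind a service name
dig +short my-svc.my-namespace.svc.cluster.local @10.96.0.10

# iptables half: dump the NAT rules kube-proxy generated for that service
sudo iptables-save -t nat | grep my-svc
```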
+ +So when you make a request to `my-svc.my-namespace.svc.cluster.local`, it resolves to 10.23.1.2, and then iptables rules on your local host (generated by kube-proxy) redirect it to one of 10.1.0.3 or 10.2.3.5 or 10.3.5.6 at random. + +Some things that I can imagine going wrong with this: + +* `kube-dns` is misconfigured + +* `kube-proxy` dies and your iptables rules don’t get updated + +* Some issue related to maintaining a large number of iptables rules + +Let’s talk about the iptables rules a bit, since doing load balancing by creating a bajillion iptables rules is something I had never heard of before! + +kube-proxy creates one iptables rule per target host like this: (these rules are from [this github issue][13]) + +``` +-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-E4QKA7SLJRFZZ2DD +-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LZ7EGMG4DRXMY26H +-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI +-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS +-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y + +``` + +So kube-proxy creates a **lot** of iptables rules. What does that mean? What are the implications of that for my network? There’s a great talk from Huawei called [Scale Kubernetes to Support 50,000 services][14] that says if you have 5,000 services in your kubernetes cluster, it takes **11 minutes** to add a new rule. If that happened to your real cluster I think it would be very bad. + +I definitely don’t have 5,000 services in my cluster, but 5,000 isn’t SUCH a big number. The proposal they give to solve this problem is to replace this iptables backend for kube-proxy with IPVS which is a load balancer that lives in the Linux kernel. + +It seems like kube-proxy is going in the direction of various Linux kernel based load balancers. I think this is partly because they support UDP load balancing, and other load balancers (like HAProxy) don’t support UDP load balancing. + +But I feel comfortable with HAProxy! Is it possible to replace kube-proxy with HAProxy? I googled this and I found this [thread on kubernetes-sig-network][15] saying: + +> kube-proxy is so awesome, we have used in production for almost a year, it works well most of time, but as we have more and more services in our cluster, we found it was getting hard to debug and maintain. There is no iptables expert in our team, we do have HAProxy&LVS experts, as we have used these for several years, so we decided to replace this distributed proxy with a centralized HAProxy. I think this maybe useful for some other people who are considering using HAProxy with kubernetes, so we just update this project and make it open source: [https://github.com/AdoHe/kube2haproxy][5]. If you found it’s useful , please take a look and give a try. + +So that’s an interesting option!
I definitely don’t have answers here, but here are some thoughts:

* Load balancers are complicated

* DNS is also complicated

* If you already have a lot of experience operating one kind of load balancer (like HAProxy), it might make sense to do some extra work to use that instead of starting to use an entirely new kind of load balancer (like kube-proxy)

* I’ve been thinking about whether we want to be using kube-proxy or kube-dns at all – I think instead it might be better to just invest in Envoy and rely entirely on Envoy for all load balancing & service discovery. So then you just need to be good at operating Envoy.

As you can see, my thoughts on how to operate your Kubernetes internal proxies are still pretty confused and I’m still not super experienced with them. It’s totally possible that kube-proxy and kube-dns are fine and that they will just work, but I still find it helpful to think through what some of the implications of using them are (for example “you can’t have 5,000 Kubernetes services”).

### Ingress

If you’re running a Kubernetes cluster, it’s pretty likely that you actually need HTTP requests to get into your cluster. This blog post is already too long and I don’t know much about ingress yet so we’re not going to talk about that.

### Useful links

A couple of useful links, to summarize:

* [The Kubernetes networking model][6]

* How GKE networking works: [https://www.youtube.com/watch?v=y2bhV81MfKQ][7]

* The aforementioned talk on `kube-proxy` performance: [https://www.youtube.com/watch?v=4-pawkiazEg][8]

### I think networking operations is important

My sense of all this Kubernetes networking software is that it’s all still quite new and I’m not sure we (as a community) really know how to operate all of it well. This makes me worried as an operator because I really want my network to keep working! :) Also I feel like as an organization running your own Kubernetes cluster you need to make a pretty large investment into making sure you understand all the pieces so that you can fix things when they break. Which isn’t a bad thing, it’s just a thing.

My plan right now is just to keep learning about how things work and reduce the number of moving parts I need to worry about as much as possible.

As usual I hope this was helpful and I would very much like to know what I got wrong in this post!
--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/

作者:[Julia Evans ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://jvns.ca/about
[1]:http://blog.sophaskins.net/blog/misadventures-with-kube-dns/
[2]:https://github.com/coreos/flannel/pull/808
[3]:https://github.com/coreos/flannel/pull/803
[4]:https://github.com/coreos/flannel/issues/610
[5]:https://github.com/AdoHe/kube2haproxy
[6]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model
[7]:https://www.youtube.com/watch?v=y2bhV81MfKQ
[8]:https://www.youtube.com/watch?v=4-pawkiazEg
[9]:https://jvns.ca/categories/kubernetes
[10]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model
[11]:https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/11-pod-network-routes.md
[12]:https://github.com/coreos/flannel/tree/master/backend/vxlan
[13]:https://github.com/kubernetes/kubernetes/issues/37932
[14]:https://www.youtube.com/watch?v=4-pawkiazEg
[15]:https://groups.google.com/forum/#!topic/kubernetes-sig-network/3NlBVbTUUU0

diff --git a/sources/tech/20171011 LEAST PRIVILEGE CONTAINER ORCHESTRATION.md b/sources/tech/20171011 LEAST PRIVILEGE CONTAINER ORCHESTRATION.md
new file mode 100644
index 0000000000..7a9b6e817c
--- /dev/null
+++ b/sources/tech/20171011 LEAST PRIVILEGE CONTAINER ORCHESTRATION.md
@@ -0,0 +1,174 @@
# LEAST PRIVILEGE CONTAINER ORCHESTRATION


The Docker platform and the container have become the standard for packaging, deploying, and managing applications. In order to coordinate running containers across multiple nodes in a cluster, a key capability is required: a container orchestrator.

![container orchestrator](https://i0.wp.com/blog.docker.com/wp-content/uploads/f753d4e8-9e22-4fe2-be9a-80661ef696a8-3.jpg?resize=536%2C312&ssl=1)

Orchestrators are responsible for critical clustering and scheduling tasks, such as:

* Managing container scheduling and resource allocation.

* Supporting service discovery and hitless application deploys.

* Distributing the necessary resources that applications need to run.

Unfortunately, the distributed nature of orchestrators and the ephemeral nature of resources in this environment make securing orchestrators a challenging task. In this post, we will describe in detail the less-considered—yet vital—aspect of the security model of container orchestrators, and how Docker Enterprise Edition with its built-in orchestration capability, Swarm mode, overcomes these difficulties.

Motivation and threat model
============================================================

One of the primary objectives of Docker EE with swarm mode is to provide an orchestrator with security built-in. To achieve this goal, we developed the first container orchestrator designed with the principle of least privilege in mind.

In computer science, the principle of least privilege in a distributed system requires that each participant of the system must only have access to the information and resources that are necessary for its legitimate purpose. No more, no less.
> #### “A process must be able to access only the information and resources that are necessary for its legitimate purpose.”

#### Principle of Least Privilege

Each node in a Docker EE swarm is assigned a role: either manager or worker. These roles define a coarse-grained level of privilege for the nodes: administration and task execution, respectively. However, regardless of its role, a node has access only to the information and resources it needs to perform the necessary tasks, with cryptographically enforced guarantees. As a result, it becomes easier to secure clusters against even the most sophisticated attacker models: attackers that control the underlying communication networks or even compromised cluster nodes.

# Secure-by-default core

There is an old security maxim that states: if it doesn’t come by default, no one will use it. Docker Swarm mode takes this notion to heart, and ships with secure-by-default mechanisms to solve three of the hardest and most important aspects of the orchestration lifecycle:

1. Trust bootstrap and node introduction.

2. Node identity issuance and management.

3. Authenticated, Authorized, Encrypted information storage and dissemination.

Let’s look at each of these aspects individually.

### Trust Bootstrap and Node Introduction

The first step to a secure cluster is tight control over membership and identity. Without it, administrators cannot rely on the identities of their nodes and enforce strict workload separation between nodes. This means that unauthorized nodes can’t be allowed to join the cluster, and nodes that are already part of the cluster aren’t able to change identities, suddenly pretending to be another node.

To address this need, nodes managed by Docker EE’s Swarm mode maintain strong, immutable identities. The desired properties are cryptographically guaranteed by using two key building blocks:

1. Secure join tokens for cluster membership.

2. Unique identities embedded in certificates issued from a central certificate authority.

### Joining the Swarm

To join the swarm, a node needs a copy of a secure join token. The token is unique to each operational role within the cluster—there are currently two types of nodes: workers and managers. Due to this separation, a node with a copy of a worker token will not be allowed to join the cluster as a manager. The only way to get this special token is for a cluster administrator to interactively request it from the cluster’s manager through the swarm administration API.

The token is securely and randomly generated, but it also has a special syntax that makes leaks of this token easier to detect: a special prefix that you can easily monitor for in your logs and repositories. Fortunately, even if a leak does occur, tokens are easy to rotate, and we recommend that you rotate them often—particularly in the case where your cluster will not be scaling up for a while.

![Docker Swarm](https://i1.wp.com/blog.docker.com/wp-content/uploads/92d171d4-52c7-4702-8143-110c6f52017c-2.jpg?resize=547%2C208&ssl=1)

### Bootstrapping trust

As part of establishing its identity, a new node will ask for a new identity to be issued by any of the network managers. However, under our threat model, all communications can be intercepted by a third party. This raises the question: how does a node know that it is talking to a legitimate manager?
![Docker Security](https://i0.wp.com/blog.docker.com/wp-content/uploads/94e3fef0-5bd2-4970-b9e9-25b566d926ad-2.jpg?resize=528%2C348&ssl=1)

Fortunately, Docker has a built-in mechanism for preventing this from happening. The join token, which the host uses to join the swarm, includes a hash of the root CA’s certificate. The host can therefore use one-way TLS and use the hash to verify that it’s joining the right swarm: if the manager presents a certificate not signed by a CA that matches the hash, the node knows not to trust it.

### Node identity issuance and management

Identities in a swarm are embedded in X.509 certificates held by each individual node. In a manifestation of the least privilege principle, the certificates’ private keys are restricted strictly to the hosts where they originate. In particular, managers do not have access to private keys of any certificate but their own.

### Identity Issuance

To receive their certificates without sharing their private keys, new hosts begin by issuing a certificate signing request (CSR), which the managers then convert into a certificate. This certificate now becomes the new host’s identity, making the node a full-fledged member of the swarm!

![](https://i0.wp.com/blog.docker.com/wp-content/uploads/415ae6cf-7e76-4ba8-9d84-6d49bf327d8f-2.jpg?resize=548%2C350&ssl=1)

When used alongside the secure bootstrapping mechanism, this mechanism for issuing identities to joining nodes is secure by default: all communicating parties are authenticated and authorized, and no sensitive information is ever exchanged in cleartext.

### Identity Renewal

However, securely joining nodes to a swarm is only part of the story. To minimize the impact of leaked or stolen certificates and to remove the complexity of managing certificate revocation lists (CRLs), Swarm mode uses short-lived certificates for the identities. These certificates have a default expiration of three months, but can be configured to expire every hour!

![Docker secrets](https://i0.wp.com/blog.docker.com/wp-content/uploads/55e2ab9a-19cd-465d-82c6-fa76110e7ecd-2.jpg?resize=556%2C365&ssl=1)

This short certificate expiration time means that certificate rotation can’t be a manual process, as it usually is for most PKI systems. With swarm, all certificates are rotated automatically and in a hitless fashion. The process is simple: using a mutually authenticated TLS connection to prove ownership over a particular identity, a Swarm node regularly generates a new public/private key pair and sends the corresponding CSR to be signed, creating a completely new certificate, but maintaining the same identity.

### Authenticated, Authorized, Encrypted information storage and dissemination.

During the normal operation of a swarm, information about the tasks has to be sent to the worker nodes for execution. This includes not only information about which containers are to be executed by a node, but also all the resources that are necessary for the successful execution of that container, including sensitive secrets such as private keys, passwords, and API tokens.

### Transport Security

Since every node participating in a swarm possesses a unique identity in the form of an X.509 certificate, communicating securely between nodes is trivial: nodes can use their respective certificates to establish mutually authenticated connections between one another, inheriting the confidentiality, authenticity and integrity properties of TLS.
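All of the token and certificate machinery described so far is driven from the regular Docker CLI. A minimal sketch (these subcommands and flags exist as of Docker 1.12, but check `docker swarm --help` on your version):

```
# Print the join command (including the secret token) for each role.
docker swarm join-token worker
docker swarm join-token manager

# Rotate a token after a suspected leak; already-joined nodes are unaffected.
docker swarm join-token --rotate worker

# Shorten the lifetime of node certificates from the 90-day default.
docker swarm update --cert-expiry 1h
```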
![Swarm Mode](https://i0.wp.com/blog.docker.com/wp-content/uploads/972273a3-d9e5-4053-8fcb-a407c8cdcbf6-2.jpg?resize=347%2C271&ssl=1)

One interesting detail about Swarm mode is the fact that it uses a push model: only managers are allowed to send information to workers—significantly reducing the attack surface that manager nodes expose to the less-privileged worker nodes.

### Strict Workload Separation Into Security Zones

One of the responsibilities of manager nodes is deciding which tasks to send to each of the workers. Managers make this determination using a variety of strategies, scheduling the workloads across the swarm depending on both the unique properties of each node and each workload.

In Docker EE with Swarm mode, administrators have the ability to influence these scheduling decisions by using labels that are securely attached to the individual node identities. These labels allow administrators to group nodes together into different security zones, limiting the exposure of particularly sensitive workloads and any secrets related to them.

![Docker Swarm Security](https://i0.wp.com/blog.docker.com/wp-content/uploads/67ffa551-d4ae-4522-ba13-4a646a158592-2.jpg?resize=546%2C375&ssl=1)

### Secure Secret Distribution

In addition to facilitating the identity issuance process, manager nodes have the important task of storing and distributing any resources needed by a worker. Secrets are treated like any other type of resource, and are pushed down from the manager to the worker over the secure mTLS connection.

![Docker Secrets](https://i1.wp.com/blog.docker.com/wp-content/uploads/4341da98-2f8c-4aed-bb40-607246344dd8-2.jpg?resize=508%2C326&ssl=1)

On the hosts, Docker EE ensures that secrets are provided only to the containers they are destined for. Other containers on the same host will not have access to them. Docker exposes secrets to a container as a temporary file system, ensuring that secrets are always stored in memory and never written to disk. This method is more secure than competing alternatives, such as [storing them in environment variables][12]. Once a task completes, the secret is gone forever.

### Storing secrets

On manager hosts, secrets are always encrypted at rest. By default, the key that encrypts these secrets (known as the Data Encryption Key, DEK) is also stored in plaintext on disk. This makes it easy for those with minimal security requirements to start using Docker Swarm mode.

However, once you are running a production cluster, we recommend you enable auto-lock mode. When auto-lock mode is enabled, a newly rotated DEK is encrypted with a separate Key Encryption Key (KEK). This key is never stored on the cluster; the administrator is responsible for storing it securely and providing it when the cluster starts up. This is known as unlocking the swarm.

Swarm mode supports multiple managers, relying on the Raft Consensus Algorithm for fault tolerance. Secure secret storage scales seamlessly in this scenario. Each manager host has a unique disk encryption key, in addition to the shared key. Furthermore, Raft logs are encrypted on disk and are similarly unavailable without the KEK when in auto-lock mode.

### What happens when a node is compromised?

![Docker Secrets](https://i0.wp.com/blog.docker.com/wp-content/uploads/2a78b37d-bbf0-40ee-a282-eb0900f71ba9-2.jpg?resize=502%2C303&ssl=1)

In traditional orchestrators, recovering from a compromised host is a slow and complicated process.
With Swarm mode, recovery is as easy as running the `docker node rm` command. This removes the affected node from the cluster, and Docker will take care of the rest, namely re-balancing services and making sure other hosts know not to talk to the affected node.

As we have seen, thanks to least privilege orchestration, even if the attacker were still active on the host, they would be cut off from the rest of the network. The host’s certificate — its identity — is blacklisted, so the managers will not accept it as valid.

# Conclusion

Docker EE with Swarm mode ensures security by default in all key areas of orchestration:

* Joining the cluster. Prevents malicious nodes from joining the cluster.

* Organizing hosts into security zones. Prevents lateral movement by attackers.

* Scheduling tasks. Tasks will be issued only to designated and allowed nodes.

* Allocating resources. A malicious node cannot “steal” another’s workload or resources.

* Storing secrets. Never stored in plaintext and never written to disk on worker nodes.

* Communicating with the workers. Encrypted using mutually authenticated TLS.

As Swarm mode continues to improve, the Docker team is working to take the principle of least privilege orchestration even further. The task we are tackling is: how can systems remain secure if a manager is compromised? The roadmap is in place, with some features already available, such as the ability to whitelist only specific Docker images, preventing managers from executing arbitrary workloads. This is achieved quite naturally using Docker Content Trust.

--------------------------------------------------------------------------------

via: https://blog.docker.com/2017/10/least-privilege-container-orchestration/

作者:[Diogo Mónica ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://blog.docker.com/author/diogo/
[1]:http://www.linkedin.com/shareArticle?mini=true&url=http://dockr.ly/2yZoNdy&title=Least%20Privilege%20Container%20Orchestration&summary=The%20Docker%20platform%20and%20the%20container%20has%20become%20the%20standard%20for%20packaging,%20deploying,%20and%20managing%20applications.%20In%20order%20to%20coordinate%20running%20containers%20across%20multiple%20nodes%20in%20a%20cluster,%20a%20key%20capability%20is%20required:%20a%20container%20orchestrator.Orchestrators%20are%20responsible%20for%20critical%20clustering%20and%20scheduling%20tasks,%20such%20as:%20%20%20%20Managing%20...
[2]:http://www.reddit.com/submit?url=http://dockr.ly/2yZoNdy&title=Least%20Privilege%20Container%20Orchestration
[3]:https://plus.google.com/share?url=http://dockr.ly/2yZoNdy
[4]:http://news.ycombinator.com/submitlink?u=http://dockr.ly/2yZoNdy&t=Least%20Privilege%20Container%20Orchestration
[5]:https://blog.docker.com/author/diogo/
[6]:https://blog.docker.com/tag/docker-orchestration/
[7]:https://blog.docker.com/tag/docker-secrets/
[8]:https://blog.docker.com/tag/docker-security/
[9]:https://blog.docker.com/tag/docker-swarm/
[10]:https://blog.docker.com/tag/least-privilege-orchestrator/
[11]:https://blog.docker.com/tag/tls/
[12]:https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/

diff --git a/sources/tech/20171020 How Eclipse is advancing IoT development.md b/sources/tech/20171020 How Eclipse is advancing IoT development.md
deleted file mode 100644
index 30fd8eb64d..0000000000
--- a/sources/tech/20171020 How Eclipse is advancing IoT development.md
+++ /dev/null
@@ -1,83 +0,0 @@
apply for translating

How Eclipse is advancing IoT development
============================================================

### Open source organization's modular approach to development is a good match for the Internet of Things.

![How Eclipse is advancing IoT development](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_BUS_ArchitectureOfParticipation_520x292.png?itok=FA0Uuwzv "How Eclipse is advancing IoT development")
Image by : opensource.com

[Eclipse][3] may not be the first open source organization that pops to mind when thinking about Internet of Things (IoT) projects. After all, the foundation has been around since 2001, long before IoT was a household word, supporting a community for commercially viable open source software development.

September's Eclipse IoT Day, held in conjunction with RedMonk's [ThingMonk 2017][4] event, emphasized the big role Eclipse is taking in [IoT development][5]. It currently hosts 28 projects that touch a wide range of IoT needs and projects. While at the conference, I talked with [Ian Skerritt][6], who heads marketing for Eclipse, about Eclipse's IoT projects and how Eclipse thinks about IoT more broadly.

### What's new about IoT?

I asked Ian how IoT is different from traditional industrial automation, given that sensors and tools have been connected in factories for the past several decades. Ian notes that many factories still are not connected.

Additionally, he says, "SCADA [supervisory control and data acquisition] systems and even the factory floor technology are very proprietary, very siloed. It's hard to change it. It's hard to adapt to it… Right now, when you set up a manufacturing run, you need to manufacture hundreds of thousands of that piece, of that unit. What [manufacturers] want to do is to meet customer demand, to have manufacturing processes that are very flexible, that you can actually do a lot size of one." That's a big piece of what IoT is bringing to manufacturing.

### Eclipse's approach to IoT

He describes Eclipse's involvement in IoT by saying: "There's core fundamental technology that every IoT solution needs," and by using open source, "everyone can use it so they can get broader adoption." He says Eclipse sees IoT as consisting of three connected software stacks. At a high level, these stacks mirror the (by now familiar) view that IoT can usually be described as spanning three layers.
A given implementation may have even more layers, but they still generally map to the functions of this three-layer model: - -* A stack of software for constrained devices (e.g., the device, endpoint, microcontroller unit (MCU), sensor hardware). - -* Some type of gateway that aggregates information and data from the different sensors and sends it to the network. This layer also may take real-time actions based on what the sensors are observing. - -* A software stack for the IoT platform on the backend. This backend cloud stores the data and can provide services based on collected data, such as analysis of historical trends and predictive analytics. - -The three stacks are described in greater detail in Eclipse's whitepaper "[The Three Software Stacks Required for IoT Architectures][7]." - -Ian says that, when developing a solution within those architectures, "there's very specific things that need to be built, but there's a lot of underlying technology that can be used, like messaging protocols, like gateway services. It needs to be a modular approach to scale up to the different use cases that are up there." This encapsulates Eclipse's activities around IoT: Developing modular open source components that can be used to build a range of business-specific services and solutions. - -### Eclipse's IoT projects - -Of Eclipse's many IoT projects currently in use, Ian says two of the most prominent relate to [MQTT][8], a machine-to-machine (M2M) messaging protocol for IoT. Ian describes it as "a publish‑subscribe messaging protocol that was designed specifically for oil and gas pipeline monitoring where power-management network latency is really important. MQTT has been a great success in terms of being a standard that's being widely adopted in IoT." [Eclipse Mosquitto][9] is MQTT's broker and [Eclipse Paho][10] its client. - -[Eclipse Kura][11] is an IoT gateway that, in Ian's words, "provides northbound and southbound connectivity [for] a lot of different protocols" including Bluetooth, Modbus, controller-area network (CAN) bus, and OPC Unified Architecture, with more being added all the time. One benefit, he says, is "instead of you writing your own connectivity, Kura provides that and then connects you to the network via satellite, via Ethernet, or anything." In addition, it handles firewall configuration, network latency, and other functions. "If the network goes down, it will store messages until it comes back up," Ian says. - -A newer project, [Eclipse Kapua][12], is taking a microservices approach to providing different services for an IoT cloud platform. For example, it handles aspects of connectivity, integration, management, storage, and analysis. Ian describes it as "up and coming. It's not being deployed yet, but Eurotech and Red Hat are very active in that." - -Ian says [Eclipse hawkBit][13], which manages software updates, is one of the "most intriguing projects. From a security perspective, if you can't update your device, you've got a huge security hole." Most IoT security disasters are related to non-updated devices, he says. "HawkBit basically manages the backend of how you do scalable updates across your IoT system." - -Indeed, the difficulty of updating software in IoT devices is regularly cited as one of its biggest security challenges. IoT devices aren't always connected and may be numerous, plus update processes for constrained devices can be hard to consistently get right. 
For this reason, projects relating to updating IoT software are likely to be important going forward. - -### Why IoT is a good fit for Eclipse - -One of the trends we've seen in IoT development has been around building blocks that are integrated and applied to solve particular business problems, rather than monolithic IoT platforms that apply across industries and companies. This is a good fit with Eclipse's approach to IoT, which focuses on a number of modular stacks; projects that provide specific and commonly needed functions; and brokers, gateways, and protocols that can tie together the components needed for a given implementation. - --------------------------------------------------------------------------------- - -作者简介: - -Gordon Haff - Gordon Haff is Red Hat’s cloud evangelist, is a frequent and highly acclaimed speaker at customer and industry events, and helps develop strategy across Red Hat’s full portfolio of cloud solutions. He is the author of Computing Next: How the Cloud Opens the Future in addition to numerous other publications. Prior to Red Hat, Gordon wrote hundreds of research notes, was frequently quoted in publications like The New York Times on a wide range of IT topics, and advised clients on product and... - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/10/eclipse-and-iot - -作者:[Gordon Haff ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/ghaff -[1]:https://opensource.com/article/17/10/eclipse-and-iot?rate=u1Wr-MCMFCF4C45IMoSPUacCatoqzhdKz7NePxHOvwg -[2]:https://opensource.com/user/21220/feed -[3]:https://www.eclipse.org/home/ -[4]:http://thingmonk.com/ -[5]:https://iot.eclipse.org/ -[6]:https://twitter.com/ianskerrett -[7]:https://iot.eclipse.org/resources/white-papers/Eclipse%20IoT%20White%20Paper%20-%20The%20Three%20Software%20Stacks%20Required%20for%20IoT%20Architectures.pdf -[8]:http://mqtt.org/ -[9]:https://projects.eclipse.org/projects/technology.mosquitto -[10]:https://projects.eclipse.org/projects/technology.paho -[11]:https://www.eclipse.org/kura/ -[12]:https://www.eclipse.org/kapua/ -[13]:https://eclipse.org/hawkbit/ -[14]:https://opensource.com/users/ghaff -[15]:https://opensource.com/users/ghaff -[16]:https://opensource.com/article/17/10/eclipse-and-iot#comments diff --git a/sources/tech/20171102 Dive into BPF a list of reading material.md b/sources/tech/20171102 Dive into BPF a list of reading material.md new file mode 100644 index 0000000000..f4b90bd09d --- /dev/null +++ b/sources/tech/20171102 Dive into BPF a list of reading material.md @@ -0,0 +1,711 @@ +Dive into BPF: a list of reading material +============================================================ + +* [What is BPF?][143] + +* [Dive into the bytecode][144] + +* [Resources][145] + * [Generic presentations][23] + * [About BPF][1] + + * [About XDP][2] + + * [About other components related or based on eBPF][3] + + * [Documentation][24] + * [About BPF][4] + + * [About tc][5] + + * [About XDP][6] + + * [About P4 and BPF][7] + + * [Tutorials][25] + + * [Examples][26] + * [From the kernel][8] + + * [From package iproute2][9] + + * [From bcc set of tools][10] + + * [Manual pages][11] + + * [The code][27] + * [BPF code in the kernel][12] + + * [XDP hooks code][13] + + * [BPF logic in bcc][14] + + * [Code to manage BPF with tc][15] + + * [BPF utilities][16] + + * 
[Other interesting chunks][17]

  * [LLVM backend][18]

  * [Running in userspace][19]

  * [Commit logs][20]

  * [Troubleshooting][28]
    * [Errors at compilation time][21]

    * [Errors at load and run time][22]

  * [And still more!][29]

 _~ [Updated][146] 2017-11-02 ~_

# What is BPF?

BPF, as in **B**erkeley **P**acket **F**ilter, was initially conceived in 1992 so as to provide a way to filter packets and to avoid useless packet copies from kernel to userspace. It initially consisted of a simple bytecode that is injected from userspace into the kernel, where it is checked by a verifier—to prevent kernel crashes or security issues—and attached to a socket, then run on each received packet. It was ported to Linux a couple of years later, and used for a small number of applications (tcpdump for example). The simplicity of the language as well as the existence of an in-kernel Just-In-Time (JIT) compiling machine for BPF were factors for the excellent performance of this tool.

Then in 2013, Alexei Starovoitov completely reshaped it, started to add new functionalities and to improve the performance of BPF. This new version is designated as eBPF (for “extended BPF”), while the former becomes cBPF (“classic” BPF). New features such as maps and tail calls appeared. The JIT machines were rewritten. The new language is even closer to native machine language than cBPF was. Also, new attach points in the kernel have been created.

Thanks to those new hooks, eBPF programs can be designed for a variety of use cases, which fall into two fields of application. One of them is the domain of kernel tracing and event monitoring. BPF programs can be attached to kprobes, and they can be compared with other tracing methods, with many advantages (and sometimes some drawbacks).

The other application domain remains network programming. In addition to socket filters, eBPF programs can be attached to tc (Linux traffic control tool) ingress or egress interfaces and perform a variety of packet processing tasks, in an efficient way. This opens new perspectives in the domain.

And eBPF performance is further leveraged through the technologies developed for the IO Visor project: new hooks have also been added for XDP (“eXpress Data Path”), a new fast path recently added to the kernel. XDP works in conjunction with the Linux stack, and relies on BPF to perform very fast packet processing.

Even some projects such as P4 and Open vSwitch [consider][155] BPF or have started to approach it. Others, such as CETH and Cilium, are entirely based on it. BPF is buzzing, so we can expect a lot of tools and projects to orbit around it soon…

# Dive into the bytecode

As for me: some of my work (including for [BEBA][156]) is closely related to eBPF, and several future articles on this site will focus on this topic. Logically, I wanted to somehow introduce BPF on this blog before going down to the details—I mean, a real introduction, covering BPF functionality in more depth than the brief overview provided in the first section: What are BPF maps? Tail calls? What do the internals look like? And so on. But there are a lot of presentations on this topic available on the web already, and I do not wish to create “yet another BPF introduction” that would come as a duplicate of existing documents.

So instead, here is what we will do. After all, I spent some time reading and learning about BPF, and while doing so, I gathered a fair amount of material about BPF: introductions, documentation, but also tutorials or examples.
There is a lot to read, but in order to read it, one has to  _find_  it first. Therefore, as an attempt to help people who wish to learn and use BPF, the present article introduces a list of resources. These are various kinds of reading material that will hopefully help you dive into the mechanics of this kernel bytecode.

# Resources

![](https://qmonnet.github.io/whirl-offload/img/icons/pic.svg)

### Generic presentations

The documents linked below provide a generic overview of BPF, or of some closely related topics. If you are very new to BPF, you can try picking a couple of presentations among the first ones and reading the ones you like most. If you know eBPF already, you probably want to target specific topics instead, lower down in the list.

### About BPF

Generic presentations about eBPF:

* [_Making the Kernel’s Networking Data Path Programmable with BPF and XDP_][53]  (Daniel Borkmann, OSSNA17, Los Angeles, September 2017):
  One of the best sets of slides available to understand quickly all the basics about eBPF and XDP (mostly for network processing).

* [The BSD Packet Filter][54] (Suchakra Sharma, June 2017):
  A very nice introduction, mostly about the tracing aspects.

* [_BPF: tracing and more_][55]  (Brendan Gregg, January 2017):
  Mostly about the tracing use cases.

* [_Linux BPF Superpowers_][56]  (Brendan Gregg, March 2016):
  With a first part on the use of **flame graphs**.

* [_IO Visor_][57]  (Brenden Blanco, SCaLE 14x, January 2016):
  Also introduces the **IO Visor project**.

* [_eBPF on the Mainframe_][58]  (Michael Holzheu, LinuxCon, Dublin, October 2015)

* [_New (and Exciting!) Developments in Linux Tracing_][59]  (Elena Zannoni, LinuxCon, Japan, 2015)

* [_BPF — in-kernel virtual machine_][60]  (Alexei Starovoitov, February 2015):
  Presentation by the author of eBPF.

* [_Extending extended BPF_][61]  (Jonathan Corbet, July 2014)

**BPF internals**:

* Daniel Borkmann has been doing amazing work to present **the internals** of eBPF, in particular about **its use with tc**, through several talks and papers.
  * [_Advanced programmability and recent updates with tc’s cls_bpf_][30]  (netdev 1.2, Tokyo, October 2016):
    Daniel provides details on eBPF, its use for tunneling and encapsulation, direct packet access, and other features.

  * [_cls_bpf/eBPF updates since netdev 1.1_][31]  (netdev 1.2, Tokyo, October 2016, part of [this tc workshop][32])

  * [_On getting tc classifier fully programmable with cls_bpf_][33]  (netdev 1.1, Sevilla, February 2016):
    After introducing eBPF, this presentation provides insights on many internal BPF mechanisms (map management, tail calls, verifier). A must-read! For the most ambitious, [the full paper is available here][34].

  * [_Linux tc and eBPF_][35]  (fosdem16, Brussels, Belgium, January 2016)

  * [_eBPF and XDP walkthrough and recent updates_][36]  (fosdem17, Brussels, Belgium, February 2017)

  These presentations are probably one of the best sources of documentation to understand the design and implementation of internal mechanisms of eBPF.

The [**IO Visor blog**][157] has some interesting technical articles about BPF. Some of them contain a bit of marketing talk.
**Kernel tracing**: summing up all existing methods, including BPF:

* [_Meet-cute between eBPF and Kernel Tracing_][62]  (Viller Hsiao, July 2016):
  Kprobes, uprobes, ftrace

* [_Linux Kernel Tracing_][63]  (Viller Hsiao, July 2016):
  Systemtap, Kernelshark, trace-cmd, LTTng, perf-tool, ftrace, hist-trigger, perf, function tracer, tracepoint, kprobe/uprobe…

Regarding **event tracing and monitoring**, Brendan Gregg uses eBPF a lot and does an excellent job at documenting some of his use cases. If you are into kernel tracing, you should see his blog articles related to eBPF or to flame graphs. Most of them are accessible [from this article][158] or by browsing his blog.

Introducing BPF, but also presenting **generic concepts of Linux networking**:

* [_Linux Networking Explained_][64]  (Thomas Graf, LinuxCon, Toronto, August 2016)

* [_Kernel Networking Walkthrough_][65]  (Thomas Graf, LinuxCon, Seattle, August 2015)

**Hardware offload**:

* eBPF with tc or XDP supports hardware offload, starting with Linux kernel version 4.9 and introduced by Netronome. Here is a presentation about this feature:
  [eBPF/XDP hardware offload to SmartNICs][147] (Jakub Kicinski and Nic Viljoen, netdev 1.2, Tokyo, October 2016)

About **cBPF**:

* [_The BSD Packet Filter: A New Architecture for User-level Packet Capture_][66]  (Steven McCanne and Van Jacobson, 1992):
  The original paper about (classic) BPF.

* [The FreeBSD manual page about BPF][67] is a useful resource to understand cBPF programs.

* Daniel Borkmann has given at least two presentations on cBPF, [one in 2013 on mmap, BPF and Netsniff-NG][68], and [a very complete one in 2014 on tc and cls_bpf][69].

* On Cloudflare’s blog, Marek Majkowski presented his [use of BPF bytecode with the `xt_bpf` module for **iptables**][70]. It is worth mentioning that eBPF is also supported by this module, starting with Linux kernel 4.10 (I do not know of any talk or article about this, though).

* [Libpcap filters syntax][71]

### About XDP

* [XDP overview][72] on the IO Visor website.

* [_eXpress Data Path (XDP)_][73]  (Tom Herbert, Alexei Starovoitov, March 2016):
  The first presentation about XDP.

* [_BoF - What Can BPF Do For You?_][74]  (Brenden Blanco, LinuxCon, Toronto, August 2016).

* [_eXpress Data Path_][148]  (Brenden Blanco, Linux Meetup at Santa Clara, July 2016):
  Contains some (somewhat marketing?) **benchmark results**! With a single core:
  * ip routing drop: ~3.6 million packets per second (Mpps)

  * tc (with clsact qdisc) drop using BPF: ~4.2 Mpps

  * XDP drop using BPF: 20 Mpps (<10% CPU utilization)

  * XDP forward (on the port on which the packet was received) with rewrite: 10 Mpps

  (Tests performed with the mlx4 driver).

* Jesper Dangaard Brouer has several excellent sets of slides that are essential to fully understand the internals of XDP.
  * [_XDP − eXpress Data Path, Intro and future use-cases_][37]  (September 2016):
    _“Linux Kernel’s fight against DPDK”_. **Future plans** (as of this writing) for XDP and comparison with DPDK.

  * [_Network Performance Workshop_][38]  (netdev 1.2, Tokyo, October 2016):
    Additional hints about XDP internals and expected evolution.

  * [_XDP – eXpress Data Path, Used for DDoS protection_][39]  (OpenSourceDays, March 2017):
    Contains details and use cases about XDP, with **benchmark results**, and **code snippets** for **benchmarking** as well as for **basic DDoS protection** with eBPF/XDP (based on an IP blacklisting scheme).
  * [_Memory vs. Networking, Provoking and fixing memory bottlenecks_][40]  (LSF Memory Management Summit, March 2017):
    Provides a lot of details about current **memory issues** faced by XDP developers. Do not start with this one, but if you already know XDP and want to see how it really works on the page allocation side, this is a very helpful resource.

  * [_XDP for the Rest of Us_][41]  (netdev 2.1, Montreal, April 2017), with Andy Gospodarek:
    How to get started with eBPF and XDP for normal humans. This presentation was also summarized by Julia Evans on [her blog][42].

  (Jesper also created and tries to extend some documentation about eBPF and XDP, see [related section][75].)

* [_XDP workshop — Introduction, experience, and future development_][76]  (Tom Herbert, netdev 1.2, Tokyo, October 2016) — as of this writing, only the video is available; I don’t know if the slides will be added.

* [_High Speed Packet Filtering on Linux_][149]  (Gilberto Bertin, DEF CON 25, Las Vegas, July 2017) — an excellent introduction to state-of-the-art packet filtering on Linux, oriented towards DDoS protection, talking about packet processing in the kernel, kernel bypass, XDP and eBPF.

### About other components related or based on eBPF

* [_P4 on the Edge_][77]  (John Fastabend, May 2016):
  Presents the use of **P4**, a description language for packet processing, with BPF to create high-performance programmable switches.

* If you like audio presentations, there is an associated [OvS Orbit episode (#11), called  _**P4** on the Edge_][78], dating from August 2016. OvS Orbit are interviews conducted by Ben Pfaff, who is one of the core maintainers of Open vSwitch. In this case, John Fastabend is interviewed.

* [_P4, EBPF and Linux TC Offload_][79]  (Dinan Gunawardena and Jakub Kicinski, August 2016):
  Another presentation on **P4**, with some elements related to eBPF hardware offload on Netronome’s **NFP** (Network Flow Processor) architecture.

* **Cilium** is a technology initiated by Cisco and relying on BPF and XDP to provide “fast in-kernel networking and security policy enforcement for containers based on eBPF programs generated on the fly”. [The code of this project][150] is available on GitHub. Thomas Graf has been giving a number of presentations on this topic:
  * [_Cilium: Networking & Security for Containers with BPF & XDP_][43], also featuring a load balancer use case (Linux Plumbers conference, Santa Fe, November 2016)

  * [_Cilium: Networking & Security for Containers with BPF & XDP_][44]  (Docker Distributed Systems Summit, October 2016 — [video][45])

  * [_Cilium: Fast IPv6 container Networking with BPF and XDP_][46]  (LinuxCon, Toronto, August 2016)

  * [_Cilium: BPF & XDP for containers_][47]  (fosdem17, Brussels, Belgium, February 2017)

  A good deal of content is repeated between the different presentations; if in doubt, just pick the most recent one. Daniel Borkmann has also written [a generic introduction to Cilium][80] as a guest author on the Google Open Source blog.

* There are also podcasts about **Cilium**: an [OvS Orbit episode (#4)][81], in which Ben Pfaff interviews Thomas Graf (May 2016), and [another podcast by Ivan Pepelnjak][82], still with Thomas Graf about eBPF, P4, XDP and Cilium (October 2016).
* **Open vSwitch** (OvS), and its related project **Open Virtual Network** (OVN, an open source network virtualization solution) are considering using eBPF at various levels, with several proof-of-concept prototypes already implemented:

  * [Offloading OVS Flow Processing using eBPF][48] (William (Cheng-Chun) Tu, OvS conference, San Jose, November 2016)

  * [Coupling the Flexibility of OVN with the Efficiency of IOVisor][49] (Fulvio Risso, Matteo Bertrone and Mauricio Vasquez Bernal, OvS conference, San Jose, November 2016)

  These use cases for eBPF seem to be only at the stage of proposals (nothing merged into the OvS main branch) as far as I know, but it will be very interesting to see what comes out of it.

* XDP is envisioned to be of great help for protection against Distributed Denial-of-Service (DDoS) attacks. More and more presentations focus on this. For example, the talks from people from Cloudflare ([_XDP in practice: integrating XDP in our DDoS mitigation pipeline_][83]) or from Facebook ([_Droplet: DDoS countermeasures powered by BPF + XDP_][84]) at the netdev 2.1 conference in Montreal, Canada, in April 2017, present such use cases.

* [_CETH for XDP_][85]  (Yan Chan and Yunsong Lu, Linux Meetup, Santa Clara, July 2016):
  **CETH** stands for Common Ethernet Driver Framework for faster network I/O, a technology initiated by Mellanox.

* [**The VALE switch**][86], another virtual switch that can be used in conjunction with the netmap framework, has [a BPF extension module][87].

* **Suricata**, an open source intrusion detection system, [seems to rely on eBPF components][88] for its “capture bypass” features:
  [_The adventures of a Suricate in eBPF land_][89]  (Éric Leblond, netdev 1.2, Tokyo, October 2016)
  [_eBPF and XDP seen from the eyes of a meerkat_][90]  (Éric Leblond, Kernel Recipes, Paris, September 2017)

* [InKeV: In-Kernel Distributed Network Virtualization for DCN][91] (Z. Ahmed, M. H. Alizai and A. A. Syed, SIGCOMM, August 2016):
  **InKeV** is an eBPF-based datapath architecture for virtual networks, targeting data center networks. It was initiated by PLUMgrid, and claims to achieve better performances than OvS-based OpenStack solutions.

* [_**gobpf** - utilizing eBPF from Go_][92]  (Michael Schubert, fosdem17, Brussels, Belgium, February 2017):
  A “library to create, load and use eBPF programs from Go”

* [**ply**][93] is a small but flexible open source dynamic **tracer** for Linux, with some features similar to the bcc tools, but with a simpler language inspired by awk and dtrace, written by Tobias Waldekranz.

* If you read my previous article, you might be interested in this talk I gave about [implementing the OpenState interface with eBPF][151], for stateful packet processing, at fosdem17.

![](https://qmonnet.github.io/whirl-offload/img/icons/book.svg)

### Documentation

Once you have managed to get a broad idea of what BPF is, you can put aside generic presentations and start diving into the documentation. Below are the most complete documents about BPF specifications and functioning. Pick the ones you need and read them carefully!

### About BPF

* The **specification of BPF** (both classic and extended versions) can be found within the documentation of the Linux kernel, and in particular in file [linux/Documentation/networking/filter.txt][94]. The use of BPF as well as its internals are documented there. Also, this is where you can find **information about errors thrown by the verifier** when loading BPF code fails.
This can be helpful for troubleshooting obscure error messages.

* Also in the kernel tree, there is a document about **frequent Questions & Answers** on eBPF design in file [linux/Documentation/bpf/bpf_design_QA.txt][95].

* … But the kernel documentation is dense and not especially easy to read. If you are looking for a simple description of the eBPF language, head for [its **summarized description**][96] on the IO Visor GitHub repository instead.

* By the way, the IO Visor project gathered a lot of **resources about BPF**. Mostly, it is split between [the documentation directory][97] of its bcc repository, and the whole content of [the bpf-docs repository][98], both on GitHub. Note the existence of this excellent [BPF **reference guide**][99] containing a detailed description of BPF C and bcc Python helpers.

* To hack with BPF, there are some essential **Linux manual pages**. The first one is [the `bpf(2)` man page][100] about the `bpf()` **system call**, which is used to manage BPF programs and maps from userspace. It also contains a description of BPF advanced features (program types, maps and so on). The second one is mostly addressed to people wanting to attach BPF programs to tc interfaces: it is [the `tc-bpf(8)` man page][101], which is a reference for **using BPF with tc**, and includes some example commands and samples of code.

* Jesper Dangaard Brouer initiated an attempt to **update eBPF Linux documentation**, including **the different kinds of maps**. [He has a draft][102] to which contributions are welcome. Once ready, this document should be merged into the man pages and into kernel documentation.

* The Cilium project also has an excellent [**BPF and XDP Reference Guide**][103], written by core eBPF developers, that should prove immensely useful to any eBPF developer.

* David Miller has sent several enlightening emails about eBPF/XDP internals on the [xdp-newbies][152] mailing list. I could not find a link that gathers them at a single place, so here is a list:
  * [bpf.h and you…][50]

  * [Contextually speaking…][51]

  * [BPF Verifier Overview][52]

  The last one is possibly the best existing summary about the verifier at this date.

* Ferris Ellis started [a **blog post series about eBPF**][104]. As I write this paragraph, the first article is out, with some historical background and future expectations for eBPF. The next posts should be more technical, and look promising.

* [A **list of BPF features per kernel version**][153] is available in the bcc repository. Useful if you want to know the minimal kernel version that is required to run a given feature. I contributed and added the links to the commits that introduced each feature, so you can also easily access the commit logs from there.

### About tc

When using BPF for networking purposes in conjunction with tc, the Linux tool for **t**raffic **c**ontrol, one may wish to gather information about tc’s generic functioning. Here are a couple of resources about it.

* It is difficult to find simple tutorials about **QoS on Linux**. The two links I have are long and quite dense, but if you can find the time to read them, you will learn nearly everything there is to know about tc (nothing about BPF, though). Here they are:  [_Traffic Control HOWTO_  (Martin A. Brown, 2006)][105], and the  [_Linux Advanced Routing & Traffic Control HOWTO_  (“LARTC”) (Bert Hubert & al., 2002)][106].

* **tc manual pages** may not be up-to-date on your system, since several of them have been added lately.
If you cannot find the documentation for a particular queuing discipline (qdisc), class or filter, it may be worth checking the latest [manual pages for tc components][107].

* Some additional material can be found within the files of the iproute2 package itself: the package contains [some documentation][108], including some files that helped me better understand [the functioning of **tc’s actions**][109].
  **Edit:** While still available from the Git history, these files have been deleted from iproute2 in October 2017.

* Not exactly documentation: there was [a workshop about several tc features][110] (including filtering, BPF, tc offload, …) organized by Jamal Hadi Salim during the netdev 1.2 conference (October 2016).

* Bonus information: if you use `tc` a lot, here is some good news: I [wrote a bash completion function][111] for this tool, and it should be shipped with the iproute2 package coming with kernel version 4.6 and higher!

### About XDP

* Some [work-in-progress documentation (including specifications)][112] for XDP started by Jesper Dangaard Brouer, but meant to be a collaborative work. In progress (September 2016): you should expect it to change, and maybe to be moved at some point (Jesper [called for contribution][113], if you feel like improving it).

* The [BPF and XDP Reference Guide][114] from the Cilium project… Well, the name says it all.

### About P4 and BPF

[P4][159] is a language used to specify the behavior of a switch. It can be compiled for a number of hardware or software targets. As you may have guessed, one of these targets is BPF… The support is only partial: some P4 features cannot be translated to BPF, and in a similar way there are things that BPF can do but that would not be possible to express with P4. Anyway, the documentation related to **P4 use with BPF** [used to be hidden in the bcc repository][160]. This changed with the P4_16 version; the p4c reference compiler now includes [a backend for eBPF][161].

![](https://qmonnet.github.io/whirl-offload/img/icons/flask.svg)

### Tutorials

Brendan Gregg has produced excellent **tutorials** intended for people who want to **use bcc tools** for tracing and monitoring events in the kernel. [The first tutorial about using bcc itself][162] comes with eleven steps (as of today) to understand how to use the existing tools, while [the one **intended for Python developers**][163] focuses on developing new tools, across seventeen “lessons”.

Sasha Goldshtein also has some  [_**Linux Tracing Workshops Materials**_][164]  involving the use of several BPF tools for tracing.

Another post by Jean-Tiare Le Bigot provides a detailed (and instructive!) example of [using perf and eBPF to set up a low-level tracer][165] for ping requests and replies.

Few tutorials exist for network-related eBPF use cases. There are some interesting documents, including an  _eBPF Offload Starting Guide_ , on the [Open NFP][166] platform operated by Netronome. Other than these, the talk from Jesper,  [_XDP for the Rest of Us_][167] , is probably one of the best ways to get started with XDP.

![](https://qmonnet.github.io/whirl-offload/img/icons/gears.svg)

### Examples

It is always nice to have examples, to see how things really work. But BPF program samples are scattered across several projects, so I listed all the ones I know of. The examples do not always use the same helpers (for instance, tc and bcc both have their own set of helpers to make it easier to write BPF programs in C language).
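Before browsing the lists below, it may help to see the overall shape of a complete, minimal example. The following sketch is my own, not taken from any of the projects listed here: it writes an XDP program that drops every packet, compiles it, and attaches it with iproute2. It assumes a clang with BPF support, a 4.8+ kernel, a recent iproute2, and that `prog` is the section name the `ip` loader looks for by default:

```
# Write a minimal XDP program: drop every packet reaching the interface.
cat > xdp_drop.c << 'EOF'
#include <linux/bpf.h>

__attribute__((section("prog"), used))
int xdp_drop(struct xdp_md *ctx)
{
    return XDP_DROP;   /* see enum xdp_action in linux/bpf.h */
}

char __license[] __attribute__((section("license"), used)) = "GPL";
EOF

# Compile to an eBPF object file, then attach it with iproute2.
clang -O2 -target bpf -c xdp_drop.c -o xdp_drop.o
sudo ip link set dev eth0 xdp obj xdp_drop.o

# Detach it when done.
sudo ip link set dev eth0 xdp off
```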
### From the kernel

The kernel contains examples for most types of program: filters to bind to sockets or to tc interfaces, event tracing/monitoring, and even XDP. You can find these examples under the [linux/samples/bpf/][168] directory.

Also do not forget to have a look at the logs related to the (git) commits that introduced a particular feature; they may contain a detailed example of the feature.

### From package iproute2

The iproute2 package provides several examples as well. They are obviously oriented towards network programming, since the programs are to be attached to tc ingress or egress interfaces. The examples dwell under the [iproute2/examples/bpf/][169] directory.

### From bcc set of tools

Many examples are [provided with bcc][170]:

* Some are networking example programs, under the associated directory. They include socket filters, tc filters, and an XDP program.

* The `tracing` directory includes a lot of example **tracing programs**. The tutorials mentioned earlier are based on these. These programs cover a wide range of event monitoring functions, and some of them are production-oriented. Note that on certain Linux distributions (at least for Debian, Ubuntu, Fedora, Arch Linux), these programs have been [packaged][115] and can be “easily” installed by typing e.g. `# apt install bcc-tools`, but as of this writing (and except for Arch Linux), this first requires setting up IO Visor’s own package repository.

* There are also some examples **using Lua** as a different BPF back-end (that is, BPF programs are written with Lua instead of a subset of C, allowing to use the same language for front-end and back-end), in the third directory.

### Manual pages

While bcc is generally the easiest way to inject and run a BPF program in the kernel, attaching programs to tc interfaces can also be performed by the `tc` tool itself. So if you intend to **use BPF with tc**, you can find some example invocations in the [`tc-bpf(8)` manual page][171].

![](https://qmonnet.github.io/whirl-offload/img/icons/srcfile.svg)

### The code

Sometimes, BPF documentation or examples are not enough, and you may have no other solution than to display the code in your favorite text editor (which should be Vim of course) and to read it. Or you may want to hack into the code so as to patch or add features to the machine. So here are a few pointers to the relevant files; finding the functions you want is up to you!

### BPF code in the kernel

* The file [linux/include/linux/bpf.h][116] and its counterpart [linux/include/uapi/bpf.h][117] contain **definitions** related to eBPF, to be used respectively in the kernel and to interface with userspace programs.

* Following the same pattern, files [linux/include/linux/filter.h][118] and [linux/include/uapi/filter.h][119] contain information used to **run the BPF programs**.

* The **main pieces of code** related to BPF are under the [linux/kernel/bpf/][120] directory. **The different operations permitted by the system call**, such as program loading or map management, are implemented in file `syscall.c`, while `core.c` contains the **interpreter**. The other files have self-explanatory names: `verifier.c` contains the **verifier** (no kidding), `arraymap.c` the code used to interact with **maps** of type array, and so on.

* The **helpers**, as well as several functions related to networking (with tc, XDP…) and available to the user, are implemented in [linux/net/core/filter.c][121].
It also contains the code to migrate cBPF bytecode to eBPF (since all cBPF programs are now translated to eBPF in the kernel before being run).

* The **JIT compilers** are under the directory of their respective architectures, such as file [linux/arch/x86/net/bpf_jit_comp.c][122] for x86.

* You will find the code related to **the BPF components of tc** in the [linux/net/sched/][123] directory, and in particular in files `act_bpf.c` (action) and `cls_bpf.c` (filter).

* I have not hacked with **event tracing** in BPF, so I do not really know about the hooks for such programs. There is some stuff in [linux/kernel/trace/bpf_trace.c][124]. If you are interested in this and want to know more, you may dig into Brendan Gregg’s presentations or blog posts.

* Nor have I used **seccomp-BPF**. But the code is in [linux/kernel/seccomp.c][125], and some example use cases can be found in [linux/tools/testing/selftests/seccomp/seccomp_bpf.c][126].

### XDP hooks code

Once loaded into the in-kernel BPF virtual machine, **XDP** programs are hooked from userspace into the kernel network path thanks to a Netlink command. On reception, the function `dev_change_xdp_fd()` in file [linux/net/core/dev.c][172] is called and sets an XDP hook. Such hooks are located in the drivers of supported NICs. For example, the mlx4 driver used for some Mellanox hardware has hooks implemented in files under the [drivers/net/ethernet/mellanox/mlx4/][173] directory. File en_netdev.c receives Netlink commands and calls `mlx4_xdp_set()`, which in turn calls for instance `mlx4_en_process_rx_cq()` (for the RX side) implemented in file en_rx.c.

### BPF logic in bcc

One can find the code for the **bcc** set of tools [on the bcc GitHub repository][174]. The **Python code**, including the `BPF` class, is initiated in file [bcc/src/python/bcc/__init__.py][175]. But most of the interesting stuff—in my opinion—such as loading the BPF program into the kernel, happens [in the libbcc **C library**][176].

### Code to manage BPF with tc

The code related to BPF **in tc** comes with the iproute2 package, of course. Some of it is under the [iproute2/tc/][177] directory. The files f_bpf.c and m_bpf.c (and e_bpf.c) are used respectively to handle BPF filters and actions (and tc `exec` command, whatever this may be). File q_clsact.c defines the `clsact` qdisc especially created for BPF. But **most of the BPF userspace logic** is implemented in the [iproute2/lib/bpf.c][178] library, so this is probably where you should head to if you want to mess with BPF and tc (it was moved from file iproute2/tc/tc_bpf.c, where you may find the same code in older versions of the package).

### BPF utilities

The kernel also ships the sources of several BPF-related tools, under the [linux/tools/net/][179] or [linux/tools/bpf/][180] directory depending on your version:

* `bpf_asm` is a minimal cBPF assembler.

* `bpf_dbg` is a small debugger for cBPF programs.

* `bpf_jit_disasm` is generic for both BPF flavors and could be highly useful for JIT debugging.

* `bpftool` is a generic utility written by Jakub Kicinski that can be used to interact with eBPF programs and maps from userspace, for example to show, dump, or pin programs, or to show, create, pin, update, or delete maps.

Read the comments at the top of the source files to get an overview of their usage.
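To see how the pieces described above fit together (the `BPF` Python class compiling and loading the program through libbcc, the kernel managing the map), here is a small sketch of a per-process event counter written with bcc. It is only an illustration under the same assumptions as before (bcc installed, run as root); note that `increment()` is bcc map syntax rewritten at compile time, not a raw kernel helper.

```python
from bcc import BPF
import time

# Count calls to the kernel's vfs_read() per process, in an eBPF hash map.
prog = """
BPF_HASH(counts, u32, u64);        // declares an eBPF map of type hash

int kprobe__vfs_read(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    counts.increment(pid);         // bcc rewrites this into map lookup/update
    return 0;
}
"""

b = BPF(text=prog)                 # compile with LLVM, load via the bpf() syscall
time.sleep(5)                      # let the kernel side accumulate events

# From userspace, the map is readable like a Python dict.
for pid, count in sorted(b["counts"].items(), key=lambda kv: -kv[1].value):
    print("pid %d: %d reads" % (pid.value, count.value))
```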
### Other interesting chunks

If you are interested in the use of less common languages with BPF, bcc contains [a **P4 compiler** for BPF targets][181] as well as [a **Lua front-end**][182] that can be used as alternatives to the C subset and (in the case of Lua) to the Python tools.

### LLVM backend

The BPF backend used by clang / LLVM for compiling C into eBPF was added to the LLVM sources in [this commit][183] (and can also be accessed on [the GitHub mirror][184]).

### Running in userspace

As far as I know there are at least two eBPF userspace implementations. The first one, [uBPF][185], is written in C. It contains an interpreter, a JIT compiler for the x86_64 architecture, an assembler and a disassembler.

The code of uBPF seems to have been reused to produce a [generic implementation][186], which claims to support FreeBSD kernel, FreeBSD userspace, Linux kernel, Linux userspace and MacOSX userspace. It is used for the [BPF extension module for VALE switch][187].

The other userspace implementation is my own work: [rbpf][188], based on uBPF, but written in Rust. The interpreter and JIT-compiler work (both under Linux, only the interpreter for MacOSX and Windows); there may be more in the future.

### Commit logs

As stated earlier, do not hesitate to have a look at the commit log that introduced a particular BPF feature if you want to have more information about it. You can search the logs in many places, such as on [git.kernel.org][189], [on GitHub][190], or on your local repository if you have cloned it. If you are not familiar with git, try things like `git blame <file>` to see what commit introduced a particular line of code, then `git show <commit>` to have details (or search by keyword in `git log` results, but this may be tedious). See also [the list of eBPF features per kernel version][191] in the bcc repository, which links to relevant commits.

![](https://qmonnet.github.io/whirl-offload/img/icons/wand.svg)

### Troubleshooting

The enthusiasm about eBPF is quite recent, and so far I have not found a lot of resources intended to help with troubleshooting. So here are the few I have, augmented with my own recollection of pitfalls encountered while working with BPF.

### Errors at compilation time

* Make sure you have a recent enough version of the Linux kernel (see also [this document][127]).

* If you compiled the kernel yourself: make sure you correctly installed all components, including kernel image, headers and libc.

* When using the `bcc` shell function provided by the `tc-bpf` man page (to compile C code into BPF): I once had to add include paths to the clang call:

  ```
  __bcc() {
          clang -O2 -I "/usr/src/linux-headers-$(uname -r)/include/" \
                    -I "/usr/src/linux-headers-$(uname -r)/arch/x86/include/" \
                  -emit-llvm -c $1 -o - | \
          llc -march=bpf -filetype=obj -o "`basename $1 .c`.o"
  }

  ```

  (seems fixed as of today).

* For other problems with `bcc`, do not forget to have a look at [the FAQ][128] of the tool set.

* If you downloaded the examples from the iproute2 package in a version that does not exactly match your kernel, some errors can be triggered by the headers included in the files. The example snippets indeed assume that the same version of iproute2 package and kernel headers are installed on the system.
If this is not the case, download the correct version of iproute2, or edit the path of included files in the examples to point to the headers included in iproute2 (some problems may or may not occur at runtime, depending on the features in use).

### Errors at load and run time

* To load a program with tc, make sure you use a tc binary coming from an iproute2 version equivalent to the kernel in use.

* To load a program with bcc, make sure you have bcc installed on the system (just downloading the sources to run the Python script is not enough).

* With tc, if the BPF program does not return the expected values, check that you called it in the correct fashion: filter, or action, or filter with “direct-action” mode.

* Still with tc, note that actions cannot be attached directly to qdiscs or interfaces without the use of a filter.

* The errors thrown by the in-kernel verifier may be hard to interpret. [The kernel documentation][129] may help, so may [the reference guide][130] or, as a last resort, the source code (see above) (good luck!). For this kind of error it is also important to keep in mind that the verifier  _does not run_  the program. If you get an error about an invalid memory access or about uninitialized data, it does not mean that these problems actually occurred (or sometimes, that they can possibly occur at all). It means that your program is written in such a way that the verifier estimates that such errors could happen, and therefore it rejects the program.

* Note that the `tc` tool has a verbose mode, and that it works well with BPF: try appending `verbose` at the end of your command line.

* bcc also has verbose options: the `BPF` class has a `debug` argument that can take any combination of the three flags `DEBUG_LLVM_IR`, `DEBUG_BPF` and `DEBUG_PREPROCESSOR` (see details in [the source file][131]). It even embeds [some facilities to print output messages][132] for debugging the code.

* LLVM v4.0+ [embeds a disassembler][133] for eBPF programs. So if you compile your program with clang, adding the `-g` flag for compiling enables you to later dump your program in the rather human-friendly format used by the kernel verifier. To proceed to the dump, use:

  ```
  $ llvm-objdump -S -no-show-raw-insn bpf_program.o

  ```

* Working with maps? You want to have a look at [bpf-map][134], a very useful tool in Go created for the Cilium project, which can be used to dump the contents of kernel eBPF maps. There also exists [a clone][135] in Rust.

* There is an old [`bpf` tag on **StackOverflow**][136], but as of this writing it has been hardly used—ever (and there is nearly nothing related to the new eBPF version). If you are a reader from the Future though, you may want to check whether there has been more activity on this side.

![](https://qmonnet.github.io/whirl-offload/img/icons/zoomin.svg)

### And still more!

* In case you would like to easily **test XDP**, there is [a Vagrant setup][137] available. You can also **test bcc** [in a Docker container][138].

* Wondering where the **development and activities** around BPF occur? Well, the kernel patches always end up [on the netdev mailing list][139] (related to the Linux kernel networking stack development): search for “BPF” or “XDP” keywords. Since April 2017, there is also [a mailing list specially dedicated to XDP programming][140] (both for architecture discussions and for asking for help).
Many discussions and debates also occur [on the IO Visor mailing list][141], since BPF is at the heart of the project. If you only want to stay informed from time to time, there is also an [@IOVisor Twitter account][142].

And come back to this blog from time to time to see if there are new articles [about BPF][192]!

 _Special thanks to Daniel Borkmann for the numerous [additional documents][154] he pointed me to so that I could complete this collection._

--------------------------------------------------------------------------------

via: https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/

作者:[Quentin Monnet ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://qmonnet.github.io/whirl-offload/about/
[1]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf
[2]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp
[3]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-other-components-related-or-based-on-ebpf
[4]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf-1
[5]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-tc
[6]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1
[7]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-p4-and-bpf
[8]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-the-kernel
[9]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-package-iproute2
[10]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-bcc-set-of-tools
[11]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#manual-pages
[12]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-code-in-the-kernel
[13]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#xdp-hooks-code
[14]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-logic-in-bcc
[15]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#code-to-manage-bpf-with-tc
[16]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-utilities
[17]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#other-interesting-chunks
[18]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#llvm-backend
[19]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#running-in-userspace
[20]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#commit-logs
[21]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-compilation-time
[22]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-load-and-run-time
[23]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#generic-presentations
[24]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#documentation
[25]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#tutorials
[26]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#examples
[27]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#the-code
[28]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#troubleshooting
[29]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#and-still-more
[30]:http://netdevconf.org/1.2/session.html?daniel-borkmann
+[31]:http://netdevconf.org/1.2/slides/oct5/07_tcws_daniel_borkmann_2016_tcws.pdf +[32]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop +[33]:http://www.netdevconf.org/1.1/proceedings/slides/borkmann-tc-classifier-cls-bpf.pdf +[34]:http://www.netdevconf.org/1.1/proceedings/papers/On-getting-tc-classifier-fully-programmable-with-cls-bpf.pdf +[35]:https://archive.fosdem.org/2016/schedule/event/ebpf/attachments/slides/1159/export/events/attachments/ebpf/slides/1159/ebpf.pdf +[36]:https://fosdem.org/2017/schedule/event/ebpf_xdp/ +[37]:http://people.netfilter.org/hawk/presentations/xdp2016/xdp_intro_and_use_cases_sep2016.pdf +[38]:http://netdevconf.org/1.2/session.html?jesper-performance-workshop +[39]:http://people.netfilter.org/hawk/presentations/OpenSourceDays2017/XDP_DDoS_protecting_osd2017.pdf +[40]:http://people.netfilter.org/hawk/presentations/MM-summit2017/MM-summit2017-JesperBrouer.pdf +[41]:http://netdevconf.org/2.1/session.html?gospodarek +[42]:http://jvns.ca/blog/2017/04/07/xdp-bpf-tutorial/ +[43]:http://www.slideshare.net/ThomasGraf5/clium-container-networking-with-bpf-xdp +[44]:http://www.slideshare.net/Docker/cilium-bpf-xdp-for-containers-66969823 +[45]:https://www.youtube.com/watch?v=TnJF7ht3ZYc&list=PLkA60AVN3hh8oPas3cq2VA9xB7WazcIgs +[46]:http://www.slideshare.net/ThomasGraf5/cilium-fast-ipv6-container-networking-with-bpf-and-xdp +[47]:https://fosdem.org/2017/schedule/event/cilium/ +[48]:http://openvswitch.org/support/ovscon2016/7/1120-tu.pdf +[49]:http://openvswitch.org/support/ovscon2016/7/1245-bertrone.pdf +[50]:https://www.spinics.net/lists/xdp-newbies/msg00179.html +[51]:https://www.spinics.net/lists/xdp-newbies/msg00181.html +[52]:https://www.spinics.net/lists/xdp-newbies/msg00185.html +[53]:http://schd.ws/hosted_files/ossna2017/da/BPFandXDP.pdf +[54]:https://speakerdeck.com/tuxology/the-bsd-packet-filter +[55]:http://www.slideshare.net/brendangregg/bpf-tracing-and-more +[56]:http://fr.slideshare.net/brendangregg/linux-bpf-superpowers +[57]:https://www.socallinuxexpo.org/sites/default/files/presentations/Room%20211%20-%20IOVisor%20-%20SCaLE%2014x.pdf +[58]:https://events.linuxfoundation.org/sites/events/files/slides/ebpf_on_the_mainframe_lcon_2015.pdf +[59]:https://events.linuxfoundation.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf +[60]:https://events.linuxfoundation.org/sites/events/files/slides/bpf_collabsummit_2015feb20.pdf +[61]:https://lwn.net/Articles/603983/ +[62]:http://www.slideshare.net/vh21/meet-cutebetweenebpfandtracing +[63]:http://www.slideshare.net/vh21/linux-kernel-tracing +[64]:http://www.slideshare.net/ThomasGraf5/linux-networking-explained +[65]:http://www.slideshare.net/ThomasGraf5/linuxcon-2015-linux-kernel-networking-walkthrough +[66]:http://www.tcpdump.org/papers/bpf-usenix93.pdf +[67]:http://www.gsp.com/cgi-bin/man.cgi?topic=bpf +[68]:http://borkmann.ch/talks/2013_devconf.pdf +[69]:http://borkmann.ch/talks/2014_devconf.pdf +[70]:https://blog.cloudflare.com/introducing-the-bpf-tools/ +[71]:http://biot.com/capstats/bpf.html +[72]:https://www.iovisor.org/technology/xdp +[73]:https://github.com/iovisor/bpf-docs/raw/master/Express_Data_Path.pdf +[74]:https://events.linuxfoundation.org/sites/events/files/slides/iovisor-lc-bof-2016.pdf +[75]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1 +[76]:http://netdevconf.org/1.2/session.html?herbert-xdp-workshop +[77]:https://schd.ws/hosted_files/2016p4workshop/1d/Intel%20Fastabend-P4%20on%20the%20Edge.pdf 
+[78]:https://ovsorbit.benpfaff.org/#e11 +[79]:http://open-nfp.org/media/pdfs/Open_NFP_P4_EBPF_Linux_TC_Offload_FINAL.pdf +[80]:https://opensource.googleblog.com/2016/11/cilium-networking-and-security.html +[81]:https://ovsorbit.benpfaff.org/ +[82]:http://blog.ipspace.net/2016/10/fast-linux-packet-forwarding-with.html +[83]:http://netdevconf.org/2.1/session.html?bertin +[84]:http://netdevconf.org/2.1/session.html?zhou +[85]:http://www.slideshare.net/IOVisor/ceth-for-xdp-linux-meetup-santa-clara-july-2016 +[86]:http://info.iet.unipi.it/~luigi/vale/ +[87]:https://github.com/YutaroHayakawa/vale-bpf +[88]:https://www.stamus-networks.com/2016/09/28/suricata-bypass-feature/ +[89]:http://netdevconf.org/1.2/slides/oct6/10_suricata_ebpf.pdf +[90]:https://www.slideshare.net/ennael/kernel-recipes-2017-ebpf-and-xdp-eric-leblond +[91]:https://github.com/iovisor/bpf-docs/blob/master/university/sigcomm-ccr-InKev-2016.pdf +[92]:https://fosdem.org/2017/schedule/event/go_bpf/ +[93]:https://wkz.github.io/ply/ +[94]:https://www.kernel.org/doc/Documentation/networking/filter.txt +[95]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/Documentation/bpf/bpf_design_QA.txt?id=2e39748a4231a893f057567e9b880ab34ea47aef +[96]:https://github.com/iovisor/bpf-docs/blob/master/eBPF.md +[97]:https://github.com/iovisor/bcc/tree/master/docs +[98]:https://github.com/iovisor/bpf-docs/ +[99]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md +[100]:http://man7.org/linux/man-pages/man2/bpf.2.html +[101]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html +[102]:https://prototype-kernel.readthedocs.io/en/latest/bpf/index.html +[103]:http://docs.cilium.io/en/latest/bpf/ +[104]:https://ferrisellis.com/tags/ebpf/ +[105]:http://linux-ip.net/articles/Traffic-Control-HOWTO/ +[106]:http://lartc.org/lartc.html +[107]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/man/man8 +[108]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc?h=v4.13.0 +[109]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc/actions?h=v4.13.0 +[110]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop +[111]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/commit/bash-completion/tc?id=27d44f3a8a4708bcc99995a4d9b6fe6f81e3e15b +[112]:https://prototype-kernel.readthedocs.io/en/latest/networking/XDP/index.html +[113]:https://marc.info/?l=linux-netdev&m=147436253625672 +[114]:http://docs.cilium.io/en/latest/bpf/ +[115]:https://github.com/iovisor/bcc/blob/master/INSTALL.md +[116]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/bpf.h +[117]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/bpf.h +[118]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/filter.h +[119]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/filter.h +[120]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/bpf +[121]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/filter.c +[122]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/x86/net/bpf_jit_comp.c +[123]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/sched +[124]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/trace/bpf_trace.c +[125]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/seccomp.c 
+[126]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/seccomp/seccomp_bpf.c +[127]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md +[128]:https://github.com/iovisor/bcc/blob/master/FAQ.txt +[129]:https://www.kernel.org/doc/Documentation/networking/filter.txt +[130]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md +[131]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py +[132]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md#output +[133]:https://www.spinics.net/lists/netdev/msg406926.html +[134]:https://github.com/cilium/bpf-map +[135]:https://github.com/badboy/bpf-map +[136]:https://stackoverflow.com/questions/tagged/bpf +[137]:https://github.com/iovisor/xdp-vagrant +[138]:https://github.com/zlim/bcc-docker +[139]:http://lists.openwall.net/netdev/ +[140]:http://vger.kernel.org/vger-lists.html#xdp-newbies +[141]:http://lists.iovisor.org/pipermail/iovisor-dev/ +[142]:https://twitter.com/IOVisor +[143]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#what-is-bpf +[144]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#dive-into-the-bytecode +[145]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#resources +[146]:https://github.com/qmonnet/whirl-offload/commits/gh-pages/_posts/2016-09-01-dive-into-bpf.md +[147]:http://netdevconf.org/1.2/session.html?jakub-kicinski +[148]:http://www.slideshare.net/IOVisor/express-data-path-linux-meetup-santa-clara-july-2016 +[149]:https://cdn.shopify.com/s/files/1/0177/9886/files/phv2017-gbertin.pdf +[150]:https://github.com/cilium/cilium +[151]:https://fosdem.org/2017/schedule/event/stateful_ebpf/ +[152]:http://vger.kernel.org/vger-lists.html#xdp-newbies +[153]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md +[154]:https://github.com/qmonnet/whirl-offload/commit/d694f8081ba00e686e34f86d5ee76abeb4d0e429 +[155]:http://openvswitch.org/pipermail/dev/2014-October/047421.html +[156]:https://qmonnet.github.io/whirl-offload/2016/07/15/beba-research-project/ +[157]:https://www.iovisor.org/resources/blog +[158]:http://www.brendangregg.com/blog/2016-03-05/linux-bpf-superpowers.html +[159]:http://p4.org/ +[160]:https://github.com/iovisor/bcc/tree/master/src/cc/frontends/p4 +[161]:https://github.com/p4lang/p4c/blob/master/backends/ebpf/README.md +[162]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md +[163]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md +[164]:https://github.com/goldshtn/linux-tracing-workshop +[165]:https://blog.yadutaf.fr/2017/07/28/tracing-a-packet-journey-using-linux-tracepoints-perf-ebpf/ +[166]:https://open-nfp.org/dataplanes-ebpf/technical-papers/ +[167]:http://netdevconf.org/2.1/session.html?gospodarek +[168]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/samples/bpf +[169]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/examples/bpf +[170]:https://github.com/iovisor/bcc/tree/master/examples +[171]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html +[172]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/dev.c +[173]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/mellanox/mlx4/ +[174]:https://github.com/iovisor/bcc/ +[175]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py +[176]:https://github.com/iovisor/bcc/blob/master/src/cc/libbpf.c 
[177]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/tc
[178]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/lib/bpf.c
[179]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/net
[180]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/tools/bpf
[181]:https://github.com/iovisor/bcc/tree/master/src/cc/frontends/p4/compiler
[182]:https://github.com/iovisor/bcc/tree/master/src/lua
[183]:https://reviews.llvm.org/D6494
[184]:https://github.com/llvm-mirror/llvm/commit/4fe85c75482f9d11c5a1f92a1863ce30afad8d0d
[185]:https://github.com/iovisor/ubpf/
[186]:https://github.com/YutaroHayakawa/generic-ebpf
[187]:https://github.com/YutaroHayakawa/vale-bpf
[188]:https://github.com/qmonnet/rbpf
[189]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
[190]:https://github.com/torvalds/linux
[191]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[192]:https://qmonnet.github.io/whirl-offload/categories/#BPF

diff --git a/sources/tech/20171107 GitHub welcomes all CI tools.md b/sources/tech/20171107 GitHub welcomes all CI tools.md
new file mode 100644
index 0000000000..7bef351bd6
--- /dev/null
+++ b/sources/tech/20171107 GitHub welcomes all CI tools.md
@@ -0,0 +1,95 @@
+translating---geekpi

GitHub welcomes all CI tools
====================


[![GitHub and all CI tools](https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png)][11]

Continuous Integration ([CI][12]) tools help you stick to your team's quality standards by running tests every time you push a new commit and [reporting the results][13] to a pull request. Combined with continuous delivery ([CD][14]) tools, you can also test your code on multiple configurations, run additional performance tests, and automate every step [until production][15].

There are several CI and CD tools that [integrate with GitHub][16], some of which you can install in a few clicks from [GitHub Marketplace][17]. With so many options, you can pick the best tool for the job—even if it's not the one that comes pre-integrated with your system.

The tools that will work best for you depend on many factors, including:

* Programming language and application architecture

* Operating system and browsers you plan to support

* Your team's experience and skills

* Scaling capabilities and plans for growth

* Geographic distribution of dependent systems and the people who use them

* Packaging and delivery goals

Of course, it isn't possible to optimize your CI tool for all of these scenarios. The people who build them have to choose which use cases to serve best—and when to prioritize complexity over simplicity. For example, if you like to test small applications written in a particular programming language for one platform, you won't need the complexity of a tool that tests embedded software controllers on dozens of platforms with a broad mix of programming languages and frameworks.

If you need a little inspiration for which CI tool might work best, take a look at [popular GitHub projects][18]. Many show the status of their integrated CI/CD tools as badges in their README.md. We've also analyzed the use of CI tools across more than 50 million repositories in the GitHub community, and found a lot of variety.
The following diagram shows the relative percentage of the top 10 CI tools used with GitHub.com, based on the most used [commit status contexts][19] within our pull requests.

 _Our analysis also showed that many teams use more than one CI tool in their projects, allowing them to emphasize what each tool does best._

 [![Top 10 CI systems used with GitHub.com based on most used commit status contexts](https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png)][20]

If you'd like to check them out, here are the top 10 tools teams use:

* [Travis CI][1]

* [Circle CI][2]

* [Jenkins][3]

* [AppVeyor][4]

* [CodeShip][5]

* [Drone][6]

* [Semaphore CI][7]

* [Buildkite][8]

* [Wercker][9]

* [TeamCity][10]

It's tempting to just pick the default, pre-integrated tool without taking the time to research and choose the best one for the job, but there are plenty of [excellent choices][21] built for your specific use cases. And if you change your mind later, no problem. When you choose the best tool for a specific situation, you're guaranteeing tailored performance and the freedom of interchangeability when it no longer fits.

Ready to see how CI tools can fit into your workflow?

[Browse GitHub Marketplace][22]

--------------------------------------------------------------------------------

via: https://github.com/blog/2463-github-welcomes-all-ci-tools

作者:[jonico ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://github.com/jonico
[1]:https://travis-ci.org/
[2]:https://circleci.com/
[3]:https://jenkins.io/
[4]:https://www.appveyor.com/
[5]:https://codeship.com/
[6]:http://try.drone.io/
[7]:https://semaphoreci.com/
[8]:https://buildkite.com/
[9]:http://www.wercker.com/
[10]:https://www.jetbrains.com/teamcity/
[11]:https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png
[12]:https://en.wikipedia.org/wiki/Continuous_integration
[13]:https://github.com/blog/2051-protected-branches-and-required-status-checks
[14]:https://en.wikipedia.org/wiki/Continuous_delivery
[15]:https://developer.github.com/changes/2014-01-09-preview-the-new-deployments-api/
[16]:https://github.com/works-with/category/continuous-integration
[17]:https://github.com/marketplace/category/continuous-integration
[18]:https://github.com/explore?trending=repositories#trending
[19]:https://developer.github.com/v3/repos/statuses/
[20]:https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png
[21]:https://github.com/works-with/category/continuous-integration
[22]:https://github.com/marketplace/category/continuous-integration

diff --git a/sources/tech/20171112 Love Your Bugs.md b/sources/tech/20171112 Love Your Bugs.md
new file mode 100644
index 0000000000..bf79f27cf7
--- /dev/null
+++ b/sources/tech/20171112 Love Your Bugs.md
@@ -0,0 +1,311 @@
+Love Your Bugs
============================================================

In early October I gave a keynote at [Python Brasil][1] in Belo Horizonte. Here is an aspirational and lightly edited transcript of the talk. There is also a video available [here][2].

### I love bugs

I’m currently a senior engineer at [Pilot.com][3], working on automating bookkeeping for startups.
Before that, I worked for [Dropbox][4] on the desktop client team, and I’ll have a few stories about my work there. Earlier, I was a facilitator at the [Recurse Center][5], a writers’ retreat for programmers in NYC. I studied astrophysics in college and worked in finance for a few years before becoming an engineer.

But none of that is really important to remember – the only thing you need to know about me is that I love bugs. I love bugs because they’re entertaining. They’re dramatic. The investigation of a great bug can be full of twists and turns. A great bug is like a good joke or a riddle – you’re expecting one outcome, but the result veers off in another direction.

Over the course of this talk I’m going to tell you about some bugs that I have loved, explain why I love bugs so much, and then convince you that you should love bugs too.

### Bug #1

Ok, straight into bug #1. This is a bug that I encountered while working at Dropbox. As you may know, Dropbox is a utility that syncs your files from one computer to the cloud and to your other computers.



```
 +--------------+ +---------------+
 | | | |
 | METASERVER | | BLOCKSERVER |
 | | | |
 +-+--+---------+ +---------+-----+
 ^ | ^
 | | |
 | | +----------+ |
 | +---> | | |
 | | CLIENT +--------+
 +--------+ |
 +----------+
```


Here’s a vastly simplified diagram of Dropbox’s architecture. The desktop client runs on your local computer listening for changes in the file system. When it notices a changed file, it reads the file, then hashes the contents in 4MB blocks. These blocks are stored in the backend in a giant key-value store that we call blockserver. The key is the digest of the hashed contents, and the values are the contents themselves.

Of course, we want to avoid uploading the same block multiple times. You can imagine that if you’re writing a document, you’re probably mostly changing the end – we don’t want to upload the beginning over and over. So before uploading a block to the blockserver the client talks to a different server that’s responsible for managing metadata and permissions, among other things. The client asks the metaserver whether it needs the block or has seen it before. The “metaserver” responds with whether or not each block needs to be uploaded.

So the request and response look roughly like this: The client says, “I have a changed file made up of blocks with hashes `'abcd,deef,efgh'`”. The server responds, “I have those first two, but upload the third.” Then the client sends the block up to the blockserver.


```
 +--------------+ +---------------+
 | | | |
 | METASERVER | | BLOCKSERVER |
 | | | |
 +-+--+---------+ +---------+-----+
 ^ | ^
 | | 'ok, ok, need' |
'abcd,deef,efgh' | | +----------+ | efgh: [contents]
 | +---> | | |
 | | CLIENT +--------+
 +--------+ |
 +----------+
```



That’s the setup. So here’s the bug.



```
 +--------------+
 | |
 | METASERVER |
 | |
 +-+--+---------+
 ^ |
 | | '???'
'abcdldeef,efgh' | | +----------+
 ^ | +---> | |
 ^ | | CLIENT +
 +--------+ |
 +----------+
```

Sometimes the client would make a weird request: each hash value should have been sixteen characters long, but instead it was thirty-three characters long – twice as many plus one. The server wouldn’t know what to do with this and would throw an exception.
We’d see this exception get reported, and we’d go look at the log files from the desktop client, and really weird stuff would be going on – the client’s local database had gotten corrupted, or python would be throwing MemoryErrors, and none of it would make sense.

If you’ve never seen this problem before, it’s totally mystifying. But once you’d seen it once, you can recognize it every time thereafter. Here’s a hint: in the middle of each 33-character string, where a comma should have been, we’d most often see `l`. These are the other characters we’d see in the middle position:


```
l \x0c < $ ( . -
```

The ordinal value for an ASCII comma – `,` – is 44. The ordinal value for `l` is 108. In binary, here’s how those two are represented:

```
bin(ord(',')): 0101100
bin(ord('l')): 1101100
```

You’ll notice that an `l` is exactly one bit away from a comma. And herein lies your problem: a bitflip. One bit of memory that the desktop client is using has gotten corrupted, and now the desktop client is sending a request to the server that is garbage.

And here are the other characters we’d frequently see instead of the comma when a different bit had been flipped.



```
, : 0101100
l : 1101100
\x0c : 0001100
< : 0111100
$ : 0100100
( : 0101000
. : 0101110
- : 0101101
```


### Bitflips are real!

I love this bug because it shows that bitflips are a real thing that can happen, not just a theoretical concern. In fact, there are some domains where they’re more common than others. One such domain is if you’re getting requests from users with low-end or old hardware, which is true for a lot of laptops running Dropbox. Another domain with lots of bitflips is outer space – there’s no atmosphere in space to protect your memory from energetic particles and radiation, so bitflips are pretty common.

You probably really care about correctness in space – your code might be keeping astronauts alive on the ISS, for example, but even if it’s not mission-critical, it’s hard to do software updates to space. If you really need your application to defend against bitflips, there are a variety of hardware & software approaches you can take, and there’s a [very interesting talk][6] by Katie Betchold about this.

Dropbox in this context doesn’t really need to protect against bitflips. The machine that is corrupting memory is a user’s machine, so we can detect if the bitflip happens to fall in the comma – but if it’s in a different character we don’t necessarily know it, and if the bitflip is in the actual file data read off of disk, then we have no idea. There’s a pretty limited set of places where we could address this, and instead we decide to basically silence the exception and move on. Often this kind of bug resolves after the client restarts.

### Unlikely bugs aren’t impossible

This is one of my favorite bugs for a couple of reasons. The first is that it’s a reminder of the difference between unlikely and impossible. At sufficient scale, unlikely events start to happen at a noticeable rate.

### Social bugs

My second favorite thing about this bug is that it’s a tremendously social one. This bug can crop up anywhere that the desktop client talks to the server, which is a lot of different endpoints and components in the system. This meant that a lot of different engineers at Dropbox would see versions of the bug.
The first time you see it, it can  _really_  make you scratch your head, but after that it’s easy to diagnose, and the investigation is really quick: you look at the middle character and see if it’s an `l`.

### Cultural differences

One interesting side-effect of this bug was that it exposed a cultural difference between the server and client teams. Occasionally this bug would be spotted by a member of the server team and investigated from there. If one of your  _servers_  is flipping bits, that’s probably not random chance – it’s probably memory corruption, and you need to find the affected machine and get it out of the pool as fast as possible or you risk corrupting a lot of user data. That’s an incident, and you need to respond quickly. But if the user’s machine is corrupting data, there’s not a lot you can do.

### Share your bugs

So if you’re investigating a confusing bug, especially one in a big system, don’t forget to talk to people about it. Maybe your colleagues have seen a bug shaped like this one before. If they have, you might save a lot of time. And if they haven’t, don’t forget to tell people about the solution once you’ve figured it out – write it up or tell the story in your team meeting. Then the next time your team hits something similar, you’ll all be more prepared.

### How bugs can help you learn

### Recurse Center

Before I joined Dropbox, I worked for the Recurse Center. The idea behind RC is that it’s a community of self-directed learners spending time together getting better as programmers. That is the full extent of the structure of RC: there’s no curriculum or assignments or deadlines. The only scoping is a shared goal of getting better as a programmer. We’d see people come to participate in the program who had gotten CS degrees but didn’t feel like they had a solid handle on practical programming, or people who had been writing Java for ten years and wanted to learn Clojure or Haskell, and many other profiles as well.

My job there was as a facilitator, helping people make the most of the lack of structure and providing guidance based on what we’d learned from earlier participants. So my colleagues and I were very interested in the best techniques for learning for self-motivated adults.

### Deliberate Practice

There’s a lot of different research in this space, and one of the ones I think is most interesting is the idea of deliberate practice. Deliberate practice is an attempt to explain the difference in performance between experts & amateurs. And the guiding principle here is that if you look just at innate characteristics – genetic or otherwise – they don’t go very far towards explaining the difference in performance. So the researchers, originally Ericsson, Krampe, and Tesch-Romer, set out to discover what did explain the difference. And what they settled on was time spent in deliberate practice.

Deliberate practice is pretty narrow in their definition: it’s not work for pay, and it’s not playing for fun. You have to be operating on the edge of your ability, doing a project appropriate for your skill level (not so easy that you don’t learn anything and not so hard that you don’t make any progress). You also have to get immediate feedback on whether or not you’ve done the thing correctly.

This is really exciting, because it’s a framework for how to build expertise. But the challenge is that as programmers this is really hard advice to apply. It’s hard to know whether you’re operating at the edge of your ability.
Immediate corrective feedback is very rare – in some cases you’re lucky to get feedback ever, and in other cases maybe it takes months. You can get quick feedback on small things in the REPL and so on, but if you’re making a design decision or picking a technology, you’re not going to get feedback on those things for quite a long time.

But one category of programming where deliberate practice is a useful model is debugging. If you wrote code, then you had a mental model of how it worked when you wrote it. But your code has a bug, so your mental model isn’t quite right. By definition you’re on the boundary of your understanding – so, great! You’re about to learn something new. And if you can reproduce the bug, that’s a rare case where you can get immediate feedback on whether or not your fix is correct.

A bug like this might teach you something small about your program, or you might learn something larger about the system your code is running in. Now I’ve got a story for you about a bug like that.

### Bug #2

This bug is also one that I encountered at Dropbox. At the time, I was investigating why some desktop clients weren’t sending logs as consistently as we expected. I’d started digging into the client logging system and discovered a bunch of interesting bugs. I’ll tell you only the subset of those bugs that is relevant to this story.

Again here’s a very simplified architecture of the system.


```
 +--------------+
 | |
 +---+ +----------> | LOG SERVER |
 |log| | | |
 +---+ | +------+-------+
 | | |
 +-----+----+ | 200 ok
 | | |
 | CLIENT | <-----------+
 | |
 +-----+----+
 ^
 +--------+--------+--------+
 | ^ ^ |
 +--+--+ +--+--+ +--+--+ +--+--+
 | log | | log | | log | | log |
 | | | | | | | |
 | | | | | | | |
 +-----+ +-----+ +-----+ +-----+
```

The desktop client would generate logs. Those logs were compressed, encrypted, and written to disk. Then every so often the client would send them up to the server. The client would read a log off of disk and send it to the log server. The server would decrypt it and store it, then respond with a 200.

If the client couldn’t reach the log server, it wouldn’t let the log directory grow unbounded. After a certain point it would start deleting logs to keep the directory under a maximum size.

The first two bugs were not a big deal on their own. The first one was that the desktop client sent logs up to the server starting with the oldest one instead of starting with the newest. This isn’t really what you want – for example, the server would tell the client to send logs if the client reported an exception, so probably you care about the logs that just happened and not the oldest logs that happen to be on disk.

The second bug was similar to the first: if the log directory hit its maximum size, the client would delete the logs starting with the newest instead of starting with the oldest. Again, you lose log files either way, but you probably care less about the older ones.

The third bug had to do with the encryption. Sometimes, the server would be unable to decrypt a log file. (We generally didn’t figure out why – maybe it was a bitflip.) We weren’t handling this error correctly on the backend, so the server would reply with a 500. The client would behave reasonably in the face of a 500: it would assume that the server was down. So it would stop sending log files and not try to send up any of the others.

Returning a 500 on a corrupted log file is clearly not the right behavior.
You could consider returning a 400, since it’s a problem with the client request. But the client also can’t fix the problem – if the log file can’t be decrypted now, we’ll never be able to decrypt it in the future. What you really want the client to do is just delete the log and move on. In fact, that’s the default behavior when the client gets a 200 back from the server for a log file that was successfully stored. So we said, ok – if the log file can’t be decrypted, just return a 200.

All of these bugs were straightforward to fix. The first two bugs were on the client, so we’d fixed them on the alpha build but they hadn’t gone out to the majority of clients. The third bug we fixed on the server and deployed.

### 📈

Suddenly traffic to the log cluster spikes. The serving team reaches out to us to ask if we know what’s going on. It takes me a minute to put all the pieces together.

Before these fixes, there were four things going on:

1. Log files were sent up starting with the oldest

2. Log files were deleted starting with the newest

3. If the server couldn’t decrypt a log file it would 500

4. If the client got a 500 it would stop sending logs

A client with a corrupted log file would try to send it, the server would 500, the client would give up sending logs. On its next run, it would try to send the same file again, fail again, and give up again. Eventually the log directory would get full, at which point the client would start deleting its newest files, leaving the corrupted one on disk.

The upshot of these three bugs: if a client ever had a corrupted log file, we would never see logs from that client again.

The problem is that there were a lot more clients in this state than we thought. Any client with a single corrupted file had been dammed up from sending logs to the server. Now that dam was cleared, and all of them were sending up the rest of the contents of their log directories.

### Our options

Ok, there’s a huge flood of traffic coming from machines around the world. What can we do? (This is a fun thing about working at a company with Dropbox’s scale, and particularly Dropbox’s scale of desktop clients: you can trigger a self-DDOS very easily.)

The first option when you do a deploy and things start going sideways is to rollback. Totally reasonable choice, but in this case, it wouldn’t have helped us. The state that we’d transformed wasn’t the state on the server but the state on the client – we’d deleted those files. Rolling back the server would prevent additional clients from entering this state but it wouldn’t solve the problem.

What about increasing the size of the logging cluster? We did that – and started getting even more requests, now that we’d increased our capacity. We increased it again, but you can’t do that forever. Why not? This cluster isn’t isolated. It’s making requests into another cluster, in this case to handle exceptions. If you have a DDOS pointed at one cluster, and you keep scaling that cluster, you’re going to knock over its dependencies too, and now you have two problems.

Another option we considered was shedding load – you don’t need every single log file, so could we just drop requests? One of the challenges here was that we didn’t have an easy way to tell good traffic from bad. We couldn’t quickly differentiate which log files were old and which were new.
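To make the trap concrete, here is a toy sketch, purely illustrative (not Dropbox’s actual client code), of how behaviors 1 and 4 above combine so that a single bad file dams up everything behind it:

```python
# Toy model of the pre-fix client upload loop.
def send_logs(logs, server_decrypt_ok):
    sent = []
    for log in sorted(logs):            # behavior 1: oldest file first
        if not server_decrypt_ok(log):  # server 500s on an undecryptable file...
            break                       # behavior 4: client gives up entirely
        sent.append(log)
    return sent

logs = ["2017-01-old.log", "2017-02-corrupt.log", "2017-03-new.log"]
# The corrupt file is reached before anything newer, so nothing
# after it is ever sent: this prints ['2017-01-old.log'].
print(send_logs(logs, lambda log: "corrupt" not in log))
```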
The solution we hit on is one that’s been used at Dropbox on a number of different occasions: we have a custom header, `chillout`, which every client in the world respects. If the client gets a response with this header, then it doesn’t make any requests for the provided number of seconds. Someone very wise added this to the Dropbox client very early on, and it’s come in handy more than once over the years. The logging server didn’t have the ability to set that header, but that’s an easy problem to solve. So two of my colleagues, Isaac Goldberg and John Lai, implemented support for it. We set the logging cluster chillout to two minutes initially and then managed it down as the deluge subsided over the next couple of days.

### Know your system

The first lesson from this bug is to know your system. I had a good mental model of the interaction between the client and the server, but I wasn’t thinking about what would happen when the server was interacting with all the clients at once. There was a level of complexity that I hadn’t thought all the way through.

### Know your tools

The second lesson is to know your tools. If things go sideways, what options do you have? Can you reverse your migration? How will you know if things are going sideways and how can you discover more? All of those things are great to know before a crisis – but if you don’t, you’ll learn them during a crisis and then never forget.

### Feature flags & server-side gating

The third lesson is for you if you’re writing a mobile or a desktop application:  _You need server-side feature gating and server-side flags._  When you discover a problem and you don’t have server-side controls, the resolution might take days or weeks as you push out a new release or submit a new version to the app store. That’s a bad situation to be in. The Dropbox desktop client isn’t going through an app store review process, but just pushing out a build to tens of millions of clients takes time. Compare that to hitting a problem in your feature and flipping a switch on the server: ten minutes later your problem is resolved.

This strategy is not without its costs. Having a bunch of feature flags in your code adds to the complexity dramatically. You get a combinatoric problem with your testing: what if feature A and feature B are both enabled, or just one, or neither – multiplied across N features. It’s extremely difficult to get engineers to clean up their feature flags after the fact (and I was also guilty of this). Then for the desktop client there are multiple versions in the wild at the same time, so it gets pretty hard to reason about.

But the benefit – man, when you need it, you really need it.

# How to love bugs

I’ve talked about some bugs that I love and I’ve talked about why to love bugs. Now I want to tell you how to love bugs. If you don’t love bugs yet, I know of exactly one way to learn, and that’s to have a growth mindset.

The psychologist Carol Dweck has done a ton of interesting research about how people think about intelligence. She’s found that there are two different frameworks for thinking about intelligence. The first, which she calls the fixed mindset, holds that intelligence is a fixed trait, and people can’t change how much of it they have. The other mindset is a growth mindset. Under a growth mindset, people believe that intelligence is malleable and can increase with effort.
Dweck found that a person’s theory of intelligence – whether they hold a fixed or growth mindset – can significantly influence the way they select tasks to work on, the way they respond to challenges, their cognitive performance, and even their honesty.

[I also talked about a growth mindset in my Kiwi PyCon keynote, so here are just a few excerpts. You can read the full transcript [here][7].]

Findings about honesty:

> After this, they had the students write letters to pen pals about the study, saying “We did this study at school, and here’s the score that I got.” They found that  _almost half of the students praised for intelligence lied about their scores_ , and almost no one who was praised for working hard was dishonest.

On effort:

> Several studies found that people with a fixed mindset can be reluctant to really exert effort, because they believe it means they’re not good at the thing they’re working hard on. Dweck notes, “It would be hard to maintain confidence in your ability if every time a task requires effort, your intelligence is called into question.”

On responding to confusion:

> They found that students with a growth mindset mastered the material about 70% of the time, regardless of whether there was a confusing passage in it. Among students with a fixed mindset, if they read the booklet without the confusing passage, again about 70% of them mastered the material. But the fixed-mindset students who encountered the confusing passage saw their mastery drop to 30%. Students with a fixed mindset were pretty bad at recovering from being confused.

These findings show that a growth mindset is critical while debugging. We have to recover from confusion, be candid about the limitations of our understanding, and at times really struggle on the way to finding solutions – all of which is easier and less painful with a growth mindset.

### Love your bugs

I learned to love bugs by explicitly celebrating challenges while working at the Recurse Center. A participant would sit down next to me and say, “[sigh] I think I’ve got a weird Python bug,” and I’d say, “Awesome, I  _love_  weird Python bugs!” First of all, this is definitely true, but more importantly, it emphasized to the participant that finding something where they struggled was an accomplishment, and that it was a good thing for them to have done that day.

As I mentioned, at the Recurse Center there are no deadlines and no assignments, so this attitude is pretty much free. I’d say, “You get to spend a day chasing down this weird bug in Flask, how exciting!” At Dropbox and later at Pilot, where we have a product to ship, deadlines, and users, I’m not always uniformly delighted about spending a day on a weird bug. So I’m sympathetic to the reality of the world where there are deadlines. However, if I have a bug to fix, I have to fix it, and being grumbly about the existence of the bug isn’t going to help me fix it faster. I think that even in a world where deadlines loom, you can still apply this attitude.

If you love your bugs, you can have more fun while you’re working on a tough problem. You can be less worried and more focused, and end up learning more from them. Finally, you can share a bug with your friends and colleagues, which helps you and your teammates.

### Obrigada!
My thanks to folks who gave me feedback on this talk and otherwise contributed to my being there:

* Sasha Laundy

* Amy Hanlon

* Julia Evans

* Julian Cooper

* Raphael Passini Diniz and the rest of the Python Brasil organizing team

--------------------------------------------------------------------------------

via: http://akaptur.com/blog/2017/11/12/love-your-bugs/

作者:[Allison Kaptur ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://akaptur.com/about/
[1]:http://2017.pythonbrasil.org.br/#
[2]:http://www.youtube.com/watch?v=h4pZZOmv4Qs
[3]:http://www.pilot.com/
[4]:http://www.dropbox.com/
[5]:http://www.recurse.com/
[6]:http://www.youtube.com/watch?v=ETgNLF_XpEM
[7]:http://akaptur.com/blog/2015/10/10/effective-learning-strategies-for-programmers/

diff --git a/sources/tech/20171113 Glitch write fun small web projects instantly.md b/sources/tech/20171113 Glitch write fun small web projects instantly.md
new file mode 100644
index 0000000000..734853ce51
--- /dev/null
+++ b/sources/tech/20171113 Glitch write fun small web projects instantly.md
@@ -0,0 +1,76 @@
+translating---geekpi

Glitch: write fun small web projects instantly
============================================================

I just wrote about Jupyter Notebooks which are a fun interactive way to write Python code. That reminded me I learned about Glitch recently, which I also love!! I built a small app to [turn off twitter retweets][2] with it. So!

[Glitch][3] is an easy way to make Javascript webapps. (javascript backend, javascript frontend)

The fun thing about glitch is:

1. you start typing Javascript code into their web interface

2. as soon as you type something, it automagically reloads the backend of your website with the new code. You don’t even have to save!! It autosaves.

So it’s like Heroku, but even more magical!! Coding like this (you type, and the code runs on the public internet immediately) just feels really **fun** to me.

It’s kind of like sshing into a server and editing PHP/HTML code on your server and having it instantly available, which I kind of also loved. Now we have “better deployment practices” than “just edit the code and it is instantly on the internet” but we are not talking about Serious Development Practices, we are talking about writing tiny programs for fun.

### glitch has awesome example apps

Glitch seems like a fun nice way to learn programming!

For example, there’s a space invaders game (code by [Mary Rose Cook][4]) at [https://space-invaders.glitch.me/][5]. The thing I love about this is that in just a few clicks I can

1. click “remix this”

2. start editing the code to make the boxes orange instead of black

3. have my own space invaders game!! Mine is at [http://julias-space-invaders.glitch.me/][1]. (i just made very tiny edits to make it orange, nothing fancy)

They have tons of example apps that you can start from – for instance [bots][6], [games][7], and more.

### awesome actually useful app: tweetstorms

The way I learned about Glitch was from this app which shows you tweetstorms from a given user: [https://tweetstorms.glitch.me/][8].

For example, you can see [@sarahmei][9]’s tweetstorms at [https://tweetstorms.glitch.me/sarahmei][10] (she tweets a lot of good tweetstorms!).
+ +### my glitch app: turn off retweets + +When I learned about Glitch I wanted to turn off retweets for everyone I follow on Twitter (I know you can do it in Tweetdeck!) and doing it manually was a pain – I had to do it one person at a time. So I wrote a tiny Glitch app to do it for me! + +I liked that I didn’t have to set up a local development environment, I could just start typing and go! + +Glitch only supports Javascript and I don’t really know Javascript that well (I think I’ve never written a Node program before), so the code isn’t awesome. But I had a really good time writing it – being able to type and just see my code running instantly was delightful. Here it is: [https://turn-off-retweets.glitch.me/][11]. + +### that’s all! + +Using Glitch feels really fun and democratic. Usually if I want to fork someone’s web project and make changes I wouldn’t do it – I’d have to fork it, figure out hosting, set up a local dev environment or Heroku or whatever, install the dependencies, etc. I think tasks like installing node.js dependencies used to be interesting, like “cool i am learning something new” and now I just find them tedious. + +So I love being able to just click “remix this!” and have my version on the internet instantly. + + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/ + +作者:[Julia Evans ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/ +[1]:http://julias-space-invaders.glitch.me/ +[2]:https://turn-off-retweets.glitch.me/ +[3]:https://glitch.com/ +[4]:https://maryrosecook.com/ +[5]:https://space-invaders.glitch.me/ +[6]:https://glitch.com/handy-bots +[7]:https://glitch.com/games +[8]:https://tweetstorms.glitch.me/ +[9]:https://twitter.com/sarahmei +[10]:https://tweetstorms.glitch.me/sarahmei +[11]:https://turn-off-retweets.glitch.me/ diff --git a/sources/tech/20171114 Sysadmin 101 Patch Management.md b/sources/tech/20171114 Sysadmin 101 Patch Management.md new file mode 100644 index 0000000000..55ca09da87 --- /dev/null +++ b/sources/tech/20171114 Sysadmin 101 Patch Management.md @@ -0,0 +1,61 @@ +【翻译中 @haoqixu】Sysadmin 101: Patch Management +============================================================ + +* [HOW-TOs][1] + +* [Servers][2] + +* [SysAdmin][3] + + +A few articles ago, I started a Sysadmin 101 series to pass down some fundamental knowledge about systems administration that the current generation of junior sysadmins, DevOps engineers or "full stack" developers might not learn otherwise. I had thought that I was done with the series, but then the WannaCry malware came out and exposed some of the poor patch management practices still in place in Windows networks. I imagine some readers that are still stuck in the Linux versus Windows wars of the 2000s might have even smiled with a sense of superiority when they heard about this outbreak. + +The reason I decided to revive my Sysadmin 101 series so soon is I realized that most Linux system administrators are no different from Windows sysadmins when it comes to patch management. Honestly, in some areas (in particular, uptime pride), some Linux sysadmins are even worse than Windows sysadmins regarding patch management. 
So in this article, I cover some of the fundamentals of patch management under Linux, including what a good patch management system looks like, the tools you will want to put in place and how the overall patching process should work.

### What Is Patch Management?

When I say patch management, I'm referring to the systems you have in place to update software already on a server. I'm not just talking about keeping up with the latest-and-greatest bleeding-edge version of a piece of software. Even more conservative distributions like Debian that stick with a particular version of software for its "stable" release still release frequent updates that patch bugs or security holes.

Of course, if your organization decided to roll its own version of a particular piece of software, either because developers demanded the latest and greatest, you needed to fork the software to apply a custom change, or you just like giving yourself extra work, you now have a problem. Ideally, you have put in place a system that automatically packages up the custom version of the software for you in the same continuous integration system you use to build and package any other software, but many sysadmins still rely on the outdated method of packaging the software on their local machine based on (hopefully up to date) documentation on their wiki. In either case, you will need to confirm that your particular version has the security flaw, and if so, make sure that the new patch applies cleanly to your custom version.

### What Good Patch Management Looks Like

Patch management starts with knowing that there is a software update to begin with. First, for your core software, you should be subscribed to your Linux distribution's security mailing list, so you're notified immediately when there are security patches. If you use any software that doesn't come from your distribution, you must find out how to be kept up to date on security patches for that software as well. When new security notifications come in, you should review the details so you understand how severe the security flaw is, whether you are affected, and how urgent the patch is.

Some organizations have a purely manual patch management system. With such a system, when a security patch comes along, the sysadmin figures out which servers are running the software, generally by relying on memory and by logging in to servers and checking. Then the sysadmin uses the server's built-in package management tool to update the software with the latest from the distribution. Then the sysadmin moves on to the next server, and the next, until all of the servers are patched.

There are many problems with manual patch management. First is the fact that it makes patching a laborious chore. The more work patching is, the more likely a sysadmin will put it off or skip doing it entirely. The second problem is that manual patch management relies too much on the sysadmin's ability to remember and recall all of the servers he or she is responsible for and keep track of which are patched and which aren't. This makes it easy for servers to be forgotten and sit unpatched.

The faster and easier patch management is, the more likely you are to do it. You should have a system in place that quickly can tell you which servers are running a particular piece of software at which version. Ideally, that system also can push out updates.
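As a very rough sketch of that idea – the hostnames, the SSH access and the Debian-style dpkg-query are all assumptions for illustration, not part of the original article – a version inventory can start as small as this:

```
#!/bin/sh
# Report which version of a package (openssl here) each host is running.
# Host names are placeholders; a real fleet would come from your inventory.
for host in web1 web2 db1 db2; do
    printf '%s: ' "$host"
    ssh "$host" "dpkg-query -W -f='\${Version}\n' openssl" 2>/dev/null \
        || echo 'not installed (or unreachable)'
done
```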
Personally, I prefer orchestration tools like MCollective for this task, but Red Hat provides Satellite, and Canonical provides Landscape as central tools that let you view software versions across your fleet of servers and apply patches all from a central place.

Patching should be fault-tolerant as well. You should be able to patch a service and restart it without any overall downtime. The same idea goes for kernel patches that require a reboot. My approach is to divide my servers into different high availability groups so that lb1, app1, rabbitmq1 and db1 are all in one group, and lb2, app2, rabbitmq2 and db2 are in another. Then, I know I can patch one group at a time without it causing downtime anywhere else.

So, how fast is fast? Your system should be able to roll out a patch to a minor piece of software that doesn't have an accompanying service (such as bash in the case of the ShellShock vulnerability) within a few minutes to an hour at most. For something like OpenSSL that requires you to restart services, the careful process of patching and restarting services in a fault-tolerant way probably will take more time, but this is where orchestration tools come in handy. I gave examples of how to use MCollective to accomplish this in my recent MCollective articles (see the December 2016 and January 2017 issues), but ideally, you should put a system in place that makes it easy to patch and restart services in a fault-tolerant and automated way.

When patching requires a reboot, such as in the case of kernel patches, it might take a bit more time, but again, automation and orchestration tools can make this go much faster than you might imagine. I can patch and reboot the servers in an environment in a fault-tolerant way within an hour or two, and it would be much faster than that if I didn't need to wait for clusters to sync back up in between reboots.

Unfortunately, many sysadmins still hold on to the outdated notion that uptime is a badge of pride—given that serious kernel patches tend to come out at least once a year if not more often, to me, it's proof you don't take security seriously.

Many organizations also still have that single point of failure server that can never go down, and as a result, it never gets patched or rebooted. If you want to be secure, you need to remove these outdated liabilities and create systems that at least can be rebooted during a late-night maintenance window.

Ultimately, fast and easy patch management is a sign of a mature and professional sysadmin team. Updating software is something all sysadmins have to do as part of their jobs, and investing time into systems that make that process easy and fast pays dividends far beyond security. For one, it helps identify bad architecture decisions that cause single points of failure. For another, it helps identify stagnant, out-of-date legacy systems in an environment and provides you with an incentive to replace them. Finally, when patching is managed well, it frees up sysadmins' time and turns their attention to the things that truly require their expertise.

______________________

Kyle Rankin is a senior security and infrastructure architect, the author of many books including Linux Hardening in Hostile Networks, DevOps Troubleshooting and The Official Ubuntu Server Book, and a columnist for Linux Journal.
Follow him @kylerankin + +-------------------------------------------------------------------------------- + +via: https://www.linuxjournal.com/content/sysadmin-101-patch-management + +作者:[Kyle Rankin ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linuxjournal.com/users/kyle-rankin +[1]:https://www.linuxjournal.com/tag/how-tos +[2]:https://www.linuxjournal.com/tag/servers +[3]:https://www.linuxjournal.com/tag/sysadmin +[4]:https://www.linuxjournal.com/users/kyle-rankin diff --git a/sources/tech/20171114 Take Linux and Run With It.md b/sources/tech/20171114 Take Linux and Run With It.md new file mode 100644 index 0000000000..b7b6cb9663 --- /dev/null +++ b/sources/tech/20171114 Take Linux and Run With It.md @@ -0,0 +1,68 @@ +Take Linux and Run With It +============================================================ + +![](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2016-linux-1.jpg) + +![](https://www.linuxinsider.com/images/2015/image-credit-adobe-stock_130x15.gif) + + +"How do you run an operating system?" may seem like a simple question, since most of us are accustomed to turning on our computers and seeing our system spin up. However, this common model is only one way of running an operating system. As one of Linux's greatest strengths is versatility, Linux offers the most methods and environments for running it. + +To unleash the full power of Linux, and maybe even find a use for it you hadn't thought of, consider some less conventional ways of running it -- specifically, ones that don't even require installation on a computer's hard drive. + +### We'll Do It Live! + +Live-booting is a surprisingly useful and popular way to get the full Linux experience on the fly. While hard drives are where OSes reside most of the time, they actually can be installed to most major storage media, including CDs, DVDs and USB flash drives. + +When an OS is installed to some device other than a computer's onboard hard drive and subsequently booted instead of that onboard drive, it's called "live-booting" or running a "live session." + +At boot time, the user simply selects an external storage source for the hardware to look for boot information. If found, the computer follows the external device's boot instructions, essentially ignoring the onboard drive until the next time the user boots normally. Optical media are increasingly rare these days, so by far the most typical form that an external OS-carrying device takes is a USB stick. + +Most mainstream Linux distributions offer a way to run a live session as a way of trying them out. The live session doesn't save any user activity, and the OS resets to the clean default state after every shutdown. + +Live Linux sessions can be used for more than testing a distro, though. One application is for executing system repair for critically malfunctioning onboard (usually also Linux) systems. If an update or configuration made the onboard system unbootable, a full system backup is required, or the hard drive has sustained serious file corruption, the only recourse is to start up a live system and perform maintenance on the onboard drive. + +In these and similar scenarios, the onboard drive cannot be manipulated or corrected while also keeping the system stored on it running, so a live system takes on those burdens instead, leaving all but the problematic files on the onboard drive at rest. 
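As an aside, creating the live USB stick these scenarios rely on is typically a single command on an existing Linux system. A minimal sketch follows – the ISO filename and the /dev/sdX device name are placeholders, and writing to the wrong device will destroy its contents:

```
# Write a downloaded distribution image to a USB stick.
# Replace distro.iso and /dev/sdX with your actual file and device.
sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress conv=fsync
```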
+ +Live sessions also are perfectly suited for handling sensitive information. If you don't want a computer to retain any trace of the operations executed or information handled on it, especially if you are using hardware you can't vouch for -- like a public library or hotel business center computer -- a live session will provide you all the desktop computing functions to complete your task while retaining no trace of your session once you're finished. This is great for doing online banking or password input that you don't want a computer to remember. + +### Linux Virtually Anywhere + +Another approach for implementing Linux for more on-demand purposes is to run a virtual machine on another host OS. A virtual machine, or VM, is essentially a small computer running inside another computer and contained in a single large file. + +To run a VM, users simply install a hypervisor program (a kind of launcher for the VM), select a downloaded Linux OS image file (usually ending with a ".iso" file extension), and walk through the setup process. + +Most of the settings can be left at their defaults, but the key ones to configure are the amount of RAM and hard drive storage to lease to the VM. Fortunately, since Linux has a light footprint, you don't have to set these very high: 2 GB of RAM and 16 GB of storage should be plenty for the VM while still letting your host OS thrive. + +So what does this offer that a live system doesn't? First, whereas live systems are ephemeral, VMs can retain the data stored on them. This is great if you want to set up your Linux VM for a special use case, like software development or even security. + +When used for development, a Linux VM gives you the solid foundation of Linux's programming language suites and coding tools, and it lets you save your projects right in the VM to keep everything organized. + +If security is your goal, Linux VMs allow you to impose an extra layer between a potential hazard and your system. If you do your browsing from the VM, a malicious program would have to compromise not only your virtual Linux system, but also the hypervisor -- and  _then_ your host OS, a technical feat beyond all but the most skilled and determined adversaries. + +Second, you can start up your VM on demand from your host system, without having to power it down and start it up again as you would have to with a live session. When you need it, you can quickly bring up the VM, and when you're finished, you just shut it down and go back to what you were doing before. + +Your host system continues running normally while the VM is on, so you can attend to tasks simultaneously in each system. + +### Look Ma, No Installation! + +Just as there is no one form that Linux takes, there's also no one way to run it. Hopefully, this brief primer on the kinds of systems you can run has given you some ideas to expand your use models. + +The best part is that if you're not sure how these can help, live booting and virtual machines don't hurt to try!  
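If you would like to try the VM route right away, here is what the setup can look like from the command line with recent versions of VirtualBox – one hypervisor among several, with every name below a placeholder and the sizes matching the 2 GB/16 GB suggestion above:

```
# Create, configure and boot a Linux VM with VirtualBox's CLI (a GUI works too).
VBoxManage createvm --name linux-vm --ostype Linux_64 --register
VBoxManage modifyvm linux-vm --memory 2048 --cpus 2                 # 2 GB of RAM
VBoxManage createmedium disk --filename linux-vm.vdi --size 16384   # 16 GB disk
VBoxManage storagectl linux-vm --name SATA --add sata
VBoxManage storageattach linux-vm --storagectl SATA --port 0 --device 0 --type hdd --medium linux-vm.vdi
VBoxManage storageattach linux-vm --storagectl SATA --port 1 --device 0 --type dvddrive --medium distro.iso
VBoxManage startvm linux-vm
```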
+
![](https://www.ectnews.com/images/end-enn.gif)

--------------------------------------------------------------------------------

via: https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html

作者:[ Jonathan Terrasi ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html#searchbyline
[1]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html#
[2]:https://www.linuxinsider.com/perl/mailit/?id=84951
[3]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html
[4]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html
diff --git a/sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md b/sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md
new file mode 100644
index 0000000000..a0a6b1ed60
--- /dev/null
+++ b/sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md
@@ -0,0 +1,58 @@
Security Jobs Are Hot: Get Trained and Get Noticed
============================================================

![security skills](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/security-skills.png?itok=IrwppCUw "security skills")
The Open Source Jobs Report, from Dice and The Linux Foundation, found that professionals with security experience are in high demand for the future. [Used with permission][1]

The demand for security professionals is real. On [Dice.com][4], 15 percent of the more than 75K jobs are security positions. “Every year in the U.S., 40,000 jobs for information security analysts go unfilled, and employers are struggling to fill 200,000 other cyber-security related roles, according to cyber security data tool [CyberSeek][5]” ([Forbes][6]). We know that there is a fast-increasing need for security specialists, but that the interest level is low.

### Security is the place to be

In my experience, few students coming out of college are interested in roles in security; so many people see security as niche. Entry-level tech pros are interested in business analyst or system analyst roles, because of a belief that if you want to learn and apply core IT concepts, you have to stick to analyst roles or those closer to product development. That’s simply not the case.

In fact, if you’re interested in getting in front of your business leaders, security is the place to be – as a security professional, you have to understand the business end-to-end; you have to look at the big picture to give your company the advantage.

### Be fearless

Analyst and security roles are not all that different. Companies continue to merge engineering and security roles out of necessity. Businesses are moving faster than ever with infrastructure and code being deployed through automation, which increases the importance of security being a part of all tech pros’ day-to-day lives. In our [Open Source Jobs Report with The Linux Foundation][7], 42 percent of hiring managers said professionals with security experience are in high demand for the future.

There has never been a more exciting time to be in security. If you stay up-to-date with tech news, you’ll see that a huge number of stories are related to security – data breaches, system failures and fraud. The security teams are working in ever-changing, fast-paced environments.
A real challenge lies in the proactive side of security: finding and eliminating vulnerabilities while maintaining or even improving the end-user experience.

### Growth is imminent

Of any aspect of tech, security is the one that will continue to grow with the cloud. Businesses are moving more and more to the cloud and that’s exposing more security vulnerabilities than organizations are used to. As the cloud matures, security becomes increasingly important.

Regulations are also growing – Personally Identifiable Information (PII) is getting broader all the time. Many companies are finding that they must invest in security to stay in compliance and avoid being in the headlines. Companies are beginning to budget more and more for security tooling and staffing due to the risk of heavy fines, reputational damage, and, to be honest, executive job security.

### Training and support

Even if you don’t choose a security-specific role, you’re bound to find yourself needing to code securely, and if you don’t have the skills to do that, you’ll start fighting an uphill battle. There are certainly ways to learn on the job if your company offers that option – that’s encouraged – but I recommend a combination of training, mentorship and constant practice. Without using your security skills, you’ll lose them fast, given how quickly the complexity of malicious attacks evolves.

My recommendation for those seeking security roles is to find the people in your organization that are the strongest in engineering, development, or architecture areas – interface with them and other teams, do hands-on work, and be sure to keep the big picture in mind. Be an asset to your organization that stands out – someone that can securely code and also consider strategy and overall infrastructure health.

### The end game

More and more companies are investing in security and trying to fill open roles in their tech teams. If you’re interested in management, security is the place to be. Executive leadership wants to know that their company is playing by the rules, that their data is secure, and that they’re safe from breaches and loss.

Security that is implemented wisely and with strategy in mind will get noticed. Security is paramount for executives and consumers alike – I’d encourage anyone interested in security to train up and contribute.
+

 _[Download ][2]the full 2017 Open Source Jobs Report now._ 

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/os-jobs-report/2017/11/security-jobs-are-hot-get-trained-and-get-noticed

作者:[ BEN COLLEN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/bencollen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:http://bit.ly/2017OSSjobsreport
[3]:https://www.linux.com/files/images/security-skillspng
[4]:http://www.dice.com/
[5]:http://cyberseek.org/index.html#about
[6]:https://www.forbes.com/sites/jeffkauflin/2017/03/16/the-fast-growing-job-with-a-huge-skills-gap-cyber-security/#292f0a675163
[7]:http://media.dice.com/report/the-2017-open-source-jobs-report-employers-prioritize-hiring-open-source-professionals-with-latest-skills/
diff --git a/sources/tech/20171115 Why and How to Set an Open Source Strategy.md b/sources/tech/20171115 Why and How to Set an Open Source Strategy.md
new file mode 100644
index 0000000000..79ec071b4d
--- /dev/null
+++ b/sources/tech/20171115 Why and How to Set an Open Source Strategy.md
@@ -0,0 +1,120 @@
Why and How to Set an Open Source Strategy
============================================================

![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/open-source-strategy-1024x576.jpg)

This article explains how to walk through, measure, and define strategies collaboratively in an open source community.

 _“If you don’t know where you are going, you’ll end up someplace else.” _ _—_  Yogi Berra

Open source projects are generally started as a way to scratch one’s itch — and frankly that’s one of their greatest attributes. Getting code down provides a tangible method to express an idea, showcase a need, and solve a problem. It avoids overthinking and getting a project stuck in analysis-paralysis, letting the project pragmatically solve the problem at hand.

Next, a project starts to scale up and gets many varied users and contributions, with plenty of opinions along the way. That leads to the next big challenge — how does a project start to build a strategic vision? In this article, I’ll describe how to walk through, measure, and define strategies collaboratively, in a community.

Strategy may seem like a buzzword of the corporate world rather than something an open source community would embrace, so I suggest stripping away the negative actions that are sometimes associated with this word (e.g., staff reductions, discontinuations, office closures). Strategy done right isn’t a tool to justify unfortunate actions but to help show focus and where each community member can contribute.

A good application of strategy answers the following:

* Why does the project exist?

* What does the project look to achieve?

* What is the ideal end state for the project?

The key to success is answering these questions as simply as possible, with consensus from your community. Let’s look at some ways to do this.

### Setting a mission and vision

 _“_ _Efforts and courage are not enough without purpose and direction.”_  — John F. Kennedy

All strategic planning starts off with setting a course for where the project wants to go. The two tools used here are  _Mission_  and  _Vision_ .
They are complementary terms, describing both the reason a project exists (mission) and the ideal end state for a project (vision).

A great way to start this exercise with the intent of driving consensus is by asking each key community member the following questions:

* What drove you to join and/or contribute to the project?

* How do you define success for your participation?

In a company, you’d usually ask your customers these questions. But in open source projects, the customers are the project participants — and their time investment is what makes the project a success.

Driving consensus means capturing the answers to these questions and looking for themes across them. At R Consortium, for example, I created a shared doc for the board to review each member’s answers to the above questions, and followed up with a meeting to review for specific themes that came from those insights.

Building a mission flows really well from this exercise. The key thing is to keep the wording of your mission short and concise. Open Mainframe Project has done this really well. Here’s their mission:

 _Build community and adoption of Open Source on the mainframe by:_ 

* _Eliminating barriers to Open Source adoption on the mainframe_ 

* _Demonstrating value of the mainframe on technical and business levels_ 

* _Strengthening collaboration points and resources for the community to thrive_ 

At 40 words, it passes the key eye tests of a good mission statement; it’s clear, concise, and demonstrates the useful value the project aims for.

The next stage is to reflect on the mission statement and ask yourself this question: What is the ideal outcome if the project accomplishes its mission? That can be a tough one to tackle. Open Mainframe Project put together its vision really well:

 _Linux on the Mainframe as the standard for enterprise class systems and applications._ 

You could read that as a [BHAG][1], but it’s really more of a vision, because it describes the future state that would be created by fully accomplishing the mission. It also hits the key pieces of an effective vision — it’s only 13 words, inspirational, clear, memorable, and concise.

Mission and vision add clarity on the who, what, why, and how for your project. But how do you set a course for getting there?

### Goals, Objectives, Actions, and Results

 _“I don’t focus on what I’m up against. I focus on my goals and I try to ignore the rest.”_  — Venus Williams

Looking at a mission and vision can get overwhelming, so breaking them down into smaller chunks can help the project determine how to get started. This also helps prioritize actions, either by importance or by opportunity. Most importantly, this step gives you guidance on what things to focus on for a period of time, and which to put off.

There are lots of methods of time-bound planning, but the method I think works the best for projects is what I’ve dubbed the GOAR method. It’s an acronym that stands for:

* Goals define what the project is striving for and would likely align with and support the mission. Examples might be “Grow a diverse contributor base” or “Become the leading project for X.” Goals are aspirational and set direction.

* Objectives show how you measure a goal’s completion, and should be clear and measurable. You might also have multiple objectives to measure the completion of a goal.
For example, the goal “Grow a diverse contributor base” might have objectives such as “Have X total contributors monthly” and “Have contributors representing Y different organizations.”

* Actions are what the project plans to do to complete an objective. This is where you get tactical on exactly what needs to be done. For example, the objective “Have contributors representing Y different organizations” would likely have actions such as reaching out to interested organizations using the project, having existing contributors mentor new contributors, and providing incentives for first-time contributors.

* Results come along the way, showing progress both positive and negative from the actions.

You can put these into a table like this:

| Goals | Objectives | Actions | Results |
|:--|:--|:--|:--|
| Grow a diverse contributor base | Have X total contributors monthly | Existing contributors mentor new contributors; provide incentives for first-time contributors | |
| | Have contributors representing Y different organizations | Reach out to interested organizations using the project | |


In large organizations, monthly or quarterly goals and objectives often make sense; however, on open source projects, these time frames are unrealistic. Six- or even 12-month tracking allows the project leadership to focus on driving efforts at a high level by nurturing the community along.

The end result is a rubric that provides clear vision on where the project is going. It also lets community members more easily find ways to contribute. For example, your project may include someone who knows a few organizations using the project — this person could help introduce those developers to the codebase and guide them through their first commit.

### What happens if the project doesn’t hit the goals?

 _“I have not failed. I’ve just found 10,000 ways that won’t work.”_  — Thomas A. Edison

Figuring out what is within the capability of an organization — whether a Fortune 500 company or a small open source project — is hard. And, sometimes the expectations or market conditions change along the way. Does that make the strategy planning process a failure? Absolutely not!

Instead, you can use this experience as a way to better understand your project’s velocity, its impact, and its community, and perhaps as a way to prioritize what is important and what’s not.
+

--------------------------------------------------------------------------------

via: https://www.linuxfoundation.org/blog/set-open-source-strategy/

作者:[ John Mertic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linuxfoundation.org/author/jmertic/
[1]:https://en.wikipedia.org/wiki/Big_Hairy_Audacious_Goal
[2]:https://www.linuxfoundation.org/author/jmertic/
[3]:https://www.linuxfoundation.org/category/blog/
[4]:https://www.linuxfoundation.org/category/audience/c-level/
[5]:https://www.linuxfoundation.org/category/audience/developer-influencers/
[6]:https://www.linuxfoundation.org/category/audience/entrepreneurs/
[7]:https://www.linuxfoundation.org/category/campaigns/membership/how-to/
[8]:https://www.linuxfoundation.org/category/campaigns/events-campaigns/linux-foundation/
[9]:https://www.linuxfoundation.org/category/audience/open-source-developers/
[10]:https://www.linuxfoundation.org/category/audience/open-source-professionals/
[11]:https://www.linuxfoundation.org/category/audience/open-source-users/
[12]:https://www.linuxfoundation.org/category/blog/thought-leadership/
diff --git a/sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md b/sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md
new file mode 100644
index 0000000000..c6c50d9b25
--- /dev/null
+++ b/sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md
@@ -0,0 +1,130 @@
### Unleash Your Creativity – Linux Programs for Drawing and Image Editing

 By: [chabowski][1]

The following article is part of a series of articles that provide tips and tricks for Linux newbies – or Desktop users that are not yet experienced with regard to certain topics. This series intends to complement the special edition #30 “[Getting Started with Linux][2]” based on [openSUSE Leap][3], recently published by the [Linux Magazine,][4] with valuable additional information.

![](https://www.suse.com/communities/blog/files/2017/11/DougDeMaio-450x450.jpeg)

This article has been contributed by Douglas DeMaio, openSUSE PR Expert at SUSE.

Both macOS and Windows offer several popular programs for graphics editing, vector drawing and creating and manipulating Portable Document Format (PDF) files. The good news: users familiar with the Adobe Suite can transition with ease to free, open-source programs available on Linux.

Programs like [GIMP][5], [InkScape][6] and [Okular][7] are cross-platform programs that are available by default in GNU/Linux distributions and are persuasive alternatives to expensive Adobe programs like [Photoshop][8], [Illustrator][9] and [Acrobat][10].

These creativity programs on Linux distributions are just as powerful as those for macOS or Windows. This article will explain some of the differences and how the programs can be used to make your transition to Linux comfortable.

### Krita

The KDE desktop environment comes with tons of cool applications. [Krita][11] is a professional open source painting program. It gives users the freedom to create any artistic image they desire. Krita features tools that are much more extensive than the tool sets of most proprietary programs you might be familiar with. From creating textures to comics, Krita is a must-have application for Linux users.
+

![](https://www.suse.com/communities/blog/files/2017/11/krita-450x267.png)

### GIMP

GNU Image Manipulation Program (GIMP) is a cross-platform image editor. Users of Photoshop will find the user interface of GIMP to be similar to that of Photoshop. The drop-down menu offers colors, layers, filters and tools to help the user with editing graphics. Rulers are located both horizontally and vertically, and guides can be dragged across the screen to give exact measurements. The drop-down menu gives tool options for resizing or cropping photos; adjustments can be made to the color balance, color levels, brightness and contrast as well as hue and saturation.

![](https://www.suse.com/communities/blog/files/2017/11/gimp-450x281.png)

There are multiple filters in GIMP to enhance or distort your images. Filters for artistic expression and animation are available and are more powerful tool options than those found in some proprietary applications. Gradients can be applied through additional layers and the Text Tool offers many fonts, which can be altered in shape and size through the Perspective Tool.

The cloning tool works exactly like those in other graphics editors, so manipulating images is simple and accurate given the selection of brush sizes to do the job.

Perhaps one of the best options available with GIMP is that the images can be saved in a variety of formats like .jpg, .png, .pdf, .eps and .svg. These options provide high-quality images in a small file.

### InkScape

Designing vector imagery with InkScape is simple and free. This cross-platform application allows for the creation of logos and illustrations that are highly scalable. Whether designing cartoons or creating images for branding, InkScape is a powerful application to get the job done. Like GIMP, InkScape lets you save files in various formats and allows for object manipulation like moving, rotating and skewing text and objects. Shape tools are available with InkScape, so making stars, hexagons and other elements will meet the needs of your creative mind.

![](https://www.suse.com/communities/blog/files/2017/11/inkscape-450x273.png)

InkScape offers a comprehensive tool set, including a drawing tool, a pen tool and the freehand calligraphy tool that allows for object creation with your own personal style. The color selector gives you the choice of RGB, CMYK and RGBA – using specific colors for branding logos, icons and advertisement is definitely convincing.

Shortcut commands are similar to what users experience in Adobe Illustrator. Making layers and grouping or ungrouping the design elements can turn a blank page into a full-fledged image that can be used for designing technical diagrams for presentations, importing images into a multimedia program or for creating web graphics and software design.

Inkscape can import vector graphics from multiple other programs. It can even import bitmap images. Inkscape is one of those cross-platform, open-source programs that allow users to operate across different operating systems, no matter if they work with macOS, Windows or Linux.

### Okular and LibreOffice

LibreOffice, which is a free, open-source office suite, allows users to collaborate and interact with documents and important files on Linux, but also on macOS and Windows. You can also create PDF files via LibreOffice, and LibreOffice Draw lets you view (and edit) PDF files as images.
+

![](https://www.suse.com/communities/blog/files/2017/11/draw-450x273.png)

However, the Portable Document Format (PDF) is handled quite differently on the three operating systems. macOS offers [Preview][12] by default; Windows has [Edge][13]. Of course, Adobe Reader can also be used on both macOS and Windows. With Linux, and especially the desktop selection of KDE, [Okular][14] is the default program for viewing PDF files.

![](https://www.suse.com/communities/blog/files/2017/11/okular-450x273.png)

Okular supports different types of documents, like PDF, Postscript, [DjVu][15], [CHM][16], [XPS][17], [ePub][18] and others. Yet the universal document viewer also offers some powerful features that set it apart from other programs on macOS and Windows. Okular provides selection and search tools that make accessing the text in PDFs fluid, and its magnification tool allows for a quick look at small text in a document.

Okular also provides users with the option to configure it to use more memory if the document is too large and freezes the operating system. This functionality is convenient for users accessing high-quality print documents, for example for advertising.

For those who want to change locked images and documents, it’s rather easy to do so with LibreOffice Draw. A hypothetical situation would be to take a locked IRS (or tax) form and change it to make the uneditable document editable. Imagine how much fun it could be to transform it to some humorous kind of tax form …

And indeed, the sky’s the limit on how creative a user wants to be when using programs that are available on Linux distributions.

Tags: [drawing][19], [Getting Started with Linux][20], [GIMP][21], [image editing][22], [Images][23], [InkScape][24], [KDE][25], [Krita][26], [Leap 42.3][27], [LibreOffice][28], [Linux Magazine][29], [Okular][30], [openSUSE][31], [PDF][32]

Categories: [Desktop][33], [Expert Views][34], [LibreOffice][35], [openSUSE][36]

--------------------------------------------------------------------------------

via: https://www.suse.com/communities/blog/unleash-creativity-linux-programs-drawing-image-editing/

作者:[chabowski ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://www.suse.com/communities/blog/author/chabowski/
[2]:http://www.linux-magazine.com/Resources/Special-Editions/30-Getting-Started-with-Linux
[3]:https://en.opensuse.org/Portal:42.3
+
[4]:http://www.linux-magazine.com/
[5]:https://www.gimp.org/
[6]:https://inkscape.org/en/
[7]:https://okular.kde.org/
[8]:http://www.adobe.com/products/photoshop.html
[9]:http://www.adobe.com/products/illustrator.html
[10]:https://acrobat.adobe.com/us/en/acrobat/acrobat-pro-cc.html
[11]:https://krita.org/en/
[12]:https://en.wikipedia.org/wiki/Preview_(macOS)
[13]:https://en.wikipedia.org/wiki/Microsoft_Edge
[14]:https://okular.kde.org/
[15]:http://djvu.org/
[16]:https://fileinfo.com/extension/chm
[17]:https://fileinfo.com/extension/xps
[18]:http://idpf.org/epub
[19]:https://www.suse.com/communities/blog/tag/drawing/
[20]:https://www.suse.com/communities/blog/tag/getting-started-with-linux/
[21]:https://www.suse.com/communities/blog/tag/gimp/
[22]:https://www.suse.com/communities/blog/tag/image-editing/
[23]:https://www.suse.com/communities/blog/tag/images/
[24]:https://www.suse.com/communities/blog/tag/inkscape/
[25]:https://www.suse.com/communities/blog/tag/kde/
[26]:https://www.suse.com/communities/blog/tag/krita/
[27]:https://www.suse.com/communities/blog/tag/leap-42-3/
[28]:https://www.suse.com/communities/blog/tag/libreoffice/
[29]:https://www.suse.com/communities/blog/tag/linux-magazine/
[30]:https://www.suse.com/communities/blog/tag/okular/
[31]:https://www.suse.com/communities/blog/tag/opensuse/
[32]:https://www.suse.com/communities/blog/tag/pdf/
[33]:https://www.suse.com/communities/blog/category/desktop/
[34]:https://www.suse.com/communities/blog/category/expert-views/
[35]:https://www.suse.com/communities/blog/category/libreoffice/
[36]:https://www.suse.com/communities/blog/category/opensuse/
diff --git a/sources/tech/20171120 Adopting Kubernetes step by step.md b/sources/tech/20171120 Adopting Kubernetes step by step.md
new file mode 100644
index 0000000000..05faf304c8
--- /dev/null
+++ b/sources/tech/20171120 Adopting Kubernetes step by step.md
@@ -0,0 +1,93 @@
Adopting Kubernetes step by step
============================================================

Why Docker and Kubernetes?

Containers allow us to build, ship and run distributed applications. They remove the machine constraints from applications and let us create a complex application in a deterministic fashion.

Composing applications with containers allows us to make development, QA and production environments closer to each other (if you put the effort in to get there). By doing so, changes can be shipped faster and testing a full system can happen sooner.

[Docker][1] — the containerization platform — provides this, making software  _independent_  of cloud providers.

However, even with containers the amount of work needed for shipping your application through any cloud provider (or in a private cloud) is significant. An application usually needs auto-scaling groups, persistent remote disks, auto-discovery, etc. But each cloud provider has different mechanisms for doing this. If you want to support these features, you very quickly become cloud provider dependent.

This is where [Kubernetes][2] comes into play. It is an orchestration system for containers that allows you to manage, scale and deploy different pieces of your application — in a standardised way — with great tooling as part of it. It’s a portable abstraction that’s compatible with the main cloud providers (Google Cloud, Amazon Web Services and Microsoft Azure all have support for Kubernetes).
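To give a flavour of that standardised way, here is a minimal kubectl sketch – the deployment name, image and ports are placeholders I’ve invented for illustration, and the same commands work unchanged on any of the providers above:

```
# Deploy one piece of an application, scale it, and expose it – the same
# commands whether the cluster runs on GKE, AWS, Azure or a laptop.
kubectl create deployment my-app --image=my-registry/my-app:1.0
kubectl scale deployment my-app --replicas=3
kubectl expose deployment my-app --port=80 --target-port=3000
```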
+

A way to visualise your application, containers and Kubernetes is to think about your application as a shark — stay with me — that exists in the ocean (in this example, the ocean is your machine). The ocean may have other precious things you don’t want your shark to interact with, like [clown fish][3]. So you move your shark (your application) into a sealed aquarium (Container). This is great but not very robust. Your aquarium can break, or maybe you want to build a tunnel to another aquarium where other fish live. Or maybe you want many copies of that aquarium in case one needs cleaning or maintenance… this is where Kubernetes clusters come into play.


![](https://cdn-images-1.medium.com/max/1600/1*OVt8cnY1WWOqdLFycCgdFg.jpeg)
Evolution to Kubernetes

With Kubernetes supported by the main cloud providers, it becomes easier for you and your team to have environments from  _development _ to  _production _ that are almost identical to each other. This is because Kubernetes has no reliance on proprietary software, services or infrastructure.

The fact that you can start your application on your machine with the same pieces as in production closes the gap between a development and a production environment. This makes developers more aware of how an application fits together, even though they might only be responsible for one piece of it. It also makes it easier for your application to be fully tested earlier in the pipeline.

How do you work with Kubernetes?

With more people adopting Kubernetes, new questions arise: how should I develop against a cluster-based environment? Suppose you have 3 environments — development, QA and production — how does Kubernetes fit into them? Differences across these environments will still exist, either in terms of development cycle (e.g. time spent to see my code changes in the application I’m running) or in terms of data (e.g. I probably shouldn’t test with production data in my QA environment as it has sensitive information).

So, should I always try to work inside a Kubernetes cluster, building images, recreating deployments and services while I code? Or maybe I should not try too hard to make my development environment be a Kubernetes cluster (or set of clusters)? Or maybe I should work in a hybrid way?


![](https://cdn-images-1.medium.com/max/1600/1*MXokxD8Ktte4_vWvTas9uw.jpeg)
Development with a local cluster

If we carry on with our metaphor, the holes on the side represent a way to make changes to our app while keeping it in a development cluster. This is usually achieved via [volumes][4].

A Kubernetes series

The Kubernetes series repository is open source and available here:

### [https://github.com/red-gate/ks][5]

We’ve written this series as we experiment with different ways to build software. We’ve tried to constrain ourselves to use Kubernetes in all environments so that we can explore the impact these technologies will have on the development and management of data and the database.

The series starts with the basic creation of a React application hooked up to Kubernetes, and evolves to encompass more of our development requirements. By the end we’ll have covered all of our application development needs  _and_  have understood how best to cater for the database lifecycle in this world of containers and clusters.

Here are the first 5 episodes of this series:

1. ks1: build a React app with Kubernetes

2. ks2: make minikube detect React code changes
3. ks3: add a python web server that hosts an API

4. ks4: make minikube detect Python code changes

5. ks5: create a test environment

The second part of the series will add a database and try to work out the best way to evolve our application alongside it.

By running Kubernetes in all environments, we’ve been forced to solve new problems as we try to keep the development cycle as fast as possible. The trade-off is that we are constantly exposed to Kubernetes and become more accustomed to it. By doing so, development teams become responsible for production environments, which is no longer difficult, as all environments (development through production) are managed in the same way.

What’s next?

We will continue this series by incorporating a database and experimenting to find the best way to have a seamless database lifecycle experience with Kubernetes.

 _This Kubernetes series is brought to you by Foundry, Redgate’s R&D division. We’re working on making it easier to manage data alongside containerised environments, so if you’re working with data and containerised environments, we’d like to hear from you — reach out directly to the development team at _ [_foundry@red-gate.com_][6]

* * *

 _We’re hiring_ _. Are you interested in uncovering product opportunities, building _ [_future technology_][7] _ and taking a startup-like approach (without the risk)? Take a look at our _ [_Software Engineer — Future Technologies_][8] _ role and read more about what it’s like to work at Redgate in _ [_Cambridge, UK_][9] _._ 

--------------------------------------------------------------------------------

via: https://medium.com/ingeniouslysimple/adopting-kubernetes-step-by-step-f93093c13dfe

作者:[santiago arias][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://medium.com/@santiaago?source=post_header_lockup
[1]:https://www.docker.com/what-docker
[2]:https://kubernetes.io/
[3]:https://www.google.co.uk/search?biw=723&bih=753&tbm=isch&sa=1&ei=p-YCWpbtN8atkwWc8ZyQAQ&q=nemo+fish&oq=nemo+fish&gs_l=psy-ab.3..0i67k1l2j0l2j0i67k1j0l5.5128.9271.0.9566.9.9.0.0.0.0.81.532.9.9.0....0...1.1.64.psy-ab..0.9.526...0i7i30k1j0i7i10i30k1j0i13k1j0i10k1.0.FbAf9xXxTEM
[4]:https://kubernetes.io/docs/concepts/storage/volumes/
[5]:https://github.com/red-gate/ks
[6]:mailto:foundry@red-gate.com
[7]:https://www.red-gate.com/foundry/
[8]:https://www.red-gate.com/our-company/careers/current-opportunities/software-engineer-future-technologies
[9]:https://www.red-gate.com/our-company/careers/living-in-cambridge
diff --git a/sources/tech/20171120 Containers and Kubernetes Whats next.md b/sources/tech/20171120 Containers and Kubernetes Whats next.md
deleted file mode 100644
index b73ccb21c2..0000000000
--- a/sources/tech/20171120 Containers and Kubernetes Whats next.md
+++ /dev/null
@@ -1,98 +0,0 @@
-YunfengHe Translating
-Containers and Kubernetes: What's next?
-============================================================
-
-### What's ahead for container orchestration and Kubernetes? Here's an expert peek
-
-![CIO_Big Data Decisions_2](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_Big%20Data%20Decisions_2.png?itok=Y5zMHxf8 "CIO_Big Data Decisions_2")
-
-If you want a basic idea of where containers are headed in the near future, follow the money.
There’s a lot of it: 451 Research projects that the overall market for containers will hit roughly [$2.7 billion in 2020][4], a 3.5-fold increase from the $762 million spent on container-related technology in 2016. - -There’s an obvious fundamental factor behind such big numbers: Rapidly increasing containerization. The parallel trend: As container adoption grows, so will container  _orchestration_  adoption. - -As recent survey data from  [_The New Stack_][5]  indicates, container adoption is the most significant catalyst of orchestration adoption: 60 percent of respondents who’ve deployed containers broadly in production report they’re also using Kubernetes widely in production. Another 19 percent of respondents with broad container deployments in production were in the initial stages of broad Kubernetes adoption. Meanwhile, just 5 percent of those in the initial phases of deploying containers in production environments were using Kubernetes broadly – but 58 percent said they were preparing to do so. It’s a chicken-and-egg relationship. - - -Most experts agree that an orchestration tool is essential to the scalable [long-term management of containers][6] – and corresponding developments in the marketplace. “The next trends in container orchestration are all focused on broadening adoption,” says Alex Robinson, software engineer at [Cockroach Labs][7]. - -This is a quickly shifting landscape, one that is just starting to realize its future potential. So we checked in with Robinson and other practitioners to get their boots-on-the-ground perspective on what’s next in container orchestration – and for Kubernetes itself. - -### **Container orchestration shifts to mainstream** - -We’re at the precipice common to most major technology shifts, where we transition from the careful steps of early adoption to cliff-diving into commonplace use. That will create new demand for the plain-vanilla requirements that make mainstream adoption easier, especially in large enterprises. - -“The gold rush phase of early innovation has slowed down and given way to a much stronger focus on stability and usability,” Robinson says. “This means we'll see fewer major announcements of new orchestration systems, and more security options, management tools, and features that make it easier to take advantage of the flexibility already inherent in the major orchestration systems.” - -### **Reduced complexity** - -On a related front, expect an intensifying effort to cut back on the complexity that some organizations face when taking their first plunge into container orchestration. As we’ve covered before, deploying a container might be “easy,” but [managing containers long-term ][8]requires more care. - -“Today, container orchestration is too complex for many users to take full advantage,” says My Karlsson, developer at [Codemill AB][9]. “New users are often struggling just to get single or small-size container configurations running in isolation, especially when applications are not originally designed for it. There are plenty of opportunities to simplify the orchestration of non-trivial applications and make the technology more accessible.” - -### **Increasing focus on hybrid cloud and multi-cloud** - -As adoption of containers and container orchestration grows, more organizations will scale from a starting point of, say, running non-critical workloads in a single environment to more [complex use cases][10] across multiple environments. 
For many companies, that will mean managing containerized applications (and particularly containerized microservices) across [hybrid cloud][11] and [multi-cloud][12] environments, often globally. - -"Containers and Kubernetes have made hybrid cloud and application portability a reality,” says [Brian Gracely][13], director of [Red Hat][14] OpenShift product strategy. “Combined with the Open Service Broker, we expect to see an explosion of new applications that combine private and public cloud resources." - -“I believe that federation will get a push, enabling much-wanted features such as seamless multi-region and multi-cloud deployments,” says Carlos Sanchez, senior software engineer at [CloudBees][15].  - -**[ Want CIO wisdom on hybrid cloud and multi-cloud strategy? See our related resource, **[**Hybrid Cloud: The IT leader's guide**][16]**. ]** - -### **Continued consolidation of platforms and tools** - -Technology consolidation is common trend; container orchestration is no exception. - -“As containerization goes mainstream, engineers are consolidating on a very small number of technologies to run their [microservices and] containers and Kubernetes will become the dominant container orchestration platform, far outstripping other platforms,” says Ben Newton, analytics lead at [Sumo Logic][17]. “Companies will adopt Kubernetes to drive a cloud-neutral approach as Kubernetes provides a reasonably clear path to reduce dependence on [specific] cloud ecosystems.**”** - -### **Speaking of Kubernetes, what’s next?** - -"Kubernetes is here for the long haul, and the community driving it is doing great job – but there's lots ahead,” says Gadi Naor, CTO and co-founder of [Alcide][18]. Our experts shared several predictions specific to [the increasingly popular Kubernetes platform][19]:  - - **_Gadi Naor at Alcide:_**  “Operators will continue to evolve and mature, to a point where applications running on Kubernetes will become fully self-managed. Deploying and monitoring microservices on top of Kubernetes with [OpenTracing][20] and service mesh frameworks such as [istio][21] will help shape new possibilities.” - - **_Brian Gracely at Red Hat:_**  “Kubernetes continues to expand in terms of the types of applications it can support. When you can run traditional applications, cloud-native applications, big data applications, and HPC or GPU-centric applications on the same platform, it unlocks a ton of architectural flexibility.” - - **_Ben Newton at Sumo Logic: _ “**As Kubernetes becomes more dominant, I would expect to see more normalization of the operational mechanisms – particularly integrations into third-party management and monitoring platforms.” - - **_Carlos Sanchez at CloudBees: _** “In the immediate future there is the ability to run without Docker, using other runtimes...to remove any lock-in. [Editor’s note: [CRI-O][22], for example, offers this ability.] “Also, [look for] storage improvements to support enterprise features like data snapshotting and online volume resizing.” - - - **_Alex Robinson at Cockroach Labs: _ “**One of the bigger developments happening in the Kubernetes community right now is the increased focus on managing [stateful applications][23]. 
Managing state in Kubernetes right now is very difficult if you aren't running in a cloud that offers remote persistent disks, but there's work being done on multiple fronts [both inside Kubernetes and by external vendors] to improve this.” - --------------------------------------------------------------------------------- - -via: https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next - -作者:[Kevin Casey ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://enterprisersproject.com/user/kevin-casey -[1]:https://enterprisersproject.com/article/2017/11/kubernetes-numbers-10-compelling-stats -[2]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity -[3]:https://enterprisersproject.com/article/2017/11/5-kubernetes-success-tips-start-smart?sc_cid=70160000000h0aXAAQ -[4]:https://451research.com/images/Marketing/press_releases/Application-container-market-will-reach-2-7bn-in-2020_final_graphic.pdf -[5]:https://thenewstack.io/ -[6]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul -[7]:https://www.cockroachlabs.com/ -[8]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul -[9]:https://codemill.se/ -[10]:https://www.redhat.com/en/challenges/integration?intcmp=701f2000000tjyaAAA -[11]:https://enterprisersproject.com/hybrid-cloud -[12]:https://enterprisersproject.com/article/2017/7/multi-cloud-vs-hybrid-cloud-whats-difference -[13]:https://enterprisersproject.com/user/brian-gracely -[14]:https://www.redhat.com/en -[15]:https://www.cloudbees.com/ -[16]:https://enterprisersproject.com/hybrid-cloud?sc_cid=70160000000h0aXAAQ -[17]:https://www.sumologic.com/ -[18]:http://alcide.io/ -[19]:https://enterprisersproject.com/article/2017/10/how-explain-kubernetes-plain-english -[20]:http://opentracing.io/ -[21]:https://istio.io/ -[22]:http://cri-o.io/ -[23]:https://opensource.com/article/17/2/stateful-applications -[24]:https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next?rate=PBQHhF4zPRHcq2KybE1bQgMkS2bzmNzcW2RXSVItmw8 -[25]:https://enterprisersproject.com/user/kevin-casey diff --git a/sources/tech/20171123 Why microservices are a security issue.md b/sources/tech/20171123 Why microservices are a security issue.md new file mode 100644 index 0000000000..d5868faa9e --- /dev/null +++ b/sources/tech/20171123 Why microservices are a security issue.md @@ -0,0 +1,116 @@ +Why microservices are a security issue +============================================================ + +### Maybe you don't want to decompose all your legacy applications into microservices, but you might consider starting with your security functions. + +![Why microservices are a security issue](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_security_cc.png?itok=3V07Lpko "Why microservices are a security issue") +Image by : Opensource.com + +I struggled with writing the title for this post, and I worry that it comes across as clickbait. If you've come to read this because it looked like clickbait, then sorry.[1][5] I hope you'll stay anyway: there are lots of fascinating[2][6] points and many[3][7] footnotes. 
What I  _didn't_  mean to suggest is that microservices cause [security][15] problems—though like any component, of course, they can—but that microservices are appropriate objects of interest to those involved with security. I'd go further than that: I think they are an excellent architectural construct for those concerned with security. + +And why is that? Well, for those of us with a [systems security][16] bent, the world is an interesting place at the moment. We're seeing a growth in distributed systems, as bandwidth is cheap and latency low. Add to this the ease of deploying to the cloud, and more architects are beginning to realise that they can break up applications, not just into multiple layers, but also into multiple components within the layer. Load balancers, of course, help with this when the various components in a layer are performing the same job, but the ability to expose different services as small components has led to a growth in the design, implementation, and deployment of  _microservices_ . + +More on Microservices + +* [How to explain microservices to your CEO][1] + +* [Free eBook: Microservices vs. service-oriented architecture][2] + +* [Secured DevOps for microservices][3] + +So, [what exactly is a microservice][23]? I quite like [Wikipedia's definition][24], though it's interesting that security isn't mentioned there.[4][17] One of the points that I like about microservices is that, when well-designed, they conform to the first two points of Peter H. Salus' description of the [Unix philosophy][25]: + +1. Write programs that do one thing and do it well. + +2. Write programs to work together. + +3. Write programs to handle text streams, because that is a universal interface. + +The last of the three is slightly less relevant, because the Unix philosophy is generally used to refer to standalone applications, which often have a command instantiation. It does, however, encapsulate one of the basic requirements of microservices: that they must have well-defined interfaces. + +By "well-defined," I don't just mean a description of any externally accessible APIs' methods, but also of the normal operation of the microservice: inputs and outputs—and, if there are any, side-effects. As I described in a previous post, "[5 traits of good systems architecture][18]," data and entity descriptions are crucial if you're going to be able to design a system. Here, in our description of microservices, we get to see why these are so important, because, for me, the key defining feature of a microservices architecture is decomposability. And if you're going to decompose[5][8] your architecture, you need to be very, very clear which "bits" (components) are going to do what. + +And here's where security starts to come in. A clear description of what a particular component should be doing allows you to: + +* Check your design + +* Ensure that your implementation meets the description + +* Come up with reusable unit tests to check functionality + +* Track mistakes in implementation and correct them + +* Test for unexpected outcomes + +* Monitor for misbehaviour + +* Audit actual behaviour for future scrutiny + +Now, are all these things possible in a larger architecture? Yes, they are. But they become increasingly difficult where entities are chained together or combined in more complex configurations. Ensuring  _correct_  implementation and behaviour is much, much easier when you've got smaller pieces to work together. 
And deriving complex systems behaviours—and misbehaviours—is much more difficult if you can't be sure that the individual components are doing what they ought to be. + +It doesn't stop here, however. As I've mentioned on many [previous occasions][19], writing good security code is difficult.[7][9] Proving that it does what it should do is even more difficult. There is every reason, therefore, to restrict code that has particular security requirements—password checking, encryption, cryptographic key management, authorisation, etc.—to small, well-defined blocks. You can then do all the things that I've mentioned above to try to make sure it's done correctly. + +And yet there's more. We all know that not everybody is great at writing security-related code. By decomposing your architecture such that all security-sensitive code is restricted to well-defined components, you get the chance to put your best security people on that and restrict the danger that J. Random Coder[8][10] will put something in that bypasses or downgrades a key security control. + +It can also act as an opportunity for learning: It's always good to be able to point to a design/implementation/test/monitoring tuple and say: "That's how it should be done. Hear, read, mark, learn, and inwardly digest.[9][11]" + +Should you go about decomposing all of your legacy applications into microservices? Probably not. But given all the benefits you can accrue, you might consider starting with your security functions. + +* * * + +1. Well, a little bit—it's always nice to have readers. + +2. I know they are: I wrote them. + +3. Probably less fascinating. + +4. At the time this article was written. It's entirely possible that I—or one of you—may edit the article to change that. + +5. This sounds like a gardening term, which is interesting. Not that I really like gardening, but still.[6][12] + +6. Amusingly, I first wrote, "…if you're going to decompose your architect…," which sounds like the strapline for an IT-themed murder film. + +7. Regular readers may remember a reference to the excellent film  _The Thick of It_ . + +8. Other generic personae exist; please take your pick. + +9. Not a cryptographic digest: I don't think that's what the original writers had in mind. 
+ + _This article originally appeared on [Alice, Eve, and Bob—a security blog][13] and is republished with permission._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/11/microservices-are-security-issue + +作者:[Mike Bursell ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/mikecamel +[1]:https://blog.openshift.com/microservices-how-to-explain-them-to-your-ceo/?intcmp=7016000000127cYAAQ&src=microservices_resource_menu1 +[2]:https://www.openshift.com/promotions/microservices.html?intcmp=7016000000127cYAAQ&src=microservices_resource_menu2 +[3]:https://opensource.com/business/16/11/secured-devops-microservices?src=microservices_resource_menu3 +[4]:https://opensource.com/article/17/11/microservices-are-security-issue?rate=GDH4xOWsgYsVnWbjEIoAcT_92b8gum8XmgR6U0T04oM +[5]:https://opensource.com/article/17/11/microservices-are-security-issue#1 +[6]:https://opensource.com/article/17/11/microservices-are-security-issue#2 +[7]:https://opensource.com/article/17/11/microservices-are-security-issue#3 +[8]:https://opensource.com/article/17/11/microservices-are-security-issue#5 +[9]:https://opensource.com/article/17/11/microservices-are-security-issue#7 +[10]:https://opensource.com/article/17/11/microservices-are-security-issue#8 +[11]:https://opensource.com/article/17/11/microservices-are-security-issue#9 +[12]:https://opensource.com/article/17/11/microservices-are-security-issue#6 +[13]:https://aliceevebob.com/2017/10/31/why-microservices-are-a-security-issue/ +[14]:https://opensource.com/user/105961/feed +[15]:https://opensource.com/tags/security +[16]:https://aliceevebob.com/2017/03/14/systems-security-why-it-matters/ +[17]:https://opensource.com/article/17/11/microservices-are-security-issue#4 +[18]:https://opensource.com/article/17/10/systems-architect +[19]:https://opensource.com/users/mikecamel +[20]:https://opensource.com/users/mikecamel +[21]:https://opensource.com/users/mikecamel +[22]:https://opensource.com/article/17/11/microservices-are-security-issue#comments +[23]:https://opensource.com/resources/what-are-microservices +[24]:https://en.wikipedia.org/wiki/Microservices +[25]:https://en.wikipedia.org/wiki/Unix_philosophy diff --git a/sources/tech/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md b/sources/tech/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md deleted file mode 100644 index 27379cbe40..0000000000 --- a/sources/tech/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md +++ /dev/null @@ -1,70 +0,0 @@ -translating by wangy325... - - -Open Source Cloud Skills and Certification Are Key for SysAdmins -============================================================ - - -![os jobs](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-house-sysadmin.jpg?itok=i5FHc3lu "os jobs") -Sysadmins with open source skills and certification can command higher pay, according to the 2017 Open Source Jobs Report.[Creative Commons Zero][1] - -System administrator is one of the most common positions employers are looking to fill, according to 53 percent of respondents to the [2017 Open Source Jobs Report][3]. Consequently, sysadmins with skills in engineering can command higher salaries, as these positions are among the hardest to fill, the report finds. 
- -Sysadmins are generally responsible for installing, supporting, and maintaining servers or other computer systems, and planning for and responding to service outages and other problems. - -Overall, this year’s report finds the skills most in demand are open source cloud (47 percent), application development (44 percent), Big Data (43 percent) and both DevOps and security (42 percent). - -The report also finds that 58 percent of hiring managers are planning to hire more open source professionals, and 67 percent say hiring of open source professionals will increase more than in other areas of the business. This represents a two-point increase over last year among employers who said open source hiring would be their top field of recruitment. - -At the same time, 89 percent of hiring managers report it is difficult to find open source talent. - -### Why get certified - -The desire for sysadmins is incentivizing hiring managers to offer formal training and/or certifications in the discipline in 53 percent of organizations, compared to 47 percent last year, the Open Source Jobs Report finds. - -IT professionals interested in sysadmin positions should consider Linux certifications. Searches on several of the more well-known job posting sites reveal that the [CompTIA Linux+][4] certification is the top certification for entry-level Linux sysadmin, while [Red Hat Certified Engineer (RHCE)][5] and [Red Hat Certified System Administrator (RHCSA)][6] are the main certifications for higher-level positions. - -In 2016, a sysadmin commanded a salary of $79,583, a change of -0.8 percent from the previous year, according to Dice’s [2017 Tech Salary Survey][7]. The systems architect position paid $125,946, a year-over-year change of -4.7 percent. Yet, the survey observes that “Highly skilled technology professionals remain in the most demand, especially those candidates proficient in the technologies needed to support industry transformation and growth.” - -When it comes to open source skills, HBase (an open-source distributed database) ranked as one that garners among the highest pay for tech pros in the Dice survey. In the networking and database category, the OpenVMS operating system ranked as another high-paying skill. - -### The sysadmin role - -One of a sysadmin’s responsibilities is to be available 24/7 when a problem occurs. The position calls for a mindset that is about “zero-blame, lean, iterative improvement in process or technology,” and one that is open to change, writes Paul English, a board member for the League of Professional System Administrators, a non-profit professional association for the advancement of the practice of system administration, in  [opensource.com][8]. He adds that being a sysadmin means “it’s almost a foregone conclusion that you’ll work with open source software like Linux, BSD, and even open source Solaris.” - -Today’s sysadmins will more often work with software rather than hardware, and should be prepared to write small scripts, according to English. - -### Outlook for 2018 - -Expect to see sysadmins among the tech professionals many employers in North America will be hiring in 2018, according to [Robert Half’s 2018 Salary Guide for Technology Professionals][9]. Increasingly, soft skills and leadership qualities are also highly valued. 
- -“Good listening and critical-thinking skills, which are essential to understanding and resolving customers’ issues and concerns, are important for almost any IT role today, but especially for help desk and desktop support professionals,” the report states. - -This jibes with some of the essential skills needed at various stages of the sysadmin position, including strong analytical skills and an ability to solve problems quickly, according to [The Linux Foundation][10]. - -Other skills sysadmins should have as they move up the ladder are: interest in structured approaches to system configuration management; experience in resolving security issues; experience with user identity management; ability to communicate in non-technical terms to non-technical people; and ability to modify systems to meet new security requirements. - - _[Download][11] the full 2017 Open Source Jobs Report now._ - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/open-source-cloud-skills-and-certification-are-key-sysadmins - -作者:[ ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[1]:https://www.linux.com/licenses/category/creative-commons-zero -[2]:https://www.linux.com/files/images/open-house-sysadminjpg -[3]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/ -[4]:https://certification.comptia.org/certifications/linux?tracking=getCertified/certifications/linux.aspx -[5]:https://www.redhat.com/en/services/certification/rhce -[6]:https://www.redhat.com/en/services/certification/rhcsa -[7]:http://marketing.dice.com/pdf/Dice_TechSalarySurvey_2017.pdf?aliId=105832232 -[8]:https://opensource.com/article/17/7/truth-about-sysadmins -[9]:https://www.roberthalf.com/salary-guide/technology -[10]:https://www.linux.com/learn/10-essential-skills-novice-junior-and-senior-sysadmins%20%20 -[11]:http://bit.ly/2017OSSjobsreport diff --git a/sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md b/sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md index 147a2266cc..d282ef5445 100644 --- a/sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md +++ b/sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md @@ -1,9 +1,6 @@ -KeyLD Translating - Photon Could Be Your New Favorite Container OS ============================================================ - ![Photon OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon-linux.jpg?itok=jUFHPR_c "Photon OS") Jack Wallen says Photon OS is an outstanding platform, geared specifically for containers.[Creative Commons Zero][5]Pixabay @@ -109,9 +106,9 @@ Give Photon a try and see if it doesn’t make deploying Docker containers and/o -------------------------------------------------------------------------------- -via: https://www.linux.com/learn/intro-to-linux/2017/11/photon-could-be-your-new-favorite-container-os +via: 网址 -作者:[JACK WALLEN ][a] +作者:[ JACK WALLEN][a] 译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) diff --git a/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md b/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md new file mode 100644 index 0000000000..c09d66bc57 --- /dev/null +++ b/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md @@ -0,0 +1,76 @@ +AWS to Help Build ONNX 
Open Source AI Platform +============================================================ +![onnx-open-source-ai-platform](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-onnx-1.jpg) + + +Amazon Web Services has become the latest tech firm to join the deep learning community's collaboration on the Open Neural Network Exchange, recently launched to advance artificial intelligence in a frictionless and interoperable environment. Facebook and Microsoft led the effort. + +As part of that collaboration, AWS made its open source Python package, ONNX-MxNet, available as a deep learning framework that offers application programming interfaces across multiple languages including Python, Scala and open source statistics software R. + +The ONNX format will help developers build and train models for other frameworks, including PyTorch, Microsoft Cognitive Toolkit or Caffe2, AWS Deep Learning Engineering Manager Hagay Lupesko and Software Developer Roshani Nagmote wrote in an online post last week. It will let developers import those models into MXNet, and run them for inference. + +### Help for Developers + +Facebook and Microsoft this summer launched ONNX to support a shared model of interoperability for the advancement of AI. Microsoft committed its Cognitive Toolkit, Caffe2 and PyTorch to support ONNX. + +Cognitive Toolkit and other frameworks make it easier for developers to construct and run computational graphs that represent neural networks, Microsoft said. + +Initial versions of [ONNX code and documentation][4] were made available on Github. + +AWS and Microsoft last month announced plans for Gluon, a new interface in Apache MXNet that allows developers to build and train deep learning models. + +Gluon "is an extension of their partnership where they are trying to compete with Google's Tensorflow," observed Aditya Kaul, research director at [Tractica][5]. + +"Google's omission from this is quite telling but also speaks to their dominance in the market," he told LinuxInsider. + +"Even Tensorflow is open source, and so open source is not the big catch here -- but the rest of the ecosystem teaming up to compete with Google is what this boils down to," Kaul said. + +The Apache MXNet community earlier this month introduced version 0.12 of MXNet, which extends Gluon functionality to allow for new, cutting-edge research, according to AWS. Among its new features are variational dropout, which allows developers to apply the dropout technique for mitigating overfitting to recurrent neural networks. + +Convolutional RNN, Long Short-Term Memory and gated recurrent unit cells allow datasets to be modeled using time-based sequence and spatial dimensions, AWS noted. + +### Framework-Neutral Method + +"This looks like a great way to deliver inference regardless of which framework generated a model," said Paul Teich, principal analyst at [Tirias Research][6]. + +"This is basically a framework-neutral way to deliver inference," he told LinuxInsider. + +Cloud providers like AWS, Microsoft and others are under pressure from customers to be able to train on one network while delivering on another, in order to advance AI, Teich pointed out. + +"I see this as kind of a baseline way for these vendors to check the interoperability box," he remarked. + +"Framework interoperability is a good thing, and this will only help developers in making sure that models that they build on MXNet or Caffe or CNTK are interoperable," Tractica's Kaul pointed out. 
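
To make the import/inference workflow concrete, here is a rough Python sketch of the MXNet side of that round trip. It is a hedged illustration rather than authoritative documentation: the `import_model` helper reflects the usage AWS announced for the ONNX-MXNet package, but the file name, input name, and input shape below are invented for the example.

```python
# Hedged sketch: load a model trained in another framework (and exported
# to ONNX) into MXNet for inference. The file name, input name, and shape
# are illustrative assumptions, not part of any fixed API.
import mxnet as mx
import onnx_mxnet

# import_model returns an MXNet symbol graph plus its trained parameters.
sym, params = onnx_mxnet.import_model("model_trained_elsewhere.onnx")

# Bind the graph into a standard MXNet module, for inference only.
mod = mx.mod.Module(symbol=sym, data_names=["input_0"], label_names=None)
mod.bind(for_training=False, data_shapes=[("input_0", (1, 3, 224, 224))])
mod.set_params(arg_params=params, aux_params=None, allow_missing=True)
# mod.forward(...) can now run predictions on the ONNX-originated weights.
```

The point of the sketch is the division of labor the article describes: the training framework can change freely, while the serving side only ever sees the ONNX file.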
+ +As to how this interoperability might apply in the real world, Teich noted that technologies such as natural language translation or speech recognition would require that Alexa's voice recognition technology be packaged and delivered to another developer's embedded environment. + +### Thanks, Open Source + +"Despite their competitive differences, these companies all recognize they owe a significant amount of their success to the software development advancements generated by the open source movement," said Jeff Kaplan, managing director of [ThinkStrategies][7]. + +"The Open Neural Network Exchange is committed to producing similar benefits and innovations in AI," he told LinuxInsider. + +A growing number of major technology companies have announced plans to use open source to speed the development of AI collaboration, in order to create more uniform platforms for development and research. + +AT&T just a few weeks ago announced plans [to launch the Acumos Project][8] with TechMahindra and The Linux Foundation. The platform is designed to open up efforts for collaboration in telecommunications, media and technology.  +![](https://www.ectnews.com/images/end-enn.gif) + +-------------------------------------------------------------------------------- + +via: https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html + +作者:[ David Jones ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#searchbyline +[1]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html# +[2]:https://www.linuxinsider.com/perl/mailit/?id=84971 +[3]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html +[4]:https://github.com/onnx/onnx +[5]:https://www.tractica.com/ +[6]:http://www.tiriasresearch.com/ +[7]:http://www.thinkstrategies.com/ +[8]:https://www.linuxinsider.com/story/84926.html +[9]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html diff --git a/sources/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md b/sources/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md new file mode 100644 index 0000000000..dd61ad7a95 --- /dev/null +++ b/sources/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md @@ -0,0 +1,156 @@ +translating by lujun9972 +How To Tell If Your Linux Server Has Been Compromised +-------------- + +A server being compromised or hacked for the purpose of this guide is an unauthorized person or bot logging into the server in order to use it for their own, usually negative ends. + +Disclaimer: If your server has been compromised by a state organization like the NSA or a serious criminal group then you will not notice any problems and the following techniques will not register their presence. + +However, the majority of compromises are carried out by bots i.e. automated attack programs, inexperienced attackers e.g. “script kiddies”, or dumb criminals. + +These sorts of attackers will abuse the server for all it’s worth whilst they have access to it and take few precautions to hide what they are doing. 
+ +### Symptoms of a compromised server + +When a server has been compromised by an inexperienced or automated attacker they will usually do something with it that consumes 100% of a resource. This resource will usually be either the CPU for something like cryptocurrency mining or email spamming, or bandwidth for launching a DOS attack. + +This means that the first indication that something is amiss is that the server is “going slow”. This could manifest in the website serving pages much slower than usual, or email taking many minutes to deliver or send. + +So what should you look for? + +### Check 1 - Who’s currently logged in? + +The first thing you should look for is who is currently logged into the server. It is not uncommon to find the attacker actually logged into the server and working on it. + +The shell command to do this is w. Running w gives the following output: + +``` + 08:32:55 up 98 days, 5:43, 2 users, load average: 0.05, 0.03, 0.00 +USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT +root pts/0 113.174.161.1 08:26 0.00s 0.03s 0.02s ssh root@coopeaa12 +root pts/1 78.31.109.1 08:26 0.00s 0.01s 0.00s w + +``` + +One of those IPs is a UK IP and the second is Vietnamese. That’s probably not a good thing. + +Stop and take a breath, don’t panic and simply kill their SSH connection. Unless you can stop them re-entering the server they will do so quickly and quite likely kick you off and stop you getting back in. + +Please see the What should I do if I’ve been compromised section at the end of this guide on how to proceed if you do find evidence of compromise. + +The whois command can be run on IP addresses and will tell you all the information about the organization that the IP is registered to, including the country. + +### Check 2 - Who has logged in? + +Linux servers keep a record of which users logged in, from what IP, when and for how long. This information is accessed with the last command. + +The output looks like this: + +``` +root pts/1 78.31.109.1 Thu Nov 30 08:26 still logged in +root pts/0 113.174.161.1 Thu Nov 30 08:26 still logged in +root pts/1 78.31.109.1 Thu Nov 30 08:24 - 08:26 (00:01) +root pts/0 113.174.161.1 Wed Nov 29 12:34 - 12:52 (00:18) +root pts/0 14.176.196.1 Mon Nov 27 13:32 - 13:53 (00:21) + +``` + +There is a mix of my UK IPs and some Vietnamese ones, with the top two still logged in. If you see any IPs that are not authorized then refer to the final section. + +The login history is contained in the binary file /var/log/wtmp and is easily removable. Often, attackers will simply delete this file to try to cover their tracks. Consequently, if you run last and only see your current login, this is a Bad Sign. + +If there is no login history be very, very suspicious and continue looking for indications of compromise. + +### Check 3 - Review the command history + +This level of attacker will frequently take no precautions and will leave a full command history, so running the history command will show you everything they have done. Be on the lookout for wget or curl commands to download out-of-repo software such as spam bots or crypto miners. + +The command history is contained in the ~/.bash_history file so some attackers will delete this file to cover what they have done. Just as with the login history, if you run history and don’t see anything then the history file has been deleted. Again this is a Bad Sign and you should review the server very carefully. + +### Check 4 - What’s using all the CPU? 
+ +The sorts of attackers that you will encounter usually don’t take too many precautions to hide what they are doing. So they will run processes that consume all the CPU. This generally makes it pretty easy to spot them. Simply run top and look at the highest process. + +This will also show people exploiting your server without having logged in. This could be, for example, someone using an unprotected form-mail script to relay spam. + +If you don’t recognize the top process then either Google its name or investigate what it’s doing with lsof or strace. + +To use these tools first copy the process’s PID from top and run: + +``` +strace -p PID + +``` + +This will display all the system calls the process is making. It’s a lot of information but looking through it will give you a good idea what’s going on. + +``` +lsof -p PID + +``` + +This program will list the open files that the process has. Again, this will give you a good idea what it’s doing by showing you what files it is accessing. + +### Check 5 - Review all the system processes + +If an unauthorized process is not consuming enough CPU to get listed noticeably on top it will still get displayed in a full process listing with ps. My preferred command is ps auxf as it provides the most information clearly. + +You should be looking for any processes that you don’t recognize. The more times you run ps on your servers (which is a good habit to get into) the more obviously an alien process will stand out. + +### Check 6 - Review network usage by process + +The command iftop functions like top to show a ranked list of processes that are sending and receiving network data along with their source and destination. A process like a DOS attack or spam bot will immediately show itself at the top of the list. + +### Check 7 - What processes are listening for network connections? + +Often an attacker will install a program that doesn’t do anything except listen on a network port for instructions. This does not consume CPU or bandwidth whilst it is waiting so can get overlooked in the top-type commands. + +The commands lsof and netstat will both list all networked processes. I use them with the following options: + +``` +lsof -i + +``` + +``` +netstat -plunt + +``` + +You should look for any process that is listed in the LISTEN or ESTABLISHED status as these processes are either waiting for a connection (LISTEN) or have a connection open (ESTABLISHED). If you don’t recognize these processes use strace or lsof to try to see what they are doing. + +### What should I do if I’ve been compromised? + +The first thing to do is not to panic, especially if the attacker is currently logged in. You need to be able to take back control of the machine before the attacker is aware that you know about them. If they realize you know about them they may well lock you out of your server and start destroying any assets out of spite. + +If you are not very technical then simply shut down the server. Either from the server itself with shutdown -h now or systemctl poweroff. Or log into your hosting provider’s control panel and shut down the server. Once it’s powered off you can work on the needed firewall rules and consult with your provider in your own time. + +If you’re feeling a bit more confident and your hosting provider has an upstream firewall then create and enable the following two rules in this order (a scripted sketch of the same lockdown follows the list): + +1. Allow SSH traffic from only your IP address. + +2. Block everything else, not just SSH but every protocol on every port. 
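
For those who want to script that lockdown so the rules cannot be applied out of order, here is a minimal Python sketch. It assumes an iptables-based firewall, root privileges, and that the ADMIN_IP placeholder is replaced with your real address before running; get the address wrong and you will lock yourself out as well.

```python
# Minimal sketch of the two-rule lockdown described above.
# Assumptions: iptables is in use, this runs as root, and ADMIN_IP is
# replaced with your own address. Order matters: the ACCEPT rule must be
# in place before the default policy flips to DROP.
import subprocess

ADMIN_IP = "203.0.113.10"  # hypothetical placeholder - use your real IP

def apply_rule(cmd):
    """Run one iptables command, aborting immediately if it fails."""
    print("applying:", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Rule 1: allow SSH (TCP port 22) from your IP address only.
apply_rule(["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "22",
            "-s", ADMIN_IP, "-j", "ACCEPT"])

# Rule 2: block everything else - every protocol on every port.
apply_rule(["iptables", "-P", "INPUT", "DROP"])
```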
+ +This will immediately kill their SSH session and give only you access to the server. + +If you don’t have access to an upstream firewall then you will have to create and enable these firewall rules on the server itself and then, when they are in place, kill the attacker’s SSH session with the kill command. + +A final method, where available, is to log into the server via an out-of-band connection such as the serial console and stop networking with systemctl stop network.service. This will completely stop any network access so you can now enable the firewall rules in your own time. + +Once you have regained control of the server do not trust it. + +Do not attempt to fix things up and continue using the server. You can never be sure what the attacker did and so you can never be sure the server is secure. + +The only sensible course of action is to copy off all the data that you need and start again from a fresh install. + +-------------------------------------------------------------------------------- + +via: https://bash-prompt.net/guides/server-hacked/ + +作者:[Elliot Cooper][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://bash-prompt.net diff --git a/sources/tech/20171128 The politics of the Linux desktop.md b/sources/tech/20171128 The politics of the Linux desktop.md new file mode 100644 index 0000000000..c9117dacfe --- /dev/null +++ b/sources/tech/20171128 The politics of the Linux desktop.md @@ -0,0 +1,110 @@ +The politics of the Linux desktop +============================================================ + +### If you're working in open source, why would you use anything but Linux as your main desktop? + + +![The politics of the Linux desktop](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_networks.png?itok=XasNXxKs "The politics of the Linux desktop") +Image by : opensource.com + +At some point in 1997 or 1998—history does not record exactly when—I made the leap from Windows to the Linux desktop. I went through quite a few distributions, from Red Hat to SUSE to Slackware, then Debian, Debian Experimental, and (for a long time thereafter) Ubuntu. When I accepted a role at Red Hat, I moved to Fedora, and migrated both my kids (then 9 and 11) to Fedora as well. + +More Linux resources + +* [What is Linux?][1] + +* [What are Linux containers?][2] + +* [Download Now: Linux commands cheat sheet][3] + +* [Advanced Linux commands cheat sheet][4] + +* [Our latest Linux articles][5] + +For a few years, I kept Windows as a dual-boot option, and then realised that, if I was going to commit to Linux, then I ought to go for it properly. In losing Windows, I didn't miss much; there were a few games that I couldn't play, but it was around the time that the Civilization franchise was embracing Linux, so that kept me happy. + +The move to Linux wasn't plain sailing, by any stretch of the imagination. If you wanted to use fairly new hardware in the early days, you had to first ensure that there were  _any_  drivers for Linux, then learn how to compile and install them. If they were not quite my friends, **lsmod** and **modprobe** became at least close companions. I taught myself to compile a kernel and tweak the options to make use of (sometimes disastrous) new, "EXPERIMENTAL" features as they came out. 
Early on, I learned the lesson that you should always keep at least one kernel in your [LILO][12] list that you were  _sure_  booted fully. I cursed NVidia and grew horrified by SCSI. I flirted with early journalling filesystem options and tried to work out whether the different preempt parameters made any noticeable difference to my user experience or not. I began to accept that printers would never print—and then they started to. I discovered that the Bluetooth stack suddenly started to connect to things. + +Over the years, using Linux moved from being an uphill struggle to something that just worked. I moved my mother-in-law and then my father over to Linux so I could help administer their machines. And then I moved them off Linux so they could no longer ask me to help administer their machines. + +It wasn't just at home, either: I decided that I would use Linux as my desktop for work, as well. I even made it a condition of employment for at least one role. Linux desktop support in the workplace caused different sets of problems. The first was the "well, you're on your own: we're not going to support you" email from IT support. VPNs were touch and go, but in the end, usually go. + +The biggest hurdle was Microsoft Office, until I discovered [CrossOver][13], which I bought with my own money, and which allowed me to run company-issued copies of Word, PowerPoint, and the rest on my Linux desktop. Fonts were sometimes a problem, and one company I worked for required Microsoft Lync. For this, and for a few other applications, I would sometimes have to run a Windows virtual machine (VM) on my Linux desktop.  Was this a cop out?  Well, a little bit: but I've always tried to restrict my usage of this approach to the bare minimum. + +### But why? + +"Why?" colleagues would ask. "Why do you bother? Why not just run Windows?" + +"Because I enjoy pain," was usually my initial answer, and then the more honest, "because of the principle of the thing." + +So this is it: I believe in open source. We have a number of very, very good desktop-compatible distributions these days, and most of the time they just work. If you use well-known or supported hardware, they're likely to "just work" pretty much as well as the two obvious alternatives, Windows or Mac. And they just work because many people have put much time into using them, testing them, and improving them. So it's not a case of why wouldn't I use Windows or Mac, but why would I ever consider  _not_  using Linux? If, as I do, you believe in open source, and particularly if you work within the open source community or are employed by an open source organisation, I struggle to see why you would even consider not using Linux. + +I've spoken to people about this (of course I have), and here are the most common reasons—or excuses—I've heard. + +1. I'm more productive on Windows/Mac. + +2. I can't use app X on Linux, and I need it for my job. + +3. I can't game on Linux. + +4. It's what our customers use, so why would we alienate them? + +5. "Open" means choice, and I prefer a proprietary desktop, so I use that. + +Interestingly, I don't hear "Linux isn't good enough" much anymore, because it's manifestly untrue, and I can show that my own experience—and that of many colleagues—belies that. 
+ +### Rebuttals + +Let's go through those answers and rebut them. + +1. **I'm more productive on Windows/Mac.** I'm sure you are. Anyone is more productive when they're using a platform or a system they're used to. If you believe in open source, then I contest that you should take the time to learn how to use a Linux desktop and the associated applications. If you're working for an open source organisation, they'll probably help you along, and you're unlikely to find you're much less productive in the long term. And, you know what? If you are less productive in the long term, then get in touch with the maintainers of the apps that are causing you to be less productive and help improve them. You don't have to be a coder. You could submit bug reports, suggest improvements, write documentation, or just test the most recent versions of the software. And then you're helping yourself and the rest of the community. Welcome to open source. + +1. **I can't use app X on Linux, and I need it for my job.** This may be true. But it's probably less true than you think. The people most often saying this with conviction are audio, video, or graphics experts. It was certainly the case for many years that Linux lagged behind in those areas, but have a look and see what the other options are. And try them, even if they're not perfect, and see how you can improve them. Alternatively, use a VM for that particular app. + +1. **I can't game on Linux.** Well, you probably can, but not all the games that you enjoy. This, to be clear, shouldn't really be an excuse not to use Linux for most of what you do. It might be a reason to keep a dual-boot system or to do what I did (after much soul-searching) and buy a games console (because Elite Dangerous really  _doesn't_  work on Linux, more's the pity). It should also be an excuse to lobby for your favourite games to be ported to Linux. + +1. **It's what our customers use, so why would we alienate them?** I don't get this one. Does Microsoft ban visitors with Macs from their buildings? Does Apple ban Windows users? Does Google allow non-Android phones through their doors? You don't kowtow to the majority when you're the little guy or gal; if you're working in open source, surely you should be proud of that. You're not going to alienate your customer—you're really not. + +1. **"Open" means choice, and I prefer a proprietary desktop, so I use that.** Being open certainly does mean you have a choice. You made that choice by working in open source. For many, including me, that's a moral and philosophical choice. Saying you embrace open source, but rejecting it in practice seems mealy-mouthed, even insulting. Using openness to justify your choice is the wrong approach. Saying "I prefer a proprietary desktop, and company policy allows me to do so" is better. I don't agree with your decision, but at least you're not using the principle of openness to justify it. + +Is using open source easy? Not always. But it's getting easier. I think that we should stand up for what we believe in, and if you're reading [Opensource.com][14], then you probably believe in open source. And that, I believe, means that you should run Linux as your main desktop. + + _Note: I welcome comments, and would love to hear different points of view. I would ask that comments don't just list application X or application Y as not working on Linux. 
I concede that not all apps do. I'm more interested in justifications that I haven't covered above, or (perceived) flaws in my argument. Oh, and support for it, of course._ + + +### About the author + + [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/2017-05-10_0129.jpg?itok=Uh-eKFhx)][15] + + Mike Bursell - I've been in and around Open Source since around 1997, and have been running (GNU) Linux as my main desktop at home and work since then: [not always easy][7]...  I'm a security bod and architect, and am currently employed as Chief Security Architect for Red Hat.  I have a blog - "[Alice, Eve & Bob][8]" - where I write (sometimes rather parenthetically) about security.  I live in the UK and... [more about Mike Bursell][9][More about me][10] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/11/politics-linux-desktop + +作者:[Mike Bursell ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/mikecamel +[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[6]:https://opensource.com/article/17/11/politics-linux-desktop?rate=do69ixoNzK0yg3jzFk0bc6ZOBsIUcqTYv6FwqaVvzUA +[7]:https://opensource.com/article/17/11/politics-linux-desktop +[8]:https://aliceevebob.com/ +[9]:https://opensource.com/users/mikecamel +[10]:https://opensource.com/users/mikecamel +[11]:https://opensource.com/user/105961/feed +[12]:https://en.wikipedia.org/wiki/LILO_(boot_loader) +[13]:https://en.wikipedia.org/wiki/CrossOver_(software) +[14]:https://opensource.com/ +[15]:https://opensource.com/users/mikecamel +[16]:https://opensource.com/users/mikecamel +[17]:https://opensource.com/users/mikecamel +[18]:https://opensource.com/article/17/11/politics-linux-desktop#comments +[19]:https://opensource.com/tags/linux diff --git a/sources/tech/20171128 Why Python and Pygame are a great pair for beginning programmers.md b/sources/tech/20171128 Why Python and Pygame are a great pair for beginning programmers.md new file mode 100644 index 0000000000..479bfb1232 --- /dev/null +++ b/sources/tech/20171128 Why Python and Pygame are a great pair for beginning programmers.md @@ -0,0 +1,142 @@ +Why Python and Pygame are a great pair for beginning programmers +============================================================ + +### We look at three reasons Pygame is a good choice for learning to program. 
+ + +![What's the best game platform for beginning programmers?](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_development_programming.png?itok=M_QDcgz5 "What's the best game platform for beginning programmers?") +Image by :  + +opensource.com + +Last month, [Scott Nesbitt][10] wrote about [Mozilla awarding $500K to support open source projects][11]. Phaser, an HTML/JavaScript game platform, was [awarded $50,000][12]. I’ve been teaching Phaser to my pre-teen daughter for a year, and it's one of the best and easiest HTML game development platforms to learn. [Pygame][13], however, may be a better choice for beginners. Here's why. + +### 1\. One long block of code + +Pygame is based on Python, the [most popular language for introductory computer courses][14]. Python is great for writing out ideas in one long block of code. Kids start off with a single file and with a single block of code. Before they can get to functions or classes, they start with code that will soon resemble spaghetti. It’s like finger-painting, as they throw thoughts onto the page. + +More Python Resources + +* [What is Python?][1] + +* [Top Python IDEs][2] + +* [Top Python GUI frameworks][3] + +* [Latest Python content][4] + +* [More developer resources][5] + +This approach to learning works. Kids will naturally start to break things into functions and classes as their code gets more difficult to manage. By learning the syntax of a language like Python prior to learning about functions, the student will gain basic programming knowledge before using global and local scope. + +Most HTML games separate the structure, style, and programming logic into HTML, CSS, and JavaScript to some degree and require knowledge of CSS and HTML. While the separation is better in the long term, it can be a barrier for beginners. Once kids realize that they can quickly build web pages with HTML and CSS, they may get distracted by the visual excitement of colors, fonts, and graphics. Even those who stay focused on JavaScript coding will still need to learn the basic document structure that the JavaScript code sits in. + +### 2\. Global variables are more obvious + +Both Python and JavaScript use dynamically typed variables, meaning that a variable becomes a string, an integer, or a float when it’s assigned; however, making mistakes is easier in JavaScript. As with their typing, both JavaScript and Python have global and local variable scopes. In Python, global variables inside of a function are identified with the global keyword. + +Let’s look at the basic [Making your first Phaser game tutorial][15], by Alvin Ourrad and Richard Davey, to understand the challenge of using Phaser to teach programming to beginners. In JavaScript, global variables—variables that can be accessed anywhere in the program—are difficult to keep track of and often are the source of bugs that are challenging to solve. Richard and Alvin are expert programmers and use global variables intentionally to keep things concise. 
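
For contrast, here is a minimal Python sketch (the variable and function names are invented for this example) of how the global keyword makes that scope explicit:

```python
# Minimal sketch: Python forces a function to declare 'global' before it
# can rebind a module-level variable, which makes the scope easy to spot.
score = 0  # a global variable

def add_point():
    global score  # without this declaration, score += 1 raises an error
    score += 1

add_point()
print(score)  # prints 1
```

With that contrast in mind, here is the opening of the Phaser tutorial's JavaScript: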
+ +``` +var game = new Phaser.Game(800, 600, Phaser.AUTO, '', { preload: preload, create: create, update: update }); + +function preload() { + +    game.load.image('sky', 'assets/sky.png'); + +} + +var player; +var platforms; + +function create() { +    game.physics.startSystem(Phaser.Physics.ARCADE); +… +``` + +In their Phaser programming book  [_Interphase_,][16] Richard Davey and Ilija Melentijevic explain that global variables are commonly used in many Phaser projects because they make it easier to get things done quickly. + +> “If you’ve ever worked on a game of any significant size then this approach is probably already making you cringe slightly... So why do we do it? The reason is simply because it’s the most concise and least complicated way to demonstrate what Phaser can do.” + +Although structuring a Phaser application to use local variables and split things up nicely into separation of concerns is possible, that’s tough for kids to understand when they’re first learning to program. + +If you’re set on teaching your kids to code with JavaScript, or if they already know how to code in another language like Python, a good Phaser course is [The Complete Mobile Game Development Course][17], by [Pablo Farias Navarro][18]. Although the title focuses on mobile games, the actual course focuses on JavaScript and Phaser. The JavaScript and Phaser apps are moved to a mobile phone with [PhoneGap][19]. + +### 3\. Pygame comes with less assembly required + +Thanks to [Python Wheels][20], Pygame is now super [easy to install][21]. You can also install it on Fedora/Red Hat with the **yum** package manager: + +``` +sudo yum install python3-pygame +``` + +See the official [Pygame installation documentation][22] for more information. + +Although Phaser itself is even easier to install, it does require more knowledge to use. As mentioned previously, the student will need to assemble their JavaScript code within an HTML document with some CSS. In addition to the three languages—HTML, CSS, and JavaScript—Phaser also requires the use of Firefox or Chrome development tools and an editor. The most common editors for JavaScript are Sublime, Atom, VS Code (probably in that order). + +Phaser applications will not run if you open the HTML file in a browser directly, due to the [same-origin policy][23]. You must run a web server and access the files by connecting to the web server. Fortunately, you don’t need to run Apache on your local computer; you can run something lightweight like [httpster][24] for most projects. + +### Advantages of Phaser and JavaScript + +With all the challenges of JavaScript and Phaser, why am I teaching them? Honestly, I held off for a long time. I worried about students learning variable hoisting and scope. I developed my own curriculum based on Pygame and Python, then I developed one based on Phaser. Eventually, I decided to use Pablo’s pre-made curriculum as a starting point.  + +There are really two reasons that I moved to JavaScript. First, JavaScript has emerged as a serious language used in serious applications. In addition to web applications, it’s used for mobile and server applications. JavaScript is everywhere, and it’s used widely in applications kids see every day. If their friends code in JavaScript, they'll likely want to as well. As I saw the momentum behind JavaScript, I looked into alternatives that could compile into JavaScript, primarily Dart and TypeScript. I didn’t mind the extra conversion step, but I still looked at JavaScript. 
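
To be fair to the other side of the comparison, here is what the Pygame equivalent of "getting something on screen" looks like: one self-contained file, with no web server, HTML, or CSS required. The window size, caption, and background colour are arbitrary choices for the sketch.

```python
# A complete, minimal Pygame program: a single file that opens a window
# and runs an event loop until the user closes it.
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))  # window size is arbitrary
pygame.display.set_caption("Hello, Pygame")   # so is the caption

running = True
while running:
    for event in pygame.event.get():          # handle the close button
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))                    # clear the frame to black
    pygame.display.flip()                     # display the finished frame

pygame.quit()
```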
+ +In the end, I chose to use Phaser and JavaScript because I realized that the problems could be solved with JavaScript and a bit of work. High-quality debugging tools and the work of some exceptionally smart people have made JavaScript a language that is both accessible and useful for teaching kids to code. + +### Final word: Python vs. JavaScript + +When people ask me what language to start their kids with, I immediately suggest Python and Pygame. There are tons of great curriculum options, many of which are free. I used ["Making Games with Python & Pygame"][25] by Al Sweigart with my son. I also used  _[Think Python: How to Think Like a Computer Scientist][7]_ by Allen B. Downey. You can get Pygame on your Android phone with [RAPT Pygame][26] by [Tom Rothamel][27]. + +Despite my recommendation, I always suspect that kids soon move to JavaScript. And that’s okay—JavaScript is a mature language with great tools. They’ll have fun with JavaScript and learn a lot. But after years of helping my daughter’s older brother create cool games in Python, I’ll always have an emotional attachment to Python and Pygame. + +### About the author + + [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/craig-head-crop.png?itok=LlMnIq8m)][28] + + Craig Oda - First elected president and co-founder of Tokyo Linux Users Group. Co-author of "Linux Japanese Environment" book published by O'Reilly Japan. Part of core team that established first ISP in Asia. Former VP of product management and product marketing for major Linux company. Partner at Oppkey, developer relations consulting firm in Silicon Valley.[More about me][8] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/11/pygame + +作者:[Craig Oda ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/codetricity +[1]:https://opensource.com/resources/python?intcmp=7016000000127cYAAQ +[2]:https://opensource.com/resources/python/ides?intcmp=7016000000127cYAAQ +[3]:https://opensource.com/resources/python/gui-frameworks?intcmp=7016000000127cYAAQ +[4]:https://opensource.com/tags/python?intcmp=7016000000127cYAAQ +[5]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ +[6]:https://opensource.com/article/17/11/pygame?rate=PV7Af00S0QwicZT2iv8xSjJrmJPdpfK1Kcm7LXxl_Xc +[7]:http://greenteapress.com/thinkpython/html/index.html +[8]:https://opensource.com/users/codetricity +[9]:https://opensource.com/user/46031/feed +[10]:https://opensource.com/users/scottnesbitt +[11]:https://opensource.com/article/17/10/news-october-14 +[12]:https://www.patreon.com/photonstorm/posts +[13]:https://www.pygame.org/news +[14]:https://cacm.acm.org/blogs/blog-cacm/176450-python-is-now-the-most-popular-introductory-teaching-language-at-top-u-s-universities/fulltext +[15]:http://phaser.io/tutorials/making-your-first-phaser-game +[16]:https://phaser.io/interphase +[17]:https://academy.zenva.com/product/the-complete-mobile-game-development-course-platinum-edition/ +[18]:https://gamedevacademy.org/author/fariazz/ +[19]:https://phonegap.com/ +[20]:https://pythonwheels.com/ +[21]:https://pypi.python.org/pypi/Pygame +[22]:http://www.pygame.org/wiki/GettingStarted#Pygame%20Installation +[23]:https://blog.chromium.org/2008/12/security-in-depth-local-web-pages.html +[24]:https://simbco.github.io/httpster/ 
[25]:https://inventwithpython.com/makinggames.pdf
[26]:https://github.com/renpytom/rapt-pygame-example
[27]:https://github.com/renpytom
[28]:https://opensource.com/users/codetricity
[29]:https://opensource.com/users/codetricity
[30]:https://opensource.com/users/codetricity
[31]:https://opensource.com/article/17/11/pygame#comments
[32]:https://opensource.com/tags/python
[33]:https://opensource.com/tags/programming
diff --git a/sources/tech/20171129 10 open source technology trends for 2018.md b/sources/tech/20171129 10 open source technology trends for 2018.md
new file mode 100644
index 0000000000..eb21c62ec9
--- /dev/null
+++ b/sources/tech/20171129 10 open source technology trends for 2018.md
@@ -0,0 +1,143 @@
+translating by wangy325...
+
+
+10 open source technology trends for 2018
+============================================================

### What do you think will be the next open source tech trends? Here are 10 predictions.

![10 open source technology trends for 2018](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fireworks-newyear-celebrate.png?itok=6gXaznov "10 open source technology trends for 2018")
Image by: [Mitch Bennett][10]. Modified by Opensource.com. [CC BY-SA 4.0][11]

Technology is always evolving. New developments, such as OpenStack, Progressive Web Apps, Rust, R, the cognitive cloud, artificial intelligence (AI), the Internet of Things, and more are putting our usual paradigms on the back burner. Here is a rundown of the top open source trends expected to soar in popularity in 2018.

### 1\. OpenStack gains increasing acceptance

[OpenStack][12] is essentially a cloud operating system that offers admins the ability to provision and control huge compute, storage, and networking resources through an intuitive and user-friendly dashboard.

Many enterprises are using the OpenStack platform to build and manage cloud computing systems. Its popularity rests on its flexible ecosystem, transparency, and speed. It supports mission-critical applications with ease and lower costs compared to alternatives. But OpenStack's complex structure and its dependency on virtualization, servers, and extensive networking resources have inhibited its adoption by a wider range of enterprises. Using OpenStack also requires well-oiled machinery of skilled staff and resources.

The OpenStack Foundation is working overtime to fill the voids. Several innovations, either released or on the anvil, would resolve many of its underlying challenges. As complexities decrease, OpenStack will surge in acceptance. The fact that OpenStack is already backed by many big software development and hosting companies, in addition to thousands of individual members, makes it the future of cloud computing.

### 2\. Progressive Web Apps become popular

[Progressive Web Apps][13] (PWA), an aggregation of technologies, design concepts, and web APIs, offer an app-like experience in the mobile browser.

Traditional websites suffer from many inherent shortcomings. Apps, although offering a more personal and focused engagement than websites, place a huge demand on resources, including needing to be downloaded upfront. PWAs deliver the best of both worlds, offering an app-like experience to users while being accessible in browsers, indexable on search engines, and responsive to fit any form factor. Like an app, a PWA updates itself to always display the latest real-time information, and, like a website, it is delivered in an ultra-safe HTTPS model.
It runs in a standard container and is accessible to anyone who types in the URL, without having to install anything. + +PWAs perfectly suit the needs of today's mobile users, who value convenience and personal engagement over everything else. That this technology is set to soar in popularity is a no-brainer. + +### 3\. Rust to rule the roost + +Most programming languages come with safety vs. control tradeoffs. [Rust][14] is an exception. The language co-opts extensive compile-time checking to offer 100% control without compromising safety. The last [Pwn2Own][15] competition threw up many serious vulnerabilities in Firefox on account of its underlying C++ language. If Firefox had been written in Rust, many of those errors would have manifested as compile-time bugs and resolved before the product rollout stage. + +Rust's unique approach of built-in unit testing has led developers to consider it a viable first-choice open source language. It offers an effective alternative to languages such as C and Python to write secure code without sacrificing expressiveness. Rust has bright days ahead in 2018. + +### 4\. R user community grows + +The [R][16] programming language, a GNU project, is associated with statistical computing and graphics. It offers a wide array of statistical and graphical techniques and is extensible to boot. It starts where [S][17] ends. With the S language already the vehicle of choice for research in statistical methodology, R offers a viable open source route for data manipulation, calculation, and graphical display. An added benefit is R's attention to detail and care for the finer nuances. + +Like Rust, R's fortunes are on the rise. + +### 5\. XaaS expands in scope + +XaaS, an acronym for "anything as a service," stands for the increasing number of services delivered over the internet, rather than on premises. Although software as a service (SaaS), infrastructure as a service (IaaS), and platform as a service (PaaS) are well-entrenched, new cloud-based models, such as network as a service (NaaS), storage as a service (SaaS or StaaS), monitoring as a service (MaaS), and communications as a service (CaaS), are soaring in popularity. A world where anything and everything is available "as a service" is not far away. + +The scope of XaaS now extends to bricks-and-mortar businesses, as well. Good examples are companies such as Uber and Lyft leveraging digital technology to offer transportation as a service and Airbnb offering accommodations as a service. + +High-speed networks and server virtualization that make powerful computing affordable have accelerated the popularity of XaaS, to the point that 2018 may become the "year of XaaS." The unmatched flexibility, agility, and scalability will propel the popularity of XaaS even further. + +### 6\. Containers gain even more acceptance + +Container technology is the approach of packaging pieces of code in a standardized way so they can be "plugged and run" quickly in any environment. Container technology allows enterprises to cut costs and implementation times. While the potential of containers to revolutionize IT infrastructure has been evident for a while, actual container use has remained complex. + +Container technology is still evolving, and the complexities associated with the technology decrease with every advancement. The latest developments make containers quite intuitive and as easy as using a smartphone, not to mention tuned for today's needs, where speed and agility can make or break a business. + +### 7\. 
Machine learning and artificial intelligence expand in scope + +[Machine learning and AI][18] give machines the ability to learn and improve from experience without a programmer explicitly coding the instruction. + +These technologies are already well entrenched, with several open source technologies leveraging them for cutting-edge services and applications. + +[Gartner predicts][19] the scope of machine learning and artificial intelligence will expand in 2018\. Several greenfield areas, such as data preparation, integration, algorithm selection, training methodology selection, and model creation are all set for big-time enhancements through the infusion of machine learning. + +New open source intelligent solutions are set to change the way people interact with systems and transform the very nature of work. + +* Conversational platforms, such as chatbots, make the question-and-command experience, where a user asks a question and the platform responds, the default medium of interacting with machines. + +* Autonomous vehicles and drones, fancy fads today, are expected to become commonplace by 2018. + +* The scope of immersive experience will expand beyond video games and apply to real-life scenarios such as design, training, and visualization processes. + +### 8\. Blockchain becomes mainstream + +Blockchain has come a long way from Bitcoin. The technology is already in widespread use in finance, secure voting, authenticating academic credentials, and more. In the coming year, healthcare, manufacturing, supply chain logistics, and government services are among the sectors most likely to embrace blockchain technology. + +Blockchain distributes digital information. The information resides on millions of nodes, in shared and reconciled databases. The fact that it's not controlled by any single authority and has no single point of failure makes it very robust, transparent, and incorruptible. It also solves the threat of a middleman manipulating the data. Such inherent strengths account for blockchain's soaring popularity and explain why it is likely to emerge as a mainstream technology in the immediate future. + +### 9\. Cognitive cloud moves to center stage + +Cognitive technologies, such as machine learning and artificial intelligence, are increasingly used to reduce complexity and personalize experiences across multiple sectors. One case in point is gamification apps in the financial sector, which offer investors critical investment insights and reduce the complexities of investment models. Digital trust platforms reduce the identity-verification process for financial institutions by about 80%, improving compliance and reducing chances of fraud. + +Such cognitive cloud technologies are now moving to the cloud, making it even more potent and powerful. IBM Watson is the most well-known example of the cognitive cloud in action. IBM's UIMA architecture was made open source and is maintained by the Apache Foundation. DARPA's DeepDive project mirrors Watson's machine learning abilities to enhance decision-making capabilities over time by learning from human interactions. OpenCog, another open source platform, allows developers and data scientists to develop artificial intelligence apps and programs. + +Considering the high stakes of delivering powerful and customized experiences, these cognitive cloud platforms are set to take center stage over the coming year. + +### 10\. 
The Internet of Things connects more things + +At its core, the Internet of Things (IoT) is the interconnection of devices through embedded sensors or other computing devices that enable the devices (the "things") to send and receive data. IoT is already predicted to be the next big major disruptor of the tech space, but IoT itself is in a continuous state of flux. + +One innovation likely to gain widespread acceptance within the IoT space is Autonomous Decentralized Peer-to-Peer Telemetry ([ADEPT][20]), which is propelled by IBM and Samsung. It uses a blockchain-type technology to deliver a decentralized network of IoT devices. Freedom from a central control system facilitates autonomous communications between "things" in order to manage software updates, resolve bugs, manage energy, and more. + +### Open source drives innovation + +Digital disruption is the norm in today's tech-centric era. Within the technology space, open source is now pervasive, and in 2018, it will be the driving force behind most of the technology innovations. + +Which open source trends and technologies would you add to this list? Let us know in the comments. + +### Topics + + [Business][25][Yearbook][26][2017 Open Source Yearbook][27] + +### About the author + + [![Sreejith@Fingent](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/sreejith.jpg?itok=sdYNV49V)][21] Sreejith - I have been programming since 2000, and professionally since 2007\. I currently lead the Open Source team at [Fingent][6] as we work on different technology stacks, ranging from the "boring"(read tried and trusted) to the bleeding edge. I like building, tinkering with and breaking things, not necessarily in that order. Hit me up at: [https://www.linkedin.com/in/futuregeek/][7][More about me][8] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/11/10-open-source-technology-trends-2018 + +作者:[Sreejith ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/sreejith +[1]:https://opensource.com/resources/what-is-openstack?intcmp=7016000000127cYAAQ +[2]:https://opensource.com/resources/openstack/tutorials?intcmp=7016000000127cYAAQ +[3]:https://opensource.com/tags/openstack?intcmp=7016000000127cYAAQ +[4]:https://www.rdoproject.org/?intcmp=7016000000127cYAAQ +[5]:https://opensource.com/article/17/11/10-open-source-technology-trends-2018?rate=GJqOXhiWvZh0zZ6WVTUzJ2TDJBpVpFhngfuX9V-dz4I +[6]:https://www.fingent.com/ +[7]:https://www.linkedin.com/in/futuregeek/ +[8]:https://opensource.com/users/sreejith +[9]:https://opensource.com/user/185026/feed +[10]:https://www.flickr.com/photos/mitchell3417/9206373620 +[11]:https://creativecommons.org/licenses/by-sa/4.0/ +[12]:https://www.openstack.org/ +[13]:https://developers.google.com/web/progressive-web-apps/ +[14]:https://www.rust-lang.org/ +[15]:https://en.wikipedia.org/wiki/Pwn2Own +[16]:https://en.wikipedia.org/wiki/R_(programming_language) +[17]:https://en.wikipedia.org/wiki/S_(programming_language) +[18]:https://opensource.com/tags/artificial-intelligence +[19]:https://sdtimes.com/gartners-top-10-technology-trends-2018/ +[20]:https://insights.samsung.com/2016/03/17/block-chain-mobile-and-the-internet-of-things/ +[21]:https://opensource.com/users/sreejith +[22]:https://opensource.com/users/sreejith +[23]:https://opensource.com/users/sreejith 
[24]:https://opensource.com/article/17/11/10-open-source-technology-trends-2018#comments
[25]:https://opensource.com/tags/business
[26]:https://opensource.com/tags/yearbook
[27]:https://opensource.com/yearbook/2017
diff --git a/sources/tech/20171129 5 best practices for getting started with DevOps.md b/sources/tech/20171129 5 best practices for getting started with DevOps.md
new file mode 100644
index 0000000000..962f37aaf4
--- /dev/null
+++ b/sources/tech/20171129 5 best practices for getting started with DevOps.md
@@ -0,0 +1,94 @@
+5 best practices for getting started with DevOps
+============================================================

### Are you ready to implement DevOps, but don't know where to begin? Try these five best practices.


![5 best practices for getting started with DevOps](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops-gears.png?itok=rUejbLQX "5 best practices for getting started with DevOps")
Image by: [Andrew Magill][8]. Modified by Opensource.com. [CC BY 4.0][9]

DevOps often stymies early adopters with its ambiguity, not to mention its depth and breadth. By the time someone buys into the idea of DevOps, their first questions usually are: "How do I get started?" and "How do I measure success?" These five best practices are a great road map to starting your DevOps journey.

### 1\. Measure all the things

You don't know for sure that your efforts are even making things better unless you can quantify the outcomes. Are my features getting out to customers more rapidly? Are fewer defects escaping to them? Are we responding to and recovering more quickly from failure?

Before you change anything, think about what kinds of outcomes you expect from your DevOps transformation. When you're further into your DevOps journey, you'll enjoy a rich array of near-real-time reports on everything about your service. But consider starting with these two metrics (a short code sketch near the end of this article shows how simple they are to compute once the underlying events are recorded):

* **Time to market** measures the end-to-end, often customer-facing, business experience. It usually begins when a feature is formally conceived and ends when the customer can consume the feature in production. Time to market is not mainly an engineering team metric; more importantly, it shows your business's complete end-to-end efficiency in bringing valuable new features to market and isolates opportunities for system-wide improvement.

* **Cycle time** measures the engineering team process. Once work on a new feature starts, when does it become available in production? This metric is very useful for understanding the efficiency of the engineering team and isolating opportunities for team-level improvement.

### 2\. Get your process off the ground

DevOps success requires an organization to put a regular (and hopefully effective) process in place and relentlessly improve upon it. It doesn't have to start out being effective, but it must be a regular process. Usually it's some flavor of agile methodology like Scrum or Scrumban; sometimes it's a Lean derivative. Whichever way you go, pick a formal process, start using it, and get the basics right.

Regular inspect-and-adapt behaviors are key to your DevOps success. Make good use of opportunities like the stakeholder demo, team retrospectives, and daily standups to find opportunities to improve your process.

A lot of your DevOps success hinges on people working effectively together. People on a team need to work from a common process that they are empowered to improve upon.
They also need regular opportunities to share what they are learning with other stakeholders, both upstream and downstream, in the process. + +Good process discipline will help your organization consume the other benefits of DevOps at the great speed that comes as your success builds. + +Although it's common for more development-oriented teams to successfully adopt processes like Scrum, operations-focused teams (or others that are more interrupt-driven) may opt for a process with a more near-term commitment horizon, such as Kanban. + +### 3\. Visualize your end-to-end workflow + +There is tremendous power in being able to see who's working on what part of your service at any given time. Visualizing your workflow will help people know what they need to work on next, how much work is in progress, and where the bottlenecks are in the process. + +You can't effectively limit work in process until you can see it and quantify it. Likewise, you can't effectively eliminate bottlenecks until you can clearly see them. + +Visualizing the entire workflow will help people in all parts of the organization understand how their work contributes to the success of the whole. It can catalyze relationship-building across organizational boundaries to help your teams collaborate more effectively towards a shared sense of success. + +### 4\. Continuous all the things + +DevOps promises a dizzying array of compelling automation. But Rome wasn't built in a day. One of the first areas you can focus your efforts on is [continuous integration][10] (CI). But don't stop there; you'll want to follow quickly with [continuous delivery][11] (CD) and eventually continuous deployment. + +Your CD pipeline is your opportunity to inject all manner of automated quality testing into your process. The moment new code is committed, your CD pipeline should run a battery of tests against the code and the successfully built artifact. The artifact that comes out at the end of this gauntlet is what progresses along your process until eventually it's seen by customers in production. + +Another "continuous" that doesn't get enough attention is continuous improvement. That's as simple as setting some time aside each day to ask your colleagues: "What small thing can we do today to get better at how we do our work?" These small, daily changes compound over time into more profound results. You'll be pleasantly surprised! But it also gets people thinking all the time about how to improve things. + +### 5\. Gherkinize + +Fostering more effective communication across your organization is crucial to fostering the sort of systems thinking prevalent in successful DevOps journeys. One way to help that along is to use a shared language between the business and the engineers to express the desired acceptance criteria for new features. A good product manager can learn [Gherkin][12] in a day and begin using it to express acceptance criteria in an unambiguous, structured form of plain English. Engineers can use this Gherkinized acceptance criteria to write acceptance tests against the criteria, and then develop their feature code until the tests pass. This is a simplification of [acceptance test-driven development][13](ATDD) that can also help kick start your DevOps culture and engineering practice. + +### Start on your journey + +Don't be discouraged by getting started with your DevOps practice. It's a journey. And hopefully these five ideas give you solid ways to get started. 
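To make the two metrics from the first practice concrete, here is a minimal sketch (illustrative only, not part of the original article); in practice the timestamps would come from your issue tracker and deployment pipeline rather than being hard-coded:

```
#include <chrono>
#include <iostream>

// Express durations in days for reporting purposes.
using days = std::chrono::duration<double, std::ratio<86400>>;

int main() {
    using clock = std::chrono::system_clock;

    // Hypothetical event timestamps for one feature.
    auto conceived = clock::now() - std::chrono::hours(24 * 30); // formally conceived
    auto dev_start = clock::now() - std::chrono::hours(24 * 12); // engineering work begins
    auto in_prod   = clock::now();                               // consumable in production

    days time_to_market = in_prod - conceived; // end-to-end business metric
    days cycle_time     = in_prod - dev_start; // engineering team metric

    std::cout << "Time to market: " << time_to_market.count() << " days\n"
              << "Cycle time:     " << cycle_time.count() << " days\n";
}
```

Tracked across many features, the trend of these two numbers is what tells you whether your process changes are actually working.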
### About the author

 [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot_4.jpg?itok=jntfDCfX)][14]

 Magnus Hedemark - Magnus has been in the IT industry for over 20 years, and a technology enthusiast for most of his life. He's presently Manager of DevOps Engineering at UnitedHealth Group. In his spare time, Magnus enjoys photography and paddling canoes.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/11/5-keys-get-started-devops

作者:[Magnus Hedemark][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/magnus919
[1]:https://opensource.com/tags/devops?src=devops_resource_menu1
[2]:https://opensource.com/resources/devops?src=devops_resource_menu2
[3]:https://www.openshift.com/promotions/devops-with-openshift.html?intcmp=7016000000127cYAAQ&src=devops_resource_menu3
[4]:https://enterprisersproject.com/article/2017/5/9-key-phrases-devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu4
[5]:https://www.redhat.com/en/insights/devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu5
[6]:https://opensource.com/article/17/11/5-keys-get-started-devops?rate=oEOzMXx1ghbkfl2a5ae6AnvO88iZ3wzkk53K2CzbDWI
[7]:https://opensource.com/user/25739/feed
[8]:https://ccsearch.creativecommons.org/image/detail/7qRx_yrcN5isTMS0u9iKMA==
[9]:https://creativecommons.org/licenses/by-sa/4.0/
[10]:https://martinfowler.com/articles/continuousIntegration.html
[11]:https://martinfowler.com/bliki/ContinuousDelivery.html
[12]:https://cucumber.io/docs/reference
[13]:https://en.wikipedia.org/wiki/Acceptance_test%E2%80%93driven_development
[14]:https://opensource.com/users/magnus919
[15]:https://opensource.com/users/magnus919
[16]:https://opensource.com/users/magnus919
[17]:https://opensource.com/tags/devops
diff --git a/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md b/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md
new file mode 100644
index 0000000000..d3ba75da14
--- /dev/null
+++ b/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md
@@ -0,0 +1,185 @@
+Translating by filefi
+
+
+How to Install and Use Wireshark on Debian 9 / Ubuntu 16.04 / 17.10
+============================================================

by [Pradeep Kumar][1] · Published November 29, 2017 · Updated November 29, 2017

 [![wireshark-Debian-9-Ubuntu 16.04 -17.10](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)][2]

Wireshark is a free and open source, cross-platform, GUI-based network packet analyzer that is available for Linux, Windows, macOS, Solaris, and more. It captures network packets in real time and presents them in a human-readable format, letting us inspect network traffic down to a microscopic level. Wireshark also has a command-line utility called 'tshark' that performs the same functions as Wireshark, but through the terminal rather than the GUI.

Wireshark can be used for network troubleshooting, analysis, software and communication protocol development, and education. Wireshark uses a library called 'pcap' for capturing network packets.
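To give a feel for what that library does under Wireshark's hood, here is a minimal sketch (not part of the original tutorial) of a single-packet capture using libpcap directly. It assumes the libpcap development headers are installed, must run as root, and uses the same 'enp0s3' interface name that appears in the screenshots later in this article:

```
#include <pcap/pcap.h>
#include <cstdio>

int main() {
    char errbuf[PCAP_ERRBUF_SIZE];

    // Open the interface for live capture: snapshot length 65535 bytes,
    // promiscuous mode on, 1000 ms read timeout.
    pcap_t* handle = pcap_open_live("enp0s3", 65535, 1, 1000, errbuf);
    if (handle == nullptr) {
        std::fprintf(stderr, "pcap_open_live failed: %s\n", errbuf);
        return 1;
    }

    // Grab one packet and report how large it was on the wire.
    pcap_pkthdr header;
    const u_char* packet = pcap_next(handle, &header);
    if (packet != nullptr)
        std::printf("Captured a packet of %u bytes\n", header.len);

    pcap_close(handle);
    return 0;
}
```

Everything Wireshark shows in its capture window starts life as packets delivered through this same interface, which is why the 'wireshark' group and capability tweaks described below are needed for non-root capture.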
Wireshark comes with a lot of features, and some of those features are:

* Support for hundreds of protocols for inspection,

* Ability to capture packets in real time and save them for later offline analysis,

* A number of filters for analyzing data,

* Captured data can be compressed and uncompressed on the fly,

* Support for various file formats for data analysis; output can also be saved to XML, CSV, and plain text formats,

* Data can be captured from a number of interfaces like Ethernet, WiFi, Bluetooth, USB, Frame Relay, Token Ring, etc.

In this article, we will discuss how to install Wireshark on Ubuntu/Debian machines and will also learn to use Wireshark for capturing network packets.

#### Installation of Wireshark on Ubuntu 16.04 / 17.10

Wireshark is available in the default Ubuntu repositories and can be installed using the following command. But there is a chance that you will not get the latest version of Wireshark this way.

```
linuxtechi@nixworld:~$ sudo apt-get update
linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
```

So to install the latest version of Wireshark, we have to enable or configure the official Wireshark repository.

Use the commands below one after another to configure the repository and to install the latest version of the Wireshark utility:

```
linuxtechi@nixworld:~$ sudo add-apt-repository ppa:wireshark-dev/stable
linuxtechi@nixworld:~$ sudo apt-get update
linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
```

Once Wireshark is installed, execute the command below so that non-root users can capture live packets on the interfaces:

```
linuxtechi@nixworld:~$ sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap
```

#### Installation of Wireshark on Debian 9

The Wireshark package and its dependencies are already present in the default Debian 9 repositories, so to install the latest stable version of Wireshark on Debian 9, use the following command:

```
linuxtechi@nixhome:~$ sudo apt-get update
linuxtechi@nixhome:~$ sudo apt-get install wireshark -y
```

During the installation, it will prompt us to configure dumpcap for non-superusers.

Select 'yes' and then hit enter.

 [![Configure-Wireshark-Debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9-1024x542.jpg)][3]

Once the installation is completed, execute the command below so that non-root users can also capture live packets on the interfaces:

```
linuxtechi@nixhome:~$ sudo chmod +x /usr/bin/dumpcap
```

We can also use the latest source package to install Wireshark on Ubuntu/Debian and many other Linux distributions.
#### Installing Wireshark using source code on Debian / Ubuntu Systems

First, download the latest source package (which is 2.4.2 at the time of writing this article) using the following command:

```
linuxtechi@nixhome:~$ wget https://1.as.dl.wireshark.org/src/wireshark-2.4.2.tar.xz
```

Next, extract the package and enter the extracted directory:

```
linuxtechi@nixhome:~$ tar -xf wireshark-2.4.2.tar.xz -C /tmp
linuxtechi@nixhome:~$ cd /tmp/wireshark-2.4.2
```

Now we will compile the code with the following commands:

```
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ ./configure --enable-setcap-install
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ make
```

Lastly, install the compiled packages to install Wireshark on the system:

```
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig
```

Upon installation, a separate group for Wireshark will also be created. We will now add our user to that group so that it can work with Wireshark; otherwise, you might get a 'permission denied' error when starting Wireshark.

To add the user to the wireshark group, execute the following command:

```
linuxtechi@nixhome:~$ sudo usermod -a -G wireshark linuxtechi
```

Now we can start Wireshark either from the GUI menu or from the terminal with this command:

```
linuxtechi@nixhome:~$ wireshark
```

#### Access Wireshark on Debian 9 System

 [![Access-wireshark-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9-1024x664.jpg)][4]

Click on the Wireshark icon.

 [![Wireshark-window-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9-1024x664.jpg)][5]

#### Access Wireshark on Ubuntu 16.04 / 17.10

 [![Access-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu-1024x664.jpg)][6]

Click on the Wireshark icon.

 [![Wireshark-window-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu-1024x664.jpg)][7]

#### Capturing and Analyzing packets

Once Wireshark has been started, we should be presented with the Wireshark window; examples are shown above for the Ubuntu and Debian systems.

 [![wireshark-Linux-system](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)][8]

These are all the interfaces from which we can capture network packets. Based on the interfaces you have on your system, this screen might be different for you.

We are selecting 'enp0s3' for capturing the network traffic on that interface. After selecting the interface, network packets for all the devices on our network start to populate the window (refer to the screenshot below):

 [![Capturing-Packet-from-enp0s3-Ubuntu-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark-1024x727.jpg)][9]

The first time we see this screen, we might be overwhelmed by the amount of data presented and wonder how to sort it all out. But worry not: one of the best features of Wireshark is its filters.

We can sort and filter the data based on IP address, port number, packet size, and so on; we can also use source and destination filters, and combine two or more filters to create more comprehensive searches. We can either write our filters in the 'Apply a Display Filter' tab, or select one of the already created rules.
To select a pre-built filter, click on the 'flag' icon next to the 'Apply a Display Filter' tab.

 [![Filter-in-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu-1024x727.jpg)][10]

We can also filter data based on the color coding. By default, light purple is TCP traffic, light blue is UDP traffic, and black identifies packets with errors. To see what these color codes mean, click View -> Coloring Rules; we can also change these codes.

 [![Packet-Colouring-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark-1024x682.jpg)][11]

After we have the results that we need, we can click on any of the captured packets to get more details about it; this will show all the data of that network packet.

Wireshark is an extremely powerful tool that takes some time to get used to and master; this tutorial will help you get started. Please feel free to drop your queries or suggestions in the comment box below.

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com

作者:[Pradeep Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linuxtechi.com/author/pradeep/
[1]:https://www.linuxtechi.com/author/pradeep/
[2]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg
[3]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9.jpg
[4]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9.jpg
[5]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9.jpg
[6]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu.jpg
[7]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu.jpg
[8]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg
[9]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark.jpg
[10]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu.jpg
[11]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark.jpg
diff --git a/sources/tech/20171129 Inside AGL Familiar Open Source Components Ease Learning Curve.md b/sources/tech/20171129 Inside AGL Familiar Open Source Components Ease Learning Curve.md
new file mode 100644
index 0000000000..9eee39888a
--- /dev/null
+++ b/sources/tech/20171129 Inside AGL Familiar Open Source Components Ease Learning Curve.md
@@ -0,0 +1,70 @@
+Inside AGL: Familiar Open Source Components Ease Learning Curve
+============================================================

![Matt Porter](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/porter-elce-agl.png?itok=E-5xG98S "Matt Porter")
Konsulko's Matt Porter (pictured) and Scott Murray ran through the major components of the AGL's Unified Code Base at Embedded Linux Conference Europe. [The Linux Foundation][1]

Among the sessions at the recent [Embedded Linux Conference Europe (ELCE)][5] (57 of which are [available on YouTube][2]) are several reports on the Linux Foundation's [Automotive Grade Linux project][6].
These include [an overview from AGL Community Manager Walt Miner][3] showing how AGL's Unified Code Base (UCB) Linux distribution is expanding from in-vehicle infotainment (IVI) to ADAS. There was even a presentation on using AGL to build a remote-controlled robot (see links below).

Here we look at the "State of AGL: Plumbing and Services," from Konsulko Group's CTO Matt Porter and senior staff software engineer Scott Murray. Porter and Murray ran through the components of the current [UCB 4.0 "Daring Dab"][7] and detailed major upstream components and API bindings, many of which will appear in the Electric Eel release due in January 2018.

Despite the automotive focus of the AGL stack, most of the components are already familiar to Linux developers. "It looks a lot like a desktop distro," Porter told the ELCE attendees in Prague. "All these familiar friends."

Some of those friends include the underlying Yocto Project "Poky" with its OpenEmbedded foundation, which is topped with layers like oe-core, meta-openembedded, and meta-networking. Other components are based on familiar open source software like systemd (application control), Wayland and Weston (graphics), BlueZ (Bluetooth), oFono (telephony), PulseAudio and ALSA (audio), gpsd (location), ConnMan (Internet), and wpa-supplicant (WiFi), among others.

UCB's application framework is controlled through a WebSocket interface to the API bindings, thereby enabling apps to talk to each other. There's also a new W3C widget for an alternative application packaging scheme, as well as support for SmartDeviceLink, a technology developed at Ford that automatically syncs up IVI systems with mobile phones.

AGL UCB's Wayland/Weston graphics layer is augmented with an "IVI shell" that works with the layer manager. "One of the unique requirements of automotive is the ability to separate aspects of the application in the layers," said Porter. "For example, in a navigation app, the graphics rendering for the map may be completely different than the engine used for the UI decorations. One engine renders to a surface in Wayland to expose the map while the decorations and controls are handled by another layer."

For audio, ALSA and PulseAudio are joined by GENIVI AudioManager, which works together with PulseAudio. "We use AudioManager for policy-driven audio routing," explained Porter. "It allows you to write a very complex XML-based policy using a rules engine with audio routing."

UCB leans primarily on the well-known [Smack Project][8] for security, and also incorporates Tizen's [Cynara][9] safe policy-checker service. A Cynara-enabled D-Bus daemon is used to control Cynara security policies.

Porter and Murray went on to explain AGL's API binding mechanism, which according to Murray "abstracts the UI from its back-end logic so you can replace it with your own custom UI." You can re-use application logic with different UI implementations, such as moving from the default Qt to HTML5 or a native toolkit. Application binding requests and responses use JSON via HTTP or WebSocket (a rough sketch of such a call appears at the end of this article). Binding calls can be made from applications or from other bindings, thereby enabling "stacking" of bindings.

Porter and Murray concluded with a detailed description of each binding. These include upstream bindings currently in various stages of development. The first is a Master binding that manages the application lifecycle, including tasks such as install, uninstall, start, and terminate.
Other upstream bindings include the WiFi binding and the BlueZ-based Bluetooth binding, which in the future will be upgraded with Bluetooth [PBAP][10] (Phone Book Access Profile). PBAP can connect with contacts databases on your phone, and links to the Telephony binding to replicate caller ID.

The oFono-based Telephony binding also makes calls to the Bluetooth binding for Bluetooth Hands-Free Profile (HFP) support. In the future, the Telephony binding will add support for sent dial tones, call waiting, call forwarding, and voice modem support.

Support for AM/FM radio is not well developed in the Linux world, so for its Radio binding, AGL started by supporting [RTL-SDR][11] code for low-end radio dongles. Future plans call for supporting specific automotive tuner devices.

The MediaPlayer binding is in very early development and is currently limited to GStreamer-based audio playback and control. Future plans call for adding playlist controls, as well as one of the most actively sought features among manufacturers: video playback support.

Location bindings include the [gpsd][12]-based GPS binding, as well as GeoClue and GeoFence. The GeoClue binding, which is built around the [GeoClue][13] D-Bus geolocation service, "overlaps a little with GPS, which uses the same location data," says Porter. GeoClue also gathers location data from WiFi AP databases, 3G/4G tower info, and the GeoIP database, sources that are useful "if you're inside or don't have a good fix," he added.

GeoFence depends on the GPS binding as well. It lets you establish a bounding box and then track ingress and egress events. GeoFence also tracks "dwell" status, which is determined by arriving at home and staying for 10 minutes. "It then triggers some behavior based on a timeout," said Porter. Future plans call for a customizable dwell transition time.

While most of these upstream bindings are well established, there are also Work in Progress (WIP) bindings that are still in the early stages, including CAN, HomeScreen, and WindowManager bindings. Farther out, there are plans to add speech recognition and text-to-speech bindings, as well as a WWAN modem binding.

In conclusion, Porter noted: "Like any open source project, we desperately need more developers." The Automotive Grade Linux project may seem peripheral to some developers, but it offers a nice mix of familiarity, grounded in many widely used open source projects, along with the excitement of expanding into a new and potentially game-changing computing form factor: your automobile. AGL has also demonstrated success: you can now [check out AGL in action in the 2018 Toyota Camry][14], followed in the coming months by most Toyota and Lexus vehicles sold in North America.
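As a rough illustration of the JSON-over-WebSocket binding calls mentioned above, the sketch below assembles a request addressed to a binding by api name and verb. The field names and the "radio"/"tune" call are illustrative assumptions, not the actual AGL wire format, and the transport itself is omitted:

```
#include <iostream>
#include <sstream>
#include <string>

// Build a JSON request for a binding identified by api name and verb.
// Sketch only: a real AGL client would send this over the framework's
// WebSocket connection and parse a JSON response.
std::string make_binding_request(const std::string& api,
                                 const std::string& verb,
                                 const std::string& args_json) {
    std::ostringstream req;
    req << "{\"api\":\"" << api << "\",\"verb\":\"" << verb
        << "\",\"args\":" << args_json << "}";
    return req.str();
}

int main() {
    // Hypothetical call asking a radio binding to tune to 101.1 MHz.
    std::cout << make_binding_request("radio", "tune",
                                      "{\"frequency\":101100000}")
              << '\n';
}
```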
Watch the complete video below:

[Video][15]

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/event/elce/2017/11/inside-agl-familiar-open-source-components-ease-learning-curve

作者:[ERIC BROWN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/ericstephenbrown
[1]:https://www.linux.com/licenses/category/linux-foundation
[2]:https://www.youtube.com/playlist?list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk
[3]:https://www.youtube.com/watch?v=kfwEmjSjAzM&index=14&list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk
[4]:https://www.linux.com/files/images/porter-elce-aglpng
[5]:http://events.linuxfoundation.org/events/embedded-linux-conference-europe
[6]:https://www.automotivelinux.org/
[7]:https://www.linux.com/blog/2017/8/automotive-grade-linux-moves-ucb-40-launches-virtualization-workgroup
[8]:http://schaufler-ca.com/
[9]:https://wiki.tizen.org/Security:Cynara
[10]:https://wiki.maemo.org/Bluetooth_PBAP
[11]:https://www.rtl-sdr.com/about-rtl-sdr/
[12]:http://www.catb.org/gpsd/
[13]:https://www.freedesktop.org/wiki/Software/GeoClue/
[14]:https://www.linux.com/blog/event/automotive-linux-summit/2017/6/linux-rolls-out-toyota-and-lexus-vehicles
[15]:https://youtu.be/RgI-g5h1t8I
diff --git a/sources/tech/20171129 Interactive Workflows for Cpp with Jupyter.md b/sources/tech/20171129 Interactive Workflows for Cpp with Jupyter.md
new file mode 100644
index 0000000000..395c901618
--- /dev/null
+++ b/sources/tech/20171129 Interactive Workflows for Cpp with Jupyter.md
@@ -0,0 +1,301 @@
+Interactive Workflows for C++ with Jupyter
+============================================================

Scientists, educators, and engineers use programming languages not only to build software systems, but also in interactive workflows, using the tools available to _explore_ a problem and _reason_ about it.

Running some code, looking at a visualization, loading data, and running more code: quick iteration is especially important during the exploratory phase of a project.

For this kind of workflow, users of the C++ programming language currently have no choice but to use a heterogeneous set of tools that don't play well with each other, making the whole process cumbersome and difficult to reproduce.

_We currently lack a good story for interactive computing in C++._

In our opinion, this hurts the productivity of C++ developers:

* Most of the progress made in software projects comes from incrementalism. Obstacles to fast iteration hinder progress.

* This also makes C++ more difficult to teach. The first hours of a C++ class are rarely rewarding, as the students must learn how to set up a small project before writing any code. And then, a lot more time is required before their work can result in any visual outcome.

### Project Jupyter and Interactive Computing

![](https://cdn-images-1.medium.com/max/1200/1*wOHyKy6fl3ltcBMNpCvC6Q.png)

The goal of Project Jupyter is to provide a consistent set of tools for scientific computing and data science workflows, from the exploratory phase of the analysis to the presentation and the sharing of the results.
The Jupyter stack was designed to be agnostic of the programming language, and also to allow alternative implementations of any component of the layered architecture (back-ends for programming languages, custom renderers for file types associated with Jupyter). The stack consists of

* a low-level specification for messaging protocols and standardized file formats,

* a reference implementation of these standards,

* applications built on top of these libraries: the Notebook, JupyterLab, Binder, JupyterHub,

* and visualization libraries integrated into the Notebook and JupyterLab.

Adoption of the Jupyter ecosystem has skyrocketed in recent years, with millions of users worldwide, over a million Jupyter notebooks shared on GitHub, and large-scale deployments of Jupyter in universities, companies, and high-performance computing centers.

### Jupyter and C++

One of the main extension points of the Jupyter stack is the _kernel_, the part of the infrastructure responsible for executing the user's code. Jupyter kernels exist for [numerous programming languages][14].

Most Jupyter kernels are implemented in the target programming language: the reference implementation [ipykernel][15] in Python, [IJulia][16] in Julia, leading to a duplication of effort for the implementation of the protocol. A common denominator of many of these interpreted languages is that the interpreter generally exposes a C API, allowing it to be embedded into a native application. In an effort to consolidate these commonalities and save work for future kernel builders, we developed _xeus_.

![](https://cdn-images-1.medium.com/max/1200/1*TKrPv5AvFM3NJ6a7VMu8Tw.png)

[Xeus][17] is a C++ implementation of the Jupyter kernel protocol. It is not a kernel itself but a library that facilitates the authoring of kernels, and of other applications making use of the Jupyter kernel protocol.

A typical kernel implementation using xeus would in fact make use of the target interpreter _as a library_.

There are a number of benefits to using xeus over implementing your kernel in the target language:

* Xeus provides a complete implementation of the protocol, enabling a lot of features from the start for kernel authors, who only need to deal with the language bindings.

* Xeus-based kernels can very easily provide a back-end for Jupyter interactive widgets.

* Finally, xeus can be used to implement kernels for domain-specific languages such as SQL flavors. Existing approaches use a Python wrapper. With xeus, the resulting kernel won't require Python at run-time, leading to large performance benefits.

![](https://cdn-images-1.medium.com/max/1200/1*Cr_cfHdrgFXHlO15qdNK7w.png)

Interpreted C++ is already a reality at CERN with the [Cling][18] C++ interpreter in the context of the [ROOT][19] data analysis environment.

As a first example of a kernel based on xeus, we have implemented [xeus-cling][20], a pure C++ kernel.

![](https://cdn-images-1.medium.com/max/1600/1*NnjISpzZtpy5TOurg0S89A.gif)
Redirection of outputs to the Jupyter front-end, with different styling in the front-end.

Complex features of the C++ programming language such as polymorphism, templates, and lambdas are supported by the cling interpreter, making the C++ Jupyter notebook a great prototyping and learning platform for C++ users.
See the image below for a demonstration: + + + +![](https://cdn-images-1.medium.com/max/1600/1*lGVLY4fL1ytMfT-eWtoXkw.gif) +Features of the C++ programming language supported by the cling interpreter + +Finally, xeus-cling supports live quick-help, fetching the content on [cppreference][21] in the case of the standard library. + + + +![](https://cdn-images-1.medium.com/max/1600/1*Igegq0xBebuJV8hy0TGpfg.png) +Live help for the C++standard library in the Jupyter notebook + +> We realized that we started using the C++ kernel ourselves very early in the development of the project. For quick experimentation, or reproducing bugs. No need to set up a project with a cpp file and complicated project settings for finding the dependencies… Just write some code and hit Shift+Enter. + +Visual output can also be displayed using the rich display mechanism of the Jupyter protocol. + + + +![](https://cdn-images-1.medium.com/max/1600/1*t_9qAXtdkSXr-0tO9VvOzQ.png) +Using Jupyter's rich display mechanism to display an image inline in the notebook + + +![](https://cdn-images-1.medium.com/max/1200/1*OVfmXFAbfjUtGFXYS9fKRA.png) + +Another important feature of the Jupyter ecosystem are the [Jupyter Interactive Widgets][22]. They allow the user to build graphical interfaces and interactive data visualization inline in the Jupyter notebook. Moreover it is not just a collection of widgets, but a framework that can be built upon, to create arbitrary visual components. Popular interactive widget libraries include + +* [bqplot][1] (2-D plotting with d3.js) + +* [pythreejs][2] (3-D scene visualization with three.js) + +* [ipyleaflet][3] (maps visualization with leaflet.js) + +* [ipyvolume][4] (3-D plotting and volume rendering with three.js) + +* [nglview][5] (molecular visualization) + +Just like the rest of the Jupyter ecosystem, Jupyter interactive widgets were designed as a language-agnostic framework. Other language back-ends can be created reusing the front-end component, which can be installed separately. + +[xwidgets][23], which is still at an early stage of development, is a native C++ implementation of the Jupyter widgets protocol. It already provides an implementation for most of the widget types available in the core Jupyter widgets package. + + + +![](https://cdn-images-1.medium.com/max/1600/1*ro5Ggdstnf0DoqhTUWGq3A.gif) +C++ back-end to the Jupyter interactive widgets + +Just like with ipywidgets, one can build upon xwidgets and implement C++ back-ends for the Jupyter widget libraries listed earlier, effectively enabling them for the C++ programming language and other xeus-based kernels: xplot, xvolume, xthreejs… + + + +![](https://cdn-images-1.medium.com/max/1200/1*yCRYoJFnbtxYkYMRc9AioA.png) + +[xplot][24] is an experimental C++ back-end for the [bqplot][25] 2-D plotting library. It enables an API following the constructs of the  [_Grammar of Graphics_][26]  in C++. + +In xplot, every item in a chart is a separate object that can be modified from the back-end,  _dynamically_ . + +Changing a property of a plot item, a scale, an axis or the figure canvas itself results in the communication of an update message to the front-end, which reflects the new state of the widget visually. + + + +![](https://cdn-images-1.medium.com/max/1600/1*Mx2g3JuTG1Cfvkkv0kqtLA.gif) +Changing the data of a scatter plot dynamically to update the chart + +> Warning: the xplot and xwidgets projects are still at an early stage of development and are changing drastically at each release. 
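Bearing that warning in mind, the basic usage pattern in a xeus-cling notebook cell looks roughly like the following sketch. The names follow the examples published for early xwidgets releases and may have changed since:

```
#include "xwidgets/xslider.hpp"

xw::slider<double> slider;  // create the widget model, synced to the front-end
slider.value = 42.0;        // property assignment sends an update message
slider.display();           // render the widget below the notebook cell
```

Every property assignment on the C++ side results in exactly the kind of update message to the front-end described above for xplot.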
Interactive computing environments like Jupyter are not the only tools missing in the C++ world. Two key ingredients in the success of Python as the _lingua franca_ of data science are libraries like [NumPy][27] and [Pandas][28] at the foundation of the ecosystem.

![](https://cdn-images-1.medium.com/max/1200/1*HsU43Jzp1vJZpX2g8XPJsg.png)

[xtensor][29] is a C++ library meant for numerical analysis with multi-dimensional array expressions.

xtensor provides

* an extensible expression system enabling lazy NumPy-style broadcasting,

* an API following the _idioms_ of the C++ standard library,

* tools to manipulate array expressions and build upon xtensor.

xtensor exposes an API similar to that of NumPy, covering a growing portion of its functionality. A cheat sheet can be [found in the documentation][30]:

![](https://cdn-images-1.medium.com/max/1600/1*PBrf5vWYC8VTq_7VUOZCpA.gif)
Scrolling the NumPy to xtensor cheat sheet

However, xtensor's internals are very different from NumPy's. Using modern C++ techniques (template expressions, closure semantics), xtensor is a lazily evaluated library, avoiding the creation of temporary variables and unnecessary memory allocations, even in the case of complex expressions involving broadcasting and language bindings.

Still, from a user perspective, the combination of xtensor with the C++ notebook provides an experience very similar to that of NumPy in a Python notebook (a minimal example follows at the end of this section).

![](https://cdn-images-1.medium.com/max/1600/1*ULFpg-ePkdUbqqDLJ9VrDw.png)
Using the xtensor array expression library in a C++ notebook

In addition to the core library, the xtensor ecosystem has a number of other components:

* [xtensor-blas][6]: the counterpart to the numpy.linalg module.

* [xtensor-fftw][7]: bindings to the [fftw][8] library.

* [xtensor-io][9]: APIs to read and write various file formats (images, audio, NumPy's NPZ format).

* [xtensor-ros][10]: bindings for ROS, the robot operating system.

* [xtensor-python][11]: bindings for the Python programming language, allowing the use of NumPy arrays in-place, using the NumPy C API and the pybind11 library.

* [xtensor-julia][12]: bindings for the Julia programming language, allowing the use of Julia arrays in-place, using the C API of the Julia interpreter, and the CxxWrap library.

* [xtensor-r][13]: bindings for the R programming language, allowing the use of R arrays in-place.

Detailing further features of the xtensor framework would be beyond the scope of this post.

If you are interested in trying the various notebooks presented in this post, there is no need to install anything. You can just use _binder_:

![](https://cdn-images-1.medium.com/max/1200/1*9cy5Mns_I0eScsmDBjvxDQ.png)

[The Binder project][31], which is part of Project Jupyter, enables the deployment of containerized Jupyter notebooks from a GitHub repository, together with a manifest listing the dependencies (as conda packages).

All the notebooks in the screenshots above can be run online by just clicking on one of the following links:

[xtensor][32]: the C++ N-D array expression library in a C++ notebook

[xwidgets][33]: the C++ back-end for Jupyter interactive widgets

[xplot][34]: the C++ back-end to the bqplot 2-D plotting library for Jupyter.
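To give a flavor of what the xtensor notebook linked above contains, here is a minimal standalone sketch (not taken from the post itself) of the NumPy-style lazy broadcasting described earlier:

```
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
#include "xtensor/xmath.hpp"

int main() {
    xt::xarray<double> a = {{1., 2., 3.},
                            {4., 5., 6.}};   // shape (2, 3)
    xt::xarray<double> b = {10., 20., 30.};  // shape (3,)

    // Lazy, NumPy-style broadcasting: the expression is only
    // evaluated when assigned or printed.
    xt::xarray<double> c = a + b;
    std::cout << c << '\n';

    // Reductions mirror numpy.sum; the trailing () evaluates
    // the 0-dimensional result to a scalar.
    std::cout << "sum = " << xt::sum(a)() << '\n';
}
```

In a xeus-cling notebook, the same lines can be typed directly into cells, without the surrounding main function.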
![](https://cdn-images-1.medium.com/max/1200/1*JwqhpMxMJppEepj7U4fV-g.png)

[JupyterHub][35] is the multi-user infrastructure underlying wide-open deployments of Jupyter like Binder, as well as smaller deployments for authenticated users.

The modular architecture of JupyterHub enables a great variety of scenarios for how users are authenticated and what service is made available to them. JupyterHub deployments for several hundred users have been done at various universities and institutions, including Paris-Sud University, where the C++ kernel was also installed for the students to use.

> In September 2017, the 350 first-year students at Paris-Sud University who took the "[Info 111: Introduction to Computer Science][36]" class wrote their first lines of C++ in a Jupyter notebook.

The use of Jupyter notebooks in the context of teaching C++ proved especially useful for the first classes, where students can focus on the syntax of the language without distractions such as compiling and linking.

### Acknowledgements

The software presented in this post was built upon the work of a large number of people, including the Jupyter team and the Cling developers.

We are especially grateful to [Patrick Bos][37] (who authored xtensor-fftw), Nicolas Thiéry, Min Ragan Kelley, Thomas Kluyver, Yuvi Panda, Kyle Cranmer, Axel Naumann, and Vassil Vassilev.

We thank the [DIANA/HEP][38] organization for supporting travel to CERN and encouraging the collaboration between Project Jupyter and the ROOT team.

We are also grateful to the team at Paris-Sud University who worked on the JupyterHub deployment and the class materials, notably [Viviane Pons][39].

The development of xeus, xtensor, xwidgets, and related packages at [QuantStack][40] is sponsored by [Bloomberg][41].

### About the authors (alphabetical order)

[_Sylvain Corlay_][42], Scientific Software Developer at [QuantStack][43]

[_Loic Gouarin_][44], Research Engineer at [Laboratoire de Mathématiques at Orsay][45]

[_Johan Mabille_][46], Scientific Software Developer at [QuantStack][47]

[_Wolf Vollprecht_][48], Scientific Software Developer at [QuantStack][49]

Thanks to [Maarten Breddels][50], [Wolf Vollprecht][51], [Brian E. Granger][52], and [Patrick Bos][53].
+ +-------------------------------------------------------------------------------- + +via: https://blog.jupyter.org/interactive-workflows-for-c-with-jupyter-fe9b54227d92 + +作者:[QuantStack ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.jupyter.org/@QuantStack?source=post_header_lockup +[1]:https://github.com/bloomberg/bqplot +[2]:https://github.com/jovyan/pythreejs +[3]:https://github.com/ellisonbg/ipyleaflet +[4]:https://github.com/maartenbreddels/ipyvolume +[5]:https://github.com/arose/nglview +[6]:https://github.com/QuantStack/xtensor-blas +[7]:https://github.com/egpbos/xtensor-fftw +[8]:http://www.fftw.org/ +[9]:https://github.com/QuantStack/xtensor-io +[10]:https://github.com/wolfv/xtensor_ros +[11]:https://github.com/QuantStack/xtensor-python +[12]:https://github.com/QuantStack/Xtensor.jl +[13]:https://github.com/QuantStack/xtensor-r +[14]:https://github.com/jupyter/jupyter/wiki/Jupyter-kernels +[15]:https://github.com/ipython/ipykernel +[16]:https://github.com/JuliaLang/IJulia.jl +[17]:https://github.com/QuantStack/xeus +[18]:https://root.cern.ch/cling +[19]:https://root.cern.ch/ +[20]:https://github.com/QuantStack/xeus-cling +[21]:http://en.cppreference.com/w/ +[22]:http://jupyter.org/widgets +[23]:https://github.com/QUantStack/xwidgets +[24]:https://github.com/QuantStack/xplot +[25]:https://github.com/bloomberg/bqplot +[26]:https://dl.acm.org/citation.cfm?id=1088896 +[27]:http://www.numpy.org/ +[28]:https://pandas.pydata.org/ +[29]:https://github.com/QuantStack/xtensor/ +[30]:http://xtensor.readthedocs.io/en/latest/numpy.html +[31]:https://mybinder.org/ +[32]:https://beta.mybinder.org/v2/gh/QuantStack/xtensor/0.14.0-binder2?filepath=notebooks/xtensor.ipynb +[33]:https://beta.mybinder.org/v2/gh/QuantStack/xwidgets/0.6.0-binder?filepath=notebooks/xwidgets.ipynb +[34]:https://beta.mybinder.org/v2/gh/QuantStack/xplot/0.3.0-binder?filepath=notebooks +[35]:https://github.com/jupyterhub/jupyterhub +[36]:http://nicolas.thiery.name/Enseignement/Info111/ +[37]:https://twitter.com/egpbos +[38]:http://diana-hep.org/ +[39]:https://twitter.com/pyviv +[40]:https://twitter.com/QuantStack +[41]:http://www.techatbloomberg.com/ +[42]:https://twitter.com/SylvainCorlay +[43]:https://github.com/QuantStack/ +[44]:https://twitter.com/lgouarin +[45]:https://www.math.u-psud.fr/ +[46]:https://twitter.com/johanmabille?lang=en +[47]:https://github.com/QuantStack/ +[48]:https://twitter.com/wuoulf +[49]:https://github.com/QuantStack/ +[50]:https://medium.com/@maartenbreddels?source=post_page +[51]:https://medium.com/@wolfv?source=post_page +[52]:https://medium.com/@ellisonbg?source=post_page +[53]:https://medium.com/@egpbos?source=post_page diff --git a/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md b/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md new file mode 100644 index 0000000000..0e38373c3f --- /dev/null +++ b/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md @@ -0,0 +1,41 @@ +Someone Tries to Bring Back Ubuntu's Unity from the Dead as an Official Spin +============================================================ + + + +> The Ubuntu Unity remix would be supported for nine months + +Canonical's sudden decision of killing its Unity user interface after seven years affected many Ubuntu users, and it looks 
Long-time [Ubuntu][1] member Dale Beaudoin [ran a poll][2] last week on the official Ubuntu forums to take the pulse of the community and see if they are interested in an Ubuntu Unity Remix that would be released alongside Ubuntu 18.04 LTS (Bionic Beaver) next year and be supported for nine months or five years.

Thirty people voted in the poll, with 67 percent of them opting for an LTS (Long Term Support) release of the so-called Ubuntu Unity Remix, while 33 percent voted for the 9-month supported release. This upcoming Ubuntu Unity Spin also [looks to become an official flavor][3], yet that means commitment from those developing it.

"A recent poll voted 2/3rds in favor of Ubuntu Unity to become an LTS distribution. We should try to work this cycle assuming that it will be LTS and an official flavor," said Dale Beaudoin. "We will try and release an updated ISO once every week or 10 days using the current 18.04 daily builds of default Ubuntu Bionic Beaver as a platform."

### Is Ubuntu Unity making a comeback?

The last Ubuntu version to ship with Unity by default was Ubuntu 17.04 (Zesty Zapus), which will reach end of life in January 2018. Ubuntu 17.10 (Artful Aardvark), the current stable release of the popular operating system, is the first to use the GNOME desktop environment by default for the main Desktop edition, as Canonical's CEO [announced][4] earlier this year that Unity would no longer be developed.

However, Canonical is still offering the Unity desktop environment from the official software repositories, so if someone wants to install it, it's one click away. The bad news is that it will only be supported up until the release of Ubuntu 18.04 LTS (Bionic Beaver) in April 2018, so the developers of the Ubuntu Unity Remix would have to keep it on life support in a separate repository.

On the other hand, we don't believe Canonical will change their mind and accept this Ubuntu Unity Spin as an official flavor, which would mean admitting that they failed to continue development of Unity while a handful of people now can. Most probably, if interest in this Ubuntu Unity Remix doesn't fade away soon, it will be an unofficial spin supported by the nostalgic community.

The question is, would you be interested in an Ubuntu Unity spin, official or not?
+

--------------------------------------------------------------------------------

via: http://news.softpedia.com/news/someone-tries-to-bring-back-ubuntu-s-unity-from-the-dead-as-an-unofficial-spin-518778.shtml

作者:[Marius Nestor ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://news.softpedia.com/editors/browse/marius-nestor
[1]:http://linux.softpedia.com/downloadTag/Ubuntu
[2]:https://community.ubuntu.com/t/poll-unity-7-distro-9-month-spin-or-lts-for-18-04/2066
[3]:https://community.ubuntu.com/t/unity-maintenance-roadmap/2223
[4]:http://news.softpedia.com/news/canonical-to-stop-developing-unity-8-ubuntu-18-04-lts-ships-with-gnome-desktop-514604.shtml
[5]:http://news.softpedia.com/editors/browse/marius-nestor
diff --git a/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md b/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md
new file mode 100644
index 0000000000..195b51423a
--- /dev/null
+++ b/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md
@@ -0,0 +1,116 @@
Yoliver is translating.
Excellent Business Software Alternatives For Linux
-------

Many business owners choose to use Linux as the operating system for their operations for a variety of reasons.

1. Firstly, they don't have to pay anything for the privilege, and that is a massive bonus during the early stages of a company where money is tight.

2. Secondly, Linux is a lightweight alternative compared to Windows and other popular operating systems available today.

Of course, lots of entrepreneurs worry they won't have access to some of the essential software packages if they make that move. However, as you will discover throughout this post, there are plenty of similar tools that will cover all the bases.

 [![](https://4.bp.blogspot.com/-xwLuDRdB6sw/Whxx0Z5pI5I/AAAAAAAADhU/YWHID8GU9AgrXRfeTz4HcDZkG-XWZNbSgCLcBGAs/s400/4444061098_6eeaa7dc1a_z.jpg)][3]

### Alternatives to Microsoft Word

All company bosses will require access to a word processing tool if they want to ensure the smooth running of their operation, according to [the latest article from Fareed Siddiqui][4]. You'll need that software to write business plans, letters, and many other jobs within your firm. Thankfully, there are a variety of alternatives you might like to select if you opt for the Linux operating system. Some of the most popular ones include:

* LibreOffice Writer

* AbiWord

* KWord

* LaTeX

So, you just need to read some online reviews and then download the best word processor based on your findings. Of course, if you're not satisfied with the solution, you should take a look at some of the other ones on that list. In many instances, any of the programs mentioned above should work well.

### Alternatives to Microsoft Excel

 [![](https://4.bp.blogspot.com/-XdS6bSLQbOU/WhxyeWZeeCI/AAAAAAAADhc/C3hGY6rgzX4m2emunot80-4URu9-aQx8wCLcBGAs/s400/28929069495_e85d2626ba_z.jpg)][5]

You need a spreadsheet tool if you want to ensure your business doesn't get into trouble when it comes to bookkeeping and inventory control. There are specialist software packages on the market for both of those tasks, but [open-source alternatives][6] to Microsoft Excel will give you the most freedom when creating your spreadsheets and editing them.
While there are other packages out there, some of the best ones for Linux users include:

* [LibreOffice Calc][1]

* KSpread

* Gnumeric

Those programs work in much the same way as Microsoft Excel, and so you can use them for issues like accounting and stock control. You might also use that software to monitor employee earnings or punctuality. The possibilities are endless and only limited by your imagination.

### Alternatives to Adobe Photoshop

 [![](https://3.bp.blogspot.com/-Id9Dm3CIXmc/WhxzGIlv3zI/AAAAAAAADho/VfIRCAbJMjMZzG2M97-uqLV9mOhqN7IWACLcBGAs/s400/32206185926_c69accfcef_z.jpg)][7]

Company bosses require access to design programs when developing their marketing materials and creating graphics for their websites. You might also use software of that nature to come up with a new business logo at some point. Lots of entrepreneurs spend a fortune on [Training Connections Photoshop classes][8] and those available from other providers. They do that in the hope of educating their teams and getting the best results. However, people who use Linux can still benefit from that expertise if they select one of the following [alternatives][9]:

* GIMP

* Krita

* Pixel

* LightZone

The last two suggestions on that list require a substantial investment. Still, they function in much the same way as Adobe Photoshop, and so you should manage to achieve the same quality of work.

### Other software solutions that you'll want to consider

Alongside those alternatives to some of the most widely-used software packages around today, business owners should take a look at the full range of products they could use with the Linux operating system. Here are some tools you might like to research and consider:

* Inkscape - similar to CorelDRAW

* LibreOffice Base - similar to Microsoft Access

* LibreOffice Impress - similar to Microsoft PowerPoint

* File Roller - similar to WinZip

* Linphone - similar to Skype

(This is a contributed post.)

There are [lots of other programs][10] you'll also want to research, and so the best solution is to use the internet to learn more. You will find lots of reviews from people who've used the software in the past, and many of them will compare the tool to its Windows or iOS alternative. So, you shouldn't have to work too hard to identify the best ones and sort the wheat from the chaff.

Now you have all the right information; it's time to weigh all the pros and cons of Linux and work out if it's suitable for your operation. In most instances, that operating system does not place any limits on your business activities. It's just that you need to use different software compared to some of your competitors. People who use Linux tend to benefit from improved security, speed, and performance. Also, the solution gets regular updates, and so it's growing every single day. Unlike Windows and other solutions, you can customize Linux to meet your requirements. With that in mind, do not make the mistake of overlooking this fantastic system!
+

--------------------------------------------------------------------------------

via: http://linuxblog.darkduck.com/2017/11/excellent-business-software.html

作者:[DarkDuck][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxblog.darkduck.com/
[1]:http://linuxblog.darkduck.com/2015/08/pivot-tables-in-libreoffice-calc.html
[3]:https://4.bp.blogspot.com/-xwLuDRdB6sw/Whxx0Z5pI5I/AAAAAAAADhU/YWHID8GU9AgrXRfeTz4HcDZkG-XWZNbSgCLcBGAs/s1600/4444061098_6eeaa7dc1a_z.jpg
[4]:https://www.linkedin.com/pulse/benefits-using-microsoft-word-fareed/
[5]:https://4.bp.blogspot.com/-XdS6bSLQbOU/WhxyeWZeeCI/AAAAAAAADhc/C3hGY6rgzX4m2emunot80-4URu9-aQx8wCLcBGAs/s1600/28929069495_e85d2626ba_z.jpg
[6]:http://linuxblog.darkduck.com/2014/03/why-open-software-and-what-are-benefits.html
[7]:https://3.bp.blogspot.com/-Id9Dm3CIXmc/WhxzGIlv3zI/AAAAAAAADho/VfIRCAbJMjMZzG2M97-uqLV9mOhqN7IWACLcBGAs/s1600/32206185926_c69accfcef_z.jpg
[8]:https://www.trainingconnection.com/photoshop-training.php
[9]:http://linuxblog.darkduck.com/2011/10/photoshop-alternatives-for-linux.html
[10]:http://www.makeuseof.com/tag/best-linux-software/
diff --git a/sources/tech/20171130 Scrot Linux command-line screen grabs made simple.md b/sources/tech/20171130 Scrot Linux command-line screen grabs made simple.md
new file mode 100644
index 0000000000..2b4d2248b2
--- /dev/null
+++ b/sources/tech/20171130 Scrot Linux command-line screen grabs made simple.md
@@ -0,0 +1,108 @@
Scrot: Linux command-line screen grabs made simple
============================================================

### Scrot is a basic, flexible tool that offers a number of handy options for taking screen captures from the Linux command line.

![Scrot: Screen grabs made simple](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A "Scrot: Screen grabs made simple")
Image credits: Original photo by Rikki Endsley. [CC BY-SA 4.0][13]

There are great tools on the Linux desktop for taking screen captures, such as [KSnapshot][14] and [Shutter][15]. Even the simple utility that comes with the GNOME desktop does a pretty good job of capturing screens. But what if you rarely need to take screen captures? Or you use a Linux distribution without a built-in capture tool, or an older computer with limited resources?

Turn to the command line and a little utility called [Scrot][16]. It does a fine job of taking simple screen captures, and it includes a few features that might surprise you.

### Getting started with Scrot

More Linux resources

* [What is Linux?][1]

* [What are Linux containers?][2]

* [Download Now: Linux commands cheat sheet][3]

* [Advanced Linux commands cheat sheet][4]

* [Our latest Linux articles][5]

Many Linux distributions come with Scrot already installed—to check, type `which scrot`. If it isn't there, you can install Scrot using your distro's package manager. If you're willing to compile the code, grab it [from GitHub][22].

To take a screen capture, crack open a terminal window and type `scrot [filename]`, where `[filename]` is the name of the file to which you want to save the image (for example, `desktop.png`). If you don't include a name for the file, Scrot will create one for you, such as `2017-09-24-185009_1687x938_scrot.png`. (That filename isn't as descriptive as it could be, is it? That's why it's better to add one to the command.)
Running Scrot with no options takes a screen capture of your entire desktop. If you don't want to do that, Scrot lets you focus on smaller portions of your screen.

### Taking a screen capture of a single window

Tell Scrot to take a screen capture of a single window by typing `scrot -u [filename]`.

The `-u` option tells Scrot to grab the window currently in focus. That's usually the terminal window you're working in, which might not be the one you want.

To grab another window on your desktop, type `scrot -s [filename]`.

The `-s` option lets you do one of two things:

* select an open window, or

* draw a rectangle around a window or a portion of a window to capture it.

You can also set a delay, which gives you a little more time to select the window you want to capture. To do that, type `scrot -u -d [num] [filename]`.

The `-d` option tells Scrot to wait before grabbing the window, and `[num]` is the number of seconds to wait. Specifying `-d 5` (wait five seconds) should give you enough time to choose a window.

### More useful options

Scrot offers a number of additional features (most of which I never use). The ones I find most useful include:

* `-b` also grabs the window's border

* `-t` grabs a window and creates a thumbnail of it. This can be useful when you're posting screen captures online.

* `-c` creates a countdown in your terminal when you use the `-d` option.

To learn about Scrot's other options, check out its documentation by typing `man scrot` in a terminal window, or [read it online][17]. Then start snapping images of your screen.

It's basic, but Scrot gets the job done nicely.

### Topics

 [Linux][23]

### About the author

 [![That idiot Scott Nesbitt ...](https://opensource.com/sites/default/files/styles/profile_pictures/public/scottn-cropped.jpg?itok=q4T2J4Ai)][18]

 Scott Nesbitt - I'm a long-time user of free/open source software, and write various things for both fun and profit. I don't take myself too seriously and I do all of my own stunts. You can find me at these fine establishments on the web: [Twitter][7], [Mastodon][8], [GitHub][9], and...
[more about Scott Nesbitt][10][More about me][11]

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot

作者:[ Scott Nesbitt  ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/scottnesbitt
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot?rate=H43kUdawjR0GV9D0dCbpnmOWcqw1WekfrAI_qKo8UwI
[7]:http://www.twitter.com/ScottWNesbitt
[8]:https://mastodon.social/@scottnesbitt
[9]:https://github.com/ScottWNesbitt
[10]:https://opensource.com/users/scottnesbitt
[11]:https://opensource.com/users/scottnesbitt
[12]:https://opensource.com/user/14925/feed
[13]:https://creativecommons.org/licenses/by-sa/4.0/
[14]:https://www.kde.org/applications/graphics/ksnapshot/
[15]:https://launchpad.net/shutter
[16]:https://github.com/dreamer/scrot
[17]:http://manpages.ubuntu.com/manpages/precise/man1/scrot.1.html
[18]:https://opensource.com/users/scottnesbitt
[19]:https://opensource.com/users/scottnesbitt
[20]:https://opensource.com/users/scottnesbitt
[21]:https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot#comments
[22]:https://github.com/dreamer/scrot
[23]:https://opensource.com/tags/linux
diff --git a/sources/tech/20171130 Search DuckDuckGo from the Command Line.md b/sources/tech/20171130 Search DuckDuckGo from the Command Line.md
deleted file mode 100644
index ee451a6172..0000000000
--- a/sources/tech/20171130 Search DuckDuckGo from the Command Line.md
+++ /dev/null
@@ -1,103 +0,0 @@
translating---geekpi

# Search DuckDuckGo from the Command Line

 ![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/duckduckgo.png)
When we showed you how to [search Google from the command line][3], a lot of you wrote in to say you use [Duck Duck Go][4], the awesome privacy-focused search engine.

Well, now there's a tool to search DuckDuckGo from the command line. It's called [ddgr][6] (pronounced, in my head, as  _dodger_ ) and it's pretty neat.

Like [Googler][7], ddgr is totally open-source and totally unofficial. Yup, the app is unaffiliated with DuckDuckGo in any way. So, should it start returning unsavoury search results for innocent terms, make sure you quack in this dev's direction, and not the search engine's!

### DuckDuckGo Terminal App

![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/ddgr-gif.gif)

[DuckDuckGo Bangs][8] make finding stuff on DuckDuckGo super easy (there's even a bang for  _this_  site) and, dutifully, ddgr supports them.

Unlike the web interface, you can specify the number of search results you would like to see per page.
It’s more convenient than skimming through 30-odd search results per page. The default interface is carefully designed to use minimum space without sacrificing readability. - -`ddgr` has a number of features, including: - -* Choose number of search results to fetch - -* Support for Bash autocomplete - -* Use !bangs - -* Open URLs in a browser - -* “I’m feeling lucky” option - -* Filter by time, region, file type, etc - -* Minimal dependencies - -You can download `ddgr` for various systems direct from the Github project page: - -[Download ‘ddgr’ from Github][9] - -You can also install ddgr on Ubuntu 16.04 LTS and up from a PPA. This repo is maintained by the developer of ddgr and is recommended should you want to stay up-to-date with new releases as and when they appear. - -Do note that at the time of writing the latest version of ddgr is  _not_  in the PPA, but an older version (lacking –num support) is: - -``` -sudo add-apt-repository ppa:twodopeshaggy/jarun -``` - -``` -sudo apt-get update -``` - -### How To Use ddgr to Search DuckDuckGo from the Comand Line - -To use ddgr once you installed all you need to do is pop open your terminal emulator of choice and run: - -``` -ddgr -``` - -Next enter a search term: - -``` -search-term -``` - -To limit the number of results returned run: - -``` -ddgr --num 5 search-term -``` - -To instantly open the first matching result for a search term in your browser run: - -``` -ddgr -j search-term -``` - -You can pass arguments and flags to narrow down your search. To see a comprehensive list inside the terminal run: - -``` -ddgr -h -``` - --------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app - -作者:[JOEY SNEDDON ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:https://plus.google.com/117485690627814051450/?rel=author -[2]:http://www.omgubuntu.co.uk/category/download -[3]:http://www.omgubuntu.co.uk/2017/08/search-google-from-the-command-line -[4]:http://duckduckgo.com/ -[5]:http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app -[6]:https://github.com/jarun/ddgr -[7]:https://github.com/jarun/googler -[8]:https://duckduckgo.com/bang -[9]:https://github.com/jarun/ddgr/releases/tag/v1.1 diff --git a/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md b/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md new file mode 100644 index 0000000000..46afe9b893 --- /dev/null +++ b/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md @@ -0,0 +1,156 @@ +translating---geekpi + +Undistract-me : Get Notification When Long Running Terminal Commands Complete +============================================================ + +by [sk][2] · November 30, 2017 + +![Undistract-me](https://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2-720x340.png) + +A while ago, we published how to [get notification when a Terminal activity is done][3]. Today, I found out a similar utility called “undistract-me” that notifies you when long running terminal commands complete. Picture this scenario. You run a command that takes a while to finish. In the mean time, you check your facebook and get so involved in it. 
After a while, you remember that you ran a command a few minutes ago. You go back to the Terminal and notice that the command has already finished. But you have no idea when the command completed. Have you ever been in this situation? I bet most of you have been, many times. This is where "undistract-me" comes in handy. You don't need to constantly check the terminal to see if a command is completed or not. The undistract-me utility will notify you when a long running command is completed. It will work on Arch Linux, Debian, Ubuntu and other Ubuntu derivatives.

#### Installing Undistract-me

Undistract-me is available in the default repositories of Debian and its variants such as Ubuntu. All you have to do is to run the following command to install it.

```
sudo apt-get install undistract-me
```

Arch Linux users can install it from the AUR using any AUR helper program.

Using [Pacaur][4]:

```
pacaur -S undistract-me-git
```

Using [Packer][5]:

```
packer -S undistract-me-git
```

Using [Yaourt][6]:

```
yaourt -S undistract-me-git
```

Then, run the following command to add "undistract-me" to your Bash.

```
echo 'source /etc/profile.d/undistract-me.sh' >> ~/.bashrc
```

Alternatively, you can run this command to add it to your Bash:

```
echo -e "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> ~/.bashrc
```

If you are in Zsh shell, run this command:

```
echo -e "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> ~/.zshrc
```

Finally update the changes:

For Bash:

```
source ~/.bashrc
```

For Zsh:

```
source ~/.zshrc
```

#### Configure Undistract-me

By default, Undistract-me will consider any command that takes more than 10 seconds to complete as a long-running command. You can change this time interval by editing the /usr/share/undistract-me/long-running.bash file.

```
sudo nano /usr/share/undistract-me/long-running.bash
```

Find the "LONG_RUNNING_COMMAND_TIMEOUT" variable and change the default value (10 seconds) to something else of your choice.

 [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png)][7]

Save and close the file. Do not forget to update the changes:

```
source ~/.bashrc
```

Also, you can disable notifications for particular commands. To do so, find the "LONG_RUNNING_IGNORE_LIST" variable and add the commands to it, separated by spaces.

By default, the notification will only show if the active window is not the window the command is running in. That means, it will notify you only if the command is running in a background Terminal window. If the command is running in the active Terminal window, you will not be notified. If you want undistract-me to send notifications whether the Terminal window is visible or in the background, you can set IGNORE_WINDOW_CHECK to 1 to skip the window check.

The other cool feature of Undistract-me is that you can get an audio notification along with the visual notification when a command is done. By default, it will only send a visual notification. You can change this behavior by setting the variable UDM_PLAY_SOUND to a non-zero integer on the command line. However, your Ubuntu system should have the pulseaudio-utils and sound-theme-freedesktop utilities installed to enable this functionality.

Please remember that you need to run the following command to update the changes made.
+

For Bash:

```
source ~/.bashrc
```

For Zsh:

```
source ~/.zshrc
```

It is time to verify if this really works.

#### Get Notification When Long Running Terminal Commands Complete

Now, run any command that takes longer than 10 seconds, or the time duration you defined in the Undistract-me script.

I ran the following command on my Arch Linux desktop.

```
sudo pacman -Sy
```

This command took 32 seconds to complete. After the completion of the above command, I got the following notification.

 [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png)][8]

Please remember the Undistract-me script notifies you only if the given command took more than 10 seconds to complete. If the command is completed in less than 10 seconds, you will not be notified. Of course, you can change this time interval setting as I described in the Configuration section above.

I find this tool very useful. It helped me to get back to business after getting completely lost in other tasks. I hope this tool will be helpful to you too.

More good stuff to come. Stay tuned!

Cheers!

Resource:

* [Undistract-me GitHub Repository][1]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/undistract-get-notification-long-running-terminal-commands-complete/

作者:[sk][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:https://github.com/jml/undistract-me
[2]:https://www.ostechnix.com/author/sk/
[3]:https://www.ostechnix.com/get-notification-terminal-task-done/
[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png
[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png
diff --git a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md
new file mode 100644
index 0000000000..3a2c20ad52
--- /dev/null
+++ b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md
@@ -0,0 +1,135 @@
+
+ translating by HardworkFish
+
Wake up and Shut Down Linux Automatically
============================================================

### [banner.jpg][1]

![time keeper](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner.jpg?itok=zItspoSb)

Learn how to configure your Linux computers to watch the time for you, then wake up and shut down automatically.

[Creative Commons Attribution][6] [The Observatory at Delhi][7]

Don't be a watt-waster. If your computers don't need to be on, then shut them down. For convenience and nerd creds, you can configure your Linux computers to wake up and shut down automatically.

### Precious Uptimes

Some computers need to be on all the time, which is fine as long as it's not about satisfying an uptime compulsion. Some people are very proud of their lengthy uptimes, and now that we have kernel hot-patching, that leaves only hardware failures requiring shutdowns. I think it's better to be practical. Save electricity as well as wear on your moving parts, and shut them down when they're not needed.
For example, you can wake up a backup server at a scheduled time, run your backups, and then shut it down until it's time for the next backup. Or, you can configure your Internet gateway to be on only at certain times. Anything that doesn't need to be on all the time can be configured to turn on, do a job, and then shut down.

### Sleepies

For computers that don't need to be on all the time, good old cron will shut them down reliably. Use either root's cron, or /etc/crontab. This example creates a root cron job to shut down every night at 11:15 p.m.

```
# crontab -e -u root
# m h dom mon dow command
15 23 * * * /sbin/shutdown -h now
```

To shut down only on weekdays (Monday through Friday), use this schedule instead:

```
15 23 * * 1-5 /sbin/shutdown -h now
```

You may also use /etc/crontab, which is fast and easy, and everything is in one file. You have to specify the user:

```
15 23 * * 1-5 root shutdown -h now
```

Auto-wakeups are very cool; most of my SUSE colleagues are in Nuremberg, so I am crawling out of bed at 5 a.m. to have a few hours of overlap with their schedules. My work computer turns itself on at 5:30 a.m., and then all I have to do is drag my coffee and myself to my desk to start work. It might not seem like pressing a power button is a big deal, but at that time of day every little thing looms large.

Waking up your Linux PC can be less reliable than shutting it down, so you may want to try different methods. You can use wakeonlan, RTC wakeups, or your PC's BIOS to set scheduled wakeups. These all work because, when you power off your computer, it's not really all the way off; it is in an extremely low-power state and can receive and respond to signals. You need to use the power supply switch to turn it off completely.

### BIOS Wakeup

A BIOS wakeup is the most reliable. My system BIOS has an easy-to-use wakeup scheduler (Figure 1). Chances are yours does, too. Easy peasy.

### [fig-1.png][2]

![wake up](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_11.png?itok=8qAeqo1I)

Figure 1: My system BIOS has an easy-to-use wakeup scheduler.

[Used with permission][8]

### wakeonlan

wakeonlan is the next most reliable method. This requires sending a signal from a second computer to the computer you want to power on. You could use an Arduino or Raspberry Pi to send the wakeup signal, a Linux-based router, or any Linux PC. First, look in your system BIOS to see if wakeonlan is supported -- which it should be -- and then enable it, as it should be disabled by default.

Then, you'll need an Ethernet network adapter that supports wakeonlan; wireless adapters won't work. Verify that your Ethernet card supports wakeonlan:

```
# ethtool eth0 | grep -i wake-on
 Supports Wake-on: pumbg
 Wake-on: g
```

The flags in the output mean:

* d -- all wake ups disabled

* p -- wake up on physical activity

* u -- wake up on unicast messages

* m -- wake up on multicast messages

* b -- wake up on broadcast messages

* a -- wake up on ARP messages

* g -- wake up on magic packet

* s -- set the Secure On password for the magic packet

man ethtool is not clear on what the p switch does; it suggests that any signal will cause a wake up. In my testing, however, it doesn't do that. The one that must be enabled is g -- wake up on magic packet, and the Wake-on line shows that it is already enabled.
If it is not enabled, you can use ethtool to enable it, using your own device name, of course:

```
# ethtool -s eth0 wol g
```

This setting may not persist across reboots, so you can add a root cron job to reapply it at every boot:

```
@reboot /usr/bin/ethtool -s eth0 wol g
```

### [fig-2.png][3]

![wakeonlan](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_7.png?itok=XQAwmHoQ)

Figure 2: Enable Wake on LAN.

[Used with permission][9]

Another option: recent Network Manager versions have a nice little checkbox to enable wakeonlan (Figure 2).

There is a field for setting a password, but if your network interface doesn't support the Secure On password, it won't work.

Now you need to configure a second PC to send the wakeup signal. You don't need root privileges, so create a cron job for your user. You need the MAC address of the network interface on the machine you're waking up:

```
30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B
```

Using the real-time clock for wakeups is the least reliable method. Check out [Wake Up Linux With an RTC Alarm Clock][4]; this is a bit outdated, as most distros use systemd now. Come back next week to learn more about updated ways to use RTC wakeups.

Learn more about Linux through the free ["Introduction to Linux"][5] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux-automatically

作者:[Carla Schroder]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://www.linux.com/files/images/bannerjpg
[2]:https://www.linux.com/files/images/fig-1png-11
[3]:https://www.linux.com/files/images/fig-2png-7
[4]:https://www.linux.com/learn/wake-linux-rtc-alarm-clock
[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[6]:https://www.linux.com/licenses/category/creative-commons-attribution
[7]:http://www.columbia.edu/itc/mealac/pritchett/00routesdata/1700_1799/jaipur/delhijantarearly/delhijantarearly.html
[8]:https://www.linux.com/licenses/category/used-permission
[9]:https://www.linux.com/licenses/category/used-permission
diff --git a/sources/tech/20171201 Fedora Classroom Session: Ansible 101.md b/sources/tech/20171201 Fedora Classroom Session: Ansible 101.md
new file mode 100644
index 0000000000..a74b196663
--- /dev/null
+++ b/sources/tech/20171201 Fedora Classroom Session: Ansible 101.md
@@ -0,0 +1,71 @@
### [Fedora Classroom Session: Ansible 101][2]

### By Sachin S Kamath

![](https://fedoramagazine.org/wp-content/uploads/2017/07/fedora-classroom-945x400.jpg)

Fedora Classroom sessions continue this week with an Ansible session. The general schedule for sessions appears [on the wiki][3]. You can also find [resources and recordings from previous sessions][4] there. Here are details about this week's session on [Thursday, 30th November at 1600 UTC][5]. That link allows you to convert the time to your timezone.

### Topic: Ansible 101

As the Ansible [documentation][6] explains, Ansible is an IT automation tool. It's primarily used to configure systems, deploy software, and orchestrate more advanced IT tasks. Examples include continuous deployments or zero downtime rolling updates.

This Classroom session covers the topics listed below:

1. Introduction to SSH

2. Understanding different terminologies

3. Introduction to Ansible

4. Ansible installation and setup
5. Establishing password-less connection

6. Ad-hoc commands

7. Managing inventory

8. Playbook examples

There will also be a follow-up Ansible 102 session later. That session will cover complex playbooks, roles, dynamic inventory files, control flow and Galaxy.

### Instructors

We have two experienced instructors handling this session.

[Geoffrey Marr][7], also known by his IRC name as "coremodule," is a Red Hat employee and Fedora contributor with a background in Linux and cloud technologies. While working, he spends his time lurking in the [Fedora QA][8] wiki and test pages. Away from work, he enjoys Raspberry Pi projects, especially those focusing on software-defined radio.

[Vipul Siddharth][9] is an intern at Red Hat who also works on Fedora. He loves to contribute to open source and seeks opportunities to spread the word of free and open source software.

### Joining the session

This session takes place on [BlueJeans][10]. The following information will help you join the session:

* URL: [https://bluejeans.com/3466040121][1]

* Meeting ID (for Desktop App): 3466040121

We hope you attend, learn from, and enjoy this session! If you have any feedback about the sessions, have ideas for a new one or want to host a session, please feel free to comment on this post or edit the [Classroom wiki page][11].

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/fedora-classroom-session-ansible-101/

作者:[Sachin S Kamath]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://bluejeans.com/3466040121
[2]:https://fedoramagazine.org/fedora-classroom-session-ansible-101/
[3]:https://fedoraproject.org/wiki/Classroom
[4]:https://fedoraproject.org/wiki/Classroom#Previous_Sessions
[5]:https://www.timeanddate.com/worldclock/fixedtime.html?msg=Fedora+Classroom+-+Ansible+101&iso=20171130T16&p1=%3A
[6]:http://docs.ansible.com/ansible/latest/index.html
[7]:https://fedoraproject.org/wiki/User:Coremodule
[8]:https://fedoraproject.org/wiki/QA
[9]:https://fedoraproject.org/wiki/User:Siddharthvipul1
[10]:https://www.bluejeans.com/downloads
[11]:https://fedoraproject.org/wiki/Classroom
diff --git a/sources/tech/20171201 How to Manage Users with Groups in Linux.md b/sources/tech/20171201 How to Manage Users with Groups in Linux.md
new file mode 100644
index 0000000000..35350c819f
--- /dev/null
+++ b/sources/tech/20171201 How to Manage Users with Groups in Linux.md
@@ -0,0 +1,168 @@
translating---imquanquan

How to Manage Users with Groups in Linux
============================================================

### [group-of-people-1645356_1920.jpg][1]

![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/group-of-people-1645356_1920.jpg?itok=rJlAxBSV)

Learn how to work with users, via groups and access control lists, in this tutorial.

[Creative Commons Zero][4]

Pixabay

When you administer a Linux machine that houses multiple users, there might be times when you need to take more control over those users than the basic user tools offer. This idea comes to the fore especially when you need to manage permissions for certain users. Say, for example, you have a directory that needs to be accessed with read/write permissions by one group of users and only read permissions for another group.
With Linux, this is entirely possible. To make this happen, however, you must first understand how to work with users, via groups and access control lists (ACLs).

We'll start from the beginning with users and work our way to the more complex ACLs. Everything you need to make this happen will be included in your Linux distribution of choice. We won't touch on the basics of users, as the focus of this article is groups.

For the purpose of this piece, I'm going to assume the following:

You need to create two users with usernames:

* olivia

* nathan

You need to create two groups:

* readers

* editors

Olivia needs to be a member of the group editors, while nathan needs to be a member of the group readers. The group readers needs to only have read permission to the directory /DATA, whereas the group editors needs to have both read and write permission to the /DATA directory. This, of course, is very minimal, but it will give you the basic information you need to expand the tasks to fit your much larger needs.

I'll be demonstrating on the Ubuntu 16.04 Server platform. The commands will be universal—the only difference would be if your distribution of choice doesn't make use of sudo. If this is the case, you'll have to first su to the root user to issue the commands that require sudo in the demonstrations.

### Creating the users

The first thing we need to do is create the two users for our experiment. User creation is handled with the useradd command. Instead of simply creating the users, we'll create them both with their own home directories and then give them passwords.

The first thing we do is create the users. To do this, issue the commands:

```
sudo useradd -m olivia

sudo useradd -m nathan
```

Next each user must have a password. To add passwords into the mix, you'd issue the following commands:

```
sudo passwd olivia

sudo passwd nathan
```

That's it, your users are created.

### Creating groups and adding users

Now we're going to create the groups readers and editors and then add users to them. The commands to create our groups are:

```
sudo addgroup readers

sudo addgroup editors
```

### [groups_1.jpg][2]

![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/groups_1.jpg?itok=BKwL89BB)

Figure 1: Our new groups ready to be used.

[Used with permission][5]

With our groups created, we need to add our users. We'll add user nathan to group readers with the command:

```
sudo usermod -a -G readers nathan
```

Next, we'll add user olivia to the group editors with the command:

```
sudo usermod -a -G editors olivia
```

### Giving groups permissions to directories

Let's say you have the directory /READERS and you need to allow all members of the readers group access to that directory. First, change the group of the folder with the command:

```
sudo chown -R :readers /READERS
```

Next, remove write permission from the group with the command:

```
sudo chmod -R g-w /READERS
```

Then remove the execute bit for others, so users outside the owner and the readers group cannot enter the directory:

```
sudo chmod -R o-x /READERS
```

Let's say you have the directory /EDITORS and you need to give members of the editors group read and write permission to its contents. To do that, the following command would be necessary:

```
sudo chown -R :editors /EDITORS

sudo chmod -R g+w /EDITORS

sudo chmod -R o-x /EDITORS
```

The problem with using this method is you can only add one group to a directory at a time. This is where access control lists come in handy.

### Using access control lists

Now, let's get tricky.
Say you have a single folder—/DATA—and you want to give members of the readers group read permission and members of the group editors read/write permissions. To do that, you must take advantage of the setfacl command. The setfacl command sets file access control lists for files and folders.

The structure of this command looks like this:

```
setfacl OPTION X:NAME:Y /DIRECTORY
```

Here OPTION is the option to use (-m to modify an ACL), X is either u (for a user) or g (for a group), NAME is the name of that user or group, and Y is the permissions to grant. To give members of the readers group read access to /DATA (the x bit lets them traverse the directory), issue the command:

```
sudo setfacl -m g:readers:rx -R /DATA
```

To give members of the editors group read/write permissions (while retaining read permissions for the readers group), we'd issue the command:

```
sudo setfacl -m g:editors:rwx -R /DATA
```

### All the control you need

And there you have it. You can now add members to groups and control those groups' access to various directories with all the power and flexibility you need. To read more about the above tools, issue the commands:

* man useradd

* man addgroup

* man usermod

* man setfacl

* man chown

* man chmod

Learn more about Linux through the free ["Introduction to Linux"][3] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2017/12/how-manage-users-groups-linux

作者:[Jack Wallen ]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://www.linux.com/files/images/group-people-16453561920jpg
[2]:https://www.linux.com/files/images/groups1jpg
[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[4]:https://www.linux.com/licenses/category/creative-commons-zero
[5]:https://www.linux.com/licenses/category/used-permission
diff --git a/sources/tech/20171201 How to find a publisher for your tech book.md b/sources/tech/20171201 How to find a publisher for your tech book.md
new file mode 100644
index 0000000000..76dc8112ca
--- /dev/null
+++ b/sources/tech/20171201 How to find a publisher for your tech book.md
@@ -0,0 +1,76 @@
How to find a publisher for your tech book
============================================================

### Writing a technical book takes more than a good idea. You need to know a bit about how the publishing industry works.


![How to find a publisher for your tech book](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_colorbooks.png?itok=vNhsYYyC "How to find a publisher for your tech book")
Image by: opensource.com

You've got an idea for a technical book—congratulations! Like hiking the Appalachian Trail, or learning to cook a soufflé, writing a book is one of those things that people talk about, but never take beyond the idea stage. That makes sense, because the failure rate is pretty high. Making it real involves putting your idea in front of a publisher, and finding out whether it's good enough to become a book. That step is scary enough, but the lack of information about how to do it complicates matters.

If you want to work with a traditional publisher, you'll need to get your book in front of them and hopefully start on the path to publication. I'm the Managing Editor at the [Pragmatic Bookshelf][4], so I see proposals all the time, as well as helping authors to craft good ones. Some are good, others are bad, but I often see proposals that just aren't right for Pragmatic.
I'll help you with the process of finding the right publisher, and with how to get your idea noticed.

### Identify your target

Your first step is to figure out which publisher is a good fit for your idea. To start, think about the publishers that you buy books from, and that you enjoy. The odds are pretty good that your book will appeal to people like you, so starting with your favorites makes for a pretty good short list. If you don't have much of a book collection, you can visit a bookstore, or take a look on Amazon. Make a list of a handful of publishers that you personally like to start with.

Next, winnow your prospects. Although most technical publishers look alike from a distance, they often have distinctive audiences. Some publishers go for broadly popular topics, such as C++ or Java. Your book on Elixir may not be a good fit for that publisher. If your prospective book is about teaching programming to kids, you probably don't want to go with the traditional academic publisher.

Once you've identified a few targets, do some more research into the publishers' catalogs, either on their own site, or on Amazon. See what books they have that are similar to your idea. If they have a book that's identical, or nearly so, you'll have a tough time convincing them to sign yours. That doesn't necessarily mean you should drop that publisher from your list. You can make some changes to your proposal to differentiate it from the existing book: target a different audience, or a different skill level. Maybe the existing book is outdated, and you could focus on new approaches to the technology. Make your proposal into a book that complements the existing one, rather than competes.

If your target publisher has no books that are similar, that can be a good sign, or a very bad one. Sometimes publishers choose not to publish on specific technologies, either because they don't believe their audience is interested, or they've had trouble with that technology in the past. New languages and libraries pop up all the time, and publishers have to make informed guesses about which will appeal to their readers. Their assessment may not be the same as yours. Their decision might be final, or they might be waiting for the right proposal. The only way to know is to propose and find out.

### Work your network

Identifying a publisher is the first step; now you need to make contact. Unfortunately, publishing is still about  _who_  you know, more than  _what_  you know. The person you want to know is an  _acquisitions editor,_  the editor whose job is to find new markets, authors, and proposals. If you know someone who has connections with a publisher, ask for an introduction to an acquisitions editor. These editors often specialize in particular subject areas, particularly at larger publishers, but you don't need to find the right one yourself. They're usually happy to connect you with the correct person.

Sometimes you can find an acquisitions editor at a technical conference, especially one where the publisher is a sponsor and has a booth. Even if there's not an acquisitions editor on site at the time, the staff at the booth can put you in touch with one. If conferences aren't your thing, you'll need to work your network to get an introduction. Use LinkedIn, or your informal contacts, to get in touch with an editor.

For smaller publishers, you may find acquisitions editors listed on the company website, with contact information if you're lucky.
If not, search for the publisher's name on Twitter, and see if you can turn up their editors. You might be nervous about trying to reach out to a stranger over social media to show them your book, but don't worry about it. Making contact is what acquisitions editors do. The worst-case result is they ignore you. + +Once you've made contact, the acquisitions editor will assist you with the next steps. They may have some feedback on your proposal right away, or they may want you to flesh it out according to their guidelines before they'll consider it. After you've put in the effort to find an acquisitions editor, listen to their advice. They know their system better than you do. + +### If all else fails + +If you can't find an acquisitions editor to contact, the publisher almost certainly has a blind proposal alias, usually of the form `proposals@[publisher].com`. Check the web site for instructions on what to send to a proposal alias; some publishers have specific requirements. Follow these instructions. If you don't, you have a good chance of your proposal getting thrown out before anybody looks at it. If you have questions, or aren't sure what the publisher wants, you'll need to try again to find an editor to talk to, because the proposal alias is not the place to get questions answered. Put together what they've asked for (which is a topic for a separate article), send it in, and hope for the best. + +### And ... wait + +No matter how you've gotten in touch with a publisher, you'll probably have to wait. If you submitted to the proposals alias, it's going to take a while before somebody does anything with that proposal, especially at a larger company. Even if you've found an acquisitions editor to work with, you're probably one of many prospects she's working with simultaneously, so you might not get rapid responses. Almost all publishers have a committee that decides on which proposals to accept, so even if your proposal is awesome and ready to go, you'll still need to wait for the committee to meet and discuss it. You might be waiting several weeks, or even a month before you hear anything. + +After a couple of weeks, it's fine to check back in with the editor to see if they need any more information. You want to be polite in this e-mail; if they haven't answered because they're swamped with proposals, being pushy isn't going to get you to the front of the line. It's possible that some publishers will never respond at all instead of sending a rejection notice, but that's uncommon. There's not a lot to do at this point other than be patient. Of course, if it's been months and nobody's returning your e-mails, you're free to approach a different publisher or consider self-publishing. + +### Good luck + +If this process seems somewhat scattered and unscientific, you're right; it is. Getting published depends on being in the right place, at the right time, talking to the right person, and hoping they're in the right mood. You can't control all of those variables, but having a better knowledge of how the industry works, and what publishers are looking for, can help you optimize the ones you can control. + +Finding a publisher is one step in a lengthy process. You need to refine your idea and create the proposal, as well as other considerations. At SeaGL this year [I presented][5] an introduction to the entire process. Check out [the video][6] for more detailed information. 
+ +### About the author + + [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/portrait.jpg?itok=b77dlNC4)][7] + + Brian MacDonald - Brian MacDonald is Managing Editor at the Pragmatic Bookshelf. Over the last 20 years in tech publishing, he's been an editor, author, and occasional speaker and trainer. He currently spends a lot of his time talking to new authors about how they can best present their ideas. You can follow him on Twitter at @bmac_editor.[More about me][2] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/12/how-find-publisher-your-book + +作者:[Brian MacDonald ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/bmacdonald +[1]:https://opensource.com/article/17/12/how-find-publisher-your-book?rate=o42yhdS44MUaykAIRLB3O24FvfWxAxBKa5WAWSnSY0s +[2]:https://opensource.com/users/bmacdonald +[3]:https://opensource.com/user/190176/feed +[4]:https://pragprog.com/ +[5]:https://archive.org/details/SeaGL2017WritingTheNextGreatTechBook +[6]:https://archive.org/details/SeaGL2017WritingTheNextGreatTechBook +[7]:https://opensource.com/users/bmacdonald +[8]:https://opensource.com/users/bmacdonald +[9]:https://opensource.com/users/bmacdonald +[10]:https://opensource.com/article/17/12/how-find-publisher-your-book#comments diff --git a/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md b/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md new file mode 100644 index 0000000000..b0f8e72018 --- /dev/null +++ b/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md @@ -0,0 +1,160 @@ +Randomize your WiFi MAC address on Ubuntu 16.04 +============================================================ + + _Your device’s MAC address can be used to track you across the WiFi networks you connect to. That data can be shared and sold, and often identifies you as an individual. It’s possible to limit this tracking by using pseudo-random MAC addresses._ + +![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif) + + _Image courtesy of [Cloudessa][4]_ + +Every network device like a WiFi or Ethernet card has a unique identifier called a MAC address, for example `b4:b6:76:31:8c:ff`. It’s how networking works: any time you connect to a WiFi network, the router uses that address to send and receive packets to your machine and distinguish it from other devices in the area. + +The snag with this design is that your unique, unchanging MAC address is just perfect for tracking you. Logged into Starbucks WiFi? Noted. London Underground? Logged. + +If you’ve ever put your real name into one of those Craptive Portals on a WiFi network you’ve now tied your identity to that MAC address. Didn’t read the terms and conditions? You might assume that free airport WiFi is subsidised by flogging ‘customer analytics’ (your personal information) to hotels, restaurant chains and whomever else wants to know about you. + +I don’t subscribe to being tracked and sold by mega-corps, so I spent a few hours hacking a solution. + +### MAC addresses don’t need to stay the same + +Fortunately, it’s possible to spoof your MAC address to a random one without fundamentally breaking networking. 
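As a quick aside, here is a minimal sketch that is not from the original post: assuming your wireless interface is named `wlp1s0` (an assumption; check yours with `ip link`), you can spoof a MAC by hand with the standard `ip` tool. The change only lasts until reboot, and NetworkManager may override it, which is exactly the problem tackled below:

```
# Assumption: the interface is wlp1s0 (list interfaces with `ip link`).
# Take the interface down, set a locally administered address, bring it back up.
sudo ip link set dev wlp1s0 down
sudo ip link set dev wlp1s0 address 02:12:34:56:78:9a
sudo ip link set dev wlp1s0 up
```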
+

I wanted to randomize my MAC address, but with three particular caveats:

1. The MAC should be different across different networks. This means Starbucks WiFi sees a different MAC from London Underground, preventing linking my identity across different providers.

2. The MAC should change regularly to prevent a network knowing that I'm the same person who walked past 75 times over the last year.

3. The MAC stays the same throughout each working day. When the MAC address changes, most networks will kick you off, and those with Craptive Portals will usually make you sign in again - annoying.

### Manipulating NetworkManager

My first attempt, using the `macchanger` tool, was unsuccessful, as NetworkManager would override the MAC address according to its own configuration.

I learned that NetworkManager 1.4.1+ can do MAC address randomization right out the box. If you're using Ubuntu 17.04 upwards, you can get most of the way with [this config file][7]. You can't quite achieve all three of my requirements (you must choose  _random_ or  _stable_  but it seems you can't do  _stable-for-one-day_ ).

Since I'm sticking with Ubuntu 16.04, which ships with NetworkManager 1.2, I couldn't make use of the new functionality. Supposedly there is some randomization support but I failed to actually make it work, so I scripted up a solution instead.

Fortunately NetworkManager 1.2 does allow for spoofing your MAC address. You can see this in the 'Edit connections' dialog for a given network:

![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png)

NetworkManager also supports hooks - any script placed in `/etc/NetworkManager/dispatcher.d/pre-up.d/` is run before a connection is brought up.

### Assigning pseudo-random MAC addresses

To recap, I wanted to generate random MAC addresses based on the  _network_  and the  _date_ . We can use the NetworkManager command line, nmcli, to show a full list of networks:

```
> nmcli connection
NAME UUID TYPE DEVICE
Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0
DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless --
PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless --
virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless --

```

Since each network has a unique identifier, to achieve my scheme I just concatenated the UUID with today's date and hashed the result:

```

# eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03

> echo -n "${UUID}-$(date +%F)" | md5sum

53594de990e92f9b914a723208f22b3f -

```

That produced bytes which can be substituted in for the last octets of the MAC address.

Note that the first byte `02` signifies the address is [locally administered][8]. Real, burned-in MAC addresses start with 3 bytes designating their manufacturer, for example `b4:b6:76` for Intel.

It's possible that some routers may reject locally administered MACs but I haven't encountered that yet.
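To make the derivation concrete before the full script, here is a sketch of just the MAC-building step; the `02:` prefix and the `sed` pattern are taken from the script shown below, and it reuses the example hash above:

```
# Sketch of the per-network, per-day MAC derivation used by the script below.
UUID=618545ca-d81a-11e7-a2a4-271245e11a45
UUID_DAILY_HASH=$(echo -n "${UUID}-$(date +%F)" | md5sum)

# Slice the first five bytes of the hash into MAC-style octets,
# prefixed with 02 (locally administered).
RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')"

echo "${RANDOM_MAC}"   # eg 02:53:59:4d:e9:90 on 2017-12-03
```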
+
+Every time a connection comes up, the script calls `nmcli` to set the spoofed MAC address for every saved connection:
+
+![A terminal window showing a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png)
+
+As a final check, if I look at `ifconfig` I can see that the `HWaddr` is the spoofed one, not my real MAC address:
+
+```
+> ifconfig
+wlp1s0    Link encap:Ethernet  HWaddr b4:b6:76:45:64:4d
+          inet addr:192.168.0.86  Bcast:192.168.0.255  Mask:255.255.255.0
+          inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link
+          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
+          RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0
+          TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0
+          collisions:0 txqueuelen:1000
+          RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB)
+
+```
+
+The full script is [available on Github][9].
+
+```
+#!/bin/sh
+
+# /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses
+
+# Configure every saved WiFi connection in NetworkManager with a spoofed MAC
+# address, seeded from the UUID of the connection and the date eg:
+# 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20'
+
+# This makes your MAC impossible(?) to track across WiFi providers, and
+# for one provider to track across days.
+
+# For craptive portals that authenticate based on MAC, you might want to
+# automate logging in :)
+
+# Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar
+# automatically.
+
+export PATH=$PATH:/usr/bin:/bin
+
+LOG_FILE=/var/log/randomize-mac-addresses
+
+echo "$(date): $*" > ${LOG_FILE}
+
+WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3)
+
+for UUID in ${WIFI_UUIDS}
+do
+    # %F gives YYYY-MM-DD, so the hash (and the MAC derived from it)
+    # changes once per day, as described above
+    UUID_DAILY_HASH=$(echo "${UUID}-$(date +%F)" | md5sum)
+
+    RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')"
+
+    CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}"
+
+    echo "$CMD" >> ${LOG_FILE}
+    $CMD &
+done
+
+wait
+```
+Enjoy!
+
+ _Update: [Use locally administered MAC addresses][5] to avoid clashing with real Intel ones.
Thanks [@_fink][6]_ + +-------------------------------------------------------------------------------- + +via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/ + +作者:[Paul M Furley ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.paulfurley.com/ +[1]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/raw/5f02fc8f6ff7fca5bca6ee4913c63bf6de15abca/randomize-mac-addresses +[2]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f#file-randomize-mac-addresses +[3]:https://github.com/ +[4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/ +[5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef +[6]:https://twitter.com/fink_/status/937305600005943296 +[7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/ +[8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local +[9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f diff --git a/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md b/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md new file mode 100644 index 0000000000..dbdebf63e3 --- /dev/null +++ b/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md @@ -0,0 +1,321 @@ +Python +============================================================ + +Python has rich tools for packaging, distributing and sandboxing applications. Snapcraft builds on top of these familiar tools such as `pip`, `setup.py` and `requirements.txt` to create snaps for people to install on Linux. + +### What problems do snaps solve for Python applications? + +Linux install instructions for Python applications often get complicated. System dependencies, which differ from distribution to distribution, must be separately installed. To prevent modules from different Python applications clashing with each other, developer tools like `virtualenv` or `venv` must be used. With snapcraft it’s one command to produce a bundle that works anywhere. + +Here are some snap advantages that will benefit many Python projects: + +* Bundle all the runtime requirements, including the exact versions of system libraries and the Python interpreter. + +* Simplify installation instructions, regardless of distribution, to `snap install mypythonapp`. + +* Directly control the delivery of automatic application updates. + +* Extremely simple creation of daemons. + +### Getting started + +Let’s take a look at offlineimap and youtube-dl by way of examples. Both are command line applications. offlineimap uses Python 2 and only has Python module requirements. youtube-dl uses Python 3 and has system package requirements, in this case `ffmpeg`. + +### offlineimap + +Snaps are defined in a single yaml file placed in the root of your project. The offlineimap example shows the entire `snapcraft.yaml` for an existing project. We’ll break this down. + +``` +name: offlineimap +version: git +summary: OfflineIMAP +description: | + OfflineIMAP is software that downloads your email mailbox(es) as local + Maildirs. OfflineIMAP will synchronize both sides via IMAP. 
+
+grade: devel
+confinement: devmode
+
+apps:
+  offlineimap:
+    command: bin/offlineimap
+
+parts:
+  offlineimap:
+    plugin: python
+    python-version: python2
+    source: .
+
+```
+
+#### Metadata
+
+The `snapcraft.yaml` starts with a small amount of human-readable metadata, which usually can be lifted from the GitHub description or project README.md. This data is used in the presentation of your app in the Snap Store. The `summary:` cannot exceed 79 characters. You can use a pipe with the `description:` to declare a multi-line description.
+
+```
+name: offlineimap
+version: git
+summary: OfflineIMAP
+description: |
+  OfflineIMAP is software that downloads your email mailbox(es) as local
+  Maildirs. OfflineIMAP will synchronize both sides via IMAP.
+
+```
+
+#### Confinement
+
+To get started we won’t confine this application. Unconfined applications, specified with `devmode`, can only be released to the hidden “edge” channel where you and other developers can install them.
+
+```
+confinement: devmode
+
+```
+
+#### Parts
+
+Parts define how to build your app. Parts can be anything: programs, libraries, or other assets needed to create and run your application. In this case we have one: the offlineimap source code. In other cases these can point to local directories, remote git repositories, or tarballs.
+
+The Python plugin will also bundle Python in the snap, so you can be sure that the version of Python you test against is included with your app. Dependencies from `install_requires` in your `setup.py` will also be bundled. Dependencies from a `requirements.txt` file can also be bundled using the `requirements:` option.
+
+```
+parts:
+  offlineimap:
+    plugin: python
+    python-version: python2
+    source: .
+
+```
+
+#### Apps
+
+Apps are the commands and services exposed to end users. If your command name matches the snap `name`, users will be able to run the command directly. If the names differ, then apps are prefixed with the snap `name` (`offlineimap.command-name`, for example). This is to avoid conflicting with apps defined by other installed snaps.
+
+If you don’t want your command prefixed you can request an alias for it on the [Snapcraft forum][1]. These command aliases are set up automatically when your snap is installed from the Snap Store.
+
+```
+apps:
+  offlineimap:
+    command: bin/offlineimap
+
+```
+
+If your application is intended to run as a service, add the line `daemon: simple` after the command keyword. This will automatically keep the service running on install, update and reboot (a sketch of this stanza appears at the end of this section).
+
+### Building the snap
+
+You’ll first need to [install snap support][2], and then install the snapcraft tool:
+
+```
+sudo snap install --beta --classic snapcraft
+
+```
+
+If you have just installed snap support, start a new shell so your `PATH` is updated to include `/snap/bin`. You can then build this example yourself:
+
+```
+git clone https://github.com/snapcraft-docs/offlineimap
+cd offlineimap
+snapcraft
+
+```
+
+The resulting snap can be installed locally. This requires the `--dangerous` flag because the snap is not signed by the Snap Store. The `--devmode` flag acknowledges that you are installing an unconfined application:
+
+```
+sudo snap install offlineimap_*.snap --devmode --dangerous
+
+```
+
+You can then try it out:
+
+```
+offlineimap
+
+```
+
+Removing the snap is simple too:
+
+```
+sudo snap remove offlineimap
+
+```
+
+Jump ahead to [Share with your friends][3] or continue to read another example.
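+
+As promised above, here is what the `apps` stanza might look like if offlineimap were packaged as a background sync service instead of a one-shot command. This is a sketch based on the `daemon: simple` note, not part of the published example:
+
+```
+apps:
+  offlineimap:
+    command: bin/offlineimap
+    daemon: simple
+```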
+
+### youtube-dl
+
+The youtube-dl example shows a `snapcraft.yaml` using a tarball of a Python application and `ffmpeg` bundled in the snap to satisfy the runtime requirements. Here is the entire `snapcraft.yaml` for youtube-dl. We’ll break this down.
+
+```
+name: youtube-dl
+version: 2017.06.18
+summary: YouTube Downloader.
+description: |
+  youtube-dl is a small command-line program to download videos from
+  YouTube.com and a few more sites.
+
+grade: devel
+confinement: devmode
+
+parts:
+  youtube-dl:
+    source: https://github.com/rg3/youtube-dl/archive/$SNAPCRAFT_PROJECT_VERSION.tar.gz
+    plugin: python
+    python-version: python3
+    after: [ffmpeg]
+
+apps:
+  youtube-dl:
+    command: bin/youtube-dl
+
+```
+
+#### Parts
+
+The `$SNAPCRAFT_PROJECT_VERSION` variable is derived from the `version:` stanza and used here to reference the matching release tarball. Because the `python` plugin is used, snapcraft will bundle a copy of Python in the snap using the version specified in the `python-version:` stanza, in this case Python 3.
+
+youtube-dl makes use of `ffmpeg` to transcode or otherwise convert the audio and video file it downloads. In this example, youtube-dl is told to build after the `ffmpeg` part. Because the `ffmpeg` part specifies no plugin, it will be fetched from the parts repository. This is a collection of community-contributed definitions which can be used by anyone when building a snap, saving you from needing to specify the source and build rules for each system dependency. You can use `snapcraft search` to find more parts to use and `snapcraft define <part-name>` to verify how the part is defined.
+
+```
+parts:
+  youtube-dl:
+    source: https://github.com/rg3/youtube-dl/archive/$SNAPCRAFT_PROJECT_VERSION.tar.gz
+    plugin: python
+    python-version: python3
+    after: [ffmpeg]
+
+```
+
+### Building the snap
+
+You can build this example yourself by running the following:
+
+```
+git clone https://github.com/snapcraft-docs/youtube-dl
+cd youtube-dl
+snapcraft
+
+```
+
+The resulting snap can be installed locally. This requires the `--dangerous` flag because the snap is not signed by the Snap Store. The `--devmode` flag acknowledges that you are installing an unconfined application:
+
+```
+sudo snap install youtube-dl_*.snap --devmode --dangerous
+
+```
+
+Run the command:
+
+```
+youtube-dl "https://www.youtube.com/watch?v=k-laAxucmEQ"
+
+```
+
+Removing the snap is simple too:
+
+```
+sudo snap remove youtube-dl
+
+```
+
+### Share with your friends
+
+To share your snaps you need to publish them in the Snap Store. First, create an account on [the dashboard][4]. Here you can customize how your snaps are presented, review your uploads and control publishing.
+
+You’ll need to choose a unique “developer namespace” as part of the account creation process. This name will be visible to users and associated with your published snaps.
+
+Make sure the `snapcraft` command is authenticated using the email address attached to your Snap Store account:
+
+```
+snapcraft login
+
+```
+
+### Reserve a name for your snap
+
+You can publish your own version of a snap, provided you do so under a name you have rights to.
+
+```
+snapcraft register mypythonsnap
+
+```
+
+Be sure to update the `name:` in your `snapcraft.yaml` to match this registered name, then run `snapcraft` again.
+
+### Upload your snap
+
+Use snapcraft to push the snap to the Snap Store.
+
+```
+snapcraft push --release=edge mypythonsnap_*.snap
+```
+
+If you’re happy with the result, you can commit the snapcraft.yaml to your GitHub repo and [turn on automatic builds][5] so any further commits automatically get released to edge, without requiring you to manually build locally.
+
+### Further customisations
+
+Here are all the Python plugin-specific keywords:
+
+```
+- requirements:
+  (string)
+  Path to a requirements.txt file
+- constraints:
+  (string)
+  Path to a constraints file
+- process-dependency-links:
+  (bool; default: false)
+  Enable the processing of dependency links in pip, which allow one project
+  to provide places to look for another project
+- python-packages:
+  (list)
+  A list of dependencies to get from PyPI
+- python-version:
+  (string; default: python3)
+  The python version to use. Valid options are: python2 and python3
+
+```
+
+You can view them locally by running:
+
+```
+snapcraft help python
+
+```
+
+### Extending and overriding behaviour
+
+You can [extend the behaviour][6] of any part in your `snapcraft.yaml` with shell commands. These can be run after pulling the source code but before building by using the `prepare` keyword. The build process can be overridden entirely using the `build` keyword and shell commands. The `install` keyword is used to run shell commands after building your code, useful for making post-build modifications such as relocating build assets.
+
+Using the youtube-dl example above, we can run the test suite at the end of the build. If this fails, the snap creation will be terminated:
+
+```
+parts:
+  youtube-dl:
+    source: https://github.com/rg3/youtube-dl/archive/$SNAPCRAFT_PROJECT_VERSION.tar.gz
+    plugin: python
+    python-version: python3
+    stage-packages: [ffmpeg, python-nose]
+    install: |
+      nosetests
+```
+
+--------------------------------------------------------------------------------
+
+via: https://docs.snapcraft.io/build-snaps/python
+
+作者:[Snapcraft.io ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:Snapcraft.io
+
+[1]:https://forum.snapcraft.io/t/process-for-reviewing-aliases-auto-connections-and-track-requests/455
+[2]:https://docs.snapcraft.io/core/install
+[3]:https://docs.snapcraft.io/build-snaps/python#share-with-your-friends
+[4]:https://dashboard.snapcraft.io/openid/login/?next=/dev/snaps/
+[5]:https://build.snapcraft.io/
+[6]:https://docs.snapcraft.io/build-snaps/scriptlets
diff --git a/sources/tech/20171202 Scrot Linux command-line screen grabs made simple b/sources/tech/20171202 Scrot Linux command-line screen grabs made simple
new file mode 100644
index 0000000000..979ed86b3c
--- /dev/null
+++ b/sources/tech/20171202 Scrot Linux command-line screen grabs made simple
@@ -0,0 +1,72 @@
+Translating by filefi
+
+# Scrot: Linux command-line screen grabs made simple
+
+by [Scott Nesbitt][a] · November 30, 2017
+
+> Scrot is a basic, flexible tool that offers a number of handy options for taking screen captures from the Linux command line.
+
+[![Original photo by Rikki Endsley. CC BY-SA 4.0](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A)][1]
+
+
+
+There are great tools on the Linux desktop for taking screen captures, such as [KSnapshot][2] and [Shutter][3]. Even the simple utility that comes with the GNOME desktop does a pretty good job of capturing screens.
But what if you rarely need to take screen captures? Or you use a Linux distribution without a built-in capture tool, or an older computer with limited resources?
+
+Turn to the command line and a little utility called [Scrot][4]. It does a fine job of taking simple screen captures, and it includes a few features that might surprise you.
+
+### Getting started with Scrot
+Many Linux distributions come with Scrot already installed—to check, type `which scrot`. If it isn't there, you can install Scrot using your distro's package manager. If you're willing to compile the code, grab it [from GitHub][4].
+
+To take a screen capture, crack open a terminal window and type `scrot [filename]`, where `[filename]` is the name of the file to which you want to save the image (for example, `desktop.png`). If you don't include a name for the file, Scrot will create one for you, such as `2017-09-24-185009_1687x938_scrot.png`. (That filename isn't as descriptive as it could be, is it? That's why it's better to add one to the command.)
+
+Running Scrot with no options takes a screen capture of your entire desktop. If you don't want to do that, Scrot lets you focus on smaller portions of your screen.
+
+### Taking a screen capture of a single window
+
+Tell Scrot to take a screen capture of a single window by typing `scrot -u [filename]`.
+
+The `-u` option tells Scrot to grab the window currently in focus. That's usually the terminal window you're working in, which might not be the one you want.
+
+To grab another window on your desktop, type `scrot -s [filename]`.
+
+The `-s` option lets you do one of two things:
+
+* select an open window, or
+
+* draw a rectangle around a window or a portion of a window to capture it.
+
+You can also set a delay, which gives you a little more time to select the window you want to capture. To do that, type `scrot -u -d [num] [filename]`.
+
+The `-d` option tells Scrot to wait before grabbing the window, and `[num]` is the number of seconds to wait. Specifying `-d 5` (wait five seconds) should give you enough time to choose a window.
+
+### More useful options
+
+Scrot offers a number of additional features (most of which I never use). The ones I find most useful include:
+
+* `-b` also grabs the window's border
+
+* `-t` grabs a window and creates a thumbnail of it. This can be useful when you're posting screen captures online.
+
+* `-c` creates a countdown in your terminal when you use the `-d` option.
+
+To learn about Scrot's other options, check out its documentation by typing `man scrot` in a terminal window, or [read it online][5]. Then start snapping images of your screen.
+
+It's basic, but Scrot gets the job done nicely.
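+
+As a parting example, here is how the options above combine in practice. The filenames are arbitrary, and note that `-t` takes the thumbnail size as a percentage of the original; check `man scrot` for the details on your version:
+
+```
+# Count down 5 seconds in the terminal, then grab the focused
+# window together with its border
+scrot -u -b -d 5 -c focused-window.png
+
+# Interactively select a window or region, and also save a 25%
+# thumbnail alongside the full capture
+scrot -s -t 25 region.png
+```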
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot + +作者:[Scott Nesbitt][a] +译者:[filefi](https://github.com/filefi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/scottnesbitt +[1]:https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A +[2]:https://www.kde.org/applications/graphics/ksnapshot/ +[3]:https://launchpad.net/shutter +[4]:https://github.com/dreamer/scrot +[5]:http://manpages.ubuntu.com/manpages/precise/man1/scrot.1.html +[6]:https://github.com/dreamer/scrot diff --git a/sources/tech/20171202 docker - Use multi-stage builds.md b/sources/tech/20171202 docker - Use multi-stage builds.md new file mode 100644 index 0000000000..e1a6414862 --- /dev/null +++ b/sources/tech/20171202 docker - Use multi-stage builds.md @@ -0,0 +1,127 @@ +Use multi-stage builds +============================================================ + +Multi-stage builds are a new feature requiring Docker 17.05 or higher on the daemon and client. Multistage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain. + +> Acknowledgment: Special thanks to [Alex Ellis][1] for granting permission to use his blog post [Builder pattern vs. Multi-stage builds in Docker][2] as the basis of the examples below. + +### Before multi-stage builds + +One of the most challenging things about building images is keeping the image size down. Each instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you don’t need before moving on to the next layer. To write a really efficient Dockerfile, you have traditionally needed to employ shell tricks and other logic to keep the layers as small as possible and to ensure that each layer has the artifacts it needs from the previous layer and nothing else. + +It was actually very common to have one Dockerfile to use for development (which contained everything needed to build your application), and a slimmed-down one to use for production, which only contained your application and exactly what was needed to run it. This has been referred to as the “builder pattern”. Maintaining two Dockerfiles is not ideal. + +Here’s an example of a `Dockerfile.build` and `Dockerfile` which adhere to the builder pattern above: + +`Dockerfile.build`: + +``` +FROM golang:1.7.3 +WORKDIR /go/src/github.com/alexellis/href-counter/ +RUN go get -d -v golang.org/x/net/html +COPY app.go . +RUN go get -d -v golang.org/x/net/html \ + && CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app . + +``` + +Notice that this example also artificially compresses two `RUN` commands together using the Bash `&&` operator, to avoid creating an additional layer in the image. This is failure-prone and hard to maintain. It’s easy to insert another command and forget to continue the line using the `\` character, for example. + +`Dockerfile`: + +``` +FROM alpine:latest +RUN apk --no-cache add ca-certificates +WORKDIR /root/ +COPY app . +CMD ["./app"] + +``` + +`build.sh`: + +``` +#!/bin/sh +echo Building alexellis2/href-counter:build + +docker build --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \ + -t alexellis2/href-counter:build . 
-f Dockerfile.build
+
+docker create --name extract alexellis2/href-counter:build
+docker cp extract:/go/src/github.com/alexellis/href-counter/app ./app
+docker rm -f extract
+
+echo Building alexellis2/href-counter:latest
+
+docker build --no-cache -t alexellis2/href-counter:latest .
+rm ./app
+
+```
+
+When you run the `build.sh` script, it needs to build the first image, create a container from it in order to copy the artifact out, then build the second image. Both images take up room on your system and you still have the `app` artifact on your local disk as well.
+
+Multi-stage builds vastly simplify this situation!
+
+### Use multi-stage builds
+
+With multi-stage builds, you use multiple `FROM` statements in your Dockerfile. Each `FROM` instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image. To show how this works, let’s adapt the Dockerfile from the previous section to use multi-stage builds.
+
+`Dockerfile`:
+
+```
+FROM golang:1.7.3
+WORKDIR /go/src/github.com/alexellis/href-counter/
+RUN go get -d -v golang.org/x/net/html
+COPY app.go .
+RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
+
+FROM alpine:latest
+RUN apk --no-cache add ca-certificates
+WORKDIR /root/
+COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
+CMD ["./app"]
+
+```
+
+You only need the single Dockerfile. You don’t need a separate build script, either. Just run `docker build`.
+
+```
+$ docker build -t alexellis2/href-counter:latest .
+
+```
+
+The end result is the same tiny production image as before, with a significant reduction in complexity. You don’t need to create any intermediate images and you don’t need to extract any artifacts to your local system at all.
+
+How does it work? The second `FROM` instruction starts a new build stage with the `alpine:latest` image as its base. The `COPY --from=0` line copies just the built artifact from the previous stage into this new stage. The Go SDK and any intermediate artifacts are left behind, and not saved in the final image.
+
+### Name your build stages
+
+By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first `FROM` instruction. However, you can name your stages, by adding an `as <NAME>` to the `FROM` instruction. This example improves the previous one by naming the stages and using the name in the `COPY` instruction. This means that even if the instructions in your Dockerfile are re-ordered later, the `COPY` won’t break.
+
+```
+FROM golang:1.7.3 as builder
+WORKDIR /go/src/github.com/alexellis/href-counter/
+RUN go get -d -v golang.org/x/net/html
+COPY app.go .
+RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
+
+FROM alpine:latest
+RUN apk --no-cache add ca-certificates
+WORKDIR /root/
+COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
+CMD ["./app"] +``` + +-------------------------------------------------------------------------------- + +via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/#name-your-build-stages + +作者:[docker docs ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://docs.docker.com/engine/userguide/eng-image/multistage-build/ +[1]:https://twitter.com/alexellisuk +[2]:http://blog.alexellis.io/mutli-stage-docker-builds/ diff --git a/translated/tech/20090701 The One in Which I Call Out Hacker News.md b/translated/tech/20090701 The One in Which I Call Out Hacker News.md new file mode 100644 index 0000000000..670be95353 --- /dev/null +++ b/translated/tech/20090701 The One in Which I Call Out Hacker News.md @@ -0,0 +1,99 @@ +我号召黑客新闻的理由之一 +实现高速缓存会花费 30 个小时,你有额外的 30 个小时吗? +不,你没有。 +我实际上并不知道它会花多少时间,可能它会花五分钟,你有五分钟吗?不,你还是没有。为什么?因为我在撒谎。它会消耗远超五分钟的时间,这是程序员永远的 +乐观主义。 +- Owen Astrachan 教授于 2004 年 2 月 23 日在 CPS 108 上的讲座 + +指责开源软件的使用存在着高昂的代价已经不是一个新论点了,它之前就被提过,而且说的比我更有信服力,即使一些人已经在高度赞扬开源软件的运作。 +这种事为什么会重复发生? + +在周一的黑客新闻上,我愉悦地看着某些人一边说写 Stack Overflow 简单的简直搞笑,一边通过允许七月第四个周末之后的克隆来开始备份他们的提问。 +其他的声明中也指出现存的克隆是一个好的出发点。 + +让我们假设,为了争辩,你觉得将自己的 Stack Overflow 通过 ASP.NET 和 MVC 克隆是正确的,然后被一块廉价的手表和一个小型俱乐部头领忽悠之后, +决定去手动拷贝你 Stack Overflow 的源代码,一页又一页,所以你可以逐字逐句地重新输入,我们同样会假定你像我一样打字,很酷的有 100 WPM +(差不多每秒8个字符),不和我一样的话,你不会犯错。 + + Stack Overflow 的 *.cs、*.sql、*.css、*.js 和 *.aspx 文件大约 2.3 MB,因此如果你想将这些源代码输进电脑里去的话,即使你不犯错也需要大约 80 个小时。 + +除非......当然,你是不会那样做的:你打算从头开始实现 Stack Overflow 。所以即使我们假设,你花了十倍的时间去设计、输出,然后调试你自己的实现而不是去拷 +贝已有的那份,那已经让你已经编译了好几个星期。我不知道你,但是我可以承认我写的新代码大大小于我复制的现有代码的十分之一。 + +好,ok,我听见你松了口气。所以不是全部。但是我可以做大部分。 + +行,所以什么是大部分?这只是询问和回答问题,这个部分很简单。那么,除了你必须实现对问题和答案投票、赞同还是反对,而且提问者应该能够去接收每一个问题的 +单一答案。你不能让人们赞同或者反对他们自己的回答。所以你需要去阻止。你需要去确保用户在一定的时间内不会赞同或反对其他用户太多次。以预防垃圾邮件, +你可能也需要去实现一个垃圾邮件过滤器,即使在一个基本的设计里,也要考虑到这一点。而且还需要去支持用户图标。并且你将不得不寻找一个自己真正信任的并且 +与 markdown 接合很好的 HTML 库(当然,你确实希望重新使用那个令人敬畏的编辑器 Stack Overflow ),你还需要为所有控件购买,设计或查找小部件,此外 +你至少需要一个基本的管理界面,以便用户可以调节,并且你需要实现可扩展的业务量,以便能稳定地给用户越来越多的功能去实现他们想做的。 + +如果你这样做了,你可以完成它。 + +除了...除了全文检索外,特别是它在“寻找问题”功能中的表现,这是必不可少的。然后用户的基本信息,和回答的意见,然后有一个主要展示你的重要问题, +但是它会稳定的冒泡式下降。另外你需要去实现奖励,并支持每个用户的多个 OpenID 登录,然后为相关的事件发送邮件通知,并添加一个标签系统, +接着允许管理员通过一个不错的图形界面配置徽章。你需要去显示用户的 karma 历史,点赞和差评。整个事情的规模都非常好,因为它随时都可以被 + slashdotted、reddited 或是 Stack Overflow 。 + +在这之后!你就已经完成了! + +...在正确地实现升级、国际化、业绩上限和一个 css 设计之后,使你的站点看起来不像是一个屁股,上面的大部分 AJAX 版本和 G-d 知道什么会同样潜伏 +在你所信任的界面下,但是当你开始做一个真正的克隆的时候,就会遇到它。 + +告诉我:这些功能中哪个是你感觉可以削减而让它仍然是一个引人注目的产品,哪些是大部分网站之下的呢?哪个你可以剔除呢? 
+ +开发者因为开源软件的使用是一个可怕的痛苦这样一个相同的理由认为克隆一个像 Stack Overflow 的站点很简单。当你把一个开发者放在 Stack Overflow 前面, +他们并不真的看到 Stack Overflow,他们实际上看的是这些: + +create table QUESTION (ID identity primary key, + TITLE varchar(255), --- 为什么我知道你认为是 255 + BODY text, + UPVOTES integer not null default 0, + DOWNVOTES integer not null default 0, + USER integer references USER(ID)); +create table RESPONSE (ID identity primary key, + BODY text, + UPVOTES integer not null default 0, + DOWNVOTES integer not null default 0, + QUESTION integer references QUESTION(ID)) + +如果你告诉一个开发者去复制 Stack Overflow ,进入他脑海中的就是上面的两个 SQL 表和足够的 HTML 文件来显示它们,而不用格式化,这在一个周末里是完全 +可以实现的,聪明的人会意识到他们需要实现登陆、注销和评论,点赞需要绑定到用户。但是这在一个周末内仍然是完全可行的。这仅仅是在 SQL 后端里加上两张 +左右的表,而 HTML 则用来展示内容,使用像 Django 这样的框架,你甚至可以免费获得基本的用户和评论。 + +但是那不是和 Stack Overflow 相关的,无论你对 Stack Overflow 的感受如何,大多数访问者似乎都认为用户体验从头到尾都很流畅,他们感觉他们和一个 +好产品相互影响。即使我没有更好的了解,我也会猜测 Stack Overflow 在数据库模式方面取得了持续的成功-并且有机会去阅读 Stack Overflow 的源代码, +我知道它实际上有多么的小,这些是一个极大的 spit 和 Polish 的集合,成为了一个具有高可用性的主要网站,一个开发者,问一个东西被克隆有多难, +仅仅不认为和 Polish 相关,因为 Polish 是实现结果附带的。 + +这就是为什么 Stack Overflow 的开放源代码克隆会失败,即使一些人在设法实现大部分 Stack Overflow 的“规范”,也会有一些关键区域会将他们绊倒, +举个例子,如果你把目标市场定在了终端用户上,你要么需要一个图形界面去配置规则,要么聪明的开发者会决定哪些徽章具有足够的通用性,去继续所有的 +安装,实际情况是,开发者发牢骚和抱怨你不能实现一个真实的综合性的像 badges 的图形用户界面,然后 bikeshed 任何的建议,为因为标准的 badges +在范围内太远,他们会迅速避开选择其他方向,他们最后会带着相同的有 bug 追踪器的解决方案赶上,就像他们工作流程的概要使用一样: +开发者通过任意一种方式实现一个通用的机制,任何一个人完全都能轻松地使用 Python、PHP 或任意一门语言中的系统 API 来工作,能简单为他们自己增加 +自定义设置,PHP 和 Python 是学起来很简单的,并且比起曾经的图形界面更加的灵活,为什么还要操心其他事呢? + +同样的,节制和管理界面可以被削减。如果你是一个管理员,你可以进入 SQL 服务器,所以你可以做任何真正的管理-就像这样,管理员可以通过任何的 Django +管理和类似的系统给你提供支持,因为,毕竟只有少数用户是 mods,mods 应该理解网站是怎么运作、停止的。当然,没有 Stack Overflow 的接口失败会被纠正 +,即使 Stack Overflow 的愚蠢的要求,你必须知道如何去使用 openID (它是最糟糕的缺点)最后得到修复。我确信任何的开源的克隆都会狂热地跟随它- +即使 GNOME 和 KDE 多年来亦步亦趋地复制 windows ,而不是尝试去修复它自己最明显的缺陷。 + +开发者可能不会关心应用的这些部分,但是最终用户会,当他们尝试去决定使用哪个应用时会去考虑这些。就好像一家好的软件公司希望通过确保其产品在出货之前 +是一流的来降低其支持成本一样,所以,同样的,懂行的消费者想在他们购买这些产品之前确保产品好用,以便他们不需要去寻求帮助,开源产品就失败在这种地方 +,一般来说,专有解决方案会做得更好。 + +这不是说开源软件没有他们自己的立足之地,这个博客运行在 Apache,Django,PostgreSQL 和 Linux 上。但是让我告诉你,配置这些堆栈不是为了让人心灰意懒 +,PostgreSQL 需要在老版本上移除设置。然后,在 Ubuntu 和 FreeBSD 最新的版本上,仍然要求用户搭建第一个数据库集群,MS SQL不需要这些东西,Apache... +天啊,甚至没有让我开始尝试去向一个初学者用户解释如何去得到虚拟机,MovableType,一对 Django 应用程序,而且所有的 WordPress 都可以在一个单一的安装下 +顺利运行,像在地狱一样,只是试图解释 Apache 的分叉线程变换给技术上精明的非开发人员就是一个噩梦,IIS 7 和操作系统的 Apache 服务器是非常闭源的, +图形界面管理程序配置这些这些相同的堆栈非常的简单,Django 是一个伟大的产品,但是它只是基础架构而已,我认为开源软件做的很好,恰恰是因为推动开发者去 +贡献的动机 + +下次你看见一个你喜欢的应用,认为所有面向用户的细节非常长和辛苦,就会去让它用起来更令人开心,在谴责你如何能普通的实现整个的可恶的事在一个周末, +十分之九之后,当你认为一个应用的实现简单地简直可笑,你就完全的错失了故事另一边的用户 + +via: https://bitquabit.com/post/one-which-i-call-out-hacker-news/ + +作者:Benjamin Pollack 译者:hopefully2333 校对:校对者ID + +本文由 LCTT 原创编译,Linux中国 荣誉推出 diff --git a/translated/tech/20170530 How to Improve a Legacy Codebase.md b/translated/tech/20170530 How to Improve a Legacy Codebase.md new file mode 100644 index 0000000000..a1869b0449 --- /dev/null +++ b/translated/tech/20170530 How to Improve a Legacy Codebase.md @@ -0,0 +1,104 @@ +# 如何改善遗留的代码库 + +这在每一个程序员,项目管理员,团队领导的一生中都会至少发生一次。原来的程序员早已离职去度假了,留下了一坨几百万行屎一样的代码和文档(如果有的话),一旦接手这些代码,想要跟上公司的进度简直让人绝望。 + +你的工作是带领团队摆脱这个混乱的局面 + +当你的第一反应过去之后,你开始去熟悉这个项目,公司的管理层都在关注着你,所以项目只能成功,然而,看了一遍代码之后却发现很大的可能会失败。那么该怎么办呢? 
+ +幸运(不幸)的是我已经遇到好几次这种情况了,我和我的小伙伴发现将这坨热气腾腾的屎变成一个健康可维护的项目是非常值得一试的。下面这些是我们的一些经验: + +### 备份 + +在开始做任何事情之前备份与之可能相关的所有文件。这样可以确保不会丢失任何可能会在另外一些地方很重要的信息。一旦修改其中一些文件,你可能花费一天或者更多天都解决不了这个愚蠢的问题,配置数据通常不受版本控制,所以特别容易受到这方面影响,如果定期备份数据时连带着它一起备份了,还是比较幸运的。所以谨慎总比后悔好,复制所有东西到一个绝对安全的地方吧,除非这些文件是只读模式否则不要轻易碰它。 + +### 必须确保代码能够在生产环境下构建运行并产出,这是重要的先决条件。 + +之前我假设环境已经存在,所以完全丢了这一步,Hacker News 的众多网友指出了这一点并且证明他们是对的:第一步是确认你知道在生产环境下运行着什么东西,也意味着你需要在你的设备上构建一个跟生产环境上运行的版本每一个字节都一模一样的版本。如果你找不到实现它的办法,一旦你将它投入生产环境,你很可能会遭遇一些很糟糕的事情。确保每一部分都尽力测试,之后在你足够信任它能够很好的运行的时候将它部署生产环境下。无论它运行的怎么样都要做好能够马上切换回旧版本的准备,确保日志记录下了所有情况,以便于接下来不可避免的 “验尸” 。 + +### 冻结数据库 + +直到你修改代码之前尽可能冻结你的数据库,在你特别熟悉代码库和遗留代码之后再去修改数据库。在这之前过早的修改数据库的话,你可能会碰到大问题,你会失去让新旧代码和数据库一起构建稳固的基础的能力。保持数据库完全不变,就能比较新的逻辑代码和旧的逻辑代码运行的结果,比较的结果应该跟预期的没有差别。 + +### 写测试 + +在你做任何改变之前,尽可能多的写下端到端测试和集成测试。在你能够清晰的知道旧的是如何工作的情况下确保这些测试能够正确的输出(准备好应对一些突发状况)。这些测试有两个重要的作用,其一,他们能够在早期帮助你抛弃一些错误观念,其二,在你写新代码替换旧代码的时候也有一定防护作用。 + +自动化测试,如果你也有 CI 的使用经验请使用它,并且确保在你提交代码之后能够快速的完成所有测试。 + +### 日志监控 + +如果旧设备依然可用,那么添加上监控功能。使用一个全新的数据库,为每一个你能想到的事件都添加一个简单的计数器,并且根据这些事件的名字添加一个函数增加这些计数器。用一些额外的代码实现一个带有时间戳的事件日志,这是一个好办法知道有多少事件导致了另外一些种类的事件。例如:用户打开 APP ,用户关闭 APP 。如果这两个事件导致后端调用的数量维持长时间的不同,这个数量差就是当前打开的 APP 的数量。如果你发现打开 APP 比关闭 APP 多的时候,你就必须要知道是什么原因导致 APP 关闭了(例如崩溃)。你会发现每一个事件都跟其他的一些事件有许多不同种类的联系,通常情况下你应该尽量维持这些固定的联系,除非在系统上有一个明显的错误。你的目标是减少那些错误的事件,尽可能多的在开始的时候通过使用计数器在调用链中降低到指定的级别。(例如:用户支付应该得到相同数量的支付回调)。 + +这是简单的技巧去将每一个后端应用变成一个就像真实的簿记系统一样,所有数字必须匹配,只要他们在某个地方都不会有什么问题。 + +随着时间的推移,这个系统在监控健康方面变得非常宝贵,而且它也是使用源码控制修改系统日志的一个好伙伴,你可以使用它确认 BUG 出现的位置,以及对多种计数器造成的影响。 + +我通常保持 5 分钟(一小时 12 次)记录一次计数器,如果你的应用生成了更多或者更少的事件,你应该修改这个时间间隔。所有的计数器公用一个数据表,每一个记录都只是简单的一行。 + +### 一次只修改一处 + +不要完全陷入在提高代码或者平台可用性的同时添加新特性或者是修复 BUG 的陷阱。这会让你头大而且将会使你之前建立的测试失效,现在必须问问你自己,每一步的操作想要什么样的结果。 + +### 修改平台 + +如果你决定转移你的应用到另外一个平台,最主要的是跟之前保持一样。如果你觉得你会添加更多的文档和测试,但是不要忘记这一点,所有的业务逻辑和相互依赖跟从前一样保持不变。 + +### 修改架构 + +接下来处理的是改变应用的结构(如果需要)。这一点上,你可以自由的修改高层的代码,通常是降低模块间的横向联系,这样可以降低代码活动期间对终端用户造成的影响范围。如果老代码是庞大的,那么现在正是让他模块化的时候,将大段代码分解成众多小的,不过不要把变量的名字和他的数据结构分开。 + +Hacker News [mannykannot][1] 网友指出,修改架构并不总是可行,如果你特别不幸的话,你可能为了改变一些架构必须付出沉重的代价。我也赞同这一点,我应该加上这一点,因此这里有一些补充。我非常想补充的是如果你修改高级代码的时候修改了一点点底层代码,那么试着限制只修改一个文件或者最坏的情况是只修改一个子系统,所以尽可能限制修改的范围。否则你可能很难调试刚才所做的更改。 + +### 底层代码的重构 + +现在,你应该非常理解每一个模块的作用了,准备做一些真正的工作吧:重构代码以提高其可维护性并且使代码做好添加新功能的准备。这很可能是项目中最消耗时间的部分,记录你所做的任何操作,在你彻底的记录模块并且理解之前不要对它做任何修改。之后你可以自由的修改变量名、函数名以及数据结构以提高代码的清晰度和统一性,然后请做测试(情况允许的话,包括单元测试)。 + +### 修复 bugs + +现在准备做一些用户可见的修改,战斗的第一步是修复很多积累了一整年的bugs,像往常一样,首先证实 bug 仍然存在,然后编写测试并修复这个 bug,你的 CI 和端对端测试应该能避免一些由于不太熟悉或者一些额外的事情而犯的错误。 + +### 升级数据库 + + +如果在一个坚实且可维护的代码库上完成所有工作,如果你有更改数据库模式的计划,可以使用不同的完全替换数据库。 +把所有的这些都做完将能够帮助你更可靠的修改而不会碰到问题,你会完全的测试新数据库和新代码,所有测试可以确保你顺利的迁移。 + +### 按着路线图执行 + +祝贺你脱离的困境并且可以准备添加新功能了。 + +### 任何时候都不要尝试彻底重写 + +彻底重写是那种注定会失败的项目,一方面,你在一个未知的领域开始,所以你甚至不知道构建什么,另一方面,你会把所以的问题都推到新系统马上就要上线的前一天,非常不幸的是,这也是你失败的时候,假设业务逻辑存在问题,你会得到异样的眼光,那时您会突然明白为什么旧系统会用某种奇怪的方式来工作,最终也会意识到能将旧系统放在一起工作的人也不都是白痴。在那之后。如果你真的想破坏公司(和你自己的声誉),那就重写吧,但如果你足够聪明,彻底重写系统通常不会成为一个摆到桌上讨论的选项。 + +### 所以,替代方法是增量迭代工作 + +要解开这些线团最快方法是,使用你熟悉的代码中任何的元素(它可能是外部的,他可以是内核模块),试着使用旧的上下文去增量提升,如果旧的构建工具已经不能用了,你将必须使用一些技巧(看下面)至少当你开始做修改的时候,试着尽力保留已知的工作。那样随着代码库的提升你也对代码的作用更加理解。一个典型的代码提交应该最多两行。 + +### 发布! 
+ +每一次的修改都发布到生产环境,即使一些修改不是用户可见的。使用最少的步骤也是很重要的,因为当你缺乏对系统的了解时,只有生产环境能够告诉你问题在哪里,如果你只做了一个很小的修改之后出了问题,会有一些好处: + +* 很容易弄清楚出了什么问题 +* 这是一个改进流程的好位置 +* 你应该马上更新文档展示你的新见解 + +### 使用代理的好处 +如果你做 web 开发时在旧系统和用户之间加了代理。你能很容易的控制每一个网址哪些请求旧系统,哪些重定向到新系统,从而更轻松更精确的控制运行的内容以及谁能够看到。如果你的代理足够的聪明,你可以使用它发送一定比例的流量到个人的 URL,直到你满意为止,如果你的集成测试也连接到这个接口那就更好了。 + +### 是的,这会花费很多时间 +这取决于你怎样看待它的,这是事实会有一些重复的工作涉及到这些步骤中。但是它确实有效,对于进程的任何一个优化都将使你对这样系统更加熟悉。我会保持声誉,并且我真的不喜欢在工作期间有负面的意外。如果运气好的话,公司系统已经出现问题,而且可能会影响客户。在这样的情况下,如果你更多地是牛仔的做事方式,并且你的老板同意可以接受冒更大的风险,我比较喜欢完全控制整个流程得到好的结果而不是节省两天或者一星期,但是大多数公司宁愿采取稍微慢一点但更确定的胜利之路。 + +-------------------------------------------------------------------------------- + +via: https://jacquesmattheij.com/improving-a-legacy-codebase + +作者:[Jacques Mattheij][a] +译者:[aiwhj](https://github.com/aiwhj) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jacquesmattheij.com/ +[1]:https://news.ycombinator.com/item?id=14445661 diff --git a/translated/tech/20170910 Cool vim feature sessions.md b/translated/tech/20170910 Cool vim feature sessions.md new file mode 100644 index 0000000000..49ee43fda1 --- /dev/null +++ b/translated/tech/20170910 Cool vim feature sessions.md @@ -0,0 +1,44 @@ +vim 的酷功能:会话! +============================================================• + +昨天我在编写我的[vimrc][5]的时候了解到一个很酷的 vim 功能!(主要为了添加 fzf 和 ripgrep 插件)。这是一个内置功能,不需要特别的插件。 + +所以我画了一个漫画。 + +基本上你可以用下面的命令保存所有你打开的文件和当前的状态 + +``` +:mksession ~/.vim/sessions/foo.vim + +``` + +接着用 `:source ~/.vim/sessions/foo.vim` 或者  `vim -S ~/.vim/sessions/foo.vim` 还原会话。非常酷! + +一些 vim 插件给 vim 会话添加了额外的功能: + +* [https://github.com/tpope/vim-obsession][1] + +* [https://github.com/mhinz/vim-startify][2] + +* [https://github.com/xolox/vim-session][3] + +这是漫画: + +![](https://jvns.ca/images/vimsessions.png) + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/09/10/vim-sessions/ + +作者:[Julia Evans ][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/about +[1]:https://github.com/tpope/vim-obsession +[2]:https://github.com/mhinz/vim-startify +[3]:https://github.com/xolox/vim-session +[4]:https://jvns.ca/categories/vim +[5]:https://github.com/jvns/vimconfig/blob/master/vimrc diff --git a/translated/tech/20171020 How Eclipse is advancing IoT development.md b/translated/tech/20171020 How Eclipse is advancing IoT development.md new file mode 100644 index 0000000000..0de4f38ea1 --- /dev/null +++ b/translated/tech/20171020 How Eclipse is advancing IoT development.md @@ -0,0 +1,77 @@ +translated by smartgrids +Eclipse 如何助力 IoT 发展 +============================================================ + +### 开源组织的模块发开发方式非常适合物联网。 + +![How Eclipse is advancing IoT development](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_BUS_ArchitectureOfParticipation_520x292.png?itok=FA0Uuwzv "How Eclipse is advancing IoT development") +图片来源: opensource.com + +[Eclipse][3] 可能不是第一个去研究物联网的开源组织。但是,远在 IoT 家喻户晓之前,该基金会在 2001 年左右就开始支持开源软件发展商业化。九月 Eclipse 物联网日和 RedMonk 的 [ThingMonk 2017][4] 一块举行,着重强调了 Eclipse 在 [物联网发展][5] 中的重要作用。它现在已经包含了 28 个项目,覆盖了大部分物联网项目需求。会议过程中,我和负责 Eclipse 市场化运作的 [Ian Skerritt][6] 讨论了 Eclipse 的物联网项目以及如何拓展它。 + +###物联网的最新进展? 
+我问 Ian 物联网同传统工业自动化,也就是前几十年通过传感器和相应工具来实现工厂互联的方式有什么不同。 Ian 指出很多工厂是还没有互联的。 +另外,他说“ SCADA[监控和数据分析] 系统以及工厂底层技术都是私有、独立性的。我们很难去改变它,也很难去适配它们…… 现在,如果你想运行一套生产系统,你需要设计成百上千的单元。生产线想要的是满足用户需求,使制造过程更灵活,从而可以不断产出。” 这也就是物联网会带给制造业的一个很大的帮助。 + + +###Eclipse 物联网方面的研究 +Ian 对于 Eclipse 在物联网的研究是这样描述的:“满足任何物联网解决方案的核心基础技术” ,通过使用开源技术,“每个人都可以使用从而可以获得更好的适配性。” 他说,Eclipse 将物联网视为包括三层互联的软件栈。从更高的层面上看,这些软件栈(按照大家常见的说法)将物联网描述为跨越三个层面的网络。特定的观念可能认为含有更多的层面,但是他们一直符合这个三层模型的功能的: + +* 一种可以装载设备(例如设备、终端、微控制器、传感器)用软件的堆栈。 +* 将不同的传感器采集到的数据信息聚合起来并传输到网上的一类网关。这一层也可能会针对传感器数据检测做出实时反映。 +* 物联网平台后端的一个软件栈。这个后端云存储数据并能根据采集的数据比如历史趋势、预测分析提供服务。 + +这三个软件栈在 Eclipse 的白皮书 “ [The Three Software Stacks Required for IoT Architectures][7] ”中有更详细的描述。 + +Ian 说在这些架构中开发一种解决方案时,“需要开发一些特殊的东西,但是很多底层的技术是可以借用的,像通信协议、网关服务。需要一种模块化的方式来满足不用的需求场合。” Eclipse 关于物联网方面的研究可以概括为:开发模块化开源组件从而可以被用于开发大量的特定性商业服务和解决方案。 + +###Eclipse 的物联网项目 + +在众多一杯应用的 Eclipse 物联网应用中, Ian 举了两个和 [MQTT][8] 有关联的突出应用,一个设备与设备互联(M2M)的物联网协议。 Ian 把它描述成“一个专为重视电源管理工作的油气传输线监控系统的信息发布/订阅协议。MQTT 已经是众多物联网广泛应用标准中很成功的一个。” [Eclipse Mosquitto][9] 是 MQTT 的代理,[Eclipse Paho][10] 是他的客户端。 +[Eclipse Kura][11] 是一个物联网网关,引用 Ian 的话,“它连接了很多不同的协议间的联系”包括蓝牙、Modbus、CANbus 和 OPC 统一架构协议,以及一直在不断添加的协议。一个优势就是,他说,取代了你自己写你自己的协议, Kura 提供了这个功能并将你通过卫星、网络或其他设备连接到网络。”另外它也提供了防火墙配置、网络延时以及其它功能。Ian 也指出“如果网络不通时,它会存储信息直到网络恢复。” + +最新的一个项目中,[Eclipse Kapua][12] 正尝试通过微服务来为物联网云平台提供不同的服务。比如,它集成了通信、汇聚、管理、存储和分析功能。Ian 说“它正在不断前进,虽然还没被完全开发出来,但是 Eurotech 和 RedHat 在这个项目上非常积极。” +Ian 说 [Eclipse hawkBit][13] ,软件更新管理的软件,是一项“非常有趣的项目。从安全的角度说,如果你不能更新你的设备,你将会面临巨大的安全漏洞。”很多物联网安全事故都和无法更新的设备有关,他说,“ HawkBit 可以基本负责通过物联网系统来完成扩展性更新的后端管理。” + +物联网设备软件升级的难度一直被看作是难度最高的安全挑战之一。物联网设备不是一直连接的,而且数目众多,再加上首先设备的更新程序很难完全正常。正因为这个原因,关于无赖女王软件升级的项目一直是被当作重要内容往前推进。 + +###为什么物联网这么适合 Eclipse + +在物联网发展趋势中的一个方面就是关于构建模块来解决商业问题,而不是宽约工业和公司的大物联网平台。 Eclipse 关于物联网的研究放在一系列模块栈、提供特定和大众化需求功能的项目,还有就是指定目标所需的可捆绑式中间件、网关和协议组件上。 + + +-------------------------------------------------------------------------------- + + + +作者简介: + +Gordon Haff - Gordon Haff 是红帽公司的云营销员,经常在消费者和工业会议上讲话,并且帮助发展红帽全办公云解决方案。他是 计算机前言:云如何如何打开众多出版社未来之门 的作者。在红帽之前, Gordon 写了成百上千的研究报告,经常被引用到公众刊物上,像纽约时报关于 IT 的议题和产品建议等…… + +-------------------------------------------------------------------------------- + +转自: https://opensource.com/article/17/10/eclipse-and-iot + +作者:[Gordon Haff ][a] +译者:[smartgrids](https://github.com/smartgrids) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/ghaff +[1]:https://opensource.com/article/17/10/eclipse-and-iot?rate=u1Wr-MCMFCF4C45IMoSPUacCatoqzhdKz7NePxHOvwg +[2]:https://opensource.com/user/21220/feed +[3]:https://www.eclipse.org/home/ +[4]:http://thingmonk.com/ +[5]:https://iot.eclipse.org/ +[6]:https://twitter.com/ianskerrett +[7]:https://iot.eclipse.org/resources/white-papers/Eclipse%20IoT%20White%20Paper%20-%20The%20Three%20Software%20Stacks%20Required%20for%20IoT%20Architectures.pdf +[8]:http://mqtt.org/ +[9]:https://projects.eclipse.org/projects/technology.mosquitto +[10]:https://projects.eclipse.org/projects/technology.paho +[11]:https://www.eclipse.org/kura/ +[12]:https://www.eclipse.org/kapua/ +[13]:https://eclipse.org/hawkbit/ +[14]:https://opensource.com/users/ghaff +[15]:https://opensource.com/users/ghaff +[16]:https://opensource.com/article/17/10/eclipse-and-iot#comments diff --git a/translated/tech/20171108 Archiving repositories.md b/translated/tech/20171108 Archiving repositories.md new file mode 100644 index 0000000000..3d1a328541 --- /dev/null +++ b/translated/tech/20171108 Archiving 
repositories.md @@ -0,0 +1,37 @@ +归档仓库 +==================== + + +因为仓库不再活跃开发或者你不想接受额外的贡献并不意味着你想要删除它。现在在 Github 上归档仓库让它变成只读。 + + [![archived repository banner](https://user-images.githubusercontent.com/7321362/32558403-450458dc-c46a-11e7-96f9-af31d2206acb.png)][1] + +归档一个仓库让它对所有人只读(包括仓库拥有者)。这包括编辑仓库、问题、合并请求、标记、里程碑、维基、发布、提交、标签、分支、反馈和评论。没有人可以在一个归档的仓库上创建新的问题、合并请求或者评论,但是你仍可以 fork 仓库-允许归档的仓库在其他地方继续开发。 + +要归档一个仓库,进入仓库设置页面并点在这个仓库上点击归档。 + + [![archive repository button](https://user-images.githubusercontent.com/125011/32273119-0fc5571e-bef9-11e7-9909-d137268a1d6d.png)][2] + +在归档你的仓库前,确保你已经更改了它的设置并考虑关闭所有的开放问题和合并请求。你还应该更新你的 README 和描述来让它让访问者了解他不再能够贡献。 + +如果你改变了主意想要解除归档你的仓库,在相同的地方点击解除归档。请注意大多数归档仓库的设置是隐藏的,并且你需要解除归档来改变它们。 + + [![archived labelled repository](https://user-images.githubusercontent.com/125011/32541128-9d67a064-c466-11e7-857e-3834054ba3c9.png)][3] + +要了解更多,请查看[这份文档][4]中的归档仓库部分。归档快乐! + +-------------------------------------------------------------------------------- + +via: https://github.com/blog/2460-archiving-repositories + +作者:[MikeMcQuaid ][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://github.com/MikeMcQuaid +[1]:https://user-images.githubusercontent.com/7321362/32558403-450458dc-c46a-11e7-96f9-af31d2206acb.png +[2]:https://user-images.githubusercontent.com/125011/32273119-0fc5571e-bef9-11e7-9909-d137268a1d6d.png +[3]:https://user-images.githubusercontent.com/125011/32541128-9d67a064-c466-11e7-857e-3834054ba3c9.png +[4]:https://help.github.com/articles/about-archiving-repositories/ diff --git a/translated/tech/20171116 Introducing security alerts on GitHub.md b/translated/tech/20171116 Introducing security alerts on GitHub.md new file mode 100644 index 0000000000..b8f0afba17 --- /dev/null +++ b/translated/tech/20171116 Introducing security alerts on GitHub.md @@ -0,0 +1,48 @@ +介绍 GitHub 上的安全警报 +==================================== + + +上个月,我们用依赖关系图让你更容易跟踪你代码依赖的的项目,目前支持 Javascript 和 Ruby。如今,超过 75% 的 GitHub 项目有依赖,我们正在帮助你做更多的事情,而不只是关注那些重要的项目。在启用依赖关系图后,当我们检测到你的依赖中有漏洞或者来自 Github 社区中建议的已知修复时通知你。 + + [![Security Alerts & Suggested Fix](https://user-images.githubusercontent.com/594029/32851987-76c36e4a-c9eb-11e7-98fc-feb39fddaadb.gif)][1] + +### 如何开始使用安全警报 + +无论你的项目时私有还是公有的,安全警报都会为团队中的正确人员提供重要的漏洞信息。 + +启用你的依赖图 + +公开仓库将自动启用依赖关系图和安全警报。对于私人仓库,你需要在仓库设置中添加安全警报,或者在 “Insights” 选项卡中允许访问仓库的 “依赖关系图” 部分。 + +设置通知选项 + +启用依赖关系图后,管理员将默认收到安全警报。管理员还可以在依赖关系图设置中将团队或个人添加为安全警报的收件人。 + +警报响应 + +当我们通知你潜在的漏洞时,我们将突出显示我们建议更新的任何依赖关系。如果存在已知的安全版本,我们将使用机器学习和公开数据中选择一个,并将其包含在我们的建议中。 + +### 漏洞覆盖率 + +有 [CVE ID][2](公开披露的[国家漏洞数据库][3]中的漏洞)的漏洞将包含在安全警报中。但是,并非所有漏洞都有 CVE ID,甚至许多公开披露的漏洞也没有。随着安全数据的增长,我们将继续更好地识别漏洞。如需更多帮助来管理安全问题,请查看我们的[ GitHub Marketplace 中的安全合作伙伴][4]。 + +这是使用世界上最大的开源数据集的下一步,可以帮助你保持代码安全并做到最好。依赖关系图和安全警报目前支持 JavaScript 和 Ruby,并将在 2018 年提供 Python 支持。 + +[了解更多关于安全警报][5] + +-------------------------------------------------------------------------------- + +via: https://github.com/blog/2470-introducing-security-alerts-on-github + +作者:[mijuhan ][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://github.com/mijuhan +[1]:https://user-images.githubusercontent.com/594029/32851987-76c36e4a-c9eb-11e7-98fc-feb39fddaadb.gif +[2]:https://cve.mitre.org/ +[3]:https://nvd.nist.gov/ +[4]:https://github.com/marketplace/category/security 
+[5]:https://help.github.com/articles/about-security-alerts-for-vulnerable-dependencies/ diff --git a/translated/tech/20171117 System Logs: Understand Your Linux System.md b/translated/tech/20171117 System Logs: Understand Your Linux System.md new file mode 100644 index 0000000000..dceea12a63 --- /dev/null +++ b/translated/tech/20171117 System Logs: Understand Your Linux System.md @@ -0,0 +1,68 @@ +### 系统日志: 了解你的Linux系统 + +![chabowski](https://www.suse.com/communities/blog/files/2016/03/chabowski_avatar_1457537819-100x100.jpg) + By: [chabowski][1] + +本文摘自教授Linux小白(或者非资深桌面用户)技巧的系列文章. 该系列文章旨在为由LinuxMagazine基于 [openSUSE Leap][3] 发布的第30期特别版 “[Getting Started with Linux][2]” 提供补充说明. + +本文作者是 Romeo S. Romeo, 他是一名 PDX-based enterprise Linux 专家,转为创新企业提供富有伸缩性的解决方案. + +Linux系统日志非常重要. 后台运行的程序(通常被称为守护进程或者服务进程)处理了你Linux系统中的大部分任务. 当这些守护进程工作时,它们将任务的详细信息记录进日志文件中,作为他们做过什么的历史信息. 这些守护进程的工作内容涵盖从使用原子钟同步时钟到管理网络连接. 所有这些都被记录进日志文件,这样当有错误发生时,你可以通过查阅特定的日志文件来看出发生了什么. + +![](https://www.suse.com/communities/blog/files/2017/11/markus-spiske-153537-300x450.jpg) + +Photo by Markus Spiske on Unsplash + +有很多不同的日志. 历史上, 他们一般以纯文本的格式存储到 `/var/log` 目录中. 现在依然有很多日志这样做, 你可以很方便的使用 `less` 来查看它们. +在新装的 `openSUSE Leap 42.3` 以及大多数现代操作系统上,重要的日志由 `systemd` 初始化系统存储. `systemd`这套系统负责启动守护进程并在系统启动时让计算机做好被使用的准备。 +由 `systemd` 记录的日志以二进制格式存储, 这使地它们消耗的空间更小,更容易被浏览,也更容易被导出成其他各种格式,不过坏处就是你必须使用特定的工具才能查看. +好在, 这个工具已经预安装在你的系统上了: 它的名字叫 `journalctl`,而且默认情况下, 它会将每个守护进程的所有日志都记录到一个地方. + +只需要运行 `journalctl` 命令就能查看你的 `systemd` 日志了. 它会用 `less` 分页器显示各种日志. 为了让你有个直观的感受, 下面是`journalctl` 中摘录的一条日志记录: + +``` +Jul 06 11:53:47 aaathats3as pulseaudio[2216]: [pulseaudio] alsa-util.c: Disabling timer-based scheduling because running inside a VM. +``` + +这条独立的日志记录以此包含了记录的日期和时间, 计算机名, 记录日志的进程名, 记录日志的进程PID, 以及日志内容本身. + +若系统中某个程序运行出问题了, 则可以查看日志文件并搜索(使用 “/” 加上要搜索的关键字)程序名称. 有可能导致该程序出问题的错误会记录到系统日志中. +有时,错误信息会足够详细让你能够修复该问题. 其他时候, 你需要在Web上搜索解决方案. Google就很适合来搜索奇怪的Linux问题. +![](https://www.suse.com/communities/blog/files/2017/09/Sunglasses_Emoji-450x450.png) +不过搜索时请注意你只输入了日志的内容, 行首的那些信息(日期, 主机名, 进程ID) 是无意义的,会干扰搜索结果. + +解决方法一般在搜索结果的前几个连接中就会有了. 当然,你不能只是无脑得运行从互联网上找到的那些命令: 请一定先搞清楚你要做的事情是什么,它的效果会是什么. +据说, 从系统日志中查询日志要比直接搜索描述故障的关键字要有用的多. 因为程序出错有很多原因, 而且同样的故障表现也可能由多种问题引发的. + +比如, 系统无法发声的原因有很多, 可能是播放器没有插好, 也可能是声音系统出故障了, 还可能是缺少合适的驱动程序. +如果你只是泛泛的描述故障表现, 你会找到很多无关的解决方法,而你也会浪费大量的时间. 而指定搜索日志文件中的内容, 你只会查询出他人也有相同日志内容的结果. +你可以对比一下图1和图2. + +![](https://www.suse.com/communities/blog/files/2017/11/picture1-450x450.png) + +图 1 搜索系统的故障表现只会显示泛泛的,不精确的结果. 这种搜索通常没什么用. + +![](https://www.suse.com/communities/blog/files/2017/11/picture2-450x450.png) + +图 2 搜索特定的日志行会显示出精确的,有用的结果. 这种搜索通常很有用. + +也有一些系统不用 `journalctl` 来记录日志. 在桌面系统中最常见的这类日志包括用于 `/var/log/zypper.log` 记录openSUSE包管理器的行为; `/var/log/boot.log` 记录系统启动时的消息,这类消息往往滚动的特别块,根本看不过来; `/var/log/ntp` 用来记录 Network Time Protocol 守护进程同步时间时发生的错误. +另一个存放硬件故障信息的地方是 `Kernel Ring Buffer`(内核环状缓冲区), 你可以输入 `demesg -H` 命令来查看(这条命令也会调用 `less` 分页器来查看). +`Kernel Ring Buffer` 存储在内存中, 因此会在重启电脑后丢失. 不过它包含了Linux内核中的重要事件, 比如新增了硬件, 加载了模块, 以及奇怪的网络错误. + +希望你已经准备好深入了解你的Linux系统了! 祝你玩的开心! 
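+
+附:几条可以马上试用的 `journalctl` 过滤命令(假设你的系统使用 systemd,并且当前用户有读取系统日志的权限):
+
+```
+# 只显示本次启动以来的日志
+journalctl -b
+
+# 只显示某个守护进程(服务)的日志, 例如 NetworkManager
+journalctl -u NetworkManager.service
+
+# 只显示本次启动以来错误级别及以上的消息
+journalctl -p err -b
+```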
+ +-------------------------------------------------------------------------------- + +via: https://www.suse.com/communities/blog/system-logs-understand-linux-system/ + +作者:[chabowski] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://www.suse.com/communities/blog/author/chabowski/ +[2]:http://www.linux-magazine.com/Resources/Special-Editions/30-Getting-Started-with-Linux +[3]:https://en.opensuse.org/Portal:42.3 +[4]:http://www.linux-magazine.com/ diff --git a/translated/tech/20171124 How to Install Android File Transfer for Linux.md b/translated/tech/20171124 How to Install Android File Transfer for Linux.md deleted file mode 100644 index b93429f509..0000000000 --- a/translated/tech/20171124 How to Install Android File Transfer for Linux.md +++ /dev/null @@ -1,82 +0,0 @@ -Translating by wenwensnow - -# 如何在Linux下安装安卓文件传输助手 - -如果你尝试在Ubuntu下安装你的安卓手机,你也许可以试试Linux下的安卓文件传输助手 - -本质上来说,这个应用是谷歌mac版本的一个复制。它是用Qt编写的,用户界面非常简洁,使得你能轻松在Ubuntu和安卓手机之间传输文件。 - -现在,有可能一部分人想知道有什么是这个应用可以做,而Nautilus(Ubuntu默认的文件资源管理器)不能做的,答案是没有。 - -当我将我的 Nexus 5X(记得选择[MTP][7] 选项)连接在Ubuntu上时,在[GVfs][8](Gnome桌面下的虚拟文件系统)的帮助下,我可以打开,浏览和管理我的手机, 就像它是一个普通的U盘一样。 - - [![Nautilus MTP integration with a Nexus 5X](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg)][9] - -但是一些用户在使用默认的文件管理器时,在MTP的某些功能上会出现问题:比如文件夹没有正确加载,创建新文件夹后此文件夹不存在,或者无法在媒体播放器中使用自己的手机。 - -这就是要为Linux系统用户设计一个安卓文件传输助手应用的原因。将这个应用当做将MTP设备安装在Linux下的另一种选择。如果你使用Linux下的默认应用时一切正常,你也许并不需要尝试使用它 (除非你真的很想尝试新鲜事物)。 - - -![Android File Transfer Linux App](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/android-file-transfer-for-linux-750x662.jpg) - -app特点: - -*   简洁直观的用户界面 - -*   支持文件拖放功能(从Linux系统到手机) - -*   支持批量下载 (从手机到Linux系统) - -*   显示传输进程对话框 - -*   FUSE模块支持 - -*   没有文件大小限制 - -*   可选命令行工具 - -### Ubuntu下安装安卓手机文件助手的步骤 - -以上就是对这个应用的介绍,下面是如何安装它的具体步骤。 - -这有一个[PPA](个人软件包集)源为Ubuntu 14.04 LTS(长期支持版本),16.04LTS 和 Ubuntu17.10 提供可用应用 - -为了将这一PPA加入你的软件资源列表中,执行这条命令: - -``` -sudo add-apt-repository ppa:samoilov-lex/aftl-stable -``` - -接着,为了在Ubuntu下安装Linux版本的安卓文件传输助手,执行: - -``` -sudo apt-get update && sudo apt install android-file-transfer -``` - -这样就行了。 - -你会在你的应用列表中发现这一应用的启动图标。 - -在你启动这一应用之前,要确保没有其他应用(比如Nautilus)已经加载了你的手机.如果其他应用正在使用你的手机,就会显示“无法找到MTP设备”。为了解决这一问题,将你的手机从Nautilus(或者任何正在使用你的手机的应用)上移除,然后再重新启动安卓文件传输助手。 - --------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux - -作者:[ JOEY SNEDDON ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:https://plus.google.com/117485690627814051450/?rel=author -[2]:http://www.omgubuntu.co.uk/category/app -[3]:http://www.omgubuntu.co.uk/category/download -[4]:https://github.com/whoozle/android-file-transfer-linux -[5]:http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux -[6]:http://android.com/filetransfer?linkid=14270770 -[7]:https://en.wikipedia.org/wiki/Media_Transfer_Protocol -[8]:https://en.wikipedia.org/wiki/GVfs -[9]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg -[10]:https://launchpad.net/~samoilov-lex/+archive/ubuntu/aftl-stable diff --git a/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md 
b/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md new file mode 100644 index 0000000000..e51c580da9 --- /dev/null +++ b/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md @@ -0,0 +1,147 @@ +Photon也许能成为你最喜爱的容器操作系统 +============================================================ + +![Photon OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon-linux.jpg?itok=jUFHPR_c "Photon OS") + +Phonton OS专注于容器,是一个非常出色的平台。 —— Jack Wallen + +容器在当下的火热,并不是没有原因的。正如[之前][13]讨论的,容器可以使您轻松快捷地将新的服务与应用部署到您的网络上,而且并不耗费太多的系统资源。比起专用硬件和虚拟机,容器都是更加划算的,除此之外,他们更容易更新与重用。 + +更重要的是,容器喜欢Linux(反之亦然)。不需要太多时间和麻烦,你就可以启动一台Linux服务器,运行[Docker][14],再是部署容器。但是,哪种Linux发行版最适合部署容器呢?我们的选择很多。你可以使用标准的Ubuntu服务器平台(更容易安装Docker并部署容器)或者是更轻量级的发行版 —— 专门用于部署容器。 + +[Photon][15]就是这样的一个发行版。这个特殊的版本是由[VMware][16]于2005年创建的,它包含了Docker的守护进程,并与容器框架(如Mesos和Kubernetes)一起使用。Photon经过优化可与[VMware vSphere][17]协同工作,而且可用于裸机,[Microsoft Azure][18], [Google Compute Engine][19], [Amazon Elastic Compute Cloud][20], 或者 [VirtualBox][21]等。 + +Photon通过只安装Docker守护进程所必需的东西来保持它的轻量。而这样做的结果是,这个发行版的大小大约只有300MB。但这足以让Linux的运行一切正常。除此之外,Photon的主要特点还有: + +* 内核调整为性能模式。 + +* 内核根据[内核自防护项目][6](KSPP)进行了加固。 + +* 所有安装的软件包都根据加固的安全标识来构建。 + +* 操作系统在信任验证后启动。 + +* Photon管理进程管理防火墙,网络,软件包,和远程登录在Photon机子上的用户。 + +* 支持持久卷。 + +* [Project Lightwave][7] 整合。 + +* 及时的安全补丁与更新。 + +Photon可以通过[ISO][22],[OVA][23],[Amazon Machine Image][24],[Google Compute Engine image][25]和[Azure VHD][26]安装使用。现在我将向您展示如何使用ISO镜像在VirtualBox上安装Photon。整个安装过程大概需要五分钟,在最后您将有一台随时可以部署容器的虚拟机。 + +### 创建虚拟机 + +在部署第一台容器之前,您必须先创建一台虚拟机并安装Photon。为此,打开VirtualBox并点击“新建”按钮。跟着创建虚拟机向导进行配置(根据您的容器将需要的用途,为Photon提供必要的资源)。在创建好虚拟机后,您所需要做的第一件事就是更改配置。选择新建的虚拟机(在VirtualBox主窗口的左侧面板中),然后单击“设置”。在弹出的窗口中,点击“网络”(在左侧的导航中)。 + +在“网络”窗口(图1)中,你需要在“连接”的下拉窗口中选择桥接。这可以确保您的Photon服务与您的网络相连。完成更改后,单击确定。 + +### [photon_0.jpg][8] + +![change settings](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_0.jpg?itok=Q0yhOhsZ "change setatings") +图 1: 更改Photon在VirtualBox中的网络设置。[经许可使用][1] + +从左侧的导航选择您的Photon虚拟机,点击启动。系统会提示您去加载IOS镜像。当您完成之后,Photon安装程序将会启动并提示您按回车后开始安装。安装过程基于ncurses(没有GUI),但它非常简单。 + +接下来(图2),系统会询问您是要最小化安装,完整安装还是安装OSTree服务器。我选择了完整安装。选择您所需要的任意选项,然后按回车继续。 + +### [photon_1.jpg][9] + +![installation type](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_2.jpg?itok=QL1Rs-PH "Photon") +图 2: 选择您的安装类型.[经许可使用][2] + +在下一个窗口,选择您要安装Photon的磁盘。由于我们将其安装在虚拟机,因此只有一块磁盘会被列出(图3)。选择“自动”按下回车。然后安装程序会让您输入(并验证)管理员密码。在这之后镜像开始安装在您的磁盘上并在不到5分钟的时间内结束。 + +### [photon_2.jpg][] + +![Photon](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_1.jpg?itok=OdnMVpaA "installation type") +图 3: 选择安装Photon的硬盘.[经许可使用][3] + +安装完成后,重启虚拟机并使用安装时创建的用户root和它的密码登录。一切就绪,你准备好开始工作了。 + +在开始使用Docker之前,您需要更新一下Photon。Photon使用 _yum_ 软件包管理器,因此在以root用户登录后输入命令 _yum update_。如果有任何可用更新,则会询问您是否确认(图4)。 + +### [photon_3.jpg][11] + +![Updating](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_3.jpg?itok=vjqrspE2 "Updating") +图 4: 更新 Photon.[经许可使用][4] + +用法 + +正如我所说的,Photon提供了部署容器甚至创建Kubernetes集群所需要的所有包。但是,在使用之前还要做一些事情。首先要启动Docker守护进程。为此,执行以下命令: + +``` +systemctl start docker + +systemctl enable docker +``` + +现在我们需要创建一个标准用户,因此我们没有以root去运行docker命令。为此,执行以下命令: + +``` +useradd -m USERNAME + +passwd USERNAME +``` + +其中USERNAME是我们新增的用户的名称。 + +接下来,我们需要将这个新用户添加到 _docker_ 组,执行命令: + +``` +usermod -a -G docker USERNAME +``` + +其中USERNAME是刚刚创建的用户的名称。 + +注销root用户并切换为新增的用户。现在,您已经可以不必使用 _sudo_ 命令或者是切换到root用户来使用 _docker_命令了。从Docker Hub中取出一个镜像开始部署容器吧。 + +### 一个优秀的容器平台 + 
+在专注于容器方面,Photon毫无疑问是一个出色的平台。请注意,Photon是一个开源项目,因此没有任何付费支持。如果您对Photon有任何的问题,请移步Photon项目的Github下的[Issues][27],那里可以供您阅读相关问题,或者提交您的问题。如果您对Photon感兴趣,您也可以在项目的官方[Github][28]中找到源码。 + +尝试一下Photon吧,看看它是否能够使得Docker容器和Kubernetes集群的部署更加容易。 + +欲了解Linux的更多信息,可以通过学习Linux基金会和edX的免费课程,[“Linux 入门”][29]。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2017/11/photon-could-be-your-new-favorite-container-os + +作者:[JACK WALLEN][a] +译者:[KeyLD](https://github.com/KeyLd) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/jlwallen +[1]:https://www.linux.com/licenses/category/used-permission +[2]:https://www.linux.com/licenses/category/used-permission +[3]:https://www.linux.com/licenses/category/used-permission +[4]:https://www.linux.com/licenses/category/used-permission +[5]:https://www.linux.com/licenses/category/creative-commons-zero +[6]:https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project +[7]:http://vmware.github.io/lightwave/ +[8]:https://www.linux.com/files/images/photon0jpg +[9]:https://www.linux.com/files/images/photon1jpg +[10]:https://www.linux.com/files/images/photon2jpg +[11]:https://www.linux.com/files/images/photon3jpg +[12]:https://www.linux.com/files/images/photon-linuxjpg +[13]:https://www.linux.com/learn/intro-to-linux/2017/11/how-install-and-use-docker-linux +[14]:https://www.docker.com/ +[15]:https://vmware.github.io/photon/ +[16]:https://www.vmware.com/ +[17]:https://www.vmware.com/products/vsphere.html +[18]:https://azure.microsoft.com/ +[19]:https://cloud.google.com/compute/ +[20]:https://aws.amazon.com/ec2/ +[21]:https://www.virtualbox.org/ +[22]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS +[23]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS +[24]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS +[25]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS +[26]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS +[27]:https://github.com/vmware/photon/issues +[28]:https://github.com/vmware/photon +[29]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20171130 New Feature Find every domain someone owns automatically.md b/translated/tech/20171130 New Feature Find every domain someone owns automatically.md new file mode 100644 index 0000000000..4b72eaae5e --- /dev/null +++ b/translated/tech/20171130 New Feature Find every domain someone owns automatically.md @@ -0,0 +1,49 @@ +新功能:自动找出每个域名的拥有者 +============================================================ + + +今天,我们很高兴地宣布我们最近几周做的新功能。它是 Whois 聚合工具,现在可以在 [DNSTrails][1] 上获得。 + +在过去,查找一个域名的所有者会花费很多时间,因为大部分时间你都需要把域名指向一个 IP 地址,以便找到同一个人拥有的其他域名。 + +使用老的方法,你会很轻易地在一个工具和另外一个工具的研究和交叉比较结果中花费数个小时,直到得到你想要的域名。 + +感谢这个新工具和我们的智能[WHOIS 数据库][2],现在你可以搜索任何域名,并获得组织或个人注册的域名的完整列表,并在几秒钟内获得准确的结果。 + +### 我如何使用Whois聚合功能? 
+ +第一步:打开 [DNSTrails.com][3] + +第二步:搜索任何域名,比如:godaddy.com + +第三步:在得到域名的结果后,如下所见,定位下面的 Whois 信息: + +![Domain name search results](https://securitytrails.com/images/a/a/1/3/f/aa13fa3616b8dc313f925bdbf1da43a54856d463-image1.png) + +第四步:你会看到那里有有关域名的电话和电子邮箱地址。 + +第五步:点击右边的链接,你会轻松地找到用相同电话和邮箱注册的域名。 + +![All domain names by the same owner](https://securitytrails.com/images/1/3/4/0/3/134037822d23db4907d421046b11f3cbb872f94f-image2.png) + +如果你正在调查互联网上任何个人的域名所有权,这意味着即使域名甚至没有指向注册服务商的 IP,如果他们使用相同的电话和邮件地址,我们仍然可以发现其他域名。 + +想知道一个人拥有的其他域名么?亲自试试 [DNStrails][5] 的[ WHOIS 聚合功能][4]或者[使用我们的 API 访问][6]。 + +-------------------------------------------------------------------------------- + +via: https://securitytrails.com/blog/find-every-domain-someone-owns + +作者:[SECURITYTRAILS TEAM ][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://securitytrails.com/blog/find-every-domain-someone-owns +[1]:https://dnstrails.com/ +[2]:https://securitytrails.com/forensics +[3]:https://dnstrails.com/ +[4]:http://dnstrails.com/#/domain/domain/ueland.com +[5]:https://dnstrails.com/ +[6]:https://securitytrails.com/contact diff --git a/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md b/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md new file mode 100644 index 0000000000..9f905bd496 --- /dev/null +++ b/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md @@ -0,0 +1,400 @@ +Translate Shell: 一款在 Linux 命令行中使用 Google Translate的工具 +============================================================ + +我对 CLI 应用非常感兴趣,因此热衷于使用并分享 CLI 应用。 我之所以更喜欢 CLI 很大原因是因为我在大多数的时候都使用的是字符界面(black screen),已经习惯了使用 CLI 应用而不是 GUI 应用. + +我写过很多关于 CLI 应用的文章。 最近我发现了一些 google 的 CLI 工具,像 “Google Translator”, “Google Calendar”, 和 “Google Contacts”。 这里,我想在给大家分享一下。 + +今天我们要介绍的是 “Google Translator” 工具。 由于母语是泰米尔语,我在一天内用了很多次才理解了它的意义。 + +`Google translate` 为其他语系的人们所广泛使用。 + +### 什么是 Translate Shell + +[Translate Shell][2] (之前叫做 Google Translate CLI) 是一款借助 `Google Translate`(默认), `Bing Translator`, `Yandex.Translate` 以及 `Apertium` 来翻译的命令行翻译器。 +它让你可以在终端访问这些翻译引擎. 
`Translate Shell` 在大多数Linux发行版中都能使用。 + +### 如何安装 Translate Shell + +有三种方法安装 `Translate Shell`。 + +* 下载自包含的可执行文件 + +* 手工安装 + +* 通过包挂力气安装 + +#### 方法-1 : 下载自包含的可执行文件 + +下载自包含的可执行文件放到 `/usr/bin` 目录中。 + +```shell +$ wget git.io/trans +$ chmod +x ./trans +$ sudo mv trans /usr/bin/ +``` + +#### 方法-2 : 手工安装 + +克隆 `Translate Shell` github 仓库然后手工编译。 + +```shell +$ git clone https://github.com/soimort/translate-shell && cd translate-shell +$ make +$ sudo make install +``` + +#### 方法-3 : Via Package Manager + +有些发行版的官方仓库中包含了 `Translate Shell`,可以通过包管理器来安装。 + +对于 Debian/Ubuntu, 使用 [APT-GET Command][3] 或者 [APT Command][4]来安装。 + +```shell +$ sudo apt-get install translate-shell +``` + +对于 Fedora, 使用 [DNF Command][5] 来安装。 + +```shell +$ sudo dnf install translate-shell +``` + +对于基于 Arch Linux 的系统, 使用 [Yaourt Command][6] 或 [Packer Command][7] 来从 AUR 仓库中安装。 + +```shell +$ yaourt -S translate-shell +or +$ packer -S translate-shell +``` + +### 如何使用 Translate Shell + +安装好后,打开终端闭关输入下面命令。 `Google Translate` 会自动探测源文本是哪种语言,并且在默认情况下将之翻译成你的 `locale` 所对应的语言。 + +``` +$ trans [Words] +``` + +下面我将泰米尔语中的单词 “நன்றி” (Nanri) 翻译成英语。 这个单词的意思是感谢别人。 + +``` +$ trans நன்றி +நன்றி +(Naṉṟi) + +Thanks + +Definitions of நன்றி +[ தமிழ் -> English ] + +noun + gratitude + நன்றி + thanks + நன்றி + +நன்றி + Thanks +``` + +使用下面命令也能将英语翻译成泰米尔语。 + +``` +$ trans :ta thanks +thanks +/THaNGks/ + +நன்றி +(Naṉṟi) + +Definitions of thanks +[ English -> தமிழ் ] + +noun + நன்றி + gratitude, thanks + +thanks + நன்றி +``` + +要将一个单词翻译到多个语种可以使用下面命令(本例中, 我将单词翻译成泰米尔语以及印地语)。 + +``` +$ trans :ta+hi thanks +thanks +/THaNGks/ + +நன்றி +(Naṉṟi) + +Definitions of thanks +[ English -> தமிழ் ] + +noun + நன்றி + gratitude, thanks + +thanks + நன்றி + +thanks +/THaNGks/ + +धन्यवाद +(dhanyavaad) + +Definitions of thanks +[ English -> हिन्दी ] + +noun + धन्यवाद + thanks, thank, gratitude, thankfulness, felicitation + +thanks + धन्यवाद, शुक्रिया +``` + +使用下面命令可以将多个单词当成一个参数(句子)来进行翻译。(只需要把句子应用起来作为一个参数就行了)。 + +``` +$ trans :ta "what is going on your life?" +what is going on your life? + +உங்கள் வாழ்க்கையில் என்ன நடக்கிறது? +(Uṅkaḷ vāḻkkaiyil eṉṉa naṭakkiṟatu?) + +Translations of what is going on your life? +[ English -> தமிழ் ] + +what is going on your life? + உங்கள் வாழ்க்கையில் என்ன நடக்கிறது? +``` + +下面命令独立地翻译各个单词。 + +``` +$ trans :ta curios happy +curios + +ஆர்வம் +(Ārvam) + +Translations of curios +[ Română -> தமிழ் ] + +curios + ஆர்வம், அறிவாளிகள், ஆர்வமுள்ள, அறிய, ஆர்வமாக +happy +/ˈhapē/ + +சந்தோஷமாக +(Cantōṣamāka) + +Definitions of happy +[ English -> தமிழ் ] + + மகிழ்ச்சியான + happy, convivial, debonair, gay + திருப்தி உடைய + happy + +adjective + இன்பமான + happy + +happy + சந்தோஷமாக, மகிழ்ச்சி, இனிய, சந்தோஷமா +``` + +简洁模式: 默认情况下,`Translate Shell` 尽可能多的显示翻译信息. 如果你希望只显示简要信息,只需要加上`-b`选项。 + +``` +$ trans -b :ta thanks +நன்றி +``` + +字典模式: 加上 `-d` 可以把 `Translate Shell` 当成字典来用. + +``` +$ trans -d :en thanks +thanks +/THaNGks/ + +Synonyms + noun + - gratitude, appreciation, acknowledgment, recognition, credit + + exclamation + - thank you, many thanks, thanks very much, thanks a lot, thank you kindly, much obliged, much appreciated, bless you, thanks a million + +Examples + - In short, thanks for everything that makes this city great this Thanksgiving. + + - many thanks + + - There were no thanks in the letter from him, just complaints and accusations. + + - It is a joyful celebration in which Bolivians give thanks for their freedom as a nation. 
+ + - festivals were held to give thanks for the harvest + + - The collection, as usual, received a great response and thanks is extended to all who subscribed. + + - It would be easy to dwell on the animals that Tasmania has lost, but I prefer to give thanks for what remains. + + - thanks for being so helpful + + - It came back on about half an hour earlier than predicted, so I suppose I can give thanks for that. + + - Many thanks for the reply but as much as I tried to follow your advice, it's been a bad week. + + - To them and to those who have supported the office I extend my grateful thanks . + + - We can give thanks and words of appreciation to others for their kind deeds done to us. + + - Adam, thanks for taking time out of your very busy schedule to be with us tonight. + + - a letter of thanks + + - Thank you very much for wanting to go on reading, and thanks for your understanding. + + - Gerry has received a letter of thanks from the charity for his part in helping to raise this much needed cash. + + - So thanks for your reply to that guy who seemed to have a chip on his shoulder about it. + + - Suzanne, thanks for being so supportive with your comments on my blog. + + - She has never once acknowledged my thanks , or existence for that matter. + + - My grateful thanks go to the funders who made it possible for me to travel. + + - festivals were held to give thanks for the harvest + + - All you secretaries who made it this far into the article… thanks for your patience. + + - So, even though I don't think the photos are that good, thanks for the compliments! + + - And thanks for warning us that your secret service requires a motorcade of more than 35 cars. + + - Many thanks for your advice, which as you can see, I have passed on to our readers. + + - Tom Ryan was given a bottle of wine as a thanks for his active involvement in the twinning project. + + - Mr Hill insists he has received no recent complaints and has even been sent a letter of thanks from the forum. + + - Hundreds turned out to pay tribute to a beloved former headteacher at a memorial service to give thanks for her life. + + - Again, thanks for a well written and much deserved tribute to our good friend George. + + - I appreciate your doing so, and thanks also for the compliments about the photos! + +See also + Thanks!, thank, many thanks, thanks to, thanks to you, special thanks, give thanks, thousand thanks, Many thanks!, render thanks, heartfelt thanks, thanks to this +``` + +使用下面格式可以使用 `Translate Shell` 来翻译文件。 + +```shell +$ trans :ta file:///home/magi/gtrans.txt +உங்கள் வாழ்க்கையில் என்ன நடக்கிறது? +``` + +下面命令可以让 `Translate Shell` 进入交互模式. 
在进入交互模式之前你需要明确指定源语言和目标语言。本例中,我将英文单词翻译成泰米尔语。 + +``` +$ trans -shell en:ta thanks +Translate Shell +(:q to quit) +thanks +/THaNGks/ + +நன்றி +(Naṉṟi) + +Definitions of thanks +[ English -> தமிழ் ] + +noun + நன்றி + gratitude, thanks + +thanks + நன்றி +``` + +想知道语言代码,可以执行下面语言。 + +```shell +$ trans -R +``` +或者 +```shell +$ trans -T +┌───────────────────────┬───────────────────────┬───────────────────────┐ +│ Afrikaans - af │ Hindi - hi │ Punjabi - pa │ +│ Albanian - sq │ Hmong - hmn │ Querétaro Otomi- otq │ +│ Amharic - am │ Hmong Daw - mww │ Romanian - ro │ +│ Arabic - ar │ Hungarian - hu │ Russian - ru │ +│ Armenian - hy │ Icelandic - is │ Samoan - sm │ +│ Azerbaijani - az │ Igbo - ig │ Scots Gaelic - gd │ +│ Basque - eu │ Indonesian - id │ Serbian (Cyr...-sr-Cyrl +│ Belarusian - be │ Irish - ga │ Serbian (Latin)-sr-Latn +│ Bengali - bn │ Italian - it │ Sesotho - st │ +│ Bosnian - bs │ Japanese - ja │ Shona - sn │ +│ Bulgarian - bg │ Javanese - jv │ Sindhi - sd │ +│ Cantonese - yue │ Kannada - kn │ Sinhala - si │ +│ Catalan - ca │ Kazakh - kk │ Slovak - sk │ +│ Cebuano - ceb │ Khmer - km │ Slovenian - sl │ +│ Chichewa - ny │ Klingon - tlh │ Somali - so │ +│ Chinese Simp...- zh-CN│ Klingon (pIqaD)tlh-Qaak Spanish - es │ +│ Chinese Trad...- zh-TW│ Korean - ko │ Sundanese - su │ +│ Corsican - co │ Kurdish - ku │ Swahili - sw │ +│ Croatian - hr │ Kyrgyz - ky │ Swedish - sv │ +│ Czech - cs │ Lao - lo │ Tahitian - ty │ +│ Danish - da │ Latin - la │ Tajik - tg │ +│ Dutch - nl │ Latvian - lv │ Tamil - ta │ +│ English - en │ Lithuanian - lt │ Tatar - tt │ +│ Esperanto - eo │ Luxembourgish - lb │ Telugu - te │ +│ Estonian - et │ Macedonian - mk │ Thai - th │ +│ Fijian - fj │ Malagasy - mg │ Tongan - to │ +│ Filipino - tl │ Malay - ms │ Turkish - tr │ +│ Finnish - fi │ Malayalam - ml │ Udmurt - udm │ +│ French - fr │ Maltese - mt │ Ukrainian - uk │ +│ Frisian - fy │ Maori - mi │ Urdu - ur │ +│ Galician - gl │ Marathi - mr │ Uzbek - uz │ +│ Georgian - ka │ Mongolian - mn │ Vietnamese - vi │ +│ German - de │ Myanmar - my │ Welsh - cy │ +│ Greek - el │ Nepali - ne │ Xhosa - xh │ +│ Gujarati - gu │ Norwegian - no │ Yiddish - yi │ +│ Haitian Creole - ht │ Pashto - ps │ Yoruba - yo │ +│ Hausa - ha │ Persian - fa │ Yucatec Maya - yua │ +│ Hawaiian - haw │ Polish - pl │ Zulu - zu │ +│ Hebrew - he │ Portuguese - pt │ │ +└───────────────────────┴───────────────────────┴───────────────────────┘ +``` + +想了解更多选项的内容,可以查看 `man` 页. 
+
+```shell
+$ man trans
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/translate-shell-a-tool-to-use-google-translate-from-command-line-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.2daygeek.com/author/magesh/
+[2]:https://github.com/soimort/translate-shell
+[3]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[4]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[5]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[6]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/
+[7]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/
diff --git a/translated/tech/20171201 Linux Journal Ceases Publication.md b/translated/tech/20171201 Linux Journal Ceases Publication.md
new file mode 100644
index 0000000000..2eb5c82f51
--- /dev/null
+++ b/translated/tech/20171201 Linux Journal Ceases Publication.md
@@ -0,0 +1,34 @@
+Linux Journal 停止发行
+============================================================
+
+EOF
+
+伙计们,看起来我们要到终点了。如果不出意外的话,十一月份的 Linux Journal 将是我们的最后一期。
+
+简单的事实是,我们的钱用完了,办法也随之用尽。我们从来没有一个富有的母公司,也没有深厚的自有资金,这使得我们从始至终都是一个反常的出版商。虽然我们勉力运营了很长一段时间,但当天平最终不可挽回地向相反方向倾斜时,我们在十一月份失去了最后一点支持。
+
+虽然我们对出版业未来的期望与它的过去相同——那是一个广告商因为重视品牌和读者而赞助出版物的时代——但如今的广告界宁愿追逐眼球,最好是在读者的浏览器中植入跟踪标记,随时随地展示那些广告。但是,未来并非如此,过去的也已经过去了。
+
+我们猜想,还有一线希望,那就是救世主也许会出现。但除了接手我们的品牌、我们的档案、我们的域名、我们的用户和读者之外,那还必须是愿意承担我们一部分债务的人。如果你认识任何能够提供认真报价的人,请告诉我们。不然,请关注 LinuxJournal.com,并希望至少我们的遗留归档(可以追溯到 Linux Journal 诞生的 1994 年 4 月,即 Linux 发布 1.0 版本的时候)不会消失。这里有很多很棒的东西,还有很多我们不愿让世界失去的历史。
+
+我们最大的遗憾是,我们甚至没有足够的钱回馈最看重我们的人:我们的订阅者。为此,我们致以最深刻、最真诚的歉意。我们能为订阅者做的是:
+
+Linux Pro Magazine 将为我们的订阅者提供六期免费杂志,这是一份我们 Linux Journal 的同仁一直赞赏的出版物。在我们需要的时候,他们第一时间伸出援手,我们感谢他们的恩惠。我们今天刚刚完成了我们的 2017 年归档,其中包括我们曾经发行过的每一期,从第一期到最后一期。通常我们以 25 美元的价格出售,但订阅者将免费获得。订阅者请留意载有这两项详细信息的电子邮件。
+
+我们也希望下面这点能带来一些安慰:我们非常非常努力地想让 Linux Journal 延续下去,而且已经以尽可能精简的方式运营了很长一段时间。我们是一个大多数人是志愿者的组织,有些员工已经几个月没有领到工资,我们还欠着自由撰稿人的稿酬。一个出版商能把自己压榨到什么程度是有极限的,而我们现在已经到了极限。
+
+伙计们,这是一段伟大的历程。向每一个为我们的诞生、我们的成功和我们多年的坚持做出贡献的人致敬。我们本想列一份名单,但是名单太长了,而且漏掉有价值的人的风险很高。你们知道你们是谁。我们再次感谢。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxjournal.com/content/linux-journal-ceases-publication
+
+作者:[ Carlie Fairchild][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxjournal.com/users/carlie-fairchild
+[1]:https://www.linuxjournal.com/taxonomy/term/29
+[2]:https://www.linuxjournal.com/users/carlie-fairchild
diff --git a/translated/tech/Linux Networking Hardware for Beginners: Think Software b/translated/tech/Linux Networking Hardware for Beginners: Think Software
new file mode 100644
index 0000000000..a236a80e97
--- /dev/null
+++ b/translated/tech/Linux Networking Hardware for Beginners: Think Software
@@ -0,0 +1,89 @@
+Translating by FelixYFZ
+
+面向初学者的 Linux 网络硬件:软件思维
+============================================================
+
+![island network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/soderskar-island.jpg?itok=wiMaF66b "island network")
+没有路由和桥接,我们将会成为孤独的小岛,你将会在这个网络教程中学到更多知识。
+[Creative Commons Zero][3] Pixabay
+
+上周,我们学习了本地网络硬件知识;本周,我们将学习网络互联技术,以及移动宽带上的一些很酷的黑客技巧。
+
+### 路由器(Router)
+
+网络路由器就是计算机网络中的一切,因为路由器连接着网络。没有路由器,我们就会成为一座座孤岛。图一展示了一个简单的有线本地网络和一个无线接入点,所有设备都接入到互联网上。本地局域网的计算机连接到一个以太网交换机上,交换机则连接到一个集防火墙与路由器于一体的设备上,后者再通过网络服务供应商提供的任意接口——电缆盒、DSL 调制解调器、卫星上行系统等——连向广阔而凶险的互联网。就像计算机世界中的所有东西一样,它多半是一个带着不停闪烁的小灯的盒子。当你的网络数据包离开局域网,进入广阔的互联网时,它们会穿过一台又一台路由器,直到到达自己的目的地。
+
+### [fig-1.png][4]
+
+![simple LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_7.png?itok=lsazmf3- "simple LAN")
+
+图一:一个简单的有线局域网和一个无线接入点。
+
+路由器可以是各种样子:可以是只做路由这一件事的精巧专用小盒子,也可以是集路由、防火墙、域名服务和 VPN 网关功能于一身的大盒子;可以是一台改作他用的台式机或笔记本电脑,也可以是树莓派或 Arduino,还可以是像 PC Engines 这样结实的小型单板机。除了要求苛刻的用途以外,普通的商品硬件都能良好地运行。高端路由器使用特殊设计的硬件,以便每秒转发尽可能多的数据包:它们有多路高速数据总线、多个中央处理器和极快的存储。可以查阅 Juniper 和思科的路由器,感受一下高端路由器是什么样子的,以及它们的内部构造。
+
+接入你的局域网的无线接入点,要么以以太网网桥的方式工作,要么以路由器的方式工作。桥接器扩展了这个网络,所以桥接器两端的主机连接在同一个网络中;而路由器连接的是两个不同的网络。
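顺便一提,在 Linux 主机上可以用下面的命令查看数据包离开本机后首先交给哪台路由器(即默认网关)。输出仅为示意,具体的接口名和地址取决于你自己的网络:

```
$ ip route show
default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.100
```

其中 `default via` 一行列出的就是你的路由器地址。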
+### 网络拓扑(Network Topology)
+
+设置局域网的方式有很多种。你可以把所有主机接入到一个单一的平面网络;如果你的交换机支持,也可以把它们划分到不同的子网中。
+
+平面网络是最简单的网络,只需把每一台设备接入到同一个交换机上即可。如果一台交换机的端口不够用,可以将多台交换机连接在一起。有些交换机有专门的上行端口,有些则不在乎你用哪个端口来连接,但你可能需要使用交叉式以太网线,所以要查阅你的交换机的说明文档。平面网络是最容易管理的,你不需要路由器,也不需要计算子网,但它也有一些缺点:伸缩性不好,当网络规模变得越来越大时,就会被广播流量所拖垮。
+
+将局域网分段会带来一点安全上的提升,并且把大型网络划分成便于管理的小块。图二展示了一个划分成两个子网的简化局域网:一个子网用于内部的有线和无线主机,另一个用于承载公开服务的服务器。包含对外服务器的子网称为非军事区(DMZ)——有没有注意到,这类主要是在计算机前打字的工作,却用了不少硬汉式的军事术语?——因为它被禁止访问所有内部网络。
+
+### [fig-2.png][5]
+
+![LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_4.png?itok=LpXq7bLf "LAN")
+
+图二:一个划分成两个子网的简化局域网。
+
+即使像图二那样的小型网络,也有多种配置方法。你可以将防火墙和路由器合并在一台单独的设备上,也可以为你的非军事区设置一条专用的网络连接,把它与内部网络完全隔离。这就引出了我们的下一个主题:一切皆软件。
+
+### 软件思维(Think Software)
+
+你可能已经注意到,在这个简短的系列中我们所讨论的硬件里,只有网络接口、交换机和线缆是专用硬件,其它都是通用的商品硬件,是软件决定了它们的用途。Linux 是一个真正的网络操作系统,它支持大量的网络功能:VLAN、防火墙、路由器、互联网网关、VPN 网关、以太网桥、Web/邮件/文件等各类服务器、负载均衡、代理、服务质量、多种认证、链路聚合、故障转移……你可以在运行 Linux 的普通硬件上跑起你的整个网络。你甚至可以用 LISA(Linux Switching Appliance)和 vde2 来用 Linux 模拟一台以太网交换机。
+
+还有一些面向小型硬件的专门发行版,如 DD-WRT、OpenWRT 和各种树莓派发行版;也不要忘了 BSD 家族及其专门的衍生版,比如 pfSense 防火墙/路由器和 FreeNAS 网络附加存储系统。
+
+你知道有些人坚持认为硬件防火墙和软件防火墙有区别吗?其实是没有区别的,就像说有一台硬件计算机和一台软件计算机一样。
+
+### 端口聚合和以太网绑定(Port Trunking and Ethernet Bonding)
+
+聚合和绑定,也称链路聚合,是把两条以太网通道合并成一条通道。一些交换机支持端口聚合,就是把两个交换机端口绑定在一起,成为一条带宽为原来两者之和的新链路。对于一台承载很多业务的服务器来说,这是一种增加带宽的好方式。
+
+你也可以对以太网接口做同样的事情,而且绑定驱动已内置在 Linux 内核中,所以不需要任何专门的硬件。
+
+### 让移动宽带随心所欲为你所用(Bending Mobile Broadband to Your Will)
+
+我期望移动宽带能够迅速增长,取代 DSL 和有线网络。我住在一座 25 万人口的城市附近,但是在城市以外,想要接入互联网就只能靠运气了,即使那里有很大的用户需求。我住的小角落离城镇只有 20 分钟的路程,但在网络服务供应商眼里,它就像月球一样遥远。我唯一的选择就是移动宽带:这里没有拨号网络,卫星互联网已经售罄(而且它很糟糕),至于 DSL、电缆或光纤——哈哈,别想了。但这并不妨碍供应商们把 Xfinity 以及其它我们这里永远见不到的高速服务的传单塞进我的邮箱。
+
+我试用了 AT&T、Verizon 和 T-Mobile。Verizon 的信号覆盖范围最广,但是 Verizon 和 AT&T 都很昂贵。我住的地方位于 T-Mobile 信号覆盖的边缘,但迄今为止他们给出的优惠最大。为了能有效使用,我必须购买一个 weBoost 信号放大器和一台中兴的移动热点设备。当然,你也可以把手机当作热点来用,但是专用的热点设备有着更强的无线电。如果你正在考虑购买信号放大器,weBoost 是最好的选择,因为他们的客户支持非常棒,而且会尽最大努力帮助你。借助一款能精确测量信号强度的优秀小应用 [SignalCheck Pro][8] 来进行设置吧。它有一个功能较少的免费版本,但花上两美元买专业版,你绝不会后悔。
+
+那台小巧的中兴热点设备最多能支持 15 台主机,而且还拥有基本的防火墙功能。但你还可以做得更好:弄一台像 Linksys WRT54GL 这样的设备,用 Tomato、OpenWRT 或者 DD-WRT 替换掉原厂固件,这样你就能完全控制你的防火墙规则、路由配置,以及任何其它你想要设置的服务。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-think-software
+
+作者:[CARLA SCHRODER][a]
+译者:[FelixYFZ](https://github.com/FelixYFZ)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/cschroder
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/used-permission
+[3]:https://www.linux.com/licenses/category/creative-commons-zero +[4]:https://www.linux.com/files/images/fig-1png-7 +[5]:https://www.linux.com/files/images/fig-2png-4 +[6]:https://www.linux.com/files/images/soderskar-islandjpg +[7]:https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-lan-hardware +[8]:http://www.bluelinepc.com/signalcheck/ From 1a976208331c92480f576c4f7df65fa67135973a Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 4 Dec 2017 17:28:56 +0800 Subject: [PATCH 007/236] =?UTF-8?q?=E8=A1=A5=E5=AE=8C=20PR?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @FelixYFZ 你的 PR 有问题,需要删除原文,并且不能修改文件名,要保留文件名前的日期和扩展名。我帮你修复了。 --- ...g Hardware for Beginners Think Software.md | 79 ------------------- ... Hardware for Beginners Think Software.md} | 0 2 files changed, 79 deletions(-) delete mode 100644 sources/tech/20171012 Linux Networking Hardware for Beginners Think Software.md rename translated/tech/{Linux Networking Hardware for Beginners: Think Software => 20171012 Linux Networking Hardware for Beginners Think Software.md} (100%) diff --git a/sources/tech/20171012 Linux Networking Hardware for Beginners Think Software.md b/sources/tech/20171012 Linux Networking Hardware for Beginners Think Software.md deleted file mode 100644 index 661f5bc2df..0000000000 --- a/sources/tech/20171012 Linux Networking Hardware for Beginners Think Software.md +++ /dev/null @@ -1,79 +0,0 @@ -Translating by FelixYFZ - -Linux Networking Hardware for Beginners: Think Software -============================================================ - -![island network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/soderskar-island.jpg?itok=wiMaF66b "island network") -Without routers and bridges, we would be lonely little islands; learn more in this networking tutorial.[Creative Commons Zero][3]Pixabay - -Last week, we learned about [LAN (local area network) hardware][7]. This week, we'll learn about connecting networks to each other, and some cool hacks for mobile broadband. - -### Routers - -Network routers are everything in computer networking, because routers connect networks. Without routers we would be lonely little islands. Figure 1 shows a simple wired LAN (local area network) with a wireless access point, all connected to the Internet. Computers on the LAN connect to an Ethernet switch, which connects to a combination firewall/router, which connects to the big bad Internet through whatever interface your Internet service provider (ISP) provides, such as cable box, DSL modem, satellite uplink...like everything in computing, it's likely to be a box with blinky lights. When your packets leave your LAN and venture forth into the great wide Internet, they travel from router to router until they reach their destination. - -### [fig-1.png][4] - -![simple LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_7.png?itok=lsazmf3- "simple LAN") -Figure 1: A simple wired LAN with a wireless access point.[Used with permission][1] - -A router can look like pretty much anything: a nice little specialized box that does only routing and nothing else, a bigger box that provides routing, firewall, name services, and VPN gateway, a re-purposed PC or laptop, a Raspberry Pi or Arduino, stout little single-board computers like PC Engines...for all but the most demanding uses, ordinary commodity hardware works fine. 
The highest-end routers use specialized hardware that is designed to move the maximum number of packets per second. They have multiple fat data buses, multiple CPUs, and super-fast memory. (Look up Juniper and Cisco routers to see what high-end routers look like, and what's inside.) - -A wireless access point connects to your LAN either as an Ethernet bridge or a router. A bridge extends the network, so hosts on both sides of the bridge are on the same network. A router connects two different networks. - -### Network Topology - -There are multitudes of ways to set up your LAN. You can put all hosts on a single flat network. You can divide it up into different subnets. You can divide it into virtual LANs, if your switch supports this. - -A flat network is the simplest; just plug everyone into the same switch. If one switch isn't enough you can connect switches to each other. Some switches have special uplink ports, some don't care which ports you connect, and you may need to use a crossover Ethernet cable, so check your switch documentation. - -Flat networks are the easiest to administer. You don't need routers and don't have to calculate subnets, but there are some downsides. They don't scale, so when they get too large they get bogged down by broadcast traffic. Segmenting your LAN provides a bit of security, and makes it easier to manage larger networks by dividing it into manageable chunks. Figure 2 shows a simplified LAN divided into two subnets: internal wired and wireless hosts, and one for servers that host public services. The subnet that contains the public-facing servers is called a DMZ, demilitarized zone (ever notice all the macho terminology for jobs that are mostly typing on a computer?) because it is blocked from all internal access. - -### [fig-2.png][5] - -![LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_4.png?itok=LpXq7bLf "LAN") -Figure 2: A simplified LAN divided into two subnets.[Used with permission][2] - -Even in a network as small as Figure 2 there are several ways to set it up. You can put your firewall and router on a single device. You could have a dedicated Internet link for the DMZ, divorcing it completely from your internal network. Which brings us to our next topic: it's all software. - -### Think Software - -You may have noticed that of the hardware we have discussed in this little series, only network interfaces, switches, and cabling are special-purpose hardware. Everything else is general-purpose commodity hardware, and it's the software that defines its purpose. Linux is a true networking operating system, and it supports a multitude of network operations: VLANs, firewall, router, Internet gateway, VPN gateway, Ethernet bridge, Web/mail/file/etc. servers, load-balancer, proxy, quality of service, multiple authenticators, trunking, failover...you can run your entire network on commodity hardware with Linux. You can even use Linux to simulate an Ethernet switch with LISA (LInux Switching Appliance) and vde2. - -There are specialized distributions for small hardware like DD-WRT, OpenWRT, and the Raspberry Pi distros, and don't forget the BSDs and their specialized offshoots like the pfSense firewall/router, and the FreeNAS network-attached storage server. - -You know how some people insist there is a difference between a hardware firewall and a software firewall? There isn't. That's like saying there is a hardware computer and a software computer. 
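To make the "it's all software" point concrete, here is a minimal sketch of what turning a stock Linux box into a basic Internet gateway can look like. The interface names and the iptables approach are assumptions for illustration only; adapt them to your own hardware and distribution:

```
# Assume eth0 faces the Internet and eth1 faces the LAN.
# Enable IPv4 packet forwarding.
sudo sysctl -w net.ipv4.ip_forward=1

# Masquerade (NAT) LAN traffic leaving through the WAN interface.
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o eth1 \
  -m state --state ESTABLISHED,RELATED -j ACCEPT
```

With those few lines, the same commodity box that was "just a PC" a minute ago is now routing and filtering packets for a whole LAN.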
- -### Port Trunking and Ethernet Bonding - -Trunking and bonding, also called link aggregation, is combining two Ethernet channels into one. Some Ethernet switches support port trunking, which is combining two switch ports to combine their bandwidth into a single link. This is a nice way to make a bigger pipe to a busy server. - -You can do the same thing with Ethernet interfaces, and the bonding driver is built-in to the Linux kernel, so you don't need any special hardware. - -### Bending Mobile Broadband to your Will - -I expect that mobile broadband is going to grow in the place of DSL and cable Internet. I live near a city of 250,000 population, but outside the city limits good luck getting Internet, even though there is a large population to serve. My little corner of the world is 20 minutes from town, but it might as well be the moon as far as Internet service providers are concerned. My only option is mobile broadband; there is no dialup, satellite Internet is sold out (and it sucks), and haha lol DSL, cable, or fiber. That doesn't stop ISPs from stuffing my mailbox with flyers for Xfinity and other high-speed services my area will never see. - -I tried AT&T, Verizon, and T-Mobile. Verizon has the strongest coverage, but Verizon and AT&T are expensive. I'm at the edge of T-Mobile coverage, but they give the best deal by far. To make it work, I had to buy a weBoost signal booster and ZTE mobile hotspot. Yes, you can use a smartphone as a hotspot, but the little dedicated hotspots have stronger radios. If you're thinking you might want a signal booster, I have nothing but praise for weBoost because their customer support is superb, and they will do their best to help you. Set it up with the help of a great little app that accurately measures signal strength, [SignalCheck Pro][8]. They have a free version with fewer features; spend the two bucks to get the pro version, you won't be sorry. - -The little ZTE hotspots serve up to 15 hosts and have rudimentary firewalls. But we can do better: get something like the Linksys WRT54GL, replace the stock firmware with Tomato, OpenWRT, or DD-WRT, and then you have complete control of your firewall rules, routing, and any other services you want to set up. 
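One footnote to the bonding discussion above: because the bonding driver ships with the kernel, a quick experiment needs nothing but iproute2. This sketch assumes two spare interfaces named eth0 and eth1 and a switch configured for 802.3ad; the names, mode, and addressing are illustrative only:

```
sudo ip link add bond0 type bond mode 802.3ad
sudo ip link set eth0 down && sudo ip link set eth0 master bond0
sudo ip link set eth1 down && sudo ip link set eth1 master bond0
sudo ip link set bond0 up
sudo ip addr add 192.168.1.2/24 dev bond0
cat /proc/net/bonding/bond0   # check the aggregation status
```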
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-think-software - -作者:[CARLA SCHRODER][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/cschroder -[1]:https://www.linux.com/licenses/category/used-permission -[2]:https://www.linux.com/licenses/category/used-permission -[3]:https://www.linux.com/licenses/category/creative-commons-zero -[4]:https://www.linux.com/files/images/fig-1png-7 -[5]:https://www.linux.com/files/images/fig-2png-4 -[6]:https://www.linux.com/files/images/soderskar-islandjpg -[7]:https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-lan-hardware -[8]:http://www.bluelinepc.com/signalcheck/ diff --git a/translated/tech/Linux Networking Hardware for Beginners: Think Software b/translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md similarity index 100% rename from translated/tech/Linux Networking Hardware for Beginners: Think Software rename to translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md From 79f337b4547d29d3a20f49e373be479fe3bd3625 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 4 Dec 2017 17:29:41 +0800 Subject: [PATCH 008/236] =?UTF-8?q?=E8=A1=A5=E5=AE=8C=20PR?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @filefi 不要丢掉扩展名。 --- ...20171202 Scrot Linux command-line screen grabs made simple.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/tech/{20171202 Scrot Linux command-line screen grabs made simple => 20171202 Scrot Linux command-line screen grabs made simple.md} (100%) diff --git a/sources/tech/20171202 Scrot Linux command-line screen grabs made simple b/sources/tech/20171202 Scrot Linux command-line screen grabs made simple.md similarity index 100% rename from sources/tech/20171202 Scrot Linux command-line screen grabs made simple rename to sources/tech/20171202 Scrot Linux command-line screen grabs made simple.md From 557c2e97ab8c3c218fb6fb21544eb71bbe78879c Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 4 Dec 2017 17:31:40 +0800 Subject: [PATCH 009/236] =?UTF-8?q?=E5=B7=B2=E5=8F=91=E5=B8=83?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @geekpi --- .../20171201 Linux Journal Ceases Publication.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171201 Linux Journal Ceases Publication.md (100%) diff --git a/translated/tech/20171201 Linux Journal Ceases Publication.md b/published/20171201 Linux Journal Ceases Publication.md similarity index 100% rename from translated/tech/20171201 Linux Journal Ceases Publication.md rename to published/20171201 Linux Journal Ceases Publication.md From b1848c52a3a76ad92f3f6dbe17a88ddfb4659d14 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 4 Dec 2017 21:38:41 +0800 Subject: [PATCH 010/236] =?UTF-8?q?PRF:20171130=20Translate=20Shell=20?= =?UTF-8?q?=E2=80=93=20A=20Tool=20To=20Use=20Google=20Translate=20From=20C?= =?UTF-8?q?ommand=20Line=20In=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @lujun9972 译者没署名,我找了半天 /cry --- ...ogle Translate From Command Line In Linux.md | 76 +++++++++---------- 1 file changed, 37 insertions(+), 39 deletions(-) diff --git 
a/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md b/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md
index 9f905bd496..aeae003532 100644
--- a/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md
+++ b/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md
@@ -1,68 +1,65 @@
-Translate Shell: 一款在 Linux 命令行中使用 Google Translate的工具
+Translate Shell :一款在 Linux 命令行中使用谷歌翻译的工具
 ============================================================
 
-我对 CLI 应用非常感兴趣,因此热衷于使用并分享 CLI 应用。 我之所以更喜欢 CLI 很大原因是因为我在大多数的时候都使用的是字符界面(black screen),已经习惯了使用 CLI 应用而不是 GUI 应用.
+我对 CLI 应用非常感兴趣,因此热衷于使用并分享 CLI 应用。 我之所以更喜欢 CLI 很大原因是因为我在大多数的时候都使用的是字符界面(black screen),已经习惯了使用 CLI 应用而不是 GUI 应用。
 
-我写过很多关于 CLI 应用的文章。 最近我发现了一些 google 的 CLI 工具,像 “Google Translator”, “Google Calendar”, 和 “Google Contacts”。 这里,我想在给大家分享一下。
+我写过很多关于 CLI 应用的文章。 最近我发现了一些谷歌的 CLI 工具,像 “Google Translator”、“Google Calendar” 和 “Google Contacts”。 这里,我想再给大家分享一下。
 
-今天我们要介绍的是 “Google Translator” 工具。 由于母语是泰米尔语,我在一天内用了很多次才理解了它的意义。
+今天我们要介绍的是 “Google Translator” 工具。 由于我的母语是泰米尔语,我在一天内用了很多次才理解了它的意义。
 
-`Google translate` 为其他语系的人们所广泛使用。
+谷歌翻译为其它语系的人们所广泛使用。
 
 ### 什么是 Translate Shell
 
-[Translate Shell][2] (之前叫做 Google Translate CLI) 是一款借助 `Google Translate`(默认), `Bing Translator`, `Yandex.Translate` 以及 `Apertium` 来翻译的命令行翻译器。
-它让你可以在终端访问这些翻译引擎. `Translate Shell` 在大多数Linux发行版中都能使用。
+[Translate Shell][2] (之前叫做 Google Translate CLI) 是一款借助谷歌翻译(默认)、必应翻译、Yandex.Translate 以及 Apertium 来翻译的命令行翻译器。它让你可以在终端访问这些翻译引擎。 Translate Shell 在大多数 Linux 发行版中都能使用。
 
 ### 如何安装 Translate Shell
 
-有三种方法安装 `Translate Shell`。
+有三种方法安装 Translate Shell。
 
 * 下载自包含的可执行文件
-
 * 手工安装
+* 通过包管理器安装
 
-* 通过包挂力气安装
-
-#### 方法-1 : 下载自包含的可执行文件
+#### 方法 1 : 下载自包含的可执行文件
 
 下载自包含的可执行文件放到 `/usr/bin` 目录中。
 
-```shell
+```
 $ wget git.io/trans
 $ chmod +x ./trans
 $ sudo mv trans /usr/bin/
 ```
 
-#### 方法-2 : 手工安装
+#### 方法 2 : 手工安装
 
-克隆 `Translate Shell` github 仓库然后手工编译。
+克隆 Translate Shell 的 GitHub 仓库然后手工编译。
 
-```shell
+```
 $ git clone https://github.com/soimort/translate-shell && cd translate-shell
 $ make
 $ sudo make install
 ```
 
-#### 方法-3 : Via Package Manager
+#### 方法 3 : 通过包管理器
 
-有些发行版的官方仓库中包含了 `Translate Shell`,可以通过包管理器来安装。
+有些发行版的官方仓库中包含了 Translate Shell,可以通过包管理器来安装。
 
-对于 Debian/Ubuntu, 使用 [APT-GET Command][3] 或者 [APT Command][4]来安装。
+对于 Debian/Ubuntu, 使用 [APT-GET 命令][3] 或者 [APT 命令][4]来安装。
 
-```shell
+```
 $ sudo apt-get install translate-shell
 ```
 
-对于 Fedora, 使用 [DNF Command][5] 来安装。
+对于 Fedora, 使用 [DNF 命令][5] 来安装。
 
-```shell
+```
 $ sudo dnf install translate-shell
 ```
 
-对于基于 Arch Linux 的系统, 使用 [Yaourt Command][6] 或 [Packer Command][7] 来从 AUR 仓库中安装。
+对于基于 Arch Linux 的系统, 使用 [Yaourt 命令][6] 或 [Packer 命令][7] 来从 AUR 仓库中安装。
 
-```shell
+```
 $ yaourt -S translate-shell
 or
 $ packer -S translate-shell
 ```
 
 ### 如何使用 Translate Shell
 
-安装好后,打开终端闭关输入下面命令。 `Google Translate` 会自动探测源文本是哪种语言,并且在默认情况下将之翻译成你的 `locale` 所对应的语言。
+安装好后,打开终端并输入下面命令。 谷歌翻译会自动探测源文本是哪种语言,并且在默认情况下将之翻译成你的 `locale` 所对应的语言。
 
 ```
 $ trans [Words]
 ```
 
 下面我将泰米尔语中的单词 “நன்றி” (Nanri) 翻译成英语。 这个单词的意思是感谢别人。
 
 ```
 $ trans நன்றி
 நன்றி
 (Naṉṟi)
 
 Thanks
 
 Definitions of நன்றி
 [ தமிழ் -> English ]
 
 noun
     gratitude
     நன்றி
     thanks
     நன்றி
 
 நன்றி
     Thanks
 ```
 
 使用下面命令也能将英语翻译成泰米尔语。
@@ -119,7 +116,7 @@ thanks
 நன்றி
 
-要将一个单词翻译到多个语种可以使用下面命令(本例中, 我将单词翻译成泰米尔语以及印地语)。
+要将一个单词翻译到多个语种可以使用下面命令(本例中,我将单词翻译成泰米尔语以及印地语)。
 
 ```
 $ trans :ta+hi thanks
@@ -172,7 +169,7 @@ what is going on your life?
 உங்கள் வாழ்க்கையில் என்ன நடக்கிறது?
 
-下面命令独立地翻译各个单词。
+下面命令单独地翻译各个单词。
 
 ```
 $ trans :ta curios happy
@@ -208,14 +205,14 @@ happy
 சந்தோஷமாக, மகிழ்ச்சி, இனிய, சந்தோஷமா
 
-简洁模式: 默认情况下,`Translate Shell` 尽可能多的显示翻译信息.
如果你希望只显示简要信息,只需要加上`-b`选项。 +简洁模式:默认情况下,Translate Shell 尽可能多的显示翻译信息。如果你希望只显示简要信息,只需要加上 `-b`选项。 ``` $ trans -b :ta thanks நன்றி ``` -字典模式: 加上 `-d` 可以把 `Translate Shell` 当成字典来用. +字典模式:加上 `-d` 可以把 Translate Shell 当成字典来用。 ``` $ trans -d :en thanks @@ -294,14 +291,14 @@ See also Thanks!, thank, many thanks, thanks to, thanks to you, special thanks, give thanks, thousand thanks, Many thanks!, render thanks, heartfelt thanks, thanks to this ``` -使用下面格式可以使用 `Translate Shell` 来翻译文件。 +使用下面格式可以使用 Translate Shell 来翻译文件。 -```shell +``` $ trans :ta file:///home/magi/gtrans.txt உங்கள் வாழ்க்கையில் என்ன நடக்கிறது? ``` -下面命令可以让 `Translate Shell` 进入交互模式. 在进入交互模式之前你需要明确指定源语言和目标语言。本例中,我将英文单词翻译成泰米尔语。 +下面命令可以让 Translate Shell 进入交互模式。 在进入交互模式之前你需要明确指定源语言和目标语言。本例中,我将英文单词翻译成泰米尔语。 ``` $ trans -shell en:ta thanks @@ -324,13 +321,14 @@ thanks நன்றி ``` -想知道语言代码,可以执行下面语言。 +想知道语言代码,可以执行下面命令。 -```shell +``` $ trans -R ``` 或者 -```shell + +``` $ trans -T ┌───────────────────────┬───────────────────────┬───────────────────────┐ │ Afrikaans - af │ Hindi - hi │ Punjabi - pa │ @@ -375,9 +373,9 @@ $ trans -T └───────────────────────┴───────────────────────┴───────────────────────┘ ``` -想了解更多选项的内容,可以查看 `man` 页. +想了解更多选项的内容,可以查看其 man 手册。 -```shell +``` $ man trans ``` @@ -386,8 +384,8 @@ $ man trans via: https://www.2daygeek.com/translate-shell-a-tool-to-use-google-translate-from-command-line-in-linux/ 作者:[Magesh Maruthamuthu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[lujun9972](https://github.com/lujun9972 ) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a30b1c69ad66a0dcbc6e6cc67b6dffbe55e04c88 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 4 Dec 2017 21:38:57 +0800 Subject: [PATCH 011/236] =?UTF-8?q?PUB:20171130=20Translate=20Shell=20?= =?UTF-8?q?=E2=80=93=20A=20Tool=20To=20Use=20Google=20Translate=20From=20C?= =?UTF-8?q?ommand=20Line=20In=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @lujun9972 https://linux.cn/article-9107-1.html --- ...– A Tool To Use Google Translate From Command Line In Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md (100%) diff --git a/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md b/published/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md similarity index 100% rename from translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md rename to published/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md From 8a5b20e9ad5362f9deb8f934fde1a0b37f82a1e0 Mon Sep 17 00:00:00 2001 From: iron0x <2727586680@qq.com> Date: Mon, 4 Dec 2017 21:40:05 +0800 Subject: [PATCH 012/236] Update 20171202 docker - Use multi-stage builds.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译中 --- sources/tech/20171202 docker - Use multi-stage builds.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171202 docker - Use multi-stage builds.md b/sources/tech/20171202 docker - Use multi-stage builds.md index e1a6414862..8cc8af1c94 100644 --- a/sources/tech/20171202 docker - Use multi-stage builds.md +++ b/sources/tech/20171202 docker - Use multi-stage builds.md @@ -1,3 +1,5 @@ +【iron0x翻译中】 
+ Use multi-stage builds ============================================================ From 6d9411106201719c2a58e43176454b2b5f07062d Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 4 Dec 2017 22:02:54 +0800 Subject: [PATCH 013/236] PRF&PUB:20171130 New Feature Find every domain someone owns automatically.md @geekpi --- ...d every domain someone owns automatically.md | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) rename {translated/tech => published}/20171130 New Feature Find every domain someone owns automatically.md (69%) diff --git a/translated/tech/20171130 New Feature Find every domain someone owns automatically.md b/published/20171130 New Feature Find every domain someone owns automatically.md similarity index 69% rename from translated/tech/20171130 New Feature Find every domain someone owns automatically.md rename to published/20171130 New Feature Find every domain someone owns automatically.md index 4b72eaae5e..e8866a5ce5 100644 --- a/translated/tech/20171130 New Feature Find every domain someone owns automatically.md +++ b/published/20171130 New Feature Find every domain someone owns automatically.md @@ -1,16 +1,15 @@ -新功能:自动找出每个域名的拥有者 +使用 DNSTrails 自动找出每个域名的拥有者 ============================================================ - 今天,我们很高兴地宣布我们最近几周做的新功能。它是 Whois 聚合工具,现在可以在 [DNSTrails][1] 上获得。 -在过去,查找一个域名的所有者会花费很多时间,因为大部分时间你都需要把域名指向一个 IP 地址,以便找到同一个人拥有的其他域名。 +在过去,查找一个域名的所有者会花费很多时间,因为大部分时间你都需要把域名翻译为一个 IP 地址,以便找到同一个人拥有的其他域名。 -使用老的方法,你会很轻易地在一个工具和另外一个工具的研究和交叉比较结果中花费数个小时,直到得到你想要的域名。 +使用老的方法,在得到你想要的域名列表之前,你在一个工具和另外一个工具的一日又一日的研究和交叉比较结果中经常会花费数个小时。 -感谢这个新工具和我们的智能[WHOIS 数据库][2],现在你可以搜索任何域名,并获得组织或个人注册的域名的完整列表,并在几秒钟内获得准确的结果。 +感谢这个新工具和我们的智能 [WHOIS 数据库][2],现在你可以搜索任何域名,并获得组织或个人注册的域名的完整列表,并在几秒钟内获得准确的结果。 -### 我如何使用Whois聚合功能? +### 我如何使用 Whois 聚合功能? 
 
--------------------------------------------------------------------------------
 
 via: https://securitytrails.com/blog/find-every-domain-someone-owns
 
-作者:[SECURITYTRAILS TEAM ][a]
+作者:[SECURITYTRAILS TEAM][a]
 译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
 
 [a]:https://securitytrails.com/blog/find-every-domain-someone-owns
 [1]:https://dnstrails.com/
 [2]:https://securitytrails.com/forensics
 [3]:https://dnstrails.com/
 [4]:http://dnstrails.com/#/domain/domain/ueland.com
 [5]:https://dnstrails.com/
 [6]:https://securitytrails.com/contact

From 40e85e02d0860ba5071128b76793eec19fefcbfd Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 4 Dec 2017 22:08:09 +0800
Subject: =?UTF-8?q?=E4=BF=AE=E6=AD=A3=E6=96=87=E4=BB=B6=E5=90=8D?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@oska874 @lujun9972

---
 ...em.md => 20171117 System Logs Understand Your Linux System.md} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename translated/tech/{20171117 System Logs: Understand Your Linux System.md => 20171117 System Logs Understand Your Linux System.md} (100%)

diff --git a/translated/tech/20171117 System Logs: Understand Your Linux System.md b/translated/tech/20171117 System Logs Understand Your Linux System.md
similarity index 100%
rename from translated/tech/20171117 System Logs: Understand Your Linux System.md
rename to translated/tech/20171117 System Logs Understand Your Linux System.md

From ca2175631b518a9ee619aa0aea6727694b016cfe Mon Sep 17 00:00:00 2001
From: imquanquan
Date: Mon, 4 Dec 2017 22:13:28 +0800
Subject: translated

---
 ...ow to Manage Users with Groups in Linux.md | 183 ++++++++++++++++++
 1 file changed, 183 insertions(+)
 create mode 100644 translated/tech/20171201 How to Manage Users with Groups in Linux.md

diff --git a/translated/tech/20171201 How to Manage Users with Groups in Linux.md b/translated/tech/20171201 How to Manage Users with Groups in Linux.md
new file mode 100644
index 0000000000..8baac8707b
--- /dev/null
+++ b/translated/tech/20171201 How to Manage Users with Groups in Linux.md
@@ -0,0 +1,183 @@
+如何在 Linux 系统中用用户组来管理用户
+============================================================
+
+### [group-of-people-1645356_1920.jpg][1]
+
+![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/group-of-people-1645356_1920.jpg?itok=rJlAxBSV)
+
+在本教程中了解如何通过用户组和访问控制表(ACL)来管理用户。
+
+[创意共享协议][4]
+
+当你需要管理一台容纳多个用户的 Linux 机器时,比起一些基本的用户管理工具所提供的方法,有时候你需要对这些用户采取更细致的权限管理方式。特别是当你要管理某些用户的权限时,这个想法尤为重要。比如说,你有一个目录,一个用户组中的用户可以通过读和写的权限访问这个目录,而其他用户组中的用户对这个目录只有读的权限。在 Linux 中这是完全可以实现的。但是你首先必须了解如何通过用户组和访问控制表(ACL)来管理用户。
+
+我们将从简单的用户开始,逐渐深入到复杂的访问控制表(ACL)。你所需要做的一切都将在你选择的 Linux 发行版中完成。本文的重点是用户组,所以不会涉及到关于用户的基础知识。
+
+为了达到演示的目的,我将假设:
+
+你需要用下面两个用户名新建两个用户:
+
+* olivia
+
+* nathan
+
+你需要新建以下两个用户组:
+
+* readers
+
+* editors
+
+olivia 属于 editors 用户组,而 nathan 属于 readers 用户组。readers 用户组对 ``/DATA`` 目录只有读的权限,而 editors 用户组则对 ``/DATA`` 目录同时有读和写的权限。当然,这是个非常小的任务,但它能让你掌握基本的用法。你可以扩展这个任务以适应你其他更大的需求。
+
+我将在 Ubuntu 16.04 Server 平台上进行演示。这些命令都是通用的,唯一不同的是,要是在你的发行版中不使用 sudo 命令,你必须切换到 root 用户来执行这些命令。
+
+### 创建用户
+
+我们需要做的第一件事是为我们的实验创建两个用户。可以用 ``useradd`` 命令来创建用户,我们不只是简单地创建一个用户,而需要同时创建用户和属于他们的家目录,然后给他们设置密码。
+
+```
+sudo useradd -m olivia
+
+sudo useradd -m nathan
+```
+
+我们现在创建了两个用户,如果你看看 ``/home`` 目录,你可以发现他们的家目录(因为我们用了 `-m` 选项,它会在创建用户的同时创建他们的家目录)。
+
+之后,我们可以用以下命令给他们设置密码:
+
+```
+sudo passwd olivia
+
+sudo passwd nathan
+```
+
+就这样,我们创建了两个用户。
+
+### 创建用户组并添加用户
+
+现在我们将创建 readers 和 editors 用户组,然后给它们添加用户。创建用户组的命令是:
+
+```
+addgroup readers
+
+addgroup editors
+```
+
+(译者注:当你使用 CentOS 等一些 Linux 发行版时,可能系统没有 addgroup 这个命令,推荐使用 groupadd 命令来替换 addgroup 命令以达到同样的效果)
+
+### [groups_1.jpg][2]
+
+![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/groups_1.jpg?itok=BKwL89BB)
+
+图一:我们可以使用刚创建的新用户组了。
+
+[Used with permission][5]
+
+创建用户组后,我们需要给他们添加用户。我们用以下命令来将 nathan 添加到 readers 用户组:
+
+```
+sudo usermod -a -G readers nathan
+```
+
+用以下命令将 olivia 添加到 editors 用户组:
+
+```
+sudo usermod -a -G editors olivia
+```
+
+现在我们已经准备好用用户组来管理用户了。
+
+### 给用户组授予目录的权限
+
+假设你有个目录 ``/READERS``,允许 readers 用户组的所有成员访问这个目录。首先,我们执行以下命令来更改目录所属用户组:
+
+```
+sudo chown -R :readers /READERS
+```
+
+接下来,执行以下命令收回目录所属用户组的写入权限:
+
+```
+sudo chmod -R g-w /READERS
+```
+
+然后我们执行下面的命令来收回其他用户对这个目录的访问权限(以防止任何不在 readers 组中的用户访问这个目录里的文件):
+
+```
+sudo chmod -R o-x /READERS
+```
+
+这时候,只有目录的所有者(root)和用户组 readers 中的用户可以访问 ``/READERS`` 中的文件。
+
+假设你有个目录 ``/EDITORS`` ,你需要给用户组 editors 里的成员这个目录的读和写的权限。为了达到这个目的,执行下面的这些命令是必要的:
+
+```
+sudo chown -R :editors /EDITORS
+
+sudo chmod -R g+w /EDITORS
+
+sudo chmod -R o-x /EDITORS
+```
+
+此时 editors 用户组的所有成员都可以访问和修改其中的文件。除此之外其他用户(除了 root 之外)无法访问 /EDITORS 中的任何文件。
+
+使用这个方法的问题在于,你一次只能操作一个组和一个目录而已。这时候访问控制表(ACL)就可以派得上用场了。
+
+### 使用访问控制表(ACL)
+
+现在,让我们把这个问题变得棘手一点。假设你有一个目录 ``/DATA``,并且你想给 readers 用户组的成员读取权限,同时给 editors 用户组的成员读和写的权限。为此,你必须要用到 setfacl 命令。setfacl 命令可以为文件或文件夹设置一个访问控制表(ACL)。
+
+这个命令的结构如下:
+
+```
+setfacl OPTION X:NAME:Y /DIRECTORY
+```
+
+其中 OPTION 是可选选项,X 可以是 u(用户)或者是 g (用户组),NAME 是用户或者用户组的名字,/DIRECTORY 是要用到的目录。我们将使用 -m 选项进行修改(modify)。因此,我们给 readers 用户组添加读取权限的命令是:
+
+```
+sudo setfacl -m g:readers:rx -R /DATA
+```
+
+现在 readers 用户组里面的每一个用户都可以读取 /DATA 目录里的文件了,但是他们不能修改里面的内容。
+
+为了给 editors 用户组里面的用户读写权限,我们执行以下的命令:
+
+```
+sudo setfacl -m g:editors:rwx -R /DATA
+```
+
+上述命令将赋予 editors 用户组中的任何成员读写权限,同时保留 readers 用户组的只读权限。
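设置完成后,可以用 `getfacl` 命令确认这两个用户组的权限都已生效。以下输出仅为示意,实际输出取决于你的系统:

```
$ getfacl /DATA
getfacl: Removing leading '/' from absolute path names
# file: DATA
# owner: root
# group: root
user::rwx
group::r-x
group:readers:r-x
group:editors:rwx
mask::rwx
other::r-x
```

如果能在输出中看到 `group:readers` 和 `group:editors` 两行,就说明访问控制表已经按预期生效了。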
+### 更多的权限控制
+
+使用访问控制表(ACL),你可以实现你所需的权限控制。你可以将用户添加到用户组,并且可靠灵活地控制这些用户组对每个目录的权限以达到你的需求。想要了解上述工具的更多信息,可以执行下列的命令:
+
+* man useradd
+
+* man addgroup
+
+* man usermod
+
+* man setfacl
+
+* man chown
+
+* man chmod
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/12/how-manage-users-groups-linux
+
+作者:[Jack Wallen ]
+译者:[imquanquan](https://github.com/imquanquan)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://www.linux.com/files/images/group-people-16453561920jpg
+[2]:https://www.linux.com/files/images/groups1jpg
+[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[4]:https://www.linux.com/licenses/category/creative-commons-zero
+[5]:https://www.linux.com/licenses/category/used-permission

From 96a54dd193bec2b9eb8bb945766b13087f101179 Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 4 Dec 2017 22:20:54 +0800
Subject: =?UTF-8?q?=E7=A7=BB=E9=99=A4=E9=87=8D=E5=A4=8D?=
 =?UTF-8?q?=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@oska874

---
 ...x command-line screen grabs made simple.md | 108 ------------------
 1 file changed, 108 deletions(-)
 delete mode 100644 sources/tech/20171130 Scrot Linux command-line screen grabs made simple.md

diff --git a/sources/tech/20171130 Scrot Linux command-line screen grabs made simple.md b/sources/tech/20171130 
Scrot Linux command-line screen grabs made simple.md deleted file mode 100644 index 2b4d2248b2..0000000000 --- a/sources/tech/20171130 Scrot Linux command-line screen grabs made simple.md +++ /dev/null @@ -1,108 +0,0 @@ -Scrot: Linux command-line screen grabs made simple -============================================================ - -### Scrot is a basic, flexible tool that offers a number of handy options for taking screen captures from the Linux command line. - -![Scrot: Screen grabs made simple](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A "Scrot: Screen grabs made simple") -Image credits : Original photo by Rikki Endsley. [CC BY-SA 4.0][13] - -There are great tools on the Linux desktop for taking screen captures, such as [KSnapshot][14] and [Shutter][15]. Even the simple utility that comes with the GNOME desktop does a pretty good job of capturing screens. But what if you rarely need to take screen captures? Or you use a Linux distribution without a built-in capture tool, or an older computer with limited resources? - -Turn to the command line and a little utility called [Scrot][16]. It does a fine job of taking simple screen captures, and it includes a few features that might surprise you. - -### Getting started with Scrot - -More Linux resources - -* [What is Linux?][1] - -* [What are Linux containers?][2] - -* [Download Now: Linux commands cheat sheet][3] - -* [Advanced Linux commands cheat sheet][4] - -* [Our latest Linux articles][5] - -Many Linux distributions come with Scrot already installed—to check, type `which scrot`. If it isn't there, you can install Scrot using your distro's package manager. If you're willing to compile the code, grab it [from GitHub][22]. - -To take a screen capture, crack open a terminal window and type `scrot [filename]`, where `[filename]` is the name of file to which you want to save the image (for example, `desktop.png`). If you don't include a name for the file, Scrot will create one for you, such as `2017-09-24-185009_1687x938_scrot.png`. (That filename isn't as descriptive it could be, is it? That's why it's better to add one to the command.) - -Running Scrot with no options takes a screen capture of your entire desktop. If you don't want to do that, Scrot lets you focus on smaller portions of your screen. - -### Taking a screen capture of a single window - -Tell Scrot to take a screen capture of a single window by typing `scrot -u [filename]`. - -The `-u` option tells Scrot to grab the window currently in focus. That's usually the terminal window you're working in, which might not be the one you want. - -To grab another window on your desktop, type `scrot -s [filename]`. - -The `-s` option lets you do one of two things: - -* select an open window, or - -* draw a rectangle around a window or a portion of a window to capture it. - -You can also set a delay, which gives you a little more time to select the window you want to capture. To do that, type `scrot -u -d [num] [filename]`. - -The `-d` option tells Scrot to wait before grabbing the window, and `[num]` is the number of seconds to wait. Specifying `-d 5` (wait five seconds) should give you enough time to choose a window. - -### More useful options - -Scrot offers a number of additional features (most of which I never use). The ones I find most useful include: - -* `-b` also grabs the window's border - -* `-t` grabs a window and creates a thumbnail of it. 
This can be useful when you're posting screen captures online. - -* `-c` creates a countdown in your terminal when you use the `-d` option. - -To learn about Scrot's other options, check out the its documentation by typing `man scrot` in a terminal window, or [read it online][17]. Then start snapping images of your screen. - -It's basic, but Scrot gets the job done nicely. - -### Topics - - [Linux][23] - -### About the author - - [![That idiot Scott Nesbitt ...](https://opensource.com/sites/default/files/styles/profile_pictures/public/scottn-cropped.jpg?itok=q4T2J4Ai)][18] - - Scott Nesbitt - I'm a long-time user of free/open source software, and write various things for both fun and profit. I don't take myself too seriously and I do all of my own stunts. You can find me at these fine establishments on the web: [Twitter][7], [Mastodon][8], [GitHub][9], and... [more about Scott Nesbitt][10][More about me][11] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot - -作者:[ Scott Nesbitt  ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/scottnesbitt -[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[6]:https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot?rate=H43kUdawjR0GV9D0dCbpnmOWcqw1WekfrAI_qKo8UwI -[7]:http://www.twitter.com/ScottWNesbitt -[8]:https://mastodon.social/@scottnesbitt -[9]:https://github.com/ScottWNesbitt -[10]:https://opensource.com/users/scottnesbitt -[11]:https://opensource.com/users/scottnesbitt -[12]:https://opensource.com/user/14925/feed -[13]:https://creativecommons.org/licenses/by-sa/4.0/ -[14]:https://www.kde.org/applications/graphics/ksnapshot/ -[15]:https://launchpad.net/shutter -[16]:https://github.com/dreamer/scrot -[17]:http://manpages.ubuntu.com/manpages/precise/man1/scrot.1.html -[18]:https://opensource.com/users/scottnesbitt -[19]:https://opensource.com/users/scottnesbitt -[20]:https://opensource.com/users/scottnesbitt -[21]:https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot#comments -[22]:https://github.com/dreamer/scrot -[23]:https://opensource.com/tags/linux From 1e5d60f56f5695b538bc5c8d0448a69b6cf0db57 Mon Sep 17 00:00:00 2001 From: imquanquan Date: Mon, 4 Dec 2017 22:33:42 +0800 Subject: [PATCH 017/236] Delete 20171201 How to Manage Users with Groups in Linux.md --- ...ow to Manage Users with Groups in Linux.md | 168 ------------------ 1 file changed, 168 deletions(-) delete mode 100644 sources/tech/20171201 How to Manage Users with Groups in Linux.md diff --git a/sources/tech/20171201 How to Manage Users with Groups in Linux.md b/sources/tech/20171201 How to Manage Users with Groups in Linux.md 
deleted file mode 100644 index 35350c819f..0000000000 --- a/sources/tech/20171201 How to Manage Users with Groups in Linux.md +++ /dev/null @@ -1,168 +0,0 @@ -translating---imquanquan - -How to Manage Users with Groups in Linux -============================================================ - -### [group-of-people-1645356_1920.jpg][1] - -![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/group-of-people-1645356_1920.jpg?itok=rJlAxBSV) - -Learn how to work with users, via groups and access control lists in this tutorial. - -[Creative Commons Zero][4] - -Pixabay - -When you administer a Linux machine that houses multiple users, there might be times when you need to take more control over those users than the basic user tools offer. This idea comes to the fore especially when you need to manage permissions for certain users. Say, for example, you have a directory that needs to be accessed with read/write permissions by one group of users and only read permissions for another group. With Linux, this is entirely possible. To make this happen, however, you must first understand how to work with users, via groups and access control lists (ACLs). - -We’ll start from the beginning with users and work our way to the more complex ACLs. Everything you need to make this happen will be included in your Linux distribution of choice. We won’t touch on the basics of users, as the focus on this article is about groups. - -For the purpose of this piece, I’m going to assume the following: - -You need to create two users with usernames: - -* olivia - -* nathan - -You need to create two groups: - -* readers - -* editors - -Olivia needs to be a member of the group editors, while nathan needs to be a member of the group readers. The group readers needs to only have read permission to the directory /DATA, whereas the group editors needs to have both read and write permission to the /DATA directory. This, of course, is very minimal, but it will give you the basic information you need to expand the tasks to fit your much larger needs. - -I’ll be demonstrating on the Ubuntu 16.04 Server platform. The commands will be universal—the only difference would be if your distribution of choice doesn’t make use of sudo. If this is the case, you’ll have to first su to the root user to issue the commands that require sudo in the demonstrations. - -### Creating the users - -The first thing we need to do is create the two users for our experiment. User creation is handled with the useradd command. Instead of just simply creating the users we need to create them both with their own home directories and then give them passwords. - -The first thing we do is create the users. To do this, issue the commands: - -``` -sudo useradd -m olivia - -sudo useradd -m nathan -``` - -Next each user must have a password. To add passwords into the mix, you’d issue the following commands: - -``` -sudo passwd olivia - -sudo passwd nathan -``` - -That’s it, your users are created. - -### Creating groups and adding users - -Now we’re going to create the groups readers and editors and then add users to them. The commands to create our groups are: - -``` -addgroup readers - -addgroup editors -``` - -### [groups_1.jpg][2] - -![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/groups_1.jpg?itok=BKwL89BB) - -Figure 1: Our new groups ready to be used. - -[Used with permission][5] - -With our groups created, we need to add our users. 
We’ll add user nathan to group readers with the command: - -``` -sudo usermod -a -G readers nathan -``` - -Then we’ll add user olivia to the editors group with the command: - -``` -sudo usermod -a -G editors olivia -``` - -### Giving groups permissions to directories - -Let’s say you have the directory /READERS and you need to allow all members of the readers group access to that directory. First, change the group of the folder with the command: - -``` -sudo chown -R :readers /READERS -``` - -Next, remove write permission from the group with the command: - -``` -sudo chmod -R g-w /READERS -``` - -Finally, remove execute permission for all other users with the command (this prevents anyone outside the readers group from reaching the files in that directory): - -``` -sudo chmod -R o-x /READERS -``` - -Let’s say you have the directory /EDITORS and you need to give members of the editors group read and write permission to its contents. To do that, the following commands would be necessary: - -``` -sudo chown -R :editors /EDITORS - -sudo chmod -R g+w /EDITORS - -sudo chmod -R o-x /EDITORS -``` - -The problem with using this method is you can only add one group to a directory at a time. This is where access control lists come in handy. - -### Using access control lists - -Now, let’s get tricky. Say you have a single folder—/DATA—and you want to give members of the readers group read permission and members of the group editors read/write permissions. To do that, you must take advantage of the setfacl command. The setfacl command sets file access control lists for files and folders. - -The structure of this command looks like this: - -``` -setfacl OPTION X:NAME:Y /DIRECTORY -``` - -To give members of the readers group read access to the /DATA directory, we’d issue the command: - -``` -sudo setfacl -m g:readers:rx -R /DATA -``` - -To give members of the editors group read/write permissions (while retaining read permissions for the readers group), we’d issue the command: - -``` -sudo setfacl -m g:editors:rwx -R /DATA -``` - -### All the control you need - -And there you have it. You can now add members to groups and control those groups’ access to various directories with all the power and flexibility you need. To read more about the above tools, issue the commands: - -* man useradd - -* man addgroup - -* man usermod - -* man setfacl - -* man chown - -* man chmod - -Learn more about Linux through the free ["Introduction to Linux"][3] course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2017/12/how-manage-users-groups-linux - -作者:[Jack Wallen ] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.linux.com/files/images/group-people-16453561920jpg -[2]:https://www.linux.com/files/images/groups1jpg -[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux -[4]:https://www.linux.com/licenses/category/creative-commons-zero -[5]:https://www.linux.com/licenses/category/used-permission From 18ae29fedefe613992f4c3c98b15fb6a4c7a121c Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 4 Dec 2017 22:38:01 +0800 Subject: [PATCH 018/236] PRF:20171124 Photon Could Be Your New Favorite Container OS.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @KeyLD 恭喜你,完成了第一篇翻译!
不过,按照流程,翻译前应该发起申请的 PR,翻译完提交时,要将原文删除。 --- ...Could Be Your New Favorite Container OS.md | 146 ------------------ ...Could Be Your New Favorite Container OS.md | 77 ++++----- 2 files changed, 32 insertions(+), 191 deletions(-) delete mode 100644 sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md diff --git a/sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md b/sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md deleted file mode 100644 index d282ef5445..0000000000 --- a/sources/tech/20171124 Photon Could Be Your New Favorite Container OS.md +++ /dev/null @@ -1,146 +0,0 @@ -Photon Could Be Your New Favorite Container OS -============================================================ - -![Photon OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon-linux.jpg?itok=jUFHPR_c "Photon OS") -Jack Wallen says Photon OS is an outstanding platform, geared specifically for containers.[Creative Commons Zero][5]Pixabay - -Containers are all the rage, and with good reason. [As discussed previously][13], containers allow you to quickly and easily deploy new services and applications onto your network, without requiring too much in the way of added system resources. Containers are more cost-effective than using dedicated hardware or virtual machines, and they’re easier to update and reuse. - -Best of all, containers love Linux (and vice versa). Without much trouble or time, you can get a Linux server up and running with [Docker][14] and deploying containers. But, which Linux distribution is best suited for the deployment of your containers? There are a _lot_  of options. You could go with a standard Ubuntu Server platform (which makes installing Docker and deploying containers incredibly easy), or you could opt for a lighter weight distribution — one geared specifically for the purpose of deploying containers. - -One such distribution is [Photon][15]. This particular platform was created in 2005 by [VMware][16]; it includes the Docker daemon and works with container frameworks, such as Mesos and Kubernetes. Photon is optimized to work with [VMware vSphere][17], but it can be used on bare metal, [Microsoft Azure][18], [Google Compute Engine][19], [Amazon Elastic Compute Cloud][20], or [VirtualBox][21]. - -Photon manages to stay slim by only installing what is absolutely necessary to run the Docker daemon. In the end, the distribution comes in around 300 MB. This is just enough Linux make it all work. The key features to Photon are: - -* Kernel tuned for performance. - -* Kernel is hardened according to the [Kernel Self-Protection Project][6] (KSPP). - -* All installed packages are built with hardened security flags. - -* Operating system boots with validated trust. - -* Photon management daemon manages firewall, network, packages, and users on remote Photon OS machines. - -* Support for persistent volumes. - -* [Project Lightwave][7] integration. - -* Timely security patches and updates. - -Photon can be used via [ISO][22], [OVA][23], [Amazon Machine Image][24], [Google Compute Engine image][25], and [Azure VHD][26]. I’ll show you how to install Photon on VirtualBox, using an ISO image. The installation takes about five minutes and, in the end, you’ll have a virtual machine, ready to deploy containers. - -### Creating the virtual machine - -Before you deploy that first container, you have to create the virtual machine and install Photon. To do this, open up VirtualBox and click the New button. 
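If you would rather script this step than click through the GUI, a rough `VBoxManage` equivalent is sketched below; the VM name, memory and CPU counts, and bridged adapter name are illustrative assumptions rather than values from the article, and you would still create and attach a disk as the wizard does:

```
VBoxManage createvm --name photon --ostype Linux_64 --register

VBoxManage modifyvm photon --memory 2048 --cpus 2 --nic1 bridged --bridgeadapter1 eth0
```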
Walk through the Create Virtual Machine wizard (giving Photon the necessary resources, based on the usage you predict the container server will need). Once you’ve created the virtual machine, you first need to make a change to the settings. Select the newly created virtual machine (in the left pane of the VirtualBox main window) and then click Settings. In the resulting window, click on Network (from the left navigation). - -In the Networking window (Figure 1), you need to change the Attached to drop-down to Bridged Adapter. This will ensure your Photon server is reachable from your network. Once you’ve made that change, click OK. - -### [photon_0.jpg][8] - -![change settings](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_0.jpg?itok=Q0yhOhsZ "change settings") -Figure 1: Changing the VirtualBox network settings for Photon.[Used with permission][1] - -Select your Photon virtual machine from the left navigation and then click Start. You will be prompted to locate and attach the ISO image. Once you’ve done that, Photon will boot up and prompt you to hit Enter to begin the installation. The installation is ncurses based (there is no GUI), but it’s incredibly simple. - -In the next screen (Figure 2), you will be asked if you want to do a Minimal, Full, or OSTree Server. I opted to go the Full route. Select whichever option you require and hit Enter. - -### [photon_1.jpg][9] - -![installation type](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_1.jpg?itok=OdnMVpaA "installation type") -Figure 2: Selecting your installation type.[Used with permission][2] - -In the next window, select the disk that will house Photon. Since we’re installing this as a virtual machine, there will be only one disk listed (Figure 3). Tab down to Auto and hit Enter on your keyboard. The installation will then require you to type (and verify) an administrator password. Once you’ve done that, the installation will begin and finish in less than five minutes. - -### [photon_2.jpg][10] - -![Photon ](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_2.jpg?itok=QL1Rs-PH "Photon") -Figure 3: Selecting your hard disk for the Photon installation.[Used with permission][3] - -Once the installation completes, reboot the virtual machine and log in with the username root and the password you created during installation. You are ready to start working. - -Before you begin using Docker on Photon, you’ll want to upgrade the platform. Photon uses the _yum_ package manager, so log in as root and issue the command _yum update_. If there are any updates available, you’ll be asked to okay the process (Figure 4). - -### [photon_3.jpg][11] - -![Updating](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_3.jpg?itok=vjqrspE2 "Updating") -Figure 4: Updating Photon.[Used with permission][4] - -### Usage - -As I mentioned, Photon comes with everything you need to deploy containers or even create a Kubernetes cluster. However, out of the box, there are a few things you’ll need to do. The first thing is to enable the Docker daemon to run at start. To do this, issue the commands: - -``` -systemctl start docker - -systemctl enable docker -``` - -Now we need to create a standard user, so we’re not running the docker command as root. To do this, issue the following commands: - -``` -useradd -m USERNAME - -passwd USERNAME -``` - -Where USERNAME is the name of the user to add.
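Before handing the machine over to that account, a quick check that the daemon enabled earlier is actually running, and that the new login exists, can save some confusion later. A minimal sketch, with USERNAME as above:

```
systemctl is-active docker

id USERNAME
```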
- -Next we need to add the new user to the  _docker_ group with the command: - -``` -usermod -a -G docker USERNAME -``` - -Where USERNAME is the name of the user just created. - -Log out as the root user and log back in as the newly created user. You can now work with the  _docker _ command without having to make use of  _sudo_  or switching to the root user. Pull down an image from Docker Hub and start deploying containers. - -### An outstanding container platform - -Photon is, without a doubt, an outstanding platform, geared specifically for containers. Do note that Photon is an open source project, so there is no paid support to be had. If you find yourself having trouble with Photon, hop on over to the [Issues tab in the Photon Project’s Github page][27], where you can read and post about issues. And if you’re interested in forking Photon, you’ll find the source code on the project’s [official Github page][28]. - -Give Photon a try and see if it doesn’t make deploying Docker containers and/or Kubernetes clusters significantly easier. - - _Learn more about Linux through the free ["Introduction to Linux" ][29]course from The Linux Foundation and edX._ - --------------------------------------------------------------------------------- - -via: 网址 - -作者:[ JACK WALLEN][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/jlwallen -[1]:https://www.linux.com/licenses/category/used-permission -[2]:https://www.linux.com/licenses/category/used-permission -[3]:https://www.linux.com/licenses/category/used-permission -[4]:https://www.linux.com/licenses/category/used-permission -[5]:https://www.linux.com/licenses/category/creative-commons-zero -[6]:https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project -[7]:http://vmware.github.io/lightwave/ -[8]:https://www.linux.com/files/images/photon0jpg -[9]:https://www.linux.com/files/images/photon1jpg -[10]:https://www.linux.com/files/images/photon2jpg -[11]:https://www.linux.com/files/images/photon3jpg -[12]:https://www.linux.com/files/images/photon-linuxjpg -[13]:https://www.linux.com/learn/intro-to-linux/2017/11/how-install-and-use-docker-linux -[14]:https://www.docker.com/ -[15]:https://vmware.github.io/photon/ -[16]:https://www.vmware.com/ -[17]:https://www.vmware.com/products/vsphere.html -[18]:https://azure.microsoft.com/ -[19]:https://cloud.google.com/compute/ -[20]:https://aws.amazon.com/ec2/ -[21]:https://www.virtualbox.org/ -[22]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS -[23]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS -[24]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS -[25]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS -[26]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS -[27]:https://github.com/vmware/photon/issues -[28]:https://github.com/vmware/photon -[29]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md b/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md index e51c580da9..3496f22f4a 100644 --- a/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md +++ b/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md @@ -1,109 +1,96 @@ -Photon也许能成为你最喜爱的容器操作系统 +Photon 也许能成为你最喜爱的容器操作系统 
============================================================ ![Photon OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon-linux.jpg?itok=jUFHPR_c "Photon OS") -Phonton OS专注于容器,是一个非常出色的平台。 —— Jack Wallen +>Phonton OS 专注于容器,是一个非常出色的平台。 —— Jack Wallen 容器在当下的火热,并不是没有原因的。正如[之前][13]讨论的,容器可以使您轻松快捷地将新的服务与应用部署到您的网络上,而且并不耗费太多的系统资源。比起专用硬件和虚拟机,容器都是更加划算的,除此之外,他们更容易更新与重用。 -更重要的是,容器喜欢Linux(反之亦然)。不需要太多时间和麻烦,你就可以启动一台Linux服务器,运行[Docker][14],再是部署容器。但是,哪种Linux发行版最适合部署容器呢?我们的选择很多。你可以使用标准的Ubuntu服务器平台(更容易安装Docker并部署容器)或者是更轻量级的发行版 —— 专门用于部署容器。 +更重要的是,容器喜欢 Linux(反之亦然)。不需要太多时间和麻烦,你就可以启动一台 Linux 服务器,运行[Docker][14],然后部署容器。但是,哪种 Linux 发行版最适合部署容器呢?我们的选择很多。你可以使用标准的 Ubuntu 服务器平台(更容易安装 Docker 并部署容器)或者是更轻量级的发行版 —— 专门用于部署容器。 -[Photon][15]就是这样的一个发行版。这个特殊的版本是由[VMware][16]于2005年创建的,它包含了Docker的守护进程,并与容器框架(如Mesos和Kubernetes)一起使用。Photon经过优化可与[VMware vSphere][17]协同工作,而且可用于裸机,[Microsoft Azure][18], [Google Compute Engine][19], [Amazon Elastic Compute Cloud][20], 或者 [VirtualBox][21]等。 +[Photon][15] 就是这样的一个发行版。这个特殊的版本是由 [VMware][16] 于 2005 年创建的,它包含了 Docker 的守护进程,并可与容器框架(如 Mesos 和 Kubernetes )一起使用。Photon 经过优化可与 [VMware vSphere][17] 协同工作,而且可用于裸机、[Microsoft Azure][18]、 [Google Compute Engine][19]、 [Amazon Elastic Compute Cloud][20] 或者 [VirtualBox][21] 等。 -Photon通过只安装Docker守护进程所必需的东西来保持它的轻量。而这样做的结果是,这个发行版的大小大约只有300MB。但这足以让Linux的运行一切正常。除此之外,Photon的主要特点还有: - -* 内核调整为性能模式。 - -* 内核根据[内核自防护项目][6](KSPP)进行了加固。 +Photon 通过只安装 Docker 守护进程所必需的东西来保持它的轻量。而这样做的结果是,这个发行版的大小大约只有 300MB。但这足以让 Linux 的运行一切正常。除此之外,Photon 的主要特点还有: +* 内核为性能而调整。 +* 内核根据[内核自防护项目][6](KSPP)进行了加固。 * 所有安装的软件包都根据加固的安全标识来构建。 - * 操作系统在信任验证后启动。 - -* Photon管理进程管理防火墙,网络,软件包,和远程登录在Photon机子上的用户。 - +* Photon 的管理进程可以管理防火墙、网络、软件包,和远程登录在 Photon 机器上的用户。 * 支持持久卷。 - * [Project Lightwave][7] 整合。 - * 及时的安全补丁与更新。 -Photon可以通过[ISO][22],[OVA][23],[Amazon Machine Image][24],[Google Compute Engine image][25]和[Azure VHD][26]安装使用。现在我将向您展示如何使用ISO镜像在VirtualBox上安装Photon。整个安装过程大概需要五分钟,在最后您将有一台随时可以部署容器的虚拟机。 +Photon 可以通过 [ISO 镜像][22]、[OVA][23]、[Amazon Machine Image][24]、[Google Compute Engine 镜像][25] 和 [Azure VHD][26] 安装使用。现在我将向您展示如何使用 ISO 镜像在 VirtualBox 上安装 Photon。整个安装过程大概需要五分钟,在最后您将有一台随时可以部署容器的虚拟机。 ### 创建虚拟机 -在部署第一台容器之前,您必须先创建一台虚拟机并安装Photon。为此,打开VirtualBox并点击“新建”按钮。跟着创建虚拟机向导进行配置(根据您的容器将需要的用途,为Photon提供必要的资源)。在创建好虚拟机后,您所需要做的第一件事就是更改配置。选择新建的虚拟机(在VirtualBox主窗口的左侧面板中),然后单击“设置”。在弹出的窗口中,点击“网络”(在左侧的导航中)。 +在部署第一台容器之前,您必须先创建一台虚拟机并安装 Photon。为此,打开 VirtualBox 并点击“新建”按钮。跟着创建虚拟机向导进行配置(根据您的容器将需要的用途,为 Photon 提供必要的资源)。在创建好虚拟机后,您所需要做的第一件事就是更改配置。选择新建的虚拟机(在 VirtualBox 主窗口的左侧面板中),然后单击“设置”。在弹出的窗口中,点击“网络”(在左侧的导航中)。 -在“网络”窗口(图1)中,你需要在“连接”的下拉窗口中选择桥接。这可以确保您的Photon服务与您的网络相连。完成更改后,单击确定。 - -### [photon_0.jpg][8] +在“网络”窗口(图1)中,你需要在“连接”的下拉窗口中选择桥接。这可以确保您的 Photon 服务与您的网络相连。完成更改后,单击确定。 ![change settings](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_0.jpg?itok=Q0yhOhsZ "change setatings") -图 1: 更改Photon在VirtualBox中的网络设置。[经许可使用][1] -从左侧的导航选择您的Photon虚拟机,点击启动。系统会提示您去加载IOS镜像。当您完成之后,Photon安装程序将会启动并提示您按回车后开始安装。安装过程基于ncurses(没有GUI),但它非常简单。 +*图 1: 更改 Photon 在 VirtualBox 中的网络设置。[经许可使用][1]* -接下来(图2),系统会询问您是要最小化安装,完整安装还是安装OSTree服务器。我选择了完整安装。选择您所需要的任意选项,然后按回车继续。 +从左侧的导航选择您的 Photon 虚拟机,点击启动。系统会提示您去加载 ISO 镜像。当您完成之后,Photon 安装程序将会启动并提示您按回车后开始安装。安装过程基于 ncurses(没有 GUI),但它非常简单。 -### [photon_1.jpg][9] +接下来(图2),系统会询问您是要最小化安装,完整安装还是安装 OSTree 服务器。我选择了完整安装。选择您所需要的任意选项,然后按回车继续。 ![installation type](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_2.jpg?itok=QL1Rs-PH "Photon") -图 2: 选择您的安装类型.[经许可使用][2] 
-在下一个窗口,选择您要安装Photon的磁盘。由于我们将其安装在虚拟机,因此只有一块磁盘会被列出(图3)。选择“自动”按下回车。然后安装程序会让您输入(并验证)管理员密码。在这之后镜像开始安装在您的磁盘上并在不到5分钟的时间内结束。 +*图 2: 选择您的安装类型。[经许可使用][2]* -### [photon_2.jpg][] +在下一个窗口,选择您要安装 Photon 的磁盘。由于我们将其安装在虚拟机,因此只有一块磁盘会被列出(图3)。选择“自动”按下回车。然后安装程序会让您输入(并验证)管理员密码。在这之后镜像开始安装在您的磁盘上并在不到 5 分钟的时间内结束。 ![Photon](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_1.jpg?itok=OdnMVpaA "installation type") -图 3: 选择安装Photon的硬盘.[经许可使用][3] -安装完成后,重启虚拟机并使用安装时创建的用户root和它的密码登录。一切就绪,你准备好开始工作了。 +*图 3: 选择安装 Photon 的硬盘。[经许可使用][3]* -在开始使用Docker之前,您需要更新一下Photon。Photon使用 _yum_ 软件包管理器,因此在以root用户登录后输入命令 _yum update_。如果有任何可用更新,则会询问您是否确认(图4)。 +安装完成后,重启虚拟机并使用安装时创建的用户 root 和它的密码登录。一切就绪,你准备好开始工作了。 -### [photon_3.jpg][11] +在开始使用 Docker 之前,您需要更新一下 Photon。Photon 使用 `yum` 软件包管理器,因此在以 root 用户登录后输入命令 `yum update`。如果有任何可用更新,则会询问您是否确认(图4)。 ![Updating](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_3.jpg?itok=vjqrspE2 "Updating") -图 4: 更新 Photon.[经许可使用][4] -用法 +*图 4: 更新 Photon。[经许可使用][4]* -正如我所说的,Photon提供了部署容器甚至创建Kubernetes集群所需要的所有包。但是,在使用之前还要做一些事情。首先要启动Docker守护进程。为此,执行以下命令: +### 用法 + +正如我所说的,Photon 提供了部署容器甚至创建 Kubernetes 集群所需要的所有包。但是,在使用之前还要做一些事情。首先要启动 Docker 守护进程。为此,执行以下命令: ``` systemctl start docker - systemctl enable docker ``` -现在我们需要创建一个标准用户,因此我们没有以root去运行docker命令。为此,执行以下命令: +现在我们需要创建一个标准用户,以便我们可以不用 root 去运行 `docker` 命令。为此,执行以下命令: ``` useradd -m USERNAME - passwd USERNAME ``` -其中USERNAME是我们新增的用户的名称。 +其中 “USERNAME” 是我们新增的用户的名称。 -接下来,我们需要将这个新用户添加到 _docker_ 组,执行命令: +接下来,我们需要将这个新用户添加到 “docker” 组,执行命令: ``` usermod -a -G docker USERNAME ``` -其中USERNAME是刚刚创建的用户的名称。 +其中 “USERNAME” 是刚刚创建的用户的名称。 -注销root用户并切换为新增的用户。现在,您已经可以不必使用 _sudo_ 命令或者是切换到root用户来使用 _docker_命令了。从Docker Hub中取出一个镜像开始部署容器吧。 +注销 root 用户并切换为新增的用户。现在,您已经可以不必使用 `sudo` 命令或者切换到 root 用户来使用 `docker` 命令了。从 Docker Hub 中取出一个镜像开始部署容器吧。 ### 一个优秀的容器平台 -在专注于容器方面,Photon毫无疑问是一个出色的平台。请注意,Photon是一个开源项目,因此没有任何付费支持。如果您对Photon有任何的问题,请移步Photon项目的Github下的[Issues][27],那里可以供您阅读相关问题,或者提交您的问题。如果您对Photon感兴趣,您也可以在项目的官方[Github][28]中找到源码。 +在专注于容器方面,Photon 毫无疑问是一个出色的平台。请注意,Photon 是一个开源项目,因此没有任何付费支持。如果您对 Photon 有任何的问题,请移步 Photon 项目的 GitHub 下的 [Issues][27],那里可以供您阅读相关问题,或者提交您的问题。如果您对 Photon 感兴趣,您也可以在该项目的官方 [GitHub][28]中找到源码。 -尝试一下Photon吧,看看它是否能够使得Docker容器和Kubernetes集群的部署更加容易。 +尝试一下 Photon 吧,看看它是否能够使得 Docker 容器和 Kubernetes 集群的部署更加容易。 -欲了解Linux的更多信息,可以通过学习Linux基金会和edX的免费课程,[“Linux 入门”][29]。 +欲了解 Linux 的更多信息,可以通过学习 Linux 基金会和 edX 的免费课程,[“Linux 入门”][29]。 -------------------------------------------------------------------------------- @@ -111,7 +98,7 @@ via: https://www.linux.com/learn/intro-to-linux/2017/11/photon-could-be-your-new 作者:[JACK WALLEN][a] 译者:[KeyLD](https://github.com/KeyLd) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d533874817e9245d849b7a7d4d2c91eadb2c62c6 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 4 Dec 2017 22:38:59 +0800 Subject: [PATCH 019/236] PUB:20171124 Photon Could Be Your New Favorite Container OS.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @KeyLD 文章的发布地址:https://linux.cn/article-9110-1.html 你的 LCTT 专页地址: https://linux.cn/lctt/KeyLD --- .../20171124 Photon Could Be Your New Favorite Container OS.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171124 Photon Could Be Your New Favorite Container OS.md (100%) diff --git a/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md b/published/20171124 Photon 
Could Be Your New Favorite Container OS.md similarity index 100% rename from translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md rename to published/20171124 Photon Could Be Your New Favorite Container OS.md From 17255c900268bf1c6bd669bbed53a5ab58b038e2 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Mon, 4 Dec 2017 22:48:12 +0800 Subject: [PATCH 020/236] Translating by qhwdw --- ...20160922 A Linux users guide to Logical Volume Management.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20160922 A Linux users guide to Logical Volume Management.md b/sources/tech/20160922 A Linux users guide to Logical Volume Management.md index ff0e390f38..baed1b3976 100644 --- a/sources/tech/20160922 A Linux users guide to Logical Volume Management.md +++ b/sources/tech/20160922 A Linux users guide to Logical Volume Management.md @@ -1,4 +1,4 @@ -A Linux user's guide to Logical Volume Management +Translating by qhwdw A Linux user's guide to Logical Volume Management ============================================================ ![Logical Volume Management (LVM)](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other11x_cc.png?itok=I_kCDYj0 "Logical Volume Management (LVM)") From 2746612bf85dcef35f6133b35b47f226dee9f147 Mon Sep 17 00:00:00 2001 From: qhwdw <33189910+qhwdw@users.noreply.github.com> Date: Mon, 4 Dec 2017 23:14:31 +0800 Subject: [PATCH 021/236] Revert "Translating by qhwdw" --- core.md | 2 +- ...20160922 A Linux users guide to Logical Volume Management.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/core.md b/core.md index 2ec8aa89cf..da45c009fc 100644 --- a/core.md +++ b/core.md @@ -36,4 +36,4 @@ - 除非必要,合并 PR 时不要 squash-merge wxy@LCTT -2016/12/24 +2016/12/24 \ No newline at end of file diff --git a/sources/tech/20160922 A Linux users guide to Logical Volume Management.md b/sources/tech/20160922 A Linux users guide to Logical Volume Management.md index baed1b3976..ff0e390f38 100644 --- a/sources/tech/20160922 A Linux users guide to Logical Volume Management.md +++ b/sources/tech/20160922 A Linux users guide to Logical Volume Management.md @@ -1,4 +1,4 @@ -Translating by qhwdw A Linux user's guide to Logical Volume Management +A Linux user's guide to Logical Volume Management ============================================================ ![Logical Volume Management (LVM)](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other11x_cc.png?itok=I_kCDYj0 "Logical Volume Management (LVM)") From 0d97d9c993bf939edfa0d62f07c7d39ec919a0f7 Mon Sep 17 00:00:00 2001 From: imquanquan Date: Mon, 4 Dec 2017 23:15:21 +0800 Subject: [PATCH 022/236] fix errors based on the previous translations --- ...ow to Manage Users with Groups in Linux.md | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/translated/tech/20171201 How to Manage Users with Groups in Linux.md b/translated/tech/20171201 How to Manage Users with Groups in Linux.md index 8baac8707b..1927de6817 100644 --- a/translated/tech/20171201 How to Manage Users with Groups in Linux.md +++ b/translated/tech/20171201 How to Manage Users with Groups in Linux.md @@ -5,13 +5,13 @@ ![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/group-of-people-1645356_1920.jpg?itok=rJlAxBSV) -在本教程中了解如何通过用户组和访问控制表(ACL)来管理用户。 +本教程可以了解如何通过用户组和访问控制表(ACL)来管理用户。 [创意共享协议][4] -当你需要管理一台容纳多个用户的 Linux 
机器时,比起一些基本的用户管理工具所提供的方法,有时候你需要对这些用户采取更多的用户权限管理方式。特别是当你要管理某些用户的权限时,这个想法尤为重要。比如说,你有一个目录,一个用户组中的用户可以通过读和写的权限访问这个目录,而其他用户组中的用户对这个目录只有读的权限。通过 Linux 这是完全可以实现的。但是你首先必须了解如何通过用户组和访问控制表(ACL)来管理用户。 +当你需要管理一台容纳多个用户的 Linux 机器时,比起一些基本的用户管理工具所提供的方法,有时候你需要对这些用户采取更多的用户权限管理方式。特别是当你要管理某些用户的权限时,这个想法尤为重要。比如说,你有一个目录,某个用户组中的用户可以通过读和写的权限访问这个目录,而其他用户组中的用户对这个目录只有读的权限。在 Linux 中,这是完全可以实现的。但前提是你必须先了解如何通过用户组和访问控制表(ACL)来管理用户。 -我们将从简单的用户开始,逐渐深入到复杂的访问控制表(ACL)。你所需要做的一切都将在你选择的 Linux 发行版中完成。本文的重点是用户组,所以不会涉及到关于用户的基础知识。 +我们将从简单的用户开始,逐渐深入到复杂的访问控制表(ACL)。你可以在你所选择的 Linux 发行版完成你所需要做的一切。本文的重点是用户组,所以不会涉及到关于用户的基础知识。 为了达到演示的目的,我将假设: @@ -27,7 +27,7 @@ * editors -olivia 属于 editors 用户组,而 nathan 属于 readers 用户组。reader 用户组对 ``/DATA`` 目录只有读的权限,而 editors 用户组则对 ``/DATA`` 目录同时有读和写的权限。当然,这是个非常小的任务,但它会给你基本的用法。你可以扩展这个任务以适应你其他更大的需求。 +olivia 属于 editors 用户组,而 nathan 属于 readers 用户组。reader 用户组对 ``/DATA`` 目录只有读的权限,而 editors 用户组则对 ``/DATA`` 目录同时有读和写的权限。当然,这是个非常小的任务,但它会给你基本的信息·。你可以扩展这个任务以适应你其他更大的需求。 我将在 Ubuntu 16.04 Server 平台上进行演示。这些命令都是通用的,唯一不同的是,要是在你的发行版中不使用 sudo 命令,你必须切换到 root 用户来执行这些命令。 @@ -74,7 +74,7 @@ addgroup editors [Used with permission][5] -创建用户组后,我们需要给他们添加用户。我们用以下命令来将 nathan 添加到 readers 用户组: +创建用户组后,我们需要添加我们的用户到这两个用户组。我们用以下命令来将 nathan 用户添加到 readers 用户组: ``` sudo usermod -a -G readers nathan @@ -85,11 +85,11 @@ sudo usermod -a -G readers nathan sudo usermod -a -G editors olivia ``` -现在我们已经准备好用用户组来管理用户了。 +现在我们可以通过用户组来管理用户了。 ### 给用户组授予目录的权限 -假设你有个目录 ``/READERS``,允许 readers 用户组的所有成员访问这个目录。首先,我们执行以下命令来更改目录所属用户组: +假设你有个目录 ``/READERS`` 且允许 readers 用户组的所有成员访问这个目录。首先,我们执行以下命令来更改目录所属用户组: ``` sudo chown -R :readers /READERS @@ -101,7 +101,7 @@ sudo chown -R :readers /READERS sudo chmod -R g-w /READERS ``` -然后我们执行下面的命令来收回其他用户对这个目录的访问权限(以防止任何不在读者组中的用户访问这个目录里的文件): +然后我们执行下面的命令来收回其他用户对这个目录的访问权限(以防止任何不在 readers 组中的用户访问这个目录里的文件): ``` sudo chmod -R o-x /READERS @@ -126,7 +126,7 @@ sudo chmod -R o-x /EDITORS ### 使用访问控制表(ACL) -现在,让我们把这个问题变得棘手一点。假设你有一个目录 ``/DATA`` 并且你想给 readers 用户组的成员读取权限同时给 editors 用户组的成员读和写的权限。为此,你必须要用到 setfacl 命令。setfacl 命令可以为文件或文件夹设置一个访问控制表(ACL)。 +现在,让我们把这个问题变得棘手一点。假设你有一个目录 ``/DATA`` 并且你想给 readers 用户组的成员读取权限并同时给 editors 用户组的成员读和写的权限。为此,你必须要用到 setfacl 命令。setfacl 命令可以为文件或文件夹设置一个访问控制表(ACL)。 这个命令的结构如下: @@ -142,7 +142,7 @@ sudo setfacl -m g:readers:rx -R /DATA 现在 readers 用户组里面的每一个用户都可以读取 /DATA 目录里的文件了,但是他们不能修改里面的内容。 -为了给 editors 用户组里面的用户读写权限,我们执行了以下的命令: +为了给 editors 用户组里面的用户读写权限,我们执行了以下命令: ``` sudo setfacl -m g:editors:rwx -R /DATA @@ -151,7 +151,7 @@ sudo setfacl -m g:editors:rwx -R /DATA ### 更多的权限控制 -使用访问控制表(ACL),你可以实现你所需的权限控制。你可以实现将用户添加到用户组,并且可靠灵活地控制这些用户组对每个目录的权限以达到你的需求。想要了解上述工具的更多信息,可以执行下列的命令: +使用访问控制表(ACL),你可以实现你所需的权限控制。你可以添加用户到用户组,并且灵活地控制这些用户组对每个目录的权限以达到你的需求。如果想了解上述工具的更多信息,可以执行下列的命令: * man usradd From 832f9e7fe8a42e4bd8b588f52a434f33a673542d Mon Sep 17 00:00:00 2001 From: Unknown Date: Tue, 5 Dec 2017 07:22:51 +0800 Subject: [PATCH 023/236] translating by aiwhj translating by aiwhj --- .../20171129 5 best practices for getting started with DevOps.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20171129 5 best practices for getting started with DevOps.md b/sources/tech/20171129 5 best practices for getting started with DevOps.md index 962f37aaf4..7694180c14 100644 --- a/sources/tech/20171129 5 best practices for getting started with DevOps.md +++ b/sources/tech/20171129 5 best practices for getting started with DevOps.md @@ -1,3 +1,4 @@ +translating---aiwhj 5 best practices for getting started with DevOps ============================================================ From a94f1fca1bfa0c9150da7656dce9685b31bec345 Mon Sep 17 
00:00:00 2001 From: Sihua Zheng Date: Tue, 5 Dec 2017 09:12:33 +0800 Subject: [PATCH 024/236] translated --- ...ilable on Flathub the Flatpak App Store.md | 73 ------------------- ...ilable on Flathub the Flatpak App Store.md | 70 ++++++++++++++++++ 2 files changed, 70 insertions(+), 73 deletions(-) delete mode 100644 sources/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md create mode 100644 translated/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md diff --git a/sources/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md b/sources/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md deleted file mode 100644 index fe72e37128..0000000000 --- a/sources/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md +++ /dev/null @@ -1,73 +0,0 @@ -translating---geekpi - - -# LibreOffice Is Now Available on Flathub, the Flatpak App Store - -![LibreOffice on Flathub](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/libroffice-on-flathub-750x250.jpeg) - -LibreOffice is now available to install from [Flathub][3], the centralised Flatpak app store. - -Its arrival allows anyone running a modern Linux distribution to install the latest stable release of LibreOffice in a click or two, without having to hunt down a PPA, tussle with tarballs or wait for a distro provider to package it up. - -A [LibreOffice Flatpak][5] has been available for users to download and install since August of last year and the [LibreOffice 5.2][6] release. - -What’s “new” here is the distribution method. Rather than release updates through their own dedicated server The Document Foundation has opted to use Flathub. - -This is  _great_  news for end users as it means there’s one less repo to worry about adding on a fresh install, but it’s also good news for Flatpak advocates too: LibreOffice is open-source software’s most popular productivity suite. Its support for both format and app store is sure to be warmly welcomed. - -At the time of writing you can install LibreOffice 5.4.2 from Flathub. New stable releases will be added as and when they’re released. - -### Enable Flathub on Ubuntu - -![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/flathub-750x495.png) - -Fedora, Arch, and Linux Mint 18.3 users have Flatpak installed, ready to go, out of the box. Mint even comes with the Flathub remote pre-enabled. - -[Install LibreOffice from Flathub][7] - -To get Flatpak up and running on Ubuntu you first have to install it: - -``` -sudo apt install flatpak gnome-software-plugin-flatpak -``` - -To be able to install apps from Flathub you need to add the Flathub remote server: - -``` -flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo -``` - -That’s pretty much it. Just log out and back in (so that Ubuntu Software refreshes its cache) and you  _should_  be able to find any Flatpak apps available on Flathub through the Ubuntu Software app. - -In this instance, search for “LibreOffice” and locate the result that has a line of text underneath mentioning Flathub. (Do bear in mind that Ubuntu has tweaked the Software client to shows Snap app results above everything else, so you may need scroll down the list of results to see it). - -There is a [bug with installing Flatpak apps][8] from a flatpakref file, so if the above method doesn’t work you can also install Flatpak apps form Flathub using the command line. 
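For LibreOffice specifically, the command-line route looks something like the sketch below; `org.libreoffice.LibreOffice` is the application ID in the flatpakref link above:

```
flatpak install flathub org.libreoffice.LibreOffice

flatpak run org.libreoffice.LibreOffice
```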
- -The Flathub website lists the command needed to install each app. Switch to the “Command Line” tab to see them. - -#### More apps on Flathub - -If you read this site regularly enough you’ll know that I  _love_  Flathub. It’s home to some of my favourite apps (Corebird, Parlatype, GNOME MPV, Peek, Audacity, GIMP… etc). I get the latest, stable versions of these apps (plus any dependencies they need) without compromise. - -And, as I tweeted a week or so back, most Flatpak apps now look great with GTK themes — no more [workarounds][9]required! - --------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2017/11/libreoffice-now-available-flathub-flatpak-app-store - -作者:[ JOEY SNEDDON ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:https://plus.google.com/117485690627814051450/?rel=author -[2]:http://www.omgubuntu.co.uk/category/news -[3]:http://www.flathub.org/ -[4]:http://www.omgubuntu.co.uk/2017/11/libreoffice-now-available-flathub-flatpak-app-store -[5]:http://www.omgubuntu.co.uk/2016/08/libreoffice-5-2-released-whats-new -[6]:http://www.omgubuntu.co.uk/2016/08/libreoffice-5-2-released-whats-new -[7]:https://flathub.org/repo/appstream/org.libreoffice.LibreOffice.flatpakref -[8]:https://bugs.launchpad.net/ubuntu/+source/gnome-software/+bug/1716409 -[9]:http://www.omgubuntu.co.uk/2017/05/flatpak-theme-issue-fix diff --git a/translated/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md b/translated/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md new file mode 100644 index 0000000000..4edb744098 --- /dev/null +++ b/translated/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md @@ -0,0 +1,70 @@ +# LibreOffice 现在在 Flatpak 的 Flathub 应用商店提供 + +![LibreOffice on Flathub](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/libroffice-on-flathub-750x250.jpeg) + +LibreOffice 现在可以从集中化的 Flatpak 应用商店 [Flathub][3] 进行安装。 + +它的到来使任何运行现代 Linux 发行版的人都能只点击一两次安装 LibreOffice 的最新稳定版本,而无需搜索 PPA,纠缠 tar 包或等待发行商将其打包。 + +自去年 8 月份以来,[LibreOffice Flatpak][5] 已经可供用户下载和安装 [LibreOffice 5.2][6]。 + +这里“新”的是发行方法。文档基金会选择使用 Flathub 而不是专门的服务器来发布更新。 + +这对于终端用户来说是一个_很好_的消息,因为这意味着不需要在新安装时担心仓库,但对于 Flatpak 的倡议者来说也是一个好消息:LibreOffice 是开源软件最流行的生产力套件。它对格式和应用商店的支持肯定会受到热烈的欢迎。 + +在撰写本文时,你可以从 Flathub 安装 LibreOffice 5.4.2。新的稳定版本将在发布时添加。 + +### 在 Ubuntu 上启用 Flathub + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/flathub-750x495.png) + +Fedora、Arch 和 Linux Mint 18.3 用户已经安装了 Flatpak,随时可以开箱即用。Mint 甚至预启用了 Flathub remote。 + +[从 Flathub 安装 LibreOffice][7] + +要在 Ubuntu 上启动并运行 Flatpak,首先必须安装它: + +``` +sudo apt install flatpak gnome-software-plugin-flatpak +``` + +为了能够从 Flathub 安装应用程序,你需要添加 Flathub 远程服务器: + +``` +flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo +``` + +这就行了。只需注销并返回(以便 Ubuntu Software 刷新其缓存),之后你应该能够通过 Ubuntu Software 看到 Flathub 上的任何 Flatpak 程序了。 + +在本例中,搜索 “LibreOffice” 并在结果中找到下面有 Flathub 提示的结果。(请记住,Ubuntu 已经调整了客户端,来将 Snap 程序显示在最上面,所以你可能需要向下滚动列表来查看它)。 + +从 flatpakref 中[安装 Flatpak 程序有一个 bug][8],所以如果上面的方法不起作用,你也可以使用命令行从 Flathub 中安装 Flathub 程序。 + +Flathub 网站列出了安装每个程序所需的命令。切换到“命令行”选项卡来查看它们。 + +#### Flathub 上更多的应用 + +如果你经常看这个网站,你就会知道我喜欢 Flathub。这是我最喜欢的一些应用(Corebird、Parlatype、GNOME MPV、Peek、Audacity、GIMP 等)的家园。我无需折衷就能获得这些应用程序的最新,稳定版本(加上它们需要的所有依赖)。 + 
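Keeping everything installed this way current is a single command; a minimal sketch (it updates all installed Flatpak apps along with their runtimes):

```
flatpak update -y
```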
+而且,在我 Twitter 上发布一周左右后,大多数 Flatpak 应用现在看起来有很棒的 GTK 主题 - 不再需要[临时方案][9]了! + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2017/11/libreoffice-now-available-flathub-flatpak-app-store + +作者:[ JOEY SNEDDON ][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:https://plus.google.com/117485690627814051450/?rel=author +[2]:http://www.omgubuntu.co.uk/category/news +[3]:http://www.flathub.org/ +[4]:http://www.omgubuntu.co.uk/2017/11/libreoffice-now-available-flathub-flatpak-app-store +[5]:http://www.omgubuntu.co.uk/2016/08/libreoffice-5-2-released-whats-new +[6]:http://www.omgubuntu.co.uk/2016/08/libreoffice-5-2-released-whats-new +[7]:https://flathub.org/repo/appstream/org.libreoffice.LibreOffice.flatpakref +[8]:https://bugs.launchpad.net/ubuntu/+source/gnome-software/+bug/1716409 +[9]:http://www.omgubuntu.co.uk/2017/05/flatpak-theme-issue-fix From 7af7dee62e699dcbcc64adf28af774f350f9d67e Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 5 Dec 2017 09:16:46 +0800 Subject: [PATCH 025/236] translating --- .../20171125 AWS to Help Build ONNX Open Source AI Platform.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md b/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md index c09d66bc57..1e9424178e 100644 --- a/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md +++ b/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md @@ -1,3 +1,5 @@ +translating---geekpi + AWS to Help Build ONNX Open Source AI Platform ============================================================ ![onnx-open-source-ai-platform](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-onnx-1.jpg) From 5eba6f6260c4fa3a6f856198e97136e819b45e4d Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 5 Dec 2017 18:31:06 +0800 Subject: [PATCH 026/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20find?= =?UTF-8?q?=20all=20files=20with=20a=20specific=20text=20using=20Linux=20s?= =?UTF-8?q?hell?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...with a specific text using Linux shell .md | 294 ++++++++++++++++++ 1 file changed, 294 insertions(+) create mode 100644 sources/tech/20171130 How to find all files with a specific text using Linux shell .md diff --git a/sources/tech/20171130 How to find all files with a specific text using Linux shell .md b/sources/tech/20171130 How to find all files with a specific text using Linux shell .md new file mode 100644 index 0000000000..f5909c27c9 --- /dev/null +++ b/sources/tech/20171130 How to find all files with a specific text using Linux shell .md @@ -0,0 +1,294 @@ +translating by lujun9972 +How to find all files with a specific text using Linux shell +------ +### Objective + +The following article provides some useful tips on how to find all files within any specific directory or entire file-system containing any specific word or string.
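If you want the short version before the detailed walkthrough: the workhorse behind every example below is `grep` in recursive mode, and a typical invocation looks like this sketch:

```
$ grep -Rn "some string" /path/to/search
```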
+ +### Difficulty + +EASY + +### Conventions + +* # - requires given command to be executed with root privileges either directly as a root user or by use of sudo command + +* $ - given command to be executed as a regular non-privileged user + +### Examples + +### Find all files with a specific string non-recursively + +The first command example will search for a string + +`stretch` + +in all files within + +`/etc/` + +directory while excluding any sub-directories: + +``` +# grep -s stretch /etc/* +/etc/os-release:PRETTY_NAME="Debian GNU/Linux 9 (stretch)" +/etc/os-release:VERSION="9 (stretch)" +``` +`-s` + +grep option will suppress error messages about nonexistent or unreadable files. The output shows filenames as well as prints the actual line containing requested string. + +### Find all files with a specific string recursively + +The above command omitted all sub-directories. To search recursively means to also traverse all sub-directories. The following command will search for a string + +`stretch` + +in all files within + +`/etc/` + +directory including all sub-directories: + +``` +# grep -R stretch /etc/* +/etc/apt/sources.list:# deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main +/etc/apt/sources.list:#deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main +/etc/apt/sources.list:deb http://ftp.au.debian.org/debian/ stretch main +/etc/apt/sources.list:deb-src http://ftp.au.debian.org/debian/ stretch main +/etc/apt/sources.list:deb http://security.debian.org/debian-security stretch/updates main +/etc/apt/sources.list:deb-src http://security.debian.org/debian-security stretch/updates main +/etc/dictionaries-common/words:backstretch +/etc/dictionaries-common/words:backstretch's +/etc/dictionaries-common/words:backstretches +/etc/dictionaries-common/words:homestretch +/etc/dictionaries-common/words:homestretch's +/etc/dictionaries-common/words:homestretches +/etc/dictionaries-common/words:outstretch +/etc/dictionaries-common/words:outstretched +/etc/dictionaries-common/words:outstretches +/etc/dictionaries-common/words:outstretching +/etc/dictionaries-common/words:stretch +/etc/dictionaries-common/words:stretch's +/etc/dictionaries-common/words:stretched +/etc/dictionaries-common/words:stretcher +/etc/dictionaries-common/words:stretcher's +/etc/dictionaries-common/words:stretchers +/etc/dictionaries-common/words:stretches +/etc/dictionaries-common/words:stretchier +/etc/dictionaries-common/words:stretchiest +/etc/dictionaries-common/words:stretching +/etc/dictionaries-common/words:stretchy +/etc/grub.d/00_header:background_image -m stretch `make_system_path_relative_to_its_root "$GRUB_BACKGROUND"` +/etc/os-release:PRETTY_NAME="Debian GNU/Linux 9 (stretch)" +/etc/os-release:VERSION="9 (stretch)" +``` + +The above + +`grep` + +command example lists all files containing string + +`stretch` + +. Meaning the lines with + +`stretches` + +, + +`stretched` + +etc. are also shown. 
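You can see this substring behavior in isolation with a quick pipe (illustrative only):

```
$ echo "outstretched" | grep stretch
outstretched
```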
Use grep's + +`-w` + +option to show only a specific word: + +``` +# grep -Rw stretch /etc/* +/etc/apt/sources.list:# deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main +/etc/apt/sources.list:#deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main +/etc/apt/sources.list:deb http://ftp.au.debian.org/debian/ stretch main +/etc/apt/sources.list:deb-src http://ftp.au.debian.org/debian/ stretch main +/etc/apt/sources.list:deb http://security.debian.org/debian-security stretch/updates main +/etc/apt/sources.list:deb-src http://security.debian.org/debian-security stretch/updates main +/etc/dictionaries-common/words:stretch +/etc/dictionaries-common/words:stretch's +/etc/grub.d/00_header:background_image -m stretch `make_system_path_relative_to_its_root "$GRUB_BACKGROUND"` +/etc/os-release:PRETTY_NAME="Debian GNU/Linux 9 (stretch)" +/etc/os-release:VERSION="9 (stretch)" +``` + +The above commands may produce an unnecessary output. The next example will only show all file names containing string + +`stretch` + +within + +`/etc/` + +directory recursively: + +``` +# grep -Rl stretch /etc/* +/etc/apt/sources.list +/etc/dictionaries-common/words +/etc/grub.d/00_header +/etc/os-release +``` + +All searches are by default case sensitive which means that any search for a string + +`stretch` + +will only show files containing the exact uppercase and lowercase match. By using grep's + +`-i` + +option the command will also list any lines containing + +`Stretch` + +, + +`STRETCH` + +, + +`StReTcH` + +etc., hence, to perform case-insensitive search. + +``` +# grep -Ril stretch /etc/* +/etc/apt/sources.list +/etc/dictionaries-common/default.hash +/etc/dictionaries-common/words +/etc/grub.d/00_header +/etc/os-release +``` + +Using + +`grep` + +command it is also possible to include only specific files as part of the search. For example we only would like to search for a specific text/string within configuration files with extension + +`.conf` + +. The next example will find all files with extension + +`.conf` + +within + +`/etc` + +directory containing string + +`bash` + +: + +``` +# grep -Ril bash /etc/*.conf +OR +# grep -Ril --include=\*.conf bash /etc/* +/etc/adduser.conf +``` +`--exclude` + +option we can exclude any specific filenames: + +``` +# grep -Ril --exclude=\*.conf bash /etc/* +/etc/alternatives/view +/etc/alternatives/vim +/etc/alternatives/vi +/etc/alternatives/vimdiff +/etc/alternatives/rvim +/etc/alternatives/ex +/etc/alternatives/rview +/etc/bash.bashrc +/etc/bash_completion.d/grub +/etc/cron.daily/apt-compat +/etc/cron.daily/exim4-base +/etc/dictionaries-common/default.hash +/etc/dictionaries-common/words +/etc/inputrc +/etc/passwd +/etc/passwd- +/etc/profile +/etc/shells +/etc/skel/.profile +/etc/skel/.bashrc +/etc/skel/.bash_logout +``` + +Same as with files grep can also exclude specific directories from the search. Use + +`--exclude-dir` + +option to exclude directory from search. 
The following search example will find all files containing string + +`stretch` + +within + +`/etc` + +directory and exclude + +`/etc/grub.d` + +from search: + +``` +# grep --exclude-dir=/etc/grub.d -Rwl stretch /etc/* +/etc/apt/sources.list +/etc/dictionaries-common/words +/etc/os-release +``` + +By using + +`-n` + +option grep will also provide an information regarding a line number where the specific string was found: + +``` +# grep -Rni bash /etc/*.conf +/etc/adduser.conf:6:DSHELL=/bin/bash +``` + +The last example will use + +`-v` + +option to list all files NOT containing a specific keyword. For example the following search will list all files within + +`/etc/` + +directory which do not contain string + +`stretch` + +: + +``` +# grep -Rlv stretch /etc/* +``` + +-------------------------------------------------------------------------------- + +via: https://linuxconfig.org/how-to-find-all-files-with-a-specific-text-using-linux-shell + +作者:[Lubos Rendek][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://linuxconfig.org From 57252987000dc1f3968a790330119a1afc0182a9 Mon Sep 17 00:00:00 2001 From: TRsky <625310581@qq.com> Date: Tue, 5 Dec 2017 20:02:08 +0800 Subject: [PATCH 027/236] translate the passage --- ...ke up and Shut Down Linux Automatically.md | 75 ++++++++++--------- 1 file changed, 38 insertions(+), 37 deletions(-) diff --git a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md index 3a2c20ad52..5ed3f2bf10 100644 --- a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md +++ b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md @@ -1,26 +1,24 @@ - translating by HardworkFish - -Wake up and Shut Down Linux Automatically -============================================================ +自动唤醒和关闭 Linux +===================== ### [banner.jpg][1] -![time keeper](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner.jpg?itok=zItspoSb) - -Learn how to configure your Linux computers to watch the time for you, then wake up and shut down automatically. +![timekeeper](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner.jpg?itok=zItspoSb) +了解如何通过配置 Linux 计算机来查看时间,并实现自动唤醒和关闭 Linux + [Creative Commons Attribution][6][The Observatory at Delhi][7] -Don't be a watt-waster. If your computers don't need to be on then shut them down. For convenience and nerd creds, you can configure your Linux computers to wake up and shut down automatically. +不要成为一个电能浪费者。如果你的电脑不需要开机就请把他们关机。出于方便和计算机宅的考虑,你可以通过配置你的 Linux 计算机实现自动唤醒和关闭 Linux 。 -### Precious Uptimes +### 系统运行时间 -Some computers need to be on all the time, which is fine as long as it's not about satisfying an uptime compulsion. Some people are very proud of their lengthy uptimes, and now that we have kernel hot-patching that leaves only hardware failures requiring shutdowns. I think it's better to be practical. Save electricity as well as wear on your moving parts, and shut them down when they're not needed. For example, you can wake up a backup server at a scheduled time, run your backups, and then shut it down until it's time for the next backup. Or, you can configure your Internet gateway to be on only at certain times. Anything that doesn't need to be on all the time can be configured to turn on, do a job, and then shut down. 
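That backup-server pattern can be as small as a single line in root's crontab (as set up above); a sketch, where `/usr/local/bin/run-backup.sh` is a hypothetical stand-in for your real backup script:

```
# run the backup at 02:00, then power the machine off again when it finishes
0 2 * * * /usr/local/bin/run-backup.sh && /sbin/shutdown -h now
```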
+有时候有些电脑需要一直处在开机状态,在不超过电脑运行时间的限制下这种情况是被允许的。有些人为他们的计算机可以长时间的正常运行而感到自豪,且现在我们有内核热补丁能够实现只有在硬件发生故障时才允许机器关机。我认为比较实际可行的是能够在机器需要节省电能以及在移动硬件发生磨损的情况下,且在不需要机器运行的情况下将其关机。比如,你可以在规定的时间内唤醒备份服务器,执行备份,然后关闭它直到下一次进行备份时间。或者,你可以只在特定时间内配置网卡。任何不需要一直运行的东西都可以将其配置成在其需要工作的时候打开,待其完成工作后将其关闭。 -### Sleepies +### 系统休眠 -For computers that don't need to be on all the time, good old cron will shut them down reliably. Use either root's cron, or /etc/crontab. This example creates a root cron job to shut down every night at 11:15 p.m. +对于不需要一直运行的电脑,使用 root 的 cron 定时任务 或者 /etc/crontab 文件 可以可靠地关闭电脑。这个例子创建一个 root 定时任务实现每天下午 11点15分 定时关机。 ``` # crontab -e -u root @@ -32,33 +30,34 @@ For computers that don't need to be on all the time, good old cron will shut the 15 23 * * 1-5 /sbin/shutdown -h now ``` -You may also use /etc/crontab, which is fast and easy, and everything is in one file. You have to specify the user: +一个快速、容易的方式是,使用 /etc/crontab 文件。你必须指定用户: ``` 15 23 * * 1-5 root shutdown -h now ``` -Auto-wakeups are very cool; most of my SUSE colleagues are in Nuremberg, so I am crawling out of bed at 5 a.m. to have a few hours of overlap with their schedules. My work computer turns itself on at 5:30 a.m., and then all I have to do is drag my coffee and myself to my desk to start work. It might not seem like pressing a power button is a big deal, but at that time of day every little thing looms large. +实现自动唤醒是一件很酷的事情;我的大多数 SUSE (SUSE Linux)同事都在纽伦堡,因此,为了能够跟同事的计划有几小时的重叠时间我需要在凌晨5点起床。我的计算机早上 5点半自动开始工作,而我只需要将自己和咖啡拖到我的桌子上就可以开始工作了。按下电源按钮看起来好像并不是什么大事,但是在每天的那个时候每件小事都会变得很大。 -Waking up your Linux PC can be less reliable than shutting it down, so you may want to try different methods. You can use wakeonlan, RTC wakeups, or your PC's BIOS to set scheduled wakeups. These all work because, when you power off your computer, it's not really all the way off; it is in an extremely low-power state and can receive and respond to signals. You need to use the power supply switch to turn it off completely. +唤醒 Linux 计算机可能不比关闭它可靠,因此你可能需要尝试不同的办法。你可以使用 远程唤醒(Wake-On-LAN)、RTC 唤醒或者个人电脑的 BIOS 设置预定的唤醒。做这些工作的原因是,当你关闭电脑时,这并不是真正关闭了计算机;此时计算机处在极低功耗状态且还可以接受和响应信号。你需要使用电源开关将其彻底关闭。 -### BIOS Wakeup +### BIOS 唤醒 -A BIOS wakeup is the most reliable. My system BIOS has an easy-to-use wakeup scheduler (Figure 1). Chances are yours does, too. Easy peasy. +BIOS 唤醒是最可靠的。我的系统主板 BIOS 有一个易于使用的唤醒调度程序。(Figure 1). Chances are yours does, too. Easy peasy. ### [fig-1.png][2] -![wake up](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_11.png?itok=8qAeqo1I) +![wakeup](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_11.png?itok=8qAeqo1I) Figure 1: My system BIOS has an easy-to-use wakeup scheduler. [Used with permission][8] -### wakeonlan -wakeonlan is the next most reliable method. This requires sending a signal from a second computer to the computer you want to power on. You could use an Arduino or Raspberry Pi to send the wakeup signal, a Linux-based router, or any Linux PC. First, look in your system BIOS to see if wakeonlan is supported -- which it should be -- and then enable it, as it should be disabled by default. +### 主机远程唤醒(Wake-On-LAN) -Then, you'll need an Ethernet network adapter that supports wakeonlan; wireless adapters won't work. 
You'll need to verify that your Ethernet card supports wakeonlan: +远程唤醒是仅次于 BIOS 唤醒的又一种可靠的唤醒方法。这需要你从第二台计算机发送信号到所要打开的计算机。可以使用 Arduino 或 树莓派(Raspberry Pi) 发送基于 Linux 的路由器或者任何Linux 计算机的唤醒信号。首先,查看系统主板 BIOS 是否支持 Wake-On-LAN –如果支持—然后启动它,因为它被默认为禁用。 + +然后,需要一个支持 Wake-On-LAN 的网卡;无线网卡并不支持。你需要运行 ethtool 命令查看网卡是否支持 Wake-On-LAN : ``` # ethtool eth0 | grep -i wake-on @@ -66,23 +65,23 @@ Then, you'll need an Ethernet network adapter that supports wakeonlan; wireless Wake-on: g ``` -* d -- all wake ups disabled +* d -- 禁用 -* p -- wake up on physical activity +* p -- 物理活动唤醒 -* u -- wake up on unicast messages +* u -- 单播消息唤醒 -* m -- wake up on multicast messages +* m -- 多播(组播)消息唤醒 -* b -- wake up on broadcast messages +* b -- 广播消息唤醒 -* a -- wake up on ARP messages +* a -- ARP(Address Resolution Protocol)唤醒 -* g -- wake up on magic packet +* g -- magic packet 唤醒 -* s -- set the Secure On password for the magic packet +* s -- magic packet 设置安全密码 -man ethtool is not clear on what the p switch does; it suggests that any signal will cause a wake up. In my testing, however, it doesn't do that. The one that must be enabled is g -- wake up on magic packet, and the Wake-on line shows that it is already enabled. If it is not enabled, you can use ethtool to enable it, using your own device name, of course: +man ethtool 并不清楚开关 p 的作用;这表明任何信号都会导致唤醒。在我的测试中,然而,它并没有这么做。Wake-On-Lan 被启动的 Wake-on 参数是 g –- magic packet 唤醒,且当 Wake-On 值已经为 g 时表示网卡已支持 Wake-On-Lan 。如果它没有被启用,你可以通过 ethtool 命令来启用它。 ``` # ethtool -s eth0 wol g @@ -100,26 +99,26 @@ Figure 2: Enable Wake on LAN. [Used with permission][9] -Another option is recent Network Manager versions have a nice little checkbox to enable wakeonlan (Figure 2). +另外一个选择是最近的网络管理器版本有一个很好的小复选框能够唤醒局域网(图2)。 -There is a field for setting a password, but if your network interface doesn't support the Secure On password, it won't work. +这里有一个可以用于设置密码的地方,但是如果你的网络接口不支持 Secure On password,它就不起作用。 -Now you need to configure a second PC to send the wakeup signal. You don't need root privileges, so create a cron job for your user. You need the MAC address of the network interface on the machine you're waking up: +现在你需要配置第二台计算机来发送唤醒信号。你并不需要 root 权限,所以你可以为你的用户创建 cron 任务。你需要正在唤醒的机器上的网络接口和MAC地址。 ``` 30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B ``` -Using the real-time clock for wakeups is the least reliable method. Check out [Wake Up Linux With an RTC Alarm Clock][4]; this is a bit outdated as most distros use systemd now. Come back next week to learn more about updated ways to use RTC wakeups. +通过使用实时闹钟来唤醒计算机是最不可靠的方法。查看 [Wake Up Linux With an RTC Alarm Clock][4] ;对于现在的大多数发行版来说这种方法已经有点过时了。下周继续了解更多关于使用RTC唤醒的方法。 -Learn more about Linux through the free ["Introduction to Linux" ][5]course from The Linux Foundation and edX. 
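One caveat worth noting about the ethtool approach above: a setting made with `ethtool -s` typically lasts only until the next reboot. On systems managed by NetworkManager, one way to make it persistent is the matching connection property; a sketch, where the connection name is an illustrative assumption:

```
nmcli connection modify "Wired connection 1" 802-3-ethernet.wake-on-lan magic
```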
+通过 Linux 基金会和 edX 可以学习更多关于 Linux 的免费 [ Linux 入门][5]教程。 -------------------------------------------------------------------------------- -via: https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux-automatically +via:https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux-automatically 作者:[Carla Schroder] -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/HardworkFish) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -133,3 +132,5 @@ via: https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux [7]:http://www.columbia.edu/itc/mealac/pritchett/00routesdata/1700_1799/jaipur/delhijantarearly/delhijantarearly.html [8]:https://www.linux.com/licenses/category/used-permission [9]:https://www.linux.com/licenses/category/used-permission + + From 2fa592a2cb1e9994ed40a207cde287b8efb19657 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=82=B9=E8=8D=A3=E5=8D=87?= Date: Tue, 5 Dec 2017 20:04:00 +0800 Subject: [PATCH 028/236] Update 20171120 Mark McIntyre How Do You Fedora.md --- sources/tech/20171120 Mark McIntyre How Do You Fedora.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20171120 Mark McIntyre How Do You Fedora.md b/sources/tech/20171120 Mark McIntyre How Do You Fedora.md index bfd19e1eda..40af7eba2f 100644 --- a/sources/tech/20171120 Mark McIntyre How Do You Fedora.md +++ b/sources/tech/20171120 Mark McIntyre How Do You Fedora.md @@ -1,5 +1,6 @@ translating by zrszrszrs # [Mark McIntyre: How Do You Fedora?][1] +# [Mark McIntyre: 你是如何使用Fedora的?][1] ![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg) From 76fbbbde563baf68ff3f74227b524684bdcd7eda Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 5 Dec 2017 20:18:00 +0800 Subject: [PATCH 029/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Encryp?= =?UTF-8?q?t=20and=20Decrypt=20Individual=20Files=20With=20GPG?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...t and Decrypt Individual Files With GPG.md | 145 ++++++++++++++++++ 1 file changed, 145 insertions(+) create mode 100644 sources/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md diff --git a/sources/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md b/sources/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md new file mode 100644 index 0000000000..eea4b569bf --- /dev/null +++ b/sources/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md @@ -0,0 +1,145 @@ +translating by lujun9972 +How to Encrypt and Decrypt Individual Files With GPG +------ +### Objective + +Encrypt individual files with GPG. + +### Distributions + +This will work with any Linux distribution. + +### Requirements + +A working Linux install with GPG installed or root privileges to install it. + +### Difficulty + +Easy + +### Conventions + +* # - requires given command to be executed with root privileges either directly as a root user or by use of sudo command + +* $ - given command to be executed as a regular non-privileged user + +### Introduction + +Encryption is important. It's absolutely vital to protecting sensitive information. Your personal files are worth encrypting, and GPG provides the perfect solution. + +### Install GPG + +GPG is a widely used piece of software. You can find it in nearly every distribution's repositories. If you don't have it already, install it on your computer. 
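A quick way to check whether it is already present (either the `gpg` or the `gpg2` binary may exist, depending on the distribution):

```
$ gpg --version || gpg2 --version
```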
+ +### Debian/Ubuntu + +``` +$ sudo apt install gnupg +``` + +``` +# dnf install gnupg2 +``` + +``` +# pacman -S gnupg +``` + +``` +# emerge --ask app-crypt/gnupg +``` + +You need a key pair to be able to encrypt and decrypt files. If you already have a key pair that you generated for SSH, you can actually use those here. If not, GPG includes a utility to generate them. + +``` +$ gpg --full-generate-key +``` + +The first thing GPG will ask for is the type of key. Use the default, if there isn't anything specific that you need. + +The next thing that you'll need to set is the key size. + +`4096` + +is probably best. + +After that, you can set an expiration date. Set it to + +`0` + +if you want the key to be permanent. + +Then, it will ask you for your name. + +Finally, it asks for your email address. + +You can add a comment if you need to too. + +When it has everything, GPG will ask you to verify the information. + +GPG will ask if you want a password for your key. This is optional, but adds a degree of protection. As it's doing that, GPG will collect entropy from your actions to increase the strength of your key. When it's done, GPG will print out the information pertaining to the key you just created. + +### Basic Encryption + +Now that you have your key, encrypting files is very easy. Create a blank text file in your + +`/tmp` + +directory to practice with. + +``` +$ touch /tmp/test.txt +``` +`-e` + +flag tells GPG that you'll be encrypting a file, and the + +`-r` + +flag specifies a recipient. + +``` +$ gpg -e -r "Your Name" /tmp/test.txt +``` + +### Basic Decryption + +You have an encrypted file. Try decrypting it. You don't need to specify any keys. That information is encoded with the file. GPG will try the keys that it has to decrypt it. + +``` +$ gpg -d /tmp/test.txt.gpg +``` + +Say you + + _do_ + +need to send the file. You need to have the recipient's public key. How you get that from them is up to you. You can ask them to send it to you, or it may be publicly available on a keyserver. + +Once you have it, import the key into GPG. + +``` +$ gpg --import yourfriends.key +``` + +``` +gpg --export -a "Your Name" > your.key +``` + +``` +$ gpg -e -u "Your Name" -r "Their Name" /tmp/test.txt +``` + +That's mostly it. There are some more advanced options available, but you won't need them ninety-nine percent of the time. GPG is that easy to use. You can also use the key pair that you created to send and receive encrypted email in much the same way as this, though most email clients automate the process once they have the keys. 
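As a quick sanity check of the whole flow, here is one possible end-to-end session. It assumes a key for "Your Name" already exists, and the exact prompts and output will vary with your GPG version:

```
$ echo "secret" > /tmp/demo.txt
$ gpg -e -r "Your Name" /tmp/demo.txt   # writes /tmp/demo.txt.gpg
$ rm /tmp/demo.txt
$ gpg -d /tmp/demo.txt.gpg > /tmp/demo.txt
$ cat /tmp/demo.txt
secret
```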
+ +-------------------------------------------------------------------------------- + +via: https://linuxconfig.org/how-to-encrypt-and-decrypt-individual-files-with-gpg + +作者:[Nick Congleton][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://linuxconfig.org From 21a2999ba0fc513fbc3d344f5d22979960f07c6e Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 5 Dec 2017 21:41:32 +0800 Subject: [PATCH 030/236] translated --- ...t and Decrypt Individual Files With GPG.md | 132 ++++++++---------- 1 file changed, 62 insertions(+), 70 deletions(-) diff --git a/sources/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md b/sources/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md index eea4b569bf..d28e36e358 100644 --- a/sources/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md +++ b/sources/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md @@ -1,136 +1,128 @@ -translating by lujun9972 -How to Encrypt and Decrypt Individual Files With GPG +如何使用 GPG 加解密文件 ------ -### Objective +### 目标 -Encrypt individual files with GPG. +使用 GPG 加密文件 -### Distributions +### 发行版 -This will work with any Linux distribution. +适用于任何发行版 -### Requirements +### 要求 -A working Linux install with GPG installed or root privileges to install it. +安装了 GPG 的Linux 或者拥有 root 权限来安装它. -### Difficulty +### 难度 -Easy +简单 -### Conventions +### 约定 -* # - requires given command to be executed with root privileges either directly as a root user or by use of sudo command +* # - 需要使用root权限来执行指定命令,可以直接使用root用户来执行也可以使用sudo命令 -* $ - given command to be executed as a regular non-privileged user +* $ - 可以使用普通用户来执行指定命令 -### Introduction +### 介绍 -Encryption is important. It's absolutely vital to protecting sensitive information. Your personal files are worth encrypting, and GPG provides the perfect solution. +加密非常重要. 它对于保护敏感信息来说是必不可少的. +你的私人文件应该要被加密, 而 GPG 提供了很好的解决方案. -### Install GPG +### 安装 GPG -GPG is a widely used piece of software. You can find it in nearly every distribution's repositories. If you don't have it already, install it on your computer. +GPG 的使用非常广泛. 你在几乎每个发行版的仓库中都能找到它. +如果你还没有安装它,那现在就来安装一下吧. -### Debian/Ubuntu +#### Debian/Ubuntu -``` +```shell $ sudo apt install gnupg ``` - -``` +#### Fedora +```shell # dnf install gnupg2 ``` - -``` +#### Arch +```shell # pacman -S gnupg ``` - -``` +#### Gentoo +```shell # emerge --ask app-crypt/gnupg ``` +### Create a Key +你需要一个密钥对来加解密文件. 如果你为 SSH 已经生成过了密钥对,那么你可以直接使用它. +如果没有,GPG包含工具来生成密钥对. -You need a key pair to be able to encrypt and decrypt files. If you already have a key pair that you generated for SSH, you can actually use those here. If not, GPG includes a utility to generate them. - -``` +```shell $ gpg --full-generate-key ``` +GPG 有一个命令行程序帮你一步一步的生成密钥. 它还有一个简单得多的工具,但是这个工具不能让你设置密钥类型,密钥的长度以及过期时间,因此不推荐使用这个工具. -The first thing GPG will ask for is the type of key. Use the default, if there isn't anything specific that you need. +GPG 首先会询问你密钥的类型. 没什么特别的话选择默认值就好. -The next thing that you'll need to set is the key size. +下一步需要设置密钥长度. `4096` 是一个不错的选择. -`4096` +之后, 可以设置过期的日期. 如果希望密钥永不过期则设置为 `0` -is probably best. +然后,输入你的名称. -After that, you can set an expiration date. Set it to +最后, 输入电子邮件地址. -`0` +如果你需要的话,还能添加一个注释. -if you want the key to be permanent. +所有这些都完成后, GPG 会让你校验一下这些信息. -Then, it will ask you for your name. +GPG 还会问你是否需要为密钥设置密码. 这一步是可选的, 但是会增加保护的程度. 
+若需要设置密码,则 GPG 会收集你的操作信息来增加密钥的健壮性. 所有这些都完成后, GPG 会显示密钥相关的信息.

-### Basic Encryption
+### 加密的基本方法

-Now that you have your key, encrypting files is very easy. Create a blank text file in your
+现在你拥有了自己的密钥, 加密文件非常简单. 使用下面的命令在 `/tmp` 目录中创建一个空白文本文件.

-`/tmp`

-directory to practice with.

-```
+```shell
$ touch /tmp/test.txt
```
-`-e`

-flag tells GPG that you'll be encrypting a file, and the
+然后用 GPG 来加密它. 这里 `-e` 标志告诉 GPG 你想要加密文件, `-r` 标志指定接收者.

-`-r`

-flag specifies a recipient.

-```
+```shell
$ gpg -e -r "Your Name" /tmp/test.txt
```

-### Basic Decryption
+GPG 需要知道这个文件的接收者和发送者. 由于这个文件是你的,因此无需指定发送者,而接收者就是你自己.

-You have an encrypted file. Try decrypting it. You don't need to specify any keys. That information is encoded with the file. GPG will try the keys that it has to decrypt it.
+### 解密的基本方法

-```
+你收到加密文件后,就需要对它进行解密. 你无需指定解密用的密钥. 这个信息被编码在文件中. GPG 会尝试用其中的密钥进行解密.

+```shell
$ gpg -d /tmp/test.txt.gpg
```

-Say you
+### 发送文件
+假设你需要发送文件给别人. 你需要有接收者的公钥. 具体怎么获得密钥由你自己决定. 你可以让他们直接把公钥发送给你, 也可以通过密钥服务器来获取.

- _do_
+收到对方公钥后, 导入公钥到GPG 中.

-need to send the file. You need to have the recipient's public key. How you get that from them is up to you. You can ask them to send it to you, or it may be publicly available on a keyserver.

-Once you have it, import the key into GPG.

-```
+```shell
$ gpg --import yourfriends.key
```

-```
+这些公钥与你自己创建的密钥一样,自带了名称和电子邮件地址的信息.
+记住,为了让别人能解密你的文件,别人也需要你的公钥. 因此导出公钥并将之发送出去.

+```shell
gpg --export -a "Your Name" > your.key
```

+现在可以开始加密要发送的文件了. 它跟之前的步骤差不多, 只是需要指定你自己为发送人.
```shell
$ gpg -e -u "Your Name" -r "Their Name" /tmp/test.txt
```

-That's mostly it. There are some more advanced options available, but you won't need them ninety-nine percent of the time. GPG is that easy to use. You can also use the key pair that you created to send and receive encrypted email in much the same way as this, though most email clients automate the process once they have the keys.
+### 结语
+就这样了. GPG 还有一些高级选项, 不过你在 99% 的时间内都不会用到这些高级选项. GPG 就是这么易于使用.
+你也可以使用创建的密钥对来发送和接受加密邮件,其步骤跟上面演示的差不多, 不过大多数的电子邮件客户端在拥有密钥的情况下会自动帮你做这个动作.
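+顺带一提,上面说的“高级选项”里有一个很常用:对称加密,只用一个口令就能加密文件,完全不需要密钥对。下面是一个简单的示意(`-c` 即 `--symmetric`;解密时输入同一个口令即可):

+```shell
# 用口令对文件做对称加密, 默认生成 /tmp/test.txt.gpg
$ gpg -c /tmp/test.txt
# 解密时 GPG 会提示输入同一个口令
$ gpg -d /tmp/test.txt.gpg
```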
-------------------------------------------------------------------------------- From 35a7f43a8fa8a02581bffaca2025fcd6fff0ad83 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 5 Dec 2017 21:42:32 +0800 Subject: [PATCH 031/236] change to translated --- ...171024 How to Encrypt and Decrypt Individual Files With GPG.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md (100%) diff --git a/sources/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md b/translated/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md similarity index 100% rename from sources/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md rename to translated/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md From 93395794ecbfcb4ade7d71e1e5c07ae46ac28c09 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 5 Dec 2017 21:50:49 +0800 Subject: [PATCH 032/236] reformat --- ...t and Decrypt Individual Files With GPG.md | 62 +++++++++---------- 1 file changed, 31 insertions(+), 31 deletions(-) diff --git a/translated/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md b/translated/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md index d28e36e358..6b534be640 100644 --- a/translated/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md +++ b/translated/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md @@ -10,7 +10,7 @@ ### 要求 -安装了 GPG 的Linux 或者拥有 root 权限来安装它. +安装了 GPG 的 Linux 或者拥有 root 权限来安装它。 ### 难度 @@ -18,19 +18,19 @@ ### 约定 -* # - 需要使用root权限来执行指定命令,可以直接使用root用户来执行也可以使用sudo命令 +* # - 需要使用 root 权限来执行指定命令,可以直接使用 root 用户来执行也可以使用 sudo 命令 * $ - 可以使用普通用户来执行指定命令 ### 介绍 -加密非常重要. 它对于保护敏感信息来说是必不可少的. -你的私人文件应该要被加密, 而 GPG 提供了很好的解决方案. +加密非常重要。它对于保护敏感信息来说是必不可少的。 +你的私人文件应该要被加密,而 GPG 提供了很好的解决方案。 ### 安装 GPG -GPG 的使用非常广泛. 你在几乎每个发行版的仓库中都能找到它. -如果你还没有安装它,那现在就来安装一下吧. +GPG 的使用非常广泛。你在几乎每个发行版的仓库中都能找到它。 +如果你还没有安装它,那现在就来安装一下吧。 #### Debian/Ubuntu @@ -50,79 +50,79 @@ $ sudo apt install gnupg # emerge --ask app-crypt/gnupg ``` ### Create a Key -你需要一个密钥对来加解密文件. 如果你为 SSH 已经生成过了密钥对,那么你可以直接使用它. -如果没有,GPG包含工具来生成密钥对. +你需要一个密钥对来加解密文件。如果你为 SSH 已经生成过了密钥对,那么你可以直接使用它。 +如果没有,GPG 包含工具来生成密钥对。 ```shell $ gpg --full-generate-key ``` -GPG 有一个命令行程序帮你一步一步的生成密钥. 它还有一个简单得多的工具,但是这个工具不能让你设置密钥类型,密钥的长度以及过期时间,因此不推荐使用这个工具. +GPG 有一个命令行程序帮你一步一步的生成密钥。它还有一个简单得多的工具,但是这个工具不能让你设置密钥类型,密钥的长度以及过期时间,因此不推荐使用这个工具。 -GPG 首先会询问你密钥的类型. 没什么特别的话选择默认值就好. +GPG 首先会询问你密钥的类型。没什么特别的话选择默认值就好。 -下一步需要设置密钥长度. `4096` 是一个不错的选择. +下一步需要设置密钥长度。`4096` 是一个不错的选择。 -之后, 可以设置过期的日期. 如果希望密钥永不过期则设置为 `0` +之后,可以设置过期的日期。 如果希望密钥永不过期则设置为 `0` -然后,输入你的名称. +然后,输入你的名称。 -最后, 输入电子邮件地址. +最后,输入电子邮件地址。 -如果你需要的话,还能添加一个注释. +如果你需要的话,还能添加一个注释。 -所有这些都完成后, GPG 会让你校验一下这些信息. +所有这些都完成后,GPG 会让你校验一下这些信息。 -GPG 还会问你是否需要为密钥设置密码. 这一步是可选的, 但是会增加保护的程度. -若需要设置密码,则 GPG 会收集你的操作信息来增加密钥的健壮性. 所有这些都完成后, GPG 会显示密钥相关的信息. +GPG 还会问你是否需要为密钥设置密码。这一步是可选的, 但是会增加保护的程度。 +若需要设置密码,则 GPG 会收集你的操作信息来增加密钥的健壮性。 所有这些都完成后, GPG 会显示密钥相关的信息。 ### 加密的基本方法 -现在你拥有了自己的密钥, 加密文件非常简单. 使用虾米那命令在 `/tmp` 目录中创建一个空白文本文件. +现在你拥有了自己的密钥,加密文件非常简单。 使用虾米那命令在 `/tmp` 目录中创建一个空白文本文件。 ```shell $ touch /tmp/test.txt ``` -然后用 GPG 来加密它. 这里 `-e` 标志告诉 GPG 你想要加密文件, `-r` 标志指定接收者. +然后用 GPG 来加密它。这里 `-e` 标志告诉 GPG 你想要加密文件, `-r` 标志指定接收者。 ```shell $ gpg -e -r "Your Name" /tmp/test.txt ``` -GPG 需要知道这个文件的接收者和发送者. 由于这个文件给是你的,因此无需指定发送者,而接收者就是你自己. +GPG 需要知道这个文件的接收者和发送者。由于这个文件给是你的,因此无需指定发送者,而接收者就是你自己。 ### 解密的基本方法 -你收到加密文件后,就需要对它进行解密. 你无需指定解密用的密钥. 这个信息被编码在文件中. GPG 会尝试用其中的密钥进行解密. 
+你收到加密文件后,就需要对它进行解密。 你无需指定解密用的密钥。 这个信息被编码在文件中。 GPG 会尝试用其中的密钥进行解密。

```shell
$ gpg -d /tmp/test.txt.gpg
```

### 发送文件
-假设你需要发送文件给别人. 你需要有接收者的公钥. 具体怎么获得密钥由你自己决定. 你可以让他们直接把公钥发送给你, 也可以通过密钥服务器来获取.
+假设你需要发送文件给别人。你需要有接收者的公钥。 具体怎么获得密钥由你自己决定。 你可以让他们直接把公钥发送给你, 也可以通过密钥服务器来获取。

-收到对方公钥后, 导入公钥到GPG 中.
+收到对方公钥后,导入公钥到 GPG 中。

```shell
$ gpg --import yourfriends.key
```

-这些公钥与你自己创建的密钥一样,自带了名称和电子邮件地址的信息.
-记住,为了让别人能解密你的文件,别人也需要你的公钥. 因此导出公钥并将之发送出去.
+这些公钥与你自己创建的密钥一样,自带了名称和电子邮件地址的信息。
+记住,为了让别人能解密你的文件,别人也需要你的公钥。 因此导出公钥并将之发送出去。

```shell
gpg --export -a "Your Name" > your.key
```

-现在可以开始加密要发送的文件了. 它跟之前的步骤差不多, 只是需要指定你自己为发送人.
+现在可以开始加密要发送的文件了。它跟之前的步骤差不多, 只是需要指定你自己为发送人。
```shell
$ gpg -e -u "Your Name" -r "Their Name" /tmp/test.txt
```

### 结语
-就这样了. GPG 还有一些高级选项, 不过你在 99% 的时间内都不会用到这些高级选项. GPG 就是这么易于使用.
-你也可以使用创建的密钥对来发送和接受加密邮件,其步骤跟上面演示的差不多, 不过大多数的电子邮件客户端在拥有密钥的情况下会自动帮你做这个动作.
+就这样了。GPG 还有一些高级选项, 不过你在 99% 的时间内都不会用到这些高级选项。 GPG 就是这么易于使用。
+你也可以使用创建的密钥对来发送和接收加密邮件,其步骤跟上面演示的差不多, 不过大多数的电子邮件客户端在拥有密钥的情况下会自动帮你做这个动作。

--------------------------------------------------------------------------------

via: https://linuxconfig.org/how-to-encrypt-and-decrypt-individual-files-with-gpg

作者:[Nick Congleton][a]
译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[校对者 ID](https://github.com/校对者ID)

-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出

[a]:https://linuxconfig.org
From 3265fe318b500cc11c783c26fe9785980c796dd0 Mon Sep 17 00:00:00 2001
From: FelixYFZ <33593534+FelixYFZ@users.noreply.github.com>
Date: Tue, 5 Dec 2017 21:53:27 +0800
Subject: [PATCH 033/236] Update 20171201 How to find a publisher for your tech book.md

---
 .../tech/20171201 How to find a publisher for your tech book.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20171201 How to find a publisher for your tech book.md b/sources/tech/20171201 How to find a publisher for your tech book.md
index 76dc8112ca..6c7cfeecc1 100644
--- a/sources/tech/20171201 How to find a publisher for your tech book.md
+++ b/sources/tech/20171201 How to find a publisher for your tech book.md
@@ -1,3 +1,5 @@
+
+Translating by FelixYFZ
 How to find a publisher for your tech book
 ============================================================

From b776726f813e461a8016c29112b091270baa8480 Mon Sep 17 00:00:00 2001
From: wxy
Date: Tue, 5 Dec 2017 22:20:03 +0800
Subject: [PATCH 034/236] PRF:20171012 Linux Networking Hardware for Beginners Think Software.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@FelixYFZ 恭喜你,完成了第一篇翻译!
--- ...g Hardware for Beginners Think Software.md | 87 +++++++++---------- 1 file changed, 39 insertions(+), 48 deletions(-) diff --git a/translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md b/translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md index a236a80e97..af79b1e9f0 100644 --- a/translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md +++ b/translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md @@ -1,72 +1,63 @@ -Translating by FelixYFZ - -面向初学者的Linux网络硬件: 软件工程思想 -============================================================ +面向初学者的 Linux 网络硬件:软件思维 +=========================================================== ![island network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/soderskar-island.jpg?itok=wiMaF66b "island network") - 没有路由和桥接,我们将会成为孤独的小岛,你将会在这个网络教程中学到更多知识。 -Commons Zero][3]Pixabay - 上周,我们学习了本地网络硬件知识,本周,我们将学习网络互联技术和在移动网络中的一些很酷的黑客技术。 -### Routers:路由器 +> 没有路由和桥接,我们将会成为孤独的小岛,你将会在这个网络教程中学到更多知识。 +[Commons Zero][3]Pixabay + +上周,我们学习了本地网络硬件知识,本周,我们将学习网络互联技术和在移动网络中的一些很酷的黑客技术。 + +### 路由器 -网络路由器就是计算机网络中的一切,因为路由器连接着网络,没有路由器,我们就会成为孤岛, - -图一展示了一个简单的有线本地网络和一个无线接入点,所有设备都接入到Internet上,本地局域网的计算机连接到一个连接着防火墙或者路由器的以太网交换机上,防火墙或者路由器连接到网络服务供应商提供的电缆箱,调制调节器,卫星上行系统...好像一切都在计算中,就像是一个带着不停闪烁的的小灯的盒子,当你的网络数据包离开你的局域网,进入广阔的互联网,它们穿过一个又一个路由器直到到达自己的目的地。 - - -### [fig-1.png][4] +网络路由器就是计算机网络中的一切,因为路由器连接着网络,没有路由器,我们就会成为孤岛。图一展示了一个简单的有线本地网络和一个无线接入点,所有设备都接入到互联网上,本地局域网的计算机连接到一个连接着防火墙或者路由器的以太网交换机上,防火墙或者路由器连接到网络服务供应商(ISP)提供的电缆箱、调制调节器、卫星上行系统……好像一切都在计算中,就像是一个带着不停闪烁的的小灯的盒子。当你的网络数据包离开你的局域网,进入广阔的互联网,它们穿过一个又一个路由器直到到达自己的目的地。 ![simple LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_7.png?itok=lsazmf3- "simple LAN") -图一:一个简单的有线局域网和一个无线接入点。 +*图一:一个简单的有线局域网和一个无线接入点。* -一台路由器能连接一切,一个小巧特殊的小盒子只专注于路由,一个大点的盒子将会提供路由,防火墙,域名服务,以及VPN网关功能,一台重新设计的台式电脑或者笔记本,一个树莓派计算机或者一个小模块,体积臃肿矮小的像PC这样的单板计算机,除了苛刻的用途以外,普通的商品硬件都能良好的工作运行。高端的路由器使用特殊设计的硬件每秒能够传输最大量的数据包。 它们有多路数据总线,多个中央处理器和极快的存储。 -可以通过查阅Juniper和思科的路由器来感受一下高端路由器书什么样子的,而且能看看里面是什么样的构造。 -一个接入你的局域网的无线接入点要么作为一个以太网网桥要么作为一个路由器。一个桥接器扩展了这个网络,所以在这个桥接器上的任意一端口上的主机都连接在同一个网络中。 -一台路由器连接的是两个不同的网络。 -### Network Topology:网络拓扑 +路由器可以是各种样式:一个只专注于路由的小巧特殊的小盒子,一个将会提供路由、防火墙、域名服务,以及 VPN 网关功能的大点的盒子,一台重新设计的台式电脑或者笔记本,一个树莓派计算机或者一个 Arduino,体积臃肿矮小的像 PC Engines 这样的单板计算机,除了苛刻的用途以外,普通的商品硬件都能良好的工作运行。高端的路由器使用特殊设计的硬件每秒能够传输最大量的数据包。它们有多路数据总线,多个中央处理器和极快的存储。(可以通过了解 Juniper 和思科的路由器来感受一下高端路由器书什么样子的,而且能看看里面是什么样的构造。) - -有多种设置你的局域网的方式,你可以把所有主机接入到一个单独的平面网络,如果你的交换机支持的话,你也可以把它们分配到不同的子网中。 -平面网络是最简单的网络,只需把每一台设备接入到同一个交换机上即可,如果一台交换上的端口不够使用,你可以将更多的交换机连接在一起。 -有些交换机有特殊的上行端口,有些是没有这种特殊限制的上行端口,你可以连接其中的任意端口,你可能需要使用交叉类型的以太网线,所以你要查阅你的交换机的说明文档来设置。平面网络是最容易管理的,你不需要路由器也不需要计算子网,但它也有一些缺点。他们的伸缩性不好,所以当网络规模变得越来越大的时候就会被广播网络所阻塞。 -将你的局域网进行分段将会提升安全保障, 把局域网分成可管理的不同网段将有助于管理更大的网络。 - 图2展示了一个分成两个子网的局域网络:内部的有线和无线主机,和非军事区域(从来不知道所所有的工作上的男性术语都是在计算机上键入的?)因为他被阻挡了所有的内部网络的访问。 +接入你的局域网的无线接入点要么作为一个以太网网桥,要么作为一个路由器。桥接器扩展了这个网络,所以在这个桥接器上的任意一端口上的主机都连接在同一个网络中。一台路由器连接的是两个不同的网络。 +### 网络拓扑 -### [fig-2.png][5] +有多种设置你的局域网的方式,你可以把所有主机接入到一个单独的平面网络flat network,也可以把它们划分为不同的子网。如果你的交换机支持 VLAN 的话,你也可以把它们分配到不同的 VLAN 中。 + +平面网络是最简单的网络,只需把每一台设备接入到同一个交换机上即可,如果一台交换上的端口不够使用,你可以将更多的交换机连接在一起。有些交换机有特殊的上行端口,有些是没有这种特殊限制的上行端口,你可以连接其中的任意端口,你可能需要使用交叉类型的以太网线,所以你要查阅你的交换机的说明文档来设置。 + +平面网络是最容易管理的,你不需要路由器也不需要计算子网,但它也有一些缺点。它们的伸缩性不好,所以当网络规模变得越来越大的时候就会被广播网络所阻塞。将你的局域网进行分段将会提升安全保障, 把局域网分成可管理的不同网段将有助于管理更大的网络。图二展示了一个分成两个子网的局域网络:内部的有线和无线主机,和一个托管公开服务的主机。包含面向公共的服务器的子网称作非军事区域 DMZ,(你有没有注意到那些都是主要在电脑上打字的男人们的术语?)因为它被阻挡了所有的内部网络的访问。 
![LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_4.png?itok=LpXq7bLf "LAN") -图2:一个分成两个子网的简单局域网。 -即使像图2那样的小型网络也可以有不同的配置方法。你可以将防火墙和路由器放置在一台单独的设备上。 -你可以为你的非军事区域设置一个专用的网络连接,把它完全从你的内部网络隔离,这将引导我们进入下一个主题:一切基于软件。 +*图二:一个分成两个子网的简单局域网。* +即使像图二那样的小型网络也可以有不同的配置方法。你可以将防火墙和路由器放置在一台单独的设备上。你可以为你的非军事区域设置一个专用的网络连接,把它完全从你的内部网络隔离,这将引导我们进入下一个主题:一切基于软件。 -### Think Software软件思维 +### 软件思维 +你可能已经注意到在这个简短的系列中我们所讨论的硬件,只有网络接口、交换机,和线缆是特殊用途的硬件。 +其它的都是通用的商用硬件,而且都是软件来定义它的用途。Linux 是一个真实的网络操作系统,它支持大量的网络操作:网关、虚拟专用网关、以太网桥、网页、邮箱以及文件等等服务器、负载均衡、代理、服务质量、多种认证、中继、故障转移……你可以在运行着 Linux 系统的标准硬件上运行你的整个网络。你甚至可以使用 Linux 交换应用(LISA)和VDE2 协议来模拟以太网交换机。 -你可能已经注意到在这个简短的系列中我们所讨论的硬件,只有网络接口,交换机,和线缆是特殊用途的硬件。 -其它的都是通用的商用硬件,而且都是软件来定义它的用途。 -网关,虚拟专用网关,以太网桥,网页,邮箱以及文件等等。 -服务器,负载均衡,代理,大量的服务,各种各样的认证,中继,故障转移...你可以在运行着Linux系统的标准硬件上运行你的整个网络。 -你甚至可以使用Linux交换应用和VDE2协议来模拟以太网交换机,像DD-WRT,openWRT 和Rashpberry Pi distros,这些小型的硬件都是有专业的分类的,要记住BSDS和它们的特殊衍生用途如防火墙,路由器,和网络附件存储。 -你知道有些人坚持认为硬件防火墙和软件防火墙有区别?其实是没有区别的,就像说有一台硬件计算机和一台软件计算机。 -### Port Trunking and Ethernet Bonding -端口聚合和以太网绑定 -聚合和绑定,也称链路聚合,是把两条以太网通道绑定在一起成为一条通道。一些交换机支持端口聚合,就是把两个交换机端口绑定在一起成为一个是他们原来带宽之和的一条新的连接。对于一台承载很多业务的服务器来说这是一个增加通道带宽的有效的方式。 -你也可以在以太网口进行同样的配置,而且绑定汇聚的驱动是内置在Linux内核中的,所以不需要任何其他的专门的硬件。 +有一些用于小型硬件的特殊发行版,如 DD-WRT、OpenWRT,以及树莓派发行版,也不要忘记 BSD 们和它们的特殊衍生用途如 pfSense 防火墙/路由器,和 FreeNAS 网络存储服务器。 +你知道有些人坚持认为硬件防火墙和软件防火墙有区别?其实是没有区别的,就像说硬件计算机和软件计算机一样。 -### Bending Mobile Broadband to your Will随心所欲选择你的移动带宽 +### 端口聚合和以太网绑定 -我期望移动带宽能够迅速增长来替代DSL和有线网络。我居住在一个有250,000人口的靠近一个城市的地方,但是在城市以外,要想接入互联网就要靠运气了,即使那里有很大的用户上网需求。我居住的小角落离城镇有20分钟的距离,但对于网络服务供应商来说他们几乎不会考虑到为这个地方提供网络。 我唯一的选择就是移动带宽; 这里没有拨号网络,卫星网络(即使它很糟糕)或者是DSL,电缆,光纤,但却没有阻止网络供应商把那些在我这个区域从没看到过的无限制通信个其他高速网络服务的传单塞进我的邮箱。 -我试用了AT&T,Version,和T-Mobile。Version的信号覆盖范围最广,但是Version和AT&T是最昂贵的。 -我居住的地方在T-Mobile信号覆盖的边缘,但迄今为止他们给了最大的优惠,为了能够能够有效的使用,我必须购买一个WeBoostDe信号放大器和 -一台中兴的移动热点设备。当然你也可以使用一部手机作为热点,但是专用的热点设备有着最强的信号。如果你正在考虑购买一台信号放大器,最好的选择就是WeBoost因为他们的服务支持最棒,而且他们会尽最大努力去帮助你。在一个小小的APP的协助下去设置将会精准的增强 你的网络信号,他们有一个功能较少的免费的版本,但你将一点都不会后悔去花两美元使用专业版。 -那个小巧的中兴热点设备能够支持15台主机而且还有拥有基本的防火墙功能。 但你如果你使用像 Linksys WRT54GL这样的设备,使用Tomato,openWRT,或者DD-WRT来替代普通的固件,这样你就能完全控制你的防护墙规则,路由配置,以及任何其他你想要设置的服务。 +聚合和绑定,也称链路聚合,是把两条以太网通道绑定在一起成为一条通道。一些交换机支持端口聚合,就是把两个交换机端口绑定在一起,成为一个是它们原来带宽之和的一条新的连接。对于一台承载很多业务的服务器来说这是一个增加通道带宽的有效的方式。 + +你也可以在以太网口进行同样的配置,而且绑定汇聚的驱动是内置在 Linux 内核中的,所以不需要任何其他的专门的硬件。 + +### 随心所欲选择你的移动宽带 + +我期望移动宽带能够迅速增长来替代 DSL 和有线网络。我居住在一个有 25 万人口的靠近一个城市的地方,但是在城市以外,要想接入互联网就要靠运气了,即使那里有很大的用户上网需求。我居住的小角落离城镇有 20 分钟的距离,但对于网络服务供应商来说他们几乎不会考虑到为这个地方提供网络。 我唯一的选择就是移动宽带;这里没有拨号网络、卫星网络(即使它很糟糕)或者是 DSL、电缆、光纤,但却没有阻止网络供应商把那些我在这个区域从没看到过的 Xfinity 和其它高速网络服务的传单塞进我的邮箱。 + +我试用了 AT&T、Version 和 T-Mobile。Version 的信号覆盖范围最广,但是 Version 和 AT&T 是最昂贵的。 +我居住的地方在 T-Mobile 信号覆盖的边缘,但迄今为止他们给了最大的优惠,为了能够能够有效的使用,我必须购买一个 WeBoost 信号放大器和一台中兴的移动热点设备。当然你也可以使用一部手机作为热点,但是专用的热点设备有着最强的信号。如果你正在考虑购买一台信号放大器,最好的选择就是 WeBoost,因为他们的服务支持最棒,而且他们会尽最大努力去帮助你。在一个小小的 APP [SignalCheck Pro][8] 的协助下设置将会精准的增强你的网络信号,他们有一个功能较少的免费的版本,但你将一点都不会后悔去花两美元使用专业版。 + +那个小巧的中兴热点设备能够支持 15 台主机,而且还有拥有基本的防火墙功能。 但你如果你使用像 Linksys WRT54GL这样的设备,可以使用 Tomato、OpenWRT,或者 DD-WRT 来替代普通的固件,这样你就能完全控制你的防护墙规则、路由配置,以及任何其它你想要设置的服务。 -------------------------------------------------------------------------------- @@ -74,7 +65,7 @@ via: https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardwar 作者:[CARLA SCHRODER][a] 译者:[FelixYFZ](https://github.com/FelixYFZ) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 498e48cb8397fc6ed84161a6cc861b50ac4145e2 Mon Sep 17 
00:00:00 2001 From: wxy Date: Tue, 5 Dec 2017 22:20:51 +0800 Subject: [PATCH 035/236] PUB:20171012 Linux Networking Hardware for Beginners Think Software.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @FelixYFZ 文章发布地址:https://linux.cn/article-9113-1.html 你的 LCTT 专页地址: https://linux.cn/lctt/FelixYFZ --- ...1012 Linux Networking Hardware for Beginners Think Software.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171012 Linux Networking Hardware for Beginners Think Software.md (100%) diff --git a/translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md b/published/20171012 Linux Networking Hardware for Beginners Think Software.md similarity index 100% rename from translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md rename to published/20171012 Linux Networking Hardware for Beginners Think Software.md From 13c499ec00cb26f1fb6f263f95de3f7cb3528564 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 5 Dec 2017 22:43:09 +0800 Subject: [PATCH 036/236] PRF:20171201 How to Manage Users with Groups in Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @imquanquan 翻译的很好,我基本没调整。 --- ...ow to Manage Users with Groups in Linux.md | 70 +++++++------------ 1 file changed, 25 insertions(+), 45 deletions(-) diff --git a/translated/tech/20171201 How to Manage Users with Groups in Linux.md b/translated/tech/20171201 How to Manage Users with Groups in Linux.md index 1927de6817..c9bbf066cd 100644 --- a/translated/tech/20171201 How to Manage Users with Groups in Linux.md +++ b/translated/tech/20171201 How to Manage Users with Groups in Linux.md @@ -1,13 +1,9 @@ -如何在 Linux 系统中用用户组来管理用户 +如何在 Linux 系统中通过用户组来管理用户 ============================================================ -### [group-of-people-1645356_1920.jpg][1] - ![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/group-of-people-1645356_1920.jpg?itok=rJlAxBSV) -本教程可以了解如何通过用户组和访问控制表(ACL)来管理用户。 - -[创意共享协议][4] +> 本教程可以了解如何通过用户组和访问控制表(ACL)来管理用户。 当你需要管理一台容纳多个用户的 Linux 机器时,比起一些基本的用户管理工具所提供的方法,有时候你需要对这些用户采取更多的用户权限管理方式。特别是当你要管理某些用户的权限时,这个想法尤为重要。比如说,你有一个目录,某个用户组中的用户可以通过读和写的权限访问这个目录,而其他用户组中的用户对这个目录只有读的权限。在 Linux 中,这是完全可以实现的。但前提是你必须先了解如何通过用户组和访问控制表(ACL)来管理用户。 @@ -18,36 +14,32 @@ 你需要用下面两个用户名新建两个用户: * olivia - * nathan 你需要新建以下两个用户组: * readers - * editors -olivia 属于 editors 用户组,而 nathan 属于 readers 用户组。reader 用户组对 ``/DATA`` 目录只有读的权限,而 editors 用户组则对 ``/DATA`` 目录同时有读和写的权限。当然,这是个非常小的任务,但它会给你基本的信息·。你可以扩展这个任务以适应你其他更大的需求。 +olivia 属于 editors 用户组,而 nathan 属于 readers 用户组。reader 用户组对 `/DATA` 目录只有读的权限,而 editors 用户组则对 `/DATA` 目录同时有读和写的权限。当然,这是个非常小的任务,但它会给你基本的信息,你可以扩展这个任务以适应你其他更大的需求。 -我将在 Ubuntu 16.04 Server 平台上进行演示。这些命令都是通用的,唯一不同的是,要是在你的发行版中不使用 sudo 命令,你必须切换到 root 用户来执行这些命令。 +我将在 Ubuntu 16.04 Server 平台上进行演示。这些命令都是通用的,唯一不同的是,要是在你的发行版中不使用 `sudo` 命令,你必须切换到 root 用户来执行这些命令。 ### 创建用户 -我们需要做的第一件事是为我们的实验创建两个用户。可以用 ``useradd`` 命令来创建用户,我们不只是简单地创建一个用户,而需要同时创建用户和属于他们的家目录,然后给他们设置密码。 +我们需要做的第一件事是为我们的实验创建两个用户。可以用 `useradd` 命令来创建用户,我们不只是简单地创建一个用户,而需要同时创建用户和属于他们的家目录,然后给他们设置密码。 ``` sudo useradd -m olivia - sudo useradd -m nathan ``` -我们现在创建了两个用户,如果你看看 ``/home`` 目录,你可以发现他们的家目录(因为我们用了 -m 选项,可以帮在创建用户的同时创建他们的家目录。 +我们现在创建了两个用户,如果你看看 `/home` 目录,你可以发现他们的家目录(因为我们用了 `-m` 选项,可以在创建用户的同时创建他们的家目录。 之后,我们可以用以下命令给他们设置密码: ``` sudo passwd olivia - sudo passwd nathan ``` @@ -59,26 +51,21 @@ sudo passwd nathan ``` addgroup readers - addgroup editors ``` -(译者注:当你使用 CentOS 等一些 Linux 发行版时,可能系统没有 addgroup 这个命令,推荐使用 
groupadd 命令来替换 addgroup 命令以达到同样的效果) - - -### [groups_1.jpg][2] +(LCTT 译注:当你使用 CentOS 等一些 Linux 发行版时,可能系统没有 `addgroup` 这个命令,推荐使用 `groupadd` 命令来替换 `addgroup` 命令以达到同样的效果) ![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/groups_1.jpg?itok=BKwL89BB) -图一:我们可以使用刚创建的新用户组了。 - -[Used with permission][5] +*图一:我们可以使用刚创建的新用户组了。* 创建用户组后,我们需要添加我们的用户到这两个用户组。我们用以下命令来将 nathan 用户添加到 readers 用户组: ``` sudo usermod -a -G readers nathan ``` + 用以下命令将 olivia 添加到 editors 用户组: ``` @@ -89,7 +76,7 @@ sudo usermod -a -G editors olivia ### 给用户组授予目录的权限 -假设你有个目录 ``/READERS`` 且允许 readers 用户组的所有成员访问这个目录。首先,我们执行以下命令来更改目录所属用户组: +假设你有个目录 `/READERS` 且允许 readers 用户组的所有成员访问这个目录。首先,我们执行以下命令来更改目录所属用户组: ``` sudo chown -R :readers /READERS @@ -107,26 +94,23 @@ sudo chmod -R g-w /READERS sudo chmod -R o-x /READERS ``` -这时候,只有目录的所有者(root)和用户组 reader 中的用户可以访问 ``/READES`` 中的文件。 +这时候,只有目录的所有者(root)和用户组 reader 中的用户可以访问 `/READES` 中的文件。 -假设你有个目录 ``/EDITORS`` ,你需要给用户组 editors 里的成员这个目录的读和写的权限。为了达到这个目的,执行下面的这些命令是必要的: +假设你有个目录 `/EDITORS` ,你需要给用户组 editors 里的成员这个目录的读和写的权限。为了达到这个目的,执行下面的这些命令是必要的: ``` sudo chown -R :editors /EDITORS - sudo chmod -R g+w /EDITORS - sudo chmod -R o-x /EDITORS ``` -此时 editors 用户组的所有成员都可以访问和修改其中的文件。除此之外其他用户(除了 root 之外)无法访问 ``/EDITORS`` 中的任何文件。 +此时 editors 用户组的所有成员都可以访问和修改其中的文件。除此之外其他用户(除了 root 之外)无法访问 `/EDITORS` 中的任何文件。 使用这个方法的问题在于,你一次只能操作一个组和一个目录而已。这时候访问控制表(ACL)就可以派得上用场了。 - ### 使用访问控制表(ACL) -现在,让我们把这个问题变得棘手一点。假设你有一个目录 ``/DATA`` 并且你想给 readers 用户组的成员读取权限并同时给 editors 用户组的成员读和写的权限。为此,你必须要用到 setfacl 命令。setfacl 命令可以为文件或文件夹设置一个访问控制表(ACL)。 +现在,让我们把这个问题变得棘手一点。假设你有一个目录 `/DATA` 并且你想给 readers 用户组的成员读取权限,并同时给 editors 用户组的成员读和写的权限。为此,你必须要用到 `setfacl` 命令。`setfacl` 命令可以为文件或文件夹设置一个访问控制表(ACL)。 这个命令的结构如下: @@ -134,45 +118,41 @@ sudo chmod -R o-x /EDITORS setfacl OPTION X:NAME:Y /DIRECTORY ``` -其中 OPTION 是可选选项,X 可以是 u(用户)或者是 g (用户组),NAME 是用户或者用户组的名字,/DIRECTORY 是要用到的目录。我们将使用 -m 选项进行修改(modify)。因此,我们给 readers 用户组添加读取权限的命令是: +其中 OPTION 是可选选项,X 可以是 `u`(用户)或者是 `g` (用户组),NAME 是用户或者用户组的名字,/DIRECTORY 是要用到的目录。我们将使用 `-m` 选项进行修改。因此,我们给 readers 用户组添加读取权限的命令是: ``` sudo setfacl -m g:readers:rx -R /DATA ``` -现在 readers 用户组里面的每一个用户都可以读取 /DATA 目录里的文件了,但是他们不能修改里面的内容。 +现在 readers 用户组里面的每一个用户都可以读取 `/DATA` 目录里的文件了,但是他们不能修改里面的内容。 为了给 editors 用户组里面的用户读写权限,我们执行了以下命令: ``` sudo setfacl -m g:editors:rwx -R /DATA ``` + 上述命令将赋予 editors 用户组中的任何成员读取权限,同时保留 readers 用户组的只读权限。 ### 更多的权限控制 使用访问控制表(ACL),你可以实现你所需的权限控制。你可以添加用户到用户组,并且灵活地控制这些用户组对每个目录的权限以达到你的需求。如果想了解上述工具的更多信息,可以执行下列的命令: -* man usradd - -* man addgroup - -* man usermod - -* man sefacl - -* man chown - -* man chmod +* `man usradd` +* `man addgroup` +* `man usermod` +* `man sefacl` +* `man chown` +* `man chmod` -------------------------------------------------------------------------------- via: https://www.linux.com/learn/intro-to-linux/2017/12/how-manage-users-groups-linux -作者:[Jack Wallen ] +作者:[Jack Wallen] 译者:[imquanquan](https://github.com/imquanquan) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c66ccc52bdb0a841bac206168980061a76188e12 Mon Sep 17 00:00:00 2001 From: wenwensnow <963555237@qq.com> Date: Tue, 5 Dec 2017 22:43:32 +0800 Subject: [PATCH 037/236] Update 20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md --- .../20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md b/sources/tech/20171201 Randomize your WiFi MAC 
address on Ubuntu 16.04.md index b0f8e72018..3f0b8a0f50 100644 --- a/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md +++ b/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md @@ -1,3 +1,4 @@ +translating by wenwensnow Randomize your WiFi MAC address on Ubuntu 16.04 ============================================================ From cec664010fec9e893946cc0cfb7b0bd794b82cbe Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 5 Dec 2017 22:43:55 +0800 Subject: [PATCH 038/236] PUB:20171201 How to Manage Users with Groups in Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @imquanquan 文章发布地址:https://linux.cn/article-9114-1.html 你的 LCTT 专页地址: https://linux.cn/lctt/imquanquan --- .../20171201 How to Manage Users with Groups in Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171201 How to Manage Users with Groups in Linux.md (100%) diff --git a/translated/tech/20171201 How to Manage Users with Groups in Linux.md b/published/20171201 How to Manage Users with Groups in Linux.md similarity index 100% rename from translated/tech/20171201 How to Manage Users with Groups in Linux.md rename to published/20171201 How to Manage Users with Groups in Linux.md From b89313e03b88e6234feff0ac3d1d6a929b3c7da9 Mon Sep 17 00:00:00 2001 From: TRsky <625310581@qq.com> Date: Tue, 5 Dec 2017 22:49:08 +0800 Subject: [PATCH 039/236] update --- ...ke up and Shut Down Linux Automatically.md | 38 +++++++++++-------- 1 file changed, 22 insertions(+), 16 deletions(-) diff --git a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md index 5ed3f2bf10..68e49f1e1b 100644 --- a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md +++ b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md @@ -10,35 +10,38 @@ [Creative Commons Attribution][6][The Observatory at Delhi][7] -不要成为一个电能浪费者。如果你的电脑不需要开机就请把他们关机。出于方便和计算机宅的考虑,你可以通过配置你的 Linux 计算机实现自动唤醒和关闭 Linux 。 +不要成为一个电能浪费者。如果你的电脑不需要开机就请把它们关机。出于方便和计算机宅的考虑,你可以通过配置你的 Linux 计算机实现自动唤醒和关闭 Linux 。 ### 系统运行时间 -有时候有些电脑需要一直处在开机状态,在不超过电脑运行时间的限制下这种情况是被允许的。有些人为他们的计算机可以长时间的正常运行而感到自豪,且现在我们有内核热补丁能够实现只有在硬件发生故障时才允许机器关机。我认为比较实际可行的是能够在机器需要节省电能以及在移动硬件发生磨损的情况下,且在不需要机器运行的情况下将其关机。比如,你可以在规定的时间内唤醒备份服务器,执行备份,然后关闭它直到下一次进行备份时间。或者,你可以只在特定时间内配置网卡。任何不需要一直运行的东西都可以将其配置成在其需要工作的时候打开,待其完成工作后将其关闭。 +有时候有些电脑需要一直处在开机状态,在不超过电脑运行时间的限制下这种情况是被允许的。有些人为他们的计算机可以长时间的正常运行而感到自豪,且现在我们有内核热补丁能够实现只有在硬件发生故障时才允许机器关机。我认为比较实际可行的是能够在机器需要节省电能以及在移动硬件发生磨损的情况下,且在不需要机器运行的情况下将其关机。比如,你可以在规定的时间内唤醒备份服务器,执行备份,然后关闭它直到它要进行下一次备份。或者,你可以设置你的 Internet 网关只在特定的时间运行。任何不需要一直运行的东西都可以将其配置成在其需要工作的时候打开,待其完成工作后将其关闭。 ### 系统休眠 -对于不需要一直运行的电脑,使用 root 的 cron 定时任务 或者 /etc/crontab 文件 可以可靠地关闭电脑。这个例子创建一个 root 定时任务实现每天下午 11点15分 定时关机。 +对于不需要一直运行的电脑,使用 root 的 cron 定时任务或者 /etc/crontab 文件 可以可靠地关闭电脑。这个例子创建一个 root 定时任务实现每天下午 11点15分 定时关机。 ``` # crontab -e -u root # m h dom mon dow command 15 23 * * * /sbin/shutdown -h now ``` - +以下示例仅在周一至周五运行: ``` 15 23 * * 1-5 /sbin/shutdown -h now ``` +您可以为不同的日期和时间创建多个cron作业。 通过命令 ``man 5 crontab`` 可以了解所有时间和日期的字段。 -一个快速、容易的方式是,使用 /etc/crontab 文件。你必须指定用户: +一个快速、容易的方式是,使用 /etc/crontab 文件。但这样你必须指定用户: ``` 15 23 * * 1-5 root shutdown -h now ``` -实现自动唤醒是一件很酷的事情;我的大多数 SUSE (SUSE Linux)同事都在纽伦堡,因此,为了能够跟同事的计划有几小时的重叠时间我需要在凌晨5点起床。我的计算机早上 5点半自动开始工作,而我只需要将自己和咖啡拖到我的桌子上就可以开始工作了。按下电源按钮看起来好像并不是什么大事,但是在每天的那个时候每件小事都会变得很大。 +### 自动唤醒 -唤醒 Linux 计算机可能不比关闭它可靠,因此你可能需要尝试不同的办法。你可以使用 远程唤醒(Wake-On-LAN)、RTC 唤醒或者个人电脑的 BIOS 
设置预定的唤醒。做这些工作的原因是,当你关闭电脑时,这并不是真正关闭了计算机;此时计算机处在极低功耗状态且还可以接受和响应信号。你需要使用电源开关将其彻底关闭。 +实现自动唤醒是一件很酷的事情; 我大多数使用 SUSE (SUSE Linux)的同事都在纽伦堡,因此,为了能够跟同事的计划有几小时的重叠时间我需要在凌晨5点起床。我的计算机早上 5点半自动开始工作,而我只需要将自己和咖啡拖到我的桌子上就可以开始工作了。按下电源按钮看起来好像并不是什么大事,但是在每天的那个时候每件小事都会变得很大。 + +唤醒 Linux 计算机可能不比关闭它稳当,因此你可能需要尝试不同的办法。你可以使用远程唤醒(Wake-On-LAN)、RTC 唤醒或者个人电脑的 BIOS 设置预定的唤醒这些方式。做这些工作的原因是,当你关闭电脑时,这并不是真正关闭了计算机;此时计算机处在极低功耗状态且还可以接受和响应信号。你需要拔掉电源开关将其彻底关闭。 ### BIOS 唤醒 @@ -55,7 +58,7 @@ Figure 1: My system BIOS has an easy-to-use wakeup scheduler. ### 主机远程唤醒(Wake-On-LAN) -远程唤醒是仅次于 BIOS 唤醒的又一种可靠的唤醒方法。这需要你从第二台计算机发送信号到所要打开的计算机。可以使用 Arduino 或 树莓派(Raspberry Pi) 发送基于 Linux 的路由器或者任何Linux 计算机的唤醒信号。首先,查看系统主板 BIOS 是否支持 Wake-On-LAN –如果支持—然后启动它,因为它被默认为禁用。 +远程唤醒是仅次于 BIOS 唤醒的又一种可靠的唤醒方法。这需要你从第二台计算机发送信号到所要打开的计算机。可以使用 Arduino 或 树莓派(Raspberry Pi) 发送基于 Linux 的路由器或者任何 Linux 计算机的唤醒信号。首先,查看系统主板 BIOS 是否支持 Wake-On-LAN ,要是支持的话,必须先启动它,因为它被默认为禁用。 然后,需要一个支持 Wake-On-LAN 的网卡;无线网卡并不支持。你需要运行 ethtool 命令查看网卡是否支持 Wake-On-LAN : @@ -64,7 +67,8 @@ Figure 1: My system BIOS has an easy-to-use wakeup scheduler. Supports Wake-on: pumbg Wake-on: g ``` - +这条命令输出的 Supports Wake-on字段会告诉你你的网卡现在开启了哪些功能: +    * d -- 禁用 * p -- 物理活动唤醒 @@ -79,14 +83,14 @@ Figure 1: My system BIOS has an easy-to-use wakeup scheduler. * g -- magic packet 唤醒 -* s -- magic packet 设置安全密码 +* s -- 设有密码的 magic packet 唤醒 -man ethtool 并不清楚开关 p 的作用;这表明任何信号都会导致唤醒。在我的测试中,然而,它并没有这么做。Wake-On-Lan 被启动的 Wake-on 参数是 g –- magic packet 唤醒,且当 Wake-On 值已经为 g 时表示网卡已支持 Wake-On-Lan 。如果它没有被启用,你可以通过 ethtool 命令来启用它。 +man ethtool 命令并没说清楚 p 选项的作用;这表明任何信号都会导致唤醒。然而,在我的测试中它并没有这么做。想要实现远程唤醒主机,必须支持的功能是g -- magic packet 唤醒,而且显示这个功能已经在启用了。如果它没有被启用,你可以通过 ethtool 命令来启用它。 ``` # ethtool -s eth0 wol g ``` - +这条命令可能会在重启后失效,所以为了确保万无一失,你可以创建个 root 用户的定时任务(cron)在每次重启的时候来执行这条命令。 ``` @reboot /usr/bin/ethtool -s eth0 wol g ``` @@ -99,17 +103,20 @@ Figure 2: Enable Wake on LAN. 
[Used with permission][9] -另外一个选择是最近的网络管理器版本有一个很好的小复选框能够唤醒局域网(图2)。 +另一个选择是最近的网络管理器版本有一个很好的小复选框来启用Wake-On-LAN(图2)。 这里有一个可以用于设置密码的地方,但是如果你的网络接口不支持 Secure On password,它就不起作用。 -现在你需要配置第二台计算机来发送唤醒信号。你并不需要 root 权限,所以你可以为你的用户创建 cron 任务。你需要正在唤醒的机器上的网络接口和MAC地址。 +现在你需要配置第二台计算机来发送唤醒信号。你并不需要 root 权限,所以你可以为你的用户创建 cron 任务。你需要用到的是想要唤醒的机器的网络接口和MAC地址信息。 ``` 30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B ``` +### RTC 唤醒(RTC Alarm Clock) -通过使用实时闹钟来唤醒计算机是最不可靠的方法。查看 [Wake Up Linux With an RTC Alarm Clock][4] ;对于现在的大多数发行版来说这种方法已经有点过时了。下周继续了解更多关于使用RTC唤醒的方法。 +通过使用实时闹钟来唤醒计算机是最不可靠的方法。对于这个方法,可以参看 [Wake Up Linux With an RTC Alarm Clock][4] ;对于现在的大多数发行版来说这种方法已经有点过时了。 + +下周继续了解更多关于使用RTC唤醒的方法。 通过 Linux 基金会和 edX 可以学习更多关于 Linux 的免费 [ Linux 入门][5]教程。 @@ -133,4 +140,3 @@ via:https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux- [8]:https://www.linux.com/licenses/category/used-permission [9]:https://www.linux.com/licenses/category/used-permission - From 204fb18a8d66db22104a0e8ce9d2fd7c4c86fb90 Mon Sep 17 00:00:00 2001 From: TRsky <625310581@qq.com> Date: Tue, 5 Dec 2017 23:00:52 +0800 Subject: [PATCH 040/236] update --- .../20171130 Wake up and Shut Down Linux Automatically.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md index 68e49f1e1b..9c02d113e9 100644 --- a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md +++ b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md @@ -18,7 +18,7 @@ ### 系统休眠 -对于不需要一直运行的电脑,使用 root 的 cron 定时任务或者 /etc/crontab 文件 可以可靠地关闭电脑。这个例子创建一个 root 定时任务实现每天下午 11点15分 定时关机。 +对于不需要一直运行的电脑,使用 root 的 cron 定时任务或者 `/etc/crontab` 文件 可以可靠地关闭电脑。这个例子创建一个 root 定时任务实现每天下午 11点15分 定时关机。 ``` # crontab -e -u root @@ -39,7 +39,7 @@ ### 自动唤醒 -实现自动唤醒是一件很酷的事情; 我大多数使用 SUSE (SUSE Linux)的同事都在纽伦堡,因此,为了能够跟同事的计划有几小时的重叠时间我需要在凌晨5点起床。我的计算机早上 5点半自动开始工作,而我只需要将自己和咖啡拖到我的桌子上就可以开始工作了。按下电源按钮看起来好像并不是什么大事,但是在每天的那个时候每件小事都会变得很大。 +实现自动唤醒是一件很酷的事情; 我大多数使用 SUSE (SUSE Linux)的同事都在纽伦堡,因此,因此为了跟同事能有几小时一起工作的时间,我不得不需要在凌晨五点起床。我的计算机早上 5点半自动开始工作,而我只需要将自己和咖啡拖到我的桌子上就可以开始工作了。按下电源按钮看起来好像并不是什么大事,但是在每天的那个时候每件小事都会变得很大。 唤醒 Linux 计算机可能不比关闭它稳当,因此你可能需要尝试不同的办法。你可以使用远程唤醒(Wake-On-LAN)、RTC 唤醒或者个人电脑的 BIOS 设置预定的唤醒这些方式。做这些工作的原因是,当你关闭电脑时,这并不是真正关闭了计算机;此时计算机处在极低功耗状态且还可以接受和响应信号。你需要拔掉电源开关将其彻底关闭。 @@ -79,7 +79,7 @@ Figure 1: My system BIOS has an easy-to-use wakeup scheduler. 
* b -- 广播消息唤醒 -* a -- ARP(Address Resolution Protocol)唤醒 +* a -- ARP(Address Resolution Protocol) 唤醒 * g -- magic packet 唤醒 From 8b0e3c185d7b2c03894396db28c6a8ab657c02dd Mon Sep 17 00:00:00 2001 From: TRsky <625310581@qq.com> Date: Tue, 5 Dec 2017 23:04:43 +0800 Subject: [PATCH 041/236] translated by HardworkFish --- ...ke up and Shut Down Linux Automatically.md | 142 ++++++++++++++++++ 1 file changed, 142 insertions(+) create mode 100644 translated/tech/20171130 Wake up and Shut Down Linux Automatically.md diff --git a/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md b/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md new file mode 100644 index 0000000000..9c02d113e9 --- /dev/null +++ b/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md @@ -0,0 +1,142 @@ + +自动唤醒和关闭 Linux +===================== + +### [banner.jpg][1] + +![timekeeper](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner.jpg?itok=zItspoSb) + +了解如何通过配置 Linux 计算机来查看时间,并实现自动唤醒和关闭 Linux + +[Creative Commons Attribution][6][The Observatory at Delhi][7] + +不要成为一个电能浪费者。如果你的电脑不需要开机就请把它们关机。出于方便和计算机宅的考虑,你可以通过配置你的 Linux 计算机实现自动唤醒和关闭 Linux 。 + +### 系统运行时间 + +有时候有些电脑需要一直处在开机状态,在不超过电脑运行时间的限制下这种情况是被允许的。有些人为他们的计算机可以长时间的正常运行而感到自豪,且现在我们有内核热补丁能够实现只有在硬件发生故障时才允许机器关机。我认为比较实际可行的是能够在机器需要节省电能以及在移动硬件发生磨损的情况下,且在不需要机器运行的情况下将其关机。比如,你可以在规定的时间内唤醒备份服务器,执行备份,然后关闭它直到它要进行下一次备份。或者,你可以设置你的 Internet 网关只在特定的时间运行。任何不需要一直运行的东西都可以将其配置成在其需要工作的时候打开,待其完成工作后将其关闭。 + +### 系统休眠 + +对于不需要一直运行的电脑,使用 root 的 cron 定时任务或者 `/etc/crontab` 文件 可以可靠地关闭电脑。这个例子创建一个 root 定时任务实现每天下午 11点15分 定时关机。 + +``` +# crontab -e -u root +# m h dom mon dow command +15 23 * * * /sbin/shutdown -h now +``` +以下示例仅在周一至周五运行: +``` +15 23 * * 1-5 /sbin/shutdown -h now +``` +您可以为不同的日期和时间创建多个cron作业。 通过命令 ``man 5 crontab`` 可以了解所有时间和日期的字段。 + +一个快速、容易的方式是,使用 /etc/crontab 文件。但这样你必须指定用户: + +``` +15 23 * * 1-5 root shutdown -h now +``` + +### 自动唤醒 + +实现自动唤醒是一件很酷的事情; 我大多数使用 SUSE (SUSE Linux)的同事都在纽伦堡,因此,因此为了跟同事能有几小时一起工作的时间,我不得不需要在凌晨五点起床。我的计算机早上 5点半自动开始工作,而我只需要将自己和咖啡拖到我的桌子上就可以开始工作了。按下电源按钮看起来好像并不是什么大事,但是在每天的那个时候每件小事都会变得很大。 + +唤醒 Linux 计算机可能不比关闭它稳当,因此你可能需要尝试不同的办法。你可以使用远程唤醒(Wake-On-LAN)、RTC 唤醒或者个人电脑的 BIOS 设置预定的唤醒这些方式。做这些工作的原因是,当你关闭电脑时,这并不是真正关闭了计算机;此时计算机处在极低功耗状态且还可以接受和响应信号。你需要拔掉电源开关将其彻底关闭。 + +### BIOS 唤醒 + +BIOS 唤醒是最可靠的。我的系统主板 BIOS 有一个易于使用的唤醒调度程序。(Figure 1). Chances are yours does, too. Easy peasy. + +### [fig-1.png][2] + +![wakeup](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_11.png?itok=8qAeqo1I) + +Figure 1: My system BIOS has an easy-to-use wakeup scheduler. 
+ +[Used with permission][8] + + +### 主机远程唤醒(Wake-On-LAN) + +远程唤醒是仅次于 BIOS 唤醒的又一种可靠的唤醒方法。这需要你从第二台计算机发送信号到所要打开的计算机。可以使用 Arduino 或 树莓派(Raspberry Pi) 发送基于 Linux 的路由器或者任何 Linux 计算机的唤醒信号。首先,查看系统主板 BIOS 是否支持 Wake-On-LAN ,要是支持的话,必须先启动它,因为它被默认为禁用。 + +然后,需要一个支持 Wake-On-LAN 的网卡;无线网卡并不支持。你需要运行 ethtool 命令查看网卡是否支持 Wake-On-LAN : + +``` +# ethtool eth0 | grep -i wake-on + Supports Wake-on: pumbg + Wake-on: g +``` +这条命令输出的 Supports Wake-on字段会告诉你你的网卡现在开启了哪些功能: +    +* d -- 禁用 + +* p -- 物理活动唤醒 + +* u -- 单播消息唤醒 + +* m -- 多播(组播)消息唤醒 + +* b -- 广播消息唤醒 + +* a -- ARP(Address Resolution Protocol) 唤醒 + +* g -- magic packet 唤醒 + +* s -- 设有密码的 magic packet 唤醒 + +man ethtool 命令并没说清楚 p 选项的作用;这表明任何信号都会导致唤醒。然而,在我的测试中它并没有这么做。想要实现远程唤醒主机,必须支持的功能是g -- magic packet 唤醒,而且显示这个功能已经在启用了。如果它没有被启用,你可以通过 ethtool 命令来启用它。 + +``` +# ethtool -s eth0 wol g +``` +这条命令可能会在重启后失效,所以为了确保万无一失,你可以创建个 root 用户的定时任务(cron)在每次重启的时候来执行这条命令。 +``` +@reboot /usr/bin/ethtool -s eth0 wol g +``` + +### [fig-2.png][3] + +![wakeonlan](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_7.png?itok=XQAwmHoQ) + +Figure 2: Enable Wake on LAN. + +[Used with permission][9] + +另一个选择是最近的网络管理器版本有一个很好的小复选框来启用Wake-On-LAN(图2)。 + +这里有一个可以用于设置密码的地方,但是如果你的网络接口不支持 Secure On password,它就不起作用。 + +现在你需要配置第二台计算机来发送唤醒信号。你并不需要 root 权限,所以你可以为你的用户创建 cron 任务。你需要用到的是想要唤醒的机器的网络接口和MAC地址信息。 + +``` +30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B +``` +### RTC 唤醒(RTC Alarm Clock) + +通过使用实时闹钟来唤醒计算机是最不可靠的方法。对于这个方法,可以参看 [Wake Up Linux With an RTC Alarm Clock][4] ;对于现在的大多数发行版来说这种方法已经有点过时了。 + +下周继续了解更多关于使用RTC唤醒的方法。 + +通过 Linux 基金会和 edX 可以学习更多关于 Linux 的免费 [ Linux 入门][5]教程。 + +-------------------------------------------------------------------------------- + +via:https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux-automatically + +作者:[Carla Schroder] +译者:[译者ID](https://github.com/HardworkFish) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://www.linux.com/files/images/bannerjpg +[2]:https://www.linux.com/files/images/fig-1png-11 +[3]:https://www.linux.com/files/images/fig-2png-7 +[4]:https://www.linux.com/learn/wake-linux-rtc-alarm-clock +[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux +[6]:https://www.linux.com/licenses/category/creative-commons-attribution +[7]:http://www.columbia.edu/itc/mealac/pritchett/00routesdata/1700_1799/jaipur/delhijantarearly/delhijantarearly.html +[8]:https://www.linux.com/licenses/category/used-permission +[9]:https://www.linux.com/licenses/category/used-permission + From 1e85ca2c64981a4ecdfc7566b44986ae7754dd35 Mon Sep 17 00:00:00 2001 From: TRsky <625310581@qq.com> Date: Tue, 5 Dec 2017 23:25:06 +0800 Subject: [PATCH 042/236] translated by HardworkFish --- .../20171130 Wake up and Shut Down Linux Automatically.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md b/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md index 9c02d113e9..4b02fce189 100644 --- a/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md +++ b/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md @@ -60,7 +60,7 @@ Figure 1: My system BIOS has an easy-to-use wakeup scheduler. 
远程唤醒是仅次于 BIOS 唤醒的又一种可靠的唤醒方法。这需要你从第二台计算机发送信号到所要打开的计算机。可以使用 Arduino 或 树莓派(Raspberry Pi) 发送基于 Linux 的路由器或者任何 Linux 计算机的唤醒信号。首先,查看系统主板 BIOS 是否支持 Wake-On-LAN ,要是支持的话,必须先启动它,因为它被默认为禁用。 -然后,需要一个支持 Wake-On-LAN 的网卡;无线网卡并不支持。你需要运行 ethtool 命令查看网卡是否支持 Wake-On-LAN : +然后,需要一个支持 Wake-On-LAN 的网卡;无线网卡并不支持。你需要运行 `ethtool` 命令查看网卡是否支持 Wake-On-LAN : ``` # ethtool eth0 | grep -i wake-on @@ -85,7 +85,7 @@ Figure 1: My system BIOS has an easy-to-use wakeup scheduler. * s -- 设有密码的 magic packet 唤醒 -man ethtool 命令并没说清楚 p 选项的作用;这表明任何信号都会导致唤醒。然而,在我的测试中它并没有这么做。想要实现远程唤醒主机,必须支持的功能是g -- magic packet 唤醒,而且显示这个功能已经在启用了。如果它没有被启用,你可以通过 ethtool 命令来启用它。 +man ethtool 命令并没说清楚 p 选项的作用;这表明任何信号都会导致唤醒。然而,在我的测试中它并没有这么做。想要实现远程唤醒主机,必须支持的功能是 `g -- magic packet` 唤醒,而且显示这个功能已经在启用了。如果它没有被启用,你可以通过 `ethtool` 命令来启用它。 ``` # ethtool -s eth0 wol g From a07b81629d52df95574f23a897e49a0ada5ace96 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=AD=91=E9=AD=85=E9=AD=8D=E9=AD=89?= <625310581@qq.com> Date: Tue, 5 Dec 2017 23:32:38 +0800 Subject: [PATCH 043/236] translated by HardworkFish --- .../20171130 Wake up and Shut Down Linux Automatically.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md b/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md index 4b02fce189..a4b829620f 100644 --- a/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md +++ b/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md @@ -31,7 +31,7 @@ ``` 您可以为不同的日期和时间创建多个cron作业。 通过命令 ``man 5 crontab`` 可以了解所有时间和日期的字段。 -一个快速、容易的方式是,使用 /etc/crontab 文件。但这样你必须指定用户: +一个快速、容易的方式是,使用 `/etc/crontab ` 文件。但这样你必须指定用户: ``` 15 23 * * 1-5 root shutdown -h now @@ -67,7 +67,7 @@ Figure 1: My system BIOS has an easy-to-use wakeup scheduler. Supports Wake-on: pumbg Wake-on: g ``` -这条命令输出的 Supports Wake-on字段会告诉你你的网卡现在开启了哪些功能: +这条命令输出的 Supports Wake-on 字段会告诉你你的网卡现在开启了哪些功能:     * d -- 禁用 @@ -103,9 +103,9 @@ Figure 2: Enable Wake on LAN. 
[Used with permission][9] -另一个选择是最近的网络管理器版本有一个很好的小复选框来启用Wake-On-LAN(图2)。 +另一个选择是最近的网络管理器版本有一个很好的小复选框来启用 Wake-On-LAN(图2)。 -这里有一个可以用于设置密码的地方,但是如果你的网络接口不支持 Secure On password,它就不起作用。 +这里有一个可以用于设置密码的地方,但是如果你的网络接口不支持安全密码,它就不起作用。 现在你需要配置第二台计算机来发送唤醒信号。你并不需要 root 权限,所以你可以为你的用户创建 cron 任务。你需要用到的是想要唤醒的机器的网络接口和MAC地址信息。 From 7d8fba497f003426ad862860bb9ad9f6de586b4e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=AD=91=E9=AD=85=E9=AD=8D=E9=AD=89?= <625310581@qq.com> Date: Tue, 5 Dec 2017 23:35:17 +0800 Subject: [PATCH 044/236] finish translation by HardworkFish --- ...ke up and Shut Down Linux Automatically.md | 142 ------------------ 1 file changed, 142 deletions(-) delete mode 100644 sources/tech/20171130 Wake up and Shut Down Linux Automatically.md diff --git a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md deleted file mode 100644 index 9c02d113e9..0000000000 --- a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md +++ /dev/null @@ -1,142 +0,0 @@ - -自动唤醒和关闭 Linux -===================== - -### [banner.jpg][1] - -![timekeeper](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner.jpg?itok=zItspoSb) - -了解如何通过配置 Linux 计算机来查看时间,并实现自动唤醒和关闭 Linux - -[Creative Commons Attribution][6][The Observatory at Delhi][7] - -不要成为一个电能浪费者。如果你的电脑不需要开机就请把它们关机。出于方便和计算机宅的考虑,你可以通过配置你的 Linux 计算机实现自动唤醒和关闭 Linux 。 - -### 系统运行时间 - -有时候有些电脑需要一直处在开机状态,在不超过电脑运行时间的限制下这种情况是被允许的。有些人为他们的计算机可以长时间的正常运行而感到自豪,且现在我们有内核热补丁能够实现只有在硬件发生故障时才允许机器关机。我认为比较实际可行的是能够在机器需要节省电能以及在移动硬件发生磨损的情况下,且在不需要机器运行的情况下将其关机。比如,你可以在规定的时间内唤醒备份服务器,执行备份,然后关闭它直到它要进行下一次备份。或者,你可以设置你的 Internet 网关只在特定的时间运行。任何不需要一直运行的东西都可以将其配置成在其需要工作的时候打开,待其完成工作后将其关闭。 - -### 系统休眠 - -对于不需要一直运行的电脑,使用 root 的 cron 定时任务或者 `/etc/crontab` 文件 可以可靠地关闭电脑。这个例子创建一个 root 定时任务实现每天下午 11点15分 定时关机。 - -``` -# crontab -e -u root -# m h dom mon dow command -15 23 * * * /sbin/shutdown -h now -``` -以下示例仅在周一至周五运行: -``` -15 23 * * 1-5 /sbin/shutdown -h now -``` -您可以为不同的日期和时间创建多个cron作业。 通过命令 ``man 5 crontab`` 可以了解所有时间和日期的字段。 - -一个快速、容易的方式是,使用 /etc/crontab 文件。但这样你必须指定用户: - -``` -15 23 * * 1-5 root shutdown -h now -``` - -### 自动唤醒 - -实现自动唤醒是一件很酷的事情; 我大多数使用 SUSE (SUSE Linux)的同事都在纽伦堡,因此,因此为了跟同事能有几小时一起工作的时间,我不得不需要在凌晨五点起床。我的计算机早上 5点半自动开始工作,而我只需要将自己和咖啡拖到我的桌子上就可以开始工作了。按下电源按钮看起来好像并不是什么大事,但是在每天的那个时候每件小事都会变得很大。 - -唤醒 Linux 计算机可能不比关闭它稳当,因此你可能需要尝试不同的办法。你可以使用远程唤醒(Wake-On-LAN)、RTC 唤醒或者个人电脑的 BIOS 设置预定的唤醒这些方式。做这些工作的原因是,当你关闭电脑时,这并不是真正关闭了计算机;此时计算机处在极低功耗状态且还可以接受和响应信号。你需要拔掉电源开关将其彻底关闭。 - -### BIOS 唤醒 - -BIOS 唤醒是最可靠的。我的系统主板 BIOS 有一个易于使用的唤醒调度程序。(Figure 1). Chances are yours does, too. Easy peasy. - -### [fig-1.png][2] - -![wakeup](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_11.png?itok=8qAeqo1I) - -Figure 1: My system BIOS has an easy-to-use wakeup scheduler. 
- -[Used with permission][8] - - -### 主机远程唤醒(Wake-On-LAN) - -远程唤醒是仅次于 BIOS 唤醒的又一种可靠的唤醒方法。这需要你从第二台计算机发送信号到所要打开的计算机。可以使用 Arduino 或 树莓派(Raspberry Pi) 发送基于 Linux 的路由器或者任何 Linux 计算机的唤醒信号。首先,查看系统主板 BIOS 是否支持 Wake-On-LAN ,要是支持的话,必须先启动它,因为它被默认为禁用。 - -然后,需要一个支持 Wake-On-LAN 的网卡;无线网卡并不支持。你需要运行 ethtool 命令查看网卡是否支持 Wake-On-LAN : - -``` -# ethtool eth0 | grep -i wake-on - Supports Wake-on: pumbg - Wake-on: g -``` -这条命令输出的 Supports Wake-on字段会告诉你你的网卡现在开启了哪些功能: -    -* d -- 禁用 - -* p -- 物理活动唤醒 - -* u -- 单播消息唤醒 - -* m -- 多播(组播)消息唤醒 - -* b -- 广播消息唤醒 - -* a -- ARP(Address Resolution Protocol) 唤醒 - -* g -- magic packet 唤醒 - -* s -- 设有密码的 magic packet 唤醒 - -man ethtool 命令并没说清楚 p 选项的作用;这表明任何信号都会导致唤醒。然而,在我的测试中它并没有这么做。想要实现远程唤醒主机,必须支持的功能是g -- magic packet 唤醒,而且显示这个功能已经在启用了。如果它没有被启用,你可以通过 ethtool 命令来启用它。 - -``` -# ethtool -s eth0 wol g -``` -这条命令可能会在重启后失效,所以为了确保万无一失,你可以创建个 root 用户的定时任务(cron)在每次重启的时候来执行这条命令。 -``` -@reboot /usr/bin/ethtool -s eth0 wol g -``` - -### [fig-2.png][3] - -![wakeonlan](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_7.png?itok=XQAwmHoQ) - -Figure 2: Enable Wake on LAN. - -[Used with permission][9] - -另一个选择是最近的网络管理器版本有一个很好的小复选框来启用Wake-On-LAN(图2)。 - -这里有一个可以用于设置密码的地方,但是如果你的网络接口不支持 Secure On password,它就不起作用。 - -现在你需要配置第二台计算机来发送唤醒信号。你并不需要 root 权限,所以你可以为你的用户创建 cron 任务。你需要用到的是想要唤醒的机器的网络接口和MAC地址信息。 - -``` -30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B -``` -### RTC 唤醒(RTC Alarm Clock) - -通过使用实时闹钟来唤醒计算机是最不可靠的方法。对于这个方法,可以参看 [Wake Up Linux With an RTC Alarm Clock][4] ;对于现在的大多数发行版来说这种方法已经有点过时了。 - -下周继续了解更多关于使用RTC唤醒的方法。 - -通过 Linux 基金会和 edX 可以学习更多关于 Linux 的免费 [ Linux 入门][5]教程。 - --------------------------------------------------------------------------------- - -via:https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux-automatically - -作者:[Carla Schroder] -译者:[译者ID](https://github.com/HardworkFish) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.linux.com/files/images/bannerjpg -[2]:https://www.linux.com/files/images/fig-1png-11 -[3]:https://www.linux.com/files/images/fig-2png-7 -[4]:https://www.linux.com/learn/wake-linux-rtc-alarm-clock -[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux -[6]:https://www.linux.com/licenses/category/creative-commons-attribution -[7]:http://www.columbia.edu/itc/mealac/pritchett/00routesdata/1700_1799/jaipur/delhijantarearly/delhijantarearly.html -[8]:https://www.linux.com/licenses/category/used-permission -[9]:https://www.linux.com/licenses/category/used-permission - From f01eac69d05de4a92aebdf0602feaa280338616a Mon Sep 17 00:00:00 2001 From: Yixun Xu Date: Tue, 5 Dec 2017 10:40:50 -0500 Subject: [PATCH 045/236] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E8=AE=A4=E9=A2=86=20?= =?UTF-8?q?30=20Best=20Linux=20Games...?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...1204 30 Best Linux Games On Steam You Should Play in 2017.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md b/sources/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md index 3248358b37..7a14f92847 100644 --- a/sources/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md +++ b/sources/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md @@ -1,3 +1,5 @@ +yixunx translating + 30 Best Linux Games On Steam You Should Play in 2017 
============================================================ From 7f54fbf12e9ce45dc7859ed0eb323fa79d3be37e Mon Sep 17 00:00:00 2001 From: aiwhj Date: Wed, 6 Dec 2017 01:40:45 +0800 Subject: [PATCH 046/236] translated --- ...actices for getting started with DevOps.md | 95 ------------------- ...actices for getting started with DevOps.md | 92 ++++++++++++++++++ 2 files changed, 92 insertions(+), 95 deletions(-) delete mode 100644 sources/tech/20171129 5 best practices for getting started with DevOps.md create mode 100644 translated/tech/20171129 5 best practices for getting started with DevOps.md diff --git a/sources/tech/20171129 5 best practices for getting started with DevOps.md b/sources/tech/20171129 5 best practices for getting started with DevOps.md deleted file mode 100644 index 7694180c14..0000000000 --- a/sources/tech/20171129 5 best practices for getting started with DevOps.md +++ /dev/null @@ -1,95 +0,0 @@ -translating---aiwhj -5 best practices for getting started with DevOps -============================================================ - -### Are you ready to implement DevOps, but don't know where to begin? Try these five best practices. - - -![5 best practices for getting started with DevOps](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops-gears.png?itok=rUejbLQX "5 best practices for getting started with DevOps") -Image by :  - -[Andrew Magill][8]. Modified by Opensource.com. [CC BY 4.0][9] - -DevOps often stymies early adopters with its ambiguity, not to mention its depth and breadth. By the time someone buys into the idea of DevOps, their first questions usually are: "How do I get started?" and "How do I measure success?" These five best practices are a great road map to starting your DevOps journey. - -### 1\. Measure all the things - -You don't know for sure that your efforts are even making things better unless you can quantify the outcomes. Are my features getting out to customers more rapidly? Are fewer defects escaping to them? Are we responding to and recovering more quickly from failure? - -Before you change anything, think about what kinds of outcomes you expect from your DevOps transformation. When you're further into your DevOps journey, you'll enjoy a rich array of near-real-time reports on everything about your service. But consider starting with these two metrics: - -* **Time to market** measures the end-to-end, often customer-facing, business experience. It usually begins when a feature is formally conceived and ends when the customer can consume the feature in production. Time to market is not mainly an engineering team metric; more importantly it shows your business' complete end-to-end efficiency in bringing valuable new features to market and isolates opportunities for system-wide improvement. - -* **Cycle time** measures the engineering team process. Once work on a new feature starts, when does it become available in production? This metric is very useful for understanding the efficiency of the engineering team and isolating opportunities for team-level improvement. - -### 2\. Get your process off the ground - -DevOps success requires an organization to put a regular (and hopefully effective) process in place and relentlessly improve upon it. It doesn't have to start out being effective, but it must be a regular process. Usually that it's some flavor of agile methodology like Scrum or Scrumban; sometimes it's a Lean derivative. 
Whichever way you go, pick a formal process, start using it, and get the basics right. - -Regular inspect-and-adapt behaviors are key to your DevOps success. Make good use of opportunities like the stakeholder demo, team retrospectives, and daily standups to find opportunities to improve your process. - -A lot of your DevOps success hinges on people working effectively together. People on a team need to work from a common process that they are empowered to improve upon. They also need regular opportunities to share what they are learning with other stakeholders, both upstream and downstream, in the process. - -Good process discipline will help your organization consume the other benefits of DevOps at the great speed that comes as your success builds. - -Although it's common for more development-oriented teams to successfully adopt processes like Scrum, operations-focused teams (or others that are more interrupt-driven) may opt for a process with a more near-term commitment horizon, such as Kanban. - -### 3\. Visualize your end-to-end workflow - -There is tremendous power in being able to see who's working on what part of your service at any given time. Visualizing your workflow will help people know what they need to work on next, how much work is in progress, and where the bottlenecks are in the process. - -You can't effectively limit work in process until you can see it and quantify it. Likewise, you can't effectively eliminate bottlenecks until you can clearly see them. - -Visualizing the entire workflow will help people in all parts of the organization understand how their work contributes to the success of the whole. It can catalyze relationship-building across organizational boundaries to help your teams collaborate more effectively towards a shared sense of success. - -### 4\. Continuous all the things - -DevOps promises a dizzying array of compelling automation. But Rome wasn't built in a day. One of the first areas you can focus your efforts on is [continuous integration][10] (CI). But don't stop there; you'll want to follow quickly with [continuous delivery][11] (CD) and eventually continuous deployment. - -Your CD pipeline is your opportunity to inject all manner of automated quality testing into your process. The moment new code is committed, your CD pipeline should run a battery of tests against the code and the successfully built artifact. The artifact that comes out at the end of this gauntlet is what progresses along your process until eventually it's seen by customers in production. - -Another "continuous" that doesn't get enough attention is continuous improvement. That's as simple as setting some time aside each day to ask your colleagues: "What small thing can we do today to get better at how we do our work?" These small, daily changes compound over time into more profound results. You'll be pleasantly surprised! But it also gets people thinking all the time about how to improve things. - -### 5\. Gherkinize - -Fostering more effective communication across your organization is crucial to fostering the sort of systems thinking prevalent in successful DevOps journeys. One way to help that along is to use a shared language between the business and the engineers to express the desired acceptance criteria for new features. A good product manager can learn [Gherkin][12] in a day and begin using it to express acceptance criteria in an unambiguous, structured form of plain English. 
Engineers can use this Gherkinized acceptance criteria to write acceptance tests against the criteria, and then develop their feature code until the tests pass. This is a simplification of [acceptance test-driven development][13](ATDD) that can also help kick start your DevOps culture and engineering practice.
-
-### Start on your journey
-
-Don't be discouraged by getting started with your DevOps practice. It's a journey. And hopefully these five ideas give you solid ways to get started.
-
-
-### About the author
-
- [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot_4.jpg?itok=jntfDCfX)][14]
-
- Magnus Hedemark - Magnus has been in the IT industry for over 20 years, and a technology enthusiast for most of his life. He's presently Manager of DevOps Engineering at UnitedHealth Group. In his spare time, Magnus enjoys photography and paddling canoes.
-
--------------------------------------------------------------------------------
-
-via: https://opensource.com/article/17/11/5-keys-get-started-devops
-
-作者:[Magnus Hedemark ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/magnus919
-[1]:https://opensource.com/tags/devops?src=devops_resource_menu1
-[2]:https://opensource.com/resources/devops?src=devops_resource_menu2
-[3]:https://www.openshift.com/promotions/devops-with-openshift.html?intcmp=7016000000127cYAAQ&src=devops_resource_menu3
-[4]:https://enterprisersproject.com/article/2017/5/9-key-phrases-devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu4
-[5]:https://www.redhat.com/en/insights/devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu5
-[6]:https://opensource.com/article/17/11/5-keys-get-started-devops?rate=oEOzMXx1ghbkfl2a5ae6AnvO88iZ3wzkk53K2CzbDWI
-[7]:https://opensource.com/user/25739/feed
-[8]:https://ccsearch.creativecommons.org/image/detail/7qRx_yrcN5isTMS0u9iKMA==
-[9]:https://creativecommons.org/licenses/by-sa/4.0/
-[10]:https://martinfowler.com/articles/continuousIntegration.html
-[11]:https://martinfowler.com/bliki/ContinuousDelivery.html
-[12]:https://cucumber.io/docs/reference
-[13]:https://en.wikipedia.org/wiki/Acceptance_test%E2%80%93driven_development
-[14]:https://opensource.com/users/magnus919
-[15]:https://opensource.com/users/magnus919
-[16]:https://opensource.com/users/magnus919
-[17]:https://opensource.com/tags/devops
diff --git a/translated/tech/20171129 5 best practices for getting started with DevOps.md b/translated/tech/20171129 5 best practices for getting started with DevOps.md
new file mode 100644
index 0000000000..3fa96176d5
--- /dev/null
+++ b/translated/tech/20171129 5 best practices for getting started with DevOps.md
@@ -0,0 +1,92 @@
+开始 DevOps 之旅的 5 个最佳实践
+============================================================
+
+### 想要实现 DevOps 但是不知道如何开始吗?试试这 5 个最佳实践吧。
+
+
+![5 best practices for getting started with DevOps](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops-gears.png?itok=rUejbLQX "5 best practices for getting started with DevOps")
+
+Image by : [Andrew Magill][8]. Modified by Opensource.com. [CC BY 4.0][9]
+
+想要采用 DevOps 的人通常会过早地被它的歧义性吓跑,更不要说它的深度和广度了。当一个人开始接受 DevOps 的理念时,他的第一个问题通常是:“如何开始呢?”、“怎么衡量是否成功呢?”。这 5 个最佳实践是指导你开始 DevOps 之旅的很好的路线图。
+
+### 1\. 衡量所有的事情
+
+除非你能量化产出结果,否则你并不能确定你的努力是否使事情变得更好。新功能是否更快地交付给了客户?是不是有更少的缺陷流到了他们手上?出错了能否快速地应对和恢复?
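+
+(作为补充,下面给出一个粗略的示意脚本,用 git 历史来估算后文将要介绍的“周期时间”指标。这只是一个假设性的草稿:它假定仓库使用合并提交而非 squash 合并,且合并说明中包含分支名,分支名由参数传入,仅用于说明这类指标可以如何量化:)
+
+```
+#!/usr/bin/env bash
+# 估算某个功能分支的周期时间:从该分支的第一个提交到它被合并进 master 的天数。
+# 注意:这是示意性脚本,假定存在合并提交,且合并说明中包含分支名。
+branch="$1"
+first=$(git log --reverse --format=%ct "master..$branch" | head -n 1)
+merged=$(git log --merges --grep="$branch" --format=%ct -n 1 master)
+if [ -n "$first" ] && [ -n "$merged" ]; then
+  echo "周期时间:$(( (merged - first) / 86400 )) 天"
+fi
+```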
+
+在你开始做任何修改之前,思考一下你期望从 DevOps 转型中得到什么样的结果。随着你的 DevOps 之旅的深入,你将享受到关于你的服务的方方面面的丰富的近实时报告,但建议先从这两个指标开始:
+
+* **上架时间** 衡量的是端到端的、通常面向客户的业务体验。它通常从一个功能被正式提出开始,到客户可以在产品中使用这个功能结束。上架时间主要不是工程团队的指标;更重要的是,它展示了你的业务将有价值的新功能推向市场的完整的端到端效率,并能找出系统层面的改进机会。
+
+* **周期时间** 衡量的是工程团队的流程。从开始开发一个新功能,到它在生产环境中可用需要多久?这个指标对于你理解工程团队的效率非常有用,并能找出团队层面的改进机会。
+
+### 2\. 放飞你的流程
+
+DevOps 的成功需要组织建立一个定期(并且希望是有效)的流程,并且持续地改进它。它不必一开始就是有效的,但必须是一个定期执行的流程。通常它带有一些敏捷方法的味道,比如 Scrum 或者 Scrumban;有时候它也是精益开发的衍生。不论你选择什么方法,挑选一个正式的流程,开始使用它,并且把基础打好。
+
+定期的检查和调整是 DevOps 成功的关键,要善用干系人演示、团队回顾、每日站会这样的机会来改进你的流程。
+
+DevOps 的成功很大程度上取决于大家一起有效地工作。团队的成员需要在一个他们有权改进的公共流程中工作。他们也需要定期找机会,与流程中上下游的其他干系人分享他们学到的东西。
+
+随着你不断取得成功,好的流程规范能帮助你的团队以更快的速度享受到 DevOps 的其他好处。
+
+尽管更多面向开发的团队采用 Scrum 是常见的,但是以运维为中心的团队(或者其他中断驱动的团队)可能会选用一个承诺周期更短的流程,例如 Kanban。
+
+### 3\. 可视化工作流程
+
+能够看到谁在什么时间做着服务的哪一部分工作,是非常强大的能力。可视化你的工作流程能帮助大家知道接下来应该做什么、有多少工作正在进行,以及流程中的瓶颈在哪里。
+
+在你能看到并量化流程中的工作之前,你并不能有效地限制它们。同样的,你也不能有效地消除瓶颈,直到你能清楚地看到它们。
+
+将整个工作流程可视化,能帮助组织中各个部门的成员了解他们的工作是如何为整体的成功做出贡献的。它可以促进跨组织边界的关系建设,帮助你的团队更有效地协作,实现共同的成就感。
+
+### 4\. 持续化所有的事情
+
+DevOps 承诺了一系列令人目不暇接的自动化。然而罗马不是一日建成的。你可以首先关注的领域之一是[持续集成][10](CI),但是不要停在这里;紧接着的是[持续交付][11](CD)以及最终的持续部署。
+
+你的 CD 管道正是向流程中注入各种自动化质量测试的机会。当新代码被提交后,你的 CD 管道应该对代码和构建出来的产物运行一系列测试。通过这些考验的产物会沿着你的流程继续前进,直到最终在生产环境中被客户看到。
+
+另一个不太受重视的“持续”是持续改进。它可以简单到每天抽出一点时间问问你旁边的同事:“今天我们可以做点什么小事,让工作变得更好?”。随着时间的推移,这些日常的小改进会累积成更深远的成果,你将会惊喜不已!而且这也会让人们时刻思考着如何改进。
+
+### 5\. Gherkinize
+
+促进组织内更有效的沟通,对于培养成功的 DevOps 之旅中常见的系统性思维至关重要。一种有所帮助的做法是,在业务人员和工程师之间使用一种共享语言来描述新功能的验收标准。一个好的产品经理能在一天内学会 [Gherkin][12],然后用它以一种无歧义的、结构化的简单英语来描述验收标准;工程师则可以使用这种 Gherkin 化的验收标准来编写验收测试,之后开发功能代码,直到测试通过。这是一个简化的[验收测试驱动开发][13](ATDD),它同样可以帮助你开启 DevOps 文化和工程实践。
+
+### 开始你的旅程
+
+不要因为刚开始 DevOps 实践而气馁。这是一个旅程。希望这五个想法能给你提供坚实的入门方法。
+
+
+### 关于作者
+
+ [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot_4.jpg?itok=jntfDCfX)][14]
+
+ Magnus Hedemark - Magnus 在 IT 行业已有 20 多年,并且一直热衷于技术。他目前是 UnitedHealth Group 的 DevOps 工程经理。在业余时间,Magnus 喜欢摄影和划独木舟。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/11/5-keys-get-started-devops
+
+作者:[Magnus Hedemark ][a]
+译者:[aiwhj](https://github.com/aiwhj)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/magnus919
+[1]:https://opensource.com/tags/devops?src=devops_resource_menu1
+[2]:https://opensource.com/resources/devops?src=devops_resource_menu2
+[3]:https://www.openshift.com/promotions/devops-with-openshift.html?intcmp=7016000000127cYAAQ&src=devops_resource_menu3
+[4]:https://enterprisersproject.com/article/2017/5/9-key-phrases-devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu4
+[5]:https://www.redhat.com/en/insights/devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu5
+[6]:https://opensource.com/article/17/11/5-keys-get-started-devops?rate=oEOzMXx1ghbkfl2a5ae6AnvO88iZ3wzkk53K2CzbDWI
+[7]:https://opensource.com/user/25739/feed
+[8]:https://ccsearch.creativecommons.org/image/detail/7qRx_yrcN5isTMS0u9iKMA==
+[9]:https://creativecommons.org/licenses/by-sa/4.0/
+[10]:https://martinfowler.com/articles/continuousIntegration.html
+[11]:https://martinfowler.com/bliki/ContinuousDelivery.html
+[12]:https://cucumber.io/docs/reference
+[13]:https://en.wikipedia.org/wiki/Acceptance_test%E2%80%93driven_development
+[14]:https://opensource.com/users/magnus919
+[15]:https://opensource.com/users/magnus919
+[16]:https://opensource.com/users/magnus919
+[17]:https://opensource.com/tags/devops
From 9c85d472b21dbf1d0e3218b148826bff21d867d4 Mon Sep 17 00:00:00 2001
From: wxy
Date: Wed, 6 Dec 2017 09:11:52 +0800
Subject: [PATCH
047/236] PRF:20171118 Language engineering for great justice.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 初步校对,@Valoniakim @yunfengHe 再帮着过一遍~ --- ... Language engineering for great justice.md | 118 +++++++++--------- 1 file changed, 59 insertions(+), 59 deletions(-) diff --git a/translated/tech/20171118 Language engineering for great justice.md b/translated/tech/20171118 Language engineering for great justice.md index 301337b11c..fc116e2e18 100644 --- a/translated/tech/20171118 Language engineering for great justice.md +++ b/translated/tech/20171118 Language engineering for great justice.md @@ -1,59 +1,59 @@ - -最合理的语言工程模式 - -============================================================ - - - -当你熟练掌握一体化工程技术时,你就会发现它逐渐超过了技术优化的层面。我们制作的每件手工艺品都在一个大环境背景下,在这个环境中,人类的行为逐渐突破了经济意义,社会学意义,达到了奥地利经济学家所称的“人类行为学”,这是目的明确的人类行为所能达到的最大范围。 - - - -对我来说这并不只是抽象理论。当我在开源发展项目中编写时,我的行为就十分符合人类行为学的理论,这行为不是针对任何特定的软件技术或某个客观事物,它指的是在开发科技的过程中人类行为的背景环境。从人类行为学角度对科技进行的解读不断增加,大量的这种解读可以重塑科技框架,带来人类生产力和满足感的极大幅度增长,而这并不是由于我们换了工具,而是在于我们改变了掌握它们的方式。 - - - -在这个背景下,我在第三篇额外的文章中谈到了 C 语言的衰退和正在到来的巨大改变,而我们也确实能够感受到系统编程的新时代的到来,在这个时刻,我决定把我之前有的大体的预感具象化为更加具体的,更实用的点子,它们主要是关于计算机语言设计的分析,例如为什么他们会成功,或为什么他们会失败。 - - - -在我最近的一篇文章中,我写道:所有计算机语言都是对机器资源的成本和程序员工作成本的相对权衡的结果,和对其相对价值的体现。这些都是在一个计算能力成本不断下降但程序员工作成本不减反增的背景下产生的。我还强调了转化成本在使原有交易主张适用于当下环境中的新增角色。在文中我将编程人员描述为一个寻找今后最适方案的探索者。 - - - -现在我要讲一讲最后一点。以现有水平为起点,一个语言工程师有极大可能通过多种方式推动语言设计的发展。通过什么系统呢? GC 还是人工分配?使用何种配置,命令式语言,函数程式语言或是面向对象语言?但是从人类行为学的角度来说,我认为它的形式会更简洁,也许只是选择解决长期问题还是短期问题? - - - -所谓的“远”“近”之分,是指硬件成本的逐渐降低,软件复杂程度的上升和由现有语言向其他语言转化的成本的增加,根据它们的变化曲线所做出的判断。短期问题指编程人员眼下发现的问题,长期问题指可预见的一系列情况,但它们一段时间内不会到来。针对近期问题所做出的部署需要非常及时且有效,但随着情况的变化,短期解决方案有可能很快就不适用了。而长期的解决方案可能因其过于超前而夭折,或因其代价过高无法被接受。 - - - -在计算机刚刚面世的时候, FORTRAN 是近期亟待解决的问题, LISP 是远期问题。汇编语言是短期解决方案,图解说明非通用语言的分类应用,还有关门电阻不断上涨的成本。随着计算机技术的发展,PHP 和 Javascript逐渐应用于游戏中。至于长期的解决方案? Oberon , Ocaml , ML , XML-Docbook 都可以。 他们形成的激励机制带来了大量具有突破性和原创性的想法,事态蓬勃但未形成体系,那个时候距离专业语言的面世还很远,(值得注意的是这些想法的出现都是人类行为学中的因果,并非由于某种技术)。专业语言会失败,这是显而易见的,它的转入成本高昂,让大部分人望而却步,因此不能没能达到能够让主流群体接受的水平,被孤立,被搁置。这也是 LISP 不为人知的的过去,作为前 LISP 管理层人员,出于对它深深的爱,我为你们讲述了这段历史。 - - - -如果短期解决方案出现故障,它的后果更加惨不忍睹,最好的结果是期待一个相对体面的失败,好转换到另一个设计方案。(通常在转化成本较高时)如果他们执意继续,通常造成众多方案相互之间藕断丝连,形成一个不断扩张的复合体,一直维持到不能运转下去,变成一堆摇摇欲坠的杂物。是的,我说的就是 C++ 语言,还有 Java 描述语言,(唉)还有 Perl,虽然 Larry Wall 的好品味成功地让他维持了很多年,问题一直没有爆发,但在 Perl 6 发行时,他的好品味最终引爆了整个问题。 - - - -这种思考角度激励了编程人员向着两个不同的目的重新塑造语言设计: ①以远近为轴,在自身和预计的未来之间选取一个最适点,然后 ②降低由一种或多种语言转化为自身语言的转入成本,这样你就可以吸纳他们的用户群。接下来我会讲讲 C 语言是怎样占领全世界的。 - - - -在整个计算机发展史中,没有谁能比 C 语言完美地把握最适点的选取了,我要做的只是证明这一点,作为一种实用的主流语言, C 语言有着更长的寿命,它目睹了无数个竞争者的兴衰,但它的地位仍旧不可取代。从淘汰它的第一个竞争者到现在已经过了 35 年,但看起来C语言的终结仍旧不会到来。 - - - -当然,如果你愿意的话,可以把 C 语言的持久存在归功于人类的文化惰性,但那是对“文化惰性”这个词的曲解, C 语言一直得以延续的真正原因是没有人提供足够的转化费用! 
- - - -相反的, C 语言低廉的内部转化费用未得到应有的重视,C 语言是如此的千变万化,从它漫长统治时期的初期开始,它就可以适用于多种语言如 FORTRAN , Pascal , 汇编语言和 LISP 的编程习惯。在二十世纪八十年代我就注意到,我可以根据编程人员的编码风格判断出他的母语是什么,这也从另一方面证明了C 语言的魅力能够吸引全世界的人使用它。 - - - -C++ 语言同样胜在它低廉的转化费用。很快,大部分新兴的语言为了降低自身转化费用,纷纷参考 C 语言语法。请注意这给未来的语言设计环境带来了什么影响:它尽可能地提高了 C-like 语言的价值,以此来降低其他语言转化为 C 语言的转化成本。 - - - -另一种降低转入成本的方法十分简单,即使没接触过编程的人都能学会,但这种方法很难完成。我认为唯一使用了这种方法的 Python就是靠这种方法进入了职业比赛。对这个方法我一带而过,是因为它并不是我希望看到的,顺利执行的系统语言战略,虽然我很希望它不是那样的。 - - - -今天我们在2017年年底聚集在这里,下一项我们应该为某些暴躁的团体发声,如 Go 团队,但事实并非如此。 Go 这个项目漏洞百出,我甚至可以想象出它失败的各种可能,Go 团队太过固执独断,即使几乎整个用户群体都认为 Go 需要做出改变了,Go 团队也无动于衷,这是个大问题。 一旦发生故障, GC 发生延迟或者用牺牲生产量来弥补延迟,但无论如何,它都会严重影响到这种语言的应用,大幅缩小这种语言的适用范围。 - - - -即便如此,在 Go 的设计中,还是有一个我颇为认同的远大战略目标,想要理解这个目标,我们需要回想一下如果想要取代 C 语言,要面临的短期问题是什么。同我之前提到的,随着项目计划的不断扩张,故障率也在持续上升,这其中内存管理方面的故障尤其多,而内存管理一直是崩溃漏洞和安全漏洞的高发领域。 - - - -我们现在已经知道了两件十分中重要的紧急任务,要想取代 C 语言,首先要先做到这两点:(1)解决内存管理问题;(2)降低由 C 语言向本语言转化时所需的转入成本。纵观编程语言的历史——从人类行为学的角度来看,作为 C 语言的准替代者,如果不能有效解决转入成本过高这个问题,那他们所做的其他部分做得再好都不算数。相反的,如果他们把转入成本过高这个问题解决地很好,即使他们其他部分做的不是最好的,人们也不会对他们吹毛求疵。 - - - -这正是 Go 的做法,但这个理论并不是完美无瑕的,它也有局限性。目前 GC 延迟限制了它的发展,但 Go 现在选择照搬 Unix 下 C 语言的传染战略,让自身语言变成易于转入,便于传播的语言,其繁殖速度甚至快于替代品。但从长远角度看,这并不是个好办法。 - - - -当然, Rust 语言的不足是个十分明显的问题,我们不应当回避它。而它,正将自己定位为适用于长远计划的选择。在之前的部分中我已经谈到了为什么我觉得它还不完美,Rust 语言在 TIBOE 和PYPL 指数上的成就也证明了我的说法,在 TIBOE 上 Rust 从来没有进过前20名,在 PYPL 指数上它的成就也比 Go 差很多。 - - - -五年后 Rust 能发展的怎样还是个问题,如果他们愿意改变,我建议他们重视转入成本问题。以我个人经历来说,由 C 语言转入 Rust 语言的能量壁垒使人望而却步。如果编码提升工具比如 Corrode 只能把 C 语言映射为不稳定的 Rust 语言,但不能解决能量壁垒的问题;或者如果有更简单的方法能够自动注释所有权或试用期,人们也不再需要它们了——这些问题编译器就能够解决。目前我不知道怎样解决这个问题,但我觉得他们最好找出解决方案。 - - - -在最后我想强调一下,虽然在 Ken Thompson 的设计经历中,他看起来很少解决短期问题,但他对未来有着极大的包容性,并且这种包容性还在不断提升。当然 Unix 也是这样的, 它让我不禁暗自揣测,让我认为 Go 语言中令人不快的地方都其实是他们未来事业的基石(例如缺乏泛型)。如果要确认这件事是真假,我需要比 Ken 还要聪明,但这并不是一件容易让人相信的事情。 - - - --------------------------------------------------------------------------------- - - - -via: http://esr.ibiblio.org/?p=7745 - - - -作者:[Eric Raymond ][a] - -译者:[Valoniakim](https://github.com/Valoniakim) - -校对:[校对者ID](https://github.com/校对者ID) - - - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - - - -[a]:http://esr.ibiblio.org/?author=2 - -[1]:http://esr.ibiblio.org/?author=2 - -[2]:http://esr.ibiblio.org/?p=7711&cpage=1#comment-1913931 - -[3]:http://esr.ibiblio.org/?p=7745 +ESR:最合理的语言工程模式 +============================================================ + +当你熟练掌握一体化工程技术时,你就会发现它逐渐超过了技术优化的层面。我们制作的每件手工艺品都在一个大环境背景下,在这个环境中,人类的行为逐渐突破了经济意义、社会学意义,达到了奥地利经济学家所称的“人类行为学praxeology”,这是目的明确的人类行为所能达到的最大范围。 + +对我来说这并不只是抽象理论。当我在开源开发项目中编写论文时,我的行为就十分符合人类行为学的理论,这行为不是针对任何特定的软件技术或某个客观事物,它指的是在开发科技的过程中人类行为的背景环境。从人类行为学角度对科技进行的解读不断增加,大量的这种解读可以重塑科技框架,带来人类生产力和满足感的极大幅度增长,而这并不是由于我们换了工具,而是在于我们改变了掌握它们的方式。 + +在这个背景下,我的计划之外的文章系列的第三篇中谈到了 C 语言的衰退和正在到来的巨大改变,而我们也确实能够感受到系统编程的新时代的到来,在这个时刻,我决定把我之前有的大体的预感具象化为更加具体的、更实用的想法,它们主要是关于计算机语言设计的分析,例如为什么它们会成功,或为什么它们会失败。 + +在我最近的一篇文章中,我写道:所有计算机语言都是对机器资源的成本和程序员工作成本的相对权衡的结果,和对其相对价值的体现。这些都是在一个计算能力成本不断下降但程序员工作成本不减反增的背景下产生的。我还强调了转化成本在使原有交易主张适用于当下环境中的新增角色。在文中我将编程人员描述为一个寻找今后最适方案的探索者。 + +现在我要讲一讲最后一点。以现有水平为起点,一个语言工程师有极大可能通过多种方式推动语言设计的发展。通过什么系统呢? GC 还是人工分配?使用何种配置,命令式语言、函数程式语言或是面向对象语言?但是从人类行为学的角度来说,我认为它的形式会更简洁,也许只是选择解决长期问题还是短期问题? + +所谓的“远”、“近”之分,是指硬件成本的逐渐降低,软件复杂程度的上升和由现有语言向其他语言转化的成本的增加,根据它们的变化曲线所做出的判断。短期问题指编程人员眼下发现的问题,长期问题指可预见的一系列情况,但它们一段时间内不会到来。针对近期问题所做出的部署需要非常及时且有效,但随着情况的变化,短期解决方案有可能很快就不适用了。而长期的解决方案可能因其过于超前而夭折,或因其代价过高无法被接受。 + +在计算机刚刚面世的时候, FORTRAN 是近期亟待解决的问题, LISP 是远期问题,汇编语言是短期解决方案。说明这种分类适用于非通用语言,还有 roff 标记语言。随着计算机技术的发展,PHP 和 Javascript 逐渐参与到这场游戏中。至于长期的解决方案? 
Oberon、Ocaml、ML、XML-Docbook 都可以。 它们形成的激励机制带来了大量具有突破性和原创性的想法,事态蓬勃但未形成体系,那个时候距离专业语言的面世还很远,(值得注意的是这些想法的出现都是人类行为学中的因果,并非由于某种技术)。专业语言会失败,这是显而易见的,它的转入成本高昂,让大部分人望而却步,因此不能达到能够让主流群体接受的水平,被孤立,被搁置。这也是 LISP 不为人知的过去,作为前 LISP 管理层人员,出于对它深深的爱,我为你们讲述了这段历史。
+
+如果短期解决方案出现故障,它的后果更加惨不忍睹,最好的结果是期待一个相对体面的失败,好转换到另一个设计方案。(通常在转化成本较高时)如果他们执意继续,通常造成众多方案相互之间藕断丝连,形成一个不断扩张的复合体,一直维持到不能运转下去,变成一堆摇摇欲坠的杂物。是的,我说的就是 C++ 语言,还有 Java 描述语言,(唉)还有 Perl,虽然 Larry Wall 的好品味成功地让他维持了很多年,问题一直没有爆发,但在 Perl 6 发行时,他的好品味最终引爆了整个问题。
+
+这种思考角度激励了编程人员向着两个不同的目的重新塑造语言设计: (1)以远近为轴,在自身和预计的未来之间选取一个最适点,然后(2)降低由一种或多种语言转化为自身语言的转入成本,这样你就可以吸纳他们的用户群。接下来我会讲讲 C 语言是怎样占领全世界的。
+
+在整个计算机发展史中,没有谁能比 C 语言完美地把握最适点的选取了,我要做的只是证明这一点,作为一种实用的主流语言, C 语言有着更长的寿命,它目睹了无数个竞争者的兴衰,但它的地位仍旧不可取代。从淘汰它的第一个竞争者到现在已经过了 35 年,但看起来 C 语言的终结仍旧不会到来。
+
+当然,如果你愿意的话,可以把 C 语言的持久存在归功于人类的文化惰性,但那是对“文化惰性”这个词的曲解, C 语言一直得以延续的真正原因是没有人提供足够的转化费用!
+
+相反的, C 语言低廉的内部转化成本未得到应有的重视,C 语言是如此的千变万化,从它漫长统治时期的初期开始,它就可以适用于多种语言如 FORTRAN、Pascal、汇编语言和 LISP 的编程习惯。在二十世纪八十年代我就注意到,我可以根据编程人员的编码风格判断出他的母语是什么,这也从另一方面证明了 C 语言的魅力能够吸引全世界的人使用它。
+
+C++ 语言同样胜在它低廉的转化成本。很快,大部分新兴的语言为了降低自身转化成本,纷纷参考 C 语言语法。请注意这给未来的语言设计环境带来了什么影响:它尽可能地提高了类 C 语言的价值,以此来降低其他语言转化为 C 语言的转化成本。
+
+另一种降低转入成本的方法十分简单,即使没接触过编程的人都能学会,但这种方法很难完成。我认为唯一使用了这种方法的 Python 就是靠这种方法进入了职业比赛。对这个方法我一带而过,是因为它并不是我希望看到的,顺利执行的系统语言战略,虽然我很希望它不是那样的。
+
+今天我们在 2017 年底聚集在这里,下一项我们应该为某些暴躁的团体发声,如 Go 团队,但事实并非如此。 Go 这个项目漏洞百出,我甚至可以想象出它失败的各种可能,Go 团队太过固执独断,即使几乎整个用户群体都认为 Go 需要做出改变了,Go 团队也无动于衷,这是个大问题。 一旦发生故障, GC 发生延迟或者用牺牲生产量来弥补延迟,但无论如何,它都会严重影响到这种语言的应用,大幅缩小这种语言的适用范围。
+
+即便如此,在 Go 的设计中,还是有一个我颇为认同的远大战略目标,想要理解这个目标,我们需要回想一下如果想要取代 C 语言,要面临的短期问题是什么。同我之前提到的,随着项目计划的不断扩张,故障率也在持续上升,这其中内存管理方面的故障尤其多,而内存管理一直是崩溃漏洞和安全漏洞的高发领域。
+
+我们现在已经知道了两件十分重要的紧急任务,要想取代 C 语言,首先要先做到这两点:(1)解决内存管理问题;(2)降低由 C 语言向本语言转化时所需的转入成本。纵观编程语言的历史——从人类行为学的角度来看,作为 C 语言的准替代者,如果不能有效解决转入成本过高这个问题,那他们所做的其他部分做得再好都不算数。相反的,如果他们把转入成本过高这个问题解决得很好,即使他们其他部分做得不是最好的,人们也不会对他们吹毛求疵。
+
+这正是 Go 的做法,但这个理论并不是完美无瑕的,它也有局限性。目前 GC 延迟限制了它的发展,但 Go 现在选择照搬 Unix 下 C 语言的传染战略,让自身语言变成易于转入,便于传播的语言,其繁殖速度甚至快于替代品。但从长远角度看,这并不是个好办法。
+
+当然, Rust 语言的不足是个十分明显的问题,我们不应当回避它。而它,正将自己定位为适用于长远计划的选择。在之前的部分中我已经谈到了为什么我觉得它还不完美,Rust 语言在 TIOBE 和 PYPL 指数上的成就也证明了我的说法,在 TIOBE 上 Rust 从来没有进过前 20 名,在 PYPL 指数上它的成就也比 Go 差很多。
+
+五年后 Rust 能发展得怎样还是个问题,如果他们愿意改变,我建议他们重视转入成本问题。以我个人经历来说,由 C 语言转入 Rust 语言的能量壁垒使人望而却步。如果编码提升工具比如 Corrode 只能把 C 语言映射为不稳定的 Rust 语言,但不能解决能量壁垒的问题;或者如果有更简单的方法能够自动注释所有权或试用期,人们也不再需要它们了——这些问题编译器就能够解决。目前我不知道怎样解决这个问题,但我觉得他们最好找出解决方案。
+
+在最后我想强调一下,虽然在 Ken Thompson 的设计经历中,他看起来很少解决短期问题,但他对未来有着极大的包容性,并且这种包容性还在不断提升。当然 Unix 也是这样的, 它让我不禁暗自揣测,让我认为 Go 语言中令人不快的地方其实都是他们未来事业的基石(例如缺乏泛型)。如果要确认这件事的真假,我需要比 Ken 还要聪明,但这并不是一件容易让人相信的事情。
+
+--------------------------------------------------------------------------------
+
+via: http://esr.ibiblio.org/?p=7745
+
+作者:[Eric Raymond][a]
+译者:[Valoniakim](https://github.com/Valoniakim)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://esr.ibiblio.org/?author=2
+[1]:http://esr.ibiblio.org/?author=2
+[2]:http://esr.ibiblio.org/?p=7711&cpage=1#comment-1913931
+[3]:http://esr.ibiblio.org/?p=7745
From 01dcb3137581c8f23b9f6377e584d5180958775f Mon Sep 17 00:00:00 2001
From: Sihua Zheng
Date: Wed, 6 Dec 2017 09:20:11 +0800
Subject: [PATCH 048/236] translated
---
 ... write fun small web projects instantly.md | 76 -------------------
 ...
write fun small web projects instantly.md | 73 ++++++++++++++++++ 2 files changed, 73 insertions(+), 76 deletions(-) delete mode 100644 sources/tech/20171113 Glitch write fun small web projects instantly.md create mode 100644 translated/tech/20171113 Glitch write fun small web projects instantly.md diff --git a/sources/tech/20171113 Glitch write fun small web projects instantly.md b/sources/tech/20171113 Glitch write fun small web projects instantly.md deleted file mode 100644 index 734853ce51..0000000000 --- a/sources/tech/20171113 Glitch write fun small web projects instantly.md +++ /dev/null @@ -1,76 +0,0 @@ -translating---geekpi - -Glitch: write fun small web projects instantly -============================================================ - -I just wrote about Jupyter Notebooks which are a fun interactive way to write Python code. That reminded me I learned about Glitch recently, which I also love!! I built a small app to [turn of twitter retweets][2] with it. So! - -[Glitch][3] is an easy way to make Javascript webapps. (javascript backend, javascript frontend) - -The fun thing about glitch is: - -1. you start typing Javascript code into their web interface - -2. as soon as you type something, it automagically reloads the backend of your website with the new code. You don’t even have to save!! It autosaves. - -So it’s like Heroku, but even more magical!! Coding like this (you type, and the code runs on the public internet immediately) just feels really **fun** to me. - -It’s kind of like sshing into a server and editing PHP/HTML code on your server and having it instantly available, which I kind of also loved. Now we have “better deployment practices” than “just edit the code and it is instantly on the internet” but we are not talking about Serious Development Practices, we are talking about writing tiny programs for fun. - -### glitch has awesome example apps - -Glitch seems like fun nice way to learn programming! - -For example, there’s a space invaders game (code by [Mary Rose Cook][4]) at [https://space-invaders.glitch.me/][5]. The thing I love about this is that in just a few clicks I can - -1. click “remix this” - -2. start editing the code to make the boxes orange instead of black - -3. have my own space invaders game!! Mine is at [http://julias-space-invaders.glitch.me/][1]. (i just made very tiny edits to make it orange, nothing fancy) - -They have tons of example apps that you can start from – for instance [bots][6], [games][7], and more. - -### awesome actually useful app: tweetstorms - -The way I learned about Glitch was from this app which shows you tweetstorms from a given user: [https://tweetstorms.glitch.me/][8]. - -For example, you can see [@sarahmei][9]’s tweetstorms at [https://tweetstorms.glitch.me/sarahmei][10] (she tweets a lot of good tweetstorms!). - -### my glitch app: turn off retweets - -When I learned about Glitch I wanted to turn off retweets for everyone I follow on Twitter (I know you can do it in Tweetdeck!) and doing it manually was a pain – I had to do it one person at a time. So I wrote a tiny Glitch app to do it for me! - -I liked that I didn’t have to set up a local development environment, I could just start typing and go! - -Glitch only supports Javascript and I don’t really know Javascript that well (I think I’ve never written a Node program before), so the code isn’t awesome. But I had a really good time writing it – being able to type and just see my code running instantly was delightful. Here it is: [https://turn-off-retweets.glitch.me/][11]. 
-
-### that’s all!
-
-Using Glitch feels really fun and democratic. Usually if I want to fork someone’s web project and make changes I wouldn’t do it – I’d have to fork it, figure out hosting, set up a local dev environment or Heroku or whatever, install the dependencies, etc. I think tasks like installing node.js dependencies used to be interesting, like “cool i am learning something new” and now I just find them tedious.
-
-So I love being able to just click “remix this!” and have my version on the internet instantly.
-
-
--------------------------------------------------------------------------------
-
-via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/
-
-作者:[Julia Evans ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://jvns.ca/
-[1]:http://julias-space-invaders.glitch.me/
-[2]:https://turn-off-retweets.glitch.me/
-[3]:https://glitch.com/
-[4]:https://maryrosecook.com/
-[5]:https://space-invaders.glitch.me/
-[6]:https://glitch.com/handy-bots
-[7]:https://glitch.com/games
-[8]:https://tweetstorms.glitch.me/
-[9]:https://twitter.com/sarahmei
-[10]:https://tweetstorms.glitch.me/sarahmei
-[11]:https://turn-off-retweets.glitch.me/
diff --git a/translated/tech/20171113 Glitch write fun small web projects instantly.md b/translated/tech/20171113 Glitch write fun small web projects instantly.md
new file mode 100644
index 0000000000..fde7d7f880
--- /dev/null
+++ b/translated/tech/20171113 Glitch write fun small web projects instantly.md
@@ -0,0 +1,73 @@
+Glitch:立即写出有趣的小型网站项目
+============================================================
+
+我刚写了一篇关于 Jupyter Notebooks 的文章,它是一种有趣的、交互式编写 Python 代码的方式。这让我想起我最近了解的 Glitch,我同样喜爱它!我用它构建了一个小程序,用于[关闭 twitter 转推][2]。因此有了这篇文章!
+
+[Glitch][3] 是一个简单的构建 Javascript web 程序的方式(javascript 后端、javascript 前端)。
+
+关于 glitch 有趣的事有:
+
+1. 你在他们的网站输入 Javascript 代码
+
+2. 只要输入了任何代码,它会自动用你的新代码重载你的网站。你甚至不必保存!它会自动保存。
+
+所以这就像 Heroku,但更神奇!像这样的编码(你输入代码,代码立即在公共网络上运行)对我而言感觉很**有趣**。
+
+这有点像 ssh 登录服务器,编辑服务器上的 PHP/HTML 代码,并让它立即可用,这也是我所喜爱的。现在我们有了“更好的部署实践”,而不是“编辑代码,它立即出现在互联网上”,但我们并不是在谈论严肃的开发实践,而是在讨论编写微型程序的乐趣。
+
+### Glitch 有很棒的示例应用程序
+
+Glitch 似乎是学习编程的好方式!
+
+比如,这里有一个太空侵略者游戏(由 [Mary Rose Cook][4] 编写):[https://space-invaders.glitch.me/][5]。我喜欢的是我只需要点击几下就可以:
+
+1. 点击 “remix this”
+
+2. 开始编辑代码使箱子变成橘色而不是黑色
+
+3. 制作出我自己的太空侵略者游戏!我的在这里:[http://julias-space-invaders.glitch.me/][1]。(我只做了很小的更改使其变成橘色,没什么神奇的)
+
+他们有大量可以作为起点的示例程序 - 例如[机器人][6]、[游戏][7]等等。
+
+### 实际非常有用的程序:tweetstorms
+
+我了解 Glitch 就是从这个程序开始的:[https://tweetstorms.glitch.me/][8],它会向你展示给定用户的 tweetstorm。
+
+比如,你可以在 [https://tweetstorms.glitch.me/sarahmei][10] 看到 [@sarahmei][9] 的 tweetstorm(她发布了很多好的 tweetstorm!)。
+
+### 我的 Glitch 程序:关闭转推
+
+当我了解到 Glitch 的时候,我想关闭我在 Twitter 上关注的所有人的转推(我知道可以在 Tweetdeck 中做这件事),而手动做这件事很痛苦 - 我一次只能设置一个人。所以我写了一个 Glitch 程序来替我做这件事!
+
+我喜欢这一点:我不必设置本地开发环境,我可以直接开始输入代码!
+
+Glitch 只支持 Javascript,而我不是非常了解 Javascript(我之前从没写过一个 Node 程序),所以代码不是很好。但是编写它的过程很愉快 - 能够输入并立即看到我的代码运行是令人愉快的。这是我的项目:[https://turn-off-retweets.glitch.me/][11]。
+
+### 就是这些!
+ +使用 Glitch 感觉真的很有趣和民主。通常情况下,如果我想 fork 某人的 Web 项目,并做出更改,我不会这样做 - 我必须 fork,找一个托管,设置本地开发环境或者 Heroku 或其他,安装依赖项等。我认为像安装 node.js 依赖关系这样的任务过去很有趣,就像“我正在学习新东西很酷”,现在我觉得它们很乏味。 + +所以我喜欢只需点击 “remix this!” 并立即在互联网上能有我的版本。 + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/ + +作者:[Julia Evans ][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/ +[1]:http://julias-space-invaders.glitch.me/ +[2]:https://turn-off-retweets.glitch.me/ +[3]:https://glitch.com/ +[4]:https://maryrosecook.com/ +[5]:https://space-invaders.glitch.me/ +[6]:https://glitch.com/handy-bots +[7]:https://glitch.com/games +[8]:https://tweetstorms.glitch.me/ +[9]:https://twitter.com/sarahmei +[10]:https://tweetstorms.glitch.me/sarahmei +[11]:https://turn-off-retweets.glitch.me/ From 042a92d63f97636d7e760bbdc464a6d7d8053167 Mon Sep 17 00:00:00 2001 From: root Date: Wed, 6 Dec 2017 09:45:43 +0800 Subject: [PATCH 049/236] rename --- ...Long Running Terminal Commands Complete.md | 156 ------------------ 1 file changed, 156 deletions(-) delete mode 100644 sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md diff --git a/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md b/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md deleted file mode 100644 index 46afe9b893..0000000000 --- a/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md +++ /dev/null @@ -1,156 +0,0 @@ -translating---geekpi - -Undistract-me : Get Notification When Long Running Terminal Commands Complete -============================================================ - -by [sk][2] · November 30, 2017 - -![Undistract-me](https://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2-720x340.png) - -A while ago, we published how to [get notification when a Terminal activity is done][3]. Today, I found out a similar utility called “undistract-me” that notifies you when long running terminal commands complete. Picture this scenario. You run a command that takes a while to finish. In the mean time, you check your facebook and get so involved in it. After a while, you remembered that you ran a command few minutes ago. You go back to the Terminal and notice that the command has already finished. But you have no idea when the command is completed. Have you ever been in this situation? I bet most of you were in this situation many times. This is where “undistract-me” comes in help. You don’t need to constantly check the terminal to see if a command is completed or not. Undistract-me utility will notify you when a long running command is completed. It will work on Arch Linux, Debian, Ubuntu and other Ubuntu-derivatives. - -#### Installing Undistract-me - -Undistract-me is available in the default repositories of Debian and its variants such as Ubuntu. All you have to do is to run the following command to install it. - -``` -sudo apt-get install undistract-me -``` - -The Arch Linux users can install it from AUR using any helper programs. 
- -Using [Pacaur][4]: - -``` -pacaur -S undistract-me-git -``` - -Using [Packer][5]: - -``` -packer -S undistract-me-git -``` - -Using [Yaourt][6]: - -``` -yaourt -S undistract-me-git -``` - -Then, run the following command to add “undistract-me” to your Bash. - -``` -echo 'source /etc/profile.d/undistract-me.sh' >> ~/.bashrc -``` - -Alternatively you can run this command to add it to your Bash: - -``` -echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .bashrc -``` - -If you are in Zsh shell, run this command: - -``` -echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .zshrc -``` - -Finally update the changes: - -For Bash: - -``` -source ~/.bashrc -``` - -For Zsh: - -``` -source ~/.zshrc -``` - -#### Configure Undistract-me - -By default, Undistract-me will consider any command that takes more than 10 seconds to complete as a long-running command. You can change this time interval by editing /usr/share/undistract-me/long-running.bash file. - -``` -sudo nano /usr/share/undistract-me/long-running.bash -``` - -Find “LONG_RUNNING_COMMAND_TIMEOUT” variable and change the default value (10 seconds) to something else of your choice. - - [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png)][7] - -Save and close the file. Do not forget to update the changes: - -``` -source ~/.bashrc -``` - -Also, you can disable notifications for particular commands. To do so, find the “LONG_RUNNING_IGNORE_LIST” variable and add the commands space-separated like below. - -By default, the notification will only show if the active window is not the window the command is running in. That means, it will notify you only if the command is running in the background Terminal window. If the command is running in active window Terminal, you will not be notified. If you want undistract-me to send notifications either the Terminal window is visible or in the background, you can set IGNORE_WINDOW_CHECK to 1 to skip the window check. - -The other cool feature of Undistract-me is you can set audio notification along with visual notification when a command is done. By default, it will only send a visual notification. You can change this behavior by setting the variable UDM_PLAY_SOUND to a non-zero integer on the command line. However, your Ubuntu system should have pulseaudio-utils and sound-theme-freedesktop utilities installed to enable this functionality. - -Please remember that you need to run the following command to update the changes made. - -For Bash: - -``` -source ~/.bashrc -``` - -For Zsh: - -``` -source ~/.zshrc -``` - -It is time to verify if this really works. - -#### Get Notification When Long Running Terminal Commands Complete - -Now, run any command that takes longer than 10 seconds or the time duration you defined in Undistract-me script. - -I ran the following command on my Arch Linux desktop. - -``` -sudo pacman -Sy -``` - -This command took 32 seconds to complete. After the completion of the above command, I got the following notification. - - [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png)][8] - -Please remember Undistract-me script notifies you only if the given command took more than 10 seconds to complete. If the command is completed in less than 10 seconds, you will not be notified. Of course, you can change this time interval settings as I described in the Configuration section above. - -I find this tool very useful. 
It helped me to get back to the business after I completely lost in some other tasks. I hope this tool will be helpful to you too.
-
-More good stuffs to come. Stay tuned!
-
-Cheers!
-
-Resource:
-
-* [Undistract-me GitHub Repository][1]
-
--------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/undistract-get-notification-long-running-terminal-commands-complete/
-
-作者:[sk][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://github.com/jml/undistract-me
-[2]:https://www.ostechnix.com/author/sk/
-[3]:https://www.ostechnix.com/get-notification-terminal-task-done/
-[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
-[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
-[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
-[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png
-[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png
From 675e5cf6af6f303e757912ac0751b7d09684d4bf Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 6 Dec 2017 09:55:46 +0800
Subject: [PATCH 050/236] Revert "merge"

---
 ... write fun small web projects instantly.md | 76 ++++++++
 ...actices for getting started with DevOps.md | 95 ---------
 ...g Hardware for Beginners Think Software.md | 89 ---------
 ... write fun small web projects instantly.md | 73 -------
 ...ow to Manage Users with Groups in Linux.md | 183 ------------------
 5 files changed, 76 insertions(+), 440 deletions(-)
 create mode 100644 sources/tech/20171113 Glitch write fun small web projects instantly.md
 delete mode 100644 sources/tech/20171129 5 best practices for getting started with DevOps.md
 delete mode 100644 translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md
 delete mode 100644 translated/tech/20171113 Glitch write fun small web projects instantly.md
 delete mode 100644 translated/tech/20171201 How to Manage Users with Groups in Linux.md

diff --git a/sources/tech/20171113 Glitch write fun small web projects instantly.md b/sources/tech/20171113 Glitch write fun small web projects instantly.md
new file mode 100644
index 0000000000..734853ce51
--- /dev/null
+++ b/sources/tech/20171113 Glitch write fun small web projects instantly.md
@@ -0,0 +1,76 @@
+translating---geekpi
+
+Glitch: write fun small web projects instantly
+============================================================
+
+I just wrote about Jupyter Notebooks which are a fun interactive way to write Python code. That reminded me I learned about Glitch recently, which I also love!! I built a small app to [turn off twitter retweets][2] with it. So!
+
+[Glitch][3] is an easy way to make Javascript webapps. (javascript backend, javascript frontend)
+
+The fun thing about glitch is:
+
+1. you start typing Javascript code into their web interface
+
+2. as soon as you type something, it automagically reloads the backend of your website with the new code. You don’t even have to save!! It autosaves.
+
+So it’s like Heroku, but even more magical!! Coding like this (you type, and the code runs on the public internet immediately) just feels really **fun** to me.
+
+It’s kind of like sshing into a server and editing PHP/HTML code on your server and having it instantly available, which I kind of also loved.
Now we have “better deployment practices” than “just edit the code and it is instantly on the internet” but we are not talking about Serious Development Practices, we are talking about writing tiny programs for fun. + +### glitch has awesome example apps + +Glitch seems like fun nice way to learn programming! + +For example, there’s a space invaders game (code by [Mary Rose Cook][4]) at [https://space-invaders.glitch.me/][5]. The thing I love about this is that in just a few clicks I can + +1. click “remix this” + +2. start editing the code to make the boxes orange instead of black + +3. have my own space invaders game!! Mine is at [http://julias-space-invaders.glitch.me/][1]. (i just made very tiny edits to make it orange, nothing fancy) + +They have tons of example apps that you can start from – for instance [bots][6], [games][7], and more. + +### awesome actually useful app: tweetstorms + +The way I learned about Glitch was from this app which shows you tweetstorms from a given user: [https://tweetstorms.glitch.me/][8]. + +For example, you can see [@sarahmei][9]’s tweetstorms at [https://tweetstorms.glitch.me/sarahmei][10] (she tweets a lot of good tweetstorms!). + +### my glitch app: turn off retweets + +When I learned about Glitch I wanted to turn off retweets for everyone I follow on Twitter (I know you can do it in Tweetdeck!) and doing it manually was a pain – I had to do it one person at a time. So I wrote a tiny Glitch app to do it for me! + +I liked that I didn’t have to set up a local development environment, I could just start typing and go! + +Glitch only supports Javascript and I don’t really know Javascript that well (I think I’ve never written a Node program before), so the code isn’t awesome. But I had a really good time writing it – being able to type and just see my code running instantly was delightful. Here it is: [https://turn-off-retweets.glitch.me/][11]. + +### that’s all! + +Using Glitch feels really fun and democratic. Usually if I want to fork someone’s web project and make changes I wouldn’t do it – I’d have to fork it, figure out hosting, set up a local dev environment or Heroku or whatever, install the dependencies, etc. I think tasks like installing node.js dependencies used to be interesting, like “cool i am learning something new” and now I just find them tedious. + +So I love being able to just click “remix this!” and have my version on the internet instantly. 
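+
+(A side note on the turn-off-retweets idea: if you’d rather do the same thing from a terminal instead of a web app, here is a rough, hypothetical sketch. It assumes you have Twitter’s `twurl` command-line tool installed and authorized, and the screen names in the loop are placeholders; swap in the accounts you actually follow:)
+
+```
+#!/usr/bin/env bash
+# Hypothetical sketch: turn off retweets for a few accounts using the
+# Twitter REST API's friendships/update endpoint via the twurl CLI.
+for user in alice bob carol; do   # placeholder screen names
+  twurl -d "screen_name=$user&retweets=false" /1.1/friendships/update.json
+done
+```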
+ + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/ + +作者:[Julia Evans ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/ +[1]:http://julias-space-invaders.glitch.me/ +[2]:https://turn-off-retweets.glitch.me/ +[3]:https://glitch.com/ +[4]:https://maryrosecook.com/ +[5]:https://space-invaders.glitch.me/ +[6]:https://glitch.com/handy-bots +[7]:https://glitch.com/games +[8]:https://tweetstorms.glitch.me/ +[9]:https://twitter.com/sarahmei +[10]:https://tweetstorms.glitch.me/sarahmei +[11]:https://turn-off-retweets.glitch.me/ diff --git a/sources/tech/20171129 5 best practices for getting started with DevOps.md b/sources/tech/20171129 5 best practices for getting started with DevOps.md deleted file mode 100644 index 7694180c14..0000000000 --- a/sources/tech/20171129 5 best practices for getting started with DevOps.md +++ /dev/null @@ -1,95 +0,0 @@ -translating---aiwhj -5 best practices for getting started with DevOps -============================================================ - -### Are you ready to implement DevOps, but don't know where to begin? Try these five best practices. - - -![5 best practices for getting started with DevOps](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops-gears.png?itok=rUejbLQX "5 best practices for getting started with DevOps") -Image by :  - -[Andrew Magill][8]. Modified by Opensource.com. [CC BY 4.0][9] - -DevOps often stymies early adopters with its ambiguity, not to mention its depth and breadth. By the time someone buys into the idea of DevOps, their first questions usually are: "How do I get started?" and "How do I measure success?" These five best practices are a great road map to starting your DevOps journey. - -### 1\. Measure all the things - -You don't know for sure that your efforts are even making things better unless you can quantify the outcomes. Are my features getting out to customers more rapidly? Are fewer defects escaping to them? Are we responding to and recovering more quickly from failure? - -Before you change anything, think about what kinds of outcomes you expect from your DevOps transformation. When you're further into your DevOps journey, you'll enjoy a rich array of near-real-time reports on everything about your service. But consider starting with these two metrics: - -* **Time to market** measures the end-to-end, often customer-facing, business experience. It usually begins when a feature is formally conceived and ends when the customer can consume the feature in production. Time to market is not mainly an engineering team metric; more importantly it shows your business' complete end-to-end efficiency in bringing valuable new features to market and isolates opportunities for system-wide improvement. - -* **Cycle time** measures the engineering team process. Once work on a new feature starts, when does it become available in production? This metric is very useful for understanding the efficiency of the engineering team and isolating opportunities for team-level improvement. - -### 2\. Get your process off the ground - -DevOps success requires an organization to put a regular (and hopefully effective) process in place and relentlessly improve upon it. It doesn't have to start out being effective, but it must be a regular process. 
Usually that it's some flavor of agile methodology like Scrum or Scrumban; sometimes it's a Lean derivative. Whichever way you go, pick a formal process, start using it, and get the basics right. - -Regular inspect-and-adapt behaviors are key to your DevOps success. Make good use of opportunities like the stakeholder demo, team retrospectives, and daily standups to find opportunities to improve your process. - -A lot of your DevOps success hinges on people working effectively together. People on a team need to work from a common process that they are empowered to improve upon. They also need regular opportunities to share what they are learning with other stakeholders, both upstream and downstream, in the process. - -Good process discipline will help your organization consume the other benefits of DevOps at the great speed that comes as your success builds. - -Although it's common for more development-oriented teams to successfully adopt processes like Scrum, operations-focused teams (or others that are more interrupt-driven) may opt for a process with a more near-term commitment horizon, such as Kanban. - -### 3\. Visualize your end-to-end workflow - -There is tremendous power in being able to see who's working on what part of your service at any given time. Visualizing your workflow will help people know what they need to work on next, how much work is in progress, and where the bottlenecks are in the process. - -You can't effectively limit work in process until you can see it and quantify it. Likewise, you can't effectively eliminate bottlenecks until you can clearly see them. - -Visualizing the entire workflow will help people in all parts of the organization understand how their work contributes to the success of the whole. It can catalyze relationship-building across organizational boundaries to help your teams collaborate more effectively towards a shared sense of success. - -### 4\. Continuous all the things - -DevOps promises a dizzying array of compelling automation. But Rome wasn't built in a day. One of the first areas you can focus your efforts on is [continuous integration][10] (CI). But don't stop there; you'll want to follow quickly with [continuous delivery][11] (CD) and eventually continuous deployment. - -Your CD pipeline is your opportunity to inject all manner of automated quality testing into your process. The moment new code is committed, your CD pipeline should run a battery of tests against the code and the successfully built artifact. The artifact that comes out at the end of this gauntlet is what progresses along your process until eventually it's seen by customers in production. - -Another "continuous" that doesn't get enough attention is continuous improvement. That's as simple as setting some time aside each day to ask your colleagues: "What small thing can we do today to get better at how we do our work?" These small, daily changes compound over time into more profound results. You'll be pleasantly surprised! But it also gets people thinking all the time about how to improve things. - -### 5\. Gherkinize - -Fostering more effective communication across your organization is crucial to fostering the sort of systems thinking prevalent in successful DevOps journeys. One way to help that along is to use a shared language between the business and the engineers to express the desired acceptance criteria for new features. 
A good product manager can learn [Gherkin][12] in a day and begin using it to express acceptance criteria in an unambiguous, structured form of plain English. Engineers can use this Gherkinized acceptance criteria to write acceptance tests against the criteria, and then develop their feature code until the tests pass. This is a simplification of [acceptance test-driven development][13](ATDD) that can also help kick start your DevOps culture and engineering practice. - -### Start on your journey - -Don't be discouraged by getting started with your DevOps practice. It's a journey. And hopefully these five ideas give you solid ways to get started. - - -### About the author - - [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot_4.jpg?itok=jntfDCfX)][14] - - Magnus Hedemark - Magnus has been in the IT industry for over 20 years, and a technology enthusiast for most of his life. He's presently Manager of DevOps Engineering at UnitedHealth Group. In his spare time, Magnus enjoys photography and paddling canoes. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/11/5-keys-get-started-devops - -作者:[Magnus Hedemark ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/magnus919 -[1]:https://opensource.com/tags/devops?src=devops_resource_menu1 -[2]:https://opensource.com/resources/devops?src=devops_resource_menu2 -[3]:https://www.openshift.com/promotions/devops-with-openshift.html?intcmp=7016000000127cYAAQ&src=devops_resource_menu3 -[4]:https://enterprisersproject.com/article/2017/5/9-key-phrases-devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu4 -[5]:https://www.redhat.com/en/insights/devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu5 -[6]:https://opensource.com/article/17/11/5-keys-get-started-devops?rate=oEOzMXx1ghbkfl2a5ae6AnvO88iZ3wzkk53K2CzbDWI -[7]:https://opensource.com/user/25739/feed -[8]:https://ccsearch.creativecommons.org/image/detail/7qRx_yrcN5isTMS0u9iKMA== -[9]:https://creativecommons.org/licenses/by-sa/4.0/ -[10]:https://martinfowler.com/articles/continuousIntegration.html -[11]:https://martinfowler.com/bliki/ContinuousDelivery.html -[12]:https://cucumber.io/docs/reference -[13]:https://en.wikipedia.org/wiki/Acceptance_test%E2%80%93driven_development -[14]:https://opensource.com/users/magnus919 -[15]:https://opensource.com/users/magnus919 -[16]:https://opensource.com/users/magnus919 -[17]:https://opensource.com/tags/devops diff --git a/translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md b/translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md deleted file mode 100644 index a236a80e97..0000000000 --- a/translated/tech/20171012 Linux Networking Hardware for Beginners Think Software.md +++ /dev/null @@ -1,89 +0,0 @@ -Translating by FelixYFZ - -面向初学者的Linux网络硬件: 软件工程思想 -============================================================ - -![island network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/soderskar-island.jpg?itok=wiMaF66b "island network") - 没有路由和桥接,我们将会成为孤独的小岛,你将会在这个网络教程中学到更多知识。 -Commons Zero][3]Pixabay - - 上周,我们学习了本地网络硬件知识,本周,我们将学习网络互联技术和在移动网络中的一些很酷的黑客技术。 -### Routers:路由器 - - -网络路由器就是计算机网络中的一切,因为路由器连接着网络,没有路由器,我们就会成为孤岛, - 
-图一展示了一个简单的有线本地网络和一个无线接入点,所有设备都接入到Internet上,本地局域网的计算机连接到一个连接着防火墙或者路由器的以太网交换机上,防火墙或者路由器连接到网络服务供应商提供的电缆箱,调制调节器,卫星上行系统...好像一切都在计算中,就像是一个带着不停闪烁的的小灯的盒子,当你的网络数据包离开你的局域网,进入广阔的互联网,它们穿过一个又一个路由器直到到达自己的目的地。 - - -### [fig-1.png][4] - -![simple LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_7.png?itok=lsazmf3- "simple LAN") - -图一:一个简单的有线局域网和一个无线接入点。 - -一台路由器能连接一切,一个小巧特殊的小盒子只专注于路由,一个大点的盒子将会提供路由,防火墙,域名服务,以及VPN网关功能,一台重新设计的台式电脑或者笔记本,一个树莓派计算机或者一个小模块,体积臃肿矮小的像PC这样的单板计算机,除了苛刻的用途以外,普通的商品硬件都能良好的工作运行。高端的路由器使用特殊设计的硬件每秒能够传输最大量的数据包。 它们有多路数据总线,多个中央处理器和极快的存储。 -可以通过查阅Juniper和思科的路由器来感受一下高端路由器书什么样子的,而且能看看里面是什么样的构造。 -一个接入你的局域网的无线接入点要么作为一个以太网网桥要么作为一个路由器。一个桥接器扩展了这个网络,所以在这个桥接器上的任意一端口上的主机都连接在同一个网络中。 -一台路由器连接的是两个不同的网络。 -### Network Topology:网络拓扑 - - -有多种设置你的局域网的方式,你可以把所有主机接入到一个单独的平面网络,如果你的交换机支持的话,你也可以把它们分配到不同的子网中。 -平面网络是最简单的网络,只需把每一台设备接入到同一个交换机上即可,如果一台交换上的端口不够使用,你可以将更多的交换机连接在一起。 -有些交换机有特殊的上行端口,有些是没有这种特殊限制的上行端口,你可以连接其中的任意端口,你可能需要使用交叉类型的以太网线,所以你要查阅你的交换机的说明文档来设置。平面网络是最容易管理的,你不需要路由器也不需要计算子网,但它也有一些缺点。他们的伸缩性不好,所以当网络规模变得越来越大的时候就会被广播网络所阻塞。 -将你的局域网进行分段将会提升安全保障, 把局域网分成可管理的不同网段将有助于管理更大的网络。 - 图2展示了一个分成两个子网的局域网络:内部的有线和无线主机,和非军事区域(从来不知道所所有的工作上的男性术语都是在计算机上键入的?)因为他被阻挡了所有的内部网络的访问。 - - -### [fig-2.png][5] - -![LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_4.png?itok=LpXq7bLf "LAN") - -图2:一个分成两个子网的简单局域网。 -即使像图2那样的小型网络也可以有不同的配置方法。你可以将防火墙和路由器放置在一台单独的设备上。 -你可以为你的非军事区域设置一个专用的网络连接,把它完全从你的内部网络隔离,这将引导我们进入下一个主题:一切基于软件。 - - -### Think Software软件思维 - - -你可能已经注意到在这个简短的系列中我们所讨论的硬件,只有网络接口,交换机,和线缆是特殊用途的硬件。 -其它的都是通用的商用硬件,而且都是软件来定义它的用途。 -网关,虚拟专用网关,以太网桥,网页,邮箱以及文件等等。 -服务器,负载均衡,代理,大量的服务,各种各样的认证,中继,故障转移...你可以在运行着Linux系统的标准硬件上运行你的整个网络。 -你甚至可以使用Linux交换应用和VDE2协议来模拟以太网交换机,像DD-WRT,openWRT 和Rashpberry Pi distros,这些小型的硬件都是有专业的分类的,要记住BSDS和它们的特殊衍生用途如防火墙,路由器,和网络附件存储。 -你知道有些人坚持认为硬件防火墙和软件防火墙有区别?其实是没有区别的,就像说有一台硬件计算机和一台软件计算机。 -### Port Trunking and Ethernet Bonding -端口聚合和以太网绑定 -聚合和绑定,也称链路聚合,是把两条以太网通道绑定在一起成为一条通道。一些交换机支持端口聚合,就是把两个交换机端口绑定在一起成为一个是他们原来带宽之和的一条新的连接。对于一台承载很多业务的服务器来说这是一个增加通道带宽的有效的方式。 -你也可以在以太网口进行同样的配置,而且绑定汇聚的驱动是内置在Linux内核中的,所以不需要任何其他的专门的硬件。 - - -### Bending Mobile Broadband to your Will随心所欲选择你的移动带宽 - -我期望移动带宽能够迅速增长来替代DSL和有线网络。我居住在一个有250,000人口的靠近一个城市的地方,但是在城市以外,要想接入互联网就要靠运气了,即使那里有很大的用户上网需求。我居住的小角落离城镇有20分钟的距离,但对于网络服务供应商来说他们几乎不会考虑到为这个地方提供网络。 我唯一的选择就是移动带宽; 这里没有拨号网络,卫星网络(即使它很糟糕)或者是DSL,电缆,光纤,但却没有阻止网络供应商把那些在我这个区域从没看到过的无限制通信个其他高速网络服务的传单塞进我的邮箱。 -我试用了AT&T,Version,和T-Mobile。Version的信号覆盖范围最广,但是Version和AT&T是最昂贵的。 -我居住的地方在T-Mobile信号覆盖的边缘,但迄今为止他们给了最大的优惠,为了能够能够有效的使用,我必须购买一个WeBoostDe信号放大器和 -一台中兴的移动热点设备。当然你也可以使用一部手机作为热点,但是专用的热点设备有着最强的信号。如果你正在考虑购买一台信号放大器,最好的选择就是WeBoost因为他们的服务支持最棒,而且他们会尽最大努力去帮助你。在一个小小的APP的协助下去设置将会精准的增强 你的网络信号,他们有一个功能较少的免费的版本,但你将一点都不会后悔去花两美元使用专业版。 -那个小巧的中兴热点设备能够支持15台主机而且还有拥有基本的防火墙功能。 但你如果你使用像 Linksys WRT54GL这样的设备,使用Tomato,openWRT,或者DD-WRT来替代普通的固件,这样你就能完全控制你的防护墙规则,路由配置,以及任何其他你想要设置的服务。 - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-think-software - -作者:[CARLA SCHRODER][a] -译者:[FelixYFZ](https://github.com/FelixYFZ) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/cschroder -[1]:https://www.linux.com/licenses/category/used-permission -[2]:https://www.linux.com/licenses/category/used-permission -[3]:https://www.linux.com/licenses/category/creative-commons-zero -[4]:https://www.linux.com/files/images/fig-1png-7 -[5]:https://www.linux.com/files/images/fig-2png-4 
-[6]:https://www.linux.com/files/images/soderskar-islandjpg -[7]:https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-lan-hardware -[8]:http://www.bluelinepc.com/signalcheck/ diff --git a/translated/tech/20171113 Glitch write fun small web projects instantly.md b/translated/tech/20171113 Glitch write fun small web projects instantly.md deleted file mode 100644 index fde7d7f880..0000000000 --- a/translated/tech/20171113 Glitch write fun small web projects instantly.md +++ /dev/null @@ -1,73 +0,0 @@ -Glitch:立即写出有趣的小型网站项目 -============================================================ - -我刚写了一篇关于 Jupyter Notebooks 是一个有趣的交互式写 Python 代码的方式。这让我想起我最近学习了 Glitch,这个我同样喜爱!我构建了一个小的程序来用于[关闭转发 twitter][2]。因此有了这篇文章! - -[Glitch][3] 是一个简单的构建 Javascript web 程序的方式(javascript 后端、javascript 前端) - -关于 glitch 有趣的事有: - -1. 你在他们的网站输入 Javascript 代码 - -2. 只要输入了任何代码,它会自动用你的新代码重载你的网站。你甚至不必保存!它会自动保存。 - -所以这就像 Heroku,但更神奇!像这样的编码(你输入代码,代码立即在公共网络上运行)对我而言感觉很**有趣**。 - -这有点像 ssh 登录服务器,编辑服务器上的 PHP/HTML 代码,并让它立即可用,这也是我所喜爱的。现在我们有了“更好的部署实践”,而不是“编辑代码,它立即出现在互联网上”,但我们并不是在谈论严肃的开发实践,而是在讨论编写微型程序的乐趣。 - -### Glitch 有很棒的示例应用程序 - -Glitch 似乎是学习编程的好方式! - -比如,这有一个太空侵略者游戏(由 [Mary Rose Cook][4] 编写):[https://space-invaders.glitch.me/][5]。我喜欢的是我只需要点击几下。 - -1. 点击 “remix this” - -2. 开始编辑代码使箱子变成橘色而不是黑色 - -3. 制作我自己太空侵略者游戏!我的在这:[http://julias-space-invaders.glitch.me/][1]。(我只做了很小的更改使其变成橘色,没什么神奇的) - -他们有大量的示例程序,你可以从中启动 - 例如[机器人][6]、[游戏][7]等等。 - -### 实际有用的非常好的程序:tweetstorms - -我学习 Glitch 的方式是从这个程序:[https://tweetstorms.glitch.me/][8],它会向你展示给定用户的 tweetstorm。 - -比如,你可以在 [https://tweetstorms.glitch.me/sarahmei][10] 看到 [@sarahmei][9] 的 tweetstorm(她发布了很多好的 tweetstorm!)。 - -### 我的 Glitch 程序: 关闭转推 - -当我了解到 Glitch 的时候,我想关闭在 Twitter 上关注的所有人的转推(我知道可以在 Tweetdeck 中做这件事),而且手动做这件事是一件很痛苦的事 - 我一次只能设置一个人。所以我写了一个 Glitch 程序来为我做! - -我喜欢我不必设置一个本地开发环境,我可以直接开始输入然后开始! - -Glitch 只支持 Javascript,我不非常了解 Javascript(我之前从没写过一个 Node 程序),所以代码不是很好。但是编写它很愉快 - 能够输入并立即看到我的代码运行是令人愉快的。这是我的项目:[https://turn-off-retweets.glitch.me/][11]。 - -### 就是这些! 
- -使用 Glitch 感觉真的很有趣和民主。通常情况下,如果我想 fork 某人的 Web 项目,并做出更改,我不会这样做 - 我必须 fork,找一个托管,设置本地开发环境或者 Heroku 或其他,安装依赖项等。我认为像安装 node.js 依赖关系这样的任务过去很有趣,就像“我正在学习新东西很酷”,现在我觉得它们很乏味。 - -所以我喜欢只需点击 “remix this!” 并立即在互联网上能有我的版本。 - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/ - -作者:[Julia Evans ][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/ -[1]:http://julias-space-invaders.glitch.me/ -[2]:https://turn-off-retweets.glitch.me/ -[3]:https://glitch.com/ -[4]:https://maryrosecook.com/ -[5]:https://space-invaders.glitch.me/ -[6]:https://glitch.com/handy-bots -[7]:https://glitch.com/games -[8]:https://tweetstorms.glitch.me/ -[9]:https://twitter.com/sarahmei -[10]:https://tweetstorms.glitch.me/sarahmei -[11]:https://turn-off-retweets.glitch.me/ diff --git a/translated/tech/20171201 How to Manage Users with Groups in Linux.md b/translated/tech/20171201 How to Manage Users with Groups in Linux.md deleted file mode 100644 index 1927de6817..0000000000 --- a/translated/tech/20171201 How to Manage Users with Groups in Linux.md +++ /dev/null @@ -1,183 +0,0 @@ -如何在 Linux 系统中用用户组来管理用户 -============================================================ - -### [group-of-people-1645356_1920.jpg][1] - -![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/group-of-people-1645356_1920.jpg?itok=rJlAxBSV) - -本教程可以了解如何通过用户组和访问控制表(ACL)来管理用户。 - -[创意共享协议][4] - -当你需要管理一台容纳多个用户的 Linux 机器时,比起一些基本的用户管理工具所提供的方法,有时候你需要对这些用户采取更多的用户权限管理方式。特别是当你要管理某些用户的权限时,这个想法尤为重要。比如说,你有一个目录,某个用户组中的用户可以通过读和写的权限访问这个目录,而其他用户组中的用户对这个目录只有读的权限。在 Linux 中,这是完全可以实现的。但前提是你必须先了解如何通过用户组和访问控制表(ACL)来管理用户。 - -我们将从简单的用户开始,逐渐深入到复杂的访问控制表(ACL)。你可以在你所选择的 Linux 发行版完成你所需要做的一切。本文的重点是用户组,所以不会涉及到关于用户的基础知识。 - -为了达到演示的目的,我将假设: - -你需要用下面两个用户名新建两个用户: - -* olivia - -* nathan - -你需要新建以下两个用户组: - -* readers - -* editors - -olivia 属于 editors 用户组,而 nathan 属于 readers 用户组。reader 用户组对 ``/DATA`` 目录只有读的权限,而 editors 用户组则对 ``/DATA`` 目录同时有读和写的权限。当然,这是个非常小的任务,但它会给你基本的信息·。你可以扩展这个任务以适应你其他更大的需求。 - -我将在 Ubuntu 16.04 Server 平台上进行演示。这些命令都是通用的,唯一不同的是,要是在你的发行版中不使用 sudo 命令,你必须切换到 root 用户来执行这些命令。 - -### 创建用户 - -我们需要做的第一件事是为我们的实验创建两个用户。可以用 ``useradd`` 命令来创建用户,我们不只是简单地创建一个用户,而需要同时创建用户和属于他们的家目录,然后给他们设置密码。 - -``` -sudo useradd -m olivia - -sudo useradd -m nathan -``` - -我们现在创建了两个用户,如果你看看 ``/home`` 目录,你可以发现他们的家目录(因为我们用了 -m 选项,可以帮在创建用户的同时创建他们的家目录。 - -之后,我们可以用以下命令给他们设置密码: - -``` -sudo passwd olivia - -sudo passwd nathan -``` - -就这样,我们创建了两个用户。 - -### 创建用户组并添加用户 - -现在我们将创建 readers 和 editors 用户组,然后给它们添加用户。创建用户组的命令是: - -``` -addgroup readers - -addgroup editors -``` - -(译者注:当你使用 CentOS 等一些 Linux 发行版时,可能系统没有 addgroup 这个命令,推荐使用 groupadd 命令来替换 addgroup 命令以达到同样的效果) - - -### [groups_1.jpg][2] - -![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/groups_1.jpg?itok=BKwL89BB) - -图一:我们可以使用刚创建的新用户组了。 - -[Used with permission][5] - -创建用户组后,我们需要添加我们的用户到这两个用户组。我们用以下命令来将 nathan 用户添加到 readers 用户组: - -``` -sudo usermod -a -G readers nathan -``` -用以下命令将 olivia 添加到 editors 用户组: - -``` -sudo usermod -a -G editors olivia -``` - -现在我们可以通过用户组来管理用户了。 - -### 给用户组授予目录的权限 - -假设你有个目录 ``/READERS`` 且允许 readers 用户组的所有成员访问这个目录。首先,我们执行以下命令来更改目录所属用户组: - -``` -sudo chown -R :readers /READERS -``` - -接下来,执行以下命令收回目录所属用户组的写入权限: - -``` -sudo chmod -R g-w /READERS -``` - -然后我们执行下面的命令来收回其他用户对这个目录的访问权限(以防止任何不在 readers 组中的用户访问这个目录里的文件): - -``` -sudo 
chmod -R o-x /READERS -``` - -这时候,只有目录的所有者(root)和用户组 reader 中的用户可以访问 ``/READES`` 中的文件。 - -假设你有个目录 ``/EDITORS`` ,你需要给用户组 editors 里的成员这个目录的读和写的权限。为了达到这个目的,执行下面的这些命令是必要的: - -``` -sudo chown -R :editors /EDITORS - -sudo chmod -R g+w /EDITORS - -sudo chmod -R o-x /EDITORS -``` - -此时 editors 用户组的所有成员都可以访问和修改其中的文件。除此之外其他用户(除了 root 之外)无法访问 ``/EDITORS`` 中的任何文件。 - -使用这个方法的问题在于,你一次只能操作一个组和一个目录而已。这时候访问控制表(ACL)就可以派得上用场了。 - - -### 使用访问控制表(ACL) - -现在,让我们把这个问题变得棘手一点。假设你有一个目录 ``/DATA`` 并且你想给 readers 用户组的成员读取权限并同时给 editors 用户组的成员读和写的权限。为此,你必须要用到 setfacl 命令。setfacl 命令可以为文件或文件夹设置一个访问控制表(ACL)。 - -这个命令的结构如下: - -``` -setfacl OPTION X:NAME:Y /DIRECTORY -``` - -其中 OPTION 是可选选项,X 可以是 u(用户)或者是 g (用户组),NAME 是用户或者用户组的名字,/DIRECTORY 是要用到的目录。我们将使用 -m 选项进行修改(modify)。因此,我们给 readers 用户组添加读取权限的命令是: - -``` -sudo setfacl -m g:readers:rx -R /DATA -``` - -现在 readers 用户组里面的每一个用户都可以读取 /DATA 目录里的文件了,但是他们不能修改里面的内容。 - -为了给 editors 用户组里面的用户读写权限,我们执行了以下命令: - -``` -sudo setfacl -m g:editors:rwx -R /DATA -``` -上述命令将赋予 editors 用户组中的任何成员读取权限,同时保留 readers 用户组的只读权限。 - -### 更多的权限控制 - -使用访问控制表(ACL),你可以实现你所需的权限控制。你可以添加用户到用户组,并且灵活地控制这些用户组对每个目录的权限以达到你的需求。如果想了解上述工具的更多信息,可以执行下列的命令: - -* man usradd - -* man addgroup - -* man usermod - -* man sefacl - -* man chown - -* man chmod - - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2017/12/how-manage-users-groups-linux - -作者:[Jack Wallen ] -译者:[imquanquan](https://github.com/imquanquan) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.linux.com/files/images/group-people-16453561920jpg -[2]:https://www.linux.com/files/images/groups1jpg -[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux -[4]:https://www.linux.com/licenses/category/creative-commons-zero -[5]:https://www.linux.com/licenses/category/used-permission From 0783a0be6e2a322d3455eda0ee19001fa02240be Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 6 Dec 2017 10:03:52 +0800 Subject: [PATCH 051/236] PRF:20171130 Wake up and Shut Down Linux Automatically.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @HardworkFish 恭喜你,完成了第一篇翻译! 
--- ...ke up and Shut Down Linux Automatically.md | 86 ++++++++----------- 1 file changed, 38 insertions(+), 48 deletions(-) diff --git a/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md b/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md index a4b829620f..d1c2167a35 100644 --- a/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md +++ b/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md @@ -1,37 +1,36 @@ - -自动唤醒和关闭 Linux +如何自动唤醒和关闭 Linux ===================== -### [banner.jpg][1] - ![timekeeper](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner.jpg?itok=zItspoSb) -了解如何通过配置 Linux 计算机来查看时间,并实现自动唤醒和关闭 Linux +> 了解如何通过配置 Linux 计算机来根据时间自动唤醒和关闭。 -[Creative Commons Attribution][6][The Observatory at Delhi][7] -不要成为一个电能浪费者。如果你的电脑不需要开机就请把它们关机。出于方便和计算机宅的考虑,你可以通过配置你的 Linux 计算机实现自动唤醒和关闭 Linux 。 +不要成为一个电能浪费者。如果你的电脑不需要开机就请把它们关机。出于方便和计算机宅的考虑,你可以通过配置你的 Linux 计算机实现自动唤醒和关闭。 -### 系统运行时间 +### 宝贵的系统运行时间 -有时候有些电脑需要一直处在开机状态,在不超过电脑运行时间的限制下这种情况是被允许的。有些人为他们的计算机可以长时间的正常运行而感到自豪,且现在我们有内核热补丁能够实现只有在硬件发生故障时才允许机器关机。我认为比较实际可行的是能够在机器需要节省电能以及在移动硬件发生磨损的情况下,且在不需要机器运行的情况下将其关机。比如,你可以在规定的时间内唤醒备份服务器,执行备份,然后关闭它直到它要进行下一次备份。或者,你可以设置你的 Internet 网关只在特定的时间运行。任何不需要一直运行的东西都可以将其配置成在其需要工作的时候打开,待其完成工作后将其关闭。 +有时候有些电脑需要一直处在开机状态,在不超过电脑运行时间的限制下这种情况是被允许的。有些人为他们的计算机可以长时间的正常运行而感到自豪,且现在我们有内核热补丁能够实现只有在硬件发生故障时才需要机器关机。我认为比较实际可行的是,像减少移动部件磨损一样节省电能,且在不需要机器运行的情况下将其关机。比如,你可以在规定的时间内唤醒备份服务器,执行备份,然后关闭它直到它要进行下一次备份。或者,你可以设置你的互联网网关只在特定的时间运行。任何不需要一直运行的东西都可以将其配置成在其需要工作的时候打开,待其完成工作后将其关闭。 ### 系统休眠 -对于不需要一直运行的电脑,使用 root 的 cron 定时任务或者 `/etc/crontab` 文件 可以可靠地关闭电脑。这个例子创建一个 root 定时任务实现每天下午 11点15分 定时关机。 +对于不需要一直运行的电脑,使用 root 的 cron 定时任务(即 `/etc/crontab`)可以可靠地关闭电脑。这个例子创建一个 root 定时任务实现每天晚上 11 点 15 分定时关机。 ``` # crontab -e -u root # m h dom mon dow command 15 23 * * * /sbin/shutdown -h now ``` + 以下示例仅在周一至周五运行: + ``` 15 23 * * 1-5 /sbin/shutdown -h now ``` -您可以为不同的日期和时间创建多个cron作业。 通过命令 ``man 5 crontab`` 可以了解所有时间和日期的字段。 -一个快速、容易的方式是,使用 `/etc/crontab ` 文件。但这样你必须指定用户: +您可以为不同的日期和时间创建多个 cron 作业。 通过命令 `man 5 crontab` 可以了解所有时间和日期的字段。 + +一个快速、容易的方式是,使用 `/etc/crontab` 文件。但这样你必须指定用户: ``` 15 23 * * 1-5 root shutdown -h now @@ -39,26 +38,21 @@ ### 自动唤醒 -实现自动唤醒是一件很酷的事情; 我大多数使用 SUSE (SUSE Linux)的同事都在纽伦堡,因此,因此为了跟同事能有几小时一起工作的时间,我不得不需要在凌晨五点起床。我的计算机早上 5点半自动开始工作,而我只需要将自己和咖啡拖到我的桌子上就可以开始工作了。按下电源按钮看起来好像并不是什么大事,但是在每天的那个时候每件小事都会变得很大。 +实现自动唤醒是一件很酷的事情;我大多数 SUSE (SUSE Linux)的同事都在纽伦堡,因此,因此为了跟同事能有几小时一起工作的时间,我不得不需要在凌晨五点起床。我的计算机早上 5 点半自动开始工作,而我只需要将自己和咖啡拖到我的桌子上就可以开始工作了。按下电源按钮看起来好像并不是什么大事,但是在每天的那个时候每件小事都会变得很大。 -唤醒 Linux 计算机可能不比关闭它稳当,因此你可能需要尝试不同的办法。你可以使用远程唤醒(Wake-On-LAN)、RTC 唤醒或者个人电脑的 BIOS 设置预定的唤醒这些方式。做这些工作的原因是,当你关闭电脑时,这并不是真正关闭了计算机;此时计算机处在极低功耗状态且还可以接受和响应信号。你需要拔掉电源开关将其彻底关闭。 +唤醒 Linux 计算机可能不如关闭它可靠,因此你可能需要尝试不同的办法。你可以使用远程唤醒(Wake-On-LAN)、RTC 唤醒或者个人电脑的 BIOS 设置预定的唤醒这些方式。这些方式可行的原因是,当你关闭电脑时,这并不是真正关闭了计算机;此时计算机处在极低功耗状态且还可以接受和响应信号。只有在你拔掉电源开关时其才彻底关闭。 ### BIOS 唤醒 -BIOS 唤醒是最可靠的。我的系统主板 BIOS 有一个易于使用的唤醒调度程序。(Figure 1). Chances are yours does, too. Easy peasy. - -### [fig-1.png][2] +BIOS 唤醒是最可靠的。我的系统主板 BIOS 有一个易于使用的唤醒调度程序 (图 1)。对你来说也是一样的容易。 ![wakeup](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_11.png?itok=8qAeqo1I) -Figure 1: My system BIOS has an easy-to-use wakeup scheduler. 
- -[Used with permission][8] - +*图 1:我的系统 BIOS 有个易用的唤醒定时器。* ### 主机远程唤醒(Wake-On-LAN) -远程唤醒是仅次于 BIOS 唤醒的又一种可靠的唤醒方法。这需要你从第二台计算机发送信号到所要打开的计算机。可以使用 Arduino 或 树莓派(Raspberry Pi) 发送基于 Linux 的路由器或者任何 Linux 计算机的唤醒信号。首先,查看系统主板 BIOS 是否支持 Wake-On-LAN ,要是支持的话,必须先启动它,因为它被默认为禁用。 +远程唤醒是仅次于 BIOS 唤醒的又一种可靠的唤醒方法。这需要你从第二台计算机发送信号到所要打开的计算机。可以使用 Arduino 或树莓派Raspberry Pi发送给基于 Linux 的路由器或者任何 Linux 计算机的唤醒信号。首先,查看系统主板 BIOS 是否支持 Wake-On-LAN ,要是支持的话,必须先启动它,因为它被默认为禁用。 然后,需要一个支持 Wake-On-LAN 的网卡;无线网卡并不支持。你需要运行 `ethtool` 命令查看网卡是否支持 Wake-On-LAN : @@ -67,69 +61,65 @@ Figure 1: My system BIOS has an easy-to-use wakeup scheduler. Supports Wake-on: pumbg Wake-on: g ``` -这条命令输出的 Supports Wake-on 字段会告诉你你的网卡现在开启了哪些功能: + +这条命令输出的 “Supports Wake-on” 字段会告诉你你的网卡现在开启了哪些功能:     * d -- 禁用 - * p -- 物理活动唤醒 - * u -- 单播消息唤醒 - * m -- 多播(组播)消息唤醒 - * b -- 广播消息唤醒 +* a -- ARP 唤醒 +* g -- 特定数据包magic packet唤醒 +* s -- 设有密码的特定数据包magic packet唤醒 -* a -- ARP(Address Resolution Protocol) 唤醒 - -* g -- magic packet 唤醒 - -* s -- 设有密码的 magic packet 唤醒 - -man ethtool 命令并没说清楚 p 选项的作用;这表明任何信号都会导致唤醒。然而,在我的测试中它并没有这么做。想要实现远程唤醒主机,必须支持的功能是 `g -- magic packet` 唤醒,而且显示这个功能已经在启用了。如果它没有被启用,你可以通过 `ethtool` 命令来启用它。 +`ethtool` 命令的 man 手册并没说清楚 `p` 选项的作用;这表明任何信号都会导致唤醒。然而,在我的测试中它并没有这么做。想要实现远程唤醒主机,必须支持的功能是 `g` —— 特定数据包magic packet唤醒,而且下面的“Wake-on” 行显示这个功能已经在启用了。如果它没有被启用,你可以通过 `ethtool` 命令来启用它。 ``` # ethtool -s eth0 wol g ``` + 这条命令可能会在重启后失效,所以为了确保万无一失,你可以创建个 root 用户的定时任务(cron)在每次重启的时候来执行这条命令。 + ``` @reboot /usr/bin/ethtool -s eth0 wol g ``` -### [fig-2.png][3] +另一个选择是最近的网络管理器Network Manager版本有一个很好的小复选框来启用 Wake-On-LAN(图 2)。 ![wakeonlan](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_7.png?itok=XQAwmHoQ) -Figure 2: Enable Wake on LAN. +*图 2:启用 Wake on LAN* -[Used with permission][9] +这里有一个可以用于设置密码的地方,但是如果你的网络接口不支持安全开机Secure On密码,它就不起作用。 -另一个选择是最近的网络管理器版本有一个很好的小复选框来启用 Wake-On-LAN(图2)。 - -这里有一个可以用于设置密码的地方,但是如果你的网络接口不支持安全密码,它就不起作用。 - -现在你需要配置第二台计算机来发送唤醒信号。你并不需要 root 权限,所以你可以为你的用户创建 cron 任务。你需要用到的是想要唤醒的机器的网络接口和MAC地址信息。 +现在你需要配置第二台计算机来发送唤醒信号。你并不需要 root 权限,所以你可以为你的普通用户创建 cron 任务。你需要用到的是想要唤醒的机器的网络接口和MAC地址信息。 ``` 30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B ``` -### RTC 唤醒(RTC Alarm Clock) + +### RTC 唤醒 通过使用实时闹钟来唤醒计算机是最不可靠的方法。对于这个方法,可以参看 [Wake Up Linux With an RTC Alarm Clock][4] ;对于现在的大多数发行版来说这种方法已经有点过时了。 -下周继续了解更多关于使用RTC唤醒的方法。 +下周继续了解更多关于使用 RTC 唤醒的方法。 -通过 Linux 基金会和 edX 可以学习更多关于 Linux 的免费 [ Linux 入门][5]教程。 +通过 Linux 基金会和 edX 可以学习更多关于 Linux 的免费 [Linux 入门][5]教程。 + +(题图:[The Observatory at Delhi][7]) -------------------------------------------------------------------------------- via:https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux-automatically -作者:[Carla Schroder] -译者:[译者ID](https://github.com/HardworkFish) -校对:[校对者ID](https://github.com/校对者ID) +作者:[Carla Schroder][a] +译者:[HardworkFish](https://github.com/HardworkFish) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 +[a]:https://www.linux.com/users/cschroder [1]:https://www.linux.com/files/images/bannerjpg [2]:https://www.linux.com/files/images/fig-1png-11 [3]:https://www.linux.com/files/images/fig-2png-7 From cc4ba2f3b8da76146ec6d33db86935a887ed69e8 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 6 Dec 2017 10:04:35 +0800 Subject: [PATCH 052/236] PUB:20171130 Wake up and Shut Down Linux Automatically.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @HardworkFish 文章发布地址:https://linux.cn/article-9115-1.html LCTT 专页地址:https://linux.cn/lctt/HardworkFish --- 
 .../20171130 Wake up and Shut Down Linux Automatically.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {translated/tech => published}/20171130 Wake up and Shut Down Linux Automatically.md (100%)

diff --git a/translated/tech/20171130 Wake up and Shut Down Linux Automatically.md b/published/20171130 Wake up and Shut Down Linux Automatically.md
similarity index 100%
rename from translated/tech/20171130 Wake up and Shut Down Linux Automatically.md
rename to published/20171130 Wake up and Shut Down Linux Automatically.md

From ccbb494a35afac3934b0b673580b8ebdebcc3a8c Mon Sep 17 00:00:00 2001
From: root
Date: Wed, 6 Dec 2017 10:15:56 +0800
Subject: [PATCH 053/236] rename

---
 ...Long Running Terminal Commands Complete.md | 156 ++++++++++++++++++
 ...1 Fedora Classroom Session_Ansible 101.md} |   0
 2 files changed, 156 insertions(+)
 create mode 100644 sources/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md
 rename sources/tech/{20171201 Fedora Classroom Session: Ansible 101.md => 20171201 Fedora Classroom Session_Ansible 101.md} (100%)

diff --git a/sources/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md b/sources/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md
new file mode 100644
index 0000000000..46afe9b893
--- /dev/null
+++ b/sources/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md
@@ -0,0 +1,156 @@
+translating---geekpi
+
+Undistract-me : Get Notification When Long Running Terminal Commands Complete
+============================================================
+
+by [sk][2] · November 30, 2017
+
+![Undistract-me](https://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2-720x340.png)
+
+A while ago, we published how to [get notification when a Terminal activity is done][3]. Today, I found a similar utility called “undistract-me” that notifies you when long-running terminal commands complete. Picture this scenario: you run a command that takes a while to finish. In the meantime, you check Facebook and get absorbed in it. After a while, you remember that you ran a command a few minutes ago. You go back to the Terminal and notice that the command has already finished, but you have no idea when it completed. Have you ever been in this situation? I bet most of you have been, many times. This is where “undistract-me” comes in handy. You don’t need to constantly check the terminal to see if a command is completed or not. The Undistract-me utility will notify you when a long-running command completes. It works on Arch Linux, Debian, Ubuntu and other Ubuntu derivatives.
+
+#### Installing Undistract-me
+
+Undistract-me is available in the default repositories of Debian and its variants such as Ubuntu. All you have to do is to run the following command to install it.
+
+```
+sudo apt-get install undistract-me
+```
+
+Arch Linux users can install it from the AUR using any AUR helper program.
+
+Using [Pacaur][4]:
+
+```
+pacaur -S undistract-me-git
+```
+
+Using [Packer][5]:
+
+```
+packer -S undistract-me-git
+```
+
+Using [Yaourt][6]:
+
+```
+yaourt -S undistract-me-git
+```
+
+Then, run the following command to add “undistract-me” to your Bash.
+
+```
+echo 'source /etc/profile.d/undistract-me.sh' >> ~/.bashrc
+```
+
+Alternatively, you can run this command to add it to your Bash:
+
+```
+echo -e "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .bashrc
+```
+
+If you are using the Zsh shell, run this command:
+
+```
+echo -e "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .zshrc
+```
+
+Finally, update the changes:
+
+For Bash:
+
+```
+source ~/.bashrc
+```
+
+For Zsh:
+
+```
+source ~/.zshrc
+```
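If you want to confirm that the hook was actually loaded into your current shell, one quick sanity check is to look up the function referenced in the commands above. This is only an illustrative check; the function name is taken from the snippets above and may differ in other versions of the script:

```
# Prints the function definition if undistract-me is loaded,
# or a "not found" error if it is not.
type notify_when_long_running_commands_finish_install
```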
+
+#### Configure Undistract-me
+
+By default, Undistract-me will consider any command that takes more than 10 seconds to complete as a long-running command. You can change this time interval by editing the /usr/share/undistract-me/long-running.bash file.
+
+```
+sudo nano /usr/share/undistract-me/long-running.bash
+```
+
+Find the “LONG_RUNNING_COMMAND_TIMEOUT” variable and change the default value (10 seconds) to something else of your choice.
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png)][7]
+
+Save and close the file. Do not forget to update the changes:
+
+```
+source ~/.bashrc
+```
+
+Also, you can disable notifications for particular commands. To do so, find the “LONG_RUNNING_IGNORE_LIST” variable and add the commands, space-separated (see the configuration sketch at the end of this article).
+
+By default, the notification will only show if the active window is not the window the command is running in. That means it will notify you only if the command is running in a background Terminal window. If the command is running in the active Terminal window, you will not be notified. If you want undistract-me to send notifications whether the Terminal window is visible or in the background, you can set IGNORE_WINDOW_CHECK to 1 to skip the window check.
+
+The other cool feature of Undistract-me is that you can have an audio notification along with the visual notification when a command is done. By default, it only sends a visual notification. You can change this behavior by setting the variable UDM_PLAY_SOUND to a non-zero integer on the command line. However, your Ubuntu system should have the pulseaudio-utils and sound-theme-freedesktop utilities installed to enable this functionality.
+
+Please remember that you need to run the following command to update the changes made.
+
+For Bash:
+
+```
+source ~/.bashrc
+```
+
+For Zsh:
+
+```
+source ~/.zshrc
+```
+
+It is time to verify if this really works.
+
+#### Get Notification When Long Running Terminal Commands Complete
+
+Now, run any command that takes longer than 10 seconds, or whatever time duration you defined in the Undistract-me script.
+
+I ran the following command on my Arch Linux desktop.
+
+```
+sudo pacman -Sy
+```
+
+This command took 32 seconds to complete. After the completion of the above command, I got the following notification.
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png)][8]
+
+Please remember that the Undistract-me script notifies you only if the given command took more than 10 seconds to complete. If the command completed in less than 10 seconds, you will not be notified. Of course, you can change this time interval setting as described in the Configuration section above.
+
+I find this tool very useful. It helped me get back to business after I got completely lost in some other tasks. I hope this tool will be helpful to you too.
+
+More good stuff to come. Stay tuned!
+
+Cheers!
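To tie the configuration options above together, here is a sketch of what a customized setup might look like, followed by a simple way to test it. The variable names come from the sections above, but treat the values and exact syntax as illustrative only; always check your local copy of long-running.bash before editing:

```
# Illustrative values only; verify them against your installed copy
# of /usr/share/undistract-me/long-running.bash before editing.
LONG_RUNNING_COMMAND_TIMEOUT=30   # notify after 30 seconds instead of 10
UDM_PLAY_SOUND=1                  # also play a sound (needs pulseaudio-utils)
```

After re-sourcing your shell configuration, running something like `sleep 35` and then switching to another window should produce a notification when the command finishes.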
+ +Resource: + +* [Undistract-me GitHub Repository][1] + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/undistract-get-notification-long-running-terminal-commands-complete/ + +作者:[sk][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://github.com/jml/undistract-me +[2]:https://www.ostechnix.com/author/sk/ +[3]:https://www.ostechnix.com/get-notification-terminal-task-done/ +[4]:https://www.ostechnix.com/install-pacaur-arch-linux/ +[5]:https://www.ostechnix.com/install-packer-arch-linux-2/ +[6]:https://www.ostechnix.com/install-yaourt-arch-linux/ +[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png +[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png diff --git a/sources/tech/20171201 Fedora Classroom Session: Ansible 101.md b/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md similarity index 100% rename from sources/tech/20171201 Fedora Classroom Session: Ansible 101.md rename to sources/tech/20171201 Fedora Classroom Session_Ansible 101.md From 9e42cbb031b688a1e732dbbc8405b70b425bff97 Mon Sep 17 00:00:00 2001 From: root Date: Wed, 6 Dec 2017 10:42:43 +0800 Subject: [PATCH 054/236] translated --- ... write fun small web projects instantly.md | 76 ------------------- ... write fun small web projects instantly.md | 73 ++++++++++++++++++ 2 files changed, 73 insertions(+), 76 deletions(-) delete mode 100644 sources/tech/20171113 Glitch write fun small web projects instantly.md create mode 100644 translated/tech/20171113 Glitch write fun small web projects instantly.md diff --git a/sources/tech/20171113 Glitch write fun small web projects instantly.md b/sources/tech/20171113 Glitch write fun small web projects instantly.md deleted file mode 100644 index 734853ce51..0000000000 --- a/sources/tech/20171113 Glitch write fun small web projects instantly.md +++ /dev/null @@ -1,76 +0,0 @@ -translating---geekpi - -Glitch: write fun small web projects instantly -============================================================ - -I just wrote about Jupyter Notebooks which are a fun interactive way to write Python code. That reminded me I learned about Glitch recently, which I also love!! I built a small app to [turn of twitter retweets][2] with it. So! - -[Glitch][3] is an easy way to make Javascript webapps. (javascript backend, javascript frontend) - -The fun thing about glitch is: - -1. you start typing Javascript code into their web interface - -2. as soon as you type something, it automagically reloads the backend of your website with the new code. You don’t even have to save!! It autosaves. - -So it’s like Heroku, but even more magical!! Coding like this (you type, and the code runs on the public internet immediately) just feels really **fun** to me. - -It’s kind of like sshing into a server and editing PHP/HTML code on your server and having it instantly available, which I kind of also loved. Now we have “better deployment practices” than “just edit the code and it is instantly on the internet” but we are not talking about Serious Development Practices, we are talking about writing tiny programs for fun. - -### glitch has awesome example apps - -Glitch seems like fun nice way to learn programming! 
-
-For example, there’s a space invaders game (code by [Mary Rose Cook][4]) at [https://space-invaders.glitch.me/][5]. The thing I love about this is that in just a few clicks I can
-
-1. click “remix this”
-
-2. start editing the code to make the boxes orange instead of black
-
-3. have my own space invaders game!! Mine is at [http://julias-space-invaders.glitch.me/][1]. (i just made very tiny edits to make it orange, nothing fancy)
-
-They have tons of example apps that you can start from – for instance [bots][6], [games][7], and more.
-
-### awesome actually useful app: tweetstorms
-
-The way I learned about Glitch was from this app which shows you tweetstorms from a given user: [https://tweetstorms.glitch.me/][8].
-
-For example, you can see [@sarahmei][9]’s tweetstorms at [https://tweetstorms.glitch.me/sarahmei][10] (she tweets a lot of good tweetstorms!).
-
-### my glitch app: turn off retweets
-
-When I learned about Glitch I wanted to turn off retweets for everyone I follow on Twitter (I know you can do it in Tweetdeck!) and doing it manually was a pain – I had to do it one person at a time. So I wrote a tiny Glitch app to do it for me!
-
-I liked that I didn’t have to set up a local development environment, I could just start typing and go!
-
-Glitch only supports Javascript and I don’t really know Javascript that well (I think I’ve never written a Node program before), so the code isn’t awesome. But I had a really good time writing it – being able to type and just see my code running instantly was delightful. Here it is: [https://turn-off-retweets.glitch.me/][11].
-
-### that’s all!
-
-Using Glitch feels really fun and democratic. Usually if I want to fork someone’s web project and make changes I wouldn’t do it – I’d have to fork it, figure out hosting, set up a local dev environment or Heroku or whatever, install the dependencies, etc. I think tasks like installing node.js dependencies used to be interesting, like “cool i am learning something new” and now I just find them tedious.
-
-So I love being able to just click “remix this!” and have my version on the internet instantly.
-
-
--------------------------------------------------------------------------------
-
-via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/
-
-作者:[Julia Evans ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://jvns.ca/
-[1]:http://julias-space-invaders.glitch.me/
-[2]:https://turn-off-retweets.glitch.me/
-[3]:https://glitch.com/
-[4]:https://maryrosecook.com/
-[5]:https://space-invaders.glitch.me/
-[6]:https://glitch.com/handy-bots
-[7]:https://glitch.com/games
-[8]:https://tweetstorms.glitch.me/
-[9]:https://twitter.com/sarahmei
-[10]:https://tweetstorms.glitch.me/sarahmei
-[11]:https://turn-off-retweets.glitch.me/
diff --git a/translated/tech/20171113 Glitch write fun small web projects instantly.md b/translated/tech/20171113 Glitch write fun small web projects instantly.md
new file mode 100644
index 0000000000..fde7d7f880
--- /dev/null
+++ b/translated/tech/20171113 Glitch write fun small web projects instantly.md
@@ -0,0 +1,73 @@
+Glitch:立即写出有趣的小型网站项目
+============================================================
+
+我刚写了一篇文章介绍 Jupyter Notebooks,它是一种有趣的、交互式的 Python 代码编写方式。这让我想起我最近学习了 Glitch,这个我同样喜爱!我用它构建了一个小程序来[关闭 Twitter 转推][2]。因此有了这篇文章!
+
+[Glitch][3] 是一个简单的构建 Javascript web 程序的方式(javascript 后端、javascript 前端)。
+
+关于 Glitch 有趣的事有:
+
+1.
你在他们的网站输入 Javascript 代码 + +2. 只要输入了任何代码,它会自动用你的新代码重载你的网站。你甚至不必保存!它会自动保存。 + +所以这就像 Heroku,但更神奇!像这样的编码(你输入代码,代码立即在公共网络上运行)对我而言感觉很**有趣**。 + +这有点像 ssh 登录服务器,编辑服务器上的 PHP/HTML 代码,并让它立即可用,这也是我所喜爱的。现在我们有了“更好的部署实践”,而不是“编辑代码,它立即出现在互联网上”,但我们并不是在谈论严肃的开发实践,而是在讨论编写微型程序的乐趣。 + +### Glitch 有很棒的示例应用程序 + +Glitch 似乎是学习编程的好方式! + +比如,这有一个太空侵略者游戏(由 [Mary Rose Cook][4] 编写):[https://space-invaders.glitch.me/][5]。我喜欢的是我只需要点击几下。 + +1. 点击 “remix this” + +2. 开始编辑代码使箱子变成橘色而不是黑色 + +3. 制作我自己太空侵略者游戏!我的在这:[http://julias-space-invaders.glitch.me/][1]。(我只做了很小的更改使其变成橘色,没什么神奇的) + +他们有大量的示例程序,你可以从中启动 - 例如[机器人][6]、[游戏][7]等等。 + +### 实际有用的非常好的程序:tweetstorms + +我学习 Glitch 的方式是从这个程序:[https://tweetstorms.glitch.me/][8],它会向你展示给定用户的 tweetstorm。 + +比如,你可以在 [https://tweetstorms.glitch.me/sarahmei][10] 看到 [@sarahmei][9] 的 tweetstorm(她发布了很多好的 tweetstorm!)。 + +### 我的 Glitch 程序: 关闭转推 + +当我了解到 Glitch 的时候,我想关闭在 Twitter 上关注的所有人的转推(我知道可以在 Tweetdeck 中做这件事),而且手动做这件事是一件很痛苦的事 - 我一次只能设置一个人。所以我写了一个 Glitch 程序来为我做! + +我喜欢我不必设置一个本地开发环境,我可以直接开始输入然后开始! + +Glitch 只支持 Javascript,我不非常了解 Javascript(我之前从没写过一个 Node 程序),所以代码不是很好。但是编写它很愉快 - 能够输入并立即看到我的代码运行是令人愉快的。这是我的项目:[https://turn-off-retweets.glitch.me/][11]。 + +### 就是这些! + +使用 Glitch 感觉真的很有趣和民主。通常情况下,如果我想 fork 某人的 Web 项目,并做出更改,我不会这样做 - 我必须 fork,找一个托管,设置本地开发环境或者 Heroku 或其他,安装依赖项等。我认为像安装 node.js 依赖关系这样的任务过去很有趣,就像“我正在学习新东西很酷”,现在我觉得它们很乏味。 + +所以我喜欢只需点击 “remix this!” 并立即在互联网上能有我的版本。 + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/ + +作者:[Julia Evans ][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/ +[1]:http://julias-space-invaders.glitch.me/ +[2]:https://turn-off-retweets.glitch.me/ +[3]:https://glitch.com/ +[4]:https://maryrosecook.com/ +[5]:https://space-invaders.glitch.me/ +[6]:https://glitch.com/handy-bots +[7]:https://glitch.com/games +[8]:https://tweetstorms.glitch.me/ +[9]:https://twitter.com/sarahmei +[10]:https://tweetstorms.glitch.me/sarahmei +[11]:https://turn-off-retweets.glitch.me/ From 8391d9321c5fba682cfffa41436516aae1e171f2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=82=B9=E8=8D=A3=E5=8D=87?= Date: Wed, 6 Dec 2017 10:43:15 +0800 Subject: [PATCH 055/236] Update 20171120 Mark McIntyre How Do You Fedora.md --- ...0171120 Mark McIntyre How Do You Fedora.md | 38 +++++++++---------- 1 file changed, 18 insertions(+), 20 deletions(-) diff --git a/sources/tech/20171120 Mark McIntyre How Do You Fedora.md b/sources/tech/20171120 Mark McIntyre How Do You Fedora.md index 40af7eba2f..76606f74dc 100644 --- a/sources/tech/20171120 Mark McIntyre How Do You Fedora.md +++ b/sources/tech/20171120 Mark McIntyre How Do You Fedora.md @@ -1,54 +1,52 @@ -translating by zrszrszrs -# [Mark McIntyre: How Do You Fedora?][1] # [Mark McIntyre: 你是如何使用Fedora的?][1] ![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg) -We recently interviewed Mark McIntyre on how he uses Fedora. This is [part of a series][2] on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the [feedback form][3] to express your interest in becoming a interviewee. +最近我们采访了 Mark McIntyre,谈来他是如何使用 Fedora 系统的。这也是Fedora 杂志上[本系列的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。通过[反馈表][3]与我们联系,表达你想成为采访对象的意愿。 -### Who is Mark McIntyre? +### Mark McIntyre 是谁? -Mark McIntyre is a geek by birth and Linux by choice. 
“I started coding at the early age of 13 learning BASIC on my own and finding the excitement of programming which led me down a path of becoming a professional coder,” he says. McIntyre and his niece are big fans of pizza. “My niece and I started a quest last fall to try as many of the pizza joints in Knoxville. You can read about our progress at [https://knox-pizza-quest.blogspot.com/][4]” Mark is also an amateur photographer and [publishes his images][5] on Flickr. +Mark McIntyre 是一个天生的极客,后天的 Linux 爱好者。他说:“我在 13 岁开始编程,当时自学 BASIC 语言,我体会到其中的乐趣,并在乐趣的引导下,一步步成为专业的码农。”Mark 和他的侄女都是披萨饼的死忠粉。“去年秋天,我和我的侄女尽可能多地光顾了诺克斯维尔的披萨饼连锁店。 点击 [https://knox-pizza-quest.blogspot.com/][4] 可以了解我们的进展情况。”Mark 也是一名业余的摄影爱好者,并且在 Flickr 上 [发布自己的作品][5]。 ![](https://fedoramagazine.org/wp-content/uploads/2017/11/31456893222_553b3cac4d_k-1024x575.jpg) -Mark has a diverse background as a developer. He has worked with Visual Basic for Applications, LotusScript, Oracle’s PL/SQL, Tcl/Tk and Python with Django as the framework. His strongest skill is Python which he uses in his current job as a systems engineer. “I am using Python on a regular basis. As my job is morphing into more of an automation engineer, that became more frequent.” +作为一名开发者,Mark 有着丰富的工作背景。他用过 Visual Basic 编写应用程序,用过 LotusScript、 PL/SQL(Oracle)、 Tcl/TK 编写代码,也用过基于 Python 的 Django 框架。他的强项是 Python。这也是目前他作为系统工程师的工作语言。“我用 Python 比较规律。但当我的工作变得更像是自动化工程师时,Python 用得就更频繁了。” -McIntyre is a self-described nerd and loves sci-fi movies, but his favorite movie falls out of that genre. “As much as I am a nerd and love the Star Trek and Star Wars and related movies, the movie Glory is probably my favorite of all time.” He also mentioned that Serenity was a fantastic follow-up to a great TV series. +McIntyre 自称是个书呆子,喜欢科幻电影,但他最喜欢的一部电影却不是科幻片。“尽管我是个书呆子,喜欢看《星际迷航》、《星球大战》之类的影片,但《光荣战役》或许才是我最喜欢的电影。”他还提到,电影《冲出宁静号》实属著名电视剧《萤火虫》的精彩后续。 -Mark values humility, knowledge and graciousness in others. He appreciates people who act based on understanding the situation that other people are in. “If you add a decision to serve another, you have the basis for someone you’d want to be around instead of someone who you have to tolerate.” +Mark 比较看重他人的谦逊、知识与和气。他欣赏能够设身处地为他人着想的人。“如果你决定为另一个人服务,那么你会选择自己愿意亲近的人,而不是让自己备受折磨的人。” -McIntyre works for [Scripps Networks Interactive][6], which is the parent company for HGTV, Food Network, Travel Channel, DIY, GAC, and several other cable channels. “Currently, I function as a systems engineer for the non-linear video content, which is all the media purposed for online consumption.” He supports a few development teams who write applications to publish the linear video from cable TV into the online formats such as Amazon and Hulu. The systems include both on-premise and cloud systems. Mark also develops automation tools for deploying these applications primarily to a cloud infrastructure. +McIntyre 目前在 [Scripps Networks Interactive][6] 工作,这家公司是 HGTV、Food Network、Travel Channel、DIY、GAC 以及其他几个有线电视频道的母公司。“我现在是一名系统工程师,负责非线性视频内容,这是全部媒体开展线上消费的计划。”他支持一些开发团队编写应用程序,将线性视频从有线电视发布到线上平台,比如亚马逊、葫芦。这些系统既包含预置系统,也包含云系统。Mark 还开发了一些自动化工具,将这些应用程序主要部署到云基础结构中。 -### The Fedora community +### Fedora 社区 -Mark describes the Fedora community as an active community filled with people who enjoy life as Fedora users. 
“From designers to packagers, this group is still very active and feels alive.” McIntyre continues, “That gives me a sense of confidence in the operating system.” +Mark 形容 Fedora 社区是一个富有活力的社区,充满着像 Fedora 用户一样热爱生活的人。“从设计师到包装师,这个团体依然非常活跃,生机勃勃。” 他继续说道:“这使我对操作系统抱有一种信心。” -He started frequenting the #fedora channel on IRC around 2002: “Back then, Wi-Fi functionality was still done a lot by hand in starting the adapter and configuring the modules.” In order to get his Wi-Fi working he had to recompile the Fedora kernel. Shortly after, he started helping others in the #fedora channel. +2002年左右,Mark 开始经常使用 IRC 上的 #fedora 频道:“那时候,Wi-Fi 在启用适配器和配置模块功能时,有许多还是靠手工实现的。”为了让他的 Wi-Fi 能够工作,他不得不重新去编译 Fedora 内核。 -McIntyre encourages others to get involved in the Fedora Community. “There are many different areas of opportunity in which to be involved. Front-end design, testing deployments, development, packaging of applications, and new technology implementation.” He recommends picking an area of interest and asking questions of that group. “There are many opportunities available to jump in to contribute.” +McIntyre 鼓励他人参与 Fedora 社区。“这里有许多来自不同领域的机会。前端设计、测试部署、开发、应用程序包装以及新型技术实现。”他建议选择一个感兴趣的领域,然后向那个团体提出疑问。“这里有许多机会去奉献自己。” -He credits a fellow community member with helping him get started: “Ben Williams was very helpful in my first encounters with Fedora, helping me with some of my first installation rough patches in the #fedora support channel.” Ben also encouraged Mark to become an [Ambassador][7]. +对于帮助他起步的社区成员,Mark 赞道:“Ben Williams 非常乐于助人。在我第一次接触Fedora时,他帮我搞定了一些#fedora支持频道中的安装补丁。”Ben 也鼓励 Mark 去做 Fedora [代表][7]。 -### What hardware and software? +### 什么样的硬件和软件? -McIntyre uses Fedora Linux on all his laptops and desktops. On servers he chooses CentOS, due to the longer support lifecycle. His current desktop is self-built and equipped with an Intel Core i5 processor, 32 GB of RAM and 2 TB of disk space. “I have a 4K monitor attached which gives me plenty of room for viewing all my applications at once.” His current work laptop is a Dell Inspiron 2-in-1 13-inch laptop with 16 GB RAM and a 525 GB m.2 SSD. +McIntyre 将 Fedora Linux 系统用在他的笔记本和台式机上。在服务器上他选择了 CentOS,因为它有更长的生命周期支持。他现在的台式机是自己组装的,配有 Intel 酷睿 i5 处理器,32GB 的内存和2TB 的硬盘。“我装了个 4K 的显示屏,有足够大的地方来同时查看所有的应用。”他目前工作用的笔记本是戴尔灵越二合一,配备 13 英寸的屏,16 GB 的内存和 525 GB 的 m.2 固态硬盘。 ![](https://fedoramagazine.org/wp-content/uploads/2017/11/Screenshot-from-2017-10-26-08-51-41-1024x640.png) -Mark currently runs Fedora 26 on any box he setup in the past few months. When it comes to new versions he likes to avoid the rush when the version is officially released. “I usually try to get the latest version as soon as it goes gold, with the exception of one of my workstations running the next version’s beta when it is closer to release.” He usually upgrades in place: “The in-place upgrade using  _dnf system-upgrade_  works very well these days.” +Mark 现在将 Fedora 26 运行在他过去几个月装配的所有盒子中。当一个新版本正式发布的时候,他倾向于避开这个高峰期。“除非在它即将发行的时候,我的工作站中有个正在运行下一代测试版本,通常情况下,一旦它发展成熟,我都会试着去获取最新的版本。”他经常采取就地更新:“这种就地更新方法利用 dnf 系统升级插件,目前表现得非常好。” -To handle his photography, McIntyre uses [GIMP][8] and [Darktable][9], along with a few other photo viewing and quick editing packages. When not using web-based email, he uses [Geary][10] along with [GNOME Calendar][11]. Mark’s IRC client of choice is [HexChat][12] connecting to a [ZNC bouncer][13]running on a Fedora Server instance. His department’s communication is handled via Slack. 
+为了搞摄影,McIntyre 用上了 [GIMP][8]、[Darktable][9],以及其他一些照片查看包和快速编辑包。当不启用网络电子邮件时,Mark 会使用 [Geary][10],还有[GNOME Calendar][11]。Mark 选用 HexChat 作为 IRC 客户端,[HexChat][12] 与在 Fedora 服务器实例上运行的 [ZNC bouncer][13] 联机。他的部门通过 Slave 进行沟通交流。 -“I have never really been a big IDE fan, so I spend time in [vim][14] for most of my editing.” Occasionally, he opens up a simple text editor like [gedit][15] or [xed][16]. Mark uses [GPaste][17] for  copying and pasting. “I have become a big fan of [Tilix][18] for my terminal choice.” McIntyre manages the podcasts he likes with [Rhythmbox][19], and uses [Epiphany][20] for quick web lookups. +“我从来都不是 IDE 粉,所以大多数的编辑任务都是在 [vim][14] 上完成的。”Mark 偶尔也会打开一个简单的文本编辑器,如 [gedit][15],或者 [xed][16]。他用 [GPaste][17] 做复制和粘贴工作。“对于终端的选择,我已经变成 [Tilix][18] 的忠粉。” McIntyre 通过 [Rhythmbox][19] 来管理他喜欢的播客,并用 [Epiphany][20] 实现快速网络查询。 -------------------------------------------------------------------------------- via: https://fedoramagazine.org/mark-mcintyre-fedora/ 作者:[Charles Profitt][a] -译者:[译者ID](https://github.com/译者ID) +译者:[zrszrs](https://github.com/zrszrszrs) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9906fd3a107cf8b38291b23164e012191f3b4cf3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=82=B9=E8=8D=A3=E5=8D=87?= Date: Wed, 6 Dec 2017 11:05:45 +0800 Subject: [PATCH 056/236] Update 20171120 Mark McIntyre How Do You Fedora.md --- .../tech/20171120 Mark McIntyre How Do You Fedora.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/sources/tech/20171120 Mark McIntyre How Do You Fedora.md b/sources/tech/20171120 Mark McIntyre How Do You Fedora.md index 76606f74dc..e89527a377 100644 --- a/sources/tech/20171120 Mark McIntyre How Do You Fedora.md +++ b/sources/tech/20171120 Mark McIntyre How Do You Fedora.md @@ -3,7 +3,7 @@ ![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg) -最近我们采访了 Mark McIntyre,谈来他是如何使用 Fedora 系统的。这也是Fedora 杂志上[本系列的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。通过[反馈表][3]与我们联系,表达你想成为采访对象的意愿。 +最近我们采访了 Mark McIntyre,谈来他是如何使用 Fedora 系统的。这也是 Fedora 杂志上[本系列的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。通过[反馈表][3]与我们联系,表达你想成为采访对象的意愿。 ### Mark McIntyre 是谁? @@ -11,7 +11,7 @@ Mark McIntyre 是一个天生的极客,后天的 Linux 爱好者。他说: ![](https://fedoramagazine.org/wp-content/uploads/2017/11/31456893222_553b3cac4d_k-1024x575.jpg) -作为一名开发者,Mark 有着丰富的工作背景。他用过 Visual Basic 编写应用程序,用过 LotusScript、 PL/SQL(Oracle)、 Tcl/TK 编写代码,也用过基于 Python 的 Django 框架。他的强项是 Python。这也是目前他作为系统工程师的工作语言。“我用 Python 比较规律。但当我的工作变得更像是自动化工程师时,Python 用得就更频繁了。” +作为一名开发者,Mark 有着丰富的工作背景。他用过 Visual Basic 编写应用程序,用过 LotusScript、 PL/SQL(Oracle)、 Tcl/TK 编写代码,也用过基于 Python 的 Django 框架。他的强项是 Python。这也是目前他作为系统工程师的工作语言。“我用 Python 比较规律。但当我的工作变得更像是自动化工程师时, Python 用得就更频繁了。” McIntyre 自称是个书呆子,喜欢科幻电影,但他最喜欢的一部电影却不是科幻片。“尽管我是个书呆子,喜欢看《星际迷航》、《星球大战》之类的影片,但《光荣战役》或许才是我最喜欢的电影。”他还提到,电影《冲出宁静号》实属著名电视剧《萤火虫》的精彩后续。 @@ -27,11 +27,11 @@ Mark 形容 Fedora 社区是一个富有活力的社区,充满着像 Fedora McIntyre 鼓励他人参与 Fedora 社区。“这里有许多来自不同领域的机会。前端设计、测试部署、开发、应用程序包装以及新型技术实现。”他建议选择一个感兴趣的领域,然后向那个团体提出疑问。“这里有许多机会去奉献自己。” -对于帮助他起步的社区成员,Mark 赞道:“Ben Williams 非常乐于助人。在我第一次接触Fedora时,他帮我搞定了一些#fedora支持频道中的安装补丁。”Ben 也鼓励 Mark 去做 Fedora [代表][7]。 +对于帮助他起步的社区成员,Mark 赞道:“Ben Williams 非常乐于助人。在我第一次接触 Fedora 时,他帮我搞定了一些 #fedora 支持频道中的安装补丁。” Ben 也鼓励 Mark 去做 Fedora [代表][7]。 ### 什么样的硬件和软件? 
-McIntyre 将 Fedora Linux 系统用在他的笔记本和台式机上。在服务器上他选择了 CentOS,因为它有更长的生命周期支持。他现在的台式机是自己组装的,配有 Intel 酷睿 i5 处理器,32GB 的内存和2TB 的硬盘。“我装了个 4K 的显示屏,有足够大的地方来同时查看所有的应用。”他目前工作用的笔记本是戴尔灵越二合一,配备 13 英寸的屏,16 GB 的内存和 525 GB 的 m.2 固态硬盘。 +McIntyre 将 Fedora Linux 系统用在他的笔记本和台式机上。在服务器上他选择了 CentOS,因为它有更长的生命周期支持。他现在的台式机是自己组装的,配有 Intel 酷睿 i5 处理器,32GB 的内存和2TB 的硬盘。“我装了个 4K 的显示屏,有足够大的,地方来同时查看所有的应用。”他目前工作用的笔记本是戴尔灵越二合一,配备 13 英寸的屏,16 GB 的内存和 525 GB 的 m.2 固态硬盘。 ![](https://fedoramagazine.org/wp-content/uploads/2017/11/Screenshot-from-2017-10-26-08-51-41-1024x640.png) @@ -39,7 +39,7 @@ Mark 现在将 Fedora 26 运行在他过去几个月装配的所有盒子中。 为了搞摄影,McIntyre 用上了 [GIMP][8]、[Darktable][9],以及其他一些照片查看包和快速编辑包。当不启用网络电子邮件时,Mark 会使用 [Geary][10],还有[GNOME Calendar][11]。Mark 选用 HexChat 作为 IRC 客户端,[HexChat][12] 与在 Fedora 服务器实例上运行的 [ZNC bouncer][13] 联机。他的部门通过 Slave 进行沟通交流。 -“我从来都不是 IDE 粉,所以大多数的编辑任务都是在 [vim][14] 上完成的。”Mark 偶尔也会打开一个简单的文本编辑器,如 [gedit][15],或者 [xed][16]。他用 [GPaste][17] 做复制和粘贴工作。“对于终端的选择,我已经变成 [Tilix][18] 的忠粉。” McIntyre 通过 [Rhythmbox][19] 来管理他喜欢的播客,并用 [Epiphany][20] 实现快速网络查询。 +“我从来都不是 IDE 粉,所以大多数的编辑任务都是在 [vim][14] 上完成的。”Mark 偶尔也会打开一个简单的文本编辑器,如 [gedit][15],或者 [xed][16]。他用 [GPaste][17] 做复制和粘贴工作。“对于终端的选择,我已经变成 [Tilix][18] 的忠粉。”McIntyre 通过 [Rhythmbox][19] 来管理他喜欢的播客,并用 [Epiphany][20] 实现快速网络查询。 -------------------------------------------------------------------------------- From cde0a112fc36619879e0772c81843e14ce8a3c4e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=82=B9=E8=8D=A3=E5=8D=87?= Date: Wed, 6 Dec 2017 11:06:15 +0800 Subject: [PATCH 057/236] Update 20171120 Mark McIntyre How Do You Fedora.md --- sources/tech/20171120 Mark McIntyre How Do You Fedora.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20171120 Mark McIntyre How Do You Fedora.md b/sources/tech/20171120 Mark McIntyre How Do You Fedora.md index e89527a377..4fe315eb07 100644 --- a/sources/tech/20171120 Mark McIntyre How Do You Fedora.md +++ b/sources/tech/20171120 Mark McIntyre How Do You Fedora.md @@ -3,7 +3,7 @@ ![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg) -最近我们采访了 Mark McIntyre,谈来他是如何使用 Fedora 系统的。这也是 Fedora 杂志上[本系列的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。通过[反馈表][3]与我们联系,表达你想成为采访对象的意愿。 +最近我们采访了 Mark McIntyre,谈了他是如何使用 Fedora 系统的。这也是 Fedora 杂志上[本系列的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。通过[反馈表][3]与我们联系,表达你想成为采访对象的意愿。 ### Mark McIntyre 是谁? From 0422527c893f226ab6c0395f395104ab8a91e233 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=82=B9=E8=8D=A3=E5=8D=87?= Date: Wed, 6 Dec 2017 11:24:30 +0800 Subject: [PATCH 058/236] 333 translated --- translated/tech/233 | 74 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 74 insertions(+) create mode 100644 translated/tech/233 diff --git a/translated/tech/233 b/translated/tech/233 new file mode 100644 index 0000000000..4fe315eb07 --- /dev/null +++ b/translated/tech/233 @@ -0,0 +1,74 @@ +# [Mark McIntyre: 你是如何使用Fedora的?][1] + + +![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg) + +最近我们采访了 Mark McIntyre,谈了他是如何使用 Fedora 系统的。这也是 Fedora 杂志上[本系列的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。通过[反馈表][3]与我们联系,表达你想成为采访对象的意愿。 + +### Mark McIntyre 是谁? 
+ +Mark McIntyre 是一个天生的极客,后天的 Linux 爱好者。他说:“我在 13 岁开始编程,当时自学 BASIC 语言,我体会到其中的乐趣,并在乐趣的引导下,一步步成为专业的码农。”Mark 和他的侄女都是披萨饼的死忠粉。“去年秋天,我和我的侄女尽可能多地光顾了诺克斯维尔的披萨饼连锁店。 点击 [https://knox-pizza-quest.blogspot.com/][4] 可以了解我们的进展情况。”Mark 也是一名业余的摄影爱好者,并且在 Flickr 上 [发布自己的作品][5]。 + +![](https://fedoramagazine.org/wp-content/uploads/2017/11/31456893222_553b3cac4d_k-1024x575.jpg) + +作为一名开发者,Mark 有着丰富的工作背景。他用过 Visual Basic 编写应用程序,用过 LotusScript、 PL/SQL(Oracle)、 Tcl/TK 编写代码,也用过基于 Python 的 Django 框架。他的强项是 Python。这也是目前他作为系统工程师的工作语言。“我用 Python 比较规律。但当我的工作变得更像是自动化工程师时, Python 用得就更频繁了。” + +McIntyre 自称是个书呆子,喜欢科幻电影,但他最喜欢的一部电影却不是科幻片。“尽管我是个书呆子,喜欢看《星际迷航》、《星球大战》之类的影片,但《光荣战役》或许才是我最喜欢的电影。”他还提到,电影《冲出宁静号》实属著名电视剧《萤火虫》的精彩后续。 + +Mark 比较看重他人的谦逊、知识与和气。他欣赏能够设身处地为他人着想的人。“如果你决定为另一个人服务,那么你会选择自己愿意亲近的人,而不是让自己备受折磨的人。” + +McIntyre 目前在 [Scripps Networks Interactive][6] 工作,这家公司是 HGTV、Food Network、Travel Channel、DIY、GAC 以及其他几个有线电视频道的母公司。“我现在是一名系统工程师,负责非线性视频内容,这是全部媒体开展线上消费的计划。”他支持一些开发团队编写应用程序,将线性视频从有线电视发布到线上平台,比如亚马逊、葫芦。这些系统既包含预置系统,也包含云系统。Mark 还开发了一些自动化工具,将这些应用程序主要部署到云基础结构中。 + +### Fedora 社区 + +Mark 形容 Fedora 社区是一个富有活力的社区,充满着像 Fedora 用户一样热爱生活的人。“从设计师到包装师,这个团体依然非常活跃,生机勃勃。” 他继续说道:“这使我对操作系统抱有一种信心。” + +2002年左右,Mark 开始经常使用 IRC 上的 #fedora 频道:“那时候,Wi-Fi 在启用适配器和配置模块功能时,有许多还是靠手工实现的。”为了让他的 Wi-Fi 能够工作,他不得不重新去编译 Fedora 内核。 + +McIntyre 鼓励他人参与 Fedora 社区。“这里有许多来自不同领域的机会。前端设计、测试部署、开发、应用程序包装以及新型技术实现。”他建议选择一个感兴趣的领域,然后向那个团体提出疑问。“这里有许多机会去奉献自己。” + +对于帮助他起步的社区成员,Mark 赞道:“Ben Williams 非常乐于助人。在我第一次接触 Fedora 时,他帮我搞定了一些 #fedora 支持频道中的安装补丁。” Ben 也鼓励 Mark 去做 Fedora [代表][7]。 + +### 什么样的硬件和软件? + +McIntyre 将 Fedora Linux 系统用在他的笔记本和台式机上。在服务器上他选择了 CentOS,因为它有更长的生命周期支持。他现在的台式机是自己组装的,配有 Intel 酷睿 i5 处理器,32GB 的内存和2TB 的硬盘。“我装了个 4K 的显示屏,有足够大的,地方来同时查看所有的应用。”他目前工作用的笔记本是戴尔灵越二合一,配备 13 英寸的屏,16 GB 的内存和 525 GB 的 m.2 固态硬盘。 + +![](https://fedoramagazine.org/wp-content/uploads/2017/11/Screenshot-from-2017-10-26-08-51-41-1024x640.png) + +Mark 现在将 Fedora 26 运行在他过去几个月装配的所有盒子中。当一个新版本正式发布的时候,他倾向于避开这个高峰期。“除非在它即将发行的时候,我的工作站中有个正在运行下一代测试版本,通常情况下,一旦它发展成熟,我都会试着去获取最新的版本。”他经常采取就地更新:“这种就地更新方法利用 dnf 系统升级插件,目前表现得非常好。” + +为了搞摄影,McIntyre 用上了 [GIMP][8]、[Darktable][9],以及其他一些照片查看包和快速编辑包。当不启用网络电子邮件时,Mark 会使用 [Geary][10],还有[GNOME Calendar][11]。Mark 选用 HexChat 作为 IRC 客户端,[HexChat][12] 与在 Fedora 服务器实例上运行的 [ZNC bouncer][13] 联机。他的部门通过 Slave 进行沟通交流。 + +“我从来都不是 IDE 粉,所以大多数的编辑任务都是在 [vim][14] 上完成的。”Mark 偶尔也会打开一个简单的文本编辑器,如 [gedit][15],或者 [xed][16]。他用 [GPaste][17] 做复制和粘贴工作。“对于终端的选择,我已经变成 [Tilix][18] 的忠粉。”McIntyre 通过 [Rhythmbox][19] 来管理他喜欢的播客,并用 [Epiphany][20] 实现快速网络查询。 + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/mark-mcintyre-fedora/ + +作者:[Charles Profitt][a] +译者:[zrszrs](https://github.com/zrszrszrs) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fedoramagazine.org/author/cprofitt/ +[1]:https://fedoramagazine.org/mark-mcintyre-fedora/ +[2]:https://fedoramagazine.org/tag/how-do-you-fedora/ +[3]:https://fedoramagazine.org/submit-an-idea-or-tip/ +[4]:https://knox-pizza-quest.blogspot.com/ +[5]:https://www.flickr.com/photos/mockgeek/ +[6]:http://www.scrippsnetworksinteractive.com/ +[7]:https://fedoraproject.org/wiki/Ambassadors +[8]:https://www.gimp.org/ +[9]:http://www.darktable.org/ +[10]:https://wiki.gnome.org/Apps/Geary +[11]:https://wiki.gnome.org/Apps/Calendar +[12]:https://hexchat.github.io/ +[13]:https://wiki.znc.in/ZNC +[14]:http://www.vim.org/ +[15]:https://wiki.gnome.org/Apps/Gedit +[16]:https://github.com/linuxmint/xed 
+[17]:https://github.com/Keruspe/GPaste +[18]:https://fedoramagazine.org/try-tilix-new-terminal-emulator-fedora/ +[19]:https://wiki.gnome.org/Apps/Rhythmbox +[20]:https://wiki.gnome.org/Apps/Web From e37682381fa433386398d867ef7cb25ef9f32bb8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=82=B9=E8=8D=A3=E5=8D=87?= Date: Wed, 6 Dec 2017 11:25:16 +0800 Subject: [PATCH 059/236] Delete 233 --- translated/tech/233 | 74 --------------------------------------------- 1 file changed, 74 deletions(-) delete mode 100644 translated/tech/233 diff --git a/translated/tech/233 b/translated/tech/233 deleted file mode 100644 index 4fe315eb07..0000000000 --- a/translated/tech/233 +++ /dev/null @@ -1,74 +0,0 @@ -# [Mark McIntyre: 你是如何使用Fedora的?][1] - - -![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg) - -最近我们采访了 Mark McIntyre,谈了他是如何使用 Fedora 系统的。这也是 Fedora 杂志上[本系列的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。通过[反馈表][3]与我们联系,表达你想成为采访对象的意愿。 - -### Mark McIntyre 是谁? - -Mark McIntyre 是一个天生的极客,后天的 Linux 爱好者。他说:“我在 13 岁开始编程,当时自学 BASIC 语言,我体会到其中的乐趣,并在乐趣的引导下,一步步成为专业的码农。”Mark 和他的侄女都是披萨饼的死忠粉。“去年秋天,我和我的侄女尽可能多地光顾了诺克斯维尔的披萨饼连锁店。 点击 [https://knox-pizza-quest.blogspot.com/][4] 可以了解我们的进展情况。”Mark 也是一名业余的摄影爱好者,并且在 Flickr 上 [发布自己的作品][5]。 - -![](https://fedoramagazine.org/wp-content/uploads/2017/11/31456893222_553b3cac4d_k-1024x575.jpg) - -作为一名开发者,Mark 有着丰富的工作背景。他用过 Visual Basic 编写应用程序,用过 LotusScript、 PL/SQL(Oracle)、 Tcl/TK 编写代码,也用过基于 Python 的 Django 框架。他的强项是 Python。这也是目前他作为系统工程师的工作语言。“我用 Python 比较规律。但当我的工作变得更像是自动化工程师时, Python 用得就更频繁了。” - -McIntyre 自称是个书呆子,喜欢科幻电影,但他最喜欢的一部电影却不是科幻片。“尽管我是个书呆子,喜欢看《星际迷航》、《星球大战》之类的影片,但《光荣战役》或许才是我最喜欢的电影。”他还提到,电影《冲出宁静号》实属著名电视剧《萤火虫》的精彩后续。 - -Mark 比较看重他人的谦逊、知识与和气。他欣赏能够设身处地为他人着想的人。“如果你决定为另一个人服务,那么你会选择自己愿意亲近的人,而不是让自己备受折磨的人。” - -McIntyre 目前在 [Scripps Networks Interactive][6] 工作,这家公司是 HGTV、Food Network、Travel Channel、DIY、GAC 以及其他几个有线电视频道的母公司。“我现在是一名系统工程师,负责非线性视频内容,这是全部媒体开展线上消费的计划。”他支持一些开发团队编写应用程序,将线性视频从有线电视发布到线上平台,比如亚马逊、葫芦。这些系统既包含预置系统,也包含云系统。Mark 还开发了一些自动化工具,将这些应用程序主要部署到云基础结构中。 - -### Fedora 社区 - -Mark 形容 Fedora 社区是一个富有活力的社区,充满着像 Fedora 用户一样热爱生活的人。“从设计师到包装师,这个团体依然非常活跃,生机勃勃。” 他继续说道:“这使我对操作系统抱有一种信心。” - -2002年左右,Mark 开始经常使用 IRC 上的 #fedora 频道:“那时候,Wi-Fi 在启用适配器和配置模块功能时,有许多还是靠手工实现的。”为了让他的 Wi-Fi 能够工作,他不得不重新去编译 Fedora 内核。 - -McIntyre 鼓励他人参与 Fedora 社区。“这里有许多来自不同领域的机会。前端设计、测试部署、开发、应用程序包装以及新型技术实现。”他建议选择一个感兴趣的领域,然后向那个团体提出疑问。“这里有许多机会去奉献自己。” - -对于帮助他起步的社区成员,Mark 赞道:“Ben Williams 非常乐于助人。在我第一次接触 Fedora 时,他帮我搞定了一些 #fedora 支持频道中的安装补丁。” Ben 也鼓励 Mark 去做 Fedora [代表][7]。 - -### 什么样的硬件和软件? 
- -McIntyre 将 Fedora Linux 系统用在他的笔记本和台式机上。在服务器上他选择了 CentOS,因为它有更长的生命周期支持。他现在的台式机是自己组装的,配有 Intel 酷睿 i5 处理器,32GB 的内存和2TB 的硬盘。“我装了个 4K 的显示屏,有足够大的,地方来同时查看所有的应用。”他目前工作用的笔记本是戴尔灵越二合一,配备 13 英寸的屏,16 GB 的内存和 525 GB 的 m.2 固态硬盘。 - -![](https://fedoramagazine.org/wp-content/uploads/2017/11/Screenshot-from-2017-10-26-08-51-41-1024x640.png) - -Mark 现在将 Fedora 26 运行在他过去几个月装配的所有盒子中。当一个新版本正式发布的时候,他倾向于避开这个高峰期。“除非在它即将发行的时候,我的工作站中有个正在运行下一代测试版本,通常情况下,一旦它发展成熟,我都会试着去获取最新的版本。”他经常采取就地更新:“这种就地更新方法利用 dnf 系统升级插件,目前表现得非常好。” - -为了搞摄影,McIntyre 用上了 [GIMP][8]、[Darktable][9],以及其他一些照片查看包和快速编辑包。当不启用网络电子邮件时,Mark 会使用 [Geary][10],还有[GNOME Calendar][11]。Mark 选用 HexChat 作为 IRC 客户端,[HexChat][12] 与在 Fedora 服务器实例上运行的 [ZNC bouncer][13] 联机。他的部门通过 Slave 进行沟通交流。 - -“我从来都不是 IDE 粉,所以大多数的编辑任务都是在 [vim][14] 上完成的。”Mark 偶尔也会打开一个简单的文本编辑器,如 [gedit][15],或者 [xed][16]。他用 [GPaste][17] 做复制和粘贴工作。“对于终端的选择,我已经变成 [Tilix][18] 的忠粉。”McIntyre 通过 [Rhythmbox][19] 来管理他喜欢的播客,并用 [Epiphany][20] 实现快速网络查询。 - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/mark-mcintyre-fedora/ - -作者:[Charles Profitt][a] -译者:[zrszrs](https://github.com/zrszrszrs) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://fedoramagazine.org/author/cprofitt/ -[1]:https://fedoramagazine.org/mark-mcintyre-fedora/ -[2]:https://fedoramagazine.org/tag/how-do-you-fedora/ -[3]:https://fedoramagazine.org/submit-an-idea-or-tip/ -[4]:https://knox-pizza-quest.blogspot.com/ -[5]:https://www.flickr.com/photos/mockgeek/ -[6]:http://www.scrippsnetworksinteractive.com/ -[7]:https://fedoraproject.org/wiki/Ambassadors -[8]:https://www.gimp.org/ -[9]:http://www.darktable.org/ -[10]:https://wiki.gnome.org/Apps/Geary -[11]:https://wiki.gnome.org/Apps/Calendar -[12]:https://hexchat.github.io/ -[13]:https://wiki.znc.in/ZNC -[14]:http://www.vim.org/ -[15]:https://wiki.gnome.org/Apps/Gedit -[16]:https://github.com/linuxmint/xed -[17]:https://github.com/Keruspe/GPaste -[18]:https://fedoramagazine.org/try-tilix-new-terminal-emulator-fedora/ -[19]:https://wiki.gnome.org/Apps/Rhythmbox -[20]:https://wiki.gnome.org/Apps/Web From c36a27b4a041c378eef871c7875e504cbb965b69 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=82=B9=E8=8D=A3=E5=8D=87?= Date: Wed, 6 Dec 2017 11:25:47 +0800 Subject: [PATCH 060/236] Add files via upload --- ...0171120 Mark McIntyre How Do You Fedora.md | 74 +++++++++++++++++++ 1 file changed, 74 insertions(+) create mode 100644 translated/tech/20171120 Mark McIntyre How Do You Fedora.md diff --git a/translated/tech/20171120 Mark McIntyre How Do You Fedora.md b/translated/tech/20171120 Mark McIntyre How Do You Fedora.md new file mode 100644 index 0000000000..4fe315eb07 --- /dev/null +++ b/translated/tech/20171120 Mark McIntyre How Do You Fedora.md @@ -0,0 +1,74 @@ +# [Mark McIntyre: 你是如何使用Fedora的?][1] + + +![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg) + +最近我们采访了 Mark McIntyre,谈了他是如何使用 Fedora 系统的。这也是 Fedora 杂志上[本系列的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。通过[反馈表][3]与我们联系,表达你想成为采访对象的意愿。 + +### Mark McIntyre 是谁? 
+
+Mark McIntyre 是一个天生的极客,后天的 Linux 爱好者。他说:“我在 13 岁开始编程,当时自学 BASIC 语言,我体会到其中的乐趣,并在乐趣的引导下,一步步成为专业的码农。”Mark 和他的侄女都是披萨饼的死忠粉。“去年秋天,我和我的侄女尽可能多地光顾了诺克斯维尔的披萨饼连锁店。 点击 [https://knox-pizza-quest.blogspot.com/][4] 可以了解我们的进展情况。”Mark 也是一名业余的摄影爱好者,并且在 Flickr 上 [发布自己的作品][5]。
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/11/31456893222_553b3cac4d_k-1024x575.jpg)
+
+作为一名开发者,Mark 有着丰富的工作背景。他用过 Visual Basic 编写应用程序,用过 LotusScript、 PL/SQL(Oracle)、 Tcl/TK 编写代码,也用过基于 Python 的 Django 框架。他的强项是 Python。这也是目前他作为系统工程师的工作语言。“我经常使用 Python。但当我的工作变得更像是自动化工程师时, Python 用得就更频繁了。”
+
+McIntyre 自称是个书呆子,喜欢科幻电影,但他最喜欢的一部电影却不是科幻片。“尽管我是个书呆子,喜欢看《星际迷航》、《星球大战》之类的影片,但《光荣战役》或许才是我最喜欢的电影。”他还提到,电影《冲出宁静号》实属著名电视剧《萤火虫》的精彩后续。
+
+Mark 比较看重他人的谦逊、知识与和气。他欣赏能够设身处地为他人着想的人。“如果你决定为另一个人服务,那么你会选择自己愿意亲近的人,而不是让自己备受折磨的人。”
+
+McIntyre 目前在 [Scripps Networks Interactive][6] 工作,这家公司是 HGTV、Food Network、Travel Channel、DIY、GAC 以及其他几个有线电视频道的母公司。“我现在是一名系统工程师,负责非线性视频内容,也就是所有供线上消费的媒体内容。”他支持一些开发团队编写应用程序,将线性视频从有线电视发布到线上平台,比如亚马逊、Hulu。这些系统既包含预置系统,也包含云系统。Mark 还开发了一些自动化工具,将这些应用程序主要部署到云基础结构中。
+
+### Fedora 社区
+
+Mark 形容 Fedora 社区是一个富有活力的社区,充满着像 Fedora 用户一样热爱生活的人。“从设计师到打包者,这个团体依然非常活跃,生机勃勃。” 他继续说道:“这使我对操作系统抱有一种信心。”
+
+2002年左右,Mark 开始经常使用 IRC 上的 #fedora 频道:“那时候,Wi-Fi 在启用适配器和配置模块功能时,有许多还是靠手工实现的。”为了让他的 Wi-Fi 能够工作,他不得不重新去编译 Fedora 内核。
+
+McIntyre 鼓励他人参与 Fedora 社区。“这里有许多来自不同领域的机会。前端设计、测试部署、开发、应用程序打包以及新型技术实现。”他建议选择一个感兴趣的领域,然后向那个团体提出疑问。“这里有许多机会去奉献自己。”
+
+对于帮助他起步的社区成员,Mark 赞道:“Ben Williams 非常乐于助人。在我第一次接触 Fedora 时,他帮我搞定了一些 #fedora 支持频道中的安装补丁。” Ben 也鼓励 Mark 去做 Fedora [代表][7]。
+
+### 什么样的硬件和软件?
+
+McIntyre 将 Fedora Linux 系统用在他的笔记本和台式机上。在服务器上他选择了 CentOS,因为它有更长的生命周期支持。他现在的台式机是自己组装的,配有 Intel 酷睿 i5 处理器,32GB 的内存和 2TB 的硬盘。“我装了个 4K 的显示屏,有足够大的地方来同时查看所有的应用。”他目前工作用的笔记本是戴尔灵越二合一,配备 13 英寸的屏,16 GB 的内存和 525 GB 的 m.2 固态硬盘。
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/11/Screenshot-from-2017-10-26-08-51-41-1024x640.png)
+
+Mark 现在将 Fedora 26 运行在他过去几个月装配的所有机器中。当一个新版本正式发布的时候,他倾向于避开这个高峰期。“除非在它即将发行的时候,我的工作站中有个正在运行下一代测试版本,通常情况下,一旦它发展成熟,我都会试着去获取最新的版本。”他经常采取就地更新:“这种就地更新方法利用 dnf 系统升级插件,目前表现得非常好。”
+
+为了搞摄影,McIntyre 用上了 [GIMP][8]、[Darktable][9],以及其他一些照片查看包和快速编辑包。当不使用网页邮箱时,Mark 会使用 [Geary][10],还有 [GNOME Calendar][11]。Mark 选用 HexChat 作为 IRC 客户端,[HexChat][12] 与在 Fedora 服务器实例上运行的 [ZNC bouncer][13] 联机。他的部门通过 Slack 进行沟通交流。
+
+“我从来都不是 IDE 粉,所以大多数的编辑任务都是在 [vim][14] 上完成的。”Mark 偶尔也会打开一个简单的文本编辑器,如 [gedit][15],或者 [xed][16]。他用 [GPaste][17] 做复制和粘贴工作。“对于终端的选择,我已经变成 [Tilix][18] 的忠粉。”McIntyre 通过 [Rhythmbox][19] 来管理他喜欢的播客,并用 [Epiphany][20] 实现快速网络查询。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/mark-mcintyre-fedora/
+
+作者:[Charles Profitt][a]
+译者:[zrszrs](https://github.com/zrszrszrs)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://fedoramagazine.org/author/cprofitt/
+[1]:https://fedoramagazine.org/mark-mcintyre-fedora/
+[2]:https://fedoramagazine.org/tag/how-do-you-fedora/
+[3]:https://fedoramagazine.org/submit-an-idea-or-tip/
+[4]:https://knox-pizza-quest.blogspot.com/
+[5]:https://www.flickr.com/photos/mockgeek/
+[6]:http://www.scrippsnetworksinteractive.com/
+[7]:https://fedoraproject.org/wiki/Ambassadors
+[8]:https://www.gimp.org/
+[9]:http://www.darktable.org/
+[10]:https://wiki.gnome.org/Apps/Geary
+[11]:https://wiki.gnome.org/Apps/Calendar
+[12]:https://hexchat.github.io/
+[13]:https://wiki.znc.in/ZNC
+[14]:http://www.vim.org/
+[15]:https://wiki.gnome.org/Apps/Gedit
+[16]:https://github.com/linuxmint/xed
+[17]:https://github.com/Keruspe/GPaste +[18]:https://fedoramagazine.org/try-tilix-new-terminal-emulator-fedora/ +[19]:https://wiki.gnome.org/Apps/Rhythmbox +[20]:https://wiki.gnome.org/Apps/Web From 3214c62240e148a862536689968d2a4c36efddf7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=82=B9=E8=8D=A3=E5=8D=87?= Date: Wed, 6 Dec 2017 11:27:39 +0800 Subject: [PATCH 061/236] Delete 20171120 Mark McIntyre How Do You Fedora.md --- ...0171120 Mark McIntyre How Do You Fedora.md | 74 ------------------- 1 file changed, 74 deletions(-) delete mode 100644 sources/tech/20171120 Mark McIntyre How Do You Fedora.md diff --git a/sources/tech/20171120 Mark McIntyre How Do You Fedora.md b/sources/tech/20171120 Mark McIntyre How Do You Fedora.md deleted file mode 100644 index 4fe315eb07..0000000000 --- a/sources/tech/20171120 Mark McIntyre How Do You Fedora.md +++ /dev/null @@ -1,74 +0,0 @@ -# [Mark McIntyre: 你是如何使用Fedora的?][1] - - -![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg) - -最近我们采访了 Mark McIntyre,谈了他是如何使用 Fedora 系统的。这也是 Fedora 杂志上[本系列的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。通过[反馈表][3]与我们联系,表达你想成为采访对象的意愿。 - -### Mark McIntyre 是谁? - -Mark McIntyre 是一个天生的极客,后天的 Linux 爱好者。他说:“我在 13 岁开始编程,当时自学 BASIC 语言,我体会到其中的乐趣,并在乐趣的引导下,一步步成为专业的码农。”Mark 和他的侄女都是披萨饼的死忠粉。“去年秋天,我和我的侄女尽可能多地光顾了诺克斯维尔的披萨饼连锁店。 点击 [https://knox-pizza-quest.blogspot.com/][4] 可以了解我们的进展情况。”Mark 也是一名业余的摄影爱好者,并且在 Flickr 上 [发布自己的作品][5]。 - -![](https://fedoramagazine.org/wp-content/uploads/2017/11/31456893222_553b3cac4d_k-1024x575.jpg) - -作为一名开发者,Mark 有着丰富的工作背景。他用过 Visual Basic 编写应用程序,用过 LotusScript、 PL/SQL(Oracle)、 Tcl/TK 编写代码,也用过基于 Python 的 Django 框架。他的强项是 Python。这也是目前他作为系统工程师的工作语言。“我用 Python 比较规律。但当我的工作变得更像是自动化工程师时, Python 用得就更频繁了。” - -McIntyre 自称是个书呆子,喜欢科幻电影,但他最喜欢的一部电影却不是科幻片。“尽管我是个书呆子,喜欢看《星际迷航》、《星球大战》之类的影片,但《光荣战役》或许才是我最喜欢的电影。”他还提到,电影《冲出宁静号》实属著名电视剧《萤火虫》的精彩后续。 - -Mark 比较看重他人的谦逊、知识与和气。他欣赏能够设身处地为他人着想的人。“如果你决定为另一个人服务,那么你会选择自己愿意亲近的人,而不是让自己备受折磨的人。” - -McIntyre 目前在 [Scripps Networks Interactive][6] 工作,这家公司是 HGTV、Food Network、Travel Channel、DIY、GAC 以及其他几个有线电视频道的母公司。“我现在是一名系统工程师,负责非线性视频内容,这是全部媒体开展线上消费的计划。”他支持一些开发团队编写应用程序,将线性视频从有线电视发布到线上平台,比如亚马逊、葫芦。这些系统既包含预置系统,也包含云系统。Mark 还开发了一些自动化工具,将这些应用程序主要部署到云基础结构中。 - -### Fedora 社区 - -Mark 形容 Fedora 社区是一个富有活力的社区,充满着像 Fedora 用户一样热爱生活的人。“从设计师到包装师,这个团体依然非常活跃,生机勃勃。” 他继续说道:“这使我对操作系统抱有一种信心。” - -2002年左右,Mark 开始经常使用 IRC 上的 #fedora 频道:“那时候,Wi-Fi 在启用适配器和配置模块功能时,有许多还是靠手工实现的。”为了让他的 Wi-Fi 能够工作,他不得不重新去编译 Fedora 内核。 - -McIntyre 鼓励他人参与 Fedora 社区。“这里有许多来自不同领域的机会。前端设计、测试部署、开发、应用程序包装以及新型技术实现。”他建议选择一个感兴趣的领域,然后向那个团体提出疑问。“这里有许多机会去奉献自己。” - -对于帮助他起步的社区成员,Mark 赞道:“Ben Williams 非常乐于助人。在我第一次接触 Fedora 时,他帮我搞定了一些 #fedora 支持频道中的安装补丁。” Ben 也鼓励 Mark 去做 Fedora [代表][7]。 - -### 什么样的硬件和软件? 
- -McIntyre 将 Fedora Linux 系统用在他的笔记本和台式机上。在服务器上他选择了 CentOS,因为它有更长的生命周期支持。他现在的台式机是自己组装的,配有 Intel 酷睿 i5 处理器,32GB 的内存和2TB 的硬盘。“我装了个 4K 的显示屏,有足够大的,地方来同时查看所有的应用。”他目前工作用的笔记本是戴尔灵越二合一,配备 13 英寸的屏,16 GB 的内存和 525 GB 的 m.2 固态硬盘。 - -![](https://fedoramagazine.org/wp-content/uploads/2017/11/Screenshot-from-2017-10-26-08-51-41-1024x640.png) - -Mark 现在将 Fedora 26 运行在他过去几个月装配的所有盒子中。当一个新版本正式发布的时候,他倾向于避开这个高峰期。“除非在它即将发行的时候,我的工作站中有个正在运行下一代测试版本,通常情况下,一旦它发展成熟,我都会试着去获取最新的版本。”他经常采取就地更新:“这种就地更新方法利用 dnf 系统升级插件,目前表现得非常好。” - -为了搞摄影,McIntyre 用上了 [GIMP][8]、[Darktable][9],以及其他一些照片查看包和快速编辑包。当不启用网络电子邮件时,Mark 会使用 [Geary][10],还有[GNOME Calendar][11]。Mark 选用 HexChat 作为 IRC 客户端,[HexChat][12] 与在 Fedora 服务器实例上运行的 [ZNC bouncer][13] 联机。他的部门通过 Slave 进行沟通交流。 - -“我从来都不是 IDE 粉,所以大多数的编辑任务都是在 [vim][14] 上完成的。”Mark 偶尔也会打开一个简单的文本编辑器,如 [gedit][15],或者 [xed][16]。他用 [GPaste][17] 做复制和粘贴工作。“对于终端的选择,我已经变成 [Tilix][18] 的忠粉。”McIntyre 通过 [Rhythmbox][19] 来管理他喜欢的播客,并用 [Epiphany][20] 实现快速网络查询。 - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/mark-mcintyre-fedora/ - -作者:[Charles Profitt][a] -译者:[zrszrs](https://github.com/zrszrszrs) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://fedoramagazine.org/author/cprofitt/ -[1]:https://fedoramagazine.org/mark-mcintyre-fedora/ -[2]:https://fedoramagazine.org/tag/how-do-you-fedora/ -[3]:https://fedoramagazine.org/submit-an-idea-or-tip/ -[4]:https://knox-pizza-quest.blogspot.com/ -[5]:https://www.flickr.com/photos/mockgeek/ -[6]:http://www.scrippsnetworksinteractive.com/ -[7]:https://fedoraproject.org/wiki/Ambassadors -[8]:https://www.gimp.org/ -[9]:http://www.darktable.org/ -[10]:https://wiki.gnome.org/Apps/Geary -[11]:https://wiki.gnome.org/Apps/Calendar -[12]:https://hexchat.github.io/ -[13]:https://wiki.znc.in/ZNC -[14]:http://www.vim.org/ -[15]:https://wiki.gnome.org/Apps/Gedit -[16]:https://github.com/linuxmint/xed -[17]:https://github.com/Keruspe/GPaste -[18]:https://fedoramagazine.org/try-tilix-new-terminal-emulator-fedora/ -[19]:https://wiki.gnome.org/Apps/Rhythmbox -[20]:https://wiki.gnome.org/Apps/Web From f2f1ab9ebebe43b2044f89025b5929f246db693b Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 6 Dec 2017 15:32:23 +0800 Subject: [PATCH 062/236] PRF:20171128 How To Tell If Your Linux Server Has Been Compromised.md @lujun9972 --- ... Your Linux Server Has Been Compromised.md | 108 ++++++++---------- 1 file changed, 47 insertions(+), 61 deletions(-) diff --git a/translated/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md b/translated/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md index 29fe95d868..3ce5f449e3 100644 --- a/translated/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md +++ b/translated/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md @@ -1,49 +1,48 @@ -如何判断Linux服务器是否被入侵 --------------- +如何判断 Linux 服务器是否被入侵? +========================= -本指南中所谓的服务器被入侵或者说被黑了的意思是指未经认证的人或程序为了自己的目的登录到服务器上去并使用其计算资源, 通常会产生不好的影响。 +本指南中所谓的服务器被入侵或者说被黑了的意思,是指未经授权的人或程序为了自己的目的登录到服务器上去并使用其计算资源,通常会产生不好的影响。 -免责声明: 若你的服务器被类似NSA这样的国家机关或者某个犯罪集团如请,那么你并不会发现有任何问题,这些技术也无法发觉他们的存在。 +免责声明:若你的服务器被类似 NSA 这样的国家机关或者某个犯罪集团入侵,那么你并不会注意到有任何问题,这些技术也无法发觉他们的存在。 -然而, 大多数被攻破的服务器都是被类似自动攻击程序这样的程序或者类似“脚本小子”这样的廉价攻击者,以及蠢蛋犯罪所入侵的。 +然而,大多数被攻破的服务器都是被类似自动攻击程序这样的程序或者类似“脚本小子”这样的廉价攻击者,以及蠢蛋罪犯所入侵的。 这类攻击者会在访问服务器的同时滥用服务器资源,并且不怎么会采取措施来隐藏他们正在做的事情。 -### 入侵服务器的症状 +### 被入侵服务器的症状 -当服务器被没有经验攻击者或者自动攻击程序入侵了的话,他们往往会消耗100%的资源. 
他们可能消耗CPU资源来进行数字货币的采矿或者发送垃圾邮件,也可能消耗带宽来发动 `DoS` 攻击。 +当服务器被没有经验攻击者或者自动攻击程序入侵了的话,他们往往会消耗 100% 的资源。他们可能消耗 CPU 资源来进行数字货币的采矿或者发送垃圾邮件,也可能消耗带宽来发动 DoS 攻击。 -因此出现问题的第一个表现就是服务器 “变慢了”. 这可能表现在网站的页面打开的很慢, 或者电子邮件要花很长时间才能发送出去。 +因此出现问题的第一个表现就是服务器 “变慢了”。这可能表现在网站的页面打开的很慢,或者电子邮件要花很长时间才能发送出去。 那么你应该查看那些东西呢? #### 检查 1 - 当前都有谁在登录? -你首先要查看当前都有谁登录在服务器上. 发现攻击者登录到服务器上进行操作并不罕见。 +你首先要查看当前都有谁登录在服务器上。发现攻击者登录到服务器上进行操作并不复杂。 -其对应的命令是 `w`. 运行 `w` 会输出如下结果: +其对应的命令是 `w`。运行 `w` 会输出如下结果: ``` 08:32:55 up 98 days, 5:43, 2 users, load average: 0.05, 0.03, 0.00 USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT root pts/0 113.174.161.1 08:26 0.00s 0.03s 0.02s ssh root@coopeaa12 root pts/1 78.31.109.1 08:26 0.00s 0.01s 0.00s w - ``` -第一个IP是英国IP,而第二个IP是越南IP. 这个不是个好兆头。 +第一个 IP 是英国 IP,而第二个 IP 是越南 IP。这个不是个好兆头。 -停下来做个深呼吸, 不要紧,只需要杀掉他们的SSH连接就好了. Unless you can stop then re-entering the server they will do so quickly and quite likely kick you off and stop you getting back in。 +停下来做个深呼吸, 不要恐慌之下只是干掉他们的 SSH 连接。除非你能够防止他们再次进入服务器,否则他们会很快进来并踢掉你,以防你再次回去。 -请参阅本文最后的 `入侵之后怎么办` 这一章节来看发现被入侵的证据后应该怎么办。 +请参阅本文最后的“被入侵之后怎么办”这一章节来看找到了被入侵的证据后应该怎么办。 -`whois` 命令可以接一个IP地址然后告诉你IP注册的组织的所有信息, 当然就包括所在国家的信息。 +`whois` 命令可以接一个 IP 地址然后告诉你该 IP 所注册的组织的所有信息,当然就包括所在国家的信息。 #### 检查 2 - 谁曾经登录过? -Linux 服务器会记录下哪些用户,从哪个IP,在什么时候登录的以及登陆了多长时间这些信息. 使用 `last` 命令可以查看这些信息。 +Linux 服务器会记录下哪些用户,从哪个 IP,在什么时候登录的以及登录了多长时间这些信息。使用 `last` 命令可以查看这些信息。 -输出类似这样: +输出类似这样: ``` root pts/1 78.31.109.1 Thu Nov 30 08:26 still logged in @@ -51,104 +50,91 @@ root pts/0 113.174.161.1 Thu Nov 30 08:26 still logged in root pts/1 78.31.109.1 Thu Nov 30 08:24 - 08:26 (00:01) root pts/0 113.174.161.1 Wed Nov 29 12:34 - 12:52 (00:18) root pts/0 14.176.196.1 Mon Nov 27 13:32 - 13:53 (00:21) - ``` -这里可以看到英国IP和越南IP交替出现, 而且最上面两个IP现在还处于登录状态. 如果你看到任何未经授权的IP,那么请参阅最后章节。 +这里可以看到英国 IP 和越南 IP 交替出现,而且最上面两个 IP 现在还处于登录状态。如果你看到任何未经授权的 IP,那么请参阅最后章节。 -登录历史记录会以文本格式记录到 `~/.bash_history`(注:这里作者应该写错了)中,因此很容易被删除。 -通常攻击者会直接把这个文件删掉,以掩盖他们的攻击行为. 因此, 若你运行了 `last` 命令却只看得见你的当前登录,那么这就是个不妙的信号。 +登录后的历史记录会记录到二进制的 `/var/log/wtmp` 文件中(LCTT 译注:这里作者应该写错了,根据实际情况修改),因此很容易被删除。通常攻击者会直接把这个文件删掉,以掩盖他们的攻击行为。 因此, 若你运行了 `last` 命令却只看得见你的当前登录,那么这就是个不妙的信号。 如果没有登录历史的话,请一定小心,继续留意入侵的其他线索。 #### 检查 3 - 回顾命令历史 -这个层次的攻击者通常不会注意掩盖命令的历史记录,因此运行 `history` 命令会显示出他们曾经做过的所有事情。 -一定留意有没有用 `wget` 或 `curl` 命令来下载类似垃圾邮件机器人或者挖矿程序之类的软件。 +这个层次的攻击者通常不会注意掩盖命令的历史记录,因此运行 `history` 命令会显示出他们曾经做过的所有事情。 +一定留意有没有用 `wget` 或 `curl` 命令来下载类似垃圾邮件机器人或者挖矿程序之类的非常规软件。 -命令历史存储在 `~/.bash_history` 文件中,因此有些攻击者会删除该文件以掩盖他们的所作所为。 -跟登录历史一样, 若你运行 `history` 命令却没有输出任何东西那就表示历史文件被删掉了. 这也是个不妙的信号,你需要很小心地检查一下服务器了。 +命令历史存储在 `~/.bash_history` 文件中,因此有些攻击者会删除该文件以掩盖他们的所作所为。跟登录历史一样,若你运行 `history` 命令却没有输出任何东西那就表示历史文件被删掉了。这也是个不妙的信号,你需要很小心地检查一下服务器了。(LCTT 译注,如果没有命令历史,也有可能是你的配置错误。) -#### 检查 4 - 哪些进程在消耗CPU? +#### 检查 4 - 哪些进程在消耗 CPU? -你常遇到的这类攻击者通常不怎么会去掩盖他们做的事情. 他们会运行一些特别消耗CPU的进程. 这就很容易发着这些进程了. 只需要运行 `top` 然后看最前的那几个进程就行了。 +你常遇到的这类攻击者通常不怎么会去掩盖他们做的事情。他们会运行一些特别消耗 CPU 的进程。这就很容易发现这些进程了。只需要运行 `top` 然后看最前的那几个进程就行了。 -这也能显示出那些未登录的攻击者来. 比如,可能有人在用未受保护的邮件脚本来发送垃圾邮件。 +这也能显示出那些未登录进来的攻击者。比如,可能有人在用未受保护的邮件脚本来发送垃圾邮件。 -如果你最上面的进程对不了解,那么你可以google一下进程名称,或者通过 `losf` 和 `strace` 来看看它做的事情是什么。 +如果你最上面的进程对不了解,那么你可以 Google 一下进程名称,或者通过 `losf` 和 `strace` 来看看它做的事情是什么。 -使用这些工具,第一步从 `top` 中拷贝出进程的 PID,然后运行: - -```shell -strace -p PID +使用这些工具,第一步从 `top` 中拷贝出进程的 PID,然后运行: +``` +strace -p PID ``` -这会显示出进程调用的所有系统调用. 它产生的内容会很多,但这些信息能告诉你这个进程在做什么。 +这会显示出该进程调用的所有系统调用。它产生的内容会很多,但这些信息能告诉你这个进程在做什么。 ``` lsof -p PID - ``` -这个程序会列出进程打开的文件. 通过查看它访问的文件可以很好的理解它在做的事情。 +这个程序会列出该进程打开的文件。通过查看它访问的文件可以很好的理解它在做的事情。 #### 检查 5 - 检查所有的系统进程 -消耗CPU不严重的未认证进程可能不会在 `top` 中显露出来,不过它依然可以通过 `ps` 列出来. 
命令 `ps auxf` 就能显示足够清晰的信息了。 +消耗 CPU 不严重的未授权进程可能不会在 `top` 中显露出来,不过它依然可以通过 `ps` 列出来。命令 `ps auxf` 就能显示足够清晰的信息了。 -你需要检查一下每个不认识的进程. 经常运行 `ps` (这是个好习惯) 能帮助你发现奇怪的进程。 +你需要检查一下每个不认识的进程。经常运行 `ps` (这是个好习惯)能帮助你发现奇怪的进程。 #### 检查 6 - 检查进程的网络使用情况 -`iftop` 的功能类似 `top`,他会显示一系列收发网络数据的进程以及他们的源地址和目的地址。 -类似 `DoS` 攻击或垃圾制造器这样的进程很容易显示在列表的最顶端。 +`iftop` 的功能类似 `top`,它会排列显示收发网络数据的进程以及它们的源地址和目的地址。类似 DoS 攻击或垃圾机器人这样的进程很容易显示在列表的最顶端。 #### 检查 7 - 哪些进程在监听网络连接? -通常攻击者会安装一个后门程序专门监听网络端口接受指令. 该进程等待期间是不会消耗CPU和带宽的,因此也就不容易通过 `top` 之类的命令发现。 +通常攻击者会安装一个后门程序专门监听网络端口接受指令。该进程等待期间是不会消耗 CPU 和带宽的,因此也就不容易通过 `top` 之类的命令发现。 -`lsof` 和 `netstat` 命令都会列出所有的联网进程. 我通常会让他们带上下面这些参数: +`lsof` 和 `netstat` 命令都会列出所有的联网进程。我通常会让它们带上下面这些参数: ``` lsof -i - ``` ``` netstat -plunt - ``` -你需要留意那些处于 `LISTEN` 和 `ESTABLISHED` 状态的进程,这些进程要么正在等待连接(LISTEN),要么已经连接(ESTABLISHED)。 -如果遇到不认识的进程,使用 `strace` 和 `lsof` 来看看它们在做什么东西。 +你需要留意那些处于 `LISTEN` 和 `ESTABLISHED` 状态的进程,这些进程要么正在等待连接(LISTEN),要么已经连接(ESTABLISHED)。如果遇到不认识的进程,使用 `strace` 和 `lsof` 来看看它们在做什么东西。 ### 被入侵之后该怎么办呢? -首先,不要紧张, 尤其当攻击者正处于登陆状态时更不能紧张. 你需要在攻击者警觉到你已经发现他之前夺回机器的控制权。 -如果他发现你已经发觉到他了,那么他可能会锁死你不让你登陆服务器,然后开始毁尸灭迹。 +首先,不要紧张,尤其当攻击者正处于登录状态时更不能紧张。**你需要在攻击者警觉到你已经发现他之前夺回机器的控制权。**如果他发现你已经发觉到他了,那么他可能会锁死你不让你登陆服务器,然后开始毁尸灭迹。 -如果你技术不太好那么就直接关机吧. 你可以在服务器上运行 `shutdown -h now` 或者 `systemctl poweroff` 这两条命令. 也可以登陆主机提供商的控制面板中关闭服务器。 -关机后,你就可以开始配置防火墙或者咨询一下供应商的意见。 +如果你技术不太好那么就直接关机吧。你可以在服务器上运行 `shutdown -h now` 或者 `systemctl poweroff` 这两条命令之一。也可以登录主机提供商的控制面板中关闭服务器。关机后,你就可以开始配置防火墙或者咨询一下供应商的意见。 -如果你对自己颇有自信,而你的主机提供商也有提供上游防火墙,那么你只需要以此创建并启用下面两条规则就行了: +如果你对自己颇有自信,而你的主机提供商也有提供上游防火墙,那么你只需要以此创建并启用下面两条规则就行了: -1. 只允许从你的IP地址登陆SSH +1. 只允许从你的 IP 地址登录 SSH。 +2. 封禁除此之外的任何东西,不仅仅是 SSH,还包括任何端口上的任何协议。 -2. 封禁除此之外的任何东西,不仅仅是SSH,还包括任何端口上的任何协议。 +这样会立即关闭攻击者的 SSH 会话,而只留下你可以访问服务器。 -这样会立即关闭攻击者的SSH会话,而只留下你访问服务器。 +如果你无法访问上游防火墙,那么你就需要在服务器本身创建并启用这些防火墙策略,然后在防火墙规则起效后使用 `kill` 命令关闭攻击者的 SSH 会话。(LCTT 译注:本地防火墙规则 有可能不会阻止已经建立的 SSH 会话,所以保险起见,你需要手工杀死该会话。) -如果你无法访问上游防火墙,那么你就需要在服务器本身创建并启用这些防火墙策略,然后在防火墙规则起效后使用 `kill` 命令关闭攻击者的ssh会话。 - -最后还有一种方法, 就是通过诸如串行控制台之类的带外连接登陆服务器,然后通过 `systemctl stop network.service` 停止网络功能。 -这会关闭所有服务器上的网络连接,这样你就可以慢慢的配置那些防火墙规则了。 +最后还有一种方法,如果支持的话,就是通过诸如串行控制台之类的带外连接登录服务器,然后通过 `systemctl stop network.service` 停止网络功能。这会关闭所有服务器上的网络连接,这样你就可以慢慢的配置那些防火墙规则了。 重夺服务器的控制权后,也不要以为就万事大吉了。 -不要试着修复这台服务器,让后接着用. 
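
(LCTT 译注:上面提到的“只允许自己的 IP 登录 SSH、封禁其它一切”这两条规则,用 iptables 实现大致如下。这只是一个示意,203.0.113.10 是文档保留的示例 IP,请替换为你自己的地址,并注意规则的先后顺序。)

```
# 规则一:只允许从你自己的 IP 地址访问 SSH
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT
# 规则二:封禁除此之外的所有入站连接
iptables -A INPUT -j DROP
```
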
你永远不知道攻击者做过什么因此你也永远无法保证这台服务器还是安全的。 +不要试着修复这台服务器,然后接着用。你永远不知道攻击者做过什么,因此你也永远无法保证这台服务器还是安全的。 -最好的方法就是拷贝出所有的资料,然后重装系统。 +最好的方法就是拷贝出所有的数据,然后重装系统。(LCTT 译注:你的程序这时已经不可信了,但是数据一般来说没问题。) -------------------------------------------------------------------------------- @@ -156,7 +142,7 @@ via: https://bash-prompt.net/guides/server-hacked/ 作者:[Elliot Cooper][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1244a3c6b4497768bc21ded74d948d82ea3ad0b8 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 6 Dec 2017 15:32:44 +0800 Subject: [PATCH 063/236] PUB:20171128 How To Tell If Your Linux Server Has Been Compromised.md @lujun9972 https://linux.cn/article-9116-1.html --- ...71128 How To Tell If Your Linux Server Has Been Compromised.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171128 How To Tell If Your Linux Server Has Been Compromised.md (100%) diff --git a/translated/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md b/published/20171128 How To Tell If Your Linux Server Has Been Compromised.md similarity index 100% rename from translated/tech/20171128 How To Tell If Your Linux Server Has Been Compromised.md rename to published/20171128 How To Tell If Your Linux Server Has Been Compromised.md From b8190d86deb4bb182bc3a11a23d2465d94c7b70e Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 6 Dec 2017 16:22:31 +0800 Subject: [PATCH 064/236] PRF:20171006 Concurrent Servers Part 3 - Event-driven.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @GitFuture 很辛苦, 这么长,这么专业的文章。 --- ...oncurrent Servers Part 3 - Event-driven.md | 105 ++++++++---------- 1 file changed, 44 insertions(+), 61 deletions(-) diff --git a/translated/tech/20171006 Concurrent Servers Part 3 - Event-driven.md b/translated/tech/20171006 Concurrent Servers Part 3 - Event-driven.md index 01c1d74112..1c0a0a329f 100644 --- a/translated/tech/20171006 Concurrent Servers Part 3 - Event-driven.md +++ b/translated/tech/20171006 Concurrent Servers Part 3 - Event-driven.md @@ -1,9 +1,9 @@ -并发服务器(3) —— 事件驱动 +并发服务器(三):事件驱动 ============================================================ -这是《并发服务器》系列的第三节。[第一节][26] 介绍了阻塞式编程,[第二节 —— 线程][27] 探讨了多线程,将其作为一种可行的方法来实现服务器并发编程。 +这是并发服务器系列的第三节。[第一节][26] 介绍了阻塞式编程,[第二节:线程][27] 探讨了多线程,将其作为一种可行的方法来实现服务器并发编程。 -另一种常见的实现并发的方法叫做 _事件驱动编程_,也可以叫做 _异步_ 编程 [^注1][28]。这种方法变化万千,因此我们会从最基本的开始,使用一些基本的 APIs 而非从封装好的高级方法开始。本系列以后的文章会讲高层次抽象,还有各种混合的方法。 +另一种常见的实现并发的方法叫做 _事件驱动编程_,也可以叫做 _异步_ 编程 ^注1 。这种方法变化万千,因此我们会从最基本的开始,使用一些基本的 API 而非从封装好的高级方法开始。本系列以后的文章会讲高层次抽象,还有各种混合的方法。 本系列的所有文章: @@ -13,13 +13,13 @@ ### 阻塞式 vs. 
非阻塞式 I/O -要介绍这个标题,我们先讲讲阻塞和非阻塞 I/O 的区别。阻塞式 I/O 更好理解,因为这是我们使用 I/O 相关 API 时的“标准”方式。从套接字接收数据的时候,调用 `recv` 函数会发生 _阻塞_,直到它从端口上接收到了来自另一端套接字的数据。这恰恰是第一部分讲到的顺序服务器的问题。 +作为本篇的介绍,我们先讲讲阻塞和非阻塞 I/O 的区别。阻塞式 I/O 更好理解,因为这是我们使用 I/O 相关 API 时的“标准”方式。从套接字接收数据的时候,调用 `recv` 函数会发生 _阻塞_,直到它从端口上接收到了来自另一端套接字的数据。这恰恰是第一部分讲到的顺序服务器的问题。 因此阻塞式 I/O 存在着固有的性能问题。第二节里我们讲过一种解决方法,就是用多线程。哪怕一个线程的 I/O 阻塞了,别的线程仍然可以使用 CPU 资源。实际上,阻塞 I/O 通常在利用资源方面非常高效,因为线程就等待着 —— 操作系统将线程变成休眠状态,只有满足了线程需要的条件才会被唤醒。 -_非阻塞式_ I/O 是另一种思路。把套接字设成非阻塞模式时,调用 `recv` 时(还有 `send`,但是我们现在只考虑接收),函数返回地会很快,哪怕没有数据要接收。这时,就会返回一个特殊的错误状态 ^[注2][15] 来通知调用者,此时没有数据传进来。调用者可以去做其他的事情,或者尝试再次调用 `recv` 函数。 +_非阻塞式_ I/O 是另一种思路。把套接字设成非阻塞模式时,调用 `recv` 时(还有 `send`,但是我们现在只考虑接收),函数返回的会很快,哪怕没有接收到数据。这时,就会返回一个特殊的错误状态 ^注2 来通知调用者,此时没有数据传进来。调用者可以去做其他的事情,或者尝试再次调用 `recv` 函数。 -证明阻塞式和非阻塞式的 `recv` 区别的最好方式就是贴一段示例代码。这里有个监听套接字的小程序,一直在 `recv` 这里阻塞着;当 `recv` 返回了数据,程序就报告接收到了多少个字节 ^[注3][16]: +示范阻塞式和非阻塞式的 `recv` 区别的最好方式就是贴一段示例代码。这里有个监听套接字的小程序,一直在 `recv` 这里阻塞着;当 `recv` 返回了数据,程序就报告接收到了多少个字节 ^注3 : ``` int main(int argc, const char** argv) { @@ -69,8 +69,7 @@ hello # wait for 2 seconds after typing this socket world ^D # to end the connection> ``` - -The listening program will print the following: + 监听程序会输出以下内容: ``` @@ -144,7 +143,6 @@ int main(int argc, const char** argv) { 这里与阻塞版本有些差异,值得注意: 1. `accept` 函数返回的 `newsocktfd` 套接字因调用了 `fcntl`, 被设置成非阻塞的模式。 - 2. 检查 `recv` 的返回状态时,我们对 `errno` 进行了检查,判断它是否被设置成表示没有可供接收的数据的状态。这时,我们仅仅是休眠了 200 毫秒然后进入到下一轮循环。 同样用 `nc` 进行测试,以下是非阻塞监听器的输出: @@ -183,19 +181,19 @@ Peer disconnected; I'm done. 作为练习,给输出添加一个时间戳,确认调用 `recv` 得到结果之间花费的时间是比输入到 `nc` 中所用的多还是少(每一轮是 200 ms)。 -这里就实现了使用非阻塞的 `recv` 让监听者检查套接字变为可能,并且在没有数据的时候重新获得控制权。换句话说,这就是 _polling(轮询)_ —— 主程序周期性的查询套接字以便读取数据。 +这里就实现了使用非阻塞的 `recv` 让监听者检查套接字变为可能,并且在没有数据的时候重新获得控制权。换句话说,用编程的语言说这就是 轮询polling —— 主程序周期性的查询套接字以便读取数据。 对于顺序响应的问题,这似乎是个可行的方法。非阻塞的 `recv` 让同时与多个套接字通信变成可能,轮询这些套接字,仅当有新数据到来时才处理。就是这样,这种方式 _可以_ 用来写并发服务器;但实际上一般不这么做,因为轮询的方式很难扩展。 -首先,我在代码中引入的 200 ms 延迟对于记录非常好(监听器在我输入 `nc` 之间只打印几行 “Calling recv...”,但实际上应该有上千行)。但它也增加了多达 200 ms 的服务器响应时间,这几乎是意料不到的。实际的程序中,延迟会低得多,休眠时间越短,进程占用的 CPU 资源就越多。有些时钟周期只是浪费在等待,这并不好,尤其是在移动设备上,这些设备的电量往往有限。 +首先,我在代码中引入的 200ms 延迟对于演示非常好(监听器在我输入 `nc` 之间只打印几行 “Calling recv...”,但实际上应该有上千行)。但它也增加了多达 200ms 的服务器响应时间,这无意是不必要的。实际的程序中,延迟会低得多,休眠时间越短,进程占用的 CPU 资源就越多。有些时钟周期只是浪费在等待,这并不好,尤其是在移动设备上,这些设备的电量往往有限。 -但是当我们实际这样来使用多个套接字的时候,更严重的问题出现了。想像下监听器正在同时处理 1000 个 客户端。这意味着每一个循环迭代里面,它都得为 _这 1000 个套接字中的每一个_ 执行一遍非阻塞的 `recv`,找到其中准备好了数据的那一个。这非常低效,并且极大的限制了服务器能够并发处理的客户端数。这里有个准则:每次轮询之间等待的间隔越久,服务器响应性越差;而等待的时间越少,CPU 在无用的轮询上耗费的资源越多。 +但是当我们实际这样来使用多个套接字的时候,更严重的问题出现了。想像下监听器正在同时处理 1000 个客户端。这意味着每一个循环迭代里面,它都得为 _这 1000 个套接字中的每一个_ 执行一遍非阻塞的 `recv`,找到其中准备好了数据的那一个。这非常低效,并且极大的限制了服务器能够并发处理的客户端数。这里有个准则:每次轮询之间等待的间隔越久,服务器响应性越差;而等待的时间越少,CPU 在无用的轮询上耗费的资源越多。 -讲真,所有的轮询都像是无用功。当然操作系统应该是知道哪个套接字是准备好了数据的,因此没必要逐个扫描。事实上,就是这样,接下来就会讲一些API,让我们可以更优雅地处理多个客户端。 +讲真,所有的轮询都像是无用功。当然操作系统应该是知道哪个套接字是准备好了数据的,因此没必要逐个扫描。事实上,就是这样,接下来就会讲一些 API,让我们可以更优雅地处理多个客户端。 ### select -`select` 的系统调用是轻便的(POSIX),标准 Unix API 中常有的部分。它是为上一节最后一部分描述的问题而设计的 —— 允许一个线程可以监视许多文件描述符 ^[注4][17] 的变化,不用在轮询中执行不必要的代码。我并不打算在这里引入一个关于 `select` 的理解性的教程,有很多网站和书籍讲这个,但是在涉及到问题的相关内容时,我会介绍一下它的 API,然后再展示一个非常复杂的例子。 +`select` 的系统调用是可移植的(POSIX),是标准 Unix API 中常有的部分。它是为上一节最后一部分描述的问题而设计的 —— 允许一个线程可以监视许多文件描述符 ^注4 的变化,而不用在轮询中执行不必要的代码。我并不打算在这里引入一个关于 `select` 的全面教程,有很多网站和书籍讲这个,但是在涉及到问题的相关内容时,我会介绍一下它的 API,然后再展示一个非常复杂的例子。 `select` 允许 _多路 I/O_,监视多个文件描述符,查看其中任何一个的 I/O 是否可用。 @@ -209,30 +207,25 @@ int select(int nfds, fd_set *readfds, fd_set *writefds, `select` 的调用过程如下: 1. 
在调用之前,用户先要为所有不同种类的要监视的文件描述符创建 `fd_set` 实例。如果想要同时监视读取和写入事件,`readfds` 和 `writefds` 都要被创建并且引用。 - 2. 用户可以使用 `FD_SET` 来设置集合中想要监视的特殊描述符。例如,如果想要监视描述符 2、7 和 10 的读取事件,在 `readfds` 这里调用三次 `FD_SET`,分别设置 2、7 和 10。 - 3. `select` 被调用。 - 4. 当 `select` 返回时(现在先不管超时),就是说集合中有多少个文件描述符已经就绪了。它也修改 `readfds` 和 `writefds` 集合,来标记这些准备好的描述符。其它所有的描述符都会被清空。 - 5. 这时用户需要遍历 `readfds` 和 `writefds`,找到哪个描述符就绪了(使用 `FD_ISSET`)。 -作为完整的例子,我在并发的服务器程序上使用 `select`,重新实现了我们之前的协议。[完整的代码在这里][18];接下来的是代码中的高亮,还有注释。警告:示例代码非常复杂,因此第一次看的时候,如果没有足够的时间,快速浏览也没有关系。 +作为完整的例子,我在并发的服务器程序上使用 `select`,重新实现了我们之前的协议。[完整的代码在这里][18];接下来的是代码中的重点部分及注释。警告:示例代码非常复杂,因此第一次看的时候,如果没有足够的时间,快速浏览也没有关系。 ### 使用 select 的并发服务器 使用 I/O 的多发 API 诸如 `select` 会给我们服务器的设计带来一些限制;这不会马上显现出来,但这值得探讨,因为它们是理解事件驱动编程到底是什么的关键。 -最重要的是,要记住这种方法本质上是单线程的 ^[注5][19]。服务器实际上在 _同一时刻只能做一件事_。因为我们想要同时处理多个客户端请求,我们需要换一种方式重构代码。 +最重要的是,要记住这种方法本质上是单线程的 ^注5 。服务器实际上在 _同一时刻只能做一件事_。因为我们想要同时处理多个客户端请求,我们需要换一种方式重构代码。 首先,让我们谈谈主循环。它看起来是什么样的呢?先让我们想象一下服务器有一堆任务,它应该监视哪些东西呢?两种类型的套接字活动: 1. 新客户端尝试连接。这些客户端应该被 `accept`。 - 2. 已连接的客户端发送数据。这个数据要用 [第一节][11] 中所讲到的协议进行传输,有可能会有一些数据要被回送给客户端。 -尽管这两种活动在本质上有所区别,我们还是要把他们放在一个循环里,因为只能有一个主循环。循环会包含 `select` 的调用。这个 `select` 的调用会监视上述的两种活动。 +尽管这两种活动在本质上有所区别,我们还是要把它们放在一个循环里,因为只能有一个主循环。循环会包含 `select` 的调用。这个 `select` 的调用会监视上述的两种活动。 这里是部分代码,设置了文件描述符集合,并在主循环里转到被调用的 `select` 部分。 @@ -264,9 +257,7 @@ while (1) { 这里的一些要点: 1. 由于每次调用 `select` 都会重写传递给函数的集合,调用器就得维护一个 “master” 集合,在循环迭代中,保持对所监视的所有活跃的套接字的追踪。 - 2. 注意我们所关心的,最开始的唯一那个套接字是怎么变成 `listener_sockfd` 的,这就是最开始的套接字,服务器借此来接收新客户端的连接。 - 3. `select` 的返回值,是在作为参数传递的集合中,那些已经就绪的描述符的个数。`select` 修改这个集合,用来标记就绪的描述符。下一步是在这些描述符中进行迭代。 ``` @@ -298,7 +289,7 @@ for (int fd = 0; fd <= fdset_max && nready > 0; fd++) { } ``` -这部分循环检查 _可读的_ 描述符。让我们跳过监听器套接字(要浏览所有内容,[看这个代码][20]) 然后看看当其中一个客户端准备好了之后会发生什么。出现了这种情况后,我们调用一个叫做 `on_peer_ready_recv` 的 _回调_ 函数,传入相应的文件描述符。这个调用意味着客户端连接到套接字上,发送某些数据,并且对套接字上 `recv` 的调用不会被阻塞 ^[注6][21]。这个回调函数返回结构体 `fd_status_t`。 +这部分循环检查 _可读的_ 描述符。让我们跳过监听器套接字(要浏览所有内容,[看这个代码][20]) 然后看看当其中一个客户端准备好了之后会发生什么。出现了这种情况后,我们调用一个叫做 `on_peer_ready_recv` 的 _回调_ 函数,传入相应的文件描述符。这个调用意味着客户端连接到套接字上,发送某些数据,并且对套接字上 `recv` 的调用不会被阻塞 ^注6 。这个回调函数返回结构体 `fd_status_t`。 ``` typedef struct { @@ -307,7 +298,7 @@ typedef struct { } fd_status_t; ``` -这个结构体告诉主循环,是否应该监视套接字的读取事件,写入事件,或者两者都监视。上述代码展示了 `FD_SET` 和 `FD_CLR` 是怎么在合适的描述符集合中被调用的。对于主循环中某个准备好了写入数据的描述符,代码是类似的,除了它所调用的回调函数,这个回调函数叫做 `on_peer_ready_send`。 +这个结构体告诉主循环,是否应该监视套接字的读取事件、写入事件,或者两者都监视。上述代码展示了 `FD_SET` 和 `FD_CLR` 是怎么在合适的描述符集合中被调用的。对于主循环中某个准备好了写入数据的描述符,代码是类似的,除了它所调用的回调函数,这个回调函数叫做 `on_peer_ready_send`。 现在来花点时间看看这个回调: @@ -464,37 +455,36 @@ INFO:2017-09-26 05:29:18,070:conn0 disconnecting INFO:2017-09-26 05:29:18,070:conn2 disconnecting ``` -和线程的情况相似,客户端之间没有延迟,他们被同时处理。而且在 `select-server` 也没有用线程!主循环 _多路_ 处理所有的客户端,通过高效使用 `select` 轮询多个套接字。回想下 [第二节中][22] 顺序的 vs 多线程的客户端处理过程的图片。对于我们的 `select-server`,三个客户端的处理流程像这样: +和线程的情况相似,客户端之间没有延迟,它们被同时处理。而且在 `select-server` 也没有用线程!主循环 _多路_ 处理所有的客户端,通过高效使用 `select` 轮询多个套接字。回想下 [第二节中][22] 顺序的 vs 多线程的客户端处理过程的图片。对于我们的 `select-server`,三个客户端的处理流程像这样: ![多客户端处理流程](https://eli.thegreenplace.net/images/2017/multiplexed-flow.png) 所有的客户端在同一个线程中同时被处理,通过乘积,做一点这个客户端的任务,然后切换到另一个,再切换到下一个,最后切换回到最开始的那个客户端。注意,这里没有什么循环调度,客户端在它们发送数据的时候被客户端处理,这实际上是受客户端左右的。 -### 同步,异步,事件驱动,回调 +### 同步、异步、事件驱动、回调 -`select-server` 示例代码为讨论什么是异步编程,它和事件驱动及基于回调的编程有何联系,提供了一个良好的背景。因为这些词汇在并发服务器的(非常矛盾的)讨论中很常见。 +`select-server` 示例代码为讨论什么是异步编程、它和事件驱动及基于回调的编程有何联系,提供了一个良好的背景。因为这些词汇在并发服务器的(非常矛盾的)讨论中很常见。 -让我们从一段 `select` 的手册页面中引用的一句好开始: +让我们从一段 `select` 的手册页面中引用的一句话开始: -> select,pselect,FD_CLR,FD_ISSET,FD_SET,FD_ZERO - 同步 I/O 处理 +> 
select,pselect,FD\_CLR,FD\_ISSET,FD\_SET,FD\_ZERO - 同步 I/O 处理 因此 `select` 是 _同步_ 处理。但我刚刚演示了大量代码的例子,使用 `select` 作为 _异步_ 处理服务器的例子。有哪些东西? -答案是:这取决于你的观查角度。同步常用作阻塞处理,并且对 `select` 的调用实际上是阻塞的。和第 1、2 节中讲到的顺序的、多线程的服务器中对 `send` 和 `recv` 是一样的。因此说 `select` 是 _同步的_ API 是有道理的。可是,服务器的设计却可以是 _异步的_,或是 _基于回调的_,或是 _事件驱动的_,尽管其中有对 `select` 的使用。注意这里的 `on_peer_*` 函数是回调函数;它们永远不会阻塞,并且只有网络事件触发的时候才会被调用。它们可以获得部分数据,并能够在调用过程中保持稳定的状态。 +答案是:这取决于你的观察角度。同步常用作阻塞处理,并且对 `select` 的调用实际上是阻塞的。和第 1、2 节中讲到的顺序的、多线程的服务器中对 `send` 和 `recv` 是一样的。因此说 `select` 是 _同步的_ API 是有道理的。可是,服务器的设计却可以是 _异步的_,或是 _基于回调的_,或是 _事件驱动的_,尽管其中有对 `select` 的使用。注意这里的 `on_peer_*` 函数是回调函数;它们永远不会阻塞,并且只有网络事件触发的时候才会被调用。它们可以获得部分数据,并能够在调用过程中保持稳定的状态。 -如果你曾经做过一些 GUI 编程,这些东西对你来说应该很亲切。有个 “事件循环”,常常完全隐藏在框架里,应用的 “业务逻辑” 建立在回调上,这些回调会在各种事件触发后被调用,用户点击鼠标,选择菜单,定时器到时间,数据到达套接字,等等。曾经最常见的编程模型是客户端的 JavaScript,这里面有一堆回调函数,它们在浏览网页时用户的行为被触发。 +如果你曾经做过一些 GUI 编程,这些东西对你来说应该很亲切。有个 “事件循环”,常常完全隐藏在框架里,应用的 “业务逻辑” 建立在回调上,这些回调会在各种事件触发后被调用,用户点击鼠标、选择菜单、定时器触发、数据到达套接字等等。曾经最常见的编程模型是客户端的 JavaScript,这里面有一堆回调函数,它们在浏览网页时用户的行为被触发。 ### select 的局限 -使用 `select` 作为第一个异步服务器的例子对于说明这个概念很有用,而且由于 `select` 是很常见,可移植的 API。但是它也有一些严重的缺陷,在监视的文件描述符非常大的时候就会出现。 +使用 `select` 作为第一个异步服务器的例子对于说明这个概念很有用,而且由于 `select` 是很常见、可移植的 API。但是它也有一些严重的缺陷,在监视的文件描述符非常大的时候就会出现。 1. 有限的文件描述符的集合大小。 - 2. 糟糕的性能。 -从文件描述符的大小开始。`FD_SETSIZE` 是一个编译期常数,在如今的操作系统中,它的值通常是 1024。它被硬编码在 `glibc` 的头文件里,并且不容易修改。它把 `select` 能够监视的文件描述符的数量限制在 1024 以内。曾有些分支想要写出能够处理上万个并发访问的客户端请求的服务器,这个问题很有现实意义。有一些方法,但是不可移植,也很难用。 +从文件描述符的大小开始。`FD_SETSIZE` 是一个编译期常数,在如今的操作系统中,它的值通常是 1024。它被硬编码在 `glibc` 的头文件里,并且不容易修改。它把 `select` 能够监视的文件描述符的数量限制在 1024 以内。曾有些人想要写出能够处理上万个并发访问的客户端请求的服务器,所以这个问题很有现实意义。有一些方法,但是不可移植,也很难用。 -糟糕的性能问题就好解决的多,但是依然非常严重。注意当 `select` 返回的时候,它向调用者提供的信息是 “就绪的” 描述符的个数,还有被修改过的描述符集合。描述符集映射着描述符 就绪/未就绪”,但是并没有提供什么有效的方法去遍历所有就绪的描述符。如果只有一个描述符是就绪的,最坏的情况是调用者需要遍历 _整个集合_ 来找到那个描述符。这在监视的描述符数量比较少的时候还行,但是如果数量变的很大的时候,这种方法弊端就凸显出了 ^[注7][23]。 +糟糕的性能问题就好解决的多,但是依然非常严重。注意当 `select` 返回的时候,它向调用者提供的信息是 “就绪的” 描述符的个数,还有被修改过的描述符集合。描述符集映射着描述符“就绪/未就绪”,但是并没有提供什么有效的方法去遍历所有就绪的描述符。如果只有一个描述符是就绪的,最坏的情况是调用者需要遍历 _整个集合_ 来找到那个描述符。这在监视的描述符数量比较少的时候还行,但是如果数量变的很大的时候,这种方法弊端就凸显出了 ^注7 。 由于这些原因,为了写出高性能的并发服务器, `select` 已经不怎么用了。每一个流行的操作系统有独特的不可移植的 API,允许用户写出非常高效的事件循环;像框架这样的高级结构还有高级语言通常在一个可移植的接口中包含这些 API。 @@ -541,30 +531,23 @@ while (1) { } ``` -通过调用 `epoll_ctl` 来配置 `epoll`。这时,配置监听的套接字数量,也就是 `epoll` 监听的描述符的数量。然后分配一个缓冲区,把就绪的事件传给 `epoll` 以供修改。在主循环里对 `epoll_wait` 的调用是魅力所在。它阻塞着,直到某个描述符就绪了(或者超时),返回就绪的描述符数量。但这时,不少盲目地迭代所有监视的集合,我们知道 `epoll_write` 会修改传给它的 `events` 缓冲区,缓冲区中有就绪的事件,从 0 到 `nready-1`,因此我们只需迭代必要的次数。 +通过调用 `epoll_ctl` 来配置 `epoll`。这时,配置监听的套接字数量,也就是 `epoll` 监听的描述符的数量。然后分配一个缓冲区,把就绪的事件传给 `epoll` 以供修改。在主循环里对 `epoll_wait` 的调用是魅力所在。它阻塞着,直到某个描述符就绪了(或者超时),返回就绪的描述符数量。但这时,不要盲目地迭代所有监视的集合,我们知道 `epoll_write` 会修改传给它的 `events` 缓冲区,缓冲区中有就绪的事件,从 0 到 `nready-1`,因此我们只需迭代必要的次数。 要在 `select` 里面重新遍历,有明显的差异:如果在监视着 1000 个描述符,只有两个就绪, `epoll_waits` 返回的是 `nready=2`,然后修改 `events` 缓冲区最前面的两个元素,因此我们只需要“遍历”两个描述符。用 `select` 我们就需要遍历 1000 个描述符,找出哪个是就绪的。因此,在繁忙的服务器上,有许多活跃的套接字时 `epoll` 比 `select` 更加容易扩展。 -剩下的代码很直观,因为我们已经很熟悉 `select 服务器` 了。实际上,`epoll 服务器` 中的所有“业务逻辑”和 `select 服务器` 是一样的,回调构成相同的代码。 +剩下的代码很直观,因为我们已经很熟悉 “select 服务器” 了。实际上,“epoll 服务器” 中的所有“业务逻辑”和 “select 服务器” 是一样的,回调构成相同的代码。 -这种相似是通过将事件循环抽象分离到一个库/框架中。我将会详述这些内容,因为很多优秀的程序员曾经也是这样做的。相反,下一篇文章里我们会了解 `libuv`,一个最近出现的更加受欢迎的时间循环抽象层。像 `libuv` 这样的库让我们能够写出并发的异步服务器,并且不用考虑系统调用下繁琐的细节。 +这种相似是通过将事件循环抽象分离到一个库/框架中。我将会详述这些内容,因为很多优秀的程序员曾经也是这样做的。相反,下一篇文章里我们会了解 libuv,一个最近出现的更加受欢迎的时间循环抽象层。像 libuv 这样的库让我们能够写出并发的异步服务器,并且不用考虑系统调用下繁琐的细节。 * * * - -[注1][1] 
我试着在两件事的实际差别中突显自己,一件是做一些网络浏览和阅读,但经常做得头疼。有很多不同的选项,从“他们是一样的东西”到“一个是另一个的子集”,再到“他们是完全不同的东西”。在面临这样主观的观点时,最好是完全放弃这个问题,专注特殊的例子和用例。 - -[注2][2] POSIX 表示这可以是 `EAGAIN`,也可以是 `EWOULDBLOCK`,可移植应用应该对这两个都进行检查。 - -[注3][3] 和这个系列所有的 C 示例类似,代码中用到了某些助手工具来设置监听套接字。这些工具的完整代码在这个 [仓库][4] 的 `utils` 模块里。 - -[注4][5] `select` 不是网络/套接字专用的函数,它可以监视任意的文件描述符,有可能是硬盘文件,管道,终端,套接字或者 Unix 系统中用到的任何文件描述符。这篇文章里,我们主要关注它在套接字方面的应用。 - -[注5][6] 有多种方式用多线程来实现事件驱动,我会把它放在稍后的文章中进行讨论。 - -[注6][7] 由于各种非实验因素,它 _仍然_ 可以阻塞,即使是在 `select` 说它就绪了之后。因此服务器上打开的所有套接字都被设置成非阻塞模式,如果对 `recv` 或 `send` 的调用返回了 `EAGAIN` 或者 `EWOULDBLOCK`,回调函数就装作没有事件发生。阅读示例代码的注释可以了解更多细节。 - -[注7][8] 注意这比该文章前面所讲的异步 polling 例子要稍好一点。polling 需要 _一直_ 发生,而 `select` 实际上会阻塞到有一个或多个套接字准备好读取/写入;`select` 会比一直询问浪费少得多的 CPU 时间。 +- 注1:我试着在做网络浏览和阅读这两件事的实际差别中突显自己,但经常做得头疼。有很多不同的选项,从“它们是一样的东西”到“一个是另一个的子集”,再到“它们是完全不同的东西”。在面临这样主观的观点时,最好是完全放弃这个问题,专注特殊的例子和用例。 +- 注2:POSIX 表示这可以是 `EAGAIN`,也可以是 `EWOULDBLOCK`,可移植应用应该对这两个都进行检查。 +- 注3:和这个系列所有的 C 示例类似,代码中用到了某些助手工具来设置监听套接字。这些工具的完整代码在这个 [仓库][4] 的 `utils` 模块里。 +- 注4:`select` 不是网络/套接字专用的函数,它可以监视任意的文件描述符,有可能是硬盘文件、管道、终端、套接字或者 Unix 系统中用到的任何文件描述符。这篇文章里,我们主要关注它在套接字方面的应用。 +- 注5:有多种方式用多线程来实现事件驱动,我会把它放在稍后的文章中进行讨论。 +- 注6:由于各种非实验因素,它 _仍然_ 可以阻塞,即使是在 `select` 说它就绪了之后。因此服务器上打开的所有套接字都被设置成非阻塞模式,如果对 `recv` 或 `send` 的调用返回了 `EAGAIN` 或者 `EWOULDBLOCK`,回调函数就装作没有事件发生。阅读示例代码的注释可以了解更多细节。 +- 注7:注意这比该文章前面所讲的异步轮询的例子要稍好一点。轮询需要 _一直_ 发生,而 `select` 实际上会阻塞到有一个或多个套接字准备好读取/写入;`select` 会比一直询问浪费少得多的 CPU 时间。 -------------------------------------------------------------------------------- @@ -572,7 +555,7 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/ 作者:[Eli Bendersky][a] 译者:[GitFuture](https://github.com/GitFuture) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -587,9 +570,9 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/ [8]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id9 [9]:https://eli.thegreenplace.net/tag/concurrency [10]:https://eli.thegreenplace.net/tag/c-c -[11]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/ -[12]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/ -[13]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/ +[11]:https://linux.cn/article-8993-1.html +[12]:https://linux.cn/article-8993-1.html +[13]:https://linux.cn/article-9002-1.html [14]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/ [15]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id11 [16]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id12 @@ -598,10 +581,10 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/ [19]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id14 [20]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/select-server.c [21]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id15 -[22]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/ +[22]:https://linux.cn/article-9002-1.html [23]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id16 [24]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/epoll-server.c [25]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/ -[26]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/ 
-[27]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/ +[26]:https://linux.cn/article-8993-1.html +[27]:https://linux.cn/article-9002-1.html [28]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id10 From 2b113411066140b6e613b7755741010bfd7ff31f Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 6 Dec 2017 16:23:03 +0800 Subject: [PATCH 065/236] PUB:20171006 Concurrent Servers Part 3 - Event-driven.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @GitFuture 定时发布于周五 https://linux.cn/article-9117-1.html --- .../20171006 Concurrent Servers Part 3 - Event-driven.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171006 Concurrent Servers Part 3 - Event-driven.md (100%) diff --git a/translated/tech/20171006 Concurrent Servers Part 3 - Event-driven.md b/published/20171006 Concurrent Servers Part 3 - Event-driven.md similarity index 100% rename from translated/tech/20171006 Concurrent Servers Part 3 - Event-driven.md rename to published/20171006 Concurrent Servers Part 3 - Event-driven.md From 11b7923d41f1afeec677b6fe4d24ea1245107777 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 6 Dec 2017 17:20:20 +0800 Subject: [PATCH 066/236] =?UTF-8?q?=E8=A1=A5=E5=85=85=E4=B8=8D=E5=AE=8C?= =?UTF-8?q?=E6=95=B4=E7=9A=84=E5=86=85=E5=AE=B9?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...with a specific text using Linux shell .md | 180 ++++-------------- 1 file changed, 34 insertions(+), 146 deletions(-) diff --git a/sources/tech/20171130 How to find all files with a specific text using Linux shell .md b/sources/tech/20171130 How to find all files with a specific text using Linux shell .md index f5909c27c9..d518dd48db 100644 --- a/sources/tech/20171130 How to find all files with a specific text using Linux shell .md +++ b/sources/tech/20171130 How to find all files with a specific text using Linux shell .md @@ -1,56 +1,38 @@ translating by lujun9972 How to find all files with a specific text using Linux shell ------ -### Objective +### 目标 The following article provides some useful tips on how to find all files within any specific directory or entire file-system containing any specific word or string. -### Difficulty +### 难度 EASY -### Conventions +### 约定 * # - requires given command to be executed with root privileges either directly as a root user or by use of sudo command * $ - given command to be executed as a regular non-privileged user -### Examples +### 案例 -### Find all files with a specific string non-recursively +#### Find all files with a specific string non-recursively -The first command example will search for a string +The first command example will search for a string `stretch` in all files within `/etc/` directory while excluding any sub-directories: -`stretch` - -in all files within - -`/etc/` - -directory while excluding any sub-directories: - -``` +```shell # grep -s stretch /etc/* /etc/os-release:PRETTY_NAME="Debian GNU/Linux 9 (stretch)" /etc/os-release:VERSION="9 (stretch)" ``` -`-s` +The `-s` grep option will suppress error messages about nonexistent or unreadable files. The output shows filenames as well as prints the actual line containing requested string. -grep option will suppress error messages about nonexistent or unreadable files. The output shows filenames as well as prints the actual line containing requested string. 
+#### Find all files with a specific string recursively -### Find all files with a specific string recursively +The above command omitted all sub-directories. To search recursively means to also traverse all sub-directories. The following command will search for a string `stretch` in all files within `/etc/` directory including all sub-directories: -The above command omitted all sub-directories. To search recursively means to also traverse all sub-directories. The following command will search for a string - -`stretch` - -in all files within - -`/etc/` - -directory including all sub-directories: - -``` +```shell # grep -R stretch /etc/* /etc/apt/sources.list:# deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main /etc/apt/sources.list:#deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main @@ -84,29 +66,10 @@ directory including all sub-directories: /etc/os-release:VERSION="9 (stretch)" ``` -The above +#### Search for all files containing a specific word +The above `grep` command example lists all files containing string `stretch` . Meaning the lines with `stretches` , `stretched` etc. are also shown. Use grep's `-w` option to show only a specific word: -`grep` - -command example lists all files containing string - -`stretch` - -. Meaning the lines with - -`stretches` - -, - -`stretched` - -etc. are also shown. Use grep's - -`-w` - -option to show only a specific word: - -``` +```shell # grep -Rw stretch /etc/* /etc/apt/sources.list:# deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main /etc/apt/sources.list:#deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main @@ -121,17 +84,10 @@ option to show only a specific word: /etc/os-release:VERSION="9 (stretch)" ``` -The above commands may produce an unnecessary output. The next example will only show all file names containing string +#### List only files names containing a specific text +The above commands may produce an unnecessary output. The next example will only show all file names containing string `stretch` within `/etc/` directory recursively: -`stretch` - -within - -`/etc/` - -directory recursively: - -``` +```shell # grep -Rl stretch /etc/* /etc/apt/sources.list /etc/dictionaries-common/words @@ -139,29 +95,10 @@ directory recursively: /etc/os-release ``` -All searches are by default case sensitive which means that any search for a string +#### Perform case-insensitive search +All searches are by default case sensitive which means that any search for a string `stretch` will only show files containing the exact uppercase and lowercase match. By using grep's `-i` option the command will also list any lines containing `Stretch` , `STRETCH` , `StReTcH` etc., hence, to perform case-insensitive search. -`stretch` - -will only show files containing the exact uppercase and lowercase match. By using grep's - -`-i` - -option the command will also list any lines containing - -`Stretch` - -, - -`STRETCH` - -, - -`StReTcH` - -etc., hence, to perform case-insensitive search. - -``` +```shell # grep -Ril stretch /etc/* /etc/apt/sources.list /etc/dictionaries-common/default.hash @@ -170,39 +107,19 @@ etc., hence, to perform case-insensitive search. 
/etc/os-release ``` -Using +#### Include or Exclude specific files names from search +Using `grep` command it is also possible to include only specific files as part of the search. For example we only would like to search for a specific text/string within configuration files with extension `.conf` . The next example will find all files with extension `.conf` within `/etc` directory containing string `bash` : -`grep` - -command it is also possible to include only specific files as part of the search. For example we only would like to search for a specific text/string within configuration files with extension - -`.conf` - -. The next example will find all files with extension - -`.conf` - -within - -`/etc` - -directory containing string - -`bash` - -: - -``` +```shell # grep -Ril bash /etc/*.conf OR # grep -Ril --include=\*.conf bash /etc/* /etc/adduser.conf ``` -`--exclude` -option we can exclude any specific filenames: +Similarly, using `--exclude` option we can exclude any specific filenames: -``` +```shell # grep -Ril --exclude=\*.conf bash /etc/* /etc/alternatives/view /etc/alternatives/vim @@ -227,57 +144,28 @@ option we can exclude any specific filenames: /etc/skel/.bash_logout ``` -Same as with files grep can also exclude specific directories from the search. Use +#### Exclude specific Directories from search +Same as with files grep can also exclude specific directories from the search. Use `--exclude-dir` option to exclude directory from search. The following search example will find all files containing string `stretch` within `/etc` directory and exclude `/etc/grub.d` from search: -`--exclude-dir` - -option to exclude directory from search. The following search example will find all files containing string - -`stretch` - -within - -`/etc` - -directory and exclude - -`/etc/grub.d` - -from search: - -``` +```shell # grep --exclude-dir=/etc/grub.d -Rwl stretch /etc/* /etc/apt/sources.list /etc/dictionaries-common/words /etc/os-release ``` -By using +#### Display a line number containing searched string +By using `-n` option grep will also provide an information regarding a line number where the specific string was found: -`-n` - -option grep will also provide an information regarding a line number where the specific string was found: - -``` +```shell # grep -Rni bash /etc/*.conf /etc/adduser.conf:6:DSHELL=/bin/bash ``` -The last example will use +#### Find all files not containing a specific string +The last example will use `-v` option to list all files NOT containing a specific keyword. For example the following search will list all files within `/etc/` directory which do not contain string `stretch` : -`-v` - -option to list all files NOT containing a specific keyword. 
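
A note worth adding here: `-v` inverts matching line by line, so `grep -lv` reports any file that contains at least one non-matching line, which is not quite the same thing as a file with no match at all. For the latter, grep's capital `-L` (`--files-without-match`) option is the safer choice; a minimal sketch:

```shell
# list files under /etc that contain no occurrence of 'stretch' at all
grep -RL stretch /etc/*
```
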
For example the following search will list all files within - -`/etc/` - -directory which do not contain string - -`stretch` - -: - -``` +```shell # grep -Rlv stretch /etc/* ``` From 30ff78e85b4449e1ffd9763af8daf97e052164de Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 6 Dec 2017 19:37:10 +0800 Subject: [PATCH 067/236] translated --- ...with a specific text using Linux shell .md | 55 ++++++++++--------- 1 file changed, 29 insertions(+), 26 deletions(-) diff --git a/sources/tech/20171130 How to find all files with a specific text using Linux shell .md b/sources/tech/20171130 How to find all files with a specific text using Linux shell .md index d518dd48db..81938b79fc 100644 --- a/sources/tech/20171130 How to find all files with a specific text using Linux shell .md +++ b/sources/tech/20171130 How to find all files with a specific text using Linux shell .md @@ -1,36 +1,36 @@ -translating by lujun9972 -How to find all files with a specific text using Linux shell +如何在Linux shell中找出所有包含指定文本的文件 ------ ### 目标 -The following article provides some useful tips on how to find all files within any specific directory or entire file-system containing any specific word or string. +本文提供一些关于如何搜索出指定目录或整个文件系统中那些包含指定单词或字符串的文件. ### 难度 -EASY +容易 ### 约定 -* # - requires given command to be executed with root privileges either directly as a root user or by use of sudo command +* \# - 需要使用 root 权限来执行指定命令,可以直接使用 root 用户来执行也可以使用 sudo 命令 -* $ - given command to be executed as a regular non-privileged user +* \$ - 可以使用普通用户来执行指定命令 ### 案例 -#### Find all files with a specific string non-recursively +#### 非递归搜索包含指定字符串的文件 -The first command example will search for a string `stretch` in all files within `/etc/` directory while excluding any sub-directories: +第一个例子让我们来搜索 `/etc/` 目录下所有包含 `stretch` 字符串的文件,但不去搜索其中的子目录: ```shell # grep -s stretch /etc/* /etc/os-release:PRETTY_NAME="Debian GNU/Linux 9 (stretch)" /etc/os-release:VERSION="9 (stretch)" ``` -The `-s` grep option will suppress error messages about nonexistent or unreadable files. The output shows filenames as well as prints the actual line containing requested string. +grep 的 `-s` 选项会在发现不能存在或者不能读取的文件时抑制报错信息. 结果现实除了文件名外还有包含请求字符串的行也被一起输出了. -#### Find all files with a specific string recursively +#### 递归地搜索包含指定字符串的文件 -The above command omitted all sub-directories. To search recursively means to also traverse all sub-directories. The following command will search for a string `stretch` in all files within `/etc/` directory including all sub-directories: +上面案例中忽略了所有的子目录. 所谓递归搜索就是指同时搜索所有的子目录. +下面的命令会在 `/etc/` 及其子目录中搜索包含 `stretch` 字符串的文件: ```shell # grep -R stretch /etc/* @@ -66,8 +66,8 @@ The above command omitted all sub-directories. To search recursively means to al /etc/os-release:VERSION="9 (stretch)" ``` -#### Search for all files containing a specific word -The above `grep` command example lists all files containing string `stretch` . Meaning the lines with `stretches` , `stretched` etc. are also shown. Use grep's `-w` option to show only a specific word: +#### 搜索所有包含特定单词的文件 +上面 `grep` 命令的案例中列出的是所有包含字符串 `stretch` 的文件. 也就是说包含 `stretches` , `stretched` 等内容的行也会被显示. 使用 grep 的 `-w` 选项会只显示包含特定单词的行: ```shell # grep -Rw stretch /etc/* @@ -84,8 +84,8 @@ The above `grep` command example lists all files containing string `stretch` . M /etc/os-release:VERSION="9 (stretch)" ``` -#### List only files names containing a specific text -The above commands may produce an unnecessary output. 
The next example will only show all file names containing string `stretch` within `/etc/` directory recursively: +#### 显示包含特定文本文件的文件名 +上面的命令都会产生多余的输出. 下一个案例则会递归地搜索 `etc` 目录中包含 `stretch` 的文件并只输出文件名: ```shell # grep -Rl stretch /etc/* @@ -95,8 +95,9 @@ The above commands may produce an unnecessary output. The next example will only /etc/os-release ``` -#### Perform case-insensitive search -All searches are by default case sensitive which means that any search for a string `stretch` will only show files containing the exact uppercase and lowercase match. By using grep's `-i` option the command will also list any lines containing `Stretch` , `STRETCH` , `StReTcH` etc., hence, to perform case-insensitive search. +#### 大小写不敏感的搜索 +默认情况下搜索hi大小写敏感的,也就是说当搜索字符串 `stretch` 时只会包含大小写一致内容的文件. +通过使用 grep 的 `-i` 选项,grep 命令还会列出所有包含 `Stretch` , `STRETCH` , `StReTcH` 等内容的文件,也就是说进行的是大小写不敏感的搜索. ```shell # grep -Ril stretch /etc/* @@ -107,8 +108,8 @@ All searches are by default case sensitive which means that any search for a str /etc/os-release ``` -#### Include or Exclude specific files names from search -Using `grep` command it is also possible to include only specific files as part of the search. For example we only would like to search for a specific text/string within configuration files with extension `.conf` . The next example will find all files with extension `.conf` within `/etc` directory containing string `bash` : +#### 搜索是包含/排除指定文件 +`grep` 命令也可以只在指定文件中进行搜索. 比如,我们可以只在配置文件(扩展名为`.conf`)中搜索指定的文本/字符串. 下面这个例子就会在 `/etc` 目录中搜索带字符串 `bash` 且所有扩展名为 `.conf` 的文件: ```shell # grep -Ril bash /etc/*.conf @@ -117,7 +118,7 @@ OR /etc/adduser.conf ``` -Similarly, using `--exclude` option we can exclude any specific filenames: +类似的, 也可以使用 `--exclude` 来排除特定的文件: ```shell # grep -Ril --exclude=\*.conf bash /etc/* @@ -144,8 +145,9 @@ Similarly, using `--exclude` option we can exclude any specific filenames: /etc/skel/.bash_logout ``` -#### Exclude specific Directories from search -Same as with files grep can also exclude specific directories from the search. Use `--exclude-dir` option to exclude directory from search. The following search example will find all files containing string `stretch` within `/etc` directory and exclude `/etc/grub.d` from search: +#### 搜索时排除指定目录 +跟文件一样,grep 也能在搜索时排除指定目录. 使用 `--exclude-dir` 选项就行. +下面这个例子会搜索 `/etc` 目录中搜有包含字符串 `stretch` 的文件,但不包括 `/etc/grub.d` 目录下的文件: ```shell # grep --exclude-dir=/etc/grub.d -Rwl stretch /etc/* @@ -154,16 +156,17 @@ Same as with files grep can also exclude specific directories from the search. U /etc/os-release ``` -#### Display a line number containing searched string -By using `-n` option grep will also provide an information regarding a line number where the specific string was found: +#### 显示包含搜索字符串的行号 +`-n` 选项还会显示指定字符串所在行的行号: ```shell # grep -Rni bash /etc/*.conf /etc/adduser.conf:6:DSHELL=/bin/bash ``` -#### Find all files not containing a specific string -The last example will use `-v` option to list all files NOT containing a specific keyword. For example the following search will list all files within `/etc/` directory which do not contain string `stretch` : +#### 寻找不包含指定字符串的文件 +最后这个例子使用 `-v` 来列出所有 *不* 包含指定字符串的文件. 
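
(LCTT 译注:严格来说,`-lv` 组合列出的是“至少含有一行不匹配内容”的文件,和“完全不包含该字符串的文件”并不完全等价。若要达到后一种效果,用大写的 `-L`(`--files-without-match`)选项更稳妥,示意如下。)

```shell
# 列出 /etc 下完全不包含 stretch 的文件
grep -RL stretch /etc/*
```
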
+例如下面命令会搜索 `/etc` 目录中不包含 `stretch` 的所有文件: ```shell # grep -Rlv stretch /etc/* From 05bf1048b2b032bac519880665bbd25d3e0f54a6 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 6 Dec 2017 19:39:15 +0800 Subject: [PATCH 068/236] reformat --- ...with a specific text using Linux shell .md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/sources/tech/20171130 How to find all files with a specific text using Linux shell .md b/sources/tech/20171130 How to find all files with a specific text using Linux shell .md index 81938b79fc..41b02fc989 100644 --- a/sources/tech/20171130 How to find all files with a specific text using Linux shell .md +++ b/sources/tech/20171130 How to find all files with a specific text using Linux shell .md @@ -1,8 +1,8 @@ -如何在Linux shell中找出所有包含指定文本的文件 +如何在 Linux shell 中找出所有包含指定文本的文件 ------ ### 目标 -本文提供一些关于如何搜索出指定目录或整个文件系统中那些包含指定单词或字符串的文件. +本文提供一些关于如何搜索出指定目录或整个文件系统中那些包含指定单词或字符串的文件。 ### 难度 @@ -25,11 +25,11 @@ /etc/os-release:PRETTY_NAME="Debian GNU/Linux 9 (stretch)" /etc/os-release:VERSION="9 (stretch)" ``` -grep 的 `-s` 选项会在发现不能存在或者不能读取的文件时抑制报错信息. 结果现实除了文件名外还有包含请求字符串的行也被一起输出了. +grep 的 `-s` 选项会在发现不能存在或者不能读取的文件时抑制报错信息。结果现实除了文件名外还有包含请求字符串的行也被一起输出了。 #### 递归地搜索包含指定字符串的文件 -上面案例中忽略了所有的子目录. 所谓递归搜索就是指同时搜索所有的子目录. +上面案例中忽略了所有的子目录。所谓递归搜索就是指同时搜索所有的子目录。 下面的命令会在 `/etc/` 及其子目录中搜索包含 `stretch` 字符串的文件: ```shell @@ -67,7 +67,7 @@ grep 的 `-s` 选项会在发现不能存在或者不能读取的文件时抑制 ``` #### 搜索所有包含特定单词的文件 -上面 `grep` 命令的案例中列出的是所有包含字符串 `stretch` 的文件. 也就是说包含 `stretches` , `stretched` 等内容的行也会被显示. 使用 grep 的 `-w` 选项会只显示包含特定单词的行: +上面 `grep` 命令的案例中列出的是所有包含字符串 `stretch` 的文件。也就是说包含 `stretches` , `stretched` 等内容的行也会被显示。 使用 grep 的 `-w` 选项会只显示包含特定单词的行: ```shell # grep -Rw stretch /etc/* @@ -85,7 +85,7 @@ grep 的 `-s` 选项会在发现不能存在或者不能读取的文件时抑制 ``` #### 显示包含特定文本文件的文件名 -上面的命令都会产生多余的输出. 下一个案例则会递归地搜索 `etc` 目录中包含 `stretch` 的文件并只输出文件名: +上面的命令都会产生多余的输出。下一个案例则会递归地搜索 `etc` 目录中包含 `stretch` 的文件并只输出文件名: ```shell # grep -Rl stretch /etc/* @@ -96,8 +96,8 @@ grep 的 `-s` 选项会在发现不能存在或者不能读取的文件时抑制 ``` #### 大小写不敏感的搜索 -默认情况下搜索hi大小写敏感的,也就是说当搜索字符串 `stretch` 时只会包含大小写一致内容的文件. -通过使用 grep 的 `-i` 选项,grep 命令还会列出所有包含 `Stretch` , `STRETCH` , `StReTcH` 等内容的文件,也就是说进行的是大小写不敏感的搜索. +默认情况下搜索 hi 大小写敏感的,也就是说当搜索字符串 `stretch` 时只会包含大小写一致内容的文件。 +通过使用 grep 的 `-i` 选项,grep 命令还会列出所有包含 `Stretch` , `STRETCH` , `StReTcH` 等内容的文件,也就是说进行的是大小写不敏感的搜索。 ```shell # grep -Ril stretch /etc/* @@ -109,7 +109,7 @@ grep 的 `-s` 选项会在发现不能存在或者不能读取的文件时抑制 ``` #### 搜索是包含/排除指定文件 -`grep` 命令也可以只在指定文件中进行搜索. 比如,我们可以只在配置文件(扩展名为`.conf`)中搜索指定的文本/字符串. 下面这个例子就会在 `/etc` 目录中搜索带字符串 `bash` 且所有扩展名为 `.conf` 的文件: +`grep` 命令也可以只在指定文件中进行搜索。比如,我们可以只在配置文件(扩展名为`.conf`)中搜索指定的文本/字符串。 下面这个例子就会在 `/etc` 目录中搜索带字符串 `bash` 且所有扩展名为 `.conf` 的文件: ```shell # grep -Ril bash /etc/*.conf @@ -118,7 +118,7 @@ OR /etc/adduser.conf ``` -类似的, 也可以使用 `--exclude` 来排除特定的文件: +类似的,也可以使用 `--exclude` 来排除特定的文件: ```shell # grep -Ril --exclude=\*.conf bash /etc/* @@ -146,7 +146,7 @@ OR ``` #### 搜索时排除指定目录 -跟文件一样,grep 也能在搜索时排除指定目录. 使用 `--exclude-dir` 选项就行. +跟文件一样,grep 也能在搜索时排除指定目录。 使用 `--exclude-dir` 选项就行。 下面这个例子会搜索 `/etc` 目录中搜有包含字符串 `stretch` 的文件,但不包括 `/etc/grub.d` 目录下的文件: ```shell @@ -165,7 +165,7 @@ OR ``` #### 寻找不包含指定字符串的文件 -最后这个例子使用 `-v` 来列出所有 *不* 包含指定字符串的文件. 
+最后这个例子使用 `-v` 来列出所有 *不* 包含指定字符串的文件。 例如下面命令会搜索 `/etc` 目录中不包含 `stretch` 的所有文件: ```shell @@ -178,7 +178,7 @@ via: https://linuxconfig.org/how-to-find-all-files-with-a-specific-text-using-li 作者:[Lubos Rendek][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[校对者 ID](https://github.com/校对者 ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 15da7e58d9374ee09679e0445d62817dca010b88 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 6 Dec 2017 19:54:17 +0800 Subject: [PATCH 069/236] =?UTF-8?q?update=20at=202017=E5=B9=B4=2012?= =?UTF-8?q?=E6=9C=88=2006=E6=97=A5=20=E6=98=9F=E6=9C=9F=E4=B8=89=2019:54:1?= =?UTF-8?q?7=20CST?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...w to find all files with a specific text using Linux shell .md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20171130 How to find all files with a specific text using Linux shell .md (100%) diff --git a/sources/tech/20171130 How to find all files with a specific text using Linux shell .md b/translated/tech/20171130 How to find all files with a specific text using Linux shell .md similarity index 100% rename from sources/tech/20171130 How to find all files with a specific text using Linux shell .md rename to translated/tech/20171130 How to find all files with a specific text using Linux shell .md From 297bddfde76917b37ed614afb498cf4ac0cf06fb Mon Sep 17 00:00:00 2001 From: imquanquan Date: Wed, 6 Dec 2017 20:55:41 +0800 Subject: [PATCH 070/236] translating --- sources/tech/20171201 Fedora Classroom Session_Ansible 101.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md b/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md index a74b196663..9e41edb393 100644 --- a/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md +++ b/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md @@ -1,3 +1,5 @@ +translating---geekpi + ### [Fedora Classroom Session: Ansible 101][2] ### By Sachin S Kamath From b44d24ca6bd4a55839e4030c4a61333d0bb9f751 Mon Sep 17 00:00:00 2001 From: imquanquan Date: Wed, 6 Dec 2017 20:57:08 +0800 Subject: [PATCH 071/236] translating --- sources/tech/20171201 Fedora Classroom Session_Ansible 101.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md b/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md index 9e41edb393..628cd79497 100644 --- a/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md +++ b/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md @@ -1,4 +1,4 @@ -translating---geekpi +translating---imquanquan ### [Fedora Classroom Session: Ansible 101][2] From 3d25b5016c700f367ccd7da6a468d3396e033d1d Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 6 Dec 2017 21:38:27 +0800 Subject: [PATCH 072/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Using=20sudo=20to?= =?UTF-8?q?=20delegate=20permissions=20in=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...g sudo to delegate permissions in Linux.md | 229 ++++++++++++++++++ 1 file changed, 229 insertions(+) create mode 100644 sources/tech/20171205 Using sudo to delegate permissions in Linux.md diff --git a/sources/tech/20171205 Using sudo to delegate permissions in Linux.md b/sources/tech/20171205 Using sudo to delegate permissions in Linux.md new file mode 
100644 index 0000000000..46a807c7a4 --- /dev/null +++ b/sources/tech/20171205 Using sudo to delegate permissions in Linux.md @@ -0,0 +1,229 @@ +translating by lujun9972 +Using sudo to delegate permissions in Linux +====== +I recently wrote a short Bash program to copy MP3 files from a USB thumb drive on one network host to another network host. The files are copied to a specific directory on the server that I run for a volunteer organization, from where the files can be downloaded and played. + +My program does a few other things, such as changing the name of the files before they are copied so they are automatically sorted by date on the webpage. It also deletes all the files on the USB drive after verifying that the transfer completed correctly. This nice little program has a few options, such as -h to display help, -t for test mode, and a couple of others. + +My program, as wonderful as it is, must run as root to perform its primary functions. Unfortunately, this organization has only a few people who have any interest in administering our audio and computer systems, which puts me in the position of finding semi-technical people and training them to log into the computer used to perform the transfer and run this little program. + +It is not that I cannot run the program myself, but for various reasons, including travel and illness, I am not always there. Even when I am present, as the "lazy sysadmin," I like to have others do my work for me. So, I write scripts to automate those tasks and use sudo to anoint a couple of users to run the scripts. Many Linux commands require the user to be root in order to run. This protects the system against accidental damage, such as that caused by my own stupidity, and intentional damage by a user with malicious intent. + +### Do that sudo that you do so well + +The sudo program is a handy tool that allows me as a sysadmin with root access to delegate responsibility for all or a few administrative tasks to other users of the computer. It allows me to perform that delegation without compromising the root password, thus maintaining a high level of security on the host. + +Let's assume, for example, that I have given regular user, "ruser," access to my Bash program, "myprog," which must be run as root to perform parts of its functions. First, the user logs in as ruser with their own password, then uses the following command to run myprog. + +``` + sudo myprog +``` + +I find it helpful to have the log of each command run by sudo for training. I can see who did what and whether they entered the command correctly. + +I have done this to delegate authority to myself and one other user to run a single program; however, sudo can be used to do so much more. It can allow the sysadmin to delegate authority for managing network functions or specific services to a single person or to a group of trusted users. It allows these functions to be delegated while protecting the security of the root password. + +### Configuring the sudoers file + +As a sysadmin, I can use the /etc/sudoers file to allow users or groups of users access to a single command, defined groups of commands, or all commands. This flexibility is key to both the power and the simplicity of using sudo for delegation. + +I found the sudoers file very confusing at first, so below I have copied and deconstructed the entire sudoers file from the host on which I am using it. Hopefully it won't be quite so obscure for you by the time you get through this analysis. 
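
One thing worth sketching before the deconstruction: the log trail mentioned earlier can be inspected in a couple of ways. This is only a hedged example; the exact log location depends on the distribution (journald on systemd hosts, /var/log/secure on Red Hat-based systems, /var/log/auth.log on Debian-based ones):

```
# On a systemd host, show recent sudo activity by syslog identifier
journalctl -t sudo | tail -n 20
# On a Red Hat-based host using traditional syslog files
grep sudo /var/log/secure | tail -n 20
```
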
Incidentally, I've found that the default configuration files in Red Hat-based distributions tend to have lots of comments and examples to provide guidance, which makes things easier, with less online searching required. + +Do not use your standard editor to modify the sudoers file. Use the visudo command because it is designed to enable any changes as soon as the file is saved and you exit the editor. It is possible to use editors besides Vi in the same way as visudo. + +Let's start analyzing this file at the beginning with a couple types of aliases. + +### Host aliases + +The host aliases section is used to create groups of hosts on which commands or command aliases can be used to provide access. The basic idea is that this single file will be maintained for all hosts in an organization and copied to /etc of each host. Some hosts, such as servers, can thus be configured as a group to give some users access to specific commands, such as the ability to start and stop services like HTTPD, DNS, and networking; to mount filesystems; and so on. + +IP addresses can be used instead of host names in the host aliases. + +``` +## Sudoers allows particular users to run various commands as +## the root user, without needing the root password. +## +## Examples are provided at the bottom of the file for collections +## of related commands, which can then be delegated out to particular +## users or groups. +## +## This file must be edited with the 'visudo' command. + +## Host Aliases +## Groups of machines. You may prefer to use hostnames (perhaps using +## wildcards for entire domains) or IP addresses instead. +# Host_Alias FILESERVERS = fs1, fs2 +# Host_Alias MAILSERVERS = smtp, smtp2 + +## User Aliases +## These aren't often necessary, as you can use regular groups +## (ie, from files, LDAP, NIS, etc) in this file - just use %groupname +## rather than USERALIAS +# User_Alias ADMINS = jsmith, mikem +User_Alias AUDIO = dboth, ruser + +## Command Aliases +## These are groups of related commands... + +## Networking +# Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, + /bin/ping, /sbin/dhclient, /usr/bin/net, /sbin/iptables, +/usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool + +## Installation and management of software +# Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum + +## Services +# Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig + +## Updating the locate database +# Cmnd_Alias LOCATE = /usr/bin/updatedb + +## Storage +# Cmnd_Alias STORAGE = /sbin/fdisk, /sbin/sfdisk, /sbin/parted, /sbin/partprobe, /bin/mount, /bin/umount + +## Delegating permissions +# Cmnd_Alias DELEGATING = /usr/sbin/visudo, /bin/chown, /bin/chmod, /bin/chgrp + +## Processes +# Cmnd_Alias PROCESSES = /bin/nice, /bin/kill, /usr/bin/kill, /usr/bin/killall + +## Drivers +# Cmnd_Alias DRIVERS = /sbin/modprobe + +# Defaults specification + +# +# Refuse to run if unable to disable echo on the tty. 
+# +Defaults !visiblepw + +Defaults env_reset +Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS" +Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE" +Defaults env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES" +Defaults env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE" +Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY" + +Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin + +## Next comes the main part: which users can run what software on +## which machines (the sudoers file can be shared between multiple +## systems). +## Syntax: +## +## user MACHINE=COMMANDS +## +## The COMMANDS section may have other options added to it. +## +## Allow root to run any commands anywhere +root ALL=(ALL) ALL + +## Allows members of the 'sys' group to run networking, software, +## service management apps and more. +# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS + +## Allows people in group wheel to run all commands +%wheel ALL=(ALL) ALL + +## Same thing without a password +# %wheel ALL=(ALL) NOPASSWD: ALL + +## Allows members of the users group to mount and unmount the +## cdrom as root +# %users ALL=/sbin/mount /mnt/cdrom, /sbin/umount /mnt/cdrom + +## Allows members of the users group to shutdown this system +# %users localhost=/sbin/shutdown -h now + +## Read drop-in files from /etc/sudoers.d (the # here does not mean a comment) +#includedir /etc/sudoers.d + +################################################################################ +# Added by David Both, 11/04/2017 to provide limited access to myprog # +################################################################################ +# +AUDIO guest1=/usr/local/bin/myprog +``` + +### User aliases + +The user alias configuration allows root to sort users into aliased groups so that an entire group can have access to certain root capabilities. This is the section to which I have added the line User_Alias AUDIO = dboth, ruser, which defines the alias AUDIO and assigns two users to that alias. + +It is possible, as stated in the sudoers file, to simply use groups defined in the /etc/groups file instead of aliases. If you already have a group defined there that meets your needs, such as "audio," use that group name preceded by a % sign like so: %audio when assigning commands that will be made available to groups later in the sudoers file. + +### Command aliases + +Further down in the sudoers file is a command aliases section. These aliases are lists of related commands, such as networking commands or commands required to install updates or new RPM packages. These aliases allow the sysadmin to easily permit access to groups of commands. + +A number of aliases are already set up in this section that make it easy to delegate access to specific types of commands. + +### Environment defaults + +The next section sets some default environment variables. The item that is most interesting in this section is the !visiblepw line, which prevents sudo from running if the user environment is set to show the password. This is a security precaution that should not be overridden. + +### Command section + +The command section is the main part of the sudoers file. Everything you need to do can be done without all the aliases by adding enough entries here. The aliases just make it a whole lot easier. + +This section uses the aliases you've already defined to tell sudo who can do what on which hosts. 
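
As a quick illustration of how the pieces above fit together (this rule is my own sketch, not part of the stock file): if the SERVICES command alias shown earlier were uncommented, a single line would let every member of the audio system group manage services on any host:

```
## Sketch: combine a system group with a command alias.
## Requires the SERVICES Cmnd_Alias above to be uncommented.
%audio ALL = SERVICES
```
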
The examples are self-explanatory once you understand the syntax in this section. Let's look at the syntax that we find in the command section. + +``` +ruser ALL=(ALL) ALL +``` + +This is a generic entry for our user, ruser. The first ALL in the line indicates that this rule applies on all hosts. The second ALL allows ruser to run commands as any other user. By default, commands are run as root user, but ruser can specify on the sudo command line that a program be run as any other user. The last ALL means that ruser can run all commands without restriction. This would effectively make ruser root. + +Note that there is an entry for root, as shown below. This allows root to have all-encompassing access to all commands on all hosts. + +``` +root ALL=(ALL) ALL +``` + +To try this out, I commented out the line and, as root, tried to run chown without sudo. That did work—much to my surprise. Then I used sudo chown and that failed with the message, "Root is not in the sudoers file. This incident will be reported." This means that root can run everything as root, but nothing when using the sudo command. This would prevent root from running commands as other users via the sudo command, but root has plenty of ways around that restriction. + +The code below is the one I added to control access to myprog. It specifies that users who are listed in the AUDIO group, as defined near the top of the sudoers file, have access to only one program, myprog, on one host, guest1. + +``` +AUDIO guest1=/usr/local/bin/myprog +``` + +Note that the syntax of the line above specifies only the host on which this access is to be allowed and the program. It does not specify that the user may run the program as any other user. + +### Bypassing passwords + +You can also use NOPASSWORD to allow the users specified in the group AUDIO to run myprog without the need for entering their passwords. Here's how: + +``` +AUDIO guest1=NOPASSWORD : /usr/local/bin/myprog +``` + +I did not do this for my program, because I believe that users with sudo access must stop and think about what they are doing, and this may help a bit with that. I used the entry for my little program as an example. + +### wheel + +The wheel specification in the command section of the sudoers file, as shown below, allows all users in the "wheel" group to run all commands on any host. The wheel group is defined in the /etc/group file, and users must be added to the group there for this to work. The % sign preceding the group name means that sudo should look for that group in the /etc/group file. + +``` +%wheel ALL = (ALL) ALL +``` + +This is a good way to delegate full root access to multiple users without providing the root password. Just adding a user to the wheel group gives them access to full root powers. It also provides a means to monitor their activities via the log entries created by sudo. Some distributions, such as Ubuntu, add users' IDs to the wheel group in /etc/group, which allows them to use the sudo command for all privileged commands. + +### Final thoughts + +I have used sudo here for a very limited objective—providing one or two users with access to a single command. I accomplished this with two lines (if you ignore my own comments). Delegating authority to perform certain tasks to users who do not have root access is simple and can save you, as a sysadmin, a good deal of time. It also generates log entries that can help detect problems. + +The sudoers file offers a plethora of capabilities and options for configuration. 
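
For reference, the entire delegation built up in this article reduces to two active lines in /etc/sudoers, gathered here from the listing above:

```
User_Alias AUDIO = dboth, ruser
AUDIO guest1=/usr/local/bin/myprog
```
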
Check the man files for sudo and sudoers for the down-and-dirty details. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/12/using-sudo-delegate + +作者:[David Both][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/dboth From 66615466074354727ff6601267a308c0e582ce1f Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Wed, 6 Dec 2017 21:40:43 +0800 Subject: [PATCH 073/236] update to latest --- ...ng Languages and Code Quality in GitHub.md | 38 +++++++++---------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md b/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md index 22986eaa19..2845ca549a 100644 --- a/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md +++ b/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md @@ -35,7 +35,7 @@ Our language and project data was extracted from the  _GitHub Archive_ , a data **Identifying top languages.** We aggregate projects based on their primary language. Then we select the languages with the most projects for further analysis, as shown in [Table 1][48]. A given project can use many languages; assigning a single language to it is difficult. Github Archive stores information gathered from GitHub Linguist which measures the language distribution of a project repository using the source file extensions. The language with the maximum number of source files is assigned as the  _primary language_  of the project. - [![t1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t1.jpg)][49] + [![t1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t1.jpg)][49] **Table 1\. Top 3 projects in each language.** **Retrieving popular projects.** For each selected language, we filter the project repositories written primarily in that language by its popularity based on the associated number of  _stars._ This number indicates how many people have actively expressed interest in the project, and is a reasonable proxy for its popularity. Thus, the top 3 projects in C are  _linux, git_ , and  _php-src_ ; and for C++ they are  _node-webkit, phantomjs_ , and  _mongo_ ; and for `Java` they are  _storm, elasticsearch_ , and  _ActionBarSherlock._  In total, we select the top 50 projects in each language. @@ -46,7 +46,7 @@ To ensure that these projects have a sufficient development history, we drop the [Table 2][51] summarizes our data set. Since a project may use multiple languages, the second column of the table shows the total number of projects that use a certain language at some capacity. We further exclude some languages from a project that have fewer than 20 commits in that language, where 20 is the first quartile value of the total number of commits per project per language. For example, we find 220 projects that use more than 20 commits in C. This ensures sufficient activity for each language–project pair. - [![t2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t2.jpg)][52] + [![t2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t2.jpg)][52] **Table 2\. Study subjects.** In summary, we study 728 projects developed in 17 languages with 18 years of history. 
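
As an aside for readers who want to replay the project-selection step: the study pulled its data from GitHub Archive, but a rough modern approximation of the star-based filtering can be sketched against the public GitHub search API (an assumption-laden example, not the authors' pipeline):

```
# Sketch: fetch the 50 most-starred C repositories via the GitHub search API
curl -s "https://api.github.com/search/repositories?q=language:c&sort=stars&order=desc&per_page=50" \
  | grep '"full_name"'
```
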
This includes 29,000 different developers, 1.57 million commits, and 564,625 bug fix commits. @@ -56,14 +56,14 @@ In summary, we study 728 projects developed in 17 languages with 18 years of his We define language classes based on several properties of the language thought to influence language quality,[7][9], [8][10], [12][11] as shown in [Table 3][53]. The  _Programming Paradigm_  indicates whether the project is written in an imperative procedural, imperative scripting, or functional language. In the rest of the paper, we use the terms procedural and scripting to indicate imperative procedural and imperative scripting respectively. - [![t3.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t3.jpg)][54] + [![t3.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t3.jpg)][54] **Table 3\. Different types of language classes.** _Type Checking_  indicates static or dynamic typing. In statically typed languages, type checking occurs at compile time, and variable names are bound to a value and to a type. In addition, expressions (including variables) are classified by types that correspond to the values they might take on at run-time. In dynamically typed languages, type checking occurs at run-time. Hence, in the latter, it is possible to bind a variable name to objects of different types in the same program. _Implicit Type Conversion_  allows access of an operand of type T1 as a different type T2, without an explicit conversion. Such implicit conversion may introduce type-confusion in some cases, especially when it presents an operand of specific type T1, as an instance of a different type T2\. Since not all implicit type conversions are immediately a problem, we operationalize our definition by showing examples of the implicit type confusion that can happen in all the languages we identified as allowing it. For example, in languages like `Perl, JavaScript`, and `CoffeeScript` adding a string to a number is permissible (e.g., "5" + 2 yields "52"). The same operation yields 7 in `Php`. Such an operation is not permitted in languages such as `Java` and `Python` as they do not allow implicit conversion. In C and C++ coercion of data types can result in unintended results, for example, `int x; float y; y=3.5; x=y`; is legal C code, and results in different values for x and y, which, depending on intent, may be a problem downstream.[a][12] In `Objective-C` the data type  _id_  is a generic object pointer, which can be used with an object of any data type, regardless of the class.[b][13] The flexibility that such a generic data type provides can lead to implicit type conversion and also have unintended consequences.[c][14]Hence, we classify a language based on whether its compiler  _allows_  or  _disallows_  the implicit type conversion as above; the latter explicitly detects type confusion and reports it. -Disallowing implicit type conversion could result from static type inference within a compiler (e.g., with `Java`), using a type-inference algorithm such as Hindley[10][15] and Milner,[17][16] or at run-time using a dynamic type checker. In contrast, a type-confusion can occur silently because it is either undetected or is unreported. Either way, implicitly allowing type conversion provides flexibility but may eventually cause errors that are difficult to localize. 
To abbreviate, we refer to languages allowing implicit type conversion as  _implicit_  and those that disallow it as  _explicit._ +Disallowing implicit type conversion could result from static type inference within a compiler (e.g., with `Java`), using a type-inference algorithm such as Hindley[10][15] and Milner,[17][16] or at run-time using a dynamic type checker. In contrast, a type-confusion can occur silently because it is either undetected or is unreported. Either way, implicitly allowing type conversion provides flexibility but may eventually cause errors that are difficult to localize. To abbreviate, we refer to languages allowing implicit type conversion as  _implicit_  and those that disallow it as  _explicit._ _Memory Class_  indicates whether the language requires developers to manage memory. We treat `Objective-C` as unmanaged, in spite of it following a hybrid model, because we observe many memory errors in its codebase, as discussed in RQ4 in Section 3. @@ -76,7 +76,7 @@ We classify the studied projects into different domains based on their features We detect 30 distinct domains, that is, topics, and estimate the probability that each project belonging to each domain. Since these auto-detected domains include several project-specific keywords, for example, facebook, it is difficult to identify the underlying common functions. In order to assign a meaningful name to each domain, we manually inspect each of the 30 domains to identify projectname-independent, domain-identifying keywords. We manually rename all of the 30 auto-detected domains and find that the majority of the projects fall under six domains: Application, Database, CodeAnalyzer, Middleware, Library, and Framework. We also find that some projects do not fall under any of the above domains and so we assign them to a catchall domain labeled as  _Other_ . This classification of projects into domains was subsequently checked and confirmed by another member of our research group. [Table 4][57] summarizes the identified domains resulting from this process. - [![t4.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t4.jpg)][58] + [![t4.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t4.jpg)][58] **Table 4\. Characteristics of domains.** ![*](http://dl.acm.org/images/bullet.gif) @@ -86,7 +86,7 @@ While fixing software bugs, developers often leave important information in the First, we categorize the bugs based on their  _Cause_  and  _Impact. Causes_  are further classified into disjoint subcategories of errors: Algorithmic, Concurrency, Memory, generic Programming, and Unknown. The bug  _Impact_  is also classified into four disjoint subcategories: Security, Performance, Failure, and Other unknown categories. Thus, each bug-fix commit also has an induced Cause and an Impact type. [Table 5][59] shows the description of each bug category. This classification is performed in two phases: - [![t5.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t5.jpg)][60] + [![t5.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t5.jpg)][60] **Table 5\. Categories of bugs and their distribution in the whole dataset.** **(1) Keyword search.** We randomly choose 10% of the bug-fix messages and use a keyword based search technique to automatically categorize them as potential bug types. We use this annotation, separately, for both Cause and Impact types. We chose a restrictive set of keywords and phrases, as shown in [Table 5][61]. 
Such a restrictive set of keywords and phrases helps reduce false positives.

@@ -118,7 +118,7 @@ We begin with a straightforward question that directly addresses the core of wha

We use a regression model to compare the impact of each language on the number of defects with the average impact of all languages, against defect fixing commits (see [Table 6][64]).

- [![t6.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t6.jpg)][65] 
+ [![t6.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t6.jpg)][65] 
**Table 6\. Some languages induce fewer defects than other languages.**

We include some variables as controls for factors that will clearly influence the response. Project age is included as older projects will generally have a greater number of defect fixes. Trivially, the number of commits to a project will also impact the response. Additionally, the number of developers who touch a project and the raw size of the project are both expected to grow with project activity.

@@ -127,11 +127,11 @@ The sign and magnitude of the estimated coefficients in the above model relates

One should take care not to overestimate the impact of language on defects. While the observed relationships are statistically significant, the effects are quite small. Analysis of deviance reveals that language accounts for less than 1% of the total explained deviance.

- [![ut1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut1.jpg)][66] 
+ [![ut1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut1.jpg)][66] 

We can read the model coefficients as the expected change in the log of the response for a one unit change in the predictor with all other predictors held constant; that is, for a coefficient  _βi_ , a one unit change in the corresponding predictor yields an expected change in the response of e^ _βi_ . For the factor variables, this expected change is compared to the average across all languages. Thus, if, for some number of commits, a particular project developed in an  _average_  language had four defective commits, then the choice to use C++ would mean that we should expect one additional defective commit since e^0.18 × 4 = 4.79\. For the same project, choosing `Haskell` would mean that we should expect about one fewer defective commit as e^−0.26 × 4 = 3.08\. The accuracy of this prediction depends on all other factors remaining the same, a challenging proposition for all but the most trivial of projects. All observational studies face similar limitations; we address this concern in more detail in Section 5.

-**Result 1:**  _Some languages have a greater association with defects than other languages, although the effect is small._ 
+**Result 1:**  _Some languages have a greater association with defects than other languages, although the effect is small._ 

In the remainder of this paper we expand on this basic result by considering how different categories of application, defect, and language, lead to further insight into the relationship between languages and defect proneness.

@@ -149,26 +149,26 @@ Rather than considering languages individually, we aggregate them by language cl

As with language (earlier in [Table 6][67]), we are comparing language  _classes_  with the average behavior across all language classes. The model is presented in [Table 7][68]. It is clear that `Script-Dynamic-Explicit-Managed` class has the smallest magnitude coefficient. The coefficient is insignificant, that is, the z-test for the coefficient cannot distinguish the coefficient from zero.
Given the magnitude of the standard error, however, we can assume that the behavior of languages in this class is very close to the average across all languages. We confirm this by recoding the coefficient using `Proc-Static-Implicit-Unmanaged` as the base level and employing treatment, or dummy coding that compares each language class with the base level. In this case, `Script-Dynamic-Explicit-Managed` is significantly different with  _p_  = 0.00044\. We note here that while choosing different coding methods affects the coefficients and z-scores, the models are identical in all other respects. When we change the coding we are rescaling the coefficients to reflect the comparison that we wish to make.[4][28] Comparing the other language classes to the grand mean, `Proc-Static-Implicit-Unmanaged` languages are more likely to induce defects. This implies that either implicit type conversion or memory management issues contribute to greater defect proneness as compared with other procedural languages. - [![t7.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t7.jpg)][69] + [![t7.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t7.jpg)][69] **Table 7\. Functional languages have a smaller relationship to defects than other language classes whereas procedural languages are greater than or similar to the average.** Among scripting languages we observe a similar relationship between languages that allow versus those that do not allow implicit type conversion, providing some evidence that implicit type conversion (vs. explicit) is responsible for this difference as opposed to memory management. We cannot state this conclusively given the correlation between factors. However when compared to the average, as a group, languages that do not allow implicit type conversion are less error-prone while those that do are more error-prone. The contrast between static and dynamic typing is also visible in functional languages. The functional languages as a group show a strong difference from the average. Statically typed languages have a substantially smaller coefficient yet both functional language classes have the same standard error. This is strong evidence that functional static languages are less error-prone than functional dynamic languages, however, the z-tests only test whether the coefficients are different from zero. In order to strengthen this assertion, we recode the model as above using treatment coding and observe that the `Functional-Static-Explicit-Managed` language class is significantly less defect-prone than the `Functional-Dynamic-Explicit-Managed`language class with  _p_  = 0.034. - [![ut2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut2.jpg)][70] + [![ut2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut2.jpg)][70] As with language and defects, the relationship between language class and defects is based on a small effect. The deviance explained is similar, albeit smaller, with language class explaining much less than 1% of the deviance. We now revisit the question of application domain. Does domain have an interaction with language class? Does the choice of, for example, a functional language, have an advantage for a particular domain? As above, a Chi-square test for the relationship between these factors and the project domain yields a value of 99.05 and  _df_  = 30 with  _p_  = 2.622e–09 allowing us to reject the null hypothesis that the factors are independent. Cramer's-V yields a value of 0.133, a weak level of association. 
Consequently, although there is some relation between domain and language, there is only a weak relationship between domain and language class. -**Result 2:**  _There is a small but significant relationship between language class and defects. Functional languages are associated with fewer defects than either procedural or scripting languages._ +**Result 2:**  _There is a small but significant relationship between language class and defects. Functional languages are associated with fewer defects than either procedural or scripting languages._ It is somewhat unsatisfying that we do not observe a strong association between language, or language class, and domain within a project. An alternative way to view this same data is to disregard projects and aggregate defects over all languages and domains. Since this does not yield independent samples, we do not attempt to analyze it statistically, rather we take a descriptive, visualization-based approach. We define  _Defect Proneness_  as the ratio of bug fix commits over total commits per language per domain. [Figure 1][71] illustrates the interaction between domain and language using a heat map, where the defect proneness increases from lighter to darker zone. We investigate which language factors influence defect fixing commits across a collection of projects written across a variety of languages. This leads to the following research question: - [![f1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/f1.jpg)][72] + [![f1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/f1.jpg)][72] **Figure 1\. Interaction of language's defect proneness with domain. Each cell in the heat map represents defect proneness of a language (row header) for a given domain (column header). The "Overall" column represents defect proneness of a language over all the domains. The cells with white cross mark indicate null value, that is, no commits were made corresponding to that cell.** **RQ3\. Does language defect proneness depend on domain?** @@ -177,9 +177,9 @@ In order to answer this question we first filtered out projects that would have We see only a subdued variation in this heat map which is a result of the inherent defect proneness of the languages as seen in RQ1\. To validate this, we measure the pairwise rank correlation between the language defect proneness for each domain with the overall. For all of the domains except Database, the correlation is positive, and p-values are significant (<0.01). Thus, w.r.t. defect proneness, the language ordering in each domain is strongly correlated with the overall language ordering. - [![ut3.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut3.jpg)][74] + [![ut3.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut3.jpg)][74] -**Result 3:**  _There is no general relationship between application domain and language defect proneness._ +**Result 3:**  _There is no general relationship between application domain and language defect proneness._ We have shown that different languages induce a larger number of defects and that this relationship is not only related to particular languages but holds for general classes of languages; however, we find that the type of project does not mediate this relationship to a large degree. We now turn our attention to categorization of the response. We want to understand how language relates to specific kinds of defects and how this relationship compares to the more general relationship that we observe. 
We divide the defects into categories as described in [Table 5][75] and ask the following question: @@ -187,12 +187,12 @@ We have shown that different languages induce a larger number of defects and tha We use an approach similar to RQ3 to understand the relation between languages and bug categories. First, we study the relation between bug categories and language class. A heat map ([Figure 2][76]) shows aggregated defects over language classes and bug types. To understand the interaction between bug categories and languages, we use an NBR regression model for each category. For each model we use the same control factors as RQ1 as well as languages encoded with weighted effects to predict defect fixing commits. - [![f2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/f2.jpg)][77] + [![f2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/f2.jpg)][77] **Figure 2\. Relation between bug categories and language class. Each cell represents percentage of bug fix commit out of all bug fix commits per language class (row header) per bug category (column header). The values are normalized column wise.** The results along with the anova value for language are shown in [Table 8][78]. The overall deviance for each model is substantially smaller and the proportion explained by language for a specific defect type is similar in magnitude for most of the categories. We interpret this relationship to mean that language has a greater impact on specific categories of bugs, than it does on bugs overall. In the next section we expand on these results for the bug categories with significant bug counts as reported in [Table 5][79]. However, our conclusion generalizes for all categories. - [![t8.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t8.jpg)][80] + [![t8.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t8.jpg)][80] **Table 8\. While the impact of language on defects varies across defect category, language has a greater impact on specific categories than it does on defects in general.** **Programming errors.** Generic programming errors account for around 88.53% of all bug fix commits and occur in all the language classes. Consequently, the regression analysis draws a similar conclusion as of RQ1 (see [Table 6][81]). All languages incur programming errors such as faulty error-handling, faulty definitions, typos, etc. @@ -201,7 +201,7 @@ The results along with the anova value for language are shown in [Table 8][78]. **Concurrency errors.** 1.99% of the total bug fix commits are related to concurrency errors. The heat map shows that `Proc-Static-Implicit-Unmanaged` dominates this error type. C and C++ introduce 19.15% and 7.89% of the errors, and they are distributed across the projects. - [![ut4.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut4.jpg)][84] + [![ut4.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut4.jpg)][84] Both of the `Static-Strong-Managed` language classes are in the darker zone in the heat map confirming, in general static languages produce more concurrency errors than others. Among the dynamic languages, only `Erlang` is more prone to concurrency errors, perhaps relating to the greater use of this language for concurrent applications. Likewise, the negative coefficients in [Table 8][85] shows that projects written in dynamic languages like `Ruby` and `Php` have fewer concurrency errors. 
Note that, certain languages like `JavaScript, CoffeeScript`, and `TypeScript` do not support concurrency, in its traditional form, while `Php` has a limited support depending on its implementations. These languages introduce artificial zeros in the data, and thus the concurrency model coefficients in [Table 8][86] for those languages cannot be interpreted like the other coefficients. Due to these artificial zeros, the average over all languages in this model is smaller, which may affect the sizes of the coefficients, since they are given w.r.t. the average, but it will not affect their relative relationships, which is what we are after. @@ -209,7 +209,7 @@ A textual analysis based on word-frequency of the bug fix messages suggests that **Security and other impact errors.** Around 7.33% of all the bug fix commits are related to Impact errors. Among them `Erlang, C++`, and `Python` associate with more security errors than average ([Table 8][87]). `Clojure` projects associate with fewer security errors ([Figure 2][88]). From the heat map we also see that `Static` languages are in general more prone to failure and performance errors, these are followed by `Functional-Dynamic-Explicit-Managed` languages such as `Erlang`. The analysis of deviance results confirm that language is strongly associated with failure impacts. While security errors are the weakest among the categories, the deviance explained by language is still quite strong when compared with the residual deviance. -**Result 4:**  _Defect types are strongly associated with languages; some defect type like memory errors and concurrency errors also depend on language primitives. Language matters more for specific categories than it does for defects overall._ +**Result 4:**  _Defect types are strongly associated with languages; some defect type like memory errors and concurrency errors also depend on language primitives. Language matters more for specific categories than it does for defects overall._ [Back to Top][89] From 5b7e80a1b469c3e1fad329e4aac0847a4f0a4938 Mon Sep 17 00:00:00 2001 From: imquanquan Date: Wed, 6 Dec 2017 22:55:39 +0800 Subject: [PATCH 074/236] translated --- ...01 Fedora Classroom Session_Ansible 101.md | 73 ------------------- ...01 Fedora Classroom Session_Ansible 101.md | 71 ++++++++++++++++++ 2 files changed, 71 insertions(+), 73 deletions(-) delete mode 100644 sources/tech/20171201 Fedora Classroom Session_Ansible 101.md create mode 100644 translated/tech/20171201 Fedora Classroom Session_Ansible 101.md diff --git a/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md b/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md deleted file mode 100644 index 628cd79497..0000000000 --- a/sources/tech/20171201 Fedora Classroom Session_Ansible 101.md +++ /dev/null @@ -1,73 +0,0 @@ -translating---imquanquan - -### [Fedora Classroom Session: Ansible 101][2] - -### By Sachin S Kamath - -![](https://fedoramagazine.org/wp-content/uploads/2017/07/fedora-classroom-945x400.jpg) - -Fedora Classroom sessions continue this week with an Ansible session. The general schedule for sessions appears [on the wiki][3]. You can also find [resources and recordings from previous sessions][4] there. Here are details about this week’s session on [Thursday, 30th November at 1600 UTC][5]. That link allows you to convert the time to your timezone. - -### Topic: Ansible 101 - -As the Ansible [documentation][6] explains, Ansible is an IT automation tool. 
It’s primarily used to configure systems, deploy software, and orchestrate more advanced IT tasks. Examples include continuous deployments or zero downtime rolling updates. - -This Classroom session covers the topics listed below: - -1. Introduction to SSH - -2. Understanding different terminologies - -3. Introduction to Ansible - -4. Ansible installation and setup - -5. Establishing password-less connection - -6. Ad-hoc commands - -7. Managing inventory - -8. Playbooks examples - -There will also be a follow-up Ansible 102 session later. That session will cover complex playbooks, roles, dynamic inventory files, control flow and Galaxy. - -### Instructors - -We have two experienced instructors handling this session. - -[Geoffrey Marr][7], also known by his IRC name as “coremodule,” is a Red Hat employee and Fedora contributor with a background in Linux and cloud technologies. While working, he spends his time lurking in the [Fedora QA][8] wiki and test pages. Away from work, he enjoys RaspberryPi projects, especially those focusing on software-defined radio. - -[Vipul Siddharth][9] is an intern at Red Hat who also works on Fedora. He loves to contribute to open source and seeks opportunities to spread the word of free and open source software. - -### Joining the session - -This session takes place on [BlueJeans][10]. The following information will help you join the session: - -* URL: [https://bluejeans.com/3466040121][1] - -* Meeting ID (for Desktop App): 3466040121 - -We hope you attend, learn from, and enjoy this session! If you have any feedback about the sessions, have ideas for a new one or want to host a session, please feel free to comment on this post or edit the [Classroom wiki page][11]. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/fedora-classroom-session-ansible-101/ - -作者:[Sachin S Kamath] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://bluejeans.com/3466040121 -[2]:https://fedoramagazine.org/fedora-classroom-session-ansible-101/ -[3]:https://fedoraproject.org/wiki/Classroom -[4]:https://fedoraproject.org/wiki/Classroom#Previous_Sessions -[5]:https://www.timeanddate.com/worldclock/fixedtime.html?msg=Fedora+Classroom+-+Ansible+101&iso=20171130T16&p1=%3A -[6]:http://docs.ansible.com/ansible/latest/index.html -[7]:https://fedoraproject.org/wiki/User:Coremodule -[8]:https://fedoraproject.org/wiki/QA -[9]:https://fedoraproject.org/wiki/User:Siddharthvipul1 -[10]:https://www.bluejeans.com/downloads -[11]:https://fedoraproject.org/wiki/Classroom diff --git a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md new file mode 100644 index 0000000000..094bc6e044 --- /dev/null +++ b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md @@ -0,0 +1,71 @@ +### [Fedora 课堂会议: Ansible 101][2] + +### By Sachin S Kamath + +![](https://fedoramagazine.org/wp-content/uploads/2017/07/fedora-classroom-945x400.jpg) + +Fedora 课堂会议本周继续进行本周的主题是 Ansible。 会议的时间安排表发布在 [on the wiki][3]。你还可以从那里找到[之前会议的资源和录像][4]。以下是本周[11月30日星期四 1600 UTC][5]。该链接可以将这个时间转换为您的时区上的时间。 + +### 主题: Ansible 101 + +正如 Ansible [文档][6] 所说, Ansible 是一个 IT 自动化工具。它主要用于配置系统,部署软件和编排更高级的 IT 任务。示例包括持续交付与零停机滚动升级。 + +本课堂课程涵盖以下主题: + +1. SSH 简介 + +2. 了解不同的术语 + +3. Ansible 简介 + +4. Ansible 安装和设置 + +5. 建立无密码连接 + +6. Ad-hoc 命令 + +7. 管理 inventory + +8. 
Playbooks 示例 + +稍后还将有 Ansible 102 的后续会议。该会议将涵盖复杂的 playbooks,playbooks 角色(roles),动态 inventory 文件,流程控制和 Ansible Galaxy 命令行工具. + +### 讲师 + +我们有两位经验丰富的讲师进行这次会议。 + +[Geoffrey Marr][7],IRC 聊天室中名字叫 coremodule,是 Red Hat 的一名员工和 Fedora 的贡献者,拥有 Linux 和云技术的背景。工作时, 他潜心于 [Fedora QA][8] wiki 和测试页面下。业余时间, 他热衷于 RaspberryPi 项目, 尤其是专注于那些软件无线电(Software-defined radio)项目。 + +[Vipul Siddharth][9] 是Red Hat的实习生,他也在Fedora上工作。他喜欢贡献开源,借此机会传播自由开源软件。 + +### 加入会议 + +本次会议将在 [BlueJeans][10] 上进行。下面的信息可以帮你加入到会议: + +* 网址: [https://bluejeans.com/3466040121][1] + +* 会议 ID (桌面版): 3466040121 + +我们希望您可以参加,学习,并享受这个会议!如果您对会议有任何反馈意见,有什么新的想法或者想要主持一个会议, 可以随时在这篇文章发表评论或者查看[课堂 wiki 页面][11]. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/fedora-classroom-session-ansible-101/ + +作者:[Sachin S Kamath] +译者:[imquanquan](https://github.com/imquanquan) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://bluejeans.com/3466040121 +[2]:https://fedoramagazine.org/fedora-classroom-session-ansible-101/ +[3]:https://fedoraproject.org/wiki/Classroom +[4]:https://fedoraproject.org/wiki/Classroom#Previous_Sessions +[5]:https://www.timeanddate.com/worldclock/fixedtime.html?msg=Fedora+Classroom+-+Ansible+101&iso=20171130T16&p1=%3A +[6]:http://docs.ansible.com/ansible/latest/index.html +[7]:https://fedoraproject.org/wiki/User:Coremodule +[8]:https://fedoraproject.org/wiki/QA +[9]:https://fedoraproject.org/wiki/User:Siddharthvipul1 +[10]:https://www.bluejeans.com/downloads +[11]:https://fedoraproject.org/wiki/Classroom From 437740dee3a014494349b2c94ee0aa405b73621b Mon Sep 17 00:00:00 2001 From: imquanquan Date: Wed, 6 Dec 2017 22:58:25 +0800 Subject: [PATCH 075/236] translated --- .../tech/20171201 Fedora Classroom Session_Ansible 101.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md index 094bc6e044..c57a20afd7 100644 --- a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md +++ b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md @@ -4,7 +4,7 @@ ![](https://fedoramagazine.org/wp-content/uploads/2017/07/fedora-classroom-945x400.jpg) -Fedora 课堂会议本周继续进行本周的主题是 Ansible。 会议的时间安排表发布在 [on the wiki][3]。你还可以从那里找到[之前会议的资源和录像][4]。以下是本周[11月30日星期四 1600 UTC][5]。该链接可以将这个时间转换为您的时区上的时间。 +Fedora 课堂会议本周继续进行本周的主题是 Ansible。 会议的时间安排表发布在 [wiki][3] 上。你还可以从那里找到[之前会议的资源和录像][4]。以下是本周[11月30日星期四 1600 UTC][5]。该链接可以将这个时间转换为您的时区上的时间。 ### 主题: Ansible 101 From 112fd9d835bff132820bfde31355b18f674ed329 Mon Sep 17 00:00:00 2001 From: imquanquan Date: Wed, 6 Dec 2017 23:00:07 +0800 Subject: [PATCH 076/236] fix errors --- .../tech/20171201 Fedora Classroom Session_Ansible 101.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md index c57a20afd7..3941b8b702 100644 --- a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md +++ b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md @@ -4,7 +4,7 @@ ![](https://fedoramagazine.org/wp-content/uploads/2017/07/fedora-classroom-945x400.jpg) -Fedora 课堂会议本周继续进行本周的主题是 Ansible。 会议的时间安排表发布在 [wiki][3] 上。你还可以从那里找到[之前会议的资源和录像][4]。以下是本周[11月30日星期四 1600 UTC][5]。该链接可以将这个时间转换为您的时区上的时间。 +Fedora 课堂会议本周继续进行本周的主题是 Ansible。 会议的时间安排表发布在 [wiki][3] 
上。你还可以从那里找到[之前会议的资源和录像][4]。以下是会议的具体时间 [本周11月30日星期四 1600 UTC][5]。该链接可以将这个时间转换为您的时区上的时间。 ### 主题: Ansible 101 From 4bb45667cf341c8cb5b15c4241422ee1d1bd2753 Mon Sep 17 00:00:00 2001 From: imquanquan Date: Wed, 6 Dec 2017 23:04:10 +0800 Subject: [PATCH 077/236] translated --- .../tech/20171201 Fedora Classroom Session_Ansible 101.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md index 3941b8b702..184671cdff 100644 --- a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md +++ b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md @@ -34,7 +34,7 @@ Fedora 课堂会议本周继续进行本周的主题是 Ansible。 会议的时 我们有两位经验丰富的讲师进行这次会议。 -[Geoffrey Marr][7],IRC 聊天室中名字叫 coremodule,是 Red Hat 的一名员工和 Fedora 的贡献者,拥有 Linux 和云技术的背景。工作时, 他潜心于 [Fedora QA][8] wiki 和测试页面下。业余时间, 他热衷于 RaspberryPi 项目, 尤其是专注于那些软件无线电(Software-defined radio)项目。 +[Geoffrey Marr][7],IRC 聊天室中名字叫 coremodule,是 Red Hat 的一名员工和 Fedora 的贡献者,拥有 Linux 和云技术的背景。工作时,他潜心于 [Fedora QA][8] wiki 和测试页面下。业余时间, 他热衷于 RaspberryPi 项目,尤其是专注于那些软件无线电(Software-defined radio)项目。 [Vipul Siddharth][9] 是Red Hat的实习生,他也在Fedora上工作。他喜欢贡献开源,借此机会传播自由开源软件。 From e6d475a5c54411eb1bdae568d168db54fa451cb9 Mon Sep 17 00:00:00 2001 From: imquanquan Date: Wed, 6 Dec 2017 23:04:59 +0800 Subject: [PATCH 078/236] Update 20171201 Fedora Classroom Session_Ansible 101.md --- .../tech/20171201 Fedora Classroom Session_Ansible 101.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md index 184671cdff..26b3f1c42e 100644 --- a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md +++ b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md @@ -4,11 +4,11 @@ ![](https://fedoramagazine.org/wp-content/uploads/2017/07/fedora-classroom-945x400.jpg) -Fedora 课堂会议本周继续进行本周的主题是 Ansible。 会议的时间安排表发布在 [wiki][3] 上。你还可以从那里找到[之前会议的资源和录像][4]。以下是会议的具体时间 [本周11月30日星期四 1600 UTC][5]。该链接可以将这个时间转换为您的时区上的时间。 +Fedora 课堂会议本周继续进行本周的主题是 Ansible。 会议的时间安排表发布在 [wiki][3] 上。你还可以从那里找到[之前会议的资源和录像][4]。以下是会议的具体时间 [11月30日本周星期四 1600 UTC][5]。该链接可以将这个时间转换为您的时区上的时间。 ### 主题: Ansible 101 -正如 Ansible [文档][6] 所说, Ansible 是一个 IT 自动化工具。它主要用于配置系统,部署软件和编排更高级的 IT 任务。示例包括持续交付与零停机滚动升级。 +正如 Ansible [文档][6] 所说,Ansible 是一个 IT 自动化工具。它主要用于配置系统,部署软件和编排更高级的 IT 任务。示例包括持续交付与零停机滚动升级。 本课堂课程涵盖以下主题: From 3297f7cda45030951ae6af5eb614863ea4e35f2c Mon Sep 17 00:00:00 2001 From: imquanquan Date: Wed, 6 Dec 2017 23:15:52 +0800 Subject: [PATCH 079/236] translated --- .../tech/20171201 Fedora Classroom Session_Ansible 101.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md index 26b3f1c42e..1ebee40a44 100644 --- a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md +++ b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md @@ -28,13 +28,13 @@ Fedora 课堂会议本周继续进行本周的主题是 Ansible。 会议的时 8. Playbooks 示例 -稍后还将有 Ansible 102 的后续会议。该会议将涵盖复杂的 playbooks,playbooks 角色(roles),动态 inventory 文件,流程控制和 Ansible Galaxy 命令行工具. +之后还将有 Ansible 102 的后续会议。该会议将涵盖复杂的 playbooks,playbooks 角色(roles),动态 inventory 文件,流程控制和 Ansible Galaxy 命令行工具. 
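在参加课程之前,如果你想提前感受一下上面列出的 “Ad-hoc 命令” 和 “管理 inventory” 这两个主题,可以参考下面这个简单的示意(仅为示例:假设你已经安装好 Ansible,并且可以通过 SSH 免密登录 web1 和 web2 这两台示例主机):

```
# 一个最简单的 inventory 文件,每行列出一台要管理的主机(主机名仅为示例)
$ cat inventory
web1
web2

# 运行一条 ad-hoc 命令:用 ping 模块检查所有主机是否可达
$ ansible all -i inventory -m ping
```

课程中的 Playbooks 部分会在这些基础之上进一步展开。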
### 讲师 我们有两位经验丰富的讲师进行这次会议。 -[Geoffrey Marr][7],IRC 聊天室中名字叫 coremodule,是 Red Hat 的一名员工和 Fedora 的贡献者,拥有 Linux 和云技术的背景。工作时,他潜心于 [Fedora QA][8] wiki 和测试页面下。业余时间, 他热衷于 RaspberryPi 项目,尤其是专注于那些软件无线电(Software-defined radio)项目。 +[Geoffrey Marr][7],IRC 聊天室中名字叫 coremodule,是 Red Hat 的一名员工和 Fedora 的贡献者,拥有 Linux 和云技术的背景。工作时,他潜心于 [Fedora QA][8] wiki 和测试页面中。业余时间, 他热衷于 RaspberryPi 项目,尤其是专注于那些软件无线电(Software-defined radio)项目。 [Vipul Siddharth][9] 是Red Hat的实习生,他也在Fedora上工作。他喜欢贡献开源,借此机会传播自由开源软件。 From fdb26b4cc65630c85a8c2017584fcf58ab42dbb4 Mon Sep 17 00:00:00 2001 From: imquanquan Date: Wed, 6 Dec 2017 23:24:38 +0800 Subject: [PATCH 080/236] translated --- .../tech/20171201 Fedora Classroom Session_Ansible 101.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md index 1ebee40a44..4a4c5514ba 100644 --- a/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md +++ b/translated/tech/20171201 Fedora Classroom Session_Ansible 101.md @@ -4,7 +4,7 @@ ![](https://fedoramagazine.org/wp-content/uploads/2017/07/fedora-classroom-945x400.jpg) -Fedora 课堂会议本周继续进行本周的主题是 Ansible。 会议的时间安排表发布在 [wiki][3] 上。你还可以从那里找到[之前会议的资源和录像][4]。以下是会议的具体时间 [11月30日本周星期四 1600 UTC][5]。该链接可以将这个时间转换为您的时区上的时间。 +Fedora 课堂会议本周继续进行,本周的主题是 Ansible。 会议的时间安排表发布在 [wiki][3] 上。你还可以从那里找到[之前会议的资源和录像][4]。以下是会议的具体时间 [11月30日本周星期四 1600 UTC][5]。该链接可以将这个时间转换为您的时区上的时间。 ### 主题: Ansible 101 From 8cb7cea13716d0435eab2defbf903febfe3d3bcf Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 6 Dec 2017 23:38:32 +0800 Subject: [PATCH 081/236] PUB:20170910 Cool vim feature sessions.md @geekpi --- .../tech => published}/20170910 Cool vim feature sessions.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20170910 Cool vim feature sessions.md (100%) diff --git a/translated/tech/20170910 Cool vim feature sessions.md b/published/20170910 Cool vim feature sessions.md similarity index 100% rename from translated/tech/20170910 Cool vim feature sessions.md rename to published/20170910 Cool vim feature sessions.md From 3d2b8967e59276ccf7bc7382a44939a8b971ca90 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 6 Dec 2017 23:51:22 +0800 Subject: [PATCH 082/236] PRF&PUB:20171024 How to Encrypt and Decrypt Individual Files With GPG.md @lujun9972 https://linux.cn/article-9118-1.html --- ...t and Decrypt Individual Files With GPG.md | 92 +++++++++---------- 1 file changed, 45 insertions(+), 47 deletions(-) rename {translated/tech => published}/20171024 How to Encrypt and Decrypt Individual Files With GPG.md (60%) diff --git a/translated/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md b/published/20171024 How to Encrypt and Decrypt Individual Files With GPG.md similarity index 60% rename from translated/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md rename to published/20171024 How to Encrypt and Decrypt Individual Files With GPG.md index 6b534be640..dffab516ac 100644 --- a/translated/tech/20171024 How to Encrypt and Decrypt Individual Files With GPG.md +++ b/published/20171024 How to Encrypt and Decrypt Individual Files With GPG.md @@ -1,68 +1,66 @@ 如何使用 GPG 加解密文件 ------- -### 目标 +================= -使用 GPG 加密文件 +目标:使用 GPG 加密文件 -### 发行版 +发行版:适用于任何发行版 -适用于任何发行版 +要求:安装了 GPG 的 Linux 或者拥有 root 权限来安装它。 -### 要求 +难度:简单 -安装了 GPG 的 Linux 或者拥有 root 权限来安装它。 +约定: -### 难度 - -简单 - -### 约定 - -* # - 需要使用 root 权限来执行指定命令,可以直接使用 root 用户来执行也可以使用 sudo 命令 - -* $ - 可以使用普通用户来执行指定命令 +* 
`#` - 需要使用 root 权限来执行指定命令,可以直接使用 root 用户来执行,也可以使用 `sudo` 命令 +* `$` - 可以使用普通用户来执行指定命令 ### 介绍 -加密非常重要。它对于保护敏感信息来说是必不可少的。 -你的私人文件应该要被加密,而 GPG 提供了很好的解决方案。 +加密非常重要。它对于保护敏感信息来说是必不可少的。你的私人文件应该要被加密,而 GPG 提供了很好的解决方案。 ### 安装 GPG -GPG 的使用非常广泛。你在几乎每个发行版的仓库中都能找到它。 -如果你还没有安装它,那现在就来安装一下吧。 +GPG 的使用非常广泛。你在几乎每个发行版的仓库中都能找到它。如果你还没有安装它,那现在就来安装一下吧。 -#### Debian/Ubuntu +**Debian/Ubuntu** -```shell +``` $ sudo apt install gnupg ``` -#### Fedora -```shell + +**Fedora** + +``` # dnf install gnupg2 ``` -#### Arch -```shell + +**Arch** + +``` # pacman -S gnupg ``` -#### Gentoo -```shell + +**Gentoo** + +``` # emerge --ask app-crypt/gnupg ``` -### Create a Key -你需要一个密钥对来加解密文件。如果你为 SSH 已经生成过了密钥对,那么你可以直接使用它。 -如果没有,GPG 包含工具来生成密钥对。 -```shell +### 创建密钥 + +你需要一个密钥对来加解密文件。如果你为 SSH 已经生成过了密钥对,那么你可以直接使用它。如果没有,GPG 包含工具来生成密钥对。 + +``` $ gpg --full-generate-key ``` -GPG 有一个命令行程序帮你一步一步的生成密钥。它还有一个简单得多的工具,但是这个工具不能让你设置密钥类型,密钥的长度以及过期时间,因此不推荐使用这个工具。 + +GPG 有一个命令行程序可以帮你一步一步的生成密钥。它还有一个简单得多的工具,但是这个工具不能让你设置密钥类型,密钥的长度以及过期时间,因此不推荐使用这个工具。 GPG 首先会询问你密钥的类型。没什么特别的话选择默认值就好。 下一步需要设置密钥长度。`4096` 是一个不错的选择。 -之后,可以设置过期的日期。 如果希望密钥永不过期则设置为 `0` +之后,可以设置过期的日期。 如果希望密钥永不过期则设置为 `0`。 然后,输入你的名称。 @@ -72,20 +70,19 @@ GPG 首先会询问你密钥的类型。没什么特别的话选择默认值就 所有这些都完成后,GPG 会让你校验一下这些信息。 -GPG 还会问你是否需要为密钥设置密码。这一步是可选的, 但是会增加保护的程度。 -若需要设置密码,则 GPG 会收集你的操作信息来增加密钥的健壮性。 所有这些都完成后, GPG 会显示密钥相关的信息。 +GPG 还会问你是否需要为密钥设置密码。这一步是可选的, 但是会增加保护的程度。若需要设置密码,则 GPG 会收集你的操作信息来增加密钥的健壮性。 所有这些都完成后, GPG 会显示密钥相关的信息。 ### 加密的基本方法 -现在你拥有了自己的密钥,加密文件非常简单。 使用虾米那命令在 `/tmp` 目录中创建一个空白文本文件。 +现在你拥有了自己的密钥,加密文件非常简单。 使用下面的命令在 `/tmp` 目录中创建一个空白文本文件。 -```shell +``` $ touch /tmp/test.txt ``` 然后用 GPG 来加密它。这里 `-e` 标志告诉 GPG 你想要加密文件, `-r` 标志指定接收者。 -```shell +``` $ gpg -e -r "Your Name" /tmp/test.txt ``` @@ -95,34 +92,35 @@ GPG 需要知道这个文件的接收者和发送者。由于这个文件给是 你收到加密文件后,就需要对它进行解密。 你无需指定解密用的密钥。 这个信息被编码在文件中。 GPG 会尝试用其中的密钥进行解密。 -```shel +``` $ gpg -d /tmp/test.txt.gpg ``` ### 发送文件 + 假设你需要发送文件给别人。你需要有接收者的公钥。 具体怎么获得密钥由你自己决定。 你可以让他们直接把公钥发送给你, 也可以通过密钥服务器来获取。 收到对方公钥后,导入公钥到 GPG 中。 -```shell +``` $ gpg --import yourfriends.key ``` -这些公钥与你自己创建的密钥一样,自带了名称和电子邮件地址的信息。 -记住,为了让别人能解密你的文件,别人也需要你的公钥。 因此导出公钥并将之发送出去。 +这些公钥与你自己创建的密钥一样,自带了名称和电子邮件地址的信息。 记住,为了让别人能解密你的文件,别人也需要你的公钥。 因此导出公钥并将之发送出去。 -```shell +``` gpg --export -a "Your Name" > your.key ``` 现在可以开始加密要发送的文件了。它跟之前的步骤差不多, 只是需要指定你自己为发送人。 + ``` $ gpg -e -u "Your Name" -r "Their Name" /tmp/test.txt ``` ### 结语 -就这样了。GPG 还有一些高级选项, 不过你在 99% 的时间内都不会用到这些高级选项。 GPG 就是这么易于使用。 -你也可以使用创建的密钥对来发送和接受加密邮件,其步骤跟上面演示的差不多, 不过大多数的电子邮件客户端在拥有密钥的情况下会自动帮你做这个动作。 + +就这样了。GPG 还有一些高级选项, 不过你在 99% 的时间内都不会用到这些高级选项。 GPG 就是这么易于使用。你也可以使用创建的密钥对来发送和接受加密邮件,其步骤跟上面演示的差不多, 不过大多数的电子邮件客户端在拥有密钥的情况下会自动帮你做这个动作。 -------------------------------------------------------------------------------- @@ -130,7 +128,7 @@ via: https://linuxconfig.org/how-to-encrypt-and-decrypt-individual-files-with-gp 作者:[Nick Congleton][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者 ID](https://github.com/校对者 ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出 From 9277e93482a6e4634e07dd2ba1d4b84eead43473 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 6 Dec 2017 23:53:42 +0800 Subject: [PATCH 083/236] PUB:20170215 How to take screenshots on Linux using Scrot.md @zpl1025 --- .../20170215 How to take screenshots on Linux using Scrot.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20170215 How to take screenshots on Linux using Scrot.md (100%) diff --git a/translated/tech/20170215 How to take screenshots on Linux using Scrot.md b/published/20170215 How 
to take screenshots on Linux using Scrot.md similarity index 100% rename from translated/tech/20170215 How to take screenshots on Linux using Scrot.md rename to published/20170215 How to take screenshots on Linux using Scrot.md From 39c021864df47484a4e8fab0b63b383341212051 Mon Sep 17 00:00:00 2001 From: Yixun Xu Date: Wed, 6 Dec 2017 16:30:58 -0500 Subject: [PATCH 084/236] Translated - Linux games --- ... Games On Steam You Should Play in 2017.md | 246 +++++++++--------- 1 file changed, 121 insertions(+), 125 deletions(-) diff --git a/sources/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md b/sources/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md index 7a14f92847..f9fadae4ec 100644 --- a/sources/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md +++ b/sources/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md @@ -1,256 +1,252 @@ -yixunx translating - -30 Best Linux Games On Steam You Should Play in 2017 +2017 年最好的 30 款支持 Linux 的 Steam 游戏 ============================================================ -When it comes to Gaming, a system running on Windows platform is what anyone would recommend. It still is a superior choice for gamers with better graphics driver support and perfect hardware compatibility. But, what about the thought of [gaming on a Linux system][9]? Well, yes, of course – it is possible – maybe you thought of it at some point in time but the collection of Linux games on [Steam for Linux][10] platform wasn’t appealing at all few years back. +说到游戏,人们一般都会推荐使用 Windows 系统。Windows 能提供更好的显卡支持和硬件兼容性,所以对于游戏爱好者来说的确是个更好的选择。但你是否想过[在 Linux 系统上玩游戏][9]?这的确是可能的,也许你以前还曾经考虑过。但在几年之前, [Steam for Linux][10] 上可玩的游戏并不是很吸引人。 -However, that’s not true at all for the current scene. The Steam store now has a lot of great games listed for Linux platform (including a lot of major titles). So, in this article, we’ll be taking a look at the best Linux games on Steam. +但现在情况完全不一样了。Steam 商店里现在有许多支持 Linux 平台的游戏(包括很多主流大作)。我们在本文中将介绍 Steam 上最好的一些 Linux 游戏。 -But before we do that, let me tell you a money saving trick. If you are an avid gamer who spends plenty of time and money on gaming, you should subscribe to Humble Monthly. This monthly subscription program from [Humble Bundle][11] gives you $100 in games for just $12 each month. +在进入正题之前,先介绍一个省钱小窍门。如果你是个狂热的游戏爱好者,在游戏上花费很多时间和金钱的话,我建议你订阅 [Humble 每月包(Humble Monthly)][11]。这是个每月收费的订阅服务,每月只用 12 美元就能获得价值 100 美元的游戏。 -Not all games might be available on Linux though but it is still a good deal because you get additional 10% discount on any games or books you buy from [Humble Bundle website][12]. +这个游戏包中可能有些游戏不支持 Linux,但除了 Steam 游戏之外,它还会让 [Humble Bundle 网站][12]上所有的游戏和书籍都打九折,所以这依然是个不错的优惠。 -The best thing here is that every purchase you make supports a charity organization. So, you are not just gaming, you are also making a difference to the world. +更棒的是,你在 Humble Bundle 上所有的消费都会捐出一部分给慈善机构。所以你在享受游戏的同时还在帮助改变世界。 -### Best Linux games on Steam +### Steam 上最好的 Linux 游戏 -The list of best Linux games on steam is in no particular ranking order. +以下排名无先后顺序。 -Additional Note: While there’s a lot of games available on Steam for Linux, there are still a lot of problems you would face as a Linux gamer. You can refer to one of our articles to know about the [annoying experiences every Linux gamer encounters][14]. 
+额外提示:虽然在 Steam 上有很多支持 Linux 的游戏,但你在 Linux 上玩游戏时依然可能会遇到各种问题。你可以阅读我们之前的文章:[每个 Linux 游戏玩家都会遇到的烦人问题][14] -Jump Directly to your preferred genre of Games: +可以点击以下链接跳转到你喜欢的游戏类型: -* [Action Games][3] +* [动作类游戏][3] -* [RPG Games][4] +* [角色扮演类游戏][4] -* [Racing/Sports/Simulation Games][5] +* [赛车/运动/模拟类游戏][5] -* [Adventure Games][6] +* [冒险类游戏][6] -* [Indie Games][7] +* [独立游戏][7] -* [Strategy Games][8] +* [策略类游戏][8] -### Best Action Games for Linux On Steam +### Steam 上最佳 Linux 动作类游戏 -### 1\. Counter-Strike: Global Offensive (Multiplayer) +### 1\. 反恐精英:全球攻势(Counter-Strike: Global Offensive)(多人) -CS GO is definitely one of the best FPS games for Linux on Steam. I don’t think this game needs an introduction but in case you are unaware of it – I must mention that it is one of the most enjoyable FPS multiplayer game you would ever play. You’ll observe CS GO is one of the games contributing a major part to the e-sports scene. To up your rank – you need to play competitive matches. In either case, you can continue playing casual matches. +《CS:GO》毫无疑问是 Steam 上支持 Linux 的最好的 FPS 游戏之一。我觉得这款游戏无需介绍,但如果你没有听说过它,我要告诉你这将会是你玩过的最好玩的多人 FPS 游戏之一。《CS:GO》还是电子竞技中的一个主流项目。想要提升等级的话,你需要在天梯上和其他玩家同台竞技。但你也可以选择更加轻松的休闲模式。 -I could have listed Rainbow Six siege instead of Counter-Strike, but we still don’t have it for Linux/Steam OS. +我本想写《彩虹六号:围攻行动》,但它目前还不支持 Linux 或 Steam OS。 -[CS: GO (Purchase)][15] +[购买《CS: GO》][15] -### 2\. Left 4 Dead 2 (Multiplayer/Singleplayer) +### 2\. 求生之路 2(多人/单机) -One of the most loved first-person zombie shooter multiplayer game. You may get it for as low as 1.3 USD on a Steam sale. It is an interesting game which gives you the chills and thrills you’d expect from a zombie game. The game features swamps, cities, cemetries, and a lot more environments to keep things interesting and horrific. The guns aren’t super techy but definitely provides a realistic experience considering it’s an old game. +这是最受欢迎的僵尸主题多人 FPS 游戏之一。在 Steam 优惠时,价格可以低至 1.3 美元。这是个有趣的游戏,能让你体会到你在僵尸游戏中期待的寒意和紧张感。游戏中的环境包括了沼泽、城市、墓地等等,让游戏既有趣又吓人。游戏中的枪械并不是非常先进,但作为一个老游戏来说,它已经提供了足够真实的体验。 -[Left 4 Dead 2 (Purchase)][16] +[购买《求生之路 2》][16] -### 3\. Borderlands 2 (Singleplayer/Co-op) +### 3\. 无主之地 2(Borderlands 2)(单机/协作) -Borderlands 2 is an interesting take on FPS games for PC. It isn’t anything like you experienced before. The graphics look sketchy and cartoony but that does not let you miss the real action you always look for in a first-person shooter game. You can trust me on that! +《无主之地 2》是个很有意思的 FPS 游戏。它和你以前玩过的游戏完全不同。画风看上去有些诡异和卡通化,但我可以保证,游戏体验可一点也不逊色! -If you are looking for one of the best Linux games with tons of DLC – Borderlands 2 will definitely suffice. +如果你在寻找一个好玩而且有很多 DLC 的 Linux 游戏,《无主之地 2》绝对是个不错的选择。 -[Borderlands 2 (Purchase)][17] +[购买《无主之地 2》][17] -### 4\. Insurgency (Multiplayer) +### 4\. 叛乱(Insurgency)(多人) -Insurgency is yet another impressive FPS game available on Steam for Linux machines. It takes a different approach by eliminating the HUD or the ammo counter. As most of the reviewers mentioned – pure shooting game focusing on the weapon and the tactics of your team. It may not be the best FPS game – but it surely is one of them if you like – Delta Force kinda shooters along with your squad. +《叛乱》是 Steam 上又一款支持 Linux 的优秀的 FPS 游戏。它剑走偏锋,从屏幕上去掉了 HUD 和弹药数量指示。如同许多评论者所说,这是款注重武器和团队战术的纯粹的射击游戏。这也许不是最好的 FPS 游戏,但如果你想玩和《三角洲部队》类似的多人游戏的话,这绝对是最好的游戏之一。 -[Insurgency (Purchase)][18] +[购买《叛乱》][18] -### 5\. Bioshock: Infinite (Singleplayer) +### 5\. 
生化奇兵:无限(Bioshock: Infinite)(单机) -Bioshock Infinite would definitely remain as one of the best singleplayer FPS games ever developed for PC. You get unrealistic powers to kill your enemies. And, so do your enemies have a lot of tricks up in the sleeves. It is a story-rich FPS game which you should not miss playing on your Linux system! +《生化奇兵:无限》毫无疑问将会作为 PC 平台最好的单机 FPS 游戏之一而载入史册。你可以利用很多强大的能力来杀死你的敌人。同时你的敌人也各个身怀绝技。游戏的剧情也非常丰富。你不容错过! -[BioShock: Infinite (Purchase)][19] +[购买《生化奇兵:无限》][19] -### 6\. HITMAN – Game of the Year Edition (Singleplayer) +### 6\. 《杀手(年度版)》(HITMAN - Game of the Year Edition)(单机) -The Hitman series is obviously one of the most loved game series for a PC gamer. The recent iteration of HITMAN series saw an episodic release which wasn’t appreciated much but now with Square Enix gone, the GOTY edition announced with a few more additions is back to the spotlight. Make sure to get creative with your assassinations in the game Agent 47! +《杀手》系列无疑是 PC 游戏爱好者们的最爱之一。本系列的最新作开始按章节发布,让很多玩家觉得不满。但现在 Square Enix 撤出了开发,而最新的年度版带着新的内容重返舞台。在游戏中发挥你的想象力暗杀你的目标吧,杀手47! -[HITMAN (GOTY)][20] +[购买(杀手(年度版))][20] -### 7\. Portal 2 +### 7\. 传送门 2 -Portal 2 is the perfect blend of action and adventure. It is a puzzle game which lets you join co-op sessions and create interesting puzzles. The co-op mode features a completely different campaign when compared to the single player mode. +《传送门 2》完美地结合了动作与冒险。这是款解谜类游戏,你可以与其他玩家协作,并开发有趣的谜题。协作模式提供了和单机模式截然不同的游戏内容。 -[Portal 2 (Purchase)][21] +[购买《传送门2》][21] -### 8\. Deux Ex: Mankind Divided +### 8\. 杀出重围:人类分裂 -If you are on the lookout for a shooter game focused on stealth skills – Deux Ex would be the perfect addition to your Steam library. It is indeed a very beautiful game with some state-of-the-art weapons and crazy fighting mechanics. +如果你在寻找隐蔽类的射击游戏,《杀出重围》是个完美的选择。这是个非常华丽的游戏,有着最先进的武器和超乎寻常的战斗机制。 -[Deus Ex: Mankind Divided (Purchase)][22] +[购买《杀出重围:人类分裂》][22] -### 9\. Metro 2033 Redux / Metro Last Light Redux +### 9\. 地铁 2033 重置版(Metro 2033 Redux) / 地铁:最后曙光 重置版(Metro Last Light Redux) -Both Metro 2033 Redux and the Last Light are the definitive editions of the classic hit Metro 2033 and Last Light. The game has a post-apocalyptic setting. You need to eliminate all the mutants in order to ensure the survival of mankind. You should explore the rest when you get to play it! +《地铁 2033 重置版》和《地铁:最后曙光 重置版》是经典的《地铁 2033》和《地铁:最后曙光》的最终版本。故事发生在世界末日之后。你需要消灭所有的变种人来保证人类的生存。剩下的就交给你自己去探索了! -[Metro 2033 Redux (Purchase)][23] +[购买《地铁 2033 重置版》][23] -[Metro Last Light Redux (Purchase)][24] +[购买《地铁:最后曙光 重置版》][24] -### 10\. Tannenberg (Multiplayer) +### 10\. 坦能堡(Tannenberg)(多人) -Tannenberg is a brand new game – announced a month before this article was published. The game is based on the Eastern Front (1914-1918) as a part of World War I. It is a multiplayer-only game. So, if you want to experience WWI gameplay experience, look no further! +《坦能堡》是个全新的游戏 - 在本文发表一个月前刚刚发售。游戏背景是第一次世界大战的东线战场(1914-1918)。这款游戏只有多人模式。如果你想要在游戏中体验第一次世界大战,不要错过这款游戏! -[Tannenberg (Purchase)][25] +[购买《坦能堡》][25] -### Best RPG Games for Linux on Steam +### Steam 上最佳 Linux 角色扮演类游戏 -### 11\. Shadow of Mordor +### 11\. 中土世界:暗影魔多(Shadow of Mordor) -Shadow of Mordor is one of the most exciting open world RPG game you will find listed on Steam for Linux systems. You have to fight as a ranger (Talion) with the bright master (Celebrimbor) to defeat Sauron’s army (and then approach killing him). The fighting mechanics are very impressive. It is a must try game! 
+《中土世界:暗影魔多》 是 Steam 上支持 Linux 的最好的开放式角色扮演类游戏之一。你将扮演一个游侠(塔里昂),和光明领主(凯勒布理鹏)并肩作战击败索隆的军队(并最终和他直接交手)。战斗机制非常出色。这是款不得不玩的游戏! -[SOM (Purchase)][26] +[购买《中土世界:暗影魔多》][26] -### 12\. Divinity: Original Sin – Enhanced Edition +### 12\. 神界:原罪加强版(Divinity: Original Sin – Enhanced Edition) -Divinity: Original is a kick-ass Indie-RPG game that’s unique in itself and very much enjoyable. It is probably one of the highest rated RPG games with a mixture of Adventure & Strategy. The enhanced edition includes new game modes and a complete revamp of voice-overs, controller support, co-op sessions, and so much more. +《神界:原罪》是一款极其优秀的角色扮演类独立游戏。它非常独特而又引人入胜。这或许是评分最高的带有冒险和策略元素的角色扮演游戏。加强版添加了新的游戏模式,并且完全重做了配音、手柄支持、协作任务等等。 -[Divinity: Original Sin (Purchase)][27] +[购买《神界:原罪加强版》][27] -### 13\. Wasteland 2: Director’s Cut +### 13\. 废土 2:导演剪辑版(Wasteland 2: Director’s Cut) -Wasteland 2 is an amazing CRPG game. If Fallout 4 was to be ported down as a CRPG as well – this is what we would have expected it to be. The director’s cut edition includes a complete visual overhaul with hundred new characters. +《废土 2》是一款出色的 CRPG 游戏。如果《辐射 4》被移植成 CRPG 游戏,大概就是这种感觉。导演剪辑版完全重做了画面,并且增加了一百多名新人物。 -[Wasteland 2 (Purchase)][28] +[购买《废土 2》][28] -### 14\. Darkwood +### 14\. 阴暗森林(Darkwood) -A horror-filled top-down view RPG game. You get to explore the world, scavenging materials, and craft weapons to survive. +一个充满恐怖的俯视角角色扮演类游戏。你将探索世界、搜集材料、制作武器来生存下去。 -[Darkwood (Purchase)][29] +[购买《阴暗森林》][29] -### Best Racing/Sports/Simulation Games +### 最佳赛车 / 运动 / 模拟类游戏 -### 15\. Rocket League +### 15\. 火箭联盟(Rocket League) -Rocket League is an action-packed soccer game conceptualized by rocket-powered battle cars. Not just driving the car and heading to the goal – you can even make your opponents go – kaboom! +《火箭联盟》是一款充满刺激的足球游戏。游戏中你将驾驶用火箭助推的战斗赛车。你不仅是要驾车把球带进对方球门,你甚至还可以让你的对手化为灰烬! -A fantastic sports-action game every gamer must have installed! +这是款超棒的体育动作类游戏,每个游戏爱好者都值得拥有! -[Rocket League (Purchase)][30] +[购买《火箭联盟》][30] -### 16\. Road Redemption +### 16\. 公路救赎(Road Redemption) -Missing Road Rash? Well, Road Redemption will quench your thirst as a spiritual successor to Road Rash. Ofcourse, it is not officially “Road Rash II” – but it is equally enjoyable. If you loved Road Rash, you’ll like it too. +想念《暴力摩托》了?作为它精神上的续作,《公路救赎》可以缓解你的饥渴。当然,这并不是真正的《暴力摩托 2》,但它一样有趣。如果你喜欢《暴力摩托》,你也会喜欢这款游戏。 -[Road Redemption (Purchase)][31] +[购买《公路救赎》][31] -### 17\. Dirt Rally +### 17\. 尘埃拉力赛(Dirt Rally) -Dirt Rally is for the gamers who want to experience off-road and on-road racing game. The visuals are breathtaking and the game is enjoyable with near to perfect driving mechanics. +《尘埃拉力赛》是为想要体验公路和越野赛车的玩家准备的。画面非常有魄力,驾驶手感也近乎完美。 -[Dirt Rally (Purchase)][32] +[购买《尘埃拉力赛》][32] ### 18\. F1 2017 -F1 2017 is yet another impressive car racing game from the developers of Dirt Rally (Codemasters & Feral Interactive). It features all of the iconic F1 racing cars that you need to experience. +《F1 2017》是另一款令人印象深刻的赛车游戏。由《尘埃拉力赛》的开发者 Codemasters & Feral Interactive 制作。游戏中包含了所有标志性的 F1 赛车,值得你去体验。 -[F1 2017 (Purchase)][33] +[购买《F1 2017》][33] -### 19. GRID Autosport +### 19. 超级房车赛:汽车运动(GRID Autosport) -GRID is one of the most underrated car racing games available out there. GRID Autosport is the sequel to GRID 2\. The gameplay seems stunning to me. With even better cars than GRID 2, the GRID Autosport is a recommended racing game for every PC gamer out there. The game also supports a multiplayer mode where you can play with your friends – representing as a team. 
+《超级房车赛》是最被低估的赛车游戏之一。《超级房车赛:汽车运动》是《超级房车赛》的续作。这款游戏的可玩性令人惊艳。游戏中的赛车也比前作更好。推荐所有的 PC 游戏玩家尝试这款赛车游戏。游戏还支持多人模式,你可以和你的朋友组队参赛。 -[GRID Autosport (Purchase)][34] +[购买《超级房车赛:汽车运动》][34] -### Best Adventure Games +### 最好的冒险游戏 -### 20\. ARK: Survival Evolved +### 20\. 方舟:生存进化(ARK: Survival Evolved) -ARK Survival Evolved is a quite decent survival game with exciting adventures following in the due course. You find yourself in the middle of nowhere (ARK Island) and have got no choice except training the dinosaurs, teaming up with other players, hunt someone to get the required resources, and craft items to maximize your chances to survive and escape the Island. +《方舟:生存进化》是一款不错的生存游戏,里面有着激动人心的冒险。你发现自己身处一个未知孤岛(方舟岛),为了生存下去并逃离这个孤岛,你必须去驯服恐龙、与其他玩家合作、猎杀其他人来抢夺资源、以及制作物品。 -[ARK: Survival Evolved (Purchase)][35] +[购买《方舟:生存进化》][35] -### 21\. This War of Mine +### 21\. 这是我的战争(This War of Mine) -A unique game where you aren’t a soldier but a civilian facing the hardships of wartime. You’ve to make your way through highly-skilled enemies and help out other survivors as well. +一款独特的战争游戏。你不是扮演士兵,而是要作为一个平民来面对战争带来的艰难。你需要在身经百战的敌人手下逃生,并帮助其他的幸存者。 -[This War of Mine (Purchase)][36] +[购买《这是我的战争》][36] -### 22\. Mad Max +### 22\. 疯狂的麦克斯(Mad Max) -Mad Max is all about survival and brutality. It includes powerful cars, an open-world setting, weapons, and hand-to-hand combat. You need to keep exploring the place and also focus on upgrading your vehicle to prepare for the worst. You need to think carefully and have a strategy before you make a decision. +生存和暴力概括了《疯狂的麦克斯》的全部内容。游戏中有性能强大的汽车,开放性的世界,各种武器,以及徒手肉搏。你要不断地探索世界,并注意升级你的汽车来防患于未然。在做决定之前,你要仔细思考并设计好策略。 -[Mad Max (Purchase)][37] +[购买《疯狂的麦克斯》][37] -### Best Indie Games +### 最佳独立游戏 -### 23\. Terraria +### 23\. 泰拉瑞亚(Terraria) -It is a 2D game which has received overwhelmingly positive reviews on Steam. Dig, fight, explore, and build to keep your journey going. The environments are automatically generated. So, it isn’t anything static. You might encounter something first and your friend might encounter the same after a while. You’ll also get to experience creative 2D action-packed sequences. +这是款在 Steam 上广受好评的 2D 游戏。你在旅途中需要去挖掘、战斗、探索、建造。游戏地图是自动生成的,而不是静止的。也许你刚刚遇到的东西,你的朋友过一会儿才会遇到。你还将体验到富有新意的 2D 动作场景。 -[Terraria (Purchase)][38] +[购买《泰拉瑞亚》][38] -### 24\. Kingdoms and Castles +### 24\. 王国与城堡(Kingdoms and Castles) -With Kingdoms and Castles, you get to build your own kingdom. You have to manage your kingdom by collecting tax (as funds necessary) from the people, take care of the forests, handle the city +在《王国与城堡》中,你将建造你自己的王国。在管理你的王国的过程中,你需要收税、保护森林、规划城市,并且发展国防来防止别人入侵你的王国。 -design, and also make sure no one raids your kingdom by implementing proper defences. +这是款比较新的游戏,但在独立游戏中已经相对获得了比较高的人气。 -It is a fairly new game but quite trending among the Indie genre of games. +[购买《王国与城堡》][39] -[Kingdoms and Castles][39] +### Steam 上最佳 Linux 策略类游戏 -### Best Strategy Games on Steam For Linux Machines +### 25\. 文明 5(Sid Meier’s Civilization V) -### 25\. Sid Meier’s Civilization V +《文明 5》是 PC 上评价最高的策略游戏之一。如果你想的话,你可以去玩《文明 6》。但是依然有许多玩家喜欢《文明 5》,觉得它更有独创性,游戏细节也更富有创造力。 -Sid Meier’s Civilization V is one of the best-rated strategy game available for PC. You could opt for Civilization VI – if you want. But, the gamers still root for Sid Meier’s Civilization V because of its originality and creative implementation. +[购买《文明 5》][40] -[Civilization V (Purchase)][40] +### 26\. 全面战争:战锤(Total War: Warhammer) -### 26\. 
Total War: Warhammer +《全面战争:战锤》是 PC 平台上一款非常出色的回合制策略游戏。可惜的是,新作《战锤 2》依然不支持Linux。但如果你喜欢使用飞龙和魔法来建造与毁灭帝国的话,2016 年的《战锤》依然是个不错的选择。 -Total War: Warhammer is an incredible turn-based strategy game available for PC. Sadly, the Warhammer II isn’t available for Linux as of yet. But 2016’s Warhammer is still a great choice if you like real-time battles that involve building/destroying empires with flying creatures and magical powers. +[购买《全面战争:战锤》][41] -[Warhammer I (Purchase)][41] +### 27\. 轰炸小队《Bomber Crew》 -### 27\. Bomber Crew +想要一款充满乐趣的策略游戏?《轰炸小队》就是为你准备的。你需要选择合适的队员并且让你的队伍稳定运转来取得最终的胜利。 -Wanted a strategy simulation game that’s equally fun to play? Bomber Crew is the answer to it. You need to choose the right crew and maintain it in order to win it all. +[购买《轰炸小队》][42] -[Bomber Crew (Purchase)][42] +### 28\. 奇迹时代 3(Age of Wonders III) -### 28\. Age of Wonders III +非常流行的策略游戏,包含帝国建造、角色扮演、以及战争元素。这是款精致的回合制策略游戏,请一定要试试! -A very popular strategy title with a mixture of empire building, role playing, and warfare. A polished turn-based strategy game you must try! +[购买《奇迹时代 3》][43] -[Age of Wonders III (Purchase)][43] +### 29\. 城市:天际线(Cities: Skylines) -### 29\. Cities: Skylines +一款非常简洁的游戏。你要从零开始建造一座城市,并且管理它的全部运作。你将体验建造和管理城市带来的愉悦与困难。我不觉得每个玩家都会喜欢这款游戏——它的用户群体非常明确。 -A pretty straightforward strategy game to build a city from scratch and manage everything in it. You’ll experience the thrills and hardships of building and maintaining a city. I wouldn’t expect every gamer to like this game – it has a very specific userbase. +[购买《城市:天际线》][44] -[Cities: Skylines (Purchase)][44] +### 30\. 幽浮 2(XCOM 2) -### 30\. XCOM 2 +《幽浮 2》是 PC 上最好的回合制策略游戏之一。我在想如果《幽浮 2》能够被制作成 FPS 游戏的话该有多棒。不过它现在已经是一款好评如潮的杰作了。如果你有多余的预算能花在这款游戏上,建议你购买“天选之战(War of the Chosen)“ DLC。 -XCOM 2 is one of the best turn-based strategy game available for PC. I wonder how crazy it could have been to have XCOM 2 as a first person shooter game. However, it’s still a masterpiece with an overwhelming response from almost everyone who bought the game. If you have the budget to spend more on this game, do get the – “War of the Chosen” – DLC. +[购买《幽浮 2》][45] -[XCOM 2 (Purchase)][45] +### 总结 -### Wrapping Up +我们从所有支持 Linux 的游戏中挑选了大部分的主流大作以及一些评价很高的新作。 -Among all the games available for Linux, we did include most of the major titles and some the latest games with an overwhelming response from the gamers. +你觉得我们遗漏了你最喜欢的支持 Linux 的 Steam 游戏么?另外,你还希望哪些 Steam 游戏开始支持 Linux 平台? -Do you think we missed any of your favorite Linux game available on Steam? Also, what are the games that you would like to see on Steam for Linux platform? - -Let us know your thoughts in the comments below. 
+请在下面的回复中告诉我们你的想法。 -------------------------------------------------------------------------------- via: https://itsfoss.com/best-linux-games-steam/ 作者:[Ankush Das][a] -译者:[译者ID](https://github.com/译者ID) +译者:[yixunx](https://github.com/yixunx) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 40603fd2d9115fb502aa45e6f5be9d7c7f9aa6b8 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 7 Dec 2017 08:58:45 +0800 Subject: [PATCH 085/236] translated --- ...Long Running Terminal Commands Complete.md | 156 ------------------ ...Long Running Terminal Commands Complete.md | 154 +++++++++++++++++ 2 files changed, 154 insertions(+), 156 deletions(-) delete mode 100644 sources/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md create mode 100644 translated/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md diff --git a/sources/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md b/sources/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md deleted file mode 100644 index 46afe9b893..0000000000 --- a/sources/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md +++ /dev/null @@ -1,156 +0,0 @@ -translating---geekpi - -Undistract-me : Get Notification When Long Running Terminal Commands Complete -============================================================ - -by [sk][2] · November 30, 2017 - -![Undistract-me](https://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2-720x340.png) - -A while ago, we published how to [get notification when a Terminal activity is done][3]. Today, I found out a similar utility called “undistract-me” that notifies you when long running terminal commands complete. Picture this scenario. You run a command that takes a while to finish. In the mean time, you check your facebook and get so involved in it. After a while, you remembered that you ran a command few minutes ago. You go back to the Terminal and notice that the command has already finished. But you have no idea when the command is completed. Have you ever been in this situation? I bet most of you were in this situation many times. This is where “undistract-me” comes in help. You don’t need to constantly check the terminal to see if a command is completed or not. Undistract-me utility will notify you when a long running command is completed. It will work on Arch Linux, Debian, Ubuntu and other Ubuntu-derivatives. - -#### Installing Undistract-me - -Undistract-me is available in the default repositories of Debian and its variants such as Ubuntu. All you have to do is to run the following command to install it. - -``` -sudo apt-get install undistract-me -``` - -The Arch Linux users can install it from AUR using any helper programs. - -Using [Pacaur][4]: - -``` -pacaur -S undistract-me-git -``` - -Using [Packer][5]: - -``` -packer -S undistract-me-git -``` - -Using [Yaourt][6]: - -``` -yaourt -S undistract-me-git -``` - -Then, run the following command to add “undistract-me” to your Bash. 
- -``` -echo 'source /etc/profile.d/undistract-me.sh' >> ~/.bashrc -``` - -Alternatively you can run this command to add it to your Bash: - -``` -echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .bashrc -``` - -If you are in Zsh shell, run this command: - -``` -echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .zshrc -``` - -Finally update the changes: - -For Bash: - -``` -source ~/.bashrc -``` - -For Zsh: - -``` -source ~/.zshrc -``` - -#### Configure Undistract-me - -By default, Undistract-me will consider any command that takes more than 10 seconds to complete as a long-running command. You can change this time interval by editing /usr/share/undistract-me/long-running.bash file. - -``` -sudo nano /usr/share/undistract-me/long-running.bash -``` - -Find “LONG_RUNNING_COMMAND_TIMEOUT” variable and change the default value (10 seconds) to something else of your choice. - - [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png)][7] - -Save and close the file. Do not forget to update the changes: - -``` -source ~/.bashrc -``` - -Also, you can disable notifications for particular commands. To do so, find the “LONG_RUNNING_IGNORE_LIST” variable and add the commands space-separated like below. - -By default, the notification will only show if the active window is not the window the command is running in. That means, it will notify you only if the command is running in the background Terminal window. If the command is running in active window Terminal, you will not be notified. If you want undistract-me to send notifications either the Terminal window is visible or in the background, you can set IGNORE_WINDOW_CHECK to 1 to skip the window check. - -The other cool feature of Undistract-me is you can set audio notification along with visual notification when a command is done. By default, it will only send a visual notification. You can change this behavior by setting the variable UDM_PLAY_SOUND to a non-zero integer on the command line. However, your Ubuntu system should have pulseaudio-utils and sound-theme-freedesktop utilities installed to enable this functionality. - -Please remember that you need to run the following command to update the changes made. - -For Bash: - -``` -source ~/.bashrc -``` - -For Zsh: - -``` -source ~/.zshrc -``` - -It is time to verify if this really works. - -#### Get Notification When Long Running Terminal Commands Complete - -Now, run any command that takes longer than 10 seconds or the time duration you defined in Undistract-me script. - -I ran the following command on my Arch Linux desktop. - -``` -sudo pacman -Sy -``` - -This command took 32 seconds to complete. After the completion of the above command, I got the following notification. - - [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png)][8] - -Please remember Undistract-me script notifies you only if the given command took more than 10 seconds to complete. If the command is completed in less than 10 seconds, you will not be notified. Of course, you can change this time interval settings as I described in the Configuration section above. - -I find this tool very useful. It helped me to get back to the business after I completely lost in some other tasks. I hope this tool will be helpful to you too. - -More good stuffs to come. Stay tuned! - -Cheers! 
-
-Resource:
-
-* [Undistract-me GitHub Repository][1]
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/undistract-get-notification-long-running-terminal-commands-complete/
-
-作者:[sk][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://github.com/jml/undistract-me
-[2]:https://www.ostechnix.com/author/sk/
-[3]:https://www.ostechnix.com/get-notification-terminal-task-done/
-[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
-[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
-[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
-[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png
-[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png
diff --git a/translated/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md b/translated/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md
new file mode 100644
index 0000000000..26a087d440
--- /dev/null
+++ b/translated/tech/20171130 Undistract-me_Get Notification When Long Running Terminal Commands Complete.md
@@ -0,0 +1,154 @@
+Undistract-me:当长时间运行的终端命令完成时获取通知
+============================================================
+
+作者:[sk][2],时间:2017.11.30
+
+![Undistract-me](https://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2-720x340.png)
+
+前一段时间,我们发表了一篇文章,介绍如何[在终端活动完成时获取通知][3]。今天,我发现了一个叫做 “undistract-me” 的类似工具,它可以在长时间运行的终端命令完成时通知你。想象这个场景:你运行了一个需要一段时间才能完成的命令,与此同时,你打开了 Facebook 并沉浸其中。过了一会儿,你才想起几分钟前执行过一个命令。你回到终端,发现这个命令其实早已完成,但你并不知道它是什么时候完成的。你有没有遇到过这种情况?我敢打赌,你们大多数人遇到过许多次。这就是 “undistract-me” 的用武之地:有了它,你不需要反复查看终端来确认命令是否完成,长时间运行的命令完成后,undistract-me 会主动通知你。它能在 Arch Linux、Debian、Ubuntu 和其他 Ubuntu 衍生版上运行。
+
+#### 安装 Undistract-me
+
+Undistract-me 存在于 Debian 及其衍生版(如 Ubuntu)的默认仓库中。你要做的就是运行下面的命令来安装它。
+
+```
+sudo apt-get install undistract-me
+```
+
+Arch Linux 用户可以使用任意 AUR 助手工具从 AUR 安装它。
+
+使用 [Pacaur][4]:
+
+```
+pacaur -S undistract-me-git
+```
+
+使用 [Packer][5]:
+
+```
+packer -S undistract-me-git
+```
+
+使用 [Yaourt][6]:
+
+```
+yaourt -S undistract-me-git
+```
+
+然后,运行以下命令将 “undistract-me” 添加到 Bash 中。
+
+```
+echo 'source /etc/profile.d/undistract-me.sh' >> ~/.bashrc
+```
+
+或者,你也可以运行此命令将其添加到你的 Bash:
+
+```
+echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .bashrc
+```
+
+如果你用的是 Zsh shell,请运行以下命令:
+
+```
+echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .zshrc
+```
+
+最后,使更改生效。
+
+对于 Bash:
+
+```
+source ~/.bashrc
+```
+
+对于 Zsh:
+
+```
+source ~/.zshrc
+```
+
+#### 配置 Undistract-me
+
+默认情况下,Undistract-me 会将任何超过 10 秒的命令视为长时间运行的命令。你可以通过编辑 /usr/share/undistract-me/long-running.bash 来更改此时间间隔。
+
+```
+sudo nano /usr/share/undistract-me/long-running.bash
+```
+
+找到 “LONG_RUNNING_COMMAND_TIMEOUT” 变量,并将默认值(10 秒)更改为你想要的其他值。
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png)][7]
+
+保存并关闭文件。不要忘记使更改生效:
+
+```
+source ~/.bashrc
+```
+
+此外,你还可以禁用特定命令的通知。为此,找到 “LONG_RUNNING_IGNORE_LIST” 变量,并像下面那样把命令用空格分隔后加入其中。
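+
+例如,假设你不想在 vim、less 或 ssh 这类本来就会长时间运行的交互式命令结束时收到通知,就可以把它们加入该变量。下面是一个示意写法(命令列表仅作演示,变量的确切格式请以你所安装版本的 long-running.bash 中的定义为准):
+
+```
+LONG_RUNNING_IGNORE_LIST="vim less ssh top htop"
+```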
+
+默认情况下,只有当活动窗口不是命令所在的窗口时才会显示通知。也就是说,只有当命令在后台终端窗口中运行时,它才会通知你;如果命令在当前活动的终端窗口中运行,则不会收到通知。如果你希望无论终端窗口可见还是在后台,undistract-me 都发送通知,可以将 IGNORE_WINDOW_CHECK 设置为 1 来跳过窗口检查。
+
+Undistract-me 的另一个很酷的功能是,当命令完成时,你可以在可视通知之外再加上声音通知。默认情况下,它只会发送可视通知。你可以通过在命令行上将变量 UDM_PLAY_SOUND 设置为非零整数来更改此行为。但是,你的 Ubuntu 系统需要安装 pulseaudio-utils 和 sound-theme-freedesktop 程序才能启用此功能。
+
+请记住,你需要运行以下命令来使所做的更改生效。
+
+对于 Bash:
+
+```
+source ~/.bashrc
+```
+
+对于 Zsh:
+
+```
+source ~/.zshrc
+```
+
+现在来验证一下它是否真的有效。
+
+#### 在长时间运行的终端命令完成时获取通知
+
+现在,运行任何需要超过 10 秒(或者你在 Undistract-me 脚本中定义的时长)的命令。
+
+我在 Arch Linux 桌面上运行了以下命令。
+
+```
+sudo pacman -Sy
+```
+
+这个命令花了 32 秒完成。上述命令完成后,我收到了以下通知。
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png)][8]
+
+请记住,只有当给定的命令耗时超过 10 秒,Undistract-me 脚本才会通知你。如果命令在 10 秒内完成,你将不会收到通知。当然,你可以按照上面“配置”部分所述更改这个时间间隔。
+
+我发现这个工具非常有用。当我完全陷在其他任务里时,它能帮我重新回到正事上来。我希望这个工具也能对你有帮助。
+
+更多好东西即将到来,敬请期待!
+
+干杯!
+
+资源:
+
+* [Undistract-me GitHub 仓库][1]
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/undistract-get-notification-long-running-terminal-commands-complete/
+
+作者:[sk][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://github.com/jml/undistract-me
+[2]:https://www.ostechnix.com/author/sk/
+[3]:https://www.ostechnix.com/get-notification-terminal-task-done/
+[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
+[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
+[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
+[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png
+[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png

From fe8423d9ceeac4937cf7afa33c3855929190ca0e Mon Sep 17 00:00:00 2001
From: Yixun Xu
Date: Wed, 6 Dec 2017 20:00:27 -0500
Subject: [PATCH 086/236] move

---
 ...171204 30 Best Linux Games On Steam You Should Play in 2017.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {sources => translated}/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md (100%)

diff --git a/sources/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md b/translated/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md
similarity index 100%
rename from sources/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md
rename to translated/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md

From bf1ea7f4add16a5c401bfb48c2a29162673fe2bf Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 7 Dec 2017 09:03:21 +0800
Subject: [PATCH 087/236] translating

---
 ...ring Back Ubuntus Unity from the Dead as an Official Spin.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md b/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md
index 0e38373c3f..d50a3cdfc5 100644
--- a/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md
+++ b/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
 Someone Tries to Bring Back Ubuntu's Unity from the Dead as an Official Spin
 ============================================================

From fce8b5278170fab547440db18d8ae42074993826 Mon Sep 17 00:00:00 2001
From: Yixun Xu
Date: Wed, 6 Dec 2017 20:18:29 -0500
Subject: [PATCH 088/236] translation request: Love Your Bugs

---
 sources/tech/20171112 Love Your Bugs.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git
a/sources/tech/20171112 Love Your Bugs.md b/sources/tech/20171112 Love Your Bugs.md index bf79f27cf7..0404875a25 100644 --- a/sources/tech/20171112 Love Your Bugs.md +++ b/sources/tech/20171112 Love Your Bugs.md @@ -1,3 +1,5 @@ +yixunx translating + Love Your Bugs ============================================================ From eead636c96ef94637dfd3f1d2b2b1afa0f9862ea Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 7 Dec 2017 09:36:04 +0800 Subject: [PATCH 089/236] PRF:20171120 Mark McIntyre How Do You Fedora.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @zrszrszrs 恭喜你,完成了第一篇翻译! --- ...0171120 Mark McIntyre How Do You Fedora.md | 32 +++++++++---------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/translated/tech/20171120 Mark McIntyre How Do You Fedora.md b/translated/tech/20171120 Mark McIntyre How Do You Fedora.md index 4fe315eb07..3ce32fd266 100644 --- a/translated/tech/20171120 Mark McIntyre How Do You Fedora.md +++ b/translated/tech/20171120 Mark McIntyre How Do You Fedora.md @@ -1,43 +1,43 @@ -# [Mark McIntyre: 你是如何使用Fedora的?][1] - +Mark McIntyre:与 Fedora 的那些事 +=========================== ![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg) -最近我们采访了 Mark McIntyre,谈了他是如何使用 Fedora 系统的。这也是 Fedora 杂志上[本系列的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。通过[反馈表][3]与我们联系,表达你想成为采访对象的意愿。 +最近我们采访了 Mark McIntyre,谈了他是如何使用 Fedora 系统的。这也是 Fedora 杂志上[系列文章的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。如果你想成为采访对象,请通过[反馈表][3]与我们联系。 ### Mark McIntyre 是谁? -Mark McIntyre 是一个天生的极客,后天的 Linux 爱好者。他说:“我在 13 岁开始编程,当时自学 BASIC 语言,我体会到其中的乐趣,并在乐趣的引导下,一步步成为专业的码农。”Mark 和他的侄女都是披萨饼的死忠粉。“去年秋天,我和我的侄女尽可能多地光顾了诺克斯维尔的披萨饼连锁店。 点击 [https://knox-pizza-quest.blogspot.com/][4] 可以了解我们的进展情况。”Mark 也是一名业余的摄影爱好者,并且在 Flickr 上 [发布自己的作品][5]。 +Mark McIntyre 为极客而生,以 Linux 为乐趣。他说:“我在 13 岁开始编程,当时自学 BASIC 语言,我体会到其中的乐趣,并在乐趣的引导下,一步步成为专业的码农。” Mark 和他的侄女都是披萨饼的死忠粉。“去年秋天,我和我的侄女开始了一个任务,去尝试诺克斯维尔的许多披萨饼连锁店。点击[这里][4]可以了解我们的进展情况。”Mark 也是一名业余的摄影爱好者,并且在 Flickr 上 [发布自己的作品][5]。 ![](https://fedoramagazine.org/wp-content/uploads/2017/11/31456893222_553b3cac4d_k-1024x575.jpg) -作为一名开发者,Mark 有着丰富的工作背景。他用过 Visual Basic 编写应用程序,用过 LotusScript、 PL/SQL(Oracle)、 Tcl/TK 编写代码,也用过基于 Python 的 Django 框架。他的强项是 Python。这也是目前他作为系统工程师的工作语言。“我用 Python 比较规律。但当我的工作变得更像是自动化工程师时, Python 用得就更频繁了。” +作为一名开发者,Mark 有着丰富的工作背景。他用过 Visual Basic 编写应用程序,用过 LotusScript、 PL/SQL(Oracle)、 Tcl/TK 编写代码,也用过基于 Python 的 Django 框架。他的强项是 Python。这也是目前他作为系统工程师的工作语言。“我经常使用 Python。由于我的工作变得更像是自动化工程师, Python 用得就更频繁了。” -McIntyre 自称是个书呆子,喜欢科幻电影,但他最喜欢的一部电影却不是科幻片。“尽管我是个书呆子,喜欢看《星际迷航》、《星球大战》之类的影片,但《光荣战役》或许才是我最喜欢的电影。”他还提到,电影《冲出宁静号》实属著名电视剧《萤火虫》的精彩后续。 +McIntyre 自称是个书呆子,喜欢科幻电影,但他最喜欢的一部电影却不是科幻片。“尽管我是个书呆子,喜欢看《星际迷航Star Trek》、《星球大战Star Wars》之类的影片,但《光荣战役Glory》或许才是我最喜欢的电影。”他还提到,电影《冲出宁静号Serenity》是一个著名电视剧的精彩后续(指《萤火虫》)。 Mark 比较看重他人的谦逊、知识与和气。他欣赏能够设身处地为他人着想的人。“如果你决定为另一个人服务,那么你会选择自己愿意亲近的人,而不是让自己备受折磨的人。” -McIntyre 目前在 [Scripps Networks Interactive][6] 工作,这家公司是 HGTV、Food Network、Travel Channel、DIY、GAC 以及其他几个有线电视频道的母公司。“我现在是一名系统工程师,负责非线性视频内容,这是全部媒体开展线上消费的计划。”他支持一些开发团队编写应用程序,将线性视频从有线电视发布到线上平台,比如亚马逊、葫芦。这些系统既包含预置系统,也包含云系统。Mark 还开发了一些自动化工具,将这些应用程序主要部署到云基础结构中。 +McIntyre 目前在 [Scripps Networks Interactive][6] 工作,这家公司是 HGTV、Food Network、Travel Channel、DIY、GAC 以及其他几个有线电视频道的母公司。“我现在是一名系统工程师,负责非线性视频内容,这是所有媒体要开展线上消费所需要的。”他为一些开发团队提供支持,他们编写应用程序,将线性视频从有线电视发布到线上平台,比如亚马逊、葫芦。这些系统既包含预置系统,也包含云系统。Mark 还开发了一些自动化工具,将这些应用程序主要部署到云基础结构中。 ### Fedora 社区 -Mark 形容 Fedora 社区是一个富有活力的社区,充满着像 Fedora 用户一样热爱生活的人。“从设计师到包装师,这个团体依然非常活跃,生机勃勃。” 他继续说道:“这使我对操作系统抱有一种信心。” 
+Mark 形容 Fedora 社区是一个富有活力的社区,充满着像 Fedora 用户一样热爱生活的人。“从设计师到封包人,这个团体依然非常活跃,生机勃勃。” 他继续说道:“这使我对该操作系统抱有一种信心。” -2002年左右,Mark 开始经常使用 IRC 上的 #fedora 频道:“那时候,Wi-Fi 在启用适配器和配置模块功能时,有许多还是靠手工实现的。”为了让他的 Wi-Fi 能够工作,他不得不重新去编译 Fedora 内核。 +2002 年左右,Mark 开始经常使用 IRC 上的 #fedora 频道:“那时候,Wi-Fi 在启用适配器和配置模块功能时,有许多还是靠手工实现的。”为了让他的 Wi-Fi 能够工作,他不得不重新去编译 Fedora 内核。 -McIntyre 鼓励他人参与 Fedora 社区。“这里有许多来自不同领域的机会。前端设计、测试部署、开发、应用程序包装以及新型技术实现。”他建议选择一个感兴趣的领域,然后向那个团体提出疑问。“这里有许多机会去奉献自己。” +McIntyre 鼓励他人参与 Fedora 社区。“这里有许多来自不同领域的机会。前端设计、测试部署、开发、应用程序打包以及新技术实现。”他建议选择一个感兴趣的领域,然后向那个团体提出疑问。“这里有许多机会去奉献自己。” -对于帮助他起步的社区成员,Mark 赞道:“Ben Williams 非常乐于助人。在我第一次接触 Fedora 时,他帮我搞定了一些 #fedora 支持频道中的安装补丁。” Ben 也鼓励 Mark 去做 Fedora [代表][7]。 +对于帮助他起步的社区成员,Mark 赞道:“Ben Williams 非常乐于助人。在我第一次接触 Fedora 时,他帮我搞定了一些 #fedora 支持频道中的安装补丁。” Ben 也鼓励 Mark 去做 Fedora [大使][7]。 ### 什么样的硬件和软件? -McIntyre 将 Fedora Linux 系统用在他的笔记本和台式机上。在服务器上他选择了 CentOS,因为它有更长的生命周期支持。他现在的台式机是自己组装的,配有 Intel 酷睿 i5 处理器,32GB 的内存和2TB 的硬盘。“我装了个 4K 的显示屏,有足够大的,地方来同时查看所有的应用。”他目前工作用的笔记本是戴尔灵越二合一,配备 13 英寸的屏,16 GB 的内存和 525 GB 的 m.2 固态硬盘。 +McIntyre 将 Fedora Linux 系统用在他的笔记本和台式机上。在服务器上他选择了 CentOS,因为它有更长的生命周期支持。他现在的台式机是自己组装的,配有 Intel 酷睿 i5 处理器,32GB 的内存和2TB 的硬盘。“我装了个 4K 的显示屏,有足够大的地方来同时查看所有的应用。”他目前工作用的笔记本是戴尔灵越二合一,配备 13 英寸的屏,16 GB 的内存和 525 GB 的 m.2 固态硬盘。 ![](https://fedoramagazine.org/wp-content/uploads/2017/11/Screenshot-from-2017-10-26-08-51-41-1024x640.png) -Mark 现在将 Fedora 26 运行在他过去几个月装配的所有盒子中。当一个新版本正式发布的时候,他倾向于避开这个高峰期。“除非在它即将发行的时候,我的工作站中有个正在运行下一代测试版本,通常情况下,一旦它发展成熟,我都会试着去获取最新的版本。”他经常采取就地更新:“这种就地更新方法利用 dnf 系统升级插件,目前表现得非常好。” +Mark 现在将 Fedora 26 运行在他过去几个月装配的所有机器中。当一个新版本正式发布的时候,他倾向于避开这个高峰期。“除非在它即将发行的时候,我的工作站中有个正在运行下一代测试版本,通常情况下,一旦它发展成熟,我都会试着去获取最新的版本。”他经常采取就地更新:“这种就地更新方法利用 dnf 系统升级插件,目前表现得非常好。” -为了搞摄影,McIntyre 用上了 [GIMP][8]、[Darktable][9],以及其他一些照片查看包和快速编辑包。当不启用网络电子邮件时,Mark 会使用 [Geary][10],还有[GNOME Calendar][11]。Mark 选用 HexChat 作为 IRC 客户端,[HexChat][12] 与在 Fedora 服务器实例上运行的 [ZNC bouncer][13] 联机。他的部门通过 Slave 进行沟通交流。 +为了搞摄影,McIntyre 用上了 [GIMP][8]、[Darktable][9],以及其他一些照片查看包和快速编辑包。当不用 Web 电子邮件时,Mark 会使用 [Geary][10],还有[GNOME Calendar][11]。Mark 选用 HexChat 作为 IRC 客户端,[HexChat][12] 与在 Fedora 服务器实例上运行的 [ZNC bouncer][13] 联机。他的部门通过 Slave 进行沟通交流。 “我从来都不是 IDE 粉,所以大多数的编辑任务都是在 [vim][14] 上完成的。”Mark 偶尔也会打开一个简单的文本编辑器,如 [gedit][15],或者 [xed][16]。他用 [GPaste][17] 做复制和粘贴工作。“对于终端的选择,我已经变成 [Tilix][18] 的忠粉。”McIntyre 通过 [Rhythmbox][19] 来管理他喜欢的播客,并用 [Epiphany][20] 实现快速网络查询。 @@ -46,8 +46,8 @@ Mark 现在将 Fedora 26 运行在他过去几个月装配的所有盒子中。 via: https://fedoramagazine.org/mark-mcintyre-fedora/ 作者:[Charles Profitt][a] -译者:[zrszrs](https://github.com/zrszrszrs) -校对:[校对者ID](https://github.com/校对者ID) +译者:[zrszrszrs](https://github.com/zrszrszrs) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a27c82b2254cb4f264df4785c3fd8d9be00b651a Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 7 Dec 2017 09:36:45 +0800 Subject: [PATCH 090/236] PUB:20171120 Mark McIntyre How Do You Fedora.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @zrszrszrs 文章发布地址:https://linux.cn/article-9119-1.html 你的 LCTT 专页地址:https://linux.cn/lctt/zrszrszrs --- .../20171120 Mark McIntyre How Do You Fedora.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171120 Mark McIntyre How Do You Fedora.md (100%) diff --git a/translated/tech/20171120 Mark McIntyre How Do You Fedora.md b/published/20171120 Mark McIntyre How Do You Fedora.md similarity index 100% rename from translated/tech/20171120 Mark McIntyre How Do You Fedora.md rename 
to published/20171120 Mark McIntyre How Do You Fedora.md From 9523efd34a083f9f9066333ef1ee7b6116d7c9e2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=AD=91=E9=AD=85=E9=AD=8D=E9=AD=89?= <625310581@qq.com> Date: Thu, 7 Dec 2017 10:10:32 +0800 Subject: [PATCH 091/236] apply for translation --- .../tech/20170921 How to answer questions in a helpful way.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20170921 How to answer questions in a helpful way.md b/sources/tech/20170921 How to answer questions in a helpful way.md index 8a3601ed06..31d6be1046 100644 --- a/sources/tech/20170921 How to answer questions in a helpful way.md +++ b/sources/tech/20170921 How to answer questions in a helpful way.md @@ -1,3 +1,6 @@ + +translating by HardworkFish + How to answer questions in a helpful way ============================================================ From b416b110fbb92f3599ded2d4743cec220accc8a8 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 7 Dec 2017 12:56:18 +0800 Subject: [PATCH 092/236] PRF:20171204 30 Best Linux Games On Steam You Should Play in 2017.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @yixunx 翻译的很棒! --- ... Games On Steam You Should Play in 2017.md | 152 +++++++++--------- 1 file changed, 73 insertions(+), 79 deletions(-) diff --git a/translated/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md b/translated/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md index f9fadae4ec..0588e1f88c 100644 --- a/translated/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md +++ b/translated/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md @@ -1,11 +1,11 @@ -2017 年最好的 30 款支持 Linux 的 Steam 游戏 +2017 年 30 款最好的支持 Linux 的 Steam 游戏 ============================================================ -说到游戏,人们一般都会推荐使用 Windows 系统。Windows 能提供更好的显卡支持和硬件兼容性,所以对于游戏爱好者来说的确是个更好的选择。但你是否想过[在 Linux 系统上玩游戏][9]?这的确是可能的,也许你以前还曾经考虑过。但在几年之前, [Steam for Linux][10] 上可玩的游戏并不是很吸引人。 +说到游戏,人们一般都会推荐使用 Windows 系统。Windows 能提供更好的显卡支持和硬件兼容性,所以对于游戏爱好者来说的确是个更好的选择。但你是否想过[在 Linux 系统上玩游戏][9]?这的确是可以的,也许你以前还曾经考虑过。但在几年之前, [Steam for Linux][10] 上可玩的游戏并不是很吸引人。 但现在情况完全不一样了。Steam 商店里现在有许多支持 Linux 平台的游戏(包括很多主流大作)。我们在本文中将介绍 Steam 上最好的一些 Linux 游戏。 -在进入正题之前,先介绍一个省钱小窍门。如果你是个狂热的游戏爱好者,在游戏上花费很多时间和金钱的话,我建议你订阅 [Humble 每月包(Humble Monthly)][11]。这是个每月收费的订阅服务,每月只用 12 美元就能获得价值 100 美元的游戏。 +在进入正题之前,先介绍一个省钱小窍门。如果你是个狂热的游戏爱好者,在游戏上花费很多时间和金钱的话,我建议你订阅 [Humble 每月包Humble Monthly][11]。这是个每月收费的订阅服务,每月只用 12 美元就能获得价值 100 美元的游戏。 这个游戏包中可能有些游戏不支持 Linux,但除了 Steam 游戏之外,它还会让 [Humble Bundle 网站][12]上所有的游戏和书籍都打九折,所以这依然是个不错的优惠。 @@ -20,218 +20,212 @@ 可以点击以下链接跳转到你喜欢的游戏类型: * [动作类游戏][3] - * [角色扮演类游戏][4] - * [赛车/运动/模拟类游戏][5] - * [冒险类游戏][6] - * [独立游戏][7] - * [策略类游戏][8] ### Steam 上最佳 Linux 动作类游戏 -### 1\. 反恐精英:全球攻势(Counter-Strike: Global Offensive)(多人) +#### 1、 《反恐精英:全球攻势Counter-Strike: Global Offensive》(多人) 《CS:GO》毫无疑问是 Steam 上支持 Linux 的最好的 FPS 游戏之一。我觉得这款游戏无需介绍,但如果你没有听说过它,我要告诉你这将会是你玩过的最好玩的多人 FPS 游戏之一。《CS:GO》还是电子竞技中的一个主流项目。想要提升等级的话,你需要在天梯上和其他玩家同台竞技。但你也可以选择更加轻松的休闲模式。 我本想写《彩虹六号:围攻行动》,但它目前还不支持 Linux 或 Steam OS。 -[购买《CS: GO》][15] +- [购买《CS: GO》][15] -### 2\. 求生之路 2(多人/单机) +#### 2、 《求生之路 2Left 4 Dead 2》(多人/单机) -这是最受欢迎的僵尸主题多人 FPS 游戏之一。在 Steam 优惠时,价格可以低至 1.3 美元。这是个有趣的游戏,能让你体会到你在僵尸游戏中期待的寒意和紧张感。游戏中的环境包括了沼泽、城市、墓地等等,让游戏既有趣又吓人。游戏中的枪械并不是非常先进,但作为一个老游戏来说,它已经提供了足够真实的体验。 +这是最受欢迎的僵尸主题多人 FPS 游戏之一。在 Steam 优惠时,价格可以低至 1.3 美元。这是个有趣的游戏,能让你体会到你在僵尸游戏中期待的战栗和刺激。游戏中的环境包括了沼泽、城市、墓地等等,让游戏既有趣又吓人。游戏中的枪械并不是非常先进,但作为一个老游戏来说,它已经提供了足够真实的体验。 -[购买《求生之路 2》][16] +- [购买《求生之路 2》][16] -### 3\. 
无主之地 2(Borderlands 2)(单机/协作) +#### 3、 《无主之地 2Borderlands 2》(单机/协作) -《无主之地 2》是个很有意思的 FPS 游戏。它和你以前玩过的游戏完全不同。画风看上去有些诡异和卡通化,但我可以保证,游戏体验可一点也不逊色! +《无主之地 2》是个很有意思的 FPS 游戏。它和你以前玩过的游戏完全不同。画风看上去有些诡异和卡通化,如果你正在寻找一个第一视角的射击游戏,我可以保证,游戏体验可一点也不逊色! 如果你在寻找一个好玩而且有很多 DLC 的 Linux 游戏,《无主之地 2》绝对是个不错的选择。 -[购买《无主之地 2》][17] +- [购买《无主之地 2》][17] -### 4\. 叛乱(Insurgency)(多人) +#### 4、 《叛乱Insurgency》(多人) 《叛乱》是 Steam 上又一款支持 Linux 的优秀的 FPS 游戏。它剑走偏锋,从屏幕上去掉了 HUD 和弹药数量指示。如同许多评论者所说,这是款注重武器和团队战术的纯粹的射击游戏。这也许不是最好的 FPS 游戏,但如果你想玩和《三角洲部队》类似的多人游戏的话,这绝对是最好的游戏之一。 -[购买《叛乱》][18] +- [购买《叛乱》][18] -### 5\. 生化奇兵:无限(Bioshock: Infinite)(单机) +#### 5、 《生化奇兵:无限Bioshock: Infinite》(单机) 《生化奇兵:无限》毫无疑问将会作为 PC 平台最好的单机 FPS 游戏之一而载入史册。你可以利用很多强大的能力来杀死你的敌人。同时你的敌人也各个身怀绝技。游戏的剧情也非常丰富。你不容错过! -[购买《生化奇兵:无限》][19] +- [购买《生化奇兵:无限》][19] -### 6\. 《杀手(年度版)》(HITMAN - Game of the Year Edition)(单机) +#### 6、 《杀手(年度版)HITMAN - Game of the Year Edition》(单机) 《杀手》系列无疑是 PC 游戏爱好者们的最爱之一。本系列的最新作开始按章节发布,让很多玩家觉得不满。但现在 Square Enix 撤出了开发,而最新的年度版带着新的内容重返舞台。在游戏中发挥你的想象力暗杀你的目标吧,杀手47! -[购买(杀手(年度版))][20] +- [购买(杀手(年度版))][20] -### 7\. 传送门 2 +#### 7、 《传送门 2Portal 2》 《传送门 2》完美地结合了动作与冒险。这是款解谜类游戏,你可以与其他玩家协作,并开发有趣的谜题。协作模式提供了和单机模式截然不同的游戏内容。 -[购买《传送门2》][21] +- [购买《传送门2》][21] -### 8\. 杀出重围:人类分裂 +#### 8、 《杀出重围:人类分裂Deux Ex: Mankind Divided》 -如果你在寻找隐蔽类的射击游戏,《杀出重围》是个完美的选择。这是个非常华丽的游戏,有着最先进的武器和超乎寻常的战斗机制。 +如果你在寻找隐蔽类的射击游戏,《杀出重围》是个填充你的 Steam 游戏库的完美选择。这是个非常华丽的游戏,有着最先进的武器和超乎寻常的战斗机制。 -[购买《杀出重围:人类分裂》][22] +- [购买《杀出重围:人类分裂》][22] -### 9\. 地铁 2033 重置版(Metro 2033 Redux) / 地铁:最后曙光 重置版(Metro Last Light Redux) +#### 9、 《地铁 2033 重置版Metro 2033 Redux》 / 《地铁:最后曙光 重置版Metro Last Light Redux》 《地铁 2033 重置版》和《地铁:最后曙光 重置版》是经典的《地铁 2033》和《地铁:最后曙光》的最终版本。故事发生在世界末日之后。你需要消灭所有的变种人来保证人类的生存。剩下的就交给你自己去探索了! -[购买《地铁 2033 重置版》][23] +- [购买《地铁 2033 重置版》][23] +- [购买《地铁:最后曙光 重置版》][24] -[购买《地铁:最后曙光 重置版》][24] - -### 10\. 坦能堡(Tannenberg)(多人) +#### 10、 《坦能堡Tannenberg》(多人) 《坦能堡》是个全新的游戏 - 在本文发表一个月前刚刚发售。游戏背景是第一次世界大战的东线战场(1914-1918)。这款游戏只有多人模式。如果你想要在游戏中体验第一次世界大战,不要错过这款游戏! -[购买《坦能堡》][25] +- [购买《坦能堡》][25] ### Steam 上最佳 Linux 角色扮演类游戏 -### 11\. 中土世界:暗影魔多(Shadow of Mordor) +#### 11、 《中土世界:暗影魔多Shadow of Mordor》 《中土世界:暗影魔多》 是 Steam 上支持 Linux 的最好的开放式角色扮演类游戏之一。你将扮演一个游侠(塔里昂),和光明领主(凯勒布理鹏)并肩作战击败索隆的军队(并最终和他直接交手)。战斗机制非常出色。这是款不得不玩的游戏! -[购买《中土世界:暗影魔多》][26] +- [购买《中土世界:暗影魔多》][26] -### 12\. 神界:原罪加强版(Divinity: Original Sin – Enhanced Edition) +#### 12、 《神界:原罪加强版Divinity: Original Sin – Enhanced Edition》 《神界:原罪》是一款极其优秀的角色扮演类独立游戏。它非常独特而又引人入胜。这或许是评分最高的带有冒险和策略元素的角色扮演游戏。加强版添加了新的游戏模式,并且完全重做了配音、手柄支持、协作任务等等。 -[购买《神界:原罪加强版》][27] +- [购买《神界:原罪加强版》][27] -### 13\. 废土 2:导演剪辑版(Wasteland 2: Director’s Cut) +#### 13、 《废土 2:导演剪辑版Wasteland 2: Director’s Cut》 《废土 2》是一款出色的 CRPG 游戏。如果《辐射 4》被移植成 CRPG 游戏,大概就是这种感觉。导演剪辑版完全重做了画面,并且增加了一百多名新人物。 -[购买《废土 2》][28] +- [购买《废土 2》][28] -### 14\. 阴暗森林(Darkwood) +#### 14、 《阴暗森林Darkwood》 一个充满恐怖的俯视角角色扮演类游戏。你将探索世界、搜集材料、制作武器来生存下去。 -[购买《阴暗森林》][29] +- [购买《阴暗森林》][29] ### 最佳赛车 / 运动 / 模拟类游戏 -### 15\. 火箭联盟(Rocket League) +#### 15、 《火箭联盟Rocket League》 《火箭联盟》是一款充满刺激的足球游戏。游戏中你将驾驶用火箭助推的战斗赛车。你不仅是要驾车把球带进对方球门,你甚至还可以让你的对手化为灰烬! 这是款超棒的体育动作类游戏,每个游戏爱好者都值得拥有! -[购买《火箭联盟》][30] +- [购买《火箭联盟》][30] -### 16\. 公路救赎(Road Redemption) +#### 16、 《公路救赎Road Redemption》 想念《暴力摩托》了?作为它精神上的续作,《公路救赎》可以缓解你的饥渴。当然,这并不是真正的《暴力摩托 2》,但它一样有趣。如果你喜欢《暴力摩托》,你也会喜欢这款游戏。 -[购买《公路救赎》][31] +- [购买《公路救赎》][31] -### 17\. 尘埃拉力赛(Dirt Rally) +#### 17、 《尘埃拉力赛Dirt Rally》 《尘埃拉力赛》是为想要体验公路和越野赛车的玩家准备的。画面非常有魄力,驾驶手感也近乎完美。 -[购买《尘埃拉力赛》][32] +- [购买《尘埃拉力赛》][32] -### 18\. 
F1 2017 +#### 18、 《F1 2017》 《F1 2017》是另一款令人印象深刻的赛车游戏。由《尘埃拉力赛》的开发者 Codemasters & Feral Interactive 制作。游戏中包含了所有标志性的 F1 赛车,值得你去体验。 -[购买《F1 2017》][33] +- [购买《F1 2017》][33] -### 19. 超级房车赛:汽车运动(GRID Autosport) +#### 19、 《超级房车赛:汽车运动GRID Autosport》 《超级房车赛》是最被低估的赛车游戏之一。《超级房车赛:汽车运动》是《超级房车赛》的续作。这款游戏的可玩性令人惊艳。游戏中的赛车也比前作更好。推荐所有的 PC 游戏玩家尝试这款赛车游戏。游戏还支持多人模式,你可以和你的朋友组队参赛。 -[购买《超级房车赛:汽车运动》][34] +- [购买《超级房车赛:汽车运动》][34] ### 最好的冒险游戏 -### 20\. 方舟:生存进化(ARK: Survival Evolved) +#### 20、 《方舟:生存进化ARK: Survival Evolved》 《方舟:生存进化》是一款不错的生存游戏,里面有着激动人心的冒险。你发现自己身处一个未知孤岛(方舟岛),为了生存下去并逃离这个孤岛,你必须去驯服恐龙、与其他玩家合作、猎杀其他人来抢夺资源、以及制作物品。 -[购买《方舟:生存进化》][35] +- [购买《方舟:生存进化》][35] -### 21\. 这是我的战争(This War of Mine) +#### 21、 《这是我的战争This War of Mine》 一款独特的战争游戏。你不是扮演士兵,而是要作为一个平民来面对战争带来的艰难。你需要在身经百战的敌人手下逃生,并帮助其他的幸存者。 -[购买《这是我的战争》][36] +- [购买《这是我的战争》][36] -### 22\. 疯狂的麦克斯(Mad Max) +#### 22、 《疯狂的麦克斯Mad Max》 生存和暴力概括了《疯狂的麦克斯》的全部内容。游戏中有性能强大的汽车,开放性的世界,各种武器,以及徒手肉搏。你要不断地探索世界,并注意升级你的汽车来防患于未然。在做决定之前,你要仔细思考并设计好策略。 -[购买《疯狂的麦克斯》][37] +- [购买《疯狂的麦克斯》][37] ### 最佳独立游戏 -### 23\. 泰拉瑞亚(Terraria) +#### 23、 《泰拉瑞亚Terraria》 -这是款在 Steam 上广受好评的 2D 游戏。你在旅途中需要去挖掘、战斗、探索、建造。游戏地图是自动生成的,而不是静止的。也许你刚刚遇到的东西,你的朋友过一会儿才会遇到。你还将体验到富有新意的 2D 动作场景。 +这是款在 Steam 上广受好评的 2D 游戏。你在旅途中需要去挖掘、战斗、探索、建造。游戏地图是自动生成的,而不是固定不变的。也许你刚刚遇到的东西,你的朋友过一会儿才会遇到。你还将体验到富有新意的 2D 动作场景。 -[购买《泰拉瑞亚》][38] +- [购买《泰拉瑞亚》][38] -### 24\. 王国与城堡(Kingdoms and Castles) +#### 24、 《王国与城堡Kingdoms and Castles》 在《王国与城堡》中,你将建造你自己的王国。在管理你的王国的过程中,你需要收税、保护森林、规划城市,并且发展国防来防止别人入侵你的王国。 这是款比较新的游戏,但在独立游戏中已经相对获得了比较高的人气。 -[购买《王国与城堡》][39] +- [购买《王国与城堡》][39] ### Steam 上最佳 Linux 策略类游戏 -### 25\. 文明 5(Sid Meier’s Civilization V) +#### 25、 《文明 5Sid Meier’s Civilization V》 《文明 5》是 PC 上评价最高的策略游戏之一。如果你想的话,你可以去玩《文明 6》。但是依然有许多玩家喜欢《文明 5》,觉得它更有独创性,游戏细节也更富有创造力。 -[购买《文明 5》][40] +- [购买《文明 5》][40] -### 26\. 全面战争:战锤(Total War: Warhammer) +#### 26、 《全面战争:战锤Total War: Warhammer》 -《全面战争:战锤》是 PC 平台上一款非常出色的回合制策略游戏。可惜的是,新作《战锤 2》依然不支持Linux。但如果你喜欢使用飞龙和魔法来建造与毁灭帝国的话,2016 年的《战锤》依然是个不错的选择。 +《全面战争:战锤》是 PC 平台上一款非常出色的回合制策略游戏。可惜的是,新作《战锤 2》依然不支持 Linux。但如果你喜欢使用飞龙和魔法来建造与毁灭帝国的话,2016 年的《战锤》依然是个不错的选择。 -[购买《全面战争:战锤》][41] +- [购买《全面战争:战锤》][41] -### 27\. 轰炸小队《Bomber Crew》 +#### 27、 《轰炸小队Bomber Crew》 想要一款充满乐趣的策略游戏?《轰炸小队》就是为你准备的。你需要选择合适的队员并且让你的队伍稳定运转来取得最终的胜利。 -[购买《轰炸小队》][42] +- [购买《轰炸小队》][42] -### 28\. 奇迹时代 3(Age of Wonders III) +#### 28、 《奇迹时代 3Age of Wonders III》 非常流行的策略游戏,包含帝国建造、角色扮演、以及战争元素。这是款精致的回合制策略游戏,请一定要试试! -[购买《奇迹时代 3》][43] +- [购买《奇迹时代 3》][43] -### 29\. 城市:天际线(Cities: Skylines) +#### 29、 《城市:天际线Cities: Skylines》 -一款非常简洁的游戏。你要从零开始建造一座城市,并且管理它的全部运作。你将体验建造和管理城市带来的愉悦与困难。我不觉得每个玩家都会喜欢这款游戏——它的用户群体非常明确。 +一款非常简洁的策略游戏。你要从零开始建造一座城市,并且管理它的全部运作。你将体验建造和管理城市带来的愉悦与困难。我不觉得每个玩家都会喜欢这款游戏——它的用户群体非常明确。 -[购买《城市:天际线》][44] +- [购买《城市:天际线》][44] -### 30\. 
幽浮 2(XCOM 2)
+#### 30、 《幽浮 2XCOM 2》

-《幽浮 2》是 PC 上最好的回合制策略游戏之一。我在想如果《幽浮 2》能够被制作成 FPS 游戏的话该有多棒。不过它现在已经是一款好评如潮的杰作了。如果你有多余的预算能花在这款游戏上,建议你购买“天选之战(War of the Chosen)“ DLC。
+《幽浮 2》是 PC 上最好的回合制策略游戏之一。我在想如果《幽浮 2》能够被制作成 FPS 游戏的话该有多棒。不过它现在已经是一款好评如潮的杰作了。如果你有多余的预算能花在这款游戏上,建议你购买“天选之战War of the Chosen” DLC。

-[购买《幽浮 2》][45]
+- [购买《幽浮 2》][45]

### 总结

我们从所有支持 Linux 的游戏中挑选了大部分的主流大作以及一些评价很高的新作。

你觉得我们遗漏了你最喜欢的支持 Linux 的 Steam 游戏么?另外,你还希望哪些 Steam 游戏开始支持 Linux 平台?

请在下面的回复中告诉我们你的想法。

--------------------------------------------------------------------------------

via: https://itsfoss.com/best-linux-games-steam/

作者:[Ankush Das][a]
译者:[yixunx](https://github.com/yixunx)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[6]:https://itsfoss.com/best-linux-games-steam/#adv
[7]:https://itsfoss.com/best-linux-games-steam/#indie
[8]:https://itsfoss.com/best-linux-games-steam/#strategy
-[9]:https://itsfoss.com/linux-gaming-guide/
+[9]:https://linux.cn/article-7316-1.html
[10]:https://itsfoss.com/install-steam-ubuntu-linux/
[11]:https://www.humblebundle.com/?partner=itsfoss
[12]:https://www.humblebundle.com/store?partner=itsfoss

From 10f3feb64991c0ceeece73b048672c712e027020 Mon Sep 17 00:00:00 2001
From: wxy
Date: Thu, 7 Dec 2017 12:57:05 +0800
Subject: [PATCH 093/236] PUB:20171204 30 Best Linux Games On Steam You Should
 Play in 2017.md

@yixunx https://linux.cn/article-9120-1.html
---
 ...171204 30 Best Linux Games On Steam You Should Play in 2017.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {translated/tech => published}/20171204 30 Best Linux Games On Steam You Should Play in 2017.md (100%)

diff --git a/translated/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md b/published/20171204 30 Best Linux Games On Steam You Should Play in 2017.md
similarity index 100%
rename from translated/tech/20171204 30 Best Linux Games On Steam You Should Play in 2017.md
rename to published/20171204 30 Best Linux Games On Steam You Should Play in 2017.md

From 9523efd34a083f9f9066333ef1ee7b6116d7c9e2 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 7 Dec 2017 15:59:31 +0800
Subject: [PATCH 094/236] translated

---
 ...g sudo to delegate permissions in Linux.md | 150 +++++++++---------
 1 file changed, 75 insertions(+), 75 deletions(-)

diff --git a/sources/tech/20171205 Using sudo to delegate permissions in Linux.md b/sources/tech/20171205 Using sudo to delegate permissions in Linux.md
index 46a807c7a4..0f24ae5a90 100644
--- a/sources/tech/20171205 Using sudo to delegate permissions in Linux.md
+++ b/sources/tech/20171205 Using sudo to delegate permissions in Linux.md
@@ -1,92 +1,91 @@
-translating by lujun9972
-Using sudo to delegate permissions in Linux
+Linux 下使用 sudo 进行赋权
======

-I recently wrote a short Bash program to copy MP3 files from a USB thumb drive on one network host to another network host. The files are copied to a specific directory on the server that I run for a volunteer organization, from where the files can be downloaded and played.
+我最近写了一个简短的 Bash 程序,将 MP3 文件从一台网络主机的 USB 盘中拷贝到另一台网络主机上去。拷贝出来的文件存放在一台志愿者组织所属服务器的特定目录下,在那里,这些文件可以被下载和播放。

-My program does a few other things, such as changing the name of the files before they are copied so they are automatically sorted by date on the webpage. It also deletes all the files on the USB drive after verifying that the transfer completed correctly. This nice little program has a few options, such as -h to display help, -t for test mode, and a couple of others.
+我的程序还会做些其他事情,比如为了自动在网页上按日期排序,在拷贝文件之前会先对这些文件重命名;在验证拷贝正确完成后,还会删掉 USB 盘中的所有文件。这个小程序还有一些其他选项,比如 `-h` 会显示帮助,`-t` 进入测试模式等等。

-My program, as wonderful as it is, must run as root to perform its primary functions. Unfortunately, this organization has only a few people who have any interest in administering our audio and computer systems, which puts me in the position of finding semi-technical people and training them to log into the computer used to perform the transfer and run this little program.
+我的程序需要以 root 身份运行才能完成其主要功能。遗憾的是,这个组织中只有很少的人对管理我们的音频和计算机系统感兴趣,这使得我只能从半懂技术的人里挑选,培训他们登录用于传输的计算机并运行这个小程序。

-It is not that I cannot run the program myself, but for various reasons, including travel and illness, I am not always there. Even when I am present, as the "lazy sysadmin," I like to have others do my work for me. So, I write scripts to automate those tasks and use sudo to anoint a couple of users to run the scripts. Many Linux commands require the user to be root in order to run. This protects the system against accidental damage, such as that caused by my own stupidity, and intentional damage by a user with malicious intent.
+倒不是说我不能亲自运行这个程序,但由于外出和疾病等种种原因,我不是时常在场。即使我在场,作为一名“懒惰的系统管理员”,我也希望别人能替我把事情做了。因此我写了一些脚本来自动完成这些任务,并通过 sudo 指定某几个用户来运行这些脚本。很多 Linux 命令都需要以 root 身份来运行,这既能防止一时糊涂造成的意外损坏,也能防止恶意用户的故意破坏。

### Do that sudo that you do so well

-The sudo program is a handy tool that allows me as a sysadmin with root access to delegate responsibility for all or a few administrative tasks to other users of the computer. It allows me to perform that delegation without compromising the root password, thus maintaining a high level of security on the host.
+sudo 是一个很方便的工具,它让我这个拥有 root 权限的系统管理员可以把全部或部分管理性任务分配给这台计算机的其他用户,而且无需告诉他们 root 密码,从而保证主机的高安全性。

-Let's assume, for example, that I have given regular user, "ruser," access to my Bash program, "myprog," which must be run as root to perform parts of its functions. First, the user logs in as ruser with their own password, then uses the following command to run myprog.
+举个例子,假设我给了普通用户 “ruser” 访问我的 Bash 程序 “myprog” 的权限,而这个程序的部分功能需要 root 权限。该用户可以先以 ruser 的身份用自己的密码登录,然后通过以下命令运行 myprog。

-```
- sudo myprog
+```shell
+sudo myprog
```

-I find it helpful to have the log of each command run by sudo for training. I can see who did what and whether they entered the command correctly.
+我发现在培训时,把通过 sudo 运行的每条命令记录下来会很有帮助。我可以看到谁执行了哪些命令,以及他们是否输对了。
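+
+查看这些日志的具体位置因发行版而异,下面给出一个示意(路径与命令基于 Red Hat 系发行版的假设,请按你的系统调整):sudo 的每次调用一般都会通过 syslog 记录下来。
+
+```
+# 在基于 Red Hat 的系统上,sudo 的记录通常写入 /var/log/secure
+grep sudo /var/log/secure | tail
+
+# 在使用 systemd 日志的系统上,也可以按 syslog 标识符过滤
+journalctl -t sudo -n 20
+```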
-I have done this to delegate authority to myself and one other user to run a single program; however, sudo can be used to do so much more. It can allow the sysadmin to delegate authority for managing network functions or specific services to a single person or to a group of trusted users. It allows these functions to be delegated while protecting the security of the root password.
+我委派了权限给自己和另一个人来运行这一个程序;然而,sudo 可以做的事情远不止这些。它允许系统管理员把管理网络功能或特定服务的权限委派给某个人或某组受信任的用户,既实现了委派,又保护了 root 密码的安全性。

-### Configuring the sudoers file
+### 配置 sudoers 文件

-As a sysadmin, I can use the /etc/sudoers file to allow users or groups of users access to a single command, defined groups of commands, or all commands. This flexibility is key to both the power and the simplicity of using sudo for delegation.
+作为一名系统管理员,我可以使用 `/etc/sudoers` 文件来设置某些用户或某些用户组可以访问某个命令、某组命令,或所有命令。这种灵活性是使用 sudo 进行委派时能兼顾功能与简易性的关键。

-I found the sudoers file very confusing at first, so below I have copied and deconstructed the entire sudoers file from the host on which I am using it. Hopefully it won't be quite so obscure for you by the time you get through this analysis. Incidentally, I've found that the default configuration files in Red Hat-based distributions tend to have lots of comments and examples to provide guidance, which makes things easier, with less online searching required.
+我一开始对 `sudoers` 文件感到很困惑,因此下面我会拷贝并分解我所使用主机上的完整 `sudoers` 文件。希望你读完这些分析之后,它就不再那么晦涩难懂了。另外我发现,基于 Red Hat 的发行版中默认的配置文件都有很多注释和例子来指导你如何做出修改,这使得修改配置变得简单了很多,也不需要在互联网上搜索那么多东西。

-Do not use your standard editor to modify the sudoers file. Use the visudo command because it is designed to enable any changes as soon as the file is saved and you exit the editor. It is possible to use editors besides Vi in the same way as visudo.
+不要直接用普通的编辑器修改 sudoers 文件,而应该用 `visudo` 命令,因为它能在你保存并退出编辑器后立即使变更生效。除了 Vi 之外,visudo 也可以配合其他编辑器使用。

-Let's start analyzing this file at the beginning with a couple types of aliases.
+让我们从文件开头的几类别名开始,来分析这个文件。

-### Host aliases
+#### Host aliases(主机别名)

-The host aliases section is used to create groups of hosts on which commands or command aliases can be used to provide access. The basic idea is that this single file will be maintained for all hosts in an organization and copied to /etc of each host. Some hosts, such as servers, can thus be configured as a group to give some users access to specific commands, such as the ability to start and stop services like HTTPD, DNS, and networking; to mount filesystems; and so on.
+主机别名用于创建主机分组,并按组设置允许访问哪些命令或命令别名。它的基本思想是,这个文件由组织中的所有主机共同维护,然后拷贝到每台主机的 `/etc` 中。其中有些主机(例如各种服务器)可以配置成一个组,来赋予某些用户访问特定命令的权限,比如启停 HTTPD、DNS 和网络之类的服务,挂载文件系统等等。

-IP addresses can be used instead of host names in the host aliases.
+在设置主机别名时,也可以用 IP 地址替代主机名。

```
## Sudoers allows particular users to run various commands as
## the root user, without needing the root password.
##
## Examples are provided at the bottom of the file for collections
## of related commands, which can then be delegated out to particular
## users or groups.
##
## This file must be edited with the 'visudo' command.

## Host Aliases
## Groups of machines. You may prefer to use hostnames (perhaps using
## wildcards for entire domains) or IP addresses instead.
# Host_Alias FILESERVERS = fs1, fs2
# Host_Alias MAILSERVERS = smtp, smtp2

## User Aliases
## These aren't often necessary, as you can use regular groups
## (ie, from files, LDAP, NIS, etc) in this file - just use %groupname
## rather than USERALIAS
# User_Alias ADMINS = jsmith, mikem
User_Alias AUDIO = dboth, ruser

## Command Aliases
## These are groups of related commands...

## Networking
# Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, /bin/ping, /sbin/dhclient, /usr/bin/net, /sbin/iptables, /usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool

## Installation and management of software
# Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum

## Services
# Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig

## Updating the locate database
# Cmnd_Alias LOCATE = /usr/bin/updatedb

## Storage
# Cmnd_Alias STORAGE = /sbin/fdisk, /sbin/sfdisk, /sbin/parted, /sbin/partprobe, /bin/mount, /bin/umount

## Delegating permissions
# Cmnd_Alias DELEGATING = /usr/sbin/visudo, /bin/chown, /bin/chmod, /bin/chgrp

## Processes
# Cmnd_Alias PROCESSES = /bin/nice, /bin/kill, /usr/bin/kill, /usr/bin/killall

## Drivers
# Cmnd_Alias DRIVERS = /sbin/modprobe

# Defaults specification
#
# Refuse to run if unable to disable echo on the tty.
#
Defaults !visiblepw

Defaults env_reset
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS"
Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"

Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin

## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
## Syntax:
##
## user MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root ALL=(ALL) ALL

## Allows members of the 'sys' group to run networking, software,
## service management apps and more.
# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS

## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL

## Allows members of the users group to mount and unmount the
## cdrom as root
# %users ALL=/sbin/mount /mnt/cdrom, /sbin/umount /mnt/cdrom

## Allows members of the users group to shutdown this system
# %users localhost=/sbin/shutdown -h now

## Read drop-in files from /etc/sudoers.d (the # here does not mean a comment)
#includedir /etc/sudoers.d

################################################################################
# Added by David Both, 11/04/2017 to provide limited access to myprog         #
################################################################################
#
AUDIO guest1=/usr/local/bin/myprog
```

-### User aliases
+#### User aliases(用户别名)

-The user alias configuration allows root to sort users into aliased groups so that an entire group can have access to certain root capabilities. This is the section to which I have added the line User_Alias AUDIO = dboth, ruser, which defines the alias AUDIO and assigns two users to that alias.
+用户别名配置允许 root 将用户整理成分组别名,从而让整个组都获得某些 root 能力。在这部分内容中我加了一行 `User_Alias AUDIO = dboth, ruser`,它定义了别名 `AUDIO`,并把两个用户指派给这个别名。

-It is possible, as stated in the sudoers file, to simply use groups defined in the /etc/groups file instead of aliases. If you already have a group defined there that meets your needs, such as "audio," use that group name preceded by a % sign like so: %audio when assigning commands that will be made available to groups later in the sudoers file.
+正如 `sudoers` 文件中所说明的,也可以直接使用 `/etc/groups` 中定义的组而不用自己设置别名。如果你已经定义好的组(假设组名为 “audio”)能满足要求,那么在后面给组分配命令时,只需要在组名前加上 `%` 号,像这样:%audio。

-### Command aliases
+#### Command aliases(命令别名)

-Further down in the sudoers file is a command aliases section. These aliases are lists of related commands, such as networking commands or commands required to install updates or new RPM packages. These aliases allow the sysadmin to easily permit access to groups of commands.
+在 `sudoers` 文件的再后面是命令别名部分。这些别名表示的是一系列相关的命令,比如网络相关命令,或者安装更新或新 RPM 包所需的命令。这些别名能让系统管理员方便地为一组命令分配权限。

-A number of aliases are already set up in this section that make it easy to delegate access to specific types of commands.
+该部分已经设置好了许多别名,这使得为特定类型的命令分配权限变得很容易。

-### Environment defaults
+#### Environment defaults(环境默认值)

-The next section sets some default environment variables. The item that is most interesting in this section is the !visiblepw line, which prevents sudo from running if the user environment is set to show the password. This is a security precaution that should not be overridden.
+接下来的部分设置一些默认的环境变量。这部分最值得关注的是 `!visiblepw` 这一行,它表示当用户环境设置成显示密码时,禁止 `sudo` 运行。这个安全措施不应该被改掉。
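+
+顺带一提,每次修改完 `sudoers` 之后,都可以先做一次语法检查再继续(下面假设以 root 身份运行;`-c` 是 visudo 的检查模式):
+
+```
+visudo -c
+```
+
+如果文件有语法错误,visudo 会指出出错的行号,这样就不会因为一个笔误把所有人(包括你自己)的 sudo 权限弄坏。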
-### Command section
+#### Command section(命令部分)

-The command section is the main part of the sudoers file. Everything you need to do can be done without all the aliases by adding enough entries here. The aliases just make it a whole lot easier.
+命令部分是 `sudoers` 文件的主体。不使用别名,靠在这里添加足够多的条目,同样能完成你要做的所有事情;别名只是让整个配置简单得多而已。

-This section uses the aliases you've already defined to tell sudo who can do what on which hosts. The examples are self-explanatory once you understand the syntax in this section. Let's look at the syntax that we find in the command section.
+这部分使用之前定义的别名来告诉 `sudo` 哪些人可以在哪些机器上执行哪些操作。一旦你理解了这部分的语法,你会发现这些例子都非常直观。下面我们来看看命令部分的语法。

```
ruser ALL=(ALL) ALL
```

-This is a generic entry for our user, ruser. The first ALL in the line indicates that this rule applies on all hosts. The second ALL allows ruser to run commands as any other user. By default, commands are run as root user, but ruser can specify on the sudo command line that a program be run as any other user. The last ALL means that ruser can run all commands without restriction. This would effectively make ruser root.
+这是一条为用户 ruser 设置的通用配置。行中第一个 `ALL` 表示该条规则在所有主机上生效。第二个 `ALL` 允许 ruser 以其他任何用户的身份运行命令。默认情况下,命令以 root 用户的身份运行,但 ruser 可以在 sudo 命令行上指定以其他用户的身份运行程序。最后这个 `ALL` 表示 ruser 可以不受限制地运行所有命令。这实际上会让 ruser 变成 root。

-Note that there is an entry for root, as shown below. This allows root to have all-encompassing access to all commands on all hosts.
+注意,下面还有一条针对 root 的配置。它允许 root 在所有主机上不受限制地访问所有命令。

```
root ALL=(ALL) ALL
```

-To try this out, I commented out the line and, as root, tried to run chown without sudo. That did work—much to my surprise. Then I used sudo chown and that failed with the message, "Root is not in the sudoers file. This incident will be reported." This means that root can run everything as root, but nothing when using the sudo command. This would prevent root from running commands as other users via the sudo command, but root has plenty of ways around that restriction.
+为了试验一下效果,我注释掉了这一行,然后以 root 的身份试着不用 sudo 直接运行 chown,出乎我意料的是,这样是能成功的。然后我试了下 sudo chown,结果失败了,提示信息是 “Root is not in the sudoers file. This incident will be reported”。也就是说,root 可以以 root 身份直接运行任何命令,但加上 sudo 时反而什么都不能运行。这会阻止 root 通过 sudo 命令以其他用户的身份运行命令,不过 root 有太多种方法可以绕过这个约束了。

-The code below is the one I added to control access to myprog. It specifies that users who are listed in the AUDIO group, as defined near the top of the sudoers file, have access to only one program, myprog, on one host, guest1.
+下面这行是我新增的,用来控制对 myprog 的访问。它指定了只有文件开头定义的 AUDIO 组中的用户,才能且仅能在 guest1 这台主机上使用 myprog 这一个程序。

```
AUDIO guest1=/usr/local/bin/myprog
```

-Note that the syntax of the line above specifies only the host on which this access is to be allowed and the program. It does not specify that the user may run the program as any other user.
+注意,上面这一行只指定了允许访问的主机名和程序,而没有说用户可以以其他用户的身份来运行该程序。

-### Bypassing passwords
+#### 省略密码

-You can also use NOPASSWORD to allow the users specified in the group AUDIO to run myprog without the need for entering their passwords. Here's how:
+你也可以通过 NOPASSWD 标记来让 AUDIO 组中的用户无需输入密码就能运行 myprog,像这样:

```
AUDIO guest1=NOPASSWD: /usr/local/bin/myprog
```

-I did not do this for my program, because I believe that users with sudo access must stop and think about what they are doing, and this may help a bit with that. I used the entry for my little program as an example.
+我并没有对我的程序这样做,因为我觉得使用 sudo 的用户必须停下来想清楚他们正在做的事情,输入密码对此会有一点帮助。我这里只是借我的小程序举个例子。

-### wheel
+#### wheel

-The wheel specification in the command section of the sudoers file, as shown below, allows all users in the "wheel" group to run all commands on any host. The wheel group is defined in the /etc/group file, and users must be added to the group there for this to work. The % sign preceding the group name means that sudo should look for that group in the /etc/group file.
+`sudoers` 文件命令部分的 wheel 条目(如下所示)允许所有在 “wheel” 组中的用户在任何机器上运行任何命令。wheel 组在 `/etc/group` 文件中定义,用户必须先加入该组,这条规则才会对其生效。组名前面的 `%` 符号表示 sudo 应该去 `/etc/group` 文件中查找该组。

```
%wheel ALL = (ALL) ALL
```

-This is a good way to delegate full root access to multiple users without providing the root password. Just adding a user to the wheel group gives them access to full root powers. It also provides a means to monitor their activities via the log entries created by sudo. Some distributions, such as Ubuntu, add users' IDs to the wheel group in /etc/group, which allows them to use the sudo command for all privileged commands.
+这种方法很好地实现了把完整的 root 权限委派给多个用户而不用提供 root 密码。只需要把用户加入 wheel 组,就能给他们完整的 root 能力,同时还可以通过 sudo 产生的日志来监控他们的行为。有些 Linux 发行版(比如 Ubuntu)会自动把用户的 ID 加入 `/etc/group` 中的 wheel 组,这使得他们能够用 sudo 命令运行所有的特权命令。

-### Final thoughts
+### 结语

-I have used sudo here for a very limited objective—providing one or two users with access to a single command. I accomplished this with two lines (if you ignore my own comments). Delegating authority to perform certain tasks to users who do not have root access is simple and can save you, as a sysadmin, a good deal of time. It also generates log entries that can help detect problems.
+我这里只是小试了一下 sudo,只给了一到两个用户以 root 权限运行单个命令的权限。完成这些只添加了两行配置(不算注释)。把某项任务的权限委派给其他非 root 用户非常简单,而且可以为你这个系统管理员节省大量时间,它产生的日志还能帮你发现问题。

-The sudoers file offers a plethora of capabilities and options for configuration. Check the man files for sudo and sudoers for the down-and-dirty details.
+`sudoers` 文件还有许许多多的配置项和能力。查看 sudo 和 sudoers 的 man 手册可以深入了解详细信息。

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/12/using-sudo-delegate

作者:[David Both][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/dboth

From f48e6defcc1040148bf099b5ffe986f08c029e52 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 7 Dec 2017 16:00:20 +0800
Subject: [PATCH 095/236] move to translated

---
 .../tech/20171205 Using sudo to delegate permissions in Linux.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {sources => translated}/tech/20171205 Using sudo to delegate permissions in Linux.md (100%)

diff --git a/sources/tech/20171205 Using sudo to delegate permissions in Linux.md b/translated/tech/20171205 Using sudo to delegate permissions in Linux.md
similarity index 100%
rename from sources/tech/20171205 Using sudo to delegate permissions in Linux.md
rename to translated/tech/20171205 Using sudo to delegate permissions in Linux.md

From 2a75d325e3a725248eabf27f2d78f5512e399cd6 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 7 Dec 2017 16:12:11 +0800
Subject: [PATCH 096/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Suplemon=20-=20Mo?=
 =?UTF-8?q?dern=20CLI=20Text=20Editor=20with=20Multi=20Cursor=20Support?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...I Text Editor with Multi Cursor Support.md | 151 ++++++++++++++++++
 1 file changed, 151 insertions(+)
 create mode 100644 sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md

diff --git a/sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md b/sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md
new file mode 100644
index 0000000000..2b82be93ba
--- /dev/null
+++ b/sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md
@@ -0,0 +1,151 @@
+Suplemon - Modern CLI Text Editor with Multi Cursor Support
+======
+Suplemon is a modern text editor for CLI that emulates the multi cursor behavior and other features of [Sublime Text][1]. It's lightweight and really easy to use, just as Nano is.
+
+One of the benefits of using a CLI editor is that you can use it whether the Linux distribution that you're using has a GUI or not. This type of text editor also stands out as being simple, fast and powerful.
+
+You can find useful information and the source code in the [official repository][2].
+
+### Features
+
+These are some of its interesting features:
+
+* Multi cursor support
+
+* Undo / Redo
+
+* Copy and Paste, with multi line support
+
+* Mouse support
+
+* Extensions
+
+* Find, find all, find next
+
+* Syntax highlighting
+
+* Autocomplete
+
+* Custom keyboard shortcuts
+
+### Installation
+
+First, make sure you have the latest version of python3 and pip3 installed.
+
+Then type in a terminal:
+
+```
+$ sudo pip3 install suplemon
+```
+
+#### Create a new file in the current directory
+
+Open a terminal and type:
+
+```
+$ suplemon
+```
+
+![suplemon new file](https://linoxide.com/wp-content/uploads/2017/11/suplemon-new-file.png)
+
+#### Open one or multiple files
+
+Open a terminal and type:
+
+```
+$ suplemon <file1> <file2> ...
+```
+
+```
+$ suplemon example1.c example2.c
+```
+
+#### Main configuration
+
+You can find the configuration file at ~/.config/suplemon/suplemon-config.json.
+
+Editing this file is easy: once you are inside suplemon, just enter command mode and run the config command. You can view the default configuration by running config defaults.
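+
+For a rough idea of what that file looks like, here is a small hand-written sketch. The exact keys are an assumption on my part and may differ between versions, so treat the output of the config defaults command as the authoritative reference:
+
+```
+{
+    "app": {
+        "debug": false
+    },
+    "editor": {
+        "tab_width": 4
+    }
+}
+```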
+
+--------------------------------------------------------------------------------
+
+via: https://linoxide.com/tools/suplemon-cli-text-editor-multi-cursor/
+
+作者:[Ivo Ursino][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://linoxide.com/author/ursinov/
+[1]:https://linoxide.com/tools/install-sublime-text-editor-linux/
+[2]:https://github.com/richrd/suplemon/

From b9e0afd5b4a9f60575a442f5c179faa1f77c9655 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 7 Dec 2017 16:17:49 +0800
Subject: [PATCH 097/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Know?=
 =?UTF-8?q?=20What=20A=20Command=20Or=20Program=20Will=20Exactly=20Do=20Be?=
 =?UTF-8?q?fore=20Executing=20It?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ram Will Exactly Do Before Executing It.md | 143 ++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md

diff --git a/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md b/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md
new file mode 100644
index 0000000000..417be5b294
--- /dev/null
+++ b/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md
@@ -0,0 +1,143 @@
+How To Know What A Command Or Program Will Exactly Do Before Executing It
+======
+Ever wondered what a Unix command will do before executing it? Not everyone knows what a particular command or program will do. Of course, you can check it with [Explainshell][2]. You need to copy/paste the command on the Explainshell website, and it lets you know what each part of a Linux command does. However, that is not necessary anymore. Now, we can easily know what a command or program will exactly do before executing it, right from the Terminal. Say hello to "maybe", a simple tool that allows you to run a command and see what it does to your files without actually doing it! After reviewing the output listed, you can then decide whether you really want to run it or not.
+
+#### How "maybe" works
+
+According to the developer,
+
+> "maybe" runs processes under the control of ptrace with the help of python-ptrace library. When it intercepts a system call that is about to make changes to the file system, it logs that call, and then modifies CPU registers to both redirect the call to an invalid syscall ID (effectively turning it into a no-op) and set the return value of that no-op call to one indicating success of the original call. As a result, the process believes that everything it is trying to do is actually happening, when in reality nothing is.
+
+Warning: You should be very, very careful when using this utility in a production system or in any systems you care about. It can still do serious damage, because it will block only a handful of syscalls.
+
+#### Installing "maybe"
+
+Make sure you have installed pip in your Linux system. If not, install it as shown below depending upon the distribution you use.
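+
+Before running any of the distribution-specific commands below, it may be worth checking whether pip is already present. The exact command name (pip, pip2, or pip3) varies between distributions, so treat this as an illustrative probe rather than part of the original instructions:
+
+```
+# Either of these prints a version string if pip is already installed.
+$ pip --version
+$ python3 -m pip --version
+```
+
+If one of them reports a version, you can skip straight to the "maybe" installation step further below.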
+
+On Arch Linux and its derivatives like Antergos, Manjaro Linux, install pip using the following command:
+
+```
+sudo pacman -S python-pip
+```
+
+On RHEL, CentOS:
+
+```
+sudo yum install epel-release
+```
+
+```
+sudo yum install python-pip
+```
+
+On Fedora:
+
+```
+sudo dnf install python-pip
+```
+
+On Debian, Ubuntu, Linux Mint:
+
+```
+sudo apt-get install python-pip
+```
+
+On SUSE, openSUSE:
+
+```
+sudo zypper install python-pip
+```
+
+Once pip is installed, run the following command to install "maybe".
+
+```
+sudo pip install maybe
+```
+
+#### Know What A Command Or Program Will Exactly Do Before Executing It
+
+Usage is absolutely easy! Just add "maybe" in front of a command that you want to execute.
+
+Allow me to show you an example.
+
+```
+$ maybe rm -r ostechnix/
+```
+
+As you can see, I am going to delete a folder called "ostechnix" from my system. Here is the sample output.
+
+```
+maybe has prevented rm -r ostechnix/ from performing 5 file system operations:
+
+ delete /home/sk/inboxer-0.4.0-x86_64.AppImage
+ delete /home/sk/Docker.pdf
+ delete /home/sk/Idhayathai Oru Nodi.mp3
+ delete /home/sk/dThmLbB334_1398236878432.jpg
+ delete /home/sk/ostechnix
+
+Do you want to rerun rm -r ostechnix/ and permit these operations? [y/N] y
+```
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png)][3]
+
+The "maybe" tool performs 5 file system operations and shows me what this command (rm -r ostechnix/) will exactly do. Now I can decide whether I should perform this operation or not. Cool, yeah? Indeed!
+
+Here is another example. I am going to install [Inboxer][4] desktop client for Gmail. This is what I got.
+
+```
+$ maybe ./inboxer-0.4.0-x86_64.AppImage
+fuse: bad mount point `/tmp/.mount_inboxemDzuGV': No such file or directory
+squashfuse 0.1.100 (c) 2012 Dave Vasilevsky
+
+Usage: /home/sk/Downloads/inboxer-0.4.0-x86_64.AppImage [options] ARCHIVE MOUNTPOINT
+
+FUSE options:
+ -d -o debug enable debug output (implies -f)
+ -f foreground operation
+ -s disable multi-threaded operation
+
+open dir error: No such file or directory
+maybe has prevented ./inboxer-0.4.0-x86_64.AppImage from performing 1 file system operations:
+
+create directory /tmp/.mount_inboxemDzuGV
+
+Do you want to rerun ./inboxer-0.4.0-x86_64.AppImage and permit these operations? [y/N]
+```
+
+If it does not detect any file system operations, it will simply display a result like the one below.
+
+For instance, I run this command to update my Arch Linux.
+
+```
+$ maybe sudo pacman -Syu
+sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
+maybe has not detected any file system operations from sudo pacman -Syu.
+```
+
+See? It didn't detect any file system operations, so there were no warnings. This is absolutely brilliant and exactly what I was looking for. From now on, I can easily know what a command or a program will do even before executing it. I hope this will be useful to you too. More good stuff to come. Stay tuned!
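+
+One optional habit worth sketching here (this is an illustrative addition, not from the original write-up): wrapping "maybe" in a short shell alias, so that dry-running a destructive command stays a one-word reflex. The alias name is invented:
+
+```
+# Hypothetical convenience alias; add it to ~/.bashrc or ~/.zshrc.
+alias dry='maybe'
+
+# Preview a risky command first, then run it for real if the report looks sane.
+dry rm -r ~/old-builds
+```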
+
+Cheers!
+
+Resource:
+
+* ["maybe" GitHub page][1]
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/know-command-program-will-exactly-executing/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://github.com/p-e-w/maybe
+[2]:https://www.ostechnix.com/explainshell-find-part-linux-command/
+[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png
+[4]:https://www.ostechnix.com/inboxer-unofficial-google-inbox-desktop-client/

From 512ed82739b4a794f83043a20bdd314f157b1e7d Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 7 Dec 2017 16:25:33 +0800
Subject: [PATCH 098/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20NETSTAT=20Command?=
 =?UTF-8?q?:=20Learn=20to=20use=20netstat=20with=20examples?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...mand Learn to use netstat with examples.md | 112 ++++++++++++++++++
 1 file changed, 112 insertions(+)
 create mode 100644 sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md

diff --git a/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md b/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md
new file mode 100644
index 0000000000..1f3b8976b1
--- /dev/null
+++ b/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md
@@ -0,0 +1,112 @@
+translating by lujun9972
+NETSTAT Command: Learn to use netstat with examples
+======
+Netstat is a command line utility that tells us about all the tcp/udp/unix socket connections on our system. It provides a list of all connections that are currently established or are in waiting state. This tool is extremely useful in identifying the port numbers on which an application is working and we can also make sure if an application is working or not on the port it is supposed to work on.
+
+Netstat command also displays various other network related information such as routing tables, interface statistics, masquerade connections, multicast memberships etc.,
+
+In this tutorial, we will learn about Netstat with examples.
+
+(Recommended Read: [Learn to use CURL command with examples][1] )
+
+Netstat with examples
+============================================================
+
+### 1- Checking all connections
+
+To list out all the connections on a system, we can use 'a' option with netstat command,
+
+$ netstat -a
+
+This will produce all tcp, udp & unix connections from the system.
+
+### 2- Checking all tcp or udp or unix socket connections
+
+To list only the tcp connections on our system, use 't' option with netstat,
+
+$ netstat -at
+
+Similarly to list out only the udp connections on our system, we can use 'u' option with netstat,
+
+$ netstat -au
+
+To only list out Unix socket connections, we can use 'x' option,
+
+$ netstat -ax
+
+### 3- List process ID/process name
+
+To get list of all connections along with PID or process name, we can use 'p' option & it can be used in combination with any other netstat option,
+
+$ netstat -ap
+
+### 4- List only port number & not the name
+
+To speed up our output, we can use 'n' option as it will not perform any reverse lookup & produce output with only numbers. Since no lookup is performed, our output will be much faster.
+
+$ netstat -an
+
+### 5- Print only listening ports
+
+To print only the listening ports, we will use 'l' option with netstat. It will not be used with 'a' as it prints all ports,
+
+$ netstat -l
+
+### 6- Print network stats
+
+To print network statistics of each protocol like packet received or transmitted, we can use 's' options with netstat,
+
+$ netstat -s
+
+### 7- Print interfaces stats
+
+To display only the statistics on network interfaces, use 'i' option,
+
+$ netstat -i
+
+### 8-Display multicast group information
+
+With option 'g', we can print the multicast group information for IPV4 & IPV6,
+
+$ netstat -g
+
+### 9- Display the network routing information
+
+To print the network routing information, use 'r' option,
+
+$ netstat -r
+
+### 10- Continuous output
+
+To get continuous output of netstat, use 'c' option
+
+$ netstat -c
+
+### 11- Filtering a single port
+
+To filter a single port connections, we can combine 'grep' command with netstat,
+
+$ netstat -anp | grep 3306
+
+### 12- Count number of connections
+
+To count the number of connections on a port, we can further add 'wc' command with netstat & grep command,
+
+$ netstat -anp | grep 3306 | wc -l
+
+This will print the number of connections for the MySQL port, i.e., 3306.
+
+This was our brief tutorial on Netstat with examples, hope it was informative enough. If you have any query or suggestion, please mention it in the comment box below.
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/learn-use-netstat-with-examples/
+
+作者:[Shusain][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/learn-use-curl-command-examples/

From 01e5f3c72ac28a29ddd55716eba723a58e9ef48f Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 7 Dec 2017 18:28:36 +0800
Subject: [PATCH 099/236] translating by lujun9972

---
 ...0171205 NETSTAT Command Learn to use netstat with examples.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md b/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md
index 1f3b8976b1..4001ab5c08 100644
--- a/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md
+++ b/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md
@@ -1,4 +1,5 @@
translating by lujun9972
+translating by lujun9972
NETSTAT Command: Learn to use netstat with examples
======
Netstat is a command line utility that tells us about all the tcp/udp/unix socket connections on our system. It provides a list of all connections that are currently established or are in waiting state. This tool is extremely useful in identifying the port numbers on which an application is working and we can also make sure if an application is working or not on the port it is supposed to work on.
From 28d7268898251b27cfa2db99875ad3fde8f12e00 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?=
Date: Thu, 7 Dec 2017 19:38:02 +0800
Subject: [PATCH 100/236] =?UTF-8?q?=E9=80=89=E9=A2=98=20How=20to=20use=20c?=
 =?UTF-8?q?ron=20in=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 .../20171207 How to use cron in Linux.md | 288 ++++++++++++++++++
 1 file changed, 288 insertions(+)
 create mode 100644 sources/tech/20171207 How to use cron in Linux.md

diff --git a/sources/tech/20171207 How to use cron in Linux.md b/sources/tech/20171207 How to use cron in Linux.md
new file mode 100644
index 0000000000..3165aa8139
--- /dev/null
+++ b/sources/tech/20171207 How to use cron in Linux.md
@@ -0,0 +1,288 @@
+translating by yongshouzhang
+
+How to use cron in Linux
+============================================================
+
+### No time for commands? Scheduling tasks with cron means programs can run but you don't have to stay up late.
+
+ [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/david-crop.jpg?itok=Wnz6HdS0)][10] 06 Nov 2017 [David Both][11]
+
+![How to use cron in Linux](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_)
+
+Image by: [Internet Archive Book Images][15]. Modified by Opensource.com. [CC BY-SA 4.0][16]
+
+One of the challenges (among the many advantages) of being a sysadmin is running tasks when you'd rather be sleeping. For example, some tasks (including regularly recurring tasks) need to run overnight or on weekends, when no one is expected to be using computer resources. I have no time to spare in the evenings to run commands and scripts that have to operate during off-hours. And I don't want to have to get up at oh-dark-hundred to start a backup or major update.
+
+Instead, I use two service utilities that allow me to run commands, programs, and tasks at predetermined times. The [cron][17] and at services enable sysadmins to schedule tasks to run at a specific time in the future. The at service specifies a one-time task that runs at a certain time. The cron service can schedule tasks on a repetitive basis, such as daily, weekly, or monthly.
+
+In this article, I'll introduce the cron service and how to use it.
+
+### Common (and uncommon) cron uses
+
+I use the cron service to schedule obvious things, such as regular backups that occur daily at 2 a.m. I also use it for less obvious things.
+
+* The system times (i.e., the operating system time) on my many computers are set using the Network Time Protocol (NTP). While NTP sets the system time, it does not set the hardware time, which can drift. I use cron to set the hardware time based on the system time.
+
+* I also have a Bash program I run early every morning that creates a new "message of the day" (MOTD) on each computer. It contains information, such as disk usage, that should be current in order to be useful.
+
+* Many system processes and services, like [Logwatch][1], [logrotate][2], and [Rootkit Hunter][3], use the cron service to schedule tasks and run programs every day.
+
+The crond daemon is the background service that enables cron functionality.
+
+The cron service checks for files in the /var/spool/cron and /etc/cron.d directories and the /etc/anacrontab file. The contents of these files define cron jobs that are to be run at various intervals. The individual user cron files are located in /var/spool/cron, and system services and applications generally add cron job files in the /etc/cron.d directory. The /etc/anacrontab is a special case that will be covered later in this article.
+
+### Using crontab
+
+The cron utility runs based on commands specified in a cron table (crontab). Each user, including root, can have a cron file. These files don't exist by default, but can be created in the /var/spool/cron directory using the crontab -e command that's also used to edit a cron file (see the script below). I strongly recommend that you not use a standard editor (such as Vi, Vim, Emacs, Nano, or any of the many other editors that are available). Using the crontab command not only allows you to edit the file, it also restarts the crond daemon when you save and exit the editor. The crontab command uses Vi as its underlying editor, because Vi is always present (on even the most basic of installations).
+
+New cron files are empty, so commands must be added from scratch. I added the job definition example below to my own cron files, just as a quick reference, so I know what the various parts of a command mean. Feel free to copy it for your own use.
+
+```
+# crontab -e
+SHELL=/bin/bash
+MAILTO=root@example.com
+PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
+
+# For details see man 4 crontabs
+
+# Example of job definition:
+# .---------------- minute (0 - 59)
+# | .------------- hour (0 - 23)
+# | | .---------- day of month (1 - 31)
+# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
+# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
+# | | | | |
+# * * * * * user-name command to be executed

+# backup using the rsbu program to the internal 4TB HDD and then 4TB external
+01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2
+
+# Set the hardware clock to keep it in sync with the more accurate system clock
+03 05 * * * /sbin/hwclock --systohc
+
+# Perform monthly updates on the first of the month
+# 25 04 1 * * /usr/bin/dnf -y update
+```
+
+The first three lines in the code above set up a default environment. The environment must be set to whatever is necessary for a given user because cron does not provide an environment of any kind. The SHELL variable specifies the shell to use when commands are executed. This example specifies the Bash shell. The MAILTO variable sets the email address where cron job results will be sent. These emails can provide the status of the cron job (backups, updates, etc.) and consist of the output you would see if you ran the program manually from the command line. The third line sets up the PATH for the environment. Even though the path is set here, I always prepend the fully qualified path to each executable.
+
+There are several comment lines in the example above that detail the syntax required to define a cron job. I'll break those commands down, then add a few more to show you some more advanced capabilities of crontab files.
+
+```
+01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2
+```
+
+This line runs my self-written Bash shell script, rsbu, that backs up all my systems. This job kicks off at 1:01 a.m. (01 01) every day. The asterisks (*) in positions three, four, and five of the time specification are like file globs, or wildcards, for other time divisions; they specify "every day of the month," "every month," and "every day of the week." This line runs my backups twice; one backs up to an internal dedicated backup hard drive, and the other backs up to an external USB drive that I can take to the safe deposit box.
+
+The following line sets the hardware clock on the computer using the system clock as the source of an accurate time. This line is set to run at 5:03 a.m. (03 05) every day.
+
+```
+03 05 * * * /sbin/hwclock --systohc
+```
+
+I was using the third and final cron job (commented out) to perform a dnf or yum update at 04:25 a.m. on the first day of each month, but I commented it out so it no longer runs.
+
+```
+# 25 04 1 * * /usr/bin/dnf -y update
+```
+
+### Other scheduling tricks
+
+Now let's do some things that are a little more interesting than these basics. Suppose you want to run a particular job every Thursday at 3 p.m.:
+
+```
+00 15 * * Thu /usr/local/bin/mycronjob.sh
+```
+
+Or, maybe you need to run quarterly reports after the end of each quarter. The cron service has no option for "The last day of the month," so instead you can use the first day of the following month, as shown below. (This assumes that the data needed for the reports will be ready when the job is set to run.)
+
+```
+02 03 1 1,4,7,10 * /usr/local/bin/reports.sh
+```
+
+The following shows a job that runs one minute past every hour between 9:01 a.m. and 5:01 p.m.
+
+```
+01 09-17 * * * /usr/local/bin/hourlyreminder.sh
+```
+
+I have encountered situations where I need to run a job every two, three, or four hours. That can be accomplished by dividing the hours by the desired interval, such as */3 for every three hours, or 6-18/3 to run every three hours between 6 a.m. and 6 p.m. Other intervals can be divided similarly; for example, the expression */15 in the minutes position means "run the job every 15 minutes."
+
+```
+*/5 08-18/2 * * * /usr/local/bin/mycronjob.sh
+```
+
+One thing to note: The division expressions must result in a remainder of zero for the job to run. That's why, in this example, the job is set to run every five minutes (08:05, 08:10, 08:15, etc.) during even-numbered hours from 8 a.m. to 6 p.m., but not during any odd-numbered hours. For example, the job will not run at all from 9 a.m. to 9:59 a.m.
+
+I am sure you can come up with many other possibilities based on these examples.
+
+### Limiting cron access
+
+Regular users with cron access could make mistakes that, for example, might cause system resources (such as memory and CPU time) to be swamped. To prevent possible misuse, the sysadmin can limit user access by creating a **/etc/cron.allow** file that contains a list of all users with permission to create cron jobs. The root user cannot be prevented from using cron.
+
+If non-root users are prevented from creating their own cron jobs, it may be necessary for root to add their cron jobs to the root crontab. "But wait!" you say. "Doesn't that run those jobs as root?" Not necessarily. In the first example in this article, the username field shown in the comments can be used to specify the user ID a job is to have when it runs. This prevents the specified non-root user's jobs from running as root. The following example shows a job definition that runs a job as the user "student":
+
+```
+04 07 * * * student /usr/local/bin/mycronjob.sh
+```
+
+### cron.d
+
+The directory /etc/cron.d is where some applications, such as [SpamAssassin][18] and [sysstat][19], install cron files. Because there is no spamassassin or sysstat user, these programs need a place to locate cron files, so they are placed in /etc/cron.d.
+
+The /etc/cron.d/sysstat file below contains cron jobs that relate to system activity reporting (SAR). These cron files have the same format as a user cron file.
+
+```
+# Run system activity accounting tool every 10 minutes
+*/10 * * * * root /usr/lib64/sa/sa1 1 1
+# Generate a daily summary of process accounting at 23:53
+53 23 * * * root /usr/lib64/sa/sa2 -A
+```
+
+The sysstat cron file has two lines that perform tasks. The first line runs the sa1 program every 10 minutes to collect data stored in special binary files in the /var/log/sa directory. Then, every night at 23:53, the sa2 program runs to create a daily summary.
+
+### Scheduling tips
+
+Some of the times I set in the crontab files seem rather random—and to some extent they are. Trying to schedule cron jobs can be challenging, especially as the number of jobs increases. I usually have only a few tasks to schedule on each of my computers, which is simpler than in some of the production and lab environments where I have worked.
+
+One system I administered had around a dozen cron jobs that ran every night and an additional three or four that ran on weekends or the first of the month. That was a challenge, because if too many jobs ran at the same time—especially the backups and compiles—the system would run out of RAM and nearly fill the swap file, which resulted in system thrashing while performance tanked, so nothing got done. We added more memory and improved how we scheduled tasks. We also removed a task that was very poorly written and used large amounts of memory.
+
+The crond service assumes that the host computer runs all the time. That means that if the computer is turned off during a period when cron jobs were scheduled to run, they will not run until the next time they are scheduled. This might cause problems if they are critical cron jobs. Fortunately, there is another option for running jobs at regular intervals: anacron.
+
+### anacron
+
+The [anacron][20] program performs the same function as crond, but it adds the ability to run jobs that were skipped, such as if the computer was off or otherwise unable to run the job for one or more cycles. This is very useful for laptops and other computers that are turned off or put into sleep mode.
+
+As soon as the computer is turned on and booted, anacron checks to see whether configured jobs missed their last scheduled run. If they have, those jobs run immediately, but only once (no matter how many cycles have been missed). For example, if a weekly job was not run for three weeks because the system was shut down while you were on vacation, it would be run soon after you turn the computer on, but only once, not three times.
+
+The anacron program provides some easy options for running regularly scheduled tasks. Just install your scripts in the /etc/cron.[hourly|daily|weekly|monthly] directories, depending on how frequently they need to be run.
+
+How does this work? The sequence is simpler than it first appears.
+
+1. The crond service runs the cron job specified in /etc/cron.d/0hourly.
+
+```
+# Run the hourly jobs
+SHELL=/bin/bash
+PATH=/sbin:/bin:/usr/sbin:/usr/bin
+MAILTO=root
+01 * * * * root run-parts /etc/cron.hourly
+```
+
+2. The cron job specified in /etc/cron.d/0hourly runs the run-parts program once per hour.
+
+3. The run-parts program runs all the scripts located in the /etc/cron.hourly directory.
+
+4. The /etc/cron.hourly directory contains the 0anacron script, which runs the anacron program using the /etc/anacrontab configuration file shown here.
+
+```
+# /etc/anacrontab: configuration file for anacron

+# See anacron(8) and anacrontab(5) for details.

+SHELL=/bin/sh
+PATH=/sbin:/bin:/usr/sbin:/usr/bin
+MAILTO=root
+# the maximal random delay added to the base delay of the jobs
+RANDOM_DELAY=45
+# the jobs will be started during the following hours only
+START_HOURS_RANGE=3-22

+#period in days delay in minutes job-identifier command
+1 5 cron.daily nice run-parts /etc/cron.daily
+7 25 cron.weekly nice run-parts /etc/cron.weekly
+@monthly 45 cron.monthly nice run-parts /etc/cron.monthly
+```
+
+5. The anacron program runs the programs located in /etc/cron.daily once per day; it runs the jobs located in /etc/cron.weekly once per week, and the jobs in cron.monthly once per month. Note the specified delay times in each line that help prevent these jobs from overlapping themselves and other cron jobs.
+
+Instead of placing complete Bash programs in the cron.X directories, I install them in the /usr/local/bin directory, which allows me to run them easily from the command line. Then I add a symlink in the appropriate cron directory, such as /etc/cron.daily.
+
+The anacron program is not designed to run programs at specific times. Rather, it is intended to run programs at intervals that begin at the specified times, such as 3 a.m. (see the START_HOURS_RANGE line in the script just above) of each day, on Sunday (to begin the week), and on the first day of the month. If any one or more cycles are missed, anacron will run the missed jobs once, as soon as possible.
+
+### More on setting limits
+
+I use most of these methods for scheduling tasks to run on my computers. All those tasks are ones that need to run with root privileges. It's rare in my experience that regular users really need a cron job. One case was a developer user who needed a cron job to kick off a daily compile in a development lab.
+
+It is important to restrict access to cron functions by non-root users. However, there are circumstances when a user needs to set a task to run at pre-specified times, and cron can allow them to do that. Many users do not understand how to properly configure these tasks using cron and they make mistakes. Those mistakes may be harmless, but, more often than not, they can cause problems. By setting functional policies that cause users to interact with the sysadmin, individual cron jobs are much less likely to interfere with other users and other system functions.
+
+It is possible to set limits on the total resources that can be allocated to individual users or groups, but that is an article for another time.
+
+For more information, the man pages for [cron][21], [crontab][22], [anacron][23], [anacrontab][24], and [run-parts][25] all have excellent information and descriptions of how the cron system works.
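+
+To tie the access-control and scheduling ideas above together, here is one small closing sketch; the usernames, the drop-in file name, and the script path are all hypothetical:
+
+```
+# /etc/cron.allow -- only the users listed here may use the crontab
+# command (root cannot be locked out).
+student
+dev1
+
+# /etc/cron.d/builds -- a drop-in job in the system crontab format,
+# which runs as the user dev1 rather than as root, at 16:30 on
+# weekdays (Monday through Friday).
+30 16 * * 1-5 dev1 /usr/local/bin/weekday-build.sh
+```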
+ +### Topics + + [Linux][26][SysAdmin][27] + +### About the author + + [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/david-crop.jpg?itok=oePpOpyV)][28] David Both + +- + + David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981\. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for... [more about David Both][29][More about me][30] + +* [Learn how you can contribute][9] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/11/how-use-cron-linux + +作者:[David Both ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[1]:https://sourceforge.net/projects/logwatch/files/ +[2]:https://github.com/logrotate/logrotate +[3]:http://rkhunter.sourceforge.net/ +[4]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[5]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[6]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[7]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[8]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[9]:https://opensource.com/participate +[10]:https://opensource.com/users/dboth +[11]:https://opensource.com/users/dboth +[12]:https://opensource.com/user/14106/feed +[13]:https://opensource.com/article/17/11/how-use-cron-linux?rate=9R7lrdQXsne44wxIh0Wu91ytYaxxi86zT1-uHo1a1IU +[14]:https://opensource.com/article/17/11/how-use-cron-linux#comments +[15]:https://www.flickr.com/photos/internetarchivebookimages/20570945848/in/photolist-xkMtw9-xA5zGL-tEQLWZ-wFwzFM-aNwxgn-aFdWBj-uyFKYv-7ZCCBU-obY1yX-UAPafA-otBzDF-ovdDo6-7doxUH-obYkeH-9XbHKV-8Zk4qi-apz7Ky-apz8Qu-8ZoaWG-orziEy-aNwxC6-od8NTv-apwpMr-8Zk4vn-UAP9Sb-otVa3R-apz6Cb-9EMPj6-eKfyEL-cv5mwu-otTtHk-7YjK1J-ovhxf6-otCg2K-8ZoaJf-UAPakL-8Zo8j7-8Zk74v-otp4Ls-8Zo8h7-i7xvpR-otSosT-9EMPja-8Zk6Zi-XHpSDB-hLkuF3-of24Gf-ouN1Gv-fJzkJS-icfbY9 +[16]:https://creativecommons.org/licenses/by-sa/4.0/ +[17]:https://en.wikipedia.org/wiki/Cron +[18]:http://spamassassin.apache.org/ +[19]:https://github.com/sysstat/sysstat +[20]:https://en.wikipedia.org/wiki/Anacron +[21]:http://man7.org/linux/man-pages/man8/cron.8.html +[22]:http://man7.org/linux/man-pages/man5/crontab.5.html +[23]:http://man7.org/linux/man-pages/man8/anacron.8.html +[24]:http://man7.org/linux/man-pages/man5/anacrontab.5.html +[25]:http://manpages.ubuntu.com/manpages/zesty/man8/run-parts.8.html +[26]:https://opensource.com/tags/linux +[27]:https://opensource.com/tags/sysadmin +[28]:https://opensource.com/users/dboth +[29]:https://opensource.com/users/dboth +[30]:https://opensource.com/users/dboth From 62f0b7686a710da078c84e24922bc6dd828429fe Mon Sep 17 00:00:00 2001 From: darksun Date: 
Thu, 7 Dec 2017 20:48:57 +0800
Subject: [PATCH 101/236] translated

---
 ...mand Learn to use netstat with examples.md | 99 ++++++++++++-------
 1 file changed, 62 insertions(+), 37 deletions(-)

diff --git a/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md b/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md
index 4001ab5c08..b2b7175749 100644
--- a/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md
+++ b/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md
@@ -1,103 +1,128 @@
-translating by lujun9972
-translating by lujun9972
-NETSTAT Command: Learn to use netstat with examples
+NETSTAT 命令: 通过案例学习使用 netstat
======
-Netstat is a command line utility that tells us about all the tcp/udp/unix socket connections on our system. It provides a list of all connections that are currently established or are in waiting state. This tool is extremely useful in identifying the port numbers on which an application is working and we can also make sure if an application is working or not on the port it is supposed to work on.
+Netstat 是一个告诉我们系统中所有 tcp/udp/unix socket 连接状态的命令行工具。它会列出所有已经连接或者处于等待连接状态的连接。该工具在识别某个应用监听哪个端口时特别有用,我们也能用它来判断某个应用是否正常地在监听某个端口。

-Netstat command also displays various other network related information such as routing tables, interface statistics, masquerade connections, multicast memberships etc.,
+Netstat 命令还能显示其他各种各样的网络相关信息,例如路由表、网卡统计信息、伪装(masquerade)连接以及多播成员等。

-In this tutorial, we will learn about Netstat with examples.
+本文中,我们会通过几个例子来学习 Netstat。

-(Recommended Read: [Learn to use CURL command with examples][1] )
+(推荐阅读: [Learn to use CURL command with examples][1] )

Netstat with examples
============================================================

-### 1- Checking all connections
-
-To list out all the connections on a system, we can use 'a' option with netstat command,
+### 1- 检查所有的连接

+使用 `a` 选项可以列出系统中的所有连接,
+```shell
$ netstat -a
+```

-This will produce all tcp, udp & unix connections from the system.
+这会显示系统所有的 tcp、udp 以及 unix 连接。

-### 2- Checking all tcp or udp or unix socket connections
-
-To list only the tcp connections on our system, use 't' option with netstat,
+### 2- 检查所有的 tcp/udp/unix socket 连接
+
+使用 `t` 选项只列出 tcp 连接,
+```shell
$ netstat -at
+```

-Similarly to list out only the udp connections on our system, we can use 'u' option with netstat,
+类似的,使用 `u` 选项只列出 udp 连接,
+```shell
$ netstat -au
+```

-To only list out Unix socket connections, we can use 'x' option,
+使用 `x` 选项只列出 Unix socket 连接,
+```shell
$ netstat -ax
+```

-### 3- List process ID/process name
+### 3- 同时列出进程 ID/进程名称

-To get list of all connections along with PID or process name, we can use 'p' option & it can be used in combination with any other netstat option,
+使用 `p` 选项可以在列出连接的同时也显示 PID 或者进程名称,而且它还能与其他选项连用,
+```shell
$ netstat -ap
+```

-### 4- List only port number & not the name
+### 4- 列出端口号而不是服务名

-To speed up our output, we can use 'n' option as it will not perform any reverse lookup & produce output with only numbers. Since no lookup is performed, our output will be much faster.
+使用 `n` 选项可以加快输出,它不会执行任何反向查询,而是直接输出数字。由于无需查询,因此结果输出会快很多。
+```shell
$ netstat -an
+```

-### 5- Print only listening ports

-To print only the listening ports, we will use 'l' option with netstat.
It will not be used with 'a' as it prints all ports,
+### 5- 只输出监听端口

+使用 `l` 选项只输出监听端口。它不能与 `a` 选项连用,因为 `a` 会输出所有端口,
+```shell
$ netstat -l
+```

-### 6- Print network stats

-To print network statistics of each protocol like packet received or transmitted, we can use 's' options with netstat,
+### 6- 输出网络统计信息

+使用 `s` 选项输出每个协议的统计信息,包括接收/发送的包数量
+```shell
$ netstat -s
+```

-### 7- Print interfaces stats

-To display only the statistics on network interfaces, use 'i' option,
+### 7- 输出网卡统计信息

+使用 `i` 选项只显示网卡的统计信息,
+```shell
$ netstat -i
+```

-### 8-Display multicast group information

-With option 'g', we can print the multicast group information for IPV4 & IPV6,
+### 8- 显示多播组(multicast group)信息

+使用 `g` 选项输出 IPV4 以及 IPV6 的多播组信息,
+```shell
$ netstat -g
+```

-### 9- Display the network routing information

-To print the network routing information, use 'r' option,
+### 9- 显示网络路由信息

+使用 `r` 输出网络路由信息,
+```shell
$ netstat -r
+```

-### 10- Continuous output

-To get continuous output of netstat, use 'c' option
+### 10- 持续输出

+使用 `c` 选项持续输出结果
+```shell
$ netstat -c
+```

-### 11- Filtering a single port

-To filter a single port connections, we can combine 'grep' command with netstat,
+### 11- 过滤出某个端口

+与 `grep` 连用来过滤出某个端口的连接,
+```shell
$ netstat -anp | grep 3306
+```

-### 12- Count number of connections

-To count the number of connections on a port, we can further add 'wc' command with netstat & grep command,
+### 12- 统计连接个数

+通过与 wc 和 grep 命令连用,可以统计指定端口的连接数量
+```shell
$ netstat -anp | grep 3306 | wc -l
+```

-This will print the number of connections for the MySQL port, i.e., 3306.
+这会输出 mysql 服务端口(即 3306)的连接数。

-This was our brief tutorial on Netstat with examples, hope it was informative enough. If you have any query or suggestion, please mention it in the comment box below.
+这就是我们简短的案例指南了,希望它带给你的信息量足够。有任何疑问欢迎提出。

--------------------------------------------------------------------------------

From fac25d49bafe4d827dccd879e1a9e70eaa57d840 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 7 Dec 2017 20:50:31 +0800
Subject: [PATCH 102/236] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=AF=95?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...20171205 NETSTAT Command Learn to use netstat with examples.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {sources => translated}/tech/20171205 NETSTAT Command Learn to use netstat with examples.md (100%)

diff --git a/sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md b/translated/tech/20171205 NETSTAT Command Learn to use netstat with examples.md
similarity index 100%
rename from sources/tech/20171205 NETSTAT Command Learn to use netstat with examples.md
rename to translated/tech/20171205 NETSTAT Command Learn to use netstat with examples.md

From e0c14254869c1652918c0d5cfae4c12d1345bba4 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 7 Dec 2017 20:58:58 +0800
Subject: [PATCH 103/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Use=20?=
 =?UTF-8?q?the=20Date=20Command=20in=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...05 How to Use the Date Command in Linux.md | 164 ++++++++++++++++++
 1 file changed, 164 insertions(+)
 create mode 100644 sources/tech/20171205 How to Use the Date Command in Linux.md

diff --git a/sources/tech/20171205 How to Use the Date Command in Linux.md b/sources/tech/20171205 How to Use the Date Command in Linux.md
new file mode 100644
index 0000000000..0c0f9a31aa
--- /dev/null
+++ b/sources/tech/20171205 How to Use the Date Command in Linux.md
@@ -0,0 +1,164 @@
+translating by lujun9972
+How to Use the Date Command in Linux
+======
+In this post, we will show you some examples on how to use the date command in Linux. The date command can be used to print or set the system date and time. Using the date command in Linux is simple, just follow the examples and the syntax below.
+
+By default when running the date command in Linux, without any arguments it will display the current system date and time:
+
+```
+date
+```
+
+```
+Sat 2 Dec 12:34:12 CST 2017
+```
+
+#### Syntax
+
+```
+Usage: date [OPTION]... [+FORMAT]
+ or: date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]
+Display the current time in the given FORMAT, or set the system date.
+
+```
+
+### Date examples
+
+The following examples will show you how to use the date command to find the date and time from a period of time in the past or future.
+
+### 1. Find the date 5 weeks in the future
+
+```
+date -d "5 weeks"
+Sun Jan 7 19:53:50 CST 2018
+
+```
+
+### 2. Find the date 5 weeks and 4 days in the future
+
+```
+date -d "5 weeks 4 days"
+Thu Jan 11 19:55:35 CST 2018
+
+```
+
+### 3. Get the next month date
+
+```
+date -d "next month"
+Wed Jan 3 19:57:43 CST 2018
+```
+
+### 4. Get the last Sunday date
+
+```
+date -d last-sunday
+Sun Nov 26 00:00:00 CST 2017
+```
+
+The date command comes with various formatting options, the following examples will show you how to format the date command output.
+
+### 5. Display the date in yyyy-mm-dd format
+
+```
+date +"%F"
+2017-12-03
+```
+
+### 6. Display date in mm/dd/yyyy format
+
+```
+date +"%m/%d/%Y"
+12/03/2017
+
+```
+
+### 7. Display only the time
+
+```
+date +"%T"
+20:07:04
+
+```
+
+### 8. Display the day of the year
+
+```
+date +"%j"
+337
+
+```
+
+### 9. Formatting Options
+
+| **%%** | A literal percent sign (“**%**“). |
+| **%a** | The abbreviated weekday name (e.g., **Sun**). |
+| **%A** | The full weekday name (e.g., **Sunday**). |
+| **%b** | The abbreviated month name (e.g., **Jan**). |
+| **%B** | Locale’s full month name (e.g., **January**). |
+| **%c** | The date and time (e.g., **Thu Mar 3 23:05:25 2005**). |
+| **%C** | The current century; like **%Y**, except omit last two digits (e.g., **20**). |
+| **%d** | Day of month (e.g., **01**). |
+| **%D** | Date; same as **%m/%d/%y**. |
+| **%e** | Day of month, space padded; same as **%_d**. |
+| **%F** | Full date; same as **%Y-%m-%d**. |
+| **%g** | Last two digits of year of ISO week number (see **%G**). |
+| **%G** | Year of ISO week number (see **%V**); normally useful only with **%V**. |
+| **%h** | Same as **%b**. |
+| **%H** | Hour (**00**..**23**). |
+| **%I** | Hour (**01**..**12**). |
+| **%j** | Day of year (**001**..**366**). |
+| **%k** | Hour, space padded ( **0**..**23**); same as **%_H**. |
+| **%l** | Hour, space padded ( **1**..**12**); same as **%_I**. |
+| **%m** | Month (**01**..**12**). |
+| **%M** | Minute (**00**..**59**). |
+| **%n** | A newline. |
+| **%N** | Nanoseconds (**000000000**..**999999999**). |
+| **%p** | Locale’s equivalent of either **AM** or **PM**; blank if not known. |
+| **%P** | Like **%p**, but lower case. |
+| **%r** | Locale’s 12-hour clock time (e.g., **11:11:04 PM**). |
+| **%R** | 24-hour hour and minute; same as **%H:%M**. |
+| **%s** | Seconds since 1970-01-01 00:00:00 UTC. |
+| **%S** | Second (**00**..**60**). |
+| **%t** | A tab. |
+| **%T** | Time; same as **%H:%M:%S**. |
+| **%u** | Day of week (**1**..**7**); 1 is **Monday**. |
+| **%U** | Week number of year, with Sunday as first day of week (**00**..**53**). |
+| **%V** | ISO week number, with Monday as first day of week (**01**..**53**). |
+| **%w** | Day of week (**0**..**6**); 0 is **Sunday**. |
+| **%W** | Week number of year, with Monday as first day of week (**00**..**53**). |
+| **%x** | Locale’s date representation (e.g., **12/31/99**). |
+| **%X** | Locale’s time representation (e.g., **23:13:48**). |
+| **%y** | Last two digits of year (**00**..**99**). |
+| **%Y** | Year. |
+| **%z** | +hhmm numeric time zone (e.g., **-0400**). |
+| **%:z** | +hh:mm numeric time zone (e.g., **-04:00**). |
+| **%::z** | +hh:mm:ss numeric time zone (e.g., **-04:00:00**). |
+| **%:::z** | Numeric time zone with “**:**” to necessary precision (e.g., **-04**, **+05:30**). |
+| **%Z** | Alphabetic time zone abbreviation (e.g., EDT). |
+
+### 10. Set the system clock
+
+With the date command in Linux, you can also manually set the system clock using the --set switch. In the following example, we will set the system date to 4:22 p.m. on August 30, 2017:
+
+```
+date --set="20170830 16:22"
+
+```
+
+Of course, if you use one of our [VPS Hosting services][1], you can always contact and ask our expert Linux admins (via chat or ticket) about the date command in Linux and anything related to date examples on Linux. They are available 24×7 and will provide information or assistance immediately.
+
+PS. If you liked this post on How to Use the Date Command in Linux please share it with your friends on the social networks using the buttons below or simply leave a reply. Thanks.
+
+--------------------------------------------------------------------------------
+
+via: https://www.rosehosting.com/blog/use-the-date-command-in-linux/
+
+作者:[][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.rosehosting.com
+[1]:https://www.rosehosting.com/hosting-services.html

From df080cad71b795d893839fbecd8fb638688a5596 Mon Sep 17 00:00:00 2001
From: imquanquan
Date: Thu, 7 Dec 2017 21:22:20 +0800
Subject: [PATCH 104/236] apply for translation

---
 ... A Command Or Program Will Exactly Do Before Executing It.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md b/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md
index 417be5b294..21ac6f86d1 100644
--- a/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md
+++ b/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md
@@ -1,3 +1,5 @@
+translating--imquanquan
+
How To Know What A Command Or Program Will Exactly Do Before Executing It
======
Ever wondered what a Unix command will do before executing it? Not everyone knows what a particular command or program will do. Of course, you can check it with [Explainshell][2]. You need to copy/paste the command on the Explainshell website, and it lets you know what each part of a Linux command does. However, that is not necessary anymore. Now, we can easily know what a command or program will exactly do before executing it, right from the Terminal. Say hello to "maybe", a simple tool that allows you to run a command and see what it does to your files without actually doing it! After reviewing the output listed, you can then decide whether you really want to run it or not.
From 70212482a249d7627cce9072f4d312f7953bbd04 Mon Sep 17 00:00:00 2001
From: imquanquan
Date: Thu, 7 Dec 2017 21:25:54 +0800
Subject: [PATCH 105/236] apply for translation

---
 ... A Command Or Program Will Exactly Do Before Executing It.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md b/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md
index 21ac6f86d1..bc7b2c9cb2 100644
--- a/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md
+++ b/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md
@@ -1,4 +1,4 @@
-translating--imquanquan
+translating by imquanquan

How To Know What A Command Or Program Will Exactly Do Before Executing It
======

From 25b6a4a5fa620555322d90eb4531911b0c51c837 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?=
Date: Thu, 7 Dec 2017 22:49:53 +0800
Subject: [PATCH 106/236] Update 20171207 How to use cron in Linux.md

---
 .../tech/20171207 How to use cron in Linux.md | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/sources/tech/20171207 How to use cron in Linux.md b/sources/tech/20171207 How to use cron in Linux.md
index 3165aa8139..3df9c1b402 100644
--- a/sources/tech/20171207 How to use cron in Linux.md
+++ b/sources/tech/20171207 How to use cron in Linux.md
@@ -1,28 +1,27 @@
translating by yongshouzhang

-How to use cron in Linux
+如何在 Linux 中使用 cron
============================================================

-### No time for commands? Scheduling tasks with cron means programs can run but you don't have to stay up late.
+### 没有时间键入命令?使用 cron 调度任务意味着你不必熬夜守着程序,就可以让它运行。

 [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/david-crop.jpg?itok=Wnz6HdS0)][10] 06 Nov 2017 [David Both][11]

-![How to use cron in Linux](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_)
+![如何在 Linux 中使用 cron](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_)

Image by: [Internet Archive Book Images][15]. Modified by Opensource.com. [CC BY-SA 4.0][16]

-One of the challenges (among the many advantages) of being a sysadmin is running tasks when you'd rather be sleeping. For example, some tasks (including regularly recurring tasks) need to run overnight or on weekends, when no one is expected to be using computer resources. I have no time to spare in the evenings to run commands and scripts that have to operate during off-hours. And I don't want to have to get up at oh-dark-hundred to start a backup or major update.
+作为系统管理员的一个挑战(也是众多优势之一)就是在你想睡觉时如何让任务运行。例如,一些任务(包括定期循环的作业)需要在深夜或周末运行,那时没有人会占用计算机资源。晚上我没有空闲时间去运行那些必须在非高峰时段执行的命令和脚本。我也不想摸黑起床去启动备份或重大更新。
+取而代之的是,我用了两个能够在既定时间运行命令、程序、任务的服务实用程序。[cron][17] 和 at 服务能够让系统管理员在未来的特定时间运行计划任务。at 服务指定一个在特定时间运行的一次性任务。cron 服务可在重复的基础上调度任务,如每天、每周或每月。
+
+在本文中,我将会介绍 cron 服务以及如何使用它。
+
+### cron 的常见(和不常见)用法
+
I use the cron service to schedule obvious things, such as regular backups that occur daily at 2 a.m. I also use it for less obvious things.

From b3ca15cde48eda38a116992e866e94497739c55d Mon Sep 17 00:00:00 2001
From: XiatianSummer
Date: Thu, 7 Dec 2017 23:17:34 +0800
Subject: [PATCH 107/236] Update 20170607 Why Car Companies Are Hiring Computer Security Experts.md Request for translating

---
 ...07 Why Car Companies Are Hiring Computer Security Experts.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md b/sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md
index 4a7d23e5f0..f67a692647 100644
--- a/sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md
+++ b/sources/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md
@@ -1,3 +1,5 @@
+Translating by XiatianSummer
+
Why Car Companies Are Hiring Computer Security Experts
============================================================

From ad6b4d26b1d5eda405208e277738f47dd0b0bd93 Mon Sep 17 00:00:00 2001
From: erlinux
Date: Fri, 8 Dec 2017 01:01:08 +0800
Subject: [PATCH 108/236] Translating By erlinux

---
 sources/tech/20171010 Operating a Kubernetes network.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/sources/tech/20171010 Operating a Kubernetes network.md b/sources/tech/20171010 Operating a Kubernetes network.md
index 9c85e9aa70..abac12f718 100644
--- a/sources/tech/20171010 Operating a Kubernetes network.md
+++ b/sources/tech/20171010 Operating a Kubernetes network.md
@@ -1,3 +1,4 @@
+**translating by [erlinux](https://github.com/erlinux)**
Operating a Kubernetes network
============================================================

From 9e786f23fccc355b44686be74067d892626bd2d2 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Fri, 8 Dec 2017 08:53:20 +0800
Subject: [PATCH 109/236] translated

---
 ...Help Build ONNX Open Source AI Platform.md | 78 -------------------
 ...Help Build ONNX Open Source AI Platform.md | 76 ++++++++++++++++++
 2 files changed, 76 insertions(+), 78 deletions(-)
 delete mode 100644 sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md
 create mode 100644 translated/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md

diff --git a/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md b/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md
deleted file mode 100644
index 1e9424178e..0000000000
--- a/sources/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md
+++ /dev/null
@@ -1,78 +0,0 @@
-translating---geekpi
-
-AWS to Help Build ONNX Open Source AI Platform
-============================================================
-![onnx-open-source-ai-platform](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-onnx-1.jpg)
-
-
-Amazon Web Services has become the latest tech firm to join the deep learning community's collaboration on the Open Neural Network Exchange, recently launched to advance artificial intelligence in a frictionless and interoperable environment. Facebook and Microsoft led the effort.
- -As part of that collaboration, AWS made its open source Python package, ONNX-MxNet, available as a deep learning framework that offers application programming interfaces across multiple languages including Python, Scala and open source statistics software R. - -The ONNX format will help developers build and train models for other frameworks, including PyTorch, Microsoft Cognitive Toolkit or Caffe2, AWS Deep Learning Engineering Manager Hagay Lupesko and Software Developer Roshani Nagmote wrote in an online post last week. It will let developers import those models into MXNet, and run them for inference. - -### Help for Developers - -Facebook and Microsoft this summer launched ONNX to support a shared model of interoperability for the advancement of AI. Microsoft committed its Cognitive Toolkit, Caffe2 and PyTorch to support ONNX. - -Cognitive Toolkit and other frameworks make it easier for developers to construct and run computational graphs that represent neural networks, Microsoft said. - -Initial versions of [ONNX code and documentation][4] were made available on Github. - -AWS and Microsoft last month announced plans for Gluon, a new interface in Apache MXNet that allows developers to build and train deep learning models. - -Gluon "is an extension of their partnership where they are trying to compete with Google's Tensorflow," observed Aditya Kaul, research director at [Tractica][5]. - -"Google's omission from this is quite telling but also speaks to their dominance in the market," he told LinuxInsider. - -"Even Tensorflow is open source, and so open source is not the big catch here -- but the rest of the ecosystem teaming up to compete with Google is what this boils down to," Kaul said. - -The Apache MXNet community earlier this month introduced version 0.12 of MXNet, which extends Gluon functionality to allow for new, cutting-edge research, according to AWS. Among its new features are variational dropout, which allows developers to apply the dropout technique for mitigating overfitting to recurrent neural networks. - -Convolutional RNN, Long Short-Term Memory and gated recurrent unit cells allow datasets to be modeled using time-based sequence and spatial dimensions, AWS noted. - -### Framework-Neutral Method - -"This looks like a great way to deliver inference regardless of which framework generated a model," said Paul Teich, principal analyst at [Tirias Research][6]. - -"This is basically a framework-neutral way to deliver inference," he told LinuxInsider. - -Cloud providers like AWS, Microsoft and others are under pressure from customers to be able to train on one network while delivering on another, in order to advance AI, Teich pointed out. - -"I see this as kind of a baseline way for these vendors to check the interoperability box," he remarked. - -"Framework interoperability is a good thing, and this will only help developers in making sure that models that they build on MXNet or Caffe or CNTK are interoperable," Tractica's Kaul pointed out. - -As to how this interoperability might apply in the real world, Teich noted that technologies such as natural language translation or speech recognition would require that Alexa's voice recognition technology be packaged and delivered to another developer's embedded environment. 
- -### Thanks, Open Source - -"Despite their competitive differences, these companies all recognize they owe a significant amount of their success to the software development advancements generated by the open source movement," said Jeff Kaplan, managing director of [ThinkStrategies][7]. - -"The Open Neural Network Exchange is committed to producing similar benefits and innovations in AI," he told LinuxInsider. - -A growing number of major technology companies have announced plans to use open source to speed the development of AI collaboration, in order to create more uniform platforms for development and research. - -AT&T just a few weeks ago announced plans [to launch the Acumos Project][8] with TechMahindra and The Linux Foundation. The platform is designed to open up efforts for collaboration in telecommunications, media and technology.  -![](https://www.ectnews.com/images/end-enn.gif) - --------------------------------------------------------------------------------- - -via: https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html - -作者:[ David Jones ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#searchbyline -[1]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html# -[2]:https://www.linuxinsider.com/perl/mailit/?id=84971 -[3]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html -[4]:https://github.com/onnx/onnx -[5]:https://www.tractica.com/ -[6]:http://www.tiriasresearch.com/ -[7]:http://www.thinkstrategies.com/ -[8]:https://www.linuxinsider.com/story/84926.html -[9]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html diff --git a/translated/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md b/translated/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md new file mode 100644 index 0000000000..8f80387e82 --- /dev/null +++ b/translated/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md @@ -0,0 +1,76 @@ +AWS 帮助构建 ONNX 开源 AI 平台 +============================================================ +![onnx-open-source-ai-platform](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-onnx-1.jpg) + + +AWS 已经成为最近加入深度学习社区的开放神经网络交换(ONNX)协作的最新技术公司,最近在无摩擦和可互操作的环境中推出了高级人工智能。由 Facebook 和微软领头。 + +作为该合作的一部分,AWS 将其开源 Python 软件包 ONNX-MxNet 作为一个深度学习框架提供,该框架提供跨多种语言的编程接口,包括 Python、Scala 和开源统计软件 R。 + +AWS 深度学习工程经理 Hagay Lupesko 和软件开发人员 Roshani Nagmote 上周在一篇帖子中写道:ONNX 格式将帮助开发人员构建和训练其他框架的模型,包括 PyTorch、Microsoft Cognitive Toolkit 或 Caffe2。它可以让开发人员将这些模型导入 MXNet,并运行它们进行推理。 + +### 对开发者的帮助 + +今年夏天,Facebook 和微软推出了 ONNX,以支持共享模式的互操作性,来促进 AI 的发展。微软提交了其 Cognitive Toolkit、Caffe2 和 PyTorch 来支持 ONNX。 + +微软表示:Cognitive Toolkit 和其他框架使开发人员更容易构建和运行代表神经网络的计算图。 + +Github 上提供了[ ONNX 代码和文档][4]的初始版本。 + +AWS 和微软上个月宣布了在 Apache MXNet 上的一个新 Gluon 接口计划,该计划允许开发人员构建和训练深度学习模型。 + +[Tractica][5] 的研究总监 Aditya Kaul 观察到:“Gluon 是他们与 Google 的 Tensorflow 竞争的合作伙伴关系的延伸”。 + +他告诉 LinuxInsider,“谷歌在这点上的疏忽是非常明显的,但也说明了他们在市场上的主导地位。 + +Kaul 说:“甚至 Tensorflow 是开源的,所以开源在这里并不是什么大事,但这归结到底是其他生态系统联手与谷歌竞争。” + +根据 AWS 的说法,本月早些时候,Apache MXNet 社区推出了 MXNet 的 0.12 版本,它扩展了 Gluon 的功能,以便进行新的尖端研究。它的新功能之一是变分 dropout,它允许开发人员使用 dropout 技术来缓解递归神经网络中的过拟合。 + +AWS 指出:卷积 RNN、LSTM 网络和门控循环单元(GRU)允许使用基于时间的序列和空间维度对数据集进行建模。 + +### 框架中立方式 + +[Tirias 
Research][6] 的首席分析师 Paul Teich 说:“这看起来像是一个提供推理的好方法,而不管是什么框架生成的模型。” + +他告诉 LinuxInsider:“这基本上是一种框架中立的推理方式。” + +Teich 指出,像 AWS、微软等云提供商在客户的压力下可以在一个网络上进行训练,同时提供另一个网络,以推进人工智能。 + +他说:“我认为这是这些供应商检查互操作性的一种基本方式。” + +Tractica 的 Kaul 指出:“框架互操作性是一件好事,这会帮助开发人员确保他们建立在 MXNet 或 Caffe 或 CNTK 上的模型可以互操作。” + +至于这种互操作性如何适用于现实世界,Teich 指出,诸如自然语言翻译或语音识别等技术将要求将 Alexa 的语音识别技术打包并交付给另一个开发人员的嵌入式环境。 + +### 感谢开源 + +[ThinkStrategies][7] 的总经理 Jeff Kaplan 表示:“尽管存在竞争差异,但这些公司都认识到他们在开源运动所带来的软件开发进步方面所取得的巨大成功。” + +他告诉 LinuxInsider:“开放式神经网络交换(ONNX)致力于在人工智能方面产生类似的优势和创新。” + +越来越多的大型科技公司已经宣布使用开源技术来加快 AI 协作开发的计划,以便创建更加统一的开发和研究平台。 + +AT&T 几周前宣布了与 TechMahindra 和 Linux 基金会合作[推出 Acumos 项目][8]的计划。该平台旨在开拓电信、媒体和技术方面的合作。 +![](https://www.ectnews.com/images/end-enn.gif) + +-------------------------------------------------------------------------------- + +via: https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html + +作者:[ David Jones ][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#searchbyline +[1]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html# +[2]:https://www.linuxinsider.com/perl/mailit/?id=84971 +[3]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html +[4]:https://github.com/onnx/onnx +[5]:https://www.tractica.com/ +[6]:http://www.tiriasresearch.com/ +[7]:http://www.thinkstrategies.com/ +[8]:https://www.linuxinsider.com/story/84926.html +[9]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html From 2265f2f7c7c05eabb7cc6d010cb4ba7aaef76d07 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 8 Dec 2017 08:56:55 +0800 Subject: [PATCH 110/236] translating --- ...plemon - Modern CLI Text Editor with Multi Cursor Support.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md b/sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md index 2b82be93ba..6f0703cd08 100644 --- a/sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md +++ b/sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md @@ -1,3 +1,5 @@ +translating---geekpi + Suplemon - Modern CLI Text Editor with Multi Cursor Support ====== Suplemon is a modern text editor for CLI that emulates the multi cursor behavior and other features of [Sublime Text][1]. It's lightweight and really easy to use, just as Nano is. From 27b35221eafe662773e1a303d8e70fbffad50793 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 8 Dec 2017 09:04:31 +0800 Subject: [PATCH 111/236] translating by lujun9972 --- sources/tech/20171205 How to Use the Date Command in Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20171205 How to Use the Date Command in Linux.md b/sources/tech/20171205 How to Use the Date Command in Linux.md index 0c0f9a31aa..14d9690f1e 100644 --- a/sources/tech/20171205 How to Use the Date Command in Linux.md +++ b/sources/tech/20171205 How to Use the Date Command in Linux.md @@ -1,4 +1,5 @@ translating by lujun9972 +translating by lujun9972 How to Use the Date Command in Linux ====== In this post, we will show you some examples on how to use the date command in Linux. 
The date command can be used to print or set the system date and time. Using the Date Command in Linux its simple, just follow the examples and the syntax below. From bd4ad375c6d5e23ca2b96ba964a98eb225a9e63a Mon Sep 17 00:00:00 2001 From: Unknown Date: Fri, 8 Dec 2017 10:15:30 +0800 Subject: [PATCH 112/236] translating translating --- sources/tech/20171120 Adopting Kubernetes step by step.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171120 Adopting Kubernetes step by step.md b/sources/tech/20171120 Adopting Kubernetes step by step.md index 05faf304c8..0d10282dc7 100644 --- a/sources/tech/20171120 Adopting Kubernetes step by step.md +++ b/sources/tech/20171120 Adopting Kubernetes step by step.md @@ -1,3 +1,5 @@ +translating by aiwhj + Adopting Kubernetes step by step ============================================================ From 452410b4be88dfc52f09c8d43c18acbffdfa536c Mon Sep 17 00:00:00 2001 From: TRsky <625310581@qq.com> Date: Fri, 8 Dec 2017 14:03:35 +0800 Subject: [PATCH 113/236] mupdate --- ...ow to answer questions in a helpful way.md | 180 +++++++++--------- 1 file changed, 88 insertions(+), 92 deletions(-) diff --git a/sources/tech/20170921 How to answer questions in a helpful way.md b/sources/tech/20170921 How to answer questions in a helpful way.md index 31d6be1046..a01e28fde2 100644 --- a/sources/tech/20170921 How to answer questions in a helpful way.md +++ b/sources/tech/20170921 How to answer questions in a helpful way.md @@ -1,175 +1,171 @@ -translating by HardworkFish -How to answer questions in a helpful way -============================================================ -Your coworker asks you a slightly unclear question. How do you answer? I think asking questions is a skill (see [How to ask good questions][1]) and that answering questions in a helpful way is also a skill! Both of them are super useful. +如何以有用的方式回答问题 +============================= -To start out with – sometimes the people asking you questions don’t respect your time, and that sucks. I’m assuming here throughout that that’s not what happening – we’re going to assume that the person asking you questions is a reasonable person who is trying their best to figure something out and that you want to help them out. Everyone I work with is like that and so that’s the world I live in :) +如果你的同事问你一个不太清晰的问题,你会怎么回答?我认为提问题是一种技巧(可以看 [如何提出有意义的问题][1]) 同时以合理的方式回答问题也是一种技巧。他们都是非常实用的。 -Here are a few strategies for answering questions in a helpful way! +开始 - 有时问你问题的人不尊重你的时间,这很糟糕。 -### If they’re not asking clearly, help them clarify +我假设 - 我们来假设问你问题的人是一个合理的人并且正在尽力解决问题而你想帮助他们。和我一起工作的人是这样,我所生活的世界也是这样。 -Often beginners don’t ask clear questions, or ask questions that don’t have the necessary information to answer the questions. Here are some strategies you can use to help them clarify. +下面是有助于回答问题的一些策略! -* **Rephrase a more specific question** back at them (“Are you asking X?”) +### 如果他们提问不清楚,帮他们澄清 -* **Ask them for more specific information** they didn’t provide (“are you using IPv6?”) +通常初学者不会提出很清晰的问题,或者问一些对回答问题没有必要信息的问题。 -* **Ask what prompted their question**. For example, sometimes people come into my team’s channel with questions about how our service discovery works. Usually this is because they’re trying to set up/reconfigure a service. In that case it’s helpful to ask “which service are you working with? Can I see the pull request you’re working on?” +* ** 重述为一个更明确的问题 ** 回复他们(”你是想问 X ?“) -A lot of these strategies come from the [how to ask good questions][2] post. 
(though I would never say to someone "oh you need to read this Document On How To Ask Good Questions before asking me a question")

+这些策略很多来自[如何提出有意义的问题][2]中的要点。

-### Figure out what they know already

+### 明白他们已经知道了什么

-Before answering a question, it's very useful to know what the person knows already!

+在回答问题之前,知道对方已经知道什么是非常有用的!

-Harold Treen gave me a great example of this:

+Harold Treen 给了我一个很好的例子:

-> Someone asked me the other day to explain "Redux Sagas". Rather than dive in and say "They are like worker threads that listen for actions and let you update the store!"
-> I started figuring out how much they knew about Redux, actions, the store and all these other fundamental concepts. From there it was easier to explain the concept that ties those other concepts together.

+> 前几天,有人请我解释 "Redux-Sagas"。我没有一上来就说"它们就像是监听行为(action)的工作线程(worker threads),可以让你更新 Redux store!",
+> 而是先去搞清楚他们对 Redux、行为(action)、store 以及其他基本概念了解多少。把这些概念联系在一起来解释就会容易得多。

-Figuring out what your question-asker knows already is important because they may be confused about fundamental concepts ("What's Redux?"), or they may be an expert who's getting at a subtle corner case. An answer building on concepts they don't know is confusing, and an answer that recaps things they know is tedious.

+弄清楚问你问题的人已经知道什么是非常重要的。因为有时他们可能会对基础概念感到疑惑("Redux 是什么?"),或者他们可能是专家,只是恰巧遇到了微妙的极端情况(corner case)。如果答案建立在他们不知道的概念上会令他们困惑,而重述他们已经知道的东西又会显得乏味。

-One useful trick for asking what people know - instead of "Do you know X?", maybe try "How familiar are you with X?".

+这里有一个了解他们已经知道什么的有用技巧:可以尝试问"你对 X 了解多少?",而不是问"你知道 X 吗?"。

-### Point them to the documentation

+### 给他们一个文档

-"RTFM" is the classic unhelpful answer to a question, but pointing someone to a specific piece of documentation can actually be really helpful! When I'm asking a question, I'd honestly rather be pointed to documentation that actually answers my question, because it's likely to answer other questions I have too.

+"RTFM"("去读那些他妈的手册"(Read The Fucking Manual))是一个典型的无用回答,但事实上,向他们指明一个特定的文档会是非常有帮助的!当我提问题的时候,我非常乐意被指向那些真正能解决我的问题的文档,因为它可能也解决了其他我想问的问题。

-I think it's important here to make sure you're linking to documentation that actually answers the question, or at least check in afterwards to make sure it helped. Otherwise you can end up with this (pretty common) situation:

+我认为,确认你所指明的文档确实能够解决问题,或者至少事后确认一下它是否有帮助,是非常重要的。否则,你可能会以下面这种情形(非常常见)结束对话:

-* Ali: How do I do X?

+* Ali:我应该如何处理 X ?

-* Jada: <link to documentation>

+* Jada:<文档链接>

-* Ali: That doesn't actually explain how to X, it only explains Y!

+* Ali: 这个并没有实际解释如何处理 X ,它仅仅解释了如何处理 Y !
-### Write new documentation +如果我所给的文档特别长,我会指明文档中那个我将会谈及的特定部分。[bash 手册][3] 有44000个字(真的!),所以只说“它在 bash 手册中有说明”是没有用的:) -People often come and ask my team the same questions over and over again. This is obviously not the fault of the people (how should  _they_  know that 10 people have asked this already, or what the answer is?). So we’re trying to, instead of answering the questions directly, +### 告诉他们一个有用的搜索 -1. Immediately write documentation +在工作中,经常我发现我可以利用我所知道的关键字进行搜索找到能够解决我的问题的答案。对于初学者来说,这些关键字往往不是那么明显。所以说“这是我用来寻找这个答案的搜索”可能有用些。再次说明,回答时请经检查后以确保搜索能够得到他们所需要的答案。 -2. Point the person to the new documentation we just wrote +### 写新文档 -3. Celebrate! +人们经常一次又一次地问我的团队重复的问题。很显然这并不是人们的错(他们怎么能够知道在他们之前已经有10个人问了这个问题,且知道答案是什么呢?)因此,我们尝试写文档,而不是直接回答回答问题。 -Writing documentation sometimes takes more time than just answering the question, but it’s often worth it! Writing documentation is especially worth it if: +1. 马上写新文档 -a. It’s a question which is being asked again and again b. The answer doesn’t change too much over time (if the answer changes every week or month, the documentation will just get out of date and be frustrating) +2. 给他们我们刚刚写好的新文档 -### Explain what you did +3. 公示 -As a beginner to a subject, it’s really frustrating to have an exchange like this: +写文档有时往往比回答问题需要花很多时间,但这是值得的。写文档尤其值得如果: -* New person: “hey how do you do X?” +a. 这个问题被问了一遍又一遍 -* More Experienced Person: “I did it, it is done.” +b. 随着时间的推移,这个答案不会变化太大(如果这个答案每一个星期或者一个月就会变化,文档就会过时并且令人受挫) -* New person: ….. but what did you DO?! +### 解释你做了什么 -If the person asking you is trying to learn how things work, it’s helpful to: +对于一个话题,作为初学者来说,这样的交流会真让人沮丧: -* Walk them through how to accomplish a task instead of doing it yourself +* 新人:“hey 你如何处理 X ?” -* Tell them the steps for how you got the answer you gave them! +* 更加有经验的人:“我做了,它完成了” -This might take longer than doing it yourself, but it’s a learning opportunity for the person who asked, so that they’ll be better equipped to solve such problems in the future. +* 新人:”...... 但是你做了什么?!“ -Then you can have WAY better exchanges, like this: +如果问你问题的人想知道事情是如何进行的,这样是有帮助的: -* New person: “I’m seeing errors on the site, what’s happening?” +* 让他们去完成任务而不是自己做 -* More Experienced Person: (2 minutes later) “oh that’s because there’s a database failover happening” +* 把你是如何得到你给他们的答案的步骤告诉他们。 -* New person: how did you know that??!?!? +这可能比你自己做的时间还要长,但对于被问的人来说这是一个学习机会,因为那样做使得他们将来能够更好地解决问题。 -* More Experienced Person: “Here’s what I did!”: - 1. Often these errors are due to Service Y being down. I looked at $PLACE and it said Service Y was up. So that wasn’t it. +这样,你可以进行更好的交流,像这: - 2. Then I looked at dashboard X, and this part of that dashboard showed there was a database failover happening. +* 新人:“这个网站出现了错误,发生了什么?” - 3. Then I looked in the logs for the service and it showed errors connecting to the database, here’s what those errors look like. +* 有经验的人:(2分钟后)”oh 这是因为发生了数据库故障转移“ -If you’re explaining how you debugged a problem, it’s useful both to explain how you found out what the problem was, and how you found out what the problem wasn’t. While it might feel good to look like you knew the answer right off the top of your head, it feels even better to help someone improve at learning and diagnosis, and understand the resources available. +* 新人: ”你是怎么知道的??!?!?“ -### Solve the underlying problem +* 有经验的人:“以下是我所做的!“: -This one is a bit tricky. Sometimes people think they’ve got the right path to a solution, and they just need one more piece of information to implement that solution. 
But they might not be quite on the right path! For example:

+这一点有点微妙。有时候,人们认为他们已经找到了解决问题的正确途径,只需要再多一条信息就可以把问题解决。但他们可能并不是走在正确的道路上!比如:

-* George: I'm doing X, and I got this error, how do I fix it

+* George:"我在处理 X 的时候遇到了错误,我该如何修复它?"

-* Jasminda: Are you actually trying to do Y? If so, you shouldn't do X, you should do Z instead

+* Jasminda:"你实际上是想解决 Y 吗?如果是这样,你不应该处理 X ,你应该改做 Z 。"

-* George: Oh, you're right!!! Thank you! I will do Z instead.

+* George:"噢,你是对的!!!谢谢你!我会改做 Z 的。"

-Jasminda didn't answer George's question at all! Instead she guessed that George didn't actually want to be doing X, and she was right. That is helpful!

+Jasminda 完全没有回答 George 的问题!反而,她猜测 George 并不是真的想处理 X ,而且她猜对了。这是非常有用的!

-It's possible to come off as condescending here though, like

+不过,如果你这样做,也可能会显得居高临下,比如:

-* George: I'm doing X, and I got this error, how do I fix it?

+* George:"我在处理 X 的时候遇到了错误,我该如何修复它?"

-* Jasminda: Don't do that, you're trying to do Y and you should do Z to accomplish that instead.

+* Jasminda:"不要这样做。你是想解决 Y ,你应该改做 Z 来完成它。"

-* George: Well, I am not trying to do Y, I actually want to do X because REASONS. How do I do X?

+* George:"好吧,我并不是想解决 Y 。实际上,出于某些原因(REASONS),我就是想处理 X 。所以我该如何处理 X ?"

-So don't be condescending, and keep in mind that some questioners might be attached to the steps they've taken so far! It might be appropriate to answer both the question they asked and the one they should have asked: "Well, if you want to do X then you might try this, but if you're trying to solve problem Y with that, you might have better luck doing this other thing, and here's why that'll work better".

+所以不要居高临下,而且要记住,有些提问者可能很看重他们已经走过的步骤!同时回答他们提出的问题和他们本该提出的问题是恰当的:"嗯,如果你想处理 X ,那么你可以试试这么做;但如果你是想用它来解决 Y 问题,换种做法可能效果更好,原因是这样的。"

-### Ask "Did that answer your question?"

+### 询问"那个回答解决了您的问题吗?"

-I always like to check in after I _think_ I've answered the question and ask "did that answer your question? Do you have more questions?".
+我总是喜欢在我_自认为_回答了问题之后核实一下是否真的已经解决了问题:"那个回答解决了您的问题吗?您还有其他问题吗?"问完之后停下来等一会是很好的,因为人们通常需要一两分钟来判断他们是否已经找到了答案。

-It's good to pause and wait after asking this because often people need a minute or two to know whether or not they've figured out the answer. I especially find this extra "did this answer your questions?" step helpful after writing documentation! Often when writing documentation about something I know well I'll leave out something very important without realizing it.

+我发现"这个回答解决了您的问题吗?"这个额外的确认步骤在写完文档之后尤其有用!通常,在写我很熟悉的东西的文档时,我会不知不觉地漏掉很重要的内容。

-### Offer to pair program/chat in real life

+### 主动提出结对编程或当面交流

-I work remote, so many of my conversations at work are text-based. I think of that as the default mode of communication.

+我是远程工作的,所以我工作中的很多对话都是基于文本的。我把这当作默认的沟通方式。

-Today, we live in a world of easy video conferencing & screensharing! At work I can at any time click a button and immediately be in a video call/screensharing session with someone. Some problems are easier to talk about using your voices!

+今天,我们生活在一个视频会议和屏幕共享都很便捷的世界!在工作中,我可以在任何时间点击一个按钮,立即与他人进入视频通话或屏幕共享会话。有些问题用语音讨论更容易!

-For example, recently someone was asking about capacity planning/autoscaling for their service. I could tell there were a few things we needed to clear up but I wasn't exactly sure what they were yet. We got on a quick video call and 5 minutes later we'd answered all their questions.

+例如,最近有人问我关于他们服务的容量规划/自动扩展的问题。我能看出有几件事需要先澄清,但还不确定具体是什么。我们进行了一次快速的视频通话,5 分钟后就回答了他们所有的问题。

-I think especially if someone is really stuck on how to get started on a task, pair programming for a few minutes can really help, and it can be a lot more efficient than email/instant messaging.

+我认为,特别是当某人在如何开始一项任务上真的卡住时,花几分钟结对编程会非常有帮助,而且比邮件/即时消息的效率高得多。

-### Don't act surprised

+### 不要表现得过于惊讶

-This one's a rule from the Recurse Center: [no feigning surprise][4]. Here's a relatively common scenario

+这是源自 Recurse Center 的一条法则:[不要假装惊讶][4]。这里有一个常见的情景:

-* Human 1: "what's the Linux kernel?"

+* 某人1:"什么是 Linux 内核?"

-* Human 2: "you don't know what the LINUX KERNEL is?!!!!?!!!???"

+* 某人2:"你竟然不知道什么是 Linux 内核(LINUX KERNEL)?!!!!?!!!????"

-Human 2's reaction (regardless of whether they're _actually_ surprised or not) is not very helpful. It mostly just serves to make Human 1 feel bad that they don't know what the Linux kernel is.

+某人2的表现(无论他们是否真的如此惊讶)是没有帮助的。这大部分只会让某人1感觉不好,因为他们不知道什么是 Linux 内核。

-I've worked on actually pretending not to be surprised even when I actually am a bit surprised the person doesn't know the thing and it's awesome.

+即使我确实对某人不知道某个东西有点惊讶,我也在练习不表现出惊讶来,而这种做法很棒。

-### Answering questions well is awesome

+### 好好回答问题是很棒的

-Obviously not all these strategies are appropriate all the time, but hopefully you will find some of them helpful! I find taking the time to answer questions and teach people can be really rewarding.

+很显然,这些策略并不是在任何时候都适用,但希望你能够发现其中一些是有用的!我发现花时间去回答问题、教导别人,其实是很有收获的。

+特别感谢 Josh Triplett 提出写这篇文章的建议并做了很多有益的补充,以及感谢 Harold Treen、Vaibhav Sagar、Peter Bhat Harkins、Wesley Aptekar-Cassels 和 Paul Gowder 的阅读和评论。

-Special thanks to Josh Triplett for suggesting this post and making many helpful additions, and to Harold Treen, Vaibhav Sagar, Peter Bhat Harkins, Wesley Aptekar-Cassels, and Paul Gowder for reading/commenting.
--------------------------------------------------------------------------------- -via: https://jvns.ca/blog/answer-questions-well/ -作者:[ Julia Evans][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:https://jvns.ca/about -[1]:https://jvns.ca/blog/good-questions/ -[2]:https://jvns.ca/blog/good-questions/ -[3]:https://linux.die.net/man/1/bash -[4]:https://jvns.ca/blog/2017/04/27/no-feigning-surprise/ From 66ba5bae5bbbf2c974b18525847eade78911d6cb Mon Sep 17 00:00:00 2001 From: TRsky <625310581@qq.com> Date: Fri, 8 Dec 2017 14:23:31 +0800 Subject: [PATCH 114/236] update --- ...ow to answer questions in a helpful way.md | 96 +++++++++++-------- 1 file changed, 55 insertions(+), 41 deletions(-) diff --git a/sources/tech/20170921 How to answer questions in a helpful way.md b/sources/tech/20170921 How to answer questions in a helpful way.md index a01e28fde2..8c5507a030 100644 --- a/sources/tech/20170921 How to answer questions in a helpful way.md +++ b/sources/tech/20170921 How to answer questions in a helpful way.md @@ -1,14 +1,12 @@ - - -如何以有用的方式回答问题 +如何合理地回答问题 ============================= -如果你的同事问你一个不太清晰的问题,你会怎么回答?我认为提问题是一种技巧(可以看 [如何提出有意义的问题][1]) 同时以合理的方式回答问题也是一种技巧。他们都是非常实用的。 +如果你的同事问你一个不太清晰的问题,你会怎么回答?我认为提问题是一种技巧(可以看 [如何提出有意义的问题][1]) 同时,合理地回答问题也是一种技巧。他们都是非常实用的。 -开始 - 有时问你问题的人不尊重你的时间,这很糟糕。 +开始 - 有时向你提问的人不尊重你的时间,这很糟糕。 -我假设 - 我们来假设问你问题的人是一个合理的人并且正在尽力解决问题而你想帮助他们。和我一起工作的人是这样,我所生活的世界也是这样。 +我假设 - 我们假设问你问题的人是一个合理的人并且正在尽力解决问题而你想帮助他们。和我一起工作的人是这样,我所生活的世界也是这样。 下面是有助于回答问题的一些策略! @@ -16,11 +14,11 @@ 通常初学者不会提出很清晰的问题,或者问一些对回答问题没有必要信息的问题。 -* ** 重述为一个更明确的问题 ** 回复他们(”你是想问 X ?“) +* ** 重述为一个更明确的问题 ** 回复他们(”你是想问 X ?“) -* ** 问他们更具体的信息 ** 他们并没有提供(”你使用 IPv6 ?”) +* ** 问他们更具体的信息 ** 他们并没有提供(”你使用 IPv6 ?”) -* ** 问是什么导致了他们的问题 ** 例如,有时有些人会进入我的团队频道,询问我们的 service discovery 如何工作的。这通常是因为他们试图设置/重新配置服务。在这种情况下,如果问“你正在使用哪种服务?可以给我看看你正在进行的 pull 请求吗?”是有帮助的。这些策略很多来自 [如何提出有意义的问题][2]。(尽管我永远不会对某人说“oh 你得先看完文档 “如何提出有意义的问题” 再来问我问题) +* ** 问是什么导致了他们的问题 ** 例如,有时有些人会进入我的团队频道,询问我们的 service discovery 如何工作的。这通常是因为他们试图设置/重新配置服务。在这种情况下,如果问“你正在使用哪种服务?可以给我看看你正在进行的 pull 请求吗?”是有帮助的。这些策略很多来自 [如何提出有意义的问题][2]。(尽管我永远不会对某人说“噢,你得先看完文档 “如何提出有意义的问题” 再来问我问题) 这些策略很多来自[如何提出有意义的问题][2]中的要点。 @@ -32,11 +30,11 @@ Harold Treen 给了我一个很好的例子: > 前几天,有人请我解释“ Redux-Sagas ”。与其深入解释不如说“ 他们就像 worker threads 监听行为,让你更新 Redux store 。 -> 我开始搞清楚他们对 Redux 、行为(actions)、store 以及其他基本概念了解多少。将这些概念都联系在一起来解释概念会容易得多。 +> 我开始搞清楚他们对 Redux 、行为(actions)、store 以及其他基本概念了解多少。将这些概念都联系在一起再来解释会容易得多。 -弄清楚问你问题的人已经知道什么是非常重要的。因为有时他们可能会对基础概念感到疑惑(“ Redux 是什么?“),或者他们可是是专家并恰遇到了微妙的极端情况(corner case)。如果答案建立在他们不知道的概念上会令他们困惑,且重述他们已经知道的的会是乏味的。 +弄清楚问你问题的人已经知道什么是非常重要的。因为有时他们可能会对基础概念感到疑惑(“ Redux 是什么?“),或者他们可是专家但是恰巧遇到了微妙的极端情况(corner case)。如果答案建立在他们不知道的概念上会令他们困惑,但如果重述他们已经知道的的又会是乏味的。 -这里有一个有用的技巧问他们已经知道什么 - 比如可以尝试用“你对 X 了解多少?”而不是问“你知道 X 吗?”。 +这里有一个很实用的技巧来了解他们已经知道什么 - 比如可以尝试用“你对 X 了解多少?”而不是问“你知道 X 吗?”。 ### 给他们一个文档 @@ -44,27 +42,27 @@ Harold Treen 给了我一个很好的例子: 我认为明确你所指明的文档确实能够解决问题或者至少经过查阅明确它有用是非常重要的。否则,你可能将以下面这种情形结束对话(非常常见): -* Ali:我应该如何处理 X ? +* Ali:我应该如何处理 X ? -* Jada:<文档链接> +* Jada:<文档链接> -* Ali: 这个并有实际解释如何处理 X ,它仅仅解释了如何处理 Y ! +* Ali: 这个并有实际解释如何处理 X ,它仅仅解释了如何处理 Y ! 
-如果我所给的文档特别长,我会指明文档中那个我将会谈及的特定部分。[bash 手册][3] 有44000个字(真的!),所以只说“它在 bash 手册中有说明”是没有用的:) +如果我所给的文档特别长,我会指明文档中那个我将会谈及的特定部分。[bash 手册][3] 有44000个字(真的!),所以如果只说“它在 bash 手册中有说明”是没有帮助的:) ### 告诉他们一个有用的搜索 -在工作中,经常我发现我可以利用我所知道的关键字进行搜索找到能够解决我的问题的答案。对于初学者来说,这些关键字往往不是那么明显。所以说“这是我用来寻找这个答案的搜索”可能有用些。再次说明,回答时请经检查后以确保搜索能够得到他们所需要的答案。 +在工作中,我经常发现我可以利用我所知道的关键字进行搜索找到能够解决我的问题的答案。对于初学者来说,这些关键字往往不是那么明显。所以说“这是我用来寻找这个答案的搜索”可能有用些。再次说明,回答时请经检查后以确保搜索能够得到他们所需要的答案。 ### 写新文档 人们经常一次又一次地问我的团队重复的问题。很显然这并不是人们的错(他们怎么能够知道在他们之前已经有10个人问了这个问题,且知道答案是什么呢?)因此,我们尝试写文档,而不是直接回答回答问题。 -1. 马上写新文档 +1. 马上写新文档 -2. 给他们我们刚刚写好的新文档 +2. 给他们我们刚刚写好的新文档 -3. 公示 +3. 公示 写文档有时往往比回答问题需要花很多时间,但这是值得的。写文档尤其值得如果: @@ -76,35 +74,35 @@ b. 随着时间的推移,这个答案不会变化太大(如果这个答案 对于一个话题,作为初学者来说,这样的交流会真让人沮丧: -* 新人:“hey 你如何处理 X ?” +* 新人:“hey 你如何处理 X ?” -* 更加有经验的人:“我做了,它完成了” +* 有经验的人:“我做了,它完成了” -* 新人:”...... 但是你做了什么?!“ +* 新人:”...... 但是你做了什么?!“ 如果问你问题的人想知道事情是如何进行的,这样是有帮助的: -* 让他们去完成任务而不是自己做 +* 让他们去完成任务而不是自己做 -* 把你是如何得到你给他们的答案的步骤告诉他们。 +* 把你是如何得到你给他们的答案的步骤告诉他们。 这可能比你自己做的时间还要长,但对于被问的人来说这是一个学习机会,因为那样做使得他们将来能够更好地解决问题。 这样,你可以进行更好的交流,像这: -* 新人:“这个网站出现了错误,发生了什么?” +* 新人:“这个网站出现了错误,发生了什么?” -* 有经验的人:(2分钟后)”oh 这是因为发生了数据库故障转移“ +* 有经验的人:(2分钟后)”oh 这是因为发生了数据库故障转移“ + +* 新人: ”你是怎么知道的??!?!?“ -* 新人: ”你是怎么知道的??!?!?“ +* 有经验的人:“以下是我所做的!“: -* 有经验的人:“以下是我所做的!“: + 1. 通常这些错误是因为服务器 Y 被关闭了。我查看了一下 `$PLACE` 但它表明服务器 Y 开着。所以,并不是这个原因导致的。 -1. 通常这些错误是因为服务器 Y 被关闭了。我查看了一下 `$PLACE` 但它表明服务器 Y 开着。所以,并不是这个原因导致的。 + 2. 然后我查看仪表 X ,这个仪表的这个部分显示这里发生了数据库故障转移。 -2. 然后我查看仪表 X ,这个仪表的这个部分显示这里发生了数据库故障转移。 - -3. 然后我在日志中找到了相应服务器,并且它显示连接数据库错误,看起来错误就是这里。 + 3. 然后我在日志中找到了相应服务器,并且它显示连接数据库错误,看起来错误就是这里。 如果你正在解释你是如何调试一个问题,解释你是如何发现问题,以及如何找出问题是非常有用的当看起来似乎你已经得到正确答案时,感觉是很棒的。这比你帮助他人提高学习和诊断能力以及明白充分利用可用资源的感觉还要好。 @@ -112,21 +110,21 @@ b. 随着时间的推移,这个答案不会变化太大(如果这个答案 这一点有点狡猾。有时候人们认为他们依旧找到了解决问题的正确途径,且他们只需要一条信息就可以把问题解决。但他们可能并不是走在正确的道路上!比如: -* George:”我在处理 X 的时候遇到了错误,我该如何修复它?“ +* George:”我在处理 X 的时候遇到了错误,我该如何修复它?“ -* Jasminda:”你是正在尝试解决 Y 吗?如果是这样,你不应该处理 X ,反而你应该处理 Z 。“ +* Jasminda:”你是正在尝试解决 Y 吗?如果是这样,你不应该处理 X ,反而你应该处理 Z 。“ -* George:“噢,你是对的!!!谢谢你!我回反过来处理 Z 的。“ +* George:“噢,你是对的!!!谢谢你!我回反过来处理 Z 的。“ Jasminda 一点都没有回答 George 的问题!反而,她猜测 George 并不想处理 X ,并且她是猜对了。这是非常有用的! 
如果你这样做可能会产生居高临下的感觉: -* George:”我在处理 X 的时候遇到了错误,我该如何修复它?“ +* George:”我在处理 X 的时候遇到了错误,我该如何修复它?“ -* Jasminda:不要这样做,如果你想处理 Y ,你应该反过来完成 Z 。 +* Jasminda:不要这样做,如果你想处理 Y ,你应该反过来完成 Z 。 -* George:“好吧,我并不是想处理 Y 。实际上我想处理 X 因为某些原因(REASONS)。所以我该如何处理 X 。 +* George:“好吧,我并不是想处理 Y 。实际上我想处理 X 因为某些原因(REASONS)。所以我该如何处理 X 。 所以不要居高临下,且要记住有时有些提问者可能已经偏离根本问题很远了。同时回答提问者提出的问题以及他们本该提出的问题的恰当的:“嗯,如果你想处理 X ,那么你可能需要这么做,但如果你想用这个解决 Y 问题,可能通过处理其他事情你可以更好地解决这个问题,这就是为什么可以做得更好的原因。 @@ -148,9 +146,9 @@ Jasminda 一点都没有回答 George 的问题!反而,她猜测 George 并 这是源自 Recurse Center 的一则法则:[不要假装惊讶][4]。这里有一个常见的情景: -* 某人1:“什么是 Linux 内核” +* 某人1:“什么是 Linux 内核” -* 某人2:“你竟然不知道什么是 Linux 内核(LINUX KERNEL)?!!!!?!!!????” +* 某人2:“你竟然不知道什么是 Linux 内核(LINUX KERNEL)?!!!!?!!!????” 某人2表现(无论他们是否真的如此惊讶)是没有帮助的。这大部分只会让某人1感觉不好,因为他们不知道什么的 Linux 内核。 @@ -162,6 +160,22 @@ Jasminda 一点都没有回答 George 的问题!反而,她猜测 George 并 特别感谢 Josh Triplett 的一些建议并做了很多有益的补充,以及感谢 Harold Treen、Vaibhav Sagar、Peter Bhat Hatkins、Wesley Aptekar Cassels 和 Paul Gowder的阅读或评论。 +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/answer-questions-well/ + +作者:[ Julia Evans][a] +译者:[HardworkFish](https://github.com/HardworkFish) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/about +[1]:https://jvns.ca/blog/good-questions/ +[2]:https://jvns.ca/blog/good-questions/ +[3]:https://linux.die.net/man/1/bash +[4]:https://jvns.ca/blog/2017/04/27/no-feigning-surprise/ + From b19adcc3a312114667047e5409aad92006ae59e4 Mon Sep 17 00:00:00 2001 From: TRsky <625310581@qq.com> Date: Fri, 8 Dec 2017 14:32:28 +0800 Subject: [PATCH 115/236] update --- .../20170921 How to answer questions in a helpful way.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/sources/tech/20170921 How to answer questions in a helpful way.md b/sources/tech/20170921 How to answer questions in a helpful way.md index 8c5507a030..1cd590bdd2 100644 --- a/sources/tech/20170921 How to answer questions in a helpful way.md +++ b/sources/tech/20170921 How to answer questions in a helpful way.md @@ -18,9 +18,10 @@ * ** 问他们更具体的信息 ** 他们并没有提供(”你使用 IPv6 ?”) -* ** 问是什么导致了他们的问题 ** 例如,有时有些人会进入我的团队频道,询问我们的 service discovery 如何工作的。这通常是因为他们试图设置/重新配置服务。在这种情况下,如果问“你正在使用哪种服务?可以给我看看你正在进行的 pull 请求吗?”是有帮助的。这些策略很多来自 [如何提出有意义的问题][2]。(尽管我永远不会对某人说“噢,你得先看完文档 “如何提出有意义的问题” 再来问我问题) +* ** 问是什么导致了他们的问题 ** 例如,有时有些人会进入我的团队频道,询问我们的 service discovery 如何工作的。这通常是因为他们试图设置/重新配置服务。在这种情况下,如果问“你正在使用哪种服务?可以给我看看你正在进行的 pull 请求吗?”是有帮助的。 + +这些策略很多来自 [如何提出有意义的问题][2]中的要点。(尽管我永远不会对某人说“噢,你得先看完文档 “如何提出有意义的问题” 再来问我问题) -这些策略很多来自[如何提出有意义的问题][2]中的要点。 ### 明白什么是他们已经知道的 @@ -48,11 +49,11 @@ Harold Treen 给了我一个很好的例子: * Ali: 这个并有实际解释如何处理 X ,它仅仅解释了如何处理 Y ! 
-如果我所给的文档特别长,我会指明文档中那个我将会谈及的特定部分。[bash 手册][3] 有44000个字(真的!),所以如果只说“它在 bash 手册中有说明”是没有帮助的:) +如果我所给的文档特别长,我会指明文档中那个我将会谈及的特定部分。[bash 手册][3] 有44000个字(真的!),所以如果只说“它在 bash 手册中有说明”是没有帮助的:) ### 告诉他们一个有用的搜索 -在工作中,我经常发现我可以利用我所知道的关键字进行搜索找到能够解决我的问题的答案。对于初学者来说,这些关键字往往不是那么明显。所以说“这是我用来寻找这个答案的搜索”可能有用些。再次说明,回答时请经检查后以确保搜索能够得到他们所需要的答案。 +在工作中,我经常发现我可以利用我所知道的关键字进行搜索找到能够解决我的问题的答案。对于初学者来说,这些关键字往往不是那么明显。所以说“这是我用来寻找这个答案的搜索”可能有用些。再次说明,回答时请经检查后以确保搜索能够得到他们所需要的答案:) ### 写新文档 From 2459017c83db3677c6559e7410dc8cbcd6d4957a Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 8 Dec 2017 14:41:09 +0800 Subject: [PATCH 116/236] take a break --- ...05 How to Use the Date Command in Linux.md | 152 +++++++++--------- 1 file changed, 75 insertions(+), 77 deletions(-) diff --git a/sources/tech/20171205 How to Use the Date Command in Linux.md b/sources/tech/20171205 How to Use the Date Command in Linux.md index 14d9690f1e..2cd7cd6877 100644 --- a/sources/tech/20171205 How to Use the Date Command in Linux.md +++ b/sources/tech/20171205 How to Use the Date Command in Linux.md @@ -1,12 +1,10 @@ -translating by lujun9972 -translating by lujun9972 -How to Use the Date Command in Linux +如何使用 Date 命令 ====== -In this post, we will show you some examples on how to use the date command in Linux. The date command can be used to print or set the system date and time. Using the Date Command in Linux its simple, just follow the examples and the syntax below. +在本文中, 我们会通过一些案例来演示如何使用 linux 中的 date 命令. date 命令可以用户输出/设置系统日期和时间. Date 命令很简单, 请参见下面的例子和语法. -By default when running the date command in Linux, without any arguments it will display the current system date and time: +默认情况下,当不带任何参数运行 date 命令时,它会输出当前系统日期和时间: -``` +```shell date ``` @@ -14,7 +12,7 @@ date Sat 2 Dec 12:34:12 CST 2017 ``` -#### Syntax +#### 语法 ``` Usage: date [OPTION]... [+FORMAT] @@ -23,133 +21,133 @@ Display the current time in the given FORMAT, or set the system date. ``` -### Date examples +### 案例 -The following examples will show you how to use the date command to find the date and time from a period of time in the past or future. +下面这些案例会向你演示如何使用 date 命令来查看前后一段时间的日期时间. -### 1\. Find the date 5 weeks in the future +#### 1\. 查找5周后的日期 -``` +```shell date -d "5 weeks" Sun Jan 7 19:53:50 CST 2018 ``` -### 2\. Find the date 5 weeks and 4 days in the future +#### 2\. 查找5周后又过4天的日期 -``` +```shell date -d "5 weeks 4 days" Thu Jan 11 19:55:35 CST 2018 ``` -### 3\. Get the next month date +#### 3\. 获取下个月的日期 -``` +```shell date -d "next month" Wed Jan 3 19:57:43 CST 2018 ``` -### 4\. Get the last sunday date +#### 4\. 获取下周日的日期 -``` +```shell date -d last-sunday Sun Nov 26 00:00:00 CST 2017 ``` -The date command comes with various formatting option, the following examples will show you how to format the date command output. +date 命令还有很多格式化相关的选项, 下面的例子向你演示如何格式化 date 命令的输出. -### 5\. Display the date in yyyy-mm-dd format +#### 5\. 以 yyyy-mm-dd 的格式显示日期 -``` +```shell date +"%F" 2017-12-03 ``` -### 6\. Display date in mm/dd/yyyy format +#### 6\. 以 mm/dd/yyyy 的格式显示日期 -``` +```shell date +"%m/%d/%Y" 12/03/2017 ``` -### 7\. Display only the time +#### 7\. 只显示时间 -``` +```shell date +"%T" 20:07:04 ``` -### 8\. Display the day of the year +#### 8\. 显示今天是一年中的第几天 -``` +```shell date +"%j" 337 ``` -### 9\. Formatting Options +#### 9\. 与格式化相关的选项 -| **%%** | A literal percent sign (“**%**“). | -| **%a** | The abbreviated weekday name (e.g., **Sun**). | -| **%A** | The full weekday name (e.g., **Sunday**). | -| **%b** | The abbreviated month name (e.g., **Jan**). 
| -| **%B** | Locale’s full month name (e.g., **January**). | -| **%c** | The date and time (e.g., **Thu Mar 3 23:05:25 2005**). | -| **%C** | The current century; like **%Y**, except omit last two digits (e.g., **20**). | -| **%d** | Day of month (e.g., **01**). | -| **%D** | Date; same as **%m/%d/%y**. | -| **%e** | Day of month, space padded; same as **%_d**. | -| **%F** | Full date; same as **%Y-%m-%d**. | -| **%g** | Last two digits of year of ISO week number (see **%G**). | -| **%G** | Year of ISO week number (see **%V**); normally useful only with **%V**. | -| **%h** | Same as **%b**. | -| **%H** | Hour (**00**..**23**). | -| **%I** | Hour (**01**..**12**). | -| **%j** | Day of year (**001**..**366**). | -| **%k** | Hour, space padded ( **0**..**23**); same as **%_H**. | -| **%l** | Hour, space padded ( **1**..**12**); same as **%_I**. | -| **%m** | Month (**01**..**12**). | -| **%M** | Minute (**00**..**59**). | -| **%n** | A newline. | -| **%N** | Nanoseconds (**000000000**..**999999999**). | -| **%p** | Locale’s equivalent of either **AM** or **PM**; blank if not known. | -| **%P** | Like **%p**, but lower case. | -| **%r** | Locale’s 12-hour clock time (e.g., **11:11:04 PM**). | -| **%R** | 24-hour hour and minute; same as **%H:%M**. | -| **%s** | Seconds since 1970-01-01 00:00:00 UTC. | -| **%S** | Second (**00**..**60**). | -| **%t** | A tab. | -| **%T** | Time; same as **%H:%M:%S**. | -| **%u** | Day of week (**1**..**7**); 1 is **Monday**. | -| **%U** | Week number of year, with Sunday as first day of week (**00**..**53**). | -| **%V** | ISO week number, with Monday as first day of week (**01**..**53**). | -| **%w** | Day of week (**0**..**6**); 0 is **Sunday**. | -| **%W** | Week number of year, with Monday as first day of week (**00**..**53**). | -| **%x** | Locale’s date representation (e.g., **12/31/99**). | -| **%X** | Locale’s time representation (e.g., **23:13:48**). | -| **%y** | Last two digits of year (**00**..**99**). | -| **%Y** | Year. | -| **%z** | +hhmm numeric time zone (e.g., **-0400**). | -| **%:z** | +hh:mm numeric time zone (e.g., **-04:00**). | -| **%::z** | +hh:mm:ss numeric time zone (e.g., **-04:00:00**). | -| **%:::z** | Numeric time zone with “**:**” to necessary precision (e.g., **-04**, **+05:30**). | -| **%Z** | Alphabetic time zone abbreviation (e.g., EDT). | +| **%%** | 百分号 (“**%**“). | +| **%a** | 星期的缩写形式 (像这样, **Sun**). | +| **%A** | 星期的完整形式 (像这样, **Sunday**). | +| **%b** | 缩写的月份 (像这样, **Jan**). | +| **%B** | 当前区域的月份全称 (像这样, **January**). | +| **%c** | 日期以及时间 (像这样, **Thu Mar 3 23:05:25 2005**). | +| **%C** | 本世纪; 类似 **%Y**, 但是会省略最后两位 (像这样, **20**). | +| **%d** | 月中的第几日 (像这样, **01**). | +| **%D** | 日期; 效果与 **%m/%d/%y** 一样. | +| **%e** | 月中的第几日, 会填充空格; 与 **%_d** 一样. | +| **%F** | 完整的日期; 跟 **%Y-%m-%d** 一样. | +| **%g** | 年份的后两位 (参见 **%G**). | +| **%G** | 年份 (参见 **%V**); 通常跟 **%V** 连用. | +| **%h** | 同 **%b**. | +| **%H** | 小时 (**00**..**23**). | +| **%I** | 小时 (**01**..**12**). | +| **%j** | 一年中的第几天 (**001**..**366**). | +| **%k** | 小时, 用空格填充 ( **0**..**23**); same as **%_H**. | +| **%l** | 小时, 用空格填充 ( **1**..**12**); same as **%_I**. | +| **%m** | 月份 (**01**..**12**). | +| **%M** | 分钟 (**00**..**59**). | +| **%n** | 换行. | +| **%N** | 纳秒 (**000000000**..**999999999**). | +| **%p** | 当前区域时间是上午 **AM** 还是下午 **PM**; 未知则为空哦. | +| **%P** | 类似 **%p**, 但是用小写字母现实. | +| **%r** | 当前区域的12小时制现实时间 (像这样, **11:11:04 PM**). | +| **%R** | 24-小时制的小时和分钟; 同 **%H:%M**. | +| **%s** | 从 1970-01-01 00:00:00 UTC 到现在经历的秒数. | +| **%S** | 秒数 (**00**..**60**). 
| +| **%t** | tab 制表符. | +| **%T** | 时间; 同 **%H:%M:%S**. | +| **%u** | 星期 (**1**..**7**); 1 表示 **星期一**. | +| **%U** | 一年中的第几个星期, 以周日为一周的开始 (**00**..**53**). | +| **%V** | 一年中的第几个星期,以周一为一周的开始 (**01**..**53**). | +| **%w** | 用数字表示周几 (**0**..**6**); 0 表示 **周日**. | +| **%W** | 一年中的第几个星期, 周一为一周的开始 (**00**..**53**). | +| **%x** | Locale’s date representation (像这样, **12/31/99**). | +| **%X** | Locale’s time representation (像这样, **23:13:48**). | +| **%y** | 年份的后面两位 (**00**..**99**). | +| **%Y** | 年. | +| **%z** | +hhmm 指定数字时区 (像这样, **-0400**). | +| **%:z** | +hh:mm 指定数字时区 (像这样, **-04:00**). | +| **%::z** | +hh:mm:ss 指定数字时区 (像这样, **-04:00:00**). | +| **%:::z** | 指定数字时区, with “**:**” to necessary precision (e.g., **-04**, **+05:30**). | +| **%Z** | 时区的字符缩写(例如, EDT). | -### 10\. Set the system clock +#### 10\. 设置系统时间 -With the date command in Linux, you can also manually set the system clock using the --set switch, in the following example we will set the system date to 4:22pm August 30, 2017 +你也可以使用 date 来手工设置系统时间,方法是使用 `--set` 选项, 下面的例子会将系统时间设置成2017年8月30日下午4点22分 -``` +```shell date --set="20170830 16:22" ``` -Of course, if you use one of our [VPS Hosting services][1], you can always contact and ask our expert Linux admins (via chat or ticket) about date command in linux and anything related to date examples on Linux. They are available 24×7 and will provide information or assistance immediately. +当然, 如果你使用的是我们的 [VPS Hosting services][1], 你总是可以联系并咨询我们的Linux专家管理员 (通过客服电话或者下工单的方式) 关于 date 命令的任何东西. 他们是 24×7 在线的,会立即向您提供帮助. -PS. If you liked this post on How to Use the Date Command in Linux please share it with your friends on the social networks using the buttons below or simply leave a reply. Thanks. +PS. 如果你喜欢这篇帖子,请点击下面的按钮分享或者留言. 谢谢. -------------------------------------------------------------------------------- From 53e28ec21671e525827bafd80a67c8b7077cd2e8 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 8 Dec 2017 14:41:59 +0800 Subject: [PATCH 117/236] =?UTF-8?q?=E9=80=89=E9=A2=98=EF=BC=9A19951001=20W?= =?UTF-8?q?riting=20man=20Pages=20Using=20groff.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @oska874 --- .../19951001 Writing man Pages Using groff.md | 156 ++++++++++++++++++ 1 file changed, 156 insertions(+) create mode 100644 sources/tech/19951001 Writing man Pages Using groff.md diff --git a/sources/tech/19951001 Writing man Pages Using groff.md b/sources/tech/19951001 Writing man Pages Using groff.md new file mode 100644 index 0000000000..3ba749bc66 --- /dev/null +++ b/sources/tech/19951001 Writing man Pages Using groff.md @@ -0,0 +1,156 @@ +Writing man Pages Using groff +=================== + +groff is the GNU version of the popular nroff/troff text-formatting tools provided on most Unix systems. Its most common use is writing manual pages—online documentation for commands, programming interfaqces, and so forth. In this article, we show you the ropes of writing your own man pages with groff. + +Two of the original text processing systems found on Unix systems are troff and nroff, developed at Bell Labs for the original implementation of Unix (in fact, the development of Unix itself was spurred, in part, to support such a text-processing system). The first version of this text processor was called roff (for “runoff”); later came troff, which generated output for a particular typesetter in use at the time. nroff was a later version that became the standard text processor on Unix systems everywhere. 
groff is GNU's implementation of nroff and troff that is used on Linux systems. It includes several extended features and drivers for a number of printing devices. + +groff is capable of producing documents, articles, and books, much in the same vein as other text-formatting systems, such as TeX. However, groff (as well as the original nroff) has one intrinsic feature that is absent from TeX and variants: the ability to produce plain-ASCII output. While other systems are great for producing documents to be printed, groff is able to produce plain ASCII to be viewed online (or printed directly as plain text on even the simplest of printers). If you're going to be producing documentation to be viewed online, as well as in printed form, groff may be the way to go (although there are alternatives, such as Texinfo, Lametex, and other tools). + +groff also has the benefit of being much smaller than TeX; it requires fewer support files and executables than even a minimal TeX distribution. + +One special application of groff is to format Unix man pages. If you're a Unix programmer, you'll eventually need to write and produce man pages of some kind. In this article, we'll introduce the use of groff through the writing of a short man page. + +As with TeX, groff uses a particular text-formatting language to describe how to process the text. This language is slightly more cryptic than systems such as TeX, but also less verbose. In addition, groff provides several macro packages that are used on top of the basic formatter; these macro packages are tailored to a particular type of document. For example, the mgs macros are an ideal choice for writing articles and papers, while the man macros are used for man pages. + +### Writing a man Page + +Writing man pages with groff is actually quite simple. For your man page to look like others, you need to follow several conventions in the source, which are presented below. In this example, we'll write a man page for a mythical command coffee that controls your networked coffee machine in various ways. + +Using any text editor, enter the source from Listing 1 and save the result as coffee.man. Do not enter the line numbers at the beginning of each line; those are used only for reference later in the article. + +``` +.TH COFFEE 1 "23 March 94" +.SH NAME +coffee /- Control remote coffee machine +.SH SYNOPSIS +/fBcoffee/fP [ -h | -b ] [ -t /fItype/fP ] +/fIamount/fP +.SH DESCRIPTION +/fBcoffee/fP queues a request to the remote +coffee machine at the device /fB/dev/cf0/fR. +The required /fIamount/fP argument specifies +the number of cups, generally between 0 and +12 on ISO standard coffee machines. +.SS Options +.TP +/fB-h/fP +Brew hot coffee. Cold is the default. +.TP +/fB-b/fP +Burn coffee. Especially useful when executing +/fBcoffee/fP on behalf of your boss. +.TP +/fB-t /fItype/fR +Specify the type of coffee to brew, where +/fItype/fP is one of /fBcolumbian/fP, +/fBregular/fP, or /fBdecaf/fP. +.SH FILES +.TP +/fC/dev/cf0/fR +The remote coffee machine device +.SH "SEE ALSO" +milk(5), sugar(5) +.SH BUGS +May require human intervention if coffee +supply is exhausted. +``` + +Don't let the amount of obscurity in this source file frighten you. It helps to know that the character sequences \fB, \fI, and \fR are used to change the font to boldface, italics, and roman type, respectively. \fP sets the font to the one previously selected. + +Other groff requests appear on lines beginning with a dot (.). 
On line 1, we see that the .TH request is used to set the title of the man page to COFFEE, the man section to 1, and the date of the last man page revision. (Recall that man section 1 is used for user commands, section 2 is for system calls, and so forth. The man man command details each section number.) On line 2, the .SH request is used to start a section, entitled NAME. Note that almost all Unix man pages use the section progression NAME, SYNOPSIS, DESCRIPTION, FILES, SEE ALSO, NOTES, AUTHOR, and BUGS, with extra, optional sections as needed. This is just a convention used when writing man pages and isn't enforced by the software at all. + +Line 3 gives the name of the command and a short description, after a dash ([mi]). You should use this format for the NAME section so that your man page can be added to the whatis database used by the man -k and apropos commands. + +On lines 4—6 we give the synopsis of the command syntax for coffee. Note that italic type \fI...\fP is used to denote parameters on the command line, and that optional arguments are enclosed in square brackets. + +Lines 7—12 give a brief description of the command. Boldface type is generally used to denote program and file names. On line 13, a subsection named Options is started with the .SS request. Following this on lines 14—25 is a list of options, presented using a tagged list. Each item in the tagged list is marked with the .TPrequest; the line after .TP is the tag, after which follows the item text itself. For example, the source on lines 14—16: + +``` +.TP +\fB-h\P +Brew hot coffee. Cold is the default. +``` + +``` +-h Brew hot coffee. Cold is the default. +``` + +Lines 26—29 make up the FILES section of the man page, which describes any files that the command might use to do its work. A tagged list using the .TP request is used for this as well. + +On lines 30—31, the SEE ALSO section is given, which provides cross-references to other man pages of note. Notice that the string <\#34>SEE ALSO<\#34>following the .SH request on line 30 is in quotes; this is because .SH uses the first whitespace-delimited argument as the section title. Therefore any section titles that are more than one word need to be enclosed in quotes to make up a single argument. Finally, on lines 32—34, the BUGS section is presented. + +### Formatting and Installing the man Page + +In order to format this man page and view it on your screen, you can use the command: + +``` +$ groff -Tascii -man coffee.man | more +``` + +``` +COFFEE(1) COFFEE(1) +NAME + coffee - Control remote coffee machine +SYNOPSIS + coffee [ -h | -b ] [ -t type ] amount +DESCRIPTION + coffee queues a request to the remote coffee machine at + the device /dev/cf0\. The required amount argument speci- + fies the number of cups, generally between 0 and 12 on ISO + standard coffee machines. + Options + -h Brew hot coffee. Cold is the default. + -b Burn coffee. Especially useful when executing cof- + fee on behalf of your boss. + -t type + Specify the type of coffee to brew, where type is + one of columbian, regular, or decaf. +FILES + /dev/cf0 + The remote coffee machine device +SEE ALSO + milk(5), sugar(5) +BUGS + May require human intervention if coffee supply is + exhausted. +``` + +As mentioned before, groff is capable of producing other types of output. Using the -Tps option in place of -Tascii will produce PostScript output that you can save to a file, view with GhostView, or print on a PostScript printer. 
-Tdvi will produce device-independent .dvi output similar to that produced by TeX. + +If you wish to make the man page available for others to view on your system, you need to install the groff source in a directory that is present in other users' MANPATH. The location for standard man pages is /usr/man. The source for section 1 man pages should therefore go in /usr/man/man1\. Therefore, the command: + +``` +$ cp coffee.man /usr/man/man1/coffee.1 +``` + +If you can't copy man page sources directly to /usr/man (say, because you're not the system administrator), you can create your own man page directory tree and add it to your MANPATH. The MANPATH environment variable is of the same format asPATH; for example, to add the directory /home/mdw/man to MANPATH just use: + +``` +$ export MANPATH=/home/mdw/man:$MANPATH +``` + +``` +groff -Tascii -mgs files... +``` + +Unfortunately, the macro sets provided with groff are not well-documented. There are section 7 man pages for some of them; for example, man 7 groff_mm will tell you about the mm macro set. However, this documentation usually only covers the differences and new features in the groff implementation, which assumes you have access to the man pages for the original nroff/troff macro sets (known as DWB—the Documentor's Work Bench). The best source of information may be a book on using nroff/troff which covers these classic macro sets in detail. For more about writing man pages, you can always look at the man page sources (in /usr/man) and determine what they do by comparing the formatted output with the source. + +This article is adapted from Running Linux, by Matt Welsh and Lar Kaufman, published by O'Reilly and Associates (ISBN 1-56592-100-3). Among other things, this book includes tutorials of various text-formatting systems used under Linux. Information in this issue of Linux Journal as well as Running Linux should provide a good head-start on using the many text tools available for the system. + +### Good luck, and happy documenting! + +Matt Welsh ([mdw@cs.cornell.edu][1]) is a student and systems programmer at Cornell University, working with the Robotics and Vision Laboratory on projects dealing with real-time machine vision. 
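
If you cannot write to /usr/man at all, the personal-MANPATH approach described above can still be exercised end to end without root privileges. Here is a minimal sketch; the ~/man tree is an assumed location, and coffee.1 is just the article's running example:

```
$ mkdir -p ~/man/man1               # create a personal man page tree
$ cp coffee.man ~/man/man1/coffee.1 # install the groff source there
$ export MANPATH=~/man:$MANPATH     # make the tree visible to man
$ man coffee                        # the page is now found like any other
```

Put the export line in your shell startup file if you want the personal tree to be searched in every session.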
+ +-------------------------------------------------------------------------------- + +via: http://www.linuxjournal.com/article/1158 + +作者:[Matt Welsh][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxjournal.com/user/800006 +[1]:mailto:mdw@cs.cornell.edu \ No newline at end of file From f09c088bb375d36186b41cf0c721905a4a215aaf Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 8 Dec 2017 14:44:17 +0800 Subject: [PATCH 118/236] translating 19951001 Writing man Pages Using groff.md --- sources/tech/19951001 Writing man Pages Using groff.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/19951001 Writing man Pages Using groff.md b/sources/tech/19951001 Writing man Pages Using groff.md index 3ba749bc66..3aad365c56 100644 --- a/sources/tech/19951001 Writing man Pages Using groff.md +++ b/sources/tech/19951001 Writing man Pages Using groff.md @@ -1,3 +1,4 @@ +translating by wxy Writing man Pages Using groff =================== From 5417ef1fd580956abb828c20aaa46bcfd0fb3364 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 8 Dec 2017 14:48:26 +0800 Subject: [PATCH 119/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20disabl?= =?UTF-8?q?e=20USB=20storage=20on=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...922 How to disable USB storage on Linux.md | 59 +++++++++++++++++++ 1 file changed, 59 insertions(+) create mode 100644 sources/tech/20170922 How to disable USB storage on Linux.md diff --git a/sources/tech/20170922 How to disable USB storage on Linux.md b/sources/tech/20170922 How to disable USB storage on Linux.md new file mode 100644 index 0000000000..36723ed34c --- /dev/null +++ b/sources/tech/20170922 How to disable USB storage on Linux.md @@ -0,0 +1,59 @@ +translating by lujun9972 +How to disable USB storage on Linux +====== +To secure our infrastructure of data breaches, we use software & hardware firewalls to restrict unauthorized access from outside but data breaches can occur from inside as well. To remove such a possibility, organizations limit & monitor the access to internet & also disable usb storage devices. + +In this tutorial, we are going to discuss three different ways to disable USB storage devices on Linux machines. All the three methods have been tested on CentOS 6 & 7 machine & are working as they are supposed to . So let’s discuss all the three methods one by one, + +( Also Read : [Ultimate guide to securing SSH sessions][1] ) + +### Method 1 – Fake install + +In this method, we add a line ‘install usb-storage /bin/true’ which causes the ‘/bin/true’ to run instead of installing usb-storage module & that’s why it’s also called ‘Fake Install’ . To do this, create and open a file named ‘block_usb.conf’ (it can be something as well) in the folder ‘/etc/modprobe.d’, + +$ sudo vim /etc/modprobe.d/block_usb.conf + +& add the below mentioned line, + +install usb-storage /bin/true + +Now save the file and exit. + +### Method 2 – Removing the USB driver + +Using this method, we can remove/move the drive for usb-storage (usb_storage.ko) from our machines, thus making it impossible to access a usb-storage device from the mahcine. 
To move the driver from it’s default location, execute the following command, + +$ sudo mv /lib/modules/$(uname -r)/kernel/drivers/usb/storage/usb-storage.ko /home/user1 + +Now the driver is not available on its default location & thus would not be loaded when a usb-storage device is attached to the system & device would not be able to work. But this method has one little issue, that is when the kernel of the system is updated the usb-storage module would again show up in it’s default location. + +### Method 3- Blacklisting USB-storage + +We can also blacklist usb-storage using the file ‘/etc/modprobe.d/blacklist.conf’. This file is available on RHEL/CentOS 6 but might need to be created on 7\. To blacklist usb-storage, open/create the above mentioned file using vim, + +$ sudo vim /etc/modprobe.d/blacklist.conf + +& enter the following line to blacklist the usb, + +blacklist usb-storage + +Save file & exit. USB-storage will now be blocked on the system but this method has one major downside i.e. any privileged user can load the usb-storage module by executing the following command, + +$ sudo modprobe usb-storage + +This issue makes this method somewhat not desirable but it works well for non-privileged users. + +Reboot your system after the changes have been made to implement the changes made for all the above mentioned methods. Do check these methods to disable usb storage & let us know if you face any issue or have a query using the comment box below. + +-------------------------------------------------------------------------------- + +via: http://linuxtechlab.com/disable-usb-storage-linux/ + +作者:[Shusain][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linuxtechlab.com/author/shsuain/ +[1]:http://linuxtechlab.com/ultimate-guide-to-securing-ssh-sessions/ From eefd9dad8d4979cd89816a490fdae763c7797cb9 Mon Sep 17 00:00:00 2001 From: erlinux Date: Fri, 8 Dec 2017 15:07:14 +0800 Subject: [PATCH 120/236] Translating By erlinux --- sources/tech/20171123 Why microservices are a security issue.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171123 Why microservices are a security issue.md b/sources/tech/20171123 Why microservices are a security issue.md index d5868faa9e..0bda05860e 100644 --- a/sources/tech/20171123 Why microservices are a security issue.md +++ b/sources/tech/20171123 Why microservices are a security issue.md @@ -1,3 +1,5 @@ +**translating by [erlinux](https://github.com/erlinux)** + Why microservices are a security issue ============================================================ From f48edad1b0b45e90a6ec36cf343089d3a2064603 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 8 Dec 2017 15:09:00 +0800 Subject: [PATCH 121/236] translated --- sources/tech/20171205 How to Use the Date Command in Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20171205 How to Use the Date Command in Linux.md b/sources/tech/20171205 How to Use the Date Command in Linux.md index 2cd7cd6877..9564b72e69 100644 --- a/sources/tech/20171205 How to Use the Date Command in Linux.md +++ b/sources/tech/20171205 How to Use the Date Command in Linux.md @@ -133,7 +133,7 @@ date +"%j" | **%z** | +hhmm 指定数字时区 (像这样, **-0400**). | | **%:z** | +hh:mm 指定数字时区 (像这样, **-04:00**). | | **%::z** | +hh:mm:ss 指定数字时区 (像这样, **-04:00:00**). 
+
+Reboot your system after making the changes, in order for any of the above mentioned methods to take effect. Do try these methods to disable USB storage & let us know if you face any issue or have a query using the comment box below.

-------------------------------------------------------------------------------- 

via: http://linuxtechlab.com/disable-usb-storage-linux/

作者:[Shusain][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/ultimate-guide-to-securing-ssh-sessions/

From eefd9dad8d4979cd89816a490fdae763c7797cb9 Mon Sep 17 00:00:00 2001
From: erlinux
Date: Fri, 8 Dec 2017 15:07:14 +0800
Subject: [PATCH 120/236] Translating By erlinux

---
 sources/tech/20171123 Why microservices are a security issue.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20171123 Why microservices are a security issue.md b/sources/tech/20171123 Why microservices are a security issue.md
index d5868faa9e..0bda05860e 100644
--- a/sources/tech/20171123 Why microservices are a security issue.md
+++ b/sources/tech/20171123 Why microservices are a security issue.md
@@ -1,3 +1,5 @@
+**translating by [erlinux](https://github.com/erlinux)**
+
 Why microservices are a security issue
 ============================================================

From f48edad1b0b45e90a6ec36cf343089d3a2064603 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 8 Dec 2017 15:09:00 +0800
Subject: [PATCH 121/236] translated

---
 sources/tech/20171205 How to Use the Date Command in Linux.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sources/tech/20171205 How to Use the Date Command in Linux.md b/sources/tech/20171205 How to Use the Date Command in Linux.md
index 2cd7cd6877..9564b72e69 100644
--- a/sources/tech/20171205 How to Use the Date Command in Linux.md
+++ b/sources/tech/20171205 How to Use the Date Command in Linux.md
@@ -133,7 +133,7 @@ date +"%j"
 | **%z** | +hhmm 指定数字时区 (像这样, **-0400**). |
 | **%:z** | +hh:mm 指定数字时区 (像这样, **-04:00**). |
 | **%::z** | +hh:mm:ss 指定数字时区 (像这样, **-04:00:00**). |
-| **%:::z** | 指定数字时区, with “**:**” to necessary precision (e.g., **-04**, **+05:30**). |
+| **%:::z** | 指定数字时区, 其中 “**:**” 的个数由你需要的精度来决定 (例如, **-04**, **+05:30**). |
 | **%Z** | 时区的字符缩写(例如, EDT). |
 
 #### 10\. 设置系统时间

From 33a73d24329df49349f7d22e84d560c867216bdb Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 8 Dec 2017 15:11:07 +0800
Subject: [PATCH 122/236] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=AF=95?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 .../tech/20171205 How to Use the Date Command in Linux.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {sources => translated}/tech/20171205 How to Use the Date Command in Linux.md (100%)

diff --git a/sources/tech/20171205 How to Use the Date Command in Linux.md b/translated/tech/20171205 How to Use the Date Command in Linux.md
similarity index 100%
rename from sources/tech/20171205 How to Use the Date Command in Linux.md
rename to translated/tech/20171205 How to Use the Date Command in Linux.md

From 81bc1652701be97a9f10c6e9caa81baf181d1d61 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 8 Dec 2017 15:30:28 +0800
Subject: [PATCH 123/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Instal?=
 =?UTF-8?q?l=20Fish,=20The=20Friendly=20Interactive=20Shell,=20In=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...he Friendly Interactive Shell, In Linux.md | 336 ++++++++++++++++++
 1 file changed, 336 insertions(+)
 create mode 100644 sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md

diff --git a/sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md b/sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md
new file mode 100644
index 0000000000..eaecb18a0a
--- /dev/null
+++ b/sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md
@@ -0,0 +1,336 @@
+How To Install Fish, The Friendly Interactive Shell, In Linux
+======
+Fish, an acronym for “friendly interactive shell”, is a well-equipped, smart, and user-friendly shell for Unix-like systems. Fish comes with many significant features out of the box, such as autosuggestions, syntax highlighting, searchable history (like CTRL+r in Bash), smart search functionality, glorious VGA color support, web based configuration, and man page completions. Just install it and start using it in no time. There is no extra configuration, and you don’t have to install any extra add-ons/plug-ins!
+
+In this tutorial, let us discuss how to install and use the fish shell in Linux.
+
+#### Install Fish
+
+Even though fish is a very user-friendly and feature-rich shell, it is not included in the default repositories of most Linux distributions. It is available in the official repositories of only a few Linux distributions, such as Arch Linux, Gentoo, NixOS, and Ubuntu. However, installing fish is not a big deal.
+
+On Arch Linux and its derivatives, run the following command to install it.
+
+```
+sudo pacman -S fish
+```
+
+On CentOS 7 run the following as root:
+
+```
+cd /etc/yum.repos.d/
+```
+
+```
+wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_7/shells:fish:release:2.repo
+```
+
+```
+yum install fish
+```
+
+On CentOS 6 run the following as root:
+
+```
+cd /etc/yum.repos.d/
+```
+
+```
+wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_6/shells:fish:release:2.repo
+```
+
+```
+yum install fish
+```
+
+On Debian 9 run the following as root:
+
+```
+wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_9.0/Release.key -O Release.key
+```
+
+```
+apt-key add - < Release.key
+```
+
+```
+echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_9.0/ /' > /etc/apt/sources.list.d/fish.list
+```
+
+```
+apt-get update
+```
+
+```
+apt-get install fish
+```
+
+On Debian 8 run the following as root:
+
+```
+wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_8.0/Release.key -O Release.key
+```
+
+```
+apt-key add - < Release.key
+```
+
+```
+echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_8.0/ /' > /etc/apt/sources.list.d/fish.list
+```
+
+```
+apt-get update
+```
+
+```
+apt-get install fish
+```
+
+On Fedora 26 run the following as root:
+
+```
+dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_26/shells:fish:release:2.repo
+```
+
+```
+dnf install fish
+```
+
+On Fedora 25 run the following as root:
+
+```
+dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_25/shells:fish:release:2.repo
+```
+
+```
+dnf install fish
+```
+
+On Fedora 24 run the following as root:
+
+```
+dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_24/shells:fish:release:2.repo
+```
+
+```
+dnf install fish
+```
+
+On Fedora 23 run the following as root:
+
+```
+dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_23/shells:fish:release:2.repo
+```
+
+```
+dnf install fish
+```
+
+On openSUSE, run the following as root:
+
+```
+zypper install fish
+```
+
+On RHEL 7 run the following as root:
+
+```
+cd /etc/yum.repos.d/
+```
+
+```
+wget https://download.opensuse.org/repositories/shells:fish:release:2/RHEL_7/shells:fish:release:2.repo
+```
+
+```
+yum install fish
+```
+
+On RHEL 6 run the following as root:
+
+```
+cd /etc/yum.repos.d/
+```
+
+```
+wget https://download.opensuse.org/repositories/shells:fish:release:2/RedHat_RHEL-6/shells:fish:release:2.repo
+```
+
+```
+yum install fish
+```
+
+On Ubuntu and its derivatives:
+
+```
+sudo apt-get update
+```
+
+```
+sudo apt-get install fish
+```
+
+That’s it. It is time to explore the fish shell.
+
+### Usage
+
+To switch to fish from your default shell, do:
+
+```
+$ fish
+Welcome to fish, the friendly interactive shell
+```
+
+You can find the default fish configuration at ~/.config/fish/config.fish (similar to .bashrc). If it doesn’t exist, just create it.
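+
+For instance, a minimal config.fish might look like the following (purely an illustrative sketch; fish does not create these entries for you):
+
+```
+# ~/.config/fish/config.fish
+# prepend a personal bin directory to PATH
+set -g -x PATH $HOME/bin $PATH
+# a simple alias; fish turns it into a function behind the scenes
+alias ll 'ls -lh'
+```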
+
+#### Auto suggestions
+
+When you type a command, fish automatically suggests a command in a light grey color. So, you only have to type the first few letters of a command and hit the tab key to complete it.
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png)][2]
+
+If there are more possibilities, it will list them. You can select a command from the list by using the up/down arrow keys. After choosing the command you want to run, just hit the right arrow key and press ENTER to run it.
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png)][3]
+
+No more CTRL+r! As you already know, we do a reverse search by pressing ctrl+r to search for commands from history in the Bash shell. But that is not necessary in the fish shell. Since it has autosuggestion capability, just type the first few letters of a command, and pick a command that you already executed from the history. Cool, yeah?
+
+#### Smart search
+
+We can also do a smart search to find a specific command, file, or directory. For example, type a substring of a command, then hit the down arrow key to enter smart search, and type another letter to pick the required command from the list.
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png)][4]
+
+#### Syntax highlighting
+
+You will notice syntax highlighting as you type a command. See the difference in the screenshots below when I type the same command in the Bash and fish shells.
+
+Bash:
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png)][5]
+
+Fish:
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png)][6]
+
+As you see, “sudo” has been highlighted in the fish shell. Also, it displays invalid commands in red color by default.
+
+#### Web based configuration
+
+This is yet another cool feature of the fish shell. We can set our colors, change the fish prompt, and view functions, variables, history, and key bindings, all from a web page.
+
+To start the web configuration interface, just type:
+
+```
+fish_config
+```
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png)][7]
+
+#### Man page completions
+
+Bash and other shells support programmable completions, but only fish generates them automatically by parsing your installed man pages.
+
+To do so, run:
+
+```
+fish_update_completions
+```
+
+Sample output would be:
+
+```
+Parsing man pages and writing completions to /home/sk/.local/share/fish/generated_completions/
+ 3435 / 3435 : zramctl.8.gz
+```
+
+#### Disable greetings
+
+By default, fish greets you (Welcome to fish, the friendly interactive shell) at startup. If you don’t want this greeting message, you can disable it. To do so, edit the fish configuration file:
+
+```
+vi ~/.config/fish/config.fish
+```
+
+Add the following line:
+
+```
+set -g -x fish_greeting ''
+```
+
+Instead of disabling the fish greeting, you can also set a custom greeting message.
+
+```
+set -g -x fish_greeting 'Welcome to OSTechNix'
+```
+
+#### Getting help
+
+This is another impressive feature that caught my attention. To open the fish documentation page in your default web browser from the Terminal, just type:
+
+```
+help
+```
+
+The official documentation will be opened in your default browser. Also, you can use the man pages to display the help section of any command.
+
+```
+man fish
+```
+
+#### Set Fish as default shell
+
+Liked it very much? Great! Just set it as your default shell. To do so, use the chsh command:
+
+```
+chsh -s /usr/bin/fish
+```
+
+Here, /usr/bin/fish is the path to the fish shell. If you don’t know the correct path, the following command will help you:
+
+```
+which fish
+```
+
+Log out and log back in to use the new default shell.
+
+Please remember that many shell scripts written for Bash may not be fully compatible with fish.
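+
+A quick illustration of the difference (nothing beyond standard Bash and fish syntax is assumed here) is how an environment variable is exported:
+
+```
+# Bash
+export EDITOR=vim
+
+# fish
+set -x EDITOR vim
+```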
+
+To switch back to Bash, just run:
+
+```
+bash
+```
+
+If you want Bash as your default shell permanently, run:
+
+```
+chsh -s /bin/bash
+```
+
+And, that’s all for now, folks. At this stage, you should have a basic idea of fish shell usage. If you’re looking for a Bash alternative, fish might be a good option.
+
+Cheers!
+
+Resource:
+
+* [fish shell website][1]

-------------------------------------------------------------------------------- 

via: https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/

作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:https://fishshell.com/
[2]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png
[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png
[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png
[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png
[7]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png

From 8e4db559459021902bc0235577e6d0fe944245f0 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 8 Dec 2017 15:34:33 +0800
Subject: [PATCH 124/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Getting=20started?=
 =?UTF-8?q?=20with=20Turtl,=20an=20open=20source=20alternative=20to=20Ever?=
 =?UTF-8?q?note?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ... an open source alternative to Evernote.md | 95 +++++++++++++++++++
 1 file changed, 95 insertions(+)
 create mode 100644 sources/tech/20171206 Getting started with Turtl, an open source alternative to Evernote.md

diff --git a/sources/tech/20171206 Getting started with Turtl, an open source alternative to Evernote.md b/sources/tech/20171206 Getting started with Turtl, an open source alternative to Evernote.md
new file mode 100644
index 0000000000..969ef39901
--- /dev/null
+++ b/sources/tech/20171206 Getting started with Turtl, an open source alternative to Evernote.md
@@ -0,0 +1,95 @@
+Getting started with Turtl, an open source alternative to Evernote
+======
+![Using Turtl as an open source alternative to Evernote](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_brainstorm_island_520px.png?itok=6IUPyxkY)
+
+Just about everyone I know takes notes, and many people use an online note-taking application like Evernote, Simplenote, or Google Keep. Those are all good tools, but you have to wonder about the security and privacy of your information—especially in light of [Evernote's privacy flip-flop of 2016][1]. If you want more control over your notes and your data, you really need to turn to an open source tool.
+
+Whatever your reasons for moving away from Evernote, there are open source alternatives out there. Let's look at one of those alternatives: Turtl.
+
+### Getting started
+
+The developers behind [Turtl][2] want you to think of it as "Evernote with ultimate privacy." To be honest, I can't vouch for the level of privacy that Turtl offers, but it is quite a good note-taking tool.
+
+To get started with Turtl, [download][3] a desktop client for Linux, Mac OS, or Windows, or grab the [Android app][4]. Install it, then fire up the client or app. You'll be asked for a username and passphrase. Turtl uses the passphrase to generate a cryptographic key that, according to the developers, encrypts your notes before storing them anywhere on your device or on their servers.
+
+### Using Turtl
+
+You can create the following types of notes with Turtl:
+
+* Password
+
+* File
+
+* Image
+
+* Bookmark
+
+* Text note
+
+No matter what type of note you choose, you create it in a window that's similar for all types of notes:
+
+### [turtl-new-note-520.png][5]
+
+![Create new text note with Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-new-note-520.png)
+
+Creating a new text note in Turtl
+
+Add information like the title of the note and some text, and (if you're creating a File or Image note) attach a file or an image. Then click Save.
+
+You can add formatting to your notes via [Markdown][6]. You need to add the formatting by hand—there are no toolbar shortcuts.
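+
+For example, a short text note written like this (an illustrative snippet, not something Turtl generates for you) renders with a heading, bold text, and a bulleted list:
+
+```
+# Shopping list
+
+Remember: the market closes at **6 pm**.
+
+- apples
+- bread
+```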
+
+If you need to organize your notes, you can add them to Boards. Boards are just like notebooks in Evernote. To create a new board, click on the Boards tab, then click the Create a board button. Type a title for the board, then click Create.
+
+### [turtl-boards-520.png][7]
+
+![Create new board in Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-boards-520.png)
+
+Creating a new board in Turtl
+
+To add a note to a board, create or edit the note, then click the This note is not in any boards link at the bottom of the note. Select one or more boards, then click Done.
+
+To add tags to a note, click the Tags icon at the bottom of a note, enter one or more keywords separated by commas, and click Done.
+
+### Syncing your notes across your devices
+
+If you use Turtl across several computers and an Android device, for example, Turtl will sync your notes whenever you're online. However, I've encountered a small problem with syncing: Every so often, a note I've created on my phone doesn't sync to my laptop. I tried to sync manually by clicking the icon in the top left of the window and then clicking Sync Now, but that doesn't always work. I found that I occasionally need to click that icon, click Your settings, and then click Clear local data. I then need to log back into Turtl, but all the data syncs properly.
+
+### A question, and a couple of problems
+
+When I started using Turtl, I was dogged by one question: Where are my notes kept online? It turns out that the developers behind Turtl are based in the U.S., and that's also where their servers are. Although the encryption that Turtl uses is [quite strong][8] and your notes are encrypted on the server, the paranoid part of me says that you shouldn't save anything sensitive in Turtl (or any online note-taking tool, for that matter).
+
+Turtl displays notes in a tiled view, reminiscent of Google Keep:
+
+### [turtl-notes-520.png][9]
+
+![Notes in Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-notes-520.png)
+
+A collection of notes in Turtl
+
+There's no way to change that to a list view, either on the desktop or in the Android app. This isn't a problem for me, but I've heard some people pan Turtl because it lacks a list view.
+
+Speaking of the Android app, it's not bad; however, it doesn't integrate with the Android Share menu. If you want to add a note to Turtl based on something you've seen or read in another app, you need to copy and paste it manually.
+
+I've been using Turtl for several months on a Linux-powered laptop, my [Chromebook running GalliumOS][10], and an Android-powered phone.
It's been a pretty seamless experience across all those devices. Although it's not my favorite open source note-taking tool, Turtl does a pretty good job. Give it a try; it might be the simple note-taking tool you're looking for.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/12/using-turtl-open-source-alternative-evernote
+
+作者:[Scott Nesbitt][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/scottnesbitt
+[1]:https://blog.evernote.com/blog/2016/12/15/evernote-revisits-privacy-policy/
+[2]:https://turtlapp.com/
+[3]:https://turtlapp.com/download/
+[4]:https://turtlapp.com/download/
+[5]:https://opensource.com/file/378346
+[6]:https://en.wikipedia.org/wiki/Markdown
+[7]:https://opensource.com/file/378351
+[8]:https://turtlapp.com/docs/security/encryption-specifics/
+[9]:https://opensource.com/file/378356
+[10]:https://opensource.com/article/17/4/linux-chromebook-gallium-os

From 0998b98904c5adcf7416eeccf05817640d689f2a Mon Sep 17 00:00:00 2001
From: kimii <2545489745@qq.com>
Date: Fri, 8 Dec 2017 16:42:38 +0800
Subject: [PATCH 125/236] Update 20171206 How To Install Fish, The Friendly
 Interactive Shell, In Linux.md

Translating by kimii
---
 ... To Install Fish, The Friendly Interactive Shell, In Linux.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md b/sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md
index eaecb18a0a..00a5ebadef 100644
--- a/sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md
+++ b/sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md
@@ -1,3 +1,4 @@
+Translating by kimii
 How To Install Fish, The Friendly Interactive Shell, In Linux
 ======
 Fish, an acronym for “friendly interactive shell”, is a well-equipped, smart, and user-friendly shell for Unix-like systems. Fish comes with many significant features out of the box, such as autosuggestions, syntax highlighting, searchable history (like CTRL+r in Bash), smart search functionality, glorious VGA color support, web based configuration, and man page completions. Just install it and start using it in no time. There is no extra configuration, and you don’t have to install any extra add-ons/plug-ins!

From 6761ed9cdbdb129b3fd6c733f6bdd17e64cecdd3 Mon Sep 17 00:00:00 2001
From: wxy
Date: Fri, 8 Dec 2017 17:18:54 +0800
Subject: [PATCH 126/236] =?UTF-8?q?=E5=88=A0=E9=99=A4=E9=87=8D=E5=A4=8D?=
 =?UTF-8?q?=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@yongshouzhang ,不好意思, 这篇已经被翻译发布过了: https://linux.cn/article-9057-1.html ,所以这个文章不用浪费精力了。 @oska874
---
 .../20171207 How to use cron in Linux.md      | 288 ------------------
 1 file changed, 288 deletions(-)
 delete mode 100644 sources/tech/20171207 How to use cron in Linux.md

diff --git a/sources/tech/20171207 How to use cron in Linux.md b/sources/tech/20171207 How to use cron in Linux.md
deleted file mode 100644
index 3165aa8139..0000000000
--- a/sources/tech/20171207 How to use cron in Linux.md
+++ /dev/null
@@ -1,288 +0,0 @@
-translating by yongshouzhang
-
-How to use cron in Linux
-============================================================
-
-### No time for commands?
Scheduling tasks with cron means programs can run but you don't have to stay up late. - - [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/david-crop.jpg?itok=Wnz6HdS0)][10] 06 Nov 2017 [David Both][11] [Feed][12] - -27[up][13] - - [9 comments][14] -![How to use cron in Linux](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_) - -Image by : - -[Internet Archive Book Images][15]. Modified by Opensource.com. [CC BY-SA 4.0][16] - -One of the challenges (among the many advantages) of being a sysadmin is running tasks when you'd rather be sleeping. For example, some tasks (including regularly recurring tasks) need to run overnight or on weekends, when no one is expected to be using computer resources. I have no time to spare in the evenings to run commands and scripts that have to operate during off-hours. And I don't want to have to get up at oh-dark-hundred to start a backup or major update. - -Instead, I use two service utilities that allow me to run commands, programs, and tasks at predetermined times. The [cron][17] and at services enable sysadmins to schedule tasks to run at a specific time in the future. The at service specifies a one-time task that runs at a certain time. The cron service can schedule tasks on a repetitive basis, such as daily, weekly, or monthly. - -In this article, I'll introduce the cron service and how to use it. - -### Common (and uncommon) cron uses - -I use the cron service to schedule obvious things, such as regular backups that occur daily at 2 a.m. I also use it for less obvious things. - -* The system times (i.e., the operating system time) on my many computers are set using the Network Time Protocol (NTP). While NTP sets the system time, it does not set the hardware time, which can drift. I use cron to set the hardware time based on the system time. - -* I also have a Bash program I run early every morning that creates a new "message of the day" (MOTD) on each computer. It contains information, such as disk usage, that should be current in order to be useful. - -* Many system processes and services, like [Logwatch][1], [logrotate][2], and [Rootkit Hunter][3], use the cron service to schedule tasks and run programs every day. - -The crond daemon is the background service that enables cron functionality. - -The cron service checks for files in the /var/spool/cron and /etc/cron.d directories and the /etc/anacrontab file. The contents of these files define cron jobs that are to be run at various intervals. The individual user cron files are located in /var/spool/cron, and system services and applications generally add cron job files in the /etc/cron.ddirectory. The /etc/anacrontab is a special case that will be covered later in this article. - -### Using crontab - -The cron utility runs based on commands specified in a cron table (crontab). Each user, including root, can have a cron file. These files don't exist by default, but can be created in the /var/spool/cron directory using the crontab -e command that's also used to edit a cron file (see the script below). I strongly recommend that you not use a standard editor (such as Vi, Vim, Emacs, Nano, or any of the many other editors that are available). Using the crontab command not only allows you to edit the command, it also restarts the crond daemon when you save and exit the editor. The crontabcommand uses Vi as its underlying editor, because Vi is always present (on even the most basic of installations). 
- -New cron files are empty, so commands must be added from scratch. I added the job definition example below to my own cron files, just as a quick reference, so I know what the various parts of a command mean. Feel free to copy it for your own use. - -``` -# crontab -e -SHELL=/bin/bash -MAILTO=root@example.com -PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin - -# For details see man 4 crontabs - -# Example of job definition: -# .---------------- minute (0 - 59) -# | .------------- hour (0 - 23) -# | | .---------- day of month (1 - 31) -# | | | .------- month (1 - 12) OR jan,feb,mar,apr ... -# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat -# | | | | | -# * * * * * user-name command to be executed - -# backup using the rsbu program to the internal 4TB HDD and then 4TB external -01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2 - -# Set the hardware clock to keep it in sync with the more accurate system clock -03 05 * * * /sbin/hwclock --systohc - -# Perform monthly updates on the first of the month -# 25 04 1 * * /usr/bin/dnf -y update -``` - -The first three lines in the code above set up a default environment. The environment must be set to whatever is necessary for a given user because cron does not provide an environment of any kind. The SHELL variable specifies the shell to use when commands are executed. This example specifies the Bash shell. The MAILTO variable sets the email address where cron job results will be sent. These emails can provide the status of the cron job (backups, updates, etc.) and consist of the output you would see if you ran the program manually from the command line. The third line sets up the PATH for the environment. Even though the path is set here, I always prepend the fully qualified path to each executable. - -There are several comment lines in the example above that detail the syntax required to define a cron job. I'll break those commands down, then add a few more to show you some more advanced capabilities of crontab files. - -``` -01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2 -``` - -This line runs my self-written Bash shell script, rsbu, that backs up all my systems. This job kicks off at 1:01 a.m. (01 01) every day. The asterisks (*) in positions three, four, and five of the time specification are like file globs, or wildcards, for other time divisions; they specify "every day of the month," "every month," and "every day of the week." This line runs my backups twice; one backs up to an internal dedicated backup hard drive, and the other backs up to an external USB drive that I can take to the safe deposit box. - -The following line sets the hardware clock on the computer using the system clock as the source of an accurate time. This line is set to run at 5:03 a.m. (03 05) every day. - -``` -03 05 * * * /sbin/hwclock --systohc -``` - -I was using the third and final cron job (commented out) to perform a dnf or yumupdate at 04:25 a.m. on the first day of each month, but I commented it out so it no longer runs. - -``` -# 25 04 1 * * /usr/bin/dnf -y update -``` - -### Other scheduling tricks - -Now let's do some things that are a little more interesting than these basics. Suppose you want to run a particular job every Thursday at 3 p.m.: - -``` -00 15 * * Thu /usr/local/bin/mycronjob.sh -``` - -Or, maybe you need to run quarterly reports after the end of each quarter. 
The cron service has no option for "The last day of the month," so instead you can use the first day of the following month, as shown below. (This assumes that the data needed for the reports will be ready when the job is set to run.) - -``` -02 03 1 1,4,7,10 * /usr/local/bin/reports.sh -``` - -The following shows a job that runs one minute past every hour between 9:01 a.m. and 5:01 p.m. - -``` -01 09-17 * * * /usr/local/bin/hourlyreminder.sh -``` - -I have encountered situations where I need to run a job every two, three, or four hours. That can be accomplished by dividing the hours by the desired interval, such as */3 for every three hours, or 6-18/3 to run every three hours between 6 a.m. and 6 p.m. Other intervals can be divided similarly; for example, the expression */15 in the minutes position means "run the job every 15 minutes." - -``` -*/5 08-18/2 * * * /usr/local/bin/mycronjob.sh -``` - -One thing to note: The division expressions must result in a remainder of zero for the job to run. That's why, in this example, the job is set to run every five minutes (08:05, 08:10, 08:15, etc.) during even-numbered hours from 8 a.m. to 6 p.m., but not during any odd-numbered hours. For example, the job will not run at all from 9 p.m. to 9:59 a.m. - -I am sure you can come up with many other possibilities based on these examples. - -### Limiting cron access - -More Linux resources - -* [What is Linux?][4] - -* [What are Linux containers?][5] - -* [Download Now: Linux commands cheat sheet][6] - -* [Advanced Linux commands cheat sheet][7] - -* [Our latest Linux articles][8] - -Regular users with cron access could make mistakes that, for example, might cause system resources (such as memory and CPU time) to be swamped. To prevent possible misuse, the sysadmin can limit user access by creating a - -**/etc/cron.allow** - - file that contains a list of all users with permission to create cron jobs. The root user cannot be prevented from using cron. - -By preventing non-root users from creating their own cron jobs, it may be necessary for root to add their cron jobs to the root crontab. "But wait!" you say. "Doesn't that run those jobs as root?" Not necessarily. In the first example in this article, the username field shown in the comments can be used to specify the user ID a job is to have when it runs. This prevents the specified non-root user's jobs from running as root. The following example shows a job definition that runs a job as the user "student": - -``` -04 07 * * * student /usr/local/bin/mycronjob.sh -``` - -### cron.d - -The directory /etc/cron.d is where some applications, such as [SpamAssassin][18] and [sysstat][19], install cron files. Because there is no spamassassin or sysstat user, these programs need a place to locate cron files, so they are placed in /etc/cron.d. - -The /etc/cron.d/sysstat file below contains cron jobs that relate to system activity reporting (SAR). These cron files have the same format as a user cron file. - -``` -# Run system activity accounting tool every 10 minutes -*/10 * * * * root /usr/lib64/sa/sa1 1 1 -# Generate a daily summary of process accounting at 23:53 -53 23 * * * root /usr/lib64/sa/sa2 -A -``` - -The sysstat cron file has two lines that perform tasks. The first line runs the sa1program every 10 minutes to collect data stored in special binary files in the /var/log/sadirectory. Then, every night at 23:53, the sa2 program runs to create a daily summary. 
- -### Scheduling tips - -Some of the times I set in the crontab files seem rather random—and to some extent they are. Trying to schedule cron jobs can be challenging, especially as the number of jobs increases. I usually have only a few tasks to schedule on each of my computers, which is simpler than in some of the production and lab environments where I have worked. - -One system I administered had around a dozen cron jobs that ran every night and an additional three or four that ran on weekends or the first of the month. That was a challenge, because if too many jobs ran at the same time—especially the backups and compiles—the system would run out of RAM and nearly fill the swap file, which resulted in system thrashing while performance tanked, so nothing got done. We added more memory and improved how we scheduled tasks. We also removed a task that was very poorly written and used large amounts of memory. - -The crond service assumes that the host computer runs all the time. That means that if the computer is turned off during a period when cron jobs were scheduled to run, they will not run until the next time they are scheduled. This might cause problems if they are critical cron jobs. Fortunately, there is another option for running jobs at regular intervals: anacron. - -### anacron - -The [anacron][20] program performs the same function as crond, but it adds the ability to run jobs that were skipped, such as if the computer was off or otherwise unable to run the job for one or more cycles. This is very useful for laptops and other computers that are turned off or put into sleep mode. - -As soon as the computer is turned on and booted, anacron checks to see whether configured jobs missed their last scheduled run. If they have, those jobs run immediately, but only once (no matter how many cycles have been missed). For example, if a weekly job was not run for three weeks because the system was shut down while you were on vacation, it would be run soon after you turn the computer on, but only once, not three times. - -The anacron program provides some easy options for running regularly scheduled tasks. Just install your scripts in the /etc/cron.[hourly|daily|weekly|monthly]directories, depending how frequently they need to be run. - -How does this work? The sequence is simpler than it first appears. - -1. The crond service runs the cron job specified in /etc/cron.d/0hourly. - -``` -# Run the hourly jobs -SHELL=/bin/bash -PATH=/sbin:/bin:/usr/sbin:/usr/bin -MAILTO=root -01 * * * * root run-parts /etc/cron.hourly -``` - -1. The cron job specified in /etc/cron.d/0hourly runs the run-parts program once per hour. - -2. The run-parts program runs all the scripts located in the /etc/cron.hourlydirectory. - -3. The /etc/cron.hourly directory contains the 0anacron script, which runs the anacron program using the /etdc/anacrontab configuration file shown here. - -``` -# /etc/anacrontab: configuration file for anacron - -# See anacron(8) and anacrontab(5) for details. - -SHELL=/bin/sh -PATH=/sbin:/bin:/usr/sbin:/usr/bin -MAILTO=root -# the maximal random delay added to the base delay of the jobs -RANDOM_DELAY=45 -# the jobs will be started during the following hours only -START_HOURS_RANGE=3-22 - -#period in days delay in minutes job-identifier command -1 5 cron.daily nice run-parts /etc/cron.daily -7 25 cron.weekly nice run-parts /etc/cron.weekly -@monthly 45 cron.monthly nice run-parts /etc/cron.monthly -``` - -1. 
The anacron program runs the programs located in /etc/cron.daily once per day; it runs the jobs located in /etc/cron.weekly once per week, and the jobs in cron.monthly once per month. Note the specified delay times in each line that help prevent these jobs from overlapping themselves and other cron jobs. - -Instead of placing complete Bash programs in the cron.X directories, I install them in the /usr/local/bin directory, which allows me to run them easily from the command line. Then I add a symlink in the appropriate cron directory, such as /etc/cron.daily. - -The anacron program is not designed to run programs at specific times. Rather, it is intended to run programs at intervals that begin at the specified times, such as 3 a.m. (see the START_HOURS_RANGE line in the script just above) of each day, on Sunday (to begin the week), and on the first day of the month. If any one or more cycles are missed, anacron will run the missed jobs once, as soon as possible. - -### More on setting limits - -I use most of these methods for scheduling tasks to run on my computers. All those tasks are ones that need to run with root privileges. It's rare in my experience that regular users really need a cron job. One case was a developer user who needed a cron job to kick off a daily compile in a development lab. - -It is important to restrict access to cron functions by non-root users. However, there are circumstances when a user needs to set a task to run at pre-specified times, and cron can allow them to do that. Many users do not understand how to properly configure these tasks using cron and they make mistakes. Those mistakes may be harmless, but, more often than not, they can cause problems. By setting functional policies that cause users to interact with the sysadmin, individual cron jobs are much less likely to interfere with other users and other system functions. - -It is possible to set limits on the total resources that can be allocated to individual users or groups, but that is an article for another time. - -For more information, the man pages for [cron][21], [crontab][22], [anacron][23], [anacrontab][24], and [run-parts][25] all have excellent information and descriptions of how the cron system works. - -### Topics - - [Linux][26][SysAdmin][27] - -### About the author - - [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/david-crop.jpg?itok=oePpOpyV)][28] David Both - -- - - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981\. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for... 
[more about David Both][29][More about me][30] - -* [Learn how you can contribute][9] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/11/how-use-cron-linux - -作者:[David Both ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[1]:https://sourceforge.net/projects/logwatch/files/ -[2]:https://github.com/logrotate/logrotate -[3]:http://rkhunter.sourceforge.net/ -[4]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[5]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[6]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[7]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[8]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[9]:https://opensource.com/participate -[10]:https://opensource.com/users/dboth -[11]:https://opensource.com/users/dboth -[12]:https://opensource.com/user/14106/feed -[13]:https://opensource.com/article/17/11/how-use-cron-linux?rate=9R7lrdQXsne44wxIh0Wu91ytYaxxi86zT1-uHo1a1IU -[14]:https://opensource.com/article/17/11/how-use-cron-linux#comments -[15]:https://www.flickr.com/photos/internetarchivebookimages/20570945848/in/photolist-xkMtw9-xA5zGL-tEQLWZ-wFwzFM-aNwxgn-aFdWBj-uyFKYv-7ZCCBU-obY1yX-UAPafA-otBzDF-ovdDo6-7doxUH-obYkeH-9XbHKV-8Zk4qi-apz7Ky-apz8Qu-8ZoaWG-orziEy-aNwxC6-od8NTv-apwpMr-8Zk4vn-UAP9Sb-otVa3R-apz6Cb-9EMPj6-eKfyEL-cv5mwu-otTtHk-7YjK1J-ovhxf6-otCg2K-8ZoaJf-UAPakL-8Zo8j7-8Zk74v-otp4Ls-8Zo8h7-i7xvpR-otSosT-9EMPja-8Zk6Zi-XHpSDB-hLkuF3-of24Gf-ouN1Gv-fJzkJS-icfbY9 -[16]:https://creativecommons.org/licenses/by-sa/4.0/ -[17]:https://en.wikipedia.org/wiki/Cron -[18]:http://spamassassin.apache.org/ -[19]:https://github.com/sysstat/sysstat -[20]:https://en.wikipedia.org/wiki/Anacron -[21]:http://man7.org/linux/man-pages/man8/cron.8.html -[22]:http://man7.org/linux/man-pages/man5/crontab.5.html -[23]:http://man7.org/linux/man-pages/man8/anacron.8.html -[24]:http://man7.org/linux/man-pages/man5/anacrontab.5.html -[25]:http://manpages.ubuntu.com/manpages/zesty/man8/run-parts.8.html -[26]:https://opensource.com/tags/linux -[27]:https://opensource.com/tags/sysadmin -[28]:https://opensource.com/users/dboth -[29]:https://opensource.com/users/dboth -[30]:https://opensource.com/users/dboth From 99a80e03726706115d85a31a7c0e71c0931fe6dd Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 8 Dec 2017 18:06:54 +0800 Subject: [PATCH 127/236] PRF:19951001 Writing man Pages Using groff.md @wxy --- .../19951001 Writing man Pages Using groff.md | 106 +++++++++++------- 1 file changed, 64 insertions(+), 42 deletions(-) diff --git a/sources/tech/19951001 Writing man Pages Using groff.md b/sources/tech/19951001 Writing man Pages Using groff.md index 3aad365c56..896c6ebe48 100644 --- a/sources/tech/19951001 Writing man Pages Using groff.md +++ b/sources/tech/19951001 Writing man Pages Using groff.md @@ -1,54 +1,53 @@ -translating by wxy -Writing man Pages Using groff +使用 groff 编写 man 手册页页 =================== -groff is the GNU version of the popular nroff/troff text-formatting tools provided on 
most Unix systems. Its most common use is writing manual pages—online documentation for commands, programming interfaqces, and so forth. In this article, we show you the ropes of writing your own man pages with groff. +`groff` 是大多数 Unix 系统上所提供的流行的文本格式化工具 nroff/troff 的 GNU 版本。它一般用于编写手册页,即命令、编程接口等的在线文档。在本文中,我们将给你展示如何使用 `groff` 编写你自己的 man 手册页。 -Two of the original text processing systems found on Unix systems are troff and nroff, developed at Bell Labs for the original implementation of Unix (in fact, the development of Unix itself was spurred, in part, to support such a text-processing system). The first version of this text processor was called roff (for “runoff”); later came troff, which generated output for a particular typesetter in use at the time. nroff was a later version that became the standard text processor on Unix systems everywhere. groff is GNU's implementation of nroff and troff that is used on Linux systems. It includes several extended features and drivers for a number of printing devices. +在 Unix 系统上最初有两个文本处理系统:troff 和 nroff,它们是由贝尔实验室为初始的 Unix 所开发的(事实上,开发 Unix 系统的部分原因就是为了支持这样的一个文本处理系统)。这个文本处理器的第一个版本被称作 roff(意为 “runoff”——径流);稍后出现了 troff,在那时用于为特定的排字机Typesetter生成输出。nroff 是更晚一些的版本,它成为了各种 Unix 系统的标准文本处理器。groff 是 nroff 和 troff 的 GNU 实现,用在 Linux 系统上。它包括了几个扩展功能和一些打印设备的驱动程序。 -groff is capable of producing documents, articles, and books, much in the same vein as other text-formatting systems, such as TeX. However, groff (as well as the original nroff) has one intrinsic feature that is absent from TeX and variants: the ability to produce plain-ASCII output. While other systems are great for producing documents to be printed, groff is able to produce plain ASCII to be viewed online (or printed directly as plain text on even the simplest of printers). If you're going to be producing documentation to be viewed online, as well as in printed form, groff may be the way to go (although there are alternatives, such as Texinfo, Lametex, and other tools). +`groff` 能够生成文档、文章和书籍,很多时候它就像是其它的文本格式化系统(如 TeX)的血管一样。然而,`groff`(以及原来的 nroff)有一个固有的功能是 TeX 及其变体所缺乏的:生成普通 ASCII 输出。其它的系统在生成打印的文档方面做得很好,而 `groff` 却能够生成可以在线浏览的普通 ASCII(甚至可以在最简单的打印机上直接以普通文本打印)。如果要生成在线浏览的文档以及打印的表单,`groff` 也许是你所需要的(虽然也有替代品,如 Texinfo、Lametex 等等)。 -groff also has the benefit of being much smaller than TeX; it requires fewer support files and executables than even a minimal TeX distribution. +`groff` 还有一个好处是它比 TeX 小很多;它所需要的支持文件和可执行程序甚至比最小化的 TeX 版本都少。 -One special application of groff is to format Unix man pages. If you're a Unix programmer, you'll eventually need to write and produce man pages of some kind. In this article, we'll introduce the use of groff through the writing of a short man page. +`groff` 一个特定的用途是用于格式化 Unix 的 man 手册页。如果你是一个 Unix 程序员,你肯定需要编写和生成各种 man 手册页。在本文中,我们将通过编写一个简短的 man 手册页来介绍 `groff` 的使用。 -As with TeX, groff uses a particular text-formatting language to describe how to process the text. This language is slightly more cryptic than systems such as TeX, but also less verbose. In addition, groff provides several macro packages that are used on top of the basic formatter; these macro packages are tailored to a particular type of document. For example, the mgs macros are an ideal choice for writing articles and papers, while the man macros are used for man pages. +像 TeX 一样,`groff` 使用特定的文本格式化语言来描述如何处理文本。这种语言比 TeX 之类的系统更加神秘一些,但是更加简洁。此外,`groff` 在基本的格式化器之上提供了几个宏软件包;这些宏软件包是为一些特定类型的文档所定制的。举个例子, mgs 宏对于写作文章或论文很适合,而 man 宏可用于 man 手册页。 -### Writing a man Page +### 编写 man 手册页 -Writing man pages with groff is actually quite simple. 
For your man page to look like others, you need to follow several conventions in the source, which are presented below. In this example, we'll write a man page for a mythical command coffee that controls your networked coffee machine in various ways. +用 `groff` 编写 man 手册页十分简单。要让你的 man 手册页看起来和其它的一样,你需要从源头上遵循几个惯例,如下所示。在这个例子中,我们将为一个虚构的命令 `coffee` 编写 man 手册页,它用于以各种方式控制你的联网咖啡机。 -Using any text editor, enter the source from Listing 1 and save the result as coffee.man. Do not enter the line numbers at the beginning of each line; those are used only for reference later in the article. +使用任意文本编辑器,输入如下代码,并保存为 `coffee.man`。不要输入每行的行号,它们仅用于本文中的说明。 ``` .TH COFFEE 1 "23 March 94" .SH NAME -coffee /- Control remote coffee machine +coffee \- Control remote coffee machine .SH SYNOPSIS -/fBcoffee/fP [ -h | -b ] [ -t /fItype/fP ] -/fIamount/fP +\fBcoffee\fP [ -h | -b ] [ -t \fItype\fP ] +\fIamount\fP .SH DESCRIPTION -/fBcoffee/fP queues a request to the remote -coffee machine at the device /fB/dev/cf0/fR. -The required /fIamount/fP argument specifies +\fBcoffee\fP queues a request to the remote +coffee machine at the device \fB/dev/cf0\fR. +The required \fIamount\fP argument specifies the number of cups, generally between 0 and 12 on ISO standard coffee machines. .SS Options .TP -/fB-h/fP +\fB-h\fP Brew hot coffee. Cold is the default. .TP -/fB-b/fP +\fB-b\fP Burn coffee. Especially useful when executing -/fBcoffee/fP on behalf of your boss. +\fBcoffee\fP on behalf of your boss. .TP -/fB-t /fItype/fR +\fB-t \fItype\fR Specify the type of coffee to brew, where -/fItype/fP is one of /fBcolumbian/fP, -/fBregular/fP, or /fBdecaf/fP. +\fItype\fP is one of \fBcolumbian\fP, +\fBregular\fP, or \fBdecaf\fP. .SH FILES .TP -/fC/dev/cf0/fR +\fC/dev/cf0\fR The remote coffee machine device .SH "SEE ALSO" milk(5), sugar(5) @@ -57,15 +56,23 @@ May require human intervention if coffee supply is exhausted. ``` -Don't let the amount of obscurity in this source file frighten you. It helps to know that the character sequences \fB, \fI, and \fR are used to change the font to boldface, italics, and roman type, respectively. \fP sets the font to the one previously selected. +*清单 1:示例 man 手册页源文件* -Other groff requests appear on lines beginning with a dot (.). On line 1, we see that the .TH request is used to set the title of the man page to COFFEE, the man section to 1, and the date of the last man page revision. (Recall that man section 1 is used for user commands, section 2 is for system calls, and so forth. The man man command details each section number.) On line 2, the .SH request is used to start a section, entitled NAME. Note that almost all Unix man pages use the section progression NAME, SYNOPSIS, DESCRIPTION, FILES, SEE ALSO, NOTES, AUTHOR, and BUGS, with extra, optional sections as needed. This is just a convention used when writing man pages and isn't enforced by the software at all. +不要让这些晦涩的代码吓坏了你。字符串序列 `\fB`、`\fI` 和 `\fR` 分别用来改变字体为粗体、斜体和正体(罗马字体)。`\fP` 设置字体为前一个选择的字体。 -Line 3 gives the name of the command and a short description, after a dash ([mi]). You should use this format for the NAME section so that your man page can be added to the whatis database used by the man -k and apropos commands. +其它的 `groff` 请求request以点(`.`)开头出现在行首。第 1 行中,我们看到的 `.TH` 请求用于设置该 man 手册页的标题为 `COFFEE`、man 的部分为 `1`、以及该 man 手册页的最新版本的日期。(说明,man 手册的第 1 部分用于用户命令、第 2 部分用于系统调用等等。使用 `man man` 命令了解各个部分)。 -On lines 4—6 we give the synopsis of the command syntax for coffee. 
Note that italic type \fI...\fP is used to denote parameters on the command line, and that optional arguments are enclosed in square brackets. +在第 2 行,`.SH` 请求用于标记一个section的开始,并给该节名称为 `NAME`。注意,大部分的 Unix man 手册页依次使用 `NAME`、 `SYNOPSIS`、`DESCRIPTION`、`FILES`、`SEE ALSO`、`NOTES`、`AUTHOR` 和 `BUGS` 等节,个别情况下也需要一些额外的可选节。这只是编写 man 手册页的惯例,并不强制所有软件都如此。 -Lines 7—12 give a brief description of the command. Boldface type is generally used to denote program and file names. On line 13, a subsection named Options is started with the .SS request. Following this on lines 14—25 is a list of options, presented using a tagged list. Each item in the tagged list is marked with the .TPrequest; the line after .TP is the tag, after which follows the item text itself. For example, the source on lines 14—16: +第 3 行给出命令的名称,并在一个横线(`-`)后给出简短描述。在 `NAME` 节使用这个格式以便你的 man 手册页可以加到 whatis 数据库中——它可以用于 `man -k` 或 `apropos` 命令。 + +第 4-6 行我们给出了 `coffee` 命令格式的大纲。注意,斜体 `\fI...\fP` 用于表示命令行的参数,可选参数用方括号扩起来。 + +第 7-12 行给出了该命令的摘要介绍。粗体通常用于表示程序或文件的名称。 + +在 13 行,使用 `.SS` 开始了一个名为 `Options` 的子节。 + +接着第 14-25 行是选项列表,会使用参数列表样式表示。参数列表中的每一项以 `.TP` 请求来标记;`.TP` 后的行是参数,再之后是该项的文本。例如,第 14-16 行: ``` .TP @@ -73,22 +80,29 @@ Lines 7—12 give a brief description of the command. Boldface type is generally Brew hot coffee. Cold is the default. ``` +将会显示如下: + ``` -h Brew hot coffee. Cold is the default. ``` -Lines 26—29 make up the FILES section of the man page, which describes any files that the command might use to do its work. A tagged list using the .TP request is used for this as well. +第 26-29 行创建该 man 手册页的 `FILES` 节,它用于描述该命令可能使用的文件。可以使用 `.TP` 请求来表示文件列表。 -On lines 30—31, the SEE ALSO section is given, which provides cross-references to other man pages of note. Notice that the string <\#34>SEE ALSO<\#34>following the .SH request on line 30 is in quotes; this is because .SH uses the first whitespace-delimited argument as the section title. Therefore any section titles that are more than one word need to be enclosed in quotes to make up a single argument. Finally, on lines 32—34, the BUGS section is presented. +第 30-31 行,给出了 `SEE ALSO` 节,它提供了其它可以参考的 man 手册页。注意,第 30 行的 `.SH` 请求中 `"SEE ALSO"` 使用括号扩起来,这是因为 `.SH` 使用第一个空格来分隔该节的标题。任何超过一个单词的标题都需要使用引号扩起来成为一个单一参数。 -### Formatting and Installing the man Page +最后,第 32-34 行,是 `BUGS` 节。 + +### 格式化和安装 man 手册页 + +为了在你的屏幕上查看这个手册页格式化的样式,你可以使用如下命令: -In order to format this man page and view it on your screen, you can use the command: ``` $ groff -Tascii -man coffee.man | more ``` +`-Tascii` 选项告诉 `groff` 生成普通 ASCII 输出;`-man` 告诉 `groff` 使用 man 手册页宏集合。如果一切正常,这个 man 手册页显示应该如下。 + ``` COFFEE(1) COFFEE(1) NAME @@ -117,39 +131,47 @@ BUGS exhausted. ``` -As mentioned before, groff is capable of producing other types of output. Using the -Tps option in place of -Tascii will produce PostScript output that you can save to a file, view with GhostView, or print on a PostScript printer. -Tdvi will produce device-independent .dvi output similar to that produced by TeX. +*格式化的 man 手册页* -If you wish to make the man page available for others to view on your system, you need to install the groff source in a directory that is present in other users' MANPATH. The location for standard man pages is /usr/man. The source for section 1 man pages should therefore go in /usr/man/man1\. 
Therefore, the command: +如之前提到过的,`groff` 能够生成其它类型的输出。使用 `-Tps` 选项替代 `-Tascii` 将会生成 PostScript 输出,你可以将其保存为文件,用 GhostView 查看,或用一个 PostScript 打印机打印出来。`-Tdvi` 会生成设备无关的 .dvi 输出,类似于 TeX 的输出。 + +如果你希望让别人在你的系统上也可以查看这个 man 手册页,你需要安装这个 groff 源文件到其它用户的 `%MANPATH` 目录里面。标准的 man 手册页放在 `/usr/man`。第一部分的 man 手册页应该放在 `/usr/man/man1` 下,因此,使用命令: ``` $ cp coffee.man /usr/man/man1/coffee.1 ``` -If you can't copy man page sources directly to /usr/man (say, because you're not the system administrator), you can create your own man page directory tree and add it to your MANPATH. The MANPATH environment variable is of the same format asPATH; for example, to add the directory /home/mdw/man to MANPATH just use: +这将安装该 man 手册页到 `/usr/man` 中供所有人使用(注意使用 `.1` 扩展名而不是 `.man`)。当接下来执行 `man coffee` 命令时,该 man 手册页会被自动重新格式化,并且可查看的文本会被保存到 `/usr/man/cat1/coffee.1.Z` 中。 + +如果你不能直接复制 man 手册页的源文件到 `/usr/man`(比如说你不是系统管理员),你可创建你自己的 man 手册页目录树,并将其加入到你的 `%MANPATH`。`%MANPATH` 环境变量的格式同 `%PATH` 一样,举个例子,要添加目录 `/home/mdw/man` 到 `%MANPATH` ,只需要: ``` $ export MANPATH=/home/mdw/man:$MANPATH ``` +`groff` 和 man 手册页宏还有许多其它的选项和格式化命令。找到它们的最好办法是查看 `/usr/lib/groff` 中的文件; `tmac` 目录包含了宏文件,自身通常会包含其所提供的命令的文档。要让 `groff` 使用特定的宏集合,只需要使用 `-m macro` (或 `-macro`) 选项。例如,要使用 mgs 宏,使用命令: + ``` groff -Tascii -mgs files... ``` -Unfortunately, the macro sets provided with groff are not well-documented. There are section 7 man pages for some of them; for example, man 7 groff_mm will tell you about the mm macro set. However, this documentation usually only covers the differences and new features in the groff implementation, which assumes you have access to the man pages for the original nroff/troff macro sets (known as DWB—the Documentor's Work Bench). The best source of information may be a book on using nroff/troff which covers these classic macro sets in detail. For more about writing man pages, you can always look at the man page sources (in /usr/man) and determine what they do by comparing the formatted output with the source. +`groff` 的 man 手册页对这个选项描述了更多细节。 -This article is adapted from Running Linux, by Matt Welsh and Lar Kaufman, published by O'Reilly and Associates (ISBN 1-56592-100-3). Among other things, this book includes tutorials of various text-formatting systems used under Linux. Information in this issue of Linux Journal as well as Running Linux should provide a good head-start on using the many text tools available for the system. +不幸的是,随同 `groff` 提供的宏集合没有完善的文档。第 7 部分的 man 手册页提供了一些,例如,`man 7 groff_mm` 会给你 mm 宏集合的信息。然而,该文档通常只覆盖了在 `groff` 实现中不同和新功能,而假设你已经了解过原来的 nroff/troff 宏集合(称作 DWB:the Documentor's Work Bench)。最佳的信息来源或许是一本覆盖了那些经典宏集合细节的书。要了解更多的编写 man 手册页的信息,你可以看看 man 手册页源文件(`/usr/man` 中),并通过它们来比较源文件的输出。 -### Good luck, and happy documenting! +这篇文章是《Running Linux》 中的一章,由 Matt Welsh 和 Lar Kaufman 著,奥莱理出版(ISBN 1-56592-100-3)。在本书中,还包括了 Linux 下使用的各种文本格式化系统的教程。这期的《Linux Journal》中的内容及《Running Linux》应该可以给你提供在 Linux 上使用各种文本工具的良好开端。 -Matt Welsh ([mdw@cs.cornell.edu][1]) is a student and systems programmer at Cornell University, working with the Robotics and Vision Laboratory on projects dealing with real-time machine vision. +### 祝好,撰写快乐! 
+ +Matt Welsh ([mdw@cs.cornell.edu][1])是康奈尔大学的一名学生和系统程序员,在机器人和视觉实验室从事于时时机器视觉研究。 -------------------------------------------------------------------------------- via: http://www.linuxjournal.com/article/1158 作者:[Matt Welsh][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f1a3ed70e18107ee50b66c5b91b78abf493842d3 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 8 Dec 2017 18:30:52 +0800 Subject: [PATCH 128/236] TRD:19951001 Writing man Pages Using groff.md @wxy --- .../tech/19951001 Writing man Pages Using groff.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) rename {sources => translated}/tech/19951001 Writing man Pages Using groff.md (99%) diff --git a/sources/tech/19951001 Writing man Pages Using groff.md b/translated/tech/19951001 Writing man Pages Using groff.md similarity index 99% rename from sources/tech/19951001 Writing man Pages Using groff.md rename to translated/tech/19951001 Writing man Pages Using groff.md index 896c6ebe48..360bfcaf4a 100644 --- a/sources/tech/19951001 Writing man Pages Using groff.md +++ b/translated/tech/19951001 Writing man Pages Using groff.md @@ -1,4 +1,4 @@ -使用 groff 编写 man 手册页页 +使用 groff 编写 man 手册页 =================== `groff` 是大多数 Unix 系统上所提供的流行的文本格式化工具 nroff/troff 的 GNU 版本。它一般用于编写手册页,即命令、编程接口等的在线文档。在本文中,我们将给你展示如何使用 `groff` 编写你自己的 man 手册页。 From 61bc4155dd2d58476a834994e699176b8bcb324c Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 8 Dec 2017 18:31:26 +0800 Subject: [PATCH 129/236] PUB:19951001 Writing man Pages Using groff.md https://linux.cn/article-9122-1.html --- .../tech => published}/19951001 Writing man Pages Using groff.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/19951001 Writing man Pages Using groff.md (100%) diff --git a/translated/tech/19951001 Writing man Pages Using groff.md b/published/19951001 Writing man Pages Using groff.md similarity index 100% rename from translated/tech/19951001 Writing man Pages Using groff.md rename to published/19951001 Writing man Pages Using groff.md From c7df54d7f2fad2f82148c1f8055aac6e262611ef Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?= Date: Fri, 8 Dec 2017 20:12:40 +0800 Subject: [PATCH 130/236] =?UTF-8?q?=E9=80=89=E9=A2=98=207=20tools=20for=20?= =?UTF-8?q?analyzing=20performance=20in=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 选题文章名 7 tools for analyzing performance in Linux with bcc/BPF --- ...lyzing performance in Linux with bccBPF.md | 402 ++++++++++++++++++ 1 file changed, 402 insertions(+) create mode 100644 sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md diff --git a/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md b/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md new file mode 100644 index 0000000000..e6fd19e212 --- /dev/null +++ b/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md @@ -0,0 +1,402 @@ +translating by yongshouzhang + +7 tools for analyzing performance in Linux with bcc/BPF +============================================================ + +### Look deeply into your Linux code with these Berkeley Packet Filter (BPF) Compiler Collection (bcc) tools. 
+
+ [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/pictures/brendan_face2017_620d.jpg?itok=xZzBQNcY)][7] 21 Nov 2017 [Brendan Gregg][8] [Feed][9]
+
+![7 superpowers for Fedora bcc/BPF performance analysis](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/penguins%20in%20space_0.jpg?itok=umpCTAul)
+
+Image by: opensource.com
+
+A new technology has arrived in Linux that can provide sysadmins and developers with a large number of new tools and dashboards for performance analysis and troubleshooting. It's called the enhanced Berkeley Packet Filter (eBPF, or just BPF). Although these enhancements weren't developed in Berkeley, they operate on much more than just packets, and they do much more than just filtering. I'll discuss one way to use BPF on the Fedora and Red Hat family of Linux distributions, demonstrating on Fedora 26.
+
+BPF can run user-defined sandboxed programs in the kernel to add new custom capabilities instantly. It's like adding superpowers to Linux, on demand. Examples of what you can use it for include:
+
+* Advanced performance tracing tools: programmatic low-overhead instrumentation of filesystem operations, TCP events, user-level events, etc.
+
+* Network performance: dropping packets early on to improve DDOS resilience, or redirecting packets in-kernel to improve performance
+
+* Security monitoring: 24x7 custom monitoring and logging of suspicious kernel and userspace events
+
+BPF programs must pass an in-kernel verifier to ensure they are safe to run, making it a safer option, where possible, than writing custom kernel modules. I suspect most people won't write BPF programs themselves, but will use other people's. I've published many on GitHub as open source in the [BPF Compiler Collection (bcc)][12] project. bcc provides different frontends for BPF development, including Python and Lua, and is currently the most active project for BPF tooling.
+
+### 7 useful new bcc/BPF tools
+
+To understand the bcc/BPF tools and what they instrument, I created the following diagram and added it to the bcc project:
+
+### [bcc_tracing_tools.png][13]
+
+![Linux bcc/BPF tracing tools diagram](https://opensource.com/sites/default/files/u128651/bcc_tracing_tools.png)
+
+Brendan Gregg, [CC BY-SA 4.0][14]
+
+These are command-line interface (CLI) tools you can use over SSH (secure shell). Much analysis nowadays, including at my employer, is conducted using GUIs and dashboards. SSH is a last resort. But these CLI tools are still a good way to preview BPF capabilities, even if you ultimately intend to use them only through a GUI when available. I've begun adding BPF capabilities to an open source GUI, but that's a topic for another article. Right now I'd like to share the CLI tools, which you can use today.
+
+### 1. execsnoop
+
+Where to start? How about watching new processes. These can consume system resources, but be so short-lived they don't show up in top(1) or other tools. They can be instrumented (or, using the industry jargon for this, they can be traced) using [execsnoop][15].
While tracing, I'll log in over SSH in another window: + +``` +# /usr/share/bcc/tools/execsnoop +PCOMM PID PPID RET ARGS +sshd 12234 727 0 /usr/sbin/sshd -D -R +unix_chkpwd 12236 12234 0 /usr/sbin/unix_chkpwd root nonull +unix_chkpwd 12237 12234 0 /usr/sbin/unix_chkpwd root chkexpiry +bash 12239 12238 0 /bin/bash +id 12241 12240 0 /usr/bin/id -un +hostname 12243 12242 0 /usr/bin/hostname +pkg-config 12245 12244 0 /usr/bin/pkg-config --variable=completionsdir bash-completion +grepconf.sh 12246 12239 0 /usr/libexec/grepconf.sh -c +grep 12247 12246 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS +tty 12249 12248 0 /usr/bin/tty -s +tput 12250 12248 0 /usr/bin/tput colors +dircolors 12252 12251 0 /usr/bin/dircolors --sh /etc/DIR_COLORS +grep 12253 12239 0 /usr/bin/grep -qi ^COLOR.*none /etc/DIR_COLORS +grepconf.sh 12254 12239 0 /usr/libexec/grepconf.sh -c +grep 12255 12254 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS +grepconf.sh 12256 12239 0 /usr/libexec/grepconf.sh -c +grep 12257 12256 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS +``` + +Welcome to the fun of system tracing. You can learn a lot about how the system is really working (or not working, as the case may be) and discover some easy optimizations along the way. execsnoop works by tracing the exec() system call, which is usually used to load different program code in new processes. + +### 2\. opensnoop + +Continuing from above, so, grepconf.sh is likely a shell script, right? I'll run file(1) to check, and also use the [opensnoop][16] bcc tool to see what file is opening: + +``` +# /usr/share/bcc/tools/opensnoop +PID COMM FD ERR PATH +12420 file 3 0 /etc/ld.so.cache +12420 file 3 0 /lib64/libmagic.so.1 +12420 file 3 0 /lib64/libz.so.1 +12420 file 3 0 /lib64/libc.so.6 +12420 file 3 0 /usr/lib/locale/locale-archive +12420 file -1 2 /etc/magic.mgc +12420 file 3 0 /etc/magic +12420 file 3 0 /usr/share/misc/magic.mgc +12420 file 3 0 /usr/lib64/gconv/gconv-modules.cache +12420 file 3 0 /usr/libexec/grepconf.sh +1 systemd 16 0 /proc/565/cgroup +1 systemd 16 0 /proc/536/cgroup +``` + +``` +# file /usr/share/misc/magic.mgc /etc/magic +/usr/share/misc/magic.mgc: magic binary file for file(1) cmd (version 14) (little endian) +/etc/magic: magic text file for file(1) cmd, ASCII text +``` + +### 3\. xfsslower + +bcc/BPF can analyze much more than just syscalls. The [xfsslower][17] tool traces common XFS filesystem operations that have a latency of greater than 1 millisecond (the argument): + +``` +# /usr/share/bcc/tools/xfsslower 1 +Tracing XFS operations slower than 1 ms +TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME +14:17:34 systemd-journa 530 S 0 0 1.69 system.journal +14:17:35 auditd 651 S 0 0 2.43 audit.log +14:17:42 cksum 4167 R 52976 0 1.04 at +14:17:45 cksum 4168 R 53264 0 1.62 [ +14:17:45 cksum 4168 R 65536 0 1.01 certutil +14:17:45 cksum 4168 R 65536 0 1.01 dir +14:17:45 cksum 4168 R 65536 0 1.17 dirmngr-client +14:17:46 cksum 4168 R 65536 0 1.06 grub2-file +14:17:46 cksum 4168 R 65536 128 1.01 grub2-fstest +[...] +``` + +This is a useful tool and an important example of BPF tracing. Traditional analysis of filesystem performance focuses on block I/O statistics—what you commonly see printed by the iostat(1) tool and plotted by many performance-monitoring GUIs. Those statistics show how the disks are performing, but not really the filesystem. Often you care more about the filesystem's performance than the disks, since it's the filesystem that applications make requests to and wait for. 
And the performance of filesystems can be quite different from that of disks! Filesystems may serve reads entirely from memory cache and also populate that cache via a read-ahead algorithm and for write-back caching. xfsslower shows filesystem performance—what the applications directly experience. This is often useful for exonerating the entire storage subsystem; if there is really no filesystem latency, then performance issues are likely to be elsewhere. + +### 4\. biolatency + +Although filesystem performance is important to study for understanding application performance, studying disk performance has merit as well. Poor disk performance will affect the application eventually, when various caching tricks can no longer hide its latency. Disk performance is also a target of study for capacity planning. + +The iostat(1) tool shows the average disk I/O latency, but averages can be misleading. It can be useful to study the distribution of I/O latency as a histogram, which can be done using [biolatency][18]: + +``` +# /usr/share/bcc/tools/biolatency +Tracing block device I/O... Hit Ctrl-C to end. +^C + usecs : count distribution + 0 -> 1 : 0 | | + 2 -> 3 : 0 | | + 4 -> 7 : 0 | | + 8 -> 15 : 0 | | + 16 -> 31 : 0 | | + 32 -> 63 : 1 | | + 64 -> 127 : 63 |**** | + 128 -> 255 : 121 |********* | + 256 -> 511 : 483 |************************************ | + 512 -> 1023 : 532 |****************************************| + 1024 -> 2047 : 117 |******** | + 2048 -> 4095 : 8 | | +``` + +It's worth noting that many of these tools support CLI options and arguments as shown by their USAGE message: + +``` +# /usr/share/bcc/tools/biolatency -h +usage: biolatency [-h] [-T] [-Q] [-m] [-D] [interval] [count] + +Summarize block device I/O latency as a histogram + +positional arguments: + interval output interval, in seconds + count number of outputs + +optional arguments: + -h, --help show this help message and exit + -T, --timestamp include timestamp on output + -Q, --queued include OS queued time in I/O time + -m, --milliseconds millisecond histogram + -D, --disks print a histogram per disk device + +examples: + ./biolatency # summarize block I/O latency as a histogram + ./biolatency 1 10 # print 1 second summaries, 10 times + ./biolatency -mT 1 # 1s summaries, milliseconds, and timestamps + ./biolatency -Q # include OS queued time in I/O time + ./biolatency -D # show each disk device separately +``` + +### 5\. tcplife + +Another useful tool and example, this time showing lifespan and throughput statistics of TCP sessions, is [tcplife][19]: + +``` +# /usr/share/bcc/tools/tcplife +PID COMM LADDR LPORT RADDR RPORT TX_KB RX_KB MS +12759 sshd 192.168.56.101 22 192.168.56.1 60639 2 3 1863.82 +12783 sshd 192.168.56.101 22 192.168.56.1 60640 3 3 9174.53 +12844 wget 10.0.2.15 34250 54.204.39.132 443 11 1870 5712.26 +12851 curl 10.0.2.15 34252 54.204.39.132 443 0 74 505.90 +``` + +### 6\. gethostlatency + +Every previous example involves kernel tracing, so I need at least one user-level tracing example. 
Here is [gethostlatency][20], which instruments gethostbyname(3) and related library calls for name resolution: + +``` +# /usr/share/bcc/tools/gethostlatency +TIME PID COMM LATms HOST +06:43:33 12903 curl 188.98 opensource.com +06:43:36 12905 curl 8.45 opensource.com +06:43:40 12907 curl 6.55 opensource.com +06:43:44 12911 curl 9.67 opensource.com +06:45:02 12948 curl 19.66 opensource.cats +06:45:06 12950 curl 18.37 opensource.cats +06:45:07 12952 curl 13.64 opensource.cats +06:45:19 13139 curl 13.10 opensource.cats +``` + +### 7\. trace + +Okay, one more example. The [trace][21] tool was contributed by Sasha Goldshtein and provides some basic printf(1) functionality with custom probes. For example: + +``` +# /usr/share/bcc/tools/trace 'pam:pam_start "%s: %s", arg1, arg2' +PID TID COMM FUNC - +13266 13266 sshd pam_start sshd: root +``` + +### Install bcc via packages + +The best way to install bcc is from an iovisor repository, following the instructions from the bcc [INSTALL.md][22]. [IO Visor][23] is the Linux Foundation project that includes bcc. The BPF enhancements these tools use were added in the 4.x series Linux kernels, up to 4.9\. This means that Fedora 25, with its 4.8 kernel, can run most of these tools; and Fedora 26, with its 4.11 kernel, can run them all (at least currently). + +If you are on Fedora 25 (or Fedora 26, and this post was published many months ago—hello from the distant past!), then this package approach should just work. If you are on Fedora 26, then skip to the [Install via Source][24] section, which avoids a [known][25] and [fixed][26] bug. That bug fix hasn't made its way into the Fedora 26 package dependencies at the moment. The system I'm using is: + +``` +# uname -a +Linux localhost.localdomain 4.11.8-300.fc26.x86_64 #1 SMP Thu Jun 29 20:09:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux +# cat /etc/fedora-release +Fedora release 26 (Twenty Six) +``` + +``` +# echo -e '[iovisor]\nbaseurl=https://repo.iovisor.org/yum/nightly/f25/$basearch\nenabled=1\ngpgcheck=0' | sudo tee /etc/yum.repos.d/iovisor.repo +# dnf install bcc-tools +[...] +Total download size: 37 M +Installed size: 143 M +Is this ok [y/N]: y +``` + +``` +# ls /usr/share/bcc/tools/ +argdist dcsnoop killsnoop softirqs trace +bashreadline dcstat llcstat solisten ttysnoop +[...] +``` + +``` +# /usr/share/bcc/tools/opensnoop +chdir(/lib/modules/4.11.8-300.fc26.x86_64/build): No such file or directory +Traceback (most recent call last): + File "/usr/share/bcc/tools/opensnoop", line 126, in + b = BPF(text=bpf_text) + File "/usr/lib/python3.6/site-packages/bcc/__init__.py", line 284, in __init__ + raise Exception("Failed to compile BPF module %s" % src_file) +Exception: Failed to compile BPF module +``` + +``` +# dnf install kernel-devel-4.11.8-300.fc26.x86_64 +[...] +Total download size: 20 M +Installed size: 63 M +Is this ok [y/N]: y +[...] +``` + +``` +# /usr/share/bcc/tools/opensnoop +PID COMM FD ERR PATH +11792 ls 3 0 /etc/ld.so.cache +11792 ls 3 0 /lib64/libselinux.so.1 +11792 ls 3 0 /lib64/libcap.so.2 +11792 ls 3 0 /lib64/libc.so.6 +[...] +``` + +### Install via source + +If you need to install from source, you can also find documentation and updated instructions in [INSTALL.md][27]. 
I did the following on Fedora 26:
+
+```
+sudo dnf install -y bison cmake ethtool flex git iperf libstdc++-static \
+  python-netaddr python-pip gcc gcc-c++ make zlib-devel \
+  elfutils-libelf-devel
+sudo dnf install -y luajit luajit-devel  # for Lua support
+sudo dnf install -y \
+  http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm
+sudo pip install pyroute2
+sudo dnf install -y clang clang-devel llvm llvm-devel llvm-static ncurses-devel
+```
+
+The netperf download from pkgs.repoforge.org timed out for me:
+
+```
+Curl error (28): Timeout was reached for http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm [Connection timed out after 120002 milliseconds]
+```
+
+Here are the remaining bcc compilation and install steps:
+
+```
+git clone https://github.com/iovisor/bcc.git
+mkdir bcc/build; cd bcc/build
+cmake .. -DCMAKE_INSTALL_PREFIX=/usr
+make
+sudo make install
+```
+
+```
+# /usr/share/bcc/tools/opensnoop
+PID COMM FD ERR PATH
+4131 date 3 0 /etc/ld.so.cache
+4131 date 3 0 /lib64/libc.so.6
+4131 date 3 0 /usr/lib/locale/locale-archive
+4131 date 3 0 /etc/localtime
+[...]
+```
+
+This was a quick tour of the new BPF performance analysis superpowers that you can use on the Fedora and Red Hat family of operating systems. I demonstrated the popular [bcc][28] frontend to BPF and included install instructions for Fedora. bcc comes with more than 60 new tools for performance analysis, which will help you get the most out of your Linux systems. Perhaps you will use these tools directly over SSH, or perhaps you will use the same functionality via monitoring GUIs once they support BPF.
+
+Also, bcc is not the only frontend in development. There are [ply][29] and [bpftrace][30], which aim to provide a higher-level language for quickly writing custom tools. In addition, [SystemTap][31] just released [version 3.2][32], including an early, experimental eBPF backend. Should this continue to be developed, it will provide a production-safe and efficient engine for running the many SystemTap scripts and tapsets (libraries) that have been developed over the years. (Using SystemTap with eBPF would be a good topic for another post.)
+
+If you need to develop custom tools, you can do that with bcc as well, although the language is currently much more verbose than SystemTap, ply, or bpftrace. My bcc tools can serve as code examples, plus I contributed a [tutorial][33] for developing bcc tools in Python. I'd recommend learning the bcc multi-tools first, as you may get a lot of mileage from them before needing to write new tools. You can study the multi-tools from their example files in the bcc repository: [funccount][34], [funclatency][35], [funcslower][36], [stackcount][37], [trace][38], and [argdist][39].
+
+Thanks to [Opensource.com][40] for edits.
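+
+As a parting example of one of those multi-tools, here is a small sketch of funccount counting kernel VFS calls that match a wildcard. Treat it as a hedged sketch: the exact flags, such as -d for a duration in seconds, can vary between bcc versions, so confirm with funccount -h first:
+
+```
+# /usr/share/bcc/tools/funccount -d 5 'vfs_*'   # count vfs_* kernel function calls for 5 seconds
+```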
+ +### Topics + + [Linux][41][SysAdmin][42] + +### About the author + + [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/brendan_face2017_620d.jpg?itok=LIwTJjL9)][43] Brendan Gregg + +- + + Brendan Gregg is a senior performance architect at Netflix, where he does large scale computer performance design, analysis, and tuning.[More about me][44] + +* [Learn how you can contribute][6] + +-------------------------------------------------------------------------------- + +via:https://opensource.com/article/17/11/bccbpf-performance + +作者:[Brendan Gregg ][a] +译者:[yongshouzhang](https://github.com/yongshouzhang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent +[6]:https://opensource.com/participate +[7]:https://opensource.com/users/brendang +[8]:https://opensource.com/users/brendang +[9]:https://opensource.com/user/77626/feed +[10]:https://opensource.com/article/17/11/bccbpf-performance?rate=r9hnbg3mvjFUC9FiBk9eL_ZLkioSC21SvICoaoJjaSM +[11]:https://opensource.com/article/17/11/bccbpf-performance#comments +[12]:https://github.com/iovisor/bcc +[13]:https://opensource.com/file/376856 +[14]:https://opensource.com/usr/share/bcc/tools/trace +[15]:https://github.com/brendangregg/perf-tools/blob/master/execsnoop +[16]:https://github.com/brendangregg/perf-tools/blob/master/opensnoop +[17]:https://github.com/iovisor/bcc/blob/master/tools/xfsslower.py +[18]:https://github.com/iovisor/bcc/blob/master/tools/biolatency.py +[19]:https://github.com/iovisor/bcc/blob/master/tools/tcplife.py +[20]:https://github.com/iovisor/bcc/blob/master/tools/gethostlatency.py +[21]:https://github.com/iovisor/bcc/blob/master/tools/trace.py +[22]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---binary +[23]:https://www.iovisor.org/ +[24]:https://opensource.com/article/17/11/bccbpf-performance#InstallViaSource +[25]:https://github.com/iovisor/bcc/issues/1221 +[26]:https://reviews.llvm.org/rL302055 +[27]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---source +[28]:https://github.com/iovisor/bcc +[29]:https://github.com/iovisor/ply +[30]:https://github.com/ajor/bpftrace +[31]:https://sourceware.org/systemtap/ +[32]:https://sourceware.org/ml/systemtap/2017-q4/msg00096.html +[33]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md +[34]:https://github.com/iovisor/bcc/blob/master/tools/funccount_example.txt +[35]:https://github.com/iovisor/bcc/blob/master/tools/funclatency_example.txt +[36]:https://github.com/iovisor/bcc/blob/master/tools/funcslower_example.txt +[37]:https://github.com/iovisor/bcc/blob/master/tools/stackcount_example.txt +[38]:https://github.com/iovisor/bcc/blob/master/tools/trace_example.txt +[39]:https://github.com/iovisor/bcc/blob/master/tools/argdist_example.txt 
+[40]:http://opensource.com/ +[41]:https://opensource.com/tags/linux +[42]:https://opensource.com/tags/sysadmin +[43]:https://opensource.com/users/brendang +[44]:https://opensource.com/users/brendang From 97a0c97bd5089cded76d22ad63575ae28cfec558 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?= Date: Fri, 8 Dec 2017 20:33:04 +0800 Subject: [PATCH 131/236] Delete 20171207 How to use cron in Linux.md --- .../20171207 How to use cron in Linux.md | 287 ------------------ 1 file changed, 287 deletions(-) delete mode 100644 sources/tech/20171207 How to use cron in Linux.md diff --git a/sources/tech/20171207 How to use cron in Linux.md b/sources/tech/20171207 How to use cron in Linux.md deleted file mode 100644 index 3df9c1b402..0000000000 --- a/sources/tech/20171207 How to use cron in Linux.md +++ /dev/null @@ -1,287 +0,0 @@ -translating by yongshouzhang - -如何在linux中使用cron -============================================================ - -### 没有时间键入命令? 使用 cron 调度任务意味着你不必熬夜守着程序,就可以让它运行。 - [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/david-crop.jpg?itok=Wnz6HdS0)][10] 06 Nov 2017 [David Both][11] [Feed][12] - -27[up][13] - - [9 comments][14] -![如何在 linux 中使用cron](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_) - -Image by : - -[Internet Archive Book Images][15]. Modified by Opensource.com. [CC BY-SA 4.0][16] - -作为系统管理员的一个挑战(也是众多优势之一)就是在你想睡觉时如何让任务运行。例如,一些任务(包括定期循环的作业)需要整夜或每逢周末运行,当没人想占用计算机资源时。晚上我没有空闲时间去运行在非高峰时间必须运行的命令和脚本。我也不想摸黑起床,进行备份和主更新。 - -与之代替的是,我用了两个能够在既定时间运行命令,程序,任务的服务实用程序。[cron][17] 和 at 服务能够让系统管理雨在未来特定时间内运行计划任务。at 服务指定一个在特定时间运行的一次性任务。cron服务可在重复的基础上调度任务,如每天,每周或每月。 - -在本文中我将会介绍 cron 服务以及如何使用。 - -### cron 的常见(和不常见)用法 - -I use the cron service to schedule obvious things, such as regular backups that occur daily at 2 a.m. I also use it for less obvious things. - -* The system times (i.e., the operating system time) on my many computers are set using the Network Time Protocol (NTP). While NTP sets the system time, it does not set the hardware time, which can drift. I use cron to set the hardware time based on the system time. - -* I also have a Bash program I run early every morning that creates a new "message of the day" (MOTD) on each computer. It contains information, such as disk usage, that should be current in order to be useful. - -* Many system processes and services, like [Logwatch][1], [logrotate][2], and [Rootkit Hunter][3], use the cron service to schedule tasks and run programs every day. - -The crond daemon is the background service that enables cron functionality. - -The cron service checks for files in the /var/spool/cron and /etc/cron.d directories and the /etc/anacrontab file. The contents of these files define cron jobs that are to be run at various intervals. The individual user cron files are located in /var/spool/cron, and system services and applications generally add cron job files in the /etc/cron.ddirectory. The /etc/anacrontab is a special case that will be covered later in this article. - -### Using crontab - -The cron utility runs based on commands specified in a cron table (crontab). Each user, including root, can have a cron file. These files don't exist by default, but can be created in the /var/spool/cron directory using the crontab -e command that's also used to edit a cron file (see the script below). I strongly recommend that you not use a standard editor (such as Vi, Vim, Emacs, Nano, or any of the many other editors that are available). 
Using the crontab command not only allows you to edit the command, it also restarts the crond daemon when you save and exit the editor. The crontabcommand uses Vi as its underlying editor, because Vi is always present (on even the most basic of installations). - -New cron files are empty, so commands must be added from scratch. I added the job definition example below to my own cron files, just as a quick reference, so I know what the various parts of a command mean. Feel free to copy it for your own use. - -``` -# crontab -e -SHELL=/bin/bash -MAILTO=root@example.com -PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin - -# For details see man 4 crontabs - -# Example of job definition: -# .---------------- minute (0 - 59) -# | .------------- hour (0 - 23) -# | | .---------- day of month (1 - 31) -# | | | .------- month (1 - 12) OR jan,feb,mar,apr ... -# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat -# | | | | | -# * * * * * user-name command to be executed - -# backup using the rsbu program to the internal 4TB HDD and then 4TB external -01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2 - -# Set the hardware clock to keep it in sync with the more accurate system clock -03 05 * * * /sbin/hwclock --systohc - -# Perform monthly updates on the first of the month -# 25 04 1 * * /usr/bin/dnf -y update -``` - -The first three lines in the code above set up a default environment. The environment must be set to whatever is necessary for a given user because cron does not provide an environment of any kind. The SHELL variable specifies the shell to use when commands are executed. This example specifies the Bash shell. The MAILTO variable sets the email address where cron job results will be sent. These emails can provide the status of the cron job (backups, updates, etc.) and consist of the output you would see if you ran the program manually from the command line. The third line sets up the PATH for the environment. Even though the path is set here, I always prepend the fully qualified path to each executable. - -There are several comment lines in the example above that detail the syntax required to define a cron job. I'll break those commands down, then add a few more to show you some more advanced capabilities of crontab files. - -``` -01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2 -``` - -This line runs my self-written Bash shell script, rsbu, that backs up all my systems. This job kicks off at 1:01 a.m. (01 01) every day. The asterisks (*) in positions three, four, and five of the time specification are like file globs, or wildcards, for other time divisions; they specify "every day of the month," "every month," and "every day of the week." This line runs my backups twice; one backs up to an internal dedicated backup hard drive, and the other backs up to an external USB drive that I can take to the safe deposit box. - -The following line sets the hardware clock on the computer using the system clock as the source of an accurate time. This line is set to run at 5:03 a.m. (03 05) every day. - -``` -03 05 * * * /sbin/hwclock --systohc -``` - -I was using the third and final cron job (commented out) to perform a dnf or yumupdate at 04:25 a.m. on the first day of each month, but I commented it out so it no longer runs. - -``` -# 25 04 1 * * /usr/bin/dnf -y update -``` - -### Other scheduling tricks - -Now let's do some things that are a little more interesting than these basics. 
Suppose you want to run a particular job every Thursday at 3 p.m.: - -``` -00 15 * * Thu /usr/local/bin/mycronjob.sh -``` - -Or, maybe you need to run quarterly reports after the end of each quarter. The cron service has no option for "The last day of the month," so instead you can use the first day of the following month, as shown below. (This assumes that the data needed for the reports will be ready when the job is set to run.) - -``` -02 03 1 1,4,7,10 * /usr/local/bin/reports.sh -``` - -The following shows a job that runs one minute past every hour between 9:01 a.m. and 5:01 p.m. - -``` -01 09-17 * * * /usr/local/bin/hourlyreminder.sh -``` - -I have encountered situations where I need to run a job every two, three, or four hours. That can be accomplished by dividing the hours by the desired interval, such as */3 for every three hours, or 6-18/3 to run every three hours between 6 a.m. and 6 p.m. Other intervals can be divided similarly; for example, the expression */15 in the minutes position means "run the job every 15 minutes." - -``` -*/5 08-18/2 * * * /usr/local/bin/mycronjob.sh -``` - -One thing to note: The division expressions must result in a remainder of zero for the job to run. That's why, in this example, the job is set to run every five minutes (08:05, 08:10, 08:15, etc.) during even-numbered hours from 8 a.m. to 6 p.m., but not during any odd-numbered hours. For example, the job will not run at all from 9 p.m. to 9:59 a.m. - -I am sure you can come up with many other possibilities based on these examples. - -### Limiting cron access - -More Linux resources - -* [What is Linux?][4] - -* [What are Linux containers?][5] - -* [Download Now: Linux commands cheat sheet][6] - -* [Advanced Linux commands cheat sheet][7] - -* [Our latest Linux articles][8] - -Regular users with cron access could make mistakes that, for example, might cause system resources (such as memory and CPU time) to be swamped. To prevent possible misuse, the sysadmin can limit user access by creating a - -**/etc/cron.allow** - - file that contains a list of all users with permission to create cron jobs. The root user cannot be prevented from using cron. - -By preventing non-root users from creating their own cron jobs, it may be necessary for root to add their cron jobs to the root crontab. "But wait!" you say. "Doesn't that run those jobs as root?" Not necessarily. In the first example in this article, the username field shown in the comments can be used to specify the user ID a job is to have when it runs. This prevents the specified non-root user's jobs from running as root. The following example shows a job definition that runs a job as the user "student": - -``` -04 07 * * * student /usr/local/bin/mycronjob.sh -``` - -### cron.d - -The directory /etc/cron.d is where some applications, such as [SpamAssassin][18] and [sysstat][19], install cron files. Because there is no spamassassin or sysstat user, these programs need a place to locate cron files, so they are placed in /etc/cron.d. - -The /etc/cron.d/sysstat file below contains cron jobs that relate to system activity reporting (SAR). These cron files have the same format as a user cron file. - -``` -# Run system activity accounting tool every 10 minutes -*/10 * * * * root /usr/lib64/sa/sa1 1 1 -# Generate a daily summary of process accounting at 23:53 -53 23 * * * root /usr/lib64/sa/sa2 -A -``` - -The sysstat cron file has two lines that perform tasks. 
The first line runs the sa1program every 10 minutes to collect data stored in special binary files in the /var/log/sadirectory. Then, every night at 23:53, the sa2 program runs to create a daily summary. - -### Scheduling tips - -Some of the times I set in the crontab files seem rather random—and to some extent they are. Trying to schedule cron jobs can be challenging, especially as the number of jobs increases. I usually have only a few tasks to schedule on each of my computers, which is simpler than in some of the production and lab environments where I have worked. - -One system I administered had around a dozen cron jobs that ran every night and an additional three or four that ran on weekends or the first of the month. That was a challenge, because if too many jobs ran at the same time—especially the backups and compiles—the system would run out of RAM and nearly fill the swap file, which resulted in system thrashing while performance tanked, so nothing got done. We added more memory and improved how we scheduled tasks. We also removed a task that was very poorly written and used large amounts of memory. - -The crond service assumes that the host computer runs all the time. That means that if the computer is turned off during a period when cron jobs were scheduled to run, they will not run until the next time they are scheduled. This might cause problems if they are critical cron jobs. Fortunately, there is another option for running jobs at regular intervals: anacron. - -### anacron - -The [anacron][20] program performs the same function as crond, but it adds the ability to run jobs that were skipped, such as if the computer was off or otherwise unable to run the job for one or more cycles. This is very useful for laptops and other computers that are turned off or put into sleep mode. - -As soon as the computer is turned on and booted, anacron checks to see whether configured jobs missed their last scheduled run. If they have, those jobs run immediately, but only once (no matter how many cycles have been missed). For example, if a weekly job was not run for three weeks because the system was shut down while you were on vacation, it would be run soon after you turn the computer on, but only once, not three times. - -The anacron program provides some easy options for running regularly scheduled tasks. Just install your scripts in the /etc/cron.[hourly|daily|weekly|monthly]directories, depending how frequently they need to be run. - -How does this work? The sequence is simpler than it first appears. - -1. The crond service runs the cron job specified in /etc/cron.d/0hourly. - -``` -# Run the hourly jobs -SHELL=/bin/bash -PATH=/sbin:/bin:/usr/sbin:/usr/bin -MAILTO=root -01 * * * * root run-parts /etc/cron.hourly -``` - -1. The cron job specified in /etc/cron.d/0hourly runs the run-parts program once per hour. - -2. The run-parts program runs all the scripts located in the /etc/cron.hourlydirectory. - -3. The /etc/cron.hourly directory contains the 0anacron script, which runs the anacron program using the /etdc/anacrontab configuration file shown here. - -``` -# /etc/anacrontab: configuration file for anacron - -# See anacron(8) and anacrontab(5) for details. 
- -SHELL=/bin/sh -PATH=/sbin:/bin:/usr/sbin:/usr/bin -MAILTO=root -# the maximal random delay added to the base delay of the jobs -RANDOM_DELAY=45 -# the jobs will be started during the following hours only -START_HOURS_RANGE=3-22 - -#period in days delay in minutes job-identifier command -1 5 cron.daily nice run-parts /etc/cron.daily -7 25 cron.weekly nice run-parts /etc/cron.weekly -@monthly 45 cron.monthly nice run-parts /etc/cron.monthly -``` - -1. The anacron program runs the programs located in /etc/cron.daily once per day; it runs the jobs located in /etc/cron.weekly once per week, and the jobs in cron.monthly once per month. Note the specified delay times in each line that help prevent these jobs from overlapping themselves and other cron jobs. - -Instead of placing complete Bash programs in the cron.X directories, I install them in the /usr/local/bin directory, which allows me to run them easily from the command line. Then I add a symlink in the appropriate cron directory, such as /etc/cron.daily. - -The anacron program is not designed to run programs at specific times. Rather, it is intended to run programs at intervals that begin at the specified times, such as 3 a.m. (see the START_HOURS_RANGE line in the script just above) of each day, on Sunday (to begin the week), and on the first day of the month. If any one or more cycles are missed, anacron will run the missed jobs once, as soon as possible. - -### More on setting limits - -I use most of these methods for scheduling tasks to run on my computers. All those tasks are ones that need to run with root privileges. It's rare in my experience that regular users really need a cron job. One case was a developer user who needed a cron job to kick off a daily compile in a development lab. - -It is important to restrict access to cron functions by non-root users. However, there are circumstances when a user needs to set a task to run at pre-specified times, and cron can allow them to do that. Many users do not understand how to properly configure these tasks using cron and they make mistakes. Those mistakes may be harmless, but, more often than not, they can cause problems. By setting functional policies that cause users to interact with the sysadmin, individual cron jobs are much less likely to interfere with other users and other system functions. - -It is possible to set limits on the total resources that can be allocated to individual users or groups, but that is an article for another time. - -For more information, the man pages for [cron][21], [crontab][22], [anacron][23], [anacrontab][24], and [run-parts][25] all have excellent information and descriptions of how the cron system works. - -### Topics - - [Linux][26][SysAdmin][27] - -### About the author - - [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/david-crop.jpg?itok=oePpOpyV)][28] David Both - -- - - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981\. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for... 
[more about David Both][29][More about me][30] - -* [Learn how you can contribute][9] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/11/how-use-cron-linux - -作者:[David Both ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[1]:https://sourceforge.net/projects/logwatch/files/ -[2]:https://github.com/logrotate/logrotate -[3]:http://rkhunter.sourceforge.net/ -[4]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[5]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[6]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[7]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[8]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[9]:https://opensource.com/participate -[10]:https://opensource.com/users/dboth -[11]:https://opensource.com/users/dboth -[12]:https://opensource.com/user/14106/feed -[13]:https://opensource.com/article/17/11/how-use-cron-linux?rate=9R7lrdQXsne44wxIh0Wu91ytYaxxi86zT1-uHo1a1IU -[14]:https://opensource.com/article/17/11/how-use-cron-linux#comments -[15]:https://www.flickr.com/photos/internetarchivebookimages/20570945848/in/photolist-xkMtw9-xA5zGL-tEQLWZ-wFwzFM-aNwxgn-aFdWBj-uyFKYv-7ZCCBU-obY1yX-UAPafA-otBzDF-ovdDo6-7doxUH-obYkeH-9XbHKV-8Zk4qi-apz7Ky-apz8Qu-8ZoaWG-orziEy-aNwxC6-od8NTv-apwpMr-8Zk4vn-UAP9Sb-otVa3R-apz6Cb-9EMPj6-eKfyEL-cv5mwu-otTtHk-7YjK1J-ovhxf6-otCg2K-8ZoaJf-UAPakL-8Zo8j7-8Zk74v-otp4Ls-8Zo8h7-i7xvpR-otSosT-9EMPja-8Zk6Zi-XHpSDB-hLkuF3-of24Gf-ouN1Gv-fJzkJS-icfbY9 -[16]:https://creativecommons.org/licenses/by-sa/4.0/ -[17]:https://en.wikipedia.org/wiki/Cron -[18]:http://spamassassin.apache.org/ -[19]:https://github.com/sysstat/sysstat -[20]:https://en.wikipedia.org/wiki/Anacron -[21]:http://man7.org/linux/man-pages/man8/cron.8.html -[22]:http://man7.org/linux/man-pages/man5/crontab.5.html -[23]:http://man7.org/linux/man-pages/man8/anacron.8.html -[24]:http://man7.org/linux/man-pages/man5/anacrontab.5.html -[25]:http://manpages.ubuntu.com/manpages/zesty/man8/run-parts.8.html -[26]:https://opensource.com/tags/linux -[27]:https://opensource.com/tags/sysadmin -[28]:https://opensource.com/users/dboth -[29]:https://opensource.com/users/dboth -[30]:https://opensource.com/users/dboth From 5a3649a288ecfd0bc57d9dcba700eea20e07d7e9 Mon Sep 17 00:00:00 2001 From: Trsky <625310581@qq.com> Date: Fri, 8 Dec 2017 21:20:22 +0800 Subject: [PATCH 132/236] complete the translation --- ...ow to answer questions in a helpful way.md | 186 ------------------ 1 file changed, 186 deletions(-) delete mode 100644 sources/tech/20170921 How to answer questions in a helpful way.md diff --git a/sources/tech/20170921 How to answer questions in a helpful way.md b/sources/tech/20170921 How to answer questions in a helpful way.md deleted file mode 100644 index 1cd590bdd2..0000000000 --- a/sources/tech/20170921 How to answer questions in a helpful way.md +++ /dev/null @@ -1,186 +0,0 @@ - -如何合理地回答问题 -============================= - -如果你的同事问你一个不太清晰的问题,你会怎么回答?我认为提问题是一种技巧(可以看 [如何提出有意义的问题][1]) 
同时,合理地回答问题也是一种技巧。他们都是非常实用的。 - -开始 - 有时向你提问的人不尊重你的时间,这很糟糕。 - -我假设 - 我们假设问你问题的人是一个合理的人并且正在尽力解决问题而你想帮助他们。和我一起工作的人是这样,我所生活的世界也是这样。 - -下面是有助于回答问题的一些策略! - -### 如果他们提问不清楚,帮他们澄清 - -通常初学者不会提出很清晰的问题,或者问一些对回答问题没有必要信息的问题。 - -* ** 重述为一个更明确的问题 ** 回复他们(”你是想问 X ?“) - -* ** 问他们更具体的信息 ** 他们并没有提供(”你使用 IPv6 ?”) - -* ** 问是什么导致了他们的问题 ** 例如,有时有些人会进入我的团队频道,询问我们的 service discovery 如何工作的。这通常是因为他们试图设置/重新配置服务。在这种情况下,如果问“你正在使用哪种服务?可以给我看看你正在进行的 pull 请求吗?”是有帮助的。 - -这些策略很多来自 [如何提出有意义的问题][2]中的要点。(尽管我永远不会对某人说“噢,你得先看完文档 “如何提出有意义的问题” 再来问我问题) - - -### 明白什么是他们已经知道的 - -在回答问题之前,知道对方已经知道什么是非常有用的! - -Harold Treen 给了我一个很好的例子: - -> 前几天,有人请我解释“ Redux-Sagas ”。与其深入解释不如说“ 他们就像 worker threads 监听行为,让你更新 Redux store 。 - -> 我开始搞清楚他们对 Redux 、行为(actions)、store 以及其他基本概念了解多少。将这些概念都联系在一起再来解释会容易得多。 - -弄清楚问你问题的人已经知道什么是非常重要的。因为有时他们可能会对基础概念感到疑惑(“ Redux 是什么?“),或者他们可是专家但是恰巧遇到了微妙的极端情况(corner case)。如果答案建立在他们不知道的概念上会令他们困惑,但如果重述他们已经知道的的又会是乏味的。 - -这里有一个很实用的技巧来了解他们已经知道什么 - 比如可以尝试用“你对 X 了解多少?”而不是问“你知道 X 吗?”。 - -### 给他们一个文档 - -“RTFM” (“去读那些他妈的手册”(Read The Fucking Manual))是一个典型的无用的回答,但事实上如果向他们指明一个特定的文档会是非常有用的!当我提问题的时候,我非常乐意被指明那些事实上能够解决我的问题的文档,因为它可能解决了其他我也想问的问题。 - -我认为明确你所指明的文档确实能够解决问题或者至少经过查阅明确它有用是非常重要的。否则,你可能将以下面这种情形结束对话(非常常见): - -* Ali:我应该如何处理 X ? - -* Jada:<文档链接> - -* Ali: 这个并有实际解释如何处理 X ,它仅仅解释了如何处理 Y ! - -如果我所给的文档特别长,我会指明文档中那个我将会谈及的特定部分。[bash 手册][3] 有44000个字(真的!),所以如果只说“它在 bash 手册中有说明”是没有帮助的:) - -### 告诉他们一个有用的搜索 - -在工作中,我经常发现我可以利用我所知道的关键字进行搜索找到能够解决我的问题的答案。对于初学者来说,这些关键字往往不是那么明显。所以说“这是我用来寻找这个答案的搜索”可能有用些。再次说明,回答时请经检查后以确保搜索能够得到他们所需要的答案:) - -### 写新文档 - -人们经常一次又一次地问我的团队重复的问题。很显然这并不是人们的错(他们怎么能够知道在他们之前已经有10个人问了这个问题,且知道答案是什么呢?)因此,我们尝试写文档,而不是直接回答回答问题。 - -1. 马上写新文档 - -2. 给他们我们刚刚写好的新文档 - -3. 公示 - -写文档有时往往比回答问题需要花很多时间,但这是值得的。写文档尤其值得如果: - -a. 这个问题被问了一遍又一遍 - -b. 随着时间的推移,这个答案不会变化太大(如果这个答案每一个星期或者一个月就会变化,文档就会过时并且令人受挫) - -### 解释你做了什么 - -对于一个话题,作为初学者来说,这样的交流会真让人沮丧: - -* 新人:“hey 你如何处理 X ?” - -* 有经验的人:“我做了,它完成了” - -* 新人:”...... 但是你做了什么?!“ - -如果问你问题的人想知道事情是如何进行的,这样是有帮助的: - -* 让他们去完成任务而不是自己做 - -* 把你是如何得到你给他们的答案的步骤告诉他们。 - -这可能比你自己做的时间还要长,但对于被问的人来说这是一个学习机会,因为那样做使得他们将来能够更好地解决问题。 - -这样,你可以进行更好的交流,像这: - -* 新人:“这个网站出现了错误,发生了什么?” - -* 有经验的人:(2分钟后)”oh 这是因为发生了数据库故障转移“ - -* 新人: ”你是怎么知道的??!?!?“ - -* 有经验的人:“以下是我所做的!“: - - 1. 通常这些错误是因为服务器 Y 被关闭了。我查看了一下 `$PLACE` 但它表明服务器 Y 开着。所以,并不是这个原因导致的。 - - 2. 然后我查看仪表 X ,这个仪表的这个部分显示这里发生了数据库故障转移。 - - 3. 然后我在日志中找到了相应服务器,并且它显示连接数据库错误,看起来错误就是这里。 - -如果你正在解释你是如何调试一个问题,解释你是如何发现问题,以及如何找出问题是非常有用的当看起来似乎你已经得到正确答案时,感觉是很棒的。这比你帮助他人提高学习和诊断能力以及明白充分利用可用资源的感觉还要好。 - -### 解决根本问题 - -这一点有点狡猾。有时候人们认为他们依旧找到了解决问题的正确途径,且他们只需要一条信息就可以把问题解决。但他们可能并不是走在正确的道路上!比如: - -* George:”我在处理 X 的时候遇到了错误,我该如何修复它?“ - -* Jasminda:”你是正在尝试解决 Y 吗?如果是这样,你不应该处理 X ,反而你应该处理 Z 。“ - -* George:“噢,你是对的!!!谢谢你!我回反过来处理 Z 的。“ - -Jasminda 一点都没有回答 George 的问题!反而,她猜测 George 并不想处理 X ,并且她是猜对了。这是非常有用的! - -如果你这样做可能会产生居高临下的感觉: - -* George:”我在处理 X 的时候遇到了错误,我该如何修复它?“ - -* Jasminda:不要这样做,如果你想处理 Y ,你应该反过来完成 Z 。 - -* George:“好吧,我并不是想处理 Y 。实际上我想处理 X 因为某些原因(REASONS)。所以我该如何处理 X 。 - -所以不要居高临下,且要记住有时有些提问者可能已经偏离根本问题很远了。同时回答提问者提出的问题以及他们本该提出的问题的恰当的:“嗯,如果你想处理 X ,那么你可能需要这么做,但如果你想用这个解决 Y 问题,可能通过处理其他事情你可以更好地解决这个问题,这就是为什么可以做得更好的原因。 - -### 询问”那个回答可以解决您的问题吗?” - -我总是喜欢在我回答了问题之后核实是否真的已经解决了问题:”那个回答解决了您的问题吗?您还有其他问题吗?“在问完这个之后等待一会是很好的,因为人们通常需要一两分钟来知道他们是否已经找到了答案。 - -我发现尤其是问“这个回答解决了您的问题吗”这句在写文档时是非常有用的。通常,在写我关于我熟悉的东西的文档时,我会忽略掉重要的东西。 - -### Offer to。。。。 - -我是远程工作的,所以我的很多对话都是基于文本的。我认为这是默认的沟通方式。 - -今天,我们生活在一个简单的视频会议和屏幕共享的世界!在工作时候,我可以在任何时间点击一个按钮并快速处在与他人的视频对话或者屏幕共享的对话中! 
- -例如,最近有人问我关于容量规划/自动缩放。。。 - -### 不要表现得过于惊讶 - -这是源自 Recurse Center 的一则法则:[不要假装惊讶][4]。这里有一个常见的情景: - -* 某人1:“什么是 Linux 内核” - -* 某人2:“你竟然不知道什么是 Linux 内核(LINUX KERNEL)?!!!!?!!!????” - -某人2表现(无论他们是否真的如此惊讶)是没有帮助的。这大部分只会让某人1感觉不好,因为他们不知道什么的 Linux 内核。 - -我一直在假装不惊讶即使我事实上确实有点惊讶那个人不知道这种东西但它是令人敬畏的。 - -### 回答问题是令人敬畏的 - -很显然,这些策略并不是一直都是适当的,但希望你能够发现这里有些是有用的!我发现花时间去回答问题并教人们是其实是很有收获的。 - -特别感谢 Josh Triplett 的一些建议并做了很多有益的补充,以及感谢 Harold Treen、Vaibhav Sagar、Peter Bhat Hatkins、Wesley Aptekar Cassels 和 Paul Gowder的阅读或评论。 - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/answer-questions-well/ - -作者:[ Julia Evans][a] -译者:[HardworkFish](https://github.com/HardworkFish) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/about -[1]:https://jvns.ca/blog/good-questions/ -[2]:https://jvns.ca/blog/good-questions/ -[3]:https://linux.die.net/man/1/bash -[4]:https://jvns.ca/blog/2017/04/27/no-feigning-surprise/ - - - - - - - - From 0668a38da02d3ee3741e16fe8fc0f5646e0f9e42 Mon Sep 17 00:00:00 2001 From: Trsky <625310581@qq.com> Date: Fri, 8 Dec 2017 21:21:08 +0800 Subject: [PATCH 133/236] translated by HardworkFish --- ...ow to answer questions in a helpful way.md | 197 ++++++++++++++++++ 1 file changed, 197 insertions(+) create mode 100644 translated/tech/20170921 How to answer questions in a helpful way.md diff --git a/translated/tech/20170921 How to answer questions in a helpful way.md b/translated/tech/20170921 How to answer questions in a helpful way.md new file mode 100644 index 0000000000..acc67fd10c --- /dev/null +++ b/translated/tech/20170921 How to answer questions in a helpful way.md @@ -0,0 +1,197 @@ + +如何提供有帮助的回答 +============================= + +如果你的同事问你一个不太清晰的问题,你会怎么回答?我认为提问题是一种技巧(可以看 [如何提出有意义的问题][1]) 同时,合理地回答问题也是一种技巧。他们都是非常实用的。 + +一开始 - 有时向你提问的人不尊重你的时间,这很糟糕。 + +理想情况下,我们假设问你问题的人是一个理性的人并且正在尽力解决问题而你想帮助他们。和我一起工作的人是这样,我所生活的世界也是这样。当然,现实生活并不是这样。 + +下面是有助于回答问题的一些方法! + + +### 如果他们提问不清楚,帮他们澄清 + +通常初学者不会提出很清晰的问题,或者问一些对回答问题没有必要信息的问题。你可以尝试以下方法 澄清问题: + +* ** 重述为一个更明确的问题 ** 来回复他们(”你是想问 X 吗?“) + +* ** 向他们了解更具体的他们并没有提供的信息 ** (”你使用 IPv6 ?”) + +* ** 问是什么导致了他们的问题 ** 例如,有时有些人会进入我的团队频道,询问我们的服务发现(service discovery )如何工作的。这通常是因为他们试图设置/重新配置服务。在这种情况下,如果问“你正在使用哪种服务?可以给我看看你正在处理的 pull requests 吗?”是有帮助的。 + +这些方法很多来自 [如何提出有意义的问题][2]中的要点。(尽管我永远不会对某人说“噢,你得先看完 “如何提出有意义的问题”这篇文章后再来像我提问) + + +### 弄清楚他们已经知道了什么 + +在回答问题之前,知道对方已经知道什么是非常有用的! + +Harold Treen 给了我一个很好的例子: + +> 前几天,有人请我解释“ Redux-Sagas ”。与其深入解释不如说“ 他们就像 worker threads 监听行为(actions),让你更新 Redux store 。 + +> 我开始搞清楚他们对 Redux 、行为(actions)、store 以及其他基本概念了解多少。将这些概念都联系在一起再来解释会容易得多。 + +弄清楚问你问题的人已经知道什么是非常重要的。因为有时他们可能会对基础概念感到疑惑(“ Redux 是什么?“),或者他们可能是专家但是恰巧遇到了微妙的极端情况(corner case)。如果答案建立在他们不知道的概念上会令他们困惑,但如果重述他们已经知道的的又会是乏味的。 + +这里有一个很实用的技巧来了解他们已经知道什么 - 比如可以尝试用“你对 X 了解多少?”而不是问“你知道 X 吗?”。 + + +### 给他们一个文档 + +“RTFM” (“去读那些他妈的手册”(Read The Fucking Manual))是一个典型的无用的回答,但事实上如果向他们指明一个特定的文档会是非常有用的!当我提问题的时候,我当然很乐意翻看那些能实际解决我的问题的文档,因为它也可能解决其他我想问的问题。 + +我认为明确你所给的文档的确能够解决问题是非常重要的,或者至少经过查阅后确认它对解决问题有帮助。否则,你可能将以下面这种情形结束对话(非常常见): + +* Ali:我应该如何处理 X ? + +* Jada:<文档链接> + +* Ali: 这个并有实际解释如何处理 X ,它仅仅解释了如何处理 Y ! 
+
+如果我所给的文档特别长,我会指明文档中我将会谈及的那个特定部分。[bash 手册][3] 有 44000 个单词(真的!),所以如果只说“它在 bash 手册中有说明”是没有帮助的:)
+
+
+### 告诉他们一个有用的搜索
+
+在工作中,我经常发现利用我所知道的关键字进行搜索,就能找到解决我的问题的答案。对于初学者来说,这些关键字往往不是那么明显。所以说一句“这是我用来找到这个答案的搜索关键字”可能会有用些。再次说明,回答前请先自己检查一下,确保该搜索确实能找到他们所需要的答案:)
+
+
+### 写新文档
+
+人们经常一次又一次地问我的团队同样的问题。很显然这并不是他们的错(他们怎么能够知道在他们之前已经有 10 个人问了这个问题,且知道答案是什么呢?)因此,我们会尝试写新文档,而不是直接回答问题:
+
+1. 马上写新文档
+
+2. 给他们我们刚刚写好的新文档
+
+3. 公示
+
+写文档有时比直接回答问题要花更多时间,但这是值得的。写文档尤其重要,如果:
+
+a. 这个问题被问了一遍又一遍
+
+b. 随着时间的推移,这个答案不会变化太大(如果这个答案每一个星期或者一个月就会变化,文档就会过时并且令人受挫)
+
+
+### 解释你做了什么
+
+如果你是某个话题的初学者,下面这样的交流会真让人沮丧:
+
+* 新人:“嗨!你如何处理 X ?”
+
+* 有经验的人:“我已经处理过了,它已经解决了”
+
+* 新人:“...... 但是你做了什么?!”
+
+如果问你问题的人想知道事情是如何进行的,这样做是有帮助的:
+
+* 让他们自己去完成任务,而不是替他们做
+
+* 把你得出答案的步骤告诉他们
+
+这可能比你自己直接做花的时间还要长,但对于提问的人来说这是一个学习机会,因为这样能使他们将来更好地解决问题。
+
+这样,你们可以进行更好的交流,像这样:
+
+* 新人:“这个网站出现了错误,发生了什么?”
+
+* 有经验的人:(2 分钟后)“哦,这是因为发生了数据库故障转移”
+
+* 新人:“你是怎么知道的??!?!?”
+
+* 有经验的人:“以下是我所做的!”:
+
+ 1. 通常这些错误是因为服务器 Y 被关闭了。我查看了一下 `$PLACE`,但它表明服务器 Y 开着。所以,并不是这个原因导致的。
+
+ 2. 然后我查看 X 的仪表盘,仪表盘的这个部分显示这里发生了数据库故障转移。
+
+ 3. 然后我在日志中找到了相应服务器,它显示连接数据库出错,看起来问题就出在这里。
+
+如果你正在解释你是如何调试一个问题的,那么解释你是如何发现问题、又是如何找出问题根源的,会很有用。相比只是看起来像魔法般地得到了正确答案,能够帮助他们提高学习和诊断能力、并了解可用的资源,感觉会更好。
+
+
+### 解决根本问题
+
+这一点有点棘手。有时候人们认为他们已经找到了解决问题的正确途径,只需要再多一点信息就可以解决问题。但他们可能并不是走在正确的道路上!比如:
+
+* George:“我在处理 X 的时候遇到了错误,我该如何修复它?”
+
+* Jasminda:“你是正在尝试解决 Y 吗?如果是这样,你不应该处理 X ,反而你应该处理 Z 。”
+
+* George:“噢,你是对的!!!谢谢你!我会反过来处理 Z 的。”
+
+Jasminda 一点都没有回答 George 的问题!反而,她猜测 George 并不是真的想处理 X ,并且她猜对了。这是非常有用的!
+
+如果你像下面这样做,就可能会显得高高在上:
+
+* George:“我在处理 X 的时候遇到了错误,我该如何修复它?”
+
+* Jasminda:“不要这样做,如果你想处理 Y ,你应该反过来完成 Z 。”
+
+* George:“好吧,我并不是想处理 Y 。实际上我因为某些原因就是想处理 X 。所以我该如何处理 X ?”
+
+所以不要高高在上,且要记住有时有些提问者可能已经偏离根本问题很远了。同时回答提问者提出的问题以及他们本该提出的问题都是合理的:“嗯,如果你想处理 X ,那么你可能需要这么做;但如果你想用这个解决 Y 问题,通过处理其他事情你可能可以更好地解决,原因如下。”
+
+
+### 询问“那个回答可以解决您的问题吗?”
+
+我总是喜欢在我回答了问题之后核实是否真的已经解决了问题:“这个回答解决了您的问题吗?您还有其他问题吗?”在问完这个之后最好等待一会,因为人们通常需要一两分钟来确认他们是否已经找到了答案。
+
+我发现“这个回答解决了您的问题吗”这个额外的步骤在写完文档后尤其有用。通常,在写关于我熟悉的东西的文档时,我会忽略掉重要的东西而不自知。
+
+
+### 结对编程和面对面交谈
+
+我是远程工作的,所以我的很多对话都是基于文本的。我认为这是沟通的默认方式。
+
+今天,我们生活在一个方便进行视频会议和屏幕共享的世界!在工作中,我可以在任何时间点击一个按钮,快速加入与他人的视频对话或者屏幕共享的对话中!
+
+例如,最近有人问如何为他们的服务做容量规划/自动伸缩。我告诉他们我们有几样东西需要清理,但我还不太确定他们要清理的是什么。然后我们进行了一个简短的视频会话,5 分钟后,我们就解决了他们的问题。
+
+我认为,特别是当有人真的被困在该如何开始一项任务上时,开启视频进行几分钟的结对编程,真的比电子邮件或者即时通信更有效。
+
+
+### 不要表现得过于惊讶
+
+这是源自 Recurse Center 的一则法则:[不要故作惊讶][4]。这里有一个常见的情景:
+
+* 某人1:“什么是 Linux 内核?”
+
+* 某人2:“你竟然不知道什么是 Linux 内核(LINUX KERNEL)?!!!!?!!!????”
+
+某人2的表现(无论其是否真的如此惊讶)是没有帮助的。这大部分只会让某人1不好受,因为他们确实不知道什么是 Linux 内核。
+
+即使我事实上确实有点惊讶对方竟然不知道某种东西,我也一直在练习假装不惊讶,而这样做的效果很好。
+
+### 回答问题是很棒的事
+
+显然并不是所有方法都总是合适的,但希望你能够发现这里有些是有帮助的!我发现花时间去回答问题并教导人们其实是很有收获的。
+
+特别感谢 Josh Triplett 提出的一些建议和许多有益的补充,以及感谢 Harold Treen、Vaibhav Sagar、Peter Bhat Hatkins、Wesley Aptekar Cassels 和 Paul Gowder 的阅读或评论。
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/answer-questions-well/
+
+作者:[Julia Evans][a]
+译者:[HardworkFish](https://github.com/HardworkFish)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca/about
+[1]:https://jvns.ca/blog/good-questions/
+[2]:https://jvns.ca/blog/good-questions/
+[3]:https://linux.die.net/man/1/bash
+[4]:https://jvns.ca/blog/2017/04/27/no-feigning-surprise/

From 700320a70be9fec280f1792d6e55f99ae190be69 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?=
Date: Fri, 8 Dec 2017 23:03:32 +0800
Subject: [PATCH 134/236] Update 20171207 7 tools for analyzing performance in
 Linux with bccBPF.md

---
 ... tools for analyzing performance in Linux with bccBPF.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md b/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md
index e6fd19e212..eebfa9e30d 100644
--- a/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md
+++ b/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md
@@ -3,7 +3,7 @@ translating by yongshouzhang
 7 tools for analyzing performance in Linux with bcc/BPF
 ============================================================
 
-### Look deeply into your Linux code with these Berkeley Packet Filter (BPF) Compiler Collection (bcc) tools.
+### 使用伯克利数据包过滤器(BPF)编译器集合(BCC)工具深度探查你的 Linux 代码。
 
 [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/pictures/brendan_face2017_620d.jpg?itok=xZzBQNcY)][7] 21 Nov 2017 [Brendan Gregg][8] [Feed][9]
 
43[up][10]
 
 [4 comments][11]
![7 superpowers for Fedora bcc/BPF performance analysis](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/penguins%20in%20space_0.jpg?itok=umpCTAul)
 
+图片来源:
 
opensource.com
 
+在 Linux 中出现的一种新技术能够为系统管理员和开发者提供大量用于性能分析和故障排除的新工具和仪表盘。它被称为增强的伯克利数据包过滤器(eBPF,或简称 BPF)。虽然这些增强并不是在伯克利开发的,但它们处理的远不止是数据包,所做的也远不止是过滤。我将讨论在 Fedora 和 Red Hat 系列 Linux 发行版中使用 BPF 的一种方法,并在 Fedora 26 上演示。
 
 BPF can run user-defined sandboxed programs in the kernel to add new custom capabilities instantly.
It's like adding superpowers to Linux, on demand. Examples of what you can use it for include:

From 68f7096ad6aa82839d264a8ceaeebad36e73e742 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 9 Dec 2017 04:10:27 +0800
Subject: [PATCH 135/236] =?UTF-8?q?update=20at=202017=E5=B9=B4=2012?=
 =?UTF-8?q?=E6=9C=88=2009=E6=97=A5=20=E6=98=9F=E6=9C=9F=E5=85=AD=2004:10:2?=
 =?UTF-8?q?7=20CST?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 sources/tech/20170922 How to disable USB storage on Linux.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/sources/tech/20170922 How to disable USB storage on Linux.md b/sources/tech/20170922 How to disable USB storage on Linux.md
index 36723ed34c..b7378c7940 100644
--- a/sources/tech/20170922 How to disable USB storage on Linux.md
+++ b/sources/tech/20170922 How to disable USB storage on Linux.md
@@ -1,5 +1,4 @@
-translating by lujun9972
-How to disable USB storage on Linux
+Linux 上如何禁用 USB 存储
 ======
 To secure our infrastructure of data breaches, we use software & hardware firewalls to restrict unauthorized access from outside but data breaches can occur from inside as well. To remove such a possibility, organizations limit & monitor the access to internet & also disable usb storage devices.

From 8be1850397b58a991cb4cba04d0ccddc5cb4d2df Mon Sep 17 00:00:00 2001
From: cmn <2545489745@qq.com>
Date: Sat, 9 Dec 2017 10:52:40 +0800
Subject: [PATCH 136/236] translated

---
 ...he Friendly Interactive Shell, In Linux.md | 338 ++++++++++++++++++
 1 file changed, 338 insertions(+)
 create mode 100644 translated/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md

diff --git a/translated/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md b/translated/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md
new file mode 100644
index 0000000000..e519106806
--- /dev/null
+++ b/translated/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md
@@ -0,0 +1,338 @@
+如何在 Linux 上安装友好的交互式 shell,Fish
+======
+Fish,即友好的交互式 shell(friendly interactive shell)的缩写,是一个适用于类 Unix 系统的功能完备、智能而且用户友好的 shell。Fish 有着很多重要的功能,比如自动建议、语法高亮、可搜索的历史记录(类似 Bash 中的 CTRL+r)、智能搜索功能、极好的 VGA 颜色支持、基于 web 的配置、基于手册页的补全等许多开箱即用的功能。只管安装它,然后马上用起来吧。无需更多其他配置,你也不需要安装任何额外的附加组件/插件!
+ +在这篇教程中,我们讨论如何在 linux 中安装和使用 fish shell。 + +#### 安装 Fish + +尽管 fish 是一个非常用户友好的并且功能丰富的 shell,但在大多数 Linux 发行版的默认仓库中它并没有被包括。它只能在少数 Linux 发行版中的官方仓库中找到,如 Arch Linux,Gentoo,NixOS,和 Ubuntu 等。然而,安装 fish 并不难。 + +在 Arch Linux 和它的衍生版上,运行以下命令来安装它。 + +``` +sudo pacman -S fish +``` + +在 CentOS 7 上以 root 运行以下命令: + +``` +cd /etc/yum.repos.d/ +``` + +``` +wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_7/shells:fish:release:2.repo +``` + +``` +yum install fish +``` + +在 CentOS 6 上以 root 运行以下命令: + +``` +cd /etc/yum.repos.d/ +``` + +``` +wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_6/shells:fish:release:2.repo +``` + +``` +yum install fish +``` + +在 Debian 9 上以 root 运行以下命令: + +``` +wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_9.0/Release.key -O Release.key +``` + +``` +apt-key add - < Release.key +``` + +``` +echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_9.0/ /' > /etc/apt/sources.list.d/fish.list +``` + +``` +apt-get update +``` + +``` +apt-get install fish +``` + +在 Debian 8 上以 root 运行以下命令: + +``` +wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_8.0/Release.key -O Release.key +``` + +``` +apt-key add - < Release.key +``` + +``` +echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_8.0/ /' > /etc/apt/sources.list.d/fish.list +``` + +``` +apt-get update +``` + +``` +apt-get install fish +``` + +在 Fedora 26 上以 root 运行以下命令: + +``` +dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_26/shells:fish:release:2.repo +``` + +``` +dnf install fish +``` + +在 Fedora 25 上以 root 运行以下命令: + +``` +dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_25/shells:fish:release:2.repo +``` + +``` +dnf install fish +``` + +在 Fedora 24 上以 root 运行以下命令: + +``` +dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_24/shells:fish:release:2.repo +``` + +``` +dnf install fish +``` + +在 Fedora 23 上以 root 运行以下命令: + +``` +dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_23/shells:fish:release:2.repo +``` + +``` +dnf install fish +``` + +在 openSUSE 上以 root 运行以下命令: + +``` +zypper install fish +``` + +在 RHEL 7 上以 root 运行以下命令: + +``` +cd /etc/yum.repos.d/ +``` + +``` +wget https://download.opensuse.org/repositories/shells:fish:release:2/RHEL_7/shells:fish:release:2.repo +``` + +``` +yum install fish +``` + +在 RHEL-6 上以 root 运行以下命令: + +``` +cd /etc/yum.repos.d/ +``` + +``` +wget https://download.opensuse.org/repositories/shells:fish:release:2/RedHat_RHEL-6/shells:fish:release:2.repo +``` + +``` +yum install fish +``` + +在 Ubuntu 和它的衍生版上: + +``` +sudo apt-get update +``` + +``` +sudo apt-get install fish +``` + +就这样了。是时候探索 fish shell 了。 + +### 用法 + +要从你默认的 shell 切换到 fish,请执行以下操作: + +``` +$ fish +Welcome to fish, the friendly interactive shell +``` + +你可以在 ~/.config/fish/config.fish 上找到默认的 fish 配置(类似于 .bashrc)。如果它不存在,就创建它吧。 + +#### 自动建议 + +当我输入一个命令,它自动建议一个浅灰色的命令。所以,我需要输入一个 Linux 命令的前几个字母,然后按下 tab 键来完成这个命令。 + + [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png)][2] + +如果有更多的可能性,它将会列出它们。你可以使用上/下箭头键从列表中选择列出的命令。在选择你想运行的命令后,只需按下右箭头键,然后按下 ENTER 运行它。 + + [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png)][3] + +无需 CTRL+r 了!正如你已知道的,我们通过按 ctrl+r 来反向搜索 Bash shell 中的历史命令。但在 fish shell 
中是没有必要的。由于它有自动建议功能,只需输入命令的前几个字母,然后从历史记录中选择已经执行的命令。Cool,是吗?

#### 智能搜索

我们也可以使用智能搜索来查找一个特定的命令,文件或者目录。例如,我输入一个命令的子串,然后按向下箭头键进行智能搜索,再次输入一个字母来从列表中选择所需的命令。

 [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png)][4]

#### 语法高亮

当你输入一个命令时,你将注意到语法高亮。请看下面当我在 Bash shell 和 fish shell 中输入相同的命令时截图的区别。

Bash:

 [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png)][5]

Fish:

 [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png)][6]

正如你所看到的,“sudo” 在 fish shell 中已经被高亮显示。此外,默认情况下它会以红色显示无效命令。

#### 基于 web 的配置

这是 fish shell 另一个很酷的功能。我们可以设置颜色,更改 fish 提示符,并从网页上查看所有功能、变量、历史记录和键绑定。

要启动 web 配置界面,只需输入:

```
fish_config
```

 [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png)][7]

#### 手册页补全

Bash 和其它 shell 支持可编程补全,但只有 fish 会通过解析已安装的手册页来自动生成它们。

为此,请运行:

```
fish_update_completions
```

示例输出将是:

```
Parsing man pages and writing completions to /home/sk/.local/share/fish/generated_completions/
 3435 / 3435 : zramctl.8.gz
```

#### 禁用问候

默认情况下,fish 在启动时会问候你(Welcome to fish, the friendly interactive shell)。如果你不想要这个问候消息,可以禁用它。为此,编辑 fish 配置文件:

```
vi ~/.config/fish/config.fish
```

添加以下行:

```
set -g -x fish_greeting ''
```

你也可以设置任意自定义的问候语,而不是禁用 fish 问候:

```
set -g -x fish_greeting 'Welcome to OSTechNix'
```

#### 获得帮助

这是另一个令人印象深刻的功能。要在终端的默认 web 浏览器中打开 fish 文档页面,只需输入:

```
help
```

官方文档将会在你的默认浏览器中打开。另外,你可以使用手册页来显示任何命令的帮助部分。

```
man fish
```

#### 设置 fish 为默认 shell

非常喜欢它?太好了!设置它作为默认 shell 吧。为此,请使用命令 chsh:

```
chsh -s /usr/bin/fish
```

在这里,/usr/bin/fish 是 fish shell 的路径。如果你不知道正确的路径,以下命令将会帮助你:

```
which fish
```

注销并且重新登录以使用新的默认 shell。

请记住,为 Bash 编写的许多 shell 脚本可能不完全兼容 fish。

要切换回 Bash,只需运行:

```
bash
```

如果你想让 Bash 作为你的永久默认 shell,运行:

```
chsh -s /bin/bash
```

对目前的各位,这就是全部了。在这个阶段,你可能已经对 fish shell 的使用有了一个基本概念。如果你正在寻找一个 Bash 的替代品,fish 可能是一个不错的选择。

Cheers!
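最后附上一段集中配置的参考片段。下面的 `~/.config/fish/config.fish` 示例(本段为补充示例,其中的别名和函数均为虚构演示,并非原文内容)把上文提到的禁用问候语和 fish 自己的函数语法放在了一起:

```
# 禁用启动时的问候语(见上文)
set -g -x fish_greeting ''

# 定义一个简单的别名
alias ll='ls -lh'

# fish 的函数语法:新建目录并进入
function mkcd
    mkdir -p $argv[1]; and cd $argv[1]
end
```

保存后重新打开终端即可生效。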
+ +资源: + +* [fish shell website][1] + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/ + +作者:[SK][a] +译者:[kimii](https://github.com/kimii) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://fishshell.com/ +[2]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png +[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png +[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png +[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png +[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png +[7]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png From 6e6be0038e235f3f47fc9b719a54cd3155c4dbb5 Mon Sep 17 00:00:00 2001 From: kimii <2545489745@qq.com> Date: Sat, 9 Dec 2017 10:54:52 +0800 Subject: [PATCH 137/236] Delete 20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md --- ...he Friendly Interactive Shell, In Linux.md | 337 ------------------ 1 file changed, 337 deletions(-) delete mode 100644 sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md diff --git a/sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md b/sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md deleted file mode 100644 index 00a5ebadef..0000000000 --- a/sources/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md +++ /dev/null @@ -1,337 +0,0 @@ -Translating by kimii -How To Install Fish, The Friendly Interactive Shell, In Linux -====== -Fish, acronym of friendly interactive shell, is a well equipped, smart and user-friendly shell for Unix-like systems. Fish comes with many significant features, such as autosuggestions, syntax highlighting, searchable history (like CTRL+r in Bash), smart search functionality, glorious VGA color support, web based configuration, man page completions and many, out of the box. Just install it and start using it in no time. No more extra configuration or you don’t have to install any extra add-ons/plug-ins! - -In this tutorial, let us discuss how to install and use fish shell in Linux. - -#### Install Fish - -Even though fish is very user-friendly and feature-rich shell, it is not included in the default repositories of most Linux distributions. It is available in the official repositories of only few Linux distributions such as Arch Linux, Gentoo, NixOS, and Ubuntu etc. However, installing fish is not a big deal. - -On Arch Linux and its derivatives, run the following command to install it. 
- -``` -sudo pacman -S fish -``` - -On CentOS 7 run the following as root: - -``` -cd /etc/yum.repos.d/ -``` - -``` -wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_7/shells:fish:release:2.repo -``` - -``` -yum install fish -``` - -On CentOS 6 run the following as root: - -``` -cd /etc/yum.repos.d/ -``` - -``` -wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_6/shells:fish:release:2.repo -``` - -``` -yum install fish -``` - -On Debian 9 run the following as root: - -``` -wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_9.0/Release.key -O Release.key -``` - -``` -apt-key add - < Release.key -``` - -``` -echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_9.0/ /' > /etc/apt/sources.list.d/fish.list -``` - -``` -apt-get update -``` - -``` -apt-get install fish -``` - -On Debian 8 run the following as root: - -``` -wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_8.0/Release.key -O Release.key -``` - -``` -apt-key add - < Release.key -``` - -``` -echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_8.0/ /' > /etc/apt/sources.list.d/fish.list -``` - -``` -apt-get update -``` - -``` -apt-get install fish -``` - -On Fedora 26 run the following as root: - -``` -dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_26/shells:fish:release:2.repo -``` - -``` -dnf install fish -``` - -On Fedora 25 run the following as root: - -``` -dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_25/shells:fish:release:2.repo -``` - -``` -dnf install fish -``` - -On Fedora 24 run the following as root: - -``` -dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_24/shells:fish:release:2.repo -``` - -``` -dnf install fish -``` - -On Fedora 23 run the following as root: - -``` -dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_23/shells:fish:release:2.repo -``` - -``` -dnf install fish -``` - -On openSUSE: run the following as root: - -``` -zypper install fish -``` - -On RHEL 7 run the following as root: - -``` -cd /etc/yum.repos.d/ -``` - -``` -wget https://download.opensuse.org/repositories/shells:fish:release:2/RHEL_7/shells:fish:release:2.repo -``` - -``` -yum install fish -``` - -On RHEL-6 run the following as root: - -``` -cd /etc/yum.repos.d/ -``` - -``` -wget https://download.opensuse.org/repositories/shells:fish:release:2/RedHat_RHEL-6/shells:fish:release:2.repo -``` - -``` -yum install fish -``` - -On Ubuntu and its derivatives: - -``` -sudo apt-get update -``` - -``` -sudo apt-get install fish -``` - -That’s it. It is time explore fish shell. - -### Usage - -To switch to fish from your default shell, do: - -``` -$ fish -Welcome to fish, the friendly interactive shell -``` - -You can find the default fish configuration at ~/.config/fish/config.fish (similar to .bashrc). If it doesn’t exist, just create it. - -#### Auto suggestions - -When I type a command, it automatically suggests a command in a light grey color. So, I had to type a first few letters of a Linux and hit tab key to complete the command. - - [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png)][2] - -If there are more possibilities, it will list them. You can select the listed commands from the list by using up/down arrow keys. 
After choosing the command you want to run, just hit the right arrow key and press ENTER to run it. - - [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png)][3] - -No more CTRL+r! As you already know, we do reverse search by pressing ctrl+r to search for commands from history in Bash shell. But it is not necessary in fish shell. Since it has autosuggestions capability, just type first few letters of a command, and pick the command from the list that you already executed, from the history. Cool, yeah? - -#### Smart search - -We can also do smart search to find a specific command, file or directory. For example, I type the substring of a command, then hit the down arrow key to enter into smart search and again type a letter to pick the required command from the list. - - [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png)][4] - -#### Syntax highlighting - -You will notice the syntax highlighting as you type a command. See the difference in below screenshots when I type the same command in Bash and fish shells. - -Bash: - - [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png)][5] - -Fish: - - [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png)][6] - -As you see, “sudo” has been highlighted in fish shell. Also, it will display the invalid commands in red color by default. - -#### Web based configuration - -This is yet another cool feature of fish shell. We can can set our colors, change fish prompt, and view functions, variables, history, key bindings all from a web page. - -To start the web configuration interface, just type: - -``` -fish_config -``` - - [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png)][7] - -#### Man page completions - -Bash and other shells supports programmable completions, but only fish generates them automatically by parsing your installed man pages. - -To do so, run: - -``` -fish_update_completions -``` - -Sample output would be: - -``` -Parsing man pages and writing completions to /home/sk/.local/share/fish/generated_completions/ - 3435 / 3435 : zramctl.8.gz -``` - -#### Disable greetings - -By default, fish greets you (Welcome to fish, the friendly interactive shell) at startup. If you don’t this greeting message, you can disable it. To do so, edit fish configuration file: - -``` -vi ~/.config/fish/config.fish -``` - -Add the following line: - -``` -set -g -x fish_greeting '' -``` - -Instead of disabling fish greeting, you can also set any custom greeting message. - -``` -set -g -x fish_greeting 'Welcome to OSTechNix' -``` - -#### Getting help - -This one is another impressive feature that caught my attention. To open fish documentation page in your default web browser from Terminal, just type: - -``` -help -``` - -The official documentation will be opened in your default browser. Also, you can use man pages to display the help section of any command. - -``` -man fish -``` - -#### Set Fish as default shell - -Liked it very much? Great! Just set it as default shell. To do so, use chsh command: - -``` -chsh -s /usr/bin/fish -``` - -Here, /usr/bin/fish is the path to the fish shell. If you don’t know the correct path, the following command will help you. - -``` -which fish -``` - -Log out and log in back to use the new default shell. - -Please remember that many shell scripts written for Bash may not fully compatible with fish. 
- -To switch back to Bash, just run: - -``` -bash -``` - -If you want Bash as your default shell permanently, run: - -``` -chsh -s /bin/bash -``` - -And, that’s all for now folks. At this stage, you might get a basic idea about fish shell usage. If you’re looking for a Bash alternatives, fish might be a good option. - -Cheers! - -Resource: - -* [fish shell website][1] - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://fishshell.com/ -[2]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png -[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png -[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png -[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png -[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png -[7]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png From 47a4f30b3838dd2712f168ae7b91123753dec325 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 9 Dec 2017 13:03:20 +0800 Subject: [PATCH 138/236] translated --- ...922 How to disable USB storage on Linux.md | 49 ++++++++++++------- 1 file changed, 31 insertions(+), 18 deletions(-) diff --git a/sources/tech/20170922 How to disable USB storage on Linux.md b/sources/tech/20170922 How to disable USB storage on Linux.md index b7378c7940..04a8b607b4 100644 --- a/sources/tech/20170922 How to disable USB storage on Linux.md +++ b/sources/tech/20170922 How to disable USB storage on Linux.md @@ -1,48 +1,61 @@ -Linux上如何禁用 USB 存储 +Linux 上如何禁用 USB 存储 ====== -To secure our infrastructure of data breaches, we use software & hardware firewalls to restrict unauthorized access from outside but data breaches can occur from inside as well. To remove such a possibility, organizations limit & monitor the access to internet & also disable usb storage devices. +为了保护数据不被泄漏,我们使用软件和硬件防火墙来限制外部未经授权的访问,但是数据泄露也可能发生在内部。 为了消除这种可能性,机构会限制和监测访问互联网,同时禁用 USB 存储设备。 -In this tutorial, we are going to discuss three different ways to disable USB storage devices on Linux machines. All the three methods have been tested on CentOS 6 & 7 machine & are working as they are supposed to . So let’s discuss all the three methods one by one, +在本教程中,我们将讨论三种不同的方法来禁用 Linux 机器上的 USB 存储设备。所有这三种方法都在 CentOS 6&7 机器上通过测试。那么让我们一一讨论这三种方法, -( Also Read : [Ultimate guide to securing SSH sessions][1] ) +( 另请阅读: [Ultimate guide to securing SSH sessions][1] ) -### Method 1 – Fake install +### 方法 1 – 伪安装 -In this method, we add a line ‘install usb-storage /bin/true’ which causes the ‘/bin/true’ to run instead of installing usb-storage module & that’s why it’s also called ‘Fake Install’ . To do this, create and open a file named ‘block_usb.conf’ (it can be something as well) in the folder ‘/etc/modprobe.d’, +在本方法中,我们往配置文件中添加一行 `install usb-storage /bin/true`, 这会让安装 usb-storage 模块的操作实际上变成运行 `/bin/true`, 这也是为什么这种方法叫做`伪安装`的原因。 具体来说就是, 在文件夹 `/etc/modprobe.d` 中创建并打开一个名为 `block_usb.conf` (也可能教其他名字) , +```shell $ sudo vim /etc/modprobe.d/block_usb.conf +``` -& add the below mentioned line, +然后将下行内容添加进去, +```shell install usb-storage /bin/true +``` -Now save the file and exit. 
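+保存之后,还可以用 `modprobe` 验证这条伪安装规则是否生效(下面是一个补充的验证示例,并非原文步骤):由于加载操作已被重定向到 `/bin/true`,第一条命令会“成功”返回,但第二条命令应该没有任何输出,说明模块并没有被真正加载。
+
+```shell
+sudo modprobe usb-storage
+lsmod | grep usb_storage
+```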
+最后保存文件并退出。 -### Method 2 – Removing the USB driver +### 方法 2 – 删除 UBS 驱动 -Using this method, we can remove/move the drive for usb-storage (usb_storage.ko) from our machines, thus making it impossible to access a usb-storage device from the mahcine. To move the driver from it’s default location, execute the following command, +这种方法要求我们将 usb 存储的驱动程序(usb_storage.ko)删掉或者移走,从而达到无法再访问 usb 存储设备的目的。 执行下面命令可以将驱动从它默认的位置移走, execute the following command, +```shell $ sudo mv /lib/modules/$(uname -r)/kernel/drivers/usb/storage/usb-storage.ko /home/user1 +``` -Now the driver is not available on its default location & thus would not be loaded when a usb-storage device is attached to the system & device would not be able to work. But this method has one little issue, that is when the kernel of the system is updated the usb-storage module would again show up in it’s default location. +现在在默认的位置上无法再找到驱动程序了,因此当 USB 存储器连接道系统上时也就无法加载到驱动程序了,从而导致磁盘不可用。 但是这个方法有一个小问题,那就是当系统内核更新的时候,usb-storage 模块会再次出现在它的默认位置。 -### Method 3- Blacklisting USB-storage +### 方法 3- 将 USB-storage 纳入黑名单 -We can also blacklist usb-storage using the file ‘/etc/modprobe.d/blacklist.conf’. This file is available on RHEL/CentOS 6 but might need to be created on 7\. To blacklist usb-storage, open/create the above mentioned file using vim, +我们也可以通过 `/etc/modprobe.d/blacklist.conf` 文件将 usb-storage 纳入黑名单。这个文件在 RHEL/CentOS 6 是现成就有的,但在 7 上可能需要自己创建。 要将 usb 存储列入黑名单,请使用 vim 打开/创建上述文件, +```shell $ sudo vim /etc/modprobe.d/blacklist.conf +``` -& enter the following line to blacklist the usb, +并输入以下行将 USB 纳入黑名单, +``` blacklist usb-storage +``` -Save file & exit. USB-storage will now be blocked on the system but this method has one major downside i.e. any privileged user can load the usb-storage module by executing the following command, +保存文件并退出。`usb-storage` 就在就会被系统阻止加载,但这种方法有一个很大的缺点,即任何特权用户都可以通过执行以下命令来加载 `usb-storage` 模块, +```shell $ sudo modprobe usb-storage +``` -This issue makes this method somewhat not desirable but it works well for non-privileged users. +这个问题使得这个方法不是那么理想,但是对于非特权用户来说,这个方法效果很好。 + +在更改完成后重新启动系统,以使更改生效。请尝试用这些方法来禁用 USB 存储,如果您遇到任何问题或有什么问题,请告知我们。 -Reboot your system after the changes have been made to implement the changes made for all the above mentioned methods. Do check these methods to disable usb storage & let us know if you face any issue or have a query using the comment box below. 
--------------------------------------------------------------------------------

via: http://linuxtechlab.com/disable-usb-storage-linux/

译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+本文由 [LCTT](https://github.com/LCTT/TranslateProject)原创编译,[Linux 中国](https://linux.cn/)荣誉推出

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/ultimate-guide-to-securing-ssh-sessions/

From b4e99a24888fffca6bd67713a289564b693e2b4d Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 9 Dec 2017 13:04:03 +0800
Subject: [PATCH 139/236] move to translated
---
 .../tech/20170922 How to disable USB storage on Linux.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {sources => translated}/tech/20170922 How to disable USB storage on Linux.md (100%)

diff --git a/sources/tech/20170922 How to disable USB storage on Linux.md b/translated/tech/20170922 How to disable USB storage on Linux.md
similarity index 100%
rename from sources/tech/20170922 How to disable USB storage on Linux.md
rename to translated/tech/20170922 How to disable USB storage on Linux.md

From a517642863760dce3da0287f1b9b9ce32b08208b Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 9 Dec 2017 13:10:00 +0800
Subject: [PATCH 140/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20extrac?=
 =?UTF-8?q?t=20substring=20in=20Bash?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...171206 How to extract substring in Bash.md | 153 ++++++++++++++++++
 1 file changed, 153 insertions(+)
 create mode 100644 sources/tech/20171206 How to extract substring in Bash.md

diff --git a/sources/tech/20171206 How to extract substring in Bash.md b/sources/tech/20171206 How to extract substring in Bash.md
new file mode 100644
index 0000000000..945b8bd4dd
--- /dev/null
+++ b/sources/tech/20171206 How to extract substring in Bash.md
@@ -0,0 +1,153 @@
translating by lujun9972
How to extract substring in Bash
======
A substring is nothing but a string that occurs “in” another string. For example, “3382” is a substring of “this is a 3382 test”. One can extract the digits or a given part of a string using various methods.

 [![How to Extract substring in Bash Shell on Linux or Unix](https://www.cyberciti.biz/media/new/faq/2017/12/How-to-Extract-substring-in-Bash-Shell-on-Linux-or-Unix.jpg)][2]

This quick tutorial shows how to obtain or find a substring when using the bash shell.

### Extract substring in Bash

The syntax is `${parameter:offset:length}`. The substring expansion is a bash feature: it expands to up to length characters of the value of parameter, starting at the character specified by offset. For example, take $u, defined as follows:

```
## define var named u ##
u="this is a test"
```

The following substring parameter expansion performs substring extraction:

```
var="${u:10:4}"
echo "${var}"
```

Sample outputs:

```
test
```

* 10 : The offset

* 4 : The length

### Using IFS

From the bash man page:

> The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is `<space><tab><newline>`.
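For instance, with a custom separator, word splitting follows whatever IFS contains. Here is a small illustrative snippet of mine (not from the man page quote above):

```
line="one:two:three"
IFS=':'          # split on colons instead of whitespace
set -- $line     # unquoted expansion is word-split using IFS
echo "$2"        # prints: two
```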
Another POSIX ready solution is as follows:

```
u="this is a test"
set -- $u
echo "$1"
echo "$2"
echo "$3"
echo "$4"
```

Sample outputs:

```
this
is
a
test
```

Here is a practical script that uses this IFS-based splitting to build URLs and purge the Cloudflare cache:

```
#!/bin/bash
####################################################
## Author - Vivek Gite {https://www.cyberciti.biz/}
## Purpose - Purge CF cache
## License - Under GPL ver 3.x+
####################################################
## set me first ##
zone_id="YOUR_ZONE_ID_HERE"
api_key="YOUR_API_KEY_HERE"
email_id="YOUR_EMAIL_ID_HERE"

## hold data ##
home_url=""
amp_url=""
urls="$@"

## Show usage
[ "$urls" == "" ] && { echo "Usage: $0 url1 url2 url3"; exit 1; }

## Get home page url as we have various sub dirs on domain
## /tips/
## /faq/

get_home_url(){
    local u="$1"
    IFS='/'
    set -- $u
    echo "${1}${IFS}${IFS}${3}${IFS}${4}${IFS}"
}

echo
echo "Purging cache from Cloudflare..."
echo
for u in $urls
do
    home_url="$(get_home_url $u)"
    amp_url="${u}amp/"
    curl -X DELETE "https://api.cloudflare.com/client/v4/zones/${zone_id}/purge_cache" \
    -H "X-Auth-Email: ${email_id}" \
    -H "X-Auth-Key: ${api_key}" \
    -H "Content-Type: application/json" \
    --data "{\"files\":[\"${u}\",\"${amp_url}\",\"${home_url}\"]}"
    echo
done
echo
```

I can run it as follows: ~/bin/cf.clear.cache https://www.cyberciti.biz/faq/bash-for-loop/ https://www.cyberciti.biz/tips/linux-security.html

### Say hello to cut command

One can remove sections from each line of a file or variable using the cut command. The syntax is:

```
u="this is a test"
echo "$u" | cut -d' ' -f 4
echo "$u" | cut --delimiter=' ' --fields=4
##########################################
## WHERE
## -d' ' : Use a whitespace as delimiter
## -f 4 : Select only 4th field
##########################################
var="$(cut -d' ' -f 4 <<< $u)"
echo "${var}"
```

For more info, read the bash and cut man pages: man bash; man cut. See also: [Bash String Comparison: Find Out IF a Variable Contains a Substring][1]

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/how-to-extract-substring-in-bash/

作者:[Vivek Gite][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/faq/bash-find-out-if-variable-contains-substring/
[2]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-Extract-substring-in-Bash-Shell-on-Linux-or-Unix.jpg

From f224365a53215245f94a098b27f34811e732ab82 Mon Sep 17 00:00:00 2001
From: imquanquan
Date: Sat, 9 Dec 2017 13:17:28 +0800
Subject: [PATCH 141/236] translated
---
 ...ram Will Exactly Do Before Executing It.md | 143 ++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 translated/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md

diff --git a/translated/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md b/translated/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md
new file mode 100644
index 0000000000..5123c87df9
--- /dev/null
+++ b/translated/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md
@@ -0,0 +1,143 @@
如何获知一个命令或程序在执行前将会做什么
======

有没有想过一个 Unix 命令在执行前将会干些什么呢?并不是每个人都会知道一个特定的命令或者程序将会做什么。当然,你可以用 [Explainshell][2] 来查看它。你可以在 Explainshell
网站中粘贴你的命令,然后它可以让你了解命令的每个部分做了什么。但是,这是没有必要的。现在,我们从终端就可以轻易地知道一个命令或者程序在执行前会做什么。 `maybe` ,一个简单的工具,它允许你运行一条命令并可以查看此命令对你的文件系统做了什么而实际上这条命令却并未执行!在查看 `maybe` 的输出列表后,你可以决定是否真的想要运行这条命令。 + +#### “maybe”是如何工作的 + +根据开发者的介绍 + +> `maybe` 利用 `python-ptrace` 库运行了一个在 `ptrace` 控制下的进程。当它截取到一个即将更改文件系统的系统调用时,它会记录该调用,然后修改 CPU 寄存器,将这个调用重定向到一个无效的系统调用 ID(将其变成一个无效操作(no-op)),并将这个无效操作(no-op)的返回值设置为有效操作的返回值。结果,这个进程认为,它所做的一切都发生了,实际上什么都没有改变。 + +警告: 在生产环境或者任何你所关心的系统里面使用这个工具时都应该小心。它仍然可能造成严重的损失,因为它只能阻止少数系统调用。 + +#### 安装 “maybe” + +确保你已经在你的 Linux 系统中已经安装了 `pip` 。如果没有,可以根据您使用的发行版,按照如下指示进行安装。 + +在 Arch Linux 及其衍生产品(如 Antergos,Manjaro Linux)上,使用以下命令安装 `pip` : + +``` +sudo pacman -S python-pip +``` + +在 RHEL,CentOS 上: + +``` +sudo yum install epel-release +``` +``` +sudo yum install python-pip +``` + +在 Fedora 上: + +``` +sudo dnf install epel-release +``` +``` +sudo dnf install python-pip +``` + +在 Debian,Ubuntu,Linux Mint 上: + +``` +sudo apt-get install python-pip +``` + +在 SUSE, openSUSE 上: + +``` +sudo zypper install python-pip +``` + +安装 `pip` 后,运行以下命令安装 `maybe` 。 + +``` +sudo pip install maybe +``` + +#### 了解一个命令或程序在执行前会做什么 + +用法是非常简单的!只要在要执行的命令前加上 `maybe` 即可。 + +让我给你看一个例子: + +``` +$ maybe rm -r ostechnix/ +``` + +如你所看到的,我从我的系统中删除一个名为 `ostechnix` 的文件夹。下面是示例输出: + +``` +maybe has prevented rm -r ostechnix/ from performing 5 file system operations: + + delete /home/sk/inboxer-0.4.0-x86_64.AppImage + delete /home/sk/Docker.pdf + delete /home/sk/Idhayathai Oru Nodi.mp3 + delete /home/sk/dThmLbB334_1398236878432.jpg + delete /home/sk/ostechnix + +Do you want to rerun rm -r ostechnix/ and permit these operations? [y/N] y +``` + + [![](http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png)][3] + + + `maybe` 执行 5 个文件系统操作,并向我显示该命令(rm -r ostechnix /)究竟会做什么。现在我可以决定是否应该执行这个操作。是不是很酷呢?确实很酷! + +这是另一个例子。我要为 Gmail 安装 Inboxer 桌面客户端。这是我得到的输出: + +``` +$ maybe ./inboxer-0.4.0-x86_64.AppImage +fuse: bad mount point `/tmp/.mount_inboxemDzuGV': No such file or directory +squashfuse 0.1.100 (c) 2012 Dave Vasilevsky + +Usage: /home/sk/Downloads/inboxer-0.4.0-x86_64.AppImage [options] ARCHIVE MOUNTPOINT + +FUSE options: + -d -o debug enable debug output (implies -f) + -f foreground operation + -s disable multi-threaded operation + +open dir error: No such file or directory +maybe has prevented ./inboxer-0.4.0-x86_64.AppImage from performing 1 file system operations: + +create directory /tmp/.mount_inboxemDzuGV + +Do you want to rerun ./inboxer-0.4.0-x86_64.AppImage and permit these operations? [y/N] +``` + +如果它没有检测到任何文件系统操作,那么它会只显示如下所示的结果。 + +例如,我运行下面这条命令来更新我的 Arch Linux。 + +``` +$ maybe sudo pacman -Syu +sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges? +maybe has not detected any file system operations from sudo pacman -Syu. +``` + +看到没?它没有检测到任何文件系统操作,所以没有任何警告。这非常棒,而且正是我所预料到的结果。从现在开始,我甚至可以在执行之前知道一个命令或一个程序将执行什么操作。我希望这对你也会有帮助。 + +Cheers! 
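另外值得一提的是,`maybe` 不仅能检查单条命令,也可以包裹脚本或其他程序来运行。比如下面这个假想的例子(脚本名 `cleanup.sh` 是虚构的,仅作演示):

```
$ maybe bash cleanup.sh
```

这样就可以在真正运行一个来历不明的脚本之前,先看看它会对文件系统做哪些改动。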
+ +资源: + +* [“maybe” GitHub page][1] + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/know-command-program-will-exactly-executing/ + +作者:[SK][a] +译者:[imquanquan](https://github.com/imquanquan) +校对:[校对ID](https://github.com/校对ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://github.com/p-e-w/maybe +[2]:https://www.ostechnix.com/explainshell-find-part-linux-command/ +[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png +[4]:https://www.ostechnix.com/inboxer-unofficial-google-inbox-desktop-client/ From 410b1ae7f47c5cac2943635a501bc3a3f02c66af Mon Sep 17 00:00:00 2001 From: imquanquan Date: Sat, 9 Dec 2017 13:21:58 +0800 Subject: [PATCH 142/236] delete sources --- ...ram Will Exactly Do Before Executing It.md | 145 ------------------ 1 file changed, 145 deletions(-) delete mode 100644 sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md diff --git a/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md b/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md deleted file mode 100644 index bc7b2c9cb2..0000000000 --- a/sources/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md +++ /dev/null @@ -1,145 +0,0 @@ -translating by imquanquan - -How To Know What A Command Or Program Will Exactly Do Before Executing It -====== -Ever wondered what a Unix command will do before executing it? Not everyone knows what a particular command or program will do. Of course, you can check it with [Explainshell][2]. You need to copy/paste the command in Explainshell website and it let you know what each part of a Linux command does. However, it is not necessary. Now, we can easily know what a command or program will exactly do before executing it, right from the Terminal. Say hello to “maybe”, a simple tool that allows you to run a command and see what it does to your files without actually doing it! After reviewing the output listed, you can then decide whether you really want to run it or not. - -#### How “maybe” works? - -According to the developer, - -> “maybe” runs processes under the control of ptrace with the help of python-ptrace library. When it intercepts a system call that is about to make changes to the file system, it logs that call, and then modifies CPU registers to both redirect the call to an invalid syscall ID (effectively turning it into a no-op) and set the return value of that no-op call to one indicating success of the original call. As a result, the process believes that everything it is trying to do is actually happening, when in reality nothing is. - -Warning: You should be very very careful when using this utility in a production system or in any systems you care about. It can still do serious damages, because it will block only a handful of syscalls. - -#### Installing “maybe” - -Make sure you have installed pip in your Linux system. If not, install it as shown below depending upon the distribution you use. 
- -On Arch Linux and its derivatives like Antergos, Manjaro Linux, install pip using the following command: - -``` -sudo pacman -S python-pip -``` - -On RHEL, CentOS: - -``` -sudo yum install epel-release -``` - -``` -sudo yum install python-pip -``` - -On Fedora: - -``` -sudo dnf install epel-release -``` - -``` -sudo dnf install python-pip -``` - -On Debian, Ubuntu, Linux Mint: - -``` -sudo apt-get install python-pip -``` - -On SUSE, openSUSE: - -``` -sudo zypper install python-pip -``` - -Once pip installed, run the following command to install “maybe”. - -``` -sudo pip install maybe -``` - -#### Know What A Command Or Program Will Exactly Do Before Executing It - -Usage is absolutely easy! Just add “maybe” in front of a command that you want to execute. - -Allow me to show you an example. - -``` -$ maybe rm -r ostechnix/ -``` - -As you can see, I am going to delete a folder called “ostechnix” from my system. Here is the sample output. - -``` -maybe has prevented rm -r ostechnix/ from performing 5 file system operations: - - delete /home/sk/inboxer-0.4.0-x86_64.AppImage - delete /home/sk/Docker.pdf - delete /home/sk/Idhayathai Oru Nodi.mp3 - delete /home/sk/dThmLbB334_1398236878432.jpg - delete /home/sk/ostechnix - -Do you want to rerun rm -r ostechnix/ and permit these operations? [y/N] y -``` - - [![](http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png)][3] - -The “maybe” tool performs 5 file system operations and shows me what this command (rm -r ostechnix/) will exactly do. Now I can decide whether I should perform this operation or not. Cool, yeah? Indeed! - -Here is another example. I am going to install [Inboxer][4] desktop client for Gmail. This is what I got. - -``` -$ maybe ./inboxer-0.4.0-x86_64.AppImage -fuse: bad mount point `/tmp/.mount_inboxemDzuGV': No such file or directory -squashfuse 0.1.100 (c) 2012 Dave Vasilevsky - -Usage: /home/sk/Downloads/inboxer-0.4.0-x86_64.AppImage [options] ARCHIVE MOUNTPOINT - -FUSE options: - -d -o debug enable debug output (implies -f) - -f foreground operation - -s disable multi-threaded operation - -open dir error: No such file or directory -maybe has prevented ./inboxer-0.4.0-x86_64.AppImage from performing 1 file system operations: - -create directory /tmp/.mount_inboxemDzuGV - -Do you want to rerun ./inboxer-0.4.0-x86_64.AppImage and permit these operations? [y/N] -``` - -If it not detects any file system operations, then it will simply display a result something like below. - -For instance, I run this command to update my Arch Linux. - -``` -$ maybe sudo pacman -Syu -sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges? -maybe has not detected any file system operations from sudo pacman -Syu. -``` - -See? It didn’t detect any file system operations, so there were no warnings. This is absolutely brilliant and exactly what I was looking for. From now on, I can easily know what a command or a program will do even before executing it. I hope this will be useful to you too. More good stuffs to come. Stay tuned! - -Cheers! 
- -Resource: - -* [“maybe” GitHub page][1] - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/know-command-program-will-exactly-executing/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://github.com/p-e-w/maybe -[2]:https://www.ostechnix.com/explainshell-find-part-linux-command/ -[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png -[4]:https://www.ostechnix.com/inboxer-unofficial-google-inbox-desktop-client/ From 0d5f003231dd1bc69833e95b9a73246610d7af12 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 9 Dec 2017 13:59:39 +0800 Subject: [PATCH 143/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=2024=20Must=20Have?= =?UTF-8?q?=20Essential=20Linux=20Applications=20In=202017?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ve Essential Linux Applications In 2017.md | 251 ++++++++++++++++++ 1 file changed, 251 insertions(+) create mode 100644 sources/tech/20171208 24 Must Have Essential Linux Applications In 2017.md diff --git a/sources/tech/20171208 24 Must Have Essential Linux Applications In 2017.md b/sources/tech/20171208 24 Must Have Essential Linux Applications In 2017.md new file mode 100644 index 0000000000..c5c214eb1d --- /dev/null +++ b/sources/tech/20171208 24 Must Have Essential Linux Applications In 2017.md @@ -0,0 +1,251 @@ +24 Must Have Essential Linux Applications In 2017 +====== +Brief: What are the must have applications for Linux? The answer is subjective and it depends on for what purpose do you use your desktop Linux. But there are still some essentials Linux apps that are more likely to be used by most Linux user. We have listed such best Linux applications that you should have installed in every Linux distribution you use. + +The world of Linux, everything is full of alternatives. You have to choose a distro? You have got several dozens of them. Are you trying to find a decent music player? Alternatives are there too. + +But not all of them are built with the same thing in mind – some of them might target minimalism while others might offer tons of features. Finding the right application for your needs can be quite confusing and a tiresome task. Let’s make that a bit easier. + +### Best free applications for Linux users + +I’m putting together a list of essential free Linux applications I prefer to use in different categories. I’m not saying that they are the best, but I have tried lots of applications in each category and finally liked the listed ones better. So, you are more than welcome to mention your favorite applications in the comment section. + +We have also compiled a nice video of this list. Do subscribe to our YouTube channel for more such educational Linux videos: + +### Web Browser + +![Web Browsers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Web-Browser-1024x512.jpg) +[Save][1]Web Browsers + +#### [Google Chrome][12] + +Google Chrome is a powerful and complete solution for a web browser. It comes with excellent syncing capabilities and offers a vast collection of extensions. If you are accustomed to Google eco-system Google Chrome is for you without any doubt. If you prefer a more open source solution, you may want to try out [Chromium][13], which is the project Google Chrome is based on. 
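In case Chromium is not pre-installed, it is available in the default repositories of most distributions; on Ubuntu and its derivatives, for example, something along these lines should do (the package name may differ on other distros):

```
sudo apt-get install chromium-browser
```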
+ +#### [Firefox][14] + +If you are not a fan of Google Chrome, you can try out Firefox. It’s been around for a long time and is a very stable and robust web browser. + +#### [Vivaldi][15] + +However, if you want something new and different, you can check out Vivaldi. Vivaldi takes a completely fresh approach towards web browser. It’s from former team members of Opera and built on top of the Chromium project. It’s lightweight and customizable. Though it is still quite new and still missing out some features, it feels amazingly refreshing and does a really decent job. + +[Suggested read[Review] Otter Browser Brings Hope To Opera Lovers][40] + +### Download Manager + +![Download Managers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Download-Manager-1024x512.jpg) +[Save][2]Download Managers + +#### [uGet][16] + +uGet is the best download manager I have come across. It is open source and offers everything you can expect from a download manager. uGet offers advanced settings for managing downloads. It can queue and resume downloads, use multiple connections for downloading large files, download files to different directories according to categories and so on. + +#### [XDM][17] + +Xtreme Download Manager (XDM) is a powerful and open source tool developed with Java. It has all the basic features of a download manager, including – video grabber, smart scheduler and browser integration. + +[Suggested read4 Best Download Managers For Linux][41] + +### BitTorrent Client + +![BitTorrent Clients](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-BitTorrent-Client-1024x512.jpg) +[Save][3]BitTorrent Clients + +#### [Deluge][18] + +Deluge is a open source BitTorrent client. It has a beautiful user interface. If you are used to using uTorrent for Windows, Deluge interface will feel familiar. It has various configuration options as well as plugins support for various tasks. + +#### [Transmission][19] + +Transmission takes the minimal approach. It is an open source BitTorrent client with a minimal user interface. Transmission comes pre-installed with many Linux distributions. + +[Suggested readTop 5 Torrent Clients For Ubuntu Linux][42] + +### Cloud Storage + +![Cloud Storages](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Cloud-Storage-1024x512.jpg) +[Save][4]Cloud Storages + +#### [Dropbox][20] + +Dropbox is one of the most popular cloud storage service available out there. It gives you 2GB free storage to start with. Dropbox has a robust and straight-forward Linux client. + +#### [MEGA][21] + +MEGA offers 50GB of free storage. But that is not the best thing about it. The best thing about MEGA is that it has end-to-end encryption support for your files. MEGA has a solid Linux client named MEGAsync. + +[Suggested readBest Free Cloud Services For Linux in 2017][43] + +### Communication + +![Communication Apps](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Communication-1024x512.jpg) +[Save][5]Communication Apps + +#### [Pidgin][22] + +Pidgin is an open source instant messenger client. It supports many chatting platforms including – Google Talk, Yahoo and even IRC. Pidgin is extensible through third-party plugins, that can provide a lot of additional functionalities to Pidgin. + +You can also use [Franz][23] or [Rambox][24] to use several messaging services in one application. + +#### [Skype][25] + +We all know Skype, it is one of the most popular video chatting platforms. 
Recently it has [released a brand new desktop client][26] for Linux.

[Suggested read: 6 Best Messaging Apps Available For Linux In 2017][44]

### Office Suite

![Office Suites](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Office-Suite-1024x512.jpg)

#### [LibreOffice][27]

LibreOffice is the most actively developed open source office suite for Linux. It has six main modules – Writer, Calc, Impress, Draw, Math and Base – and every one of them supports a wide range of file formats. LibreOffice also supports third-party extensions. It is the default office suite for many Linux distributions.

#### [WPS Office][28]

If you want to try out something other than LibreOffice, WPS Office might be your go-to. The WPS Office suite includes writer, presentation and spreadsheet support.

[Suggested read: 6 Best Open Source Alternatives to Microsoft Office for Linux][45]

### Music Player

![Music Players](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Music-Player-1024x512.jpg)

#### [Lollypop][29]

This is a relatively new music player. Lollypop is open source and has a beautiful yet simple user interface. It offers a nice music organizer, scrobbling support, online radio and a party mode. Though it is a simple music player without many advanced features, it is worth giving a try.

#### [Rhythmbox][30]

Rhythmbox is a music player developed mainly for the GNOME desktop environment, but it works on other desktop environments as well. It does all the basic tasks of a music player, including CD ripping and burning, scrobbling, etc. It also has support for iPod.

#### [cmus][31]

If you want minimalism and love your terminal window, cmus is for you. Personally, I’m a fan and user of this one. cmus is a small, fast and powerful console music player for Unix-like operating systems. It has all the basic music player features, and you can also extend its functionality with additional extensions and scripts.

[Suggested read: How To Install Tomahawk Player In Ubuntu 14.04 And Linux Mint 17][46]

### Video Player

![Video Player](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Video-Player-1024x512.jpg)

#### [VLC][32]

VLC is an open source media player. It is simple, fast, lightweight and really powerful. VLC can play almost any media format you throw at it out of the box. It can also stream online media, and it has some nifty extensions for various tasks like downloading subtitles right from the player.

#### [Kodi][33]

Kodi is a full-fledged media center. Kodi is open source and very popular among its user base. It can handle videos, music, pictures, podcasts and even games, from both local and network media storage. You can even record TV with it. The behavior of Kodi can be customized via add-ons and skins.

[Suggested read: 4 Format Factory Alternative In Linux][47]

### Photo Editor

![Photo Editors](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Photo-Editor-1024x512.jpg)

#### [GIMP][34]

GIMP is the Photoshop alternative for Linux. It is open source, full-featured, professional photo editing software. It is packed with a wide range of tools for manipulating images. And on top of that, there are various customization options and third-party plugins for enhancing the experience.
+ +#### [Krita][35] + +Krita is mainly a painting tool but serves as a photo editing application as well. It is open source and packed with lots of sophisticated and advanced tools. + +[Suggested readBest Photo Applications For Linux][48] + +### Text Editor + +Every Linux distribution comes with their own solution for text editors. Generally, they are quite simple and without much functionality. But here are some text editors with enhanced capabilities. + +![Text Editors](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Text-Editor-1024x512.jpg) +[Save][10]Text Editors + +#### [Atom][36] + +Atom is the modern and hackable text editor maintained by GitHub. It is completely open-source and offers everything you can think of to get out of a text editor. You can use it right out-of-the-box or you can customize and tune it just the way you want. And it has a ton of extensions and themes from the community up for grab. + +#### [Sublime Text][37] + +Sublime Text is one of the most popular text editors. Though it is not free, it allows you to use the software for evaluation without any time limit. Sublime Text is a feature-rich and sophisticated piece of software. And of course, it has plugins and themes support. + +[Suggested read4 Best Modern Open Source Code Editors For Linux][49] + +### Launcher + +![Launchers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Launcher-1024x512.jpg) +[Save][11]Launchers + +#### [Albert][38] + +Albert is inspired by Alfred (a productivity application for Mac, which is totally kickass by-the-way) and still in the development phase. Albert is fast, extensible and customizable. The goal is to “Access everything with virtually zero effort”. It integrates with your Linux distribution nicely and helps you to boost your productivity. + +#### [Synapse][39] + +Synapse has been around for years. It’s a simple launcher that can search and run applications. It can also speed up various workflows like – controlling music, searching files, directories, bookmarks etc., running commands and such. + +As Abhishek advised, we will keep this list of best Linux software updated with our readers’ (i.e. yours) feedback. So, what are your favorite must have Linux applications? Share with us and do suggest more categories of software to add to this list. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/essential-linux-applications/ + +作者:[Munif Tanjim][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/munif/ +[1]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Web-Browser-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Web%20Browsers +[2]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Download-Manager-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Download%20Managers +[3]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-BitTorrent-Client-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=BitTorrent%20Clients +[4]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Cloud-Storage-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Cloud%20Storages +[5]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Communication-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Communication%20Apps +[6]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Office-Suite-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Office%20Suites +[7]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Music-Player-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Music%20Players +[8]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Video-Player-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Video%20Player +[9]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Photo-Editor-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Photo%20Editors +[10]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Text-Editor-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Text%20Editors +[11]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Launcher-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Launchers +[12]:https://www.google.com/chrome/browser +[13]:https://www.chromium.org/Home +[14]:https://www.mozilla.org/en-US/firefox +[15]:https://vivaldi.com +[16]:http://ugetdm.com/ +[17]:http://xdman.sourceforge.net/ +[18]:http://deluge-torrent.org/ +[19]:https://transmissionbt.com/ +[20]:https://www.dropbox.com +[21]:https://mega.nz/ +[22]:https://www.pidgin.im/ +[23]:https://itsfoss.com/franz-messaging-app/ +[24]:http://rambox.pro/ 
+[25]:https://www.skype.com +[26]:https://itsfoss.com/skpe-alpha-linux/ +[27]:https://www.libreoffice.org +[28]:https://www.wps.com +[29]:http://gnumdk.github.io/lollypop-web/ +[30]:https://wiki.gnome.org/Apps/Rhythmbox +[31]:https://cmus.github.io/ +[32]:http://www.videolan.org +[33]:https://kodi.tv +[34]:https://www.gimp.org/ +[35]:https://krita.org/en/ +[36]:https://atom.io/ +[37]:http://www.sublimetext.com/ +[38]:https://github.com/ManuelSchneid3r/albert +[39]:https://launchpad.net/synapse-project +[40]:https://itsfoss.com/otter-browser-review/ +[41]:https://itsfoss.com/4-best-download-managers-for-linux/ +[42]:https://itsfoss.com/best-torrent-ubuntu/ +[43]:https://itsfoss.com/cloud-services-linux/ +[44]:https://itsfoss.com/best-messaging-apps-linux/ +[45]:https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/ +[46]:https://itsfoss.com/install-tomahawk-ubuntu-1404-linux-mint-17/ +[47]:https://itsfoss.com/format-factory-alternative-linux/ +[48]:https://itsfoss.com/image-applications-ubuntu-linux/ +[49]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ From 11f89a9b5da8f3d1a8ef40acc7db2932191ba36a Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 9 Dec 2017 14:59:06 +0800 Subject: [PATCH 144/236] =?UTF-8?q?20171209-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...mplified Alternative To Linux Man Pages.md | 117 ++++++++++++++++++ 1 file changed, 117 insertions(+) create mode 100644 sources/tech/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md diff --git a/sources/tech/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md b/sources/tech/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md new file mode 100644 index 0000000000..afa981df46 --- /dev/null +++ b/sources/tech/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md @@ -0,0 +1,117 @@ +TLDR pages: Simplified Alternative To Linux Man Pages +============================================================ + + [![](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu-640x360.jpg "tldr page ubuntu")][22] + + Working on the terminal and using various commands to carry out important tasks is an indispensable part of a Linux desktop experience. This open-source operating system possesses an [abundance of commands][23] that **makes** it impossible for any user to remember all of them. To make things more complex, each command has its own set of options to bring a wider set of functionality. + +To solve this problem, [Man Pages][12], short for manual pages, were created. First written in English, it contains tons of in-depth information about different commands. Sometimes, when you’re looking for just basic information on a command, it can also become overwhelming. To solve this issue,[ TLDR pages][13] was created. + + _Before going ahead and knowing more about it, don’t forget to check a few more terminal tricks:_ + +* _**[Watch Star Wars in terminal ][1]**_ + +* _**[Use StackOverflow in terminal][2]**_ + +* _**[Get Weather report in terminal][3]**_ + +* _**[Access Google through terminal][4]**_ + +* [**_Use Wikipedia from command line_**][7] + +* _**[Check Cryptocurrency Prices From Terminal][5]**_ + +* _**[Search and download torrent in terminal][6]**_ + +### What are TLDR pages? + +The GitHub page of TLDR pages for Linux/Unix describes it as a collection of simplified and community-driven man pages. 
It’s an effort to make the experience of using man pages simpler with the help of practical examples. For those who don’t know, TLDR is taken from the common internet slang _Too Long; Didn’t Read_.

In case you wish to compare, let’s take the example of the tar command. The usual man page extends over 1,000 lines. It’s an archiving utility that’s often combined with a compression method like bzip or gzip. Take a look at its man page:

 [![tar man page](https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg)][14]

On the other hand, TLDR pages lets you simply take a glance at the command and see how it works. Tar’s TLDR page looks like this and comes with some handy examples of the most common tasks you can complete with this utility:

 [![tar tldr page](https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg)][15]

Let’s take another example and show you what TLDR pages has to offer when it comes to apt:

 [![tldr-page-of-apt](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg)][16]

Having shown you how TLDR works and makes your life easier, let’s tell you how to install it on your Linux-based operating system.

### How to install and use TLDR pages on Linux?

The most mature TLDR client is based on Node.js and you can install it easily using the NPM package manager. In case Node and NPM are not available on your system, run the following commands:

```
sudo apt-get install nodejs

sudo apt-get install npm
```

In case you’re using an OS other than Debian, Ubuntu, or Ubuntu’s derivatives, you can use the yum, dnf, or pacman package manager as per your convenience.

Now, by running the following command in the terminal, install the TLDR client on your Linux machine:

```
sudo npm install -g tldr
```

Once you’ve installed this terminal utility, it would be a good idea to update its cache before trying it out. To do so, run the following command:

```
tldr --update
```

After doing this, feel free to read the TLDR page of any Linux command. To do so, simply type:

```
tldr <command>
```

 [![tldr kill command](https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg)][17]

You can also run the client’s help command to see all the different parameters that can be used with TLDR to get the desired output. As usual, this help page is also accompanied by examples.

### TLDR web, Android, and iOS versions

You would be pleasantly surprised to know that TLDR pages isn’t limited to your Linux desktop. Instead, it can also be used in your web browser, which can be accessed from any machine.

To use the TLDR web version, visit [tldr.ostera.io][18] and perform the required search operation.

Alternatively, you can also download the [iOS][19] and [Android][20] apps and keep learning new commands on the go.

 [![tldr app ios](https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg)][21]

Did you find this cool Linux terminal trick interesting? Do give it a try and let us know your feedback.
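On a side note, the Node.js program is not the only TLDR client out there; if you prefer Python, a community client can usually be installed through pip as well (assuming the `tldr` package on PyPI):

```
sudo pip install tldr
```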
+ +-------------------------------------------------------------------------------- + +via: https://fossbytes.com/tldr-pages-linux-man-pages-alternative/ + +作者:[Adarsh Verma ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fossbytes.com/author/adarsh/ +[1]:https://fossbytes.com/watch-star-wars-command-prompt-via-telnet/ +[2]:https://fossbytes.com/use-stackoverflow-linux-terminal-mac/ +[3]:https://fossbytes.com/single-command-curl-wttr-terminal-weather-report/ +[4]:https://fossbytes.com/how-to-google-search-in-command-line-using-googler/ +[5]:https://fossbytes.com/check-bitcoin-cryptocurrency-prices-command-line-coinmon/ +[6]:https://fossbytes.com/review-torrench-download-torrents-using-terminal-linux/ +[7]:https://fossbytes.com/use-wikipedia-termnianl-wikit/ +[8]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Ffossbytes.com%2Ftldr-pages-linux-man-pages-alternative%2F +[9]:https://twitter.com/intent/tweet?text=TLDR+pages%3A+Simplified+Alternative+To+Linux+Man+Pages&url=https%3A%2F%2Ffossbytes.com%2Ftldr-pages-linux-man-pages-alternative%2F&via=%40fossbytes14 +[10]:http://plus.google.com/share?url=https://fossbytes.com/tldr-pages-linux-man-pages-alternative/ +[11]:http://pinterest.com/pin/create/button/?url=https://fossbytes.com/tldr-pages-linux-man-pages-alternative/&media=https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu.jpg +[12]:https://fossbytes.com/linux-lexicon-man-pages-navigation/ +[13]:https://github.com/tldr-pages/tldr +[14]:https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg +[15]:https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg +[16]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg +[17]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg +[18]:https://tldr.ostera.io/ +[19]:https://itunes.apple.com/us/app/tldt-pages/id1071725095?ls=1&mt=8 +[20]:https://play.google.com/store/apps/details?id=io.github.hidroh.tldroid +[21]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg +[22]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu.jpg +[23]:https://fossbytes.com/a-z-list-linux-command-line-reference/ From 174c861b3feae93c61fe8a32bfc315f78d387a3c Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 9 Dec 2017 15:05:40 +0800 Subject: [PATCH 145/236] =?UTF-8?q?20171209-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...eriences Every Linux Gamer Never Wanted.md | 159 ++++++++++++++++++ 1 file changed, 159 insertions(+) create mode 100644 sources/tech/20160922 Annoying Experiences Every Linux Gamer Never Wanted.md diff --git a/sources/tech/20160922 Annoying Experiences Every Linux Gamer Never Wanted.md b/sources/tech/20160922 Annoying Experiences Every Linux Gamer Never Wanted.md new file mode 100644 index 0000000000..8362f15d9b --- /dev/null +++ b/sources/tech/20160922 Annoying Experiences Every Linux Gamer Never Wanted.md @@ -0,0 +1,159 @@ +Annoying Experiences Every Linux Gamer Never Wanted! +============================================================ + + + [![Linux gamer's problem](https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg)][10] + +[Gaming on Linux][12] has come a long way. There are dedicated [Linux gaming distributions][13] now. But this doesn’t mean that gaming experience on Linux is as smooth as on Windows. 
+
+What are the obstacles that still need to be dealt with to ensure that we enjoy games as much as Windows users do?
+
+[Wine][14], [PlayOnLinux][15] and other similar tools are not always able to play every popular Windows game. In this article, I would like to discuss the various factors that must be dealt with in order to have the best possible Linux gaming experience.
+
+### #1 SteamOS is Open Source, Steam for Linux is NOT
+
+As stated on the [SteamOS page][16], even though SteamOS is open source, Steam for Linux continues to be proprietary. Had it also been open source, the amount of support from the open source community would have been tremendous! Since it is not, [the birth of Project Ascension was inevitable][17]:
+
+[video](https://youtu.be/07UiS5iAknA)
+
+Project Ascension is an open source game launcher designed to launch games that have been bought and downloaded from anywhere – they can be Steam games, [Origin games][18], Uplay games, games downloaded directly from game developer websites or from DVD/CD-ROMs.
+
+Here is how it all began: [Sharing The Idea][19] resulted in a very interesting discussion, with readers from all over the gaming community pitching in their own opinions and suggestions.
+
+### #2 Performance compared to Windows
+
+Getting Windows games to run on Linux is not always an easy task. But thanks to a feature called [CSMT][20] (command stream multi-threading), PlayOnLinux is now better equipped to deal with these performance issues, though it still has a long way to go to achieve Windows-level results.
+
+Native Linux support for games has not been great in past releases.
+
+Last year, it was reported that SteamOS performed [significantly worse][21] than Windows. Tomb Raider was released on SteamOS/Steam for Linux last year. However, benchmark results were [not on par][22] with performance on Windows.
+
+[video](https://youtu.be/nkWUBRacBNE)
+
+This was quite obviously due to the fact that the game had been developed with [DirectX][23] in mind and not [OpenGL][24].
+
+Tomb Raider is the [first Linux game that uses TressFX][25]. This video includes TressFX comparisons:
+
+[video](https://youtu.be/-IeY5ZS-LlA)
+
+Here is another interesting comparison which shows Wine+CSMT performing much better than the native Linux version itself on Steam! This is the power of Open Source!
+
+[Suggested read: A New Linux OS "OSu" Vying To Be Ubuntu Of Arch Linux World][26]
+
+[video](https://youtu.be/sCJkC6oJ08A)
+
+TressFX has been turned off in this case to avoid FPS loss.
+
+Here is another Linux vs Windows comparison for the recently released “[Life is Strange][27]” on Linux:
+
+[video](https://youtu.be/Vlflu-pIgIY)
+
+It’s good to know that [_Steam for Linux_][28] has begun to show performance improvements for this new Linux game.
+
+Before launching any game for Linux, developers should consider optimizing it, especially if it’s a DirectX game that requires DirectX-to-OpenGL translation. We really do hope that [Deus Ex: Mankind Divided on Linux][29] gets benchmarked well upon release. As it’s a DirectX game, we hope it’s being ported well for Linux. Here’s [what the Executive Game Director had to say][30].
+
+### #3 Proprietary NVIDIA Drivers
+
+[AMD’s support for Open Source][31] is definitely commendable when compared to [NVIDIA][32].
Though [AMD][33] driver support is [pretty good on Linux][34] now due to its better open source driver, NVIDIA graphics card owners will still have to use the proprietary NVIDIA drivers because of the limited capabilities of the open source version of NVIDIA’s graphics driver, called Nouveau.
+
+In the past, Linus Torvalds has also shared his thoughts on NVIDIA’s Linux support, calling it totally unacceptable:
+
+[video](https://youtu.be/O0r6Pr_mdio)
+
+You can watch the complete talk [here][35]. Although NVIDIA responded with [a commitment for better Linux support][36], the open source graphics driver continues to be as weak as before.
+
+### #4 Need for Uplay and Origin DRM support on Linux
+
+[video](https://youtu.be/rc96NFwyxWU)
+
+The above video describes how to install the [Uplay][37] DRM on Linux. The uploader also suggests that relying on Wine as the main way to run games and applications is not recommended on Linux. Rather, preference should be given to native applications instead.
+
+The following video is a guide about installing the [Origin][38] DRM on Linux:
+
+[video](https://youtu.be/ga2lNM72-Kw)
+
+Digital Rights Management software adds another layer to game execution, and hence it adds to the already challenging task of making a Windows game run well on Linux. So in addition to making the game execute, Wine has to take care of running the DRM software such as Uplay or Origin as well. It would have been great if, like Steam, Linux could have its own native versions of Uplay and Origin.
+
+[Suggested read: Linux Foundation Head Calls 2017 'Year of the Linux Desktop'... While Running Apple's macOS Himself][39]
+
+### #5 DirectX 11 support for Linux
+
+Even though we have tools on Linux to run Windows applications, every game comes with its own set of tweak requirements for it to be playable on Linux. Though there was an announcement about [DirectX 11 support for Linux][40] last year via CodeWeavers, it’s still a long way to go to make playing newly launched titles on Linux a possibility.
+
+Currently, you can [buy Crossover from Codeweavers][41] to get the best DirectX 11 support available. This [thread][42] on the Arch Linux forums clearly shows how much more effort is required to make this dream a possibility. Here is an interesting [find][43] from a [Reddit thread][44], which mentions Wine getting [DirectX 11 patches from Codeweavers][45]. Now that’s definitely some good news.
+
+### #6 Not all Steam games are available for Linux
+
+This is an important point to ponder, as Linux gamers continue to miss out on many major game releases, since most of them land on Windows. Here is a guide to [installing Steam for Windows games on Linux][46].
+
+### #7 Better support from video game publishers for OpenGL
+
+Currently, developers and publishers focus primarily on DirectX for video game development rather than OpenGL. Now that Steam is officially here for Linux, developers should start considering development in OpenGL as well.
+
+[Direct3D][47] is made solely for the Windows platform. The OpenGL API is an open standard, and implementations exist not only for Windows but for a wide variety of other platforms.
+
+Though quite an old article, [this valuable resource][48] shares a lot of thoughtful information on the realities of OpenGL and DirectX. The points made are very sensible and enlighten the reader about the facts, based on actual chronological events.
+
+Publishers who are launching their titles on Linux should keep in mind that developing the game natively for OpenGL is a much better deal than translating it from DirectX to OpenGL. If conversion has to be done, the translation must be well optimized and carefully looked into. There might be a delay in releasing the games, but it would definitely be worth the wait.
+
+Have more annoyances to share? Do let us know in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/linux-gaming-problems/
+
+作者:[Avimanyu Bandyopadhyay ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/avimanyu/
+[1]:https://itsfoss.com/author/avimanyu/
+[2]:https://itsfoss.com/linux-gaming-problems/#comments
+[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
+[4]:https://twitter.com/share?original_referer=/&text=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21&url=https://itsfoss.com/linux-gaming-problems/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=itsfoss2
+[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
+[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
+[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/linux-gaming-problems/&title=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21
+[8]:https://www.reddit.com/submit?url=https://itsfoss.com/linux-gaming-problems/&title=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21
+[9]:https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg
+[10]:https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg
+[11]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg&url=https://itsfoss.com/linux-gaming-problems/&is_video=false&description=Linux%20gamer%27s%20problem
+[12]:https://itsfoss.com/linux-gaming-guide/
+[13]:https://itsfoss.com/linux-gaming-distributions/
+[14]:https://itsfoss.com/use-windows-applications-linux/
+[15]:https://www.playonlinux.com/en/
+[16]:http://store.steampowered.com/steamos/
+[17]:http://www.ibtimes.co.uk/reddit-users-want-replace-steam-open-source-game-launcher-project-ascension-1498999
+[18]:https://www.origin.com/
+[19]:https://www.reddit.com/r/pcmasterrace/comments/33xcvm/we_hate_valves_monopoly_over_pc_gaming_why/
+[20]:https://github.com/wine-compholio/wine-staging/wiki/CSMT
+[21]:http://arstechnica.com/gaming/2015/11/ars-benchmarks-show-significant-performance-hit-for-steamos-gaming/
+[22]:https://www.gamingonlinux.com/articles/tomb-raider-benchmark-video-comparison-linux-vs-windows-10.7138
+[23]:https://en.wikipedia.org/wiki/DirectX
+[24]:https://en.wikipedia.org/wiki/OpenGL
+[25]:https://www.gamingonlinux.com/articles/tomb-raider-released-for-linux-video-thoughts-port-report-included-the-first-linux-game-to-use-tresfx.7124
+[26]:https://itsfoss.com/osu-new-linux/
+[27]:http://lifeisstrange.com/
+[28]:https://itsfoss.com/install-steam-ubuntu-linux/
+[29]:https://itsfoss.com/deus-ex-mankind-divided-linux/
+[30]:http://wccftech.com/deus-ex-mankind-divided-director-console-ports-on-pc-is-disrespectful/
+[31]:http://developer.amd.com/tools-and-sdks/open-source/
+[32]:http://nvidia.com/
+[33]:http://amd.com/
+[34]:http://www.makeuseof.com/tag/open-source-amd-graphics-now-awesome-heres-get/
+[35]:https://youtu.be/MShbP3OpASA
+[36]:https://itsfoss.com/nvidia-optimus-support-linux/
+[37]:http://uplay.com/
+[38]:http://origin.com/
+[39]:https://itsfoss.com/linux-foundation-head-uses-macos/
+[40]:http://www.pcworld.com/article/2940470/hey-gamers-directx-11-is-coming-to-linux-thanks-to-codeweavers-and-wine.html
+[41]:https://itsfoss.com/deal-run-windows-software-and-games-on-linux-with-crossover-15-66-off/
+[42]:https://bbs.archlinux.org/viewtopic.php?id=214771
+[43]:https://ghostbin.com/paste/sy3e2
+[44]:https://www.reddit.com/r/linux_gaming/comments/3ap3uu/directx_11_support_coming_to_codeweavers/
+[45]:https://www.codeweavers.com/about/blogs/caron/2015/12/10/directx-11-really-james-didnt-lie
+[46]:https://itsfoss.com/linux-gaming-guide/
+[47]:https://en.wikipedia.org/wiki/Direct3D
+[48]:http://blog.wolfire.com/2010/01/Why-you-should-use-OpenGL-and-not-DirectX

From 6f6a7557613874113d530ba9429f9db383b83564 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Dec 2017 15:12:13 +0800
Subject: [PATCH 146/236] =?UTF-8?q?20171205-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...NDARD RUNTIME USED BY MILLIONS OF USERS.md | 102 ++++++++++++++++++
 1 file changed, 102 insertions(+)
 create mode 100644 sources/tech/20171205 ANNOUNCING THE GENERAL AVAILABILITY OF CONTAINERD 1.0 THE INDUSTRY-STANDARD RUNTIME USED BY MILLIONS OF USERS.md

diff --git a/sources/tech/20171205 ANNOUNCING THE GENERAL AVAILABILITY OF CONTAINERD 1.0 THE INDUSTRY-STANDARD RUNTIME USED BY MILLIONS OF USERS.md b/sources/tech/20171205 ANNOUNCING THE GENERAL AVAILABILITY OF CONTAINERD 1.0 THE INDUSTRY-STANDARD RUNTIME USED BY MILLIONS OF USERS.md
new file mode 100644
index 0000000000..80fe739969
--- /dev/null
+++ b/sources/tech/20171205 ANNOUNCING THE GENERAL AVAILABILITY OF CONTAINERD 1.0 THE INDUSTRY-STANDARD RUNTIME USED BY MILLIONS OF USERS.md
@@ -0,0 +1,102 @@
+ANNOUNCING THE GENERAL AVAILABILITY OF CONTAINERD 1.0, THE INDUSTRY-STANDARD RUNTIME USED BY MILLIONS OF USERS
+============================================================
+
+Today, we’re pleased to announce that containerd (pronounced Con-Tay-Ner-D), an industry-standard runtime for building container solutions, has reached its 1.0 milestone. containerd has already been deployed in millions of systems in production today, making it the most widely adopted runtime and an essential upstream component of the Docker platform.
+
+Built to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes, containerd ensures users have a consistent dev-to-ops experience. From [Docker’s initial announcement][22] last year that it was spinning out its core runtime to [its donation to the CNCF][23] in March 2017, the containerd project has experienced significant growth and progress over the past 12 months.
+
+Within both the Docker and Kubernetes communities, there has been a significant uptick in contributions from independents and CNCF member companies alike, including Docker, Google, NTT, IBM, Microsoft, AWS, ZTE, Huawei and ZJU.
Similarly, the maintainers have been working to add key functionality to containerd. The initial containerd donation provided everything users need to ensure a seamless container experience, including methods for:
+
+* transferring container images,
+
+* container execution and supervision,
+
+* low-level local storage and network interfaces, and
+
+* the ability to work on Linux, Windows and other platforms.
+
+Additional work has been done to add even more powerful capabilities to containerd, including:
+
+* A complete storage and distribution system that supports both OCI and Docker image formats
+
+* A robust events system
+
+* A more sophisticated snapshot model to manage container filesystems
+
+These changes helped the team build out a smaller interface for the snapshotters, while still fulfilling the requirements needed from things like a builder. It also reduces the amount of code needed, making it much easier to maintain in the long run.
+
+The containerd 1.0 milestone comes after several months of testing both the alpha and beta versions, which enabled the team to implement many performance improvements. Some of these improvements include the creation of a stress testing system, improvements in garbage collection and shim memory usage.
+
+“In 2017 key functionality has been added to containerd to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes,” said Michael Crosby, Maintainer for containerd and engineer at Docker. “Since our announcement in December, we have been progressing the design of the project with the goal of making it easily embeddable in higher level systems to provide core container capabilities. We will continue to work with the community to create a runtime that’s lightweight yet powerful, balancing new functionality with the desire for code that is easy to support and maintain.”
+
+containerd is already being used by Kubernetes for its [cri-containerd project][24], which enables users to run Kubernetes clusters using containerd as the underlying runtime. containerd is also an essential upstream component of the Docker platform and is currently used by millions of end users. There is also strong alignment with other CNCF projects: containerd exposes an API using [gRPC][25] and exposes metrics in the [Prometheus][26] format. containerd also fully leverages the Open Container Initiative (OCI) runtime and image format specifications and the OCI reference implementation ([runC][27]), and will pursue OCI certification when it is available.
+
+Key milestones in the progress to 1.0 include:
+
+![containerd 1.0](https://i2.wp.com/blog.docker.com/wp-content/uploads/4f8d8c4a-6233-4d96-a0a2-77ed345bf42b-5.jpg?resize=720%2C405&ssl=1)
+
+Notable containerd facts and figures:
+
+* 1994 GitHub stars, 401 forks
+
+* 108 contributors
+
+* 8 maintainers from independents and member companies alike, including Docker, Google, IBM, ZTE and ZJU
+
+* 3030+ commits, 26 releases
+
+Availability and Resources
+
+To participate in containerd: [github.com/containerd/containerd][28]
+
+* Getting started with containerd: [http://mobyproject.org/blog/2017/08/15/containerd-getting-started/][8]
+
+* Roadmap: [https://github.com/containerd/containerd/blob/master/ROADMAP.md][1]
+
+* Scope table: [https://github.com/containerd/containerd#scope][2]
+
+* Architecture document: [https://github.com/containerd/containerd/blob/master/design/architecture.md][3]
+
+* APIs: [https://github.com/containerd/containerd/tree/master/api/][9].
+
+* Learn more about containerd at KubeCon by attending Justin Cormack’s [LinuxKit & Kubernetes talk at Austin Docker Meetup][10], Patrick Chanezon’s [Moby session][11], [Phil Estes’ session][12], or the [containerd salon][13]
+
+--------------------------------------------------------------------------------
+
+via: https://blog.docker.com/2017/12/cncf-containerd-1-0-ga-announcement/
+
+作者:[Patrick Chanezon ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://blog.docker.com/author/chanezon/
+[1]:https://github.com/docker/containerd/blob/master/ROADMAP.md
+[2]:https://github.com/docker/containerd#scope
+[3]:https://github.com/docker/containerd/blob/master/design/architecture.md
+[4]:http://www.linkedin.com/shareArticle?mini=true&url=http://dockr.ly/2ArQe3G&title=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users&summary=Today,%20we%E2%80%99re%20pleased%20to%20announce%20that%20containerd%20(pronounced%20Con-Tay-Ner-D),%20an%20industry-standard%20runtime%20for%20building%20container%20solutions,%20has%20reached%20its%201.0%20milestone.%20containerd%20has%20already%20been%20deployed%20in%20millions%20of%20systems%20in%20production%20today,%20making%20it%20the%20most%20widely%20adopted%20runtime%20and%20an%20essential%20upstream%20component%20of%20the%20Docker%20platform.%20Built%20...
+[5]:http://www.reddit.com/submit?url=http://dockr.ly/2ArQe3G&title=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users
+[6]:https://plus.google.com/share?url=http://dockr.ly/2ArQe3G
+[7]:http://news.ycombinator.com/submitlink?u=http://dockr.ly/2ArQe3G&t=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users
+[8]:http://mobyproject.org/blog/2017/08/15/containerd-getting-started/
+[9]:https://github.com/docker/containerd/tree/master/api/
+[10]:https://www.meetup.com/Docker-Austin/events/245536895/
+[11]:http://sched.co/CU6G
+[12]:https://kccncna17.sched.com/event/CU6g/embedding-the-containerd-runtime-for-fun-and-profit-i-phil-estes-ibm
+[13]:https://kccncna17.sched.com/event/Cx9k/containerd-salon-hosted-by-derek-mcgowan-docker-lantao-liu-google
+[14]:https://blog.docker.com/author/chanezon/
+[15]:https://blog.docker.com/tag/cloud-native-computing-foundation/
+[16]:https://blog.docker.com/tag/cncf/
+[17]:https://blog.docker.com/tag/container-runtime/
+[18]:https://blog.docker.com/tag/containerd/
+[19]:https://blog.docker.com/tag/cri-containerd/
+[20]:https://blog.docker.com/tag/grpc/
+[21]:https://blog.docker.com/tag/kubernetes/
+[22]:https://blog.docker.com/2016/12/introducing-containerd/
+[23]:https://blog.docker.com/2017/03/docker-donates-containerd-to-cncf/
+[24]:http://blog.kubernetes.io/2017/11/containerd-container-runtime-options-kubernetes.html
+[25]:http://www.grpc.io/
+[26]:https://prometheus.io/
+[27]:https://github.com/opencontainers/runc
+[28]:http://github.com/containerd/containerd

From 6214048e86a6868299cee8c3c97f8c82a51b0c94 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Dec 2017 15:14:57 +0800
Subject: [PATCH 147/236] =?UTF-8?q?20171209-4=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...l Writing for an International Audience.md | 81 +++++++++++++++++++
 1 file changed, 81 insertions(+)
 create mode 100644 sources/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md

diff --git a/sources/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md b/sources/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md
new file mode 100644
index 0000000000..1b3ab61fb4
--- /dev/null
+++ b/sources/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md
@@ -0,0 +1,81 @@
+5 Tips to Improve Technical Writing for an International Audience
+============================================================
+
+
+![documentation](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/typewriter-801921_1920.jpg?itok=faTXFNoE "documentation")
+Writing in English for an international audience takes work; here are some handy tips to remember. [Creative Commons Zero][2]
+
+Writing in English for an international audience does not necessarily put native English speakers in a better position. On the contrary, they tend to forget that the document's language might not be the first language of the audience. Let's have a look at the following simple sentence as an example: “Encrypt the password using the 'foo bar' command.”
+
+Grammatically, the sentence is correct. Given that "-ing" forms (gerunds) are frequently used in the English language, most native speakers would probably not hesitate to phrase a sentence like this. However, on closer inspection, the sentence is ambiguous: The word “using” may refer either to the object (“the password”) or to the verb (“encrypt”). Thus, the sentence can be interpreted in two different ways:
+
+* Encrypt the password that uses the 'foo bar' command.
+
+* Encrypt the password by using the 'foo bar' command.
+
+As long as you have previous knowledge about the topic (password encryption or the 'foo bar' command), you can resolve this ambiguity and correctly decide that the second reading is the intended meaning of this sentence. But what if you lack in-depth knowledge of the topic? What if you are not an expert but a translator with only general knowledge of the subject? Or, what if you are a non-native speaker of English who is unfamiliar with advanced grammatical forms?
+
+### Know Your Audience
+
+Even native English speakers may need some training to write clear and straightforward technical documentation. Raising awareness of usability and potential problems is the first step. This article, based on my talk at [Open Source Summit EU][5], offers several useful techniques. Most of them are useful not only for technical documentation but also for everyday written communication, such as writing email or reports.
+
+**1\. Change perspective.** Step into your audience's shoes. Step one is to know your intended audience. If you are a developer writing for end users, view the product from their perspective. The [persona technique][6] can help to focus on the target audience and to provide the right level of detail for your readers.
+
+**2\. Follow the KISS principle.** Keep it short and simple. The principle can be applied to several levels, like grammar, sentences, or words. Here are some examples:
+
+_Words:_ Uncommon and long words slow down reading and might be obstacles for non-native speakers. Use simpler alternatives:
+
+“utilize” → “use”
+
+“indicate” → “show”, “tell”, “say”
+
+“prerequisite” → “requirement”
+
+_Grammar:_ Use the simplest tense that is appropriate.
For example, use present tense when mentioning the result of an action: “Click _OK_. The _Printer Options_ dialog appears.”
+
+_Sentences:_ As a rule of thumb, present one idea in one sentence. However, restricting sentence length to a certain number of words is not useful in my opinion. Short sentences are not automatically easy to understand (especially if they are a cluster of nouns). Sometimes, trimming down sentences to a certain word count can introduce ambiguities, which can, in turn, make sentences even more difficult to understand.
+
+**3\. Beware of ambiguities.** As authors, we often do not notice ambiguity in a sentence. Having your texts reviewed by others can help identify such problems. If that's not an option, try to look at each sentence from different perspectives: Does the sentence also work for readers without in-depth knowledge of the topic? Does it work for readers with limited language skills? Is the grammatical relationship between all sentence parts clear? If the sentence does not meet these requirements, rephrase it to resolve the ambiguity.
+
+**4\. Be consistent.** This applies to choice of words, spelling, and punctuation as well as phrases and structure. For lists, use parallel grammatical construction. For example:
+
+Why white space is important:
+
+* It focuses attention.
+
+* It visually separates sections.
+
+* It splits content into chunks.
+
+**5\. Remove redundant content.** Keep only information that is relevant for your target audience. On a sentence level, avoid fillers (“basically”, “easily”) and unnecessary modifications:
+
+“already existing” → “existing”
+
+“completely new” → “new”
+
+As you might have guessed by now, writing is rewriting. Good writing requires effort and practice. But even if you write only occasionally, you can significantly improve your texts by focusing on the target audience and by using basic writing techniques. The better the readability of a text, the easier it is to process, even for an audience with varying language skills. When it comes to localization especially, good quality of the source text is important: garbage in, garbage out. If the original text has deficiencies, it will take longer to translate the text, resulting in higher costs. In the worst case, the flaws will be multiplied during translation and need to be corrected in various languages.
+
+
+![Tanja Roth](https://www.linux.com/sites/lcom/files/styles/floated_images/public/tanja-roth.jpg?itok=eta0fvZC "Tanja Roth")
+
+Tanja Roth, Technical Documentation Specialist at SUSE Linux GmbH [Used with permission][1]
+
+_Driven by an interest in both language and technology, Tanja has been working as a technical writer in mechanical engineering, medical technology, and IT for many years.
She joined SUSE in 2005 and contributes to a wide range of product and project documentation, including High Availability and Cloud topics._
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience?sf175396579=1
+
+作者:[TANJA ROTH ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/tanja-roth
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/creative-commons-zero
+[3]:https://www.linux.com/files/images/tanja-rothjpg
+[4]:https://www.linux.com/files/images/typewriter-8019211920jpg
+[5]:https://osseu17.sched.com/event/ByIW
+[6]:https://en.wikipedia.org/wiki/Persona_(user_experience)

From c0744035ef455570db822cf62f6abe752a8cdb96 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Dec 2017 15:17:23 +0800
Subject: [PATCH 148/236] =?UTF-8?q?20171209-5=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ... how to write basic udev rules in Linux.md | 210 ++++++++++++++++++
 1 file changed, 210 insertions(+)
 create mode 100644 sources/tech/20171204 Tutorial on how to write basic udev rules in Linux.md

diff --git a/sources/tech/20171204 Tutorial on how to write basic udev rules in Linux.md b/sources/tech/20171204 Tutorial on how to write basic udev rules in Linux.md
new file mode 100644
index 0000000000..e4f3d6f537
--- /dev/null
+++ b/sources/tech/20171204 Tutorial on how to write basic udev rules in Linux.md
@@ -0,0 +1,210 @@
+# Tutorial on how to write basic udev rules in Linux

+Contents
+
+* [1. Objective][4]
+
+* [2. Requirements][5]
+
+* [3. Difficulty][6]
+
+* [4. Conventions][7]
+
+* [5. Introduction][8]
+
+* [6. How rules are organized][9]
+
+* [7. The rules syntax][10]
+
+* [8. A test case][11]
+
+* [9. Operators][12]
+
+  * [9.1.1. == and != operators][1]
+
+  * [9.1.2. The assignment operators: = and :=][2]
+
+  * [9.1.3. The += and -= operators][3]
+
+* [10. The keys we used][13]
+
+### Objective
+
+Understand the base concepts behind udev and learn how to write simple rules.
+
+### Requirements
+
+* Root permissions
+
+### Difficulty
+
+MEDIUM
+
+### Conventions
+
+* **#** - requires the given command to be executed with root privileges, either directly as the root user or by use of the `sudo` command
+
+* **$** - given command to be executed as a regular non-privileged user
+
+### Introduction
+
+In a GNU/Linux system, while low-level device support is handled at the kernel level, the management of events related to those devices is handled in userspace by `udev`, and more precisely by the `udevd` daemon. Learning how to write rules that are applied when those events occur can be really useful for modifying the behavior of the system and adapting it to our needs.
+
+### How rules are organized
+
+Udev rules are defined in files with the `.rules` extension. There are two main locations in which those files can be placed: `/usr/lib/udev/rules.d`, the directory used for system-installed rules, and `/etc/udev/rules.d/`, which is reserved for custom-made rules.
+
+The files in which the rules are defined are conventionally named with a number as prefix (e.g. `50-udev-default.rules`) and are processed in lexical order, independently of the directory they are in. Files installed in `/etc/udev/rules.d`, however, override those with the same name installed in the system default path.
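+
+To see this layout on your own machine, you can simply list both directories. This is an illustrative aside rather than part of the original tutorial; the file names you see will depend on your distribution:
+
+```
+# system-installed rules; packages put their rules here, do not edit them:
+$ ls /usr/lib/udev/rules.d
+
+# custom rules; a file here with the same name as a system one overrides it:
+$ ls /etc/udev/rules.d
+```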
+
+### The rules syntax
+
+The syntax of udev rules is not very complicated once you understand the logic behind it. A rule is composed of two main sections: the "match" part, in which we define the conditions for the rule to be applied, using a series of keys separated by commas, and the "action" part, in which we perform some kind of action when those conditions are met.
+
+### A test case
+
+What better way to explain the possible options than to configure an actual rule? As an example, we are going to define a rule to disable the touchpad when a mouse is connected. Obviously, the attributes provided in the rule definition will reflect my hardware.
+
+We will write our rule in the `/etc/udev/rules.d/99-togglemouse.rules` file with the help of our favorite text editor. A rule definition can span multiple lines, but if that's the case, a backslash must be used before the newline character, as a line continuation, just as in shell scripts. Here is our rule:
+```
+ACTION=="add" \
+, ATTRS{idProduct}=="c52f" \
+, ATTRS{idVendor}=="046d" \
+, ENV{DISPLAY}=":0" \
+, ENV{XAUTHORITY}="/run/user/1000/gdm/Xauthority" \
+, RUN+="/usr/bin/xinput --disable 16"
+```
+Let's analyze it.
+
+### Operators
+
+First of all, an explanation of the operators used here and of the other possible ones:
+
+#### == and != operators
+
+The `==` is the equality operator and the `!=` is the inequality operator. By using them we establish that, for the rule to be applied, the defined keys must match, or not match, the defined value, respectively.
+
+#### The assignment operators: = and :=
+
+The `=` assignment operator is used to assign a value to keys that accept one. We use the `:=` operator, instead, when we want to assign a value and make sure that it is not overridden by other rules: values assigned with this operator, in fact, cannot be altered.
+
+#### The += and -= operators
+
+The `+=` and `-=` operators are used, respectively, to add a value to, or remove a value from, the list of values defined for a specific key.
+
+### The keys we used
+
+Let's now analyze the keys we used in the rule. First of all, we have the `ACTION` key: by using it, we specified that our rule is to be applied when a specific event happens for the device. Valid values are `add`, `remove` and `change`.
+
+We then used the `ATTRS` keyword to specify an attribute to be matched. We can list a device's attributes by using the `udevadm info` command, providing its name or `sysfs` path:
+```
+udevadm info -ap /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39
+
+Udevadm info starts with the device specified by the devpath and then
+walks up the chain of parent devices. It prints for every device
+found, all possible attributes in the udev rules key format.
+A rule to match, can be composed by the attributes of the device
+and the attributes from one single parent device.
+
+  looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39':
+    KERNEL=="input39"
+    SUBSYSTEM=="input"
+    DRIVER==""
+    ATTR{name}=="Logitech USB Receiver"
+    ATTR{phys}=="usb-0000:00:1d.0-1.2/input1"
+    ATTR{properties}=="0"
+    ATTR{uniq}==""
+
+  looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010':
+    KERNELS=="0003:046D:C52F.0010"
+    SUBSYSTEMS=="hid"
+    DRIVERS=="hid-generic"
+    ATTRS{country}=="00"
+
+  looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1':
+    KERNELS=="2-1.2:1.1"
+    SUBSYSTEMS=="usb"
+    DRIVERS=="usbhid"
+    ATTRS{authorized}=="1"
+    ATTRS{bAlternateSetting}==" 0"
+    ATTRS{bInterfaceClass}=="03"
+    ATTRS{bInterfaceNumber}=="01"
+    ATTRS{bInterfaceProtocol}=="00"
+    ATTRS{bInterfaceSubClass}=="00"
+    ATTRS{bNumEndpoints}=="01"
+    ATTRS{supports_autosuspend}=="1"
+
+  looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2':
+    KERNELS=="2-1.2"
+    SUBSYSTEMS=="usb"
+    DRIVERS=="usb"
+    ATTRS{authorized}=="1"
+    ATTRS{avoid_reset_quirk}=="0"
+    ATTRS{bConfigurationValue}=="1"
+    ATTRS{bDeviceClass}=="00"
+    ATTRS{bDeviceProtocol}=="00"
+    ATTRS{bDeviceSubClass}=="00"
+    ATTRS{bMaxPacketSize0}=="8"
+    ATTRS{bMaxPower}=="98mA"
+    ATTRS{bNumConfigurations}=="1"
+    ATTRS{bNumInterfaces}==" 2"
+    ATTRS{bcdDevice}=="3000"
+    ATTRS{bmAttributes}=="a0"
+    ATTRS{busnum}=="2"
+    ATTRS{configuration}=="RQR30.00_B0009"
+    ATTRS{devnum}=="12"
+    ATTRS{devpath}=="1.2"
+    ATTRS{idProduct}=="c52f"
+    ATTRS{idVendor}=="046d"
+    ATTRS{ltm_capable}=="no"
+    ATTRS{manufacturer}=="Logitech"
+    ATTRS{maxchild}=="0"
+    ATTRS{product}=="USB Receiver"
+    ATTRS{quirks}=="0x0"
+    ATTRS{removable}=="removable"
+    ATTRS{speed}=="12"
+    ATTRS{urbnum}=="1401"
+    ATTRS{version}==" 2.00"
+
+  [...]
+```
+Above is the truncated output received after running the command. As you can see from the output, `udevadm` starts with the path we specified and then gives us information about all the parent devices. Notice that attributes of the device are reported in singular form (e.g. `KERNEL`), while the parent ones are reported in plural form (e.g. `KERNELS`). The parent information can be part of a rule, but only one of the parents can be referenced at a time: mixing attributes of different parent devices will not work. In the rule we defined above, we used the attributes of one parent device: `idProduct` and `idVendor`.
+
+The next thing we did in our rule is use the `ENV` keyword: it can be used both to set and to try to match environment variables. We assigned a value to the `DISPLAY` and `XAUTHORITY` variables. Those variables are essential when interacting with the X server programmatically, to set up some needed information: with the `DISPLAY` variable, we specify on what machine the server is running and what display and screen we are referencing, and with `XAUTHORITY` we provide the path to the file which contains the Xorg authentication and authorization information. This file is usually located in the user's home directory.
+
+Finally, we used the `RUN` keyword: this is used to run external programs. Very important: the command is not executed immediately; rather, the various actions are executed once all the rules have been parsed. In this case we used the `xinput` utility to change the status of the touchpad. I will not explain the syntax of xinput here, as it would be out of context; just notice that `16` is the id of the touchpad (a short aside on finding this id on your own system follows below).
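+
+As an aside that goes beyond the original article: `xinput` itself can list the right id for your hardware, and `udevadm monitor` lets you watch the add/remove events fire as you plug the mouse in and out. The id `16` is just the value from my machine; yours will differ:
+
+```
+$ xinput list                            # prints every input device with its numeric id
+$ xinput --disable 16                    # manually disable the device with id 16
+$ xinput --enable 16                     # re-enable it
+# udevadm monitor --environment --udev   # watch events live while plugging/unplugging
+```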
+
+Once our rule is set, we can debug it by using the `udevadm test` command. This is useful for debugging, but it doesn't really run the commands specified via the `RUN` key:
+```
+$ udevadm test --action="add" /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39
+```
+What we provided to the command is the action to simulate, using the `--action` option, and the sysfs path of the device. If no errors are reported, our rule should be good to go. To run it in the real world, we must reload the rules:
+```
+# udevadm control --reload
+```
+This command will reload the rules files; however, it will take effect only on newly generated events.
+
+We have seen the basic concepts and logic used to create a udev rule; however, we only scratched the surface of the many options and possible settings. The udev manpage provides an exhaustive list: please refer to it for more in-depth knowledge.
+
+--------------------------------------------------------------------------------
+
+via: https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux
+
+作者:[Egidio Docile ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://disqus.com/by/egidiodocile/
+[1]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-1-and-operators
+[2]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-2-the-assignment-operators-and
+[3]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-3-the-and-operators
+[4]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h1-objective
+[5]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h2-requirements
+[6]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h3-difficulty
+[7]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h4-conventions
+[8]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h5-introduction
+[9]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h6-how-rules-are-organized
+[10]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h7-the-rules-syntax
+[11]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h8-a-test-case
+[12]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-operators
+[13]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h10-the-keys-we-used

From 1ba414f29fcb49ea5648753deb1a83c16f2fbeb7 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Dec 2017 15:20:36 +0800
Subject: [PATCH 149/236] =?UTF-8?q?20171209-6=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...an open source alternative to Evernote.md | 115 ++++++++++++++++++
 1 file changed, 115 insertions(+)
 create mode 100644 sources/tech/20171206 Getting started with Turtl an open source alternative to Evernote.md

diff --git a/sources/tech/20171206 Getting started with Turtl an open source alternative to Evernote.md b/sources/tech/20171206 Getting started with Turtl an open source alternative to Evernote.md
new file mode 100644
index 0000000000..18d7d12f82
--- /dev/null
+++ b/sources/tech/20171206 Getting started with Turtl an open source alternative to Evernote.md
@@ -0,0 +1,115 @@
+Getting started with Turtl, an open source alternative to Evernote
+============================================================
+
+### Turtl is a solid note-taking tool for users looking for an alternative to apps like Evernote and Google Keep.
+
+![Using Turtl as an open source alternative to Evernote](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_brainstorm_island_520px.png?itok=6IUPyxkY "Using Turtl as an open source alternative to Evernote")
+Image by : opensource.com
+
+Just about everyone I know takes notes, and many people use an online note-taking application like Evernote, Simplenote, or Google Keep. Those are all good tools, but you have to wonder about the security and privacy of your information—especially in light of [Evernote's privacy flip-flop of 2016][11]. If you want more control over your notes and your data, you really need to turn to an open source tool.
+
+Whatever your reasons for moving away from Evernote, there are open source alternatives out there. Let's look at one of those alternatives: Turtl.
+
+### Getting started
+
+The developers behind [Turtl][12] want you to think of it as "Evernote with ultimate privacy." To be honest, I can't vouch for the level of privacy that Turtl offers, but it is quite a good note-taking tool.
+
+To get started with Turtl, [download][13] a desktop client for Linux, Mac OS, or Windows, or grab the [Android app][14]. Install it, then fire up the client or app. You'll be asked for a username and passphrase. Turtl uses the passphrase to generate a cryptographic key that, according to the developers, encrypts your notes before storing them anywhere on your device or on their servers.
+
+### Using Turtl
+
+You can create the following types of notes with Turtl:
+
+* Password
+
+* File
+
+* Image
+
+* Bookmark
+
+* Text note
+
+No matter what type of note you choose, you create it in a window that's similar for all types of notes:
+
+![Create new text note with Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-new-note-520.png "Creating a new text note with Turtl")
+
+Creating a new text note in Turtl
+
+Add information like the title of the note, some text, and (if you're creating a File or Image note) attach a file or an image. Then click **Save**.
+
+You can add formatting to your notes via [Markdown][15]. You need to add the formatting by hand—there are no toolbar shortcuts.
+
+If you need to organize your notes, you can add them to **Boards**. Boards are just like notebooks in Evernote. To create a new board, click on the **Boards** tab, then click the **Create a board** button. Type a title for the board, then click **Create**.
+
+![Create new board in Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-boards-520.png "Creating a new Turtl board")
+
+Creating a new board in Turtl
+
+To add a note to a board, create or edit the note, then click the **This note is not in any boards** link at the bottom of the note. Select one or more boards, then click **Done**.
+
+To add tags to a note, click the **Tags** icon at the bottom of a note, enter one or more keywords separated by commas, and click **Done**.
+
+### Syncing your notes across your devices
+
+If you use Turtl across several computers and an Android device, for example, Turtl will sync your notes whenever you're online. However, I've encountered a small problem with syncing: Every so often, a note I've created on my phone doesn't sync to my laptop. I tried to sync manually by clicking the icon in the top left of the window and then clicking **Sync Now**, but that doesn't always work. I found that I occasionally need to click that icon, click **Your settings**, and then click **Clear local data**. I then need to log back into Turtl, but all the data syncs properly.
+
+### A question, and a couple of problems
+
+When I started using Turtl, I was dogged by one question: _Where are my notes kept online?_ It turns out that the developers behind Turtl are based in the U.S., and that's also where their servers are. Although the encryption that Turtl uses is [quite strong][16] and your notes are encrypted on the server, the paranoid part of me says that you shouldn't save anything sensitive in Turtl (or any online note-taking tool, for that matter).
+
+Turtl displays notes in a tiled view, reminiscent of Google Keep:
+
+![Notes in Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-notes-520.png "Collection of notes in Turtl")
+
+A collection of notes in Turtl
+
+There's no way to change that to a list view, either on the desktop or in the Android app. This isn't a problem for me, but I've heard some people pan Turtl because it lacks a list view.
+
+Speaking of the Android app, it's not bad; however, it doesn't integrate with the Android **Share** menu. If you want to add a note to Turtl based on something you've seen or read in another app, you need to copy and paste it manually.
+
+I've been using Turtl for several months on a Linux-powered laptop, my [Chromebook running GalliumOS][17], and an Android-powered phone. It's been a pretty seamless experience across all those devices. Although it's not my favorite open source note-taking tool, Turtl does a pretty good job. Give it a try; it might be the simple note-taking tool you're looking for.
+
+### About the author
+
+ [![That idiot Scott Nesbitt ...](https://opensource.com/sites/default/files/styles/profile_pictures/public/scottn-cropped.jpg?itok=q4T2J4Ai)][18]
+
+ Scott Nesbitt - I'm a long-time user of free/open source software, and write various things for both fun and profit. I don't take myself too seriously and I do all of my own stunts. You can find me at these fine establishments on the web: [Twitter][5], [Mastodon][6], [GitHub][7], and...
[more about Scott Nesbitt][8] [More about me][9]
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/12/using-turtl-open-source-alternative-evernote
+
+作者:[Scott Nesbitt ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/scottnesbitt
+[1]:https://opensource.com/file/378346
+[2]:https://opensource.com/file/378351
+[3]:https://opensource.com/file/378356
+[4]:https://opensource.com/article/17/12/using-turtl-open-source-alternative-evernote?rate=Kktl8DSEAXzwIGppn0PS4KuSpZv3Qbk0fuiilnplrnE
+[5]:http://www.twitter.com/ScottWNesbitt
+[6]:https://mastodon.social/@scottnesbitt
+[7]:https://github.com/ScottWNesbitt
+[8]:https://opensource.com/users/scottnesbitt
+[9]:https://opensource.com/users/scottnesbitt
+[10]:https://opensource.com/user/14925/feed
+[11]:https://blog.evernote.com/blog/2016/12/15/evernote-revisits-privacy-policy/
+[12]:https://turtlapp.com/
+[13]:https://turtlapp.com/download/
+[14]:https://turtlapp.com/download/
+[15]:https://en.wikipedia.org/wiki/Markdown
+[16]:https://turtlapp.com/docs/security/encryption-specifics/
+[17]:https://opensource.com/article/17/4/linux-chromebook-gallium-os
+[18]:https://opensource.com/users/scottnesbitt
+[19]:https://opensource.com/users/scottnesbitt
+[20]:https://opensource.com/users/scottnesbitt
+[21]:https://opensource.com/article/17/12/using-turtl-open-source-alternative-evernote#comments
+[22]:https://opensource.com/tags/alternatives

From 0a69496990fe3751c26a2471eca6d477b12c13e6 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Dec 2017 15:27:47 +0800
Subject: [PATCH 150/236] =?UTF-8?q?20171209-7=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 .../20171205 Ubuntu 18.04 – New Features.md   | 154 ++++++++++++++++++
 1 file changed, 154 insertions(+)
 create mode 100644 sources/tech/20171205 Ubuntu 18.04 – New Features.md

diff --git a/sources/tech/20171205 Ubuntu 18.04 – New Features.md b/sources/tech/20171205 Ubuntu 18.04 – New Features.md
new file mode 100644
index 0000000000..79fa22e1f6
--- /dev/null
+++ b/sources/tech/20171205 Ubuntu 18.04 – New Features.md
@@ -0,0 +1,154 @@
+Ubuntu 18.04 – New Features, Release Date & More
+============================================================
+
+
+We’ve all been waiting for it – the new LTS release of Ubuntu – 18.04\. Learn more about the new features, the release dates, and more.
+
+> Note: we’ll frequently update this article with new information, so bookmark this page and check back soon.
+
+### Basic information about Ubuntu 18.04
+
+Let’s start with some basic information.
+
+* It’s a new LTS (Long Term Support) release. So you get 5 years of support for both the desktop and server version.
+
+* Named “Bionic Beaver”. The founder of Canonical, Mark Shuttleworth, explained the meaning behind the name. The mascot is a Beaver because it’s energetic, industrious, and an awesome engineer – which perfectly describes a typical Ubuntu user, and the new Ubuntu release itself. The “Bionic” adjective is due to the increased number of robots that run on the Ubuntu Core.
+
+### Ubuntu 18.04 Release Dates & Schedule
+
+If you’re new to Ubuntu, you may not be familiar with what the version numbers actually mean. It’s the year and month of the official release. So Ubuntu’s 18.04 official release will be in the 4th month of the year 2018.
Ubuntu 17.10 was released in 2017, in the 10th month of the year.
+
+To go into further details, here are the important dates you need to know about Ubuntu 18.04 LTS:
+
+* November 30th, 2017 – Feature Definition Freeze.
+
+* January 4th, 2018 – First Alpha release. So if you opted-in to receive new Alpha releases, you’ll get the Alpha 1 update on this date.
+
+* February 1st, 2018 – Second Alpha release.
+
+* March 1st, 2018 – Feature Freeze. No new features will be introduced or released. So the development team will only work on improving existing features and fixing bugs. With exceptions, of course. If you’re not a developer or an experienced user, but would still like to try the new Ubuntu ASAP, then I’d personally recommend starting with this release.
+
+* March 8th, 2018 – First Beta release. If you opted-in for receiving Beta updates, you’ll get your update on this day.
+
+* March 22nd, 2018 – User Interface Freeze. It means that no further changes or updates will be done to the actual user interface, so if you write documentation, [tutorials][1], and use screenshots, it’s safe to start then.
+
+* March 29th, 2018 – Documentation String Freeze. There won’t be any edits or new stuff (strings) added to the documentation, so translators can start translating the documentation.
+
+* April 5th, 2018 – Final Beta release. This is also a good day to start using the new release.
+
+* April 19th, 2018 – Final Freeze. Everything’s pretty much done now. Images for the release are created and distributed, and will likely not have any changes.
+
+* April 26th, 2018 – Official, Final release of Ubuntu 18.04\. Everyone should start using it starting this day, even on production servers. We recommend getting an Ubuntu 18.04 server from [Vultr][2] and testing out the new features. Servers at [Vultr][3] start at $2.5 per month.
+
+### What’s New in Ubuntu 18.04
+
+All the new features in Ubuntu 18.04 LTS:
+
+### Color emojis are now supported
+
+With previous versions, Ubuntu only supported monochrome (black and white) emojis, which, quite frankly, didn’t look so good. Ubuntu 18.04 will support colored emojis by using the [Noto Color Emoji font][7]. With 18.04, you can view and add color emojis with ease everywhere. They are supported natively – so you can use them without third-party apps or installing/configuring anything extra. You can always disable the color emojis by removing the font.
+
+### GNOME desktop environment
+
+ [![ubuntu 17.10 gnome](https://thishosting.rocks/wp-content/uploads/2017/12/ubuntu-17-10-gnome.jpg.webp)][8]
+
+Ubuntu started using the GNOME desktop environment with Ubuntu 17.10 instead of the default Unity environment. Ubuntu 18.04 will continue using GNOME. This is a major change to Ubuntu.
+
+### Ubuntu 18.04 Desktop will have a new default theme
+
+Ubuntu 18.04 is saying goodbye to the old ‘Ambiance’ default theme with a new GTK theme. If you want to help with the new theme, or to check out some screenshots and more, go [here][9].
+
+As of now, there is speculation that Suru will be the [new default icon theme][10] for Ubuntu 18.04\. Here’s a screenshot:
+
+ [![suru icon theme ubuntu 18.04](https://thishosting.rocks/wp-content/uploads/2017/12/suru-icon-theme-ubuntu-18-04.jpg.webp)][11]
+
+> Worth noting: all new features in Ubuntu 16.10, 17.04, and 17.10 will roll through to Ubuntu 18.04\. So updates like window buttons to the right, a better login screen, improved Bluetooth support etc. will roll out to Ubuntu 18.04\.
We won’t include a special section for those since they’re not really new to Ubuntu 18.04 itself. If you want to learn more about all the changes from 16.04 to 18.04, Google them for each version in between.
+
+### Download Ubuntu 18.04
+
+First off, if you’re already using Ubuntu, you can just upgrade to Ubuntu 18.04.
+
+If you need to download Ubuntu 18.04:
+
+Go to the [official Ubuntu download page][12] after the final release.
+
+For the daily builds (alpha, beta, and non-final releases), go [here][13].
+
+### FAQs
+
+Now for some of the frequently asked questions (with answers) that should give you more information about all of this.
+
+### When is it safe to switch to Ubuntu 18.04?
+
+On the official final release date, of course. But if you can’t wait, start using the desktop version on March 1st, 2018, and start testing out the server version on April 5th, 2018\. But for you to truly be “safe”, you’ll need to wait for the final release – maybe even longer, so that the third-party services and apps you are using have been tested and work well on the new release.
+
+### How do I upgrade my server to Ubuntu 18.04?
+
+It’s a fairly simple process but has huge potential risks. We may publish a tutorial sometime in the near future, but you’ll basically need to use ‘do-release-upgrade’ (a minimal sketch of the commands appears at the end of this FAQ section). Again, upgrading your server has potential risks, and if you’re on a production server, I’d think twice before upgrading. Especially if you’re on 16.04, which has a few years of support left.
+
+### How can I help with Ubuntu 18.04?
+
+Even if you’re not an experienced developer and Ubuntu user, you can still help by:
+
+* Spreading the word. Let people know about Ubuntu 18.04\. A simple share on social media helps a bit too.
+
+* Using and testing the release. Start using the release and test it. Again, you don’t have to be a developer. You can still find and report bugs, or send feedback.
+
+* Translating. Join the translating teams and start translating documentation and/or applications.
+
+* Helping other people. Join some online Ubuntu communities and help others with issues they’re having with Ubuntu 18.04\. Sometimes people need help with simple stuff like “where can I download Ubuntu?”
+
+### What does Ubuntu 18.04 mean for other distros like Lubuntu?
+
+All distros that are based on Ubuntu will have similar new features and a similar release schedule. You’ll need to check your distro’s official website for more information.
+
+### Is Ubuntu 18.04 an LTS release?
+
+Yes, Ubuntu 18.04 is an LTS (Long Term Support) release, so you’ll get support for 5 years.
+
+### Can I switch from Windows/OS X to Ubuntu 18.04?
+
+Of course! You’ll most likely experience a performance boost too. Switching from a different OS to Ubuntu is fairly easy; there are quite a lot of tutorials for doing that. You can even set up a dual-boot where you’ll be using multiple OSes, so you can use both Windows and Ubuntu 18.04.
+
+### Can I try Ubuntu 18.04 without installing it?

+Sure. You can use something like [VirtualBox][14] to create a “virtual desktop” – you can install it on your local machine and use Ubuntu 18.04 without actually installing Ubuntu.
+
+Or you can try an Ubuntu 18.04 server at [Vultr][15] for $2.5 per month. It’s essentially free if you use some [free credits][16].
+
+### Why can’t I find a 32-bit version of Ubuntu 18.04?
+
+Because there is no 32-bit version. Ubuntu dropped 32-bit versions with its 17.10 release. If you’re using old hardware, you’re better off using a different [lightweight Linux distro][17] instead of Ubuntu 18.04 anyway.
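+
+As promised in the upgrade question above, here is a minimal sketch of what the server upgrade boils down to. Treat it as an assumption-laden outline rather than the full tutorial: back up your data first and try it on a non-production machine:
+
+```
+sudo apt update && sudo apt upgrade   # bring the current release fully up to date
+sudo do-release-upgrade               # start the interactive release upgrade
+```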
### How can I help with Ubuntu 18.04?

Even if you’re not an experienced developer and Ubuntu user, you can still help by:

* Spreading the word. Let people know about Ubuntu 18.04\. A simple share on social media helps a bit too.

* Using and testing the release. Start using the release and test it. Again, you don’t have to be a developer. You can still find and report bugs, or send feedback.

* Translating. Join the translating teams and start translating documentation and/or applications.

* Helping other people. Join some online Ubuntu communities and help others with issues they’re having with Ubuntu 18.04\. Sometimes people need help with simple stuff like “where can I download Ubuntu?”

### What does Ubuntu 18.04 mean for other distros like Lubuntu?

All distros that are based on Ubuntu will have similar new features and a similar release schedule. You’ll need to check your distro’s official website for more information.

### Is Ubuntu 18.04 an LTS release?

Yes, Ubuntu 18.04 is an LTS (Long Term Support) release, so you’ll get support for 5 years.

### Can I switch from Windows/OS X to Ubuntu 18.04?

Of course! You’ll most likely experience a performance boost too. Switching from a different OS to Ubuntu is fairly easy, and there are quite a lot of tutorials for doing it. You can even set up a dual-boot, where you’ll be using multiple OSes, so you can use both Windows and Ubuntu 18.04.

### Can I try Ubuntu 18.04 without installing it?

Sure. You can use something like [VirtualBox][14] to create a virtual machine – you can run Ubuntu 18.04 inside it without actually installing Ubuntu on your system.

Or you can try an Ubuntu 18.04 server at [Vultr][15] for $2.5 per month. It’s essentially free if you use some [free credits][16].

### Why can’t I find a 32-bit version of Ubuntu 18.04?

Because there is no 32-bit version. Ubuntu dropped 32-bit versions with its 17.10 release. If you’re using old hardware, you’re better off using a different [lightweight Linux distro][17] instead of Ubuntu 18.04 anyway.

### Any other questions?

Leave a comment below! Share your thoughts, we’re super excited and we’re gonna update this article as soon as new information comes in. Stay tuned and be patient!

--------------------------------------------------------------------------------

via: https://thishosting.rocks/ubuntu-18-04-new-features-release-date/

作者:[ thishosting.rocks][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:thishosting.rocks
[1]:https://thishosting.rocks/category/knowledgebase/
[2]:https://thishosting.rocks/go/vultr/
[3]:https://thishosting.rocks/go/vultr/
[4]:https://thishosting.rocks/category/knowledgebase/
[5]:https://thishosting.rocks/tag/ubuntu/
[6]:https://thishosting.rocks/2017/12/05/
[7]:https://www.google.com/get/noto/help/emoji/
[8]:https://thishosting.rocks/wp-content/uploads/2017/12/ubuntu-17-10-gnome.jpg
[9]:https://community.ubuntu.com/t/call-for-participation-an-ubuntu-default-theme-lead-by-the-community/1545
[10]:http://www.omgubuntu.co.uk/2017/11/suru-default-icon-theme-ubuntu-18-04-lts
[11]:https://thishosting.rocks/wp-content/uploads/2017/12/suru-icon-theme-ubuntu-18-04.jpg
[12]:https://www.ubuntu.com/download
[13]:http://cdimage.ubuntu.com/daily-live/current/
[14]:https://www.virtualbox.org/
[15]:https://thishosting.rocks/go/vultr/
[16]:https://thishosting.rocks/vultr-coupons-for-2017-free-credits-and-more/
[17]:https://thishosting.rocks/best-lightweight-linux-distros/

From ba1962dc5886757185ba4907635a7f4058451f50 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Dec 2017 15:35:22 +0800
Subject: [PATCH 151/236] =?UTF-8?q?20171209-9=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...xtensions You Should Be Using Right Now.md | 307 ++++++++++++++++++
 1 file changed, 307 insertions(+)
 create mode 100644 sources/tech/20171203 Top 20 GNOME Extensions You Should Be Using Right Now.md

diff --git a/sources/tech/20171203 Top 20 GNOME Extensions You Should Be Using Right Now.md b/sources/tech/20171203 Top 20 GNOME Extensions You Should Be Using Right Now.md
new file mode 100644
index 0000000000..60b188780f
--- /dev/null
+++ b/sources/tech/20171203 Top 20 GNOME Extensions You Should Be Using Right Now.md
@@ -0,0 +1,307 @@
Top 20 GNOME Extensions You Should Be Using Right Now
============================================================

 _Brief: You can enhance the capacity of your GNOME desktop with extensions. Here, we list the best GNOME extensions to save you the trouble of finding them on your own._

[GNOME extensions][9] are a major part of the [GNOME][10] experience. These extensions add a lot of value to the ecosystem, whether it is to mold the GNOME Desktop Environment (DE) to your workflow, to add more functionality than there is by default, or simply to freshen up the experience.

With default [Ubuntu 17.10][11] switching from [Unity to GNOME][12], now is the time to familiarize yourself with the various extensions that the GNOME community has to offer. We already showed you [how to enable and manage GNOME extensions][13]. But finding good extensions can be a daunting task. That’s why I created this list of the best GNOME extensions to save you some trouble.
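Before diving into the list: if you have not set extensions up yet, the usual route on Ubuntu 17.10 is the GNOME Tweak Tool plus the browser connector. A rough sketch, assuming the package names in the Ubuntu repositories (they may differ on other distributions):

```
# The Tweak Tool toggles extensions on and off; chrome-gnome-shell is the
# connector that lets extensions.gnome.org install them from your browser.
sudo apt install gnome-tweak-tool chrome-gnome-shell
```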
### Best GNOME Extensions

![Best GNOME Extensions for Ubuntu](https://itsfoss.com/wp-content/uploads/2017/12/Best-GNOME-Extensions-800x450.jpg)

The list is in alphabetical order, but there is no ranking involved here. The extension in the number 1 position is not better than the rest of the extensions.

### 1\. Appfolders Management extension

One of the major features that I think GNOME is missing is the ability to organize the default application grid. This is something included by default in [KDE][14]‘s Application Dashboard, in [Elementary OS][15]‘s Slingshot Launcher, and even in macOS, yet as of [GNOME 3.26][16] it isn’t something that comes baked in. The Appfolders Management extension changes that.

This extension gives the user an easy way to organize their applications into various folders with a simple right click > add to folder. Creating folders and adding applications to them is not only simple through this extension, but it feels so natively implemented that you will wonder why this isn’t built into the default GNOME experience.

![](https://itsfoss.com/wp-content/uploads/2017/11/folders-300x225.jpg)

[Appfolders Management extension][17]

### 2\. Apt Update Indicator

For distributions that utilize [Apt as their package manager][18], such as Ubuntu or Debian, the Apt Update Indicator extension allows for a more streamlined update experience in GNOME.

The extension settles into your top bar and notifies the user of updates waiting on their system. It also displays recently added repos, residual config files, and files that are auto-removable, and allows the user to manually check for updates, all in one basic drop-down menu.

It is a simple extension that adds an immense amount of functionality to any system.

![](https://itsfoss.com/wp-content/uploads/2017/11/Apt-Update-300x185.jpg)

[Apt Update Indicator][19]

### 3\. Auto Move Windows

If, like me, you utilize multiple virtual desktops, then this extension will make your workflow much easier. Auto Move Windows allows you to set your applications to automatically open on a virtual desktop of your choosing. It is as simple as adding an application to the list and selecting the desktop you would like that application to open on.

From then on, every time you open that application, it will open on that desktop. As soon as you log in to your computer, all you have to do is open the application, and it immediately opens where you want it, without you having to move it around manually every time before you can get to work.

![](https://itsfoss.com/wp-content/uploads/2017/11/auto-move-300x225.jpg)

[Auto Move Windows][20]

### 4\. Caffeine

Caffeine allows the user to keep their computer screen from auto-suspending at the flip of a switch. The coffee-mug-shaped extension icon embeds itself into the right side of your top bar and, with a click, shows that your computer is “caffeinated” with a subtle addition of steam to the mug and a notification.

The same applies when you turn Caffeine off, enabling auto-suspend and/or the screensaver again. It’s incredibly simple to use and works just as you would expect.

Caffeine Disabled:
![](https://itsfoss.com/wp-content/uploads/2017/11/caffeine-enabled-300x78.jpg)

Caffeine Enabled:
![](https://itsfoss.com/wp-content/uploads/2017/11/caffeine-disabled-300x75.jpg)

[Caffeine][21]
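If you are curious what Caffeine is doing for you, the manual equivalent is flipping GNOME's idle timeout with gsettings. A rough sketch (the schema key is valid as of GNOME 3.26; your defaults may differ):

```
# What Caffeine automates: stop the screen from blanking...
gsettings set org.gnome.desktop.session idle-delay 0

# ...and restore the default 5-minute timeout afterwards.
gsettings set org.gnome.desktop.session idle-delay 300
```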
### 5\. CPU Power Management [Only for Intel CPUs]

This is an extension that, at first, I didn’t think would be very useful, but after some time using it, I have found that functionality like this should be baked into all computers by default. At least all laptops. CPU Power Management allows you to choose how much of your computer’s resources are being used at any given time.

Its simple drop-down menu allows the user to change between various preset or user-made profiles that control at what frequency your CPU is to run. For example, you can set your CPU to the “Quiet” preset, which tells your computer to only use a maximum of 30% of its resources in this case.

On the other hand, you can set it to the “High Performance” preset to allow your computer to run at full potential. This comes in handy if you have loud fans and want to minimize the amount of noise they make, or if you just need to save some battery life.

One thing to note is that  _this only works on computers with an Intel CPU_ , so keep that in mind.

![](https://itsfoss.com/wp-content/uploads/2017/11/CPU-300x194.jpg)

[CPU Power Management][22]

### 6\. Clipboard Indicator

Clipboard Indicator is a clean and simple clipboard management tool. The extension sits in the top bar and caches your recent clipboard history (things you copy and paste). It will continue to save this information until the user clears the extension’s history.

If you know that you are about to work with information that you don’t want saved in this way, like credit card numbers or any of your personal information, Clipboard Indicator offers a private mode that the user can toggle on and off for such cases.

![](https://itsfoss.com/wp-content/uploads/2017/11/clipboard-300x200.jpg)

[Clipboard Indicator][23]

### 7\. Extensions

The Extensions extension allows the user to enable/disable other extensions and to access their settings from one single extension. Extensions sits either next to your other icons and extensions in the panel or in the user drop-down menu.

Redundancies aside, Extensions is a great way to gain easy access to all your extensions without the need to open up the GNOME Tweak Tool to do so.

![](https://itsfoss.com/wp-content/uploads/2017/11/extensions-300x185.jpg)

[Extensions][24]

### 8\. Frippery Move Clock

For those of us who are used to having the clock to the right of the panel in Unity, this extension does the trick. Frippery Move Clock moves the clock from the middle of the top panel to the right side. It takes the calendar and notification window with it but does not migrate the notifications themselves. We have another application later in this list, Panel OSD, that can bring your notifications over to the right as well.

Before Frippery:
![](https://itsfoss.com/wp-content/uploads/2017/11/before-move-clock-300x19.jpg)

After Frippery:
![](https://itsfoss.com/wp-content/uploads/2017/11/after-move-clock-300x19.jpg)

[Frippery Move Clock][25]

### 9\. Gno-Menu

Gno-Menu brings a more traditional menu to the GNOME DE. Not only does it add an applications menu to the top panel, but it also brings a ton of functionality and customization with it. If you are used to using the Applications Menu extension traditionally found in GNOME but don’t want the bugs and issues that Ubuntu 17.10 brought to it, Gno-Menu is an awesome alternative.

![](https://itsfoss.com/wp-content/uploads/2017/11/Gno-Menu-300x169.jpg)

[Gno-Menu][26]
### 10\. User Themes

User Themes is a must for anyone looking to customize their GNOME desktop. By default, GNOME Tweaks lets its users change the theme of the applications themselves, icons, and cursors, but not the theme of the shell. User Themes fixes that by enabling us to change the theme of GNOME Shell, allowing us to get the most out of our customization experience. Check out our [video][27] or read our article to know how to [install new themes][28].

User Themes Off:
![](https://itsfoss.com/wp-content/uploads/2017/11/user-themes-off-300x141.jpg)
User Themes On:
![](https://itsfoss.com/wp-content/uploads/2017/11/user-themes-on-300x141.jpg)

[User Themes][29]

### 11\. Hide Activities Button

Hide Activities Button does exactly what you would expect. It hides the Activities button found at the leftmost corner of the top panel. This button traditionally activates the activities overview in GNOME, but plenty of people use the Super key on the keyboard to do the same thing.

Though this disables the button itself, it does not disable the hot corner. Since Ubuntu 17.10 offers the ability to shut off the hot corner in the native settings application, this is not a huge deal for Ubuntu users. For other distributions, there are plenty of other ways to disable the hot corner if you so desire, which we will not cover in this particular article.

Before: ![](https://itsfoss.com/wp-content/uploads/2017/11/activies-present-300x15.jpg) After:
![](https://itsfoss.com/wp-content/uploads/2017/11/activities-removed-300x15.jpg)

#### [Hide Activities Button][30]

### 12\. MConnect

MConnect offers a way to seamlessly integrate the [KDE Connect][31] application within the GNOME desktop. Though KDE Connect offers a way for users to connect their Android handsets with virtually any Linux DE, its indicator lacks a good way to integrate seamlessly into any DE other than [Plasma][32].

MConnect fixes that, giving the user a straightforward drop-down menu that allows them to send SMS messages, locate their phone, browse their phone’s file system, and send files to their phone from the desktop. Though I had to do some tweaking to get MConnect to work just as I would expect it to, I couldn’t be any happier with the extension.

Do remember that you will need KDE Connect installed alongside MConnect in order to get it to work.

![](https://itsfoss.com/wp-content/uploads/2017/11/MConenct-300x174.jpg)

[MConnect][33]
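Getting the KDE Connect side installed is a one-liner on Debian-based systems. The package name here is the one in the Ubuntu archive; check your distribution's repositories if it differs:

```
# MConnect talks to the daemon this package provides.
sudo apt install kdeconnect
```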
### 13\. OpenWeather

OpenWeather adds an extension to the panel that gives the user weather information at a glance. It is customizable, it lets the user view weather information for whatever location they want, and it doesn’t rely on the computer’s location services. OpenWeather gives the user the choice between [OpenWeatherMap][34] and [Dark Sky][35] to provide the weather information that is to be displayed.

![](https://itsfoss.com/wp-content/uploads/2017/11/OpenWeather-300x147.jpg)

[OpenWeather][36]

### 14\. Panel OSD

This is the extension I mentioned earlier which allows the user to customize the location in which their desktop notifications appear on the screen. Not only does this allow the user to move their notifications over to the right, but Panel OSD gives the user the option to put their notifications literally anywhere they want on the screen. But for us migrating from Unity to GNOME, switching the notifications from the top middle to the top right may make us feel more at home.

Before:
![](https://itsfoss.com/wp-content/uploads/2017/11/osd1-300x40.jpg)

After:
![](https://itsfoss.com/wp-content/uploads/2017/11/osd-300x36.jpg)

#### [Panel OSD][37]

### 15\. Places Status Indicator

Places Status Indicator has been a recommended extension for as long as people have been recommending extensions. Places adds a drop-down menu to the panel that gives the user quick access to various areas of the file system, from the home directory, to servers your computer has access to, and anywhere in between.

The convenience and usefulness of this extension become more apparent as you use it, as it becomes a fundamental way you navigate your system. I couldn’t recommend it highly enough.

![](https://itsfoss.com/wp-content/uploads/2017/11/Places-288x300.jpg)

[Places Status Indicator][38]

### 16\. Refresh Wifi Connections

One minor annoyance in GNOME is that the Wi-Fi Networks dialog box does not have a refresh button on it when you are trying to connect to a new Wi-Fi network. Instead, it makes the user wait while the system automatically refreshes the list. Refresh Wifi Connections fixes this. It simply adds that desired refresh button to the dialog box, adding functionality that really should be included out of the box.

Before:
![](https://itsfoss.com/wp-content/uploads/2017/11/refresh-before-292x300.jpg)

After:
![](https://itsfoss.com/wp-content/uploads/2017/11/Refresh-after-280x300.jpg)

#### [Refresh Wifi Connections][39]

### 17\. Remove Dropdown Arrows

The Remove Dropdown Arrows extension removes the arrows on the panel that signify when an icon has a drop-down menu that you can interact with. This is purely an aesthetic tweak and isn’t always necessary, as some themes remove these arrows by default. But themes such as [Numix][40], which happens to be my personal favorite, don’t remove them.

Remove Dropdown Arrows brings that clean look to the GNOME Shell by removing some unneeded clutter. The only bug I have encountered is that the CPU Power Management extension I mentioned earlier will randomly “respawn” the drop-down arrow. To turn it back off, I have to disable Remove Dropdown Arrows and then enable it again, until once more it randomly reappears out of nowhere.

Before:
![](https://itsfoss.com/wp-content/uploads/2017/11/remove-arrows-before-300x17.jpg)

After:
![](https://itsfoss.com/wp-content/uploads/2017/11/remove-arrows-after-300x14.jpg)

[Remove Dropdown Arrows][41]

### 18\. Status Area Horizontal Spacing

This is another extension that is purely aesthetic and is only “necessary” in certain themes. Status Area Horizontal Spacing allows the user to control the amount of space between the icons in the status bar. If you think your status icons are too close together or too spaced out, then this extension has you covered. Just select the padding you would like and you’re set.

Maximum Spacing:
![](https://itsfoss.com/wp-content/uploads/2017/11/spacing-2-300x241.jpg)

Minimum Spacing:
![](https://itsfoss.com/wp-content/uploads/2017/11/spacing-300x237.jpg)

#### [Status Area Horizontal Spacing][42]

### 19\. Steal My Focus

By default, when you open an application in GNOME, it will sometimes stay behind what you have open if a different application has focus. GNOME then notifies you that the application you selected has opened, and it is up to you to switch over to it. But, in my experience, this isn’t always consistent.
Certain applications seem to jump to the front when opened, while the rest rely on you to see the notifications to know they have opened.

Steal My Focus changes that by removing the notification and immediately giving focus to the application you just opened. Because of this inconsistency, it was difficult for me to get a screenshot, so you just have to trust me on this one. ;)

#### [Steal My Focus][43]

### 20\. Workspaces to Dock

This extension changed the way I use GNOME. Period. It allows me to be more productive and aware of my virtual desktops, making for a much better user experience. Workspaces to Dock allows the user to customize their overview workspaces by turning them into an interactive dock.

You can customize its look, size, functionality, and even position. It can be used purely for aesthetics, but I think the real gold is using it to make the workspaces more fluid, functional, and consistent with the rest of the UI.

![](https://itsfoss.com/wp-content/uploads/2017/11/Workspaces-to-dock-300x169.jpg)

[Workspaces to Dock][44]

### Honorable Mentions: Dash to Dock and Dash to Panel

Dash to Dock and Dash to Panel are not included in the official 20 extensions of this article for one main reason: Ubuntu Dock. Both extensions allow the user to make the GNOME Dash either a dock or a panel, respectively, and add more customization than comes by default.

The problem is that to get the full functionality of these two extensions you will need to jump through some hoops to disable Ubuntu Dock, which I won’t outline in this article. We acknowledge that not everyone will be using Ubuntu 17.10, so for those of you who aren’t, this may not apply to you. That being said, both of these extensions are great and are included among some of the most popular GNOME extensions you will find.

Currently, there is a “bug” in Dash to Dock whereby changing its settings, even with the extension disabled, applies the changes to the Ubuntu Dock as well. I say “bug” because I actually use this myself to customize Ubuntu Dock without the need for the extension to be activated. This may get patched in the future, but until then consider that a free tip.

###    [Dash to Dock][45]     [Dash to Panel][46]

So there you have it, our top 20 GNOME extensions you should try right now. Which of these extensions do you particularly like? Which do you dislike? Let us know in the comments below, and don’t be afraid to say something if there is anything you think we missed.

### About Phillip Prado

Phillip Prado is an avid follower of all things tech, culture, and art. Not only is he an all-around geek, he has a BA in cultural communications and considers himself a serial hobbyist. He loves hiking, cycling, poetry, video games, and movies. But no matter what his passions are, there is only one thing he loves more than Linux and FOSS: coffee. You can find him (nearly) everywhere on the web as @phillipprado.
+-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-gnome-extensions/ + +作者:[ Phillip Prado][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/phillip/ +[1]:https://itsfoss.com/author/phillip/ +[2]:https://itsfoss.com/best-gnome-extensions/#comments +[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fbest-gnome-extensions%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[4]:https://twitter.com/share?original_referer=/&text=Top+20+GNOME+Extensions+You+Should+Be+Using+Right+Now&url=https://itsfoss.com/best-gnome-extensions/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=phillipprado +[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-gnome-extensions%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-gnome-extensions%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/best-gnome-extensions/&title=Top+20+GNOME+Extensions+You+Should+Be+Using+Right+Now +[8]:https://www.reddit.com/submit?url=https://itsfoss.com/best-gnome-extensions/&title=Top+20+GNOME+Extensions+You+Should+Be+Using+Right+Now +[9]:https://extensions.gnome.org/ +[10]:https://www.gnome.org/ +[11]:https://itsfoss.com/ubuntu-17-10-release-features/ +[12]:https://itsfoss.com/ubuntu-unity-shutdown/ +[13]:https://itsfoss.com/gnome-shell-extensions/ +[14]:https://www.kde.org/ +[15]:https://elementary.io/ +[16]:https://itsfoss.com/gnome-3-26-released/ +[17]:https://extensions.gnome.org/extension/1217/appfolders-manager/ +[18]:https://en.wikipedia.org/wiki/APT_(Debian) +[19]:https://extensions.gnome.org/extension/1139/apt-update-indicator/ +[20]:https://extensions.gnome.org/extension/16/auto-move-windows/ +[21]:https://extensions.gnome.org/extension/517/caffeine/ +[22]:https://extensions.gnome.org/extension/945/cpu-power-manager/ +[23]:https://extensions.gnome.org/extension/779/clipboard-indicator/ +[24]:https://extensions.gnome.org/extension/1036/extensions/ +[25]:https://extensions.gnome.org/extension/2/move-clock/ +[26]:https://extensions.gnome.org/extension/608/gnomenu/ +[27]:https://youtu.be/9TNvaqtVKLk +[28]:https://itsfoss.com/install-themes-ubuntu/ +[29]:https://extensions.gnome.org/extension/19/user-themes/ +[30]:https://extensions.gnome.org/extension/744/hide-activities-button/ +[31]:https://community.kde.org/KDEConnect +[32]:https://www.kde.org/plasma-desktop +[33]:https://extensions.gnome.org/extension/1272/mconnect/ +[34]:http://openweathermap.org/ +[35]:https://darksky.net/forecast/40.7127,-74.0059/us12/en +[36]:https://extensions.gnome.org/extension/750/openweather/ +[37]:https://extensions.gnome.org/extension/708/panel-osd/ +[38]:https://extensions.gnome.org/extension/8/places-status-indicator/ +[39]:https://extensions.gnome.org/extension/905/refresh-wifi-connections/ +[40]:https://numixproject.github.io/ +[41]:https://extensions.gnome.org/extension/800/remove-dropdown-arrows/ +[42]:https://extensions.gnome.org/extension/355/status-area-horizontal-spacing/ +[43]:https://extensions.gnome.org/extension/234/steal-my-focus/ +[44]:https://extensions.gnome.org/extension/427/workspaces-to-dock/ 
[45]:https://extensions.gnome.org/extension/307/dash-to-dock/
[46]:https://extensions.gnome.org/extension/1160/dash-to-panel/

From 57e2dd7066ccd7331d727bfbe728fc6f914851bd Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Dec 2017 15:37:03 +0800
Subject: [PATCH 152/236] =?UTF-8?q?20171209-10=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ... Improve your Bash scripts with Argbash.md | 113 ++++++++++++++++++
 1 file changed, 113 insertions(+)
 create mode 100644 sources/tech/20171204 Improve your Bash scripts with Argbash.md

diff --git a/sources/tech/20171204 Improve your Bash scripts with Argbash.md b/sources/tech/20171204 Improve your Bash scripts with Argbash.md
new file mode 100644
index 0000000000..826512867e
--- /dev/null
+++ b/sources/tech/20171204 Improve your Bash scripts with Argbash.md
@@ -0,0 +1,113 @@
# [Improve your Bash scripts with Argbash][1]

![](https://fedoramagazine.org/wp-content/uploads/2017/11/argbash-1-945x400.png)

Do you write or maintain non-trivial bash scripts? If so, you probably want them to accept command-line arguments in a standard and robust way. Fedora recently got [a nice addition][2] which can help you produce better scripts. And don’t worry, it won’t cost you much of your time or energy.

### Why Argbash?

Bash is an interpreted command-line language with no standard library. Therefore, if you write bash scripts and want command-line interfaces that conform to [POSIX][3] and [GNU CLI][4] standards, you’re used to only two options:

1. Write the argument-parsing functionality tailored to your script yourself (possibly using the `getopts` builtin).

2. Use an external bash module.

The first option looks incredibly silly, as implementing the interface properly is not trivial. However, it is suggested as the best choice on various sites ranging from [Stack Overflow][5] to the [Bash Hackers][6] wiki.

The second option looks smarter, but using a module has its issues. The biggest is that you have to bundle its code with your script. This may mean either:

* You distribute the library as a separate file, or

* You include the library code at the beginning of your script.

Having two files instead of one is awkward. So is polluting your bash scripts with a chunk of complex code over a thousand lines long.

This was the main reason why the Argbash [project came to life][7]. Argbash is a code generator, so it generates a tailor-made parsing library for your script. Unlike the generic code of other bash modules, it produces minimal code your script needs. Moreover, you can request even simpler code if you don’t need 100% conformance to these CLI standards.

### Example

### Analysis

Let’s say you want to implement a script that [draws a bar][8] across the terminal window. You do that by repeating a single character of your choice multiple times. This means you need to get the following information from the command line (a sample session follows the list):

* _The character which is the element of the line. If not specified, use a dash._  On the command line, this would be a single-valued positional argument  _character_  with a default value of -.

* _Length of the line. If not specified, go for 80._  This is a single-valued optional argument  _--length_  with a default of 80.

* _Verbose mode (for debugging)._  This is a boolean argument  _verbose_ , off by default.
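Putting the three requirements together, the finished script should behave roughly like this (a hypothetical session, since the script doesn't exist yet):

```
$ ./script.sh            # 80 dashes
$ ./script.sh '=' -l 20  # 20 equals signs
$ ./script.sh x -V       # 80 'x' characters, with debug output
```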
As the body of the script is really simple, this article focuses on getting the input of the user from the command line into appropriate script variables. Argbash generates code that saves parsing results to the shell variables `_arg_character`, `_arg_length` and `_arg_verbose`.

### Execution

In order to proceed, you need the  _argbash-init_  and  _argbash_  bash scripts that are part of the  _argbash_  package. Therefore, run this command:

```
sudo dnf install argbash
```

Then, use  _argbash-init_  to generate a template for  _argbash_ , which generates the executable script. You want three arguments: a positional one called  _character_ , an optional  _length_  and an optional boolean  _verbose_ . Tell this to  _argbash-init_ , and then pass the output to  _argbash_ :

```
argbash-init --pos character --opt length --opt-bool verbose script-template.sh
argbash script-template.sh -o script
./script
```

See the help message? Looks like the script doesn’t know about the default value for the character argument. So take a look at the [Argbash API][9], and then fix the issue by editing the template section of the script:

```
# ...
# ARG_OPTIONAL_SINGLE([length],[l],[Length of the line],[80])
# ARG_OPTIONAL_BOOLEAN([verbose],[V],[Debug mode])
# ARG_POSITIONAL_SINGLE([character],[The element of the line],[-])
# ARG_HELP([The line drawer])
# ...
```

Argbash is so smart that it tries to make every generated script a template of itself. This means you don’t have to worry about storing source templates for further use. You just shouldn’t lose your generated bash scripts. Now, try to regenerate the future line drawer to work as expected:

```
argbash script -o script
./script
```

As you can see, everything is working all right. The only thing left to do is fill in the line-drawing functionality itself.
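To round the example off, one possible drawing body might look like the following. This is my sketch, not part of the generated code; it relies only on the variables Argbash fills in, and assumes Argbash's convention of storing boolean values as the strings "on" and "off":

```
# Goes after the generated parsing section of 'script'.
if [ "$_arg_verbose" = "on" ]; then
	echo "Drawing $_arg_length '$_arg_character' characters" >&2
fi
for ((i = 0; i < _arg_length; i++)); do
	printf '%s' "$_arg_character"
done
printf '\n'
```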
### Conclusion

You might find the section containing the parsing code quite long, but consider that it allows you to call  _./script.sh x -Vl50_  and it will be understood the same way as  _./script -V -l 50 x_ . It does require some code to get this right.

However, you can shift the balance between generated code complexity and parsing abilities towards simpler code by calling  _argbash-init_  with the argument  _--mode_  set to  _minimal_ . This option reduces the size of the script by about 20 lines, which corresponds to a roughly 25% decrease of the generated parsing code size. On the other hand, the  _full_  mode makes the script even smarter.

If you want to examine the generated code, give  _argbash_  the argument  _--commented_ , which puts comments into the parsing code that reveal the intent behind various sections. Compare that to other argument parsing libraries such as [shflags][10], [argsparse][11] or [bash-modules/arguments][12], and you’ll see the powerful simplicity of Argbash. If something goes horribly wrong and you need to fix a glitch in the parsing functionality quickly, Argbash allows you to do that as well.

As you’re most likely a Fedora user, you can enjoy the luxury of having command-line Argbash installed from the official repositories. However, there is also an [online parsing code generator][13] at your service. Furthermore, if you’re working on a server with Docker, you can appreciate the [Argbash Docker image][14].

So enjoy, and make sure that your scripts have a command-line interface that pleases your users. Argbash is here to help, with minimal effort required from your side.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/improve-bash-scripts-argbash/

作者:[Matěj Týč ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/author/bubla/
[1]:https://fedoramagazine.org/improve-bash-scripts-argbash/
[2]:https://argbash.readthedocs.io/
[3]:http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap12.html
[4]:https://www.gnu.org/prep/standards/html_node/Command_002dLine-Interfaces.html
[5]:https://stackoverflow.com/questions/192249/how-do-i-parse-command-line-arguments-in-bash
[6]:http://wiki.bash-hackers.org/howto/getopts_tutorial
[7]:https://argbash.readthedocs.io/
[8]:http://wiki.bash-hackers.org/snipplets/print_horizontal_line
[9]:http://argbash.readthedocs.io/en/stable/guide.html#argbash-api
[10]:https://raw.githubusercontent.com/Anvil/bash-argsparse/master/argsparse.sh
[11]:https://raw.githubusercontent.com/Anvil/bash-argsparse/master/argsparse.sh
[12]:https://raw.githubusercontent.com/vlisivka/bash-modules/master/main/bash-modules/src/bash-modules/arguments.sh
[13]:https://argbash.io/generate
[14]:https://hub.docker.com/r/matejak/argbash/

From 52f00820b803b2ab716a7e0ed859820cc585fb41 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Dec 2017 15:39:09 +0800
Subject: [PATCH 153/236] =?UTF-8?q?20171209-11=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...s It Easier to Test Drive Linux Distros.md | 45 +++++++++++++++++++
 1 file changed, 45 insertions(+)
 create mode 100644 sources/tech/20171204 GNOME Boxes Makes It Easier to Test Drive Linux Distros.md

diff --git a/sources/tech/20171204 GNOME Boxes Makes It Easier to Test Drive Linux Distros.md b/sources/tech/20171204 GNOME Boxes Makes It Easier to Test Drive Linux Distros.md
new file mode 100644
index 0000000000..42556932c1
--- /dev/null
+++ b/sources/tech/20171204 GNOME Boxes Makes It Easier to Test Drive Linux Distros.md
@@ -0,0 +1,45 @@
# GNOME Boxes Makes It Easier to Test Drive Linux Distros

![GNOME Boxes Distribution Selection](http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/GNOME-Boxes-INstall-Distros-750x475.jpg)

Creating Linux virtual machines on the GNOME desktop is about to get a whole lot easier.

The next major release of  [_GNOME Boxes_][5]  is able to download popular Linux (and BSD-based) operating systems directly inside the app itself.

Boxes is free, open-source software. It can be used to access both remote and virtual systems, as it is built around [QEMU][6], KVM, and libvirt virtualisation technologies.

For its new ISO-toting integration,  _Boxes_  makes use of [libosinfo][7], a database of operating systems that also provides details on any virtualized environment requirements.

In [this (mis-titled) video][8] from GNOME developer Felipe Borges, you can see just how easy the improved ‘Source Selection’ screen makes things, including the ability to download a specific ISO architecture for a given distro:

[video](https://youtu.be/CGahI05Gbac)

Despite it being a core GNOME app, I have to confess that I have never used Boxes. It’s not that I don’t hear good things about it (I do); it’s just that I’m more familiar with setting up and configuring virtual machines in VirtualBox.
> ‘The lazy geek inside me is going to appreciate this integration’

Admittedly, it’s not exactly  _difficult_  to head out and download an ISO using the browser, then point a virtual machine app at it (heck, it’s what most of us have been doing for a decade or so).

But the lazy geek inside me is really going to appreciate this integration.

So, thanks to this feature, I’ll be unpacking Boxes on my system when GNOME 3.28 is released next March. I will be able to launch  _Boxes_ , close my eyes, pick a distro from the list at random, and instantly broaden my Tux-shaped horizons.

--------------------------------------------------------------------------------

via: http://www.omgubuntu.co.uk/2017/12/gnome-boxes-install-linux-distros-directly

作者:[ JOEY SNEDDON ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://plus.google.com/117485690627814051450/?rel=author
[2]:http://www.omgubuntu.co.uk/category/dev
[3]:http://www.omgubuntu.co.uk/category/video
[4]:http://www.omgubuntu.co.uk/2017/12/gnome-boxes-install-linux-distros-directly
[5]:https://en.wikipedia.org/wiki/GNOME_Boxes
[6]:https://en.wikipedia.org/wiki/QEMU
[7]:https://libosinfo.org/
[8]:https://blogs.gnome.org/felipeborges/boxes-downloadable-oses/

From 6f6bec6b60f807e9e836faab78c3177044afb375 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Dec 2017 15:41:07 +0800
Subject: [PATCH 154/236] =?UTF-8?q?20171209-12=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...3D Modeling and Design Software for Linux.md | 80 +++++++++++++++++++
 1 file changed, 80 insertions(+)
 create mode 100644 sources/tech/20171204 FreeCAD – A 3D Modeling and Design Software for Linux.md

diff --git a/sources/tech/20171204 FreeCAD – A 3D Modeling and Design Software for Linux.md b/sources/tech/20171204 FreeCAD – A 3D Modeling and Design Software for Linux.md
new file mode 100644
index 0000000000..6df21bce1b
--- /dev/null
+++ b/sources/tech/20171204 FreeCAD – A 3D Modeling and Design Software for Linux.md
@@ -0,0 +1,80 @@
FreeCAD – A 3D Modeling and Design Software for Linux
============================================================
![FreeCAD 3D Modeling Software](https://www.fossmint.com/wp-content/uploads/2017/12/FreeCAD-3D-Modeling-Software.png)

[FreeCAD][8] is a cross-platform, OpenCASCADE-based mechanical engineering and product design tool. Being a parametric 3D modeler, it works with PLM, CAx, CAE, MCAD and CAD, and its functionality can be extended using tons of advanced extensions and customization options.

It features a Qt-based minimalist user interface with toggleable panels, layouts, toolbars, a broad Python API, and an Open Inventor-compliant 3D scene representation model (thanks to the Coin 3D library).

 [![FreeCAD 3D Software](https://www.fossmint.com/wp-content/uploads/2017/12/FreeCAD-3D-Software.png)][9]

FreeCAD 3D Software

As is listed on the website, FreeCAD has a couple of use cases, namely:

> * The Home User/Hobbyist: Got yourself a project you want to build, have built, or 3D printed? Model it in FreeCAD. No previous CAD experience required. Our community will help you get the hang of it quickly!
> * The Experienced CAD User: If you use commercial CAD or BIM modeling software at work, you will find similar tools and workflow among the many workbenches of FreeCAD.
>
> * The Programmer: Almost all of FreeCAD’s functionality is accessible to Python. You can easily extend FreeCAD’s functionality, automate it with scripts, build your own modules or even embed FreeCAD in your own application (see the sketch after this list).
>
> * The Educator: Teach your students free software with no worries about license purchases. They can install the same version at home and continue using it after leaving school.
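To give a taste of that Python access, here is a tiny sketch you can paste into FreeCAD's built-in Python console. The API names follow FreeCAD's published scripting basics, but treat the snippet as illustrative rather than definitive:

```
import FreeCAD

# Create a new document and add a parametric box to it.
doc = FreeCAD.newDocument("Demo")
box = doc.addObject("Part::Box", "MyBox")

# Because the model is parametric, changing properties and
# recomputing updates the geometry.
box.Length = 20
box.Width = 10
box.Height = 5
doc.recompute()
```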
#### Features in FreeCAD

* Freeware: FreeCAD is free for everyone to download and use.

* Open Source: Contribute to the source code on [GitHub][4].

* Cross-Platform: All Windows, Linux, and Mac users can enjoy the coolness of FreeCAD.

* A comprehensive [Online Documentation][5].

* A free [Online Manual][6] for beginners and pros alike.

* Annotations support, e.g. text and dimensions.

* A built-in Python console.

* A fully customizable and scriptable UI.

* An online community for showcasing projects [here][7].

* Extendable modules for modeling and designing a variety of objects.

FreeCAD has a lot more features to offer users than we can list here, so feel free to see the rest of them on its website’s [Features page][11].

There are many 3D modeling tools on the market, but hardly any of them are free. If you are a modeling engineer, architect, or artist and are looking for an application you can use without necessarily shelling out any cash, then FreeCAD is a beautiful open-source project you should check out.

Give it a test drive and see if you don’t like it.

[Download FreeCAD for Linux][13]

Are you already a FreeCAD user? Which of its features do you enjoy the most, and have you come across any alternatives that may go head to head with its abilities?

Remember that your comments, suggestions, and constructive criticisms are always welcome in the comments section below.

--------------------------------------------------------------------------------

via: https://www.fossmint.com/freecad-3d-modeling-and-design-software-for-linux/

作者:[Martins D. Okoi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.fossmint.com/author/dillivine/
[1]:https://www.fossmint.com/author/dillivine/
[2]:https://www.fossmint.com/author/dillivine/
[3]:https://www.fossmint.com/freecad-3d-modeling-and-design-software-for-linux/#disqus_thread
[4]:https://github.com/FreeCAD/FreeCAD
[5]:https://www.freecadweb.org/wiki/Main_Page
[6]:https://www.freecadweb.org/wiki/Manual
[7]:https://forum.freecadweb.org/viewforum.php?f=24
[8]:http://www.freecadweb.org/
[9]:https://www.fossmint.com/wp-content/uploads/2017/12/FreeCAD-3D-Software.png
[10]:https://www.fossmint.com/synfig-an-adobe-animate-alternative-for-gnulinux/
[11]:https://www.freecadweb.org/wiki/Feature_list
[12]:http://www.tecmint.com/red-hat-rhcsa-rhce-exam-certification-book/
[13]:https://www.freecadweb.org/wiki/Download

From 08b1593a06d8a3a49cbbc5035a4db7d290dea133 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Dec 2017 15:45:07 +0800
Subject: [PATCH 155/236] =?UTF-8?q?20171209-12=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...Long-term Linux support future clarified.md | 52 +++++++++++++++++++
 1 file changed, 52 insertions(+)
 create mode 100644 sources/tech/20171127 Long-term Linux support future clarified.md

diff --git a/sources/tech/20171127 Long-term Linux support future clarified.md b/sources/tech/20171127 Long-term Linux support future clarified.md
new file mode 100644
index 0000000000..e077f33425
--- /dev/null
+++ b/sources/tech/20171127 Long-term Linux support future clarified.md
@@ -0,0 +1,52 @@
Long-term Linux support future clarified
============================================================

Long-term support Linux version 4.4 will get six years of life, but that doesn't mean other LTS editions will last that long.

[video](http://www.zdnet.com/video/video-torvalds-surprised-by-resilience-of-2-6-kernel-1/)

 _Video: Torvalds surprised by resilience of 2.6 kernel_

In October 2017, the [Linux kernel team agreed to extend the lifespan of the next Long Term Support (LTS) version of Linux from two years to six years][5], [Linux 4.14][6]. This helps [Android][7], embedded Linux, and Linux Internet of Things (IoT) developers. But this move did not mean all future Linux LTS versions will have a six-year lifespan.

As Konstantin Ryabitsev, [The Linux Foundation][8]'s director of IT infrastructure security, explained in a Google+ post, "Despite what various news sites out there may have told you, [kernel 4.14 LTS is not planned to be supported for 6 years][9]. Just because Greg Kroah-Hartman is doing it for 4.4 does not mean that all LTS kernels from now on are going to be maintained for that long."

So, in short, 4.14 will be supported until January 2020, while the 4.4 Linux kernel, which arrived on Jan. 20, 2016, will be supported until 2022\. Therefore, if you're working on a Linux distribution that's meant for the longest possible run, you want to base it on [Linux 4.4][10].

[Linux LTS versions][11] incorporate back-ported bug fixes for older kernel trees. Not all bug fixes are imported; only important bug fixes are applied to such kernels. They, especially for older trees, don't usually see very frequent releases.

The other Linux versions are Prepatch or release candidates (RC), Mainline, Stable, and LTS.
+ +RC must be compiled from source and usually contains bug fixes and new features. These are maintained and released by Linus Torvalds. He also maintains the Mainline tree (this is where all new features are introduced). New mainline kernels are released every few months. When the mainline kernel is released for general use, it is considered "stable." Bug fixes for a stable kernel are back-ported from the mainline tree and applied by a designated stable kernel maintainer. There are usually only a few bug-fix kernel releases until the next mainline kernel becomes available. + +As for the latest LTS, Linux 4.14, Ryabitsev said, "It is possible that someone may pick up maintainership of 4.14 after Greg is done with it (it's happened in the past on multiple occasions), but you should emphatically not plan on that." + +Kroah-Hartman simply added to Ryabitsev's post: ["What he said."][12] + +-------------------------------------------------------------------------------- + +via: http://www.zdnet.com/article/long-term-linux-support-future-clarified/ + +作者:[Steven J. Vaughan-Nichols ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[1]:http://www.zdnet.com/article/long-term-linux-support-future-clarified/#comments-eb4f0633-955f-4fec-9e56-734c34ee2bf2 +[2]:http://www.zdnet.com/article/the-tension-between-iot-and-erp/ +[3]:http://www.zdnet.com/article/the-tension-between-iot-and-erp/ +[4]:http://www.zdnet.com/article/the-tension-between-iot-and-erp/ +[5]:http://www.zdnet.com/article/long-term-support-linux-gets-a-longer-lease-on-life/ +[6]:http://www.zdnet.com/article/the-new-long-term-linux-kernel-linux-4-14-has-arrived/ +[7]:https://www.android.com/ +[8]:https://www.linuxfoundation.org/ +[9]:https://plus.google.com/u/0/+KonstantinRyabitsev/posts/Lq97ZtL8Xw9 +[10]:http://www.zdnet.com/article/whats-new-and-nifty-in-linux-4-4/ +[11]:https://www.kernel.org/releases.html +[12]:https://plus.google.com/u/0/+gregkroahhartman/posts/ZUcSz3Sn1Hc +[13]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[14]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[15]:http://www.zdnet.com/blog/open-source/ +[16]:http://www.zdnet.com/topic/enterprise-software/ From 3236bf1492f749ad8066aee6ccba2ac6ef832fb6 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 9 Dec 2017 15:47:47 +0800 Subject: [PATCH 156/236] =?UTF-8?q?20171209-12=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...riaDB Security Best Practices for Linux.md | 186 ++++++++++++++++++ 1 file changed, 186 insertions(+) create mode 100644 sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md diff --git a/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md b/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md new file mode 100644 index 0000000000..5cf9169661 --- /dev/null +++ b/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md @@ -0,0 +1,186 @@ +12 MySQL/MariaDB Security Best Practices for Linux +============================================================ + +MySQL is the world’s most popular open source database system and MariaDB (a fork of MySQL) is the world’s fastest growing open source database system. 
After installing MySQL server, it is insecure in its default configuration, and securing it is one of the essential tasks in general database management.

This will contribute to hardening and boosting overall Linux server security, as attackers always scan for vulnerabilities in any part of a system, and databases have in the past been key target areas. A common example is the brute-forcing of the root password for the MySQL database.

In this guide, we will explain useful MySQL/MariaDB security best practices for Linux.

### 1\. Secure MySQL Installation

This is the first recommended step after installing MySQL server towards securing the database server. The mysql_secure_installation script helps improve the security of your MySQL server by asking you to:

* set a password for the root account, if you didn’t set it during installation.

* disable remote root user login by removing root accounts that are accessible from outside the local host.

* remove anonymous-user accounts and the test database, which by default can be accessed by all users, even anonymous users.

```
# mysql_secure_installation
```

After running it, set the root password and answer the series of questions by entering [Yes/Y] and pressing [Enter].

 [![Secure MySQL Installation](https://www.tecmint.com/wp-content/uploads/2017/12/Secure-MySQL-Installation.png)][2]

Secure MySQL Installation

### 2\. Bind Database Server To Loopback Address

This configuration will restrict access from remote machines; it tells the MySQL server to only accept connections from within the localhost. You can set it in the main configuration file.

```
# vi /etc/my.cnf [RHEL/CentOS]
# vi /etc/mysql/my.cnf [Debian/Ubuntu]
OR
# vi /etc/mysql/mysql.conf.d/mysqld.cnf [Debian/Ubuntu]
```

Add the following line under the `[mysqld]` section.

```
bind-address = 127.0.0.1
```

### 3\. Disable LOCAL INFILE in MySQL

As part of security hardening, you need to disable local_infile to prevent access to the underlying filesystem from within MySQL, using the following directive under the `[mysqld]` section.

```
local-infile=0
```

### 4\. Change MySQL Default Port

The Port variable sets the MySQL port number that will be used to listen for TCP/IP connections. The default port number is 3306, but you can change it under the [mysqld] section as shown.

```
Port=5000
```

### 5\. Enable MySQL Logging

Logs are one of the best ways to understand what happens on a server; in case of any attacks, you can easily see any intrusion-related activities from log files. You can enable MySQL logging by adding the following variable under the `[mysqld]` section.

```
log=/var/log/mysql.log
```

### 6\. Set Appropriate Permissions on MySQL Files

Ensure that you have appropriate permissions set for all mysql server files and data directories. The /etc/my.cnf file should only be writeable by root. This blocks other users from changing database server configurations.

```
# chmod 644 /etc/my.cnf
```

### 7\. Delete MySQL Shell History

All commands you execute in the MySQL shell are stored by the mysql client in a history file: ~/.mysql_history. This can be dangerous, because for any user accounts that you create, all usernames and passwords typed on the shell will be recorded in the history file.

```
# cat /dev/null > ~/.mysql_history
```
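Before moving on, it is worth verifying that the server-side changes from sections 2 through 5 actually took effect. A quick sketch, assuming a systemd-based system running MariaDB (adjust the service name for your setup):

```
# systemctl restart mariadb
# ss -ltnp | grep mysql
# mysql -u root -p -e "SHOW VARIABLES LIKE 'local_infile';"
```

The ss output should show the server bound to 127.0.0.1 on your chosen port, and local_infile should report OFF.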
### 8\. Don’t Type MySQL Passwords on the Command Line

As you already know, all commands you type in the terminal are stored in a history file, depending on the shell you are using (for example, ~/.bash_history for bash). An attacker who manages to gain access to this history file can easily see any passwords recorded there.

It is strongly recommended not to type passwords on the command line, like this:

```
# mysql -u root -ppassword
```
 [![Connect MySQL with Password](https://www.tecmint.com/wp-content/uploads/2017/12/Connect-MySQL-with-Password.png)][3]

Connect MySQL with Password

When you check the last section of the command history file, you will see the password typed above.

```
# history
```
 [![Check Command History](https://www.tecmint.com/wp-content/uploads/2017/12/Check-Command-History.png)][4]

Check Command History

The appropriate way to connect to MySQL is:

```
# mysql -u root -p
Enter password:
```

### 9\. Define Application-Specific Database Users

For each application running on the server, only give access to the user who is in charge of the database for that application. For example, if you run an Osclass site, create a specific user for the Osclass database as follows.

```
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE osclass_db;
MariaDB [(none)]> CREATE USER 'osclassdmin'@'localhost' IDENTIFIED BY 'osclass@dmin%!2';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON osclass_db.* TO 'osclassdmin'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit
```

And remember to always remove user accounts that no longer manage any application database on the server.

### 10\. Use Additional Security Plugins and Libraries

MySQL includes a number of security plugins for authenticating attempts by clients to connect to the mysql server, validating passwords, and securing storage for sensitive information, all of which are available in the free version.

You can find more here: [https://dev.mysql.com/doc/refman/5.7/en/security-plugins.html][5]

### 11\. Change MySQL Passwords Regularly

This is a common piece of information/application/system security advice. How often you do this will entirely depend on your internal security policy. However, it can prevent “snoopers” who might have been tracking your activity over a long period of time from gaining access to your mysql server.

```
MariaDB [(none)]> USE mysql;
MariaDB [(none)]> UPDATE user SET password=PASSWORD('YourPasswordHere') WHERE User='root' AND Host = 'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
```

### 12\. Update MySQL Server Package Regularly

It is highly recommended to upgrade mysql/mariadb packages regularly from the vendor’s repository to keep up with security updates and bug fixes. Normally, packages in default operating system repositories are outdated.

```
# yum update
# apt update
```

After making any changes to the mysql/mariadb server, always restart the service.

```
# systemctl restart mariadb     #RHEL/CentOS
# systemctl restart mysql       #Debian/Ubuntu
```

Read Also: [15 Useful MySQL/MariaDB Performance Tuning and Optimization Tips][6]

That’s all! We love to hear from you via the comment form below. Do share with us any MySQL/MariaDB security tips missing in the above list.
--------------------------------------------------------------------------------

via: https://www.tecmint.com/mysql-mariadb-security-best-practices-for-linux/

作者:[ Aaron Kili ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.tecmint.com/author/aaronkili/
[1]:https://www.tecmint.com/learn-mysql-mariadb-for-beginners/
[2]:https://www.tecmint.com/wp-content/uploads/2017/12/Secure-MySQL-Installation.png
[3]:https://www.tecmint.com/wp-content/uploads/2017/12/Connect-MySQL-with-Password.png
[4]:https://www.tecmint.com/wp-content/uploads/2017/12/Check-Command-History.png
[5]:https://dev.mysql.com/doc/refman/5.7/en/security-plugins.html
[6]:https://www.tecmint.com/mysql-mariadb-performance-tuning-and-optimization/
[7]:https://www.tecmint.com/author/aaronkili/
[8]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[9]:https://www.tecmint.com/free-linux-shell-scripting-books/

From 37b098258afb468e38ac8b56d8e6ad587524214e Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Dec 2017 15:50:08 +0800
Subject: [PATCH 157/236] =?UTF-8?q?20171209-13=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...Simple Excellent Linux Network Monitors.md | 189 ++++++++++++++++++
 1 file changed, 189 insertions(+)
 create mode 100644 sources/tech/20171019 3 Simple Excellent Linux Network Monitors.md

diff --git a/sources/tech/20171019 3 Simple Excellent Linux Network Monitors.md b/sources/tech/20171019 3 Simple Excellent Linux Network Monitors.md
new file mode 100644
index 0000000000..28b784e763
--- /dev/null
+++ b/sources/tech/20171019 3 Simple Excellent Linux Network Monitors.md
@@ -0,0 +1,189 @@
3 Simple, Excellent Linux Network Monitors
============================================================

![network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_3.png?itok=iuPcSN4k "network")
Learn more about your network connections with the iftop, Nethogs, and vnstat tools.[Used with permission][3]

You can learn an amazing amount of information about your network connections with these three glorious Linux networking commands. iftop displays network connections per host, Nethogs quickly reveals what is hogging your bandwidth, and vnstat runs as a nice lightweight daemon to record your usage over time.

### iftop

The excellent [iftop][8] listens to the network interface that you specify, and displays connections in a top-style interface.

This is a great little tool for quickly identifying hogs, measuring speed, and maintaining a running total of your network traffic. It is rather surprising to see how much bandwidth we use, especially for us old people who remember the days of telephone land lines, modems, screaming kilobits of speed, and real live bauds. We abandoned bauds a long time ago in favor of bit rates. Baud measures signal changes, which sometimes were the same as bit rates, but mostly not.

If you have just one network interface, run iftop with no options. iftop requires root permissions:

```
$ sudo iftop
```

When you have more than one, specify the interface you want to monitor:

```
$ sudo iftop -i wlan0
```

Just like top, you can change the display options while it is running.

* **h** toggles the help screen.

* **n** toggles name resolution.
+

* **s** toggles source host display, and **d** toggles the destination hosts.

* **S** toggles source port numbers, and **D** toggles destination port numbers.

* **N** toggles port resolution; to see all port numbers toggle resolution off.

* **t** toggles the text interface. The default display requires ncurses. I think the text display is more readable and better-organized (Figure 1).

* **P** pauses the display.

* **q** quits the program.


![text display](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-1_8.png?itok=luKHS5ve "text display")
Figure 1: The text display is readable and organized.[Used with permission][1]

When you toggle the display options, iftop continues to measure all traffic. You can also select a single host to monitor. You need the host's IP address and netmask. I was curious how much of a load Pandora put on my sad little meager bandwidth cap, so first I used dig to find their IP address:

```
$ dig A pandora.com
[...]
;; ANSWER SECTION:
pandora.com.    267    IN    A    208.85.40.20
pandora.com.    267    IN    A    208.85.40.50
```

What's the netmask? [ipcalc][9] tells us:

```
$ ipcalc -b 208.85.40.20
Address:   208.85.40.20
Netmask:   255.255.255.0 = 24
Wildcard:  0.0.0.255
=>
Network:   208.85.40.0/24
```

Now feed the address and netmask to iftop:

```
$ sudo iftop -F 208.85.40.20/24 -i wlan0
```

Is that not seriously groovy? I was surprised to learn that Pandora is easy on my precious bits, using around 500Kb per hour. And, like most streaming services, Pandora's traffic comes in spurts and relies on caching to smooth out the lumps and bumps.

You can do the same with IPv6 addresses, using the **-G** option. Consult the fine man page to learn the rest of iftop's features, including customizing your default options with a personal configuration file, and applying custom filters (see [PCAP-FILTER][10] for a filter reference).

### Nethogs

When you want to quickly learn who is sucking up your bandwidth, Nethogs is fast and easy. Run it as root and specify the interface to listen on. It displays the hoggy application and the process number, so that you may kill it if you so desire:

```
$ sudo nethogs wlan0

NetHogs version 0.8.1

PID   USER   PROGRAM                DEV     SENT    RECEIVED
7690  carla  /usr/lib/firefox       wlan0   12.494  556.580 KB/sec
5648  carla  .../chromium-browser   wlan0    0.052    0.038 KB/sec
TOTAL                                       12.546  556.618 KB/sec
```

Nethogs has few options: cycling between kb/s, kb, b, and mb, sorting by received or sent packets, and adjusting the delay between refreshes. See `man nethogs`, or run `nethogs -h`.

### vnstat

[vnstat][11] is the easiest network data collector to use. It is lightweight and does not need root permissions. It runs as a daemon and records your network statistics over time. The `vnstat` command displays the accumulated data:

```
$ vnstat -i wlan0
Database updated: Tue Oct 17 08:36:38 2017

   wlan0 since 10/17/2017

          rx:  45.27 MiB      tx:  3.77 MiB      total:  49.04 MiB

   monthly
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
       Oct '17     45.27 MiB |    3.77 MiB |   49.04 MiB |    0.28 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated        85 MiB |       5 MiB |      90 MiB |

   daily
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
         today     45.27 MiB |    3.77 MiB |   49.04 MiB |   12.96 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated       125 MiB |       8 MiB |     133 MiB |
```

By default it displays all network interfaces. Use the `-i` option to select a single interface. Merge the data of multiple interfaces this way:

```
$ vnstat -i wlan0+eth0+eth1
```

You can filter the display in several ways:

* **-h** displays statistics by hours.

* **-d** displays statistics by days.

* **-w** and **-m** display statistics by weeks and months.

* Watch live updates with the **-l** option.

This command deletes the database for wlan1 and stops watching it:

```
$ vnstat -i wlan1 --delete
```

This command creates an alias for a network interface. This example uses one of the weird interface names from Ubuntu 16.04:

```
$ vnstat -u -i enp0s25 --nick eth0
```

By default vnstat monitors eth0. You can change this in `/etc/vnstat.conf`, or create your own personal configuration file in your home directory. See `man vnstat` for a complete reference.

You can also install vnstati to create simple, colored graphs (Figure 2):

```
$ vnstati -s -i wlx7cdd90a0a1c2 -o vnstat.png
```


![vnstati](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-2_5.png?itok=HsWJMcW0 "vnstati")
Figure 2: You can create simple colored graphs with vnstati.[Used with permission][2]

See `man vnstati` for complete options.

 _Learn more about Linux through the free ["Introduction to Linux"][7] course from The Linux Foundation and edX._

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2017/10/3-simple-excellent-linux-network-monitors

作者:[CARLA SCHRODER ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/files/images/fig-1png-8
[5]:https://www.linux.com/files/images/fig-2png-5
[6]:https://www.linux.com/files/images/bannerpng-3
[7]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[8]:http://www.ex-parrot.com/pdw/iftop/
[9]:https://www.linux.com/learn/intro-to-linux/2017/8/how-calculate-network-addresses-ipcalc
[10]:http://www.tcpdump.org/manpages/pcap-filter.7.html
[11]:http://humdi.net/vnstat/

From d4ef22d511d955dfbe7341b0de86b3b4ef11ba64 Mon Sep 17 00:00:00 2001
From: Ezio 
Date: Sat, 9 Dec 2017 15:53:54 +0800
Subject: [PATCH 158/236] =?UTF-8?q?20171209-14=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 .../20170413 More Unknown Linux Commands.md   | 131 ++++++++++++++++++
 1 file changed, 131 insertions(+)
 create mode 100644 sources/tech/20170413 More Unknown Linux Commands.md

diff --git a/sources/tech/20170413 More Unknown Linux Commands.md b/sources/tech/20170413 More Unknown Linux Commands.md
new file mode 100644
index 0000000000..f5507d3802
--- /dev/null
+++ b/sources/tech/20170413 More Unknown Linux Commands.md
@@ -0,0 +1,131 @@
More Unknown Linux Commands
+============================================================ + + +![unknown Linux commands](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/outer-limits-of-linux.jpg?itok=5L5xfj2v "unknown Linux commands") +>Explore the outer limits of Linux with Carla Schroder in this roundup of little-known utilities.[Creative Commons Zero][2]Pixabay + +A roundup of the fun and little-known utilities `termsaver`, `pv`, and `calendar`. `termsaver` is an ASCII screensaver for the console, and `pv` measures data throughput and simulates typing. Debian's `calendar` comes with a batch of different calendars, and instructions for making your own. + +![Linux commands](https://www.linux.com/sites/lcom/files/styles/floated_images/public/linux-commands-fig-1.png?itok=HveXXLLK "Linux commands") + +Figure 1: Star Wars screensaver.[Used with permission][1] + +### Terminal Screensaver + +Why should graphical desktops have all the fun with fancy screensavers? Install `termsaver` to enjoy fancy ASCII screensavers like matrix, clock, starwars, and a couple of not-safe-for-work screens. More on the NSFW screens in a moment. + +`termsaver` is included in Debian/Ubuntu, and if you're using a boring distro that doesn't package fun things (like CentOS), you can download it from [termsaver.brunobraga.net][7] and follow the simple installation instructions. + +Run `termsaver -h` to see a list of screens: + +``` + randtxt displays word in random places on screen + starwars runs the asciimation Star Wars movie + urlfetcher displays url contents with typing animation + quotes4all displays recent quotes from quotes4all.net + rssfeed displays rss feed information + matrix displays a matrix movie alike screensaver + clock displays a digital clock on screen + rfc randomly displays RFC contents + jokes4all displays recent jokes from jokes4all.net (NSFW) + asciiartfarts displays ascii images from asciiartfarts.com (NSFW) + programmer displays source code in typing animation + sysmon displays a graphical system monitor +``` + +Then run your chosen screen with `termsaver [screen name]`, e.g. `termsaver matrix`, and stop it with Ctrl+c. Get information on individual screens by running `termsaver [screen name] -h`. Figure 1 is from the `starwars` screen, which runs our old favorite [Asciimation Wars][8]. + +The not-safe-for-work screens pull in online feeds. They're not my cup of tea, but the good news is `termsaver` is a gaggle of Python scripts, so they're easy to hack to connect to any RSS feed you desire. + +### pv + +The `pv` command is one of those funny little utilities that lends itself to creative uses. Its intended use is monitoring data copying progress, like when you run `rsync` or create a `tar`archive. When you run `pv` without options the defaults are: + +* -p progress. + +* -t timer, total elapsed time. + +* -e, ETA, time to completion. This is often inaccurate as `pv` cannot always know the size of the data you are moving. + +* -r, rate counter, or throughput. + +* -b, byte counter. + +This is what an `rsync` transfer looks like: + +``` +$ rsync -av /home/carla/ /media/carla/backup/ | pv +sending incremental file list +[...] +103GiB 0:02:48 [ 615MiB/s] [ <=> +``` + +Create a tar archive like this example: + +``` +$ tar -czf - /file/path| (pv > backup.tgz) + 885MiB 0:00:30 [28.6MiB/s] [ <=> +``` + +`pv` monitors processes. To see maximum activity monitor a Web browser process. 
It is amazing how much activity that generates: + +``` +$ pv -d 3095 + 58:/home/carla/.pki/nssdb/key4.db: 0 B 0:00:33 + [ 0 B/s] [<=> ] + 78:/home/carla/.config/chromium/Default/Visited Links: + 256KiB 0:00:33 [ 0 B/s] [<=> ] + ] + 85:/home/carla/.con...romium/Default/data_reduction_proxy_leveldb/LOG: + 298 B 0:00:33 [ 0 B/s] [<=> ] +``` + +Somewhere on the Internet I stumbled across a most entertaining way to use `pv` to echo back what I type: + +``` +$ echo "typing random stuff to pipe through pv" | pv -qL 8 +typing random stuff to pipe through pv +``` + +The normal `echo` command prints the whole line at once. Piping it through `pv` makes it appear as though it is being re-typed. I have no idea if this has any practical value, but I like it. The `-L`controls the speed of the playback, in bytes per second. + +`pv` is one of those funny little old commands that has acquired a giant batch of options over the years, including fancy formatting options, multiple output options, and transfer speed modifiers. `man pv` reveals all. + +### /usr/bin/calendar + +It's amazing what you can learn by browsing `/usr/bin` and other commands directories, and reading man pages. `/usr/bin/calendar` on Debian/Ubuntu is a modification of the BSD calendar, but it omits the moon and sun phases. It retains multiple calendars including `calendar.computer, calendar.discordian, calendar.music`, and `calendar.lotr`. On my system the man page lists different calendars than exist in `/usr/bin/calendar`. This example displays the Lord of the Rings calendar for the next 60 days: + +``` +$ calendar -f /usr/share/calendar/calendar.lotr -A 60 +Apr 17 An unexpected party +Apr 23 Crowning of King Ellesar +May 19 Arwen leaves Lorian to wed King Ellesar +Jun 11 Sauron attacks Osgilliath +``` + +The calendars are plain text files so you can easily create your own. The easy way is to copy the format of the existing calendar files. `man calendar` contains detailed instructions for creating your own calendar file. + +Once again we come to the end too quickly. Take some time to cruise your own filesystem to dig up interesting commands to play with. 
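As promised, here is a minimal, hypothetical calendar file. The dates and entries are invented, and `calendar` expects a tab between the date and the description; the `#include` line pulls in a stock calendar, assuming the Debian layout described above:

```
$ mkdir -p ~/.calendar            # calendar(1) looks here for a personal file
$ cat > ~/.calendar/calendar <<'EOF'
#include <calendar.lotr>
04/17	Test the new backup script
12/24	Update the project changelog
EOF
$ calendar -A 30                  # next 30 days, stock and custom entries together
```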
+

 _Learn more about Linux through the free ["Introduction to Linux"][5] course from The Linux Foundation and edX._

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2017/4/more-unknown-linux-commands

作者:[ CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/linux-commands-fig-1png
[4]:https://www.linux.com/files/images/outer-limits-linuxjpg
[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[6]:https://www.addtoany.com/share#url=https%3A%2F%2Fwww.linux.com%2Flearn%2Fintro-to-linux%2F2017%2F4%2Fmore-unknown-linux-commands&title=More%20Unknown%20Linux%20Commands
[7]:http://termsaver.brunobraga.net/
[8]:http://www.asciimation.co.nz/

From c64baad040d51200952baf7823145224c71c4f7d Mon Sep 17 00:00:00 2001
From: Ezio 
Date: Sat, 9 Dec 2017 15:57:49 +0800
Subject: [PATCH 159/236] =?UTF-8?q?20171209-15=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ... to 1 free software project every month.md | 104 ++++++++++++++++++
 1 file changed, 104 insertions(+)
 create mode 100644 sources/tech/20170123 New Years resolution Donate to 1 free software project every month.md

diff --git a/sources/tech/20170123 New Years resolution Donate to 1 free software project every month.md b/sources/tech/20170123 New Years resolution Donate to 1 free software project every month.md
new file mode 100644
index 0000000000..d2e2b5f5c1
--- /dev/null
+++ b/sources/tech/20170123 New Years resolution Donate to 1 free software project every month.md
@@ -0,0 +1,104 @@
New Year’s resolution: Donate to 1 free software project every month
============================================================

### Donating just a little bit helps ensure the open source software I use remains alive

Free and open source software is an absolutely critical part of our world—and the future of technology and computing. One problem that consistently plagues many free software projects, though, is the challenge of funding ongoing development (and support and documentation).

With that in mind, I have finally settled on a New Year’s resolution for 2017: to donate to one free software project (or group) every month—for the whole year. After all, these projects are saving me a boatload of money because I don’t need to buy expensive, proprietary packages to accomplish the same things.

#### Also on Network World: [Free Software Foundation shakes up its list of priority projects][19]

I’m not setting some crazy goal here—not requiring that I donate beyond my means. Heck, some months I may be able to donate only a few bucks. But every little bit helps, right?

To help me accomplish that goal, below is a list of free software projects with links to where I can donate to them. Organized by categories, just because. I’m scheduling a monthly calendar item to remind me to bring up this page and donate to one of these projects.

This isn’t a complete list—not by any measure—but it’s a good starting point. Apologies to the (many) great projects out there that I missed.

#### Linux distributions

[elementary OS][20] — In addition to the distribution itself (which is based, in part, on Ubuntu), this team also develops the Pantheon desktop environment.

[Solus][21] — This is a “from scratch” distro using their own custom-developed desktop environment, “Budgie.”

[Ubuntu MATE][22] — It’s Ubuntu—with Unity ripped off and replaced with MATE. I like to think of this as “What Ubuntu was like back when I still used Ubuntu.”

[Debian][23] — If you use Ubuntu or elementary or Mint, you are using a system based on Debian. Personally, I use Debian on my [PocketCHIP][24].

#### Linux components

[PulseAudio][25] — PulseAudio is all over the place now. If it stopped being supported and maintained, that would be… highly inconvenient.

#### Productivity/Creation

[Gimp][26] — The GNU Image Manipulation Program is one of the most famous free software projects—and the standard for cross-platform raster design tools.

[FreeCAD][27] — When people talk about difficulty in moving from Windows to Linux, the lack of CAD software often crops up. Supporting projects such as FreeCAD helps to remove that barrier.

[OpenShot][28] — Video editing on Linux (and other free software desktops) has improved tremendously over the past few years. But there is still work to be done.

[Blender][29] — What is Blender? A 3D modelling suite? A video editor? A game creation system? All three (and more)? Whatever you use Blender for, it’s amazing.

[Inkscape][30] — This is the most fantastic vector graphics editing suite on the planet (in my oh-so-humble opinion).

[LibreOffice / The Document Foundation][31] — I am writing this very document in LibreOffice. Donating to their foundation to help further development seems to be in my best interests.

#### Software development

[Python Software Foundation][32] — Python is a great language and is used all over the place.

#### Free and open source foundations

[Free Software Foundation][33] — “The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user freedom. We defend the rights of all software users.”

[Software Freedom Conservancy][34] — “Software Freedom Conservancy helps promote, improve, develop and defend Free, Libre and Open Source Software (FLOSS) projects.”

Again—this is, by no means, a complete list. Not even close. Luckily many projects provide easy donation mechanisms on their websites.

Join the Network World communities on [Facebook][17] and [LinkedIn][18] to comment on topics that are top of mind.
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3160174/linux/new-years-resolution-donate-to-1-free-software-project-every-month.html + +作者:[ Bryan Lunduke][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.networkworld.com/author/Bryan-Lunduke/ +[1]:https://www.networkworld.com/article/3143583/linux/linux-y-things-i-am-thankful-for.html +[2]:https://www.networkworld.com/article/3152745/linux/5-rock-solid-linux-distros-for-developers.html +[3]:https://www.networkworld.com/article/3130760/open-source-tools/elementary-os-04-review-and-interview-with-the-founder.html +[4]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount +[5]:https://twitter.com/intent/tweet?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&via=networkworld&text=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month +[6]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html +[7]:http://www.linkedin.com/shareArticle?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&title=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month +[8]:https://plus.google.com/share?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html +[9]:http://reddit.com/submit?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&title=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month +[10]:http://www.stumbleupon.com/submit?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html +[11]:https://www.networkworld.com/article/3160174/linux/new-years-resolution-donate-to-1-free-software-project-every-month.html#email +[12]:https://www.networkworld.com/article/3143583/linux/linux-y-things-i-am-thankful-for.html +[13]:https://www.networkworld.com/article/3152745/linux/5-rock-solid-linux-distros-for-developers.html +[14]:https://www.networkworld.com/article/3130760/open-source-tools/elementary-os-04-review-and-interview-with-the-founder.html +[15]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount +[16]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount +[17]:https://www.facebook.com/NetworkWorld/ +[18]:https://www.linkedin.com/company/network-world +[19]:http://www.networkworld.com/article/3158685/open-source-tools/free-software-foundation-shakes-up-its-list-of-priority-projects.html +[20]:https://www.patreon.com/elementary +[21]:https://www.patreon.com/solus +[22]:https://www.patreon.com/ubuntu_mate +[23]:https://www.debian.org/donations +[24]:http://www.networkworld.com/article/3157210/linux/review-pocketchipsuper-cheap-linux-terminal-that-fits-in-your-pocket.html +[25]:https://www.patreon.com/tanuk +[26]:https://www.gimp.org/donating/ +[27]:https://www.patreon.com/yorikvanhavre +[28]:https://www.patreon.com/openshot 
+
[29]:https://www.blender.org/foundation/donation-payment/
[30]:https://inkscape.org/en/support-us/donate/
[31]:https://www.libreoffice.org/donate/
[32]:https://www.python.org/psf/donations/
[33]:http://www.fsf.org/associate/
[34]:https://sfconservancy.org/supporter/

From 19566f7a2dbe8fbc3b0de4c45f97fdde2a6ba480 Mon Sep 17 00:00:00 2001
From: Ezio 
Date: Sat, 9 Dec 2017 16:01:24 +0800
Subject: [PATCH 160/236] =?UTF-8?q?20171209-16=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ing an Open Source Project A Free Guide.md | 85 +++++++++++++++++++
 1 file changed, 85 insertions(+)
 create mode 100644 sources/tech/20171201 Launching an Open Source Project A Free Guide.md

diff --git a/sources/tech/20171201 Launching an Open Source Project A Free Guide.md b/sources/tech/20171201 Launching an Open Source Project A Free Guide.md
new file mode 100644
index 0000000000..0d3fa9e18c
--- /dev/null
+++ b/sources/tech/20171201 Launching an Open Source Project A Free Guide.md
@@ -0,0 +1,85 @@
Launching an Open Source Project: A Free Guide
============================================================

![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/project-launch-1024x645.jpg)

Launching a project and then rallying community support can be complicated, but the new guide to Starting an Open Source Project can help.

Increasingly, as open source programs become more pervasive at organizations of all sizes, tech and DevOps workers are choosing to or being asked to launch their own open source projects. From Google to Netflix to Facebook, companies are also releasing their open source creations to the community. It’s become common for open source projects to start from scratch internally, after which they benefit from collaboration involving external developers.

Launching a project and then rallying community support can be more complicated than you think, however. A little up-front work can help things go smoothly, and that’s exactly where the new guide to [Starting an Open Source Project][1] comes in.

This free guide was created to help organizations already versed in open source learn how to start their own open source projects. It starts at the beginning of the process, including deciding what to open source, and moves on to budget and legal considerations, and more. The road to creating an open source project may be foreign, but major companies, from Google to Facebook, have opened up resources and provided guidance. In fact, Google has [an extensive online destination][2] dedicated to open source best practices and how to open source projects.

“No matter how many smart people we hire inside the company, there’s always smarter people on the outside,” notes Jared Smith, Open Source Community Manager at Capital One. “We find it is worth it to us to open source and share our code with the outside world in exchange for getting some great advice from people on the outside who have expertise and are willing to share back with us.”

In the new guide, noted open source expert Ibrahim Haddad provides five reasons why an organization might open source a new project:

1. Accelerate an open solution; provide a reference implementation to a standard; share development costs for strategic functions.

2. Commoditize a market; reduce prices of non-strategic software components.

3. Drive demand by building an ecosystem for your products.

4. Partner with others; engage customers; strengthen relationships with common goals.

5. Offer your customers the ability to self-support: they can adapt your code without waiting for you.

The guide notes: “The decision to release or create a new open source project depends on your circumstances. Your company should first achieve a certain level of open source mastery by using open source software and contributing to existing projects. This is because consuming can teach you how to leverage external projects and developers to build your products. And participation can bring more fluency in the conventions and culture of open source communities. (See our guides on [Using Open Source Code][3] and [Participating in Open Source Communities][4]) But once you have achieved open source fluency, the best time to start launching your own open source projects is simply ‘early’ and ‘often.’”

The guide also notes that planning can keep you and your organization out of legal trouble. Issues pertaining to licensing, distribution, support options, and even branding require thinking ahead if you want your project to flourish.

“I think it is a crucial thing for a company to be thinking about what they’re hoping to achieve with a new open source project,” said John Mertic, Director of Program Management at The Linux Foundation. “They must think about the value of it to the community and developers out there and what outcomes they’re hoping to get out of it. And then they must understand all the pieces they must have in place to do this the right way, including legal, governance, infrastructure and a starting community. Those are the things I always stress the most when you’re putting an open source project out there.”

The [Starting an Open Source Project][5] guide can help you with everything from licensing issues to best development practices, and it explores how to seamlessly and safely weave existing open components into your open source projects. It is one of a new collection of free guides from The Linux Foundation and The TODO Group that are all extremely valuable for any organization running an open source program. [The guides are available][6] now to help you run an open source program office where open source is supported, shared, and leveraged. With such an office, organizations can establish and execute on their open source strategies efficiently, with clear terms.

These free resources were produced based on expertise from open source leaders. [Check out all the guides here][7] and stay tuned for our continuing coverage.
+ +Also, don’t miss the previous articles in the series: + +[How to Create an Open Source Program][8] + +[Tools for Managing Open Source Programs][9] + +[Measuring Your Open Source Program’s Success][10] + +[Effective Strategies for Recruiting Open Source Developers][11] + +[Participating in Open Source Communities][12] + +[Using Open Source Code][13] + +-------------------------------------------------------------------------------- + +via: https://www.linuxfoundation.org/blog/launching-open-source-project-free-guide/ + +作者:[Sam Dean ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linuxfoundation.org/author/sdean/ +[1]:https://www.linuxfoundation.org/resources/open-source-guides/starting-open-source-project/ +[2]:https://www.linux.com/blog/learn/chapter/open-source-management/2017/5/googles-new-home-all-things-open-source-runs-deep +[3]:https://www.linuxfoundation.org/using-open-source-code/ +[4]:https://www.linuxfoundation.org/participating-open-source-communities/ +[5]:https://www.linuxfoundation.org/resources/open-source-guides/starting-open-source-project/ +[6]:https://github.com/todogroup/guides +[7]:https://github.com/todogroup/guides +[8]:https://github.com/todogroup/guides/blob/master/creating-an-open-source-program.md +[9]:https://www.linuxfoundation.org/blog/managing-open-source-programs-free-guide/ +[10]:https://www.linuxfoundation.org/measuring-your-open-source-program-success/ +[11]:https://www.linuxfoundation.org/blog/effective-strategies-recruiting-open-source-developers/ +[12]:https://www.linuxfoundation.org/participating-open-source-communities/ +[13]:https://www.linuxfoundation.org/using-open-source-code/ +[14]:https://www.linuxfoundation.org/author/sdean/ +[15]:https://www.linuxfoundation.org/category/audience/attorneys/ +[16]:https://www.linuxfoundation.org/category/blog/ +[17]:https://www.linuxfoundation.org/category/audience/c-level/ +[18]:https://www.linuxfoundation.org/category/audience/developer-influencers/ +[19]:https://www.linuxfoundation.org/category/audience/entrepreneurs/ +[20]:https://www.linuxfoundation.org/category/content-placement/lf-brand/ +[21]:https://www.linuxfoundation.org/category/audience/open-source-developers/ +[22]:https://www.linuxfoundation.org/category/audience/open-source-professionals/ +[23]:https://www.linuxfoundation.org/category/audience/open-source-users/ From e02f586d98893128509cb347e832e03c9971eea2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 9 Dec 2017 16:03:28 +0800 Subject: [PATCH 161/236] =?UTF-8?q?20171209-17=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Official Ubuntu Flavor Is Best for You.md | 168 ++++++++++++++++++ 1 file changed, 168 insertions(+) create mode 100644 sources/tech/20170512 Which Official Ubuntu Flavor Is Best for You.md diff --git a/sources/tech/20170512 Which Official Ubuntu Flavor Is Best for You.md b/sources/tech/20170512 Which Official Ubuntu Flavor Is Best for You.md new file mode 100644 index 0000000000..960f75034f --- /dev/null +++ b/sources/tech/20170512 Which Official Ubuntu Flavor Is Best for You.md @@ -0,0 +1,168 @@ +Which Official Ubuntu Flavor Is Best for You? 
+============================================================ + + +![Ubuntu Budgie](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_budgie.jpg?itok=xpo3Ujfw "Ubuntu Budgie") +Ubuntu Budgie is just one of the few officially recognized flavors of Ubuntu. Jack Wallen takes a look at some important differences between them.[Used with permission][7] + +Ubuntu Linux comes in a few officially recognized flavors, as well as several derivative distributions. The recognized flavors are: + +* [Kubuntu][9] - Ubuntu with the KDE desktop + +* [Lubuntu][10] - Ubuntu with the LXDE desktop + +* [Mythbuntu][11] - Ubuntu MythTV + +* [Ubuntu Budgie][12] - Ubuntu with the Budgie desktop + +* [Xubuntu][8] - Ubuntu with Xfce + +Up until recently, the official Ubuntu Linux included the in-house Unity desktop and a sixth recognized flavor existed: Ubuntu GNOME -- Ubuntu with the GNOME desktop environment. + +When Mark Shuttleworth decided to nix Unity, the choice was obvious to Canonical—make GNOME the official desktop of Ubuntu Linux. This begins with Ubuntu 18.04 (so April, 2018) and we’ll be down to the official distribution and four recognized flavors. + +For those already enmeshed in the Linux community, that’s some seriously simple math to do—you know which Linux desktop you like, so making the choice between Ubuntu, Kubuntu, Lubuntu, Mythbuntu, Ubuntu Budgie, and Xubuntu couldn’t be easier. Those that haven’t already been indoctrinated into the way of Linux won’t see that as such a cut-and-dried decision. + +To that end, I thought it might be a good idea to help newer users decide which flavor is best for them. After all, choosing the wrong distribution out of the starting gate can make for a less-than-ideal experience. + +And so, if you’re considering a flavor of Ubuntu, and you want your experience to be as painless as possible, read on. + +### Ubuntu + +I’ll begin with the official flavor of Ubuntu. I am also going to warp time a bit and skip Unity, to launch right into the upcoming GNOME-based distribution. Beyond GNOME being an incredibly stable and easy to use desktop environment, there is one very good reason to select the official flavor—support. The official flavor of Ubuntu is commercially supported by Canonical. For $150.00 per year, you can purchase [official support][20] for the Ubuntu desktop. There is, of course, a 50-desktop minimum for this level of support. For individuals, the best bet for support would be the [Ubuntu Forums][21], the [Ubuntu documentation][22], or the [Community help wiki][23]. + +Beyond the commercial support, the reason to choose the official Ubuntu flavor would be if you’re looking for a modern, full-featured desktop that is incredibly reliable and easy to use. GNOME has been designed to serve as a platform perfectly suited for both desktops and laptops (Figure 1). Unlike its predecessor, Unity, GNOME can be far more easily customized to suit your needs—to a point. If you’re not one to tinker with the desktop, fear not, GNOME just works. In fact, the out of the box experience with GNOME might well be one of the finest on the market—even rivaling (or besting) Mac OS X. If tinkering and tweaking is of primary interest, you will find GNOME somewhat limiting. The [GNOME Tweak Tool][24] and [GNOME Shell Extensions ][25]will only take you so far, before you find yourself wanting more. 
+ + +![GNOME desktop](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_a.jpg?itok=Ir6jBKbd "GNOME desktop") + +Figure 1: The GNOME desktop with a Unity-like flavor might be what we see with Ubuntu 18.04.[Used with permission][1] + +### Kubuntu + +The [K Desktop Environment][26] (otherwise known as KDE) has been around as long as GNOME and has, at times, been maligned as a lesser desktop. With the release of KDE Plasma 5, that changed. KDE has become an incredibly powerful, efficient, and stable desktop that can stand toe to toe with the best of them. But why would you select Kubuntu over the official Ubuntu? The answer to that question is quite simple—you’re used to the Windows XP/7 desktop metaphor. Start menu, taskbar, system tray, etc., KDE has those and more, all fashioned in such a way that will make you feel like you’re using the best of the past and current technologies. In fact, if you’re looking for one of the most Windows 7-like official Ubuntu flavors, you won’t find one that better fits the bill. + +One of the nice things about Kubuntu, is that you’ll find it a bit more flexible than any Windows iteration you’ve ever used—and equally reliable/user-friendly. And don’t think, because KDE opts to offer a desktop somewhat similar to Windows 7, that it doesn’t have a modern flavor. In fact, Kubuntu takes what worked well with the Windows 7 interface and updates it to meet a more modern aesthetic (Figure 2). + + +![Kubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_b.jpg?itok=dGpebi4z "Kubuntu") + +Figure 2: Kubuntu offers a modern take on an old UX.[Used with permission][2] + +The official Ubuntu is not the only flavor to offer desktop support. Kubuntu users also can pay for [commercial support][27]. Be warned, it’s not cheap. One hour of support time will cost you $103.88 cents. + +### Lubuntu + +If you’re looking for an easy-to-use desktop that is very fast (so that older hardware will feel like new) and far more flexible than just about any desktop you’ve ever used, Lubuntu is what you want. The only caveat to Lubuntu is that you’re looking at a bit more bare bones on the desktop then you may be accustomed to. Lubuntu makes use of the [LXDE desktop][28] and includes a list of applications that continues the lightweight theme. So if you’re looking for blazing fast speeds on the desktop, Lubuntu might be a good choice. +However, there is a caveat with Lubuntu and, for some users, this might be a deal breaker. Along with the small footprint of Lubuntu come pre-installed applications that might not stand up to task. For example, instead of the full-blown office suite, you’ll find the [AibWord word processor][29] and the [Gnumeric spreadsheet][30] tool. Don’t get me wrong; both of these are fine tools. However, if you’re looking for software that’s business-ready, you will find them lacking. On the other hand, if you want to install more work-centric tools (e.g., LibreOffice), Lubuntu includes the Synaptic Package Manager to make installation of third-party software simple. + +Even with the limited default software, Lubuntu offers a clean and easy to use desktop (Figure 3), that anyone could start using with little to no learning curve. 
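If the default application stack is all that holds you back, swapping in heavier tools is a single hedged command away; this assumes the standard Ubuntu repositories, so package availability may vary by release:

```
$ sudo apt update
$ sudo apt install libreoffice   # pulls in the full suite alongside AbiWord and Gnumeric
```

Synaptic offers the same packages through a point-and-click interface, so the command line is optional here.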
![Lubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_c.jpg?itok=nWsJr39r "Lubuntu")

Figure 3: What Lubuntu lacks in software, it makes up for in speed and simplicity.[Used with permission][3]

### Mythbuntu

Mythbuntu is a sort of odd bird here, because it isn’t really a desktop variant. Instead, Mythbuntu is a special flavor of Ubuntu designed to be a multimedia powerhouse. Using Mythbuntu requires a TV tuner and TV-out card. And, during the installation, there are a number of additional steps that must be taken (choosing how to set up the frontend/backend as well as setting up your IR remotes).

If you do happen to have the hardware (and the desire to create your own Ubuntu-powered entertainment system), Mythbuntu is the distribution you want. Once you’ve installed Mythbuntu, you will then be prompted to walk through the setup of your capture cards, recording profiles, video sources, and input connections (Figure 4).


![Mythbuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_d.jpg?itok=Uk16xUIF "Mythbuntu")

Figure 4: Getting ready to set up Mythbuntu.[Used with permission][4]

### Ubuntu Budgie

Ubuntu Budgie is the new kid on the block in the official flavor list. Sporting the Budgie Desktop, this is a beautiful and modern take on Linux that will please just about any type of user. The goal of Ubuntu Budgie was to create an elegant and simple desktop interface. Mission accomplished. If you’re looking for a beautiful desktop to work on top of the remarkably stable Ubuntu Linux platform, look no further than Ubuntu Budgie.

Adding this particular spin on Ubuntu to the list of official variants was a smart move on the part of Canonical. With Unity going away, they needed a desktop that would offer the elegance found in Unity. Customization of Budgie is very easy, and the list of included software will get you working and browsing immediately.

And, whereas Unity presented many users with a learning curve, the developers/designers of Ubuntu Budgie have done a remarkable job of keeping this take on Ubuntu familiar. Click on the “start” button to reveal a fairly standard menu of applications. Budgie also includes an easy to use Dock (Figure 5) that holds application launchers for quick access.


![Budgie](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_e.jpg?itok=mwlo4xzm "Budgie")

Figure 5: This is one beautiful desktop.[Used with permission][5]

Another really nice feature found in Ubuntu Budgie is a sidebar that can be quickly revealed and hidden. This sidebar holds applets and notifications. With this in play, your desktop can be incredibly useful while remaining clutter-free.

In the end, if you’re looking for something a bit different that also happens to be a very modern take on the desktop—with features and functions not found on other distributions—Ubuntu Budgie is what you’re looking for.

### Xubuntu

Another official flavor of Ubuntu that does a nice job of providing a small footprint version of Linux is [Xubuntu][32]. The difference between Xubuntu and Lubuntu is that, where Lubuntu uses the LXDE desktop, Xubuntu makes use of [Xfce][33]. What you get with that difference is a lightweight desktop that is far more configurable (than Lubuntu) as well as one that includes the more business-ready LibreOffice office suite.

Xubuntu is an out of the box experience that anyone, regardless of experience, can use.
But don't think that immediate familiarity means you can't make this flavor of Ubuntu your own. If you're looking for a take on Ubuntu that's somewhat old-school out of the box, but can be heavily tweaked to better resemble a more modern desktop, Xubuntu is what you want.

One really handy addition to Xubuntu that I've always enjoyed (one that harks back to Enlightenment) is the ability to bring up the "start" menu by right-clicking anywhere on the desktop (Figure 6). This can make for very efficient usage.


![Xubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/xubuntu.jpg?itok=XL8_hLet "Xubuntu")

Figure 6: Xubuntu lets you bring up the "start" menu by right-clicking anywhere on the desktop.[Used with permission][6]

### The choice is yours

There is a flavor of Ubuntu to meet nearly any need—which one you choose is up to you. Ask yourself questions such as:

* What are your needs?

* What type of desktop do you prefer to interact with?

* Is your hardware aging?

* Do you prefer a Windows XP/7 feel?

* Do you want a multimedia system?

Your answers to the above questions will go a long way toward determining which flavor of Ubuntu is right for you. The good news is that you can’t really go wrong with any of the available options.

 _Learn more about Linux through the free ["Introduction to Linux"][31] course from The Linux Foundation and edX._

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2017/5/which-official-ubuntu-flavor-best-you

作者:[ JACK WALLEN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://www.linux.com/licenses/category/used-permission
[7]:https://www.linux.com/licenses/category/used-permission
[8]:http://xubuntu.org/
[9]:http://www.kubuntu.org/
[10]:http://lubuntu.net/
[11]:http://www.mythbuntu.org/
[12]:https://ubuntubudgie.org/
[13]:https://www.linux.com/files/images/ubuntuflavorajpg
[14]:https://www.linux.com/files/images/ubuntuflavorbjpg
[15]:https://www.linux.com/files/images/ubuntuflavorcjpg
[16]:https://www.linux.com/files/images/ubuntuflavordjpg
[17]:https://www.linux.com/files/images/ubuntuflavorejpg
[18]:https://www.linux.com/files/images/xubuntujpg
[19]:https://www.linux.com/files/images/ubuntubudgiejpg
[20]:https://buy.ubuntu.com/collections/ubuntu-advantage-for-desktop
[21]:https://ubuntuforums.org/
[22]:https://help.ubuntu.com/?_ga=2.155705979.1922322560.1494162076-828730842.1481046109
[23]:https://help.ubuntu.com/community/CommunityHelpWiki?_ga=2.155705979.1922322560.1494162076-828730842.1481046109
[24]:https://apps.ubuntu.com/cat/applications/gnome-tweak-tool/
[25]:https://extensions.gnome.org/
[26]:https://www.kde.org/
[27]:https://kubuntu.emerge-open.com/buy
[28]:http://lxde.org/
[29]:https://www.abisource.com/
[30]:http://www.gnumeric.org/
[31]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[32]:https://xubuntu.org/
[33]:https://www.xfce.org/

From 70194d29757054b78146c4e410616dde0128f2bf Mon Sep 17 00:00:00 2001
From: Ezio 
Date: Sat, 9 Dec 2017 16:06:13 +0800
Subject: [PATCH 162/236] =?UTF-8?q?20171209-18=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...216 GitHub Is Building a Coder Paradise.md | 67 +++++++++++++++++++
 1 file changed, 67 insertions(+)
 create mode 100644 sources/tech/20161216 GitHub Is Building a Coder Paradise.md

diff --git a/sources/tech/20161216 GitHub Is Building a Coder Paradise.md b/sources/tech/20161216 GitHub Is Building a Coder Paradise.md
new file mode 100644
index 0000000000..36e9e76343
--- /dev/null
+++ b/sources/tech/20161216 GitHub Is Building a Coder Paradise.md
@@ -0,0 +1,67 @@
GitHub Is Building a Coder’s Paradise. It’s Not Coming Cheap
============================================================

The VC-backed unicorn startup lost $66 million in nine months of 2016, financial documents show.


Though the name GitHub is practically unknown outside technology circles, coders around the world have embraced the software. The startup operates a sort of Google Docs for programmers, giving them a place to store, share and collaborate on their work. But GitHub Inc. is losing money through profligate spending and has stood by as new entrants emerged in a software category it essentially gave birth to, according to people familiar with the business and financial paperwork reviewed by Bloomberg.

The rise of GitHub has captivated venture capitalists. Sequoia Capital led a $250 million investment in mid-2015. But GitHub management may have been a little too eager to spend the new money. The company paid to send employees jetting across the globe to Amsterdam, London, New York and elsewhere. More costly, it doubled headcount to 600 over the course of about 18 months.

GitHub lost $27 million in the fiscal year that ended in January 2016, according to an income statement seen by Bloomberg. It generated $95 million in revenue during that period, the internal financial document says.

![Chris Wanstrath, co-founder and chief executive officer at GitHub Inc., speaks during the 2015 Bloomberg Technology Conference in San Francisco, California, U.S., on Tuesday, June 16, 2015. The conference gathers global business leaders, tech influencers, top investors and entrepreneurs to shine a spotlight on how coders and coding are transforming business and fueling disruption across all industries. Photographer: David Paul Morris/Bloomberg](https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iXpmtRL9Q0C4/v0/400x-1.jpg)
GitHub CEO Chris Wanstrath. Photographer: David Paul Morris/Bloomberg

Sitting in a conference room featuring an abstract art piece on the wall and a Mad Men-style rollaway bar cart in the corner, GitHub’s Chris Wanstrath says the business is running more smoothly now and growing. “What happened to 2015?” says the 31-year-old co-founder and chief executive officer. “Nothing was getting done, maybe? I shouldn’t say that. Strike that.”

GitHub recently hired Mike Taylor, the former treasurer and vice president of finance at Tesla Motors Inc., to manage spending as chief financial officer. It also hopes to add a seasoned chief operating officer. GitHub has already surpassed last year’s revenue in nine months this year, with $98 million, the financial document shows. “The whole product road map, we have all of our shit together in a way that we’ve never had together. 
I’m pretty elated right now with the way things are going,” says Wanstrath. “We’ve had a lot of ups and downs, and right now we’re definitely in an up.”

Also up: expenses. The income statement shows a loss of $66 million in the first three quarters of this year. That’s more than twice as much lost in any nine-month time frame by Twilio Inc., another maker of software tools founded the same year as GitHub. At least a dozen members of GitHub’s leadership team have left since last year, several of whom expressed unhappiness with Wanstrath’s management style. GitHub says the company has flourished under his direction but declined to comment on finances. Wanstrath says: “We raised $250 million last year, and we’re putting it to use. We’re not expecting to be profitable right now.”

Wanstrath started GitHub with three friends during the recession of 2008 and bootstrapped the business for four years. They encouraged employees to [work remotely][1], which forced the team to adopt GitHub’s tools for their own projects and had the added benefit of saving money on office space. GitHub quickly became essential to the code-writing process at technology companies of all sizes and gave birth to a new generation of programmers by hosting their open-source code for free.

Peter Levine, a partner at Andreessen Horowitz, courted the founders and eventually convinced them to take their first round of VC money in 2012. The firm led a $100 million cash infusion, and Levine joined the board. The next year, GitHub signed a seven-year lease worth about $35 million for a headquarters in San Francisco, says a person familiar with the project.

The new digs gave employees a reason to come into the office. Visitors would enter a lobby modeled after the White House’s Oval Office before making their way to a replica of the Situation Room. The company also erected a statue of its mascot, a cartoon octopus-cat creature known as the Octocat. The 55,000-square-foot space is filled with wooden tables and modern art.

In GitHub’s cultural hierarchy, the coder is at the top. The company has strived to create the best product possible for software developers and watch them flock to it. In addition to offering its base service for free, GitHub sells more advanced programming tools to companies big and small. But it found that some chief information officers want a human touch and began to consider building out a sales team.

The issue took on a new sense of urgency in 2014 with the formation of a rival startup with a similar name. GitLab Inc. went after large businesses from the start, offering them a cheaper alternative to GitHub. “The big differentiator for GitLab is that it was designed for the enterprise, and GitHub was not,” says GitLab CEO Sid Sijbrandij. “One of the values is frugality, and this is something very close to our heart. We want to treat our team members really well, but we don’t want to waste any money where it’s not needed. So we don’t have a big fancy office because we can be effective without it.”

Y Combinator, a Silicon Valley business incubator, welcomed GitLab into the fold last year. GitLab says more than 110,000 organizations, including IBM and Macy’s Inc., use its software. (IBM also uses GitHub.) Atlassian Corp. has taken a similar top-down approach with its own code repository Bitbucket.

Wanstrath says the competition has helped validate GitHub’s business. “When we started, people made fun of us and said there is no money in developer tools,” he says. 
“I’ve kind of been waiting for this for a long time—to be proven right, that this is a real market.” + +![GitHub_Office-03](https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iQB5sqXgihdQ/v0/400x-1.jpg) +Source: GitHub + +It also spurred GitHub into action. With fresh capital last year valuing the company at $2 billion, it went on a hiring spree. It spent $71 million on salaries and benefits last fiscal year, according to the financial document seen by Bloomberg. This year, those costs rose to $108 million from February to October, with three months still to go in the fiscal year, the document shows. This was the startup’s biggest expense by far. + +The emphasis on sales seemed to be making an impact, but the team missed some of its targets, says a person familiar with the matter. In September 2014, subscription revenue on an annualized basis was about $25 million each from enterprise sales and organizations signing up through the site, according to another financial document. After GitHub staffed up, annual recurring revenue from large clients increased this year to $70 million while the self-service business saw healthy, if less dramatic, growth to $52 million. + +But the uptick in revenue wasn’t keeping pace with the aggressive hiring. GitHub cut about 20 employees in recent weeks. “The unicorn trap is that you’ve sold equity against a plan that you often can’t hit; then what do you do?” says Nick Sturiale, a VC at Ignition Partners. + +Such business shifts are risky, and stumbles aren’t uncommon, says Jason Lemkin, a corporate software VC who’s not an investor in GitHub. “That transition from a self-service product in its early days to being enterprise always has bumps,” he says. GitHub says it has 18 million users, and its Enterprise service is used by half of the world’s 10 highest-grossing companies, including Wal-Mart Stores Inc. and Ford Motor Co. + +Some longtime GitHub fans weren’t happy with the new direction, though. More than 1,800 developers signed an online petition, saying: “Those of us who run some of the most popular projects on GitHub feel completely ignored by you.” + +The backlash was a wake-up call, Wanstrath says. GitHub is now more focused on its original mission of catering to coders, he says. “I want us to be judged on, ‘Are we making developers more productive?’” he says. At GitHub’s developer conference in September, Wanstrath introduced several new features, including an updated process for reviewing code. He says 2016 was a “marquee year.” + + +At least five senior staffers left in 2015, and turnover among leadership continued this year. Among them was co-founder and CIO Scott Chacon, who says he left to start a new venture. “GitHub was always very good to me, from the first day I started when it was just the four of us,” Chacon says. “They allowed me to travel the world representing them; they supported my teaching and evangelizing Git and remote work culture for a long time.” + +The travel excursions are expected to continue at GitHub, and there’s little evidence it can rein in spending any time soon. The company says about half its staff is remote and that the trips bring together GitHub’s distributed workforce and encourage collaboration. Last week, at least 20 employees on GitHub’s human-resources team convened in Rancho Mirage, California, for a retreat at the Ritz Carlton. 
+

--------------------------------------------------------------------------------

via: https://www.bloomberg.com/news/articles/2016-12-15/github-is-building-a-coder-s-paradise-it-s-not-coming-cheap

作者:[Eric Newcomer ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.bloomberg.com/authors/ASFMS16EsvU/eric-newcomer
[1]:https://www.bloomberg.com/news/articles/2016-09-06/why-github-finally-abandoned-its-bossless-workplace

From eecab508e8e9fab95738192d47e1ad31a565c5da Mon Sep 17 00:00:00 2001
From: Ezio 
Date: Sat, 9 Dec 2017 16:08:39 +0800
Subject: [PATCH 163/236] =?UTF-8?q?20171209-19=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...d vision recognition kit for RPi Zero W.md | 85 +++++++++++++++++++
 1 file changed, 85 insertions(+)
 create mode 100644 sources/tech/20171130 Google launches TensorFlow-based vision recognition kit for RPi Zero W.md

diff --git a/sources/tech/20171130 Google launches TensorFlow-based vision recognition kit for RPi Zero W.md b/sources/tech/20171130 Google launches TensorFlow-based vision recognition kit for RPi Zero W.md
new file mode 100644
index 0000000000..5bf32964e4
--- /dev/null
+++ b/sources/tech/20171130 Google launches TensorFlow-based vision recognition kit for RPi Zero W.md
@@ -0,0 +1,85 @@
# [Google launches TensorFlow-based vision recognition kit for RPi Zero W][26]


![](http://linuxgizmos.com/files/google_aiyvisionkit-thm.jpg)
Google’s $45 “AIY Vision Kit” for the Raspberry Pi Zero W performs TensorFlow-based vision recognition using a “VisionBonnet” board with a Movidius chip.

Google’s AIY Vision Kit for on-device neural network acceleration follows an earlier [AIY Projects][7] voice/AI kit for the Raspberry Pi that shipped to MagPi subscribers back in May. Like the voice kit and the older Google Cardboard VR viewer, the new AIY Vision Kit has a cardboard enclosure. The kit differs from the [Cloud Vision API][8], which was demo’d in 2015 with a Raspberry Pi based GoPiGo robot, in that it runs entirely on local processing power rather than requiring a cloud connection. The AIY Vision Kit is available now for pre-order at $45, with shipments due in early December.


 [![](http://linuxgizmos.com/files/google_aiyvisionkit-sm.jpg)][9]   [![](http://linuxgizmos.com/files/rpi_zerow-sm.jpg)][10]
**AIY Vision Kit, fully assembled (left) and Raspberry Pi Zero W**
(click images to enlarge)


The kit’s key processing element, aside from the 1GHz ARM11-based Broadcom BCM2835 SoC found on the required [Raspberry Pi Zero W][21] SBC, is Google’s new VisionBonnet RPi accessory board. The VisionBonnet pHAT board uses a Movidius MA2450, a version of the [Movidius Myriad 2 VPU][22] processor. On the VisionBonnet, the processor runs Google’s open source [TensorFlow][23] machine intelligence library for neural networking. The chip enables visual perception processing at up to 30 frames per second.

The AIY Vision Kit requires a user-supplied RPi Zero W, a [Raspberry Pi Camera v2][11], and a 16GB micro SD card for downloading the Linux-based image. The kit includes the VisionBonnet, an RGB arcade-style button, a piezo speaker, a macro/wide lens kit, and the cardboard enclosure. You also get flex cables, standoffs, a tripod mounting nut, and connecting components. 
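The article stops short of the flashing step, but writing the downloaded image to the micro SD card on a Linux machine usually looks like the sketch below. The image filename is a placeholder, and `/dev/sdX` must be replaced with the card's real device node; double-check with `lsblk` first, because `dd` pointed at the wrong disk destroys data:

```
$ lsblk                                    # identify the SD card, e.g. /dev/sdb
$ sudo dd if=aiy_vision_image.img of=/dev/sdX bs=4M status=progress
$ sync                                     # flush buffers before removing the card
```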
+
+
+ [![](http://linuxgizmos.com/files/google_aiyvisionkit_pieces-sm.jpg)][12]   [![](http://linuxgizmos.com/files/google_visionbonnet-sm.jpg)][13]
+**AIY Vision Kit components (left) and VisionBonnet accessory board**
+(click images to enlarge)
+
+
+Three neural network models are available. There’s a general-purpose model that can recognize 1,000 common objects, a facial detection model that can also score facial expression on a “joy scale” that ranges from “sad” to “laughing,” and a model that can identify whether the image contains a dog, cat, or human. The 1,000-image model derives from Google’s open source [MobileNets][24], a family of TensorFlow-based computer vision models designed for the restricted resources of a mobile or embedded device.
+
+MobileNet models offer low latency and low power consumption, and are parameterized to meet the resource constraints of different use cases. The models can be built for classification, detection, embeddings, and segmentation, says Google. Earlier this month, Google released a developer preview of a mobile-friendly [TensorFlow Lite][14] library for Android and iOS that is compatible with MobileNets and the Android Neural Networks API.
+
+
+ [![](http://linuxgizmos.com/files/google_aiyvisionkit_assembly-sm.jpg)][15]
+**AIY Vision Kit assembly views**
+(click image to enlarge)
+
+
+In addition to providing the three models, the AIY Vision Kit provides basic TensorFlow code and a compiler, so users can develop their own models. Python developers can also write new software to customize RGB button colors, piezo element sounds, and 4x GPIO pins on the VisionBonnet that can add additional lights, buttons, or servos. Potential models include recognizing food items, opening a dog door based on visual input, sending a text when your car leaves the driveway, or playing particular music based on facial recognition of a person entering the camera’s viewpoint.
+
+
+ [![](http://linuxgizmos.com/files/movidius_myriad2vpu_block-sm.jpg)][16]   [![](http://linuxgizmos.com/files/movidius_myriad2_reference_board-sm.jpg)][17]
+**Myriad 2 VPU block diagram (left) and reference board**
+(click image to enlarge)
+
+
+The Movidius Myriad 2 processor provides TeraFLOPS of performance within a nominal 1 Watt power envelope. The chip appeared on early Project Tango reference platforms, and is built into the Ubuntu-driven [Fathom][25] neural processing USB stick that Movidius debuted in May 2016, prior to being acquired by Intel. According to Movidius, the Myriad 2 is available “in millions of devices on the market today.”
+
+**Further information**
+
+The AIY Vision Kit is available for pre-order from Micro Center at $44.99, with shipments due in early December. More information may be found in the AIY Vision Kit [announcement][18], [Google Blog notice][19], and [Micro Center shopping page][20].
+
+--------------------------------------------------------------------------------
+
+via: http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
+
+作者:[ Eric Brown][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
+[7]:http://linuxgizmos.com/free-raspberry-pi-voice-kit-taps-google-assistant-sdk/
+[8]:http://linuxgizmos.com/google-releases-cloud-vision-api-with-demo-for-pi-based-robot/
+[9]:http://linuxgizmos.com/files/google_aiyvisionkit.jpg
+[10]:http://linuxgizmos.com/files/rpi_zerow.jpg
+[11]:http://linuxgizmos.com/raspberry-pi-cameras-jump-to-8mp-keep-25-dollar-price/
+[12]:http://linuxgizmos.com/files/google_aiyvisionkit_pieces.jpg
+[13]:http://linuxgizmos.com/files/google_visionbonnet.jpg
+[14]:https://developers.googleblog.com/2017/11/announcing-tensorflow-lite.html
+[15]:http://linuxgizmos.com/files/google_aiyvisionkit_assembly.jpg
+[16]:http://linuxgizmos.com/files/movidius_myriad2vpu_block.jpg
+[17]:http://linuxgizmos.com/files/movidius_myriad2_reference_board.jpg
+[18]:https://blog.google/topics/machine-learning/introducing-aiy-vision-kit-make-devices-see/
+[19]:https://developers.googleblog.com/2017/11/introducing-aiy-vision-kit-add-computer.html
+[20]:http://www.microcenter.com/site/content/Google_AIY.aspx?ekw=aiy&rd=1
+[21]:http://linuxgizmos.com/raspberry-pi-zero-w-adds-wifi-and-bluetooth-for-only-5-more/
+[22]:https://www.movidius.com/solutions/vision-processing-unit
+[23]:https://www.tensorflow.org/
+[24]:https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html
+[25]:http://linuxgizmos.com/usb-stick-brings-neural-computing-functions-to-devices/
+[26]:http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
From a17560c6964ea00b15a48dda40041ec9cad2a792 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 9 Dec 2017 17:32:16 +0800
Subject: [PATCH 164/236] =?UTF-8?q?update=20at=202017=E5=B9=B4=2012?=
 =?UTF-8?q?=E6=9C=88=2009=E6=97=A5=20=E6=98=9F=E6=9C=9F=E5=85=AD=2017:32:1?=
 =?UTF-8?q?6=20CST?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...171206 How to extract substring in Bash.md | 87 ++++++++++---------
 1 file changed, 46 insertions(+), 41
deletions(-)
 rename {sources => translated}/tech/20171206 How to extract substring in Bash.md (59%)

diff --git a/sources/tech/20171206 How to extract substring in Bash.md b/translated/tech/20171206 How to extract substring in Bash.md
similarity index 59%
rename from sources/tech/20171206 How to extract substring in Bash.md
rename to translated/tech/20171206 How to extract substring in Bash.md
index 945b8bd4dd..f1deaebab9 100644
--- a/sources/tech/20171206 How to extract substring in Bash.md
+++ b/translated/tech/20171206 How to extract substring in Bash.md
@@ -1,52 +1,51 @@
-translating by lujun9972
-How to extract substring in Bash
+如何在 Bash 中抽取子字符串
 ======
-A substring is nothing but a string is a string that occurs “in”. For example “3382” is a substring of “this is a 3382 test”. One can extract the digits or given string using various methods.
+子字符串不是别的,就是出现在其他字符串内的字符串。比如 “3382” 就是 “this is a 3382 test” 的子字符串。我们有多种方法可以把其中的数字或指定部分的字符串抽取出来。

 [![How to Extract substring in Bash Shell on Linux or Unix](https://www.cyberciti.biz/media/new/faq/2017/12/How-to-Extract-substring-in-Bash-Shell-on-Linux-or-Unix.jpg)][2]

-This quick tutorial shows how to obtain or finds substring when using bash shell.
+本文会向你展示在 bash shell 中如何获取或者说查找出子字符串。

-### Extract substring in Bash
+### 在 Bash 中抽取子字符串

-The syntax is: ## syntax ## ${parameter:offset:length} The substring expansion is a bash feature. It expands to up to length characters of the value of parameter starting at the character specified by offset. For example, $u defined as follows:
-
-|
+其语法为:
+```shell
+## syntax ##
+${parameter:offset:length}
 ```
+子字符串扩展是 bash 的一项功能。它会扩展成 parameter 值中从 offset 位置开始、长为 length 个字符的字符串。假设 $u 定义如下:
+
+```shell
 ## define var named u ##
 u="this is a test"
 ```
- |

-The following substring parameter expansion performs substring extraction:
+那么下面参数的子字符串扩展会抽取出子字符串:

-|
-```
+```shell
 var="${u:10:4}"
 echo "${var}"
 ```
- |

-Sample outputs:
+结果为:

 ```
 test
 ```

-* 10 : The offset
+其中这些参数分别表示:
++ 10 : 偏移位置
++ 4 : 长度

-* 4 : The length

-### Using IFS
+### 使用 IFS

-From the bash man page:
+根据 bash 的 man 页说明:

-> The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is .
+> The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is `<space><tab><newline>`.

-Another POSIX ready solution is as follows:
+另一种兼容 POSIX 的方案如下:

-|
-```
+```shell
 u="this is a test"
 set -- $u
 echo "$1"
@@ -54,20 +53,20 @@ echo "$2"
 echo "$3"
 echo "$4"
 ```
- |

-Sample outputs:
+输出为:

-```
+```shell
 this
 is
 a
 test
 ```

-|
-```
-#!/bin/bash
+下面是一段 bash 代码,用来从 Cloudflare 缓存中清除指定的 URL 及对应的主页:
+
+```shell
+#!/bin/bash
 ####################################################
 ## Author - Vivek Gite {https://www.cyberciti.biz/}
 ## Purpose - Purge CF cache
@@ -98,7 +97,7 @@ get_home_url(){
 }
 echo
-echo "Purging cache from Cloudflare..."
+echo "Purging cache from Cloudflare。.。" echo for u in $urls do @@ -108,21 +107,22 @@ do -H "X-Auth-Email: ${email_id}" \ -H "X-Auth-Key: ${api_key}" \ -H "Content-Type: application/json" \ - --data "{\"files\":[\"${u}\",\"${amp_url}\",\"${home_url}\"]}" + --data "{\"files\":[\"${u}\",\"${amp_url}\",\"${home_url}\"]}" echo done echo ``` - | -I can run it as follows: ~/bin/cf.clear.cache https://www.cyberciti.biz/faq/bash-for-loop/ https://www.cyberciti.biz/tips/linux-security.html - -### Say hello to cut command - -One can remove sections from each line of file or variable using the cut command. The syntax is: - -| +它的使用方法为: +```shell +~/bin/cf.clear.cache https://www.cyberciti.biz/faq/bash-for-loop/ https://www.cyberciti.biz/tips/linux-security.html ``` + +### 借助 cut 命令 + +可以使用 cut 命令来将文件中每一行或者变量中的一部分删掉。它的语法为: + +```shell u="this is a test" echo "$u" | cut -d' ' -f 4 echo "$u" | cut --delimiter=' ' --fields=4 @@ -134,9 +134,14 @@ echo "$u" | cut --delimiter=' ' --fields=4 var="$(cut -d' ' -f 4 <<< $u)" echo "${var}" ``` - | -For more info read bash man page: man bash man cut See also: [Bash String Comparison: Find Out IF a Variable Contains a Substring][1] +想了解更多请阅读 bash 的 man 页: +```shell +man bash +man cut +``` + +另请参见: [Bash String Comparison: Find Out IF a Variable Contains a Substring][1] -------------------------------------------------------------------------------- From 2e6ce6badf87ae04b3f13dd23eb01f3ed70c52fa Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 9 Dec 2017 17:53:04 +0800 Subject: [PATCH 165/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Sessions=20And=20?= =?UTF-8?q?Cookies=20=E2=80=93=20How=20Does=20User-Login=20Work?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... And Cookies – How Does User-Login Work.md | 73 +++++++++++++++++++ 1 file changed, 73 insertions(+) create mode 100644 sources/tech/20171208 Sessions And Cookies – How Does User-Login Work.md diff --git a/sources/tech/20171208 Sessions And Cookies – How Does User-Login Work.md b/sources/tech/20171208 Sessions And Cookies – How Does User-Login Work.md new file mode 100644 index 0000000000..a53b0f8d61 --- /dev/null +++ b/sources/tech/20171208 Sessions And Cookies – How Does User-Login Work.md @@ -0,0 +1,73 @@ +translating by lujun9972 +Sessions And Cookies – How Does User-Login Work? +====== +Facebook, Gmail, Twitter we all use these websites every day. One common thing among them is that they all require you to log in to do stuff. You cannot tweet on twitter, comment on Facebook or email on Gmail unless you are authenticated and logged in to the service. + + [![gmail, facebook login page](http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg)][1] + +So how does it work? How does the website authenticate us? How does it know which user is logged in and from where? Let us answer each of these questions below. + +### How User-Login works? + +Whenever you enter your username and password in the login page of a site, the information you enter is sent to the server. The server then validates your password against the password on the server. If it doesn’t match, you get an error of incorrect password. But if it matches, you get logged in. + +### What happens when I get logged in? + +When you get logged in, the web server initiates a session and sets a cookie variable in your browser. The cookie variable then acts as a reference to the session created. Confused? Let us simplify this. + +### How does Session work? 
+
+When the username and password are right, the server initiates a session. Sessions have a really complicated definition, so I like to call them the ‘beginning of a relationship’.
+
+ [![session beginning of a relationship or partnership](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png)][2]
+
+When the credentials are right, the server begins a relationship with you. Since the server cannot see the way we humans can, it sets a cookie in our browser to distinguish our unique relationship from all the other relationships that other people have with the server.
+
+### What is a Cookie?
+
+A cookie is a small amount of data that websites can store in your browser. You must have seen them here.
+
+ [![theitstuff official facebook page cookies](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png)][3]
+
+So when you log in and the server has created a relationship or session with you, it takes the session id, the unique identifier of that session, and stores it in your browser in the form of a cookie.
+
+### What’s the Point?
+
+The reason all of this is needed is to verify that it’s you, so that when you comment or tweet, the server knows who made that tweet or that comment.
+
+As soon as you’re logged in, a cookie is set which contains the session id. Now, this session id is granted to the person who enters the correct username and password combination.
+
+ [![facebook cookies in web browser](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png)][4]
+
+So the session id is granted to the person who owns that account. Now whenever an activity is performed on that website, the server knows who it was by their session id.
+
+### Keep me logged in?
+
+Sessions have a time limit. Unlike the real world, where relationships can last long stretches without the people ever seeing each other, sessions expire if you go quiet. You have to keep telling the server that you are online by performing an action every so often. If that doesn’t happen, the server will close the session and you will be logged out.
+
+ [![websites keep me logged in option](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png)][5]
+
+But when we use the Keep me logged in feature on some websites, we allow them to store another unique variable in the form of a cookie in our browsers. This unique variable is used to automatically log us in by checking it against the one on the server. When someone steals this unique identifier, it is called cookie stealing. They then get access to your account.
+
+### Conclusion
+
+We discussed how login systems work and how we are authenticated on a website. We also learned what sessions and cookies are and how they are used in the login mechanism; the short curl sketch below shows the same cookie flow from the command line.
+
+I hope you have grasped how user login works, and if you still have any doubts, just drop a comment and I’ll be there for you.
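+
+Here is a minimal sketch using curl. The URL and form fields are hypothetical stand-ins for whatever site you are testing against, not a real API:
+
+```shell
+# Log in once; -c saves the cookies the server sets (including the session id)
+curl -c cookies.txt -d 'username=alice' -d 'password=secret' https://example.com/login
+
+# Replay the stored cookies with -b on later requests, so the server
+# recognizes the same session and treats us as logged in
+curl -b cookies.txt https://example.com/dashboard
+```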
+ +-------------------------------------------------------------------------------- + +via: http://www.theitstuff.com/sessions-cookies-user-login-work + +作者:[Rishabh Kandari][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.theitstuff.com/author/reevkandari +[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg +[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png +[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png +[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png +[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png From 395752355d42394ef687715127ddf6b11e12ce2e Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 9 Dec 2017 18:16:39 +0800 Subject: [PATCH 166/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Executing=20Comma?= =?UTF-8?q?nds=20and=20Scripts=20at=20Reboot=20&=20Startup=20in=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nd Scripts at Reboot & Startup in Linux.md | 64 +++++++++++++++++++ 1 file changed, 64 insertions(+) create mode 100644 sources/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md diff --git a/sources/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md b/sources/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md new file mode 100644 index 0000000000..04c922aa41 --- /dev/null +++ b/sources/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md @@ -0,0 +1,64 @@ +translating by lujun9972 +Executing Commands and Scripts at Reboot & Startup in Linux +====== +There might arise a need to execute a command or scripts at reboot or every time when we start our system. So how can we do that, in this tutorial we are going to discuss just that. We will discuss how we can make our CentOS/RHEL and Ubuntu systems to execute a command or scripts at reboot or at system startup using two different methods. Both the methods are tested and works just fine, + +### Method 1 – Using rc.local + +In this method, we will use ‘rc.local’ file located in ‘/etc/’ to execute our scripts and commands at startup. We will make an entry to execute the script in the file & every time when our system starts, the script will be executed. + +But we will first provide the permissions to make the file /etc/rc.local executable, + +$ sudo chmod +x /etc/rc.local + +Next we will add the script to be executed in the file, + +$ sudo vi /etc/rc.local + +& at the bottom of file, add the entry + +sh /root/script.sh & + +Now save the file & exit. Similarly we can execute a command using rc.local file but we need to make sure that we mention the full path of the command. To locate the full command path, run + +$ which command + +For example, + +$ which shutter + +/usr/bin/shutter + +For CentOS, we use file ‘/etc/rc.d/rc.local’ instead of ‘/etc/rc.local’. We also need to make this file executable before adding any script or command to the file. + +Note:- When executing a script at startup, make sure that the script ends with ‘exit 0’. + +( Recommended Read : ) + +### Method 2 – Crontab method + +This method is the easiest method of the two methods. 
We will create a cron job that will wait for 90 seconds after system startup & then will execute the command or script on the system.
+
+To create a cron job, open terminal & run
+
+$ crontab -e
+
+& enter the following line ,
+
+@reboot ( sleep 90 ; sh \location\script.sh )
+
+where \location\script.sh is the location of script to be executed.
+
+So this was our tutorial on how to execute a script or a command when system starts up. Please leave your queries, if any , using the comment box below
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/executing-commands-scripts-at-reboot/
+
+作者:[Shusain][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
From 2baffb6ff801b527c6a955b6d1a79a8755a95aa5 Mon Sep 17 00:00:00 2001
From: iron0x <2727586680@qq.com>
Date: Sat, 9 Dec 2017 20:18:04 +0800
Subject: [PATCH 167/236] create translated file "20171202 docker - Use
 multi-stage builds.md"

create translated file "20171202 docker - Use multi-stage builds.md"
---
 ...0171202 docker - Use multi-stage builds.md | 128 ++++++++++++++++++
 1 file changed, 128 insertions(+)
 create mode 100644 translated/tech/20171202 docker - Use multi-stage builds.md

diff --git a/translated/tech/20171202 docker - Use multi-stage builds.md b/translated/tech/20171202 docker - Use multi-stage builds.md
new file mode 100644
index 0000000000..b8bb2acf0a
--- /dev/null
+++ b/translated/tech/20171202 docker - Use multi-stage builds.md
@@ -0,0 +1,128 @@
+使用多阶段构建
+============================================================
+
+多阶段构建是 Docker 17.05 或更高版本提供的新功能。对于那些一直在努力优化 `Dockerfile`、同时又想保持它易读易维护的人来说,多阶段构建非常有用。
+
+> 致谢:特别感谢 [Alex Ellis][1] 授权使用他关于 Docker 多阶段构建的博客文章 [Builder pattern vs. Multi-stage builds in Docker][2] 作为以下示例的基础。
+
+### 在多阶段构建之前
+
+关于构建镜像,最具挑战性的事情之一是保持镜像体积小巧。`Dockerfile` 中的每条指令都会给镜像增加一层,并且在进入下一层之前,需要记得清除不再需要的构件。要编写一个真正高效的 `Dockerfile`,传统上您需要借助 shell 技巧和其他逻辑来尽可能减少层数,并确保每一层只携带它需要的构件,而没有任何多余的东西。
+
+实际上,常见的做法是准备一个用于开发的 `Dockerfile`(其中包含构建应用程序所需的所有内容),以及另一个精简过的用于生产环境的 `Dockerfile`,后者只包含您的应用程序以及运行它所需的内容。这被称为“构建器模式”(builder pattern)。维护两个 `Dockerfile` 并不理想。
+
+下面分别是遵循上述构建器模式的 `Dockerfile.build` 和 `Dockerfile` 的例子:
+
+`Dockerfile.build`:
+
+```
+FROM golang:1.7.3
+WORKDIR /go/src/github.com/alexellis/href-counter/
+RUN go get -d -v golang.org/x/net/html
+COPY app.go .
+RUN go get -d -v golang.org/x/net/html \
+  && CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

+```
+
+注意这个例子还使用 Bash 的 && 运算符人为地将两个 `RUN` 命令压缩在一起,以避免在镜像中创建额外的层。这种做法很容易出错,也很难维护。例如,插入另一条命令时,很容易忘记使用 `\` 字符续行。
+
+`Dockerfile`:
+
+```
+FROM alpine:latest
+RUN apk --no-cache add ca-certificates
+WORKDIR /root/
+COPY app .
+CMD ["./app"]

+```
+
+`build.sh`:
+
+```
+#!/bin/sh
+echo Building alexellis2/href-counter:build

+docker build --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \
+  -t alexellis2/href-counter:build . -f Dockerfile.build

+docker create --name extract alexellis2/href-counter:build
+docker cp extract:/go/src/github.com/alexellis/href-counter/app ./app
+docker rm -f extract

+echo Building alexellis2/href-counter:latest

+docker build --no-cache -t alexellis2/href-counter:latest .
+rm ./app

+```
+
+当您运行 `build.sh` 脚本时,它会先构建第一个镜像,再从中创建一个容器以便将构件复制出来,然后构建第二个镜像。这两个镜像会占用您系统的空间,而且 `app` 构件也还残留在您的本地磁盘上。
+
+多阶段构建大大简化了这种情况!
+
+### 使用多阶段构建
+
+在多阶段构建中,您需要在 `Dockerfile` 中多次使用 `FROM` 语句。
每条 `FROM` 指令可以使用不同的基础镜像,并且每条 `FROM` 指令都会开始一个新的构建阶段。您可以有选择地将构件从一个阶段复制到另一个阶段,从而把所有不需要的内容留在最终镜像之外。为了演示这是如何工作的,让我们调整前一节中的 `Dockerfile` 以使用多阶段构建。
+
+`Dockerfile`:
+
+```
+FROM golang:1.7.3
+WORKDIR /go/src/github.com/alexellis/href-counter/
+RUN go get -d -v golang.org/x/net/html
+COPY app.go .
+RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

+FROM alpine:latest
+RUN apk --no-cache add ca-certificates
+WORKDIR /root/
+COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
+CMD ["./app"]

+```
+
+您只需要一个 `Dockerfile`,不再需要单独的构建脚本,只需运行 `docker build` 即可。
+
+```
+$ docker build -t alexellis2/href-counter:latest .

+```
+
+最终的结果是和以前体积一样小的生产镜像,而复杂性显著降低。您既不需要创建任何中间镜像,也不需要把任何构件提取到本地系统。
+
+它是如何工作的呢?第二条 `FROM` 指令以 `alpine:latest` 镜像作为基础,开始一个新的构建阶段。`COPY --from=0` 这一行只把前一个阶段构建出的构件复制到这个新阶段。Go SDK 和所有中间构件都被留在了前一阶段,不会保存到最终镜像中。
+
+### 命名您的构建阶段
+
+默认情况下,这些阶段没有命名,您只能通过整数编号来引用它们,从第一条 `FROM` 指令的 0 开始。不过,您可以在 `FROM` 指令后面加上 `as <阶段名>` 来为阶段命名。以下示例通过为阶段命名并在 `COPY` 指令中使用该名称来改进前一个示例。这意味着,即使您的 `Dockerfile` 中的指令稍后被重新排序,`COPY` 也不会失效。
+
+```
+FROM golang:1.7.3 as builder
+WORKDIR /go/src/github.com/alexellis/href-counter/
+RUN go get -d -v golang.org/x/net/html
+COPY app.go .
+RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

+FROM alpine:latest
+RUN apk --no-cache add ca-certificates
+WORKDIR /root/
+COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
+CMD ["./app"]
+```
+
+> 译者注:1. 此文章系译者第一次翻译英文文档,如有描述不清楚或错误的地方,请读者给予反馈(2727586680@qq.com),不胜感激。
+> 译者注:2. 本文只是简单介绍多阶段构建,并不深入,如果读者需要深入了解,请自行查阅相关资料。
+--------------------------------------------------------------------------------

+via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/#name-your-build-stages

+作者:[docker docs ][a]
+译者:[iron0x](https://github.com/iron0x)
+校对:[校对者ID](https://github.com/校对者ID)

+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://docs.docker.com/engine/userguide/eng-image/multistage-build/
+[1]:https://twitter.com/alexellisuk
+[2]:http://blog.alexellis.io/mutli-stage-docker-builds/
From 135c7b542ad0b4e91bfbe8623feb7ba0891e0ad2 Mon Sep 17 00:00:00 2001
From: iron0x <2727586680@qq.com>
Date: Sat, 9 Dec 2017 20:19:09 +0800
Subject: [PATCH 168/236] Delete 20171202 docker - Use multi-stage builds.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

翻译完成
---
 ...0171202 docker - Use multi-stage builds.md | 129 ------------------
 1 file changed, 129 deletions(-)
 delete mode 100644 sources/tech/20171202 docker - Use multi-stage builds.md

diff --git a/sources/tech/20171202 docker - Use multi-stage builds.md b/sources/tech/20171202 docker - Use multi-stage builds.md
deleted file mode 100644
index 8cc8af1c94..0000000000
--- a/sources/tech/20171202 docker - Use multi-stage builds.md
+++ /dev/null
@@ -1,129 +0,0 @@
-【iron0x翻译中】
-
-Use multi-stage builds
-============================================================
-
-Multi-stage builds are a new feature requiring Docker 17.05 or higher on the daemon and client. Multistage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain.
-
-> Acknowledgment: Special thanks to [Alex Ellis][1] for granting permission to use his blog post [Builder pattern vs. Multi-stage builds in Docker][2] as the basis of the examples below.
-
-### Before multi-stage builds
-
-One of the most challenging things about building images is keeping the image size down.
Each instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you don’t need before moving on to the next layer. To write a really efficient Dockerfile, you have traditionally needed to employ shell tricks and other logic to keep the layers as small as possible and to ensure that each layer has the artifacts it needs from the previous layer and nothing else. - -It was actually very common to have one Dockerfile to use for development (which contained everything needed to build your application), and a slimmed-down one to use for production, which only contained your application and exactly what was needed to run it. This has been referred to as the “builder pattern”. Maintaining two Dockerfiles is not ideal. - -Here’s an example of a `Dockerfile.build` and `Dockerfile` which adhere to the builder pattern above: - -`Dockerfile.build`: - -``` -FROM golang:1.7.3 -WORKDIR /go/src/github.com/alexellis/href-counter/ -RUN go get -d -v golang.org/x/net/html -COPY app.go . -RUN go get -d -v golang.org/x/net/html \ - && CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app . - -``` - -Notice that this example also artificially compresses two `RUN` commands together using the Bash `&&` operator, to avoid creating an additional layer in the image. This is failure-prone and hard to maintain. It’s easy to insert another command and forget to continue the line using the `\` character, for example. - -`Dockerfile`: - -``` -FROM alpine:latest -RUN apk --no-cache add ca-certificates -WORKDIR /root/ -COPY app . -CMD ["./app"] - -``` - -`build.sh`: - -``` -#!/bin/sh -echo Building alexellis2/href-counter:build - -docker build --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \ - -t alexellis2/href-counter:build . -f Dockerfile.build - -docker create --name extract alexellis2/href-counter:build -docker cp extract:/go/src/github.com/alexellis/href-counter/app ./app -docker rm -f extract - -echo Building alexellis2/href-counter:latest - -docker build --no-cache -t alexellis2/href-counter:latest . -rm ./app - -``` - -When you run the `build.sh` script, it needs to build the first image, create a container from it in order to copy the artifact out, then build the second image. Both images take up room on your system and you still have the `app` artifact on your local disk as well. - -Multi-stage builds vastly simplify this situation! - -### Use multi-stage builds - -With multi-stage builds, you use multiple `FROM` statements in your Dockerfile. Each `FROM` instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image. To show how this works, Let’s adapt the Dockerfile from the previous section to use multi-stage builds. - -`Dockerfile`: - -``` -FROM golang:1.7.3 -WORKDIR /go/src/github.com/alexellis/href-counter/ -RUN go get -d -v golang.org/x/net/html -COPY app.go . -RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app . - -FROM alpine:latest -RUN apk --no-cache add ca-certificates -WORKDIR /root/ -COPY --from=0 /go/src/github.com/alexellis/href-counter/app . -CMD ["./app"] - -``` - -You only need the single Dockerfile. You don’t need a separate build script, either. Just run `docker build`. - -``` -$ docker build -t alexellis2/href-counter:latest . - -``` - -The end result is the same tiny production image as before, with a significant reduction in complexity. 
You don’t need to create any intermediate images and you don’t need to extract any artifacts to your local system at all. - -How does it work? The second `FROM` instruction starts a new build stage with the `alpine:latest` image as its base. The `COPY --from=0` line copies just the built artifact from the previous stage into this new stage. The Go SDK and any intermediate artifacts are left behind, and not saved in the final image. - -### Name your build stages - -By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first `FROM` instruction. However, you can name your stages, by adding an `as ` to the `FROM` instruction. This example improves the previous one by naming the stages and using the name in the `COPY` instruction. This means that even if the instructions in your Dockerfile are re-ordered later, the `COPY` won’t break. - -``` -FROM golang:1.7.3 as builder -WORKDIR /go/src/github.com/alexellis/href-counter/ -RUN go get -d -v golang.org/x/net/html -COPY app.go . -RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app . - -FROM alpine:latest -RUN apk --no-cache add ca-certificates -WORKDIR /root/ -COPY --from=builder /go/src/github.com/alexellis/href-counter/app . -CMD ["./app"] -``` - --------------------------------------------------------------------------------- - -via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/#name-your-build-stages - -作者:[docker docs ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://docs.docker.com/engine/userguide/eng-image/multistage-build/ -[1]:https://twitter.com/alexellisuk -[2]:http://blog.alexellis.io/mutli-stage-docker-builds/ From 2fad17303cc0953e93f77b9a12515c5722402f48 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 9 Dec 2017 21:43:57 +0800 Subject: [PATCH 169/236] Delete 20171206 Getting started with Turtl an open source alternative to Evernote.md --- ... an open source alternative to Evernote.md | 115 ------------------ 1 file changed, 115 deletions(-) delete mode 100644 sources/tech/20171206 Getting started with Turtl an open source alternative to Evernote.md diff --git a/sources/tech/20171206 Getting started with Turtl an open source alternative to Evernote.md b/sources/tech/20171206 Getting started with Turtl an open source alternative to Evernote.md deleted file mode 100644 index 18d7d12f82..0000000000 --- a/sources/tech/20171206 Getting started with Turtl an open source alternative to Evernote.md +++ /dev/null @@ -1,115 +0,0 @@ -Getting started with Turtl, an open source alternative to Evernote -============================================================ - -### Turtl is a solid note-taking tool for users looking for an alternative to apps like Evernote and Google Keep. - -![Using Turtl as an open source alternative to Evernote](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_brainstorm_island_520px.png?itok=6IUPyxkY "Using Turtl as an open source alternative to Evernote") -Image by : opensource.com - -Just about everyone I know takes notes, and many people use an online note-taking application like Evernote, Simplenote, or Google Keep. Those are all good tools, but you have to wonder about the security and privacy of your information—especially in light of [Evernote's privacy flip-flop of 2016][11]. 
If you want more control over your notes and your data, you really need to turn to an open source tool. - -Whatever your reasons for moving away from Evernote, there are open source alternatives out there. Let's look at one of those alternatives: Turtl. - -### Getting started - -The developers behind [Turtl][12] want you to think of it as "Evernote with ultimate privacy." To be honest, I can't vouch for the level of privacy that Turtl offers, but it is a quite a good note-taking tool. - -To get started with Turtl, [download][13] a desktop client for Linux, Mac OS, or Windows, or grab the [Android app][14]. Install it, then fire up the client or app. You'll be asked for a username and passphrase. Turtl uses the passphrase to generate a cryptographic key that, according to the developers, encrypts your notes before storing them anywhere on your device or on their servers. - -### Using Turtl - -You can create the following types of notes with Turtl: - -* Password - -* File - -* Image - -* Bookmark - -* Text note - -No matter what type of note you choose, you create it in a window that's similar for all types of notes: - - -![Create new text note with Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-new-note-520.png "Creating a new text note with Turtl") - -Creating a new text note in Turtl - -Add information like the title of the note, some text, and (if you're creating a File or Image note) attach a file or an image. Then click **Save**. - -You can add formatting to your notes via [Markdown][15]. You need to add the formatting by hand—there are no toolbar shortcuts. - -If you need to organize your notes, you can add them to **Boards**. Boards are just like notebooks in Evernote. To create a new board, click on the **Boards** tab, then click the **Create a board** button. Type a title for the board, then click **Create**. - - -![Create new board in Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-boards-520.png "Creating a new Turtl board") - -Creating a new board in Turtl - -To add a note to a board, create or edit the note, then click the **This note is not in any boards** link at the bottom of the note. Select one or more boards, then click **Done**. - -To add tags to a note, click the **Tags** icon at the bottom of a note, enter one or more keywords separated by commas, and click **Done**. - -### Syncing your notes across your devices - -If you use Turtl across several computers and an Android device, for example, Turtl will sync your notes whenever you're online. However, I've encountered a small problem with syncing: Every so often, a note I've created on my phone doesn't sync to my laptop. I tried to sync manually by clicking the icon in the top left of the window and then clicking **Sync Now**, but that doesn't always work. I found that I occasionally need to click that icon, click **Your settings**, and then click **Clear local data**. I then need to log back into Turtl, but all the data syncs properly. - -### A question, and a couple of problems - -When I started using Turtl, I was dogged by one question:  _Where are my notes kept online?_  It turns out that the developers behind Turtl are based in the U.S., and that's also where their servers are. Although the encryption that Turtl uses is [quite strong][16] and your notes are encrypted on the server, the paranoid part of me says that you shouldn't save anything sensitive in Turtl (or any online note-taking tool, for that matter). 
- -Turtl displays notes in a tiled view, reminiscent of Google Keep: - - -![Notes in Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-notes-520.png "Collection of notes in Turtl") - -A collection of notes in Turtl - -There's no way to change that to a list view, either on the desktop or on the Android app. This isn't a problem for me, but I've heard some people pan Turtl because it lacks a list view. - -Speaking of the Android app, it's not bad; however, it doesn't integrate with the Android **Share** menu. If you want to add a note to Turtl based on something you've seen or read in another app, you need to copy and paste it manually. - -I've been using a Turtl for several months on a Linux-powered laptop, my [Chromebook running GalliumOS][17], and an Android-powered phone. It's been a pretty seamless experience across all those devices. Although it's not my favorite open source note-taking tool, Turtl does a pretty good job. Give it a try; it might be the simple note-taking tool you're looking for. - - -### About the author - - [![That idiot Scott Nesbitt ...](https://opensource.com/sites/default/files/styles/profile_pictures/public/scottn-cropped.jpg?itok=q4T2J4Ai)][18] - - Scott Nesbitt - I'm a long-time user of free/open source software, and write various things for both fun and profit. I don't take myself too seriously and I do all of my own stunts. You can find me at these fine establishments on the web: [Twitter][5], [Mastodon][6], [GitHub][7], and... [more about Scott Nesbitt][8][More about me][9] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/12/using-turtl-open-source-alternative-evernote - -作者:[Scott Nesbitt ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/scottnesbitt -[1]:https://opensource.com/file/378346 -[2]:https://opensource.com/file/378351 -[3]:https://opensource.com/file/378356 -[4]:https://opensource.com/article/17/12/using-turtl-open-source-alternative-evernote?rate=Kktl8DSEAXzwIGppn0PS4KuSpZv3Qbk0fuiilnplrnE -[5]:http://www.twitter.com/ScottWNesbitt -[6]:https://mastodon.social/@scottnesbitt -[7]:https://github.com/ScottWNesbitt -[8]:https://opensource.com/users/scottnesbitt -[9]:https://opensource.com/users/scottnesbitt -[10]:https://opensource.com/user/14925/feed -[11]:https://blog.evernote.com/blog/2016/12/15/evernote-revisits-privacy-policy/ -[12]:https://turtlapp.com/ -[13]:https://turtlapp.com/download/ -[14]:https://turtlapp.com/download/ -[15]:https://en.wikipedia.org/wiki/Markdown -[16]:https://turtlapp.com/docs/security/encryption-specifics/ -[17]:https://opensource.com/article/17/4/linux-chromebook-gallium-os -[18]:https://opensource.com/users/scottnesbitt -[19]:https://opensource.com/users/scottnesbitt -[20]:https://opensource.com/users/scottnesbitt -[21]:https://opensource.com/article/17/12/using-turtl-open-source-alternative-evernote#comments -[22]:https://opensource.com/tags/alternatives From 1d63bddba169c5d33f184379450bb2c56b418f54 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 9 Dec 2017 21:45:18 +0800 Subject: [PATCH 170/236] =?UTF-8?q?update=20at=202017=E5=B9=B4=2012?= =?UTF-8?q?=E6=9C=88=2009=E6=97=A5=20=E6=98=9F=E6=9C=9F=E5=85=AD=2021:45:1?= =?UTF-8?q?8=20CST?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nd Scripts at Reboot & 
Startup in Linux.md | 64 ---------------- ...nd Scripts at Reboot & Startup in Linux.md | 75 +++++++++++++++++++ 2 files changed, 75 insertions(+), 64 deletions(-) delete mode 100644 sources/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md create mode 100644 translated/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md diff --git a/sources/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md b/sources/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md deleted file mode 100644 index 04c922aa41..0000000000 --- a/sources/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md +++ /dev/null @@ -1,64 +0,0 @@ -translating by lujun9972 -Executing Commands and Scripts at Reboot & Startup in Linux -====== -There might arise a need to execute a command or scripts at reboot or every time when we start our system. So how can we do that, in this tutorial we are going to discuss just that. We will discuss how we can make our CentOS/RHEL and Ubuntu systems to execute a command or scripts at reboot or at system startup using two different methods. Both the methods are tested and works just fine, - -### Method 1 – Using rc.local - -In this method, we will use ‘rc.local’ file located in ‘/etc/’ to execute our scripts and commands at startup. We will make an entry to execute the script in the file & every time when our system starts, the script will be executed. - -But we will first provide the permissions to make the file /etc/rc.local executable, - -$ sudo chmod +x /etc/rc.local - -Next we will add the script to be executed in the file, - -$ sudo vi /etc/rc.local - -& at the bottom of file, add the entry - -sh /root/script.sh & - -Now save the file & exit. Similarly we can execute a command using rc.local file but we need to make sure that we mention the full path of the command. To locate the full command path, run - -$ which command - -For example, - -$ which shutter - -/usr/bin/shutter - -For CentOS, we use file ‘/etc/rc.d/rc.local’ instead of ‘/etc/rc.local’. We also need to make this file executable before adding any script or command to the file. - -Note:- When executing a script at startup, make sure that the script ends with ‘exit 0’. - -( Recommended Read : ) - -### Method 2 – Crontab method - -This method is the easiest method of the two methods. We will create a cron job that will wait for 90 seconds after system startup & then will execute the command or script on the system. - -To create a cron job, open terminal & run - -$ crontab -e - -& enter the following line , - -@reboot ( sleep 90 ; sh \location\script.sh ) - -where \location\script.sh is the location of script to be executed. - -So this was our tutorial on how to execute a script or a command when system starts up. 
Please leave your queries, if any , using the comment box below
-
--------------------------------------------------------------------------------
-
-via: http://linuxtechlab.com/executing-commands-scripts-at-reboot/
-
-作者:[Shusain][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linuxtechlab.com/author/shsuain/
diff --git a/translated/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md b/translated/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md
new file mode 100644
index 0000000000..107caa6bd9
--- /dev/null
+++ b/translated/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md
@@ -0,0 +1,75 @@
+在 Linux 启动或重启时执行命令与脚本
+======
+有时可能会需要在重启时或者每次系统启动时运行某些命令或者脚本。我们要怎样做呢?本文中我们就对此进行讨论。我们会用两种方法来描述如何在 CentOS/RHEL 以及 Ubuntu 系统上做到重启或者系统启动时执行命令和脚本。两种方法都通过了测试。
+
+### 方法 1 – 使用 rc.local
+
+这种方法会利用 `/etc/` 中的 `rc.local` 文件来在启动时执行脚本与命令。我们在文件中加上一行来执行脚本,这样每次启动系统时,都会执行该脚本。
+
+不过我们首先需要为 `/etc/rc.local` 添加执行权限,
+
+```shell
+$ sudo chmod +x /etc/rc.local
+```
+
+然后将要执行的脚本加入其中,
+
+```shell
+$ sudo vi /etc/rc.local
+```
+
+在文件最后加上
+
+```shell
+sh /root/script.sh &
+```
+
+然后保存文件并退出。
+使用 `rc.local` 文件来执行命令也是一样的,但是一定要记得填写命令的完整路径。想知道命令的完整路径,可以运行:
+
+```shell
+$ which command
+```
+
+比如,
+
+```shell
+$ which shutter
+/usr/bin/shutter
+```
+
+如果是 CentOS,我们修改的是文件 `/etc/rc.d/rc.local` 而不是 `/etc/rc.local`。不过我们也需要先为该文件添加可执行权限。
+
+注意:启动时执行的脚本,请一定保证是以 `exit 0` 结尾的。
+
+### 方法 2 – 使用 Crontab
+
+该方法最简单。我们创建一个 cron 任务,这个任务在系统启动后等待 90 秒,然后执行命令和脚本。
+
+要创建 cron 任务,打开终端并执行
+
+```shell
+$ crontab -e
+```
+
+然后输入下行内容,
+
+```shell
+@reboot ( sleep 90 ; sh /location/script.sh )
+```
+
+这里 `/location/script.sh` 就是待执行脚本的路径。
+
+本文至此就结束了。如有疑问,欢迎留言。
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/executing-commands-scripts-at-reboot/
+
+作者:[Shusain][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
From f9eea6891c0e79fc34a4db085302bdf66acbb2e1 Mon Sep 17 00:00:00 2001
From: feng lv
Date: Sat, 9 Dec 2017 23:48:42 +0800
Subject: [PATCH 171/236] ucasFL translating

---
 sources/tech/20170413 More Unknown Linux Commands.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20170413 More Unknown Linux Commands.md b/sources/tech/20170413 More Unknown Linux Commands.md
index f5507d3802..d773d7b4c9 100644
--- a/sources/tech/20170413 More Unknown Linux Commands.md
+++ b/sources/tech/20170413 More Unknown Linux Commands.md
@@ -1,3 +1,5 @@
+translating by ucasFL
+
 More Unknown Linux Commands
 ============================================================

From 7162ea6a1c215c9d3bafdc90adc5cb9fdbdfa989 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E9=AD=91=E9=AD=85=E9=AD=8D=E9=AD=89?= <625310581@qq.com>
Date: Sat, 9 Dec 2017 23:59:58 +0800
Subject: [PATCH 172/236] translating by HardworkFish

---
 sources/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/sources/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md b/sources/tech/20170209
INTRODUCING DOCKER SECRETS MANAGEMENT.md @@ -1,3 +1,6 @@ + +translating by HardworkFish + INTRODUCING DOCKER SECRETS MANAGEMENT ============================================================ From 61e609292f58749920ba77d60a536ba445558bcd Mon Sep 17 00:00:00 2001 From: wenwensnow <963555237@qq.com> Date: Sun, 10 Dec 2017 01:02:21 +0800 Subject: [PATCH 173/236] Update 20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md --- ...e your WiFi MAC address on Ubuntu 16.04.md | 67 ++++++++++++------- 1 file changed, 43 insertions(+), 24 deletions(-) diff --git a/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md b/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md index 3f0b8a0f50..bd0efb5d5e 100644 --- a/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md +++ b/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md @@ -1,50 +1,65 @@ translating by wenwensnow -Randomize your WiFi MAC address on Ubuntu 16.04 + +在Ubuntu 16.04下随机生成你的WiFi MAC地址 ============================================================ - _Your device’s MAC address can be used to track you across the WiFi networks you connect to. That data can be shared and sold, and often identifies you as an individual. It’s possible to limit this tracking by using pseudo-random MAC addresses._ +你设备的MAC地址可以在不同的WiFi网络中记录你的活动。这些信息能被共享后出售,用于识别特定的个体。但可以用随机生成的伪MAC地址来阻止这一行为。 + ![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif) _Image courtesy of [Cloudessa][4]_ -Every network device like a WiFi or Ethernet card has a unique identifier called a MAC address, for example `b4:b6:76:31:8c:ff`. It’s how networking works: any time you connect to a WiFi network, the router uses that address to send and receive packets to your machine and distinguish it from other devices in the area. +每一个诸如WiFi或者以太网卡这样的网络设备,都有一个叫做MAC地址的唯一标识符,如:`b4:b6:76:31:8c:ff`。这就是你能上网的原因:每当你连接上WiFi,路由器就会用这一地址来向你接受和发送数据,并且用它来区别你和这一网络的其他设备。 -The snag with this design is that your unique, unchanging MAC address is just perfect for tracking you. Logged into Starbucks WiFi? Noted. London Underground? Logged. +这一设计的缺陷在于唯一性,不变的MAC地址正好可以用来追踪你。连上了星巴克的WiFi? 好,注意到了。在伦敦的地铁上? 也记录下来。 -If you’ve ever put your real name into one of those Craptive Portals on a WiFi network you’ve now tied your identity to that MAC address. Didn’t read the terms and conditions? You might assume that free airport WiFi is subsidised by flogging ‘customer analytics’ (your personal information) to hotels, restaurant chains and whomever else wants to know about you. +如果你曾经在某一个WiFi验证页面上输入过你的真实姓名,你就已经把自己和这一MAC地址建立了联系。没有仔细阅读许可服务条款? 你可以认为,机场的免费WiFi正通过出售所谓的 ‘顾客分析数据’(你的个人信息)获利。出售的对象包括酒店,餐饮业,和任何想要了解你的人。 -I don’t subscribe to being tracked and sold by mega-corps, so I spent a few hours hacking a solution. -### MAC addresses don’t need to stay the same +我不想信息被记录,再出售给多家公司,所以我花了几个小时想出了一个解决方案。 -Fortunately, it’s possible to spoof your MAC address to a random one without fundamentally breaking networking. -I wanted to randomize my MAC address, but with three particular caveats: +### MAC 地址不一定总是不变的 -1. The MAC should be different across different networks. This means Starbucks WiFi sees a different MAC from London Underground, preventing linking my identity across different providers. +幸运的是,在不断开网络的情况下,是可以随机生成一个伪MAC地址的。 -2. 
The MAC should change regularly to prevent a network knowing that I’m the same person who walked past 75 times over the last year. -3. The MAC stays the same throughout each working day. When the MAC address changes, most networks will kick you off, and those with Craptive Portals will usually make you sign in again - annoying. +我想随机生成我的MAC地址,但是有三个要求: -### Manipulating NetworkManager -My first attempt of using the `macchanger` tool was unsuccessful as NetworkManager would override the MAC address according to its own configuration. +1.MAC地址在不同网络中是不相同的。这意味着,我在星巴克和在伦敦地铁网络中的MAC地址是不相同的,这样在不同的服务提供商中就无法将我的活动联系起来 -I learned that NetworkManager 1.4.1+ can do MAC address randomization right out the box. If you’re using Ubuntu 17.04 upwards, you can get most of the way with [this config file][7]. You can’t quite achieve all three of my requirements (you must choose  _random_ or  _stable_  but it seems you can’t do  _stable-for-one-day_ ). -Since I’m sticking with Ubuntu 16.04 which ships with NetworkManager 1.2, I couldn’t make use of the new functionality. Supposedly there is some randomization support but I failed to actually make it work, so I scripted up a solution instead. +2.MAC地址需要经常更换,这样在网络上就没人知道我就是去年在这儿经过了75次的那个人 + + +3. MAC地址一天之内应该保持不变。当MAC地址更改时,大多数网络都会与你断开连接,然后必须得进入验证页面再次登陆 - 这很烦人。 + + +### 操作网络管理器 + +我第一次尝试用一个叫做 `macchanger`的工具,但是失败了。网络管理器会根据它自己的设置恢复默认的MAC地址。 + + +我了解到,网络管理器1.4.1以上版本可以自动生成随机的MAC地址。如果你在使用Ubuntu 17.04 版本,你可以根据这一配置文件实现这一目的。但这并不能完全符合我的三个要求 (你必须在随机和稳定这两个选项之中选择一个,但没有一天之内保持不变这一选项) + + +因为我使用的是Ubuntu 16.04,网络管理器版本为1.2,不能直接使用高版本这一新功能。可能网络管理器有一些随机化方法支持,但我没能成功。所以我编了一个脚本来实现这一目标。 + + +幸运的是,网络管理器1.2 允许生成随机MAC地址。你在已连接的网络中可以看见 ‘编辑连接’这一选项: -Fortunately NetworkManager 1.2 does allow for spoofing your MAC address. You can see this in the ‘Edit connections’ dialog for a given network: ![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png) +网络管理器也支持消息处理 - 任何位于 `/etc/NetworkManager/dispatcher.d/pre-up.d/` 的脚本在建立网络连接之前都会被执行。 NetworkManager also supports hooks - any script placed in `/etc/NetworkManager/dispatcher.d/pre-up.d/` is run before a connection is brought up. -### Assigning pseudo-random MAC addresses +### 分配随机生成的伪MAC地址 + +我想根据网络ID和日期来生成新的随机MAC地址。 我们可以使用网络管理器的命令行工具,nmcli,来显示所有可用网络: -To recap, I wanted to generate random MAC addresses based on the  _network_  and the  _date_ . We can use the NetworkManager command line, nmcli, to show a full list of networks: ``` > nmcli connection @@ -56,7 +71,7 @@ virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless -- ``` -Since each network has a unique identifier, to achieve my scheme I just concatenated the UUID with today’s date and hashed the result: +因为每个网络都有一个唯一标识符,为了实现我的计划,我将UUID和日期拼接在一起,然后使用MD5生成hash值: ``` @@ -67,17 +82,21 @@ Since each network has a unique identifier, to achieve my scheme I just concaten 53594de990e92f9b914a723208f22b3f - ``` +生成的结果可以代替MAC地址的最后八个字节。 -That produced bytes which can be substituted in for the last octets of the MAC address. -Note that the first byte `02` signifies the address is [locally administered][8]. Real, burned-in MAC addresses start with 3 bytes designing their manufacturer, for example `b4:b6:76` for Intel. +值得注意的是,最开始的字节 `02` 代表这个地址是自行指定的。实际上,真实MAC地址的前三个字节是由制造商决定的,例如 `b4:b6:76` 就代表Intel。 -It’s possible that some routers may reject locally administered MACs but I haven’t encountered that yet. 
+有可能某些路由器会拒绝自己指定的MAC地址,但是我还没有遇到过这种情况。 + + +每一次连接到一个网络,这一脚本都会用`nmcli` 来指定一个随机生成的伪MAC地址: On every connection up, the script calls `nmcli` to set the spoofed MAC address for every connection: ![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png) +最后,我查看了 `ifconfig`的输出结果,我发现端口MAC地址已经变成了随机生成的地址,而不是我真实的MAC地址。 As a final check, if I look at `ifconfig` I can see that the `HWaddr` is the spoofed one, not my real MAC address: ``` @@ -92,8 +111,8 @@ wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB) ``` +完整的脚本可以在Github上查看。 -The full script is [available on Github][9]. ``` #!/bin/sh From c48d1ff2bac0948fc68d92e6320621ac3d2360c3 Mon Sep 17 00:00:00 2001 From: wenwensnow <963555237@qq.com> Date: Sun, 10 Dec 2017 01:04:42 +0800 Subject: [PATCH 174/236] Update 20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md --- ... Randomize your WiFi MAC address on Ubuntu 16.04.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md b/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md index bd0efb5d5e..46e316aad9 100644 --- a/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md +++ b/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md @@ -54,7 +54,7 @@ translating by wenwensnow ![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png) 网络管理器也支持消息处理 - 任何位于 `/etc/NetworkManager/dispatcher.d/pre-up.d/` 的脚本在建立网络连接之前都会被执行。 -NetworkManager also supports hooks - any script placed in `/etc/NetworkManager/dispatcher.d/pre-up.d/` is run before a connection is brought up. + ### 分配随机生成的伪MAC地址 @@ -91,13 +91,13 @@ virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless -- 有可能某些路由器会拒绝自己指定的MAC地址,但是我还没有遇到过这种情况。 -每一次连接到一个网络,这一脚本都会用`nmcli` 来指定一个随机生成的伪MAC地址: -On every connection up, the script calls `nmcli` to set the spoofed MAC address for every connection: +每次连接到一个网络,这一脚本都会用`nmcli` 来指定一个随机生成的伪MAC地址: + ![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png) 最后,我查看了 `ifconfig`的输出结果,我发现端口MAC地址已经变成了随机生成的地址,而不是我真实的MAC地址。 -As a final check, if I look at `ifconfig` I can see that the `HWaddr` is the spoofed one, not my real MAC address: + ``` > ifconfig @@ -154,7 +154,7 @@ done wait ``` -Enjoy! + _Update: [Use locally administered MAC addresses][5] to avoid clashing with real Intel ones. 
Thanks [@_fink][6]_ From 69e501c257625851fcde067813875da1cc5da43a Mon Sep 17 00:00:00 2001 From: wenwensnow <963555237@qq.com> Date: Sun, 10 Dec 2017 01:08:16 +0800 Subject: [PATCH 175/236] Delete 20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md --- ...e your WiFi MAC address on Ubuntu 16.04.md | 180 ------------------ 1 file changed, 180 deletions(-) delete mode 100644 sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md diff --git a/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md b/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md deleted file mode 100644 index 46e316aad9..0000000000 --- a/sources/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md +++ /dev/null @@ -1,180 +0,0 @@ -translating by wenwensnow - -在Ubuntu 16.04下随机生成你的WiFi MAC地址 -============================================================ - -你设备的MAC地址可以在不同的WiFi网络中记录你的活动。这些信息能被共享后出售,用于识别特定的个体。但可以用随机生成的伪MAC地址来阻止这一行为。 - - -![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif) - - _Image courtesy of [Cloudessa][4]_ - -每一个诸如WiFi或者以太网卡这样的网络设备,都有一个叫做MAC地址的唯一标识符,如:`b4:b6:76:31:8c:ff`。这就是你能上网的原因:每当你连接上WiFi,路由器就会用这一地址来向你接受和发送数据,并且用它来区别你和这一网络的其他设备。 - -这一设计的缺陷在于唯一性,不变的MAC地址正好可以用来追踪你。连上了星巴克的WiFi? 好,注意到了。在伦敦的地铁上? 也记录下来。 - -如果你曾经在某一个WiFi验证页面上输入过你的真实姓名,你就已经把自己和这一MAC地址建立了联系。没有仔细阅读许可服务条款? 你可以认为,机场的免费WiFi正通过出售所谓的 ‘顾客分析数据’(你的个人信息)获利。出售的对象包括酒店,餐饮业,和任何想要了解你的人。 - - -我不想信息被记录,再出售给多家公司,所以我花了几个小时想出了一个解决方案。 - - -### MAC 地址不一定总是不变的 - -幸运的是,在不断开网络的情况下,是可以随机生成一个伪MAC地址的。 - - -我想随机生成我的MAC地址,但是有三个要求: - - -1.MAC地址在不同网络中是不相同的。这意味着,我在星巴克和在伦敦地铁网络中的MAC地址是不相同的,这样在不同的服务提供商中就无法将我的活动联系起来 - - -2.MAC地址需要经常更换,这样在网络上就没人知道我就是去年在这儿经过了75次的那个人 - - -3. 
MAC地址一天之内应该保持不变。当MAC地址更改时,大多数网络都会与你断开连接,然后必须得进入验证页面再次登陆 - 这很烦人。 - - -### 操作网络管理器 - -我第一次尝试用一个叫做 `macchanger`的工具,但是失败了。网络管理器会根据它自己的设置恢复默认的MAC地址。 - - -我了解到,网络管理器1.4.1以上版本可以自动生成随机的MAC地址。如果你在使用Ubuntu 17.04 版本,你可以根据这一配置文件实现这一目的。但这并不能完全符合我的三个要求 (你必须在随机和稳定这两个选项之中选择一个,但没有一天之内保持不变这一选项) - - -因为我使用的是Ubuntu 16.04,网络管理器版本为1.2,不能直接使用高版本这一新功能。可能网络管理器有一些随机化方法支持,但我没能成功。所以我编了一个脚本来实现这一目标。 - - -幸运的是,网络管理器1.2 允许生成随机MAC地址。你在已连接的网络中可以看见 ‘编辑连接’这一选项: - - -![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png) - -网络管理器也支持消息处理 - 任何位于 `/etc/NetworkManager/dispatcher.d/pre-up.d/` 的脚本在建立网络连接之前都会被执行。 - - -### 分配随机生成的伪MAC地址 - -我想根据网络ID和日期来生成新的随机MAC地址。 我们可以使用网络管理器的命令行工具,nmcli,来显示所有可用网络: - - -``` -> nmcli connection -NAME UUID TYPE DEVICE -Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0 -DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless -- -PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless -- -virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless -- - -``` - -因为每个网络都有一个唯一标识符,为了实现我的计划,我将UUID和日期拼接在一起,然后使用MD5生成hash值: - -``` - -# eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03 - -> echo -n "${UUID}-$(date +%F)" | md5sum - -53594de990e92f9b914a723208f22b3f - - -``` -生成的结果可以代替MAC地址的最后八个字节。 - - -值得注意的是,最开始的字节 `02` 代表这个地址是自行指定的。实际上,真实MAC地址的前三个字节是由制造商决定的,例如 `b4:b6:76` 就代表Intel。 - - -有可能某些路由器会拒绝自己指定的MAC地址,但是我还没有遇到过这种情况。 - - -每次连接到一个网络,这一脚本都会用`nmcli` 来指定一个随机生成的伪MAC地址: - - -![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png) - -最后,我查看了 `ifconfig`的输出结果,我发现端口MAC地址已经变成了随机生成的地址,而不是我真实的MAC地址。 - - -``` -> ifconfig -wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d - inet addr:192.168.0.86 Bcast:192.168.0.255 Mask:255.255.255.0 - inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link - UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 - RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0 - TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0 - collisions:0 txqueuelen:1000 - RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB) - -``` -完整的脚本可以在Github上查看。 - - -``` -#!/bin/sh - -# /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses - -# Configure every saved WiFi connection in NetworkManager with a spoofed MAC -# address, seeded from the UUID of the connection and the date eg: -# 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20' - -# This makes your MAC impossible(?) to track across WiFi providers, and -# for one provider to track across days. - -# For craptive portals that authenticate based on MAC, you might want to -# automate logging in :) - -# Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar -# automatically. - -export PATH=$PATH:/usr/bin:/bin - -LOG_FILE=/var/log/randomize-mac-addresses - -echo "$(date): $*" > ${LOG_FILE} - -WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3) - -for UUID in ${WIFI_UUIDS} -do - UUID_DAILY_HASH=$(echo "${UUID}-$(date +F)" | md5sum) - - RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')" - - CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}" - - echo "$CMD" >> ${LOG_FILE} - $CMD & -done - -wait -``` - - - _Update: [Use locally administered MAC addresses][5] to avoid clashing with real Intel ones. 
Thanks [@_fink][6]_ - --------------------------------------------------------------------------------- - -via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/ - -作者:[Paul M Furley ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.paulfurley.com/ -[1]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/raw/5f02fc8f6ff7fca5bca6ee4913c63bf6de15abca/randomize-mac-addresses -[2]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f#file-randomize-mac-addresses -[3]:https://github.com/ -[4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/ -[5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef -[6]:https://twitter.com/fink_/status/937305600005943296 -[7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/ -[8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local -[9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f From 8f17217da1be32193d06fd6d581d56d8f62ae691 Mon Sep 17 00:00:00 2001 From: wenwensnow <963555237@qq.com> Date: Sun, 10 Dec 2017 01:26:29 +0800 Subject: [PATCH 176/236] Create 20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md --- ...e your WiFi MAC address on Ubuntu 16.04.md | 180 ++++++++++++++++++ 1 file changed, 180 insertions(+) create mode 100644 translated/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md diff --git a/translated/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md b/translated/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md new file mode 100644 index 0000000000..a5e50edc89 --- /dev/null +++ b/translated/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md @@ -0,0 +1,180 @@ + + 在Ubuntu 16.04下随机生成你的WiFi MAC地址 + ============================================================ + + 你设备的MAC地址可以在不同的WiFi网络中记录你的活动。这些信息能被共享后出售,用于识别特定的个体。但可以用随机生成的伪MAC地址来阻止这一行为。 + + + ![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif) + + _Image courtesy of [Cloudessa][4]_ + + 每一个诸如WiFi或者以太网卡这样的网络设备,都有一个叫做MAC地址的唯一标识符,如:`b4:b6:76:31:8c:ff`。这就是你能上网的原因:每当你连接上WiFi,路由器就会用这一地址来向你接受和发送数据,并且用它来区别你和这一网络的其他设备。 + + 这一设计的缺陷在于唯一性,不变的MAC地址正好可以用来追踪你。连上了星巴克的WiFi? 好,注意到了。在伦敦的地铁上? 也记录下来。 + + 如果你曾经在某一个WiFi验证页面上输入过你的真实姓名,你就已经把自己和这一MAC地址建立了联系。没有仔细阅读许可服务条款? 你可以认为,机场的免费WiFi正通过出售所谓的 ‘顾客分析数据’(你的个人信息)获利。出售的对象包括酒店,餐饮业,和任何想要了解你的人。 + + + 我不想信息被记录,再出售给多家公司,所以我花了几个小时想出了一个解决方案。 + + + ### MAC 地址不一定总是不变的 + + 幸运的是,在不断开网络的情况下,是可以随机生成一个伪MAC地址的。 + + + 我想随机生成我的MAC地址,但是有三个要求: + + + 1.MAC地址在不同网络中是不相同的。这意味着,我在星巴克和在伦敦地铁网络中的MAC地址是不相同的,这样在不同的服务提供商中就无法将我的活动联系起来 + + + 2.MAC地址需要经常更换,这样在网络上就没人知道我就是去年在这儿经过了75次的那个人 + + + 3. 
MAC地址一天之内应该保持不变。当MAC地址更改时,大多数网络都会与你断开连接,然后必须得进入验证页面再次登陆 - 这很烦人。 + + + ### 操作网络管理器 + + 我第一次尝试用一个叫做 `macchanger`的工具,但是失败了。网络管理器会根据它自己的设置恢复默认的MAC地址。 + + + 我了解到,网络管理器1.4.1以上版本可以自动生成随机的MAC地址。如果你在使用Ubuntu 17.04 版本,你可以根据这一配置文件实现这一目的。但这并不能完全符合我的三个要求 (你必须在随机和稳定这两个选项之中选择一个,但没有一天之内保持不变这一选项) + + + 因为我使用的是Ubuntu 16.04,网络管理器版本为1.2,不能直接使用高版本这一新功能。可能网络管理器有一些随机化方法支持,但我没能成功。所以我编了一个脚本来实现这一目标。 + + + 幸运的是,网络管理器1.2 允许生成随机MAC地址。你在已连接的网络中可以看见 ‘编辑连接’这一选项: + + + ![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png) + + 网络管理器也支持消息处理 - 任何位于 `/etc/NetworkManager/dispatcher.d/pre-up.d/` 的脚本在建立网络连接之前都会被执行。 + + + ### 分配随机生成的伪MAC地址 + + 我想根据网络ID和日期来生成新的随机MAC地址。 我们可以使用网络管理器的命令行工具,nmcli,来显示所有可用网络: + + + ``` + > nmcli connection + NAME UUID TYPE DEVICE + Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0 + DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless -- + PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless -- + virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless -- + + ``` + + 因为每个网络都有一个唯一标识符,为了实现我的计划,我将UUID和日期拼接在一起,然后使用MD5生成hash值: + + ``` + + # eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03 + + > echo -n "${UUID}-$(date +%F)" | md5sum + + 53594de990e92f9b914a723208f22b3f - + + ``` + 生成的结果可以代替MAC地址的最后八个字节。 + + + 值得注意的是,最开始的字节 `02` 代表这个地址是自行指定的。实际上,真实MAC地址的前三个字节是由制造商决定的,例如 `b4:b6:76` 就代表Intel。 + + + 有可能某些路由器会拒绝自己指定的MAC地址,但是我还没有遇到过这种情况。 + + + 每次连接到一个网络,这一脚本都会用`nmcli` 来指定一个随机生成的伪MAC地址: + + + ![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png) + + 最后,我查看了 `ifconfig`的输出结果,我发现端口MAC地址已经变成了随机生成的地址,而不是我真实的MAC地址。 + + + ``` + > ifconfig + wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d + inet addr:192.168.0.86 Bcast:192.168.0.255 Mask:255.255.255.0 + inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link + UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 + RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0 + TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:1000 + RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB) + + ``` + 完整的脚本可以在Github上查看。 + + + ``` + #!/bin/sh + + # /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses + + # Configure every saved WiFi connection in NetworkManager with a spoofed MAC + # address, seeded from the UUID of the connection and the date eg: + # 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20' + + # This makes your MAC impossible(?) to track across WiFi providers, and + # for one provider to track across days. + + # For craptive portals that authenticate based on MAC, you might want to + # automate logging in :) + + # Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar + # automatically. 
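 + # NB: the worked example earlier in this article hashes "${UUID}-$(date +%F)",
 + # where %F expands to a date like 2017-12-03. The hash further below uses
 + # $(date +F), which just prints a literal "F", so the result would never
 + # change from one day to the next. If a per-day MAC is intended, the format
 + # is presumably meant to be %F, i.e.:
 + #   UUID_DAILY_HASH=$(echo "${UUID}-$(date +%F)" | md5sum)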
+ + export PATH=$PATH:/usr/bin:/bin + + LOG_FILE=/var/log/randomize-mac-addresses + + echo "$(date): $*" > ${LOG_FILE} + + WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3) + + for UUID in ${WIFI_UUIDS} + do + UUID_DAILY_HASH=$(echo "${UUID}-$(date +F)" | md5sum) + + RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')" + + CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}" + + echo "$CMD" >> ${LOG_FILE} + $CMD & + done + + wait + ``` + + + + _更新:使用自己指定的MAC地址可以避免和真正的intel地址冲突。感谢 [@_fink][6]_ + + --------------------------------------------------------------------------------- + + -via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/ + + 作者:[Paul M Furley ][a] + 译者:[译者ID](https://github.com/译者ID) + 校对:[校对者ID](https://github.com/校对者ID) + + 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + + [a]:https://www.paulfurley.com/ + [1]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/raw/5f02fc8f6ff7fca5bca6ee4913c63bf6de15abca/randomize-mac-addresses + [2]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f#file-randomize-mac-addresses + [3]:https://github.com/ + [4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/ + [5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef + [6]:https://twitter.com/fink_/status/937305600005943296 + [7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/ + [8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local + [9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f From a29132873f48bd4e04b3260b6191521af00835fe Mon Sep 17 00:00:00 2001 From: xu0o0 Date: Sun, 10 Dec 2017 01:36:58 +0800 Subject: [PATCH 177/236] translated by @haoqixu --- .../20171114 Sysadmin 101 Patch Management.md | 61 ------------------- .../20171114 Sysadmin 101 Patch Management.md | 54 ++++++++++++++++ 2 files changed, 54 insertions(+), 61 deletions(-) delete mode 100644 sources/tech/20171114 Sysadmin 101 Patch Management.md create mode 100644 translated/tech/20171114 Sysadmin 101 Patch Management.md diff --git a/sources/tech/20171114 Sysadmin 101 Patch Management.md b/sources/tech/20171114 Sysadmin 101 Patch Management.md deleted file mode 100644 index 55ca09da87..0000000000 --- a/sources/tech/20171114 Sysadmin 101 Patch Management.md +++ /dev/null @@ -1,61 +0,0 @@ -【翻译中 @haoqixu】Sysadmin 101: Patch Management -============================================================ - -* [HOW-TOs][1] - -* [Servers][2] - -* [SysAdmin][3] - - -A few articles ago, I started a Sysadmin 101 series to pass down some fundamental knowledge about systems administration that the current generation of junior sysadmins, DevOps engineers or "full stack" developers might not learn otherwise. I had thought that I was done with the series, but then the WannaCry malware came out and exposed some of the poor patch management practices still in place in Windows networks. I imagine some readers that are still stuck in the Linux versus Windows wars of the 2000s might have even smiled with a sense of superiority when they heard about this outbreak. - -The reason I decided to revive my Sysadmin 101 series so soon is I realized that most Linux system administrators are no different from Windows sysadmins when it comes to patch management. 
Honestly, in some areas (in particular, uptime pride), some Linux sysadmins are even worse than Windows sysadmins regarding patch management. So in this article, I cover some of the fundamentals of patch management under Linux, including what a good patch management system looks like, the tools you will want to put in place and how the overall patching process should work. - -### What Is Patch Management? - -When I say patch management, I'm referring to the systems you have in place to update software already on a server. I'm not just talking about keeping up with the latest-and-greatest bleeding-edge version of a piece of software. Even more conservative distributions like Debian that stick with a particular version of software for its "stable" release still release frequent updates that patch bugs or security holes. - -Of course, if your organization decided to roll its own version of a particular piece of software, either because developers demanded the latest and greatest, you needed to fork the software to apply a custom change, or you just like giving yourself extra work, you now have a problem. Ideally you have put in a system that automatically packages up the custom version of the software for you in the same continuous integration system you use to build and package any other software, but many sysadmins still rely on the outdated method of packaging the software on their local machine based on (hopefully up to date) documentation on their wiki. In either case, you will need to confirm that your particular version has the security flaw, and if so, make sure that the new patch applies cleanly to your custom version. - -### What Good Patch Management Looks Like - -Patch management starts with knowing that there is a software update to begin with. First, for your core software, you should be subscribed to your Linux distribution's security mailing list, so you're notified immediately when there are security patches. If there you use any software that doesn't come from your distribution, you must find out how to be kept up to date on security patches for that software as well. When new security notifications come in, you should review the details so you understand how severe the security flaw is, whether you are affected and gauge a sense of how urgent the patch is. - -Some organizations have a purely manual patch management system. With such a system, when a security patch comes along, the sysadmin figures out which servers are running the software, generally by relying on memory and by logging in to servers and checking. Then the sysadmin uses the server's built-in package management tool to update the software with the latest from the distribution. Then the sysadmin moves on to the next server, and the next, until all of the servers are patched. - -There are many problems with manual patch management. First is the fact that it makes patching a laborious chore. The more work patching is, the more likely a sysadmin will put it off or skip doing it entirely. The second problem is that manual patch management relies too much on the sysadmin's ability to remember and recall all of the servers he or she is responsible for and keep track of which are patched and which aren't. This makes it easy for servers to be forgotten and sit unpatched. - -The faster and easier patch management is, the more likely you are to do it. You should have a system in place that quickly can tell you which servers are running a particular piece of software at which version. 
Ideally, that system also can push out updates. Personally, I prefer orchestration tools like MCollective for this task, but Red Hat provides Satellite, and Canonical provides Landscape as central tools that let you view software versions across your fleet of servers and apply patches all from a central place. - -Patching should be fault-tolerant as well. You should be able to patch a service and restart it without any overall down time. The same idea goes for kernel patches that require a reboot. My approach is to divide my servers into different high availability groups so that lb1, app1, rabbitmq1 and db1 would all be in one group, and lb2, app2, rabbitmq2 and db2 are in another. Then, I know I can patch one group at a time without it causing downtime anywhere else. - -So, how fast is fast? Your system should be able to roll out a patch to a minor piece of software that doesn't have an accompanying service (such as bash in the case of the ShellShock vulnerability) within a few minutes to an hour at most. For something like OpenSSL that requires you to restart services, the careful process of patching and restarting services in a fault-tolerant way probably will take more time, but this is where orchestration tools come in handy. I gave examples of how to use MCollective to accomplish this in my recent MCollective articles (see the December 2016 and January 2017 issues), but ideally, you should put a system in place that makes it easy to patch and restart services in a fault-tolerant and automated way. - -When patching requires a reboot, such as in the case of kernel patches, it might take a bit more time, but again, automation and orchestration tools can make this go much faster than you might imagine. I can patch and reboot the servers in an environment in a fault-tolerant way within an hour or two, and it would be much faster than that if I didn't need to wait for clusters to sync back up in between reboots. - -Unfortunately, many sysadmins still hold on to the outdated notion that uptime is a badge of pride—given that serious kernel patches tend to come out at least once a year if not more often, to me, it's proof you don't take security seriously. - -Many organizations also still have that single point of failure server that can never go down, and as a result, it never gets patched or rebooted. If you want to be secure, you need to remove these outdated liabilities and create systems that at least can be rebooted during a late-night maintenance window. - -Ultimately, fast and easy patch management is a sign of a mature and professional sysadmin team. Updating software is something all sysadmins have to do as part of their jobs, and investing time into systems that make that process easy and fast pays dividends far beyond security. For one, it helps identify bad architecture decisions that cause single points of failure. For another, it helps identify stagnant, out-of-date legacy systems in an environment and provides you with an incentive to replace them. Finally, when patching is managed well, it frees up sysadmins' time and turns their attention to the things that truly require their expertise. - -______________________ - -Kyle Rankin is senior security and infrastructure architect, the author of many books including Linux Hardening in Hostile Networks, DevOps Troubleshooting and The Official Ubuntu Server Book, and a columnist for Linux Journal. 
Follow him @kylerankin - --------------------------------------------------------------------------------- - -via: https://www.linuxjournal.com/content/sysadmin-101-patch-management - -作者:[Kyle Rankin ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxjournal.com/users/kyle-rankin -[1]:https://www.linuxjournal.com/tag/how-tos -[2]:https://www.linuxjournal.com/tag/servers -[3]:https://www.linuxjournal.com/tag/sysadmin -[4]:https://www.linuxjournal.com/users/kyle-rankin diff --git a/translated/tech/20171114 Sysadmin 101 Patch Management.md b/translated/tech/20171114 Sysadmin 101 Patch Management.md new file mode 100644 index 0000000000..4a7a223ccf --- /dev/null +++ b/translated/tech/20171114 Sysadmin 101 Patch Management.md @@ -0,0 +1,54 @@ +系统管理 101:补丁管理 +============================================================ + +就在之前几篇文章,我开始了“系统管理 101”系列文章,用来记录现今许多初级系统管理员,DevOps 工程师或者“全栈”开发者可能不曾接触过的一些系统管理方面的基本知识。按照我原本的设想,该系列文章已经是完结了的。然而后来 WannaCry 恶意软件出现并在补丁管理不善的 Windows 主机网络间爆发。我能想象到那些仍然深陷 2000 年代 Linux 与 Windows 争论的读者听到这个消息可能已经面露优越的微笑。 + +我之所以这么快就决定再次继续“系统管理 101”文章系列,是因为我意识到在补丁管理方面一些 Linux 系统管理员和 Windows 系统管理员没有差别。实话说,在一些方面甚至做的更差(特别是以运行时间为豪)。所以,这篇文章会涉及 Linux 下补丁管理的基础概念,包括良好的补丁管理该是怎样的,你可能会用到的一些相关工具,以及整个补丁安装过程是如何进行的。 + +### 什么是补丁管理? + +我所说的补丁管理,是指你部署用于升级服务器上软件的系统,不仅仅是把软件更新到最新最好的前沿版本。即使是像 Debian 这样为了“稳定性”持续保持某一特定版本软件的保守派发行版,也会时常发布升级补丁用于修补错误和安全漏洞。 + +当然,因为开发者对最新最好版本的需求,你需要派生软件源码并做出修改,或者因为你喜欢给自己额外的工作量,你的组织可能会决定自己维护特定软件的版本,这时你就会遇到问题。理想情况下,你应该已经配置好你的系统,让它在自动构建和打包定制版本软件时使用其它软件所用的同一套持续集成系统。然而,许多系统管理员仍旧在自己的本地主机上按照维基上的文档(但愿是最新的文档)使用过时的方法打包软件。不论使用哪种方法,你都需要明确你所使用的版本有没有安全缺陷,如果有,那必须确保新补丁安装到你定制版本的软件上了。 + +### 良好的补丁管理是怎样的 + +补丁管理首先要做的是检查软件的升级。首先,对于核心软件,你应该订阅相应 Linux 发行版的安全邮件列表,这样才能第一时间得知软件的安全升级情况。如果你使用的软件有些不是来自发行版的仓库,那么你也必须设法跟踪它们的安全更新。一旦接收到新的安全通知,你必须查阅通知细节,以此明确安全漏洞的严重程度,确定你的系统是否受影响,以及安全补丁的紧急性。 + +一些组织仍在使用手动方式管理补丁。在这种方式下,当出现一个安全补丁,系统管理员就要凭借记忆,登录到各个服务器上进行检查。在确定了哪些服务器需要升级后,再使用服务器内建的包管理工具从发行版仓库升级这些软件。最后以相同的方式升级剩余的所有服务器。 + +手动管理补丁的方式存在很多问题。首先,这么做会使补丁安装成为一个苦力活,安装补丁需要越多人力成本,系统管理员就越可能推迟甚至完全忽略它。其次,手动管理方式依赖系统管理员凭借记忆去跟踪他或她所负责的服务器的升级情况。这非常容易导致有些服务器被遗漏而未能及时升级。 + +补丁管理越快速简便,你就越可能把它做好。你应该构建一个系统,用来快速查询哪些服务器运行着特定的软件,以及这些软件的版本号,而且它最好还能够推送各种升级补丁。就个人而言,我倾向于使用 MCollective 这样的编排工具来完成这个任务,但是红帽提供的 Satellite 以及 Canonical 提供的 Landscape 也可以让你在统一的管理接口查看服务器上软件的版本信息,并且安装补丁。 + +补丁安装还应该具有容错能力。你应该具备在不下线的情况下为服务安装补丁的能力。这同样适用于需要重启系统的内核补丁。我采用的方法是把我的服务器划分为不同的高可用组,lb1,app1,rabbitmq1 和 db1 在一个组,而lb2,app2,rabbitmq2 和 db2 在另一个组。这样,我就能一次升级一个组,而无须下线服务。 + +所以,多快才能算快呢?对于少数没有附带服务的软件,你的系统最快应该能够在几分钟到一小时内安装好补丁(例如 bash 的 ShellShock 漏洞)。对于像 OpenSSL 这样需要重启服务的软件,以容错的方式安装补丁并重启服务的过程可能会花费稍多的时间,但这就是编排工具派上用场的时候。我在最近的关于 MCollective 的文章中(查看 2016 年 12 月和 2017 年 1 月的工单)给了几个使用 MCollective 实现补丁管理的例子。你最好能够部署一个系统,以具备容错性的自动化方式简化补丁安装和服务重启的过程。 + +如果补丁要求重启系统,像内核补丁,那它会花费更多的时间。再次强调,自动化和编排工具能够让这个过程比你想象的还要快。我能够在一到两个小时内在生产环境中以容错方式升级并重启服务器,如果重启之间无须等待集群同步备份,这个过程还能更快。 + +不幸的是,许多系统管理员仍坚信过时的观点,把运行时间作为一种骄傲的象征——鉴于紧急内核补丁大约每年一次。对于我来说,这只能说明你没有认真对待系统的安全性。 + +很多组织仍然使用无法暂时下线的单点故障的服务器,也因为这个原因,它无法升级或者重启。如果你想让系统更加安全,你需要去除过时的包袱,搭建一个至少能在深夜维护时段重启的系统。 + +基本上,快速便捷的补丁管理也是一个成熟专业的系统管理团队所具备的标志。升级软件是所有系统管理员的必要工作之一,花费时间去让这个过程简洁快速,带来的好处远远不止是系统安全性。例如,它能帮助我们找到架构设计中的单点故障。另外,它还帮助鉴定出环境中过时的系统,给我们替换这些部分提供了动机。最后,当补丁管理做得足够好,它会节省系统管理员的时间,让他们把精力放在真正需要专业知识的地方。 + +______________________ + +Kyle Rankin 是高级安全与基础设施架构师,其著作包括: Linux Hardening in Hostile Networks,DevOps Troubleshooting 以及 The Official Ubuntu Server Book。同时,他还是 Linux Journal 的专栏作家。 + 
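+
+对于前文提到的按高可用组轮流打补丁的做法,下面给出一个粗略的示意脚本。这里假设:各主机可以以 root 身份通过 SSH 免密登录、用 yum 升级软件包,主机名沿用前文的 lb1/app1/rabbitmq1/db1 与 lb2/app2/rabbitmq2/db2 示例。它只是演示思路,并非完整实现;文中提到的 MCollective、Satellite 或 Landscape 才是更完善的自动化方案:
+
+```
+#!/bin/sh
+# 按高可用组轮流打补丁:先升级第一组,确认服务正常后再处理第二组,
+# 这样任一时刻每个服务都还有另一组的实例在线。
+GROUP1="lb1 app1 rabbitmq1 db1"
+GROUP2="lb2 app2 rabbitmq2 db2"
+
+patch_group() {
+    for host in $1; do
+        echo "patching ${host} ..."
+        # -y 表示非交互;任一主机升级失败就立即停止,避免把问题扩散到下一台
+        ssh "root@${host}" 'yum -y update' || exit 1
+    done
+}
+
+patch_group "${GROUP1}"
+# 在这里人工(或借助监控)确认第一组服务已恢复,再继续第二组
+patch_group "${GROUP2}"
+```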
+-------------------------------------------------------------------------------- + +via: https://www.linuxjournal.com/content/sysadmin-101-patch-management + +作者:[Kyle Rankin ][a] +译者:[haoqixu](https://github.com/haoqixu) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linuxjournal.com/users/kyle-rankin +[1]:https://www.linuxjournal.com/tag/how-tos +[2]:https://www.linuxjournal.com/tag/servers +[3]:https://www.linuxjournal.com/tag/sysadmin +[4]:https://www.linuxjournal.com/users/kyle-rankin From cbdd4be84fb97b3d35f6d21b9bfbd8b4c339f0ec Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 10 Dec 2017 09:15:42 +0800 Subject: [PATCH 178/236] PRF&PUB:20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md @lujun9972 https://linux.cn/article-9123-1.html --- ...nd Scripts at Reboot & Startup in Linux.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) rename {translated/tech => published}/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md (77%) diff --git a/translated/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md b/published/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md similarity index 77% rename from translated/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md rename to published/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md index 107caa6bd9..cc308d7554 100644 --- a/translated/tech/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md +++ b/published/20170918 Executing Commands and Scripts at Reboot & Startup in Linux.md @@ -1,39 +1,39 @@ 在 Linux 启动或重启时执行命令与脚本 ====== + 有时可能会需要在重启时或者每次系统启动时运行某些命令或者脚本。我们要怎样做呢?本文中我们就对此进行讨论。 我们会用两种方法来描述如何在 CentOS/RHEL 以及 Ubuntu 系统上做到重启或者系统启动时执行命令和脚本。 两种方法都通过了测试。 ### 方法 1 – 使用 rc.local -这种方法会利用 `/etc/` 中的 `rc.local` 文件来在启动时执行脚本与命令。我们在文件中加上一行 l 爱执行脚本,这样每次启动系统时,都会执行该脚本。 +这种方法会利用 `/etc/` 中的 `rc.local` 文件来在启动时执行脚本与命令。我们在文件中加上一行来执行脚本,这样每次启动系统时,都会执行该脚本。 不过我们首先需要为 `/etc/rc.local` 添加执行权限, -```shell +``` $ sudo chmod +x /etc/rc.local ``` -然后将要执行的脚本加入其中, +然后将要执行的脚本加入其中: -```shell +``` $ sudo vi /etc/rc.local ``` -在文件最后加上 +在文件最后加上: -```shell +``` sh /root/script.sh & ``` -然后保存文件并退出。 -使用 `rc.local` 文件来执行命令也是一样的,但是一定要记得填写命令的完整路径。 像知道命令的完整路径可以运行 +然后保存文件并退出。使用 `rc.local` 文件来执行命令也是一样的,但是一定要记得填写命令的完整路径。 想知道命令的完整路径可以运行: -```shell +``` $ which command ``` -比如, +比如: -```shell +``` $ which shutter /usr/bin/shutter ``` @@ -48,13 +48,13 @@ $ which shutter 要创建 cron 任务,打开终端并执行 -```shell +``` $ crontab -e ``` 然后输入下行内容, -```shell +``` @reboot ( sleep 90 ; sh \location\script.sh ) ``` @@ -68,7 +68,7 @@ via: http://linuxtechlab.com/executing-commands-scripts-at-reboot/ 作者:[Shusain][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From bac419840e68132378d4d826355c153be0bfdb8b Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 10 Dec 2017 09:30:10 +0800 Subject: [PATCH 179/236] PRF&PUB:20170922 How to disable USB storage on Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @lujun9972 延迟发布 --- ...922 How to disable USB storage on Linux.md | 33 ++++++++++--------- 1 file changed, 17 insertions(+), 16 deletions(-) rename {translated/tech => published}/20170922 How to disable USB storage on Linux.md (68%) diff 
--git a/translated/tech/20170922 How to disable USB storage on Linux.md b/published/20170922 How to disable USB storage on Linux.md similarity index 68% rename from translated/tech/20170922 How to disable USB storage on Linux.md rename to published/20170922 How to disable USB storage on Linux.md index 04a8b607b4..f89e11f691 100644 --- a/translated/tech/20170922 How to disable USB storage on Linux.md +++ b/published/20170922 How to disable USB storage on Linux.md @@ -1,46 +1,47 @@ Linux 上如何禁用 USB 存储 ====== + 为了保护数据不被泄漏,我们使用软件和硬件防火墙来限制外部未经授权的访问,但是数据泄露也可能发生在内部。 为了消除这种可能性,机构会限制和监测访问互联网,同时禁用 USB 存储设备。 在本教程中,我们将讨论三种不同的方法来禁用 Linux 机器上的 USB 存储设备。所有这三种方法都在 CentOS 6&7 机器上通过测试。那么让我们一一讨论这三种方法, -( 另请阅读: [Ultimate guide to securing SSH sessions][1] ) +(另请阅读: [Ultimate guide to securing SSH sessions][1]) ### 方法 1 – 伪安装 -在本方法中,我们往配置文件中添加一行 `install usb-storage /bin/true`, 这会让安装 usb-storage 模块的操作实际上变成运行 `/bin/true`, 这也是为什么这种方法叫做`伪安装`的原因。 具体来说就是, 在文件夹 `/etc/modprobe.d` 中创建并打开一个名为 `block_usb.conf` (也可能教其他名字) , +在本方法中,我们往配置文件中添加一行 `install usb-storage /bin/true`, 这会让安装 usb-storage 模块的操作实际上变成运行 `/bin/true`, 这也是为什么这种方法叫做`伪安装`的原因。 具体来说就是,在文件夹 `/etc/modprobe.d` 中创建并打开一个名为 `block_usb.conf` (也可能叫其他名字) , -```shell +``` $ sudo vim /etc/modprobe.d/block_usb.conf ``` -然后将下行内容添加进去, +然后将下行内容添加进去: -```shell +``` install usb-storage /bin/true ``` 最后保存文件并退出。 -### 方法 2 – 删除 UBS 驱动 +### 方法 2 – 删除 USB 驱动 -这种方法要求我们将 usb 存储的驱动程序(usb_storage.ko)删掉或者移走,从而达到无法再访问 usb 存储设备的目的。 执行下面命令可以将驱动从它默认的位置移走, execute the following command, +这种方法要求我们将 USB 存储的驱动程序(`usb_storage.ko`)删掉或者移走,从而达到无法再访问 USB 存储设备的目的。 执行下面命令可以将驱动从它默认的位置移走: -```shell +``` $ sudo mv /lib/modules/$(uname -r)/kernel/drivers/usb/storage/usb-storage.ko /home/user1 ``` -现在在默认的位置上无法再找到驱动程序了,因此当 USB 存储器连接道系统上时也就无法加载到驱动程序了,从而导致磁盘不可用。 但是这个方法有一个小问题,那就是当系统内核更新的时候,usb-storage 模块会再次出现在它的默认位置。 +现在在默认的位置上无法再找到驱动程序了,因此当 USB 存储器连接到系统上时也就无法加载到驱动程序了,从而导致磁盘不可用。 但是这个方法有一个小问题,那就是当系统内核更新的时候,`usb-storage` 模块会再次出现在它的默认位置。 -### 方法 3- 将 USB-storage 纳入黑名单 +### 方法 3 - 将 USB 存储器纳入黑名单 -我们也可以通过 `/etc/modprobe.d/blacklist.conf` 文件将 usb-storage 纳入黑名单。这个文件在 RHEL/CentOS 6 是现成就有的,但在 7 上可能需要自己创建。 要将 usb 存储列入黑名单,请使用 vim 打开/创建上述文件, +我们也可以通过 `/etc/modprobe.d/blacklist.conf` 文件将 usb-storage 纳入黑名单。这个文件在 RHEL/CentOS 6 是现成就有的,但在 7 上可能需要自己创建。 要将 USB 存储列入黑名单,请使用 vim 打开/创建上述文件: -```shell +``` $ sudo vim /etc/modprobe.d/blacklist.conf ``` -并输入以下行将 USB 纳入黑名单, +并输入以下行将 USB 纳入黑名单: ``` blacklist usb-storage @@ -48,13 +49,13 @@ blacklist usb-storage 保存文件并退出。`usb-storage` 就在就会被系统阻止加载,但这种方法有一个很大的缺点,即任何特权用户都可以通过执行以下命令来加载 `usb-storage` 模块, -```shell +``` $ sudo modprobe usb-storage ``` 这个问题使得这个方法不是那么理想,但是对于非特权用户来说,这个方法效果很好。 -在更改完成后重新启动系统,以使更改生效。请尝试用这些方法来禁用 USB 存储,如果您遇到任何问题或有什么问题,请告知我们。 +在更改完成后重新启动系统,以使更改生效。请尝试用这些方法来禁用 USB 存储,如果您遇到任何问题或有什么疑问,请告知我们。 -------------------------------------------------------------------------------- @@ -63,7 +64,7 @@ via: http://linuxtechlab.com/disable-usb-storage-linux/ 作者:[Shusain][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject)原创编译,[Linux 中国](https://linux.cn/)荣誉推出 From 78a7de4a99fbaded456781719ebe27b7d36ca5cf Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 10 Dec 2017 09:56:12 +0800 Subject: [PATCH 180/236] PRF&PUB:20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md @kimii https://linux.cn/article-9125-1.html --- ...he Friendly Interactive Shell, In Linux.md | 116 ++++-------------- 1 file changed, 26 insertions(+), 90 deletions(-) rename 
{translated/tech => published}/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md (64%) diff --git a/translated/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md b/published/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md similarity index 64% rename from translated/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md rename to published/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md index e519106806..01a6fad444 100644 --- a/translated/tech/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md +++ b/published/20171206 How To Install Fish, The Friendly Interactive Shell, In Linux.md @@ -1,12 +1,13 @@ -如何在 Linux 上安装友好的交互式 shell,Fish +如何在 Linux 上安装友好的交互式 shell:Fish ====== -Fish,友好的交互式 shell 的缩写,它是一个适用于类 Unix 系统的装备良好,智能而且用户友好的 shell。Fish 有着很多重要的功能,比如自动建议,语法高亮,可搜索的历史记录(像在 bash 中 CTRL+r),智能搜索功能,极好的 VGA 颜色支持,基本的 web 设置,完善的手册页和许多开箱即用的功能。尽管安装并立即使用它吧。无需更多其他配置,你也不需要安装任何额外的附加组件/插件! -在这篇教程中,我们讨论如何在 linux 中安装和使用 fish shell。 +Fish,友好的交互式 shellFriendly Interactive SHell 的缩写,它是一个适于装备于类 Unix 系统的智能而用户友好的 shell。Fish 有着很多重要的功能,比如自动建议、语法高亮、可搜索的历史记录(像在 bash 中 `CTRL+r`)、智能搜索功能、极好的 VGA 颜色支持、基于 web 的设置方式、完善的手册页和许多开箱即用的功能。尽管安装并立即使用它吧。无需更多其他配置,你也不需要安装任何额外的附加组件/插件! + +在这篇教程中,我们讨论如何在 Linux 中安装和使用 fish shell。 #### 安装 Fish -尽管 fish 是一个非常用户友好的并且功能丰富的 shell,但在大多数 Linux 发行版的默认仓库中它并没有被包括。它只能在少数 Linux 发行版中的官方仓库中找到,如 Arch Linux,Gentoo,NixOS,和 Ubuntu 等。然而,安装 fish 并不难。 +尽管 fish 是一个非常用户友好的并且功能丰富的 shell,但并没有包括在大多数 Linux 发行版的默认仓库中。它只能在少数 Linux 发行版中的官方仓库中找到,如 Arch Linux,Gentoo,NixOS,和 Ubuntu 等。然而,安装 fish 并不难。 在 Arch Linux 和它的衍生版上,运行以下命令来安装它。 @@ -18,13 +19,7 @@ sudo pacman -S fish ``` cd /etc/yum.repos.d/ -``` - -``` wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_7/shells:fish:release:2.repo -``` - -``` yum install fish ``` @@ -32,13 +27,7 @@ yum install fish ``` cd /etc/yum.repos.d/ -``` - -``` wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_6/shells:fish:release:2.repo -``` - -``` yum install fish ``` @@ -46,21 +35,9 @@ yum install fish ``` wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_9.0/Release.key -O Release.key -``` - -``` apt-key add - < Release.key -``` - -``` echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_9.0/ /' > /etc/apt/sources.list.d/fish.list -``` - -``` apt-get update -``` - -``` apt-get install fish ``` @@ -68,21 +45,9 @@ apt-get install fish ``` wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_8.0/Release.key -O Release.key -``` - -``` apt-key add - < Release.key -``` - -``` echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_8.0/ /' > /etc/apt/sources.list.d/fish.list -``` - -``` apt-get update -``` - -``` apt-get install fish ``` @@ -90,9 +55,6 @@ apt-get install fish ``` dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_26/shells:fish:release:2.repo -``` - -``` dnf install fish ``` @@ -100,9 +62,6 @@ dnf install fish ``` dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_25/shells:fish:release:2.repo -``` - -``` dnf install fish ``` @@ -110,9 +69,6 @@ dnf install fish ``` dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_24/shells:fish:release:2.repo -``` - -``` dnf install fish ``` @@ -120,9 +76,6 @@ dnf install fish ``` 
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_23/shells:fish:release:2.repo -``` - -``` dnf install fish ``` @@ -136,13 +89,7 @@ zypper install fish ``` cd /etc/yum.repos.d/ -``` - -``` wget https://download.opensuse.org/repositories/shells:fish:release:2/RHEL_7/shells:fish:release:2.repo -``` - -``` yum install fish ``` @@ -150,13 +97,7 @@ yum install fish ``` cd /etc/yum.repos.d/ -``` - -``` wget https://download.opensuse.org/repositories/shells:fish:release:2/RedHat_RHEL-6/shells:fish:release:2.repo -``` - -``` yum install fish ``` @@ -164,9 +105,6 @@ yum install fish ``` sudo apt-get update -``` - -``` sudo apt-get install fish ``` @@ -181,44 +119,43 @@ $ fish Welcome to fish, the friendly interactive shell ``` -你可以在 ~/.config/fish/config.fish 上找到默认的 fish 配置(类似于 .bashrc)。如果它不存在,就创建它吧。 +你可以在 `~/.config/fish/config.fish` 上找到默认的 fish 配置(类似于 `.bashrc`)。如果它不存在,就创建它吧。 #### 自动建议 -当我输入一个命令,它自动建议一个浅灰色的命令。所以,我需要输入一个 Linux 命令的前几个字母,然后按下 tab 键来完成这个命令。 +当我输入一个命令,它以浅灰色自动建议一个命令。所以,我需要输入一个 Linux 命令的前几个字母,然后按下 `tab` 键来完成这个命令。 [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png)][2] -如果有更多的可能性,它将会列出它们。你可以使用上/下箭头键从列表中选择列出的命令。在选择你想运行的命令后,只需按下右箭头键,然后按下 ENTER 运行它。 +如果有更多的可能性,它将会列出它们。你可以使用上/下箭头键从列表中选择列出的命令。在选择你想运行的命令后,只需按下右箭头键,然后按下 `ENTER` 运行它。 [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png)][3] -无需 CTRL+r 了!正如你已知道的,我们通过按 ctrl+r 来反向搜索 Bash shell 中的历史命令。但在 fish shell 中是没有必要的。由于它有自动建议功能,只需输入命令的前几个字母,然后从历史记录中选择已经执行的命令。Cool,是吗? +无需 `CTRL+r` 了!正如你已知道的,我们通过按 `CTRL+r` 来反向搜索 Bash shell 中的历史命令。但在 fish shell 中是没有必要的。由于它有自动建议功能,只需输入命令的前几个字母,然后从历史记录中选择已经执行的命令。很酷,是吧。 #### 智能搜索 -我们也可以使用智能搜索来查找一个特定的命令,文件或者目录。例如,我输入一个命令的子串,然后按向下箭头键进行智能搜索,再次输入一个字母来从列表中选择所需的命令。 +我们也可以使用智能搜索来查找一个特定的命令、文件或者目录。例如,我输入一个命令的一部分,然后按向下箭头键进行智能搜索,再次输入一个字母来从列表中选择所需的命令。 [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png)][4] #### 语法高亮 - 当你输入一个命令时,你将注意到语法高亮。请看下面当我在 Bash shell 和 fish shell 中输入相同的命令时截图的区别。 -Bash: +Bash: [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png)][5] -Fish: +Fish: [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png)][6] -正如你所看到的,“sudo” 在 fish shell 中已经被高亮显示。此外,默认情况下它将以红色显示无效命令。 +正如你所看到的,`sudo` 在 fish shell 中已经被高亮显示。此外,默认情况下它将以红色显示无效命令。 -#### 基于 web 的配置 +#### 基于 web 的配置方式 -这是 fish shell 另一个很酷的功能。我们可以设置我们的颜色,更改 fish 提示,并从网页上查看所有功能,变量,历史记录,键绑定。 +这是 fish shell 另一个很酷的功能。我们可以设置我们的颜色、更改 fish 提示符,并从网页上查看所有功能、变量、历史记录、键绑定。 启动 web 配置接口,只需输入: @@ -228,9 +165,9 @@ fish_config [![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png)][7] -#### 手册页完成 +#### 手册页补完 -Bash 和 其它 shells 支持可编程完成,但只有 fish 会通过解析已安装的手册自动生成他们。 +Bash 和 其它 shells 支持可编程的补完,但只有 fish 可以通过解析已安装的手册来自动生成它们。 为此,请运行: @@ -245,9 +182,9 @@ Parsing man pages and writing completions to /home/sk/.local/share/fish/generate 3435 / 3435 : zramctl.8.gz ``` -#### 禁用问候 +#### 禁用问候语 -默认情况下,fish 在启动时问候你(Welcome to fish, the friendly interactive shell)。如果你不想要这个问候消息,可以禁用它。为此,编辑 fish 配置文件: +默认情况下,fish 在启动时问候你(“Welcome to fish, the friendly interactive shell”)。如果你不想要这个问候消息,可以禁用它。为此,编辑 fish 配置文件: ``` vi ~/.config/fish/config.fish @@ -260,7 +197,6 @@ set -g -x fish_greeting '' ``` 你也可以设置任意自定义的问候语,而不是禁用 fish 问候。 -Instead of disabling fish greeting, you can also set any custom greeting message. 
``` set -g -x fish_greeting 'Welcome to OSTechNix' @@ -268,7 +204,7 @@ set -g -x fish_greeting 'Welcome to OSTechNix' #### 获得帮助 -这是另一个引人注目的令人印象深刻的功能。要在终端的默认 web 浏览器中打开 fish 文档页面,只需输入: +这是另一个吸引我的令人印象深刻的功能。要在终端的默认 web 浏览器中打开 fish 文档页面,只需输入: ``` help @@ -282,13 +218,13 @@ man fish #### 设置 fish 为默认 shell -非常喜欢它?太好了!设置它作为默认 shell 吧。为此,请使用命令 chsh: +非常喜欢它?太好了!设置它作为默认 shell 吧。为此,请使用命令 `chsh`: ``` chsh -s /usr/bin/fish ``` -在这里,/usr/bin/fish 是 fish shell 的路径。如果你不知道正确的路径,以下命令将会帮助你: +在这里,`/usr/bin/fish` 是 fish shell 的路径。如果你不知道正确的路径,以下命令将会帮助你: ``` which fish @@ -298,7 +234,7 @@ which fish 请记住,为 Bash 编写的许多 shell 脚本可能不完全兼容 fish。 -要切换会 Bash,只需运行: +要切换回 Bash,只需运行: ``` bash @@ -310,13 +246,13 @@ bash chsh -s /bin/bash ``` -对目前的各位,这就是全部了。在这个阶段,你可能会得到一个有关 fish shell 使用的基本概念。 如果你正在寻找一个Bash的替代品,fish 可能是一个不错的选择。 +各位,这就是全部了。在这个阶段,你可能会得到一个有关 fish shell 使用的基本概念。 如果你正在寻找一个Bash的替代品,fish 可能是一个不错的选择。 Cheers! 资源: -* [fish shell website][1] +* [fish shell 官网][1] -------------------------------------------------------------------------------- @@ -324,7 +260,7 @@ via: https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/ 作者:[SK][a] 译者:[kimii](https://github.com/kimii) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From acaffa1b32edf834fcea1fe72921d4123bcf8955 Mon Sep 17 00:00:00 2001 From: feng lv Date: Sun, 10 Dec 2017 13:08:51 +0800 Subject: [PATCH 181/236] translated --- .../20170413 More Unknown Linux Commands.md | 133 ------------------ .../20170413 More Unknown Linux Commands.md | 132 +++++++++++++++++ 2 files changed, 132 insertions(+), 133 deletions(-) delete mode 100644 sources/tech/20170413 More Unknown Linux Commands.md create mode 100644 translated/tech/20170413 More Unknown Linux Commands.md diff --git a/sources/tech/20170413 More Unknown Linux Commands.md b/sources/tech/20170413 More Unknown Linux Commands.md deleted file mode 100644 index d773d7b4c9..0000000000 --- a/sources/tech/20170413 More Unknown Linux Commands.md +++ /dev/null @@ -1,133 +0,0 @@ -translating by ucasFL - -More Unknown Linux Commands -============================================================ - - -![unknown Linux commands](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/outer-limits-of-linux.jpg?itok=5L5xfj2v "unknown Linux commands") ->Explore the outer limits of Linux with Carla Schroder in this roundup of little-known utilities.[Creative Commons Zero][2]Pixabay - -A roundup of the fun and little-known utilities `termsaver`, `pv`, and `calendar`. `termsaver` is an ASCII screensaver for the console, and `pv` measures data throughput and simulates typing. Debian's `calendar` comes with a batch of different calendars, and instructions for making your own. - -![Linux commands](https://www.linux.com/sites/lcom/files/styles/floated_images/public/linux-commands-fig-1.png?itok=HveXXLLK "Linux commands") - -Figure 1: Star Wars screensaver.[Used with permission][1] - -### Terminal Screensaver - -Why should graphical desktops have all the fun with fancy screensavers? Install `termsaver` to enjoy fancy ASCII screensavers like matrix, clock, starwars, and a couple of not-safe-for-work screens. More on the NSFW screens in a moment. - -`termsaver` is included in Debian/Ubuntu, and if you're using a boring distro that doesn't package fun things (like CentOS), you can download it from [termsaver.brunobraga.net][7] and follow the simple installation instructions. 
- -Run `termsaver -h` to see a list of screens: - -``` - randtxt displays word in random places on screen - starwars runs the asciimation Star Wars movie - urlfetcher displays url contents with typing animation - quotes4all displays recent quotes from quotes4all.net - rssfeed displays rss feed information - matrix displays a matrix movie alike screensaver - clock displays a digital clock on screen - rfc randomly displays RFC contents - jokes4all displays recent jokes from jokes4all.net (NSFW) - asciiartfarts displays ascii images from asciiartfarts.com (NSFW) - programmer displays source code in typing animation - sysmon displays a graphical system monitor -``` - -Then run your chosen screen with `termsaver [screen name]`, e.g. `termsaver matrix`, and stop it with Ctrl+c. Get information on individual screens by running `termsaver [screen name] -h`. Figure 1 is from the `starwars` screen, which runs our old favorite [Asciimation Wars][8]. - -The not-safe-for-work screens pull in online feeds. They're not my cup of tea, but the good news is `termsaver` is a gaggle of Python scripts, so they're easy to hack to connect to any RSS feed you desire. - -### pv - -The `pv` command is one of those funny little utilities that lends itself to creative uses. Its intended use is monitoring data copying progress, like when you run `rsync` or create a `tar`archive. When you run `pv` without options the defaults are: - -* -p progress. - -* -t timer, total elapsed time. - -* -e, ETA, time to completion. This is often inaccurate as `pv` cannot always know the size of the data you are moving. - -* -r, rate counter, or throughput. - -* -b, byte counter. - -This is what an `rsync` transfer looks like: - -``` -$ rsync -av /home/carla/ /media/carla/backup/ | pv -sending incremental file list -[...] -103GiB 0:02:48 [ 615MiB/s] [ <=> -``` - -Create a tar archive like this example: - -``` -$ tar -czf - /file/path| (pv > backup.tgz) - 885MiB 0:00:30 [28.6MiB/s] [ <=> -``` - -`pv` monitors processes. To see maximum activity monitor a Web browser process. It is amazing how much activity that generates: - -``` -$ pv -d 3095 - 58:/home/carla/.pki/nssdb/key4.db: 0 B 0:00:33 - [ 0 B/s] [<=> ] - 78:/home/carla/.config/chromium/Default/Visited Links: - 256KiB 0:00:33 [ 0 B/s] [<=> ] - ] - 85:/home/carla/.con...romium/Default/data_reduction_proxy_leveldb/LOG: - 298 B 0:00:33 [ 0 B/s] [<=> ] -``` - -Somewhere on the Internet I stumbled across a most entertaining way to use `pv` to echo back what I type: - -``` -$ echo "typing random stuff to pipe through pv" | pv -qL 8 -typing random stuff to pipe through pv -``` - -The normal `echo` command prints the whole line at once. Piping it through `pv` makes it appear as though it is being re-typed. I have no idea if this has any practical value, but I like it. The `-L`controls the speed of the playback, in bytes per second. - -`pv` is one of those funny little old commands that has acquired a giant batch of options over the years, including fancy formatting options, multiple output options, and transfer speed modifiers. `man pv` reveals all. - -### /usr/bin/calendar - -It's amazing what you can learn by browsing `/usr/bin` and other commands directories, and reading man pages. `/usr/bin/calendar` on Debian/Ubuntu is a modification of the BSD calendar, but it omits the moon and sun phases. It retains multiple calendars including `calendar.computer, calendar.discordian, calendar.music`, and `calendar.lotr`. 
On my system the man page lists different calendars than exist in `/usr/bin/calendar`. This example displays the Lord of the Rings calendar for the next 60 days: - -``` -$ calendar -f /usr/share/calendar/calendar.lotr -A 60 -Apr 17 An unexpected party -Apr 23 Crowning of King Ellesar -May 19 Arwen leaves Lorian to wed King Ellesar -Jun 11 Sauron attacks Osgilliath -``` - -The calendars are plain text files so you can easily create your own. The easy way is to copy the format of the existing calendar files. `man calendar` contains detailed instructions for creating your own calendar file. - -Once again we come to the end too quickly. Take some time to cruise your own filesystem to dig up interesting commands to play with. - - _Learn more about Linux through the free ["Introduction to Linux" ][5]course from The Linux Foundation and edX._ - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2017/4/more-unknown-linux-commands - -作者:[ CARLA SCHRODER][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/cschroder -[1]:https://www.linux.com/licenses/category/used-permission -[2]:https://www.linux.com/licenses/category/creative-commons-zero -[3]:https://www.linux.com/files/images/linux-commands-fig-1png -[4]:https://www.linux.com/files/images/outer-limits-linuxjpg -[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux -[6]:https://www.addtoany.com/share#url=https%3A%2F%2Fwww.linux.com%2Flearn%2Fintro-to-linux%2F2017%2F4%2Fmore-unknown-linux-commands&title=More%20Unknown%20Linux%20Commands -[7]:http://termsaver.brunobraga.net/ -[8]:http://www.asciimation.co.nz/ diff --git a/translated/tech/20170413 More Unknown Linux Commands.md b/translated/tech/20170413 More Unknown Linux Commands.md new file mode 100644 index 0000000000..95bad0d983 --- /dev/null +++ b/translated/tech/20170413 More Unknown Linux Commands.md @@ -0,0 +1,132 @@ +更多你所不知道的 Linux 命令 +============================================================ + + +![unknown Linux commands](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/outer-limits-of-linux.jpg?itok=5L5xfj2v "unknown Linux commands") +>在这篇文章中和 Carla Schroder 一起探索 Linux 中的一些鲜为人知的强大工具。[CC Zero][2]Pixabay + +本文是一篇关于一些有趣但鲜为人知的工具 `termsaver`、`pv` 和 `calendar` 的文章。`termsaver` 是一个终端 ASCII 锁屏,`pv` 能够测量数据吞吐量并模拟输入。Debian 的 `calendar` 拥有许多不同的日历表,并且你还可以制定你自己的日历表。 + +![Linux commands](https://www.linux.com/sites/lcom/files/styles/floated_images/public/linux-commands-fig-1.png?itok=HveXXLLK "Linux commands") + +*图片 1: 星球大战屏保。[使用许可][1]* + +### 终端屏保 + +难道只有图形桌面能够拥有有趣的屏保吗?现在,你可以通过安装 `termsaver` 来享受 ASCII 屏保,比如 matrix(LCTT 译注:电影《黑客帝国》中出现的黑客屏保)、时钟、星球大战以及一系列不太安全的屏保。有趣的屏保将会瞬间占据 NSFW 屏幕。 + +`termsaver` 可以从 Debian/Ubuntu 的包管理器中直接下载安装,如果你使用别的不包含该软件包的发行版比如 CentOS,那么你可以从 [termsaver.brunobraga.net][7] 下载,然后按照安装指导进行安装。 + +运行 `termsaver -h` 来查看一系列屏保: + +``` + randtxt displays word in random places on screen + starwars runs the asciimation Star Wars movie + urlfetcher displays url contents with typing animation + quotes4all displays recent quotes from quotes4all.net + rssfeed displays rss feed information + matrix displays a matrix movie alike screensaver + clock displays a digital clock on screen + rfc randomly displays RFC contents + jokes4all displays recent jokes from jokes4all.net (NSFW) + asciiartfarts 
displays ascii images from asciiartfarts.com (NSFW)
+ programmer     displays source code in typing animation
+ sysmon         displays a graphical system monitor
+```
+
+你可以通过运行命令 `termsaver [屏保名]` 来使用屏保,比如 `termsaver matrix` ,然后按 `Ctrl+c` 停止。你也可以通过运行 `termsaver [屏保名] -h` 命令来获取关于某一个特定屏保的信息。图片 1 来自 `starwars` 屏保,它运行的是古老但受人喜爱的 [Asciimation Wars][8] 。
+
+那些不太安全的屏保通过在线获取资源的方式运行,我并不喜欢它们,但好消息是,由于 `termsaver` 是一些 Python 的脚本文件,因此,你可以很容易的利用它们连接到任何你想要的 RSS 资源。
+
+### pv
+
+`pv` 命令是一个非常有趣的小工具但却很实用。它的用途是监测数据复制的进度,比如,当你运行 `rsync` 命令或创建一个 `tar` 归档的时候。当你不带任何选项运行 `pv` 命令时,默认参数为:
+
+* -p :进度
+
+* -t :时间,到当前总运行时间
+
+* -e :预计完成时间,这往往是不准确的,因为 `pv` 通常不知道需要移动的数据的大小
+
+* -r :速率计数器,或吞吐量
+
+* -b :字节计数器
+
+一次 `rsync` 传输看起来像这样:
+
+```
+$ rsync -av /home/carla/ /media/carla/backup/ | pv
+sending incremental file list
+[...]
+103GiB 0:02:48 [ 615MiB/s] [ <=>
+```
+
+创建一个 tar 归档,就像下面这个例子:
+
+```
+$ tar -czf - /file/path| (pv > backup.tgz)
+ 885MiB 0:00:30 [28.6MiB/s] [ <=>
+```
+
+`pv` 能够监测进程。要见识大量的活动,可以监测一个 Web 浏览器进程,令人惊讶的是,它会产生如此多的活动:
+
+```
+$ pv -d 3095
+ 58:/home/carla/.pki/nssdb/key4.db: 0 B 0:00:33
+ [ 0 B/s] [<=> ]
+ 78:/home/carla/.config/chromium/Default/Visited Links:
+ 256KiB 0:00:33 [ 0 B/s] [<=> ]
+ ]
+ 85:/home/carla/.con...romium/Default/data_reduction_proxy_leveldb/LOG:
+ 298 B 0:00:33 [ 0 B/s] [<=> ]
+```
+
+在网上,我偶然发现一个使用 `pv` 最有趣的方式:使用 `pv` 来回显输入的内容:
+
+```
+$ echo "typing random stuff to pipe through pv" | pv -qL 8
+typing random stuff to pipe through pv
+```
+
+普通的 `echo` 命令会瞬间打印一整行内容。通过管道传给 `pv` 之后能够让内容像是重新输入一样的显示出来。我不知道这是否有实际的价值,但是我非常喜欢它。`-L` 选项控制回显的速度,即多少字节每秒。
+
+`pv` 是一个非常古老且非常有趣的命令,这么多年以来,它拥有了许多的选项,包括有趣的格式化选项,多输出选项,以及传输速度修改器。你可以通过 `man pv` 来查看所有的选项。
+
+### /usr/bin/calendar
+
+通过浏览 `/usr/bin` 目录以及其他命令目录和阅读 man 手册,你能够学到很多东西。在 Debian/Ubuntu 上的 `/usr/bin/calendar` 是 BSD 日历的一个变种,但它略去了月相和太阳相位等信息。它保留了多个日历,包括 `calendar.computer, calendar.discordian, calendar.music` 以及 `calendar.lotr`。在我的系统上,man 手册列出的日历与 `/usr/bin/calendar` 中实际存在的并不相同。下面这个例子展示了指环王日历接下来的 60 天:
+
+```
+$ calendar -f /usr/share/calendar/calendar.lotr -A 60
+Apr 17 An unexpected party
+Apr 23 Crowning of King Ellesar
+May 19 Arwen leaves Lorian to wed King Ellesar
+Jun 11 Sauron attacks Osgilliath
+```
+
+这些日历是纯文本文件,因此,你可以轻松的创建你自己的日历。最简单的方式就是复制已经存在的日历文件的格式。你可以通过 `man calendar` 命令来查看创建个人日历文件的更详细的指导。
+
+又一次很快走到了尽头。你可以花费一些时间来浏览你的文件系统,挖掘更多有趣的命令。
+
+ _你可以通过来自 Linux 基金会和 edx 的免费课程 ["Introduction to Linux"][5] 来学习更多关于 Linux 的知识_。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/4/more-unknown-linux-commands
+
+作者:[CARLA SCHRODER][a]
+译者:[ucasFL](https://github.com/ucasFL)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/cschroder
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/creative-commons-zero
+[3]:https://www.linux.com/files/images/linux-commands-fig-1png
+
+[4]:https://www.linux.com/files/images/outer-limits-linuxjpg
+[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[6]:https://www.addtoany.com/share#url=https%3A%2F%2Fwww.linux.com%2Flearn%2Fintro-to-linux%2F2017%2F4%2Fmore-unknown-linux-commands&title=More%20Unknown%20Linux%20Commands
+[7]:http://termsaver.brunobraga.net/
+[8]:http://www.asciimation.co.nz/
From 13685d79a4ce5410c72efaee9e8bae9d30d96d71 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?=
Date: 
Sun, 10 Dec 2017 16:08:01 +0800
Subject: [PATCH 182/236] Update 20171207 7 tools for analyzing performance in
 Linux with bccBPF.md

---
 ...lyzing performance in Linux with bccBPF.md | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md b/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md
index eebfa9e30d..d0733e0903 100644
--- a/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md
+++ b/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md
@@ -18,27 +18,28 @@ opensource.com

在 linux 中出现的一种新技术能够为系统管理员和开发者提供大量用于性能分析和故障排除的新工具和仪表盘。 它被称为增强的伯克利数据包过滤器(eBPF,或BPF),虽然这些改进并不由伯克利开发,它们不仅仅是处理数据包,更多的是过滤。我将讨论在 Fedora 和 Red Hat Linux 发行版中使用 BPF 的一种方法,并在 Fedora 26 上演示。

-BPF can run user-defined sandboxed programs in the kernel to add new custom capabilities instantly. It's like adding superpowers to Linux, on demand. Examples of what you can use it for include:
+BPF 可以在内核中运行用户自定义的沙盒程序,即刻添加新的自定义功能。这就像按需给 Linux 系统添加超能力一般。你可以用它来做的事情包括:

-* Advanced performance tracing tools: programmatic low-overhead instrumentation of filesystem operations, TCP events, user-level events, etc.
+* 高级性能跟踪工具:对文件系统操作、TCP 事件、用户级事件等进行可编程的低开销插桩。

-* Network performance: dropping packets early on to improve DDOS resilience, or redirecting packets in-kernel to improve performance
+* 网络性能:尽早丢弃数据包以提高对 DDoS 的抵御能力,或者在内核中重定向数据包以提高性能。

-* Security monitoring: 24x7 custom monitoring and logging of suspicious kernel and userspace events
+* 安全监控:24x7 全天候自定义监控,并记录内核空间与用户空间内的可疑事件。

-BPF programs must pass an in-kernel verifier to ensure they are safe to run, making it a safer option, where possible, than writing custom kernel modules. I suspect most people won't write BPF programs themselves, but will use other people's. I've published many on GitHub as open source in the [BPF Compiler Collection (bcc)][12] project. bcc provides different frontends for BPF development, including Python and Lua, and is currently the most active project for BPF tooling.
+BPF 程序必须先通过内核内置的校验器确认可以安全运行,因此在可行的场合,它比编写自定义内核模块更安全。我猜想大多数人不会自己编写 BPF 程序,而是使用别人写好的。我已经在 GitHub 上的 [BPF Compiler Collection (bcc)][12] 项目中以开源形式发布了许多。bcc 为 BPF 开发提供了包括 Python 和 Lua 在内的多种前端支持,也是目前 BPF 工具方面最活跃的项目。

-### 7 useful new bcc/BPF tools
+### 7 个有用的 bcc/BPF 新工具

+为了让你了解 bcc/BPF 工具以及它们各自所插桩检测的对象,我创建了下面的图表并把它加入了 bcc 项目:
 To understand the bcc/BPF tools and what they instrument, I created the following diagram and added it to the bcc project:

-### [bcc_tracing_tools.png][13]
+### [bcc_跟踪工具.png][13]

![Linux bcc/BPF 跟踪工具图](https://opensource.com/sites/default/files/u128651/bcc_tracing_tools.png)

Brendan Gregg, [CC BY-SA 4.0][14]

-These are command-line interface (CLI) tools you can use over SSH (secure shell). Much analysis nowadays, including at my employer, is conducted using GUIs and dashboards. SSH is a last resort. But these CLI tools are still a good way to preview BPF capabilities, even if you ultimately intend to use them only through a GUI when available. I've began adding BPF capabilities to an open source GUI, but that's a topic for another article. Right now I'd like to share the CLI tools, which you can use today.
+这些是可以通过 SSH(安全外壳)使用的命令行界面(CLI)工具。如今的大多数分析,包括在我的雇主那里,都是用 GUI 和仪表盘完成的,SSH 只是最后的手段。但这些 CLI 工具仍然是预览 BPF 能力的好方法,即使你最终只打算在条件允许时通过 GUI 来使用这些能力。我已着手为一个开源 GUI 添加 BPF 功能,但那是另一篇文章的主题。现在我想分享的是你今天就能用上的 CLI 工具。

### 1\. 
execsnoop From 83199cc7a00dfe6ea23d464c2f1b2767baa55de3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=82=B9=E8=8D=A3=E5=8D=87?= Date: Sun, 10 Dec 2017 19:34:59 +0800 Subject: [PATCH 183/236] Update 20161216 GitHub Is Building a Coder Paradise.md --- sources/tech/20161216 GitHub Is Building a Coder Paradise.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20161216 GitHub Is Building a Coder Paradise.md b/sources/tech/20161216 GitHub Is Building a Coder Paradise.md index 36e9e76343..d8a7b9e467 100644 --- a/sources/tech/20161216 GitHub Is Building a Coder Paradise.md +++ b/sources/tech/20161216 GitHub Is Building a Coder Paradise.md @@ -1,3 +1,4 @@ +translating by zrszrszrs GitHub Is Building a Coder’s Paradise. It’s Not Coming Cheap ============================================================ From 8056b28b8b767af79a2d3b12726fe46b40639761 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=82=B9=E8=8D=A3=E5=8D=87?= Date: Sun, 10 Dec 2017 19:38:20 +0800 Subject: [PATCH 184/236] Update 20171201 12 MySQL MariaDB Security Best Practices for Linux.md --- ...0171201 12 MySQL MariaDB Security Best Practices for Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md b/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md index 5cf9169661..5653e2daed 100644 --- a/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md +++ b/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md @@ -1,3 +1,4 @@ +translating by zrszrszrs 12 MySQL/MariaDB Security Best Practices for Linux ============================================================ From 66abaff1fca41d94fe0c4cad993a1dc475120592 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=82=B9=E8=8D=A3=E5=8D=87?= Date: Sun, 10 Dec 2017 19:39:08 +0800 Subject: [PATCH 185/236] translating by zrszrszrs --- ...171201 12 MySQL MariaDB Security Best Practices for Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md b/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md index 5653e2daed..8897f74f39 100644 --- a/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md +++ b/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md @@ -1,4 +1,4 @@ -translating by zrszrszrs +translating by zrszrszr 12 MySQL/MariaDB Security Best Practices for Linux ============================================================ From 5a8ca802d4eacdf388f92f099ba03f04f9bb1956 Mon Sep 17 00:00:00 2001 From: liuyakun Date: Sun, 10 Dec 2017 23:08:50 +0800 Subject: [PATCH 186/236] =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=8E=9F=E6=96=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...es in Red Hat Enterprise Linux – Part 1.md | 232 ------------------ 1 file changed, 232 deletions(-) delete mode 100644 sources/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md diff --git a/sources/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md b/sources/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md deleted file mode 100644 index 2193b8078c..0000000000 --- a/sources/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md +++ /dev/null @@ -1,232 +0,0 @@ -translating by liuxinyu123 - -Containing System Services in Red Hat Enterprise Linux – Part 1 
-============================================================ - - -At the 2017 Red Hat Summit, several people asked me “We normally use full VMs to separate network services like DNS and DHCP, can we use containers instead?”. The answer is yes, and here’s an example of how to create a system container in Red Hat Enterprise Linux 7 today.    - -### **THE GOAL** - -#### _Create a network service that can be updated independently of any other services of the system, yet easily managed and updated from the host._ - -Let’s explore setting up a BIND server running under systemd in a container. In this part, we’ll look at building our container, as well as managing the BIND configuration and data files. - -In Part Two, we’ll look at how systemd on the host integrates with systemd in the container. We’ll explore managing the service in the container, and enabling it as a service on the host. - -### **CREATING THE BIND CONTAINER** - -To get systemd working inside a container easily, we first need to add two packages on the host: `oci-register-machine` and `oci-systemd-hook`. The `oci-systemd-hook` hook allows us to run systemd in a container without needing to use a privileged container or manually configuring tmpfs and cgroups. The `oci-register-machine` hook allows us to keep track of the container with the systemd tools like `systemctl` and `machinectl`. - -``` -[root@rhel7-host ~]# yum install oci-register-machine oci-systemd-hook -``` - -On to creating our BIND container. The [Red Hat Enterprise Linux 7 base image][6]  includes systemd as an init system. We can install and enable BIND the same way we would on a typical system. You can [download this Dockerfile from the git repository][7] in the Resources. - -``` -[root@rhel7-host bind]# vi Dockerfile - -# Dockerfile for BIND -FROM registry.access.redhat.com/rhel7/rhel -ENV container docker -RUN yum -y install bind && \ -    yum clean all && \ -    systemctl enable named -STOPSIGNAL SIGRTMIN+3 -EXPOSE 53 -EXPOSE 53/udp -CMD [ "/sbin/init" ] -``` - -Since we’re starting with an init system as PID 1, we need to change the signal sent by the docker CLI when we tell the container to stop. From the `kill` system call man pages (`man 2 kill`): - -``` -The only signals that can be sent to process ID 1, the init -process, are those for which init has explicitly installed -signal handlers. This is done to assure the system is not -brought down accidentally. -``` - -For the systemd signal handlers, `SIGRTMIN+3` is the signal that corresponds to `systemd start halt.target`. We also expose both TCP and UDP ports for BIND, since both protocols could be in use. - -### **MANAGING DATA** - -With a functional BIND service, we need a way to manage the configuration and zone files. Currently those are inside the container, so we  _could_  enter the container any time we wanted to update the configs or make a zone file change. This isn’t ideal from a management perspective.  We’ll need to rebuild the container when we need to update BIND, so changes in the images would be lost. Having to enter the container any time we need to update a file or restart the service adds steps and time. - -Instead, we’ll extract the configuration and data files from the container and copy them to the host, then mount them at run time. This way we can easily restart or rebuild the container without losing changes. We can also modify configs and zones by using an editor outside of the container. 
Since this container data looks like “ _site-specific data served by this system_ ”, let’s follow the File System Hierarchy and create `/srv/named` on the local host to maintain administrative separation.
-
-```
-[root@rhel7-host ~]# mkdir -p /srv/named/etc
-
-[root@rhel7-host ~]# mkdir -p /srv/named/var/named
-```
-
-##### _NOTE: If you are migrating an existing configuration, you can skip the following step and copy it directly to the `/srv/named` directories. You may still want to check the container assigned GID with a temporary container._
-
-Let’s build and run a temporary container to examine BIND. With a init process as PID 1, we can’t run the container interactively to get a shell. We’ll exec into it after it launches, and check for important files with `rpm`.
-
-```
-[root@rhel7-host ~]# docker build -t named .
-
-[root@rhel7-host ~]# docker exec -it $( docker run -d named ) /bin/bash
-
-[root@0e77ce00405e /]# rpm -ql bind
-```
-
-For this example, we’ll need `/etc/named.conf` and everything under `/var/named/`. We can extract these with `machinectl`. If there’s more than one container registered, we can see what’s running in any machine with `machinectl status`. Once we have the configs we can stop the temporary container.
-
- _There’s also a [sample `named.conf` and zone files for `example.com` in the Resources][2] if you prefer._
-
-```
-[root@rhel7-host bind]# machinectl list
-
-MACHINE                          CLASS     SERVICE
-8824c90294d5a36d396c8ab35167937f container docker
-
-[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /etc/named.conf /srv/named/etc/named.conf
-
-[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /var/named /srv/named/var/named
-
-[root@rhel7-host ~]# docker stop infallible_wescoff
-```
-
-### **FINAL CREATION**
-
-To create and run the final container, add the volume options to mount:
-
-* file `/srv/named/etc/named.conf` as `/etc/named.conf`
-
-* directory `/srv/named/var/named` as `/var/named`
-
-Since this is our final container, we’ll also provide a meaningful name that we can refer to later.
-
-```
-[root@rhel7-host ~]# docker run -d -p 53:53 -p 53:53/udp -v /srv/named/etc/named.conf:/etc/named.conf:Z -v /srv/named/var/named:/var/named:Z --name named-container named
-```
-
-With the final container running, we can modify the local configs to change the behavior of BIND in the container. The BIND server will need to listen on any IP that the container might be assigned. Be sure the GID of any new file matches the rest of the BIND files from the container.
-
-```
-[root@rhel7-host bind]# cp named.conf /srv/named/etc/named.conf
-
-[root@rhel7-host ~]# cp example.com.zone /srv/named/var/named/example.com.zone
-
-[root@rhel7-host ~]# cp example.com.rr.zone  /srv/named/var/named/example.com.rr.zone
-```
-
-> [Curious why I didn’t need to change SELinux context on the host directories?][3]
-
-We’ll reload the config by exec’ing the `rndc` binary provided by the container. We can use `journald` in the same fashion to check the BIND logs. If you run into errors, you can edit the file on the host, and reload the config. Using `host` or `dig` on the host, we can check the responses from the contained service for example.com.
-
-```
-[root@rhel7-host ~]# docker exec -it named-container rndc reload
-server reload successful
-
-[root@rhel7-host ~]# docker exec -it named-container journalctl -u named -n
--- Logs begin at Fri 2017-05-12 19:15:18 UTC, end at Fri 2017-05-12 19:29:17 UTC. 
-- -May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 9.E.F.IP6.ARPA -May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: A.E.F.IP6.ARPA -May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: B.E.F.IP6.ARPA -May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 8.B.D.0.1.0.0.2.IP6.ARPA -May 12 19:29:17 ac1752c314a7 named[27]: reloading configuration succeeded -May 12 19:29:17 ac1752c314a7 named[27]: reloading zones succeeded -May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: loaded serial 2001062601 -May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: sending notifies (serial 2001062601) -May 12 19:29:17 ac1752c314a7 named[27]: all zones loaded -May 12 19:29:17 ac1752c314a7 named[27]: running - -[root@rhel7-host bind]# host www.example.com localhost -Using domain server: -Name: localhost -Address: ::1#53 -Aliases: -www.example.com is an alias for server1.example.com. -server1.example.com is an alias for mail -``` - -> [Did your zone file not update? It might be your editor not the serial number.][4] - -### THE FINISH LINE (?) - -We’ve got what we set out to accomplish. DNS requests and zones are being served from a container. We’ve got a persistent location to manage data and configurations across updates.   - -In Part 2 of this series, we’ll see how to treat the container as a normal service on the host. - -* * * - - _[Follow the RHEL Blog][5] to receive updates on Part 2 of this series and other new posts via email._ - -* * * - -### _**Additional Resources:**_ - -#### GitHub repository for accompanying files:  [https://github.com/nzwulfin/named-container][8] - -#### **SIDEBAR 1: ** _SELinux context on local files accessed by a container_ - -You may have noticed that when I copied the files from the container to the local host, I didn’t run a `chcon` to change the files on the host to type `svirt_sandbox_file_t`.  Why didn’t it break? Copying a file into `/srv` should have made that file label type `var_t`. Did I `setenforce 0`? - -Of course not, that would make Dan Walsh cry.  And yes, `machinectl` did indeed set the label type as expected, take a look: - -Before starting the container: - -``` -[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf - --rw-r-----. unconfined_u:object_r:var_t:s0   /srv/named/etc/named.conf -``` - -No, I used a volume option in run that makes Dan Walsh happy, `:Z`.  This part of the command `-v /srv/named/etc/named.conf:/etc/named.conf:Z` does two things: first it says this needs to be relabeled with a private volume SELinux label, and second it says to mount it read / write. - -After starting the container: - -``` -[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf - --rw-r-----. root 25 system_u:object_r:svirt_sandbox_file_t:s0:c821,c956 /srv/named/etc/named.conf -``` - -#### **SIDEBAR 2: ** _VIM backup behavior can change inodes_ - -If you made the edits to the config file with `vim` on the local host and you aren’t seeing the changes in the container, you may have inadvertently created a new file that the container isn’t aware of. There are three `vim` settings that affect backup copies during editing: backup, writebackup, and backupcopy. - -I’ve snipped out the defaults that apply for RHEL 7 from the official VIM backup_table [http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table] - -``` -backup    writebackup - -   off     on backup current file, deleted afterwards (default) -``` - -So we don’t create tilde copies that stick around, but we are creating backups. 
The other setting is backupcopy, where `auto` is the shipped default:
-
-```
-"yes" make a copy of the file and overwrite the original one
- "no" rename the file and write a new one
- "auto" one of the previous, what works best
-```
-
-This combo means that when you edit a file, unless `vim` sees a reason not to (check the docs for the logic) you will end up with a new file that contains your edits, which will be renamed to the original filename when you save. This means the file gets a new inode. For most situations this isn’t a problem, but here the bind mount into the container *is* sensitive to inode changes. To solve this, you need to change the backupcopy behavior.
-
-Either in the `vim` session or in your `.vimrc`, add `set backupcopy=yes`. This will make sure the original file gets truncated and overwritten, preserving the inode and propagating the changes into the container.
-
--------------------------------------------------------------------------------
-
-via: http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
-
-作者:[Matt Micene ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
-[1]:http://rhelblog.redhat.com/author/mmicenerht/
-[2]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo
-[3]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_1
-[4]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_2
-[5]:http://redhatstackblog.wordpress.com/feed/
-[6]:https://access.redhat.com/containers
-[7]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo
-[8]:https://github.com/nzwulfin/named-container

From db0a868c69b8137e0b7f7902b3899a25a2436301 Mon Sep 17 00:00:00 2001
From: liuyakun
Date: Sun, 10 Dec 2017 23:43:42 +0800
Subject: [PATCH 187/236] translating by liuxinyu123

---
 .../tech/20171127 ​Long-term Linux support future clarified.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20171127 ​Long-term Linux support future clarified.md b/sources/tech/20171127 ​Long-term Linux support future clarified.md
index e077f33425..f9e6b5d3b3 100644
--- a/sources/tech/20171127 ​Long-term Linux support future clarified.md
+++ b/sources/tech/20171127 ​Long-term Linux support future clarified.md
@@ -1,3 +1,5 @@
+translating by liuxinyu123
+
 Long-term Linux support future clarified
 ============================================================

From 49d96d7cedf4dce6233a24f8415fbf981c353a26 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?=
Date: Mon, 11 Dec 2017 00:39:38 +0800
Subject: [PATCH 188/236] Update 20171207 7 tools for analyzing performance in Linux with bccBPF.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

翻译 7 tools for analyzing performance in Linux with bcc/BPF

---
 ...lyzing performance in Linux with bccBPF.md | 81 ++++++++++++-------
 1 file changed, 50 insertions(+), 31 deletions(-)

diff --git a/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md b/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md
index d0733e0903..4e55ed979a 100644
---
a/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md
+++ b/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md
@@ -1,6 +1,7 @@
 translating by yongshouzhang

-7 tools for analyzing performance in Linux with bcc/BPF
+
+7个 Linux 下使用 bcc/BPF 的性能分析工具
 ============================================================

### 使用伯克利数据包过滤器(BPF)编译器集合(BCC)工具深入探查你的 Linux 代码。

@@ -18,15 +19,15 @@ opensource.com

在 Linux 中出现的一种新技术能够为系统管理员和开发者提供大量用于性能分析和故障排除的新工具和仪表盘。它被称为增强的伯克利数据包过滤器(eBPF,或 BPF),虽然这些改进并不是在伯克利开发的,但它们处理的不仅仅是数据包,能做的也不只是过滤。我将讨论在 Fedora 和 Red Hat Linux 发行版中使用 BPF 的一种方法,并在 Fedora 26 上演示。

-BPF 可以运行自定义沙盒程序在内核中即刻添加新的自定义功能。这就像可按需给 Linux 系统添加超能力一般。 你可以使用它的例子包括如下:
+BPF 可以在内核中运行用户定义的沙盒程序,以立即添加新的自定义功能。这就像可以按需给 Linux 系统添加超能力一般。它的用途举例如下:

-* 高级性能跟踪工具:文件系统操作、TCP事件、用户级事件等的编程低开销指令。
+* 高级性能跟踪工具:以编程方式对文件系统操作、TCP 事件、用户级事件等进行低开销检测。

-* 网络性能 : 尽早丢弃数据包以提高DDoS的恢复能力,或者在内核中重定向数据包以提高性能。
+* 网络性能:尽早丢弃数据包以提高抗 DDoS 的能力,或者在内核中重定向数据包以提高性能。

-* 安全监控 : 24x7 小时全天候自定义检测和记录内核空间与用户空间内的可疑事件。
+* 安全监控:24x7 全天候自定义检测和记录内核空间与用户空间内的可疑事件。

-在可能的情况下,BPF 程序必须通过一个内核验证机制来保证它们的安全运行,这比写自定义的内核模块更安全。我在此假设大多数人并不编写自己的 BPF 程序,而是使用别人写好的。在 GitHub 上的 [BPF Compiler Collection (bcc)][12] 项目中,我已发布许多。开源代码。bcc 提供不同的 BPF 开发前端支持,包括Python和Lua,并且是目前 BPF 工具开发中最活跃的项目。
+BPF 程序必须先通过内核内置的验证器检查以确保安全才能运行,因此在可行的情况下,它比编写自定义内核模块更安全。我猜想大多数人并不会自己编写 BPF 程序,而是使用别人写好的。我已经在 GitHub 上的 [BPF Compiler Collection (bcc)][12] 项目中以开源形式发布了许多 BPF 程序。bcc 为 BPF 开发提供了包括 Python 和 Lua 在内的多种前端支持,并且是目前 BPF 工具开发中最活跃的项目。

@@ -43,7 +44,7 @@ Brendan Gregg, [CC BY-SA 4.0][14]

### 1\. execsnoop

-Where to start? How about watching new processes. These can consume system resources, but be so short-lived they don't show up in top(1) or other tools. They can be instrumented (or, using the industry jargon for this, they can be traced) using [execsnoop][15]. While tracing, I'll log in over SSH in another window:
+从哪儿开始呢?先来看看新进程。新进程可能消耗系统资源,却因为生命周期太短而不会出现在 top(1) 或其他工具中。可以使用 [execsnoop][15] 对它们进行检测(或者用行业术语说:追踪)。在追踪时,我将在另一个窗口中通过 SSH 登录:

```
# /usr/share/bcc/tools/execsnoop
PCOMM            PID    PPID   RET ARGS
sshd             12234  727      0 /usr/sbin/sshd -D -R
unix_chkpwd      12236  12234    0 /usr/sbin/unix_chkpwd root nonull
unix_chkpwd      12237  12234    0 /usr/sbin/unix_chkpwd root chkexpiry
bash             12239  12238    0 /bin/bash
id               12241  12240    0 /usr/bin/id -un
hostname         12243  12242    0 /usr/bin/hostname
pkg-config       12245  12244    0 /usr/bin/pkg-config --variable=completionsdir bash-completion
grepconf.sh      12246  12239    0 /usr/libexec/grepconf.sh -c
grep             12247  12246    0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
tty              12249  12248    0 /usr/bin/tty -s
tput             12250  12248    0 /usr/bin/tput colors
dircolors        12252  12251    0 /usr/bin/dircolors --sh /etc/DIR_COLORS
grep             12253  12239    0 /usr/bin/grep -qi ^COLOR.*none /etc/DIR_COLORS
grepconf.sh      12254  12239    0 /usr/libexec/grepconf.sh -c
grep             12255  12254    0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
grepconf.sh      12256  12239    0 /usr/libexec/grepconf.sh -c
grep             12257  12256    0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
```

哇,那是什么?grepconf.sh 是什么?/etc/GREP_COLORS 又是什么?而且,grep 是通过运行自身来读取它自己的配置文件的吗?这到底是怎么工作的?

-Welcome to the fun of system tracing. You can learn a lot about how the system is really working (or not working, as the case may be) and discover some easy optimizations along the way. execsnoop works by tracing the exec() system call, which is usually used to load different program code in new processes.
+欢迎来到有趣的系统追踪世界。你可以学到很多关于系统究竟是如何工作的(或者在某些情况下根本不能工作)的知识,并且在此过程中发现一些简单的优化点。execsnoop 通过跟踪 exec() 系统调用来工作,exec() 通常用于在新进程中加载不同的程序代码。
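顺带补充一个小技巧(示意,假设工具路径与上文一致):execsnoop 的输出是纯文本,可以直接用管道过滤出感兴趣的命令,比如只观察与 grepconf 相关的新进程:

```
# /usr/share/bcc/tools/execsnoop | grep grepconf
```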
这将在这种情况下起作用。然而,opensnoop 的一些优点在于它能在系统范围内工作,并且跟踪所有进程的 open()系统调用。注意上例的输出中包括了从systemd打开的文件。Opensnoop 也应该有更低的开销:BPF 跟踪已经被优化,并且当前版本的 strace(1)仍然使用较老和较慢的 ptrace(2)接口。 ### 3\. xfsslower -bcc/BPF can analyze much more than just syscalls. The [xfsslower][17] tool traces common XFS filesystem operations that have a latency of greater than 1 millisecond (the argument): +bcc/BPF 不仅仅可以分析系统调用。[xfsslower][17] 工具跟踪具有大于1毫秒(参数)延迟的常见XFS文件系统操作。 ``` # /usr/share/bcc/tools/xfsslower 1 @@ -115,14 +119,15 @@ TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME 14:17:46 cksum 4168 R 65536 128 1.01 grub2-fstest [...] ``` +在上图输出中,我捕获了多个延迟超过 1 毫秒 的 cksum(1)读数(字段“T”等于“R”)。这个工作是在 xfsslower 工具运行的时候,通过在 XFS 中动态地设置内核函数实现,当它结束的时候解除检测。其他文件系统也有这个 bcc 工具的版本:ext4slower,btrfsslower,zfsslower 和 nfsslower。 -This is a useful tool and an important example of BPF tracing. Traditional analysis of filesystem performance focuses on block I/O statistics—what you commonly see printed by the iostat(1) tool and plotted by many performance-monitoring GUIs. Those statistics show how the disks are performing, but not really the filesystem. Often you care more about the filesystem's performance than the disks, since it's the filesystem that applications make requests to and wait for. And the performance of filesystems can be quite different from that of disks! Filesystems may serve reads entirely from memory cache and also populate that cache via a read-ahead algorithm and for write-back caching. xfsslower shows filesystem performance—what the applications directly experience. This is often useful for exonerating the entire storage subsystem; if there is really no filesystem latency, then performance issues are likely to be elsewhere. +这是个有用的工具,也是 BPF 追踪的重要例子。对文件系统性能的传统分析主要集中在块 I/O 统计信息 - 通常你看到的是由 iostat(1)工具打印并由许多性能监视 GUI 绘制的图表。这些统计数据显示了磁盘如何执行,但不是真正的文件系统。通常比起磁盘你更关心文件系统的性能,因为应用程序是在文件系统中发起请求和等待。并且文件系统的性能可能与磁盘的性能大为不同!文件系统可以完全从内存缓存中读取数据,也可以通过预读算法和回写缓存填充缓存。xfsslower 显示了文件系统的性能 - 应用程序直接体验到什么。这对于免除整个存储子系统通常是有用的; 如果确实没有文件系统延迟,那么性能问题很可能在别处。 ### 4\. biolatency -Although filesystem performance is important to study for understanding application performance, studying disk performance has merit as well. Poor disk performance will affect the application eventually, when various caching tricks can no longer hide its latency. Disk performance is also a target of study for capacity planning. +虽然文件系统性能对于理解应用程序性能非常重要,但研究磁盘性能也是有好处的。当各种缓存技巧不能再隐藏其延迟时,磁盘的低性能终会影响应用程序。 磁盘性能也是容量规划研究的目标。 -The iostat(1) tool shows the average disk I/O latency, but averages can be misleading. It can be useful to study the distribution of I/O latency as a histogram, which can be done using [biolatency][18]: +iostat(1)工具显示平均磁盘 I/O 延迟,但平均值可能会引起误解。 以直方图的形式研究 I/O 延迟的分布是有用的,这可以通过使用 [biolatency] 来实现[18]: ``` # /usr/share/bcc/tools/biolatency @@ -142,8 +147,9 @@ Tracing block device I/O... Hit Ctrl-C to end. 1024 -> 2047 : 117 |******** | 2048 -> 4095 : 8 | | ``` +这是另一个有用的工具和例子; 它使用一个名为maps的BPF特性,它可以用来实现高效的内核内摘要统计。从内核级别到用户级别的数据传输仅仅是“计数”列。 用户级程序生成其余的。 -It's worth noting that many of these tools support CLI options and arguments as shown by their USAGE message: +值得注意的是,其中许多工具支持CLI选项和参数,如其使用信息所示: ``` # /usr/share/bcc/tools/biolatency -h @@ -169,10 +175,11 @@ examples: ./biolatency -Q # include OS queued time in I/O time ./biolatency -D # show each disk device separately ``` +它们的行为像其他Unix工具是通过设计,以协助采用。 ### 5\. 
### 5\. tcplife

-Another useful tool and example, this time showing lifespan and throughput statistics of TCP sessions, is [tcplife][19]:
+另一个有用的工具是 [tcplife][19],本例显示了 TCP 会话的生命周期和吞吐量统计信息:

```
# /usr/share/bcc/tools/tcplife
PID   COMM       LADDR           LPORT RADDR           RPORT TX_KB RX_KB MS
12759 sshd       192.168.56.101  22    192.168.56.1    60639     2     3 1863.82
12783 sshd       192.168.56.101  22    192.168.56.1    60640     3     3 9174.53
12844 wget       10.0.2.15       34250 54.204.39.132   443      11  1870 5712.26
12851 curl       10.0.2.15       34252 54.204.39.132   443       0    74 505.90
```

+在你说"难道不能直接从 tcpdump(8) 的输出里提取这些信息吗?"之前,请注意:运行 tcpdump(8) 或任何数据包嗅探器,在高数据包速率的系统上开销都会很大,即便 tcpdump(8) 的用户级和内核级机制已经过多年优化(不然开销还会更大)。tcplife 并不会检测每个数据包;它只监视 TCP 会话状态的变化,并据此计算会话的持续时间。它还使用了内核中已有的吞吐量计数器,以及进程和命令信息("PID"和"COMM"列),这些是 tcpdump(8) 这类线上嗅探工具做不到的。
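如果想快速制造一条可供 tcplife 观察的短 TCP 会话,可以在它运行时于另一个窗口发起一次简单的下载(示意,假设系统装有 curl):

```
$ curl -s https://opensource.com/ > /dev/null
```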
### 6\. gethostlatency

-Every previous example involves kernel tracing, so I need at least one user-level tracing example. Here is [gethostlatency][20], which instruments gethostbyname(3) and related library calls for name resolution:
+之前的每个例子都与内核跟踪有关,所以我至少需要一个用户级跟踪的例子。这就是 [gethostlatency][20],它检测用于域名解析的 gethostbyname(3) 及相关库调用:

```
# /usr/share/bcc/tools/gethostlatency
TIME     PID    COMM          LATms HOST
06:43:33 12903  curl         188.98 opensource.com
06:43:36 12905  curl           8.45 opensource.com
06:43:40 12907  curl           6.55 opensource.com
06:43:44 12911  curl           9.67 opensource.com
06:45:02 12948  curl          19.66 opensource.cats
06:45:06 12950  curl          18.37 opensource.cats
06:45:07 12952  curl          13.64 opensource.cats
06:45:19 13139  curl          13.10 opensource.cats
```

+是的,永远都是 DNS 的问题,所以有一个能监视系统范围内 DNS 请求的工具会非常方便(这只在应用程序使用标准系统库时才有效)。看到我是如何多次查找 "opensource.com" 的了吗?第一次花了 188.98 毫秒,之后就快多了,不到 10 毫秒,这无疑是缓存在起作用。它还追踪到了对 "opensource.cats" 的多次查找,这是一个遗憾地并不存在的主机,不过我们仍然可以检查第一次和后续查找的延迟。(第二次查找之后是否存在一些负缓存的影响呢?)

### 7\. trace

-Okay, one more example. The [trace][21] tool was contributed by Sasha Goldshtein and provides some basic printf(1) functionality with custom probes. For example:
+好的,再举一个例子。[trace][21] 工具由 Sasha Goldshtein 贡献,它提供了一些基本的 printf(1) 功能和自定义探针。例如:

```
# /usr/share/bcc/tools/trace 'pam:pam_start "%s: %s", arg1, arg2'
PID    TID    COMM         FUNC             -
13266  13266  sshd         pam_start        sshd: root
```

+在这里,我跟踪了 libpam 及其 pam_start(3) 函数,并将它的两个参数都按字符串打印出来。libpam 是可插拔身份验证模块(PAM)系统使用的库,从输出中可以看到 sshd 为"root"用户(也就是我登录的用户)调用了 pam_start()。用法信息("trace -h")中还有更多例子,而且所有这些工具在 bcc 仓库中都有手册页和示例文件,例如 trace_example.txt 和 trace.8。
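trace 还支持函数返回探针(示意,r: 前缀表示在函数返回时触发)。一个常见的单行用法是在 bash 的 readline 函数返回时打印其返回值,从而看到交互式终端里输入的每条命令:

```
# /usr/share/bcc/tools/trace 'r:bash:readline "%s", retval'
```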
-### Install bcc via packages
+### 通过包安装 bcc

-The best way to install bcc is from an iovisor repository, following the instructions from the bcc [INSTALL.md][22]. [IO Visor][23] is the Linux Foundation project that includes bcc. The BPF enhancements these tools use were added in the 4.x series Linux kernels, up to 4.9\. This means that Fedora 25, with its 4.8 kernel, can run most of these tools; and Fedora 26, with its 4.11 kernel, can run them all (at least currently).
+安装 bcc 的最佳方法是按照 bcc 的 [INSTALL.md][22] 说明,从 iovisor 软件仓库安装。[IO Visor][23] 是包含 bcc 的 Linux 基金会项目。这些工具所使用的 BPF 增强功能是在 4.x 系列 Linux 内核中陆续加入的,直到 4.9 为止。这意味着使用 4.8 内核的 Fedora 25 可以运行其中大部分工具;使用 4.11 内核的 Fedora 26 则可以全部运行(至少目前如此)。

-If you are on Fedora 25 (or Fedora 26, and this post was published many months ago—hello from the distant past!), then this package approach should just work. If you are on Fedora 26, then skip to the [Install via Source][24] section, which avoids a [known][25] and [fixed][26] bug. That bug fix hasn't made its way into the Fedora 26 package dependencies at the moment. The system I'm using is:
+如果你使用的是 Fedora 25(或者 Fedora 26,而且这篇文章已经发布了好几个月,你好,来自遥远过去的朋友!),那么这种软件包安装方式应该可以直接工作。如果你使用的是 Fedora 26,请跳到"通过源码安装"部分,它可以避开一个[已知][25]且[已经修复][26]的错误。这个错误的修复目前还没有进入 Fedora 26 软件包的依赖关系中。我使用的系统是:

```
# uname -a
Linux localhost.localdomain 4.11.8-300.fc26.x86_64 #1 SMP Thu Jun 29 20:09:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/fedora-release
Fedora release 26 (Twenty Six)
```

+以下是我所遵循的安装步骤,更新的版本请参阅 INSTALL.md:

```
# echo -e '[iovisor]\nbaseurl=https://repo.iovisor.org/yum/nightly/f25/$basearch\nenabled=1\ngpgcheck=0' | sudo tee /etc/yum.repos.d/iovisor.repo
# dnf install bcc-tools
[...]
Total download size: 37 M
Installed size: 143 M
Is this ok [y/N]: y
```

+安装完成后,你可以在 /usr/share 下看到这些新工具:

```
# ls /usr/share/bcc/tools/
argdist       dcsnoop      killsnoop    softirqs    trace
bashreadline  dcstat       llcstat      solisten    ttysnoop
[...]
```

+试着运行其中一个:

```
# /usr/share/bcc/tools/opensnoop
chdir(/lib/modules/4.11.8-300.fc26.x86_64/build): No such file or directory
Traceback (most recent call last):
  File "/usr/share/bcc/tools/opensnoop", line 126, in
    b = BPF(text=bpf_text)
  File "/usr/lib/python3.6/site-packages/bcc/__init__.py", line 284, in __init__
    raise Exception("Failed to compile BPF module %s" % src_file)
Exception: Failed to compile BPF module
```

+运行失败了,提示缺少 /lib/modules/4.11.8-300.fc26.x86_64/build。如果你也遇到了同样的问题,那只是因为系统缺少内核头文件。看看这个路径指向什么(它是一个符号链接),再用 "dnf whatprovides" 搜索它,就能知道接下来要安装的软件包。对于这个系统,它是:

```
# dnf install kernel-devel-4.11.8-300.fc26.x86_64
[...]
Total download size: 20 M
Installed size: 63 M
Is this ok [y/N]: y
[...]
```

+现在再试一次:

```
# /usr/share/bcc/tools/opensnoop
PID    COMM    FD ERR PATH
11792  ls       3   0 /etc/ld.so.cache
11792  ls       3   0 /lib64/libselinux.so.1
11792  ls       3   0 /lib64/libcap.so.2
11792  ls       3   0 /lib64/libc.so.6
[...]
```

+运行起来了。上面捕捉到的是另一个窗口中一条 ls 命令的活动。其他有用的命令请参阅前面的章节。

-### Install via source
+### 通过源码安装

-If you need to install from source, you can also find documentation and updated instructions in [INSTALL.md][27]. I did the following on Fedora 26:
+如果你需要从源码安装,也可以在 [INSTALL.md][27] 中找到文档和更新的说明。我在 Fedora 26 上的操作如下:

```
sudo dnf install -y bison cmake ethtool flex git iperf libstdc++-static \
  python-netaddr python-pip gcc gcc-c++ make zlib-devel \
  elfutils-libelf-devel
sudo dnf install -y luajit luajit-devel  # for Lua support
sudo dnf install -y \
  http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm
sudo pip install pyroute2
sudo dnf install -y clang clang-devel llvm llvm-devel llvm-static ncurses-devel
```

+除了 netperf 之外一切顺利,安装 netperf 时报了如下错误:

```
Curl error (28): Timeout was reached for http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm [Connection timed out after 120002 milliseconds]
```

-Here are the remaining bcc compilation and install steps:
+不必理会,netperf 是可选的(它只用于测试),没有它 bcc 也能编译成功。
+
+以下是 bcc 编译和安装余下的步骤:

```
git clone https://github.com/iovisor/bcc.git
mkdir bcc/build; cd bcc/build
cmake .. -DCMAKE_INSTALL_PREFIX=/usr
make
sudo make install
```

+到这一步,这些工具就可以正常工作了:

```
# /usr/share/bcc/tools/opensnoop
PID    COMM    FD ERR PATH
4131   date     3   0 /etc/ld.so.cache
4131   date     3   0 /lib64/libc.so.6
4131   date     3   0 /usr/lib/locale/locale-archive
4131   date     3   0 /etc/localtime
[...]
```

More Linux resources

* [What is Linux?][1]

* [What are Linux containers?][2]

* [Download Now: Linux commands cheat sheet][3]

* [Advanced Linux commands cheat sheet][4]

* [Our latest Linux articles][5]

-This was a quick tour of the new BPF performance analysis superpowers that you can use on the Fedora and Red Hat family of operating systems. I demonstrated the popular

- [bcc][28]

+### 写在最后和其他前端

+本文快速浏览了可以在 Fedora 和 Red Hat 系列操作系统上使用的 BPF 性能分析新能力。我演示了流行的 BPF 前端 [bcc][28],并给出了它在 Fedora 上的安装说明。bcc 附带了 60 多个用于性能分析的新工具,它们可以帮助你充分发挥 Linux 系统的性能。也许你会直接通过 SSH 使用这些工具;而一旦各种监控 GUI 支持了 BPF,你也可以通过它们使用同样的功能。

+此外,bcc 并不是唯一一个正在开发中的前端。[ply][29] 和 [bpftrace][30] 旨在提供更高级的语言,以便快速编写自定义工具。另外,[SystemTap][31] 刚刚发布了[版本 3.2][32],其中包含一个早期的实验性 eBPF 后端。如果它能继续发展下去,就能为多年来开发的众多 SystemTap 脚本和 tapset(库)提供一个可用于生产环境的安全且高效的引擎。(与 eBPF 一起使用 SystemTap 将是另一篇文章的主题。)

-frontend to BPF and included install instructions for Fedora. bcc comes with more than 60 new tools for performance analysis, which will help you get the most out of your Linux systems. Perhaps you will use these tools directly over SSH, or perhaps you will use the same functionality via monitoring GUIs once they support BPF.

-Also, bcc is not the only frontend in development. There are [ply][29] and [bpftrace][30], which aim to provide higher-level language for quickly writing custom tools. In addition, [SystemTap][31] just released [version 3.2][32], including an early, experimental eBPF backend. Should this continue to be developed, it will provide a production-safe and efficient engine for running the many SystemTap scripts and tapsets (libraries) that have been developed over the years. (Using SystemTap with eBPF would be good topic for another post.)

+如果你需要开发自定义工具,也可以用 bcc 来实现,尽管它的语言比 SystemTap、ply 或 bpftrace 要冗长得多。我的那些 bcc 工具可以作为代码示例,另外我还贡献了一份用 Python 开发 bcc 工具的[教程][33]。我建议先学会使用 bcc 的各个多功能工具,因为在需要动手编写新工具之前,它们可能就已经能满足你的大部分需求了。你可以研究 bcc 仓库中 [funccount][34]、[funclatency][35]、[funcslower][36]、[stackcount][37]、[trace][38] 和 [argdist][39] 的示例文件。

-If you need to develop custom tools, you can do that with bcc as well, although the language is currently much more verbose than SystemTap, ply, or bpftrace. My bcc tools can serve as code examples, plus I contributed a [tutorial][33] for developing bcc tools in Python. I'd recommend learning the bcc multi-tools first, as you may get a lot of mileage from them before needing to write new tools. You can study the multi-tools from their example files in the bcc repository: [funccount][34], [funclatency][35], [funcslower][36], [stackcount][37], [trace][38], and [argdist][39].

-Thanks to [Opensource.com][40] for edits.
+感谢 [Opensource.com][40] 的编辑。

-### Topics
+### 专题

- [Linux][41][SysAdmin][42]
+ [Linux][41][系统管理员][42]

### About the author

 [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/brendan_face2017_620d.jpg?itok=LIwTJjL9)][43] Brendan Gregg

-
+Brendan Gregg 是 Netflix 的一名高级性能架构师,他在那里从事大规模计算机性能设计、分析和调优工作。[关于我的更多信息][44]
- Brendan Gregg is a senior performance architect at Netflix, where he does large scale computer performance design, analysis, and tuning.[More about me][44]

* [Learn how you can contribute][6]

From 6ca219c4026615f80ec1265b398b20add386e836 Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 11 Dec 2017 00:49:55 +0800
Subject: [PATCH 189/236] PRF&PUB:20171206 How to extract substring in Bash.md

@lujun9972 https://linux.cn/article-9127-1.html

---
 ...171206 How to extract substring in Bash.md | 41 +++++++++++--------
 1 file changed, 24 insertions(+), 17 deletions(-)
 rename {translated/tech => published}/20171206 How to extract substring in Bash.md (73%)

diff --git a/translated/tech/20171206 How to extract substring in Bash.md b/published/20171206 How to extract substring in Bash.md
similarity index 73%
rename from translated/tech/20171206 How to extract substring in Bash.md
rename to published/20171206 How to extract substring in Bash.md
index f1deaebab9..c4a030303e 100644
--- a/translated/tech/20171206 How to extract substring in Bash.md
+++ b/published/20171206 How to extract substring in Bash.md
@@ -1,6 +1,7 @@
 如何在 Bash 中抽取子字符串
 ======
-子字符串不是别的,就是出现在其他字符串内的字符串。 比如 “3382” 就是 “this is a 3382 test” 的子字符串。 我们有多种方法可以从中把数字或指定部分字符串抽取出来。
+
+所谓“子字符串”就是出现在其它字符串内的字符串。 比如 “3382” 就是 “this is a 3382 test” 的子字符串。 我们有多种方法可以从中把数字或指定部分字符串抽取出来。

[![How to Extract substring in Bash Shell on Linux or Unix](https://www.cyberciti.biz/media/new/faq/2017/12/How-to-Extract-substring-in-Bash-Shell-on-Linux-or-Unix.jpg)][2]

本文会向你展示在 bash shell 中如何获取或者说查找出子字符串。

### 在 Bash 中抽取子字符串

其语法为:

```shell
-## syntax ##
+## 格式 ##
${parameter:offset:length}
```

+子字符串扩展是 bash 
的一项功能。它会扩展成 `parameter` 值中以 `offset` 为开始,长为 `length` 个字符的字符串。 假设, `$u` 定义如下: + +```shell +## 定义变量 u ## u="this is a test" ``` @@ -34,6 +37,7 @@ test ``` 其中这些参数分别表示: + + 10 : 偏移位置 + 4 : 长度 @@ -41,9 +45,9 @@ test 根据 bash 的 man 页说明: -> The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command。The default value is。 +> [IFS (内部字段分隔符)][3]用于在扩展后进行单词分割,并用内建的 read 命令将行分割为词。默认值是。 -另一种 POSIX 就绪(POSIX ready) 的方案如下: +另一种 POSIX 就绪POSIX ready的方案如下: ```shell u="this is a test" @@ -54,7 +58,7 @@ echo "$3" echo "$4" ``` -输出为: +输出为: ```shell this @@ -63,7 +67,7 @@ a test ``` -下面是一段 bash 代码,用来从 Cloudflare cache 中去除带主页的 url +下面是一段 bash 代码,用来从 Cloudflare cache 中去除带主页的 url。 ```shell #!/bin/bash @@ -113,14 +117,15 @@ done echo ``` -它的使用方法为: +它的使用方法为: + ```shell ~/bin/cf.clear.cache https://www.cyberciti.biz/faq/bash-for-loop/ https://www.cyberciti.biz/tips/linux-security.html ``` ### 借助 cut 命令 -可以使用 cut 命令来将文件中每一行或者变量中的一部分删掉。它的语法为: +可以使用 `cut` 命令来将文件中每一行或者变量中的一部分删掉。它的语法为: ```shell u="this is a test" @@ -135,13 +140,14 @@ var="$(cut -d' ' -f 4 <<< $u)" echo "${var}" ``` -想了解更多请阅读 bash 的 man 页: +想了解更多请阅读 bash 的 man 页: + ```shell man bash man cut ``` -另请参见: [Bash String Comparison: Find Out IF a Variable Contains a Substring][1] +另请参见: [Bash String Comparison: Find Out IF a Variable Contains a Substring][1] -------------------------------------------------------------------------------- @@ -149,10 +155,11 @@ via: https://www.cyberciti.biz/faq/how-to-extract-substring-in-bash/ 作者:[Vivek Gite][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.cyberciti.biz [1]:https://www.cyberciti.biz/faq/bash-find-out-if-variable-contains-substring/ [2]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-Extract-substring-in-Bash-Shell-on-Linux-or-Unix.jpg +[3]:https://bash.cyberciti.biz/guide/$IFS From 51b590e868d57387098c8ba42bdeb24e10a8cbab Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 11 Dec 2017 01:02:00 +0800 Subject: [PATCH 190/236] PRF&PUB:20171205 NETSTAT Command Learn to use netstat with examples.md @lujun9972 https://linux.cn/article-9128-1.html --- ...mand Learn to use netstat with examples.md | 55 +++++++++---------- 1 file changed, 27 insertions(+), 28 deletions(-) rename {translated/tech => published}/20171205 NETSTAT Command Learn to use netstat with examples.md (56%) diff --git a/translated/tech/20171205 NETSTAT Command Learn to use netstat with examples.md b/published/20171205 NETSTAT Command Learn to use netstat with examples.md similarity index 56% rename from translated/tech/20171205 NETSTAT Command Learn to use netstat with examples.md rename to published/20171205 NETSTAT Command Learn to use netstat with examples.md index b2b7175749..fc5a64b840 100644 --- a/translated/tech/20171205 NETSTAT Command Learn to use netstat with examples.md +++ b/published/20171205 NETSTAT Command Learn to use netstat with examples.md @@ -1,26 +1,25 @@ -NETSTAT 命令: 通过案例学习使用 netstate +通过示例学习使用 netstat ====== -Netstat 是一个告诉我们系统中所有 tcp/udp/unix socket 连接状态的命令行工具。它会列出所有已经连接或者等待连接状态的连接。 该工具在识别某个应用监听哪个端口时特别有用,我们也能用它来判断某个应用是否正常的在监听某个端口。 -Netstat 命令还能显示其他各种各样的网络相关信息,例如路由表, 网卡统计信息, 虚假连接以及多播成员等。 +netstat 是一个告诉我们系统中所有 tcp/udp/unix socket 连接状态的命令行工具。它会列出所有已经连接或者等待连接状态的连接。 该工具在识别某个应用监听哪个端口时特别有用,我们也能用它来判断某个应用是否正常的在监听某个端口。 -本文中,我们会通过几个例子来学习 Netstat。 +netstat 命令还能显示其它各种各样的网络相关信息,例如路由表, 网卡统计信息, 
虚假连接以及多播成员等。 -(推荐阅读: [Learn to use CURL command with examples][1] ) +本文中,我们会通过几个例子来学习 netstat。 -Netstat with examples -============================================================ +(推荐阅读: [通过示例学习使用 CURL 命令][1] ) -### 1- 检查所有的连接 +### 1 - 检查所有的连接 使用 `a` 选项可以列出系统中的所有连接, + ```shell $ netstat -a ``` -这会显示系统所有的 tcp,udp 以及 unix 连接。 +这会显示系统所有的 tcp、udp 以及 unix 连接。 -### 2- 检查所有的 tcp/udp/unix socket 连接 +### 2 - 检查所有的 tcp/udp/unix socket 连接 使用 `t` 选项只列出 tcp 连接, @@ -28,19 +27,19 @@ $ netstat -a $ netstat -at ``` -类似的,使用 `u` 选项只列出 udp 连接 to list out only the udp connections on our system, we can use ‘u’ option with netstat, +类似的,使用 `u` 选项只列出 udp 连接, ```shell $ netstat -au ``` -使用 `x` 选项只列出 Unix socket 连接,we can use ‘x’ options, +使用 `x` 选项只列出 Unix socket 连接, ```shell $ netstat -ax ``` -### 3- 同时列出进程 ID/进程名称 +### 3 - 同时列出进程 ID/进程名称 使用 `p` 选项可以在列出连接的同时也显示 PID 或者进程名称,而且它还能与其他选项连用, @@ -48,15 +47,15 @@ $ netstat -ax $ netstat -ap ``` -### 4- 列出端口号而不是服务名 +### 4 - 列出端口号而不是服务名 -使用 `n` 选项可以加快输出,它不会执行任何反向查询(译者注:这里原文说的是 "it will perform any reverse lookup",应该是写错了),而是直接输出数字。 由于无需查询,因此结果输出会快很多。 +使用 `n` 选项可以加快输出,它不会执行任何反向查询(LCTT 译注:这里原文有误),而是直接输出数字。 由于无需查询,因此结果输出会快很多。 ```shell $ netstat -an ``` -### 5- 只输出监听端口 +### 5 - 只输出监听端口 使用 `l` 选项只输出监听端口。它不能与 `a` 选项连用,因为 `a` 会输出所有端口, @@ -64,15 +63,15 @@ $ netstat -an $ netstat -l ``` -### 6- 输出网络状态 +### 6 - 输出网络状态 -使用 `s` 选项输出每个协议的统计信息,包括接收/发送的包数量 +使用 `s` 选项输出每个协议的统计信息,包括接收/发送的包数量, ```shell $ netstat -s ``` -### 7- 输出网卡状态 +### 7 - 输出网卡状态 使用 `I` 选项只显示网卡的统计信息, @@ -80,7 +79,7 @@ $ netstat -s $ netstat -i ``` -### 8- 显示多播组(multicast group)信息 +### 8 - 显示多播组multicast group信息 使用 `g` 选项输出 IPV4 以及 IPV6 的多播组信息, @@ -88,7 +87,7 @@ $ netstat -i $ netstat -g ``` -### 9- 显示网络路由信息 +### 9 - 显示网络路由信息 使用 `r` 输出网络路由信息, @@ -96,7 +95,7 @@ $ netstat -g $ netstat -r ``` -### 10- 持续输出 +### 10 - 持续输出 使用 `c` 选项持续输出结果 @@ -104,7 +103,7 @@ $ netstat -r $ netstat -c ``` -### 11- 过滤出某个端口 +### 11 - 过滤出某个端口 与 `grep` 连用来过滤出某个端口的连接, @@ -112,17 +111,17 @@ $ netstat -c $ netstat -anp | grep 3306 ``` -### 12- 统计连接个数 +### 12 - 统计连接个数 -通过与 wc 和 grep 命令连用,可以统计指定端口的连接数量 +通过与 `wc` 和 `grep` 命令连用,可以统计指定端口的连接数量 ```shell $ netstat -anp | grep 3306 | wc -l ``` -这回输出 mysql 服务端口(即 3306)的连接数。 +这会输出 mysql 服务端口(即 3306)的连接数。 -这就是我们间断的案例指南了,希望它带给你的信息量足够。 有任何疑问欢迎提出。 +这就是我们简短的案例指南了,希望它带给你的信息量足够。 有任何疑问欢迎提出。 -------------------------------------------------------------------------------- @@ -130,7 +129,7 @@ via: http://linuxtechlab.com/learn-use-netstat-with-examples/ 作者:[Shusain][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From bde3b57aac3307c4a79d8bda046e79dea4907509 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?= Date: Mon, 11 Dec 2017 08:26:16 +0800 Subject: [PATCH 191/236] Create 20171121 7 tools for analyzing performance in Linux with bccBPF.md --- ...lyzing performance in Linux with bccBPF.md | 422 ++++++++++++++++++ 1 file changed, 422 insertions(+) create mode 100644 translated/tech/20171121 7 tools for analyzing performance in Linux with bccBPF.md diff --git a/translated/tech/20171121 7 tools for analyzing performance in Linux with bccBPF.md b/translated/tech/20171121 7 tools for analyzing performance in Linux with bccBPF.md new file mode 100644 index 0000000000..4e55ed979a --- /dev/null +++ b/translated/tech/20171121 7 tools for analyzing performance in Linux with bccBPF.md @@ -0,0 +1,422 @@ +translating by yongshouzhang + + +7个 Linux 下使用 
bcc/BPF 的性能分析工具
+============================================================
+
+### 使用伯克利数据包过滤器(BPF)编译器集合(BCC)工具深入探查你的 Linux 代码。
+
+ [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/pictures/brendan_face2017_620d.jpg?itok=xZzBQNcY)][7] 21 Nov 2017 [Brendan Gregg][8] [Feed][9]
+
+![7 superpowers for Fedora bcc/BPF performance analysis](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/penguins%20in%20space_0.jpg?itok=umpCTAul)
+
+图片来源:opensource.com
+
+在 Linux 中出现的一种新技术能够为系统管理员和开发者提供大量用于性能分析和故障排除的新工具和仪表盘。它被称为增强的伯克利数据包过滤器(eBPF,或 BPF),虽然这些改进并不是在伯克利开发的,但它们处理的不仅仅是数据包,能做的也不只是过滤。我将讨论在 Fedora 和 Red Hat Linux 发行版中使用 BPF 的一种方法,并在 Fedora 26 上演示。
+
+BPF 可以在内核中运行用户定义的沙盒程序,以立即添加新的自定义功能。这就像可以按需给 Linux 系统添加超能力一般。它的用途举例如下:
+
+* 高级性能跟踪工具:以编程方式对文件系统操作、TCP 事件、用户级事件等进行低开销检测。
+
+* 网络性能:尽早丢弃数据包以提高抗 DDoS 的能力,或者在内核中重定向数据包以提高性能。
+
+* 安全监控:24x7 全天候自定义检测和记录内核空间与用户空间内的可疑事件。
+
+BPF 程序必须先通过内核内置的验证器检查以确保安全才能运行,因此在可行的情况下,它比编写自定义内核模块更安全。我猜想大多数人并不会自己编写 BPF 程序,而是使用别人写好的。我已经在 GitHub 上的 [BPF Compiler Collection (bcc)][12] 项目中以开源形式发布了许多 BPF 程序。bcc 为 BPF 开发提供了包括 Python 和 Lua 在内的多种前端支持,并且是目前 BPF 工具开发中最活跃的项目。
+
+### 7 个有用的 bcc/BPF 新工具
+
+为了帮助大家了解这些 bcc/BPF 工具以及它们各自的检测对象,我创建了下面的图表并将它添加到了 bcc 项目中:
+
+### [bcc_跟踪工具.png][13]
+
+![Linux bcc/BPF 跟踪工具图](https://opensource.com/sites/default/files/u128651/bcc_tracing_tools.png)
+
+Brendan Gregg, [CC BY-SA 4.0][14]
+
+这些是命令行界面(CLI)工具,你可以通过 SSH(安全外壳)使用它们。目前大多数分析,包括在我雇主的公司里,都是用 GUI 和仪表盘进行的,SSH 是最后的手段。但这些 CLI 工具仍然是预览 BPF 能力的好方法,即使你最终打算只在可用的 GUI 中使用这些能力。我已着手向一个开源 GUI 添加 BPF 功能,但那是另一篇文章的主题。现在我想分享的是你今天就可以使用的 CLI 工具。

### 1\. execsnoop

从哪儿开始呢?先来看看新进程。新进程可能消耗系统资源,却因为生命周期太短而不会出现在 top(1) 或其他工具中。可以使用 [execsnoop][15] 对它们进行检测(或者用行业术语说:追踪)。在追踪时,我将在另一个窗口中通过 SSH 登录:

```
# /usr/share/bcc/tools/execsnoop
PCOMM            PID    PPID   RET ARGS
sshd             12234  727      0 /usr/sbin/sshd -D -R
unix_chkpwd      12236  12234    0 /usr/sbin/unix_chkpwd root nonull
unix_chkpwd      12237  12234    0 /usr/sbin/unix_chkpwd root chkexpiry
bash             12239  12238    0 /bin/bash
id               12241  12240    0 /usr/bin/id -un
hostname         12243  12242    0 /usr/bin/hostname
pkg-config       12245  12244    0 /usr/bin/pkg-config --variable=completionsdir bash-completion
grepconf.sh      12246  12239    0 /usr/libexec/grepconf.sh -c
grep             12247  12246    0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
tty              12249  12248    0 /usr/bin/tty -s
tput             12250  12248    0 /usr/bin/tput colors
dircolors        12252  12251    0 /usr/bin/dircolors --sh /etc/DIR_COLORS
grep             12253  12239    0 /usr/bin/grep -qi ^COLOR.*none /etc/DIR_COLORS
grepconf.sh      12254  12239    0 /usr/libexec/grepconf.sh -c
grep             12255  12254    0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
grepconf.sh      12256  12239    0 /usr/libexec/grepconf.sh -c
grep             12257  12256    0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
```

哇,那是什么?grepconf.sh 是什么?/etc/GREP_COLORS 又是什么?而且,grep 是通过运行自身来读取它自己的配置文件的吗?这到底是怎么工作的?

欢迎来到有趣的系统追踪世界。你可以学到很多关于系统究竟是如何工作的(或者在某些情况下根本不能工作)的知识,并且在此过程中发现一些简单的优化点。execsnoop 通过跟踪 exec() 系统调用来工作,exec() 通常用于在新进程中加载不同的程序代码。

### 2\. opensnoop

接着上面的例子:grepconf.sh 应该是一个 shell 脚本,对吧?我将运行 file(1) 来检查,同时使用 bcc 工具 [opensnoop][16] 来查看它打开了哪些文件:

```
# /usr/share/bcc/tools/opensnoop
PID    COMM     FD ERR PATH
12420  file      3   0 /etc/ld.so.cache
12420  file      3   0 /lib64/libmagic.so.1
12420  file      3   0 /lib64/libz.so.1
12420  file      3   0 /lib64/libc.so.6
12420  file      3   0 /usr/lib/locale/locale-archive
12420  file     -1   2 /etc/magic.mgc
12420  file      3   0 /etc/magic
12420  file      3   0 /usr/share/misc/magic.mgc
12420  file      3   0 /usr/lib64/gconv/gconv-modules.cache
12420  file      3   0 /usr/libexec/grepconf.sh
1      systemd  16   0 /proc/565/cgroup
1      systemd  16   0 /proc/536/cgroup
```

像 execsnoop 和 opensnoop 这样的工具会为每个事件打印一行。上面的输出显示了 file(1) 命令打开(或尝试打开)的文件:对于 /etc/magic.mgc,返回的文件描述符(“FD”列)是 -1,而“ERR”列指出错误是“文件未找到”。此前我既不知道有这个文件,也不知道 file(1) 实际读取的是 /usr/share/misc/magic.mgc。不过我本不该感到意外,file(1) 识别文件类型毫无问题:

```
# file /usr/share/misc/magic.mgc /etc/magic
/usr/share/misc/magic.mgc: magic binary file for file(1) cmd (version 14) (little endian)
/etc/magic: magic text file for file(1) cmd, ASCII text
```

opensnoop 通过跟踪 open() 系统调用来工作。为什么不直接用 strace -feopen file 呢?在这个场景下当然也可以。不过 opensnoop 的优点之一是它工作在系统全局范围,可以跟踪所有进程的 open() 系统调用,注意上面的输出中就包括了由 systemd 打开的文件。opensnoop 的开销也应该更低:BPF 跟踪已经过优化,而当前版本的 strace(1) 仍在使用较老且较慢的 ptrace(2) 接口。
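opensnoop 也支持只跟踪某个指定的进程(示意,使用 -p 选项,这里以上文输出中的 PID 12420 为例):

```
# /usr/share/bcc/tools/opensnoop -p 12420
```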
### 3\. xfsslower

bcc/BPF 不仅仅可以分析系统调用。[xfsslower][17] 工具可以跟踪延迟超过 1 毫秒(该参数)的常见 XFS 文件系统操作。

```
# /usr/share/bcc/tools/xfsslower 1
Tracing XFS operations slower than 1 ms
TIME     COMM           PID    T BYTES   OFF_KB   LAT(ms) FILENAME
14:17:34 systemd-journa 530    S 0       0           1.69 system.journal
14:17:35 auditd         651    S 0       0           2.43 audit.log
14:17:42 cksum          4167   R 52976   0           1.04 at
14:17:45 cksum          4168   R 53264   0           1.62 [
14:17:45 cksum          4168   R 65536   0           1.01 certutil
14:17:45 cksum          4168   R 65536   0           1.01 dir
14:17:45 cksum          4168   R 65536   0           1.17 dirmngr-client
14:17:46 cksum          4168   R 65536   0           1.06 grub2-file
14:17:46 cksum          4168   R 65536   128         1.01 grub2-fstest
[...]
```

在上面的输出中,我捕获到了 cksum(1) 的多次延迟超过 1 毫秒的读操作(“T”字段为“R”)。这是在 xfsslower 工具运行期间,通过动态检测 XFS 中的内核函数来实现的,工具结束时会移除这些检测点。其他文件系统也有这个 bcc 工具的版本:ext4slower、btrfsslower、zfsslower 和 nfsslower。

这是个有用的工具,也是 BPF 追踪的重要例子。对文件系统性能的传统分析主要集中在块 I/O 统计信息上,也就是通常由 iostat(1) 工具打印、并被许多性能监控 GUI 绘制成图表的那些数据。这些统计数据显示的是磁盘的表现,而不是真正的文件系统表现。通常比起磁盘,你更关心的是文件系统的性能,因为应用程序是在文件系统中发起请求和等待的,而文件系统的性能可能与磁盘的性能大为不同!文件系统可以完全从内存缓存中读取数据,也可以通过预读算法和回写缓存来填充缓存。xfsslower 显示的是文件系统的性能,也就是应用程序直接体验到的性能。这通常有助于排除整个存储子系统的嫌疑;如果确实不存在文件系统延迟,那么性能问题很可能出在别处。
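如果你的系统使用的是 ext4 而不是 XFS,可以改用上文提到的对应工具,用法相同(示意):

```
# /usr/share/bcc/tools/ext4slower 1
```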
### 4\. biolatency

虽然文件系统性能对于理解应用程序性能非常重要,但研究磁盘性能也是有好处的。当各种缓存技巧不能再隐藏其延迟时,磁盘的低性能终会影响应用程序。磁盘性能也是容量规划研究的目标。

iostat(1) 工具显示的是磁盘 I/O 的平均延迟,但平均值可能会引起误解。以直方图的形式研究 I/O 延迟的分布会更有用,这可以使用 [biolatency][18] 来实现:

```
# /usr/share/bcc/tools/biolatency
Tracing block device I/O... Hit Ctrl-C to end.
^C
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 1        |                                        |
        64 -> 127        : 63       |****                                    |
       128 -> 255        : 121      |*********                               |
       256 -> 511        : 483      |************************************    |
       512 -> 1023       : 532      |****************************************|
      1024 -> 2047       : 117      |********                                |
      2048 -> 4095       : 8        |                                        |
```

这是另一个有用的工具和例子;它使用了 BPF 的 maps 特性,该特性可以用来实现高效的内核内汇总统计。从内核级传到用户级的数据只有“count”这一列,其余内容都由用户级程序生成。

值得注意的是,其中许多工具支持 CLI 选项和参数,如它们的使用信息所示:

```
# /usr/share/bcc/tools/biolatency -h
usage: biolatency [-h] [-T] [-Q] [-m] [-D] [interval] [count]

Summarize block device I/O latency as a histogram

positional arguments:
  interval            output interval, in seconds
  count               number of outputs

optional arguments:
  -h, --help          show this help message and exit
  -T, --timestamp     include timestamp on output
  -Q, --queued        include OS queued time in I/O time
  -m, --milliseconds  millisecond histogram
  -D, --disks         print a histogram per disk device

examples:
    ./biolatency            # summarize block I/O latency as a histogram
    ./biolatency 1 10       # print 1 second summaries, 10 times
    ./biolatency -mT 1      # 1s summaries, milliseconds, and timestamps
    ./biolatency -Q         # include OS queued time in I/O time
    ./biolatency -D         # show each disk device separately
```

这些工具被有意设计得与其他 Unix 工具的行为保持一致,以便于大家上手使用。

### 5\. tcplife

另一个有用的工具是 [tcplife][19],本例显示了 TCP 会话的生命周期和吞吐量统计信息:

```
# /usr/share/bcc/tools/tcplife
PID   COMM       LADDR           LPORT RADDR           RPORT TX_KB RX_KB MS
12759 sshd       192.168.56.101  22    192.168.56.1    60639     2     3 1863.82
12783 sshd       192.168.56.101  22    192.168.56.1    60640     3     3 9174.53
12844 wget       10.0.2.15       34250 54.204.39.132   443      11  1870 5712.26
12851 curl       10.0.2.15       34252 54.204.39.132   443       0    74 505.90
```

在你说“难道不能直接从 tcpdump(8) 的输出里提取这些信息吗?”之前,请注意:运行 tcpdump(8) 或任何数据包嗅探器,在高数据包速率的系统上开销都会很大,即便 tcpdump(8) 的用户级和内核级机制已经过多年优化(不然开销还会更大)。tcplife 并不会检测每个数据包;它只监视 TCP 会话状态的变化,并据此计算会话的持续时间。它还使用了内核中已有的吞吐量计数器,以及进程和命令信息(“PID”和“COMM”列),这些是 tcpdump(8) 这类线上嗅探工具做不到的。

### 6\. gethostlatency

之前的每个例子都与内核跟踪有关,所以我至少需要一个用户级跟踪的例子。这就是 [gethostlatency][20],它检测用于域名解析的 gethostbyname(3) 及相关库调用:

```
# /usr/share/bcc/tools/gethostlatency
TIME     PID    COMM          LATms HOST
06:43:33 12903  curl         188.98 opensource.com
06:43:36 12905  curl           8.45 opensource.com
06:43:40 12907  curl           6.55 opensource.com
06:43:44 12911  curl           9.67 opensource.com
06:45:02 12948  curl          19.66 opensource.cats
06:45:06 12950  curl          18.37 opensource.cats
06:45:07 12952  curl          13.64 opensource.cats
06:45:19 13139  curl          13.10 opensource.cats
```

是的,永远都是 DNS 的问题,所以有一个能监视系统范围内 DNS 请求的工具会非常方便(这只在应用程序使用标准系统库时才有效)。看到我是如何多次查找 “opensource.com” 的了吗?第一次花了 188.98 毫秒,之后就快多了,不到 10 毫秒,这无疑是缓存在起作用。它还追踪到了对 “opensource.cats” 的多次查找,这是一个遗憾地并不存在的主机,不过我们仍然可以检查第一次和后续查找的延迟。(第二次查找之后是否存在一些负缓存的影响呢?)
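想自己制造几次解析来观察的话,可以在另一个窗口里用走标准系统库解析流程的命令发起查询(示意,假设系统提供 getent(1)):

```
$ getent hosts opensource.com
```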
### 7\. trace

好的,再举一个例子。[trace][21] 工具由 Sasha Goldshtein 贡献,它提供了一些基本的 printf(1) 功能和自定义探针。例如:

```
# /usr/share/bcc/tools/trace 'pam:pam_start "%s: %s", arg1, arg2'
PID    TID    COMM         FUNC             -
13266  13266  sshd         pam_start        sshd: root
```

在这里,我跟踪了 libpam 及其 pam_start(3) 函数,并将它的两个参数都按字符串打印出来。libpam 是可插拔身份验证模块(PAM)系统使用的库,从输出中可以看到 sshd 为“root”用户(也就是我登录的用户)调用了 pam_start()。用法信息(“trace -h”)中还有更多例子,而且所有这些工具在 bcc 仓库中都有手册页和示例文件,例如 trace_example.txt 和 trace.8。

### 通过包安装 bcc

安装 bcc 的最佳方法是按照 bcc 的 [INSTALL.md][22] 说明,从 iovisor 软件仓库安装。[IO Visor][23] 是包含 bcc 的 Linux 基金会项目。这些工具所使用的 BPF 增强功能是在 4.x 系列 Linux 内核中陆续加入的,直到 4.9 为止。这意味着使用 4.8 内核的 Fedora 25 可以运行其中大部分工具;使用 4.11 内核的 Fedora 26 则可以全部运行(至少目前如此)。

如果你使用的是 Fedora 25(或者 Fedora 26,而且这篇文章已经发布了好几个月,你好,来自遥远过去的朋友!),那么这种软件包安装方式应该可以直接工作。如果你使用的是 Fedora 26,请跳到“通过源码安装”部分,它可以避开一个已知且已经修复的错误。这个错误的修复目前还没有进入 Fedora 26 软件包的依赖关系中。我使用的系统是:

```
# uname -a
Linux localhost.localdomain 4.11.8-300.fc26.x86_64 #1 SMP Thu Jun 29 20:09:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/fedora-release
Fedora release 26 (Twenty Six)
```

以下是我所遵循的安装步骤,更新的版本请参阅 INSTALL.md:

```
# echo -e '[iovisor]\nbaseurl=https://repo.iovisor.org/yum/nightly/f25/$basearch\nenabled=1\ngpgcheck=0' | sudo tee /etc/yum.repos.d/iovisor.repo
# dnf install bcc-tools
[...]
Total download size: 37 M
Installed size: 143 M
Is this ok [y/N]: y
```

安装完成后,你可以在 /usr/share 下看到这些新工具:

```
# ls /usr/share/bcc/tools/
argdist       dcsnoop      killsnoop    softirqs    trace
bashreadline  dcstat       llcstat      solisten    ttysnoop
[...]
```

试着运行其中一个:

```
# /usr/share/bcc/tools/opensnoop
chdir(/lib/modules/4.11.8-300.fc26.x86_64/build): No such file or directory
Traceback (most recent call last):
  File "/usr/share/bcc/tools/opensnoop", line 126, in
    b = BPF(text=bpf_text)
  File "/usr/lib/python3.6/site-packages/bcc/__init__.py", line 284, in __init__
    raise Exception("Failed to compile BPF module %s" % src_file)
Exception: Failed to compile BPF module
```

运行失败了,提示缺少 /lib/modules/4.11.8-300.fc26.x86_64/build。如果你也遇到了同样的问题,那只是因为系统缺少内核头文件。看看这个路径指向什么(它是一个符号链接),再用 “dnf whatprovides” 搜索它,就能知道接下来要安装的软件包。对于这个系统,它是:

```
# dnf install kernel-devel-4.11.8-300.fc26.x86_64
[...]
Total download size: 20 M
Installed size: 63 M
Is this ok [y/N]: y
[...]
```

现在再试一次:

```
# /usr/share/bcc/tools/opensnoop
PID    COMM    FD ERR PATH
11792  ls       3   0 /etc/ld.so.cache
11792  ls       3   0 /lib64/libselinux.so.1
11792  ls       3   0 /lib64/libcap.so.2
11792  ls       3   0 /lib64/libc.so.6
[...]
```

运行起来了。上面捕捉到的是另一个窗口中一条 ls 命令的活动。其他有用的命令请参阅前面的章节。
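文中提到 bcc 附带了 60 多个用于性能分析的新工具,装好之后可以顺手验证一下数量(示意):

```
# ls /usr/share/bcc/tools | wc -l
```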
### 通过源码安装

如果你需要从源码安装,也可以在 [INSTALL.md][27] 中找到文档和更新的说明。我在 Fedora 26 上的操作如下:

```
sudo dnf install -y bison cmake ethtool flex git iperf libstdc++-static \
  python-netaddr python-pip gcc gcc-c++ make zlib-devel \
  elfutils-libelf-devel
sudo dnf install -y luajit luajit-devel  # for Lua support
sudo dnf install -y \
  http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm
sudo pip install pyroute2
sudo dnf install -y clang clang-devel llvm llvm-devel llvm-static ncurses-devel
```

除了 netperf 之外一切顺利,安装 netperf 时报了如下错误:

```
Curl error (28): Timeout was reached for http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm [Connection timed out after 120002 milliseconds]
```

不必理会,netperf 是可选的(它只用于测试),没有它 bcc 也能编译成功。

以下是 bcc 编译和安装余下的步骤:

```
git clone https://github.com/iovisor/bcc.git
mkdir bcc/build; cd bcc/build
cmake .. -DCMAKE_INSTALL_PREFIX=/usr
make
sudo make install
```

到这一步,这些工具就可以正常工作了:

```
# /usr/share/bcc/tools/opensnoop
PID    COMM    FD ERR PATH
4131   date     3   0 /etc/ld.so.cache
4131   date     3   0 /lib64/libc.so.6
4131   date     3   0 /usr/lib/locale/locale-archive
4131   date     3   0 /etc/localtime
[...]
```

More Linux resources

* [What is Linux?][1]

* [What are Linux containers?][2]

* [Download Now: Linux commands cheat sheet][3]

* [Advanced Linux commands cheat sheet][4]

* [Our latest Linux articles][5]

### 写在最后和其他前端

本文快速浏览了可以在 Fedora 和 Red Hat 系列操作系统上使用的 BPF 性能分析新能力。我演示了流行的 BPF 前端 [bcc][28],并给出了它在 Fedora 上的安装说明。bcc 附带了 60 多个用于性能分析的新工具,它们可以帮助你充分发挥 Linux 系统的性能。也许你会直接通过 SSH 使用这些工具;而一旦各种监控 GUI 支持了 BPF,你也可以通过它们使用同样的功能。

此外,bcc 并不是唯一一个正在开发中的前端。[ply][29] 和 [bpftrace][30] 旨在提供更高级的语言,以便快速编写自定义工具。另外,[SystemTap][31] 刚刚发布了[版本 3.2][32],其中包含一个早期的实验性 eBPF 后端。如果它能继续发展下去,就能为多年来开发的众多 SystemTap 脚本和 tapset(库)提供一个可用于生产环境的安全且高效的引擎。(与 eBPF 一起使用 SystemTap 将是另一篇文章的主题。)

如果你需要开发自定义工具,也可以用 bcc 来实现,尽管它的语言比 SystemTap、ply 或 bpftrace 要冗长得多。我的那些 bcc 工具可以作为代码示例,另外我还贡献了一份用 Python 开发 bcc 工具的[教程][33]。我建议先学会使用 bcc 的各个多功能工具,因为在需要动手编写新工具之前,它们可能就已经能满足你的大部分需求了。你可以研究 bcc 仓库中 [funccount][34]、[funclatency][35]、[funcslower][36]、[stackcount][37]、[trace][38] 和 [argdist][39] 的示例文件。

感谢 [Opensource.com][40] 的编辑。

### 专题

 [Linux][41][系统管理员][42]

### About the author

 [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/brendan_face2017_620d.jpg?itok=LIwTJjL9)][43] Brendan Gregg

-
Brendan Gregg 是 Netflix 的一名高级性能架构师,他在那里从事大规模计算机性能设计、分析和调优工作。[关于我的更多信息][44]


* [Learn how you can contribute][6]

--------------------------------------------------------------------------------

via:https://opensource.com/article/17/11/bccbpf-performance

作者:[Brendan Gregg ][a]
译者:[yongshouzhang](https://github.com/yongshouzhang)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://opensource.com/participate
[7]:https://opensource.com/users/brendang
[8]:https://opensource.com/users/brendang
[9]:https://opensource.com/user/77626/feed
[10]:https://opensource.com/article/17/11/bccbpf-performance?rate=r9hnbg3mvjFUC9FiBk9eL_ZLkioSC21SvICoaoJjaSM
[11]:https://opensource.com/article/17/11/bccbpf-performance#comments
[12]:https://github.com/iovisor/bcc
[13]:https://opensource.com/file/376856
[14]:https://opensource.com/usr/share/bcc/tools/trace
[15]:https://github.com/brendangregg/perf-tools/blob/master/execsnoop
[16]:https://github.com/brendangregg/perf-tools/blob/master/opensnoop
[17]:https://github.com/iovisor/bcc/blob/master/tools/xfsslower.py
[18]:https://github.com/iovisor/bcc/blob/master/tools/biolatency.py
[19]:https://github.com/iovisor/bcc/blob/master/tools/tcplife.py
[20]:https://github.com/iovisor/bcc/blob/master/tools/gethostlatency.py
+[21]:https://github.com/iovisor/bcc/blob/master/tools/trace.py +[22]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---binary +[23]:https://www.iovisor.org/ +[24]:https://opensource.com/article/17/11/bccbpf-performance#InstallViaSource +[25]:https://github.com/iovisor/bcc/issues/1221 +[26]:https://reviews.llvm.org/rL302055 +[27]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---source +[28]:https://github.com/iovisor/bcc +[29]:https://github.com/iovisor/ply +[30]:https://github.com/ajor/bpftrace +[31]:https://sourceware.org/systemtap/ +[32]:https://sourceware.org/ml/systemtap/2017-q4/msg00096.html +[33]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md +[34]:https://github.com/iovisor/bcc/blob/master/tools/funccount_example.txt +[35]:https://github.com/iovisor/bcc/blob/master/tools/funclatency_example.txt +[36]:https://github.com/iovisor/bcc/blob/master/tools/funcslower_example.txt +[37]:https://github.com/iovisor/bcc/blob/master/tools/stackcount_example.txt +[38]:https://github.com/iovisor/bcc/blob/master/tools/trace_example.txt +[39]:https://github.com/iovisor/bcc/blob/master/tools/argdist_example.txt +[40]:http://opensource.com/ +[41]:https://opensource.com/tags/linux +[42]:https://opensource.com/tags/sysadmin +[43]:https://opensource.com/users/brendang +[44]:https://opensource.com/users/brendang From 73ef25de5cea559dcfdd98f593c9b76db3b0efc1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?= Date: Mon, 11 Dec 2017 08:26:34 +0800 Subject: [PATCH 192/236] Delete 20171207 7 tools for analyzing performance in Linux with bccBPF.md --- ...lyzing performance in Linux with bccBPF.md | 422 ------------------ 1 file changed, 422 deletions(-) delete mode 100644 sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md diff --git a/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md b/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md deleted file mode 100644 index 4e55ed979a..0000000000 --- a/sources/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md +++ /dev/null @@ -1,422 +0,0 @@ -translating by yongshouzhang - - -7个 Linux 下使用 bcc/BPF 的性能分析工具 -============================================================ - -###使用伯克利的包过滤(BPF)编译器集合(BCC)工具深度探查你的 linux 代码。 - - [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/pictures/brendan_face2017_620d.jpg?itok=xZzBQNcY)][7] 21 Nov 2017 [Brendan Gregg][8] [Feed][9] - -43[up][10] - - [4 comments][11] -![7 superpowers for Fedora bcc/BPF performance analysis](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/penguins%20in%20space_0.jpg?itok=umpCTAul) - -图片来源 : - -opensource.com - -在 linux 中出现的一种新技术能够为系统管理员和开发者提供大量用于性能分析和故障排除的新工具和仪表盘。 它被称为增强的伯克利数据包过滤器(eBPF,或BPF),虽然这些改进并不由伯克利开发,它们不仅仅是处理数据包,更多的是过滤。我将讨论在 Fedora 和 Red Hat Linux 发行版中使用 BPF 的一种方法,并在 Fedora 26 上演示。 - -BPF 可以在内核中运行用户定义的沙盒程序,以立即添加新的自定义功能。这就像可按需给 Linux 系统添加超能力一般。 你可以使用它的例子包括如下: - -* 高级性能跟踪工具:文件系统操作、TCP事件、用户级事件等的编程低开销检测。 - -* 网络性能 : 尽早丢弃数据包以提高DDoS的恢复能力,或者在内核中重定向数据包以提高性能。 - -* 安全监控 : 24x7 小时全天候自定义检测和记录内核空间与用户空间内的可疑事件。 - -在可能的情况下,BPF 程序必须通过一个内核验证机制来保证它们的安全运行,这比写自定义的内核模块更安全。我在此假设大多数人并不编写自己的 BPF 程序,而是使用别人写好的。在 GitHub 上的 [BPF Compiler Collection (bcc)][12] 项目中,我已发布许多开源代码。bcc 提供不同的 BPF 开发前端支持,包括Python和Lua,并且是目前最活跃的 BPF 模具项目。 - -### 7 个有用的 bcc/BPF 新工具 - -为了了解BCC / BPF工具和他们的乐器,我创建了下面的图表并添加到项目中 -To understand the bcc/BPF tools and what they instrument, I created the following 
diagram and added it to the bcc project: - -### [bcc_跟踪工具.png][13] - -![Linux bcc/BPF 跟踪工具图](https://opensource.com/sites/default/files/u128651/bcc_tracing_tools.png) - -Brendan Gregg, [CC BY-SA 4.0][14] - -这些是命令行界面工具,你可以通过 SSH (安全外壳)使用它们。目前大多数分析,包括我的老板,是用 GUIs 和仪表盘进行的。SSH是最后的手段。但这些命令行工具仍然是预览BPF能力的好方法,即使你最终打算通过一个可用的 GUI 使用它。我已着手向一个开源 GUI 添加BPF功能,但那是另一篇文章的主题。现在我想分享你今天可以使用的 CLI 工具。 - -### 1\. execsnoop - -从哪儿开始? 如何查看新的进程。这些可以消耗系统资源,但很短暂,它们不会出现在 top(1)命令或其他工具中。 这些新进程可以使用[execsnoop] [15]进行检测(或使用行业术语,可以追踪)。 在追踪时,我将在另一个窗口中通过 SSH 登录: - -``` -# /usr/share/bcc/tools/execsnoop -PCOMM PID PPID RET ARGS -sshd 12234 727 0 /usr/sbin/sshd -D -R -unix_chkpwd 12236 12234 0 /usr/sbin/unix_chkpwd root nonull -unix_chkpwd 12237 12234 0 /usr/sbin/unix_chkpwd root chkexpiry -bash 12239 12238 0 /bin/bash -id 12241 12240 0 /usr/bin/id -un -hostname 12243 12242 0 /usr/bin/hostname -pkg-config 12245 12244 0 /usr/bin/pkg-config --variable=completionsdir bash-completion -grepconf.sh 12246 12239 0 /usr/libexec/grepconf.sh -c -grep 12247 12246 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS -tty 12249 12248 0 /usr/bin/tty -s -tput 12250 12248 0 /usr/bin/tput colors -dircolors 12252 12251 0 /usr/bin/dircolors --sh /etc/DIR_COLORS -grep 12253 12239 0 /usr/bin/grep -qi ^COLOR.*none /etc/DIR_COLORS -grepconf.sh 12254 12239 0 /usr/libexec/grepconf.sh -c -grep 12255 12254 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS -grepconf.sh 12256 12239 0 /usr/libexec/grepconf.sh -c -grep 12257 12256 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS -``` -哇。 那是什么? 什么是grepconf.sh? 什么是 /etc/GREP_COLORS? 而且 grep通过运行自身阅读它自己的配置文件? 这甚至是如何工作的? - -欢迎来到有趣的系统追踪世界。 你可以学到很多关于系统是如何工作的(或者一些情况下根本不工作),并且发现一些简单的优化。 execsnoop 通过跟踪 exec()系统调用来工作,exec() 通常用于在新进程中加载不同的程序代码。 - -### 2\. opensnoop - -从上面继续,所以,grepconf.sh可能是一个shell脚本,对吧? 我将运行file(1)来检查,并使用[opensnoop][16] bcc 工具来查看打开的文件: - -``` -# /usr/share/bcc/tools/opensnoop -PID COMM FD ERR PATH -12420 file 3 0 /etc/ld.so.cache -12420 file 3 0 /lib64/libmagic.so.1 -12420 file 3 0 /lib64/libz.so.1 -12420 file 3 0 /lib64/libc.so.6 -12420 file 3 0 /usr/lib/locale/locale-archive -12420 file -1 2 /etc/magic.mgc -12420 file 3 0 /etc/magic -12420 file 3 0 /usr/share/misc/magic.mgc -12420 file 3 0 /usr/lib64/gconv/gconv-modules.cache -12420 file 3 0 /usr/libexec/grepconf.sh -1 systemd 16 0 /proc/565/cgroup -1 systemd 16 0 /proc/536/cgroup -``` -像execsnoop和opensnoop这样的工具每个事件打印一行。上图显示 file(1)命令当前打开(或尝试打开)的文件:返回的文件描述符(“FD”列)对于 /etc/magic.mgc 是-1,而“ERR”列指示它是“文件未找到”。我不知道该文件,也不知道 file(1)正在读取的 /usr/share/misc/magic.mgc 文件。我不应该感到惊讶,但是 file(1)在识别文件类型时没有问题: - -``` -# file /usr/share/misc/magic.mgc /etc/magic -/usr/share/misc/magic.mgc: magic binary file for file(1) cmd (version 14) (little endian) -/etc/magic: magic text file for file(1) cmd, ASCII text -``` -opensnoop通过跟踪 open()系统调用来工作。为什么不使用 strace -feopen file 命令呢? 这将在这种情况下起作用。然而,opensnoop 的一些优点在于它能在系统范围内工作,并且跟踪所有进程的 open()系统调用。注意上例的输出中包括了从systemd打开的文件。Opensnoop 也应该有更低的开销:BPF 跟踪已经被优化,并且当前版本的 strace(1)仍然使用较老和较慢的 ptrace(2)接口。 - -### 3\. 
xfsslower - -bcc/BPF 不仅仅可以分析系统调用。[xfsslower][17] 工具跟踪具有大于1毫秒(参数)延迟的常见XFS文件系统操作。 - -``` -# /usr/share/bcc/tools/xfsslower 1 -Tracing XFS operations slower than 1 ms -TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME -14:17:34 systemd-journa 530 S 0 0 1.69 system.journal -14:17:35 auditd 651 S 0 0 2.43 audit.log -14:17:42 cksum 4167 R 52976 0 1.04 at -14:17:45 cksum 4168 R 53264 0 1.62 [ -14:17:45 cksum 4168 R 65536 0 1.01 certutil -14:17:45 cksum 4168 R 65536 0 1.01 dir -14:17:45 cksum 4168 R 65536 0 1.17 dirmngr-client -14:17:46 cksum 4168 R 65536 0 1.06 grub2-file -14:17:46 cksum 4168 R 65536 128 1.01 grub2-fstest -[...] -``` -在上图输出中,我捕获了多个延迟超过 1 毫秒 的 cksum(1)读数(字段“T”等于“R”)。这个工作是在 xfsslower 工具运行的时候,通过在 XFS 中动态地设置内核函数实现,当它结束的时候解除检测。其他文件系统也有这个 bcc 工具的版本:ext4slower,btrfsslower,zfsslower 和 nfsslower。 - -这是个有用的工具,也是 BPF 追踪的重要例子。对文件系统性能的传统分析主要集中在块 I/O 统计信息 - 通常你看到的是由 iostat(1)工具打印并由许多性能监视 GUI 绘制的图表。这些统计数据显示了磁盘如何执行,但不是真正的文件系统。通常比起磁盘你更关心文件系统的性能,因为应用程序是在文件系统中发起请求和等待。并且文件系统的性能可能与磁盘的性能大为不同!文件系统可以完全从内存缓存中读取数据,也可以通过预读算法和回写缓存填充缓存。xfsslower 显示了文件系统的性能 - 应用程序直接体验到什么。这对于免除整个存储子系统通常是有用的; 如果确实没有文件系统延迟,那么性能问题很可能在别处。 - -### 4\. biolatency - -虽然文件系统性能对于理解应用程序性能非常重要,但研究磁盘性能也是有好处的。当各种缓存技巧不能再隐藏其延迟时,磁盘的低性能终会影响应用程序。 磁盘性能也是容量规划研究的目标。 - -iostat(1)工具显示平均磁盘 I/O 延迟,但平均值可能会引起误解。 以直方图的形式研究 I/O 延迟的分布是有用的,这可以通过使用 [biolatency] 来实现[18]: - -``` -# /usr/share/bcc/tools/biolatency -Tracing block device I/O... Hit Ctrl-C to end. -^C - usecs : count distribution - 0 -> 1 : 0 | | - 2 -> 3 : 0 | | - 4 -> 7 : 0 | | - 8 -> 15 : 0 | | - 16 -> 31 : 0 | | - 32 -> 63 : 1 | | - 64 -> 127 : 63 |**** | - 128 -> 255 : 121 |********* | - 256 -> 511 : 483 |************************************ | - 512 -> 1023 : 532 |****************************************| - 1024 -> 2047 : 117 |******** | - 2048 -> 4095 : 8 | | -``` -这是另一个有用的工具和例子; 它使用一个名为maps的BPF特性,它可以用来实现高效的内核内摘要统计。从内核级别到用户级别的数据传输仅仅是“计数”列。 用户级程序生成其余的。 - -值得注意的是,其中许多工具支持CLI选项和参数,如其使用信息所示: - -``` -# /usr/share/bcc/tools/biolatency -h -usage: biolatency [-h] [-T] [-Q] [-m] [-D] [interval] [count] - -Summarize block device I/O latency as a histogram - -positional arguments: - interval output interval, in seconds - count number of outputs - -optional arguments: - -h, --help show this help message and exit - -T, --timestamp include timestamp on output - -Q, --queued include OS queued time in I/O time - -m, --milliseconds millisecond histogram - -D, --disks print a histogram per disk device - -examples: - ./biolatency # summarize block I/O latency as a histogram - ./biolatency 1 10 # print 1 second summaries, 10 times - ./biolatency -mT 1 # 1s summaries, milliseconds, and timestamps - ./biolatency -Q # include OS queued time in I/O time - ./biolatency -D # show each disk device separately -``` -它们的行为像其他Unix工具是通过设计,以协助采用。 - -### 5\. tcplife - -另一个有用的工具是[tcplife][19] ,该例显示TCP会话的生命周期和吞吐量统计 - -``` -# /usr/share/bcc/tools/tcplife -PID COMM LADDR LPORT RADDR RPORT TX_KB RX_KB MS -12759 sshd 192.168.56.101 22 192.168.56.1 60639 2 3 1863.82 -12783 sshd 192.168.56.101 22 192.168.56.1 60640 3 3 9174.53 -12844 wget 10.0.2.15 34250 54.204.39.132 443 11 1870 5712.26 -12851 curl 10.0.2.15 34252 54.204.39.132 443 0 74 505.90 -``` -在你说:“我不能只是刮 tcpdump(8)输出这个?”之前请注意,运行 tcpdump(8)或任何数据包嗅探器,在高数据包速率系统上花费的开销会很大,即使tcpdump(8)的用户级和内核级机制已经过多年优化(可能更差)。tcplife不会测试每个数据包; 它只会监视TCP会话状态的变化,从而影响会话的持续时间。它还使用已经跟踪吞吐量的内核计数器,以及处理和命令信息(“PID”和“COMM”列),这些对 tcpdump(8)等线上嗅探工具是做不到的。 - -### 6\. 
gethostlatency - -之前的每个例子都涉及到内核跟踪,所以我至少需要一个用户级跟踪的例子。 这是[gethostlatency] [20],其中gethostbyname(3)和相关的库调用名称解析: - -``` -# /usr/share/bcc/tools/gethostlatency -TIME PID COMM LATms HOST -06:43:33 12903 curl 188.98 opensource.com -06:43:36 12905 curl 8.45 opensource.com -06:43:40 12907 curl 6.55 opensource.com -06:43:44 12911 curl 9.67 opensource.com -06:45:02 12948 curl 19.66 opensource.cats -06:45:06 12950 curl 18.37 opensource.cats -06:45:07 12952 curl 13.64 opensource.cats -06:45:19 13139 curl 13.10 opensource.cats -``` -是的,它始终是DNS,所以有一个工具来监视系统范围内的DNS请求可以很方便(这只有在应用程序使用标准系统库时才有效)看看我如何跟踪多个查找“opensource.com”? 第一个是188.98毫秒,然后是更快,不到10毫秒,毫无疑问,缓存的作用。它还追踪多个查找“opensource.cats”,一个可悲的不存在的主机,但我们仍然可以检查第一个和后续查找的延迟。 (第二次查找后是否有一点负面缓存?) - -### 7\. trace - -好的,再举一个例子。 [trace] [21]工具由Sasha Goldshtein提供,并提供了一些基本的printf(1)功能和自定义探针。 例如: - -``` -# /usr/share/bcc/tools/trace 'pam:pam_start "%s: %s", arg1, arg2' -PID TID COMM FUNC - -13266 13266 sshd pam_start sshd: root -``` -在这里,我正在跟踪 libpam 及其 pam_start(3)函数并将其两个参数都打印为字符串。 Libpam 用于可插入的身份验证模块系统,输出显示 sshd 为“root”用户(我登录)调用了 pam_start()。 USAGE消息中有更多的例子(“trace -h”),而且所有这些工具在bcc版本库中都有手册页和示例文件。 例如trace_example.txt和trace.8。 - -### 通过包安装 bcc - -安装 bcc 最佳的方法是从 iovisor 仓储库中安装,按照 bcc [INSTALL.md][22]。[IO Visor] [23]是包含 bcc 的Linux基金会项目。4.x系列Linux内核中增加了这些工具使用的BPF增强功能,上至4.9 \。这意味着拥有4.8内核的 Fedora 25可以运行大部分这些工具。 Fedora 26及其4.11内核可以运行它们(至少目前)。 - -如果你使用的是Fedora 25(或者Fedora 26,而且这个帖子已经在很多个月前发布了 - 你好,来自遥远的过去!),那么这个包的方法应该是正常的。 如果您使用的是Fedora 26,那么请跳至“通过源代码安装”部分,该部分避免了已知的固定错误。 这个错误修复目前还没有进入Fedora 26软件包的依赖关系。 我使用的系统是: - -``` -# uname -a -Linux localhost.localdomain 4.11.8-300.fc26.x86_64 #1 SMP Thu Jun 29 20:09:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux -# cat /etc/fedora-release -Fedora release 26 (Twenty Six) -``` -以下是我所遵循的安装步骤,但请参阅INSTALL.md获取更新的版本: - -``` -# echo -e '[iovisor]\nbaseurl=https://repo.iovisor.org/yum/nightly/f25/$basearch\nenabled=1\ngpgcheck=0' | sudo tee /etc/yum.repos.d/iovisor.repo -# dnf install bcc-tools -[...] -Total download size: 37 M -Installed size: 143 M -Is this ok [y/N]: y -``` -安装完成后,您可以在/ usr / share中看到新的工具: - -``` -# ls /usr/share/bcc/tools/ -argdist dcsnoop killsnoop softirqs trace -bashreadline dcstat llcstat solisten ttysnoop -[...] -``` -试着运行其中一个: - -``` -# /usr/share/bcc/tools/opensnoop -chdir(/lib/modules/4.11.8-300.fc26.x86_64/build): No such file or directory -Traceback (most recent call last): - File "/usr/share/bcc/tools/opensnoop", line 126, in - b = BPF(text=bpf_text) - File "/usr/lib/python3.6/site-packages/bcc/__init__.py", line 284, in __init__ - raise Exception("Failed to compile BPF module %s" % src_file) -Exception: Failed to compile BPF module -``` -运行失败,提示/lib/modules/4.11.8-300.fc26.x86_64/build丢失。 如果你也这样做,那只是因为系统缺少内核头文件。 如果你看看这个文件指向什么(这是一个符号链接),然后使用“dnf whatprovides”来搜索它,它会告诉你接下来需要安装的包。 对于这个系统,它是: - -``` -# dnf install kernel-devel-4.11.8-300.fc26.x86_64 -[...] -Total download size: 20 M -Installed size: 63 M -Is this ok [y/N]: y -[...] -``` -现在 - -``` -# /usr/share/bcc/tools/opensnoop -PID COMM FD ERR PATH -11792 ls 3 0 /etc/ld.so.cache -11792 ls 3 0 /lib64/libselinux.so.1 -11792 ls 3 0 /lib64/libcap.so.2 -11792 ls 3 0 /lib64/libc.so.6 -[...] 
-``` -运行起来了。 这是从另一个窗口中的ls命令捕捉活动。 请参阅前面的部分以获取其他有用的命令 - -### 通过源码安装 - -如果您需要从源代码安装,您还可以在[INSTALL.md] [27]中找到文档和更新说明。 我在Fedora 26上做了如下的事情: - -``` -sudo dnf install -y bison cmake ethtool flex git iperf libstdc++-static \ - python-netaddr python-pip gcc gcc-c++ make zlib-devel \ - elfutils-libelf-devel -sudo dnf install -y luajit luajit-devel # for Lua support -sudo dnf install -y \ - http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm -sudo pip install pyroute2 -sudo dnf install -y clang clang-devel llvm llvm-devel llvm-static ncurses-devel -``` -除 netperf 外一切妥当,其中有以下错误: - -``` -Curl error (28): Timeout was reached for http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm [Connection timed out after 120002 milliseconds] -``` - -不必理会,netperf是可选的 - 它只是用于测试 - 而 bcc 没有它也会编译成功。 - -以下是 bcc 编译和安装余下的步骤: - - -``` -git clone https://github.com/iovisor/bcc.git -mkdir bcc/build; cd bcc/build -cmake .. -DCMAKE_INSTALL_PREFIX=/usr -make -sudo make install -``` -在这一点上,命令应该起作用: - -``` -# /usr/share/bcc/tools/opensnoop -PID COMM FD ERR PATH -4131 date 3 0 /etc/ld.so.cache -4131 date 3 0 /lib64/libc.so.6 -4131 date 3 0 /usr/lib/locale/locale-archive -4131 date 3 0 /etc/localtime -[...] -``` - -More Linux resources - -* [What is Linux?][1] - -* [What are Linux containers?][2] - -* [Download Now: Linux commands cheat sheet][3] - -* [Advanced Linux commands cheat sheet][4] - -* [Our latest Linux articles][5] - -### 写在最后和其他前端 - -这是一个可以在 Fedora 和 Red Hat 系列操作系统上使用的新 BPF 性能分析强大功能的快速浏览。我演示了BPF的流行前端 [bcc][28] ,并包含了其在 Fedora 上的安装说明。bcc 附带了60多个用于性能分析的新工具,这将帮助您充分利用Linux系统。也许你会直接通过SSH使用这些工具,或者一旦它们支持BPF,你也可以通过监视GUI来使用相同的功能。 - -此外,bcc并不是开发中唯一的前端。[ply][29]和[bpftrace][30],旨在为快速编写自定义工具提供更高级的语言。此外,[SystemTap] [31]刚刚发布[版本3.2] [32],包括一个早期的实验性eBPF后端。 如果这一点继续得到发展,它将为运行多年来开发的许多SystemTap脚本和攻击集(库)提供一个生产安全和高效的引擎。 (使用SystemTap和eBPF将成为另一篇文章的主题。) - -如果您需要开发自定义工具,那么也可以使用 bcc 来实现,尽管语言比 SystemTap,ply 或 bpftrace 要冗长得多。 我的 bcc 工具可以作为代码示例,另外我还贡献了[教程] [33]来开发 Python 中的 bcc 工具。 我建议先学习bcc多工具,因为在需要编写新工具之前,你可能会从里面获得很多里程。 您可以从他们 bcc 存储库[funccount] [34],[funclatency] [35],[funcslower] [36],[stackcount] [37],[trace] [38] ,[argdist] [39] 的示例文件中研究 bcc。 - -感谢[Opensource.com] [40]进行编辑。 - -###  专题 - - [Linux][41][系统管理员][42] - -### About the author - - [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/brendan_face2017_620d.jpg?itok=LIwTJjL9)][43] Brendan Gregg - -- -Brendan Gregg是Netflix的一名高级性能架构师,在那里他进行大规模的计算机性能设计,分析和调优。[关于更多] [44] - - -* [Learn how you can contribute][6] - --------------------------------------------------------------------------------- - -via:https://opensource.com/article/17/11/bccbpf-performance - -作者:[Brendan Gregg ][a] -译者:[yongshouzhang](https://github.com/yongshouzhang) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent -[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent 
-[6]:https://opensource.com/participate -[7]:https://opensource.com/users/brendang -[8]:https://opensource.com/users/brendang -[9]:https://opensource.com/user/77626/feed -[10]:https://opensource.com/article/17/11/bccbpf-performance?rate=r9hnbg3mvjFUC9FiBk9eL_ZLkioSC21SvICoaoJjaSM -[11]:https://opensource.com/article/17/11/bccbpf-performance#comments -[12]:https://github.com/iovisor/bcc -[13]:https://opensource.com/file/376856 -[14]:https://opensource.com/usr/share/bcc/tools/trace -[15]:https://github.com/brendangregg/perf-tools/blob/master/execsnoop -[16]:https://github.com/brendangregg/perf-tools/blob/master/opensnoop -[17]:https://github.com/iovisor/bcc/blob/master/tools/xfsslower.py -[18]:https://github.com/iovisor/bcc/blob/master/tools/biolatency.py -[19]:https://github.com/iovisor/bcc/blob/master/tools/tcplife.py -[20]:https://github.com/iovisor/bcc/blob/master/tools/gethostlatency.py -[21]:https://github.com/iovisor/bcc/blob/master/tools/trace.py -[22]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---binary -[23]:https://www.iovisor.org/ -[24]:https://opensource.com/article/17/11/bccbpf-performance#InstallViaSource -[25]:https://github.com/iovisor/bcc/issues/1221 -[26]:https://reviews.llvm.org/rL302055 -[27]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---source -[28]:https://github.com/iovisor/bcc -[29]:https://github.com/iovisor/ply -[30]:https://github.com/ajor/bpftrace -[31]:https://sourceware.org/systemtap/ -[32]:https://sourceware.org/ml/systemtap/2017-q4/msg00096.html -[33]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md -[34]:https://github.com/iovisor/bcc/blob/master/tools/funccount_example.txt -[35]:https://github.com/iovisor/bcc/blob/master/tools/funclatency_example.txt -[36]:https://github.com/iovisor/bcc/blob/master/tools/funcslower_example.txt -[37]:https://github.com/iovisor/bcc/blob/master/tools/stackcount_example.txt -[38]:https://github.com/iovisor/bcc/blob/master/tools/trace_example.txt -[39]:https://github.com/iovisor/bcc/blob/master/tools/argdist_example.txt -[40]:http://opensource.com/ -[41]:https://opensource.com/tags/linux -[42]:https://opensource.com/tags/sysadmin -[43]:https://opensource.com/users/brendang -[44]:https://opensource.com/users/brendang From a9d22d528f3a3647a03b14de77103b18a266af4e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?= Date: Mon, 11 Dec 2017 08:38:16 +0800 Subject: [PATCH 193/236] Rename 20171121 7 tools for analyzing performance in Linux with bccBPF.md to 20171207 7 tools for analyzing performance in Linux with bccBPF.md --- ...207 7 tools for analyzing performance in Linux with bccBPF.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename translated/tech/{20171121 7 tools for analyzing performance in Linux with bccBPF.md => 20171207 7 tools for analyzing performance in Linux with bccBPF.md} (100%) diff --git a/translated/tech/20171121 7 tools for analyzing performance in Linux with bccBPF.md b/translated/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md similarity index 100% rename from translated/tech/20171121 7 tools for analyzing performance in Linux with bccBPF.md rename to translated/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md From cbd51ce9ec57b0826b8613d89ce0a63b74d11daa Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 11 Dec 2017 08:43:06 +0800 Subject: [PATCH 194/236] PRF&PUB:20171125 AWS to Help Build ONNX Open Source AI Platform.md @geekpi https://linux.cn/article-9129-1.html --- 
...Help Build ONNX Open Source AI Platform.md | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) rename {translated/tech => published}/20171125 AWS to Help Build ONNX Open Source AI Platform.md (77%) diff --git a/translated/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md b/published/20171125 AWS to Help Build ONNX Open Source AI Platform.md similarity index 77% rename from translated/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md rename to published/20171125 AWS to Help Build ONNX Open Source AI Platform.md index 8f80387e82..d847db3366 100644 --- a/translated/tech/20171125 AWS to Help Build ONNX Open Source AI Platform.md +++ b/published/20171125 AWS to Help Build ONNX Open Source AI Platform.md @@ -3,27 +3,27 @@ AWS 帮助构建 ONNX 开源 AI 平台 ![onnx-open-source-ai-platform](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-onnx-1.jpg) -AWS 已经成为最近加入深度学习社区的开放神经网络交换(ONNX)协作的最新技术公司,最近在无摩擦和可互操作的环境中推出了高级人工智能。由 Facebook 和微软领头。 +AWS 最近成为了加入深度学习社区的开放神经网络交换Open Neural Network Exchange(ONNX)协作的技术公司,最近在无障碍和可互操作frictionless and interoperable的环境中推出了高级人工智能。由 Facebook 和微软领头了该协作。 -作为该合作的一部分,AWS 将其开源 Python 软件包 ONNX-MxNet 作为一个深度学习框架提供,该框架提供跨多种语言的编程接口,包括 Python、Scala 和开源统计软件 R。 +作为该合作的一部分,AWS 开源其深度学习框架 Python 软件包 ONNX-MXNet,该框架提供了跨多种语言的编程接口(API),包括 Python、Scala 和开源统计软件 R。 -AWS 深度学习工程经理 Hagay Lupesko 和软件开发人员 Roshani Nagmote 上周在一篇帖子中写道:ONNX 格式将帮助开发人员构建和训练其他框架的模型,包括 PyTorch、Microsoft Cognitive Toolkit 或 Caffe2。它可以让开发人员将这些模型导入 MXNet,并运行它们进行推理。 +AWS 深度学习工程经理 Hagay Lupesko 和软件开发人员 Roshani Nagmote 上周在一篇帖子中写道,ONNX 格式将帮助开发人员构建和训练其它框架的模型,包括 PyTorch、Microsoft Cognitive Toolkit 或 Caffe2。它可以让开发人员将这些模型导入 MXNet,并运行它们进行推理。 ### 对开发者的帮助 今年夏天,Facebook 和微软推出了 ONNX,以支持共享模式的互操作性,来促进 AI 的发展。微软提交了其 Cognitive Toolkit、Caffe2 和 PyTorch 来支持 ONNX。 -微软表示:Cognitive Toolkit 和其他框架使开发人员更容易构建和运行代表神经网络的计算图。 +微软表示:Cognitive Toolkit 和其他框架使开发人员更容易构建和运行计算图以表达神经网络。 -Github 上提供了[ ONNX 代码和文档][4]的初始版本。 +[ONNX 代码和文档][4]的初始版本已经放到了 Github。 AWS 和微软上个月宣布了在 Apache MXNet 上的一个新 Gluon 接口计划,该计划允许开发人员构建和训练深度学习模型。 -[Tractica][5] 的研究总监 Aditya Kaul 观察到:“Gluon 是他们与 Google 的 Tensorflow 竞争的合作伙伴关系的延伸”。 +[Tractica][5] 的研究总监 Aditya Kaul 观察到:“Gluon 是他们试图与 Google 的 Tensorflow 竞争的合作伙伴关系的延伸”。 -他告诉 LinuxInsider,“谷歌在这点上的疏忽是非常明显的,但也说明了他们在市场上的主导地位。 +他告诉 LinuxInsider,“谷歌在这点上的疏忽是非常明显的,但也说明了他们在市场上的主导地位。” -Kaul 说:“甚至 Tensorflow 是开源的,所以开源在这里并不是什么大事,但这归结到底是其他生态系统联手与谷歌竞争。” +Kaul 说:“甚至 Tensorflow 也是开源的,所以开源在这里并不是什么大事,但这归结到底是其他生态系统联手与谷歌竞争。” 根据 AWS 的说法,本月早些时候,Apache MXNet 社区推出了 MXNet 的 0.12 版本,它扩展了 Gluon 的功能,以便进行新的尖端研究。它的新功能之一是变分 dropout,它允许开发人员使用 dropout 技术来缓解递归神经网络中的过拟合。 @@ -52,15 +52,15 @@ Tractica 的 Kaul 指出:“框架互操作性是一件好事,这会帮助 越来越多的大型科技公司已经宣布使用开源技术来加快 AI 协作开发的计划,以便创建更加统一的开发和研究平台。 AT&T 几周前宣布了与 TechMahindra 和 Linux 基金会合作[推出 Acumos 项目][8]的计划。该平台旨在开拓电信、媒体和技术方面的合作。 -![](https://www.ectnews.com/images/end-enn.gif) + -------------------------------------------------------------------------------- via: https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html -作者:[ David Jones ][a] +作者:[David Jones][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 5abd318410232ce883032b1fcb1628c9319b8eec Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 11 Dec 2017 09:06:13 +0800 Subject: [PATCH 195/236] translated --- .../20171107 GitHub welcomes all CI tools.md | 95 ------------------- .../20171107 GitHub welcomes all CI tools.md | 93 
++++++++++++++++++ 2 files changed, 93 insertions(+), 95 deletions(-) delete mode 100644 sources/tech/20171107 GitHub welcomes all CI tools.md create mode 100644 translated/tech/20171107 GitHub welcomes all CI tools.md diff --git a/sources/tech/20171107 GitHub welcomes all CI tools.md b/sources/tech/20171107 GitHub welcomes all CI tools.md deleted file mode 100644 index 7bef351bd6..0000000000 --- a/sources/tech/20171107 GitHub welcomes all CI tools.md +++ /dev/null @@ -1,95 +0,0 @@ -translating---geekpi - -GitHub welcomes all CI tools -==================== - - -[![GitHub and all CI tools](https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png)][11] - -Continuous Integration ([CI][12]) tools help you stick to your team's quality standards by running tests every time you push a new commit and [reporting the results][13] to a pull request. Combined with continuous delivery ([CD][14]) tools, you can also test your code on multiple configurations, run additional performance tests, and automate every step [until production][15]. - -There are several CI and CD tools that [integrate with GitHub][16], some of which you can install in a few clicks from [GitHub Marketplace][17]. With so many options, you can pick the best tool for the job—even if it's not the one that comes pre-integrated with your system. - -The tools that will work best for you depends on many factors, including: - -* Programming language and application architecture - -* Operating system and browsers you plan to support - -* Your team's experience and skills - -* Scaling capabilities and plans for growth - -* Geographic distribution of dependent systems and the people who use them - -* Packaging and delivery goals - -Of course, it isn't possible to optimize your CI tool for all of these scenarios. The people who build them have to choose which use cases to serve best—and when to prioritize complexity over simplicity. For example, if you like to test small applications written in a particular programming language for one platform, you won't need the complexity of a tool that tests embedded software controllers on dozens of platforms with a broad mix of programming languages and frameworks. - -If you need a little inspiration for which CI tool might work best, take a look at [popular GitHub projects][18]. Many show the status of their integrated CI/CD tools as badges in their README.md. We've also analyzed the use of CI tools across more than 50 million repositories in the GitHub community, and found a lot of variety. The following diagram shows the relative percentage of the top 10 CI tools used with GitHub.com, based on the most used [commit status contexts][19] used within our pull requests. 
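
For readers who have not met a “commit status context” before: each CI system reports its result by POSTing a status with a named context to GitHub's Statuses API, and those context names are what the analysis above counts. A minimal sketch in Python — the owner, repository, SHA, and token values below are placeholders, not anything taken from this article:

```
import json
import os
import urllib.request

# Placeholders -- substitute a real repository, commit SHA, and API token.
OWNER, REPO = "example-org", "example-repo"
SHA = "0123456789abcdef0123456789abcdef01234567"
TOKEN = os.environ["GITHUB_TOKEN"]  # a token allowed to write commit statuses

payload = {
    "state": "success",               # pending, success, error or failure
    "context": "ci/example-tool",     # the commit status context being counted
    "description": "Build passed",
    "target_url": "https://ci.example.com/build/1",
}

req = urllib.request.Request(
    "https://api.github.com/repos/{}/{}/statuses/{}".format(OWNER, REPO, SHA),
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": "token " + TOKEN,
             "Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 201 when the status was created
```
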
- - _Our analysis also showed that many teams use more than one CI tool in their projects, allowing them to emphasize what each tool does best._ - - [![Top 10 CI systems used with GitHub.com based on most used commit status contexts](https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png)][20] - -If you'd like to check them out, here are the top 10 tools teams use: - -* [Travis CI][1] - -* [Circle CI][2] - -* [Jenkins][3] - -* [AppVeyor][4] - -* [CodeShip][5] - -* [Drone][6] - -* [Semaphore CI][7] - -* [Buildkite][8] - -* [Wercker][9] - -* [TeamCity][10] - -It's tempting to just pick the default, pre-integrated tool without taking the time to research and choose the best one for the job, but there are plenty of [excellent choices][21] built for your specific use cases. And if you change your mind later, no problem. When you choose the best tool for a specific situation, you're guaranteeing tailored performance and the freedom of interchangability when it no longer fits. - -Ready to see how CI tools can fit into your workflow? - -[Browse GitHub Marketplace][22] - --------------------------------------------------------------------------------- - -via: https://github.com/blog/2463-github-welcomes-all-ci-tools - -作者:[jonico ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://github.com/jonico -[1]:https://travis-ci.org/ -[2]:https://circleci.com/ -[3]:https://jenkins.io/ -[4]:https://www.appveyor.com/ -[5]:https://codeship.com/ -[6]:http://try.drone.io/ -[7]:https://semaphoreci.com/ -[8]:https://buildkite.com/ -[9]:http://www.wercker.com/ -[10]:https://www.jetbrains.com/teamcity/ -[11]:https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png -[12]:https://en.wikipedia.org/wiki/Continuous_integration -[13]:https://github.com/blog/2051-protected-branches-and-required-status-checks -[14]:https://en.wikipedia.org/wiki/Continuous_delivery -[15]:https://developer.github.com/changes/2014-01-09-preview-the-new-deployments-api/ -[16]:https://github.com/works-with/category/continuous-integration -[17]:https://github.com/marketplace/category/continuous-integration -[18]:https://github.com/explore?trending=repositories#trending -[19]:https://developer.github.com/v3/repos/statuses/ -[20]:https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png -[21]:https://github.com/works-with/category/continuous-integration -[22]:https://github.com/marketplace/category/continuous-integration diff --git a/translated/tech/20171107 GitHub welcomes all CI tools.md b/translated/tech/20171107 GitHub welcomes all CI tools.md new file mode 100644 index 0000000000..a20d164014 --- /dev/null +++ b/translated/tech/20171107 GitHub welcomes all CI tools.md @@ -0,0 +1,93 @@ +GitHub 欢迎所有 CI 工具 +==================== + + +[![GitHub and all CI tools](https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png)][11] + +持续集成([CI][12])工具可以帮助你在每次提交时执行测试,并将[报告结果][13]提交到合并请求,从而帮助维持团队的质量标准。结合持续交付([CD][14])工具,你还可以在多种配置上测试你的代码,运行额外的性能测试,并自动执行每个步骤[直到产品][15]。 + +有几个[与 GitHub 集成][16]的 CI 和 CD 工具,其中一些可以在 [GitHub Marketplace][17] 中点击几下安装。有了这么多的选择,你可以选择最好的工具 - 即使它不是与你的系统预集成的工具。 + +最适合你的工具取决于许多因素,其中包括: + +* 编程语言和程序架构 + +* 你计划支持的操作系统和浏览器 + +* 你团队的经验和技能 + +* 扩展能力和增长计划 + +* 依赖系统的地理分布和使用的人 + +* 打包和交付目标 + +当然,无法为所有这些情况优化你的 CI 
工具。构建这些工具的人必须选择优先支持哪些使用场景,以及何时选择复杂、何时选择简单。例如,如果你只想针对单一平台测试用某种特定语言编写的小程序,那么你就用不着那种能在数十个平台上、以多种编程语言和框架的组合来测试嵌入式软件控制器的复杂工具。
+
+如果你需要一些灵感来挑选最好使用哪个 CI 工具,那么看一下 [GitHub 上的流行项目][18]。许多项目在 README.md 中以徽章的形式展示了它们所集成的 CI/CD 工具的状态。我们还分析了 GitHub 社区中超过 5000 万个仓库中 CI 工具的使用情况,并发现了很大的多样性。下图显示了根据我们的 pull 请求中使用最多的[提交状态上下文][19],GitHub.com 使用的前 10 个 CI 工具的相对百分比。
+
+_我们的分析还显示,许多团队在他们的项目中使用了不止一种 CI 工具,让每种工具都能发挥所长。_
+
+ [![Top 10 CI systems used with GitHub.com based on most used commit status contexts](https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png)][20]
+
+如果你想了解一下它们,下面是团队使用最多的 10 个工具:
+
+* [Travis CI][1]
+
+* [Circle CI][2]
+
+* [Jenkins][3]
+
+* [AppVeyor][4]
+
+* [CodeShip][5]
+
+* [Drone][6]
+
+* [Semaphore CI][7]
+
+* [Buildkite][8]
+
+* [Wercker][9]
+
+* [TeamCity][10]
+
+人们很容易不花时间研究和选择最适合任务的工具,而直接选用默认的预集成工具,不过针对你的特定使用场景,还有许多[出色的选择][21]。如果你以后改变主意,也没有问题。当你为特定场景选择了最佳工具时,你既能得到量身定制的性能,也保留了在它不再合适时进行更换的自由。
+
+准备好了解 CI 工具如何适应你的工作流程了吗?
+
+[浏览 GitHub Marketplace][22]
+
+--------------------------------------------------------------------------------
+
+via: https://github.com/blog/2463-github-welcomes-all-ci-tools
+
+作者:[jonico][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://github.com/jonico
+[1]:https://travis-ci.org/
+[2]:https://circleci.com/
+[3]:https://jenkins.io/
+[4]:https://www.appveyor.com/
+[5]:https://codeship.com/
+[6]:http://try.drone.io/
+[7]:https://semaphoreci.com/
+[8]:https://buildkite.com/
+[9]:http://www.wercker.com/
+[10]:https://www.jetbrains.com/teamcity/
+[11]:https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png
+[12]:https://en.wikipedia.org/wiki/Continuous_integration
+[13]:https://github.com/blog/2051-protected-branches-and-required-status-checks
+[14]:https://en.wikipedia.org/wiki/Continuous_delivery
+[15]:https://developer.github.com/changes/2014-01-09-preview-the-new-deployments-api/
+[16]:https://github.com/works-with/category/continuous-integration
+[17]:https://github.com/marketplace/category/continuous-integration
+[18]:https://github.com/explore?trending=repositories#trending
+[19]:https://developer.github.com/v3/repos/statuses/
+[20]:https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png
+[21]:https://github.com/works-with/category/continuous-integration
+[22]:https://github.com/marketplace/category/continuous-integration
From 4fd8bbbaf391c06c2149178577f0dd4314723153 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 11 Dec 2017 09:12:10 +0800
Subject: [PATCH 196/236] translating

---
 ...204 FreeCAD – A 3D Modeling and Design Software for Linux.md | 2 ++
 ...4 GNOME Boxes Makes It Easier to Test Drive Linux Distros.md | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/sources/tech/20171204 FreeCAD – A 3D Modeling and Design Software for Linux.md b/sources/tech/20171204 FreeCAD – A 3D Modeling and Design Software for Linux.md
index 6df21bce1b..fbcbacaf7f 100644
--- a/sources/tech/20171204 FreeCAD – A 3D Modeling and Design Software for Linux.md
+++ b/sources/tech/20171204 FreeCAD – A 3D Modeling and Design Software for Linux.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
 FreeCAD – A 3D Modeling and Design Software for Linux
 ============================================================
 ![FreeCAD 3D Modeling Software](https://www.fossmint.com/wp-content/uploads/2017/12/FreeCAD-3D-Modeling-Software.png)

diff --git
a/sources/tech/20171204 GNOME Boxes Makes It Easier to Test Drive Linux Distros.md b/sources/tech/20171204 GNOME Boxes Makes It Easier to Test Drive Linux Distros.md index 42556932c1..2564f94ded 100644 --- a/sources/tech/20171204 GNOME Boxes Makes It Easier to Test Drive Linux Distros.md +++ b/sources/tech/20171204 GNOME Boxes Makes It Easier to Test Drive Linux Distros.md @@ -1,3 +1,5 @@ +translating---geekpi + # GNOME Boxes Makes It Easier to Test Drive Linux Distros ![GNOME Boxes Distribution Selection](http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/GNOME-Boxes-INstall-Distros-750x475.jpg) From 96d9a191ab3afd1429320cccd0889e3912051b32 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 11 Dec 2017 17:37:58 +0800 Subject: [PATCH 197/236] PRF&PUB:20171108 Archiving repositories.md @geekpi --- .../20171108 Archiving repositories.md | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) rename {translated/tech => published}/20171108 Archiving repositories.md (59%) diff --git a/translated/tech/20171108 Archiving repositories.md b/published/20171108 Archiving repositories.md similarity index 59% rename from translated/tech/20171108 Archiving repositories.md rename to published/20171108 Archiving repositories.md index 3d1a328541..4ed822a7f2 100644 --- a/translated/tech/20171108 Archiving repositories.md +++ b/published/20171108 Archiving repositories.md @@ -1,20 +1,19 @@ -归档仓库 +如何归档 GitHub 仓库 ==================== - -因为仓库不再活跃开发或者你不想接受额外的贡献并不意味着你想要删除它。现在在 Github 上归档仓库让它变成只读。 +如果仓库不再活跃开发或者你不想接受额外的贡献,但这并不意味着你想要删除它。现在可以在 Github 上归档仓库让它变成只读。 [![archived repository banner](https://user-images.githubusercontent.com/7321362/32558403-450458dc-c46a-11e7-96f9-af31d2206acb.png)][1] -归档一个仓库让它对所有人只读(包括仓库拥有者)。这包括编辑仓库、问题、合并请求、标记、里程碑、维基、发布、提交、标签、分支、反馈和评论。没有人可以在一个归档的仓库上创建新的问题、合并请求或者评论,但是你仍可以 fork 仓库-允许归档的仓库在其他地方继续开发。 +归档一个仓库会让它对所有人只读(包括仓库拥有者)。这包括对仓库的编辑、问题issue合并请求pull request(PR)、标记、里程碑、项目、维基、发布、提交、标签、分支、反馈和评论。谁都不可以在一个归档的仓库上创建新的问题、合并请求或者评论,但是你仍可以 fork 仓库——以允许归档的仓库在其它地方继续开发。 -要归档一个仓库,进入仓库设置页面并点在这个仓库上点击归档。 +要归档一个仓库,进入仓库设置页面并点在这个仓库上点击“归档该仓库Archive this repository”。 [![archive repository button](https://user-images.githubusercontent.com/125011/32273119-0fc5571e-bef9-11e7-9909-d137268a1d6d.png)][2] -在归档你的仓库前,确保你已经更改了它的设置并考虑关闭所有的开放问题和合并请求。你还应该更新你的 README 和描述来让它让访问者了解他不再能够贡献。 +在归档你的仓库前,确保你已经更改了它的设置并考虑关闭所有的开放问题和合并请求。你还应该更新你的 README 和描述来让它让访问者了解他不再能够对之贡献。 -如果你改变了主意想要解除归档你的仓库,在相同的地方点击解除归档。请注意大多数归档仓库的设置是隐藏的,并且你需要解除归档来改变它们。 +如果你改变了主意想要解除归档你的仓库,在相同的地方点击“解除归档该仓库Unarchive this repository”。请注意归档仓库的大多数设置是隐藏的,你需要解除归档才能改变它们。 [![archived labelled repository](https://user-images.githubusercontent.com/125011/32541128-9d67a064-c466-11e7-857e-3834054ba3c9.png)][3] @@ -24,9 +23,9 @@ via: https://github.com/blog/2460-archiving-repositories -作者:[MikeMcQuaid ][a] +作者:[MikeMcQuaid][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d2389afea700fb3634d260a3a4f2b06b0c6b7a2d Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 11 Dec 2017 18:12:54 +0800 Subject: [PATCH 198/236] PRF&PUB:20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @imquanquan 定时发布 https://linux.cn/article-9131-1.html --- ...ram Will Exactly Do Before Executing It.md | 37 +++++++++---------- 1 file changed, 17 insertions(+), 20 deletions(-) rename {translated/tech => 
published}/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md (61%) diff --git a/translated/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md b/published/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md similarity index 61% rename from translated/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md rename to published/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md index 5123c87df9..10d38dde17 100644 --- a/translated/tech/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md +++ b/published/20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md @@ -1,21 +1,23 @@ -如何获知一个命令或程序在执行前将会做什么 +如何在执行一个命令或程序之前就了解它会做什么 ====== -有没有想过一个 Unix 命令在执行前将会干些什么呢?并不是每个人都会知道一个特定的命令或者程序将会做什么。当然,你可以用 [Explainshell][2] 来查看它。你可以在 Explainshell 网站中粘贴你的命令,然后它可以让你了解命令的每个部分做了什么。但是,这是没有必要的。现在,我们从终端就可以轻易地知道一个命令或者程序在执行前会做什么。 `maybe` ,一个简单的工具,它允许你运行一条命令并可以查看此命令对你的文件系统做了什么而实际上这条命令却并未执行!在查看 `maybe` 的输出列表后,你可以决定是否真的想要运行这条命令。 +有没有想过在执行一个 Unix 命令前就知道它干些什么呢?并不是每个人都会知道一个特定的命令或者程序将会做什么。当然,你可以用 [Explainshell][2] 来查看它。你可以在 Explainshell 网站中粘贴你的命令,然后它可以让你了解命令的每个部分做了什么。但是,这是没有必要的。现在,我们从终端就可以轻易地在执行一个命令或者程序前就知道它会做什么。 `maybe` ,一个简单的工具,它允许你运行一条命令并可以查看此命令对你的文件做了什么,而实际上这条命令却并未执行!在查看 `maybe` 的输出列表后,你可以决定是否真的想要运行这条命令。 -#### “maybe”是如何工作的 +![](https://www.ostechnix.com/wp-content/uploads/2017/12/maybe-2-720x340.png) -根据开发者的介绍 +### `maybe` 是如何工作的 -> `maybe` 利用 `python-ptrace` 库运行了一个在 `ptrace` 控制下的进程。当它截取到一个即将更改文件系统的系统调用时,它会记录该调用,然后修改 CPU 寄存器,将这个调用重定向到一个无效的系统调用 ID(将其变成一个无效操作(no-op)),并将这个无效操作(no-op)的返回值设置为有效操作的返回值。结果,这个进程认为,它所做的一切都发生了,实际上什么都没有改变。 +根据开发者的介绍: -警告: 在生产环境或者任何你所关心的系统里面使用这个工具时都应该小心。它仍然可能造成严重的损失,因为它只能阻止少数系统调用。 +> `maybe` 利用 `python-ptrace` 库在 `ptrace` 控制下运行了一个进程。当它截取到一个即将更改文件系统的系统调用时,它会记录该调用,然后修改 CPU 寄存器,将这个调用重定向到一个无效的系统调用 ID(效果上将其变成一个无效操作(no-op)),并将这个无效操作(no-op)的返回值设置为有效操作的返回值。结果,这个进程认为,它所做的一切都发生了,实际上什么都没有改变。 -#### 安装 “maybe” +警告:在生产环境或者任何你所关心的系统里面使用这个工具时都应该小心。它仍然可能造成严重的损失,因为它只能阻止少数系统调用。 + +#### 安装 `maybe` 确保你已经在你的 Linux 系统中已经安装了 `pip` 。如果没有,可以根据您使用的发行版,按照如下指示进行安装。 -在 Arch Linux 及其衍生产品(如 Antergos,Manjaro Linux)上,使用以下命令安装 `pip` : +在 Arch Linux 及其衍生产品(如 Antergos、Manjaro Linux)上,使用以下命令安装 `pip` : ``` sudo pacman -S python-pip @@ -25,8 +27,6 @@ sudo pacman -S python-pip ``` sudo yum install epel-release -``` -``` sudo yum install python-pip ``` @@ -34,8 +34,6 @@ sudo yum install python-pip ``` sudo dnf install epel-release -``` -``` sudo dnf install python-pip ``` @@ -45,19 +43,19 @@ sudo dnf install python-pip sudo apt-get install python-pip ``` -在 SUSE, openSUSE 上: +在 SUSE、 openSUSE 上: ``` sudo zypper install python-pip ``` -安装 `pip` 后,运行以下命令安装 `maybe` 。 +安装 `pip` 后,运行以下命令安装 `maybe` : ``` sudo pip install maybe ``` -#### 了解一个命令或程序在执行前会做什么 +### 了解一个命令或程序在执行前会做什么 用法是非常简单的!只要在要执行的命令前加上 `maybe` 即可。 @@ -83,8 +81,7 @@ Do you want to rerun rm -r ostechnix/ and permit these operations? [y/N] y [![](http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png)][3] - - `maybe` 执行 5 个文件系统操作,并向我显示该命令(rm -r ostechnix /)究竟会做什么。现在我可以决定是否应该执行这个操作。是不是很酷呢?确实很酷! + `maybe` 执行了 5 个文件系统操作,并向我显示该命令(`rm -r ostechnix/`)究竟会做什么。现在我可以决定是否应该执行这个操作。是不是很酷呢?确实很酷! 这是另一个例子。我要为 Gmail 安装 Inboxer 桌面客户端。这是我得到的输出: @@ -122,9 +119,9 @@ maybe has not detected any file system operations from sudo pacman -Syu. Cheers! 
-资源: +资源: -* [“maybe” GitHub page][1] +* [`maybe` GitHub 主页][1] -------------------------------------------------------------------------------- @@ -132,7 +129,7 @@ via: https://www.ostechnix.com/know-command-program-will-exactly-executing/ 作者:[SK][a] 译者:[imquanquan](https://github.com/imquanquan) -校对:[校对ID](https://github.com/校对ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a7e0ea96ddcffb977507ff27c8e03a64b7212cea Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 11 Dec 2017 19:05:04 +0800 Subject: [PATCH 199/236] PRF&PUB:20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md @wenwensnow --- ...e your WiFi MAC address on Ubuntu 16.04.md | 161 ++++++++++++++++ ...e your WiFi MAC address on Ubuntu 16.04.md | 180 ------------------ 2 files changed, 161 insertions(+), 180 deletions(-) create mode 100644 published/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md delete mode 100644 translated/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md diff --git a/published/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md b/published/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md new file mode 100644 index 0000000000..5dcb985fcd --- /dev/null +++ b/published/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md @@ -0,0 +1,161 @@ +在 Ubuntu 16.04 下随机化你的 WiFi MAC 地址 +============================================================ + +> 你的设备的 MAC 地址可以在不同的 WiFi 网络中记录你的活动。这些信息能被共享后出售,用于识别特定的个体。但可以用随机生成的伪 MAC 地址来阻止这一行为。 + + +![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif) + +_Image courtesy of [Cloudessa][4]_ + +每一个诸如 WiFi 或者以太网卡这样的网络设备,都有一个叫做 MAC 地址的唯一标识符,如:`b4:b6:76:31:8c:ff`。这就是你能上网的原因:每当你连上 WiFi,路由器就会用这一地址来向你接受和发送数据,并且用它来区别你和这一网络的其它设备。 + +这一设计的缺陷在于唯一性,不变的 MAC 地址正好可以用来追踪你。连上了星巴克的 WiFi? 好,注意到了。在伦敦的地铁上? 也记录下来。 + +如果你曾经在某一个 WiFi 验证页面上输入过你的真实姓名,你就已经把自己和这一 MAC 地址建立了联系。没有仔细阅读许可服务条款、你可以认为,机场的免费 WiFi 正通过出售所谓的 ‘顾客分析数据’(你的个人信息)获利。出售的对象包括酒店,餐饮业,和任何想要了解你的人。 + +我不想信息被记录,再出售给多家公司,所以我花了几个小时想出了一个解决方案。 + +### MAC 地址不一定总是不变的 + +幸运的是,在不断开网络的情况下,是可以随机生成一个伪 MAC 地址的。 + +我想随机生成我的 MAC 地址,但是有三个要求: + +1. MAC 地址在不同网络中是不相同的。这意味着,我在星巴克和在伦敦地铁网络中的 MAC 地址是不相同的,这样在不同的服务提供商中就无法将我的活动系起来。 +2. MAC 地址需要经常更换,这样在网络上就没人知道我就是去年在这儿经过了 75 次的那个人。 +3. 
MAC 地址一天之内应该保持不变。当 MAC 地址更改时,大多数网络都会与你断开连接,然后必须得进入验证页面再次登录 - 这很烦人。

### 操作网络管理器NetworkManager

我第一次尝试用一个叫做 `macchanger` 的工具,但是失败了。因为网络管理器NetworkManager会根据它自己的设置恢复默认的 MAC 地址。

我了解到,网络管理器 1.4.1 以上版本可以自动生成随机的 MAC 地址。如果你在使用 Ubuntu 17.04 版本,你可以根据[这一配置文件][7]实现这一目的。但这并不能完全符合我的三个要求(你必须在随机random和稳定stable这两个选项之中选择一个,但没有“一天之内保持不变”这一选项)。

因为我使用的是 Ubuntu 16.04,网络管理器版本为 1.2,不能直接使用高版本的这一新功能。网络管理器可能支持某些随机化方法,但我没能成功使用。所以我编写了一个脚本来实现这一目标。

幸运的是,网络管理器 1.2 允许模拟 MAC 地址。你在已连接的网络中可以看见 ‘编辑连接’ 这一选项:

![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png)

网络管理器也支持钩子处理 —— 任何位于 `/etc/NetworkManager/dispatcher.d/pre-up.d/` 的脚本在建立网络连接之前都会被执行。


### 分配随机生成的伪 MAC 地址

我想根据网络 ID 和日期来生成新的随机 MAC 地址。我们可以使用网络管理器的命令行工具 nmcli 来显示所有可用网络:


```
> nmcli connection
NAME UUID TYPE DEVICE
Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0
DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless --
PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless --
virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless --
```

因为每个网络都有一个唯一标识符(UUID),为了实现我的计划,我将 UUID 和日期拼接在一起,然后使用 MD5 生成 hash 值:

```
# eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03

> echo -n "${UUID}-$(date +%F)" | md5sum

53594de990e92f9b914a723208f22b3f -
```

生成的结果可以用来填充 MAC 地址除首字节之外的其余五个字节。


值得注意的是,最开始的字节 `02` 代表这个地址是[自行指定][8]的。实际上,真实 MAC 地址的前三个字节是由制造商决定的,例如 `b4:b6:76` 就代表 Intel。

有可能某些路由器会拒绝自己指定的 MAC 地址,但是我还没有遇到过这种情况。

每次连接到一个网络,这一脚本都会用 `nmcli` 来指定一个随机生成的伪 MAC 地址:

![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png)

最后,我查看了 `ifconfig` 的输出结果,我发现 MAC 地址 `HWaddr` 已经变成了随机生成的地址(模拟 Intel 的),而不是我真实的 MAC 地址。


```
> ifconfig
wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d
    inet addr:192.168.0.86 Bcast:192.168.0.255 Mask:255.255.255.0
    inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0
    TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB)

```

### 脚本

完整的脚本也可以[在 Github 上查看][9]。

```
#!/bin/sh

# /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses

# Configure every saved WiFi connection in NetworkManager with a spoofed MAC
# address, seeded from the UUID of the connection and the date eg:
# 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20'

# This makes your MAC impossible(?) to track across WiFi providers, and
# for one provider to track across days.

# For craptive portals that authenticate based on MAC, you might want to
# automate logging in :)

# Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar
# automatically.
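
# What the rest of the script does: for every saved WiFi connection, hash
# "<connection UUID>-<today's date>" with MD5, then use five bytes of the
# digest as the low bytes of the MAC; the leading 02 marks the address as
# locally administered rather than vendor-assigned.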
+
+export PATH=$PATH:/usr/bin:/bin
+
+LOG_FILE=/var/log/randomize-mac-addresses
+
+echo "$(date): $*" > ${LOG_FILE}
+
+WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3)
+
+for UUID in ${WIFI_UUIDS}
+do
+    UUID_DAILY_HASH=$(echo "${UUID}-$(date +%F)" | md5sum)
+
+    RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')"
+
+    CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}"
+
+    echo "$CMD" >> ${LOG_FILE}
+    $CMD &
+done
+
+wait
+```
+
+_更新:[使用自己指定的 MAC 地址][5]可以避免和真正的 intel 地址冲突。感谢 [@_fink][6]_
+
+---------------------------------------------------------------------------------
+
+via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/
+
+作者:[Paul M Furley][a]
+译者:[wenwensnow](https://github.com/wenwensnow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.paulfurley.com/
+[1]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/raw/5f02fc8f6ff7fca5bca6ee4913c63bf6de15abca/randomize-mac-addresses
+[2]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f#file-randomize-mac-addresses
+[3]:https://github.com/
+[4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/
+[5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef
+[6]:https://twitter.com/fink_/status/937305600005943296
+[7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/
+[8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local
+[9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f
\ No newline at end of file
diff --git a/translated/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md b/translated/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md
deleted file mode 100644
index a5e50edc89..0000000000
--- a/translated/tech/20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md
+++ /dev/null
@@ -1,180 +0,0 @@
-
- 在Ubuntu 16.04下随机生成你的WiFi MAC地址
- ============================================================
-
- 你设备的MAC地址可以在不同的WiFi网络中记录你的活动。这些信息能被共享后出售,用于识别特定的个体。但可以用随机生成的伪MAC地址来阻止这一行为。
-
-
- ![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif)
-
- _Image courtesy of [Cloudessa][4]_
-
- 每一个诸如WiFi或者以太网卡这样的网络设备,都有一个叫做MAC地址的唯一标识符,如:`b4:b6:76:31:8c:ff`。这就是你能上网的原因:每当你连接上WiFi,路由器就会用这一地址来向你接受和发送数据,并且用它来区别你和这一网络的其他设备。
-
- 这一设计的缺陷在于唯一性,不变的MAC地址正好可以用来追踪你。连上了星巴克的WiFi? 好,注意到了。在伦敦的地铁上? 也记录下来。
-
- 如果你曾经在某一个WiFi验证页面上输入过你的真实姓名,你就已经把自己和这一MAC地址建立了联系。没有仔细阅读许可服务条款? 你可以认为,机场的免费WiFi正通过出售所谓的 ‘顾客分析数据’(你的个人信息)获利。出售的对象包括酒店,餐饮业,和任何想要了解你的人。
-
-
- 我不想信息被记录,再出售给多家公司,所以我花了几个小时想出了一个解决方案。
-
-
- ### MAC 地址不一定总是不变的
-
- 幸运的是,在不断开网络的情况下,是可以随机生成一个伪MAC地址的。
-
-
- 我想随机生成我的MAC地址,但是有三个要求:
-
-
- 1.MAC地址在不同网络中是不相同的。这意味着,我在星巴克和在伦敦地铁网络中的MAC地址是不相同的,这样在不同的服务提供商中就无法将我的活动联系起来
-
-
- 2.MAC地址需要经常更换,这样在网络上就没人知道我就是去年在这儿经过了75次的那个人
-
-
- 3. 
MAC地址一天之内应该保持不变。当MAC地址更改时,大多数网络都会与你断开连接,然后必须得进入验证页面再次登陆 - 这很烦人。 - - - ### 操作网络管理器 - - 我第一次尝试用一个叫做 `macchanger`的工具,但是失败了。网络管理器会根据它自己的设置恢复默认的MAC地址。 - - - 我了解到,网络管理器1.4.1以上版本可以自动生成随机的MAC地址。如果你在使用Ubuntu 17.04 版本,你可以根据这一配置文件实现这一目的。但这并不能完全符合我的三个要求 (你必须在随机和稳定这两个选项之中选择一个,但没有一天之内保持不变这一选项) - - - 因为我使用的是Ubuntu 16.04,网络管理器版本为1.2,不能直接使用高版本这一新功能。可能网络管理器有一些随机化方法支持,但我没能成功。所以我编了一个脚本来实现这一目标。 - - - 幸运的是,网络管理器1.2 允许生成随机MAC地址。你在已连接的网络中可以看见 ‘编辑连接’这一选项: - - - ![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png) - - 网络管理器也支持消息处理 - 任何位于 `/etc/NetworkManager/dispatcher.d/pre-up.d/` 的脚本在建立网络连接之前都会被执行。 - - - ### 分配随机生成的伪MAC地址 - - 我想根据网络ID和日期来生成新的随机MAC地址。 我们可以使用网络管理器的命令行工具,nmcli,来显示所有可用网络: - - - ``` - > nmcli connection - NAME UUID TYPE DEVICE - Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0 - DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless -- - PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless -- - virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless -- - - ``` - - 因为每个网络都有一个唯一标识符,为了实现我的计划,我将UUID和日期拼接在一起,然后使用MD5生成hash值: - - ``` - - # eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03 - - > echo -n "${UUID}-$(date +%F)" | md5sum - - 53594de990e92f9b914a723208f22b3f - - - ``` - 生成的结果可以代替MAC地址的最后八个字节。 - - - 值得注意的是,最开始的字节 `02` 代表这个地址是自行指定的。实际上,真实MAC地址的前三个字节是由制造商决定的,例如 `b4:b6:76` 就代表Intel。 - - - 有可能某些路由器会拒绝自己指定的MAC地址,但是我还没有遇到过这种情况。 - - - 每次连接到一个网络,这一脚本都会用`nmcli` 来指定一个随机生成的伪MAC地址: - - - ![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png) - - 最后,我查看了 `ifconfig`的输出结果,我发现端口MAC地址已经变成了随机生成的地址,而不是我真实的MAC地址。 - - - ``` - > ifconfig - wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d - inet addr:192.168.0.86 Bcast:192.168.0.255 Mask:255.255.255.0 - inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link - UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 - RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0 - TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0 - collisions:0 txqueuelen:1000 - RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB) - - ``` - 完整的脚本可以在Github上查看。 - - - ``` - #!/bin/sh - - # /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses - - # Configure every saved WiFi connection in NetworkManager with a spoofed MAC - # address, seeded from the UUID of the connection and the date eg: - # 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20' - - # This makes your MAC impossible(?) to track across WiFi providers, and - # for one provider to track across days. - - # For craptive portals that authenticate based on MAC, you might want to - # automate logging in :) - - # Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar - # automatically. 
- - export PATH=$PATH:/usr/bin:/bin - - LOG_FILE=/var/log/randomize-mac-addresses - - echo "$(date): $*" > ${LOG_FILE} - - WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3) - - for UUID in ${WIFI_UUIDS} - do - UUID_DAILY_HASH=$(echo "${UUID}-$(date +F)" | md5sum) - - RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')" - - CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}" - - echo "$CMD" >> ${LOG_FILE} - $CMD & - done - - wait - ``` - - - - _更新:使用自己指定的MAC地址可以避免和真正的intel地址冲突。感谢 [@_fink][6]_ - - --------------------------------------------------------------------------------- - - -via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/ - - 作者:[Paul M Furley ][a] - 译者:[译者ID](https://github.com/译者ID) - 校对:[校对者ID](https://github.com/校对者ID) - - 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - - [a]:https://www.paulfurley.com/ - [1]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/raw/5f02fc8f6ff7fca5bca6ee4913c63bf6de15abca/randomize-mac-addresses - [2]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f#file-randomize-mac-addresses - [3]:https://github.com/ - [4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/ - [5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef - [6]:https://twitter.com/fink_/status/937305600005943296 - [7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/ - [8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local - [9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f From ccb09b55ca82ba577de6e8f3f28f952fcc0af268 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E4=BB=98=E5=B3=A5?= <24203166+fuzheng1998@users.noreply.github.com> Date: Mon, 11 Dec 2017 20:39:29 +0800 Subject: [PATCH 200/236] =?UTF-8?q?=E8=BD=AC=E8=AE=A9?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 因期末考试临近,只好放弃该文章,明年再见 --- ...Study of Programming Languages and Code Quality in GitHub.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md b/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md index 8934872a14..bb6b5bf693 100644 --- a/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md +++ b/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md @@ -1,4 +1,4 @@ -fuzheng1998 translating + A Large-Scale Study of Programming Languages and Code Quality in GitHub ============================================================ From d31198edeb646e2ff5117c4ec1107091a29e5bf7 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 11 Dec 2017 21:38:00 +0800 Subject: [PATCH 201/236] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... And Cookies – How Does User-Login Work.md | 73 ------------------- ... 
And Cookies – How Does User-Login Work.md | 72 ++++++++++++++++++ 2 files changed, 72 insertions(+), 73 deletions(-) delete mode 100644 sources/tech/20171208 Sessions And Cookies – How Does User-Login Work.md create mode 100644 translated/tech/20171208 Sessions And Cookies – How Does User-Login Work.md diff --git a/sources/tech/20171208 Sessions And Cookies – How Does User-Login Work.md b/sources/tech/20171208 Sessions And Cookies – How Does User-Login Work.md deleted file mode 100644 index a53b0f8d61..0000000000 --- a/sources/tech/20171208 Sessions And Cookies – How Does User-Login Work.md +++ /dev/null @@ -1,73 +0,0 @@ -translating by lujun9972 -Sessions And Cookies – How Does User-Login Work? -====== -Facebook, Gmail, Twitter we all use these websites every day. One common thing among them is that they all require you to log in to do stuff. You cannot tweet on twitter, comment on Facebook or email on Gmail unless you are authenticated and logged in to the service. - - [![gmail, facebook login page](http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg)][1] - -So how does it work? How does the website authenticate us? How does it know which user is logged in and from where? Let us answer each of these questions below. - -### How User-Login works? - -Whenever you enter your username and password in the login page of a site, the information you enter is sent to the server. The server then validates your password against the password on the server. If it doesn’t match, you get an error of incorrect password. But if it matches, you get logged in. - -### What happens when I get logged in? - -When you get logged in, the web server initiates a session and sets a cookie variable in your browser. The cookie variable then acts as a reference to the session created. Confused? Let us simplify this. - -### How does Session work? - -When the username and password are right, the server initiates a session. Sessions have a really complicated definition so I like to call them ‘beginning of a relationship’. - - [![session beginning of a relationship or partnership](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png)][2] - -When the credentials are right, the server begins a relationship with you. Since the server cannot see like us humans, it sets a cookie in our browsers to identify our unique relationship from all the other relationships that other people have with the server. - -### What is a Cookie? - -A cookie is a small amount of data that the websites can store in your browser. You must have seen them here. - - [![theitstuff official facebook page cookies](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png)][3] - -So when you log in and the server has created a relationship or session with you, it takes the session id which is the unique identifier of that session and stores it in your browser in form of cookies. - -### What’s the Point? - -The reason all of this is needed is to verify that it’s you so that when you comment or tweet, the server knows who did that tweet or who did that comment. - -As soon as you’re logged in, a cookie is set which contains the session id. Now, this session id is granted to the person who enters the correct username and password combination. - - [![facebook cookies in web browser](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png)][4] - -So the session id is granted to the person who owns that account. 
Now whenever an activity is performed on that website, the server knows who it was by their session id.

-### Keep me logged in?

-The sessions have a time limit. Unlike the real world where relationships can last even without seeing the person for longer periods of time, sessions have a time limit. You have to keep telling the server that you are online by performing some or the other actions. If that doesn’t happen the server will close the session and you will be logged out.

- [![websites keep me logged in option](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png)][5]

-But when we use the Keep me logged in feature on some websites, we allow them to store another unique variable in the form of cookies in our browsers. This unique variable is used to automatically log us in by checking it against the one on the server. When someone steals this unique identifier it is called as cookie stealing. They then get access to your account.

-### Conclusion

-We discussed how Login Systems work and how we are authenticated on a website. We also learned about what sessions and cookies are and how they are implemented in login mechanism.

-I hope you guys have grasped that how User-Login works, and if you still have a doubt regarding anything, just drop in a comment and I’ll be there for you.

--------------------------------------------------------------------------------

-via: http://www.theitstuff.com/sessions-cookies-user-login-work

-作者:[Rishabh Kandari][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)

-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

-[a]:http://www.theitstuff.com/author/reevkandari
-[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg
-[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png
-[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png
-[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png
-[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png
diff --git a/translated/tech/20171208 Sessions And Cookies – How Does User-Login Work.md b/translated/tech/20171208 Sessions And Cookies – How Does User-Login Work.md
new file mode 100644
index 0000000000..16b04b3e6c
--- /dev/null
+++ b/translated/tech/20171208 Sessions And Cookies – How Does User-Login Work.md
@@ -0,0 +1,72 @@
+Sessions 与 Cookies – 用户登录的原理是什么?
+======
+Facebook、Gmail、Twitter 是我们每天都会用的网站。它们的共同点在于都需要你登录进去后才能做进一步的操作。只有你通过认证并登录后,才能在 Twitter 发推、在 Facebook 上评论,以及在 Gmail 上处理电子邮件。
+
+ [![gmail, facebook login page](http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg)][1]
+
+那么登录的原理是什么?网站是如何认证的?它怎么知道是哪个用户从哪儿登录进来的?下面我们来对这些问题进行一一解答。
+
+### 用户登录的原理是什么?
+
+每次你在网站的登录页面中输入用户名和密码时,这些信息都会发送到服务器。服务器随后会将你的密码与服务器中保存的密码进行比对。如果两者不匹配,你会得到一个密码错误的提示;如果两者匹配,则成功登录。
+
+### 登录时发生了什么?
+
+登录后,web 服务器会初始化一个 session,并在你的浏览器中设置一个 cookie 变量。该 cookie 变量用于作为新建 session 的一个引用。搞晕了?让我们说得再简单一点。
+
+### 会话的原理是什么?
+
+服务器在用户名和密码都正确的情况下会初始化一个 session。session 的定义很复杂,你可以把它理解为`关系的开始`。
+
+ [![session beginning of a relationship or partnership](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png)][2]
+
+认证通过后,服务器就开始跟你展开一段关系了。由于服务器不能像我们人类一样看东西,它会在我们的浏览器中设置一个 cookie,把我们与它的关系从其他人与它的关系中标识出来。
+
+### 什么是 Cookie?
+
+cookie 是网站在你的浏览器中存储的一小段数据。你应该已经见过它们了。
+
+ [![theitstuff official facebook page cookies](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png)][3]
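
下面用一个极简的 Python 示意来演示这一机制(其中的用户名、密码等都是虚构的演示值,并不是某个真实网站的实现):服务器在登录成功时生成一个随机的 session id,通过 Set-Cookie 存进浏览器,之后就凭这个 id 认出同一个用户。

```
import http.cookies
import secrets

# 内存中的 session 表:session id -> 用户名。真实的服务器会放进数据库或缓存。
sessions = {}

# 仅作演示的用户库,用户名和密码都是虚构的
users = {"alice": "correct-horse"}

def log_in(username, password):
    """密码正确时初始化一个 session,返回要发给浏览器的 Set-Cookie 头。"""
    if users.get(username) != password:
        return None                         # 密码错误,登录失败
    session_id = secrets.token_hex(16)      # 随机且不可猜测的 session id
    sessions[session_id] = username
    cookie = http.cookies.SimpleCookie()
    cookie["session_id"] = session_id
    return cookie.output(header="Set-Cookie:")

def identify(cookie_header):
    """之后的每个请求:从 Cookie 里取出 session id,查表认出用户。"""
    cookie = http.cookies.SimpleCookie(cookie_header)
    if "session_id" in cookie:
        return sessions.get(cookie["session_id"].value)
    return None                             # 没有登录,或 session 已被清除

# 用法示意
header = log_in("alice", "correct-horse")   # 'Set-Cookie: session_id=...'
print(identify(header.split(": ", 1)[1]))   # -> alice
```
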
+### 什么意思?
+
+所有这些东西存在的原因在于识别出你来,这样当你写评论或者发推时,服务器能知道是谁在发评论、是谁在发推。
+
+当你登录后,会产生一个包含 session id 的 cookie。这样,这个 session id 就被赋予了那个输入正确用户名和密码的人了。
+
+ [![facebook cookies in web browser](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png)][4]
+
+也就是说,session id 被赋予给了拥有这个账户的人。之后,所有在网站上产生的行为,服务器都能通过他们的 session id 来判断是由谁发起的。
+
+### 如何让我保持登录状态?
+
+session 有一定的时间限制。这一点与现实生活中不一样:现实生活中的关系可以在不见面的情况下持续很长一段时间,而 session 具有时间限制。你必须要不断地通过一些动作来告诉服务器你还在线,否则的话,服务器会关掉这个 session,而你会被登出。
+
+ [![websites keep me logged in option](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png)][5]
+
+不过在某些网站上可以启用“保持登录(Keep me logged in)”功能,这样服务器会将另一个唯一变量以 cookie 的形式保存到我们的浏览器中。这个唯一变量会通过与服务器上的变量进行对比来实现自动登录。若有人盗取了这个唯一标识(我们称之为 cookie stealing),他们就能访问你的账户了。
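+为了说明 cookie 被盗的危害,下面是一个假设性的演示(域名、路径与 cookie 名称均为虚构):攻击者只要把偷到的标识放进请求里,就能冒充受害者访问其账户页面:
+
+```
+# 假设偷到的“保持登录”cookie 的值为 abc123
+$ curl -s -b 'remember_token=abc123' https://example.com/account
+```
+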
+### 结论
+
+我们讨论了登录系统的工作原理,以及网站是如何进行认证的。我们还学到了什么是 session 和 cookie,以及它们在登录机制中的作用。
+
+我们希望你们已经理解了用户登录的工作原理,如有疑问,欢迎提问。
+
+--------------------------------------------------------------------------------
+
+via: http://www.theitstuff.com/sessions-cookies-user-login-work
+
+作者:[Rishabh Kandari][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.theitstuff.com/author/reevkandari
+[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg
+[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png
+[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png
+[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png
+[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png
From 5b9df7d564cc67f555e797a2876289584199000d Mon Sep 17 00:00:00 2001
From: darsh8 <752458434@qq.com>
Date: Mon, 11 Dec 2017 21:45:58 +0800
Subject: [PATCH 202/236] darsh8 translating
---
 sources/talk/20170131 Book review Ours to Hack and to Own.md | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/sources/talk/20170131 Book review Ours to Hack and to Own.md b/sources/talk/20170131 Book review Ours to Hack and to Own.md
index 1405bfe34a..a75a39a718 100644
--- a/sources/talk/20170131 Book review Ours to Hack and to Own.md
+++ b/sources/talk/20170131 Book review Ours to Hack and to Own.md
@@ -1,3 +1,5 @@
+darsh8 Translating
+
 Book review: Ours to Hack and to Own
 ============================================================
From 78830dcf32d6703c137791d12b26673cc8e20ef4 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 11 Dec 2017 21:58:48 +0800
Subject: [PATCH 203/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=20Are=20Zomb?=
 =?UTF-8?q?ie=20Processes=20And=20How=20To=20Find=20&=20Kill=20Zombie=20Pr?=
 =?UTF-8?q?ocesses=3F?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...nd How To Find & Kill Zombie Processes-.md | 90 +++++++++++++++++++
 1 file changed, 90 insertions(+)
 create mode 100644 sources/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md
diff --git a/sources/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md b/sources/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md
new file mode 100644
index 0000000000..a0d01b195a
--- /dev/null
+++ b/sources/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md
@@ -0,0 +1,90 @@
+translating by lujun9972
+What Are Zombie Processes And How To Find & Kill Zombie Processes?
+======
+ [![What Are Zombie Processes And How To Find & Kill Zombie Processes?](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-are-the-zombie-processes_orig.jpg)][1]
+
+If you are a regular Linux user, you must have encountered the term `Zombie Processes`. So what are the Zombie Processes? How do they get created? Are they harmful to the system? How do I kill these processes? Keep reading for the answers to all these questions.
+
+### What are Zombie Processes?
+
+So we all know how processes work. We launch a program, start our task & once our task is over, we end that process. Once the process has ended, it has to be removed from the process table.
+
+You can see the current processes in the ‘System-Monitor’.
+
+ [![Linux check zombie processes](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-check-zombie-processes_orig.jpg)][2]
+
+But, sometimes some of these processes stay in the process table even after they have completed execution.
+
+So these processes that have completed their life of execution but still exist in the process table are called ‘Zombie Processes’.
+
+### And How Exactly do they get Created?
+
+Whenever we run a program it creates a parent process and a lot of child processes. All of these child processes use resources such as memory and CPU allocated to them by the kernel.
+
+Once these child processes have finished executing they send an Exit call and die. This Exit call has to be read by the parent process, which later calls the wait command to read the exit_status of the child process so that the child process can be removed from the process table.
+
+If the parent reads the Exit call correctly sent by the child process, the process is removed from the process table.
+
+But, if the parent fails to read the exit call from the child process, the child process which has already finished its execution and is now dead will not be removed from the process table.
+
+### Are Zombie processes harmful to the System?
+
+**No.**
+
+Since a zombie process is not doing any work, not using any resources or affecting any other process, there is no harm in having a zombie process. But since the exit_status and other process information from the process table are stored in the RAM, having too many zombie processes can sometimes be an issue.
+
+ **_Imagine it like this:_**
+
+ _“You are the owner of a construction company. You pay daily wages to all your workers depending upon how they work. A worker comes to the construction site every day, just sits there; you don’t have to pay him, he doesn’t do any work. He just comes every day and sits, that’s it!”_
+
+Such a worker is the living example of a zombie process.
+
+**But,**
+
+if you have a lot of zombie workers, your construction site will get crowded and it might get difficult for the people that are actually working.
+
+### So how to find Zombie Processes?
+
+Fire up a terminal and type the following command:
+
+```
+ps aux | grep Z
+```
+
+You will now get details of all zombie processes in the process table.
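+If you don’t have any zombies to look at, you can create a harmless, short-lived one for practice. The following sketch assumes a bash shell: the subshell forks a quick child and then replaces itself with a long-running `sleep`, which never calls wait() on the child (the PID and the output line are illustrative):
+
+```
+# the inner 'sleep 1' exits after a second, but its parent
+# (now 'sleep 60') never reaps it, so it becomes a zombie
+$ (sleep 1 & exec sleep 60) &
+$ sleep 2; ps aux | grep Z
+user   7201  0.0  0.0      0     0 pts/0    Z    10:00   0:00 [sleep] <defunct>
+```
+
+The zombie cleans itself up once the outer `sleep 60` exits and the orphan is reaped by init.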
+### How to kill Zombie processes?
+
+Normally we kill processes with the SIGKILL signal, but zombie processes are already dead. You cannot kill something that is already dead. So what you do is type this command:
+
+```
+kill -s SIGCHLD pid
+```
+
+Replace the pid with the id of the parent process so that the parent process will remove all the child processes that are dead and completed.
+
+ **_Imagine it like this:_**
+
+ _“You find a dead body in the middle of the road, you call the dead body’s family and they take that body away from the road.”_
+
+But a lot of programs are not programmed well enough to remove these child zombies, because if they were, you wouldn’t have those zombies in the first place. So the only thing guaranteed to remove child zombies is killing the parent.
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxandubuntu.com/home/what-are-zombie-processes-and-how-to-find-kill-zombie-processes
+
+作者:[linuxandubuntu][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxandubuntu.com
+[1]:http://www.linuxandubuntu.com/home/what-are-zombie-processes-and-how-to-find-kill-zombie-processes
+[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-check-zombie-processes_orig.jpg
From 319214bfcd528ec9b4a487f36dccaf15a57bb3ea Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 11 Dec 2017 22:08:47 +0800
Subject: [PATCH 204/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=2010=20useful=20nca?=
 =?UTF-8?q?t=20(nc)=20Command=20Examples=20for=20Linux=20Systems?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...(nc) Command Examples for Linux Systems.md | 200 ++++++++++++++++++
 1 file changed, 200 insertions(+)
 create mode 100644 sources/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md
diff --git a/sources/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md b/sources/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md
new file mode 100644
index 0000000000..8264014664
--- /dev/null
+++ b/sources/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md
@@ -0,0 +1,200 @@
+translating by lujun9972
+10 useful ncat (nc) Command Examples for Linux Systems
+======
+ [![nc-ncat-command-examples-Linux-Systems](https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg)][1]
+
+ncat or nc is a networking utility with functionality similar to the cat command, but for the network. It is a general-purpose CLI tool for reading, writing, and redirecting data across a network. It is designed to be a reliable back-end tool that can be used with scripts or other programs. It’s also a great tool for network debugging, as it can create any kind of connection one may need.
+
+ncat/nc can be a port scanning tool, or a security tool, or a monitoring tool, and is also a simple TCP proxy. Since it has so many features, it is known as a network Swiss Army knife. It’s one of those tools that every system admin should know & master.
+
+In most Debian distributions ‘nc’ is available and its package is automatically installed during installation. But in a minimal CentOS 7 / RHEL 7 installation you will not find nc as a default package. You need to install it using the following command.
+
+```
+[root@linuxtechi ~]# yum install nmap-ncat -y
+```
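+Once it is installed, a quick sanity check confirms which netcat implementation you have (the version string below is only illustrative; yours may differ):
+
+```
+$ ncat --version
+Ncat: Version 6.40 ( http://nmap.org/ncat )
+```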
+
+System admins can use it to audit their system security; they can use it to find the ports that are opened & then secure them. Admins can also use it as a client for auditing web servers, telnet servers, mail servers etc. With ‘nc’ we can control every character sent & can also view the responses to sent queries.
+
+We can also use it to capture data being sent by a client to understand what they are up to.
+
+In this tutorial, we are going to learn how to use the ‘nc’ command with 10 examples,
+
+#### Example: 1) Listen to inbound connections
+
+Ncat can work in listen mode & we can listen for inbound connections on a port number with the option ‘l’. The complete command is:
+
+```
+$ ncat -l port_number
+```
+
+For example,
+
+```
+$ ncat -l 8080
+```
+
+Server will now start listening to port 8080 for inbound connections.
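+Before moving on, you can confirm from another terminal that the listener is really bound to the port (the output format follows the netstat example further below; the PID and other details are illustrative):
+
+```
+$ netstat -tnlp | grep 8080
+tcp    0    0 0.0.0.0:8080    0.0.0.0:*    LISTEN    17341/ncat
+```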
+
+#### Example: 2) Connect to a remote system
+
+To connect to a remote system with nc, we can use the following command,
+
+```
+$ ncat IP_address port_number
+```
+
+Let’s take an example,
+
+```
+$ ncat 192.168.1.100 80
+```
+
+Now a connection to the server with IP address 192.168.1.100 will be made at port 80 & we can now send instructions to the server. Like we can get the complete page content with
+
+```
+GET / HTTP/1.1
+```
+
+or get the page name,
+
+```
+GET / HTTP/1.1
+```
+
+or we can get the banner for OS fingerprinting with the following,
+
+```
+HEAD / HTTP/1.1
+```
+
+This will tell you what software is being used to run the web server.
+
+#### Example: 3) Connecting to UDP ports
+
+By default, the nc utility makes connections only to TCP ports. But we can also make connections to UDP ports; for that we can use the option ‘u’,
+
+```
+$ ncat -l -u 1234
+```
+
+Now our system will start listening on UDP port ‘1234’; we can verify this using the netstat command below,
+
+```
+$ netstat -tunlp | grep 1234
+udp 0 0 0.0.0.0:1234 0.0.0.0:* 17341/nc
+udp6 0 0 :::1234 :::* 17341/nc
+```
+
+Let’s assume we want to send or test UDP port connectivity to a specific remote host, then use the following command,
+
+```
+$ ncat -v -u {host-ip} {udp-port}
+```
+
+example:
+
+```
+[root@localhost ~]# ncat -v -u 192.168.105.150 53
+Ncat: Version 6.40 ( http://nmap.org/ncat )
+Ncat: Connected to 192.168.105.150:53.
+```
+
+#### Example: 4) NC as chat tool
+
+NC can also be used as a chat tool; we can configure the server to listen to a port & then make a connection to the server from a remote machine on the same port & start sending messages. On the server side, run
+
+```
+$ ncat -l 8080
+```
+
+On the remote client machine, run
+
+```
+$ ncat 192.168.1.100 8080
+```
+
+Then start sending messages & they will be displayed on the server terminal.
+
+#### Example: 5) NC as a proxy
+
+NC can also be used as a proxy with a simple command. Let’s take an example,
+
+```
+$ ncat -l 8080 | ncat 192.168.1.200 80
+```
+
+Now all the connections coming to our server on port 8080 will be automatically redirected to the 192.168.1.200 server on port 80. But since we are using a pipe, data can only be transferred in one direction; to be able to receive the data back, we need to create a two-way pipe. Use the following commands to do so,
+
+```
+$ mkfifo 2way
+$ ncat -l 8080 0<2way | ncat 192.168.1.200 80 1>2way
+```
+
+Now you will be able to send & receive data over the nc proxy.
+
+#### Example: 6) Copying Files using nc/ncat
+
+NC can also be used to copy files from one system to another, though it is not recommended, as almost all systems have ssh/scp installed by default. But nonetheless, if you have come across a system with no ssh/scp, you can use nc as a last-ditch effort.
+
+Start on the machine on which the data is to be received & start nc in listener mode,
+
+```
+$ ncat -l 8080 > file.txt
+```
+
+Now on the machine from where the data is to be copied, run the following command,
+
+```
+$ ncat 192.168.1.100 8080 --send-only < data.txt
+```
+
+Here, data.txt is the file that has to be sent. The --send-only option will close the connection once the file has been copied. If not using this option, we would have to press Ctrl+C to close the connection manually.
+
+We can also copy entire disk partitions using this method, but it should be done with caution.
+
+#### Example: 7) Create a backdoor via nc/ncat
+
+The NC command can also be used to create a backdoor to your systems & this technique is actually used by hackers a lot. We should know how it works in order to secure our system. To create a backdoor, the command is,
+
+```
+$ ncat -l 10000 -e /bin/bash
+```
+
+The ‘e‘ flag attaches a bash shell to port 10000. Now a client can connect to port 10000 on the server & will have complete access to our system via bash,
+
+```
+$ ncat 192.168.1.100 10000
+```
+
+#### Example: 8) Port forwarding via nc/ncat
+
+We can also use NC for port forwarding with the help of the option ‘c’. The syntax for accomplishing port forwarding is,
+
+```
+$ ncat -u -l 80 -c 'ncat -u -l 8080'
+```
+
+Now all the connections for port 80 will be forwarded to port 8080.
+
+#### Example: 9) Set Connection timeouts
+
+Listener mode in ncat will continue to run & would have to be terminated manually. But we can configure timeouts with the option ‘w’,
+
+```
+$ ncat -w 10 192.168.1.100 8080
+```
+
+This will cause the connection to be terminated in 10 seconds, but it can only be used on the client side & not on the server side.
+
+#### Example: 10) Force server to stay up using -k option in ncat
+
+When a client disconnects from the server, after some time the server also stops listening. But we can force the server to stay up & continue listening on the port with the option ‘k’. Run the following command,
+
+```
+$ ncat -l -k 8080
+```
+
+Now the server will stay up, even if a connection from a client is broken.
+
+With this we end our tutorial; please feel free to ask any question regarding this article using the comment box below.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/nc-ncat-command-examples-linux-systems/
+
+作者:[Pradeep Kumar][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxtechi.com/author/pradeep/
+[1]:https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg
From 8af68db28d3a95ebd6dec23b54918dda6ccc3b35 Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 11 Dec 2017 22:33:43 +0800
Subject: [PATCH 205/236] PRF:20171202 docker - Use multi-stage builds.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@iron0x 恭喜你,完成了第一篇翻译!
--- ...0171202 docker - Use multi-stage builds.md | 52 ++++++++----------- 1 file changed, 23 insertions(+), 29 deletions(-) diff --git a/translated/tech/20171202 docker - Use multi-stage builds.md b/translated/tech/20171202 docker - Use multi-stage builds.md index b8bb2acf0a..706d98e7f4 100644 --- a/translated/tech/20171202 docker - Use multi-stage builds.md +++ b/translated/tech/20171202 docker - Use multi-stage builds.md @@ -1,18 +1,19 @@ -使用多阶段构建 +Docker:使用多阶段构建镜像 ============================================================ -多阶段构建是 `Docker 17.05` 或更高版本提供的新功能。这对致力于优化 `Dockerfile` 的人来说,使得 `Dockerfile` 易于阅读和维护。 +多阶段构建是 Docker 17.05 及更高版本提供的新功能。这对致力于优化 Dockerfile 的人来说,使得 Dockerfile 易于阅读和维护。 -> 致谢: 特别感谢 [Alex Ellis][1] 授权使用他的关于 `Docker` 多阶段构建的博客文章 [Builder pattern vs. Multi-stage builds in Docker][2] 作为以下示例的基础. +> 致谢: 特别感谢 [Alex Ellis][1] 授权使用他的关于 Docker 多阶段构建的博客文章 [Builder pattern vs. Multi-stage builds in Docker][2] 作为以下示例的基础。 ### 在多阶段构建之前 -关于构建镜像最具挑战性的事情之一是保持镜像体积小巧. `Dockerfile` 中的每条指令都会在镜像中增加一层, 并且在移动到下一层之前, 需要记住清除不需要的构件. 要编写一个非常高效的 `Dockerfile`, 传统上您需要使用 `shell` 技巧和其他逻辑来尽可能地减少层数, 并确保每一层都具有上一层所需的构件, 而不是其他任何东西. -实际上, 有一个 `Dockerfile` 用于开发(其中包含构建应用程序所需的所有内容), 以及另一个用于生产的瘦客户端, 它只包含您的应用程序以及运行它所需的内容. 这被称为"建造者模式". 维护两个 `Dockerfile` 并不理想. +关于构建镜像最具挑战性的事情之一是保持镜像体积小巧。 Dockerfile 中的每条指令都会在镜像中增加一层,并且在移动到下一层之前,需要记住清除不需要的构件。要编写一个非常高效的 Dockerfile,你通常需要使用 shell 技巧和其它方式来尽可能地减少层数,并确保每一层都具有上一层所需的构件,而其它任何东西都不需要。 -下面分别是一个 `Dockerfile.build` 和 遵循上面的构建器模式的 `Dockerfile` 的例子: +实际上最常见的是,有一个 Dockerfile 用于开发(其中包含构建应用程序所需的所有内容),而另一个裁剪过的用于生产环境,它只包含您的应用程序以及运行它所需的内容。这被称为“构建器模式”。但是维护两个 Dockerfile 并不理想。 -`Dockerfile.build`: +下面分别是一个 `Dockerfile.build` 和遵循上面的构建器模式的 `Dockerfile` 的例子: + +`Dockerfile.build`: ``` FROM golang:1.7.3 @@ -21,12 +22,11 @@ RUN go get -d -v golang.org/x/net/html COPY app.go . RUN go get -d -v golang.org/x/net/html \ && CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app . - ``` -注意这个例子还使用 Bash && 运算符人为地将两个 `RUN` 命令压缩在一起, 以避免在镜像中创建额外的层. 这很容易失败, 很难维护. 例如, 插入另一个命令时, 很容易忘记继续使用 `\` 字符. +注意这个例子还使用 Bash 的 `&&` 运算符人为地将两个 `RUN` 命令压缩在一起,以避免在镜像中创建额外的层。这很容易失败,难以维护。例如,插入另一个命令时,很容易忘记继续使用 `\` 字符。 -`Dockerfile`: +`Dockerfile`: ``` FROM alpine:latest @@ -34,10 +34,9 @@ RUN apk --no-cache add ca-certificates WORKDIR /root/ COPY app . CMD ["./app"] - ``` -`build.sh`: +`build.sh`: ``` #!/bin/sh @@ -54,18 +53,17 @@ echo Building alexellis2/href-counter:latest docker build --no-cache -t alexellis2/href-counter:latest . rm ./app - ``` -当您运行 `build.sh` 脚本时, 它会构建第一个镜像, 从中创建一个容器, 以便将该构件复制出来, 然后构建第二个镜像. 这两个镜像和应用构件会占用您的系统的空间. +当您运行 `build.sh` 脚本时,它会构建第一个镜像,从中创建一个容器,以便将该构件复制出来,然后构建第二个镜像。 这两个镜像会占用您的系统的空间,而你仍然会一个 `app` 构件存放在你的本地磁盘上。 -多阶段构建大大简化了这种情况! +多阶段构建大大简化了这种情况! ### 使用多阶段构建 -在多阶段构建中, 您需要在 `Dockerfile` 中多次使用 `FROM` 声明. 每次 `FROM` 指令可以使用不同的基础镜像, 并且每次 `FROM` 指令都会开始新阶段的构建. 您可以选择将构件从一个阶段复制到另一个阶段, 在最终镜像中, 不会留下您不需要的所有内容. 为了演示这是如何工作的, 让我们调整前一节中的 `Dockerfile` 以使用多阶段构建。 +在多阶段构建中,您需要在 Dockerfile 中多次使用 `FROM` 声明。每次 `FROM` 指令可以使用不同的基础镜像,并且每次 `FROM` 指令都会开始新阶段的构建。您可以选择将构件从一个阶段复制到另一个阶段,在最终镜像中,不会留下您不需要的所有内容。为了演示这是如何工作的,让我们调整前一节中的 Dockerfile 以使用多阶段构建。 -`Dockerfile`: +`Dockerfile`: ``` FROM golang:1.7.3 @@ -79,23 +77,21 @@ RUN apk --no-cache add ca-certificates WORKDIR /root/ COPY --from=0 /go/src/github.com/alexellis/href-counter/app . CMD ["./app"] - ``` -您只需要单一个 `Dockerfile`. 不需要分隔构建脚本. 只需运行 `docker build` . +您只需要单一个 Dockerfile。 不需要另外的构建脚本。只需运行 `docker build` 即可。 ``` $ docker build -t alexellis2/href-counter:latest . - ``` -最终的结果是和以前体积一样小的生产镜像, 复杂性显着降低. 您不需要创建任何中间镜像, 也不需要将任何构件提取到本地系统. 
+最终的结果是和以前体积一样小的生产镜像,复杂性显著降低。您不需要创建任何中间镜像,也不需要将任何构件提取到本地系统。 + +它是如何工作的呢?第二条 `FROM` 指令以 `alpine:latest` 镜像作为基础开始新的建造阶段。`COPY --from=0` 这一行将刚才前一个阶段产生的构件复制到这个新阶段。Go SDK 和任何中间构件都被留在那里,而不会保存到最终的镜像中。 -它是如何工作的呢? 第二条 `FROM` 指令以 `alpine:latest` 镜像作为基础开始新的建造阶段. `COPY --from=0` 这一行将刚才前一个阶段产生的构件复制到这个新阶段. Go SDK和任何中间构件都被保留下来, 而不是只保存在最终的镜像中. ### 命名您的构建阶段 -默认情况下, 这些阶段没有命名, 您可以通过它们的整数来引用它们, 从第一个 `FROM` 指令的 0 开始. 但是, 您可以通过在 `FROM` 指令中使用 `as ` -来为阶段命名. 以下示例通过命名阶段并在 `COPY` 指令中使用名称来改进前一个示例. 这意味着, 即使您的 `Dockerfile` 中的指令稍后重新排序, `COPY` 也不会中断。 +默认情况下,这些阶段没有命名,您可以通过它们的整数来引用它们,从第一个 `FROM` 指令的 0 开始。但是,你可以通过在 `FROM` 指令中使用 `as ` 来为阶段命名。以下示例通过命名阶段并在 `COPY` 指令中使用名称来改进前一个示例。这意味着,即使您的 `Dockerfile` 中的指令稍后重新排序,`COPY` 也不会出问题。 ``` FROM golang:1.7.3 as builder @@ -111,15 +107,13 @@ COPY --from=builder /go/src/github.com/alexellis/href-counter/app . CMD ["./app"] ``` -> 译者话: 1.此文章系译者第一次翻译英文文档,有描述不清楚或错误的地方,请读者给予反馈(2727586680@qq.com),不胜感激。 -> 译者话: 2.本文只是简单介绍多阶段构建,不够深入,如果读者需要深入了解,请自行查阅相关资料。 -------------------------------------------------------------------------------- -via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/#name-your-build-stages +via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/ -作者:[docker docs ][a] +作者:[docker][a] 译者:[iron0x](https://github.com/iron0x) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From dee1e0950897092a38ca7df349401984e321c5fe Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 11 Dec 2017 22:34:30 +0800 Subject: [PATCH 206/236] PUB:20171202 docker - Use multi-stage builds.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @iron0x 文章发布地址: https://linux.cn/article-9133-1.html 你的 LCTT 专页地址: https://linux.cn/lctt/iron0x --- ...stage builds.md => 20171202 docker - Use multi-stage builds.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename translated/tech/20171202 docker - Use multi-stage builds.md => 20171202 docker - Use multi-stage builds.md (100%) diff --git a/translated/tech/20171202 docker - Use multi-stage builds.md b/20171202 docker - Use multi-stage builds.md similarity index 100% rename from translated/tech/20171202 docker - Use multi-stage builds.md rename to 20171202 docker - Use multi-stage builds.md From d56c5a4f6c2bb32ef3d16b833838b1e7d2946857 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 11 Dec 2017 23:30:57 +0800 Subject: [PATCH 207/236] PRF:20171020 How Eclipse is advancing IoT development.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @smartgrids 恭喜你,完成了第一篇翻译。我校对发布晚了,抱歉。 --- ...ow Eclipse is advancing IoT development.md | 49 +++++++++++-------- 1 file changed, 28 insertions(+), 21 deletions(-) diff --git a/translated/tech/20171020 How Eclipse is advancing IoT development.md b/translated/tech/20171020 How Eclipse is advancing IoT development.md index 0de4f38ea1..1bc039848d 100644 --- a/translated/tech/20171020 How Eclipse is advancing IoT development.md +++ b/translated/tech/20171020 How Eclipse is advancing IoT development.md @@ -1,43 +1,50 @@ -translated by smartgrids Eclipse 如何助力 IoT 发展 ============================================================ -### 开源组织的模块发开发方式非常适合物联网。 + +> 开源组织的模块化开发方式非常适合物联网。 ![How Eclipse is advancing IoT development](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_BUS_ArchitectureOfParticipation_520x292.png?itok=FA0Uuwzv "How Eclipse is advancing IoT development") + 
图片来源: opensource.com -[Eclipse][3] 可能不是第一个去研究物联网的开源组织。但是,远在 IoT 家喻户晓之前,该基金会在 2001 年左右就开始支持开源软件发展商业化。九月 Eclipse 物联网日和 RedMonk 的 [ThingMonk 2017][4] 一块举行,着重强调了 Eclipse 在 [物联网发展][5] 中的重要作用。它现在已经包含了 28 个项目,覆盖了大部分物联网项目需求。会议过程中,我和负责 Eclipse 市场化运作的 [Ian Skerritt][6] 讨论了 Eclipse 的物联网项目以及如何拓展它。 +[Eclipse][3] 可能不是第一个去研究物联网的开源组织。但是,远在 IoT 家喻户晓之前,该基金会在 2001 年左右就开始支持开源软件发展商业化。 + +九月份的 Eclipse 物联网日和 RedMonk 的 [ThingMonk 2017][4] 一块举行,着重强调了 Eclipse 在 [物联网发展][5] 中的重要作用。它现在已经包含了 28 个项目,覆盖了大部分物联网项目需求。会议过程中,我和负责 Eclipse 市场化运作的 [Ian Skerritt][6] 讨论了 Eclipse 的物联网项目以及如何拓展它。 + +### 物联网的最新进展? -###物联网的最新进展? 我问 Ian 物联网同传统工业自动化,也就是前几十年通过传感器和相应工具来实现工厂互联的方式有什么不同。 Ian 指出很多工厂是还没有互联的。 -另外,他说“ SCADA[监控和数据分析] 系统以及工厂底层技术都是私有、独立性的。我们很难去改变它,也很难去适配它们…… 现在,如果你想运行一套生产系统,你需要设计成百上千的单元。生产线想要的是满足用户需求,使制造过程更灵活,从而可以不断产出。” 这也就是物联网会带给制造业的一个很大的帮助。 +另外,他说 “SCADA [监控和数据分析supervisory control and data analysis] 系统以及工厂底层技术都是非常私有的、独立性的。我们很难去改变它,也很难去适配它们 …… 现在,如果你想运行一套生产系统,你需要设计成百上千的单元。生产线想要的是满足用户需求,使制造过程更灵活,从而可以不断产出。” 这也就是物联网会带给制造业的一个很大的帮助。 -###Eclipse 物联网方面的研究 -Ian 对于 Eclipse 在物联网的研究是这样描述的:“满足任何物联网解决方案的核心基础技术” ,通过使用开源技术,“每个人都可以使用从而可以获得更好的适配性。” 他说,Eclipse 将物联网视为包括三层互联的软件栈。从更高的层面上看,这些软件栈(按照大家常见的说法)将物联网描述为跨越三个层面的网络。特定的观念可能认为含有更多的层面,但是他们一直符合这个三层模型的功能的: +### Eclipse 物联网方面的研究 + +Ian 对于 Eclipse 在物联网的研究是这样描述的:“满足任何物联网解决方案的核心基础技术” ,通过使用开源技术,“每个人都可以使用,从而可以获得更好的适配性。” 他说,Eclipse 将物联网视为包括三层互联的软件栈。从更高的层面上看,这些软件栈(按照大家常见的说法)将物联网描述为跨越三个层面的网络。特定的实现方式可能含有更多的层,但是它们一般都可以映射到这个三层模型的功能上: * 一种可以装载设备(例如设备、终端、微控制器、传感器)用软件的堆栈。 -* 将不同的传感器采集到的数据信息聚合起来并传输到网上的一类网关。这一层也可能会针对传感器数据检测做出实时反映。 +* 将不同的传感器采集到的数据信息聚合起来并传输到网上的一类网关。这一层也可能会针对传感器数据检测做出实时反应。 * 物联网平台后端的一个软件栈。这个后端云存储数据并能根据采集的数据比如历史趋势、预测分析提供服务。 -这三个软件栈在 Eclipse 的白皮书 “ [The Three Software Stacks Required for IoT Architectures][7] ”中有更详细的描述。 +这三个软件栈在 Eclipse 的白皮书 “[The Three Software Stacks Required for IoT Architectures][7] ”中有更详细的描述。 -Ian 说在这些架构中开发一种解决方案时,“需要开发一些特殊的东西,但是很多底层的技术是可以借用的,像通信协议、网关服务。需要一种模块化的方式来满足不用的需求场合。” Eclipse 关于物联网方面的研究可以概括为:开发模块化开源组件从而可以被用于开发大量的特定性商业服务和解决方案。 +Ian 说在这些架构中开发一种解决方案时,“需要开发一些特殊的东西,但是很多底层的技术是可以借用的,像通信协议、网关服务。需要一种模块化的方式来满足不同的需求场合。” Eclipse 关于物联网方面的研究可以概括为:开发模块化开源组件,从而可以被用于开发大量的特定性商业服务和解决方案。 -###Eclipse 的物联网项目 +### Eclipse 的物联网项目 -在众多一杯应用的 Eclipse 物联网应用中, Ian 举了两个和 [MQTT][8] 有关联的突出应用,一个设备与设备互联(M2M)的物联网协议。 Ian 把它描述成“一个专为重视电源管理工作的油气传输线监控系统的信息发布/订阅协议。MQTT 已经是众多物联网广泛应用标准中很成功的一个。” [Eclipse Mosquitto][9] 是 MQTT 的代理,[Eclipse Paho][10] 是他的客户端。 -[Eclipse Kura][11] 是一个物联网网关,引用 Ian 的话,“它连接了很多不同的协议间的联系”包括蓝牙、Modbus、CANbus 和 OPC 统一架构协议,以及一直在不断添加的协议。一个优势就是,他说,取代了你自己写你自己的协议, Kura 提供了这个功能并将你通过卫星、网络或其他设备连接到网络。”另外它也提供了防火墙配置、网络延时以及其它功能。Ian 也指出“如果网络不通时,它会存储信息直到网络恢复。” +在众多已被应用的 Eclipse 物联网应用中, Ian 举了两个和 [MQTT][8] 有关联的突出应用,一个设备与设备互联(M2M)的物联网协议。 Ian 把它描述成“一个专为重视电源管理工作的油气传输线监控系统的信息发布/订阅协议。MQTT 已经是众多物联网广泛应用标准中很成功的一个。” [Eclipse Mosquitto][9] 是 MQTT 的代理,[Eclipse Paho][10] 是他的客户端。 + +[Eclipse Kura][11] 是一个物联网网关,引用 Ian 的话,“它连接了很多不同的协议间的联系”,包括蓝牙、Modbus、CANbus 和 OPC 统一架构协议,以及一直在不断添加的各种协议。他说,一个优势就是,取代了你自己写你自己的协议, Kura 提供了这个功能并将你通过卫星、网络或其他设备连接到网络。”另外它也提供了防火墙配置、网络延时以及其它功能。Ian 也指出“如果网络不通时,它会存储信息直到网络恢复。” 最新的一个项目中,[Eclipse Kapua][12] 正尝试通过微服务来为物联网云平台提供不同的服务。比如,它集成了通信、汇聚、管理、存储和分析功能。Ian 说“它正在不断前进,虽然还没被完全开发出来,但是 Eurotech 和 RedHat 在这个项目上非常积极。” -Ian 说 [Eclipse hawkBit][13] ,软件更新管理的软件,是一项“非常有趣的项目。从安全的角度说,如果你不能更新你的设备,你将会面临巨大的安全漏洞。”很多物联网安全事故都和无法更新的设备有关,他说,“ HawkBit 可以基本负责通过物联网系统来完成扩展性更新的后端管理。” -物联网设备软件升级的难度一直被看作是难度最高的安全挑战之一。物联网设备不是一直连接的,而且数目众多,再加上首先设备的更新程序很难完全正常。正因为这个原因,关于无赖女王软件升级的项目一直是被当作重要内容往前推进。 +Ian 说 [Eclipse hawkBit][13] ,一个软件更新管理的软件,是一项“非常有趣的项目。从安全的角度说,如果你不能更新你的设备,你将会面临巨大的安全漏洞。”很多物联网安全事故都和无法更新的设备有关,他说,“HawkBit 
可以基本负责通过物联网系统来完成扩展性更新的后端管理。” -###为什么物联网这么适合 Eclipse +物联网设备软件升级的难度一直被看作是难度最高的安全挑战之一。物联网设备不是一直连接的,而且数目众多,再加上首先设备的更新程序很难完全正常。正因为这个原因,关于 IoT 软件升级的项目一直是被当作重要内容往前推进。 -在物联网发展趋势中的一个方面就是关于构建模块来解决商业问题,而不是宽约工业和公司的大物联网平台。 Eclipse 关于物联网的研究放在一系列模块栈、提供特定和大众化需求功能的项目,还有就是指定目标所需的可捆绑式中间件、网关和协议组件上。 +### 为什么物联网这么适合 Eclipse + +在物联网发展趋势中的一个方面就是关于构建模块来解决商业问题,而不是跨越行业和公司的大物联网平台。 Eclipse 关于物联网的研究放在一系列模块栈、提供特定和大众化需求功能的项目上,还有就是指定目标所需的可捆绑式中间件、网关和协议组件上。 -------------------------------------------------------------------------------- @@ -46,15 +53,15 @@ Ian 说 [Eclipse hawkBit][13] ,软件更新管理的软件,是一项“非 作者简介: -Gordon Haff - Gordon Haff 是红帽公司的云营销员,经常在消费者和工业会议上讲话,并且帮助发展红帽全办公云解决方案。他是 计算机前言:云如何如何打开众多出版社未来之门 的作者。在红帽之前, Gordon 写了成百上千的研究报告,经常被引用到公众刊物上,像纽约时报关于 IT 的议题和产品建议等…… +Gordon Haff - Gordon Haff 是红帽公司的云专家,经常在消费者和行业会议上讲话,并且帮助发展红帽全面云化解决方案。他是《计算机前沿:云如何如何打开众多出版社未来之门》的作者。在红帽之前, Gordon 写了成百上千的研究报告,经常被引用到公众刊物上,像纽约时报关于 IT 的议题和产品建议等…… -------------------------------------------------------------------------------- -转自: https://opensource.com/article/17/10/eclipse-and-iot +via: https://opensource.com/article/17/10/eclipse-and-iot -作者:[Gordon Haff ][a] +作者:[Gordon Haff][a] 译者:[smartgrids](https://github.com/smartgrids) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c4113a6fd3c22a7189d46b91a419a760a7a3a543 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 11 Dec 2017 23:31:48 +0800 Subject: [PATCH 208/236] PUB:20171020 How Eclipse is advancing IoT development.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @smartgrids 文章发布地址:https://linux.cn/article-9134-1.html 你的 LCTT 专页地址: https://linux.cn/lctt/smartgrids --- .../20171020 How Eclipse is advancing IoT development.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171020 How Eclipse is advancing IoT development.md (100%) diff --git a/translated/tech/20171020 How Eclipse is advancing IoT development.md b/published/20171020 How Eclipse is advancing IoT development.md similarity index 100% rename from translated/tech/20171020 How Eclipse is advancing IoT development.md rename to published/20171020 How Eclipse is advancing IoT development.md From 29c91769295258db847d551bba4f3cc45e29b237 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 12 Dec 2017 09:01:53 +0800 Subject: [PATCH 209/236] translated --- ...I Text Editor with Multi Cursor Support.md | 153 ------------------ ...I Text Editor with Multi Cursor Support.md | 151 +++++++++++++++++ 2 files changed, 151 insertions(+), 153 deletions(-) delete mode 100644 sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md create mode 100644 translated/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md diff --git a/sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md b/sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md deleted file mode 100644 index 6f0703cd08..0000000000 --- a/sources/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md +++ /dev/null @@ -1,153 +0,0 @@ -translating---geekpi - -Suplemon - Modern CLI Text Editor with Multi Cursor Support -====== -Suplemon is a modern text editor for CLI that emulates the multi cursor behavior and other features of [Sublime Text][1]. It's lightweight and really easy to use, just as Nano is. 
- -One of the benefits of using a CLI editor is that you can use it whether the Linux distribution that you're using has a GUI or not. This type of text editors also stands out as being simple, fast and powerful. - -You can find useful information and the source code in the [official repository][2]. - -### Features - -These are some of its interesting features: - -* Multi cursor support - -* Undo / Redo - -* Copy and Paste, with multi line support - -* Mouse support - -* Extensions - -* Find, find all, find next - -* Syntax highlighting - -* Autocomplete - -* Custom keyboard shortcuts - -### Installation - -First, make sure you have the latest version of python3 and pip3 installed. - -Then type in a terminal: - -``` -$ sudo pip3 install suplemon -``` - -Create a new file in the current directory - -Open a terminal and type: - -``` -$ suplemon -``` - -![suplemon new file](https://linoxide.com/wp-content/uploads/2017/11/suplemon-new-file.png) - -Open one or multiple files - -Open a terminal and type: - -``` -$ suplemon ... -``` - -``` -$ suplemon example1.c example2.c -``` - -Main configuration - -You can find the configuration file at ~/.config/suplemon/suplemon-config.json. - -Editing this file is easy, you just have to enter command mode (once you are inside suplemon) and run the config command. You can view the default configuration by running config defaults. - -Keymap configuration - -I'll show you the default key mappings for suplemon. If you want to edit them, just run keymap command. Run keymap default to view the default keymap file. - -* Exit: Ctrl + Q - -* Copy line(s) to buffer: Ctrl + C - -* Cut line(s) to buffer: Ctrl + X - -* Insert buffer: Ctrl + V - -* Duplicate line: Ctrl + K - -* Goto: Ctrl + G. You can go to a line or to a file (just type the beginning of a file name). Also, it is possible to type something like 'exam:50' to go to the line 50 of the file example.c at line 50. - -* Search for string or regular expression: Ctrl + F - -* Search next: Ctrl + D - -* Trim whitespace: Ctrl + T - -* Add new cursor in arrow direction: Alt + Arrow key - -* Jump to previous or next word or line: Ctrl + Left / Right - -* Revert to single cursor / Cancel input prompt: Esc - -* Move line(s) up / down: Page Up / Page Down - -* Save file: Ctrl + S - -* Save file with new name: F1 - -* Reload current file: F2 - -* Open file: Ctrl + O - -* Close file: Ctrl + W - -* Switch to next/previous file: Ctrl + Page Up / Ctrl + Page Down - -* Run a command: Ctrl + E - -* Undo: Ctrl + Z - -* Redo: Ctrl + Y - -* Toggle visible whitespace: F7 - -* Toggle mouse mode: F8 - -* Toggle line numbers: F9 - -* Toggle Full screen: F11 - -Mouse shortcuts - -* Set cursor at pointer position: Left Click - -* Add a cursor at pointer position: Right Click - -* Scroll vertically: Scroll Wheel Up / Down - -### Wrapping up - -After trying Suplemon for some time, I have changed my opinion about CLI text editors. I had tried Nano before, and yes, I liked its simplicity, but its modern-feature lack made it non-practical for my everyday use. - -This tool has the best of both CLI and GUI worlds... Simplicity and feature-richness! 
So I suggest you give it a try, and write your thoughts in the comments :-)
-
---------------------------------------------------------------------------------
-
-via: https://linoxide.com/tools/suplemon-cli-text-editor-multi-cursor/
-
-作者:[Ivo Ursino][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://linoxide.com/author/ursinov/
-[1]:https://linoxide.com/tools/install-sublime-text-editor-linux/
-[2]:https://github.com/richrd/suplemon/
diff --git a/translated/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md b/translated/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md
new file mode 100644
index 0000000000..4fc430ff2a
--- /dev/null
+++ b/translated/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md
@@ -0,0 +1,151 @@
+Suplemon - 带有多光标支持的现代 CLI 文本编辑器
+======
+Suplemon 是一个 CLI 中的现代文本编辑器,它模拟 [Sublime Text][1] 的多光标行为和其他特性。它是轻量级的,非常易于使用,就像 Nano 一样。
+
+使用 CLI 编辑器的好处之一是,无论你使用的 Linux 发行版是否有 GUI,你都可以使用它。这种文本编辑器也很简单、快速和强大。
+
+你可以在[官方仓库][2]中找到有用的信息和源代码。
+
+### 功能
+
+这些是它的一些有趣的功能:
+
+* 多光标支持
+
+* 撤销/重做
+
+* 复制和粘贴,带有多行支持
+
+* 鼠标支持
+
+* 扩展
+
+* 查找、查找所有、查找下一个
+
+* 语法高亮
+
+* 自动完成
+
+* 自定义键盘快捷键
+
+### 安装
+
+首先,确保安装了最新版本的 python3 和 pip3。
+
+然后在终端输入:
+
+```
+$ sudo pip3 install suplemon
+```
+
+在当前目录中创建一个新文件
+
+打开一个终端并输入:
+
+```
+$ suplemon
+```
+
+![suplemon new file](https://linoxide.com/wp-content/uploads/2017/11/suplemon-new-file.png)
+
+打开一个或多个文件
+
+打开一个终端并输入:
+
+```
+$ suplemon ...
+```
+
+```
+$ suplemon example1.c example2.c
+```
+
+主要配置
+
+你可以在 ~/.config/suplemon/suplemon-config.json 找到配置文件。
+
+编辑这个文件很简单,你只需要进入命令模式(进入 suplemon 后)并运行 config 命令。你可以通过运行 config defaults 来查看默认配置。
+
+键盘映射配置
+
+我会展示 suplemon 的默认键映射。如果你想编辑它们,只需运行 keymap 命令。运行 keymap default 来查看默认的键盘映射文件。
+
+* 退出:Ctrl + Q
+
+* 复制行到缓冲区:Ctrl + C
+
+* 剪切行到缓冲区:Ctrl + X
+
+* 插入缓冲区:Ctrl + V
+
+* 复制行:Ctrl + K
+
+* 跳转:Ctrl + G。你可以跳转到一行或一个文件(只需键入一个文件名的开头)。另外,可以输入类似于 “exam:50” 跳转到 example.c 的第 50 行。
+
+* 用字符串或正则表达式搜索:Ctrl + F
+
+* 搜索下一个:Ctrl + D
+
+* 去除空格:Ctrl + T
+
+* 在箭头方向添加新的光标:Alt + 方向键
+
+* 跳转到上一个或下一个单词或行:Ctrl + 左/右
+
+* 恢复到单光标/取消输入提示:Esc
+
+* 向上/向下移动行:Page Up / Page Down
+
+* 保存文件:Ctrl + S
+
+* 用新名称保存文件:F1
+
+* 重新载入当前文件:F2
+
+* 打开文件:Ctrl + O
+
+* 关闭文件:Ctrl + W
+
+* 切换到下一个/上一个文件:Ctrl + Page Up / Ctrl + Page Down
+
+* 运行一个命令:Ctrl + E
+
+* 撤销:Ctrl + Z
+
+* 重做:Ctrl + Y
+
+* 切换空格可见性:F7
+
+* 切换鼠标模式:F8
+
+* 切换行号显示:F9
+
+* 切换全屏:F11
+
+鼠标快捷键
+
+* 将光标置于指针位置:左键单击
+
+* 在指针位置添加一个光标:右键单击
+
+* 垂直滚动:向上/向下滚动滚轮
+
+### 总结
+
+在尝试 Suplemon 一段时间后,我改变了对 CLI 文本编辑器的看法。我以前曾经尝试过 Nano,是的,我喜欢它的简单性,但是它缺乏现代的特性,这使它在日常使用中不够实用。
+
+这个工具兼具 CLI 和 GUI 世界的优点……简单性和功能丰富!所以我建议你试试看,并在评论中写下你的想法 :-)
+
+--------------------------------------------------------------------------------
+
+via: https://linoxide.com/tools/suplemon-cli-text-editor-multi-cursor/
+
+作者:[Ivo Ursino][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://linoxide.com/author/ursinov/
+[1]:https://linoxide.com/tools/install-sublime-text-editor-linux/
+[2]:https://github.com/richrd/suplemon/
From 88506024bffc72f3457855c5875ed8e57e978232 Mon Sep 17 00:00:00 2001
From: wxy
Date: Tue, 12 Dec 2017 12:02:14 +0800
Subject: [PATCH 210/236]
 =?UTF-8?q?PRF:20170719=20Containing=20System=20Se?=
 =?UTF-8?q?rvices=20in=20Red=20Hat=20Enterprise=20Linux=20=E2=80=93=20Part?=
 =?UTF-8?q?=201.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@liuxinyu123 恭喜你,完成了第一篇翻译。以后要注意保持 MD 格式,以及中文标点。
---
 ...es in Red Hat Enterprise Linux – Part 1.md | 179 ++++++++++--------
 1 file changed, 98 insertions(+), 81 deletions(-)
diff --git a/translated/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md b/translated/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md
index f2dde64cf4..93817695a4 100644
--- a/translated/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md
+++ b/translated/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md
@@ -1,18 +1,25 @@
-# 红帽企业版 Linux 包含的系统服务 - 第一部分
+在红帽企业版 Linux 中将系统服务容器化(一)
+====================
=?UTF-8?q?=201.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @liuxinyu123 恭喜你,完成了第一篇翻译。以后要注意保持 MD 格式,以及中文标点。 --- ...es in Red Hat Enterprise Linux – Part 1.md | 179 ++++++++++-------- 1 file changed, 98 insertions(+), 81 deletions(-) diff --git a/translated/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md b/translated/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md index f2dde64cf4..93817695a4 100644 --- a/translated/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md +++ b/translated/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md @@ -1,18 +1,25 @@ -# 红帽企业版 Linux 包含的系统服务 - 第一部分 +在红帽企业版 Linux 中将系统服务容器化(一) +==================== -在 2017 年红帽峰会上,有几个人问我“我们通常用完整的虚拟机来隔离如 DSN 和 DHCP 等网络服务,那我们可以用容器来取而代之吗?”。答案是可以的,下面是在当前红帽企业版 Linux 7 系统上创建一个系统容器的例子。 -## **我们的目的** -### *创建一个可以独立于任何其他系统服务来进行更新的网络服务,并且可以从主机端容易管理和更新。* -让我们来探究在一个容器中建立一个运行在 systemd 之下的 BIND 服务器。在这一部分,我们将看到建立自己的容器以及管理 BIND 配置和数据文件。 -在第二部分,我们将看到主机中的 systemd 怎样和容器中的 systmed 整合。我们将探究管理容器中的服务,并且使它作为一种主机中的服务。 -## **创建 BIND 容器** -为了使 systemd 在一个容器中容易运行,我们首先需要在主机中增加两个包:`oci-register-machine` 和 `oci-systemd-hook`。`oci-systemd-hook` 这个钩子允许我们在一个容器中运行 systemd,而不需要使用特权容器或者手工配置 tmpfs 和 cgroups。`oci-register-machine` 这个钩子允许我们使用 systemd 工具如 `systemctl` 和 `machinectl` 来跟踪容器。 +在 2017 年红帽峰会上,有几个人问我“我们通常用完整的虚拟机来隔离如 DNS 和 DHCP 等网络服务,那我们可以用容器来取而代之吗?”答案是可以的,下面是在当前红帽企业版 Linux 7 系统上创建一个系统容器的例子。 + +### 我们的目的 + +**创建一个可以独立于任何其它系统服务而更新的网络服务,并且可以从主机端容易地管理和更新。** + +让我们来探究一下在容器中建立一个运行在 systemd 之下的 BIND 服务器。在这一部分,我们将了解到如何建立自己的容器以及管理 BIND 配置和数据文件。 + +在本系列的第二部分,我们将看到如何整合主机中的 systemd 和容器中的 systemd。我们将探究如何管理容器中的服务,并且使它作为一种主机中的服务。 + +### 创建 BIND 容器 + +为了使 systemd 在一个容器中轻松运行,我们首先需要在主机中增加两个包:`oci-register-machine` 和 `oci-systemd-hook`。`oci-systemd-hook` 这个钩子允许我们在一个容器中运行 systemd,而不需要使用特权容器或者手工配置 tmpfs 和 cgroups。`oci-register-machine` 这个钩子允许我们使用 systemd 工具如 `systemctl` 和 `machinectl` 来跟踪容器。 ``` [root@rhel7-host ~]# yum install oci-register-machine oci-systemd-hook -``` +``` -回到创建我们的 BIND 容器上。[红帽企业版 Linux 7 基础映像](https://access.redhat.com/containers)包含 systemd 作为一个 init 系统。我们安装并激活 BIND 正如我们在典型系统中做的那样。你可以从资源库中的 [git 仓库中下载这份 Dockerfile](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo)。 +回到创建我们的 BIND 容器上。[红帽企业版 Linux 7 基础镜像][6]包含了 systemd 作为其初始化系统。我们可以如我们在典型的系统中做的那样安装并激活 BIND。你可以从 [git 仓库中下载这份 Dockerfile][8]。 ``` [root@rhel7-host bind]# vi Dockerfile @@ -29,19 +36,17 @@ EXPOSE 53/udp CMD [ "/sbin/init" ] ``` -因为我们以 PID 1 来启动一个 init 系统,当我们告诉容器停止时,需要改变 docker CLI 发送的信号。从 `kill` 系统调用手册中(man 2 kill): +因为我们以 PID 1 来启动一个初始化系统,当我们告诉容器停止时,需要改变 docker CLI 发送的信号。从 `kill` 系统调用手册中 (`man 2 kill`): -``` -The only signals that can be sent to process ID 1, the init -process, are those for which init has explicitly installed -signal handlers. This is done to assure the system is not -brought down accidentally. 
-``` +> 唯一可以发送给 PID 1 进程(即 init 进程)的信号,是那些初始化系统明确安装了信号处理器signal handler的信号。这是为了避免系统被意外破坏。 -对于 systemd 信号句柄,`SIGRTMIN+3`是对应于 `systemd start halt.target` 的信号。我们也应该暴露 TCP 和 UDP 端口号用来 BIND ,因为这两种协议可能都在使用中。 -## **管理数据** -有了一个实用的 BIND 服务,我们需要一种管理配置和区域文件的方法。目前这些都在容器里面,所以我们任何时候都可以进入容器去更新配置或者改变一个区域文件。从管理者角度来说,这并不是很理想。当要更新 BIND 时,我们将需要重建这个容器,所以映像中的改变将会丢失。任何时候我们需要更新一个文件或者重启服务时,都需要进入这个容器,而这增加了步骤和时间。 -相反的,我们将从这个容器中提取配置和数据文件,把它们拷贝到主机,然后在运行的时候挂载它们。这种方式我们可以很容易地重启或者重建容器,而不会丢失做出的更改。我们也可以使用容器外的编辑器来更改配置和区域文件。因为这个容器的数据看起来像“该系统服务的特定站点数据”,让我们遵循文件系统层次并在当前主机上创建 `/srv/named` 目录来保持管理权分离。 +对于 systemd 信号处理器,`SIGRTMIN+3` 是对应于 `systemd start halt.target` 的信号。我们也需要为 BIND 暴露 TCP 和 UDP 端口号,因为这两种协议可能都要使用。 + +### 管理数据 + +有了一个可以工作的 BIND 服务,我们还需要一种管理配置文件和区域文件的方法。目前这些都放在容器里面,所以我们任何时候都可以进入容器去更新配置或者改变一个区域文件。从管理的角度来说,这并不是很理想。当要更新 BIND 时,我们将需要重建这个容器,所以镜像中的改变将会丢失。任何时候我们需要更新一个文件或者重启服务时,都需要进入这个容器,而这增加了步骤和时间。 + +相反的,我们将从这个容器中提取出配置文件和数据文件,把它们拷贝到主机上,然后在运行的时候挂载它们。用这种方式我们可以很容易地重启或者重建容器,而不会丢失所做出的更改。我们也可以使用容器外的编辑器来更改配置和区域文件。因为这个容器的数据看起来像“该系统所提供服务的特定站点数据”,让我们遵循 Linux 文件系统层次标准File System Hierarchy,并在当前主机上创建 `/srv/named` 目录来保持管理权分离。 ``` [root@rhel7-host ~]# mkdir -p /srv/named/etc @@ -49,8 +54,9 @@ brought down accidentally. [root@rhel7-host ~]# mkdir -p /srv/named/var/named ``` - ***提示:如果你正在迁移一个存在的配置文件,你可以跳过下面的步骤并且将它直接拷贝到 `/srv/named` 目录下。你可能仍然想检查以一个临时容器分配给这个容器的 GID。*** -让我们建立并运行一个临时容器来检查 BIND。在将 init 进程作为 PID 1 运行时,我们不能交互地运行这个容器来获取一个 shell。我们会在容器 启动后执行 shell,并且使用 `rpm` 命令来检查重要文件。 +*提示:如果你正在迁移一个已有的配置文件,你可以跳过下面的步骤并且将它直接拷贝到 `/srv/named` 目录下。你也许仍然要用一个临时容器来检查一下分配给这个容器的 GID。* + +让我们建立并运行一个临时容器来检查 BIND。在将 init 进程以 PID 1 运行时,我们不能交互地运行这个容器来获取一个 shell。我们会在容器启动后执行 shell,并且使用 `rpm` 命令来检查重要文件。 ``` [root@rhel7-host ~]# docker build -t named . @@ -60,8 +66,9 @@ brought down accidentally. [root@0e77ce00405e /]# rpm -ql bind ``` -对于这个例子来说,我们将需要 `/etc/named.conf` 和 `/var/named/` 目录下的任何文件。我们可以使用 `machinectl` 命令来提取它们。如果有一个以上的容器注册了,我们可以使用 `machinectl status` 命令来查看任一机器上运行的是什么。一旦有了这个配置我们就可以终止这个临时容器了。 -*如果你喜欢,资源库中也有一个[样例 `named.conf` 和针对 `example.com` 的区域文件](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo)* +对于这个例子来说,我们将需要 `/etc/named.conf` 和 `/var/named/` 目录下的任何文件。我们可以使用 `machinectl` 命令来提取它们。如果注册了一个以上的容器,我们可以在任一机器上使用 `machinectl status` 命令来查看运行的是什么。一旦有了这些配置,我们就可以终止这个临时容器了。 + +*如果你喜欢,资源库中也有一个[样例 `named.conf` 和针对 `example.com` 的区域文件][8]。* ``` [root@rhel7-host bind]# machinectl list @@ -76,17 +83,20 @@ MACHINE CLASS SERVICE [root@rhel7-host ~]# docker stop infallible_wescoff ``` -## **最终的创建** -为了创建和运行最终的容器,添加卷选项到挂载: +### 最终的创建 + +为了创建和运行最终的容器,添加卷选项以挂载: + - 将文件 `/srv/named/etc/named.conf` 映射为 `/etc/named.conf` - 将目录 `/srv/named/var/named` 映射为 `/var/named` -因为这是我们最终的容器,我们将提供一个有意义的名字,以供我们以后引用。 +因为这是我们最终的容器,我们将提供一个有意义的名字,以供我们以后引用。 + ``` [root@rhel7-host ~]# docker run -d -p 53:53 -p 53:53/udp -v /srv/named/etc/named.conf:/etc/named.conf:Z -v /srv/named/var/named:/var/named:Z --name named-container named -``` +``` -在最终容器运行时,我们可以更改本机配置来改变这个容器中 BIND 的行为。这个 BIND 服务器将需要在这个容器分配的任何 IP 上监听。确保任何新文件的 GID 与来自这个容器中的剩余 BIND 文件相匹配。 +在最终容器运行时,我们可以更改本机配置来改变这个容器中 BIND 的行为。这个 BIND 服务器将需要在这个容器分配的任何 IP 上监听。请确保任何新文件的 GID 与来自这个容器中的其余的 BIND 文件相匹配。 ``` [root@rhel7-host bind]# cp named.conf /srv/named/etc/named.conf @@ -94,10 +104,12 @@ MACHINE CLASS SERVICE [root@rhel7-host ~]# cp example.com.zone /srv/named/var/named/example.com.zone [root@rhel7-host ~]# cp example.com.rr.zone /srv/named/var/named/example.com.rr.zone -``` -> [很好奇为什么我不需要在主机目录中改变 SELinux 
上下文?](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_1) +``` + +> 很好奇为什么我不需要在主机目录中改变 SELinux 上下文?^注1 + +我们将运行这个容器提供的 `rndc` 二进制文件重新加载配置。我们可以使用 `journald` 以同样的方式检查 BIND 日志。如果运行出现错误,你可以在主机中编辑该文件,并且重新加载配置。在主机中使用 `host` 或 `dig`,我们可以检查来自该容器化服务的 example.com 的响应。 -我们将运行这个容器提供的 `rndc` 二进制文件重新加载配置。我们可以使用 `journald` 以同样的方式检查 BIND 日志。如果运行出现错误,你可以在主机中编辑这个文件,并且重新加载配置。在主机中使用 `host` 或 `dig`, 我们可以检查来自针对 example.com 而包含的服务的响应。 ``` [root@rhel7-host ~]# docker exec -it named-container rndc reload server reload successful @@ -122,81 +134,86 @@ Address: ::1#53 Aliases: www.example.com is an alias for server1.example.com. server1.example.com is an alias for mail -``` -> [你的区域文件没有更新吗?可能是因为你的编辑器,而不是序列号。](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_2) +``` -## **终点线** +> 你的区域文件没有更新吗?可能是因为你的编辑器,而不是序列号。^注2 -我们已经知道我们打算完成什么。从容器中为 DNS 请求和区域文件提供服务。更新之后,我们已经得到一个持久化的位置来管理更新和配置。 -在这个系列的第二部分,我们将看到怎样将一个容器看作为主机中的一个普通服务。 +### 终点线 + +我们已经达成了我们打算完成的目标,从容器中为 DNS 请求和区域文件提供服务。我们已经得到一个持久化的位置来管理更新和配置,并且更新后该配置不变。 + +在这个系列的第二部分,我们将看到怎样将一个容器看作为主机中的一个普通服务来运行。 --- -[跟随 RHEL 博客](http://redhatstackblog.wordpress.com/feed/)通过电子邮件来获得本系列第二部分和其它新文章的更新。 +[关注 RHEL 博客](http://redhatstackblog.wordpress.com/feed/),通过电子邮件来获得本系列第二部分和其它新文章的更新。 --- -## **额外的资源** -**附带文件的 Github 仓库:**[**https://github.com/nzwulfin/named-container**](https://github.com/nzwulfin/named-container) -**侧边栏 1:** **通过容器访问本地文件的 SELinux 上下文** -你可能已经注意到当我从容器向本地主机拷贝文件时,我没有运行 `chcon` 将主机中的文件类型改变为 `svirt_sandbox_file_t`。为什么它没有终止?将一个文件拷贝到 `/srv` 本应该将这个文件标记为类型 `var_t`。我 `setenforce 0` 了吗? -当然没有,这将让 Dan Walsh 大哭(译注:未知人名)。是的,`machinectl` 确实将文件标记类型设置为期望的那样,可以看一下: -启动一个容器之前: -``` +### 附加资源 + +- **所附带文件的 Github 仓库:** [https://github.com/nzwulfin/named-container](https://github.com/nzwulfin/named-container) +- **注1:** **通过容器访问本地文件的 SELinux 上下文** + + 你可能已经注意到当我从容器向本地主机拷贝文件时,我没有运行 `chcon` 将主机中的文件类型改变为 `svirt_sandbox_file_t`。为什么它没有出错?将一个文件拷贝到 `/srv` 会将这个文件标记为类型 `var_t`。我 `setenforce 0` (关闭 SELinux)了吗? + + 当然没有,这将让 [Dan Walsh 大哭](https://stopdisablingselinux.com/)(LCTT 译注:RedHat 的 SELinux 团队负责人,倡议不要禁用 SELinux)。是的,`machinectl` 确实将文件标记类型设置为期望的那样,可以看一下: + + 启动一个容器之前: + + ``` [root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf - -rw-r-----. unconfined_u:object_r:var_t:s0 /srv/named/etc/named.conf -``` - -After starting the container: -不,运行中我使用了一个卷选项使 Dan Walsh 高兴,`:Z`。`-v /srv/named/etc/named.conf:/etc/named.conf:Z`命令的这部分做了两件事情:首先它表示这需要使用一个私有的卷 SELiunx 标记来重新标记,其次它表明以读写挂载。 -启动容器之后: ``` + + 不过,运行中我使用了一个卷选项可以使 Dan Walsh 先生高兴起来,`:Z`。`-v /srv/named/etc/named.conf:/etc/named.conf:Z` 命令的这部分做了两件事情:首先它表示这需要使用一个私有卷的 SELiunx 标记来重新标记;其次它表明以读写挂载。 + + 启动容器之后: + + ``` [root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf - -rw-r-----. 
root 25 system_u:object_r:svirt_sandbox_file_t:s0:c821,c956 /srv/named/etc/named.conf -``` - -**侧边栏 2:** **VIM 备份行为改变 inode** -如果你在本地主机中使用 `vim` 来编辑配置文件,并且你没有看到容器中的改变,你可能不经意的创建了容器感知不到的新文件。在编辑中时,有三种 `vim` 设定影响背负副本:backup, writebackup 和 backupcopy。 -我从官方 VIM backup_table 中剪下了应用到 RHEL 7 中的默认配置 -[[http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table](http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table)] ``` -backup writebackup +- **注2:** **VIM 备份行为能改变 inode** + + 如果你在本地主机中使用 `vim` 来编辑配置文件,而你没有看到容器中的改变,你可能不经意的创建了容器感知不到的新文件。在编辑时,有三种 `vim` 设定影响备份副本:`backup`、`writebackup` 和 `backupcopy`。 + + 我摘录了 RHEL 7 中的来自官方 VIM [backup_table][9] 中的默认配置。 + + ``` +backup writebackup off on backup current file, deleted afterwards (default) ``` -So we don’t create tilde copies that stick around, but we are creating backups. The other setting is backupcopy, where auto is the shipped default: -所以我们不创建停留的副本,但我们将创建备份。另外的设定是 backupcopy,`auto` 是默认的设置: -``` -"yes" make a copy of the file and overwrite the original one + 所以我们不创建残留下的 `~` 副本,而是创建备份。另外的设定是 `backupcopy`,`auto` 是默认的设置: + + ``` + "yes" make a copy of the file and overwrite the original one "no" rename the file and write a new one "auto" one of the previous, what works best ``` -这种组合设定意味着当你编辑一个文件时,除非 `vim` 有理由不去(检查文件逻辑),你将会得到包含你编辑之后的新文件,当你保存时它会重命名原先的文件。这意味着这个文件获得了新的 inode。对于大多数情况,这不是问题,但是这里绑定挂载到一个容器对 inode 的改变很敏感。为了解决这个问题,你需要改变 backupcopy 的行为。 -Either in the vim session or in your .vimrc, add set backupcopy=yes. This will make sure the original file gets truncated and overwritten, preserving the inode and propagating the changes into the container. -不管是在 `vim` 会话还是在你的 `.vimrc`中,添加 `set backupcopy=yes`。这将确保原先的文件被截断并且被覆写,维持了 inode 并且在容器中产生了改变。 + 这种组合设定意味着当你编辑一个文件时,除非 `vim` 有理由(请查看文档了解其逻辑),你将会得到一个包含你编辑内容的新文件,当你保存时它会重命名为原先的文件。这意味着这个文件获得了新的 inode。对于大多数情况,这不是问题,但是这里容器的绑定挂载bind mount对 inode 的改变很敏感。为了解决这个问题,你需要改变 `backupcopy` 的行为。 + + 不管是在 `vim` 会话中还是在你的 `.vimrc`中,请添加 `set backupcopy=yes`。这将确保原先的文件被清空并覆写,维持了 inode 不变并且将该改变传递到了容器中。 ------------ via: http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/ -作者:[Matt Micene ][a] +作者:[Matt Micene][a] 译者:[liuxinyu123](https://github.com/liuxinyu123) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - - - - - - - - - - - - +[a]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/ +[1]:http://rhelblog.redhat.com/author/mmicenerht/ +[2]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo +[3]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_1 +[4]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_2 +[5]:http://redhatstackblog.wordpress.com/feed/ +[6]:https://access.redhat.com/containers +[7]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo +[8]:https://github.com/nzwulfin/named-container +[9]:http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table \ No newline at end of file From 2e1505297c29af94b555133cc817b3d56cae7a6a Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 12 Dec 2017 12:03:48 +0800 Subject: [PATCH 211/236] =?UTF-8?q?PUB:20170719=20Containing=20System=20Se?= =?UTF-8?q?rvices=20in=20Red=20Hat=20Enterprise=20Linux=20=E2=80=93=20Part?= =?UTF-8?q?=201.md?= 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @liuxinyu123 文章发布地址:https://linux.cn/article-9135-1.html 你的 LCTT 专页地址:https://linux.cn/lctt/liuxinyu123 --- ...aining System Services in Red Hat Enterprise Linux – Part 1.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md (100%) diff --git a/translated/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md b/published/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md similarity index 100% rename from translated/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md rename to published/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md From 9a40287250406858cc22bbae9a414c7ae8cb62bf Mon Sep 17 00:00:00 2001 From: darsh8 <752458434@qq.com> Date: Tue, 12 Dec 2017 13:07:42 +0800 Subject: [PATCH 212/236] translated --- ...131 Book review Ours to Hack and to Own.md | 65 ------------------- ...131 Book review Ours to Hack and to Own.md | 63 ++++++++++++++++++ 2 files changed, 63 insertions(+), 65 deletions(-) delete mode 100644 sources/talk/20170131 Book review Ours to Hack and to Own.md create mode 100644 translated/20170131 Book review Ours to Hack and to Own.md diff --git a/sources/talk/20170131 Book review Ours to Hack and to Own.md b/sources/talk/20170131 Book review Ours to Hack and to Own.md deleted file mode 100644 index a75a39a718..0000000000 --- a/sources/talk/20170131 Book review Ours to Hack and to Own.md +++ /dev/null @@ -1,65 +0,0 @@ -darsh8 Translating - -Book review: Ours to Hack and to Own -============================================================ - - ![Book review: Ours to Hack and to Own](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDUCATION_colorbooks.png?itok=liB3FyjP "Book review: Ours to Hack and to Own") -Image by : opensource.com - -It seems like the age of ownership is over, and I'm not just talking about the devices and software that many of us bring into our homes and our lives. I'm also talking about the platforms and services on which those devices and apps rely. - -While many of the services that we use are free, we don't have any control over them. The firms that do, in essence, control what we see, what we hear, and what we read. Not only that, but many of them are also changing the nature of work. They're using closed platforms to power a shift away from full-time work to the [gig economy][2], one that offers little in the way of security or certainty. - -This move has wide-ranging implications for the Internet and for everyone who uses and relies on it. The vision of the open Internet from just 20-odd-years ago is fading and is rapidly being replaced by an impenetrable curtain. - -One remedy that's becoming popular is building [platform cooperatives][3], which are digital platforms that their users own. The idea behind platform cooperatives has many of the same roots as open source, as the book "[Ours to Hack and to Own][4]" explains. - -Scholar Trebor Scholz and writer Nathan Schneider have collected 40 essays discussing the rise of, and the need for, platform cooperatives as tools ordinary people can use to promote openness, and to counter the opaqueness and the restrictions of closed systems. 
- -### Where open source fits in - -At or near the core of any platform cooperative lies open source; not necessarily open source technologies, but the principles and the ethos that underlie open source—openness, transparency, cooperation, collaboration, and sharing. - -In his introduction to the book, Trebor Scholz points out that: - -> In opposition to the black-box systems of the Snowden-era Internet, these platforms need to distinguish themselves by making their data flows transparent. They need to show where the data about customers and workers are stored, to whom they are sold, and for what purpose. - -It's that transparency, so essential to open source, which helps make platform cooperatives so appealing and a refreshing change from much of what exists now. - -Open source software can definitely play a part in the vision of platform cooperatives that "Ours to Hack and to Own" shares. Open source software can provide a fast, inexpensive way for groups to build the technical infrastructure that can power their cooperatives. - -Mickey Metts illustrates this in the essay, "Meet Your Friendly Neighborhood Tech Co-Op." Metts works for a firm called Agaric, which uses Drupal to build for groups and small business what they otherwise couldn't do for themselves. On top of that, Metts encourages anyone wanting to build and run their own business or co-op to embrace free and open source software. Why? It's high quality, it's inexpensive, you can customize it, and you can connect with large communities of helpful, passionate people. - -### Not always about open source, but open source is always there - -Not all of the essays in this book focus or touch on open source; however, the key elements of the open source way—cooperation, community, open governance, and digital freedom—are always on or just below the surface. - -In fact, as many of the essays in "Ours to Hack and to Own" argue, platform cooperatives can be important building blocks of a more open, commons-based economy and society. That can be, in Douglas Rushkoff's words, organizations like Creative Commons compensating "for the privatization of shared intellectual resources." It can also be what Francesca Bria, Barcelona's CTO, describes as cities running their own "distributed common data infrastructures with systems that ensure the security and privacy and sovereignty of citizens' data." - -### Final thought - -If you're looking for a blueprint for changing the Internet and the way we work, "Ours to Hack and to Own" isn't it. The book is more a manifesto than user guide. Having said that, "Ours to Hack and to Own" offers a glimpse at what we can do if we apply the principles of the open source way to society and to the wider world. - --------------------------------------------------------------------------------- - -作者简介: - -Scott Nesbitt - Writer. Editor. Soldier of fortune. Ocelot wrangler. Husband and father. Blogger. Collector of pottery. Scott is a few of these things. He's also a long-time user of free/open source software who extensively writes and blogs about it. 
You can find Scott on Twitter, GitHub - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/1/review-book-ours-to-hack-and-own - -作者:[Scott Nesbitt][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/scottnesbitt -[1]:https://opensource.com/article/17/1/review-book-ours-to-hack-and-own?rate=dgkFEuCLLeutLMH2N_4TmUupAJDjgNvFpqWqYCbQb-8 -[2]:https://en.wikipedia.org/wiki/Access_economy -[3]:https://en.wikipedia.org/wiki/Platform_cooperative -[4]:http://www.orbooks.com/catalog/ours-to-hack-and-to-own/ -[5]:https://opensource.com/user/14925/feed -[6]:https://opensource.com/users/scottnesbitt diff --git a/translated/20170131 Book review Ours to Hack and to Own.md b/translated/20170131 Book review Ours to Hack and to Own.md new file mode 100644 index 0000000000..1948ea4ab9 --- /dev/null +++ b/translated/20170131 Book review Ours to Hack and to Own.md @@ -0,0 +1,63 @@ +书评:《Ours to Hack and to Own》 +============================================================ + + ![书评: Ours to Hack and to Own](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDUCATION_colorbooks.png?itok=liB3FyjP "Book review: Ours to Hack and to Own") +Image by : opensource.com + +私有制的时代看起来似乎结束了,我将不仅仅讨论那些由我们中的许多人引入到我们的家庭与生活的设备和软件。我也将讨论这些设备与应用依赖的平台与服务。 + +尽管我们使用的许多服务是免费的,我们对它们并没有任何控制。本质上讲,这些企业确实控制着我们所看到的,听到的以及阅读到的内容。不仅如此,许多企业还在改变工作的性质。他们正使用封闭的平台来助长由全职工作到[零工经济][2]的转变方式,这种方式提供极少的安全性与确定性。 + +这项行动对于网络以及每一个使用与依赖网络的人产生了广泛的影响。仅仅二十多年前的开放网络的想象正在逐渐消逝并迅速地被一块难以穿透的幕帘所取代。 + +一种变得流行的补救办法就是建立[平台合作][3], 由他们的用户所拥有的电子化平台。正如这本书所阐述的,平台合作社背后的观点与开源有许多相同的根源。 + +学者Trebor Scholz和作家Nathan Schneider已经收集了40篇探讨平台合作社作为普通人可使用以提升开放性并对闭源系统的不透明性及各种限制予以还击的工具的增长及需求的论文。 + +### 哪里适合开源 + +任何平台合作社核心及接近核心的部分依赖与开源;不仅开源技术是必要的,构成开源开放性,透明性,协同合作以及共享的准则与理念同样不可或缺。 + +在这本书的介绍中, Trebor Scholz指出: + +> 与网络的黑盒子系统相反,这些平台需要使它们的数据流透明来辨别自身。他们需要展示客户与员工的数据在哪里存储,数据出售给了谁以及数据为了何种目的。 + +正是对开源如此重要的透明性,促使平台合作社如此吸引人并在目前大量已存平台之中成为令人耳目一新的变化。 + +开源软件在《Ours to Hack and to Own》所分享的平台合作社的构想中必然充当着重要角色。开源软件能够为群体建立助推合作社的技术型公共建设提供快速,不算昂贵的途径。 + +Mickey Metts在论文中这样形容, "与你的友好的社区型技术合作社相遇。(原文:Meet Your Friendly Neighborhood Tech Co-Op.)" Metts为一家名为Agaric的企业工作,这家企业使用Drupal为团体及小型企业建立他们不能独自完成的产品。除此以外, Metts还鼓励任何想要建立并运营自己的企业的公司或合作社的人接受免费且开源的软件。为什么呢?因为它是高质量的,不算昂贵的,可定制的,并且你能够与由乐于助人而又热情的人们组成的大型社区产生联系。 + +### 不总是开源的,但开源总在 + +这本书里不是所有的论文都聚焦或提及开源的;但是,开源方式的关键元素-合作,社区,开放管理以及电子自由化-总是在其表面若隐若现。 + +事实上正如《Ours to Hack and to Own》中许多论文所讨论的,建立一个更加开放,基于平常人的经济与社会区块,平台合作社会变得非常重要。用Douglas Rushkoff的话讲,那会是类似Creative Commons的组织“对共享知识资源的私有化”的补偿。它们也如Barcelona的CTO(首席执行官)Francesca Bria所描述的那样,是“通过确保市民数据安全性,隐私性和权利的系统”来运营他们自己的“分布式通用数据基础架构”的城市。 + +### 最后的思考 + +如果你在寻找改变互联网的蓝图以及我们工作的方式,《Ours to Hack and to Own》并不是你要寻找的。这本书与其说是用户指南,不如说是一种宣言。如书中所说,《Ours to Hack and to Own》让我们略微了解如果我们将开源方式准则应用于社会及更加广泛的世界我们能够做的事。 + +-------------------------------------------------------------------------------- + +作者简介: + +Scott Nesbitt -作家,编辑,雇佣兵,虎猫牛仔(原文:Ocelot wrangle),丈夫与父亲,博客写手,陶器收藏家。Scott正是做这样的一些事情。他还是大量写关于开源软件文章与博客的长期开源用户。你可以在Twitter,Github上找到他。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/1/review-book-ours-to-hack-and-own + +作者:[Scott Nesbitt][a] +译者:[darsh8](https://github.com/darsh8) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 
荣誉推出 + +[a]:https://opensource.com/users/scottnesbitt +[1]:https://opensource.com/article/17/1/review-book-ours-to-hack-and-own?rate=dgkFEuCLLeutLMH2N_4TmUupAJDjgNvFpqWqYCbQb-8 +[2]:https://en.wikipedia.org/wiki/Access_economy +[3]:https://en.wikipedia.org/wiki/Platform_cooperative +[4]:http://www.orbooks.com/catalog/ours-to-hack-and-to-own/ +[5]:https://opensource.com/user/14925/feed +[6]:https://opensource.com/users/scottnesbitt From 6afe8d9765092b483859541ddf600637b96c2869 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 12 Dec 2017 14:32:10 +0800 Subject: [PATCH 213/236] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...(nc) Command Examples for Linux Systems.md | 200 ------------------ ...(nc) Command Examples for Linux Systems.md | 199 +++++++++++++++++ 2 files changed, 199 insertions(+), 200 deletions(-) delete mode 100644 sources/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md create mode 100644 translated/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md diff --git a/sources/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md b/sources/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md deleted file mode 100644 index 8264014664..0000000000 --- a/sources/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md +++ /dev/null @@ -1,200 +0,0 @@ -translating by lujun9972 -10 useful ncat (nc) Command Examples for Linux Systems -====== - [![nc-ncat-command-examples-Linux-Systems](https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg)][1] - -ncat or nc is networking utility with functionality similar to cat command but for network. It is a general purpose CLI tool for reading, writing, redirecting data across a network. It is designed to be a reliable back-end tool that can be used with scripts or other programs. It’s also a great tool for network debugging, as it can create any kind of connect one can need. - -ncat/nc can be a port scanning tool, or a security tool, or monitoring tool and is also a simple TCP proxy. Since it has so many features, it is known as a network swiss army knife. It’s one of those tools that every System Admin should know & master. - -In most of Debian distributions ‘nc’ is available and its package is automatically installed during installation. But in minimal CentOS 7 / RHEL 7 installation you will not find nc as a default package. You need to install using the following command. - -``` -[root@linuxtechi ~]# yum install nmap-ncat -y -``` - -System admins can use it audit their system security, they can use it find the ports that are opened & than secure them. Admins can also use it as a client for auditing web servers, telnet servers, mail servers etc, with ‘nc’ we can control every character sent & can also view the responses to sent queries. - -We can also cause it to capture data being sent by client to understand what they are upto. - -In this tutorial, we are going to learn about how to use ‘nc’ command with 10 examples, - -#### Example: 1) Listen to inbound connections - -Ncat can work in listen mode & we can listen for inbound connections on port number with option ‘l’. Complete command is, - -$ ncat -l port_number - -For example, - -``` -$ ncat -l 8080 -``` - -Server will now start listening to port 8080 for inbound connections. 
- -#### Example: 2) Connect to a remote system - -To connect to a remote system with nc, we can use the following command, - -$ ncat IP_address port_number - -Let’s take an example, - -``` -$ ncat 192.168.1.100 80 -``` - -Now a connection to server with IP address 192.168.1.100 will be made at port 80 & we can now send instructions to server. Like we can get the complete page content with - -GET / HTTP/1.1 - -or get the page name, - -GET / HTTP/1.1 - -or we can get banner for OS fingerprinting with the following, - -HEAD / HTTP/1.1 - -This will tell what software is being used to run the web Server. - -#### Example: 3) Connecting to UDP ports - -By default , the nc utility makes connections only to TCP ports. But we can also make connections to UDP ports, for that we can use option ‘u’, - -``` -$ ncat -l -u 1234 -``` - -Now our system will start listening a udp port ‘1234’, we can verify this using below netstat command, - -``` -$ netstat -tunlp | grep 1234 -udp 0 0 0.0.0.0:1234 0.0.0.0:* 17341/nc -udp6 0 0 :::1234 :::* 17341/nc -``` - -Let’s assume we want to send or test UDP port connectivity to a specific remote host, then use the following command, - -$ ncat -v -u {host-ip} {udp-port} - -example: - -``` -[root@localhost ~]# ncat -v -u 192.168.105.150 53 -Ncat: Version 6.40 ( http://nmap.org/ncat ) -Ncat: Connected to 192.168.105.150:53. -``` - -#### Example: 4) NC as chat tool - -NC can also be used as chat tool, we can configure server to listen to a port & than can make connection to server from a remote machine on same port & start sending message. On server side, run - -``` -$ ncat -l 8080 -``` - -On remote client machine, run - -``` -$ ncat 192.168.1.100 8080 -``` - -Than start sending messages & they will be displayed on server terminal. - -#### Example: 5) NC as a proxy - -NC can also be used as a proxy with a simple command. Let’s take an example, - -``` -$ ncat -l 8080 | ncat 192.168.1.200 80 -``` - -Now all the connections coming to our server on port 8080 will be automatically redirected to 192.168.1.200 server on port 80\. But since we are using a pipe, data can only be transferred & to be able to receive the data back, we need to create a two way pipe. Use the following commands to do so, - -``` -$ mkfifo 2way -$ ncat -l 8080 0<2way | ncat 192.168.1.200 80 1>2way -``` - -Now you will be able to send & receive data over nc proxy. - -#### Example: 6) Copying Files using nc/ncat - -NC can also be used to copy the files from one system to another, though it is not recommended & mostly all systems have ssh/scp installed by default. But none the less if you have come across a system with no ssh/scp, you can also use nc as last ditch effort. - -Start with machine on which data is to be received & start nc is listener mode, - -``` -$ ncat -l 8080 > file.txt -``` - -Now on the machine from where data is to be copied, run the following command, - -``` -$ ncat 192.168.1.100 8080 --send-only < data.txt -``` - -Here, data.txt is the file that has to be sent. –send-only option will close the connection once the file has been copied. If not using this option, than we will have press ctrl+c to close the connection manually. - -We can also copy entire disk partitions using this method, but it should be done with caution. - -#### Example: 7) Create a backdoor via nc/nact - -NC command can also be used to create backdoor to your systems & this technique is actually used by hackers a lot. We should know how it works in order to secure our system. 
To create a backdoor, the command is,
-
-```
-$ ncat -l 10000 -e /bin/bash
-```
-
-‘e‘ flag attaches a bash to port 10000\. Now a client can connect to port 10000 on server & will have complete access to our system via bash,
-
-```
-$ ncat 192.168.1.100 1000
-```
-
-#### Example: 8) Port forwarding via nc/ncat
-
-We can also use NC for port forwarding with the help of option ‘c’ , syntax for accomplishing port forwarding is,
-
-```
-$ ncat -u -l 80 -c 'ncat -u -l 8080'
-```
-
-Now all the connections for port 80 will be forwarded to port 8080.
-
-#### Example: 9) Set Connection timeouts
-
-Listener mode in ncat will continue to run & would have to be terminated manually. But we can configure timeouts with option ‘w’,
-
-```
-$ ncat -w 10 192.168.1.100 8080
-```
-
-This will cause connection to be terminated in 10 seconds, but it can only be used on client side & not on server side.
-
-#### Example: 10) Force server to stay up using -k option in ncat
-
-When client disconnects from server, after sometime server also stops listening. But we can force server to stay connected & continuing port listening with option ‘k’. Run the following command,
-
-```
-$ ncat -l -k 8080
-```
-
-Now server will stay up, even if a connection from client is broken.
-
-With this we end our tutorial, please feel free to ask any question regarding this article using the comment box below.
-
--------------------------------------------------------------------------------
-
-via: https://www.linuxtechi.com/nc-ncat-command-examples-linux-systems/
-
-作者:[Pradeep Kumar][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linuxtechi.com/author/pradeep/
-[1]:https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg
diff --git a/translated/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md b/translated/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md
new file mode 100644
index 0000000000..914aea87ea
--- /dev/null
+++ b/translated/tech/20171206 10 useful ncat (nc) Command Examples for Linux Systems.md
@@ -0,0 +1,199 @@
+10 个例子教你学会 ncat (nc) 命令
+======
+ [![nc-ncat-command-examples-Linux-Systems](https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg)][1]
+
+ncat 或者说 nc 是一款功能类似 cat 的网络工具。它是一款拥有多种功能的 CLI 工具,可以用来在网络上读、写以及重定向数据。 它被设计成可以被脚本或其他程序调用的可靠后端工具。 同时由于它能创建任意所需的连接,因此也是一个很好的网络调试工具。
+
+ncat/nc 既是一个端口扫描工具,也是一款安全工具,还能是一款检测工具,甚至可以做一个简单的 TCP 代理。 由于有这么多的功能,它被誉为是网络界的瑞士军刀。 它是每个系统管理员都应该了解并掌握的工具之一。
+
+在大多数 Debian 发行版中,`nc` 是默认可用的,它会在安装系统的过程中自动被安装。 但是在 CentOS 7 / RHEL 7 的最小化安装中,`nc` 并不会默认被安装。 你需要用下列命令手工安装。
+
+```
+[root@linuxtechi ~]# yum install nmap-ncat -y
+```
+
+系统管理员可以用它来审计系统安全,用它来找出开放的端口然后保护这些端口。 管理员还能用它作为客户端来审计 Web 服务器、远程登录服务器、邮件服务器等, 通过 `nc` 我们可以控制发送的每个字符,也可以查看对方的回应。
+
+我们还可以用它来捕获客户端发送的数据,以便了解这些客户端在做什么。
+
+在本文中,我们会通过 10 个例子来学习如何使用 `nc` 命令。
+
+### 例子: 1) 监听入站连接
+
+通过 `l` 选项,ncat 可以进入监听模式,使我们可以在指定端口监听入站连接。 完整的命令是这样的:
+
+$ ncat -l port_number
+
+比如,
+
+```
+$ ncat -l 8080
+```
+
+服务器就会开始在 8080 端口监听入站连接。
+
+### 例子: 2) 连接远程系统
+
+使用下面命令可以用 nc 来连接远程系统,
+
+$ ncat IP_address port_number
+
+让我们来看个例子,
+
+```
+$ ncat 192.168.1.100 80
+```
+
+这会创建一个连接,连接到 IP 为 192.168.1.100 的服务器上的 80 端口,然后我们就可以向服务器发送指令了。 比如我们可以输入下面内容来获取完整的网页内容
+
+GET / HTTP/1.1
+
+或者获取页面名称,
+
+GET / HTTP/1.1
+
+或者我们可以通过以下方式获得操作系统指纹标识,
+
+HEAD / HTTP/1.1
+
+这会告诉我们使用的是什么软件来运行这个 web 服务器的。
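+
+如果不想手工输入这些指令,也可以借助管道把请求一次性发给 nc。下面是一个示意性的例子(主机地址沿用上文假设的 192.168.1.100,实际使用时请替换成你自己的服务器地址):
+
+```
+# 通过管道非交互地发送一个 HEAD 请求,\r\n 是 HTTP 协议要求的行结束符
+printf 'HEAD / HTTP/1.1\r\nHost: 192.168.1.100\r\nConnection: close\r\n\r\n' | ncat 192.168.1.100 80
+```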
+
+### 例子: 3) 连接 UDP 端口
+
+默认情况下,nc 创建连接时只会连接 TCP 端口。 不过我们可以使用 `u` 选项来连接到 UDP 端口,
+
+```
+$ ncat -l -u 1234
+```
+
+现在我们的系统会开始监听 UDP 的 1234 端口,我们可以使用下面的 netstat 命令来验证这一点,
+
+```
+$ netstat -tunlp | grep 1234
+udp 0 0 0.0.0.0:1234 0.0.0.0:* 17341/nc
+udp6 0 0 :::1234 :::* 17341/nc
+```
+
+假设我们想发送或者说测试某个远程主机 UDP 端口的连通性,我们可以使用下面命令,
+
+$ ncat -v -u {host-ip} {udp-port}
+
+比如:
+
+```
+[root@localhost ~]# ncat -v -u 192.168.105.150 53
+Ncat: Version 6.40 ( http://nmap.org/ncat )
+Ncat: Connected to 192.168.105.150:53.
+```
+
+### 例子: 4) 将 NC 作为聊天工具
+
+NC 也可以作为聊天工具来用,我们可以配置服务器监听某个端口,然后从远程主机上连接到服务器的这个端口,就可以开始发送消息了。 在服务器这端运行:
+
+```
+$ ncat -l 8080
+```
+
+在远程客户端主机上运行:
+
+```
+$ ncat 192.168.1.100 8080
+```
+
+之后开始发送消息,这些消息会在服务器终端上显示出来。
+
+### 例子: 5) 将 NC 作为代理
+
+NC 也可以用来做代理。比如下面这个例子,
+
+```
+$ ncat -l 8080 | ncat 192.168.1.200 80
+```
+
+所有发往我们服务器 8080 端口的连接都会自动转发到 192.168.1.200 上的 80 端口。 不过由于我们使用了管道,数据只能被单向传输。 要能同时接收返回的数据,我们需要创建一个双向管道。 使用下面命令可以做到这点:
+
+```
+$ mkfifo 2way
+$ ncat -l 8080 0<2way | ncat 192.168.1.200 80 1>2way
+```
+
+现在你可以通过 nc 代理来收发数据了。
+
+### 例子: 6) 使用 nc/ncat 拷贝文件
+
+NC 还能用来在系统间拷贝文件,虽然这么做并不推荐,因为绝大多数系统默认都安装了 ssh/scp。 不过如果你恰好遇见一个没有 ssh/scp 的系统的话, 你可以用 nc 来作最后的努力。
+
+在要接收数据的机器上启动 nc 并让它进入监听模式:
+
+```
+$ ncat -l 8080 > file.txt
+```
+
+现在去要被拷贝数据的机器上运行下面命令:
+
+```
+$ ncat 192.168.1.100 8080 --send-only < data.txt
+```
+
+这里,data.txt 是要发送的文件。 --send-only 选项会在文件拷贝完后立即关闭连接。 如果不加该选项,我们就需要手工按下 ctrl+c 来关闭连接。
+
+我们也可以用这种方法拷贝整个磁盘分区,不过请一定要小心。
+
+### 例子: 7) 通过 nc/ncat 创建后门
+
+NC 命令还可以用来在系统中创建后门,并且这种技术也确实被黑客大量使用。 为了保护我们的系统,我们需要知道它是怎么做的。 创建后门的命令为:
+
+```
+$ ncat -l 10000 -e /bin/bash
+```
+
+`e` 标志将一个 bash 与端口 10000 相连。现在客户端只要连接到服务器上的 10000 端口就能通过 bash 获取我们系统的完整访问权限:
+
+```
+$ ncat 192.168.1.100 10000
+```
+
+### 例子: 8) 通过 nc/ncat 进行端口转发
+
+我们通过选项 `c` 来用 NC 进行端口转发,实现端口转发的语法为:
+
+```
+$ ncat -u -l 80 -c 'ncat -u -l 8080'
+```
+
+这样,所有连接到 80 端口的连接都会转发到 8080 端口。
+
+### 例子: 9) 设置连接超时
+
+ncat 的监听模式会一直运行,直到手工终止。 不过我们可以通过选项 `w` 设置超时时间:
+
+```
+$ ncat -w 10 192.168.1.100 8080
+```
+
+这会导致连接在 10 秒后终止,不过这个选项只能用于客户端而不是服务端。
+
+### 例子: 10) 使用 -k 选项强制 ncat 待命
+
+当客户端从服务端断开连接后,过一段时间服务端也会停止监听。 但通过选项 `k` 我们可以强制服务器保持连接并继续监听端口。 命令如下:
+
+```
+$ ncat -l -k 8080
+```
+
+现在即使来自客户端的连接断了,服务端也依然会处于待命状态。
+
+至此,我们的教程就结束了,如有疑问,请在下方留言。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/nc-ncat-command-examples-linux-systems/
+
+作者:[Pradeep Kumar][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxtechi.com/author/pradeep/
+[1]:https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg

From 6d2dd585ccf62cb72f11f02fd5dba1d88047dcab Mon Sep 17 00:00:00 2001
From: wxy
Date: Tue, 12 Dec 2017 14:35:31 +0800
Subject: [PATCH 214/236] =?UTF-8?q?=E7=9B=AE=E5=BD=95=E4=B8=8D=E5=AF=B9?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@darsh8 目录不对~我帮你挪下。

---
 .../{ => talk}/20170131 Book review Ours to Hack and to Own.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename translated/{ => talk}/20170131 Book review Ours to Hack and to Own.md (100%)

diff --git a/translated/20170131 Book review Ours to Hack and to Own.md b/translated/talk/20170131 Book review Ours to Hack and to Own.md
similarity index 100%
rename from translated/20170131 Book review Ours to Hack and to Own.md
rename to translated/talk/20170131 Book review Ours to Hack and to 
Own.md

From 2220feb1ad1c97c6b50be79c1b633abc4abe3e4d Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 12 Dec 2017 14:40:56 +0800
Subject: [PATCH 215/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=207=20rules=20for?=
 =?UTF-8?q?=20avoiding=20documentation=20pitfalls?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...les for avoiding documentation pitfalls.md | 133 ++++++++++++++++++
 1 file changed, 133 insertions(+)
 create mode 100644 sources/tech/20171205 7 rules for avoiding documentation pitfalls.md

diff --git a/sources/tech/20171205 7 rules for avoiding documentation pitfalls.md b/sources/tech/20171205 7 rules for avoiding documentation pitfalls.md
new file mode 100644
index 0000000000..8d4b4407f0
--- /dev/null
+++ b/sources/tech/20171205 7 rules for avoiding documentation pitfalls.md
@@ -0,0 +1,133 @@
+translating by lujun9972
+7 rules for avoiding documentation pitfalls
+====== 
+English serves as the _lingua franca_ in the open source community. To reduce
+translation costs, many teams have switched to English as the source language
+for their documentation. But surprisingly, writing in English for an
+international audience does not necessarily put native English speakers in a
+better position. On the contrary, they tend to forget that the document's
+language might not be the audience's first language.
+
+Let's have a look at the following simple sentence as an example: "Encrypt the
+password using the `foo bar` command." Grammatically, the sentence is correct.
+Given that '-ing' forms (gerunds) are frequently used in the English language
+and most native speakers consider them an elegant way of expressing things,
+native speakers usually do not hesitate to phrase a sentence like this. On
+closer inspection, the sentence is ambiguous because "using" may refer either
+to the object ("the password") or to the verb ("encrypt"). Thus, the sentence
+can be interpreted in two different ways:
+
+ * "Encrypt the password that uses the `foo bar` command."
+ * "Encrypt the password by using the `foo bar` command."
+
+As long as you have previous knowledge about the topic (password encryption or
+the `foo bar` command), you can resolve this ambiguity and correctly decide
+that the second version is the intended meaning of this sentence. But what if
+you lack in-depth knowledge of the topic? What if you are not an expert, but a
+translator with only general knowledge of the subject? Or if you are a
+non-native speaker of English, unfamiliar with advanced grammatical forms like
+gerunds?
+
+Even native English speakers need training to write clear and straightforward
+technical documentation. The first step is to raise awareness about the
+usability of texts and potential problems, so let's look at seven rules that
+can help avoid common pitfalls.
+
+### 1. Know your target audience and step into their shoes.
+
+If you are a developer writing for end users, view the product from their
+perspective. Does the structure reflect the users' goals? The [persona
+technique][1] can help you to focus on the target audience and provide the
+right level of detail for your readers.
+
+### 2. Follow the KISS principle--keep it short and simple.
+
+The principle can be applied on several levels, such as grammar, sentences, or
+words. Here are examples:
+
+ * Use the simplest tense that is appropriate. For example, use present tense when mentioning the result of an action:
+ * "~~Click 'OK.' The 'Printer Options' dialog will appear.~~" -> "Click 'OK.' The 'Printer Options' dialog appears."
+ * As a rule of thumb, present one idea in one sentence; however, short sentences are not automatically easy to understand (especially if they are an accumulation of nouns). Sometimes, trimming down sentences to a certain word count can introduce ambiguities. In turn, this makes the sentences more difficult to understand.
+ * Uncommon and long words slow reading and might be obstacles for non-native speakers. Use simpler alternatives:
+
+ * "~~utilize~~" -> "use"
+ * "~~indicate~~" -> "show," "tell," or "say"
+ * "~~prerequisite~~" -> "requirement"
+
+### 3. Avoid disturbing the reading flow.
+
+Move particles or longer parentheses to the beginning or end of a sentence:
+
+ * "~~They are not, however, marked as installed.~~" -> "However, they are not marked as installed."
+
+Place long commands at the end of a sentence. This also results in better
+segmentation of sentences for automatic or semi-automatic translations.
+
+### 4. Discriminate between two basic information types.
+
+Discriminating between _descriptive information_ and _task-based information_
+is useful. Typical examples of descriptive information are command-line
+references, whereas how-tos are task-based information; however, both
+information types are needed in technical writing. On closer inspection, many
+texts contain a mixture of both information types. Clearly separating the
+information types is helpful. For better orientation, label them accordingly.
+Titles should reflect a section's content and information type. Use noun-based
+titles for descriptive sections ("Types of Frobnicators") and verbally phrased
+titles for task-based sections ("Installing Frobnicators"). This helps readers
+quickly identify the sections they are interested in and allows them to skip
+the ones they don't need at the moment.
+
+### 5. Consider different reading situations and modes of text consumption.
+
+Some of your readers are already frustrated when they turn to the product
+documentation because they could not achieve a certain goal on their own. They
+might also work in a noisy environment that makes it hard to focus on reading.
+Also, do not expect your audience to read cover to cover, as many people skim
+or browse texts for keywords or look up topics by using tables, indexes, or
+full-text search. With that in mind, look at your text from different
+perspectives. Often, compromises are needed to find a text structure that
+works well for multiple situations.
+
+### 6. Break down complex information into smaller chunks.
+
+This makes it easier for the audience to remember and process the information.
+For example, procedures should not exceed seven to 10 steps (according to
+[Miller's Law][2] in cognitive psychology). If more steps are required, split
+the task into separate procedures.
+
+### 7. Form follows function.
+
+Examine your text according to the question: What is the _purpose_ (function)
+of a certain sentence, a paragraph, or a section? For example, is it an
+instruction? A result? A warning? For instructions, use active voice:
+"Configure the system." Passive voice may be appropriate for descriptions:
+"The system is configured automatically." Add warnings _before_ the step or
+action where danger arises. Focusing on the purpose also helps detect
+redundant content and eliminate fillers like "basically" or "easily,"
+unnecessary modifications like "~~already~~ existing" or "~~completely~~
+new," or any content that is not relevant for your target audience.
+
+As you might have guessed by now, writing is re-writing. Good writing requires
+effort and practice. Even if you write only occasionally, you can
+significantly improve your texts by focusing on the target audience and
+following the rules above. The better the readability of a text, the easier it
+is to process, even for audiences with varying language skills. Especially
+when it comes to localization, good quality of the source text is important:
+"Garbage in, garbage out." If the original text has deficiencies, translating
+the text takes longer, resulting in higher costs. In the worst case, flaws are
+multiplied during translation and must be corrected in various languages.
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/12/7-rules
+
+作者:[Tanja Roth][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://opensource.com
+[1]:https://en.wikipedia.org/wiki/Persona_(user_experience)
+[2]:https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two

From 1211adee5f3197124bed888c1366ccda8ee7df32 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 12 Dec 2017 14:55:35 +0800
Subject: [PATCH 216/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Toplip=20?=
 =?UTF-8?q?=E2=80=93=20A=20Very=20Strong=20File=20Encryption=20And=20Decry?=
 =?UTF-8?q?ption=20CLI=20Utility?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ile Encryption And Decryption CLI Utility.md | 282 ++++++++++++++++++
 1 file changed, 282 insertions(+)
 create mode 100644 sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md

diff --git a/sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md b/sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md
new file mode 100644
index 0000000000..ad3528f60b
--- /dev/null
+++ b/sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md
@@ -0,0 +1,282 @@
+Toplip – A Very Strong File Encryption And Decryption CLI Utility
+======
+There are numerous file encryption tools available on the market to protect
+your files. We have already reviewed some encryption tools such as
+[**Cryptomator**][1], [**Cryptkeeper**][2], [**CryptGo**][3], [**Cryptr**][4],
+[**Tomb**][5], and [**GnuPG**][6] etc. Today, we will be discussing yet
+another file encryption and decryption command line utility named
+**"Toplip"**. It is a free and open source encryption utility that uses a very
+strong encryption method called **[AES256][7]**, along with an **XTS-AES**
+design to safeguard your confidential data. Also, it uses [**Scrypt**][8], a
+password-based key derivation function, to protect your passphrases against
+brute-force attacks.
+
+### Prominent features
+
+Compared to other file encryption tools, toplip ships with the following
+unique and prominent features.
+
+ * Very strong XTS-AES256 based encryption method.
+ * Plausible deniability.
+ * Encrypt files inside images (PNG/JPG).
+ * Multiple passphrase protection.
+ * Simplified brute force recovery protection.
+ * No identifiable output markers.
+ * Open source/GPLv3.
+
+### Installing Toplip
+
+There is no installation required. Toplip is a standalone executable binary
+file.
+All you have to do is download the latest toplip from the [**official
+products page**][9] and make it executable. To do so, just run:
+
+```
+chmod +x toplip
+```
+
+### Usage
+
+If you run toplip without any arguments, you will see the help section.
+
+```
+./toplip
+```
+
+[![][10]][11]
+
+Allow me to show you some examples.
+
+For the purpose of this guide, I have created two files namely **file1** and
+**file2**. Also, I have an image file which we need to hide the files
+inside. And finally, I have the **toplip** executable binary file. I have
+kept them all in a directory called **test**.
+
+[![][12]][13]
+
+**Encrypt/decrypt a single file**
+
+Now, let us encrypt **file1**. To do so, run:
+
+```
+./toplip file1 > file1.encrypted
+```
+
+This command will prompt you to enter a passphrase. Once you have given the
+passphrase, it will encrypt the contents of **file1** and save them in a file
+called **file1.encrypted** in your current working directory.
+
+Sample output of the above command would be:
+
+```
+This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip file1 Passphrase #1: generating keys...Done
+Encrypting...Done
+```
+
+To verify if the file is really encrypted, try to open it and you will see
+some random characters.
+
+To decrypt the encrypted file, use the **-d** flag like below:
+
+```
+./toplip -d file1.encrypted
+```
+
+This command will decrypt the given file and display the contents in the
+Terminal window.
+
+To restore the file instead of writing to stdout, do:
+
+```
+./toplip -d file1.encrypted > file1.decrypted
+```
+
+Enter the correct passphrase to decrypt the file. All contents of **file1.encrypted** will be restored in a file called **file1.decrypted**.
+
+Please don't follow this naming method. I used it for the sake of easy
+understanding. Use any other names that are very hard to predict.
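+
+A quick way to convince yourself that the round trip is lossless is to compare
+the decrypted copy with the original. This is just an optional sanity check,
+using the file names from the example above:
+
+```
+# cmp exits with status 0 only if the two files are byte-for-byte identical
+cmp file1 file1.decrypted && echo "files are identical"
+```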
+
+**Encrypt/decrypt multiple files**
+
+Now we will encrypt two files with two separate passphrases for each one.
+
+```
+./toplip -alt file1 file2 > file3.encrypted
+```
+
+You will be asked to enter a passphrase for each file. Use different
+passphrases.
+
+Sample output of the above command will be:
+
+```
+This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
+**file2 Passphrase #1**: generating keys...Done
+**file1 Passphrase #1**: generating keys...Done
+Encrypting...Done
+```
+
+What the above command will do is encrypt the contents of two files and save
+them in a single file called **file3.encrypted**. While restoring, just give
+the respective password. For example, if you give the passphrase of file1,
+toplip will restore file1. If you enter the passphrase of file2, toplip will
+restore file2.
+
+Each **toplip** encrypted output may contain up to four wholly independent
+files, and each created with their own separate and unique passphrase. Due to
+the way the encrypted output is put together, there is no way to easily
+determine whether or not multiple files actually exist in the first place. By
+default, even if only one file is encrypted using toplip, random data is added
+automatically. If more than one file is specified, each with their own
+passphrase, then you can selectively extract each file independently and thus
+deny the existence of the other files altogether. This effectively allows a
+user to open an encrypted bundle with controlled exposure risk, and no
+computationally inexpensive way for an adversary to conclusively identify that
+additional confidential data exists. This is called **Plausible deniability**,
+one of the notable features of toplip.
+
+To decrypt **file1** from **file3.encrypted**, just enter:
+
+```
+./toplip -d file3.encrypted > file1.encrypted
+```
+
+You will be prompted to enter the correct passphrase of file1.
+
+To decrypt **file2** from **file3.encrypted**, enter:
+
+```
+./toplip -d file3.encrypted > file2.encrypted
+```
+
+Do not forget to enter the correct passphrase of file2.
+
+**Use multiple passphrase protection**
+
+This is another cool feature that I admire. We can provide multiple
+passphrases for a single file when encrypting it. This protects the
+passphrases against brute force attempts.
+
+```
+./toplip -c 2 file1 > file1.encrypted
+```
+
+Here, **-c 2** represents two different passphrases. Sample output of the
+above command would be:
+
+```
+This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
+**file1 Passphrase #1:** generating keys...Done
+**file1 Passphrase #2:** generating keys...Done
+Encrypting...Done
+```
+
+As you see in the above example, toplip prompted me to enter two passphrases.
+Please note that you must **provide two different passphrases**, not a single
+passphrase twice.
+
+To decrypt this file, do:
+
+```
+$ ./toplip -c 2 -d file1.encrypted > file1.decrypted
+This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
+**file1.encrypted Passphrase #1:** generating keys...Done
+**file1.encrypted Passphrase #2:** generating keys...Done
+Decrypting...Done
+```
+
+**Hide files inside image**
+
+The practice of concealing a file, message, image, or video within another
+file is called **steganography**. Fortunately, this feature exists in toplip
+by default.
+
+To hide a file(s) inside images, use the **-m** flag as shown below.
+
+```
+$ ./toplip -m image.png file1 > image1.png
+This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
+file1 Passphrase #1: generating keys...Done
+Encrypting...Done
+```
+
+This command conceals the contents of file1 inside an image named image1.png.
+To decrypt it, run:
+
+```
+$ ./toplip -d image1.png > file1.decrypted This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
+image1.png Passphrase #1: generating keys...Done
+Decrypting...Done
+```
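+
+If you want to check that the carrier file still looks like an ordinary image
+after embedding, a simple sanity check like the one below should do (this is
+just an illustrative sketch, reusing the image.png/image1.png names assumed in
+the example above):
+
+```
+# The output should still be identified as PNG image data, and it will
+# typically be somewhat larger than the original because of the payload
+file image.png image1.png
+ls -l image.png image1.png
+```
+
+Both files should open in any image viewer; only toplip, given the right
+passphrase, can recover the hidden data.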
+
+**Increase password complexity**
+
+To make things even harder to break, we can increase the password complexity
+like below.
+
+```
+./toplip -c 5 -i 0x8000 -alt file1 -c 10 -i 10 file2 > file3.encrypted
+```
+
+The above command will prompt you to enter 5 passphrases for file1 and 10
+passphrases for file2, and encrypt both of them in a single file called
+"file3.encrypted". As you may have noticed, we have used one more additional
+flag **-i** in this example. This is used to specify key derivation
+iterations. This option overrides the default iteration count of 1 for
+scrypt's initial and final PBKDF2 stages. Hexadecimal or decimal values are
+permitted, e.g. **0x8000**, **10**, etc. Please note that this can
+dramatically increase the calculation times.
+
+To decrypt file1, use:
+
+```
+./toplip -c 5 -i 0x8000 -d file3.encrypted > file1.decrypted
+```
+
+To decrypt file2, use:
+
+```
+./toplip -c 10 -i 10 -d file3.encrypted > file2.decrypted
+```
+
+To know more about the underlying technical information and crypto methods
+used in toplip, refer to its official website given at the end.
+
+A personal recommendation to all those who want to protect their data: don't
+rely on a single method. Always use more than one tool/method to encrypt
+files. Do not write passphrases/passwords on paper, and do not save them in
+your local or cloud storage. Just memorize them and destroy the notes. If
+you're poor at remembering passwords, consider using a trustworthy password
+manager.
+
+And, that's all. More good stuff to come. Stay tuned!
+
+Cheers!
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/cryptomator-open-source-client-side-encryption-tool-cloud/
+[2]:https://www.ostechnix.com/how-to-encrypt-your-personal-foldersdirectories-in-linux-mint-ubuntu-distros/
+[3]:https://www.ostechnix.com/cryptogo-easy-way-encrypt-password-protect-files/
+[4]:https://www.ostechnix.com/cryptr-simple-cli-utility-encrypt-decrypt-files/
+[5]:https://www.ostechnix.com/tomb-file-encryption-tool-protect-secret-files-linux/
+[6]:https://www.ostechnix.com/an-easy-way-to-encrypt-and-decrypt-files-from-commandline-in-linux/
+[7]:http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
+[8]:http://en.wikipedia.org/wiki/Scrypt
+[9]:https://2ton.com.au/Products/
+[10]:http://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2.png
+[11]:http://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2.png
+[12]:http://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1.png
+[13]:http://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1.png

From d83793cba44482333d3ae39711eb04e33caa6138 Mon Sep 17 00:00:00 2001
From: wxy
Date: Tue, 12 Dec 2017 14:56:17 +0800
Subject: [PATCH 217/236] PRF&PUB:20171129 Suplemon - Modern CLI Text Editor
 with Multi Cursor Support.md

@geekpi

---
 ...I Text Editor with Multi Cursor Support.md | 127 
+++++++++++++++ ...I Text Editor with Multi Cursor Support.md | 151 ------------------ 2 files changed, 127 insertions(+), 151 deletions(-) create mode 100644 published/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md delete mode 100644 translated/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md diff --git a/published/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md b/published/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md new file mode 100644 index 0000000000..9b768997e2 --- /dev/null +++ b/published/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md @@ -0,0 +1,127 @@ +Suplemon:带有多光标支持的现代 CLI 文本编辑器 +====== + +Suplemon 是一个 CLI 中的现代文本编辑器,它模拟 [Sublime Text][1] 的多光标行为和其它特性。它是轻量级的,非常易于使用,就像 Nano 一样。 + +使用 CLI 编辑器的好处之一是,无论你使用的 Linux 发行版是否有 GUI,你都可以使用它。这种文本编辑器也很简单、快速和强大。 + +你可以在其[官方仓库][2]中找到有用的信息和源代码。 + +### 功能 + +这些是一些它有趣的功能: + +* 多光标支持 +* 撤销/重做 +* 复制和粘贴,带有多行支持 +* 鼠标支持 +* 扩展 +* 查找、查找所有、查找下一个 +* 语法高亮 +* 自动完成 +* 自定义键盘快捷键 + +### 安装 + +首先,确保安装了最新版本的 python3 和 pip3。 + +然后在终端输入: + +``` +$ sudo pip3 install suplemon +``` + +### 使用 + +#### 在当前目录中创建一个新文件 + +打开一个终端并输入: + +``` +$ suplemon +``` + +你将看到如下: + +![suplemon new file](https://linoxide.com/wp-content/uploads/2017/11/suplemon-new-file.png) + +#### 打开一个或多个文件 + +打开一个终端并输入: + +``` +$ suplemon ... +``` + +例如: + +``` +$ suplemon example1.c example2.c +``` + +### 主要配置 + +你可以在 `~/.config/suplemon/suplemon-config.json` 找到配置文件。 + +编辑这个文件很简单,你只需要进入命令模式(进入 suplemon 后)并运行 `config` 命令。你可以通过运行 `config defaults` 来查看默认配置。 + +#### 键盘映射配置 + +我会展示 suplemon 的默认键映射。如果你想编辑它们,只需运行 `keymap` 命令。运行 `keymap default` 来查看默认的键盘映射文件。 + +| 操作 | 快捷键 | +| ---- | ---- | +| 退出| `Ctrl + Q`| +| 复制行到缓冲区|`Ctrl + C`| +| 剪切行缓冲区| `Ctrl + X`| +| 插入缓冲区| `Ctrl + V`| +| 复制行| `Ctrl + K`| +| 跳转| `Ctrl + G`。 你可以跳转到一行或一个文件(只需键入一个文件名的开头)。另外,可以输入类似于 `exam:50` 跳转到 `example.c` 第 `50` 行。| +| 用字符串或正则表达式搜索| `Ctrl + F`| +| 搜索下一个| `Ctrl + D`| +| 去除空格| `Ctrl + T`| +| 在箭头方向添加新的光标| `Alt + 方向键`| +| 跳转到上一个或下一个单词或行| `Ctrl + 左/右`| +| 恢复到单光标/取消输入提示| `Esc`| +| 向上/向下移动行| `Page Up` / `Page Down`| +| 保存文件|`Ctrl + S`| +| 用新名称保存文件|`F1`| +| 重新载入当前文件|`F2`| +| 打开文件|`Ctrl + O`| +| 关闭文件|`Ctrl + W`| +| 切换到下一个/上一个文件|`Ctrl + Page Up` / `Ctrl + Page Down`| +| 运行一个命令|`Ctrl + E`| +| 撤消|`Ctrl + Z`| +| 重做|`Ctrl + Y`| +| 触发可见的空格|`F7`| +| 切换鼠标模式|`F8`| +| 显示行号|`F9`| +| 显示全屏|`F11`| + + + +#### 鼠标快捷键 + +* 将光标置于指针位置:左键单击 +* 在指针位置添加一个光标:右键单击 +* 垂直滚动:向上/向下滚动滚轮 + +### 总结 + +在尝试 Suplemon 一段时间后,我改变了对 CLI 文本编辑器的看法。我以前曾经尝试过 Nano,是的,我喜欢它的简单性,但是它的现代特征的缺乏使它在日常使用中变得不实用。 + +这个工具有 CLI 和 GUI 世界最好的东西……简单性和功能丰富!所以我建议你试试看,并在评论中写下你的想法 :-) + +-------------------------------------------------------------------------------- + +via: https://linoxide.com/tools/suplemon-cli-text-editor-multi-cursor/ + +作者:[Ivo Ursino][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://linoxide.com/author/ursinov/ +[1]:https://linoxide.com/tools/install-sublime-text-editor-linux/ +[2]:https://github.com/richrd/suplemon/ diff --git a/translated/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md b/translated/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md deleted file mode 100644 index 4fc430ff2a..0000000000 --- a/translated/tech/20171129 Suplemon - Modern CLI Text Editor with Multi Cursor Support.md +++ /dev/null @@ -1,151 +0,0 @@ -Suplemon - 带有多光标支持的现代 CLI 文本编辑器 -====== 
-Suplemon 是一个 CLI 中的现代文本编辑器,它模拟 [Sublime Text][1] 的多光标行为和其他特性。它是轻量级的,非常易于使用,就像 Nano 一样。 - -使用 CLI 编辑器的好处之一是,无论你使用的 Linux 发行版是否有 GUI,你都可以使用它。这种文本编辑器也很简单、快速和强大。 - -你可以在[官方仓库][2]中找到有用的信息和源代码。 - -### 功能 - -这些事一些它有趣的功能: - -* 多光标支持 - -* 撤销/重做 - -* 复制和粘贴,带有多行支持 - -* 鼠标支持 - -* 扩展 - -* 查找、查找所有、查找下一个 - -* 语法高亮 - -* 自动完成 - -* 自定义键盘快捷键 - -### 安装 - -首先,确保安装了最新版本的 python3 和 pip3。 - -然后在终端输入: - -``` -$ sudo pip3 install suplemon -``` - -在当前目录中创建一个新文件 - -打开一个终端并输入: - -``` -$ suplemon -``` - -![suplemon new file](https://linoxide.com/wp-content/uploads/2017/11/suplemon-new-file.png) - -打开一个或多个文件 - -打开一个终端并输入: - -``` -$ suplemon ... -``` - -``` -$ suplemon example1.c example2.c -``` - -主要配置 - -你可以在这 ~/.config/suplemon/suplemon-config.json 找到配置文件。 - -编辑这个文件很简单,你只需要进入命令模式(进入 suplemon 后)并运行 config 命令。你可以通过运行 config defaults 来查看默认配置。 - -键盘映射配置 - -我会展示 suplemon 的默认键映射。如果你想编辑它们,只需运行 keymap 命令。运行 keymap default 来查看默认的键盘映射文件。 - -* 退出: Ctrl + Q - -* 复制行到缓冲区:Ctrl + C - -* 剪切行缓冲区: Ctrl + X - -* 插入缓冲区: Ctrl + V - -* 复制行: Ctrl + K - -* 跳转: Ctrl + G。 你可以跳转到一行或一个文件(只需键入一个文件名的开头)。另外,可以输入类似于 “exam:50” 跳转到 example.c 的第 50行。 - -* 用字符串或正则表达式搜索: Ctrl + F - -* 搜索下一个: Ctrl + D - -* 去除空格: Ctrl + T - -* 在箭头方向添加新的光标: Alt + 方向键 - -* 跳转到上一个或下一个单词或行: Ctrl + 左/右 - -* 恢复到单光标/取消输入提示: Esc - -* 向上/向下移动行: Page Up / Page Down - -* 保存文件:Ctrl + S - -* 用新名称保存文件:F1 - -* 重新载入当前文件:F2 - -* 打开文件:Ctrl + O - -* Close file: 关闭文件: - -* 切换到下一个/上一个文件:Ctrl + Page Up / Ctrl + Page Down - -* 运行一个命令:Ctrl + E - -* 撤消:Ctrl + Z - -* 重做:Ctrl + Y - -* 触发可见的空格:F7 - -* 切换鼠标模式:F8 - -* 显示行号:F9 - -* 显示全屏:F11 - -鼠标快捷键 - -* 将光标置于指针位置:左键单击 - -* 在指针位置添加一个光标:右键单击 - -* 垂直滚动:向上/向下滚动滚轮 - -### 总结 - -在尝试 Suplemon 一段时间后,我改变了对 CLI 文本编辑的看法。我以前曾经尝试过 Nano,是的,我喜欢它的简单性,但是它的现代特征的缺乏使它在日常使用中变得不实用。 - -这个工具有 CLI 和 GUI 世界最好的东西。。。简单性和功能丰富!所以我建议你试试看,并在评论中写下你的想法 :-) - --------------------------------------------------------------------------------- - -via: https://linoxide.com/tools/suplemon-cli-text-editor-multi-cursor/ - -作者:[Ivo Ursino][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://linoxide.com/author/ursinov/ -[1]:https://linoxide.com/tools/install-sublime-text-editor-linux/ -[2]:https://github.com/richrd/suplemon/ From 1ba6dd22898654328354d73a559a3ddbaa8c6a43 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 12 Dec 2017 17:00:09 +0800 Subject: [PATCH 218/236] PRF&PUB:20171113 Glitch write fun small web projects instantly.md @geekpi --- ... write fun small web projects instantly.md | 25 ++++++++----------- 1 file changed, 11 insertions(+), 14 deletions(-) rename {translated/tech => published}/20171113 Glitch write fun small web projects instantly.md (64%) diff --git a/translated/tech/20171113 Glitch write fun small web projects instantly.md b/published/20171113 Glitch write fun small web projects instantly.md similarity index 64% rename from translated/tech/20171113 Glitch write fun small web projects instantly.md rename to published/20171113 Glitch write fun small web projects instantly.md index fde7d7f880..7e041f0e73 100644 --- a/translated/tech/20171113 Glitch write fun small web projects instantly.md +++ b/published/20171113 Glitch write fun small web projects instantly.md @@ -1,19 +1,18 @@ -Glitch:立即写出有趣的小型网站项目 +Glitch:可以让你立即写出有趣的小型网站 ============================================================ -我刚写了一篇关于 Jupyter Notebooks 是一个有趣的交互式写 Python 代码的方式。这让我想起我最近学习了 Glitch,这个我同样喜爱!我构建了一个小的程序来用于[关闭转发 twitter][2]。因此有了这篇文章! 
+我刚写了一篇关于 Jupyter Notebooks 的文章,它是一个有趣的交互式写 Python 代码的方式。这让我想起我最近学习了 Glitch,这个我同样喜爱!我构建了一个小的程序来用于[关闭转发 twitter][2]。因此有了这篇文章! -[Glitch][3] 是一个简单的构建 Javascript web 程序的方式(javascript 后端、javascript 前端) +[Glitch][3] 是一个简单的构建 Javascript web 程序的方式(javascript 后端、javascript 前端)。 -关于 glitch 有趣的事有: +关于 glitch 有趣的地方有: 1. 你在他们的网站输入 Javascript 代码 - 2. 只要输入了任何代码,它会自动用你的新代码重载你的网站。你甚至不必保存!它会自动保存。 所以这就像 Heroku,但更神奇!像这样的编码(你输入代码,代码立即在公共网络上运行)对我而言感觉很**有趣**。 -这有点像 ssh 登录服务器,编辑服务器上的 PHP/HTML 代码,并让它立即可用,这也是我所喜爱的。现在我们有了“更好的部署实践”,而不是“编辑代码,它立即出现在互联网上”,但我们并不是在谈论严肃的开发实践,而是在讨论编写微型程序的乐趣。 +这有点像用 ssh 登录服务器,编辑服务器上的 PHP/HTML 代码,它立即就可用了,而这也是我所喜爱的方式。虽然现在我们有了“更好的部署实践”,而不是“编辑代码,让它立即出现在互联网上”,但我们并不是在谈论严肃的开发实践,而是在讨论编写微型程序的乐趣。 ### Glitch 有很棒的示例应用程序 @@ -22,18 +21,16 @@ Glitch 似乎是学习编程的好方式! 比如,这有一个太空侵略者游戏(由 [Mary Rose Cook][4] 编写):[https://space-invaders.glitch.me/][5]。我喜欢的是我只需要点击几下。 1. 点击 “remix this” - 2. 开始编辑代码使箱子变成橘色而不是黑色 - 3. 制作我自己太空侵略者游戏!我的在这:[http://julias-space-invaders.glitch.me/][1]。(我只做了很小的更改使其变成橘色,没什么神奇的) 他们有大量的示例程序,你可以从中启动 - 例如[机器人][6]、[游戏][7]等等。 ### 实际有用的非常好的程序:tweetstorms -我学习 Glitch 的方式是从这个程序:[https://tweetstorms.glitch.me/][8],它会向你展示给定用户的 tweetstorm。 +我学习 Glitch 的方式是从这个程序开始的:[https://tweetstorms.glitch.me/][8],它会向你展示给定用户的推特云。 -比如,你可以在 [https://tweetstorms.glitch.me/sarahmei][10] 看到 [@sarahmei][9] 的 tweetstorm(她发布了很多好的 tweetstorm!)。 +比如,你可以在 [https://tweetstorms.glitch.me/sarahmei][10] 看到 [@sarahmei][9] 的推特云(她发布了很多好的 tweetstorm!)。 ### 我的 Glitch 程序: 关闭转推 @@ -41,11 +38,11 @@ Glitch 似乎是学习编程的好方式! 我喜欢我不必设置一个本地开发环境,我可以直接开始输入然后开始! -Glitch 只支持 Javascript,我不非常了解 Javascript(我之前从没写过一个 Node 程序),所以代码不是很好。但是编写它很愉快 - 能够输入并立即看到我的代码运行是令人愉快的。这是我的项目:[https://turn-off-retweets.glitch.me/][11]。 +Glitch 只支持 Javascript,我不是非常了解 Javascript(我之前从没写过一个 Node 程序),所以代码不是很好。但是编写它很愉快 - 能够输入并立即看到我的代码运行是令人愉快的。这是我的项目:[https://turn-off-retweets.glitch.me/][11]。 ### 就是这些! -使用 Glitch 感觉真的很有趣和民主。通常情况下,如果我想 fork 某人的 Web 项目,并做出更改,我不会这样做 - 我必须 fork,找一个托管,设置本地开发环境或者 Heroku 或其他,安装依赖项等。我认为像安装 node.js 依赖关系这样的任务过去很有趣,就像“我正在学习新东西很酷”,现在我觉得它们很乏味。 +使用 Glitch 感觉真的很有趣和民主。通常情况下,如果我想 fork 某人的 Web 项目,并做出更改,我不会这样做 - 我必须 fork,找一个托管,设置本地开发环境或者 Heroku 或其他,安装依赖项等。我认为像安装 node.js 依赖关系这样的任务在过去很有趣,就像“我正在学习新东西很酷”,但现在我觉得它们很乏味。 所以我喜欢只需点击 “remix this!” 并立即在互联网上能有我的版本。 @@ -53,9 +50,9 @@ Glitch 只支持 Javascript,我不非常了解 Javascript(我之前从没写 via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/ -作者:[Julia Evans ][a] +作者:[Julia Evans][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c22adbd65816fd34030f7b14f7b7dd1d5ba67aa8 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 12 Dec 2017 18:26:57 +0800 Subject: [PATCH 219/236] =?UTF-8?q?=E9=80=89=E9=A2=98:?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0171119 10 Best LaTeX Editors For Linux.md | 264 ++++++++++++++++++ 1 file changed, 264 insertions(+) create mode 100644 sources/tech/20171119 10 Best LaTeX Editors For Linux.md diff --git a/sources/tech/20171119 10 Best LaTeX Editors For Linux.md b/sources/tech/20171119 10 Best LaTeX Editors For Linux.md new file mode 100644 index 0000000000..467257a68d --- /dev/null +++ b/sources/tech/20171119 10 Best LaTeX Editors For Linux.md @@ -0,0 +1,264 @@ +10 Best LaTeX Editors For Linux +====== +**Brief: Once you get over the learning curve, there is nothing like LaTex. +Here are the best LaTex editors for Linux and other systems.** + +## What is LaTeX? 
+
+[LaTeX][1] is a document preparation system. Unlike a plain text editor, you
+can't just write plain text using LaTeX editors. Here, you will have to
+utilize LaTeX commands in order to manage the content of the document.
+
+![LaTeX Sample][3]
+
+LaTeX editors are generally used to publish scientific research documents or
+books for academic purposes. Most importantly, LaTeX editors come in handy
+while dealing with a document containing complex mathematical notations.
+Surely, LaTeX editors are fun to use. But they are not that useful unless you
+have specific needs for a document.
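+
+To give you a feel for those commands, here is a minimal, illustrative LaTeX
+document (the title, author, and text are just made-up placeholder content);
+any of the editors discussed below can compile something like this into a PDF:
+
+```
+\documentclass{article}
+\title{A Minimal Example}
+\author{Jane Doe}
+\begin{document}
+\maketitle
+Body text needs no manual formatting, and $E = mc^2$ is an inline formula.
+\end{document}
+```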
+
+## Why should you use LaTeX?
+
+Well, just like I previously mentioned, LaTeX editors are meant for specific
+purposes. You do not need to be a geek to figure out how to use LaTeX
+editors, but they are not a productive solution for users whose needs are met
+by basic text editors.
+
+If you are looking to craft a document but you are not interested in spending
+time formatting the text, then a LaTeX editor is what you should go for. With
+LaTeX editors, you just have to specify the type of document, and the text
+font and sizes will be taken care of accordingly. No wonder it is considered
+one of the [best open source tools for writers][4].
+
+Do note that it isn't fully automated; you will first have to learn LaTeX
+commands so the editor can handle the text formatting with precision.
+
+## 10 Of The Best LaTeX Editors For Linux
+
+Just for information, the list is not in any specific order. The editor at
+number three is not better than the one at number seven.
+
+### 1\. LyX
+
+![][5]
+
+LyX is an open source LaTeX editor. In other words, it is one of the best
+document processors available on the web. LyX helps you focus on the structure
+of the write-up, just as every LaTeX editor should, and lets you forget about
+word formatting. LyX manages the formatting according to the type of document
+specified. You get to control a lot of things while you have it installed:
+the margins, headers/footers, spacing/indents, tables, and so on.
+
+If you are into crafting scientific documents, research thesis, or similar,
+you will be delighted to experience LyX's formula editor, which should be a
+charm to use. LyX also includes a set of tutorials to get started without much
+of a hassle.
+
+[Lyx][6]
+
+### 2\. Texmaker
+
+![][7]
+
+Texmaker is considered to be one of the best LaTeX editors for the GNOME
+desktop environment. It presents a great user interface which results in a
+good user experience. It is also crowned as one of the most useful LaTeX
+editors there is. If you perform PDF conversions often, you will find Texmaker
+to be relatively faster than other LaTeX editors. You can take a look at a
+preview of what the final document would look like while you write. Also, the
+symbols are easy to reach when needed.
+
+Texmaker also offers extensive support for hotkey configuration. Why not
+give it a try?
+
+[Texmaker][8]
+
+### 3\. TeXstudio
+
+![][9]
+
+If you want a LaTeX editor which offers you a decent level of customizability
+along with an easy-to-use interface, then TeXstudio would be the perfect one
+to have installed. The UI is surely very simple but not clumsy. TeXstudio lets
+you highlight syntax, comes with an integrated viewer, lets you check the
+references and also bundles some other assistant tools.
+
+It also supports some cool features like auto-completion, link overlay,
+bookmarks, multi-cursors, and so on - which makes writing a LaTeX document
+easier than ever before.
+
+TeXstudio is actively maintained, which makes it a compelling choice for both
+novice users and advanced writers.
+
+[TeXstudio][10]
+
+### 4\. Gummi
+
+![][11]
+
+Gummi is a very simple LaTeX editor based on the GTK+ toolkit. Well, you may
+not find a lot of fancy options here but if you are just starting out - Gummi
+will be our recommendation. It supports exporting the documents to PDF
+format, lets you highlight syntax, and helps you with some basic error
+checking functionalities. Though Gummi isn't actively maintained on GitHub,
+it works just fine.
+
+[Gummi][12]
+
+### 5\. TeXpen
+
+![][13]
+
+TeXpen is yet another simplified tool to go with. You get the auto-completion
+functionality with this LaTeX editor. However, you may not find the user
+interface impressive. If you do not mind the UI, but want a super easy LaTeX
+editor, TeXpen could fulfill that wish for you. Also, TeXpen lets you
+correct/improve the English grammar and expressions used in the document.
+
+[TeXpen][14]
+
+### 6\. ShareLaTeX
+
+![][15]
+
+ShareLaTeX is an online LaTeX editor. If you want someone (or a group of
+people) to collaborate on documents you are working on, this is what you need.
+
+It offers a free plan along with several paid packages. Even the students of
+Harvard University & Oxford University utilize this for their projects. With
+the free plan, you get the ability to add one collaborator.
+
+The paid packages let you sync the documents on GitHub and Dropbox along with
+the ability to record the full document history. You can choose to have
+multiple collaborators as per your plan. For students, there's a separate
+pricing plan available.
+
+[ShareLaTeX][16]
+
+### 7\. Overleaf
+
+![][17]
+
+Overleaf is yet another online LaTeX editor. Similar to ShareLaTeX, it offers
+separate pricing plans for professionals and students. It also includes a free
+plan where you can sync with GitHub, check your revision history, and add
+multiple collaborators.
+
+There's a limit on the number of files you can create per project - so it
+could bother you if you are a professional working with LaTeX documents most
+of the time.
+
+[Overleaf][18]
+
+### 8\. Authorea
+
+![][19]
+
+Authorea is a wonderful online LaTeX editor. However, it is not the best out
+there - when considering the pricing plans. For free, it offers just 100 MB of
+data upload limit and 1 private document at a time. The paid plans offer you
+more perks but it may not be the cheapest from the lot. The only reason you
+should choose Authorea is the user interface. If you love to work with a tool
+offering an impressive user interface, there's no looking back.
+
+[Authorea][20]
+
+### 9\. Papeeria
+
+![][21]
+
+Papeeria is the cheapest LaTeX editor you can find on the Internet -
+considering it is as reliable as the others. You do not get private projects
+if you want to utilize it for free. But, if you prefer public projects it lets
+you work on an unlimited number of projects with numerous collaborators. It
+features a pretty simple plot builder and includes Git sync for no additional
+cost. If you opt for the paid plan, it will empower you with the ability to
+work on 10 private projects.
+
+[Papeeria][22]
+
+### 10\. Kile
+
+![Kile LaTeX editor][23]
+
+Last entry in our list of the best LaTeX editors is Kile. Some people swear by
+Kile, primarily because of the features it provides.
+
+Kile is more than just an editor. It is an IDE tool like Eclipse that provides
+a complete environment to work on documents and projects. Apart from quick
+compilation and preview, you get features like auto-completion of commands,
+citation insertion, organizing the document in chapters, etc. You really have
+to use Kile to realize its true potential.
+
+Kile is available for Linux and Windows.
+
+[Kile][24]
+
+### Wrapping Up
+
+So, there go our recommendations for the LaTeX editors you should utilize on
+Ubuntu/Linux.
+
+There are chances that we might have missed some interesting LaTeX editors
+available for Linux. If you happen to know about any, let us know down in the
+comments below.
+
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/latex-editors-linux/
+
+作者:[Ankush Das][a]
+译者:[翻译者ID](https://github.com/翻译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/ankush/
+[1]:https://www.latex-project.org/
+[3]:https://itsfoss.com/wp-content/uploads/2017/11/latex-sample-example.jpeg
+[4]:https://itsfoss.com/open-source-tools-writers/
+[5]:https://itsfoss.com/wp-content/uploads/2017/10/lyx_latex_editor.jpg
+[6]:https://www.lyx.org/
+[7]:https://itsfoss.com/wp-content/uploads/2017/10/texmaker_latex_editor.jpg
+[8]:http://www.xm1math.net/texmaker/
+[9]:https://itsfoss.com/wp-content/uploads/2017/10/tex_studio_latex_editor.jpg
+[10]:https://www.texstudio.org/
+[11]:https://itsfoss.com/wp-content/uploads/2017/10/gummi_latex_editor.jpg
+[12]:https://github.com/alexandervdm/gummi
+[13]:https://itsfoss.com/wp-content/uploads/2017/10/texpen_latex_editor.jpg
+[14]:https://sourceforge.net/projects/texpen/
+[15]:https://itsfoss.com/wp-content/uploads/2017/10/sharelatex.jpg
+[16]:https://www.sharelatex.com/
+[17]:https://itsfoss.com/wp-content/uploads/2017/10/overleaf.jpg
+[18]:https://www.overleaf.com/
+[19]:https://itsfoss.com/wp-content/uploads/2017/10/authorea.jpg
+[20]:https://www.authorea.com/
+[21]:https://itsfoss.com/wp-content/uploads/2017/10/papeeria_latex_editor.jpg
+[22]:https://www.papeeria.com/
+[23]:https://itsfoss.com/wp-content/uploads/2017/11/kile-latex-800x621.png
+[24]:https://kile.sourceforge.io/

From 3bb0b0593ddb7ad919ab91796a8aa478a2c9d1bc Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 12 Dec 2017 22:38:54 +0800
Subject: [PATCH 220/236] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=AF=95?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...e Zombie Processes And How To Find & Kill Zombie Processes-.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {sources => translated}/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md (100%)

diff --git a/sources/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md b/translated/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md
similarity index 100%
rename from sources/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md
rename to translated/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie 
Processes-.md
From e1fa5c2c284bd3d0b4af89b5dbdad32a117c1fe8 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 12 Dec 2017 22:39:44 +0800
Subject: [PATCH 221/236] =?UTF-8?q?update=20at=202017=E5=B9=B4=2012?=
 =?UTF-8?q?=E6=9C=88=2012=E6=97=A5=20=E6=98=9F=E6=9C=9F=E4=BA=8C=2022:39:4?=
 =?UTF-8?q?4=20CST?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...nd How To Find & Kill Zombie Processes-.md | 76 ++++++++-----------
 1 file changed, 32 insertions(+), 44 deletions(-)

diff --git a/translated/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md b/translated/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md
index a0d01b195a..79e13fcc01 100644
--- a/translated/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md
+++ b/translated/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md
@@ -1,79 +1,67 @@
-translating by lujun9972
-What Are Zombie Processes And How To Find & Kill Zombie Processes?
+什么是僵尸进程以及如何找到并杀掉僵尸进程?
 ======
- [![What Are Zombie Processes And How To Find & Kill Zombie Processes?](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-are-the-zombie-processes_orig.jpg)][1]
+ [![What Are Zombie Processes And How To Find & Kill Zombie Processes?](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-are-the-zombie-processes_orig.jpg)][1]

-If you are a regular Linux user, you must have encountered the term `Zombie Processes`. So what are the Zombie Processes? How do they get created? Are they harmful to the system? How do I kill these processes? Keep reading for the answers to all these questions.
+如果你经常使用 Linux,你应该遇到过 `僵尸进程` 这个术语。 那么什么是僵尸进程? 它们是怎么产生的? 它们是否对系统有害? 我要怎样杀掉这些进程? 下面将会回答这些问题。

-### What are Zombie Processes?
+### 什么是僵尸进程?

-So we all know how processes work. We launch a program, start our task & once our task is over, we end that process. Once the process has ended, it has to be removed from the processes table.
+我们都知道进程的工作原理。我们启动一个程序,开始我们的任务,然后等任务结束了,我们就停止这个进程。 进程停止后, 该进程就会从进程表中移除。

-​
+你可以通过 `System-Monitor` 查看当前进程。

-You can see the current processes in the 'System-Monitor'.
+ [![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-check-zombie-processes_orig.jpg)][2]

- [![Replace the pid with the id of the parent process so that the parent process will remove all the child processes that are dead and completed. Imagine it Like this : "You find a dead body in the middle of the road, you call the dead body's family and they take that body away from the road." But a lot of programs are not programmed well enough to remove these child zombies because if they were, you wouldn't have those zombies in the first place. So the only thing guaranteed to remove Child Zombies is killing the parent.](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-check-zombie-processes_orig.jpg)][2]
+但是,有时候有些程序即使执行完了也依然留在进程表中。

-But, sometimes some of these processes stay in the processes table even after they have completed execution.
+那么,这些完成了生命周期但却依然留在进程表中的进程,我们称之为 `僵尸进程`。

-​
+### 它们是如何产生的?

-So these processes that have completed their life of execution but still exist in the processes table are called 'Zombie Processes'.
+当你运行一个程序时,它会产生一个父进程以及很多子进程。 所有这些子进程都会消耗内核分配给它们的内存和 CPU 资源。

-### And How Exactly do they get Created?

-Whenever we run a program it creates a parent process and a lot of child processes. 
All of these child processes use resources such as memory and CPU allocated to them by the kernel.
+这些子进程完成执行后会发送一个 Exit 信号然后死掉。这个 Exit 信号需要被父进程所读取。父进程需要随后调用 `wait` 命令来读取子进程的退出状态,并将子进程从进程表中移除。

-​
+若父进程正确地读取了子进程的 Exit 信号,则子进程会从进程表中删掉。

-Once these child processes have finished executing they send an Exit call and die. This Exit call has to be read by the parent process which later calls the wait command to read the exit_status of the child process so that the child process can be removed from the processes table.
+但若父进程未能读取到子进程的 Exit 信号,则这个子进程虽然完成执行处于死亡的状态,但也不会从进程表中删掉。

-If the Parent reads the Exit call correctly sent by the Child Process, the process is removed from the processes table.
+### 僵尸进程对系统有害吗?

-But, if the parent fails to read the exit call from the child process, the child process which has already finished its execution and is now dead will not be removed from the processes table.
+**不会**。由于僵尸进程并不做任何事情, 不会使用任何资源也不会影响其他进程, 因此存在僵尸进程也没什么坏处。 不过由于进程表中的退出状态以及其他一些进程信息也是存储在内存中的,因此存在太多僵尸进程有时也会是一些问题。

-### Are Zombie processes harmful to the System?
+**你可以想象成这样:**

-**No. **
+"你是一家建筑公司的老板。你每天根据工人们的工作量来支付工资。 有一个工人每天来到施工现场,就坐在那里, 你不用付钱, 他也不做任何工作。 他只是每天都来然后呆坐在那,仅此而已!"

-Since zombie process is not doing any work, not using any resources or affecting any other process, there is no harm in having a zombie process. But since the exit_status and other process information from the process table are stored in the RAM, having too many Zombie processes can sometimes be an issue.
+这个工人就是僵尸进程的一个活生生的例子。**但是**, 如果你有很多僵尸工人, 你的建筑工地就会很拥堵,从而让那些正常的工人难以工作。

- **_Imagine it Like this :_**
+### 那么如何找出僵尸进程呢?

-"

- _You are the owner of a construction company. You pay daily wages to all your workers depending upon how they work. _ _A worker comes to the construction site every day, just sits there, you don't have to pay him, he doesn't do any work. _ _He just comes every day and sits, that's it !"_
+打开终端并输入下面命令:

-Such a worker is the living example of a zombie process.
+```

-**But,**
+ps aux | grep Z

-if you have a lot of zombie workers, your construction site will get crowded and it might get difficult for the people that are actually working.
+```

-### So how to find Zombie Processes?
+会列出进程表中所有僵尸进程的详细内容。

-Fire up a terminal and type the following command -
+### 如何杀掉僵尸进程?

-ps aux | grep Z
+正常情况下我们可以用 `SIGKILL` 信号来杀死进程,但是僵尸进程已经死了, 你不能杀死已经死掉的东西。 因此你需要输入的命令应该是

-You will now get details of all zombie processes in the processes table.
+```

-### How to kill Zombie processes?
+kill -s SIGCHLD pid

-Normally we kill processes with the SIGKILL command but zombie processes are already dead. You Cannot kill something that is already dead. So what you do is you type this command -
+```

-kill -s SIGCHLD pid
+将这里的 pid 替换成父进程的 id,这样父进程就会删除所有已经完成并死掉的子进程了。

-​Replace the pid with the id of the parent process so that the parent process will remove all the child processes that are dead and completed.
+**你可以把它想象成:**

- **_Imagine it Like this :_**
+"你在道路中间发现一具尸体,于是你联系了死者的家属,随后他们就会将尸体带离道路了。"

-"

- _You find a dead body in the middle of the road, you call the dead body's family and they take that body away from the road."_
+不过许多程序写得不是那么好,无法删掉这些子僵尸(否则你一开始也见不到这些僵尸了)。 因此确保删除子僵尸的唯一方法就是杀掉它们的父进程。
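+
+(译注:下面补充一个制造僵尸进程的最小演示,原文中没有这段代码。`exec` 之后的 sleep 60 成为父进程,它不会调用 `wait`,因此 sleep 2 退出后就会变成僵尸进程,直到父进程结束:)
+
+```
+bash -c 'sleep 2 & exec sleep 60' &
+sleep 3
+ps aux | grep Z   # 可以看到一个标记为 <defunct> 的 sleep 僵尸进程
+```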
--------------------------------------------------------------------------------

From 71631b8e773634b80fef0374a0feecb42bb68b08 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 12 Dec 2017 22:48:19 +0800
Subject: [PATCH 222/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20The=20Biggest=20P?=
 =?UTF-8?q?roblems=20With=20UC=20Browser?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...08 The Biggest Problems With UC Browser.md | 93 +++++++++++++++++++
 1 file changed, 93 insertions(+)
 create mode 100644 sources/tech/20171208 The Biggest Problems With UC Browser.md

diff --git a/sources/tech/20171208 The Biggest Problems With UC Browser.md b/sources/tech/20171208 The Biggest Problems With UC Browser.md
new file mode 100644
index 0000000000..0a18d1b802
--- /dev/null
+++ b/sources/tech/20171208 The Biggest Problems With UC Browser.md
@@ -0,0 +1,93 @@
The Biggest Problems With UC Browser
======
Before we even begin talking about the cons, I want to establish the fact that
I have been a devoted UC Browser user for the past 3 years. I really love the
download speeds I get, the ultra-sleek user interface and the eye-catching
icons used for tools. I was a Chrome for Android user in the beginning, but I
migrated to UC on a friend's recommendation. But in the past year or so, I
have seen some changes that have made me rethink my choice, and now I feel
like migrating back to Chrome again.

### The Unwanted **Notifications**

I am sure I am not the only one who gets these unwanted notifications every
few hours. These clickbait articles are a real pain, and the worst part is
that you get them every few hours.

[![uc browser's annoying ads notifications][1]][1]

I tried closing them down from the notification settings, but they still kept
appearing, just with less frequency.

### The **News Homepage**

Another unwanted section that is completely useless. We completely understand
that UC Browser is free to download and it may require funding, but this is
not the way to do it. The homepage features news articles that are extremely
distracting and unwanted. Sometimes, when you are in a professional or family
environment, some of this clickbait might even cause awkwardness.

[![uc browser's embarrassing news homepage][2]][2]

And they even have a setting for that: to turn the **UC News Display ON /
OFF**. And guess what, I tried that too. In the image below, you can see
my efforts on the left-hand side and the output on the right-hand side.

[![uc browser homepage settings][3]][3]

And as if clickbait news weren't enough, they have started adding some
unnecessary features. So let's include them as well.

### UC **Music**

UC Browser integrated a **music player** in the browser to play music. It's
just something that works, nothing too fancy. So why even have it? What's the
point? Who needs a music player in their browser?

[![uc browser adds uc music player][4]][4]

It's not even like it will play audio from the web directly via that player in
the background. Instead, it is a music player that plays offline music. So why
have it? I mean, it is not even good enough to be used as a primary music
player. Even if it were, it doesn't run independently of UC Browser. So why
would someone have their browser running just to use its music player? 
+

### The **Quick** Access Bar

I have seen 9 out of 10 average users with this bar hanging around in their
notification area, because it comes enabled by default with the installation
and they don't know how to get rid of it. The settings on the right get the
job done.

[![uc browser annoying quick access bar][5]][5]

But I still want to ask, "Why does it come by default?" It's a headache for
most users. If we want it, we will enable it. Why force it on the users,
though?

### Conclusion

UC Browser is still one of the top players in the game. It provides one of the
best experiences; however, I am not sure what UC is trying to prove by packing
more and more unwanted features into their browser and forcing the user to use
them.

I have loved UC for its speed and design. But recent experiences have led me
to have second thoughts about my primary browser.

--------------------------------------------------------------------------------

via: https://www.theitstuff.com/biggest-problems-uc-browser

作者:[Rishabh Kandari][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.theitstuff.com/author/reevkandari
[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-6.png
[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1-1.png
[3]:http://www.theitstuff.com/wp-content/uploads/2017/12/uceffort.png
[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-3-1.png
[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-4-1.png

From d4e353411a16f48d2a7a968eb8e1ceb8ddc5e9db Mon Sep 17 00:00:00 2001
From: geekpi
Date: Wed, 13 Dec 2017 08:47:14 +0800
Subject: [PATCH 223/236] translated

---
 ...Unity from the Dead as an Official Spin.md | 43 -------------------
 ...Unity from the Dead as an Official Spin.md | 41 ++++++++++++++++++
 2 files changed, 41 insertions(+), 43 deletions(-)
 delete mode 100644 sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md
 create mode 100644 translated/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md

diff --git a/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md b/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md
deleted file mode 100644
index d50a3cdfc5..0000000000
--- a/sources/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md
+++ /dev/null
@@ -1,43 +0,0 @@
-translating---geekpi

-Someone Tries to Bring Back Ubuntu's Unity from the Dead as an Official Spin
-============================================================
-
-
-
-> The Ubuntu Unity remix would be supported for nine months
-
-Canonical's sudden decision of killing its Unity user interface after seven years affected many Ubuntu users, and it looks like someone now tries to bring it back from the dead as an unofficial spin.
-
-Long-time [Ubuntu][1] member Dale Beaudoin [ran a poll][2] last week on the official Ubuntu forums to take the pulse of the community and see if they are interested in an Ubuntu Unity Remix that would be released alongside Ubuntu 18.04 LTS (Bionic Beaver) next year and be supported for nine months or five years. 
- -Thirty people voted in the poll, with 67 percent of them opting for an LTS (Long Term Support) release of the so-called Ubuntu Unity Remix, while 33 percent voted for the 9-month supported release. It also looks like this upcoming Ubuntu Unity Spin [looks to become an official flavor][3], yet this means commitment from those developing it. - -"A recent poll voted 2/3rds in favor of Ubuntu Unity to become an LTS distribution. We should try to work this cycle assuming that it will be LTS and an official flavor," said Dale Beaudoin. "We will try and release an updated ISO once every week or 10 days using the current 18.04 daily builds of default Ubuntu Bionic Beaver as a platform." - -### Is Ubuntu Unity making a comeback? - -The last Ubuntu version to ship with Unity by default was Ubuntu 17.04 (Zesty Zapus), which will reach end of life on January 2018\. Ubuntu 17.10 (Artful Artful), the current stable release of the popular operating system, is the first to use the GNOME desktop environment by default for the main Desktop edition as Canonical CEO [announced][4] earlier this year that Unity would no longer be developed. - -However, Canonical is still offering the Unity desktop environment from the official software repositories, so if someone wants to install it, it's one click away. But the bad news is that they'll be supported up until the release of Ubuntu 18.04 LTS (Bionic Beaver) in April 2018, so the developers of the Ubuntu Unity Remix would have to continue to keep in on life support on their a separate repository. - -On the other hand, we don't believe Canonical will change their mind and accept this Ubuntu Unity Spin to become an official flavor, which would mean they failed to continue development of Unity, and now a handful of people can do it. Most probably, if interest in this Ubuntu Unity Remix won't fade away soon, it will be an unofficial spin supported by the nostalgic community. - -Question is, would you be interested in an Ubuntu Unity spin, official or not? 
-

--------------------------------------------------------------------------------

via: http://news.softpedia.com/news/someone-tries-to-bring-back-ubuntu-s-unity-from-the-dead-as-an-unofficial-spin-518778.shtml

作者:[Marius Nestor ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://news.softpedia.com/editors/browse/marius-nestor
[1]:http://linux.softpedia.com/downloadTag/Ubuntu
[2]:https://community.ubuntu.com/t/poll-unity-7-distro-9-month-spin-or-lts-for-18-04/2066
[3]:https://community.ubuntu.com/t/unity-maintenance-roadmap/2223
[4]:http://news.softpedia.com/news/canonical-to-stop-developing-unity-8-ubuntu-18-04-lts-ships-with-gnome-desktop-514604.shtml
[5]:http://news.softpedia.com/editors/browse/marius-nestor
diff --git a/translated/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md b/translated/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md
new file mode 100644
index 0000000000..6bbb37ae57
--- /dev/null
+++ b/translated/tech/20171129 Someone Tries to Bring Back Ubuntus Unity from the Dead as an Official Spin.md
@@ -0,0 +1,41 @@
+有人试图将 Ubuntu Unity 非正式地从死亡带回来
+============================================================
+
+
+
+> Ubuntu Unity Remix 将支持九个月
+
+Canonical 在七年之后突然决定抛弃它的 Unity 用户界面,这影响了许多 Ubuntu 用户,而现在看起来有人试图非正式地把它从死亡中带回来。
+
+长期的 [Ubuntu][1] 成员 Dale Beaudoin 上周在官方的 Ubuntu 论坛上[进行了一项调查][2]来了解社区的意向,看看他们是否对明年与 Ubuntu 18.04 LTS(Bionic Beaver)一同发布、支持 9 个月或 5 年的 Ubuntu Unity Remix 感兴趣。
+
+有 30 人进行了投票,其中 67% 的人选择了所谓的 Ubuntu Unity Remix 的 LTS(长期支持)版本,33% 的人投票支持 9 个月的支持版本。即将到来的 Ubuntu Unity Spin [看起来会成为官方版本][3],但这并不意味着有人承诺去开发它。
+
+Dale Beaudoin 表示:"最近的一项民意调查显示,2/3 的人支持 Ubuntu Unity 成为 LTS 发行版,我们应该以它将是 LTS 和官方风格为前提来开展这个开发周期。我们将尝试使用当前默认的 Ubuntu Bionic Beaver 18.04 的每日构建版本作为平台,每周或每 10 天发布一次更新的 ISO。"
+
+### Ubuntu Unity 是否会卷土重来?
+
+默认情况下,最后一个带有 Unity 的 Ubuntu 版本是 Ubuntu 17.04(Zesty Zapus),它将在 2018 年 1 月终止支持。当前流行操作系统的稳定版本 Ubuntu 17.10(Artful Aardvark),是今年早些时候 Canonical CEO [宣布][4] Unity 将不再开发之后,第一个默认使用 GNOME 桌面环境的版本。
+
+然而,Canonical 仍然在官方软件仓库中提供 Unity 桌面环境,所以如果有人想要安装它,只需点击一下即可。但坏消息是,它只被支持到 2018 年 4 月 Ubuntu 18.04 LTS(Bionic Beaver)发布之前,所以 Ubuntu Unity Remix 的开发者们将不得不在独立的仓库中继续维护它。
+
+另一方面,我们不相信 Canonical 会改变主意,接受这个 Ubuntu Unity Spin 成为官方的风格,因为这意味着承认他们无法继续开发 Unity,而现在只需要一小部分人就可以做到。最有可能的是,如果对 Ubuntu Unity Remix 的兴趣没有很快消失,它将会是一个由怀旧社区支持的非官方版本。
+
+问题是,你会对 Ubuntu Unity Spin 感兴趣么,无论是官方还是非官方的? 
+

--------------------------------------------------------------------------------

via: http://news.softpedia.com/news/someone-tries-to-bring-back-ubuntu-s-unity-from-the-dead-as-an-unofficial-spin-518778.shtml

作者:[Marius Nestor ][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://news.softpedia.com/editors/browse/marius-nestor
[1]:http://linux.softpedia.com/downloadTag/Ubuntu
[2]:https://community.ubuntu.com/t/poll-unity-7-distro-9-month-spin-or-lts-for-18-04/2066
[3]:https://community.ubuntu.com/t/unity-maintenance-roadmap/2223
[4]:http://news.softpedia.com/news/canonical-to-stop-developing-unity-8-ubuntu-18-04-lts-ships-with-gnome-desktop-514604.shtml
[5]:http://news.softpedia.com/editors/browse/marius-nestor
From 8aa7c77147abf457e999115e3db0b2e0e9565c01 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Wed, 13 Dec 2017 08:50:14 +0800
Subject: [PATCH 224/236] translating

---
 ...0171115 Security Jobs Are Hot Get Trained and Get Noticed.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md b/sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md
index a0a6b1ed60..002477680c 100644
--- a/sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md
+++ b/sources/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
 Security Jobs Are Hot: Get Trained and Get Noticed
 ============================================================
 
From 24f2439455e0e933848c55da3ac1cee249af127a Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 13 Dec 2017 09:51:59 +0800
Subject: [PATCH 225/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Cheat=20=E2=80=93?=
 =?UTF-8?q?=20A=20Collection=20Of=20Practical=20Linux=20Command=20Examples?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ction Of Practical Linux Command Examples.md | 195 ++++++++++++++++++
 1 file changed, 195 insertions(+)
 create mode 100644 sources/tech/20171213 Cheat – A Collection Of Practical Linux Command Examples.md

diff --git a/sources/tech/20171213 Cheat – A Collection Of Practical Linux Command Examples.md b/sources/tech/20171213 Cheat – A Collection Of Practical Linux Command Examples.md
new file mode 100644
index 0000000000..3e82106ade
--- /dev/null
+++ b/sources/tech/20171213 Cheat – A Collection Of Practical Linux Command Examples.md
@@ -0,0 +1,195 @@
Cheat – A Collection Of Practical Linux Command Examples
======
Many of us very often check **[Man Pages][1]** to learn about command switches
(options). A man page shows you the command syntax, a description, and the
available switches, but it doesn't have any practical examples. Hence, we
often face some trouble forming the exact command format we need.

Are you really facing this trouble and want a better solution? I would
advise you to check out the cheat utility.

#### What Is Cheat

[Cheat][2] allows you to create and view interactive cheatsheets on the
command-line. It was designed to help remind *nix system administrators of
options for commands that they use frequently, but not frequently enough to
remember.

#### How to Install Cheat

The cheat package was developed using Python, so install pip to install
cheat on your system. 
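If you are not sure whether pip is already present, a quick check first will
tell you (either command printing a version means you can skip the
installation step for your distribution below):

```
 $ pip --version || pip3 --version
```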
+

For **`Debian/Ubuntu`** , use [apt-get command][3] or [apt command][4] to
install pip.

```
 [For Python2]
 $ sudo apt install python-pip python-setuptools

 [For Python3]
 $ sudo apt install python3-pip
```

pip isn't shipped in the official **`RHEL/CentOS`** repositories, so
enable the [EPEL Repository][5] and use [YUM command][6] to install pip.

```
 $ sudo yum install python-pip python-devel python-setuptools
```

For **`Fedora`** system, use [dnf Command][7] to install pip.

```
 [For Python2]
 $ sudo dnf install python-pip

 [For Python3]
 $ sudo dnf install python3-pip
```

For **`Arch Linux`** based systems, use [Pacman Command][8] to install pip.

```
 [For Python2]
 $ sudo pacman -S python2-pip python-setuptools

 [For Python3]
 $ sudo pacman -S python-pip python3-setuptools
```

For **`openSUSE`** system, use [Zypper Command][9] to install pip.

```
 [For Python2]
 $ sudo zypper install python-pip

 [For Python3]
 $ sudo zypper install python3-pip
```

pip is a Python module bundled with setuptools, and it's one of the
recommended tools for installing Python packages on Linux.

```
 $ sudo pip install cheat
```

#### How to Use Cheat

Run `cheat` followed by the corresponding command to view its cheatsheet. For
demonstration purposes, we are going to check the `tar` command examples.

```
 $ cheat tar
 # To extract an uncompressed archive:
 tar -xvf /path/to/foo.tar

 # To create an uncompressed archive:
 tar -cvf /path/to/foo.tar /path/to/foo/

 # To extract a .gz archive:
 tar -xzvf /path/to/foo.tgz

 # To create a .gz archive:
 tar -czvf /path/to/foo.tgz /path/to/foo/

 # To list the content of an .gz archive:
 tar -ztvf /path/to/foo.tgz

 # To extract a .bz2 archive:
 tar -xjvf /path/to/foo.tgz

 # To create a .bz2 archive:
 tar -cjvf /path/to/foo.tgz /path/to/foo/

 # To extract a .tar in specified Directory:
 tar -xvf /path/to/foo.tar -C /path/to/destination/

 # To list the content of an .bz2 archive:
 tar -jtvf /path/to/foo.tgz

 # To create a .gz archive and exclude all jpg,gif,... from the tgz
 tar czvf /path/to/foo.tgz --exclude=\*.{jpg,gif,png,wmv,flv,tar.gz,zip} /path/to/foo/

 # To use parallel (multi-threaded) implementation of compression algorithms:
 tar -z ... -> tar -Ipigz ...
 tar -j ... -> tar -Ipbzip2 ...
 tar -J ... -> tar -Ipixz ...
```

Run the following command to see what cheatsheets are available.

```
 $ cheat -l
```

Navigate to the help page for more details. 
+

```
 $ cheat -h
```


--------------------------------------------------------------------------------

via: https://www.2daygeek.com/cheat-a-collection-of-practical-linux-command-examples/

作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com
[1]:https://www.2daygeek.com/linux-color-man-pages-configuration-less-most-command/
[2]:https://github.com/chrisallenlane/cheat
[3]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[5]:https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
[6]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[7]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[8]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[9]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
From fd368e42baf07cef1ab596bd021c72b15ed78ee9 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 13 Dec 2017 10:05:13 +0800
Subject: [PATCH 226/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Useful=20Linux=20?=
 =?UTF-8?q?Commands=20that=20you=20should=20know?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ful Linux Commands that you should know.md | 263 ++++++++++++++++++
 1 file changed, 263 insertions(+)
 create mode 100644 sources/tech/20170910 Useful Linux Commands that you should know.md

diff --git a/sources/tech/20170910 Useful Linux Commands that you should know.md b/sources/tech/20170910 Useful Linux Commands that you should know.md
new file mode 100644
index 0000000000..6dcd34c941
--- /dev/null
+++ b/sources/tech/20170910 Useful Linux Commands that you should know.md
@@ -0,0 +1,263 @@
Useful Linux Commands that you should know
======
If you are a Linux system administrator or just a Linux enthusiast, then you
love & use the command line, aka the CLI. Until a few years ago, the majority
of Linux work was accomplished using the CLI only, and even now the GUI has
its limitations. Though there are plenty of Linux distributions that can
complete tasks with a GUI, learning the CLI is still a major part of mastering
Linux.

To this effect, we present you a list of useful Linux commands that you should
know.

 **Note:-** There is no definite order to all these commands & all of these
commands are equally important to learn & master in order to excel in Linux
administration. One more thing, we have only used some of the options for each
command as examples; you can refer to the 'man pages' for the complete list of
options for each command.

### 1- top command

'top' command displays the real-time summary/information of our system. It
also displays the processes and all the threads that are running & are being
managed by the system kernel.

Information provided by top command includes uptime, number of users, load
average, running/sleeping/zombie processes, CPU usage percentages for
user/system, free & used system memory, swap memory, etc.

To use top command, open a terminal & execute the command,

 **$ top**

To exit out of the command, either press 'q' or 'ctrl+c'. 
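A handy variation for scripts: 'top' can also run non-interactively in batch
mode, where '-b' enables batch output & '-n 1' limits it to a single snapshot,

 **$ top -b -n 1 | head -15**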
+ +### 2- free command + +'free' command is used to specifically used to get the information about +system memory or RAM. With this command we can get information regarding +physical memory, swap memory as well as system buffers. It provided amount of +total, free & used memory available on the system. + +To use this utility, execute following command in terminal + + **$ free** + +It will present all the data in kb or kilobytes, for megabytes use options +'-m' & '-g ' for gb. + +#### 3- cp command + +'cp' or copy command is used to copy files among the folders. Syntax for using +'cp' command is, + + **$ cp source destination** + +### 4- cd command + +'cd' command is used for changing directory . We can switch among directories +using cd command. + +To use it, execute + + **$ cd directory_location** + +### 5- ifconfig + +'Ifconfig' is very important utility for viewing & configuring network +information on Linux machine. + +To use it, execute + + **$ ifconfig** + +This will present the network information of all the networking devices on the +system. There are number of options that can be used with 'ifconfig' for +configuration, in fact they are some many options that we have created a +separate article for it ( **Read it here ||[IFCONFIG command : Learn with some +examples][1]** ). + +### 6- crontab command + +'Crontab' is another important utility that is used schedule a job on Linux +system. With crontab, we can make sure that a command or a script is executed +at the pre-defined time. To create a cron job, run + + **$ crontab -e** + +To display all the created jobs, run + + **$ crontab -l** + +You can read our detailed article regarding crontab ( **Read it here ||[ +Scheduling Important Jobs with Crontab][2]** ) + +### 7- cat command + +'cat' command has many uses, most common use is that it's used to display +content of a file, + + **$ cat file.txt** + +But it can also be used to merge two or more file using the syntax below, + + **$ cat file1 file2 file3 file4 > file_new** + +We can also use 'cat' command to clone a whole disk ( **Read it here || +[Cloning Disks using dd & cat commands for Linux systems][3]** ) + +### 8- df command + +'df' command is used to show the disk utilization of our whole Linux file +system. Simply run. + + **$ df** + +& we will be presented with disk complete utilization of all the partitions on +our Linux machine. + +### 9- du command + +'du' command shows the amount of disk that is being utilized by the files & +directories on our Linux machine. To run it, type + + **$ du /directory** + +( **Recommended Read :[Use of du & df commands with examples][4]** ) + +### 10- mv command + +'mv' command is used to move the files or folders from one location to +another. Command syntax for moving the files/folders is, + + **$ mv /source/filename /destination** + +We can also use 'mv' command to rename a file/folder. Syntax for changing name +is, + + **$ mv file_oldname file_newname** + +### 11- rm command + +'rm' command is used to remove files\folders from Linux system. To use it, run + + **$ rm filename** + +We can also use '-rf' option with 'rm' command to completely remove a +file\folder from the system but we must use this with caution. + +### 12- vi/vim command + +VI or VIM is very famous & one of the widely used CLI-based text editor for +Linux. It takes some time to master it but it has a great number of utilities, +which makes it a favorite for Linux users. 
+

For detailed knowledge of VIM, kindly refer to the articles [**Beginner's
Guide to LVM (Logical Volume Management)** & **Working with Vi/Vim Editor :
Advanced concepts.**][5]

### 13- ssh command

The SSH utility is used to remotely access another machine from the current
Linux machine. To access a machine, execute

 **$ ssh user@machine_ip OR machine_name**

Once we have remote access to the machine, we can work on the CLI of that
machine as if we were working on the local machine.

### 14- tar command

'tar' command is used to compress & extract files/folders. To compress
files/folders using tar, execute

 **$ tar -cvf file.tar file_name**

where file.tar will be the name of the compressed archive & 'file_name' is the
name of the source file or folder. To extract a compressed archive,

 **$ tar -xvf file.tar**

For more details on 'tar' command, read [**Tar command : Compress & Decompress
the files/directories**][7]

### 15- locate command

'locate' command is used to locate files & folders on your Linux machine. To
use it, run

 **$ locate file_name**

### 16- grep command

'grep' command is another very important command that a Linux administrator
should know. It comes in especially handy when we want to grab a keyword or
multiple keywords from a file. Syntax for using it is,

 **$ grep 'pattern' file.txt**

It will search for 'pattern' in the file 'file.txt' and produce the output on
the screen. We can also redirect the output to another file,

 **$ grep 'pattern' file.txt > newfile.txt**

### 17- ps command

'ps' command is especially used to get the process id of a running process. To
get information about all the processes, run

 **$ ps -ef**

To get information regarding a single process, execute

 **$ ps -ef | grep java**

### 18- kill command

'kill' command is used to kill a running process. To kill a process we will
need its process id, which we can get using the above 'ps' command. To kill a
process, run

 **$ kill -9 process_id**

### 19- ls command

'ls' command is used to list all the files in a directory. To use it, execute

 **$ ls**

### 20- mkdir command

To create a directory on a Linux machine, we use the command 'mkdir'. Syntax
for using 'mkdir' is

 **$ mkdir new_dir**

These were some of the useful Linux commands that every system admin should
know. We will soon be sharing another list of important commands that you
should know as a Linux lover. You can also leave your suggestions and queries
in the comment box below. 
+


--------------------------------------------------------------------------------

via: http://linuxtechlab.com/useful-linux-commands-you-should-know/

作者:[][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com
[1]:http://linuxtechlab.com/ifconfig-command-learn-examples/
[2]:http://linuxtechlab.com/scheduling-important-jobs-crontab/
[3]:http://linuxtechlab.com/linux-disk-cloning-using-dd-cat-commands/
[4]:http://linuxtechlab.com/du-df-commands-examples/
[5]:http://linuxtechlab.com/working-vivim-editor-advanced-concepts/
[7]:http://linuxtechlab.com/tar-command-compress-decompress-files
[8]:https://www.facebook.com/linuxtechlab/
[9]:https://twitter.com/LinuxTechLab
[10]:https://plus.google.com/+linuxtechlab
[11]:http://linuxtechlab.com/contact-us-2/

From 1c0cda3a5789e3cd8798f7fc11e63b56782e9643 Mon Sep 17 00:00:00 2001
From: DarkSun
Date: Wed, 13 Dec 2017 10:08:57 +0800
Subject: [PATCH 227/236] =?UTF-8?q?update=20=E8=AF=91=E8=80=85id?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...Zombie Processes And How To Find & Kill Zombie Processes-.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/translated/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md b/translated/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md
index 79e13fcc01..72acbcf558 100644
--- a/translated/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md
+++ b/translated/tech/20171211 What Are Zombie Processes And How To Find & Kill Zombie Processes-.md
@@ -68,7 +68,7 @@ kill -s SIGCHLD pid
 via: http://www.linuxandubuntu.com/home/what-are-zombie-processes-and-how-to-find-kill-zombie-processes
 
 作者:[linuxandubuntu][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[lujun9972](https://github.com/lujun9972)
 校对:[校对者ID](https://github.com/校对者ID)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 6411499e2d150c822d2adacd39279157fe13c344 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 13 Dec 2017 11:17:25 +0800
Subject: [PATCH 228/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20OnionShare=20-=20?=
 =?UTF-8?q?Share=20Files=20Anonymously?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...08 OnionShare - Share Files Anonymously.md | 77 +++++++++++++++++++
 1 file changed, 77 insertions(+)
 create mode 100644 sources/tech/20171208 OnionShare - Share Files Anonymously.md

diff --git a/sources/tech/20171208 OnionShare - Share Files Anonymously.md b/sources/tech/20171208 OnionShare - Share Files Anonymously.md
new file mode 100644
index 0000000000..bc42a009c5
--- /dev/null
+++ b/sources/tech/20171208 OnionShare - Share Files Anonymously.md
@@ -0,0 +1,77 @@
OnionShare - Share Files Anonymously
======
In this Digital World, we share our media, documents, and important files via the Internet using different cloud storage services like Dropbox, Mega, Google Drive and many more. But every cloud storage comes with two major problems: one is size and the other is security. After getting used to BitTorrent, size is not a matter anymore, but security is. 
+

Even though you send your files through secure cloud services, they can be seen by the company; if the files are confidential, even the government can get hold of them. So to overcome these problems we use OnionShare, which, as per the name, uses the onion network, i.e., Tor, to share files anonymously with anyone.

### How to Use **OnionShare**?

 * First, download [OnionShare][1] and the [Tor Browser][2]. After downloading, install both of them.



[![install onionshare and tor browser][3]][3]

 * Now open OnionShare from the start menu

[![onionshare share files anonymously][4]][4]

 * Click on Add and add a File/Folder to share.
 * Click Start Sharing. It produces a .onion URL; you can share the URL with your recipient.

[![share file with onionshare anonymously][5]][5]

 * To download a file from the URL, copy the URL, open the Tor Browser, and paste it. Open the URL and download the Files/Folder.

[![receive file with onionshare anonymously][6]][6]

### Start of **OnionShare**

A few years back, Glenn Greenwald found that some of the NSA documents which he had received from Edward Snowden had been corrupted. But he needed the documents, and decided to get the files by using a USB drive. It was not successful.

After reading the book written by Greenwald, Micah Lee, a crypto expert at The Intercept, released OnionShare - simple, free software to share files anonymously and securely. He created the program to share big data dumps via a direct channel encrypted and protected by the anonymity software Tor, making it hard for eavesdroppers to get the files.

### How Does **OnionShare** Work?

OnionShare starts a web server at 127.0.0.1 for sharing the file on a random port. It chooses two random words from a 6800-word wordlist to form a slug. It makes the server available as a Tor onion service to send the file. The final URL looks like this:

`http://qx2d7lctsnqwfdxh.onion/subside-durable`

OnionShare shuts down after the download completes. There is an option to allow the files to be downloaded multiple times. Once sharing stops, the file is not available on the internet anymore.

### Advantages of using **OnionShare**

No other websites or applications have access to your files: the file the sender shares using OnionShare is not stored on any server. It is directly hosted on the sender's system.

No one can spy on the shared files: as the connection between the users is encrypted by the onion service and Tor Browser, the connection is secure and it is hard for eavesdroppers to get the files.

Both users are anonymous: OnionShare and the Tor Browser make both sender and recipient anonymous.

### Conclusion

In this article, I have explained how to **share your documents and files anonymously**. I also explained how it works. Hope you have understood how OnionShare works, and if you still have a doubt regarding anything, just drop in a comment. 
+

--------------------------------------------------------------------------------

via: https://www.theitstuff.com/onionshare-share-files-anonymously-2

作者:[Anirudh Rayapeddi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.theitstuff.com
[1]https://onionshare.org/
[2]https://www.torproject.org/projects/torbrowser.html.en
[3]http://www.theitstuff.com/wp-content/uploads/2017/12/Icons.png
[4]http://www.theitstuff.com/wp-content/uploads/2017/12/Onion-Share.png
[5]http://www.theitstuff.com/wp-content/uploads/2017/12/With-Link.png
[6]http://www.theitstuff.com/wp-content/uploads/2017/12/Tor.png
From b9fd51c06e05a3bfeb1f614b023bf5975b10b9a2 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 13 Dec 2017 11:22:27 +0800
Subject: [PATCH 229/236] =?UTF-8?q?update=20at=202017=E5=B9=B4=2012?=
 =?UTF-8?q?=E6=9C=88=2013=E6=97=A5=20=E6=98=9F=E6=9C=9F=E4=B8=89=2011:22:2?=
 =?UTF-8?q?7=20CST?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ng a blog with pelican and Github pages.md | 364 ++++++++++++++++++
 1 file changed, 364 insertions(+)
 create mode 100644 sources/tech/20171213 Creating a blog with pelican and Github pages.md

diff --git a/sources/tech/20171213 Creating a blog with pelican and Github pages.md b/sources/tech/20171213 Creating a blog with pelican and Github pages.md
new file mode 100644
index 0000000000..a50494e0ac
--- /dev/null
+++ b/sources/tech/20171213 Creating a blog with pelican and Github pages.md
@@ -0,0 +1,364 @@
Creating a blog with pelican and Github pages
======
~ 5 min read · ter 05 dezembro 2017 · in [blog][1]

Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiarized with using Github and creating a Python virtual enviroment to develop. If you aren't, I recommend you to learn with the [Django Girls tutorial][2], which covers that and more.
This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).
The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.
![Screenshot of Github, the menu to create a new repository is open and a new repo is being created with the name 'rsip22.github.io'][3]
I recommend that you initialize your repository with a README, with a .gitignore for Python and with a [free software license][4]. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.
Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:
```
 $ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git
```
And change to the new directory:
```
 $ cd YOUR_USERNAME.github.io
```
Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. 
To do that, you must create a new branch called "source":
```
 $ git checkout -b source
```
Create the virtualenv with the Python3 version installed on your system.
On GNU/Linux systems, the command might go as:
```
 $ python3 -m venv venv
```
or as
```
 $ virtualenv --python=python3.5 venv
```
And activate it:
```
 $ source venv/bin/activate
```
Inside the virtualenv, you have to install pelican and it's dependencies. You should also install ghp-import (to help us with publishing to github) and Markdown (for writing your posts using markdown). It goes like this:
```
 (venv)$ pip install pelican markdown ghp-import
```
Once that is done, you can start creating your blog using pelican-quickstart:
```
 (venv)$ pelican-quickstart
```
Which will prompt us a series of questions. Before answering them, take a look at my answers below:
```
 > Where do you want to create your new web site? [.] ./
 > What will be the title of this web site? Renata's blog
 > Who will be the author of this web site? Renata
 > What will be the default language of this web site? [pt] en
 > Do you want to specify a URL prefix? e.g., http://example.com (Y/n) n
 > Do you want to enable article pagination? (Y/n) y
 > How many articles per page do you want? [10] 10
 > What is your time zone? [Europe/Paris] America/Sao_Paulo
 > Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
 > Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
 > Do you want to upload your website using FTP? (y/N) n
 > Do you want to upload your website using SSH? (y/N) n
 > Do you want to upload your website using Dropbox? (y/N) n
 > Do you want to upload your website using S3? (y/N) n
 > Do you want to upload your website using Rackspace Cloud Files? (y/N) n
 > Do you want to upload your website using GitHub Pages? (y/N) y
 > Is this your personal page (username.github.io)? (y/N) y
 Done. Your new project is available at /home/username/YOUR_USERNAME.github.io
```
About the time zone, it should be specified as TZ Time zone (full list here: [List of tz database time zones][5]).
Now, go ahead and create your first blog post! You might want to open the project folder on your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata which identifies the Title, Date, Category and more from the post before you start with the content, like this:
```
 .lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

 Title: My first post
 Date: 2017-11-26 10:01
 Modified: 2017-11-27 12:30
 Category: misc
 Tags: first, misc
 Slug: My-first-post
 Authors: Your name
 Summary: What does your post talk about? Write here.

 This is the *first post* from my Pelican blog. **YAY!**
```
Let's see how it looks?
Go to the terminal, generate the static files and start the server. To do that, use the following command:
```
 (venv)$ make html && make serve
```
While this command is running, you should be able to visit it on your favorite web browser by typing localhost:8000 on the address bar. 
![Screenshot of the blog home. It has a header with the title Renata's blog, the first post on the left, info about the post on the right, links and social on the bottom.][6]
Pretty neat, right?
Now, what if you want to put an image in a post, how do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find the pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:
```
 .lang="python" # DON'T COPY this line, it exists just for highlighting purposes

 STATIC_PATHS = ['images']
```
Save it. Go to your post and add the image this way:
```
 .lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

 ![Write here a good description for people who can't see the image]({filename}/images/IMAGE_NAME.jpg)
```
You can interrupt the server at any time by pressing CTRL+C on the terminal. But you should start it again and check if the image is correct. Can you remember how?
```
 (venv)$ make html && make serve
```
One last step before your coding is "done": you should make sure anyone can read your posts using ATOM or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:
```
 .lang="python" # DON'T COPY this line, it exists just for highlighting purposes

 FEED_ALL_ATOM = 'feeds/all.atom.xml'
 FEED_ALL_RSS = 'feeds/all.rss.xml'
 AUTHOR_FEED_RSS = 'feeds/%s.rss.xml'
 RSS_FEED_SUMMARY_ONLY = False
```
Save everything so you can send the code to Github. You can do that by adding all files, committing it with a message ('first commit') and using git push. You will be asked for your Github login and password.
```
 $ git add -A && git commit -a -m 'first commit' && git push --all
```
And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:
```
 $ make github
```
You will be asked for your Github login and password again. And... voila! Your new blog should be live on https://YOUR_USERNAME.github.io.
If you had an error in any step of the way, please reread this tutorial, try and see if you can detect in which part the problem happened, because that is the first step to debugging. Sometimes, even something simple like a typo or, with Python, a wrong indentation, can give us trouble. Shout out and ask for help online or in your community.
For tips on how to write your posts using Markdown, you should read the [Daring Fireball Markdown guide][7].
To get other themes, I recommend you visit [Pelican Themes][8].
This post was adapted from [Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme][9]. I hope it was somewhat useful for you. 
+

--------------------------------------------------------------------------------

via: https://rsip22.github.io/blog/create-a-blog-with-pelican-and-github-pages.html

作者:[][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://rsip22.github.io
[1]https://rsip22.github.io/blog/category/blog.html
[2]https://tutorial.djangogirls.org
[3]https://rsip22.github.io/blog/img/create_github_repository.png
[4]https://www.gnu.org/licenses/license-list.html
[5]https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
[6]https://rsip22.github.io/blog/img/blog_screenshot.png
[7]https://daringfireball.net/projects/markdown/syntax
[8]http://www.pelicanthemes.com/
[9]https://a-slide.github.io/blog/github-pelican
From edbce0ef746dcf3196d81a5224e656bcb8a0ac1c Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 13 Dec 2017 11:29:55 +0800
Subject: [PATCH 230/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Creating=20a=20bl?=
 =?UTF-8?q?og=20with=20pelican=20and=20Github=20pages?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ng a blog with pelican and Github pages.md | 186 ++++++++++++++++++
 1 file changed, 186 insertions(+)
 create mode 100644 sources/tech/20171213 Creating a blog with pelican and Github pages.md

diff --git a/sources/tech/20171213 Creating a blog with pelican and Github pages.md b/sources/tech/20171213 Creating a blog with pelican and Github pages.md
new file mode 100644
index 0000000000..ef3bf36a9b
--- /dev/null
+++ b/sources/tech/20171213 Creating a blog with pelican and Github pages.md
@@ -0,0 +1,186 @@
Creating a blog with pelican and Github pages
======

Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiar with using Github and creating a Python virtual environment to develop. If you aren't, I recommend you to learn with the [Django Girls tutorial][2], which covers that and more.

This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).

The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.

![Screenshot of Github, the menu to create a new repository is open and a new repo is being created with the name 'rsip22.github.io'][3]

I recommend that you initialize your repository with a README, with a .gitignore for Python and with a [free software license][4]. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.

Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:
```
 $ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git

```

And change to the new directory:
```
 $ cd YOUR_USERNAME.github.io

```

Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. 
To do that, you must create a new branch called "source":
```
 $ git checkout -b source

```

Create the virtualenv with the Python3 version installed on your system.

On GNU/Linux systems, the command might go as:
```
 $ python3 -m venv venv

```

or as
```
 $ virtualenv --python=python3.5 venv

```

And activate it:
```
 $ source venv/bin/activate

```

Inside the virtualenv, you have to install pelican and its dependencies. You should also install ghp-import (to help us with publishing to github) and Markdown (for writing your posts using markdown). It goes like this:
```
 (venv)$ pip install pelican markdown ghp-import

```

Once that is done, you can start creating your blog using pelican-quickstart:
```
 (venv)$ pelican-quickstart

```

Which will prompt us with a series of questions. Before answering them, take a look at my answers below:
```
 > Where do you want to create your new web site? [.] ./
 > What will be the title of this web site? Renata's blog
 > Who will be the author of this web site? Renata
 > What will be the default language of this web site? [pt] en
 > Do you want to specify a URL prefix? e.g., http://example.com (Y/n) n
 > Do you want to enable article pagination? (Y/n) y
 > How many articles per page do you want? [10] 10
 > What is your time zone? [Europe/Paris] America/Sao_Paulo
 > Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
 > Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
 > Do you want to upload your website using FTP? (y/N) n
 > Do you want to upload your website using SSH? (y/N) n
 > Do you want to upload your website using Dropbox? (y/N) n
 > Do you want to upload your website using S3? (y/N) n
 > Do you want to upload your website using Rackspace Cloud Files? (y/N) n
 > Do you want to upload your website using GitHub Pages? (y/N) y
 > Is this your personal page (username.github.io)? (y/N) y
 Done. Your new project is available at /home/username/YOUR_USERNAME.github.io

```

About the time zone, it should be specified as TZ Time zone (full list here: [List of tz database time zones][5]).

Now, go ahead and create your first blog post! You might want to open the project folder on your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata which identifies the Title, Date, Category and more from the post before you start with the content, like this:
```
 .lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes
 Title: My first post
 Date: 2017-11-26 10:01
 Modified: 2017-11-27 12:30
 Category: misc
 Tags: first, misc
 Slug: My-first-post
 Authors: Your name
 Summary: What does your post talk about? Write here.

 This is the *first post* from my Pelican blog. **YAY!**
```

Let's see how it looks.

Go to the terminal, generate the static files and start the server. To do that, use the following command:
```
 (venv)$ make html && make serve
```

While this command is running, you should be able to visit it on your favorite web browser by typing localhost:8000 on the address bar.

![Screenshot of the blog home. 
It has a header with the title Renata's blog, the first post on the left, info about the post on the right, links and social on the bottom.][6]
+
+Pretty neat, right?
+
+Now, what if you want to put an image in a post? How do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find the pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:
+```
+ .lang="python" # DON'T COPY this line, it exists just for highlighting purposes
+ STATIC_PATHS = ['images']
+
+```
+
+Save it. Go to your post and add the image this way:
+```
+ .lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes
+ ![Write here a good description for people who can't see the image]({filename}/images/IMAGE_NAME.jpg)
+
+```
+
+You can interrupt the server at any time by pressing CTRL+C in the terminal. But you should start it again and check if the image is correct. Can you remember how?
+```
+ (venv)$ make html && make serve
+```
+
+One last step before your coding is "done": you should make sure anyone can read your posts using ATOM or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:
+```
+ .lang="python" # DON'T COPY this line, it exists just for highlighting purposes
+ FEED_ALL_ATOM = 'feeds/all.atom.xml'
+ FEED_ALL_RSS = 'feeds/all.rss.xml'
+ AUTHOR_FEED_RSS = 'feeds/%s.rss.xml'
+ RSS_FEED_SUMMARY_ONLY = False
+```
+
+Save everything so you can send the code to Github. You can do that by adding all files, committing with a message ('first commit') and using git push. You will be asked for your Github login and password.
+```
+ $ git add -A && git commit -a -m 'first commit' && git push --all
+
+```
+
+And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:
+```
+ $ make github
+
+```
+
+You will be asked for your Github login and password again. And... voila! Your new blog should be live on https://YOUR_USERNAME.github.io.
+
+If you had an error in any step of the way, please reread this tutorial and try to detect in which part the problem happened, because that is the first step to debugging. Sometimes, even something simple like a typo or, with Python, a wrong indentation, can give us trouble. Reach out and ask for help online or in your community.
+
+For tips on how to write your posts using Markdown, you should read the [Daring Fireball Markdown guide][7].
+
+To get other themes, I recommend you visit [Pelican Themes][8].
+
+This post was adapted from [Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme][9]. I hope it was somewhat useful for you.
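+
+One extra tip: if you are curious about what `make github` actually runs, open the Makefile that pelican-quickstart generated. In the versions I have seen, it is roughly a shortcut for regenerating the site and then publishing the output with ghp-import. A hand-typed equivalent might look like the sketch below; the exact commands in your generated Makefile may differ slightly:
+```
+ # Regenerate the static site into the output/ directory
+ (venv)$ pelican content -o output -s pelicanconf.py
+ # Commit the output/ directory onto the local master branch
+ (venv)$ ghp-import -m "Generate Pelican site" -b master output
+ # Publish it
+ (venv)$ git push origin master
+```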
+
+--------------------------------------------------------------------------------
+
+via: https://rsip22.github.io/blog/create-a-blog-with-pelican-and-github-pages.html
+
+作者:[][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://rsip22.github.io
+[1]:https://rsip22.github.io/blog/category/blog.html
+[2]:https://tutorial.djangogirls.org
+[3]:https://rsip22.github.io/blog/img/create_github_repository.png
+[4]:https://www.gnu.org/licenses/license-list.html
+[5]:https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
+[6]:https://rsip22.github.io/blog/img/blog_screenshot.png
+[7]:https://daringfireball.net/projects/markdown/syntax
+[8]:http://www.pelicanthemes.com/
+[9]:https://a-slide.github.io/blog/github-pelican
From d8d94b43a4d6984641db7312f115021f123c52c0 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 13 Dec 2017 14:32:20 +0800
Subject: [PATCH 231/236] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Useful=20GNOME=20?=
 =?UTF-8?q?Shell=20Keyboard=20Shortcuts=20You=20Might=20Not=20Know=20About?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...oard Shortcuts You Might Not Know About.md | 77 +++++++++++++++++++
 1 file changed, 77 insertions(+)
 create mode 100644 sources/tech/20171120 Useful GNOME Shell Keyboard Shortcuts You Might Not Know About.md

diff --git a/sources/tech/20171120 Useful GNOME Shell Keyboard Shortcuts You Might Not Know About.md b/sources/tech/20171120 Useful GNOME Shell Keyboard Shortcuts You Might Not Know About.md
new file mode 100644
index 0000000000..17f657647d
--- /dev/null
+++ b/sources/tech/20171120 Useful GNOME Shell Keyboard Shortcuts You Might Not Know About.md
@@ -0,0 +1,77 @@
+Useful GNOME Shell Keyboard Shortcuts You Might Not Know About
+======
+As Ubuntu has moved to GNOME Shell in its 17.10 release, many users may be interested to discover some of the most useful shortcuts in GNOME, as well as how to create shortcuts of their own. This article will explain both.
+
+If you expect GNOME to ship with hundreds or thousands of shell shortcuts, you will be disappointed to learn this isn't the case. The list of shortcuts isn't miles long, and not all of them will be useful to you, but there are still many keyboard shortcuts you can take advantage of.
+
+![gnome-shortcuts-01-settings][1]
+
+To access the list of shortcuts, go to "Settings -> Devices -> Keyboard." Here are some less popular, yet useful shortcuts.
+
+ * Ctrl + Alt + T - this combination launches the terminal; you can use this from anywhere within GNOME
+
+Two shortcuts I personally use quite frequently are:
+
+ * Alt + F4 - close the window in focus
+ * Alt + F8 - resize the window
+
+Most of you know how to switch between open applications (Alt + Tab), but you may not know you can use Alt + Shift + Tab to cycle through applications in the reverse direction.
+
+Another useful combination for switching within the windows of an application is Alt + (key above Tab) (example: Alt + ` on a US keyboard).
+
+If you want to show the Activities overview, use Alt + F1.
+
+There are quite a lot of shortcuts related to workspaces. If you are like me and don't use multiple workspaces frequently, these shortcuts are useless to you. 
Still, some of the ones worth noting are the following:
+
+ * Super + PageUp (or PageDown) moves to the workspace above or below
+ * Ctrl + Alt + Left (or Right) moves to the workspace on the left/right
+
+If you add Shift to these commands, e.g. Shift + Ctrl + Alt + Left, you move the window one workspace above, below, to the left, or to the right.
+
+Another favorite keyboard shortcut of mine is in the Accessibility section - Increase/Decrease Text Size. You can use Ctrl + + (and Ctrl + -) to zoom text size quickly. In some cases, this may be disabled by default, so do check it out before you try it.
+
+The above-mentioned shortcuts are lesser-known, yet useful, keyboard shortcuts. If you are curious to see what else is available, you can check [the official GNOME shell cheat sheet][2].
+
+If the default shortcuts are not to your liking, you can change them or create new ones. You do this from the same "Settings -> Devices -> Keyboard" dialog. Just select the entry you want to change, and the following dialog will pop up.
+
+![gnome-shortcuts-02-change-shortcut][3]
+
+Enter the keyboard combination you want.
+
+![gnome-shortcuts-03-set-shortcut][4]
+
+If it is already in use you will get a message. If not, just click Set, and you are done.
+
+If you want to add new shortcuts rather than change existing ones, scroll down until you see the "Plus" sign, click it, and in the dialog that appears, enter the name and keys of your new keyboard shortcut.
+
+![gnome-shortcuts-04-add-custom-shortcut][5]
+
+GNOME doesn't come with tons of shell shortcuts by default, and the ones listed above are some of the more useful ones. If these shortcuts are not enough for you, you can always create your own. Let us know if this is helpful to you.
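+
+By the way, custom shortcuts can also be created from the terminal with gsettings, which is handy if you want to script them onto a fresh install. The sketch below is a rough equivalent of adding one entry through the dialog: it binds Ctrl + Alt + H to launch gnome-terminal. The schema paths match the GNOME release that ships with Ubuntu 17.10 but may vary between releases, and note that the first command replaces whatever custom shortcut list you already have:
+```
+# Register a single custom keybinding slot (this overwrites the existing list!)
+gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"
+# Describe the new shortcut: a name, the command to run, and the key combination
+KB="org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/"
+gsettings set $KB name 'My terminal'
+gsettings set $KB command 'gnome-terminal'
+gsettings set $KB binding '<Primary><Alt>h'
+```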
+ +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/gnome-shell-keyboard-shortcuts/ + +作者:[Ada Ivanova][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/adaivanoff/ +[1]https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-01-settings.jpg (gnome-shortcuts-01-settings) +[2]https://wiki.gnome.org/Projects/GnomeShell/CheatSheet +[3]https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-02-change-shortcut.png (gnome-shortcuts-02-change-shortcut) +[4]https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-03-set-shortcut.png (gnome-shortcuts-03-set-shortcut) +[5]https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-04-add-custom-shortcut.png (gnome-shortcuts-04-add-custom-shortcut) From 804ce4725a575fb0d7a55c8169d529c7e0aed973 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 13 Dec 2017 14:53:19 +0800 Subject: [PATCH 232/236] PRF:20171207 7 tools for analyzing performance in Linux with bccBPF.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @yongshouzhang 非常棒的一篇文章,翻译的也不错,不过后面部分有些疏忽。 --- ...lyzing performance in Linux with bccBPF.md | 175 ++++++++---------- 1 file changed, 77 insertions(+), 98 deletions(-) diff --git a/translated/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md b/translated/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md index 4e55ed979a..7bba3778ff 100644 --- a/translated/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md +++ b/translated/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md @@ -1,50 +1,31 @@ -translating by yongshouzhang - - -7个 Linux 下使用 bcc/BPF 的性能分析工具 +7 个使用 bcc/BPF 的性能分析神器 ============================================================ -###使用伯克利的包过滤(BPF)编译器集合(BCC)工具深度探查你的 linux 代码。 +> 使用伯克利包过滤器Berkeley Packet Filter(BPF)编译器集合Compiler Collection(BCC)工具深度探查你的 linux 代码。 - [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/pictures/brendan_face2017_620d.jpg?itok=xZzBQNcY)][7] 21 Nov 2017 [Brendan Gregg][8] [Feed][9] - -43[up][10] - - [4 comments][11] ![7 superpowers for Fedora bcc/BPF performance analysis](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/penguins%20in%20space_0.jpg?itok=umpCTAul) -图片来源 : +在 Linux 中出现的一种新技术能够为系统管理员和开发者提供大量用于性能分析和故障排除的新工具和仪表盘。它被称为增强的伯克利数据包过滤器enhanced Berkeley Packet Filter(eBPF,或 BPF),虽然这些改进并不是由伯克利开发的,而且它们不仅仅是处理数据包,更多的是过滤。我将讨论在 Fedora 和 Red Hat Linux 发行版中使用 BPF 的一种方法,并在 Fedora 26 上演示。 -opensource.com +BPF 可以在内核中运行由用户定义的沙盒程序,可以立即添加新的自定义功能。这就像按需给 Linux 系统添加超能力一般。 你可以使用它的例子包括如下: -在 linux 中出现的一种新技术能够为系统管理员和开发者提供大量用于性能分析和故障排除的新工具和仪表盘。 它被称为增强的伯克利数据包过滤器(eBPF,或BPF),虽然这些改进并不由伯克利开发,它们不仅仅是处理数据包,更多的是过滤。我将讨论在 Fedora 和 Red Hat Linux 发行版中使用 BPF 的一种方法,并在 Fedora 26 上演示。 +* **高级性能跟踪工具**:对文件系统操作、TCP 事件、用户级事件等的可编程的低开销检测。 +* **网络性能**: 尽早丢弃数据包以提高对 DDoS 的恢复能力,或者在内核中重定向数据包以提高性能。 +* **安全监控**: 7x24 小时的自定义检测和记录内核空间与用户空间内的可疑事件。 -BPF 可以在内核中运行用户定义的沙盒程序,以立即添加新的自定义功能。这就像可按需给 Linux 系统添加超能力一般。 你可以使用它的例子包括如下: - -* 高级性能跟踪工具:文件系统操作、TCP事件、用户级事件等的编程低开销检测。 - -* 网络性能 : 尽早丢弃数据包以提高DDoS的恢复能力,或者在内核中重定向数据包以提高性能。 - -* 安全监控 : 24x7 小时全天候自定义检测和记录内核空间与用户空间内的可疑事件。 - -在可能的情况下,BPF 程序必须通过一个内核验证机制来保证它们的安全运行,这比写自定义的内核模块更安全。我在此假设大多数人并不编写自己的 BPF 程序,而是使用别人写好的。在 GitHub 上的 [BPF Compiler Collection (bcc)][12] 
项目中,我已发布许多开源代码。bcc 提供不同的 BPF 开发前端支持,包括Python和Lua,并且是目前最活跃的 BPF 模具项目。 +在可能的情况下,BPF 程序必须通过一个内核验证机制来保证它们的安全运行,这比写自定义的内核模块更安全。我在此假设大多数人并不编写自己的 BPF 程序,而是使用别人写好的。在 GitHub 上的 [BPF Compiler Collection (bcc)][12] 项目中,我已发布许多开源代码。bcc 为 BPF 开发提供了不同的前端支持,包括 Python 和 Lua,并且是目前最活跃的 BPF 工具项目。 ### 7 个有用的 bcc/BPF 新工具 -为了了解BCC / BPF工具和他们的乐器,我创建了下面的图表并添加到项目中 -To understand the bcc/BPF tools and what they instrument, I created the following diagram and added it to the bcc project: - -### [bcc_跟踪工具.png][13] +为了了解 bcc/BPF 工具和它们的检测内容,我创建了下面的图表并添加到 bcc 项目中。 ![Linux bcc/BPF 跟踪工具图](https://opensource.com/sites/default/files/u128651/bcc_tracing_tools.png) -Brendan Gregg, [CC BY-SA 4.0][14] +这些是命令行界面工具,你可以通过 SSH 使用它们。目前大多数分析,包括我的老板,都是用 GUI 和仪表盘进行的。SSH 是最后的手段。但这些命令行工具仍然是预览 BPF 能力的好方法,即使你最终打算通过一个可用的 GUI 使用它。我已着手向一个开源 GUI 添加 BPF 功能,但那是另一篇文章的主题。现在我想向你分享今天就可以使用的 CLI 工具。 -这些是命令行界面工具,你可以通过 SSH (安全外壳)使用它们。目前大多数分析,包括我的老板,是用 GUIs 和仪表盘进行的。SSH是最后的手段。但这些命令行工具仍然是预览BPF能力的好方法,即使你最终打算通过一个可用的 GUI 使用它。我已着手向一个开源 GUI 添加BPF功能,但那是另一篇文章的主题。现在我想分享你今天可以使用的 CLI 工具。 +#### 1、 execsnoop -### 1\. execsnoop - -从哪儿开始? 如何查看新的进程。这些可以消耗系统资源,但很短暂,它们不会出现在 top(1)命令或其他工具中。 这些新进程可以使用[execsnoop] [15]进行检测(或使用行业术语,可以追踪)。 在追踪时,我将在另一个窗口中通过 SSH 登录: +从哪儿开始呢?如何查看新的进程。那些会消耗系统资源,但很短暂的进程,它们甚至不会出现在 `top(1)` 命令或其它工具中的显示之中。这些新进程可以使用 [execsnoop][15] 进行检测(或使用行业术语说,可以被追踪traced)。 在追踪时,我将在另一个窗口中通过 SSH 登录: ``` # /usr/share/bcc/tools/execsnoop @@ -67,13 +48,14 @@ grep 12255 12254 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COL grepconf.sh 12256 12239 0 /usr/libexec/grepconf.sh -c grep 12257 12256 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS ``` -哇。 那是什么? 什么是grepconf.sh? 什么是 /etc/GREP_COLORS? 而且 grep通过运行自身阅读它自己的配置文件? 这甚至是如何工作的? -欢迎来到有趣的系统追踪世界。 你可以学到很多关于系统是如何工作的(或者一些情况下根本不工作),并且发现一些简单的优化。 execsnoop 通过跟踪 exec()系统调用来工作,exec() 通常用于在新进程中加载不同的程序代码。 +哇哦。 那是什么? 什么是 `grepconf.sh`? 什么是 `/etc/GREP_COLORS`? 是 `grep` 在读取它自己的配置文件……由 `grep` 运行的? 这究竟是怎么工作的? -### 2\. opensnoop +欢迎来到有趣的系统追踪世界。 你可以学到很多关于系统是如何工作的(或者根本不工作,在有些情况下),并且发现一些简单的优化方法。 `execsnoop` 通过跟踪 `exec()` 系统调用来工作,`exec()` 通常用于在新进程中加载不同的程序代码。 -从上面继续,所以,grepconf.sh可能是一个shell脚本,对吧? 我将运行file(1)来检查,并使用[opensnoop][16] bcc 工具来查看打开的文件: +#### 2、 opensnoop + +接着上面继续,所以,`grepconf.sh` 可能是一个 shell 脚本,对吧? 我将运行 `file(1)` 来检查它,并使用[opensnoop][16] bcc 工具来查看打开的文件: ``` # /usr/share/bcc/tools/opensnoop @@ -91,18 +73,20 @@ PID COMM FD ERR PATH 1 systemd 16 0 /proc/565/cgroup 1 systemd 16 0 /proc/536/cgroup ``` -像execsnoop和opensnoop这样的工具每个事件打印一行。上图显示 file(1)命令当前打开(或尝试打开)的文件:返回的文件描述符(“FD”列)对于 /etc/magic.mgc 是-1,而“ERR”列指示它是“文件未找到”。我不知道该文件,也不知道 file(1)正在读取的 /usr/share/misc/magic.mgc 文件。我不应该感到惊讶,但是 file(1)在识别文件类型时没有问题: + +像 `execsnoop` 和 `opensnoop` 这样的工具会将每个事件打印一行。上图显示 `file(1)` 命令当前打开(或尝试打开)的文件:返回的文件描述符(“FD” 列)对于 `/etc/magic.mgc` 是 -1,而 “ERR” 列指示它是“文件未找到”。我不知道该文件,也不知道 `file(1)` 正在读取的 `/usr/share/misc/magic.mgc` 文件是什么。我不应该感到惊讶,但是 `file(1)` 在识别文件类型时没有问题: ``` # file /usr/share/misc/magic.mgc /etc/magic /usr/share/misc/magic.mgc: magic binary file for file(1) cmd (version 14) (little endian) /etc/magic: magic text file for file(1) cmd, ASCII text ``` -opensnoop通过跟踪 open()系统调用来工作。为什么不使用 strace -feopen file 命令呢? 这将在这种情况下起作用。然而,opensnoop 的一些优点在于它能在系统范围内工作,并且跟踪所有进程的 open()系统调用。注意上例的输出中包括了从systemd打开的文件。Opensnoop 也应该有更低的开销:BPF 跟踪已经被优化,并且当前版本的 strace(1)仍然使用较老和较慢的 ptrace(2)接口。 -### 3\. xfsslower +`opensnoop` 通过跟踪 `open()` 系统调用来工作。为什么不使用 `strace -feopen file` 命令呢? 
在这种情况下是可以的。然而,`opensnoop` 的一些优点在于它能在系统范围内工作,并且跟踪所有进程的 `open()` 系统调用。注意上例的输出中包括了从 systemd 打开的文件。`opensnoop` 应该系统开销更低:BPF 跟踪已经被优化过,而当前版本的 `strace(1)` 仍然使用较老和较慢的 `ptrace(2)` 接口。 -bcc/BPF 不仅仅可以分析系统调用。[xfsslower][17] 工具跟踪具有大于1毫秒(参数)延迟的常见XFS文件系统操作。 +#### 3、 xfsslower + +bcc/BPF 不仅仅可以分析系统调用。[xfsslower][17] 工具可以跟踪大于 1 毫秒(参数)延迟的常见 XFS 文件系统操作。 ``` # /usr/share/bcc/tools/xfsslower 1 @@ -119,15 +103,16 @@ TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME 14:17:46 cksum 4168 R 65536 128 1.01 grub2-fstest [...] ``` -在上图输出中,我捕获了多个延迟超过 1 毫秒 的 cksum(1)读数(字段“T”等于“R”)。这个工作是在 xfsslower 工具运行的时候,通过在 XFS 中动态地设置内核函数实现,当它结束的时候解除检测。其他文件系统也有这个 bcc 工具的版本:ext4slower,btrfsslower,zfsslower 和 nfsslower。 -这是个有用的工具,也是 BPF 追踪的重要例子。对文件系统性能的传统分析主要集中在块 I/O 统计信息 - 通常你看到的是由 iostat(1)工具打印并由许多性能监视 GUI 绘制的图表。这些统计数据显示了磁盘如何执行,但不是真正的文件系统。通常比起磁盘你更关心文件系统的性能,因为应用程序是在文件系统中发起请求和等待。并且文件系统的性能可能与磁盘的性能大为不同!文件系统可以完全从内存缓存中读取数据,也可以通过预读算法和回写缓存填充缓存。xfsslower 显示了文件系统的性能 - 应用程序直接体验到什么。这对于免除整个存储子系统通常是有用的; 如果确实没有文件系统延迟,那么性能问题很可能在别处。 +在上图输出中,我捕获到了多个延迟超过 1 毫秒 的 `cksum(1)` 读取操作(字段 “T” 等于 “R”)。这是在 `xfsslower` 工具运行的时候,通过在 XFS 中动态地检测内核函数实现的,并当它结束的时候解除该检测。这个 bcc 工具也有其它文件系统的版本:`ext4slower`、`btrfsslower`、`zfsslower` 和 `nfsslower`。 -### 4\. biolatency +这是个有用的工具,也是 BPF 追踪的重要例子。对文件系统性能的传统分析主要集中在块 I/O 统计信息 —— 通常你看到的是由 `iostat(1)` 工具输出,并由许多性能监视 GUI 绘制的图表。这些统计数据显示的是磁盘如何执行,而不是真正的文件系统如何执行。通常比起磁盘来说,你更关心的是文件系统的性能,因为应用程序是在文件系统中发起请求和等待。并且,文件系统的性能可能与磁盘的性能大为不同!文件系统可以完全从内存缓存中读取数据,也可以通过预读算法和回写缓存来填充缓存。`xfsslower` 显示了文件系统的性能 —— 这是应用程序直接体验到的性能。通常这对于排除整个存储子系统的问题是有用的;如果确实没有文件系统延迟,那么性能问题很可能是在别处。 -虽然文件系统性能对于理解应用程序性能非常重要,但研究磁盘性能也是有好处的。当各种缓存技巧不能再隐藏其延迟时,磁盘的低性能终会影响应用程序。 磁盘性能也是容量规划研究的目标。 +#### 4、 biolatency -iostat(1)工具显示平均磁盘 I/O 延迟,但平均值可能会引起误解。 以直方图的形式研究 I/O 延迟的分布是有用的,这可以通过使用 [biolatency] 来实现[18]: +虽然文件系统性能对于理解应用程序性能非常重要,但研究磁盘性能也是有好处的。当各种缓存技巧都无法挽救其延迟时,磁盘的低性能终会影响应用程序。 磁盘性能也是容量规划研究的目标。 + +`iostat(1)` 工具显示了平均磁盘 I/O 延迟,但平均值可能会引起误解。 以直方图的形式研究 I/O 延迟的分布是有用的,这可以通过使用 [biolatency] 来实现[18]: ``` # /usr/share/bcc/tools/biolatency @@ -147,9 +132,10 @@ Tracing block device I/O... Hit Ctrl-C to end. 1024 -> 2047 : 117 |******** | 2048 -> 4095 : 8 | | ``` -这是另一个有用的工具和例子; 它使用一个名为maps的BPF特性,它可以用来实现高效的内核内摘要统计。从内核级别到用户级别的数据传输仅仅是“计数”列。 用户级程序生成其余的。 -值得注意的是,其中许多工具支持CLI选项和参数,如其使用信息所示: +这是另一个有用的工具和例子;它使用一个名为 maps 的 BPF 特性,它可以用来实现高效的内核摘要统计。从内核层到用户层的数据传输仅仅是“计数”列。 用户级程序生成其余的。 + +值得注意的是,这种工具大多支持 CLI 选项和参数,如其使用信息所示: ``` # /usr/share/bcc/tools/biolatency -h @@ -175,11 +161,12 @@ examples: ./biolatency -Q # include OS queued time in I/O time ./biolatency -D # show each disk device separately ``` -它们的行为像其他Unix工具是通过设计,以协助采用。 -### 5\. tcplife +它们的行为就像其它 Unix 工具一样,以利于采用而设计。 -另一个有用的工具是[tcplife][19] ,该例显示TCP会话的生命周期和吞吐量统计 +#### 5、 tcplife + +另一个有用的工具是 [tcplife][19] ,该例显示 TCP 会话的生命周期和吞吐量统计。 ``` # /usr/share/bcc/tools/tcplife @@ -189,11 +176,12 @@ PID COMM LADDR LPORT RADDR RPORT TX_KB RX_KB MS 12844 wget 10.0.2.15 34250 54.204.39.132 443 11 1870 5712.26 12851 curl 10.0.2.15 34252 54.204.39.132 443 0 74 505.90 ``` -在你说:“我不能只是刮 tcpdump(8)输出这个?”之前请注意,运行 tcpdump(8)或任何数据包嗅探器,在高数据包速率系统上花费的开销会很大,即使tcpdump(8)的用户级和内核级机制已经过多年优化(可能更差)。tcplife不会测试每个数据包; 它只会监视TCP会话状态的变化,从而影响会话的持续时间。它还使用已经跟踪吞吐量的内核计数器,以及处理和命令信息(“PID”和“COMM”列),这些对 tcpdump(8)等线上嗅探工具是做不到的。 -### 6\. 
gethostlatency +在你说 “我不是可以只通过 `tcpdump(8)` 就能输出这个?” 之前请注意,运行 `tcpdump(8)` 或任何数据包嗅探器,在高数据包速率的系统上的开销会很大,即使 `tcpdump(8)` 的用户层和内核层机制已经过多年优化(要不可能更差)。`tcplife` 不会测试每个数据包;它只会有效地监视 TCP 会话状态的变化,并由此得到该会话的持续时间。它还使用已经跟踪了吞吐量的内核计数器,以及进程和命令信息(“PID” 和 “COMM” 列),这些对于 `tcpdump(8)` 等线上嗅探工具是做不到的。 -之前的每个例子都涉及到内核跟踪,所以我至少需要一个用户级跟踪的例子。 这是[gethostlatency] [20],其中gethostbyname(3)和相关的库调用名称解析: +#### 6、 gethostlatency + +之前的每个例子都涉及到内核跟踪,所以我至少需要一个用户级跟踪的例子。 这就是 [gethostlatency][20],它检测用于名称解析的 `gethostbyname(3)` 和相关的库调用: ``` # /usr/share/bcc/tools/gethostlatency @@ -207,24 +195,26 @@ TIME PID COMM LATms HOST 06:45:07 12952 curl 13.64 opensource.cats 06:45:19 13139 curl 13.10 opensource.cats ``` -是的,它始终是DNS,所以有一个工具来监视系统范围内的DNS请求可以很方便(这只有在应用程序使用标准系统库时才有效)看看我如何跟踪多个查找“opensource.com”? 第一个是188.98毫秒,然后是更快,不到10毫秒,毫无疑问,缓存的作用。它还追踪多个查找“opensource.cats”,一个可悲的不存在的主机,但我们仍然可以检查第一个和后续查找的延迟。 (第二次查找后是否有一点负面缓存?) -### 7\. trace +是的,总是有 DNS 请求,所以有一个工具来监视系统范围内的 DNS 请求会很方便(这只有在应用程序使用标准系统库时才有效)。看看我如何跟踪多个对 “opensource.com” 的查找? 第一个是 188.98 毫秒,然后更快,不到 10 毫秒,毫无疑问,这是缓存的作用。它还追踪多个对 “opensource.cats” 的查找,一个不存在的可怜主机名,但我们仍然可以检查第一个和后续查找的延迟。(第二次查找后是否有一些否定缓存的影响?) -好的,再举一个例子。 [trace] [21]工具由Sasha Goldshtein提供,并提供了一些基本的printf(1)功能和自定义探针。 例如: +#### 7、 trace + +好的,再举一个例子。 [trace][21] 工具由 Sasha Goldshtein 提供,并提供了一些基本的 `printf(1)` 功能和自定义探针。 例如: ``` # /usr/share/bcc/tools/trace 'pam:pam_start "%s: %s", arg1, arg2' PID TID COMM FUNC - 13266 13266 sshd pam_start sshd: root ``` -在这里,我正在跟踪 libpam 及其 pam_start(3)函数并将其两个参数都打印为字符串。 Libpam 用于可插入的身份验证模块系统,输出显示 sshd 为“root”用户(我登录)调用了 pam_start()。 USAGE消息中有更多的例子(“trace -h”),而且所有这些工具在bcc版本库中都有手册页和示例文件。 例如trace_example.txt和trace.8。 + +在这里,我正在跟踪 `libpam` 及其 `pam_start(3)` 函数,并将其两个参数都打印为字符串。 `libpam` 用于插入式身份验证模块系统,该输出显示 sshd 为 “root” 用户调用了 `pam_start()`(我登录了)。 其使用信息中有更多的例子(`trace -h`),而且所有这些工具在 bcc 版本库中都有手册页和示例文件。 例如 `trace_example.txt` 和 `trace.8`。 ### 通过包安装 bcc -安装 bcc 最佳的方法是从 iovisor 仓储库中安装,按照 bcc [INSTALL.md][22]。[IO Visor] [23]是包含 bcc 的Linux基金会项目。4.x系列Linux内核中增加了这些工具使用的BPF增强功能,上至4.9 \。这意味着拥有4.8内核的 Fedora 25可以运行大部分这些工具。 Fedora 26及其4.11内核可以运行它们(至少目前)。 +安装 bcc 最佳的方法是从 iovisor 仓储库中安装,按照 bcc 的 [INSTALL.md][22] 进行即可。[IO Visor][23] 是包括了 bcc 的 Linux 基金会项目。4.x 系列 Linux 内核中增加了这些工具所使用的 BPF 增强功能,直到 4.9 添加了全部支持。这意味着拥有 4.8 内核的 Fedora 25 可以运行这些工具中的大部分。 使用 4.11 内核的 Fedora 26 可以全部运行它们(至少在目前是这样)。 -如果你使用的是Fedora 25(或者Fedora 26,而且这个帖子已经在很多个月前发布了 - 你好,来自遥远的过去!),那么这个包的方法应该是正常的。 如果您使用的是Fedora 26,那么请跳至“通过源代码安装”部分,该部分避免了已知的固定错误。 这个错误修复目前还没有进入Fedora 26软件包的依赖关系。 我使用的系统是: +如果你使用的是 Fedora 25(或者 Fedora 26,而且这个帖子已经在很多个月前发布了 —— 你好,来自遥远的过去!),那么这个通过包安装的方式是可以工作的。 如果您使用的是 Fedora 26,那么请跳至“通过源代码安装”部分,它避免了一个[已修复的][26]的[已知][25]错误。 这个错误修复目前还没有进入 Fedora 26 软件包的依赖关系。 我使用的系统是: ``` # uname -a @@ -232,7 +222,8 @@ Linux localhost.localdomain 4.11.8-300.fc26.x86_64 #1 SMP Thu Jun 29 20:09:48 UT # cat /etc/fedora-release Fedora release 26 (Twenty Six) ``` -以下是我所遵循的安装步骤,但请参阅INSTALL.md获取更新的版本: + +以下是我所遵循的安装步骤,但请参阅 INSTALL.md 获取更新的版本: ``` # echo -e '[iovisor]\nbaseurl=https://repo.iovisor.org/yum/nightly/f25/$basearch\nenabled=1\ngpgcheck=0' | sudo tee /etc/yum.repos.d/iovisor.repo @@ -242,7 +233,8 @@ Total download size: 37 M Installed size: 143 M Is this ok [y/N]: y ``` -安装完成后,您可以在/ usr / share中看到新的工具: + +安装完成后,您可以在 `/usr/share` 中看到新的工具: ``` # ls /usr/share/bcc/tools/ @@ -250,6 +242,7 @@ argdist dcsnoop killsnoop softirqs trace bashreadline dcstat llcstat solisten ttysnoop [...] 
``` + 试着运行其中一个: ``` @@ -262,7 +255,8 @@ Traceback (most recent call last): raise Exception("Failed to compile BPF module %s" % src_file) Exception: Failed to compile BPF module ``` -运行失败,提示/lib/modules/4.11.8-300.fc26.x86_64/build丢失。 如果你也这样做,那只是因为系统缺少内核头文件。 如果你看看这个文件指向什么(这是一个符号链接),然后使用“dnf whatprovides”来搜索它,它会告诉你接下来需要安装的包。 对于这个系统,它是: + +运行失败,提示 `/lib/modules/4.11.8-300.fc26.x86_64/build` 丢失。 如果你也遇到这个问题,那只是因为系统缺少内核头文件。 如果你看看这个文件指向什么(这是一个符号链接),然后使用 `dnf whatprovides` 来搜索它,它会告诉你接下来需要安装的包。 对于这个系统,它是: ``` # dnf install kernel-devel-4.11.8-300.fc26.x86_64 @@ -272,7 +266,8 @@ Installed size: 63 M Is this ok [y/N]: y [...] ``` -现在 + +现在: ``` # /usr/share/bcc/tools/opensnoop @@ -283,11 +278,12 @@ PID COMM FD ERR PATH 11792 ls 3 0 /lib64/libc.so.6 [...] ``` -运行起来了。 这是从另一个窗口中的ls命令捕捉活动。 请参阅前面的部分以获取其他有用的命令 + +运行起来了。 这是捕获自另一个窗口中的 ls 命令活动。 请参阅前面的部分以使用其它有用的命令。 ### 通过源码安装 -如果您需要从源代码安装,您还可以在[INSTALL.md] [27]中找到文档和更新说明。 我在Fedora 26上做了如下的事情: +如果您需要从源代码安装,您还可以在 [INSTALL.md][27] 中找到文档和更新说明。 我在 Fedora 26 上做了如下的事情: ``` sudo dnf install -y bison cmake ethtool flex git iperf libstdc++-static \ @@ -299,16 +295,16 @@ sudo dnf install -y \ sudo pip install pyroute2 sudo dnf install -y clang clang-devel llvm llvm-devel llvm-static ncurses-devel ``` -除 netperf 外一切妥当,其中有以下错误: + +除 `netperf` 外一切妥当,其中有以下错误: ``` Curl error (28): Timeout was reached for http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm [Connection timed out after 120002 milliseconds] ``` -不必理会,netperf是可选的 - 它只是用于测试 - 而 bcc 没有它也会编译成功。 - -以下是 bcc 编译和安装余下的步骤: +不必理会,`netperf` 是可选的,它只是用于测试,而 bcc 没有它也会编译成功。 +以下是余下的 bcc 编译和安装步骤: ``` git clone https://github.com/iovisor/bcc.git @@ -317,7 +313,8 @@ cmake .. -DCMAKE_INSTALL_PREFIX=/usr make sudo make install ``` -在这一点上,命令应该起作用: + +现在,命令应该可以工作了: ``` # /usr/share/bcc/tools/opensnoop @@ -329,53 +326,35 @@ PID COMM FD ERR PATH [...] 
``` -More Linux resources +### 写在最后和其他的前端 -* [What is Linux?][1] +这是一个可以在 Fedora 和 Red Hat 系列操作系统上使用的新 BPF 性能分析强大功能的快速浏览。我演示了 BPF 的流行前端 [bcc][28] ,并包括了其在 Fedora 上的安装说明。bcc 附带了 60 多个用于性能分析的新工具,这将帮助您充分利用 Linux 系统。也许你会直接通过 SSH 使用这些工具,或者一旦 GUI 监控程序支持 BPF 的话,你也可以通过它们来使用相同的功能。 -* [What are Linux containers?][2] +此外,bcc 并不是正在开发的唯一前端。[ply][29] 和 [bpftrace][30],旨在为快速编写自定义工具提供更高级的语言支持。此外,[SystemTap][31] 刚刚发布[版本3.2][32],包括一个早期的实验性 eBPF 后端。 如果这个继续开发,它将为运行多年来开发的许多 SystemTap 脚本和 tapset(库)提供一个安全和高效的生产级引擎。(随同 eBPF 使用 SystemTap 将是另一篇文章的主题。) -* [Download Now: Linux commands cheat sheet][3] - -* [Advanced Linux commands cheat sheet][4] - -* [Our latest Linux articles][5] - -### 写在最后和其他前端 - -这是一个可以在 Fedora 和 Red Hat 系列操作系统上使用的新 BPF 性能分析强大功能的快速浏览。我演示了BPF的流行前端 [bcc][28] ,并包含了其在 Fedora 上的安装说明。bcc 附带了60多个用于性能分析的新工具,这将帮助您充分利用Linux系统。也许你会直接通过SSH使用这些工具,或者一旦它们支持BPF,你也可以通过监视GUI来使用相同的功能。 - -此外,bcc并不是开发中唯一的前端。[ply][29]和[bpftrace][30],旨在为快速编写自定义工具提供更高级的语言。此外,[SystemTap] [31]刚刚发布[版本3.2] [32],包括一个早期的实验性eBPF后端。 如果这一点继续得到发展,它将为运行多年来开发的许多SystemTap脚本和攻击集(库)提供一个生产安全和高效的引擎。 (使用SystemTap和eBPF将成为另一篇文章的主题。) - -如果您需要开发自定义工具,那么也可以使用 bcc 来实现,尽管语言比 SystemTap,ply 或 bpftrace 要冗长得多。 我的 bcc 工具可以作为代码示例,另外我还贡献了[教程] [33]来开发 Python 中的 bcc 工具。 我建议先学习bcc多工具,因为在需要编写新工具之前,你可能会从里面获得很多里程。 您可以从他们 bcc 存储库[funccount] [34],[funclatency] [35],[funcslower] [36],[stackcount] [37],[trace] [38] ,[argdist] [39] 的示例文件中研究 bcc。 +如果您需要开发自定义工具,那么也可以使用 bcc 来实现,尽管语言比 SystemTap、ply 或 bpftrace 要冗长得多。我的 bcc 工具可以作为代码示例,另外我还贡献了用 Python 开发 bcc 工具的[教程][33]。 我建议先学习 bcc 的 multi-tools,因为在需要编写新工具之前,你可能会从里面获得很多经验。 您可以从它们的 bcc 存储库[funccount] [34],[funclatency] [35],[funcslower] [36],[stackcount] [37],[trace] [38] ,[argdist] [39] 的示例文件中研究 bcc。 感谢[Opensource.com] [40]进行编辑。 -###  专题 +### 关于作者 - [Linux][41][系统管理员][42] +[![Brendan Gregg](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/brendan_face2017_620d.jpg?itok=LIwTJjL9)][43] -### About the author +Brendan Gregg 是 Netflix 的一名高级性能架构师,在那里他进行大规模的计算机性能设计、分析和调优。 - [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/brendan_face2017_620d.jpg?itok=LIwTJjL9)][43] Brendan Gregg - -- -Brendan Gregg是Netflix的一名高级性能架构师,在那里他进行大规模的计算机性能设计,分析和调优。[关于更多] [44] - - -* [Learn how you can contribute][6] +(题图:opensource.com) -------------------------------------------------------------------------------- via:https://opensource.com/article/17/11/bccbpf-performance -作者:[Brendan Gregg ][a] +作者:[Brendan Gregg][a] 译者:[yongshouzhang](https://github.com/yongshouzhang) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]: +[a]:https://opensource.com/users/brendang [1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent [2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent [3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent From ff4d262b8c977554cdc714f3735034e8812a03e4 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 13 Dec 2017 14:54:04 +0800 Subject: [PATCH 233/236] PUB:20171207 7 tools for analyzing performance in Linux with bccBPF.md @yongshouzhang https://linux.cn/article-9139-1.html --- ...1207 7 tools for analyzing performance in Linux with bccBPF.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => 
published}/20171207 7 tools for analyzing performance in Linux with bccBPF.md (100%) diff --git a/translated/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md b/published/20171207 7 tools for analyzing performance in Linux with bccBPF.md similarity index 100% rename from translated/tech/20171207 7 tools for analyzing performance in Linux with bccBPF.md rename to published/20171207 7 tools for analyzing performance in Linux with bccBPF.md From e7f9e32c38e07bf137eb7cf7e7d73c599bb4057c Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 13 Dec 2017 14:58:00 +0800 Subject: [PATCH 234/236] rename --- ...207 Cheat – A Collection Of Practical Linux Command Examples.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/tech/{20171213 Cheat – A Collection Of Practical Linux Command Examples.md => 20171207 Cheat – A Collection Of Practical Linux Command Examples.md} (100%) diff --git a/sources/tech/20171213 Cheat – A Collection Of Practical Linux Command Examples.md b/sources/tech/20171207 Cheat – A Collection Of Practical Linux Command Examples.md similarity index 100% rename from sources/tech/20171213 Cheat – A Collection Of Practical Linux Command Examples.md rename to sources/tech/20171207 Cheat – A Collection Of Practical Linux Command Examples.md From f8804f212b22163ea96a5755f5f15551f81c527e Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 13 Dec 2017 15:39:20 +0800 Subject: [PATCH 235/236] PRF&PUB:20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md @geekpi --- ...ilable on Flathub the Flatpak App Store.md | 37 ++++++++++--------- 1 file changed, 20 insertions(+), 17 deletions(-) rename {translated/tech => published}/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md (68%) diff --git a/translated/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md b/published/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md similarity index 68% rename from translated/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md rename to published/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md index 4edb744098..d4de978b38 100644 --- a/translated/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md +++ b/published/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md @@ -1,27 +1,22 @@ -# LibreOffice 现在在 Flatpak 的 Flathub 应用商店提供 +LibreOffice 上架 Flathub 应用商店 +=============== ![LibreOffice on Flathub](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/libroffice-on-flathub-750x250.jpeg) -LibreOffice 现在可以从集中化的 Flatpak 应用商店 [Flathub][3] 进行安装。 +> LibreOffice 现在可以从集中化的 Flatpak 应用商店 [Flathub][3] 进行安装。 -它的到来使任何运行现代 Linux 发行版的人都能只点击一两次安装 LibreOffice 的最新稳定版本,而无需搜索 PPA,纠缠 tar 包或等待发行商将其打包。 +它的到来使任何运行现代 Linux 发行版的人都能只点击一两次即可安装 LibreOffice 的最新稳定版本,而无需搜索 PPA,纠缠于 tar 包或等待发行版将其打包。 -自去年 8 月份以来,[LibreOffice Flatpak][5] 已经可供用户下载和安装 [LibreOffice 5.2][6]。 +自去年 8 月份 [LibreOffice 5.2][6] 发布以来,[LibreOffice Flatpak][5] 已经可供用户下载和安装。 -这里“新”的是发行方法。文档基金会选择使用 Flathub 而不是专门的服务器来发布更新。 +这里“新”的是指发行方法。文档基金会Document Foundation选择使用 Flathub 而不是专门的服务器来发布更新。 -这对于终端用户来说是一个_很好_的消息,因为这意味着不需要在新安装时担心仓库,但对于 Flatpak 的倡议者来说也是一个好消息:LibreOffice 是开源软件最流行的生产力套件。它对格式和应用商店的支持肯定会受到热烈的欢迎。 +这对于终端用户来说是一个_很好_的消息,因为这意味着不需要在新安装时担心仓库,但对于 Flatpak 的倡议者来说也是一个好消息:LibreOffice 是开源软件里最流行的生产力套件。它对该格式和应用商店的支持肯定会受到热烈的欢迎。 在撰写本文时,你可以从 Flathub 安装 LibreOffice 5.4.2。新的稳定版本将在发布时添加。 ### 在 Ubuntu 上启用 Flathub 
-![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/flathub-750x495.png) - -Fedora、Arch 和 Linux Mint 18.3 用户已经安装了 Flatpak,随时可以开箱即用。Mint 甚至预启用了 Flathub remote。 - -[从 Flathub 安装 LibreOffice][7] - 要在 Ubuntu 上启动并运行 Flatpak,首先必须安装它: ``` @@ -34,17 +29,25 @@ sudo apt install flatpak gnome-software-plugin-flatpak flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo ``` -这就行了。只需注销并返回(以便 Ubuntu Software 刷新其缓存),之后你应该能够通过 Ubuntu Software 看到 Flathub 上的任何 Flatpak 程序了。 +这就行了。只需注销并重新登录(以便 Ubuntu Software 刷新其缓存),之后你应该能够通过 Ubuntu Software 看到 Flathub 上的任何 Flatpak 程序了。 + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/flathub-750x495.png) + +*Fedora、Arch 和 Linux Mint 18.3 用户已经安装了 Flatpak,随时可以开箱即用。Mint 甚至预启用了 Flathub remote。* 在本例中,搜索 “LibreOffice” 并在结果中找到下面有 Flathub 提示的结果。(请记住,Ubuntu 已经调整了客户端,来将 Snap 程序显示在最上面,所以你可能需要向下滚动列表来查看它)。 +### 从 Flathub 安装 LibreOffice + +- [从 Flathub 安装 LibreOffice][7] + 从 flatpakref 中[安装 Flatpak 程序有一个 bug][8],所以如果上面的方法不起作用,你也可以使用命令行从 Flathub 中安装 Flathub 程序。 Flathub 网站列出了安装每个程序所需的命令。切换到“命令行”选项卡来查看它们。 -#### Flathub 上更多的应用 +### Flathub 上更多的应用 -如果你经常看这个网站,你就会知道我喜欢 Flathub。这是我最喜欢的一些应用(Corebird、Parlatype、GNOME MPV、Peek、Audacity、GIMP 等)的家园。我无需折衷就能获得这些应用程序的最新,稳定版本(加上它们需要的所有依赖)。 +如果你经常看这个网站,你就会知道我喜欢 Flathub。这是我最喜欢的一些应用(Corebird、Parlatype、GNOME MPV、Peek、Audacity、GIMP 等)的家园。我无需等待就能获得这些应用程序的最新、稳定版本(加上它们需要的所有依赖)。 而且,在我 twiiter 上发布一周左右后,大多数 Flatpak 应用现在看起来有很棒 GTK 主题 - 不再需要[临时方案][9]了! @@ -52,9 +55,9 @@ Flathub 网站列出了安装每个程序所需的命令。切换到“命令行 via: http://www.omgubuntu.co.uk/2017/11/libreoffice-now-available-flathub-flatpak-app-store -作者:[ JOEY SNEDDON ][a] +作者:[JOEY SNEDDON][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 077310856ada760861ca1b0fb2e1b372c27d7b3f Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 13 Dec 2017 18:31:31 +0800 Subject: [PATCH 236/236] =?UTF-8?q?20171213-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0171212 Internet protocols are changing.md | 176 ++++++++++++++++++ 1 file changed, 176 insertions(+) create mode 100644 sources/tech/20171212 Internet protocols are changing.md diff --git a/sources/tech/20171212 Internet protocols are changing.md b/sources/tech/20171212 Internet protocols are changing.md new file mode 100644 index 0000000000..c95e8c732c --- /dev/null +++ b/sources/tech/20171212 Internet protocols are changing.md @@ -0,0 +1,176 @@ +Internet protocols are changing +============================================================ + + +![](https://blog.apnic.net/wp-content/uploads/2017/12/evolution-555x202.png) + +When the Internet started to become widely used in the 1990s, most traffic used just a few protocols: IPv4 routed packets, TCP turned those packets into connections, SSL (later TLS) encrypted those connections, DNS named hosts to connect to, and HTTP was often the application protocol using it all. + +For many years, there were negligible changes to these core Internet protocols; HTTP added a few new headers and methods, TLS slowly went through minor revisions, TCP adapted congestion control, and DNS introduced features like DNSSEC. The protocols themselves looked about the same ‘on the wire’ for a very long time (excepting IPv6, which already gets its fair amount of attention in the network operator community.) 
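+
+To make that concrete: for most of that period, a one-line packet capture was enough to observe and classify this traffic, and a great deal of operational practice was built on exactly that kind of visibility. A minimal illustration, with `eth0` as a placeholder interface name:
+```
+# Watch HTTPS traffic on one interface; the port number alone identifies it
+sudo tcpdump -ni eth0 'tcp port 443'
+```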
+
+As a result, network operators, vendors, and policymakers that want to understand (and sometimes, control) the Internet have adopted a number of practices based upon these protocols' wire 'footprint' — whether intended to debug issues, improve quality of service, or impose policy.
+
+Now, significant changes to the core Internet protocols are underway. While they are intended to be compatible with the Internet at large (since they won't get adoption otherwise), they might be disruptive to those who have taken liberties with undocumented aspects of protocols or made an assumption that things won't change.
+
+#### Why we need to change the Internet
+
+There are a number of factors driving these changes.
+
+First, the limits of the core Internet protocols have become apparent, especially regarding performance. Because of structural problems in the application and transport protocols, the network was not being used as efficiently as it could be, hurting end-user perceived performance (in particular, latency).
+
+This translates into a strong motivation to evolve or replace those protocols because there is a [large body of experience showing the impact of even small performance gains][14].
+
+Second, the ability to evolve Internet protocols — at any layer — has become more difficult over time, largely thanks to the unintended uses by networks discussed above. For example, HTTP proxies that tried to compress responses made it more difficult to deploy new compression techniques; TCP optimization in middleboxes made it more difficult to deploy improvements to TCP.
+
+Finally, [we are in the midst of a shift towards more use of encryption on the Internet][15], first spurred by Edward Snowden's revelations in 2013. That's really a separate discussion, but it is relevant here in that encryption is one of the best tools we have to ensure that protocols can evolve.
+
+Let's have a look at what's happened, what's coming next, how it might impact networks, and how networks impact protocol design.
+
+#### HTTP/2
+
+[HTTP/2][16] (based on Google's SPDY) was the first notable change — standardized in 2015, it multiplexes multiple requests onto one TCP connection, so that requests no longer need to queue on the client or block each other. It is now widely deployed, and supported by all major browsers and web servers.
+
+From a network's viewpoint, HTTP/2 made a few notable changes. First, it's a binary protocol, so any device that assumes it's HTTP/1.1 is going to break.
+
+That breakage was one of the primary reasons for another big change in HTTP/2; it effectively requires encryption. This gives it a better chance of avoiding interference from intermediaries that assume it's HTTP/1.1, or do more subtle things like strip headers or block new protocol extensions — both things that had been seen by some of the engineers working on the protocol, causing significant support problems for them.
+
+[HTTP/2 also requires TLS/1.2 to be used when it is encrypted][17], and [blacklists][18] cipher suites that were judged to be insecure — with the effect of only allowing ephemeral keys. See the TLS 1.3 section for potential impacts here.
+
+Finally, HTTP/2 allows more than one host's requests to be [coalesced onto a connection][19], to improve performance by reducing the number of connections (and thereby, congestion control contexts) used for a page load.
+
+For example, you could have a connection for www.example.com, but also use it for requests for images.example.com. 
[Future protocol extensions might also allow additional hosts to be added to the connection][20], even if they weren't listed in the original TLS certificate used for it. As a result, the assumption that the traffic on a connection is limited to the purpose it was initiated for no longer holds.
+
+Despite these changes, it's worth noting that HTTP/2 doesn't appear to suffer from significant interoperability problems or interference from networks.
+
+#### TLS 1.3
+
+[TLS 1.3][21] is just going through the final processes of standardization and is already supported by some implementations.
+
+Don't be fooled by its incremental name; this is effectively a new version of TLS, with a much-revamped handshake that allows application data to flow from the start (often called '0RTT'). The new design relies upon ephemeral key exchange, thereby ruling out static keys.
+
+This has caused concern from some network operators and vendors — in particular those who need visibility into what's happening inside those connections.
+
+For example, consider the datacentre for a bank that has regulatory requirements for visibility. By sniffing traffic in the network and decrypting it with the static keys of their servers, they can log legitimate traffic and identify harmful traffic, whether it be attackers from the outside or employees trying to leak data from the inside.
+
+TLS 1.3 doesn't support that particular technique for intercepting traffic, since it's also [a form of attack that ephemeral keys protect against][22]. However, since they have regulatory requirements to both use modern encryption protocols and to monitor their networks, this puts those network operators in an awkward spot.
+
+There's been much debate about whether regulations require static keys, whether alternative approaches could be just as effective, and whether weakening security for the entire Internet for the benefit of relatively few networks is the right solution. Indeed, it's still possible to decrypt traffic in TLS 1.3, but you need access to the ephemeral keys to do so, and by design, they aren't long-lived.
+
+At this point it doesn't look like TLS 1.3 will change to accommodate these networks, but there are rumblings about creating another protocol that allows a third party to observe what's going on — and perhaps more — for these use cases. Whether that gets traction remains to be seen.
+
+#### QUIC
+
+During work on HTTP/2, it became evident that TCP has similar inefficiencies. Because TCP is an in-order delivery protocol, the loss of one packet can prevent those in the buffers behind it from being delivered to the application. For a multiplexed protocol, this can make a big difference in performance.
+
+[QUIC][23] is an attempt to address that by effectively rebuilding TCP semantics (along with some of HTTP/2's stream model) on top of UDP. Like HTTP/2, it started as a Google effort and is now in the IETF, with an initial use case of HTTP-over-UDP and a goal of becoming a standard in late 2018. However, since Google has already deployed QUIC in the Chrome browser and on its sites, it already accounts for more than 7% of Internet traffic.
+
+Read [Your questions answered about QUIC][24]
+
+Besides the shift from TCP to UDP for such a sizable amount of traffic (and all of the adjustments in networks that might imply), both Google QUIC (gQUIC) and IETF QUIC (iQUIC) require encryption to operate at all; there is no unencrypted QUIC.
+
+iQUIC uses TLS 1.3 to establish keys for a session and then uses them to encrypt each packet. 
However, since it's UDP-based, a lot of the session information and metadata that's exposed in TCP gets encrypted in QUIC.
+
+In fact, iQUIC's current ['short header'][25] — used for all packets except the handshake — only exposes a packet number, an optional connection identifier, and a byte of state for things like the encryption key rotation schedule and the packet type (which might end up encrypted as well).
+
+Everything else is encrypted — including ACKs, to raise the bar for [traffic analysis][26] attacks.
+
+However, this means that passively estimating RTT and packet loss by observing connections is no longer possible; there isn't enough information. This lack of observability has caused a significant amount of concern among some in the operator community, who say that passive measurements like this are critical for debugging and understanding their networks.
+
+One proposal to meet this need is the '[Spin Bit][27]' — a bit in the header that flips once per round trip, so that observers can estimate RTT. Since it's decoupled from the application's state, it doesn't appear to leak any information about the endpoints, beyond a rough estimate of location on the network.
+
+#### DOH
+
+The newest change on the horizon is DOH — [DNS over HTTP][28]. A [significant amount of research has shown that networks commonly use DNS as a means of imposing policy][29] (whether on behalf of the network operator or a greater authority).
+
+Circumventing this kind of control with encryption has been [discussed for a while][30], but it has a disadvantage (at least from some standpoints) — it is possible to distinguish it from other traffic; for example, by using its port number to block access.
+
+DOH addresses that by piggybacking DNS traffic onto an existing HTTP connection, thereby removing any discriminators. A network that wishes to block access to that DNS resolver can only do so by blocking access to the website as well.
+
+For example, if Google were to deploy its [public DNS service over DOH][31] on www.google.com and a user configures their browser to use it, a network that wants (or is required) to stop it would have to effectively block all of Google (thanks to how they host their services).
+
+DOH has just started its work, but there's already a fair amount of interest in it, and some rumblings of deployment. How the networks (and governments) that use DNS to impose policy will react remains to be seen.
+
+Read [IETF 100, Singapore: DNS over HTTP (DOH!)][1]
+
+#### Ossification and grease
+
+To return to motivations, one theme throughout this work is how protocol designers are increasingly encountering problems where networks make assumptions about traffic.
+
+For example, TLS 1.3 has had a number of last-minute issues with middleboxes that assume it's an older version of the protocol. gQUIC blacklists several networks that throttle UDP traffic, because they think that it's harmful or low-priority traffic.
+
+When a protocol can't evolve because deployments 'freeze' its extensibility points, we say it has _ossified_. TCP itself is a severe example of ossification; so many middleboxes do so many things to TCP — whether it's blocking packets with TCP options that aren't recognized, or 'optimizing' congestion control. 
+
+It's necessary to prevent ossification, to ensure that protocols can evolve to meet the needs of the Internet in the future; otherwise, it would be a 'tragedy of the commons' where the actions of some individual networks — although well-intended — would affect the health of the Internet overall.
+
+There are many ways to prevent ossification; if the data in question is encrypted, it cannot be accessed by any party but those that hold the keys, preventing interference. If an extension point is unencrypted but commonly used in a way that would break applications visibly (for example, HTTP headers), it's less likely to be interfered with.
+
+Where protocol designers can't use encryption and an extension point isn't used often, artificially exercising the extension point can help; we call this _greasing_ it.
+
+For example, QUIC encourages endpoints to use a range of decoy values in its [version negotiation][32], to avoid implementations assuming that it will never change (as was often encountered in TLS implementations, leading to significant problems).
+
+#### The network and the user
+
+Beyond the desire to avoid ossification, these changes also reflect the evolving relationship between networks and their users. While for a long time people assumed that networks were always benevolent — or at least disinterested — parties, this is no longer the case, thanks not only to [pervasive monitoring][33] but also attacks like [Firesheep][34].
+
+As a result, there is growing tension between the needs of Internet users overall and those of the networks who want to have access to some amount of the data flowing over them. Particularly affected will be networks that want to impose policy upon those users; for example, enterprise networks.
+
+In some cases, they might be able to meet their goals by installing software (or a CA certificate, or a browser extension) on their users' machines. However, this isn't as easy in cases where the network doesn't own or have access to the computer; for example, BYOD has become common, and IoT devices seldom have the appropriate control interfaces.
+
+As a result, a lot of discussion surrounding protocol development in the IETF is touching on the sometimes competing needs of enterprises and other 'leaf' networks and the good of the Internet overall.
+
+#### Get involved
+
+For the Internet to work well in the long run, it needs to provide value to end users, avoid ossification, and allow networks to operate. The changes taking place now need to meet all three goals, but we need more input from network operators.
+
+If these changes affect your network — or won't — please leave comments below, or better yet, get involved in the [IETF][35] by attending a meeting, joining a mailing list, or providing feedback on a draft.
+
+Thanks to Martin Thomson and Brian Trammell for their review. 
+ + _Mark Nottingham is a member of the Internet Architecture Board and co-chairs the IETF’s HTTP and QUIC Working Groups._ + +-------------------------------------------------------------------------------- + +via: https://blog.apnic.net/2017/12/12/internet-protocols-changing/ + +作者:[ Mark Nottingham ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.apnic.net/author/mark-nottingham/ +[1]:https://blog.apnic.net/2017/11/17/ietf-100-singapore-dns-http-doh/ +[2]:https://blog.apnic.net/author/mark-nottingham/ +[3]:https://blog.apnic.net/category/tech-matters/ +[4]:https://blog.apnic.net/tag/dns/ +[5]:https://blog.apnic.net/tag/doh/ +[6]:https://blog.apnic.net/tag/guest-post/ +[7]:https://blog.apnic.net/tag/http/ +[8]:https://blog.apnic.net/tag/ietf/ +[9]:https://blog.apnic.net/tag/quic/ +[10]:https://blog.apnic.net/tag/tls/ +[11]:https://blog.apnic.net/tag/protocol/ +[12]:https://blog.apnic.net/2017/12/12/internet-protocols-changing/#comments +[13]:https://blog.apnic.net/ +[14]:https://www.smashingmagazine.com/2015/09/why-performance-matters-the-perception-of-time/ +[15]:https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46197.pdf +[16]:https://http2.github.io/ +[17]:http://httpwg.org/specs/rfc7540.html#TLSUsage +[18]:http://httpwg.org/specs/rfc7540.html#BadCipherSuites +[19]:http://httpwg.org/specs/rfc7540.html#reuse +[20]:https://tools.ietf.org/html/draft-bishop-httpbis-http2-additional-certs +[21]:https://datatracker.ietf.org/doc/draft-ietf-tls-tls13/ +[22]:https://en.wikipedia.org/wiki/Forward_secrecy +[23]:https://quicwg.github.io/ +[24]:https://blog.apnic.net/2016/08/30/questions-answered-quic/ +[25]:https://quicwg.github.io/base-drafts/draft-ietf-quic-transport.html#short-header +[26]:https://www.mjkranch.com/docs/CODASPY17_Kranch_Reed_IdentifyingHTTPSNetflix.pdf +[27]:https://tools.ietf.org/html/draft-trammell-quic-spin +[28]:https://datatracker.ietf.org/wg/doh/about/ +[29]:https://datatracker.ietf.org/meeting/99/materials/slides-99-maprg-fingerprint-based-detection-of-dns-hijacks-using-ripe-atlas/ +[30]:https://datatracker.ietf.org/wg/dprive/about/ +[31]:https://developers.google.com/speed/public-dns/ +[32]:https://quicwg.github.io/base-drafts/draft-ietf-quic-transport.html#rfc.section.3.7 +[33]:https://tools.ietf.org/html/rfc7258 +[34]:http://codebutler.com/firesheep +[35]:https://www.ietf.org/