published/20160604 How to Build Your First Slack Bot with Python.md
@ -0,0 +1,317 @@
|
||||
如何运用 Python 建立你的第一个 Slack 聊天机器人?
|
||||
============
|
||||
|
||||
[聊天机器人(Bot)](https://www.fullstackpython.com/bots.html) 是一种像 [Slack](https://slack.com/) 一样的实用的互动聊天服务方式。如果你之前从来没有建立过聊天机器人,那么这篇文章提供了一个简单的入门指南,告诉你如何用 Python 结合 [Slack API](https://api.slack.com/) 建立你第一个聊天机器人。
|
||||
|
||||
我们会先搭建好开发环境,获得一个 Slack API 的聊天机器人令牌,然后用 Python 开发一个简单的聊天机器人。
|
||||
|
||||
### 我们所需的工具
|
||||
|
||||
我们将这个聊天机器人称作“StarterBot”,它需要 Python 和 Slack API。要运行我们的 Python 代码,我们需要:
|
||||
|
||||
* [Python 2 或者 Python 3](https://www.fullstackpython.com/python-2-or-3.html)
|
||||
* [pip](https://pip.pypa.io/en/stable/) 和 [virtualenv](https://virtualenv.pypa.io/en/stable/) 来处理 Python [应用程序依赖关系](https://www.fullstackpython.com/application-dependencies.html)
|
||||
* 一个可以访问 API 的[免费 Slack 账号](https://slack.com/),或者你可以注册一个 [Slack Developer Hangout team](http://dev4slack.xoxco.com/)。
|
||||
* 通过 Slack 团队建立的官方 Python [Slack 客户端](https://github.com/slackhq/python-slackclient)代码库
|
||||
* [Slack API 测试令牌](https://api.slack.com/tokens)
|
||||
|
||||
当你在本教程中进行构建时,[Slack API 文档](https://api.slack.com/) 是很有用的。
|
||||
|
||||
本教程中所有的代码都放在 [slack-starterbot](https://github.com/mattmakai/slack-starterbot) 公共库里,并以 MIT 许可证开源。
|
||||
|
||||
### 搭建我们的环境
|
||||
|
||||
我们现在已经知道这个项目需要什么样的工具,因此让我们来搭建开发环境吧。首先打开终端(或者 Windows 上的命令提示符),切换到你想要存放这个项目的目录。在那个目录里,创建一个新的 virtualenv,以便把我们的应用程序依赖和其他的 Python 项目隔离开。
|
||||
|
||||
```
|
||||
virtualenv starterbot
|
||||
|
||||
```
|
||||
|
||||
激活 virtualenv:
|
||||
|
||||
```
|
||||
source starterbot/bin/activate
|
||||
|
||||
```
|
||||
|
||||
你的提示符现在应该看起来如截图:
|
||||
|
||||

|
||||
|
||||
这个官方的 slack 客户端 API 帮助库是由 Slack 建立的,它可以通过 Slack 通道发送和接收消息。通过这个 `pip` 命令安装 slackclient 库:
|
||||
|
||||
```
|
||||
pip install slackclient
|
||||
|
||||
```
|
||||
|
||||
当 `pip` 命令完成时,你应该看到类似这样的输出,并返回提示符。
|
||||
|
||||

|
||||
|
||||
我们也需要为我们的 Slack 项目获得一个访问令牌,以便我们的聊天机器人可以用它来连接到 Slack API。
|
||||
|
||||
### Slack 实时消息传递(RTM)API
|
||||
|
||||
Slack 允许程序通过一个 [Web API](https://www.fullstackpython.com/application-programming-interfaces.html) 来访问他们的消息传递通道。去这个 [Slack Web API 页面](https://api.slack.com/) 注册建立你自己的 Slack 项目。你也可以登录一个你拥有管理权限的已有账号。
|
||||
|
||||

|
||||
|
||||
登录后你会到达 [聊天机器人用户页面](https://api.slack.com/bot-users)。
|
||||
|
||||

|
||||
|
||||
给你的聊天机器人起名为“starterbot”然后点击 “Add bot integration” 按钮。
|
||||
|
||||

|
||||
|
||||
这个页面将重新加载,你将看到一个新生成的访问令牌。你还可以将标志改成你自己设计的。例如我给的这个“Full Stack Python”标志。
|
||||
|
||||

|
||||
|
||||
在页面底部点击“Save Integration”按钮。你的聊天机器人现在已经准备好连接 Slack API。
|
||||
|
||||
Python 开发人员的一个常见的做法是以环境变量输出秘密令牌。输出的 Slack 令牌名字为`SLACK_BOT_TOKEN`:
|
||||
|
||||
```
|
||||
export SLACK_BOT_TOKEN='你的 slack 令牌粘帖在这里'
|
||||
|
||||
```
|
||||
|
||||
好了,我们现在得到了将这个 Slack API 用作聊天机器人的授权。
|
||||
|
||||
我们建立聊天机器人还需要更多信息:我们的聊天机器人的 ID。接下来我们将会写一个简短的脚本,从 Slack API 获得该 ID。
|
||||
|
||||
### 获得我们聊天机器人的 ID
|
||||
|
||||
终于到了写 Python 代码的时候了!我们先编写一个简短的 Python 脚本来获得 StarterBot 的 ID,热身一下。这个 ID 在不同的 Slack 项目中是不一样的。
|
||||
|
||||
我们需要该 ID,当解析从 Slack RTM 上发给 StarterBot 的消息时,它用于对我们的应用验明正身。我们的脚本也会测试我们 `SLACK_BOT_TOKEN` 环境变量是否设置正确。
|
||||
|
||||
建立一个命名为 print_bot_id.py 的新文件,并且填入下面的代码:
|
||||
|
||||
```
|
||||
import os
|
||||
from slackclient import SlackClient
|
||||
|
||||
|
||||
BOT_NAME = 'starterbot'
|
||||
|
||||
slack_client = SlackClient(os.environ.get('SLACK_BOT_TOKEN'))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
api_call = slack_client.api_call("users.list")
|
||||
if api_call.get('ok'):
|
||||
# retrieve all users so we can find our bot
|
||||
users = api_call.get('members')
|
||||
for user in users:
|
||||
if 'name' in user and user.get('name') == BOT_NAME:
|
||||
print("Bot ID for '" + user['name'] + "' is " + user.get('id'))
|
||||
else:
|
||||
print("could not find bot user with the name " + BOT_NAME)
|
||||
|
||||
```
|
||||
|
||||
我们的代码导入了 SlackClient,并用我们设置的环境变量 `SLACK_BOT_TOKEN` 实例化它。当该脚本通过 python 命令执行时,它会访问 Slack API,列出所有的 Slack 用户,并找出名字为“starterbot”的用户所对应的 ID。
|
||||
|
||||
这个获得聊天机器人的 ID 的脚本我们仅需要运行一次。
|
||||
|
||||
```
|
||||
python print_bot_id.py
|
||||
|
||||
```
|
||||
|
||||
脚本运行后会输出简单的一行,其中就是我们聊天机器人的 ID。
|
||||
|
||||

|
||||
|
||||
复制这个脚本打印出的唯一 ID。并将该 ID 作为一个环境变量 `BOT_ID` 输出。
|
||||
|
||||
```
|
||||
(starterbot)$ export BOT_ID='bot id returned by script'
|
||||
|
||||
```
|
||||
|
||||
获取聊天机器人 ID 的脚本只需要运行这一次。现在,我们可以在运行 StarterBot 的 Python 应用程序中使用这个 ID 了。
|
||||
|
||||
### 编码我们的 StarterBot
|
||||
|
||||
现在我们拥有了写我们的 StarterBot 代码所需的一切。 创建一个新文件命名为 starterbot.py ,它包括以下代码。
|
||||
|
||||
```
|
||||
import os
|
||||
import time
|
||||
from slackclient import SlackClient
|
||||
|
||||
```
|
||||
|
||||
对 `os` 和 `SlackClient` 的导入看起来很熟悉,因为我们已经在 print_bot_id.py 中用过它们了。
|
||||
|
||||
通过我们导入的依赖包,我们可以使用它们获得环境变量值,并实例化 Slack 客户端。
|
||||
|
||||
```
|
||||
# starterbot 的 ID 作为一个环境变量
|
||||
BOT_ID = os.environ.get("BOT_ID")
|
||||
|
||||
# 常量
|
||||
AT_BOT = "<@" + BOT_ID + ">:"
|
||||
EXAMPLE_COMMAND = "do"
|
||||
|
||||
# 实例化 Slack 和 Twilio 客户端
|
||||
slack_client = SlackClient(os.environ.get('SLACK_BOT_TOKEN'))
|
||||
|
||||
```
|
||||
|
||||
这段代码用我们已导出的环境变量 `SLACK_BOT_TOKEN` 实例化了 `SlackClient` 客户端。
|
||||
|
||||
```
|
||||
if __name__ == "__main__":
|
||||
READ_WEBSOCKET_DELAY = 1 # 1 从 firehose 读取延迟 1 秒
|
||||
if slack_client.rtm_connect():
|
||||
print("StarterBot connected and running!")
|
||||
while True:
|
||||
command, channel = parse_slack_output(slack_client.rtm_read())
|
||||
if command and channel:
|
||||
handle_command(command, channel)
|
||||
time.sleep(READ_WEBSOCKET_DELAY)
|
||||
else:
|
||||
print("Connection failed. Invalid Slack token or bot ID?")
|
||||
|
||||
```
|
||||
|
||||
Slack 客户端会连接到 Slack RTM API WebSocket,然后当解析来自 firehose 的消息时会不断循环。如果有任何发给 StarterBot 的消息,那么一个被称作 `handle_command` 的函数会决定做什么。
|
||||
|
||||
接下来添加两个函数来解析 Slack 的输出并处理命令。
|
||||
|
||||
```
|
||||
def handle_command(command, channel):
|
||||
"""
|
||||
Receives commands directed at the bot and determines if they
|
||||
are valid commands. If so, then acts on the commands. If not,
|
||||
returns back what it needs for clarification.
|
||||
"""
|
||||
response = "Not sure what you mean. Use the *" + EXAMPLE_COMMAND + \
|
||||
"* command with numbers, delimited by spaces."
|
||||
if command.startswith(EXAMPLE_COMMAND):
|
||||
response = "Sure...write some more code then I can do that!"
|
||||
slack_client.api_call("chat.postMessage", channel=channel,
|
||||
text=response, as_user=True)
|
||||
|
||||
def parse_slack_output(slack_rtm_output):
|
||||
"""
|
||||
The Slack Real Time Messaging API is an events firehose.
|
||||
this parsing function returns None unless a message is
|
||||
directed at the Bot, based on its ID.
|
||||
"""
|
||||
output_list = slack_rtm_output
|
||||
if output_list and len(output_list) > 0:
|
||||
for output in output_list:
|
||||
if output and 'text' in output and AT_BOT in output['text']:
|
||||
# 返回 @ 之后的文本,删除空格
|
||||
return output['text'].split(AT_BOT)[1].strip().lower(), \
|
||||
output['channel']
|
||||
return None, None
|
||||
|
||||
```
|
||||
|
||||
`parse_slack_output` 函数从 Slack 接收消息,并判断它们是否是发给我们的 StarterBot 的。只有以“@我们的聊天机器人 ID”开头的直接命令,才会交由我们的代码处理。目前它只是通过 Slack 通道回发一条消息,告诉用户去多写一些 Python 代码!
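下面用一个小例子演示这个解析逻辑(其中的机器人 ID “U0K145ZD5” 只是一个假设的示例值):

```
# 假设我们的机器人 ID 是 U0K145ZD5(示例值)
AT_BOT = "<@U0K145ZD5>:"
text = "<@U0K145ZD5>: do something"

# parse_slack_output 的核心逻辑:取 @ 机器人之后的文本并去掉首尾空格
print(text.split(AT_BOT)[1].strip().lower())
# 输出:do something
```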
|
||||
|
||||
这是整个程序组合在一起的样子 (你也可以 [在 GitHub 中查看该文件](https://github.com/mattmakai/slack-starterbot/blob/master/starterbot.py)):
|
||||
|
||||
```
|
||||
import os
|
||||
import time
|
||||
from slackclient import SlackClient
|
||||
|
||||
# starterbot 的 ID 作为一个环境变量
|
||||
BOT_ID = os.environ.get("BOT_ID")
|
||||
|
||||
# 常量
|
||||
AT_BOT = "<@" + BOT_ID + ">:"
|
||||
EXAMPLE_COMMAND = "do"
|
||||
|
||||
# 实例化 Slack 和 Twilio 客户端
|
||||
slack_client = SlackClient(os.environ.get('SLACK_BOT_TOKEN'))
|
||||
|
||||
def handle_command(command, channel):
|
||||
"""
|
||||
Receives commands directed at the bot and determines if they
|
||||
are valid commands. If so, then acts on the commands. If not,
|
||||
returns back what it needs for clarification.
|
||||
"""
|
||||
response = "Not sure what you mean. Use the *" + EXAMPLE_COMMAND + \
|
||||
"* command with numbers, delimited by spaces."
|
||||
if command.startswith(EXAMPLE_COMMAND):
|
||||
response = "Sure...write some more code then I can do that!"
|
||||
slack_client.api_call("chat.postMessage", channel=channel,
|
||||
text=response, as_user=True)
|
||||
|
||||
def parse_slack_output(slack_rtm_output):
|
||||
"""
|
||||
The Slack Real Time Messaging API is an events firehose.
|
||||
this parsing function returns None unless a message is
|
||||
directed at the Bot, based on its ID.
|
||||
"""
|
||||
output_list = slack_rtm_output
|
||||
if output_list and len(output_list) > 0:
|
||||
for output in output_list:
|
||||
if output and 'text' in output and AT_BOT in output['text']:
|
||||
# 返回 @ 之后的文本,删除空格
|
||||
return output['text'].split(AT_BOT)[1].strip().lower(), \
|
||||
output['channel']
|
||||
return None, None
|
||||
|
||||
if __name__ == "__main__":
|
||||
READ_WEBSOCKET_DELAY = 1 # 1 second delay between reading from firehose
|
||||
if slack_client.rtm_connect():
|
||||
print("StarterBot connected and running!")
|
||||
while True:
|
||||
command, channel = parse_slack_output(slack_client.rtm_read())
|
||||
if command and channel:
|
||||
handle_command(command, channel)
|
||||
time.sleep(READ_WEBSOCKET_DELAY)
|
||||
else:
|
||||
print("Connection failed. Invalid Slack token or bot ID?")
|
||||
|
||||
```
|
||||
|
||||
现在我们的代码已经有了,我们可以通过 `python starterbot.py` 来运行我们 StarterBot 的代码了。
|
||||
|
||||

|
||||
|
||||
在 Slack 中创建新通道,并且把 StarterBot 邀请进来,或者把 StarterBot 邀请进一个已经存在的通道中。
|
||||
|
||||

|
||||
|
||||
现在在你的通道中给 StarterBot 发命令。
|
||||
|
||||

|
||||
|
||||
如果聊天机器人的响应出现问题,你可能需要做一个修改。正如本教程上面所写的,其中一行 `AT_BOT = "<@" + BOT_ID + ">:"`,要求在“@starter”(即你给你自己的聊天机器人起的名字)后面带一个冒号。如果收不到响应,请把 `AT_BOT` 字符串末尾的 `:` 移除。Slack 似乎需要在 `@` 某人之后加一个冒号,但其表现并不一致。
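如果需要做这个修改,下面是一个最小的示意(这是对上文常量的一个假设性调整,并非教程代码的原始写法):

```
# 原来的写法:在机器人 ID 之后带一个冒号
# AT_BOT = "<@" + BOT_ID + ">:"

# 如果机器人收不到消息,可以试着去掉冒号
AT_BOT = "<@" + BOT_ID + ">"
```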
|
||||
|
||||
### 结束
|
||||
|
||||
好吧,你现在已经获得一个简易的聊天机器人,你可以在代码中很多地方加入你想要创建的任何特性。
|
||||
|
||||
我们能够使用 Slack RTM API 和 Python 完成很多功能。看看通过这些文章你还可以学习到什么:
|
||||
|
||||
* 附加一个持久的[关系数据库](https://www.fullstackpython.com/databases.html) 或者 [NoSQL 后端](https://www.fullstackpython.com/no-sql-datastore.html),比如 [PostgreSQL](https://www.fullstackpython.com/postgresql.html)、[MySQL](https://www.fullstackpython.com/mysql.html) 或者 [SQLite](https://www.fullstackpython.com/sqlite.html),来保存和检索用户数据(本列表之后有一个简单的示例)
|
||||
* 添加另外一个与聊天机器人互动的通道,比如 [短信](https://www.twilio.com/blog/2016/05/build-sms-slack-bot-python.html) 或者[电话呼叫](https://www.twilio.com/blog/2016/05/add-phone-calling-slack-python.html)
|
||||
* [集成其它的 web API](https://www.fullstackpython.com/api-integration.html),比如 [GitHub](https://developer.github.com/v3/)、[Twilio](https://www.twilio.com/docs) 或者 [api.ai](https://docs.api.ai/)
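比如针对第一条“保存和检索用户数据”,下面是一个用标准库 sqlite3 保存命令的极简示意(假设沿用上文 `handle_command` 的结构;数据库文件名 `starterbot.db` 是假设的,仅作演示):

```
import sqlite3

# 打开(或创建)一个本地 SQLite 数据库,文件名为假设值
conn = sqlite3.connect('starterbot.db')
conn.execute("CREATE TABLE IF NOT EXISTS commands (channel TEXT, command TEXT)")

def save_command(command, channel):
    # 可以在 handle_command 里调用它,把收到的命令保存下来
    conn.execute("INSERT INTO commands (channel, command) VALUES (?, ?)",
                 (channel, command))
    conn.commit()
```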
|
||||
|
||||
有问题? 通过 Twitter 联系我 [@fullstackpython](https://twitter.com/fullstackpython) 或 [@mattmakai](https://twitter.com/mattmakai)。 我在 GitHub 上的用户名是 [mattmakai](https://github.com/mattmakai)。
|
||||
|
||||
对这篇文章感兴趣?Fork 这个 [GitHub 上的页面](https://github.com/mattmakai/fullstackpython.com/blob/gh-pages/source/content/posts/160604-build-first-slack-bot-python.markdown)吧。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
via: https://www.fullstackpython.com/blog/build-first-slack-bot-python.html
|
||||
|
||||
作者:[Matt Makai][a]
|
||||
译者:[jiajia9llinuxer](https://github.com/jiajia9linuxer)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.fullstackpython.com/about-author.html
|
@ -1,13 +1,13 @@
|
||||
translating by StdioA
|
||||
|
||||
搭个 Web 服务器(三)
|
||||
=====================================
|
||||
|
||||
>“当我们必须创造时,才能够学到更多。” ——伯爵
|
||||
>“当我们必须创造时,才能够学到更多。” ——皮亚杰
|
||||
|
||||
在本系列的第二部分中,你创造了一个可以处理基本 HTTP GET 请求的、朴素的 WSGI 服务器。当时我问了一个问题:“你该如何让你的服务器在同一时间处理多个请求呢?”在这篇文章中,你会找到答案。系好安全带,我们要认真起来,全速前进了!你将会体验到一段非常快速的旅程。准备好你的 Linux,Mac OS X(或者其他 *nix 系统),还有你的 Python. 本文中所有源代码均可在 [GitHub][1] 上找到。
|
||||
在本系列的[第二部分](https://linux.cn/article-7685-1.html)中,你创造了一个可以处理基本 HTTP GET 请求的、朴素的 WSGI 服务器。当时我问了一个问题:“你该如何让你的服务器在同一时间处理多个请求呢?”在这篇文章中,你会找到答案。系好安全带,我们要认真起来,全速前进了!你将会体验到一段非常快速的旅程。准备好你的 Linux、Mac OS X(或者其他 *nix 系统),还有你的 Python。本文中所有源代码均可在 [GitHub][1] 上找到。
|
||||
|
||||
首先,我们来回顾一下 Web 服务器的基本结构,以及服务器处理来自客户端的请求时,所需的必要步骤。你在第一及第二部分中创建的轮询服务器只能够在同一时间内处理一个请求。在处理完当前请求之前,它不能够打开一个新的客户端连接。所有请求为了等待服务都需要排队,在服务繁忙时,这个队伍可能会排的很长,一些客户端可能会感到不开心。
|
||||
### 服务器的基本结构及如何处理请求
|
||||
|
||||
首先,我们来回顾一下 Web 服务器的基本结构,以及服务器处理来自客户端的请求时,所需的必要步骤。你在[第一部分](https://linux.cn/article-7662-1.html)及[第二部分](https://linux.cn/article-7685-1.html)中创建的轮询服务器只能够一次处理一个请求。在处理完当前请求之前,它不能够接受新的客户端连接。所有请求为了等待服务都需要排队,在服务繁忙时,这个队伍可能会排的很长,一些客户端可能会感到不开心。
|
||||
|
||||

|
||||
|
||||
@ -53,7 +53,7 @@ if __name__ == '__main__':
|
||||
serve_forever()
|
||||
```
|
||||
|
||||
为了观察到你的服务器在同一时间只能处理一个请求,我们对服务器的代码做一点点修改:在将响应发送至客户端之后,将程序阻塞 60 秒。这个修改只需要一行代码,来告诉服务器进程暂停 60 秒钟。
|
||||
为了观察到你的服务器在同一时间只能处理一个请求的行为,我们对服务器的代码做一点点修改:在将响应发送至客户端之后,将程序阻塞 60 秒。这个修改只需要一行代码,来告诉服务器进程暂停 60 秒钟。
|
||||
|
||||

|
||||
|
||||
@ -84,7 +84,7 @@ HTTP/1.1 200 OK
|
||||
Hello, World!
|
||||
"""
|
||||
client_connection.sendall(http_response)
|
||||
time.sleep(60) # 睡眠语句,阻塞该进程 60 秒
|
||||
time.sleep(60) ### 睡眠语句,阻塞该进程 60 秒
|
||||
|
||||
|
||||
def serve_forever():
|
||||
@ -126,88 +126,85 @@ $ curl http://localhost:8888/hello
|
||||
|
||||

|
||||
|
||||
当你等待足够长的时间(60 秒以上)后,你会看到第一个 `curl` 程序完成,而第二个 `curl` 在屏幕上输出了“Hello, World!”,然后休眠 60 秒,进而停止运行。
|
||||
当你等待足够长的时间(60 秒以上)后,你会看到第一个 `curl` 程序完成,而第二个 `curl` 在屏幕上输出了“Hello, World!”,然后休眠 60 秒,进而终止。
|
||||
|
||||

|
||||
|
||||
这两个程序这样运行,是因为在服务器在处理完第一个来自 `curl` 的请求之后,只有等待 60 秒才能开始处理第二个请求。这个处理请求的过程按顺序进行(也可以说,迭代进行),一步一步进行,在我们刚刚给出的例子中,在同一时间内只能处理一个请求。
|
||||
这样运行的原因是因为在服务器在处理完第一个来自 `curl` 的请求之后,只有等待 60 秒才能开始处理第二个请求。这个处理请求的过程按顺序进行(也可以说,迭代进行),一步一步进行,在我们刚刚给出的例子中,在同一时间内只能处理一个请求。
|
||||
|
||||
现在,我们来简单讨论一下客户端与服务器的交流过程。为了让两个程序在网络中互相交流,它们必须使用套接字。你应当在本系列的前两部分中见过它几次了。但是,套接字是什么?
|
||||
|
||||

|
||||
|
||||
套接字是一个交互通道的端点的抽象形式,它可以让你的程序通过文件描述符来与其它程序进行交流。在这篇文章中,我只会单独讨论 Linux 或 Mac OS X 中的 TCP/IP 套接字。这里有一个重点概念需要你去理解:TCP 套接字对。
|
||||
套接字(socket)是一个通讯通道端点(endpoint)的抽象描述,它可以让你的程序通过文件描述符来与其它程序进行交流。在这篇文章中,我只会单独讨论 Linux 或 Mac OS X 中的 TCP/IP 套接字。这里有一个重点概念需要你去理解:TCP 套接字对(socket pair)。
|
||||
|
||||
> TCP 连接使用的套接字对是一个由 4 个元素组成的元组,它确定了 TCP 连接的两端:本地 IP 地址、本地端口、远端 IP 地址及远端端口。一个套接字对独一无二地确定了网络中的每一个 TCP 连接。在连接一端的两个值:一个 IP 地址和一个端口,通常被称作一个套接字。[1][4]
|
||||
> TCP 连接使用的套接字对是一个由 4 个元素组成的元组,它确定了 TCP 连接的两端:本地 IP 地址、本地端口、远端 IP 地址及远端端口。一个套接字对唯一地确定了网络中的每一个 TCP 连接。在连接一端的两个值:一个 IP 地址和一个端口,通常被称作一个套接字。(引自[《UNIX 网络编程 卷1:套接字联网 API (第3版)》][4])
|
||||
|
||||

|
||||
|
||||
所以,元组 {10.10.10.2:49152, 12.12.12.3:8888} 就是一个能够在客户端确定 TCP 连接两端的套接字对,而元组 {12.12.12.3:8888, 10.10.10.2:49152} 则是在服务端确定 TCP 连接两端的套接字对。在这个例子中,确定 TCP 服务端的两个值(IP 地址 `12.12.12.3` 及端口 `8888`),代表一个套接字;另外两个值则代表客户端的套接字。
|
||||
所以,元组 `{10.10.10.2:49152, 12.12.12.3:8888}` 就是一个能够在客户端确定 TCP 连接两端的套接字对,而元组 `{12.12.12.3:8888, 10.10.10.2:49152}` 则是在服务端确定 TCP 连接两端的套接字对。在这个例子中,确定 TCP 服务端的两个值(IP 地址 `12.12.12.3` 及端口 `8888`),代表一个套接字;另外两个值则代表客户端的套接字。
|
||||
|
||||
一个服务器创建一个套接字并开始建立连接的基本工作流程如下:
|
||||
|
||||

|
||||
|
||||
1. 服务器创建一个 TCP/IP 套接字。我们可以用下面那条 Python 语句来创建:
|
||||
1. 服务器创建一个 TCP/IP 套接字。我们可以用这条 Python 语句来创建:
|
||||
|
||||
```
|
||||
listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
```
|
||||
```
|
||||
listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
```
|
||||
2. 服务器可能会设定一些套接字选项(这个步骤是可选的,但是你可以看到上面的服务器代码做了设定,这样才能够在重启服务器时多次复用同一地址):
|
||||
|
||||
2. 服务器可能会设定一些套接字选项(这个步骤是可选的,但是你可以看到上面的服务器代码做了设定,这样才能够在重启服务器时多次复用同一地址)。
|
||||
|
||||
```
|
||||
listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
|
||||
```
|
||||
|
||||
3. 然后,服务器绑定一个地址。绑定函数可以将一个本地协议地址赋给套接字。若使用 TCP 协议,调用绑定函数时,需要指定一个端口号,一个 IP 地址,或两者兼有,或两者兼无。[1][4]
|
||||
|
||||
```
|
||||
listen_socket.bind(SERVER_ADDRESS)
|
||||
```
|
||||
```
|
||||
listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
|
||||
```
|
||||
3. 然后,服务器绑定一个地址。绑定函数 `bind` 可以将一个本地协议地址赋给套接字。若使用 TCP 协议,调用绑定函数 `bind` 时,需要指定一个端口号,一个 IP 地址,或两者兼有,或两者全无。(引自[《UNIX网络编程 卷1:套接字联网 API (第3版)》][4])
|
||||
|
||||
```
|
||||
listen_socket.bind(SERVER_ADDRESS)
|
||||
```
|
||||
4. 然后,服务器开启套接字的监听模式。
|
||||
|
||||
```
|
||||
listen_socket.listen(REQUEST_QUEUE_SIZE)
|
||||
```
|
||||
```
|
||||
listen_socket.listen(REQUEST_QUEUE_SIZE)
|
||||
```
|
||||
|
||||
监听函数只应在服务端调用。它会通知操作系统内核,标明它会接受所有向该套接字发送请求的链接。
|
||||
监听函数 `listen` 只应在服务端调用。它会通知操作系统内核,表明它会接受所有向该套接字发送的入站连接请求。
|
||||
|
||||
以上四步完成后,服务器将循环接收来自客户端的连接,一次循环处理一条。当有连接可用时,`accept` 函数将会返回一个已连接的客户端套接字。然后,服务器从客户端套接字中读取请求数据,将它在标准输出流中打印出来,并向客户端回送一条消息。然后,服务器会关闭这个客户端连接,并准备接收一个新的客户端连接。
|
||||
以上四步完成后,服务器将循环接收来自客户端的连接,一次循环处理一条。当有连接可用时,接受请求函数 `accept` 将会返回一个已连接的客户端套接字。然后,服务器从这个已连接的客户端套接字中读取请求数据,将数据在其标准输出流中输出出来,并向客户端回送一条消息。然后,服务器会关闭这个客户端连接,并准备接收一个新的客户端连接。
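把上面四个步骤和接受连接的循环放在一起,大致就是下面这个极简的示意(端口 8888 沿用文中设定,响应内容只是演示,并不是文中 webserver3a.py 的完整代码):

```
import socket

SERVER_ADDRESS = (HOST, PORT) = '', 8888
REQUEST_QUEUE_SIZE = 5

listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)    # 1. 创建套接字
listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # 2. 设置套接字选项
listen_socket.bind(SERVER_ADDRESS)                                   # 3. 绑定地址
listen_socket.listen(REQUEST_QUEUE_SIZE)                             # 4. 开始监听

while True:
    # 接受一个客户端连接,读取请求并返回响应,然后关闭这个连接
    client_connection, client_address = listen_socket.accept()
    request = client_connection.recv(1024)
    client_connection.sendall(b'HTTP/1.1 200 OK\n\nHello, World!\n')
    client_connection.close()
```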
|
||||
|
||||
这是客户端使用 TCP/IP 协议与服务器通信的必要步骤:
|
||||
|
||||

|
||||
|
||||
下面是一段示例代码,使用这段代码,客户端可以连接你的服务器,发送一个请求,并打印响应内容:
|
||||
下面是一段示例代码,使用这段代码,客户端可以连接你的服务器,发送一个请求,并输出响应内容:
|
||||
|
||||
```
|
||||
import socket
|
||||
|
||||
# 创建一个套接字,并连接值服务器
|
||||
### 创建一个套接字,并连接至服务器
|
||||
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
sock.connect(('localhost', 8888))
|
||||
|
||||
# 发送一段数据,并接收响应数据
|
||||
### 发送一段数据,并接收响应数据
|
||||
sock.sendall(b'test')
|
||||
data = sock.recv(1024)
|
||||
print(data.decode())
|
||||
```
|
||||
|
||||
在创建套接字后,客户端需要连接至服务器。我们可以调用 `connect` 函数来完成这个操作:
|
||||
在创建套接字后,客户端需要连接至服务器。我们可以调用连接函数 `connect` 来完成这个操作:
|
||||
|
||||
```
|
||||
sock.connect(('localhost', 8888))
|
||||
```
|
||||
|
||||
客户端只需提供待连接服务器的 IP 地址(或主机名),及端口号,即可连接至远端服务器。
|
||||
客户端只需提供待连接的远程服务器的 IP 地址(或主机名),及端口号,即可连接至远端服务器。
|
||||
|
||||
你可能已经注意到了,客户端不需要调用 `bind` 及 `accept` 函数,就可以与服务器建立连接。客户端不需要调用 `bind` 函数是因为客户端不需要关注本地 IP 地址及端口号。操作系统内核中的 TCP/IP 协议栈会在客户端调用 `connect` 函数时,自动为套接字分配本地 IP 地址及本地端口号。这个本地端口被称为临时端口,也就是一个短暂开放的端口。
|
||||
你可能已经注意到了,客户端不需要调用 `bind` 及 `accept` 函数,就可以与服务器建立连接。客户端不需要调用 `bind` 函数是因为客户端不需要关注本地 IP 地址及端口号。操作系统内核中的 TCP/IP 协议栈会在客户端调用 `connect` 函数时,自动为套接字分配本地 IP 地址及本地端口号。这个本地端口被称为临时端口(ephemeral port),即一个短暂开放的端口。
|
||||
|
||||

|
||||
|
||||
服务器中有一些端口被用于承载一些众所周知的服务,它们被称作通用端口:如 80 端口用于 HTTP 服务,22 端口用于 SSH 服务。打开你的 Python shell,与你在本地运行的服务器建立一个连接,来看看内核给你的客户端套接字分配了哪个临时端口(在尝试这个例子之前,你需要运行服务器程序 `webserver3a.py` 或 `webserver3b.py`):
|
||||
服务器中有一些端口被用于承载一些众所周知的服务,它们被称作通用(well-known)端口:如 80 端口用于 HTTP 服务,22 端口用于 SSH 服务。打开你的 Python shell,与你在本地运行的服务器建立一个连接,来看看内核给你的客户端套接字分配了哪个临时端口(在尝试这个例子之前,你需要运行服务器程序 `webserver3a.py` 或 `webserver3b.py`):
|
||||
|
||||
```
|
||||
>>> import socket
|
||||
@ -222,12 +219,11 @@ sock.connect(('localhost', 8888))
|
||||
|
||||
在我开始回答我在第二部分中提出的问题之前,我还需要快速讲解一些概念。你很快就会明白这些概念为什么非常重要。这两个概念,一个是进程,另外一个是文件描述符。
|
||||
|
||||
什么是进程?进程就是一个程序执行的实体。举个例子:当你的服务器代码被执行时,它会被载入内存,而内存中表现此次程序运行的实体就叫做进程。内核记录了进程的一系列有关信息——比如进程 ID——来追踪它的运行情况。当你在执行轮询服务器 `webserver3a.py` 或 `webserver3b.py` 时,你只启动了一个进程。
|
||||
什么是进程?进程就是一个程序执行的实体。举个例子:当你的服务器代码被执行时,它会被载入内存,而内存中表现此次程序运行的实体就叫做进程。内核记录了进程的一系列有关信息——比如进程 ID——来追踪它的运行情况。当你在执行轮询服务器 `webserver3a.py` 或 `webserver3b.py` 时,你其实只是启动了一个进程。
|
||||
|
||||

|
||||
|
||||
我们在终端窗口中运行 `webserver3b.py`:
|
||||
Start the server webserver3b.py in a terminal window:
|
||||
|
||||
```
|
||||
$ python webserver3b.py
|
||||
@ -240,7 +236,7 @@ $ ps | grep webserver3b | grep -v grep
|
||||
7182 ttys003 0:00.04 python webserver3b.py
|
||||
```
|
||||
|
||||
`ps` 命令显示,我们刚刚只运行了一个 Python 进程 `webserver3b`。当一个进程被创建时,内核会为其分配一个进程 ID,也就是 PID。在 UNIX 中,所有用户进程都有一个父进程;当然,这个父进程也有进程 ID,叫做父进程 ID,缩写为 PPID。假设你默认使用 BASH shell,那当你启动服务器时,一个新的进程会被启动,同时被赋予一个 PID,而它的父进程 PID 会被设为 BASH shell 的 PID。
|
||||
`ps` 命令显示,我们刚刚只运行了一个 Python 进程 `webserver3b.py`。当一个进程被创建时,内核会为其分配一个进程 ID,也就是 PID。在 UNIX 中,所有用户进程都有一个父进程;当然,这个父进程也有进程 ID,叫做父进程 ID,缩写为 PPID。假设你默认使用 BASH shell,那当你启动服务器时,就会启动一个新的进程,同时被赋予一个 PID,而它的父进程 PID 会被设为 BASH shell 的 PID。
|
||||
|
||||

|
||||
|
||||
@ -248,11 +244,11 @@ $ ps | grep webserver3b | grep -v grep
|
||||
|
||||

|
||||
|
||||
另外一个需要了解的概念,就是文件描述符。什么是文件描述符?文件描述符是一个非负整数,当进程打开一个现有文件、创建新文件或创建一个新的套接字时,内核会将这个数返回给进程。你以前可能听说过,在 UNIX 中,一切皆是文件。内核会根据一个文件描述符来为一个进程打开一个文件。当你需要读取文件或向文件写入时,我们同样通过文件描述符来定位这个文件。Python 提供了高层次的文件(或套接字)对象,所以你不需要直接通过文件描述符来定位文件。但是,在高层对象之下,我们就是用它来在 UNIX 中定位文件及套接字:整形的文件描述符。
|
||||
另外一个需要了解的概念,就是文件描述符。什么是文件描述符?文件描述符是一个非负整数,当进程打开一个现有文件、创建新文件或创建一个新的套接字时,内核会将这个数返回给进程。你以前可能听说过,在 UNIX 中,一切皆是文件。内核会按文件描述符来找到一个进程所打开的文件。当你需要读取文件或向文件写入时,我们同样通过文件描述符来定位这个文件。Python 提供了高层次的操作文件(或套接字)的对象,所以你不需要直接通过文件描述符来定位文件。但是,在高层对象之下,我们就是用它来在 UNIX 中定位文件及套接字,通过这个整数的文件描述符。
|
||||
|
||||

|
||||
|
||||
一般情况下,UNIX shell 会将一个进程的标准输入流的文件描述符设为 0,标准输出流设为 1,而标准错误打印的文件描述符会被设为 2。
|
||||
一般情况下,UNIX shell 会将一个进程的标准输入流(STDIN)的文件描述符设为 0,标准输出流(STDOUT)设为 1,而标准错误打印(STDERR)的文件描述符会被设为 2。
|
||||
|
||||

|
||||
|
||||
@ -289,7 +285,7 @@ hello
|
||||
3
|
||||
```
|
||||
|
||||
我还想再提一件事:不知道你有没有注意到,在我们的第二个轮询服务器 `webserver3b.py` 中,当你的服务器休眠 60 秒的过程中,你仍然可以通过第二个 `curl` 命令连接至服务器。当然 `curl` 命令并没有立刻输出任何内容而是挂在哪里,但是既然服务器没有接受连接,那它为什么不立即拒绝掉连接,而让它还能够继续与服务器建立连接呢?这个问题的答案是:当我在调用套接字对象的 `listen` 方法时,我为该方法提供了一个 `BACKLOG` 参数,在代码中用 `REQUEST_QUEUE_SIZE` 变量来表示。`BACKLOG` 参数决定了在内核中为存放即将到来的连接请求所创建的队列的大小。当服务器 `webserver3b.py` 被挂起的时候,你运行的第二个 `curl` 命令依然能够连接至服务器,因为内核中用来存放即将接收的连接请求的队列依然拥有足够大的可用空间。
|
||||
我还想再提一件事:不知道你有没有注意到,在我们的第二个轮询服务器 `webserver3b.py` 中,当你的服务器休眠 60 秒的过程中,你仍然可以通过第二个 `curl` 命令连接至服务器。当然 `curl` 命令并没有立刻输出任何内容而是挂在哪里,但是既然服务器没有接受连接,那它为什么不立即拒绝掉连接,而让它还能够继续与服务器建立连接呢?这个问题的答案是:当我在调用套接字对象的 `listen` 方法时,我为该方法提供了一个 `BACKLOG` 参数,在代码中用 `REQUEST_QUEUE_SIZE` 常量来表示。`BACKLOG` 参数决定了在内核中为存放即将到来的连接请求所创建的队列的大小。当服务器 `webserver3b.py` 在睡眠的时候,你运行的第二个 `curl` 命令依然能够连接至服务器,因为内核中用来存放即将接收的连接请求的队列依然拥有足够大的可用空间。
|
||||
|
||||
尽管增大 `BACKLOG` 参数并不能神奇地使你的服务器同时处理多个请求,但当你的服务器很繁忙时,将它设置为一个较大的值还是相当重要的。这样,在你的服务器调用 `accept` 方法时,不需要再等待一个新的连接建立,而可以立刻直接抓取队列中的第一个客户端连接,并不加停顿地立刻处理它。
|
||||
|
||||
@ -297,7 +293,7 @@ hello
|
||||
|
||||

|
||||
|
||||
- 迭代服务器
|
||||
- 轮询服务器
|
||||
- 服务端套接字创建流程(创建套接字,绑定,监听及接受)
|
||||
- 客户端连接创建流程(创建套接字,连接)
|
||||
- 套接字对
|
||||
@ -308,6 +304,8 @@ hello
|
||||
- 文件描述符
|
||||
- 套接字的 `listen` 方法中,`BACKLOG` 参数的含义
|
||||
|
||||
### 如何并发处理多个请求
|
||||
|
||||
现在,我可以开始回答第二部分中的那个问题了:“你该如何让你的服务器在同一时间处理多个请求呢?”或者换一种说法:“如何编写一个并发服务器?”
|
||||
|
||||

|
||||
@ -368,13 +366,13 @@ def serve_forever():
|
||||
while True:
|
||||
client_connection, client_address = listen_socket.accept()
|
||||
pid = os.fork()
|
||||
if pid == 0: # 子进程
|
||||
listen_socket.close() # 关闭子进程中复制的套接字对象
|
||||
if pid == 0: ### 子进程
|
||||
listen_socket.close() ### 关闭子进程中复制的套接字对象
|
||||
handle_request(client_connection)
|
||||
client_connection.close()
|
||||
os._exit(0) # 子进程在这里退出
|
||||
else: # 父进程
|
||||
client_connection.close() # 关闭父进程中的客户端连接对象,并循环执行
|
||||
os._exit(0) ### 子进程在这里退出
|
||||
else: ### 父进程
|
||||
client_connection.close() ### 关闭父进程中的客户端连接对象,并循环执行
|
||||
|
||||
if __name__ == '__main__':
|
||||
serve_forever()
|
||||
@ -386,13 +384,13 @@ if __name__ == '__main__':
|
||||
$ python webserver3c.py
|
||||
```
|
||||
|
||||
然后,像我们之前测试轮询服务器那样,运行两个 `curl` 命令,来看看这次的效果。现在你可以看到,即使子进程在处理客户端请求后会休眠 60 秒,但它并不会影响其它客户端连接,因为他们都是由完全独立的进程来处理的。你应该看到你的 `curl` 命令立即输出了“Hello, World!”然后挂起 60 秒。你可以按照你的想法运行尽可能多的 `curl` 命令(好吧,并不能运行特别特别多 ^_^),所有的命令都会立刻输出来自服务器的响应“Hello, World!”,并不会出现任何可被察觉到的延迟行为。试试看吧。
|
||||
然后,像我们之前测试轮询服务器那样,运行两个 `curl` 命令,来看看这次的效果。现在你可以看到,即使子进程在处理客户端请求后会休眠 60 秒,但它并不会影响其它客户端连接,因为他们都是由完全独立的进程来处理的。你应该看到你的 `curl` 命令立即输出了“Hello, World!”然后挂起 60 秒。你可以按照你的想法运行尽可能多的 `curl` 命令(好吧,并不能运行特别特别多 `^_^`),所有的命令都会立刻输出来自服务器的响应 “Hello, World!”,并不会出现任何可被察觉到的延迟行为。试试看吧。
|
||||
|
||||
如果你要理解 `fork()`,那最重要的一点是:你调用了它一次,但是它会返回两次:一次在父进程中,另一次是在子进程中。当你创建了一个新进程,那么 `fork()` 在子进程中的返回值是 0。如果是在父进程中,那 `fork()` 函数会返回子进程的 PID。
|
||||
如果你要理解 `fork()`,那最重要的一点是:**你调用了它一次,但是它会返回两次** —— 一次在父进程中,另一次是在子进程中。当你创建了一个新进程,那么 `fork()` 在子进程中的返回值是 0。如果是在父进程中,那 `fork()` 函数会返回子进程的 PID。
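下面是一个可以直接运行的最小演示(假设在 *nix 系统上的 Python 环境中运行,仅用于观察 `fork()` 的两次返回):

```
import os

pid = os.fork()
if pid == 0:                                # 返回值为 0:这里是子进程
    print('child:  pid = %s' % os.getpid())
    os._exit(0)
else:                                       # 返回值为子进程的 PID:这里是父进程
    print('parent: child pid = %s' % pid)
    os.wait()                               # 等待子进程,避免留下僵尸进程
```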
|
||||
|
||||

|
||||
|
||||
我依然记得在第一次看到它并尝试使用 `fork()` 的时候,我是多么的入迷。它在我眼里就像是魔法一样。这就好像我在读一段顺序执行的代码,然后“砰”地一声,代码变成了两份,然后出现了两个实体,同时并行地运行相同的代码。讲真,那个时候我觉得它真的跟魔法一样神奇。
|
||||
我依然记得在第一次看到它并尝试使用 `fork()` 的时候,我是多么的入迷。它在我眼里就像是魔法一样。这就好像我在读一段顺序执行的代码,然后“砰!”地一声,代码变成了两份,然后出现了两个实体,同时并行地运行相同的代码。讲真,那个时候我觉得它真的跟魔法一样神奇。
|
||||
|
||||
当父进程创建出一个新的子进程时,子进程会复制从父进程中复制一份文件描述符:
|
||||
|
||||
@ -401,38 +399,39 @@ $ python webserver3c.py
|
||||
你可能注意到,在上面的代码中,父进程关闭了客户端连接:
|
||||
|
||||
```
|
||||
else: # parent
|
||||
else: ### parent
|
||||
client_connection.close() # close parent copy and loop over
|
||||
```
|
||||
|
||||
不过,既然父进程关闭了这个套接字,那为什么子进程仍然能够从来自客户端的套接字中读取数据呢?答案就在上面的图片中。内核会使用描述符引用计数器来决定是否要关闭一个套接字。当你的服务器创建一个子进程时,子进程会复制父进程的所有文件描述符,内核中改描述符的引用计数也会增加。如果只有一个父进程及一个子进程,那客户端套接字的文件描述符引用数应为 2;当父进程关闭客户端连接的套接字时,内核只会减少它的引用计数,将其变为 1,但这仍然不会使内核关闭该套接字。子进程也关闭了父进程中 `listen_socket` 的复制实体,因为子进程不需要关注新的客户端连接,而只需要处理已建立的客户端连接中的请求。
|
||||
不过,既然父进程关闭了这个套接字,那为什么子进程仍然能够从来自客户端的套接字中读取数据呢?答案就在上面的图片中。内核会使用描述符引用计数器来决定是否要关闭一个套接字。当你的服务器创建一个子进程时,子进程会复制父进程的所有文件描述符,内核中该描述符的引用计数也会增加。如果只有一个父进程及一个子进程,那客户端套接字的文件描述符引用数应为 2;当父进程关闭客户端连接的套接字时,内核只会减少它的引用计数,将其变为 1,但这仍然不会使内核关闭该套接字。子进程也关闭了父进程中 `listen_socket` 的复制实体,因为子进程不需要关注新的客户端连接,而只需要处理已建立的客户端连接中的请求。
|
||||
|
||||
```
|
||||
listen_socket.close() # 关闭子进程中的复制实体
|
||||
listen_socket.close() ### 关闭子进程中的复制实体
|
||||
```
|
||||
|
||||
我们将会在后文中讨论,如果你不关闭那些重复的描述符,会发生什么。
|
||||
|
||||
你可以从你的并发服务器源码看到,父进程的主要职责为:接受一个新的客户端连接,复制出一个子进程来处理这个连接,然后继续循环来接受另外的客户端连接,仅此而已。服务器父进程并不会处理客户端连接——子进程才会做这件事。
|
||||
你可以从你的并发服务器源码中看到,父进程的主要职责为:接受一个新的客户端连接,复制出一个子进程来处理这个连接,然后继续循环来接受另外的客户端连接,仅此而已。服务器父进程并不会处理客户端连接——子进程才会做这件事。
|
||||
|
||||
打个岔:当我们说两个事件并发执行时,我们在说什么?
|
||||
A little aside. What does it mean when we say that two events are concurrent?
|
||||
打个岔:当我们说两个事件并发执行时,我们所要表达的意思是什么?
|
||||
|
||||

|
||||
|
||||
当我们说“两个事件并发执行”时,它通常意味着这两个事件同时发生。简单来讲,这个定义没问题,但你应该记住它的严格定义:
|
||||
|
||||
> 如果你阅读代码时,无法判断两个事件的发生顺序,那这两个事件就是并发执行的。[2][5]
|
||||
> 如果你不能在代码中判断两个事件的发生顺序,那这两个事件就是并发执行的。(引自[《信号系统简明手册 (第二版): 并发控制深入浅出及常见错误》][5])
|
||||
|
||||
好的,现在你又该回顾一下你刚刚学过的知识点了。
|
||||
|
||||

|
||||
|
||||
- 在 Unix 中,编写一个并发服务器的最简单的方式——使用 `fork()` 系统调用;
|
||||
- 当一个进程复制出另一个进程时,它会变成刚刚复制出的进程的父进程;
|
||||
- 当一个进程分叉(`fork`)出另一个进程时,它会变成刚刚分叉出的进程的父进程;
|
||||
- 在进行 `fork` 调用后,父进程和子进程共享相同的文件描述符;
|
||||
- 系统内核通过描述符引用计数来决定是否要关闭该描述符对应的文件或套接字;
|
||||
- 服务器父进程的主要职责:现在它做的只是从客户端接受一个新的连接,复制出子进程来处理这个客户端连接,然后开始下一轮循环,去接收新的客户端连接。
|
||||
- 系统内核通过描述符的引用计数来决定是否要关闭该描述符对应的文件或套接字;
|
||||
- 服务器父进程的主要职责:现在它做的只是从客户端接受一个新的连接,分叉出子进程来处理这个客户端连接,然后开始下一轮循环,去接收新的客户端连接。
|
||||
|
||||
### 进程分叉后不关闭重复的套接字会发生什么?
|
||||
|
||||
我们来看看,如果我们不在父进程与子进程中关闭重复的套接字描述符会发生什么。下面是刚才的并发服务器代码的修改版本,这段代码(`webserver3d.py` 中,服务器不会关闭重复的描述符):
|
||||
|
||||
@ -470,15 +469,15 @@ def serve_forever():
|
||||
clients = []
|
||||
while True:
|
||||
client_connection, client_address = listen_socket.accept()
|
||||
# 将引用存储起来,否则在下一轮循环时,他们会被垃圾回收机制销毁
|
||||
### 将引用存储起来,否则在下一轮循环时,他们会被垃圾回收机制销毁
|
||||
clients.append(client_connection)
|
||||
pid = os.fork()
|
||||
if pid == 0: # 子进程
|
||||
listen_socket.close() # 关闭子进程中多余的套接字
|
||||
if pid == 0: ### 子进程
|
||||
listen_socket.close() ### 关闭子进程中多余的套接字
|
||||
handle_request(client_connection)
|
||||
client_connection.close()
|
||||
os._exit(0) # 子进程在这里结束
|
||||
else: # 父进程
|
||||
os._exit(0) ### 子进程在这里结束
|
||||
else: ### 父进程
|
||||
# client_connection.close()
|
||||
print(len(clients))
|
||||
|
||||
@ -503,7 +502,7 @@ Hello, World!
|
||||
|
||||

|
||||
|
||||
所以,为什么 `curl` 不终止呢?原因就在于多余的文件描述符。当子进程关闭客户端连接时,系统内核会减少客户端套接字的引用计数,将其变为 1。服务器子进程退出了,但客户端套接字并没有被内核关闭,因为该套接字的描述符引用计数并没有变为 0,所以,这就导致了连接终止包(在 TCP/IP 协议中称作 `FIN`)不会被发送到客户端,所以客户端会一直保持连接。这里就会出现另一个问题:如果你的服务器在长时间运行,并且不关闭重复的文件描述符,那么可用的文件描述符会被消耗殆尽:
|
||||
所以,为什么 `curl` 不终止呢?原因就在于文件描述符的副本。当子进程关闭客户端连接时,系统内核会减少客户端套接字的引用计数,将其变为 1。服务器子进程退出了,但客户端套接字并没有被内核关闭,因为该套接字的描述符引用计数并没有变为 0,所以,这就导致了连接终止包(在 TCP/IP 协议中称作 `FIN`)不会被发送到客户端,所以客户端会一直保持连接。这里也会出现另一个问题:如果你的服务器长时间运行,并且不关闭文件描述符的副本,那么可用的文件描述符会被消耗殆尽:
|
||||
|
||||

|
||||
|
||||
@ -529,7 +528,7 @@ virtual memory (kbytes, -v) unlimited
|
||||
file locks (-x) unlimited
|
||||
```
|
||||
|
||||
你可以从上面的结果看到,在我的 Ubuntu box 中,系统为我的服务器进程分配的最大可用文件描述符(文件打开)数为 1024。
|
||||
你可以从上面的结果看到,在我的 Ubuntu 机器中,系统为我的服务器进程分配的最大可用文件描述符(文件打开)数为 1024。
|
||||
|
||||
现在我们来看一看,如果你的服务器不关闭重复的描述符,它会如何消耗可用的文件描述符。在一个已有的或新建的终端窗口中,将你的服务器进程的最大可用文件描述符设为 256:
|
||||
|
||||
@ -607,15 +606,18 @@ if __name__ == '__main__':
|
||||
$ python client3.py --max-clients=300
|
||||
```
|
||||
|
||||
过一会,你的服务器就该爆炸了。这是我的环境中出现的异常截图:
|
||||
过一会,你的服务器进程就该爆了。这是我的环境中出现的异常截图:
|
||||
|
||||

|
||||
|
||||
这个例子很明显——你的服务器应该关闭重复的描述符。但是,即使你关闭了多余的描述符,你依然没有摆脱险境,因为你的服务器还有一个问题,这个问题在于“僵尸”!
|
||||
这个例子很明显——你的服务器应该关闭描述符副本。
|
||||
|
||||
#### 僵尸进程
|
||||
|
||||
但是,即使你关闭了描述符副本,你依然没有摆脱险境,因为你的服务器还有一个问题,这个问题在于“僵尸(zombies)”!
|
||||
|
||||

|
||||
|
||||
|
||||
没错,这个服务器代码确实在制造僵尸进程。我们来看看怎么回事。重新运行你的服务器:
|
||||
|
||||
```
|
||||
@ -636,13 +638,13 @@ vagrant 9099 0.0 1.2 31804 6256 pts/0 S+ 16:33 0:00 python webserve
|
||||
vagrant 9102 0.0 0.0 0 0 pts/0 Z+ 16:33 0:00 [python] <defunct>
|
||||
```
|
||||
|
||||
你看到第二行中,pid 为 9102,状态为 Z+,名字里面有个 `<defunct>` 的进程了吗?那就是我们的僵尸进程。这个僵尸进程的问题在于:你无法将它杀掉。
|
||||
你看到第二行中,pid 为 9102,状态为 `Z+`,名字里面有个 `<defunct>` 的进程了吗?那就是我们的僵尸进程。这个僵尸进程的问题在于:你无法将它杀掉!
|
||||
|
||||

|
||||
|
||||
就算你尝试使用 `kill -9` 来杀死僵尸进程,它们仍旧会存活。自己试试看,看看结果。
|
||||
|
||||
这个僵尸到底是什么,为什么我们的服务器会造出它们呢?一个僵尸进程是一个已经结束的进程,但它的父进程并没有等待它结束,并且也没有收到它的终结状态。如果一个进程在父进程退出之前退出,系统内核会把它变为一个僵尸进程,存储它的部分信息,以便父进程读取。内核保存的进程信息通常包括进程 ID,进程终止状态,以及进程的资源占用情况。OK,所以僵尸进程确实有存在的意义,但如果服务器不管这些僵尸进程,你的系统调用将会被阻塞。我们来看看这个要如何发生。首先,关闭你的服务器;然后,在一个新的终端窗口中,使用 `ulimit` 命令将最大用户进程数设为 400(同时,要确保你的最大可用描述符数大于这个数字,我们在这里设为 500):
|
||||
这个僵尸到底是什么,为什么我们的服务器会造出它们呢?一个僵尸进程(zombie)是一个已经结束的进程,但它的父进程并没有等待(`waited`)它结束,并且也没有收到它的终结状态。如果一个进程在父进程退出之前退出,系统内核会把它变为一个僵尸进程,存储它的部分信息,以便父进程读取。内核保存的进程信息通常包括进程 ID、进程终止状态,以及进程的资源占用情况。OK,所以僵尸进程确实有存在的意义,但如果服务器不管这些僵尸进程,你的系统将会被壅塞。我们来看看这个会如何发生。首先,关闭你运行的服务器;然后,在一个新的终端窗口中,使用 `ulimit` 命令将最大用户进程数设为 400(同时,要确保你的最大可用描述符数大于这个数字,我们在这里设为 500):
|
||||
|
||||
```
|
||||
$ ulimit -u 400
|
||||
@ -661,33 +663,35 @@ $ python webserver3d.py
|
||||
$ python client3.py --max-clients=500
|
||||
```
|
||||
|
||||
然后,过一会,你的服务器应该会再次爆炸,它会在创建新进程时抛出一个 `OSError: 资源暂时不可用` 异常。但它并没有达到系统允许的最大进程数。这是我的环境中输出的异常信息截图:
|
||||
然后,过一会,你的服务器进程应该会再次爆了,它会在创建新进程时抛出一个 `OSError: 资源暂时不可用` 的异常。但它并没有达到系统允许的最大进程数。这是我的环境中输出的异常信息截图:
|
||||
|
||||

|
||||
|
||||
你可以看到,如果服务器不管僵尸进程,它们会引发问题。我会简单探讨一下僵尸进程问题的解决方案。
|
||||
你可以看到,如果服务器不管僵尸进程,它们会引发问题。接下来我会简单探讨一下僵尸进程问题的解决方案。
|
||||
|
||||
我们来回顾一下你刚刚掌握的知识点:
|
||||
|
||||

|
||||
|
||||
- 如果你不关闭重复的描述符,客户端就不会在请求处理完成后终止,因为客户端连接没有被关闭;
|
||||
- 如果你不关闭重复的描述符,长久运行的服务器最终会把可用的文件描述符(最大文件打开数)消耗殆尽;
|
||||
- 当你创建一个新进程,而父进程不等待子进程,也不在子进程结束后收集它的终止状态,它会变为一个僵尸进程;
|
||||
- 僵尸通常都会吃东西,在我们的例子中,僵尸进程会占用资源。如果你的服务器不管僵尸进程,它最终会消耗掉所有的可用进程(最大用户进程数);
|
||||
- 你不能杀死僵尸进程,你需要等待它。
|
||||
- 如果你不关闭文件描述符副本,客户端就不会在请求处理完成后终止,因为客户端连接没有被关闭;
|
||||
- 如果你不关闭文件描述符副本,长久运行的服务器最终会把可用的文件描述符(最大文件打开数)消耗殆尽;
|
||||
- 当你创建一个新进程,而父进程不等待(`wait`)子进程,也不在子进程结束后收集它的终止状态,它会变为一个僵尸进程;
|
||||
- 僵尸通常都会吃东西,在我们的例子中,僵尸进程会吃掉资源。如果你的服务器不管僵尸进程,它最终会消耗掉所有的可用进程(最大用户进程数);
|
||||
- 你不能杀死(`kill`)僵尸进程,你需要等待(`wait`)它。
|
||||
|
||||
所以,你需要做什么来处理僵尸进程呢?你需要修改你的服务器代码,来等待僵尸进程,并收集它们的终止信息。你可以在代码中使用系统调用 `wait` 来完成这个任务。不幸的是,这个方法里理想目标还很远,因为在没有终止的子进程存在的情况下调用 `wait` 会导致程序阻塞,这会阻碍你的服务器处理新的客户端连接请求。那么,我们有其他选择吗?嗯,有的,其中一个解决方案需要结合信号处理以及 `wait` 系统调用。
|
||||
### 如何处理僵尸进程?
|
||||
|
||||
所以,你需要做什么来处理僵尸进程呢?你需要修改你的服务器代码,来等待(`wait`)僵尸进程,并收集它们的终止信息。你可以在代码中使用系统调用 `wait` 来完成这个任务。不幸的是,这个方法离理想目标还很远,因为在没有终止的子进程存在的情况下调用 `wait` 会导致服务器进程阻塞,这会阻碍你的服务器处理新的客户端连接请求。那么,我们有其他选择吗?嗯,有的,其中一个解决方案需要结合信号处理以及 `wait` 系统调用。
|
||||
|
||||

|
||||
|
||||
这是它的工作流程。当一个子进程退出时,内核会发送 `SIGCHLD` 信号。父进程可以设置一个信号处理器,它可以异步响应 `SIGCHLD` 信号,并在信号响应函数中等待子进程收集终止信息,从而阻止了僵尸进程的存在。
|
||||
这是它的工作流程。当一个子进程退出时,内核会发送 `SIGCHLD` 信号。父进程可以设置一个信号处理器,它可以异步响应 `SIGCHLD` 信号,并在信号响应函数中等待(`wait`)子进程收集终止信息,从而阻止了僵尸进程的存在。
|
||||
|
||||

|
||||
|
||||
顺便,异步事件意味着父进程无法提前知道事件的发生时间。
|
||||
顺便说一下,异步事件意味着父进程无法提前知道事件的发生时间。
|
||||
|
||||
修改你的服务器代码,设置一个 `SIGCHLD` 信号处理器,在信号处理器中等待终止的子进程。修改后的代码如下(webserver3e.py):
|
||||
修改你的服务器代码,设置一个 `SIGCHLD` 信号处理器,在信号处理器中等待(`wait`)终止的子进程。修改后的代码如下(webserver3e.py):
|
||||
|
||||
```
|
||||
#######################################################
|
||||
@ -722,7 +726,7 @@ HTTP/1.1 200 OK
|
||||
Hello, World!
|
||||
"""
|
||||
client_connection.sendall(http_response)
|
||||
# 挂起进程,来允许父进程完成循环,并在 "accept" 处阻塞
|
||||
### 挂起进程,来允许父进程完成循环,并在 "accept" 处阻塞
|
||||
time.sleep(3)
|
||||
|
||||
|
||||
@ -738,12 +742,12 @@ def serve_forever():
|
||||
while True:
|
||||
client_connection, client_address = listen_socket.accept()
|
||||
pid = os.fork()
|
||||
if pid == 0: # 子进程
|
||||
listen_socket.close() # 关闭子进程中多余的套接字
|
||||
if pid == 0: ### 子进程
|
||||
listen_socket.close() ### 关闭子进程中多余的套接字
|
||||
handle_request(client_connection)
|
||||
client_connection.close()
|
||||
os._exit(0)
|
||||
else: # 父进程
|
||||
else: ### 父进程
|
||||
client_connection.close()
|
||||
|
||||
if __name__ == '__main__':
|
||||
@ -766,7 +770,7 @@ $ curl http://localhost:8888/hello
|
||||
|
||||

|
||||
|
||||
刚刚发生了什么?`accept` 调用失败了,错误信息为 `EINTR`
|
||||
刚刚发生了什么?`accept` 调用失败了,错误信息为 `EINTR`。
|
||||
|
||||

|
||||
|
||||
@ -822,20 +826,20 @@ def serve_forever():
|
||||
client_connection, client_address = listen_socket.accept()
|
||||
except IOError as e:
|
||||
code, msg = e.args
|
||||
# 若 'accept' 被打断,那么重启它
|
||||
### 若 'accept' 被打断,那么重启它
|
||||
if code == errno.EINTR:
|
||||
continue
|
||||
else:
|
||||
raise
|
||||
|
||||
pid = os.fork()
|
||||
if pid == 0: # 子进程
|
||||
listen_socket.close() # 关闭子进程中多余的描述符
|
||||
if pid == 0: ### 子进程
|
||||
listen_socket.close() ### 关闭子进程中多余的描述符
|
||||
handle_request(client_connection)
|
||||
client_connection.close()
|
||||
os._exit(0)
|
||||
else: # 父进程
|
||||
client_connection.close() # 关闭父进程中多余的描述符,继续下一轮循环
|
||||
else: ### 父进程
|
||||
client_connection.close() ### 关闭父进程中多余的描述符,继续下一轮循环
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
@ -854,7 +858,7 @@ $ python webserver3f.py
|
||||
$ curl http://localhost:8888/hello
|
||||
```
|
||||
|
||||
看到了吗?没有 EINTR 异常出现了。现在检查一下,确保没有僵尸进程存活,调用 `wait` 函数的 `SIGCHLD` 信号处理器能够正常处理被终止的子进程。我们只需使用 `ps` 命令,然后看看现在没有处于 Z+ 状态(或名字包含 `<defunct>` )的 Python 进程就好了。很棒!僵尸进程没有了,我们很安心。
|
||||
看到了吗?没有 EINTR 异常出现了。现在检查一下,确保没有僵尸进程存活,调用 `wait` 函数的 `SIGCHLD` 信号处理器能够正常处理被终止的子进程。我们只需使用 `ps` 命令,然后看看现在没有处于 `Z+` 状态(或名字包含 `<defunct>` )的 Python 进程就好了。很棒!僵尸进程没有了,我们很安心。
|
||||
|
||||

|
||||
|
||||
@ -862,6 +866,8 @@ $ curl http://localhost:8888/hello
|
||||
- 使用 `SIGCHLD` 信号处理器可以异步地等待子进程终止,并收集其终止状态;
|
||||
- 当使用事件处理器时,你需要牢记,系统调用可能会被打断,所以你需要处理这种情况发生时带来的异常。
|
||||
|
||||
#### 正确处理 SIGCHLD 信号
|
||||
|
||||
好的,一切顺利。是不是没问题了?额,几乎是。重新尝试运行 `webserver3f.py` 但我们这次不会只发送一个请求,而是同步创建 128 个连接:
|
||||
|
||||
```
|
||||
@ -882,7 +888,7 @@ $ ps auxw | grep -i python | grep -v grep
|
||||
|
||||

|
||||
|
||||
这个问题的解决方案依然是设置 `SIGCHLD` 事件处理器。但我们这次将会用 `WNOHANG` 参数循环调用 `waitpid`,来保证所有处于终止状态的子进程都会被处理。下面是修改后的代码,`webserver3g.py`:
|
||||
这个问题的解决方案依然是设置 `SIGCHLD` 事件处理器。但我们这次将会用 `WNOHANG` 参数循环调用 `waitpid` 来替代 `wait`,以保证所有处于终止状态的子进程都会被处理。下面是修改后的代码,`webserver3g.py`:
|
||||
|
||||
```
|
||||
#######################################################
|
||||
@ -904,13 +910,13 @@ def grim_reaper(signum, frame):
|
||||
while True:
|
||||
try:
|
||||
pid, status = os.waitpid(
|
||||
-1, # 等待所有子进程
|
||||
os.WNOHANG # 无终止进程时,不阻塞进程,并抛出 EWOULDBLOCK 错误
|
||||
-1, ### 等待所有子进程
|
||||
os.WNOHANG ### 无终止进程时,不阻塞进程,并抛出 EWOULDBLOCK 错误
|
||||
)
|
||||
except OSError:
|
||||
return
|
||||
|
||||
if pid == 0: # 没有僵尸进程存在了
|
||||
if pid == 0: ### 没有僵尸进程存在了
|
||||
return
|
||||
|
||||
|
||||
@ -939,20 +945,20 @@ def serve_forever():
|
||||
client_connection, client_address = listen_socket.accept()
|
||||
except IOError as e:
|
||||
code, msg = e.args
|
||||
# 若 'accept' 被打断,那么重启它
|
||||
### 若 'accept' 被打断,那么重启它
|
||||
if code == errno.EINTR:
|
||||
continue
|
||||
else:
|
||||
raise
|
||||
|
||||
pid = os.fork()
|
||||
if pid == 0: # 子进程
|
||||
listen_socket.close() # 关闭子进程中多余的描述符
|
||||
if pid == 0: ### 子进程
|
||||
listen_socket.close() ### 关闭子进程中多余的描述符
|
||||
handle_request(client_connection)
|
||||
client_connection.close()
|
||||
os._exit(0)
|
||||
else: # 父进程
|
||||
client_connection.close() # 关闭父进程中多余的描述符,继续下一轮循环
|
||||
else: ### 父进程
|
||||
client_connection.close() ### 关闭父进程中多余的描述符,继续下一轮循环
|
||||
|
||||
if __name__ == '__main__':
|
||||
serve_forever()
|
||||
@ -970,17 +976,19 @@ $ python webserver3g.py
|
||||
$ python client3.py --max-clients 128
|
||||
```
|
||||
|
||||
现在来查看一下,确保没有僵尸进程存在。耶!没有僵尸的生活真美好 ^_^
|
||||
现在来查看一下,确保没有僵尸进程存在。耶!没有僵尸的生活真美好 `^_^`。
|
||||
|
||||

|
||||
|
||||
恭喜!你刚刚经历了一段很长的旅程,我希望你能够喜欢它。现在你拥有了自己的建议并发服务器,并且这段代码能够为你在继续研究生产级 Web 服务器的路上奠定基础。
|
||||
### 大功告成
|
||||
|
||||
我将会留一个作业:你需要将第二部分中的 WSGI 服务器升级,将它改造为一个并发服务器。你可以在[这里][12]找到更改后的代码。但是,当你实现了自己的版本之后,你才应该来看我的代码。你已经拥有了实现这个服务器所需的所有信息。所以,快去实现它吧 ^_^
|
||||
恭喜!你刚刚经历了一段很长的旅程,我希望你能够喜欢它。现在你拥有了自己的简易并发服务器,并且这段代码能够为你在继续研究生产级 Web 服务器的路上奠定基础。
|
||||
|
||||
我将会留一个作业:你需要将第二部分中的 WSGI 服务器升级,将它改造为一个并发服务器。你可以在[这里][12]找到更改后的代码。但是,当你实现了自己的版本之后,你才应该来看我的代码。你已经拥有了实现这个服务器所需的所有信息。所以,快去实现它吧 `^_^`。
|
||||
|
||||
然后要做什么呢?乔希·比林斯说过:
|
||||
|
||||
> “我们应该做一枚邮票——专注于一件事,不达目的不罢休。”
|
||||
> “就像一枚邮票一样——专注于一件事,不达目的不罢休。”
|
||||
|
||||
开始学习基本知识。回顾你已经学过的知识。然后一步一步深入。
|
||||
|
||||
@ -990,13 +998,13 @@ $ python client3.py --max-clients 128
|
||||
|
||||
下面是一份书单,我从这些书中提炼出了这篇文章所需的素材。他们能助你在我刚刚所述的几个方面中发掘出兼具深度和广度的知识。我极力推荐你们去搞到这几本书看看:从你的朋友那里借,在当地的图书馆中阅读,或者直接在亚马逊上把它买回来。下面是我的典藏秘籍:
|
||||
|
||||
1. [UNIX网络编程 (卷1):套接字联网API (第3版)][6]
|
||||
2. [UNIX环境高级编程 (第3版)][7]
|
||||
3. [Linux/UNIX系统编程手册][8]
|
||||
4. [TCP/IP详解 (卷1):协议 (第2版) (爱迪生-韦斯莱专业编程系列)][9]
|
||||
5. [信号系统简明手册 (第二版): 并发控制深入浅出及常见错误][10]. 这本书也可以从[作者的个人网站][11]中买到。
|
||||
1. [《UNIX 网络编程 卷1:套接字联网 API (第3版)》][6]
|
||||
2. [《UNIX 环境高级编程(第3版)》][7]
|
||||
3. [《Linux/UNIX 系统编程手册》][8]
|
||||
4. [《TCP/IP 详解 卷1:协议(第2版)][9]
|
||||
5. [《信号系统简明手册 (第二版): 并发控制深入浅出及常见错误》][10],这本书也可以从[作者的个人网站][11]中免费下载到。
|
||||
|
||||
顺便,我在撰写一本名为《搭个 Web 服务器:从头开始》的书。这本书讲解了如何从头开始编写一个基本的 Web 服务器,里面包含本文中没有的更多细节。订阅邮件列表,你就可以获取到这本书的最新进展,以及发布日期。
|
||||
顺便,我在撰写一本名为《搭个 Web 服务器:从头开始》的书。这本书讲解了如何从头开始编写一个基本的 Web 服务器,里面包含本文中没有的更多细节。订阅[原文下方的邮件列表][13],你就可以获取到这本书的最新进展,以及发布日期。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -1004,7 +1012,7 @@ via: https://ruslanspivak.com/lsbaws-part3/
|
||||
|
||||
作者:[Ruslan][a]
|
||||
译者:[StdioA](https://github.com/StdioA)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -1013,12 +1021,13 @@ via: https://ruslanspivak.com/lsbaws-part3/
|
||||
[1]: https://github.com/rspivak/lsbaws/blob/master/part3/
|
||||
[2]: https://github.com/rspivak/lsbaws/blob/master/part3/webserver3a.py
|
||||
[3]: https://github.com/rspivak/lsbaws/blob/master/part3/webserver3b.py
|
||||
[4]: https://ruslanspivak.com/lsbaws-part3/#fn:1
|
||||
[5]: https://ruslanspivak.com/lsbaws-part3/#fn:2
|
||||
[6]: http://www.amazon.com/gp/product/0131411551/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0131411551&linkCode=as2&tag=russblo0b-20&linkId=2F4NYRBND566JJQL
|
||||
[7]: http://www.amazon.com/gp/product/0321637739/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0321637739&linkCode=as2&tag=russblo0b-20&linkId=3ZYAKB537G6TM22J
|
||||
[8]: http://www.amazon.com/gp/product/1593272200/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1593272200&linkCode=as2&tag=russblo0b-20&linkId=CHFOMNYXN35I2MON
|
||||
[9]: http://www.amazon.com/gp/product/0321336313/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0321336313&linkCode=as2&tag=russblo0b-20&linkId=K467DRFYMXJ5RWAY
|
||||
[4]: http://www.epubit.com.cn/book/details/1692
|
||||
[5]: http://www.amazon.com/gp/product/1441418687/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1441418687&linkCode=as2&tag=russblo0b-20&linkId=QFOAWARN62OWTWUG
|
||||
[6]: http://www.epubit.com.cn/book/details/1692
|
||||
[7]: http://www.epubit.com.cn/book/details/1625
|
||||
[8]: http://www.epubit.com.cn/book/details/1432
|
||||
[9]: http://www.epubit.com.cn/book/details/4232
|
||||
[10]: http://www.amazon.com/gp/product/1441418687/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1441418687&linkCode=as2&tag=russblo0b-20&linkId=QFOAWARN62OWTWUG
|
||||
[11]: http://greenteapress.com/semaphores/
|
||||
[12]: https://github.com/rspivak/lsbaws/blob/master/part3/webserver3h.py
|
||||
[13]: https://ruslanspivak.com/lsbaws-part1/
|
published/20160812 What is copyleft.md
@ -0,0 +1,59 @@
|
||||
什么是 Copyleft ?
|
||||
=============
|
||||
|
||||
如果你在开源项目中花费了很多时间的话,你可能会见到“copyleft”这个术语(GNU 官方网站上的释义:[中文][1],[英文][2])。虽然这个术语使用得比较普遍,但是很多人却不理解它。软件许可是一个争论激烈的话题,其激烈程度至少不亚于文本编辑器之争和打包格式之争。专家们对 copyleft 的理解足以写满好几本书,但这篇文章可以作为你理解 copyleft 的起点。
|
||||
|
||||
### 什么是 copyright?
|
||||
|
||||
在我们可以理解 copyleft 之前,我们必须先介绍一下 copyright 的概念。copyleft 并不是一个脱离于 copyright 的法律框架,copyleft 存在于 copyright 规则中。那么,什么是 copyright?
|
||||
|
||||
它的准确定义随着司法权的不同而不同,但是其本质就是:作品的作者对于作品的复制(copying)(因此这个术语称之为 “copyright”:copy right)、表现等有一定的垄断性。在美国,其宪法明确地阐述了美国国会的任务就是制定版权法律来“促进科学和实用艺术的进步”。
|
||||
|
||||
不同于以往,如今版权会立刻自动附加到作品上——而且不需要注册。默认情况下,所有的权利都是保留的。也就是说,没有经过作者的允许,没有人可以重新出版、表演或者修改作品。这种“允许”就是一种许可,可能还会附加一定的条件。
|
||||
|
||||
如果希望得到对于 copyright 更彻底的介绍,Coursera 上的[教育工作者和图书管理员的著作权](https://www.coursera.org/learn/copyright-for-education)是一个非常优秀的课程。
|
||||
|
||||
### 什么是 copyleft?
|
||||
|
||||
先不要着急,在我们讨论 copyleft 是什么之前,还有一步。首先,让我们解释一下开源(open source)意味着什么。所有的开源许可协议,按照[开源倡议的定义(Open Source Inititative's definition)](https://opensource.org/osd)(规定),除其他形式外,必须以源码的形式发放。获得开源软件的任何人都有权利查看并修改源码。
|
||||
|
||||
copyleft 许可和所谓的“自由(permissive)”许可不同的地方在于,其衍生的作品也需要使用相同的 copyleft 许可。我倾向于这样来区分两者的不同:自由(permissive)许可向直接下游的开发者提供了最大的自由(包括能够在闭源项目中使用开源代码的权利),而 copyleft 许可则向最终用户提供最大的自由。
|
||||
|
||||
GNU 项目为 copyleft 提供了这个简单的定义([中文][3],[英文][4]):“规则就是当重新分发该程序时,你不可以添加限制来否认其他人对于[自由软件]的自由。(the rule that when redistributing the program, you cannot add restrictions to deny other people the central freedoms [of free software].)”这可以被认为权威的定义,因为 [GNU 通用许可证(GNU General Public License,GPL)](https://www.gnu.org/licenses/gpl.html)的各种版本的依然是最广泛使用的 copyleft 许可。
|
||||
|
||||
### 软件中的 copyleft
|
||||
|
||||
GPL 家族是最出名的 copyleft 许可,但是它们并不是唯一的。[Mozilla 公共许可协议(Mozilla Public License,MPL)](https://www.mozilla.org/en-US/MPL/)和[Eclipse 公共许可协议( Eclipse Public License,EPL)](https://www.eclipse.org/legal/epl-v10.html)也很出名。很多[其它的 copyleft 许可](https://tldrlegal.com/licenses/tags/Copyleft) 也有较少的采用。
|
||||
|
||||
就像之前章节介绍的那样,一个 copyleft 许可意味着下游的项目不可以在软件的使用上添加额外的限制。这最好用一个例子来说明。如果我写了一个名为 MyCoolProgram 的程序,并且使用 copyleft 许可来发布,你将有使用和修改它的自由。你可以发布你修改后的版本,但是你必须让你的用户拥有我给你的同样的自由。(但)如果我使用 “自由(permissive)” 许可,你将可以将它自由地合并到一个不提供源码的闭源软件中。
|
||||
|
||||
对于我的 MyCoolProgram 程序,和你必须能做什么同样重要的是你必须不能做什么。你不必用和我完全一样的许可协议,只要它们相互兼容就行(但一般的为了简单起见,下游的项目也使用相同的许可)。你不必向我贡献出你的修改,但是你这么做的话,通常被认为一个很好的形式,尤其是这些修改是 bug 修复的话。
|
||||
|
||||
### 非软件中的 copyleft
|
||||
|
||||
虽然,copyleft 的概念起始于软件世界,但是它也存在于之外的世界。“做你想做的,只要你保留其他人也有做同样的事的权力”的概念是应用于文字创作、视觉艺术等方面的知识共享署名许可([中文][5],[英文][6])的一个显著的特点(CC BY-SA 4.0 是贡献于 Opensource.com 默认的许可,也是很多开源网站,包括 [Linux.cn][7] 在内所采用的内容许可协议)。[GNU 自由文档许可证](https://www.gnu.org/licenses/fdl.html)是另一个非软件协议中 copyleft 的例子。在非软件中使用软件协议通常不被建议。
|
||||
|
||||
### 我是否需要选择一种 copyleft 许可?
|
||||
|
||||
关于项目应该使用哪一种许可,可以用(已经有了)成篇累牍的文章在阐述。我的建议是首先将许可列表缩小,以满足你的哲学信条和项目目标。GitHub 的 [choosealicense.com](http://choosealicense.com/) 是一种查找满足你的需求的许可协议的好方法。[tl;drLegal](https://tldrlegal.com/)使用平实的语言来解释了许多常见和不常见的软件许可。而且也要考虑你的项目所在的生态系统,围绕一种特定语言和技术的项目经常使用相同或者相似的许可。如果你希望你的项目可以运行的更出色,你可能需要确保你选择的许可是兼容的。
|
||||
|
||||
关于更多 copyleft 的信息,请查看 [copyleft 指南](https://copyleft.org/)。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/resources/what-is-copyleft
|
||||
|
||||
作者:[Ben Cotton][a]
|
||||
译者:[yangmingming](https://github.com/yangmingming)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/bcotton
|
||||
[1]: https://www.gnu.org/licenses/copyleft.zh-cn.html
|
||||
[2]: https://www.gnu.org/licenses/copyleft.en.html
|
||||
[3]: https://www.gnu.org/philosophy/free-sw.zh-cn.html
|
||||
[4]: https://www.gnu.org/philosophy/free-sw.en.html
|
||||
[5]: https://creativecommons.org/licenses/by-sa/4.0/deed.zh
|
||||
[6]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[7]: https://linux.cn/
|
@ -0,0 +1,66 @@
|
||||
零配置部署 React
|
||||
========================
|
||||
|
||||
你想使用 [React][1] 来构建应用吗?“[入门][2]”是很容易的,可是接下来呢?
|
||||
|
||||
React 是一个构建用户界面的库,它只是组成一个应用的一部分。应用还有其他的部分——样式、路由、npm 模块、ES6 代码、打包等等——这也是使用它的开发者们疲于应付、不断流失的原因,这种现象被称为 [JavaScript 疲劳][3]。尽管存在这种复杂性,使用 React 的用户依旧在持续增长。
|
||||
|
||||
社区应对这一挑战的方法是共享[模版文件][4]。这些模版文件展示出开发者们架构选择的多样性。官方的“开始入门”似乎离一个实际可用的应用程序相去甚远。
|
||||
|
||||
### 新的,零配置体验
|
||||
|
||||
受开发者来自 [Ember.js][5] 和 [Elm][6] 的经验启发,Facebook 的人们想要提供一个简单、直接的方式。他们发明了一个[新的开发 React 应用的方法][10] :`create-react-app`。在初始的公开版发布的三个星期以来,它已经受到了极大的社区关注(超过 8000 个 GitHub 粉丝)和支持(许多的拉取请求)。
|
||||
|
||||
`create-react-app` 是不同于许多过去使用模板和开发启动工具包的尝试。它的目标是零配置的[惯例-优于-配置][7],使开发者关注于他们的应用的不同之处。
|
||||
|
||||
零配置的一个强大的附带效应是,这些工具可以在后台不断演进。零配置奠定了工具生态系统的基础,它所创造的自动化和开发乐趣远远超越了 React 本身。
|
||||
|
||||
### 将零配置部署到 Heroku 上
|
||||
|
||||
多亏了 create-react-app 中打下的零配置基础,零配置的目标看起来快要达到了。因为这些新的应用都使用一个公共的、默认的架构,构建的过程可以被自动化,同时可以使用智能的默认项来配置。因此,[我们创造这个社区构建包来体验在 Heroku 零配置的过程][8]。
|
||||
|
||||
#### 在两分钟内创造和发布 React 应用
|
||||
|
||||
你可以免费在 Heroku 上开始构建 React 应用。
|
||||
```
|
||||
npm install -g create-react-app
|
||||
create-react-app my-app
|
||||
cd my-app
|
||||
git init
|
||||
heroku create -b https://github.com/mars/create-react-app-buildpack.git
|
||||
git add .
|
||||
git commit -m "react-create-app on Heroku"
|
||||
git push heroku master
|
||||
heroku open
|
||||
```
|
||||
[使用构建包文档][9]亲自试试吧。
|
||||
|
||||
### 从零配置出发
|
||||
|
||||
create-react-app 还非常新(目前版本是 0.2),同时因为它的目标是简洁的开发者体验,所以并不支持一些更高级的使用场景(有些可能永远不会支持)。例如,它不支持服务端渲染或者自定义打包。
|
||||
|
||||
为了支持更精细的控制,create-react-app 提供了 `npm run eject` 命令。eject 会把所有的工具配置(配置文件和 package.json 中的依赖库)释放到应用所在的目录中,这样你就可以随心所欲地定制。一旦弹出(eject)之后,你所做的改变可能就需要选用某个特定的 Node.js 构建包或静态构建包来部署。这类工程上的改变最好总是通过分支/拉取请求来进行,这样它们可以被轻易撤销。Heroku 的预览应用非常适合用来测试部署上的改变。
|
||||
|
||||
我们将会追踪 create-react-app 的进度,当它们可用时,同时适配构建包来支持更多的高级使用情况。发布万岁!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.heroku.com/deploying-react-with-zero-configuration
|
||||
|
||||
作者:[Mars Hall][a]
|
||||
译者:[zky001](https://github.com/zky001)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://blog.heroku.com/deploying-react-with-zero-configuration
|
||||
[1]: https://facebook.github.io/react/
|
||||
[2]: https://facebook.github.io/react/docs/getting-started.html
|
||||
[3]: https://medium.com/@ericclemmons/javascript-fatigue-48d4011b6fc4
|
||||
[4]: https://github.com/search?q=react+boilerplate
|
||||
[5]: http://emberjs.com/
|
||||
[6]: http://elm-lang.org/
|
||||
[7]: http://rubyonrails.org/doctrine/#convention-over-configuration
|
||||
[8]: https://github.com/mars/create-react-app-buildpack
|
||||
[9]: https://github.com/mars/create-react-app-buildpack#usage
|
||||
[10]: https://github.com/facebookincubator/create-react-app
|
@ -0,0 +1,122 @@
|
||||
揭秘 Twitter 背后的基础设施:效率与优化篇
|
||||
===========
|
||||
|
||||
过去我们曾经发布过一些关于 [Finagle](https://twitter.github.io/finagle/) 、[Manhattan](https://blog.twitter.com/2014/manhattan-our-real-time-multi-tenant-distributed-database-for-twitter-scale) 这些项目的文章,还写过一些针对大型事件活动的[架构优化](https://blog.twitter.com/2013/new-tweets-per-second-record-and-how)的文章,例如天空之城、超级碗、2014 世界杯、全球新年夜庆祝活动等。在这篇基础设施系列文章中,我主要聚焦于 Twitter 的一些关键设施和组件。我也会写一些我们在系统的扩展性、可靠性、效率方面的做过的改进,例如我们基础设施的历史,遇到过的挑战,学到的教训,做过的升级,以及我们现在前进的方向等等。
|
||||
|
||||
> 天空之城:2013 年 8 月 2 日,宫崎骏的《天空之城(Castle in the Sky)》在 NTV 迎来其第 14 次电视重播,剧情发展到高潮之时,Twitter 的 TPS(Tweets Per Second)也被推上了新的高度——143,199 TPS,是平均值的 25 倍,这个记录保持至今。-- LCTT 译注
|
||||
|
||||
### 数据中心的效率优化
|
||||
|
||||
#### 历史
|
||||
|
||||
当前 Twitter 硬件和数据中心的规模已经超过大多数公司。但达到这样的规模不是一蹴而就的,系统是随着软硬件的升级优化一步步成熟起来的,过程中我们也曾经犯过很多错误。
|
||||
|
||||
有一个时期,我们的系统故障不断。软件问题、硬件问题,甚至底层设备问题不断爆发,常常导致系统运营中断。故障可能出现在各个方面,必须综合考虑才能确定其风险和受影响的服务。随着 Twitter 在客户、服务、媒体上的影响力不断扩大,构建一个高效、可靠的系统来提供服务成为我们的战略诉求。
|
||||
|
||||
> Twitter系统故障的界面被称为失败鲸(Fail Whale),如下图 -- LCTT 译注
|
||||
> 
|
||||
|
||||
#### 挑战
|
||||
|
||||
一开始,我们的软件是直接安装在服务器,这意味着软件可靠性依赖硬件,电源、网络以及其他的环境因素都是威胁。这种情况下,如果要增加容错能力,就需要统筹考虑这些互不关联的物理设备因素及在上面运行的服务。
|
||||
|
||||
最早采购数据中心方案的时候,我们都还是菜鸟,对于站点选择、运营和设计都非常不专业。我们先直接托管主机,业务增长后我们改用租赁机房。早期遇到的问题主要是因为设备故障、数据中心设计问题、维护问题以及人为操作失误。我们也在持续迭代我们的硬件设计,从而增强硬件和数据中心的容错性。
|
||||
|
||||
服务中断的原因有很多,其中硬件故障常发生在服务器、机架交换机、核心交换机这地方。举一个我们曾经犯过的错误,硬件团队最初在设计服务器的时候,认为双路电源对减少供电问题的意义不大 -- 他们真的就移除了一块电源。然而数据中心一般给机架提供两路供电来提高冗余性,防止电网故障传导到服务器,而这需要两块电源。最终我们不得不在机架上增加了一个 ATS 单元(AC transfer switch 交流切换开关)来接入第二路供电。
|
||||
|
||||
提高系统的可靠性靠的就是这样的改进,给网络、供电甚至机房增加冗余,从而将影响控制到最小范围。
|
||||
|
||||
#### 我们学到的教训以及技术的升级、迁移和选型
|
||||
|
||||
我们学到的第一个教训就是要先建模,将可能出故障的地方(例如建筑的供电和冷却系统、硬件、光纤网络等)和运行在上面的服务之间的依赖关系弄清楚,这样才能更好地分析,从而优化设计提升容错能力。
|
||||
|
||||
我们增加了更多的数据中心提升地理容灾能力,减少自然灾害的影响。而且这种站点隔离也降低了软件的风险,减少了例如软件部署升级和系统故障的风险。这种多活的数据中心架构提供了代码灰度发布(staged code deployment)的能力,减少代码首次上线时候的影响。
|
||||
|
||||
我们设计新硬件使之能够在更高温度下正常运行,数据中心的能源效率因此有所提升。
|
||||
|
||||
#### 下一步工作
|
||||
|
||||
随着公司的战略发展和运营增长,我们在不影响我们的最终用户的前提下,持续不断改进我们的数据中心。下一步工作主要是在当前能耗和硬件的基础上,通过维护和优化来提升效率。
|
||||
|
||||
### 硬件的效率优化
|
||||
|
||||
#### 历史和挑战
|
||||
|
||||
我们的硬件工程师团队刚成立的时候只能测试市面上现有硬件,而现在我们能自己定制硬件以节省成本并提升效率。
|
||||
|
||||
Twitter 是一个很大的公司,它对硬件的要求对任何团队来说都是一个不小的挑战。为了满足整个公司的需求,我们的首要工作是能检测并保证购买的硬件的品质。团队重点关注的是性能和可靠性这两部分。对于硬件我们会做系统性的测试来保证其性能可预测,保证尽量不引入新的问题。
|
||||
|
||||
随着我们一些关键组件的负荷越来越大(如 Mesos、Hadoop、Manhattan、MySQL 等),市面上的产品已经无法满足我们的需求。同时供应商提供的一些高级服务器功能,例如 Raid 管理或者电源热切换等,可靠性提升很小,反而会拖累系统性能而且价格高昂,例如一些 Raid 控制器价格高达系统总报价的三分之一,还拖累了 SSD 的性能。
|
||||
|
||||
那时,我们也是 MySQL 数据库的一个大型用户。SAS(Serial Attached SCSI,串行连接 SCSI )设备的供应和性能都有很大的问题。我们大量使用 1U 规格的服务器,它的磁盘和回写缓存一起也只能支撑每秒 2000 次的顺序 IO。为了获得更好的效果,我们只得不断增加 CPU 核心数并加强磁盘能力。我们那时候找不到更节省成本的方案。
|
||||
|
||||
后来随着我们对硬件的需求越来越大,我们成立了一个硬件团队,从而可以自己设计更便宜、更高效的硬件。
|
||||
|
||||
#### 关键技术变更与选择
|
||||
|
||||
我们不断的优化硬件相关的技术,下面是我们采用的新技术和自研平台的时间轴。
|
||||
|
||||
- 2012 - 采用 SSD 作为我们 MySQL 和 Key-Value 数据库的主要存储。
|
||||
- 2013 - 我们开发了第一个定制版 Hadoop 工作站,它现在是我们主要的大容量存储方案。
|
||||
- 2013 - 我们定制的解决方案应用在 Mesos、TFE( Twitter Front-End )以及缓存设备上。
|
||||
- 2014 - 我们定制的 SSD Key-Value 服务器完成开发。
|
||||
- 2015 - 我们定制的数据库解决方案完成开发。
|
||||
- 2016 - 我们开发了一个 GPU 系统来做模糊推理和训练机器学习。
|
||||
|
||||
#### 学到的教训
|
||||
|
||||
硬件团队的工作本质是通过做取舍来优化 TCO(总体拥有成本),最终达到降低 CAPEX(资本支出)和 OPEX(运营支出)的目的。概括来说,服务器降成本就是:
|
||||
|
||||
1. 删除无用的功能和组件
|
||||
2. 提升利用率
|
||||
|
||||
Twitter 的设备总体来说有这四大类:存储设备、计算设备、数据库和 GPU 。 Twitter 对每一类都定义了详细的需求,让硬件工程师更针对性地设计产品,从而优化掉那些用不到或者极少用的冗余部分。例如,我们的存储设备就专门为 Hadoop 优化过,设备的购买和运营成本相比于 OEM 产品降低了 20% 。同时,这样做减法还提高了设备的性能和可靠性。同样的,对于计算设备,硬件工程师们也通过移除无用的特性获得了效率提升。
|
||||
|
||||
一个服务器可以移除的组件总是有限的,我们很快就把能移除的都扔掉了。于是我们想出了其他办法,例如在存储设备里,我们认为降低成本最好的办法是用一个节点替换多个节点,并通过 Aurora/Mesos 来管理任务负载。这就是我们现在正在做的东西。
|
||||
|
||||
对于这个我们自己新设计的服务器,首先要通过一系列的标准测试,然后会再做一系列负载测试,我们的目标是一台新设备至少能替换两台旧设备。最大的性能提升来自增加 CPU 的线程数,我们的测试结果表示新 CPU 的 单线程能力提高了 20~50% 。同时由于整个服务器的线程数增加,我们看到单线程能效提升了 25%。
|
||||
|
||||
这个新设备首次部署的时候,监控发现新设备只能替换 1.5 台旧设备,这比我们的目标低了很多。对性能数据检查后发现,我们之前对负载特性的一些假定是有问题的,而这正是我们在做性能测试需要发现的问题。
|
||||
|
||||
对此我们硬件团队开发了一个模型,用来预测在不同的硬件配置下当前 Aurora 任务的填充效率。这个模型正确的预测了新旧硬件的性能比例。模型还指出了我们一开始没有考虑到的存储需求,并因此建议我们增加 CPU 核心数。另外,它还预测,如果我们修改内存的配置,那系统的性能还会有较大提高。
|
||||
|
||||
硬件配置的改变都需要花时间去操作,所以我们的硬件工程师们就首先找出几个关键痛点。例如我们和 SRE(Site Reliability Engineer,网站可靠性工程师)团队一起调整任务顺序来降低存储需求,这种修改很简单也很有效,新设备可以代替 1.85 个旧设备了。
|
||||
|
||||
为了更好的优化效率,我们对新硬件的配置做了修改,只是扩大了内存和磁盘容量就将 CPU 利用率提高了20% ,而这只增加了非常小的成本。同时我们的硬件工程师也和合作生产厂商一起为那些服务器的最初出货调整了物料清单。后续的观察发现我们的自己的新设备实际上可以代替 2.4 台旧设备,这个超出了预定的目标。
|
||||
|
||||
### 从裸设备迁移到 mesos 集群
|
||||
|
||||
直到 2012 年为止,软件团队在 Twitter 开通一个新服务还需要自己操心硬件:配置硬件的规格需求,研究机架尺寸,开发部署脚本以及处理硬件故障。同时,系统中没有所谓的“服务发现”机制,当一个服务需要调用一个另一个服务时候,需要读取一个 YAML 配置文件,这个配置文件中有目标服务对应的主机 IP 和端口信息(预留的端口信息是由一个公共 wiki 页面维护的)。随着硬件的替换和更新,YAML 配置文件里的内容也会不断的编辑更新。在缓存层做修改意味着我们可以按小时或按天做很多次部署,每次添加少量主机并按阶段部署。我们经常遇到在部署过程中 cache 不一致导致的问题,因为有的主机在使用旧的配置有的主机在用新的。有时候一台主机的异常(例如在部署过程中它临时宕机了)会导致整个站点都无法正常工作。
|
||||
|
||||
在 2012/2013 年的时候,Twitter 开始尝试两个新事物:服务发现(来自 ZooKeeper 集群和 [Finagle](https://twitter.github.io/finagle/) 核心模块中的一个库)和 [Mesos](http://mesos.apache.org/)(包括基于 Mesos 的一个自研的计划任务框架 Aurora ,它现在也是 Apache 基金会的一个项目)。
|
||||
|
||||
服务发现功能意味着不需要再维护一个静态 YAML 主机列表了。服务或者在启动后主动注册,或者自动被 mesos 接入到一个“服务集”(就是一个 ZooKeeper 中的 znode 列表,包含角色、环境和服务名信息)中。任何想要访问这个服务的组件都只需要监控这个路径就可以实时获取到一个正在工作的服务列表。
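作为示意,下面用第三方库 kazoo 粗略演示“监控一个 ZooKeeper 路径来获取存活服务列表”的思路(路径 `/service/role/env/name` 和 ZooKeeper 地址都是假设值,并不是 Twitter 内部的真实实现):

```
from kazoo.client import KazooClient

zk = KazooClient(hosts='zk1:2181,zk2:2181')   # ZooKeeper 地址为假设值
zk.start()

SERVERSET_PATH = '/service/role/env/name'     # 假设的服务集路径

def on_change(children):
    # 每当服务集下的 znode 列表变化时,都会收到最新的存活实例列表
    print('live instances:', children)

# ChildrenWatch 会先立即回调一次,之后在子节点变化时持续回调
zk.ChildrenWatch(SERVERSET_PATH, on_change)
```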
|
||||
|
||||
现在我们通过 Mesos/Aurora ,而不是使用脚本(我们曾经是 [Capistrano](https://github.com/capistrano/capistrano) 的重度用户)来获取一个主机列表、分发代码并规划重启任务。现在软件团队如果想部署一个新服务,只需要将软件包上传到一个叫 Packer 的工具上(它是一个基于 HDFS 的服务),再在 Aurora 配置上描述文件(需要多少 CPU ,多少内存,多少个实例,启动的命令行代码),然后 Aurora 就会自动完成整个部署过程。 Aurora 先找到可用的主机,从 Packer 下载代码,注册到“服务发现”,最后启动这个服务。如果整个过程中遇到失败(硬件故障、网络中断等等), Mesos/Aurora 会自动重选一个新主机并将服务部署上去。
|
||||
|
||||
#### Twitter 的私有 PaaS 云平台
|
||||
|
||||
Mesos/Aurora 和服务发现这两个功能给我们带来了革命性的变化。虽然在接下来几年里,我们碰到了无数 bug,伤透了无数脑筋,学到了分布式系统里的无数教训,但是这套架构还是非常赞的。以前大家一直忙于处理硬件搭配和管理,而现在,大家只需要考虑如何优化业务以及需要多少系统能力就可以了。同时,我们也从根本上解决了 Twitter 之前经历过的 CPU 利用率低的问题:以前服务直接安装在服务器上,这种方式无法充分利用服务器资源,任务协调能力也很差。现在 Mesos 允许我们把多个服务打包成一个服务包,增加一个新服务只需要修改配额,再改一行配置就可以了。
|
||||
|
||||
在两年时间里,多数“无状态”服务迁移到了 Mesos 平台。一些大型且重要的服务(包括我们的用户服务和广告服务系统)是最先迁移上去的。因为它们的体量巨大,所以它们从这些服务里获得的好处也最多,这也降低了它们的服务压力。
|
||||
|
||||
我们一直在不断追求效率提升和架构优化的最佳实践。我们会定期去测试公有云的产品,和我们自己产品的 TCO 以及性能做对比。我们也拥抱公有云的服务,事实上我们现在正在使用公有云产品。最后,这个系列的下一篇将会主要聚焦于我们基础设施的体量方面。
|
||||
|
||||
特别感谢 [Jennifer Fraser][1]、[David Barr][2]、[Geoff Papilion][3]、 [Matt Singer][4]、[Lam Dong][5] 对这篇文章的贡献。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.twitter.com/2016/the-infrastructure-behind-twitter-efficiency-and-optimization
|
||||
|
||||
作者:[mazdakh][a]
|
||||
译者:[eriwoon](https://github.com/eriwoon)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://twitter.com/intent/user?screen_name=mazdakh
|
||||
[1]: https://twitter.com/jenniferfraser
|
||||
[2]: https://twitter.com/davebarr
|
||||
[3]: https://twitter.com/gpapilion
|
||||
[4]: https://twitter.com/mattbytes
|
||||
[5]: https://twitter.com/lamdong
|
published/201609/20160506 Setup honeypot in Kali Linux.md
@ -0,0 +1,85 @@
|
||||
在 Kali Linux 环境下设置蜜罐
|
||||
=========================
|
||||
|
||||
Pentbox 是一个安全套件,包含了许多可以使渗透测试工作变得简单、流程化的工具。它是用 Ruby 编写的,面向 GNU/Linux,同时也支持 Windows、MacOS 和其它任何安装有 Ruby 的系统。在这篇短文中,我们将讲解如何在 Kali Linux 环境下设置蜜罐。如果你还不知道什么是蜜罐(honeypot):“蜜罐是一种计算机安全机制,其设置用来发现、转移,或者以某种方式抵消对信息系统进行非授权使用的尝试。”
|
||||
|
||||
### 下载 Pentbox:
|
||||
|
||||
在你的终端中简单的键入下面的命令来下载 pentbox-1.8。
|
||||
|
||||
```
|
||||
root@kali:~# wget http://downloads.sourceforge.net/project/pentbox18realised/pentbox-1.8.tar.gz
|
||||
```
|
||||
|
||||

|
||||
|
||||
### 解压 pentbox 文件
|
||||
|
||||
使用如下命令解压文件:
|
||||
|
||||
```
|
||||
root@kali:~# tar -zxvf pentbox-1.8.tar.gz
|
||||
```
|
||||
|
||||

|
||||
|
||||
### 运行 pentbox 的 ruby 脚本
|
||||
|
||||
改变目录到 pentbox 文件夹:
|
||||
|
||||
```
|
||||
root@kali:~# cd pentbox-1.8/
|
||||
```
|
||||
|
||||

|
||||
|
||||
使用下面的命令来运行 pentbox:
|
||||
|
||||
```
|
||||
root@kali:~# ./pentbox.rb
|
||||
```
|
||||
|
||||

|
||||
|
||||
### 设置一个蜜罐
|
||||
|
||||
使用选项 2 (Network Tools) 然后是其中的选项 3 (Honeypot)。
|
||||
|
||||

|
||||
|
||||
完成让我们执行首次测试,选择其中的选项 1 (Fast Auto Configuration)
|
||||
|
||||

|
||||
|
||||
这样就在 80 端口上开启了一个蜜罐。打开浏览器并且打开链接 http://192.168.160.128 (这里的 192.168.160.128 是你自己的 IP 地址。)你应该会看到一个 Access denied 的报错。
|
||||
|
||||
|
||||

|
||||
|
||||
并且在你的终端应该会看到 “HONEYPOT ACTIVATED ON PORT 80” 和跟着的 “INTRUSION ATTEMPT DETECTED”。
|
||||
|
||||

|
||||
|
||||
现在,如果你在同一步选择了选项 2 (Manual Configuration), 你应该看见更多的其它选项:
|
||||
|
||||

|
||||
|
||||
执行相同的步骤但是这次选择 22 端口 (SSH 端口)。接着在你家里的路由器上做一个端口转发,将外部的 22 端口转发到这台机器的 22 端口上。或者,把这个蜜罐设置在你的云端服务器的一个 VPS 上。
|
||||
|
||||
你将会被有如此多的机器在持续不断地扫描着 SSH 端口而震惊。 你知道你接着应该干什么么? 你应该黑回它们去!桀桀桀!
|
||||
|
||||
如果视频是你的菜的话,这里有一个设置蜜罐的视频:
|
||||
|
||||
<https://youtu.be/NufOMiktplA>
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.blackmoreops.com/2016/05/06/setup-honeypot-in-kali-linux/
|
||||
|
||||
作者:[blackmoreops.com][a]
|
||||
译者:[wcnnbdk1](https://github.com/wcnnbdk1)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: blackmoreops.com
|
@ -0,0 +1,59 @@
|
||||
Linux 发行版们应该禁用 IPv4 映射的 IPv6 地址吗?
|
||||
=============================================
|
||||
|
||||
从各方面来看,互联网向 IPv6 的过渡是件很缓慢的事情。不过在最近几年,可能是由于 IPv4 地址资源的枯竭,IPv6 的使用处于[上升态势][1]。相应的,开发者也有兴趣确保软件能在 IPv4 和 IPv6 下工作。但是,正如近期 OpenBSD 邮件列表的讨论所关注的,一个使得向 IPv6 转换更加轻松的机制设计同时也可能导致网络更不安全——并且 Linux 发行版们的默认配置可能并不安全。
|
||||
|
||||
### 地址映射
|
||||
|
||||
IPv6 在很多方面看起来可能很像 IPv4,但它是一个使用不同地址空间的不同协议。服务器程序想要接受使用二者之中任意一个协议的连接,必须给两个不同的地址族分别打开一个套接字——IPv4 的 `AF_INET` 和 IPv6 的 `AF_INET6`。特别是,如果一个程序希望在主机的任意接口上同时接受这两种协议的连接,就需要创建一个绑定到全零通配符地址(`0.0.0.0`)的 `AF_INET` 套接字和一个绑定到 IPv6 等效地址(写作 `::`)的 `AF_INET6` 套接字。它必须在两个套接字上都监听连接——或者有人会这么认为。
|
||||
|
||||
多年前,在 [RFC 3493][2],IETF 指定了一个机制,程序可以使用一个单独的 IPv6 套接字工作在两个协议之上。有了一个启用这个行为的套接字,程序只需要绑定到 `::` 来在所有接口上接受使用这两个协议连接。当创建了一个 IPv4 连接到该绑定端口,源地址会像 [RFC 2373][3] 中描述的那样映射到 IPv6。所以,举个例子,一个使用了这个模式的程序会将一个 `192.168.1.1` 的传入连接看作来自 `::ffff:192.168.1.1`(这个混合的写法就是这种地址通常的写法)。程序也能通过相同的映射方法打开一个到 IPv4 地址的连接。
|
||||
|
||||
RFC 要求这个行为要默认实现,所以大多数系统这么做了。不过也有些例外,OpenBSD 就是其中之一;在那里,希望在两种协议下工作的程序能做的只能是创建两个独立的套接字。但一个在 Linux 中打开两个套接字的程序会遇到麻烦:IPv4 和 IPv6 套接字都会尝试绑定到那个 IPv4 端口,所以后绑定的那个会失败。换句话说,一个把套接字绑定到 `::` 上指定端口的程序,会同时绑定到 IPv6 `::` 和 IPv4 `0.0.0.0` 地址的那个端口上。如果程序之后尝试把一个 IPv4 套接字绑定到 `0.0.0.0` 的相同端口上,这个操作会失败,因为这个端口已经被绑定了。
|
||||
|
||||
当然有个办法可以解决这个问题;程序可以调用 `setsockopt()` 来打开 `IPV6_V6ONLY` 选项。一个打开两个套接字并且设置了 `IPV6_V6ONLY` 的程序应该可以在所有的系统间移植。
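举个例子,下面是一个用 Python 写的双套接字监听示意(假设性示例,仅演示这个套接字选项的用法),其中对 IPv6 套接字显式设置了 `IPV6_V6ONLY`,这样绑定 `::` 就不会再占用同一端口的 IPv4 地址:

```
import socket

# 可移植的双协议监听示意(假设场景):分别创建 IPv4 和 IPv6 套接字,
# 并在 IPv6 套接字上显式开启 IPV6_V6ONLY,避免它同时占用 IPv4 端口。
PORT = 8080

s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)

# 关键调用:关闭 IPv4 映射,使 :: 只接受纯 IPv6 连接
s6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)

s4.bind(("0.0.0.0", PORT))
s6.bind(("::", PORT))
s4.listen(5)
s6.listen(5)
print("同一端口上同时监听 IPv4 和 IPv6,互不冲突")
```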
|
||||
|
||||
读者们可能对不是每个程序都能正确处理这一问题没那么震惊。事实证明,这些程序的其中之一是网络时间协议(Network Time Protocol)的 [OpenNTPD][4] 实现。Brent Cook 最近给上游 OpenNTPD 源码[提交了一个小补丁][5],添加了必要的 `setsockopt()` 调用,它也被提交到了 OpenBSD 中了。不过那个补丁看起来不大可能被接受,最可能是因为 OpenBSD 式的理由(LCTT 译注:如前文提到的,OpenBSD 并不受这个问题的影响)。
|
||||
|
||||
### 安全担忧
|
||||
|
||||
正如上文所提到,OpenBSD 根本不支持 IPv4 映射的 IPv6 套接字。即使一个程序试着通过将 `IPV6_V6ONLY` 选项设置为 0 来显式地启用地址映射,它的作者也会感到沮丧,因为这个设置在 OpenBSD 系统中无效。这个决定背后的原因是这种映射带来了一些安全担忧。针对开放端口的攻击类型有很多种,但归根结底,问题在于同一个端口可以经由两条不同的途径到达,而每条途径可能有各自不同的访问控制规则。
|
||||
|
||||
任何给定的服务器系统可能都设置了防火墙规则,描述端口的允许访问权限。也许还会有适当的机制,比如 TCP wrappers 或一个基于 BPF 的过滤器,或者网络上的路由器可以做连接状态协议过滤。结果可能是防火墙保护出现缺口,并带来各种潜在的混乱,因为同一个 IPv4 地址可以通过两种不同的协议到达。如果地址映射是在网络边界完成的,情况甚至会变得更加复杂;参看[这个 2003 年的 RFC 草案][6],它描述了如果映射地址在主机之间传播,一些随之而来的其它攻击场景。
|
||||
|
||||
让系统和软件都正确地处理 IPv4 映射的 IPv6 地址当然可以实现,但那会增加系统的整体复杂度,而且几乎可以肯定,这种处理不会在所有应该实现的地方都被完整地实现。如同 Theo de Raadt [说的][7]:
|
||||
|
||||
> **有时候人们将一个糟糕的想法放进了 RFC。之后他们发现这个想法是不可能的就将它丢回垃圾箱了。结果就是概念变得如此复杂,每个人都得在管理和编码方面是个全职专家。**
|
||||
|
||||
我们也根本不清楚这些全职专家有多少在实际配置使用 IPv4 映射的 IPv6 地址的系统和网络。
|
||||
|
||||
有人可能会说,尽管 IPv4 映射的 IPv6 地址造成了安全危险,更改一下程序让它在实现了地址映射的系统上关闭地址映射应该没什么危害。但 Theo [认为][8]不应该这么做,有两个理由。第一个是有许多破旧的程序,它们永远不会被修复。而实际的原因是给发行版们施加压力去默认关闭地址映射。正如他说的:“**最终有人会理解这个危害是系统性的,并更改系统默认行为使之‘secure by default’**。”
|
||||
|
||||
### Linux 上的地址映射
|
||||
|
||||
在 Linux 系统,地址映射由一个叫做 `net.ipv6.bindv6only` 的 sysctl 开关控制;它默认设置为 0(启用地址映射)。管理员(或发行版们)可以通过将它设置为 1 来关闭地址映射,但在部署这样一个系统到生产环境之前最好确认软件都能正常工作。一个快速调查显示没有哪个主要发行版改变这个默认值;Debian 在 2009 年的 “squeeze” 中[改变了这个默认值][9],但这个改动破坏了足够多的软件包(比如[任何包含 Java 的程序][10]),[在经过了几次的 Debian 式的讨论之后][11],它恢复到了原来的设置。看上去不少程序依赖于默认启用地址映射。
|
||||
|
||||
OpenBSD 有以“secure by default”的名义打破其核心系统之外的东西的传统;而 Linux 发行版们则更倾向于难以作出这样的改变。所以那些一般不愿意收到他们用户的不满的发行版们,不太可能很快对 bindv6only 的默认设置作出改变。好消息是这个功能作为默认已经很多年了,但很难找到被利用的例子。但是,正如我们都知道的,谁都无法保证这样的利用不可能发生。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://lwn.net/Articles/688462/
|
||||
|
||||
作者:[Jonathan Corbet][a]
|
||||
译者:[alim0x](https://github.com/alim0x)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://lwn.net/
|
||||
[1]: https://www.google.com/intl/en/ipv6/statistics.html
|
||||
[2]: https://tools.ietf.org/html/rfc3493#section-3.7
|
||||
[3]: https://tools.ietf.org/html/rfc2373#page-10
|
||||
[4]: https://github.com/openntpd-portable/
|
||||
[5]: https://lwn.net/Articles/688464/
|
||||
[6]: https://tools.ietf.org/html/draft-itojun-v6ops-v4mapped-harmful-02
|
||||
[7]: https://lwn.net/Articles/688465/
|
||||
[8]: https://lwn.net/Articles/688466/
|
||||
[9]: https://lists.debian.org/debian-devel/2009/10/msg00541.html
|
||||
[10]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=560056
|
||||
[11]: https://lists.debian.org/debian-devel/2010/04/msg00099.html
|
@ -0,0 +1,76 @@
|
||||
Instagram 基于 Python 语言的 Web Service 效率提升之道
|
||||
===============================================
|
||||
|
||||
Instagram 目前部署了世界上最大规模的 Django Web 框架(该框架完全使用 Python 编写)。我们最初选用 Python 是因为它久负盛名的简洁性与实用性,这非常符合我们的哲学思想——“先做简单的事情”。但简洁性也会带来效率方面的折衷。Instagram 的规模在过去两年中已经翻番,并且最近已突破 5 亿用户,所以急需最大程度地提升 web 服务效率以便我们的平台能够继续顺利地扩大。在过去的一年,我们已经将效率计划(efficiency program)提上日程,并在过去的六个月,我们已经能够做到无需向我们的 Django 层(Django tiers)添加新的容量来维持我们的用户增长。我们将在本文分享一些由我们构建的工具以及如何使用它们来优化我们的日常部署流程。
|
||||
|
||||
### 为何需要提升效率?
|
||||
|
||||
Instagram,正如所有的软件,受限于像服务器和数据中心能源这样的物理限制。鉴于这些限制,在我们的效率计划中有两个我们希望实现的主要目标:
|
||||
|
||||
1. Instagram 应当能够在持续发布代码的同时正常地提供服务,即使由于自然灾害、区域性网络问题等原因丢失了某一个数据中心区域。
|
||||
2. Instagram 应当能够自由地滚动发布新产品和新功能,不必因容量而受阻。
|
||||
|
||||
想要实现这些目标,我们意识到我们需要持续不断地监控我们的系统并与回归(regressions)进行战斗。
|
||||
|
||||
### 定义效率
|
||||
|
||||
Web services 的瓶颈通常在于每台服务器上可用的 CPU 时间。在这种环境下,效率就意味着利用相同的 CPU 资源完成更多的任务,也就是说,每秒处理更多的用户请求(requests per second, RPS)。当我们寻找优化方法时,我们面临的第一个最大的挑战就是尝试量化我们当前的效率。到目前为止,我们一直在使用“每次请求的平均 CPU 时间”来评估效率,但使用这种指标也有其固有限制:
|
||||
|
||||
1. **设备多样性**。使用 CPU 时间来测量 CPU 资源并非理想方案,因为它同时受到 CPU 型号与 CPU 负载的影响。
|
||||
2. **请求影响数据**。测量每次请求的 CPU 资源并非理想方案,因为在使用每次请求测量(per-request measurement)方案时,添加或移除轻量级或重量级的请求也会影响到效率指标。
|
||||
|
||||
相对于 CPU 时间来说,CPU 指令是一种更好的指标,因为对于相同的请求,它会报告相同的数字,不管 CPU 型号和 CPU 负载情况如何。我们选择使用了一种叫做“每个活动用户(per active user)”的指标,而不是将我们所有的数据关联到每个用户请求上。我们最终采用“每个活动用户在高峰期间的 CPU 指令(CPU instruction per active user during peak minute)”来测量效率。我们建立好新的度量标准后,下一步就是通过对 Django 的分析来更多的了解一下我们的回归。
|
||||
|
||||
### Django web services 分析
|
||||
|
||||
通过分析我们的 Django web services,我们希望回答两个主要问题:
|
||||
|
||||
1. CPU 回归会发生吗?
|
||||
2. 是什么导致了 CPU 回归发生以及我们该怎样修复它?
|
||||
|
||||
想要回答第一个问题,我们需要追踪”每个活动用户的 CPU 指令(CPU-instruction-per-active-user)“指标。如果该指标增加,我们就知道已经发生了一次 CPU 回归。
|
||||
|
||||
我们为此构建的工具叫做 Dynostats。Dynostats 利用 Django 中间件以一定的速率采样用户请求,记录关键的效率以及性能指标,例如 CPU 总指令数、端到端请求时延、花费在访问内存缓存(memcache)和数据库服务的时间等。另一方面,每个请求都有很多可用于聚合的元数据(metadata),例如端点名称、HTTP 请求返回码、服务该请求的服务器名称以及请求中最新提交的哈希值(hash)。对于单个请求记录来说,有两个方面非常强大,因为我们可以在不同的维度上进行切割,那将帮助我们减少任何导致 CPU 回归的原因。例如,我们可以根据它们的端点名称聚合所有请求,正如下面的时间序列图所示,从图中可以清晰地看出在特定端点上是否发生了回归。
|
||||
|
||||

|
||||
|
||||
CPU 指令对测量效率很重要——当然,它们也很难获得。Python 并没有支持直接访问 CPU 硬件计数器(CPU 硬件计数器是指可编程 CPU 寄存器,用于测量性能指标,例如 CPU 指令)的公共库。另一方面,Linux 内核提供了 `perf_event_open` 系统调用。通过 Python `ctypes` 桥接技术能够让我们调用标准 C 库的系统调用函数 `syscall`,它也为我们提供了兼容 C 的数据类型,从而可以编程硬件计数器并从它们读取数据。
|
||||
|
||||
使用 Dynostats,我们已经可以找出 CPU 回归,并探究 CPU 回归发生的原因,例如哪个端点受到的影响最多,谁提交了真正会导致 CPU 回归的变更等。然而,当开发者收到他们的变更已经导致一次 CPU 回归发生的通知时,他们通常难以找出问题所在。如果问题很明显,那么回归可能就不会一开始就被提交!
|
||||
|
||||
这就是为何我们需要一个 Python 分析器,(一旦 Dynostats 发现了它)从而使开发者能够使用它找出回归发生的根本原因。不同于白手起家,我们决定对一个现成的 Python 分析器 cProfile 做适当的修改。cProfile 模块通常会提供一个统计集合来描述程序不同的部分执行时间和执行频率。我们将 cProfile 的定时器(timer)替换成了一个从硬件计数器读取的 CPU 指令计数器,以此取代对时间的测量。我们在采样请求后产生数据并把数据发送到数据流水线。我们也会发送一些我们在 Dynostats 所拥有的类似元数据,例如服务器名称、集群、区域、端点名称等。
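作为一个可运行的小示意(假设性示例,并非 Instagram 定制分析器的真实代码),下面演示了 cProfile 允许传入自定义计时器这一接口;真实实现中,这个计时器会换成从硬件计数器读取 CPU 指令数的函数,这里为了能直接运行而改用纳秒时间戳:

```
import cProfile
import pstats
import time

def instruction_counter():
    # 真实实现会通过 perf_event_open 读取 CPU 指令数,
    # 此处仅用整数纳秒时间戳演示自定义计时器的接口用法
    return time.perf_counter_ns()

def workload():
    return sum(i * i for i in range(100000))

# timer 返回整数时,用 timeunit 指定每个计数单位对应的时长
profiler = cProfile.Profile(timer=instruction_counter, timeunit=1e-9)
profiler.enable()
workload()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```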
|
||||
|
||||
在数据流水线的另一边,我们创建了一个消费数据的尾随者(tailer)。尾随者的主要功能是解析 cProfile 的统计数据并创建能够表示 Python 函数级别的 CPU 指令的实体。如此,我们能够通过 Python 函数来聚合 CPU 指令,从而更加方便地找出是什么函数导致了 CPU 回归。
|
||||
|
||||
### 监控与警报机制
|
||||
|
||||
在 Instagram,我们[每天部署 30-50 次后端服务][1]。这些部署中的任何一个都能发生 CPU 回归的问题。因为每次发生通常都包含至少一个差异(diff),所以找出任何回归是很容易的。我们的效率监控机制包括在每次发布前后都会在 Dynostats 中扫描 CPU 指令,并且当变更超出某个阈值时发出警告。对于长期会发生 CPU 回归的情况,我们也有一个探测器为负载最繁重的端点提供日常和每周的变更扫描。
|
||||
|
||||
部署新的变更并非触发一次 CPU 回归的唯一情况。在许多情况下,新的功能和新的代码路径都由全局环境变量(global environment variables,GEV)控制。 在一个计划好的时间表上,给一部分用户发布新功能是很常见事情。我们在 Dynostats 和 cProfile 统计数据中为每个请求添加了这个信息作为额外的元数据字段。按这些字段将请求分组可以找出由全局环境变量(GEV)改变导致的可能的 CPU 回归问题。这让我们能够在它们对性能造成影响前就捕获到 CPU 回归。
|
||||
|
||||
### 接下来是什么?
|
||||
|
||||
Dynostats 和我们定制的 cProfile,以及我们建立的支持它们的监控和警报机制能够有效地找出大多数导致 CPU 回归的元凶。这些进展已经帮助我们恢复了超过 50% 的不必要的 CPU 回归,否则我们就根本不会知道。
|
||||
|
||||
我们仍然还有一些可以提升的方面,并很容易将它们地加入到 Instagram 的日常部署流程中:
|
||||
|
||||
1. CPU 指令指标应该要比其它指标如 CPU 时间更加稳定,但我们仍然观察到让我们头疼的差异。保持“信噪比(signal:noise ratio)”合理地低是非常重要的,这样开发者们就可以集中于真实的回归上。这可以通过引入置信区间(confidence intervals)的概念来提升,并在信噪比过高时发出警报。针对不同的端点,变化的阈值也可以设置为不同值。
|
||||
2. 通过更改 GEV 来探测 CPU 回归的一个限制就是我们要在 Dynostats 中手动启用这些比较的日志输出。当 GEV 的数量逐渐增加,开发了越来越多的功能,这就不便于扩展了。相反,我们能够利用一个自动化框架来调度这些比较的日志输出,并对所有的 GEV 进行遍历,然后当检查到回归时就发出警告。
|
||||
3. cProfile 需要一些增强以便更好地处理封装函数以及它们的子函数。
|
||||
|
||||
鉴于我们在为 Instagram 的 web service 构建效率框架中所投入的工作,所以我们对于将来使用 Python 继续扩展我们的服务很有信心。我们也开始向 Python 语言本身投入更多,并且开始探索从 Python 2 转移 Python 3 之道。我们将会继续探索并做更多的实验以继续提升基础设施与开发者效率,我们期待着很快能够分享更多的经验。
|
||||
|
||||
本文作者 Min Ni 是 Instagram 的软件工程师。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://engineering.instagram.com/web-service-efficiency-at-instagram-with-python-4976d078e366#.tiakuoi4p
|
||||
|
||||
作者:[Min Ni][a]
|
||||
译者:[ChrisLeeGit](https://github.com/chrisleegit)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://engineering.instagram.com/@InstagramEng?source=post_header_lockup
|
||||
[1]: https://engineering.instagram.com/continuous-deployment-at-instagram-1e18548f01d1#.p5adp7kcz
|
@ -0,0 +1,134 @@
|
||||
使用 Python 和 Asyncio 来编写在线多人游戏(三)
|
||||
=================================================================
|
||||
|
||||

|
||||
|
||||
> 在这个系列中,我们基于多人游戏 [贪吃蛇][1] 来制作一个异步的 Python 程序。上一篇文章聚焦于[编写游戏循环][2]上,而本系列第 1 部分则涵盖了如何[异步化][3]。
|
||||
|
||||
- 代码戳[这里][4]
|
||||
|
||||
### 4、制作一个完整的游戏
|
||||
|
||||

|
||||
|
||||
#### 4.1 工程概览
|
||||
|
||||
在此部分,我们将回顾一个完整在线游戏的设计。这是一个经典的贪吃蛇游戏,增加了多玩家支持。你可以自己在 (<http://snakepit-game.com>) 亲自试玩。源码在 GitHub 的这个[仓库][5]。游戏包括下列文件:
|
||||
|
||||
- [server.py][6] - 处理主游戏循环和连接。
|
||||
- [game.py][7] - 主要的 `Game` 类。实现游戏的逻辑和游戏的大部分通信协议。
|
||||
- [player.py][8] - `Player` 类,包括每一个独立玩家的数据和蛇的展现。这个类负责获取玩家的输入并相应地移动蛇。
|
||||
- [datatypes.py][9] - 基本数据结构。
|
||||
- [settings.py][10] - 游戏设置,在注释中有相关的说明。
|
||||
- [index.html][11] - 客户端所有的 html 和 javascript代码都放在一个文件中。
|
||||
|
||||
#### 4.2 游戏循环内窥
|
||||
|
||||
多人的贪吃蛇游戏是个用于学习十分好的例子,因为它简单。所有的蛇在每个帧中移动到一个位置,而且帧以非常低的频率进行变化,这样就可以让你就观察到游戏引擎到底是如何工作的。因为速度慢,对于玩家的按键不会立马响应。按键先是记录下来,然后在一个游戏循环迭代的最后计算下一帧时使用。
|
||||
|
||||
> 现代的动作游戏帧频率更高,而且通常服务端和客户端的帧频率是不相等的。客户端的帧频率通常依赖于客户端的硬件性能,而服务端的帧频率则是固定的。一个客户端可能根据一个游戏“嘀嗒”的数据渲染多个帧。这样就可以创建平滑的动画,这个受限于客户端的性能。在这个例子中,服务端不仅传输物体的当前位置,也要传输它们的移动方向、速度和加速度。客户端的帧频率称之为 FPS(每秒帧数:frames per second),服务端的帧频率称之为 TPS(每秒滴答数:ticks per second)。在这个贪吃蛇游戏的例子中,二者的值是相等的,在客户端显示的一帧是在服务端的一个“嘀嗒”内计算出来的。
|
||||
|
||||
我们使用类似文本模式的游戏区域,事实上是 html 表格中的一个字符宽的小格。游戏中的所有对象都是通过表格中的不同颜色字符来表示。大部分时候,客户端将按键的码发送至服务端,然后每个“滴答”更新游戏区域。服务端一次更新包括需要更新字符的坐标和颜色。所以我们将所有游戏逻辑放置在服务端,只将需要渲染的数据发送给客户端。此外,我们通过替换通过网络发送的数据来减少游戏被破解的概率。
|
||||
|
||||
#### 4.3 它是如何运行的?
|
||||
|
||||
这个游戏中的服务端出于简化的目的,它和例子 3.2 类似。但是我们用一个所有服务端都可访问的 `Game` 对象来代替之前保存了所有已连接 websocket 的全局列表。一个 `Game` 实例包括一个表示连接到此游戏的玩家的 `Player` 对象的列表(在 `self._players` 属性里面),以及他们的个人数据和 websocket 对象。将所有游戏相关的数据存储在一个 `Game` 对象中,会方便我们增加多个游戏房间这个功能——如果我们要增加这个功能的话。这样,我们维护多个 `Game` 对象,每个游戏开始时创建一个。
|
||||
|
||||
客户端和服务端的所有交互都是通过编码成 json 的消息来完成。来自客户端的消息仅包含玩家所按下键码对应的编号。其它来自客户端消息使用如下格式:
|
||||
|
||||
```
|
||||
[command, arg1, arg2, ... argN ]
|
||||
```
|
||||
|
||||
来自服务端的消息以列表的形式发送,因为通常一次要发送多个消息 (大多数情况下是渲染的数据):
|
||||
|
||||
```
|
||||
[[command, arg1, arg2, ... argN ], ... ]
|
||||
```
|
||||
|
||||
在每次游戏循环迭代的最后会计算下一帧,并且将数据发送给所有的客户端。当然,每次不是发送完整的帧,而是发送两帧之间的变化列表。
|
||||
|
||||
注意玩家连接上服务端后不是立马加入游戏。连接开始时是观望者(spectator)模式,玩家可以观察其它玩家如何玩游戏。如果游戏已经开始或者上一个游戏会话已经在屏幕上显示 “game over” (游戏结束),用户此时可以按下 “Join”(参与),来加入一个已经存在的游戏,或者如果游戏没有运行(没有其它玩家)则创建一个新的游戏。后一种情况下,游戏区域在开始前会被先清空。
|
||||
|
||||
游戏区域存储在 `Game._field` 这个属性中,它是由嵌套列表组成的二维数组,用于内部存储游戏区域的状态。数组中的每一个元素表示区域中的一个小格,最终小格会被渲染成 html 表格的格子。它有一个 `Char` 的类型,是一个 `namedtuple` ,包括一个字符和颜色。在所有连接的客户端之间保证游戏区域的同步很重要,所以所有游戏区域的更新都必须依据发送到客户端的相应的信息。这是通过 `Game.apply_render()` 来实现的。它接受一个 `Draw` 对象的列表,其用于内部更新游戏区域和发送渲染消息给客户端。
|
||||
|
||||
> 我们使用 `namedtuple` 不仅因为它表示简单数据结构很方便,也因为用它生成 json 格式的消息时相对于 `dict` 更省空间。如果你在一个真实的游戏循环中需要发送复杂的数据结构,建议先将它们序列化成一个简单的、更短的格式,甚至打包成二进制格式(例如 bson,而不是 json),以减少网络传输。
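下面用一个可以直接运行的小例子说明这一点(仅为演示,`Char` 的字段与文中描述一致,消息内容是虚构的):

```
import json
from collections import namedtuple

# namedtuple 会被序列化成 JSON 数组,dict 则要带上字段名,消息更长
Char = namedtuple("Char", ["char", "color"])

as_tuple = Char("#", "red")
as_dict = {"char": "#", "color": "red"}

print(json.dumps(["render", 3, 5, *as_tuple]))  # ["render", 3, 5, "#", "red"]
print(json.dumps(["render", 3, 5, as_dict]))    # ["render", 3, 5, {"char": "#", "color": "red"}]
```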
|
||||
|
||||
`Player` 对象包括用 `deque` 对象表示的蛇。这种数据类型和 `list` 相似,但是在两端增加和删除元素时效率更高,用它来表示蛇很理想。它的主要方法是 `Player.render_move()`,它返回移动玩家的蛇至下一个位置的渲染数据。一般来说它在新的位置渲染蛇的头部,移除上一帧中表示蛇的尾巴的元素。如果蛇吃了一个数字变长了,在相应的多个帧中尾巴是不需要移动的。蛇的渲染数据在主类的 `Game.next_frame()` 中使用,该方法中实现所有的游戏逻辑。这个方法渲染所有蛇的移动,检查每一个蛇前面的障碍物,而且生成数字和“石头”。每一个“嘀嗒”,`game_loop()` 都会直接调用它来生成下一帧。
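下面是一个用 `deque` 表示蛇并实现移动的最小示意(仅为演示,并非游戏仓库中的实际代码):

```
from collections import deque

# 每一帧在头部加入新坐标、从尾部弹出旧坐标;吃到数字时跳过弹尾即可变长
snake = deque([(5, 7), (5, 6), (5, 5)])   # 头在左,尾在右

def move(snake, new_head, grew=False):
    snake.appendleft(new_head)   # O(1) 地加入新的蛇头
    if not grew:
        snake.pop()              # O(1) 地移除旧的蛇尾

move(snake, (5, 8))
print(list(snake))               # [(5, 8), (5, 7), (5, 6)]
```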
|
||||
|
||||
如果蛇头前面有障碍物,在 `Game.next_frame()` 中会调用 `Game.game_over()`。它随后通知所有的客户端那条蛇死掉了(会调用 `player.render_game_over()` 方法将其变成石头),然后更新分数排行榜。`Player` 对象的 `alive` 标记被置为 `False`,当渲染下一帧时,这个玩家会被跳过,除非他重新加入游戏。当没有蛇存活时,游戏区域会显示 “game over” (游戏结束)。而且,主游戏循环会停止,设置 `game.running` 标记为 `False`。当某个玩家下次按下 “Join” (加入)时,游戏区域会被清空。
|
||||
|
||||
在渲染游戏的每个下一帧时也会产生数字和石头,它们是由随机值决定的。产生数字或者石头的概率可以在 `settings.py` 中修改成其它值。注意数字的产生是针对游戏区域每一个活的蛇的,所以蛇越多,产生的数字就越多,这样它们都有足够的食物来吃掉。
|
||||
|
||||
#### 4.4 网络协议
|
||||
|
||||
从客户端发送消息的列表:
|
||||
|
||||
命令 | 参数 |描述
|
||||
:-- |:-- |:--
|
||||
new_player | [name] |设置玩家的昵称
|
||||
join | |玩家加入游戏
|
||||
|
||||
|
||||
从服务端发送消息的列表:
|
||||
|
||||
命令 | 参数 |描述
|
||||
:-- |:-- |:--
|
||||
handshake |[id] |给一个玩家指定 ID
|
||||
world |[[(char, color), ...], ...] |初始化游戏区域(世界地图)
|
||||
reset_world | |清除实际地图,替换所有字符为空格
|
||||
render |[x, y, char, color] |在某个位置显示字符
|
||||
p_joined |[id, name, color, score] |新玩家加入游戏
|
||||
p_gameover |[id] |某个玩家游戏结束
|
||||
p_score |[id, score] |给某个玩家计分
|
||||
top_scores |[[name, score, color], ...] |更新排行榜
|
||||
|
||||
典型的消息交换顺序:
|
||||
|
||||
客户端 -> 服务端 |服务端 -> 客户端 |服务端 -> 所有客户端 |备注
|
||||
:-- |:-- |:-- |:--
|
||||
new_player | | |名字传递给服务端
|
||||
|handshake | |指定 ID
|
||||
|world | |初始化传递的世界地图
|
||||
|top_scores | |收到传递的排行榜
|
||||
join | | |玩家按下“Join”,游戏循环开始
|
||||
| |reset_world |命令客户端清除游戏区域
|
||||
| |render, render, ... |第一个游戏“滴答”,渲染第一帧
|
||||
(key code) | | |玩家按下一个键
|
||||
| |render, render, ... |渲染第二帧
|
||||
| |p_score |蛇吃掉了一个数字
|
||||
| |render, render, ... |渲染第三帧
|
||||
| | |... 重复若干帧 ...
|
||||
| |p_gameover |试着吃掉障碍物时蛇死掉了
|
||||
| |top_scores |更新排行榜(如果需要更新的话)
|
||||
|
||||
### 5. 总结
|
||||
|
||||
说实话,我十分享受 Python 最新的异步特性。新的语法做了改善,所以异步代码很容易阅读。可以明显看出哪些调用是非阻塞的,什么时候发生 greenthread 的切换。所以现在我可以宣称 Python 是异步编程的好工具。
|
||||
|
||||
SnakePit 在 7WebPages 团队中非常受欢迎。如果你在公司想休息一下,不要忘记给我们在 [Twitter][12] 或者 [Facebook][13] 留下反馈。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-part-3/
|
||||
|
||||
作者:[Kyrylo Subbotin][a]
|
||||
译者:[chunyang-wen](https://github.com/chunyang-wen)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-part-3/
|
||||
[1]: http://snakepit-game.com/
|
||||
[2]: https://linux.cn/article-7784-1.html
|
||||
[3]: https://linux.cn/article-7767-1.html
|
||||
[4]: https://github.com/7WebPages/snakepit-game
|
||||
[5]: https://github.com/7WebPages/snakepit-game
|
||||
[6]: https://github.com/7WebPages/snakepit-game/blob/master/server.py
|
||||
[7]: https://github.com/7WebPages/snakepit-game/blob/master/game.py
|
||||
[8]: https://github.com/7WebPages/snakepit-game/blob/master/player.py
|
||||
[9]: https://github.com/7WebPages/snakepit-game/blob/master/datatypes.py
|
||||
[10]: https://github.com/7WebPages/snakepit-game/blob/master/settings.py
|
||||
[11]: https://github.com/7WebPages/snakepit-game/blob/master/index.html
|
||||
[12]: https://twitter.com/7WebPages
|
||||
[13]: https://www.facebook.com/7WebPages/
|
@ -0,0 +1,239 @@
|
||||
科学音频处理(二):如何使用 Octave 对音频文件进行基本数学信号处理
|
||||
=========
|
||||
|
||||
在[前一篇的指导教程][1]中,我们看到了读、写以及重放音频文件的简单步骤,我们甚至看到如何从一个周期函数比如余弦函数合成一个音频文件。在这篇指导教程中,我们将会看到如何对信号进行叠加和倍乘(调整),并应用一些基本的数学函数看看它们对原始信号的影响。
|
||||
|
||||
### 信号叠加
|
||||
|
||||
两个信号 S1(t)和 S2(t)相加形成一个新的信号 R(t),这个信号在任何瞬间的值等于构成它的两个信号在那个时刻的值之和。就像下面这样:
|
||||
|
||||
```
|
||||
R(t) = S1(t) + S2(t)
|
||||
```
|
||||
|
||||
我们将用 Octave 重新产生两个信号的和并通过图表看达到的效果。首先,我们生成两个不同频率的信号,看一看它们的叠加信号是什么样的。
|
||||
|
||||
#### 第一步:产生两个不同频率的信号(ogg 文件)
|
||||
|
||||
```
|
||||
>> sig1='cos440.ogg'; %creating the audio file @440 Hz
|
||||
>> sig2='cos880.ogg'; %creating the audio file @880 Hz
|
||||
>> fs=44100; %generating the parameters values (Period, sampling frequency and angular frequency)
|
||||
>> t=0:1/fs:0.02;
|
||||
>> w1=2*pi*440*t;
|
||||
>> w2=2*pi*880*t;
|
||||
>> audiowrite(sig1,cos(w1),fs); %writing the function cos(w) on the files created
|
||||
>> audiowrite(sig2,cos(w2),fs);
|
||||
```
|
||||
|
||||
然后我们绘制出两个信号的图像。
|
||||
|
||||
**信号 1 的图像(440 赫兹)**
|
||||
|
||||
```
|
||||
>> [y1, fs] = audioread(sig1);
|
||||
>> plot(y1)
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/plotsignal1.png)
|
||||
|
||||
**信号 2 的图像(880 赫兹)**
|
||||
|
||||
```
|
||||
>> [y2, fs] = audioread(sig2);
|
||||
>> plot(y2)
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/plotsignal2.png)
|
||||
|
||||
#### 第二步:把两个信号叠加
|
||||
|
||||
现在我们展示一下前面步骤中产生的两个信号的和。
|
||||
|
||||
```
|
||||
>> sumres=y1+y2;
|
||||
>> plot(sumres)
|
||||
```
|
||||
|
||||
叠加信号的图像
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/plotsum.png)
|
||||
|
||||
**Octaver 中的效果**
|
||||
|
||||
在 Octaver 中,这个效果产生的声音是独特的,因为它可以仿真音乐家弹奏的低八度或者高八度音符(取决于内部程序设计),仿真音符和原始音符成对,也就是两个音符发出相同的声音。
|
||||
|
||||
#### 第三步:把两个真实的信号相加(比如两首音乐歌曲)
|
||||
|
||||
为了实现这个目的,我们使用格列高利圣咏(Gregorian Chants)中的两首歌曲(声音采样)。
|
||||
|
||||
**圣母颂曲(Avemaria Track)**
|
||||
|
||||
首先,我们看一下圣母颂曲并绘出它的图像:
|
||||
|
||||
```
|
||||
>> [y1,fs]=audioread('avemaria_.ogg');
|
||||
>> plot(y1)
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/avemaria.png)
|
||||
|
||||
**赞美诗曲(Hymnus Track)**
|
||||
|
||||
现在我们看一下赞美诗曲并绘出它的图像。
|
||||
|
||||
```
|
||||
>> [y2,fs]=audioread('hymnus.ogg');
|
||||
>> plot(y2)
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/hymnus.png)
|
||||
|
||||
**圣母颂曲 + 赞美诗曲**
|
||||
|
||||
```
|
||||
>> y='avehymnus.ogg';
|
||||
>> audiowrite(y, y1+y2, fs);
|
||||
>> [y, fs]=audioread('avehymnus.ogg');
|
||||
>> plot(y)
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/avehymnus.png)
|
||||
|
||||
结果,从音频的角度来看,两个声音信号混合在了一起。
|
||||
|
||||
### 两个信号的乘积
|
||||
|
||||
对于求两个信号的乘积,我们可以使用类似求和的方法。我们使用之前生成的相同文件。
|
||||
|
||||
```
|
||||
R(t) = S1(t) * S2(t)
|
||||
```
|
||||
|
||||
```
|
||||
>> sig1='cos440.ogg'; %creating the audio file @440 Hz
|
||||
>> sig2='cos880.ogg'; %creating the audio file @880 Hz
|
||||
>> product='prod.ogg'; %creating the audio file for product
|
||||
>> fs=44100; %generating the parameters values (Period, sampling frequency and angular frequency)
|
||||
>> t=0:1/fs:0.02;
|
||||
>> w1=2*pi*440*t;
|
||||
>> w2=2*pi*880*t;
|
||||
>> audiowrite(sig1, cos(w1), fs); %writing the function cos(w) on the files created
|
||||
>> audiowrite(sig2, cos(w2), fs);>> [y1,fs]=audioread(sig1);>> [y2,fs]=audioread(sig2);
|
||||
>> audiowrite(product, y1.*y2, fs); %performing the product
|
||||
>> [yprod,fs]=audioread(product);
|
||||
>> plot(yprod); %plotting the product
|
||||
```
|
||||
|
||||
|
||||
注意:我们必须使用操作符 ‘.*’,因为在参数文件中,这个乘积是值与值相乘。更多信息,请参考 Octave 矩阵操作产品手册。
|
||||
|
||||
#### 乘积生成信号的图像
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/plotprod.png)
|
||||
|
||||
#### 两个基本频率相差很大的信号相乘后的图表效果(调制原理)
|
||||
|
||||
**第一步:**
|
||||
|
||||
生成一个频率为 220 赫兹的声音信号。
|
||||
|
||||
```
|
||||
>> fs=44100;
|
||||
>> t=0:1/fs:0.03;
|
||||
>> w=2*pi*220*t;
|
||||
>> y1=cos(w);
|
||||
>> plot(y1);
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/carrier.png)
|
||||
|
||||
**第二步:**
|
||||
|
||||
生成一个 22000 赫兹的高频调制信号。
|
||||
|
||||
```
|
||||
>> y2=cos(100*w);
|
||||
>> plot(y2);
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/modulating.png)
|
||||
|
||||
**第三步:**
|
||||
|
||||
把两个信号相乘并绘出图像。
|
||||
|
||||
```
|
||||
>> plot(y1.*y2);
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/modulated.png)
|
||||
|
||||
### 一个信号和一个标量相乘
|
||||
|
||||
一个函数和一个标量相乘的效果是改变它的幅值(值域),在某些情况下(标量为负时)还会使相位反转(倒相)。给定一个标量 K,一个函数 F(t) 和这个标量相乘定义为:
|
||||
|
||||
```
|
||||
R(t) = K*F(t)
|
||||
```
|
||||
|
||||
```
|
||||
>> [y,fs]=audioread('cos440.ogg'); %creating the work files
|
||||
>> res1='coslow.ogg';
|
||||
>> res2='coshigh.ogg';>> res3='cosinverted.ogg';
|
||||
>> K1=0.2; %values of the scalars
|
||||
>> K2=0.5;>> K3=-1;
|
||||
>> audiowrite(res1, K1*y, fs); %product function-scalar
|
||||
>> audiowrite(res2, K2*y, fs);
|
||||
>> audiowrite(res3, K3*y, fs);
|
||||
```
|
||||
|
||||
**原始信号的图像**
|
||||
|
||||
```
|
||||
>> plot(y)
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/originalsignal.png)
|
||||
|
||||
**信号振幅减为原始信号振幅的 0.2 倍后的图像**
|
||||
|
||||
```
|
||||
>> [y_low, fs] = audioread(res1); % res1 只是文件名,需要先读入信号再绘图
>> plot(y_low)
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/coslow.png)
|
||||
|
||||
**信号振幅减为原始振幅的 0.5 倍后的图像**
|
||||
|
||||
```
|
||||
>> [y_half, fs] = audioread(res2); % 同样先读入文件中的信号
>> plot(y_half)
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/coshigh.png)
|
||||
|
||||
**倒相后的信号图像**
|
||||
|
||||
```
|
||||
>> [y_inv, fs] = audioread(res3); % 先读入倒相后的信号
>> plot(y_inv)
|
||||
```
|
||||
|
||||
[](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/cosinverted.png)
|
||||
|
||||
### 结论
|
||||
|
||||
基本数学运算比如代数和、乘,以及函数与常量相乘是更多高级运算比如谱分析、振幅调制,角调制等的支柱和基础。在下一个教程中,我们来看一看如何进行这样的运算以及它们对声音文件产生的效果。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/octave-audio-signal-processing-ubuntu/
|
||||
|
||||
作者:[David Duarte][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.howtoforge.com/tutorial/octave-audio-signal-processing-ubuntu/
|
||||
[1]: https://linux.cn/article-7755-1.html
|
||||
|
@ -0,0 +1,135 @@
|
||||
你必须了解的基础的 Linux 网络命令
|
||||
==================================================
|
||||
|
||||

|
||||
|
||||
> 摘要:有抱负的 Linux 系统管理员和 Linux 狂热者必须知道的、最重要的、而且基础的 Linux 网络命令合集。
|
||||
|
||||
在 It’s FOSS 我们并非每天都谈论 Linux 的“命令行方面”。基本上,我更专注于 Linux 的桌面端。但你们读者中的一些人在内部调查(仅面向 It's FOSS newsletter 订阅者)中指出,你们也想学些命令行技巧。速查表也受大部分读者所喜欢和支持。
|
||||
|
||||
为此,我编辑了一个 Linux 中基础网络命令的列表。它并不是一个教你如何使用这些命令的教程,而是一个命令合集和他们的简短解释。所以,如果你已经使用过这些命令,你可以用它来快速记住命令。
|
||||
|
||||
你可以把这个网页添加为书签以便快速查阅,或输出一个 PDF 版本以便离线使用。
|
||||
|
||||
当我还是通信系统工程专业的学生的时候我就有这个 Linux 网络命令的列表了。它帮助我在计算机网络课程获得了高分。希望它也能以同样的方式帮助你。
|
||||
|
||||
### Linux 基础网络命令列表
|
||||
|
||||
我在计算机网络课程上使用 FreeBSD,不过这些 UNIX 命令应该也能在 Linux 上同样工作。
|
||||
|
||||
#### 连通性
|
||||
|
||||
- `ping <host>`:发送 ICMP echo 消息(一个包)到主机。这可能会不停地发送直到你按下 `Control-C`。Ping 得通意味着一个包从你的机器通过 ICMP 发送出去,并在 IP 层回显。Ping 告诉你另一个主机是否在运行。
|
||||
- `telnet <host> [port]`:与主机在指定的端口通信。默认的 telnet 端口是 23。按 Control-] 以退出 telnet。其它一些常用的端口是:
|
||||
- 7 —— echo 端口
|
||||
- 25 —— SMTP,用于发送邮件
|
||||
- 79 —— Finger (LCTT 译注:[维基百科 - Finger protocal](https://en.wikipedia.org/wiki/Finger_protocol),不过举例 Finger 恐怕不合时宜,倒不如试试 80?),提供该网络下其它用户的信息。
|
||||
|
||||
#### ARP
|
||||
|
||||
ARP 用于将 IP 地址转换为以太网地址。root 用户可以添加和删除 ARP 记录。当 ARP 记录被污染或者错误时,删除它们会有用。root 显式添加的 ARP 记录是永久的 —— 代理设置的也是。ARP 表保存在内核中,动态地被操作。ARP 记录会被缓存,通常在 20 分钟后失效并被删除。
|
||||
|
||||
- `arp -a`:打印 ARP 表。
|
||||
- `arp -s <ip_address> <mac_address> [pub]`:添加一条记录到表中。
|
||||
- `arp -a -d`:删除 ARP 表中的所有记录。
|
||||
|
||||
#### 路由
|
||||
|
||||
- `netstat -r`:打印路由表。路由表保存在内核中,用于 IP 层把包路由到非本地网络。
|
||||
- `route add`:route 命令用于向路由表添加静态(手动指定而非动态)路由路径。所有从该 PC 到那个 IP/子网的流量都会经由指定的网关 IP。它也可以用来设置一个默认路由。例如,在 IP/子网处使用 0.0.0.0,就可以发送所有包到特定的网关。
|
||||
- `routed`:控制动态路由的 BSD 守护程序。开机时启动。它运行 RIP 路由协议。只有 root 用户可用。没有 root 权限你不能运行它。
|
||||
- `gated`:gated 是另一个使用 RIP 协议的路由守护进程。它同时支持 OSPF、EGP 和 RIP 协议。只有 root 用户可用。
|
||||
- `traceroute`:用于跟踪 IP 包的路由。它每次发送包时都把跳数加 1,从而使得从源地址到目的地之间的所有网关都会返回消息。
|
||||
- `netstat -rnf inet`:显示 IPv4 的路由表。
|
||||
- `sysctl net.inet.ip.forwarding=1`:启用包转发(把主机变为路由器)。
|
||||
- `route add|delete [-net|-host] <destination> <gateway>`:(如 `route add 192.168.20.0/24 192.168.30.4`)添加一条路由。
|
||||
- `route flush`:删除所有路由。
|
||||
- `route add -net 0.0.0.0 192.168.10.2`:添加一条默认路由。
|
||||
- `routed -Pripv2 -Pno_rdisc -d [-s|-q]`:运行 routed 守护进程,使用 RIPv2 协议,不启用 ICMP 自动发现,在前台运行,供给模式或安静模式。
|
||||
- `route add 224.0.0.0/4 127.0.0.1`:为本地地址定义多播路由。(LCTT 译注:原文存疑)
|
||||
- `rtquery -n <host>`(LCTT 译注:增加了 host 参数):查询指定主机上的 RIP 守护进程(手动更新路由表)。
|
||||
|
||||
#### 其它
|
||||
|
||||
- `nslookup`:向 DNS 服务器查询,将 IP 转为名称,或反之。例如,`nslookup facebook.com` 会给出 facebook.com 的 IP。
|
||||
- `ftp <host> [port]`(LCTT 译注:原文中 water 应是笔误):传输文件到指定主机。通常可以使用 登录名 "anonymous" , 密码 "guest" 来登录。
|
||||
- `rlogin -l <host>`(LCTT 译注:添加了 host 参数):使用类似 telnet 的虚拟终端登录到主机。
|
||||
|
||||
#### 重要文件
|
||||
|
||||
- `/etc/hosts`:域名到 IP 地址的映射。
|
||||
- `/etc/networks`:网络名称到 IP 地址的映射。
|
||||
- `/etc/protocols`:协议名称到协议编号的映射。
|
||||
- `/etc/services`:TCP/UDP 服务名称到端口号的映射。
|
||||
|
||||
#### 工具和网络性能分析
|
||||
|
||||
- `ifconfig <interface> <address> [up]`:启动接口。
|
||||
- `ifconfig <interface> [down|delete]`:停止接口。
|
||||
- `ethereal &`:在后台打开 `ethereal` 而非前台。
|
||||
- `tcpdump -i <interface> -vvv`:在指定接口上抓取和分析数据包的工具。
|
||||
- `netstat -w [seconds] -I [interface]`:显示网络设置和统计信息。
|
||||
- `udpmt -p [port] -s [bytes] target_host`:发送 UDP 流量。
|
||||
- `udptarget -p [port]`:接收 UDP 流量。
|
||||
- `tcpmt -p [port] -s [bytes] target_host`:发送 TCP 流量。
|
||||
- `tcptarget -p [port]`:接收 TCP 流量。
|
||||
|
||||
|
||||
#### 交换机
|
||||
|
||||
- `ifconfig sl0 srcIP dstIP`:配置一个串行接口(在此前先执行 `slattach -l /dev/ttyd0`,此后执行 `sysctl net.inet.ip.forwarding=1`)
|
||||
- `telnet 192.168.0.254`:从子网中的一台主机访问交换机。
|
||||
- `sh ru` 或 `show running-configuration`:查看当前配置。
|
||||
- `configure terminal`:进入配置模式。
|
||||
- `exit`:退出当前模式。(LCTT 译注:原文存疑)
|
||||
|
||||
#### VLAN
|
||||
|
||||
- `vlan n`:创建一个 ID 为 n 的 VLAN。
|
||||
- `no vlan N`:删除 ID 为 n 的 VLAN。
|
||||
- `untagged Y`:添加端口 Y 到 VLAN n。
|
||||
- `ifconfig vlan0 create`:创建 vlan0 接口。
|
||||
- `ifconfig vlan0 vlan_ID vlandev em0`:把 em0 加入到 vlan0 接口(LCTT 译注:原文存疑),并设置标记为 ID。
|
||||
- `ifconfig vlan0 [up]`:启用虚拟接口。
|
||||
- `tagged Y`:为当前 VLAN 的端口 Y 添加标记帧支持。
|
||||
|
||||
#### UDP/TCP
|
||||
|
||||
- `socklab udp`:使用 UDP 协议运行 `socklab`。
|
||||
- `sock`:创建一个 UDP 套接字,等效于输入 `sock udp` 和 `bind`。
|
||||
- `sendto <Socket ID> <hostname> <port #>`:发送数据包。
|
||||
- `recvfrom <Socket ID> <byte #>`:从套接字接收数据。
|
||||
- `socklab tcp`:使用 TCP 协议运行 `socklab`。
|
||||
- `passive`:创建一个被动模式的套接字,等效于 `socklab`,`sock tcp`,`bind`,`listen`。
|
||||
- `accept`:接受进来的连接(可以在发起进来的连接之前或之后执行)。
|
||||
- `connect <hostname> <port #>`:等效于 `socklab`,`sock tcp`,`bind`,`connect`。
|
||||
- `close`:关闭连接。
|
||||
- `read <byte #>`:从套接字中读取 n 字节。
|
||||
- `write`:(例如,`write ciao`、`write #10`)向套接字写入 "ciao" 或 10 个字节。
|
||||
|
||||
#### NAT/防火墙
|
||||
|
||||
- `rm /etc/resolv.conf`:禁止地址解析,保证你的过滤和防火墙规则正确工作。
|
||||
- `ipnat -f file_name`:将过滤规则写入文件。
|
||||
- `ipnat -l`:显示活动的规则列表。
|
||||
- `ipnat -C -F`:重新初始化规则表。
|
||||
- `map em0 192.168.1.0/24 -> 195.221.227.57/32 em0`:将 IP 地址映射到接口。
|
||||
- `map em0 192.168.1.0/24 -> 195.221.227.57/32 portmap tcp/udp 20000:50000`:带端口号的映射。
|
||||
- `ipf -f file_name`:将过滤规则写入文件。
|
||||
- `ipf -F -a`:重置规则表。
|
||||
- `ipfstat -I`:当与 -s 选项合用时列出活动的状态条目(LCTT 译注:原文存疑)。
|
||||
|
||||
希望这份基础的 Linux 网络命令合集对你有用。欢迎各种问题和建议。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/basic-linux-networking-commands
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
译者:[bianjp](https://github.com/bianjp)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[1]: https://drive.google.com/open?id=0By49_3Av9sT1cDdaZnh4cHB4aEk
|
@ -0,0 +1,46 @@
|
||||
DAISY : 一种 Linux 上可用的服务于视力缺陷者的文本格式
|
||||
=================================================================
|
||||
|
||||
|
||||

|
||||
|
||||
*图片:由 Kate Ter Haar 提供。opensource.com 后期修饰。CC BY-SA 2.0*
|
||||
|
||||
如果你是盲人或像我一样有视力障碍,你可能经常需要各种软硬件才能做到视觉正常的人们视之为理所当然的事情。这其中之一就是阅读印刷图书所需的专用格式:布莱叶盲文(Braille)(假设你知道怎样阅读它),或者特殊的文本格式,例如 DAISY。
|
||||
|
||||
### DAISY 是什么?
|
||||
|
||||
DAISY 是数字化无障碍信息系统(Digital Accessible Information System)的缩写。 它是一种开放的标准,专用于帮助盲人阅读课本、杂志、报纸、小说,以及你想到的各种东西。 它由[ DAISY 联盟][1]创立于上世纪 90 年代中期,该联盟包括的组织们致力于制定出一套标准,可以让以这种方式标记的文本易于阅读、可以跳转、进行注释以及其它的文本操作,就像视觉正常的人能做的一样。
|
||||
|
||||
当前的 DAISY 3.0 版本发布于 2005 年中期,是一个完全重写了的标准。它创建的目的是更容易撰写遵守该规范的书籍。值得注意的是,DAISY 能够仅支持纯文本、或仅是录音(PCM wave 文件格式或者 MP3 格式)、或既有文本也有录音。特殊的软件能阅读这类书,并支持用户设置书签和目录导航,就像正常人阅读印刷书籍一样。
|
||||
|
||||
### DAISY 是怎样工作的呢?
|
||||
|
||||
DAISY,除开特殊的版本,它工作时有点像这样:你拥有自己的主向导文件(在 DAISY 2.02 中是 ncc.html),它包含书籍的元数据,比如作者姓名、版权信息、书籍页数等等。而在 DAISY 3.0 中这个文件是一个有效的 XML 文件,以及一个被强烈建议包含在每一本书中的 DTD(文档类型定义)文件。
|
||||
|
||||
在导航控制文件中,标记精确描述了各个位置——无论是文本导航中当前光标位置还是录音中的毫秒级定位,这让该软件可以跳到确切的位置,就像视力健康的人翻到某个章节一样。值得注意的是这种导航控制文件仅包含书中主要的、最大的书籍组成部分的位置。
|
||||
|
||||
更小的内容组成部分由 SMIL(同步多媒体集成语言(synchronized multimedia integration language))文件处理。导航的层次很大程度上取决于书籍的标记的怎么样。这样设想一下,如果印刷书籍没有章节标题,你就需要花很多的时间来确定自己阅读的位置。如果一本 DAISY 格式的书籍被标记的很差,你可能只能转到书本的开头或者目录。如果书籍被标记的太差了(或者完全没有标记),你的 DAISY 阅读软件很可能会直接忽略它。
|
||||
|
||||
### 为什么需要专门的软件?
|
||||
|
||||
你可能会问,如果 DAISY 仅仅是 HTML、XML、录音文件,为什么还需要使用专门的软件进行阅读和操作。单纯从技术上而言,你并不需要。专业化的软件大多数情况下是为了方便。这就像在 Linux 操作系统中,一个简单的 Web 浏览器可以被用来打开并阅读书籍。如果你在一本 DAISY 3 的书中点击 XML 文件,软件通常做的就是读取那些你赋予访问权限的书籍的名称,并建立一个列表让你点击选择要打开的书。如果书籍被标记的很差,它不会显示在这份清单中。
|
||||
|
||||
创建 DAISY 则完全是另一件事了,通常需要专门的软件,或需要拥有足够的专业知识来修改一个通用的软件以达到这样的目的。
|
||||
|
||||
### 结语
|
||||
|
||||
幸运的是,DAISY 是一个已确立的标准。虽然它在阅读方面表现的很棒,但是需要特殊软件来生产它使得视力缺陷者孤立于正常人眼中的世界,在那里人们可以以各种格式去阅读他们电子化书籍。这就是 DAISY 联盟在 EPUB 格式取得了 DAISY 成功的原因,它的第 3 版支持一种叫做“媒体覆盖”的规范,基本上来说是在 EPUB 电子书中可选增加声频或视频。由于 EPUB 和 DAISY 共享了很多 XML 标记,一些能够阅读 DAISY 的软件能够看到 EPUB 电子书但不能阅读它们。这也就意味着只要网站为我们换到这种开放格式的书籍,我们将会有更多可选的软件来阅读我们的书籍。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/life/16/5/daisy-linux-compatible-text-format-visually-impaired
|
||||
|
||||
作者:[Kendell Clark][a]
|
||||
译者:[theArcticOcean](https://github.com/theArcticOcean)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/kendell-clark
|
||||
[1]: http://www.daisy.org
|
@ -0,0 +1,404 @@
|
||||
旅行时通过树莓派和 iPad Pro 备份图片
|
||||
===================================================================
|
||||
|
||||

|
||||
|
||||
*旅行中备份图片 - 组件*
|
||||
|
||||
### 介绍
|
||||
|
||||
我在很长的时间内一直在寻找一个旅行中备份图片的理想方法,把 SD 卡放进你的相机包会让你暴露在太多的风险之中:SD 卡可能丢失或者被盗,数据可能损坏或者在传输过程中失败。比较好的一个选择是复制到另外一个介质中,即使它也是个 SD 卡,并且将它放到一个比较安全的地方去,备份到远端也是一个可行的办法,但是如果去了一个没有网络的地方就不太可行了。
|
||||
|
||||
我理想的备份步骤需要下面的工具:
|
||||
|
||||
1. 用一台 iPad pro 而不是一台笔记本。我喜欢轻装旅行,我的大部分旅程都是商务相关的(而不是拍摄休闲的),我痛恨带着个人笔记本的时候还得带着商务本。而我的 iPad 却一直带着,这就是我为什么选择它的原因。
|
||||
2. 用尽可能少的硬件设备。
|
||||
3. 设备之间的连接需要很安全。我需要在旅馆和机场使用这套设备,所以设备之间的连接需要是封闭而加密的。
|
||||
4. 整个过程应该是可靠稳定的,我还用过其他的路由器/组合设备,但是[效果不太理想][1]。
|
||||
|
||||
### 设备
|
||||
|
||||
我配置了一套满足上面条件并且在未来可以扩充的设备,它包含下面这些部件的使用:
|
||||
|
||||
1. [9.7 英寸的 iPad Pro][2],这是本文写作时最强大、轻薄的 iOS 设备。苹果笔不是必需的,但它是我装备的一部分,让我在路上也能做一些编辑工作。所有的重活都由树莓派来做,其它设备只需要能通过 SSH 连接到它就行。
|
||||
2. 安装了 Raspbian 操作系统[树莓派 3][3](LCTT 译注:Raspbian 是基于 Debian 的树莓派操作系统)。
|
||||
3. 树莓派的 [Mini SD卡][4] 和 [盒子/外壳][5]。
|
||||
5. [128G 的优盘][6],对于我是够用了,你可以买个更大的。你也可以买个像[这样][7]的移动硬盘,但是树莓派没法通过 USB 给移动硬盘提供足够的电量,这意味你需要额外准备一个[供电的 USB hub][8] 以及电缆,这就破坏了我们让设备轻薄的初衷。
|
||||
6. [SD 读卡器][9]
|
||||
7. [另外的 SD 卡][10],我会使用几个 SD 卡,在用满之前就会立即换一个,这样就会让我在一次旅途当中的照片散布在不同的 SD 卡上。
|
||||
|
||||
下图展示了这些设备之间如何相互连接。
|
||||
|
||||

|
||||
|
||||
*旅行时照片的备份-流程图*
|
||||
|
||||
树莓派会作为一个安全的热点。它会创建一个自己的 WPA2 加密的 WIFI 网络,iPad Pro 会连入其中。虽然有很多在线教程教你如何创建 Ad Hoc 网络(计算机到计算机的单对单网络),还更简单一些,但是它的连接是不加密的,而且附件的设备很容易就能连接进去。因此我选择创建 WIFI 网络。
|
||||
|
||||
相机的 SD 卡通过 SD 读卡器插到树莓派 USB 端口之一,128G 的大容量优盘一直插在树莓派的另外一个 USB 端口上,我选择了一款[闪迪的][11],因为体积比较小。主要的思路就是通过 Python 脚本把 SD 卡的照片备份到优盘上,备份过程是增量备份,每次脚本运行时都只有变化的(比如新拍摄的照片)部分会添加到备份文件夹中,所以这个过程特别快。如果你有很多的照片或者拍摄了很多 RAW 格式的照片,在就是个巨大的优势。iPad 将用来运行 Python 脚本,而且用来浏览 SD 卡和优盘的文件。
|
||||
|
||||
作为额外的好处,如果给树莓派连上一根能上网的网线(比如通过以太网口),那么它就可以共享互联网连接给那些通过 WIFI 连入的设备。
|
||||
|
||||
### 1. 树莓派的设置
|
||||
|
||||
这部分需要你卷起袖子亲自动手了,我们要用到 Raspbian 的命令行模式,我会尽可能详细的介绍,方便大家进行下去。
|
||||
|
||||
#### 安装和配置 Raspbian
|
||||
|
||||
给树莓派连接鼠标、键盘和 LCD 显示器,将 SD 卡插到树莓派上,按照[树莓派官网][12]的步骤安装 Raspbian。
|
||||
|
||||
安装完后,打开 Raspbian 的终端,执行下面的命令:
|
||||
|
||||
```
|
||||
sudo apt-get update
|
||||
sudo apt-get upgrade
|
||||
```
|
||||
|
||||
这将升级机器上所有的软件到最新,我将树莓派连接到本地网络,而且为了安全更改了默认的密码。
|
||||
|
||||
Raspbian 默认开启了 SSH,这样所有的设置可以在一个远程的设备上完成。我也设置了 RSA 验证,但这是可选的功能,可以在[这里][13]查看更多信息。
|
||||
|
||||
这是一个在 Mac 上在 [iTerm][14] 里建立 SSH 连接到树莓派上的截图[14]。(LCTT 译注:原文图丢失。)
|
||||
|
||||
#### 建立 WPA2 加密的 WIFI AP
|
||||
|
||||
安装过程基于[这篇文章][15],根据我的情况进行了调整。
|
||||
|
||||
**1. 安装软件包**
|
||||
|
||||
我们需要安装下面的软件包:
|
||||
|
||||
```
|
||||
sudo apt-get install hostapd
|
||||
sudo apt-get install dnsmasq
|
||||
```
|
||||
|
||||
hostapd 用来使用内置的 WiFi 创建 AP,dnsmasq 是一个二合一的 DHCP 和 DNS 服务器,很容易设置。
|
||||
|
||||
**2. 编辑 dhcpcd.conf**
|
||||
|
||||
通过以太网连接树莓派。树莓派上的网络接口配置由 `dhcpcd` 控制,因此我们首先要让它忽略 `wlan0`,以便将 `wlan0` 设置为一个静态的 IP。
|
||||
|
||||
用 `sudo nano /etc/dhcpcd.conf` 命令打开 dhcpcd 的配置文件,在最后一行添加上如下内容:
|
||||
|
||||
```
|
||||
denyinterfaces wlan0
|
||||
```
|
||||
|
||||
注意:如果文件中已经有其它接口的配置行,这一行必须放在它们**之上**。
|
||||
|
||||
**3. 编辑接口**
|
||||
|
||||
现在设置静态 IP,使用 `sudo nano /etc/network/interfaces` 打开接口配置文件,按照如下信息编辑`wlan0`部分:
|
||||
|
||||
```
|
||||
allow-hotplug wlan0
|
||||
iface wlan0 inet static
|
||||
address 192.168.1.1
|
||||
netmask 255.255.255.0
|
||||
network 192.168.1.0
|
||||
broadcast 192.168.1.255
|
||||
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
|
||||
```
|
||||
|
||||
同样,把 `wlan1` 部分编辑如下:
|
||||
|
||||
```
|
||||
#allow-hotplug wlan1
|
||||
#iface wlan1 inet manual
|
||||
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
|
||||
```
|
||||
|
||||
重要:使用 `sudo service dhcpcd restart` 命令重启 `dhcpcd` 服务,然后用 `sudo ifdown eth0; sudo ifup wlan0` 命令来重载 `wlan0` 的配置。
|
||||
|
||||
**4. 配置 Hostapd**
|
||||
|
||||
接下来,我们需要配置 hostapd,使用 `sudo nano /etc/hostapd/hostapd.conf` 命令创建一个新的配置文件,内容如下:
|
||||
|
||||
```
|
||||
interface=wlan0
|
||||
|
||||
# Use the nl80211 driver with the brcmfmac driver
|
||||
driver=nl80211
|
||||
|
||||
# This is the name of the network
|
||||
ssid=YOUR_NETWORK_NAME_HERE
|
||||
|
||||
# Use the 2.4GHz band
|
||||
hw_mode=g
|
||||
|
||||
# Use channel 6
|
||||
channel=6
|
||||
|
||||
# Enable 802.11n
|
||||
ieee80211n=1
|
||||
|
||||
# Enable QoS Support
|
||||
wmm_enabled=1
|
||||
|
||||
# Enable 40MHz channels with 20ns guard interval
|
||||
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
|
||||
|
||||
# Accept all MAC addresses
|
||||
macaddr_acl=0
|
||||
|
||||
# Use WPA authentication
|
||||
auth_algs=1
|
||||
|
||||
# Require clients to know the network name
|
||||
ignore_broadcast_ssid=0
|
||||
|
||||
# Use WPA2
|
||||
wpa=2
|
||||
|
||||
# Use a pre-shared key
|
||||
wpa_key_mgmt=WPA-PSK
|
||||
|
||||
# The network passphrase
|
||||
wpa_passphrase=YOUR_NEW_WIFI_PASSWORD_HERE
|
||||
|
||||
# Use AES, instead of TKIP
|
||||
rsn_pairwise=CCMP
|
||||
```
|
||||
|
||||
配置完成后,我们需要告诉 `hostapd` 在系统启动运行时到哪里寻找配置文件。使用 `sudo nano /etc/default/hostapd` 命令打开默认配置文件,然后找到 `#DAEMON_CONF=""`,替换成 `DAEMON_CONF="/etc/hostapd/hostapd.conf"`。
|
||||
|
||||
**5. 配置 Dnsmasq**
|
||||
|
||||
自带的 dnsmasq 配置文件包含很多说明信息,方便你使用它,但是我们不需要那么多选项,我建议把它移动到别的地方(而不要删除它),然后自己创建一个新文件:
|
||||
|
||||
```
|
||||
sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
|
||||
sudo nano /etc/dnsmasq.conf
|
||||
```
|
||||
|
||||
粘贴下面的信息到新文件中:
|
||||
|
||||
```
|
||||
interface=wlan0 # Use interface wlan0
|
||||
listen-address=192.168.1.1 # Explicitly specify the address to listen on
|
||||
bind-interfaces # Bind to the interface to make sure we aren't sending things elsewhere
|
||||
server=8.8.8.8 # Forward DNS requests to Google DNS
|
||||
domain-needed # Don't forward short names
|
||||
bogus-priv # Never forward addresses in the non-routed address spaces.
|
||||
dhcp-range=192.168.1.50,192.168.1.100,12h # Assign IP addresses in that range with a 12 hour lease time
|
||||
```
|
||||
|
||||
**6. 设置 IPv4 转发**
|
||||
|
||||
最后我们需要做的事就是配置包转发,用 `sudo nano /etc/sysctl.conf` 命令打开 `sysctl.conf` 文件,将包含 `net.ipv4.ip_forward=1`的那一行之前的#号删除,它将在下次重启时生效。
|
||||
|
||||
我们还需要给连接到树莓派的设备通过 WIFI 分享互联网连接,做一个 `wlan0`和 `eth0` 之间的 NAT。我们可以参照下面的脚本来实现。
|
||||
|
||||
```
|
||||
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
|
||||
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
|
||||
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
|
||||
```
|
||||
|
||||
我命名这个脚本名为 `hotspot-boot.sh`,然后让它可以执行:
|
||||
|
||||
```
|
||||
sudo chmod 755 hotspot-boot.sh
|
||||
```
|
||||
|
||||
该脚本应该在树莓派启动的时候运行。有很多方法实现,下面是我实现的方式:
|
||||
|
||||
1. 把文件放到`/home/pi/scripts`目录下。
|
||||
2. 输入`sudo nano /etc/rc.local`命令编辑 `rc.local` 文件,将运行该脚本的命令放到 `exit 0`之前。(更多信息参照[这里][16])。
|
||||
|
||||
编辑后`rc.local`看起来像这样:
|
||||
|
||||
```
|
||||
#!/bin/sh -e
|
||||
#
|
||||
# rc.local
|
||||
#
|
||||
# This script is executed at the end of each multiuser runlevel.
|
||||
# Make sure that the script will "exit 0" on success or any other
|
||||
# value on error.
|
||||
#
|
||||
# In order to enable or disable this script just change the execution
|
||||
# bits.
|
||||
#
|
||||
# By default this script does nothing.
|
||||
|
||||
# Print the IP address
|
||||
_IP=$(hostname -I) || true
|
||||
if [ "$_IP" ]; then
|
||||
printf "My IP address is %s\n" "$_IP"
|
||||
fi
|
||||
|
||||
sudo /home/pi/scripts/hotspot-boot.sh &
|
||||
|
||||
exit 0
|
||||
|
||||
```
|
||||
|
||||
#### 安装 Samba 服务和 NTFS 兼容驱动
|
||||
|
||||
我们要安装下面几个软件来启用 samba 协议,使[文件浏览器][20]能够访问树莓派分享的文件夹,`ntfs-3g` 可以使我们能够访问移动硬盘中 ntfs 文件系统的文件。
|
||||
|
||||
```
|
||||
sudo apt-get install ntfs-3g
|
||||
sudo apt-get install samba samba-common-bin
|
||||
```
|
||||
|
||||
你可以参照[这些文档][17]来配置 Samba。
|
||||
|
||||
重要提示:参考的文档介绍的是挂载外置硬盘到树莓派上,我们不这样做,是因为在这篇文章写作的时候,树莓派在启动时的 auto-mounts 功能同时将 SD 卡和优盘挂载到`/media/pi/`上,该文章有一些多余的功能我们也不会采用。
|
||||
|
||||
### 2. Python 脚本
|
||||
|
||||
树莓派配置好后,我们需要开发脚本来实际拷贝和备份照片。注意,这个脚本只是提供了特定的自动化备份进程,如果你有基本的 Linux/树莓派命令行操作的技能,你可以 ssh 进树莓派,然后创建需要的文件夹,使用`cp`或`rsync`命令拷贝你自己的照片从一个设备到另外一个设备上。在脚本里我们用`rsync`命令,这个命令比较可靠而且支持增量备份。
|
||||
|
||||
这个过程依赖两个文件,脚本文件自身和`backup_photos.conf`这个配置文件,后者只有几行包含被挂载的目的驱动器(优盘)和应该挂载到哪个目录,它看起来是这样的:
|
||||
|
||||
```
|
||||
mount folder=/media/pi/
|
||||
destination folder=PDRIVE128GB
|
||||
```
|
||||
|
||||
重要提示:在这个符号`=`前后不要添加多余的空格,否则脚本会失效。
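如果你希望配置文件对 `=` 两边的空格更宽容一些,可以参考下面这个小片段(仅为示意,并不是原脚本的一部分),在拆分之后去掉多余的空白即可:

```
# 示意:对 "name = value" 这类行做更宽容的解析(非原脚本的一部分)
def parse_conf_line(line):
    name, _, value = line.partition("=")
    return name.strip(), value.strip()

print(parse_conf_line("mount folder = /media/pi/"))
# ('mount folder', '/media/pi/')
```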
|
||||
|
||||
下面是这个 Python 脚本,我把它命名为`backup_photos.py`,把它放到了`/home/pi/scripts/`目录下,我在每行都做了注释可以方便的查看各行的功能。
|
||||
|
||||
```
|
||||
#!/usr/bin/python3
|
||||
|
||||
import os
|
||||
import sys
|
||||
from sh import rsync
|
||||
|
||||
'''
|
||||
脚本将挂载到 /media/pi 的 SD 卡上的内容复制到目的磁盘的同名目录下,目的磁盘的名字在 .conf文件里定义好了。
|
||||
|
||||
|
||||
Argument: label/name of the mounted SD Card.
|
||||
'''
|
||||
|
||||
CONFIG_FILE = '/home/pi/scripts/backup_photos.conf'
|
||||
ORIGIN_DEV = sys.argv[1]
|
||||
|
||||
def create_folder(path):
|
||||
|
||||
print ('attempting to create destination folder: ',path)
|
||||
if not os.path.exists(path):
|
||||
try:
|
||||
os.mkdir(path)
|
||||
print ('Folder created.')
|
||||
except:
|
||||
print ('Folder could not be created. Stopping.')
|
||||
return
|
||||
else:
|
||||
print ('Folder already in path. Using that instead.')
|
||||
|
||||
|
||||
|
||||
confFile = open(CONFIG_FILE,'rU')
|
||||
#重要:: rU 选项将以统一换行模式打开文件,
|
||||
#所以 \n 和/或 \r 都被识别为一个新行。
|
||||
|
||||
confList = confFile.readlines()
|
||||
confFile.close()
|
||||
|
||||
|
||||
for line in confList:
|
||||
line = line.strip('\n')
|
||||
|
||||
try:
|
||||
name , value = line.split('=')
|
||||
|
||||
if name == 'mount folder':
|
||||
mountFolder = value
|
||||
elif name == 'destination folder':
|
||||
destDevice = value
|
||||
|
||||
|
||||
except ValueError:
|
||||
print ('Incorrect line format. Passing.')
|
||||
pass
|
||||
|
||||
|
||||
destFolder = mountFolder+destDevice+'/'+ORIGIN_DEV
|
||||
create_folder(destFolder)
|
||||
|
||||
print ('Copying files...')
|
||||
|
||||
# 取消这行备注将删除不在源处的文件
|
||||
# rsync("-av", "--delete", mountFolder+ORIGIN_DEV, destFolder)
|
||||
rsync("-av", mountFolder+ORIGIN_DEV+'/', destFolder)
|
||||
|
||||
print ('Done.')
|
||||
```
|
||||
|
||||
### 3. iPad Pro 的配置
|
||||
|
||||
因为重活都由树莓派干了,文件不通过 iPad Pro 传输,这比我[之前尝试的一种方案][18]有巨大的优势。我们在 iPad 上只需要安装上 [Prompt2][19] 来通过 SSH 连接树莓派就行了,这样你既可以运行 Python 脚本也可以手动复制文件了。
|
||||
|
||||

|
||||
|
||||
*iPad 用 Prompt2 通过 SSH 连接树莓派*
|
||||
|
||||
因为我们安装了 Samba,我们可以以更图形化的方式访问连接到树莓派的 USB 设备,你可以看视频,在不同的设备之间复制和移动文件,[文件浏览器][20]对于这种用途非常完美。
|
||||
|
||||
### 4. 将它们结合在一起
|
||||
|
||||
我们假设`SD32GB-03`是连接到树莓派 USB 端口之一的 SD 卡的卷标,`PDRIVE128GB`是那个优盘的卷标,也连接到设备上,并在上面指出的配置文件中定义好。如果我们想要备份 SD 卡上的图片,我们需要这么做:
|
||||
|
||||
1. 给树莓派加电打开,将驱动器自动挂载好。
|
||||
2. 连接树莓派配置好的 WIFI 网络。
|
||||
3. 用 [Prompt2][21] 这个 app 通过 SSH 连接到树莓派。
|
||||
4. 连接好后输入下面的命令:`python3 backup_photos.py SD32GB-03`
|
||||
|
||||
首次备份需要一些时间,这取决于你的 SD 卡使用了多少容量。这也意味着你需要一直保持 iPad 与树莓派的连接不断开;你可以在运行脚本时加上 `nohup` 命令来避免这一点:
|
||||
|
||||
```
|
||||
nohup python3 backup_photos.py SD32GB-03 &
|
||||
```
|
||||
|
||||

|
||||
|
||||
*运行完成的脚本如图所示*
|
||||
|
||||
### 未来的定制
|
||||
|
||||
我在树莓派上安装了 vnc 服务,这样我可以通过其它计算机或在 iPad 上用 [Remoter App][23]连接树莓派的图形界面,我安装了 [BitTorrent Sync][24] 用来远端备份我的图片,当然需要先设置好。当我有了可以运行的解决方案之后,我会补充我的文章。
|
||||
|
||||
你可以在下面发表你的评论和问题,我会在此页下面回复。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
|
||||
|
||||
作者:[Lenin][a]
|
||||
译者:[jiajia9linuxer](https://github.com/jiajia9linuxer)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
|
||||
[1]: http://bit.ly/1MVVtZi
|
||||
[2]: http://www.amazon.com/dp/B01D3NZIMA/?tag=movinelect0e-20
|
||||
[3]: http://www.amazon.com/dp/B01CD5VC92/?tag=movinelect0e-20
|
||||
[4]: http://www.amazon.com/dp/B010Q57T02/?tag=movinelect0e-20
|
||||
[5]: http://www.amazon.com/dp/B01F1PSFY6/?tag=movinelect0e-20
|
||||
[6]: http://amzn.to/293kPqX
|
||||
[7]: http://amzn.to/290syFY
|
||||
[8]: http://amzn.to/290syFY
|
||||
[9]: http://amzn.to/290syFY
|
||||
[10]: http://amzn.to/290syFY
|
||||
[11]: http://amzn.to/293kPqX
|
||||
[12]: https://www.raspberrypi.org/downloads/noobs/
|
||||
[13]: https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
|
||||
[14]: https://www.iterm2.com/
|
||||
[15]: https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/
|
||||
[16]: https://www.raspberrypi.org/documentation/linux/usage/rc-local.md
|
||||
[17]: http://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/
|
||||
[18]: http://bit.ly/1MVVtZi
|
||||
[19]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
|
||||
[20]: https://itunes.apple.com/us/app/filebrowser-access-files-on/id364738545?mt=8&uo=4&at=11lqkH
|
||||
[21]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
|
||||
[22]: https://en.m.wikipedia.org/wiki/Nohup
|
||||
[23]: https://itunes.apple.com/us/app/remoter-pro-vnc-ssh-rdp/id519768191?mt=8&uo=4&at=11lqkH
|
||||
[24]: https://getsync.com/
|
@ -0,0 +1,88 @@
|
||||
适用于所有发行版的 Linux 应用程序——是否真的有好处呢?
|
||||
============================================================================
|
||||
|
||||

|
||||
|
||||
让我们回顾一下 Linux 社区最新的愿景——推动去中心化的应用来解决发行版的碎片化。
|
||||
|
||||
继上周的文章:“[Snap、Flatpak 这种通吃所有发行版的打包方式真的有用吗?][1]” 之后,一系列新观点浮出水面,其中可能包含关于这样应用是否有用的重要信息。
|
||||
|
||||
### 缺点
|
||||
|
||||
就这个话题在[这里][2]的评论,一个叫 Till 的 [Gentoo][3] 使用者,对于上一次我们未能完全解释的问题给出了一些新的观点。
|
||||
|
||||
对于上一次我们仅仅称之为“膨胀”的东西,Till 从另一个角度做了进一步剖析,这可以帮助我们更好地理解它的组成及其影响。
|
||||
|
||||
这些被称之为“捆绑应用”的应用程序能够工作在所有发行版上的机制是——将它依赖的库都包含在它们的应用软件之中,Till 说:
|
||||
|
||||
> “捆绑应用装载了大量的并不被应用开发者所维护的软件。如果其中的某个函数库被发现了一个安全问题而需要更新的话,你得为每一个独立的应用程序安装更新来确保你的系统安全。”
|
||||
|
||||
本质上,Till 提出了一个**重要的安全问题**。但是它并不仅仅与安全有关系,它还关系到许多方面,比如说系统维护、原子更新等等。
|
||||
|
||||
此外,如果我们进一步假设:依赖的开发者们也许会合作,将他们的软件与使用它的应用程序一起发布(一种理想状况),但这将导致整个平台的开发整体放缓。
|
||||
|
||||
另一个将会导致的问题是**透明的依赖关系变得模糊**,就是说,如果你想知道一个应用程序捆绑了哪些依赖关系,你必须依靠开发者发布这些数据。
|
||||
|
||||
或者就像 Till 说的:“比如说像某某包是否已经包含了更新的某函数库这样的问题将会是你每天需要面对的。”
|
||||
|
||||
与之相反,对于 Linux 现行的标准的包管理方法(包括二进制包和源码包),你能够很容易的注意到哪些函数库已经在系统中更新了。
|
||||
|
||||
并且,你也可以很轻松的知道其它哪些应用使用了这个函数库,这就将你从繁琐的单独检查每一个应用程序的工作中解救了出来。
|
||||
|
||||
其他可能由膨胀导致的缺点包括:**更大的包体积**(每一个应用程序捆绑了依赖),**更高的内存占用**(没有共享函数库),并且,**少了一个包过滤机制**来防止恶意软件:发行版的包维护者也充当了一个在开发者和用户之间的过滤者,他保障了用户获得高质量的软件。
|
||||
|
||||
而在捆绑应用中就不再是这种情况了。
|
||||
|
||||
最后一点,Till 声称,尽管在某些情况下很有用,但是在大多数情况下,**捆绑应用程序将弱化自由软件在发行版中的地位**(专有软件供应商将被能够发布他们的软件而不用把它放到公共软件仓库中)。
|
||||
|
||||
除此之外,它引出了许多其他问题。很多问题都可以简单归结到开发人员身上。
|
||||
|
||||
### 优点
|
||||
|
||||
相比之下,另一个名叫 Sven 的人的评论试图反驳目前普遍反对使用捆绑应用程序的观点,从而证明和支持使用它。
|
||||
|
||||
“浪费空间?”——Sven 声称在当今世界我们有**很多其他事情在浪费磁盘空间**,比如电影存储在硬盘上、本地安装等等……
|
||||
|
||||
最终,这些事情浪费的空间要远远多于仅仅“ 100 MB 而你每天都要使用的程序。……因此浪费空间的说法实在很荒谬。”
|
||||
|
||||
“浪费运行内存?”——主要的观点有:
|
||||
|
||||
- **共享库浪费的内存要远远少于程序的运行时数据所占用的**。
|
||||
|
||||
- 而今运行**内存已经很便宜**了。
|
||||
|
||||
“安全梦魇”——不是每个应用程序的运行**真正的要注重安全**。
|
||||
|
||||
而且,**许多应用程序甚至从来没有过任何安全更新**,除非在“滚动更新的发行版”。
|
||||
|
||||
除了 Sven 这种从实用出发的观点以外,Till 其实也指出了捆绑应用在一些情况下也有着其优点:
|
||||
|
||||
- 专有软件的供应商想要保持他们的代码游离于公共仓库之外将更加容易。
|
||||
- 没有被你的发行版打包进去的小众应用程序将变得更加可行。
|
||||
- 在没有 Beta 包的二进制发行版中测试应用将变得简单。
|
||||
- 将用户从复杂的依赖关系中解放出来。
|
||||
|
||||
### 最后的思考
|
||||
|
||||
虽然关于此问题有着不同的想法,但是有一个被大家共同接受的观点是:**捆绑应用对于填补 Linux 生态系统有着其独到的作用。**
|
||||
|
||||
虽然如此,它的定位,不论是主流的还是边缘的,都变得愈发清晰,至少理论上是这样。
|
||||
|
||||
想要尽可能优化其系统的用户,在大多数情况下应该要避免使用捆绑应用。
|
||||
|
||||
而讲究易用性、尽可能在维护系统上少费劲的用户,可能应该会感觉这种新应用十分舒爽。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.iwillfolo.com/linux-applications-that-works-on-all-distributions-are-they-any-good/
|
||||
|
||||
作者:[Liron][a]
|
||||
译者:[Chao-zhi](https://github.com/Chao-zhi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.iwillfolo.com/category/editorials/
|
||||
[1]: https://linux.cn/article-7783-1.html
|
||||
[2]: http://www.proli.net/2016/06/25/gnulinux-bundled-application-ramblings/
|
||||
[3]: http://www.iwillfolo.com/5-reasons-use-gentoo-linux/
|
219
published/201609/20160809 How to build your own Git server.md
Normal file
219
published/201609/20160809 How to build your own Git server.md
Normal file
@ -0,0 +1,219 @@
|
||||
Git 系列(六):如何搭建你自己的 Git 服务器
|
||||
====================
|
||||
|
||||

|
||||
|
||||
现在我们将要学习如何搭建 git 服务器,如何编写自定义的 Git 钩子来在特定的事件触发相应的动作(例如通知),或者是发布你的代码到一个站点。
|
||||
|
||||
直到现在,我们主要讨论的还是以一个使用者的身份与 Git 进行交互。这篇文章中我将讨论 Git 的管理,并且设计一个灵活的 Git 框架。你可能会觉得这听起来是 “高阶 Git 技术” 或者 “只有狂热粉才能阅读” 的一种委婉说法,但是事实是这里面的每个任务都不需要很深的知识或者其他特殊的训练,只需要对 Git 的工作原理有基本的理解,可能再加上一丁点关于 Linux 的知识。
|
||||
|
||||
### 共享 Git 服务器
|
||||
|
||||
创建你自己的共享 Git 服务器意外地简单,而且在很多情况下,遇到的这点麻烦是完全值得的。不仅仅是因为它保证你有权限查看自己的代码,它还可以通过扩展为 Git 的使用敞开了一扇大门,例如个人 Git 钩子、无限制的数据存储、和持续集成与分发(CI & CD)。
|
||||
|
||||
如果你知道如何使用 Git 和 SSH,那么你已经知道怎么创建一个 Git 服务器了。Git 的设计方式,就是让你在创建或者 clone 一个仓库的时候,就完成了一半服务器的搭建。然后允许用 SSH 访问仓库,而且任何有权限访问的人都可以使用你的仓库作为 clone 的新仓库的基础。
|
||||
|
||||
但是,这只是一个小型的临时(ad-hoc)环境。只要做一些规划,你就可以搭建出功能相同、但设计更优良、拓展性更好的 Git 服务器。
|
||||
|
||||
首要之事:确认你的用户们,现在的用户以及之后的用户都要考虑。如果你是唯一的用户,那么没有任何改动的必要。但是如果你打算邀请其他的代码贡献者,那么你应该为你的开发者们设立一个专门的共享系统用户。
|
||||
|
||||
假定你有一个可用的服务器(如果没有,这不成问题,Git 会帮忙解决,CentOS 的 [树莓派 3][3] 是个不错的开始),然后第一步就是只允许使用 SSH 密钥认证的 SSH 登录。这比使用密码登录安全得多,因为这可以免于暴力破解,也可以通过直接删除用户密钥而禁用用户。
|
||||
|
||||
一旦你启用了 SSH 密钥认证,创建 `gituser` 用户。这是给你的所有授权的用户们的公共用户:
|
||||
|
||||
```
|
||||
$ su -c 'adduser gituser'
|
||||
```
|
||||
|
||||
然后切换到刚创建的 `gituser` 用户,创建一个 `~/.ssh` 的框架,并设置好合适的权限。这很重要,如果权限设置得太开放会使自己所保护的 SSH 没有意义。
|
||||
|
||||
```
|
||||
$ su - gituser
|
||||
$ mkdir .ssh && chmod 700 .ssh
|
||||
$ touch .ssh/authorized_keys
|
||||
$ chmod 600 .ssh/authorized_keys
|
||||
```
|
||||
|
||||
`authorized_keys` 文件里包含所有你的开发者们的 SSH 公钥,你开放权限允许他们可以在你的 Git 项目上工作。他们必须创建他们自己的 SSH 密钥对然后把他们的公钥给你。复制公钥到 gituser 用户下的 `authorized_keys` 文件中。例如,为一个叫 Bob 的开发者,执行以下命令:
|
||||
|
||||
```
|
||||
$ cat ~/path/to/id_rsa.bob.pub >> /home/gituser/.ssh/authorized_keys
|
||||
```
|
||||
|
||||
只要开发者 Bob 有私钥并且把相对应的公钥给你,Bob 就可以用 `gituser` 用户访问服务器。
|
||||
|
||||
但是,你并不是想让你的开发者们能使用服务器,即使只是以 `gituser` 的身份访问。你只是想给他们访问 Git 仓库的权限。因为这个特殊的原因,Git 提供了一个限制的 shell,准确的说是 `git-shell`。以 root 身份执行以下命令,把 `git-shell` 添加到你的系统中,然后设置成 `gituser` 用户的默认 shell。
|
||||
|
||||
```
|
||||
# grep git-shell /etc/shells || su -c "echo `which git-shell` >> /etc/shells"
|
||||
# su -c 'usermod -s git-shell gituser'
|
||||
```
|
||||
|
||||
现在 `gituser` 用户只能使用 SSH 来 push 或者 pull Git 仓库,并且无法使用任何一个可以登录的 shell。你应该把你自己添加到和 `gituser` 一样的组中,在我们的样例服务器中这个组的名字也是 `gituser`。
|
||||
|
||||
举个例子:
|
||||
|
||||
```
|
||||
# usermod -a -G gituser seth
|
||||
```
|
||||
|
||||
仅剩下的一步就是创建一个 Git 仓库。因为没有人能在服务器上直接与 Git 交互(也就是说,你之后不能 SSH 到服务器然后直接操作这个仓库),所以创建一个空的仓库 。如果你想使用这个放在服务器上的仓库来完成工作,你可以从它的所在处 `clone` 下来,然后在你的 home 目录下进行工作。
|
||||
|
||||
严格地讲,你不是必须创建这个空的(裸)仓库;它和一个正常的仓库一样工作。但是,一个空仓库没有工作目录树(working tree)(也就是说,从来没有任何分支在其中被检出(checkout)过)。这很重要,因为远程使用者们不允许 `push` 到一个已检出的有效分支上(如果你正在 `dev` 分支工作,然后突然有人把一些变更 `push` 到你的工作分支,你会有怎么样的感受?)。因为空仓库里没有检出的分支,所以这不会成为一个问题。
|
||||
|
||||
你可以把这个仓库放到任何你想放的地方,只要你想要放开权限给用户和用户组,让他们可以在仓库下工作。千万不要保存目录到比如说一个用户的 home 目录下,因为那里有严格的权限限制。保存到一个常规的共享地址,例如 `/opt` 或者 `/usr/local/share`。
|
||||
|
||||
以 root 身份创建一个空的仓库:
|
||||
|
||||
```
|
||||
# git init --bare /opt/jupiter.git
|
||||
# chown -R gituser:gituser /opt/jupiter.git
|
||||
# chmod -R 770 /opt/jupiter.git
|
||||
```
|
||||
|
||||
现在任何一个用户,只要他被认证为 `gituser` 或者在 `gituser` 组中,就可以从 jupiter.git 库中读取或者写入。在本地机器尝试以下操作:
|
||||
|
||||
```
|
||||
$ git clone gituser@example.com:/opt/jupiter.git jupiter.clone
|
||||
Cloning into 'jupiter.clone'...
|
||||
Warning: you appear to have cloned an empty repository.
|
||||
```
|
||||
|
||||
谨记:开发者们**一定**要把他们的 SSH 公钥加入到 `gituser` 用户下的 `authorized_keys` 文件里,或者说,如果他们有服务器上的用户(如果你给了他们用户),那么他们的用户必须属于 `gituser` 用户组。
|
||||
|
||||
### Git 钩子
|
||||
|
||||
运行你自己的 Git 服务器最赞的一件事之一就是可以使用 Git 钩子。Git 托管服务有时提供一个钩子类的接口,但是他们并不会给你真正的 Git 钩子来让你访问文件系统。Git 钩子是一个脚本,它将在一个 Git 过程的某些点运行;钩子可以运行在当一个仓库即将接收一个 commit 时、或者接受一个 commit 之后,或者即将接收一次 push 时,或者一次 push 之后等等。
|
||||
|
||||
这是一个简单的系统:任何放在 `.git/hooks` 目录下的脚本、使用标准的命名体系,就可按设计好的时间运行。一个脚本是否应该被运行取决于它的名字; `pre-push` 脚本在 `push` 之前运行,`post-receive` 脚本在接受 `commit` 之后运行等等。这或多或少的可以从名字上看出来。
|
||||
|
||||
脚本可以用任何语言写;只要某种脚本语言能在你的系统上执行(比如能输出 ‘hello world’),你就可以用这种语言来写 Git 钩子脚本。Git 默认带了一些例子,但是并没有启用。
|
||||
|
||||
想要动手试一个?这很简单。如果你没有现成的 Git 仓库,首先创建一个 Git 仓库:
|
||||
|
||||
```
|
||||
$ mkdir jupiter
|
||||
$ cd jupiter
|
||||
$ git init .
|
||||
```
|
||||
|
||||
然后写一个 “hello world” 的 Git 钩子。因为我为了支持老旧系统而使用 tcsh,所以我仍然用它作为我的脚本语言,你可以自由地使用自己喜欢的语言(Bash、Python、Ruby、Perl、Rust、Swift、Go):
|
||||
|
||||
```
|
||||
$ echo "#\!/bin/tcsh" > .git/hooks/post-commit
|
||||
$ echo "echo 'POST-COMMIT SCRIPT TRIGGERED'" >> ~/jupiter/.git/hooks/post-commit
|
||||
$ chmod +x ~/jupiter/.git/hooks/post-commit
|
||||
```
|
||||
|
||||
现在测试它的输出:
|
||||
|
||||
```
|
||||
$ echo "hello world" > foo.txt
|
||||
$ git add foo.txt
|
||||
$ git commit -m 'first commit'
|
||||
! POST-COMMIT SCRIPT TRIGGERED
|
||||
[master (root-commit) c8678e0] first commit
|
||||
1 file changed, 1 insertion(+)
|
||||
create mode 100644 foo.txt
|
||||
```
|
||||
|
||||
现在你已经实现了:你的第一个有功能的 Git 钩子。
|
||||
|
||||
### 有名的 push-to-web 钩子
|
||||
|
||||
Git 钩子最流行的用法就是自动 `push` 更改的代码到一个正在使用中的产品级 Web 服务器目录下。这是摆脱 FTP 的很好的方式,对于正在使用的产品保留完整的版本控制,整合并自动化内容的发布。
|
||||
|
||||
如果操作正确,网站发布工作会像以前一样很好的完成,而且在某种程度上,很精准。Git 真的好棒。我不知道谁最初想到这个主意,但是我是从 Emacs 和 Git 方面的专家,IBM 的 Bill von Hagen 那里第一次听到它的。他的文章包含关于这个过程的权威介绍:[Git 改变了分布式网页开发的游戏规则][1]。
|
||||
|
||||
### Git 变量
|
||||
|
||||
每一个 Git 钩子都有一系列不同的变量对应触发钩子的不同 Git 行为。你需不需要这些变量,主要取决于你写的程序。如果你只是需要一个当某人 push 代码时候的通用邮件通知,那么你就不需要什么特殊的东西,甚至也不需要编写额外的脚本,因为已经有现成的适合你的样例脚本。如果你想在邮件里查看 commit 信息和 commit 的作者,那么你的脚本就会变得相对麻烦些。
|
||||
|
||||
Git 钩子并不是被用户直接执行,所以要弄清楚如何收集可能会混淆的重要信息。事实上,Git 钩子脚本类似于其他的脚本,像 BASH、Python、C++ 等等一样从标准输入读取参数。不同的是,我们不会给它提供这个输入,所以,你在使用的时候,需要知道可能的输入参数。
|
||||
|
||||
在写 Git 钩子之前,看一下 Git 在你的项目目录下 `.git/hooks` 目录中提供的一些例子。举个例子,在这个 `pre-push.sample` 文件里,注释部分说明了如下内容:
|
||||
|
||||
```
|
||||
# $1 -- 即将 push 的远程仓库的名字
|
||||
# $2 -- 即将 push 的远程仓库的 URL
|
||||
# 如果 push 的时候,并没有一个命名的远程仓库,那么这两个参数将会一样。
|
||||
#
|
||||
# 提交的信息将以下列形式按行发送给标准输入
|
||||
# <local ref> <local sha1> <remote ref> <remote sha1>
|
||||
```
|
||||
|
||||
并不是所有的例子都是这么清晰,而且关于钩子获取变量的文档依旧缺乏(除非你去读 Git 的源码)。但是,如果你有疑问,你可以从线上[其他用户的尝试中][2]学习,或者你只是写一些基本的脚本,比如 `echo $1, $2, $3` 等等。
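比如,用 Python 写的话,一个用来观察 `pre-push` 钩子到底收到了什么的探查脚本大致可以像下面这样(只是一个实验用的示意脚本,参数的具体含义以你所用 Git 版本的文档和示例为准):

```
#!/usr/bin/env python3
# 实验用的 pre-push 探查脚本示意:打印收到的命令行参数和标准输入,
# 方便弄清楚钩子在你的 Git 版本里到底拿到了哪些变量。
import sys

print("命令行参数:", sys.argv[1:])          # 例如远程仓库名和 URL

for line in sys.stdin:
    # pre-push 的标准输入按行给出:
    # <local ref> <local sha1> <remote ref> <remote sha1>
    print("标准输入:", line.rstrip())

sys.exit(0)   # 返回非零值会中止这次 push
```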
|
||||
|
||||
### 分支检测示例
|
||||
|
||||
我发现,对于生产环境来说有一个共同的需求,就是需要一个只有在特定分支被修改之后,才会触发事件的钩子。以下就是如何跟踪分支的示例。
|
||||
|
||||
首先,Git 钩子本身是不受版本控制的。 Git 并不会跟踪它自己的钩子,因为对于钩子来说,它是 Git 的一部分,而不是你仓库的一部分。所以,Git 钩子可以监控你的 Git 服务器上的一个空仓库的 commit 记录和 push 记录,而不是你本地仓库的一部分。
|
||||
|
||||
我们来写一个 `post-receive`(也就是说,在 `commit` 被接受之后触发)钩子。第一步就是需要确定分支名:
|
||||
|
||||
```
|
||||
#!/bin/tcsh
|
||||
|
||||
foreach arg ( $< )
|
||||
set argv = ( $arg )
|
||||
set refname = $1
|
||||
end
|
||||
```
|
||||
|
||||
这个 for 循环用来读入第一个参数 `$1` ,然后循环用第二个参数 `$2` 去覆盖它,然后用第三个参数 `$3` 再这样。在 Bash 中有一个更好的方法,使用 `read` 命令,并且把值放入数组里。但是,这里是 tcsh,并且变量的顺序可以预测的,所以,这个方法也是可行的。
|
||||
|
||||
当我们有了 commit 记录的 `refname`,我们就能使用 Git 去找到这个分支的供人看的名字:
|
||||
|
||||
```
|
||||
set branch = `git rev-parse --symbolic --abbrev-ref $refname`
|
||||
echo $branch #DEBUG
|
||||
```
|
||||
|
||||
然后把这个分支名和我们想要触发的事件的分支名关键字进行比较:
|
||||
|
||||
```
|
||||
if ( "$branch" == "master" ) then
|
||||
echo "Branch detected: master"
|
||||
git \
|
||||
--work-tree=/path/to/where/you/want/to/copy/stuff/to \
|
||||
checkout -f $branch || echo "master fail"
|
||||
else if ( "$branch" == "dev" ) then
|
||||
echo "Branch detected: dev"
|
||||
Git \
|
||||
--work-tree=/path/to/where/you/want/to/copy/stuff/to \
|
||||
checkout -f $branch || echo "dev fail"
|
||||
else
|
||||
echo "Your push was successful."
|
||||
echo "Private branch detected. No action triggered."
|
||||
endif
|
||||
```
|
||||
|
||||
给这个脚本分配可执行权限:
|
||||
|
||||
```
|
||||
$ chmod +x ~/jupiter/.git/hooks/post-receive
|
||||
```
|
||||
|
||||
现在,当一个用户提交到服务器的 master 分支,那些代码就会被复制到一个生产环境的目录,提交到 dev 分支则会被复制到另外的地方,其他分支将不会触发这些操作。
|
||||
|
||||
同时,创造一个 `pre-commit` 脚本也很简单。比如,判断一个用户是否在他们不该 `push` 的分支上 `push` 代码,或者对 commit 信息进行解析等等。
|
||||
|
||||
Git 钩子也可以变得复杂,而且它们因为 Git 的工作流的抽象层次不同而变得难以理解,但是它们确实是一个强大的系统,让你能够在你的 Git 基础设施上针对所有的行为进行对应的操作。如果你是一个 Git 重度用户,或者一个全职 Git 管理员,那么 Git 钩子是值得学习的,只有当你熟悉这个过程,你才能真正掌握它。
|
||||
|
||||
在我们这个系列下一篇也是最后一篇文章中,我们将会学习如何使用 Git 来管理非文本的二进制数据,比如音频和图片。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/life/16/8/how-construct-your-own-git-server-part-6
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
译者:[maywanting](https://github.com/maywanting)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[1]: http://www.ibm.com/developerworks/library/wa-git/
|
||||
[2]: https://www.analysisandsolutions.com/code/git-hooks-summary-cheat-sheet.htm
|
||||
[3]: https://wiki.centos.org/SpecialInterestGroup/AltArch/Arm32/RaspberryPi3
|
212
published/201609/20160813 Journey-to-HTTP2.md
Normal file
212
published/201609/20160813 Journey-to-HTTP2.md
Normal file
@ -0,0 +1,212 @@
|
||||
漫游 HTTP/2
|
||||
===================
|
||||
|
||||
自从我写了上一篇博文之后,就再也找不到空闲时间写文章了。今天我终于可以抽出时间写一些关于 HTTP 的东西。
|
||||
|
||||
我认为每一个 web 开发者都应该对这个支撑了整个 Web 世界的 HTTP 协议有所了解,这样才能帮助你更好的完成开发任务。
|
||||
|
||||
在这篇文章中,我将讨论什么是 HTTP,它是怎么产生的,它的地位,以及我们应该怎么使用它。
|
||||
|
||||
### HTTP 是什么
|
||||
|
||||
首先我们要明白 HTTP 是什么。HTTP 是一个基于 `TCP/IP` 的应用层通信协议,它是客户端和服务端在互联网互相通讯的标准。它定义了内容是如何通过互联网进行请求和传输的。HTTP 是在应用层中抽象出的一个标准,使得主机(客户端和服务端)之间的通信得以通过 `TCP/IP` 来进行请求和响应。TCP 默认使用的端口是 `80`,当然也可以使用其它端口,比如 HTTPS 使用的就是 `443` 端口。
|
||||
|
||||
### `HTTP/0.9` - 单行协议 (1991)
|
||||
|
||||
HTTP 最早的规范可以追溯到 1991 年,那时候的版本是 `HTTP/0.9`,该版本极其简单,只有一个叫做 `GET` 的 请求方式。如果客户端要访问服务端上的一个页面,只需要如下非常简单的请求:
|
||||
|
||||
```
|
||||
GET /index.html
|
||||
```
|
||||
|
||||
服务端的响应类似如下:
|
||||
|
||||
```
|
||||
(response body)
|
||||
(connection closed)
|
||||
```
|
||||
|
||||
就这么简单,服务端捕获到请求后立马返回 HTML 并且关闭连接,在这之中:
|
||||
|
||||
- 没有头信息(headers)
|
||||
- 仅支持 `GET` 这一种请求方法
|
||||
- 必须返回 HTML
|
||||
|
||||
如同你所看到的,当时的 HTTP 协议只是一块基础的垫脚石。
|
||||
|
||||
### HTTP/1.0 - 1996
|
||||
|
||||
在 1996 年,新版本的 HTTP 对比之前的版本有了极大的改进,同时也被命名为 `HTTP/1.0`。
|
||||
|
||||
与 `HTTP/0.9` 只能返回 HTML 不同的是,`HTTP/1.0` 支持处理多种返回的格式,比如图片、视频、文本或者其他格式的文件。它还增加了更多的请求方法(如 `POST` 和 `HEAD`),请求和响应的格式也相应做了改变,两者都增加了头信息;引入了状态码来定义返回的特征;引入了字符集支持;支持多段类型(multi-part)、用户验证信息、缓存、内容编码格式等等。
|
||||
|
||||
一个简单的 HTTP/1.0 请求大概是这样的:
|
||||
|
||||
```
|
||||
GET / HTTP/1.0
|
||||
Host: kamranahmed.info
|
||||
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5)
|
||||
Accept: */*
|
||||
```
|
||||
|
||||
正如你所看到的,在请求中附带了客户端中的一些个人信息、响应类型要求等内容。这些是在 `HTTP/0.9` 无法实现的,因为那时候没有头信息。
|
||||
|
||||
一个对上述请求的响应例子如下所示:
|
||||
|
||||
```
|
||||
HTTP/1.0 200 OK
|
||||
Content-Type: text/plain
|
||||
Content-Length: 137582
|
||||
Expires: Thu, 05 Dec 1997 16:00:00 GMT
|
||||
Last-Modified: Wed, 5 August 1996 15:55:28 GMT
|
||||
Server: Apache 0.84
|
||||
|
||||
(response body)
|
||||
(connection closed)
|
||||
```
|
||||
|
||||
从 `HTTP/1.0` (HTTP 后面跟的是版本号)早期开始,在状态码 `200` 之后就附带一个原因短语(你可以用来描述状态码)。
|
||||
|
||||
在这个较新一点的版本中,请求和响应的头信息仍然必须是 `ASCII` 编码,但是响应的内容可以是任意类型,如图片、视频、HTML、文本或其他类型,服务器可以返回任意内容给客户端。所以这之后,`HTTP` 中的“超文本(Hyper Text)”成了名不副实。 `HMTP` (超媒体传输协议 - Hypermedia transfer protocol)可能会更有意义,但是我猜我们还是会一直沿用这个名字。
|
||||
|
||||
`HTTP/1.0` 的一个主要缺点就是它不能在一个连接内拥有多个请求。这意味着,当客户端需要从服务器获取东西时,必须建立一个新的 TCP 连接,并且处理完单个请求后连接即被关闭。需要下一个东西时,你必须重新建立一个新的连接。这样的坏处在哪呢?假设你要访问一个有 `10` 张图片,`5` 个样式表(stylesheet)和 `5` 个 JavaScript 的总计 `20` 个文件才能完整展示的一个页面。由于一个连接在处理完成一次请求后即被关闭,所以将有 `20` 个单独的连接,每一个文件都将通过各自对应的连接单独处理。当连接数量变得庞大的时候就会面临严重的性能问题,因为 `TCP` 启动需要经过三次握手,才能缓慢开始。
|
||||
|
||||
#### 三次握手
|
||||
|
||||
三次握手是一个简单的模型,所有的 `TCP` 连接在传输应用数据之前都需要在三次握手中传输一系列数据包。
|
||||
|
||||
- `SYN` - 客户端选取一个随机数,我们称为 `x`,然后发送给服务器。
|
||||
- `SYN ACK` - 服务器在响应的 `ACK` 包中包含了一个由服务器随机产生的数字(我们称为 `y`),并且把客户端发送的 `x` 加 `1`(即 `x+1`)一并返回给客户端。
|
||||
- `ACK` - 客户端从服务器接收到 `y` 之后,把 `y` 加上 `1` 放在一个 `ACK` 包中返回给服务器。
|
||||
|
||||
一旦三次握手完成后,客户端和服务器之间就可以开始交换数据。值得注意的是,当客户端发出最后一个 `ACK` 数据包后,就可以立刻向服务器发送应用数据包,而服务器则需要等到收到这个 `ACK` 数据包后才能接受应用数据包。
|
||||
|
||||

|
||||
|
||||
> 请注意,上图有点小问题,客户端发回的最后一个 ACK 包仅包含 `y+1`,上图应该是 `ACK:y+1` 而不是 `ACK:x+1,y+1`
|
||||
|
||||
然而,某些 HTTP/1.0 的实现试图通过新引入一个称为 `Connection: keep-alive` 的头信息来克服这一问题,这个头信息意味着告诉服务器“嘿,服务器,请不要关闭此连接,我还要用它”。但是,这并没有得到广泛的支持,问题依然存在。
|
||||
|
||||
除了无连接之外,HTTP 还是一个无状态的协议,即服务器不维护有关客户端的信息。因此每个请求都必须带上服务器完成该请求所需的全部信息,与之前的请求无关。这使问题雪上加霜:客户端除了需要新建大量连接之外,在每次连接中还需要发送许多重复的数据,这导致了带宽的大量浪费。
|
||||
|
||||
### `HTTP/1.1` - 1999
|
||||
|
||||
`HTTP/1.0` 发布仅仅 3 年之后,下一个版本,即 `HTTP/1.1` 就在 1999 年发布了,它解决了其前身的很多问题,主要的改进包括:
|
||||
|
||||
- **增加了许多 HTTP 请求方法**,包括 `PUT`、`PATCH`、`HEAD`、`OPTIONS`、`DELETE`。
|
||||
- **主机标识符** `Host` 在 `HTTP/1.0` 并不是必须的,而在 `HTTP/1.1` 是必须的。
|
||||
- 如上所述的**持久连接**。在 `HTTP/1.0` 中每个连接只有一个请求并在该请求结束后被立即关闭,这导致了性能问题和增加了延迟。 `HTTP/1.1` 引入了持久连接,即连接在默认情况下是不关闭并保持开放的,这允许多个连续的请求使用这个连接。要关闭该连接只需要在头信息加入 `Connection: close`,客户通常在最后一个请求里发送这个头信息就能安全地关闭连接。
|
||||
- 新版本还引入了“**管线化(pipelining)**”的支持,客户端可以不用等待服务器返回响应,就能在同一个连接内发送多个请求给服务器,而服务器必须以接收到的请求相同的序列发送响应。但是你可能会问了,客户端如何知道哪里是第一个响应下载完成而下一个响应内容开始的地方呢?要解决这个问题,头信息必须有 `Content-Length`,客户可以使用它来确定哪些响应结束之后可以开始等待下一个响应。
|
||||
- 值得注意的是,为了从持久连接或管线化中受益, 头部信息必须包含 `Content-Length`,因为这会使客户端知道什么时候完成了传输,然后它可以发送下一个请求(持久连接中,以正常的依次顺序发送请求)或开始等待下一个响应(启用管线化时)。
|
||||
- 但是,使用这种方法仍然有一个问题。那就是,如果数据是动态的,服务器无法提前知道内容长度怎么办?在这种情况下,你就不能从这种方法中获益了吗?为了解决这个问题,`HTTP/1.1` 引进了分块编码:服务器可以省略 `Content-Length` 而改用分块编码(下一条会详细说明)。但是,如果两者都不可用,那么连接必须在请求结束时关闭。
|
||||
- 对动态内容的**分块传输**:当服务器开始传输时无法预先得到 `Content-Length`,它可以把内容按块发送(一块接一块),并在发送每一块时先标明该块的长度。当所有数据块发送完毕,即整个传输完成后,它再发送一个长度为 0 的空块,以便客户端知道传输已完成。为了把分块传输告知客户端,服务器会在头信息中包含 `Transfer-Encoding: chunked`(本列表之后附有一个简单示例)。
|
||||
- 不像 HTTP/1.0 中只有 Basic 身份验证方式,`HTTP/1.1` 包括摘要验证方式(digest authentication)和代理验证方式(proxy authentication)。
|
||||
- 缓存。
|
||||
- 范围请求(Byte Ranges)。
|
||||
- 字符集。
|
||||
- 内容协商(Content Negotiation)。
|
||||
- 客户端 cookies。
|
||||
- 支持压缩。
|
||||
- 新的状态码。
|
||||
- 等等。
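
下面是一个分块传输响应的简化示意(内容和长度仅为演示):每一块以十六进制表示的长度开头,最后以一个长度为 0 的块结尾。

```
HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

7
Mozilla
9
Developer
0

```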
|
||||
|
||||
我不打算在这里讨论所有 `HTTP/1.1` 的特性,因为你可以围绕这个话题找到很多关于这些的讨论。我建议你阅读 [`HTTP/1.0` 和 `HTTP/1.1` 版本之间的主要差异][5],希望了解更多可以读[原始的 RFC][6]。
|
||||
|
||||
`HTTP/1.1` 在 1999 年推出,到现在已经是多年前的标准。虽然它比前一代改善了很多,但是网络日新月异,它已经垂垂老矣。相比之前,加载网页更是一个资源密集型任务,打开一个简单的网页已经需要建立超过 30 个连接。你或许会说,`HTTP/1.1` 具有持久连接,为什么还有这么多连接呢?其原因是,在任何时刻,`HTTP/1.1` 的每个连接上只能有一个未完成的请求。`HTTP/1.1` 试图通过引入管线化来解决这个问题,但它并没有完全地解决:一旦管线中遇到了缓慢的请求或庞大的请求,后面的请求便被阻塞住,必须等待上一个请求完成。为了克服 `HTTP/1.1` 的这些缺点,开发人员开始实现一些变通办法,例如使用 spritesheets、在 CSS 中编码图像、单个巨型 CSS / JavaScript 文件、[域名切分][7]等。
|
||||
|
||||
### SPDY - 2009
|
||||
|
||||
谷歌走在业界前列,为了使网络速度更快,提高网络安全,同时减少网页的等待时间,他们开始实验替代的协议。在 2009 年,他们宣布了 `SPDY`。
|
||||
|
||||
> `SPDY` 是谷歌的商标,而不是一个缩写。
|
||||
|
||||
显而易见的是,如果我们继续增加带宽,网络性能开始的时候能够得到提升,但是到了某个阶段后带来的性能提升就很有限了。但是如果把这些优化放在等待时间上,比如减少等待时间,将会有持续的性能提升。这就是 `SPDY` 优化之前的协议的核心思想,减少等待时间来提升网络性能。
|
||||
|
||||
> 对于那些不知道其中区别的人,等待时间就是延迟,即数据从源到达目的地需要多长时间(单位为毫秒),而带宽是每秒钟数据的传输量(比特每秒)。
|
||||
|
||||
`SPDY` 的特点包括:复用、压缩、优先级、安全性等。我不打算展开 `SPDY` 的细节。在下一章节,当我们将介绍 `HTTP/2`,这些都会被提到,因为 `HTTP/2` 大多特性是从 `SPDY` 受启发的。
|
||||
|
||||
`SPDY` 没有试图取代 HTTP,它是处于应用层的 HTTP 之上的一个传输层,它只是在请求被发送之前做了一些修改。它开始成为事实标准,大多数浏览器都开始支持了。
|
||||
|
||||
2015年,谷歌不想有两个相互竞争的标准,所以他们决定将其合并到 HTTP 协议,这样就导致了 `HTTP/2` 的出现和 `SPDY` 的废弃。
|
||||
|
||||
### `HTTP/2` - 2015
|
||||
|
||||
现在想必你明白了为什么我们需要另一个版本的 HTTP 协议了。`HTTP/2` 是专为低延迟的内容传输而设计的。主要特点以及与 `HTTP/1.1` 的差异包括:
|
||||
|
||||
- 使用二进制替代明文
|
||||
- 多路传输 - 多个异步 HTTP 请求可以使用单一连接
|
||||
- 报头使用 HPACK 压缩
|
||||
- 服务器推送 - 单个请求多个响应
|
||||
- 请求优先级
|
||||
- 安全性
|
||||
|
||||

|
||||
|
||||
#### 1. 二进制协议
|
||||
|
||||
`HTTP/2` 通过使其成为一个二进制协议以解决 HTTP/1.x 中存在的延迟问题。作为一个二进制协议,它更容易解析,但可读性却不如 `HTTP/1.x`。帧(frames)和流(stream)的概念组成了 `HTTP/2` 的主要部分。
|
||||
|
||||
**帧和流**
|
||||
|
||||
现在 HTTP 消息是由一个或多个帧组成的。`HEADERS` 帧承载了元数据(meta data),`DATA` 帧则承载了内容。还有其他类型的帧(`HEADERS`、`DATA`、`RST_STREAM`、`SETTINGS`、`PRIORITY` 等等),这些你可以通过[HTTP/2 规范][3]来了解。
|
||||
|
||||
每个 `HTTP/2` 请求和响应都被赋予一个唯一的流 ID,并切分成帧。帧就是一小片二进制数据。帧的集合称为流,每个帧都有个标识了其所属流的流 ID,所以在同一个流下的每个帧具有共同的报头。值得注意的是,除了流 ID 是唯一的之外,由客户端发起的请求使用奇数作为流 ID,而来自服务器的响应使用偶数作为流 ID。
|
||||
|
||||
除了 `HEADERS` 帧和 `DATA` 帧,另一个值得一提的帧是 `RST_STREAM`。这是一个特殊的帧类型,用来中止流,即客户可以发送此帧让服务器知道,我不再需要这个流了。在 `HTTP/1.1` 中让服务器停止给客户端发送响应的唯一方法是关闭连接,这样造成了延迟增加,因为之后要发送请求时,就要必须打开一个新的请求。而在 `HTTP/2` ,客户端可以使用 `RST_STREAM` 来停止接收特定的数据流,而连接仍然打开着,可以被其他请求使用。
|
||||
|
||||
#### 2. 多路传输
|
||||
|
||||
因为 `HTTP/2` 是一个二进制协议,而且如上所述它使用帧和流来传输请求与响应,一旦建立了 TCP 连接,相同连接内的所有流都可以通过这个 TCP 连接异步发送,而不用另外打开连接。反过来说,服务器也可以使用同样的异步方式返回响应,也就是说这些响应可以是无序的,客户端使用分配的流 ID 来识别数据包所属的流。这也解决了 HTTP/1.x 中请求管道被阻塞的问题,即客户端不必等待耗时的请求,其他请求仍然可以被处理。
|
||||
|
||||
#### 3. HPACK 请求头部压缩
|
||||
|
||||
RFC 花了一篇文档的篇幅来介绍针对发送的头信息的优化,它的本质是当我们在同一客户端上不断地访问服务器时,许多冗余数据在头部中被反复发送,有时候仅仅是 cookies 就能增加头信息的大小,这会占用许多带宽并增加传输延迟。为了解决这个问题,`HTTP/2` 引入了头信息压缩。
|
||||
|
||||

|
||||
|
||||
与请求和响应的内容不同,头信息不会以 `gzip` 或者 `compress` 等格式压缩,而是采用一种不同的机制:客户端和服务器共同维护一张头信息表,其中储存了经过哈夫曼编码的头信息值;后续请求中若出现同样的字段(例如用户代理(user agent)等),则不再重复发送其值,只需发送它在两端信息表中的引用即可。
|
||||
|
||||
我们说的头信息,它们同 HTTP/1.1 中一样,并在此基础上增加了一些伪头信息,如 `:scheme`,`:host` 和 `:path`。
|
||||
|
||||
#### 4. 服务器推送
|
||||
|
||||
服务器推送是 `HTTP/2` 的另一个巨大的特点。对于服务器来说,当它知道客户端需要一定的资源后,它可以把数据推送到客户端,即使客户端没有请求它。例如,假设一个浏览器在加载一个网页时,它解析了整个页面,发现有一些内容必须要从服务端获取,然后发送相应的请求到服务器以获取这些内容。
|
||||
|
||||
服务器推送减少了传输这些数据需要来回请求的次数。它是如何做到的呢?服务器通过发送一个名字为 `PUSH_PROMISE` 特殊的帧通知到客户端“嘿,我准备要发送这个资源给你了,不要再问我要了。”这个 `PUSH_PROMISE` 帧与要产生推送的流联系在一起,并包含了要推送的流 ID,也就是说这个流将会被服务器推送到客户端上。
|
||||
|
||||
#### 5. 请求优先级
|
||||
|
||||
当流被打开的时候,客户端可以在 `HEADERS` 帧中包含优先级信息来为流指定优先级。在任何时候,客户端都可以发送 `PRIORITY` 帧来改变流的优先级。
|
||||
|
||||
如果没有任何优先级信息,服务器将异步地无序地处理这些请求。如果流分配了优先级,服务器将基于这个优先级来决定需要分配多少资源来处理这个请求。
|
||||
|
||||
#### 6. 安全性
|
||||
|
||||
在是否强制使用 `TLS` 来增加安全性的问题上产生了大范围的讨论,讨论的结果是不强制使用。然而大多数厂商只有在使用 `TLS` 时才支持 `HTTP/2`。所以 `HTTP/2` 虽然规范上不要求加密,但是加密已经约定俗成了。这样,在 `TLS` 之上实现 `HTTP/2` 就有了一些强制要求,比如,`TLS` 的最低版本为 `1.2`,必须达到某种级别的最低限度的密钥大小,需要部署 ephemeral 密钥等等。
|
||||
|
||||
到现在 `HTTP/2` 已经[完全超越了 SPDY][4],并且还在不断成长。HTTP/2 带来了很多性能上的提升,我们应该开始部署它了。
|
||||
|
||||
如果你想更深入的了解细节,请访问[该规范的链接][1]和[ HTTP/2 性能提升演示的链接][2]。请在留言板写下你的疑问或者评论,最后如果你发现有错误,请同样留言指出。
|
||||
|
||||
这就是全部了,我们之后再见~
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://kamranahmed.info/blog/2016/08/13/http-in-depth/
|
||||
|
||||
作者:[Kamran Ahmed][a]
|
||||
译者:[NearTan](https://github.com/NearTan)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://github.com/kamranahmedse
|
||||
|
||||
[1]: https://http2.github.io/http2-spec
|
||||
[2]: http://www.http2demo.io/
|
||||
[3]: https://http2.github.io/http2-spec/#FrameTypes
|
||||
[4]: http://caniuse.com/#search=http2
|
||||
[5]: http://www.ra.ethz.ch/cdstore/www8/data/2136/pdf/pd1.pdf
|
||||
[6]: https://tools.ietf.org/html/rfc2616
|
||||
[7]: https://www.maxcdn.com/one/visual-glossary/domain-sharding-2/
|
@ -0,0 +1,217 @@
|
||||
理解 Linux 下 Shell 命令的不同分类及它们的用法
|
||||
====================
|
||||
|
||||
当你想要真正掌控你的 Linux 系统时,没有什么比命令行界面更能帮你做到这一点。为了成为一个 Linux 高手,你必须能够理解 [Shell 命令的不同类型][1],并且会在终端下正确地使用它们。
|
||||
|
||||
在 Linux 下,命令有几种类型,对于一个 Linux 新手来说,知道不同命令的含义才能够高效、准确地使用它们。因此,在这篇文章里,我们将会逐一介绍各种不同分类的 Linux Shell 命令。
|
||||
|
||||
需要注意一件非常重要的事:命令行界面和 Shell 是不同的,命令行界面只是为你提供一个访问 Shell 的方式。而 Shell ,它是可编程的,这使得它可以通过命令与内核进行交流。
|
||||
|
||||
下面列出了 Linux 下命令的不同种类:
|
||||
|
||||
### 1. 程序可执行文件(文件系统中的命令)
|
||||
|
||||
当你执行一条命令的时候,Linux 通过从左到右搜索存储在 `$PATH` 环境变量中的目录来找到这条命令的可执行文件。
|
||||
|
||||
你可以像下面这样查看存储在 `$PATH` 中的目录:
|
||||
|
||||
```
|
||||
$ echo $PATH
|
||||
/home/aaronkilik/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
|
||||
```
|
||||
|
||||
在上面的命令中,目录 `/home/aaronkilik/bin` 将会被首先搜索,紧跟着是 `/usr/local/sbin`,然后一直接着下去。在搜索过程中,搜索顺序是至关重要的。
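作为补充示意(输出只是举例,会因系统和别名设置而不同),可以用 shell 内置的 `type -a` 命令查看某条命令按 `$PATH` 顺序能找到的所有位置:

```
$ type -a ls
ls is aliased to `ls --color=auto'
ls is /bin/ls
```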
|
||||
|
||||
比如 `/bin` 目录里的文件系统中的命令:
|
||||
|
||||
```
|
||||
$ ll /bin/
|
||||
```
|
||||
|
||||
样本输出:
|
||||
|
||||
```
|
||||
total 16284
|
||||
drwxr-xr-x 2 root root 4096 Jul 31 16:30 ./
|
||||
drwxr-xr-x 23 root root 4096 Jul 31 16:29 ../
|
||||
-rwxr-xr-x 1 root root 6456 Apr 14 18:53 archdetect*
|
||||
-rwxr-xr-x 1 root root 1037440 May 17 16:15 bash*
|
||||
-rwxr-xr-x 1 root root 520992 Jan 20 2016 btrfs*
|
||||
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-calc-size*
|
||||
lrwxrwxrwx 1 root root 5 Jul 31 16:19 btrfsck -> btrfs*
|
||||
-rwxr-xr-x 1 root root 278376 Jan 20 2016 btrfs-convert*
|
||||
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-debug-tree*
|
||||
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-find-root*
|
||||
-rwxr-xr-x 1 root root 270136 Jan 20 2016 btrfs-image*
|
||||
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-map-logical*
|
||||
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-select-super*
|
||||
-rwxr-xr-x 1 root root 253816 Jan 20 2016 btrfs-show-super*
|
||||
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfstune*
|
||||
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-zero-log*
|
||||
-rwxr-xr-x 1 root root 31288 May 20 2015 bunzip2*
|
||||
-rwxr-xr-x 1 root root 1964536 Aug 19 2015 busybox*
|
||||
-rwxr-xr-x 1 root root 31288 May 20 2015 bzcat*
|
||||
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzcmp -> bzdiff*
|
||||
-rwxr-xr-x 1 root root 2140 May 20 2015 bzdiff*
|
||||
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzegrep -> bzgrep*
|
||||
-rwxr-xr-x 1 root root 4877 May 20 2015 bzexe*
|
||||
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzfgrep -> bzgrep*
|
||||
-rwxr-xr-x 1 root root 3642 May 20 2015 bzgrep*
|
||||
```
|
||||
|
||||
### 2. Linux 别名
|
||||
|
||||
这些是用户定义的命令,它们是通过 shell 内置命令 `alias` 创建的,其中包含其它一些带有选项和参数的 shell 命令。这个意图主要是使用新颖、简短的名字来替代冗长的命令。
|
||||
|
||||
创建一个别名的语法像下面这样:
|
||||
|
||||
```
|
||||
$ alias newcommand='command -options'
|
||||
```
|
||||
|
||||
通过下面的命令,可以列举系统中的所有别名:
|
||||
|
||||
```
|
||||
$ alias -p
|
||||
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
|
||||
alias egrep='egrep --color=auto'
|
||||
alias fgrep='fgrep --color=auto'
|
||||
alias grep='grep --color=auto'
|
||||
alias l='ls -CF'
|
||||
alias la='ls -A'
|
||||
alias ll='ls -alF'
|
||||
alias ls='ls --color=auto'
|
||||
```
|
||||
|
||||
要在 Linux 中创建一个新的别名,仔细阅读下面的例子。
|
||||
|
||||
```
|
||||
$ alias update='sudo apt update'
|
||||
$ alias upgrade='sudo apt dist-upgrade'
|
||||
$ alias -p | grep 'up'
|
||||
```
|
||||
|
||||

|
||||
|
||||
然而,上面这些我们创建的别名只是暂时生效,系统重启之后它们就不再起作用了。你可以像下面展示的这样,在 `.bashrc` 文件中设置永久别名。
|
||||
|
||||

|
||||
|
||||
添加以后,运行下面的命令来激活:
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
```
|
||||
|
||||
### 3. Linux Shell 保留字
|
||||
|
||||
在 shell 程序设计中,`if`、`then`、`fi`、`for`、`while`、`case`、`esac`、`else`、`until` 以及其他更多的字都是 shell 保留字。正如描述所暗示的,它们在 shell 中有特殊的含义。
|
||||
|
||||
你可以通过使用下面展示的 `type` 命令来列出所有的 shell 关键字:
|
||||
|
||||
```
|
||||
$ type if then fi for while case esac else until
|
||||
if is a shell keyword
|
||||
then is a shell keyword
|
||||
fi is a shell keyword
|
||||
for is a shell keyword
|
||||
while is a shell keyword
|
||||
case is a shell keyword
|
||||
esac is a shell keyword
|
||||
else is a shell keyword
|
||||
until is a shell keyword
|
||||
```
|
||||
|
||||
### 4. Linux shell 函数
|
||||
|
||||
一个 shell 函数是一组在当前 shell 内一起执行的命令。函数有利于在 shell 脚本中实现特殊任务。在 shell 脚本中写 shell 函数的传统形式是下面这样:
|
||||
|
||||
```
|
||||
function_name() {
|
||||
command1
|
||||
command2
|
||||
......
|
||||
}
|
||||
```
|
||||
|
||||
或者像这样:
|
||||
|
||||
```
|
||||
function function_name {
|
||||
command1
|
||||
command2
|
||||
......
|
||||
}
|
||||
```
|
||||
|
||||
让我们看一看如何在一个名为 shell_functions.sh 的脚本中写 shell 函数。
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
#write a shell function to update and upgrade installed packages
|
||||
upgrade_system(){
|
||||
sudo apt update;
|
||||
sudo apt dist-upgrade;
|
||||
}
|
||||
#execute function
|
||||
upgrade_system
|
||||
```
|
||||
|
||||
我们不必在命令行中分别执行 `sudo apt update` 和 `sudo apt dist-upgrade` 这两条命令,而是在脚本内编写了一个 shell 函数 upgrade_system,可以像执行一条命令一样执行这两条命令。
|
||||
|
||||
保存文件,然后使脚本可执行。最后像下面这样运行 shell 函数:
|
||||
|
||||
```
|
||||
$ chmod +x shell_functions.sh
|
||||
$ ./shell_functions.sh
|
||||
```
|
||||
|
||||

|
||||
|
||||
### 5. Linux Shell 内置命令
|
||||
|
||||
这些是在 shell 中内置的 Linux 命令,所以你无法在文件系统中找到它们。这些命令包括 `pwd`、`cd`、`bg`、`alias`、`history`、`type`、`source`、`read`、`exit` 等。
|
||||
|
||||
你可以通过下面展示的 `type` 命令来列出或检查 Linux 内置命令:
|
||||
|
||||
```
|
||||
$ type pwd
|
||||
pwd is a shell builtin
|
||||
$ type cd
|
||||
cd is a shell builtin
|
||||
$ type bg
|
||||
bg is a shell builtin
|
||||
$ type alias
|
||||
alias is a shell builtin
|
||||
$ type history
|
||||
history is a shell builtin
|
||||
```
|
||||
|
||||
学习一些 Linux 内置命令用法:
|
||||
|
||||
- [Linux 下 15 个 pwd 命令例子][2]
|
||||
- [Linux 下 15 个 cd 命令例子][3]
|
||||
- [了解 Linux 下 history 命令的威力][4]
|
||||
|
||||
### 结论
|
||||
|
||||
作为一个 Linux 用户,知道你所运行的命令类型是很重要的。我相信,通过上面明确、简单并且易于理解的解释,包括一些相关的说明,你可能对 “[Linux 命令的不同种类][1]”有了很好的理解。
|
||||
|
||||
你也可以在下面的评论区提任何问题或补充意见,从而和我们取得联系。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/understanding-different-linux-shell-commands-usage/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.tecmint.com/author/aaronkili/
|
||||
[1]: http://www.tecmint.com/different-types-of-linux-shells/
|
||||
[2]: http://www.tecmint.com/pwd-command-examples/
|
||||
[3]: http://www.tecmint.com/cd-command-in-linux/
|
||||
[4]: http://www.tecmint.com/history-command-examples/
|
||||
[5]: http://www.tecmint.com/category/linux-commands/
|
76
published/201609/20160828 4 Best Linux Boot Loaders.md
Normal file
76
published/201609/20160828 4 Best Linux Boot Loaders.md
Normal file
@ -0,0 +1,76 @@
|
||||
4 个最好的 Linux 引导程序
|
||||
==================
|
||||
|
||||
当你打开你的机器,开机自检(POST)成功完成后,BIOS(基本输入输出系统)立即定位所配置的引导介质,并从 MBR(主引导记录)或 GUID(全局唯一标识符)分区表读取一些命令,这是引导介质的最前面 512 个字节内容。主引导记录(MBR)中包含两个重要的信息集合,第一个是引导程序,第二个是分区表。
|
||||
|
||||
### 什么是引导程序?
|
||||
|
||||
引导程序(Boot Loader)是存储在 MBR(主引导记录)或 GUID(全局唯一标识符)分区表中的一个小程序,用于帮助把操作系统装载到内存中。如果没有引导程序,那么你的操作系统将不能够装载到内存中。
|
||||
|
||||
有一些我们可以随同 Linux 安装到系统上的引导程序,在这篇文章里,我将简要地谈论几个最好的可以与 Linux 一同工作的 Linux 引导程序。
|
||||
|
||||
### 1. GNU GRUB
|
||||
|
||||
GNU GRUB 是一个非常受欢迎,也可能是用得最多的具有多重引导能力的 Linux 引导程序,它以 Erich Stefan Boleyn 最初发明的 GRUB(GRand Unified Bootloader)为基础。GNU GRUB 增强了原来的 GRUB,带来了一些改进、新的特性和漏洞修复。
|
||||
|
||||
重要的是,GRUB 2 现在已经取代了 GRUB。值得注意的是,GRUB 这个名字被重新命名为 GRUB Legacy,但没有活跃开发,不过,它可以用来引导老的系统,因为漏洞修复依然继续。
|
||||
|
||||
GRUB 具有下面一些显著的特性:
|
||||
|
||||
- 支持多重引导
|
||||
- 支持多种硬件结构和操作系统,比如 Linux 和 Windows
|
||||
- 提供一个类似 Bash 的交互式命令行界面,从而用户可以运行 GRUB 命令来和配置文件进行交互
|
||||
- 允许访问 GRUB 编辑器
|
||||
- 支持设置加密口令以确保安全
|
||||
- 支持从网络进行引导,以及一些次要的特性
|
||||
|
||||
访问主页: <https://www.gnu.org/software/grub/>
|
||||
|
||||
### 2. LILO(Linux 引导程序(LInux LOader))
|
||||
|
||||
LILO 是一个简单但强大且非常稳定的 Linux 引导程序。由于 GRUB 有很大改善和增加了许多强大的特性,越来越受欢迎,因此 LILO 在 Linux 用户中已经不是很流行了。
|
||||
|
||||
当 LILO 引导的时候,单词“LILO”会出现在屏幕上,并且每一个字母会在一个特定的事件发生前后出现。然而,从 2015 年 12 月开始,LILO 的开发已经停止,它有如下一些特性和限制:
|
||||
|
||||
- 不提供交互式命令行界面
|
||||
- 支持一些错误代码
|
||||
- 不支持网络引导(LCTT 译注:其变体 ELILO 支持 TFTP/DHCP 引导)
|
||||
- 所有的文件存储在驱动的最开始 1024 个柱面上
|
||||
- 面临 BTFS、GTP、RAID 等的限制
|
||||
|
||||
访问主页: <http://lilo.alioth.debian.org/>
|
||||
|
||||
### 3. BURG - 新的引导程序
|
||||
|
||||
基于 GRUB,BURG 是一个相对来说比较新的引导程序(LCTT 译注:已于 2011 年停止了开发)。由于 BURG 起源于 GRUB, 所以它带有一些 GRUB 主要特性。尽管如此, BURG 也提供了一些出色的特性,比如一种新的对象格式可以支持包括 Linux、Windows、Mac OS、 FreeBSD 等多种平台。
|
||||
|
||||
另外,BURG 支持可高度配置的文本和图标模式的引导菜单,计划增加的“流”支持未来可以与不同的输入/输出设备一同工作。
|
||||
|
||||
访问主页: <https://launchpad.net/burg>
|
||||
|
||||
### 4. Syslinux
|
||||
|
||||
Syslinux 是一种能从光盘驱动器、网络等进行引导的轻型引导程序。Syslinux 支持诸如 MS-DOS 上的 FAT、 Linux 上的 ext2、ext3、ext4 等文件系统。Syslinux 也支持未压缩的单一设备上的 Btrfs。
|
||||
|
||||
注意由于 Syslinux 仅能访问自己分区上的文件,因此不具备多重文件系统引导能力。
|
||||
|
||||
访问主页: <http://www.syslinux.org/wiki/index.php?title=The_Syslinux_Project>
|
||||
|
||||
### 结论
|
||||
|
||||
一个引导程序允许你在你的机器上管理多个操作系统,并在某个的时间选择其中一个使用。没有引导程序,你的机器就不能够装载内核以及操作系统的剩余部分。
|
||||
|
||||
我们是否遗漏了任何一流的 Linux 引导程序?如果有请让我们知道,请在下面的评论表中填入值得推荐的 Linux 系统引导程序。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/best-linux-boot-loaders/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.tecmint.com/best-linux-boot-loaders/
|
@ -0,0 +1,265 @@
|
||||
Ohm:用两百行 JavaScript 创造你自己的编程语言
|
||||
==========
|
||||
|
||||

|
||||
|
||||
解析器是一种超级有用的软件库。它们概念上很简单,但实现起来很有挑战性,并且在计算机科学中经常被认为是黑魔法。在这个系列的博文中,我会向你们展示为什么你不需要成为哈利波特就能够精通解析器这种魔法。但是为了以防万一,还是带上你的魔杖吧!
|
||||
|
||||
我们将探索一种叫做 Ohm 的新的开源库,它使得搭建解析器很简单并且易于重用。在这个系列里,我们使用 Ohm 去识别数字,构建一个计算器等等。在这个系列的最后你将已经用不到 200 行的代码发明了一种完整的编程语言。这个强大的工具将让你能够做到一些你可能过去认为不可能的事情。
|
||||
|
||||
### 为什么解析器很困难?
|
||||
|
||||
解析器非常有用。在很多时候你可能需要一个解析器。或许有一种你需要处理的新的文件格式,但还没有人为它写了一个库;又或许你发现了一种古老格式的文件,但是已有的解析器不能在你的平台上构建。我已经看到这样的事发生无数次。 Code 在或者不在, Data 就在那里,不增不减。
|
||||
|
||||
从根本上来说,解析器很简单:只是把一个数据结构转化成另一个。所以你会不会觉得你要是邓布利多校长就好了?
|
||||
|
||||
解析器历来是出奇地难写,所面临的挑战是绝大多数现有的工具都很老,并且需要一定的晦涩难懂的计算机科学知识。如果你在大学里上过编译器课程,那么课本里也许还有从上世纪七十年传下来的技术。幸运的是,解析器技术从那时候起已经提高了很多。
|
||||
|
||||
典型的,解析器是通过使用一种叫作[形式语法(formal grammar)][1]的特殊语法来定义你想要解析的东西来创造的,然后你需要把它放入像 [Bison][2] 和 [Yacc][3] 的工具中,这些工具能够产生一堆 C 代码,这些代码你需要修改或者链接到你实际写入的编程语言中。另外的选择是用你更喜欢的语言亲自动手写一个解析器,这很慢且很容易出错,在你能够真正使用它之前还有许多额外的工作。
|
||||
|
||||
想像一下,是否你关于你想要解析的东西的语法描述也是解析器?如果你能够只是直接运行这些语法,然后仅在你需要的地方增加一些挂钩(hook)呢?那就是 Ohm 所可以做到的事。
|
||||
|
||||
### Ohm 简介
|
||||
|
||||
[Ohm][4]是一种新的解析系统。它类似于你可能已经在课本里面看到过的语法,但是它更强大,使用起来更简单。通过 Ohm, 你能够使用一种灵活的语法在一个 .ohm 文件中来写你自己的格式定义,然后使用你的宿主语言把语义加入到里面。在这篇博文里,我们将用 JavaScript 作为宿主语言。
|
||||
|
||||
Ohm 建立于一个为创造更简单、更灵活的解析器的多年研究基础之上。VPRI 的 [STEPS program (pdf)][5] 使用 Ohm 的前身 [Ometa][6] 为许多特殊的任务创造了专门的语言(比如一个有 400 行代码的平行制图描绘器)。
|
||||
|
||||
Ohm 有许多有趣的特点和符号,但是相比于全部解释它们,我认为我们只需要深入其中并构建一些东西就行了。
|
||||
|
||||
### 解析整数
|
||||
|
||||
让我们来解析一些数字。这看起来会很简单,只需在一个文本串中寻找毗邻的数字,但是让我们尝试去处理所有形式的数字:整数和浮点数、十六进制数和八进制数、科学计数、负数。解析数字很简单,正确解析却很难。
|
||||
|
||||
亲自构建这个代码将会很困难,会有很多问题,会伴随有许多特殊的情况,比如有时会相互矛盾。正则表达式或许可以做的这一点,但是它会非常丑陋而难以维护。让我们用 Ohm 来试试。
|
||||
|
||||
用 Ohm 构建的解析器涉及三个部分:语法(grammar)、语义(semantics)和测试(tests)。我通常挑选问题的一部分为它写测试,然后构建足够的语法和语义来使测试通过。然后我再挑选问题的另一部分,增加更多的测试、更新语法和语义,从而确保所有的测试能够继续通过。即使我们有了新的强大的工具,写解析器从概念上来说依旧很复杂。测试是用一种合理的方式来构建解析器的唯一方法。现在,让我们开始工作。
|
||||
|
||||
我们将从整数开始。一个整数由一系列相互毗邻的数字组成。让我们把下面的内容放入一个叫做 grammar.ohm 的文件中:
|
||||
|
||||
```
|
||||
CoolNums {
|
||||
// just a basic integer
|
||||
Number = digit+
|
||||
}
|
||||
```
|
||||
|
||||
这创造了一条匹配一个或多个数字(`digit`)叫作 `Number` 的单一规则。`+` 意味着一个或更多,就在正则表达式中一样。当有一个或更多的数字时,这个规则将会匹配它们,如果没有数字或者有一些不是数字的东西将不会匹配。“数字(`digit`)”的定义是从 0 到 9 其中的一个字符。`digit` 也是像 `Number` 一样的规则,但是它是 Ohm 的其中一条构建规则因此我们不需要去定义它。如果我们想的话可以推翻它,但在这时候这没有任何意义,毕竟我们不打算去发明一种新的数。
|
||||
|
||||
现在,我们可以读入这个语法并用 Ohm 库来运行它。
|
||||
|
||||
把它放入 test1.js:
|
||||
|
||||
```
|
||||
var ohm = require('ohm-js');
|
||||
var fs = require('fs');
|
||||
var assert = require('assert');
|
||||
var grammar = ohm.grammar(fs.readFileSync('src/blog_numbers/syntax1.ohm').toString());
|
||||
```
|
||||
|
||||
`Ohm.grammar` 调用将读入该文件并解析成一个语法对象。现在我们可以增加一些语义。把下面内容增加到你的 JavaScript 文件中:
|
||||
|
||||
```
|
||||
var sem = grammar.createSemantics().addOperation('toJS', {
|
||||
Number: function(a) {
|
||||
return parseInt(this.sourceString,10);
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
这通过 `toJS` 操作创造了一个叫作 `sem` 的语法集。这些语义本质上是一些对应到语法中每个规则的函数。每个函数当与之相匹配的语法规则被解析时将会被调用。上面的 `Number` 函数将会在语法中的 `Number` 规则被解析时被调用。语法(grammar)定义了在语言中这些代码是什么,语义(semantics)定义了当这些代码被解析时应该做什么。
|
||||
|
||||
语义函数能够做我们想做的任何事,比如打印出故障信息、创建对象,或者在任何子节点上递归调用 `toJS`。此时我们仅仅想把匹配的文本转换成真正的 JavaScript 整数。
|
||||
|
||||
所有的语义函数有一个内含的 `this` 对象,带有一些有用的属性。其 `source` 属性代表了输入文本中和这个节点相匹配的部分。`this.sourceString` 是一个匹配输入的串,调用内置在 JavaScript 中的 `parseInt` 函数会把这个串转换成一个数。传给 `parseInt` 的 `10` 这个参数告诉 JavaScript 我们输入的是一个以 `10` 为基底(10 进制)的数。如果少了这个参数, JavaScript 也会假定以 10 为基底,但是我们把它包含在里面因为后面我们将支持以 16 为基底的数,所以使之明确比较好。
|
||||
|
||||
既然我们有一些语法,让我们来实际解析一些东西看一看我们的解析器是否能够工作。如何知道我们的解析器可以工作?通过测试,许多许多的测试,每一个可能的边缘情况都需要一个测试。
|
||||
|
||||
使用标准的断言 `assert` API,以下这个测试函数能够匹配一些输入并运用我们的语义把它转换成一个数,然后把这个数和我们期望的输入进行比较。
|
||||
|
||||
```
|
||||
function test(input, answer) {
|
||||
var match = grammar.match(input);
|
||||
if(match.failed()) return console.log("input failed to match " + input + match.message);
|
||||
var result = sem(match).toJS();
|
||||
assert.deepEqual(result,answer);
|
||||
console.log('success = ', result, answer);
|
||||
}
|
||||
```
|
||||
|
||||
就是如此。现在我们能够为各种不同的数写一堆测试。如果匹配失败我们的脚本将会抛出一个例外。否则就打印成功信息。让我们尝试一下,把下面这些内容加入到脚本中:
|
||||
|
||||
```
|
||||
test("123",123);
|
||||
test("999",999);
|
||||
test("abc",999);
|
||||
```
|
||||
|
||||
然后用 `node test1.js` 运行脚本。
|
||||
|
||||
你的输出应该是这样:
|
||||
|
||||
```
|
||||
success = 123 123
|
||||
success = 999 999
|
||||
input failed to match abcLine 1, col 1:
|
||||
> 1 | abc
|
||||
^
|
||||
Expected a digit
|
||||
```
|
||||
|
||||
真酷。正如预期的那样,前两个成功了,第三个失败了。更好的是,Ohm 自动给了我们一个很棒的错误信息指出匹配失败。
|
||||
|
||||
### 浮点数
|
||||
|
||||
我们的解析器工作了,但是它做的工作不是很有趣。让我们把它扩展成既能解析整数又能解析浮点数。改变 grammar.ohm 文件使它看起来像下面这样:
|
||||
|
||||
```
|
||||
CoolNums {
|
||||
// just a basic integer
|
||||
Number = float | int
|
||||
int = digit+
|
||||
float = digit+ "." digit+
|
||||
}
|
||||
```
|
||||
|
||||
这把 `Number` 规则改变成指向一个浮点数(`float`)或者一个整数(`int`)。这个 `|` 代表着“或”。我们把这个读成“一个 `Number` 由一个浮点数或者一个整数构成。”然后整数(`int`)定义成 `digit+`,浮点数(`float`)定义成 `digit+` 后面跟着一个句号然后再跟着另一个 `digit+`。这意味着在句号前和句号后都至少要有一个数字。如果一个数中没有一个句号那么它就不是一个浮点数,因此就是一个整数。
|
||||
|
||||
现在,让我们再次看一下我们的语义功能。由于我们现在有了新的规则所以我们需要新的功能函数:一个作为整数的,一个作为浮点数的。
|
||||
|
||||
```
|
||||
var sem = grammar.createSemantics().addOperation('toJS', {
|
||||
Number: function(a) {
|
||||
return a.toJS();
|
||||
},
|
||||
int: function(a) {
|
||||
console.log("doing int", this.sourceString);
|
||||
return parseInt(this.sourceString,10);
|
||||
},
|
||||
float: function(a,b,c) {
|
||||
console.log("doing float", this.sourceString);
|
||||
return parseFloat(this.sourceString);
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
这里有两件事情需要注意。首先,整数(`int`)、浮点数(`float`)和数(`Number`)都有相匹配的语法规则和函数。然而,针对 `Number` 的功能不再有任何意义。它接收子节点 `a` 然后返回该子节点的 `toJS` 结果。换句话说,`Number` 规则简单的返回相匹配的子规则。由于这是在 Ohm 中任何规则的默认行为,因此实际上我们不用去考虑 `Number` 的作用,Ohm 会替我们做好这件事。
|
||||
|
||||
其次,整数(`int`)有一个参数 `a`,然而浮点数有三个:`a`、`b` 和 `c`。这是由于规则的实参数量(arity)决定的。[实参数量(arity)][7] 意味着一个规则里面有多少参数。如果我们回过头去看语法,浮点数(`float`)的规则是:
|
||||
|
||||
```
|
||||
float = digit+ "." digit+
|
||||
```
|
||||
|
||||
浮点数规则通过三个部分来定义:第一个 `digit+`、`.`、以及第二个 `digit+`。这三个部分都会作为参数传递给浮点数的功能函数。因此浮点数必须有三个参数,否则 Ohm 库会给出一个错误。在这种情况下我们不用在意参数,因为我们仅仅直接攫取了输入串,但是我们仍然需要参数列在那里来避免编译器错误。后面我们将实际使用其中一些参数。
|
||||
|
||||
现在我们可以为新的浮点数支持添加更多的测试。
|
||||
|
||||
```
|
||||
test("123",123);
|
||||
test("999",999);
|
||||
//test("abc",999);
|
||||
test('123.456',123.456);
|
||||
test('0.123',0.123);
|
||||
test('.123',0.123);
|
||||
```
|
||||
|
||||
注意最后一个测试将会失败。一个浮点数必须以一个数开始,即使它就是个 0,`.123` 不是有效的,实际上真正的 JavaScript 语言也有相同的规则。
|
||||
|
||||
### 十六进制数
|
||||
|
||||
现在我们已经有了整数和浮点数,但是还有一些其它的数的语法最好可以支持:十六进制数和科学计数。十六进制数是以 16 为基底的整数。十六进制数的数字能从 0 到 9 和从 A 到 F。十六进制数经常用在计算机科学中,当用二进制数据工作时,你可以仅仅使用两个数字表示 0 到 255 的数。
|
||||
|
||||
在绝大多数源自 C 的编程语言(包括 JavaScript),十六进制数通过在前面加上 `0x` 来向编译器表明后面跟的是一个十六进制数。为了让我们的解析器支持十六进制数,我们只需要添加另一条规则。
|
||||
|
||||
```
|
||||
Number = hex | float | int
|
||||
int = digit+
|
||||
float = digit+ "." digit+
|
||||
hex = "0x" hexDigit+
|
||||
hexDigit := "0".."9" | "a".."f" | "A".."F"
|
||||
```
|
||||
|
||||
我实际上已经增加了两条规则。十六进制数(`hex`)表明它是一个 `0x` 后面一个或多个十六进制数字(`hexDigits`)的串。一个十六进制数字(`hexDigit`)是从 0 到 9,或从 a 到 f,或 A 到 F(包括大写和小写的情况)的一个字符。我也修改了 `Number` 规则来识别十六进制数作为另外一种可能的情况。现在我们只需要另一条针对十六进制数的功能规则。
|
||||
|
||||
```
|
||||
hex: function(a,b) {
|
||||
return parseInt(this.sourceString,16);
|
||||
}
|
||||
```
|
||||
|
||||
注意到,在这种情况下,我们把 `16` 作为基底传递给 `parseInt`,因为我们希望 JavaScript 知道这是一个十六进制数。
|
||||
|
||||
我略过了一些很重要需要注意的事。`hexDigit` 的规则像下面这样:
|
||||
|
||||
```
|
||||
hexDigit := "0".."9" | "a".."f" | "A".."F"
|
||||
```
|
||||
|
||||
注意我使用的是 `:=` 而不是 `=`。在 Ohm 中,`:=` 是当你需要推翻一条规则的时候使用。这表明 Ohm 已经有了一条针对 `hexDigit` 的默认规则,就像 `digit`、`space` 等一堆其他的东西。如果我使用了 `=`, Ohm 将会报告一个错误。这是一个检查,从而避免我无意识的推翻一个规则。由于新的 `hexDigit` 规则和 Ohm 的构建规则一样,所以我们可以把它注释掉,然后让 Ohm 自己来实现它。我留下这个规则只是因为这样我们可以看到它实际上是如何进行的。
|
||||
|
||||
现在,我们可以添加更多的测试然后看到十六进制数真的能工作:
|
||||
|
||||
```
|
||||
test('0x456',0x456);
|
||||
test('0xFF',255);
|
||||
```
|
||||
|
||||
### 科学计数
|
||||
|
||||
最后,让我们来支持科学计数。科学计数是针对非常大或非常小的数的,比如 `1.8×10^3`。在大多数编程语言中,科学计数法表示的数会写成这样:1.8e3 表示 18000,或者 1.8e-3 表示 .018。让我们增加另外一对规则来支持这个指数表示:
|
||||
|
||||
```
|
||||
float = digit+ "." digit+ exp?
|
||||
exp = "e" "-"? digit+
|
||||
```
|
||||
|
||||
上面在浮点数规则末尾增加了一个指数(`exp`)规则和一个 `?`。`?` 表示没有或有一个,所以指数(`exp`)是可选的,但是不能超过一个。增加指数(`exp`)规则也改变了浮点数规则的实参数量,所以我们需要为浮点数功能增加另一个参数,即使我们不使用它。
|
||||
|
||||
```
|
||||
float: function(a,b,c,d) {
|
||||
console.log("doing float", this.sourceString);
|
||||
return parseFloat(this.sourceString);
|
||||
},
|
||||
```
|
||||
|
||||
现在我们的测试可以通过了:
|
||||
|
||||
```
|
||||
test('4.8e10',4.8e10);
|
||||
test('4.8e-10',4.8e-10);
|
||||
```
|
||||
|
||||
### 结论
|
||||
|
||||
Ohm 是构建解析器的一个很棒的工具,因为它易于上手,并且你可以递增的增加规则。Ohm 也还有其他我今天没有写到的很棒的特点,比如调试观察仪和子类化。
|
||||
|
||||
到目前为止,我们已经使用 Ohm 来把字符串翻译成 JavaScript 数,并且 Ohm 经常用于把一种表示方式转化成另外一种。然而,Ohm 还有更多的用途。通过放入不同的语义功能集,你可以使用 Ohm 来真正处理和计算东西。一个单独的语法可以被许多不同的语义使用,这是 Ohm 的魔法之一。
|
||||
|
||||
在这个系列的下一篇文章中,我将向你们展示如何像真正的计算机一样计算像 `(4.85 + 5 * (238 - 68)/2)` 这样的数学表达式,不仅仅是解析数。
|
||||
|
||||
额外的挑战:你能够扩展语法来支持八进制数吗?这些以 8 为基底的数只用 0 到 7 这几个数字来表示,前面加上一个数字 0 或者字母 `o`。看看针对下面这些测试情况是否正确。下次我将给出答案。
|
||||
|
||||
```
|
||||
test('0o77',7*8+7);
|
||||
test('0o23',0o23);
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.pubnub.com/blog/2016-08-30-javascript-parser-ohm-makes-creating-a-programming-language-easy/
|
||||
|
||||
作者:[Josh Marinacci][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.pubnub.com/blog/author/josh/
|
||||
[1]: https://en.wikipedia.org/wiki/Formal_grammar
|
||||
[2]: https://en.wikipedia.org/wiki/GNU_bison
|
||||
[3]: https://en.wikipedia.org/wiki/Yacc
|
||||
[4]: https://github.com/cdglabs/ohm
|
||||
[5]: http://www.vpri.org/pdf/tr2012001_steps.pdf
|
||||
[6]: http://tinlizzie.org/ometa/
|
||||
[7]: https://en.wikipedia.org/wiki/Arity
|
80
published/201609/20160830 Spark comparison - AWS vs. GCP.md
Normal file
80
published/201609/20160830 Spark comparison - AWS vs. GCP.md
Normal file
@ -0,0 +1,80 @@
|
||||
AWS 和 GCP 的 Spark 技术哪家强?
|
||||
==============
|
||||
|
||||
> Tianhui Michael Li 和Ariel M’ndange-Pfupfu将在今年10月10、12和14号组织一个在线经验分享课程:[Spark 分布式计算入门][4]。该课程的内容包括创建端到端的运行应用程序和精通 Spark 关键工具。
|
||||
|
||||
毋庸置疑,云计算将会在未来数据科学领域扮演至关重要的角色。弹性,可扩展性和按需分配的计算能力作为云计算的重要资源,直接导致云服务提供商集体火拼。其中最大的两股势力正是[亚马逊网络服务(AWS)][1] 和[谷歌云平台(GCP)][2]。
|
||||
|
||||
本文依据构建时间和运营成本对 AWS 和 GCP 的 Spark 工作负载作一个简短比较。实验由我们的学生在数据孵化器(The Data Incubator)进行,[数据孵化器(The Data Incubator)][3]是一个大数据培训组织,专门为公司招聘顶尖数据科学家并为公司职员培训最新的大数据科学技能。尽管 Spark 效率惊人,分布式工作负载的时间和成本依然可能大到不容忽视。因此我们一直努力寻求更高效的技术,以便我们的学生能够学习到最好和最快的工具。
|
||||
|
||||
### 提交 Spark 任务到云
|
||||
|
||||
[Spark][5] 是一个类 MapReduce 但是比 MapReduce 更灵活、更抽象的并行计算框架。Spark 提供 Python 和 Java 编程接口,但它更愿意用户使用原生的 Scala 语言进行应用程序开发。Scala 可以把应用程序和依赖文件打包在一个 JAR 文件从而使 Spark 任务提交变得简单。
|
||||
|
||||
通常情况下,Spark 结合 HDFS 应用于分布式数据存储,而与 YARN 协同工作则应用于集群管理;这种堪称完美的配合使得 Spark 非常适用于 AWS 的弹性 MapReduce (EMR)集群和 GCP 的 Dataproc 集群。这两种集群都已有 HDFS 和 YARN 预配置,不需要额外进行配置。
|
||||
|
||||
### 配置云服务
|
||||
|
||||
通过命令行比通过网页界面管理数据、集群和任务具有更高的可扩展性。对 AWS 而言,这意味着客户需要安装 [CLI][6]。客户必须获得证书并为每个 EC2 实例创建[独立的密钥对][7]。除此之外,客户还需要为 EMR 用户和 EMR 本身创建角色(基本权限),主要是准入许可规则,从而使 EMR 用户获得足够多的权限(通常在 CLI 运行 `aws emr create-default-roles` 就可以)。
|
||||
|
||||
相比而言,GCP 的处理流程更加直接。如果客户选择安装 [Google Cloud SDK][8] 并且使用其 Google 账号登录,那么客户即刻可以使用 GCP 的几乎所有功能而无需其他任何配置。唯一需要提醒的是不要忘记通过 API 管理器启用计算引擎、Dataproc 和云存储 JSON 的 API。
|
||||
|
||||
当你按照自己的喜好设置好之后,有趣的事情就开始了!比如可以通过 `aws s3 cp` 或者 `gsutil cp` 命令拷贝客户的数据到云端。再比如客户可以创建自己的输入、输出或者任何其他需要的 bucket,如此,运行一个应用就像创建一个集群或者提交 JAR 文件一样简单。请确定好日志存放的地方,毕竟在云环境下跟踪问题或者调试 bug 有点棘手。
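
下面是一个大致的操作示意(bucket 名、集群名以及具体参数都是假设值,实际可用的命令选项请以两家官方文档为准):

```
# 1. 把数据和应用 JAR 上传到各自的对象存储(bucket 仅为示例)
aws s3 cp data.csv s3://my-spark-bucket/input/
gsutil cp data.csv gs://my-spark-bucket/input/

# 2. 创建一个 1 Master + 5 Worker 的集群
aws emr create-cluster --name spark-demo --release-label emr-4.7.2 \
    --applications Name=Spark --instance-type m3.xlarge \
    --instance-count 6 --use-default-roles
gcloud dataproc clusters create spark-demo --num-workers 5

# 3. 在 Dataproc 上提交 Spark 任务
#    (EMR 可在建集群时用 --steps 提交,或登录主节点后使用 spark-submit)
gcloud dataproc jobs submit spark --cluster spark-demo \
    --jar gs://my-spark-bucket/app.jar
```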
|
||||
|
||||
### 一分钱一分货
|
||||
|
||||
谈及成本,Google 的服务在以下几个方面更有优势。首先,购买计算能力的原始成本更低。4 个 vCPU 和 15G RAM 的 Google 计算引擎服务(GCE)每小时只需 0.20 美元,如果运行 Dataproc,每小时也只需区区 0.24 美元。相比之下,同等的云配置,AWS EMR 则需要每小时 0.336 美元。
|
||||
|
||||
其次,计费方式。AWS 按小时计费,即使客户只使用 15 分钟也要付足 1 小时的费用。GCP 按分钟计费,最低计费 10 分钟。在诸多用户案例中,资费方式的不同直接导致成本的巨大差异。
|
||||
|
||||
两种云服务都有其他多种定价机制。客户可以使用 AWS 的 Spot Instance 或 GCP 的 Preemptible Instance 来竞价它们的空闲云计算能力。这些服务比专有的、按需服务便宜,缺点是不能保证随时有可用的云资源提供服务。在 GCP 上,如果客户长时间(每月的 25% 至 100%)使用服务,可以获取更多折扣。在 AWS 上预付费或者一次购买大批量服务可以节省不少费用。底线是,如果你是一个超级用户,并且使用云计算已经成为一种常态,那么最好深入研究云计算,自己算计好成本。
|
||||
|
||||
最后,新手在 GCP 上体验云服务的费用较低。新手只需 300 美元信用担保,就可以免费试用 60 天 GCP 提供的全部云服务。AWS 只免费提供特定服务的特定试用层级,如果运行 Spark 任务,需要付费。这意味着初次体验 Spark,GCP 具有更多选择,也少了精打细算和讨价还价的烦恼。
|
||||
|
||||
### 性能比拼
|
||||
|
||||
我们通过实验检测一个典型 Spark 工作负载的性能与开销。实验分别选择 AWS 的 m3.xlarge 和 GCP 的 n1-standard-4,它们都是由一个 Master 和 5 个核心实例组成的集群。除了规格略有差别,虚拟核心和费用都相同。实际上它们在 Spark 任务的执行时间上也表现得惊人地相似。
|
||||
|
||||
测试 Spark 任务包括对数据的解析、过滤、合并和聚合,这些数据来自公开的[堆栈交换数据转储(Stack Exchange Data Dump)][9]。通过运行相同的 JAR,我们首先对大约 50M 的数据子集进行[交叉验证][10],然后将验证扩大到大约 9.5G 的数据集。
|
||||
|
||||

|
||||
>Figure 1. Credit: Michael Li and Ariel M'ndange-Pfupfu.
|
||||
|
||||

|
||||
>Figure 2. Credit: Michael Li and Ariel M'ndange-Pfupfu.
|
||||
|
||||
结果表明,短任务在 GCP 上具有明显的成本优势,这是因为 GCP 以分钟计费,并最终扣除了 10 分钟的费用,而 AWS 则收取了 1 小时的费用。但是即使是长任务,因为计费方式占优,GCP 仍然具有相当优势。同样值得注意的是存储成本并不包括在此次比较当中。
|
||||
|
||||
### 结论
|
||||
|
||||
AWS 是云计算的先驱,这甚至体现在 API 中。AWS 拥有巨大的生态系统,但其许可模型已略显陈旧,配置管理也有些晦涩难解。相比之下,Google 是云计算领域的新星并且将云计算服务打造得更加圆润自如。但是 GCP 缺少一些便捷的功能,比如通过简单方法自动结束集群和详细的任务计费信息分解。另外,其 Python 编程接口也不像 [AWS 的 Boto][11] 那么全面。
|
||||
|
||||
如果你初次使用云计算,GCP 因其简单易用,别具魅力。即使你已在使用 AWS,你也许会发现迁移到 GCP 可能更划算,尽管真正从 AWS 迁移到 GCP 的代价可能得不偿失。
|
||||
|
||||
当然,现在对两种云服务作一个全面的总结还非常困难,因为它们都不是单一的实体,而是由多个实体整合而成的完整生态系统,并且各有利弊。真正的赢家是用户。一个例证就是在数据孵化器(The Data Incubator),我们的博士数据科学研究员在学习分布式负载的过程中真正体会到成本的下降。虽然我们的[大数据企业培训客户][12]可能对价格不那么敏感,他们更在意能够更快速地处理企业数据,同时保持价格不增加。数据科学家现在可以享受大量的可选服务,这些都是从竞争激烈的云计算市场得到的实惠。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.oreilly.com/ideas/spark-comparison-aws-vs-gcp
|
||||
|
||||
作者:[Michael Li][a],[Ariel M'Ndange-Pfupfu][b]
|
||||
译者:[firstadream](https://github.com/firstadream)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.oreilly.com/people/76a5b-michael-li
|
||||
[b]: https://www.oreilly.com/people/Ariel-Mndange-Pfupfu
|
||||
[1]: https://aws.amazon.com/
|
||||
[2]: https://cloud.google.com/
|
||||
[3]: https://www.thedataincubator.com/training.html?utm_source=OReilly&utm_medium=blog&utm_campaign=AWSvsGCP
|
||||
[4]: http://www.oreilly.com/live-training/distributed-computing-with-spark.html?intcmp=il-data-olreg-lp-oltrain_20160828_new_site_spark_comparison_aws_gcp_post_top_note_training_link
|
||||
[5]: http://spark.apache.org/
|
||||
[6]: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html
|
||||
[7]: http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/EMR_SetUp_KeyPair.html
|
||||
[8]: https://cloud.google.com/sdk/#Quick_Start
|
||||
[9]: https://archive.org/details/stackexchange
|
||||
[10]: http://stats.stackexchange.com/
|
||||
[11]: https://github.com/boto/boto3
|
||||
[12]: https://www.thedataincubator.com/training.html?utm_source=OReilly&utm_medium=blog&utm_campaign=AWSvsGCP
|
@ -0,0 +1,333 @@
|
||||
17 个 tar 命令实用示例
|
||||
=====
|
||||
|
||||
Tar(Tape ARchive,磁带归档的缩写,LCTT 译注:最初设计用于将文件打包到磁带上,现在我们大都使用它来实现备份某个分区或者某些重要的目录)是类 Unix 系统中使用最广泛的命令,用于归档多个文件或目录到单个归档文件中,并且归档文件可以进一步使用 gzip 或者 bzip2 等技术进行压缩。换言之,tar 命令也可以用于备份:先是归档多个文件和目录到一个单独的 tar 文件或归档文件,然后在需要之时将 tar 文件中的文件和目录释放出来。
|
||||
|
||||
本文将介绍 tar 的 17 个实用示例。
|
||||
|
||||
tar 命令语法如下:
|
||||
|
||||
```
|
||||
# tar <选项> <文件>
|
||||
```
|
||||
|
||||
下面列举 tar 命令中一些常用的选项:
|
||||
|
||||
> --delete : 从归档文件 (而非磁带) 中删除
|
||||
|
||||
> -r, --append : 将文件追加到归档文件中
|
||||
|
||||
> -t, --list : 列出归档文件中包含的内容
|
||||
|
||||
> --test-label : 测试归档文件卷标并退出
|
||||
|
||||
> -u, --update : 将已更新的文件追加到归档文件中
|
||||
|
||||
> -x, --extract, --get : 释放归档文件中文件及目录
|
||||
|
||||
> -C, --directory=DIR : 执行归档动作前变更工作目录到 DIR
|
||||
|
||||
> -f, --file=ARCHIVE : 指定 (将要创建或已存在的) 归档文件名
|
||||
|
||||
> -j, --bzip2 : 对归档文件使用 bzip2 压缩
|
||||
|
||||
> -J, --xz : 对归档文件使用 xz 压缩
|
||||
|
||||
> -p, --preserve-permissions : 保留原文件的访问权限
|
||||
|
||||
> -v, --verbose : 显示命令整个执行过程
|
||||
|
||||
> -z, gzip : 对归档文件使用 gzip 压缩
|
||||
|
||||
|
||||
注 : 在 tar 命令选项中的连接符 `-` 是可选的(LCTT 译注:不用 `-` 也没事。这在 GNU 软件里面很罕见,大概是由于 tar 命令更多受到古老的 UNIX 风格影响)。
|
||||
|
||||
### 示例 1:创建一个 tar 归档文件
|
||||
|
||||
现在来创建一个 tar 文件,将 /etc/ 目录和 /root/anaconda-ks.cfg 文件打包进去。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -cvf myarchive.tar /etc /root/anaconda-ks.cfg
|
||||
```
|
||||
|
||||
以上命令会在当前目录创建一个名为 "myarchive" 的 tar 文件,内含 /etc/ 目录和 /root/anaconda-ks.cfg 文件。
|
||||
|
||||
其中,`-c` 选项表示要创建 tar 文件,`-v` 选项用于输出 tar 的详细过程到屏幕上,`-f` 选项则是指定归档文件名称。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ls -l myarchive.tar
|
||||
-rw-r--r--. 1 root root 22947840 Sep 7 00:24 myarchive.tar
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
### 示例 2:列出归档文件中的内容
|
||||
|
||||
在 tar 命令中使用 `-t` 选项,可以不用释放其中的文件就快速列出文件中包含的内容。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -tvf myarchive.tar
|
||||
```
|
||||
|
||||
列出 tar 文件中的指定的文件和目录。下列命令尝试查看 anaconda-ks.cfg 文件是否存在于 tar 文件中。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -tvf myarchive.tar root/anaconda-ks.cfg
|
||||
-rw------- root/root 953 2016-08-24 01:33 root/anaconda-ks.cfg
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
### 示例 3:追加文件到归档(tar)文件中
|
||||
|
||||
`-r` 选项用于向已有的 tar 文件中追加文件。下面来将 /etc/fstab 添加到 data.tar 中。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -rvf data.tar /etc/fstab
|
||||
```
|
||||
|
||||
注:在压缩过的 tar 文件中无法进行追加文件操作。
|
||||
|
||||
### 示例 4:从 tar 文件中释放文件以及目录
|
||||
|
||||
`-x` 选项用于释放出 tar 文件中的文件和目录。下面来释放上边创建的 tar 文件中的内容。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -xvf myarchive.tar
|
||||
```
|
||||
|
||||
这个命令会在当前目录中释放出 myarchive.tar 文件中的内容。
|
||||
|
||||
### 示例 5:释放 tar 文件到指定目录
|
||||
|
||||
假如你想要释放 tar 文件中的内容到指定的文件夹或者目录,使用 `-C` 选项后边加上指定的文件的路径。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -xvf myarchive.tar -C /tmp/
|
||||
```
|
||||
|
||||
### 示例 6:释放 tar 文件中的指定文件或目录
|
||||
|
||||
假设你只要释放 tar 文件中的 anaconda-ks.cfg 到 /tmp 目录。
|
||||
|
||||
语法如下:
|
||||
|
||||
```
|
||||
# tar -xvf {tar-file} {file-to-be-extracted} -C {path-where-to-extract}
|
||||
|
||||
[root@linuxtechi tmp]# tar -xvf /root/myarchive.tar root/anaconda-ks.cfg -C /tmp/
|
||||
root/anaconda-ks.cfg
|
||||
[root@linuxtechi tmp]# ls -l /tmp/root/anaconda-ks.cfg
|
||||
-rw-------. 1 root root 953 Aug 24 01:33 /tmp/root/anaconda-ks.cfg
|
||||
[root@linuxtechi tmp]#
|
||||
```
|
||||
|
||||
### 示例 7:创建并压缩归档文件(.tar.gz 或 .tgz)
|
||||
|
||||
假设我们需要打包 /etc 和 /opt 文件夹,并用 gzip 工具将其压缩。可以在 tar 命令中使用 `-z` 选项来实现。这种 tar 文件的扩展名可以是 .tar.gz 或者 .tgz。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -zcpvf myarchive.tar.gz /etc/ /opt/
|
||||
```
|
||||
|
||||
或
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -zcpvf myarchive.tgz /etc/ /opt/
|
||||
```
|
||||
|
||||
### 示例 8:创建并压缩归档文件(.tar.bz2 或 .tbz2)
|
||||
|
||||
假设我们需要打包 /etc 和 /opt 文件夹,并使用 bzip2 压缩。可以在 tar 命令中使用 `-j` 选项来实现。这种 tar 文件的扩展名可以是 .tar.bz2 或者 .tbz。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -jcpvf myarchive.tar.bz2 /etc/ /opt/
|
||||
```
|
||||
|
||||
或
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -jcpvf myarchive.tbz2 /etc/ /opt/
|
||||
```
|
||||
|
||||
### 示例 9:排除指定文件或类型后创建 tar 文件
|
||||
|
||||
创建 tar 文件时在 tar 命令中使用 `--exclude` 选项来排除指定文件或者类型。假设在创建压缩的 tar 文件时要排除 .html 文件。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -zcpvf myarchive.tgz /etc/ /opt/ --exclude=*.html
|
||||
```
|
||||
|
||||
### 示例 10:列出 .tar.gz 或 .tgz 文件中的内容
|
||||
|
||||
使用 `-t` 选项可以查看 .tar.gz 或 .tgz 文件中内容。如下:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -tvf myarchive.tgz | more
|
||||
.............................................
|
||||
drwxr-xr-x root/root 0 2016-09-07 08:41 etc/
|
||||
-rw-r--r-- root/root 541 2016-08-24 01:23 etc/fstab
|
||||
-rw------- root/root 0 2016-08-24 01:23 etc/crypttab
|
||||
lrwxrwxrwx root/root 0 2016-08-24 01:23 etc/mtab -> /proc/self/mounts
|
||||
-rw-r--r-- root/root 149 2016-09-07 08:41 etc/resolv.conf
|
||||
drwxr-xr-x root/root 0 2016-09-06 03:55 etc/pki/
|
||||
drwxr-xr-x root/root 0 2016-09-06 03:15 etc/pki/rpm-gpg/
|
||||
-rw-r--r-- root/root 1690 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
|
||||
-rw-r--r-- root/root 1004 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Debug-7
|
||||
-rw-r--r-- root/root 1690 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Testing-7
|
||||
-rw-r--r-- root/root 3140 2015-09-15 06:53 etc/pki/rpm-gpg/RPM-GPG-KEY-foreman
|
||||
..........................................................
|
||||
```
|
||||
|
||||
### 示例 11:列出 .tar.bz2 或 .tbz2 文件中的内容
|
||||
|
||||
使用 `-t` 选项可以查看 .tar.bz2 或 .tbz2 文件中内容。如下:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -tvf myarchive.tbz2 | more
|
||||
........................................................
|
||||
rwxr-xr-x root/root 0 2016-08-24 01:25 etc/pki/java/
|
||||
lrwxrwxrwx root/root 0 2016-08-24 01:25 etc/pki/java/cacerts -> /etc/pki/ca-trust/extracted/java/cacerts
|
||||
drwxr-xr-x root/root 0 2016-09-06 02:54 etc/pki/nssdb/
|
||||
-rw-r--r-- root/root 65536 2010-01-12 15:09 etc/pki/nssdb/cert8.db
|
||||
-rw-r--r-- root/root 9216 2016-09-06 02:54 etc/pki/nssdb/cert9.db
|
||||
-rw-r--r-- root/root 16384 2010-01-12 16:21 etc/pki/nssdb/key3.db
|
||||
-rw-r--r-- root/root 11264 2016-09-06 02:54 etc/pki/nssdb/key4.db
|
||||
-rw-r--r-- root/root 451 2015-10-21 09:42 etc/pki/nssdb/pkcs11.txt
|
||||
-rw-r--r-- root/root 16384 2010-01-12 15:45 etc/pki/nssdb/secmod.db
|
||||
drwxr-xr-x root/root 0 2016-08-24 01:26 etc/pki/CA/
|
||||
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/certs/
|
||||
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/crl/
|
||||
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/newcerts/
|
||||
drwx------ root/root 0 2015-06-29 08:48 etc/pki/CA/private/
|
||||
drwx------ root/root 0 2015-11-20 06:34 etc/pki/rsyslog/
|
||||
drwxr-xr-x root/root 0 2016-09-06 03:44 etc/pki/pulp/
|
||||
..............................................................
|
||||
```
|
||||
|
||||
### 示例 12:解压 .tar.gz 或 .tgz 文件
|
||||
|
||||
使用 `-x` 和 `-z` 选项来解压 .tar.gz 或 .tgz 文件。如下:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -zxpvf myarchive.tgz -C /tmp/
|
||||
```
|
||||
|
||||
以上命令将 tar 文件解压到 /tmp 目录。
|
||||
|
||||
注:现今的 tar 命令会在执行解压动作前自动检查文件的压缩类型,这意味着我们在使用 tar 命令时可以不用指定文件的压缩类型。如下:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -xpvf myarchive.tgz -C /tmp/
|
||||
```
|
||||
|
||||
### 示例 13:解压 .tar.bz2 或 .tbz2 文件
|
||||
|
||||
使用 `-j` 和 `-x` 选项来解压 .tar.bz2 或 .tbz2 文件。如下:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -jxpvf myarchive.tbz2 -C /tmp/
|
||||
```
|
||||
|
||||
或
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar xpvf myarchive.tbz2 -C /tmp/
|
||||
```
|
||||
|
||||
### 示例 14:使用 tar 命令进行定时备份
|
||||
|
||||
总有一些实际场景需要我们对指定的文件和目录进行打包,以达到日常备份的目的。假设需要每天备份整个 /opt 目录,可以创建一个带 tar 命令的 cron 任务来完成。如下:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -zcvf optbackup-$(date +%Y-%m-%d).tgz /opt/
|
||||
```
|
||||
|
||||
为以上命令创建一个 cron 任务即可。
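
比如,可以用 `crontab -e` 编辑计划任务,加入类似下面这一行(时间和备份路径仅作示例,注意 cron 中的 `%` 需要转义):

```
# 每天凌晨 1 点把 /opt 备份到 /backup 目录
0 1 * * * tar -zcvf /backup/optbackup-$(date +\%Y-\%m-\%d).tgz /opt/
```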
|
||||
|
||||
### 示例 15:使用 -T 及 -X 创建压缩归档文件
|
||||
|
||||
想像这样一个场景:把想要归档和压缩的文件及目录记录到到一个文件,然后把这个文件当做 tar 命令的传入参数来完成归档任务;而有时候则是需要排除上面提到的这个文件里面记录的特定路径后进行归档和压缩。
|
||||
|
||||
在 tar 命令中使用 `-T` 选项来指定该输入文件,使用 `-X` 选项来指定包含要排除的文件列表。
|
||||
|
||||
假设要归档 /etc、/opt、/home 目录,并排除 /etc/sysconfig/kdump 和 /etc/sysconfig/foreman 文件,可以创建 /root/tar-include 和 /root/tar-exclude 然后分别输入以下内容:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# cat /root/tar-include
|
||||
/etc
|
||||
/opt
|
||||
/home
|
||||
[root@linuxtechi ~]#
|
||||
[root@linuxtechi ~]# cat /root/tar-exclude
|
||||
/etc/sysconfig/kdump
|
||||
/etc/sysconfig/foreman
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
运行以下命令来创建一个压缩归档文件。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar zcpvf mybackup-$(date +%Y-%m-%d).tgz -T /root/tar-include -X /root/tar-exclude
|
||||
```
|
||||
|
||||
### 示例 16:查看 .tar、.tgz 和 .tbz2 文件的大小
|
||||
|
||||
使用如下命令来查看 (压缩) tar 文件的体积。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -czf - data.tar | wc -c
|
||||
427
|
||||
[root@linuxtechi ~]# tar -czf - mybackup-2016-09-09.tgz | wc -c
|
||||
37956009
|
||||
[root@linuxtechi ~]# tar -czf - myarchive.tbz2 | wc -c
|
||||
30835317
|
||||
[root@linuxtechi ~]#
|
||||
```
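
作为补充示意,也可以用 `ls -lh` 或 `du -h` 直接查看归档文件本身占用的磁盘空间(输出格式因系统而异):

```
$ ls -lh mybackup-2016-09-09.tgz
$ du -h mybackup-2016-09-09.tgz
```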
|
||||
|
||||
### 示例 17:分割体积庞大的 tar 文件为多份小文件
|
||||
|
||||
类 Unix 系统中使用 split 命令来将大体积文件分割成小体积文件。大体积的 tar 当然也可以使用这个命令来进行分割。
|
||||
|
||||
假设需要将 "mybackup-2016-09-09.tgz" 分割成每份 6 MB 的小文件。
|
||||
|
||||
```
|
||||
Syntax : split -b <Size-in-MB> <tar-file-name>.<extension> "prefix-name"
|
||||
```
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# split -b 6M mybackup-2016-09-09.tgz mybackup-parts
|
||||
```
|
||||
|
||||
以上命令会在当前目录把 mybackup-2016-09-09.tgz 文件分割成多个 6 MB 的小文件,文件名为 mybackup-partsaa ~ mybackup-partsag。如果想要在分割后的文件名中以数字而非字母来区分,可以在以上的 split 命令中使用 `-d` 选项。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ls -l mybackup-parts*
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsaa
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsab
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsac
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsad
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsae
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsaf
|
||||
-rw-r--r--. 1 root root 637219 Sep 10 03:05 mybackup-partsag
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
然后通过网络将这些分割文件转移到其他服务器,就可以合并成为一个单独的 tar 文件了,如下:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# cat mybackup-partsa* > mybackup-2016-09-09.tgz
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
文毕,希望你喜欢 tar 命令的这几个不同的示例。随时评论并分享你的心得。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxtechi.com/17-tar-command-examples-in-linux/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
译者:[GHLandy](https://github.com/GHLandy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.linuxtechi.com/author/pradeep/
|
@ -0,0 +1,58 @@
|
||||
5 个值得了解的 Linux 服务器发行版
|
||||
=========
|
||||
|
||||
> 你在 Distrowatch.com 上看到列出的将近 300 个 Linux 发行版本中,几乎任何一个发行版都可以被用来作为服务器系统。下面是一些相对于其他发行版而言比较突出的一些发行版。
|
||||
|
||||

|
||||
|
||||
你在 Distrowatch.com 上看到列出的将近 300 个 Linux 发行版本中,几乎任何一个发行版都可以被用来作为服务器系统,在 Linux 发展的早期,给用户提供的一直是“全能”发行版,例如 Slackware、Debian 和 Gentoo 可以为家庭和企业作为服务器完成繁重的工作。那或许对业余爱好者是不错的,但是它对于专业人员来说也有好多不必要的地方。
|
||||
|
||||
首先,这里有一些发行版可以作为文件和应用服务器,给工作站提供常见外围设备的共享,提供网页服务和其它我们希望服务器做的任何工作,不管是在云端、在数据中心或者在服务器机架上,除此之外没有别的用途。
|
||||
|
||||
下面是 5 个最常用的 Linux 发行版的简单总结,而且每一个发行版都可以满足小型企业的需求。
|
||||
|
||||
### Red Hat Enterprise Linux(RHEL)
|
||||
|
||||
这或许是最有名的 Linux 服务器发行版了。RHEL 以它在高要求的至关重要的任务上坚如磐石的稳定性而出名,例如运行着纽约证券交易系统。红帽也提供了业内最佳的服务支持。
|
||||
|
||||
那么红帽 Linux 的缺点都有什么呢? 尽管红帽以提供首屈一指的客户服务和支持而出名,但是它的支持订阅费用并不便宜。有人可能会指出,这的确物有所值。确实有便宜的 RHEL 第三方服务,但是你或许应该在这么做之前做一些研究。
|
||||
|
||||
### CentOS
|
||||
|
||||
任何喜欢 RHEL,但是又不想给红帽付费来获得支持的人都应该了解一下 CentOS,它基本上是红帽企业版 Linux 的一个分支。尽管这个项目 2004 年左右才开始,但它在 2014 年得到了红帽的官方支持,而红帽现在雇佣了这个项目的大多数开发者,这意味着安全补丁和漏洞修复提交到红帽之后不久就会在 CentOS 上可用。
|
||||
|
||||
如果你想要部署 CentOS,你将需要有 Linux 技能的员工,因为没有了技术支持,你基本上只能靠自己。有一个好消息是 CentOS 社区提供了十分丰富的资源,例如邮件列表、Web 论坛和聊天室,所以对那些寻找帮助的人来说,社区帮助还是有的。
|
||||
|
||||
### Ubuntu Server
|
||||
|
||||
当许多年前 Canonical 宣布它将要推出一个服务器版本的 Ubuntu 的时候,你可能会听到过嘲笑者的声音。然而嘲笑很快变成了惊奇,Ubuntu Server 迅速地站稳了脚跟。部分原因是因为其来自 Debian 派生的基因,Debian 长久以来就是一个备受喜爱的 Linux 服务器发行版,Ubuntu 通过提供一般人可以支付得起的技术支持费用、优秀的硬件支持、开发工具和很多亮点填补了这个市场空隙。
|
||||
|
||||
那么 Ubuntu Server 有多么受欢迎呢?最近的数据表明它正在成为在 OpenStack 和 Amazon Elastic Compute Cloud 上[部署最多的操作系统][1]。在那里 Ubuntu Server 超过了位居第二的 Amazon Linux 的 Amazon Machine Image 一大截,而且让第三位 Windows 处于尘封的地位。另外一个调查显示 Ubuntu Server 是[使用最多的 Linux web 服务器][2]。
|
||||
|
||||
### SUSE Linux Enterprise Server(SLES)
|
||||
|
||||
这个源自德国的发行版在欧洲有很大的用户群,而且在本世纪初由 Novell 公司引起的 PR 问题出现之前,它一直都是大西洋这边排名第一的服务器发行版。在那段漫长的时期之后,SUSE 在美国获得了发展,而且惠普企业公司将它作为 [Linux 首选合作伙伴][3],这或许会加速它的普及。
|
||||
|
||||
SLES 稳定而且易于维护,这正是很久以来对于一个好的 Linux 发行版所期待的东西。付费的 7*24 小时快速响应技术支持可以供你选择,使得这发行版很适合关键任务的部署。
|
||||
|
||||
### ClearOS
|
||||
|
||||
基于 RHEL,之所以这里要包括 [ClearOS][4] 是因为它对于每个人来说都足够简单,甚至是没有专业知识的人都可以去配置它。它定位于服务中小型企业,它也可以被家庭用户用来作为娱乐服务器,为了简单易用我们可以基于 Web 界面进行管理,它是以“构建你的 IT 基础设施应该像在智能手机上下载 app 一样简单”为前提来定制的。
|
||||
|
||||
最新的 7.2 发行版本,包括了一些可能并不“轻量级”的功能,例如对微软 Hyper-V 技术的 VM 支持,支持 XFS 和 BTRFS 文件系统,也支持 LVM 和 IPv6。这些新特性在免费版本或者在并不算贵的带着各种支持选项的专业版中都是可用的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://windowsitpro.com/industry/five-linux-server-distros-worth-checking-out
|
||||
|
||||
作者:[Christine Hall][a]
|
||||
译者:[LinuxBars](https://github.com/LinuxBars)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://windowsitpro.com/industry/five-linux-server-distros-worth-checking-out
|
||||
[1]: http://www.zdnet.com/article/ubuntu-linux-continues-to-dominate-openstack-and-other-clouds/
|
||||
[2]: https://w3techs.com/technologies/details/os-linux/all/all
|
||||
[3]: http://windowsitpro.com/industry/suse-now-hpes-preferred-partner-micro-focus-pact
|
||||
[4]: https://www.clearos.com/
|
@ -0,0 +1,58 @@
|
||||
在 WordPress 下如何通过 Markdown 来提高工作效率
|
||||
=================
|
||||
|
||||

|
||||
|
||||
Markdown 是一种简单的标记语言,旨在帮助你花费更小的代价来格式化纯文本文档。在 WordPress 下你可以使用 HTML 或者可视化编辑器来格式化你的文档,但是使用 markdown 可以让格式化文档变得更加容易,而且你随时可以导出成很多种格式,包括(但不限于)HTML。
|
||||
|
||||
WordPress 没有原生的 markdown 的支持,但是,如果你希望的话,在你的网站上有多种插件可以添加这种功能。
|
||||
|
||||
在这个教程中,我将会演示如何使用流行的 WP-Markdown 插件为 WordPress 网站添加 markdown 支持。
|
||||
|
||||
### 安装
|
||||
|
||||
导航到 “Plugins -> Add New”,然后在提供的搜索框中输入 “[wp-markdown][1]” 就可以直接安装。插件应该会出现在列表中的第一个。单击 “Install Now” 进行安装。
|
||||
|
||||

|
||||
|
||||
### 配置
|
||||
|
||||
当你已经安装了这个插件并且激活它之后,导航到 “Settings -> Writing” 并向下滚动,直到 markdown 选项。
|
||||
|
||||
你可以启用文章、页面和评论中对于 markdown 的支持。如果你刚刚开始学习 markdown 语法的话,那么你也可以在文章编辑器或者评论的地方启用一个帮助栏,这可以使你更方便一些。
|
||||
|
||||

|
||||
|
||||
如果在你的博客文章中包括代码片段的话,那么启用 “Prettify syntax highlighter” 将会让你的代码片段自动语法高亮。
|
||||
|
||||
一旦对于你的选择感觉满意的话,那么就单击 “Save Changes” 来保存你的设置吧。
|
||||
|
||||
### 使用 Markdown 来写你的文章
|
||||
|
||||
当你在自己网站中启用了 markdown 的支持,你就可以立马开始使用了。
|
||||
|
||||
通过 “Posts -> Add New” 创建一篇新的文章。你将会注意到默认的可视化及纯文本编辑器已经被 markdown 编辑器所替代。
|
||||
|
||||
如果你在配置选项中没有启用 markdown 的帮助栏,你将不会看到 markdown 格式化后的实时预览。然而,只要你的语法是正确的,当你保存或者发布文章的时候,你的 markdown 就会转换成正确的 HTML。
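
例如(下面的内容仅作演示,链接是占位地址),在编辑器中输入这样一段 Markdown:

```
## 本周写作计划

- **周一**:完成新文章草稿
- *周二*:校对并发布
- 周三:回复[读者评论](https://example.com)
```

保存或发布之后,它会被转换成对应的标题、加粗、斜体和链接等 HTML 标签。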
|
||||
|
||||
然而,如果你是 markdown 的初学者的话,实时预览这一特征对你会很重要,只需要简单的回到刚才的设置中启用帮助栏选项,你就可以在你的文章底部看到一块漂亮的实时预览区域。另外,你也可以在顶部看到很多按钮,它们将帮助你在文章中快速的插入 markdown 格式。
|
||||
|
||||

|
||||
|
||||
### 结语
|
||||
|
||||
正如你所看到的那样,在 WordPress 网站上添加 markdown 支持确实容易,你将只需要花费几分钟的时间就可以了。如果对于 markdown 你全然不知的话,或许你也可以查看我们的 [markdown 备忘录][2],它将对于 markdown 语法提供一个全面的参考。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/use-markdown-in-wordpress/
|
||||
|
||||
作者:[Ayo Isaiah][a]
|
||||
译者:[yangmingming](https://github.com/yangmingming)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.maketecheasier.com/author/ayoisaiah/
|
||||
[1]: https://wordpress.org/plugins/wp-markdown/
|
||||
[2]: https://www.maketecheasier.com/productive-with-markdown-cheatsheet/
|
@ -0,0 +1,63 @@
|
||||
Linus Torvalds 透露他编程最喜欢使用的笔记本
|
||||
=================
|
||||
|
||||
> 是戴尔 XPS 13 开发者版。下面就是原因。
|
||||
|
||||
我最近和一些 Linux 开发者讨论了对于严谨的程序员来说,最好的笔记本是什么样的问题。结果,我从这些程序员的观点中筛选出了多款笔记本电脑。那么依我之见赢家是谁呢?就是戴尔 XPS 13 开发者版。和我观点一样的大有人在。Linux的缔造者 Linus Torvalds 也认同这个观点。对于他来说,戴尔 XPS 13 开发者版大概是最好的笔记本电脑了。
|
||||
|
||||

|
||||
|
||||
Torvalds 的需求可能和你的不同。
|
||||
|
||||
在 Google+ 上,Torvalds 解释道,“第一:[我从来不把笔记本当成台式机的替代品][1],并且,我每年旅游不了几次。所以对于我来说,笔记本是一个相当专用的东西,并不是每日(甚至每周)都要使用,因此,主要的标准不是那种“差不多每天都使用”的标准,而是非常适合于旅游时使用。”
|
||||
|
||||
因此,对于 Torvalds 来说,“我最后比较关心的一点是它是相当的小和轻,因为在会议上我可能一整天都需要带着它。我同样需要一个好的屏幕,因为到目前为止,我主要是在桌子上使用它,我希望文字的显示小而且清晰。”
|
||||
|
||||
戴尔的显示器是由 Intel's Iris 540 GPU 支持的。在我的印象中,它表现的非常的不错。
|
||||
|
||||
Iris 驱动了 13.3 英寸的 3200×1800 的显示屏。每英寸有 280 像素,比我喜欢的 [2015 年的 Chromebook Pixel][2] 多了 40 个像素,比 [Retina 屏的 MacBook Pro][3] 还要多 60 个像素。
|
||||
|
||||
然而,要让上面说的硬件配置在 [Gnome][4] 桌面上玩好也不容易。正如 Torvalds 在另一篇文章解释的那样,它“[和我的桌面电脑有一样的分辨率][5],但是,显然因为笔记本的显示屏更小,Gnome 桌面似乎自己做了个艰难的决定,认为我需要 2 倍的自动缩放比例,这简直愚蠢到炸裂(例如窗口显示,图标等)。”
|
||||
|
||||
解决方案?你不用想着在用户界面里面找了。你需要在 shell下运行:`gsettings set org.gnome.desktop.interface scaling-factor 1`。
|
||||
|
||||
Torvalds 或许使用 Gnome 桌面,但是他不是很喜欢 Gnome 3.x 系列。这一点上我跟他没有不同意见。这就是为什么我使用 [Cinnamon][7] 来代替。
|
||||
|
||||
他还希望“一个相当强大的 CPU,因为当我旅游的时候,我依旧需要编译 Linux 内核很多次。我并不会像在家那样每次 pull request 都进行一次完整的“make allmodconfig”编译,但是我希望可以比我以前的笔记本多编译几次,实际上,这(也包括屏幕)应该是我想升级的主要原因。”
|
||||
|
||||
Linus 没有描述他的 XPS 13 的细节,但是我测评过的那台是一个高端机型。它带有双核 2.2GHz 的第 6 代英特尔酷睿 i7-6550U Skylake 处理器、16GB DDR3 内存,以及一块半 TB(500GB)的 PCIe 固态硬盘(SSD)。我可以肯定,Torvalds 的系统至少有同样精良的配置。
|
||||
|
||||
一些你或许会关注的特征没有在 Torvalds 的清单中:
|
||||
|
||||
> “我不会关心的是触摸屏,因为我的手指相对于我所看到的文字是又大又笨拙(我也无法处理污渍:也许我的手指特别油腻,但是我真的不想碰那些屏幕)。
|
||||
|
||||
> 我并不十分关心那些“一整天的电池寿命”,因为坦率的讲,我不记得上次没有接入电源时什么时候了。我可能着急忙慌而忘记插电,但它不是一个天大的问题。现在电池的寿命“超过两小时”,我只是不那么在乎了。”
|
||||
|
||||
戴尔声称,XPS 13,搭配 56 瓦小时的 4 芯电池,拥有 12 小时的电池寿命。以我的使用经验它已经很好的用过了 10 个小时。我从没有尝试过把电量完全耗完是什么状态。
|
||||
|
||||
Torvalds 也没有遇到 Intel 的 Wi-Fi 设备问题。非开发者版使用 Broadcom 的芯片设备,已经被 Windows 和 Linux 用户发现了一些问题。戴尔的技术支持对于我来解决这些问题非常有帮助。
|
||||
|
||||
一些用户在使用 XPS 13 触摸板的时候,遇到了问题。Torvalds 和我都几乎没有什么困扰。Torvalds 写到,“XPS13 触摸板对于我来说运行的非常好。这可能只是个人喜好,但它操作起来比较流畅,响应比较快。”
|
||||
|
||||
不过,尽管 Torvalds 喜欢 XPS 13,他同时也钟情于最新版的联想 X1 Carbon、惠普 Spectre 13 x360 和去年的联想 Yoga 900。至于我?我喜欢 XPS 13 开发者版。至于价钱,我之前测评过的型号是 $1949.99,可能会让你的信用卡有些吃不消。
|
||||
|
||||
因此,如果你希望像世界上顶级的程序员之一一样开发的话,Dell XPS 13 开发者版对得起它的价格。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.zdnet.com/article/linus-torvalds-reveals-his-favorite-programming-laptop/
|
||||
|
||||
作者:[Steven J. Vaughan-Nichols][a]
|
||||
译者:[yangmingming](https://github.com/yangmingming)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
|
||||
[1]: https://plus.google.com/+LinusTorvalds/posts/VZj8vxXdtfe
|
||||
[2]: http://www.zdnet.com/article/the-best-chromebook-ever-the-chromebook-pixel-2015/
|
||||
[3]: http://www.zdnet.com/product/apple-15-inch-macbook-pro-with-retina-display-mid-2015/
|
||||
[4]: https://www.gnome.org/
|
||||
[5]: https://plus.google.com/+LinusTorvalds/posts/d7nfnWSXjfD
|
||||
[6]: http://www.zdnet.com/article/linus-torvalds-finds-gnome-3-4-to-be-a-total-user-experience-design-failure/
|
||||
[7]: http://www.zdnet.com/article/how-to-customise-your-linux-desktop-cinnamon/
|
@ -0,0 +1,54 @@
|
||||
Taskwarrior:Linux 下一个很棒的命令行 TODO 工具
|
||||
==============
|
||||
|
||||
Taskwarrior 是 Ubuntu/Linux 下一个简单而直接的基于命令行的 TODO 工具。这个开源软件是我曾用过的最简单的[基于命令行的工具][4]之一。Taskwarrior 可以帮助你更好地组织你自己,而不用安装笨重的新工具(那有时反而违背了使用 TODO 工具的初衷)。
|
||||
|
||||

|
||||
|
||||
### Taskwarrior:一个简单的、帮助你完成任务的命令行 TODO 工具
|
||||
|
||||
Taskwarrior 是一个开源、跨平台、基于命令行的 TODO 工具,它帮你在终端中管理你的 to-do 列表。这个工具让你可以轻松地添加任务、展示列表、移除任务。而且,它就在你的默认软件仓库中,不用添加新的 PPA。在 Ubuntu 16.04 LTS 或者相似的发行版中,可以在终端中用如下命令安装 Taskwarrior。
|
||||
|
||||
```
|
||||
sudo apt-get install task
|
||||
```
|
||||
|
||||
简单的用法如下:
|
||||
|
||||
```
|
||||
$ task add Read a book
|
||||
Created task 1.
|
||||
$ task add priority:H Pay the bills
|
||||
Created task 2.
|
||||
```
|
||||
|
||||
我使用上面截图中的同样一个例子。是的,你可以设置优先级(H、L 或者 M)。并且你可以使用‘task’或者‘task next’命令来查看你最新创建的 to-do 列表。比如:
|
||||
|
||||
```
|
||||
$ task next
|
||||
|
||||
ID Age P Description Urg
|
||||
-- --- - -------------------------------- ----
|
||||
2 10s H Pay the bills 6
|
||||
1 20s Read a book 0
|
||||
```
|
||||
|
||||
完成之后,你可以使用 ‘task 1 done’ 或者 ‘task 2 done’ 来清除列表。[可以在这里][1]找到更加全面的命令和使用案例。同样,Taskwarrior 是跨平台的,这意味着不管怎样,你都可以找到一个[满足你需求][2]的版本。如果你需要的话,这里甚至有[一个安卓版][3]。祝您用得开心!
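
作为示意(实际输出可能因版本而略有不同),完成一个任务大致是这样:

```
$ task 1 done
Completed task 1 'Read a book'.
Completed 1 task.
```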
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html
|
||||
|
||||
作者:[Manuel Jose][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+techdrivein+%28Tech+Drive-in%29
|
||||
[1]: https://taskwarrior.org/docs/
|
||||
[2]: https://taskwarrior.org/download/
|
||||
[3]: https://taskwarrior.org/news/news.20160225.html
|
||||
[4]: http://www.techdrivein.com/search/label/Terminal
|
||||
|
||||
|
@ -0,0 +1,78 @@
|
||||
如何用四个简单的步骤加速 LibreOffice
|
||||
====
|
||||
|
||||

|
||||
|
||||
对于许许多多开源软件的粉丝和支持者来说,LibreOffice 是 Microsoft Office 最好的替代品,在最近的一些发布版本中可以看到它明显有了巨大的改进。然而,初始启动的体验仍然与期望有所差距。有一些方法可以缩短 LibreOffice 的启动时间并改善它的整体性能。
|
||||
|
||||
在下面的段落里,我将会展示一些实用性的步骤,你可以通过它们来改善 LibreOffice 的加载时间和响应能力。
|
||||
|
||||
### 1. 增加每个对象和图像缓存的内存占用
|
||||
|
||||
这将可以通过分配更多的内存资源给图像缓存和对象来加快程序的加载时间。
|
||||
|
||||
1. 启动 LibreOffice Writer (或者 Calc)。
|
||||
2. 点击菜单栏上的 “工具 -> 选项” 或者按键盘上的快捷键“Alt + F12”。
|
||||
3. 点击 LibreOffice 下面的“内存”然后增加“用于 LibreOffice” 到 128MB。
|
||||
4. 同样的增加“每个对象的内存占用”到 20MB。
|
||||
5. 点击确定来保存你的修改。
|
||||
|
||||

|
||||
|
||||
注意:你可以根据自己机器的性能把数值设置得比建议值的高一些或低一些。最好通过亲自体验来看看什么值能够让机器达到最佳性能。
|
||||
|
||||
### 2.启用 LibreOffice 的快速启动器(QuickStarter)
|
||||
|
||||
如果你的机器上有足够大的内存,比如 4GB 或者更大,你可以启用“系统托盘快速启动器”,从而让 LibreOffice 的一部分保持在内存中,在打开新文件时能够快速反应。
|
||||
|
||||
在启用这个选项以后,你会清楚地看到在打开新文件时它的性能有了很大的提高。
|
||||
|
||||
1. 通过点击“工具 -> 选项”来打开选项对话框。
|
||||
2. 在 “LibreOffice” 下面的侧边栏选择“内存”。
|
||||
3. 勾选“启用系统托盘快速启动器”复选框。
|
||||
4. 点击“确定”来保存修改。
|
||||
|
||||

|
||||
|
||||
一旦这个选项启用以后,你将会在你的系统托盘看到 LibreOffice 图标,以及可以打开任何类型的文件的选项。
|
||||
|
||||
### 3. 禁用 Java 运行环境
|
||||
|
||||
另一个加快 LibreOffice 加载时间和响应能力的简单方法是禁用 Java。
|
||||
|
||||
1. 同时按下“Alt + F12”打开选项对话框。
|
||||
2. 在侧边栏里,选择“Libreoffice”,然后选择“高级”。
|
||||
3. 取消勾选“使用 Java 运行环境”选项。
|
||||
4. 点击“确定”来关闭对话框。
|
||||
|
||||

|
||||
|
||||
如果你只使用 Writer 和 Calc,那么关闭 Java 不会影响你正常使用,但如果你需要使用 LibreOffice Base 和一些其他的特性,那么你可能需要重新启用它。在那种情况,将会弹出一个框询问你是否希望再次打开它。
|
||||
|
||||
### 4. 减少使用撤销步骤
|
||||
|
||||
默认情况下,LibreOffice 允许你撤销一个文件的多达 100 个改变。绝大多数用户不需要这么多,所以在内存中保留这么多撤销步骤是对资源的巨大浪费。
|
||||
|
||||
我建议减少撤销步骤到 20 次以下来为其他东西释放内存,但是这个部分需要根据你自己的需求来确定。
|
||||
|
||||
1. 通过点击 “工具 -> 选项”来打开选项对话框。
|
||||
2. 在 “LibreOffice” 下面的侧边栏,选择“内存”。
|
||||
3. 在“撤销”下面把步骤数目改成最适合你的值。
|
||||
4. 点击“确定”来保存修改。
|
||||
|
||||

|
||||
|
||||
假如这些技巧对加快你的 LibreOffice 套件的加载速度有所帮助,请在评论里告诉我们。同样,请分享你知道的任何其他技巧来给其他人带来帮助。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/speed-up-libreoffice/
|
||||
|
||||
|
||||
作者:[Ayo Isaiah][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.maketecheasier.com/author/ayoisaiah/
|
@ -0,0 +1,63 @@
|
||||
是时候合并 LibreOffice 和 OpenOffice 了
|
||||
==========
|
||||
|
||||

|
||||
|
||||
先说下 OpenOffice。可能你已经无数次地看到说 Apache OpenOffice 即将结束。上一个稳定版本是 4.1.2 (发布于 2015 年 10 月),而最近的一个严重安全漏洞用了一个月才打上补丁。编码人员的缺乏使得开发如爬行前进一般。然后,可能是最糟糕的消息了:这个项目建议用户切换到 [MS Office](https://products.office.com/)(或 [LibreOffice](https://www.libreoffice.org/download/))。
|
||||
|
||||
丧钟为谁而鸣?丧钟为你而鸣,OpenOffice。
|
||||
|
||||
我想说些可能会惹恼一些人的话。你准备好了吗?
|
||||
|
||||
OpenOffice 的终止对开源和用户来说都将是件好事。
|
||||
|
||||
让我解释一下。
|
||||
|
||||
### 一个分支统治所有
|
||||
|
||||
当 LibreOffice 从 OpenOffice 分支出来后,我们看到了另一个情况:分支不只在原始基础上进行改进,而且大幅超越了它。LibreOffice 一举成功。所有之前预装 OpenOffice 的 Linux 发行版都迁移到了这个新项目。LibreOffice 从起跑线突然冲出,并迅速迈出了一大步。更新以极快的速度发布,改善内容丰富而重要。
|
||||
|
||||
不久后,OpenOffice 就被开源社区丢在了脑后。当 2011 年 Oracle 决定终止这个项目并把代码捐赠给 Apache 项目时,这种情况自然更加恶化了。从此 OpenOffice 艰难前进,于是我们就看到了现在这种局面:一个生机勃勃的 LibreOffice 和一个艰难的、缓慢的 OpenOffice。
|
||||
|
||||
但我认为在这个相当昏暗的隧道末尾有一丝曙光。
|
||||
|
||||
### 合并他们
|
||||
|
||||
这听起来可能很疯狂,但我认为是时候把 LibreOffice 和 OpenOffice 合二为一了。是的,我知道很可能有政治考虑和自尊意识,但我认为合并成一个会更好。合并的好处很多。我首先能想到的是:
|
||||
|
||||
- 把 MS Office 过滤器整合起来:OpenOffice 在更好地导入某些 MS Office 文件上功能很强(而众所周知 LibreOffice 正在改进,但时好时坏)
|
||||
- LibreOffice 有更多开发者:尽管 OpenOffice 的开发者数量不多,但也无疑会增加到合并后的项目。
|
||||
- 结束混乱:很多用户以为 OpenOffice 和 LibreOffice 是同一个东西。有些甚至不知道 LibreOffice 存在。这将终结那些混乱。
|
||||
- 合并他们的用户量:OpenOffice 和 LibreOffice 各自拥有大量用户。联合后,他们将是个巨大的力量。
|
||||
|
||||
### 宝贵机遇
|
||||
|
||||
OpenOffice 的终止实际上会成为整个开源办公套件行业的一个宝贵机遇。为什么?因为有些事情我认为早就应该做了。如果 OpenOffice 和 LibreOffice 集中他们的力量,比较他们的代码并合并,他们之后就可以做一些更必要的改进工作,不仅是整体的内部工作,也包括界面。
|
||||
|
||||
我们得面对现实,LibreOffice 和(相关的) OpenOffice 的用户界面都是过时的。当我安装 LibreOffice 5.2.1.2 时,工具栏绝对是个灾难(见下图)。
|
||||
|
||||

|
||||
|
||||
*LibreOffice 默认工具栏显示*
|
||||
|
||||
尽管我支持和关心(并且日常使用)LibreOffice,但事实已经再清楚不过了,界面需要完全重写。我们正在使用的是 90 年代末/ 2000 年初的复古界面,它必须得改变了。当新用户第一次打开 LibreOffice 时,他们会被淹没在大量按钮、图标和工具栏中。Ubuntu Unity 的平视显示(Head up Display,简称 HUD)帮助解决了这个问题,但那并不适用于其它桌面和发行版。当然,有经验的用户知道在哪里找什么(甚至定制工具栏以满足特殊的需要),但对新用户或普通用户,那种界面是个噩梦。现在是做出改变的一个好时机。引入 OpenOffice 最后残留的开发者并让他们加入到改善界面的战斗中。借助于整合 OpenOffice 额外的导入过滤器和现代化的界面,LibreOffice 终能在家庭和办公桌面上都引起一些轰动。
|
||||
|
||||
### 这会真的发生吗?
|
||||
|
||||
这需要发生。但是会发生吗?我不知道。但即使掌权者决定用户界面并不需要重组(这会是个失误),合并 OpenOffice 仍是前进的一大步。合并两者将带来开发的更专注,更好的推广,公众更少的困惑。
|
||||
|
||||
我知道这可能看起来有悖于开源的核心精神,但合并 LibreOffice 和 OpenOffice 将能联合两者的力量,而且可能会摆脱弱点。
|
||||
|
||||
在我看来,这是双赢的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.techrepublic.com/article/its-time-to-make-libreoffice-and-openoffice-one-again/
|
||||
|
||||
作者:[Jack Wallen][a]
|
||||
译者:[bianjp](https://github.com/bianjp)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.techrepublic.com/search/?a=jack%2Bwallen
|
@ -0,0 +1,364 @@
|
||||
在 Ubuntu 16.04 上安装和使用服务器监控报警系统 Shinken
|
||||
=====
|
||||
|
||||
Shinken 是一个用 Python 实现的开源的主机和网络监控框架,并与 Nagios 兼容,它可以运行在所有支持 Python 程序的操作系统上,比如说 Linux、Unix 和 Windows。Shinken 是 Jean Gabes 为了验证一个新的 Nagios 架构思路而编写的,但是这个想法被 Nagios 的作者拒绝后,它就成为了一个独立的网络系统监视软件,并保持了与 Nagios 的兼容。
|
||||
|
||||
在这篇教程中,我将会描述如何从源代码编译安装 Shinken 和向监视系统中添加一台 Linux 主机。我将会以 Ubuntu 16.04 Xenial Xerus 操作系统来作为 Shinken 服务器和所监控的主机。
|
||||
|
||||
### 第一步 安装 Shinken 服务器
|
||||
|
||||
Shinken 是一个 Python 框架,我们可以通过 `pip` 安装或者从源码来安装它,在这一步中,我们将用源代码编译安装 Shinken。
|
||||
|
||||
在我们开始安装 Shinken 之前还需要完成几个步骤。
|
||||
|
||||
安装一些新的 Python 软件包并创建一个名为 `shinken` 的系统用户:
|
||||
|
||||
```
|
||||
sudo apt-get install python-setuptools python-pip python-pycurl
|
||||
useradd -m -s /bin/bash shinken
|
||||
```
|
||||
|
||||
从 GitHub 仓库下载 Shinken 源代码:
|
||||
|
||||
```
|
||||
git clone https://github.com/naparuba/shinken.git
|
||||
cd shinken/
|
||||
```
|
||||
|
||||
然后用以下命令安装 Shinken:
|
||||
|
||||
```
|
||||
git checkout 2.4.3
|
||||
python setup.py install
|
||||
```
|
||||
|
||||
然后,为了得到更好的效果,我们还需要从 Ubuntu 软件库中安装 `python-cherrypy3` 软件包:
|
||||
|
||||
```
|
||||
sudo apt-get install python-cherrypy3
|
||||
```
|
||||
|
||||
到这里,Shinken 已经成功安装,接下来我们将 Shinken 添加到系统启动项并且启动它:
|
||||
|
||||
```
|
||||
update-rc.d shinken defaults
|
||||
systemctl start shinken
|
||||
```
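启动之后,你可以先确认一下 Shinken 的各个守护进程确实在运行,再继续后面的步骤(这只是我个人习惯的一个检查示意,仅供参考):

```
# 查看 init 脚本注册的服务状态
systemctl status shinken

# 或者直接看一下 shinken 相关的进程(arbiter、scheduler、poller 等)是否都已启动
ps aux | grep "[s]hinken"
```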
|
||||
|
||||
### 第二步 安装 Shinken Webui2
|
||||
|
||||
Webui2 是 Shinken 的 Web 界面(在 shinken.io 可以找到)。最简单的安装 Shinken webui2 的方法是使用shinken CLI 命令(必须作为 `shinken` 用户执行)。
|
||||
|
||||
切换到 shinken 用户:
|
||||
|
||||
```
|
||||
su - shinken
|
||||
```
|
||||
|
||||
初始化 shinken 配置文件,下面的命令将会创建一个新的配置文件 `.shinken.ini`:
|
||||
|
||||
```
|
||||
shinken --init
|
||||
```
|
||||
|
||||
接下来用 shinken CLI 命令来安装 `webui2`:
|
||||
|
||||
```
|
||||
shinken install webui2
|
||||
```
|
||||
|
||||

|
||||
|
||||
至此 webui2 已经安装好,但是我们还需要安装 MongoDB 和用 `pip` 来安装另一个 Python 软件包。在 root 下运行如下命令:
|
||||
|
||||
```
|
||||
sudo apt-get install mongodb
|
||||
pip install "pymongo>=3.0.3" requests arrow bottle==0.12.8
|
||||
```
|
||||
|
||||
接下来,切换到 shinken 目录下并且通过编辑 `broker-master.cfg` 文件来添加这个新的 webui2 模块:
|
||||
|
||||
```
|
||||
cd /etc/shinken/brokers/
|
||||
vim broker-master.cfg
|
||||
```
|
||||
|
||||
在第 40 行添加一个模块选项:
|
||||
|
||||
```
|
||||
modules webui2
|
||||
```
|
||||
|
||||
保存文件并且退出编辑器。
|
||||
|
||||
现在进入 `contacts` 目录下编辑 `admin.cfg` 来进行管理配置。
|
||||
|
||||
```
|
||||
cd /etc/shinken/contacts/
|
||||
vim admin.cfg
|
||||
```
|
||||
|
||||
按照如下修改:
|
||||
|
||||
```
|
||||
contact_name admin # Username 'admin'
|
||||
password yourpass # Pass 'mypass'
|
||||
```
|
||||
|
||||
保存和退出。
|
||||
|
||||
### 第三步 安装 Nagios 插件和 Shinken 软件包
|
||||
|
||||
在这一步中,我们将安装 Nagios 插件和一些 Perl 模块。然后从 shinken.io 安装其他的软件包来实现监视。
|
||||
|
||||
安装 Nagios 插件和安装 Perl 模块所需要的 `cpanminus`:
|
||||
|
||||
```
|
||||
sudo apt-get install nagios-plugins* cpanminus
|
||||
```
|
||||
|
||||
用 `cpanm` 命令来安装 Perl 模块。
|
||||
|
||||
```
|
||||
cpanm Net::SNMP
|
||||
cpanm Time::HiRes
|
||||
cpanm DBI
|
||||
```
|
||||
|
||||
现在我们在 shinken 的 libexec 目录下创建一个指向 `utils.pm` 文件的链接,并且为 `Log_File_Health` 创建一个新的日志目录。
|
||||
|
||||
```
|
||||
chmod u+s /usr/lib/nagios/plugins/check_icmp
|
||||
ln -s /usr/lib/nagios/plugins/utils.pm /var/lib/shinken/libexec/
|
||||
mkdir -p /var/log/rhosts/
|
||||
touch /var/log/rhosts/remote-hosts.log
|
||||
```
|
||||
|
||||
然后,从 shinken.io 安装 shinken 软件包 `ssh` 和 `linux-snmp` 来监视 SSH 和 SNMP :
|
||||
|
||||
```
|
||||
su - shinken
|
||||
shinken install ssh
|
||||
shinken install linux-snmp
|
||||
```
|
||||
|
||||
### 第四步 添加一个 Linux 主机 host-one
|
||||
|
||||
我们将添加一个新的将被监控的 Linux 主机,IP 地址为 192.168.1.121,主机名为 host-one 的 Ubuntu 16.04。
|
||||
|
||||
连接到 host-one 主机:
|
||||
|
||||
```
|
||||
ssh host1@192.168.1.121
|
||||
```
|
||||
|
||||
从 Ubuntu 软件库中安装 snmp 和 snmpd 软件包:
|
||||
|
||||
```
|
||||
sudo apt-get install snmp snmpd
|
||||
```
|
||||
|
||||
然后,用 `vim` 编辑 `snmpd.conf` 配置文件:
|
||||
|
||||
```
|
||||
vim /etc/snmp/snmpd.conf
|
||||
```
|
||||
|
||||
注释掉第 15 行并取消注释第 17 行:
|
||||
|
||||
```
|
||||
#agentAddress udp:127.0.0.1:161
|
||||
agentAddress udp:161,udp6:[::1]:161
|
||||
```
|
||||
|
||||
注释掉第 51 和 53 行,然后加一行新的配置,如下:
|
||||
|
||||
```
|
||||
#rocommunity mypass default -V systemonly
|
||||
#rocommunity6 mypass default -V systemonly
|
||||
|
||||
rocommunity mypass
|
||||
```
|
||||
|
||||
保存并退出。
|
||||
|
||||
现在用 `systemctl` 命令来启动 `snmpd` 服务:
|
||||
|
||||
```
|
||||
systemctl start snmpd
|
||||
```
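在继续之前,你也可以从 Shinken 服务器上用 `snmpwalk` 测试一下 SNMP 是否配置成功(这只是一个可选的检查步骤;`snmpwalk` 来自 snmp 客户端软件包,如果 Shinken 服务器上没有安装,需要先执行 `sudo apt-get install snmp`):

```
# 用前面配置的 community(mypass)读取 host-one 的系统信息
# 1.3.6.1.2.1.1 是标准的 system 子树 OID
snmpwalk -v 2c -c mypass 192.168.1.121 1.3.6.1.2.1.1
```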
|
||||
|
||||
在 shinken 服务器上通过在 `hosts` 文件夹下创建新的文件来定义一个新的主机:
|
||||
|
||||
```
|
||||
cd /etc/shinken/hosts/
|
||||
vim host-one.cfg
|
||||
```
|
||||
|
||||
粘贴如下配置信息:
|
||||
|
||||
```
|
||||
define host{
|
||||
use generic-host,linux-snmp,ssh
|
||||
contact_groups admins
|
||||
host_name host-one
|
||||
address 192.168.1.121
|
||||
_SNMPCOMMUNITY mypass # SNMP Pass Config on snmpd.conf
|
||||
}
|
||||
```
|
||||
|
||||
保存并退出。
|
||||
|
||||
在 shinken 服务器上编辑 SNMP 配置文件。
|
||||
|
||||
```
|
||||
vim /etc/shinken/resource.d/snmp.cfg
|
||||
```
|
||||
|
||||
将 `public` 改为 `mypass`,必须和你在客户端 `snmpd` 配置文件中使用的密码相同:
|
||||
|
||||
```
|
||||
$SNMPCOMMUNITYREAD$=mypass
|
||||
```
|
||||
|
||||
保存并退出。
|
||||
|
||||
现在将服务端和客户端都重启:
|
||||
|
||||
```
|
||||
reboot
|
||||
```
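如果你不想把整台机器都重启,通常也可以只重启相关的服务(下面只是基于前文命令的一个示意,具体请按你的环境调整):

```
# 在 Shinken 服务器上重启 shinken
systemctl restart shinken

# 在被监控的 host-one 上重启 snmpd
systemctl restart snmpd
```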
|
||||
|
||||
现在 Linux 主机已经被成功地添加到 shinken 服务器中了。
|
||||
|
||||
### 第五步 访问 Shinken Webui2
|
||||
|
||||
在端口 7767 访问 Shinken webui2(将 URL 中的 IP 替换成你自己的 IP 地址):
|
||||
|
||||
```
|
||||
http://192.168.1.120:7767
|
||||
```
|
||||
|
||||
用管理员用户和密码登录(你在 admin.cfg 文件中设置的)
|
||||
|
||||

|
||||
|
||||
Webui2 中的 Shinken 面板:
|
||||
|
||||

|
||||
|
||||
我们的两个服务器正在被 Shinken 监控:
|
||||
|
||||

|
||||
|
||||
列出所有被 linux-snmp 监控的服务:
|
||||
|
||||

|
||||
|
||||
所有主机和服务的状态信息:
|
||||
|
||||

|
||||
|
||||
### 第六步 Shinken 的常见问题
|
||||
|
||||
#### NTP 服务器相关的问题
|
||||
|
||||
当你得到如下的 NTP 错误提示
|
||||
|
||||
```
|
||||
TimeSync - CRITICAL ( NTP CRITICAL: No response from the NTP server)
|
||||
TimeSync - CRITICAL ( NTP CRITICAL: Offset unknown )
|
||||
```
|
||||
|
||||
为了解决这个问题,在所有 Linux 主机上安装 ntp。
|
||||
|
||||
```
|
||||
sudo apt-get install ntp ntpdate
|
||||
```
|
||||
|
||||
编辑 ntp 配置文件:
|
||||
|
||||
```
|
||||
vim /etc/ntp.conf
|
||||
```
|
||||
|
||||
注释掉所有 pools 并替换为:
|
||||
|
||||
```
|
||||
#pool 0.ubuntu.pool.ntp.org iburst
|
||||
#pool 1.ubuntu.pool.ntp.org iburst
|
||||
#pool 2.ubuntu.pool.ntp.org iburst
|
||||
#pool 3.ubuntu.pool.ntp.org iburst
|
||||
|
||||
pool 0.id.pool.ntp.org
|
||||
pool 1.asia.pool.ntp.org
|
||||
pool 0.asia.pool.ntp.org
|
||||
```
|
||||
|
||||
然后,在新的一行添加如下限制规则:
|
||||
|
||||
```
|
||||
# Local users may interrogate the ntp server more closely.
|
||||
restrict 127.0.0.1
|
||||
restrict 192.168.1.120 #shinken server IP address
|
||||
restrict ::1
|
||||
# 注意:192.168.1.120 是 Shinken 服务器的 IP 地址。
|
||||
```
|
||||
|
||||
保存并退出。
|
||||
|
||||
启动 ntp 并且检查 Shinken 面板。
|
||||
|
||||
```
|
||||
ntpd
|
||||
```
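如果想确认 NTP 已经和上游服务器同步,可以用 ntp 软件包自带的 `ntpq` 查看对等节点状态(仅作参考):

```
# 列出当前 NTP 对等节点及同步状态,带 * 号的行表示已选定的时间源
ntpq -p
```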
|
||||
|
||||
#### check_netint.pl Not Found 问题
|
||||
|
||||
从 github 仓库下载源代码到 shinken 的库目录下:
|
||||
|
||||
```
|
||||
cd /var/lib/shinken/libexec/
|
||||
wget https://raw.githubusercontent.com/Sysnove/shinken-plugins/master/check_netint.pl
|
||||
chmod +x check_netint.pl
|
||||
chown shinken:shinken check_netint.pl
|
||||
```
|
||||
|
||||
#### 网络占用的问题
|
||||
|
||||
这是错误信息:
|
||||
|
||||
```
|
||||
ERROR : Unknown interface eth\d+
|
||||
```
|
||||
|
||||
检查你的网络接口并且编辑 `linux-snmp` 模版。
|
||||
|
||||
在我的 Ubuntu 服务器上,网卡是 “enp0s8” 而不是 eth0,所以我遇到了这个错误。
|
||||
|
||||
用 `vim` 编辑 `linux-snmp` 模板:
|
||||
|
||||
```
|
||||
vim /etc/shinken/packs/linux-snmp/templates.cfg
|
||||
```
|
||||
|
||||
在第 24 行添加网络接口信息:
|
||||
|
||||
```
|
||||
_NET_IFACES eth\d+|em\d+|enp0s8
|
||||
```
|
||||
|
||||
保存并退出。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/server-monitoring-with-shinken-on-ubuntu-16-04/
|
||||
|
||||
作者:[Muhammad Arul][a]
|
||||
译者:[LinuxBars](https://github.com/LinuxBars)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.howtoforge.com/tutorial/server-monitoring-with-shinken-on-ubuntu-16-04/
|
||||
|
@ -0,0 +1,140 @@
|
||||
Googler:现在可以在 Linux 终端下进行 Google 搜索了!
|
||||
============================================
|
||||
|
||||

|
||||
|
||||
一个小问题:你每天做什么事?当然了,好多事情,但是我可以指出一件事,你几乎每天(如果不是每天)都会用 Google 搜索,我说的对吗?(LCTT 译注:Google 是啥?/cry )
|
||||
|
||||
现在,如果你是一位 Linux 用户(我猜你也是),这里有另外一个问题:如果你甚至不用离开终端就可以进行 Google 搜索那岂不是相当棒?甚至不用打开一个浏览器窗口?
|
||||
|
||||
如果你是一位类 [*nix][7] 系统的狂热者而且也是喜欢终端界面的人,我知道你的答案是肯定的,而且我认为,接下来你也将喜欢上我今天将要介绍的这个漂亮的小工具。它被称做 Googler。
|
||||
|
||||
### Googler:在你的 Linux 终端下的 Google
|
||||
|
||||
Googler 是一个简单的命令行工具,它用于直接在命令行窗口中进行 google 搜索,Googler 主要支持三种类型的 Google 搜索:
|
||||
|
||||
- Google 搜索:简单的 Google 搜索,和在 Google 主页搜索是等效的。
|
||||
- Google 新闻搜索:Google 新闻搜索,和在 Google News 中的搜索一样。
|
||||
- Google 站点搜索:Google 从一个特定的网站搜索结果。
|
||||
|
||||
Googler 用标题、链接和网页摘要来显示搜索结果。搜索出来的结果可以仅通过两个按键就可以在浏览器里面直接打开。
|
||||
|
||||

|
||||
|
||||
### 在 Ubuntu 下安装 Googler
|
||||
|
||||
先让我们进行软件的安装。
|
||||
|
||||
首先确保你的 python 版本大于等于 3.3,可以用以下命令查看。
|
||||
|
||||
```
|
||||
python3 --version
|
||||
```
|
||||
|
||||
如果不是的话,就更新一下。Googler 需要 Python 3.3 及以上的版本才能运行。
|
||||
|
||||
虽然 Googler 现在还不能在 Ubuntu 的软件库中找到,我们可以很容易地从 GitHub 仓库中安装它。我们需要做的就是运行以下命令:
|
||||
|
||||
```
|
||||
cd /tmp
|
||||
git clone https://github.com/jarun/googler.git
|
||||
cd googler
|
||||
sudo make install
|
||||
cd auto-completion/bash/
|
||||
sudo cp googler-completion.bash /etc/bash_completion.d/
|
||||
```
|
||||
|
||||
这样 Googler 就带着命令自动完成特性安装完毕了。
|
||||
|
||||
### 特点 & 基本用法
|
||||
|
||||
如果我们快速浏览它所有的特点,我们会发现 Googler 实际上是一个十分强大的工具,它的一些主要特点就是:
|
||||
|
||||
#### 交互界面
|
||||
|
||||
在终端下运行以下命令:
|
||||
|
||||
```
|
||||
googler
|
||||
```
|
||||
|
||||
交互界面就会被打开,Googler 的开发者 [Arun Prakash Jana][1] 称之为全向提示符(omniprompt),你可以输入 `?` 去寻找可用的命令参数:
|
||||
|
||||

|
||||
|
||||
在提示符处,输入任何搜索词汇关键字去开始搜索,然后你可以输入`n`或者`p`导航到搜索结果的后一页和前一页。
|
||||
|
||||
要在浏览器窗口中打开搜索结果,直接输入搜索结果的编号,或者你可以输入 `o` 命令来打开这个搜索网页。
|
||||
|
||||
#### 新闻搜索
|
||||
|
||||
如果你想搜索新闻,直接以 `-N` 参数启动 Googler:
|
||||
|
||||
```
|
||||
googler -N
|
||||
```
|
||||
|
||||
随后的搜索将会从 Google News 抓取结果。
|
||||
|
||||
#### 站点搜索
|
||||
|
||||
如果你想从某个特定的站点进行搜索,以 `-w 域名` 参数启动 Googler:
|
||||
|
||||
```
|
||||
googler -w itsfoss.com
|
||||
```
|
||||
|
||||
随后的搜索会只从这个博客中抓取结果!
|
||||
|
||||
#### 手册页
|
||||
|
||||
运行以下命令去查看 Googler 的带着各种用例的手册页:
|
||||
|
||||
```
|
||||
man googler
|
||||
```
|
||||
|
||||
#### 指定国家/地区的 Google 搜索引擎
|
||||
|
||||
```
|
||||
googler -c in "hello world"
|
||||
```
|
||||
|
||||
上面的示例命令将会开始从 Google 的印度域名搜索结果(in 代表印度)
|
||||
|
||||
还支持:
|
||||
|
||||
- 通过时间和语言偏好来过滤搜索结果
|
||||
- 支持 Google 查询关键字,例如:`site:example.com` 或者 `filetype:pdf` 等等
|
||||
- 支持 HTTPS 代理
|
||||
- Shell 命令自动补全
|
||||
- 禁用自动拼写纠正
|
||||
|
||||
这里还有更多特性。你可以用 Googler 去满足你的需要。
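举一个把前面几个选项组合起来用的小例子(下面的搜索词只是随手举例,仅作示意):

```
# 在 Google 印度站点上搜索新闻
googler -N -c in "open source"

# 在 itsfoss.com 站内搜索,并配合 Google 查询关键字只找 PDF
googler -w itsfoss.com "filetype:pdf"
```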
|
||||
|
||||
Googler 也可以和一些基于文本的浏览器整合在一起(例如:[elinks][2]、[links][3]、[lynx][4]、w3m 等),所以你甚至都不用离开终端去浏览网页。在 [Googler 的 GitHub 项目页][5]可以找到指导。
|
||||
|
||||
如果你想看一下 Googler 不同的特性的视频演示,方便的话你可以查看 GitHub 项目页附带的终端记录演示页: [jarun/googler v2.7 quick demo][6]。
|
||||
|
||||
### 对于 Googler 的看法?
|
||||
|
||||
尽管 Googler 可能并不是每个人都需要的,但对于那些不想打开浏览器进行 Google 搜索、或者就是想泡在终端窗口里面的人来说,它是一个很棒的工具。你认为呢?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/review-googler-linux/
|
||||
|
||||
作者:[Munif Tanjim][a]
|
||||
译者:[LinuxBars](https://github.com/LinuxBars)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/munif/
|
||||
[1]: https://github.com/jarun
|
||||
[2]: http://elinks.or.cz/
|
||||
[3]: http://links.twibright.com/
|
||||
[4]: http://lynx.browser.org/
|
||||
[5]: https://github.com/jarun/googler#faq
|
||||
[6]: https://asciinema.org/a/85019
|
||||
[7]: https://en.wikipedia.org/wiki/Unix-like
|
@ -0,0 +1,90 @@
|
||||
你该选择 openSUSE 的五大理由
|
||||
===============
|
||||
|
||||
[](https://itsfoss.com/wp-content/uploads/2016/09/why-opensuse-is-best.jpg)
|
||||
|
||||
多数的桌面 Linux 用户都会从这三种发行版中选择:Debian/Ubuntu、Fedora 或者 Arch Linux。但是今天,我将给出你需要使用 openSUSE 的五大理由。
|
||||
|
||||
相比其他的 Linux 发行版,我总能在 [openSUSE](https://www.opensuse.org/) 上看到一些令人耳目一新的东西。我说不太好,但它总是如此的闪亮和个性鲜明。这只绿色的大蜥蜴看起来如此令人惊叹!但这并不是 openSUSE **即便不是最好也是要比其它发行版更好**的原因!
|
||||
|
||||
请别误解我。我在各种场合用过许多不同的 Linux 发行版,同时也很敬佩在这些发行版背后默默工作的开发者,是他们让计算变成一件快乐的事情。但是 openSUSE 一直让人感觉,嗯,令人崇敬——你是不是也这样觉得?
|
||||
|
||||
### openSUSE 比其他 Linux 发行版要好的五大理由
|
||||
|
||||
你是不是认为我在说 openSUSE 是最好的 Linux 发行版?不,我并不是要表达这个意思。其实没有任何一个 Linux 发行版是“最好的”,关键在于找到适合你自己的那个“灵魂伴侣”。
|
||||
|
||||
但在这里,我准备给大家说说,openSUSE 比其他发行版做得要好的五件事。如下:
|
||||
|
||||
#### #1 社区规则
|
||||
|
||||
openSUSE 是一个典型的社区驱动型项目。我经常看到很多用户在升级后抱怨开发人员改变了他们喜欢的发行版。但在 openSUSE 不是这样,openSUSE 是纯社区驱动的项目,并且任何时候都朝着用户所希望的方向发展。
|
||||
|
||||
#### #2 系统的健壮性
|
||||
|
||||
另外一个是操作系统的集成程度。我可以在同一个 openSUSE 系统上安装所有的[最好的 Linux 桌面环境](https://itsfoss.com/best-linux-desktop-environments/),而在 Ubuntu 上则因为系统的稳定性,坚决不允许用户这样做。而这恰好体现了一个系统的健壮程度。因此,对于那些喜欢自己动手完成每一件事的用户,openSUSE 还是很诱人的。
|
||||
|
||||
#### #3 易于安装软件
|
||||
|
||||
在 Linux 的世界里有很多非常好用的包管理工具。从 Debian 的 apt-get 到 [Fedora](https://itsfoss.com/fedora-24-review/) 的 DNF,它们无不吸引着用户,而且在这些发行版成为流行版本的过程中扮演着重要角色。
|
||||
|
||||
openSUSE 同样有一个将软件传递到桌面的好方法。[software.opensuse.org](https://software.opensuse.org/421/en) 是一个 Web 界面,你可以用它从仓库中获取安装软件。你所需要做的就是打开这个链接 (当然,是在 openSUSE 系统上),在搜索框中输入你想要的软件,点击“直接安装”即可。就是这么简单,不是吗?
|
||||
|
||||
听起来就像是在使用 Google 商店一样,是吗?
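当然,如果你更喜欢终端,openSUSE 自带的包管理器 zypper 也同样简单(下面只是几个最常见用法的示意,软件包名以 gimp 为例):

```
# 刷新软件仓库
sudo zypper refresh

# 搜索并安装一个软件包
zypper search gimp
sudo zypper install gimp
```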
|
||||
|
||||
#### #4 YAST
|
||||
|
||||
毫不夸张的说,[YaST](https://en.opensuse.org/Portal:YaST) (LCTT 译注: YaST 是 openSUSE 和 SUSE Linux 企业版的安装和配置工具) 绝对是世界上有史以来**操作系统**上最好的控制中心。并且毫无疑问地,你可以使用它来操控系统上的一切:网络、软件升级以及所有的基础设置等。无论是 openSUSE 的个人版或是 SUSE Linux 企业版,你都能在 YaST 的强力支撑下,轻松的完成安装。总之,一个工具,方便而无所不能。
|
||||
|
||||
#### #5 开箱即用的极致体验
|
||||
|
||||
SUSE 的团队是 Linux 内核中最大的贡献者团体之一。他们辛勤的努力也意味着,他们有足够的经验来应付不同的硬件条件。
|
||||
|
||||
有着良好的硬件支持,一定会有很棒的开箱即用的体验。
|
||||
|
||||
#### #6 他们做了一些搞笑视频
|
||||
|
||||
等等,不是说好了是五个理由吗?怎么多了一个!
|
||||
|
||||
但 [Abhishek](https://itsfoss.com/author/abhishek/) 逼着我把这一条加进来,因为他们做的那些 Linux 搞笑视频才使 openSUSE 成为了最好的发行版。
|
||||
|
||||
开了个玩笑,不过还是看看 [Uptime Funk](https://www.youtube.com/watch?v=zbABy9ul11I),你将会知道[为什么 SUSE 是最酷的 Linux ](https://itsfoss.com/suse-coolest-linux-enterprise/)。
|
||||
|
||||
### LEAP 还是 TUMBLEWEED?该用哪一个!
|
||||
|
||||
如果你现在想要使用 openSUSE 了,让我来告诉你,在 openSUSE 的两个版本 LEAP 和 TUMBLEWEED 中,哪一个更适合你。
|
||||
|
||||

|
||||
|
||||
尽管两者都提供了相似的体验和相似的环境,但还是需要你自行决定把哪个版本安装到你的硬盘上。
|
||||
|
||||
#### OPENSUSE: LEAP
|
||||
|
||||
[openSUSE Leap](https://en.opensuse.org/Portal:Leap) 是一个普通的大众版本,基本上每八个月更新一次。目前最新的版本是 openSUSE 42.1。它所有的软件都是稳定版,给用户提供最顺畅的体验。
|
||||
|
||||
对于家庭用户、办公和商业用户,它是再合适不过的了。它适合那些想要一个稳定系统、不必事事亲为、可以安心工作的用户。只要进行正确的设置,你就不必担心其他事情,可以专心投入工作。同时,我也强烈建议图书馆和学校使用 Leap。
|
||||
|
||||
#### OPENSUSE: TUMBLEWEED
|
||||
|
||||
[openSUSE 的 Tumbleweed 版本](https://en.opensuse.org/Portal:Tumbleweed)是滚动式更新的版本。它定期更新系统中所使用的软件集的最新版本。对于想要使用最新软件以及想向 openSUSE 做出贡献的开发者和高级用户来说,这个版本绝对值得一试。
|
||||
|
||||
需要指出的是,Tumbleweed 并不是 Leap 的 beta 或测试版本,它是一个紧跟最新软件的稳定滚动发行版。
|
||||
|
||||
Tumbleweed 给了你最快的更新,但是仅在开发者确定某个包的稳定性之后才行。
|
||||
|
||||
### 说说你的想法?
|
||||
|
||||
请在下方评论说出你对 openSUSE 的想法。如果你已经在考虑使用 openSUSE,你更想用 Leap 和 Tumbleweed 哪一个版本呢?
|
||||
|
||||
来,让我们开干!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/why-use-opensuse/
|
||||
|
||||
作者:[Aquil Roshan][a]
|
||||
译者:[GHLandy](https://github.com/GHLandy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/aquil/
|
@ -0,0 +1,85 @@
|
||||
如何在 Ubuntu 16.04 和 Fedora 22-24 上安装最新的 XFCE 桌面?
|
||||
==========================
|
||||
|
||||
Xfce 是一款针对 Linux 系统的现代化[轻型开源桌面环境][1],它在其他的类 Unix 系统上,比如 Mac OS X、Solaris、*BSD 以及其它几种系统上,也能工作得很好。它非常快,并以简洁而优雅的用户界面展现了用户友好性。
|
||||
|
||||
在服务器上安装一个桌面环境有时还是有用的,因为某些应用程序可能需要一个桌面界面,以便高效而可靠的管理。 Xfce 的一个卓越的特性是其内存消耗等系统资源占用率很低,因此,如果服务器需要一个桌面环境的话它会是首选。
|
||||
|
||||
### XFCE 桌面的功能特性
|
||||
|
||||
另外,它的一些值得注意的组件和功能特性列在下面:
|
||||
|
||||
- Xfwm 窗口管理器
|
||||
- Thunar 文件管理器
|
||||
- 用户会话管理器:用来处理用户登录、电源管理之类
|
||||
- 桌面管理器:用来设置背景图片、桌面图标等等
|
||||
- 应用管理器
|
||||
- 它的高度可连接性以及一些其他次要功能特性
|
||||
|
||||
Xfce 的最新稳定发行版是 Xfce 4.12,它所有的功能特性和与旧版本的变化都列在了[这儿][2]。
|
||||
|
||||
#### 在 Ubuntu 16.04 上安装 Xfce 桌面
|
||||
|
||||
Linux 发行版,比如 Xubuntu、Manjaro、OpenSUSE、Fedora Xfce Spin、Zenwalk 以及许多其他发行版都提供它们自己的 Xfce 桌面安装包,但你也可以像下面这样安装最新的版本。
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
$ sudo apt install xfce4
|
||||
```
|
||||
|
||||
等待安装过程结束,然后退出当前会话,或者你也可以选择重启系统。在登录界面,选择 Xfce 桌面,然后登录,截图如下:
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
|
||||
#### 在 Fedora 22-24 上安装 Xfce 桌面
|
||||
|
||||
如果你已经有一个安装好的 Linux 发行版 Fedora,想在上面安装 xfce 桌面,那么你可以使用如下所示的 yum 或 dnf 命令。
|
||||
|
||||
```
|
||||
-------------------- 在 Fedora 22 上 --------------------
|
||||
# yum install @xfce
|
||||
-------------------- 在 Fedora 23-24 上 --------------------
|
||||
# dnf install @xfce-desktop-environment
|
||||
```
|
||||
|
||||
|
||||
安装 Xfce 以后,你可以从会话菜单选择 Xfce 登录或者重启系统。
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
|
||||
如果你不再想要 Xfce 桌面留在你的系统上,那么可以使用下面的命令来卸载它:
|
||||
|
||||
```
|
||||
-------------------- 在 Ubuntu 16.04 上 --------------------
|
||||
$ sudo apt purge xfce4
|
||||
$ sudo apt autoremove
|
||||
-------------------- 在 Fedora 22 上 --------------------
|
||||
# yum remove @xfce
|
||||
-------------------- 在 Fedora 23-24 上 --------------------
|
||||
# dnf remove @xfce-desktop-environment
|
||||
```
|
||||
|
||||
|
||||
在这个简单的入门指南中,我们讲解了如何安装最新版 Xfce 桌面的步骤,我相信这很容易掌握。如果一切进行良好,你可以享受一下使用 xfce —— 这个[ Linux 系统上最佳桌面环境][1]之一。
|
||||
|
||||
此外,如果你再次回来,你可以通过下面的反馈表单和我们始终保持联系。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/install-xfce-desktop-in-ubuntu-fedora/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.tecmint.com/author/aaronkili/
|
||||
[1]: http://www.tecmint.com/best-linux-desktop-environments/
|
||||
[2]: https://www.xfce.org/about/news/?post=1425081600
|
@ -0,0 +1,140 @@
|
||||
15 个开源的顶级人工智能工具
|
||||
=========
|
||||
|
||||
人工智能(artificial intelligence, AI)是科技研究中最热门的方向之一。像 IBM、谷歌、微软、Facebook 和亚马逊等公司都在研发上投入大量的资金、或者收购那些在机器学习、神经网络、自然语言和图像处理等领域取得了进展的初创公司。考虑到人们对此感兴趣的程度,我们将不会惊讶于斯坦福的专家在[人工智能报告][1]中得出的结论:“越来越强大的人工智能应用,可能会对我们的社会和经济产生深远的积极影响,这将出现在从现在到 2030 年的时间段里。”
|
||||
|
||||
在最近的一篇[文章][2]中,我们概述了 45 个十分有趣或有前途的人工智能项目。在本文中,我们将聚焦于开源的人工智能工具,详细的了解下最著名的 15 个开源人工智能项目。
|
||||
|
||||

|
||||
|
||||
*开源人工智能*
|
||||
|
||||
以下这些开源人工智能应用都处于人工智能研究的最前沿。
|
||||
|
||||
### 1. Caffe
|
||||
|
||||

|
||||
|
||||
[Caffe][4] 是[贾扬清][3]在加州大学伯克利分校读博时创造的,它是一个基于富有表达力的体系结构和可扩展代码的深度学习框架。使它声名鹊起的是它的速度,这让它受到研究人员和企业用户的欢迎。根据其网站所言,它可以在一天之内只用一个 NVIDIA K40 GPU 处理 6000 万多个图像。它是由伯克利视觉与学习中心(BVLC)管理的,并且由 NVIDIA 和亚马逊等公司资助来支持它的发展。
|
||||
|
||||
### 2. CNTK
|
||||
|
||||

|
||||
|
||||
它是计算网络工具包(Computational Network Toolkit)的缩写,[CNTK][5] 是一个微软的开源人工智能工具。不论是在单个 CPU、单个 GPU、多个 GPU 或是拥有多个 GPU 的多台机器上它都有优异的表现。微软主要用它做语音识别的研究,但是它在机器翻译、图像识别、图像字幕、文本处理、语言理解和语言建模方面都有着良好的应用。
|
||||
|
||||
### 3. Deeplearning4j
|
||||
|
||||

|
||||
|
||||
[Deeplearning4j][6] 是一个 java 虚拟机(JVM)的开源深度学习库。它运行在分布式环境并且集成在 Hadoop 和 Apache Spark 中。这使它可以配置深度神经网络,并且它与 Java、Scala 和 其他 JVM 语言兼容。
|
||||
|
||||
这个项目是由一个叫做 Skymind 的商业公司管理的,它为这个项目提供支持、培训和一个企业的发行版。
|
||||
|
||||
### 4. DMTK
|
||||
|
||||

|
||||
|
||||
[DMTK][7] 是分布式机器学习工具(Distributed Machine Learning Toolkit)的缩写,和 CNTK 一样,是微软的开源人工智能工具。作为设计用于大数据的应用程序,它的目标是更快的训练人工智能系统。它包括三个主要组件:DMTK 框架、LightLDA 主题模型算法和分布式(多义)词嵌入算法。为了证明它的速度,微软声称在一个八节点的机器集群上,它能够“用 100 万个主题和 1000 万个单词的词汇表(总共 10 万亿个参数),在一个包含 1000 亿个符号的文档集上训练出一个主题模型”,这一成绩是别的工具无法比拟的。
|
||||
|
||||
### 5. H2O
|
||||
|
||||

|
||||
|
||||
相比起科研,[H2O][8] 更注重将 AI 服务于企业用户,因此 H2O 有着大量的公司客户,比如第一资本金融公司、思科、Nielsen Catalina、PayPal 和泛美都是它的用户。它声称任何人都可以利用机器学习和预测分析的力量来解决业务难题。它可以用于预测建模、风险和欺诈分析、保险分析、广告技术、医疗保健和客户情报。
|
||||
|
||||
它有两种开源版本:标准版 H2O 和 Sparkling Water 版,后者集成在 Apache Spark 中。也有付费的企业用户支持。
|
||||
|
||||
### 6. Mahout
|
||||
|
||||

|
||||
|
||||
[Mahout][9] 是 Apache 基金会旗下的一个开源机器学习框架。根据它的网站所言,它有着三个主要的特性:一个构建可扩展算法的编程环境、像 Spark 和 H2O 一样的预制算法工具和一个叫 Samsara 的矢量数学实验环境。使用 Mahout 的公司有 Adobe、埃森哲咨询公司、Foursquare、英特尔、领英、Twitter、雅虎和其他许多公司。其网站列出了第三方的专业支持。
|
||||
|
||||
### 7. MLlib
|
||||
|
||||

|
||||
|
||||
由于其速度,Apache Spark 成为一个最流行的大数据处理工具。[MLlib][10] 是 Spark 的可扩展机器学习库。它集成了 Hadoop 并可以与 NumPy 和 R 进行交互操作。它包括了许多机器学习算法如分类、回归、决策树、推荐、集群、主题建模、功能转换、模型评价、ML 管道架构、ML 持久、生存分析、频繁项集和序列模式挖掘、分布式线性代数和统计。
|
||||
|
||||
### 8. NuPIC
|
||||
|
||||

|
||||
|
||||
由 [Numenta][11] 公司管理的 [NuPIC][12] 是一个基于分层暂时记忆(Hierarchical Temporal Memory, HTM)理论的开源人工智能项目。从本质上讲,HTM 试图创建一个计算机系统来模仿人类大脑皮层。他们的目标是创造一个 “在许多认知任务上接近或者超越人类认知能力” 的机器。
|
||||
|
||||
除了开源许可,Numenta 还提供 NuPic 的商业许可协议,并且它还提供技术专利的许可证。
|
||||
|
||||
### 9. OpenNN
|
||||
|
||||

|
||||
|
||||
[OpenNN][13] 是一个为对人工智能有深入理解的开发者和科研人员设计的 C++ 编程库,用于实现神经网络算法。它的关键特性包括深度的架构和快速的性能。其网站上可以查到丰富的文档,包括一个解释了神经网络基本知识的入门教程。OpenNN 的付费支持由一家从事预测分析的西班牙公司 [Artelnics][14] 提供。
|
||||
|
||||
### 10. OpenCyc
|
||||
|
||||

|
||||
|
||||
由 Cycorp 公司开发的 [OpenCyc][15] 提供了对 Cyc 知识库和常识推理引擎的访问。它拥有超过 239,000 个词条、大约 2,093,000 个三元组,以及大约 69,000 个指向外部语义数据命名空间的 owl:sameAs 链接。它在富领域模型、语义数据集成、文本理解、特殊领域的专家系统和游戏 AI 中有着良好的应用。该公司还提供另外两个版本的 Cyc:一个可免费用于科研但不开源的版本,和一个提供给企业使用但需要付费的版本。
|
||||
|
||||
### 11. Oryx 2
|
||||
|
||||

|
||||
|
||||
构建在 Apache Spark 和 Kafka 之上的 [Oryx 2][16] 是一个专门针对大规模机器学习的应用程序开发框架。它采用一个独特的三层 λ 架构。开发者可以使用 Oryx 2 创建新的应用程序,另外它还拥有一些预先构建的应用程序,可以用于常见的大数据任务,比如协同过滤、分类、回归和聚类。大数据工具供应商 Cloudera 创造了最初的 Oryx 1 项目并且一直积极参与持续发展。
|
||||
|
||||
### 12. PredictionIO
|
||||
|
||||

|
||||
|
||||
今年的二月,Salesforce 收购了 [PredictionIO][17],接着在七月,它将该平台和商标贡献给 Apache 基金会,Apache 基金会将其列为孵育计划。所以当 Salesforce 利用 PredictionIO 技术来提升它的机器学习能力时,成效将会同步出现在开源版本中。它可以帮助用户创建带有机器学习功能的预测引擎,这可用于部署能够实时动态查询的 Web 服务。
|
||||
|
||||
### 13. SystemML
|
||||
|
||||

|
||||
|
||||
最初由 IBM 开发, [SystemML][18] 现在是一个 Apache 大数据项目。它提供了一个高度可伸缩的平台,可以实现高等数学运算,并且它的算法用 R 或一种类似 python 的语法写成。企业已经在使用它来跟踪汽车维修客户服务、规划机场交通和连接社会媒体数据与银行客户。它可以在 Spark 或 Hadoop 上运行。
|
||||
|
||||
### 14. TensorFlow
|
||||
|
||||

|
||||
|
||||
[TensorFlow][19] 是一个谷歌的开源人工智能工具。它提供了一个使用数据流图进行数值计算的库。它可以运行在多种不同的有着单或多 CPU 和 GPU 的系统上,甚至可以在移动设备上运行。它拥有高度的灵活性、真正的可移植性、自动微分功能,并且支持 Python 和 C++。它的网站拥有十分详细的教程列表来帮助开发者和研究人员沉浸于使用或扩展它的功能。
|
||||
|
||||
### 15. Torch
|
||||
|
||||

|
||||
|
||||
[Torch][20] 将自己描述为:“一个优先使用 GPU 的拥有机器学习算法广泛支持的科学计算框架”,它的特点是灵活性和速度。此外,它可以很容易的通过软件包用于机器学习、计算机视觉、信号处理、并行处理、图像、视频、音频和网络等方面。它依赖一个叫做 LuaJIT 的脚本语言,而 LuaJIT 是基于 Lua 的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.datamation.com/open-source/slideshows/15-top-open-source-artificial-intelligence-tools.html
|
||||
|
||||
作者:[Cynthia Harvey][a]
|
||||
译者:[Chao-zhi](https://github.com/Chao-zhi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.datamation.com/author/Cynthia-Harvey-6460.html
|
||||
[1]: https://ai100.stanford.edu/sites/default/files/ai_100_report_0906fnlc_single.pdf
|
||||
[2]: http://www.datamation.com/applications/artificial-intelligence-software-45-ai-projects-to-watch-1.html
|
||||
[3]: http://daggerfs.com/
|
||||
[4]: http://caffe.berkeleyvision.org/
|
||||
[5]: https://www.cntk.ai/
|
||||
[6]: http://deeplearning4j.org/
|
||||
[7]: http://www.dmtk.io/
|
||||
[8]: http://www.h2o.ai/
|
||||
[9]: http://mahout.apache.org/
|
||||
[10]: https://spark.apache.org/mllib/
|
||||
[11]: http://numenta.com/
|
||||
[12]: http://numenta.org/
|
||||
[13]: http://www.opennn.net/
|
||||
[14]: https://www.artelnics.com/
|
||||
[15]: http://www.cyc.com/platform/opencyc/
|
||||
[16]: http://oryx.io/
|
||||
[17]: https://prediction.io/
|
||||
[18]: http://systemml.apache.org/
|
||||
[19]: https://www.tensorflow.org/
|
||||
[20]: http://torch.ch/
|
@ -0,0 +1,476 @@
|
||||
使用 Elasticsearch 和 cAdvisor 监控 Docker 容器
|
||||
=======
|
||||
|
||||
如果你正在运行 Swarm 模式的集群,或者只运行单台 Docker,你都会有下面的疑问:
|
||||
|
||||
> 我如何才能监控到它们都在干些什么?
|
||||
|
||||
这个问题的答案是“很不容易”。
|
||||
|
||||
你需要监控下面的参数:
|
||||
|
||||
1. 容器的数量和状态。
|
||||
2. 一台容器是否已经移到另一个节点了,如果是,那是在什么时候,移动到哪个节点?
|
||||
3. 给定节点上运行着的容器数量。
|
||||
4. 一段时间内的通信峰值。
|
||||
5. 孤儿卷和网络(LCTT 译注:孤儿卷就是当你删除容器时忘记删除它的卷,这个卷就不会再被使用,但会一直占用资源)。
|
||||
6. 可用磁盘空间、可用 inode 数。
|
||||
7. 容器数量与连接在 `docker0` 和 `docker_gwbridge` 上的虚拟网卡数量不一致(LCTT 译注:当 docker 启动时,它会在宿主机器上创建一个名为 docker0 的虚拟网络接口)。
|
||||
8. 开启和关闭 Swarm 节点。
|
||||
9. 收集并集中处理日志。
|
||||
|
||||
本文的目标是介绍 [Elasticsearch][1] + [Kibana][2] + [cAdvisor][3] 的用法,使用它们来收集 Docker 容器的参数,分析数据并产生可视化报表。
|
||||
|
||||
阅读本文后你可以发现有一个监控仪表盘能够部分解决上述列出的问题。但如果只是使用 cAdvisor,有些参数就无法显示出来,比如 Swarm 模式的节点。
|
||||
|
||||
如果你有一些 cAdvisor 或其他工具无法解决的特殊需求,我建议你开发自己的数据收集器和数据处理器(比如 [Beats][4]),请注意我不会演示如何使用 Elasticsearch 来集中收集 Docker 容器的日志。
|
||||
|
||||
> [“你要如何才能监控到 Swarm 模式集群里面发生了什么事情?要做到这点很不容易。” —— @fntlnz][5]
|
||||
|
||||
### 我们为什么要监控容器?
|
||||
|
||||
想象一下这个经典场景:你在管理一台或多台虚拟机,你把 tmux 工具用得很溜,用各种 session 事先设定好了所有基础的东西,包括监控。然后生产环境出问题了,你使用 `top`、`htop`、`iotop`、`jnettop` 各种 top 来排查,然后你准备好修复故障。
|
||||
|
||||
现在重新想象一下你有 3 个节点,包含 50 台容器,你需要在一个地方查看整洁的历史数据,这样你知道问题出在哪个地方,而不是把你的生命浪费在那些字符界面来赌你可以找到问题点。
|
||||
|
||||
### 什么是 Elastic Stack ?
|
||||
|
||||
Elastic Stack 就是一个工具集,包括以下工具:
|
||||
|
||||
- Elasticsearch
|
||||
- Kibana
|
||||
- Logstash
|
||||
- Beats
|
||||
|
||||
我们会使用其中一部分工具,比如使用 Elasticsearch 来分析基于 JSON 格式的文本,以及使用 Kibana 来可视化数据并产生报表。
|
||||
|
||||
另一个重要的工具是 [Beats][4],但在本文中我们还是把精力放在容器上,官方的 Beats 工具不支持 Docker,所以我们选择原生兼容 Elasticsearch 的 cAdvisor。
|
||||
|
||||
[cAdvisor][3] 工具负责收集、整合正在运行的容器数据,并导出报表。在本文中,这些报表被导入到 Elasticsearch 中。
|
||||
|
||||
cAdvisor 有两个比较酷的特性:
|
||||
|
||||
- 它不只局限于 Docker 容器。
|
||||
- 它有自己的 Web 服务器,可以简单地显示当前节点的可视化报表。
|
||||
|
||||
### 设置测试集群,或搭建自己的基础架构
|
||||
|
||||
和我[以前的文章][9]一样,我习惯提供一个简单的脚本,让读者不用花很多时间就能部署好和我一样的测试环境。你可以使用以下(非生产环境使用的)脚本来搭建一个 Swarm 模式的集群,其中一个容器运行着 Elasticsearch。
|
||||
|
||||
> 如果你有充足的时间和经验,你可以搭建自己的基础架构 (Bring Your Own Infrastructure,BYOI)。
|
||||
|
||||
如果要继续阅读本文,你需要:
|
||||
|
||||
- 运行 Docker 进程的一个或多个节点(docker 版本号大于等于 1.12)。
|
||||
- 至少有一个独立运行的 Elasticsearch 节点(版本号 2.4.X)。
|
||||
|
||||
重申一下,此 Elasticsearch 集群环境不能放在生产环境中使用。生产环境也不推荐使用单节点集群,所以如果你计划安装一个生产环境,请参考 [Elastic 指南][6]。
|
||||
|
||||
### 对喜欢尝鲜的用户的友情提示
|
||||
|
||||
我就是一个喜欢尝鲜的人(当然我也已经在生产环境中使用了最新的 alpha 版本),但是在本文中,我不会使用最新的 Elasticsearch 5.0.0 alpha 版本,我还不是很清楚这个版本的功能,所以我不想成为那个引导你们出错的关键。
|
||||
|
||||
所以本文中涉及的 Elasticsearch 版本为最新稳定版 2.4.0。
|
||||
|
||||
### 测试集群部署脚本
|
||||
|
||||
前面已经说过,我提供这个脚本给你们,让你们不必费神去部署 Swarm 集群和 Elasticsearch,当然你也可以跳过这一步,用你自己的 Swarm 模式引擎和你自己的 Elasticsearch 节点。
|
||||
|
||||
执行这段脚本之前,你需要:
|
||||
|
||||
- [Docker Machine][7] – 最新版:用于在 DigitalOcean 上创建 Docker 引擎。
|
||||
- [DigitalOcean API Token][8]: 让 docker 机器按照你的意思来启动节点。
|
||||
|
||||

|
||||
|
||||
### 创建集群的脚本
|
||||
|
||||
现在万事俱备,你可以把下面的代码拷到 create-cluster.sh 文件中:
|
||||
|
||||
```
|
||||
#!/usr/bin/env bash
|
||||
#
|
||||
# Create a Swarm Mode cluster with a single master and a configurable number of workers
|
||||
|
||||
workers=${WORKERS:-"worker1 worker2"}
|
||||
|
||||
#######################################
|
||||
# Creates a machine on Digital Ocean
|
||||
# Globals:
|
||||
# DO_ACCESS_TOKEN The token needed to access DigitalOcean's API
|
||||
# Arguments:
|
||||
# $1 the actual name to give to the machine
|
||||
#######################################
|
||||
create_machine() {
|
||||
docker-machine create \
|
||||
-d digitalocean \
|
||||
--digitalocean-access-token=$DO_ACCESS_TOKEN \
|
||||
--digitalocean-size 2gb \
|
||||
$1
|
||||
}
|
||||
|
||||
#######################################
|
||||
# Executes a command on the specified machine
|
||||
# Arguments:
|
||||
# $1 The machine on which to run the command
|
||||
# $2..$n The command to execute on that machine
|
||||
#######################################
|
||||
machine_do() {
|
||||
docker-machine ssh $@
|
||||
}
|
||||
|
||||
main() {
|
||||
|
||||
if [ -z "$DO_ACCESS_TOKEN" ]; then
|
||||
echo "Please export a DigitalOcean Access token: https://cloud.digitalocean.com/settings/api/tokens/new"
|
||||
echo "export DO_ACCESS_TOKEN=<yourtokenhere>"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ -z "$WORKERS" ]; then
|
||||
echo "You haven't provided your workers by setting the \$WORKERS environment variable, using the default ones: $workers"
|
||||
fi
|
||||
|
||||
# Create the first and only master
|
||||
echo "Creating the master"
|
||||
|
||||
create_machine master1
|
||||
|
||||
master_ip=$(docker-machine ip master1)
|
||||
|
||||
# Initialize the swarm mode on it
|
||||
echo "Initializing the swarm mode"
|
||||
machine_do master1 docker swarm init --advertise-addr $master_ip
|
||||
|
||||
# Obtain the token to allow workers to join
|
||||
worker_tkn=$(machine_do master1 docker swarm join-token -q worker)
|
||||
echo "Worker token: ${worker_tkn}"
|
||||
|
||||
# Create and join the workers
|
||||
for worker in $workers; do
|
||||
echo "Creating worker ${worker}"
|
||||
create_machine $worker
|
||||
machine_do $worker docker swarm join --token $worker_tkn $master_ip:2377
|
||||
done
|
||||
}
|
||||
|
||||
main $@
|
||||
```
|
||||
|
||||
赋予它可执行权限:
|
||||
|
||||
```
|
||||
chmod +x create-cluster.sh
|
||||
```
|
||||
|
||||
### 创建集群
|
||||
|
||||
如文件名所示,我们可以用它来创建集群。默认情况下这个脚本会创建一个 master 和两个 worker,如果你想修改 worker 个数,可以设置环境变量 WORKERS。
|
||||
|
||||
现在就来创建集群吧。
|
||||
|
||||
```
|
||||
./create-cluster.sh
|
||||
```
|
||||
|
||||
你可以出去喝杯咖啡,因为这需要花点时间。
|
||||
|
||||
最后集群部署好了。
|
||||
|
||||

|
||||
|
||||
现在为了验证 Swarm 模式集群已经正常运行,我们可以通过 ssh 登录进 master:
|
||||
|
||||
```
|
||||
docker-machine ssh master1
|
||||
```
|
||||
|
||||
然后列出集群的节点:
|
||||
|
||||
```
|
||||
docker node ls
|
||||
```
|
||||
|
||||
```
|
||||
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
|
||||
26fi3wiqr8lsidkjy69k031w2 * master1 Ready Active Leader
|
||||
dyluxpq8sztj7kmwlzs51u4id worker2 Ready Active
|
||||
epglndegvixag0jztarn2lte8 worker1 Ready Active
|
||||
```
|
||||
|
||||
### 安装 Elasticsearch 和 Kibana
|
||||
|
||||
> 注意,从现在开始所有的命令都运行在主节点 master1 上。
|
||||
|
||||
在生产环境中,你可能会把 Elasticsearch 和 Kibana 安装在一个单独的、[大小合适][14]的实例集合中。但是在我们的实验中,我们还是把它们和 Swarm 模式集群安装在一起。
|
||||
|
||||
为了将 Elasticsearch 和 cAdvisor 连通,我们需要创建一个自定义的网络,因为我们使用了集群,并且容器可能会分布在不同的节点上,我们需要使用 [overlay][10] 网络(LCTT 译注:overlay 网络是指在不改变现有网络基础设施的前提下,通过某种约定通信协议,把二层报文封装在 IP 报文之上的新的数据格式,是目前最主流的容器跨节点数据传输和路由方案)。
|
||||
|
||||
也许你会问,“为什么还要网络?我们不是可以用 link 吗?” 请考虑一下,自从引入*用户定义网络*后,link 机制就已经过时了。
|
||||
|
||||
以下内容摘自[ Docker 文档][11]:
|
||||
|
||||
> 在 Docker network 特性出来以前,你可以使用 Docker link 特性实现容器互相发现、安全通信。而在 network 特性出来以后,你还可以使用 link,但是当容器处于默认桥接网络或用户自定义网络时,它们的表现是不一样的。
|
||||
|
||||
现在创建 overlay 网络,名称为 monitoring:
|
||||
|
||||
```
|
||||
docker network create monitoring -d overlay
|
||||
```
|
||||
|
||||
### Elasticsearch 容器
|
||||
|
||||
```
|
||||
docker service create --network=monitoring \
|
||||
--mount type=volume,target=/usr/share/elasticsearch/data \
|
||||
--constraint node.hostname==worker1 \
|
||||
--name elasticsearch elasticsearch:2.4.0
|
||||
```
|
||||
|
||||
注意 Elasticsearch 容器被限定在 worker1 节点,这是因为它运行时需要依赖 worker1 节点上挂载的卷。
|
||||
|
||||
### Kibana 容器
|
||||
|
||||
```
|
||||
docker service create --network=monitoring --name kibana -e ELASTICSEARCH_URL="http://elasticsearch:9200" -p 5601:5601 kibana:4.6.0
|
||||
```
|
||||
|
||||
如你所见,我们启动这两个容器时,都让它们加入 monitoring 网络,这样一来它们可以通过名称(如 Kibana)被相同网络的其他服务访问。
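在继续之前,可以先确认这两个服务都已经正常调度并运行起来(下面用到的都是 Docker 1.12 Swarm 模式自带的命令,仅作检查示意):

```
# 列出集群中所有服务及其副本数
docker service ls

# 查看 elasticsearch 和 kibana 的任务分别被调度到了哪个节点
docker service ps elasticsearch
docker service ps kibana
```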
|
||||
|
||||
现在,通过 [routing mesh][12] 机制,我们可以使用浏览器访问服务器的 IP 地址来查看 Kibana 报表界面。
|
||||
|
||||
获取 master1 实例的公共 IP 地址:
|
||||
|
||||
```
|
||||
docker-machine ip master1
|
||||
```
|
||||
|
||||
打开浏览器输入地址:`http://[master1 的 ip 地址]:5601/status`
|
||||
|
||||
所有项目都应该是绿色:
|
||||
|
||||

|
||||
|
||||
让我们接下来开始收集数据!
|
||||
|
||||
### 收集容器的运行数据
|
||||
|
||||
收集数据之前,我们需要创建一个以全局模式运行 cAdvisor 的服务,它会在每个有效节点上部署一个任务。
|
||||
|
||||
这个服务与 Elasticsearch 处于相同的网络,以便于 cAdvisor 可以推送数据给 Elasticsearch。
|
||||
|
||||
```
|
||||
docker service create --network=monitoring --mode global --name cadvisor \
|
||||
--mount type=bind,source=/,target=/rootfs,readonly=true \
|
||||
--mount type=bind,source=/var/run,target=/var/run,readonly=false \
|
||||
--mount type=bind,source=/sys,target=/sys,readonly=true \
|
||||
--mount type=bind,source=/var/lib/docker/,target=/var/lib/docker,readonly=true \
|
||||
google/cadvisor:latest \
|
||||
-storage_driver=elasticsearch \
|
||||
-storage_driver_es_host="http://elasticsearch:9200"
|
||||
```
|
||||
|
||||
> 注意:如果你想配置 cAdvisor 选项,参考[这里][13]。
|
||||
|
||||
现在 cAdvisor 正在把数据发送给 Elasticsearch,我们需要定义一个索引模式(index pattern)来检索 Kibana 中的数据。有两种方式可以做到这一点:通过 Kibana 或者通过 API。在这里我们使用 API 方式实现。
|
||||
|
||||
我们需要在一个连接到 monitoring 网络的正在运行的容器中运行索引创建命令,你可以在 cAdvisor 容器中拿到 shell,不幸的是 Swarm 模式在开启服务时会在容器名称后面附加一个唯一的 ID 号,所以你需要手动指定 cAdvisor 容器的名称。
|
||||
|
||||
拿到 shell:
|
||||
|
||||
```
|
||||
docker exec -ti <cadvisor-container-name> sh
|
||||
```
|
||||
|
||||
创建索引:
|
||||
|
||||
```
|
||||
curl -XPUT http://elasticsearch:9200/.kibana/index-pattern/cadvisor -d '{"title" : "cadvisor*", "timeFieldName": "container_stats.timestamp"}'
|
||||
```
|
||||
|
||||
如果你够懒,可以只执行下面这一句:
|
||||
|
||||
```
|
||||
docker exec $(docker ps | grep cadvisor | awk '{print $1}' | head -1) curl -XPUT http://elasticsearch:9200/.kibana/index-pattern/cadvisor -d '{"title" : "cadvisor*", "timeFieldName": "container_stats.timestamp"}'
|
||||
```
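你可以顺手验证一下数据确实写进了 Elasticsearch(沿用上面借助 cAdvisor 容器执行 curl 的方式,命令仅作参考):

```
# 列出 Elasticsearch 中已有的索引,应该能看到以 cadvisor 开头的索引和 .kibana
docker exec $(docker ps | grep cadvisor | awk '{print $1}' | head -1) \
  curl -s "http://elasticsearch:9200/_cat/indices?v"
```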
|
||||
|
||||
### 把数据汇总成报表
|
||||
|
||||
你现在可以使用 Kibana 来创建一份美观的报表了。但是不要着急,我为你们建了一份报表和一些图形界面来方便你们入门。
|
||||
|
||||

|
||||
|
||||
访问 Kibana 界面 => Setting => Objects => Import,然后选择包含以下内容的 JSON 文件,就可以导入我的配置信息了:
|
||||
|
||||
```
|
||||
[
|
||||
{
|
||||
"_id": "cAdvisor",
|
||||
"_type": "dashboard",
|
||||
"_source": {
|
||||
"title": "cAdvisor",
|
||||
"hits": 0,
|
||||
"description": "",
|
||||
"panelsJSON": "[{\"id\":\"Filesystem-usage\",\"type\":\"visualization\",\"panelIndex\":1,\"size_x\":6,\"size_y\":3,\"col\":1,\"row\":1},{\"id\":\"Memory-[Node-equal->Container]\",\"type\":\"visualization\",\"panelIndex\":2,\"size_x\":6,\"size_y\":4,\"col\":7,\"row\":4},{\"id\":\"memory-usage-by-machine\",\"type\":\"visualization\",\"panelIndex\":3,\"size_x\":6,\"size_y\":6,\"col\":1,\"row\":4},{\"id\":\"CPU-Total-Usage\",\"type\":\"visualization\",\"panelIndex\":4,\"size_x\":6,\"size_y\":5,\"col\":7,\"row\":8},{\"id\":\"Network-RX-TX\",\"type\":\"visualization\",\"panelIndex\":5,\"size_x\":6,\"size_y\":3,\"col\":7,\"row\":1}]",
|
||||
"optionsJSON": "{\"darkTheme\":false}",
|
||||
"uiStateJSON": "{}",
|
||||
"version": 1,
|
||||
"timeRestore": false,
|
||||
"kibanaSavedObjectMeta": {
|
||||
"searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}}}]}"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"_id": "Network",
|
||||
"_type": "search",
|
||||
"_source": {
|
||||
"title": "Network",
|
||||
"description": "",
|
||||
"hits": 0,
|
||||
"columns": [
|
||||
"machine_name",
|
||||
"container_Name",
|
||||
"container_stats.network.name",
|
||||
"container_stats.network.interfaces",
|
||||
"container_stats.network.rx_bytes",
|
||||
"container_stats.network.rx_packets",
|
||||
"container_stats.network.rx_dropped",
|
||||
"container_stats.network.rx_errors",
|
||||
"container_stats.network.tx_packets",
|
||||
"container_stats.network.tx_bytes",
|
||||
"container_stats.network.tx_dropped",
|
||||
"container_stats.network.tx_errors"
|
||||
],
|
||||
"sort": [
|
||||
"container_stats.timestamp",
|
||||
"desc"
|
||||
],
|
||||
"version": 1,
|
||||
"kibanaSavedObjectMeta": {
|
||||
"searchSourceJSON": "{\"index\":\"cadvisor*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}},\"fragment_size\":2147483647},\"filter\":[]}"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"_id": "Filesystem-usage",
|
||||
"_type": "visualization",
|
||||
"_source": {
|
||||
"title": "Filesystem usage",
|
||||
"visState": "{\"title\":\"Filesystem usage\",\"type\":\"histogram\",\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"defaultYExtents\":false,\"mode\":\"stacked\",\"scale\":\"linear\",\"setYExtents\":false,\"shareYAxis\":true,\"times\":[],\"yAxis\":{}},\"aggs\":[{\"id\":\"1\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"container_stats.filesystem.usage\",\"customLabel\":\"USED\"}},{\"id\":\"2\",\"type\":\"terms\",\"schema\":\"split\",\"params\":{\"field\":\"machine_name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"row\":false}},{\"id\":\"3\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"container_stats.filesystem.capacity\",\"customLabel\":\"AVAIL\"}},{\"id\":\"4\",\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"container_stats.filesystem.device\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
|
||||
"uiStateJSON": "{\"vis\":{\"colors\":{\"Average container_stats.filesystem.available\":\"#E24D42\",\"Average container_stats.filesystem.base_usage\":\"#890F02\",\"Average container_stats.filesystem.capacity\":\"#3F6833\",\"Average container_stats.filesystem.usage\":\"#E24D42\",\"USED\":\"#BF1B00\",\"AVAIL\":\"#508642\"}}}",
|
||||
"description": "",
|
||||
"version": 1,
|
||||
"kibanaSavedObjectMeta": {
|
||||
"searchSourceJSON": "{\"index\":\"cadvisor*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"_id": "CPU-Total-Usage",
|
||||
"_type": "visualization",
|
||||
"_source": {
|
||||
"title": "CPU Total Usage",
|
||||
"visState": "{\"title\":\"CPU Total Usage\",\"type\":\"area\",\"params\":{\"shareYAxis\":true,\"addTooltip\":true,\"addLegend\":true,\"smoothLines\":false,\"scale\":\"linear\",\"interpolate\":\"linear\",\"mode\":\"stacked\",\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":false,\"setYExtents\":false,\"yAxis\":{}},\"aggs\":[{\"id\":\"1\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"container_stats.cpu.usage.total\"}},{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"container_stats.timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"container_Name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}},{\"id\":\"4\",\"type\":\"terms\",\"schema\":\"split\",\"params\":{\"field\":\"machine_name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"row\":true}}],\"listeners\":{}}",
|
||||
"uiStateJSON": "{}",
|
||||
"description": "",
|
||||
"version": 1,
|
||||
"kibanaSavedObjectMeta": {
|
||||
"searchSourceJSON": "{\"index\":\"cadvisor*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"_id": "memory-usage-by-machine",
|
||||
"_type": "visualization",
|
||||
"_source": {
|
||||
"title": "Memory [Node]",
|
||||
"visState": "{\"title\":\"Memory [Node]\",\"type\":\"area\",\"params\":{\"shareYAxis\":true,\"addTooltip\":true,\"addLegend\":true,\"smoothLines\":false,\"scale\":\"linear\",\"interpolate\":\"linear\",\"mode\":\"stacked\",\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":false,\"setYExtents\":false,\"yAxis\":{}},\"aggs\":[{\"id\":\"1\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"container_stats.memory.usage\"}},{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"container_stats.timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"machine_name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
|
||||
"uiStateJSON": "{}",
|
||||
"description": "",
|
||||
"version": 1,
|
||||
"kibanaSavedObjectMeta": {
|
||||
"searchSourceJSON": "{\"index\":\"cadvisor*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"_id": "Network-RX-TX",
|
||||
"_type": "visualization",
|
||||
"_source": {
|
||||
"title": "Network RX TX",
|
||||
"visState": "{\"title\":\"Network RX TX\",\"type\":\"histogram\",\"params\":{\"addLegend\":true,\"addTimeMarker\":true,\"addTooltip\":true,\"defaultYExtents\":false,\"mode\":\"stacked\",\"scale\":\"linear\",\"setYExtents\":false,\"shareYAxis\":true,\"times\":[],\"yAxis\":{}},\"aggs\":[{\"id\":\"1\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"container_stats.network.rx_bytes\",\"customLabel\":\"RX\"}},{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"container_stats.timestamp\",\"interval\":\"s\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"container_stats.network.tx_bytes\",\"customLabel\":\"TX\"}}],\"listeners\":{}}",
|
||||
"uiStateJSON": "{\"vis\":{\"colors\":{\"RX\":\"#EAB839\",\"TX\":\"#BF1B00\"}}}",
|
||||
"description": "",
|
||||
"savedSearchId": "Network",
|
||||
"version": 1,
|
||||
"kibanaSavedObjectMeta": {
|
||||
"searchSourceJSON": "{\"filter\":[]}"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"_id": "Memory-[Node-equal->Container]",
|
||||
"_type": "visualization",
|
||||
"_source": {
|
||||
"title": "Memory [Node=>Container]",
|
||||
"visState": "{\"title\":\"Memory [Node=>Container]\",\"type\":\"area\",\"params\":{\"shareYAxis\":true,\"addTooltip\":true,\"addLegend\":true,\"smoothLines\":false,\"scale\":\"linear\",\"interpolate\":\"linear\",\"mode\":\"stacked\",\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":false,\"setYExtents\":false,\"yAxis\":{}},\"aggs\":[{\"id\":\"1\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"container_stats.memory.usage\"}},{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"container_stats.timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"container_Name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}},{\"id\":\"4\",\"type\":\"terms\",\"schema\":\"split\",\"params\":{\"field\":\"machine_name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"row\":true}}],\"listeners\":{}}",
|
||||
"uiStateJSON": "{}",
|
||||
"description": "",
|
||||
"version": 1,
|
||||
"kibanaSavedObjectMeta": {
|
||||
"searchSourceJSON": "{\"index\":\"cadvisor*\",\"query\":{\"query_string\":{\"query\":\"* NOT container_Name.raw: \\\\\\\"/\\\\\\\" AND NOT container_Name.raw: \\\\\\\"/docker\\\\\\\"\",\"analyze_wildcard\":true}},\"filter\":[]}"
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
这里还有很多东西可以玩,你也许想自定义报表界面,比如添加内存页错误状态,或者收发包的丢包数。如果你能实现开头列表处我没能实现的项目,那也是很好的。
|
||||
|
||||
### 总结
|
||||
|
||||
正确监控需要大量时间和精力,容器的 CPU、内存、IO、网络和磁盘,监控的这些参数还只是整个监控项目中的沧海一粟而已。
|
||||
|
||||
我不知道你做到了哪一阶段,但接下来的任务也许是:
|
||||
|
||||
- 收集运行中的容器的日志
|
||||
- 收集应用的日志
|
||||
- 监控应用的性能
|
||||
- 报警
|
||||
- 监控健康状态
|
||||
|
||||
如果你有意见或建议,请留言。祝你玩得开心。
|
||||
|
||||
现在你可以关掉这些测试系统了:
|
||||
|
||||
```
|
||||
docker-machine rm master1 worker{1,2}
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.codeship.com/monitoring-docker-containers-with-elasticsearch-and-cadvisor/
|
||||
|
||||
作者:[Lorenzo Fontana][a]
|
||||
译者:[bazz2](https://github.com/bazz2)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://blog.codeship.com/author/lorenzofontana/
|
||||
|
||||
[1]: https://github.com/elastic/elasticsearch
|
||||
[2]: https://github.com/elastic/kibana
|
||||
[3]: https://github.com/google/cadvisor
|
||||
[4]: https://github.com/elastic/beats
|
||||
[5]: https://twitter.com/share?text=%22How+do+you+keep+track+of+all+that%27s+happening+in+a+Swarm+Mode+cluster%3F+Not+easily.%22+via+%40fntlnz&url=https://blog.codeship.com/monitoring-docker-containers-with-elasticsearch-and-cadvisor/
|
||||
[6]: https://www.elastic.co/guide/en/elasticsearch/guide/2.x/deploy.html
|
||||
[7]: https://docs.docker.com/machine/install-machine/
|
||||
[8]: https://cloud.digitalocean.com/settings/api/tokens/new
|
||||
[9]: https://blog.codeship.com/nginx-reverse-proxy-docker-swarm-clusters/
|
||||
[10]: https://docs.docker.com/engine/userguide/networking/get-started-overlay/
|
||||
[11]: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
|
||||
[12]: https://docs.docker.com/engine/swarm/ingress/
|
||||
[13]: https://github.com/google/cadvisor/blob/master/docs/runtime_options.md
|
||||
[14]: https://www.elastic.co/blog/found-sizing-elasticsearch
|
@ -0,0 +1,66 @@
|
||||
Ryver:你应该使用它替代 Slack
|
||||
===============
|
||||
|
||||

|
||||
|
||||
貌似每个人都听说过 [Slack][1],它是一款跨平台的,可以使你时刻保持与他人同步的团队沟通应用。它革新了用户讨论和规划项目的方式,显而易见,它升级了 email 的沟通功能。
|
||||
|
||||
我在一个非常小的写作团队工作,不管是通过手机还是电脑,我从未在使用 Slack 过程中遇到过沟通问题。若想与任何规模的团队保持同步,继续使用 Slack 仍然不失为不错的方式。
|
||||
|
||||
既然如此,为什么我们还要讨论今天的话题?Ryver 被人们认为是下一个热点,相比 Slack,Ryver 提供了升级版的服务。Ryver 完全免费,它的团队正在奋力争取更大的市场份额。
|
||||
|
||||
是否 Ryver 已经强大到可以扮演 Slack 杀手的角色?这两种旗鼓相当的消息应用究竟有何不同?
|
||||
|
||||
欲知详情,请阅读下文。
|
||||
|
||||
### 为什么用 Ryver ?
|
||||
|
||||

|
||||
|
||||
既然 Slack 能用为什么还要折腾呢?Ryver 的开发者对 Slack 的功能滚瓜烂熟,他们希望 Ryver 改进的服务足以让你移情别恋。他们承诺 Ryver 提供完全免费的团队沟通服务,并且不会在任何一个环节隐形收费。
|
||||
|
||||
谢天谢地,他们用高质量产品兑现了自己的承诺。
|
||||
|
||||
额外的内容是关键所在,他们承诺去掉一些你在 Slack 免费账号上面遇到的限制。无限的存储空间是一个加分点,除此之外,在许多其他方面 Ryver 也更加开放。如果存储空间限制对你来说是个痛点,不妨试试 Ryver。
|
||||
|
||||
这是一个简单易用的系统,所有的功能都可以一键搞定。这种设计哲学使 Apple 大获成功。当你开始使用它之后,也不会遭遇成长的烦恼。
|
||||
|
||||

|
||||
|
||||
会话分为私聊和公开两种,这意味着团队平台和私人用途之间有明确的界限。它应该有助于避免把任何尴尬的内容误发给你的同事,这类问题我在使用 Slack 期间都遇到过。
|
||||
|
||||
Ryver 支持与大量现成的 App 的集成,并在大多数平台上有原生应用程序。
|
||||
|
||||
在需要时,你可以添加访客而无需增加费用,如果你经常和外部客户打交道,这将是一个非常有用的功能。访客可以增加更多的访客,这种流动性的元素是无法从其他更流行的消息应用中看到的。
|
||||
|
||||
考虑到 Ryver 是一个为迎合不同需求而产生的完全不同的服务。如果你需要一个账户来处理几个客户,Ryver 值得一试。
|
||||
|
||||
问题是它是如何做到免费的呢?简单的答案是:高级用户会为你的使用买单。就像 Spotify 和其他应用一样,有一小部分人为我们其他人支付了费用。这里有一个指向他们下载页面的[直接链接][2],如果有兴趣就去试一试吧。
|
||||
|
||||
### 你应该切换到 Ryver 吗?
|
||||
|
||||

|
||||
|
||||
像我一样在小团队使用 Slack 的体验还是非常棒,但是 Ryver 可以给予的更多。一个完全免费的团队沟通应用的想法不可谓不宏伟,更何况它工作的十分完美。
|
||||
|
||||
同时使用这两种消息应用也无可厚非,但是如果你不愿意为一个白金 Slack 账户付费,一定要尝试一下竞争对手的服务。你可能会发现,两者各擅胜场,这取决于你需要什么。
|
||||
|
||||
最重要的是,Ryver 是一个极棒的免费替代品,它不仅仅是一个 Slack 克隆。他们清楚地知道他们想要实现什么,他们有一个可以在拥挤不堪的市场提供不同的东西的不错的产品。
|
||||
|
||||
但是,如果将来持续缺乏资金,Ryver 有可能消失。 它可能会让你的团队和讨论陷入混乱。 目前一切还好,但是如果你计划把更大的业务委托给这个新贵还是需要三思而行。
|
||||
|
||||
如果你厌倦了 Slack 对免费帐户的限制,你会对 Ryver 印象深刻。 要了解更多,请访问其网站以获取有关服务的信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/why-use-ryver-instead-of-slack/
|
||||
|
||||
作者:[James Milin-Ashmore][a]
|
||||
译者:[firstadream](https://github.com/firstadream)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.maketecheasier.com/author/james-ashmore/
|
||||
[1]: https://www.maketecheasier.com/new-slack-features/
|
||||
[2]: http://www.ryver.com/downloads/
|
@ -0,0 +1,50 @@
|
||||
一个漂亮的 Linux 桌面 REST 客户端:Insomnia 3.0
|
||||
=====
|
||||
|
||||

|
||||
|
||||
正在为 Linux 桌面端找一个免费的 REST 客户端? 别睡不着觉了!试试 [Insomnia][1]。
|
||||
|
||||
这个应用是跨平台的,可以工作在 Linux、macOS、Windows。开发者 Gregory Schier 告诉我们他创造这个应用是为了“帮助开发者处理和 [REST API][2] 的通信”
|
||||
|
||||
他还说,Insomnia 已经有大约 10000 个活跃用户,其中 9% 使用 Linux。
|
||||
|
||||
“目前来说,Linux 用户的反馈是非常积极的,因为类似的应用(反正也不怎么样)通常不支持 Linux。”
|
||||
|
||||
Insomnia 的目标是“加速你的 API 测试工作流”,通过一个简洁的接口让你组织、运行、调试 HTTP 请求。
|
||||
|
||||
这款应用还包含一些其他的高级功能比如 Cookie 管理、全局环境、SSL 验证和代码段生成。
|
||||
|
||||
由于我不是一个开发者,没有办法第一时间的评价这款应用,也没办法告诉你的它的特性或指出任何比较重大的不足之处。
|
||||
|
||||
但是,我还是要把这款应用介绍给你,让你自己去评判:如果你正在寻找一个有着顺滑用户界面、可以替代 [HTTPie][3] 之类命令行工具的应用,它也许值得一试。
|
||||
|
||||
### 下载 Linux 版 Insomnia 3.0
|
||||
|
||||
Insomnia 3.0 现在可以用在 Windows、macOS、Linux 上(不要和只能在 Chrome 上使用的 Insomnia v2.0 混淆)。
|
||||
|
||||
- [下载 Insomnia 3.0][4]
|
||||
|
||||
对于 Ubuntu 14.04 LTS 或更高版本,有一个跨发行版的 AppImage 安装包:
|
||||
|
||||
- [下载 Insomnia 3.0 (.AppImage)][5]
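下载之后,AppImage 只需要加上可执行权限就能直接运行(下面的文件名只是示意,请换成你实际下载到的文件名和目录):

```
cd ~/Downloads
# 文件名以实际下载的为准
chmod +x Insomnia*.AppImage
./Insomnia*.AppImage
```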
|
||||
|
||||
如果你想跟进这个应用的步伐,你可以在 [Twitter][6] 上关注它。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2016/09/insomnia-3-is-free-rest-client-for-linux
|
||||
|
||||
作者:[JOEY-ELIJAH SNEDDON][a]
|
||||
译者:[Bestony](https://github.com/Bestony)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]: http://insomnia.rest/
|
||||
[2]: https://en.wikipedia.org/wiki/Representational_state_transfer
|
||||
[3]: https://github.com/jkbrzt/httpie
|
||||
[4]: https://insomnia.rest/download/
|
||||
[5]: https://builds.insomnia.rest/downloads/linux/latest
|
||||
[6]: https://twitter.com/GetInsomnia
|
@ -0,0 +1,87 @@
|
||||
Torvalds 2.0: Linus 之女谈计算机、大学、女权主义和提升技术界的多元化
|
||||
====================
|
||||
|
||||

|
||||
|
||||
*图片来源:照片来自 Becky Svartström, Opensource.com 修改*
|
||||
|
||||
Patricia Torvalds 暂时还不像她的父亲 Linus 一样闻名于 Linux 和开源领域。
|
||||
|
||||

|
||||
|
||||
在她 18 岁的时候,Patricia 已经是一个有多项技术成就、拥有开源行业经验的女权主义者,而且她已经把目标放在进入杜克大学普拉特工程学院的新学期上了。当时她以实习生的身份在位于美国俄勒冈州波特兰市的 [Puppet 实验室][2]工作。但不久后,她就将前往北卡罗来纳州的达勒姆,开始秋季学期的大学学习。
|
||||
|
||||
在这次独家采访中,Patricia 谈到了使她对计算机科学与工程学感兴趣的(剧透警告:不是因为她的父亲)原因,她所在高中学校在技术教学上所采取的“正确”方法,女权主义在她的生活中扮演的重要角色,以及对技术缺乏多元化的思考。
|
||||
|
||||

|
||||
|
||||
### 是什么使你对学习计算机科学与工程学发生兴趣?
|
||||
|
||||
我在技术方面的兴趣主要来自于高中时代。我曾一度想投身于生物学,这种想法一直维持到大约高中二年级的时候。高二结束以后,我在波特兰 [VA][11] 做网页设计实习生。与此同时,我参加了一个叫做“探索冒险家(Exploratory Ventures,XV)”的工程学课程,在我高二学年的后期,我们把一个水下机器人送入了太平洋。但是,转折点大概是在我高三学年的中期被授予“[NCWIT 的计算机之理想][6]”奖的地区冠军和全国亚军的时候出现的(LCTT 译注:NCWIT - National Center for Women & IT,女性与 IT 国家中心)。
|
||||
|
||||
这个奖项的获得让我感觉到确立了自己的兴趣。当然,我认为最重要的部分是我加入到一个由所有获奖者组成的 Facebook 群。女孩们能获得这样的奖项并不容易,因此我们彼此非常支持。由于在 XV 和 [VA][11] 的工作,我在获奖前就已经确实对计算机科学发生了兴趣,但是和这些女孩们的交谈更加坚定了这份兴趣,使之更加强烈。再后来,在高中最后两年执教 XV 也使我体会到工程学和计算机科学的乐趣。
|
||||
|
||||
### 你打算学习什么?你已经知道自己毕业后想干什么了吗?
|
||||
|
||||
我希望要么主修机械,要么是电子和计算机工程学,以及计算机科学,并且辅修女性学。毕业以后,我希望在一个支持社会公益或者为其创造技术的公司工作,或者自己开公司。
|
||||
|
||||
### 我的女儿在高中有一门 Visual Basic 的编程课。她是整个班上唯一的一个女生,并且以困扰和痛苦的经历结束了这门课程。你的经历是什么样的呢?
|
||||
|
||||
我的高中在高年级的时候开设了计算机科学的课程,我也学习了 Visual Basic!这门课不是很糟糕,但我的确是 20 多个人的班级里仅有的三四个女生之一。其他的计算机课程似乎也有相似的性别比例差异。然而,我所在的高中极其小,并且老师对技术包容性非常支持,所以我并没有感到困扰。希望在未来的一些年里这些课程会变得更加多样化。
|
||||
|
||||
### 你的学校做了哪些促进技术的举措?它们如何能够变得更好?
|
||||
|
||||
我的高中学校给了我们长时间接触计算机的机会,老师们会突然在不相关的课程上安排技术相关的任务,有几次我们还为社会实践课程建了一个网站,我认为这很棒,因为它使我们每一个人都能接触到技术。机器人俱乐部也很活跃并且资金充足,但是非常小,不过我不是其中的成员。学校的技术/工程学项目中一个非常重要的组成部分是一门叫做”[探索冒险家(Exploratory Ventures)][8]“的由学生自己教的工程学课程,这是一门需要亲自动手的课程,并且每年换一个工程学或者计算机科学方面的难题。我和我的一个同学在这儿教了两年,在课程结束以后,有学生上来告诉我他们对从事工程学或者计算机科学发生了兴趣。
|
||||
|
||||
然而,我的高中没有特别的关注于让年轻女性加入到这些课程中来,并且在人种上也没有呈现多样化。计算机的课程和俱乐部大量的主要成员都是男性白人学生。这的确应该需要有所改善。
|
||||
|
||||
### 在成长过程中,你如何在家运用技术?
|
||||
|
||||
老实说,小的时候,我使用我的上机时间([我的父亲 Linus][9] 设置了一个跟踪装置,当我们上网一个小时就会断线)来玩[尼奥宠物][10]和或者相似的游戏。我想我本可以搞乱跟踪装置或者在不连接网络的情况下玩游戏,但我没有这样做。我有时候也会和我的父亲做一些小的科学项目,我还记得有一次我和他在电脑终端上打印出几千个“Hello world”。但是大多数时候,我都是和我的妹妹一起玩网络游戏,直到高中的时候才开始学习计算机。
|
||||
|
||||
### 你在高中学校的女权俱乐部很活跃,从这份经历中你学到了什么?现在对你来说什么女权问题是最重要的?
|
||||
|
||||
在高中二年级的后期,我和我的朋友一起建立了女权俱乐部。刚开始,我们受到了很多人对俱乐部的排斥,并且这从来就没有完全消失过。到我们毕业的时候,女权主义思想已经彻底成为了学校文化的一部分。我们在学校做的女权主义工作通常是在一些比较直接的方面,并集中于像着装要求这样一些问题。
|
||||
|
||||
就我个人来说,我更关注于新女性主义(intersectional feminism),这是一种致力于消除其它方面压迫(比如种族歧视和阶级压迫等)的女权主义。Facebook 上的 [Guerrilla Feminism][4] 专页是新女性主义一个非常好的例子,并且我从中学到了很多。我目前管理着波特兰分会。
|
||||
|
||||
在技术多样性方面女权主义对我也非常重要,尽管作为一名和技术世界有很强联系的高年级白人女性,女权主义问题对我产生的影响相比其他人来说少得多,我所参与的新女性主义也是同样的。[《Model View Culture》][5]的出版非常鼓舞我,谢谢 Shanley Kane 所做的这一切。
|
||||
|
||||
### 你会给想教他们的孩子学习编程的父母什么样的建议?
|
||||
|
||||
老实说,从没有人推着我学习计算机科学或者工程学。正如我前面说的,在很长一段时间里,我想成为一名遗传学家。高二结束的那个夏天,我在 [VA][11] 做了一个夏天的网页设计实习生,这彻底改变了我之前的想法。所以我不知道我是否能够充分回答这个问题。
|
||||
|
||||
我的确认为真正的兴趣很重要。如果在我 12 岁的时候,我的父亲让我坐在一台电脑前,教我配置一台网站服务器,我认为我不会对计算机科学感兴趣。相反,我的父母给了我很多可以支配的自由时间让我去做自己想做的事情,绝大多数时候是我在为我的尼奥宠物游戏编写糟糕的 HTML 网站。比我小的妹妹们没有一个对工程学或计算机科学感兴趣,我的父母也不在乎。我感到很幸运的是我的父母给了我和我的妹妹们鼓励和资源去探索自己的兴趣。
|
||||
|
||||
仍然要讲的是,在我成长过程中我也常说未来职业生涯要“像我爹一样”,尽管那时我还不知道我父亲是干什么的,只知道他有一个很酷的工作。另外,中学的时候有一次我告诉我的父亲这件事,然后他没有发表什么看法只是告诉我高中的时候不要想这事。所以我猜想这从一定程度上鼓励了我。
|
||||
|
||||
### 对于开源社区的领导者们,你有什么建议可以帮助他们吸引和留住更加多元化的贡献者? ###
|
||||
|
||||
我实际上在开源社区并不是特别积极和活跃,我更喜欢和其它女性讨论计算机。我是“[NCWIT 的计算机之理想][6]”成员之一,这是我对技术持久感到兴趣的一个重要方面,同样也包括 Facebook 的”[Ladies Storm Hackathons][7]” 群。
|
||||
|
||||
我认为,要吸引和留住那些才华横溢而又多元化的贡献者,安全的空间很重要。我曾经看到过在一些开源社区里有人发表歧视女性和种族主义的评论,在被人指出之后,发表评论的人随即被除名了。我认为要维持一个专业的社区,必须对骚扰事件和不当行为保持高标准。当然,关于在开源社区或其他任何社区里可以表达什么样的言论,人们已经有、将来也还会有各种不同的观点。然而,如果社区领导人真的想吸引和留住多元化的人才,就必须创造一个安全的空间,并且以高标准要求社区成员。
|
||||
|
||||
我也觉得一些社区领导者并不理解多元化的价值。人们很容易认为技术界是唯才是举的,而这种看法的部分成因,恰恰是那些不处于技术圈中心位置的人被忽视了,问题早在发展的早期阶段就已出现。他们会争辩说,如果一个人把自己的工作做得足够好,那么性别、种族或者性取向就都不重要了。这种说法很容易反驳,但我不想看到有人为这些错误找理由。我认为多元化的缺失是一个错误,我们应该为之负责并尽力去改善这件事。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://opensource.com/life/15/8/patricia-torvalds-interview
|
||||
|
||||
作者:[Rikki Endsley][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[LinuxBars](https://github.com/LinuxBars), [wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://opensource.com/users/rikki-endsley
|
||||
[1]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[2]:https://puppetlabs.com/
|
||||
[3]:https://www.aspirations.org/
|
||||
[4]:https://www.facebook.com/guerrillafeminism
|
||||
[5]:https://modelviewculture.com/
|
||||
[6]:https://www.aspirations.org/
|
||||
[7]:https://www.facebook.com/groups/LadiesStormHackathons/
|
||||
[8]: http://exploratoryventures.com/
|
||||
[9]: https://plus.google.com/+LinusTorvalds/about
|
||||
[10]: http://www.neopets.com/
|
||||
[11]: http://www.va.gov/
|
@ -1,17 +1,19 @@
|
||||
如何使用 Awk 语言写脚本 - Part 13
|
||||
awk 系列:如何使用 awk 语言编写脚本
|
||||
====
|
||||
|
||||
从 Awk 系列开始直到第 12 部分,我们都是在命令行或者脚本文件写一些简短的 Awk 命令和程序。
|
||||

|
||||
|
||||
然而 Awk 和 Shell 一样也是一个解释语言。通过从开始到现在的一系列的学习,你现在能写可以执行的 Awk 脚本了。
|
||||
从 awk 系列开始直到[第 12 部分][1],我们都是在命令行或者脚本文件里写一些简短的 awk 命令和程序。
|
||||
|
||||
和写 shell 脚本差不多,Awk 脚本以下面这一行开头:
|
||||
然而 awk 和 shell 一样也是一个解释型语言。通过从开始到现在的一系列的学习,你现在能写可以执行的 awk 脚本了。
|
||||
|
||||
和写 shell 脚本差不多,awk 脚本以下面这一行开头:
|
||||
|
||||
```
|
||||
#! /path/to/awk/utility -f
|
||||
```
|
||||
|
||||
例如在我的系统上,Awk 工具安装在 /user/bin/awk 目录,所以我的 Awk 脚本以如下内容作为开头:
|
||||
例如在我的系统上,awk 工具安装在 /usr/bin/awk,所以我的 awk 脚本以如下内容作为开头:
|
||||
|
||||
```
|
||||
#! /usr/bin/awk -f
|
||||
@ -19,13 +21,11 @@
|
||||
|
||||
上面一行的解释如下:
|
||||
|
||||
```
|
||||
#! – 称为 Shebang,指明使用那个解释器来执行脚本中的命令
|
||||
/usr/bin/awk –解释器
|
||||
-f – 解释器选项,用来指定读取的程序文件
|
||||
```
|
||||
- `#!` ,称为[释伴(Shebang)][2],指明使用哪个解释器来执行脚本中的命令
|
||||
- `/usr/bin/awk` ,即解释器
|
||||
- `-f` ,解释器选项,用来指定读取的程序文件
|
||||
|
||||
说是这么说,现在从下面的简单例子开始,让我们深入研究一些可执行的 Awk 脚本。使用你最喜欢的编辑器创建一个新文件,像下面这样:
|
||||
说是这么说,现在从下面的简单例子开始,让我们深入研究一些可执行的 awk 脚本。使用你最喜欢的编辑器创建一个新文件,像下面这样:
|
||||
|
||||
```
|
||||
$ vi script.awk
|
||||
@ -35,7 +35,7 @@ $ vi script.awk
|
||||
|
||||
```
|
||||
#!/usr/bin/awk -f
|
||||
BEGIN { printf "%s\n","Writing my first Awk executable script!" }
|
||||
BEGIN { printf "%s\n","Writing my first awk executable script!" }
|
||||
```
|
||||
|
||||
保存文件后退出,然后执行下面命令,使得脚本可执行:
|
||||
@ -53,10 +53,10 @@ $ ./script.awk
|
||||
输出样例:
|
||||
|
||||
```
|
||||
Writing my first Awk executable script!
|
||||
Writing my first awk executable script!
|
||||
```
|
||||
|
||||
一个严格的程序员一定会问:“注释呢?”。是的,你可以在 Awk 脚本中包含注释。在代码中写注释是一种良好的编程习惯。
|
||||
一个严格的程序员一定会问:“注释呢?”。是的,你可以在 awk 脚本中包含注释。在代码中写注释是一种良好的编程习惯。
|
||||
|
||||
它有利于其它程序员阅读你的代码,理解程序文件或者脚本中每一部分的功能。
|
||||
|
||||
@ -64,20 +64,20 @@ Writing my first Awk executable script!
|
||||
|
||||
```
|
||||
#!/usr/bin/awk -f
|
||||
#This is how to write a comment in Awk
|
||||
#using the BEGIN special pattern to print a sentence
|
||||
BEGIN { printf "%s\n","Writing my first Awk executable script!" }
|
||||
# 这是如何在 awk 中写注释的示例
|
||||
# 使用特殊模式 BEGIN 来输出一句话
|
||||
BEGIN { printf "%s\n","Writing my first awk executable script!" }
|
||||
```
|
||||
|
||||
接下来我们看一个读文件的例子。我们想从帐号文件 /etc/passwd 中查找一个叫 aaronkilik 的用户,然后像下面这样打印用户名,用户的 ID,用户的 GID (译者注:组 ID):
|
||||
接下来我们看一个读文件的例子。我们想从帐号文件 /etc/passwd 中查找一个叫 aaronkilik 的用户,然后像下面这样打印用户名、用户的 ID、用户的 GID(LCTT 译注:组 ID):
|
||||
|
||||
下面是我们脚本文件的内容,文件名为 second.awk。
|
||||
|
||||
```
|
||||
#! /usr/bin/awk -f
|
||||
#use BEGIN sepecial character to set FS built-in variable
|
||||
# 使用 BEGIN 指定字符来设定 FS 内置变量
|
||||
BEGIN { FS=":" }
|
||||
#search for username: aaronkilik and print account details
|
||||
# 搜索用户名 aaronkilik 并输出账号细节
|
||||
/aaronkilik/ { print "Username :",$1,"User ID :",$3,"User GID :",$4 }
|
||||
```
|
||||
|
||||
@ -88,13 +88,13 @@ $ chmod +x second.awk
|
||||
$ ./second.awk /etc/passwd
|
||||
```
|
||||
|
||||
输出样例
|
||||
输出样例:
|
||||
|
||||
```
|
||||
Username : aaronkilik User ID : 1000 User GID : 1000
|
||||
```
|
||||
|
||||
在下面最后一个例子中,我们将使用 do while 语句来打印数字 0-10:
|
||||
在下面最后一个例子中,我们将使用 `do while` 语句来打印数字 0-10:
|
||||
|
||||
下面是我们脚本文件的内容,文件名为 do.awk。
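差异中没有显示 do.awk 的完整内容;下面给出一个按照前文 do-while 语法补写的示例草稿(仅供参考,假设计数器从 0 计到 10,文件名沿用 do.awk):

```
#!/usr/bin/awk -f
# 示例草稿:使用 do-while 循环打印数字 0-10
BEGIN {
    counter = 0
    do {
        print counter
        counter++
    } while (counter <= 10)
}
```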
|
||||
|
||||
@ -138,22 +138,24 @@ $ ./do.awk
|
||||
|
||||
### 总结
|
||||
|
||||
我们已经到达这个精彩的 Awk 系列的最后,我希望你从整个 13 部分中学到了很多知识,把这些当作你 Awk 编程语言的入门指导。
|
||||
我们已经到达这个精彩的 awk 系列的最后,我希望你从整个 13 个章节中学到了很多知识,把这些当作你 awk 编程语言的入门指导。
|
||||
|
||||
我一开始就提到过,Awk 是一个完整的文本处理语言,所以你可以学习很多 Awk 编程语言的其它方面,例如环境变量,数组,函数(内置的或者用户自定义的),等等。
|
||||
我一开始就提到过,awk 是一个完整的文本处理语言,所以你可以学习很多 awk 编程语言的其它方面,例如环境变量、数组、函数(内置的或者用户自定义的),等等。
|
||||
|
||||
Awk 编程还有其它内容需要学习和掌握,所以在文末我提供了一些重要的在线资源的链接,你可以利用他们拓展你的 Awk 编程技能。但这不是必须的,你也可以阅读一些关于 Awk 的书籍。
|
||||
awk 编程还有其它内容需要学习和掌握,所以在文末我提供了一些重要的在线资源的链接,你可以利用他们拓展你的 awk 编程技能。但这不是必须的,你也可以阅读一些关于 awk 的书籍。
|
||||
|
||||
如果你任何想要分享的想法或者问题,在下面留言。记得保持关注 Tecmint,会有更多的精彩内容。
|
||||
如果你有任何想要分享的想法或者问题,请在下面留言。记得保持关注我们,会有更多的精彩内容。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/write-shell-scripts-in-awk-programming/
|
||||
|
||||
作者:[Aaron Kili |][a]
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[chunyang-wen](https://github.com/chunyang-wen)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.tecmint.com/author/aaronkili/
|
||||
[1]: https://linux.cn/article-7723-1.html
|
||||
[2]: https://linux.cn/article-3664-1.html
|
@ -1,7 +1,7 @@
|
||||
如何使用 Awk 中的流控制语句 - part12
|
||||
awk 系列:如何在 awk 中使用流控制语句
|
||||
====
|
||||
|
||||
回顾从 Awk 系列最开始到现在我们所讲的所有关于 Awk 的例子,你会发现不同例子中的所有命令都是顺序执行的,也就是一个接一个的执行。但是在某些场景下,我们可能希望根据一些条件来执行一些文本过滤,这个时候流控制语句就派上用场了。
|
||||
回顾 awk 系列,从最开始到现在我们所讲的所有关于 awk 的例子,你会发现不同例子中的所有命令都是顺序执行的,也就是一个接一个地执行。但是在某些场景下,我们可能希望根据一些条件来执行一些[文本过滤操作][1],这个时候流控制语句就派上用场了。
|
||||
|
||||

|
||||
|
||||
@ -17,7 +17,7 @@ Awk 包含很多的流控制语句,包括:
|
||||
- nextfile 语句
|
||||
- exit 语句
|
||||
|
||||
但是在这个系列中,我们将详细解释:if-else,for,while,do-while 语句。关于如何使用 next 语句,如果你们记得的话,我们已经在 Awk 系列的第6部分介绍过了。
|
||||
但是在这一部分中,我们将详细解释 `if-else`、`for`、`while`、`do-while` 语句。关于如何使用 `next` 语句,如果你们记得的话,我们已经在 [Awk 系列的第 6 部分][2]介绍过了。
|
||||
|
||||
### 1. if-else 语句
|
||||
|
||||
@ -36,7 +36,7 @@ actions2
|
||||
|
||||
当 condition1 满足时,意味着它的值是 true,此时会执行 actions1,if 语句退出,否则(译注:condition1 为 false)执行 actions2。
|
||||
|
||||
if 语句可以扩展成如下的 if-else_if-else:
|
||||
if 语句可以扩展成如下的 `if-else_if-else`:
|
||||
|
||||
```
|
||||
if (condition1){
|
||||
@ -52,7 +52,7 @@ actions3
|
||||
|
||||
上面例子中,如果 condition1 为 true,执行 actions1,if 语句退出;否则对 condition2 求值,如果值为 true,那么执行 actions2,if 语句退出。然而如果 condition2 是 false,那么会执行 actions3 退出 if语句。
|
||||
|
||||
下面是一个使用 if 语句的例子,我们有一个存储用户和他们年龄列表的文件,users.txt。
|
||||
下面是一个使用 if 语句的例子,我们有一个存储用户和他们年龄列表的文件 users.txt。
|
||||
|
||||
我们想要打印用户的名字以及他们的年龄是大于 25 还是小于 25。
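原文的 test.sh 脚本内容在这段差异中没有完整显示;下面是按照上述需求补写的一个示例草稿(假设 users.txt 每行依次为名、姓、年龄三个以空格分隔的字段):

```
#!/bin/bash
# 示例草稿:假设 users.txt 内容形如
#   Sarah L 35
#   James M 12
awk '{
    if ( $3 > 25 )
        print "User",$1,$2,"is more than 25 years old"
    else
        print "User",$1,$2,"is less than 25 years old"
}' users.txt
```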
|
||||
|
||||
@ -85,7 +85,7 @@ $ chmod +x test.sh
|
||||
$ ./test.sh
|
||||
```
|
||||
|
||||
输出样例
|
||||
输出样例:
|
||||
|
||||
```
|
||||
User Sarah L is more than 25 years old
|
||||
@ -96,7 +96,7 @@ User Kili Seth is more than 25 years old
|
||||
|
||||
### 2. for 语句
|
||||
|
||||
如果你想循环执行一些 Awk 命令,那么 for 语句十分合适,它的语法如下:
|
||||
如果你想循环执行一些 Awk 命令,那么 `for` 语句十分合适,它的语法如下:
|
||||
|
||||
这里只是简单的定义一个计数器来控制循环的执行。首先你要初始化那个计数器 (counter),然后根据某个条件判断是否执行,如果该条件为 true 则执行,最后增加计数器。当计数器不满足条件时则终止循环。
|
||||
|
||||
@ -106,7 +106,7 @@ actions
|
||||
}
|
||||
```
|
||||
|
||||
下面的 Awk 命令利用打印数字 0-10 来说明 for 语句是怎么工作的。
|
||||
下面的 Awk 命令利用打印数字 0-10 来说明 `for` 语句是怎么工作的。
|
||||
|
||||
```
|
||||
$ awk 'BEGIN{ for(counter=0;counter<=10;counter++){ print counter} }'
|
||||
@ -130,7 +130,7 @@ $ awk 'BEGIN{ for(counter=0;counter<=10;counter++){ print counter} }'
|
||||
|
||||
### 3. while 语句
|
||||
|
||||
传统的 while 语句语法如下:
|
||||
传统的 `while` 语句语法如下:
|
||||
|
||||
```
|
||||
while ( condition ) {
|
||||
@ -138,7 +138,7 @@ actions
|
||||
}
|
||||
```
|
||||
|
||||
上面的 condition 是 Awk 表达式,actions 是当 condition 为 true 时执行的 Awk命令。
|
||||
上面的 condition 是 Awk 表达式,actions 是当 condition 为 true 时执行的 Awk 命令。
|
||||
|
||||
下面是仍然用打印数字 0-10 来解释 while 语句的用法:
|
||||
|
||||
@ -159,8 +159,7 @@ $ chmod +x test.sh
|
||||
$ ./test.sh
|
||||
```
|
||||
|
||||
输出样例
|
||||
Sample Output
|
||||
输出样例:
|
||||
|
||||
```
|
||||
0
|
||||
@ -178,7 +177,7 @@ Sample Output
|
||||
|
||||
### 4. do-while 语句
|
||||
|
||||
这个是上面的 while 语句语法的一个变化,其语法如下:
|
||||
这个是上面的 `while` 语句语法的一个变化,其语法如下:
|
||||
|
||||
```
|
||||
do {
|
||||
@ -187,7 +186,7 @@ actions
|
||||
while (condition)
|
||||
```
|
||||
|
||||
二者的区别是,在 do-while 中,Awk 的命令在条件求值前先执行。我们使用 while 语句中同样的例子来解释 do-while 的使用,将 test.sh 脚本中的 Awk 命令做如下更改:
|
||||
二者的区别是,在 `do-while` 中,Awk 的命令在条件求值前先执行。我们使用 `while` 语句中同样的例子来解释 `do-while` 的使用,将 test.sh 脚本中的 Awk 命令做如下更改:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
@ -238,10 +237,11 @@ via: http://www.tecmint.com/use-flow-control-statements-with-awk-command/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[chunyang-wen](https://github.com/chunyang-wen)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.tecmint.com/author/aaronkili/
|
||||
|
||||
[1]: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files/
|
||||
[2]: http://www.tecmint.com/use-next-command-with-awk-in-linux/
|
||||
|
@ -1,4 +1,4 @@
|
||||
GHLandy Translating
|
||||
Editing by FrankXinqi
|
||||
|
||||
Linux vs. Windows device driver model : architecture, APIs and build environment comparison
|
||||
============================================================================================
|
||||
@ -179,7 +179,7 @@ Download this article as ad-free PDF (made possible by [your kind donation][2]):
|
||||
via: http://xmodulo.com/linux-vs-windows-device-driver-model.html
|
||||
|
||||
作者:[Dennis Turpitka][a]
|
||||
译者:[GHLandy](https://github.com/GHLandy)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,50 +0,0 @@
|
||||
Adobe's new CIO shares leadership advice for starting a new role
|
||||
====
|
||||
|
||||

|
||||
|
||||
I’m currently a few months into a new CIO role at a highly-admired, cloud-based technology company. One of my first tasks was to get to know the organization’s people, culture, and priorities.
|
||||
|
||||
As part of that goal, I am visiting all the major IT sites. While in India, less than two months into the job, I was asked directly: “What are you going to do? What is your plan?” My response, which will not surprise seasoned CIOs, was that I was still in discovery mode, and I was there to listen and learn.
|
||||
|
||||
I’ve never gone into an organization with a set blueprint for what I’ll do. I know some CIOs have a playbook for how they will operate. They’ll come in and blow the whole organization up and put their set plan in motion.
|
||||
|
||||
Yes, there may be situations where things are massively broken and not working, so that course of action makes sense. Once I’m inside a company, however, my strategy is to go through a discovery process. I don’t want to have any preconceived notions about the way things should be or what’s working versus what’s not.
|
||||
|
||||
Here are my guiding principles as a newly-appointed leader:
|
||||
|
||||
### Get to know your people
|
||||
|
||||
This means building relationships, and it includes your IT staff as well as your business users and your top salespeople. What are the top things on their lists? What do they want you to focus on? What’s working well? What’s not? How is the customer experience? Knowing how you can help everyone be more successful will help you shape the way you deliver services to them.
|
||||
|
||||
If your department is spread across several floors, as mine is, consider meet-and-greet lunches or mini-tech fairs so people can introduce themselves, discuss what they’re working on, and share stories about their family, if they feel comfortable doing that. If you have an open-door office policy, make sure they know that as well. If your staff spreads across countries or continents, get out there and visit as soon as you reasonably can.
|
||||
|
||||
### Get to know your products and company culture
|
||||
|
||||
One of the things that surprised me coming into Adobe was how broad our product portfolio is. We have a platform of solutions and services across three clouds – Adobe Creative Cloud, Document Cloud and Marketing Cloud – and a vast portfolio of products within each. You’ll never know how much opportunity your new company presents until you get to know your products and learn how to support all of them. At Adobe we use many of our digital media and digital marketing solutions as Customer Zero, so we have first-hand experiences to share with our customers.
|
||||
|
||||
### Get to know customers
|
||||
|
||||
Very early on, I started getting requests to meet with customers. Meeting with customers is a great way to jump-start your thinking into the future of the IT organization, which includes the different types of technologies, customers, and consumers we could have going forward.
|
||||
|
||||
### Plan for the future
|
||||
|
||||
As a new leader, I have a fresh perspective and can think about the future of the organization without getting distracted by challenges or obstacles.
|
||||
|
||||
What CIOs need to do is jump-start IT into its next generation. When I meet my staff, I’m asking them what we want to be three to five years out so we can start positioning ourselves for that future. That means discussing the initiatives and priorities.
|
||||
|
||||
After that, it makes sense to bring the leadership team together so you can work to co-create the next generation of the organization – its mission, vision, modes of alignment, and operating norms. If you start changing IT from the inside out, it will percolate into business and everything else you do.
|
||||
|
||||
Through this whole process, I’ve been very open with people that this is not going to be a top-down directive. I have ideas on priorities and what we need to focus on, but we have to be in lockstep, working as a team and figuring out what we want to do jointly.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://enterprisersproject.com/article/2016/9/adobes-new-cio-shares-leadership-advice-starting-new-role
|
||||
|
||||
作者:[Cynthia Stoddard][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://enterprisersproject.com/user/cynthia-stoddard
|
@ -1,62 +0,0 @@
|
||||
Linus Torvalds reveals his favorite programming laptop
|
||||
====
|
||||
|
||||
>It's the Dell XPS 13 Developer Edition. Here's why.
|
||||
|
||||
I recently talked with some Linux developers about what the best laptop is for serious programmers. As a result I checked out several laptops from a programmer's viewpoint. The winner in my book? The 2016 Dell XPS 13 Developer Edition. I'm in good company. Linus Torvalds, Linux's creator, agrees. The Dell XPS 13 Developer Edition, for him, is the best laptop around.
|
||||
|
||||

|
||||
|
||||
Torvalds' requirements may not be yours, though.
|
||||
|
||||
On Google+, Torvalds explained, "First off: [I don't use my laptop as a desktop replacement][1], and I only travel for a small handful of events each year. So for me, the laptop is a fairly specialized thing that doesn't get daily (or even weekly) use, so the main criteria are not some kind of 'average daily use', but very much 'travel use'."
|
||||
|
||||
Therefore, for Torvalds, "I end up caring a lot about it being fairly small and light, because I may end up carrying it around all day at a conference. I also want it to have a good screen, because by now I'm just used to it at my main desktop, and I want my text to be legible but small."
|
||||
|
||||
The Dell's display is powered by Intel's Iris 540 GPU. In my experience it works really well.
|
||||
|
||||
The Iris powers a 13.3 inch display with a 3,200×1,800 touchscreen. That's 280 pixels per inch, 40 more than my beloved [2015 Chromebook Pixel][2] and 60 more than a [MacBook Pro with Retina][3].
|
||||
|
||||
However, getting that hardware to work and play well with the [Gnome][4] desktop isn't easy. As Torvalds explained in another post, it "has the [same resolution as my desktop][5], but apparently because the laptop screen is smaller, Gnome seems to decide on its own that I need an automatic scaling factor of 2, which blows up all the stupid things (window decorations, icons etc) to a ridiculous degree".
|
||||
|
||||
The solution? You can forget about looking to the user interface. You need to go to the shell and run: gsettings set org.gnome.desktop.interface scaling-factor 1.
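For copy-and-paste convenience, here is that command (exactly as quoted above) as you would run it in a terminal:

```
gsettings set org.gnome.desktop.interface scaling-factor 1
```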
|
||||
|
||||
Torvalds may use Gnome, but he's [never liked the Gnome 3.x family much][6]. I can't argue with him. That's why I use [Cinnamon][7] instead.
|
||||
|
||||
He also wants "a reasonably powerful CPU, because when I'm traveling I still build the kernel a lot. I don't do my normal full 'make allmodconfig' build between each pull request like I do at home, but I'd like to do it more often than I did with my previous laptop, which is actually (along with the screen) the main reason I wanted to upgrade."
|
||||
|
||||
Linus doesn't describe the features of his XPS 13, but my review unit was a high-end model. It came with dual-core, 2.2GHz 6th Generation Intel Core i7-6560U Skylake processor and 16GBs of DDR3 RAM with a half a terabyte, PCIe solid state drive (SSD). I'm sure Torvalds' system is at least that well-equipped.
|
||||
|
||||
Some features you may care about aren't on Torvalds' list.
|
||||
|
||||
>"What I don't tend to care about is touch-screens, because my fingers are big and clumsy compared to the text I'm looking at (I also can't handle the smudges: maybe I just have particularly oily fingers, but I really don't want to touch that screen).
|
||||
|
||||
>I also don't care deeply about some 'all day battery life', because quite frankly, I can't recall the last time I didn't have access to power. I might not want to bother to plug it in for some quick check, but it's just not a big overwhelming issue. By the time battery life is in 'more than a couple of hours', I just don't care very much any more."
|
||||
Dell claims the XPS 13, with its 56 WHr, 4-cell battery, has about a 12-hour battery life. It lasted well over 10 hours in my experience. I haven't tried to run it down to the dregs.
|
||||
|
||||
Torvalds also didn't have any trouble with the Intel Wi-Fi set. The non Developer Edition uses a Broadcom chip set and that has proven troublesome for both Windows and Linux users. Dell technical support was extremely helpful to me in getting this problem under control.
|
||||
|
||||
Some people have trouble with the XPS 13 touchpad. Neither I nor Torvalds have any worries. Torvalds wrote, the "XPS13 touchpad works very well for me. That may be a personal preference thing, but it seems to be both smooth and responsive."
|
||||
|
||||
Still, while Torvalds likes the XPS 13, he's also fond of the latest Lenovo X1 Carbon, HP Spectre 13 x360, and last year's Lenovo Yoga 900. Me? I like the XPS 13 Developer Edition. The price tag, which for the model I reviewed was $1949.99, may keep you from reaching for your credit card.
|
||||
|
||||
Still, if you want to develop like one of the world's top programmers, the Dell XPS 13 Developer Edition is worth the money.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.zdnet.com/article/linus-torvalds-reveals-his-favorite-programming-laptop/
|
||||
|
||||
作者:[Steven J. Vaughan-Nichols ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
|
||||
[1]: https://plus.google.com/+LinusTorvalds/posts/VZj8vxXdtfe
|
||||
[2]: http://www.zdnet.com/article/the-best-chromebook-ever-the-chromebook-pixel-2015/
|
||||
[3]: http://www.zdnet.com/product/apple-15-inch-macbook-pro-with-retina-display-mid-2015/
|
||||
[4]: https://www.gnome.org/
|
||||
[5]: https://plus.google.com/+LinusTorvalds/posts/d7nfnWSXjfD
|
||||
[6]: http://www.zdnet.com/article/linus-torvalds-finds-gnome-3-4-to-be-a-total-user-experience-design-failure/
|
||||
[7]: http://www.zdnet.com/article/how-to-customise-your-linux-desktop-cinnamon/
|
@ -1,3 +1,10 @@
|
||||
Translating by WangYueScream
|
||||
============================================================================
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
Should Smartphones Do Away with the Headphone Jack? Here Are Our Thoughts
|
||||
====
|
||||
|
||||
|
@ -1,3 +1,5 @@
|
||||
willcoderwang 正在翻译
|
||||
|
||||
What the rise of permissive open source licenses means
|
||||
====
|
||||
|
||||
|
@ -1,84 +0,0 @@
|
||||
Setup honeypot in Kali Linux
|
||||
====
|
||||
|
||||
Pentbox is a security kit containing various tools that streamline the job of conducting a penetration test. It is programmed in Ruby and oriented to GNU/Linux, with support for Windows, macOS and any other system where Ruby is installed. In this short article we will explain how to set up a honeypot in Kali Linux. If you don’t know what a honeypot is: “a honeypot is a computer security mechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems.”
|
||||
|
||||
### Download Pentbox:
|
||||
|
||||
Simply type in the following command in your terminal to download pentbox-1.8.
|
||||
|
||||
```
|
||||
root@kali:~# wget http://downloads.sourceforge.net/project/pentbox18realised/pentbox-1.8.tar.gz
|
||||
```
|
||||
|
||||

|
||||
|
||||
### Uncompress pentbox files
|
||||
|
||||
Decompressing the file with the following command:
|
||||
|
||||
```
|
||||
root@kali:~# tar -zxvf pentbox-1.8.tar.gz
|
||||
```
|
||||
|
||||

|
||||
|
||||
### Run pentbox ruby script
|
||||
|
||||
Change directory into pentbox folder
|
||||
|
||||
```
|
||||
root@kali:~# cd pentbox-1.8/
|
||||
```
|
||||
|
||||

|
||||
|
||||
Run pentbox using the following command
|
||||
|
||||
```
|
||||
root@kali:~# ./pentbox.rb
|
||||
```
|
||||
|
||||

|
||||
|
||||
### Setup a honeypot
|
||||
|
||||
Use option 2 (Network Tools) and then option 3 (Honeypot).
|
||||
|
||||

|
||||
|
||||
Finally for first test, choose option 1 (Fast Auto Configuration)
|
||||
|
||||

|
||||
|
||||
This opens up a honeypot on port 80. Simply open a browser and browse to http://192.168.160.128 (where 192.168.160.128 is your IP address). You should see an Access denied error.
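If you prefer the command line, a quick request along these lines should trigger the same result (192.168.160.128 is just the example address used above; substitute your own IP):

```
# Request the honeypot's fake web page; pentbox should log the attempt
curl -i http://192.168.160.128/
```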
|
||||
|
||||

|
||||
|
||||
and in the terminal you should see “HONEYPOT ACTIVATED ON PORT 80” followed by “INTRUSION ATTEMPT DETECTED”.
|
||||
|
||||

|
||||
|
||||
Now, if you do the same steps but this time select Option 2 (Manual Configuration), you will see some extra options.
|
||||
|
||||

|
||||
|
||||
Do the same steps but select port 22 this time (the SSH port). Then set up port forwarding in your home router to forward external port 22 to this machine’s port 22, as sketched below. Alternatively, set it up on a VPS at your cloud provider.
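If the box in front of your honeypot happens to be a Linux machine rather than a consumer router, the forwarding step can be sketched with iptables roughly like this (the addresses are placeholders for this example; most home routers expose the same thing through their web UI instead):

```
# Forward incoming TCP port 22 to the honeypot machine (example IP)
iptables -t nat -A PREROUTING -p tcp --dport 22 -j DNAT --to-destination 192.168.160.128:22
# Let the forwarded traffic through and rewrite the return path
iptables -A FORWARD -p tcp -d 192.168.160.128 --dport 22 -j ACCEPT
iptables -t nat -A POSTROUTING -j MASQUERADE
```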
|
||||
|
||||
You’d be amazed how many bots out there scanning port SSH continuously. You know what you do then? You try to hack them back for the lulz!
|
||||
|
||||
Here’s a video of setting up honeypot if video is your thing:
|
||||
|
||||
<https://youtu.be/NufOMiktplA>
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.blackmoreops.com/2016/05/06/setup-honeypot-in-kali-linux/
|
||||
|
||||
作者:[blackmoreops.com][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: blackmoreops.com
|
@ -1,4 +1,4 @@
|
||||
[Translating by itsang]
|
||||
[Translating by bestony]
|
||||
4 Container Networking Tools to Know
|
||||
=======================================
|
||||
|
||||
|
@ -1,146 +0,0 @@
|
||||
translating by hkurj
|
||||
Basic Linux Networking Commands You Should Know
|
||||
==================================================
|
||||
|
||||

|
||||
|
||||
Brief: A collection of the most important yet basic Linux networking commands that an aspiring Linux SysAdmin and Linux enthusiasts must know.
|
||||
|
||||
It’s not every day at It’s FOSS that we talk about the “command line side” of Linux. Basically, I focus more on the desktop side of Linux. But some of you readers pointed out in the internal survey (exclusive for It’s FOSS newsletter subscribers) that you would like to learn some command line tricks as well. Cheat sheets were also liked and encouraged by most readers.
|
||||
|
||||
For this purpose, I have compiled a list of the basic networking commands in Linux. It’s not a tutorial that teaches you how to use these commands, rather, it’s a collection of commands and their short explanation. So if you already have some experience with these commands, you can use it for quickly remembering the commands.
|
||||
|
||||
You can bookmark this page for quick reference or even download all the commands in PDF for offline access.
|
||||
|
||||
I had this list of Linux networking commands when I was a student of Communication System Engineering. It helped me to get the top score in Computer Networks course. I hope it helps you in the same way.
|
||||
|
||||
>Exclusive bonus: [Download Linux Networking Commands Cheat Sheet][1] for future reference. You can print it or save it for offline viewing.
|
||||
|
||||
### List of basic networking commands in Linux
|
||||
|
||||
I used FreeBSD in the computer networking course but the UNIX commands should work the same in Linux also.
|
||||
|
||||
#### Connectivity:
|
||||
|
||||
- ping <host> —- sends an ICMP echo message (one packet) to a host. This may go continually until you hit Control-C. Ping means a packet was sent from your machine via ICMP, and echoed at the IP level. ping tells you if the other Host is Up.
|
||||
|
||||
- telnet <host> <port> —- talk to the host at the given port number. By default, the telnet port is port 23. A few other famous ports are:
|
||||
```
|
||||
7 – echo port,
|
||||
25 – SMTP, use to send mail
|
||||
79 – Finger, provides information on other users of the network
|
||||
```
|
||||
|
||||
Use control-] to get out of telnet.
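A quick illustration of the ping and telnet commands above (example.com is just a placeholder host):

```
# Does the host answer ICMP echo requests? Stop with Ctrl-C.
ping example.com

# Talk to the SMTP service on port 25; Ctrl-] followed by 'quit' leaves telnet.
telnet example.com 25
```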
|
||||
|
||||
#### Arp:
|
||||
|
||||
Arp is used to translate IP addresses into Ethernet addresses. Root can add and delete arp entries. Deleting them can be useful if an arp entry is malformed or just wrong. Arp entries explicitly added by root are permanent — they can also be by proxy. The arp table is stored in the kernel and manipulated dynamically. Arp entries are cached and will time out and are deleted normally in 20 minutes.
|
||||
|
||||
- arp –a : Prints the arp table
|
||||
- arp –s <ip_address> <mac_address> [pub] to add an entry in the table
|
||||
- arp –a –d to delete all the entries in the ARP table
|
||||
|
||||
#### Routing:
|
||||
|
||||
- netstat –r —- Print routing tables. The routing tables are stored in the kernel and used by ip to route packets to non-local networks.
|
||||
- route add —- The route command is used for setting a static (non-dynamic by hand route) route path in the route tables. All the traffic from this PC to that IP/SubNet will go through the given Gateway IP. It can also be used for setting a default route; i.e., send all packets to a particular gateway, by using 0.0.0.0 in the pace of IP/SubNet.
|
||||
- routed —– The BSD daemon that does dynamic routing. Started at boot. This runs the RIP routing protocol. ROOT ONLY. You won’t be able to run this without root access.
|
||||
- gated —– Gated is an alternative routing daemon to RIP. It uses the OSPF, EGP, and RIP protocols in one place. ROOT ONLY.
|
||||
- traceroute —- Useful for tracing the route of IP packets. The packet causes messages to be sent back from all gateways in between the source and destination by increasing the number of hopes by 1 each time.
|
||||
- netstat –rnf inet : it displays the routing tables of IPv4
|
||||
- sysctl net.inet.ip.forwarding=1 : to enable packets forwarding (to turn a host into a router)
|
||||
- route add|delete [-net|-host] <destination> <gateway> (ex. route add 192.168.20.0/24 192.168.30.4) to add a route
|
||||
- route flush : it removes all the routes
|
||||
- route add -net 0.0.0.0 192.168.10.2 : to add a default route
|
||||
- routed -Pripv2 –Pno_rdisc –d [-s|-q] to execute routed daemon with RIPv2 protocol, without ICMP auto-discovery, in foreground, in supply or in quiet mode
|
||||
- route add 224.0.0.0/4 127.0.0.1 : it defines the route used from RIPv2
|
||||
- rtquery –n : to query the RIP daemon on a specific host (manually update the routing table)
|
||||
|
||||
#### Others:
|
||||
|
||||
- nslookup —- Makes queries to the DNS server to translate IP to a name, or vice versa. eg. nslookup facebook.com will gives you the IP of facebook.com
|
||||
- ftp <host> —– Transfer files to the host. Often you can use login=“anonymous”, p/w=“guest”
|
||||
- rlogin -l —– Logs into the host with a virtual terminal like telnet
|
||||
|
||||
#### Important Files:
|
||||
|
||||
```
|
||||
/etc/hosts —- names to ip addresses
|
||||
/etc/networks —- network names to ip addresses
|
||||
/etc/protocols —– protocol names to protocol numbers
|
||||
/etc/services —- tcp/udp service names to port numbers
|
||||
```
|
||||
|
||||
#### Tools and network performance analysis
|
||||
|
||||
- ifconfig <interface> <address> [up] : start the interface
|
||||
- ifconfig <interface> [down|delete] : stop the interface
|
||||
- ethereal & : it allows you open ethereal background not foreground
|
||||
- tcpdump –i -vvv : tool to capture and analyze packets
|
||||
- netstat –w [seconds] –I [interface] : display network settings and statistics
|
||||
- udpmt –p [port] –s [bytes] target_host : it creates UDP traffic
|
||||
- udptarget –p [port] : it’s able to receive UDP traffic
|
||||
- tcpmt –p [port] –s [bytes] target_host : it creates TCP traffic
|
||||
- tcptarget –p [port] it’s able to receive TCP traffic
|
||||
- ifconfig netmask [up] : it allows to subnet the sub-networks
|
||||
|
||||
|
||||
|
||||
#### Switching:
|
||||
|
||||
- ifconfig sl0 srcIP dstIP : configure a serial interface (do “slattach –l /dev/ttyd0” before, and “sysctl net.inet.ip.forwarding=1“ after)
|
||||
- telnet 192.168.0.254 : to access the switch from a host in its subnetwork
|
||||
- sh ru or show running-configuration : to see the current configurations
|
||||
- configure terminal : to enter in configuration mode
|
||||
- exit : in order to go to the lower configuration mode
|
||||
|
||||
#### VLAN:
|
||||
|
||||
- vlan n : it creates a VLAN with ID n
|
||||
- no vlan N : it deletes the VLAN with ID N
|
||||
- untagged Y : it adds the port Y to the VLAN N
|
||||
- ifconfig vlan0 create : it creates vlan0 interface
|
||||
- ifconfig vlan0 vlan ID vlandev em0 : it associates vlan0 interface on top of em0, and set the tags to ID
|
||||
- ifconfig vlan0 [up] : to turn on the virtual interface
|
||||
- tagged Y : it adds to the port Y the support of tagged frames for the current VLAN
|
||||
|
||||
#### UDP/TCP
|
||||
|
||||
- socklab udp – it executes socklab with udp protocol
|
||||
- sock – it creates a udp socket, it’s equivalent to type sock udp and bind
|
||||
- sendto <Socket ID> <hostname> <port #> – emission of data packets
|
||||
- recvfrom <Socket ID> <byte #> – it receives data from socket
|
||||
- socklab tcp – it executes socklab with tcp protocol
|
||||
- passive – it creates a socket in passive mode, it’s equivalent to socklab, sock tcp, bind, listen
|
||||
- accept – it accepts an incoming connection (it can be done before or after creating the incoming connection)
|
||||
- connect <hostname> <port #> – these two commands are equivalent to socklab, sock tcp, bind, connect
|
||||
- close – it closes the connection
|
||||
- read <byte #> – to read bytes on the socket
|
||||
- write (ex. write ciao, ex. write #10) to write “ciao” or to write 10 bytes on the socket
|
||||
|
||||
#### NAT/Firewall
|
||||
|
||||
- rm /etc/resolv.conf – it prevent address resolution and make sure your filtering and firewall rules works properly
|
||||
- ipnat –f file_name – it writes filtering rules into file_name
|
||||
- ipnat –l – it gives the list of active rules
|
||||
- ipnat –C –F – it re-initialize the rules table
|
||||
- map em0 192.168.1.0/24 -> 195.221.227.57/32 em0 : mapping IP addresses to the interface
|
||||
- map em0 192.168.1.0/24 -> 195.221.227.57/32 portmap tcp/udp 20000:50000 : mapping with port
|
||||
- ipf –f file_name : it writes filtering rules into file_name
|
||||
- ipf –F –a : it resets the rule table
|
||||
- ipfstat –I : it grants access to a few information on filtered packets, as well as active filtering rules
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/basic-linux-networking-commands/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[1]: https://drive.google.com/open?id=0By49_3Av9sT1cDdaZnh4cHB4aEk
|
@ -1,4 +1,4 @@
|
||||
translated by pspkforever
|
||||
translated by pspkforever,now Firstadream
|
||||
DOCKER DATACENTER IN AWS AND AZURE IN A FEW CLICKS
|
||||
===================================================
|
||||
|
||||
|
@ -1,47 +0,0 @@
|
||||
Translating by 19761332
|
||||
DAISY : A Linux-compatible text format for the visually impaired
|
||||
=================================================================
|
||||
|
||||
|
||||

|
||||
>Image by Kate Ter Haar. Modified by opensource.com. CC BY-SA 2.0.
|
||||
|
||||
If you're blind or visually impaired like I am, you usually require various levels of hardware or software to do things that people who can see take for granted. One among these is specialized formats for reading print books: Braille (if you know how to read it) or specialized text formats such as DAISY.
|
||||
|
||||
### What is DAISY?
|
||||
|
||||
DAISY stands for Digital Accessible Information System. It's an open standard used almost exclusively by the blind to read textbooks, periodicals, newspapers, fiction, you name it. It was founded in the mid '90s by [The DAISY Consortium][1], a group of organizations dedicated to producing a set of standards that would allow text to be marked up in a way that would make it easy to read, skip around in, annotate, and otherwise manipulate text in much the same way a sighted user would.
|
||||
|
||||
The current version, DAISY 3.0, was released in mid-2005 and is a complete rewrite of the standard. It was created with the goal of making it much easier to write books complying with it. It's worth noting that DAISY can support plain text only, audio recordings (in PCM Wave or MPEG Layer III format) only, or a combination of text and audio. Specialized software can read these books and allow users to set bookmarks and navigate a book as easily as a sighted person would with a print book.
|
||||
|
||||
### How does DAISY work?
|
||||
|
||||
DAISY, regardless of the specific version, works a bit like this: You have your main navigation file (ncc.html in DAISY 2.02) that contains metadata about the book, such as author's name, copyright date, how many pages the book has, etc. This file is a valid XML document in the case of DAISY 3.0, with DTD (document type definition) files being highly recommended to be included with each book.
|
||||
|
||||
In the navigation control file is markup describing precise positions—either text caret offsets in the case of text navigation or time down to the millisecond in the case of audio recordings—that allows the software to skip to that exact point in the book much as a sighted person would turn to a chapter page. It's worth noting that this navigation control file only contains positions for the main, and largest, elements of a book.
|
||||
|
||||
The smaller elements are handled by SMIL (synchronized multimedia integration language) files. These files contain position points for each chapter in the book. The level of navigation depends heavily on how well the book was marked up. Think of it like this: If a print book has no chapter headings, you will have a hard time figuring out which chapter you're in. If a DAISY book is badly marked up, you might only be able to navigate to the start of the book, or possibly only to the table of contents. If a book is marked up badly enough (or missing markup entirely), your DAISY reading software is likely to simply ignore it.
|
||||
|
||||
### Why the need for specialized software?
|
||||
|
||||
You may be wondering why, if DAISY is little more than HTML, XML, and audio files, you would need specialized software to read and manipulate it. Technically speaking, you don't. The specialized software is mostly for convenience. In Linux, for example, a simple web browser can be used to open the books and read them. If you click on the XML file in a DAISY 3 book, all the software will generally do is read the spines of the books you give it access to and create a list of them that you click on to open. If a book is badly marked up, it won't show up in this list.
|
||||
|
||||
Producing DAISY is another matter entirely, and usually requires either specialized software or enough knowledge of the specifications to modify general-purpose software to parse it.
|
||||
|
||||
### Conclusion
|
||||
|
||||
Fortunately, DAISY is a dying standard. While it is very good at what it does, the need for specialized software to produce it has set us apart from the normal sighted world, where readers use a variety of formats to read their books electronically. This is why the DAISY consortium has succeeded DAISY with EPUB, version 3, which supports what are called media overlays. This is basically an EPUB book with optional audio or video. Since EPUB shares a lot of DAISY's XML markup, some software that can read DAISY can see EPUB books but usually cannot read them. This means that once the websites that provide books for us switch over to this open format, we will have a much larger selection of software to read our books.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/life/16/5/daisy-linux-compatible-text-format-visually-impaired
|
||||
|
||||
作者:[Kendell Clark][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/kendell-clark
|
||||
[1]: http://www.daisy.org
|
@ -1,11 +1,16 @@
|
||||
Translating by wikiios
|
||||
Building a data science portfolio: Machine learning project
|
||||
构建数据科学作品集:机器学习项目
|
||||
===========================================================
|
||||
|
||||
>This is the third in a series of posts on how to build a Data Science Portfolio. If you like this and want to know when the next post in the series is released, you can [subscribe at the bottom of the page][1].
|
||||
>这是《如何构建数据科学作品集》系列文章的第三篇。如果你喜欢本文并且想知道本系列下一篇文章的发布时间,可以[在本文底部订阅][1]。
|
||||
|
||||
Data science companies are increasingly looking at portfolios when making hiring decisions. One of the reasons for this is that a portfolio is the best way to judge someone’s real-world skills. The good news for you is that a portfolio is entirely within your control. If you put some work in, you can make a great portfolio that companies are impressed by.
|
||||
数据科学公司在做出招聘决定时,越来越看重应聘者的作品集。原因之一是,作品集是评判一个人真实能力的最好方式。对你来说,好消息是作品集完全在你的掌控之中。只要花些心思,你就可以做出一个令公司印象深刻的作品集。
|
||||
|
||||
The first step in making a high-quality portfolio is to know what skills to demonstrate. The primary skills that companies want in data scientists, and thus the primary skills they want a portfolio to demonstrate, are:
|
||||
做出高质量作品集的第一步是了解应该展示哪些技能。公司希望数据科学家具备的主要技能,也就是他们希望作品集能够展示的主要技能,包括:
|
||||
|
||||
- Ability to communicate
|
||||
- Ability to collaborate with others
|
||||
|
@ -1,3 +1,6 @@
|
||||
|
||||
###################Translating by messon007###################
|
||||
|
||||
Going Serverless with AWS Lambda and API Gateway
|
||||
============================
|
||||
|
||||
|
@ -1,265 +0,0 @@
|
||||
Being translated by ChrisLeeGit
|
||||
How to Monitor Docker Containers using Grafana on Ubuntu
|
||||
================================================================================
|
||||
|
||||
Grafana is an open source, feature-rich metrics dashboard. It is very useful for visualizing large-scale measurement data. It provides a powerful and elegant way to create, share, and explore data and dashboards from your disparate metric databases.
|
||||
|
||||
It supports a wide variety of graphing options for ultimate flexibility. Furthermore, it supports many different storage backends for your Data Source. Each Data Source has a specific Query Editor that is customized for the features and capabilities that the particular Data Source exposes. The following datasources are officially supported by Grafana: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch and Cloudwatch
|
||||
|
||||
The query language and capabilities of each Data Source are obviously very different. You can combine data from multiple Data Sources onto a single Dashboard, but each Panel is tied to a specific Data Source that belongs to a particular Organization. It supports authenticated login and a basic role based access control implementation. It is deployed as a single software installation which is written in Go and Javascript.
|
||||
|
||||
In this article, I'll explain how to install Grafana in a Docker container on Ubuntu 16.04 and configure Docker monitoring using this software.
|
||||
|
||||
### Pre-requisites ###
|
||||
|
||||
- Docker installed server
|
||||
|
||||
### Installing Grafana ###
|
||||
|
||||
We can run Grafana in a Docker container. There is an official Docker image available for Grafana. Please run this command to build a Grafana container.
|
||||
|
||||
```
|
||||
root@ubuntu:~# docker run -i -p 3000:3000 grafana/grafana
|
||||
|
||||
Unable to find image 'grafana/grafana:latest' locally
|
||||
latest: Pulling from grafana/grafana
|
||||
5c90d4a2d1a8: Pull complete
|
||||
b1a9a0b6158e: Pull complete
|
||||
acb23b0d58de: Pull complete
|
||||
Digest: sha256:34ca2f9c7986cb2d115eea373083f7150a2b9b753210546d14477e2276074ae1
|
||||
Status: Downloaded newer image for grafana/grafana:latest
|
||||
t=2016-07-27T15:20:19+0000 lvl=info msg="Starting Grafana" logger=main version=3.1.0 commit=v3.1.0 compiled=2016-07-12T06:42:28+0000
|
||||
t=2016-07-27T15:20:19+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
|
||||
t=2016-07-27T15:20:19+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini
|
||||
t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.data=/var/lib/grafana"
|
||||
t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.logs=/var/log/grafana"
|
||||
t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana/plugins"
|
||||
t=2016-07-27T15:20:19+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana
|
||||
t=2016-07-27T15:20:19+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana
|
||||
t=2016-07-27T15:20:19+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
|
||||
t=2016-07-27T15:20:19+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins
|
||||
t=2016-07-27T15:20:19+0000 lvl=info msg="Initializing DB" logger=sqlstore dbtype=sqlite3
|
||||
|
||||
t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist table v2"
|
||||
t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist item table v2"
|
||||
t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v2"
|
||||
t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v3"
|
||||
t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create preferences table v3"
|
||||
t=2016-07-27T15:20:20+0000 lvl=info msg="Created default admin user: [admin]"
|
||||
t=2016-07-27T15:20:20+0000 lvl=info msg="Starting plugin search" logger=plugins
|
||||
t=2016-07-27T15:20:20+0000 lvl=info msg="Server Listening" logger=server address=0.0.0.0:3000 protocol=http subUrl=
|
||||
```
|
||||
|
||||
We can confirm that the Grafana container is working by running `docker ps -a` or by accessing it at the URL `http://<Docker IP>:3000`.
|
||||
|
||||
All Grafana configuration settings can also be defined using environment variables, which is very useful when using container technology. The Grafana configuration file is located at /etc/grafana/grafana.ini.
|
||||
|
||||
### Understanding the Configuration ###
|
||||
|
||||
Grafana has a number of configuration options that can be specified in its .ini configuration file or, as mentioned before, using environment variables.
|
||||
|
||||
#### Config file locations ####
|
||||
|
||||
Normal config file locations.
|
||||
|
||||
- Default configuration from : $WORKING_DIR/conf/defaults.ini
|
||||
- Custom configuration from : $WORKING_DIR/conf/custom.ini
|
||||
|
||||
PS : When you install Grafana using the deb or rpm packages or docker images, then your configuration file is located at /etc/grafana/grafana.ini
|
||||
|
||||
#### Understanding the config variables ####
|
||||
|
||||
Let's see some of the variables in the configuration file below:
|
||||
|
||||
`instance_name` : It's the name of the Grafana server instance. Its default value is fetched from ${HOSTNAME}, which will be replaced with the environment variable HOSTNAME; if that is empty or does not exist, Grafana will try to use system calls to get the machine name.
|
||||
|
||||
`[paths]`
|
||||
|
||||
`data` : It's the path where Grafana stores the sqlite3 database (when used), file based sessions (when used), and other data.
|
||||
|
||||
`logs` : It's where Grafana stores the logs.
|
||||
|
||||
Both these paths are usually specified via command line in the init.d scripts or the systemd service file.
|
||||
|
||||
`[server]`
|
||||
|
||||
`http_addr` : The IP address to bind the application. If it's left empty it will bind to all interfaces.
|
||||
|
||||
`http_port` : The port the application binds to; the default is 3000. You can redirect port 80 to 3000 using the command below.
|
||||
|
||||
```
|
||||
$iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
|
||||
```
|
||||
|
||||
`root_url` : This is the URL used to access Grafana from a web browser.
|
||||
|
||||
`cert_file` : Path to the certificate file (if protocol is set to https).
|
||||
|
||||
`cert_key` : Path to the certificate key file (if protocol is set to https).
|
||||
|
||||
`[database]`
|
||||
|
||||
Grafana uses a database to store its users and dashboards and other informations. By default it is configured to use sqlite3 which is an embedded database included in the main Grafana binary.
|
||||
|
||||
`type`
|
||||
You can choose mysql, postgres or sqlite3 as per our requirement.
|
||||
|
||||
`path`
|
||||
It's applicable only for sqlite3 database. The file path where the database will be stored.
|
||||
|
||||
`host`
|
||||
It's applicable only to MySQL or Postgres. It includes the IP address or hostname and port. For example, for MySQL running on the same host as Grafana: host = 127.0.0.1:3306
|
||||
|
||||
`name`
|
||||
The name of the Grafana database. Leave it set to grafana or some other name.
|
||||
|
||||
`user`
|
||||
The database user (not applicable for sqlite3).
|
||||
|
||||
`password`
|
||||
The database user's password (not applicable for sqlite3).
|
||||
|
||||
`ssl_mode`
|
||||
For Postgres, use either disable, require or verify-full. For MySQL, use either true, false, or skip-verify.
|
||||
|
||||
`ca_cert_path`
|
||||
(MySQL only) The path to the CA certificate to use. On many linux systems, certs can be found in /etc/ssl/certs.
|
||||
|
||||
`client_key_path`
|
||||
(MySQL only) The path to the client key. Only if server requires client authentication.
|
||||
|
||||
`client_cert_path`
|
||||
(MySQL only) The path to the client cert. Only if server requires client authentication.
|
||||
|
||||
`server_cert_name`
|
||||
(MySQL only) The common name field of the certificate used by the mysql server. Not necessary if ssl_mode is set to skip-verify.
|
||||
|
||||
`[security]`
|
||||
|
||||
`admin_user` : It is the name of the default Grafana admin user. The default name set is admin.
|
||||
|
||||
`admin_password` : It is the password of the default Grafana admin. It is set on first-run. The default password is admin.
|
||||
|
||||
`login_remember_days` : The number of days the keep me logged in / remember me cookie lasts.
|
||||
|
||||
`secret_key` : It is used for signing keep me logged in / remember me cookies.
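Since the article runs Grafana in a container, here is a sketch of overriding a few of the settings above through environment variables at startup (the variable names follow Grafana's GF_<section>_<key> convention; the values and hostname are placeholders):

```
docker run -d -p 3000:3000 \
  -e "GF_SECURITY_ADMIN_USER=admin" \
  -e "GF_SECURITY_ADMIN_PASSWORD=changeme" \
  -e "GF_SERVER_ROOT_URL=http://grafana.example.com/" \
  --name grafana grafana/grafana
```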
|
||||
|
||||
### Essentials components for setting up Monitoring ###
|
||||
|
||||
We use the below components to create our Docker Monitoring system.
|
||||
|
||||
`cAdvisor` : It is otherwise called Container Advisor. It provides its users an understanding of the resource usage and performance characteristics. It collects, aggregates, processes and exports information about the running containers. You can go through this documentation for more information about this.
|
||||
|
||||
`InfluxDB` : It is a time series, metrics, and analytic database. We use this datasource for setting up our monitoring. cAdvisor displays only real time information and doesn’t store the metrics. Influx Db helps to store the monitoring information which cAdvisor provides in order to display a time range other than real time.
|
||||
|
||||
`Grafana Dashboard` : It allows us to combine all the pieces of information together visually. This powerful Dashboard allows us to run queries against the data store InfluxDB and chart them accordingly in beautiful layout.
|
||||
|
||||
### Installation of Docker Monitoring ###
|
||||
|
||||
We need to install each of these components one by one in our docker system.
|
||||
|
||||
#### Installing InfluxDB ####
|
||||
|
||||
We can use this command to pull the InfluxDB image and set up an InfluxDB container.
|
||||
|
||||
```
|
||||
root@ubuntu:~# docker run -d -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 -e PRE_CREATE_DB=cadvisor --name influxsrv tutum/influxdb:0.8.8
|
||||
Unable to find image 'tutum/influxdb:0.8.8' locally
|
||||
0.8.8: Pulling from tutum/influxdb
|
||||
a3ed95caeb02: Already exists
|
||||
23efb549476f: Already exists
|
||||
aa2f8df21433: Already exists
|
||||
ef072d3c9b41: Already exists
|
||||
c9f371853f28: Already exists
|
||||
a248b0871c3c: Already exists
|
||||
749db6d368d0: Already exists
|
||||
7d7c7d923e63: Pull complete
|
||||
e47cc7808961: Pull complete
|
||||
1743b6eeb23f: Pull complete
|
||||
Digest: sha256:8494b31289b4dbc1d5b444e344ab1dda3e18b07f80517c3f9aae7d18133c0c42
|
||||
Status: Downloaded newer image for tutum/influxdb:0.8.8
|
||||
d3b6f7789e0d1d01fa4e0aacdb636c221421107d1df96808ecbe8e241ceb1823
|
||||
|
||||
-p 8083:8083 : user interface, log in with username-admin, pass-admin
|
||||
-p 8086:8086 : interaction with other application
|
||||
--name influxsrv : container have name influxsrv, use to cAdvisor link it.
|
||||
```
|
||||
|
||||
You can test your InfluxDB installation by visiting the URL http://45.79.148.234:8083 and logging in with "root" as both the username and password.
|
||||
|
||||

|
||||
|
||||
We can create our required databases from this tab.
|
||||
|
||||

|
||||
|
||||
#### Installing cAdvisor ####
|
||||
|
||||
Our next step is to install cAdvisor container and link it to the InfluxDB container. You can use this command to create it.
|
||||
|
||||
```
|
||||
root@ubuntu:~# docker run --volume=/:/rootfs:ro --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --detach=true --link influxsrv:influxsrv --name=cadvisor google/cadvisor:latest -storage_driver_db=cadvisor -storage_driver_host=influxsrv:8086
|
||||
Unable to find image 'google/cadvisor:latest' locally
|
||||
latest: Pulling from google/cadvisor
|
||||
09d0220f4043: Pull complete
|
||||
151807d34af9: Pull complete
|
||||
14cd28dce332: Pull complete
|
||||
Digest: sha256:8364c7ab7f56a087b757a304f9376c3527c8c60c848f82b66dd728980222bd2f
|
||||
Status: Downloaded newer image for google/cadvisor:latest
|
||||
3bfdf7fdc83872485acb06666a686719983a1172ac49895cd2a260deb1cdde29
|
||||
root@ubuntu:~#
|
||||
|
||||
--publish=8080:8080 : user interface
|
||||
--link=influxsrv:influxsrv: link to container influxsrv
|
||||
-storage_driver=influxdb: set the storage driver as InfluxDB
|
||||
Specify what InfluxDB instance to push data to:
|
||||
-storage_driver_host=influxsrv:8086: The ip:port of the database. Default is ‘localhost:8086’
|
||||
-storage_driver_db=cadvisor: database name. Uses db ‘cadvisor’ by default
|
||||
```
|
||||
|
||||
You can test the cAdvisor installation by visiting the URL http://45.79.148.234:8080. This will provide you with the statistics of your Docker host and containers.
|
||||
|
||||

|
||||
|
||||
#### Installing the Grafana Dashboard ####
|
||||
|
||||
Finally, we need to install the Grafana Dashboard and link to the InfluxDB. You can run this command to setup that.
|
||||
|
||||
```
|
||||
root@ubuntu:~# docker run -d -p 3000:3000 -e INFLUXDB_HOST=localhost -e INFLUXDB_PORT=8086 -e INFLUXDB_NAME=cadvisor -e INFLUXDB_USER=root -e INFLUXDB_PASS=root --link influxsrv:influxsrv --name grafana grafana/grafana
|
||||
f3b7598529202b110e4e6b998dca6b6e60e8608d75dcfe0d2b09ae408f43684a
|
||||
```
|
||||
|
||||
Now we can log in to Grafana and configure the data sources. Navigate to http://45.79.148.234:3000, or just http://45.79.148.234 if you redirected port 80 earlier:
|
||||
|
||||
Username - admin
|
||||
Password - admin
|
||||
|
||||
Once we've installed Grafana, we can connect it to InfluxDB. Log in to the dashboard and click on the Grafana icon (the fireball) in the upper left-hand corner of the panel. Click on Data Sources to configure one.
|
||||
|
||||

|
||||
|
||||
Now you can add our new Graph to our default Datasource InfluxDB.
|
||||
|
||||

|
||||
|
||||
We can edit and modify our query by adjusting our graph at Metric tab.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
You can find [more information][1] on Docker monitoring here. Thank you for reading. Please leave your valuable comments and suggestions below. Have a wonderful day!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/monitor-docker-containers-grafana-ubuntu/
|
||||
|
||||
作者:[Saheetha Shameer][a]
|
||||
译者:[ChrisLeeGit](https://github.com/chrisleegit)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/saheethas/
|
||||
[1]:https://github.com/vegasbrianc/docker-monitoring
|
@ -1,215 +0,0 @@
|
||||
NearTan 认领
|
||||
|
||||
Journey to HTTP/2
|
||||
===================
|
||||
|
||||
It has been quite some time since I last wrote on my blog, the reason being that I couldn't find enough time to put into it. I finally got some time today and thought I'd put some of it into writing about HTTP.
|
||||
|
||||
HTTP is the protocol that every web developer should know as it powers the whole web and knowing it is definitely going to help you develop better applications.
|
||||
|
||||
In this article, I am going to be discussing what HTTP is, how it came to be, where it is today and how did we get here.
|
||||
|
||||
### What is HTTP?
|
||||
|
||||
First things first, what is HTTP? HTTP is a TCP/IP-based application layer communication protocol which standardizes how clients and servers communicate with each other. It defines how content is requested and transmitted across the internet. By application layer protocol, I mean it's just an abstraction layer that standardizes how hosts (clients and servers) communicate; HTTP itself depends on TCP/IP to get requests and responses between the client and the server. By default TCP port 80 is used, but other ports can be used as well. HTTPS, however, uses port 443.
|
||||
|
||||
#### HTTP/0.9 - The One Liner (1991)
|
||||
|
||||
The first documented version of HTTP was HTTP/0.9, put forward in 1991. It was the simplest protocol imaginable, with a single method called GET. If a client wanted to access some webpage on the server, it would make a simple request like the one below:
|
||||
|
||||
```
|
||||
GET /index.html
|
||||
```
|
||||
|
||||
And the response from server would have looked as follows
|
||||
|
||||
```
|
||||
(response body)
|
||||
(connection closed)
|
||||
```
|
||||
|
||||
That is, the server would get the request, reply with the HTML in response, and as soon as the content had been transferred, the connection would be closed. There were:
|
||||
|
||||
- No headers
|
||||
- GET was the only allowed method
|
||||
- Response had to be HTML
|
||||
|
||||
As you can see, the protocol was really nothing more than a stepping stone for what was to come.
|
||||
|
||||
#### HTTP/1.0 - 1996
|
||||
|
||||
In 1996, the next version of HTTP, HTTP/1.0, arrived, vastly improving on the original version.
|
||||
|
||||
Unlike HTTP/0.9, which was designed only for HTML responses, HTTP/1.0 could deal with other response formats as well, i.e. images, video files, plain text or any other content type. It added more methods (i.e. POST and HEAD), the request/response formats changed, HTTP headers were added to both requests and responses, status codes were added to identify the response, character-set support was introduced, and multi-part types, authorization, caching, content encoding and more were included.
|
||||
|
||||
Here is how a sample HTTP/1.0 request and response might have looked:
|
||||
|
||||
```
|
||||
GET / HTTP/1.0
|
||||
Host: kamranahmed.info
|
||||
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5)
|
||||
Accept: */*
|
||||
```
|
||||
|
||||
As you can see, alongside the request, the client also sent its own information, the required response type, etc. In HTTP/0.9 the client could never send such information, because there were no headers.
|
||||
|
||||
An example response to the request above may have looked like this:
|
||||
|
||||
```
|
||||
HTTP/1.0 200 OK
|
||||
Content-Type: text/plain
|
||||
Content-Length: 137582
|
||||
Expires: Thu, 05 Dec 1997 16:00:00 GMT
|
||||
Last-Modified: Wed, 5 August 1996 15:55:28 GMT
|
||||
Server: Apache 0.84
|
||||
|
||||
(response body)
|
||||
(connection closed)
|
||||
```
|
||||
|
||||
In the very beginning of the response there is HTTP/1.0 (HTTP followed by the version number), then there is the status code 200 followed by the reason phrase (or description of the status code, if you will).
|
||||
|
||||
In this newer version, request and response headers were still ASCII encoded, but the response body could be of any type, i.e. image, video, HTML, plain text or any other content type. So, now that the server could send any content type to the client, not long after its introduction the term "Hyper Text" in HTTP became a misnomer. HMTP, or hypermedia transfer protocol, might have made more sense, but I guess we are stuck with the name for life.
|
||||
|
||||
One of the major drawbacks of HTTP/1.0 was that you couldn't have multiple requests per connection. That is, whenever a client needed something from the server, it had to open a new TCP connection, and after that single request had been fulfilled, the connection was closed. Any further request had to go over a new connection. Why is that bad? Well, assume you visit a webpage with 10 images, 5 stylesheets and 5 JavaScript files, totalling 20 items that need to be fetched when the page is requested. Since the server closes the connection as soon as each request has been fulfilled, there will be a series of 20 separate connections, where each of the items is served one by one on its own connection. This large number of connections results in a serious performance hit, because each new TCP connection imposes a significant penalty in the form of a three-way handshake followed by slow-start.
|
||||
|
||||
### Three-way Handshake
|
||||
|
||||
In its simplest form, the three-way handshake means that every TCP connection begins with the client and the server exchanging a series of packets before any application data is shared.
|
||||
|
||||
- SYN - The client picks a random number, let's say x, and sends it to the server.
|
||||
- SYN ACK - The server acknowledges the request by sending a SYN-ACK packet back to the client, which contains a random number of its own, let's say y, and the number x+1, where x is the number sent by the client.
|
||||
- ACK - The client increments the number y received from the server and sends an ACK packet back with the number y+1.
|
||||
|
||||
Once the three-way handshake is completed, data sharing between the client and server may begin. Note that the client may start sending application data as soon as it dispatches the last ACK packet, but the server still has to wait for that ACK packet to be received before it can fulfill the request.
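Put another way, the exchange looks roughly like this (x and y are the random numbers from the list above):

```
Client -> Server : SYN     (seq = x)
Server -> Client : SYN-ACK (seq = y, ack = x + 1)
Client -> Server : ACK     (seq = x + 1, ack = y + 1)
... application data can now flow ...
```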
|
||||
|
||||

|
||||
|
||||
However, some implementations of HTTP/1.0 tried to overcome this issue by introducing a new header called Connection: keep-alive which was meant to tell the server “Hey server, do not close this connection, I need it again”. But still, it wasn’t that widely supported and the problem still persisted.
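For illustration, a request opting into this behaviour would simply carry that header (a sketch, not taken from the original article):

```
GET /index.html HTTP/1.0
Host: kamranahmed.info
Connection: keep-alive
```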
|
||||
|
||||
Apart from being connectionless, HTTP is also a stateless protocol, i.e. the server doesn't maintain information about the client, so each request has to carry all the information the server needs to fulfill it on its own, without any association with previous requests. This adds fuel to the fire: apart from the large number of connections that the client has to open, it also has to send redundant data on the wire, increasing bandwidth usage.
|
||||
|
||||
#### HTTP/1.1 - 1999
|
||||
|
||||
Merely 3 years after HTTP/1.0, the next version, HTTP/1.1, was released in 1999; it made a lot of improvements over its predecessor. The major improvements over HTTP/1.0 included:
|
||||
|
||||
- New HTTP methods were added: PUT, PATCH, OPTIONS and DELETE (HEAD had already arrived in HTTP/1.0)
|
||||
|
||||
- Hostname Identification: In HTTP/1.0 the Host header wasn't required, but HTTP/1.1 made it mandatory.
|
||||
|
||||
- Persistent Connections: As discussed above, in HTTP/1.0 there was only one request per connection, and the connection was closed as soon as the request was fulfilled, which resulted in an acute performance hit and latency problems. HTTP/1.1 introduced persistent connections, i.e. connections weren't closed by default but kept open, allowing multiple sequential requests. To close a connection, the header Connection: close had to be present on the request; clients usually send this header with the last request to safely close the connection.
|
||||
|
||||
- Pipelining: HTTP/1.1 also introduced support for pipelining, where the client could send multiple requests to the server on the same connection without waiting for the responses, and the server had to send the responses back in the same sequence in which the requests were received. But how does the client know where the first response ends and the content of the next one starts, you may ask? To solve this, a Content-Length header must be present, which the client can use to identify where a response ends before it starts waiting for the next one.
|
||||
|
||||
>It should be noted that in order to benefit from persistent connections or pipelining, the Content-Length header must be available on the response, because it lets the client know when the transmission completes, so that it can send the next request (in the normal, sequential way of sending requests) or start waiting for the next response (when pipelining is enabled).
|
||||
|
||||
>But there was still an issue with this approach: what if the data is dynamic and the server cannot determine the content length beforehand? In that case, you really can't benefit from persistent connections, can you? To solve this, HTTP/1.1 introduced chunked encoding: the server may omit Content-Length in favor of chunked encoding (more on it in a moment). However, if neither of them is available, then the connection must be closed at the end of the request.
|
||||
|
||||
- Chunked Transfers: In the case of dynamic content, when the server cannot determine the Content-Length before the transmission starts, it may send the content in pieces (chunk by chunk), prefixing each chunk with its size when it is sent. When all of the chunks have been sent, i.e. the whole transmission has completed, it sends a final empty chunk, one whose size is set to zero, to signal to the client that the transmission is complete. To notify the client about the chunked transfer, the server includes the header Transfer-Encoding: chunked (a short sketch of the wire format appears after this list).
|
||||
|
||||
- Unlike HTTP/1.0 which had Basic authentication only, HTTP/1.1 included digest and proxy authentication
|
||||
- Caching
|
||||
- Byte Ranges
|
||||
- Character sets
|
||||
- Language negotiation
|
||||
- Client cookies
|
||||
- Enhanced compression support
|
||||
- New status codes
|
||||
- ..and more
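To illustrate the chunked transfer item above, here is a minimal sketch of what such a response looks like on the wire: each chunk is preceded by its size in hexadecimal, every line ends with CRLF, and the zero-sized chunk marks the end of the body (the payload here is made up).

```
HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

7
Mozilla
9
Developer
0

```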
|
||||
|
||||
I am not going to dwell on all the HTTP/1.1 features in this post, as it is a topic in itself and you can already find a lot about it. One such document that I would recommend you read is Key differences between HTTP/1.0 and HTTP/1.1, and, for the overachievers, here is the link to the original RFC.
|
||||
|
||||
HTTP/1.1 was introduced in 1999 and was the standard for many years. Although it improved a lot on its predecessor, with the web changing every day it started to show its age. Loading a web page these days is more resource-intensive than ever; a simple webpage may have to open more than 30 connections. But HTTP/1.1 has persistent connections, so why so many connections, you say? The reason is that in HTTP/1.1 a connection can only have one outstanding request at any moment in time. HTTP/1.1 tried to fix this by introducing pipelining, but it didn't completely address the issue because of head-of-line blocking, where a slow or heavy request may block the requests behind it; once a request gets stuck in a pipeline, the following requests have to wait for it to be fulfilled. To overcome these shortcomings of HTTP/1.1, developers started implementing workarounds, for example sprite sheets, images encoded in CSS, single humongous CSS/JavaScript files, domain sharding, etc.
|
||||
|
||||
#### SPDY - 2009
|
||||
|
||||
Google went ahead and started experimenting with alternative protocols to make the web faster and improve web security while reducing the latency of web pages. In 2009, they announced SPDY.
|
||||
|
||||
>SPDY is a trademark of Google and isn’t an acronym.
|
||||
|
||||
It had been observed that if we keep increasing the bandwidth, network performance increases at first, but a point comes where there is not much further gain. Latency behaves differently: if we keep dropping the latency, there is a constant performance gain. This was the core idea behind SPDY's performance gains: decrease the latency to increase network performance.
|
||||
|
||||
>For those who don't know the difference, latency is the delay, i.e. how long it takes for data to travel between the source and the destination (measured in milliseconds), while bandwidth is the amount of data transferred per second (bits per second).
|
||||
|
||||
The features of SPDY included multiplexing, compression, prioritization, security, etc. I am not going to get into the details of SPDY, as you will get the idea when we dig into the nitty-gritty of HTTP/2 in the next section; as I said, HTTP/2 is mostly inspired by SPDY.
|
||||
|
||||
SPDY didn't really try to replace HTTP; it was a translation layer over HTTP which existed at the application layer and modified the request before sending it over the wire. It started to become a de facto standard, and the majority of browsers started implementing it.
|
||||
|
||||
In 2015, Google didn't want to have two competing standards, so they decided to merge SPDY's ideas into HTTP, giving birth to HTTP/2 and deprecating SPDY.
|
||||
|
||||
#### HTTP/2 - 2015
|
||||
|
||||
By now, you should be convinced of why we needed another revision of the HTTP protocol. HTTP/2 was designed for low-latency transport of content. The key features, or differences from the old HTTP/1.1, include:
|
||||
|
||||
- Binary instead of Textual
|
||||
- Multiplexing - Multiple asynchronous HTTP requests over a single connection
|
||||
- Header compression using HPACK
|
||||
- Server Push - Multiple responses for single request
|
||||
- Request Prioritization
|
||||
- Security
|
||||
|
||||

|
||||
|
||||
##### 1. Binary Protocol
|
||||
|
||||
HTTP/2 addresses the issue of increased latency that existed in HTTP/1.x by being a binary protocol. As a binary protocol, it is easier to parse, but unlike HTTP/1.x it is no longer readable by the human eye. The major building blocks of HTTP/2 are frames and streams.
|
||||
|
||||
**Frames and Streams**
|
||||
|
||||
HTTP messages are now composed of one or more frames. There is a HEADERS frame for the metadata and a DATA frame for the payload, and there exist several other frame types (RST_STREAM, SETTINGS, PRIORITY, etc.) that you can check in the HTTP/2 specs.
|
||||
|
||||
Every HTTP/2 request and response is given a unique stream ID, and is divided into frames. Frames are nothing but binary pieces of data. A collection of frames is called a stream. Each frame has a stream ID that identifies the stream to which it belongs, and each frame has a common header. Also, apart from the stream ID being unique, it is worth mentioning that streams initiated by the client use odd-numbered stream IDs, while streams initiated by the server (such as server pushes) use even-numbered ones.
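For reference, the common frame header mentioned above is laid out like this in the HTTP/2 specification (field widths in bits; shown here as a sketch):

```
+-----------------------------------------------+
|                 Length (24)                   |
+---------------+---------------+---------------+
|   Type (8)    |   Flags (8)   |
+-+-------------+---------------+-------------------------------+
|R|                 Stream Identifier (31)                      |
+=+=============================================================+
|                   Frame Payload (0...)                      ...
+---------------------------------------------------------------+
```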
|
||||
|
||||
Apart from HEADERS and DATA, another frame type that I think is worth mentioning here is RST_STREAM, a special frame type used to abort a stream, i.e. the client may send this frame to let the server know that it no longer needs the stream. In HTTP/1.1 the only way to make the server stop sending a response to the client was to close the connection, which increased latency because a new connection had to be opened for any subsequent requests. In HTTP/2, the client can use RST_STREAM to stop receiving a specific stream while the connection stays open and the other streams remain in play.
|
||||
|
||||
##### 2. Multiplexing
|
||||
|
||||
Since HTTP/2 is a binary protocol and, as I said above, uses frames and streams for requests and responses, once a TCP connection is opened all the streams are sent asynchronously through the same connection without opening any additional connections. In turn, the server responds in the same asynchronous way, i.e. the responses have no order, and the client uses the assigned stream ID to identify the stream to which a specific packet belongs. This also solves the head-of-line blocking issue that existed in HTTP/1.x: the client does not have to wait on the request that is taking time, and other requests still get processed.
|
||||
|
||||
##### 3. HPACK Header Compression
|
||||
|
||||
Header compression was part of a separate RFC specifically aimed at optimizing the sent headers. The essence of it is that when we are constantly accessing the server from the same client, there is a lot of redundant data being sent in the headers over and over, and sometimes cookies inflate the header size, which results in increased bandwidth usage and latency. To overcome this, HTTP/2 introduced header compression.
|
||||
|
||||

|
||||
|
||||
Unlike request and response bodies, headers are not compressed with gzip or compress etc.; there is a different mechanism in place for header compression: literal values are encoded using Huffman coding, and a header table is maintained by the client and the server. Both sides omit any repetitive headers (e.g. the user agent) in subsequent requests and reference them using the header table maintained by both.
|
||||
|
||||
While we are talking about headers, let me add here that the headers are still the same as in HTTP/1.1, except for the addition of some pseudo-headers, i.e. :method, :scheme, :authority and :path.
|
||||
|
||||
##### 4. Server Push
|
||||
|
||||
Server push is another tremendous feature of HTTP/2, where the server, knowing that the client is going to ask for a certain resource, can push it to the client without the client even asking for it. For example, say a browser loads a web page: it parses the whole page to find out what remote content it has to load from the server, and then sends subsequent requests to the server to get that content.
|
||||
|
||||
Server push allows the server to reduce round trips by pushing data that it knows the client is going to demand. The way it is done is that the server sends a special frame called PUSH_PROMISE, notifying the client: "Hey, I am about to send this resource to you! Do not ask me for it." The PUSH_PROMISE frame is associated with the stream that caused the push to happen, and it contains the promised stream ID, i.e. the stream on which the server will send the pushed resource.
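As a rough sketch of that exchange (the stream numbers are illustrative only: stream 1 is client-initiated, stream 2 is the even-numbered stream promised by the server):

```
Client: HEADERS (stream 1)         GET /index.html
Server: PUSH_PROMISE (stream 1)    promises stream 2 for /style.css
Server: HEADERS + DATA (stream 1)  response for /index.html
Server: HEADERS + DATA (stream 2)  pushed response for /style.css
```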
|
||||
|
||||
##### 5. Request Prioritization
|
||||
|
||||
A client can assign a priority to a stream by including prioritization information in the HEADERS frame with which the stream is opened. At any other time, the client can send a PRIORITY frame to change the priority of a stream.
|
||||
|
||||
Without any priority information, the server processes the requests asynchronously, i.e. without any order. If a priority is assigned to a stream, then, based on this prioritization information, the server decides how many resources to give to processing each request.
|
||||
|
||||
##### 6. Security
|
||||
|
||||
There was extensive discussion on whether security (through TLS) should be made mandatory for HTTP/2 or not. In the end, it was decided not to make it mandatory. However, most vendors stated that they would only support HTTP/2 when it is used over TLS. So, although HTTP/2 doesn't require encryption by spec, it has kind of become mandatory by default anyway. With that out of the way, HTTP/2, when implemented over TLS, does impose some requirements: TLS version 1.2 or higher must be used, there are minimum key-size requirements, ephemeral keys are required, etc.
|
||||
|
||||
HTTP/2 is here, and it has already surpassed SPDY in adoption, which is gradually increasing. HTTP/2 has a lot to offer in terms of performance gains, and it is about time we started using it.
|
||||
|
||||
For anyone interested in further details, here is the [link to specs][1] and a [link demonstrating the performance benefits of][2] HTTP/2. For any questions or comments, use the comments section below. Also, while reading, if you find any blatant lies, do point them out.
|
||||
|
||||
And that about wraps it up. Until next time, stay tuned!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://kamranahmed.info/blog/2016/08/13/http-in-depth/?utm_source=webopsweekly&utm_medium=email
|
||||
|
||||
作者:[Kamran Ahmed][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://github.com/kamranahmedse
|
||||
|
||||
[1]: https://http2.github.io/http2-spec
|
||||
[2]: http://www.http2demo.io/
|
||||
|
135
sources/tech/20160815 How to manage binary blobs with Git.md
Normal file
@ -0,0 +1,135 @@
|
||||
icecoobe translating
|
||||
|
||||
part 7 - How to manage binary blobs with Git
|
||||
=====================
|
||||
|
||||
|
||||
In the previous six articles in this series we learned how to manage version control on text files with Git. But what about binary files? Git has extensions for handling binary blobs such as multimedia files, so today we will learn how to manage binary assets with Git.
|
||||
|
||||
One thing everyone seems to agree on is Git is not great for big binary blobs. Keep in mind that a binary blob is different from a large text file; you can use Git on large text files without a problem, but Git can't do much with an impervious binary file except treat it as one big solid black box and commit it as-is.
|
||||
|
||||
Say you have a complex 3D model for the exciting new first person puzzle game you're making, and you save it in a binary format, resulting in a 1 gigabyte file. You git commit it once, adding a gigabyte to your repository's history. Later, you give the model a different hair style and commit your update; Git can't tell the hair apart from the head or the rest of the model, so you've just committed another gigabyte. Then you change the model's eye color and commit that small change: another gigabyte. That is three gigabytes for one model with a few minor changes made on a whim. Scale that across all the assets in a game, and you have a serious problem.
|
||||
|
||||
Contrast that to a text file like the .obj format. One commit stores everything, just as with the other model, but an .obj file is a series of lines of plain text describing the vertices of a model. If you modify the model and save it back out to .obj, Git can read the two files line by line, create a diff of the changes, and process a fairly small commit. The more refined the model becomes, the smaller the commits get, and it's a standard Git use case. It is a big file, but it uses a kind of overlay or sparse storage method to build a complete picture of the current state of your data.
|
||||
|
||||
However, not everything works in plain text, and these days everyone wants to work with Git. A solution was required, and several have surfaced.
|
||||
|
||||
[OSTree](https://ostree.readthedocs.io/en/latest/) began as a GNOME project and is intended to manage operating system binaries. It doesn't apply here, so I'll skip it.
|
||||
|
||||
[Git Large File Storage](https://git-lfs.github.com/) (LFS) is an open source project from GitHub that began life as a fork of git-media. [git-media](https://github.com/alebedev/git-media) and [git-annex](https://git-annex.branchable.com/walkthrough/) are extensions to Git meant to manage large files. They are two different approaches to the same problem, and they each have advantages. These aren't official statements from the projects themselves, but in my experience, the unique aspects of each are:
|
||||
|
||||
* git-media is a centralised model, a repository for common assets. You tell git-media where your large files are stored, whether that is a hard drive, a server, or a cloud storage service, and each user on your project treats that location as the central master location for large assets.
|
||||
* git-annex favors a distributed model; you and your users create repositories, and each repository gets a local .git/annex directory where big files are stored. The annexes are synchronized regularly so that all assets are available to all users as needed. Unless configured otherwise with annex-cost, git-annex prefers local storage before off-site storage.
|
||||
|
||||
Of these options, I've used git-media and git-annex in production, so I'll give you an overview of how they each work.
|
||||
|
||||
```
|
||||
git-media
|
||||
```
|
||||
|
||||
git-media uses Ruby, so you must install a gem for it. Instructions are on the [website](https://github.com/alebedev/git-media). Each user who wants to use git-media needs to install it, but it is cross-platform, so that is not a problem.
|
||||
|
||||
After installing git-media, you must set some Git configuration options. You only need to do this once per machine you use:
|
||||
|
||||
```
|
||||
$ git config filter.media.clean "git-media filter-clean"
$ git config filter.media.smudge "git-media filter-smudge"
|
||||
```
|
||||
|
||||
In each repository that you want to use git-media, set an attribute to marry the filters you've just created to the file types you want to classify as media. Don't get confused by the terminology; a better term is "assets," since "media" usually means audio, video, and photos, but you might just as easily classify 3D models, bakes, and textures as media.
|
||||
|
||||
For example:
|
||||
|
||||
```
|
||||
$ echo "*.mp4 filter=media -crlf" >> .gitattributes
$ echo "*.mkv filter=media -crlf" >> .gitattributes
$ echo "*.wav filter=media -crlf" >> .gitattributes
$ echo "*.flac filter=media -crlf" >> .gitattributes
$ echo "*.kra filter=media -crlf" >> .gitattributes
|
||||
```
|
||||
|
||||
When you stage a file of those types, the file is copied to .git/media.
|
||||
|
||||
Assuming you have a Git repository on the server already, the final step is to tell your Git repository where the "mothership" is; that is, where the media files will go when they have been pushed for all users to share. Set this in the repository's .git/config file, substituting your own user, host, and path:
|
||||
|
||||
```
|
||||
[git-media]
|
||||
transport = scp
|
||||
autodownload = false #true to pull assets by default
|
||||
scpuser = seth
|
||||
scphost = example.com
|
||||
scppath = /opt/jupiter.git
|
||||
```
|
||||
|
||||
If you have complex SSH settings for your server, such as a non-standard port or a path to a non-default SSH key file, use .ssh/config to set defaults for the host.
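For example, an entry in ~/.ssh/config might look like the following; the port number and key path are hypothetical and should be replaced with your own values:

```
Host example.com
    User seth
    Port 2222
    IdentityFile ~/.ssh/id_media
```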
|
||||
|
||||
Life with git-media is mostly normal; you work in your repository, you stage files and blobs alike, and commit them as usual. The only difference in workflow is that at some point along the way, you should sync your secret stockpile of assets (er, media) to the shared repository.
|
||||
|
||||
When you are ready to publish your assets for your team or for your own backup, use this command:
|
||||
|
||||
```
|
||||
$ git media sync
|
||||
```
|
||||
|
||||
To replace a file in git-media with a changed version (for example, an audio file has been sweetened, or a matte painting has been completed, or a video file has been colour graded), you must explicitly tell Git to update the media. This overrides git-media's default to not copy a file if it already exists remotely:
|
||||
|
||||
```
|
||||
$ git update-index --really-refresh
|
||||
```
|
||||
|
||||
When other members of your team (or you, on a different computer) clone the repository, no assets will be downloaded by default unless you have set the autodownload option in .git/config to true. A git media sync cures all ills.
|
||||
|
||||
```
|
||||
git-annex
|
||||
```
|
||||
|
||||
git-annex has a slightly different workflow, and defaults to local repositories, but the basic ideas are the same. You should be able to install git-annex from your distribution's repository, or you can get it from the website as needed. As with git-media, any user using git-annex must install it on their machine.
|
||||
|
||||
The initial setup is simpler than git-media. To create a bare repository on your server run this command, substituting your own path:
|
||||
|
||||
```
|
||||
$ git init --bare --shared /opt/jupiter.git
|
||||
```
|
||||
|
||||
Then clone it onto your local computer, and mark it as a git-annex location:
|
||||
|
||||
```
|
||||
$ git clone seth@example.com:/opt/jupiter.clone
Cloning into 'jupiter.clone'...
warning: You appear to have cloned an empty repository.
Checking connectivity... done.
$ git annex init "seth workstation"
init seth workstation ok
|
||||
```
|
||||
|
||||
Rather than using filters to identify media assets or large files, you configure what gets classified as a large file by using the git annex command:
|
||||
|
||||
```
|
||||
$ git annex add bigblobfile.flac
add bigblobfile.flac (checksum) ok
(Recording state in Git...)
|
||||
```
|
||||
|
||||
Committing is done as usual:
|
||||
|
||||
```
|
||||
$ git commit -m 'added flac source for sound fx'
|
||||
```
|
||||
|
||||
But pushing is different, because git annex uses its own branch to track assets. The first push you make may need the -u option, depending on how you manage your repository:
|
||||
|
||||
```
|
||||
$ git push -u origin master git-annex
To seth@example.com:/opt/jupiter.git
 * [new branch]      master -> master
 * [new branch]      git-annex -> git-annex
|
||||
```
|
||||
|
||||
As with git-media, a normal git push does not copy your assets to the server, it only sends information about the media. When you're ready to share your assets with the rest of the team, run the sync command:
|
||||
|
||||
```
|
||||
$ git annex sync --content
|
||||
```
|
||||
|
||||
If someone else has shared assets to the server and you need to pull them, git annex sync will prompt your local checkout to pull assets that are not present on your machine, but that exist on the server.
|
||||
|
||||
Both git-media and git-annex are flexible and can use local repositories instead of a server, so they're just as useful for managing private local projects, too.
|
||||
|
||||
Git is a powerful and extensible system, and by now there is really no excuse for not using it. Try it out today!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
222
sources/tech/20160815 The cost of small modules.md
Normal file
@ -0,0 +1,222 @@
|
||||
Translating by Yinr
|
||||
|
||||
The cost of small modules
|
||||
====
|
||||
|
||||
About a year ago I was refactoring a large JavaScript codebase into smaller modules, when I discovered a depressing fact about Browserify and Webpack:
|
||||
|
||||
> “The more I modularize my code, the bigger it gets. ”– Nolan Lawson
|
||||
|
||||
Later on, Sam Saccone published some excellent research on [Tumblr](https://docs.google.com/document/d/1E2w0UQ4RhId5cMYsDcdcNwsgL0gP_S6SDv27yi1mCEY/edit) and [Imgur](https://github.com/perfs/audits/issues/1)‘s page load performance, in which he noted:
|
||||
|
||||
> “Over 400ms is being spent simply walking the Browserify tree.”– Sam Saccone
|
||||
|
||||
In this post, I’d like to demonstrate that small modules can have a surprisingly high performance cost depending on your choice of bundler and module system. Furthermore, I’ll explain why this applies not only to the modules in your own codebase, but also to the modules within dependencies, which is a rarely-discussed aspect of the cost of third-party code.
|
||||
|
||||
### Web perf 101
|
||||
|
||||
The more JavaScript included on a page, the slower that page tends to be. Large JavaScript bundles cause the browser to spend more time downloading, parsing, and executing the script, all of which lead to slower load times.
|
||||
|
||||
Even when breaking up the code into multiple bundles – Webpack [code splitting](https://webpack.github.io/docs/code-splitting.html), Browserify[factor bundles](https://github.com/substack/factor-bundle), etc. – the cost is merely delayed until later in the page lifecycle. Sooner or later, the JavaScript piper must be paid.
|
||||
|
||||
Furthermore, because JavaScript is a dynamic language, and because the prevailing[CommonJS](http://www.commonjs.org/) module system is also dynamic, it’s fiendishly difficult to extract unused code from the final payload that gets shipped to users. You might only need jQuery’s $.ajax, but by including jQuery, you pay the cost of the entire library.
|
||||
|
||||
The JavaScript community has responded to this problem by advocating the use of [small modules](http://substack.net/how_I_write_modules). Small modules have a lot of [aesthetic and practical benefits](http://dailyjs.com/2015/07/02/small-modules-complexity-over-size/) – easier to maintain, easier to comprehend, easier to plug together – but they also solve the jQuery problem by promoting the inclusion of small bits of functionality rather than big “kitchen sink” libraries.
|
||||
|
||||
So in the “small modules” world, instead of doing:
|
||||
|
||||
```
|
||||
var _ = require('lodash')
|
||||
_.uniq([1,2,2,3])
|
||||
```
|
||||
|
||||
You might do:
|
||||
|
||||
```
|
||||
var uniq = require('lodash.uniq')
|
||||
uniq([1,2,2,3])
|
||||
```
|
||||
|
||||
### Packages vs modules
|
||||
|
||||
It’s important to note that, when I say “modules,” I’m not talking about “packages” in the npm sense. When you install a package from npm, it might only expose a single module in its public API, but under the hood it could actually be a conglomeration of many modules.
|
||||
|
||||
For instance, consider a package like [is-array](https://www.npmjs.com/package/is-array). It has no dependencies and only contains [one JavaScript file](https://github.com/retrofox/is-array/blob/d79f1c90c824416b60517c04f0568b5cd3f8271d/index.js#L6-L33), so it has one module. Simple enough.
|
||||
|
||||
Now consider a slightly more complex package like [once](https://www.npmjs.com/package/once), which has exactly one dependency: [wrappy](https://www.npmjs.com/package/wrappy). [Both](https://github.com/isaacs/once/blob/2ad558657e17fafd24803217ba854762842e4178/once.js#L1-L21) [packages](https://github.com/npm/wrappy/blob/71d91b6dc5bdeac37e218c2cf03f9ab55b60d214/wrappy.js#L6-L33) contain one module, so the total module count is 2. So far, so good.
|
||||
|
||||
Now let’s consider a more deceptive example: [qs](https://www.npmjs.com/package/qs). Since it has zero dependencies, you might assume it only has one module. But in fact, it has four!
|
||||
|
||||
You can confirm this by using a tool I wrote called [browserify-count-modules](https://www.npmjs.com/package/browserify-count-modules), which simply counts the total number of modules in a Browserify bundle:
|
||||
|
||||
```
|
||||
$ npm install qs
|
||||
$ browserify node_modules/qs | browserify-count-modules
|
||||
4
|
||||
```
|
||||
|
||||
This means that a given package can actually contain one or more modules. These modules can also depend on other packages, which might bring in their own packages and modules. The only thing you can be sure of is that each package contains at least one module.
|
||||
|
||||
### Module bloat
|
||||
|
||||
How many modules are in a typical web application? Well, I ran browserify-count-modules on a few popular Browserify-using sites, and came up with these numbers:
|
||||
|
||||
* [requirebin.com](http://requirebin.com/): 91 modules
|
||||
* [keybase.io](https://keybase.io/): 365 modules
|
||||
* [m.reddit.com](http://m.reddit.com/): 1050 modules
|
||||
* [Apple.com](http://images.apple.com/ipad-air-2/): 1060 modules (Added. [Thanks, Max!](https://twitter.com/denormalize/status/765300194078437376))
|
||||
|
||||
For the record, my own [Pokedex.org](https://pokedex.org/) (the largest open-source site I’ve built) contains 311 modules across four bundle files.
|
||||
|
||||
Ignoring for a moment the raw size of those JavaScript bundles, I think it’s interesting to explore the cost of the number of modules themselves. Sam Saccone has already blown this story wide open in [“The cost of transpiling es2015 in 2016”](https://github.com/samccone/The-cost-of-transpiling-es2015-in-2016#the-cost-of-transpiling-es2015-in-2016), but I don’t think his findings have gotten nearly enough press, so let’s dig a little deeper.
|
||||
|
||||
### Benchmark time!
|
||||
|
||||
I put together [a small benchmark](https://github.com/nolanlawson/cost-of-small-modules) that constructs a JavaScript module importing 100, 1000, and 5000 other modules, each of which merely exports a number. The parent module just sums the numbers together and logs the result:
|
||||
|
||||
```
|
||||
// index.js
|
||||
var total = 0
|
||||
total += require('./module_0')
|
||||
total += require('./module_1')
|
||||
total += require('./module_2')
|
||||
// etc.
|
||||
console.log(total)
|
||||
|
||||
|
||||
// module_1.js
|
||||
module.exports = 1
|
||||
```
|
||||
|
||||
I tested five bundling methods: Browserify, Browserify with the [bundle-collapser](https://www.npmjs.com/package/bundle-collapser) plugin, Webpack, Rollup, and Closure Compiler. For Rollup and Closure Compiler I used ES6 modules, whereas for Browserify and Webpack I used CommonJS, so as not to unfairly disadvantage them (since they would need a transpiler like Babel, which adds its own overhead).
|
||||
|
||||
In order to best simulate a production environment, I used Uglify with the --mangle and --compress settings for all bundles, and served them gzipped over HTTPS using GitHub Pages. For each bundle, I downloaded and executed it 15 times and took the median, noting the (uncached) load time and execution time using performance.now().
|
||||
|
||||
### Bundle sizes
|
||||
|
||||
Before we get into the benchmark results, it’s worth taking a look at the bundle files themselves. Here are the byte sizes (minified but ungzipped) for each bundle ([chart view](https://nolanwlawson.files.wordpress.com/2016/08/min.png)):
|
||||
|
||||
| | 100 modules | 1000 modules | 5000 modules |
|
||||
| --- | --- | --- | --- |
|
||||
| browserify | 7982 | 79987 | 419985 |
|
||||
| browserify-collapsed | 5786 | 57991 | 309982 |
|
||||
| webpack | 3954 | 39055 | 203052 |
|
||||
| rollup | 671 | 6971 | 38968 |
|
||||
| closure | 758 | 7958 | 43955 |
|
||||
|
||||
And here are the sizes once gzipped:
| | 100 modules | 1000 modules | 5000 modules |
|
||||
| --- | --- | --- | --- |
|
||||
| browserify | 1649 | 13800 | 64513 |
|
||||
| browserify-collapsed | 1464 | 11903 | 56335 |
|
||||
| webpack | 693 | 5027 | 26363 |
|
||||
| rollup | 300 | 2145 | 11510 |
|
||||
| closure | 302 | 2140 | 11789 |
|
||||
|
||||
The way Browserify and Webpack work is by isolating each module into its own function scope, and then declaring a top-level runtime loader that locates the proper module whenever require() is called. Here’s what our Browserify bundle looks like:
|
||||
|
||||
```
|
||||
(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o
|
||||
```
|
||||
|
||||
Whereas the Rollup and Closure bundles look more like what you might hand-author if you were just writing one big module. Here’s Rollup:
|
||||
|
||||
```
|
||||
(function () {
|
||||
'use strict';
|
||||
var total = 0
|
||||
total += 0
|
||||
total += 1
|
||||
total += 2
|
||||
// etc.
|
||||
```
|
||||
|
||||
If you understand the inherent cost of functions-within-functions in JavaScript, and of looking up a value in an associative array, then you’ll be in a good position to understand the following benchmark results.
|
||||
|
||||
### Results
|
||||
|
||||
I ran this benchmark on a Nexus 5 with Android 5.1.1 and Chrome 52 (to represent a low- to mid-range device) as well as an iPod Touch 6th generation running iOS 9 (to represent a high-end device).
|
||||
|
||||
Here are the results for the Nexus 5 ([tabular results](https://gist.github.com/nolanlawson/e84ad060a20f0cb7a7c32308b6b46abe)):
|
||||
|
||||
[](https://nolanwlawson.files.wordpress.com/2016/08/modules_nexus_5.png)
|
||||
|
||||
And here are the results for the iPod Touch ([tabular results](https://gist.github.com/nolanlawson/45ed2c7fa53da035dfc1e153763b9f93)):
|
||||
|
||||
[](https://nolanwlawson.files.wordpress.com/2016/08/modules_ipod.png)
|
||||
|
||||
At 100 modules, the variance between all the bundlers is pretty negligible, but once we get up to 1000 or 5000 modules, the difference becomes severe. The iPod Touch is hurt the least by the choice of bundler, but the Nexus 5, being an aging Android phone, suffers a lot under Browserify and Webpack.
|
||||
|
||||
I also find it interesting that both Rollup and Closure’s execution cost is essentially free for the iPod, regardless of the number of modules. And in the case of the Nexus 5, the runtime costs aren’t free, but they’re still much cheaper for Rollup/Closure than for Browserify/Webpack, the latter of which chew up the main thread for several frames if not hundreds of milliseconds, meaning that the UI is frozen just waiting for the module loader to finish running.
|
||||
|
||||
Note that both of these tests were run on a fast Gigabit connection, so in terms of network costs, it’s really a best-case scenario. Using the Chrome Dev Tools, we can manually throttle that Nexus 5 down to 3G and see the impact ([tabular results](https://gist.github.com/nolanlawson/6269d304c970174c21164288808392ea)):
|
||||
|
||||
[](https://nolanwlawson.files.wordpress.com/2016/08/modules_nexus_53g.png)
|
||||
|
||||
Once we take slow networks into account, the difference between Browserify/Webpack and Rollup/Closure is even more stark. In the case of 1000 modules (which is close to Reddit’s count of 1050), Browserify takes about 400 milliseconds longer than Rollup. And that 400ms is no small potatoes, since Google and Bing have both noted that sub-second delays have an[appreciable impact on user engagement](http://radar.oreilly.com/2009/06/bing-and-google-agree-slow-pag.html).
|
||||
|
||||
One thing to note is that this benchmark doesn't measure the precise execution cost of 100, 1000, or 5000 modules per se, since that will depend on your usage of require(). Inside of these bundles, I'm calling require() once per module, but if you are calling require() multiple times per module (which is the norm in most codebases) or if you are calling require() multiple times on-the-fly (i.e. require() within a sub-function), then you could see severe performance degradations.
|
||||
|
||||
Reddit’s mobile site is a good example of this. Even though they have 1050 modules, I clocked their real-world Browserify execution time as much worse than the “1000 modules” benchmark. When profiling on that same Nexus 5 running Chrome, I measured 2.14 seconds for Reddit’s Browserify require() function, and 197 milliseconds for the equivalent function in the “1000 modules” script. (In desktop Chrome on an i7 Surface Book, I also measured it at 559ms vs 37ms, which is pretty astonishing given we’re talking desktop.)
|
||||
|
||||
This suggests that it may be worthwhile to run the benchmark again with multiple require()s per module, although in my opinion it wouldn't be a fair fight for Browserify/Webpack, since Rollup/Closure both resolve duplicate ES6 imports into a single hoisted variable declaration, and it's also impossible to import from anywhere but the top-level scope. So in essence, the cost of a single import for Rollup/Closure is the same as the cost of n imports, whereas for Browserify/Webpack, the execution cost will increase linearly with n require()s.
|
||||
|
||||
For the purposes of this analysis, though, I think it’s best to just assume that the number of modules is only a lower bound for the performance hit you might feel. In reality, the “5000 modules” benchmark may be a better yardstick for “5000 require() calls.”
|
||||
|
||||
### Conclusions
|
||||
|
||||
First off, the bundle-collapser plugin seems to be a valuable addition to Browserify. If you’re not using it in production, then your bundle will be a bit larger and slower than it would be otherwise (although I must admit the difference is slight). Alternatively, you could switch to Webpack and get an even faster bundle without any extra configuration. (Note that it pains me to say this, since I’m a diehard Browserify fanboy.)
|
||||
|
||||
However, these results clearly show that Webpack and Browserify both underperform compared to Rollup and Closure Compiler, and that the gap widens the more modules you add. Unfortunately I’m not sure [Webpack 2](https://gist.github.com/sokra/27b24881210b56bbaff7) will solve any of these problems, because although they’ll be [borrowing some ideas from Rollup](http://www.2ality.com/2015/12/webpack-tree-shaking.html), they seem to be more focused on the[tree-shaking aspects](http://www.2ality.com/2015/12/bundling-modules-future.html) and not the scope-hoisting aspects. (Update: a better name is “inlining,” and the Webpack team is [working on it](https://github.com/webpack/webpack/issues/2873#issuecomment-240067865).)
|
||||
|
||||
Given these results, I'm surprised Closure Compiler and Rollup aren't getting much traction in the JavaScript community. I'm guessing it's due to the fact that (in the case of the former) it has a Java dependency, and (in the case of the latter) it's still fairly immature and doesn't quite work out-of-the-box yet (see [Calvin Metcalf's comments](https://github.com/rollup/rollup/issues/552) for a good summary).
|
||||
|
||||
Even without the average JavaScript developer jumping on the Rollup/Closure bandwagon, though, I think npm package authors are already in a good position to help solve this problem. If you npm install lodash, you’ll notice that the main export is one giant JavaScript module, rather than what you might expect given Lodash’s hyper-modular nature (require('lodash/uniq'), require('lodash.uniq'), etc.). For PouchDB, we made a similar decision to [use Rollup as a prepublish step](http://pouchdb.com/2016/01/13/pouchdb-5.2.0-a-better-build-system-with-rollup.html), which produces the smallest possible bundle in a way that’s invisible to users.
|
||||
|
||||
I also created [rollupify](https://github.com/nolanlawson/rollupify) to try to make this pattern a bit easier to just drop in to existing Browserify projects. The basic idea is to use imports and exports within your own project ([cjs-to-es6](https://github.com/nolanlawson/cjs-to-es6) can help migrate), and then use require() for third-party packages. That way, you still have all the benefits of modularity within your own codebase, while exposing more-or-less one big module to your users. Unfortunately, you still pay the costs for third-party modules, but I've found that this is a good compromise given the current state of the npm ecosystem.
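As a rough, conceptual sketch of that pattern (the file names are invented for illustration, and this is not a verified rollupify setup): your own files use import/export so they can be collapsed into one scope, while third-party packages are still pulled in with require().

```
// utils.js: your own code, written as an ES6 module
export function uniqSorted (arr) {
  return [...new Set(arr)].sort()
}

// index.js: entry point
import { uniqSorted } from './utils'   // own code: import/export
var lodash = require('lodash')         // third-party: plain require()

console.log(uniqSorted([3, 1, 2, 2]))  // [1, 2, 3]
console.log(lodash.uniq([1, 2, 2, 3])) // [1, 2, 3]
```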
|
||||
|
||||
So there you have it: one horse-sized JavaScript duck is faster than a hundred duck-sized JavaScript horses. Despite this fact, though, I hope that our community will eventually realize the pickle we’re in – advocating for a “small modules” philosophy that’s good for developers but bad for users – and improve our tools, so that we can have the best of both worlds.
|
||||
|
||||
### Bonus round! Three desktop browsers
|
||||
|
||||
Normally I like to run performance tests on mobile devices, since that's where you see the clearest differences. But out of curiosity, I also ran this benchmark on Chrome 52, Edge 14, and Firefox 48 on an i7 Surface Book using Windows 10 RS1. Here are the results:
|
||||
|
||||
Chrome 52 ([tabular results](https://gist.github.com/nolanlawson/4f79258dc05bbd2c14b85cf2196c6ef0))
|
||||
|
||||
[](https://nolanwlawson.files.wordpress.com/2016/08/modules_chrome.png)
|
||||
|
||||
Edge 14 ([tabular results](https://gist.github.com/nolanlawson/726fa47e0723b45e4ee9ecf0cf2fcddb))
|
||||
|
||||
[](https://nolanwlawson.files.wordpress.com/2016/08/modules_edge.png)
|
||||
|
||||
Firefox 48 ([tabular results](https://gist.github.com/nolanlawson/7eed17e6ffa18752bf99a9d4bff2941f))
|
||||
|
||||
[](https://nolanwlawson.files.wordpress.com/2016/08/modules_firefox.png)
|
||||
|
||||
The only interesting tidbits I’ll call out in these results are:
|
||||
|
||||
1. bundle-collapser is definitely not a slam-dunk in all cases.
|
||||
2. The ratio of network-to-execution time is always extremely high for Rollup and Closure; their runtime costs are basically zilch. ChakraCore and SpiderMonkey eat them up for breakfast, and V8 is not far behind.
|
||||
|
||||
This latter point could be extremely important if your JavaScript is largely lazy-loaded, because if you can afford to wait on the network, then using Rollup and Closure will have the additional benefit of not clogging up the UI thread, i.e. they’ll introduce less jank than Browserify or Webpack.
|
||||
|
||||
Update: in response to this post, JDD has [opened an issue on Webpack](https://github.com/webpack/webpack/issues/2873). There’s also [one on Browserify](https://github.com/substack/node-browserify/issues/1379).
|
||||
|
||||
Update 2: [Ryan Fitzer](https://github.com/nolanlawson/cost-of-small-modules/pull/5) has generously added RequireJS and RequireJS with [Almond](https://github.com/requirejs/almond) to the benchmark, both of which use AMD instead of CommonJS or ES6.
|
||||
|
||||
Testing shows that RequireJS has [the largest bundle sizes](https://gist.github.com/nolanlawson/511e0ce09fed29fed040bb8673777ec5) but surprisingly its runtime costs are [very close to Rollup and Closure](https://gist.github.com/nolanlawson/4e725df00cd1bc9673b25ef72b831c8b). Here are the results for a Nexus 5 running Chrome 52 throttled to 3G:
|
||||
|
||||
[](https://nolanwlawson.files.wordpress.com/2016/08/2016-08-20-14_45_29-small_modules3-xlsx-excel.png)
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://nolanlawson.com/2016/08/15/the-cost-of-small-modules/?utm_source=javascriptweekly&utm_medium=email
|
||||
|
||||
作者:[Nolan][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://nolanlawson.com/
|
@ -1,70 +0,0 @@
|
||||
翻译中--byzky001
|
||||
Deploying React with Zero Configuration
|
||||
========================
|
||||
|
||||
So you want to build an app with [React][1]? "[Getting started][2]" is easy… and then what?
|
||||
|
||||
React is a library for building user interfaces, which comprise only one part of an app. Deciding on all the other parts — styles, routers, npm modules, ES6 code, bundling and more — and then figuring out how to use them is a drain on developers. This has become known as [javascript fatigue][3]. Despite this complexity, usage of React continues to grow.
|
||||
|
||||
The community answers this challenge by sharing boilerplates. These [boilerplates][4] reveal the profusion of architectural choices developers must make. That official "Getting Started" seems so far away from the reality of an operational app.
|
||||
|
||||
### New, Zero-configuration Experience
|
||||
|
||||
Inspired by the cohesive developer experience provided by [Ember.js][5] and [Elm][6], the folks at Facebook wanted to provide an easy, opinionated way forward. They created a new way to develop React apps, `create-react-app`. In the three weeks since initial public release, it has received tremendous community awareness (over 8,000 GitHub stargazers) and support (dozens of pull requests).
|
||||
|
||||
`create-react-app` is different than many past attempts with boilerplates and starter kits. It targets zero configuration [[convention-over-configuration]][7], focusing the developer on what is interesting and different about their application.
|
||||
|
||||
A powerful side-effect of zero configuration is that the tools can now evolve in the background. Zero configuration lays the foundation for the tools ecosystem to create automation and delight developers far beyond React itself.
|
||||
|
||||
### Zero-configuration Deploy to Heroku
|
||||
|
||||
Thanks to the zero-config foundation of create-react-app, the idea of zero-config deployment seemed within reach. Since these new apps all share a common, implicit architecture, the build process can be automated and then served with intelligent defaults. So, [we created this community buildpack to experiment with no-configuration deployment to Heroku][8].
|
||||
|
||||
#### Create and Deploy a React App in Two Minutes
|
||||
|
||||
You can get started building React apps for free on Heroku.
|
||||
|
||||
```
|
||||
npm install -g create-react-app
|
||||
create-react-app my-app
|
||||
cd my-app
|
||||
git init
|
||||
heroku create -b https://github.com/mars/create-react-app-buildpack.git
|
||||
git add .
|
||||
git commit -m "react-create-app on Heroku"
|
||||
git push heroku master
|
||||
heroku open
|
||||
```
|
||||
|
||||
Try it yourself [using the buildpack docs][9].
|
||||
|
||||
### Growing Up from Zero Config
|
||||
|
||||
create-react-app is very new (currently version 0.2) and since its target is a crystal-clear developer experience, more advanced use cases are not supported (or may never be supported). For example, it does not provide server-side rendering or customized bundles.
|
||||
|
||||
To support greater control, create-react-app includes the command npm run eject. Eject unpacks all the tooling (config files and package.json dependencies) into the app's directory, so you can customize to your heart's content. Once ejected, changes you make may necessitate switching to a custom deployment with Node.js and/or static buildpacks. Always perform such project changes through a branch / pull request, so they can be easily undone. Heroku's Review Apps are perfect for testing changes to the deployment.
|
||||
|
||||
We'll be tracking progress on create-react-app and adapting the buildpack to support more advanced use cases as they become available. Happy deploying!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.heroku.com/deploying-react-with-zero-configuration?c=7013A000000NnBFQA0&utm_campaign=Display%20-%20Endemic%20-Cooper%20-Node%20-%20Blog%20-%20Zero-Configuration&utm_medium=display&utm_source=cooperpress&utm_content=blog&utm_term=node
|
||||
|
||||
作者:[Mars Hall][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://blog.heroku.com/deploying-react-with-zero-configuration?c=7013A000000NnBFQA0&utm_campaign=Display%20-%20Endemic%20-Cooper%20-Node%20-%20Blog%20-%20Zero-Configuration&utm_medium=display&utm_source=cooperpress&utm_content=blog&utm_term=node
|
||||
[1]: https://facebook.github.io/react/
|
||||
[2]: https://facebook.github.io/react/docs/getting-started.html
|
||||
[3]: https://medium.com/@ericclemmons/javascript-fatigue-48d4011b6fc4
|
||||
[4]: https://github.com/search?q=react+boilerplate
|
||||
[5]: http://emberjs.com/
|
||||
[6]: http://elm-lang.org/
|
||||
[7]: http://rubyonrails.org/doctrine/#convention-over-configuration
|
||||
[8]: https://github.com/mars/create-react-app-buildpack
|
||||
[9]: https://github.com/mars/create-react-app-buildpack#usage
|
@ -1,129 +0,0 @@
|
||||
Eriwoon Start to translate this article
|
||||
The infrastructure behind Twitter: efficiency and optimization
|
||||
===========
|
||||
|
||||
|
||||
In the past, we've published details about Finagle, Manhattan, and the summary of how we re-architected the site to be able to handle events like Castle in the Sky, the Super Bowl, 2014 World Cup, the global New Year's Eve celebration, among others. In this infrastructure series, we're focusing on the core infrastructure and components that run Twitter. We're also going to focus each blog on efforts surrounding scalability, reliability, and efficiency in a way that highlights the history of our infrastructure, challenges we've faced, lessons learned, upgrades made, and where we're heading.
|
||||
|
||||
### Data center efficiency
|
||||
|
||||
#### History
|
||||
|
||||
Twitter hardware and data centers are at the scale few technology companies ever reach. However, this was not accomplished without a few missteps along the way. Our uptime has matured through a combination of physical improvements and software-based changes.
|
||||
|
||||
During the period when the fail whale was prevalent, outages occurred due to software limitations, as well as physical failures at the hardware or infrastructure level. Failure domains existed in various definitions which had to be aggregated to determine the risk and required redundancy for services. As the business scaled in customers, services, media content, and global presence, the strategy evolved to efficiently and resiliently support the service.
|
||||
|
||||
#### Challenges
|
||||
|
||||
Software dependencies on bare metal were further dependent on our data centers' ability to operate and maintain uptime of power, fiber connectivity, and environment. These discrete physical failure domains had to be reviewed against the services distributed on the hardware to provide for fault tolerance.
|
||||
|
||||
The initial decision of which data center service provider to scale with was made when specialization in site selection, operation, and design was in its infancy. We began with a hosted provider, then migrated to a colocation facility as we scaled. Early service interruptions occurred as a result of equipment failures, data center design issues, maintenance issues, and human error. As a result, we continually iterated on the physical layer designs to increase the resiliency of the hardware and the data center operations.
|
||||
|
||||
The physical causes of service interruptions included hardware failures at the server component level, top-of-rack switches, and core switches. For example, during the initial evaluation of our customized servers, the hardware team determined the cost of the second power supply was not warranted given the low rate of failure of server power supplies, so they were removed from the design. The data center power topology provides redundancy through separate physical whips to the racks and requires the second power supply. Removal of the second power supply eliminated the redundant power path, leaving the hardware vulnerable to impact during distribution faults in the power system. To mitigate the impact of the single power supply, ATS units had to be added at the rack level to allow a secondary path for power.
|
||||
|
||||
The layering of systems with diverse fiber paths, power sources, and physical domains continued to separate services from impacts at relatively small scale interruptions, thus improving resiliency.
|
||||
|
||||
#### Lessons learned and major technology upgrades, migrations, and adoptions
|
||||
|
||||
We learned to model dependencies between the physical failure domains, (i.e. building power and cooling, hardware, fiber) and the services distributed across them to better predict fault tolerance and drive improvements.
|
||||
|
||||
We added additional data centers providing regional diversity to mitigate the risk from natural disasters, along with the ability to fail over between regions when needed during major upgrades, deploys or incidents. The active-active operation of data centers provided for staged code deployment, reducing the overall impact of code rollouts.
|
||||
|
||||
The efficiency of power use by the data centers has improved with expanding the operating ranges of the environmental envelope and designing the hardware for resiliency at the higher operating temperatures.
|
||||
|
||||
#### Future work
|
||||
|
||||
Our data centers continue to evolve in strategy and operation, providing for live changes to the operating network and hardware without interruption to the users. Our strategy will continue to focus on scale within the existing power and physical footprints through optimization and maintaining flexibility while driving efficiency in the coming years.
|
||||
|
||||
### Hardware efficiency
|
||||
|
||||
#### History and challenges
|
||||
|
||||
Our hardware engineering team was started to qualify and validate performance of off-the-shelf purchased hardware, and evolved into customization of hardware for cost and performance optimizations.
|
||||
|
||||
Procuring and consuming hardware at Twitter's scale comes with a unique set of challenges. In order to meet the demands of our internal customers, we initially started a program to qualify and ensure the quality of purchased hardware. The team was primarily focused on performance and reliability testing, ensuring that systems could meet the demands. We ran systematic tests to validate that the behavior was predictable, and very few bugs were introduced.
|
||||
|
||||
As we scaled our major workloads (Mesos, Hadoop, Manhattan, and MySQL), it became apparent that the available market offerings didn't quite meet our needs. Off-the-shelf servers come with enterprise features, like RAID controllers and hot-swap power supplies. These components improve reliability at small scale, but often decrease performance and increase cost; for example, some RAID controllers interfered with the performance of SSDs and could be a third of the cost of the system.
|
||||
|
||||
At the time, we were a heavy user of MySQL databases. Issues arose from both the supply and the performance of SAS media. The majority of deployments were 1U servers, and the total number of drives used plus a writeback cache could predict the performance of a system, often limited to a sustained 2,000 sequential IOPS. In order to continue scaling this workload, we were stranding CPU cores and disk capacity to meet the IOPS requirement. We were unable to find cost-effective solutions at this time.
|
||||
|
||||
As our volume of hardware reached a critical mass, it made sense to invest in a hardware engineering team for customized white box solutions with focus on reducing the capital expenses and increased performance metrics.
|
||||
|
||||
#### Major technology changes and adoption
|
||||
|
||||
We've made many transitions in our hardware technology stack. Below is a timeline for adoptions of new technology and internally developed platforms.
|
||||
|
||||
- 2012 - SSDs become the primary storage media for our MySQL and key/value databases.
|
||||
- 2013 - Our first custom solution for Hadoop workloads is developed, and becomes our primary bulk storage solution.
|
||||
- 2013 - Our custom solution is developed for Mesos, TFE, and cache workloads.
|
||||
- 2014 - Our custom SSD key/value server completes development.
|
||||
- 2015 - Our custom database solution is developed.
|
||||
- 2016 - We developed GPU systems for inference and training of machine learning models.
|
||||
|
||||
#### Lessons learned
|
||||
|
||||
The objective of our Hardware Engineering team is to significantly reduce the capital expenditure and operating expenditure by making small tradeoffs that improve our TCO. Two generalizations can apply to reduce the cost of a server:
|
||||
|
||||
1. Removing the unused components
|
||||
2. Improving utilization
|
||||
|
||||
Twitter's workload is divided into four main verticals: storage, compute, database, and GPU. Twitter defines requirements on a per-vertical basis, allowing Hardware Engineering to produce a focused feature set for each. This approach allows us to optimize component selection where the equipment may go unused or underutilized. For example, our storage configuration has been designed specifically for Hadoop workloads and was delivered at a TCO reduction of 20% over the original OEM solution. At the same time, the design improved both the performance and reliability of the hardware. Similarly, for our compute vertical, the Hardware Engineering Team has improved the efficiency of these systems by removing unnecessary features.
|
||||
|
||||
There is a minimum overhead required to operate a server, and we quickly reached a point where we could no longer remove components to reduce cost. In the compute vertical specifically, we decided the best approach was to look at solutions that replaced multiple nodes with a single node, and rely on Aurora/Mesos to manage the capacity. We settled on a design that replaced two of our previous generation compute nodes with a single node.
|
||||
|
||||
Our design verification began with a series of rough benchmarks, and then progressed to a series of production load tests confirming a scaling factor of 2. Most of this improvement came from simply increasing the thread count of the CPU, but our testing confirmed a 20-50% improvement in our per thread performance. Additionally we saw a 25% increase in our per thread power efficiency, due to sharing the overhead of the server across more threads.
|
||||
|
||||
For the initial deployment, our monitoring showed a 1.5 replacement factor, which was well below the design goal. An examination of the performance data revealed a flawed assumption about the workload characteristics that needed to be identified.
|
||||
|
||||
Our Hardware Engineering Team's initial action was to develop a model to predict the packing efficiency of the current Aurora job set into various hardware configurations. This model correctly predicted the scaling factor we were observing in the fleet, and suggested we were stranding cores due to unforeseen storage requirements. Additionally, the model predicted we would see a still improved scaling factor by changing the memory configuration as well.
|
||||
|
||||
Hardware configuration changes take time to implement, so Hardware Engineering identified a few large jobs and worked with our SRE teams to adjust the scheduling requirements to reduce the storage needs. These changes were quick to deploy, and resulted in an immediate improvement to a 1.85 scaling factor.
|
||||
|
||||
In order to address the situation permanently, we needed to adjust the configuration of the server. Simply expanding the installed memory and disk capacity resulted in a 20% improvement in the CPU core utilization, at a minimal cost increase. Hardware Engineering worked with our manufacturing partners to adjust the bill of materials for the initial shipments of these servers. Follow-up observations confirmed a 2.4 scaling factor, exceeding the target design.
|
||||
|
||||
### Migration from bare metal to mesos
|
||||
|
||||
Until 2012, running a service inside Twitter required hardware requisitions. Service owners had to find out and request the particular model or class of server, worry about their rack diversity, maintain scripts to deploy code, and manage dead hardware. There was essentially no "service discovery." When a web service needed to talk to the user service, it typically loaded up a YAML file containing all of the host IPs and ports of the user service and the service used that list (port reservations were tracked in a wiki page). As hardware died or was added, managing the list required editing and committing changes to the YAML file that would go out with the next deploy. Making changes in the caching tier meant many deploys over hours and days, adding a few hosts at a time and deploying in stages. Dealing with cache inconsistencies during the deploy was a common occurrence, since some hosts would be using the new list and some the old. It was possible to have a host running old code (because the box was temporarily down during the deploy), resulting in flaky behavior on the site.
|
||||
|
||||
In 2012/2013, two things started to get adopted at Twitter: service discovery (via a zookeeper cluster and a library in the core module of Finagle) and Mesos (including our own scheduler framework on top of Mesos called Aurora, now an Apache project).
|
||||
|
||||
Service discovery no longer required static YAML host lists. A service either self-registered on startup or was automatically registered under mesos into a "serverset" (which is just a path to a list of znodes in zookeeper based on the role, environment, and service name). Any service that needed to talk to that service would just watch that path and get a live view of what servers were out there.
|
||||
|
||||
With Mesos/Aurora, instead of having a script (we were heavy users of Capistrano) that took a list of hosts, pushed binaries around and orchestrated a rolling restart, a service owner pushed the package into a service called "packer" (which is a service backed by HDFS), uploaded an Aurora configuration that described the service (how many CPUs it needed, how much memory, how many instances were needed, the command lines of all the tasks each instance should run) and Aurora would complete the deploy. It schedules instances on available hosts, downloads the artifact from packer, registers it in service discovery, and launches it. If there are any failures (hardware dies, network fails, etc.), Mesos/Aurora automatically reschedules the instance on another host.
|
||||
|
||||
#### Twitter's Private PaaS
|
||||
|
||||
Mesos/Aurora and Service Discovery in combination were revolutionary. There were many bugs and growing pains over the next few years and many hard lessons learned about distributed systems, but the fundamental design was sound. In the old world, the teams were constantly dealing with and thinking about hardware and its management. In the new world, the engineers only have to think about how best to configure their services and how much capacity to deploy. We were also able to radically improve the CPU utilization of Twitter's fleet over time, since generally each service that got their own bare metal hardware didn't fully utilize its resources and did a poor job of managing capacity. Mesos allows us to pack multiple services into a box without having to think about it, and adding capacity to a service is only requesting quota, changing one line of a config, and doing a deploy.
|
||||
|
||||
Within two years, most "stateless" services moved into Mesos. Some of the most important and largest services (including our user service and our ads serving system) were among the first to move. Being the largest, they saw the biggest reduction in their operational burden.
|
||||
|
||||
We are continuously looking for ways to improve the efficiency and optimization of the infrastructure. As part of this, we regularly benchmark against public cloud providers and offerings to validate our TCO and performance expectations of the infrastructure. We also have a good presence in public cloud, and will continue to utilize the public cloud when it's the best available option. The next series of this post will mainly focus on the scale of our infrastructure.
|
||||
|
||||
Special thanks to Jennifer Fraser, David Barr, Geoff Papilion, Matt Singer, and Lam Dong for all their contributions to this blog post.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.twitter.com/2016/the-infrastructure-behind-twitter-efficiency-and-optimization?utm_source=webopsweekly&utm_medium=email
|
||||
|
||||
作者:[mazdakh][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://twitter.com/intent/user?screen_name=mazdakh
|
||||
[1]: https://twitter.com/jenniferfraser
|
||||
[2]: https://twitter.com/davebarr
|
||||
[3]: https://twitter.com/gpapilion
|
||||
[4]: https://twitter.com/lamdong
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,219 +0,0 @@
|
||||
Understanding Different Classifications of Shell Commands and Their Usage in Linux
|
||||
====
|
||||
|
||||
When it comes to gaining absolute control over your Linux system, then nothing comes close to the command line interface (CLI). In order to become a Linux power user, one must understand the [different types of shell commands][1] and the appropriate ways of using them from the terminal.
|
||||
|
||||
In Linux, there are several types of commands, and for a new Linux user, knowing the meaning of the different commands enables efficient and precise usage. Therefore, in this article, we shall walk through the various classifications of shell commands in Linux.
|
||||
|
||||
One important thing to note is that the command line interface is different from the shell; it only provides a means for you to access the shell. The shell, which is also programmable, then makes it possible to communicate with the kernel using commands.
|
||||
|
||||
Linux commands fall under the following classifications:
|
||||
|
||||
### 1. Program Executables (File System Commands)
|
||||
|
||||
When you run a command, Linux searches through the directories stored in the $PATH environment variable, from left to right, for the executable of that specific command.
|
||||
|
||||
You can view the directories in the $PATH as follows:
|
||||
|
||||
```
|
||||
$ echo $PATH
|
||||
/home/aaronkilik/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
|
||||
```
|
||||
|
||||
In the above order, the directory /home/aaronkilik/bin will be searched first, followed by /usr/local/sbin and so on; the order is significant in the search process.
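If you want to see exactly which executable the shell will pick up first for a given command, the type or which commands will tell you; the output below is only illustrative and will differ from system to system:

```
$ type -a cp
cp is /bin/cp
$ which cp
/bin/cp
```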
|
||||
|
||||
Examples of file system commands in the /bin directory:
|
||||
|
||||
```
|
||||
$ ll /bin/
|
||||
```
|
||||
|
||||
Sample Output
|
||||
|
||||
```
|
||||
total 16284
|
||||
drwxr-xr-x 2 root root 4096 Jul 31 16:30 ./
|
||||
drwxr-xr-x 23 root root 4096 Jul 31 16:29 ../
|
||||
-rwxr-xr-x 1 root root 6456 Apr 14 18:53 archdetect*
|
||||
-rwxr-xr-x 1 root root 1037440 May 17 16:15 bash*
|
||||
-rwxr-xr-x 1 root root 520992 Jan 20 2016 btrfs*
|
||||
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-calc-size*
|
||||
lrwxrwxrwx 1 root root 5 Jul 31 16:19 btrfsck -> btrfs*
|
||||
-rwxr-xr-x 1 root root 278376 Jan 20 2016 btrfs-convert*
|
||||
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-debug-tree*
|
||||
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-find-root*
|
||||
-rwxr-xr-x 1 root root 270136 Jan 20 2016 btrfs-image*
|
||||
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-map-logical*
|
||||
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-select-super*
|
||||
-rwxr-xr-x 1 root root 253816 Jan 20 2016 btrfs-show-super*
|
||||
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfstune*
|
||||
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-zero-log*
|
||||
-rwxr-xr-x 1 root root 31288 May 20 2015 bunzip2*
|
||||
-rwxr-xr-x 1 root root 1964536 Aug 19 2015 busybox*
|
||||
-rwxr-xr-x 1 root root 31288 May 20 2015 bzcat*
|
||||
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzcmp -> bzdiff*
|
||||
-rwxr-xr-x 1 root root 2140 May 20 2015 bzdiff*
|
||||
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzegrep -> bzgrep*
|
||||
-rwxr-xr-x 1 root root 4877 May 20 2015 bzexe*
|
||||
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzfgrep -> bzgrep*
|
||||
-rwxr-xr-x 1 root root 3642 May 20 2015 bzgrep*
|
||||
```
|
||||
|
||||
### 2. Linux Aliases
|
||||
|
||||
These are user-defined commands; they are created using the alias shell built-in command, and contain other shell commands with some options and arguments. The idea is basically to use new and short names for lengthy commands.
|
||||
|
||||
The syntax for creating an alias is as follows:
|
||||
|
||||
```
|
||||
$ alias newcommand='command -options'
|
||||
```
|
||||
|
||||
To list all aliases on your system, issue the command below:
|
||||
|
||||
```
|
||||
$ alias -p
|
||||
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
|
||||
alias egrep='egrep --color=auto'
|
||||
alias fgrep='fgrep --color=auto'
|
||||
alias grep='grep --color=auto'
|
||||
alias l='ls -CF'
|
||||
alias la='ls -A'
|
||||
alias ll='ls -alF'
|
||||
alias ls='ls --color=auto'
|
||||
```
|
||||
|
||||
To create a new alias in Linux, go through the examples below.
|
||||
|
||||
```
|
||||
$ alias update='sudo apt update'
|
||||
$ alias upgrade='sudo apt dist-upgrade'
|
||||
$ alias -p | grep 'up'
|
||||
```
|
||||
|
||||

|
||||
|
||||
However, the aliases we have created above only work temporarily; they will not survive a system restart. You can set permanent aliases in your `.bashrc` file as shown below.
|
||||
|
||||

|
||||
|
||||
After adding them, run the command below to activate them.
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
```
|
||||
|
||||
### 3. Linux Shell Reserved Words
|
||||
|
||||
In shell programming, words such as if, then, fi, for, while, case, esac, else, until and many others are shell reserved words. As the description implies, they have specialized meaning to the shell.
|
||||
|
||||
You can list out all Linux shell keywords using the type command as shown:
|
||||
|
||||
```
|
||||
$ type if then fi for while case esac else until
|
||||
if is a shell keyword
|
||||
then is a shell keyword
|
||||
fi is a shell keyword
|
||||
for is a shell keyword
|
||||
while is a shell keyword
|
||||
case is a shell keyword
|
||||
esac is a shell keyword
|
||||
else is a shell keyword
|
||||
until is a shell keyword
|
||||
```
|
||||
|
||||
Suggested Read: 10 Useful Linux Chaining Operators with Practical Examples
|
||||
|
||||
### 4. Linux Shell Functions
|
||||
|
||||
A shell function is a group of commands that are executed collectively within the current shell. Functions help to carry out a specific task in a shell script. The conventional form of writing shell functions in a script is:
|
||||
|
||||
```
|
||||
function_name() {
|
||||
command1
|
||||
command2
|
||||
…….
|
||||
}
|
||||
```
|
||||
|
||||
Alternatively,
|
||||
|
||||
```
|
||||
function function_name {
|
||||
command1
|
||||
command2
|
||||
…….
|
||||
}
|
||||
```
|
||||
|
||||
Let’s take a look at how to write shell functions in a script named shell_functions.sh.
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
#write a shell function to update and upgrade installed packages
|
||||
upgrade_system(){
|
||||
sudo apt update;
|
||||
sudo apt dist-upgrade;
|
||||
}
|
||||
#execute function
|
||||
upgrade_system
|
||||
```
|
||||
|
||||
Instead of executing the two commands sudo apt update and sudo apt dist-upgrade separately from the command line, we have written a simple shell function, upgrade_system, that executes them as a single command within a script.
|
||||
|
||||
Save the file, make the script executable, and finally run it as shown below:
|
||||
|
||||
```
|
||||
$ chmod +x shell_functions.sh
|
||||
$ ./shell_functions.sh
|
||||
```
|
||||
|
||||

|
||||
|
||||
### 5. Linux Shell Built-in Commands
|
||||
|
||||
These are Linux commands that are built into the shell, so you cannot find them within the file system. They include pwd, cd, bg, alias, history, type, source, read, exit and many others.
|
||||
|
||||
You can list or check Linux built-in commands using the type command as shown:
|
||||
|
||||
```
|
||||
$ type pwd
|
||||
pwd is a shell builtin
|
||||
$ type cd
|
||||
cd is a shell builtin
|
||||
$ type bg
|
||||
bg is a shell builtin
|
||||
$ type alias
|
||||
alias is a shell builtin
|
||||
$ type history
|
||||
history is a shell builtin
|
||||
```
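Note that some commands exist both as a shell built-in and as an executable on disk; `type -a` lists every form the shell knows about. For example, on a typical bash setup:

```
$ type -a echo
echo is a shell builtin
echo is /bin/echo
```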
|
||||
|
||||
Learn about some Linux built-in Commands usage:
|
||||
|
||||
- [15 ‘pwd’ Command Examples in Linux][2]
|
||||
- [15 ‘cd’ Command Examples in Linux][3]
|
||||
- [Learn The Power of Linux ‘history’ Command][4]
|
||||
|
||||
### Conclusion
|
||||
|
||||
As a Linux user, it is always important to know the type of command you are running. I believe that, with the precise and simple-to-understand explanations above, along with a few relevant illustrations, you now have a good understanding of the [various categories of Linux commands][5].
|
||||
|
||||
You can as well get in touch through the comment section below for any questions or supplementary ideas that you would like to offer us.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
|
||||
|
||||
作者:[Aaron Kili ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.tecmint.com/author/aaronkili/
|
||||
[1]: http://www.tecmint.com/different-types-of-linux-shells/
|
||||
[2]: http://www.tecmint.com/pwd-command-examples/
|
||||
[3]: http://www.tecmint.com/cd-command-in-linux/
|
||||
[4]: http://www.tecmint.com/history-command-examples/
|
||||
[5]: http://www.tecmint.com/category/linux-commands/
|
@ -1,78 +0,0 @@
|
||||
translating by ucasFL
|
||||
|
||||
4 Best Linux Boot Loaders
|
||||
====
|
||||
|
||||
When you turn on your machine, immediately after POST (Power On Self Test) completes successfully, the BIOS locates the configured bootable media and reads some instructions from the master boot record (MBR), which occupies the first 512 bytes of the bootable media, or from the GUID partition table (GPT). The MBR contains two important sets of information: the boot loader and the partition table.
|
||||
|
||||
### What is a Boot Loader?
|
||||
|
||||
A boot loader is a small program stored in the MBR or GUID partition table that helps to load an operating system into memory. Without a boot loader, your operating system cannot be loaded into memory.
|
||||
|
||||
There are several boot loaders we can install together with Linux on our systems and in this article, we shall briefly talk about a handful of the best Linux boot loaders to work with.
|
||||
|
||||
### 1. GNU GRUB
|
||||
|
||||
GNU GRUB is a popular and probably the most widely used multiboot Linux boot loader available, based on the original GRUB (GRand Unified Bootloader), which was created by Erich Stefan Boleyn. It comes with several improvements, new features and bug fixes as enhancements of the original GRUB program.
|
||||
|
||||
Importantly, GRUB 2 has now replaced the original GRUB, which was renamed GRUB Legacy and is not actively developed; however, it can still be used for booting older systems, since bug fixes are still ongoing.
|
||||
|
||||
GRUB has the following prominent features:
|
||||
|
||||
- Supports multiboot
|
||||
- Supports multiple hardware architectures and operating systems such as Linux and Windows
|
||||
- Offers a Bash-like interactive command line interface for users to run GRUB commands as well as interact with configuration files (see the sketch after this list)
|
||||
- Enables access to GRUB editor
|
||||
- Supports setting of passwords with encryption for security
|
||||
- Supports booting from a network combined with several other minor features
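As a rough sketch of that interactive interface (the device, partition and file names below are only examples and will differ on your system), manually booting a Linux kernel from the GRUB command line looks something like this:

```
grub> ls                               # list detected disks and partitions
grub> set root=(hd0,msdos1)            # select the partition holding /boot
grub> linux /vmlinuz root=/dev/sda1    # load the kernel
grub> initrd /initrd.img               # load the initial ramdisk
grub> boot                             # boot the selected kernel
```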
|
||||
|
||||
Visit Homepage: <https://www.gnu.org/software/grub/>
|
||||
|
||||
### 2. LILO (Linux Loader)
|
||||
|
||||
LILO is a simple yet powerful and stable Linux boot loader. With the growing popularity and use of GRUB, which has come with numerous improvements and powerful features, LILO has become less popular among Linux users.
|
||||
|
||||
While it loads, the word “LILO” is displayed on the screen, and each letter appears before or after a particular event has occurred. However, development of LILO was stopped in December 2015; its main characteristics are listed below:
|
||||
|
||||
- Does not offer an interactive command line interface
|
||||
- Supports several error codes
|
||||
- Offers no support for booting from a network
|
||||
- All its files are stored in the first 1024 cylinders of a drive
|
||||
- Faces limitations with Btrfs, GPT and RAID, plus many more.
|
||||
|
||||
Visit Homepage: <http://lilo.alioth.debian.org/>
|
||||
|
||||
### 3. BURG – New Boot Loader
|
||||
|
||||
Based on GRUB, BURG is a relatively new Linux boot loader. Because it is derived from GRUB, it ships with some of the primary GRUB features; nonetheless, it also offers remarkable features such as a new object format to support multiple platforms including Linux, Windows, Mac OS, FreeBSD and beyond.
|
||||
|
||||
Additionally, it supports a highly configurable text- and graphical-mode boot menu, plus planned future improvements for it to work with various input/output devices.
|
||||
|
||||
Visit Homepage: <https://launchpad.net/burg>
|
||||
|
||||
### 4. Syslinux
|
||||
|
||||
Syslinux is an assortment of lightweight boot loaders that enable booting from CD-ROMs, from a network, and so on. It supports filesystems such as FAT for MS-DOS, and ext2, ext3, ext4 for Linux. It also supports uncompressed single-device Btrfs.
|
||||
|
||||
Note that Syslinux only accesses files in its own partition, therefore, it does not offer multi-filesystem boot capabilities.
|
||||
|
||||
Visit Homepage: <http://www.syslinux.org/wiki/index.php?title=The_Syslinux_Project>
|
||||
|
||||
### Conclusion
|
||||
|
||||
A boot loader allows you to manage multiple operating systems on your machine and select which one to use at a particular time; without one, your machine cannot load the kernel and the rest of the operating system files.
|
||||
|
||||
Have we missed any tip-top Linux boot loader here? If so, then let us know using the comment form below and suggest any commendable boot loaders that support the Linux operating system.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.tecmint.com/best-linux-boot-loaders/
|
@ -1,82 +0,0 @@
|
||||
Spark comparison: AWS vs. GCP
|
||||
===========
|
||||
|
||||
|
||||
>Tianhui Michael Li and Ariel M’ndange-Pfupfu will lead a hands-on online course Oct 10, 12, and 14, 2016: Distributed Computing with Spark for Beginners. Instruction includes building functioning applications from end-to-end and mastering critical tooling around Spark.
|
||||
|
||||
There’s little doubt that cloud computing will play an important role in data science for the foreseeable future. The flexible, scalable, on-demand computing power available is an important resource, and as a result, there’s a lot of competition between the providers of this service. Two of the biggest players in the space are [Amazon Web Services (AWS)][1] and [Google Cloud Platform (GCP)][2].
|
||||
|
||||
This article includes a short comparison of distributed Spark workloads in AWS and GCP—both in terms of setup time and operating cost. We ran this experiment with our students at The Data Incubator, [a big data training organization][3] that helps companies hire top-notch data scientists and train their employees on the latest data science skills. Even with the efficiencies built into Spark, the cost and time of distributed workloads can be substantial, and we are always looking for the most efficient technologies so our students are learning the best and fastest tools.
|
||||
|
||||
### Submitting Spark jobs to the cloud
|
||||
|
||||
Spark is a popular distributed computation engine that incorporates MapReduce-like aggregations into a more flexible, abstract framework. There are APIs for Python and Java, but writing applications in Spark’s native Scala is preferable. That makes job submission simple, as you can package your application and all its dependencies into one JAR file.
|
||||
|
||||
|
||||
It’s common to use Spark in conjunction with HDFS for distributed data storage, and YARN for cluster management; this makes Spark a perfect fit for AWS’s Elastic MapReduce (EMR) clusters and GCP’s Dataproc clusters. Both EMR and Dataproc clusters have HDFS and YARN preconfigured, with no extra work required.
|
||||
|
||||
### Configuring cloud services
|
||||
|
||||
Managing data, clusters, and jobs from the command line is more scalable than using the web interface. For AWS, this means installing and using the command-line interface (cli). You’ll have to set up your credentials beforehand as well as make a separate keypair for the EC2 instances that are used under the hood. You’ll also need to set up roles—basically permissions—for both users (making sure they have sufficient rights) and EMR itself (usually running aws emr create-default-roles in the cli is good enough to get started).
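A minimal sketch of that one-time AWS setup, assuming the cli is already installed, might look like this:

```
aws configure                   # store your access key, secret key, and default region
aws emr create-default-roles    # create the default EMR service and EC2 instance roles
```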
|
||||
|
||||
For GCP the process is more straightforward. If you install the Google Cloud SDK and sign in with your Google account, you should be able to do most things right off the bat. The thing to remember here is to enable the relevant APIs in the API Manager: Compute Engine, Dataproc, and Cloud Storage JSON.
|
||||
|
||||
Once you have things set up to your liking, the fun part begins! Using commands like aws s3 cp or gsutil cp you can copy your data into the cloud. Once you have buckets set up for your inputs, outputs, and anything else you might need, running your app is as easy as starting up a cluster and submitting the JAR file. Make sure you know where the logs are kept—it can be tricky to track down problems or bugs in a cloud environment.
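For example, on GCP the whole flow sketched above might look roughly like the following; the bucket, cluster, and JAR names are placeholders, and the worker count matches the trial described later:

```
# Copy input data to a Cloud Storage bucket
gsutil cp input.csv gs://my-bucket/input/

# Start a Dataproc cluster with one master and five workers
gcloud dataproc clusters create spark-demo --num-workers 5

# Submit the packaged Spark application
gcloud dataproc jobs submit spark --cluster spark-demo --jar gs://my-bucket/app.jar

# Tear the cluster down when the job is finished
gcloud dataproc clusters delete spark-demo
```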
|
||||
|
||||
### You get what you pay for
|
||||
|
||||
When it comes to cost, Google’s service is more affordable in several ways. First, the raw cost of purchasing computing power is cheaper. Running a Google Compute Engine machine with 4 vCPUs and 15 GB of RAM will run you $0.20 every hour, or $0.24 with Dataproc. An identically-specced AWS instance will cost you $0.336 per hour running EMR.
|
||||
|
||||
The second factor to consider is the granularity of the billing. AWS charges by the hour, so you pay the full rate even if your job takes 15 minutes. GCP charges by the minute, with a 10-minute minimum charge. This ends up being a huge difference in cost in a lot of use cases.
|
||||
|
||||
Both services have various other discounts. You can effectively bid on spare cloud capacity with AWS’s spot instances or GCP’s preemptible instances. These will be cheaper than dedicated, on-demand instances, but they’re not guaranteed to be available. Discounted rates are available on GCP if your instances live for long periods of time (25% to 100% of the month). On AWS, paying some of the costs upfront or buying in bulk can save you some money. The bottom line is, if you’re a power user and you use cloud computing on a regular or even constant basis, you’ll need to delve deeper and perform your own calculations.
|
||||
|
||||
Lastly, the costs for new users wanting to try out these services are lower for GCP. They offer a 60-day free trial with $300 in credit to use however you want. AWS only offers a free tier where certain services are free to a certain point or discounted, so you will end up paying to run Spark jobs. This means that if you want to test out Spark for the first time, you’ll have more freedom to do what you want on GCP without worrying about price.
|
||||
|
||||
### Performance comparison
|
||||
|
||||
We set up a trial to compare the performance and cost of a typical Spark workload. The trial used clusters with one master and five core instances of AWS’s m3.xlarge and GCP’s n1-standard-4. They differ slightly in specification, but the number of virtual cores and amount of memory is the same. In fact, they behaved almost identically when it came to job execution time.
|
||||
|
||||
The job itself involved parsing, filtering, joining, and aggregating data from the publicly available Stack Exchange Data Dump. We ran the same JAR on a ~50M subset of the data (Cross Validated) and then on the full ~9.5G data set.
|
||||
|
||||

|
||||
>Figure 1. Credit: Michael Li and Ariel M'ndange-Pfupfu.
|
||||
|
||||

|
||||
>Figure 2. Credit: Michael Li and Ariel M'ndange-Pfupfu.
|
||||
|
||||
The short job clearly benefited from GCP’s by-the-minute billing, being charged only for 10 minutes of cluster time, whereas AWS charged for a full hour. But even the longer job was cheaper on GCP, both because of fractional-hour billing and a lower per-unit time cost for comparable performance. It’s also worth noting that storage costs weren’t included in this comparison.
|
||||
|
||||
### Conclusion
|
||||
|
||||
AWS was the first mover in the space, and this shows in the API. Its ecosystem is vast, but its permissions model is a little dated, and its configuration is a little arcane. By contrast, Google is the shiny new entrant in this space and has polished off some of the rough edges. It is missing some features on our wishlist, like an easy way to auto-terminate clusters and detailed billing information broken down by job. Also, for managing tasks programmatically in Python, the API client library isn’t as full-featured as AWS’s Boto.
|
||||
|
||||
If you’re new to cloud computing, GCP is easier to get up and running, and the credits make it a tempting platform. Even if you are already used to AWS, you may still find the cost savings appealing, although the cost of switching may mean that moving to GCP is not worth it for you.
|
||||
|
||||
Ultimately, it’s difficult to make sweeping statements about these services because they’re not just one entity; they’re entire ecosystems of integrated parts, and both have pros and cons. The real winners are the users. As an example, at The Data Incubator, our Ph.D. data science fellows really appreciate the cost reduction as they learn about distributed workloads. And while our big data corporate training clients may be less price sensitive, they appreciate being able to crunch enterprise data faster, while holding price constant. Data scientists can now enjoy the multitude of options available, and the benefits of having a competitive cloud computing market.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.oreilly.com/ideas/spark-comparison-aws-vs-gcp?utm_source=dbweekly&utm_medium=email
|
||||
|
||||
作者:[Michael Li][a] [Ariel M'Ndange-Pfupfu][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.oreilly.com/people/76a5b-michael-li
|
||||
[b]: https://www.oreilly.com/people/Ariel-Mndange-Pfupfu
|
||||
[1]: https://aws.amazon.com/
|
||||
[2]: https://cloud.google.com/
|
||||
[3]: https://www.thedataincubator.com/training.html?utm_source=OReilly&utm_medium=blog&utm_campaign=AWSvsGCP
|
@ -1,3 +1,4 @@
|
||||
翻译中 by-zky001
|
||||
Apache Spark @Scale: A 60 TB+ production use case
|
||||
===========
|
||||
|
||||
@ -113,7 +114,7 @@ While this post details our most challenging use case for Spark, a growing numbe
|
||||
via: https://code.facebook.com/posts/1671373793181703/apache-spark-scale-a-60-tb-production-use-case/?utm_source=dbweekly&utm_medium=email
|
||||
|
||||
作者:[Sital Kedia][a] [王硕杰][b] [Avery Ching][c]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[zky001](https://github.com/zky001)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,180 +0,0 @@
|
||||
Using webpack with the Amazon Cognito Identity SDK for JavaScript
|
||||
=====
|
||||
|
||||
This blog post is aimed at developers of all experience levels who develop and deploy JavaScript based applications (whether server-side with Node.js or client side) that incorporate the AWS SDK, the Amazon Cognito Identity SDK for JavaScript and who also use the popular [webpack][1] module bundler.
|
||||
|
||||
In July 2016, Amazon Web Services [launched Amazon Cognito User Pools][2], a feature that makes it easy for developers to add sign-up and sign-in functionality to mobile and web applications. To help developers easily realize the power of user pools within their own applications, we also released the [Amazon Cognito Identity SDK for JavaScript][3].
|
||||
|
||||
Amazon Cognito User Pools allows you to easily add user sign-up and sign-in to your mobile and web applications. The fully managed user pool can scale to hundreds of millions of users and you can have multiple directories per AWS account. Creating a user pool takes just a few minutes and allows you to decide exactly which attributes (including address, email, phone number as well as custom attributes) are mandatory and even optional when a new user signs up for your application or service. Your application can also specify the desired password strength, whether the use of Multi-Factor Authentication (MFA) is required, and verify new users via phone number or email address to further enhance the security of your application.
|
||||
|
||||
If you are new to the Amazon Cognito Identity SDK for JavaScript [this AWS blog post][4] is a great place to start.
|
||||
|
||||
### Why Use Asset & Module Bundling with the Amazon Cognito Identity SDK for JavaScript
|
||||
|
||||
Today, modern web applications for both mobile and desktop have to provide the user with a secure, fast, responsive, and native-app-like experience. There is no doubt that modern browsers are extremely powerful and cater to a vast array of possible implementation approaches. Many of the more popular implementations rely heavily on the deployment of a JavaScript application through some form of asset packaging and/or module bundling. This allows a developer to take their JavaScript application and create one or more files that can be loaded by the client browser by using script tags.
|
||||
|
||||
There are many schools of thought on how you can achieve this packaging, including task runners such as [Grunt][5] and [Gulp][6], and bundlers such as [Browserify][7]. However, there is a general consensus that asset packaging is not only about improving load times; it enables the modularization of your application while ensuring testability and robustness.
|
||||
|
||||
### Using webpack with the Amazon Cognito Identity SDK for JavaScript
|
||||
|
||||
We get many requests for more detail on how to integrate the Amazon Cognito Identity SDK for JavaScript within a webpack environment; specifically, we’re asked how to ensure that webpack correctly manages the following third-party dependencies:
|
||||
|
||||
- [JavaScript BN library for BigInteger computation][8] (jsbn)
|
||||
- [an extension to jsbn][9] with the rest of the jsbn methods, including most public BigInteger methods (jsbn2)
|
||||
- the [Stanford JavaScript Crypto Library][10] (sjcl)
|
||||
|
||||
Throughout these examples, the following bower libraries are referenced in bower.json:
|
||||
|
||||
```
|
||||
"aws-cognito-sdk": "https://raw.githubusercontent.com/aws/amazon-cognito-identity-js/master/dist/aws-cognito-sdk.js",
|
||||
"amazon-cognito-identity": "https://raw.githubusercontent.com/aws/amazon-cognito-identity-js/master/dist/amazon-cognito-identity.min.js",
|
||||
"sjcl": "https://raw.githubusercontent.com/bitwiseshiftleft/sjcl/master/sjcl.js",
|
||||
"jsbn": "https://raw.githubusercontent.com/andyperlitch/jsbn/master/index.js",
|
||||
```
|
||||
|
||||
For all the reasons we gave earlier for the importance of asset packaging to development processes, and unless your application is extremely small, the use of an asset packaging tool such as webpack is almost always recommended. Of course, one could simply pull in all of these dependencies using `<script>` tags. However, this would pollute the global namespace, and not provide the most optimal resource management and loading approach. Many developers start with a standard webpack.config.js file that has a standard babel loader, as shown here.
|
||||
|
||||
```
|
||||
{
|
||||
/** test for file ending in js or jsx
|
||||
* exclude node_modules and bower_components - we don't want to babel these
|
||||
* use the babel loader
|
||||
* apply the react and es2015 (es6) transformations **/
|
||||
|
||||
test: /\.jsx?$/,
|
||||
exclude: /(node_modules|bower_components)/,
|
||||
loader: 'babel',
|
||||
query: {
|
||||
presets: ['react', 'es2015']
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
It’s important to remember that this configuration doesn’t take into account that some of the third-party dependencies used by the Amazon Cognito Identity SDK for JavaScript currently do not use the [Universal Module Definition (UMD) pattern for JavaScript][11].
|
||||
|
||||
The UMD pattern attempts to offer Asynchronous Module Definition (AMD) based compatibility with the most popular script loaders of the day such as [RequireJS][12] and [CommonJS][13].
|
||||
|
||||
This is a pattern that webpack relies on, and so we must make some changes to how webpack loads these modules. Without these changes, you may encounter errors such as the following.
|
||||
|
||||
```
|
||||
amazon-cognito-identity.min.js:19 Uncaught ReferenceError: BigInteger is not defined
|
||||
```
|
||||
|
||||
Such an error may be encountered when making a call to the authenticateUser method of AWSCognito.CognitoIdentityServiceProvider.CognitoUser. This is an example of where we can make use of the webpack imports and exports loader capability to overcome this error.
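For context, the call that typically trips over this is the standard user pool sign-in flow. A minimal sketch, assuming the pre-ES6 global AWSCognito namespace and using placeholder pool IDs, user names, and passwords, looks like this:

```
var userPool = new AWSCognito.CognitoIdentityServiceProvider.CognitoUserPool({
    UserPoolId: 'us-east-1_EXAMPLE',   // placeholder user pool id
    ClientId: 'EXAMPLECLIENTID'        // placeholder app client id
});

var cognitoUser = new AWSCognito.CognitoIdentityServiceProvider.CognitoUser({
    Username: 'testuser',
    Pool: userPool
});

var authDetails = new AWSCognito.CognitoIdentityServiceProvider.AuthenticationDetails({
    Username: 'testuser',
    Password: 'ExamplePassword123!'
});

// The SRP calculation inside authenticateUser is where BigInteger and sjcl
// are used, which is why the missing globals surface at this point.
cognitoUser.authenticateUser(authDetails, {
    onSuccess: function (result) {
        console.log('access token: ' + result.getAccessToken().getJwtToken());
    },
    onFailure: function (err) {
        console.error(err);
    }
});
```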
|
||||
|
||||
### Using webpack Loaders
|
||||
|
||||
According to the [webpack documentation][14] "loaders allow you to preprocess files as you require() or “load” them. Loaders are kind of like “tasks” are in other build tools, and provide a powerful way to handle front-end build steps. Loaders can transform files from a different language like, CoffeeScript to JavaScript, or inline images as data URLs"
|
||||
|
||||
In order to resolve the lack of UMD compatibility, you will rely on two specific loaders: imports and exports.
|
||||
|
||||
#### Using the Export Loader
|
||||
|
||||
In the case of the Amazon Cognito Identity SDK for JavaScript, we need to ensure we export the AWSCognito variables into the scope of the module that requires/imports (for ES6) them.
|
||||
|
||||
```
|
||||
{
|
||||
test: /aws-cognito-sdk\/index\.js/,
|
||||
loader: 'exports?AWSCognito'
|
||||
}
|
||||
```
|
||||
|
||||
Using the exports loader has the effect of exporting a module method within the bundle created by webpack. As a result, both AWSCognito and AWS are now accessible when required or import(ed) (for ES6).
|
||||
|
||||
```
|
||||
var AWSCognito = require('aws-cognito-sdk')
|
||||
|
||||
/*** EXPORTS from export-loader ***/
|
||||
module.exports = AWSCognito
|
||||
```
|
||||
|
||||
More information about the exports loader can be found here
|
||||
|
||||
#### Using the Import Loader
|
||||
|
||||
The import loader is mostly used to inject (import) variables into the scope of another module. This is especially useful if third-party modules are relying on global variables like BigInteger or sjcl, as is the case with the Amazon Cognito Identity SDK for JavaScript.
|
||||
|
||||
If you don’t use the webpack loader, the following is generated within the bundle.
|
||||
|
||||
```
|
||||
__webpack_require__(431); // refers to jsbn
|
||||
__webpack_require__(432); // refers to sjcl
|
||||
```
|
||||
|
||||
Because neither jsbn nor sjcl exports anything, any calls that rely on these modules will result in an error.
|
||||
|
||||
To resolve this, we can use the following webpack loader configuration:
|
||||
|
||||
```
|
||||
{
|
||||
test: /amazon-cognito-identity\/index\.js/,
|
||||
loader: 'imports?jsbn,BigInteger=>jsbn.BigInteger,sjcl'
|
||||
},
|
||||
{
|
||||
test: /sjcl\/index\.js/,
|
||||
loader: 'imports?sjcl'
|
||||
}
|
||||
```
|
||||
|
||||
This injects the following into the bundle (in this case bundle.js) created by webpack.
|
||||
|
||||
```
|
||||
/*** IMPORTS FROM imports-loader ***/
|
||||
var jsbn = __webpack_require__(431);
|
||||
var BigInteger = jsbn.BigInteger;
|
||||
var sjcl = __webpack_require__(432);
|
||||
```
|
||||
|
||||
As a result, jsbn, BigInteger and sjcl are all imported from their respective modules into Amazon Cognito Identity SDK for JavaScript.
|
||||
|
||||
More information about the import loader can be found [here][15]
|
||||
|
||||
### Next Steps
|
||||
|
||||
We encourage you to download the [Amazon Cognito Identity SDK for JavaScript][16] and start building your application. Coupled with webpack, and by following the guidance in this blog, we hope you have a smooth development experience.
|
||||
|
||||
If you have any comments or questions, please feel free to comment below, reach out via email (teichtah@amazon.com) or raise an issue [here][17].
|
||||
|
||||
### References
|
||||
|
||||
This blog post makes reference to the following third party resources
|
||||
|
||||
- webpack - https://webpack.github.io/
|
||||
- webpack documentation - http://webpack.github.io/docs/what-is-webpack.html
|
||||
- webpack exports loader - https://github.com/webpack/exports-loader
|
||||
- webpack imports loader - https://github.com/webpack/imports-loader
|
||||
- JavaScript BN library for BigInteger computation - http://www-cs-students.stanford.edu/~tjw/jsbn/jsbn.js
|
||||
- jsbns - http://www-cs-students.stanford.edu/~tjw/jsbn/jsbn2.js
|
||||
- Stanford JavaScript Crypto Library - https://github.com/bitwiseshiftleft/sjcl
|
||||
- RequireJS - http://requirejs.org/
|
||||
- CommonJS - http://www.commonjs.org/
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://mobile.awsblog.com/post/Tx1A84CLMDJ744T/Using-webpack-with-the-Amazon-Cognito-Identity-SDK-for-JavaScript?utm_source=webopsweekly&utm_medium=email
|
||||
|
||||
作者:[Marc Teichtahl ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://mobile.awsblog.com/blog/author/Marc+Teichtahl
|
||||
[1]: https://webpack.github.io/
|
||||
[2]: https://blogs.aws.amazon.com/security/post/Tx13NVD4AWG9QK9/Amazon-Cognito-Your-User-Pools-is-Now-Generally-Available
|
||||
[3]: https://github.com/aws/amazon-cognito-identity-js
|
||||
[4]: http://mobile.awsblog.com/post/Tx2O14ZY8A5LFHT/Accessing-Your-User-Pools-using-the-Amazon-Cognito-Identity-SDK-for-JavaScript
|
||||
[5]: http://gruntjs.com/
|
||||
[6]: http://gulpjs.com/
|
||||
[7]: http://browserify.org/
|
||||
[8]: http://www-cs-students.stanford.edu/~tjw/jsbn/jsbn.js
|
||||
[9]: http://www-cs-students.stanford.edu/~tjw/jsbn/jsbn2.js
|
||||
[10]: https://github.com/bitwiseshiftleft/sjcl
|
||||
[11]: https://github.com/umdjs/umd
|
||||
[12]: http://requirejs.org/
|
||||
[13]: http://www.commonjs.org/
|
||||
[14]: http://webpack.github.io/docs/what-is-webpack.html
|
||||
[15]: https://github.com/webpack/imports-loader
|
||||
[16]: https://github.com/aws/amazon-cognito-identity-js
|
||||
[17]: https://github.com/aws/amazon-cognito-identity-js/issues
|
||||
|
@ -1,436 +0,0 @@
|
||||
HOSTING .NET CORE ON LINUX WITH DOCKER - A NOOB'S GUIDE
|
||||
=====
|
||||
|
||||
This post builds on my previous [introduction to .NET Core][1]. First I upgrade that RESTful API from .NET Core RC1 to .NET Core 1.0, then I add support for Docker and describe how to host it on Linux in a production environment.
|
||||
|
||||
|
||||

|
||||
|
||||
I’m completely new to Docker and I’m far from a Linux expert, so these are very much the thoughts of a noob.
|
||||
|
||||
### INSTALLATION
|
||||
|
||||
Follow the instructions on https://www.microsoft.com/net/core to install .NET Core on your development machine. This will include the dotnet command line tool and the latest Visual Studio tooling for Windows.
|
||||
|
||||
### SOURCE CODE
|
||||
|
||||
You can jump straight to the finished source code on GitHub.
|
||||
|
||||
### CONVERTING TO .NET CORE 1.0
|
||||
|
||||
Naturally, my first port of call when thinking about how to upgrade the API from RC1 to 1.0 was to Google it with Bing. There are two pretty comprehensive guides that I followed:
|
||||
|
||||
- [Migrating from DNX to .NET Core CLI][2]
|
||||
- [Migrating from ASP.NET 5 RC1 to ASP.NET Core 1.0][3]
|
||||
|
||||
I advise reading through both of these very carefully when migrating your code because I tried to skim read the second one without reading the first one and got very confused and frustrated!
|
||||
|
||||
I won’t describe the changes in detail because you can look at the commit on GitHub. Here is a summary of what I changed:
|
||||
|
||||
- Updated version numbers on global.json and project.json
|
||||
- Removed obsolete sections from project.json
|
||||
- Using the more lightweight ControllerBase rather than Controller because I don’t need methods related to MVC views (this was an optional change)
|
||||
- Removed the Http prefix from helper methods e.g. HttpNotFound -> NotFound
|
||||
- LogVerbose -> LogTrace
|
||||
- Namespace changes: Microsoft.AspNetCore.*
|
||||
- Using SetBasePath in Startup (appsettings.json won’t be found without this)
|
||||
- Running via WebHostBuilder rather than WebApplication.Run
|
||||
- Removed Serilog (at the time of writing it does not support .NET Core 1.0)
|
||||
|
||||
The only real headache here is the need to remove Serilog. I could have implemented my own file logger, but I just deleted file logging because I didn’t want to focus on it for this exercise.
|
||||
|
||||
Unfortunately, there will be plenty of third party developers that will be playing catch up with support for .NET Core 1.0 but I have sympathy for them because they are often working in their spare time without anything close to the resources available to Microsoft. I recommend reading Travis Illig’s [.NET Core 1.0 is Released, but Where is Autofac][4]? for a third party developer’s point of view on this!
|
||||
|
||||
Having made these changes I was able to dotnet restore, dotnet build and dotnet run from the project.json directory and see the API working as before.
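In other words, from the directory that contains project.json (src/MvcLibrary in this repository), the familiar loop is just:

```
cd src/MvcLibrary
dotnet restore
dotnet build
dotnet run
```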
|
||||
|
||||
### RUNNING WITH DOCKER
|
||||
|
||||
At the time of writing, Docker only really works on Linux. There is beta support for Docker on Windows and OS X, but both rely on virtualisation, so I’ve chosen to run Ubuntu 14.04 in a VirtualBox VM. Follow these instructions if you haven’t already got Docker installed.
|
||||
|
||||
I’ve been doing a bit of reading about Docker recently but I’ve never tried to actually do anything with it until now. I’ll assume the reader has no Docker knowledge so I’ll explain all parts of the commands that I’m using.
|
||||
|
||||
#### HELLO DOCKER
|
||||
|
||||
Having installed Docker on my Ubuntu machine, my next move was to follow the instructions at https://www.microsoft.com/net/core#docker to see how to get started with .NET Core and Docker.
|
||||
|
||||
First start a container with .NET Core installed:
|
||||
|
||||
```
|
||||
docker run -it microsoft/dotnet:latest
|
||||
```
|
||||
|
||||
The -it option means interactive, so having executed this command you will be inside the container and free to run any bash commands you like.
|
||||
|
||||
Then we can run five commands to get Microsoft’s Hello World .NET Core console application running inside Docker!
|
||||
|
||||
1. mkdir hwapp
|
||||
2. cd hwapp
|
||||
3. dotnet new
|
||||
4. dotnet restore
|
||||
5. dotnet run
|
||||
|
||||
You can run exit to leave the container, then docker ps -a to show that you have created a container which has exited. You should really now tidy up that container using docker rm <container_name>, as shown below.
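That clean-up looks something like this; the --rm flag shown on the last line is an optional alternative that removes the container automatically when it exits:

```
docker ps -a                                  # list all containers, including exited ones
docker rm <container_name>                    # remove the exited container by name or id
docker run -it --rm microsoft/dotnet:latest   # next time, auto-remove the container on exit
```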
|
||||
|
||||
#### MOUNTING THE SOURCE
|
||||
|
||||
My next move was to use the same microsoft/dotnet image as above but to mount the source for my application as a data volume.
|
||||
|
||||
First check out the repository at the relevant commit:
|
||||
|
||||
1. git clone https://github.com/niksoper/aspnet5-books.git
|
||||
2. cd aspnet5-books/src/MvcLibrary
|
||||
3. git checkout dotnet-core-1.0
|
||||
|
||||
Now start a container running .NET Core 1.0 with the source located at /books. Note that you’ll need to change the /path/to/repo part to match your machine:
|
||||
|
||||
```
|
||||
docker run -it \
|
||||
-v /path/to/repo/aspnet5-books/src/MvcLibrary:/books \
|
||||
microsoft/dotnet:latest
|
||||
```
|
||||
|
||||
Now you can run the application inside the container!
|
||||
|
||||
```
|
||||
cd /books
|
||||
dotnet restore
|
||||
dotnet run
|
||||
```
|
||||
|
||||
That’s great as a proof of concept but we don’t really want to have to worry about mounting the source code into a container like this whenever we want to start the application.
|
||||
|
||||
#### ADDING A DOCKERFILE
|
||||
|
||||
The next step I took was to introduce a Dockerfile, which will allow the application to be started easily inside its own container.
|
||||
|
||||
My Dockerfile lives in the src/MvcLibrary directory alongside project.json and looks like this:
|
||||
|
||||
```
|
||||
FROM microsoft/dotnet:latest
|
||||
|
||||
# Create directory for the app source code
|
||||
RUN mkdir -p /usr/src/books
|
||||
WORKDIR /usr/src/books
|
||||
|
||||
# Copy the source and restore dependencies
|
||||
COPY . /usr/src/books
|
||||
RUN dotnet restore
|
||||
|
||||
# Expose the port and start the app
|
||||
EXPOSE 5000
|
||||
CMD [ "dotnet", "run" ]
|
||||
```
|
||||
|
||||
Strictly, the `RUN mkdir -p /usr/src/books` command is not needed because COPY will create any missing directories automatically.
|
||||
|
||||
Docker images are built in layers. We start from the image containing .NET Core and add another layer which builds the application from source and then runs it.
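Once the image has been built (next step), you can list the layers it is made of with docker history:

```
docker history niksoper/netcore-books
```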
|
||||
|
||||
Having added the Dockerfile, I then ran the following commands to build the image and start a container using that image (make sure you are in the same directory as the Dockerfile and you should really use your own username):
|
||||
|
||||
1. docker build -t niksoper/netcore-books .
|
||||
2. docker run -it niksoper/netcore-books
|
||||
|
||||
You should see that the application started listening just as before, except this time we don’t need to bother mounting the source code because it’s already contained in the docker image.
|
||||
|
||||
#### EXPOSING AND PUBLISHING A PORT
|
||||
|
||||
This API isn’t going to be very useful unless we can communicate with it from outside the container. Docker has the concept of exposing and publishing ports, which are two very different things.
|
||||
|
||||
From the official Docker documentation:
|
||||
|
||||
>The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports.
|
||||
|
||||
EXPOSE only adds metadata to the image so you can think of it as documentation for the consumers of the image. Technically, I could have left out the EXPOSE 5000 line completely because I know the port that the API is listening on but leaving it in is helpful and certainly recommended.
|
||||
|
||||
At this stage I want to access the API directly from the host so I need to use -p to publish the port - this allows a request to port 5000 on the host to be forwarded to port 5000 in the container regardless of whether the port has previously been exposed via the Dockerfile:
|
||||
|
||||
```
|
||||
docker run -d -p 5000:5000 niksoper/netcore-books
|
||||
```
|
||||
|
||||
Using -d tells docker to run the container in detached mode so we won’t see its output but it will still be running and listening on port 5000 - prove this to yourself with docker ps.
|
||||
|
||||
So then I prepared to celebrate by making a request from the host to the container:
|
||||
|
||||
```
|
||||
curl http://localhost:5000/api/books
|
||||
```
|
||||
|
||||
It didn’t work.
|
||||
|
||||
Making the same curl request repeatedly, I see one of two errors - either curl: (56) Recv failure: Connection reset by peer or curl: (52) Empty reply from server.
|
||||
|
||||
I went back to the docker run documentation and double checked I was using the -p option correctly as well as EXPOSE in the Dockerfile. I couldn’t see the problem and became a bit sad…
|
||||
|
||||
After pulling myself together, I decided to consult one of my local DevOps heroes - Dave Wybourn (also mentioned in this post on Docker Swarm). His team had run into this exact problem and the issue was the way that I had (not) configured Kestrel - the new lightweight, cross platform web server used for .NET Core.
|
||||
|
||||
By default, Kestrel will listen on http://localhost:5000. The problem here is that localhost is a hostname that resolves to the loopback interface.
|
||||
|
||||
From Wikipedia:
|
||||
|
||||
>In computer networking, localhost is a hostname that means this computer. It is used to access the network services that are running on the host via its loopback network interface. Using the loopback interface bypasses any local network interface hardware.
|
||||
|
||||
This is a problem when running inside a container because localhost can only be reached from within that container. The solution was to update the Main method in Startup.cs to configure the URLs that Kestrel will listen on:
|
||||
|
||||
```
|
||||
public static void Main(string[] args)
|
||||
{
|
||||
var host = new WebHostBuilder()
|
||||
.UseKestrel()
|
||||
.UseContentRoot(Directory.GetCurrentDirectory())
|
||||
.UseUrls("http://*:5000") // listen on port 5000 on all network interfaces
|
||||
.UseIISIntegration()
|
||||
.UseStartup<Startup>()
|
||||
.Build();
|
||||
|
||||
host.Run();
|
||||
}
|
||||
```
|
||||
|
||||
With this extra configuration in place, I could then rebuild the image and run the application in a container which will accept requests from the host:
|
||||
|
||||
1. docker build -t niksoper/netcore-books .
|
||||
2. docker run -d -p 5000:5000 niksoper/netcore-books
|
||||
3. curl -i http://localhost:5000/api/books
|
||||
|
||||
I now get the following response:
|
||||
|
||||
```
|
||||
HTTP/1.1 200 OK
|
||||
Date: Tue, 30 Aug 2016 15:25:43 GMT
|
||||
Transfer-Encoding: chunked
|
||||
Content-Type: application/json; charset=utf-8
|
||||
Server: Kestrel
|
||||
|
||||
[{"id":"1","title":"RESTful API with ASP.NET Core MVC 1.0","author":"Nick Soper"}]
|
||||
```
|
||||
|
||||
### KESTREL IN PRODUCTION
|
||||
|
||||
Microsoft’s words:
|
||||
|
||||
>Kestrel is great for serving dynamic content from ASP.NET, however the web serving parts aren’t as feature rich as full-featured servers like IIS, Apache or Nginx. A reverse proxy-server can allow you to offload work like serving static content, caching requests, compressing requests, and SSL termination from the HTTP server.
|
||||
|
||||
So I need to set up Nginx on my Linux machine to act as my reverse proxy. Microsoft spell out how to do this in Publish to a Linux Production Environment. I’ll summarise the instructions here:
|
||||
|
||||
1. Use dotnet publish to produce a self contained package for the application
|
||||
2. Copy the published application to the server
|
||||
3. Install and configure Nginx (as a reverse proxy server)
|
||||
4. Install and configure supervisor (for keeping the Kestrel server running)
|
||||
5. Enable and configure AppArmor (for limiting the resources available to an application)
|
||||
6. Configure the server firewall
|
||||
7. Secure Nginx (involves building from source and configuring SSL)
|
||||
|
||||
It’s beyond the scope of this post to cover all of that, so I’m only going to concentrate on configuring Nginx as a reverse proxy - and naturally, I’m going to use Docker to do it.
|
||||
|
||||
### RUN NGINX IN ANOTHER CONTAINER
|
||||
|
||||
My aim is to run Nginx in a second Docker container and configure it as a reverse proxy to my application container.
|
||||
|
||||
I’ve used the official Nginx image from Docker Hub. First I tried it out like this:
|
||||
|
||||
```
|
||||
docker run -d -p 8080:80 --name web nginx
|
||||
```
|
||||
|
||||
This starts a container running Nginx and maps port 8080 on the host to port 80 in the container. Hitting http://localhost:8080 in the browser now shows the default Nginx landing page.
|
||||
|
||||
Now that we’ve proved how easy it is to get Nginx running, we can kill the container.
|
||||
|
||||
```
|
||||
docker rm -f web
|
||||
```
|
||||
|
||||
### CONFIGURING NGINX AS A REVERSE PROXY
|
||||
|
||||
Nginx can be configured as a reverse proxy by editing the config file at /etc/nginx/conf.d/default.conf like this:
|
||||
|
||||
```
|
||||
server {
|
||||
listen 80;
|
||||
|
||||
location / {
|
||||
proxy_pass http://localhost:6666;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The config above will cause Nginx to proxy all requests from the root to http://localhost:6666. Remember localhost here refers to the container running Nginx. We can use our own config file inside the Nginx container using a volume:
|
||||
|
||||
```
|
||||
docker run -d -p 8080:80 \
|
||||
-v /path/to/my.conf:/etc/nginx/conf.d/default.conf \
|
||||
nginx
|
||||
```
|
||||
|
||||
Note: this maps a single file from the host to the container, rather than an entire directory.
|
||||
|
||||
### COMMUNICATING BETWEEN CONTAINERS
|
||||
|
||||
Docker allows inter-container communication using shared virtual networks. By default, all containers started by the Docker daemon will have access to a virtual network called bridge. This allows containers to be referenced from other containers on the same network via IP address and port.
|
||||
|
||||
You can discover the IP address of a running container by inspecting it. I’ll start a container from the niksoper/netcore-books image that I created earlier, and inspect it:
|
||||
|
||||
1. docker run -d -p 5000:5000 --name books niksoper/netcore-books
|
||||
2. docker inspect books
|
||||
|
||||

|
||||
|
||||
We can see this container has "IPAddress": "172.17.0.3".
|
||||
|
||||
So if I create the following Nginx config file and then start an Nginx container using that file, it will proxy requests to my API:
|
||||
|
||||
```
|
||||
server {
|
||||
listen 80;
|
||||
|
||||
location / {
|
||||
proxy_pass http://172.17.0.3:5000;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Now I can start an Nginx container using that config (note I’m mapping port 8080 on the host to port 80 on the Nginx container):
|
||||
|
||||
```
|
||||
docker run -d -p 8080:80 \
|
||||
-v ~/dev/nginx/my.nginx.conf:/etc/nginx/conf.d/default.conf \
|
||||
nginx
|
||||
```
|
||||
|
||||
A request to http://localhost:8080 will now be proxied to my application. Note the Server header in the following curl response:
|
||||
|
||||

|
||||
|
||||
### DOCKER COMPOSE
|
||||
|
||||
At this point I was fairly pleased with my progress but I thought there must be a better way of configuring Nginx without needing to know the exact IP address of the application container. Another of the local Scott Logic DevOps heroes - Jason Ebbin - stepped up at this point and suggested Docker Compose.
|
||||
|
||||
As a high level description - Docker Compose makes it very easy to start up a collection of interconnected containers using a declarative syntax. I won’t go into the details of how Docker Compose works because you can read about it in this previous post.
|
||||
|
||||
I’ll start with the docker-compose.yml file that I’m using:
|
||||
|
||||
```
|
||||
version: '2'
|
||||
services:
|
||||
books-service:
|
||||
container_name: books-api
|
||||
build: .
|
||||
|
||||
reverse-proxy:
|
||||
container_name: reverse-proxy
|
||||
image: nginx
|
||||
ports:
|
||||
- "9090:8080"
|
||||
volumes:
|
||||
- ./proxy.conf:/etc/nginx/conf.d/default.conf
|
||||
```
|
||||
|
||||
This is version 2 syntax, so you’ll need to have at least version 1.6 of Docker Compose in order for this to work.
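You can check which version of Docker Compose you have installed with:

```
docker-compose --version
```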
|
||||
|
||||
This file tells Docker to create two services - one for the application and another for the Nginx reverse proxy.
|
||||
|
||||
### BOOKS-SERVICE
|
||||
|
||||
This builds a container called books-api from the Dockerfile in the same directory as this docker-compose.yml. Note that this container does not need to publish any ports because it only needs to be accessed from the reverse-proxy container rather than the host operating system.
|
||||
|
||||
### REVERSE-PROXY
|
||||
|
||||
This starts a container called reverse-proxy based on the nginx image with a proxy.conf file mounted as the config from the current directory. It maps port 9090 on the host to port 8080 in the container which allows us to access the container from the host at http://localhost:9090.
|
||||
|
||||
The proxy.conf file looks like this:
|
||||
|
||||
```
|
||||
server {
|
||||
listen 8080;
|
||||
|
||||
location / {
|
||||
proxy_pass http://books-service:5000;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The key point here is that we can now refer to books-service by name so we don’t need to know the IP address of the books-api container!
|
||||
|
||||
Now we can start the two containers with a working reverse proxy (-d means detached so we don’t see the output from the containers):
|
||||
|
||||
```
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
Prove the containers were created:
|
||||
|
||||
```
|
||||
docker ps
|
||||
```
|
||||
|
||||
And finally confirm that we can hit the API via the reverse proxy:
|
||||
|
||||
```
|
||||
curl -i http://localhost:9090/api/books
|
||||
```
|
||||
|
||||
### WHAT’S GOING ON?
|
||||
|
||||
Docker Compose makes this happen by creating a new virtual network called mvclibrary_default which is used by both books-api and reverse-proxy containers (the name is based on the parent directory of the docker-compose.yml file).
|
||||
|
||||
Prove the network exists with docker network ls:
|
||||
|
||||

|
||||
|
||||
You can see the details of the new network using docker network inspect mvclibrary_default:
|
||||
|
||||

|
||||
|
||||
Note that Docker has assigned "Subnet": "172.18.0.0/16" to the network. The /16 part is CIDR notation: a full explanation is way beyond the scope of this post, but CIDR just refers to a range of IP addresses (here the first 16 bits are fixed, so the range runs from 172.18.0.0 to 172.18.255.255). Running docker network inspect bridge shows "Subnet": "172.17.0.0/16" so the two networks do not overlap.
|
||||
|
||||
Now docker inspect books-api to confirm the application container is using this network:
|
||||
|
||||

|
||||
|
||||
Notice the two "Aliases" for the container are the container identifier (3c42db680459) and the service name given in docker-compose.yml (books-service). We’re using the books-service alias to reference the application container in the custom Nginx configuration file. This could have been done manually with docker network create but I like Docker Compose because it wraps up container creation and interdependencies cleanly and succinctly.
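For comparison, the manual equivalent would look roughly like this (a sketch; the network name is arbitrary, and the application container is named books-service so that the proxy.conf above still resolves it):

```
docker network create books-net
docker run -d --net books-net --name books-service niksoper/netcore-books
docker run -d --net books-net -p 9090:8080 \
  -v $(pwd)/proxy.conf:/etc/nginx/conf.d/default.conf \
  nginx
```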
|
||||
|
||||
### CONCLUSION
|
||||
|
||||
So now I can get the application running on Linux with Nginx in a few easy steps, without making any lasting changes to the host operating system:
|
||||
|
||||
```
|
||||
git clone https://github.com/niksoper/aspnet5-books.git
|
||||
cd aspnet5-books/src/MvcLibrary
|
||||
git checkout blog-docker
|
||||
docker-compose up -d
|
||||
curl -i http://localhost:9090/api/books
|
||||
```
|
||||
|
||||
I know what I have described in this post is not a truly production ready setup because I’ve not spoken about any of the following, but most of these topics could take an entire post on their own:
|
||||
|
||||
- Security concerns like firewalls or SSL configuration
|
||||
- How to ensure the application keeps running
|
||||
- How to be selective about what to include in a Docker image (I dumped everything in via the Dockerfile)
|
||||
- Databases - how to manage them in containers
|
||||
|
||||
This has been a very interesting learning experience for me because for a while now I have been curious to explore the new cross platform support that comes with ASP.NET Core, and the opportunity to explore a little bit of the DevOps world using Docker Compose for a “Configuration as Code” approach has been both enjoyable and educational.
|
||||
|
||||
If you’re at all curious about Docker then I encourage you to get stuck in by trying it out - especially if this puts you out of your comfort zone. Who knows, you might enjoy it?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://blog.scottlogic.com/2016/09/05/hosting-netcore-on-linux-with-docker.html?utm_source=webopsweekly&utm_medium=email
|
||||
|
||||
作者:[Nick Soper][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://blog.scottlogic.com/nsoper
|
||||
[1]: http://blog.scottlogic.com/2016/01/20/restful-api-with-aspnet50.html
|
||||
[2]: https://docs.microsoft.com/en-us/dotnet/articles/core/migrating-from-dnx
|
||||
[3]: https://docs.asp.net/en/latest/migration/rc1-to-rtm.html
|
||||
[4]: http://www.paraesthesia.com/archive/2016/06/29/netcore-rtm-where-is-autofac/
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,310 +0,0 @@
|
||||
GHLandy translating
|
||||
|
||||
17 tar command practical examples in Linux
|
||||
=====
|
||||
|
||||
Tar (tape archive) is the most widely used command in Unix-like operating systems for creating an archive of multiple files and folders in a single archive file, and that archive file can be further compressed using gzip or bzip2. In other words, the tar command is used to take backups by archiving multiple files and directories into a single tar (archive) file, and the files and directories can later be extracted from the compressed tar file.
|
||||
|
||||
In this article we will discuss 17 practical examples of the tar command in Linux.
|
||||
|
||||
Syntax of tar command:
|
||||
|
||||
```
|
||||
# tar <options> <files>
|
||||
```
|
||||
|
||||
Some of the commonly used options of the tar command are listed below:
|
||||
|
||||

|
||||
|
||||
Note: the hyphen (-) in tar command options is optional.
|
||||
|
||||
### Example: 1 Create a tar archive file
|
||||
|
||||
Let’s create a tar file of the /etc directory and the ‘/root/anaconda-ks.cfg’ file.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -cvf myarchive.tar /etc /root/anaconda-ks.cfg
|
||||
```
|
||||
|
||||
The above command will create a tar file named “myarchive.tar” in the current folder. The tar file contains all the files and directories of the /etc folder and the anaconda-ks.cfg file.
|
||||
|
||||
In the tar command, the ‘-c’ option tells tar to create an archive, ‘-v’ is used for verbose output and the ‘-f’ option is used to specify the archive file name.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ls -l myarchive.tar
|
||||
-rw-r--r--. 1 root root 22947840 Sep 7 00:24 myarchive.tar
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
### Example:2 List the contents of tar archive file.
|
||||
|
||||
Using the ‘-t’ option in the tar command we can view the contents of a tar file without extracting it.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -tvf myarchive.tar
|
||||
```
|
||||
|
||||
We can also list a specific file or directory from the tar file. In the example below I am checking whether the ‘anaconda-ks.cfg’ file is present in the tar file or not.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -tvf myarchive.tar root/anaconda-ks.cfg
|
||||
-rw------- root/root 953 2016-08-24 01:33 root/anaconda-ks.cfg
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
### Example:3 Append or add files to end of archive or tar file.
|
||||
|
||||
The ‘-r’ option in the tar command is used to append or add a file to an existing tar file. Let’s add the /etc/fstab file to ‘data.tar’:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -rvf data.tar /etc/fstab
|
||||
```
|
||||
|
||||
Note: We can’t append files or directories to a compressed tar file.
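If you really do need to add a file to a gzip-compressed archive, one workaround (sketched here for a hypothetical data.tar.gz) is to decompress, append and recompress:

```
gunzip data.tar.gz             # produces data.tar
tar -rvf data.tar /etc/fstab   # append to the plain tar archive
gzip data.tar                  # produces data.tar.gz again
```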
|
||||
|
||||
### Example:4 Extracting files and directories from tar file.
|
||||
|
||||
The ‘-x’ option is used to extract files and directories from a tar file. Let’s extract the contents of the tar file created above.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -xvf myarchive.tar
|
||||
```
|
||||
|
||||
This command will extract all the files and directories of the myarchive tar file into the current working directory.
|
||||
|
||||
### Example:5 Extracting tar file to a particular folder.
|
||||
|
||||
If you want to extract a tar file to a particular folder or directory, use the ‘-C’ option followed by the path of that folder.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -xvf myarchive.tar -C /tmp/
|
||||
```
|
||||
|
||||
### Example:6 Extracting particular file or directory from tar file.
|
||||
|
||||
Let’s assume you want to extract only the anaconda-ks.cfg file from the tar file, into the /tmp folder.
|
||||
|
||||
Syntax :
|
||||
|
||||
```
|
||||
# tar -xvf {tar-file} {file-to-be-extracted} -C {path-where-to-extract}
|
||||
|
||||
[root@linuxtechi tmp]# tar -xvf /root/myarchive.tar root/anaconda-ks.cfg -C /tmp/
|
||||
root/anaconda-ks.cfg
|
||||
[root@linuxtechi tmp]# ls -l /tmp/root/anaconda-ks.cfg
|
||||
-rw-------. 1 root root 953 Aug 24 01:33 /tmp/root/anaconda-ks.cfg
|
||||
[root@linuxtechi tmp]#
|
||||
```
|
||||
|
||||
### Example:7 Creating and compressing tar file (tar.gz or .tgz )
|
||||
|
||||
Let’s assume that we want to create a tar file of the /etc and /opt folders and also compress it using the gzip tool. This can be achieved using the ‘-z’ option in the tar command. The extension of such tar files will be either .tar.gz or .tgz.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -zcpvf myarchive.tar.gz /etc/ /opt/
|
||||
```
|
||||
|
||||
Or
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -zcpvf myarchive.tgz /etc/ /opt/
|
||||
```
|
||||
|
||||
### Example:8 Creating and compressing tar file ( tar.bz2 or .tbz2 )
|
||||
|
||||
Let’s assume that we want to create a bzip2-compressed tar file of the /etc and /opt folders. This can be achieved by using the ‘-j’ option in the tar command. The extension of such tar files will be either .tar.bz2 or .tbz2.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -jcpvf myarchive.tar.bz2 /etc/ /opt/
|
||||
```
|
||||
|
||||
Or
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -jcpvf myarchive.tbz2 /etc/ /opt/
|
||||
```
|
||||
|
||||
### Example:9 Excluding particular files or type while creating tar file.
|
||||
|
||||
Using the ‘--exclude’ option in the tar command we can exclude particular files or file types while creating a tar file. Let’s assume we want to exclude HTML files while creating the compressed tar file.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -zcpvf myarchive.tgz /etc/ /opt/ --exclude='*.html'
|
||||
```
|
||||
|
||||
### Example:10 Listing the contents of tar.gz or .tgz file
|
||||
|
||||
The contents of a tar file with the extension .tar.gz or .tgz can be viewed using the ‘-t’ option, as shown below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -tvf myarchive.tgz | more
|
||||
.............................................
|
||||
drwxr-xr-x root/root 0 2016-09-07 08:41 etc/
|
||||
-rw-r--r-- root/root 541 2016-08-24 01:23 etc/fstab
|
||||
-rw------- root/root 0 2016-08-24 01:23 etc/crypttab
|
||||
lrwxrwxrwx root/root 0 2016-08-24 01:23 etc/mtab -> /proc/self/mounts
|
||||
-rw-r--r-- root/root 149 2016-09-07 08:41 etc/resolv.conf
|
||||
drwxr-xr-x root/root 0 2016-09-06 03:55 etc/pki/
|
||||
drwxr-xr-x root/root 0 2016-09-06 03:15 etc/pki/rpm-gpg/
|
||||
-rw-r--r-- root/root 1690 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
|
||||
-rw-r--r-- root/root 1004 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Debug-7
|
||||
-rw-r--r-- root/root 1690 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Testing-7
|
||||
-rw-r--r-- root/root 3140 2015-09-15 06:53 etc/pki/rpm-gpg/RPM-GPG-KEY-foreman
|
||||
..........................................................
|
||||
```
|
||||
|
||||
### Example:11 Listing the contents of tar.bz2 or .tbz2 file.
|
||||
|
||||
The contents of a tar file with the extension .tar.bz2 or .tbz2 can be viewed using the ‘-t’ option, as shown below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -tvf myarchive.tbz2 | more
|
||||
........................................................
|
||||
rwxr-xr-x root/root 0 2016-08-24 01:25 etc/pki/java/
|
||||
lrwxrwxrwx root/root 0 2016-08-24 01:25 etc/pki/java/cacerts -> /etc/pki/ca-trust/extracted/java/cacerts
|
||||
drwxr-xr-x root/root 0 2016-09-06 02:54 etc/pki/nssdb/
|
||||
-rw-r--r-- root/root 65536 2010-01-12 15:09 etc/pki/nssdb/cert8.db
|
||||
-rw-r--r-- root/root 9216 2016-09-06 02:54 etc/pki/nssdb/cert9.db
|
||||
-rw-r--r-- root/root 16384 2010-01-12 16:21 etc/pki/nssdb/key3.db
|
||||
-rw-r--r-- root/root 11264 2016-09-06 02:54 etc/pki/nssdb/key4.db
|
||||
-rw-r--r-- root/root 451 2015-10-21 09:42 etc/pki/nssdb/pkcs11.txt
|
||||
-rw-r--r-- root/root 16384 2010-01-12 15:45 etc/pki/nssdb/secmod.db
|
||||
drwxr-xr-x root/root 0 2016-08-24 01:26 etc/pki/CA/
|
||||
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/certs/
|
||||
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/crl/
|
||||
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/newcerts/
|
||||
drwx------ root/root 0 2015-06-29 08:48 etc/pki/CA/private/
|
||||
drwx------ root/root 0 2015-11-20 06:34 etc/pki/rsyslog/
|
||||
drwxr-xr-x root/root 0 2016-09-06 03:44 etc/pki/pulp/
|
||||
..............................................................
|
||||
```
|
||||
|
||||
### Example:12 Extracting or unzip tar.gz or .tgz files.
|
||||
|
||||
Tar files with the extension .tar.gz or .tgz are extracted with the ‘-x’ and ‘-z’ options, as shown below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -zxpvf myarchive.tgz -C /tmp/
|
||||
```
|
||||
|
||||
The above command will extract the tar file into the /tmp folder.
|
||||
|
||||
Note: Nowadays the tar command detects the compression type automatically while extracting, so it is optional for us to specify the compression option. An example is shown below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -xpvf myarchive.tgz -C /tmp/
|
||||
```
|
||||
|
||||
### Example:13 Extracting or unzip tar.bz2 or .tbz2 files
|
||||
|
||||
Tar files with the extension .tar.bz2 or .tbz2 are extracted with the ‘-j’ and ‘-x’ options, as shown below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -jxpvf myarchive.tbz2 -C /tmp/
|
||||
```
|
||||
|
||||
Or
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar xpvf myarchive.tbz2 -C /tmp/
|
||||
```
|
||||
|
||||
### Example:14 Scheduling backup with tar command
|
||||
|
||||
There are real-world scenarios where we have to create tar files of particular files and directories for backup purposes on a daily basis. Let’s suppose we have to take a backup of the whole /opt folder every day; this can be achieved by creating a cron job for the tar command. An example is shown below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -zcvf optbackup-$(date +%Y-%m-%d).tgz /opt/
|
||||
```
|
||||
|
||||
Create a cron job for the above command, as sketched below.
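A minimal sketch of such a crontab entry (the schedule and the /backups destination are assumptions to adapt; note that % must be escaped as \% inside a crontab):

```
# Run every day at 01:30 and write the dated archive to /backups
30 1 * * * tar -zcf /backups/optbackup-$(date +\%Y-\%m-\%d).tgz /opt/ >/dev/null 2>&1
```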
|
||||
|
||||
### Example:15 Creating compressed archive or tar file with -T and -X option.
|
||||
|
||||
There are scenarios where we want the tar command to take its input from a file that lists the paths of the files and directories to be archived and compressed, and there might also be some files, listed in another file, that we would like to exclude from the archive.
|
||||
|
||||
In the tar command, the input file is specified after the ‘-T’ option and the file containing the exclude list is specified after the ‘-X’ option.
|
||||
|
||||
Let’s suppose we want to archive and compress the directories /etc, /opt and /home, and want to exclude the files ‘/etc/sysconfig/kdump’ and ‘/etc/sysconfig/foreman’. Create the text files ‘/root/tar-include’ and ‘/root/tar-exclude’ and put the following contents in the respective files:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# cat /root/tar-include
|
||||
/etc
|
||||
/opt
|
||||
/home
|
||||
[root@linuxtechi ~]#
|
||||
[root@linuxtechi ~]# cat /root/tar-exclude
|
||||
/etc/sysconfig/kdump
|
||||
/etc/sysconfig/foreman
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Now run the command below to create and compress the archive file:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar zcpvf mybackup-$(date +%Y-%m-%d).tgz -T /root/tar-include -X /root/tar-exclude
|
||||
```
|
||||
|
||||
### Example:16 View the size of .tar , .tgz and .tbz2 file.
|
||||
|
||||
Use the commands below to get a rough idea of the size of tar and compressed tar files; each command pipes a compressed copy of the file through wc -c and prints the byte count:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# tar -czf - data.tar | wc -c
|
||||
427
|
||||
[root@linuxtechi ~]# tar -czf - mybackup-2016-09-09.tgz | wc -c
|
||||
37956009
|
||||
[root@linuxtechi ~]# tar -czf - myarchive.tbz2 | wc -c
|
||||
30835317
|
||||
[root@linuxtechi ~]#
|
||||
```
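If you simply want the size of the archive files as stored on disk, du (or ls -lh) is a simpler alternative:

```
[root@linuxtechi ~]# du -h data.tar mybackup-2016-09-09.tgz myarchive.tbz2
```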
|
||||
|
||||
### Example:17 Splitting big tar file into smaller files.
|
||||
|
||||
In Unix-like operating systems, a big file can be divided or split into smaller files using the split command. A big tar file can also be divided into smaller parts the same way.
|
||||
|
||||
Let’s assume we want to split the ‘mybackup-2016-09-09.tgz’ file into smaller parts of 6 MB each.
|
||||
|
||||
```
|
||||
Syntax: split -b <size, e.g. 6M> <tar-file-name>.<extension> <prefix-name>
|
||||
```
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# split -b 6M mybackup-2016-09-09.tgz mybackup-parts
|
||||
```
|
||||
|
||||
The above command will split the mybackup compressed tar file into smaller files of 6 MB each in the current working directory, with file names running from mybackup-partsaa to mybackup-partsag. If you want numeric suffixes in place of letters, use the ‘-d’ option in the above split command.
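For example, to get numeric suffixes instead (same 6 MB chunk size):

```
[root@linuxtechi ~]# split -b 6M -d mybackup-2016-09-09.tgz mybackup-parts
```

This produces files named mybackup-parts00, mybackup-parts01 and so on.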
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ls -l mybackup-parts*
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsaa
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsab
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsac
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsad
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsae
|
||||
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsaf
|
||||
-rw-r--r--. 1 root root 637219 Sep 10 03:05 mybackup-partsag
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Now we can move these parts to another server over the network and then merge all of them back into a single compressed tar file using the command below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# cat mybackup-partsa* > mybackup-2016-09-09.tgz
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
That’s all; I hope you liked these different examples of the tar command. Please share your feedback and comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxtechi.com/17-tar-command-examples-in-linux/
|
||||
|
||||
作者:[Pradeep Kumar ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.linuxtechi.com/author/pradeep/
|
@ -1,127 +0,0 @@
|
||||
translating by Chao-zhi
|
||||
|
||||
15 Top Open Source Artificial Intelligence Tools
|
||||
====
|
||||
|
||||
Artificial Intelligence (AI) is one of the hottest areas of technology research. Companies like IBM, Google, Microsoft, Facebook and Amazon are investing heavily in their own R&D, as well as buying up startups that have made progress in areas like machine learning, neural networks, natural language and image processing. Given the level of interest, it should come as no surprise that a recent [artificial intelligence report][1] from experts at Stanford University concluded that "increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030."
|
||||
|
||||
In a recent [article][2], we provided an overview of 45 AI projects that seem particularly promising or interesting. In this slideshow, we're focusing in on open source artificial intelligence tools, with a closer look at fifteen of the best-known open source AI projects.
|
||||
|
||||

|
||||
|
||||
Open Source Artificial Intelligence
|
||||
|
||||
These open source AI applications are on the cutting edge of artificial intelligence research.
|
||||
|
||||

|
||||
|
||||
### 1. Caffe
|
||||
|
||||
The brainchild of a UC Berkeley PhD candidate, Caffe is a deep learning framework based on expressive architecture and extensible code. Its claim to fame is its speed, which makes it popular with both researchers and enterprise users. According to its website, it can process more than 60 million images in a single day using just one NVIDIA K40 GPU. It is managed by the Berkeley Vision and Learning Center (BVLC), and companies like NVIDIA and Amazon have made grants to support its development.
|
||||
|
||||

|
||||
|
||||
### 2. CNTK
|
||||
|
||||
Short for Computational Network Toolkit, CNTK is one of Microsoft's open source artificial intelligence tools. It boasts outstanding performance whether it is running on a system with only CPUs, a single GPU, multiple GPUs or multiple machines with multiple GPUs. Microsoft has primarily utilized it for research into speech recognition, but it is also useful for applications like machine translation, image recognition, image captioning, text processing, language understanding and language modeling.
|
||||
|
||||

|
||||
|
||||
### 3. Deeplearning4j
|
||||
|
||||
Deeplearning4j is an open source deep learning library for the Java Virtual Machine (JVM). It runs in distributed environments and integrates with both Hadoop and Apache Spark. It makes it possible to configure deep neural networks, and it's compatible with Java, Scala and other JVM languages.
|
||||
|
||||
The project is managed by a commercial company called Skymind, which offers paid support, training and an enterprise distribution of Deeplearning4j.
|
||||
|
||||

|
||||
|
||||
### 4. Distributed Machine Learning Toolkit
|
||||
|
||||
Like CNTK, the Distributed Machine Learning Toolkit (DMTK) is one of Microsoft's open source artificial intelligence tools. Designed for use in big data applications, it aims to make it faster to train AI systems. It consists of three key components: the DMTK framework, the LightLDA topic model algorithm, and the Distributed (Multisense) Word Embedding algorithm. As proof of DMTK's speed, Microsoft says that on an eight-cluster machine, it can "train a topic model with 1 million topics and a 10-million-word vocabulary (for a total of 10 trillion parameters), on a document collection with over 100-billion tokens," a feat that is unparalleled by other tools.
|
||||
|
||||

|
||||
|
||||
### 5. H2O
|
||||
|
||||
Focused more on enterprise uses for AI than on research, H2O has large companies like Capital One, Cisco, Nielsen Catalina, PayPal and Transamerica among its users. It claims to make it possible for anyone to use the power of machine learning and predictive analytics to solve business problems. It can be used for predictive modeling, risk and fraud analysis, insurance analytics, advertising technology, healthcare and customer intelligence.
|
||||
|
||||
It comes in two open source versions: standard H2O and Sparkling Water, which is integrated with Apache Spark. Paid enterprise support is also available.
|
||||
|
||||

|
||||
|
||||
### 6. Mahout
|
||||
|
||||
An Apache Foundation project, Mahout is an open source machine learning framework. According to its website, it offers three major features: a programming environment for building scalable algorithms, premade algorithms for tools like Spark and H2O, and a vector-math experimentation environment called Samsara. Companies using Mahout include Adobe, Accenture, Foursquare, Intel, LinkedIn, Twitter, Yahoo and many others. Professional support is available through third parties listed on the website.
|
||||
|
||||

|
||||
|
||||
### 7. MLlib
|
||||
|
||||
Known for its speed, Apache Spark has become one of the most popular tools for big data processing. MLlib is Spark's scalable machine learning library. It integrates with Hadoop and interoperates with both NumPy and R. It includes a host of machine learning algorithms for classification, regression, decision trees, recommendation, clustering, topic modeling, feature transformations, model evaluation, ML pipeline construction, ML persistence, survival analysis, frequent itemset and sequential pattern mining, distributed linear algebra and statistics.
|
||||
|
||||

|
||||
|
||||
### 8. NuPIC
|
||||
|
||||
Managed by a company called Numenta, NuPIC is an open source artificial intelligence project based on a theory called Hierarchical Temporal Memory, or HTM. Essentially, HTM is an attempt to create a computer system modeled after the human neocortex. The goal is to create machines that "approach or exceed human level performance for many cognitive tasks."
|
||||
|
||||
In addition to the open source license, Numenta also offers NuPic under a commercial license, and it also offers licenses on the patents that underlie the technology.
|
||||
|
||||

|
||||
|
||||
### 9. OpenNN
|
||||
|
||||
Designed for researchers and developers with an advanced understanding of artificial intelligence, OpenNN is a C++ programming library for implementing neural networks. Its key features include deep architectures and fast performance. Extensive documentation is available on the website, including an introductory tutorial that explains the basics of neural networks. Paid support for OpenNN is available through Artelnics, a Spain-based firm that specializes in predictive analytics.
|
||||
|
||||

|
||||
|
||||
### 10. OpenCyc
|
||||
|
||||
Developed by a company called Cycorp, OpenCyc provides access to the Cyc knowledge base and commonsense reasoning engine. It includes more than 239,000 terms, about 2,093,000 triples, and about 69,000 owl:sameAs links to external semantic data namespaces. It is useful for rich domain modeling, semantic data integration, text understanding, domain-specific expert systems and game AIs. The company also offers two other versions of Cyc: one for researchers that is free but not open source and one for enterprise use that requires a fee.
|
||||
|
||||

|
||||
|
||||
### 11. Oryx 2
|
||||
|
||||
Built on top of Apache Spark and Kafka, Oryx 2 is a specialized application development framework for large-scale machine learning. It utilizes a unique lambda architecture with three tiers. Developers can use Oryx 2 to create new applications, and it also includes some pre-built applications for common big data tasks like collaborative filtering, classification, regression and clustering. The big data tool vendor Cloudera created the original Oryx 1 project and has been heavily involved in continuing development.
|
||||
|
||||

|
||||
|
||||
### 12. PredictionIO
|
||||
|
||||
In February this year, Salesforce bought PredictionIO, and then in July, it contributed the platform and its trademark to the Apache Foundation, which accepted it as an incubator project. So while Salesforce is using PredictionIO technology to advance its own machine learning capabilities, work will also continue on the open source version. It helps users create predictive engines with machine learning capabilities that can be used to deploy Web services that respond to dynamic queries in real time.
|
||||
|
||||

|
||||
|
||||
### 13. SystemML
|
||||
|
||||
First developed by IBM, SystemML is now an Apache big data project. It offers a highly-scalable platform that can implement high-level math and algorithms written in R or a Python-like syntax. Enterprises are already using it to track customer service on auto repairs, to direct airport traffic and to link social media data with banking customers. It can run on top of Spark or Hadoop.
|
||||
|
||||

|
||||
|
||||
### 14. TensorFlow
|
||||
|
||||
TensorFlow is one of Google's open source artificial intelligence tools. It offers a library for numerical computation using data flow graphs. It can run on a wide variety of different systems with single- or multi-CPUs and GPUs and even runs on mobile devices. It boasts deep flexibility, true portability, automatic differential capabilities and support for Python and C++. The website includes a very extensive list of tutorials and how-tos for developers or researchers interested in using or extending its capabilities.
|
||||
|
||||

|
||||
|
||||
### 15. Torch
|
||||
|
||||
Torch describes itself as "a scientific computing framework with wide support for machine learning algorithms that puts GPUs first." The emphasis here is on flexibility and speed. In addition, it's fairly easy to use with packages for machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking. It relies on a scripting language called LuaJIT that is based on Lua.
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.datamation.com/open-source/slideshows/15-top-open-source-artificial-intelligence-tools.html
|
||||
|
||||
作者:[Cynthia Harvey][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.datamation.com/author/Cynthia-Harvey-6460.html
|
||||
[1]: https://ai100.stanford.edu/sites/default/files/ai_100_report_0906fnlc_single.pdf
|
||||
[2]: http://www.datamation.com/applications/artificial-intelligence-software-45-ai-projects-to-watch-1.html
|
@ -1,3 +1,4 @@
|
||||
LinuxBars 翻译中
|
||||
8 best practices for building containerized applications
|
||||
====
|
||||
|
||||
|
@ -1,257 +0,0 @@
|
||||
Content Security Policy, Your Future Best Friend
|
||||
=====
|
||||
|
||||
A long time ago, my personal website was attacked. I do not know how it happened, but it happened. Fortunately, the damage from the attack was quite minor: A piece of JavaScript was inserted at the bottom of some pages. I updated the FTP and other credentials, cleaned up some files, and that was that.
|
||||
|
||||
One point made me mad: At the time, there was no simple solution that could have informed me there was a problem and — more importantly — that could have protected the website’s visitors from this annoying piece of code.
|
||||
|
||||
A solution exists now, and it is a technology that succeeds in both roles. Its name is content security policy (CSP).
|
||||
|
||||
### What Is A CSP?
|
||||
|
||||
The idea is quite simple: By sending a CSP header from a website, you are telling the browser what it is authorized to execute and what it is authorized to block.
|
||||
|
||||
Here is an example with PHP:
|
||||
|
||||
```
|
||||
<?php
|
||||
header("Content-Security-Policy: <your directives>");
|
||||
?>
|
||||
```
|
||||
|
||||
#### SOME DIRECTIVES
|
||||
|
||||
You may define global rules or define rules related to a type of asset:
|
||||
|
||||
```
|
||||
default-src 'self' ;
|
||||
# self = same port, same domain name, same protocol => OK
|
||||
```
|
||||
|
||||
The base argument is default-src: If no directive is defined for a type of asset, then the browser will use this value.
|
||||
|
||||
```
|
||||
script-src 'self' www.google-analytics.com ;
|
||||
# JS files on these domains => OK
|
||||
```
|
||||
|
||||
In this example, we’ve authorized the domain name www.google-analytics.com as a source of JavaScript files to use on our website. We’ve added the keyword 'self'; if we redefined the directive script-src with another rule, it would override default-src rules.
|
||||
|
||||
If no scheme or port is specified, then it enforces the same scheme or port from the current page. This prevents mixed content. If the page is https://example.com, then you wouldn’t be able to load http://www.google-analytics.com/file.js because it would be blocked (the scheme wouldn’t match). However, there is an exception to allow a scheme upgrade. If http://example.com tries to load https://www.google-analytics.com/file.js, then the scheme or port would be allowed to change to facilitate the scheme upgrade.
|
||||
|
||||
```
|
||||
style-src 'self' data: ;
|
||||
# Data-Uri in a CSS => OK
|
||||
```
|
||||
|
||||
In this example, the keyword data: authorizes embedded content in CSS files.
|
||||
|
||||
Under the CSP level 1 specification, you may also define rules for the following:
|
||||
|
||||
- `img-src`
|
||||
|
||||
valid sources of images
|
||||
|
||||
- `connect-src`
|
||||
|
||||
applies to XMLHttpRequest (AJAX), WebSocket or EventSource
|
||||
|
||||
- `font-src`
|
||||
|
||||
valid sources of fonts
|
||||
|
||||
- `object-src`
|
||||
|
||||
valid sources of plugins (for example, `<object>, <embed>, <applet>`)
|
||||
|
||||
- `media-src`
|
||||
|
||||
valid sources of `<audio> and <video>`
|
||||
|
||||
|
||||
CSP level 2 rules include the following:
|
||||
|
||||
- `child-src`
|
||||
|
||||
valid sources of web workers and elements such as `<frame>` and `<iframe>` (this replaces the deprecated frame-src from CSP level 1)
|
||||
|
||||
- `form-action`
|
||||
|
||||
valid sources that can be used as an HTML `<form>` action
|
||||
|
||||
- `frame-ancestors`
|
||||
|
||||
valid sources for embedding the resource using `<frame>, <iframe>, <object>, <embed> or <applet>`.
|
||||
|
||||
- `upgrade-insecure-requests`
|
||||
|
||||
instructs user agents to rewrite URL schemes, changing HTTP to HTTPS (for websites with a lot of old URLs that need to be rewritten).
|
||||
|
||||
For better backwards-compatibility with deprecated properties, you may simply copy the contents of the actual directive and duplicate them in the deprecated one. For example, you may copy the contents of child-src and duplicate them in frame-src.
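For example (the YouTube source here is purely illustrative), you would send both directives with the same value:

```
child-src 'self' https://www.youtube.com ;
# duplicated so that CSP level 1 browsers, which only know frame-src, apply the same rule
frame-src 'self' https://www.youtube.com ;
```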
|
||||
|
||||
CSP 2 allows you to whitelist paths (CSP 1 allows only domains to be whitelisted). So, rather than whitelisting all of www.foo.com, you could whitelist www.foo.com/some/folder to restrict it further. This does require CSP 2 support in the browser, but it is obviously more secure.
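In practice, that looks like this (using the www.foo.com example above):

```
# CSP level 1: the whole domain has to be whitelisted
script-src www.foo.com ;
# CSP level 2: only a specific folder on that domain is allowed
script-src www.foo.com/some/folder/ ;
```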
|
||||
|
||||
#### AN EXAMPLE
|
||||
|
||||
I made a simple example for the Paris Web 2015 conference, where I presented a talk entitled “[CSP in Action][1].”
|
||||
Without CSP, the page would look like this:
|
||||
|
||||

|
||||
|
||||
Not very nice. What if we enabled the following CSP directives?
|
||||
|
||||
```
|
||||
<?php
|
||||
header("Content-Security-Policy:
|
||||
default-src 'self' ;
|
||||
script-src 'self' www.google-analytics.com stats.g.doubleclick.net ;
|
||||
style-src 'self' data: ;
|
||||
img-src 'self' www.google-analytics.com stats.g.doubleclick.net data: ;
|
||||
frame-src 'self' ;");
|
||||
?>
|
||||
```
|
||||
|
||||
What would the browser do? It would (very strictly) apply these directives under the primary rule of CSP, which is that anything not authorized in a CSP directive will be blocked (“blocked” meaning not executed, not displayed and not used by the website).
|
||||
|
||||
By default in CSP, inline scripts and styles are not authorized, which means that every `<script>`, onclick or style attribute will be blocked. You could authorize inline CSS with `style-src 'unsafe-inline';`.
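For example, a directive that allows same-origin stylesheets as well as inline styles would look like this (bearing in mind that 'unsafe-inline' weakens the protection):

```
style-src 'self' 'unsafe-inline' ;
```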
|
||||
|
||||
In a modern browser with CSP support, the example would look like this:
|
||||
|
||||

|
||||
|
||||
What happened? The browser applied the directives and rejected anything that was not authorized. It sent these notifications to the console:
|
||||
|
||||

|
||||
|
||||
If you’re still not convinced of the value of CSP, have a look at Aaron Gustafson’s article “[More Proof We Don’t Control Our Web Pages][2].”
|
||||
|
||||
Of course, you may use stricter directives than the ones in the example provided above:
|
||||
|
||||
- set default-src to 'none',
|
||||
- specify what you need for each rule,
|
||||
- specify the exact paths of required files,
|
||||
- etc.
|
||||
|
||||
|
||||
### More Information On CSP
|
||||
|
||||
#### SUPPORT
|
||||
|
||||
CSP is not a nightly feature requiring three flags to be activated in order for it to work. CSP levels 1 and 2 are candidate recommendations! [Browser support for CSP level 1][3] is excellent.
|
||||
|
||||

|
||||
|
||||
The [level 2 specification][4] is more recent, so it is a bit less supported.
|
||||
|
||||

|
||||
|
||||
CSP level 3 is an early draft now, so it is not yet supported, but you can already do great things with levels 1 and 2.
|
||||
|
||||
#### OTHER CONSIDERATIONS
|
||||
|
||||
CSP has been designed to reduce cross-site scripting (XSS) risks, which is why enabling inline scripts in script-src directives is not recommended. Firefox illustrates this issue very nicely: In the browser, hit Shift + F2 and type security csp, and it will show you directives and advice. For example, here it is used on Twitter’s website:
|
||||
|
||||

|
||||
|
||||
Another possibility for inline scripts or inline styles, if you really have to use them, is to create a hash value. For example, suppose you need to have this inline script:
|
||||
|
||||
```
|
||||
<script>alert('Hello, world.');</script>
|
||||
```
|
||||
|
||||
You might add 'sha256-qznLcsROx4GACP2dm0UCKCzCG-HiZ1guq6ZZDob_Tng=' as a valid source in your script-src directives. The hash generated is the result of this in PHP:
|
||||
|
||||
```
|
||||
<?php
|
||||
echo base64_encode(hash('sha256', "alert('Hello, world.');", true));
|
||||
?>
|
||||
```
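If you prefer to generate the hash from the command line, an equivalent sketch using OpenSSL is:

```
echo -n "alert('Hello, world.');" | openssl dgst -sha256 -binary | openssl base64
```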
|
||||
|
||||
I said earlier that CSP is designed to reduce XSS risks — I could have added, “… and reduce the risks of unsolicited content.” With CSP, you have to know where your sources of content are and what they are doing on your front end (inline styles, etc.). CSP can also help you force contributors, developers and others to respect your rules about sources of content!
|
||||
|
||||
Now your question is, “OK, this is great, but how do we use it in a production environment?”
|
||||
|
||||
### How To Use It In The Real World
|
||||
|
||||
The easiest way to get discouraged with using CSP the first time is to test it in a live environment, thinking, “This will be easy. My code is bad ass and perfectly clean.” Don’t do this. I did it. It’s stupid, trust me.
|
||||
|
||||
As I explained, CSP directives are activated with a CSP header — there is no middle ground. You are the weak link here. You might forget to authorize something or forget a piece of code on your website. CSP will not forgive your oversight. However, two features of CSP greatly simplify this problem.
|
||||
|
||||
#### REPORT-URI
|
||||
|
||||
Remember the notifications that CSP sends to the console? The directive report-uri can be used to tell the browser to send them to the specified address. Reports are sent in JSON format.
|
||||
|
||||
```
|
||||
report-uri /csp-parser.php ;
|
||||
```
|
||||
|
||||
So, in the csp-parser.php file, we can process the data sent by the browser. Here is the most basic example in PHP:
|
||||
|
||||
```
|
||||
<?php
// EMAIL and SUBJECT are constants you define yourself (recipient address and mail subject).
$data = file_get_contents('php://input');
|
||||
|
||||
if ($data = json_decode($data, true)) {
|
||||
$data = json_encode(
|
||||
$data,
|
||||
JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES
|
||||
);
|
||||
mail(EMAIL, SUBJECT, $data);
|
||||
}
|
||||
```
|
||||
|
||||
This notification will be transformed into an email. During development, you might not need anything more complex than this.
|
||||
|
||||
For a production environment (or a more visited development environment), you’d better use a way other than email to collect information, because there is no auth or rate limiting on the endpoint, and CSP can be very noisy. Just imagine a page that generates 100 CSP notifications (for example, a script that display images from an unauthorized source) and that is viewed 100 times a day — you could get 10,000 notifications a day!
|
||||
|
||||
A service such as report-uri.io can be used to simplify the management of reporting. You can see other simple examples for report-uri (with a database, with some optimizations, etc.) on GitHub.
|
||||
|
||||
### REPORT-ONLY
|
||||
|
||||
As we have seen, the biggest issue is that there is no middle ground between CSP being enabled and disabled. However, a feature named report-only sends a slightly different header:
|
||||
|
||||
```
|
||||
<?php
|
||||
header("Content-Security-Policy-Report-Only: <your directives>");
|
||||
?>
|
||||
```
|
||||
|
||||
Basically, this tells the browser, “Act as if these CSP directives were being applied, but do not block anything. Just send me the notifications.” It is a great way to test directives without the risk of blocking any required assets.
|
||||
|
||||
With report-only and report-uri, you can test CSP directives with no risk, and you can monitor in real time everything CSP-related on a website. These two features are really powerful for deploying and maintaining CSP!
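Putting the two together, a test deployment could send something like this (reusing the /csp-parser.php endpoint from earlier; the directives themselves are just an example):

```
<?php
header("Content-Security-Policy-Report-Only: default-src 'self'; script-src 'self' www.google-analytics.com; report-uri /csp-parser.php;");
?>
```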
|
||||
|
||||
### Conclusion
|
||||
|
||||
#### WHY CSP IS COOL
|
||||
|
||||
CSP is most important for your users: They don’t have to suffer any unsolicited scripts or content or XSS vulnerabilities on your website.
|
||||
|
||||
The most important advantage of CSP for website maintainers is awareness. If you’ve set strict rules for image sources, and a script kiddie attempts to insert an image on your website from an unauthorized source, that image will be blocked, and you will be notified instantly.
|
||||
|
||||
Developers, meanwhile, need to know exactly what their front-end code does, and CSP helps them master that. They will be prompted to refactor parts of their code (avoiding inline functions and styles, etc.) and to follow best practices.
|
||||
|
||||
#### HOW CSP COULD BE EVEN COOLER
|
||||
|
||||
Ironically, CSP is too efficient in some browsers — it creates bugs with bookmarklets. So, do not update your CSP directives to allow bookmarklets. We can’t blame any one browser in particular; all of them have issues:
|
||||
|
||||
- Firefox
|
||||
- Chrome (Blink)
|
||||
- WebKit
|
||||
|
||||
Most of the time, the bugs are false positives in blocked notifications. All browser vendors are working on these issues, so we can expect fixes soon. Anyway, this should not stop you from using CSP.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.smashingmagazine.com/2016/09/content-security-policy-your-future-best-friend/?utm_source=webopsweekly&utm_medium=email
|
||||
|
||||
作者:[Nicolas Hoffmann][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.smashingmagazine.com/author/nicolashoffmann/
|
||||
[1]: https://rocssti.net/en/example-csp-paris-web2015
|
||||
[2]: https://www.aaron-gustafson.com/notebook/more-proof-we-dont-control-our-web-pages/
|
||||
[3]: http://caniuse.com/#feat=contentsecuritypolicy
|
||||
[4]: http://caniuse.com/#feat=contentsecuritypolicy2
|
||||
|
@ -1,44 +0,0 @@
|
||||
Five Linux Server Distros Worth Checking Out
|
||||
====
|
||||
|
||||
>Pretty much any of the nearly 300 Linux distributions you'll find listed on Distrowatch can be made to work as servers. Here are those that stand out above the rest.
|
||||
|
||||

|
||||
|
||||
Pretty much any of the nearly 300 Linux distributions you'll find listed on Distrowatch can be made to work as servers. Since Linux's earliest days, users have been provisioning "all purpose" distributions such as Slackware, Debian and Gentoo to do heavy lifting as servers for home and business. That may be fine for the hobbyist, but it's a lot of unnecessary work for the professional.
|
||||
|
||||
From the beginning, however, there have been distributions with no other purpose but to serve files and applications, help workstations share common peripherals, serve up web pages and all the other things we ask servers to do, whether in the cloud, in a data center or on a shelf in a utility closet.
|
||||
|
||||
Here's a look at four of the most used Linux server distros, as well as one distro that might fit the bill for smaller businesses.
|
||||
|
||||
**Red Hat Enterprise Linux**: Perhaps the best known of Linux server distros, RHEL has a reputation for being a solid distribution ready for the most demanding mission critical tasks -- like running the New York Stock Exchange for instance. It's also backed by Red Hat's best-of-breed support.
|
||||
|
||||
The downside? While Red Hat is known for offering customer service and support that's second to none, its support subscriptions aren't cheap. Some might point out, however, that you get what you pay for. Cheaper third party support for RHEL is available, but you might want to do some research before going that route.
|
||||
|
||||
**CentOS**: Anyone who likes RHEL but would like to avoid shoveling money to Red Hat for support should take a look at CentOS, which is basically an RHEL fork. Although it's been around since 2004, in 2014 it became officially sponsored by Red Hat, which now employs most of the project's developers. This means that security patches and bug fixes are made available to CentOS soon after they're pushed to Red Hat.
|
||||
|
||||
If you're going to deploy CentOS, you'll need people with Linux skills on staff, because as far as technical support goes, you're mainly on your own. The good news is that the CentOS community offers excellent resources, such as mailing lists, web forums, and chat rooms, so help is available to those who search.
|
||||
|
||||
**Ubuntu Server**: When Canonical announced many years back that it was coming out with a server edition of Ubuntu, you could hear the snickers. Laughter turned into amazement rather quickly, however, as Ubuntu Server rapidly took hold. This was partly due to the DNA it shares as a derivative of Debian, which has long been a favorite base for Linux servers. Ubuntu filled a gap by adding affordable technical support, superior hardware support, developer tools and lots of polish.
|
||||
|
||||
How popular is Ubuntu Server? Recent figures show it being the most deployed operating system both on OpenStack and on the Amazon Elastic Compute Cloud, where it outpaces the second-place Amazon Linux AMI by a mile and leaves third-place Windows in the virtual dust. Another study shows it as the most used Linux web server.
|
||||
|
||||
**SUSE Linux Enterprise Server**: This German distro has a large base of users in Europe, and was a top server distro on this side of the Atlantic until PR issues arose after it was bought by Novell in the early part of the century. With those days long behind it, SUSE has been gaining ground in the US, and its use will probably accelerate now that HPE is naming it as its preferred Linux partner.
|
||||
|
||||
SUSE Linux Enterprise Server, or SLES, is stable and easy to maintain, which you'd expect for a distro that's been around for nearly as long as Linux itself. Affordable 24/7 "rapid-response" technical support is available, making it suitable for mission critical deployments.
|
||||
|
||||
**ClearOS**: Based on RHEL, ClearOS is included here because it's simple enough for anyone, even most non-techies, to configure. Targeted at small to medium sized businesses, it can also be used as an entertainment server by home users. Using a web-based administration interface for ease-of-use, it's built with the premise in mind that "building your IT infrastructure should be as simple as downloading apps on your smart phone."
|
||||
|
||||
The latest release, version 7.2, includes capabilities that might not be expected from a "lightweight" offering, such as VM support which includes Microsoft Hyper-V, support for the XFS and BTRFS file systems, as well as support for LVM caching and IPv6. It's available in a free version or in an inexpensive "professional" version that comes with a variety of support options.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://windowsitpro.com/industry/five-linux-server-distros-worth-checking-out
|
||||
|
||||
作者:[Christine Hall][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://windowsitpro.com/industry/five-linux-server-distros-worth-checking-out
|
@ -1,3 +1,5 @@
|
||||
translating by Chao-zhi
|
||||
|
||||
4 big ways companies benefit from having open source program offices
|
||||
====
|
||||
|
||||
|
@ -1,58 +0,0 @@
|
||||
How to Use Markdown in WordPress to Improve Workflow
|
||||
====
|
||||
|
||||

|
||||
|
||||
Markdown is a simple markup language that helps you format your plain text documents with minimal effort. You may be used to formatting your articles using HTML or the Visual Editor in WordPress, but using markdown makes formatting a lot easier, and you can always export it to several formats including (but not limited to) HTML.
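For instance, a short snippet like this (made up for illustration) is all it takes to produce a heading, emphasis, a link and a list once it is converted to HTML:

```
## A simple heading

Some *emphasized* text, a [link](https://example.com) and a list:

- First item
- Second item
```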
|
||||
|
||||
WordPress does not come with native markdown support, but there are plugins that can add this functionality to your website if you so desire.
|
||||
|
||||
In this tutorial I will demonstrate how to use the popular WP-Markdown plugin to add markdown support to a WordPress website.
|
||||
|
||||
### Installation
|
||||
|
||||
You can install this plugin directly by navigating to “Plugins -> Add New” and entering “[wp-markdown][1]” in the search box provided. The plugin should appear as the first option on the list. Click “Install Now” to install it.
|
||||
|
||||

|
||||
|
||||
### Configuration
|
||||
|
||||
Once you have installed the plugin and activated it, navigate to “Settings -> Writing” in the menu and scroll down until you get to the markdown section.
|
||||
|
||||
You can enable markdown support in posts, pages and comments. You can also enable a help bar for your post editor or comments which could be handy if you’re just learning the markdown syntax.
|
||||
|
||||

|
||||
|
||||
If you include code snippets in your blog posts, enabling the “Prettify syntax highlighter” option will automatically provide syntax highlighting for your code snippets.
|
||||
|
||||
Once you are satisfied with your selections, click “Save Changes” to save your settings.
|
||||
|
||||
### Write your posts with Markdown
|
||||
|
||||
Once you have enabled markdown support on your website, you can start using it right away.
|
||||
|
||||
Create a new post by going to “Posts -> Add New.” You will notice that the default Visual and Plain Text editors have been replaced by the markdown editor.
|
||||
|
||||
If you did not enable the markdown help bar in the configuration options, you will not see a live preview of your formatted markdown. Nonetheless, as long as your syntax is correct, your markdown will be converted to valid HTML when you save or publish the post.
|
||||
|
||||
However, if you’re a beginner to markdown and the live preview feature is important to you, simply go back to the settings to enable the help bar option, and you will get a nice live preview area at the bottom of your posts. In addition, you also get some buttons on top that will help you quickly insert markdown syntax into your posts.
|
||||
|
||||

|
||||
|
||||
### Wrap up
|
||||
|
||||
As you can see, adding markdown support to a WordPress website is really easy, and it will only take a few minutes of your time. If you are completely new to markdown, you might also check out our [markdown cheatsheet][2], which provides a comprehensive reference to the markdown syntax.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/use-markdown-in-wordpress/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
|
||||
|
||||
作者:[Ayo Isaiah][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.maketecheasier.com/author/ayoisaiah/
|
||||
[1]: https://wordpress.org/plugins/wp-markdown/
|
||||
[2]: https://www.maketecheasier.com/productive-with-markdown-cheatsheet/
|
@ -1,207 +0,0 @@
|
||||
Monitoring Docker Containers with Elasticsearch and cAdvisor
|
||||
=======
|
||||
|
||||
If you’re running a Swarm Mode cluster or even a single Docker engine, you’ll end up asking this question:
|
||||
|
||||
>How do I keep track of all that’s happening?
|
||||
|
||||
The answer is “not easily.”
|
||||
|
||||
You need a few things to have a complete overview of stuff like:
|
||||
|
||||
|
||||
1. Number and status of containers
|
||||
2. If, where, and when a container has been moved to another node
|
||||
3. Number of containers on a given node
|
||||
4. Traffic peaks at a given time
|
||||
5. Orphan volumes and networks
|
||||
6. Free disk space, free inodes
|
||||
7. Number of containers against number of veths attached to the docker0 and docker_gwbridge bridges
|
||||
8. Up and down Swarm nodes
|
||||
9. Centralized logs
|
||||
|
||||
The goal of this post is to demonstrate the use of [Elasticsearch][1] + [Kibana][2] + [cAdvisor][3] as tools to analyze and gather metrics and visualize dashboards for Docker containers.
|
||||
|
||||
Later on in this post, you can find a dashboard trying to address a few points from the previous list. There are also points that can’t be addressed by simply using cAdvisor, like the status of Swarm Mode nodes.
|
||||
|
||||
Also, if you have specific needs that aren’t covered by cAdvisor or another tool, I encourage you to write your own data collector and data shipper (e.g., [Beats][4]). Note that I won’t be showing you how to centralize Docker container logs on Elasticsearch.
|
||||
|
||||
>[“How do you keep track of all that’s happening in a Swarm Mode cluster? Not easily.” via @fntlnz][5]
|
||||
|
||||
### Why Do We Need to Monitor Containers?
|
||||
|
||||
Imagine yourself in the classic situation of managing a virtual machine, either just one or several. You are a tmux hero, so you have your sessions preconfigured to do basically everything, monitoring included. There’s a problem in production? You just do a top, htop, iotop, jnettop, whatevertop on all your machines, and you’re ready for troubleshooting!
|
||||
|
||||
Now imagine that you have the same three nodes but split into 50 containers. You need some history displayed nicely in a single place where you can perform queries to know what happened instead of just risking your life in front of those ncurses tools.
|
||||
|
||||
### What Is the Elastic Stack?
|
||||
|
||||
The Elastic Stack is a set of tools composed of:
|
||||
|
||||
- Elasticsearch
|
||||
- Kibana
|
||||
- Logstash
|
||||
- Beats
|
||||
|
||||
We’re going to use a few open-source tools from the Elastic Stack, such as Elasticsearch for the JSON-based analytics engine and Kibana to visualize data and create dashboards.
|
||||
|
||||
Another important piece of the Elastic Stack is Beats, but in this post, we’re focused on containers. There’s no official Beat for Docker, so we’ll just use cAdvisor, which can natively talk with Elasticsearch.
|
||||
|
||||
cAdvisor is a tool that collects, aggregates, and exports metrics about running containers. In our case, those metrics are being exported to an Elasticsearch storage.
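To give a concrete idea of what that looks like (the Elasticsearch address is a placeholder, and the storage-driver flags may differ between cAdvisor releases), running cAdvisor with its Elasticsearch storage driver enabled looks roughly like this:

```
docker run -d --name=cadvisor \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  google/cadvisor:latest \
  -storage_driver=elasticsearch \
  -storage_driver_es_host="http://192.168.0.10:9200"
```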
|
||||
|
||||
Two cool facts about cAdvisor are:
|
||||
|
||||
- It’s not limited to Docker containers.
|
||||
- It has its own webserver with a simple dashboard to visualize gathered metrics for the current node.
|
||||
|
||||
### Set Up a Test Cluster or BYOI
|
||||
|
||||
As I did in my previous posts, my habit is to provide a small script to allow the reader to set up a test environment on which to try out my project’s steps in no time. So you can use the following not-for-production-use script to set up a little Swarm Mode cluster with Elasticsearch running as a container.
|
||||
|
||||
>If you have enough time/experience, you can BYOI (Bring Your Own Infrastructure).
|
||||
|
||||
|
||||
To follow this post, you’ll just need:
|
||||
|
||||
- One or more nodes running the Docker daemon >= 1.12
|
||||
- At least a stand-alone Elasticsearch node 2.4.X
|
||||
|
||||
Again, note that this post is not about setting up a production-ready Elasticsearch cluster. A single node cluster is not recommended for production. So if you’re planning a production installation, please refer to [Elastic guidelines][6].
|
||||
|
||||
### A friendly note for early adopters
|
||||
|
||||
I’m usually an early adopter (and I’m already using the latest alpha version in production, of course). But for this post, I chose not to use the latest Elasticsearch 5.0.0 alpha. Their roadmap is not perfectly clear to me, and I don’t want to be the root cause of your problems!
|
||||
|
||||
So the Elasticsearch reference version for this post is the latest stable version, 2.4.0 at the moment of writing.
|
||||
|
||||
### Test cluster setup script
|
||||
|
||||
As said, I wanted to provide this script for everyone who would like to follow the blog without having to figure out how to create a Swarm cluster and install an Elasticsearch. Of course, you can skip this if you choose to use your own Swarm Mode engines and your own Elasticsearch nodes.
|
||||
|
||||
To execute the setup script, you’ll need:
|
||||
|
||||
- [Docker Machine][7] – latest version: to provision Docker engines on DigitalOcean
|
||||
- [DigitalOcean API Token][8]: to allow docker-machine to start nodes on your behalf
|
||||
|
||||

|
||||
|
||||
### Create Cluster Script
|
||||
|
||||
Now that you have everything we need, you can copy the following script in a file named create-cluster.sh:
|
||||
|
||||
```
|
||||
#!/usr/bin/env bash
|
||||
#
|
||||
# Create a Swarm Mode cluster with a single master and a configurable number of workers
|
||||
|
||||
workers=${WORKERS:-"worker1 worker2"}
|
||||
|
||||
#######################################
|
||||
# Creates a machine on Digital Ocean
|
||||
# Globals:
|
||||
# DO_ACCESS_TOKEN The token needed to access DigitalOcean's API
|
||||
# Arguments:
|
||||
# $1 the actual name to give to the machine
|
||||
#######################################
|
||||
create_machine() {
|
||||
docker-machine create \
|
||||
-d digitalocean \
|
||||
--digitalocean-access-token=$DO_ACCESS_TOKEN \
|
||||
--digitalocean-size 2gb \
|
||||
$1
|
||||
}
|
||||
|
||||
#######################################
|
||||
# Executes a command on the specified machine
|
||||
# Arguments:
|
||||
# $1 The machine on which to run the command
|
||||
# $2..$n The command to execute on that machine
|
||||
#######################################
|
||||
machine_do() {
|
||||
docker-machine ssh $@
|
||||
}
|
||||
|
||||
main() {
|
||||
|
||||
if [ -z "$DO_ACCESS_TOKEN" ]; then
|
||||
echo "Please export a DigitalOcean Access token: https://cloud.digitalocean.com/settings/api/tokens/new"
|
||||
echo "export DO_ACCESS_TOKEN=<yourtokenhere>"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ -z "$WORKERS" ]; then
|
||||
echo "You haven't provided your workers by setting the \$WORKERS environment variable, using the default ones: $workers"
|
||||
fi
|
||||
|
||||
# Create the first and only master
|
||||
echo "Creating the master"
|
||||
|
||||
create_machine master1
|
||||
|
||||
master_ip=$(docker-machine ip master1)
|
||||
|
||||
# Initialize the swarm mode on it
|
||||
echo "Initializing the swarm mode"
|
||||
machine_do master1 docker swarm init --advertise-addr $master_ip
|
||||
|
||||
# Obtain the token to allow workers to join
|
||||
worker_tkn=$(machine_do master1 docker swarm join-token -q worker)
|
||||
echo "Worker token: ${worker_tkn}"
|
||||
|
||||
# Create and join the workers
|
||||
for worker in $workers; do
|
||||
echo "Creating worker ${worker}"
|
||||
create_machine $worker
|
||||
machine_do $worker docker swarm join --token $worker_tkn $master_ip:2377
|
||||
done
|
||||
}
|
||||
|
||||
main $@
|
||||
```
|
||||
|
||||
And make it executable:
|
||||
|
||||
```
|
||||
chmod +x create-cluster.sh
|
||||
```
|
||||
|
||||
### Create the cluster
|
||||
|
||||
As the name suggests, we’ll use the script to create the cluster. By default, the script will create a cluster with a single master and two workers. If you want to configure the number of workers, you can do that by setting the WORKERS environment variable.
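For example, to get three workers instead of the default two, you could export the variable before running the script (the worker names here are arbitrary):

```
export WORKERS="worker1 worker2 worker3"
```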
|
||||
|
||||
Now, let’s create that cluster!
|
||||
|
||||
```
|
||||
./create-cluster.sh
|
||||
```
|
||||
|
||||
Ok, now you can go out for a coffee. This will take a while.
|
||||
|
||||
Finally the cluster is ready!
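If you want to double-check the result, a couple of quick commands run from the same shell should show the provisioned machines and the Swarm nodes:

```
docker-machine ls
docker-machine ssh master1 docker node ls
```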
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.codeship.com/monitoring-docker-containers-with-elasticsearch-and-cadvisor/
|
||||
|
||||
作者:[Lorenzo Fontana][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://blog.codeship.com/author/lorenzofontana/
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
[1]: https://github.com/elastic/elasticsearch
|
||||
[2]: https://github.com/elastic/kibana
|
||||
[3]: https://github.com/google/cadvisor
|
||||
[4]: https://github.com/elastic/beats
|
||||
[5]: https://twitter.com/share?text=%22How+do+you+keep+track+of+all+that%27s+happening+in+a+Swarm+Mode+cluster%3F+Not+easily.%22+via+%40fntlnz&url=https://blog.codeship.com/monitoring-docker-containers-with-elasticsearch-and-cadvisor/
|
||||
[6]: https://www.elastic.co/guide/en/elasticsearch/guide/2.x/deploy.html
|
||||
[7]: https://docs.docker.com/machine/install-machine/
|
||||
[8]: https://cloud.digitalocean.com/settings/api/tokens/new
|
@ -1,62 +0,0 @@
|
||||
Ryver: Why You Should Be Using It instead of Slack
|
||||
=====
|
||||
|
||||
It seems like everyone has heard of Slack, a team communication tool that can be used across multiple platforms to stay in the loop. It has revolutionised the way users discuss and plan projects, and it’s a clear upgrade over email.
|
||||
|
||||
I work in small writing teams, and I’ve never had a problem communicating with others on my phone or computer while using it. If you want to keep up to date with a team of any size, it’s a great way to stay in the loop.
|
||||
|
||||
So, why are we here? Ryver is supposed to be the next big thing, offering an upgraded service in comparison to Slack. It’s completely free, and they’re pushing for a larger share of the market.
|
||||
|
||||
Is it good enough to be a Slack killer? What are the differences between these two similar-sounding services?
|
||||
|
||||
Read on to find out more.
|
||||
|
||||
### Why Ryver?
|
||||
|
||||

|
||||
|
||||
Why mess with something that works? The developers at Ryver are well aware of Slack, and they’re hoping their improved service will be enough to make you switch over. They promise a completely free team-communication service with no hidden charges along the way.
|
||||
|
||||
Thankfully, they deliver on their main aim with a high quality product.
|
||||
|
||||
Extra content is the name of the game, and they promise to remove some of the limits you’ll find on a free account with Slack. Unlimited data storage is a major plus point, and it’s also more open in a number of ways. If storage limits are an issue for you, you have to check out Ryver.
|
||||
|
||||
It’s a simple system to use, as it was built so that all functions are always one click away. It’s a mantra used to great success by Apple, and there aren’t many growing pains when you first get started.
|
||||
|
||||

|
||||
|
||||
Conversations are split between personal chats and public posts, and it means there’s a clear line between team platforms and personal use. It should help to avoid broadcasting any embarrassing announcements to your colleagues, and I’ve seen a few during my time as a Slack user.
|
||||
|
||||
Integration with a number of existing apps is supported, and there are native applications for most platforms.
|
||||
|
||||
You can add guests when needed at no additional cost, and it’s useful if you deal with external clients regularly. Guests can add more guests, so there’s an element of fluidity that isn’t seen with the more popular option.
|
||||
|
||||
Think of Ryver as a completely different service that will cater to different needs. If you need to deal with numerous clients on the same account, it’s worth trying out.
|
||||
|
||||
The question is: how is it free? The quick answer is that premium users will be paying your way. Like Spotify and other services, there’s a minority paying for the rest of us. Here’s a direct link to their download page if you’re interested in giving it a go.
|
||||
|
||||
### Should You Switch to Ryver?
|
||||
|
||||

|
||||
|
||||
Slack is great as long as you stick to smaller teams like I do, but Ryver has a lot to offer. The idea of a completely free team messaging program is noble, and it works perfectly.
|
||||
|
||||
There’s nothing wrong with using both, so make sure to try out the competition if you’re not willing to pay for a premium Slack account. You might find that both are better in different situations, depending on what you need.
|
||||
|
||||
Above all, Ryver is a great free alternative, and it’s more than just a Slack clone. They have a clear idea of what they’re trying to achieve, and they have a decent product that offers something different in a crowded marketplace.
|
||||
|
||||
However, there’s a chance that it will disappear if there’s a sustained lack of funding in the future. It could leave your teams and discussions in disarray. Everything is fine for now, but be careful if you plan to export a larger business over to the new upstart.
|
||||
|
||||
If you’re tired of Slack’s limitations on a free account, you’ll be impressed by what Ryver has to offer. To learn more, check out their website for information about the service.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/why-use-ryver-instead-of-slack/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
|
||||
|
||||
作者:[James Milin-Ashmore][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.maketecheasier.com/author/james-ashmore/
|
@ -1,52 +0,0 @@
|
||||
Taskwarrior: A Brilliant Command-Line TODO App For Linux
|
||||
====
|
||||
|
||||
Taskwarrior is a simple, straight-forward command-line based TODO app for Ubuntu/Linux. This open-source app has to be one of the easiest of all [CLI based apps][4] I've ever used. Taskwarrior helps you better organize yourself, and without installing bulky new apps which sometimes defeats the whole purpose of TODO apps.
|
||||
|
||||

|
||||
|
||||
### Taskwarrior: A Simple CLI Based TODO App That Gets The Job Done!
|
||||
|
||||
Taskwarrior is an open-source, cross-platform, command-line based TODO app which lets you manage your to-do lists right from the Terminal. The app lets you add tasks, shows you the list, and removes tasks from that list with ease. What's more, it's available within your default repositories, so there's no need to fiddle with PPAs. In Ubuntu 16.04 LTS and similar, do the following in Terminal to install Taskwarrior.
|
||||
|
||||
```
|
||||
sudo apt-get install task
|
||||
```
|
||||
|
||||
A simple use case can be as follows:
|
||||
|
||||
```
|
||||
$ task add Read a book
|
||||
Created task 1.
|
||||
$ task add priority:H Pay the bills
|
||||
Created task 2.
|
||||
```
|
||||
|
||||
This is the same example I used in the screenshot above. Yes, you can set priority levels (H, L or M) as shown. And then you can use 'task' or 'task next' commands to see your newly-created todo list. For example:
|
||||
|
||||
```
|
||||
$ task next
|
||||
|
||||
ID Age P Description Urg
|
||||
-- --- - -------------------------------- ----
|
||||
2 10s H Pay the bills 6
|
||||
1 20s Read a book 0
|
||||
```
|
||||
|
||||
And once it's completed, you can use the 'task 1 done' or 'task 2 done' commands to clear the list (an example follows below). A more comprehensive list of commands and use-cases [can be found here][1]. Also, Taskwarrior is cross-platform, which means you'll find a version that [fits your needs][2] no matter what. There's even an [Android version][3] if you want one. Enjoy!
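As mentioned above, marking the first task as done looks roughly like this (the exact confirmation text may differ between Taskwarrior versions):

```
$ task 1 done
Completed task 1 'Read a book'.
Completed 1 task.
```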
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+techdrivein+%28Tech+Drive-in%29
|
||||
|
||||
作者:[Manuel Jose ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+techdrivein+%28Tech+Drive-in%29
|
||||
[1]: https://taskwarrior.org/docs/
|
||||
[2]: https://taskwarrior.org/download/
|
||||
[3]: https://taskwarrior.org/news/news.20160225.html
|
||||
[4]: http://www.techdrivein.com/search/label/Terminal
|
@ -1,95 +0,0 @@
|
||||
The Five Principles of Monitoring Microservices
|
||||
====
|
||||
|
||||

|
||||
|
||||
The need for microservices can be summed up in just one word: speed. The need to deliver more functionality and reliability faster has revolutionized the way developers create software. Not surprisingly, this change has caused ripple effects within software management, including monitoring systems. In this post, we’ll focus on the radical changes required to monitor your microservices in production efficiently. We’ll lay out five guiding principles for adapting your monitoring approach for this new software architecture.
|
||||
|
||||
Monitoring is a critical piece of the control systems of microservices, as the more complex your software gets, the harder it is to understand its performance and troubleshoot problems. Given the dramatic changes to software delivery, however, monitoring needs an overhaul to perform well in a microservice environment. The rest of this article presents the five principles of monitoring microservices, as follows:
|
||||
|
||||
1. Monitor containers and what’s inside them.
|
||||
2. Alert on service performance, not container performance.
|
||||
3. Monitor services that are elastic and multi-location.
|
||||
4. Monitor APIs.
|
||||
5. Map your monitoring to your organizational structure.
|
||||
|
||||
Leveraging these five principles will allow you to establish more effective monitoring as you make your way towards microservices. They will help you address both the technological changes associated with microservices and the organizational changes that come with them.
|
||||
|
||||
### The Principles of Microservice Monitoring
|
||||
|
||||
#### 1. Monitor Containers and What’s Running Inside Them
|
||||
|
||||
Containers gained prominence as the building blocks of microservices. The speed, portability, and isolation of containers made it easy for developers to embrace a microservice model. There’s been a lot written on the benefits of containers so we won’t recount it all here.
|
||||
|
||||
Containers are black boxes to most systems that live around them. That’s incredibly useful for development, enabling a high level of portability from development through production, from developer laptop to cloud. But when it comes to operating, monitoring and troubleshooting a service, black boxes make common activities harder, leading us to wonder: what’s running in the container? How is the application/code performing? Is it spitting out important custom metrics? From the DevOps perspective, you need deep visibility inside containers rather than just knowing that some containers exist.
|
||||
|
||||

|
||||
|
||||
The typical process for instrumentation in a non-containerized environment — an agent that lives in the user space of a host or VM — doesn’t work particularly well for containers. That’s because containers benefit from being small, isolated processes with as few dependencies as possible.
|
||||
|
||||
And, at scale, running thousands of monitoring agents for even a modestly-sized deployment is an expensive use of resources and an orchestration nightmare. Two potential solutions arise for containers: 1) ask your developers to instrument their code directly, or 2) leverage a universal kernel-level instrumentation approach to see all application and container activity on your hosts. We won’t go into depth here, but each method has pros and cons.
|
||||
|
||||
#### 2. Leverage Orchestration Systems to Alert on Service Performance
|
||||
|
||||
Making sense of operational data in a containerized environment is a new challenge. The metrics of a single container have a much lower marginal value than the aggregate information from all the containers that make up a function or a service.
|
||||
|
||||
This particularly applies to application-level information, like which queries have the slowest response times or which URLs are seeing the most errors, but also applies to infrastructure-level monitoring, like which services’ containers are using the most resources beyond their allocated CPU shares.
|
||||
|
||||
Increasingly, software deployment requires an orchestration system to “translate” a logical application blueprint into physical containers. Common orchestration systems include Kubernetes, Mesosphere DC/OS and Docker Swarm. Teams use an orchestration system to (1) define their microservices and (2) understand the current state of each service in deployment. You could argue that the orchestration system is even more important than the containers. The actual containers are ephemeral — they matter only for the short time that they exist — while your services matter for the life of their usefulness.
|
||||
|
||||
DevOps teams should redefine alerts to focus on characteristics that get as close to monitoring the experience of the service as possible. These alerts are the first line of defense in assessing if something is impacting the application. But getting to these alerts is challenging, if not impossible, unless your monitoring system is container-native.
|
||||
|
||||
Container-native solutions leverage orchestration metadata to dynamically aggregate container and application data and calculate monitoring metrics on a per-service basis. Depending on your orchestration tool, you might have different layers of a hierarchy that you’d like to drill into. For example, in Kubernetes, you typically have a Namespace, ReplicaSets, Pods and some containers. Aggregating at these various layers is essential for logical troubleshooting, regardless of the physical deployment of the containers that make up the service.
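As a simple illustration (the namespace and label below are hypothetical), the kind of service-level view such aggregation relies on is every pod backing one service, listed across all nodes:

```
# List every pod behind the (hypothetical) "checkout" service, regardless of which node runs it
kubectl get pods --namespace=shop -l app=checkout -o wide
```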
|
||||
|
||||

|
||||
|
||||
#### 3. Be Prepared for Services that are Elastic and Multi-Location
|
||||
|
||||
Elastic services are certainly not a new concept, but the velocity of change is much faster in container-native environments than virtualized environments. Rapidly changing environments can wreak havoc on brittle monitoring systems.
|
||||
|
||||
Monitoring legacy systems frequently required manual tuning of metrics and checks based on individual deployments of software. This tuning can be as specific as defining the individual metrics to be captured, or configuring collection based on what application is operating in a particular container. While that may be acceptable on a small scale (think tens of containers), it would be unbearable in anything larger. Microservice-focused monitoring must be able to comfortably grow and shrink in step with elastic services, without human intervention.
|
||||
|
||||
For example, if the DevOps team must manually define what service a container is included in for monitoring purposes, they will no doubt drop the ball as Kubernetes or Mesos spins up new containers regularly throughout the day. Similarly, if Ops were required to install a custom stats endpoint when new code is built and pushed into production, challenges may arise as developers pull base images from a Docker registry.
|
||||
|
||||
In production, build monitoring toward a sophisticated deployment that spans multiple data centers or multiple clouds. Leveraging, for example, AWS CloudWatch will only get you so far if your services span your private data center as well as AWS. That leads back to implementing a monitoring system that can span these different locations as well as operate in dynamic, container-native environments.
|
||||
|
||||
#### 4. Monitor APIs
|
||||
|
||||
In microservice environments, APIs are the lingua franca. They are essentially the only elements of a service that are exposed to other teams. In fact, response and consistency of the API may be the “internal SLA” even if there isn’t a formal SLA defined.
|
||||
|
||||
As a result, API monitoring is essential. API monitoring can take many forms, but it clearly must go beyond binary up/down checks. For instance, it’s valuable to understand the most frequently used endpoints as a function of time. This allows teams to see if anything noticeable has changed in the usage of services, whether it be due to a design change or a user change.
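Even a crude check can go one step beyond a binary probe by recording the status code and latency of each endpoint; for example (the URL is hypothetical):

```
# Report the HTTP status and total response time for one endpoint
curl -s -o /dev/null -w "HTTP %{http_code} in %{time_total}s\n" http://orders.internal/api/v1/orders
```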
|
||||
|
||||
You can also consider the slowest endpoints of your service, as these can reveal significant problems, or, at the very least, point to areas that need the most optimization in your system.
|
||||
|
||||
Finally, the ability to trace service calls through your system represents another critical capability. While typically used by developers, this type of profiling will help you understand the overall user experience while breaking information down into infrastructure and application-based views of your environment.
|
||||
|
||||
#### 5. Map Monitoring to Your Organizational Structure
|
||||
|
||||
While most of this post has been focused on the technological shift in microservices and monitoring, like any technology story, this is as much about people as it is about software bits.
|
||||
|
||||
As Conway’s law reminds us, the design of systems is defined by the organizational structure of the teams building them. The allure of creating faster, more agile software has pushed teams to think about restructuring their development organization and the rules that govern it.
|
||||
|
||||

|
||||
|
||||
So if an organization wants to benefit from this new software architecture approach, its teams must mirror microservices themselves. That means smaller, loosely coupled teams that can choose their own direction as long as it still meets the needs of the whole. Within each team, there is more control than ever over the languages used, how bugs are handled, and even operational responsibilities.
|
||||
|
||||
DevOps teams can enable a monitoring platform that does exactly this: allows each microservice team to isolate their alerts, metrics, and dashboards, while still giving operations a view into the global system.
|
||||
|
||||
### Conclusion
|
||||
|
||||
There’s one clear trigger event that precipitated the move to microservices: speed. Organizations wanted to deliver more capabilities to their customers in less time. Once this happened, technology stepped in: the architectural move to microservices and the underlying shift to containers make that speed happen. Anything that gets in the way of this progress train is going to get run over on the tracks.
|
||||
|
||||
As a result, the fundamental principles of monitoring need to adapt to the underlying technology and organizational changes that accompany microservices. Operations teams that recognize this shift can adapt to microservices earlier and easier.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
|
||||
|
||||
作者:[Apurva Dave][a] [Loris Degioanni][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://thenewstack.io/author/apurvadave/
|
||||
[b]: http://thenewstack.io/author/lorisdegioanni/
|
@ -1,85 +0,0 @@
|
||||
Down and dirty with Windows Nano Server 2016
|
||||
====
|
||||
|
||||

|
||||
|
||||
>Nano Server is a very fast, powerful tool for remotely administering Windows servers, but you need to know what you're doing
|
||||
|
||||
There's been a good deal of talk around the [upcoming Nano version of Windows Server 2016][1], the remote-administered, command-line version designed with private clouds and datacenters in mind. But there's also a big difference between talking about it and getting your hands into it. Let's get into the guts.
|
||||
|
||||
Nano has no local login, is 64-bit all the way (applications, tools, and agents), and is fast to set up, update, and restart (for the rare times it needs to restart). It's perfect for compute hosts in or out of a cluster, a storage host, a DNS server, an IIS web server, and any server-hosting applications running in a container or virtual-machine guest operating system.
|
||||
|
||||
A Nano Server isn't all that fun to play with: You have to know what you want to accomplish. Otherwise, you'll be looking at a remote PowerShell connection and wondering what you're supposed to do next. But if you know what you want, it's very fast and powerful.
|
||||
|
||||
Microsoft has provided a [quick-start guide][2] to setting up Nano Server. Here, I take the boots-on-the-ground approach to show you what it's like in the real world.
|
||||
|
||||
First, you have to create a .vhd virtual hard drive file. As you can see in Figure 1, I had a few issues with files not being in the right place. PowerShell errors often indicate a mistyped line, but in this case, I had to keep double-checking where I put the files so that it could use the ISO information (which has to be copied and pasted to the server you want to create the .vhd file on). Once you have everything in place, you should see it go through the process of creating the .vhd file.
|
||||
|
||||

|
||||
>Figure 1: One of the many file path errors I got when trying to run the New-NanoServerImage script. Once I worked out the file-location issues, it went through and created the .vhd file (as shown here).
|
||||
|
||||
Next, when you create the VM in Hyper-V using the VM wizard, you need to point to an existing virtual hard disk and point to the new .vhd file you created (Figure 2).
|
||||
|
||||

|
||||
>Figure 2: Connecting to a virtual hard disk (the one you created at the start).
|
||||
|
||||
When you start up the Nano server, you may get a memory error depending on how much memory you allocated and how much memory the Hyper-V server has left if you have other VMs running. I had to shut off a few VMs and increase the RAM until it finally started up. That was unexpected -- [Microsoft's Nano system][3] requirements say you can run it with 512MB, although it recommends you give it at least 800MB. (I ended up allocating 8GB after 1GB didn't work; I was impatient, so I didn't try increments in between.)
|
||||
|
||||
I finally came to the login screen, then signed in to get the Nano Server Recovery Console (Figure 3), which is essentially Nano server's terminal screen.
|
||||
|
||||

|
||||
>Figure 3: The Nano Server Recovery Console.
|
||||
|
||||
Once I was in, I thought I was golden. But in trying to figure out a few details (how to join a domain, how to inject drivers I might not have, how to add roles), I realized that some configuration pieces would have been easier to add when I ran the New-NanoServerImage cmdlet by popping in a few more parameters.
|
||||
|
||||
However, once you have the server up and running, there are ways to configure it live. It all starts with a Remote PowerShell connection, as Figure 4 shows.
|
||||
|
||||

|
||||
>Figure 4: Getting information from the Nano Server Recovery Console that you can use to perform a PowerShell Remote connection.
|
||||
|
||||
Microsoft provides direction on how to make the connection happen, but after trying four different sites, I found MSDN has the clearest (working) direction on the subject. Figure 5 shows the result.
|
||||
|
||||

|
||||
>Figure 5: Making the remote PowerShell connection to your Nano Server.
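In short, the "long way" boils down to trusting the Nano Server's address on the management machine and then opening the session. A minimal sketch (your IP address and credentials will differ, and your WinRM setup may require additional steps):

```
Set-Item WSMan:\localhost\Client\TrustedHosts "192.168.0.100"
$cred = Get-Credential "~\Administrator"
Enter-PSSession -ComputerName "192.168.0.100" -Credential $cred
```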
|
||||
|
||||
Note: Once you've done the remote connection the long way, you can connect more quickly using a single line:
|
||||
|
||||
```
|
||||
Enter-PSSession -ComputerName "192.168.0.100" -Credential ~\Administrator
|
||||
```
|
||||
|
||||
If you knew ahead of time that this server was going to be a DNS server or be part of a compute cluster and so on, you would have added those roles or feature packages when you were creating the .vhd image in the first place. If you're looking to do so after the fact, you'll need to make the remote PowerShell connection, then install the NanoServerPackage and import it. Then you can see which packages you want to deploy using Find-NanoServerPackage (shown in Figure 6).
|
||||
|
||||

|
||||
>Figure 6: Once you have installed and imported the NanoServerPackage, you can find the one you need to get your Nano Server up and running with the roles and features you require.
|
||||
|
||||
I tested this out by running the DNS package with the following command: `Install-NanoServerPackage -Name Microsoft-NanoServer-DNS-Package`. Once it was installed, I had to enable it with the following command: `Enable-WindowsOptionalFeature -Online -FeatureName DNS-Server-Full-Role`.
|
||||
|
||||
Obviously I didn't know these commands ahead of time. I have never run them before in my life, nor had I ever enabled a DNS role this way, but with a little research I had a DNS (Nano) Server up and running.
|
||||
|
||||
The next part of the process involves using PowerShell to configure the DNS server. That's a completely different topic and one best researched online. But it doesn't appear to be mind-blowingly difficult once you've learned the cmdlets to use: Add a zone? Use the Add-DNSServerPrimaryZone cmdlet. Add a record in that zone? Use the Add-DNSServerResourceRecordA. And so on.
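For instance, a zone and an A record could be created along these lines (the zone name and address are made up; check the DnsServer module documentation for the full parameter sets):

```
Add-DNSServerPrimaryZone -Name "corp.local" -ZoneFile "corp.local.dns"
Add-DNSServerResourceRecordA -Name "web01" -ZoneName "corp.local" -IPv4Address "192.168.0.50"
```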
|
||||
|
||||
After doing all this command-line work, you’ll likely want proof that any of this is working. You should be able to do a quick review of PowerShell commands and note the many DNS ones that now present themselves (using Get-Command).
|
||||
|
||||
But if you need a GUI-based confirmation, you can open Server Manager on a GUI-based server and add the IP address of the Nano Server. Then right-click that server and choose Manage As to provide your credentials (~\Administrator and password). Once you have connected, right-click the server in Server Manager and choose Add Roles and Features; it should show that you have DNS installed as a role, as Figure 7 shows.
|
||||
|
||||

|
||||
>Figure 7: Proving through the GUI that DNS was installed.
|
||||
|
||||
Don't bother trying to remote-desktop into the server. There is only so much you can do through the Server Manager tool, and that isn't one of them. And just because you can confirm the DNS role doesn't mean you have the ability to add new roles and features through the GUI. It's all locked down. Nano Server is how you'll make any needed adjustments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.infoworld.com/article/3119770/windows-server/down-and-dirty-with-windows-nano-server-2016.html?utm_source=webopsweekly&utm_medium=email
|
||||
|
||||
作者:[J. Peter Bruzzese ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.infoworld.com/author/J.-Peter-Bruzzese/
|
||||
[1]: http://www.infoworld.com/article/3049191/windows-server/nano-server-a-slimmer-slicker-windows-server-core.html
|
||||
[2]: https://technet.microsoft.com/en-us/windows-server-docs/compute/nano-server/getting-started-with-nano-server
|
||||
[3]: https://technet.microsoft.com/en-us/windows-server-docs/get-started/system-requirements--and-installation
|
||||
|
@ -1,90 +0,0 @@
|
||||
How to Speed Up LibreOffice with 4 Simple Steps
|
||||
====
|
||||
|
||||

|
||||
|
||||
For many fans and supporters of Open Source software, LibreOffice is the best alternative to Microsoft Office, and it has definitely seen huge improvements over the last few releases. However, the initial startup experience still leaves a lot to be desired. There are ways to improve launch time and overall performance of LibreOffice.
|
||||
|
||||
I will go over some practical steps that you can take to improve the load time and responsiveness of LibreOffice in the paragraphs below.
|
||||
|
||||
### 1. Increase Memory Per Object and Image Cache
|
||||
|
||||
This will help the program load faster by allocating more memory resources to the image cache and objects.
|
||||
|
||||
1. Launch LibreOffice Writer (or Calc)
|
||||
|
||||
2. Navigate to “Tools -> Options” in the menubar or use the keyboard shortcut “Alt + F12.”
|
||||
|
||||
3. Click “Memory” under LibreOffice and increase “Use for LibreOffice” to 128MB.
|
||||
|
||||
4. Also increase “Memory per object” to 20MB.
|
||||
|
||||
5. Click “Ok” to save your changes.
|
||||
|
||||

|
||||
|
||||
Note: You can set the numbers higher or lower than the suggested values depending on how powerful your machine is. It is best to experiment and see which value gives you the optimum performance.
|
||||
|
||||
### 2. Enable LibreOffice QuickStarter
|
||||
|
||||
If you have a generous amount of RAM on your machine, say 4GB and above, you can enable the “Systray Quickstarter” option to keep part of LibreOffice in memory for quicker response when opening new documents.
|
||||
|
||||
You will definitely see improved performance in opening new documents after enabling this option.
|
||||
|
||||
1. Open the options dialog by navigating to “Tools -> Options.”
|
||||
|
||||
2. In the sidebar under “LibreOffice”, select “Memory.”
|
||||
|
||||
3. Tick the “Enable Systray Quickstarter” checkbox.
|
||||
|
||||
4. Click “OK” to save the changes.
|
||||
|
||||

|
||||
|
||||
Once this option is enabled, you will see the LibreOffice icon in your system tray with options to open any type of document.
|
||||
|
||||
### 3. Disable Java Runtime
|
||||
|
||||
Another easy way to speed up the launch time and responsiveness of LibreOffice is to disable Java.
|
||||
|
||||
1. Open the Options dialog using “Alt + F12.”
|
||||
|
||||
2. In the sidebar, select “LibreOffice,” then “Advanced.”
|
||||
|
||||
3. Uncheck the “Use Java runtime environment” option.
|
||||
|
||||
4. Click “OK” to close the dialog.
|
||||
|
||||

|
||||
|
||||
If all you use is Writer and Calc, disabling Java will not stop you from working with your files as normal. But to use LibreOffice Base and some other special features, you may need to re-enable it. In that case, you will get a popup asking if you wish to turn it back on.
|
||||
|
||||
### 4. Reduce Number of Undo Steps
|
||||
|
||||
By default, LibreOffice allows you to undo up to 100 changes to a document. Most users do not need anywhere near that, so holding that many steps in memory is largely a waste of resources.
|
||||
|
||||
I recommend that you reduce this number to 20 to free up memory for other things, but feel free to customise this part to suit your needs.
|
||||
|
||||
1. Open the options dialog by navigating to “Tools -> Options.”
|
||||
|
||||
2. In the sidebar under “LibreOffice,” select “Memory.”
|
||||
|
||||
3. Under “Undo,” change the number of steps to your preferred value.
|
||||
|
||||
4. Click “OK” to save the changes.
|
||||
|
||||

|
||||
|
||||
If the tips provided helped you speed up the launch time of your LibreOffice Suite, let us know in the comments. Also, please share any other tips you may know for others to benefit as well.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/speed-up-libreoffice/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
|
||||
|
||||
作者:[Ayo Isaiah][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.maketecheasier.com/author/ayoisaiah/
|
@ -1,66 +0,0 @@
|
||||
Translating by bianjp
|
||||
|
||||
It's time to make LibreOffice and OpenOffice one again
|
||||
==========
|
||||
|
||||

|
||||
|
||||
Let's talk about OpenOffice. More than likely you've already read, countless times, that Apache OpenOffice is near the end. The last stable iteration was 4.1.2 (released October, 2015) and a recent major security flaw took a month to patch. A lack of coders has brought development to a creeping crawl. And then, the worst possible news hit the ether; the project suggested users switch to MS Office (or LibreOffice).
|
||||
|
||||
For whom the bell tolls? The bell tolls for thee, OpenOffice.
|
||||
|
||||
I'm going to say something that might ruffle a few feathers. Are you ready for it?
|
||||
|
||||
The end of OpenOffice will be a good thing for open source and for users.
|
||||
|
||||
Let me explain.
|
||||
|
||||
### One fork to rule them all
|
||||
|
||||
When LibreOffice was forked from OpenOffice we saw yet another instance of the fork not only improving on the original, but vastly surpassing it. LibreOffice was an instant success. Every Linux distribution that once shipped with OpenOffice migrated to the new kid on the block. LibreOffice burst out of the starting gate and immediately hit its stride. Updates came at an almost breakneck speed and the improvements were plenty and important.
|
||||
|
||||
After a while, OpenOffice became an afterthought for the open source community. This, of course, was exacerbated when Oracle decided to discontinue the project in 2011 and donated the code to the Apache Project. By this point OpenOffice was struggling to move forward and that brings us to now. A burgeoning LibreOffice and a suffering, stuttering OpenOffice.
|
||||
|
||||
But I say there is a light at the end of this rather dim tunnel.
|
||||
|
||||
### Unfork them
|
||||
|
||||
This may sound crazy, but I think it's time LibreOffice and OpenOffice became one again. Yes, I know there are probably political issues and egos at stake, but I believe the two would be better served as one. The benefits of this merger would be many. Off the top of my head:
|
||||
|
||||
- Bring the MS Office filters together: OpenOffice has a strong track record of better importing certain files from MS Office (whereas LibreOffice has been known to be improving, but spotty).
|
||||
- More developers for LibreOffice: Although OpenOffice wouldn't bring with it a battalion of developers, it would certainly add to the mix.
|
||||
- End the confusion: Many users assume OpenOffice and LibreOffice are the same thing. Some don't even know that LibreOffice exists. This would end that confusion.
|
||||
- Combine their numbers: Separate, OpenOffice and LibreOffice have impressive usage numbers. Together, they would be a force.
|
||||
### A golden opportunity
|
||||
|
||||
The possible loss of OpenOffice could actually wind up being a golden opportunity for open source office suites in general. Why? I would like to suggest something that I believe has been necessary for a while now. If OpenOffice and LibreOffice were to gather their forces, diff their code, and merge, they could then do some much-needed retooling of not just the internal works of the whole, but also of the interface.
|
||||
|
||||
Let's face it, the LibreOffice and (by extension) OpenOffice UIs are both way out of date. When I install LibreOffice 5.2.1.2 the tool bar is an absolute disaster (Figure A).
|
||||
|
||||
### Figure A
|
||||
|
||||

|
||||
|
||||
#### The LibreOffice default toolbar setup.
|
||||
|
||||
As much as I support and respect (and use daily) LibreOffice, it has become all too clear the interface needs a complete overhaul. What we're dealing with now is a throwback to the late 90s/early 2000s and it has to go. When a new user opens up LibreOffice for the first time, they are inundated with buttons, icons, and toolbars. Ubuntu Unity helped this out with the Head up Display (HUD), but that did nothing for other desktops and distributions. Sure, the enlightened user has no problem knowing what to look for and where it is (or to even customize the toolbars to reflect their specific needs), but for a new or average user, that interface is a nightmare. Now would be the perfect time for this change. Bring in the last vestiges of the OpenOffice developers and have them join the fight for an improved interface. With the combination of the additional import filters from OpenOffice and a modern interface, LibreOffice could finally make some serious noise on both the home and business desktops.
|
||||
|
||||
### Will this actually happen?
|
||||
|
||||
This needs to happen. Will it? I have no idea. But even if the powers that be decide the UI isn't in need of retooling (which would be a mistake), bringing OpenOffice into the fold would still be a big step forward. The merging of the two efforts would bring about a stronger focus on development, easier marketing, and far less confusion by the public at large.
|
||||
|
||||
I realize this might seem a bit antithetical to the very heart and spirit of open source, but merging LibreOffice and OpenOffice would combine the strengths of the two constituent pieces and possibly jettison the weaknesses.
|
||||
|
||||
From my perspective, that's a win-win.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.techrepublic.com/article/its-time-to-make-libreoffice-and-openoffice-one-again/
|
||||
|
||||
作者:[Jack Wallen ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.techrepublic.com/search/?a=jack%2Bwallen
|