Merge pull request #5 from LCTT/master

2016-9-20
This commit is contained in:
Chao-zhi Liu 2016-09-20 20:33:42 +08:00 committed by GitHub
commit 5dec79dac0
43 changed files with 2721 additions and 1919 deletions

View File

@ -0,0 +1,85 @@
Setting Up a Honeypot in Kali Linux
===================================
Pentbox is a security suite that bundles a number of tools to make penetration-testing work simple and streamlined. It is written in Ruby and targets GNU/Linux, and it also supports Windows, MacOS, and any other system with Ruby installed. In this short article we explain how to set up a honeypot in Kali Linux. In case you don't know what a honeypot is: "a honeypot is a computer security mechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems."
### Download Pentbox
Simply type the following command in your terminal to download pentbox-1.8.
```
root@kali:~# wget http://downloads.sourceforge.net/project/pentbox18realised/pentbox-1.8.tar.gz
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-1.jpg)
### Uncompress the pentbox files
Decompress the file with the following command:
```
root@kali:~# tar -zxvf pentbox-1.8.tar.gz
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-2.jpg)
### Run the pentbox Ruby script
Change into the pentbox directory:
```
root@kali:~# cd pentbox-1.8/
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-3.jpg)
Run pentbox with the following command:
```
root@kali:~# ./pentbox.rb
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-4.jpg)
### Set up a honeypot
Use option 2 (Network Tools) and then option 3 (Honeypot).
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-5.jpg)
Finally, for a first test, choose option 1 (Fast Auto Configuration).
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-6.jpg)
This opens a honeypot on port 80. Simply open a browser and browse to http://192.168.160.128 (where 192.168.160.128 is your own IP address). You should see an "Access denied" error.
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-7.jpg)
In your terminal you should see "HONEYPOT ACTIVATED ON PORT 80", followed by "INTRUSION ATTEMPT DETECTED".
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-8.jpg)
Now, if you follow the same steps but this time select option 2 (Manual Configuration), you will see more options:
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-9.jpg)
Do the same steps, but this time select port 22 (the SSH port). Then set up port forwarding on your home router to forward external port 22 to this machine's port 22. Alternatively, set the honeypot up on a VPS on your cloud server.
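If you are curious what a honeypot like this is doing under the hood, the sketch below shows a minimal TCP honeypot in Python. It is not part of Pentbox, just an illustration of the idea: listen on a port, log every connection attempt, and send back a refusal. The port number and the banner text are assumptions for the example.
```
import socket
from datetime import datetime

PORT = 8080                 # assumed port; port 80 or 22 would need root privileges
BANNER = b"Access denied\r\n"

def run_honeypot(port=PORT):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    print("HONEYPOT ACTIVATED ON PORT {}".format(port))
    while True:
        conn, addr = srv.accept()
        # Every inbound connection is treated as an intrusion attempt and logged.
        print("{} INTRUSION ATTEMPT DETECTED from {}:{}".format(
            datetime.now().isoformat(), addr[0], addr[1]))
        try:
            conn.sendall(BANNER)
        finally:
            conn.close()

if __name__ == "__main__":
    run_honeypot()
```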
You'd be amazed how many machines out there are continuously scanning the SSH port. And you know what you do then? You try to hack them back, for the lulz!
If video is more your thing, here is a video of setting up the honeypot:
<https://youtu.be/NufOMiktplA>
--------------------------------------------------------------------------------
via: https://www.blackmoreops.com/2016/05/06/setup-honeypot-in-kali-linux/
Author: [blackmoreops.com][a]
Translator: [wcnnbdk1](https://github.com/wcnnbdk1)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: blackmoreops.com

View File

@ -0,0 +1,236 @@
Writing an Online Multiplayer Game with Python and asyncio (Part 2)
====================================================================
![](https://7webpages.com/media/cache/fd/d1/fdd1f8f8bbbf4166de5f715e6ed0ac00.gif)
> Have you ever done asynchronous programming in Python? In this article I will tell you how, and demonstrate it with a [working example][1]: a clone of the popular Snake game, designed for multiple players.
For the introduction and theory, see "[Part 1: Get asynchronous][2]".
- [Play the game here][1].
### 3. Writing the game loop
The game loop is the heart of every game. It runs continuously to read player input, update the game state, and render the result on the screen. In online games, the loop is split into a client part and a server part, so there are usually two loops that communicate over the network. The client's role is normally to capture player input, such as key presses or mouse movement, pass that data to the server, and receive back the data that needs to be rendered. The server processes all the data coming from the players, updates the game state, does the calculations needed to render the next frame, and passes the result back to the clients, for example the new positions of the game objects. It is important not to mix up the roles of client and server without a solid reason. If you run game-logic calculations on the client, it is very easy to fall out of sync with the other clients; a game can also be built simply by relaying any data received from one client to the others.
> An iteration of the game loop is called a tick. A tick is an event meaning that the current iteration of the game loop has finished and the data for the next frame (or frames) is ready.
In the examples that follow we use the same client, which connects to the server over a WebSocket from a web page. It runs a simple loop that sends key presses to the server and displays every message coming back from it. [Client source code is here][4].
### Example 3.1: basic game loop
> [Example 3.1 source code][5].
We use the [aiohttp][6] library to create the game server. It lets you build web servers and clients on top of asyncio. A nice thing about this library is that it supports ordinary HTTP requests and websockets at the same time, so we do not need another web server to serve the game's HTML page.
Here is how the server is started:
```
app = web.Application()
app["sockets"] = []
asyncio.ensure_future(game_loop(app))
app.router.add_route('GET', '/connect', wshandler)
app.router.add_route('GET', '/', handle)
web.run_app(app)
```
`web.run_app` is a handy shortcut that creates the server's main task and runs the `asyncio` event loop with its `run_forever()` method. I suggest looking at its source code to see how the server is actually created and terminated.
The `app` variable is a dictionary-like object that can be used to share data between the connected clients. We use it to store the list of connected sockets, which is later used to send messages to all connected clients. The `asyncio.ensure_future()` call schedules the main game-loop task, which sends a tick message to the clients every 2 seconds. This task runs in parallel with the web server in the same asyncio event loop.
There are two web request handlers: `handle` serves the HTML page, and `wshandler` is the main websocket server task that handles the interaction with the clients. A new `wshandler` task is created in the event loop for every connected client. The task adds the client's socket to the list so that the `game_loop` task can send messages to all clients, and it then echoes every key press back to the client along with a message.
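Below is a minimal sketch of what such handlers can look like. It is written against aiohttp's current API (which differs slightly from the 2016 version used in the article), and the `index.html` file name and message texts are illustrative assumptions rather than the linked example's exact code.
```
from aiohttp import web, WSMsgType

async def handle(request):
    # Serve the game's HTML page for ordinary GET requests.
    with open("index.html", "rb") as f:          # assumed file name
        return web.Response(body=f.read(), content_type="text/html")

async def wshandler(request):
    # One of these tasks is created per connected client.
    app = request.app
    ws = web.WebSocketResponse()
    await ws.prepare(request)

    app["sockets"].append(ws)                    # let game_loop() reach this client
    await ws.send_str("Welcome!")
    try:
        async for msg in ws:                     # suspends here; the scheduler runs other tasks
            if msg.type == WSMsgType.TEXT:
                await ws.send_str("Pressed key code: {}".format(msg.data))
    finally:
        app["sockets"].remove(ws)                # stop broadcasting to a closed socket
    return ws
```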
In the tasks we start, we run worker loops inside the main asyncio event loop. A switch between tasks happens whenever one of them uses `await` to wait for a coroutine to finish. For instance, `asyncio.sleep` simply hands execution back to the scheduler for the given amount of time, and `ws.receive` waits for a websocket message, during which the scheduler may switch to another task.
Open the main page in a browser; once it is connected to the server, try pressing some keys. Their key codes are echoed back from the server, and every 2 seconds they are overwritten by the tick message that the game loop broadcasts to all clients.
So we have just created a server that handles the clients' key presses, while the main game loop does some work in the background and periodically updates all clients at the same time.
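The broadcasting game loop itself can be as small as the following sketch. The 2-second interval matches the article; the message text is an assumption.
```
import asyncio

async def game_loop(app):
    # Runs next to the web server in the same event loop and
    # periodically pushes a "tick" to every connected client.
    while True:
        for ws in app["sockets"]:
            await ws.send_str("game loop says: tick")
        await asyncio.sleep(2)   # hand control back to the scheduler for 2 seconds
```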
### Example 3.2: starting the game on request
> [Example 3.2 source code][7]
In the previous example, the game loop ran for the whole lifetime of the server. In practice, though, there is usually no reason to run the game loop while nobody is connected. Also, the same server may host different "game rooms". In that model, a player "creates" a game session (a match in a multiplayer game or a raid in an MMO, for example) that other players can join, and the game loop only runs while the session is in progress.
In this example we use a global flag to tell whether the game loop is running, and we start it when the first player connects. At the beginning the loop is not running, so the flag is set to `False`. The game loop is started from the client handler:
```
if app["game_is_running"] == False:
asyncio.ensure_future(game_loop(app))
```
The flag is set to `True` when `game_loop()` starts, and it is set back to `False` when all clients have disconnected.
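A sketch of how that flag can be flipped on both ends follows; the tick message and the 2-second interval come from the article, while the rest is an assumed simplification of the linked source.
```
import asyncio

async def game_loop(app):
    app["game_is_running"] = True
    while app["sockets"]:                  # stop ticking when the last client is gone
        for ws in app["sockets"]:
            await ws.send_str("game loop says: tick")
        await asyncio.sleep(2)
    app["game_is_running"] = False

# inside wshandler(), right after the client's socket is registered:
#     if app["game_is_running"] == False:
#         asyncio.ensure_future(game_loop(app))
```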
### Example 3.3: managing tasks
> [Example 3.3 source code][8]
This example illustrates working with task objects. Instead of a flag, we store the game-loop task itself in the app's global dictionary. This is not necessarily optimal for a simple example like this one, but sometimes you may need to control tasks that have already been launched.
```
if app["game_loop"] is None or \
app["game_loop"].cancelled():
app["game_loop"] = asyncio.ensure_future(game_loop(app))
```
Here `ensure_future()` returns the task object that we store in the global dictionary, and when all users have disconnected, we cancel it like this:
```
app["game_loop"].cancel()
```
This `cancel()` call tells the scheduler not to pass control to this coroutine any more, and it sets the task's state to cancelled, which can later be checked with the `cancelled()` method. There is one subtlety worth mentioning here: if you hold an external reference to a task object and an exception happens inside the task, that exception is not raised. Instead, an exception state is set on the task and can be checked with the `exception()` method. Such silent failures are not very useful for debugging, so you may want to raise all exceptions instead. To do that, call `result()` explicitly on every unfinished task. This can be done with a callback:
```
app["game_loop"].add_done_callback(lambda t: t.result())
```
If we also intend to cancel this task in our own code but do not want a `CancelledError` exception to be raised, there is a point in checking its `cancelled` state first:
```
app["game_loop"].add_done_callback(lambda t: t.result()
if not t.cancelled() else None)
```
Note that this is only necessary if you hold a reference to the task object. In the previous examples, all exceptions were raised directly, without any additional callback.
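A compact way to combine the two callbacks above is a small helper that re-raises anything except cancellation; this is a convenience sketch, not something taken from the example's source.
```
def raise_task_errors(task):
    # Re-raise an exception stored on a finished task, but tolerate cancellation.
    if not task.cancelled() and task.exception() is not None:
        raise task.exception()

# usage (names follow the article's snippets):
# app["game_loop"] = asyncio.ensure_future(game_loop(app))
# app["game_loop"].add_done_callback(raise_task_errors)
```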
### Example 3.4: waiting for multiple events
> [Example 3.4 source code][9]
In many cases, you need to wait for more than one event inside the client handler. Besides messages from the client, you may need to wait for different kinds of things to happen. For example, if your game has a time limit, you may wait for a signal from a timer; you may wait for a message from another process coming through a pipe; or for a message from another server on the network, using a distributed messaging system.
For simplicity, this example is based on example 3.1, but it uses a `Condition` object to keep the game loop synchronized with the connected clients. We do not keep a global list of sockets here, because the socket is only used inside the handler. When a game-loop iteration finishes, we notify all clients with the `Condition.notify_all()` method. This method allows a publish/subscribe pattern inside the asyncio event loop.
To wait for both kinds of events, first we wrap the awaitable objects in tasks with `ensure_future()`:
```
if not recv_task:
recv_task = asyncio.ensure_future(ws.receive())
if not tick_task:
await tick.acquire()
tick_task = asyncio.ensure_future(tick.wait())
```
Before we can call `Condition.wait()`, we have to acquire the lock behind it; that is why we call `tick.acquire()` first. The lock is released again after the call to `tick.wait()`, so other coroutines can use it. But when we receive a notification, the lock is re-acquired, so we have to call `tick.release()` after the notification has been processed.
We use the `asyncio.wait()` coroutine to wait for the two tasks:
```
done, pending = await asyncio.wait(
[recv_task,
tick_task],
return_when=asyncio.FIRST_COMPLETED)
```
It blocks until at least one task in the list is complete, and then it returns two lists: the tasks that are done and the tasks that are still running. When a task is done, we set its variable back to `None`, so it will be created again on the next iteration.
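Putting the pieces of this example together, the body of the handler's loop can look roughly like the condensed sketch below. Where the `Condition` is stored (`app["tick"]`), the message texts, and the simplified cleanup are assumptions; the linked example differs in its details.
```
import asyncio
from aiohttp import web, WSMsgType

async def wshandler(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)
    tick = request.app["tick"]              # an asyncio.Condition shared with game_loop()

    recv_task = None
    tick_task = None
    while True:
        # (Re)create whichever task finished on the previous iteration.
        if not recv_task:
            recv_task = asyncio.ensure_future(ws.receive())
        if not tick_task:
            await tick.acquire()
            tick_task = asyncio.ensure_future(tick.wait())

        done, pending = await asyncio.wait(
            [recv_task, tick_task],
            return_when=asyncio.FIRST_COMPLETED)

        if recv_task in done:
            msg = recv_task.result()
            if msg.type != WSMsgType.TEXT:
                break                       # client closed the socket (pending task cleanup omitted)
            await ws.send_str("Pressed key code: {}".format(msg.data))
            recv_task = None
        if tick_task in done:
            await ws.send_str("game loop ticked")
            tick.release()                  # wait() re-acquired the lock on notification
            tick_task = None
    return ws
```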
### Example 3.5: combining with threads
> [Example 3.5 source code][10]
In this example, we combine the asyncio loop with threads by running the main game loop in a separate thread. As I mentioned before, truly parallel execution of Python code is not possible because of the GIL, so using another thread for heavy computation is not a good idea. However, there is a reason to combine threads with asyncio: you need to do it when a library you use does not support asyncio. Calling such a library from the main thread would block the loop, so the only way to use it asynchronously is to run it in a different thread.
We run the game loop with the `run_in_executor()` method of the asyncio loop and a `ThreadPoolExecutor`. Note that `game_loop()` is no longer a coroutine; it is a plain function executed by another thread. However, we still need to interact with the main thread to notify the clients about game events. asyncio itself is not thread-safe, but it provides methods to run your code from another thread: `call_soon_threadsafe()` for plain functions and `run_coroutine_threadsafe()` for coroutines. We put the code that notifies clients about the game tick into the `notify()` coroutine and execute it in the main event loop from the other thread.
```
def game_loop(asyncio_loop):
print("Game loop thread id {}".format(threading.get_ident()))
async def notify():
print("Notify thread id {}".format(threading.get_ident()))
await tick.acquire()
tick.notify_all()
tick.release()
while 1:
task = asyncio.run_coroutine_threadsafe(notify(), asyncio_loop)
# blocking the thread
sleep(1)
# make sure the task has finished
task.result()
```
When you run this example, you will see that the "Notify thread id" equals the main thread id, because the `notify()` coroutine runs in the main thread; meanwhile, `sleep(1)` runs in the other thread and therefore does not block the main event loop.
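On the startup side, handing the loop function over to a thread pool can look like this sketch; the executor size and variable names are assumptions, not the example's exact code.
```
import asyncio
from concurrent.futures import ThreadPoolExecutor

def start_game_thread(app):
    loop = asyncio.get_event_loop()
    executor = ThreadPoolExecutor(max_workers=1)
    # game_loop is a plain function here; it receives the asyncio loop so it can
    # schedule notify() back onto it with asyncio.run_coroutine_threadsafe().
    loop.run_in_executor(executor, game_loop, loop)
```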
### Example 3.6: multiple processes and scaling up
> [Example 3.6 source code][11]
A single-threaded server may work fine, but it can only use one CPU core. To scale the server beyond one core, we need to run multiple processes, each with its own event loop. Then we need a way for the processes to exchange information or share game data. Also, games often require heavy computations, such as path finding, that sometimes cannot be completed within one game tick. Long computations are not recommended inside a coroutine because they block event processing, so in that case it may make sense to hand the heavy task over to another process running in parallel.
The simplest way to use multiple cores is to start several single-core servers, as in the previous examples, each on a different port. You can do this with `supervisord` or another process-control system. Then you need a load balancer such as `HAProxy` to distribute the connecting clients across the processes. There are already adapters that connect asyncio to some popular messaging and storage systems, for example:
- [aiomcache][12] for memcached clients
- [aiozmq][13] for zeroMQ
- [aioredis][14] for the Redis store, with pub/sub support
You can find other packages on GitHub or PyPI; most of them start with `aio`.
Using network services may be efficient for storing persistent state and exchanging some kinds of information, but the performance may not be enough for real-time inter-process communication. In that case, standard Unix pipes are a better fit. asyncio supports pipes, and there is a [very low-level example of a server using pipes][15] in the aiohttp repository.
In the current example, we use Python's high-level [multiprocessing][16] library to run a heavy computation on a different core, and `multiprocessing.Queue` to exchange messages between processes. Unfortunately, the current implementation of `multiprocessing` is not compatible with asyncio, so every blocking call blocks the event loop. This is exactly where threads help: if we run the `multiprocessing` code in a different thread, it will not block the main thread. All we need to do is put all the inter-process communication into another thread. This example illustrates that technique. It is very similar to the threading example above, but we create a new process from the thread.
```
def game_loop(asyncio_loop):
# coroutine to run in main thread
async def notify():
await tick.acquire()
tick.notify_all()
tick.release()
queue = Queue()
# function to run in a different process
def worker():
while 1:
print("doing heavy calculation in process {}".format(os.getpid()))
sleep(1)
queue.put("calculation result")
Process(target=worker).start()
while 1:
# blocks this thread but not main thread with event loop
result = queue.get()
print("getting {} in process {}".format(result, os.getpid()))
task = asyncio.run_coroutine_threadsafe(notify(), asyncio_loop)
task.result()
```
Here we run the `worker()` function in another process. It contains a loop that does the heavy computation and puts the result into the `queue`, which is an instance of `multiprocessing.Queue`. Then we can get the result and notify the clients from the main event loop in another thread, exactly as in example 3.5. This example is very simplified: it does not terminate the process properly, and in a real game we would probably need a second queue to pass data to the worker.
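One way to address the missing shutdown, sketched under the assumption that a simple sentinel value is acceptable, is to add a second queue and make the worker stop when it reads `None` from it:
```
import os
from multiprocessing import Process, Queue
from time import sleep

def worker(jobs, results):
    # Exit cleanly when the parent sends the None sentinel.
    while True:
        job = jobs.get()
        if job is None:
            break
        sleep(1)                              # stands in for the heavy calculation
        results.put("result of {} from process {}".format(job, os.getpid()))

if __name__ == "__main__":
    jobs, results = Queue(), Queue()
    p = Process(target=worker, args=(jobs, results))
    p.start()
    jobs.put("pathfinding request")           # hypothetical unit of work
    print(results.get())
    jobs.put(None)                            # ask the worker to stop
    p.join()
```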
There is a project called [aioprocessing][17] that wraps `multiprocessing` to make it compatible with asyncio. In fact, it uses exactly the same approach as the example above, creating processes from threads, so it does not give you any advantage other than hiding these tricks behind a simple interface. Hopefully, in a future version of Python, we will get a coroutine-based `multiprocessing` library that supports asyncio.
> Attention! If you create a different thread or a sub-process from the main thread or process to run another asyncio event loop, you need to create the loop explicitly with `asyncio.new_event_loop()`; otherwise, it will not work.
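A minimal sketch of that warning in practice, running a second event loop inside a worker thread, looks like this:
```
import asyncio
import threading

def thread_with_own_loop():
    # A worker thread has no running loop of its own, so create one explicitly.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(asyncio.sleep(0.1))   # any coroutine would do here
    loop.close()

t = threading.Thread(target=thread_with_own_loop)
t.start()
t.join()
```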
--------------------------------------------------------------------------------
via: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-writing-game-loop/
Author: [Kyrylo Subbotin][a]
Translator: [chunyang-wen](https://github.com/chunyang-wen)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-writing-game-loop/
[1]: http://snakepit-game.com/
[2]: https://linux.cn/article-7767-1.html
[3]: http://snakepit-game.com/
[4]: https://github.com/7WebPages/snakepit-game/blob/master/simple/index.html
[5]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_basic.py
[6]: http://aiohttp.readthedocs.org/
[7]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_handler.py
[8]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_global.py
[9]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_wait.py
[10]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_thread.py
[11]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_process.py
[12]: https://github.com/aio-libs/aiomcache
[13]: https://github.com/aio-libs/aiozmq
[14]: https://github.com/aio-libs/aioredis
[15]: https://github.com/KeepSafe/aiohttp/blob/master/examples/mpsrv.py
[16]: https://docs.python.org/3.5/library/multiprocessing.html
[17]: https://github.com/dano/aioprocessing

View File

@ -0,0 +1,46 @@
DAISY: A Linux-Compatible Text Format for the Visually Impaired
=================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/osdc-lead_books.png?itok=K8wqfPT5)
*Image by Kate Ter Haar. Modified by opensource.com. CC BY-SA 2.0.*
If you're blind or visually impaired like I am, you usually need various kinds of hardware or software to do things that sighted people take for granted. One of these is a specialized format for reading print books: Braille (if you know how to read it) or specialized text formats such as DAISY.
### What is DAISY?
DAISY stands for Digital Accessible Information System. It is an open standard used to help blind people read textbooks, periodicals, newspapers, fiction, you name it. It was founded in the mid-1990s by [the DAISY Consortium][1], a group of organizations dedicated to producing a set of standards that would allow text to be marked up in a way that makes it easy to read, skip around in, annotate, and otherwise manipulate, much as a sighted user would.
The current version, DAISY 3.0, was released in mid-2005 and is a complete rewrite of the standard, created with the goal of making it much easier to write books that comply with it. It is worth noting that DAISY can support plain text only, audio recordings only (in PCM Wave or MP3 format), or a combination of text and audio. Specialized software can read these books and lets users set bookmarks and navigate a book as easily as a sighted person would with a print book.
### How does DAISY work?
DAISY, regardless of the specific version, works roughly like this: you have a main navigation file (ncc.html in DAISY 2.02) that contains metadata about the book, such as the author's name, the copyright date, how many pages the book has, and so on. In DAISY 3.0, this file is a valid XML document, and a DTD (document type definition) file is strongly recommended to be included with each book.
The navigation control file contains markup describing precise positions, either text caret offsets for text navigation or timestamps down to the millisecond for audio recordings, which lets the software skip to exactly that point in the book, much as a sighted person would turn to a chapter page. It is worth noting that this navigation control file only contains positions for the main, largest elements of a book.
The smaller elements are handled by SMIL (synchronized multimedia integration language) files. The level of navigation depends heavily on how well the book was marked up. Think of it like this: if a print book has no chapter headings, you will have a hard time figuring out which chapter you are in. If a DAISY book is badly marked up, you might only be able to navigate to the start of the book or to the table of contents. If a book is marked up badly enough (or is missing markup entirely), your DAISY reading software is likely to simply ignore it.
### Why the need for specialized software?
You may be wondering why, if DAISY is little more than HTML, XML, and audio files, you need specialized software to read and manipulate it. Technically speaking, you don't; the specialized software is mostly for convenience. In Linux, for example, a simple web browser can be used to open the books and read them. If you click on the XML file in a DAISY 3 book, all the software generally does is read the metadata of the books you give it access to and build a list of titles that you click on to open. If a book is badly marked up, it will not show up in this list.
Producing DAISY is another matter entirely, and usually requires either specialized software or enough knowledge of the specifications to adapt general-purpose software for the purpose.
### Conclusion
Fortunately, DAISY is a dying standard. While it is very good at what it does, the need for specialized software to produce it has set us apart from the sighted world, where readers can use a variety of formats to read their books electronically. This is why the DAISY Consortium has succeeded DAISY with EPUB 3, which supports what are called media overlays, basically an EPUB book with optional audio or video. Since EPUB shares a lot of DAISY's XML markup, some software that can read DAISY can see EPUB books but usually cannot read them. This means that once the websites that provide books for us switch over to this open format, we will have a much larger selection of software to read our books.
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/5/daisy-linux-compatible-text-format-visually-impaired
Author: [Kendell Clark][a]
Translator: [theArcticOcean](https://github.com/theArcticOcean)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/kendell-clark
[1]: http://www.daisy.org

View File

@ -0,0 +1,404 @@
Back Up Photos While Traveling with a Raspberry Pi and an iPad Pro
===================================================================
![](http://www.movingelectrons.net/images/bkup_photos_main.jpg)
*Backing up photos while traveling - components*
### Introduction
I have been looking for the ideal way to back up photos while traveling for a long time. Putting the SD cards back in your camera bag exposes you to too many risks: the cards can be lost or stolen, and the data can get corrupted or fail during transfer. A better option is to copy the photos to another medium, even if it is just another SD card, and keep it somewhere safer. Backing up to a remote location is also viable, but it is not practical if you travel somewhere without an internet connection.
My ideal backup workflow requires the following:
1. Use an iPad Pro rather than a laptop. I like to travel light, most of my trips are business related (rather than photography related), and I hate carrying a personal laptop on top of my work laptop. My iPad, on the other hand, is always with me, which is why I chose it.
2. Use as few hardware devices as possible.
3. The connection between the devices must be secure. I will use this setup in hotels and airports, so the connection between the devices needs to be closed and encrypted.
4. The whole process should be reliable and stable. I have tried other router/combo devices before, but [with disappointing results][1].
### The setup
I put together a setup that meets the requirements above and leaves room to grow in the future. It uses the following parts:
1. A [9.7-inch iPad Pro][2], the most powerful and thinnest iOS device available at the time of writing. The Apple Pencil is not required, but as part of my kit it lets me do some editing on the road. All the heavy lifting is done by the Raspberry Pi, so any device that can open an SSH session would do.
2. A [Raspberry Pi 3][3] running the Raspbian OS. (LCTT note: Raspbian is the Debian-based Raspberry Pi operating system.)
3. A [micro SD card][4] and a [case/enclosure][5] for the Raspberry Pi.
4. A [128 GB USB flash drive][6]; that is enough for me, but you can buy a bigger one. You could also use an external hard drive like [this one][7], but the Raspberry Pi cannot supply enough power over USB for a spinning drive, which means you would also need a [powered USB hub][8] and cables, defeating the goal of a small and light setup.
5. An [SD card reader][9].
6. [Extra SD cards][10]. I use several and swap them out before they fill up, so the photos from one trip end up spread across several cards.
The following diagram shows how these devices connect to each other.
![](http://www.movingelectrons.net/images/bkup_photos_diag.jpg)
*Backing up photos while traveling - the flow diagram*
The Raspberry Pi acts as a secure hotspot: it creates its own WPA2-encrypted WiFi network, which the iPad Pro joins. There are plenty of online tutorials for creating an ad hoc (computer-to-computer) network, which is simpler to set up, but that connection is unencrypted and nearby devices can join it easily, so I chose to create a WiFi network instead.
The camera's SD card goes into an SD card reader plugged into one of the Raspberry Pi's USB ports, and the 128 GB flash drive stays permanently in the other USB port; I picked a [SanDisk][11] one because it is small. The idea is to back up the photos from the SD card to the flash drive with a Python script. The backup is incremental: each time the script runs, only the changes (that is, newly taken photos) are added to the backup folder, which makes the process very fast. This is a big advantage if you take a lot of photos or shoot in RAW. The iPad is used to run the Python script and to browse the SD card and the flash drive.
As an extra benefit, if the Raspberry Pi is connected to the internet through a wired connection (for example via its Ethernet port), it can share that connection with the devices joined to its WiFi network.
### 1. Setting up the Raspberry Pi
This is the part where you roll up your sleeves: we will use Raspbian's command line. I will try to be as detailed as possible so it is easy to follow along.
#### Installing and configuring Raspbian
Connect a mouse, a keyboard, and an LCD monitor to the Raspberry Pi, insert the SD card, and install Raspbian following the steps on the [official Raspberry Pi site][12].
After the installation, open a terminal in Raspbian and run the following commands:
```
sudo apt-get update
sudo apt-get upgrade
```
This upgrades all the software on the machine to the latest versions. I then connected the Raspberry Pi to my local network and changed the default password for security.
SSH is enabled by default in Raspbian, so all of the following setup can be done from a remote device. I also set up RSA key authentication, but that is optional; more information is available [here][13].
This is a screenshot of an SSH connection to the Raspberry Pi from [iTerm][14] on a Mac. (LCTT note: the original image is missing.)
#### Creating the WPA2-encrypted WiFi AP
The setup is based on [this article][15], adjusted to my situation.
**1. Install the packages**
We need to install the following packages:
```
sudo apt-get install hostapd
sudo apt-get install dnsmasq
```
`hostapd` creates the access point using the built-in WiFi, and `dnsmasq` is a combined DHCP and DNS server that is easy to configure.
**2. Edit dhcpcd.conf**
The Raspberry Pi is connected over Ethernet. Its network interfaces are managed by `dhcpcd`, so we first tell it to ignore `wlan0`, which will be given a static IP.
Open the dhcpcd configuration file with `sudo nano /etc/dhcpcd.conf` and add the following line at the end:
```
denyinterfaces wlan0
```
Note: it must be placed **above** any existing interface lines, if there are any.
**3. Edit the interfaces**
Now configure the static IP. Open the interface configuration file with `sudo nano /etc/network/interfaces` and edit the `wlan0` section as follows:
```
allow-hotplug wlan0
iface wlan0 inet static
address 192.168.1.1
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```
Similarly, edit the `wlan1` section like this:
```
#allow-hotplug wlan1
#iface wlan1 inet manual
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```
Important: restart `dhcpcd` with `sudo service dhcpcd restart`, and then reload the `wlan0` configuration with `sudo ifdown eth0; sudo ifup wlan0`.
**4. Configure hostapd**
Next, we configure hostapd. Create a new configuration file with `sudo nano /etc/hostapd/hostapd.conf`, with the following contents:
```
interface=wlan0
# Use the nl80211 driver with the brcmfmac driver
driver=nl80211
# This is the name of the network
ssid=YOUR_NETWORK_NAME_HERE
# Use the 2.4GHz band
hw_mode=g
# Use channel 6
channel=6
# Enable 802.11n
ieee80211n=1
# Enable QoS Support
wmm_enabled=1
# Enable 40MHz channels with 20ns guard interval
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
# Accept all MAC addresses
macaddr_acl=0
# Use WPA authentication
auth_algs=1
# Require clients to know the network name
ignore_broadcast_ssid=0
# Use WPA2
wpa=2
# Use a pre-shared key
wpa_key_mgmt=WPA-PSK
# The network passphrase
wpa_passphrase=YOUR_NEW_WIFI_PASSWORD_HERE
# Use AES, instead of TKIP
rsn_pairwise=CCMP
```
Once that is configured, we need to tell `hostapd` where to look for the configuration file when it starts on boot. Open the defaults file with `sudo nano /etc/default/hostapd`, find the line `#DAEMON_CONF=""`, and replace it with `DAEMON_CONF="/etc/hostapd/hostapd.conf"`.
**5. Configure dnsmasq**
The dnsmasq configuration file that ships with the package contains a lot of information on how to use it, but we do not need that many options. I recommend moving it somewhere else (rather than deleting it) and creating a new one:
```
sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
sudo nano /etc/dnsmasq.conf
```
Paste the following into the new file:
```
interface=wlan0 # Use interface wlan0
listen-address=192.168.1.1 # Explicitly specify the address to listen on
bind-interfaces # Bind to the interface to make sure we aren't sending things elsewhere
server=8.8.8.8 # Forward DNS requests to Google DNS
domain-needed # Don't forward short names
bogus-priv # Never forward addresses in the non-routed address spaces.
dhcp-range=192.168.1.50,192.168.1.100,12h # Assign IP addresses in that range with a 12 hour lease time
```
**6. Set up IPv4 forwarding**
The last thing we need to do is configure packet forwarding. Open the `sysctl.conf` file with `sudo nano /etc/sysctl.conf` and remove the `#` at the beginning of the line containing `net.ipv4.ip_forward=1`. It will take effect on the next reboot.
We also need to share the Raspberry Pi's internet connection with the devices connected over WiFi by configuring NAT between `wlan0` and `eth0`. We can do that with a script like the following.
```
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
```
I named the script `hotspot-boot.sh` and made it executable:
```
sudo chmod 755 hotspot-boot.sh
```
The script should run when the Raspberry Pi boots. There are many ways to do this; here is how I did it:
1. Put the file in `/home/pi/scripts`.
2. Edit the `rc.local` file with `sudo nano /etc/rc.local` and put the call to the script before the `exit 0` line (more information [here][16]).
This is what `rc.local` looks like after editing:
```
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
printf "My IP address is %s\n" "$_IP"
fi
sudo /home/pi/scripts/hotspot-boot.sh &
exit 0
```
#### Installing the Samba service and the NTFS compatibility driver
We need to install the following packages to enable the Samba protocol, which lets the [file browser][20] access the folders shared by the Raspberry Pi. `ntfs-3g` gives us access to files on NTFS-formatted external drives.
```
sudo apt-get install ntfs-3g
sudo apt-get install samba samba-common-bin
```
You can follow [these instructions][17] to configure Samba.
Important note: the referenced guide mounts the external drive on the Raspberry Pi. We do not do that, because at the time of writing Raspbian auto-mounts both the SD card and the flash drive under `/media/pi/` at boot. The guide also contains some extra steps that we will not use.
### 2. The Python script
Now that the Raspberry Pi is set up, we need the script that actually copies and backs up the photos. Note that this script only provides a certain degree of automation for the backup process. If you have basic Linux/Raspberry Pi command-line skills, you can simply SSH into the Raspberry Pi, create the folders you need, and copy the photos from one device to the other yourself with `cp` or `rsync`. The script uses `rsync`, which is reliable and supports incremental backups.
The process relies on two files: the script itself and the `backup_photos.conf` configuration file. The latter has just a couple of lines specifying where the destination drive (the flash drive) is mounted and what it is called. This is what it looks like:
```
mount folder=/media/pi/
destination folder=PDRIVE128GB
```
Important: do not add any extra spaces around the `=` sign, or the script will fail.
Below is the Python script. I named it `backup_photos.py` and put it in `/home/pi/scripts/`. I added comments on each line so it is easy to see what it does.
```
#!/usr/bin/python3
import os
import sys
from sh import rsync
'''
Copies the contents of an SD card mounted under /media/pi into a folder with the same name on the destination drive, whose name is defined in the .conf file.
Argument: label/name of the mounted SD Card.
'''
CONFIG_FILE = '/home/pi/scripts/backup_photos.conf'
ORIGIN_DEV = sys.argv[1]
def create_folder(path):
print ('attempting to create destination folder: ',path)
if not os.path.exists(path):
try:
os.mkdir(path)
print ('Folder created.')
except:
print ('Folder could not be created. Stopping.')
return
else:
print ('Folder already in path. Using that instead.')
confFile = open(CONFIG_FILE,'rU')
#IMPORTANT: the rU option opens the file in universal-newline mode,
#so \n and/or \r are both recognized as a new line.
confList = confFile.readlines()
confFile.close()
for line in confList:
line = line.strip('\n')
try:
name , value = line.split('=')
if name == 'mount folder':
mountFolder = value
elif name == 'destination folder':
destDevice = value
except ValueError:
print ('Incorrect line format. Passing.')
pass
destFolder = mountFolder+destDevice+'/'+ORIGIN_DEV
create_folder(destFolder)
print ('Copying files...')
# Uncomment the following line to delete files that are no longer present in the source
# rsync("-av", "--delete", mountFolder+ORIGIN_DEV, destFolder)
rsync("-av", mountFolder+ORIGIN_DEV+'/', destFolder)
print ('Done.')
```
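The script above depends on the third-party `sh` module. If that module is not available, a roughly equivalent call can be made with the standard library alone; the following is a hedged sketch (it requires Python 3.5+ for `subprocess.run`, and the variable names follow the script above):
```
import subprocess

def rsync_copy(source_dir, dest_dir, delete=False):
    # Mirrors the rsync("-av", ...) call above using only the standard library.
    cmd = ["rsync", "-av"]
    if delete:
        cmd.append("--delete")        # remove files that no longer exist on the SD card
    cmd += [source_dir.rstrip("/") + "/", dest_dir]
    subprocess.run(cmd, check=True)   # raises CalledProcessError if rsync fails

# e.g. rsync_copy(mountFolder + ORIGIN_DEV, destFolder)
```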
### 3. iPad Pro configuration
Since all the heavy lifting is done by the Raspberry Pi and no files are transferred through the iPad Pro, which was a huge disadvantage in [one of my earlier attempts][18], we only need to install [Prompt 2][19] on the iPad to connect to the Raspberry Pi over SSH. Once connected, you can either run the Python script or copy files manually.
![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_prompt.jpg)
*SSH connection to the Raspberry Pi from the iPad using Prompt 2*
Since we installed Samba, we can access the USB devices connected to the Raspberry Pi in a more graphical way: you can watch videos and copy and move files between devices. [FileBrowser][20] is perfect for this.
### 4. Putting it all together
Suppose `SD32GB-03` is the label of an SD card connected to one of the Raspberry Pi's USB ports, and `PDRIVE128GB` is the label of the flash drive, also connected to the device and defined in the configuration file above. If we want to back up the photos on the SD card, we do the following:
1. Power on the Raspberry Pi so that the drives are mounted automatically.
2. Connect to the WiFi network created by the Raspberry Pi.
3. Connect to the Raspberry Pi over SSH using the [Prompt 2][21] app.
4. Once connected, type the following command: `python3 backup_photos.py SD32GB-03`
The first backup may take a while, depending on how much of the SD card is used. That means you need to keep the iPad connected to the Raspberry Pi the whole time; you can work around this by using the `nohup` command when running the script:
```
nohup python3 backup_photos.py SD32GB-03 &
```
![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_finished.png)
*Screenshot of the script after it finished running*
### Future customization
I installed a VNC server on the Raspberry Pi so I can connect to its graphical interface from another computer or from the iPad with the [Remoter App][23], and I installed [BitTorrent Sync][24] for remote backups of my photos, which of course needs to be set up first. Once I have a working solution, I will expand this article.
Feel free to leave questions or comments below, and I will reply further down this page.
--------------------------------------------------------------------------------
via: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
Author: [Lenin][a]
Translator: [jiajia9linuxer](https://github.com/jiajia9linuxer)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
[1]: http://bit.ly/1MVVtZi
[2]: http://www.amazon.com/dp/B01D3NZIMA/?tag=movinelect0e-20
[3]: http://www.amazon.com/dp/B01CD5VC92/?tag=movinelect0e-20
[4]: http://www.amazon.com/dp/B010Q57T02/?tag=movinelect0e-20
[5]: http://www.amazon.com/dp/B01F1PSFY6/?tag=movinelect0e-20
[6]: http://amzn.to/293kPqX
[7]: http://amzn.to/290syFY
[8]: http://amzn.to/290syFY
[9]: http://amzn.to/290syFY
[10]: http://amzn.to/290syFY
[11]: http://amzn.to/293kPqX
[12]: https://www.raspberrypi.org/downloads/noobs/
[13]: https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
[14]: https://www.iterm2.com/
[15]: https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/
[16]: https://www.raspberrypi.org/documentation/linux/usage/rc-local.md
[17]: http://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/
[18]: http://bit.ly/1MVVtZi
[19]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
[20]: https://itunes.apple.com/us/app/filebrowser-access-files-on/id364738545?mt=8&uo=4&at=11lqkH
[21]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
[22]: https://en.m.wikipedia.org/wiki/Nohup
[23]: https://itunes.apple.com/us/app/remoter-pro-vnc-ssh-rdp/id519768191?mt=8&uo=4&at=11lqkH
[24]: https://getsync.com/

View File

@ -0,0 +1,76 @@
The 4 Best Linux Boot Loaders
=============================
When you power on your machine, as soon as the POST (power-on self test) completes successfully, the BIOS (basic input/output system) locates the configured boot medium and reads some instructions from the MBR (master boot record) or the GUID partition table, that is, the first 512 bytes of the boot medium. The master boot record contains two important pieces of information: the boot loader and the partition table.
### What is a boot loader?
A boot loader is a small program stored in the MBR or the GUID partition table that helps load an operating system into memory. Without a boot loader, your operating system cannot be loaded into memory.
There are several boot loaders we can install together with Linux on our systems; in this article I will briefly cover a few of the best boot loaders that work with Linux.
### 1. GNU GRUB
GNU GRUB is a popular and probably the most widely used Linux boot loader with multiboot support, based on the original GRUB (GRand Unified Bootloader) created by Erich Stefan Boleyn. GNU GRUB enhanced the original GRUB with improvements, new features, and bug fixes.
Importantly, GRUB 2 has now replaced the original GRUB. Notably, the name GRUB was changed to GRUB Legacy, which is no longer actively developed but can still be used to boot older systems, since bug fixes continue.
GRUB has the following notable features:
- Supports multiboot
- Supports multiple hardware architectures and operating systems, such as Linux and Windows
- Offers a Bash-like interactive command-line interface, so users can run GRUB commands and interact with the configuration files
- Allows access to the GRUB editor
- Supports setting an encrypted password for security
- Supports booting from the network, along with other minor features
Homepage: <https://www.gnu.org/software/grub/>
### 2. LILO (LInux LOader)
LILO is a simple yet powerful and very stable Linux boot loader. With the growing popularity of GRUB, which brought many improvements and powerful features, LILO has fallen out of favor among Linux users.
While LILO is loading, the word "LILO" appears on the screen, each letter appearing before or after a particular event has occurred. However, development of LILO stopped in December 2015; it has a number of limitations, such as:
- No interactive command-line interface
- Supports only a few error codes
- No support for network booting (LCTT note: its variant ELILO supports TFTP/DHCP booting)
- All of its files must be stored within the first 1024 cylinders of the drive
- Limitations with Btrfs, GPT, RAID, and so on
Homepage: <http://lilo.alioth.debian.org/>
### 3. BURG - a new boot loader
Based on GRUB, BURG is a relatively new boot loader (LCTT note: its development stopped in 2011). Because BURG is derived from GRUB, it shares some of GRUB's primary features; nevertheless, it also offers remarkable features of its own, such as a new object format that supports multiple platforms, including Linux, Windows, Mac OS, and FreeBSD.
In addition, BURG supports highly configurable text- and icon-mode boot menus, with planned "stream" support so that it can work with various input/output devices in the future.
Homepage: <https://launchpad.net/burg>
### 4. Syslinux
Syslinux is a lightweight boot loader that can boot from CD-ROM drives, the network, and so on. It supports file systems such as FAT on MS-DOS and ext2, ext3, and ext4 on Linux. It also supports uncompressed, single-device Btrfs.
Note that Syslinux can only access files on its own partition, so it does not offer multi-filesystem boot capability.
Homepage: <http://www.syslinux.org/wiki/index.php?title=The_Syslinux_Project>
### Conclusion
A boot loader lets you manage multiple operating systems on your machine and pick which one to use at a given moment; without one, your machine cannot load the kernel and the rest of the operating system.
Did we miss any first-class Linux boot loader? If so, let us know by using the comment form below to suggest any Linux boot loaders worth mentioning.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/best-linux-boot-loaders/
Author: [Aaron Kili][a]
Translator: [ucasFL](https://github.com/ucasFL)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: http://www.tecmint.com/best-linux-boot-loaders/

View File

@ -0,0 +1,140 @@
Googler: Now You Can Google from the Linux Terminal!
=====================================================
![](https://itsfoss.com/wp-content/uploads/2016/09/google-from-linux-terminal.jpg)
A quick question: what do you do every day? Lots of things, of course, but I can point out one thing you do almost every day (if not every day): you search on Google. Am I right? (LCTT note: Google? What's that? /cry)
Now, if you are a Linux user (and I guess you are), here is another question: wouldn't it be great if you could Google without even leaving the terminal, without even opening a browser window?
If you are a fan of [*nix][7] systems and also someone who loves the terminal, I know your answer is yes, and I think you are going to love the nifty little tool I am introducing today. It is called Googler.
### Googler: Google in your Linux terminal
Googler is a simple command-line tool for searching Google directly from your terminal window. Googler mainly supports three types of Google searches:
- Google Search: a plain Google search, equivalent to searching from the Google homepage.
- Google News Search: a Google News search, the same as searching on Google News.
- Google Site Search: a Google search for results from one particular site.
Googler shows the search results with titles, links, and page excerpts. A result can be opened directly in the browser with just a couple of keystrokes.
![](https://itsfoss.com/wp-content/uploads/2016/09/googler-1.png)
### Installing Googler on Ubuntu
Let's install it first.
First make sure your Python version is 3.3 or later; you can check it with the following command:
```
python3 --version
```
If it is not, upgrade it. Googler requires Python 3.3 or later to run.
Although Googler is not yet available in Ubuntu's software repositories, we can easily install it from the GitHub repository. All we have to do is run the following commands:
```
cd /tmp
git clone https://github.com/jarun/googler.git
cd googler
sudo make install
cd auto-completion/bash/
sudo cp googler-completion.bash /etc/bash_completion.d/
```
And that's it: Googler is installed, along with command autocompletion.
### Features & basic usage
If we go through all of its features, Googler turns out to be a rather powerful tool. Some of its main features are:
#### Interactive interface
Run the following command in the terminal:
```
googler
```
The interactive interface opens. The developer of Googler, [Arun Prakash Jana][1], calls it the omniprompt. You can enter `?` to see the commands available at the omniprompt:
![](https://itsfoss.com/wp-content/uploads/2016/09/googler-2.png)
At the prompt, type any search keywords to start a search; you can then enter `n` or `p` to go to the next or previous page of results.
To open a result in a browser window, just type its number; or you can enter `o` to open the search page itself.
#### News search
If you want to search for news, start Googler with the `N` option:
```
googler -N
```
Subsequent searches will fetch results from Google News.
#### Site search
If you want to search within a particular site, start Googler with the `w domain` option:
```
googler -w itsfoss.com
```
Subsequent searches will only fetch results from that blog!
#### Man page
Run the following command to see Googler's man page, which includes various use cases:
```
man googler
```
#### Country/region-specific Google search
```
googler -c in "hello world"
```
The example command above fetches results from Google's India domain (in stands for India).
Also supported:
- Filtering search results by time and language preference
- Google search keywords, such as `site:example.com` or `filetype:pdf`, etc.
- HTTPS proxy support
- Shell command autocompletion
- Disabling automatic spelling correction
And there are more features; you can tweak Googler to your needs.
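Because the results can also be emitted as machine-readable output in recent builds, the tool is easy to drive from a script. The `--json` and `--np` flags used below are an assumption about your installed version (check `googler --help`), as is the exact shape of the returned fields; this is only a hedged Python sketch:
```
import json
import subprocess

def google(query, num=5):
    # Assumes a googler build with --json (structured output) and --np (no prompt).
    out = subprocess.run(
        ["googler", "--json", "--np", "-n", str(num), query],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)   # assumed: a list of dicts with title/url/abstract keys

if __name__ == "__main__":
    for hit in google("site:itsfoss.com googler"):
        print(hit.get("title"), "->", hit.get("url"))
```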
Googler can also be integrated with some text-based browsers (such as [elinks][2], [links][3], [lynx][4], w3m, etc.), so you don't even have to leave the terminal to browse web pages. Instructions can be found on the [Googler GitHub project page][5].
If you want a video demonstration of Googler's various features, there is a recorded terminal session linked from the GitHub project page: [jarun/googler v2.7 quick demo][6].
### Thoughts on Googler?
Although Googler may not be necessary or desirable for everyone, it is a great tool for anyone who doesn't want to open a browser just to Google something, or who simply lives in the terminal window. What do you think?
--------------------------------------------------------------------------------
via: https://itsfoss.com/review-googler-linux/
Author: [Munif Tanjim][a]
Translator: [LinuxBars](https://github.com/LinuxBars)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://itsfoss.com/author/munif/
[1]: https://github.com/jarun
[2]: http://elinks.or.cz/
[3]: http://links.twibright.com/
[4]: http://lynx.browser.org/
[5]: https://github.com/jarun/googler#faq
[6]: https://asciinema.org/a/85019
[7]: https://en.wikipedia.org/wiki/Unix-like

View File

@ -0,0 +1,87 @@
Torvalds 2.0: Patricia Torvalds, Linus's daughter, on computing, college, feminism, and increasing diversity in tech
====================
![Image by : Photo by Becky Svartström. Modified by Opensource.com. CC BY-SA 4.0](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-lead-patriciatorvalds.png)
*Image credit: photo by Becky Svartström, modified by Opensource.com. CC BY-SA 4.0*
Patricia Torvalds isn't as well known in Linux and open source circles as her father Linus, at least not yet.
![](http://opensource.com/sites/default/files/images/life-uploads/ptorvalds.png)
At 18, Patricia is already a feminist with a list of tech achievements and open source industry experience, and she has set her sights on her first semester at Duke University's Pratt School of Engineering. At the time, she was interning at [Puppet Labs][2] in Portland, Oregon, but she would soon head to Durham, North Carolina, to start the fall semester of college.
In this exclusive interview, Patricia explains what got her interested in computer science and engineering (spoiler alert: it wasn't her father), what her high school did "right" in teaching technology, the important role feminism plays in her life, and her thoughts on the lack of diversity in technology.
![](http://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)
### What got you interested in studying computer science and engineering? ###
My interest in tech mainly grew through high school. For a while I wanted to go into biology, and that lasted until about my sophomore year. After sophomore year I worked as a web-design intern at the Portland [VA][11]. At the same time I took an engineering class called Exploratory Ventures (XV), and late in my sophomore year we sent an underwater robot into the Pacific Ocean. But the turning point was probably midway through my junior year, when I was named a regional winner and national runner-up for the [NCWIT Aspirations in Computing][6] award. (LCTT note: NCWIT is the National Center for Women & IT.)
Winning the award made me feel that my interest had been validated. Of course, I think the most important part was joining a Facebook group made up of all the award winners. The girls who win it are incredible and so supportive of each other. Because of my work at XV and at the [VA][11], I was already genuinely interested in computer science before the award, but talking with these girls solidified that interest and made it much stronger. Teaching XV later, in my junior and senior years, also let me experience the joy of engineering and computer science.
### What do you plan to study? And do you already know what you want to do after college? ###
I hope to major in either mechanical engineering or electrical and computer engineering, as well as computer science, and to minor in women's studies. After graduating, I hope to work for a company that supports or creates technology for social good, or to start my own company.
### My daughter had a Visual Basic programming class in high school. She was the only girl in the class, and it ended up being a frustrating and painful experience for her. What was your experience like? ###
My high school offered computer science classes in senior year, and I took Visual Basic as well. The class wasn't terrible, but I was definitely one of only three or four girls in a class of about 20. The other computing classes seemed to have a similar gender imbalance. However, my high school was extremely small, and the teachers were very supportive of inclusivity in tech, so I didn't feel harassed or discouraged. Hopefully these classes will become more diverse in the coming years.
### What did your school do to promote technology, and how could it be improved? ###
My high school gave us plenty of time with computers, and teachers would suddenly assign technology-related tasks in unrelated classes; a few times we even built a website for a social studies class, which I think is great because it exposed every one of us to technology. The robotics club was also active and well funded, but very small; I was not a member. A very important part of the school's technology and engineering program is a student-taught engineering class called [Exploratory Ventures][8], a hands-on class that takes on a new engineering or computer science problem every year. I taught it for two years with a classmate, and after the class ended, students came up to me and told me they had become interested in pursuing engineering or computer science.
However, my high school did not specifically focus on drawing young women into these classes, and it was not racially diverse. The computing classes and clubs were overwhelmingly made up of white male students. This could definitely be improved.
### Growing up, how did you use technology at home? ###
Honestly, when I was little, I used my computer time ([my dad Linus][9] set up a tracker that cut off our internet after an hour) to play [Neopets][10] and similar games. I guess I could have tampered with the tracker or played games offline, but I didn't. I also sometimes did small science projects with my dad, and I remember once printing "Hello world" in the terminal with him a few thousand times. But mostly I just played online games with my sisters and didn't really start learning about computers until high school.
### You were active in your high school's feminism club. What did you learn from that experience? What feminist issues are most important to you now? ###
My friend and I co-founded the feminism club late in our sophomore year. At first we faced a lot of resistance to the club, and that never entirely went away. But by the time we graduated, feminist ideals had genuinely become part of the school's culture. The feminist work we did at school was usually on a fairly immediate scale, focused on issues like the dress code.
Personally, I'm more focused on intersectional feminism, which is feminism as it applies to other forms of oppression such as racism and classism. The Facebook page [Guerrilla Feminism][4] is a great example of intersectional feminism and has taught me a lot. I currently run the Portland branch.
Feminism is also important to me in terms of diversity in tech, although as an older white woman with strong connections to the tech world, the problems of diversity affect me far less than they do others, and the same goes for my involvement in intersectional feminism. Publications like [Model View Culture][5] are very inspiring to me, and I'm grateful to Shanley Kane for everything she does.
### What advice would you give parents who want to teach their children to program? ###
Honestly, nobody ever pushed me into computer science or engineering. As I said, for a long time I wanted to be a geneticist. The summer after my sophomore year I did a web-design internship at the [VA][11], and that completely changed my mind. So I don't know whether I can fully answer this question.
I do think genuine interest is important. If my dad had sat me down in front of a computer when I was 12 and taught me to configure a web server, I don't think I would have become interested in computer science. Instead, my parents gave me a lot of free time to do whatever I wanted, which was mostly writing terrible HTML sites for my Neopets. None of my younger sisters is interested in engineering or computer science, and my parents don't mind. I feel lucky that my parents gave me and my sisters the encouragement and resources to explore our own interests.
Still, growing up I often said my future career would be "like my dad's", even though at the time I didn't know what he did, only that he had a cool job. Also, once in middle school I mentioned this to my dad, and he didn't say much; he just told me not to think about it until high school. So I guess that encouraged me to some degree.
### What advice do you have for open source community leaders on attracting and retaining a more diverse set of contributors? ###
I'm actually not particularly active in open source communities; I feel much more comfortable discussing computing with other women. I'm a member of the [NCWIT Aspirations in Computing][6] network, which has been an important part of my continued interest in technology, along with the Facebook group [Ladies Storm Hackathons][7].
I think safe spaces are important for attracting and retaining diverse talent. I have seen people in some open source communities make misogynistic or racist comments, and when someone pointed it out, the person was removed. I think that to maintain a professional community there must be a high standard for what counts as harassment or inappropriate behavior. Of course, people have had, and will continue to have, differing opinions about what can be expressed in open source communities or any other community. However, if community leaders really want to attract and retain diverse talent, they must create a safe space and hold community members to high standards.
I also think some community leaders simply don't value diversity. It's easy to believe that tech is a meritocracy, and part of the reason is that the people who are not at the center of tech are invisible to them; the problem starts early in the pipeline. They argue that if someone is good enough at their job, then their gender, ethnicity, or sexual orientation shouldn't matter. That's easy to argue against, but I don't want to see excuses made for these mistakes. I think the lack of diversity is a failure, and we should take responsibility for it and actively work to make it better.
--------------------------------------------------------------------------------
via: http://opensource.com/life/15/8/patricia-torvalds-interview
Author: [Rikki Endsley][a]
Translator: [ucasFL](https://github.com/ucasFL)
Proofreaders: [LinuxBars](https://github.com/LinuxBars), [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:http://opensource.com/users/rikki-endsley
[1]:https://creativecommons.org/licenses/by-sa/4.0/
[2]:https://puppetlabs.com/
[3]:https://www.aspirations.org/
[4]:https://www.facebook.com/guerrillafeminism
[5]:https://modelviewculture.com/
[6]:https://www.aspirations.org/
[7]:https://www.facebook.com/groups/LadiesStormHackathons/
[8]: http://exploratoryventures.com/
[9]: https://plus.google.com/+LinusTorvalds/about
[10]: http://www.neopets.com/
[11]: http://www.va.gov/

View File

@ -1,3 +1,5 @@
translating by Chao-zhi
Adobe's new CIO shares leadership advice for starting a new role
====
@ -42,7 +44,7 @@ Through this whole process, Ive been very open with people that this is not g
via: https://enterprisersproject.com/article/2016/9/adobes-new-cio-shares-leadership-advice-starting-new-role
Author: [Cynthia Stoddard][a]
Translator: [译者ID](https://github.com/译者ID)
Translator: [Chao-zhi](https://github.com/Chao-zhi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

View File

@ -1,62 +0,0 @@
Linus Torvalds reveals his favorite programming laptop
====
>It's the Dell XPS 13 Developer Edition. Here's why.
I recently talked with some Linux developers about what the best laptop is for serious programmers. As a result I checked out several laptops from a programmer's viewpoint. The winner in my book? The 2016 Dell XPS 13 Developer Edition. I'm in good company. Linus Torvalds, Linux's creator, agrees. The Dell XPS 13 Developer Edition, for him, is the best laptop around.
![](http://zdnet3.cbsistatic.com/hub/i/r/2016/07/18/702609c3-db38-4603-9f5f-4dcc3d71b140/resize/770xauto/50a8ba1c2acb1f0994aec2115d2e55ce/2016-dell-xps-13.jpg)
Torvalds' requirements may not be yours, though.
On Google+, Torvalds explained: "First off: [I don't use my laptop as a desktop replacement][1], and I only travel for a small handful of events each year. So for me, the laptop is a fairly specialized thing that doesn't get daily (or even weekly) use, so the main criteria are not some kind of 'average daily use', but very much 'travel use'."
Therefore, for Torvalds, "I end up caring a lot about it being fairly small and light, because I may end up carrying it around all day at a conference. I also want it to have a good screen, because by now I'm just used to it at my main desktop, and I want my text to be legible but small."
The Dell's display is powered by Intel's Iris 540 GPU. In my experience it works really well.
The Iris powers a 13.3 inch display with a 3,200×1,800 touchscreen. That's 280 pixels per inch, 40 more than my beloved [2015 Chromebook Pixel][2] and 60 more than a [MacBook Pro with Retina][3].
However, getting that hardware to work and play well with the [Gnome][4] desktop isn't easy. As Torvalds explained in another post, it "has the [same resolution as my desktop][5], but apparently because the laptop screen is smaller, Gnome seems to decide on its own that I need an automatic scaling factor of 2, which blows up all the stupid things (window decorations, icons etc) to a ridiculous degree".
The solution? You can forget about looking to the user interface. You need to go to the shell and run: `gsettings set org.gnome.desktop.interface scaling-factor 1`.
Torvalds may use Gnome, but he's [never liked the Gnome 3.x family much][6]. I can't argue with him. That's why I use [Cinnamon][7] instead.
He also wants "a reasonably powerful CPU, because when I'm traveling I still build the kernel a lot. I don't do my normal full 'make allmodconfig' build between each pull request like I do at home, but I'd like to do it more often than I did with my previous laptop, which is actually (along with the screen) the main reason I wanted to upgrade."
Linus doesn't describe the features of his XPS 13, but my review unit was a high-end model. It came with a dual-core, 2.2GHz 6th Generation Intel Core i7-6560U Skylake processor, 16GB of DDR3 RAM, and a half-terabyte PCIe solid-state drive (SSD). I'm sure Torvalds' system is at least that well-equipped.
Some features you may care about aren't on Torvalds' list.
>"What I don't tend to care about is touch-screens, because my fingers are big and clumsy compared to the text I'm looking at (I also can't handle the smudges: maybe I just have particularly oily fingers, but I really don't want to touch that screen).
I also don't care deeply about some 'all day battery life', because quite frankly, I can't recall the last time I didn't have access to power. I might not want to bother to plug it in for some quick check, but it's just not a big overwhelming issue. By the time battery life is in 'more than a couple of hours', I just don't care very much any more."
Dell claims the XPS 13, with its 56wHR, 4-Cell Battery, has about a 12-hour battery life. It has well over 10 in my experience. I haven't tried to run it down to the dregs.
Torvalds also didn't have any trouble with the Intel Wi-Fi set. The non Developer Edition uses a Broadcom chip set and that has proven troublesome for both Windows and Linux users. Dell technical support was extremely helpful to me in getting this problem under control.
Some people have trouble with the XPS 13 touchpad. Neither I nor Torvalds have any worries. Torvalds wrote, the "XPS13 touchpad works very well for me. That may be a personal preference thing, but it seems to be both smooth and responsive."
Still, while Torvalds likes the XPS 13, he's also fond of the latest Lenovo X1 Carbon, HP Spectre 13 x360, and last year's Lenovo Yoga 900. Me? I like the XPS 13 Developer Edition. The price tag, which for the model I reviewed was $1,949.99, may keep you from reaching for your credit card.
Still, if you want to develop like one of the world's top programmers, the Dell XPS 13 Developer Edition is worth the money.
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/linus-torvalds-reveals-his-favorite-programming-laptop/
Author: [Steven J. Vaughan-Nichols][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]: https://plus.google.com/+LinusTorvalds/posts/VZj8vxXdtfe
[2]: http://www.zdnet.com/article/the-best-chromebook-ever-the-chromebook-pixel-2015/
[3]: http://www.zdnet.com/product/apple-15-inch-macbook-pro-with-retina-display-mid-2015/
[4]: https://www.gnome.org/
[5]: https://plus.google.com/+LinusTorvalds/posts/d7nfnWSXjfD
[6]: http://www.zdnet.com/article/linus-torvalds-finds-gnome-3-4-to-be-a-total-user-experience-design-failure/
[7]: http://www.zdnet.com/article/how-to-customise-your-linux-desktop-cinnamon/

View File

@ -1,84 +0,0 @@
Setup honeypot in Kali Linux
====
Pentbox is a security kit containing various tools that streamline penetration-testing work. It is programmed in Ruby and oriented to GNU/Linux, with support for Windows, MacOS and every system where Ruby is installed. In this small article we will explain how to set up a honeypot in Kali Linux. If you don't know what a honeypot is, "a honeypot is a computer security mechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems."
### Download Pentbox:
Simply type in the following command in your terminal to download pentbox-1.8.
```
root@kali:~# wget http://downloads.sourceforge.net/project/pentbox18realised/pentbox-1.8.tar.gz
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-1.jpg)
### Uncompress pentbox files
Decompressing the file with the following command:
```
root@kali:~# tar -zxvf pentbox-1.8.tar.gz
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-2.jpg)
### Run pentbox ruby script
Change directory into pentbox folder
```
root@kali:~# cd pentbox-1.8/
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-3.jpg)
Run pentbox using the following command
```
root@kali:~# ./pentbox.rb
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-4.jpg)
### Setup a honeypot
Use option 2 (Network Tools) and then option 3 (Honeypot).
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-5.jpg)
Finally for first test, choose option 1 (Fast Auto Configuration)
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-6.jpg)
This opens up a honeypot on port 80. Simply open a browser and browse to http://192.168.160.128 (where 192.168.160.128 is your IP address). You should see an Access denied error.
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-7.jpg)
and in the terminal you should see “HONEYPOT ACTIVATED ON PORT 80” followed by “INTRUSION ATTEMPT DETECTED”.
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-8.jpg)
Now, if you do the same steps but this time select Option 2 (Manual Configuration), you should see more options:
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-9.jpg)
Do the same steps but select port 22 this time (the SSH port). Then set up port forwarding on your home router to forward external port 22 to this machine's port 22. Alternatively, set it up on a VPS on your cloud server.
You'd be amazed how many bots out there are scanning the SSH port continuously. You know what you do then? You try to hack them back for the lulz!
Here's a video of setting up the honeypot, if video is your thing:
<https://youtu.be/NufOMiktplA>
--------------------------------------------------------------------------------
via: https://www.blackmoreops.com/2016/05/06/setup-honeypot-in-kali-linux/
Author: [blackmoreops.com][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: blackmoreops.com

View File

@ -0,0 +1,234 @@
# Scientific Audio Processing, Part II - How to do basic mathematical signal processing on audio files using Ubuntu with Octave 4.0
In the [previous tutorial](https://www.howtoforge.com/tutorial/how-to-read-and-write-audio-files-with-octave-4-in-ubuntu/), we saw the simple steps to read, write and playback audio files. We even saw how we can synthesize an audio file from a periodic function such as the cosine function. In this tutorial, we will see how we can do additions to signals, multiplying signals (modulation), and applying some basic mathematical functions to see their effect on the original signal.
### Adding Signals
The sum of two signals S1(t) and S2(t) results in a signal R(t) whose value at any instant of time is the sum of the added signal values at that moment. Just like this:
```
R(t) = S1(t) + S2(t)
```
We will recreate the sum of two signals in Octave and see the effect graphically. First, we will generate two signals of different frequencies to see the signal resulting from the sum.
#### Step 1: Creating two signals of different frequencies (ogg files)
```
>> sig1='cos440.ogg'; %creating the audio file @440 Hz
>> sig2='cos880.ogg'; %creating the audio file @880 Hz
>> fs=44100; %generating the parameters values (Period, sampling frequency and angular frequency)
>> t=0:1/fs:0.02;
>> w1=2*pi*440*t;
>> w2=2*pi*880*t;
>> audiowrite(sig1,cos(w1),fs); %writing the function cos(w) on the files created
>> audiowrite(sig2,cos(w2),fs);
```
Here we will plot both signals.
Plot of Signal 1 (440 Hz)
```
>> [y1, fs] = audioread(sig1);
>> plot(y1)
```
[![Plot of signal 1](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/plotsignal1.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/plotsignal1.png)
Plot of Signal 2 (880 Hz)
```
>> [y2, fs] = audioread(sig2);
>> plot(y2)
```
[![Plot of signal 2](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/plotsignal2.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/plotsignal2.png)
#### Step 2: Adding two signals
Now we perform the sum of the two signals created in the previous step.
```
>> sumres=y1+y2;
>> plot(sumres)
```
Plot of Resulting Signal
[![Plot Signal sum.](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/plotsum.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/plotsum.png)
The Octaver Effect
With the Octaver, the sound produced by this effect is characteristic: it emulates the note being played by the musician, either in a lower or a higher octave (depending on how it has been programmed), coupled with the sound of the original note, i.e., two notes sound at the same time.
#### Step 3: Adding two real signals (example with two musical tracks)
For this purpose, we will use two tracks of Gregorian Chants (voice sampling).
Avemaria Track
First, will read and plot an Avemaria track:
```
>> [y1,fs]=audioread('avemaria_.ogg');
>> plot(y1)
```
[![Avemaria track](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/avemaria.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/avemaria.png)
Hymnus Track
Now, will read and plot an hymnus track
```
>> [y2,fs]=audioread('hymnus.ogg');
>> plot(y2)
```
[![Hymnus track](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/hymnus.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/hymnus.png)
Avemaria + Hymnus Track
```
>> y='avehymnus.ogg';
>> audiowrite(y, y1+y2, fs);
>> [y, fs]=audioread('avehymnus.ogg');
>> plot(y)
```
[![Avemaria + hymnus track](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/avehymnus.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/avehymnus.png)
The result, from the point of view of audio, is that both tracks will sound mixed.
### Product of two Signals
To multiply two signals, we proceed in a way analogous to the sum. Let's use the same files created previously.
```
R(t) = S1(t) * S2(t)
```
```
>> sig1='cos440.ogg'; %creating the audio file @440 Hz
>> sig2='cos880.ogg'; %creating the audio file @880 Hz
>> product='prod.ogg'; %creating the audio file for product
>> fs=44100; %generating the parameters values (Period, sampling frequency and angular frequency)
>> t=0:1/fs:0.02;
>> w1=2*pi*440*t;
>> w2=2*pi*880*t;
>> audiowrite(sig1, cos(w1), fs); %writing the function cos(w) on the files created
>> audiowrite(sig2, cos(w2), fs);
>> [y1,fs]=audioread(sig1);
>> [y2,fs]=audioread(sig2);
>> audiowrite(product, y1.*y2, fs); %performing the product
>> [yprod,fs]=audioread(product);
>> plot(yprod); %plotting the product
```
Note: we have to use the operator '.*' because the product is computed element by element (value by value) on the signals read from the files. For more information, please refer to the Octave manual sections on matrix and element-wise product operations.
#### Plot of Resulting Product Signal
[![Plotted product](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/plotprod.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/plotprod.png)
#### Graphical effect of multiplying two signals with a big fundamental frequency difference (Principles of Modulation)
##### Step 1:
Create an audio frequency signal with a 220Hz frequency.
```
>> fs=44100;
>> t=0:1/fs:0.03;
>> w=2*pi*220*t;
>> y1=cos(w);
>> plot(y1);
```
[![Carrier](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/carrier.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/carrier.png)
##### Step 2:
Create a higher frequency modulating signal of 22000 Hz.
```
>> y2=cos(100*w);
>> plot(y2);
```
[![Modulating](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/modulating.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/modulating.png)
##### Step 3:
Multiplying and plotting the two signals.
```
>> plot(y1.*y2);
```
[![Modulated signal](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/modulated.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/modulated.png)
### Multiplying a signal by a scalar
The effect of multiplying a function by a scalar is to modify its amplitude and, in some cases, the sign of its phase. Given a scalar K, the product of a function F(t) by the scalar is defined as:
```
R(t) = K*F(t)
```
```
>> [y,fs]=audioread('cos440.ogg'); %creating the work files
>> res1='coslow.ogg';
>> res2='coshigh.ogg';
>> res3='cosinverted.ogg';
>> K1=0.2; %values of the scalars
>> K2=0.5;
>> K3=-1;
>> audiowrite(res1, K1*y, fs); %product function-scalar
>> audiowrite(res2, K2*y, fs);
>> audiowrite(res3, K3*y, fs);
```
#### Plot of the Original Signal
```
>> plot(y)
```
[![](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/originalsignal.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/originalsignal.png)
Plot of a Signal reduced in amplitude by 0.2
```
>> plot(K1*y)
```
[![Cosine low](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/coslow.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/coslow.png)
Plot of a Signal reduced in amplitude by 0.5
```
>> plot(K2*y)
```
[![Cosine high](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/coshigh.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/coshigh.png)
Plot of a Signal with inverted phase
```
>> plot(K3*y)
```
[![Cosine inverted](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/cosinverted.png)](https://www.howtoforge.com/images/octave-audio-signal-processing-ubuntu/big/cosinverted.png)
### Conclusion
The basic mathematical operations, such as the algebraic sum, the product, and the product of a function by a scalar, are the backbone of more advanced operations, among which are spectrum analysis, amplitude modulation, angular modulation, and so on. In the next tutorial, we will see how to perform such operations and their effects on audio signals.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/octave-audio-signal-processing-ubuntu/
Author: [David Duarte][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.howtoforge.com/tutorial/octave-audio-signal-processing-ubuntu/

View File

@ -1,4 +1,5 @@
translating by hkurj
Translating by bianjp
Basic Linux Networking Commands You Should Know
==================================================
@ -85,7 +86,7 @@ Arp is used to translate IP addresses into Ethernet addresses. Root can add and
- tcptarget p [port] its able to receive TCP traffic
- ifconfig netmask [up] : it allows to subnet the sub-networks
#### Switching:

View File

@ -1,47 +0,0 @@
Translating by 19761332
DAISY : A Linux-compatible text format for the visually impaired
=================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/osdc-lead_books.png?itok=K8wqfPT5)
> Image by Kate Ter Haar. Modified by opensource.com. CC BY-SA 2.0.
If you're blind or visually impaired like I am, you usually require various levels of hardware or software to do things that people who can see take for granted. One among these is specialized formats for reading print books: Braille (if you know how to read it) or specialized text formats such as DAISY.
### What is DAISY?
DAISY stands for Digital Accessible Information System. It's an open standard used almost exclusively by the blind to read textbooks, periodicals, newspapers, fiction, you name it. It was founded in the mid '90s by [The DAISY Consortium][1], a group of organizations dedicated to producing a set of standards that would allow text to be marked up in a way that would make it easy to read, skip around in, annotate, and otherwise manipulate text in much the same way a sighted user would.
The current version of DAISY 3.0, was released in mid-2005 and is a complete rewrite of the standard. It was created with the goal of making it much easier to write books complying with it. It's worth noting that DAISY can support plain text only, audio recordings (in PCM Wave or MPEG Layer III format) only, or a combination of text and audio. Specialized software can read these books and allow users to set bookmarks and navigate a book as easily as a sighted person would with a print book.
### How does DAISY work?
DAISY, regardless of the specific version, works a bit like this: You have your main navigation file (ncc.html in DAISY 2.02) that contains metadata about the book, such as author's name, copyright date, how many pages the book has, etc. This file is a valid XML document in the case of DAISY 3.0, with DTD (document type definition) files being highly recommended to be included with each book.
In the navigation control file is markup describing precise positions—either text caret offsets in the case of text navigation or time down to the millisecond in the case of audio recordings—that allows the software to skip to that exact point in the book much as a sighted person would turn to a chapter page. It's worth noting that this navigation control file only contains positions for the main, and largest, elements of a book.
The smaller elements are handled by SMIL (synchronized multimedia integration language) files. These files contain position points for each chapter in the book. The level of navigation depends heavily on how well the book was marked up. Think of it like this: If a print book has no chapter headings, you will have a hard time figuring out which chapter you're in. If a DAISY book is badly marked up, you might only be able to navigate to the start of the book, or possibly only to the table of contents. If a book is marked up badly enough (or missing markup entirely), your DAISY reading software is likely to simply ignore it.
### Why the need for specialized software?
You may be wondering why, if DAISY is little more than HTML, XML, and audio files, you would need specialized software to read and manipulate it. Technically speaking, you don't. The specialized software is mostly for convenience. In Linux, for example, a simple web browser can be used to open the books and read them. If you click on the XML file in a DAISY 3 book, all the software will generally do is read the spines of the books you give it access to and create a list of them that you click on to open. If a book is badly marked up, it won't show up in this list.
Producing DAISY is another matter entirely, and usually requires either specialized software or enough knowledge of the specifications to modify general-purpose software to parse it.
### Conclusion
Fortunately, DAISY is a dying standard. While it is very good at what it does, the need for specialized software to produce it has set us apart from the normal sighted world, where readers use a variety of formats to read their books electronically. This is why the DAISY consortium has succeeded DAISY with EPUB, version 3, which supports what are called media overlays. This is basically an EPUB book with optional audio or video. Since EPUB shares a lot of DAISY's XML markup, some software that can read DAISY can see EPUB books but usually cannot read them. This means that once the websites that provide books for us switch over to this open format, we will have a much larger selection of software to read our books.
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/5/daisy-linux-compatible-text-format-visually-impaired
作者:[Kendell Clark][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kendell-clark
[1]: http://www.daisy.org

View File

@ -0,0 +1,54 @@
What is copyleft?
=============
If you've spent much time in open source projects, you have probably seen the term "copyleft" used. While the term is quite commonly used, many people don't understand it. Software licensing is the subject of at least as much heated debate as text editors or packaging formats. An expert understanding of copyleft would fill many books, but this article can be a starting point on your road to copyleft enlightenment.
## What is copyright?
Before we can understand copyleft, we must first introduce the concept of copyright. Copyleft is not a separate legal framework from copyright; copyleft exists within the rules of copyright. So what is copyright?
The exact definition varies based on jurisdiction, but the essence is this: the author of a work has a limited monopoly on the copying (hence the term "copyright"), performance, etc. of the work. In the United States, the Constitution explicitly tasks Congress with creating copyright laws in order to "promote the Progress of Science and useful Arts."
Unlike in the past, copyright attaches to a work immediately -- no registration is required. By default, all rights are reserved. That means no one can republish, perform, or modify a work without permission from the author. This permission is a "license" and may come with certain conditions attached.
For a more thorough introduction to copyright, Coursera's [Copyright for Educators & Librarians](https://www.coursera.org/learn/copyright-for-education) is an excellent resource.
## What is copyleft?
Bear with me, but there's one more step to take before we discuss what copyleft is. First, let's examine what open source means. All open source licenses, by the [Open Source Initiative's definition](https://opensource.org/osd), must, among other things, allow distribution in source form. Anyone who receives open source software has the right to inspect and modify the code.
Where copyleft licenses differ from so-called "permissive" licenses is that copyleft licenses require these same rights to be included in any derivative works. I prefer to think of the distinction in this way: permissive licenses provide the maximum freedom to the immediate downstream developers (including the ability to use the open source code in a closed source project), whereas copyleft licenses provide the maximum freedom through to the end users.
The GNU Project gives this [simple definition](https://www.gnu.org/philosophy/free-sw.en.html) of copyleft: "the rule that when redistributing the program, you cannot add restrictions to deny other people the central freedoms [of free software]." This can be considered the canonical definition, since the [GNU General Public License](https://www.gnu.org/licenses/gpl.html) (GPL) in its various versions remains the most widely-used copyleft license.
## Copyleft in software
While the GPL family are the most popular copyleft licenses, they are by no means the only ones. The [Mozilla Public License](https://www.mozilla.org/en-US/MPL/) and the [Eclipse Public License](https://www.eclipse.org/legal/epl-v10.html) are also very popular. Many [other copyleft licenses](https://tldrlegal.com/licenses/tags/Copyleft) exist with smaller adoption footprints.
As explained in the previous section, a copyleft license means downstream projects cannot add additional restrictions on the use of the software. This is best illustrated with an example. If I wrote MyCoolProgram and distributed it under a copyleft license, you would have the freedom to use and modify it. You could distribute versions with your changes, but you'd have to give your users the same freedoms I gave you. If I had licensed it under a permissive license, you'd be free to incorporate it into a closed software project that you do not provide the source to.
But just as important as what you must do with MyCoolProgram is what you don't have to do. You don't have to use the exact same license I did, so long as the terms are compatible (generally downstream projects use the same license for simplicity's sake). You don't have to contribute your changes back to me, but it's generally considered good form, especially when the changes are bug fixes.
## Copyleft in non-software
Although the notion of copyleft began in the software world, it exists outside as well. The notion of "do what you want, so long as you preserve the right for others to do the same" is the distinguishing characteristic of the [Creative Commons Attribution-ShareAlike](http://creativecommons.org/licenses/by-sa/4.0/) license used for written work, visual art, etc. (CC BY-SA 4.0 is the default license for contributions to Opensource.com.) The [GNU Free Documentation License](https://www.gnu.org/licenses/fdl.html) is another example of a copyleft non-software license. The use of software licenses for non-software work is generally discouraged.
## Should I choose a copyleft license?
Pages and pages could be (and have been!) written about what type of license should be used for a project. My advice is to first narrow the list of licenses to ones that match your philosophy and your goals for the project. GitHub's [choosealicense.com](http://choosealicense.com/) is a good way to find a license that fits your needs. [tl;drLegal](https://tldrlegal.com/) has plain-language explanations of many common and uncommon software licenses. Also consider the ecosystem that your project lives in. Projects around a specific language or technology will often use the same or similar licenses. If you want your project to be able to play nicely, you may need to make sure the license you choose is compatible.
For more information about copyleft licensing, check out the [Copyleft Guide](https://copyleft.org/) project.
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[Ben Cotton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bcotton

View File

@ -0,0 +1,133 @@
part 7 - How to manage binary blobs with Git
=====================
In the previous six articles in this series we learned how to manage version control on text files with Git. But what about binary files? Git has extensions for handling binary blobs such as multimedia files, so today we will learn how to manage binary assets with Git.
One thing everyone seems to agree on is Git is not great for big binary blobs. Keep in mind that a binary blob is different from a large text file; you can use Git on large text files without a problem, but Git can't do much with an impervious binary file except treat it as one big solid black box and commit it as-is.
Say you have a complex 3D model for the exciting new first person puzzle game you're making, and you save it in a binary format, resulting in a 1 gigabyte file. You git commit it once, adding a gigabyte to your repository's history. Later, you give the model a different hair style and commit your update; Git can't tell the hair apart from the head or the rest of the model, so you've just committed another gigabyte. Then you change the model's eye color and commit that small change: another gigabyte. That is three gigabytes for one model with a few minor changes made on a whim. Scale that across all the assets in a game, and you have a serious problem.
Contrast that to a text file like the .obj format. One commit stores everything, just as with the other model, but an .obj file is a series of lines of plain text describing the vertices of a model. If you modify the model and save it back out to .obj, Git can read the two files line by line, create a diff of the changes, and process a fairly small commit. The more refined the model becomes, the smaller the commits get, and it's a standard Git use case. It is a big file, but it uses a kind of overlay or sparse storage method to build a complete picture of the current state of your data.
However, not everything works in plain text, and these days everyone wants to work with Git. A solution was required, and several have surfaced.
[OSTree](https://ostree.readthedocs.io/en/latest/) began as a GNOME project and is intended to manage operating system binaries. It doesn't apply here, so I'll skip it.
[Git Large File Storage](https://git-lfs.github.com/) (LFS) is an open source project from GitHub that began life as a fork of git-media. [git-media](https://github.com/alebedev/git-media) and [git-annex](https://git-annex.branchable.com/walkthrough/) are extensions to Git meant to manage large files. They are two different approaches to the same problem, and they each have advantages. These aren't official statements from the projects themselves, but in my experience, the unique aspects of each are:
* git-media is a centralised model, a repository for common assets. You tell git-media where your large files are stored, whether that is a hard drive, a server, or a cloud storage service, and each user on your project treats that location as the central master location for large assets.
* git-annex favors a distributed model; you and your users create repositories, and each repository gets a local .git/annex directory where big files are stored. The annexes are synchronized regularly so that all assets are available to all users as needed. Unless configured otherwise with annex-cost, git-annex prefers local storage before off-site storage.
Of these options, I've used git-media and git-annex in production, so I'll give you an overview of how they each work.
### git-media
git-media uses Ruby, so you must install a gem for it. Instructions are on the [website](https://github.com/alebedev/git-media). Each user who wants to use git-media needs to install it, but it is cross-platform, so that is not a problem.
After installing git-media, you must set some Git configuration options. You only need to do this once per machine you use:
```
$ git config filter.media.clean "git-media filter-clean"
$ git config filter.media.smudge "git-media filter-smudge"
```
In each repository that you want to use git-media, set an attribute to marry the filters you've just created to the file types you want to classify as media. Don't get confused by the terminology; a better term is "assets," since "media" usually means audio, video, and photos, but you might just as easily classify 3D models, bakes, and textures as media.
For example:
```
$ echo "*.mp4 filter=media -crlf" >> .gitattributes$ echo "*.mkv filter=media -crlf" >> .gitattributes$ echo "*.wav filter=media -crlf" >> .gitattributes$ echo "*.flac filter=media -crlf" >> .gitattributes$ echo "*.kra filter=media -crlf" >> .gitattributes
```
When you stage a file of those types, the file is copied to .git/media.
Assuming you have a Git repository on the server already, the final step is to tell your Git repository where the "mothership" is; that is, where the media files will go when they have been pushed for all users to share. Set this in the repository's .git/config file, substituting your own user, host, and path:
```
[git-media]
transport = scp
autodownload = false #true to pull assets by default
scpuser = seth
scphost = example.com
scppath = /opt/jupiter.git
```
If you have complex SSH settings on your server, such as a non-standard port or a path to a non-default SSH key file, use .ssh/config to set defaults for the host.
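For instance, a minimal sketch of such an entry might look like the following; the port number and key file name are hypothetical and only show where those defaults would go:
```
$ cat >> ~/.ssh/config <<'EOF'
# Hypothetical defaults for the git-media server
Host example.com
    User seth
    Port 2222
    IdentityFile ~/.ssh/id_rsa_gitmedia
EOF
```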
Life with git-media is mostly normal; you work in your repository, you stage files and blobs alike, and commit them as usual. The only difference in workflow is that at some point along the way, you should sync your secret stockpile of assets (er, media) to the shared repository.
When you are ready to publish your assets for your team or for your own backup, use this command:
```
$ git media sync
```
To replace a file in git-media with a changed version (for example, an audio file has been sweetened, or a matte painting has been completed, or a video file has been colour graded), you must explicitly tell Git to update the media. This overrides git-media's default to not copy a file if it already exists remotely:
```
$ git update-index --really-refresh
```
When other members of your team (or you, on a different computer) clone the repository, no assets will be downloaded by default unless you have set the autodownload option in .git/config to true. A git media sync cures all ills.
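If you would rather have assets pulled automatically, you can flip that flag from the command line instead of editing the file by hand; this is just the ordinary git config front end to the same [git-media] section shown earlier:
```
$ git config git-media.autodownload true
```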
### git-annex
git-annex has a slightly different workflow, and defaults to local repositories, but the basic ideas are the same. You should be able to install git-annex from your distribution's repository, or you can get it from the website as needed. As with git-media, any user using git-annex must install it on their machine.
The initial setup is simpler than git-media. To create a bare repository on your server run this command, substituting your own path:
```
$ git init --bare --shared /opt/jupiter.git
```
Then clone it onto your local computer, and mark it as a git-annex location:
```
$ git clone seth@example.com:/opt/jupiter.clone
Cloning into 'jupiter.clone'...
warning: You appear to have cloned an empty repository.
Checking connectivity... done.
$ git annex init "seth workstation"
init seth workstation ok
```
Rather than using filters to identify media assets or large files, you configure what gets classified as a large file by using the git annex command:
```
$ git annex add bigblobfile.flac
add bigblobfile.flac (checksum) ok
(Recording state in Git...)
```
Committing is done as usual:
```
$ git commit -m 'added flac source for sound fx'
```
But pushing is different, because git annex uses its own branch to track assets. The first push you make may need the -u option, depending on how you manage your repository:
```
$ git push -u origin master git-annex
To seth@example.com:/opt/jupiter.git
 * [new branch]      master -> master
 * [new branch]      git-annex -> git-annex
```
As with git-media, a normal git push does not copy your assets to the server, it only sends information about the media. When you're ready to share your assets with the rest of the team, run the sync command:
```
$ git annex sync --content
```
If someone else has shared assets to the server and you need to pull them, git annex sync will prompt your local checkout to pull assets that are not present on your machine, but that exist on the server.
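If you only need a particular asset rather than everything, git-annex can also fetch and drop individual files on demand; for example (the file name here is just illustrative):
```
$ git annex get bigblobfile.flac
$ git annex drop bigblobfile.flac   # drop refuses to remove the last known copy
```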
Both git-media and git-annex are flexible and can use local repositories instead of a server, so they're just as useful for managing private local projects, too.
Git is a powerful and extensible system, and by now there is really no excuse for not using it. Try it out today!
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
作者:[Seth Kenlon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth

View File

@ -0,0 +1,220 @@
The cost of small modules
====
About a year ago I was refactoring a large JavaScript codebase into smaller modules, when I discovered a depressing fact about Browserify and Webpack:
> “The more I modularize my code, the bigger it gets. ”– Nolan Lawson
Later on, Sam Saccone published some excellent research on [Tumblr](https://docs.google.com/document/d/1E2w0UQ4RhId5cMYsDcdcNwsgL0gP_S6SDv27yi1mCEY/edit) and [Imgur](https://github.com/perfs/audits/issues/1)s page load performance, in which he noted:
> “Over 400ms is being spent simply walking the Browserify tree.”– Sam Saccone
In this post, Id like to demonstrate that small modules can have a surprisingly high performance cost depending on your choice of bundler and module system. Furthermore, Ill explain why this applies not only to the modules in your own codebase, but also to the modules within dependencies, which is a rarely-discussed aspect of the cost of third-party code.
### Web perf 101
The more JavaScript included on a page, the slower that page tends to be. Large JavaScript bundles cause the browser to spend more time downloading, parsing, and executing the script, all of which lead to slower load times.
Even when breaking up the code into multiple bundles (Webpack [code splitting](https://webpack.github.io/docs/code-splitting.html), Browserify [factor bundles](https://github.com/substack/factor-bundle), etc.), the cost is merely delayed until later in the page lifecycle. Sooner or later, the JavaScript piper must be paid.
Furthermore, because JavaScript is a dynamic language, and because the prevailing [CommonJS](http://www.commonjs.org/) module system is also dynamic, its fiendishly difficult to extract unused code from the final payload that gets shipped to users. You might only need jQuerys $.ajax, but by including jQuery, you pay the cost of the entire library.
The JavaScript community has responded to this problem by advocating the use of [small modules](http://substack.net/how_I_write_modules). Small modules have a lot of [aesthetic and practical benefits](http://dailyjs.com/2015/07/02/small-modules-complexity-over-size/): they are easier to maintain, easier to comprehend, and easier to plug together. But they also solve the jQuery problem by promoting the inclusion of small bits of functionality rather than big “kitchen sink” libraries.
So in the “small modules” world, instead of doing:
```
var _ = require('lodash')
_.uniq([1,2,2,3])
```
You might do:
```
var uniq = require('lodash.uniq')
uniq([1,2,2,3])
```
### Packages vs modules
Its important to note that, when I say “modules,” Im not talking about “packages” in the npm sense. When you install a package from npm, it might only expose a single module in its public API, but under the hood it could actually be a conglomeration of many modules.
For instance, consider a package like [is-array](https://www.npmjs.com/package/is-array). It has no dependencies and only contains [one JavaScript file](https://github.com/retrofox/is-array/blob/d79f1c90c824416b60517c04f0568b5cd3f8271d/index.js#L6-L33), so it has one module. Simple enough.
Now consider a slightly more complex package like [once](https://www.npmjs.com/package/once), which has exactly one dependency: [wrappy](https://www.npmjs.com/package/wrappy). [Both](https://github.com/isaacs/once/blob/2ad558657e17fafd24803217ba854762842e4178/once.js#L1-L21) [packages](https://github.com/npm/wrappy/blob/71d91b6dc5bdeac37e218c2cf03f9ab55b60d214/wrappy.js#L6-L33) contain one module, so the total module count is 2. So far, so good.
Now lets consider a more deceptive example: [qs](https://www.npmjs.com/package/qs). Since it has zero dependencies, you might assume it only has one module. But in fact, it has four!
You can confirm this by using a tool I wrote called [browserify-count-modules](https://www.npmjs.com/package/browserify-count-modules), which simply counts the total number of modules in a Browserify bundle:
```
$ npm install qs
$ browserify node_modules/qs | browserify-count-modules
4
```
This means that a given package can actually contain one or more modules. These modules can also depend on other packages, which might bring in their own packages and modules. The only thing you can be sure of is that each package contains at least one module.
### Module bloat
How many modules are in a typical web application? Well, I ran browserify-count-modules on a few popular Browserify-using sites, and came up with these numbers:
* [requirebin.com](http://requirebin.com/): 91 modules
* [keybase.io](https://keybase.io/): 365 modules
* [m.reddit.com](http://m.reddit.com/): 1050 modules
* [Apple.com](http://images.apple.com/ipad-air-2/): 1060 modules (Added. [Thanks, Max!](https://twitter.com/denormalize/status/765300194078437376))
For the record, my own [Pokedex.org](https://pokedex.org/) (the largest open-source site Ive built) contains 311 modules across four bundle files.
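If you want to check a bundle you maintain, the same tool works on anything you can pipe into it; a rough sketch (the URL is a placeholder):
```
$ curl -s https://example.com/static/bundle.js | browserify-count-modules
```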
Ignoring for a moment the raw size of those JavaScript bundles, I think its interesting to explore the cost of the number of modules themselves. Sam Saccone has already blown this story wide open in [“The cost of transpiling es2015 in 2016”](https://github.com/samccone/The-cost-of-transpiling-es2015-in-2016#the-cost-of-transpiling-es2015-in-2016), but I dont think his findings have gotten nearly enough press, so lets dig a little deeper.
### Benchmark time!
I put together [a small benchmark](https://github.com/nolanlawson/cost-of-small-modules) that constructs a JavaScript module importing 100, 1000, and 5000 other modules, each of which merely exports a number. The parent module just sums the numbers together and logs the result:
```
// index.js
var total = 0
total += require('./module_0')
total += require('./module_1')
total += require('./module_2')
// etc.
console.log(total)
// module_1.js
module.exports = 1
```
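In case you want to reproduce something similar yourself, a quick shell sketch along these lines (not the author's actual generator) can stamp out the files:
```
#!/bin/sh
# Rough sketch: generate N tiny modules plus an index.js that requires and sums them.
N=1000
: > body.js
for i in $(seq 0 $((N - 1))); do
  echo "module.exports = $i" > "module_$i.js"
  echo "total += require('./module_$i')" >> body.js
done
{ echo "var total = 0"; cat body.js; echo "console.log(total)"; } > index.js
rm body.js
```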
I tested five bundling methods: Browserify, Browserify with the [bundle-collapser](https://www.npmjs.com/package/bundle-collapser) plugin, Webpack, Rollup, and Closure Compiler. For Rollup and Closure Compiler I used ES6 modules, whereas for Browserify and Webpack I used CommonJS, so as not to unfairly disadvantage them (since they would need a transpiler like Babel, which adds its own overhead).
In order to best simulate a production environment, I used Uglify with the --mangle and --compress settings for all bundles, and served them gzipped over HTTPS using GitHub Pages. For each bundle, I downloaded and executed it 15 times and took the median, noting the (uncached) load time and execution time using performance.now().
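For reference, that minify-and-compress step can be approximated with something like the following (uglify-js CLI; the file names are placeholders, and gzip's -k flag needs a reasonably recent gzip):
```
$ uglifyjs bundle.js --mangle --compress -o bundle.min.js
$ gzip -9 -k bundle.min.js    # keeps the original next to bundle.min.js.gz
```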
### Bundle sizes
Before we get into the benchmark results, its worth taking a look at the bundle files themselves. Here are the byte sizes (minified but ungzipped) for each bundle ([chart view](https://nolanwlawson.files.wordpress.com/2016/08/min.png)):
| | 100 modules | 1000 modules | 5000 modules |
| --- | --- | --- | --- |
| browserify | 7982 | 79987 | 419985 |
| browserify-collapsed | 5786 | 57991 | 309982 |
| webpack | 3954 | 39055 | 203052 |
| rollup | 671 | 6971 | 38968 |
| closure | 758 | 7958 | 43955 |
And here are the gzipped sizes:

| | 100 modules | 1000 modules | 5000 modules |
| --- | --- | --- | --- |
| browserify | 1649 | 13800 | 64513 |
| browserify-collapsed | 1464 | 11903 | 56335 |
| webpack | 693 | 5027 | 26363 |
| rollup | 300 | 2145 | 11510 |
| closure | 302 | 2140 | 11789 |
The way Browserify and Webpack work is by isolating each module into its own function scope, and then declaring a top-level runtime loader that locates the proper module whenever require() is called. Heres what our Browserify bundle looks like:
```
(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o
```
Whereas the Rollup and Closure bundles look more like what you might hand-author if you were just writing one big module. Heres Rollup:
```
(function () {
'use strict';
var total = 0
total += 0
total += 1
total += 2
// etc.
```
If you understand the inherent cost of functions-within-functions in JavaScript, and of looking up a value in an associative array, then youll be in a good position to understand the following benchmark results.
### Results
I ran this benchmark on a Nexus 5 with Android 5.1.1 and Chrome 52 (to represent a low- to mid-range device) as well as an iPod Touch 6th generation running iOS 9 (to represent a high-end device).
Here are the results for the Nexus 5 ([tabular results](https://gist.github.com/nolanlawson/e84ad060a20f0cb7a7c32308b6b46abe)):
[![Nexus 5 results](https://nolanwlawson.files.wordpress.com/2016/08/modules_nexus_5.png?w=570&h=834)](https://nolanwlawson.files.wordpress.com/2016/08/modules_nexus_5.png)
And here are the results for the iPod Touch ([tabular results](https://gist.github.com/nolanlawson/45ed2c7fa53da035dfc1e153763b9f93)):
[![iPod Touch results](https://nolanwlawson.files.wordpress.com/2016/08/modules_ipod.png?w=570&h=827)](https://nolanwlawson.files.wordpress.com/2016/08/modules_ipod.png)
At 100 modules, the variance between all the bundlers is pretty negligible, but once we get up to 1000 or 5000 modules, the difference becomes severe. The iPod Touch is hurt the least by the choice of bundler, but the Nexus 5, being an aging Android phone, suffers a lot under Browserify and Webpack.
I also find it interesting that both Rollup and Closures execution cost is essentially free for the iPod, regardless of the number of modules. And in the case of the Nexus 5, the runtime costs arent free, but theyre still much cheaper for Rollup/Closure than for Browserify/Webpack, the latter of which chew up the main thread for several frames if not hundreds of milliseconds, meaning that the UI is frozen just waiting for the module loader to finish running.
Note that both of these tests were run on a fast Gigabit connection, so in terms of network costs, its really a best-case scenario. Using the Chrome Dev Tools, we can manually throttle that Nexus 5 down to 3G and see the impact ([tabular results](https://gist.github.com/nolanlawson/6269d304c970174c21164288808392ea)):
[![Nexus 5 3G results](https://nolanwlawson.files.wordpress.com/2016/08/modules_nexus_53g.png?w=570&h=834)](https://nolanwlawson.files.wordpress.com/2016/08/modules_nexus_53g.png)
Once we take slow networks into account, the difference between Browserify/Webpack and Rollup/Closure is even more stark. In the case of 1000 modules (which is close to Reddits count of 1050), Browserify takes about 400 milliseconds longer than Rollup. And that 400ms is no small potatoes, since Google and Bing have both noted that sub-second delays have an[appreciable impact on user engagement](http://radar.oreilly.com/2009/06/bing-and-google-agree-slow-pag.html).
One thing to note is that this benchmark doesnt measure the precise execution cost of 100, 1000, or 5000 modules per se, since that will depend on your usage of require(). Inside of these bundles, Im calling require() once per module, but if you are calling require() multiple times per module (which is the norm in most codebases) or if you are calling require() multiple times on-the-fly (i.e. require() within a sub-function), then you could see severe performance degradations.
Reddits mobile site is a good example of this. Even though they have 1050 modules, I clocked their real-world Browserify execution time as much worse than the “1000 modules” benchmark. When profiling on that same Nexus 5 running Chrome, I measured 2.14 seconds for Reddits Browserify require() function, and 197 milliseconds for the equivalent function in the “1000 modules” script. (In desktop Chrome on an i7 Surface Book, I also measured it at 559ms vs 37ms, which is pretty astonishing given were talking desktop.)
This suggests that it may be worthwhile to run the benchmark again with multiple require()s per module, although in my opinion it wouldnt be a fair fight for Browserify/Webpack, since Rollup/Closure both resolve duplicate ES6 imports into a single hoisted variable declaration, and its also impossible to import from anywhere but the top-level scope. So in essence, the cost of a single import for Rollup/Closure is the same as the cost of n imports, whereas for Browserify/Webpack, the execution cost will increase linearly with n require()s.
For the purposes of this analysis, though, I think its best to just assume that the number of modules is only a lower bound for the performance hit you might feel. In reality, the “5000 modules” benchmark may be a better yardstick for “5000 require() calls.”
### Conclusions
First off, the bundle-collapser plugin seems to be a valuable addition to Browserify. If youre not using it in production, then your bundle will be a bit larger and slower than it would be otherwise (although I must admit the difference is slight). Alternatively, you could switch to Webpack and get an even faster bundle without any extra configuration. (Note that it pains me to say this, since Im a diehard Browserify fanboy.)
However, these results clearly show that Webpack and Browserify both underperform compared to Rollup and Closure Compiler, and that the gap widens the more modules you add. Unfortunately Im not sure [Webpack 2](https://gist.github.com/sokra/27b24881210b56bbaff7) will solve any of these problems, because although theyll be [borrowing some ideas from Rollup](http://www.2ality.com/2015/12/webpack-tree-shaking.html), they seem to be more focused on the [tree-shaking aspects](http://www.2ality.com/2015/12/bundling-modules-future.html) and not the scope-hoisting aspects. (Update: a better name is “inlining,” and the Webpack team is [working on it](https://github.com/webpack/webpack/issues/2873#issuecomment-240067865).)
Given these results, Im surprised Closure Compiler and Rollup arent getting much traction in the JavaScript community. Im guessing its due to the fact that (in the case of the former) it has a Java dependency, and (in the case of the latter) its still fairly immature and doesnt quite work out-of-the-box yet (see [Calvin Metcalf's comments](https://github.com/rollup/rollup/issues/552) for a good summary).
Even without the average JavaScript developer jumping on the Rollup/Closure bandwagon, though, I think npm package authors are already in a good position to help solve this problem. If you npm install lodash, youll notice that the main export is one giant JavaScript module, rather than what you might expect given Lodashs hyper-modular nature (require('lodash/uniq'), require('lodash.uniq'), etc.). For PouchDB, we made a similar decision to [use Rollup as a prepublish step](http://pouchdb.com/2016/01/13/pouchdb-5.2.0-a-better-build-system-with-rollup.html), which produces the smallest possible bundle in a way thats invisible to users.
I also created [rollupify](https://github.com/nolanlawson/rollupify) to try to make this pattern a bit easier to just drop-in to existing Browserify projects. The basic idea is to use imports and exports within your own project ([cjs-to-es6](https://github.com/nolanlawson/cjs-to-es6) can help migrate), and then use require() for third-party packages. That way, you still have all the benefits of modularity within your own codebase, while exposing more-or-less one big module to your users. Unfortunately, you still pay the costs for third-party modules, but Ive found that this is a good compromise given the current state of the npm ecosystem.
So there you have it: one horse-sized JavaScript duck is faster than a hundred duck-sized JavaScript horses. Despite this fact, though, I hope that our community will eventually realize the pickle were in advocating for a “small modules” philosophy thats good for developers but bad for users and improve our tools, so that we can have the best of both worlds.
### Bonus round! Three desktop browsers
Normally I like to run performance tests on mobile devices, since thats where you see the clearest differences. But out of curiosity, I also ran this benchmark on Chrome 52, Edge 14, and Firefox 48 on an i7 Surface Book using Windows 10 RS1. Here are the results:
Chrome 52 ([tabular results](https://gist.github.com/nolanlawson/4f79258dc05bbd2c14b85cf2196c6ef0))
[![Chrome results](https://nolanwlawson.files.wordpress.com/2016/08/modules_chrome.png?w=570&h=831)](https://nolanwlawson.files.wordpress.com/2016/08/modules_chrome.png)
Edge 14 ([tabular results](https://gist.github.com/nolanlawson/726fa47e0723b45e4ee9ecf0cf2fcddb))
[![Edge results](https://nolanwlawson.files.wordpress.com/2016/08/modules_edge.png?w=570&h=827)](https://nolanwlawson.files.wordpress.com/2016/08/modules_edge.png)
Firefox 48 ([tabular results](https://gist.github.com/nolanlawson/7eed17e6ffa18752bf99a9d4bff2941f))
[![Firefox results](https://nolanwlawson.files.wordpress.com/2016/08/modules_firefox.png?w=570&h=830)](https://nolanwlawson.files.wordpress.com/2016/08/modules_firefox.png)
The only interesting tidbits Ill call out in these results are:
1. bundle-collapser is definitely not a slam-dunk in all cases.
2. The ratio of network-to-execution time is always extremely high for Rollup and Closure; their runtime costs are basically zilch. ChakraCore and SpiderMonkey eat them up for breakfast, and V8 is not far behind.
This latter point could be extremely important if your JavaScript is largely lazy-loaded, because if you can afford to wait on the network, then using Rollup and Closure will have the additional benefit of not clogging up the UI thread, i.e. theyll introduce less jank than Browserify or Webpack.
Update: in response to this post, JDD has [opened an issue on Webpack](https://github.com/webpack/webpack/issues/2873). Theres also [one on Browserify](https://github.com/substack/node-browserify/issues/1379).
Update 2: [Ryan Fitzer](https://github.com/nolanlawson/cost-of-small-modules/pull/5) has generously added RequireJS and RequireJS with [Almond](https://github.com/requirejs/almond) to the benchmark, both of which use AMD instead of CommonJS or ES6.
Testing shows that RequireJS has [the largest bundle sizes](https://gist.github.com/nolanlawson/511e0ce09fed29fed040bb8673777ec5) but surprisingly its runtime costs are [very close to Rollup and Closure](https://gist.github.com/nolanlawson/4e725df00cd1bc9673b25ef72b831c8b). Here are the results for a Nexus 5 running Chrome 52 throttled to 3G:
[![Nexus 5 (3G) results with RequireJS](https://nolanwlawson.files.wordpress.com/2016/08/2016-08-20-14_45_29-small_modules3-xlsx-excel.png?w=570&h=829)](https://nolanwlawson.files.wordpress.com/2016/08/2016-08-20-14_45_29-small_modules3-xlsx-excel.png)
--------------------------------------------------------------------------------
via: https://nolanlawson.com/2016/08/15/the-cost-of-small-modules/?utm_source=javascriptweekly&utm_medium=email
作者:[Nolan][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nolanlawson.com/

View File

@ -1,129 +0,0 @@
Eriwoon Start to translate this article
The infrastructure behind Twitter: efficiency and optimization
===========
In the past, we've published details about Finagle, Manhattan, and the summary of how we re-architected the site to be able to handle events like Castle in the Sky, the Super Bowl, 2014 World Cup, the global New Year's Eve celebration, among others. In this infrastructure series, we're focusing on the core infrastructure and components that run Twitter. We're also going to focus each blog on efforts surrounding scalability, reliability, and efficiency in a way that highlights the history of our infrastructure, challenges we've faced, lessons learned, upgrades made, and where we're heading.
### Data center efficiency
#### History
Twitter hardware and data centers are at the scale few technology companies ever reach. However, this was not accomplished without a few missteps along the way. Our uptime has matured through a combination of physical improvements and software-based changes.
During the period when the fail whale was prevalent, outages occurred due to software limitations, as well as physical failures at the hardware or infrastructure level. Failure domains existed in various definitions which had to be aggregated to determine the risk and required redundancy for services. As the business scaled in customers, services, media content, and global presence, the strategy evolved to efficiently and resiliently support the service.
#### Challenges
Software dependencies on bare metal were further dependent on our data centers' ability to operate and maintain uptime of power, fiber connectivity, and environment. These discrete physical failure domains had to be reviewed against the services distributed on the hardware to provide for fault tolerance.
The initial decision of which data center service provider to scale with was done when specialization in site selection, operation, and design was in its infancy. We began in a hosted provider then migrated to a colocation facility as we scaled. Early service interruptions occurred as result of equipment failures, data center design issues, maintenance issues, and human error. As a result, we continually iterated on the physical layer designs to increase the resiliency of the hardware and the data center operations.
The physical reasons for service interruptions were inclusive of hardware failures at the server component level, top of rack switch, and core switches. For example, during the initial evaluation of our customized servers, the hardware team determined the cost of the second power supply was not warranted given the low rate of failure of server power supplies — so they were removed from the design. The data center power topology provides redundancy through separate physical whips to the racks and requires the second power supply. Removal of the second power supply eliminated the redundant power path, leaving the hardware vulnerable to impact during distribution faults in the power system. To mitigate the impact of the single power supply, ATS units were required to be added at the rack level to allow a secondary path for power.
The layering of systems with diverse fiber paths, power sources, and physical domains continued to separate services from impacts at relatively small scale interruptions, thus improving resiliency.
#### Lessons learned and major technology upgrades, migrations, and adoptions
We learned to model dependencies between the physical failure domains, (i.e. building power and cooling, hardware, fiber) and the services distributed across them to better predict fault tolerance and drive improvements.
We added additional data centers providing regional diversity to mitigate risk from natural disaster and the ability to fail between regions when it was needed during major upgrades, deploys or incidents. The active-active operation of data centers provided for staged code deployment reducing overall impacts of code rollouts.
The efficiency of power use by the data centers has improved with expanding the operating ranges of the environmental envelope and designing the hardware for resiliency at the higher operating temperatures.
#### Future work
Our data centers continue to evolve in strategy and operation, providing for live changes to the operating network and hardware without interruption to the users. Our strategy will continue to focus on scale within the existing power and physical footprints through optimization and maintaining flexibility while driving efficiency in the coming years.
### Hardware efficiency
#### History and challenges
Our hardware engineering team was started to qualify and validate performance of off-the-shelf purchased hardware, and evolved into customization of hardware for cost and performance optimizations.
Procuring and consuming hardware at Twitter's scale comes with a unique set of challenges. In order to meet the demands of our internal customers, we initially started a program to qualify and ensure the quality of purchased hardware. The team was primarily focused on performance and reliability testing ensuring that systems could meet the demands. Running systematic tests to validate the behavior was predictable, and there were very few bugs introduced.
As we scaled our major workloads (Mesos, Hadoop, Manhattan, and MySQL), it became apparent that the available market offerings didn't quite meet our needs. Off-the-shelf servers come with enterprise features, like RAID controllers and hot-swap power supplies. These components improve reliability at small scale, but often decrease performance and increase cost; for example, some RAID controllers interfered with the performance of SSDs and could be a third of the cost of the system.
At the time, we were a large user of MySQL databases. Issues arose from both the supply and performance of SAS media. The majority of deployments were 1U servers, and the total number of drives used plus a writeback cache could predict the performance of a system, oftentimes limited to a sustained 2000 sequential IOPS. In order to continue scaling this workload, we were stranding CPU cores and disk capacity to meet the IOPS requirement. We were unable to find cost-effective solutions at this time.
As our volume of hardware reached a critical mass, it made sense to invest in a hardware engineering team for customized white box solutions with focus on reducing the capital expenses and increased performance metrics.
#### Major technology changes and adoption
We've made many transitions in our hardware technology stack. Below is a timeline for adoptions of new technology and internally developed platforms.
- 2012 - SSDs become the primary storage media for our MySQL and key/value databases.
- 2013 - Our first custom solution for Hadoop workloads is developed, and becomes our primary bulk storage solution.
- 2013 - Our custom solution is developed for Mesos, TFE, and cache workloads.
- 2014 - Our custom SSD key/value server completes development.
- 2015 - Our custom database solution is developed.
- 2016 - We developed GPU systems for inference and training of machine learning models.
#### Lessons learned
The objective of our Hardware Engineering team is to significantly reduce the capital expenditure and operating expenditure by making small tradeoffs that improve our TCO. Two generalizations can apply to reduce the cost of a server:
1. Removing the unused components
2. Improving utilization
Twitter's workload is divided into four main verticals: storage, compute, database, and GPU. Twitter defines requirements on a per-vertical basis, allowing Hardware Engineering to produce a focused feature set for each. This approach allows us to optimize component selection where the equipment may go unused or underutilized. For example, our storage configuration has been designed specifically for Hadoop workloads and was delivered at a TCO reduction of 20% over the original OEM solution. At the same time, the design improved both the performance and reliability of the hardware. Similarly, for our compute vertical, the Hardware Engineering Team has improved the efficiency of these systems by removing unnecessary features.
There is a minimum overhead required to operate a server, and we quickly reached a point where it could no longer remove components to reduce cost. In the compute vertical specifically, we decided the best approach was to look at solutions that replaced multiple nodes with a single node, and rely on Aurora/Mesos to manage the capacity. We settled on a design that replaced two of our previous generation compute nodes with a single node.
Our design verification began with a series of rough benchmarks, and then progressed to a series of production load tests confirming a scaling factor of 2. Most of this improvement came from simply increasing the thread count of the CPU, but our testing confirmed a 20-50% improvement in our per thread performance. Additionally we saw a 25% increase in our per thread power efficiency, due to sharing the overhead of the server across more threads.
For the initial deployment, our monitoring showed a 1.5 replacement factor, which was well below the design goal. An examination of the performance data revealed there was a flawed assumption in the workload characteristics, and that it needed to be identified.
Our Hardware Engineering Team's initial action was to develop a model to predict the packing efficiency of the current Aurora job set into various hardware configurations. This model correctly predicted the scaling factor we were observing in the fleet, and suggested we were stranding cores due to unforeseen storage requirements. Additionally, the model predicted we would see a still improved scaling factor by changing the memory configuration as well.
Hardware configuration changes take time to implement, so Hardware Engineering identified a few large jobs and worked with our SRE teams to adjust the scheduling requirements to reduce the storage needs. These changes were quick to deploy, and resulted in an immediate improvement to a 1.85 scaling factor.
In order to address the situation permanently, we needed to adjust to configuration of the server. Simply expanding the installed memory and disk capacity resulted in a 20% improvement in the CPU core utilization, at a minimal cost increase. Hardware Engineering worked with our manufacturing partners to adjust the bill of materials for the initial shipments of these servers. Follow up observations confirmed a 2.4 scaling factor exceeding the target design.
### Migration from bare metal to mesos
Until 2012, running a service inside Twitter required hardware requisitions. Service owners had to find out and request the particular model or class of server, worry about their rack diversity, maintain scripts to deploy code, and manage dead hardware. There was essentially no "service discovery." When a web service needed to talk to the user service, it typically loaded up a YAML file containing all of the host IPs and ports of the user service and the service used that list (port reservations were tracked in a wiki page). As hardware died or was added, managing the list required editing and committing changes to the YAML file that would go out with the next deploy. Making changes in the caching tier meant many deploys over hours and days, adding a few hosts at a time and deploying in stages. Dealing with cache inconsistencies during the deploy was a common occurrence, since some hosts would be using the new list and some the old. It was possible to have a host running old code (because the box was temporarily down during the deploy), resulting in flaky behavior on the site.
In 2012/2013, two things started to get adopted at Twitter: service discovery (via a zookeeper cluster and a library in the core module of Finagle) and Mesos (including our own scheduler framework on top of Mesos called Aurora, now an Apache project).
Service discovery no longer required static YAML host lists. A service either self-registered on startup or was automatically registered under mesos into a "serverset" (which is just a path to a list of znodes in zookeeper based on the role, environment, and service name). Any service that needed to talk to that service would just watch that path and get a live view of what servers were out there.
With Mesos/Aurora, instead of having a script (we were heavy users of Capistrano) that took a list of hosts, pushed binaries around and orchestrated a rolling restart, a service owner pushed the package into a service called "packer" (which is a service backed by HDFS), uploaded an Aurora configuration that described the service (how many CPUs it needed, how much memory, how many instances were needed, the command lines of all the tasks each instance should run), and Aurora would complete the deploy. It schedules instances on available hosts, downloads the artifact from packer, registers it in service discovery, and launches it. If there are any failures (hardware dies, network fails, etc.), Mesos/Aurora automatically reschedules the instance on another host.
#### Twitter's Private PaaS
Mesos/Aurora and Service Discovery in combination were revolutionary. There were many bugs and growing pains over the next few years and many hard lessons learned about distributed systems, but the fundamental design was sound. In the old world, the teams were constantly dealing with and thinking about hardware and its management. In the new world, the engineers only have to think about how best to configure their services and how much capacity to deploy. We were also able to radically improve the CPU utilization of Twitter's fleet over time, since generally each service that got their own bare metal hardware didn't fully utilize its resources and did a poor job of managing capacity. Mesos allows us to pack multiple services into a box without having to think about it, and adding capacity to a service is only requesting quota, changing one line of a config, and doing a deploy.
Within two years, most "stateless" services moved into Mesos. Some of the most important and largest services (including our user service and our ads serving system) were among the first to move. Being the largest, they saw the biggest reduction in their operational burden.
We are continuously looking for ways to improve the efficiency and optimization of the infrastructure. As part of this, we regularly benchmark against public cloud providers and offerings to validate our TCO and performance expectations of the infrastructure. We also have a good presence in public cloud, and will continue to utilize the public cloud when it's the best available option. The next series of this post will mainly focus on the scale of our infrastructure.
Special thanks to Jennifer Fraser, David Barr, Geoff Papilion, Matt Singer, and Lam Dong for all their contributions to this blog post.
--------------------------------------------------------------------------------
via: https://blog.twitter.com/2016/the-infrastructure-behind-twitter-efficiency-and-optimization?utm_source=webopsweekly&utm_medium=email
作者:[mazdakh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twitter.com/intent/user?screen_name=mazdakh
[1]: https://twitter.com/jenniferfraser
[2]: https://twitter.com/davebarr
[3]: https://twitter.com/gpapilion
[4]: https://twitter.com/lamdong

View File

@ -1,3 +1,4 @@
translating by ucasFL
Understanding Different Classifications of Shell Commands and Their Usage in Linux
====

View File

@ -1,78 +0,0 @@
translating by ucasFL
4 Best Linux Boot Loaders
====
When you turn on your machine, immediately after POST (Power On Self Test) is completed successfully, the BIOS locates the configured bootable media, and reads some instructions from the master boot record (MBR) or GUID partition table which is the first 512 bytes of the bootable media. The MBR contains two important sets of information, one is the boot loader and two, the partition table.
### What is a Boot Loader?
A boot loader is a small program stored in the MBR or GUID partition table that helps to load an operating system into memory. Without a boot loader, your operating system can not be loaded into memory.
There are several boot loaders we can install together with Linux on our systems and in this article, we shall briefly talk about a handful of the best Linux boot loaders to work with.
### 1. GNU GRUB
GNU GRUB is a popular and probably the most used multiboot Linux boot loader available, based on the original GRUB (GRand Unified Bootloader), which was created by Erich Stefan Boleyn. It comes with several improvements, new features and bug fixes as enhancements of the original GRUB program.
Importantly, GRUB 2 has now replaced the original GRUB, which has been renamed GRUB Legacy and is no longer actively developed; however, it can still be used for booting older systems since bug fixes are still ongoing.
GRUB has the following prominent features:
- Supports multiboot
- Supports multiple hardware architectures and operating systems such as Linux and Windows
- Offers a Bash-like interactive command line interface for users to run GRUB commands as well interact with configuration files
- Enables access to GRUB editor
- Supports setting of passwords with encryption for security
- Supports booting from a network, along with several other minor features
Visit Homepage: <https://www.gnu.org/software/grub/>
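As a quick taste of day-to-day GRUB 2 usage, regenerating the boot menu after installing a new kernel or another OS typically looks like this on a Debian/Ubuntu-style system (paths and wrapper names vary by distribution):
```
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
$ sudo update-grub    # Debian/Ubuntu wrapper for the same operation
```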
### 2. LILO (Linux Loader)
LILO is a simple yet powerful and stable Linux boot loader. With the growing popularity and use of GRUB, which has come with numerous improvements and powerful features, LILO has become less popular among Linux users.
While it loads, the word “LILO” is displayed on the screen, and each letter appears before or after a particular event has occurred. However, development of LILO stopped in December 2015; its notable characteristics are listed below:
- Does not offer an interactive command line interface
- Supports several error codes
- Offers no support for booting from a network
- All its files are stored in the first 1024 cylinders of a drive
- Faces limitations with Btrfs, GPT, RAID, and more
Visit Homepage: <http://lilo.alioth.debian.org/>
### 3. BURG New Boot Loader
Based on GRUB, BURG is a relatively new Linux boot loader. Because it is derived from GRUB, it ships in with some of the primary GRUB features, nonetheless, it also offers remarkable features such as a new object format to support multiple platforms including Linux, Windows, Mac OS, FreeBSD and beyond.
Additionally, it supports a highly configurable text and graphical mode boot menu, stream plus planned future improvements for it to work with various input/output devices.
Visit Homepage: <https://launchpad.net/burg>
### 4. Syslinux
Syslinux is an assortment of lightweight boot loaders that enable booting from CD-ROMs, from a network, and so on. It supports filesystems such as FAT for MS-DOS, and ext2, ext3, ext4 for Linux. It also supports uncompressed single-device Btrfs.
Note that Syslinux only accesses files in its own partition, therefore, it does not offer multi-filesystem boot capabilities.
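For example, installing Syslinux onto a FAT-formatted USB partition is usually a one-liner (the device name is a placeholder, and you still need to copy a kernel and a syslinux.cfg onto the partition yourself):
```
$ sudo syslinux --install /dev/sdb1
# on older Syslinux versions, simply: sudo syslinux /dev/sdb1
```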
Visit Homepage: <http://www.syslinux.org/wiki/index.php?title=The_Syslinux_Project>
### Conclusion
A boot loader allows you to manage multiple operating systems on your machine and select which one to use at a particular time, without it, your machine can not load the kernel and the rest of the operating system files.
Have we missed any tip-top Linux boot loader here? If so, then let us know by using the comment form below by making suggestions of any commendable boot loaders that can support Linux operating system.
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/best-linux-boot-loaders/

View File

@ -1,3 +1,5 @@
GHLandy translating
17 tar command practical examples in Linux
=====

View File

@ -1,3 +1,4 @@
LinuxBars翻译认领
Five Linux Server Distros Worth Checking Out
====

View File

@ -1,52 +0,0 @@
Taskwarrior: A Brilliant Command-Line TODO App For Linux
====
Taskwarrior is a simple, straight-forward command-line based TODO app for Ubuntu/Linux. This open-source app has to be one of the easiest of all [CLI based apps][4] I've ever used. Taskwarrior helps you better organize yourself, and without installing bulky new apps which sometimes defeats the whole purpose of TODO apps.
![](https://2.bp.blogspot.com/-pQnRlOUNIxk/V9cuc3ytsBI/AAAAAAAAKHs/yYxyiAk4PwMIE0HTxlrm6arWOAPcBRRywCLcB/s1600/taskwarrior-todo-app.png)
### Taskwarrior: A Simple CLI Based TODO App That Gets The Job Done!
Taskwarrior is an open-source, cross-platform, command-line based TODO app that lets you manage your to-do lists right from the Terminal. The app lets you add tasks, show the list, and remove tasks from that list with ease. What's more, it's available in the default repositories, so there's no need to fiddle with PPAs. In Ubuntu 16.04 LTS and similar, run the following in a Terminal to install Taskwarrior:
```
sudo apt-get install task
```
A simple use case can be as follows:
```
$ task add Read a book
Created task 1.
$ task add priority:H Pay the bills
Created task 2.
```
This is the same example I used in the screenshot above. Yes, you can set priority levels (H, L or M) as shown. And then you can use 'task' or 'task next' commands to see your newly-created todo list. For example:
```
$ task next
ID Age P Description Urg
-- --- - -------------------------------- ----
2 10s H Pay the bills 6
1 20s Read a book 0
```
And once it's completed, you can use the 'task 1 done' or 'task 2 done' commands to clear the lists. A more comprehensive list of commands and use-cases [can be found here][1]. Also, Taskwarrior is cross-platform, which means you'll find a version that [fits your needs][2] no matter what. There's even an [Android version][3] if you want one. Enjoy!
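For instance, marking the first task as done could look like this (the exact confirmation message may differ between Taskwarrior versions):
```
$ task 1 done
Completed task 1 'Read a book'.
Completed 1 task.
```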
--------------------------------------------------------------------------------
via: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+techdrivein+%28Tech+Drive-in%29
作者:[Manuel Jose ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+techdrivein+%28Tech+Drive-in%29
[1]: https://taskwarrior.org/docs/
[2]: https://taskwarrior.org/download/
[3]: https://taskwarrior.org/news/news.20160225.html
[4]: http://www.techdrivein.com/search/label/Terminal

View File

@ -1,90 +0,0 @@
How to Speed Up LibreOffice with 4 Simple Steps
====
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-featured-2.jpg)
For many fans and supporters of Open Source software, LibreOffice is the best alternative to Microsoft Office, and it has definitely seen huge improvements over the last few releases. However, the initial startup experience still leaves a lot to be desired. There are ways to improve launch time and overall performance of LibreOffice.
I will go over some practical steps that you can take to improve the load time and responsiveness of LibreOffice in the paragraphs below.
### 1. Increase Memory Per Object and Image Cache
This will help the program load faster by allocating more memory resources to the image cache and objects.
1. Launch LibreOffice Writer (or Calc)
2. Navigate to “Tools -> Options” in the menubar or use the keyboard shortcut “Alt + F12.”
3. Click “Memory” under LibreOffice and increase “Use for LibreOffice” to 128MB.
4. Also increase “Memory per object” to 20MB.
5. Click “Ok” to save your changes.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-step-1.png)
Note: You can set the numbers higher or lower than the suggested values depending on how powerful your machine is. It is best to experiment and see which value gives you the optimum performance.
### 2. Enable LibreOffice QuickStarter
If you have a generous amount of RAM on your machine, say 4GB and above, you can enable the “Systray Quickstarter” option to keep part of LibreOffice in memory for quicker response with opening new documents.
You will definitely see improved performance in opening new documents after enabling this option.
1. Open the options dialog by navigating to “Tools -> Options.”
2. In the sidebar under “LibreOffice”, select “Memory.”
3. Tick the “Enable Systray Quickstarter” checkbox.
4. Click “OK” to save the changes.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-2.png)
Once this option is enabled, you will see the LibreOffice icon in your system tray with options to open any type of document.
### 3. Disable Java Runtime
Another easy way to speed up the launch time and responsiveness of LibreOffice is to disable Java.
1. Open the Options dialog using “Alt + F12.”
2. In the sidebar, select “LibreOffice,” then “Advanced.”
3. Uncheck the “Use Java runtime environment” option.
4. Click “OK” to close the dialog.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-3.png)
If all you use is Writer and Calc, disabling Java will not stop you from working with your files as normal. But to use LibreOffice Base and some other special features, you may need to re-enable it again. In that case, you will get a popup asking if you wish to turn it back on.
### 4. Reduce Number of Undo Steps
By default, LibreOffice allows you to undo up to 100 changes to a document. Most users do not need anywhere near that, so holding that many steps in memory is largely a waste of resources.
I recommend that you reduce this number to 20 to free up memory for other things, but feel free to customise this part to suit your needs.
1. Open the options dialog by navigating to “Tools -> Options.”
2. In the sidebar under “LibreOffice,” select “Memory.”
3. Under “Undo,” change the number of steps to your preferred value.
4. Click “OK” to save the changes.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-5.png)
If the tips provided helped you speed up the launch time of your LibreOffice Suite, let us know in the comments. Also, please share any other tips you may know for others to benefit as well.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/speed-up-libreoffice/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
作者:[Ayo Isaiah][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.maketecheasier.com/author/ayoisaiah/

View File

@ -1,66 +0,0 @@
Translating by bianjp
It's time to make LibreOffice and OpenOffice one again
==========
![](http://tr2.cbsistatic.com/hub/i/2016/09/14/2e91089b-7ebd-4579-bf8f-74c34d1a94ce/e7e9c8dd481d8e068f2934c644788928/openofficedeathhero.jpg)
Let's talk about OpenOffice. More than likely you've already read, countless times, that Apache OpenOffice is near the end. The last stable iteration was 4.1.2 (released October, 2015) and a recent major security flaw took a month to patch. A lack of coders has brought development to a creeping crawl. And then, the worst possible news hit the ether; the project suggested users switch to MS Office (or LibreOffice).
For whom the bell tolls? The bell tolls for thee, OpenOffice.
I'm going to say something that might ruffle a few feathers. Are you ready for it?
The end of OpenOffice will be a good thing for open source and for users.
Let me explain.
### One fork to rule them all
When LibreOffice was forked from OpenOffice we saw yet another instance of the fork not only improving on the original, but vastly surpassing it. LibreOffice was an instant success. Every Linux distribution that once shipped with OpenOffice migrated to the new kid on the block. LibreOffice burst out of the starting gate and immediately hit its stride. Updates came at an almost breakneck speed and the improvements were plenty and important.
After a while, OpenOffice became an afterthought for the open source community. This, of course, was exacerbated when Oracle decided to discontinue the project in 2011 and donated the code to the Apache Project. By this point OpenOffice was struggling to move forward and that brings us to now. A burgeoning LibreOffice and a suffering, stuttering OpenOffice.
But I say there is a light at the end of this rather dim tunnel.
### Unfork them
This may sound crazy, but I think it's time LibreOffice and OpenOffice became one again. Yes, I know there are probably political issues and egos at stake, but I believe the two would be better served as one. The benefits of this merger would be many. Off the top of my head:
- Bring the MS Office filters together: OpenOffice has a strong track record of better importing certain files from MS Office (whereas LibreOffice has been known to be improving, but spotty).
- More developers for LibreOffice: Although OpenOffice wouldn't bring with it a battalion of developers, it would certainly add to the mix.
- End the confusion: Many users assume OpenOffice and LibreOffice are the same thing. Some don't even know that LibreOffice exists. This would end that confusion.
- Combine their numbers: Separate, OpenOffice and LibreOffice have impressive usage numbers. Together, they would be a force.
### A golden opportunity
The possible loss of OpenOffice could actually wind up being a golden opportunity for open source office suites in general. Why? I would like to suggest something that I believe has been necessary for a while now. If OpenOffice and LibreOffice were to gather their forces, diff their code, and merge, they could then do some much-needed retooling of not just the internal works of the whole, but also of the interface.
Let's face it, the LibreOffice and (by extension) OpenOffice UIs are both way out of date. When I install LibreOffice 5.2.1.2 the tool bar is an absolute disaster (Figure A).
### Figure A
![](http://tr2.cbsistatic.com/hub/i/2016/09/14/cc5250df-48cd-40e3-a083-34250511ffab/c5ac8eb1e2cb12224690a6a3525999f0/openofficea.jpg)
#### The LibreOffice default toolbar setup.
As much as I support and respect (and use daily) LibreOffice, it has become all too clear the interface needs a complete overhaul. What we're dealing with now is a throwback to the late 90s/early 2000s and it has to go. When a new user opens up LibreOffice for the first time, they are inundated with buttons, icons, and toolbars. Ubuntu Unity helped this out with the Head up Display (HUD), but that did nothing for other desktops and distributions. Sure, the enlightened user has no problem knowing what to look for and where it is (or to even customize the toolbars to reflect their specific needs), but for a new or average user, that interface is a nightmare. Now would be the perfect time for this change. Bring in the last vestiges of the OpenOffice developers and have them join the fight for an improved interface. With the combination of the additional import filters from OpenOffice and a modern interface, LibreOffice could finally make some serious noise on both the home and business desktops.
### Will this actually happen?
This needs to happen. Will it? I have no idea. But even if the powers that be decide the UI isn't in need of retooling (which would be a mistake), bringing OpenOffice into the fold would still be a big step forward. The merging of the two efforts would bring about a stronger focus on development, easier marketing, and far less confusion by the public at large.
I realize this might seem a bit antithetical to the very heart and spirit of open source, but merging LibreOffice and OpenOffice would combine the strengths of the two constituent pieces and possibly jettison the weaknesses.
From my perspective, that's a win-win.
--------------------------------------------------------------------------------
via: http://www.techrepublic.com/article/its-time-to-make-libreoffice-and-openoffice-one-again/
作者:[Jack Wallen ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.techrepublic.com/search/?a=jack%2Bwallen

View File

@ -1,3 +1,4 @@
LinuxBars翻译认领
Server Monitoring with Shinken on Ubuntu 16.04
=====

View File

@ -1,129 +0,0 @@
GOOGLER: NOW YOU CAN GOOGLE FROM LINUX TERMINAL!
====
![](https://itsfoss.com/wp-content/uploads/2016/09/google-from-linux-terminal.jpg)
A quick question: What do you do every day? Of course, a lot of things. But I can tell one thing, you search on Google almost every day (if not every day). Am I right?
Now, if you are a Linux user (which I'm guessing you are), here's another question: wouldn't it be nice if you could Google without even leaving the terminal? Without even firing up a browser window?
If you are a *nix enthusiast and also one of those people who just love the view of the terminal, I know your answer is Yes. And I think the rest of you will also like the nifty little tool I'm going to introduce today. It's called Googler!
### GOOGLER: GOOGLE IN YOUR LINUX TERMINAL
Googler is a straightforward command-line utility for Google-ing right from your terminal window. Googler mainly supports three types of Google Searches:
- Google Search: Simple Google searching, equivalent to searching on Google homepage.
- Google News Search: Google searching for News, equivalent to searching on Google News.
- Google Site Search: Google searching for results from a specific site.
Googler shows the search results with the title, URL and page excerpt. The search results can be opened directly in the browser with only a couple of keystrokes.
![](https://itsfoss.com/wp-content/uploads/2016/09/googler-1.png)
### INSTALLATION ON UBUNTU
Let's go through the installation process first.
At first make sure you have python version 3.3 or later using this command:
```
python3 --version
```
If not, upgrade it. Googler requires python 3.3+ for running.
Though Googler is not yet available through the package repositories on Ubuntu, we can easily install it from the GitHub repository. All we have to do is run the following commands:
```
cd /tmp
git clone https://github.com/jarun/googler.git
cd googler
sudo make install
cd auto-completion/bash/
sudo cp googler-completion.bash /etc/bash_completion.d/
```
And that's it. Googler is installed along with the command autocompletion feature.
### FEATURES & BASIC USAGE
If we go through all its features, Googler is actually quite a powerful tool. Some of the main features are:
Interactive Interface: Run the following command in terminal:
```
googler
```
The interactive interface will open. The developer of Googler, [Arun Prakash Jana][1], calls it the omniprompt. You can enter ? to see the available commands on the omniprompt.
![](https://itsfoss.com/wp-content/uploads/2016/09/googler-2.png)
From the omniprompt, enter any search phrase to initiate the search. You can then enter n or p to navigate to the next or previous page of search results.
To open any search result in a browser window, just enter the index number of that result. Or you can open the search page itself by entering o.
- News Search: If you want to search News, start googler with the -N optional argument:
```
googler -N
```
The subsequent omniprompt will fetch results from Google News.
- Site Search: If you want to search pages from a specific site, run googler with the -w {domain} argument:
```
googler -w itsfoss.com
```
The subsequent omniprompt will fetch results only from the It's FOSS blog!
- Manual Page: Run the following command for Googler manual page equipped with various examples:
```
man googler
```
- Google country/domain specific search:
```
googler -c in "hello world"
```
The above example command will open search results from Googles Indian domain (in for India).
- Filter search results by duration and language preference.
- Google search keywords support, such as site:example.com or filetype:pdf (see the example right after this list).
- HTTPS proxy support.
- Shell commands autocomplete.
- Disable automatic spelling correction.
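As an illustration of the keyword support mentioned above, the usual Google search operators can simply be typed as part of the query (the query below is only an example):
```
googler filetype:pdf "awk tutorial"
```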
There is much more. You can tweak Googler to suit your needs.
Googler can also be integrated with text-based browsers (like [elinks][2], [links][3], [lynx][4], w3m, etc.), so that you wouldn't even need to leave the terminal for browsing web pages. The instructions can be found on the [GitHub project page of Googler][5].
If you want a graphical demonstration of Googler's various features, feel free to check the terminal recording attached to the GitHub project page: [jarun/googler v2.7 quick demo][6].
### THOUGHTS ON GOOGLER?
Though Googler might not feel necessary or desirable to everybody, for someone who doesn't want to open a browser just to search on Google, or who simply wants to spend as much time as possible in the terminal window, it is a great tool indeed. What do you think?
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[Munif Tanjim][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/munif/
[1]: https://github.com/jarun
[2]: http://elinks.or.cz/
[3]: http://links.twibright.com/
[4]: http://lynx.browser.org/
[5]: https://github.com/jarun/googler#faq
[6]: https://asciinema.org/a/85019

View File

@ -0,0 +1,99 @@
# 5 REASONS WHY YOU SHOULD BE USING OPENSUSE
[![Reasons why you should use OpenSUSE](https://itsfoss.com/wp-content/uploads/2016/09/why-opensuse-is-best.jpg)](https://itsfoss.com/wp-content/uploads/2016/09/why-opensuse-is-best.jpg)
Most of the desktop Linux users fall into 3 categories: Debian/Ubuntu, Fedora, Arch Linux. But today, I'll give you 5 reasons why you should use openSUSE.
I've always found [openSUSE](https://www.opensuse.org/) to be a bit of a different kind of Linux distro. I don't know, but it's just so shiny and charismatic. The green chameleon looks awesome. But that's not the reason why openSUSE is the best or better than other Linux distributions.
Don't misunderstand me. I run many different distros for different purposes and appreciate the work the people behind these distros are doing to make computing a joy. But openSUSE always felt, well, sacred. You feel me?
## 5 REASONS WHY OPENSUSE IS BETTER THAN OTHER LINUX DISTRIBUTIONS
Did I just say that openSUSE is the best Linux distribution? No, I didn't. There is no one best Linux distribution. It really comes down to your needs and what you find to be your soulmate.
But here, I am going to list 5 things that I have found openSUSE does better than other Linux distros. Let's see them.
### #1 COMMUNITY RULES
openSUSE is a great symbol of community-driven projects. I have seen a lot of users complain about changes made by the developers of their favorite distro after an update. But not openSUSE. openSUSE is truly community driven and gives its users what they want. Every time.
### #2 ROCK SOLID OS
Another thing is OS integrity. I can install almost all of the [best Linux desktop environments](https://itsfoss.com/best-linux-desktop-environments/) on the same openSUSE installation which is not possible even on Ubuntu without compromising the stability of the system. This clearly shows how robust the system is. Therefore, openSUSE should appeal to the users wholl be tinkering a lot under the hood.
### #3 EASY TO INSTALL SOFTWARE
We do have lots of awesome package managers in Linux world. From Debian apt-get to the DNF of [Fedora](https://itsfoss.com/fedora-24-review/), all do appeal to the users and sometimes play a great role in attracting users to a particular distro.
openSUSE has again brought a great software delivery method to the table. [software.opensuse.org](https://software.opensuse.org/421/en) is a web-portal that you can use to install software from the repository. All you need to do is go to the link (on your openSUSE OS, of course), use the search box to find your desired software, and click “Direct Install”. Done. That's all.
Sounds like using the Google Play Store, ain't it?
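And if you would rather stay in the terminal, openSUSE's own package manager, zypper, gets the same job done (the package name below is just an example):
```
sudo zypper install vlc
```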
### #4 YAST
[YaST](https://en.opensuse.org/Portal:YaST) is literally the best control center ANY OS in the world has ever had. No arguments there. You can control everything. Networking, Software Update, all the basic settings. Everything. YaST gives you absolute power over your openSUSE installation, be it the enterprise edition or the personal installation. Convenient and everything at one place.
### #5 EXCELLENT OUT OF THE BOX EXPERIENCE
SUSE team is one of the biggest contributors to the Linux kernel. This diligent effort also means that they have excellent support for various hardware.
With such good hardware support, comes great out-of-the-box experience.
### #6 THEY MAKE GREAT PARODY VIDEOS
Wait! There were five reasons that made openSUSE awesome, right?
But I am forced to write it as [Abhishek](https://itsfoss.com/author/abhishek/) wants to me add that openSUSE is the best because they make great Linux parody videos :)
Just kidding, but do check out the super awesome [Uptime Funk](https://www.youtube.com/watch?v=zbABy9ul11I) and you will know [why SUSE is the coolest Linux](https://itsfoss.com/suse-coolest-linux-enterprise/).
## LEAP OR TUMBLEWEED? WHICH OPENSUSE SHOULD I USE?
Now if I have convinced you to use openSUSE, let me tell you about the choices you have when it comes to openSUSE. openSUSE comes in two Distributions. The Leap and the Tumbleweed.
![Choice](https://itsfoss.com/wp-content/uploads/2016/09/Untitled-design-2.jpg)
Now although both offer a similar experience and a similar environment, there is a decision you must make before choosing which one of these two to imprint on your hard disk.
## OPENSUSE : LEAP
[openSUSE Leap](https://en.opensuse.org/Portal:Leap) is for most people. It has a release cycle of 8 months which is followed orthodoxly. Currently, we have openSUSE 42.1. It contains all the stable packages and provides the smoothest experience of the two.
It is highly suitable for home, office and business computers. It is for people who need a good OS but won't/can't keep pampering the OS, and need it to move aside and let them work. Once set up, you need not worry about anything and can focus on your productivity. I also highly recommend Leap for use in libraries and schools.
## OPENSUSE: TUMBLEWEED
The [Tumbleweed version of openSUSE](https://en.opensuse.org/Portal:Tumbleweed) is a rolling release. It very regularly gets updates and always contains the newest set of software running on it. It is recommended for developers, advanced users who want the newest of everything on their system and anybody who wants to contribute to openSUSE.
Let me clarify one thing, though. Tumbleweed is in no way a beta/testing release to the Leap. It is the most bleeding edge stable Linux distro available.
Tumbleweed gives you the fastest updates, but only after the developers ensure the packages' stability.
### YOUR SAY?
Let us know in the comments below what you think of openSUSE? And if you already thinking of using openSUSE, which of the two version would you prefer: Leap or Tumbleweed? Cheers :)
--------------------------------------------------------------------------------
via: https://itsfoss.com/why-use-opensuse/?utm_source=newsletter&utm_medium=email&utm_campaign=5_reasons_why_you_should_use_opensuse_and_other_linux_stories&utm_term=2016-09-19
作者:[Aquil Roshan][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/aquil/

View File

@ -1,82 +0,0 @@
translating by ucasFL
How to Install Latest XFCE Desktop in Ubuntu 16.04 and Fedora 22-24
====
Xfce is a modern, open source and lightweight desktop environment for Linux systems. It also works well on many other Unix-like systems such as Mac OS X, Solaris, *BSD plus several others. It is fast and also user friendly with a simple and elegant user interface.
Installing a desktop environment on servers can sometimes prove helpful, as certain applications may require a desktop interface for efficient and reliable administration and one of the remarkable properties of Xfce is its low system resources utilization such as low RAM consumption, thereby making it a recommended desktop environment for servers if need be.
### XFCE Desktop Features
Additionally, some of its noteworthy components and features are listed below:
- Xfwm windows manager
- Thunar file manager
- User session manager to deal with logins, power management and beyond
- Desktop manager for setting background image, desktop icons and many more
- An application manager
- It is highly pluggable as well, plus several other minor features
The latest stable release of this desktop is Xfce 4.12; all its features and changes from previous versions are listed here.
#### Install Xfce Desktop on Ubuntu 16.04
Linux distributions such as Xubuntu, Manjaro, OpenSUSE, Fedora Xfce Spin, Zenwalk and many others provide their own Xfce desktop packages, however, you can install the latest version as follows.
```
$ sudo apt update
$ sudo apt install xfce4
```
Wait for the installation process to complete, then log out of your current session, or you can restart your system as well. At the login interface, choose Xfce desktop and log in as in the screenshot below:
![](http://www.tecmint.com/wp-content/uploads/2016/09/Select-Xfce-Desktop-at-Login.png)
![](http://www.tecmint.com/wp-content/uploads/2016/09/XFCE-Desktop.png)
#### Install Xfce Desktop in Fedora 22-24
If you have an existing Fedora distribution and want to install the Xfce desktop, you can use yum or dnf to install it as shown.
```
-------------------- On Fedora 22 --------------------
# yum install @xfce
-------------------- On Fedora 23-24 --------------------
# dnf install @xfce-desktop-environment
```
After installing Xfce, you can choose the xfce login from the Session menu or reboot the system.
![](http://www.tecmint.com/wp-content/uploads/2016/09/Select-Xfce-Desktop-at-Fedora-Login.png)
![](http://www.tecmint.com/wp-content/uploads/2016/09/Install-Xfce-Desktop-in-Fedora.png)
If you don't want the Xfce desktop on your system anymore, use the command below to uninstall it:
```
-------------------- On Ubuntu 16.04 --------------------
$ sudo apt purge xfce4
$ sudo apt autoremove
-------------------- On Fedora 22 --------------------
# yum remove @xfce
-------------------- On Fedora 23-24 --------------------
# dnf remove @xfce-desktop-environment
```
In this simple how-to guide, we walked through the steps for installing the latest version of the Xfce desktop, which I believe were easy to follow. If all went well, you can enjoy using Xfce, one of the [best desktop environments for Linux systems][1].
However, to get back to us, you can use the feedback section below and remember to always stay connected to Tecmint.
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[Aaron Kili ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/best-linux-desktop-environments/

View File

@ -1,160 +0,0 @@
Part 13 - How to Write Scripts Using Awk Programming Language
====
All along from the beginning of the Awk series up to Part 12, we have been writing small Awk commands and programs on the command line and in shell scripts respectively.
However, Awk, just as Shell, is also an interpreted language, therefore, with all that we have walked through from the start of this series, you can now write Awk executable scripts.
Similar to how we write a shell script, Awk scripts start with the line:
```
#! /path/to/awk/utility -f
```
For example on my system, the Awk utility is located in /usr/bin/awk, therefore, I would start an Awk script as follows:
```
#! /usr/bin/awk -f
```
Explaining the line above:
```
#! referred to as Shebang, which specifies an interpreter for the instructions in a script
/usr/bin/awk is the interpreter
-f interpreter option, used to read a program file
```
That said, let us now dive into looking at some examples of Awk executable scripts, we can start with the simple script below. Use your favorite editor to open a new file as follows:
```
$ vi script.awk
```
And paste the code below in the file:
```
#!/usr/bin/awk -f
BEGIN { printf "%s\n","Writing my first Awk executable script!" }
```
Save the file and exit, then make the script executable by issuing the command below:
```
$ chmod +x script.awk
```
Thereafter, run it:
```
$ ./script.awk
```
Sample Output
```
Writing my first Awk executable script!
```
A critical programmer out there must be asking, “where are the comments?” Yes, you can also include comments in your Awk script. Writing comments in your code is always a good programming practice.
It helps other programmers looking through your code to understand what you are trying to achieve in each section of a script or program file.
Therefore, you can include comments in the script above as follows.
```
#!/usr/bin/awk -f
#This is how to write a comment in Awk
#using the BEGIN special pattern to print a sentence
BEGIN { printf "%s\n","Writing my first Awk executable script!" }
```
Next, we shall look at an example where we read input from a file. We want to search for a system user named aaronkilik in the account file, /etc/passwd, then print the username, user ID and user GID as follows:
Below is the content of our script called second.awk.
```
#! /usr/bin/awk -f
#use the BEGIN special pattern to set the FS built-in variable
BEGIN { FS=":" }
#search for username: aaronkilik and print account details
/aaronkilik/ { print "Username :",$1,"User ID :",$3,"User GID :",$4 }
```
Save the file and exit, make the script executable and execute it as below:
```
$ chmod +x second.awk
$ ./second.awk /etc/passwd
```
Sample Output
```
Username : aaronkilik User ID : 1000 User GID : 1000
```
In the last example below, we shall use the do-while statement to print out numbers from 0-10:
Below is the content of our script called do.awk.
```
#! /usr/bin/awk -f
#printing from 0-10 using a do while statement
#do while statement
BEGIN {
#initialize a counter
x=0
do {
print x;
x+=1;
}
while(x<=10)
}
```
After saving the file, make the script executable as we have done before. Afterwards, run it:
```
$ chmod +x do.awk
$ ./do.awk
```
Sample Output
```
0
1
2
3
4
5
6
7
8
9
10
```
### Summary
We have come to the end of this interesting Awk series, I hope you have learned a lot from all the 13 parts, as an introduction to Awk programming language.
As I mentioned at the beginning, Awk is a complete text processing language, and for that reason you can learn about other aspects of the Awk programming language, such as environment variables, arrays, functions (built-in & user defined) and beyond.
There are additional parts of Awk programming to learn and master, so below I have provided links to some important online resources that you can use to expand your Awk programming skills; these are not necessarily all that you need, and you can also look out for useful Awk programming books.
For any thoughts you wish to share or questions, use the comment form below. Remember to always stay connected to Tecmint for more exciting series.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/write-shell-scripts-in-awk-programming/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/

View File

@ -1,246 +0,0 @@
How to Use Flow Control Statements in Awk - part12
====
When you review all the Awk examples we have covered so far, right from the start of the Awk series, you will notice that all the commands in the various examples are executed sequentially, that is one after the other. But in certain situations, we may want to run some text filtering operations based on some conditions, that is where the approach of flow control statements sets in.
![](http://www.tecmint.com/wp-content/uploads/2016/08/Use-Flow-Control-Statements-in-Awk.png)
There are various flow control statements in Awk programming and these include:
- if-else statement
- for statement
- while statement
- do-while statement
- break statement
- continue statement
- next statement
- nextfile statement
- exit statement
However, for the scope of this series, we shall expound on: if-else, for, while and do while statements. Remember that we already walked through how to use next statement in Part 6 of this Awk series.
### 1. The if-else Statement
The expected syntax of the if statement is similar to that of the shell if statement:
```
if (condition1) {
actions1
}
else {
actions2
}
```
In the above syntax, condition1 is an Awk expression, while actions1 and actions2 are the Awk commands executed when the condition is true or false, respectively.
When condition1 is satisfied, meaning it's true, then actions1 is executed and the if statement exits, otherwise actions2 is executed.
The if statement can also be expanded to an if-else_if-else statement as below:
```
if (condition1){
actions1
}
else if (conditions2){
actions2
}
else{
actions3
}
```
For the form above, if condition1 is true, then actions1 is executed and the if statement exits, otherwise condition2 is evaluated and if it is true, then actions2 is executed and the if statement exits. However, when condition2 is false then, actions3 is executed and the if statement exits.
Here is a case in point of using if statements, we have a list of users and their ages stored in the file, users.txt.
We want to print a statement indicating a user's name and whether the user's age is less or more than 25 years old.
```
aaronkilik@tecMint ~ $ cat users.txt
Sarah L 35 F
Aaron Kili 40 M
John Doo 20 M
Kili Seth 49 M
```
We can write a short shell script to carry out our job above, here is the content of the script:
```
#!/bin/bash
awk ' {
if ( $3 <= 25 ){
print "User",$1,$2,"is less than 25 years old." ;
}
else {
print "User",$1,$2,"is more than 25 years old" ;
}
}' ~/users.txt
```
Then save the file and exit, make the script executable and run it as follows:
```
$ chmod +x test.sh
$ ./test.sh
```
Sample Output
```
User Sarah L is more than 25 years old
User Aaron Kili is more than 25 years old
User John Doo is less than 25 years old.
User Kili Seth is more than 25 years old
```
### 2. The for Statement
In case you want to execute some Awk commands in a loop, then the for statement offers you a suitable way to do that, with the syntax below:
Here, the approach is simply defined by the use of a counter to control the loop execution, first you need to initialize the counter, then run it against a test condition, if it is true, execute the actions and finally increment the counter. The loop terminates when the counter does not satisfy the condition.
```
for ( counter-initialization; test-condition; counter-increment ){
actions
}
```
The following Awk command shows how the for statement works, where we want to print the numbers 0-10:
```
$ awk 'BEGIN{ for(counter=0;counter<=10;counter++){ print counter} }'
```
Sample Output
```
0
1
2
3
4
5
6
7
8
9
10
```
### 3. The while Statement
The conventional syntax of the while statement is as follows:
```
while ( condition ) {
actions
}
```
The condition is an Awk expression and actions are lines of Awk commands executed when the condition is true.
Below is a script to illustrate the use of while statement to print the numbers 0-10:
```
#!/bin/bash
awk ' BEGIN{ counter=0 ;
while(counter<=10){
print counter;
counter+=1 ;
}
}
'
```
Save the file and make the script executable, then run it:
```
$ chmod +x test.sh
$ ./test.sh
```
Sample Output
```
0
1
2
3
4
5
6
7
8
9
10
```
### 4. The do while Statement
It is a modification of the while statement above, with the following underlying syntax:
```
do {
actions
}
while (condition)
```
The slight difference is that, under do while, the Awk commands are executed before the condition is evaluated. Using the very example under while statement above, we can illustrate the use of do while by altering the Awk command in the test.sh script as follows:
```
#!/bin/bash
awk ' BEGIN{ counter=0 ;
do{
print counter;
counter+=1 ;
}
while (counter<=10)
}
'
```
After modifying the script, save the file and exit. Then make the script executable and execute it as follows:
```
$ chmod +x test.sh
$ ./test.sh
```
Sample Output
```
0
1
2
3
4
5
6
7
8
9
10
```
### Conclusion
This is not a comprehensive guide regarding Awk flow control statements, as I had mentioned earlier on, there are several other flow control statements in Awk.
Nonetheless, this part of the Awk series should give you a clear fundamental idea of how execution of Awk commands can be controlled based on certain conditions.
You can as well expound more on the rest of the flow control statements to gain more understanding on the subject matter. Finally, in the next section of the Awk series, we shall move into writing Awk scripts.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/use-flow-control-statements-with-awk-command/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/

View File

@ -0,0 +1,63 @@
Linus Torvalds 透露他最喜欢的编程笔记本
====
>是戴尔 XPS 13 开发版。下面就是原因。
我最近和一些 Linux 开发者讨论了一个问题:对于严谨的程序员来说,最好的笔记本是什么样的?结果,我从这些程序员的观点中筛选出了多款笔记本电脑。在我看来,赢家是谁呢?就是戴尔 XPS 13 开发版。和我看法一致的大有人在Linux 的缔造者 Linus Torvalds 也认同这个观点。对于他来说,戴尔 XPS 13 开发版是目前最好的笔记本电脑。
![](http://zdnet3.cbsistatic.com/hub/i/r/2016/07/18/702609c3-db38-4603-9f5f-4dcc3d71b140/resize/770xauto/50a8ba1c2acb1f0994aec2115d2e55ce/2016-dell-xps-13.jpg)
Torvalds 的说法,可能和你的想法不同。
在 Google+ 上Torvalds 解释道:“第一,[我从来不把笔记本当成台式机的替代品][1],并且,我每年旅游不了几次。对于我来说,笔记本是有专门用途的东西,而不是每天(或者每周)都要使用的,因此,主要的标准不是类似‘平均每天使用’,而是非常适合旅游时使用。”
因此,对于 Torvalds 来说,“我最后比较关心的一点是它是相当的小和轻,因为在会议上我可能一整天都需要带着它。我同样需要一个好的屏幕,因为到目前为止,我主要是在桌子上使用它,我希望文字的显示,小而且清晰。”
戴尔的显示器是由 Intel's Iris 540 GPU 提供的。在我的印象中这款GPU非常的不错。
Iris 提供了 13.3 英寸的 3,200×1,800 的显示屏。每英寸有280像素比我喜欢的[2015年的 Chromebook Pixel][2]多了40个像素比[MacBook Pro with Retina][3]多了60个像素。
然而,即便有了上面说的硬件配置,想在 [Gnome][4] 桌面上用得舒服也不容易。正如 Torvalds 在另一篇帖子中解释的那样,它“[和我的台式机有一样的分辨率][5]但是显然因为笔记本的屏幕更小Gnome 似乎自作主张地默认使用 2 倍的自动缩放因子,结果把各种东西(窗口装饰、图标等)都显示得过大了”。
解决方案嘛?你可以不用指望图形界面的设置了,直接在 shell 里运行 gsettings把 org.gnome.desktop.interface 的缩放因子设置为 1。
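也就是在终端里运行类似下面这样的命令(示例):
```
gsettings set org.gnome.desktop.interface scaling-factor 1
```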
Torvalds 或许使用 Gnome 桌面,但是他不是很喜欢 Gnome 3.x 系列。我不是很赞同他。这就是为什么我使用 [Cinnamon][7] 来代替。
他还希望有“一个相当强大的 CPU因为当我旅游的时候我依旧需要多次编译 Linux 内核。我并不需要像在家那样,每次 pull request 都进行一次完整的 ‘make allmodconfig’ 编译,但是我希望它能比我以前的笔记本编译得更快一些(实际上,屏幕才是我想升级的主要原因)”。
Linus 没有透露他那台 XPS 13 的具体配置,但我评测的是一台高端机型:双核 2.2GHz 的第 6 代英特尔酷睿 i7-6550U 处理器Skylake 架构16GB DDR3 内存,以及 0.5TB约 500GB的 PCIe 固态硬盘SSD。我敢肯定Torvalds 的配置至少不比这差。
一些你或许会关注的特征,不在 Torvalds 给出的列表中。
>“我不太在意触摸屏,因为相对于屏幕上的文字,我的手指又大又笨拙(我也受不了指纹污迹:也许我的手指特别油腻,但我真的不想去碰屏幕)。
>我也不太在意所谓的‘一整天的电池续航’,因为坦率地讲,我都不记得上次不插电源用电脑是什么时候了。如果只是快速查看点东西,我可能懒得插电,但这不是一个压倒一切的大问题。只要电池续航‘超过两小时’,我就不那么在乎了。”
戴尔声称XPS 13 搭配 56WHr 的 4 芯电池,续航可达 12 小时。以我的使用来看,它轻松超过了 10 个小时。我从没有尝试过把电量完全耗尽会是什么状态。
Torvalds 也不用担心 Intel 的 Wi-Fi 网卡。非开发版使用 Broadcom 的芯片,给 Windows 和 Linux 用户都带来过一些问题。在解决这些问题上,戴尔的技术支持给了我很大帮助。
一些用户在使用 XPS 13 触摸板的时候遇到了问题。Torvalds 和我都几乎没有什么困扰。Torvalds 写到“XPS13 触摸板对于我来说运行的非常好。这可能只是个人喜好,但它操作起来比较流畅,响应比较快。”
不过,尽管 Torvalds 喜欢 XPS 13他同时也钟情于最新版的联想 X1 Carbon、惠普 Spectre 13 x360 和去年的联想 Yoga 900。至于我我喜欢 XPS 13 开发版;说到价钱,我之前看到的机型是 1949.99 美元,掏出你的信用卡就可以拿下了。
因此如果你希望像世界上顶级的程序员之一一样开发的话Dell XPS 13 开发版对得起它的价格。
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/linus-torvalds-reveals-his-favorite-programming-laptop/
作者:[Steven J. Vaughan-Nichols ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]: https://plus.google.com/+LinusTorvalds/posts/VZj8vxXdtfe
[2]: http://www.zdnet.com/article/the-best-chromebook-ever-the-chromebook-pixel-2015/
[3]: http://www.zdnet.com/product/apple-15-inch-macbook-pro-with-retina-display-mid-2015/
[4]: https://www.gnome.org/
[5]: https://plus.google.com/+LinusTorvalds/posts/d7nfnWSXjfD
[6]: http://www.zdnet.com/article/linus-torvalds-finds-gnome-3-4-to-be-a-total-user-experience-design-failure/
[7]: http://www.zdnet.com/article/how-to-customise-your-linux-desktop-cinnamon/

View File

@ -1,50 +0,0 @@
Torvalds2.0: Patricia Torvalds 谈“计算”,大学,女权主义和科技界的多元化
![Image by : Photo by Becky Svartström. Modified by Opensource.com. CC BY-SA 4.0](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-lead-patriciatorvalds.png)
Image by : Photo by Becky S. Modified by Opensource.com. [CC BY-SA 4.0][1]
图片来源照片来自Becky Svartström, 修改自Opensource.com.
Patricia Torvalds 并不是那个在 Linux 和开源领域非常有名同样叫做 Torvalds 的人。
![](http://opensource.com/sites/default/files/images/life-uploads/ptorvalds.png)
18 岁的时候Patricia 已经是一个有着多项科技成就、拥有开源产业经验的女权主义者。目前,她以实习生的身份在位于美国俄勒冈州波特兰市的 Puppet 实验室工作;之后她将前往北卡罗来纳州的达勒姆,开始在杜克大学工程学院第一学年的秋季学期学习。
在这次独家采访中Patricia 表示,使她对计算机科学和工程学感兴趣的(剧透警告:不是她的父亲)原因,包括高中时候对科技的偏爱,女权主义在她的生活中扮演了重要角色以及对科技多元化缺乏的思考。
![](http://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)
###什么东西使你钟情于学习计算机科学和工程学?###
我在科技方面的兴趣的确产生于高中时代。我曾一度想投身于生物学,直到大约大学二年级的时候。大二结束以后,我在波特兰 VA 当网页设计实习生。与此同时我参加了一个叫做“风险勘探”的工程学课程在我大二学年的后期我们把一个水下机器人送到了太平洋。但是转折点大概是在青年时代的中期当我因被授予“NCWIT Aspiration in Computing"奖而被称作地区和民族英雄的时候出现的。
这个奖项的获得我感觉确立了自己的兴趣。当然,我认为最重要的部分是我加入到一个所有获奖者在里面的 Facebook 群,里面那些已经获奖的女孩们简直难以置信的相互支持。由于在 XV 和 VA
的工作,我在获奖前就已经确定了致力于计算机科学,但是和这些女孩的交谈更加坚定了这份兴趣,使之更加强壮。再后来,初高年级后期的时候教授 XV 也使我体会到工程学和计算机科学对我来说的确很有趣。
###你打算学习什么?毕业以后你已经知道自己想干什么了吗?###
我希望主修力学或电气科学以及计算机工程学和计算机科学还有女性学。毕业以后,我希望在一个支持或者创造科技为社会造福的公司工作,或者自己开公司。
###我的女儿在高中有一门 Visual Basic的编程课。她是整个班上唯一的一个女生并且以疲倦和痛苦的经历结束这门课程。你的经历是什么样的呢###
我的高中在高年级的时候开设计算机科学的课程,我也学习了 Visual Basic这门课不是很糟糕但我的确是20多个人的班级里唯一的三四个女生之一。其他的计算机课程似乎也有相似的性别比例差异。然而我所在的高中极其小并且老师对科技非常支持和包容所以我并没有感到厌倦。希望在未来的一些年里计算机方面的课程会变得更加多样化。
###你的学校做了哪些促进科技的智举?它们如何能够变得更好?###
我的高中学校给了我们长时间的机会接触到计算机,老师们会突然在不相关的课程上安排科技基础任务,比如有好多次任务,我们必须建一个供社会学习课程使用的网站,我认为这很棒因为它使我们每一个人都能接触到科技。机器人俱乐部也很活跃并且资金充足,但是非常小,我不是其中的成员。学校的科技/工程学项目中一个非常强大的组成部分是一门叫做”风险勘测“的学生教学工程学课程,这是一门需要亲自动手的课程,并且每年处理一个工程学或者计算机科学难题。我和我的一个同学在这儿教授了两年,在课程结束以后,有学生上来告诉我他们对从事工程学或者计算机科学感兴趣。
然而,我的高中没有特别的关注于让年轻女性加入到这些项目中来,并且在人种上也没有呈现多样化。计算机课程和俱乐部大量的主要成员都是男性白人。这的确应该能够有所改善。
###在成长过程中,你如何在家使用科技?###
老实说小的时候我使用电脑我的父亲设了一个跟踪装置当我们上网一个小时就会断线玩尼奥宠物和或者相似的游戏。我想我本可以毁坏跟踪装置或者在不连接网络的情况下玩游戏但我没有这样做。我有时候也会和我的父亲做一些小的科学项目我还记得我和他在电脑终端上打印出”Hello world"无数次。但是大多数时候,我都是和我的妹妹一起玩网络游戏,直到高中的时候才开始学习“计算”。
###你在高中学校的女权俱乐部很积极,从这份经历中你学到了什么?现在对你来说什么女权问题是最重要的?###
在高中二年级的后期,我和我的朋友一起建立了女权俱乐部。刚开始,我们受到了很多反对和抵抗,并且这从来就没有完全消失过。到我们毕业的时候,女权主义理想已经彻底成为了学校文化的一个部分。我们在学校做的女权主义工作通常是以一些比较直接的方式并集中于像着装要求这样一些问题。
就我个人来说我更集中于交叉地带的女权主义把女权主义运用到缓解其他方面的压迫比如种族歧视和阶级歧视。Facebook 网页《Gurrilla Feminism》是交叉地带女权主义一个非常好的例子并且我从中学到了很多。我目前管理波特兰分支。
在科技多样性方面女权主义对我也非常重要尽管作为一名和科技世界有很强联系的上流社会女性女权主义问题对我产生的影响相比其他人来说非常少我所涉及的交叉地带女权主义也是同样的。出版集团比如《Model View Culture》非常鼓舞我并且我很感激 Shanley Kane 所做的一切。
###你会给想教他们的孩子学习编程的父母什么样的建议?###
老实说,从没有人把我推向计算机科学或者工程学。正如我前面说的,在很长一段时间里,我想成为一名遗传学家。大二结束的那个夏天,我在 VA 当了一个夏季的网页设计实习生,这彻底改变了我之前的想法。所以我不知道我是否能够完整的回答这个问题。
我的确认为真实的兴趣很重要。如果在我12岁的时候我的父亲让我坐在一台电脑前教我安装网站服务器我认为我不会对计算机科学感兴趣。相反我的父母给了我很多可以支配的自由让我去做自己想做的事情绝大多数时候是为我的尼奥宠物游戏编糟糕的HTML网站。比我小的妹妹们没有一个对工程学或计算机科学感兴趣我的父母也不在乎。我感到很幸运的是我的父母给了我和我的妹妹们鼓励和资源去探索自己的兴趣。
仍然,在我成长过程中我也常说未来要和我的父亲做同样的职业,尽管我还不知道我父亲是干什么的,只知道他有一个很酷的工作。另外,中学的时候有一次,我告诉我的父亲这件事,然后他没有发表什么看法只是告诉我高中的时候不要想这事。所以我猜想这从一定程度上刺激了我。
###对于开源社区的领导者们,你有什么建议给他们来吸引和维持更加多元化的贡献者?###
我实际上在开源社区不是特别积极和活跃。和女性讨论“计算”我感觉更舒服。我是“NCWIT Aspirarion in Computing"网站的一名成员这是我对科技有持久兴趣的一个重要方面同样也包括Facebook群”Ladies Storm Hackathons".
我认为对于吸引和维持多种多样有天赋的贡献者,安全空间很重要。我过去看到在一些开源社区有人发表关于女性歧视和种族主义的评论,人们指出这一问题随后该人被解职。我认为要维持一个专业的社区必须就骚扰事件和不正当行为有一个很高的标准。当然,人们已经有或者将有很多的选择关于在开源社区或其他任何社区能够表达什么。然而,如果社区领导人真的想吸引和维持多元化有天赋的人员,他们必须创造一个安全的空间并且把社区成员维持在很高的标准上。我也认为一些一些社区领导者不明白多元化的价值。很容易说明科技象征着精英社会,并且很多人被科技忽视的原因是他们不感兴趣,这一问题在很早的准备过程中就提出了。他们争论如果一个人在自己的工作上做得很好,那么他的性别或者民族还有性取向这些情况都变得不重要了。这很容易反驳,但我不想为错误找理由。我认为多元化缺失是一个错误,我们应该为之负责并尽力去改善这件事。
--------------------------------------------------------------------------------
via: http://opensource.com/life/15/8/patricia-torvalds-interview
作者:[Rikki Endsley][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://opensource.com/users/rikki-endsley
[1]:https://creativecommons.org/licenses/by-sa/4.0/
[2]:https://puppetlabs.com/
[3]:https://www.aspirations.org/
[4]:https://www.facebook.com/guerrillafeminism
[5]:https://modelviewculture.com/
[6]:https://www.aspirations.org/
[7]:https://www.facebook.com/groups/LadiesStormHackathons/

View File

@ -1,234 +0,0 @@
使用 Python 和 asyncio 编写在线多人游戏 - 第2部分
==================================================================
![](https://7webpages.com/media/cache/fd/d1/fdd1f8f8bbbf4166de5f715e6ed0ac00.gif)
你曾经写过异步的 Python 程序吗?这里我将告诉你如何做,而且在接下来的部分用一个[实例][1] - 专为多玩家设计的、受欢迎的贪吃蛇游戏来演示。
介绍和理论部分参见第一部分[异步化[第1部分]][2]。
[试玩游戏][3]。
### 3. 编写游戏循环主体
游戏循环是每一个游戏的核心。它持续地读取玩家的输入,更新游戏的状态,并且在屏幕上渲染游戏结果。在在线游戏中,游戏循环分为客户端和服务端两部分,所以一般有两个循环通过网络通信。通常客户端的角色是获取玩家输入,比如按键或者鼠标移动,将数据传输给服务端,然后接收需要渲染的数据。服务端处理来自玩家的所有数据,更新游戏的状态,执行渲染下一帧的必要计算,然后将结果传回客户端,例如游戏中对象的位置。如果没有可靠的理由,不混淆客户端和服务端的角色很重要。如果你在客户端执行游戏逻辑的计算,很容易就会和其它客户端失去同步,其实你的游戏也可以通过简单地传递客户端的数据来创建。
游戏循环的一次迭代称为一个嘀嗒。嘀嗒表示当前游戏循环的迭代已经结束,下一帧(或者多帧)的数据已经就绪。在后面的例子中,我们使用相同的客户端,使用 WebSocket 连接服务端。它执行一个简单的循环,将按键码发送给服务端,显示来自服务端的所有信息。[客户端代码戳这里][4]。
#### 例子3.1:基本游戏循环
[例子3.1源码][5]。
我们使用 [aiohttp][6] 库来创建游戏服务器。它可以通过 asyncio 创建网页服务器和客户端。这个库的一个优势是它同时支持普通 http 请求和 websocket。所以我们不用其他网页服务器来渲染游戏的 html 页面。
下面是启动服务器的方法:
```
app = web.Application()
app["sockets"] = []
asyncio.ensure_future(game_loop(app))
app.router.add_route('GET', '/connect', wshandler)
app.router.add_route('GET', '/', handle)
web.run_app(app)
```
`web.run_app` 是创建服务主任务的快捷方法,通过他的 `run_forever()` 方法来执行 asyncio 事件循环。建议你查看这个方法的源码,弄清楚服务器到底是如何创建和结束的。
`app` 变量就是一个类似于字典的对象,它可以在所连接的客户端之间共享数据。我们使用它来存储连接套接字的列表。随后会用这个列表来给所有连接的客户端发送消息。`asyncio.ensure_future()` 调用会启动主游戏循环的任务每隔2s向客户端发送嘀嗒消息。这个任务会在同样的 asyncio 事件循环中和网页服务器并行执行。
有两个网页请求处理器:提供 html 页面的处理器 (`handle`)`wshandler` 是主要的 websocket 服务器任务,处理和客户端之间的交互。在事件循环中,每一个连接的客户端都会创建一个新的 `wshandler`
在启动的任务中,我们在 asyncio 的主事件循环中启动 worker 循环。任务之间的切换发生在它们中任何一个使用 `await` 语句等待某个协程结束的时候。例如 `asyncio.sleep` 仅仅是把程序的执行权交给调度器一段指定的时间;`ws.receive` 则等待 websocket 的消息,此时调度器可能切换到其它任务。
在浏览器中打开主页连接上服务器后试试随便按下键。他们的键值会从服务端返回每隔2秒这个数字会被游戏循环发给所有客户端的嘀嗒消息覆盖。
我们刚刚创建了一个处理客户端按键的服务器,主游戏循环在后台做一些处理,周期性地同时更新所有的客户端。
#### 例子 3.2: 根据请求启动游戏
[例子 3.2的源码][7]
在前一个例子中,在服务器的生命周期内,游戏循环一直运行着。但是现实中,如果没有一个人连接服务器,空运行游戏循环通常是不合理的。而且,同一个服务器上可能有不同的’游戏房间‘。在这种假设下,每一个玩家创建一个游戏会话(多人游戏中的一个比赛或者大型多人游戏中的副本),这样其他用户可以加入其中。当游戏会话开始时,游戏循环才开始执行。
在这个例子中,我们使用一个全局标记来检测游戏循环是否在执行。当第一个用户发起连接时,启动它。最开始,游戏循环不在执行,标记设置为 `False`。游戏循环是通过客户端的处理方法启动的。
```
if app["game_is_running"] == False:
asyncio.ensure_future(game_loop(app))
```
当游戏的循环(`loop()`)运行时,这个标记设置为 `True`;当所有客户端都断开连接时,其又被设置为 `False`
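也就是说,这个标记由游戏循环协程自己维护,大致如下(仅为示意,具体实现请参考示例源码):
```
async def game_loop(app):
    app["game_is_running"] = True
    while app["sockets"]:            # 还有已连接的客户端时继续循环
        await asyncio.sleep(2)       # 定期向客户端发送“嘀嗒”消息
    app["game_is_running"] = False   # 所有客户端断开后复位标记
```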
#### 例子 3.3:管理任务
[例子3.3源码][8]
这个例子用来解释如何和任务对象协同工作。我们把游戏循环的任务直接存储在游戏循环的全局字典中,代替标记的使用。在这个简单例子中并不一定是最优的,但是有时候你可能需要控制所有已经启动的任务。
```
if app["game_loop"] is None or \
app["game_loop"].cancelled():
app["game_loop"] = asyncio.ensure_future(game_loop(app))
```
这里 `ensure_future()` 返回我们存放在全局字典中的任务对象,当所有用户都断开连接时,我们使用下面方式取消任务:
```
app["game_loop"].cancel()
```
这个 `cancel()` 调用将通知所有的调度器不要向这个协程提交任何执行任务,而且将它的状态设置为已取消,之后可以通过 `cancelled()` 方法来检查是否已取消。这里有一个值得一提的小注意点:当你在外部持有一个任务对象的引用,而这个任务执行中抛出了异常时,这个异常不会被抛出来。取而代之的是为这个任务设置一个异常状态,可以通过 `exception()` 方法来检查是否出现了异常。这种悄无声息的失败在调试时不是很有用。所以,你可能想用抛出所有异常来取代这种做法。你可以对所有未完成的任务显式地调用 `result()` 来实现。可以通过如下的回调来实现:
```
app["game_loop"].add_done_callback(lambda t: t.result())
```
如果我们打算在代码中取消任务,但是又不想产生 `CancelledError` 异常,那么在回调里应该先检查任务的 `cancelled` 状态:
```
app["game_loop"].add_done_callback(lambda t: t.result()
if not t.cancelled() else None)
```
注意!仅当你持有任务对象的引用时,才必须这么做。在前一个例子中,因为没有额外的回调,所有的异常都会直接抛出。
#### 例子 3.4:等待多个事件
[例子 3.4 源码][9]
在许多场景下,在客户端的处理方法中你需要等待多个事件的发生。除了客户端的消息,你可能需要等待不同类型事件的发生。比如,如果你的游戏时间有限制,那么你可能需要等一个来自定时器的信号。或者你需要使用管道来等待来自其它进程的消息。亦或者是使用分布式消息系统网络中其它服务器的信息。
为了简单起见,这个例子是基于例子 3.1。但是这个例子中我们使用 `Condition` 对象来保证已连接客户端游戏循环的同步。我们不保存套接字的全局列表,因为只在方法中使用套接字。当游戏循环停止迭代时,我们使用 `Condition.notify_all()` 方法来通知所有的客户端。这个方法允许在 `asyncio` 的事件循环中使用发布/订阅的模式。
为了等待两个事件,首先我们使用 `ensure_future()` 来封装任务中可以等待的对象。
```
if not recv_task:
recv_task = asyncio.ensure_future(ws.receive())
if not tick_task:
await tick.acquire()
tick_task = asyncio.ensure_future(tick.wait())
```
在我们调用 `Condition.wait()` 之前,我们需要在背后获取一把锁。这就是我们为什么先调用 `tick.acquire()` 的原因。在调用 `tick.wait()` 之后,锁会被释放,这样其他的协程也可以使用它。但是当我们收到通知时,会重新获取锁,所以在收到通知后需要调用 `tick.release()` 来释放它。
我们使用 `asyncio.wait()` 协程来等待两个任务。
```
done, pending = await asyncio.wait(
[recv_task,
tick_task],
return_when=asyncio.FIRST_COMPLETED)
```
程序会阻塞,直到列表中的任意一个任务完成。然后它返回两个列表:执行完成的任务列表和仍然在执行的任务列表。如果任务执行完成了,其对应变量赋值为 `None`,所以在下一个迭代时,它可能会被再次创建。
#### 例子 3.5 结合多个线程
[例子 3.5 源码][10]
在这个例子中,我们结合 asyncio 循环和线程,在一个单独的线程中执行主游戏循环。我之前提到过,由于 `GIL` 的存在Python 代码不可能实现真正的并行执行。所以使用其它线程来执行复杂计算并不是一个好主意。然而,在使用 `asyncio` 时结合线程是有原因的:当我们使用的其它库不支持 `asyncio` 时,在主线程中调用这些库会阻塞循环的执行,所以异步使用它们的唯一方法就是放到不同的线程中去执行。
在 asyncio 的循环和 `ThreadPoolExecutor` 中,我们通过 `run_in_executor()` 方法来执行游戏循环。注意 `game_loop()` 已经不再是一个协程了。它是一个由其它线程执行的函数。然而我们需要和主线程交互在游戏事件到来时通知客户端。asyncio 本身不是线程安全的,它提供了可以在其它线程中执行你的代码的方法。普通函数有 `call_soon_threadsafe()`, 协程有 `run_coroutine_threadsafe()`。我们在 `notify()` 协程中增加代码通知客户端游戏的嘀嗒,然后通过另外一个线程执行主事件循环。
```
def game_loop(asyncio_loop):
print("Game loop thread id {}".format(threading.get_ident()))
async def notify():
print("Notify thread id {}".format(threading.get_ident()))
await tick.acquire()
tick.notify_all()
tick.release()
while 1:
task = asyncio.run_coroutine_threadsafe(notify(), asyncio_loop)
# blocking the thread
sleep(1)
# make sure the task has finished
task.result()
```
当你执行这个例子时,你会看到 "Notify thread id" 和 "Main thread id" 相等,因为 `notify()` 协程在主线程中执行。与此同时 `sleep(1)` 在另外一个线程中执行,因此它不会阻塞主事件循环。
#### 例子 3.6:多进程和扩展
[例子 3.6 源码][11]
单线程的服务器可能运行得很好,但是它只能使用一个 CPU 核。为了将服务扩展到多核,我们需要执行多个进程,每个进程执行各自的事件循环。这样我们需要在进程间交互信息或者共享游戏的数据。而且在一个游戏中经常需要进行复杂的计算,例如路径查找之类,这些任务有时候在一个游戏嘀嗒中没法快速完成。在协程中不推荐进行费时的计算,因为它会阻塞事件的处理。在这种情况下,将这个复杂任务交给能并行执行的其它进程可能更合理。
最简单的使用多个核的方法是启动多个使用单核的服务器,就像之前的例子中一样,每个服务器占用不同的端口。你可以使用 `supervisord` 或者其它进程控制的系统。这个时候你需要一个负载均衡器,像 `HAProxy`,使得连接的客户端在多个进程间均匀分布。有一些适配 asyncio 消息系统和存储系统。例如:
- [aiomcache][12] for memcached client
- [aiozmq][13] for zeroMQ
- [aioredis][14] for Redis storage and pub/sub
你可以在 github 或者 pypi 上找到其它的安装包,大部分以 `aio` 开头。
使用网络服务在存储持久状态和交互信息时可能比较有效。但是如果你需要进行进程通信的实时处理,它的性能可能不足。此时,使用标准的 unix 管道可能更合适。asyncio 支持管道,这个仓库有个 [使用pipe且比较底层的例子][15]
在当前的例子中,我们使用 Python 的高层库 [multiprocessing][16] 来在不同的核上启动复杂的计算,使用 `multiprocessing.Queue` 来进行进程间的消息交互。不幸的是,当前的 multiprocessing 实现与 asyncio 不兼容。所以每一个阻塞方法的调用都会阻塞事件循环。但是此时线程正好可以起到帮助作用,因为如果在不同线程里面执行 multiprocessing 的代码,它就不会阻塞主线程。所有我们需要做的就是把所有进程间的通信放到另外一个线程中去。这个例子会解释如何使用这个方法。和上面的多线程例子非常类似,但是我们从线程中创建的是一个新的进程。
```
def game_loop(asyncio_loop):
# coroutine to run in main thread
async def notify():
await tick.acquire()
tick.notify_all()
tick.release()
queue = Queue()
# function to run in a different process
def worker():
while 1:
print("doing heavy calculation in process {}".format(os.getpid()))
sleep(1)
queue.put("calculation result")
Process(target=worker).start()
while 1:
# blocks this thread but not main thread with event loop
result = queue.get()
print("getting {} in process {}".format(result, os.getpid()))
task = asyncio.run_coroutine_threadsafe(notify(), asyncio_loop)
task.result()
```
这里我们在另外一个进程中运行 `worker()` 函数。它包括一个执行复杂计算的循环,并把计算结果放到 `queue` 中,这个 `queue``multiprocessing.Queue` 的实例。然后我们就可以在另外一个线程的主事件循环中获取结果并通知客户端,就像例子 3.5 中一样。这个例子已经非常简化了,它没有合理地结束进程。而且在真实的游戏中,我们可能需要另外一个队列来将数据传递给 `worker`
有一个项目叫 [aioprocessing][17],它封装了 multiprocessing使得它可以和 asyncio 兼容。但是实际上它只是和上面例子使用了完全一样的方法:从线程中创建进程。它并没有给你带来任何方便,除了它使用了简单的接口隐藏了后面的这些技巧。希望在 Python 的下一个版本中,我们能有一个基于协程且支持 asyncio 的 multiprocessing 库。
> 注意!如果你从主线程或者主进程中创建了一个不同的线程或者子进程来运行另外一个 asyncio 事件循环,你需要显式地使用 `asyncio.new_event_loop()` 来创建循环,不然的话程序可能不会正常工作。
--------------------------------------------------------------------------------
via: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-writing-game-loop/
作者:[Kyrylo Subbotin][a]
译者:[chunyang-wen](https://github.com/chunyang-wen)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-writing-game-loop/
[1]: http://snakepit-game.com/
[2]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/
[3]: http://snakepit-game.com/
[4]: https://github.com/7WebPages/snakepit-game/blob/master/simple/index.html
[5]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_basic.py
[6]: http://aiohttp.readthedocs.org/
[7]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_handler.py
[8]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_global.py
[9]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_wait.py
[10]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_thread.py
[11]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_process.py
[12]: https://github.com/aio-libs/aiomcache
[13]: https://github.com/aio-libs/aiozmq
[14]: https://github.com/aio-libs/aioredis
[15]: https://github.com/KeepSafe/aiohttp/blob/master/examples/mpsrv.py
[16]: https://docs.python.org/3.5/library/multiprocessing.html
[17]: https://github.com/dano/aioprocessing

View File

@ -7,12 +7,11 @@
代码戳[这里][4]
### 4. 制作一个完整的游戏
### 4. 制作一个完整的游戏
![](https://7webpages.com/static/img/14chs7.gif)
#### 4.1 工程概览
#### 4.1 Project's overview
在此部分,我们将回顾一个完整在线游戏的设计。这是一个经典的贪吃蛇游戏,增加了多玩家支持。你可以自己在 (<http://snakepit-game.com>) 亲自试玩。源码在 Github的这个[仓库][5]。游戏包括下列文件:

View File

@ -1,405 +0,0 @@
旅行时通过树莓派和iPad Pro备份图片
===================================================================
![](http://www.movingelectrons.net/images/bkup_photos_main.jpg)
>旅行中备份图片 - Gear.
### 介绍
我长期以来一直在寻找一个旅行中备份照片的理想方法。把 SD 卡一直放在相机包里既不保险也容易出问题SD 卡可能丢失或者被盗,数据可能损坏或者在传输过程中出错。比较好的一个选择是复制到另外一个设备,即使它也只是一张 SD 卡,并且把它放到一个比较安全的地方。备份到远端也是一个可行的办法,但是如果去了一个没有网络的地方就不太可行了。
我理想的备份步骤需要下面的工具:
1. 用一台 iPad Pro 而不是笔记本。我喜欢轻装旅行,而且我的旅行大部分是商务性质的(而不是以摄影为主的),这就是为什么我选择了 iPad Pro。
2. 用尽可能少的设备
3. 设备之间的连接需要很安全。我需要在旅馆和机场使用,所以设备之间的连接需要时封闭的加密的。
4. 整个过程应该是稳定可靠的。我还试过其它使用移动设备的方案,但是效果不太理想[1]。
### 设置
我制定了一个满足上面条件并且在未来可以扩充的设定,它包含下面这些部件的使用:
1. [2] 9.7 寸的 iPad Pro写作本文时它是最棒的又小又轻便的 iOS 设备。Apple Pencil 并不是必需的,但是当我在路上需要做一些编辑的时候会用到。所有的重活都由树莓派来做,其它设备只是通过 SSH 连接到它。
2. [3] 树莓派3包含Raspbian系统
3. [4]Mini SD卡 [box/case][5].
5. [6]128G的优盘对于我是够用了你可以买个更大的你也可以买个移动硬盘但是树莓派没办法给移动硬盘供电你需要额外准备一个供电的hub当然优质的线缆能提供可靠便捷的安装和连接。
6. [9]SD读卡器
7. [10]另外的sd卡SD卡我在用满之前就会立即换一个这样就会让我的照片分布在不同的sd卡上
下图展示了这些设备之间如何相互连接.
![](http://www.movingelectrons.net/images/bkup_photos_diag.jpg)
>旅行时照片的备份-过程表格.
树莓派会作为一个热点. 它会创建一个WIFI网络当然也可以建立一个Ad Hoc网络更简单一些但是它不会加密设备之间的连接因此我选择创建WIFI网络。
SD卡放进SD读卡器插到树莓派USB端口上128G的大容量优盘一直插在树莓派的USB端口上我选择了一款闪迪的体积比较小。主要的思路就是通过脚本把SD卡的图片备份到优盘上脚本是增量备份而且脚本会自动运行使备份特别快如果你有很多的照片或者拍摄了很多没压缩的照片这个任务量就比较大用ipad来运行Python脚本而且用来浏览SD卡和优盘的文件。
如果给树莓派连上一根能上网的网线那样连接树莓派wifi的设备就可以上网啦
### 1. 树莓派的设置
这部分要用到命令行模式,我会尽可能详细的介绍,方便大家进行下去。
#### 安装和配置Raspbian
给树莓派连接鼠标键盘和显示器将SD卡插到树莓派上在官网按步骤安装Raspbian [12].
安装完后执行下面的命令:
```
sudo apt-get update
sudo apt-get upgrade
```
升级机器上所有的软件到最新,我将树莓派连接到本地网络,而且为了安全更改了默认的密码。
Raspbian 默认开启了 SSH这样所有的设置都可以在一台远程设备上完成。我也设置了 RSA 密钥验证,这是可选的功能,查看更多信息请看[这里][13]。
这是一个在 Mac 上通过 SSH 连接到树莓派的截图[14]:
##### 建立WPA2验证的WIFI
这个安装过程基于这篇文章[15],并根据我自己的情况做了调整。
##### 1. 安装软件包
我们需要安装下面的软件包:
```
sudo apt-get install hostapd
sudo apt-get install dnsmasq
```
hostapd 用来创建 WIFIdnsmasq 则集成了 DHCP 和 DNS 服务,很容易设置。
##### 2. 编辑dhcpcd.conf
树莓派通过 dhcpcd 管理网络连接。因为我们要给 wlan0 配置一个静态 IP所以首先要让 dhcpcd 忽略 wlan0。
用sudo nano `/etc/dhcpcd.conf`命令打开配置文件,在最后一行添加上如下信息:
```
denyinterfaces wlan0
```
注意: 必须先配置这个接口才能配置其他接口.
##### 3. 编辑接口
现在设置静态 IP用 sudo nano `/etc/network/interfaces` 命令打开接口配置文件,按照如下信息编辑 wlan0 选项:
```
allow-hotplug wlan0
iface wlan0 inet static
address 192.168.1.1
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```
同样, 然后添加wlan1信息:
```
#allow-hotplug wlan1
#iface wlan1 inet manual
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```
重要:用 sudo service dhcpcd restart 命令重启 dhcpcd 服务,然后用 `sudo ifdown eth0; sudo ifup wlan0` 命令重新加载接口,使配置生效。
##### 4. 配置Hostapd
接下来我们配置hostapd`sudo nano /etc/hostapd/hostapd.conf` 用这个命令创建并填写配置信息到文件中:
```
interface=wlan0
# Use the nl80211 driver with the brcmfmac driver
driver=nl80211
# This is the name of the network
ssid=YOUR_NETWORK_NAME_HERE
# Use the 2.4GHz band
hw_mode=g
# Use channel 6
channel=6
# Enable 802.11n
ieee80211n=1
# Enable QoS Support
wmm_enabled=1
# Enable 40MHz channels with 20ns guard interval
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
# Accept all MAC addresses
macaddr_acl=0
# Use WPA authentication
auth_algs=1
# Require clients to know the network name
ignore_broadcast_ssid=0
# Use WPA2
wpa=2
# Use a pre-shared key
wpa_key_mgmt=WPA-PSK
# The network passphrase
wpa_passphrase=YOUR_NEW_WIFI_PASSWORD_HERE
# Use AES, instead of TKIP
rsn_pairwise=CCMP
```
配置完成后,我们需要运行 `sudo nano /etc/default/hostapd` 命令打开这个配置文件然后找到`#DAEMON_CONF=""` 替换成`DAEMON_CONF="/etc/hostapd/hostapd.conf"`以便hostapd服务能够找到对应的配置文件.
##### 5. 配置 dnsmasq
dnsmasq 自带的配置文件包含大量注释说明,方便你按需使用,但我们用不到那么多选项。我建议用下面两条命令把它移到别处(不要删除它),然后自己新建一个配置文件:
```
sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
sudo nano /etc/dnsmasq.conf
```
粘贴下面的信息到新文件中:
```
interface=wlan0 # Use interface wlan0
listen-address=192.168.1.1 # Explicitly specify the address to listen on
bind-interfaces # Bind to the interface to make sure we aren't sending things elsewhere
server=8.8.8.8 # Forward DNS requests to Google DNS
domain-needed # Don't forward short names
bogus-priv # Never forward addresses in the non-routed address spaces.
dhcp-range=192.168.1.50,192.168.1.100,12h # Assign IP addresses in that range with a 12 hour lease time
```
##### 6. 设置IPv4转发
最后我们需要做的就是配置包转发:用 `sudo nano /etc/sysctl.conf` 命令打开 sysctl.conf 文件,把包含 `net.ipv4.ip_forward=1` 的那一行行首的 `#` 号删掉,然后重启使其生效。
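译注:如果不想重启,也可以用 sysctl 命令让转发立即生效(这不是原文内容,仅供参考):
```
# 修改 /etc/sysctl.conf 之后,让 IPv4 转发立即生效,无需重启
sudo sysctl -w net.ipv4.ip_forward=1
# 或者重新加载整个配置文件
sudo sysctl -p
```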
我们还需要给连接树莓派的设备通过WIFI分享一个网络连接做一个wlan0和eth0的NAT我们可以参照下面的脚本来实现。
```
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
```
我把上面这些命令保存为一个名为 hotspot-boot.sh 的脚本,并让它可以执行:
```
sudo chmod 755 hotspot-boot.sh
```
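译注:综合上面的转发和 NAT 设置hotspot-boot.sh 大致可以写成下面这个最小示例(内容系推测,仅供参考,请根据自己的环境调整):
```
#!/bin/sh
# hotspot-boot.sh推测的最小示例开机时配置转发和 NAT

# 开启 IPv4 转发(与 /etc/sysctl.conf 中的设置保持一致)
sysctl -w net.ipv4.ip_forward=1

# 把 wlan0 上的流量通过 eth0 做 NAT 转发
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
```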
脚本会在树莓派启动的时候运行,有很多方法实现,下面是我实现的方式:
1. 把文件放到`/home/pi/scripts`目录下.
2. 编辑 rc.local 文件:输入 `sudo nano /etc/rc.local` 命令,把运行脚本的命令放到 `exit 0` 之前(更多信息参见 [16])。
下面是示例。
```
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
printf "My IP address is %s\n" "$_IP"
fi
sudo /home/pi/scripts/hotspot-boot.sh &
exit 0
```
#### 安装 Samba 服务和 NTFS 兼容驱动
我们需要安装下面几个软件包,以便访问树莓派上共享的文件夹;其中 ntfs-3g 让我们能够访问 NTFS 文件系统中的文件。
```
sudo apt-get install ntfs-3g
sudo apt-get install samba samba-common-bin
```
你可以参照这些文档[17]来配置 Samba。
重要提示:我参考的文档建议先挂载外置硬盘,我们不需要这样做,因为在写这篇文章的时候,树莓派会在启动时自动auto-mount把 SD 卡和优盘挂载到 `/media/pi/` 下;那篇文档里的一些额外功能我们也用不到。
### 2. Python脚本
树莓派配置好之后,我们还需要让脚本真正去完成拷贝和备份照片的工作。这个脚本只是提供了一种特定的自动化备份流程;如果你具备基本的命令行操作技能,也完全可以 SSH 进树莓派,用 cp 或者 rsync 命令自己把照片从一个设备拷贝到另一个设备。脚本里我们使用 rsync 命令,它比较可靠而且支持增量备份。
这个过程依赖两个文件:脚本文件自身和 `backup_photos.conf` 这个配置文件。后者只有几行,包含目的驱动器(优盘)的名称以及它被挂载到的目录,它看起来是这样的:
```
mount folder=/media/pi/
destination folder=PDRIVE128GB
```
重要提示: 在这个符号`=`前后不要添加多余的空格,否则脚本会失效.
下面是这个Python脚本我把它命名为`backup_photos.py`,把它放到了`/home/pi/scripts/`目录下,我在每行都做了注释可以方便的查看各行的功能.
```
#!/usr/bin/python3
import os
import sys
from sh import rsync
'''
脚本将挂载到/media/pi的sd卡上的内容复制到一个目的磁盘的同名目录下目的驱动器的名字在.conf文件里定义好了.
Argument: label/name of the mounted SD Card.
'''
CONFIG_FILE = '/home/pi/scripts/backup_photos.conf'
ORIGIN_DEV = sys.argv[1]
def create_folder(path):
    print ('attempting to create destination folder: ',path)
    if not os.path.exists(path):
        try:
            os.mkdir(path)
            print ('Folder created.')
        except:
            print ('Folder could not be created. Stopping.')
            return
    else:
        print ('Folder already in path. Using that instead.')

confFile = open(CONFIG_FILE,'rU')
#IMPORTANT: rU Opens the file with Universal Newline Support,
#so \n and/or \r is recognized as a new line.
confList = confFile.readlines()
confFile.close()

for line in confList:
    line = line.strip('\n')
    try:
        name , value = line.split('=')
        if name == 'mount folder':
            mountFolder = value
        elif name == 'destination folder':
            destDevice = value
    except ValueError:
        print ('Incorrect line format. Passing.')
        pass

destFolder = mountFolder+destDevice+'/'+ORIGIN_DEV
create_folder(destFolder)

print ('Copying files...')
# Comment out to delete files that are not in the origin:
# rsync("-av", "--delete", mountFolder+ORIGIN_DEV, destFolder)
rsync("-av", mountFolder+ORIGIN_DEV+'/', destFolder)
print ('Done.')
```
### 3. iPad Pro 的配置
重活都由树莓派来做iPad Pro 并不参与文件传输。我们只需要在 iPad 上安装 Prompt 2 这个 SSH 客户端来连接树莓派,这样既可以运行 Python 脚本,也可以手动复制文件[18][19]。
![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_prompt.jpg)
>在 iPad 上用 Prompt 通过 SSH 连接树莓派。
因为我们安装了 Samba所以还可以用图形化的方式访问连接在树莓派上的 USB 存储设备、观看视频,以及在不同设备之间复制和移动文件,这需要在 iOS 设备上安装一个文件管理器,比如 FileBrowser[20]。
### 4. 将它们都放到一起
我们假设`SD32GB-03`是连接到树莓派的SD卡名字`PDRIVE128GB`是那个优盘通过事先的配置文件挂载好如果我们想要备份SD卡上的图片我们需要这么做:
1. 让树莓派先正常运行,将设备挂载好.
2. 连接树莓派配置好的WIFI网络.
3. 用prompt这个app通过ssh连接树莓派[21].
4. 连接好后输入下面的命令:
```
python3 backup_photos.py SD32GB-03
```
首次备份可能需要一些时间,具体取决于 SD 卡的使用量,这期间你需要保持设备之间的连接。为了避免连接中断导致脚本停止,可以在脚本命令前加上 nohup[22]
```
nohup python3 backup_photos.py SD32GB-03 &
```
![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_finished.png)
>运行完成的脚本如图所示.
### 未来的定制
我在树莓派上安装了 VNC 服务这样就可以用 iPad 连接树莓派的图形界面;我还安装了 BitTorrent Sync用来把照片远程备份到家里当然需要先设置好。这些工作完成之后我会再写文章介绍[23][24]。
你可以在下面发表评论和问题,我会在本页下方回复。
--------------------------------------------------------------------------------
via: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
作者:[Editor][a]
译者:[jiajia9linuxer](https://github.com/jiajia9linuxer)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
[1]: http://bit.ly/1MVVtZi
[2]: http://www.amazon.com/dp/B01D3NZIMA/?tag=movinelect0e-20
[3]: http://www.amazon.com/dp/B01CD5VC92/?tag=movinelect0e-20
[4]: http://www.amazon.com/dp/B010Q57T02/?tag=movinelect0e-20
[5]: http://www.amazon.com/dp/B01F1PSFY6/?tag=movinelect0e-20
[6]: http://amzn.to/293kPqX
[7]: http://amzn.to/290syFY
[8]: http://amzn.to/290syFY
[9]: http://amzn.to/290syFY
[10]: http://amzn.to/290syFY
[11]: http://amzn.to/293kPqX
[12]: https://www.raspberrypi.org/downloads/noobs/
[13]: https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
[14]: https://www.iterm2.com/
[15]: https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/
[16]: https://www.raspberrypi.org/documentation/linux/usage/rc-local.md
[17]: http://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/
[18]: http://bit.ly/1MVVtZi
[19]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
[20]: https://itunes.apple.com/us/app/filebrowser-access-files-on/id364738545?mt=8&uo=4&at=11lqkH
[21]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
[22]: https://en.m.wikipedia.org/wiki/Nohup
[23]: https://itunes.apple.com/us/app/remoter-pro-vnc-ssh-rdp/id519768191?mt=8&uo=4&at=11lqkH
[24]: https://getsync.com/

View File

@ -0,0 +1,129 @@
Twitter背后的基础设施效率与优化
===========
过去我们曾经发布过一些关于 [Finagle](https://twitter.github.io/finagle/) , [Manhattan](https://blog.twitter.com/2014/manhattan-our-real-time-multi-tenant-distributed-database-for-twitter-scale) 这些项目的文章,还写过一些针对大型事件活动的架构优化的文章,例如天空之城,超级碗, 2014 世界杯,全球新年夜庆祝活动等。在这篇基础设施系列文章中,我主要聚焦于 Twitter 的一些关键设施和组件。我也会写一些我们在系统的扩展性,可靠性,效率性方面的做过的改进,例如我们基础设施的历史,遇到过的挑战,学到的教训,做过的升级,以及我们现在前进的方向等等。
> 天空之城2013年8月2日宫崎骏的《天空之城》在NTV迎来其第14次电视重播剧情发展到高潮之时Twitter的TPSTweets Per Second也被推上了新的高度——143,199 TPS是平均值的25倍这个记录保持至今 -- 译者注。
### 数据中心的效率优化
#### 历史
当前Twitter硬件和数据中心的规模已经超过大多数公司。但达到这样的规模不是一蹴而就的系统是随着软硬件的升级优化一步步成熟起来的过程中我们也曾经犯过很多错误。
有一个时期,我们的系统故障不断。软件问题,硬件问题,甚至底层设备问题不断爆发,常常导致系统运营中断。随着 Twitter 在客户、服务、媒体上的影响力不断扩大,构建一个高效、可靠的系统来提供服务成为我们的战略诉求。
> Twitter系统故障的界面被称为失败鲸Fail Whale如下图 -- 译者注
![Fail Whale](https://upload.wikimedia.org/wikipedia/en/d/de/Failwhale.png)
#### 挑战
一开始,我们的软件是直接安装在服务器,这意味着软件可靠性依赖硬件,电源、网络以及其他的环境因素都是威胁。这种情况下,如果要增加容错能力,就需要统筹考虑物理设备和在上面运行的服务。
最早采购数据中心方案的时候,我们都还是菜鸟,对于站点选择、运营和设计都非常不专业。我们先直接租用主机,业务增长后我们改用主机托管。早期遇到的问题主要是因为设备故障、数据中心设计问题、维护问题以及人为操作失误。我们也在持续迭代我们的硬件设计,从而增强硬件和数据中心的容错性。
服务中断的原因有很多,其中硬件故障常发生在服务器、机架交换机、核心交换机这些地方。举一个我们曾经犯过的错误,硬件团队最初在设计服务器的时候,认为双路电源对减少供电问题的意义不大 -- 他们真的就移除了一块电源。然而数据中心一般给机架提供两路供电来提高冗余性,防止电网故障传导到服务器,而这需要两块电源。最终我们不得不在机架上增加了一个 ATS 单元AC transfer switch 交流切换开关)来接入第二路供电。
提高系统的可靠性靠的就是这样的改进,给网络、供电甚至机房增加冗余,从而将影响控制到最小范围。
#### 我们学到的教训以及技术的升级、迁移和选型
我们学到的第一个教训就是要先建模,将可能出故障的地方(例如建筑的供电和冷却系统、硬件、光线网络等)和运行在上面的服务之间的依赖关系弄清楚,这样才能更好地分析,从而优化设计提升容错能力。
我们增加了更多的数据中心提升地理容灾能力,减少自然灾害的影响。而且这种站点隔离也降低了软件的风险,减少了例如软件部署升级和系统故障的风险。这种多活的数据中心架构提供了代码灰度发布的能力,减少代码首次上线时候的影响。
我们设计新硬件使之能够在更高温度下正常运行,数据中心的能源效率因此有所提升。
#### 下一步工作
随着公司的战略发展和运营增长,我们在不影响我们的最终用户的前提下,持续不断改进我们的数据中心。下一步工作主要是在当前能耗和硬件的基础上,通过维护和优化来提升效率。
### 硬件的效率优化
#### 历史和挑战
我们的硬件工程师团队刚成立的时候只能测试市面上现有硬件,而现在我们能自己定制硬件以节省成本并提升效率。
Twitter 是一个很大的公司,它对硬件的要求对任何团队来说都是一个不小的挑战。为了满足整个公司的需求,我们的首要工作是能检测并保证购买的硬件的品质。团队重点关注的是性能和可靠性这两部分。对于硬件我们会做系统性的测试来保证其性能可预测,保证尽量不引入新的问题。
随着我们一些关键组件的负荷越来越大(如 Mesos , Hadoop , Manhattan , MySQL 等),市面上的产品已经无法满足我们的需求。同时供应商提供的一些高级服务器功能,例如 Raid 管理或者电源热切换等,可靠性提升很小,反而会拖累系统性能而且价格高昂,例如一些 Raid 控制器价格高达系统总报价的三分之一,还拖累了 SSD 的性能。
那时,我们也是 MySQL 数据库的一个大型用户。SASSerial Attached SCSI串行连接 SCSI )设备的供应和性能都有很大的问题。我们大量使用 1 u 的服务器,它的驱动器和回写缓存一起也只能支撑每秒 2000 次顺序 IO。为了获得更好的效果我们只得不断增加 CPU 核心数并加强磁盘能力。我们那时候找不到更节省成本的方案。
后来随着我们对硬件需求越来越大,我们成立了一个硬件团队,从而自己来设计更便宜更高效的硬件。
#### 关键技术变更与选择
我们不断的优化硬件相关的技术,下面是我们采用的新技术和自研平台的时间轴。
- 2012 - 采用 SSD 作为我们 MySQL 和 Key-Value 数据库的主要存储。
- 2013 - 我们开发了第一个定制版 Hadoop 工作站,它现在是我们主要的大容量存储方案。
- 2013 - 我们定制的解决方案应用在 Mesos 、 TFE Twitter Front-End )以及缓存设备上。
- 2014 - 我们定制的 SSD Key-Value 服务器完成开发。
- 2015 - 我们定制的数据库解决方案完成开发。
- 2016 - 我们开发了一个 GPU 系统来做模糊推理和训练机器学习。
#### 学到的教训
硬件团队的工作本质是通过做取舍来优化TCO总体拥有成本最终达到达到降低 CAPEX资本支出和 OPEX运营支出的目的。概括来说服务器降成本就是
1. 删除无用的功能和组件
2. 提升利用率
Twitter 的设备总体来说有这四大类:存储设备、计算设备、数据库和 GPU 。 Twitter 对每一类都定义了详细的需求,让硬件工程师更针对性地设计产品,从而优化掉那些用不到或者极少用的冗余部分。例如,我们的存储设备就专门为 Hadoop 优化,设备的购买和运营成本相比于 OEM 产品降低了 20% 。同时,这样做减法还提高了设备的性能和可靠性。同样的,对于计算设备,硬件工程师们也通过移除无用的特性获得了效率提升。
一个服务器可以移除的组件总是有限的,我们很快就把能移除的都扔掉了。于是我们想出了其他办法,例如在存储设备里,我们认为降低成本最好的办法是用一个节点替换多个节点,并通过 Aurora/Mesos 来管理任务负载。这就是我们现在正在做的东西。
对于这个我们自己新设计的服务器,首先要通过一系列的标准测试,然后会再做一系列负载测试,我们的目标是一台新设备至少能替换两台旧设备。大多数的提升都比较简单,例如增加 CPU 的进程数,同时我们的测试也比较出新 CPU 的 单线程能力提高了 20~50% ,对应能耗降低了 25% ,这都是我们测试环节需要做的工作。
这个新设备首次部署的时候,监控发现新设备只能替换 1.5 台旧设备,这比我们的目标低了很多。对性能数据检查后发现,我们之前新硬件的部分指标是错的,而这正是我们在做性能测试需要发现的问题。
对此我们硬件团队开发了一个模型,用来预测在不同的硬件配置下当前 Aurora 任务的打包效率。这个模型正确的预测了新旧硬件的性能比例。模型还指出了我们一开始没有考虑到的存储需求,并因此建议我们增加 CPU 核心数。另外,它还预测,如果我们修改内存的配置,那系统的性能还会有较大提高。
硬件配置的改变都需要花时间去操作,所以我们的硬件工程师们就首先找出几个关键痛点。例如我们和站点工程团队一起调整任务顺序来降低存储需求,这种修改很简单也很有效,新设备可以代替 1.85 个旧设备了。
为了更好地优化效率,我们对新硬件的配置做了修改,扩大了内存和磁盘容量就将 CPU 利用率提高了20% ,而这只增加了非常小的成本。同时我们的硬件工程师也和生产的伙伴一起优化发货顺序来降低货运成本。后续的观察发现我们自己的新设备实际上可以代替 2.4 台旧设备,这个超出了预定的目标。
### 从裸设备迁移到 mesos 集群
直到2012年为止软件团队在 Twitter 开通一个新服务还需要自己操心硬件:配置硬件的规格需求,研究机架尺寸,开发部署脚本以及处理硬件故障。同时,系统中没有所谓的“服务发现”机制,当一个服务需要调用一个另一个服务时候,需要读取一个 YAML 配置文件,这个配置文件中有目标服务对应的主机 IP 和端口信息(端口信息是由一个公共 wiki 页面维护的。随着硬件的替换和更新YAML 配置文件里的内容也会不断的编辑更新。每次更新都需要花几个小时甚至几天来重启在各个服务,从而将新配置刷新到所有服务的缓存里,所以我们只能尽量一次增加多个配置并且按次序分别重启。我们经常遇到重启过程中 cache 不一致导致的问题,因为有的主机在使用旧的配置有的主机在用新的。有时候一台主机的异常(例如它正在重启)会导致整个站点都无法正常工作。
在 2012/2013 年的时候Twitter 开始尝试两个新事物:服务发现(来自 ZooKeeper 集群和 Finagle 核心模块中的一个库)和 Mesos包括基于 Mesos 的一个自研的计划任务框架 Aurora ,它现在也是 Apache 基金会的一个项目)。
服务发现功能意味着不需要再维护一个静态 YAML 主机列表了。服务或者在启动后主动注册,或者自动被 mesos 接入到一个“服务集”(就是一个 ZooKeeper 中的 znode 列表,包含角色、环境和服务名信息)中。任何想要访问这个服务的组件都只需要监控这个路径就可以实时获取到一个正在工作的服务列表。
现在我们通过 Mesos/Aurora ,而不是使用脚本(我们曾经是 Capistrano 的重度用户)来获取一个主机列表、分发代码并规划重启任务。现在软件团队如果想部署一个新服务,只需要将软件包上传到一个叫 Packer 的工具上(它是一个基于 HDFS 的服务),再在 Aurora 配置上描述文件(需要多少 CPU ,多少内存,多少个实例,启动的命令行代码),然后 Aurora 就会自动完成整个部署过程。 Aurora 先找到可用的主机,从 Packer 下载代码,注册到“服务发现”,最后启动这个服务。如果整个过程中遇到失败(硬件故障、网络中断等等), Mesos/Aurora 会自动重选一个新主机并将服务部署上去。
#### Twitter 的私有 PaaS 云平台
Mesos/Aurora 和服务发现这两个功能给我们带了革命性的变化。虽然在接下来几年里,我们碰到了无数 bug ,伤透了无数脑筋,学到了分布式系统里的无数教训,但是这套架还是非常赞的。以前大家一直忙于处理硬件搭配和管理,而现在,大家只需要考虑如何优化业务以及需要多少系统能力就可以了。同时,我们也从根本上解决了 CPU 利用率低的问题,以前服务直接安装在服务器上,这种方式无法充分利用服务器资源,任务协调能力也很差。现在 Mesos 允许我们把多个服务打包成一个服务包,增加一个新服务只需要修改硬件配额,再改一行配置就可以了。
在两年时间里,多数“无状态”服务迁移到了 Mesos 平台。一些大型且重要的服务(包括我们的用户服务和广告服务)是最先迁移上去的。因为它们的体量巨大,所以他们从这些服务里获得的好处也最多。
我们一直在不断追求效率提升和架构优化的最佳实践。我们会定期去测试公有云的产品,和我们自己产品的 TCO 以及性能做对比。我们也拥抱公有云的服务,事实上我们现在正在使用公有云产品。最后,这个系列的下一篇将会主要聚焦于我们基础设施的体量方面。
特别感谢 Jennifer Fraser, David Barr, Geoff Papilion, Matt Singer, Lam Dong 对这篇文章的贡献。
--------------------------------------------------------------------------------
via: https://blog.twitter.com/2016/the-infrastructure-behind-twitter-efficiency-and-optimization?utm_source=webopsweekly&utm_medium=email
作者:[mazdakh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twitter.com/intent/user?screen_name=mazdakh
[1]: https://twitter.com/jenniferfraser
[2]: https://twitter.com/davebarr
[3]: https://twitter.com/gpapilion
[4]: https://twitter.com/lamdong

View File

@ -0,0 +1,54 @@
TaskwarriorLinux下一个很棒的命令行TODO工具
====
Taskwarrior 是 Ubuntu/Linux 下一个简单直接的基于命令行的 TODO 工具。这个开源软件是我曾用过的最简单的[基于命令行的工具][4]之一。Taskwarrior 可以帮助你更好地安排自己的事务,而不必安装笨重的新工具,那样有时反而违背了 TODO 工具的初衷。
![](https://2.bp.blogspot.com/-pQnRlOUNIxk/V9cuc3ytsBI/AAAAAAAAKHs/yYxyiAk4PwMIE0HTxlrm6arWOAPcBRRywCLcB/s1600/taskwarrior-todo-app.png)
### Taskwarrior一个简单的、基于命令行、帮你完成任务的 TODO 工具
Taskwarrior 是一个开源、跨平台、基于命令行的 TODO 工具,它帮你在终端中管理你的 to-do 列表。这个工具让你可以轻松地添加任务、展示列表、移除任务,而且它就在默认软件仓库中,不用安装新的 PPA。在 Ubuntu 16.04 LTS 或者相似的发行版中,可以在终端中按照如下步骤安装 Taskwarrior
```
sudo apt-get install task
```
一个简单的使用如下:
```
$ task add Read a book
Created task 1.
$ task add priority:H Pay the bills
Created task 2.
```
我使用上面截图中的同样一个例子。是的你可以设置优先级H、L或者M。并且你可以使用task或者task next命令来查看你最新创建的to-do列表。比如
```
$ task next
ID Age P Description Urg
-- --- - -------------------------------- ----
2 10s H Pay the bills 6
1 20s Read a book 0
```
完成之后,你可以使用 task 1 done 或者 task 2 done 来把任务标记为完成并从列表中清除(示例见下面的译注)。[可以在这里][1]找到更加全面的命令和使用案例。同样Taskwarrior 是跨平台的,这意味着不管你用什么平台,都可以找到一个[满足你需求][2]的版本。如果你需要的话,这里甚至有[一个安卓版][3]。用得开心!
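译注:下面是把任务标记为完成时大致的输出(以上面创建的任务 1 为例,实际输出可能因 Taskwarrior 版本不同而略有差异):
```
$ task 1 done
Completed task 1 'Read a book'.
Completed 1 task.
```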
--------------------------------------------------------------------------------
via: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+techdrivein+%28Tech+Drive-in%29
作者:[Manuel Jose ][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+techdrivein+%28Tech+Drive-in%29
[1]: https://taskwarrior.org/docs/
[2]: https://taskwarrior.org/download/
[3]: https://taskwarrior.org/news/news.20160225.html
[4]: http://www.techdrivein.com/search/label/Terminal

View File

@ -0,0 +1,65 @@
是时候合并 LibreOffice 和 OpenOffice 了
==========
![](http://tr2.cbsistatic.com/hub/i/2016/09/14/2e91089b-7ebd-4579-bf8f-74c34d1a94ce/e7e9c8dd481d8e068f2934c644788928/openofficedeathhero.jpg)
先说下 OpenOffice。可能你已经无数次地看到说 Apache OpenOffice 即将终止。上一个稳定版本是 4.1.2 (发布于 2015 年 10 月),而最近的一个严重安全漏洞用了一个月才打上补丁。编码人员的缺乏使得开发进展缓慢。然后,可能是最糟糕的消息了:这个项目建议用户切换到 [MS Office](https://products.office.com/)(或 [LibreOffice](https://www.libreoffice.org/download/))。
丧钟为谁而鸣丧钟为你而鸣OpenOffice。
我想说些可能会惹恼一些人的话。你准备好了吗?
OpenOffice 的终止对开源和用户来说都将是件好事。
让我解释一下。
### 一个分支统治所有
当 LibreOffice 从 OpenOffice 分支出来后我们看到了另一个实例分支不只在原始基础上进行改进而且大幅超越了它。LibreOffice 一举成功。所有之前预装 OpenOffice 的 Linux 发行版都迁移到了这个新项目。LibreOffice 从起跑线突然冲出,并迅速迈出了一大步。更新以极快的速度发布,改善内容丰富而重要。
不久后OpenOffice 就被开源社区排在了后面。当 2011 年 Oracle 决定终止这个项目并把代码捐赠给 Apache 项目时,这种情况自然更加恶化了。从此 OpenOffice 艰难前进,然后把我们带到了现在这种局面。一个生机勃勃的 LibreOffice 和一个艰难的、缓慢的 OpenOffice。
但我认为在这个相当昏暗的隧道末尾有一丝曙光。
### 合并他们
这听起来可能很疯狂,但我认为是时候合并 LibreOffice 和 OpenOffice 了。是的,我知道很可能有政治问题和自尊意识,但我认为合并成一个会更好。合并的好处很多。我一时能想到的是:
- 把 MS Office 过滤器整合起来OpenOffice 在更好地导入某些 MS Office 文件上功能很强(而众所周知 LibreOffice 正在改进,但时好时坏)
- LibreOffice 有更多开发者:尽管 OpenOffice 的开发者数量不多,但也无疑会增加到合并后的项目。
- 结束混乱:很多用户以为 OpenOffice 和 LibreOffice 是同一个东西。有些甚至不知道 LibreOffice 存在。这将终结那些混乱。
- 合并他们的用户量OpenOffice 和 LibreOffice 各自拥有大量用户。联合后,他们将是个巨大的力量。
### 宝贵机遇
OpenOffice 的终止实际上会成为整个开源办公套件行业的一个宝贵机遇。为什么?我想借此说说一些我认为早就该做的事情。如果 OpenOffice 和 LibreOffice 集中它们的力量,比较它们的代码并合并,它们之后就可以做一些早就需要的改组工作,不仅是整体的内部工作,也包括界面。
我们得面对现实LibreOffice 和(相关的) OpenOffice 的用户界面都是过时的。当我安装 LibreOffice 5.2.1.2 时,工具栏绝对是个灾难(见图 A
### 图 A
![](http://tr2.cbsistatic.com/hub/i/2016/09/14/cc5250df-48cd-40e3-a083-34250511ffab/c5ac8eb1e2cb12224690a6a3525999f0/openofficea.jpg)
#### LibreOffice 默认工具栏显示
尽管我支持和关心并且日常使用LibreOffice但事实已经再清楚不过了界面需要完全重写。我们正在使用的是 90 年代末/ 2000 年初的复古界面,它必须得改变了。当新用户第一次打开 LibreOffice 时他们会被淹没在大量按钮、图标和工具栏中。Ubuntu Unity 的平视显示Head up Display简称 HUD帮助解决了这个问题但那并不适用于其它桌面和发行版。当然有经验的用户知道在哪里找什么甚至定制工具栏以满足特殊的需要但对新用户或普通用户那种界面是个噩梦。现在是做出改变的一个好时机。引入 OpenOffice 最后残留的开发者并让他们加入到改善界面的战斗中。借助于整合 OpenOffice 额外的导入过滤器和现代化的界面LibreOffice 终能在家庭和办公桌面上都引起一些轰动。
### 这会真的发生吗?
这需要发生。将会吗?我不知道。但即使掌权者决定用户界面并不需要重组(这会是个失误),合并 OpenOffice 仍是前进的一大步。合并两者将带来开发的更专注,更好的推广,公众更少的困惑。
我知道这可能看起来有悖于开源的核心精神,但合并 LibreOffice 和 OpenOffice 将能联合两者的力量,而且可能会摆脱弱点。
在我看来,这是双赢的。
--------------------------------------------------------------------------------
via: http://www.techrepublic.com/article/its-time-to-make-libreoffice-and-openoffice-one-again/
作者:[Jack Wallen ][a]
译者:[bianjp](https://github.com/bianjp)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.techrepublic.com/search/?a=jack%2Bwallen

View File

@ -0,0 +1,84 @@
如何在 Ubuntu 16.04 和 Fedora 22-24 上安装最新的 XFCE 桌面?
====
Xfce 是一款针对 Linux 系统的现代化轻量级开源桌面环境,它在其它类 Unix 系统上,比如 Mac OS X、Solaris、*BSD 等,也能工作得很好。它非常快,并且简单优雅的用户界面使它对用户十分友好。
在服务器上安装一个桌面环境有时是很有用的,因为某些应用程序可能需要一个桌面界面才能高效、可靠地管理。Xfce 的一个显著优点是它的系统资源占用率很低,比如内存消耗很低,因此,如果服务器需要一个桌面环境的话它会是首选。
### XFCE 桌面的功能特性
另外,它的一些显著的组成部分和功能特性列在下面:
- Xfwm 窗口管理器
- Thunar 文件管理器
- 用户会话管理器:用来处理用户登录、电源管理等
- 桌面管理器:用来设置背景图片、桌面图标等
- 应用管理器
- 高度的可扩展性,还可以通过插件增加许多额外的功能特性
Xfce 的最新稳定发行版是 Xfce 4.12, 它所有的功能特性和区别于旧版本的变化都列在了这儿。
#### 在Ubuntu 16.04 上安装 Xfce 桌面
许多 Linux 发行版,比如 Xubuntu、Manjaro、OpenSUSE、Fedora Xfce Spin、Zenwalk 等,都提供它们自己的 Xfce 桌面安装包,但你也可以像下面这样安装最新的版本。
```
$ sudo apt update
$ sudo apt install xfce4
```
等待安装进程结束,然后退出当前会话,或者你也可以选择重启系统。在登录界面选择 Xfce 桌面,然后像下面的屏幕截图这样登录:
![](http://www.tecmint.com/wp-content/uploads/2016/09/Select-Xfce-Desktop-at-Login.png)
![](http://www.tecmint.com/wp-content/uploads/2016/09/XFCE-Desktop.png)
#### 在 Fedora 22-24 上安装 Xfce 桌面
如果你想在现有的 Fedora 发行版上安装 Xfce 桌面,那么你可以使用下面展示的 yum 或 dnf 命令。
```
-------------------- 在 Fedora 22 上 --------------------
# yum install @xfce
-------------------- 在 Fedora 23-24 上 --------------------
# dnf install @xfce-desktop-environment
```
安装 Xfce 以后,你可以从会话菜单选择 xfce 登录或者重启系统。
![](http://www.tecmint.com/wp-content/uploads/2016/09/Select-Xfce-Desktop-at-Fedora-Login.png)
![](http://www.tecmint.com/wp-content/uploads/2016/09/Install-Xfce-Desktop-in-Fedora.png)
如果你不再想要 Xfce 桌面留在你的系统上,那么可以使用下面的命令来卸载它:
```
-------------------- 在 Ubuntu 16.04 上 --------------------
$ sudo apt purge xfce4
$ sudo apt autoremove
-------------------- 在 Fedora 22 上 --------------------
# yum remove @xfce
-------------------- 在 Fedora 23-24 上 --------------------
# dnf remove @xfce-desktop-environment
```
在这个简单的入门指南中,我们讲解了安装最新版 Xfce 桌面的步骤,我相信这很容易掌握。如果一切顺利,你可以好好享受 Xfce 这个 [Linux 系统上最好的桌面环境][1]之一。
如果你有任何问题或想法,可以通过下面的评论区反馈给我们,并且记得持续关注 Tecmint。
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[Aaron Kili ][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/best-linux-desktop-environments/

View File

@ -0,0 +1,159 @@
如何使用 Awk 语言写脚本 - Part 13
====
从 Awk 系列开始直到第 12 部分,我们都是在命令行或者脚本文件写一些简短的 Awk 命令和程序。
然而 Awk 和 Shell 一样也是一个解释语言。通过从开始到现在的一系列的学习,你现在能写可以执行的 Awk 脚本了。
和写 shell 脚本差不多Awk 脚本以下面这一行开头:
```
#! /path/to/awk/utility -f
```
例如在我的系统上Awk 工具位于 /usr/bin/awk所以我的 Awk 脚本以如下内容作为开头:
```
#! /usr/bin/awk -f
```
上面一行的解释如下:
```
#! 称为 Shebang指明使用哪个解释器来执行脚本中的命令
/usr/bin/awk –解释器
-f 解释器选项,用来指定读取的程序文件
```
说是这么说,现在从下面的简单例子开始,让我们深入研究一些可执行的 Awk 脚本。使用你最喜欢的编辑器创建一个新文件,像下面这样:
```
$ vi script.awk
```
然后把下面代码粘贴到文件中:
```
#!/usr/bin/awk -f
BEGIN { printf "%s\n","Writing my first Awk executable script!" }
```
保存文件后退出,然后执行下面命令,使得脚本可执行:
```
$ chmod +x script.awk
```
然后,执行它:
```
$ ./script.awk
```
输出样例:
```
Writing my first Awk executable script!
```
一个严格的程序员一定会问:“注释呢?”。是的,你可以在 Awk 脚本中包含注释。在代码中写注释是一种良好的编程习惯。
它有利于其它程序员阅读你的代码,理解程序文件或者脚本中每一部分的功能。
所以,你可以像下面这样在脚本中增加注释:
```
#!/usr/bin/awk -f
#This is how to write a comment in Awk
#using the BEGIN special pattern to print a sentence
BEGIN { printf "%s\n","Writing my first Awk executable script!" }
```
接下来我们看一个读文件的例子。我们想从帐号文件 /etc/passwd 中查找一个叫 aaronkilik 的用户,然后像下面这样打印用户名,用户的 ID用户的 GID (译者注:组 ID)
下面是我们脚本文件的内容,文件名为 second.awk。
```
#! /usr/bin/awk -f
#use BEGIN sepecial character to set FS built-in variable
BEGIN { FS=":" }
#search for username: aaronkilik and print account details
/aaronkilik/ { print "Username :",$1,"User ID :",$3,"User GID :",$4 }
```
保存文件后退出,使得脚本可执行,然后像下面这样执行它:
```
$ chmod +x second.awk
$ ./second.awk /etc/passwd
```
输出样例
```
Username : aaronkilik User ID : 1000 User GID : 1000
```
在下面最后一个例子中,我们将使用 do while 语句来打印数字 0-10
下面是我们脚本文件的内容,文件名为 do.awk。
```
#! /usr/bin/awk -f
#printing from 0-10 using a do while statement
#do while statement
BEGIN {
#initialize a counter
x=0
do {
print x;
x+=1;
}
while(x<=10)
}
```
保存文件后,像之前操作一样使得脚本可执行。然后,运行它:
```
$ chmod +x do.awk
$ ./do.awk
```
输出样例
```
0
1
2
3
4
5
6
7
8
9
10
```
### 总结
我们已经到达这个精彩的 Awk 系列的最后,我希望你从整个 13 部分中学到了很多知识,把这些当作你 Awk 编程语言的入门指导。
我一开始就提到过Awk 是一个完整的文本处理语言,所以你可以学习很多 Awk 编程语言的其它方面,例如环境变量,数组,函数(内置的或者用户自定义的),等等。
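译注:作为一个简单的延伸示例(非原文内容),下面的脚本演示了用户自定义函数和数组的基本用法:
```
#! /usr/bin/awk -f
#用户自定义函数:返回两个数中较大的一个
function max(a, b) {
	return a > b ? a : b
}
BEGIN {
	#用数组存放几个数字,下标从 1 开始
	nums[1] = 4; nums[2] = 9; nums[3] = 2
	biggest = nums[1]
	for (i = 2; i <= 3; i++)
		biggest = max(biggest, nums[i])
	print "The biggest number is", biggest
}
```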
Awk 编程还有其它内容需要学习和掌握,所以在文末我提供了一些重要的在线资源的链接,你可以利用他们拓展你的 Awk 编程技能。但这不是必须的,你也可以阅读一些关于 Awk 的书籍。
如果你有任何想要分享的想法或者问题,请在下面留言。记得保持关注 Tecmint会有更多的精彩内容。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/write-shell-scripts-in-awk-programming/
作者:[Aaron Kili][a]
译者:[chunyang-wen](https://github.com/chunyang-wen)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/

View File

@ -0,0 +1,247 @@
如何使用 Awk 中的流控制语句 - Part 12
====
回顾从 Awk 系列最开始到现在我们所讲的所有关于 Awk 的例子,你会发现不同例子中的所有命令都是顺序执行的,也就是一个接一个的执行。但是在某些场景下,我们可能希望根据一些条件来执行一些文本过滤,这个时候流控制语句就派上用场了。
![](http://www.tecmint.com/wp-content/uploads/2016/08/Use-Flow-Control-Statements-in-Awk.png)
Awk 包含很多的流控制语句,包括:
- if-else 语句
- for 语句
- while 语句
- do-while 语句
- break 语句
- continue 语句
- next 语句
- nextfile 语句
- exit 语句
但是在这个系列中我们将详细解释if-elseforwhiledo-while 语句。关于如何使用 next 语句,如果你们记得的话,我们已经在 Awk 系列的第6部分介绍过了。
### 1. if-else 语句
if 语句的语法和 shell 里面的 if 语句类似:
```
if (condition1) {
actions1
}
else {
actions2
}
```
上面的语法中condition1 和 condition2 是 Awk 的表达式actions1 和 actions2 是当相应的条件满足时执行的 Awk 命令。
当 condition1 满足时,意味着它的值是 true此时会执行 actions1if 语句退出否则译注condition1 为 false执行 actions2。
if 语句可以扩展成如下的 if-else_if-else
```
if (condition1){
actions1
}
else if (conditions2){
actions2
}
else{
actions3
}
```
上面例子中,如果 condition1 为 true执行 actions1if 语句退出;否则对 condition2 求值,如果值为 true那么执行 actions2if 语句退出。然而如果 condition2 是 false那么会执行 actions3 退出 if语句。
下面是一个使用 if 语句的例子我们有一个存储用户和他们年龄列表的文件users.txt。
我们想要打印用户的名字以及他们的年龄是大于 25 还是小于 25。
```
aaronkilik@tecMint ~ $ cat users.txt
Sarah L 35 F
Aaron Kili 40 M
John Doo 20 M
Kili Seth 49 M
```
我们可以写一个简短的 shell 脚本来执行我们上面的任务,下面是脚本的内容:
```
#!/bin/bash
awk ' {
if ( $3 <= 25 ){
print "User",$1,$2,"is less than 25 years old." ;
}
else {
print "User",$1,$2,"is more than 25 years old" ;
}
}' ~/users.txt
```
保存文件后退出,执行下面命令让脚本可执行,然后执行:
```
$ chmod +x test.sh
$ ./test.sh
```
输出样例
```
User Sarah L is more than 25 years old
User Aaron Kili is more than 25 years old
User John Doo is less than 25 years old.
User Kili Seth is more than 25 years old
```
### 2. for 语句
如果你想循环执行一些 Awk 命令,那么 for 语句十分合适,它的语法如下:
这里只是简单的定义一个计数器来控制循环的执行。首先你要初始化那个计数器 counter然后根据某个条件判断是否执行如果该条件为 true 则执行,最后增加计数器。当计数器不满足条件时则终止循环。
```
for ( counter-initialization; test-condition; counter-increment ){
actions
}
```
下面的 Awk 命令利用打印数字 0-10 来说明 for 语句是怎么工作的。
```
$ awk 'BEGIN{ for(counter=0;counter<=10;counter++){ print counter} }'
```
输出样例
```
0
1
2
3
4
5
6
7
8
9
10
```
### 3. while 语句
传统的 while 语句语法如下:
```
while ( condition ) {
actions
}
```
上面的 condition 是 Awk 表达式actions 是当 condition 为 true 时执行的 Awk命令。
下面是仍然用打印数字 0-10 来解释 while 语句的用法:
```
#!/bin/bash
awk ' BEGIN{ counter=0 ;
while(counter<=10){
print counter;
counter+=1 ;
}
}
'
```
保存文件,让文件可执行,然后执行:
```
$ chmod +x test.sh
$ ./test.sh
```
输出样例
```
0
1
2
3
4
5
6
7
8
9
10
```
### 4. do-while 语句
这个是上面的 while 语句语法的一个变化,其语法如下:
```
do {
actions
}
while (condition)
```
二者的区别是,在 do-while 中Awk 的命令在条件求值前先执行。我们使用 while 语句中同样的例子来解释 do-while 的使用,将 test.sh 脚本中的 Awk 命令做如下更改:
```
#!/bin/bash
awk ' BEGIN{ counter=0 ;
do{
print counter;
counter+=1 ;
}
while (counter<=10)
}
'
```
修改脚本后,保存退出。让脚本可执行,然后按如下方式执行:
```
$ chmod +x test.sh
$ ./test.sh
```
输出样例
```
0
1
2
3
4
5
6
7
8
9
10
```
### 结论
前面指出,这并不是一个 Awk 流控制的完整介绍。在 Awk 中还有其它几个流控制语句。
不管怎样Awk 系列的此部分给你一个如何基于某些条件来控制 Awk 命令执行的基本概念。
你可以接着通过仔细看看其余的流控制语句来获得关于这个主题的更多知识。最后Awk 系列的下一部分,我们将会介绍如何写 Awk 脚本。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/use-flow-control-statements-with-awk-command/
作者:[Aaron Kili][a]
译者:[chunyang-wen](https://github.com/chunyang-wen)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/

View File

@ -0,0 +1,94 @@
如何用四个简单的步骤加速 LibreOffice
====
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-featured-2.jpg)
对于许多开源软件的粉丝和支持者来说LibreOffice 是 Microsoft Office 最好的替代品,而且在最近的几个版本中它有了明显的巨大改进。然而,它的启动速度和响应能力仍有提升空间。有一些方法可以缩短 LibreOffice 的启动时间并改善它的整体性能。
在下面的段落里,我将会展示一些实用性的步骤,你可以通过它们来改善 LibreOffice 的加载时间和响应能力。
### 1. 增加每个对象和图像缓存的存储空间
这将可以通过分配更多的内存资源给图像缓存和对象来加快程序的加载时间。
1. 启动 LibreOffice Writer (或者 Calc)
2. 点击菜单栏上的 “工具 -> 选项”,或者按键盘上的快捷键 “Alt + F12”。
3. 点击 “LibreOffice” 下面的“内存”,然后把“LibreOffice 使用”增加到 128MB。
4. 同样的增加“每个对象的内存”到20MB。
5. 点击确定来保存你的修改。
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-step-1.png)
注意:你可以根据自己机器的性能把数值设置得比建议值的高一些或低一些。最好通过亲自体验来看看什么值能够让机器达到最佳性能。
### 2.启用 LibreOffice 的快速启动
如果你的机器上有足够大的 RAM随机存取存储器比如 4GB 或者更大,你可以启用“系统托盘快速启动”,从而让内存中的部分 LibreOffice 在打开新文件时能够快速反应。
在启用这个选项以后,你会清楚地看到在打开新文件时它的性能有了很大的提高。
1. 通过点击“工具 -> 选项”来打开选项对话框。
2. 在 “LibreOffice” 下面的侧边栏选择“内存”。
3. 勾选“系统托盘快速启动”复选框。
4. 点击“确定”来保存修改。
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-2.png)
一旦这个选项启用以后,你将会在你的系统托盘看到 LibreOffice 图标,可以选择来打开任何类型的文件。
### 3. 关闭 Java 运行环境
另一个加快 LibreOffice 加载时间和响应能力的简单方法是关闭 Java。
1. 同时按下 “Alt + F12” 打开选项对话框。
2. 在侧边栏里选择 “LibreOffice”,然后选择“高级”。
3. 取消勾选“使用 Java 运行环境”选项。
4. 点击“确定”来关闭对话框。
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-3.png)
如果你只使用 Writer 和 Calc那么关闭 Java 不会影响你正常使用,但如果你需要使用 LibreOffice Base 和一些其他的特性,那么你可能需要重新启用它。在那种情况,将会弹出一个框询问你是否希望再次打开它。
### 4. 减少使用撤销步骤
默认情况下LibreOffice 允许你对一个文件最多撤销 100 步修改。绝大多数用户根本用不到这么多撤销步骤,所以保留这么多撤销步骤是对内存资源的巨大浪费。
我建议减少撤销步骤到 20 次以下来为其他东西释放内存,但是这部分可以自由选择来满足你的需求。
1. 通过点击 “工具 -> 选项”来打开选项对话框。
2. 在 “LibreOffice” 下面的侧边栏,选择“内存”。
3. 在“撤销”下面把步骤数目改成最适合你的值。
4. 点击“确定”来保存修改。
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-5.png)
如果这些技巧帮助你加快了 LibreOffice 套件的加载速度,请在评论里告诉我们。同样,也请分享你知道的其它技巧来帮助其他人。
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/speed-up-libreoffice/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
作者:[Ayo Isaiah][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.maketecheasier.com/author/ayoisaiah/