mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-03-12 01:40:10 +08:00
commit
6a38ecd4df
@ -1,39 +1,40 @@
|
||||
|
||||
如何在 Linux 中使用 Fio 来测评硬盘性能
|
||||
======
|
||||
|
||||

|
||||
|
||||
Fio(Flexible I/O Tester) 是一款由 Jens Axboe 开发的用于测评和压力/硬件验证的[免费开源][1]的软件
|
||||
Fio(Flexible I/O Tester) 是一款由 Jens Axboe 开发的用于测评和压力/硬件验证的[自由开源][1]的软件。
|
||||
|
||||
它支持 19 种不同类型的 I/O 引擎 (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, 以及更多), I/O 优先级(针对较新的 Linux 内核),I/O 速度,复刻或线程任务,和其他更多的东西。它能够在块设备和文件上工作。
|
||||
它支持 19 种不同类型的 I/O 引擎(sync、mmap、libaio、posixaio、SG v3、splice、null、network、syslet、guasi、solarisaio 等)、I/O 优先级(针对较新的 Linux 内核)、I/O 速度、fork 的任务或线程任务等等。它能够在块设备和文件上工作。
|
||||
|
||||
Fio 接受一种非常简单易于理解的文本格式作为任务描述。软件默认包含了许多示例任务文件。 Fio 展示了所有类型的 I/O 性能信息,包括完整的 IO 延迟和百分比。
|
||||
Fio 接受一种非常简单易于理解的文本格式的任务描述。软件默认包含了几个示例任务文件。 Fio 展示了所有类型的 I/O 性能信息,包括完整的 IO 延迟和百分比。
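作为补充,下面给出一个假设的最小任务文件示意(各参数名取自 fio 的常见选项,数值仅为演示,并非软件自带的示例文件):

```
; random-read.fio —— 一个最小的 fio 任务文件示意
[global]
ioengine=libaio
direct=1
runtime=60

[randread-job]
rw=randread
bs=4k
size=512M
```

保存后可以用 `fio random-read.fio` 来运行它。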
|
||||
|
||||
它被广泛的应用在非常多的地方,包括测评、QA,以及验证用途。它支持 Linux 、 FreeBSD 、 NetBSD、 OpenBSD、 OS X、 OpenSolaris、 AIX、 HP-UX、 Android 以及 Windows。
|
||||
它被广泛地应用在非常多的地方,包括测评、QA,以及验证用途。它支持 Linux、FreeBSD、NetBSD、OpenBSD、OS X、OpenSolaris、AIX、HP-UX、Android 以及 Windows。
|
||||
|
||||
在这个教程,我们将使用 Ubuntu 16 ,你需要拥有这台电脑的 sudo 或 root 权限。我们将完整的进行安装和 Fio 的使用。
|
||||
在这个教程中,我们将使用 Ubuntu 16,你需要拥有这台电脑的 `sudo` 或 root 权限。我们将完整地演示 Fio 的安装和使用。
|
||||
|
||||
### 使用源码安装 Fio
|
||||
|
||||
我们要去克隆 Github 上的仓库。安装所需的依赖,然后我们将会从源码构建应用。首先,确保我们安装了 Git 。
|
||||
我们要去克隆 GitHub 上的仓库。安装所需的依赖,然后我们将会从源码构建应用。首先,确保我们安装了 Git 。
|
||||
|
||||
```
|
||||
sudo apt-get install git
|
||||
```
|
||||
|
||||
CentOS 用户可以执行下述命令:
|
||||
|
||||
```
|
||||
sudo yum install git
|
||||
```
|
||||
|
||||
现在,我们切换到 /opt 目录,并从 Github 上克隆仓库:
|
||||
现在,我们切换到 `/opt` 目录,并从 GitHub 上克隆仓库:
|
||||
|
||||
```
|
||||
cd /opt
|
||||
git clone https://github.com/axboe/fio
|
||||
```
|
||||
|
||||
你应该会看到下面这样的输出
|
||||
你应该会看到下面这样的输出:
|
||||
|
||||
```
|
||||
Cloning into 'fio'...
|
||||
@ -45,7 +46,7 @@ Resolving deltas: 100% (16251/16251), done.
|
||||
Checking connectivity... done.
|
||||
```
|
||||
|
||||
现在,我们通过在 opt 目录下输入下方的命令切换到 Fio 的代码目录:
|
||||
现在,我们通过在 `/opt` 目录下输入下方的命令切换到 Fio 的代码目录:
|
||||
|
||||
```
|
||||
cd fio
|
||||
@ -61,7 +62,7 @@ cd fio
|
||||
|
||||
### 在 Ubuntu 上安装 Fio
|
||||
|
||||
对于 Ubuntu 和 Debian 来说, Fio 已经在主仓库内。你可以很容易的使用类似 yum 和 apt-get 的标准包管理器来安装 Fio。
|
||||
对于 Ubuntu 和 Debian 来说, Fio 已经在主仓库内。你可以很容易的使用类似 `yum` 和 `apt-get` 的标准包管理器来安装 Fio。
|
||||
|
||||
对于 Ubuntu 和 Debian ,你只需要简单的执行下述命令:
|
||||
|
||||
@ -69,7 +70,8 @@ cd fio
|
||||
sudo apt-get install fio
|
||||
```
|
||||
|
||||
对于 CentOS/Redhat 你只需要简单执行下述命令:
|
||||
对于 CentOS/Redhat 你只需要简单执行下述命令。
|
||||
|
||||
在 CentOS ,你可能在你能安装 Fio 前需要去安装 EPEL 仓库到你的系统中。你可以通过执行下述命令来安装它:
|
||||
|
||||
```
|
||||
@ -124,23 +126,20 @@ Run status group 0 (all jobs):
|
||||
|
||||
Disk stats (read/write):
|
||||
sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
|
||||
|
||||
|
||||
```
|
||||
|
||||
### 执行随机读测试
|
||||
|
||||
我们将要执行一个随机读测试,我们将会尝试读取一个随机的 2GB 文件。
|
||||
|
||||
```
|
||||
|
||||
sudo fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=0 --size=512M --numjobs=4 --runtime=240 --group_reporting
|
||||
|
||||
|
||||
```
|
||||
|
||||
你应该会看到下面这样的输出
|
||||
```
|
||||
你应该会看到下面这样的输出:
|
||||
|
||||
```
|
||||
...
|
||||
fio-2.2.10
|
||||
Starting 4 processes
|
||||
@ -176,15 +175,13 @@ Run status group 0 (all jobs):
|
||||
|
||||
Disk stats (read/write):
|
||||
sda: ios=521587/871, merge=0/1142, ticks=96664/612, in_queue=97284, util=99.85%
|
||||
|
||||
|
||||
```
|
||||
|
||||
最后,我们想要展示一个简单的随机读-写测试来看一看 Fio 返回的输出类型。
|
||||
|
||||
### 读写性能测试
|
||||
|
||||
下述命令将会测试 USB Pen 驱动器 (/dev/sdc1) 的随机读写性能:
|
||||
下述命令将会测试 USB 闪存盘(`/dev/sdc1`)的随机读写性能:
|
||||
|
||||
```
|
||||
sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
|
||||
@ -213,8 +210,6 @@ Disk stats (read/write):
|
||||
sda: ios=774141/258944, merge=1463/899, ticks=748800/150316, in_queue=900720, util=99.35%
|
||||
```
|
||||
|
||||
We hope you enjoyed this tutorial and enjoyed following along, Fio is a very useful tool and we hope you can use it in your next debugging activity. If you enjoyed reading this post feel free to leave a comment of questions. Go ahead and clone the repo and play around with the code.
|
||||
|
||||
我们希望你能喜欢这个教程,并乐于跟随动手实践。Fio 是一个非常有用的工具,我们希望你能在下一次调试工作中用到它。如果你喜欢这篇文章,欢迎留下评论或问题。欢迎克隆它的仓库,并把玩其中的代码。
|
||||
|
||||
|
||||
@ -224,7 +219,7 @@ via: https://wpmojo.com/how-to-use-fio-to-measure-disk-performance-in-linux/
|
||||
|
||||
作者:[Alex Pearson][a]
|
||||
译者:[Bestony](https://github.com/bestony)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,58 @@
|
||||
老树发新芽:微服务
|
||||
======
|
||||
|
||||

|
||||
|
||||
如果我告诉你有这样一种软件架构,一个应用程序的组件通过基于网络的通讯协议为其它组件提供服务,我估计你可能会说它是 …
|
||||
|
||||
是的,它和你编程的年限有关。如果你从上世纪九十年代就开始了你的编程生涯,那么你肯定会说它是 <ruby>[面向服务的架构][1]<rt> Service-Oriented Architecture</rt></ruby>(SOA)。但是,如果你是个年青人,并且在云上获得初步的经验,那么,你将会说:“哦,你说的是 <ruby>[微服务][2]<rt>Microservices</rt></ruby>。”
|
||||
|
||||
你们都没错。如果想真正地了解它们的差别,你需要深入地研究这两种架构。
|
||||
|
||||
在 SOA 中,服务是一个功能,它是定义好的、自包含的、并且是不依赖上下文和其它服务的状态的功能。总共有两种服务。一种是消费者服务,它从另外类型的服务 —— 提供者服务 —— 中请求一个服务。一个 SOA 服务可以同时扮演这两种角色。
|
||||
|
||||
SOA 服务可以与其它服务交换数据。两个或多个服务也可以彼此之间相互协调。这些服务执行基本的任务,比如创建一个用户帐户、提供登录功能、或验证支付。
|
||||
|
||||
与其说 SOA 是模块化一个应用程序,还不如说它是把分布式的、独立维护和部署的组件,组合成一个应用程序。然后在服务器上运行这些组件。
|
||||
|
||||
早期版本的 SOA 使用面向对象的协议进行组件间通讯。例如,微软的 <ruby>[分布式组件对象模型][3]<rt> Distributed Component Object Model</rt></ruby>(DCOM) 和使用 <ruby>[通用对象请求代理架构][5]<rt>Common Object Request Broker Architecture</rt></ruby>(CORBA) 规范的 <ruby>[对象请求代理][4]<rt> Object Request Broker</rt></ruby>(ORB)。
|
||||
|
||||
用于消息服务的最新的版本,有 <ruby>[Java 消息服务][6]<rt> Java Message Service</rt></ruby>(JMS)或者 <ruby>[高级消息队列协议][7]<rt>Advanced Message Queuing Protocol</rt></ruby>(AMQP)。这些服务通过<ruby>企业服务总线<rt>Enterprise Service Bus</rt></ruby>(ESB) 进行连接。基于这些总线,来传递和接收可扩展标记语言(XML)格式的数据。
|
||||
|
||||
[微服务][2] 是一个架构样式,其中的应用程序以松散耦合的服务或模块组成。它适用于开发大型的、复杂的应用程序的<ruby>持续集成<rt>Continuous Integration</rt></ruby>/<ruby>持续部署<rt>Continuous Deployment</rt></ruby>(CI/CD)模型。一个应用程序就是一堆模块的汇总。
|
||||
|
||||
每个微服务提供一个应用程序编程接口(API)端点。它们通过轻量级协议连接,比如,<ruby>[表述性状态转移][8]<rt> REpresentational State Transfer</rt></ruby>(REST),或 [gRPC][9]。数据倾向于使用 <ruby>[JavaScript 对象标记][10]<rt> JavaScript Object Notation</rt></ruby>(JSON)或 [Protobuf][11] 来表示。
|
||||
|
||||
这两种架构都可以用于去替代以前老的整体式架构,整体式架构的应用程序被构建为单个的、自治的单元。例如,在一个客户机 —— 服务器模式中,一个典型的 Linux、Apache、MySQL、PHP/Python/Perl (LAMP) 服务器端应用程序将去处理 HTTP 请求、运行子程序、以及从底层的 MySQL 数据库中检索/更新数据。所有这些应用程序“绑”在一起提供服务。当你改变了任何一个东西,你都必须去构建和部署一个新版本。
|
||||
|
||||
使用 SOA,你可以只改变需要的几个组件,而不是整个应用程序。使用微服务,你可以做到一次只改变一个服务。使用微服务,你才能真正做到一个解耦架构。
|
||||
|
||||
微服务也比 SOA 更轻量级。不过 SOA 服务是部署到服务器和虚拟机上,而微服务是部署在容器中。协议也更轻量级。这使得微服务比 SOA 更灵活。因此,它更适合于要求敏捷性的电商网站。
|
||||
|
||||
说了这么多,到底意味着什么呢?微服务就是 SOA 在容器和云计算上的变种。
|
||||
|
||||
老式的 SOA 并没有离我们远去,而因为我们不断地将应用程序搬迁到容器中,所以微服务架构将越来越流行。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blogs.dxc.technology/2018/05/08/everything-old-is-new-again-microservices/
|
||||
|
||||
作者:[Cloudy Weather][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://blogs.dxc.technology/author/steven-vaughan-nichols/
|
||||
[1]:https://www.service-architecture.com/articles/web-services/service-oriented_architecture_soa_definition.html
|
||||
[2]:http://microservices.io/
|
||||
[3]:https://technet.microsoft.com/en-us/library/cc958799.aspx
|
||||
[4]:https://searchmicroservices.techtarget.com/definition/Object-Request-Broker-ORB
|
||||
[5]:http://www.corba.org/
|
||||
[6]:https://docs.oracle.com/javaee/6/tutorial/doc/bncdq.html
|
||||
[7]:https://www.amqp.org/
|
||||
[8]:https://www.service-architecture.com/articles/web-services/representational_state_transfer_rest.html
|
||||
[9]:https://grpc.io/
|
||||
[10]:https://www.json.org/
|
||||
[11]:https://github.com/google/protobuf/
|
@ -2,18 +2,16 @@
|
||||
============================================================
|
||||
|
||||

|
||||
>欧洲核子研究组织(简称 CERN)依靠开源技术处理大型强子对撞机生成的大量数据。ATLAS(超环面仪器,如图所示)是一种探测基本粒子的通用探测器。(图片来源:CERN)[经许可使用][2]
|
||||
|
||||
>欧洲核子研究组织(简称 CERN)依靠开源技术处理大型强子对撞机生成的大量数据。ATLAS(超环面仪器,如图所示)是一种探测基本粒子的通用探测器。(图片来源:CERN)
|
||||
|
||||
[CERN][3]
|
||||
|
||||
[CERN][6] 无需过多介绍了吧。CERN 创建了万维网和大型强子对撞机(LHC),这是世界上最大的粒子加速器,就是通过它发现了 [希格斯玻色子][7]。负责该组织 IT 操作系统和基础架构的 Tim Bell 表示,他的团队的目标是“为全球 13000 名物理学家提供计算设施,以分析这些碰撞、了解宇宙的构成以及是如何运转的。”
|
||||
[CERN][6] 无需过多介绍了吧。CERN 创建了<ruby>万维网<rt>World Wide Web</rt></ruby>(WWW)和<ruby>大型强子对撞机<rt>Large Hadron Collider</rt></ruby>(LHC),这是世界上最大的<ruby>粒子加速器<rt>particle accelerator</rt></ruby>,就是通过它发现了 <ruby>[希格斯玻色子][7]<rt>Higgs boson</rt></ruby>。负责该组织 IT 操作系统和基础架构的 Tim Bell 表示,他的团队的目标是“为全球 13000 名物理学家提供计算设施,以分析这些碰撞,了解宇宙的构成以及是如何运转的。”
|
||||
|
||||
CERN 正在进行硬核科学研究,尤其是大型强子对撞机,它在运行时 [生成大量数据][8]。“CERN 目前存储大约 200 PB 的数据,当加速器运行时,每月有超过 10 PB 的数据产生。这必然会给计算基础架构带来极大的挑战,包括存储大量数据,以及能够在合理的时间范围内处理数据,对于网络、存储技术和高效计算架构都是很大的压力。”Bell 说道。
|
||||
|
||||
### [tim-bell-cern.png][4]
|
||||

|
||||
|
||||

|
||||
Tim Bell, CERN [经许可使用][1] Swapnil Bhartiya
|
||||
*Tim Bell, CERN*
|
||||
|
||||
大型强子对撞机的运作规模和它产生的数据量带来了严峻的挑战,但 CERN 对这些问题并不陌生。CERN 成立于 1954 年,已经 60 余年了。“我们一直面临着难以解决的计算能力挑战,但我们一直在与开源社区合作解决这些问题。”Bell 说,“即使在 90 年代,当我们发明万维网时,我们也希望与人们共享,使其能够从 CERN 的研究中受益,开源是做这件事的再合适不过的工具了。”
|
||||
|
||||
@ -29,19 +27,20 @@ CERN 帮助 CentOS 提供基础架构,他们还组织了 CentOS DoJo 活动(
|
||||
|
||||
除了 OpenStack 和 CentOS 之外,CERN 还是其他开源项目的深度用户,包括用于配置管理的 Puppet、用于监控的 Grafana 和 InfluxDB,等等。
|
||||
|
||||
“我们与全球约 170 个实验室合作。因此,每当我们发现一个开源项目的可完善之处,其他实验室便可以很容易地采纳使用。“Bell 说,”与此同时,我们也向其他项目学习。当像 eBay 和 Rackspace 这样大规模的安装提高了解决方案的可扩展性时,我们也从中受益,也可以扩大规模。“
|
||||
“我们与全球约 170 个实验室合作。因此,每当我们发现一个开源项目的改进之处,其他实验室便可以很容易地采纳使用。”Bell 说,“与此同时,我们也向其他项目学习。当像 eBay 和 Rackspace 这样大规模的装机量提高了解决方案的可扩展性时,我们也从中受益,也可以扩大规模。”
|
||||
|
||||
### 解决现实问题
|
||||
|
||||
2012 年左右,CERN 正在研究如何为大型强子对撞机扩展计算能力,但难点是人员而不是技术。CERN 雇用的员工人数是固定的。“我们必须找到一种方法来扩展计算能力,而不需要大量额外的人来管理。”Bell 说,“OpenStack 为我们提供了一个自动的 API 驱动和软件定义的基础架构。”OpenStack 还帮助 CERN 检查与服务交付相关的问题,然后使其自动化,而无需增加员工。
|
||||
|
||||
“我们目前在日内瓦和布达佩斯的两个数据中心运行大约 280000 个核心(cores)和 7000 台服务器。我们正在使用软件定义的基础架构使一切自动化,这使我们能够在保持员工数量不变的同时继续添加更多的服务器。“Bell 说。
|
||||
“我们目前在日内瓦和布达佩斯的两个数据中心运行大约 280000 个处理器核心和 7000 台服务器。我们正在使用软件定义的基础架构使一切自动化,这使我们能够在保持员工数量不变的同时继续添加更多的服务器。”Bell 说。
|
||||
|
||||
随着时间的推移,CERN 将面临更大的挑战。大型强子对撞机有一个到 2035 年的蓝图,包括一些重要的升级。“我们的加速器运转三到四年,然后会用 18 个月或两年的时间来升级基础架构。在这维护期间我们会做一些计算能力的规划。“Bell 说。CERN 还计划升级高亮度大型强子对撞机,会允许更高光度的光束。与目前的 CERN 的规模相比,升级意味着计算需求需增加约 60 倍。
|
||||
随着时间的推移,CERN 将面临更大的挑战。大型强子对撞机有一个到 2035 年的蓝图,包括一些重要的升级。“我们的加速器运转三到四年,然后会用 18 个月或两年的时间来升级基础架构。在这维护期间我们会做一些计算能力的规划。
|
||||
”Bell 说。CERN 还计划升级高亮度大型强子对撞机,会允许更高光度的光束。与目前的 CERN 的规模相比,升级意味着计算需求需增加约 60 倍。
|
||||
|
||||
“根据摩尔定律,我们可能只能满足需求的四分之一,因此我们必须找到相应的扩展计算能力和存储基础架构的方法,并找到自动化和解决方案,例如 OpenStack,将有助于此。”Bell 说。
|
||||
|
||||
“当我们开始使用大型强子对撞机并观察我们如何提供计算能力时,很明显我们无法将所有内容都放入 CERN 的数据中心,因此我们设计了一个分布式网格结构:位于中心的 CERN 和围绕着它的级联结构。“Bell 说,“全世界约有 12 个大型一级数据中心,然后是 150 所小型大学和实验室。他们从大型强子对撞机的数据中收集样本,以帮助物理学家理解和分析数据。“
|
||||
“当我们开始使用大型强子对撞机并观察我们如何提供计算能力时,很明显我们无法将所有内容都放入 CERN 的数据中心,因此我们设计了一个分布式网格结构:位于中心的 CERN 和围绕着它的级联结构。”Bell 说,“全世界约有 12 个大型一级数据中心,然后是 150 所小型大学和实验室。他们从大型强子对撞机的数据中收集样本,以帮助物理学家理解和分析数据。”
|
||||
|
||||
这种结构意味着 CERN 正在进行国际合作,数百个国家正致力于分析这些数据。归结为一个基本原则,即开源不仅仅是共享代码,还包括人们之间的协作、知识共享,以实现个人、组织或公司无法单独实现的目标。这就是开源世界的希格斯玻色子。
|
||||
|
||||
@ -49,9 +48,9 @@ CERN 帮助 CentOS 提供基础架构,他们还组织了 CentOS DoJo 活动(
|
||||
|
||||
via: https://www.linux.com/blog/2018/5/how-cern-using-linux-open-source
|
||||
|
||||
作者:[SWAPNIL BHARTIYA ][a]
|
||||
作者:[SWAPNIL BHARTIYA][a]
|
||||
译者:[jessie-pang](https://github.com/jessie-pang)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,21 +1,24 @@
|
||||
如何在 Git 中重置、恢复、和返回到以前的状态
|
||||
如何在 Git 中重置、恢复,返回到以前的状态
|
||||
======
|
||||
|
||||
> 用简洁而优雅的 Git 命令撤销仓库中的改变。
|
||||
|
||||

|
||||
|
||||
使用 Git 工作时其中一个鲜为人知(和没有意识到)的方面就是,如何很容易地返回到你以前的位置 —— 也就是说,在仓库中如何很容易地去撤销那怕是重大的变更。在本文中,我们将带你了解如何去重置、恢复、和完全回到以前的状态,做到这些只需要几个简单而优雅的 Git 命令。
|
||||
使用 Git 工作时其中一个鲜为人知(和没有意识到)的方面就是,如何轻松地返回到你以前的位置 —— 也就是说,在仓库中如何很容易地去撤销那怕是重大的变更。在本文中,我们将带你了解如何去重置、恢复和完全回到以前的状态,做到这些只需要几个简单而优雅的 Git 命令。
|
||||
|
||||
### reset
|
||||
### 重置
|
||||
|
||||
我们从 Git 的 `reset` 命令开始。确实,你应该能够想到它就是一个 "回滚" — 它将你本地环境返回到前面的提交。这里的 "本地环境" 一词,我们指的是你的本地仓库、暂存区、以及工作目录。
|
||||
我们从 Git 的 `reset` 命令开始。确实,你应该能够认为它就是一个 “回滚” —— 它将你本地环境返回到之前的提交。这里的 “本地环境” 一词,我们指的是你的本地仓库、暂存区以及工作目录。
|
||||
|
||||
先看一下图 1。在这里我们有一个在 Git 中表示一系列状态的提交。在 Git 中一个分支就是简单的一个命名的、可移动指针到一个特定的提交。在这种情况下,我们的 master 分支是链中指向最新提交的一个指针。
|
||||
先看一下图 1。在这里我们有一个在 Git 中表示一系列提交的示意图。在 Git 中一个分支简单来说就是一个命名的、指向一个特定的提交的可移动指针。在这里,我们的 master 分支是指向链中最新提交的一个指针。
|
||||
|
||||
![Local Git environment with repository, staging area, and working directory][2]
|
||||
|
||||
图 1:有仓库、暂存区、和工作目录的本地环境
|
||||
*图 1:有仓库、暂存区、和工作目录的本地环境*
|
||||
|
||||
如果看一下我们的 master 分支是什么,可以看一下到目前为止我们产生的提交链。
|
||||
|
||||
```
|
||||
$ git log --oneline
|
||||
b764644 File with three lines
|
||||
@ -23,41 +26,49 @@ b764644 File with three lines
|
||||
9ef9173 File with one line
|
||||
```
|
||||
|
||||
如果我们想回滚到前一个提交会发生什么呢?很简单 —— 我们只需要移动分支指针即可。Git 提供了为我们做这个动作的命令。例如,如果我们重置 master 为当前提交回退两个提交的位置,我们可以使用如下之一的方法:
|
||||
如果我们想回滚到前一个提交会发生什么呢?很简单 —— 我们只需要移动分支指针即可。Git 提供了为我们做这个动作的 `reset` 命令。例如,如果我们重置 master 为当前提交回退两个提交的位置,我们可以使用如下之一的方法:
|
||||
|
||||
`$ git reset 9ef9173`(使用一个绝对的提交 SHA1 值 9ef9173)
|
||||
```
|
||||
$ git reset 9ef9173
|
||||
```
|
||||
|
||||
或
|
||||
(使用一个绝对的提交 SHA1 值 `9ef9173`)
|
||||
|
||||
`$ git reset current~2`(在 “current” 标签之前,使用一个相对值 -2)
|
||||
或:
|
||||
|
||||
```
|
||||
$ git reset current~2
|
||||
```
|
||||
(在 “current” 标签之前,使用一个相对值 -2)
|
||||
|
||||
图 2 展示了操作的结果。在这之后,如果我们在当前分支(master)上运行一个 `git log` 命令,我们将看到只有一个提交。
|
||||
|
||||
```
|
||||
$ git log --oneline
|
||||
|
||||
9ef9173 File with one line
|
||||
|
||||
```
|
||||
|
||||
![After reset][4]
|
||||
|
||||
图 2:在 `reset` 之后
|
||||
*图 2:在 `reset` 之后*
|
||||
|
||||
`git reset` 命令也包含使用一个你最终满意的提交内容去更新本地环境的其它部分的选项。这些选项包括:`hard` 在仓库中去重置指向的提交,用提交的内容去填充工作目录,并重置暂存区;`soft` 仅重置仓库中的指针;而 `mixed`(默认值)将重置指针和暂存区。
|
||||
`git reset` 命令还提供了一些选项,可以用你最终定位到的那个提交的内容去更新本地环境的其它部分。这些选项包括:`hard`,在仓库中重置指向的提交,用该提交的内容填充工作目录,并重置暂存区;`soft`,仅重置仓库中的指针;而 `mixed`(默认值)将重置指针和暂存区。
|
||||
|
||||
这些选项在特定情况下非常有用,比如,`git reset --hard <commit sha1 | reference>` 这个命令将覆盖本地任何未提交的更改。实际上,它重置了(清除掉)暂存区,并用你重置的提交内容去覆盖了工作区中的内容。在你使用 `hard` 选项之前,一定要确保这是你真正地想要做的操作,因为这个命令会覆盖掉任何未提交的更改。
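为了安全地体会这些选项的差别,可以在一个临时仓库中做个小实验(下面的路径、文件名和提交信息都是随意取的演示值):

```
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "Line 1" > file.txt
git add file.txt && git commit -qm "C0"
echo "Line 2" >> file.txt
git add file.txt && git commit -qm "C1"

# --soft:只移动分支指针,C1 的改动仍留在暂存区
git reset --soft HEAD~1
git status --short        # 暂存区中仍有 file.txt 的改动

# --hard:连暂存区和工作目录一起重置,丢弃未提交的改动
git reset --hard HEAD
cat file.txt              # 只剩 "Line 1"
```

做完实验后删除临时目录即可,不会影响任何真实仓库。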
|
||||
|
||||
### revert
|
||||
### 恢复
|
||||
|
||||
`git revert` 命令的实际结果类似于 `reset`,但它的方法不同。`reset` 命令是在(默认)链中向后移动分支的指针去“撤销”更改,`revert` 命令是在链中添加一个新的提交去“取消”更改。再次查看图 1 可以非常轻松地看到这种影响。如果我们在链中的每个提交中向文件添加一行,一种方法是使用 `reset` 使那个提交返回到仅有两行的那个版本,如:`git reset HEAD~1`。
|
||||
`git revert` 命令的实际结果类似于 `reset`,但它的方法不同。`reset` 命令(默认)是在链中向后移动分支的指针去“撤销”更改,`revert` 命令是在链中添加一个新的提交去“取消”更改。再次查看图 1 可以非常轻松地看到这种影响。如果我们在链中的每个提交中向文件添加一行,一种方法是使用 `reset` 使那个提交返回到仅有两行的那个版本,如:`git reset HEAD~1`。
|
||||
|
||||
另一个方法是添加一个新的提交去删除第三行,以使最终结束变成两行的版本 —— 实际效果也是取消了那个更改。使用一个 `git revert` 命令可以实现上述目的,比如:
|
||||
|
||||
另一个方法是添加一个新的提交去删除第三行,以使最终结束变成两行的版本 — 实际效果也是取消了那个更改。使用一个 `git revert` 命令可以实现上述目的,比如:
|
||||
```
|
||||
$ git revert HEAD
|
||||
|
||||
```
|
||||
|
||||
因为它添加了一个新的提交,Git 将提示如下的提交信息:
|
||||
|
||||
```
|
||||
Revert "File with three lines"
|
||||
|
||||
@ -74,6 +85,7 @@ This reverts commit b764644bad524b804577684bf74e7bca3117f554.
|
||||
图 3(在下面)展示了 `revert` 操作完成后的结果。
|
||||
|
||||
如果我们现在运行一个 `git log` 命令,我们将看到前面的提交之前的一个新提交。
|
||||
|
||||
```
|
||||
$ git log --oneline
|
||||
11b7712 Revert "File with three lines"
|
||||
@ -83,6 +95,7 @@ b764644 File with three lines
|
||||
```
|
||||
|
||||
这里是工作目录中这个文件当前的内容:
|
||||
|
||||
```
|
||||
$ cat <filename>
|
||||
Line 1
|
||||
@ -91,31 +104,34 @@ Line 2
|
||||
|
||||

|
||||
|
||||
#### Revert 或 reset 如何选择?
|
||||
*图 3 `revert` 操作之后*
|
||||
|
||||
#### 恢复或重置如何选择?
|
||||
|
||||
为什么要优先选择 `revert` 而不是 `reset` 操作?如果你已经将你的提交链推送到远程仓库(其它人可以已经拉取了你的代码并开始工作),一个 `revert` 操作是让他们去获得更改的非常友好的方式。这是因为 Git 工作流可以非常好地在分支的末端添加提交,但是当有人 `reset` 分支指针之后,一组提交将再也看不见了,这可能会是一个挑战。
|
||||
|
||||
当我们以这种方式使用 Git 工作时,我们的基本规则之一是:在你的本地仓库中使用这种方式去更改还没有推送的代码是可以的。如果提交已经推送到了远程仓库,并且可能其它人已经使用它来工作了,那么应该避免这些重写提交历史的更改。
|
||||
|
||||
总之,如果你想回滚、撤销、或者重写其它人已经在使用的一个提交链的历史,当你的同事试图将他们的更改合并到他们拉取的原始链上时,他们可能需要做更多的工作。如果你必须对已经推送并被其他人正在使用的代码做更改,在你做更改之前必须要与他们沟通,让他们先合并他们的更改。然后在没有需要去合并的侵入操作之后,他们再拉取最新的副本。
|
||||
总之,如果你想回滚、撤销或者重写其它人已经在使用的一个提交链的历史,当你的同事试图将他们的更改合并到他们拉取的原始链上时,他们可能需要做更多的工作。如果你必须对已经推送并被其他人正在使用的代码做更改,在你做更改之前必须要与他们沟通,让他们先合并他们的更改。然后在这个侵入操作没有需要合并的内容之后,他们再拉取最新的副本。
|
||||
|
||||
你可能注意到了,在我们做了 `reset` 操作之后,原始的提交链仍然在那个位置。我们移动了指针,然后 `reset` 代码回到前一个提交,但它并没有删除任何提交。换句话说就是,只要我们知道我们所指向的原始提交,我们能够通过简单的返回到分支的原始链的头部来“恢复”指针到前面的位置:
|
||||
|
||||
你可能注意到了,在我们做了 `reset` 操作之后,原始的链仍然在那个位置。我们移动了指针,然后 `reset` 代码回到前一个提交,但它并没有删除任何提交。换句话说就是,只要我们知道我们所指向的原始提交,我们能够通过简单的返回到分支的原始头部来“恢复”指针到前面的位置:
|
||||
```
|
||||
git reset <sha1 of commit>
|
||||
|
||||
```
|
||||
|
||||
当提交被替换之后,我们在 Git 中做的大量其它操作也会发生类似的事情。新提交被创建,有关的指针被移动到一个新的链,但是老的提交链仍然存在。
|
||||
|
||||
### Rebase
|
||||
### 变基
|
||||
|
||||
现在我们来看一个分支变基。假设我们有两个分支 — master 和 feature — 提交链如下图 4 所示。Master 的提交链是 `C4->C2->C1->C0` 和 feature 的提交链是 `C5->C3->C2->C1->C0`.
|
||||
现在我们来看一个分支变基。假设我们有两个分支:master 和 feature,提交链如下图 4 所示。master 的提交链是 `C4->C2->C1->C0` 和 feature 的提交链是 `C5->C3->C2->C1->C0`。
|
||||
|
||||
![Chain of commits for branches master and feature][6]
|
||||
|
||||
图 4:master 和 feature 分支的提交链
|
||||
*图 4:master 和 feature 分支的提交链*
|
||||
|
||||
如果我们在分支中看它的提交记录,它们看起来应该像下面的这样。(为了易于理解,`C` 表示提交信息)
|
||||
|
||||
```
|
||||
$ git log --oneline master
|
||||
6a92e7a C4
|
||||
@ -131,9 +147,10 @@ f33ae68 C1
|
||||
5043e79 C0
|
||||
```
|
||||
|
||||
我给人讲,在 Git 中,可以将 `rebase` 认为是 “将历史合并”。从本质上来说,Git 将一个分支中的每个不同提交尝试“重放”到另一个分支中。
|
||||
我告诉人们在 Git 中,可以将 `rebase` 认为是 “将历史合并”。从本质上来说,Git 将一个分支中的每个不同提交尝试“重放”到另一个分支中。
|
||||
|
||||
因此,我们使用基本的 Git 命令,可以变基一个 feature 分支进入到 master 中,并将它拼入到 `C4` 中(比如,将它插入到 feature 的链中)。操作命令如下:
|
||||
|
||||
因此,我们使用基本的 Git 命令,可以 rebase 一个 feature 分支进入到 master 中,并将它拼入到 `C4` 中(比如,将它插入到 feature 的链中)。操作命令如下:
|
||||
```
|
||||
$ git checkout feature
|
||||
$ git rebase master
|
||||
@ -147,9 +164,10 @@ Applying: C5
|
||||
|
||||
![Chain of commits after the rebase command][8]
|
||||
|
||||
图 5:`rebase` 命令完成后的提交链
|
||||
*图 5:`rebase` 命令完成后的提交链*
|
||||
|
||||
接着,我们看一下提交历史,它应该变成如下的样子。
|
||||
|
||||
```
|
||||
$ git log --oneline master
|
||||
6a92e7a C4
|
||||
@ -168,25 +186,27 @@ f33ae68 C1
|
||||
|
||||
注意那个 `C3'` 和 `C5'`— 在 master 分支上已处于提交链的“顶部”,由于产生了更改而创建了新提交。但是也要注意的是,rebase 后“原始的” `C3` 和 `C5` 仍然在那里 — 只是再没有一个分支指向它们而已。
|
||||
|
||||
如果我们做了这个 rebase,然后确定这不是我们想要的结果,希望去撤销它,我们可以做下面示例所做的操作:
|
||||
如果我们做了这个变基,然后确定这不是我们想要的结果,希望去撤销它,我们可以做下面示例所做的操作:
|
||||
|
||||
```
|
||||
$ git reset 79768b8
|
||||
|
||||
```
|
||||
|
||||
由于这个简单的变更,现在我们的分支将重新指向到做 `rebase` 操作之前一模一样的位置 —— 完全等效于撤销操作(图 6)。
|
||||
|
||||
![After undoing rebase][10]
|
||||
|
||||
图 6:撤销 `rebase` 操作之后
|
||||
*图 6:撤销 `rebase` 操作之后*
|
||||
|
||||
如果你想不起来某个操作之前一个分支指向的提交是什么怎么办?幸运的是,Git 依然可以帮助你。对于大多数会这样修改指针的操作,Git 都会为你记住原始的提交。事实上,它把这个提交保存在 `.git` 仓库目录下一个名为 `ORIG_HEAD` 的特定引用中,这个文件记录着指针被修改之前最近的引用值。如果我们 `cat` 这个文件,我们可以看到它的内容。
|
||||
|
||||
```
|
||||
$ cat .git/ORIG_HEAD
|
||||
79768b891f47ce06f13456a7e222536ee47ad2fe
|
||||
```
|
||||
|
||||
我们可以使用 `reset` 命令,正如前面所述,它返回指向到原始的链。然后它的历史将是如下的这样:
|
||||
|
||||
```
|
||||
$ git log --oneline feature
|
||||
79768b8 C5
|
||||
@ -196,7 +216,8 @@ f33ae68 C1
|
||||
5043e79 C0
|
||||
```
|
||||
|
||||
在 reflog 中是获取这些信息的另外一个地方。这个 reflog 是你本地仓库中相关切换或更改的详细描述清单。你可以使用 `git reflog` 命令去查看它的内容:
|
||||
在 reflog 中是获取这些信息的另外一个地方。reflog 是你本地仓库中相关切换或更改的详细描述清单。你可以使用 `git reflog` 命令去查看它的内容:
|
||||
|
||||
```
|
||||
$ git reflog
|
||||
79768b8 HEAD@{0}: reset: moving to 79768b
|
||||
@ -216,10 +237,10 @@ f33ae68 HEAD@{13}: commit: C1
|
||||
5043e79 HEAD@{14}: commit (initial): C0
|
||||
```
|
||||
|
||||
你可以使用日志中列出的、你看到的相关命名格式,去 reset 任何一个东西:
|
||||
你可以使用日志中列出的、你看到的相关命名格式,去重置任何一个东西:
|
||||
|
||||
```
|
||||
$ git reset HEAD@{1}
|
||||
|
||||
```
|
||||
|
||||
一旦你理解了当“修改”链的操作发生后,Git 是如何跟踪原始提交链的基本原理,那么在 Git 中做一些更改将不再是那么可怕的事。这就是强大的 Git 的核心能力之一:能够很快速、很容易地尝试任何事情,并且如果不成功就撤销它们。
|
||||
@ -233,7 +254,7 @@ via: https://opensource.com/article/18/6/git-reset-revert-rebase-commands
|
||||
作者:[Brent Laster][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,34 +1,34 @@
|
||||
比特币是一个邪教 — Adam Caudill
|
||||
比特币是一个邪教
|
||||
======
|
||||
经过这些年,比特币社区已经发生了非常大的变化;社区成员从闭着眼睛都能讲解 [Merkle 树][1] 的技术迷们,变成了被一夜爆富欲望驱使的投机者和由一些连什么是 Merkle 树都不懂的人所领导的企图寻求 10 亿美元估值的区块链初创公司。随着时间的流逝,围绕比特币和其它加密货币形成了一个狂热,他们认为比特币和其它加密货币远比实际的更重要;他们相信常见的货币(法定货币)正在成为过去,而加密货币将从根本上改变世界经济。
|
||||
|
||||
每一年他们的队伍都在壮大,而他们对加密货币的看法也在变得更加宏伟,那怕是因为[使用新技术][2]而使它陷入困境的情况下。虽然我坚信设计优良的加密货币可以使金钱的跨境流动更容易,并且在大规模通胀的领域提供一个更稳定的选择,但现实情况是,我们并没有做到这些。实际上,正是价值的巨大不稳定性才使得投机者赚钱。那些宣扬美元和欧元即将死去的人,已经完全抛弃了对现实世界客观公正的看法。
|
||||
经过这些年,比特币社区已经发生了非常大的变化;社区成员从闭着眼睛都能讲解 [梅克尔树][1] 的技术迷们,变成了被一夜爆富欲望驱使的投机者和由一些连什么是梅克尔树都不懂的人所领导的企图寻求 10 亿美元估值的区块链初创公司。随着时间的流逝,围绕比特币和其它加密货币形成了一股热潮,他们认为比特币和其它加密货币远比实际的更重要;他们相信常见的货币(法定货币)正在成为过去,而加密货币将从根本上改变世界经济。
|
||||
|
||||
每一年他们的队伍都在壮大,而他们对加密货币的看法也在变得更加宏伟,那怕是对该技术的[新奇的用法][2]而使它陷入了困境。虽然我坚信设计优良的加密货币可以使金钱的跨境流动更容易,并且在大规模通胀的领域提供一个更稳定的选择,但现实情况是,我们并没有做到这些。实际上,正是价值的巨大不稳定性才使得投机者赚钱。那些宣扬美元和欧元即将死去的人,已经完全抛弃了对现实世界客观公正的看法。
|
||||
|
||||
### 一点点背景 …
|
||||
|
||||
比特币发行那天,我读了它的白皮书 —— 它使用有趣的 [Merkle 树][1] 去创建一个公共账簿和一个非常合理的共识协议 —— 由于它新颖的特性引起了密码学领域中许多人的注意。在白皮书发布后的几年里,比特币变得非常有价值,并由此吸引了许多人将它视为是一种投资,和那些认为它将改变一切的忠实追随者(和发声者)。这篇文章将讨论的正是后者。
|
||||
比特币发行那天,我读了它的白皮书 —— 它使用有趣的 [梅克尔树][1] 去创建一个公共账簿和一个非常合理的共识协议 —— 由于它新颖的特性引起了密码学领域中许多人的注意。在白皮书发布后的几年里,比特币变得非常有价值,并由此吸引了许多人将它视为是一种投资,和那些认为它将改变一切的忠实追随者(和发声者)。这篇文章将讨论的正是后者。
|
||||
|
||||
昨天,有人在推特上发布了一个最近的比特币区块的哈希,下面成千上万的推文和其它讨论让我相信,比特币已经跨越界线进入了真正的邪教领域。
|
||||
昨天(2018/6/20),有人在推特上发布了一个最近的比特币区块的哈希,下面成千上万的推文和其它讨论让我相信,比特币已经跨越界线进入了真正的邪教领域。
|
||||
|
||||
一切都源于 Mark Wilcox 的这个推文:
|
||||
一切都源于 Mark Wilcox 的[这个推文][9]:
|
||||
|
||||
> #00000000000000000021e800c1e8df51b22c1588e5a624bea17e9faa34b2dc4a
|
||||
> — Mark Wilcox (@mwilcox) June 19, 2018
|
||||
> [#00000000000000000021e800c1e8df51b22c1588e5a624bea17e9faa34b2dc4a][8]
|
||||
|
||||
张贴的这个值是 [比特币 #528249 号区块][3] 的哈希值。前导零是挖矿过程的结果;挖掘一个区块就是把区块内容与一个 nonce(和其它数据)组合起来,然后做哈希运算,并且它至少有一定数量的前导零才能被验证为有效区块。如果它不是正确的数字,你可以更换 nonce 再试。重复这个过程直到哈希值的前导零数量是正确的数字之后,你就有了一个有效的区块。让人们感到很兴奋的部分是接下来的 21e800。
|
||||
> — Mark Wilcox (@mwilcox) [June 19, 2018][9]
|
||||
|
||||
张贴的这个值是 [比特币 #528249 号区块][3] 的哈希值。前导零是挖矿过程的结果;挖掘一个区块就是把区块内容与一个<ruby>现时数<rt>nonce</rt></ruby>(和其它数据)组合起来,然后做哈希运算,并且它至少有一定数量的前导零才能被验证为有效区块。如果它不是正确的数字,你可以更换现时数再试。重复这个过程直到哈希值的前导零数量是正确的数字之后,你就有了一个有效的区块。让人们感到很兴奋的部分是接下来的 `21e800`。
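可以用一个小脚本直观感受这种“不断更换现时数再哈希”的过程(下面用单次 SHA-256 和极低的难度做示意;真实的比特币对区块头做的是双重 SHA-256,难度也高得多):

```
data="block-data"
nonce=0
while :; do
  hash=$(printf '%s:%d' "$data" "$nonce" | sha256sum | cut -d' ' -f1)
  case "$hash" in
    00*) break ;;   # 难度要求:哈希必须以两个十六进制的零开头
  esac
  nonce=$((nonce + 1))
done
echo "nonce=$nonce hash=$hash"
```

前导零越多,平均需要尝试的现时数就越多,这正是“工作量”所在。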
|
||||
|
||||
一些人说这是一个有意义的编号,挖掘出这个区块的人实际上的难度远远超出当前所看到的,不仅要调整前导零的数量,还要匹配接下来的 24 位 —— 它要求非常强大的计算能力。如果有人能够以蛮力去实现它,这将表明有些事情很严重,比如,在计算或密码学方面的重大突破。
|
||||
|
||||
你一定会有疑问,为什么 21e800 如此重要 —— 一个你问了肯定会后悔的问题。有人说它是参考了 [E8 理论][4](一个广受批评的提出标准场理论的论文),或是表示总共存在 2100000000 枚比特币(`21 x 10^8` 就是 2,100,000,000)。还有其它说法,因为太疯狂了而没有办法写出来。另一个重要的事实是,在前导零后面有 21e8 的区块平均每年被挖掘出一次 —— 这些从来没有人认为是很重要的。
|
||||
你一定会有疑问,为什么 `21e800` 如此重要 —— 一个你问了肯定会后悔的问题。有人说它是参考了 [E8 理论][4](一个广受批评的提出标准场理论的论文),或是表示总共存在 2,100,000,000 枚比特币(`21 x 10^8` 就是 2,100,000,000)。还有其它说法,因为太疯狂了而没有办法写出来。另一个重要的事实是,在前导零后面有 21e8 的区块平均每年被挖掘出一次 —— 这些从来没有人认为是很重要的。
|
||||
|
||||
这就引出了有趣的地方:关于这是如何发生的[理论][5]。
|
||||
|
||||
* 一台量子计算机,它能以某种方式用不可思议的速度做哈希运算。尽管在量子计算机的理论中还没有迹象表明它能够做这件事。哈希是量子计算机认为很安全的东西之一。
|
||||
* 一台量子计算机,它能以某种方式用不可思议的速度做哈希运算。尽管在量子计算机的理论中还没有迹象表明它能够做这件事。哈希是量子计算机认为安全的东西之一。
|
||||
* 时间旅行。是的,真的有人这么说,有人从未来穿梭回到现在去挖掘这个区块。我认为这种说法太荒谬了,都懒得去解释它为什么是错误的。
|
||||
* 中本聪回来了。尽管事实上他的私钥没有任何活动,一些人从理论上认为他回来了,他能做一些没人能做的事情。这些理论是无法解释他如何做到的。
|
||||
|
||||
|
||||
|
||||
> 因此,总的来说(按我的理解)中本聪,为了知道和计算他做的事情,根据现代科学,他可能是以下之一:
|
||||
>
|
||||
> A) 使用了一台量子计算机
|
||||
@ -37,39 +37,35 @@
|
||||
>
|
||||
> — Crypto Randy Marsh [REKT] (@nondualrandy) [June 21, 2018][6]
|
||||
|
||||
如果你觉得所有的这一切听起来像 [命理学][7],不止你一个人是这样想的。
|
||||
如果你觉得所有的这一切听起来像 <ruby>[命理学][7]<rt>numerology</rt></ruby>,不止你一个人是这样想的。
|
||||
|
||||
所有围绕有特殊意义的区块哈希的讨论,也引发了对在某种程度上比较有趣的东西的讨论。比特币的创世区块,它是第一个比特币区块,有一个不寻常的属性:早期的比特币要求哈希值的前 32 位是零;而创始区块的前导零有 43 位。因为由代码产生的创世区块从不会发布,它不知道它是如何产生的,也不知道是用什么类型的硬件产生的。中本聪有学术背景,因此可能他有比那个时候大学中常见设备更强大的计算能力。从这一点上说,只是对古怪的创世区块的历史有点好奇,仅此而已。
|
||||
所有围绕有特殊意义的区块哈希的讨论,也引发了对在某种程度上比较有趣的东西的讨论。比特币的创世区块,它是第一个比特币区块,有一个不寻常的属性:早期的比特币要求哈希值的前 32 <ruby>位<rt>bit</rt></ruby>是零;而创始区块的前导零有 43 位。因为产生创世区块的代码从未发布过,不知道它是如何产生的,也不知道是用什么类型的硬件产生的。中本聪有学术背景,因此可能他有比那个时候大学中常见设备更强大的计算能力。从这一点上说,只是对古怪的创世区块的历史有点好奇,仅此而已。
|
||||
|
||||
### 关于哈希运算的简单题外话
|
||||
|
||||
这种喧嚣始于比特币区块的哈希运算;因此理解哈希是什么很重要,并且要理解一个非常重要的属性,一个哈希是单向加密函数,它能够基于给定的数据创建一个伪随机输出。
|
||||
这种喧嚣始于比特币区块的哈希运算;因此理解哈希是什么很重要,并且要理解一个非常重要的属性,哈希是单向加密函数,它能够基于给定的数据创建一个伪随机输出。
|
||||
|
||||
这意味着什么呢?基于本文讨论的目的,对于每个给定的输入你将得到一个随机的输出。随机数有时看起来很有趣,很简单,因为它是随机的结果,并且人类大脑可以很容易从任何东西中找到顺序。当你从随机数据中开始查看顺序时,你就会发现有趣的事情 —— 这些东西毫无意义,因为它们只是简单地随机数。当人们把重要的意义归属到随机数据上时,它将告诉你很多这些参与者观念相关的东西,而不是数据本身。
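哈希输出的这种“伪随机”特性可以在命令行上直接验证:相同输入永远得到相同输出,而输入哪怕只差一个字符,输出也毫无相似之处(这里用 SHA-256 做演示,与具体哪种哈希函数无关):

```
h1=$(printf 'hello'  | sha256sum | cut -d' ' -f1)
h2=$(printf 'hello!' | sha256sum | cut -d' ' -f1)
echo "$h1"
echo "$h2"   # 输入只多了一个字符,两个哈希却完全不同
```

因此,在这样的输出里“找到”的任何规律,都只是人脑在随机数据中强加的秩序。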
|
||||
|
||||
### 币的邪教
|
||||
### 币之邪教
|
||||
|
||||
首先,我们来定义一组术语:
|
||||
|
||||
* 邪教:一个宗教崇拜和直接向一个特定的人或物虔诚的体系。
|
||||
* 宗教:有人认为是至高无上的追求或兴趣。
|
||||
* <ruby>邪教<rt>Cult</rt></ruby>:一个宗教崇拜和直接向一个特定的人或物虔诚的体系。
|
||||
* <ruby>宗教<rt>Religion</rt></ruby>:有人认为是至高无上的追求或兴趣。
|
||||
|
||||
<ruby>币之邪教<rt>Cult of the Coin</rt></ruby>有许多圣人,或许没有人比<ruby>中本聪<rt>Satoshi Nakamoto</rt></ruby>更伟大,他是比特币创始者(们)的假名。(对他的)狂热拥戴,要归因于他的能力和理解力远超过一般的研究人员,认为他的远见卓视无人能比,他影响了世界新经济的秩序。当将中本聪的神秘本质和未知的真实身份结合起来时,狂热的追随着们将中本聪视为一个真正值得尊敬的人物。
|
||||
|
||||
当然,除了追随其他圣人的追捧者之外,毫无疑问这些追捧者认为自己是正确的。任何对他们的圣人的批评都被认为也是对他们的批评。例如,那些追捧 EOS 的人,可能会视中本聪为一个开发了失败项目的黑客,而对 EOS 那怕是最轻微的批评,他们也会作出激烈的反应,之所以反应如此强烈,仅仅是因为攻击了他们心目中的神。那些追捧 IOTA 的人的反应也一样;还有更多这样的例子。
|
||||
|
||||
币的狂热追捧者中的许多圣人,或许没有人比中本聪更伟大,他是比特币创始人的假名。强力的护卫、赋予能力和理解力远超过一般的研究人员,认为他的远见卓视无人能比,他影响了世界新经济的秩序。当将中本聪的神秘本质和未知的真实身份结合起来时,狂热的追随着们将中本聪视为一个真正的值的尊敬的人物。
|
||||
|
||||
当然,除了追随其他圣人的追捧者之外,毫无疑问这些追捧者认为自己是正确的。任何对他们的圣人的批评都被认为也是对他们的批评。例如,那些追捧 EOS 的人,可能会认为中本聪是开发了一个失败项目的黑客,而对 EOS 那怕是最轻微的批评,他们也会作出激烈的反应,之所以反应如此强烈,仅仅是因为攻击了他们心目中的神。那些追捧 IOTA 的人的反应也一样;还有更多这样的例子。
|
||||
|
||||
这些追随着在讨论问题时已经失去了理性和客观,他们的狂热遮盖了他们的视野。任何对这些项目和项目背后的人的讨论,如果不是溢美之词,必然以某种程序的刻薄言辞结束,对于一个技术的讨论那种做法是毫无道理的。
|
||||
这些追随者在讨论问题时已经失去了理性和客观,他们的狂热遮盖了他们的视野。任何对这些项目和项目背后的人的讨论,如果不是溢美之词,必然以某种程序的刻薄言辞结束,对于一个技术的讨论那种做法是毫无道理的。
|
||||
|
||||
这很危险,原因很多:
|
||||
|
||||
* 开发者 & 研究者对缺陷视而不见。由于追捧者的大量赞美,这些参与开发的人对自己的能力开始膨胀,并将一些批评看作是无端的攻击 —— 因为他们认为自己是不可能错的。
|
||||
* 开发者 & 研究者对缺陷视而不见。由于追捧者的大量赞美,这些参与开发的人对自己的能力的看法开始膨胀,并将一些批评看作是无端的攻击 —— 因为他们认为自己是不可能错的。
|
||||
* 真正的问题是被攻击。技术问题不再被看作是需要去解决的问题和改进的机会,他们认为是来自那些想去破坏项目的人的攻击。
|
||||
* 用一枚币来控制他们。追随者们通常会结盟,而圣人仅有一个。承认其它项目的优越,意味着认同自己项目的缺陷或不足,而这是他们不愿意做的事情。
|
||||
* 阻止真实的进步。进化是很残酷的,它要求死亡,项目失败,以及承认这些失败的原因。如果忽视失败的教训,如果不允许那些应该去死亡的事情发生,进步就会停止。
|
||||
|
||||
|
||||
* 物以类聚,人以币分。追随者们通常会结盟到一起,而圣人仅有一个。承认其它项目的优越,意味着认同自己项目的缺陷或不足,而这是他们不愿意做的事情。
|
||||
* 阻止真实的进步。进化是很残酷的,死亡是必然会有的,项目可能失败,也要承认这些失败的原因。如果忽视失败的教训,如果不允许那些应该去死亡的事情发生,进步就会停止。
|
||||
|
||||
许多围绕加密货币和相关区块链项目的讨论已经开始变得越来越“有毒”,善意的人想在不受攻击的情况下进行技术性讨论越来越不可能。对真正缺陷的讨论 —— 那些在其它环境中足以让一个设计夭折的缺陷 —— 往往未经任何事实分析就被判定为异端,这已经成为惯例,善意的人参与其中的代价变得极其昂贵。至少有些人已经意识到极其严重的安全漏洞,但由于高“毒性”的环境,他们选择保持沉默。
|
||||
|
||||
@ -79,7 +75,10 @@
|
||||
|
||||
[注意:这种行为有许多例子可以引用,但是为了保护那些因批评项目而成为被攻击目标的人,我选择尽可能少的列出这种例子。我看到许多我很尊敬的人、许多我认为是朋友的人成为这种恶毒攻击的受害者 —— 我不想引起人们对这些攻击的注意和重新引起对他们的攻击。]
|
||||
|
||||
---
|
||||
关于作者:
|
||||
|
||||
我是一个资深应用安全顾问、研究员,也是有超过 15 年经验的软件开发者。我主要关注应用程序安全、安全通信和加密,虽然我也经常因为无聊而去研究新的领域。我通常会写一些关于我的研究、安全、开发和软件设计的文章,也会写一写当前吸引我注意力的爱好。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -88,7 +87,7 @@ via: https://adamcaudill.com/2018/06/21/bitcoin-is-a-cult/
|
||||
作者:[Adam Caudill][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -100,3 +99,5 @@ via: https://adamcaudill.com/2018/06/21/bitcoin-is-a-cult/
|
||||
[5]:https://medium.com/@coop__soup/00000000000000000021e800c1e8df51b22c1588e5a624bea17e9faa34b2dc4a-cd4b67d446be
|
||||
[6]:https://twitter.com/nondualrandy/status/1009609117768605696?ref_src=twsrc%5Etfw
|
||||
[7]:https://en.wikipedia.org/wiki/Numerology
|
||||
[8]:https://twitter.com/hashtag/00000000000000000021e800c1e8df51b22c1588e5a624bea17e9faa34b2dc4a?src=hash&ref_src=twsrc%5Etfw
|
||||
[9]:https://twitter.com/mwilcox/status/1009160832398262273?ref_src=twsrc%5Etfw
|
52
published/20180801 Cross-Site Request Forgery.md
Normal file
@ -0,0 +1,52 @@
|
||||
CSRF(跨站请求伪造)简介
|
||||
======
|
||||

|
||||
|
||||
设计 Web 程序时,安全性是一个主要问题。我不是在谈论 DDoS 保护、使用强密码或两步验证。我说的是对网络程序的最大威胁。它被称为 **CSRF**,是 **Cross-Site Request Forgery**(跨站请求伪造)的缩写。
|
||||
|
||||
### 什么是 CSRF?
|
||||
|
||||
[][1]
|
||||
|
||||
首先,**CSRF** 是 Cross-Site Request Forgery 的缩写。它通常发音为 “sea-surf”,也经常被称为 XSRF。CSRF 是一种攻击类型,在受害者不知情的情况下,在受害者登录的 Web 程序上执行各种操作。这些行为可以是任何事情,从简单地点赞或评论社交媒体帖子,到向人们发送垃圾消息,甚至从受害者的银行账户转移资金。
|
||||
|
||||
### CSRF 如何工作?
|
||||
|
||||
**CSRF** 攻击尝试利用所有浏览器上的一个简单的常见漏洞。每次我们对网站进行身份验证或登录时,会话 cookie 都会存储在浏览器中。因此,每当我们向网站提出请求时,这些 cookie 就会自动发送到服务器,服务器通过匹配与服务器记录一起发送的 cookie 来识别我们。这样就知道是我们了。
|
||||
|
||||
[][2]
|
||||
|
||||
这意味着我将在知情或不知情的情况下发出请求。由于 cookie 也被发送并且它们将匹配服务器上的记录,服务器认为我在发出该请求。
|
||||
|
||||
CSRF 攻击通常以链接的形式出现。我们可以在其他网站上点击它们或通过电子邮件接收它们。单击这些链接时,会向服务器发出不需要的请求。正如我之前所说,服务器认为我们发出了请求并对其进行了身份验证。
|
||||
|
||||
#### 一个真实世界的例子
|
||||
|
||||
为了更深入地理解,想象一下你已登录银行的网站,并在 **yourbank.com/transfer** 上填写表格。你将接收者的帐号填写为 1234,填入金额 5000,并单击提交按钮。此时会产生一个 **yourbank.com/transfer/send?to=1234&amount=5000** 的请求,服务器将根据这个请求进行操作并转账。现在想象一下你在另一个网站上点击了一个链接,它用黑客的帐号作为参数打开了上面的 URL。这笔钱就会转给黑客,而服务器认为是你做了这笔交易,即使你并没有。
|
||||
|
||||
[][3]
|
||||
|
||||
#### CSRF 防护
|
||||
|
||||
CSRF 防护非常容易实现。它通常将一个称为 CSRF 令牌的令牌发送到网页。每次发出新请求时,都会发送并验证此令牌。因此,向服务器发出的恶意请求将通过 cookie 身份验证,但 CSRF 验证会失败。大多数 Web 框架为防止 CSRF 攻击提供了开箱即用的支持,而 CSRF 攻击现在并不像以前那样常见。
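下面用几行 shell 粗略示意这种令牌校验的逻辑(真实场景中令牌的生成、存储和比对都由 Web 框架在会话层完成,这里仅为演示):

```
# 服务器为会话生成一个随机令牌,并把它嵌入到返回的表单中
session_token=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
form_token="$session_token"        # 正常请求:表单里带回同一个令牌
forged_token=""                    # 伪造请求:攻击者无法拿到正确的令牌

check() {
  # 请求携带的令牌必须非空且与会话中的令牌一致
  if [ -n "$1" ] && [ "$1" = "$session_token" ]; then
    echo "通过"
  else
    echo "拒绝"
  fi
}

check "$form_token"      # 通过
check "$forged_token"    # 拒绝
```

关键点在于:恶意请求虽然能带上 cookie,却带不上只有真实页面才包含的令牌,因此会在校验这一步被拒绝。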
|
||||
|
||||
### 总结
|
||||
|
||||
CSRF 攻击在 10 年前是一件大事,但如今我们看不到太多了。过去,Youtube、纽约时报和 Netflix 等知名网站都曾容易受到 CSRF 的攻击。然而,近年来 CSRF 攻击的普遍性和发生率都有所下降。尽管如此,CSRF 攻击仍然是一种威胁,因此保护你的网站或程序免受其害仍然很重要。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxandubuntu.com/home/understanding-csrf-cross-site-request-forgery
|
||||
|
||||
作者:[linuxandubuntu][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxandubuntu.com
|
||||
[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-what-is-cross-site-forgery_orig.jpg
|
||||
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/cookies-set-by-website-chrome_orig.jpg
|
||||
[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-hacking-bank-account_orig.jpg
|
@ -1,4 +1,4 @@
|
||||
20 questions DevOps job candidates should be prepared to answer
|
||||
Translating by FelixYFZ 20 questions DevOps job candidates should be prepared to answer
|
||||
======
|
||||
|
||||

|
||||
@ -0,0 +1,68 @@
3 pitfalls everyone should avoid with hybrid multi-cloud, part 2
======



This article was co-written with [Roel Hodzelmans][1].

Cloud hype is all around you—you're told it's critical to ensuring a digital future for your business. Whether you choose cloud, hybrid cloud, or hybrid multi-cloud, you have numerous decisions to make, even as you continue the daily work of enhancing your customers' experience and agile delivery of your applications (including legacy applications)—likely some of your business' most important resources.

In this series, we explain three pitfalls everyone should avoid when transitioning to hybrid multi-cloud environments. [In part one][2], we defined the different cloud types and explained the differences between hybrid cloud and multi-cloud. Here, in part two, we will dive into the first pitfall: Why cost is not always the best motivator for moving to the cloud.

### Why not?

When looking at hybrid or multi-cloud strategies for your business, don't let cost become the obvious motivator. There are a few other aspects of any migration strategy that you should review when putting your plan together. But often budget rules the conversations.

We've given this talk three times at conferences, asking our audience each time to answer a live, online questionnaire about their company, customers, and experiences in the field. Over 73% of respondents said cost was the driving factor in their business' decision to move to hybrid or multi-cloud.

But, if you already have full control of your on-premises data centers, yet perpetually underutilize and overpay for resources, how can you expect to prevent those costs from rolling over into your cloud strategy?

There are three main (and often forgotten, ignored, and unaccounted for) reasons cost shouldn't be the primary motivating factor for migrating to the cloud: labor costs, overcapacity, and overpaying for resources. They are important points to consider when developing a hybrid or multi-cloud strategy.

### Labor costs

Imagine a utility company making the strategic decision to move everything to the cloud within the next three years. The company kicks off enthusiastically, envisioning huge cost savings, but soon runs into labor cost issues that threaten to blow up the budget.

One of the most overlooked aspects of moving to the cloud is the cost of labor to migrate existing applications and data. A Forrester study reports that labor costs can consume [over 50% of the total cost of a public cloud migration][3]. Forrester says, "customer-facing apps for systems of engagement… typically employ lots of new code rather than migrating existing code to cloud platforms."

Step back and analyze what's essential to your customer success and move only that to the cloud. Then, evaluate all your non-essential applications and, over time, consider moving them to commercial, off-the-shelf solutions that require little labor cost.

### Overcapacity

"More than 80% of in-house data centers have [way more server capacity than is necessary][4]," reports Business Insider. This amazing bit of information should shock you to your core.

What exactly is "way more" in this context?

One hint comes from Deutsche Bank CTO Pat Healey, presenting at Red Hat Summit 2017. He talks about ordering hardware for the financial institution's on-premises data center, only to find out later that [usage numbers were in the single digits][5].

Healey is not alone; many companies have these problems. They don't do routine assessments, such as checking electricity, cooling, licensing, and other factors, to see how much capacity they are using on a consistent basis.

### Overpaying

Companies are paying an average of 36% more for cloud services than they need to, according to the Business Insider article mentioned above.
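A quick back-of-the-envelope sketch shows how underutilization turns into overpayment. The numbers below are illustrative only, not taken from the report above:

```python
def effective_unit_cost(monthly_cost: float, provisioned_units: float, used_units: float) -> float:
    """What each *used* capacity unit really costs when the rest sits idle."""
    if not 0 < used_units <= provisioned_units:
        raise ValueError("used_units must be in (0, provisioned_units]")
    return monthly_cost / used_units

# Paying $10,000/month for 100 units of provisioned capacity:
at_full_use = effective_unit_cost(10_000, 100, 100)    # $100 per used unit
at_single_digit = effective_unit_cost(10_000, 100, 8)  # $1,250 per used unit
```

Running at single-digit utilization makes each unit of work you actually do more than ten times as expensive, and that is exactly the kind of waste that rolls over into a cloud bill if it isn't measured first.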

One reason is that public cloud providers enthusiastically support customers coming agnostically into their cloud. As customers leverage more of the platform's cloud-native features, they reach a monetary threshold, and technical support drops off dramatically.

It's a classic case of vendor lock-in, where the public cloud provider knows it is cost-prohibitive for the customer to migrate off its cloud, so it doesn't feel compelled to provide better service.

### Coming up

In part three of this series, we'll discuss the second of three pitfalls that everyone should avoid with hybrid multi-cloud. Stay tuned to learn why you should take care with moving everything to the cloud.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/reasons-move-to-cloud

作者:[Eric D.Schabell][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/eschabell
[1]:https://opensource.com/users/roelh
[2]:https://opensource.com/article/18/4/pitfalls-hybrid-multi-cloud
[3]:https://www.techrepublic.com/article/labor-costs-can-make-up-50-of-public-cloud-migration-is-it-worth-it/
[4]:http://www.businessinsider.com/companies-waste-62-billion-on-the-cloud-by-paying-for-storage-they-dont-need-according-to-a-report-2017-11
[5]:https://youtu.be/SPRUJ5Z-Aew
@ -1,3 +1,5 @@
Translating by jessie-pang

Why moving all your workloads to the cloud is a bad idea
======

@ -0,0 +1,180 @@
3 tips for moving your team to a microservices architecture
======


Microservices are gaining in popularity and providing new ways for tech companies to improve their services for end users. But what impact does the shift to microservices have on team culture and morale? What issues should CTOs, developers, and project managers consider when the best technological choice is a move to microservices?

Below you’ll find key advice and insight from CTOs and project leads as they reflect on their experiences with team culture and microservices.

### You can't build successful microservices without a successful team culture

When I was working with Java developers, there was tension within the camp about who got to work on the newest and meatiest features. Our engineering leadership had decided that we would exclusively use Java to build all new microservices.

There were great reasons for this decision, but as I will explain later, such a restrictive decision comes with some repercussions. Communicating the “why” of technical decisions can go a long way toward creating a culture where people feel included and informed.

When you're organizing and managing a team around microservices, it’s always challenging to balance the mood, morale, and overall culture. In most cases, the leadership needs to balance the risk of team members using new technology against the needs of the client and the business itself.

This dilemma, and many others like it, has led CTOs to ask themselves questions such as: How much freedom should I give my team when it comes to adopting new technologies? And perhaps even more importantly, how can I manage the overarching culture within my camp?

### Give every team member a chance to thrive

When the engineering leaders in the example above decided that Java was the best technology to use when building microservices, the decision was best for the company: Java is performant, and many of the senior people on the team were well-versed with it. However, not everyone on the team had experience with Java.

The problem was, our team was split into two camps: the Java guys and the JavaScript guys. As time went by and exciting new projects came up, we’d always reach for Java to get the job done. Before long, some annoyance within the JavaScript camp crept in: “Why do the Java guys always get to work on the exciting new projects while we’re left to do the mundane front-end tasks like implementing third-party analytics tools? We want a big, exciting project to work on too!”

Like most rifts, it started out small, but it grew worse over time.

The lesson I learned from that experience was to take your team’s expertise and favored technologies into account when choosing a de facto tech stack for your microservices and when adjusting your team's level of freedom to pick and choose their tools.

Sure, you need some structure, but if you’re too restrictive—or worse, blind to the desire of team members to innovate with different technologies—you may have a rift of your own to manage.

So evaluate your team closely and come up with a plan that empowers everyone. That way, every section of your team can get involved in major projects, and nobody will feel like they’re being left on the bench.

### Technology choices: stability vs. flexibility

Let’s say you hire a new junior developer who is excited about some brand new, fresh-off-the-press JavaScript framework.

That framework, while sporting some technical breakthroughs, may not have proven itself in production environments, and it probably doesn’t have great support available. CTOs have to make a difficult choice: Okaying that move for the morale of the team, or declining it to protect the company and its bottom line and to keep the project stable as the deadline approaches.

The answer depends on a lot of different factors (which also means there is no single correct answer).

### Technological freedom

“We give our team and ourselves 100% freedom in considering technology choices. We eventually identified two or three technologies not to use in the end, primarily due to not wanting to complicate our deployment story,” said [Benjamin Curtis][1], co-founder of [Honeybadger][2].

“In other words, we considered introducing new languages and new approaches into our tech stack when creating our microservices, and we actually did deploy a production microservice on a different stack at one point. [While we do generally] stick with technologies that we know in order to simplify our ops stack, we periodically revisit that decision to see if potential performance or reliability benefits would be gained by adopting a new technology, but so far we haven't made a change,” Curtis continued.

When I spoke with [Stephen Blum][3], CTO at [PubNub][4], he expressed a similar view, welcoming pretty much any technology that cuts the mustard: “We're totally open with it. We want to continue to push forward with new open source technologies that are available, and we only have a couple of constraints with the team that are very fair: [It] must run in container environment, and it has to be cost-effective.”

### High freedom, high responsibility

[Sumo Logic][5] CTO [Christian Beedgen][6] and chief architect [Stefan Zier][7] expanded on this topic, agreeing that if you’re going to give developers freedom to choose their technology, it must come with a high level of responsibility attached. “It’s really important that [whoever builds] the software takes full ownership for it. In other words, they not only build software, but they also run the software and remain responsible for the whole lifecycle.”

Beedgen and Zier recommend implementing a system that resembles a federal government system, keeping those freedoms in check by heightening responsibility: “[You need] a federal culture, really. You've got to have a system where multiple, independent teams can come together towards the greater goal. That limits the independence of the units to some degree, as they have to agree that there is potentially a federal government of some sort. But within those smaller groups, they can make as many decisions on their own as they like within guidelines established on a higher level.”

Decentralized, federal, or however you frame it, this approach to structuring microservice teams gives each team and each team member the freedom they want, without enabling anyone to pull the project apart.

However, not everyone agrees.

### Restrict technology to simplify things

[Darby Frey][8], co-founder of [Lead Honestly][9], takes a more restrictive approach to technology selection.

“At my last company we had a lot of services and a fairly small team, and one of the main things that made it work, especially for the team size that we had, was that every app was the same. Every backend service was a Ruby app,” he explained.

Frey explained that this helped simplify the lives of his team members: “[Every service has] the same testing framework, the same database backend, the same background job processing tool, et cetera. Everything was the same.

“That meant that when an engineer would jump around between apps, they weren’t having to learn a new pattern or learn a different language each time,” Frey continued, “So we're very aware and very strict about keeping that commonality.”

While Frey is sympathetic to developers wanting to introduce a new language, admitting that he “loves the idea of trying new things,” he feels that the cons still outweigh the pros.

“Having a polyglot architecture can increase the development and maintenance costs. If it's just all the same, you can focus on business value and business features and not have to be super siloed in how your services operate. I don't think everybody loves that decision, but at the end of the day, when they have to fix something on a weekend or in the middle of the night, they appreciate it,” said Frey.

### Centralized or decentralized organization

How your team is structured is also going to impact your microservices engineering culture—for better or worse.

For example, it’s common for software engineers to write the code before shipping it off to the operations team, who in turn deploy it to the servers. But when things break (and things always break!), an internal conflict occurs.

Because operation engineers don’t write the code themselves, they rarely understand problems when they first arise. As a result, they need to get in touch with those who did code it: the software engineers. So right from the get-go, you’ve got a middleman relaying messages between the problem and the team that can fix that problem.

To add an extra layer of complexity, because software engineers aren’t involved with operations, they often can’t fully appreciate how their code affects the overall operation of the platform. They learn of issues only when operations engineers complain about them.

As you can see, this is a relationship that’s destined for constant conflict.

### Navigating conflict

One way to attack this problem is by following the lead of Netflix and Amazon, both of which favor decentralized governance. Software development thought leaders James Lewis and Martin Fowler feel that decentralized governance is the way to go when it comes to microservice team organization, as they explain in a [blog post][10].

“One of the consequences of centralized governance is the tendency to standardize on single technology platforms. Experience shows that this approach is constricting—not every problem is a nail and not every solution a hammer,” the article reads. “Perhaps the apogee of decentralized governance is the ‘build it, run it’ ethos popularized by Amazon. Teams are responsible for all aspects of the software they build, including operating the software 24/7.”

Netflix, Lewis and Fowler write, is another company pushing higher levels of responsibility on development teams. They hypothesize that, because they’ll be responsible and called upon should anything go wrong later down the line, more care will be taken during the development and testing stages to ensure each microservice is shipshape.

“These ideas are about as far away from the traditional centralized governance model as it is possible to be,” they conclude.

### Who's on weekend pager duty?

When considering a centralized or decentralized culture, think about how it impacts your team members when problems inevitably crop up at inopportune times. A decentralized system implies that each decentralized team takes responsibility for one service or one set of services. But that also creates a problem: Silos.

That’s one reason why Lead Honestly's Frey isn’t a proponent of the concept of decentralized governance.

“The pattern of ‘a single team is responsible for a particular service’ is something you see a lot in microservice architectures. We don't do that, for a couple of reasons. The primary business reason is that we want teams that are responsible not for specific code but for customer-facing features. A team might be responsible for order processing, so that will touch multiple code bases but the end result for the business is that there is one team that owns the whole thing end to end, so there are fewer cracks for things to fall through,” Frey explained.

The other main reason, he continued, is that developers can take more ownership of the overall project: “They can actually think about [the project] holistically.”

Nathan Peck, developer advocate for container services at Amazon Web Services, [explained this problem in more depth][11]. In essence, when you separate the software engineers and the operations engineers, you make life harder for your team whenever an issue arises with the code—which is bad news for end users, too.

But does decentralization need to lead to separation and siloization?

Peck explained that his solution lies in [DevOps][12], a model aimed at tightening the feedback loop by bringing these two teams closer together, strengthening team culture and communication in the process. Peck describes this as the “you build it, you run it” approach.

However, that doesn’t mean teams need to get siloed or distanced away from partaking in certain tasks, as Frey suggests might happen.

“One of the most powerful approaches to decentralized governance is to build a mindset of ‘DevOps,’” Peck wrote. “[With this approach], engineers are involved in all parts of the software pipeline: writing code, building it, deploying the resulting product, and operating and monitoring it in production. The DevOps way contrasts with the older model of separating development teams from operations teams by having development teams ship code ‘over the wall’ to operations teams who were then responsible to run it and maintain it.”

DevOps, as [Armory][13] CTO [Isaac Mosquera][14] explained, is an agile software development framework and culture that’s gaining traction thanks to—well, pretty much everything that Peck said.

Interestingly, Mosquera feels that this approach actually flies in the face of [Conway’s Law][15]:

_"Organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations." — M. Conway_

“Instead of communication driving software design, now software architecture drives communication. Not only do teams operate and organize differently, but it requires a new set of tooling and process to support this type of architecture; i.e., DevOps,” Mosquera explained.

[Chris McFadden][16], VP of engineering at [SparkPost][17], offers an interesting example that might be worth following. At SparkPost, you’ll find decentralized governance—but you won’t find a one-team-per-service culture.

“The team that is developing these microservices started off as one team, but they’re now split up into three teams under the same larger group. Each team has some level of responsibility around certain domains and certain expertise, but the ownership of these services is not restricted to any one of these teams,” McFadden explained.

This approach, McFadden continued, allows any team to work on anything from new features to bug fixes to production issues relating to any of those services. There’s total flexibility and not a silo in sight.

“It allows [the teams to be] a little more flexible both in terms of new product development as well, just because you're not getting too restricted and that's based on our size as a company and as an engineering team. We really need to retain some flexibility,” he said.

However, size might matter here. McFadden admitted that if SparkPost was a lot larger, “then it would make more sense to have a single, larger team own one of those microservices.”

“[It's] better, I think, to have a little bit more broad responsibility for these services and it gives you a little more flexibility. At least that works for us at this time, where we are as an organization,” he said.

### A successful microservices engineering culture is a balancing act

When it comes to technology, freedom—with responsibility—looks to be the most rewarding path. Team members with differing technological preferences will come and go, while new challenges may require you to ditch technologies that have previously served you well. Software development is constantly in flux, so you’ll need to continually balance the needs of your team as new devices, technologies, and clients emerge.

As for structuring your teams, a decentralized yet un-siloed approach that leverages DevOps and instills a “you build it, you run it” mentality seems to be popular, although other schools of thought do exist. As usual, you’re going to have to experiment to see what suits your team best.

Here’s a quick recap on how to ensure your team culture meshes well with a microservices architecture:

  * **Be sustainable, yet flexible**: Balance sustainability without forgetting about flexibility and the need for your team to be innovative when the right opportunity comes along. However, there’s a distinct difference of opinion over how you should achieve that balance.

  * **Give equal opportunities**: Don’t favor one section of your team over another. If you’re going to impose restrictions, make sure it’s not going to fundamentally alienate team members from the get-go. Think about how your product roadmap is shaping up and forecast how it will be built and who’s going to do the work.

  * **Structure your team to be agile, yet responsible**: Decentralized governance and agile development is the flavor of the day for a good reason, but don’t forget to instill a sense of responsibility within each team.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/microservices-team-challenges

作者:[Jake Lumetta][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jakelumetta
[1]:https://twitter.com/stympy?lang=en
[2]:https://www.honeybadger.io/
[3]:https://twitter.com/stephenlb
[4]:https://www.pubnub.com/
[5]:http://sumologic.com/
[6]:https://twitter.com/raychaser
[7]:https://twitter.com/stefanzier
[8]:https://twitter.com/darbyfrey
[9]:https://leadhonestly.com/
[10]:https://martinfowler.com/articles/microservices.html#ProductsNotProjects
[11]:https://medium.com/@nathankpeck/microservice-principles-decentralized-governance-4cdbde2ff6ca
[12]:https://opensource.com/resources/devops
[13]:http://armory.io/
[14]:https://twitter.com/imosquera
[15]:https://en.wikipedia.org/wiki/Conway%27s_law
[16]:https://twitter.com/cristoirmac
[17]:https://www.sparkpost.com/
56
sources/talk/20180809 How do tools affect culture.md
Normal file
@ -0,0 +1,56 @@
How do tools affect culture?
======


Most of the DevOps community talks about how tools don’t matter much. The culture has to change first, the argument goes, which might modify how the tools are used.

I agree and disagree with that concept. I believe the relationship between tools and culture is more symbiotic and bidirectional than unidirectional. I have discovered this through real-world transformations across several companies now. I admit it’s hard to determine whether the tools changed the culture or whether the culture changed how the tools were used.

### Violating principles

Some tools violate core principles of modern development and operations. The primary violation I have seen are tools that require GUI interactions. This often separates operators from the value pipeline in a way that is cognitively difficult to overcome. If everything in your infrastructure is supposed to be configured and deployed through a value pipeline, then taking someone out of that flow inherently changes their perspective and engagement. Making manual modifications also injects risk into the system that creates unpredictability and undermines the value of the pipeline.

I’ve heard it said that these tools are fine and can be made to work within the new culture, and I’ve tried this in the past. Screen scraping and form manipulation tools have been used to attempt automation with some systems I’ve integrated. This is very fragile and doesn’t work on all systems. It ultimately required a lot of manual intervention.

Another system from a large vendor providing integrated monitoring and ticketing solutions for infrastructure seemed to implement its API as an afterthought, and this resulted in the system being unable to handle the load from the automated system. This required constant manual recoveries and sometimes the tedious task of manually closing errant tickets that shouldn’t have been created or that weren’t closed properly.

The individuals maintaining these systems experienced great frustration and often expressed a lack of confidence in the overall DevOps transformation. In one of these instances, we introduced a modern tool for monitoring and alerting, and the same individuals suddenly developed a tremendous amount of confidence in the overall DevOps transformation. I believe this is because tools can reinforce culture and improve it when a similar tool that lacks modern capabilities would otherwise stymie motivation and engagement.

### Choosing tools

At the NAIC (National Association of Insurance Commissioners), we’ve adopted a practice of evaluating new and existing tools based on features we believe reinforce the core principles of our value pipeline. We currently have seven items on our list:

  * REST API provided and fully functional (possesses all application functionality)
  * Ability to provision immutably (can be installed, configured, and started without human intervention)
  * Ability to provide all configuration through static files
  * Open source code
  * Uses open standards when available
  * Offered as Software as a Service (SaaS) or hosted (we don't run anything)
  * Deployable to public cloud (based on licensing and cost)

This is a prioritized list. Each item gets rated green, yellow, or red to indicate how much each statement applies to a particular technology. This creates a visual that makes it quite clear how the different candidates compare to one another. We then use this to make decisions about which tools we should use. We don’t make decisions solely on these criteria, but they do provide a clearer picture and help us know when we’re sacrificing principles. Transparency is a core principle in our culture, and this system helps reinforce that in our decision-making process.
We use green, yellow, and red because there’s not normally a clear binary representation of these criteria within each tool. For example, some tools have an incomplete API, which would result in yellow being applied. If the tool uses open standards like OpenAPI and there’s no other applicable open standard, then it would receive green for “Uses open standards when available.” However, a tracing system that uses OpenAPI and not OpenTracing would receive a yellow rating.
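A prioritized, color-coded matrix like this is easy to turn into a small script. The sketch below is illustrative only: the criteria come from the list above, but the numeric weights and the `rank` helper are assumptions, not the NAIC's actual process:

```python
# Hypothetical sketch of a prioritized green/yellow/red rating matrix.
SCORE = {"green": 2, "yellow": 1, "red": 0}

# Prioritized criteria from the list above, most important first.
CRITERIA = [
    "REST API provided and fully functional",
    "Ability to provision immutably",
    "All configuration through static files",
    "Open source code",
    "Uses open standards when available",
    "Offered as SaaS or hosted",
    "Deployable to public cloud",
]

def rank(candidates):
    """Weight higher-priority criteria more heavily, then sort tools by total."""
    n = len(CRITERIA)

    def total(ratings):
        # Earlier criteria get larger weights (n, n-1, ..., 1).
        return sum(SCORE[r] * (n - i) for i, r in enumerate(ratings))

    return sorted(((tool, total(r)) for tool, r in candidates.items()),
                  key=lambda pair: pair[1], reverse=True)

tools = {
    "tool-a": ["green", "green", "yellow", "green", "yellow", "red", "green"],
    "tool-b": ["yellow", "red", "green", "green", "green", "green", "yellow"],
}
```

Calling `rank(tools)` sorts the candidates, so the comparison the color grid makes visual is also available as a single number per tool.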

This type of system creates a common understanding of what is valued when it comes to tool selection, and it helps avoid unknowingly violating core principles of your value pipeline. We recently used this method to select [GitLab][1] as our version control and continuous integration system, and it has drastically improved our culture for many reasons. I estimated 50 users for the first year, and we’re already over 120 in just the first few months.

The tools we used previously didn’t allow us to contribute back our own features, collaborate transparently, or automate so completely. We’ve also benefited from GitLab’s culture influencing ours. Its [handbook][2] and open communication have been invaluable to our growth. Tools, and the companies that make them, can and will influence your company’s culture. What are you willing to allow in?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/how-tools-affect-culture

作者:[Dan Barker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/barkerd427
[1]:https://about.gitlab.com/
[2]:https://about.gitlab.com/handbook/
@ -1,3 +1,5 @@
Translating by imquanquan

Here are some amazing advantages of Go that you don’t hear much about
============================================================

@ -220,4 +222,4 @@ via: https://medium.freecodecamp.org/here-are-some-amazing-advantages-of-go-that
[19]:https://golang.org/src/encoding/json/encode.go
[20]:https://tour.golang.org/
[21]:https://github.com/kirillrogovoy/
[22]:https://twitter.com/krogovoy
[22]:https://twitter.com/krogovoy
@ -0,0 +1,87 @@
3 pitfalls everyone should avoid with hybrid multicloud
======


This article was co-written with [Roel Hodzelmans][1].

We're all told the cloud is the way to ensure a digital future for our businesses. But which cloud? From cloud to hybrid cloud to hybrid multi-cloud, you need to make choices, and these choices don't preclude the daily work of enhancing your customers' experience or agile delivery of the applications they need.

This article is the first in a four-part series on avoiding pitfalls in hybrid multi-cloud computing. Let's start by examining multi-cloud, hybrid cloud, and hybrid multi-cloud and what makes them different from one another.

### Hybrid vs. multi-cloud

There are many conversations you may be having in your business around moving to the cloud. For example, you may want to take your on-premises computing capacity and turn it into your own private cloud. You may wish to provide developers with a cloud-like experience using the same resources you already have. A more traditional reason for expansion is to use external computing resources to augment those in your own data centers. The latter leads you to the various public cloud providers, as well as to our first definition, multi-cloud.

#### Multi-cloud

Multi-cloud means using multiple clouds from multiple providers for multiple tasks.

![Multi-cloud][3]

Figure 1. Multi-cloud IT with multiple isolated cloud environments

Typically, multi-cloud refers to the use of several different public clouds in order to achieve greater flexibility, lower costs, avoid vendor lock-in, or use specific regional cloud providers.

A challenge of the multi-cloud approach is achieving consistent policies, compliance, and management with different providers involved.

Multi-cloud is mainly a strategy to expand your business while leveraging multi-vendor cloud solutions and spreading the risk of lock-in. Figure 1 shows the isolated nature of cloud services in this model, without any sort of coordination between the services and business applications. Each is managed separately, and applications are isolated to services found in their environments.

#### Hybrid cloud

Hybrid cloud solves issues where isolation and coordination are central to the solution. It is a combination of one or more public and private clouds with at least a degree of workload portability, integration, orchestration, and unified management.

![Hybrid cloud][5]

Figure 2. Hybrid clouds may be on or off premises, but must have a degree of interoperability

The key issue here is that there is an element of interoperability, migration potential, and a connection between tasks running in public clouds and on-premises infrastructure, even if it's not always seamless or otherwise fully implemented.

If your cloud model is missing portability, integration, orchestration, and management, then it's just a bunch of clouds, not a hybrid cloud.

The cloud environments in Fig. 2 include at least one private and public cloud. They can be off or on premises, but they have some degree of the following:
|
||||
|
||||
* Interoperability
|
||||
* Application portability
|
||||
* Data portability
|
||||
* Common management
|
||||
|
||||
|
||||
|
||||
As you can probably guess, combining multi-cloud and hybrid cloud results in a hybrid multi-cloud. But what does that look like?
|
||||
|
||||
### Hybrid multi-cloud
|
||||
|
||||
Hybrid multi-cloud pulls together multiple clouds and provides the tools to ensure interoperability between the various services in hybrid and multi-cloud solutions.
|
||||
|
||||
![Hybrid multi-cloud][7]
|
||||
|
||||
Figure 3. Hybrid multi-cloud solutions using open technologies
|
||||
|
||||
Bringing these together can be a serious challenge, but the result ensures better use of resources without isolation in their respective clouds.
|
||||
|
||||
Fig. 3 shows an example of hybrid multi-cloud based on open technologies for interoperability, workload portability, and management.
|
||||
|
||||
### Moving forward: Pitfalls of hybrid multi-cloud
|
||||
|
||||
In part two of this series, we'll look at the first of three pitfalls to avoid with hybrid multi-cloud. Namely, why cost is not always the obvious motivator when determining how to transition your business to the cloud.
|
||||
|
||||
This article is based on "[3 pitfalls everyone should avoid with hybrid multi-cloud][8]," a talk the authors will be giving at [Red Hat Summit 2018][9], which will be held May 8-10 in San Francisco. [Register by May 7][9] to save US$ 500 off of registration. Use discount code **OPEN18** on the payment page to apply the discount.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/4/pitfalls-hybrid-multi-cloud
|
||||
|
||||
作者:[Eric D.Schabell][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/eschabell
|
||||
[1]:https://opensource.com/users/roelh
|
||||
[3]:https://opensource.com/sites/default/files/u128651/multi-cloud.png (Multi-cloud)
|
||||
[5]:https://opensource.com/sites/default/files/u128651/hybrid-cloud.png (Hybrid cloud)
|
||||
[7]:https://opensource.com/sites/default/files/u128651/hybrid-multicloud.png (Hybrid multi-cloud)
|
||||
[8]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=153892
|
||||
[9]:https://www.redhat.com/en/summit/2018
|
@ -0,0 +1,330 @@
|
||||
Tuptime - A Tool To Report The Historical Uptime Of Linux System
|
||||
======
|
||||
Earlier this month we published an article about system uptime that helps users check how long a Linux system has been running without downtime, and since what date. That can be done using 11 different methods.
|
||||
|
||||
`uptime` is one of the best-known commands; everyone reaches for it when they need to check a Linux server's uptime.
|
||||
|
||||
But it doesn't show the historical or statistical running time of a Linux system, which is where tuptime comes into the picture.
|
||||
|
||||
Server uptime is especially important when the server runs critical applications such as online portals.
|
||||
|
||||
**Suggested Read :** [11 Methods To Find System/Server Uptime In Linux][1]
|
||||
|
||||
### What Is tuptime?
|
||||
|
||||
[Tuptime][2] is a tool that reports the historical and statistical running time of a system, preserving the data across restarts. It is like the `uptime` command, but with far more interesting output.
|
||||
|
||||
### tuptime Features
|
||||
|
||||
* Count system startups
|
||||
* Register first boot time (a.k.a. installation time)
|
||||
* Count graceful and accidental shutdowns
|
||||
* Uptime and downtime percentage since first boot time
|
||||
* Accumulated system uptime, downtime and total
|
||||
* Largest, shortest and average uptime and downtime
|
||||
* Current uptime
|
||||
* Print formatted table or list with most of the previous values
|
||||
* Register used kernels
|
||||
* Narrow reports since and/or until a given startup or timestamp
|
||||
* Reports in CSV
|
||||
|
||||
|
||||
|
||||
### Prerequisites
|
||||
|
||||
Make sure Python 3 is installed on your system as a prerequisite. If not, install it using your distribution's package manager.
|
||||
|
||||
**Suggested Read :** [3 Methods To Install Latest Python3 Package On CentOS 6 System][3]
|
||||
|
||||
### How To Install tuptime
|
||||
|
||||
A few distributions offer a tuptime package, but it may be a somewhat older version. I would advise installing the latest available version, using the method below, to get all the features.
|
||||
|
||||
Clone the tuptime repository from GitHub:
|
||||
```
|
||||
# git clone https://github.com/rfrail3/tuptime.git
|
||||
|
||||
```
|
||||
|
||||
Copy the executable `tuptime/src/tuptime` to `/usr/bin/` and assign it 755 permissions:
|
||||
```
|
||||
# cp tuptime/src/tuptime /usr/bin/tuptime
|
||||
# chmod 755 /usr/bin/tuptime
|
||||
|
||||
```
|
||||
|
||||
All scripts, units, and related files are provided inside this repo, so copy the necessary files into the appropriate locations to get the full functionality of the tuptime utility.
|
||||
|
||||
Add a tuptime user. The tool doesn't run as a daemon; it only needs to execute when the init manager starts up or shuts down the system:
|
||||
```
|
||||
# useradd -d /var/lib/tuptime -s /bin/sh tuptime
|
||||
|
||||
```
|
||||
|
||||
Change the ownership of the database directory:
|
||||
```
|
||||
# chown -R tuptime:tuptime /var/lib/tuptime
|
||||
|
||||
```
|
||||
|
||||
Copy the cron file from `tuptime/src/cron.d/tuptime` to `/etc/cron.d/` and assign it 644 permissions:
|
||||
```
|
||||
# cp tuptime/src/cron.d/tuptime /etc/cron.d/tuptime
|
||||
# chmod 644 /etc/cron.d/tuptime
|
||||
|
||||
```
|
||||
|
||||
Add a system service file based on your init system. Use the command below to check whether your system is running systemd or init:
|
||||
```
|
||||
# ps -p 1
|
||||
PID TTY TIME CMD
|
||||
1 ? 00:00:03 systemd
|
||||
|
||||
# ps -p 1
|
||||
PID TTY TIME CMD
|
||||
1 ? 00:00:00 init
|
||||
|
||||
```
|
||||
|
||||
If it is a systemd system, copy the service file and enable it:
|
||||
```
|
||||
# cp tuptime/src/systemd/tuptime.service /lib/systemd/system/
|
||||
# chmod 644 /lib/systemd/system/tuptime.service
|
||||
# systemctl enable tuptime.service
|
||||
|
||||
```
|
||||
|
||||
If you have an upstart system, copy the file:
|
||||
```
|
||||
# cp tuptime/src/init.d/redhat/tuptime /etc/init.d/tuptime
|
||||
# chmod 755 /etc/init.d/tuptime
|
||||
# chkconfig --add tuptime
|
||||
# chkconfig tuptime on
|
||||
|
||||
```
|
||||
|
||||
If you have an init (SysV) system, copy the file:
|
||||
```
|
||||
# cp tuptime/src/init.d/debian/tuptime /etc/init.d/tuptime
|
||||
# chmod 755 /etc/init.d/tuptime
|
||||
# update-rc.d tuptime defaults
|
||||
# /etc/init.d/tuptime start
|
||||
|
||||
```
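Putting the detection step together with the branch choice, here is a minimal sketch; the copy/enable commands themselves are the ones shown above and are only named, not repeated:

```shell
# Report which init system PID 1 is, to decide which service file to install.
# /proc/1/comm is Linux-specific; ps is the fallback shown in the article.
init_name() {
    cat /proc/1/comm 2>/dev/null || ps -p 1 -o comm=
}

case "$(init_name)" in
    systemd) echo "install tuptime.service and enable it with systemctl" ;;
    init)    echo "install the matching init.d script" ;;
    *)       echo "unrecognized init: $(init_name)" ;;
esac
```

The `case` just prints which branch applies; on your own system you would replace the `echo` lines with the `cp`/`chmod`/enable commands from the matching section above.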
|
||||
|
||||
### How To Use tuptime
|
||||
|
||||
Make sure you run the command as a privileged user. Initially you will get output similar to this:
|
||||
```
|
||||
# tuptime
|
||||
System startups: 1 since 02:48:00 AM 04/12/2018
|
||||
System shutdowns: 0 ok - 0 bad
|
||||
System uptime: 100.0 % - 26 days, 5 hours, 31 minutes and 52 seconds
|
||||
System downtime: 0.0 % - 0 seconds
|
||||
System life: 26 days, 5 hours, 31 minutes and 52 seconds
|
||||
|
||||
Largest uptime: 26 days, 5 hours, 31 minutes and 52 seconds from 02:48:00 AM 04/12/2018
|
||||
Shortest uptime: 26 days, 5 hours, 31 minutes and 52 seconds from 02:48:00 AM 04/12/2018
|
||||
Average uptime: 26 days, 5 hours, 31 minutes and 52 seconds
|
||||
|
||||
Largest downtime: 0 seconds
|
||||
Shortest downtime: 0 seconds
|
||||
Average downtime: 0 seconds
|
||||
|
||||
Current uptime: 26 days, 5 hours, 31 minutes and 52 seconds since 02:48:00 AM 04/12/2018
|
||||
|
||||
```
|
||||
|
||||
### Details:
|
||||
|
||||
* **`System startups:`** Total number of system startups from the "since" date to the "until" date. The "until" date is included when a narrowed range is used.
|
||||
* **`System shutdowns:`** Total number of shutdowns, done correctly or incorrectly. The separator usually points to the state of the last shutdown (for example, `->` pointing at `bad`).
|
||||
* **`System uptime:`** Percentage of uptime and time counter.
|
||||
* **`System downtime:`** Percentage of downtime and time counter.
|
||||
* **`System life:`** Time counter since first startup date until last.
|
||||
* **`Largest/Shortest uptime:`** Time counter and date with the largest/shortest uptime register.
|
||||
* **`Largest/Shortest downtime:`** Time counter and date with the largest/shortest downtime register.
|
||||
* **`Average uptime/downtime:`** Time counter with the average time.
|
||||
* **`Current uptime:`** Actual time counter and date since registered boot date.
|
||||
|
||||
|
||||
|
||||
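The uptime and downtime percentages are plain arithmetic over the accumulated second counters. As a rough sketch (the second counts below are approximations of the sample output above, not values read back from tuptime):

```shell
# Percentage formula behind the "System uptime" / "System downtime" fields:
#   uptime% = uptime_seconds * 100 / (uptime_seconds + downtime_seconds)
uptime_pct() {
    awk -v up="$1" -v down="$2" 'BEGIN { printf "%.1f", up * 100 / (up + down) }'
}

up=2434518     # roughly 28 days 4 hours, as in the sample output
down=75262     # roughly 20 hours 54 minutes
echo "System uptime: $(uptime_pct "$up" "$down") %"
echo "System downtime: $(uptime_pct "$down" "$up") %"
```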
If you run the same command a few days and a couple of reboots later, the output will look more like this:
|
||||
```
|
||||
# tuptime
|
||||
System startups: 3 since 02:48:00 AM 04/12/2018
|
||||
System shutdowns: 0 ok -> 2 bad
|
||||
System uptime: 97.0 % - 28 days, 4 hours, 6 minutes and 0 seconds
|
||||
System downtime: 3.0 % - 20 hours, 54 minutes and 22 seconds
|
||||
System life: 29 days, 1 hour, 0 minutes and 23 seconds
|
||||
|
||||
Largest uptime: 26 days, 5 hours, 32 minutes and 57 seconds from 02:48:00 AM 04/12/2018
|
||||
Shortest uptime: 1 hour, 31 minutes and 12 seconds from 02:17:11 AM 05/11/2018
|
||||
Average uptime: 9 days, 9 hours, 22 minutes and 0 seconds
|
||||
|
||||
Largest downtime: 20 hours, 51 minutes and 58 seconds from 08:20:57 AM 05/08/2018
|
||||
Shortest downtime: 2 minutes and 24 seconds from 02:14:47 AM 05/11/2018
|
||||
Average downtime: 10 hours, 27 minutes and 11 seconds
|
||||
|
||||
Current uptime: 1 hour, 31 minutes and 12 seconds since 02:17:11 AM 05/11/2018
|
||||
|
||||
```
|
||||
|
||||
Enumerate, as a table, each startup number, startup date, uptime, shutdown date, end status, and downtime. Multiple ordering options can be combined:
|
||||
```
|
||||
# tuptime -t
|
||||
No. Startup Date Uptime Shutdown Date End Downtime
|
||||
|
||||
1 02:48:00 AM 04/12/2018 26 days, 5 hours, 32 minutes and 57 seconds 08:20:57 AM 05/08/2018 BAD 20 hours, 51 minutes and 58 seconds
|
||||
2 05:12:55 AM 05/09/2018 1 day, 21 hours, 1 minute and 52 seconds 02:14:47 AM 05/11/2018 BAD 2 minutes and 24 seconds
|
||||
3 02:17:11 AM 05/11/2018 1 hour, 34 minutes and 33 seconds
|
||||
|
||||
```
|
||||
|
||||
Enumerate, as a list, each startup number, startup date, uptime, shutdown date, end status, and downtime. Multiple ordering options can be combined:
|
||||
```
|
||||
# tuptime -l
|
||||
Startup: 1 at 02:48:00 AM 04/12/2018
|
||||
Uptime: 26 days, 5 hours, 32 minutes and 57 seconds
|
||||
Shutdown: BAD at 08:20:57 AM 05/08/2018
|
||||
Downtime: 20 hours, 51 minutes and 58 seconds
|
||||
|
||||
Startup: 2 at 05:12:55 AM 05/09/2018
|
||||
Uptime: 1 day, 21 hours, 1 minute and 52 seconds
|
||||
Shutdown: BAD at 02:14:47 AM 05/11/2018
|
||||
Downtime: 2 minutes and 24 seconds
|
||||
|
||||
Startup: 3 at 02:17:11 AM 05/11/2018
|
||||
Uptime: 1 hour, 34 minutes and 36 seconds
|
||||
|
||||
```
|
||||
|
||||
To print kernel information along with the tuptime output:
|
||||
```
|
||||
# tuptime -k
|
||||
System startups: 3 since 02:48:00 AM 04/12/2018
|
||||
System shutdowns: 0 ok -> 2 bad
|
||||
System uptime: 97.0 % - 28 days, 4 hours, 11 minutes and 25 seconds
|
||||
System downtime: 3.0 % - 20 hours, 54 minutes and 22 seconds
|
||||
System life: 29 days, 1 hour, 5 minutes and 47 seconds
|
||||
System kernels: 1
|
||||
|
||||
Largest uptime: 26 days, 5 hours, 32 minutes and 57 seconds from 02:48:00 AM 04/12/2018
|
||||
...with kernel: Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
|
||||
Shortest uptime: 1 hour, 36 minutes and 36 seconds from 02:17:11 AM 05/11/2018
|
||||
...with kernel: Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
|
||||
Average uptime: 9 days, 9 hours, 23 minutes and 48 seconds
|
||||
|
||||
Largest downtime: 20 hours, 51 minutes and 58 seconds from 08:20:57 AM 05/08/2018
|
||||
...with kernel: Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
|
||||
Shortest downtime: 2 minutes and 24 seconds from 02:14:47 AM 05/11/2018
|
||||
...with kernel: Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
|
||||
Average downtime: 10 hours, 27 minutes and 11 seconds
|
||||
|
||||
Current uptime: 1 hour, 36 minutes and 36 seconds since 02:17:11 AM 05/11/2018
|
||||
...with kernel: Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
|
||||
|
||||
```
|
||||
|
||||
Change the date format. By default dates are printed according to the system locale:
|
||||
```
|
||||
# tuptime -d '%d/%m/%y'
|
||||
System startups: 3 since 12/04/18
|
||||
System shutdowns: 0 ok -> 2 bad
|
||||
System uptime: 97.0 % - 28 days, 4 hours, 15 minutes and 18 seconds
|
||||
System downtime: 3.0 % - 20 hours, 54 minutes and 22 seconds
|
||||
System life: 29 days, 1 hour, 9 minutes and 41 seconds
|
||||
|
||||
Largest uptime: 26 days, 5 hours, 32 minutes and 57 seconds from 12/04/18
|
||||
Shortest uptime: 1 hour, 40 minutes and 30 seconds from 11/05/18
|
||||
Average uptime: 9 days, 9 hours, 25 minutes and 6 seconds
|
||||
|
||||
Largest downtime: 20 hours, 51 minutes and 58 seconds from 08/05/18
|
||||
Shortest downtime: 2 minutes and 24 seconds from 11/05/18
|
||||
Average downtime: 10 hours, 27 minutes and 11 seconds
|
||||
|
||||
Current uptime: 1 hour, 40 minutes and 30 seconds since 11/05/18
|
||||
|
||||
```
|
||||
|
||||
Print information about tuptime's internals. This is useful for debugging how it obtains its values:
|
||||
```
|
||||
# tuptime -v
|
||||
INFO:Arguments: {'endst': 0, 'seconds': None, 'table': False, 'csv': False, 'ts': None, 'silent': False, 'order': False, 'since': 0, 'kernel': False, 'reverse': False, 'until': 0, 'db_file': '/var/lib/tuptime/tuptime.db', 'lst': False, 'tu': None, 'date_format': '%X %x', 'update': True}
|
||||
INFO:Linux system
|
||||
INFO:uptime = 5773.54
|
||||
INFO:btime = 1526019431
|
||||
INFO:kernel = Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
|
||||
INFO:Execution user = 0
|
||||
INFO:Directory exists = /var/lib/tuptime
|
||||
INFO:DB file exists = /var/lib/tuptime/tuptime.db
|
||||
INFO:Last btime from db = 1526019431
|
||||
INFO:Last uptime from db = 5676.04
|
||||
INFO:Drift over btime = 0
|
||||
INFO:System wasn't restarted. Updating db values...
|
||||
System startups: 3 since 02:48:00 AM 04/12/2018
|
||||
System shutdowns: 0 ok -> 2 bad
|
||||
System uptime: 97.0 % - 28 days, 4 hours, 11 minutes and 2 seconds
|
||||
System downtime: 3.0 % - 20 hours, 54 minutes and 22 seconds
|
||||
System life: 29 days, 1 hour, 5 minutes and 25 seconds
|
||||
|
||||
Largest uptime: 26 days, 5 hours, 32 minutes and 57 seconds from 02:48:00 AM 04/12/2018
|
||||
Shortest uptime: 1 hour, 36 minutes and 14 seconds from 02:17:11 AM 05/11/2018
|
||||
Average uptime: 9 days, 9 hours, 23 minutes and 41 seconds
|
||||
|
||||
Largest downtime: 20 hours, 51 minutes and 58 seconds from 08:20:57 AM 05/08/2018
|
||||
Shortest downtime: 2 minutes and 24 seconds from 02:14:47 AM 05/11/2018
|
||||
Average downtime: 10 hours, 27 minutes and 11 seconds
|
||||
|
||||
Current uptime: 1 hour, 36 minutes and 14 seconds since 02:17:11 AM 05/11/2018
|
||||
|
||||
```
|
||||
|
||||
Print a quick reference of the command-line parameters:
|
||||
```
|
||||
# tuptime -h
|
||||
Usage: tuptime [options]
|
||||
|
||||
Options:
|
||||
-h, --help show this help message and exit
|
||||
-c, --csv csv output
|
||||
-d DATE_FORMAT, --date=DATE_FORMAT
|
||||
date format output
|
||||
-f FILE, --filedb=FILE
|
||||
database file
|
||||
-g, --graceful register a gracefully shutdown
|
||||
-k, --kernel print kernel information
|
||||
-l, --list enumerate system life as list
|
||||
-n, --noup avoid update values
|
||||
-o TYPE, --order=TYPE
|
||||
order enumerate by []
|
||||
-r, --reverse reverse order
|
||||
-s, --seconds output time in seconds and epoch
|
||||
-S SINCE, --since=SINCE
|
||||
restrict since this register number
|
||||
-t, --table enumerate system life as table
|
||||
--tsince=TIMESTAMP restrict since this epoch timestamp
|
||||
--tuntil=TIMESTAMP restrict until this epoch timestamp
|
||||
-U UNTIL, --until=UNTIL
|
||||
restrict until this register number
|
||||
-v, --verbose verbose output
|
||||
-V, --version show version
|
||||
-x, --silent update values into db without output
|
||||
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/
|
||||
|
||||
作者:[Prakash Subramanian][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/prakash/
|
||||
[1]:https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/
|
||||
[2]:https://github.com/rfrail3/tuptime/
|
||||
[3]:https://www.2daygeek.com/3-methods-to-install-latest-python3-package-on-centos-6-system/
|
@ -1,218 +0,0 @@
|
||||
BriFuture is translating
|
||||
|
||||
Twitter Sentiment Analysis using NodeJS
|
||||
============================================================
|
||||
|
||||
|
||||

|
||||
|
||||
If you want to know how people feel about something, there is no better place than Twitter. It is a continuous stream of opinion, with around 6,000 new tweets being created every second. The internet is quick to react to events and if you want to be updated with the latest and hottest, Twitter is the place to be.
|
||||
|
||||
Now, we live in an age where data is king and companies put Twitter's data to good use. From gauging the reception of their new products to trying to predict the next market trend, analysis of Twitter data has many uses. Businesses use it to market their product to the right customers, to gather feedback on their brand and improve, or to assess the reasons for the failure of a product or promotional campaign. Not only businesses—many political and economic decisions are made based on observation of people's opinion. Today, I will try to give you a taste of simple [sentiment analysis][1] of tweets to determine whether a tweet is positive, negative or neutral. It won't be as sophisticated as those used by professionals, but nonetheless, it will give you an idea about opinion mining.
|
||||
|
||||
We will be using NodeJs since JavaScript is ubiquitous nowadays and is one of the easiest languages to get started with.
|
||||
|
||||
### Prerequisite:
|
||||
|
||||
* NodeJs and NPM installed
|
||||
|
||||
* A little experience with NodeJs and NPM packages
|
||||
|
||||
* Some familiarity with the command line.
|
||||
|
||||
Alright, that's it. Let's get started.
|
||||
|
||||
### Getting Started
|
||||
|
||||
Make a new directory for your project, open a terminal (or command line), and change into it. Run the `npm init -y` command; this will create a `package.json` in your directory. Now we can install the npm packages we need. We just need to create a new file named `index.js` and then we are all set to start coding.
|
||||
|
||||
### Getting the tweets
|
||||
|
||||
Well, we want to analyze tweets and for that, we need programmatic access to Twitter. For this, we will use the [twit][2] package. So, let's install it with the `npm i twit` command. We also need to register an app through our account to gain access to the Twitter API. Head over to this [link][3], fill in all the details, and copy the ‘Consumer Key’, ‘Consumer Secret’, ‘Access Token’ and ‘Access Token Secret’ from the 'Keys and Access Tokens' tab into a `.env` file like this:
|
||||
|
||||
```
|
||||
# .env
|
||||
# replace the stars with values you copied
|
||||
CONSUMER_KEY=************
|
||||
CONSUMER_SECRET=************
|
||||
ACCESS_TOKEN=************
|
||||
ACCESS_TOKEN_SECRET=************
|
||||
|
||||
```
|
||||
|
||||
Now, let's begin.
|
||||
|
||||
Open `index.js` in your favorite code editor. We need to install the `dotenv` package to read from the `.env` file, with the command `npm i dotenv`. Alright, let's create an API instance.
|
||||
|
||||
```
|
||||
const Twit = require('twit');
|
||||
const dotenv = require('dotenv');
|
||||
|
||||
dotenv.config();
|
||||
|
||||
const { CONSUMER_KEY
|
||||
, CONSUMER_SECRET
|
||||
, ACCESS_TOKEN
|
||||
, ACCESS_TOKEN_SECRET
|
||||
} = process.env;
|
||||
|
||||
const config_twitter = {
|
||||
consumer_key: CONSUMER_KEY,
|
||||
consumer_secret: CONSUMER_SECRET,
|
||||
access_token: ACCESS_TOKEN,
|
||||
access_token_secret: ACCESS_TOKEN_SECRET,
|
||||
timeout_ms: 60*1000
|
||||
};
|
||||
|
||||
let api = new Twit(config_twitter);
|
||||
|
||||
```
|
||||
|
||||
Here we have established a connection to the Twitter with the required configuration. But we are not doing anything with it. Let's define a function to get tweets.
|
||||
|
||||
```
|
||||
async function get_tweets(q, count) {
|
||||
let tweets = await api.get('search/tweets', {q, count, tweet_mode: 'extended'});
|
||||
return tweets.data.statuses.map(tweet => tweet.full_text);
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
This is an async function because `api.get` returns a promise, and instead of chaining `then`s, I wanted an easy way to extract the text of the tweets. It accepts two arguments, `q` and `count`: `q` is the query or keyword we want to search for, and `count` is the number of tweets we want the API to return.
|
||||
|
||||
So now we have an easy way to get the full texts from the tweets. But we still have a problem: the text we get may contain links, or may be truncated if it's a retweet. So we will write another function that extracts and returns the text of the tweets, handling retweets and removing any links.
|
||||
|
||||
```
|
||||
function get_text(tweet) {
|
||||
let txt = tweet.retweeted_status ? tweet.retweeted_status.full_text : tweet.full_text;
|
||||
return txt.split(/ |\n/).filter(v => !v.startsWith('http')).join(' ');
|
||||
}
|
||||
|
||||
async function get_tweets(q, count) {
|
||||
let tweets = await api.get('search/tweets', {q, count, 'tweet_mode': 'extended'});
|
||||
return tweets.data.statuses.map(get_text);
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
So, now we have the text of tweets. Our next step is getting the sentiment from the text. For this, we will use another package from npm, the [`sentiment`][4] package. Let's install it like the other packages and add it to our script.
|
||||
|
||||
```
|
||||
const sentiment = require('sentiment')
|
||||
|
||||
```
|
||||
|
||||
Using `sentiment` is very easy. We just have to call the `sentiment` function on the text that we want to analyze, and it will return the comparative score of the text. If the score is below 0, it expresses a negative sentiment; a score above 0 is positive; and 0, as you may have guessed, is neutral. Based on this, we will print the tweets in different colors - green for positive, red for negative and blue for neutral. For this, we will use the [`colors`][5] package. Let's install it like the other packages and add it to our script.
|
||||
|
||||
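As an aside, the scoring rule just described (above 0 positive, below 0 negative, 0 neutral) can be imitated with a few lines of shell, independent of the npm package; the tiny word lists here are invented purely for illustration and are nothing like the AFINN list the real `sentiment` package uses:

```shell
# Toy comparative score: +1 per "positive" word, -1 per "negative" word,
# averaged over total word count. Word lists are illustrative only.
score() {
    echo "$1" | tr ' ' '\n' | awk '
        /^(good|great|love|happy)$/ { s++ }
        /^(bad|awful|hate|sad)$/    { s-- }
        { n++ }
        END { printf "%.2f", (n ? s / n : 0) }'
}

score "i love this great movie"   # positive, so above 0
```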
```
|
||||
const colors = require('colors/safe');
|
||||
|
||||
```
|
||||
|
||||
Alright, now let us bring it all together in a `main` function.
|
||||
|
||||
```
|
||||
async function main() {
|
||||
  let keyword = /* define the keyword that you want to search for */;
|
||||
  let count = /* define the count of tweets you want */;
|
||||
let tweets = await get_tweets(keyword, count);
|
||||
for (tweet of tweets) {
|
||||
let score = sentiment(tweet).comparative;
|
||||
tweet = `${tweet}\n`;
|
||||
if (score > 0) {
|
||||
tweet = colors.green(tweet);
|
||||
} else if (score < 0) {
|
||||
tweet = colors.red(tweet);
|
||||
} else {
|
||||
tweet = colors.blue(tweet);
|
||||
}
|
||||
console.log(tweet);
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
And finally, execute the `main` function.
|
||||
|
||||
```
|
||||
main();
|
||||
|
||||
```
|
||||
|
||||
There you have it, a short script for analyzing the basic sentiment of tweets.
|
||||
|
||||
```
|
||||
// full script
|
||||
const Twit = require('twit');
|
||||
const dotenv = require('dotenv');
|
||||
const sentiment = require('sentiment');
|
||||
const colors = require('colors/safe');
|
||||
|
||||
dotenv.config();
|
||||
|
||||
const { CONSUMER_KEY
|
||||
, CONSUMER_SECRET
|
||||
, ACCESS_TOKEN
|
||||
, ACCESS_TOKEN_SECRET
|
||||
} = process.env;
|
||||
|
||||
const config_twitter = {
|
||||
consumer_key: CONSUMER_KEY,
|
||||
consumer_secret: CONSUMER_SECRET,
|
||||
access_token: ACCESS_TOKEN,
|
||||
access_token_secret: ACCESS_TOKEN_SECRET,
|
||||
timeout_ms: 60*1000
|
||||
};
|
||||
|
||||
let api = new Twit(config_twitter);
|
||||
|
||||
function get_text(tweet) {
|
||||
let txt = tweet.retweeted_status ? tweet.retweeted_status.full_text : tweet.full_text;
|
||||
return txt.split(/ |\n/).filter(v => !v.startsWith('http')).join(' ');
|
||||
}
|
||||
|
||||
async function get_tweets(q, count) {
|
||||
let tweets = await api.get('search/tweets', {q, count, 'tweet_mode': 'extended'});
|
||||
return tweets.data.statuses.map(get_text);
|
||||
}
|
||||
|
||||
async function main() {
|
||||
let keyword = 'avengers';
|
||||
let count = 100;
|
||||
let tweets = await get_tweets(keyword, count);
|
||||
for (tweet of tweets) {
|
||||
let score = sentiment(tweet).comparative;
|
||||
tweet = `${tweet}\n`;
|
||||
if (score > 0) {
|
||||
tweet = colors.green(tweet);
|
||||
} else if (score < 0) {
|
||||
tweet = colors.red(tweet);
|
||||
} else {
|
||||
tweet = colors.blue(tweet)
|
||||
}
|
||||
console.log(tweet)
|
||||
}
|
||||
}
|
||||
|
||||
main();
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://boostlog.io/@anshulc95/twitter-sentiment-analysis-using-nodejs-5ad1331247018500491f3b6a
|
||||
|
||||
作者:[Anshul Chauhan][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://boostlog.io/@anshulc95
|
||||
[1]:https://en.wikipedia.org/wiki/Sentiment_analysis
|
||||
[2]:https://github.com/ttezel/twit
|
||||
[3]:https://boostlog.io/@anshulc95/apps.twitter.com
|
||||
[4]:https://www.npmjs.com/package/sentiment
|
||||
[5]:https://www.npmjs.com/package/colors
|
||||
[6]:https://boostlog.io/tags/nodejs
|
||||
[7]:https://boostlog.io/tags/twitter
|
||||
[8]:https://boostlog.io/@anshulc95
|
@ -1,3 +1,5 @@
|
||||
FSSlc translating
|
||||
|
||||
View The Contents Of An Archive Or Compressed File Without Extracting It
|
||||
======
|
||||

|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by leemeans
|
||||
Setting Up a Timer with systemd in Linux
|
||||
======
|
||||
|
||||
|
@ -1,138 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
How To Use Pbcopy And Pbpaste Commands On Linux
|
||||
======
|
||||
|
||||

|
||||
|
||||
Since Linux and Mac OS X are *nix-based systems, many commands work on both platforms. However, some commands are not available on both, for example **pbcopy** and **pbpaste**, which are available only on Mac OS X. The pbcopy command copies standard input into the clipboard; you can then paste the clipboard contents wherever you want using the pbpaste command. Of course, there are Linux alternatives to these commands, for example **xclip**, which does exactly the same thing as pbcopy. But distro-hoppers who switched to Linux from Mac OS miss this command pair and still prefer to use it. No worries! This brief tutorial describes how to use pbcopy and pbpaste commands on Linux.
|
||||
|
||||
### Install Xclip / Xsel
|
||||
|
||||
Like I already said, the pbcopy and pbpaste commands are not available on Linux. However, we can replicate their functionality using the xclip and/or xsel commands via shell aliases. Both the xclip and xsel packages are available in the default repositories of most Linux distributions. Note that you need not install both utilities; either one is enough.
|
||||
|
||||
To install them on Arch Linux and its derivatives, run:
|
||||
```
|
||||
$ sudo pacman -S xclip xsel
|
||||
|
||||
```
|
||||
|
||||
On Fedora:
|
||||
```
|
||||
$ sudo dnf install xclip xsel
|
||||
|
||||
```
|
||||
|
||||
On Debian, Ubuntu, Linux Mint:
|
||||
```
|
||||
$ sudo apt install xclip xsel
|
||||
|
||||
```
|
||||
|
||||
Once installed, you need to create aliases for the pbcopy and pbpaste commands. To do so, edit your **~/.bashrc** file:
|
||||
```
|
||||
$ vi ~/.bashrc
|
||||
|
||||
```
|
||||
|
||||
If you want to use Xclip, paste the following lines:
|
||||
```
|
||||
alias pbcopy='xclip -selection clipboard'
|
||||
alias pbpaste='xclip -selection clipboard -o'
|
||||
|
||||
```
|
||||
|
||||
If you want to use xsel, paste the following lines into your **~/.bashrc** file instead:
|
||||
```
|
||||
alias pbcopy='xsel --clipboard --input'
|
||||
alias pbpaste='xsel --clipboard --output'
|
||||
|
||||
```
|
||||
|
||||
Save and close the file.
|
||||
|
||||
Next, run the following command to apply the changes to your ~/.bashrc file:
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
|
||||
```
|
||||
|
||||
ZSH users should paste the above lines into the **~/.zshrc** file instead.
|
||||
|
||||
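Note that xclip and xsel both need an X session to talk to. If you work on a headless server, a file-backed stand-in for the same command pair is a workable sketch; the clipboard file path below is an arbitrary choice for this example, not anything xclip or xsel defines:

```shell
# Minimal pbcopy/pbpaste emulation with a plain file as the "clipboard".
# Useful only where no X display is available; not shared across users.
CLIP_FILE="${TMPDIR:-/tmp}/fake-clipboard.$$"

pbcopy()  { cat > "$CLIP_FILE"; }
pbpaste() { cat "$CLIP_FILE" 2>/dev/null; }

echo "Welcome To OSTechNix!" | pbcopy
pbpaste
```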
### Use Pbcopy And Pbpaste Commands On Linux
|
||||
|
||||
Let us see some examples.
|
||||
|
||||
The pbcopy command copies text from stdin into the clipboard buffer. Have a look at the following example:
|
||||
```
|
||||
$ echo "Welcome To OSTechNix!" | pbcopy
|
||||
|
||||
```
|
||||
|
||||
The above command copies the text “Welcome To OSTechNix!” into the clipboard. You can access this content later and paste it anywhere you want using the pbpaste command, like below.
|
||||
```
|
||||
$ echo `pbpaste`
|
||||
Welcome To OSTechNix!
|
||||
|
||||
```
|
||||
|
||||

|
||||
|
||||
Here are some other use cases.
|
||||
|
||||
I have a file named **file.txt** with the following contents.
|
||||
```
|
||||
$ cat file.txt
|
||||
Welcome To OSTechNix!
|
||||
|
||||
```
|
||||
|
||||
You can directly copy the contents of a file into a clipboard as shown below.
|
||||
```
|
||||
$ pbcopy < file.txt
|
||||
|
||||
```
|
||||
|
||||
Now, the contents of the file remain available in the clipboard until you update it with another file’s contents.
|
||||
|
||||
To retrieve the contents from clipboard, simply type:
|
||||
```
|
||||
$ pbpaste
|
||||
Welcome To OSTechNix!
|
||||
|
||||
```
|
||||
|
||||
You can also send the output of any Linux command to the clipboard using the pipe character. Have a look at the following example.
|
||||
```
|
||||
$ ps aux | pbcopy
|
||||
|
||||
```
|
||||
|
||||
Now, run the pbpaste command at any time to display the output of the “ps aux” command from the clipboard.
|
||||
```
|
||||
$ pbpaste
|
||||
|
||||
```
|
||||
|
||||

|
||||
|
||||
There is much more you can do with the pbcopy and pbpaste commands. I hope you now have a basic idea about them.
|
||||
|
||||
And, that’s all for now. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-use-pbcopy-and-pbpaste-commands-on-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
@ -1,55 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Cross-Site Request Forgery
|
||||
======
|
||||

|
||||
Security is a major concern when designing web apps. And I am not talking about DDoS protection, using a strong password or two-step verification. I am talking about one of the biggest threats to a web app, known as **CSRF**, short for **Cross-Site Request Forgery**.
|
||||
|
||||
### What is CSRF?
|
||||
|
||||
[][1]
|
||||
|
||||
First things first, **CSRF** is short for Cross-Site Request Forgery. It is commonly pronounced as sea-surf and often referred to as XSRF. CSRF is a type of attack where actions are performed on a web app where the victim is logged in, without the victim's knowledge. These actions could be anything, ranging from simply liking or commenting on a social media post to sending abusive messages to people, or even transferring money from the victim’s bank account.
|
||||
|
||||
### How does CSRF work?
|
||||
|
||||
**CSRF** attacks bank upon a simple, common vulnerability in all browsers. Every time we authenticate or log in to a website, session cookies are stored in the browser. Whenever we make a request to that website, these cookies are automatically sent to the server, where the server identifies us by matching the cookie we sent against its records. That way, it knows it’s us.
|
||||
|
||||
[][2]
|
||||
|
||||
This means that any request made by me, knowingly or unknowingly, will be fulfilled. Since the cookies are being sent and they will match the records on the server, the server thinks I am making that request.
|
||||
|
||||
|
||||
|
||||
CSRF attacks usually come in form of links. We may click them on other websites or receive them as email. On clicking these links, an unwanted request is made to the server. And as I previously said, the server thinks we made the request and authenticates it.
|
||||
|
||||
#### A Real World Example
|
||||
|
||||
To put things into perspective, imagine you are logged into your bank’s website. And you fill up a form on the page at **yourbank.com/transfer** . You fill in the account number of the receiver as 1234 and the amount of 5,000 and you click on the submit button. Now, a request will be made to **yourbank.com/transfer/send?to=1234&amount=5000** . So the server will act upon the request and make the transfer. Now just imagine you are on another website and you click on a link that opens up the above URL with the hacker’s account number. That money is now transferred to the hacker and the server thinks you made the transaction. Even though you didn’t.
|
||||
|
||||
[][3]
|
||||
|
||||
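To make the attack concrete: a malicious page only needs to embed the transfer URL in a resource tag. The browser loads it automatically, sending the victim’s cookies along. The sketch below reuses the hypothetical URL from the example above; the `to` value is a placeholder for the hacker’s account number.

```html
<!-- Hypothetical malicious page: the browser fetches the "image" and
     silently sends the victim's yourbank.com session cookies with it. -->
<img src="https://yourbank.com/transfer/send?to=HACKER_ACCOUNT&amount=5000" width="0" height="0">
```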
#### Protection against CSRF
|
||||
|
||||
CSRF protection is very easy to implement. It usually involves sending a token, called the CSRF token, to the webpage. This token is sent back and verified on the server with every new request. So malicious requests made to the server will pass cookie authentication but fail CSRF verification. Most web frameworks provide out-of-the-box support for preventing CSRF attacks, and CSRF attacks are not as commonly seen today as they were some time back.
|
||||
|
||||
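Frameworks implement this for you, but the core idea fits in a few lines. Here is a rough, hypothetical sketch (the names `make_csrf_token` and `verify_csrf_token` are mine, not from any framework): derive a token from a server-side secret and the session, and compare it on every state-changing request.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret; a real app would load a persistent key.
SECRET_KEY = secrets.token_bytes(32)

def make_csrf_token(session_id):
    # Bind the token to the session so it cannot be replayed across users.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id, token):
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(make_csrf_token(session_id), token)

token = make_csrf_token("session-1234")
print(verify_csrf_token("session-1234", token))  # the legitimate submission passes: True
print(verify_csrf_token("session-5678", token))  # a token replayed in another session fails: False
```

A forged request carries the victim’s cookies but not this token, so it fails the second check even though it would pass cookie authentication.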
### Conclusion
|
||||
|
||||
CSRF attacks were a big thing 10 years back, but today we don’t see too many of them. In the past, famous sites such as YouTube, The New York Times and Netflix have been vulnerable to CSRF. However, the popularity and occurrence of CSRF attacks have decreased lately. Nevertheless, CSRF attacks are still a threat and it is important that you protect your website or app from them.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxandubuntu.com/home/understanding-csrf-cross-site-request-forgery
|
||||
|
||||
作者:[linuxandubuntu][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxandubuntu.com
|
||||
[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-what-is-cross-site-forgery_orig.jpg
|
||||
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/cookies-set-by-website-chrome_orig.jpg
|
||||
[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-hacking-bank-account_orig.jpg
|
@ -1,3 +1,5 @@
|
||||
translating----geekpi
|
||||
|
||||
UNIX curiosities
|
||||
======
|
||||
Recently I've been doing more UNIXy things in various tools I'm writing, and I hit two interesting issues. Neither of these are "bugs", but behaviors that I wasn't expecting.
|
||||
|
@ -0,0 +1,90 @@
|
||||
5 applications to manage your to-do list on Fedora
|
||||
======
|
||||
|
||||

|
||||
|
||||
Effective management of your to-do list can do wonders for your productivity. Some prefer just keeping a to-do list in a text file, or even using a notepad and pen. Users who want more out of their to-do list often turn to an application. In this article we highlight 4 graphical applications and a terminal-based tool for managing your to-do list.
|
||||
|
||||
### GNOME To Do
|
||||
|
||||
[GNOME To Do][1] is a personal task manager designed specifically for the GNOME desktop (Fedora Workstation’s default desktop). Compared with some of the others in this list, it has a range of neat features.
|
||||
|
||||
GNOME To Do provides organization of tasks by lists, and the ability to assign a colour to each list. Additionally, individual tasks can be assigned due dates, priorities, and notes. Furthermore, GNOME To Do has extensions that allow even more features, including support for [todo.txt][2] and syncing with online services such as [todoist][3].
|
||||
|
||||
![][4]
|
||||
|
||||
Install GNOME To Do either by using the Software application, or using the following command in the Terminal:
|
||||
```
|
||||
sudo dnf install gnome-todo
|
||||
|
||||
```
|
||||
|
||||
### Getting things GNOME!
|
||||
|
||||
Before GNOME To Do existed, the go-to application for tracking tasks on GNOME was [Getting things GNOME!][5] This older-style GNOME application has a multiple window layout, allowing you to show the details of multiple tasks at the same time. Rather than having lists of tasks, GTG has the ability to add sub-tasks to tasks and even to sub-tasks. GTG also has the ability to add due dates and start dates. Syncing to other apps and services is also possible in GTG via plugins.
|
||||
|
||||
![][6]
|
||||
|
||||
Install Getting Things GNOME either by using the Software application, or using the following command in the Terminal:
|
||||
```
|
||||
sudo dnf install gtg
|
||||
|
||||
```
|
||||
|
||||
### Go For It!
|
||||
|
||||
[Go For It!][7] is a super-simple task management application. It is used to simply create a list of tasks, and mark them as done when completed. It does not have the ability to group tasks, or create sub-tasks. By default, Go For It! stores tasks in the todo.txt format, allowing simpler syncing to online services and other applications. Additionally, Go For It! contains a simple timer to track how much time you have spent on the current task.
|
||||
|
||||
![][8]
|
||||
|
||||
Go For It is available to download from the Flathub application repository. To install, simply [enable Flathub as a software source][9], and then install via the Software application.
|
||||
|
||||
### Agenda
|
||||
|
||||
If you are looking for a no-fuss super simple to-do application, look no further than [Agenda][10]. Create tasks, mark them as complete, and then delete them from your list. Agenda shows all tasks (completed or open) until you remove them.
|
||||
|
||||
![][11]
|
||||
|
||||
Agenda is available to download from the Flathub application repository. To install, simply [enable Flathub as a software source][9], and then install via the Software application.
|
||||
|
||||
### Taskwarrior
|
||||
|
||||
[Taskwarrior][12] is a flexible command-line task management program. It is highly customizable, but can also be used “right out of the box.” Using simple commands, you can create tasks, mark them as complete, and list current open tasks. Additionally, tasks can be tagged, added to projects, searched and filtered. Furthermore, you can set up recurring tasks, and apply due dates to tasks.
|
||||
|
||||
[This previous article on the Fedora Magazine][13] provides a good overview of getting started with Taskwarrior.
|
||||
|
||||
![][14]
|
||||
|
||||
Install Taskwarrior with this command in the Terminal:
|
||||
```
|
||||
sudo dnf install task
|
||||
|
||||
```
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/5-tools-to-manage-your-to-do-list-on-fedora/
|
||||
|
||||
作者:[Ryan Lerch][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/introducing-flatpak/
|
||||
[1]:https://wiki.gnome.org/Apps/Todo/
|
||||
[2]:http://todotxt.org/
|
||||
[3]:https://en.todoist.com/
|
||||
[4]:https://fedoramagazine.org/wp-content/uploads/2018/08/gnome-todo.png
|
||||
[5]:https://wiki.gnome.org/Apps/GTG
|
||||
[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/gtg.png
|
||||
[7]:http://manuel-kehl.de/projects/go-for-it/
|
||||
[8]:https://fedoramagazine.org/wp-content/uploads/2018/08/goforit.png
|
||||
[9]:https://fedoramagazine.org/install-flathub-apps-fedora/
|
||||
[10]:https://github.com/dahenson/agenda
|
||||
[11]:https://fedoramagazine.org/wp-content/uploads/2018/08/agenda.png
|
||||
[12]:https://taskwarrior.org/
|
||||
[13]:https://fedoramagazine.org/getting-started-taskwarrior/
|
||||
[14]:https://fedoramagazine.org/wp-content/uploads/2018/08/taskwarrior.png
|
@ -0,0 +1,334 @@
|
||||
Getting started with Postfix, an open source mail transfer agent
|
||||
======
|
||||
|
||||

|
||||
|
||||
[Postfix][1] is a great program that routes and delivers email to accounts that are external to the system. It is currently used by approximately [33% of internet mail servers][2]. In this article, I'll explain how you can use Postfix to send mail using Gmail with two-factor authentication enabled.
|
||||
|
||||
Before you get Postfix up and running, however, you need to have some items lined up. Following are instructions on how to get it working on a number of distros.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
* An installed OS (Ubuntu/Debian/Fedora/Centos/Arch/FreeBSD/OpenSUSE)
|
||||
* A Google account with two-factor authentication
|
||||
* A working internet connection
|
||||
|
||||
|
||||
|
||||
### Step 1: Prepare Google
|
||||
|
||||
Open a web browser and log into your Google account. Once you’re in, go to your settings by clicking your picture and selecting "Google Account.” Click “Sign-in & security” and scroll down to "App passwords.” Use your password to log in. Then you can create a new app password (I named mine "postfix Setup”).
|
||||
|
||||

|
||||
|
||||
Note the crazy password (shown below), which I will use throughout this article.
|
||||
|
||||

|
||||
|
||||
### Step 2: Install Postfix
|
||||
|
||||
Before you can configure the mail client, you need to install it. You must also install either the `mailutils` or `mailx` utility, depending on the OS you're using. Here's how to install it for each OS:
|
||||
|
||||
**Debian/Ubuntu** :
|
||||
```
|
||||
apt-get update && apt-get install postfix mailutils
|
||||
|
||||
```
|
||||
|
||||
**Fedora** :
|
||||
```
|
||||
dnf update && dnf install postfix mailx
|
||||
|
||||
```
|
||||
|
||||
**Centos** :
|
||||
```
|
||||
yum update && yum install postfix mailx cyrus-sasl cyrus-sasl-plain
|
||||
|
||||
```
|
||||
|
||||
**Arch** :
|
||||
```
|
||||
pacman -Sy postfix mailutils
|
||||
|
||||
```
|
||||
|
||||
**FreeBSD** :
|
||||
```
|
||||
portsnap fetch extract update
|
||||
|
||||
cd /usr/ports/mail/postfix
|
||||
|
||||
make config
|
||||
|
||||
```
|
||||
|
||||
In the configuration dialog, select "SASL support." All other options can remain the same.
|
||||
|
||||
From there: `make install clean`
|
||||
|
||||
Install `mailx` from the binary package: `pkg install mailx`
|
||||
|
||||
**OpenSUSE** :
|
||||
```
|
||||
zypper update && zypper install postfix mailx cyrus-sasl
|
||||
|
||||
```
|
||||
|
||||
### Step 3: Set up Gmail authentication
|
||||
|
||||
Once you've installed Postfix, you can set up Gmail authentication. Since you have created the app password, you need to put it in a configuration file and lock it down so no one else can see it. Fortunately, this is simple to do:
|
||||
|
||||
**Ubuntu/Debian/Fedora/Centos/Arch/OpenSUSE** :
|
||||
```
|
||||
vim /etc/postfix/sasl_passwd
|
||||
|
||||
```
|
||||
|
||||
Add this line:
|
||||
```
|
||||
[smtp.gmail.com]:587 ben.heffron@gmail.com:thgcaypbpslnvgce
|
||||
|
||||
```
|
||||
|
||||
Save and close the file. Since your Gmail password is stored as plaintext, make the file accessible only by root to be extra safe.
|
||||
```
|
||||
chmod 600 /etc/postfix/sasl_passwd
|
||||
|
||||
```
|
||||
|
||||
**FreeBSD** :
|
||||
```
|
||||
vim /usr/local/etc/postfix/sasl_passwd
|
||||
|
||||
```
|
||||
|
||||
Add this line:
|
||||
```
|
||||
[smtp.gmail.com]:587 ben.heffron@gmail.com:thgcaypbpslnvgce
|
||||
|
||||
```
|
||||
|
||||
Save and close the file. Since your Gmail password is stored as plaintext, make the file accessible only by root to be extra safe.
|
||||
```
|
||||
chmod 600 /usr/local/etc/postfix/sasl_passwd
|
||||
|
||||
```
|
||||
|
||||

|
||||
|
||||
### Step 4: Get Postfix moving
|
||||
|
||||
This step is the "meat and potatoes"—everything you've done so far has been preparation.
|
||||
|
||||
Postfix gets its configuration from the `main.cf` file, so the settings in this file are critical. For Google, it is mandatory to enable the correct SSL settings.
|
||||
|
||||
Here are the six options you need to enter or update on the `main.cf` to make it work with Gmail (from the [SASL readme][3]):
|
||||
|
||||
* The **smtp_sasl_auth_enable** setting enables client-side authentication. We will configure the client’s username and password information in the second part of the example.
|
||||
* The **relayhost** setting forces the Postfix SMTP to send all remote messages to the specified mail server instead of trying to deliver them directly to their destination.
|
||||
* With the **smtp_sasl_password_maps** parameter, we configure the Postfix SMTP client to send username and password information to the mail gateway server.
|
||||
* Postfix SMTP client SASL security options are set using **smtp_sasl_security_options** , which accepts a whole lot of values. In this case, it is left empty; otherwise, Gmail won’t play nicely with Postfix.
|
||||
* The **smtp_tls_CAfile** is a file containing CA certificates of root CAs trusted to sign either remote SMTP server certificates or intermediate CA certificates.
|
||||
* From the [configure settings page][4]: **smtp_use_tls** uses TLS when a remote SMTP server announces STARTTLS support; the default is to not use TLS.
|
||||
|
||||
|
||||
|
||||
**Ubuntu/Debian/Arch**
|
||||
|
||||
These three OSes keep their files (certificates and `main.cf`) in the same location, so this is all you need to put in there:
|
||||
```
|
||||
vim /etc/postfix/main.cf
|
||||
|
||||
```
|
||||
|
||||
If the following values aren’t there, add them:
|
||||
```
|
||||
relayhost = [smtp.gmail.com]:587
|
||||
|
||||
smtp_use_tls = yes
|
||||
|
||||
smtp_sasl_auth_enable = yes
|
||||
|
||||
smtp_sasl_security_options =
|
||||
|
||||
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
|
||||
|
||||
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
|
||||
|
||||
```
|
||||
|
||||
Save and close the file.
|
||||
|
||||
**Fedora/CentOS**
|
||||
|
||||
These two OSes are based on the same underpinnings, so they share the same updates.
|
||||
```
|
||||
vim /etc/postfix/main.cf
|
||||
|
||||
```
|
||||
|
||||
If the following values aren’t there, add them:
|
||||
```
|
||||
relayhost = [smtp.gmail.com]:587
|
||||
|
||||
smtp_use_tls = yes
|
||||
|
||||
smtp_sasl_auth_enable = yes
|
||||
|
||||
smtp_sasl_security_options =
|
||||
|
||||
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
|
||||
|
||||
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
|
||||
|
||||
```
|
||||
|
||||
Save and close the file.
|
||||
|
||||
**OpenSUSE**
|
||||
```
|
||||
vim /etc/postfix/main.cf
|
||||
|
||||
```
|
||||
|
||||
If the following values aren’t there, add them:
|
||||
```
|
||||
relayhost = [smtp.gmail.com]:587
|
||||
|
||||
smtp_use_tls = yes
|
||||
|
||||
smtp_sasl_auth_enable = yes
|
||||
|
||||
smtp_sasl_security_options =
|
||||
|
||||
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
|
||||
|
||||
smtp_tls_CAfile = /etc/ssl/ca-bundle.pem
|
||||
|
||||
```
|
||||
|
||||
Save and close the file.
|
||||
|
||||
OpenSUSE also requires that you modify the Postfix master process configuration file `master.cf`. Open it for editing:
|
||||
```
|
||||
vim /etc/postfix/master.cf
|
||||
|
||||
```
|
||||
|
||||
Uncomment the line that reads:
|
||||
```
|
||||
#tlsmgr unix - - n 1000? 1 tlsmgr
|
||||
|
||||
```
|
||||
|
||||
It should look like this:
|
||||
```
|
||||
tlsmgr unix - - n 1000? 1 tlsmgr
|
||||
|
||||
```
|
||||
|
||||
Save and close the file.
|
||||
|
||||
**FreeBSD**
|
||||
```
|
||||
vim /usr/local/etc/postfix/main.cf
|
||||
|
||||
```
|
||||
|
||||
If the following values aren’t there, add them:
|
||||
```
|
||||
relayhost = [smtp.gmail.com]:587
|
||||
|
||||
smtp_use_tls = yes
|
||||
|
||||
smtp_sasl_auth_enable = yes
|
||||
|
||||
smtp_sasl_security_options =
|
||||
|
||||
smtp_sasl_password_maps = hash:/usr/local/etc/postfix/sasl_passwd
|
||||
|
||||
smtp_tls_CAfile = /etc/mail/certs/cacert.pem
|
||||
|
||||
```
|
||||
|
||||
Save and close the file.
|
||||
|
||||
### Step 5: Set up the password file
|
||||
|
||||
Remember that password file you created? Now you need to feed it into Postfix using `postmap`, which is installed as part of Postfix itself.
|
||||
|
||||
**Debian, Ubuntu, Fedora, CentOS, OpenSUSE, Arch Linux**
|
||||
```
|
||||
postmap /etc/postfix/sasl_passwd
|
||||
|
||||
```
|
||||
|
||||
**FreeBSD**
|
||||
```
|
||||
postmap /usr/local/etc/postfix/sasl_passwd
|
||||
|
||||
```
|
||||
|
||||
### Step 6: Get Postfix grooving
|
||||
|
||||
To get all the settings and configurations working, you must restart Postfix.
|
||||
|
||||
**Debian, Ubuntu, Fedora, CentOS, OpenSUSE, Arch Linux**
|
||||
|
||||
These distributions make it simple to restart:
|
||||
```
|
||||
systemctl restart postfix.service
|
||||
|
||||
```
|
||||
|
||||
**FreeBSD**
|
||||
|
||||
To start Postfix at startup, edit `/etc/rc.conf`:
|
||||
```
|
||||
vim /etc/rc.conf
|
||||
|
||||
```
|
||||
|
||||
Add the line:
|
||||
```
|
||||
postfix_enable=YES
|
||||
|
||||
```
|
||||
|
||||
Save and close the file. Then start Postfix by running:
|
||||
```
|
||||
service postfix start
|
||||
|
||||
```
|
||||
|
||||
### Step 7: Test it
|
||||
|
||||
Now for the big finale—time to test it to see if it works. The `mail` command is another tool installed with `mailutils` or `mailx`.
|
||||
```
|
||||
echo "Just testing my sendmail gmail relay" | mail -s "Sendmail gmail Relay" ben.heffron@gmail.com
|
||||
|
||||
```
|
||||
|
||||
This is what I used to test my settings, and then it came up in my Gmail.
|
||||
|
||||

|
||||
|
||||
Now you can use Gmail with two-factor authentication in your Postfix setup.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/8/postfix-open-source-mail-transfer-agent
|
||||
|
||||
作者:[Ben Heffron][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/elheffe
|
||||
[1]:http://www.postfix.org/start.html
|
||||
[2]:http://www.securityspace.com/s_survey/data/man.201806/mxsurvey.html
|
||||
[3]:http://www.postfix.org/SASL_README.html
|
||||
[4]:http://www.postfix.org/postconf.5.html#smtp_tls_security_level
|
@ -0,0 +1,130 @@
|
||||
translating---geekpi
|
||||
|
||||
How To Switch Between Multiple PHP Versions In Ubuntu
|
||||
======
|
||||
|
||||

|
||||
|
||||
Sometimes, the most recent version of an installed package might not work as you expect. Your application may not be compatible with the updated package and may support only a specific older version of it. In such cases, you can simply downgrade the problematic package to its earlier working version in no time. Refer to our old guides on how to downgrade a package in Ubuntu and its variants [**here**][1] and how to downgrade a package in Arch Linux and its derivatives [**here**][2]. However, you don’t always need to downgrade; you can use multiple versions at the same time. For instance, let us say you are testing a PHP application on a [**LAMP stack**][3] deployed on Ubuntu 18.04 LTS. After a while you find out that the application works fine with PHP 5.6, but not with PHP 7.2 (Ubuntu 18.04 LTS installs PHP 7.x by default). Are you going to reinstall PHP or the whole LAMP stack? Not necessarily. You don’t even have to downgrade PHP to its earlier version. In this brief tutorial, I will show you how to switch between multiple PHP versions in Ubuntu 18.04 LTS. It’s not as difficult as you may think. Read on.
|
||||
|
||||
### Switch Between Multiple PHP Versions
|
||||
|
||||
To check the default installed version of PHP, run:
|
||||
```
|
||||
$ php -v
|
||||
PHP 7.2.7-0ubuntu0.18.04.2 (cli) (built: Jul 4 2018 16:55:24) ( NTS )
|
||||
Copyright (c) 1997-2018 The PHP Group
|
||||
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
|
||||
with Zend OPcache v7.2.7-0ubuntu0.18.04.2, Copyright (c) 1999-2018, by Zend Technologies
|
||||
|
||||
```
|
||||
|
||||
As you can see, the installed version of PHP is 7.2.7. After testing your application for a couple of days, you find out that it doesn’t support PHP 7.2. In such cases, it is a good idea to have both a PHP 5.x version and a PHP 7.x version installed, so that you can easily switch to or from any supported version at any time.
|
||||
|
||||
You don’t need to remove PHP7.x or reinstall LAMP stack. You can use both PHP5.x and 7.x versions together.
|
||||
|
||||
I assume you haven’t uninstalled PHP 5.6 from your system yet. In case you have already removed it, you can install it again using a PPA as shown below.
|
||||
|
||||
You can install PHP5.6 from a PPA:
|
||||
```
|
||||
$ sudo add-apt-repository -y ppa:ondrej/php
|
||||
$ sudo apt update
|
||||
$ sudo apt install php5.6
|
||||
|
||||
```
|
||||
|
||||
#### Switch from PHP7.x to PHP5.x
|
||||
|
||||
First disable PHP7.2 module using command:
|
||||
```
|
||||
$ sudo a2dismod php7.2
|
||||
Module php7.2 disabled.
|
||||
To activate the new configuration, you need to run:
|
||||
systemctl restart apache2
|
||||
|
||||
```
|
||||
|
||||
Next, enable PHP5.6 module:
|
||||
```
|
||||
$ sudo a2enmod php5.6
|
||||
|
||||
```
|
||||
|
||||
Set PHP5.6 as default version:
|
||||
```
|
||||
$ sudo update-alternatives --set php /usr/bin/php5.6
|
||||
|
||||
```
|
||||
|
||||
Alternatively, you can run the following command to set which system-wide version of PHP you want to use by default.
|
||||
```
|
||||
$ sudo update-alternatives --config php
|
||||
|
||||
```
|
||||
|
||||
Enter the selection number to set it as default version or simply press ENTER to keep the current choice.
|
||||
|
||||
If you have installed other PHP utilities (such as phar), set them as the default version as well.
|
||||
```
|
||||
$ sudo update-alternatives --set phar /usr/bin/phar5.6
|
||||
|
||||
```
|
||||
|
||||
Finally, restart your Apache web server:
|
||||
```
|
||||
$ sudo systemctl restart apache2
|
||||
|
||||
```
|
||||
|
||||
Now, check if PHP5.6 is the default version or not:
|
||||
```
|
||||
$ php -v
|
||||
PHP 5.6.37-1+ubuntu18.04.1+deb.sury.org+1 (cli)
|
||||
Copyright (c) 1997-2016 The PHP Group
|
||||
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
|
||||
with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2016, by Zend Technologies
|
||||
|
||||
```
|
||||
|
||||
#### Switch from PHP5.x to PHP7.x
|
||||
|
||||
Likewise, you can switch from PHP5.x to PHP7.x version as shown below.
|
||||
```
|
||||
$ sudo a2enmod php7.2
|
||||
|
||||
$ sudo a2dismod php5.6
|
||||
|
||||
$ sudo update-alternatives --set php /usr/bin/php7.2
|
||||
|
||||
$ sudo systemctl restart apache2
|
||||
|
||||
```
|
||||
|
||||
**A word of caution:**
|
||||
|
||||
The final stable PHP 5.6 release reached the [**end of active support**][4] on 19 Jan 2017. However, PHP 5.6 will continue to receive support for critical security issues until 31 Dec 2018. So, it is recommended to upgrade all your PHP applications to be compatible with PHP 7.x as soon as possible.
|
||||
|
||||
If you want to prevent PHP from being automatically upgraded in the future, refer to the following guide.
|
||||
|
||||
And, that’s all for now. Hope this helps. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-switch-between-multiple-php-versions-in-ubuntu/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/how-to-downgrade-a-package-in-ubuntu/
|
||||
[2]:https://www.ostechnix.com/downgrade-package-arch-linux/
|
||||
[3]:https://www.ostechnix.com/install-apache-mariadb-php-lamp-stack-ubuntu-16-04/
|
||||
[4]:http://php.net/supported-versions.php
|
@ -0,0 +1,176 @@
|
||||
Perform robust unit tests with PyHamcrest
|
||||
======
|
||||
|
||||

|
||||
|
||||
At the base of the [testing pyramid][1] are unit tests. Unit tests test one unit of code at a time—usually one function or method.
|
||||
|
||||
Often, a single unit test is designed to test one particular flow through a function, or a specific branch choice. This enables easy mapping of a unit test that fails and the bug that made it fail.
|
||||
|
||||
Ideally, unit tests use few or no external resources, isolating them and making them faster.
|
||||
|
||||
_Good_ tests increase developer productivity by catching bugs early and making testing faster. _Bad_ tests decrease developer productivity.
|
||||
|
||||
Unit test suites help maintain high-quality products by signaling problems early in the development process. An effective unit test catches bugs before the code has left the developer machine, or at least in a continuous integration environment on a dedicated branch. This marks the difference between good and bad unit tests.
|
||||
|
||||
Productivity usually decreases when testing _incidental features_. The test fails when the code changes, even if it is still correct. This happens because the output is different, but in a way that is not part of the function's contract.
|
||||
|
||||
A good unit test, therefore, is one that helps enforce the contract to which the function is committed.
|
||||
|
||||
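To make the distinction concrete, here is a small, hypothetical illustration in plain Python (the function `format_user` is invented for this sketch, not from the article): the first assertion pins down incidental wording and breaks on harmless changes, while the second enforces only the contract.

```python
def format_user(name, age):
    # Hypothetical function; its contract is only that name and age appear.
    return "User: {} (age {})".format(name, age)

# Brittle test: asserts the exact string, an incidental feature.
# It fails if the wording changes to, say, "User Ada, 36 years old",
# even though every caller still gets what it needs.
assert format_user("Ada", 36) == "User: Ada (age 36)"

# Contract-based test: asserts only what callers actually rely on.
out = format_user("Ada", 36)
assert "Ada" in out and "36" in out
```

Rewording the template breaks the first assertion but not the second; only the second reflects the function's real contract.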
If a unit test breaks, the contract is violated and should be either explicitly amended (by changing the documentation and tests), or fixed (by fixing the code and leaving the tests as is).
|
||||
|
||||
While limiting tests to enforce only the public contract is a complicated skill to learn, there are tools that can help.
|
||||
|
||||
One of these tools is [Hamcrest][2], a framework for writing assertions. Originally invented for Java-based unit tests, today the Hamcrest framework supports several languages, including [Python][3].
|
||||
|
||||
Hamcrest is designed to make test assertions easier to write and more precise.
|
||||
```
|
||||
def add(a, b):
    return a + b


from hamcrest import assert_that, equal_to


def test_add():
    assert_that(add(2, 2), equal_to(4))
|
||||
|
||||
```
|
||||
|
||||
This is a simple assertion, for simple functionality. What if we wanted to assert something more complicated?
|
||||
```
|
||||
from hamcrest import assert_that, contains_inanyorder, is_not, has_item


def test_set_removal():
    my_set = {1, 2, 3, 4}
    my_set.remove(3)
    assert_that(my_set, contains_inanyorder(1, 2, 4))
    assert_that(my_set, is_not(has_item(3)))
|
||||
|
||||
```
|
||||
|
||||
Note that we can succinctly assert that the result has `1`, `2`, and `4` in any order since sets do not guarantee order.
|
||||
|
||||
We also easily negate assertions with `is_not`. This helps us write _precise assertions_ , which allow us to limit ourselves to enforcing public contracts of functions.
|
||||
|
||||
Sometimes, however, none of the built-in functionality is _precisely_ what we need. In those cases, Hamcrest allows us to write our own matchers.
|
||||
|
||||
Imagine the following function:
|
||||
```
|
||||
import random


def scale_one(a, b):
    scale = random.randint(0, 5)
    pick = random.choice([a, b])
    return scale * pick
|
||||
|
||||
```
|
||||
|
||||
We can confidently assert that the result is evenly divisible by at least one of the inputs.
|
||||
|
||||
A matcher inherits from `hamcrest.core.base_matcher.BaseMatcher`, and overrides two methods:

```
import hamcrest.core.base_matcher

class DivisibleBy(hamcrest.core.base_matcher.BaseMatcher):

    def __init__(self, factor):
        self.factor = factor

    def _matches(self, item):
        return (item % self.factor) == 0

    def describe_to(self, description):
        description.append_text('number divisible by ')
        description.append_text(repr(self.factor))
```
Writing high-quality `describe_to` methods is important, since this is part of the message that will show up if the test fails.

```
def divisible_by(num):
    return DivisibleBy(num)
```
By convention, we wrap matchers in a function. Sometimes this gives us a chance to further process the inputs, but in this case, no further processing is needed.

```
from hamcrest import assert_that, any_of

def test_scale():
    result = scale_one(3, 7)
    assert_that(result,
                any_of(divisible_by(3),
                       divisible_by(7)))
```
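The two overridden methods split the matcher's job cleanly: `_matches` decides pass/fail, and `describe_to` builds the expectation text shown on failure. Stripped of the framework, the protocol can be sketched in a few lines of plain Python (a toy stand-in for illustration, not the real PyHamcrest implementation):

```python
class StringDescription:
    """Toy stand-in for Hamcrest's description object."""
    def __init__(self):
        self.parts = []

    def append_text(self, text):
        self.parts.append(text)
        return self

    def __str__(self):
        return ''.join(self.parts)

class DivisibleBy:
    def __init__(self, factor):
        self.factor = factor

    def _matches(self, item):
        return (item % self.factor) == 0

    def describe_to(self, description):
        description.append_text('number divisible by ')
        description.append_text(repr(self.factor))

def assert_that(actual, matcher):
    # On mismatch, build the failure message from describe_to,
    # roughly as hamcrest's assert_that does.
    if not matcher._matches(actual):
        description = StringDescription()
        matcher.describe_to(description)
        raise AssertionError(f'Expected: {description} but: was {actual!r}')

assert_that(9, DivisibleBy(3))  # passes silently
```

This is why a sloppy `describe_to` hurts: the failure message is assembled entirely from it.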
Note that we combined our `divisible_by` matcher with the built-in `any_of` matcher to ensure that we test only what the contract commits to.

While editing this article, I heard a rumor that the name "Hamcrest" was chosen as an anagram for "matches". Hrm...

```
>>> assert_that("matches", contains_inanyorder(*"hamcrest"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/moshez/src/devops-python/build/devops/lib/python3.6/site-packages/hamcrest/core/assert_that.py", line 43, in assert_that
    _assert_match(actual=arg1, matcher=arg2, reason=arg3)
  File "/home/moshez/src/devops-python/build/devops/lib/python3.6/site-packages/hamcrest/core/assert_that.py", line 57, in _assert_match
    raise AssertionError(description)
AssertionError:
Expected: a sequence over ['h', 'a', 'm', 'c', 'r', 'e', 's', 't'] in any order
     but: no item matches: 'r' in ['m', 'a', 't', 'c', 'h', 'e', 's']
```
Researching more, I found the source of the rumor: It is an anagram for "matchers".

```
>>> assert_that("matchers", contains_inanyorder(*"hamcrest"))
>>>
```
If you are not yet writing unit tests for your Python code, now is a good time to start. If you are writing unit tests for your Python code, using Hamcrest will allow you to make your assertions _precise_—neither more nor less than what you intend to test. This will lead to fewer false positives when modifying code and less time spent modifying tests for working code.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/robust-unit-tests-hamcrest

作者:[Moshe Zadka][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/moshez
[1]:https://martinfowler.com/bliki/TestPyramid.html
[2]:http://hamcrest.org/
[3]:https://www.python.org/
@ -0,0 +1,75 @@
Automatically Switch To Light / Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely
======

If you're looking for an easy way of automatically changing the Gtk theme based on sunrise and sunset times, give [AutomaThemely][3] a try.



**AutomaThemely is a Python application that automatically changes Gnome themes according to light and dark hours, useful if you want to use a dark Gtk theme at night and a light Gtk theme during the day.**

**While the application is made for the Gnome desktop, it also works with Unity**. AutomaThemely does not support changing the Gtk theme for desktop environments that don't make use of the `org.gnome.desktop.interface` Gsettings schema, like Cinnamon, or changing the icon theme, at least not yet. It also doesn't support setting the Gnome Shell theme.

Besides automatically changing the Gtk3 theme, **AutomaThemely can also automatically switch between dark and light themes for Atom editor and VSCode, as well as between light and dark syntax highlighting for Atom editor.** This is obviously also done based on the time of day.

[![AutomaThemely Atom VSCode][1]][2]
AutomaThemely Atom and VSCode theme / syntax settings

The application uses your IP address to determine your location in order to retrieve the sunrise and sunset times, and requires a working Internet connection for this. However, you can disable automatic location from the application user interface, and enter your location manually.

From the AutomaThemely user interface you can also enter a time offset (in minutes) for the sunrise and sunset times, and enable or disable notifications on theme changes.
### Downloading / installing AutomaThemely

**Ubuntu 18.04**: using the link above, download the Python 3.6 DEB which includes dependencies (python3.6-automathemely_1.2_all.deb).

**Ubuntu 16.04:** you'll need to download and install the AutomaThemely Python 3.5 DEB which DOES NOT include dependencies (python3.5-no_deps-automathemely_1.2_all.deb), and install the dependencies (`requests`, `astral`, `pytz`, `tzlocal` and `schedule`) separately, using PIP3:

```
sudo apt install python3-pip
python3 -m pip install --user requests astral pytz tzlocal schedule
```

The AutomaThemely download page also includes RPM packages for Python 3.5 or 3.6, with and without dependencies. Install the package appropriate for your Python version. If you download the package that includes dependencies and they are not available on your system, grab the "no_deps" package and install the Python3 dependencies using PIP3, as explained above.
### Using AutomaThemely to change to light / dark Gtk themes based on Sun times

Once installed, run AutomaThemely once to generate the configuration file. Either click on the AutomaThemely menu entry or run this in a terminal:

```
automathemely
```

This doesn't run any GUI, it only generates the configuration file.

Using AutomaThemely is a bit counterintuitive. You'll get an AutomaThemely icon in your menu, but clicking it does not open any window / GUI. If you use Gnome or some other Gnome-based desktop that supports jumplists / quicklists, you can right click the AutomaThemely icon in the menu (or you can pin it to Dash / dock and right click it there) and select Manage Settings to launch the GUI:



You can also launch the AutomaThemely GUI from the command line, using:

```
automathemely --manage
```

**Once you configure the themes you want to use, you'll need to update the Sun times and restart the AutomaThemely scheduler**. You can do this by right clicking on the AutomaThemely icon (should work in Unity / Gnome) and selecting `Update sun times`, and then `Restart the scheduler`. You can also do this from a terminal, using these commands:

```
automathemely --update
automathemely --restart
```
--------------------------------------------------------------------------------

via: https://www.linuxuprising.com/2018/08/automatically-switch-to-light-dark-gtk.html

作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/118280394805678839070
[1]:https://4.bp.blogspot.com/-K2-1K_MIWv0/W2q9GEWYA6I/AAAAAAAABUg/-z_gTMSHlxgN-ZXDvUGIeTQ8I72WrRq0ACLcBGAs/s640/automathemely-settings_2.png (AutomaThemely Atom VSCode)
[2]:https://4.bp.blogspot.com/-K2-1K_MIWv0/W2q9GEWYA6I/AAAAAAAABUg/-z_gTMSHlxgN-ZXDvUGIeTQ8I72WrRq0ACLcBGAs/s1600/automathemely-settings_2.png
[3]:https://github.com/C2N14/AutomaThemely
@ -1,57 +0,0 @@
老树发新芽:微服务 – DXC Blogs
======



如果我告诉你有这样一种软件架构,一个应用程序的组件通过基于网络的通讯协议为其它组件提供服务,我估计你可能会说它是……

是的,确实是。如果你从上世纪九十年代就开始了你的编程生涯,那么你肯定会说它是 [面向服务的架构 (SOA)][1]。但是,如果你是个年轻人,并且在云上获得初步的经验,那么,你将会说:“哦,你说的是 [微服务][2]。”

你们都没错。如果想真正地了解它们的差别,你需要深入地研究这两种架构。

在 SOA 中,一个服务是一个功能,它是定义好的、自包含的、并且是不依赖上下文和其它服务的状态的功能。总共有两种服务。一种是消费者服务,它从另外一种类型的服务 —— 提供者服务 —— 中请求一个服务。一个 SOA 服务可以同时扮演这两种角色。

SOA 服务可以与其它服务交换数据。两个或多个服务也可以彼此之间相互协调。这些服务执行基本的任务,比如创建一个用户帐户、提供登录功能、或验证支付。

与其说 SOA 是模块化一个应用程序,还不如说它是把分布式的、独立维护和部署的组件组合成一个应用程序,然后在服务器上运行这些组件。

早期版本的 SOA 使用面向对象的协议进行组件间通讯。例如,微软的 [分布式组件对象模型 (DCOM)][3] 和使用 [通用对象请求代理架构 (CORBA)][5] 规范的 [对象请求代理 (ORB)][4]。

更新一些的版本则使用消息服务,比如 [Java 消息服务 (JMS)][6] 或者 [高级消息队列协议 (AMQP)][7]。这些服务通过企业服务总线(ESB)进行连接,并基于总线传递和接收可扩展标记语言(XML)格式的数据。

[微服务][2] 是一种架构风格,其中的应用程序由松散耦合的服务或模块组成。它适用于开发大型的、复杂的应用程序的持续集成/持续部署(CI/CD)模型。一个应用程序就是一堆模块的汇总。

每个微服务提供一个应用程序编程接口(API)端点。它们通过轻量级协议连接,比如 [表述性状态转移 (REST)][8] 或 [gRPC][9]。数据倾向于使用 [JavaScript 对象标记 (JSON)][10] 或 [Protobuf][11] 来表示。

这两种架构都可以用于替代以前老的整体式架构,整体式架构的应用程序被构建为单个自治的单元。例如,在一个客户机 - 服务器模式中,一个典型的 Linux、Apache、MySQL、PHP/Python/Perl (LAMP) 服务器端应用程序将去处理 HTTP 请求、运行子程序、以及从底层的 MySQL 数据库中检索/更新数据。所有这些应用程序“绑”在一起提供服务。当你改变了任何一个东西,你都必须去构建和部署一个新版本。

使用 SOA,你可以只改变需要的几个组件,而不是整个应用程序。使用微服务,你可以做到一次只改变一个服务。使用微服务,你才能真正做到一个解耦架构。

微服务也比 SOA 更轻量级。不过 SOA 服务部署在服务器和虚拟机上,而微服务部署在容器中,协议也更轻量级,这使得微服务比 SOA 更灵活。因此,它更适合于要求敏捷性的电商网站。

说了这么多,到底意味着什么呢?微服务就是 SOA 在容器和云计算上的变种。

老式的 SOA 并没有离我们远去,而因为我们持续将应用程序搬迁到容器中,微服务架构将越来越流行。
--------------------------------------------------------------------------------

via: https://blogs.dxc.technology/2018/05/08/everything-old-is-new-again-microservices/

作者:[Cloudy Weather][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://blogs.dxc.technology/author/steven-vaughan-nichols/
[1]:https://www.service-architecture.com/articles/web-services/service-oriented_architecture_soa_definition.html
[2]:http://microservices.io/
[3]:https://technet.microsoft.com/en-us/library/cc958799.aspx
[4]:https://searchmicroservices.techtarget.com/definition/Object-Request-Broker-ORB
[5]:http://www.corba.org/
[6]:https://docs.oracle.com/javaee/6/tutorial/doc/bncdq.html
[7]:https://www.amqp.org/
[8]:https://www.service-architecture.com/articles/web-services/representational_state_transfer_rest.html
[9]:https://grpc.io/
[10]:https://www.json.org/
[11]:https://github.com/google/protobuf/
@ -0,0 +1,216 @@
用 NodeJS 进行 Twitter 情感分析
============================================================



如果你想知道大家对某件事情的看法,Twitter 是最好的地方了。Twitter 是观点持续不断地涌现出来的地方,每秒钟大概有 6000 条新推文发送出来。因特网的发展很快,如果你想与时俱进或者跟上潮流,Twitter 就是你要去的地方。

现在,我们生活在一个数据为王的时代,很多公司都善于运用 Twitter 上的数据。根据测量到的他们新产品的人气,尝试预测之后的市场趋势,分析 Twitter 上的数据有很多用处。通过数据,商人可以把产品卖给合适的用户,收集关于他们品牌和改进的反馈,或者获取他们产品或促销活动失败的原因。不仅仅是商人,很多政治和经济上的决定也是在观察人们意见的基础上做出的。今天,我会试着让你感受下关于 Twitter 的简单 [情感分析][1],判断一条推文是正面的、负面的还是中性的。这不会像专业人士所用的那么复杂,但至少,它会让你知道挖掘观点的思路。

我们将使用 NodeJS,因为 JavaScript 太常用了,而且它还是最容易入门的语言。
### 前置条件:

  * 安装了 NodeJS 和 NPM
  * 有 NodeJS 和 NPM 包的经验
  * 熟悉命令行

好了,就是这样。开始吧。
### 开始

为了你的项目,新建一个目录,打开终端(或是命令行),进入刚创建的目录下面,运行命令 `npm init -y`。这会在这个目录下创建一个 `package.json` 文件。现在我们可以安装需要的 npm 包了。只需要创建一个新文件,命名为 `index.js`,然后我们就完成了初始的编码。
### 获取推文

好了,我们想要分析 Twitter,为了实现这个目的,我们需要获取推文。为此,我们要用到 [twit][2] 包。因此,先用 `npm i twit` 命令安装它。我们还需要在 Twitter 上注册一个应用账户,用来访问 Twitter 的 API。点击这个 [链接][3],填写所有项目,从 “Keys and Access Token” 标签页中复制 “Consumer Key”、“Consumer Secret”、“Access token” 和 “Access Token Secret” 这几项到一个 `.env` 文件中,就像这样:

```
# .env
# replace the stars with values you copied
CONSUMER_KEY=************
CONSUMER_SECRET=************
ACCESS_TOKEN=************
ACCESS_TOKEN_SECRET=************
```

现在开始。
用你最喜欢的代码编辑器打开 `index.js`。我们需要用 `npm i dotenv` 命令安装 `dotenv` 包来读取 `.env` 文件。好了,创建一个 API 实例。

```
const Twit = require('twit');
const dotenv = require('dotenv');

dotenv.config();

const { CONSUMER_KEY
      , CONSUMER_SECRET
      , ACCESS_TOKEN
      , ACCESS_TOKEN_SECRET
      } = process.env;

const config_twitter = {
  consumer_key: CONSUMER_KEY,
  consumer_secret: CONSUMER_SECRET,
  access_token: ACCESS_TOKEN,
  access_token_secret: ACCESS_TOKEN_SECRET,
  timeout_ms: 60*1000
};

let api = new Twit(config_twitter);
```
这里已经用所需的配置建立了到 Twitter 的连接。但我们什么事情都没做。先定义一个获取推文的函数:

```
async function get_tweets(q, count) {
  let tweets = await api.get('search/tweets', {q, count, tweet_mode: 'extended'});
  return tweets.data.statuses.map(tweet => tweet.full_text);
}
```
这是个 async 函数,因为 `api.get` 函数返回一个 promise 对象,而不是用 `then` 链,我想通过这种简单的方式获取推文。它接收两个参数 `q` 和 `count`,`q` 是查询或者我们想要搜索的关键字,`count` 是让这个 API 返回的推文数量。

目前为止我们拥有了一个从 Twitter 上获取完整文本的简单方法。不过我们要获取的文本中可能包含某些链接,或者原推文可能已经被删除了。所以我们会编写另一个函数,获取并返回推文(即便是转发的推文)的文本,并且删除其中存在的链接。

```
function get_text(tweet) {
  let txt = tweet.retweeted_status ? tweet.retweeted_status.full_text : tweet.full_text;
  return txt.split(/ |\n/).filter(v => !v.startsWith('http')).join(' ');
}

async function get_tweets(q, count) {
  let tweets = await api.get('search/tweets', {q, count, 'tweet_mode': 'extended'});
  return tweets.data.statuses.map(get_text);
}
```
现在我们拿到了文本。下一步是从文本中获取情感。为此我们会使用 `npm` 中的另一个包 —— [`sentiment`][4]。让我们像安装其他包那样安装 `sentiment`,添加到脚本中。

```
const sentiment = require('sentiment')
```

`sentiment` 用起来很简单。我们只用把 `sentiment` 函数用在我们想要分析的文本上,它就能返回文本的相对分数。如果分数小于 0,它表达的就是消极情感;大于 0 的分数是积极情感;而 0,如你所料,表示中性的情感。基于此,我们将会把推文打印成不同的颜色 —— 绿色表示积极,红色表示消极,蓝色表示中性。为此,我们会用到 [`colors`][5] 包。先安装这个包,然后添加到脚本中。

```
const colors = require('colors/safe');
```
好了,现在把所有东西都整合到 `main` 函数中。

```
async function main() {
  let keyword = /* define the keyword that you want to search for */;
  let count = /* define the count of tweets you want */;
  let tweets = await get_tweets(keyword, count);
  for (tweet of tweets) {
    let score = sentiment(tweet).comparative;
    tweet = `${tweet}\n`;
    if (score > 0) {
      tweet = colors.green(tweet);
    } else if (score < 0) {
      tweet = colors.red(tweet);
    } else {
      tweet = colors.blue(tweet);
    }
    console.log(tweet);
  }
}
```
最后,执行 `main` 函数。

```
main();
```

就是这样,一个简单的分析推文中的基本情感的脚本。
```
// full script
const Twit = require('twit');
const dotenv = require('dotenv');
const sentiment = require('sentiment');
const colors = require('colors/safe');

dotenv.config();

const { CONSUMER_KEY
      , CONSUMER_SECRET
      , ACCESS_TOKEN
      , ACCESS_TOKEN_SECRET
      } = process.env;

const config_twitter = {
  consumer_key: CONSUMER_KEY,
  consumer_secret: CONSUMER_SECRET,
  access_token: ACCESS_TOKEN,
  access_token_secret: ACCESS_TOKEN_SECRET,
  timeout_ms: 60*1000
};

let api = new Twit(config_twitter);

function get_text(tweet) {
  let txt = tweet.retweeted_status ? tweet.retweeted_status.full_text : tweet.full_text;
  return txt.split(/ |\n/).filter(v => !v.startsWith('http')).join(' ');
}

async function get_tweets(q, count) {
  let tweets = await api.get('search/tweets', {q, count, 'tweet_mode': 'extended'});
  return tweets.data.statuses.map(get_text);
}

async function main() {
  let keyword = 'avengers';
  let count = 100;
  let tweets = await get_tweets(keyword, count);
  for (tweet of tweets) {
    let score = sentiment(tweet).comparative;
    tweet = `${tweet}\n`;
    if (score > 0) {
      tweet = colors.green(tweet);
    } else if (score < 0) {
      tweet = colors.red(tweet);
    } else {
      tweet = colors.blue(tweet);
    }
    console.log(tweet);
  }
}

main();
```
--------------------------------------------------------------------------------

via: https://boostlog.io/@anshulc95/twitter-sentiment-analysis-using-nodejs-5ad1331247018500491f3b6a

作者:[Anshul Chauhan][a]
译者:[BriFuture](https://github.com/BriFuture)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://boostlog.io/@anshulc95
[1]:https://en.wikipedia.org/wiki/Sentiment_analysis
[2]:https://github.com/ttezel/twit
[3]:https://boostlog.io/@anshulc95/apps.twitter.com
[4]:https://www.npmjs.com/package/sentiment
[5]:https://www.npmjs.com/package/colors
[6]:https://boostlog.io/tags/nodejs
[7]:https://boostlog.io/tags/twitter
[8]:https://boostlog.io/@anshulc95
@ -0,0 +1,138 @@
如何在 Linux 上使用 Pbcopy 和 Pbpaste 命令
======



由于 Linux 和 Mac OS X 是基于 *Nix 的系统,因此许多命令可以在两个平台上运行。但是,某些命令可能在两个平台上都没有,比如 **pbcopy** 和 **pbpaste**。这些命令仅在 Mac OS X 平台上可用。pbcopy 命令将标准输入复制到剪贴板,然后,你可以在任何地方使用 pbpaste 命令粘贴剪贴板内容。当然,上述命令可能有一些 Linux 替代品,例如 **xclip**。xclip 与 pbcopy 的功能完全相同。但是,从 Mac OS 切换到 Linux 发行版的人会想念这两个命令,并且仍然更喜欢使用它们。别担心!这个简短的教程描述了如何在 Linux 上使用 pbcopy 和 pbpaste 命令。
### 安装 Xclip / Xsel

就像我已经说过的那样,Linux 中没有 pbcopy 和 pbpaste 命令。但是,我们可以通过 shell 别名使用 xclip 和/或 xsel 命令实现 pbcopy 和 pbpaste 命令的功能。xclip 和 xsel 包存在于大多数 Linux 发行版的默认存储库中。请注意,你无需安装这两个程序,只需安装上述任何一个程序即可。

要在 Arch Linux 及其衍生版上安装它们,请运行:

```
$ sudo pacman -S xclip xsel
```

在 Fedora 上:

```
$ sudo dnf install xclip xsel
```

在 Debian、Ubuntu、Linux Mint 上:

```
$ sudo apt install xclip xsel
```
安装后,你需要为 pbcopy 和 pbpaste 命令创建别名。为此,请编辑 **~/.bashrc** 文件:

```
$ vi ~/.bashrc
```

如果要使用 xclip,请粘贴以下行:

```
alias pbcopy='xclip -selection clipboard'
alias pbpaste='xclip -selection clipboard -o'
```

如果要使用 xsel,请在 ~/.bashrc 中粘贴以下行:

```
alias pbcopy='xsel --clipboard --input'
alias pbpaste='xsel --clipboard --output'
```

保存并关闭文件。

接下来,运行以下命令以使 ~/.bashrc 中的更改生效:

```
$ source ~/.bashrc
```

ZSH 用户请将上述行粘贴到 **~/.zshrc** 中。
### 在 Linux 上使用 Pbcopy 和 Pbpaste 命令

让我们看一些例子。

pbcopy 命令将文本从 stdin 复制到剪贴板缓冲区。例如,看看下面的例子。

```
$ echo "Welcome To OSTechNix!" | pbcopy
```

上面的命令会将文本 “Welcome To OSTechNix!” 复制到剪贴板中。你可以稍后访问此内容,并使用如下所示的 pbpaste 命令将其粘贴到任何位置。

```
$ echo `pbpaste`
Welcome To OSTechNix!
```



以下是一些其他例子。

我有一个名为 **file.txt** 的文件,其中包含以下内容。

```
$ cat file.txt
Welcome To OSTechNix!
```

你可以直接将文件内容复制到剪贴板中,如下所示。

```
$ pbcopy < file.txt
```

现在,在你用其他内容更新剪贴板之前,该文件的内容会一直保存在剪贴板中。

要从剪贴板检索内容,只需输入:

```
$ pbpaste
Welcome To OSTechNix!
```

你还可以使用管道符将任何 Linux 命令的输出发送到剪贴板。看看下面的例子。

```
$ ps aux | pbcopy
```

现在,输入 `pbpaste` 命令以显示剪贴板中 `ps aux` 命令的输出。

```
$ pbpaste
```


使用 pbcopy 和 pbpaste 命令可以做更多的事情。我希望你现在对这些命令有了一个基本的了解。

就是这些了。还有更多好东西,敬请关注!

干杯!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-use-pbcopy-and-pbpaste-commands-on-linux/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/