mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-01-25 23:11:02 +08:00
Merge remote-tracking branch 'LCTT/master' (commit e078cde621)

published/20171005 Reasons Kubernetes is cool.md (new file, 134 lines)

Why Kubernetes is cool
============================================================

When I first started learning about Kubernetes (about a year and a half ago?), I honestly didn't understand why I should care about it.

After I'd been working with Kubernetes full-time for more than three months, I slowly started to understand why I should use it. (I'm still far from being a Kubernetes expert!) Hopefully this post will help you understand what Kubernetes can do!

I'm going to try to explain some of the reasons I find Kubernetes interesting without using the words "cloud native", "orchestration", "container", or any Kubernetes-specific terminology :). The perspective here is mostly that of a Kubernetes operator / infrastructure engineer, since my current job is to configure Kubernetes and make it work better.

I'm not going to try to address questions like "should you use Kubernetes in your production systems?" That is a very complicated question. (Not least because "production systems" always have different requirements depending on what you're doing.)

### Kubernetes lets you run code in production without setting up a new server

The first time I was pitched on Kubernetes, it was in the following conversation with my partner Kamal.

It went roughly like this:

* Kamal: with Kubernetes you can set up a new service with a single command.

* Julia: I don't see how that's possible.

* Kamal: like, you write a config file, you apply it, and then you have an HTTP service running in production.

* Julia: but today I need to create a new AWS instance, explicitly write a Puppet manifest for it, set up service discovery, configure load balancing, configure our deployment software, and make sure DNS is working. If nothing goes wrong, it still takes at least 4 hours before anything is live.

* Kamal: yeah, with Kubernetes you don't have to do any of that. You can set up a new HTTP service in 5 minutes and it will just run automatically, as long as there are spare resources in your cluster!

* Julia: there has to be a catch.

There is a sort of catch: setting up a production Kubernetes cluster is (in my experience) really not easy. (See [Kubernetes the hard way][3] for an idea of what's involved in getting started.) But we're not going to go deep into that right now.

So the first cool thing about Kubernetes is that it may make it easier for people to deploy newly developed software into production. That's cool, and it's actually true: once you have a working Kubernetes cluster, you really can set up an HTTP service in production (run the application within 5 minutes, set up load balancing, give it a DNS name, etc.) with just one configuration file. It's really fun to see.

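As a rough sketch of what such a configuration file can look like (the names and the image below are made up for illustration, not from the original article), a Deployment plus a Service might be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-http            # hypothetical service name
spec:
  replicas: 2
  selector:
    matchLabels: {app: hello-http}
  template:
    metadata:
      labels: {app: hello-http}
    spec:
      containers:
      - name: web
        image: example.com/hello-http:1.0   # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-http
spec:
  selector: {app: hello-http}
  ports:
  - port: 80
```

Applying a file like this with `kubectl apply -f` is the "one configuration file" step described above.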
### Kubernetes gives you better visibility into, and manageability of, the code running in production

In my opinion, you can't really understand Kubernetes without understanding etcd first. So let's talk about etcd!

Imagine I asked you right now, "tell me every application you have running in production, which host it's running on, whether it's healthy, and whether it has a DNS name assigned." I'd have to go query a lot of different places to answer those questions, and it would take quite a while to pull it all together. With Kubernetes, I can now confidently say that no scattered querying is needed: a single API can answer all of them.

In Kubernetes, all of your cluster's state – the running applications ("pods"), nodes, DNS names, cron jobs, and so on – is stored in a single database (etcd). Every Kubernetes component is stateless, and basically works by:

* reading state from etcd (e.g., "the list of pods assigned to node 1")

* making changes (e.g., "actually run pod A on node 1")

* updating the state in etcd (e.g., "set pod A's state to 'running'")

This means that if you want to answer a question like "how many pods running nginx are in that availability zone?", you can answer it by querying one unified API (the Kubernetes API). And you have exactly the same access to that API that every other Kubernetes component has.

It also means it's really easy to manage anything running in Kubernetes. Say you want to:

* implement a complicated custom rollout strategy (deploy one thing, wait 2 minutes, deploy 5 more, wait 3.7 minutes, etc.)

* automatically [start a new web server][1] every time a branch is pushed to GitHub

* monitor all of your running applications to make sure every one of them has a reasonable memory usage limit

All you need to do is write a program that talks to the Kubernetes API (a "controller").

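The read / act / update loop those components run can be sketched in a few lines of Python. This is an illustrative toy, not the real Kubernetes client library: a plain dict stands in for etcd, and the pod/node names are made up.

```python
# Toy reconcile loop: compare desired state with observed state and act on
# the difference -- the pattern every Kubernetes controller follows.
# The "store" dict stands in for etcd / the Kubernetes API.
def reconcile(store):
    desired = store["desired_pods"]      # e.g. {"pod-a": "node-1"}
    observed = store["observed_pods"]    # what is actually running
    for pod, node in desired.items():
        if observed.get(pod) != node:
            # "make the change": a real controller would start the pod here
            observed[pod] = node
            # write the new status back to the store
            store["status"][pod] = "running"

store = {"desired_pods": {"pod-a": "node-1"}, "observed_pods": {}, "status": {}}
reconcile(store)
print(store["status"])   # {'pod-a': 'running'}
```

Running the loop again is a no-op once observed state matches desired state, which is what makes controllers safe to restart at any time.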
Another exciting thing about the Kubernetes API is that you're not limited to the functionality Kubernetes provides! If you have your own ideas about how your software should be deployed / created / monitored, you can write code against the Kubernetes API to do it! It lets you do everything you want.

### Your code keeps running even if every Kubernetes component goes down

One thing I was promised about Kubernetes (in various blog posts :)) was: "if the Kubernetes API server and the other components go down, that's fine, your code will keep running." I thought that sounded cool in theory, but I wasn't sure whether it was actually true.

So far, it seems to be true!

I've taken down some running etcd instances, and here's what happened:

1. all the code kept running

2. nothing _new_ could happen (you can't deploy new code or make changes, and cron jobs stop working)

3. when etcd comes back, the cluster catches up on whatever it missed in the meantime

This does mean that if etcd goes down and one of your applications crashes or something, it can't be brought back up until etcd returns.

### Kubernetes' design is pretty resilient to bugs

Like any software, Kubernetes has bugs. For example, so far our cluster's controller manager has a memory leak, and the scheduler crashes fairly regularly. Bugs obviously aren't good, but I've found that Kubernetes' design helps mitigate the impact of bugs in many of its core components.

If you restart any component, what happens is:

* it reads all of the state relevant to it from etcd

* based on that state, it does whatever it thinks it needs to do (schedule pods, garbage-collect completed pods, schedule cron jobs, deploy things on demand, etc.)

Because none of the components keep state in memory, you can restart them at any time, and that helps mitigate all kinds of bugs.

For example! Say there's a memory leak in your controller manager. Because the controller manager is stateless, you can just restart it periodically every hour, or restart it whenever you feel like inconsistencies may have crept in. Or say the scheduler has a bug where it sometimes forgets about certain pods and never schedules them. You can mitigate that by restarting the scheduler every 10 minutes. (We don't do that; we fix the bug instead. But you _could_ :))

So I feel I can trust Kubernetes' design to keep the cluster state consistent even when there are bugs in its core components. And in general, the software improves over time. The only stateful thing you have to operate is etcd.

Not to harp on the "state" thing too much, but I think it's cool that in Kubernetes the only thing you need a backup/restore plan for is etcd (unless you use persistent volumes for your pods). I think it makes Kubernetes operations a bit easier than you might think.

### Implementing new distributed systems on top of Kubernetes is relatively easy

Suppose you want to implement a distributed cron job scheduling system! Doing that from scratch is a ton of work. But implementing a distributed cron job scheduling system inside Kubernetes is much easier! (It's still not trivial, it's still a distributed system.)

The first time I read the code for the Kubernetes cron job controller, I was genuinely delighted by how simple it was. Go read it; the main logic is about 400 lines of Go. Go read it! => [cronjob_controller.go][4] <=

Essentially, what the cron job controller does is:

* every 10 seconds:
    * list all existing cron jobs
    * check whether any of them need to run right now
    * if so, create a new Job object to be scheduled, and actually run it via other Kubernetes controllers
    * clean up finished jobs
    * repeat

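The loop above can be sketched in Python. This is a toy standing in for the real Go controller: the "store" dict plays the role of the Kubernetes API / etcd, and schedules are simplified to "run every N seconds":

```python
# Toy version of the cron job controller's sync pass. A real controller runs
# this every 10 seconds; here we pass the clock in explicitly for clarity.
def sync_once(store, now):
    for cron in store["cronjobs"]:                 # list all cron jobs
        if now - cron["last_run"] >= cron["every"]:  # needs to run now?
            # create a new job object; other controllers would run it
            store["jobs"].append({"name": cron["name"], "started": now})
            cron["last_run"] = now
    # clean up finished jobs (here: anything older than 60 seconds)
    store["jobs"] = [j for j in store["jobs"] if now - j["started"] < 60]

store = {"cronjobs": [{"name": "backup", "every": 10, "last_run": 0}],
         "jobs": []}
sync_once(store, now=10)
print([j["name"] for j in store["jobs"]])   # ['backup']
```

The real controller does the same thing against the Kubernetes API instead of a dict, which is why it can stay so small.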
The Kubernetes model is pretty constrained (resources have schemas defined in etcd, and controllers read those resources and update etcd), and I think that having this relatively opinionated/constrained model makes it easier to develop your own distributed systems within the Kubernetes framework.

Something Kamal said to me was "Kubernetes is a good platform for writing your own distributed systems", rather than "Kubernetes is a distributed system you can use", and I found that really interesting. He prototyped a [system that runs an HTTP service for every branch you push to GitHub][5]. It took him a weekend and is about 800 lines of Go, which I thought was impressive!

### Kubernetes lets you do some amazing things (but isn't easy)

I started out by saying "Kubernetes lets you do magical things, you can stand up so much infrastructure with a single configuration file, it's amazing". And that's true!

What I mean by "Kubernetes isn't easy" is that Kubernetes has a lot of moving parts, and learning how to successfully operate a highly available Kubernetes cluster is a lot of work. I've found that it gives me a lot of abstractions, and I need to understand those abstractions in order to debug issues and configure them properly. I love learning new things, so this doesn't make me crazy or angry, but I think it's important to know :)

One concrete example of "I can't just rely on the abstractions" is that I've had to learn a LOT about [how networking works on Linux][6] to feel at all confident setting up Kubernetes networking — much more than I'd ever needed to know about networking before. This was fun but very time-consuming. At some point I may write more about the hard/interesting things involved in setting up Kubernetes networking.

Or: to set up my Kubernetes CAs successfully, I wrote a [2,000-word blog post][7] about all the details of the different CAs in Kubernetes that I had to learn.

I think some of the managed Kubernetes systems, like GKE (Google's Kubernetes product), may be simpler since they make a lot of the decisions for you, but I haven't tried them.

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2017/10/05/reasons-kubernetes-is-cool/

Author: [Julia Evans][a]
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/).

[a]:https://jvns.ca/about
[1]:https://github.com/kamalmarhubi/kubereview
[2]:https://jvns.ca/categories/kubernetes
[3]:https://github.com/kelseyhightower/kubernetes-the-hard-way
[4]:https://github.com/kubernetes/kubernetes/blob/e4551d50e57c089aab6f67333412d3ca64bc09ae/pkg/controller/cronjob/cronjob_controller.go
[5]:https://github.com/kamalmarhubi/kubereview
[6]:https://jvns.ca/blog/2016/12/22/container-networking/
[7]:https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/

published/20180103 Creating an Offline YUM repository for LAN.md (new file, 106 lines)

Creating an offline YUM repository for the LAN
======

In an earlier tutorial, we discussed [how to create your own YUM repository using an ISO image and an online YUM repository][1]. Creating your own YUM repository is a good idea, but it isn't really necessary if you only have 2-3 Linux machines on your network. However, if your network has a large number of Linux servers that need to be updated regularly, or if you have many servers that can't access the internet directly, then creating your own YUM repository becomes essential.

When we have a large number of Linux servers and each one updates directly from the internet, the amount of data consumed can be considerable. To save that data, we can create an offline YUM repository and share it over the local network. The other Linux machines on the network can then fetch system updates directly from this local YUM server, which saves data and also gives much better transfer speeds.

We can share a YUM repository using either of the following methods:

* using a web server (Apache)

* using an FTP server (VSFTPD)

Before we go over the two methods, we first need to create a YUM repository by following [the earlier tutorial][1].

### Using a web server

First, install the web server (Apache) on the YUM server; we'll assume the server's IP address is `192.168.1.100`. We have already configured the YUM repository on this system, so we can install the Apache web server with the `yum` command:

```
$ yum install httpd
```

Next, copy all of the RPM packages into the default Apache root directory, i.e. `/var/www/html`. Since we have already copied the packages into `/YUM`, we can instead create a symbolic link at `/var/www/html/CentOS` that points to `/YUM`:

```
$ ln -s /YUM /var/www/html/CentOS
```

Restart the web server to apply the change:

```
$ systemctl restart httpd
```

#### Configuring the client machine

The server-side configuration is now done. Next, configure the clients to fetch updates from the offline YUM repository we created; we'll assume the client's IP address is `192.168.1.101`.

Create a file named `offline-yum.repo` in the `/etc/yum.repos.d` directory and enter the following content:

```
$ vi /etc/yum.repos.d/offline-yum.repo
```

```
[offline-yum]
name=Local YUM
baseurl=http://192.168.1.100/CentOS/7
gpgcheck=0
enabled=1
```

The client is now configured as well. Try installing or updating a package with `yum` to confirm that the repository is working correctly.

### Using an FTP server

To share the YUM repository over FTP, we first need to install the required package, i.e. vsftpd:

```
$ yum install vsftpd
```

The default root directory for vsftpd is `/var/ftp/pub`, so you can either copy the RPM packages into that directory or create a symbolic link under it that points to `/YUM`:

```
$ ln -s /YUM /var/ftp/pub/CentOS
```

Restart the service to apply the change:

```
$ systemctl restart vsftpd
```

#### Configuring the client machine

As before, create the file `offline-yum.repo` in `/etc/yum.repos.d` and enter the following content:

```
$ vi /etc/yum.repos.d/offline-yum.repo
```

```
[offline-yum]
name=Local YUM
baseurl=ftp://192.168.1.100/pub/CentOS/7
gpgcheck=0
enabled=1
```

Your clients can now receive updates over FTP. To configure the vsftpd server to share files with other Linux systems, [read this guide][2].

Both methods work well, so you can pick whichever you prefer. If you have any questions or comments, feel free to leave them in the comment box below.

--------------------------------------------------------------------------------

via: http://linuxtechlab.com/offline-yum-repository-for-lan/

Author: [Shusain][a]
Translator: [lujun9972](https://github.com/lujun9972)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/).

[a]:http://linuxtechlab.com/author/shsuain/
[1]:https://linux.cn/article-9296-1.html
[2]:http://linuxtechlab.com/ftp-secure-installation-configuration/

A history of low-level Linux container runtimes
============================================================

### "Container runtime" is an overloaded term.

![two ships passing in the ocean](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/running-containers-two-ship-container-beach.png?itok=wr4zJC6p "two ships passing in the ocean")
Image credits: Rikki Endsley. [CC BY-SA 4.0][12]

At Red Hat we like to say, "Containers are Linux—Linux is Containers." Here is what this means. Traditional containers are processes on a system that usually have the following three characteristics:

### 1. Resource constraints

When you run lots of containers on a system, you do not want to have any container monopolize the operating system, so we use resource constraints to control things like CPU, memory, network bandwidth, etc. The Linux kernel provides the cgroups feature, which can be configured to control the container process resources.

### 2. Security constraints

Usually, you do not want your containers being able to attack each other or attack the host system. We take advantage of several features of the Linux kernel to set up security separation, such as SELinux, seccomp, capabilities, etc.

### 3. Virtual separation

Container processes should not have a view of any processes outside the container. They should be on their own network. Container processes need to be able to bind to port 80 in different containers. Each container needs a different view of its image and needs its own root filesystem (rootfs). In Linux we use kernel namespaces to provide virtual separation.

Therefore, a process that runs in a cgroup, has security settings, and runs in namespaces can be called a container. Looking at PID 1, systemd, on a Red Hat Enterprise Linux 7 system, you see that systemd runs in a cgroup.

```
# tail -1 /proc/1/cgroup
1:name=systemd:/
```

The `ps` command shows you that the systemd process has an SELinux label ...

```
# ps -eZ | grep systemd
system_u:system_r:init_t:s0 1 ? 00:00:48 systemd
```

and capabilities.

```
# grep Cap /proc/1/status
...
CapEff: 0000001fffffffff
CapBnd: 0000001fffffffff
```

Finally, if you look at the `/proc/1/ns` subdir, you will see the namespace that systemd runs in.

```
# ls -l /proc/1/ns
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 mnt -> mnt:[4026531840]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 net -> net:[4026532009]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 pid -> pid:[4026531836]
...
```

If PID 1 (and really every other process on the system) has resource constraints, security settings, and namespaces, I argue that every process on the system is in a container.

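A quick way to see all three of these ingredients for any process is to read them straight out of `/proc`, just as the commands above do. Here is a small standard-library-only Python sketch (Linux-only, and just an illustration of the same idea):

```python
import os

def container_ingredients(pid="self"):
    """Return the cgroups, capability lines, and namespaces of a process,
    read directly from /proc (Linux only)."""
    with open(f"/proc/{pid}/cgroup") as f:
        cgroups = f.read().strip().splitlines()
    with open(f"/proc/{pid}/status") as f:
        caps = [line.split()[0].rstrip(":") for line in f
                if line.startswith("Cap")]
    namespaces = sorted(os.listdir(f"/proc/{pid}/ns"))
    return cgroups, caps, namespaces

cgroups, caps, namespaces = container_ingredients()
print(namespaces)   # e.g. ['cgroup', 'ipc', 'mnt', 'net', 'pid', ...]
```

Run it as an ordinary user against `self`; reading another process's entries (such as PID 1) generally requires root.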
Container runtime tools just modify these resource constraints, security settings, and namespaces. Then the Linux kernel executes the processes. After the container is launched, the container runtime can monitor PID 1 inside the container or the container's `stdin`/`stdout`—the container runtime manages the lifecycles of these processes.

### Container runtimes

You might say to yourself, well systemd sounds pretty similar to a container runtime. Well, after having several email discussions about why container runtimes do not use `systemd-nspawn` as a tool for launching containers, I decided it would be worth discussing container runtimes and giving some historical context.

[Docker][13] is often called a container runtime, but "container runtime" is an overloaded term. When folks talk about a "container runtime," they're really talking about higher-level tools like Docker, [CRI-O][14], and [RKT][15] that come with developer functionality. They are API driven. They include concepts like pulling the container image from the container registry, setting up the storage, and finally launching the container. Launching the container often involves running a specialized tool that configures the kernel to run the container, and these are also referred to as "container runtimes." I will refer to them as "low-level container runtimes." Daemons like Docker and CRI-O, as well as command-line tools like [Podman][16] and [Buildah][17], should probably be called "container managers" instead.

When Docker was originally written, it launched containers using the `lxc` toolset, which predates `systemd-nspawn`. Red Hat's original work with Docker was to try to integrate [`libvirt`][6] (`libvirt-lxc`) into Docker as an alternative to the `lxc` tools, which were not supported in RHEL. `libvirt-lxc` also did not use `systemd-nspawn`. At that time, the systemd team was saying that `systemd-nspawn` was only a tool for testing, not for production.

At the same time, the upstream Docker developers, including some members of my Red Hat team, decided they wanted a golang-native way to launch containers, rather than launching a separate application. Work began on libcontainer, as a native golang library for launching containers. Red Hat engineering decided that this was the best path forward and dropped `libvirt-lxc`.

Later, the [Open Container Initiative][18] (OCI) was formed, partly because people wanted to be able to launch containers in additional ways. Traditional namespace-separated containers were popular, but people also had the desire for virtual machine-level isolation. Intel and [Hyper.sh][19] were working on KVM-separated containers, and Microsoft was working on Windows-based containers. The OCI wanted a standard specification defining what a container is, so the [OCI Runtime Specification][20] was born.

The OCI Runtime Specification defines a JSON file format that describes what binary should be run, how it should be contained, and the location of the rootfs of the container. Tools can generate this JSON file. Then other tools can read this JSON file and execute a container on the rootfs. The libcontainer parts of Docker were broken out and donated to the OCI. The upstream Docker engineers and our engineers helped create a new frontend tool to read the OCI Runtime Specification JSON file and interact with libcontainer to run the container. This tool, called [`runc`][7], was also donated to the OCI. While `runc` can read the OCI JSON file, users are left to generate it themselves. `runc` has since become the most popular low-level container runtime. Almost all container-management tools support `runc`, including CRI-O, Docker, Buildah, Podman, and [Cloud Foundry Garden][21]. Since then, other tools have also implemented the OCI Runtime Spec to execute OCI-compliant containers.

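To make the JSON file concrete, here is a heavily abbreviated sketch of what such a `config.json` looks like (a real file, for example the one produced by `runc spec`, contains many more fields; the values below are placeholders):

```json
{
  "ociVersion": "1.0.0",
  "process": {
    "args": ["/bin/sh"],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs"
  },
  "linux": {
    "namespaces": [
      {"type": "pid"},
      {"type": "mount"},
      {"type": "network"}
    ]
  }
}
```

The three characteristics from the start of the article map directly onto this format: namespaces appear above, while resource constraints (cgroups) and security settings (capabilities, seccomp) go in further fields under `linux` and `process`.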
Both [Clear Containers][22] and Hyper.sh's `runV` tools were created to use the OCI Runtime Specification to execute KVM-based containers, and they are combining their efforts in a new project called [Kata][23]. Last year, Oracle created a demonstration version of an OCI runtime tool called [RailCar][24], written in Rust. It's been two months since the GitHub project was last updated, so it's unclear if it is still in development. A couple of years ago, Vincent Batts worked on adding a tool, [`nspawn-oci`][8], that interpreted an OCI Runtime Specification file and launched `systemd-nspawn`, but no one really picked up on it, and it was not a native implementation.

If someone wants to implement a native `systemd-nspawn --oci OCI-SPEC.json` and get it accepted by the systemd team for support, then CRI-O, Docker, and eventually Podman would be able to use it in addition to `runc` and Clear Containers/runV ([Kata][25]). (No one on my team is working on this.)

The bottom line is, three or four years back, the upstream developers wanted to write a low-level golang tool for launching containers, and this tool ended up becoming `runc`. Those developers at the time had a C-based tool for doing this called `lxc` and moved away from it. I am pretty sure that at the time they made the decision to build libcontainer, they would not have been interested in `systemd-nspawn` or any other non-native (golang) way of running "namespace"-separated containers.

### About the author

[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/walsh1.jpg?itok=JbZWFm6J)][26] Daniel J Walsh - Daniel Walsh has worked in the computer security field for almost 30 years. Dan joined Red Hat in August 2001. Dan has led the RHEL Docker enablement team since August 2013, but has been working on container technology for several years. He has led the SELinux project, concentrating on the application space and policy development. Dan helped develop sVirt, Secure Virtualization. He also created the SELinux Sandbox, the Xguest user, and the Secure Kiosk. Previously, Dan worked at Netect/Bindview... [More about Daniel J Walsh][9]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/history-low-level-container-runtimes

Author: [Daniel J Walsh][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/).

[a]:https://opensource.com/users/rhatdan

[1]:https://opensource.com/resources/what-are-linux-containers?utm_campaign=containers&intcmp=70160000000h1s6AAA
[2]:https://opensource.com/resources/what-docker?utm_campaign=containers&intcmp=70160000000h1s6AAA
[3]:https://opensource.com/resources/what-is-kubernetes?utm_campaign=containers&intcmp=70160000000h1s6AAA
[4]:https://developers.redhat.com/blog/2016/01/13/a-practical-introduction-to-docker-container-terminology/?utm_campaign=containers&intcmp=70160000000h1s6AAA
[5]:https://opensource.com/article/18/1/history-low-level-container-runtimes?rate=05T2m7ayQ7DRxtzQFjGcfBAlaTF5ffHN-EH1kEqSt9Q
[6]:https://libvirt.org/
[7]:https://github.com/opencontainers/runc
[8]:https://github.com/vbatts/nspawn-oci
[9]:https://opensource.com/users/rhatdan
[10]:https://opensource.com/users/rhatdan
[11]:https://opensource.com/user/16673/feed
[12]:https://creativecommons.org/licenses/by-sa/4.0/
[13]:https://github.com/docker
[14]:https://github.com/kubernetes-incubator/cri-o
[15]:https://github.com/rkt/rkt
[16]:https://github.com/projectatomic/libpod/tree/master/cmd/podman
[17]:https://github.com/projectatomic/buildah
[18]:https://www.opencontainers.org/
[19]:https://www.hyper.sh/
[20]:https://github.com/opencontainers/runtime-spec
[21]:https://github.com/cloudfoundry/garden
[22]:https://clearlinux.org/containers
[23]:https://clearlinux.org/containers
[24]:https://github.com/oracle/railcar
[25]:https://github.com/kata-containers
[26]:https://opensource.com/users/rhatdan
[27]:https://opensource.com/users/rhatdan
[28]:https://opensource.com/users/rhatdan
[29]:https://opensource.com/tags/containers
[30]:https://opensource.com/tags/linux
[31]:https://opensource.com/tags/containers-column

6 pivotal moments in open source history
============================================================

### Here's how open source developed from a printer jam solution at MIT to a major development model in the tech industry today.

![6 pivotal moments in open source history](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/welcome-open-sign-door-osdc-lead.png?itok=i9jCnaiu "6 pivotal moments in open source history")
Image credits: [Alan Levine][4]. [CC0 1.0][5]

Open source has taken a prominent role in the IT industry today. It is everywhere from the smallest embedded systems to the biggest supercomputer, from the phone in your pocket to the software running the websites and infrastructure of the companies we engage with every day. Let's explore how we got here and discuss key moments from the past 40 years that have paved a path to the current day.

### 1. RMS and the printer

In the late 1970s, [Richard M. Stallman (RMS)][6] was a staff programmer at MIT. His department, like those at many universities at the time, shared a PDP-10 computer and a single printer. One problem they encountered was that paper would regularly jam in the printer, causing a string of print jobs to pile up in a queue until someone fixed the jam. To get around this problem, the MIT staff came up with a nice social hack: They wrote code for the printer driver so that when it jammed, a message would be sent to everyone who was currently waiting for a print job: "The printer is jammed, please fix it." This way, it was never stuck for long.

In 1980, the lab accepted a donation of a brand-new laser printer. When Stallman asked for the source code for the printer driver, however, so he could reimplement the social hack to have the system notify users on a paper jam, he was told that this was proprietary information. He heard of a researcher in a different university who had the source code for a research project, and when the opportunity arose, he asked this colleague to share it—and was shocked when they refused. They had signed an NDA, which Stallman took as a betrayal of the hacker culture.

The late '70s and early '80s represented an era where software, which had traditionally been given away with the hardware in source code form, was seen to be valuable. Increasingly, MIT researchers were starting software companies, and selling licenses to the software was key to their business models. NDAs and proprietary software licenses became the norms, and the best programmers were hired from universities like MIT to work on private development projects where they could no longer share or collaborate.

As a reaction to this, Stallman resolved that he would create a complete operating system that would not deprive users of the freedom to understand how it worked, and would allow them to make changes if they wished. It was the birth of the free software movement.

### 2. Creation of GNU and the advent of free software

By late 1983, Stallman was ready to announce his project and recruit supporters and helpers. In September 1983, [he announced the creation of the GNU project][7] (GNU stands for GNU's Not Unix—a recursive acronym). The goal of the project was to clone the Unix operating system to create a system that would give complete freedom to users.

In January 1984, he started working full-time on the project, first creating a compiler system (GCC) and various operating system utilities. Early in 1985, he published "[The GNU Manifesto][8]," which was a call to arms for programmers to join the effort, and launched the Free Software Foundation in order to accept donations to support the work. This document is the founding charter of the free software movement.

### 3. The writing of the GPL

Until 1989, software written and released by the [Free Software Foundation][9] and RMS did not have a single license. Emacs was released under the Emacs license, GCC was released under the GCC license, and so on; however, after a company called Unipress forced Stallman to stop distributing copies of an Emacs implementation they had acquired from James Gosling (of Java fame), he felt that a license to secure user freedoms was important.

The first version of the GNU General Public License was released in 1989, and it encapsulated the values of copyleft (a play on words—what is the opposite of copyright?): You may use, copy, distribute, and modify the software covered by the license, but if you make changes, you must share the modified source code alongside the modified binaries. This simple requirement to share modified software, in combination with the advent of the internet in the 1990s, is what enabled the decentralized, collaborative development model of the free software movement to flourish.

### 4. "The Cathedral and the Bazaar"

By the mid-1990s, Linux was starting to take off, and free software had become more mainstream—or perhaps "less fringe" would be more accurate. The Linux kernel was being developed in a way that was completely different to anything people had seen before, and was very successful doing it. Out of the chaos of the kernel community came order, and a fast-moving project.

In 1997, Eric S. Raymond published the seminal essay, "[The Cathedral and the Bazaar][10]," comparing and contrasting the development methodologies and social structure of GCC and the Linux kernel and talking about his own experiences with a "bazaar" development model with the Fetchmail project. Many of the principles that Raymond describes in this essay would later become central to agile development and the DevOps movement—"release early, release often," refactoring of code, and treating users as co-developers are all fundamental to modern software development.

This essay has been credited with bringing free software to a broader audience, and with convincing executives at software companies at the time that releasing their software under a free software license was the right thing to do. Raymond went on to be instrumental in the coining of the term "open source" and the creation of the Open Source Initiative.

"The Cathedral and the Bazaar" was credited as a key document in the 1998 release of the source code for the Netscape web browser Mozilla. At the time, this was the first major release of an existing, widely used piece of desktop software as free software, which brought it further into the public eye.

### 5. Open source

As far back as 1985, the ambiguous nature of the word "free", used to describe software freedom, was identified as problematic by RMS himself. In the GNU Manifesto, he identified "give away" and "for free" as terms that confused zero price and user freedom. "Free as in freedom," "Speech, not beer," and similar mantras were common when free software hit a mainstream audience in the late 1990s, but a number of prominent community figures argued that a term was needed that made the concept more accessible to the general public.

After Netscape released the source code for Mozilla in 1998 (see #4), a group of people, including Eric Raymond, Bruce Perens, Michael Tiemann, Jon "Maddog" Hall, and many of the leading lights of the free software world, gathered in Palo Alto to discuss an alternative term. The term "open source" was [coined by Christine Peterson][11] to describe free software, and the Open Source Initiative was later founded by Bruce Perens and Eric Raymond. The fundamental difference with proprietary software, they argued, was the availability of the source code, and so this was what should be put forward first in the branding.

Later that year, at a summit organized by Tim O'Reilly, an extended group of some of the most influential people in the free software world at the time gathered to debate various new brands for free software. In the end, "open source" edged out "sourceware," and open source began to be adopted by many projects in the community.

There was some disagreement, however. Richard Stallman and the Free Software Foundation continued to champion the term "free software," because to them, the fundamental difference with proprietary software was user freedom, and the availability of source code was just a means to that end. Stallman argued that removing the focus on freedom would lead to a future where source code would be available, but the user of the software would not be able to avail of the freedom to modify the software. With the advent of web-deployed software-as-a-service and open source firmware embedded in devices, the battle continues to be waged today.

### 6. Corporate investment in open source—VA Linux, Red Hat, IBM

In the late 1990s, a series of high-profile events led to a huge increase in the professionalization of free and open source software. Among these, the highest-profile events were the IPOs of VA Linux and Red Hat in 1999. Both companies had massive gains in share price on their opening days as publicly traded companies, proving that open source was now going commercial and mainstream.

Also in 1999, IBM announced that they were supporting Linux by investing $1 billion in its development, making it less risky to traditional enterprise users. The following year, Sun Microsystems released the source code to its cross-platform office suite, StarOffice, and created the [OpenOffice.org][12] project.

The combined effect of massive Silicon Valley funding of open source projects, Wall Street's attention to young companies built around open source software, and the market credibility that tech giants like IBM and Sun Microsystems brought created massive adoption of open source, and the embrace of the open development model that helped it thrive has led to the dominance of Linux and open source in the tech industry today.

_Which pivotal moments would you add to the list? Let us know in the comments._

### About the author
|
||||
|
||||
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-11423-8ecef7f357341aaa7aee8b43e9b530c9.png?itok=n1snBFq3)][13] Dave Neary - Dave Neary is a member of the Open Source and Standards team at Red Hat, helping to make open source projects that are important to Red Hat successful. Dave has been around the free and open source software world, wearing many different hats, since sending his first patch to the GIMP in 1999.[More about me][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/pivotal-moments-history-open-source
|
||||
|
||||
作者:[Dave Neary ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/dneary
|
||||
[1]:https://opensource.com/article/18/2/pivotal-moments-history-open-source?rate=gsG-JrjfROWACP7i9KUoqmH14JDff8-31C2IlNPPyu8
|
||||
[2]:https://opensource.com/users/dneary
|
||||
[3]:https://opensource.com/user/16681/feed
|
||||
[4]:https://www.flickr.com/photos/cogdog/6476689463/in/photolist-aSjJ8H-qHAvo4-54QttY-ofm5ZJ-9NnUjX-tFxS7Y-bPPjtH-hPYow-bCndCk-6NpFvF-5yQ1xv-7EWMXZ-48RAjB-5EzYo3-qAFAdk-9gGty4-a2BBgY-bJsTcF-pWXATc-6EBTmq-SkBnSJ-57QJco-ddn815-cqt5qG-ddmYSc-pkYxRz-awf3n2-Rvnoxa-iEMfeG-bVfq5-jXy74D-meCC1v-qx22rx-fMScsJ-ci1435-ie8P5-oUSXhp-xJSm9-bHgApk-mX7ggz-bpsxd7-8ukud7-aEDmBj-qWkytq-ofwhdM-b7zSeD-ddn5G7-ddn5gb-qCxnB2-S74vsk
|
||||
[5]:https://creativecommons.org/publicdomain/zero/1.0/
|
||||
[6]:https://en.wikipedia.org/wiki/Richard_Stallman
|
||||
[7]:https://groups.google.com/forum/#!original/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J
|
||||
[8]:https://www.gnu.org/gnu/manifesto.en.html
|
||||
[9]:https://www.fsf.org/
|
||||
[10]:https://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar
|
||||
[11]:https://opensource.com/article/18/2/coining-term-open-source-software
|
||||
[12]:http://www.openoffice.org/
|
||||
[13]:https://opensource.com/users/dneary
|
||||
[14]:https://opensource.com/users/dneary
|
||||
[15]:https://opensource.com/users/dneary
|
||||
[16]:https://opensource.com/article/18/2/pivotal-moments-history-open-source#comments
|
||||
[17]:https://opensource.com/tags/licensing
|
103
sources/talk/20180201 How I coined the term open source.md
Normal file
@ -0,0 +1,103 @@
|
||||
How I coined the term 'open source'
|
||||
============================================================
|
||||
|
||||
### Christine Peterson finally publishes her account of that fateful day, 20 years ago.
|
||||
|
||||
![How I coined the term 'open source'](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hello-name-sticker-badge-tag.png?itok=fAgbMgBb "How I coined the term 'open source'")
|
||||
Image by : opensource.com
|
||||
|
||||
In a few days, on February 3, the 20th anniversary of the introduction of the term "[open source software][6]" will be upon us. As open source software grows in popularity and powers some of the most robust and important innovations of our time, we reflect on its rise to prominence.
|
||||
|
||||
I am the originator of the term "open source software" and came up with it while executive director at Foresight Institute. Not a software developer like the rest, I thank Linux programmer Todd Anderson for supporting the term and proposing it to the group.
|
||||
|
||||
This is my account of how I came up with it, how it was proposed, and the subsequent reactions. Of course, there are a number of accounts of the coining of the term, for example by Eric Raymond and Richard Stallman, yet this is mine, written on January 2, 2006.
|
||||
|
||||
It has never been published, until today.
|
||||
|
||||
* * *
|
||||
|
||||
The introduction of the term "open source software" was a deliberate effort to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. The problem with the main earlier label, "free software," was not its political connotations, but that—to newcomers—its seeming focus on price is distracting. A term was needed that focuses on the key issue of source code and that does not immediately confuse those new to the concept. The first term that came along at the right time and fulfilled these requirements was rapidly adopted: open source.
|
||||
|
||||
This term had long been used in an "intelligence" (i.e., spying) context, but to my knowledge, use of the term with respect to software prior to 1998 has not been confirmed. The account below describes how the term [open source software][7] caught on and became the name of both an industry and a movement.
|
||||
|
||||
### Meetings on computer security
|
||||
|
||||
In late 1997, weekly meetings were being held at Foresight Institute to discuss computer security. Foresight is a nonprofit think tank focused on nanotechnology and artificial intelligence, and software security is regarded as central to the reliability and security of both. We had identified free software as a promising approach to improving software security and reliability and were looking for ways to promote it. Interest in free software was starting to grow outside the programming community, and it was increasingly clear that an opportunity was coming to change the world. However, just how to do this was unclear, and we were groping for strategies.
|
||||
|
||||
At these meetings, we discussed the need for a new term due to the confusion factor. The argument was as follows: those new to the term "free software" assume it is referring to the price. Oldtimers must then launch into an explanation, usually given as follows: "We mean free as in freedom, not free as in beer." At this point, a discussion on software has turned into one about the price of an alcoholic beverage. The problem was not that explaining the meaning is impossible—the problem was that the name for an important idea should not be so confusing to newcomers. A clearer term was needed. No political issues were raised regarding the free software term; the issue was its lack of clarity to those new to the concept.
|
||||
|
||||
### Releasing Netscape
|
||||
|
||||
On February 2, 1998, Eric Raymond arrived on a visit to work with Netscape on the plan to release the browser code under a free-software-style license. We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message. In addition to Eric and me, active participants included Brian Behlendorf, Michael Tiemann, Todd Anderson, Mark S. Miller, and Ka-Ping Yee. But at that meeting, the field was still described as free software or, by Brian, "source code available" software.
|
||||
|
||||
While in town, Eric used Foresight as a base of operations. At one point during his visit, he was called to the phone to talk with a couple of Netscape legal and/or marketing staff. When he was finished, I asked to be put on the phone with them—one man and one woman, perhaps Mitchell Baker—so I could bring up the need for a new term. They agreed in principle immediately, but no specific term was agreed upon.
|
||||
|
||||
Between meetings that week, I was still focused on the need for a better name and came up with the term "open source software." While not ideal, it struck me as good enough. I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, while a friend in marketing and public relations felt the term "open" had been overused and abused and believed we could do better. He was right in theory; however, I didn't have a better idea, so I thought I would try to go ahead and introduce it. In hindsight, I should have simply proposed it to Eric Raymond, but I didn't know him well at the time, so I took an indirect strategy instead.
|
||||
|
||||
Todd had agreed strongly about the need for a new term and offered to assist in getting the term introduced. This was helpful because, as a non-programmer, my influence within the free software community was weak. My work in nanotechnology education at Foresight was a plus, but not enough for me to be taken very seriously on free software questions. As a Linux programmer, Todd would be listened to more closely.
|
||||
|
||||
### The key meeting
|
||||
|
||||
Later that week, on February 5, 1998, a group was assembled at VA Research to brainstorm on strategy. Attending—in addition to Eric Raymond, Todd, and me—were Larry Augustin, Sam Ockman, and attending by phone, Jon "maddog" Hall.
|
||||
|
||||
The primary topic was promotion strategy, especially which companies to approach. I said little, but was looking for an opportunity to introduce the proposed term. I felt that it wouldn't work for me to just blurt out, "All you technical people should start using my new term." Most of those attending didn't know me, and for all I knew, they might not even agree that a new term was greatly needed, or even somewhat desirable.
|
||||
|
||||
Fortunately, Todd was on the ball. Instead of making an assertion that the community should use this specific new term, he did something less directive—a smart thing to do with this community of strong-willed individuals. He simply used the term in a sentence on another topic—just dropped it into the conversation to see what happened. I went on alert, hoping for a response, but there was none at first. The discussion continued on the original topic. It seemed only he and I had noticed the usage.
|
||||
|
||||
Not so—memetic evolution was in action. A few minutes later, one of the others used the term, evidently without noticing, still discussing a topic other than terminology. Todd and I looked at each other out of the corners of our eyes to check: yes, we had both noticed what happened. I was excited—it might work! But I kept quiet: I still had low status in this group. Probably some were wondering why Eric had invited me at all.
|
||||
|
||||
Toward the end of the meeting, the [question of terminology][8] was brought up explicitly, probably by Todd or Eric. Maddog mentioned "freely distributable" as an earlier term, and "cooperatively developed" as a newer term. Eric listed "free software," "open source," and "sourceware" as the main options. Todd advocated the "open source" model, and Eric endorsed this. I didn't say much, letting Todd and Eric pull the (loose, informal) consensus together around the open source name. It was clear that to most of those at the meeting, the name change was not the most important thing discussed there; it was a relatively minor issue. Only about 10% of my notes from this meeting are on the terminology question.
|
||||
|
||||
But I was elated. These were some key leaders in the community, and they liked the new name, or at least didn't object. This was a very good sign. There was probably not much more I could do to help; Eric Raymond was far better positioned to spread the new meme, and he did. Bruce Perens signed on to the effort immediately, helping set up [Opensource.org][9] and playing a key role in spreading the new term.
|
||||
|
||||
For the name to succeed, it was necessary, or at least highly desirable, that Tim O'Reilly agree and actively use it in his many projects on behalf of the community. Also helpful would be use of the term in the upcoming official release of the Netscape Navigator code. By late February, both O'Reilly & Associates and Netscape had started to use the term.
|
||||
|
||||
### Getting the name out
|
||||
|
||||
After this, there was a period during which the term was promoted by Eric Raymond to the media, by Tim O'Reilly to business, and by both to the programming community. It seemed to spread very quickly.
|
||||
|
||||
On April 7, 1998, Tim O'Reilly held a meeting of key leaders in the field. Announced in advance as the first "[Freeware Summit][10]," by April 14 it was referred to as the first "[Open Source Summit][11]."
|
||||
|
||||
These months were extremely exciting for open source. Every week, it seemed, a new company announced plans to participate. Reading Slashdot became a necessity, even for those like me who were only peripherally involved. I strongly believe that the new term was helpful in enabling this rapid spread into business, which then enabled wider use by the public.
|
||||
|
||||
A quick Google search indicates that "open source" appears more often than "free software," but there still is substantial use of the free software term, which remains useful and should be included when communicating with audiences who prefer it.
|
||||
|
||||
### A happy twinge
|
||||
|
||||
When an [early account][12] of the terminology change written by Eric Raymond was posted on the Open Source Initiative website, I was listed as being at the VA brainstorming meeting, but not as the originator of the term. This was my own fault; I had neglected to tell Eric the details. My impulse was to let it pass and stay in the background, but Todd felt otherwise. He suggested to me that one day I would be glad to be known as the person who coined the name "open source software." He explained the situation to Eric, who promptly updated his site.
|
||||
|
||||
Coming up with a phrase is a small contribution, but I admit to being grateful to those who remember to credit me with it. Every time I hear it, which is very often now, it gives me a little happy twinge.
|
||||
|
||||
The big credit for persuading the community goes to Eric Raymond and Tim O'Reilly, who made it happen. Thanks to them for crediting me, and to Todd Anderson for his role throughout. The above is not a complete account of open source history; apologies to the many key players whose names do not appear. Those seeking a more complete account should refer to the links in this article and elsewhere on the net.
|
||||
|
||||
### About the author
|
||||
|
||||
[![photo of Christine Peterson](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/cp2016_crop2_185.jpg?itok=vUkSjFig)][13] Christine Peterson - Christine Peterson writes, lectures, and briefs the media on coming powerful technologies, especially nanotechnology, artificial intelligence, and longevity. She is Cofounder and Past President of Foresight Institute, the leading nanotech public interest group. Foresight educates the public, technical community, and policymakers on coming powerful technologies and how to guide their long-term impact. She serves on the Advisory Board of the [Machine Intelligence... ][2][more about Christine Peterson][3][More about me][4]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/coining-term-open-source-software
|
||||
|
||||
作者:[ Christine Peterson][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/christine-peterson
|
||||
[1]:https://opensource.com/article/18/2/coining-term-open-source-software?rate=HFz31Mwyy6f09l9uhm5T_OFJEmUuAwpI61FY-fSo3Gc
|
||||
[2]:http://intelligence.org/
|
||||
[3]:https://opensource.com/users/christine-peterson
|
||||
[4]:https://opensource.com/users/christine-peterson
|
||||
[5]:https://opensource.com/user/206091/feed
|
||||
[6]:https://opensource.com/resources/what-open-source
|
||||
[7]:https://opensource.org/osd
|
||||
[8]:https://wiki2.org/en/Alternative_terms_for_free_software
|
||||
[9]:https://opensource.org/
|
||||
[10]:http://www.oreilly.com/pub/pr/636
|
||||
[11]:http://www.oreilly.com/pub/pr/796
|
||||
[12]:https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Alternative_terms_for_free_software.html
|
||||
[13]:https://opensource.com/users/christine-peterson
|
||||
[14]:https://opensource.com/users/christine-peterson
|
||||
[15]:https://opensource.com/users/christine-peterson
|
||||
[16]:https://opensource.com/article/18/2/coining-term-open-source-software#comments
|
@ -0,0 +1,75 @@
|
||||
Open source software: 20 years and counting
|
||||
============================================================
|
||||
|
||||
### On the 20th anniversary of the coining of the term "open source software," how did it rise to dominance and what's next?
|
||||
|
||||
![Open source software: 20 years and counting](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/2cents.png?itok=XlT7kFNY "Open source software: 20 years and counting")
|
||||
Image by : opensource.com
|
||||
|
||||
Twenty years ago, in February 1998, the term "open source" was first applied to software. Soon afterwards, the Open Source Definition was created and the seeds that became the Open Source Initiative (OSI) were sown. As the OSD’s author [Bruce Perens relates][9],
|
||||
|
||||
> “Open source” is the proper name of a campaign to promote the pre-existing concept of free software to business, and to certify licenses to a rule set.
|
||||
|
||||
Twenty years later, that campaign has proven wildly successful, beyond the imagination of anyone involved at the time. Today open source software is literally everywhere. It is the foundation for the internet and the web. It powers the computers and mobile devices we all use, as well as the networks they connect to. Without it, cloud computing and the nascent Internet of Things would be impossible to scale and perhaps to create. It has enabled new ways of doing business to be tested and proven, allowing giant corporations like Google and Facebook to start from the top of a mountain others already climbed.
|
||||
|
||||
Like any human creation, it has a dark side as well. It has also unlocked dystopian possibilities for surveillance and the inevitably consequent authoritarian control. It has provided criminals with new ways to cheat their victims and unleashed the darkness of bullying delivered anonymously and at scale. It allows destructive fanatics to organize in secret without the inconvenience of meeting. All of these are shadows cast by useful capabilities, just as every human tool throughout history has been used both to feed and care and to harm and control. We need to help the upcoming generation strive for irreproachable innovation. As [Richard Feynman said][10],
|
||||
|
||||
> To every man is given the key to the gates of heaven. The same key opens the gates of hell.
|
||||
|
||||
As open source has matured, the way it is discussed and understood has also matured. The first decade was one of advocacy and controversy, while the second was marked by adoption and adaptation.
|
||||
|
||||
1. In the first decade, the key question concerned business models—“how can I contribute freely yet still be paid?”—while during the second, more people asked about governance—“how can I participate yet keep control/not be controlled?”
|
||||
|
||||
2. Open source projects of the first decade were predominantly replacements for off-the-shelf products; in the second decade, they were increasingly components of larger solutions.
|
||||
|
||||
3. Projects of the first decade were often run by informal groups of individuals; in the second decade, they were frequently run by charities created on a project-by-project basis.
|
||||
|
||||
4. Open source developers of the first decade were frequently devoted to a single project and often worked in their spare time. In the second decade, they were increasingly employed to work on a specific technology—professional specialists.
|
||||
|
||||
5. While open source was always intended as a way to promote software freedom, during the first decade, conflict arose with those preferring the term “free software.” In the second decade, this conflict was largely ignored as open source adoption accelerated.
|
||||
|
||||
So what will the third decade bring?
|
||||
|
||||
1. _The complexity business model_ —The predominant business model will involve monetizing the solution of the complexity arising from the integration of many open source parts, especially from deployment and scaling. Governance needs will reflect this.
|
||||
|
||||
2. _Open source mosaics_ —Open source projects will be predominantly families of component parts, together with being built into stacks of components. The resultant larger solutions will be a mosaic of open source parts.
|
||||
|
||||
3. _Families of projects_ —More and more projects will be hosted by consortia/trade associations like the Linux Foundation and OpenStack, and by general-purpose charities like Apache and the Software Freedom Conservancy.
|
||||
|
||||
4. _Professional generalists_ —Open source developers will increasingly be employed to integrate many technologies into complex solutions and will contribute to a range of projects.
|
||||
|
||||
5. _Software freedom redux_ —As new problems arise, software freedom (the application of the Four Freedoms to user and developer flexibility) will increasingly be applied to identify solutions that work for collaborative communities and independent deployers.
|
||||
|
||||
I’ll be expounding on all this in conference keynotes around the world during 2018. Watch for [OSI’s 20th Anniversary World Tour][11]!
|
||||
|
||||
_This article was originally published on [Meshed Insights Ltd.][2] and is reprinted with permission. This article, as well as my work at OSI, is supported by [Patreon patrons][3]._
|
||||
|
||||
### About the author
|
||||
|
||||
[![Simon Phipps (smiling)](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-2305.jpg?itok=CefW_OYh)][12] Simon Phipps - Computer industry and open source veteran Simon Phipps started [Public Software][4], a European host for open source projects, and volunteers as President at OSI and a director at The Document Foundation. His posts are sponsored by [Patreon patrons][5] - become one if you'd like to see more! Over a 30+ year career he has been involved at a strategic level in some of the world’s leading... [more about Simon Phipps][6][More about me][7]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/open-source-20-years-and-counting
|
||||
|
||||
作者:[Simon Phipps ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/simonphipps
|
||||
[1]:https://opensource.com/article/18/2/open-source-20-years-and-counting?rate=TZxa8jxR6VBcYukor0FDsTH38HxUrr7Mt8QRcn0sC2I
|
||||
[2]:https://meshedinsights.com/2017/12/21/20-years-and-counting/
|
||||
[3]:https://patreon.com/webmink
|
||||
[4]:https://publicsoftware.eu/
|
||||
[5]:https://patreon.com/webmink
|
||||
[6]:https://opensource.com/users/simonphipps
|
||||
[7]:https://opensource.com/users/simonphipps
|
||||
[8]:https://opensource.com/user/12532/feed
|
||||
[9]:https://perens.com/2017/09/26/on-usage-of-the-phrase-open-source/
|
||||
[10]:https://www.brainpickings.org/2013/07/19/richard-feynman-science-morality-poem/
|
||||
[11]:https://opensource.org/node/905
|
||||
[12]:https://opensource.com/users/simonphipps
|
||||
[13]:https://opensource.com/users/simonphipps
|
||||
[14]:https://opensource.com/users/simonphipps
|
@ -0,0 +1,102 @@
|
||||
How to Install Tripwire IDS (Intrusion Detection System) on Linux
|
||||
============================================================
|
||||
|
||||
|
||||
Tripwire is a popular Linux Intrusion Detection System (IDS) that runs on systems to detect whether unauthorized filesystem changes have occurred over time.
|
||||
|
||||
In the CentOS and RHEL distributions, tripwire is not part of the official repositories. However, the tripwire package can be installed via the [EPEL repository][1].
|
||||
|
||||
To begin, first install the EPEL repository on your CentOS or RHEL system by issuing the below command.
|
||||
|
||||
```
|
||||
# yum install epel-release
|
||||
```
|
||||
|
||||
After you’ve installed the EPEL repository, make sure you update the system with the following command.
|
||||
|
||||
```
|
||||
# yum update
|
||||
```
|
||||
|
||||
After the update process finishes, install Tripwire IDS software by executing the below command.
|
||||
|
||||
```
|
||||
# yum install tripwire
|
||||
```
|
||||
|
||||
Fortunately, tripwire is part of the default Ubuntu and Debian repositories and can be installed with the following commands.
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
$ sudo apt install tripwire
|
||||
```
|
||||
|
||||
On Ubuntu and Debian, the tripwire installer will ask you to choose and confirm a site key and a local key passphrase. Tripwire uses these keys to secure its configuration files.
|
||||
|
||||
[![Create Tripwire Site and Local Key](https://www.tecmint.com/wp-content/uploads/2018/01/Create-Site-and-Local-key.png)][2]
|
||||
|
||||
Create Tripwire Site and Local Key
|
||||
|
||||
On CentOS and RHEL, you need to create the tripwire keys with the below command and supply passphrases for the site key and local key.
|
||||
|
||||
```
|
||||
# tripwire-setup-keyfiles
|
||||
```
|
||||
[![Create Tripwire Keys](https://www.tecmint.com/wp-content/uploads/2018/01/Create-Tripwire-Keys.png)][3]
|
||||
|
||||
Create Tripwire Keys
|
||||
|
||||
In order to validate your system, you need to initialize the Tripwire database with the following command. Because the database hasn’t been initialized yet, tripwire will display a lot of false-positive warnings.
|
||||
|
||||
```
|
||||
# tripwire --init
|
||||
```
|
||||
[![Initialize Tripwire Database](https://www.tecmint.com/wp-content/uploads/2018/01/Initialize-Tripwire-Database.png)][4]
|
||||
|
||||
Initialize Tripwire Database
|
||||
|
||||
Finally, generate a tripwire system report in order to check the configuration by issuing the below command. Use the `--help` switch to list all options of the tripwire check command.
|
||||
|
||||
```
|
||||
# tripwire --check --help
|
||||
# tripwire --check
|
||||
```
|
||||
|
||||
After the tripwire check command completes, review the report by opening the file with the `.twr` extension from the /var/lib/tripwire/report/ directory in your favorite text editor; before doing that, you need to convert it to a text file.
|
||||
|
||||
```
|
||||
# twprint --print-report --twrfile /var/lib/tripwire/report/tecmint-20170727-235255.twr > report.txt
|
||||
# vi report.txt
|
||||
```
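If you run checks regularly, the plain-text reports can get long. The sketch below pulls a per-category count out of a report; note that the line format it matches (e.g. `Modified: "/etc/hosts"`) is a simplified assumption about the twprint output, so adapt the pattern to the report layout you actually see:

```python
import re

def summarize_report(text):
    """Count objects listed as Added/Modified/Removed in a report.

    NOTE: the matched line format is a simplified assumption about the
    twprint text output -- adjust the regex to your actual reports.
    """
    counts = {"Added": 0, "Modified": 0, "Removed": 0}
    for line in text.splitlines():
        match = re.match(r'\s*(Added|Modified|Removed):\s*"(.+)"', line)
        if match:
            counts[match.group(1)] += 1
    return counts

sample = '''
Added:    "/etc/new.conf"
Modified: "/etc/hosts"
Modified: "/etc/passwd"
'''
print(summarize_report(sample))
# {'Added': 1, 'Modified': 2, 'Removed': 0}
```

A summary like this is handy for a nightly cron job that mails you only when the counts are nonzero.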
|
||||
[![Tripwire System Report](https://www.tecmint.com/wp-content/uploads/2018/01/Tripwire-System-Report.png)][5]
|
||||
|
||||
Tripwire System Report
|
||||
|
||||
That’s it! You have successfully installed Tripwire on your Linux server. I hope you can now easily configure your [Tripwire IDS][6].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
I'm a computer-addicted guy, a fan of open source and Linux-based system software, with about 4 years of experience with Linux desktop and server distributions and bash scripting.
|
||||
|
||||
-------
|
||||
|
||||
via: https://www.tecmint.com/install-tripwire-ids-intrusion-detection-system-on-linux/
|
||||
|
||||
作者:[ Matei Cezar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/cezarmatei/
|
||||
[1]:https://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
|
||||
[2]:https://www.tecmint.com/wp-content/uploads/2018/01/Create-Site-and-Local-key.png
|
||||
[3]:https://www.tecmint.com/wp-content/uploads/2018/01/Create-Tripwire-Keys.png
|
||||
[4]:https://www.tecmint.com/wp-content/uploads/2018/01/Initialize-Tripwire-Database.png
|
||||
[5]:https://www.tecmint.com/wp-content/uploads/2018/01/Tripwire-System-Report.png
|
||||
[6]:https://www.tripwire.com/
|
||||
[7]:https://www.tecmint.com/author/cezarmatei/
|
||||
[8]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[9]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -0,0 +1,153 @@
|
||||
Building a Linux-based HPC system on the Raspberry Pi with Ansible
|
||||
============================================================
|
||||
|
||||
### Create a high-performance computing cluster with low-cost hardware and open source software.
|
||||
|
||||
![Building a Linux-based HPC system on the Raspberry Pi with Ansible](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 "Building a Linux-based HPC system on the Raspberry Pi with Ansible")
|
||||
Image by : opensource.com
|
||||
|
||||
In my [previous article for Opensource.com][14], I introduced the [OpenHPC][15] project, which aims to accelerate innovation in high-performance computing (HPC). This article goes a step further by using OpenHPC's capabilities to build a small HPC system. To call it an _HPC system_ might sound bigger than it is, so maybe it is better to say this is a system based on the [Cluster Building Recipes][16] published by the OpenHPC project.
|
||||
|
||||
The resulting cluster consists of two Raspberry Pi 3 systems acting as compute nodes and one virtual machine acting as the master node:
|
||||
|
||||
|
||||
![Map of HPC cluster](https://opensource.com/sites/default/files/u128651/hpc_with_pi-1.png "Map of HPC cluster")
|
||||
|
||||
My master node is running CentOS on x86_64 and my compute nodes are running a slightly modified CentOS on aarch64.
|
||||
|
||||
This is what the setup looks like in real life:
|
||||
|
||||
|
||||
![HPC hardware setup](https://opensource.com/sites/default/files/u128651/hpc_with_pi-2.jpg "HPC hardware setup")
|
||||
|
||||
To set up my system like an HPC system, I followed some of the steps from OpenHPC's Cluster Building Recipes [install guide for CentOS 7.4/aarch64 + Warewulf + Slurm][17] (PDF). This recipe includes provisioning instructions using [Warewulf][18]; because I manually installed my three systems, I skipped the Warewulf parts and created an [Ansible playbook][19] for the steps I took.
|
||||
|
||||
|
||||
Once my cluster was set up by the [Ansible][26] playbooks, I could start to submit jobs to my resource manager. The resource manager, [Slurm][27] in my case, is the instance in the cluster that decides where and when my jobs are executed. One way to start a simple job on the cluster is:
|
||||
```
|
||||
[ohpc@centos01 ~]$ srun hostname
|
||||
calvin
|
||||
```
|
||||
|
||||
If I need more resources, I can tell Slurm that I want to run my command on eight CPUs:
|
||||
|
||||
```
|
||||
[ohpc@centos01 ~]$ srun -n 8 hostname
|
||||
hobbes
|
||||
hobbes
|
||||
hobbes
|
||||
hobbes
|
||||
calvin
|
||||
calvin
|
||||
calvin
|
||||
calvin
|
||||
```
|
||||
|
||||
In the first example, Slurm ran the specified command (`hostname`) on a single CPU, and in the second example Slurm ran the command on eight CPUs. One of my compute nodes is named `calvin` and the other is named `hobbes`; that can be seen in the output of the above commands. Each of the compute nodes is a Raspberry Pi 3 with four CPU cores.
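The placement above (four tasks on `hobbes`, then four on `calvin`) reflects how tasks get packed onto a node's cores before the next node is used. The toy function below is only an illustration of that placement idea under that assumption, not how the real Slurm scheduler works:

```python
def block_distribute(ntasks, nodes):
    """Assign tasks to nodes by filling one node's cores before moving
    to the next -- a simplified picture, not a real scheduler."""
    assignment = []
    for name, cores in nodes:
        free = cores
        while free > 0 and len(assignment) < ntasks:
            assignment.append(name)
            free -= 1
    return assignment

# Two Raspberry Pi 3 compute nodes with four cores each:
print(block_distribute(8, [("hobbes", 4), ("calvin", 4)]))
# ['hobbes', 'hobbes', 'hobbes', 'hobbes', 'calvin', 'calvin', 'calvin', 'calvin']
```

With only two tasks, both would land on the first node, which matches the single-node output of the first `srun` example.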
|
||||
|
||||
Another way to submit jobs to my cluster is the command `sbatch`, which can be used to execute scripts with the output written to a file instead of my terminal.
|
||||
|
||||
```
|
||||
[ohpc@centos01 ~]$ cat script1.sh
|
||||
#!/bin/sh
|
||||
date
|
||||
hostname
|
||||
sleep 10
|
||||
date
|
||||
[ohpc@centos01 ~]$ sbatch script1.sh
|
||||
Submitted batch job 101
|
||||
```
|
||||
|
||||
This will create an output file called `slurm-101.out` with the following content:
|
||||
|
||||
```
|
||||
Mon 11 Dec 16:42:31 UTC 2017
|
||||
calvin
|
||||
Mon 11 Dec 16:42:41 UTC 2017
|
||||
```
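The two timestamps bracket the job's run time, matching the `sleep 10` in the script. The elapsed seconds can be computed with GNU `date` (the timestamps are reordered slightly so that `date -d` can parse them):

```shell
# Timestamps from slurm-101.out, in a form GNU date accepts.
start='11 Dec 2017 16:42:31 UTC'
end='11 Dec 2017 16:42:41 UTC'
echo "$(( $(date -d "$end" +%s) - $(date -d "$start" +%s) )) seconds"
```

This should print 10 seconds, the duration of the `sleep` in the batch script.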
|
||||
|
||||
To demonstrate the basic functionality of the resource manager, simple and serial command line tools are suitable—but a bit boring after doing all the work to set up an HPC-like system.
|
||||
|
||||
A more interesting application is running an [Open MPI][20] parallelized job on all available CPUs on the cluster. I'm using an application based on [Game of Life][21], which was used in a [video][22] called "Running Game of Life across multiple architectures with Red Hat Enterprise Linux." In addition to the previously used MPI-based Game of Life implementation, the version now running on my cluster colors the cells for each involved host differently. The following script starts the application interactively with a graphical output:
|
||||
|
||||
```
|
||||
$ cat life.mpi
|
||||
#!/bin/bash
|
||||
|
||||
module load gnu6 openmpi3
|
||||
|
||||
if [[ "$SLURM_PROCID" != "0" ]]; then
|
||||
exit
|
||||
fi
|
||||
|
||||
mpirun ./mpi_life -a -p -b
|
||||
```
|
||||
|
||||
I start the job with the following command, which tells Slurm to allocate eight CPUs for the job:
|
||||
|
||||
```
|
||||
$ srun -n 8 --x11 life.mpi
|
||||
```
|
||||
|
||||
For demonstration purposes, the job has a graphical interface that shows the current result of the calculation:
|
||||
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/hpc_with_pi-3.png)
|
||||
|
||||
The position of the red cells is calculated on one of the compute nodes, and the green cells are calculated on the other compute node. I can also tell the Game of Life program to color the cell for each used CPU (there are four per compute node) differently, which leads to the following output:
|
||||
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/hpc_with_pi-4.png)
|
||||
|
||||
Thanks to the installation recipes and the software packages provided by OpenHPC, I was able to set up two compute nodes and a master node in an HPC-type configuration. I can submit jobs to my resource manager, and I can use the software provided by OpenHPC to start MPI applications utilizing all my Raspberry Pis' CPUs.
|
||||
|
||||
* * *
|
||||
|
||||
_To learn more about using OpenHPC to build a Raspberry Pi cluster, please attend Adrian Reber's talks at [DevConf.cz 2018][10], January 26-28, in Brno, Czech Republic, and at the [CentOS Dojo 2018][11], on February 2, in Brussels._
|
||||
|
||||
### About the author
|
||||
|
||||
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/gotchi-square.png?itok=PJKu7LHn)][23] Adrian Reber - Adrian is a Senior Software Engineer at Red Hat and has been migrating processes since at least 2010. He started out migrating processes in a high performance computing environment, and at some point he had migrated so many processes that he earned a PhD for it; since joining Red Hat, he has been migrating containers. Occasionally he still migrates single processes and remains interested in high performance computing topics.[More about me][12]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/how-build-hpc-system-raspberry-pi-and-openhpc
|
||||
|
||||
作者:[Adrian Reber ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/adrianreber
|
||||
[1]:https://opensource.com/resources/what-are-linux-containers?utm_campaign=containers&intcmp=70160000000h1s6AAA
|
||||
[2]:https://opensource.com/resources/what-docker?utm_campaign=containers&intcmp=70160000000h1s6AAA
|
||||
[3]:https://opensource.com/resources/what-is-kubernetes?utm_campaign=containers&intcmp=70160000000h1s6AAA
|
||||
[4]:https://developers.redhat.com/blog/2016/01/13/a-practical-introduction-to-docker-container-terminology/?utm_campaign=containers&intcmp=70160000000h1s6AAA
|
||||
[5]:https://opensource.com/file/384031
|
||||
[6]:https://opensource.com/file/384016
|
||||
[7]:https://opensource.com/file/384021
|
||||
[8]:https://opensource.com/file/384026
|
||||
[9]:https://opensource.com/article/18/1/how-build-hpc-system-raspberry-pi-and-openhpc?rate=l9n6B6qRcR20LJyXEoUoWEZ4mb2nDc9sFZ1YSPc60vE
|
||||
[10]:https://devconfcz2018.sched.com/event/DJYi/openhpc-introduction
|
||||
[11]:https://wiki.centos.org/Events/Dojo/Brussels2018
|
||||
[12]:https://opensource.com/users/adrianreber
|
||||
[13]:https://opensource.com/user/188446/feed
|
||||
[14]:https://opensource.com/article/17/11/openhpc
|
||||
[15]:https://openhpc.community/
|
||||
[16]:https://openhpc.community/downloads/
|
||||
[17]:https://github.com/openhpc/ohpc/releases/download/v1.3.3.GA/Install_guide-CentOS7-Warewulf-SLURM-1.3.3-aarch64.pdf
|
||||
[18]:https://en.wikipedia.org/wiki/Warewulf
|
||||
[19]:http://people.redhat.com/areber/openhpc/ansible/
|
||||
[20]:https://www.open-mpi.org/
|
||||
[21]:https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
|
||||
[22]:https://www.youtube.com/watch?v=n8DvxMcOMXk
|
||||
[23]:https://opensource.com/users/adrianreber
|
||||
[24]:https://opensource.com/users/adrianreber
|
||||
[25]:https://opensource.com/users/adrianreber
|
||||
[26]:https://www.ansible.com/
|
||||
[27]:https://slurm.schedmd.com/
|
||||
[28]:https://opensource.com/tags/raspberry-pi
|
||||
[29]:https://opensource.com/tags/programming
|
||||
[30]:https://opensource.com/tags/linux
|
||||
[31]:https://opensource.com/tags/ansible
|
@ -1,109 +0,0 @@
|
||||
translating by wenwensnow
|
||||
Linux whereis Command Explained for Beginners (5 Examples)
|
||||
======
|
||||
|
||||
Sometimes, while working on the command line, we just need to quickly find out the location of the binary file for a command. Yes, the [find][1] command is an option in this case, but it's a bit time consuming and will likely produce some non-desired results as well. There's a specific command that's designed for this purpose: **whereis**.
|
||||
|
||||
In this article, we will discuss the basics of this command using some easy-to-understand examples. But before we do that, it's worth mentioning that all examples in this tutorial have been tested on Ubuntu 16.04 LTS.
|
||||
|
||||
### Linux whereis command
|
||||
|
||||
The whereis command lets users locate binary, source, and manual page files for a command. Following is its syntax:
|
||||
|
||||
```
|
||||
whereis [options] [-BMS directory... -f] name...
|
||||
```
|
||||
|
||||
And here's how the tool's man page explains it:
|
||||
```
|
||||
whereis locates the binary, source and manual files for the specified command names. The supplied
|
||||
names are first stripped of leading pathname components and any (single) trailing extension of the
|
||||
form .ext (for example: .c) Prefixes of s. resulting from use of source code control are also dealt
|
||||
with. whereis then attempts to locate the desired program in the standard Linux places, and in the
|
||||
places specified by $PATH and $MANPATH.
|
||||
```
|
||||
|
||||
The following Q&A-styled examples should give you a good idea of how the whereis command works.
|
||||
|
||||
### Q1. How to find location of binary file using whereis?
|
||||
|
||||
Suppose you want to find the location for, let's say, the whereis command itself. Then here's how you can do that:
|
||||
|
||||
```
|
||||
whereis whereis
|
||||
```
|
||||
|
||||
[![How to find location of binary file using whereis][2]][3]
|
||||
|
||||
Note that the first path in the output is what you are looking for. The whereis command also produces paths for manual pages and source code (if available, which isn't in this case). So the second path you see in the output above is the path to the whereis manual file(s).
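If you only need the binary path in a script, it can be cut out of the output. Here is a small sketch, where the sample string stands in for real `whereis` output (exact paths vary by distribution):

```shell
# Typical `whereis whereis` output: the command name, the binary path,
# then any manual page paths.
line='whereis: /usr/bin/whereis /usr/share/man/man1/whereis.1.gz'
echo "${line#*: }" | awk '{print $1}'   # keep only the first path
```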
|
||||
|
||||
### Q2. How to specifically search for binaries, manuals, or source code?
|
||||
|
||||
If you want to search specifically for, say binary, then you can use the **-b** command line option. For example:
|
||||
|
||||
```
|
||||
whereis -b cp
|
||||
```
|
||||
|
||||
[![How to specifically search for binaries, manuals, or source code][4]][5]
|
||||
|
||||
Similarly, the **-m** and **-s** options are used in case you want to find manuals and sources.
|
||||
|
||||
### Q3. How to limit whereis search as per requirement?
|
||||
|
||||
By default whereis tries to find files from hard-coded paths, which are defined with glob patterns. However, if you want, you can limit the search using specific command line options. For example, if you want whereis to only search for binary files in /usr/bin, then you can do this using the **-B** command line option.
|
||||
|
||||
```
|
||||
whereis -B /usr/bin/ -f cp
|
||||
```
|
||||
|
||||
**Note** : Since you can pass multiple paths this way, the **-f** command line option terminates the directory list and signals the start of file names.
|
||||
|
||||
Similarly, if you want to limit manual or source searches, you can use the **-M** and **-S** command line options.
|
||||
|
||||
### Q4. How to see paths that whereis uses for search?
|
||||
|
||||
There's an option for this as well. Just run the command with **-l**.
|
||||
|
||||
```
|
||||
whereis -l
|
||||
```
|
||||
|
||||
Here is the list (partial) it produced for us:
|
||||
|
||||
[![How to see paths that whereis uses for search][6]][7]
|
||||
|
||||
### Q5. How to find command names with unusual entries?
|
||||
|
||||
For whereis, a command becomes unusual if it does not have just one entry of each explicitly requested type. For example, commands with no documentation available, or those with documentation in multiple places are considered unusual. The **-u** command line option, when used, makes whereis show the command names that have unusual entries.
|
||||
|
||||
For example, the following command should display files in the current directory which have no documentation file, or more than one.
|
||||
|
||||
```
|
||||
whereis -m -u *
|
||||
```
|
||||
|
||||
### Conclusion
|
||||
|
||||
Agreed, whereis is not the kind of command line tool that you'll require very frequently. But when the situation arises, it definitely makes your life easier. We've covered some of the important command line options the tool offers, so do practice them. For more info, head to its [man page][8].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/linux-whereis-command/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com
|
||||
[1]:https://www.howtoforge.com/tutorial/linux-find-command/
|
||||
[2]:https://www.howtoforge.com/images/command-tutorial/whereis-basic-usage.png
|
||||
[3]:https://www.howtoforge.com/images/command-tutorial/big/whereis-basic-usage.png
|
||||
[4]:https://www.howtoforge.com/images/command-tutorial/whereis-b-option.png
|
||||
[5]:https://www.howtoforge.com/images/command-tutorial/big/whereis-b-option.png
|
||||
[6]:https://www.howtoforge.com/images/command-tutorial/whereis-l.png
|
||||
[7]:https://www.howtoforge.com/images/command-tutorial/big/whereis-l.png
|
||||
[8]:https://linux.die.net/man/1/whereis
|
@ -0,0 +1,106 @@
|
||||
An introduction to the Web::Simple Perl module, a minimalist web framework
|
||||
============================================================
|
||||
|
||||
### Perl module Web::Simple is easy to learn and packs a big enough punch for a variety of one-offs and smaller services.
|
||||
|
||||
|
||||
![An introduction to the Web::Simple Perl module, a minimalist web framework](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG "An introduction to the Web::Simple Perl module, a minimalist web framework")
|
||||
Image credits : [You as a Machine][10]. Modified by Rikki Endsley. [CC BY-SA 2.0][11].
|
||||
|
||||
One of the more-prominent members of the Perl community is [Matt Trout][12], technical director at [Shadowcat Systems][13]. He's been building core tools for Perl applications for years, including being a co-maintainer of the [Catalyst][14] MVC (Model, View, Controller) web framework, creator of the [DBIx::Class][15] object-management system, and much more. In person, he's energetic, interesting, brilliant, and sometimes hard to keep up with. When Matt writes code…well, think of a runaway chainsaw, with the trigger taped down and the safety features disabled. He's off and running, and you never quite know what will come out. Two things are almost certain: the module will precisely fit the purpose Matt has in mind, and it will show up on CPAN for others to use.
|
||||
|
||||
|
||||
One of Matt's special-purpose modules is [Web::Simple][23]. Touted as "a quick and easy way to build simple web applications," it is a stripped-down, minimalist web framework with an easy-to-learn interface. Web::Simple is not at all designed for a large-scale application; however, it may be ideal for a small tool that does one or two things in a lower-traffic environment. I can also envision it being used for rapid prototyping if you wanted to create quick wireframes of a new application for demonstrations.
|
||||
|
||||
### Installation, and a quick "Howdy!"
|
||||
|
||||
You can install the module using `cpan` or `cpanm`. Once you've got it installed, you're ready to write simple web apps without having to hassle with managing the connections or any of that—just your functionality. Here's a quick example:
|
||||
|
||||
```
|
||||
#!/usr/bin/perl
|
||||
package HelloReader;
|
||||
use Web::Simple;
|
||||
|
||||
sub dispatch_request {
|
||||
GET => sub {
|
||||
[ 200, [ 'Content-type', 'text/plain' ], [ 'Howdy, Opensource.com reader!' ] ]
|
||||
},
|
||||
'' => sub {
|
||||
[ 405, [ 'Content-type', 'text/plain' ], [ 'You cannot do that, friend. Sorry.' ] ]
|
||||
}
|
||||
}
|
||||
|
||||
HelloReader->run_if_script;
|
||||
```
|
||||
|
||||
There are a couple of things to notice right off. For one, I didn't `use strict` and `use warnings` like I usually would. Web::Simple imports those for you, so you don't have to. It also imports [Moo][16], a minimalist OO framework, so if you know Moo and want to use it here, you can! The heart of the system lies in the `dispatch_request` method, which you must define in your application. Each entry in the method is a match string, followed by a subroutine to respond if that string matches. The subroutine must return an array reference containing status, headers, and content of the reply to the request.
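The three elements of that array reference map directly onto a raw HTTP response: the status line, the header lines, and the body. As a rough illustration (sketched in shell rather than Perl), the 200 entry above corresponds to a response like:

```shell
# Status, headers, and body from the [ 200, [...], [...] ] triple,
# written out as the wire-level HTTP response.
printf 'HTTP/1.1 200 OK\r\n'
printf 'Content-type: text/plain\r\n'
printf '\r\n'
printf 'Howdy, Opensource.com reader!\n'
```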
|
||||
|
||||
### Matching
|
||||
|
||||
The matching system in Web::Simple is powerful, allowing for complicated matches, passing parameters in a URL, query parameters, and extension matches, in pretty much any combination you want. As you can see in the example above, starting with a capital letter will match on the request method, and you can combine that with a path match easily:
|
||||
|
||||
```
|
||||
'GET + /person/*' => sub {
|
||||
my ($self, $person) = @_;
|
||||
# write some code to retrieve and display a person
|
||||
},
|
||||
'POST + /person/* + %*' => sub {
|
||||
my ($self, $person, $params) = @_;
|
||||
# write some code to modify a person, perhaps
|
||||
}
|
||||
```
|
||||
|
||||
In the latter case, the third part of the match indicates that we should pick up all the POST parameters and put them in a hashref called `$params` for use by the subroutine. Using `?` instead of `%` in that part of the match would pick up query parameters, as normally used in a GET request. There's also a useful exported subroutine called `redispatch_to`. This tool lets you redirect, without using a 3xx redirect; it's handled internally, invisible to the user. So:
|
||||
|
||||
```
|
||||
'GET + /some/url' => sub {
|
||||
redispatch_to '/some/other/url';
|
||||
}
|
||||
```
|
||||
|
||||
A GET request to `/some/url` would get handled as if it was sent to `/some/other/url`, without a redirect, and the user won't see a redirect in their browser.
|
||||
|
||||
I've just scratched the surface with this module. If you're looking for something production-ready for larger projects, you'll be better off with [Dancer][17] or [Catalyst][18]. But with its light weight and built-in Moo integration, Web::Simple packs a big enough punch for a variety of one-offs and smaller services.
|
||||
|
||||
### About the author
|
||||
|
||||
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/dsc_0028.jpg?itok=RS0GBh25)][19] Ruth Holloway - Ruth Holloway has been a system administrator and software developer for a long, long time, getting her professional start on a VAX 11/780, way back when. She spent a lot of her career (so far) serving the technology needs of libraries, and has been a contributor since 2008 to the Koha open source library automation suite. Ruth is currently a Perl Developer at cPanel in Houston, and also serves as chief of staff for an obnoxious cat. In her copious free time, she occasionally reviews old romance... [more about Ruth Holloway][7][More about me][8]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/introduction-websimple-perl-module-minimalist-web-framework
|
||||
|
||||
作者:[Ruth Holloway ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/druthb
|
||||
[1]:https://opensource.com/tags/python?src=programming_resource_menu1
|
||||
[2]:https://opensource.com/tags/javascript?src=programming_resource_menu2
|
||||
[3]:https://opensource.com/tags/perl?src=programming_resource_menu3
|
||||
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu4
|
||||
[5]:http://perldoc.perl.org/functions/package.html
|
||||
[6]:https://opensource.com/article/18/1/introduction-websimple-perl-module-minimalist-web-framework?rate=ICN35y076ElpInDKoMqp-sN6f4UVF-n2Qt6dL6lb3kM
|
||||
[7]:https://opensource.com/users/druthb
|
||||
[8]:https://opensource.com/users/druthb
|
||||
[9]:https://opensource.com/user/36051/feed
|
||||
[10]:https://www.flickr.com/photos/youasamachine/8025582590/in/photolist-decd6C-7pkccp-aBfN9m-8NEffu-3JDbWb-aqf5Tx-7Z9MTZ-rnYTRu-3MeuPx-3yYwA9-6bSLvd-irmvxW-5Asr4h-hdkfCA-gkjaSQ-azcgct-gdV5i4-8yWxCA-9G1qDn-5tousu-71V8U2-73D4PA-iWcrTB-dDrya8-7GPuxe-5pNb1C-qmnLwy-oTxwDW-3bFhjL-f5Zn5u-8Fjrua-bxcdE4-ddug5N-d78G4W-gsYrFA-ocrBbw-pbJJ5d-682rVJ-7q8CbF-7n7gDU-pdfgkJ-92QMx2-aAmM2y-9bAGK1-dcakkn-8rfyTz-aKuYvX-hqWSNP-9FKMkg-dyRPkY
|
||||
[11]:https://creativecommons.org/licenses/by/2.0/
|
||||
[12]:https://shadow.cat/resources/bios/matt_short/
|
||||
[13]:https://shadow.cat/
|
||||
[14]:https://metacpan.org/pod/Catalyst
|
||||
[15]:https://metacpan.org/pod/DBIx::Class
|
||||
[16]:https://metacpan.org/pod/Moo
|
||||
[17]:http://perldancer.org/
|
||||
[18]:http://www.catalystframework.org/
|
||||
[19]:https://opensource.com/users/druthb
|
||||
[20]:https://opensource.com/users/druthb
|
||||
[21]:https://opensource.com/users/druthb
|
||||
[22]:https://opensource.com/article/18/1/introduction-websimple-perl-module-minimalist-web-framework#comments
|
||||
[23]:https://metacpan.org/pod/Web::Simple
|
||||
[24]:https://opensource.com/tags/perl
|
||||
[25]:https://opensource.com/tags/programming
|
||||
[26]:https://opensource.com/tags/perl-column
|
||||
[27]:https://opensource.com/tags/web-development
|
@ -0,0 +1,280 @@
|
||||
Running a Python application on Kubernetes
|
||||
============================================================
|
||||
|
||||
### This step-by-step tutorial takes you through the process of deploying a simple Python application on Kubernetes.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag)
|
||||
Image by : opensource.com
|
||||
|
||||
Kubernetes is an open source platform that offers deployment, maintenance, and scaling features. It simplifies management of containerized Python applications while providing portability, extensibility, and self-healing capabilities.
|
||||
|
||||
Whether your Python applications are simple or more complex, Kubernetes lets you efficiently deploy and scale them, seamlessly rolling out new features while limiting resources to only those required.
|
||||
|
||||
In this article, I will describe the process of deploying a simple Python application to Kubernetes, including:
|
||||
|
||||
* Creating Python container images
|
||||
|
||||
* Publishing the container images to an image registry
|
||||
|
||||
* Working with persistent volume
|
||||
|
||||
* Deploying the Python application to Kubernetes
|
||||
|
||||
### Requirements
|
||||
|
||||
You will need Docker, kubectl, and this [source code][10].
|
||||
|
||||
Docker is an open platform to build and ship distributed applications. To install Docker, follow the [official documentation][11]. To verify that Docker runs on your system:
|
||||
|
||||
```
|
||||
$ docker info
|
||||
Containers: 0
|
||||
Images: 289
|
||||
Storage Driver: aufs
|
||||
Root Dir: /var/lib/docker/aufs
|
||||
Dirs: 289
|
||||
Execution Driver: native-0.2
|
||||
Kernel Version: 3.16.0-4-amd64
|
||||
Operating System: Debian GNU/Linux 8 (jessie)
|
||||
WARNING: No memory limit support
|
||||
WARNING: No swap limit support
|
||||
```
|
||||
|
||||
kubectl is a command-line interface for executing commands against a Kubernetes cluster. Run the shell script below to install kubectl:
|
||||
|
||||
```
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
|
||||
```
|
||||
|
||||
Deploying to Kubernetes requires a containerized application. Let's review containerizing Python applications.
|
||||
|
||||
### Containerization at a glance
|
||||
|
||||
Containerization involves packaging an application and its dependencies in a container with its own view of the operating system. Unlike full machine virtualization, containers share the host kernel, which keeps them lightweight while still allowing an application to run on any machine without concerns about dependencies.
|
||||
|
||||
Roman Gaponov's [article][12] serves as a reference. Let's start by creating a container image for our Python code.
|
||||
|
||||
### Create a Python container image
|
||||
|
||||
To create these images, we will use Docker, which enables us to deploy applications inside isolated Linux software containers. Docker is able to automatically build images using instructions from a Docker file.
|
||||
|
||||
This is a Docker file for our Python application:
|
||||
|
||||
```
|
||||
FROM python:3.6
|
||||
MAINTAINER XenonStack
|
||||
|
||||
# Creating Application Source Code Directory
|
||||
RUN mkdir -p /k8s_python_sample_code/src
|
||||
|
||||
# Setting Home Directory for containers
|
||||
WORKDIR /k8s_python_sample_code/src
|
||||
|
||||
# Installing python dependencies
|
||||
COPY requirements.txt /k8s_python_sample_code/src
|
||||
RUN pip install --no-cache-dir -r requirements.txt
|
||||
|
||||
# Copying src code to Container
|
||||
COPY . /k8s_python_sample_code/src/app
|
||||
|
||||
# Application Environment variables
|
||||
ENV APP_ENV development
|
||||
|
||||
# Exposing Ports
|
||||
EXPOSE 5035
|
||||
|
||||
# Setting Persistent data
|
||||
VOLUME ["/app-data"]
|
||||
|
||||
# Running Python Application
|
||||
CMD ["python", "app.py"]
|
||||
```
|
||||
|
||||
This Docker file contains instructions to run our sample Python code. It uses the Python 3.6 development environment.
|
||||
|
||||
### Build a Python Docker image
|
||||
|
||||
We can now build the Docker image from these instructions using this command:
|
||||
|
||||
```
|
||||
docker build -t k8s_python_sample_code .
|
||||
```
|
||||
|
||||
This command creates a Docker image for our Python application.
|
||||
|
||||
### Publish the container images
|
||||
|
||||
We can publish our Python container image to different private/public cloud repositories, like Docker Hub, AWS ECR, Google Container Registry, etc. For this tutorial, we'll use Docker Hub.
|
||||
|
||||
Before publishing the image, we need to tag it with a version:
|
||||
|
||||
```
|
||||
docker tag k8s_python_sample_code:latest k8s_python_sample_code:0.1
|
||||
```
|
||||
|
||||
### Push the image to a cloud repository
|
||||
|
||||
Using a Docker registry other than Docker Hub to store images requires you to add that container registry to the local Docker daemon and Kubernetes Docker daemons. You can look up this information for the different cloud registries. We'll use Docker Hub in this example; note that pushing to Docker Hub normally requires logging in with `docker login` and tagging the image with your Docker Hub username (for example, `<username>/k8s_python_sample_code`).
|
||||
|
||||
Execute this Docker command to push the image:
|
||||
|
||||
```
|
||||
docker push k8s_python_sample_code
|
||||
```
|
||||
|
||||
### Working with CephFS persistent storage
|
||||
|
||||
Kubernetes supports many persistent storage providers, including AWS EBS, CephFS, GlusterFS, Azure Disk, NFS, etc. I will cover Kubernetes persistent storage with CephFS.
|
||||
|
||||
To provide CephFS-backed persistent storage to Kubernetes containers, we will create two files:
|
||||
|
||||
persistent-volume.yml
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: PersistentVolume
|
||||
metadata:
|
||||
name: app-disk1
|
||||
namespace: k8s_python_sample_code
|
||||
spec:
|
||||
capacity:
|
||||
storage: 50Gi
|
||||
accessModes:
|
||||
- ReadWriteMany
|
||||
cephfs:
|
||||
monitors:
|
||||
- "172.17.0.1:6789"
|
||||
user: admin
|
||||
secretRef:
|
||||
name: ceph-secret
|
||||
readOnly: false
|
||||
```
|
||||
|
||||
persistent-volume-claim.yml
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: appclaim1
|
||||
namespace: k8s_python_sample_code
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteMany
|
||||
resources:
|
||||
requests:
|
||||
storage: 10Gi
|
||||
```
|
||||
|
||||
We can now use kubectl to add the persistent volume and claim to the Kubernetes cluster:
|
||||
|
||||
```
|
||||
$ kubectl create -f persistent-volume.yml
|
||||
$ kubectl create -f persistent-volume-claim.yml
|
||||
```
|
||||
|
||||
We are now ready to deploy to Kubernetes.
|
||||
|
||||
### Deploy the application to Kubernetes
|
||||
|
||||
To manage the last mile of deploying the application to Kubernetes, we will create two important files: a service file and a deployment file. (Note that Kubernetes resource names must be valid DNS labels, so in a real cluster the underscores in the `k8s_python_sample_code` names below would need to be replaced with dashes.)
|
||||
|
||||
Create a file and name it `k8s_python_sample_code.service.yml` with the following content:
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: k8s_python_sample_code
|
||||
name: k8s_python_sample_code
|
||||
namespace: k8s_python_sample_code
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- port: 5035
|
||||
selector:
|
||||
k8s-app: k8s_python_sample_code
|
||||
```
|
||||
|
||||
Create a file and name it `k8s_python_sample_code.deployment.yml` with the following content:
|
||||
|
||||
```
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: k8s_python_sample_code
|
||||
namespace: k8s_python_sample_code
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: k8s_python_sample_code
|
||||
spec:
|
||||
containers:
|
||||
- name: k8s_python_sample_code
|
||||
image: k8s_python_sample_code:0.1
|
||||
imagePullPolicy: "IfNotPresent"
|
||||
ports:
|
||||
- containerPort: 5035
|
||||
volumeMounts:
|
||||
- mountPath: /app-data
|
||||
name: k8s_python_sample_code
|
||||
volumes:
|
||||
- name: k8s_python_sample_code
|
||||
persistentVolumeClaim:
|
||||
claimName: appclaim1
|
||||
```
|
||||
|
||||
Finally, use kubectl to deploy the application to Kubernetes:
|
||||
|
||||
```
|
||||
$ kubectl create -f k8s_python_sample_code.deployment.yml
$ kubectl create -f k8s_python_sample_code.service.yml
|
||||
```
|
||||
|
||||
Your application was successfully deployed to Kubernetes.
|
||||
|
||||
You can verify whether your application is running by inspecting the running services:
|
||||
|
||||
```
|
||||
kubectl get services
|
||||
```
|
||||
|
||||
May Kubernetes free you from future deployment hassles!
|
||||
|
||||
_Want to learn more about Python? Nanjekye's book, [Python 2 and 3 Compatibility][7], offers clean ways to write code that will run on both Python 2 and 3, including detailed examples of how to convert existing Python 2-compatible code to code that will run reliably on both Python 2 and 3._
|
||||
|
||||
|
||||
### About the author
|
||||
|
||||
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/joannah-nanjekye.jpg?itok=F4RqEjoA)][13] Joannah Nanjekye - Straight Outta 256, I choose Results over Reasons, Passionate Aviator, Show me the code.[More about me][8]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/running-python-application-kubernetes
|
||||
|
||||
作者:[Joannah Nanjekye ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/nanjekyejoannah
|
||||
[1]:https://opensource.com/resources/python?intcmp=7016000000127cYAAQ
|
||||
[2]:https://opensource.com/resources/python/ides?intcmp=7016000000127cYAAQ
|
||||
[3]:https://opensource.com/resources/python/gui-frameworks?intcmp=7016000000127cYAAQ
|
||||
[4]:https://opensource.com/tags/python?intcmp=7016000000127cYAAQ
|
||||
[5]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ
|
||||
[6]:https://opensource.com/article/18/1/running-python-application-kubernetes?rate=D9iKksKbd9q9vOVb92Mg-v0Iyqn0QVO5fbIERTbSHz4
|
||||
[7]:https://www.apress.com/gp/book/9781484229545
|
||||
[8]:https://opensource.com/users/nanjekyejoannah
|
||||
[9]:https://opensource.com/user/196386/feed
|
||||
[10]:https://github.com/jnanjekye/k8s_python_sample_code/tree/master
|
||||
[11]:https://docs.docker.com/engine/installation/
|
||||
[12]:https://hackernoon.com/docker-tutorial-getting-started-with-python-redis-and-nginx-81a9d740d091
|
||||
[13]:https://opensource.com/users/nanjekyejoannah
|
||||
[14]:https://opensource.com/users/nanjekyejoannah
|
||||
[15]:https://opensource.com/users/nanjekyejoannah
|
||||
[16]:https://opensource.com/tags/python
|
||||
[17]:https://opensource.com/tags/kubernetes
|
140
sources/tech/20180128 Being open about data privacy.md
Normal file
@ -0,0 +1,140 @@
|
||||
Being open about data privacy
|
||||
============================================================
|
||||
|
||||
### Regulations including GDPR notwithstanding, data privacy is something that's important for pretty much everybody.
|
||||
|
||||
![Being open about data privacy ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_opendata.png?itok=M8L2HGVx "Being open about data privacy ")
|
||||
Image by : opensource.com
|
||||
|
||||
Today is [Data Privacy Day][9] ("Data Protection Day" in Europe), and you might expect that those of us in the open source world would believe that all data should be free, [as information supposedly wants to be][10], but life's not that simple. That's for two main reasons:
|
||||
|
||||
1. Most of us (and not just in open source) believe there's at least some data about us that we might not feel happy sharing (I compiled an example list in [a post][2] I published a while ago).
|
||||
|
||||
2. Many of us working in open source actually work for commercial companies or other organisations subject to legal requirements around what they can share.
|
||||
|
||||
So actually, data privacy is something that's important for pretty much everybody.
|
||||
|
||||
It turns out that the starting point for what data people and governments believe should be available for organisations to use is somewhat different between the U.S. and Europe, with the former generally providing more latitude for entities—particularly, the more cynical might suggest, large commercial entities—to use data they've collected about us as they will. Europe, on the other hand, has historically taken a more restrictive view, and on the 25th of May, Europe's view arguably will have triumphed.
|
||||
|
||||
### The impact of GDPR
|
||||
|
||||
That's a rather sweeping statement, but the fact remains that this is the date on which a piece of legislation called the General Data Protection Regulation (GDPR), enacted by the European Union in 2016, becomes enforceable. The GDPR basically provides a stringent set of rules about how personal data can be stored, what it can be used for, who can see it, and how long it can be kept. It also describes what personal data is—and it's a pretty broad set of items, from your name and home address to your medical records and on through to your computer's IP address.
|
||||
|
||||
What is important about the GDPR, though, is that it doesn't apply just to European companies, but to any organisation processing data about EU citizens. If you're an Argentinian, Japanese, U.S., or Russian company and you're collecting data about an EU citizen, you're subject to it.
|
||||
|
||||
"Pah!" you may say,[1][11] "I'm not based in the EU: what can they do to me?" The answer is simple: If you want to continue doing any business in the EU, you'd better comply, because if you breach GDPR rules, you could be liable for up to four percent of your _global_ revenues. Yes, that's global revenues: not just revenues in a particular country in Europe or across the EU, not just profits, but _global revenues_ . Those are the sorts of numbers that should lead you to talk to your legal team, who will direct you to your exec team, who will almost immediately direct you to your IT group to make sure you're compliant in pretty short order.
|
||||
|
||||
This may seem like it's not particularly relevant to non-EU citizens, but it is. For most companies, it's going to be simpler and more efficient to implement the same protection measures for data associated with _all_ customers, partners, and employees they deal with, rather than just targeting specific measures at EU citizens. This has got to be a good thing.[2][12]
|
||||
|
||||
However, just because GDPR will soon be applied to organisations across the globe doesn't mean that everything's fine and dandy[3][13]: it's not. We give away information about ourselves all the time—and permission for companies to use it.
|
||||
|
||||
There's a telling (though disputed) saying: "If you're not paying, you're the product." What this suggests is that if you're not paying for a service, then somebody else is paying to use your data. Do you pay to use Facebook? Twitter? Gmail? How do you think they make their money? Well, partly through advertising, and some might argue that's a service they provide to you, but actually that's them using your data to get money from the advertisers. You're not really a customer of advertising—it's only once you buy something from the advertiser that you become their customer, but until you do, the relationship is between the owner of the advertising platform and the advertiser.
|
||||
|
||||
Some of these services allow you to pay to reduce or remove advertising (Spotify is a good example), but on the other hand, advertising may be enabled even for services that you think you do pay for (Amazon is apparently working to allow adverts via Alexa, for instance). Unless we want to start paying to use all of these "free" services, we need to be aware of what we're giving up and make some choices about what we expose and what we don't.
|
||||
|
||||
### Who's the customer?
|
||||
|
||||
There's another issue around data that should be exercising us, and it's a direct consequence of the amounts of data that are being generated. There are many organisations out there—including "public" ones like universities, hospitals, or government departments[4][14]—who generate enormous quantities of data all the time, and who just don't have the capacity to store it. It would be a different matter if this data didn't have long-term value, but it does, as the tools for handling Big Data are developing, and organisations are realising they can be mining this now and in the future.
|
||||
|
||||
The problem they face, though, as the amount of data increases and their capacity to store it fails to keep up, is what to do with it. _Luckily_—and I use this word with a very heavy dose of irony[5][15]—big corporations are stepping in to help them. "Give us your data," they say, "and we'll host it for free. We'll even let you use the data you collected when you want to!" Sounds like a great deal, yes? A fantastic example of big corporations[6][16] taking a philanthropic stance and helping out public organisations that have collected all of that lovely data about us.
|
||||
|
||||
Sadly, philanthropy isn't the only reason. These hosting deals come with a price: in exchange for agreeing to host the data, these corporations get to sell access to it to third parties. And do you think the public organisations, or those whose data is collected, will get a say in who these third parties are or how they will use it? I'll leave this as an exercise for the reader.[7][17]
|
||||
|
||||
### Open and positive
|
||||
|
||||
It's not all bad news, however. There's a growing "open data" movement among governments to encourage departments to make much of their data available to the public and other bodies for free. In some cases, this is being specifically legislated. Many voluntary organisations—particularly those receiving public funding—are starting to do the same. There are glimmerings of interest even from commercial organisations. What's more, there are techniques becoming available, such as those around differential privacy and multi-party computation, that are beginning to allow us to mine data across data sets without revealing too much about individuals—a computing problem that has historically been much less tractable than you might otherwise expect.
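As a concrete taste of what differential privacy looks like, here is a minimal Python sketch (the record set, predicate, and epsilon value are all invented for illustration): a count query answered with calibrated Laplace noise, so that no single individual's record moves the answer much.

```python
import random

def noisy_count(records, predicate, epsilon=0.5):
    """Differentially private count: the true count plus Laplace noise
    of scale 1/epsilon (the sensitivity of a counting query is 1).
    The difference of two exponential draws with rate epsilon is
    Laplace-distributed, which keeps the sampling to one line."""
    true_count = sum(1 for r in records if predicate(r))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Invented records: (age, has_condition) pairs; the true answer is 3.
people = [(34, True), (29, False), (61, True), (45, True), (50, False)]
print(noisy_count(people, lambda p: p[1]))  # roughly 3, plus noise
```

Each query leaks a little privacy budget, so real deployments also track how many noisy answers have been released; this sketch ignores that bookkeeping.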
|
||||
|
||||
What does this all mean to us? Well, I've written before on Opensource.com about the [commonwealth of open source][18], and I'm increasingly convinced that we need to look beyond just software to other areas: hardware, organisations, and, relevant to this discussion, data. Let's imagine that you're a company (A) that provides a service to another company, a customer (B).[8][19] There are four different types of data in play:
|
||||
|
||||
1. Data that's fully open: visible to A, B, and the rest of the world
|
||||
|
||||
2. Data that's known, shared, and confidential: visible to A and B, but nobody else
|
||||
|
||||
3. Data that's company-confidential: visible to A, but not B
|
||||
|
||||
4. Data that's customer-confidential: visible to B, but not A
|
||||
|
||||
First of all, maybe we should be a bit more open about data and default to putting it into bucket 1\. That data—on self-driving cars, voice recognition, mineral deposits, demographic statistics—could be enormously useful if it were available to everyone.[9][20] Also, wouldn't it be great if we could find ways to make the data in buckets 2, 3, and 4—or at least some of it—available in bucket 1, whilst still keeping the details confidential? That's the hope for some of these new techniques being researched. They're a way off, though, so don't get too excited, and in the meantime, start thinking about making more of your data open by default.
|
||||
|
||||
### Some concrete steps
|
||||
|
||||
So, what can we do around data privacy and being open? Here are a few concrete steps that occurred to me: please use the comments to contribute more.
|
||||
|
||||
* Check to see whether your organisation is taking GDPR seriously. If it isn't, push for it.
|
||||
|
||||
* Default to encrypting sensitive data (or hashing where appropriate), and deleting it when it's no longer required—there's really no excuse these days for data to be in the clear except when it's actually being processed.
|
||||
|
||||
* Consider what information you disclose when you sign up to services, particularly social media.
|
||||
|
||||
* Discuss this with your non-technical friends.
|
||||
|
||||
* Educate your children, your friends' children, and their friends. Better yet, go and talk to their teachers about it and present something in their schools.
|
||||
|
||||
* Encourage the organisations you work for, volunteer for, or interact with to make data open by default. Rather than thinking, "why should I make this public?" start with "why _shouldn't_ I make this public?"
|
||||
|
||||
* Try accessing some of the open data sources out there. Mine it, create apps that use it, perform statistical analyses, draw pretty graphs,[10][3] make interesting music, but consider doing something with it. Tell the organisations that sourced it, thank them, and encourage them to do more.
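The "encrypt or hash sensitive data by default" step above can be sketched in a few lines. This is a minimal illustration only—for guessable identifiers like email addresses, a slow KDF such as scrypt or Argon2 would be a better choice than plain SHA-256:

```python
import hashlib
import os

def hash_identifier(value, salt=None):
    """Store a salted hash of a sensitive identifier instead of the
    raw value. Returns (salt, hex digest); keep the salt to verify
    the identifier later without ever storing it in the clear."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per identifier
    digest = hashlib.sha256(salt + value.encode("utf-8")).hexdigest()
    return salt, digest

salt, digest = hash_identifier("alice@example.com")
# Later, verify by recomputing with the stored salt.
assert hash_identifier("alice@example.com", salt)[1] == digest
```

The per-identifier salt means two records with the same value hash differently, which blocks simple precomputed-table lookups.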
|
||||
|
||||
* * *
|
||||
|
||||
1\. Though you probably won't, I admit.
|
||||
|
||||
2\. Assuming that you believe that your personal data should be protected.
|
||||
|
||||
3\. If you're wondering what "dandy" means, you're not alone at this point.
|
||||
|
||||
4\. Exactly how public these institutions seem to you will probably depend on where you live: [YMMV][21].
|
||||
|
||||
5\. And given that I'm British, that's a really very, very heavy dose.
|
||||
|
||||
6\. And they're likely to be big corporations: nobody else can afford all of that storage and the infrastructure to keep it available.
|
||||
|
||||
7\. No. The answer's "no."
|
||||
|
||||
8\. Although the example works for people, too. Oh, look: A could be Alice, B could be Bob…
|
||||
|
||||
9\. Not that we should be exposing personal data or data that actually needs to be confidential, of course—not that type of data.
|
||||
|
||||
10\. A friend of mine decided that it always seemed to rain when she picked her children up from school, so to avoid confirmation bias, she accessed rainfall information across the school year and created graphs that she shared on social media.
|
||||
|
||||
|
||||
|
||||
### About the author
|
||||
|
||||
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/2017-05-10_0129.jpg?itok=Uh-eKFhx)][22] Mike Bursell - I've been in and around Open Source since around 1997, and have been running (GNU) Linux as my main desktop at home and work since then: [not always easy][4]... I'm a security bod and architect, and am currently employed as Chief Security Architect for Red Hat. I have a blog - "[Alice, Eve & Bob][5]" - where I write (sometimes rather parenthetically) about security. I live in the UK and... [more about Mike Bursell][6][More about me][7]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/being-open-about-data-privacy
|
||||
|
||||
作者:[Mike Bursell ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/mikecamel
|
||||
[1]:https://opensource.com/article/18/1/being-open-about-data-privacy?rate=oQCDAM0DY-P97d3pEEW_yUgoCV1ZXhv8BHYTnJVeHMc
|
||||
[2]:https://aliceevebob.wordpress.com/2017/06/06/helping-our-governments-differently/
|
||||
[3]:https://opensource.com/article/18/1/being-open-about-data-privacy#10
|
||||
[4]:https://opensource.com/article/17/11/politics-linux-desktop
|
||||
[5]:https://aliceevebob.com/
|
||||
[6]:https://opensource.com/users/mikecamel
|
||||
[7]:https://opensource.com/users/mikecamel
|
||||
[8]:https://opensource.com/user/105961/feed
|
||||
[9]:https://en.wikipedia.org/wiki/Data_Privacy_Day
|
||||
[10]:https://en.wikipedia.org/wiki/Information_wants_to_be_free
|
||||
[11]:https://opensource.com/article/18/1/being-open-about-data-privacy#1
|
||||
[12]:https://opensource.com/article/18/1/being-open-about-data-privacy#2
|
||||
[13]:https://opensource.com/article/18/1/being-open-about-data-privacy#3
|
||||
[14]:https://opensource.com/article/18/1/being-open-about-data-privacy#4
|
||||
[15]:https://opensource.com/article/18/1/being-open-about-data-privacy#5
|
||||
[16]:https://opensource.com/article/18/1/being-open-about-data-privacy#6
|
||||
[17]:https://opensource.com/article/18/1/being-open-about-data-privacy#7
|
||||
[18]:https://opensource.com/article/17/11/commonwealth-open-source
|
||||
[19]:https://opensource.com/article/18/1/being-open-about-data-privacy#8
|
||||
[20]:https://opensource.com/article/18/1/being-open-about-data-privacy#9
|
||||
[21]:http://www.outpost9.com/reference/jargon/jargon_40.html#TAG2036
|
||||
[22]:https://opensource.com/users/mikecamel
|
||||
[23]:https://opensource.com/users/mikecamel
|
||||
[24]:https://opensource.com/users/mikecamel
|
||||
[25]:https://opensource.com/tags/open-data
|
109
sources/tech/20180129 5 Real World Uses for Redis.md
Normal file
109
sources/tech/20180129 5 Real World Uses for Redis.md
Normal file
@ -0,0 +1,109 @@
|
||||
5 Real World Uses for Redis
|
||||
============================================================
|
||||
|
||||
|
||||
Redis is a powerful in-memory data structure store with many uses, including as a database, a cache, and a message broker. Most people think of it as a simple key-value store, but it has much more power. I will be going over some real-world examples of the many things Redis can do for you.
|
||||
|
||||
### 1\. Full Page Cache
|
||||
|
||||
The first thing is full page caching. If you are using server-side rendered content, you do not want to re-render each page for every single request. Using a cache like Redis, you can cache regularly requested content and drastically decrease latency for your most requested pages, and most frameworks have hooks for caching your pages with Redis.
|
||||
Simple Commands
|
||||
|
||||
```
|
||||
// Set the page that will last 1 minute
|
||||
SET key "<html>...</html>" EX 60
|
||||
|
||||
// Get the page
|
||||
GET key
|
||||
|
||||
```
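The expiring-cache behaviour those commands give you can be sketched in-process. This is a hedged, minimal Python illustration of the `SET ... EX` / `GET` semantics above—in production you would talk to a real Redis server rather than a local dict:

```python
import time

class PageCache:
    """In-process sketch of SET key value EX seconds / GET key."""

    def __init__(self):
        self._store = {}  # key -> (value, expiry deadline)

    def set(self, key, value, ex):
        """Cache value under key for ex seconds."""
        self._store[key] = (value, time.monotonic() + ex)

    def get(self, key):
        """Return the cached value, or None if missing or expired."""
        entry = self._store.get(key)
        if entry is None:
            return None
        value, deadline = entry
        if time.monotonic() >= deadline:
            del self._store[key]  # expire lazily on access
            return None
        return value

cache = PageCache()
cache.set("home", "<html>...</html>", ex=60)
assert cache.get("home") == "<html>...</html>"
assert cache.get("missing") is None
```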
|
||||
|
||||
### 2\. Leaderboard
|
||||
|
||||
One of the places Redis shines is leaderboards. Because Redis is in-memory, it can increment and decrement scores very quickly and efficiently. Compared to running a SQL query on every request, the performance gains are huge! Combined with Redis's sorted sets, this means you can grab only the highest-rated items in the list in milliseconds, and it is stupid easy to implement.
|
||||
Simple Commands
|
||||
|
||||
```
|
||||
// Add an item to the sorted set
|
||||
ZADD sortedSet 1 "one"
|
||||
|
||||
// Get all items from the sorted set
|
||||
ZRANGE sortedSet 0 -1
|
||||
|
||||
// Get all items from the sorted set with their score
|
||||
ZRANGE sortedSet 0 -1 WITHSCORES
|
||||
|
||||
```
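To make the sorted-set semantics concrete, here is a minimal in-process Python sketch of the pattern (Redis keeps the set ordered internally and shared across clients; this sorts on read purely for clarity):

```python
class Leaderboard:
    """In-process sketch of ZADD / ZINCRBY / ZRANGE ... WITHSCORES."""

    def __init__(self):
        self._scores = {}  # member -> score

    def zadd(self, member, score):
        """Set a member's score (like ZADD)."""
        self._scores[member] = score

    def zincrby(self, member, delta=1):
        """Bump a member's score (like ZINCRBY)."""
        self._scores[member] = self._scores.get(member, 0) + delta

    def top(self, n):
        """Highest-scored members first, with their scores."""
        ranked = sorted(self._scores.items(),
                        key=lambda kv: kv[1], reverse=True)
        return ranked[:n]

board = Leaderboard()
board.zadd("alice", 120)
board.zadd("bob", 95)
board.zincrby("bob", 30)
print(board.top(1))  # [('bob', 125)]
```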
|
||||
|
||||
### 3\. Session Storage
|
||||
|
||||
The most common use for Redis I have seen is session storage. Unlike other session stores such as Memcache, Redis can persist data, so if your cache goes down, all the data will still be there when it comes back up. Even though session persistence isn't mission critical, this feature can save your users lots of headaches. No one likes their session to be randomly dropped for no reason.
|
||||
Simple Commands
|
||||
|
||||
```
|
||||
// Set session that will last 1 minute
|
||||
SET randomHash "{userId}" EX 60
|
||||
|
||||
// Get userId
|
||||
GET randomHash
|
||||
|
||||
```
|
||||
|
||||
### 4\. Queue
|
||||
|
||||
One of the less common, but very useful, things you can do with Redis is queueing. Whether it's a queue of emails or data to be consumed by another application, you can create an efficient queue in Redis. Using this functionality is easy and natural for any developer who is familiar with stacks and pushing and popping items.
|
||||
Simple Commands
|
||||
|
||||
```
|
||||
// Add a Message
|
||||
HSET messages <id> <message>
|
||||
ZADD due <due_timestamp> <id>
|
||||
|
||||
// Receiving Message
|
||||
ZRANGEBYSCORE due -inf <current_timestamp> LIMIT 0 1
|
||||
HGET messages <message_id>
|
||||
|
||||
// Delete Message
|
||||
ZREM due <message_id>
|
||||
HDEL messages <message_id>
|
||||
|
||||
```
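Those commands implement a delayed queue: message bodies live in a hash, and ids are ranked by due timestamp in a sorted set. The same logic can be sketched in-process in Python (Redis does this atomically and across processes; the sketch just mirrors the semantics):

```python
import time

class DelayedQueue:
    """In-process sketch of the hash + sorted-set queue pattern."""

    def __init__(self):
        self._messages = {}  # id -> message body (the HSET side)
        self._due = {}       # id -> due timestamp (the ZADD side)

    def enqueue(self, msg_id, message, due_at):
        self._messages[msg_id] = message
        self._due[msg_id] = due_at

    def receive(self, now=None):
        """Return and remove the earliest message that is due, or None
        (mirrors ZRANGEBYSCORE due -inf <now> LIMIT 0 1 + HGET)."""
        now = time.time() if now is None else now
        ready = sorted((t, i) for i, t in self._due.items() if t <= now)
        if not ready:
            return None
        _, msg_id = ready[0]
        message = self._messages.pop(msg_id)  # HDEL
        del self._due[msg_id]                 # ZREM
        return message

q = DelayedQueue()
q.enqueue("e1", "send welcome email", due_at=100)
q.enqueue("e2", "send reminder", due_at=200)
print(q.receive(now=150))  # send welcome email
print(q.receive(now=150))  # None -- e2 is not due yet
```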
|
||||
|
||||
### 5\. Pub/Sub
|
||||
|
||||
The final real-world use for Redis I am going to bring up in this post is pub/sub. This is one of the most powerful features built into Redis; the possibilities are limitless. You can create a real-time chat system with it, trigger notifications for friend requests on social networks, and so on. It is one of Redis's most underrated features: very powerful, yet simple to use.
|
||||
Simple Commands
|
||||
|
||||
```
|
||||
// Add a message to a channel
|
||||
PUBLISH channel message
|
||||
|
||||
// Receive messages from a channel
|
||||
SUBSCRIBE channel
|
||||
|
||||
```
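The fan-out model behind `PUBLISH` / `SUBSCRIBE` can be sketched in a few lines of Python. This in-process illustration registers a callback per channel; the real point of Redis pub/sub is that it does the same thing across processes and machines:

```python
from collections import defaultdict

class PubSub:
    """In-process sketch of PUBLISH / SUBSCRIBE fan-out."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # channel -> callbacks

    def subscribe(self, channel, callback):
        """Register a callback to receive messages on a channel."""
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        """Deliver message to every subscriber and return the receiver
        count, as Redis's PUBLISH command does."""
        callbacks = self._subscribers[channel]
        for cb in callbacks:
            cb(message)
        return len(callbacks)

bus = PubSub()
received = []
bus.subscribe("chat", received.append)
bus.publish("chat", "hello")
print(received)  # ['hello']
```

Note the fire-and-forget semantics: as in Redis, a message published to a channel with no subscribers is simply dropped.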
|
||||
|
||||
### Conclusion
|
||||
|
||||
I hope you enjoyed this list of some of the many real world uses for Redis. This is just scratching the surface of what Redis can do for you, but I hope it gave you some ideas of how you can use the full potential Redis has to offer.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Hi, my name is Ryan! I am a Software Developer with experience in many web frameworks and libraries including NodeJS, Django, Golang, and Laravel.
|
||||
|
||||
|
||||
-------------------
|
||||
|
||||
|
||||
via: https://ryanmccue.ca/5-real-world-uses-for-redis/
|
||||
|
||||
作者:[Ryan McCue ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://ryanmccue.ca/author/ryan/
|
||||
[1]:https://ryanmccue.ca/author/ryan/
|
@ -0,0 +1,68 @@
|
||||
A look inside Facebook's open source program
|
||||
============================================================
|
||||
|
||||
### Facebook developer Christine Abernathy discusses how open source helps the company share insights and boost innovation.
|
||||
|
||||
![A look inside Facebook's open source program](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe "A look inside Facebook's open source program")
|
||||
Image by : opensource.com
|
||||
|
||||
|
||||
Open source becomes more ubiquitous every year, appearing everywhere from [government municipalities][11] to [universities][12]. Companies of all sizes are also increasingly turning to open source software. In fact, some companies are taking open source a step further by supporting projects financially or working with developers.
|
||||
|
||||
Facebook's open source program, for example, encourages others to release their code as open source, while working and engaging with the community to support open source projects. [Christine Abernathy][13], a Facebook developer, open source advocate, and member of the company's open source team, visited the Rochester Institute of Technology last November, presenting at the [November edition][14] of the FOSS Talks speaker series. In her talk, Abernathy explained how Facebook approaches open source and why it's an important part of the work the company does.
|
||||
|
||||
### Facebook and open source
|
||||
|
||||
Abernathy said that open source plays a fundamental role in Facebook's mission to create community and bring the world closer together. This ideological match is one motivating factor for Facebook's participation in open source. Additionally, Facebook faces unique infrastructure and development challenges, and open source provides a platform for the company to share solutions that could help others. Open source also provides a way to accelerate innovation and create better software, helping engineering teams produce better software and work more transparently. Today, Facebook's 443 projects on GitHub comprise 122,000 forks, 292,000 commits, and 732,000 followers.
|
||||
|
||||
|
||||
|
||||
![open source projects by Facebook](https://opensource.com/sites/default/files/images/life-uploads/blog-article-facebook-open-source-projects.png "open source projects by Facebook")
|
||||
|
||||
Some of the Facebook projects released as open source include React, GraphQL, Caffe2, and others. (Image by Christine Abernathy, used with permission)
|
||||
|
||||
### Lessons learned
|
||||
|
||||
Abernathy emphasized that Facebook has learned many lessons from the open source community, and it looks forward to learning many more. She identified the three most important ones:
|
||||
|
||||
* Share what's useful
|
||||
|
||||
* Highlight your heroes
|
||||
|
||||
* Fix common pain points
|
||||
|
||||
_Christine Abernathy visited RIT as part of the FOSS Talks speaker series. Every month, a guest speaker from the open source world shares wisdom, insight, and advice about the open source world with students interested in free and open source software. The [FOSS @ MAGIC][3] community is thankful to have Abernathy attend as a speaker._
|
||||
|
||||
### About the author
|
||||
|
||||
[![Picture of Justin W. Flory](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/october_2017_cropped_0.jpg?itok=gV-RgINC)][15] Justin W. Flory - Justin is a student at the [Rochester Institute of Technology][4], majoring in Networking and Systems Administration. He is currently a contributor to the [Fedora Project][5]. In Fedora, Justin is the editor-in-chief of the [Fedora Magazine][6], the lead of the [Community... ][7][more about Justin W. Flory][8] [More about me][9]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/inside-facebooks-open-source-program
|
||||
|
||||
作者:[Justin W. Flory ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jflory
|
||||
[1]:https://opensource.com/file/383786
|
||||
[2]:https://opensource.com/article/18/1/inside-facebooks-open-source-program?rate=H9_bfSwXiJfi2tvOLiDxC_tbC2xkEOYtCl-CiTq49SA
|
||||
[3]:http://foss.rit.edu/
|
||||
[4]:https://www.rit.edu/
|
||||
[5]:https://fedoraproject.org/wiki/Overview
|
||||
[6]:https://fedoramagazine.org/
|
||||
[7]:https://fedoraproject.org/wiki/CommOps
|
||||
[8]:https://opensource.com/users/jflory
|
||||
[9]:https://opensource.com/users/jflory
|
||||
[10]:https://opensource.com/user/74361/feed
|
||||
[11]:https://opensource.com/article/17/8/tirana-government-chooses-open-source
|
||||
[12]:https://opensource.com/article/16/12/2016-election-night-hackathon
|
||||
[13]:https://twitter.com/abernathyca
|
||||
[14]:https://www.eventbrite.com/e/fossmagic-talks-open-source-facebook-with-christine-abernathy-tickets-38955037566#
|
||||
[15]:https://opensource.com/users/jflory
|
||||
[16]:https://opensource.com/users/jflory
|
||||
[17]:https://opensource.com/users/jflory
|
||||
[18]:https://opensource.com/article/18/1/inside-facebooks-open-source-program#comments
|
@ -0,0 +1,245 @@
|
||||
CopperheadOS: Security features, installing apps, and more
|
||||
============================================================
|
||||
|
||||
### Fly your open source flag proudly with Copperhead, a mobile OS that takes its FOSS commitment seriously.
|
||||
|
||||
|
||||
![Android security and privacy](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/android_security_privacy.png?itok=MPHAV5mL "Android security and privacy")
|
||||
Image by : Norebbo via [Flickr][15] (Original: [public domain][16]). Modified by Opensource.com. [CC BY-SA 4.0][17].
|
||||
|
||||
_Editor's note: CopperheadOS is [licensed][11] under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license (userspace) and GPL2 license (kernel). It is also based on the Android Open Source Project (AOSP)._
|
||||
|
||||
Several years ago, I made the decision to replace proprietary technologies (mainly Apple products) with technology that ran on free and open source software (FOSS). I can't say it was easy, but I now happily use FOSS for pretty much everything.
|
||||
|
||||
The hardest part involved my mobile handset. There are basically only two choices today for phones and tablets: Apple's iOS or Google's Android. Since Android is open source, it seemed the obvious choice, but I was frustrated by both the lack of open source applications on Android and the pervasiveness of Google on those devices.
|
||||
|
||||
So I entered the world of custom ROMs. These are projects that take the base [Android Open Source Project][18] (AOSP) and customize it. Almost all these projects allow you to install the standard Google applications as a separate package, called GApps, and you can have as much or as little Google presence on your phone as you like. GApps packages come in a number of flavors, from the full suite of apps that Google ships with its devices to a "pico" version that includes just the minimal amount of software needed to run the Google Play Store, and from there you can add what you like.
|
||||
|
||||
I started out using CyanogenMod, but when that project went in a direction I didn't like, I switched to OmniROM. I was quite happy with it, but still wondered what information I was sending to Google behind the scenes.
|
||||
|
||||
Then I found out about [CopperheadOS][19]. Copperhead is a version of AOSP that focuses on delivering the most secure Android experience possible. I've been using it for a year now and have been quite happy with it.
|
||||
|
||||
Unlike other custom ROMs that strive to add lots of new functionality, Copperhead runs a pretty vanilla version of AOSP. Also, while the first thing you usually do when playing with a custom ROM is to add root access to the device, not only does Copperhead prevent that, it also requires that you have a device that has verified boot, so there's no unlocking the bootloader. This is to prevent malicious code from getting access to the handset.
|
||||
|
||||
Copperhead starts with a hardened version of the AOSP baseline, including full encryption, and then adds a [ton of stuff][20] I can only pretend to understand. It also applies a number of kernel and Android patches before they are applied to the mainline Android releases.
|
||||
|
||||
### [copperos_extrapatches.png][1]
|
||||
|
||||
![About phone with extra patches](https://opensource.com/sites/default/files/u128651/copperos_extrapatches.png "About phone with extra patches")
|
||||
|
||||
It has a couple of more obvious features that I like. If you use a PIN to unlock your device, there is an option to scramble the digits.
|
||||
|
||||
### [copperos_scrambleddigits.png][2]
|
||||
|
||||
![Option to scramble digits](https://opensource.com/sites/default/files/u128651/copperos_scrambleddigits.png "Option to scramble digits")
|
||||
|
||||
This should prevent any casual shoulder-surfer from figuring out your PIN, although it can make it a bit more difficult to unlock your device while, say, driving (but no one should be using their handset in the car, right?).
|
||||
|
||||
Another issue it addresses involves tracking people by monitoring their WiFi MAC address. Most devices that use WiFi perform active scanning for wireless access points. This protocol includes the MAC address of the interface, and there are a number of ways people can use [mobile location analytics][21] to track your movement. Copperhead has an option to randomize your MAC address, which counters this process.
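Copperhead's randomization happens inside the Wi-Fi stack, but the idea is easy to illustrate: hand out a fresh address with the locally-administered bit set and the multicast bit clear, so it is valid but untraceable to the hardware. A minimal Python sketch (not Copperhead's actual code):

```python
import random

def random_mac():
    """Generate a random locally-administered, unicast MAC address --
    the kind of address MAC-randomization features hand out per scan."""
    octets = [random.randint(0, 255) for _ in range(6)]
    # In the first octet, clear the multicast (I/G) bit and set the
    # locally-administered (U/L) bit.
    octets[0] = (octets[0] & 0b11111100) | 0b00000010
    return ":".join(f"{o:02x}" for o in octets)

print(random_mac())  # e.g. 3a:1f:9c:02:77:e4
```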
|
||||
|
||||
### [copperos_randommac.png][3]
|
||||
|
||||
![Randomize MAC address](https://opensource.com/sites/default/files/u128651/copperos_randommac.png "Randomize MAC address")
|
||||
|
||||
### Installing apps
|
||||
|
||||
This all sounds pretty good, right? Well, here comes the hard part. While Android is open source, much of the Google code, including the [Google Play Store][22], is not. If you install the Play Store and the code necessary for it to work, you allow Google to install software without your permission. [Google Play's terms of service][23] says:
|
||||
|
||||
> "Google may update any Google app or any app you have downloaded from Google Play to a new version of such app, irrespective of any update settings that you may have selected within the Google Play app or your Device, if Google determines that the update will fix a critical security vulnerability related to the app."
|
||||
|
||||
This is not acceptable from a security standpoint, so you cannot install Google applications on a Copperhead device.
|
||||
|
||||
This took some getting used to, as I had come to rely on things such as Google Maps. The default application repository that ships with Copperhead is [F-Droid][24], which contains only FOSS applications. While I previously used many FOSS applications on Android, it took some effort to use _nothing but_ free software. I did find some ways to cheat this system, and I'll cover that below. First, here are some of the applications I've grown to love from F-Droid.
|
||||
|
||||
### F-Droid favorites
|
||||
|
||||
**K-9 Mail**
|
||||
|
||||
### [copperheados_k9mail.png][4]
|
||||
|
||||
![K-9 Mail](https://opensource.com/sites/default/files/u128651/copperheados_k9mail.png "K-9 Mail")
|
||||
|
||||
Even before I started using Copperhead, I loved [K-9 Mail][25]. This is simply the best mobile email client I've found, period, and it is one of the first things I install on any new device. I even use it to access my Gmail account, via IMAP and SMTP.
|
||||
|
||||
**Open Camera**
|
||||
|
||||
### [copperheados_cameraapi.png][5]
|
||||
|
||||
![Open Camera](https://opensource.com/sites/default/files/u128651/copperheados_cameraapi.png "Open Camera")
|
||||
|
||||
Copperhead runs only on rather new hardware, and I was consistently disappointed in the quality of the pictures from its default camera application. Then I discovered [Open Camera][26]. A full-featured camera app, it allows you to enable an advanced API to take advantage of the camera hardware. The only thing I miss is the ability to take a panoramic photo.
|
||||
|
||||
**Amaze**
|
||||
|
||||
### [copperheados_amaze.png][6]
|
||||
|
||||
![Amaze](https://opensource.com/sites/default/files/u128651/copperheados_amaze.png "Amaze")
|
||||
|
||||
[Amaze][27] is one of the best file managers I've ever used, free or not. When I need to navigate the filesystem, Amaze is my go-to app.
|
||||
|
||||
**Vanilla Music**
|
||||
|
||||
### [copperheados_vanillamusic.png][7]
|
||||
|
||||
![Vanilla Music](https://opensource.com/sites/default/files/u128651/copperheados_vanillamusic.png "Vanilla Music")
|
||||
|
||||
I was unhappy with the default music player, so I checked out a number of them on F-Droid and settled on [Vanilla Music][28]. It has an easy-to-use interface and interacts well with my Bluetooth devices.
|
||||
|
||||
**OCReader**
|
||||
|
||||
### [coperheados_ocreader.png][8]
|
||||
|
||||
![OCReader](https://opensource.com/sites/default/files/u128651/coperheados_ocreader.png "OCReader")
|
||||
|
||||
I am a big fan of [Nextcloud][29], particularly [Nextcloud News][30], a replacement for the now-defunct [Google Reader][31]. While I can access my news feeds through a web browser, I really missed the ability to manage them through a dedicated app. Enter [OCReader][32]. While it stands for "ownCloud Reader," it works with Nextcloud, and I've had very few issues with it.

**Noise**

The SMS/MMS application of choice for most privacy advocates is [Signal][33] by [Open Whisper Systems][34]. Endorsed by [Edward Snowden][35], Signal allows for end-to-end encrypted messaging. If the person you are messaging is also on Signal, your messages will be sent, encrypted, over a data connection facilitated by centralized servers maintained by Open Whisper Systems. It also, until recently, relied on [Google Cloud Messaging][36] (GCM) for notifications, which requires Google Play Services.
The fact that Signal requires a centralized server bothered some people, so the default application on Copperhead is a fork of Signal called [Silence][37]. This application doesn't use a centralized server but does require that all parties be on Silence for encryption to work.
Well, no one I know uses Silence. At the moment you can't even get it from the Google Play Store in the U.S. due to a trademark issue, and there is no iOS client. An encrypted SMS client isn't very useful if you can't use it for encryption.
Enter [Noise][38]. Noise is another application maintained by Copperhead, a fork of Signal that removes the need for GCM. While Noise is not available in the standard F-Droid repositories, Copperhead includes its own repository in the version of F-Droid it ships, which at the moment contains only the Noise application. This app will let you communicate securely with anyone else using Noise or Signal.

### F-Droid workarounds

**FFUpdater**

Copperhead ships with a hardened version of the Chromium web browser, but I am a Firefox fan. Unfortunately, [Firefox is no longer included][39] in the F-Droid repository. Apps on F-Droid are all built by the F-Droid maintainers, so the process for getting into F-Droid can be complicated. The [Compass app for OpenNMS][40] isn't in F-Droid because, at the moment, F-Droid does not support builds using the [Ionic Framework][41], which Compass uses.
Luckily, there is a simple workaround: Install the [FFUpdater][42] app on F-Droid. This allows me to install Firefox and keep it up to date through the browser itself.

**Amazon Appstore**

This brings me to a cool feature of Android 8, Oreo. In previous versions of Android, you had a single "known source" for software, usually the Google Play Store, and if you wanted to install software from another repository, you had to go to settings and allow "Install from Unknown Sources." I always had to remember to turn that off after an install to prevent malicious code from being able to install software on my device.

### [copperheados_sources.png][9]

![Allowing sources to install apps](https://opensource.com/sites/default/files/u128651/copperheados_sources.png "Allowing sources to install apps")

With Oreo, you can permanently allow a specified application to install applications. For example, I use some applications from the [Amazon Appstore][43] (such as the Amazon Shopping and Kindle apps). When I download and install the Amazon Appstore Android package (APK), I am prompted to allow the application to install apps and then I'm not asked again. Of course, this can be turned on and off on a per-application basis.
The Amazon Appstore has a number of useful apps, such as [IMDB][44] and [eBay][45]. Many of them don't require Google Services, but some do. For example, if I install the [Skype][46] app via Amazon, it starts up, but then complains about the operating system. The American Airlines app would start, then complain about an expired certificate. (I contacted them and was told they were no longer maintaining the version in the Amazon Appstore and it would be removed.) In any case, I can pretty simply install a couple of applications I like without using Google Play.

**Google Play**

Well, what about those apps you love that don't use Google Play Services but are only available through the Google Play Store? There is yet another way to safely get those apps on your Copperhead device.
This does require some technical expertise and another device. On the second device, install the [TWRP][47] recovery application. This is usually a key first step in installing any custom ROM, and TWRP is supported on a large number of devices. You will also need the Android Debug Bridge ([ADB][48]) application from the [Android SDK][49], which can be downloaded at no cost.
On the second device, use the Google Play Store to install the applications you want. Then, reboot into recovery. You can mount the system partition via TWRP; plug the device into a computer via a USB cable and you should be able to see it via ADB. There is a system directory called `/data/app`, and in it you will find all the APK files for your applications. Copy those you want to your computer (I use the ADB `pull` command and copy over the whole directory).
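The copy step can be sketched as a short script. The directory names below are illustrative assumptions for this sketch, and the source device must be booted into TWRP with `/data` mounted for the pull to work; `adb pull` itself is a standard ADB command.

```shell
#!/bin/sh
# Hypothetical sketch of the APK copy step described above.
# Requires the source device to be in TWRP recovery with /data mounted.
APP_DIR=/data/app   # where Android keeps installed APK files
DEST=./pulled-apks  # local destination directory (an assumption)

mkdir -p "$DEST"
if command -v adb >/dev/null 2>&1; then
    adb pull "$APP_DIR" "$DEST"   # copies the whole directory tree
else
    echo "adb not installed; skipping the pull (dry run)"
fi
```

With the APKs local, you can then copy them to the Copperhead device over USB and install them via Amaze as described below.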
Disconnect that phone and connect your Copperhead device. Enable the "Transfer files" option, and you should see the storage directory mounted on your computer. Copy over the APK files for the applications you want, then install them via the Amaze file manager (just navigate to the APK file and click on it).
Note that you can do this for any application, and it might even be possible to install Google Play Services this way on Copperhead, but that kind of defeats the purpose. I use this mainly to get the [Electric Sheep][50] screensaver and a guitar tuning app I like called [Cleartune][51]. Be aware that if you install TWRP, especially on a Google Pixel, security updates may not work, as they'll expect the stock recovery. In this case you can always use [fastboot][52] to access TWRP, but leave the default recovery in place.

### Must-have apps without a workaround

Unfortunately, there are still a couple of Google apps I find it hard to live without. Google Maps is probably the main Google application I use, and yes, while I know I'm giving up my location to Google, it has saved hours of my life by routing me around traffic issues. [OpenStreetMap][53] has an app available via F-Droid, but it doesn't have the real-time information that makes Google Maps so useful. I also use Skype on occasion, usually when I am out of the country and have only a data connection (i.e., through a hotel WiFi network). It lets me call home and other places at a very affordable price.
My workaround is to carry two phones. I know this isn't an option for most people, but it is the only one I've found for now. I use my Copperhead phone for anything personal (email, contacts, calendars, pictures, etc.) and my "Googlephone" for Maps, Skype, and various games.
My dream would be for someone to perfect a hypervisor on a handset. Then I could run Copperhead and stock Google Android on the same device. I don't think anyone has a strong business reason to do it, but I do hope it happens.

### Devices that support Copperhead

Before you rush out to install Copperhead, there are some hurdles you'll have to jump. First, it is supported on only a [limited number of handsets][54], almost all of them late-model Google devices. The logic behind this is simple: Google tends to release Android security updates for its devices quickly, and I've found that Copperhead is able to follow suit within a day, if not within hours. Second, like any open source project, it has limited resources and it is difficult to support even a fraction of the devices now available to end users. Finally, if you want to run Copperhead on handsets like the Pixel and Pixel XL, you'll either have to build from source or [buy a device][55] from Copperhead directly.
When I discovered Copperhead, I had a Nexus 6P, which (along with the Nexus 5X) is one of the supported devices. This allowed me to play with and get used to the operating system. I liked it so much that I donated some money to the project, but I kind of balked at the price they were asking for Pixel and Pixel XL handsets.
Recently, though, I ended up purchasing a Pixel XL directly from Copperhead. There were a couple of reasons. One, since all of the code is available on GitHub, I set out to do [my own build][56] for a Pixel device. That process (which I never completed) made me appreciate the amount of work Copperhead puts into its project. Two, there was an article on [Slashdot][57] discussing how people were selling devices with Copperhead pre-installed and using Copperhead's update servers. I didn't appreciate that very much. Finally, I support FOSS not only by being a vocal user but also with my wallet.

### Putting the "libre" back into free

Another thing I love about FOSS is that I have options. There is even a new alternative to Copperhead being developed, called [Eelo][58]. Created by [Gaël Duval][59], the developer of Mandrake Linux, this is a privacy-focused Android operating system based on [LineageOS][60] (the descendant of CyanogenMod). While it should be supported on more handsets than Copperhead is, it is still in the development stage, whereas Copperhead is very stable and mature. I am eager to check it out, though.
For the year I've used CopperheadOS, I've never felt safer when using a mobile device to connect to a network. I've found the open source replacements for my old apps to be more than adequate, if not better than the original apps. I've also rediscovered the browser. Where I used to have around three to four tabs open, I now have around 10, because I've found that I usually don't need to install an app to easily access a site's content.
With companies like Google and Apple trying more and more to insinuate themselves into the lives of their users, it is nice to have an option that puts the "libre" back into free.

### About the author

[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/balog_tarus_-_julian_-_square.jpg?itok=ZA6yem3I)][61]
Tarus Balog - Having been kicked out of some of the best colleges and universities in the country, I managed after seven years to get a BSEE and entered the telecommunications industry. I always ended up working on projects where we were trying to get the phone switch to talk to PCs. This got me interested in the creation and management of large communication networks. So I moved into the data communications field (they were separate back then) and started working with commercial network management tools... [More about Tarus Balog][12]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/copperheados-delivers-mobile-freedom-privacy-and-security

作者:[Tarus Balog][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/sortova
[1]:https://opensource.com/file/384496
[2]:https://opensource.com/file/384501
[3]:https://opensource.com/file/384506
[4]:https://opensource.com/file/384491
[5]:https://opensource.com/file/384486
[6]:https://opensource.com/file/384481
[7]:https://opensource.com/file/384476
[8]:https://opensource.com/file/384471
[9]:https://opensource.com/file/384466
[10]:https://opensource.com/article/18/1/copperheados-delivers-mobile-freedom-privacy-and-security?rate=P32BmRpJF5bYEYTHo4mW3Hp4XRk34Eq3QqMDf2oOGnw
[11]:https://copperhead.co/android/docs/building#redistribution
[12]:https://opensource.com/users/sortova
[13]:https://opensource.com/users/sortova
[14]:https://opensource.com/user/11447/feed
[15]:https://www.flickr.com/photos/mstable/17517955832
[16]:https://creativecommons.org/publicdomain/mark/1.0/
[17]:https://creativecommons.org/licenses/by-sa/4.0/
[18]:https://en.wikipedia.org/wiki/Android_(operating_system)#AOSP
[19]:https://copperhead.co/
[20]:https://copperhead.co/android/docs/technical_overview
[21]:https://en.wikipedia.org/wiki/Mobile_location_analytics
[22]:https://en.wikipedia.org/wiki/Google_Play#Compatibility
[23]:https://play.google.com/intl/en-us_us/about/play-terms.html
[24]:https://en.wikipedia.org/wiki/F-Droid
[25]:https://f-droid.org/en/packages/com.fsck.k9/
[26]:https://f-droid.org/en/packages/net.sourceforge.opencamera/
[27]:https://f-droid.org/en/packages/com.amaze.filemanager/
[28]:https://f-droid.org/en/packages/ch.blinkenlights.android.vanilla/
[29]:https://nextcloud.com/
[30]:https://github.com/nextcloud/news
[31]:https://en.wikipedia.org/wiki/Google_Reader
[32]:https://f-droid.org/packages/email.schaal.ocreader/
[33]:https://en.wikipedia.org/wiki/Signal_(software)
[34]:https://en.wikipedia.org/wiki/Open_Whisper_Systems
[35]:https://en.wikipedia.org/wiki/Edward_Snowden
[36]:https://en.wikipedia.org/wiki/Google_Cloud_Messaging
[37]:https://f-droid.org/en/packages/org.smssecure.smssecure/
[38]:https://github.com/copperhead/Noise
[39]:https://f-droid.org/wiki/page/org.mozilla.firefox
[40]:https://compass.opennms.io/
[41]:https://ionicframework.com/
[42]:https://f-droid.org/en/packages/de.marmaro.krt.ffupdater/
[43]:https://www.amazon.com/gp/feature.html?docId=1000626391
[44]:https://www.imdb.com/
[45]:https://www.ebay.com/
[46]:https://www.skype.com/
[47]:https://twrp.me/
[48]:https://en.wikipedia.org/wiki/Android_software_development#ADB
[49]:https://developer.android.com/studio/index.html
[50]:https://play.google.com/store/apps/details?id=com.spotworks.electricsheep&hl=en
[51]:https://play.google.com/store/apps/details?id=com.bitcount.cleartune&hl=en
[52]:https://en.wikipedia.org/wiki/Android_software_development#Fastboot
[53]:https://f-droid.org/packages/net.osmand.plus/
[54]:https://copperhead.co/android/downloads
[55]:https://copperhead.co/android/store
[56]:https://copperhead.co/android/docs/building
[57]:https://news.slashdot.org/story/17/11/12/024231/copperheados-fights-unlicensed-installations-on-nexus-phones
[58]:https://eelo.io/
[59]:https://en.wikipedia.org/wiki/Ga%C3%ABl_Duval
[60]:https://en.wikipedia.org/wiki/LineageOS
[61]:https://opensource.com/users/sortova
[62]:https://opensource.com/users/sortova
[63]:https://opensource.com/users/sortova
[64]:https://opensource.com/article/18/1/copperheados-delivers-mobile-freedom-privacy-and-security#comments
[65]:https://opensource.com/tags/mobile
[66]:https://opensource.com/tags/android
|
@ -190,6 +190,11 @@ This wasn't the end of the story, since the next question was: What about zombie
|
||||
|
||||
I'll conclude by saying that this was a simpler task than this image search, and it was greatly helped by the processes I had already developed.

### About the author

[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/20150529_gregp.jpg?itok=nv02g6PV)][7] Greg Pittman - Greg is a retired neurologist in Louisville, Kentucky, with a long-standing interest in computers and programming, beginning with Fortran IV in the 1960s. When Linux and open source software came along, it kindled a commitment to learning more, and eventually contributing. He is a member of the Scribus Team.[More about me][8]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/parsing-html-python
@ -203,3 +208,5 @@ via: https://opensource.com/article/18/1/parsing-html-python
[a]:https://opensource.com/users/greg-p
[1]:https://www.crummy.com/software/BeautifulSoup/
[2]:https://www.kde.org/applications/utilities/kwrite/
[7]:https://opensource.com/users/greg-p
[8]:https://opensource.com/users/greg-p
@ -0,0 +1,189 @@
What Happens When You Want to Create a Special File with All Special Characters in Linux?
============================================================

![special chars](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/special-chars.png?itok=EEvlt5Nw "special chars")

Learn how to handle creation of a special file filled with special characters.[Used with permission][1]
I recently joined Holberton School as a student, hoping to learn full-stack software development. What I did not expect was that in two weeks I would be pretty much proficient with creating shell scripts that would make my coding life easy and fast!
So what is the post about? It is about a novel problem that my peers and I faced when we were asked to create a file with no regular alphabets/ numbers but instead special characters!! Just to give you a look at what kind of file name we were dealing with —

### \*\\'"Holberton School"\'\\*$\?\*\*\*\*\*:)

What a novel file name! Of course, this question was met with the collective groaning and long drawn sighs of all 55 (batch #5) students!

![1*Lf_XPhmgm-RB5ipX_lBjsQ.gif](https://cdn-images-1.medium.com/max/1600/1*Lf_XPhmgm-RB5ipX_lBjsQ.gif)

Some proceeded to make their lives easier by breaking the file name into pieces on a doc file and adding in the **“\\” or “\”** in front of certain special character which kind of resulted in this format -

#### \\*\\\\'\"Holberton School\"\\'\\\\*$\\?\\*\\*\\*\\*\\*:)

![1*p6s8WlysClalj0x2fQhGOg.gif](https://cdn-images-1.medium.com/max/1600/1*p6s8WlysClalj0x2fQhGOg.gif)

Everyone trying to get the \\ right
Bamboozled? Me, too! I did not want to believe that this was the only way to solve it, as I was getting frustrated with every "\\" or "\" required to escape and print those special characters as normal characters!
If you're new to shell scripting, here is a quick walkthrough of why so many "\\" and "\" were required, and where.
In shell scripting, " " and ' ' have special usage, and once you understand and remember when and where to use them, they can make your life easier!

#### Double Quoting

The first type of quoting we will look at is double quotes. **If you place text inside double quotes, all the special characters used by the shell lose their special meaning and are treated as ordinary characters. The exceptions are "$", "\" (backslash), and "`" (backquote).** This means that word-splitting, pathname expansion, tilde expansion, and brace expansion are suppressed, but parameter expansion, arithmetic expansion, and command substitution are still carried out. Using double quotes, we can cope with filenames containing embedded spaces.
So this means you can create files with names that have spaces between words, if that is your thing, but I would suggest you not do that: it is inconvenient and rather unpleasant to try to find that file when you need it!
**Quoting "THE" guide for Linux, which I follow and read like it is the Harry Potter of the Linux coding world —**
Say you were the unfortunate victim of a file called two words.txt. If you tried to use this on the command line, word-splitting would cause this to be treated as two separate arguments rather than the desired single argument:

**[me@linuxbox me]$ ls -l two words.txt**

```
ls: cannot access two: No such file or directory
ls: cannot access words.txt: No such file or directory
```
By using double quotes, you can stop the word-splitting and get the desired result; further, you can even repair the damage:

```
[me@linuxbox me]$ ls -l "two words.txt"
-rw-rw-r-- 1 me me 18 2008-02-20 13:03 two words.txt
[me@linuxbox me]$ mv "two words.txt" two_words.txt
```
There! Now we don’t have to keep typing those pesky double quotes.
Now, let us talk about single quotes and what is their significance in shell —

#### Single Quotes

Enclosing characters in single quotes ( ' ) preserves the literal value of each character within the quotes. A single quote may not occur between single quotes, even when preceded by a backslash.
Yes! That got me, and I was wondering how I would use it. When I was googling to find an easier way to do it, I stumbled across this piece of information on the internet —

### Strong quoting

Strong quoting is very easy to explain:
Inside a single-quoted string **nothing** is interpreted, except the single-quote that closes the string.

```
echo 'Your PATH is: $PATH'
```
`$PATH` won't be expanded, it's interpreted as ordinary text because it's surrounded by strong quotes.
In practice that means to produce a text like `Here's my test…` as a single-quoted string, **you have to leave and re-enter the single quoting to get the character "`'`" as literal text:**

```
# WRONG
echo 'Here's my test...'
```

```
# RIGHT
echo 'Here'\''s my test...'
```

```
# ALTERNATIVE: It's also possible to mix-and-match quotes for readability:
echo "Here's my test"
```
Well, now you're wondering — "that explains the quotes, but what about the "\"?"
So for certain characters we need a special way to escape those pesky “\” we saw in that file name.

#### Escaping Characters

Sometimes you only want to quote a single character. To do this, you can precede a character with a backslash, which in this context is called the _escape character_ . Often this is done inside double quotes to selectively prevent an expansion:

```
[me@linuxbox me]$ echo "The balance for user $USER is: \$5.00"
The balance for user me is: $5.00
```
It is also common to use escaping to eliminate the special meaning of a character in a filename. For example, it is possible to use characters in filenames that normally have special meaning to the shell. These would include "$", "!", "&", " ", and others. To include a special character in a filename, you can do this:

```
[me@linuxbox me]$ mv bad\&filename good_filename
```
> _**To allow a backslash character to appear, escape it by typing “\\”. Note that within single quotes, the backslash loses its special meaning and is treated as an ordinary character.**_
Looking at the filename now we can understand better as to why the “\\” were used in front of all those “\”s.
So, to print the file name without losing “\” and other special characters what others did was to suppress the “\” with “\\” and to print the single quotes there are a few ways you can do that.

```
1. echo $'It\'s Shell Programming'   # ksh, bash, and zsh only, does not expand variables
2. echo "It's Shell Programming"     # all shells, expands variables
3. echo 'It'\''s Shell Programming'  # all shells, single quote is outside the quotes
4. echo 'It'"'"'s Shell Programming' # all shells, single quote is inside double quotes
```

For further reading, please follow [this link][7].

Looking at option 3, I realized this would mean that I would only need to use “\” and single quotes at certain places to be able to write the whole file without getting frustrated with “\\” placements.
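Option 3's leave-and-re-enter trick is easy to verify yourself; this minimal check (my own, not from the original post) compares it against the double-quoted form:

```shell
#!/bin/sh
# Both forms should produce the identical string: It's Shell Programming
a=$(printf '%s' "It's Shell Programming")    # option 2: double quotes
b=$(printf '%s' 'It'\''s Shell Programming') # option 3: leave and re-enter
if [ "$a" = "$b" ]; then
    echo "match"   # prints "match"
fi
```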
So with the hope in mind and lesser trial and errors I was actually able to print out the file name like this:

#### '\*\\'\''"Holberton School"\'\''\\*$\?\*\*\*\*\*:)'

To understand this better, I have added an **"a"** in place of my single quotes to make the file name and process clearer, and I'll break them down into modules:

![1*hP1gmzbn7G7gUEhoynj1ew.gif](https://cdn-images-1.medium.com/max/1600/1*hP1gmzbn7G7gUEhoynj1ew.gif)

#### a\*\\a \' a"Holberton School"\a \' a\\*$\?\*\*\*\*\*:)a
#### Module 1 — a\*\\a
Here the use of single quote (a) creates a safe suppression for \*\\ and, as mentioned before in strong quoting, the only way we can print the ' is to leave and re-enter the single quoting to get the character.
#### Module 2, 4 — \'
The \ suppresses the single quote as a standalone module.

#### Module 3 — a"Holberton School"\a

Here the use of single quote (a) creates a safe suppression for double quotes and \ along with regular text.
#### Module 5 — a\\*$\?\*\*\*\*\*:)a
Here the use of single quote (a) creates a safe suppression for all special characters being used such as *, \, $, ?, : and ).
So in the end, I was able to be lazy and keep my sanity, getting away with only using single quotes to create small modules and "\" in certain places.
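The module-by-module approach can be exercised end to end with a stand-in name (simpler than the exercise's exact file name, but built the same way: single-quoted modules joined by escaped quotes):

```shell
#!/bin/sh
# A stand-in special file name (not the exact exercise name), assembled from
# single-quoted modules joined by \' escapes, mirroring modules 1-5 above:
#   '*\'  +  \'  +  '"School Name"\'  +  \'  +  '\*$?*:)'
touch '*\'\''"School Name"\'\''\*$?*:)'
ls    # the file name survives intact, special characters and all
```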
![1*rO34jp-bYSkCnHSdwoO3qQ.gif](https://cdn-images-1.medium.com/max/1600/1*rO34jp-bYSkCnHSdwoO3qQ.gif)
And, that is how I was able to get the file to work right! After a few misses, it felt amazing and it was great to learn a new way to do things!
![1*PE9_VtcfGGQjnYMwJ8YB1A.gif](https://cdn-images-1.medium.com/max/1600/1*PE9_VtcfGGQjnYMwJ8YB1A.gif)
Handled that curve-ball pretty well! Hope this helps you in the future when, someday you might need to create a special file for a special reason in shell!
_**Mitali Sengupta** is a former digital marketing professional, currently enrolled as a full-stack engineering student at Holberton School. She is passionate about innovation in AI and blockchain technologies. You can contact Mitali on [Twitter][4], [LinkedIn][5], or [GitHub][6]._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/what-happens-when-you-want-create-special-file-all-special-characters-linux

作者:[MITALI SENGUPTA][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/mitalisengupta
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/files/images/special-charspng
[3]:mailto:me@linuxbox
[4]:https://twitter.com/aadhiBangalan
[5]:https://www.linkedin.com/in/mitali-sengupta-auger
[6]:https://github.com/MitaliSengupta
[7]:http://mywiki.wooledge.org/Quotes#Examples
@ -0,0 +1,194 @@
An introduction to the DomTerm terminal emulator for Linux
============================================================

### Learn about DomTerm, a terminal emulator and multiplexer with HTML graphics and other unusual features.

![Terminal](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah "Terminal")

Image by: [Jamie Cox][25]. Modified by Opensource.com. [CC BY 2.0.][26]

[DomTerm][27] is a modern terminal emulator that uses a browser engine as a "GUI toolkit." This enables some neat features, such as embeddable graphics and links, HTML rich text, and foldable (show/hide) commands. Otherwise it looks and feels like a feature-full, standalone terminal emulator, with excellent xterm compatibility (including mouse handling and 24-bit color), and appropriate "chrome" (menus). In addition, there is built-in support for session management and sub-windows (as in `tmux` and `GNU screen`), basic input editing (as in `readline`), and paging (as in `less`).

### [domterm1.png][10]

![DomTerminal terminal emulator](https://opensource.com/sites/default/files/u128651/domterm1.png "DomTerminal terminal emulator")

Image 1: The DomTerminal terminal emulator. _[View larger image.][1]_
Below we'll look more at these features. We'll assume you have `domterm` installed (skip to the end of this article if you need to get and build DomTerm). First, though, here's a quick overview of the technology.

### Frontend vs. backend

Most of DomTerm is written in JavaScript and runs in a browser engine. This can be a desktop web browser, such as Chrome or Firefox (see [image 3][28]), or it can be an embedded browser. Using a general web browser works fine, but the user experience isn't as nice (as the menus are designed for general browsing, not for a terminal emulator), and the security model gets in the way, so using an embedded browser is nicer.
The following are currently supported:

* `qtdomterm`, which uses the Qt toolkit and `QtWebEngine`
* An [Electron][11] embedding (see [image 1][16])
* `atom-domterm` runs DomTerm as a package in the [Atom text editor][17] (which is also based on Electron) and integrates with the Atom pane system (see [image 2][18])
* A wrapper for JavaFX's `WebEngine`, which is useful for code written in Java (see [image 4][19])
* Previously, the preferred frontend used [Firefox-XUL][20], but Mozilla has since dropped XUL

### [dt-atom1.png][12]

![DomTerm terminal panes in Atom editor](https://opensource.com/sites/default/files/images/dt-atom1.png "DomTerm terminal panes in Atom editor")

Image 2: DomTerm terminal panes in Atom editor. _[View larger image.][7]_
Currently, the Electron frontend is probably the nicest option, closely followed by the Qt frontend. If you use Atom, `atom-domterm` works pretty well.
The backend server is written in C. It manages pseudo terminals (PTYs) and sessions. It is also an HTTP server that provides the JavaScript and other files to the frontend. The `domterm` command starts terminal jobs and performs other requests. If there is no server running, `domterm` daemonizes itself. Communication between the frontend and the server is normally done using WebSockets (with [libwebsockets][29] on the server). However, the JavaFX embedding uses neither WebSockets nor the DomTerm server; instead Java applications communicate directly using the Java–JavaScript bridge.

### A solid xterm-compatible terminal emulator

DomTerm looks and feels like a modern terminal emulator. It handles mouse events, 24-bit color, Unicode, double-width (CJK) characters, and input methods. DomTerm does a very good job on the [vttest testsuite][30].
Unusual features include:
**Show/hide buttons ("folding"):** The little triangles (seen in [image 2][31] above) are buttons that hide/show the corresponding output. To create the buttons, just add certain [escape sequences][32] in the [prompt text][33].
**Mouse-click support for `readline` and similar input editors:** If you click in the (yellow) input area, DomTerm will send the right sequence of arrow-key keystrokes to the application. (This is enabled by escape sequences in the prompt; you can also force it using Alt+Click.)
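The translation from a mouse click to keystrokes can be sketched in a few lines (an illustrative sketch, not DomTerm's actual code; the escape sequences are the standard xterm arrow keys):

```javascript
// Illustrative sketch: turn a click at column `target` into the
// arrow-key escape sequence needed to move a readline cursor from
// column `cursor`. (Not DomTerm's actual implementation.)
const LEFT = '\x1b[D', RIGHT = '\x1b[C';

function moveSequence(cursor, target) {
  const delta = target - cursor;
  return (delta >= 0 ? RIGHT : LEFT).repeat(Math.abs(delta));
}

console.log(JSON.stringify(moveSequence(2, 5))); // three "right" keys
```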
**Style the terminal using CSS:** This is usually done in `~/.domterm/settings.ini`, which is automatically reloaded when saved. For example, in [image 2][34], terminal-specific background colors were set.

### A better REPL console

A classic terminal emulator works on rectangular grids of character cells. This works for a REPL (command shell), but it is not ideal. Here are some DomTerm features useful for REPLs that are not typically found in terminal emulators:
**A command can "print" an image, a graph, a mathematical formula, or a set of clickable links:** An application can send an escape sequence containing almost any HTML. (The HTML is scrubbed to remove JavaScript and other dangerous features.)
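The scrubbing step can be illustrated with a toy filter (this is not DomTerm's actual sanitizer, and a production sanitizer should parse the markup rather than use regular expressions):

```javascript
// Toy illustration of HTML scrubbing: strip <script> elements and
// inline event-handler attributes before injecting HTML into the page.
function scrubHtml(html) {
  return html
    // remove <script>...</script> blocks entirely
    .replace(/<script\b[\s\S]*?<\/script\s*>/gi, '')
    // drop on*="..." / on*='...' event-handler attributes
    .replace(/\son\w+\s*=\s*(".*?"|'.*?')/gi, '');
}

console.log(scrubHtml('<b onclick="steal()">hi</b><script>evil()</script>'));
// prints: <b>hi</b>
```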

[Image 3][35] shows a fragment from a [`gnuplot`][36] session. Gnuplot (2.1 or later) supports `domterm` as a terminal type. Graphical output is converted to an [SVG image][37], which is then printed to the terminal. My blog post [Gnuplot display on DomTerm][38] provides more information on this.
### [dt-gnuplot.png][13]

![Image 3: Gnuplot screenshot](https://opensource.com/sites/default/files/dt-gnuplot.png "Image 3: Gnuplot screenshot")

Image 3: Gnuplot screenshot. _[View larger image.][8]_
The [Kawa][39] language has a library for creating and transforming [geometric picture values][40]. If you print such a picture value to a DomTerm terminal, the picture is converted to SVG and embedded in the output.

### [dt-kawa1.png][14]

![Image 4: Computable geometry in Kawa](https://opensource.com/sites/default/files/dt-kawa1.png "Image 4: Computable geometry in Kawa")

Image 4: Computable geometry in Kawa. _[View larger image.][9]_
**Rich text in output:** Help messages are more readable and look nicer with HTML styling. The lower pane of [image 1][41] shows the output from `domterm help`. (The output is plain text if not running under DomTerm.) Note the `PAUSED` message from the built-in pager.
**Error messages can include clickable links:** DomTerm recognizes the syntax `filename:line:column:` and turns it into a link that opens the file and line in a configurable text editor. (This works for relative filenames if you use `PROMPT_COMMAND` or similar to track directories.)
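The kind of pattern matching involved can be sketched as follows (an illustrative sketch; the `#position=` URL format here is an assumption, not necessarily DomTerm's exact syntax):

```javascript
// Illustrative sketch of recognizing filename:line:column references
// in program output and turning them into file: links. The real
// DomTerm matcher is more elaborate; this shows the general idea.
const POSITION = /([\w./-]+):(\d+):(\d+)/g;

function linkifyPositions(text) {
  return text.replace(POSITION, (all, file, line, col) =>
    `<a href="file:${file}#position=${line}:${col}">${all}</a>`);
}

console.log(linkifyPositions('foo.scm:12:3: warning: unused variable'));
```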
A compiler can detect that it is running under DomTerm and directly emit file links in an escape sequence. This is more robust than depending on DomTerm's pattern matching, as it handles spaces and other special characters, and it does not depend on directory tracking. In [image 4][42], you can see error messages from the [Kawa compiler][43]. Hovering over the file position causes it to be underlined, and the `file:` URL shows in the `atom-domterm` message area (bottom of the window). (When not using `atom-domterm`, such messages are shown in an overlay box, as seen for the `PAUSED` message in [image 1][44].)

The action when clicking on a link is configurable. The default action for a `file:` link with a `#position` suffix is to open the file in a text editor.

**Structured internal representation:** The following are all represented in the internal node structure: commands, prompts, input lines, normal and error output, tabs, and preserving the structure if you "Save as HTML." The HTML file is compatible with XML, so you can use XML tools to search or transform the output. The command `domterm view-saved` opens a saved HTML file in a way that enables command folding (show/hide buttons are active) and reflow on window resize.
**Built-in Lisp-style pretty-printing:** You can include pretty-printing directives (e.g., grouping) in the output such that line breaks are recalculated on window resize. See my article [Dynamic pretty-printing in DomTerm][45] for a deeper discussion.
**Basic built-in line editing** with history (like GNU `readline`): This uses the browser's built-in editor, so it has great mouse and selection handling. You can switch between normal character-mode (most characters typed are sent directly to the process) and line-mode (regular characters are inserted while control characters cause editing actions, with Enter sending the edited line to the process). The default is automatic mode, where DomTerm switches between character-mode and line-mode depending on whether the PTY is in "raw" or "canonical" mode.
**A built-in pager** (like a simplified `less`): Keyboard shortcuts will control scrolling. In "paging mode," the output pauses after each new screen (or single line, if you move forward line-by-line). The paging mode is unobtrusive and smart about user input, so you can (if you wish) run it without it interfering with interactive programs.

### Multiplexing and sessions
**Tabs and tiling:** Not only can you create multiple terminal tabs, you can also tile them. You can use either the mouse or a keyboard shortcut to move between panes and tabs as well as create new ones. They can be rearranged and resized with the mouse. This is implemented using the [GoldenLayout][46] JavaScript library. [Image 1][47] shows a window with two panes. The top one has two tabs, with one running [Midnight Commander][48]; the bottom pane shows `domterm help` output as HTML. However, on Atom we instead use its built-in draggable tiles and tabs; you can see this in [image 2][49].
**Detaching and reattaching to sessions:** DomTerm supports session management, similar to `tmux` and GNU `screen`. You can even attach multiple windows or panes to the same session. This supports multi-user session sharing and remote connections. (For security, all sessions of the same server need to be able to read a Unix domain socket and a local file containing a random key. This restriction will be lifted when we have a good, safe remote-access story.)
**The `domterm` command** is also like `tmux` or GNU `screen` in that it has multiple options for controlling or starting a server that manages one or more sessions. The major difference is that, if it's not already running under DomTerm, the `domterm` command creates a new top-level window, rather than running in the existing terminal.

The `domterm` command has a number of sub-commands, similar to `tmux` or `git`. Some sub-commands create windows or sessions. Others (such as "printing" an image) only work within an existing DomTerm session.

The command `domterm browse` opens a window or pane for browsing a specified URL, such as when browsing documentation.

### Getting and installing DomTerm
DomTerm is available from its [GitHub repository][50]. Currently, there are no prebuilt packages, but there are [detailed instructions][51]. All prerequisites are available on Fedora 27, which makes it especially easy to build.

### About the author

[![Per Bothner](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/per180116a.jpg?itok=dNNCOoqX)][52] Per Bothner - Per has been involved with Open Source since before the term existed. He was an early employee of Cygnus (later purchased by Red Hat), which was the first company commercializing Free Software. There he worked on gcc, g++, libio (the precursor to GNU/Linux stdio), gdb, Kawa, and more. Later he worked in the Java group at Sun and Oracle. Per wrote the Emacs term mode. Currently, Per spends too much time on [Kawa][21] (a Scheme compiler)... [More about Per Bothner][22] [More about me][23]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/introduction-domterm-terminal-emulator

作者:[Per Bothner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/perbothner
[1]:https://opensource.com/sites/default/files/u128651/domterm1.png
[2]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[7]:https://opensource.com/sites/default/files/images/dt-atom1.png
[8]:https://opensource.com/sites/default/files/dt-gnuplot.png
[9]:https://opensource.com/sites/default/files/dt-kawa1.png
[10]:https://opensource.com/file/384931
[11]:https://electronjs.org/
[12]:https://opensource.com/file/385346
[13]:https://opensource.com/file/385326
[14]:https://opensource.com/file/385331
[15]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator?rate=9HfSplqf1e4NohKkTld_881cH1hXTlSwU_2XKrnpTJQ
[16]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image1
[17]:https://atom.io/
[18]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image2
[19]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image4
[20]:https://en.wikipedia.org/wiki/XUL
[21]:https://www.gnu.org/software/kawa/
[22]:https://opensource.com/users/perbothner
[23]:https://opensource.com/users/perbothner
[24]:https://opensource.com/user/205986/feed
[25]:https://www.flickr.com/photos/15587432@N02/3281139507/
[26]:http://creativecommons.org/licenses/by/2.0
[27]:http://domterm.org/
[28]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image3
[29]:https://libwebsockets.org/
[30]:http://invisible-island.net/vttest/
[31]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image2
[32]:http://domterm.org/Wire-byte-protocol.html
[33]:http://domterm.org/Shell-prompts.html
[34]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image2
[35]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image3
[36]:http://www.gnuplot.info/
[37]:https://developer.mozilla.org/en-US/docs/Web/SVG
[38]:http://per.bothner.com/blog/2016/gnuplot-in-domterm/
[39]:https://www.gnu.org/software/kawa/
[40]:https://www.gnu.org/software/kawa/Composable-pictures.html
[41]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image1
[42]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image4
[43]:https://www.gnu.org/software/kawa/
[44]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image1
[45]:http://per.bothner.com/blog/2017/dynamic-prettyprinting/
[46]:https://golden-layout.com/
[47]:https://opensource.com/sites/default/files/u128651/domterm1.png
[48]:https://midnight-commander.org/
[49]:https://opensource.com/article/18/1/introduction-domterm-terminal-emulator#image2
[50]:https://github.com/PerBothner/DomTerm
[51]:http://domterm.org/Downloading-and-building.html
[52]:https://opensource.com/users/perbothner
[53]:https://opensource.com/users/perbothner
[54]:https://opensource.com/users/perbothner
[55]:https://opensource.com/tags/linux
sources/tech/20180130 Refreshing old computers with Linux.md

Refreshing old computers with Linux
============================================================
### A middle school's Tech Stewardship program is now an elective class for science and technology students.

![Refreshing old computers with Linux](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_kid_education.png?itok=3lRp6gFa "Refreshing old computers with Linux")

Image by : opensource.com

It's nearly impossible to enter a school these days without seeing an abundance of technology. Despite this influx of computers into education, funding inequity forces school systems to make difficult choices. Some educators see things as they are and wonder, "Why?" while others see problems as opportunities and think, "Why not?"
[Andrew Dobbie][31] is one of those visionaries who saw his love of Linux and computer reimaging as a unique learning opportunity for his students.
Andrew teaches sixth grade at Centennial Senior Public School in Brampton, Ontario, Canada, and is a [Google Certified Innovator][16]. Andrew said, "Centennial Senior Public School hosts a special regional science & technology program that invites students from throughout the region to spend three years learning Ontario curriculum through the lens of science and technology." However, the school's students were in danger of falling prey to the digital divide that's exacerbated by hardware and software product lifecycles and inadequate funding.

![Tech Stewardship students working on a computer](https://opensource.com/sites/default/files/u128651/techstewardship_students.jpg "Tech Stewardship students working on a computer")

Image courtesy of [Affordable Tech for All][6]

Although there was a school-wide need for access to computers in the classrooms, Andrew and his students discovered that dozens of old computers were being shipped out of the school because they were too old and slow to keep up with the latest proprietary operating systems or function on the school's network.
Andrew saw this problem as a unique learning opportunity for his students and created the [Tech Stewardship][17] program. He works in partnership with two other teachers, Mike Doiu and Neil Lyons, and some students, who "began experimenting with open source operating systems like [Lubuntu][18] and [CubLinux][19] to help develop a solution to our in-class computer problem," he says.

The sixth-grade students deployed the reimaged computers into classrooms throughout the school. When they exhausted the school's supply of surplus computers, they sourced more free computers from a local nonprofit organization called [Renewed Computer Technology Ontario][20]. In all, the Tech Stewardship program has provided more than 200 reimaged computers for students to use in classrooms throughout the school.

![Tech Stewardship students](https://opensource.com/sites/default/files/u128651/techstewardship_class.jpg "Tech Stewardship students")

Image courtesy of [Affordable Tech for All][7]
The Tech Stewardship program is now an elective class for the school's science and technology students in grades six, seven, and eight. Not only are the students learning about computer reimaging, they're also giving back to their local communities through this open source outreach program.

### A broad impact

The Tech Stewardship program is linked directly to the school's curriculum, especially in social studies by teaching the [United Nations' Sustainable Development Goals][21] (SDGs). The program is a member of [Teach SDGs][22], and Andrew serves as a Teach SDGs ambassador. Also, as a Google Certified Innovator, Andrew partners with Google and the [EdTechTeam][23], and Tech Stewardship has participated in Ontario's [Bring it Together][24] conference for educational technology.

Andrew's students also serve as mentors to their fellow students. In one instance, a group of girls taught a grade 3 class about effective use of Google Drive and helped these younger students to make the best use of their Linux computers. Andrew said, "outreach and extension of learning beyond the classroom at Centennial is a major goal of the Tech Stewardship program."

### What the students say
Linux and open source are an integral part of the program. A girl named Ashna says, "In grade 6, Mr. Dobbie had shown us how to reimage a computer into Linux to use it for educational purposes. Since then, we have been learning more and growing." Student Shradhaa says, "At the very beginning, we didn't even know how to reimage with Linux. Mr. Dobbie told us to write steps for how to reimage Linux devices, and using those steps we are trying to reimage the computers."

![Tech Stewardship student upgrading memory](https://opensource.com/sites/default/files/u128651/techstewardship_upgrading-memory.jpg "Tech Stewardship student upgrading memory")

Image courtesy of [Affordable Tech for All][8]
The students were quick to add that Tech Stewardship has become a portal for discussion about being advocates for the change they want to see in the world. Through their hands-on activity, students learn to support the United Nations Sustainable Development goals. They also learn lessons far beyond the curriculum itself. For example, a student named Areez says he has learned how to find other resources, including donations, that allow the project to expand, since the class work upfitting older computers doesn't produce an income stream.

Another student, Harini, thinks the Tech Stewardship program has demonstrated to other students what is possible and how one small initiative can change the world. After learning about the program, 40 other schools and individuals are reimaging computers with Linux. Harini says, "The more people who use them for educational purposes, the more outstanding the future will become since those educated people will lead out new, amazing lives with jobs."

Joshua, another student in the program, sees it this way: "I thought of it as just a fun experience, but as it went on, we continued learning and understanding how what we were doing was making such a big impact on the world!" Later, he says, "a school reached out to us and asked us if we could reimage some computers for them. We went and completed the task. Then it continued to grow, as people from Europe came to see how we were fixing broken computers and started doing it when they went back."

Andrew Dobbie is keen to share his experience with schools and interested individuals. You can contact him on [Twitter][25] or through his [website][26].

### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/donw2-crop.jpg?itok=OqOYd3A8)][27] Don Watkins - Educator, education technology specialist, entrepreneur, open source advocate. M.A. in Educational Psychology, MSED in Educational Leadership, Linux system administrator, CCNA, virtualization using Virtual Box. Follow me at [@Don_Watkins][13]. [More about me][14]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/new-linux-computers-classroom

作者:[Don Watkins][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/don-watkins
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://photos.google.com/share/AF1QipPnm-q9OIQnrzDD4n7oWIBBIE7RQ6BI9lv486RaU5lKBrs88pq3gPKM8VAgY0prkw?key=cS1RdEZ3ZHdXLWp0bUwzMEk3UnFQRkUwbWl1dWhn
[7]:https://photos.google.com/share/AF1QipPnm-q9OIQnrzDD4n7oWIBBIE7RQ6BI9lv486RaU5lKBrs88pq3gPKM8VAgY0prkw?key=cS1RdEZ3ZHdXLWp0bUwzMEk3UnFQRkUwbWl1dWhn
[8]:https://photos.google.com/share/AF1QipPnm-q9OIQnrzDD4n7oWIBBIE7RQ6BI9lv486RaU5lKBrs88pq3gPKM8VAgY0prkw?key=cS1RdEZ3ZHdXLWp0bUwzMEk3UnFQRkUwbWl1dWhn
[9]:https://opensource.com/file/384581
[10]:https://opensource.com/file/384591
[11]:https://opensource.com/file/384586
[12]:https://opensource.com/article/18/1/new-linux-computers-classroom?rate=bK5X7pRc5y9TyY6jzOZeLDW6ehlWmNPXuP38DYsQ-6I
[13]:https://twitter.com/Don_Watkins
[14]:https://opensource.com/users/don-watkins
[15]:https://opensource.com/user/15542/feed
[16]:https://edutrainingcenter.withgoogle.com/certification_innovator
[17]:https://sites.google.com/view/mrdobbie/tech-stewardship
[18]:https://lubuntu.net/
[19]:https://en.wikipedia.org/wiki/Cub_Linux
[20]:http://www.rcto.ca/
[21]:http://www.un.org/sustainabledevelopment/sustainable-development-goals/
[22]:http://www.teachsdgs.org/
[23]:https://www.edtechteam.com/team/
[24]:http://bringittogether.ca/
[25]:https://twitter.com/A_Dobbie11
[26]:http://bit.ly/linuxresources
[27]:https://opensource.com/users/don-watkins
[28]:https://opensource.com/users/don-watkins
[29]:https://opensource.com/users/don-watkins
[30]:https://opensource.com/article/18/1/new-linux-computers-classroom#comments
[31]:https://twitter.com/A_Dobbie11
[32]:https://opensource.com/tags/education
[33]:https://opensource.com/tags/linux
128
sources/tech/20180131 10 things I love about Vue.md
Normal file
128
sources/tech/20180131 10 things I love about Vue.md
Normal file
@ -0,0 +1,128 @@
|

10 things I love about Vue
============================================================

![](https://cdn-images-1.medium.com/max/1600/1*X4ipeKVYzmY2M3UPYgUYuA.png)
I love Vue. When I first looked at it in 2016, perhaps I was coming from a perspective of JavaScript framework fatigue. I'd already had experience with Backbone, Angular, and React, among others, and I wasn't overly enthusiastic to try a new framework. It wasn't until I read a comment on Hacker News describing Vue as the 'new jQuery' of JavaScript that my curiosity was piqued. Until that point, I had been relatively content with React — it is a good framework based on solid design principles centred around view templates, virtual DOM and reacting to state, and Vue also provides these great things. In this blog post, I aim to explore why Vue is the framework for me. I choose it above any other that I have tried. Perhaps you will agree with some of my points, but at the very least I hope to give you some insight into what it is like to develop modern JavaScript applications with Vue.

1\. Minimal Template Syntax
The template syntax which you are given by default from Vue is minimal, succinct and extendable. Like many parts of Vue, it’s easy to not use the standard template syntax and instead use something like JSX (there is even an official page of documentation about how to do this), but I don’t know why you would want to do that to be honest. For all that is good about JSX, there are some valid criticisms: by blurring the line between JavaScript and HTML, it makes it a bit too easy to start writing complex code in your template which should instead be separated out and written elsewhere in your JavaScript view code.

Vue instead uses standard HTML to write your templates, with a minimal template syntax for simple things such as iteratively creating elements based on the view data.
```
<template>
  <div id="app">
    <ul>
      <li v-for='number in numbers' :key='number'>{{ number }}</li>
    </ul>
    <form @submit.prevent='addNumber'>
      <input type='text' v-model='newNumber'>
      <button type='submit'>Add another number</button>
    </form>
  </div>
</template>

<script>
export default {
  name: 'app',
  methods: {
    addNumber() {
      const num = +this.newNumber;
      if (typeof num === 'number' && !isNaN(num)) {
        this.numbers.push(num);
      }
    }
  },
  data() {
    return {
      newNumber: null,
      numbers: [1, 23, 52, 46]
    };
  }
}
</script>

<style lang="scss">
ul {
  padding: 0;
  li {
    list-style-type: none;
    color: blue;
  }
}
</style>
```

I also like the short-bindings provided by Vue, ':' for binding data variables into your template and '@' for binding to events. It's a small thing, but it feels nice to type and keeps your components succinct.
2\. Single File Components

When most people write Vue, they do so using 'single file components'. Essentially it is a file with the suffix .vue containing up to three parts (the CSS, HTML, and JavaScript) for each component.
This coupling of technologies feels right. It makes it easy to understand each component in a single place. It also has the nice side effect of encouraging you to keep your code short for each component. If the JavaScript, CSS, and HTML for your component take up too many lines, then it might be time to modularise further.
When it comes to the `<style>` tag of a Vue component, we can add the 'scoped' attribute. This fully encapsulates the styling to the component: if we define a `.name` CSS selector in this component, that style won't be applied in any other component. I much prefer this approach of styling view components to the CSS-in-JS approaches that seem popular in other leading frameworks.
Another very nice thing about single file components is that they are actually valid HTML5 files. `<template>`, `<script>`, and `<style>` are all part of the official W3C specification. This means that many tools you use as part of your development process (such as linters) can work out of the box or with minimal adaptation.

3\. Vue as the new jQuery
Really these two libraries are not similar and are doing different things. Let me provide you with a terrible analogy that I am actually quite fond of to describe the relationship of Vue and jQuery: The Beatles and Led Zeppelin. The Beatles need no introduction, they were the biggest group of the 1960s and were supremely influential. It gets harder to pin the accolade of ‘biggest group of the 1970s’ but sometimes that goes to Led Zeppelin. You could say that the musical relationship between the Beatles and Led Zeppelin is tenuous and their music is distinctively different, but there is some prior art and influence to accept. Maybe 2010s JavaScript world is like the 1970s music world and as Vue gets more radio plays, it will only attract more fans.
Some of the philosophy that made jQuery so great is also present in Vue: a really easy learning curve but with all the power you need to build great web applications based on modern web standards. At its core, Vue is really just a wrapper around JavaScript objects.

4\. Easily extensible
As mentioned, Vue uses standard HTML, JS, and CSS to build its components by default, but it is really easy to plug in other technologies. If we want to use Pug instead of HTML, TypeScript instead of JS, or Sass instead of CSS, it's just a matter of installing the relevant node modules and adding an attribute to the relevant section of our single file component. You could even mix and match components within a project — e.g., some components using HTML and others using Pug — although I'm not sure doing this is the best practice.

5\. Virtual DOM
The virtual DOM is used in many frameworks these days and it is great. It means the framework can work out what has changed in our state and then efficiently apply DOM updates, minimizing re-rendering and optimising the performance of our application. Everyone and their mother has a Virtual DOM these days, so whilst it’s not something unique, it’s still very cool.

6\. Vuex is great
For most applications, managing state becomes a tricky issue which using a view library alone can not solve. Vue's solution to this is the Vuex library. It's easy to set up and integrates very well with Vue. Those familiar with Redux will be at home here, but I find that the integration between Vue and Vuex is neater and more minimal than that of React and Redux. Soon-to-be-standard JavaScript provides the object spread operator, which allows us to merge state, or functions to manipulate state, from Vuex into the components that need them.
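The spread pattern looks roughly like this; `mapStateLike` below is a self-contained stand-in for Vuex's real `mapState` helper, so the sketch runs without the library:

```javascript
// Sketch of the object-spread pattern Vuex encourages. mapStateLike
// stands in for Vuex's mapState helper; in a real app you would
// `import { mapState } from 'vuex'` instead.
const store = { state: { count: 3, user: 'julia' } };

function mapStateLike(keys) {
  const computed = {};
  for (const key of keys) {
    computed[key] = () => store.state[key];
  }
  return computed;
}

const component = {
  computed: {
    double() { return store.state.count * 2; },
    // merge store-backed getters in alongside local computed props
    ...mapStateLike(['count', 'user'])
  }
};

console.log(component.computed.count());   // 3
console.log(component.computed.double());  // 6
```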

7\. Vue CLI
The CLI provided by Vue is really great and makes it easy to get started with a webpack project with Vue. Single file components support, babel, linting, testing and a sensible project structure can all be created with a single command in your terminal.
There is one thing, however, that I miss from the CLI, and that is `vue build`.

```
try this: `echo '<template><h1>Hello World!</h1></template>' > Hello.vue && vue build Hello.vue -o`
```
It looked so simple to build and run components and test them in the browser. Unfortunately, this command was later removed from Vue; the recommendation is now to use Poi instead. Poi is basically a wrapper around webpack, but I don't think it quite reaches the same level of simplicity as the command quoted above.

8\. Re-rendering optimizations worked out for you
In Vue, you don’t have to manually state which parts of the DOM should be re-rendered. I never was a fan of the management on react components, such as ‘shouldComponentUpdate’ in order to stop the whole DOM tree re-rendering. Vue is smart about this.

9\. Easy to get help
Vue has reached a critical mass of developers using the framework to build a wide variety of applications. The documentation is very good. Should you need further help, there are multiple channels available with many active users: Stack Overflow, Discord, Twitter, etc. This should give you more confidence in building an application with Vue than with some other frameworks that have fewer users.
|
||||
|
||||
10\. Not maintained by a single corporation
|
||||
|
||||
I think it’s a good thing for an open source library to not have the voting rights of its direction steered too much by a single corporation. Issues such as the react licensing issue (now resolved) are not something Vue has had to deal with.
|
||||
|
||||
In summary, I think Vue is an excellent choice for whatever JavaScript project you might be starting next. The available ecosystem is larger than I covered in this blog post. For a more full-stack offering you could look at Nuxt.js. And if you want some re-usable styled components you could look at something like Vuetify. Vue has been one of the fastest growing frameworks of 2017 and I predict the growth is not going to slow down for 2018\. If you have a spare 30 minutes, why not dip your toes in and see what Vue has to offer for yourself?
|
||||
|
||||
P.S. — The documentation gives you a great comparison to other frameworks here: [https://vuejs.org/v2/guide/comparison.html][1]
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://medium.com/@dalaidunc/10-things-i-love-about-vue-505886ddaff2

作者:[Duncan Grant][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://medium.com/@dalaidunc
[1]:https://vuejs.org/v2/guide/comparison.html

Microservices vs. monolith: How to choose
============================================================

### Both architectures have pros and cons, and the right decision depends on your organization's unique needs.

![Microservices vs. monolith: How to choose](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/building_architecture_design.jpg?itok=lB_qYv-I "Microservices vs. monolith: How to choose")

Image by: Onasill ~ Bill Badzo on [Flickr][11]. [CC BY-NC-SA 2.0][12]. Modified by Opensource.com.

For many startups, conventional wisdom says to start with a monolith architecture over microservices. But are there exceptions to this?

The upcoming book, [_Microservices for Startups_][13], explores the benefits and drawbacks of microservices, offering insights from dozens of CTOs.

While different CTOs take different approaches when starting new ventures, they agree that context and capability are key. If you're pondering whether your business would be best served by a monolith or microservices, consider the factors discussed below.

### Understanding the spectrum

More on Microservices

* [How to explain microservices to your CEO][1]

* [Free eBook: Microservices vs. service-oriented architecture][2]

* [Secured DevOps for microservices][3]

Let's first clarify what exactly we mean by “monolith” and “microservice.”

Microservices are an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and are independently deployable by fully automated deployment machinery.
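
As a toy illustration of that definition, here is a minimal sketch of one such small service exposing an HTTP resource API, written in Python with only the standard library. The service name (`orders`) and its JSON payload are invented for the example; a real service would own one business capability and run in its own process:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderServiceHandler(BaseHTTPRequestHandler):
    """One tiny service: it owns a single business capability (orders)
    and talks to the outside world only through an HTTP API."""

    def do_GET(self):
        body = json.dumps({"service": "orders", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

def start_service(port=0):
    """Start the service in a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), OrderServiceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In practice each such service would be deployed as its own process or container; the background thread here only keeps the sketch self-contained and easy to run.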

A monolithic application is built as a single, unified unit, usually one massive code base. Often a monolith consists of three parts: a database, a client-side user interface (consisting of HTML pages and/or JavaScript running in a browser), and a server-side application.

“System architectures lie on a spectrum,” Zachary Crockett, CTO of [Particle][14], said in an interview. “When discussing microservices, people tend to focus on one end of that spectrum: many tiny applications passing too many messages to each other. At the other end of the spectrum, you have a giant monolith doing too many things. For any real system, there are many possible service-oriented architectures between those two extremes.”

Depending on your situation, there are good reasons to tend toward either a monolith or microservices.

"We want to use the best tool for each service." Julien Lemoine, CTO at Algolia

Contrary to what many people think, a monolith isn’t a dated architecture that's best left in the past. In certain circumstances, a monolith is ideal. I spoke to Steven Czerwinski, head of engineering at [Scalyr][15] and a former Google employee, to better understand this.

“Even though we had had positive experiences of using microservices at Google, we [at Scalyr] went [the monolith] route because having one monolithic server means less work for us as two engineers,” he explained. (This was back in the early days of Scalyr.)

But if your team is experienced with microservices and you have a clear idea of the direction you’re going, microservices can be a great alternative.

Julien Lemoine, CTO at [Algolia][16], chimed in on this point: “We have always started with a microservices approach. The main goal was to be able to use different technology to build our service, for two big reasons:

* We want to use the best tool for each service. Our search API is highly optimized at the lowest level, and C++ is the perfect language for that. That said, using C++ for everything is a waste of productivity, especially to build a dashboard.

* We want the best talent, and using only one technology would limit our options. This is why we have different languages in the company.”

If your team is prepared, starting with microservices allows your organization to get used to the rhythm of developing in a microservice environment right from the start.

### Weighing the pros and cons

Before you decide which approach is best for your organization, it's important to consider the strengths and weaknesses of each.

### Monoliths

### Pros:

* **Fewer cross-cutting concerns:** Most apps have cross-cutting concerns, such as logging, rate limiting, and security features like audit trails and DoS protection. When everything is running through the same app, it’s easy to address those concerns by hooking up components.

* **Less operational overhead:** There’s only one application to set up for logging, monitoring, and testing. Also, it's generally less complex to deploy.

* **Performance:** A monolith architecture can offer performance advantages, since shared-memory access is faster than inter-process communication (IPC).

### Cons:

* **Tightly coupled:** Monolithic app services tend to get tightly coupled and entangled as the application evolves, making it difficult to isolate services for purposes such as independent scaling or code maintainability.

* **Harder to understand:** Monolithic architectures are more difficult to understand because of dependencies, side effects, and other factors that are not obvious when you’re looking at a specific service or controller.

### Microservices

### Pros:

* **Better organization:** Microservice architectures are typically better organized, since each microservice has a specific job and is not concerned with the jobs of other components.

* **Decoupled:** Decoupled services are easier to recompose and reconfigure to serve different apps (for example, serving both web clients and the public API). They also allow fast, independent delivery of individual parts within a larger integrated system.

* **Performance:** Depending on how they're organized, microservices can offer performance advantages because you can isolate hot services and scale them independently of the rest of the app.

* **Fewer mistakes:** Microservices enable parallel development by establishing a strong boundary between different parts of your system. Doing this makes it more difficult, for example, to connect parts that shouldn’t be connected, or to couple too tightly parts that need to stay separate.

### Cons:

* **Cross-cutting concerns across each service:** As you build a new microservice architecture, you’re likely to discover cross-cutting concerns you may not have anticipated at design time. You’ll either need to incur the overhead of separate modules for each cross-cutting concern (e.g., testing), or encapsulate cross-cutting concerns in another service layer through which all traffic is routed. Eventually, even monolithic architectures tend to route traffic through an outer service layer for cross-cutting concerns, but with a monolithic architecture, it’s possible to delay the cost of that work until the project is more mature.

* **Higher operational overhead:** Microservices are frequently deployed on their own virtual machines or containers, causing a proliferation of VMs to wrangle. These tasks are frequently automated with container fleet management tools.

### Decision time

Once you understand the pros and cons of both approaches, how do you apply this information to your startup? Based on interviews with CTOs, here are three questions to guide your decision process:

**Are you in familiar territory?**

Diving directly into microservices is less risky if your team has previous domain experience (for example, in e-commerce) and knowledge concerning the needs of your customers. If you’re traveling down an unknown path, on the other hand, a monolith may be a safer option.

**Is your team prepared?**

Does your team have experience with microservices? If you quadruple the size of your team within the next year, will microservices offer the best environment? Evaluating the dimensions of your team is crucial to the success of your project.

**How’s your infrastructure?**

To make microservices work, you’ll need a cloud-based infrastructure.

David Strauss, CTO of [Pantheon][17], explained: “[Previously], you would want to start with a monolith because you wanted to deploy one database server. The idea of having to set up a database server for every single microservice and then scale out was a mammoth task. Only a huge, tech-savvy organization could do that. Today, with services like Google Cloud and Amazon AWS, you have many options for deploying tiny things without needing to own the persistence layer for each one.”

### Evaluate the business risk

As a tech-savvy startup with high ambitions, you might think microservices is the “right” way to go. But microservices can pose a business risk. Strauss explained, “A lot of teams overbuild their project initially. Everyone wants to think their startup will be the next unicorn, and they should therefore build everything with microservices or some other hyper-scalable infrastructure. But that's usually wrong.” In these cases, Strauss continued, the areas that they thought they needed to scale are often not the ones that actually should scale first, resulting in wasted time and effort.

### Situational awareness

Ultimately, context is key. Here are some tips from CTOs:

#### When to start with a monolith

* **Your team is at founding stage:** Your team is small—say, 2 to 5 members—and is unable to tackle a broader, high-overhead microservices architecture.

* **You’re building an unproven product or proof of concept:** If you're bringing a brand-new product to market, it will likely evolve over time, and a monolith is better suited to allow for rapid product iteration. The same notion applies to a proof of concept, where your goal is to learn as much as possible as quickly as possible, even if you end up throwing it away.

* **You have no microservices experience:** Unless you can justify the risk of learning on the fly at an early stage, a monolith may be a safer approach for an inexperienced team.

#### When to start with microservices

* **You need quick, independent service delivery:** Microservices allow for fast, independent delivery of individual parts within a larger integrated system. Note that it can take some time to see service delivery gains with microservices compared to a monolith, depending on your team's size.

* **A piece of your platform needs to be extremely efficient:** If your business does intensive processing of petabytes of log volume, you’ll likely want to build that service out in an efficient language like C++, while your user dashboard may be built in [Ruby on Rails][5].

* **You plan to grow your team:** Starting with microservices gets your team used to developing in separate small services from the beginning, and teams that are separated by service boundaries are easier to scale as needed.

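
The guidance above can be condensed into a rough checklist. The sketch below is my own illustration, not something from the book; the function and field names are invented, and a real decision obviously weighs more nuance than a handful of booleans:

```python
from dataclasses import dataclass

@dataclass
class StartupContext:
    team_size: int                   # founding stage is roughly 2-5 people
    proven_product: bool             # past the proof-of-concept stage?
    microservices_experience: bool   # has the team done this before?
    needs_independent_delivery: bool # parts must ship on their own cadence
    has_hot_component: bool          # one piece must be extremely efficient
    planning_team_growth: bool       # expecting to scale the team soon

def suggest_architecture(ctx: StartupContext) -> str:
    """Condense the article's tips: default to a monolith, and only
    reach for microservices when the team is actually prepared."""
    if (ctx.team_size <= 5
            or not ctx.proven_product
            or not ctx.microservices_experience):
        return "monolith"
    if (ctx.needs_independent_delivery
            or ctx.has_hot_component
            or ctx.planning_team_growth):
        return "microservices"
    return "monolith"
```

For example, a three-person team with an unproven product lands on "monolith", while a larger, experienced team that needs independent delivery lands on "microservices".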

To decide whether a monolith or microservices is right for your organization, be honest and self-aware about your context and capabilities. This will help you find the best path to grow your business.

### Topics

[Microservices][21] [DevOps][22]

### About the author

[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/profile_15.jpg?itok=EaSRMCN-)][18] jakelumetta - Jake is the CEO of [ButterCMS, an API-first CMS][6]. He loves whipping up Butter puns and building tools that make developers' lives better. For more content like this, follow [@ButterCMS][7] on Twitter and [subscribe to our blog][8]. [More about me][9]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/how-choose-between-monolith-microservices

作者:[jakelumetta][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jakelumetta
[1]:https://blog.openshift.com/microservices-how-to-explain-them-to-your-ceo/?intcmp=7016000000127cYAAQ&src=microservices_resource_menu1
[2]:https://www.openshift.com/promotions/microservices.html?intcmp=7016000000127cYAAQ&src=microservices_resource_menu2
[3]:https://opensource.com/business/16/11/secured-devops-microservices?src=microservices_resource_menu3
[4]:https://opensource.com/article/18/1/how-choose-between-monolith-microservices?rate=tSotlNvwc-Itch5fhYiIn5h0L8PcUGm_qGvqSVzu9w8
[5]:http://rubyonrails.org/
[6]:https://buttercms.com/
[7]:https://twitter.com/ButterCMS
[8]:https://buttercms.com/blog/
[9]:https://opensource.com/users/jakelumetta
[10]:https://opensource.com/user/205531/feed
[11]:https://www.flickr.com/photos/onasill/16452059791/in/photolist-r4P7ci-r3xUqZ-JkWzgN-dUr8Mo-biVsvF-kA2Vot-qSLczk-nLvGTX-biVxwe-nJJmzt-omA1vW-gFtM5-8rsk8r-dk9uPv-5kja88-cv8YTq-eQqNJu-7NJiqd-pBUkk-pBUmQ-6z4dAw-pBULZ-vyM3V3-JruMsr-pBUiJ-eDrP5-7KCWsm-nsetSn-81M3EC-pBURh-HsVXuv-qjgBy-biVtvx-5KJ5zK-81F8xo-nGFQo3-nJr89v-8Mmi8L-81C9A6-qjgAW-564xeQ-ihmDuk-biVBNz-7C5VBr-eChMAV-JruMBe-8o4iKu-qjgwW-JhhFXn-pBUjw
[12]:https://creativecommons.org/licenses/by-nc-sa/2.0/
[13]:https://buttercms.com/books/microservices-for-startups/
[14]:https://www.particle.io/Particle
[15]:https://www.scalyr.com/
[16]:https://www.algolia.com/
[17]:https://pantheon.io/
[18]:https://opensource.com/users/jakelumetta
[19]:https://opensource.com/users/jakelumetta
[20]:https://opensource.com/users/jakelumetta
[21]:https://opensource.com/tags/microservices
[22]:https://opensource.com/tags/devops

How to Manage PGP and SSH Keys with Seahorse
============================================================

![Seahorse](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fish-1907607_1920.jpg?itok=u07bav4m "Seahorse")

Learn how to manage both PGP and SSH keys with the Seahorse GUI tool. [Creative Commons Zero][6]

Security is tantamount to peace of mind. After all, security is a big reason why so many users migrated to Linux in the first place. But why stop at merely adopting the platform, when you can also employ several techniques and technologies to help secure your desktop or server systems?

One such technology involves keys—in the form of PGP and SSH. PGP keys allow you to encrypt and decrypt emails and files, and SSH keys allow you to log into servers with an added layer of security.

Sure, you can manage these keys via the command-line interface (CLI), but what if you’re working on a desktop with a resplendent GUI? Experienced Linux users may cringe at the idea of shrugging off the command line, but not all users have the same skill set and comfort level there. Thus, the GUI!

In this article, I will walk you through the process of managing both PGP and SSH keys through the [Seahorse][14] GUI tool. Seahorse has a pretty impressive feature set; it can:

* Encrypt/decrypt/sign files and text.

* Manage your keys and keyring.

* Synchronize your keys and your keyring with remote key servers.

* Sign and publish keys.

* Cache your passphrase.

* Back up both keys and keyring.

* Add an image in any GDK-supported format as an OpenPGP photo ID.

* Create, configure, and cache SSH keys.

For those who don’t know, Seahorse is a GNOME application for managing both encryption keys and passwords within the GNOME keyring. But fear not: Seahorse is available for installation on numerous desktops. And since Seahorse is found in the standard repositories, you can open up your desktop’s app store (such as Ubuntu Software or Elementary OS AppCenter) and install it. To do this, locate Seahorse in your distribution’s application store and click to install. Once you have Seahorse installed, you’re ready to start making use of a very handy tool.

Let’s do just that.

### PGP Keys

The first thing we’re going to do is create a new PGP key. As I said earlier, PGP keys can be used to encrypt email (with tools like [Thunderbird][15]’s [Enigmail][16] or the built-in encryption function in [Evolution][17]). A PGP key also allows you to encrypt files. Anyone with your public key will be able to decrypt those emails or files. Without a PGP key, no can do.

Creating a new PGP key pair is incredibly simple with Seahorse. Here’s what you do:

1. Open the Seahorse app

2. Click the + button in the upper left corner of the main pane

3. Select PGP Key (Figure 1)

4. Click Continue

5. When prompted, type a full name and email address

6. Click Create

![Seahorse](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_1.jpg?itok=khLOYC61 "Seahorse")

Figure 1: Creating a PGP key with Seahorse. [Used with permission][1]

While creating your PGP key, you can click to expand the Advanced key options section, where you can configure a comment for the key, encryption type, key strength, and expiration date (Figure 2).

![PGP](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_2.jpg?itok=eWiazwrn "PGP")

Figure 2: PGP key advanced options. [Used with permission][2]

The comment section is very handy to help you remember a key’s purpose (or other informative bits).

With your PGP key created, double-click on it in the key listing. In the resulting window, click on the Names and Signatures tab. In this window, you can sign your key (to indicate you trust this key). Click the Sign button and then (in the resulting window) indicate how carefully you’ve checked this key and how others will see the signature (Figure 3).

![Key signing](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_3.jpg?itok=7USKG9fI "Key signing")

Figure 3: Signing a key to indicate trust level. [Used with permission][3]

Signing keys is very important when you’re dealing with other people’s keys, as a signed key assures your system (and you) that you’ve done the work and can fully trust an imported key.

Speaking of imported keys, Seahorse allows you to easily import someone’s public key file (the file will end in .asc). Having someone’s public key on your system means you can decrypt emails and files sent to you by them. However, Seahorse has suffered a [known bug][18] for quite some time. The problem is that Seahorse imports using gpg version one, but displays with gpg version two. This means that, until this long-standing bug is fixed, importing public keys will always fail. If you want to import a public PGP key into Seahorse, you’re going to have to use the command line. So, if someone has sent you the file olivia.asc and you want to import it so it can be used with Seahorse, you would issue the command `gpg2 --import olivia.asc`. That key would then appear in the GnuPG Keys listing. You can open the key, click the I trust signatures button, and then click the Sign this key button to indicate how carefully you’ve checked the key in question.

### SSH Keys

Now we get to what I consider to be the most important aspect of Seahorse—SSH keys. Not only does Seahorse make it easy to generate an SSH key, it makes it easy to send that key to a server, so you can take advantage of SSH key authentication. Here’s how you generate a new key and then export it to a remote server.

1. Open up Seahorse

2. Click the + button

3. Select Secure Shell Key

4. Click Continue

5. Give the key a description

6. Click Create and Set Up

7. Type and verify a passphrase for the key

8. Click OK

9. Type the address of the remote server and a remote login name found on the server (Figure 4)

10. Type the password for the remote user

11. Click OK

![SSH key](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_4.jpg?itok=ZxuxT8ry "SSH key")

Figure 4: Uploading an SSH key to a remote server. [Used with permission][4]

The new key will be uploaded to the remote server and is ready to use. If your server is set up for SSH key authentication, you’re good to go.

Do note that during the creation of an SSH key, you can click to expand the Advanced key options and configure Encryption Type and Key Strength (Figure 5).

![Advanced options](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_5.jpg?itok=vUT7pi0z "Advanced options")

Figure 5: Advanced SSH key options. [Used with permission][5]

### A must-use for new Linux users

Any new-to-Linux user should get familiar with Seahorse. Even with its flaws, Seahorse is still an incredibly handy tool to have at the ready. At some point, you will likely want (or need) to encrypt or decrypt an email or file, or manage secure shell keys for SSH key authentication. If you want to do this while avoiding the command line, Seahorse is the tool to use.

_Learn more about Linux through the free ["Introduction to Linux"][13] course from The Linux Foundation and edX._

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/2/how-manage-pgp-and-ssh-keys-seahorse

作者:[JACK WALLEN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://www.linux.com/licenses/category/creative-commons-zero
[7]:https://www.linux.com/files/images/seahorse1jpg
[8]:https://www.linux.com/files/images/seahorse2jpg
[9]:https://www.linux.com/files/images/seahorse3jpg
[10]:https://www.linux.com/files/images/seahorse4jpg
[11]:https://www.linux.com/files/images/seahorse5jpg
[12]:https://www.linux.com/files/images/fish-19076071920jpg
[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[14]:https://wiki.gnome.org/Apps/Seahorse
[15]:https://www.mozilla.org/en-US/thunderbird/
[16]:https://enigmail.net/index.php/en/
[17]:https://wiki.gnome.org/Apps/Evolution
[18]:https://bugs.launchpad.net/ubuntu/+source/seahorse/+bug/1577198

Tuning MySQL: 3 Simple Tweaks
============================================================

If you don’t change the default MySQL configuration, your server is going to perform like a Ferrari that’s stuck in first gear…

![](https://cdn-images-1.medium.com/max/1000/1*b7M28XbrOc4FF3tJP-vvyg.png)

I don’t claim to be an expert DBA, but I am a strong proponent of the 80/20 principle, and when it comes to tuning MySQL, it’s fair to say you can squeeze out 80% of the juice by making a few simple adjustments to your configuration. This is useful, especially when server resources are getting cheaper all the time.

#### Health warnings:

1. No two databases or applications are the same. This is written with the ‘typical’ website owner in mind, where fast queries, a nice user experience, and being able to handle lots of traffic are your priorities.

2. Back up your database before you do this!

### 1\. Use the InnoDB storage engine

If you’re using the MyISAM storage engine, then it’s time to move to InnoDB. There are many reasons why it’s superior, but if performance is what you’re after, it comes down to how each utilises physical memory:

* MyISAM: only stores indexes in memory.

* InnoDB: stores indexes _and_ data in memory.

Bottom line: stuff stored in memory is accessible much faster than stuff stored on disk.

Here’s how you convert your tables:

```
ALTER TABLE table_name ENGINE=InnoDB;
```

_Note: You have created all of the appropriate indexes, right? That should always be your first priority for better performance._
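
If you have many MyISAM tables, typing that statement for each one gets old quickly. A small helper can generate the statements for you. This is my own sketch, not part of MySQL; in practice you would fetch the list of MyISAM table names from `information_schema.TABLES` rather than hard-code it:

```python
def innodb_conversion_statements(table_names):
    """Build one ALTER TABLE ... ENGINE=InnoDB statement per table.
    Backticks guard against reserved words in table names."""
    return [f"ALTER TABLE `{name}` ENGINE=InnoDB;" for name in table_names]

# Example: statements you could paste into the mysql client.
for stmt in innodb_conversion_statements(["users", "orders"]):
    print(stmt)
```

The table names `users` and `orders` are placeholders for the example; substitute the MyISAM tables from your own schema.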

### 2\. Let InnoDB use all that memory

You can edit your MySQL configuration in your _my.cnf_ file. The amount of memory that InnoDB is allowed to use on your server is configured with the `innodb_buffer_pool_size` parameter.

The accepted rule of thumb for this (for servers _only_ tasked with running MySQL) is to set it to 80% of your server’s physical memory. You want to maximise the use of the RAM, but leave enough for the OS to run without needing to use the swap.

So, if your server has 32 GB of memory, set it to roughly 25 GB:

```
innodb_buffer_pool_size = 25600M
```

_Notes: (1) If your server is small and this number comes in at less than 1 GB, you ought to upgrade to a box with more memory for the rest of this article to be applicable. (2) If you have a huge server, e.g. 200 GB of memory, then use common sense — you don’t need to leave 40 GB of free memory for the OS._

### 3\. Let InnoDB multitask

For servers where `innodb_buffer_pool_size` is greater than 1 GB, `innodb_buffer_pool_instances` divides the InnoDB buffer pool into this many instances.

The benefit of having more than one buffer pool is:

> You might encounter bottlenecks from multiple threads trying to access the buffer pool at once. You can enable multiple buffer pools to minimize this contention.

The official recommendation for the number of buffers is:

> For best efficiency, specify a combination of innodb_buffer_pool_instances and innodb_buffer_pool_size so that each buffer pool instance is at least 1 gigabyte.

So in our example of a 32 GB server with a 25 GB `innodb_buffer_pool_size`, a suitable choice is 24 instances, giving 25600M / 24 ≈ 1.07 GB per instance:

```
innodb_buffer_pool_instances = 24
```
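
The arithmetic in sections 2 and 3 is easy to script. The helper below is my own sketch, not an official tool: it applies the 80% rule and then picks an instance count so that each instance stays at or above 1 GB. Note that the article rounds its 32 GB example down to 25600M and 24 instances, while this helper just reports the raw figures:

```python
def innodb_settings(server_ram_gb):
    """Suggest (innodb_buffer_pool_size in MB, instance count).
    Rule of thumb from the article: pool = 80% of RAM on a dedicated
    MySQL box; each buffer pool instance should be >= 1 GB."""
    pool_mb = int(server_ram_gb * 1024 * 0.8)
    # As many instances as fit at >= 1024 MB each, at least one.
    instances = max(1, pool_mb // 1024)
    # MySQL caps innodb_buffer_pool_instances at 64.
    instances = min(instances, 64)
    return pool_mb, instances
```

For a 32 GB server this suggests a pool of 26214 MB split across 25 instances, each just over 1 GB; round to taste when writing _my.cnf_.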

### Don’t forget!

After making changes to _my.cnf_, you’ll need to restart MySQL:

```
sudo service mysql restart
```

* * *

There are far more scientific ways to optimise these parameters, but using this as a general guide will get you a long way towards a better-performing MySQL server.

--------------------------------------------------------------------------------

作者简介:

I like tech businesses & fast cars | Group CTO @ Parcel Monkey, Cloud Fulfilment & Kong.

------

via: https://medium.com/@richb_/tuning-mysql-3-simple-tweaks-6356768f9b90

作者:[Rich Barrett][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://medium.com/@richb_

Which Linux Kernel Version Is ‘Stable’?
============================================================

![Linux kernel](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/apple1.jpg?itok=PGRxOQz_ "Linux kernel")

Konstantin Ryabitsev explains which Linux kernel versions are considered "stable" and how to choose what's right for you. [Creative Commons Zero][1]

Almost every time Linus Torvalds releases [a new mainline Linux kernel][4], there's inevitable confusion about which kernel is the "stable" one now. Is it the brand new X.Y one, or the previous X.Y-1.Z one? Is the brand new kernel too new? Should you stick to the previous release?

The [kernel.org][5] page doesn't really help clear up this confusion. Currently, right at the top of the page, we see that 4.15 is the latest stable kernel -- but then in the table below, 4.14.16 is listed as "stable," and 4.15 as "mainline." Frustrating, eh?
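
The version string's shape alone tells you which tree a release came from: mainline releases are numbered X.Y (or X.Y-rcN while still in development), while releases from the stable tree carry a third component, X.Y.Z. A small sketch of that rule (my own toy classifier, not a kernel.org tool):

```python
import re

def kernel_release_kind(version):
    """Classify a kernel version string by its shape:
    '4.15-rc9' -> release candidate (mainline, still in development)
    '4.15'     -> mainline release
    '4.14.16'  -> release from the stable (backported-fixes) tree"""
    if re.fullmatch(r"\d+\.\d+-rc\d+", version):
        return "release candidate"
    if re.fullmatch(r"\d+\.\d+", version):
        return "mainline"
    if re.fullmatch(r"\d+\.\d+\.\d+", version):
        return "stable"
    raise ValueError(f"unrecognized version string: {version}")
```

So in the kernel.org table above, 4.15 is a mainline release and 4.14.16 is a point release from the stable tree.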
|
||||
|
||||
Unfortunately, there are no easy answers. We use the word "stable" for two different things here: as the name of the Git tree where the release originated, and as indicator of whether the kernel should be considered “stable” as in “production-ready.”
|
||||
|
||||
Due to the distributed nature of Git, Linux development happens in a number of [various forked repositories][6]. All bug fixes and new features are first collected and prepared by subsystem maintainers and then submitted to Linus Torvalds for inclusion into [his own Linux tree][7], which is considered the “master” Git repository. We call this the “mainline” Linux tree.
|
||||
|
||||
### Release Candidates
|
||||
|
||||
Before each new kernel version is released, it goes through several “release candidate” cycles, which are used by developers to test and polish all the cool new features. Based on the feedback he receives during this cycle, Linus decides whether the final version is ready to go yet or not. Usually, there are 7 weekly pre-releases, but that number routinely goes up to -rc8, and sometimes even up to -rc9 and above. When Linus is convinced that the new kernel is ready to go, he makes the final release, and we call this release “stable” to indicate that it’s not a “release candidate.”
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
As any kind of complex software written by imperfect human beings, each new version of the Linux kernel contains bugs, and those bugs require fixing. The rule for bug fixes in the Linux Kernel is very straightforward: all fixes must first go into Linus’s tree. Once the bug is fixed in the mainline repository, it may then be applied to previously released kernels that are still maintained by the Kernel development community. All fixes backported to stable releases must meet a [set of important criteria][8] before they are considered -- and one of them is that they “must already exist in Linus’s tree.” There is a [separate Git repository][9] used for the purpose of maintaining backported bug fixes, and it is called the “stable” tree -- because it is used to track previously released stable kernels. It is maintained and curated by Greg Kroah-Hartman.
|
||||
|
||||
### Latest Stable Kernel
|
||||
|
||||
So, whenever you visit kernel.org looking for the latest stable kernel, you should use the version that is in the Big Yellow Button that says “Latest Stable Kernel.”
|
||||
|
||||
![sWnmAYf0BgxjGdAHshK61CE9GdQQCPBkmSF9MG8s](https://lh6.googleusercontent.com/sWnmAYf0BgxjGdAHshK61CE9GdQQCPBkmSF9MG8sYqZsmL6e0h8AiyJwqtWYC-MoxWpRWHpdIEpKji0hJ5xxeYshK9QkbTfubFb2TFaMeFNmtJ5ypQNt8lAHC2zniEEe8O4v7MZh)
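As an aside, kernel.org also publishes release information in machine-readable form (`releases.json`). The following is a rough sketch of pulling a `latest_stable` version out of such JSON with plain shell tools; the inline JSON here is a simplified, hypothetical sample of the structure, and the real data would come from `curl`:

```shell
# Simplified, hypothetical sample of the releases.json structure;
# fetch the real thing with: curl -s https://www.kernel.org/releases.json
json='{"latest_stable": {"version": "4.15"}}'

# Extract the version string with sed (a JSON-aware tool like jq is nicer in practice)
latest=$(echo "$json" | sed -n 's/.*"latest_stable": {"version": "\([^"]*\)".*/\1/p')
echo "Latest stable kernel: $latest"
```

With `jq` installed, the equivalent extraction would be something like `jq -r .latest_stable.version`.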
Ah, but now you may wonder -- if both 4.15 and 4.14.16 are stable, then which one is more stable? Some people avoid using ".0" releases of the kernel because they think a particular version is not stable enough until there is at least a ".1". It's hard to either prove or disprove this, and there are pro and con arguments for both, so it's pretty much up to you to decide which you prefer.

On the one hand, anything that goes into a stable tree release must first be accepted into the mainline kernel and then backported. This means that mainline kernels will always have fresher bug fixes than what is released in the stable tree, and therefore you should always use mainline ".0" releases if you want the fewest "known bugs."

On the other hand, mainline is where all the cool new features are added -- and new features bring with them an unknown quantity of "new bugs" that are not in the older stable releases. Whether new, unknown bugs are more worrisome than older, known, but as yet unfixed bugs -- well, that is entirely your call. However, it is worth pointing out that many bug fixes are only thoroughly tested against mainline kernels. When patches are backported into older kernels, chances are they will work just fine, but there are fewer integration tests performed against older stable releases. More often than not, it is assumed that "previous stable" is close enough to current mainline that things will likely "just work." And they usually do, of course, but this yet again shows how hard it is to say "which kernel is actually more stable."

So, basically, there is no quantitative or qualitative metric we can use to definitively say which kernel is more stable -- 4.15 or 4.14.16. The most we can do is to unhelpfully state that they are "differently stable."

_Learn more about Linux through the free ["Introduction to Linux"][3] course from The Linux Foundation and edX._

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/2018/2/which-linux-kernel-version-stable

作者:[KONSTANTIN RYABITSEV][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/mricon
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/apple1jpg
[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[4]:https://www.linux.com/blog/intro-to-linux/2018/1/linux-kernel-415-unusual-release-cycle
[5]:https://www.kernel.org/
[6]:https://git.kernel.org/pub/scm/linux/kernel/git/
[7]:https://git.kernel.org/torvalds/c/v4.15
[8]:https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
[9]:https://git.kernel.org/stable/linux-stable/c/v4.14.16
@ -1,150 +0,0 @@

为什么 Kubernetes 很酷
============================================================

在我刚开始学习 Kubernetes(大约是一年半以前吧?)时,我真的不明白为什么应该去关注它。

在我使用 Kubernetes 全职工作了三个多月后,我才逐渐明白了为什么我应该使用它。(我距离成为一个 Kubernetes 专家还很远!)希望这篇文章对你理解 Kubernetes 能做什么会有帮助!

我将尝试去解释我对 Kubernetes 感兴趣的一些原因,而不去使用“原生云(cloud native)”、“编排系统(orchestration)”、“容器(container)”,或者任何 Kubernetes 专用的术语 :)。我解释的这些观点主要来自一位 Kubernetes 操作者/基础设施工程师,因为,我现在的工作就是去配置 Kubernetes 和让它工作得更好。

我不会去尝试解决一些如“你应该在你的生产系统中使用 Kubernetes 吗?”这样的问题。那是非常复杂的问题。(不仅是因为“生产系统”根据你的用途而总是有不同的要求)

### Kubernetes 可以让你无需设置一台新的服务器即可在生产系统中运行代码

我第一次被说服使用 Kubernetes,是源于和我的伙伴 Kamal 的下面这段谈话:

大致是这样的:

* Kamal: 使用 Kubernetes 你可以通过一条命令就能设置一台新的服务器。
* Julia: 我觉得不太可能吧。
* Kamal: 像这样,你写一个配置文件,然后应用它,这时候,你就在生产系统中运行了一个 HTTP 服务。
* Julia: 但是,现在我需要去创建一个新的 AWS 实例,明确地写一个 Puppet 清单,设置服务发现,配置负载均衡,配置我们的部署软件,并且确保 DNS 正常工作,如果没有什么问题的话,至少在 4 小时后才能投入使用。
* Kamal: 是的,使用 Kubernetes 你不需要做那么多事情,你可以在 5 分钟内设置一台新的 HTTP 服务,并且它将自动运行。只要你的集群中有空闲的资源它就能正常工作!
* Julia: 这里面一定有“坑”。

这里确实有一些陷阱,设置一个生产用 Kubernetes 集群(以我的经验)确实并不容易。(查看 [Kubernetes 艰难之旅][3] 了解开始使用时有哪些复杂的东西)但是,我们现在并不深入讨论它。

因此,Kubernetes 第一个很酷的事情是,它可能使那些想在生产系统中部署新开发的软件的方式变得更容易。那是很酷的事,而且它真的是这样,因此,一旦你有了一个运作中的 Kubernetes 集群,你真的可以仅使用一个配置文件就在生产系统中设置一台 HTTP 服务(在 5 分钟内运行这个应用程序,设置一个负载均衡,给它一个 DNS 名字,等等)。看起来真的很有趣。
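“一个配置文件就是一台 HTTP 服务”大概是什么样子?下面用一个最小化的示意来勾勒(其中的名字 `hello-http`、镜像 `nginx` 等都是为演示而假设的);真实使用时,把这段 YAML 存成文件后用 `kubectl apply -f` 提交给集群即可:

```shell
# 打印一个(假设的)最小化 Kubernetes 配置:一个 Deployment 加一个负载均衡 Service
# 真实使用:把下面的 YAML 存成文件后执行 kubectl apply -f <文件名>
manifest=$(cat <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-http        # 假设的服务名
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-http
  template:
    metadata:
      labels:
        app: hello-http
    spec:
      containers:
      - name: web
        image: nginx      # 假设的镜像
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-http
spec:
  type: LoadBalancer      # 由云厂商分配负载均衡入口
  selector:
    app: hello-http
  ports:
  - port: 80
EOF
)
echo "$manifest"
```

这份配置描述了“运行 2 个副本的 HTTP 服务,并给它一个负载均衡入口”,其余的事情(调度到哪台机器、挂掉后重启)由集群自己处理。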
### 对于运行在生产系统中的代码,Kubernetes 可以提供更好的可见性和可管理性

在我看来,在理解 etcd 之前,你可能不会真正理解 Kubernetes。因此,让我们先讨论 etcd!

想像一下,如果现在我这样问你,“告诉我你运行在生产系统中的每个应用程序,它运行在哪台主机上?它是否状态很好?是否为它分配了一个 DNS 名字?”我并不知道这些,但是,我可能需要到很多不同的地方去查询来回答这些问题,并且,我需要花很长的时间才能搞定。我现在可以很确定地说不需要查询,仅一个 API 就可以搞定它们。

在 Kubernetes 中,你的集群的所有状态 – 运行中的应用程序(“pod”)、节点、DNS 名字、cron 任务,等等 – 都保存在一个单一的数据库中(etcd)。每个 Kubernetes 组件是无状态的,并且基本上是按下面的方式工作的:

* 从 etcd 中读取状态(比如,“分配给节点 1 的 pod 列表”)
* 产生变化(比如,“在节点 1 上运行 pod A”)
* 更新 etcd 中的状态(比如,“设置 pod A 的状态为 ‘running’”)

这意味着,如果你想去回答诸如“在那个可用区域中有多少台运行 nginx 的 pod?”这样的问题时,你可以通过查询一个统一的 API(Kubernetes API)来回答它。而且,你拥有与其它所有 Kubernetes 组件完全相同的 API 访问能力。
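“一次查询、统一回答”的用法大致如下:先从统一的 API 拿到 pod 及其所在位置,再按位置聚合计数。下面的脚本是一个示意,其中的样例数据是为演示而假设的;真实环境中数据来自注释里的 kubectl 命令:

```shell
# 真实环境中可用类似命令获取数据(需要已配置好 kubectl):
#   kubectl get pods -o wide --all-namespaces | grep nginx
# 这里用一段假设的“pod名字 所在节点”样例数据代替:
sample='nginx-1 node-a
nginx-2 node-a
nginx-3 node-b'

# 按第二列(节点/可用区)统计 nginx pod 的数量
result=$(echo "$sample" | awk '{count[$2]++} END {for (n in count) print n, count[n]}' | sort)
echo "$result"
```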
这也意味着,你可以很容易地去管理每个运行在 Kubernetes 中的任何东西。如果你想这样做,你可以:

* 为部署实现一个复杂的定制的部署策略(部署一个东西,等待 2 分钟,再部署 5 个,等待 3.7 分钟,等等)
* 每当推送一个分支到 github 时,自动化地[启动一个新的 web 服务器][1]
* 监视所有你运行的应用程序,确保它们都有一个合理的内存使用限制

要做这些事情,你需要做的只是写一个与 Kubernetes API 对话的程序(这样的程序被称为“控制器(controller)”)。

关于 Kubernetes API 的另一个令人激动的事情是,你不会被局限于 Kubernetes 提供的现有功能!如果对于你想去部署/创建/监视的软件有你自己的想法,那么,你可以使用 Kubernetes API 写一些代码去达到你的目的!它可以让你做到你想做的任何事情。
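控制器的核心模式就是“对比期望状态与实际状态,算出需要执行的动作”。下面用一小段 shell 勾勒这个对账(reconcile)思路,其中两组 pod 名字都是假设的;真实控制器会从 Kubernetes API 读取这两组状态,并调用 API 去创建/删除对象:

```shell
# 期望状态与实际状态(均为假设数据;真实控制器从 Kubernetes API 读取)
desired="a b c"   # 期望存在的 pod
actual="a c d"    # 实际存在的 pod

actions=$(
  # 期望有而实际没有的,需要创建(真实控制器在这里调用 API 创建)
  for p in $desired; do
    case " $actual " in *" $p "*) ;; *) echo "create $p" ;; esac
  done
  # 实际有而期望没有的,需要删除(真实控制器在这里调用 API 删除)
  for p in $actual; do
    case " $desired " in *" $p "*) ;; *) echo "delete $p" ;; esac
  done
)
echo "$actions"   # 打印需要执行的动作列表
```

把这个循环不断地重复执行,系统就会持续向期望状态收敛,这也是 Kubernetes 各个控制器的基本工作方式。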
### 如果每个 Kubernetes 组件都“挂了”,你的代码仍将保持运行

关于 Kubernetes,我(通过各种博客文章 :))被承诺过的一件事情是:“如果 Kubernetes API 服务和其它组件‘挂了’,你的代码将一直保持运行状态”。从理论上说,这是它第二件很酷的事情,但是,起初我并不确定它是否真是这样的。

到目前为止,这似乎是真的!

我曾经在集群运行时停掉过 etcd,发生的事情是:

1. 所有的代码继续保持运行状态
2. 不能做 _新的_ 事情(你不能部署新的代码或者产生变更,cron 作业将停止工作)
3. 当它恢复时,集群将赶上这期间它错过的内容

这也意味着,如果 etcd 宕掉,并且你的应用程序的其中之一崩溃或者发生其它事情,在 etcd 恢复之前,它并不能被重新拉起。

### Kubernetes 的设计对 bug 很有弹性

与任何软件一样,Kubernetes 也有 bug。例如,到目前为止,我们的集群的控制器管理器(controller manager)有内存泄漏,并且,调度器经常崩溃。bug 当然不好,但是,我发现 Kubernetes 的设计可以帮助减轻它核心组件中许多 bug 的影响。

如果你重启任何组件,将发生:

* 它从 etcd 中读取所有与它相关的状态
* 基于那些状态,它开始去做它认为必须要做的事情(调度 pod、对 pod 做垃圾回收、调度 cronjob、按需部署,等等)

因为所有的组件都不在内存中保持状态,你在任何时候都可以重启它们,这对缓解各种 bug 很有帮助。

例如,假如说,你的控制器管理器有内存泄露。因为它是无状态的,你可以每小时定期重启它,或者在感觉可能导致任何不一致的问题发生时重启它。又或者,我们在运行的调度器中遇到过一个 bug,它有时仅仅是忘记了某些 pod,从来没有调度它们。你可以每隔 10 分钟重启调度器来缓解这种情况。(我们并不这么做,而是去修复了这个 bug,但是,你_可以_这么做 :))

因此,我觉得即使在它的核心组件中有 bug,我仍然可以信任 Kubernetes 的设计能帮助我确保集群状态的一致性。并且,总的来说,随着时间的推移软件将会提高。你需要去运维的唯一有状态的东西是 etcd。

不想过多地讨论“状态”这个东西 – 但是,我认为 Kubernetes 中很酷的一件事情是,唯一需要去做备份/恢复计划的东西是 etcd(除非你为 pod 使用了持久化存储的卷)。我认为这使得 Kubernetes 的运维在思考上更容易一些。

### 在 Kubernetes 之上实现新的分布式系统是非常容易的

假设你想去实现一个分布式 cron 作业调度系统!从零开始做,工作量非常大。但是,在 Kubernetes 之上实现一个分布式 cron 作业调度系统则容易得多!(它仍然是一个分布式系统)

我第一次读到 Kubernetes 的 cronjob 控制器的代码时,它是如此简单,我真的特别高兴。主要的逻辑大约只有 400 行,去读它吧! => [cronjob_controller.go][4] <=

从本质上来看,cronjob 控制器做的是:

* 每 10 秒钟:
  * 列出所有已存在的 cronjob
  * 检查是否有需要现在去运行的任务
  * 如果有,创建一个新的作业对象,由其它的 Kubernetes 控制器去调度并真正地运行它
  * 清理已完成的作业
  * 重复以上工作
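上面这个循环里最核心的判断就是“现在是否该运行了”。下面用一小段 shell 勾勒这个到期判断的思路(函数名 `should_run` 及其参数都是为演示而假设的);真实控制器里,“创建作业”对应的是调用 Kubernetes API,较新版本的 kubectl 也提供了类似 `kubectl create job --from=cronjob/<名字>` 的手动触发方式:

```shell
# 假设的到期判断:给定上次运行时刻(epoch 秒)、运行间隔(秒)和当前时刻,
# 判断是否应该再次创建作业
should_run() {
  local last=$1 interval=$2 now=$3
  [ $(( now - last )) -ge "$interval" ]
}

# 控制器循环的骨架大致是:
#   while true; do 对每个 cronjob 调用 should_run,到期则创建作业; sleep 10; done
should_run 100 60 170 && echo "run"    # 已过 70 秒 >= 间隔 60 秒,应该运行
should_run 100 60 150 || echo "skip"   # 只过了 50 秒,还不到期
```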
Kubernetes 模型是很受约束的(它有定义在 etcd 中的资源模式,控制器读取这些资源并更新 etcd),我认为正是这种固有的/受约束的模型,使得在 Kubernetes 框架之上开发你自己的分布式系统更容易。

Kamal 给我介绍了“Kubernetes 是一个编写你自己的分布式系统的很好的平台”这一想法,而不仅仅是“Kubernetes 是一个你可以使用的分布式系统”,我对此真的很感兴趣。他做了一个[为你推送到 github 的每个分支运行一个 HTTP 服务的系统][5]的原型。他只花了一个周末,写了大约 800 行代码,我觉得它真的很不错!

### Kubernetes 可以使你做一些非常神奇的事情(但并不容易)

我一开始就说“Kubernetes 可以让你做一些很神奇的事情,你可以用一个配置文件来搞定这么多的基础设施,它太神奇了”,而且这是真的!

为什么说“Kubernetes 并不容易”呢?是因为要成功地运营一个高可用的 Kubernetes 集群,需要学习很多东西、做很多工作。我发现它引入了许多抽象概念,为了调试问题和正确地配置它们,我需要去理解这些抽象。我喜欢学习新东西,因此,这并不会使我发狂或者生气,我只是觉得理解它很重要 :)

关于“我不能仅依靠抽象概念”的一个具体例子是,我不得不学习比我以往所学多得多的 [Linux 网络知识][6],才能对配置 Kubernetes 网络有信心。这种方式很有意思,但是非常费时间。以后我也许会写更多关于配置 Kubernetes 网络的困难的/有趣的事情。

又或者,为了能够成功地设置我的 Kubernetes CA,我写了一篇 [2000 字的博客文章][7],来梳理学习证书相关各种选项的过程。

我觉得,像 GKE(google 的托管 Kubernetes 产品)这样的托管 Kubernetes 系统可能更简单,因为它们为你做了许多决定,但是,我没有尝试过它们。

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2017/10/05/reasons-kubernetes-is-cool/

作者:[Julia Evans][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://jvns.ca/about
[1]:https://github.com/kamalmarhubi/kubereview
[2]:https://jvns.ca/categories/kubernetes
[3]:https://github.com/kelseyhightower/kubernetes-the-hard-way
[4]:https://github.com/kubernetes/kubernetes/blob/e4551d50e57c089aab6f67333412d3ca64bc09ae/pkg/controller/cronjob/cronjob_controller.go
[5]:https://github.com/kamalmarhubi/kubereview
[6]:https://jvns.ca/blog/2016/12/22/container-networking/
[7]:https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/
@ -1,108 +0,0 @@

创建局域网内的离线 YUM 仓库
======

在早先的教程中,我们讨论了 **[如何使用 ISO 镜像和镜像在线 yum 仓库的方式来创建自己的 yum 仓库][1]**。创建自己的 yum 仓库是一个不错的想法,但若网络中只有 2-3 台 Linux 机器那就没啥必要了。不过若你的网络中有大量的 Linux 服务器,而且这些服务器还需要定时进行升级,或者你有大量服务器无法直接访问因特网,那么创建自己的 yum 仓库就很有必要了。

当我们有大量的 Linux 服务器,而每个服务器都直接从因特网上升级系统时,数据消耗会很可观。为了节省数据量,我们可以创建一个离线 yum 仓库并将之分享到本地网络中。这样,网络中的其他 Linux 机器就可以直接从本地仓库获取系统更新,既节省数据量,传输速度也会很好。

我们可以使用下面两种方法来分享 yum 仓库:

* **使用 Web 服务器(Apache)**
* **使用 FTP 服务器(VSFTPD)**

在开始讲解这两个方法之前,我们需要先根据之前的教程创建一个 YUM 仓库(**[看这里][1]**)。

## 使用 Web 服务器

首先在 yum 服务器上安装 Web 服务器(Apache),我们假设服务器 IP 是 **192.168.1.100**。我们已经在这台系统上配置好了 yum 仓库,现在我们来使用 yum 命令安装 apache web 服务器,

```
$ yum install httpd
```

下一步,拷贝所有的 rpm 包到默认的 apache 根目录下,即 **/var/www/html**。由于我们已经将包都拷贝到了 **/YUM** 下,我们也可以创建一个软链接,让 /var/www/html 中的目录指向 /YUM:

```
$ ln -s /YUM /var/www/html/CentOS
```

重启 web 服务器应用变更:

```
$ systemctl restart httpd
```

### 配置客户端机器

服务端的配置就完成了,现在需要配置下客户端,让它从我们创建的离线仓库中获取升级包,这里假设客户端 IP 为 **192.168.1.101**。

在 `/etc/yum.repos.d` 目录中创建 `offline-yum.repo` 文件,输入如下信息,

```
$ vi /etc/yum.repos.d/offline-yum.repo
```

```
[offline-yum]
name=Local YUM
baseurl=http://192.168.1.100/CentOS/7
gpgcheck=0
enabled=1
```

客户端也配置完了。试一下用 yum 来安装/升级软件包,以确认仓库是正常工作的。
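在真实客户端上,可以执行 `yum clean all && yum repolist` 来确认仓库已生效。在那之前,还可以先用一小段 shell 快速检查 repo 文件是否包含必需的字段,下面是一个示意(repo 内容即上文假设的配置):

```shell
# 上文假设的 repo 文件内容
repo='[offline-yum]
name=Local YUM
baseurl=http://192.168.1.100/CentOS/7
gpgcheck=0
enabled=1'

# 逐一确认必需字段都存在(真实验证请在客户端上运行 yum repolist)
check=$(for key in name baseurl enabled; do
  echo "$repo" | grep -q "^$key=" && echo "$key ok"
done)
echo "$check"
```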
## 使用 FTP 服务器

要通过 FTP 分享 YUM 仓库,首先需要安装所需的软件包,即 vsftpd:

```
$ yum install vsftpd
```

vsftpd 的默认根目录为 `/var/ftp/pub`,因此可以拷贝 rpm 包到这个目录,或者为它创建一个软链接:

```
$ ln -s /YUM /var/ftp/pub/CentOS
```

重启服务应用变更:

```
$ systemctl restart vsftpd
```

### 配置客户端机器

像上面一样,在 `/etc/yum.repos.d` 中创建 **offline-yum.repo** 文件,并输入下面信息,

```
$ vi /etc/yum.repos.d/offline-yum.repo
```

```
[offline-yum]
name=Local YUM
baseurl=ftp://192.168.1.100/pub/CentOS/7
gpgcheck=0
enabled=1
```

现在客户机可以通过 ftp 接收升级了。要配置 vsftpd 服务器为其他 Linux 系统分享文件,请**[阅读这篇指南][2]**。

这两种方法都很不错,你可以任意选择其中一种。有任何疑问或想说的话,欢迎在下面留言框中留言。

--------------------------------------------------------------------------------

via: http://linuxtechlab.com/offline-yum-repository-for-lan/

作者:[Shusain][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/creating-yum-repository-iso-online-repo/
[2]:http://linuxtechlab.com/ftp-secure-installation-configuration/
@ -0,0 +1,123 @@

为初学者介绍 Linux whereis 命令(5 个例子)
======

有时,在使用命令行的时候,我们需要快速找到某一个命令的二进制文件所在位置。这种情况下可以选择 [find][1] 命令,但使用它会耗费时间,可能也会出现意料之外的情况。有一个专门为这种情况设计的命令:**whereis**。

在这篇文章里,我们会通过一些便于理解的例子来解释这一命令的基础内容。但在这之前,值得说明的一点是,下面出现的所有例子都在 Ubuntu 16.04 LTS 下测试过。

### Linux whereis 命令

whereis 命令可以帮助用户寻找某一命令的二进制文件、源码以及帮助页面。下面是它的格式:

```
whereis [options] [-BMS directory... -f] name...
```

这是这一命令的 man 页面给出的解释:

```
whereis 可以查找指定命令的二进制文件、源文件和帮助文件。被找到的文件在显示时,会去掉主路径名,然后再去掉文件的扩展名(如:.c),来源于源代码管理的 .s 前缀也会被去掉。接下来,whereis 会尝试在标准的 Linux 程序存放位置中寻找具体程序,也会在由 $PATH 和 $MANPATH 指定的路径中寻找。
```

下面这些以 Q&A 形式出现的例子,可以给你一个关于如何使用 whereis 命令的直观感受。

### Q1. 如何用 whereis 命令寻找二进制文件所在位置?

假设你想找,比如说,whereis 命令自己所在的位置。下面是你具体的操作:

```
whereis whereis
```

[![How to find location of binary file using whereis][2]][3]

需要注意的是,输出的第一个路径才是你想要的结果。使用 whereis 命令,也会同时显示帮助页面和源码所在路径(能找到的情况下才会显示;本例中没有找到源码)。所以你在输出中看见的第二个路径就是帮助页面文件所在位置。

### Q2. 如何指定只搜索二进制文件、帮助页面或源代码?

如果你想只搜索,假设说,二进制文件,你可以使用 **-b** 这一命令行选项。例如:

```
whereis -b cp
```

[![How to specifically search for binaries, manuals, or source code][4]][5]

类似的,**-m** 和 **-s** 这两个选项分别对应帮助页面和源码。

### Q3. 如何限制 whereis 命令的搜索位置?

默认情况下,whereis 是从系统的硬编码路径来寻找文件的,它会输出所有符合条件的结果。但如果你想的话,你可以用命令行选项来限制搜索范围。例如,如果你只想在 /usr/bin 寻找二进制文件,你可以用 **-B** 这一选项来实现。

```
whereis -B /usr/bin/ -f cp
```

**注意**:使用这种方式时可以给出多个路径。**-f** 选项用于结束目录列表,其后跟上要查找的文件名。

类似的,如果你想只限制帮助文件或源码的搜索路径,可以对应使用 **-M** 和 **-S** 这两个选项。

### Q4. 如何查看 whereis 的搜索路径?

与此对应也有一个选项。只要在 whereis 后加上 **-l** 即可。

```
whereis -l
```

这是例子的部分输出结果:

[![How to see paths that whereis uses for search][6]][7]

### Q5. 如何找到有异常条目的命令?

对于 whereis 命令来说,如果一个命令对每种显式请求的类型(二进制文件、帮助页面、源码)的条目不是恰好一个,则该命令被视为异常。例如,没有可用文档的命令,或者对应文档有多份的命令,都算作异常命令。当使用 **-u** 这一选项时,whereis 就会显示那些有异常条目的命令。

例如,下面这一命令就会显示,在当前目录中,没有对应文档或有多份文档的命令:

```
whereis -m -u *
```

### 总结

我同意,whereis 不是那种你需要经常使用的命令行工具。但在遇到某些特殊情况时,它绝对会让你的生活变得轻松。我们已经涉及了这一工具提供的一些重要命令行选项,所以要注意练习。想了解更多信息,直接去看它的 [man][8] 页面吧。

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/linux-whereis-command/

作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/tutorial/linux-find-command/
[2]:https://www.howtoforge.com/images/command-tutorial/whereis-basic-usage.png
[3]:https://www.howtoforge.com/images/command-tutorial/big/whereis-basic-usage.png
[4]:https://www.howtoforge.com/images/command-tutorial/whereis-b-option.png
[5]:https://www.howtoforge.com/images/command-tutorial/big/whereis-b-option.png
[6]:https://www.howtoforge.com/images/command-tutorial/whereis-l.png
[7]:https://www.howtoforge.com/images/command-tutorial/big/whereis-l.png
[8]:https://linux.die.net/man/1/whereis