Merge remote-tracking branch 'LCTT/master'
commit 2a9d598a79

@@ -0,0 +1,205 @@

Manage your workstation with Ansible: Automating configuration
======

> Learn how to make Ansible apply configuration to a fleet of desktops and laptops automatically.

Ansible is an amazing automation and configuration management tool. It is mainly used for servers and cloud deployments, but its use on workstations, whether desktops or laptops, receives far less attention, and that is what this series is about.

In [part one][1] of this series, I showed you basic usage of the `ansible-pull` command, and we created a playbook that installs a handful of packages. That playbook isn't terribly useful by itself, but it laid the groundwork for the automation to come.

In this article, we'll close the loop, and by the end we'll have a complete working solution for automating workstation configuration. This time, we'll set up our Ansible configuration so that future changes we make are deployed to our workstations automatically. At this point, I assume you've already worked through [part one][1]. If you haven't, do that first and then come back to this article. You should already have a GitHub repository containing the code from the first article; we'll build directly on what we created there.

First, we need to do some reorganization, because we're going to do more than just install packages. Right now, we have a playbook named `local.yml` with the following content:

```
- hosts: localhost
  become: true
  tasks:
    - name: Install packages
      apt: name={{item}}
      with_items:
        - htop
        - mc
        - tmux
```

That's fine if we only want to perform a single task, but as we keep adding things to our configuration, this file will become quite large and cluttered. It's better to organize our plays into individual files by type of configuration. To do that, we'll create what's called a taskbook, which is just like a playbook but with more streamlined content. Let's create a directory for our taskbooks inside our Git repository:

```
mkdir tasks
```

The code in the `local.yml` playbook lends itself well to becoming a taskbook for installing packages. Let's move this file into the `tasks` directory we just created and rename it:

```
mv local.yml tasks/packages.yml
```

Now we can edit `packages.yml` and trim it down considerably; in fact, we can strip out everything except the task itself. Let's edit `packages.yml` so it looks like this:

```
- name: Install packages
  apt: name={{item}}
  with_items:
    - htop
    - mc
    - tmux
```

As you can see, it uses the same syntax, but we removed everything that isn't necessary for the task itself. Now we have a dedicated taskbook for installing packages. However, we still need a file named `local.yml`, because `ansible-pull` still expects to find a file with that name. So we'll create a brand-new file at the root of the repository (not inside the `tasks` directory) with the following content:

```
- hosts: localhost
  become: true
  pre_tasks:
    - name: update repositories
      apt: update_cache=yes
      changed_when: False

  tasks:
    - include: tasks/packages.yml
```

This new `local.yml` acts as an index that imports our taskbooks. I've added a few things to this file that you haven't seen yet in this series. First, at the beginning of the file, I added `pre_tasks`, which tells Ansible to run certain tasks before all the others. In this case, we're telling Ansible to update our distribution's repository index, and this line does just that:

```
apt: update_cache=yes
```

Usually the `apt` module is used to install packages, but we can also tell it to update our repository index. The idea is that we want each of our plays to work with the latest index every time Ansible runs. This helps ensure we don't run into errors from trying to install a package against a stale index. Note that the `apt` module works only on Debian, Ubuntu, and their derivatives. If you're running a different distribution, you'll want to use a module specific to your distribution instead of `apt`. Check the Ansible documentation if you need to use a different module.
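
As a hedged illustration of that last point (my addition, not part of the original article), the equivalent pre-task on a Fedora-family system might look like this, assuming the `dnf` module and its `update_cache` option:

```
# Hypothetical equivalent of the apt pre-task for Fedora-family systems:
- name: update repositories
  dnf: update_cache=yes
  changed_when: False
```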

This line is also worth explaining further:

```
changed_when: False
```

This line on an individual task stops Ansible from reporting the result of the play as changed, even when it does cause a change on the system. In this case, we don't care whether the repository index contains new data; it almost always will, since repositories are always changing. We don't care about changes to the `apt` index, because such changes are routine. If we removed this line, we'd see the change reported in the summary at the end of every run, even if the only change was the repository update. It's better to ignore that kind of change.

Next comes the regular tasks section, where we import the taskbook we created. Every time we add another taskbook, we add another line here:

```
tasks:
  - include: tasks/packages.yml
```

If you run the `ansible-pull` command now, it should do essentially the same thing it did in the previous article. The difference is that we've improved our organization and can now expand it more efficiently. To save you having to look it up in the previous article, here's the `ansible-pull` syntax again:

```
sudo ansible-pull -U https://github.com/<github_user>/ansible.git
```

If you recall, the `ansible-pull` command pulls down a Git repository and applies the configuration it contains.

Now that our foundation is in place, we can expand our Ansible configuration and add features. Specifically, we'll add configuration to automate the deployment of changes to our workstations. To support that, the first thing we'll do is create a dedicated account to apply our Ansible configuration. This isn't strictly required; we could keep running the configuration under our own user accounts. But using a separate user isolates Ansible into a background system process that runs without our involvement.

We could create this user the usual way, but since we're using Ansible, we should avoid manual changes. Instead, we'll create a taskbook to handle user creation. For now, this taskbook will create just one user, but you can add additional plays to it to create more. I'll name this user `ansible`; you can name it whatever you like (just be sure to update every occurrence if you change it). Let's create a taskbook named `users.yml` and put the following code in it:

```
- name: create ansible user
  user: name=ansible uid=900
```
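
If you later want this taskbook to manage more accounts, one way to do it (a sketch with hypothetical user names and UIDs, not taken from the original article) is to loop over a list:

```
- name: create additional users
  user: name={{item.name}} uid={{item.uid}}
  with_items:
    - { name: 'backupuser', uid: 901 }
```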

Next, we need to edit our `local.yml` file to include this new taskbook, so it looks like this:

```
- hosts: localhost
  become: true
  pre_tasks:
    - name: update repositories
      apt: update_cache=yes
      changed_when: False

  tasks:
    - include: tasks/users.yml
    - include: tasks/packages.yml
```

Now when we run `ansible-pull`, a user named `ansible` will be created on the system. Note that I deliberately declared a user ID of 900 for this user with the `uid` parameter. This isn't required, but it's recommended to pin the UID down. UIDs below 1000 typically aren't shown on the login screen, which is great because there's no reason we'd ever need to log in to our desktop with the `ansible` account. UID 900 is arbitrary; it just needs to be any number below 1000 that isn't already in use. You can check whether UID 900 is already in use on your system with this command:

```
cat /etc/passwd | grep 900
```

Generally speaking, though, you shouldn't have a problem with this UID; I've never seen it used by default in any distribution I've tried.
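
Note that the `grep` above matches `900` anywhere in a line of `/etc/passwd`, not just in the UID field. A stricter check (my suggestion, not from the original article) is to query for the UID directly:

```
getent passwd 900
```

This prints an entry only if an account with UID 900 actually exists.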

Now that we have the `ansible` account for our automated configuration to use, we can create the actual cron job that automates things. Rather than putting it in the `users.yml` file we just created, we should separate it into its own file. Create a taskbook named `cron.yml` in the tasks directory and put the following code in it:

```
- name: install cron job (ansible-pull)
  cron: user="ansible" name="ansible provision" minute="*/10" job="/usr/bin/ansible-pull -o -U https://github.com/<github_user>/ansible.git > /dev/null"
```

The syntax of the `cron` module is fairly self-explanatory. With this play, we create a cron job that runs as the `ansible` user. The job executes every 10 minutes, and this is the command it runs:

```
/usr/bin/ansible-pull -o -U https://github.com/<github_user>/ansible.git > /dev/null
```
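
Once Ansible has applied this play, you should be able to confirm the result (a hedged check of my own, not from the original article) by listing the `ansible` user's crontab:

```
sudo crontab -u ansible -l
```

The output should contain an entry along the lines of `*/10 * * * * /usr/bin/ansible-pull -o -U https://github.com/<github_user>/ansible.git > /dev/null`.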

We can also use this file for any additional cron jobs we want deployed to all our workstations; we just add additional plays for the new jobs. However, adding a cron taskbook by itself isn't enough; we also need to add it to `local.yml` so it gets called. Add the following line to the end of the tasks section:

```
    - include: tasks/cron.yml
```

Now when `ansible-pull` runs, it will set up a new cron job that runs every 10 minutes as the `ansible` user. But running an Ansible job every 10 minutes isn't ideal, because it would consume a lot of CPU. Running Ansible every 10 minutes makes no sense unless something has actually changed in the Git repository.

However, we've already solved that problem. Notice the `-o` option I added to the `ansible-pull` command in the cron job, an option we haven't used before. It tells Ansible to run only if the repository has changed since the last time `ansible-pull` was called. If the repository hasn't changed, it does nothing. This way, you're not wasting CPU for no reason. Of course, some CPU is used when it pulls down the repository, but far less than it would take to apply the entire configuration again. When `ansible-pull` does run, it works through all the tasks in the playbook and taskbooks, but at least it isn't running for no purpose at all.

Although we've now added all the components needed to automate `ansible-pull`, it still won't actually work yet. The `ansible-pull` command needs to run with `sudo`, which allows it to perform system-level commands. But the `ansible` user we created isn't set up to execute commands with `sudo`, so when the cron job fires, it will fail. Normally, we'd use the `visudo` command to manually set the `ansible` user up with that access. But we should do things the Ansible way, and this is a great opportunity to show you how the `copy` module works. The `copy` module lets you copy a file from your repository to anywhere else in the filesystem. In our case, we'll copy a `sudo` configuration file into `/etc/sudoers.d/` so the `ansible` user can perform tasks with administrative privileges.

Open `users.yml` and add the following play to the end of the file:

```
- name: copy sudoers_ansible
  copy: src=files/sudoers_ansible dest=/etc/sudoers.d/ansible owner=root group=root mode=0440
```

As we can see, the `copy` module copies a file from our repository to anywhere we want. Here, we grab a file named `sudoers_ansible` (which we'll create in a moment) and copy it to `/etc/sudoers.d/ansible`, owned by `root`.

Next, we need to create the file we'll be copying. In the root of your repository, create a directory named `files`:

```
mkdir files
```

Then, in the `files` directory we just created, create a file named `sudoers_ansible` with the following content:

```
ansible ALL=(ALL) NOPASSWD: ALL
```

Creating a file in `/etc/sudoers.d`, as we're doing here, lets us configure `sudo` for a particular user. Here, we're giving the `ansible` user full access via `sudo` with no password prompt, which allows `ansible-pull` to run as a background task without us having to run it manually.
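
A broken sudoers file can lock you out of `sudo` entirely, so it's worth validating the syntax before the file is installed. One option (my own hardening suggestion, not part of the original article) is to have the `copy` module run `visudo` in check mode on the file before putting it in place:

```
- name: copy sudoers_ansible
  copy: src=files/sudoers_ansible dest=/etc/sudoers.d/ansible owner=root group=root mode=0440 validate='visudo -cf %s'
```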

Now you can run `ansible-pull` again to pull down the latest changes:

```
sudo ansible-pull -U https://github.com/<github_user>/ansible.git
```

From this point on, the `ansible-pull` cron job will run in the background every 10 minutes to check whether your repository has any changes and, if it does, run your playbook and apply your taskbooks.

So now we have a complete working solution. When you first set up a new laptop or desktop, you'll run the `ansible-pull` command manually, but only the first time. From then on, the `ansible` user takes over in the background. When you want to make a change to your machines, you simply pull down your Git repository, make the change, and push it back up. Then, the next time the cron job fires on each machine, it will pull down the changes and apply them. You make a change once, and all your workstations follow. This approach is a bit unconventional: normally, you'd have an inventory file listing your machines and rules describing which configuration belongs to which machine. But the `ansible-pull` approach, as described in this article, is a very efficient way of managing workstation configuration.

I've updated the code in my [GitHub repository][2] for this article, so feel free to browse it at any time to check your syntax against mine. I've also moved the code from the previous article into its own directory.

In [part three][3], we'll close out the series by using Ansible to configure GNOME desktop settings. I'll show you how to set your wallpaper and lock screen, apply a desktop theme, and more.

In the meantime, it's time for a little homework. Most of us have configuration files for the various applications we use, perhaps for Bash, Vim, or other tools. Try automating the deployment of those configuration files to your machines via the Ansible repository we've been working with. In this article, I showed you how to copy a file, so give it a try and see whether you can apply what you've learned.
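
For example, a minimal sketch of such a play, using the `copy` module shown earlier (the file name, destination path, and user name here are hypothetical, not from the article):

```
- name: copy bash configuration
  copy: src=files/bashrc dest=/home/yourname/.bashrc owner=yourname group=yourname mode=0644
```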

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/manage-your-workstation-configuration-ansible-part-2

Author: [Jay LaCroix][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [FelixYFZ](https://github.com/FelixYFZ)
Proofreader: [wxy](https://github.com/wxy)

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject), proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/jlacroix
[1]:https://linux.cn/article-10434-1.html
[2]:https://github.com/jlacroix82/ansible_article.git
[3]:https://opensource.com/article/18/5/manage-your-workstation-ansible-part-3
@@ -1,158 +0,0 @@

translated by lixinyuxx

5 guiding principles you should know before you design a microservice
======



One of the biggest challenges for teams starting off with microservices is adhering to the Goldilocks Principle: Not too big, not too small, and not too tightly coupled. Part of this challenge arises from confusion about what, exactly, constitutes a well-designed microservice.

Dozens of CTOs shared their experiences through interviews, and those conversations illuminated five characteristics of well-designed microservices. This article will help guide teams as they design microservices. (For more information, check out the upcoming book [Microservices for Startups][1]). This article will briefly touch on microservice boundaries and arbitrary "rules" to avoid before diving into the five characteristics to guide your design of microservices.

### Microservice boundaries

One of the [core benefits of developing new systems with microservices][2] is that the architecture allows developers to build and modify individual components independently—but problems can arise when it comes to minimizing the number of callbacks between each API. The solution, according to Chris McFadden, VP of engineering at [SparkPost][3], is to apply the appropriate service boundaries.

With respect to boundaries, in contrast to the sometimes difficult-to-grasp and abstract concept of domain-driven design (DDD)—a framework for microservices—this article focuses on practical principles for creating well-defined microservice boundaries with some of our industry's top CTOs.

### Avoid arbitrary "rules"

If you read enough advice about designing and creating a microservice, you're bound to come across some of the "rules" below. Although it's tempting to use them as guideposts for creating microservices, adherence to these arbitrary rules is not a principled way to determine thoughtful boundaries for microservices.

#### "A microservice should have X lines of code"

Let's get one thing straight: There are no limitations on how many lines of code there are in a microservice. A microservice doesn't suddenly become a monolith just because you write a few lines of extra code. The key is ensuring there is high cohesion for the code within a service (more on this later).

#### "Turn each function into a microservice"

If a function computes something based on three input values and returns a result, is it a good candidate for a microservice? Should it be a separately deployable application of its own? This really depends on what the function is and how it serves the entire system. Turning each function into a microservice simply might not make sense in your context.

Other arbitrary rules include those that don't take into account your entire context, such as the team's experience, DevOps capacity, what the service is doing, and availability needs of the data.

### 5 characteristics of a well-designed service

If you've read about microservices, you've no doubt come across advice on what makes a well-designed service. Simply put: high cohesion and loose coupling. There are [many][4] [articles][5] on these concepts to review if you're not familiar with them. And while they offer sound advice, these concepts are quite abstract. Below, based on conversations with experienced CTOs, are key characteristics to keep in mind when creating well-designed microservices.

#### #1: It doesn't share database tables with another service

In the early days of SparkPost, Chris McFadden and his team had to solve a problem that every SaaS business faces: They needed to provide basic services like authentication, account management, and billing.

To tackle this, they created two microservices: a Users API and an Accounts API. The Users API would handle user accounts, API keys, and authentication, while the Accounts API would handle all of the billing-related logic. A very logical separation—but before long, they spotted a problem.

"We had one service that was called the User API, and we had another one called the Account API. The problem was that they were actually having several calls back and forth between them. So you would do something in accounts and have a call and endpoint in users or vice versa," McFadden explained.

The two services were too tightly coupled.

When it comes to designing a microservice, it's a red flag if you have multiple services referencing the same table, as it likely means your DB is a source of coupling.

It is really about how the service relates to the data, which is exactly what Oleksiy Kovyrin, head of [Swiftype SRE, Elastic][6], told me. "One of the main foundational principles we use when developing new services is that they should not cross database boundaries. Each service should rely on its own set of underlying data stores. This allows us to centralize access controls, audit logging, caching logic, etc.," he said.

Kovyrin went on to explain that if a subset of your database tables "have no or very little connections to the rest of the dataset, it is a strong signal that component could be isolated into a separate API or a separate service."

Darby Frey, co-founder of [Lead Honestly][7], echoed this sentiment: "Each service should have its own tables [and] should never share database tables."

#### #2: It has a minimal amount of database tables

The ideal size of a microservice is small enough, but no smaller. And the same goes for the number of database tables per service.

Steven Czerwinski, head of engineering at [Scalyr][8], explained during an interview that the sweet spot for Scalyr is "one or two database tables for a service."

SparkPost's Chris McFadden agreed: "We have a suppression microservice, and it handles, keeps track of, millions and billions of entries around suppressions, but it's all very focused just around suppression, so there's really only one or two tables there. The same goes for other services like webhooks."

#### #3: It's thoughtfully stateful or stateless

When designing your microservice, you need to ask yourself whether it requires access to a database or whether it's going to be a stateless service processing terabytes of data like emails or logs.

Julien Lemoine, CTO of [Algolia][9], explained, "We define the boundaries of a service by defining its input and output. Sometimes a service is a network API, but it can also be a process consuming files and producing records in a database (this is the case of our log-processing service)."

Be clear about statefulness up front and it will lead to a better-designed service.

#### #4: Its data availability needs are accounted for

When designing a microservice, keep in mind what services will rely on this new service and the system-wide impact if that data becomes unavailable. Taking that into account allows you to properly design data backup and recovery systems for this service.

Steven Czerwinski mentioned that at Scalyr, critical customer row space mapping data is replicated and separated in different ways due to its importance.

In contrast, he added, "The per shard information, that's in its own little partition. It sucks if it goes down because that portion of the customer population is not going to have their logs available, but it's only impacting 5 percent of the customers rather than 100 percent of the customers."

#### #5: It's a single source of truth

Design a service to be the single source of truth for something in your system.

For example, when you order something from an e-commerce site, an order ID is generated. This order ID can be used by other services to query an order service for complete information about the order. Using the [publish/subscribe pattern][10], the data that is passed around between services should be the order ID, not the attributes/information of the order itself. Only the order service has complete information and is the single source of truth for a given order.
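
As a rough illustration of that flow (a minimal, self-contained sketch of my own; the class and topic names are hypothetical, not from the article):

```python
# Subscribers receive only the order ID and query the order service,
# the single source of truth, for the full order details.
class EventBus:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        for handler in self.handlers.get(topic, []):
            handler(event)

class OrderService:
    """Owns all order data; other services never read its tables directly."""
    def __init__(self):
        self.orders = {"42": {"id": "42", "items": ["book"], "total": 19.99}}

    def get_order(self, order_id):
        return self.orders[order_id]

bus = EventBus()
orders = OrderService()
bus.subscribe("order.created", lambda e: print(orders.get_order(e["order_id"])))
bus.publish("order.created", {"order_id": "42"})  # pass the ID, not the order
```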

### Considerations for larger teams

Keeping in mind the five considerations listed above, larger teams should be aware of the impacts of their organizational structure on microservice boundaries.

For larger organizations, where entire teams can be dedicated to owning a service, organizational consideration comes into play when determining service boundaries. And there are two factors to consider: **independent release schedule** and **different uptime importance**.

"The most successful implementation of microservices we've seen is either based on a software design principle like domain-driven design, for example, and service-oriented architecture, or the ones that reflect an organizational approach," said Khash Sajadi, CEO of [Cloud 66][11].

"So [for the] payments team," Sajadi continued, "they have the payment service or credit card validation service, and that's the service they provide to the outside world. So it's not necessarily anything about software. It's mostly about the business unit [that] provides one more service to the outside world."

### The two-pizza principle

Amazon is a perfect example of a large organization with multiple teams. As mentioned in an article published in [API Evangelist][12], Jeff Bezos issued a mandate to all employees informing them that every team within the company had to communicate via API. Anyone who didn't would be fired.

This way, all the data and functionality was exposed through the interface. Bezos also managed to get every team to decouple, define what their resources are, and make them available through the API. Amazon was building a system from the ground up. This allows every team within the company to become a partner of one another.

I spoke to Travis Reeder, CTO of [Iron.io][13], about Bezos' internal initiative.

"Jeff Bezos mandated that all teams had to build APIs to communicate with other teams," Reeder said. "He's also the guy who came up with the 'two-pizza' rule: A team shouldn't be larger than what two pizzas can feed.

"I think the same could apply here: Whatever a small team can develop, manage, and be productive with. If it starts to get unwieldy or starts to slow down, it's probably getting too big," Reeder told me.

### Final considerations: Is your service the right size and properly defined?

During the testing and implementation phase of your microservice system, there are indicators to keep in mind.

#### Indicator #1: Is there over-reliance between services?

If two services are constantly calling back to one another, then that's a strong indication of coupling and a signal that they might be better off combined into one service.

Going back to Chris McFadden's example, where he had two API services, accounts and users, that were constantly communicating with one another, McFadden came up with an idea to merge the services and decided to call it the Accuser's API. This turned out to be a fruitful strategy.

"What we started doing was eliminating these links [which were the] internal API calls between them," McFadden told me. "It's helped simplify the code."

#### Indicator #2: Does the overhead of setting up the service outweigh the benefit of having the service be independent?

Darby Frey explained, "Every app needs to have its logs aggregated somewhere and needs to be monitored. You need to set up alerting for it. You need to have standard operating procedures and run books for when things break. You have to manage SSH access to that thing. There's a huge foundation of things that have to exist in order for an app to just run."

### Key takeaways

Designing microservices can often feel more like an art than a science. For engineers, that may not sit well. There's lots of general advice out there, but at times it can be a bit too abstract. Let's recap the five specific characteristics to look for when designing your next set of microservices:

  1. It doesn't share database tables with another service
  2. It has a minimal amount of database tables
  3. It's thoughtfully stateful or stateless
  4. Its data availability needs are accounted for
  5. It's a single source of truth

Next time you're designing a set of microservices and determining service boundaries, referring back to these principles should make the task easier.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/4/guide-design-microservices

Author: [Jake Lumetta][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject), proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/jakelumetta
[1]:https://buttercms.com/books/microservices-for-startups/
[2]:https://buttercms.com/books/microservices-for-startups/should-you-always-start-with-a-monolith
[3]:https://www.sparkpost.com/
[4]:https://thebojan.ninja/2015/04/08/high-cohesion-loose-coupling/
[5]:https://en.wikipedia.org/wiki/Single_responsibility_principle
[6]:https://www.elastic.co/solutions/site-search
[7]:https://leadhonestly.com/
[8]:https://www.scalyr.com/
[9]:https://www.algolia.com/
[10]:https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern
[11]:https://www.cloud66.com/
[12]:https://apievangelist.com/2012/01/12/the-secret-to-amazons-success-internal-apis/
[13]:https://www.iron.io/
sources/talk/20190108 Hacking math education with Python.md (new file, 84 lines)
@@ -0,0 +1,84 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Hacking math education with Python)
[#]: via: (https://opensource.com/article/19/1/hacking-math)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

Hacking math education with Python
======
Teacher, programmer, and author Peter Farrell explains why teaching math with Python works better than the traditional approach.


Mathematics instruction has a bad reputation, especially with people (like me) who've had trouble with the traditional approach, which emphasizes rote memorization and theory that seems far removed from students' real world.

While teaching a student who was baffled by his math lessons, [Peter Farrell][1], a Python developer and mathematics teacher, decided to try using Python to teach the boy the math concepts he was having trouble learning.

Peter was inspired by the work of [Seymour Papert][2], the father of the Logo programming language, which lives on in Python's [Turtle module][3]. The Turtle metaphor hooked Peter on Python and on using it to teach math, much like [I was drawn to Python][4].

Peter shares his approach in his new book, [Math Adventures with Python][5]: An Illustrated Guide to Exploring Math with Code. I recently interviewed him to learn more about it.

**Don Watkins:** What is your background?

**Peter Farrell:** I was a math teacher for eight years, and I tutored math for 10 years after that. When I was a teacher, I read Papert's [Mindstorms][6] and was inspired to introduce all my math classes to Logo and Turtles.

**DW:** Why did you start using Python?

**PF:** I was working with a homeschooled boy on a very dry, textbook-driven math curriculum, which at the time seemed like a curse to me. But I found ways to sneak in the Logo Turtles, and he was a programming fan, so he liked that. Once we got into functions and real programming, he asked if we could continue in Python. I didn't know any Python, but it didn't seem that different from Logo, so I agreed. And I never looked back!

I was also looking for a 3D graphics package I could use to model a solar system and lead students through making planets move and get pulled by the force of attraction between the bodies, according to Newton's formula. Many graphics packages required programming in C or something hard, but I found an excellent package called Visual Python that was very easy to use. I used [VPython][7] for years after that.

So, I was introduced to Python in the context of working with a student on math. For some time after that, he was my programming tutor while I was his math tutor!

**DW:** What got you interested in math?

**PF:** I learned it the old-fashioned way: by hand, on paper and blackboards. I was good at manipulating symbols, so algebra was never a problem, and I liked drawing and graphing, so geometry and trig could be fun, too. I did some programming in BASIC and Fortran in college, but it never inspired me. Later on, programming inspired me greatly! I'm still tickled by the way programming makes easy work of the laborious stuff you have to do in math class, freeing you up to do the more fun parts: exploring, graphing, tweaking, and discovering.

**DW:** What inspired you to consider your Python approach to math?

**PF:** When I was teaching the homeschooled student, I was amazed at what we could do by writing a simple function and then calling it a bunch of times with different values using a loop. That would take half an hour by hand, but the computer spit it out instantly! Then we could look for patterns (which is what a math student should be doing), express the pattern as a function, and extend it further.
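
To illustrate the kind of function-plus-loop workflow he's describing (a toy example of mine, not one from the book):

```python
# Evaluate a function over a range of inputs and print a table,
# so patterns in the outputs become easy to spot.
def f(x):
    return x**2 + 3*x + 1

for x in range(10):
    print(x, f(x))
```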

**DW:** How does your approach to teaching help students—especially those who struggle with math? How does it make math more relevant?

**PF:** Students, especially high-schoolers, question the need to be doing all this calculating, graphing, and solving by hand in the 21st century, and I don't disagree with them. Learning to use Excel, for example, to crunch numbers should be seen as a basic necessity to work in an office. Learning to code, in any language, is becoming a very valuable skill to companies. So, there's a real-world appeal to me.

But the idea of making art with code can revolutionize math class. Just putting a shape on a screen requires math—the position (x-y coordinates), the dimensions, and even the color are all numbers. If you want something to move or change, you'll need to use variables, and not the "guess what x equals" kind of variable. You'll vary the position using a variable or, more efficiently, using a vector. [This makes] math topics like vectors and matrices seem like helpful tools you can use, rather than required information you'll never use.

Students who struggle with math might just be turned off to "school math," which is heavy on memorization and following rules and light on creativity and real applications. They might find they're actually good at math, just not the way it was taught in school. I've had parents see the cool graphics their kids have created with code and say, "I never knew that's what sines and cosines were used for!"

**DW:** How do you see your approach to math and programming encouraging STEM in schools?

**PF:** I love the idea of combining previously separated topics into an idea like STEM or STEAM! Unfortunately for us math folks, the "M" is very often neglected. I see lots of fun projects being done in STEM labs, even by very young children, and they're obviously getting an education in technology, engineering, and science. But I see precious little math material in the projects. STEM/[mechatronics][8] teacher extraordinaire Ken Hawthorn and I are creating projects to try to remedy that.

Hopefully, my book helps encourage students, girls and boys, to get creative with technology, real and virtual. There are a lot of beautiful graphics in the book, which I hope will inspire people to go through the coding adventure and make them. All the software I use ([Python Processing][9]) is available for free and can be easily installed, or is already installed, on the Raspberry Pi. Entry into the STEM world should not be cost-prohibitive to schools or individuals.

**DW:** What would you like to share with other math teachers?

**PF:** If the math establishment is really serious about teaching students the standards they have agreed upon, like numerical reasoning, logic, analysis, modeling, geometry, interpreting data, and so on, they're going to have to admit that coding can help with every single one of those goals. My approach was born, as I said before, from just trying to enrich a dry, traditional approach, and I think any teacher can do that. They just need somebody who can show them how to do everything they're already doing, just using code to automate the laborious stuff.

My graphics-heavy approach is made possible by the availability of free graphics software. Folks might need to be shown where to find these packages and how to get started. But a math teacher can soon be leading students through solving problems using 21st-century technology and visualizing progress or results and finding more patterns to pursue.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/1/hacking-math

Author: [Don Watkins][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject), proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://twitter.com/hackingmath
[2]: https://en.wikipedia.org/wiki/Seymour_Papert
[3]: https://en.wikipedia.org/wiki/Turtle_graphics
[4]: https://opensource.com/life/15/8/python-turtle-graphics
[5]: https://nostarch.com/mathadventures
[6]: https://en.wikipedia.org/wiki/Mindstorms_(book)
[7]: http://vpython.org/
[8]: https://en.wikipedia.org/wiki/Mechatronics
[9]: https://processing.org/
sources/talk/20190110 Toyota Motors and its Linux Journey.md (new file, 64 lines)
@@ -0,0 +1,64 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Toyota Motors and its Linux Journey)
[#]: via: (https://itsfoss.com/toyota-motors-linux-journey)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Toyota Motors and its Linux Journey
======

**This is a community submission from It’s FOSS reader Malcolm Dean.**

I spoke with Brian R Lyons of TMNA Toyota Motor Corp North America about the implementation of Linux in Toyota and Lexus infotainment systems. I came to find out that Automotive Grade Linux (AGL) is being used by several automobile manufacturers.

I put together a short article comprising my discussion with Brian about Toyota and its tryst with Linux. I hope Linux enthusiasts will like this quick little chat.

All [Toyota and Lexus vehicles are going to use Automotive Grade Linux][1] (AGL), primarily for the infotainment system. This is significant for Toyota Motor Corp because, as per Mr. Lyons, “As a technology leader, Toyota realized that adopting open source development methodology is the best way to keep up with the rapid pace of new technologies”.

Toyota, among other automotive companies, thought going with a Linux-based operating system might be cheaper and quicker when it comes to updates and upgrades, compared to using proprietary software.

Wow! Finally Linux in a vehicle. I use Linux every day on my desktop; what a great way to expand the use of this awesome software to a completely different industry.

I was curious when Toyota decided to use [Automotive Grade Linux][2] (AGL). According to Mr. Lyons, it goes back to 2011.

> “Toyota has been an active member and contributor to AGL since its launch more than five years ago, collaborating with other OEMs, Tier 1s and suppliers to develop a robust, Linux-based platform with increased security and capabilities”

![Toyota Infotainment][3]

In 2011, [Toyota joined the Linux Foundation][4] and started discussions about IVI (In-Vehicle Infotainment) software with other car OEMs and software companies. As a result, the Automotive Grade Linux working group was formed in the Linux Foundation in 2012.

What Toyota did at first in the AGL group was to take a “code first” approach, as is normal in open source domains, and then start the conversation about the initial direction by specifying requirement specifications, which had been discussed among car OEMs, IVI Tier 1 companies, software companies, and so on.

Toyota had already realized that sharing software code among Tier 1 companies was going to be essential by the time it joined the Linux Foundation. This was because maintaining such a huge codebase was very costly and no longer a point of differentiation for Tier 1 companies. Toyota and its Tier 1 suppliers wanted to spend more resources on new functions and new user experiences rather than maintaining conventional code all by themselves.

This is a huge thing, as automotive companies have come together to further their cooperation. Many companies have adopted this approach after finding proprietary software to be expensive.

Today, AGL is used for all Toyota and Lexus vehicles and in all markets where those vehicles are sold.

As someone who has sold cars for Lexus, I think this is a huge step forward. I and other sales associates had many customers who would come back to speak with a technology specialist to learn about the full capabilities of their infotainment system.

I see this as a huge step forward for the Linux community and its users. The operating system we use on a daily basis is being put to use right in front of us, albeit in a modified form, but it is there nonetheless.

Where does this lead? Hopefully a more user-friendly and less glitchy experience for consumers.

--------------------------------------------------------------------------------

via: https://itsfoss.com/toyota-motors-linux-journey

Author: [Abhishek Prakash][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject), proudly presented by [Linux中国](https://linux.cn/).

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.linuxfoundation.org/press-release/2018/01/automotive-grade-linux-hits-road-globally-toyota-amazon-alexa-joins-agl-support-voice-recognition/
[2]: https://www.automotivelinux.org/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/toyota-interiors.jpg?resize=800%2C450&ssl=1
[4]: https://www.linuxfoundation.org/press-release/2011/07/toyota-joins-linux-foundation/
sources/talk/20190114 Remote Working Survival Guide.md (new file, 130 lines)
@@ -0,0 +1,130 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Remote Working Survival Guide)
[#]: via: (https://www.jonobacon.com/2019/01/14/remote-working-survival/)
[#]: author: (Jono Bacon https://www.jonobacon.com/author/admin/)

Remote Working Survival Guide
======


Remote working seems to be all the buzz. Apparently, [70% of professionals work from home at least once a week][1]. Similarly, [77% of people work more productively][2] and [68% of millennials would consider a company more if they offered remote working][3]. It seems to make sense: technology, connectivity, and culture seem to be setting the world up more and more for remote working. Oh, and home-brewed coffee is better than ever too.

Now, I am going to write another piece on how companies should optimize for remote working (so make sure you [Join As a Member][4] to stay tuned — it is free).

Today though I want to **share recommendations for how individuals can do remote working well themselves**. Whether you are a full-time remote worker or have the option of working from home a few days a week, this article should hopefully be helpful.

Now, you need to know that **remote working is not a panacea**. Sure, it seems like hanging around at home in your jimjams, listening to your antisocial music, and sipping on buckets of coffee is perfect, but it isn’t for everyone.

Some people need the structure of an office. Some people need the social element of an office. Some people need to get out of the house. Some people lack the discipline to stay focused at home. Some people are avoiding the government coming and knocking on the door due to years of unpaid back taxes.

**Remote working is like a muscle: it can bring enormous strength and capabilities IF you train and maintain it**. If you don’t, your results are going to vary.

I have worked from home for the vast majority of my career. I love it. I am more productive, happier, and empowered when I work from home. I don’t dislike working in an office, and I enjoy the social element, but I am more in my “zone” when I work from home. I also love blisteringly heavy metal, which can pose a problem when the office doesn’t want to listen to [After The Burial][5].

![][6]
“Squirrel.”
[Credit][7]

I have learned how I need to manage remote work, using the right balance of work routine, travel, and other elements, and here are some of my recommendations. Be sure to **share yours in the comments**.

### 1\. You need discipline and routine (and to understand your “waves”)

Remote work really is a muscle that needs to be trained. Just like building actual muscle, there needs to be a clear routine and a healthy dollop of discipline mixed in.

Always get dressed (no jimjams). Set your start and end time for your day (I work 9am – 6pm most days). Choose your lunch break (mine is 12pm). Choose your morning ritual (mine is email followed by a full review of my client needs). Decide where your main workplace will be (mine is my home office). Decide when you will exercise each day (I do it at 5pm most days).

**Design a realistic routine and do it for 66 days**. It takes this long to build a habit. Try not to deviate from the routine. The more you stick to the routine, the less work it will seem further down the line. By the end of the 66 days it will feel natural and you won’t have to think about it.

Here’s the deal though: we don’t live in a vacuum ([cleaner, or otherwise][8]). We all have waves.

A wave is when you need a change of routine to mix things up. For example, in summertime I generally want more sunlight. I will often work outside in the garden. Near the holidays I get more distracted, so I need more structure in my day. Sometimes I just need more human contact, so I will work from coffee shops for a few weeks. Sometimes I just fancy working in the kitchen or on the couch. You need to learn your waves and listen to your body. **Build your habit first, and then modify it as you learn your waves**.

### 2\. Set expectations with your management and colleagues

Not everyone knows how to do remote working, and if your company is less familiar with remote working, you especially need to set expectations with colleagues.

This can be pretty simple: **when you have designed your routine, communicate it clearly to your management and team**. Let them know how they can get hold of you, how to contact you in an emergency, and how you will be collaborating while at home.

The communication component here is critical. There are some remote workers who are scared to leave their computer for fear that someone will send them a message while they are away (and they are worried people may think they are just eating Cheetos and watching Netflix).

You need time away. You need to eat lunch without one eye on your computer. You are not a 911 emergency responder. **Set expectations that sometimes you may not be immediately responsive, but you will get back to them as soon as possible**.

Similarly, set expectations on your general availability. For example, I set expectations with clients that I generally work from 9am – 6pm every day. Sure, if a client needs something urgently, I am more than happy to respond outside of those hours, but as a general rule I am usually working between those hours. This is necessary for a balanced life.

### 3\. Distractions are your enemy and they need managing

We all get distracted. It is human nature. It could be your young kid getting home and wanting to play Rescue Bots. It could be checking Facebook, Instagram, or Twitter to ensure you don’t miss any unwanted political opinions or photos of people’s lunches. It could be that there is something else going on in your life that is taking your attention (such as an upcoming wedding, event, or big trip.)

**You need to learn what distracts you and how to manage it**. For example, I know I get distracted by my email and Twitter. I check them religiously, and every check gets me out of the zone of what I am working on. I also get distracted by grabbing coffee and water, which then may turn into a snack and a YouTube video.

![][9]
My nemesis for distractions.

The digital distractions have a simple solution: **lock them out**. Close down the tabs until you complete what you are doing. I do this all the time with big chunks of work: I lock out the distractions until I am done. It requires discipline, but all of this does.

The human elements are tougher. If you have a family, you need to make it clear that when you are working, you need to be generally left alone. This is why a home office is so important: you need to set boundaries that mum or dad is working. Come in if there is an emergency, but otherwise they need to be left alone.

There are all kinds of opportunities for locking these distractions out. Put your phone on silent. Set yourself as away. Move to a different room (or building) where the distraction isn’t there. Again, be honest about what distracts you and manage it. If you don’t, you will always be at their mercy.

### 4\. Relationships need in-person attention

Some roles are more attuned to remote working than others. For example, I have seen great work from engineering, quality assurance, support, security, and other teams (typically more focused on digital collaboration). Other teams, such as design or marketing, often struggle more in remote environments (as they are often more tactile.)

With any team though, having strong relationships is critical, and in-person discussion, collaboration, and socializing are essential to this. So many of our senses (such as body language) are removed in a digital environment, and these play a key role in how we build trust and relationships.

![][10]
Rockets also help.

This is especially important if (a) you are new to a company and need to build these relationships, (b) are new to a role and need to build relationships with your team, or (c) are in a leadership position where building buy-in and engagement is a key part of your job.

**The solution? A sensible mix of remote and in-person time.** If your company is nearby, work from home part of the week and at the office part of the week. If your company is further away, schedule regular trips to the office (and set expectations with your management that you need this). For example, when I worked at XPRIZE I flew to LA every few weeks for a few days. When I worked at Canonical (who were based in London), we had sprints every three months.

### 5\. Stay focused, but cut yourself some slack

The crux of everything in this article is about building a capability, and developing a remote working muscle. This is as simple as building a routine, sticking to it, and having an honest view of your “waves” and distractions and how to manage them.

I see the world in a fairly specific way: **everything we do has the opportunity to be refined and improved**. For example, I have been public speaking now for over 15 years, but I am always discovering new ways to improve, and new mistakes to fix (speaking of which, see my [10 Ways To Up Your Public Speaking Game][11].)

There is a thrill in the discovery of new ways to get better, and in seeing every stumbling block and mistake as an “aha!” moment to kick ass in new and different ways. It is no different with remote working: look for patterns that help to unlock ways in which you can make your remote working time more efficient, more comfortable, and more fun.

![][12]
Get these books. They are fantastic for personal development.
See my [$150 Personal Development Kit][13] article

…but don’t go crazy over it. There are some people who obsess over every minute of their day and how to get better. They beat themselves up constantly for “not doing well enough”, “not getting more done”, and not meeting their internal, unrealistic view of perfection.

We are humans. We are animals, and we are not robots. Always strive to improve, but be realistic that not everything will be perfect. You are going to have some off-days or off-weeks. You are going to struggle at times with stress and burnout. You are going to handle a situation poorly remotely that would have been easier in the office. Learn from these moments but don’t obsess over them. Life is too damn short.

**What are your tips, tricks, and recommendations? How do you manage remote working? What is missing from my recommendations? Share them in the comments box!**

--------------------------------------------------------------------------------

via: https://www.jonobacon.com/2019/01/14/remote-working-survival/

Author: [Jono Bacon][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject), proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.jonobacon.com/author/admin/
[b]: https://github.com/lujun9972
[1]: https://www.cnbc.com/2018/05/30/70-percent-of-people-globally-work-remotely-at-least-once-a-week-iwg-study.html
[2]: http://www.cosocloud.com/press-release/connectsolutions-survey-shows-working-remotely-benefits-employers-and-employees
[3]: https://www.aftercollege.com/cf/2015-annual-survey
[4]: https://www.jonobacon.com/join/
[5]: https://www.facebook.com/aftertheburial/
[6]: https://www.jonobacon.com/wp-content/uploads/2019/01/aftertheburial2.jpg
[7]: https://skullsnbones.com/burial-live-photos-vans-warped-tour-denver-co/
[8]: https://www.youtube.com/watch?v=wK1PNNEKZBY
[9]: https://www.jonobacon.com/wp-content/uploads/2019/01/IMG_20190114_102429-1024x768.jpg
[10]: https://www.jonobacon.com/wp-content/uploads/2019/01/15381733956_3325670fda_k-1024x576.jpg
[11]: https://www.jonobacon.com/2018/12/11/10-ways-to-up-your-public-speaking-game/
[12]: https://www.jonobacon.com/wp-content/uploads/2019/01/DwVBxhjX4AgtJgV-1024x532.jpg
[13]: https://www.jonobacon.com/2017/11/13/150-dollar-personal-development-kit/
@@ -0,0 +1,54 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Art of Unix Programming, reformatted)
[#]: via: (https://arp242.net/weblog/the-art-of-unix-programming.html)
[#]: author: (Martin Tournoij https://arp242.net/)

The Art of Unix Programming, reformatted
======

tl;dr: I reformatted Eric S. Raymond’s The Art of Unix Programming for readability; [read it here][1].

I recently wanted to look up a quote for an article I was writing, and I was fairly sure I had read it in The Art of Unix Programming. Eric S. Raymond (esr) has [kindly published it online][2], but it’s difficult to search, as it’s distributed over many different pages, and the formatting is not exactly conducive to readability.

I `wget --mirror`’d it to my drive, and started out with a simple [script][3] to join everything into a single page, but eventually ended up rewriting a lot of the HTML from crappy 2003 docbook-generated tagsoup to more modern standards, and I slapped on some CSS to make it more readable.

The results are fairly nice, and it should work well in any version of any browser (I haven’t tested Internet Explorer and Edge, lacking access to a Windows computer, but I’m reasonably confident it should work without issues; if not, see the bottom of this page on how to get in touch).

The HTML could be simplified further (so rms can read it too), but dealing with 360k lines of ill-formatted HTML is not exactly my idea of fun, so this will have to do for now.

The entire page is self-contained. You can save it to your laptop or mobile phone and read it on a plane or whatnot.

Why spend so much work on an IT book from 2003? I think a substantial part of the book still applies very much today, for all programmers (not just Unix programmers). For example, the [Basics of the Unix Philosophy][4] was good advice in 1972, is still good advice in 2019, and will continue to be good advice well into the future.

Other parts have aged less gracefully; for example, “since 2000, practice has been moving toward use of XML-DocBook as a documentation interchange format” doesn’t really represent the current state of things, and the [Data File Metaformats][5] section mentions XML and INI, but not JSON or YAML (as they weren’t invented until after the book was written).

I find this adds, rather than detracts. It makes for an interesting window into the past. The downside is that the uninitiated will have a bit of a hard time distinguishing between the good and outdated parts; as a rule of thumb: if it talks about abstract concepts, it probably still applies today. If it talks about specific software, it may be outdated.

I toyed with the idea of updating or annotating the text, but the license doesn’t allow derivative works, so that’s not going to happen. Perhaps I’ll email esr and ask nicely. Another project, for another weekend :-)

You can mail me at [martin@arp242.net][6] or [create a GitHub issue][7] for feedback, questions, etc.

--------------------------------------------------------------------------------

via: https://arp242.net/weblog/the-art-of-unix-programming.html

Author: [Martin Tournoij][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject), proudly presented by [Linux中国](https://linux.cn/).

[a]: https://arp242.net/
[b]: https://github.com/lujun9972
[1]: https://arp242.net/the-art-of-unix-programming/
[2]: http://catb.org/~esr/writings/taoup/html/
[3]: https://arp242.net/the-art-of-unix-programming/fix-taoup.py
[4]: https://arp242.net/the-art-of-unix-programming#ch01s06
[5]: https://arp242.net/the-art-of-unix-programming/#ch05s02
[6]: mailto:martin@arp242.net
[7]: https://github.com/Carpetsmoker/arp242.net/issues/new
85
sources/talk/20191110 What is DevSecOps.md
Normal file
85
sources/talk/20191110 What is DevSecOps.md
Normal file
@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is DevSecOps?)
[#]: via: (https://opensource.com/article/19/1/what-devsecops)
[#]: author: (Brett Hunoldt https://opensource.com/users/bretthunoldtcom)

What is DevSecOps?
======
The journey to DevSecOps begins with empowerment, enablement, and education. Here's how to get started.


> “DevSecOps enables organizations to deliver inherently secure software at DevOps speed.” -Stefan Streichsbier

DevSecOps as a practice or an art form is an evolution of the concept of DevOps. To better understand DevSecOps, you should first understand what DevOps means.

DevOps was born from merging the practices of development and operations, removing the silos, aligning the focus, and improving the efficiency and performance of both the teams and the product. A new synergy was formed, with DevOps focused on building products and services that are easy to maintain and that automate typical operations functions.

Security is a common silo in many organizations. Security’s core focus is protecting the organization, and sometimes this means creating barriers or policies that slow down the execution of new services or products to ensure that everything is well understood and done safely and that nothing introduces unnecessary risk to the organization.

**[[Download the Getting started with DevSecOps guide]][1]**

Because of the distinct nature of the security silo and the friction it can introduce, development and operations sometimes bypass or work around security to meet their objectives. At some firms, the silo creates an expectation that security is entirely the responsibility of the security team, and it is up to them to figure out what security defects or issues may be introduced as a result of a product.

DevSecOps looks at merging the security discipline within DevOps. By enhancing or building security into the developer and/or operational role, or by including a security role within the product engineering team, security naturally finds itself in the product by design.

This allows companies to release new products and updates more quickly and with full confidence that security is embedded into the product.

### Where does rugged software fit into DevSecOps?

Building rugged software is more an aspect of the DevOps culture than a distinct practice, and it complements and enhances a DevSecOps practice. Think of a rugged product as something that has been battle-hardened through experimentation or experience.

It’s important to note that rugged software is not necessarily 100% secure (although it may have been at some point in time). However, it has been designed to handle most of what is thrown at it.

The key tenets of a rugged software practice are fostering competition, experimentation, controlled failure, and cooperation.

### How do you get started in DevSecOps?

Getting started with DevSecOps involves shifting security requirements and execution to the earliest possible stage in the development process. It ultimately creates a shift in culture where security becomes everyone’s responsibility, not only the security team’s.

You may have heard teams talking about a "shift left." If you flatten the development pipeline into a horizontal line that includes the key stages of the product evolution—from initiation to design, building, testing, and finally to operating—the goal of security is to be involved as early as possible. This allows the risks to be better evaluated, socialized, and mitigated by design. The "shift-left" mentality is about moving this engagement far left in this pipeline.

This journey begins with three key elements:

  * empowerment
  * enablement
  * education

Empowerment, in my view, is about releasing control and allowing teams to make independent decisions without fear of failure or repercussion (within reason). The only caveat in this process is that information is critical to making informed decisions (more on that below).

To achieve empowerment, business and executive support (which can be created through internal sales, presentations, and establishing metrics to show the return on this investment) is critical to break down the historic barriers between siloed teams. Integrating security into the development and operations teams and increasing both communication and transparency can help you begin the journey to DevSecOps.

This integration and mobilization allows teams to focus on a single outcome: building a product for which they share responsibility and collaborate on development and security in a reliable way. This will take you most of the way towards empowerment. It places the shared responsibility for the product with the teams building it and ensures that any part of the product can be taken apart while still maintaining its security.

Enablement involves placing the right tools and resources in the hands of the teams. It’s about creating a culture of knowledge-sharing through forums, wikis, and informal gatherings.

Creating a culture that focuses on automation and the concept that repetitive tasks should be coded will likely reduce operational overhead and strengthen security. This scenario is about more than providing knowledge; it is about making this knowledge highly accessible through multiple channels and mediums (which are enabled through tools) so that it can be consumed and shared in whatever way teams or individuals prefer. One medium might work best when team members are coding and another when they are on the road. Make the tools accessible and simple and let the teams play with them.

Different DevSecOps teams will have different preferences, so allow them to be independent whenever possible. This is a delicate balancing exercise because you do want economies of scale and the ability to share among products. Collaboration and involvement in the selection and renewal of these tools will help lower the barriers to adoption.

Finally, and perhaps most importantly, DevSecOps is about training and awareness building. Meetups, social gatherings, and formal presentations within the organization are great ways for peers to teach and share their learnings. Sometimes these highlight shared challenges, concerns, or risks others may not have considered. Sharing and teaching are also effective ways to learn and to mentor teams.

In my experience, each organization's culture is unique, so you can’t take a “one-size-fits-all” approach. Reach out to your teams and find out what tools they want to use. Test different forums and gatherings and see what works best for your culture. Seek feedback and ask the teams what is working, what they like, and why. Adapt and learn, be positive, and never stop trying, and you’ll almost always succeed.

[Download the Getting started with DevSecOps guide][1]

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/1/what-devsecops

作者:[Brett Hunoldt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bretthunoldtcom
[b]: https://github.com/lujun9972
[1]: https://opensource.com/downloads/devsecops
212
sources/tech/20180220 JSON vs XML vs TOML vs CSON vs YAML.md
Normal file
@ -0,0 +1,212 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (JSON vs XML vs TOML vs CSON vs YAML)
[#]: via: (https://www.zionandzion.com/json-vs-xml-vs-toml-vs-cson-vs-yaml/)
[#]: author: (Tim Anderson https://www.zionandzion.com)

JSON vs XML vs TOML vs CSON vs YAML
======

### A Super Serious Segment About Sets, Subsets, and Supersets of Sample Serialization

I’m a developer. I read code. I write code. I write code that writes code. I write code that writes code for other code to read. It’s all very mumbo-jumbo, but beautiful in its own way. However, that last bit, writing code that writes code for other code to read, can get more convoluted than this paragraph—quickly. There are a lot of ways to do it. One not-so-convoluted way and a favorite among the developer community is through data serialization. For those who aren’t savvy on the super buzzword I just threw at you, data serialization is the process of taking some information from one system, churning it into a format that other systems can read, and then passing it along to those other systems.

While there are enough [data serialization formats][1] out there to bury the Burj Khalifa, they all mostly fall into two categories:

  * simplicity for humans to read and write,
  * and simplicity for machines to read and write.

It’s difficult to have both, as we humans enjoy loosely typed, flexible formatting standards that allow us to be more expressive, whereas machines tend to enjoy being told exactly what everything is without doubt or lack of detail, and consider “strict specifications” to be their favorite flavor of Ben & Jerry’s.

Since I’m a web developer and we’re an agency that creates websites, we’ll stick to those special formats that web systems can understand, or be made to understand without much effort, and that are particularly useful for human readability: XML, JSON, TOML, CSON, and YAML. Each has benefits, cons, and appropriate use cases.

### Facts First

Back in the early days of the interwebs, [some really smart fellows][2] decided to put together a standard language which every system could read, and creatively named it Standard Generalized Markup Language, or SGML for short. SGML was incredibly flexible and well defined by its publishers. It became the father of languages such as XML, SVG, and HTML. All three fall under the SGML specification, but are subsets with stricter rules and less flexibility.

Eventually, people started seeing a great deal of benefit in having very small, concise, easy-to-read, and easy-to-generate data that could be shared programmatically between systems with very little overhead. Around that time, JSON was born and was able to fulfil all requirements. In turn, other languages began popping up to deal with more specialized cases such as CSON, TOML, and YAML.

### XML: Ixnayed

Originally, the XML language was amazingly flexible and easy to write, but its drawback was that it was verbose, difficult for humans to read, really difficult for computers to read, and had a lot of syntax that wasn’t entirely necessary to communicate information.

Today, it’s all but dead for data serialization purposes on the web. Unless you’re writing HTML or SVG, both siblings to XML, you probably aren’t going to see XML in too many other places. Some outdated systems still use it today, but using it to pass data around tends to be overkill for the web.

I can already hear the XML greybeards beginning to scribble upon their stone tablets as to why XML is ah-may-zing, so I’ll provide a small addendum: XML can be easy to read and write by systems and people. However, it is really, and I mean ridiculously, hard to create a system that can read it to specification. Here’s a simple, beautiful example of XML:

```
<book id="bk101">
   <author>Gambardella, Matthew</author>
   <title>XML Developer's Guide</title>
   <genre>Computer</genre>
   <price>44.95</price>
   <publish_date>2000-10-01</publish_date>
   <description>An in-depth look at creating applications
   with XML.</description>
</book>
```

Wonderful. Easy to read, reason about, write, and code a system that can read and write. But consider this example:

```
<!DOCTYPE r [ <!ENTITY y "a]>b"> ]>
<r>
<a b="&y;>" />
<![CDATA[[a>b <a>b <a]]>
<?x <a> <!-- <b> ?> c --> d
</r>
```

The above is 100% valid XML. Impossible to read, understand, or reason about. Writing code that can consume and understand this would cost at least 36 heads of hair and 248 pounds of coffee grounds. We don’t have that kind of time nor coffee, and most of us greybeards are balding nowadays. So let’s let it live only in our memory alongside [CSS hacks][3], [Internet Explorer 6][4], and [vacuum tubes][5].
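
To be fair to the greybeards: for the happy path, reading XML really is only a few lines in most languages. Here is a minimal sketch using Python's standard-library `xml.etree.ElementTree` to pull fields out of the simple book example above (the element names are taken from that snippet); the pathological example is another story entirely, which is the point:

```
# Parse the simple <book> example with Python's standard library.
import xml.etree.ElementTree as ET

doc = """
<book id="bk101">
  <author>Gambardella, Matthew</author>
  <title>XML Developer's Guide</title>
  <price>44.95</price>
</book>
"""

book = ET.fromstring(doc)
print(book.get("id"))                  # bk101
print(book.findtext("author"))         # Gambardella, Matthew
print(float(book.findtext("price")))   # 44.95 -- everything is a string until you cast it
```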
### JSON: Juxtaposition Jamboree

Okay, we’re all in agreement. XML = bad. So, what’s a good alternative? JavaScript Object Notation, or JSON for short. JSON (read like the name Jason) grew out of JavaScript, the language Brendan Eich invented, and was specified and made popular by the great and powerful Douglas Crockford, the [Dutch Uncle of JavaScript][6]. It’s used just about everywhere nowadays. The format is easy to write by both human and machine, fairly easy to [parse][7] with strict rules in the specification, and flexible—allowing deep nesting of data, all of the primitive data types, and interpretation of collections as either arrays or objects. JSON became the de facto standard for transferring data from one system to another. Nearly every language out there has built-in functionality for reading and writing it.

JSON syntax is straightforward. Square brackets denote arrays, curly braces denote records, and a colon separates each property (or ‘key’) on the left from its value on the right. All keys must be wrapped in double quotes:

```
{
  "books": [
    {
      "id": "bk102",
      "author": "Crockford, Douglas",
      "title": "JavaScript: The Good Parts",
      "genre": "Computer",
      "price": 29.99,
      "publish_date": "2008-05-01",
      "description": "Unearthing the Excellence in JavaScript"
    }
  ]
}
```

This should make complete sense to you. It’s nice and concise, and has stripped much of the extra nonsense from XML to convey the same amount of information. JSON is king right now, and the rest of this article will go into other language formats that are nothing more than JSON boiled down in an attempt to be either more concise or more readable by humans, but that follow a very similar structure.
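
As a quick illustration of that built-in support, here is a round-trip with Python's standard `json` module (a sketch; the data is trimmed from the example above):

```
# Parse JSON text into native dicts/lists, then serialize it back.
import json

text = '{"books": [{"id": "bk102", "price": 29.99}]}'

data = json.loads(text)
print(data["books"][0]["price"])   # 29.99 -- typed as a float automatically
print(json.dumps(data, indent=2))  # pretty-printed JSON, ready for another system
```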
### TOML: Truncated to Total Altruism

TOML (Tom’s Obvious, Minimal Language) allows for defining deeply nested data structures rather quickly and succinctly. The name-in-the-name refers to the inventor, [Tom Preston-Werner][8], an inventor and software developer who’s active in our industry. The syntax is a bit awkward when compared to JSON, and is more akin to an [ini file][9]. It’s not a bad syntax, but could take some getting used to:

```
[[books]]
id = 'bk101'
author = 'Crockford, Douglas'
title = 'JavaScript: The Good Parts'
genre = 'Computer'
price = 29.99
publish_date = 2008-05-01T00:00:00+00:00
description = 'Unearthing the Excellence in JavaScript'
```

A couple of great features have been integrated into TOML, such as multiline strings, auto-escaping of reserved characters, datatypes such as dates, times, integers, floats, scientific notation, and “table expansion”. That last bit is special, and is what makes TOML so concise:

```
[a.b.c]
d = 'Hello'
e = 'World'
```

The above expands to the following:

```
{
  "a": {
    "b": {
      "c": {
        "d": "Hello",
        "e": "World"
      }
    }
  }
}
```

You can definitely see how much you can save in both time and file length using TOML. Few systems use it or something very similar for configuration, though, and that is its biggest con. There simply aren’t very many languages or libraries out there written to interpret TOML.
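
Where a parser is available, TOML is as easy to consume as JSON. For instance, Python now ships a TOML parser, `tomllib`, in the standard library as of Python 3.11; this sketch parses the table-expansion example into the nested structure shown above:

```
# Parse the table-expansion example with tomllib (Python 3.11+).
import tomllib

text = """
[a.b.c]
d = 'Hello'
e = 'World'
"""

data = tomllib.loads(text)
print(data)  # {'a': {'b': {'c': {'d': 'Hello', 'e': 'World'}}}}
```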
### CSON: Simple Samples Enslaved by Specific Systems

First off, there are two CSON specifications. One stands for CoffeeScript Object Notation, the other stands for Cursive Script Object Notation. The latter isn’t used too often, so we won’t be getting into it. Let’s just focus on the CoffeeScript one.

[CSON][10] will take a bit of intro. First, let’s talk about CoffeeScript. [CoffeeScript][11] is a language that runs through a compiler to generate JavaScript. It allows you to write JavaScript in a more syntactically concise way, and have it [transcompiled][12] into actual JavaScript, which you would then use in your web application. CoffeeScript makes writing JavaScript easier by removing a lot of the extra syntax necessary in JavaScript. A big one that CoffeeScript gets rid of is curly braces—no need for them. By the same token, CSON is JSON without the curly braces. It instead relies on indentation to determine the hierarchy of your data. CSON is very easy to read and write and usually requires fewer lines of code than JSON because there are no brackets.

CSON also offers up some extra niceties that JSON doesn’t have to offer. Multiline strings are incredibly easy to write, you can enter [comments][13] by starting a line with a hash, and there’s no need to separate key-value pairs with commas.

```
books: [
  id: 'bk102'
  author: 'Crockford, Douglas'
  title: 'JavaScript: The Good Parts'
  genre: 'Computer'
  price: 29.99
  publish_date: '2008-05-01'
  description: 'Unearthing the Excellence in JavaScript'
]
```

Here’s the big issue with CSON. It’s **CoffeeScript** Object Notation. Meaning CoffeeScript is what you use to parse/tokenize/lex/transcompile or otherwise use CSON. CoffeeScript is the system that reads the data. If the intent of data serialization is to allow data to be passed from one system to another, and here we have a data serialization format that’s only read by a single system, well, that makes it about as useful as a fireproof match, or a waterproof sponge, or that annoyingly flimsy fork part of a spork.

If this format were adopted by other systems, it could be pretty useful in the developer world. Thus far that hasn’t happened in a comprehensive manner, so using it in alternative languages such as PHP or Java is a no-go.

### YAML: Yielding Yips from Youngsters

Developers rejoice, as YAML comes onto the scene from [one of the contributors to Python][14]. YAML has the same feature set and similar syntax as CSON, plus a boatload of new features and parsers available in just about every web programming language there is. It also has some extra features, like circular referencing, soft-wraps, multi-line keys, typecasting tags, binary data, object merging, and [set maps][15]. It has incredibly good human readability and writability, and is a superset of JSON, so you can use fully qualified JSON syntax inside YAML and all will work well. You almost never need quotes, and it can interpret most of your base data types (strings, integers, floats, booleans, etc.).

```
books:
  - id: bk102
    author: Crockford, Douglas
    title: 'JavaScript: The Good Parts'
    genre: Computer
    price: 29.99
    publish_date: !!str 2008-05-01
    description: Unearthing the Excellence in JavaScript
```

The younglings of the industry are rapidly adopting YAML as their preferred data serialization and system configuration format. They are smart to do so. YAML has all the benefits of being as terse as CSON, and all the features of datatype interpretation as JSON. YAML is as easy to read as Canadians are to hang out with.

There are two issues with YAML that stick out to me, and the first is a big one. At the time of this writing, YAML parsers haven’t yet been built into very many languages, so you’ll need to use a third-party library or extension for your chosen language to parse .yaml files. This wouldn’t be a big deal, however it seems most developers who’ve created parsers for YAML have chosen to throw “additional features” into their parsers at random. Some allow [tokenization][16], some allow [chain referencing][17], some even allow inline calculations. This is all well and good (sort of), except that none of these features are part of the specification, and so are difficult to find amongst other parsers in other languages. This results in system lock-in; you end up with the same issue that CSON is subject to. If you use a feature found in only one parser, other parsers won’t be able to interpret the input. Most of these features are nonsense that don’t belong in a dataset, but rather in your application logic, so it’s best to simply ignore them and write your YAML to specification.

The second issue is that few parsers yet completely implement the specification. All the basics are there, but it can be difficult to find some of the more complex and newer things like soft-wraps, document markers, and circular references in your preferred language. I have yet to see an absolute need for these things, so hopefully they shouldn’t slow you down too much. With the above considered, I tend to keep to the more mature feature set presented in the [1.1 specification][18], and avoid the newer stuff found in the [1.2 specification][19]. However, programming is an ever-evolving monster, so by the time you finish reading this article, you’re likely to be able to use the 1.2 spec.
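
Sticking to the spec is straightforward in practice. Assuming the widely used third-party PyYAML package as the parser, a plain `safe_load` handles the books example and types the values for you:

```
# Read the books example with PyYAML (pip install pyyaml).
# safe_load sticks to plain YAML and avoids parser-specific extras.
import yaml

text = """
books:
  - id: bk102
    author: Crockford, Douglas
    price: 29.99
"""

data = yaml.safe_load(text)
print(data["books"][0]["price"])  # 29.99 -- the float was typed for us
```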
### Final Philosophy

The final word here is that each serialization language should be treated with a case-by-case reverence. Some are the bee’s knees when it comes to machine readability, some are the cat’s meow for human readability, and some are simply gilded turds. Here’s the ultimate breakdown: If you are writing code for other code to read, use YAML. If you are writing code that writes code for other code to read, use JSON. Finally, if you are writing code that transcompiles code into code that other code will read, rethink your life choices.

--------------------------------------------------------------------------------

via: https://www.zionandzion.com/json-vs-xml-vs-toml-vs-cson-vs-yaml/

作者:[Tim Anderson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.zionandzion.com
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Comparison_of_data_serialization_formats
[2]: https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language#History
[3]: https://www.quirksmode.org/css/csshacks.html
[4]: http://www.ie6death.com/
[5]: https://en.wikipedia.org/wiki/Vacuum_tube
[6]: https://twitter.com/BrendanEich/status/773403975865470976
[7]: https://en.wikipedia.org/wiki/Parsing#Parser
[8]: https://en.wikipedia.org/wiki/Tom_Preston-Werner
[9]: https://en.wikipedia.org/wiki/INI_file
[10]: https://github.com/bevry/cson#what-is-cson
[11]: http://coffeescript.org/
[12]: https://en.wikipedia.org/wiki/Source-to-source_compiler
[13]: https://en.wikipedia.org/wiki/Comment_(computer_programming)
[14]: http://clarkevans.com/
[15]: http://exploringjs.com/es6/ch_maps-sets.html
[16]: https://www.tutorialspoint.com/compiler_design/compiler_design_lexical_analysis.htm
[17]: https://en.wikipedia.org/wiki/Fluent_interface
[18]: http://yaml.org/spec/1.1/current.html
[19]: http://www.yaml.org/spec/1.2/spec.html
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (runningwater)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -96,7 +96,7 @@ via: https://anarc.at/blog/2018-12-21-large-files-with-git/

作者:[Anarc.at][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,223 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Pelican: A Python-based static site generator)
[#]: via: (https://opensource.com/article/19/1/getting-started-pelican)
[#]: author: (Craig Sebenik https://opensource.com/users/craig5)

Getting started with Pelican: A Python-based static site generator
======
Pelican is a great choice for Python users who want to self-host a simple website or blog.



If you want to create a custom website or blog, you have a lot of options. Many providers will host your website and do much of the work for you. (WordPress is an extremely popular option.) But you lose some flexibility by using a hosted solution. As a software developer, I prefer to manage my own server and keep more freedom in how my website operates.

However, it is a fair amount of work to manage a web server. Installing it and getting a simple application up to serve content is easy enough. But keeping on top of security patches and updates is very time-consuming. If you just want to serve static web pages, having a web server and a host of applications may be more effort than it's worth. Creating HTML pages by hand is also not a good option.

This is where a static site generator can come in. These applications use templates to create all the static pages you want and cross-link them with associated metadata (e.g., showing all the pages with a common tag or keyword). Static site generators help you create a site with a common look and feel using elements like navigation areas and a header and footer.

I have been using [Python][1] for years now. So, when I first started looking for something to generate static HTML pages, I wanted something written in Python. The main reason is that I often want to peek into the internals of how an application works, and using a language that I already know makes that easier. (If that isn't important to you or you don't use Python, there are some other great [static site generators][2] that use Ruby, JavaScript, and other languages.)

I decided to give [Pelican][3] a try. It is a commonly used static site generator written in Python. It directly supports [reStructuredText][4] and can support [Markdown][5] when the required package is installed. All the tasks are performed via command-line interface (CLI) tools, which makes it simple for anyone familiar with the command line. And its simple quickstart CLI tool makes creating a website extremely easy.

In this article, I'll explain how to install Pelican 4, add an article, and change the default theme. (Note: This was all developed on MacOS; it should work the same using any flavor of Unix/Linux, but I don't have a Windows host to test on.)

### Installation and configuration

The first step is to create a [virtualenv][6] and install Pelican.

```
$ mkdir test-site
$ cd test-site
$ python3 -m venv venv
$ ./venv/bin/pip install --upgrade pip
...
Successfully installed pip-18.1
$ ./venv/bin/pip install pelican
Collecting pelican
...
Successfully installed MarkupSafe-1.1.0 blinker-1.4 docutils-0.14 feedgenerator-1.9 jinja2-2.10 pelican-4.0.1 pygments-2.3.1 python-dateutil-2.7.5 pytz-2018.7 six-1.12.0 unidecode-1.0.23
```

Pelican's quickstart CLI tool will create the basic layout and a few files to get you started. Run the **pelican-quickstart** command. To keep things simple, I entered values for the **title** and **author** and replied **N** to URL prefix and article pagination. (For the rest of the questions, I used the defaults given.) It is very easy to change these settings in the configuration file later.

```
$ ./venv/bin/pelican-quickstart
Welcome to pelican-quickstart v4.0.1.

This script will help you create a new Pelican-based website.

Please answer the following questions so this script can generate the files needed by Pelican.

> Where do you want to create your new web site? [.]
> What will be the title of this web site? My Test Blog
> Who will be the author of this web site? Craig
> What will be the default language of this web site? [en]
> Do you want to specify a URL prefix? e.g., https://example.com (Y/n) n
> Do you want to enable article pagination? (Y/n) n
> What is your time zone? [Europe/Paris]
> Do you want to generate a tasks.py/Makefile to automate generation and publishing? (Y/n)
> Do you want to upload your website using FTP? (y/N)
> Do you want to upload your website using SSH? (y/N)
> Do you want to upload your website using Dropbox? (y/N)
> Do you want to upload your website using S3? (y/N)
> Do you want to upload your website using Rackspace Cloud Files? (y/N)
> Do you want to upload your website using GitHub Pages? (y/N)
Done. Your new project is available at /Users/craig/tmp/pelican/test-site
```

All the files you need to get started are ready to go.

The quickstart defaults to the Europe/Paris time zone, so change that before proceeding. Open the **pelicanconf.py** file in your favorite text editor. Look for the **TIMEZONE** variable.

```
TIMEZONE = 'Europe/Paris'
```

Change it to **UTC**.

```
TIMEZONE = 'UTC'
```

To update the social settings, look for the **SOCIAL** variable in **pelicanconf.py**.

```
SOCIAL = (('You can add links in your config file', '#'),
          ('Another social link', '#'),)
```

I'll add a link to my Twitter account.

```
SOCIAL = (('Twitter (#craigs55)', 'https://twitter.com/craigs55'),)
```

Notice that trailing comma—it's important. That comma helps Python recognize the variable is actually a tuple. Make sure you don't delete that comma.
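
To see why, here's a quick experiment you can run in any Python interpreter (the values are just placeholders):

```
# With the trailing comma, Python builds a tuple containing one (name, link) pair.
with_comma = (('Twitter', 'https://twitter.com/craigs55'),)
for item in with_comma:
    print(item)  # ('Twitter', 'https://twitter.com/craigs55')

# Without it, the outer parentheses are just grouping, so code that
# iterates over (name, link) pairs would get bare strings instead.
without_comma = (('Twitter', 'https://twitter.com/craigs55'))
for item in without_comma:
    print(item)  # Twitter, then https://twitter.com/craigs55
```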
Now you have the basics of a site. The quickstart created a Makefile with a number of targets. Giving the **devserver** target to **make** will start a development server on your machine so you can preview everything. The CLI commands used in the Makefile are assumed to be part of your **PATH**, so you need to **activate** the **virtualenv** first.

```
$ source ./venv/bin/activate
$ make devserver
pelican -lr /Users/craig/tmp/pelican/test-site/content -o
/Users/craig/tmp/pelican/test-site/output -s /Users/craig/tmp/pelican/test-site/pelicanconf.py

-> Modified: theme, settings. regenerating...
WARNING: No valid files found in content for the active readers:
  | BaseReader (static)
  | HTMLReader (htm, html)
  | RstReader (rst)
Done: Processed 0 articles, 0 drafts, 0 pages, 0 hidden pages and 0 draft pages in 0.18 seconds.
```

Point your favorite browser to <http://localhost:8000> to see your simple test blog.



You can see the Twitter link on the right side and some links to Pelican, Python, and Jinja to the left of it. (Jinja is a great templating language that Pelican can use. You can learn more about it in [Jinja's documentation][7].)

### Adding content

Now that you have a basic site, add some content. First, add a file called **welcome.rst** to the site's **content** directory. In your favorite text editor, create a file with the following text:

```
$ pwd
/Users/craig/tmp/pelican/test-site
$ cat content/welcome.rst

Welcome to my blog!
###################

:date: 2018-12-16 08:30
:tags: welcome
:category: Intro
:slug: welcome
:author: Craig
:summary: Welcome document

Welcome to my blog.
This is a short page just to show how to put up a static page.
```

The metadata lines—date, tags, etc.—are automatically parsed by Pelican.

After you write the file, the **devserver** should output something like this:

```
-> Modified: content. regenerating...
Done: Processed 1 article, 0 drafts, 0 pages, 0 hidden pages and 0 draft pages in 0.10 seconds.
```

Reload your test site in your browser to view the changes.



The metadata (e.g., date and tags) were automatically added to the page. Also, Pelican automatically detected the **Intro** category and added the section to the top navigation.

### Change the theme

One of the nicest parts of working with popular, open source software like Pelican is that many users will make changes and contribute them back to the project. Many of the contributions are in the form of themes.

A site's theme sets colors, layout options, etc. It's really easy to try out new themes. You can preview many of them at [Pelican Themes][8].

First, clone the GitHub repo:

```
$ cd ..
$ git clone --recursive https://github.com/getpelican/pelican-themes
Cloning into 'pelican-themes'...
```

Since I like the color blue, I'll try [blueidea][9].

Edit **pelicanconf.py** and add the following line:

```
THEME = '/Users/craig/tmp/pelican/pelican-themes/blueidea/'
```

The **devserver** will regenerate your output. Reload the webpage in your browser to see the new theme.



The theme controls many aspects of the layout. For example, in the default theme, you can see the category (Intro) with the meta tags next to the article. But that category is not displayed in the blueidea theme.

### Other considerations

This was a pretty quick introduction to Pelican. There are a couple of important topics that I did not cover.

First, one reason I was hesitant to move to a static site was that it wouldn't allow discussions on the articles. Fortunately, there are some third-party providers that will host discussions for you. The one I am currently looking at is [Disqus][10].

Next, everything above was done on my local machine. If I want others to view my site, I'll have to upload the pre-generated HTML files somewhere. If you look at the **pelican-quickstart** output, you will see options for using FTP, SSH, S3, and even GitHub Pages. Each option has its pros and cons. But if I had to choose one, I would likely publish to GitHub Pages.

Pelican has many other features—I am still learning more about it every day. If you want to self-host a website or a blog with simple, static content and you want to use Python, Pelican is a great choice. It has an active user community that is fixing bugs, adding features, and creating new and interesting themes. Give it a try!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/1/getting-started-pelican

作者:[Craig Sebenik][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/craig5
[b]: https://github.com/lujun9972
[1]: https://opensource.com/resources/python
[2]: https://opensource.com/sitewide-search?search_api_views_fulltext=static%20site%20generator
[3]: http://docs.getpelican.com/en/stable/
[4]: http://docutils.sourceforge.net/rst.html
[5]: https://daringfireball.net/projects/markdown/
[6]: https://virtualenv.pypa.io/en/latest/
[7]: http://jinja.pocoo.org/docs/2.10/
[8]: http://www.pelicanthemes.com/
[9]: https://github.com/nasskach/pelican-blueidea/tree/58fb13112a2707baa7d65075517c40439ab95c0a
[10]: https://disqus.com/
135
sources/tech/20190107 Testing isn-t everything.md
Normal file
@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Testing isn't everything)
[#]: via: (https://arp242.net/weblog/testing.html)
[#]: author: (Martin Tournoij https://arp242.net/)

Testing isn't everything
======

This is adapted from a discussion about [Want to write good unit tests in go? Don’t panic… or should you?][1] While this mainly talks about Go, a lot of the points also apply to other languages.

Some of the most difficult code I’ve worked with is code that is “easily testable”. Code that abstracts everything to the point where you have no idea what’s going on, just so that it can add a “unit test” to what would otherwise be a very straightforward function. DHH called this [Test-induced design damage][2].

Testing is just one tool, out of several, to make sure that your program works. Another very important tool is writing code in such a way that it is easy to understand and reason about (“simplicity”).

Books that advocate extensive testing – such as Robert C. Martin’s Clean Code – were written, in part, as a response to ever more complex programs, where you read 1,000 lines of code but still had no idea what’s going on. I recently had to port a simple Java “emoji replacer” (😂 ➙ 😂) to Go. To ensure compatibility I looked up the implementation. It was a whole bunch of classes, factories, and whatnot which all just resulted in calling a regexp on a string. 🤷

In dynamic languages like Ruby and Python, tests are important for a different reason, as something like this will “work” just fine:

```
if condition:
    print('w00t')
else:
    nonexistent_function()
```

Except, of course, if that `else` branch is entered. It’s easy to typo stuff, or mix stuff up.

In Go, both of these problems are less of a concern. It has a good static type system, and the focus is on simple, straightforward code that is easy to comprehend. Even for a number of dynamic languages there are optional typing systems (function annotations in Python, TypeScript for JavaScript).
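
For example, with Python's function annotations, a static checker such as mypy will flag the broken branch from the snippet above without ever executing it (a sketch, not from the original discussion):

```
# mypy reports the undefined name below at check time;
# plain CPython would only fail when the else branch runs.
def greet(name: str) -> str:
    return "w00t, " + name

def run(condition: bool) -> None:
    if condition:
        print(greet("world"))
    else:
        nonexistent_function()  # error: Name "nonexistent_function" is not defined
```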
Sometimes you can do a straightforward implementation that doesn’t sacrifice anything for testability; great! But sometimes you have to strike a balance. For some code, not adding a unit test is fine.

Intensive focus on “unit tests” can be incredibly damaging to a code base. Some codebases have a gazillion unit tests, which makes any change excessively time-consuming as you’re fixing up a whole bunch of tests for even trivial changes. Often a lot of these tests are just duplicates; adding tests to every layer of a simple CRUD HTTP endpoint is a common example. In many apps it’s fine to just rely on a single integration test.

Stuff like SQL mocks is another great example. It makes code more complex and harder to change, all so we can say we added a “unit test” to `select * from foo where x=?`. The worst part is, it doesn’t even test anything other than verifying you didn’t typo an SQL query. As soon as the test starts doing anything useful, such as verifying that it actually returns the correct rows from the database, the Unit Test purists will start complaining that it’s not a True Unit Test™ and that You’re Doing It Wrong™. For most queries, integration tests and/or manual tests are fine, and extensive SQL mocks are entirely superfluous at best, and harmful at worst.

There are exceptions, of course; if you’ve got a lot of `if cond { q += "more sql" }` then adding SQL mocks to verify the correctness of that logic might be a good idea. Even in those cases a “non-unit unit test” (e.g. one that just accesses the database) is still a viable option. Integration tests are also still an option. A lot of applications don’t have those kinds of complex queries anyway.

One important reason for the focus on unit tests is to ensure test code runs fast. This was a response to massive test harnesses that take a day to run. This, again, is not really a problem in Go. All integration tests I’ve written run in a reasonable amount of time (several seconds at most, usually faster). The test cache introduced in Go 1.10 makes it even less of a concern.

Last year a coworker refactored our ETag-based caching library. The old code was very straightforward and easy to understand, and while I’m not claiming it was guaranteed bug-free, it did work very well for a long time.

It should have been written with some tests in place, but it wasn’t (I didn’t write the original version). Note that the code was not completely untested, as we did have integration tests.

The refactored version is much more complex. Aside from the two weeks lost on refactoring a working piece of code to … another working piece of code (topic for another post), I’m not so convinced it’s actually that much better. I consider myself a reasonably accomplished and experienced programmer, with reasonable knowledge of and experience in Go. I think that in general, based on feedback from peers and performance reviews, I am at least a programmer of “average” skill level, if not more.

If an average programmer has trouble comprehending what is in essence a handful of simple functions because there are so many layers of abstraction, then something has gone wrong. The refactor traded one tool to verify correctness (simplicity) for another (testing). Simplicity is hardly a guarantee of correctness, but neither are unit tests. Ideally, we should do both.

Postscript: the refactor introduced a bug and removed a feature that was useful, but is now harder to add, not least because the code is much more complex.

All units working correctly gives exactly zero guarantees that the program is working correctly. A lot of logic errors won’t be caught because the logic consists of several units working together. So you need integration tests, and if the integration tests duplicate half of your unit tests, then why bother with those unit tests?

Test-Driven Development (TDD) is also just one tool. It works well for some problems; not so much for others. In particular, I think that being “forced to write code in tiny units” can be terribly harmful in some cases. Some code is just a serial script which says “do this, and then that, and then this”. Splitting that up into a whole bunch of “tiny units” can greatly reduce how easy the code is to understand, and thus make it harder to verify that it is correct.

I’ve had to fix some Ruby code where everything was in tiny units – there is a strong culture of TDD in the Ruby community – and even though the units were easy to understand, I found it incredibly hard to understand the application logic. If everything is split into “tiny units”, then understanding how everything fits together to create an actual program that does something useful will be much harder.

You see the same friction in the old microkernel vs. monolithic kernel debate, or the more recent microservices vs. monolithic app one. In principle splitting everything up into small parts sounds like a great idea, but in practice it turns out that making all the small parts work together is a very hard problem. A hybrid approach seems to work best for kernels and app design, balancing the advantages and downsides of both approaches. I think the same applies to code.

To be clear, I am not against unit tests or TDD, and I'm not claiming we should all gung-go cowboy code our way through life 🤠. I write unit tests and practice TDD, when it makes sense. My point is that unit tests and TDD are not the solution to every single last problem and should not be applied indiscriminately. This is why I use words such as “some” and “often” so frequently.

This brings me to the topic of testing frameworks. I have never understood what problem libraries such as [goblin][3] are solving. How is this:

```
Expect(err).To(nil)
Expect(out).To(test.wantOut)
```

An improvement over this?

```
if err != nil {
    t.Fatal(err)
}

if out != tt.want {
    t.Errorf("out: %q\nwant: %q", out, tt.want)
}
```

What’s wrong with `if` and `==`? Why do we need to abstract it? Note that with table-driven tests you’re only typing these checks once, so you’re saving just a few lines here.
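
For what it's worth, the same plain-comparison, table-driven pattern translates directly to other languages. Here is a sketch in Python, with `upper_first` standing in for whatever function is under test:

```
# A table-driven test using nothing fancier than == and a loop.
import unittest

def upper_first(s):
    return s[:1].upper() + s[1:]

class TestUpperFirst(unittest.TestCase):
    def test_table(self):
        tests = [            # (input, want)
            ("hello", "Hello"),
            ("", ""),
            ("x", "X"),
        ]
        for inp, want in tests:
            with self.subTest(inp=inp):
                out = upper_first(inp)
                if out != want:
                    self.fail("out: %r\nwant: %r" % (out, want))

if __name__ == "__main__":
    unittest.main()
```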
[Ginkgo][4] is even worse. It takes a very simple, straightforward, and understandable piece of code and doesn’t just abstract `if`; it also chops up the execution into several different functions (`BeforeEach()` and `DescribeTable()`).

This is known as Behaviour-Driven Development (BDD). I am not entirely sure what to think of BDD. I am skeptical, but I’ve never properly used it in a large project so I’m hesitant to just dismiss it. Note that I said “properly”: most projects don’t really use BDD, they just use a library with a BDD syntax and shoehorn their testing code into that. That’s ad-hoc BDD, or faux-BDD.

Whatever merits BDD may have, they are not present simply because your testing code vaguely resembles BDD-style syntax. This on its own demonstrates that BDD is perhaps not a great idea for many projects.

I think there are real problems with these BDD(-ish) test tools, as they obfuscate what you’re actually doing. No matter what, testing remains a matter of getting the output of a function and checking if that matches what you expected. No testing methodology is going to change that fundamental. The more layers you add on top of that, the harder it will be to debug.

When determining if something is “easy”, my prime concern is not how easy something is to write, but how easy something is to debug when things fail. I will gladly spend a bit more effort writing things if that makes things a lot easier to debug.

All code – including testing code – can fail in confusing, surprising, and unexpected ways (a “bug”), and then you’re expected to debug that code. The more complex the code, the harder it is to debug.

You should expect all code – including testing code – to go through several debugging cycles. Note that with a debugging cycle I don’t mean “there is a bug in the code you need to fix”, but rather “I need to look at this code to fix the bug”.

In general, I already find testing code harder to debug than regular code, as the “code surface” tends to be larger. You have the testing code and the actual implementation code to think of. That’s a lot more than just thinking of the implementation code.

Adding these abstractions means you will now also have to think about them, too! This might be okay if the abstractions reduced the scope of what you have to think about, which is a common reason to add abstractions in regular code, but they don’t. They just add more things to think about.

So these are exactly the wrong kind of abstractions: they wrap and obfuscate, rather than separate concerns and reduce the scope.

If you’re interested in soliciting contributions from other people in open source projects, then making your tests understandable is a very important concern (it’s also important in a business context, but a bit less so, as you’ve got actual time to train people).

Seeing PRs with “here’s the code, it works, but I couldn’t figure out the tests, plz halp!” is not uncommon; and I’m fairly sure that at least a few people never even bothered to submit PRs just because they got stuck on the tests. I know I have.

There is one open source project that I contributed to, and would like to contribute more to, but don’t because it’s just too hard to write and run tests. Every change is “write working code in 15 minutes, spend 45 minutes dealing with tests”. It’s … no fun at all.

Writing good software is hard. I’ve got some ideas on how to do it, but don’t have a comprehensive view. I’m not sure if anyone really does. I do know that “always add unit tests” and “always practice TDD” isn’t the answer, in spite of them being useful concepts. To give an analogy: most people would agree that a free market is a good idea, but at the same time even most libertarians would agree it’s not the complete solution to every single problem (well, [some do][5], but those ideas are … rather misguided).

You can mail me at [martin@arp242.net][6] or [create a GitHub issue][7] for feedback, questions, etc.

--------------------------------------------------------------------------------

via: https://arp242.net/weblog/testing.html

作者:[Martin Tournoij][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://arp242.net/
[b]: https://github.com/lujun9972
[1]: https://medium.com/@jens.neuse/want-to-write-good-unit-tests-in-go-dont-panic-or-should-you-ba3eb5bf4f51
[2]: http://david.heinemeierhansson.com/2014/test-induced-design-damage.html
[3]: https://github.com/franela/goblin
[4]: https://github.com/onsi/ginkgo
[5]: https://en.wikipedia.org/wiki/Murray_Rothbard#Children's_rights_and_parental_obligations
[6]: mailto:martin@arp242.net
[7]: https://github.com/Carpetsmoker/arp242.net/issues/new
@ -0,0 +1,301 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create your own video streaming server with Linux)
[#]: via: (https://opensource.com/article/19/1/basic-live-video-streaming-server)
[#]: author: (Aaron J.Prisk https://opensource.com/users/ricepriskytreat)

Create your own video streaming server with Linux
======
Set up a basic live streaming server on a Linux or BSD operating system.


Live video streaming is incredibly popular—and it's still growing. Platforms like Amazon's Twitch and Google's YouTube boast millions of users that stream and consume countless hours of live and recorded media. These services are often free to use but require you to have an account and generally hold your content behind advertisements. Some people don't need their videos to be available to the masses or just want more control over their content. Thankfully, with the power of open source software, anyone can set up a live streaming server.

### Getting started

In this tutorial, I'll explain how to set up a basic live streaming server with a Linux or BSD operating system.

This leads to the inevitable question of system requirements. These can vary, as there are a lot of variables involved with live streaming, such as:

  * **Stream quality:** Do you want to stream in high definition or will standard definition fit your needs?
  * **Viewership:** How many viewers are you expecting for your videos?
  * **Storage:** Do you plan on keeping saved copies of your video stream?
  * **Access:** Will your stream be private or open to the world?

There are no set rules when it comes to system requirements, so I recommend you experiment and find what works best for your needs. I installed my server on a virtual machine with 4GB RAM, a 20GB hard drive, and a single Intel i7 processor core.

This project uses the Real-Time Messaging Protocol (RTMP) to handle audio and video streaming. There are other protocols available, but I chose RTMP because it has broad support. As open standards like WebRTC gain broader support, I would recommend going that route.

It's also very important to know that "live" doesn't always mean instant. A video stream must be encoded, transferred, buffered, and displayed, which often adds delays. The delay can be shortened or lengthened depending on the type of stream you're creating and its attributes.

### Setting up a Linux server

You can use many different distributions of Linux, but I prefer Ubuntu, so I downloaded the [Ubuntu Server][1] edition for my operating system. If you prefer your server to have a graphical user interface (GUI), feel free to use [Ubuntu Desktop][2] or one of its many flavors. Then I fired up the Ubuntu installer on my virtual machine and chose the settings that best matched my environment. Below are the steps I took.

Note: Because this is a server, you'll probably want to set some static network settings.



After the installer finishes and your system reboots, you'll be greeted with a lovely new Ubuntu system. As with any newly installed operating system, install any updates that are available:

```
sudo apt update
sudo apt upgrade
```

This streaming server will use the very powerful and versatile Nginx web server, so you'll need to install it:

```
sudo apt install nginx
```

Then you'll need to get the RTMP module so Nginx can handle your media stream:

```
sudo add-apt-repository universe
sudo apt install libnginx-mod-rtmp
```

Adjust your web server's configuration so it can accept and deliver your media stream:

```
sudo nano /etc/nginx/nginx.conf
```

Scroll to the bottom of the configuration file and add the following code:

```
rtmp {
        server {
                listen 1935;
                chunk_size 4096;

                application live {
                        live on;
                        record off;
                }
        }
}
```



Save the config. Because I'm a heretic, I use [Nano][3] for editing configuration files. In Nano, you can save your config by pressing **Ctrl+X**, **Y**, and then **Enter**.

This is a very minimal config that will create a working streaming server. You'll add to this config later, but this is a great starting point.

However, before you can begin your first stream, you'll need to restart Nginx with its new configuration:

```
sudo systemctl restart nginx
```
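
Before pointing any streaming software at the server, it can save some head-scratching to confirm that the RTMP listener is actually up. Here's a small sketch (in Python, with the host address as a placeholder) that probes port 1935:

```
# Probe TCP port 1935 to verify the nginx RTMP listener is accepting connections.
import socket

HOST = "127.0.0.1"  # replace with your streaming server's address
PORT = 1935         # the RTMP port from nginx.conf

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(3)
    result = s.connect_ex((HOST, PORT))

print("RTMP port is open" if result == 0 else "connect failed (errno %d)" % result)
```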
|
||||
|
||||
### Setting up a BSD server
|
||||
|
||||
If you're of the "beastie" persuasion, getting a streaming server up and running is also devilishly easy.
|
||||
|
||||
Head on over to the [FreeBSD][4] website and download the latest release. Fire up the FreeBSD installer on your computer or virtual machine and go through the initial steps and choose settings that best match your environment. Since this is a server, you'll likely want to set some static network settings.
|
||||
|
||||
After the installer finishes and your system reboots, you should have a shiny new FreeBSD system. Like any other freshly installed system, you'll likely want to get everything updated (from this step forward, make sure you're logged in as root):
|
||||
|
||||
```
|
||||
pkg update
|
||||
pkg upgrade
|
||||
```
|
||||
|
||||
I install [Nano][3] for editing configuration files:
|
||||
|
||||
```
|
||||
pkg install nano
|
||||
```
|
||||
|
||||
This streaming server will use the very powerful and versatile Nginx web server. You can build Nginx using the excellent ports system that FreeBSD boasts.
|
||||
|
||||
First, update your ports tree:
|
||||
|
||||
```
|
||||
portsnap fetch
|
||||
portsnap extract
|
||||
```
|
||||
|
||||
Browse to the Nginx ports directory:
|
||||
|
||||
```
|
||||
cd /usr/ports/www/nginx
|
||||
```
|
||||
|
||||
And begin building Nginx by running:
|
||||
|
||||
```
|
||||
make install
|
||||
```
|
||||
|
||||
You'll see a screen asking what modules to include in your Nginx build. For this project, you'll need to add the RTMP module. Scroll down until the RTMP module is selected and press **Space**. Then Press **Enter** to proceed with the rest of the build and installation.
|
||||
|
||||
Once Nginx has finished installing, it's time to configure it for streaming purposes.
|
||||
|
||||
First, add an entry into **/etc/rc.conf** to ensure the Nginx server starts when your system boots:
|
||||
|
||||
```
|
||||
nano /etc/rc.conf
|
||||
```
|
||||
|
||||
Add this text to the file:
|
||||
|
||||
```
|
||||
nginx_enable="YES"
|
||||
```
|
||||
|
||||

|
||||
|
||||
Next, create a webroot directory from where Nginx will serve its content. I call mine **stream** :
|
||||
|
||||
```
|
||||
cd /usr/local/www/
|
||||
mkdir stream
|
||||
chmod -R 755 stream/
|
||||
```
|
||||
|
||||
Now that you have created your stream directory, configure Nginx by editing its configuration file:
|
||||
|
||||
```
|
||||
nano /usr/local/etc/nginx/nginx.conf
|
||||
```
|
||||
|
||||
Load your streaming modules at the top of the file:
|
||||
|
||||
```
|
||||
load_module /usr/local/libexec/nginx/ngx_stream_module.so;
|
||||
load_module /usr/local/libexec/nginx/ngx_rtmp_module.so;
|
||||
```
|
||||
|
||||

|
||||
|
||||
Under the **Server** section, change the webroot location to match the one you created earlier:
|
||||
|
||||
```
location / {
    root /usr/local/www/stream;
}
```
|
||||
|
||||

|
||||
|
||||
And finally, add your RTMP settings so Nginx will know how to handle your media streams:
|
||||
|
||||
```
rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;
        }
    }
}
```
|
||||
|
||||
Save the config. In Nano, you can do this by pressing **Ctrl+X**, **Y**, and then **Enter**.
|
||||
|
||||
As you can see, this is a very minimal config that will create a working streaming server. Later, you'll add to this config, but this will provide you with a great starting point.
|
||||
|
||||
However, before you can begin your first stream, you'll need to restart Nginx with its new config:
|
||||
|
||||
```
service nginx restart
```
|
||||
|
||||
### Set up your streaming software
|
||||
|
||||
#### Broadcasting with OBS
|
||||
|
||||
Now that your server is ready to accept your video streams, it's time to set up your streaming software. This tutorial uses the powerful and open source OBS Studio (Open Broadcaster Software).
|
||||
|
||||
Head over to the [OBS website][5] and find the build for your operating system and install it. Once OBS launches, you should see a first-time-run wizard that will help you configure OBS with the settings that best fit your hardware.
|
||||
|
||||

|
||||
|
||||
OBS isn't capturing anything because you haven't supplied it with a source. For this tutorial, you'll just capture your desktop for the stream. Simply click the **+** button under **Source** , choose **Screen Capture** , and select which desktop you want to capture.
|
||||
|
||||
Click OK, and you should see OBS mirroring your desktop.
|
||||
|
||||
Now it's time to send your newly configured video stream to your server. In OBS, click **File** > **Settings**. Click on the **Stream** section, and set **Stream Type** to **Custom Streaming Server**.
|
||||
|
||||
In the URL box, enter the prefix **rtmp://** followed by the IP address of your streaming server, followed by **/live**. For example, **rtmp://IP-ADDRESS/live**.
|
||||
|
||||
Next, you'll probably want to enter a Stream key, a special identifier required to view your stream. Enter whatever key you want (and can remember) in the **Stream key** box.
|
||||
|
||||

|
||||
|
||||
Click **Apply** and then **OK**.
|
||||
|
||||
Now that OBS is configured to send your stream to your server, you can start your first stream. Click **Start Streaming**.
|
||||
|
||||
If everything worked, you should see the button change to **Stop Streaming** and some bandwidth metrics will appear at the bottom of OBS.
|
||||
|
||||

|
||||
|
||||
If you receive an error, double-check your Stream settings in OBS for misspellings. If everything looks good, make sure the server is reachable and that nothing (such as a firewall) is blocking the RTMP port.
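One quick sanity check is whether the RTMP port is reachable at all. Assuming you have a netcat variant that supports port scanning, running something like this from the machine running OBS will tell you whether port 1935 is open on the server:

```
nc -vz IP-ADDRESS 1935
```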
|
||||
|
||||
### Viewing your stream
|
||||
|
||||
A live video isn't much good if no one is watching it, so be your first viewer!
|
||||
|
||||
There are a multitude of open source media players that support RTMP, but the most well-known is probably [VLC media player][6].
|
||||
|
||||
After you install and launch VLC, open your stream by clicking on **Media** > **Open Network Stream**. Enter the path to your stream, adding the Stream Key you set up in OBS, then click **Play**. For example, **rtmp://IP-ADDRESS/live/SECRET-KEY**.
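If you prefer the command line, most RTMP-capable players accept the same URL directly. For example, assuming VLC is on your PATH and using the placeholder address and key from above:

```
vlc "rtmp://IP-ADDRESS/live/SECRET-KEY"
```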
|
||||
|
||||
You should now be viewing your very own live video stream!
|
||||
|
||||

|
||||
|
||||
### Where to go next?
|
||||
|
||||
This is a very simple setup that will get you off the ground. Here are two other features you likely will want to use.
|
||||
|
||||
  * **Limit access:** The next step you might want to take is to limit access to your server, as the default setup allows anyone to stream to and from the server. There are a variety of ways to set this up, such as an operating system firewall, [.htaccess file][7], or even the [built-in access controls in the RTMP module][8] (see the sketch after the recording example below).
|
||||
|
||||
* **Record streams:** This simple Nginx configuration will only stream and won't save your videos, but this is easy to add. In the Nginx config, under the RTMP section, set up the recording options and the location where you want to save your videos. Make sure the path you set exists and Nginx is able to write to it.
|
||||
|
||||
|
||||
|
||||
|
||||
```
application live {
    live on;
    record all;
    record_path /var/www/html/recordings;
    record_unique on;
}
```
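One caveat on recording: the path you configure must exist and be writable by the Nginx worker process before recording will work. A minimal sketch (`www-data` is the default Nginx user on Debian/Ubuntu; substitute your system's user):

```
sudo mkdir -p /var/www/html/recordings
sudo chown www-data:www-data /var/www/html/recordings
```

And as a quick illustration of the RTMP module's built-in access controls mentioned in the first bullet above, a minimal sketch might look like this (the subnet is a placeholder; adjust it to your network and consult the module's directive documentation):

```
application live {
    live on;
    record off;

    # Only accept publishers from the local subnet (placeholder value)
    allow publish 192.168.1.0/24;
    deny publish all;

    # Let anyone play the stream; swap these to restrict playback too
    allow play all;
}
```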
|
||||
|
||||
The world of live streaming is constantly evolving, and if you're interested in more advanced uses, there are lots of other great resources you can find floating around the internet. Good luck and happy streaming!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/basic-live-video-streaming-server
|
||||
|
||||
作者:[Aaron J.Prisk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ricepriskytreat
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ubuntu.com/download/server
|
||||
[2]: https://www.ubuntu.com/download/desktop
|
||||
[3]: https://www.nano-editor.org/
|
||||
[4]: https://www.freebsd.org/
|
||||
[5]: https://obsproject.com/
|
||||
[6]: https://www.videolan.org/vlc/index.html
|
||||
[7]: https://httpd.apache.org/docs/current/howto/htaccess.html
|
||||
[8]: https://github.com/arut/nginx-rtmp-module/wiki/Directives#access
|
@ -0,0 +1,80 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Bash 5.0 Released with New Features)
|
||||
[#]: via: (https://itsfoss.com/bash-5-release)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Bash 5.0 Released with New Features
|
||||
======
|
||||
|
||||
The [mailing list][1] recently confirmed the release of Bash 5.0. And it is exciting to know that it comes baked with new features and variables.
|
||||
|
||||
Well, if you’ve been using Bash 4.4.XX, you will definitely love the fifth major release of [Bash][2].
|
||||
|
||||
The fifth major release focuses on new shell variables and a number of major bug fixes, including an overhaul of how nameref variables resolve. It also introduces a couple of new features, along with some incompatible changes between bash-4.4 and bash-5.0.
|
||||
|
||||
![Bash logo][3]
|
||||
|
||||
### What about the new features?
|
||||
|
||||
The mailing list explains the bugs fixed in this new release:
|
||||
|
||||
> This release fixes several outstanding bugs in bash-4.4 and introduces several new features. The most significant bug fixes are an overhaul of how nameref variables resolve and a number of potential out-of-bounds memory errors discovered via fuzzing. There are a number of changes to the expansion of $@ and $* in various contexts where word splitting is not performed to conform to a Posix standard interpretation, and additional changes to resolve corner cases for Posix conformance.
|
||||
|
||||
It also introduces some new features. As per the release note, the most notable new features are several new shell variables:
|
||||
|
||||
> The BASH_ARGV0, EPOCHSECONDS, and EPOCHREALTIME. The ‘history’ builtin can remove ranges of history entries and understands negative arguments as offsets from the end of the history list. There is an option to allow local variables to inherit the value of a variable with the same name at a preceding scope. There is a new shell option that, when enabled, causes the shell to attempt to expand associative array subscripts only once (this is an issue when they are used in arithmetic expressions). The ‘globasciiranges‘ shell option is now enabled by default; it can be set to off by default at configuration time.
|
||||
|
||||
### What about the changes between Bash-4.4 and Bash-5.0?
|
||||
|
||||
The update log mentions the incompatible changes and the supported Readline version history. Here's what it says:
|
||||
|
||||
> There are a few incompatible changes between bash-4.4 and bash-5.0. The changes to how nameref variables are resolved means that some uses of namerefs will behave differently, though I have tried to minimize the compatibility issues. By default, the shell only sets BASH_ARGC and BASH_ARGV at startup if extended debugging mode is enabled; it was an oversight that it was set unconditionally and caused performance issues when scripts were passed large numbers of arguments.
|
||||
>
|
||||
> Bash can be linked against an already-installed Readline library rather than the private version in lib/readline if desired. Only readline-8.0 and later versions are able to provide all of the symbols that bash-5.0 requires; earlier versions of the Readline library will not work correctly.
|
||||
|
||||
I believe some of the features/variables added are very useful. Some of my favorites are:
|
||||
|
||||
* There is a new (disabled by default, undocumented) shell option to enable and disable sending history to syslog at runtime.
|
||||
* The shell doesn’t automatically set BASH_ARGC and BASH_ARGV at startup unless it’s in debugging mode, as the documentation has always said, but will dynamically create them if a script references them at the top level without having enabled debugging mode.
|
||||
  * The ‘history’ builtin can now delete ranges of history entries using ‘-d start-end’.
|
||||
* If a non-interactive shell with job control enabled detects that a foreground job died due to SIGINT, it acts as if it received the SIGINT.
|
||||
* BASH_ARGV0: a new variable that expands to $0 and sets $0 on assignment.
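If you already have Bash 5.0 at hand, here is a quick, hypothetical session that pokes at a few of these (the values printed will of course differ on your machine):

```
# New variables: seconds, and microsecond-resolution time, since the epoch
echo "$EPOCHSECONDS"
echo "$EPOCHREALTIME"

# BASH_ARGV0 expands to $0, and assigning to it changes $0
echo "$BASH_ARGV0"

# Delete history entries 10 through 20 with the new range syntax
history -d 10-20
```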
|
||||
|
||||
|
||||
|
||||
To check the complete list of changes and features, you should refer to the [mailing list post][1].
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
You can check your current Bash version using this command:
|
||||
|
||||
```
bash --version
```
|
||||
|
||||
It’s more likely that you’ll have Bash 4.4 installed. If you want to get the new version, I would advise waiting for your distribution to provide it.
|
||||
|
||||
With Bash-5.0 available, what do you think about it? Are you using any alternative to bash? If so, would this update change your mind?
|
||||
|
||||
Let us know your thoughts in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/bash-5-release
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://lists.gnu.org/archive/html/bug-bash/2019-01/msg00063.html
|
||||
[2]: https://www.gnu.org/software/bash/
|
||||
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/bash-logo.jpg?resize=800%2C450&ssl=1
|
@ -0,0 +1,170 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Top 5 Linux Distributions for Productivity)
|
||||
[#]: via: (https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity)
|
||||
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
|
||||
|
||||
Top 5 Linux Distributions for Productivity
|
||||
======
|
||||
|
||||

|
||||
|
||||
I have to confess, this particular topic is a tough one to address. Why? First off, Linux is a productive operating system by design. Thanks to an incredibly reliable and stable platform, getting work done is easy. Second, to gauge effectiveness, you have to consider what type of work you need a productivity boost for. General office work? Development? School? Data mining? Human resources? You see how this question can get somewhat complicated.
|
||||
|
||||
That doesn’t mean, however, that some distributions aren’t able to do a better job of configuring and presenting that underlying operating system into an efficient platform for getting work done. Quite the contrary. Some distributions do a much better job of “getting out of the way,” so you don’t find yourself in a work-related hole, having to dig yourself out and catch up before the end of day. These distributions help strip away the complexity that can be found in Linux, thereby making your workflow painless.
|
||||
|
||||
Let’s take a look at the distros I consider to be your best bet for productivity. To help make sense of this, I’ve divided them into categories of productivity. That task itself was challenging, because everyone’s productivity varies. For the purposes of this list, however, I’ll look at:
|
||||
|
||||
* General Productivity: For those who just need to work efficiently on multiple tasks.
|
||||
|
||||
* Graphic Design: For those that work with the creation and manipulation of graphic images.
|
||||
|
||||
* Development: For those who use their Linux desktops for programming.
|
||||
|
||||
* Administration: For those who need a distribution to facilitate their system administration tasks.
|
||||
|
||||
* Education: For those who need a desktop distribution to make them more productive in an educational environment.
|
||||
|
||||
|
||||
|
||||
|
||||
Yes, there are more categories to be had, many of which can get very niche-y, but these five should fill most of your needs.
|
||||
|
||||
### General Productivity
|
||||
|
||||
For general productivity, you won't get much more efficient than [Ubuntu][1]. The primary reason for choosing Ubuntu for this category is the seamless integration of apps, services, and desktop. You might be wondering why I didn't choose Linux Mint for this category. Because Ubuntu now defaults to the GNOME desktop, it gains the added advantage of GNOME Extensions (Figure 1).
|
||||
|
||||
![GNOME Clipboard][3]
|
||||
|
||||
Figure 1: The GNOME Clipboard Indicator extension in action.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
These extensions go a very long way to aid in boosting productivity (so Ubuntu gets the nod over Mint). But Ubuntu didn’t just accept a vanilla GNOME desktop. Instead, they tweaked it to make it slightly more efficient and user-friendly, out of the box. And because Ubuntu contains just the right mixture of default, out-of-the-box, apps (that just work), it makes for a nearly perfect platform for productivity.
|
||||
|
||||
Whether you need to write a paper, work on a spreadsheet, code a new app, work on your company website, create marketing images, administer a server or network, or manage human resources from within your company HR tool, Ubuntu has you covered. The Ubuntu desktop distribution also doesn't require the user to jump through many hoops to get things working … it simply works (and quite well). Finally, thanks to its Debian base, Ubuntu makes installing third-party apps incredibly easy.
|
||||
|
||||
Although Ubuntu tends to be the go-to for nearly every list of “top distributions for X,” it’s very hard to argue against this particular distribution topping the list of general productivity distributions.
|
||||
|
||||
### Graphic Design
|
||||
|
||||
If you’re looking to up your graphic design productivity, you can’t go wrong with [Fedora Design Suite][5]. This Fedora respin was created by the team responsible for all Fedora-related art work. Although the default selection of apps isn’t a massive collection of tools, those it does include are geared specifically for the creation and manipulation of images.
|
||||
|
||||
With apps like GIMP, Inkscape, Darktable, Krita, Entangle, Blender, Pitivi, Scribus, and more (Figure 2), you’ll find everything you need to get your image editing jobs done and done well. But Fedora Design Suite doesn’t end there. This desktop platform also includes a bevy of tutorials that cover countless subjects for many of the installed applications. For anyone trying to be as productive as possible, this is some seriously handy information to have at the ready. I will say, however, the tutorial entry in the GNOME Favorites is nothing more than a link to [this page][6].
|
||||
|
||||
![Fedora Design Suite Favorites][8]
|
||||
|
||||
Figure 2: The Fedora Design Suite Favorites menu includes plenty of tools for getting your graphic design on.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
Those that work with a digital camera will certainly appreciate the inclusion of the Entangle app, which allows you to control your DSLR from the desktop.
|
||||
|
||||
### Development
|
||||
|
||||
Nearly all Linux distributions are great platforms for programmers. However, one particular distribution stands out, above the rest, as one of the most productive tools you'll find for the task. That OS comes from [System76][9] and it's called [Pop!_OS][10]. Pop!_OS is tailored specifically for creators, but not of the artistic type. Instead, Pop!_OS is geared toward creators who specialize in developing, programming, and making. If you need an environment that is not only perfectly suited for your development work, but includes a desktop that's sure to get out of your way, you won't find a better option than Pop!_OS (Figure 3).
|
||||
|
||||
What might surprise you (given how “young” this operating system is) is that Pop!_OS is also one of the single most stable GNOME-based platforms you'll ever use. This means Pop!_OS isn't just for creators and makers, but anyone looking for a solid operating system. One thing that many users will greatly appreciate with Pop!_OS is that you can download an ISO specifically for your video hardware. If you have Intel hardware, [download][10] the version for Intel/AMD. If your graphics card is NVIDIA, download that specific release. Either way, you are sure to get a solid platform on which to create your masterpiece.
|
||||
|
||||
![Pop!_OS][12]
|
||||
|
||||
Figure 3: The Pop!_OS take on GNOME Overview.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
Interestingly enough, with Pop!_OS, you won’t find much in the way of pre-installed development tools. You won’t find an included IDE, or many other dev tools. You can, however, find all the development tools you need in the Pop Shop.
|
||||
|
||||
### Administration
|
||||
|
||||
If you’re looking for one of the most productive distributions for admin tasks, look no further than [Debian][13]. Why? Because Debian is not only incredibly reliable, it’s one of those distributions that gets out of your way better than most others. Debian is the perfect combination of ease of use and unlimited possibility. On top of which, because this is the distribution on which so many others are based, you can bet that if there’s an admin tool you need for a task, it’s available for Debian. Of course, we’re talking about general admin tasks, which means most of the time you’ll be using a terminal window to SSH into your servers (Figure 4) or a browser to work with web-based GUI tools on your network. Why bother with a desktop that’s going to add layers of complexity (such as SELinux in Fedora, or YaST in openSUSE)? Instead, choose simplicity.
|
||||
|
||||
![Debian][15]
|
||||
|
||||
Figure 4: SSH’ing into a remote server on Debian.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
And because you can select which desktop you want (from GNOME, Xfce, KDE, Cinnamon, MATE, LXDE), you can be sure to have the interface that best matches your work habits.
|
||||
|
||||
### Education
|
||||
|
||||
If you are a teacher or student, or otherwise involved in education, you need the right tools to be productive. Once upon a time, there existed the likes of Edubuntu. That distribution never failed to be listed in the top of education-related lists. However, that distro hasn’t been updated since it was based on Ubuntu 14.04. Fortunately, there’s a new education-based distribution ready to take that title, based on openSUSE. This spin is called [openSUSE:Education-Li-f-e][16] (Linux For Education - Figure 5), and is based on openSUSE Leap 42.1 (so it is slightly out of date).
|
||||
|
||||
openSUSE:Education-Li-f-e includes tools like:
|
||||
|
||||
* Brain Workshop - A dual n-back brain exercise
|
||||
|
||||
* GCompris - An educational software suite for young children
|
||||
|
||||
* gElemental - A periodic table viewer
|
||||
|
||||
* iGNUit - A general purpose flash card program
|
||||
|
||||
* Little Wizard - Development environment for children based on Pascal
|
||||
|
||||
* Stellarium - An astronomical sky simulator
|
||||
|
||||
  * TuxMath - A math tutor game
|
||||
|
||||
* TuxPaint - A drawing program for young children
|
||||
|
||||
* TuxType - An educational typing tutor for children
|
||||
|
||||
  * wxMaxima - A cross-platform GUI for the Maxima computer algebra system
|
||||
|
||||
* Inkscape - Vector graphics program
|
||||
|
||||
* GIMP - Graphic image manipulation program
|
||||
|
||||
* Pencil - GUI prototyping tool
|
||||
|
||||
* Hugin - Panorama photo stitching and HDR merging program
|
||||
|
||||
|
||||
![Education][18]
|
||||
|
||||
Figure 5: The openSUSE:Education-Li-f-e distro has plenty of tools to help you be productive in or for school.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
Also included with openSUSE:Education-Li-f-e is the [KIWI-LTSP Server][19]. The KIWI-LTSP Server is a flexible, cost-effective solution aimed at empowering schools, businesses, and organizations all over the world to easily install and deploy desktop workstations. Although this might not directly aid students in being more productive, it certainly enables educational institutions to be more productive in deploying desktops for students to use. For more information on setting up KIWI-LTSP, check out the openSUSE [KIWI-LTSP quick start guide][20].
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux"][21] course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity
|
||||
|
||||
作者:[Jack Wallen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/jlwallen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ubuntu.com/
|
||||
[2]: /files/images/productivity1jpg
|
||||
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_1.jpg?itok=yxez3X1w (GNOME Clipboard)
|
||||
[4]: /licenses/category/used-permission
|
||||
[5]: https://labs.fedoraproject.org/en/design-suite/
|
||||
[6]: https://fedoraproject.org/wiki/Design_Suite/Tutorials
|
||||
[7]: /files/images/productivity2jpg
|
||||
[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_2.jpg?itok=ke0b8qyH (Fedora Design Suite Favorites)
|
||||
[9]: https://system76.com/
|
||||
[10]: https://system76.com/pop
|
||||
[11]: /files/images/productivity3jpg-0
|
||||
[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_3_0.jpg?itok=8UkCUfsD (Pop!_OS)
|
||||
[13]: https://www.debian.org/
|
||||
[14]: /files/images/productivity4jpg
|
||||
[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_4.jpg?itok=c9yD3Xw2 (Debian)
|
||||
[16]: https://en.opensuse.org/openSUSE:Education-Li-f-e
|
||||
[17]: /files/images/productivity5jpg
|
||||
[18]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_5.jpg?itok=oAFtV8nT (Education)
|
||||
[19]: https://en.opensuse.org/Portal:KIWI-LTSP
|
||||
[20]: https://en.opensuse.org/SDB:KIWI-LTSP_quick_start
|
||||
[21]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -0,0 +1,168 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Editing Subtitles in Linux)
|
||||
[#]: via: (https://itsfoss.com/editing-subtitles)
|
||||
[#]: author: (Shirish https://itsfoss.com/author/shirish/)
|
||||
|
||||
Editing Subtitles in Linux
|
||||
======
|
||||
|
||||
I have been a world movie and regional movies lover for decades. Subtitles are the essential tool that have enabled me to enjoy the best movies in various languages and from various countries.
|
||||
|
||||
If you enjoy watching movies with subtitles, you might have noticed that sometimes the subtitles are not synced or not correct.
|
||||
|
||||
Did you know that you can edit subtitles and make them better? Let me show you some basic subtitle editing in Linux.
|
||||
|
||||
![Editing subtitles in Linux][1]
|
||||
|
||||
### Extracting subtitles from closed captions data
|
||||
|
||||
Around 2012 or 2013, I came to know of a tool called [CCExtractor][2]. As time passed, it has become one of my vital tools, especially when I come across a media file with subtitles embedded in it.
|
||||
|
||||
CCExtractor analyzes video files and produces independent subtitle files from the closed captions data.
|
||||
|
||||
CCExtractor is a cross-platform, free and open source tool. The tool has matured quite a bit since its formative years and has been part of [GSOC][3] and Google Code-in now and [then][4].
|
||||
|
||||
The tool, to put it simply, is more or less a set of scripts which work one after another in a serialized order to give you an extracted subtitle.
|
||||
|
||||
You can follow the installation instructions for CCExtractor on [this page][5].
|
||||
|
||||
After installing it, when you want to extract subtitles from a media file, do the following:
|
||||
|
||||
```
ccextractor <path_to_video_file>
```
|
||||
|
||||
The output of the command will be something like this:
|
||||
|
||||
It basically scans the media file. In this case, it found that the media file is in Malayalam and that the media container is an [.mkv][6] container. It extracted the subtitle file with the same name as the video file, appending _eng to it.
|
||||
|
||||
CCExtractor is a wonderful tool which can be used to enhance subtitles along with Subtitle Edit which I will share in the next section.
|
||||
|
||||
> **Interesting read:** There is an interesting synopsis of subtitles at [vicaps][7] which tells and shares why subtitles are important to us. It goes into quite a bit of detail of movie-making as well, for those interested in such topics.
|
||||
|
||||
### Editing subtitles with SubtitleEditor Tool
|
||||
|
||||
You are probably aware that most subtitles are in the [.srt format][8]. The beautiful thing about this format is that you can load it in your text editor and make little fixes to it.
|
||||
|
||||
An .srt file looks something like this when opened in a simple text editor:
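Since the original screenshot isn't reproduced here, below is a representative excerpt in the SubRip format (a sequence number, a `start --> end` timestamp pair, then the text), loosely modeled on the film's intertitles rather than copied from an actual subtitle file:

```
18
00:02:14,000 --> 00:02:18,000
Everywhere there are spirits...

19
00:02:20,500 --> 00:02:24,000
They are all around us.
```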
|
||||
|
||||
The subtitle excerpt I have been working with is from a pretty old German movie called [The Cabinet of Dr. Caligari (1920)][9].
|
||||
|
||||
Subtitle Editor is a wonderful tool when it comes to editing subtitles. You can use it to manipulate time durations, change the frame rate of the subtitle file to keep it in sync with the media file, adjust the duration of in-between breaks, and much more. I’ll share some of the basic subtitle editing here.
|
||||
|
||||
![][10]
|
||||
|
||||
First install subtitleeditor the same way you installed ccextractor, using your favorite installation method. In Debian, you can use this command:
|
||||
|
||||
```
sudo apt install subtitleeditor
```
|
||||
|
||||
When you have it installed, let’s see some of the common scenarios where you need to edit a subtitle.
|
||||
|
||||
#### Manipulating Frame-rates to sync with Media file
|
||||
|
||||
If you find that the subtitles are not synced with the video, one of the reasons could be the difference between the frame rates of the video file and the subtitle file.
|
||||
|
||||
How do you know the frame rates of these files, then?
|
||||
|
||||
To get the frame rate of a video file, you can use the mediainfo tool. You may need to install it first using your distribution’s package manager.
|
||||
|
||||
Using mediainfo is simple:
|
||||
|
||||
```
$ mediainfo somefile.mkv | grep Frame
Format settings : CABAC / 4 Ref Frames
Format settings, ReFrames : 4 frames
Frame rate mode : Constant
Frame rate : 25.000 FPS
Bits/(Pixel*Frame) : 0.082
Frame rate : 46.875 FPS (1024 SPF)
```
|
||||
|
||||
Now you can see that the frame rate of the video file is 25.000 FPS. The other frame rate shown is for the audio track. Why particular frame rates are used in video encoding, audio encoding, etc. is a different subject matter, with a lot of history associated with it.
|
||||
|
||||
Next, find out the frame rate of the subtitle file; this is slightly more complicated.
|
||||
|
||||
Usually, most subtitles come zipped. Unzip the .zip archive to get the subtitle file, which ends in .srt. Alongside it, there is usually also a .info file with the same name, which sometimes contains the frame rate of the subtitle.
|
||||
|
||||
If not, then it is usually a good idea to download the subtitle from a site that lists the frame rate information. For this specific German file, I will be using [Opensubtitle.org][11].
|
||||
|
||||
As you can see in the link, the frame rate of the subtitle is 23.976 FPS. Quite obviously, it won’t play well with my video file with frame rate 25.000 FPS.
|
||||
|
||||
In such cases, you can change the frame rate of the subtitle file using the Subtitle Editor tool:
|
||||
|
||||
Select all the contents of the subtitle file with CTRL+A. Go to Timings -> Change Framerate and change the frame rate from 23.976 fps to 25.000 fps, or whatever is desired. Save the changed file.
|
||||
|
||||
![synchronize frame rates of subtitles in Linux][12]
|
||||
|
||||
#### Changing the Starting position of a subtitle file
|
||||
|
||||
Sometimes the above method may be enough, sometimes though it will not be enough.
|
||||
|
||||
You might find some cases when the start of the subtitle file is different from that in the movie or a media file while the frame rate is the same.
|
||||
|
||||
In such cases, do the following:
|
||||
|
||||
Select all the contents from the subtitle file by doing CTRL+A. Go to Timings -> Select Move Subtitle.
|
||||
|
||||
![Move subtitles using Subtitle Editor on Linux][13]
|
||||
|
||||
Change the new Starting position of the subtitle file. Save the changed file.
|
||||
|
||||
![Move subtitles using Subtitle Editor in Linux][14]
|
||||
|
||||
If you want to be more accurate, use [mpv][15] to watch the movie or media file: the timing bar shows how much of the file has elapsed, and clicking on it also reveals the fractions of a second.
|
||||
|
||||
I usually like to be accurate so I try to be as precise as possible. It is very difficult in MPV as human reaction time is imprecise. If I wanna be super accurate then I use something like [Audacity][16] but then that is another ball-game altogether as you can do so much more with it. That may be something to explore in a future blog post as well.
|
||||
|
||||
#### Manipulating Duration
|
||||
|
||||
Sometimes even doing both is not enough, and you have to shrink or extend durations to make the subtitles sync with the media file. This is one of the more tedious tasks, as you have to fix the duration of each line individually. It can happen especially if you have variable frame rates in the media file (nowadays rare, but you still get such files).
|
||||
|
||||
In such a scenario, you may have to edit the durations manually; automation is not possible. The best way is either to fix the video file (not possible without degrading the video quality) or to get the video from another source at a higher quality and then [transcode][17] it with the settings you prefer. This, again, is a major undertaking I could shed some light on in a future blog post.
|
||||
|
||||
### Conclusion
|
||||
|
||||
What I have shared above is more or less about improving existing subtitle files. If you were to start from scratch, you would need loads of time. I haven’t covered that at all, because a movie or any video material of, say, an hour can easily take anywhere from 4-6 hours or even more depending upon the skills of the subtitler, patience, context, jargon, accents, whether they are a native English speaker, translator, etc., all of which make a difference to the quality of the subtitle.
|
||||
|
||||
I hope you find this interesting and from now onward, you’ll handle your subtitles slightly better. If you have any suggestions to add, please leave a comment below.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/editing-subtitles
|
||||
|
||||
作者:[Shirish][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/shirish/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/editing-subtitles-in-linux.jpeg?resize=800%2C450&ssl=1
|
||||
[2]: https://www.ccextractor.org/
|
||||
[3]: https://itsfoss.com/best-open-source-internships/
|
||||
[4]: https://www.ccextractor.org/public:codein:google_code-in_2018
|
||||
[5]: https://github.com/CCExtractor/ccextractor/wiki/Installation
|
||||
[6]: https://en.wikipedia.org/wiki/Matroska
|
||||
[7]: https://www.vicaps.com/blog/history-of-silent-movies-and-subtitles/
|
||||
[8]: https://en.wikipedia.org/wiki/SubRip#SubRip_text_file_format
|
||||
[9]: https://www.imdb.com/title/tt0010323/
|
||||
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/subtitleeditor.jpg?ssl=1
|
||||
[11]: https://www.opensubtitles.org/en/search/sublanguageid-eng/idmovie-4105
|
||||
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/subtitleeditor-frame-rate-sync.jpg?resize=800%2C450&ssl=1
|
||||
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Move-subtitles-Caligiri.jpg?resize=800%2C450&ssl=1
|
||||
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/move-subtitles.jpg?ssl=1
|
||||
[15]: https://itsfoss.com/mpv-video-player/
|
||||
[16]: https://www.audacityteam.org/
|
||||
[17]: https://en.wikipedia.org/wiki/Transcoding
|
||||
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/editing-subtitles-in-linux.jpeg?fit=800%2C450&ssl=1
|
@ -0,0 +1,64 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wwhio)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Get started with Wekan, an open source kanban board)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-wekan)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
Get started with Wekan, an open source kanban board
|
||||
======
|
||||
In the second article in our series on open source tools that will make you more productive in 2019, check out Wekan.
|
||||

|
||||
|
||||
There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
|
||||
|
||||
Here's the second of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.
|
||||
|
||||
### Wekan
|
||||
|
||||
[Kanban][1] boards are a mainstay of today's agile processes. And many of us (myself included) use them to organize not just our work but also our personal lives. I know several artists who use apps like [Trello][2] to keep track of their commission lists as well as what's in progress and what's complete.
|
||||
|
||||

|
||||
|
||||
But these apps are often linked to a work account or a commercial service. Enter [Wekan][3], an open source kanban board you can run locally or on the service of your choice. Wekan offers much of the same functionality as other Kanban apps, such as creating boards, lists, swimlanes, and cards, dragging and dropping between lists, assigning to users, labeling cards, and doing pretty much everything else you'd expect in a modern kanban board.
|
||||
|
||||

|
||||
|
||||
The thing that distinguishes Wekan from most other kanban boards is the built-in rules. While most other boards support emailing updates, Wekan allows you to set up triggers when taking actions on cards, checklists, and labels.
|
||||
|
||||

|
||||
|
||||
Wekan can then take actions like moving cards, updating labels, adding checklists, and sending emails.
|
||||
|
||||

|
||||
|
||||
Setting up Wekan locally is a snap—literally. If your desktop supports [Snapcraft][4] applications, installing is as easy as:
|
||||
|
||||
```
sudo snap install wekan
```
|
||||
|
||||
It also supports Docker, which means installation is reasonably straightforward on most servers and desktops.
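As a rough sketch of what a Docker-based setup can look like (the image name, port, and environment variables here are assumptions based on Wekan's Docker instructions; check the project's documentation for the current recommendations), you pair Wekan with a MongoDB container:

```
# Start a MongoDB container to hold Wekan's data
docker run -d --name wekan-db mongo:4.0

# Run Wekan, pointing it at the database and at the URL users will visit
docker run -d --name wekan --link wekan-db:db \
  -e MONGO_URL="mongodb://db:27017/wekan" \
  -e ROOT_URL="http://localhost:8080" \
  -p 8080:8080 \
  wekanteam/wekan
```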
|
||||
|
||||
Overall, if you want a nice kanban board that you can run yourself, Wekan has you covered.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-wekan
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Kanban
|
||||
[2]: https://www.trello.com
|
||||
[3]: https://wekan.github.io/
|
||||
[4]: https://snapcraft.io/
|
@ -0,0 +1,139 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Hegemon – A Modular System And Hardware Monitoring Tool For Linux)
|
||||
[#]: via: (https://www.2daygeek.com/hegemon-a-modular-system-and-hardware-monitoring-tool-for-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
Hegemon – A Modular System And Hardware Monitoring Tool For Linux
|
||||
======
|
||||
|
||||
I know that almost everybody prefers the **[top command][1]** to monitor system utilization.
|
||||
|
||||
It’s one of the best native commands, used by the vast majority of Linux administrators.
|
||||
|
||||
In Linux, there is an alternative for pretty much every package.
|
||||
|
||||
Many utilities are available for this purpose in Linux, and I prefer the **[htop command][2]**.
|
||||
|
||||
If you want to know about other alternatives, I would suggest you follow each link to learn more about them.
|
||||
|
||||
Those are htop, CorFreq, glances, atop, Dstat, Gtop, Linux Dash, Netdata, Monit, etc.
|
||||
|
||||
All these tools only let us monitor system utilization, not the system hardware.
|
||||
|
||||
But Hegemon lets us monitor both in a single dashboard.
|
||||
|
||||
If you are looking for hardware monitoring only, then I would suggest you check out the **[lm_sensors][3]** and **[s-tui Stress Terminal UI][4]** utilities.
|
||||
|
||||
### What’s Hegemon?
|
||||
|
||||
Hegemon is a work-in-progress modular system monitor written in safe Rust.
|
||||
|
||||
It allows users to monitor both system utilization and hardware temperatures in a single dashboard.
|
||||
|
||||
### Currently Available Features in Hegemon
|
||||
|
||||
* Monitor CPU and memory usage, temperatures, and fan speeds
|
||||
* Expand any data stream to reveal a more detailed graph and additional information
|
||||
* Adjustable update interval
|
||||
* Clean MVC architecture with good code quality
|
||||
* Unit tests
|
||||
|
||||
|
||||
|
||||
### Planned Features include
|
||||
|
||||
* macOS and BSD support (only Linux is supported at the moment)
|
||||
* Monitor disk and network I/O, GPU usage (maybe), and more
|
||||
* Select and reorder data streams
|
||||
* Mouse control
|
||||
|
||||
|
||||
|
||||
### How to Install Hegemon in Linux?
|
||||
|
||||
Hegemon requires Rust 1.26 or later and the development files for libsensors. So, make sure these packages are installed before you perform the Hegemon installation.
|
||||
|
||||
The libsensors library package is available in most distributions' official repositories, so use the following command to install it.
|
||||
|
||||
For **`Debian/Ubuntu`** systems, use the **[APT-GET Command][5]** or **[APT Command][6]** to install libsensors on your system.
|
||||
|
||||
```
# apt install libsensors4-dev
```
|
||||
|
||||
For **`Fedora`** system, use **[DNF Package Manager][7]** to install libsensors on your system.
|
||||
|
||||
```
# dnf install lm_sensors-devel
```
|
||||
|
||||
Run the following command to install the Rust programming language and follow the instructions. Navigate to the following URL if you want a handy tutorial for **[Rust installation][8]**.
|
||||
|
||||
```
$ curl https://sh.rustup.rs -sSf | sh
```
|
||||
|
||||
Once you have successfully installed Rust, run the following command to install Hegemon.
|
||||
|
||||
```
$ cargo install hegemon
```
|
||||
|
||||
### How to Launch Hegemon in Linux?
|
||||
|
||||
Once you have successfully installed the Hegemon package, run the command below to launch it.
|
||||
|
||||
```
$ hegemon
```
|
||||
|
||||
![][10]
|
||||
|
||||
I faced an issue when launching the Hegemon application, due to a missing libsensors.so.4 library.
|
||||
|
||||
```
$ hegemon
error while loading shared libraries: libsensors.so.4: cannot open shared object file: No such file or directory
```
|
||||
|
||||
I’m using Manjaro 18.04. It ships the libsensors.so and libsensors.so.5 shared libraries, but not libsensors.so.4, so I just created the following symlink to fix the issue.
|
||||
|
||||
```
$ sudo ln -s /usr/lib/libsensors.so /usr/lib/libsensors.so.4
```
|
||||
|
||||
Here is a sample GIF taken on my Lenovo Y700 laptop.
|
||||
![][11]
|
||||
|
||||
By default, it shows only an overall summary; if you would like to see detailed output, you need to expand each section. See the expanded output from Hegemon below.
|
||||
![][12]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/hegemon-a-modular-system-and-hardware-monitoring-tool-for-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/top-command-examples-to-monitor-server-performance/
|
||||
[2]: https://www.2daygeek.com/linux-htop-command-linux-system-performance-resource-monitoring-tool/
|
||||
[3]: https://www.2daygeek.com/view-check-cpu-hard-disk-temperature-linux/
|
||||
[4]: https://www.2daygeek.com/s-tui-stress-terminal-ui-monitor-linux-cpu-temperature-frequency/
|
||||
[5]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[6]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[7]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[8]: https://www.2daygeek.com/how-to-install-rust-programming-language-in-linux/
|
||||
[9]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/hegemon-a-modular-system-and-hardware-monitoring-tool-for-linux-1.png
|
||||
[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/hegemon-a-modular-system-and-hardware-monitoring-tool-for-linux-2a.gif
|
||||
[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/hegemon-a-modular-system-and-hardware-monitoring-tool-for-linux-3.png
|
@ -0,0 +1,183 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Linux Tools: The Meaning of Dot)
|
||||
[#]: via: (https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot)
|
||||
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
|
||||
|
||||
Linux Tools: The Meaning of Dot
|
||||
======
|
||||
|
||||

|
||||
|
||||
Let's face it: writing one-liners and scripts using shell commands can be confusing. Many of the names of the tools at your disposal are far from obvious in terms of what they do ( _grep_ , _tee_ and _awk_ , anyone?) and, when you combine two or more, the resulting "sentence" looks like some kind of alien gobbledygook.
|
||||
|
||||
None of the above is helped by the fact that many of the symbols you use to build a chain of instructions can mean different things depending on their context.
|
||||
|
||||
### Location, location, location
|
||||
|
||||
Take the humble dot (`.`) for example. Used with instructions that are expecting the name of a directory, it means "this directory" so this:
|
||||
|
||||
```
find . -name "*.jpg"
```
|
||||
|
||||
translates to " _find in this directory (and all its subdirectories) files that have names that end in`.jpg`_ ".
|
||||
|
||||
Both `ls .` and `cd .` act as expected, so they list and "change" to the current directory, respectively, although including the dot in these two cases is not necessary.
|
||||
|
||||
Two dots, one after the other, in the same context (i.e., when your instruction is expecting a directory path) means " _the directory immediately above the current one_ ". If you are in _/home/your_directory_ and run
|
||||
|
||||
```
cd ..
```
|
||||
|
||||
you will be taken to _/home_. So, you may think this still kind of fits into the “dots represent nearby directories” narrative and is not complicated at all, right?
|
||||
|
||||
How about this, then? If you use a dot at the beginning of a directory or file, it means the directory or file will be hidden:
|
||||
|
||||
```
$ touch somedir/file01.txt somedir/file02.txt somedir/.secretfile.txt
$ ls -l somedir/
total 0
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file01.txt
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file02.txt
$ # Note how there is no .secretfile.txt in the listing above
$ ls -la somedir/
total 8
drwxr-xr-x 2 paul paul 4096 Jan 13 19:57 .
drwx------ 48 paul paul 4096 Jan 13 19:57 ..
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file01.txt
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file02.txt
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 .secretfile.txt
$ # The -a option tells ls to show "all" files, including the hidden ones
```
|
||||
|
||||
And then there's when you use `.` as a command. Yep! You heard me: `.` is a full-fledged command. It is a synonym of `source` and you use that to execute a file in the current shell, as opposed to running a script some other way (which usually means Bash will spawn a new shell in which to run it).
|
||||
|
||||
Confused? Don't worry -- try this: Create a script called _myscript_ that contains the line
|
||||
|
||||
```
myvar="Hello"
```
|
||||
|
||||
and execute it the regular way, that is, with `sh myscript` (or by making the script executable with `chmod a+x myscript` and then running `./myscript`). Now try and see the contents of `myvar` with `echo $myvar` (spoiler: You will get nothing). This is because, when your script plunks " _Hello_ " into `myvar`, it does so in a separate bash shell instance. When the script ends, the spawned instance disappears and control returns to the original shell, where `myvar` never even existed.
|
||||
|
||||
However, if you run _myscript_ like this:
|
||||
|
||||
```
. myscript
```
|
||||
|
||||
`echo $myvar` will print _Hello_ to the command line.
|
||||
|
||||
You will often use the `.` (or `source`) command after making changes to your _.bashrc_ file, [like when you need to expand your `PATH` variable][1]. You use `.` to make the changes available immediately in your current shell instance.
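For example, assuming you just added a line like `export PATH="$PATH:$HOME/bin"` to your _.bashrc_, you can pick the change up without opening a new terminal:

```
. ~/.bashrc
echo $PATH    # the new directory now shows up at the end
```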
|
||||
|
||||
### Double Trouble
|
||||
|
||||
Just like the seemingly insignificant single dot has more than one meaning, so has the double dot. Apart from pointing to the parent of the current directory, the double dot (`..`) is also used to build sequences.
|
||||
|
||||
Try this:
|
||||
|
||||
```
echo {1..10}
```
|
||||
|
||||
It will print out the list of numbers from 1 to 10. In this context, `..` means " _starting with the value on my left, count up to the value on my right_ ".
|
||||
|
||||
Now try this:
|
||||
|
||||
```
echo {1..10..2}
```
|
||||
|
||||
You'll get _1 3 5 7 9_. The `..2` part of the command tells Bash to print the sequence not one by one, but two by two. In other words, you'll get all the odd numbers from 1 to 10.
|
||||
|
||||
It works backwards, too:
|
||||
|
||||
```
echo {10..1..2}
```
|
||||
|
||||
You can also pad your numbers with 0s. Doing:
|
||||
|
||||
```
echo {000..121..2}
```
|
||||
|
||||
will print out every even number from 0 to 120, padded to three digits, like this:
|
||||
|
||||
```
000 002 004 006 ... 050 052 054 ... 116 118 120
```
|
||||
|
||||
But how is this sequence-generating construct useful? Well, suppose one of your New Year's resolutions is to be more careful with your accounts. As part of that, you want to create directories in which to classify your digital invoices of the last 10 years:
|
||||
|
||||
```
mkdir {2009..2019}_Invoices
```
|
||||
|
||||
Job done.
|
||||
|
||||
Or maybe you have hundreds of numbered files, say, frames extracted from a video clip, and, for whatever reason, you want to remove only every third frame between frames 43 and 61:
|
||||
|
||||
```
rm frame_{043..61..3}
```
|
||||
|
||||
It is likely that, if you have more than 100 frames, they will be named with padded 0s and look like this:
|
||||
|
||||
```
frame_000 frame_001 frame_002 ...
```
|
||||
|
||||
That’s why you will use `043` in your command instead of just `43`.
|
||||
|
||||
### Curly~Wurly
|
||||
|
||||
Truth be told, the magic of sequences lies not so much in the double dot as in the sorcery of the curly braces (`{}`). Look how it works for letters, too. Doing:
|
||||
|
||||
```
touch file_{a..z}.txt
```
|
||||
|
||||
creates the files _file_a.txt_ through _file_z.txt_.
|
||||
|
||||
You must be careful, however. Using a sequence like `{Z..a}` will run through a bunch of non-alphanumeric characters (glyphs that are neither numbers nor letters) that live between the uppercase alphabet and the lowercase one. Some of these glyphs are unprintable or have a special meaning of their own. Using them to generate names of files could lead to a whole bevy of unexpected and potentially unpleasant effects.
|
||||
|
||||
One final thing worth pointing out about sequences encased between `{...}` is that they can also contain lists of strings:
|
||||
|
||||
```
touch {blahg,splurg,mmmf}_file.txt
```
|
||||
|
||||
Creates _blahg_file.txt_ , _splurg_file.txt_ and _mmmf_file.txt_.
|
||||
|
||||
Of course, in other contexts, the curly braces have different meanings (surprise!). But that is the stuff of another article.
|
||||
|
||||
### Conclusion
|
||||
|
||||
Bash and the utilities you can run within it have been shaped over decades by system administrators looking for ways to solve very particular problems. To say that sysadmins and their ways are their own breed of special would be an understatement. Consequently, as opposed to other languages, Bash was not designed to be user-friendly, easy or even logical.
|
||||
|
||||
That doesn't mean it is not powerful -- quite the contrary. Bash's grammar and shell tools may be inconsistent and sprawling, but they also provide a dizzying range of ways to do everything you can possibly imagine. It is like having a toolbox where you can find everything from a power drill to a spoon, as well as a rubber duck, a roll of duct tape, and some nail clippers.
|
||||
|
||||
Apart from being fascinating, it is also fun to discover all you can achieve directly from within the shell, so next time we will delve ever deeper into how you can build bigger and better Bash command lines.
|
||||
|
||||
Until then, have fun!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
|
||||
|
||||
作者:[Paul Brown][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/bro66
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linux.com/blog/learn/2018/12/bash-variables-environmental-and-otherwise
|
@ -0,0 +1,155 @@
|
||||
设计微服务架构前,您应该了解的5项指导原则
|
||||
======
|
||||
|
||||

|
||||
对于从微服务开始的团队来说,最大的挑战之一就是坚持金发姑娘原则(The *Goldilocks principle*):不要太大, 不要太小,不能太紧密耦合。之所以是挑战的原因是会对究竟什么是设计良好的微服务感到疑惑。
|
||||
|
||||
数十位 CTO 通过采访分享了他们的经验,这些对话说明了设计良好的微服务的五个特点。本文将帮助指导团队设计微服务。(有关详细信息,请查看即将出版的书籍 [Microservices for Startups][1]。)在深入了解这五个特征之前,本文将先简要介绍微服务的边界以及应当避免的主观“规则”。
|
||||
|
||||
### 微服务边界
|
||||
|
||||
[使用微服务开发新系统的核心优势][2]之一,是这种体系结构允许开发人员独立地构建和修改单个组件,但在尽量减少各 API 之间的回调数量方面可能会出现问题。[SparkPost][3] 的工程副总裁 Chris McFadden 给出的解决方法是:应用适当的服务边界。
|
||||
|
||||
关于边界,与<ruby>领域驱动设计<rt>domain-driven design</rt></ruby>(DDD,一种微服务框架)这类有时难以理解的抽象概念形成鲜明对比,本文重点介绍与我们行业的一些顶级 CTO 总结出的、用于建立明确定义的微服务边界的实用原则。
|
||||
|
||||
### 避免主观的 ”规则“
|
||||
|
||||
如果您阅读了足够多的关于设计和创建微服务的建议,您一定会遇到下面的一些 ”规则“。 尽管将它们用作创建微服务的指南很有吸引力, 但加入这些主观规则并不是思考确定微服务的边界的原则性方式。
|
||||
|
||||
#### ”微服务应该有 X 行代码“
|
||||
|
||||
让我们直说:微服务中有多少行代码没有限制。微服务不会因为您写了几行额外的代码而突然变成一个巨无霸。关键是要确保服务中的代码具有很高的内聚性 (稍后将对此进行更多介绍)。
|
||||
|
||||
#### “将每个功能转换为微服务”
|
||||
|
||||
如果函数基于三个输入值计算某些内容并返回结果,它是否是微服务的理想候选项?它是否应该是可单独部署的应用程序?这确实取决于该函数是什么以及它如何服务于整个系统。将每个函数都转换为微服务,在您的场景中可能根本没有意义。
|
||||
|
||||
其他主观规则还包括那些不考虑整体上下文的规则,例如团队的经验、DevOps(Development 和 Operations 的组合词)能力、服务正在执行的操作以及数据的可用性需求。
|
||||
|
||||
### 精心设计的服务的5个特点
|
||||
|
||||
如果您读过关于微服务的文章,您无疑会遇到关于什么是设计良好的服务的建议。简单地说:高内聚、低耦合。如果您不熟悉这些概念,可以查看[许多][4][文章][5]。虽然它们提供了合理的建议,但这些概念相当抽象。下面是基于与经验丰富的 CTO 们的对话总结出的、在创建设计良好的微服务时需要牢记的关键特征。
|
||||
|
||||
#### #1: 不与其他服务共享数据库表
|
||||
|
||||
在 SparkPost 的早期,Chris McFadden 和他的团队必须解决每个 SaaS(Software-as-a-Service,软件即服务)公司都要面对的问题:他们需要提供基本服务,如身份验证、帐户管理和计费。
|
||||
|
||||
为了解决这个问题,他们创建了两个微服务:用户 API 和帐户 API。用户 API 将处理用户帐户、API 密钥和身份验证,而帐户 API 将处理所有与计费相关的逻辑。一个非常符合逻辑的分离--但没过多久,他们发现了一个问题。
|
||||
|
||||
McFadden 解释说:“我们有一个名为‘用户 API’的服务,还有一个名为‘帐户 API’的服务。问题是,它们之间实际上有多次来回的调用。因此,你会在帐户服务中执行一些操作,然后调用用户服务的端点,反之亦然。”
|
||||
|
||||
这两个服务的耦合太紧密了。
|
||||
|
||||
在设计微服务时, 如果您有多个服务引用同一个表, 则它是一个危险的信号,因为这可能意味着您的数据库是耦合的源头。
|
||||
|
||||
这其实关乎服务与其数据的关系,这正是 Elastic 公司 [Swiftype SRE][6] 的负责人 Oleksiy Kovrin 告诉我的:“我们在开发新服务时使用的主要基本原则之一是,它们不应跨越数据库边界。每个服务都应依赖于自己的一组基础数据存储。这使我们能够集中进行访问控制、审计日志记录、缓存逻辑等。”
|
||||
|
||||
Kovrin 接着解释说,如果数据库表的子集 “与数据集的其余部分没有或很少连接,则这是一个强烈的信号, 表明组件可以被隔离到单独的 API 或单独的服务中”。
|
||||
|
||||
Darby Frey , [Lead Honestly][7] 的联合创始人,与此的观点相呼应:”每个服务都应该有自己的表 [并且] 永远不应该共享数据库表。“
#### #2: 数据库表数量最小化

微服务的理想大小是足够小,但不能更小。每个服务的数据库表数量同样如此。

[Scalyr][8] 的工程主管 Steven Czerwinski 在接受采访时解释说,根据 Scalyr 的经验,最佳选择是“每个服务一到两张数据库表”。

SparkPost 的 Chris McFadden 对此表示同意:“我们有一个<ruby>抑制<rt>suppression</rt></ruby>微服务,它处理、跟踪数以百万计甚至数十亿条与抑制相关的条目,但它的功能非常集中,只围绕抑制展开,所以实际上只有一两张表。其他服务也是如此,比如 webhooks 服务。”
#### #3: 考虑有状态和无状态

在设计微服务时,您需要问问自己:它需要访问数据库吗?还是说它会是一个处理 TB 级数据(如电子邮件或日志)的无状态服务?

[Algolia][9] 的 CTO Julien Lemoine 解释说:“我们通过定义服务的输入和输出来定义服务的边界。有时服务是一个网络 API,但它也可能是一个处理文件并在数据库中生成记录的进程(我们的日志处理服务就是这样)。”

事先明确服务是否有状态,将有助于设计出更好的服务。

#### #4: 考虑数据可用性需求

在设计微服务时,请记住哪些服务将依赖于这个新服务,以及当它的数据不可用时会对整个系统造成什么影响。考虑到这一点,您才能为这个服务正确地设计数据备份和恢复系统。

Steven Czerwinski 提到,在 Scalyr,由于关键的客户行空间映射数据十分重要,这些数据会以不同的方式进行复制和隔离。

他补充说:“而每个分片的信息都存放在自己的小分区里。如果分区宕机,那部分客户就无法使用他们的日志,这很糟糕,但它只会影响 5% 的客户,而不是 100% 的客户。”

#### #5: 成为唯一真实来源

设计服务时,应使其成为系统中某类信息的<ruby>唯一真实来源<rt>single source of truth</rt></ruby>。

例如,当您在电子商务网站下单时,系统会生成一个订单 ID,其他服务可以使用这个订单 ID 向订单服务查询有关订单的完整信息。使用[发布/订阅模式][10]时,在服务之间传递的数据应该是订单 ID,而不是订单本身的属性信息。只有订单服务拥有订单的完整信息,并且是给定订单的唯一真实来源。下面的示意说明了这一点。
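下面用一段 YAML 给出一个极简的示意(事件名 `order.created`、字段名和 ID 格式均为假设的示例,并非某个真实系统的约定),说明在发布/订阅模式下,事件里只携带订单 ID:

```
# 发布到消息总线上的事件(示意,字段名为虚构):
event: order.created
payload:
  order_id: "ord-12345"   # 订阅方凭此 ID 向订单服务查询完整订单信息
# 反模式:不要把订单的全部属性内联到事件里,
# 否则系统中就会出现多个“真实来源”
```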
### 大型团队的注意事项

在上面列出的五个特点之外,较大的团队还应了解其组织结构对微服务边界的影响。

在大型组织中,可能由一整个团队专门负责某个服务,这时在确定服务边界时,组织层面的考虑因素就会发挥作用。此外还有两个需要考虑的因素:**独立的发布计划**和**不同的正常运行时间要求**的重要性。

[Cloud66][11] 的 CEO Khash Sajadi 说:“我们所见过的微服务最成功的实现,要么基于反映软件设计原则(例如领域驱动设计和面向服务的体系结构)的设计,要么基于反映组织方式的设计。”

“所以对于支付团队来说,”Sajadi 继续说,“他们有支付服务或信用卡验证服务,这就是他们向外界提供的服务。这不一定是关于软件的,而主要是关于向外界提供更多服务的业务单元。”

### 双披萨原理

Amazon 是拥有多个团队的大型组织的完美示例。正如 [API Evangelist][12] 的一篇文章中所提到的,Jeff Bezos 向所有员工发布了一条指令,要求公司内的每个团队都必须通过 API 进行沟通。任何不这样做的人都会被解雇。

这样,所有数据和功能都通过接口公开了。Bezos 还设法让每个团队解耦,明确定义他们的资源是什么,并通过 API 将其开放。Amazon 从零开始构建了这样一个系统,这使得公司内的每一支团队都能成为彼此的合作伙伴。

[Iron.io][13] 的 CTO Travis Reeder 谈到了 Bezos 的这条内部指令。

“Jeff Bezos 规定所有团队都必须构建 API 来与其他团队沟通,”Reeder 说,“他也是提出‘双披萨’规则的人:一个团队的规模不应该大到两个披萨喂不饱。”

“我认为同样的方法也适用于这里:一个小团队能够开发、管理并保持高效的范围,就是合适的范围。如果它开始变得笨重,或者效率开始下降,那它可能就太大了。”Reeder 告诉我。
### 最后注意事项:您的服务大小是否合适、定义是否正确?

在微服务系统的测试和实施阶段,有一些指标需要牢记。

#### 指标 #1: 服务之间是否存在过度依赖?

如果两个服务不断地相互回调,那就是强耦合的信号,也是把它们合并为一个服务可能更好的信号。

回到 Chris McFadden 的例子:他有两个 API 服务,帐户服务和用户服务,它们不断地相互通信。McFadden 提出了合并这两个服务的想法,并决定把合并后的服务称为“accusers API”(由 accounts 和 users 两个词拼成)。事实证明,这是一项富有成效的策略。McFadden 告诉我:“我们开始做的,就是消除它们之间的这些内部 API 调用链接。这有助于简化代码。”

#### 指标 #2: 设置服务的开销是否超过了服务独立的好处?

Darby Frey 解释说:“每个应用都需要将其日志聚合到某个位置,并需要被监控。你需要为它设置警报,需要有标准的操作流程,以及在出现问题时可供执行的<ruby>运维手册<rt>runbook</rt></ruby>。你必须管理对它的 SSH 访问。仅仅为了让一个应用运行起来,就必须搭建大量的基础设施。”

### 要点总结

设计微服务往往让人感觉更像是一门艺术,而不是一门科学,这对工程师来说可能不太友好。市面上有很多通用的建议,但有时它们有点过于抽象。让我们回顾一下在设计下一组微服务时要注意的五个具体特征:
1. 不与其他服务共享数据库表
2. 数据库表数量最小化
3. 考虑有状态和无状态
4. 考虑数据可用性需求
5. 成为唯一真实来源

下次设计一组微服务并确定服务边界时,回顾这些原则将会使这项任务变得更容易。
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/4/guide-design-microservices

作者:[Jake Lumetta][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[lixinyuxx](https://github.com/lixinyuxx)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jakelumetta
[1]:https://buttercms.com/books/microservices-for-startups/
[2]:https://buttercms.com/books/microservices-for-startups/should-you-always-start-with-a-monolith
[3]:https://www.sparkpost.com/
[4]:https://thebojan.ninja/2015/04/08/high-cohesion-loose-coupling/
[5]:https://en.wikipedia.org/wiki/Single_responsibility_principle
[6]:https://www.elastic.co/solutions/site-search
[7]:https://leadhonestly.com/
[8]:https://www.scalyr.com/
[9]:https://www.algolia.com/
[10]:https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern
[11]:https://www.cloud66.com/
[12]:https://apievangelist.com/2012/01/12/the-secret-to-amazons-success-internal-apis/
[13]:https://www.iron.io/

@ -1,231 +0,0 @@
使用 Ansible 来管理你的工作站:配置自动化
======

Ansible 是一个令人惊讶的自动化配置管理工具,其主要应用在服务器和云部署上,但它在工作站(无论是台式机还是笔记本)上的应用却很少得到关注,而这正是本系列要介绍的。

在本系列的[第一部分][1]中,我向你展示了 `ansible-pull` 命令的基本用法,我们创建了一个安装少量软件包的<ruby>剧本<rt>playbook</rt></ruby>。它本身用处不大,但为后续的自动化做好了准备。

在这篇文章中,我们将完成闭环,到最后我们将拥有一个针对工作站自动配置的完整可用的解决方案。现在,我们要设置 Ansible 的配置,使得未来所做的改变能够自动应用到我们的工作站上。现阶段,假设你已经完成了[第一部分][1]的工作;如果没有,请先完成,再回到本文。你应该已经有一个包含第一篇文章中代码的 GitHub 库,我们将在之前工作的基础上直接继续。

首先,因为我们要做的不仅仅是安装软件包,所以需要做一些重新组织的工作。现在,我们已经有一个名为 `local.yml` 并包含以下内容的剧本:
```
- hosts: localhost
  become: true
  tasks:
    - name: Install packages
      apt: name={{item}}
      with_items:
        - htop
        - mc
        - tmux
```
如果我们仅仅想实现单个任务,那么上面的配置就足够了。但随着我们不断向配置中添加内容,这个文件会变得相当庞大和杂乱。最好能根据不同类型的配置,把我们的<ruby>动作<rt>play</rt></ruby>拆分到独立的文件中。为此,我们要创建名为<ruby>任务手册<rt>taskbook</rt></ruby>的文件,它和<ruby>剧本<rt>playbook</rt></ruby>很像,但内容更加精简。让我们在 Git 库中为任务手册创建一个目录:

```
mkdir tasks
```

`local.yml` 剧本中的代码可以很好地过渡为一个安装软件包的任务手册。让我们把这个文件移动到刚刚创建的 `tasks` 目录中,并重新命名:

```
mv local.yml tasks/packages.yml
```
现在,我们编辑 `packages.yml` 文件,对它大幅瘦身。事实上,我们可以精简到只剩下任务本身。让我们把 `packages.yml` 编辑成如下的形式:

```
- name: Install packages
  apt: name={{item}}
  with_items:
    - htop
    - mc
    - tmux
```
正如你所看到的,它使用同样的语法,但我们去掉了任务本身之外所有不必要的内容。现在我们有了一个专门用于安装软件包的任务手册。然而我们仍然需要一个名为 `local.yml` 的文件,因为执行 `ansible-pull` 命令时仍然会去找这个文件。所以我们将在库的根目录下(不是在 `tasks` 目录下)创建一个包含以下内容的全新文件:

```
- hosts: localhost
  become: true
  pre_tasks:
    - name: update repositories
      apt: update_cache=yes
      changed_when: False

  tasks:
    - include: tasks/packages.yml
```
这个新的 `local.yml` 扮演着索引的角色,负责导入我们所有的任务手册。我在这个文件中添加了一些你在本系列中还没见过的内容。首先,在文件的开头处,我添加了 `pre_tasks`,它的作用是让 Ansible 在运行其他所有任务之前,先运行这里指定的任务。在这个例子中,我们让 Ansible 更新发行版的软件库索引,下面这行配置执行的就是这个要求:

```
apt: update_cache=yes
```

通常 `apt` 模块是用来安装软件包的,但我们也能让它更新软件库索引。这样做的目的是让每个动作在 Ansible 运行时都基于最新的索引工作,从而确保我们不会因为索引老旧而在安装软件包时出现问题。注意 `apt` 模块仅适用于 Debian、Ubuntu 及它们的衍生发行版;如果你运行的是其他发行版,应该使用特定于你的发行版的模块,具体请查看 Ansible 的相关文档。下面给出一个示例。
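例如(这只是一个示意,假设你用的是 Fedora 这类使用 DNF 包管理器的发行版),前面安装软件包的任务可以换用 `dnf` 模块来写:

```
# 与 apt 版本等价的任务(示意,适用于使用 DNF 的发行版)
- name: Install packages
  dnf: name={{item}} state=present
  with_items:
    - htop
    - mc
    - tmux
```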
下面这行也需要进一步解释:

```
changed_when: False
```
在某个任务中加上这行,可以阻止 Ansible 在该任务引起系统改变时报告“changed”结果。在这里,我们并不在意软件库索引是否包含了新数据;它几乎总是会有新数据,因为软件库总是在变化,索引更新是正常的过程,并不算真正的系统变更。如果我们删除这行,即使仅仅是索引更新,我们也会在运行报告的末尾看到变动统计。这类改变最好忽略掉。

接下来是常规任务的阶段,我们在这里导入创建好的任务手册。以后每次添加新的任务手册时,都要在这里添加一行:

```
tasks:
  - include: tasks/packages.yml
```
如果你现在运行 `ansible-pull` 命令,它应该和上一篇文章中做的事情基本相同。不同的是,我们改进了组织方式,能够更有效地扩展它。为了省去你回上一篇文章查找的麻烦,`ansible-pull` 命令的语法参考如下:

```
sudo ansible-pull -U https://github.com/<github_user>/ansible.git
```

如果你还记得的话,`ansible-pull` 命令会拉取一个 Git 仓库并应用它所包含的配置。

既然我们的基础已经搭建好,现在就可以扩展我们的 Ansible 配置并添加功能了。具体来说,我们将添加配置,使对工作站的改动能够自动部署。为此,首先要创建一个专门用于应用 Ansible 配置的账户。这不是必须的,我们仍然可以在自己的用户下运行 Ansible 配置,但使用一个单独的用户能将其隔离为一个不需要我们参与、在后台运行的系统进程。
我们可以使用常规的方式来创建这个用户,但既然我们正在使用 Ansible,就应该尽量避免手动改动。取而代之,我们将创建一个任务手册来处理用户创建。这个任务手册目前只创建一个用户,但你以后可以在其中添加额外的动作来创建更多用户。我将这个用户命名为 `ansible`,你也可以按自己的想法来命名(如果你改了名字,要确保同步更新后面所有用到它的地方)。让我们创建一个名为 `users.yml` 的任务手册,并写入以下代码:

```
- name: create ansible user
  user: name=ansible uid=900
```
下一步,我们需要编辑 `local.yml` 文件,把这个新的任务手册添加进去,像这样:

```
- hosts: localhost
  become: true
  pre_tasks:
    - name: update repositories
      apt: update_cache=yes
      changed_when: False

  tasks:
    - include: tasks/users.yml
    - include: tasks/packages.yml
```
现在,当我们运行 `ansible-pull` 命令的时候,系统中会创建一个名为 `ansible` 的用户。注意我特地通过 `uid` 参数为这个用户声明了用户 ID 900。这不是必须的,但建议固定一个 UID。因为低于 1000 的 UID 不会显示在登录界面上,这正合适,我们根本不需要用 `ansible` 账户登录桌面。UID 900 是随意选的,它可以是 1000 以下任何未被占用的数值。你可以用以下命令检查 UID 900 是否已被系统使用:

```
cat /etc/passwd | grep 900
```

不过,使用这个 UID 应该不会遇到什么问题,因为迄今为止,在我使用过的任何发行版中都没有遇到它被默认占用的情况。
现在我们已经有了一个名为 `ansible` 的账户,它将用于之后的自动化配置。接下来,我们可以创建真正的定时作业来自动运行配置。我们不把它放进刚刚创建的 `users.yml` 文件,而是单独放到它自己的文件中。在任务目录中创建一个名为 `cron.yml` 的任务手册,并写入以下代码:

```
- name: install cron job (ansible-pull)
  cron: user="ansible" name="ansible provision" minute="*/10" job="/usr/bin/ansible-pull -o -U https://github.com/<github_user>/ansible.git > /dev/null"
```
`cron` 模块的语法几乎是自解释的。通过这个动作,我们创建了一个以用户 `ansible` 身份运行的定时作业。这个作业每隔 10 分钟执行一次,执行的命令如下:

```
/usr/bin/ansible-pull -o -U https://github.com/<github_user>/ansible.git > /dev/null
```
同样,我们也可以把希望所有工作站都部署的额外定时作业添加到这个文件中,只需为新的定时作业添加额外的动作即可。然而,仅仅添加这个定时任务手册还不够,我们还需要把它添加到 `local.yml` 文件中,它才能被调用。将下面一行添加到 `tasks` 小节的末尾:

```
- include: tasks/cron.yml
```

现在,当 `ansible-pull` 命令执行时,它将设置一个新的定时作业,以用户 `ansible` 的身份每隔十分钟运行一次。但是,每隔十分钟运行一次完整的 Ansible 任务并不是一个好主意,因为这会消耗不少 CPU 资源。除非 Git 库中确实有改动,否则每十分钟运行一次 Ansible 毫无意义。

不过,这个问题已经有了解决办法。注意我在定时作业调用的 `ansible-pull` 命令中添加了一个我们之前从未用过的参数 `-o`。这个参数告诉 Ansible:只有当库相比上次 `ansible-pull` 运行时发生了变化,才真正运行配置;如果库没有任何变化,它将什么都不做。这样你就不会无谓地浪费 CPU 资源。当然,拉取存储库本身会用掉一些 CPU,但远比重新应用整个配置要少。当 `ansible-pull` 真正执行时,它会遍历剧本和任务手册中的所有任务,但至少它不会毫无目的地空跑。如果你想提前验证这个行为,可以参考下面的示意。
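你可以手动连续运行两次与定时作业相同的命令(仓库地址仍为占位符),来观察 `-o` 参数的效果;第二次运行时,只要仓库没有新的提交,它就不会重新应用配置:

```
sudo ansible-pull -o -U https://github.com/<github_user>/ansible.git
```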
尽管我们已经添加了自动运行 `ansible-pull` 所需的所有配置要素,它仍然无法正常工作。`ansible-pull` 命令需要 sudo 权限才能执行系统级的命令,然而我们创建的用户 `ansible` 并没有被授予 `sudo` 权限,因此当定时作业触发时,执行将会失败。通常我们可以用 `visudo` 命令手动为用户 `ansible` 设置这个权限,但现在我们应该以 Ansible 的方式来操作,而且这也是一个展示 `copy` 模块如何工作的机会。`copy` 模块允许你把库中的一个文件复制到文件系统的任何位置。在这个案例中,我们将把一个 `sudo` 配置文件复制到 `/etc/sudoers.d/`,以便用户 `ansible` 能够以管理员权限执行任务。
打开 `users.yml`,将下面的<ruby>动作<rt>play</rt></ruby>添加到文件末尾:

```
- name: copy sudoers_ansible
  copy: src=files/sudoers_ansible dest=/etc/sudoers.d/ansible owner=root group=root mode=0440
```
正如我们所见,`copy` 模块会把库中的一个文件复制到文件系统的其他位置。在这个动作中,我们取出名为 `sudoers_ansible` 的文件(稍后创建),并把它复制为 `/etc/sudoers.d/ansible`,所有者为 `root`。

接下来,我们需要创建将要复制的文件。在库的根目录下,创建一个名为 `files` 的目录:

```
mkdir files
```

然后,在刚刚创建的 `files` 目录里,创建名为 `sudoers_ansible` 的文件,包含以下内容:

```
ansible ALL=(ALL) NOPASSWD: ALL
```
像这样在 `/etc/sudoers.d` 目录里创建一个文件,可以为特定用户配置 `sudo` 权限。这里我们允许用户 `ansible` 无需密码即可通过 `sudo` 获得完全控制权限,这使得 `ansible-pull` 可以作为后台任务运行而无需人工干预。

现在,你可以再次运行 `ansible-pull` 来拉取最新的变动:

```
sudo ansible-pull -U https://github.com/<github_user>/ansible.git
```
从这时起,`ansible-pull` 的定时作业将每隔十分钟在后台运行一次,检查你的库是否有变化;一旦发现变化,就会运行你的剧本并应用你的任务手册。

至此,我们有了一个完整可用的方案。当你第一次设置一台新的笔记本或台式机时,你需要手动运行 `ansible-pull` 命令,但也仅仅是第一次。从那之后,用户 `ansible` 会在后台接手后续的运行任务。当你想对你的机器做改动时,只需要拉取你的 Git 库、做出改动,然后把这些变化推送回库中。接着,当定时作业下次在每台机器上运行时,它将拉取变动的部分并应用它们。你现在只需要做一次变动,你的所有工作站都会跟着一起变动。诚然,这个方法有点不走寻常路:通常你会有一个列出所有机器的清单文件,以及适用于不同机器的规则。不管怎样,正如本文所描述的,`ansible-pull` 方法是管理工作站配置的一种非常有效的方式。

我已经在我的 [GitHub 仓库][2]中更新了这篇文章中的代码,你可以随时浏览,对照检查自己的语法。同时,我把前一篇文章中的代码移动到了它自己的目录中。

在第三部分,我们将通过介绍用 Ansible 配置 GNOME 桌面设置来结束这个系列。我会告诉你如何设置墙纸和锁屏壁纸、应用桌面主题,以及更多内容。

同时,是时候布置一些作业了。大多数人都有各种各样的应用配置文件,可能是 Bash、Vim,或者其他你常用工具的配置文件。现在,你可以尝试利用我们一直在使用的这个 Ansible 库,自动把这些配置文件分发到你的机器上。在这篇文章中,我已经向你展示了如何复制文件,试着运用这些知识看看;下面给出一个可供参考的示意。
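作为起点,下面是一个极简的示意(假设你把自己的 Bash 配置存放在库中的 `files/bashrc`,其中的路径和 `<your_user>` 占位符都只是举例),演示如何用 `copy` 模块部署一个配置文件:

```
# 把库中的 files/bashrc 部署为指定用户的 ~/.bashrc(示例,路径和用户名为假设)
- name: copy bashrc
  copy: src=files/bashrc dest=/home/<your_user>/.bashrc owner=<your_user> mode=0644
```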
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/manage-your-workstation-configuration-ansible-part-2

作者:[Jay LaCroix][a]
译者:[FelixYFZ](https://github.com/FelixYFZ)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jlacroix
[1]:https://opensource.com/article/18/3/manage-workstation-ansible
[2]:https://github.com/jlacroix82/ansible_article.git