mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-01-13 22:30:37 +08:00)
Merge remote-tracking branch 'LCTT/master' (commit 1b461e56d9)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11784-1.html)
[#]: subject: (Generating numeric sequences with the Linux seq command)
[#]: via: (https://www.networkworld.com/article/3511954/generating-numeric-sequences-with-the-linux-seq-command.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Generating numeric sequences with the Linux seq command
======

> The Linux seq command can generate lists of numbers at lightning speed, and it is easy to use and flexible, too.

[Jamie][1] [(CC BY 2.0)][2]

![](https://img.linux.net.cn/data/attachment/album/202001/15/112717drpb9nuwss84xebu.jpg)

One of the easiest ways to generate a list of numbers in Linux is to use the `seq` (sequence) command. In its simplest form, `seq` takes a single number argument and outputs a list from 1 to that number. For example:
```
$ seq 5
1
2
3
4
5
```

Unless otherwise specified, `seq` always starts at 1. You can start a sequence at a different number by inserting it in front of the final one.

```
$ seq 3 5
3
4
5
```

### Specifying an increment

You can also specify an increment. Say you want to list multiples of 3: specify the starting point (the first 3 in this example), the increment (the second 3), and the end point (18).

```
$ seq 3 3 18
3
6
9
12
15
18
```
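A small extra sketch, not in the original article: GNU `seq` also accepts fractional increments, so you can step by halves (or any other fraction):

```shell
#!/bin/sh
# Step from 1 to 3 in increments of 0.5; GNU seq accepts
# fractional start, increment, and end values.
seq 1 0.5 3
```

This prints five values, from 1 up to 3 in steps of 0.5.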
You can also count down by using a negative increment:

```
$ seq 18 -3 3
18
15
12
9
6
3
```

The `seq` command is also very fast. You can probably generate a list of a million numbers in less than 10 seconds.

```
$ time seq 1000000
[...]
user    0m0.020s
sys     0m0.899s
```

### Using a separator

Another very useful option is using a separator. Instead of listing a single number on each line, you can join the numbers with commas, colons, or other characters. The `-s` option is followed by the character to use.

```
$ seq -s: 3 3 18
3:6:9:12:15:18
```

If you use a blank as the separator, the numbers all appear on a single line:

```
$ seq -s' ' 3 3 18
3 6 9 12 15 18
```

### Moving on to math

Going from generating sequences of numbers to doing math may seem like a big leap, but with the right separator, `seq` can easily be handed off to `bc` for calculation. For example:

```
$ seq -s* 5 | bc
120
```

What happened in that command? Let's take a look. First, `seq` generates a list of numbers using `*` as the separator.

```
$ seq -s* 5
1*2*3*4*5
```

It then passes the string to the calculator (`bc`), which immediately multiplies the numbers. You can perform quite sizable calculations in less than a second.

```
$ time seq -s* 117 | bc
[...]
sys     0m0.000s
```

### Limitations

You can only choose a single separator, so the calculations will be very limited; `bc` used on its own can do far more complex math. In addition, `seq` works only with numbers. To generate a sequence of single letters, use a command like this instead:

```
$ echo {a..g}
a b c d e f g
```
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3511954/generating-numeric-sequences-with-the-linux-seq-command.html

Author: [Sandra Henry-Stocker][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11786-1.html)
[#]: subject: (How piwheels will save Raspberry Pi users time in 2020)
[#]: via: (https://opensource.com/article/20/1/piwheels)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
How piwheels will save Raspberry Pi users time in 2020
======

> By providing pre-compiled Python packages for the Raspberry Pi, the piwheels project saves users a great deal of time and effort.

![rainbow colors on pinwheels in the sun][1]

piwheels automatically builds Python wheels (pre-compiled Python packages) for all of the projects on the Python Package Index, [PyPI][2], using Raspberry Pi hardware to ensure compatibility. This means that when a Raspberry Pi user wants to install a Python library with `pip`, they get a ready-built version that is guaranteed to work well on the Raspberry Pi, which makes it much easier to get started on a project.

![Piwheels logo][3]

When I wrote about [piwheels: fast Python package installation for the Raspberry Pi][4] in October 2018, the piwheels project was a year old and had already proven its value in saving Raspberry Pi users a great deal of time and effort. But as the project entered its second year, it has done much more to provide pre-compiled Python packages for the Raspberry Pi.

![Raspberry Pi 4][5]
### How it works

[Raspbian][6], the Raspberry Pi's primary operating system, comes pre-configured to use piwheels, so users don't need to do anything special to benefit from it.

A configuration file (at `/etc/pip.conf`) tells `pip` to use [piwheels.org][7] as an *additional index*, so `pip` looks at PyPI first and then at piwheels. The piwheels website is hosted on a Raspberry Pi 3, and all of the wheels built by the project are hosted on that Pi. It serves more than one million packages per month, which isn't bad for a $35 computer!
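As a sketch, that additional-index configuration in `/etc/pip.conf` looks roughly like this (the exact file shipped with Raspbian may differ slightly):

```
[global]
extra-index-url=https://www.piwheels.org/simple
```

This matches the behavior described above: `pip` consults PyPI first, then the piwheels index.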
In addition to the main Raspberry Pi that serves the website, the piwheels project uses seven other Raspberry Pis to build the packages. Some run Raspbian Jessie to build wheels for Python 3.4, some run Raspbian Stretch to build for Python 3.5, and some run Raspbian Buster to build for Python 3.7; the project doesn't generally support other Python versions. There is also a "proper server": a virtual machine running a Postgres database. Because the Raspberry Pi 3 has only 1GB of RAM, the (very large) database doesn't run well on it, so we moved it to the VM. The Raspberry Pi 4 with 4GB of RAM would probably be suitable, so we may use one in the future.

These Raspberry Pis all sit on an IPv6 network in the "Pi cloud", an excellent service provided by Cambridge-based hosting company [Mythic Beasts][8].

![Mythic Beasts hosting service][9]
### Downloads and trends

Every time a file is downloaded from piwheels, it is logged in the database. This provides statistics on which packages are most popular and which Python versions and operating systems people are using. We don't get much information from the user agent, but because the architecture of a Raspberry Pi 1/Zero shows up as "armv6" and a Pi 2/3/4 shows up as "armv7", we can tell them apart.

As of mid-December 2019, more than 14 million packages had been downloaded from piwheels, nearly 9 million of them in 2019 alone.

The 10 most popular packages since the project began are:

1. [pycparser][10] (821,060 downloads)
2. [PyYAML][11] (366,979 downloads)
3. [numpy][12] (354,531 downloads)
4. [cffi][13] (336,982 downloads)
5. [MarkupSafe][14] (318,878 downloads)
6. [future][15] (282,349 downloads)
7. [aiohttp][16] (277,046 downloads)
8. [cryptography][17] (276,167 downloads)
9. [home-assistant-frontend][18] (266,667 downloads)
10. [multidict][19] (256,185 downloads)
Note that many pure-Python packages, such as [urllib3][20], are provided as wheels on PyPI; because those are cross-platform compatible, they usually aren't downloaded from piwheels, since PyPI takes precedence.

Over time, we've also seen trends in which Python versions are in use. Here you can see the rapid shift from Python 3.5 to 3.7 when Raspbian Buster was released:

![Data from piwheels on Python versions used over time][21]

You can see more trends in our [stats blog post][22].
### Time saved

Every package build is logged in the database, and every download is stored too. Cross-referencing download counts with build durations shows how much time has been saved. One example is numpy: the latest version takes about 11 minutes to build.

To date, the piwheels project has saved users a total of more than 165 years of build time. At the current usage rate, it saves over 200 days of build time every day.

Besides saving build time, having pre-compiled wheels also means people don't have to install various development tools just to build packages. Some packages require other apt packages for the shared libraries they access. Figuring out which ones you need can be painful, so we made that step easier too. First, we worked out the process and [documented it on the blog][23]. Then we added that logic to the build process, so that when a wheel is built, its dependencies are calculated automatically and added to the package's project page:
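A sketch of what that looks like for a user (the apt package names below are illustrative assumptions only; the authoritative dependency list for each package appears on its piwheels project page):

```
$ sudo apt install libatlas3-base libgfortran5
$ pip3 install numpy
```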
![numpy dependencies][24]

### What's next for piwheels?

This year, we launched project pages (for example, [numpy][25]), which are a very useful way for people to look up information about a project in a human-readable way. They also make it easier for people to report problems, such as a project missing from piwheels or an issue with a package they downloaded.

In early 2020, we're planning some upgrades to the piwheels project to enable a new JSON API, so you can automatically check which versions are available, look up a project's dependencies, and more.

The next Debian/Raspbian upgrade won't happen until mid-2021, so we won't start building wheels for any new Python versions until then.

You can read more about piwheels on the project's [blog][26], where I'll be publishing a 2019 review in early 2020. You can also follow [@piwheels][27] on Twitter, which posts daily and monthly statistics along with any milestones reached.

Of course, piwheels is an open source project, and you can see the entire project's source code on [GitHub][28].
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/piwheels

Author: [Ben Nuttall][a]
Topic selection: [lujun9972][b]
Translator: [heguangzhi](https://github.com/heguangzhi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rainbow-pinwheel-piwheel-diversity-inclusion.png?itok=di41Wd3V (rainbow colors on pinwheels in the sun)
[2]: https://pypi.org/
[3]: https://opensource.com/sites/default/files/uploads/piwheels.png (Piwheels logo)
[4]: https://opensource.com/article/18/10/piwheels-python-raspberrypi
[5]: https://opensource.com/sites/default/files/uploads/raspberry-pi-4_0.jpg (Raspberry Pi 4)
[6]: https://www.raspberrypi.org/downloads/raspbian/
[7]: http://piwheels.org
[8]: https://www.mythic-beasts.com/order/rpi
[9]: https://opensource.com/sites/default/files/uploads/pi-cloud.png (Mythic Beasts hosting service)
[10]: https://www.piwheels.org/project/pycparser
[11]: https://www.piwheels.org/project/PyYAML
[12]: https://www.piwheels.org/project/numpy
[13]: https://www.piwheels.org/project/cffi
[14]: https://www.piwheels.org/project/MarkupSafe
[15]: https://www.piwheels.org/project/future
[16]: https://www.piwheels.org/project/aiohttp
[17]: https://www.piwheels.org/project/cryptography
[18]: https://www.piwheels.org/project/home-assistant-frontend
[19]: https://www.piwheels.org/project/multidict
[20]: https://piwheels.org/project/urllib3/
[21]: https://opensource.com/sites/default/files/uploads/pyvers2019.png (Data from piwheels on Python versions used over time)
[22]: https://blog.piwheels.org/piwheels-stats-for-2019/
[23]: https://blog.piwheels.org/how-to-work-out-the-missing-dependencies-for-a-python-package/
[24]: https://opensource.com/sites/default/files/uploads/numpy-deps.png (numpy dependencies)
[25]: https://www.piwheels.org/project/numpy/
[26]: https://blog.piwheels.org/
[27]: https://twitter.com/piwheels
[28]: https://github.com/piwheels/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What communities of practice can do for your organization)
[#]: via: (https://opensource.com/open-organization/20/1/why-build-community-of-practice)
[#]: author: (Tracy Buckner https://opensource.com/users/tracyb)

What communities of practice can do for your organization
======

In open organizations, fostering passionate communities can increase collaboration, accelerate problem solving, and lead to greater innovation.

![Lots of people in a crowd.][1]
As I discussed in the [first part of this series][2], community is a fundamental principle in open organizations. In open organizations, people often define their roles, responsibilities, and affiliations through shared interests and passions—[not title, role, or position on an organizational chart][3].

So fostering and supporting communities of practice (CoPs) can be a great strategy for building a more open organization. Members of communities of practice interact regularly, discuss various topics, and solve problems collectively and collaboratively. These communities provide members with an opportunity to share their expertise, learn from others, and network with one another.

As Luis Gonçalves states in [Learning Organisation][4]:

> CoPs provide a shared context where members of the organisation can communicate and share information; stimulate learning through peer-to-peer mentoring, coaching, self-reflection, and collaboration; generate new knowledge; and initiate projects that develop tangible results.

But those aren't the only value such communities can offer. Communities of practice can also enhance an organization in the following ways (summarized in Figure 1).

**Decreased learning curves.** Many organizations face the challenge of rapidly increasing productivity of new employees (or members). This task becomes especially challenging in organizations where an employee's manager may be located in a different state, country, or region. Communities of practice offer new employees a network of connections they can tap into more quickly and easily. By joining a CoP, they can immediately access a network of experts and share resources, ask questions, and seek guidance outside the formal lines of the organizational chart.

**Increased collaboration.** A recent survey from MyCustomer.com [shows][5] that 40 percent of company employees report not feeling adequately supported by their colleagues—because "different departments have their own agendas." A lack of collaboration between departments limits innovation and increases opportunities for miscommunication. Communities of practice encourage members from _all_ roles across _all_ departments to unite in sharing their expertise. This increases collaboration and reduces the threat of organizational silos.

**Rapid problem-solving.** Communities of practice provide a centralized location for communication and resources useful for solving organizational or business problems. Enabling people to come together—regardless of their organizational reporting structure, location, and/or management structure—encourages problem-solving and can lead to faster resolution of those problems.

**Enhanced innovation.** Researchers Pouwels and Koster [recently argued][6] that "collaboration contributes to innovation." CoPs provide a unique opportunity for members to collaborate on topics within their shared domains of interest and passion. This passion ignites a desire to discover new and innovative ways to solve problems and create new ideas.

![Benefits of Communities of Practice][7]

_Figure 1: Benefits of communities of practice. Courtesy of Tracy Buckner. CC BY-SA._

[Étienne Wenger][8], an educational theorist and proponent of communities of practice, said that learning doesn't only occur through a master; it also happens among apprentices. Communities of practice foster learning by connecting apprentices, while encouraging collaboration and offering an opportunity for creative problem-solving and innovation.

In the final article of this series, I'll explain how you can reap these benefits by creating your own community of practice.
--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/20/1/why-build-community-of-practice

Author: [Tracy Buckner][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/tracyb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community2.png?itok=1blC7-NY (Lots of people in a crowd.)
[2]: https://opensource.com/open-organization/19/11/what-is-community-practice
[3]: https://opensource.com/open-organization/resources/open-org-definition
[4]: https://www.organisationalmastery.com/category/learning-organisation/
[5]: https://www.mycustomer.com/experience/engagement/the-stats-that-prove-silos-are-your-biggest-cx-challenge
[6]: https://www.researchgate.net/profile/Ferry_Koster/publication/313659568_Inter-organizational_cooperation_and_organizational_innovativeness_A_comparative_study/links/59e64d510f7e9b13aca3c224/Inter-organizational-cooperation-and-organizational-innovativeness-A-comparative-study.pdf
[7]: https://opensource.com/sites/default/files/images/open-org/cop_benefits.png (Benefits of Communities of Practice)
[8]: https://en.wikipedia.org/wiki/%C3%89tienne_Wenger
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Instant, secure ‘teleportation’ of data in the works)
[#]: via: (https://www.networkworld.com/article/3512037/instant-secure-teleportation-of-data-in-the-works.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Instant, secure ‘teleportation’ of data in the works
======

Quantum teleportation, where information is sent instantaneously, will secure the Internet, researchers say. Scientists are making progress.

Thinkstock
Sending information instantly between two computer chips using quantum teleportation has been accomplished reliably for the first time, according to scientists from the University of Bristol, in collaboration with the Technical University of Denmark (DTU). Data was exchanged without any electrical or physical connection – a transmission method that may influence the next generation of ultra-secure data networks.

Teleportation involves the moving of information instantaneously and securely. In the “Star Trek” series, fictional people move immediately from one place to another via teleportation. In the University of Bristol experiment, data is passed instantly via a single quantum state across two chips using light particles, or photons. Importantly, each of the two chips knows the characteristics of the other because they’re entangled through quantum physics, meaning they share a single physics-based state.

The researchers involved in these successful silicon tests said they built the photon-based silicon chips in a lab and then used them to encode the quantum information in single particles. It was “a high-quality entanglement link across two chips, where photons on either chip share a single quantum state,” said Dan Llewellyn of the University of Bristol in a [press release][1].
“These chips are able to encode quantum information in light generated inside the circuits and can process the quantum information,” the school stated. It claims a successful quantum teleportation rate of 91%, which is considered high quality.

### Entanglement boosts data transmission

In entanglement links used for data transmission, information is conjoined, or entangled, so that the start of a link has the same state as the end of the link. The particles, and thus the data, are at the beginning of the link and at the end of the link at the same time.

The physics principle holds promise for data transmission, in part because intrusion is easily seen; interference by a bad actor becomes obvious if the beginning state of the link and the end state are no longer the same. In other words, any change in one element means a change in the other, and that holds over distance, too. Additionally, the technique allows leaks to be stopped: instant key destruction can occur at the actual moment of attempted interference.

“Particles can be in two places at the same time, and they can even be entangled with twin particles, so that they can feel everything that happens to each other,” explained Jonas Schou Neergaard-Nielsen, a senior researcher at DTU, [in a 2015 story][3] about the university’s earlier teleportation exploration. “At the sub-microscopic level, where quantum mechanics rule, you find a completely different logic to what we are used to in our macroscopic reality,” Schou Neergaard-Nielsen said back in 2015.

[[Get regularly scheduled insights by signing up for Network World newsletters.]][4]

### Quantum chips gain momentum

In the bigger picture, the area of quantum-based microprocessors is gaining momentum. It is thought, for example, that quantum-embedded chips could ultimately secure the internet of things. IoT security vendor Crypto Quantique has said that quantum chips could be made unclonable. Its solution uses a quantum method of [creating totally random keys][5] from the measurement of low currents on the silicon, related to how electrons can leak through transistor gates. “Unforgeable hardware trust anchors [are] generated by the device,” Crypto Quantique explains on its website. “Our technology offers true randomness.”

A secure quantum computing environment overall could have “profound impacts on modern society,” the University of Bristol said. And with the introduction of entangled physics states across networks, a highly secured “quantum internet could ultimately protect the world’s information from malicious attacks.”

Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3512037/instant-secure-teleportation-of-data-in-the-works.html

Author: [Patrick Nelson][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.bristol.ac.uk/news/2019/december/quantum-teleportation.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://www.dtu.dk/english/news/2015/01/dtu-physics-researchers-developing-tomorrows-teleportation?id=ece937e9-1402-437a-8f57-cc3124563bc8
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/3333808/quantum-embedded-chips-could-secure-iot.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Google Cloud launches Archive cold storage service)
[#]: via: (https://www.networkworld.com/article/3513903/google-cloud-launches-archive-cold-storage-service.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Google Cloud launches Archive cold storage service
======

Archive will focus on long-term data retention and compete against AWS Glacier, Microsoft Cool Blob Storage, and IBM Cloud Storage.

Google
Google Cloud announced the general availability of Archive, a long-term data retention service intended as an alternative to on-premises tape backup.

Google pitches it as cold storage, meaning it is for data that is accessed less than once a year and has been stored for many years. Cold storage data is usually consigned to tape backup, which remains a [surprisingly successful][1] market despite repeated predictions of its demise.

Of course, Google's competitors have their own products. Amazon Web Services has Glacier, Microsoft has Cool Blob Storage, and IBM has Cloud Storage. Google also offers its own Coldline and Nearline cloud storage offerings; Coldline is designed for data a business expects to touch less than once a quarter, while Nearline is aimed at data that requires access less than once a month.
With Archive, Google highlights a few differentiators from the competition and from its own archival offerings. First, Google promises no delay on data retrieval, claiming millisecond latency, whereas AWS can take minutes or hours. Archive costs a little more than AWS and Azure – $1.23 per terabyte per month vs. $1 per terabyte per month – but that's due in part to the longer window before an early-deletion charge no longer applies: Google offers 365 days, compared with 180 days for AWS and Azure.

"Having flexible storage options allows you to optimize your total cost of ownership while meeting your business needs," [wrote][3] Geoffrey Noer, Google Cloud storage product manager, in a blog post announcing the service's availability. "At Google Cloud, we think that you should have a range of straightforward storage options that allow you to more securely and reliably access your data when and where you need it, without performance bottlenecks or delays to your users."

Archive is a store-and-forget service, where you keep data only because you have to. Tape replacement and archiving data under regulatory retention requirements are two of the most common use cases, according to Google. Other examples include long-term backups and original master copies of videos and images.
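As a rough sketch, not from the article, of how one might create an Archive-class bucket with Google's `gsutil` CLI (the bucket name and location here are hypothetical; consult Google's current documentation for exact syntax):

```
$ gsutil mb -c archive -l us-central1 gs://example-archive-bucket/
```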
The Archive class can also be combined with [Bucket Lock][4], Google Cloud's data-locking mechanism to prevent data from being modified, which is available to enterprises for meeting various data retention laws, according to Noer.

* [Backup vs. archive: Why it's important to know the difference][5]
* [How to pick an off-site data-backup method][6]
* [Tape vs. disk storage: Why isn't tape dead yet?][7]
* [The correct levels of backup save time, bandwidth, space][8]
The Archive class can be set up in dual-regions or multi-regions for geo-redundancy, and it offers checksum-verified durability of "11 nines" – 99.999999999 percent.

More information can be found [here][9].

Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3513903/google-cloud-launches-archive-cold-storage-service.html

Author: [Andy Patrizio][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3263452/theres-still-a-lot-of-life-left-in-tape-backup.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://cloud.google.com/blog/products/storage-data-transfer/archive-storage-class-for-coldest-data-now-available
[4]: https://cloud.google.com/storage/docs/bucket-lock
[5]: https://www.networkworld.com/article/3285652/storage/backup-vs-archive-why-its-important-to-know-the-difference.html
[6]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[7]: https://www.networkworld.com/article/3315156/storage/tape-vs-disk-storage-why-isnt-tape-dead-yet.html
[8]: https://www.networkworld.com/article/3302804/storage/the-correct-levels-of-backup-save-time-bandwidth-space.html
[9]: https://cloud.google.com/blog/products/storage-data-transfer/whats-cooler-than-being-cool-ice-cold-archive-storage
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Sync files across multiple devices with Syncthing)
[#]: via: (https://opensource.com/article/20/1/sync-files-syncthing)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)

Sync files across multiple devices with Syncthing
======

Learn how to sync files between devices with Syncthing in the first article in our series on 20 ways to be more productive with open source in 2020.

![Files in a folder][1]
Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.

### Syncing files with Syncthing

Setting up a new machine is a pain. We all have our "standard setups" that we copy from machine to machine. And over the years, I've used a lot of ways to keep them in sync between machines. In the old days (and this will tell you how old I am), it was with floppy disks, then Zip disks, USB sticks, SCP, Rsync, Dropbox, ownCloud—you name it. And they never seemed to work right for me.

Then I stumbled upon [Syncthing][2].

![syncthing console][3]

Syncthing is a lightweight, peer-to-peer file-synchronization system. You don't need to pay for a service, you don't need a third-party server, and it's fast. Much faster, in my experience, than many of the "big names" in file synchronization.

Syncthing is available for Linux, MacOS, Windows, and several flavors of BSD. There is also an Android app (but nothing official for iOS yet). There are even handy graphical frontends for all of the above (although I'm not going to cover those here). On Linux, there are packages available for most distributions, so installation is very straightforward.
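On a Debian-based distribution, for example, installation might look like the following sketch (assuming the distribution packages Syncthing, as Debian and Ubuntu do):

```
$ sudo apt install syncthing
$ syncthing
```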
![Installing Syncthing on Ubuntu][4]

When you start Syncthing the first time, it launches a web browser to configure the daemon. There's not much to do on the first machine, but it is a good chance to poke around the user interface (UI) a little bit. The most important thing to see is the System ID under the **Actions** menu in the top-right.

![Machine ID][5]

Once the first machine is set up, repeat the installation on the second machine. In the UI, there will be a button on the lower-right labeled **Add Remote Device**. Click the button, and you will be presented with a box to enter a **Device ID and a Name**. Copy and paste the **Device ID** from the first machine and click **Save**.

You should see a pop-up on the first node asking to add the second. Once you accept it, the new machine will show up on the lower-right of the first one. Share the default directory with the second machine. Click on **Default Folder** and then click the **Edit** button. There are four links at the top of the pop-up. Click on **Sharing** and then select the second machine. Click **Save** and look at the second machine. You should get a prompt to accept the shared directory. Once you accept that, it will start synchronizing files between the two machines.

![Sharing a directory in Syncthing][6]

Test it out by copying a file to the default directory (**/your/home/Share**) on one of the machines. It should show up on the other one very quickly.

You can add as many directories as you want or need to the sharing, which is pretty handy. As you can see in the first image, I have one for **myconfigs**—that's where I keep my configuration files. When I get a new machine, I just install Syncthing, and if I tune a configuration on one, I don't have to update all of them—it happens automatically.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/sync-files-syncthing

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://syncthing.net/
[3]: https://opensource.com/sites/default/files/uploads/productivity_1-1.png (syncthing console)
[4]: https://opensource.com/sites/default/files/uploads/productivity_1-2.png (Installing Syncthing on Ubuntu)
[5]: https://opensource.com/sites/default/files/uploads/productivity_1-3.png (Machine ID)
[6]: https://opensource.com/sites/default/files/uploads/productivity_1-4.png (Sharing a directory in Syncthing)
@ -1,60 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use Stow for configuration management of multiple machines)
[#]: via: (https://opensource.com/article/20/1/configuration-management-stow)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)

Use Stow for configuration management of multiple machines
======
Learn how to use Stow to manage configurations across machines in the second article in our series on 20 ways to be more productive with open source in 2020.
![A person programming][1]

Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.

### Manage symlinks with Stow

Yesterday, I explained how I keep my files in sync across multiple machines with [Syncthing][2]. But that's only one of the tools I use to keep my configurations consistent. The other is a seemingly simple tool called [Stow][3].

![Stow help screen][4]

Stow manages symlinks. By default, it creates symlinks in the directory above the one it is run in, pointing at files in the package directories below it. There are also options to set the source and target directories explicitly, but I don't usually use them.

As I mentioned in the Syncthing [article][5], I use Syncthing to keep a directory called **myconfigs** consistent across all of my machines. The **myconfigs** directory has several subdirectories underneath it. Each subdirectory contains the configuration files for one of the applications I use regularly.

![myconfigs directory][6]

On each machine, I change to the **myconfigs** directory and run **stow -S <directory name>** to symlink the files inside the directory to my home directory. For example, under the **vim** directory, I have my **.vimrc** file and my **.vim** directory. On each machine, I run **stow -S vim** to create the symlinks **~/.vimrc** and **~/.vim**. When I make a change to my Vim configuration on one machine, it applies to ALL of my machines.
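To make the effect concrete, here is a minimal sketch that reproduces with plain `ln -s` the links `stow -S vim` would create. Stow itself is not required to run it, and all paths are throwaway stand-ins for **~/myconfigs** and the home directory:

```shell
# Sketch only: emulate with `ln -s` the symlinks that `stow -S vim` creates
# when run inside ~/myconfigs with the home directory as the target.
# Every path here is a hypothetical stand-in, not a real config location.
demo=$(mktemp -d)                       # stand-in for the home directory
mkdir -p "$demo/myconfigs/vim/.vim"
printf 'set number\n' > "$demo/myconfigs/vim/.vimrc"

# Equivalent of: cd "$demo/myconfigs" && stow -S vim
ln -s "$demo/myconfigs/vim/.vimrc" "$demo/.vimrc"
ln -s "$demo/myconfigs/vim/.vim"   "$demo/.vim"

# Editing through the symlink edits the stowed file, which is what lets
# Syncthing propagate the change to every other machine.
printf 'set ruler\n' >> "$demo/.vimrc"
cat "$demo/myconfigs/vim/.vimrc"
```

Swapping a machine-specific package is the same idea at the filesystem level: **stow -D** removes one set of links, and **stow -S** on the other package creates the replacements.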
Sometimes, though, I need something machine-specific, which is why I have directories like **msmtp-personal** and **msmtp-elastic** (for my employer). Since my **msmtp** SMTP client needs to know which email server to relay through, and each account has a different setup and credentials, I can use Stow to swap between the two by "unstowing" one with the **-D** flag and then putting the other in place.

![Unstow one, stow the other][7]

Sometimes I find myself adding files to a configuration. For that, there is the "restow" option, **-R**. For example, I like to use a specific font when I use Vim as a graphical application rather than in a console. The **.gvimrc** file lets me set options that apply only to the graphical version, in addition to the standard **.vimrc** file. When I first set this up, I moved **~/.gvimrc** to **~/myconfigs/vim** and then ran **stow -R vim**, which unlinks and relinks everything in that directory.

Stow lets me switch between several configurations with a simple command line, and, in combination with Syncthing, I can be sure that the setup I like for the tools I use is ready to go, no matter where I am or where I make changes.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/configuration-management-stow

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb (A person programming)
[2]: https://syncthing.net/
[3]: https://www.gnu.org/software/stow/
[4]: https://opensource.com/sites/default/files/uploads/productivity_2-1.png (Stow help screen)
[5]: https://opensource.com/article/20/1/20-productivity-tools-syncthing
[6]: https://opensource.com/sites/default/files/uploads/productivity_2-2.png (myconfigs directory)
[7]: https://opensource.com/sites/default/files/uploads/productivity_2-3.png (Unstow one, stow the other)
@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

111
sources/tech/20200114 Organize your email with Notmuch.md
Normal file
@ -0,0 +1,111 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Organize your email with Notmuch)
[#]: via: (https://opensource.com/article/20/1/organize-email-notmuch)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)

Organize your email with Notmuch
======
Notmuch indexes, tags, and sorts your email. Learn how to use it in the fourth article in our series on 20 ways to be more productive with open source in 2020.
![Filing cabinet for organization][1]

Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.

### Index your email with Notmuch

Yesterday, I talked about how I use OfflineIMAP to [sync my mail][2] to my local machine. Today, I'll talk about how I preprocess all that mail before I read it.

![Notmuch][3]

[Maildir][4] is probably one of the most useful mail storage formats out there. And there are a LOT of tools to help with managing your mail. The one I keep coming back to is a little program called [Notmuch][5] that indexes, tags, and searches mail messages. And there are several programs that work with Notmuch to make it even easier to handle a large amount of mail.
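For reference, a Maildir mailbox is just a directory tree with one file per message, which is what makes it so friendly to indexing tools like Notmuch. A minimal sketch (the path and message filename below are made up for illustration):

```shell
# Minimal sketch of the Maildir layout Notmuch indexes: one file per message
# under new/ (not yet seen) or cur/ (seen), with tmp/ used for safe delivery.
md=$(mktemp -d)/Maildir
mkdir -p "$md/tmp" "$md/new" "$md/cur"
printf 'Subject: hello\n\nA one-line body.\n' > "$md/new/1579000000.demo.localhost"
ls "$md"    # lists the three subdirectories: cur, new, tmp
```

Because each message is an ordinary file, tools can add, move, or re-tag messages without locking a single monolithic mailbox file.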
Most Linux distributions include Notmuch, and you can also get it for MacOS. Windows users can access it through Windows Subsystem for Linux ([WSL][6]), but it may require some additional tweaks.

![Notmuch's first run][7]

On Notmuch's very first run, it will ask you some questions and create a **.notmuch-config** file in your home directory. Next, index and tag all your mail by running **notmuch new**. You can verify it with **notmuch search tag:new**; this will find all messages with the "new" tag. That's probably a lot of mail since Notmuch uses the "new" tag to indicate messages that are new to it, so you'll want to clean that up.

Run **notmuch search tag:unread** to find any unread messages; that should result in considerably less mail. To remove the "new" tag from messages you've already seen, run **notmuch tag -new not tag:unread**, which will search for all messages without the "unread" tag and remove the "new" tag from them. Now when you run **notmuch search tag:new**, it should show only the unread mail messages.

Tagging messages in bulk is probably more useful, though, since manually updating tags at every run can be really tedious. The **\--batch** command-line option tells Notmuch to read multiple lines of commands and execute them. There is also the **\--input=filename** option, which reads commands from a file and applies them. I have a file called **tagmail.notmuch** that I use to add tags to mail that is "new"; it looks something like this:

```
# Manage sent, spam, and trash folders
-unread -new folder:Trash
-unread -new folder:Spam
-unread -new folder:Sent

# Note mail sent specifically to me (excluding bug mail)
+to-me to:kevin at sonney.com and tag:new and not tag:to-me

# And note all mail sent from me
+sent from:kevin at sonney.com and tag:new and not tag:sent

# Remove the new tag from messages
-new tag:new
```

I can then run **notmuch tag --input=tagmail.notmuch** to bulk-process my mail messages after running **notmuch new**, and then I can search on those tags as well.

Notmuch also supports running pre- and post-new hooks. These scripts, stored in **Maildir/.notmuch/hooks**, define actions to run before (pre-new) and after (post-new) indexing new mail with **notmuch new**. In yesterday's article, I talked about using [OfflineIMAP][8] to sync mail from my IMAP server. It's very easy to run it from the "pre-new" hook:

```
#!/bin/bash
# Remove the new tag from messages that are still tagged as new
notmuch tag -new tag:new

# Sync mail messages
offlineimap -a LocalSync -u quiet
```

You can also use the Python application [afew][9], which interfaces with the Notmuch database, to tag things like _Mailing List_ and _Spam_ for you. You can run afew from the post-new hook in a similar way:

```
#!/bin/bash
# tag with my custom tags
notmuch tag --input=~/tagmail.notmuch

# Run afew to tag new mail
afew -t -n
```

I recommend that when using afew to tag messages, you do NOT use the **[ListMailsFilter]** since some mail handlers add obscure or downright junk List-ID headers to mail messages (I'm looking at you, Google).

![alot email client][10]

At this point, any mail reader that supports Notmuch or Maildir can work with my email. I'll sometimes use [alot][11], a Notmuch-specific client, to read mail at the console, but it's not as fancy as some other mail readers.

In the coming days, I'll show you some other mail clients that will likely integrate with tools you already use. In the meantime, check out some of the other tools that work with Maildir mailboxes—you might find a hidden gem I've not tried yet.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/organize-email-notmuch

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_organize_letter.png?itok=GTtiiabr (Filing cabinet for organization)
[2]: https://opensource.com/article/20/1/sync-email-offlineimap
[3]: https://opensource.com/sites/default/files/uploads/productivity_4-1.png (Notmuch)
[4]: https://en.wikipedia.org/wiki/Maildir
[5]: https://notmuchmail.org/
[6]: https://docs.microsoft.com/en-us/windows/wsl/install-win10
[7]: https://opensource.com/sites/default/files/uploads/productivity_4-2.png (Notmuch's first run)
[8]: http://www.offlineimap.org/
[9]: https://afew.readthedocs.io/en/latest/index.html
[10]: https://opensource.com/sites/default/files/uploads/productivity_4-3.png (alot email client)
[11]: https://github.com/pazz/alot
524
sources/tech/20200115 6 handy Bash scripts for Git.md
Normal file
@ -0,0 +1,524 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 handy Bash scripts for Git)
[#]: via: (https://opensource.com/article/20/1/bash-scripts-git)
[#]: author: (Bob Peterson https://opensource.com/users/bobpeterson)

6 handy Bash scripts for Git
======
These six Bash scripts will make your life easier when you're working with Git repositories.
![Digital hand surrounding by objects, bike, light bulb, graphs][1]

I wrote a bunch of Bash scripts that make my life easier when I'm working with Git repositories. Many of my colleagues say there's no need, since everything I need to do can be done with Git commands. While that may be true, I find the scripts infinitely more convenient than trying to figure out the appropriate Git command to do what I want.

### 1. gitlog

**gitlog** prints an abbreviated list of current patches against the master version. It prints them from oldest to newest and shows the author and description, with **H** for **HEAD**, **^** for **HEAD^**, **2** for **HEAD~2**, and so forth. For example:

```
$ gitlog
-----------------------[ recovery25 ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
```
If I want to see what patches are on a different branch, I can specify an alternate branch:

```
$ gitlog recovery24
```

### 2. gitlog.id

**gitlog.id** just prints the patch SHA1 IDs:

```
$ gitlog.id
-----------------------[ recovery25 ]-----------------------
56908eeb6940 2ca4a6b628a1 fc64ad5d99fe 02031a00a251 f6f38da7dd18 d8546e8f0023 fc3cc1f98f6b 12c3e0cb3523 76cce178b134 6fc1dce3ab9c 1b681ab074ca 26fed8de719b 802ff51a5670 49f67a512d8c f04f20193bbb 5f6afe809d23 2030521dc70e dada79b3be94 9b19a1e08161 78a035041d3e f03da011cae2 0d2b2e068fcd 2449976aa133 57dfb5e12ccd 53abedfdcf72 6fbdda3474b3 49544a547188 187032f7a63c 6f75dae23d93 95fc2a261b00 ebfb14ded191 f653ee9e414a 0e2911cb8111 73968b76e2e3 8a3e4cb5e92c a5f2da803b5b 7c9ef68388ed 71ca19d0cba8 340d27a33895 9b3c4e6efb10 d2e8c22be39b 9563e31f8bfd ebac7a38036c f703a3c27874 a3e86d2ef30e da3c604755b0 4525c2f5b46f a06a5b7dea02 8ba93c796d5c e8b5ff851bb9
```

Again, it assumes the current branch, but I can specify a different branch if I want.

### 3. gitlog.id2

**gitlog.id2** is the same as **gitlog.id** but without the branch line at the top. This is handy for cherry-picking all patches from one branch to the current branch:

```
$ # create a new branch
$ git branch --track recovery26 origin/master
$ # check out the new branch I just created
$ git checkout recovery26
$ # cherry-pick all patches from the old branch to the new one
$ for i in `gitlog.id2 recovery25` ; do git cherry-pick $i ;done
```
### 4. gitlog.grep

**gitlog.grep** greps for a string within that collection of patches. For example, if I find a bug and want to fix the patch that has a reference to the function **inode_go_sync**, I simply do:

```
$ gitlog.grep inode_go_sync
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
152:-static void inode_go_sync(struct gfs2_glock *gl)
153:+static int inode_go_sync(struct gfs2_glock *gl)
163:@@ -296,6 +302,7 @@ static void inode_go_sync(struct gfs2_glock *gl)
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
```

So, now I know that patch **HEAD~9** is the one that needs fixing. I use **git rebase -i HEAD~10** to edit patch 9, **git commit -a --amend**, then **git rebase --continue** to make the necessary adjustments.
### 5. gitbranchcmp3

**gitbranchcmp3** lets me compare my current branch to another branch, so I can compare older versions of patches to my newer versions and quickly see what's changed and what hasn't. It generates a compare script (that uses the KDE tool [Kompare][2], which works on GNOME 3 as well) to compare the patches that aren't quite the same. If there are no differences other than line numbers, it prints **[SAME]**. If there are only comment differences, it prints **[same]** (in lower case). For example:

```
$ gitbranchcmp3 recovery24
Branch recovery24 has 47 patches
Branch recovery25 has 50 patches

(snip)
38 87eb6901607a 340d27a33895 [same] gfs2: drain the ail2 list after io errors
39 90fefb577a26 9b3c4e6efb10 [same] gfs2: clean up iopen glock mess in gfs2_create_inode
40 ba3ae06b8b0e d2e8c22be39b [same] gfs2: Do proper error checking for go_sync family of glops
41 2ab662294329 9563e31f8bfd [SAME] gfs2: use page_offset in gfs2_page_mkwrite
42 0adc6d817b7a ebac7a38036c [SAME] gfs2: don't use buffer_heads in gfs2_allocate_page_backing
43 55ef1f8d0be8 f703a3c27874 [SAME] gfs2: Improve mmap write vs. punch_hole consistency
44 de57c2f72570 a3e86d2ef30e [SAME] gfs2: Multi-block allocations in gfs2_page_mkwrite
45 7c5305fbd68a da3c604755b0 [SAME] gfs2: Fix end-of-file handling in gfs2_page_mkwrite
46 162524005151 4525c2f5b46f [SAME] Rafael Aquini's slab instrumentation
47 a06a5b7dea02 [ ] GFS2: Add go_get_holdtime to gl_ops
48 8ba93c796d5c [ ] gfs2: introduce new function remaining_hold_time and use it in dq
49 e8b5ff851bb9 [ ] gfs2: Allow rgrps to have a minimum hold time

Missing from recovery25:
The missing:
Compare script generated at: /tmp/compare_mismatches.sh
```
### 6. gitlog.find

Finally, I have **gitlog.find**, a script that helps me identify where the upstream versions of my patches are and each patch's current status. It does this by matching the patch description. It also generates a compare script (again, using Kompare) to compare the current patch to the upstream counterpart:

```
$ gitlog.find
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
lo 5bcb9be74b2a Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
fn 2c47c1be51fb Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
lo feb7ea639472 Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
ms f3915f83e84c Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
ms 35af80aef99b Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
fn 39c3a948ecf6 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
fn f53056c43063 Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
fn 184b4e60853d Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
Not found upstream
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
Not found upstream
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
Not found upstream
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
Not found upstream
Compare script generated: /tmp/compare_upstream.sh
```

The patches are shown on two lines: the first is your current patch, followed by the corresponding upstream patch, with a two-character abbreviation to indicate its upstream status:

* **lo** means the patch is in the local upstream Git repo only (i.e., not pushed upstream yet).
* **ms** means the patch is in Linus Torvalds' master branch.
* **fn** means the patch is pushed to my "for-next" development branch, intended for the next upstream merge window.

Some of my scripts make assumptions based on how I normally work with Git. For example, when searching for upstream patches, **gitlog.find** uses my well-known Git tree's location. So, you will need to adjust or improve them to suit your conditions. The **gitlog.find** script is designed to locate [GFS2][3] and [DLM][4] patches only, so unless you're a GFS2 developer, you will want to customize it for the components that interest you.

### Source code

Here is the source for these scripts.
#### 1. gitlog

```
#!/bin/bash
branch=$1

if test "x$branch" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

patches=0
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc);done

if [[ $branch =~ .*for-next.* ]]
then
    start=HEAD
    # start=origin/for-next
else
    start=origin/master
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
    if [ $patches -eq 1 ]; then
        cnt=" ^"
    elif [ $patches -eq 0 ]; then
        cnt=" H"
    else
        if [ $patches -lt 10 ]; then
            cnt=" $patches"
        else
            cnt="$patches"
        fi
    fi
    /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s %n" $i
    patches=$(echo $patches - 1 | bc)
done
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" $tracking..$branch
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" ^origin/master ^linux-gfs2/for-next $branch
```
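A hedged aside, not part of the original script: the `git branch -a | grep "*"` pipeline used above to find the current branch can misfire, for example on a detached HEAD, where the starred line reads `(HEAD detached at ...)`. `git rev-parse --abbrev-ref HEAD` asks Git for the current branch name directly. Demonstrated in a throwaway repository (all names hypothetical):

```shell
# Alternative current-branch lookup, shown in a disposable repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b demo   # create and switch to a branch named "demo"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m init
git rev-parse --abbrev-ref HEAD   # prints the branch name: demo
```

If you adapt these scripts, swapping in `git rev-parse --abbrev-ref HEAD` for the grep pipeline is a small robustness improvement that does not change their behavior on a normal checked-out branch.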
#### 2. gitlog.id

```
#!/bin/bash
branch=$1

if test "x$branch" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '
```

#### 3. gitlog.id2

```
#!/bin/bash
branch=$1

if test "x$branch" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '
```
#### 4. gitlog.grep

```
#!/bin/bash
param1=$1
param2=$2

if test "x$param2" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
    string=$param1
else
    branch=$param1
    string=$param2
fi

patches=0
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc);done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
    if [ $patches -eq 1 ]; then
        cnt=" ^"
    elif [ $patches -eq 0 ]; then
        cnt=" H"
    else
        if [ $patches -lt 10 ]; then
            cnt=" $patches"
        else
            cnt="$patches"
        fi
    fi
    /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
    /usr/bin/git show --pretty=email --patch-with-stat $i | grep -n "$string"
    patches=$(echo $patches - 1 | bc)
done
```
#### 5. gitbranchcmp3

```
#!/bin/bash
#
# gitbranchcmp3 <old branch> [<new_branch>]
#
oldbranch=$1
newbranch=$2
script=/tmp/compare_mismatches.sh

/usr/bin/rm -f $script
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo " git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo " git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo " kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

if test "x$newbranch" = x; then
    newbranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

declare -a oldsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$oldbranch | cut -d ' ' -f1 |paste -s -d ' '`)
declare -a newsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$newbranch | cut -d ' ' -f1 |paste -s -d ' '`)

#echo "old: " $oldsha1s
oldcount=${#oldsha1s[@]}
echo "Branch $oldbranch has $oldcount patches"
oldcount=$(echo $oldcount - 1 | bc)
#for o in `seq 0 ${#oldsha1s[@]}`; do
#    echo -n ${oldsha1s[$o]} " "
#    desc=`git show $i | head -5 | tail -1|cut -b5-`
#done

#echo "new: " $newsha1s
newcount=${#newsha1s[@]}
echo "Branch $newbranch has $newcount patches"
newcount=$(echo $newcount - 1 | bc)
#for o in `seq 0 ${#newsha1s[@]}`; do
#    echo -n ${newsha1s[$o]} " "
#    desc=`git show $i | head -5 | tail -1|cut -b5-`
#done
echo

for new in `seq 0 $newcount`; do
    newsha=${newsha1s[$new]}
    newdesc=`git show $newsha | head -5 | tail -1|cut -b5-`
    oldsha=" "
    same="[ ]"
    for old in `seq 0 $oldcount`; do
        if test "${oldsha1s[$old]}" = "match"; then
            continue;
        fi
        olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1|cut -b5-`
        if test "$olddesc" = "$newdesc" ; then
            oldsha=${oldsha1s[$old]}
            #echo $oldsha
            git show $oldsha |tail -n +2 |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk1
            git show $newsha |tail -n +2 |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk2
            diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
            if [ $? -eq 0 ] ;then
                # No differences
                same="[SAME]"
                oldsha1s[$old]="match"
                break
            fi
            git show $oldsha |sed -n '/diff/,$p' |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk1
            git show $newsha |sed -n '/diff/,$p' |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk2
            diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
            if [ $? -eq 0 ] ;then
                # Differences in comments only
                same="[same]"
                oldsha1s[$old]="match"
                break
            fi
            oldsha1s[$old]="match"
            echo "compare_them $oldsha $newsha" >> $script
        fi
    done
    echo "$new $oldsha $newsha $same $newdesc"
done

echo
echo "Missing from $newbranch:"
the_missing=""
# Now run through the olds we haven't matched up
for old in `seq 0 $oldcount`; do
    if test ${oldsha1s[$old]} != "match"; then
        olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1|cut -b5-`
        echo "${oldsha1s[$old]} $olddesc"
        the_missing=`echo "$the_missing ${oldsha1s[$old]}"`
    fi
done

echo "The missing: " $the_missing
echo "Compare script generated at: $script"
#git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '
```
|
||||
|
||||
#### 6\. gitlog.find
|
||||
|
||||
|
||||
```
|
||||
#!/bin/bash
#
# Find the upstream equivalent patch
#
# gitlog.find
#
cwd=$PWD
param1=$1
ubranch=$2
patches=0
script=/tmp/compare_upstream.sh
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo " cwd=$PWD" >> $script
echo " git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo " cd ~/linux.git/fs/gfs2" >> $script
echo " git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo " cd $cwd" >> $script
echo " kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

#echo "Gathering upstream patch info. Please wait."
branch=`git branch -a | grep "*" | cut -d ' ' -f2`
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

cd ~/linux.git
if test "X${ubranch}" = "X"; then
    ubranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi
utracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
#
# gather a list of gfs2 patches from master just in case we can't find it
#
#git log --abbrev-commit --pretty=format:" %h %<|(32)%an %s" master |grep -i -e "gfs2" -e "dlm" > /tmp/gronk
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/gfs2/ > /tmp/gronk.gfs2
# ms = in Linus's master
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/dlm/ > /tmp/gronk.dlm

cd $cwd
LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc);done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
    if [ $patches -eq 1 ]; then
        cnt=" ^"
    elif [ $patches -eq 0 ]; then
        cnt=" H"
    else
        if [ $patches -lt 10 ]; then
            cnt=" $patches"
        else
            cnt="$patches"
        fi
    fi
    /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
    desc=`/usr/bin/git show --abbrev-commit -s --pretty=format:"%s" $i`
    cd ~/linux.git
    cmp=1
    up_eq=`git log --reverse --abbrev-commit --pretty=format:"lo %h %<|(32)%an %s" $utracking..$ubranch | grep "$desc"`
    # lo = in local for-next
    if test "X$up_eq" = "X"; then
        up_eq=`git log --reverse --abbrev-commit --pretty=format:"fn %h %<|(32)%an %s" master..$utracking | grep "$desc"`
        # fn = in for-next for next merge window
        if test "X$up_eq" = "X"; then
            up_eq=`grep "$desc" /tmp/gronk.gfs2`
            if test "X$up_eq" = "X"; then
                up_eq=`grep "$desc" /tmp/gronk.dlm`
                if test "X$up_eq" = "X"; then
                    up_eq=" Not found upstream"
                    cmp=0
                fi
            fi
        fi
    fi
    echo "$up_eq"
    if [ $cmp -eq 1 ] ; then
        UP_SHA1=`echo $up_eq|cut -d' ' -f2`
        echo "compare_them $UP_SHA1 $i" >> $script
    fi
    cd $cwd
    patches=$(echo $patches - 1 | bc)
done
echo "Compare script generated: $script"
```

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/bash-scripts-git

Author: [Bob Peterson][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/bobpeterson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://kde.org/applications/development/org.kde.kompare
[3]: https://en.wikipedia.org/wiki/GFS2
[4]: https://en.wikipedia.org/wiki/Distributed_lock_manager
@ -0,0 +1,199 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Develop GUI apps using Flutter on Fedora)
[#]: via: (https://fedoramagazine.org/develop-gui-apps-using-flutter-on-fedora/)
[#]: author: (Carmine Zaccagnino https://fedoramagazine.org/author/carzacc/)

Develop GUI apps using Flutter on Fedora
======

![][1]

When it comes to app development frameworks, Flutter is the latest and greatest. Google seems to be planning to take over the entire GUI app development world with Flutter, starting with mobile devices, which are already perfectly supported. Flutter allows you to develop cross-platform GUI apps for multiple targets — mobile, web, and desktop — from a single codebase.

This post will go through how to install the Flutter SDK and tools on Fedora, as well as how to use them both for mobile development and web/desktop development.

### Installing Flutter and Android SDKs on Fedora

To get started building apps with Flutter, you need to install:

  * the Android SDK;
  * the Flutter SDK itself; and,
  * optionally, an IDE and its Flutter plugins.

#### Installing the Android SDK

Flutter requires the installation of the Android SDK with the entire [Android Studio][2] suite of tools. Google provides a _tar.gz_ archive. The Android Studio executable can be found in the _android-studio/bin_ directory and is called _studio.sh_. To run it, open a terminal, _cd_ into the aforementioned directory, and then run:

```
$ ./studio.sh
```

#### Installing the Flutter SDK

Before you install Flutter, you may want to consider which release channel you want to be on.

The _stable_ channel is least likely to give you a headache if you just want to build a mobile app using mainstream Flutter features.

On the other hand, you may want to use the latest features, especially for desktop and web app development. In that case, you might be better off installing the latest version of either the _beta_ or even the _dev_ channel.

Either way, you can switch between channels after you install using the _flutter channel_ command explained later in the article.

Head over to the [official SDK archive page][3] and download the latest installation bundle for the release channel most appropriate for your use case.

The installation bundle is simply an _xz_-compressed tarball (_.tar.xz_ extension). You can extract it wherever you want, as long as you add the _flutter/bin_ subdirectory to the _PATH_ environment variable.

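For instance, a minimal sketch might look like this (the _~/development/flutter_ location is an assumption; substitute wherever you actually extracted the tarball):

```shell
# Assumed SDK location -- adjust to where you extracted the archive.
FLUTTER_HOME="$HOME/development/flutter"

# Put the flutter tool on the PATH for this shell session.
export PATH="$PATH:$FLUTTER_HOME/bin"
echo "added $FLUTTER_HOME/bin to PATH"
```

To make the change permanent, put the _export_ line in your _~/.bash_profile_.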
#### Installing the IDE plugins

To install the plugin for [Visual Studio Code][4], you need to search for _Flutter_ in the _Extensions_ tab. Installing it will also install the _Dart_ plugin.

The same happens when you install the plugin for Android Studio: open _Settings_, then the _Plugins_ tab, and install the _Flutter_ plugin.

### Using the Flutter and Android CLI Tools on Fedora

Now that you’ve installed Flutter, here’s how to use the CLI tool.

#### Upgrading and Maintaining Your Flutter Installations

The _flutter doctor_ command is used to check whether your installation and related tools are complete and don’t require any further action.

For example, the output you may get from _flutter doctor_ right after installing on Fedora is:

```
Doctor summary (to see all details, run flutter doctor -v):

[✓] Flutter (Channel stable, v1.12.13+hotfix.5, on Linux, locale it_IT.UTF-8)

[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)

    ✗ Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses

[!] Android Studio (version 3.5)

    ✗ Flutter plugin not installed; this adds Flutter specific functionality.

    ✗ Dart plugin not installed; this adds Dart specific functionality.

[!] Connected device

    ! No devices available

! Doctor found issues in 3 categories.
```

Of course the issue with the Android toolchain has to be resolved in order to build for Android. Run this command to accept the licenses:

```
$ flutter doctor --android-licenses
```

Use the _flutter channel_ command to switch channels after installation. It’s just like switching branches on Git (and that’s actually what it does). You use it in the following way:

```
$ flutter channel <channel_name>
```

…where you’d replace _<channel_name>_ with the release channel you want to switch to.

After doing that, or whenever you feel the need to do it, you need to update your installation. You might consider running this every once in a while, or whenever a major update comes out if you follow Flutter news. Run this command:

```
$ flutter upgrade
```

#### Building for Mobile

You can build for Android very easily: the _flutter build_ command supports it by default, and it allows you to build both APKs and newfangled app bundles.

All you need to do is to create a project with _flutter create_, which will generate some code for an example app and the necessary _android_ and _ios_ folders.

When you’re done coding, you can either run:

  * _flutter build apk_ or _flutter build appbundle_ to generate the necessary app files to distribute, or
  * _flutter run_ to run the app on a connected device or emulator directly.

When you run the app on a phone or emulator with _flutter run_, you can use the _r_ button on the keyboard to use _stateful hot reload_. This feature updates what’s displayed on the phone or emulator to reflect the changes you’ve made to the code without requiring a full rebuild.

If you input a capital _R_ character to the debug console, you trigger a _hot restart_. This restart doesn’t preserve state and is necessary for bigger changes to the app.

If you’re using a GUI IDE, you can trigger a hot reload using the _bolt_ icon button and a hot restart with the typical _refresh_ button.

#### Building for the Desktop

To build apps for the desktop on Fedora, use the [flutter-desktop-embedding][5] repository. The _flutter create_ command doesn’t have templates for desktop Linux apps yet. That repository contains examples of desktop apps and files required to build on desktop, as well as examples of plugins for desktop apps.

To build or run apps for Linux, you also need to be on the _master_ release channel and enable Linux desktop app development. To do this, run:

```
$ flutter config --enable-linux-desktop
```

After that, you can use _flutter run_ to run the app on your development workstation directly, or run _flutter build linux_ to build a binary file in the _build/_ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the _linux/_ directory:

```
$ flutter create .
```

#### Building for the Web

Starting with Flutter 1.12, you can build web apps using Flutter with the mainline codebase, without having to use the _flutter_web_ forked libraries, but you have to be running on the _beta_ channel.

If you are (you can switch to it using _flutter channel beta_ and _flutter upgrade_ as we’ve seen earlier), you need to enable web development by running _flutter config --enable-web_.

After doing that, you can run _flutter run -d web_ and a local web server will be started from which you can access your app. The command returns the URL at which the server is listening, including the port number.

You can also run _flutter build web_ to build the static website files in the _build/_ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the _web/_ directory:

```
$ flutter create .
```

### Packages for Installing Flutter

Other distributions have packages or community repositories that make installing and updating more straightforward and intuitive. However, at the time of writing, no such thing exists for Flutter. If you have experience packaging RPMs for Fedora, consider contributing to [this GitHub repository][6] for [this COPR package][7].

The next step is learning Flutter. You can do that in a number of ways:

  * Read the good API reference documentation on the official site
  * Watch some of the introductory video courses available online
  * Read one of the many books out there today. _[Check out the author’s bio for a suggestion! — Ed.]_

* * *

_Photo by [Randall Ruiz][8] on [Unsplash][9]._

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/develop-gui-apps-using-flutter-on-fedora/

Author: [Carmine Zaccagnino][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://fedoramagazine.org/author/carzacc/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/01/flutter-816x345.jpg
[2]: https://developer.android.com/studio
[3]: https://flutter.dev/docs/development/tools/sdk/releases?tab=linux
[4]: https://fedoramagazine.org/using-visual-studio-code-fedora/
[5]: https://github.com/google/flutter-desktop-embedding
[6]: https://github.com/carzacc/flutter-copr
[7]: https://copr.fedorainfracloud.org/coprs/carzacc/flutter/
[8]: https://unsplash.com/@ruizra?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[9]: https://unsplash.com/s/photos/flutter?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
@ -0,0 +1,108 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Organize and sync your calendar with khal and vdirsyncer)
[#]: via: (https://opensource.com/article/20/1/open-source-calendar)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)

Organize and sync your calendar with khal and vdirsyncer
======
Keeping and sharing a calendar can be a pain. Learn how to make it easier in the fifth article in our series on 20 ways to be more productive with open source in 2020.
![Calendar close up snapshot][1]

Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.

### Keep track of your schedule with khal and vdirsyncer

Calendars are a _pain_ to deal with, and finding good tooling is always hard. But I've made some progress since last year, when I listed calendaring as [one of my "fails."][2]

The most difficult thing about calendars today is that they almost always need to be shared online in some way. The two most popular online calendars are Google Calendar and Microsoft Outlook/Exchange. Both are used heavily in corporate environments, which means my calendar has to support one or both options.

![khal calendar][3]

[Khal][4] is a console-based calendar that reads and writes VCalendar files. It's fairly easy to configure, but it does not support syncing with other applications.

Fortunately, khal works with [vdirsyncer][5], a nifty command-line program that can synchronize online calendars (and contacts, which I'll talk about in a separate article) to your local drive. And yes, this includes uploading new events, too.

![vdirsyncer][6]

Vdirsyncer is a Python 3 program, and it can be installed via your package manager or pip. It can synchronize CalDAV, VCalendar/iCalendar, Google Calendar, and local files in a directory. Since I use Google Calendar, I'll use that as an example, although it is not the easiest thing to set up.

Setting up vdirsyncer for Google is [well-documented][7], so I won't go into the nuts and bolts here. The important thing is to make sure your sync pairs are set up in a way that sets Google Calendar as the "winner" for conflict resolution. That is, if there are two updates to the same event, it needs to know which one takes precedence. Do so with something like this:

```
[general]
status_path = "~/.calendars/status"

[pair personal_sync]
a = "personal"
b = "personallocal"
collections = ["from a", "from b"]
conflict_resolution = "a wins"
metadata = ["color"]

[storage personal]
type = "google_calendar"
token_file = "~/.vdirsyncer/google_calendar_token"
client_id = "google_client_id"
client_secret = "google_client_secret"

[storage personallocal]
type = "filesystem"
path = "~/.calendars/Personal"
fileext = ".ics"
```

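With the pair and storages defined, the initial sync is typically a two-step affair: _vdirsyncer discover_ finds the collections on both sides, and _vdirsyncer sync_ copies them (the pair name _personal_sync_ here matches the config above; both are standard vdirsyncer subcommands):

```
$ vdirsyncer discover personal_sync
$ vdirsyncer sync
```
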
After the first sync of vdirsyncer, you will end up with a series of directories in the storage path. Each will contain several files, one for each entry in the calendar. The next step is to get them into khal. Start by running **khal configure** to do the initial setup.

![Configuring khal][8]

Now, running **khal interactive** will bring up the display shown at the beginning of this article. Typing **n** will bring up the New Event dialog. One small thing to note here: the calendars are named to match the directories that vdirsyncer creates, but you can change the khal config file to give them clearer names. Adding colors to entries based on which calendar they're on will also help you identify which is which on the list:

```
[calendars]
  [[personal]]
    path = ~/.calendars/Personal/kevin@sonney.com/
    color = light magenta
  [[holidays]]
    path = ~/.calendars/Personal/cln2stbjc4hmgrrcd5i62ua0ctp6utbg5pr2sor1dhimsp31e8n6errfctm6abj3dtmg@virtual/
    color = light blue
  [[birthdays]]
    path = ~/.calendars/Personal/c5i68sj5edpm4rrfdchm6rreehgm6t3j81jn4rrle0n7cbj3c5m6arj4c5p2sprfdtjmop9ecdnmq@virtual/
    color = brown
```

Now when you run **khal interactive**, each calendar will be colored to distinguish it from the others, and when you add a new entry, it will have a more descriptive name.

![Adding a new calendar entry][9]

The setup is a little tricky, but once it's done, khal with vdirsyncer gives you an easy way to manage calendar events and keep them in sync with your online services.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/open-source-calendar

Author: [Kevin Sonney][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT (Calendar close up snapshot)
[2]: https://opensource.com/article/19/1/productivity-tool-wish-list
[3]: https://opensource.com/sites/default/files/uploads/productivity_5-1.png (khal calendar)
[4]: https://khal.readthedocs.io/en/v0.9.2/index.html
[5]: https://github.com/pimutils/vdirsyncer
[6]: https://opensource.com/sites/default/files/uploads/productivity_5-2.png (vdirsyncer)
[7]: https://vdirsyncer.pimutils.org/en/stable/config.html#google
[8]: https://opensource.com/sites/default/files/uploads/productivity_5-3.png (Configuring khal)
[9]: https://opensource.com/sites/default/files/uploads/productivity_5-4.png (Adding a new calendar entry)
@ -0,0 +1,175 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Root User in Ubuntu: Important Things You Should Know)
[#]: via: (https://itsfoss.com/root-user-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Root User in Ubuntu: Important Things You Should Know
======

When you have just started using Linux, you’ll find many things that are different from Windows. One of those ‘different things’ is the concept of the root user.

In this beginner series, I’ll explain a few important things about the root user in Ubuntu.

**Please keep in mind that while I am writing this from an Ubuntu user’s perspective, it should be valid for most Linux distributions.**

You’ll learn the following in this article:

  * [Why the root user is disabled in Ubuntu][1]
  * [Using commands as root][2]
  * [Switching to the root user][3]
  * [Unlocking the root user][4]

### What is the root user? Why is it locked in Ubuntu?

![][5]

In Linux, there is always a super user called [root][6]. This is the super admin account that can do anything and everything with the system. It can access any file and run any command on your Linux system.

With great power comes great responsibility. The root user gives you complete power over the system, and hence it should be used with great caution. The root user can access system files and run commands to make changes to the system configuration. Hence, an incorrect command may destroy the system.

This is why [Ubuntu][7] and other Ubuntu-based distributions lock the root user by default, to save you from accidental disasters.

You don’t need root privileges for daily tasks like moving files in your home directory, downloading files from the internet, or creating documents.

_**Take this analogy to understand it better. If you have to cut a fruit, you use a kitchen knife. If you have to cut down a tree, you have to use a saw. You could use the saw to cut fruits, but that’s not wise, is it?**_

Does this mean that you cannot be root in Ubuntu or use the system with root privileges? No, you can still have root access with the help of ‘sudo’ (explained in the next section).

**Bottom line:** The root user is too powerful to be used for regular tasks. This is why it is not recommended to use root all the time. You can still run specific commands with root.

### How to run commands as root user in Ubuntu?

![Image Credit: xkcd][8]

You’ll need root privileges for some system-specific tasks. For example, if you want to [update Ubuntu via the command line][9], you cannot run the command as a regular user. It will give you a permission denied error.

```
apt update
Reading package lists... Done
E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
E: Unable to lock directory /var/lib/apt/lists/
W: Problem unlinking the file /var/cache/apt/pkgcache.bin - RemoveCaches (13: Permission denied)
W: Problem unlinking the file /var/cache/apt/srcpkgcache.bin - RemoveCaches (13: Permission denied)
```

So, how do you run commands as root? The simple answer is to add sudo before the commands that need to be run as root.

```
sudo apt update
```

Ubuntu and many other Linux distributions use a special mechanism called sudo. Sudo is a program that controls access to running commands as root (or other users).

Sudo is actually quite a versatile tool. It can be configured to allow a user to run all commands as root, or only some of them. You can also configure whether a password is required for certain commands run with sudo. It’s an extensive topic, and maybe I’ll discuss it in detail in another article.

For the moment, you should know that [when you install Ubuntu][10], you are forced to create a user account. This user account works as the admin on your system, and as per the default sudo policy in Ubuntu, it can run any command on your system with root privileges.

The thing with sudo is that running **sudo doesn’t require the root password but the user’s own password**.

And this is why, when you run a command with sudo, it asks for the password of the user who is running the sudo command:

```
abhishek@itsfoss:~$ sudo apt update
[sudo] password for abhishek:
```

As you can see in the example above, user _abhishek_ was trying to run the ‘apt update’ command with _sudo_, and the system asked for the password of _abhishek_.

If you are absolutely new to Linux, you might be surprised that when you start typing your password in the terminal, nothing happens on the screen. This is perfectly normal because, as a default security feature, nothing is displayed on the screen. Not even asterisks (*). You type your password and press enter.

**Bottom line:** To run commands as root in Ubuntu, add sudo before the command. When asked for a password, enter your account’s password. When you type the password on the screen, nothing is visible. Just keep on typing the password and press enter.

### How to become root user in Ubuntu?

You can use sudo to run commands as root. However, in situations where you have to run several commands as root and you keep forgetting to add sudo before them, you may switch to the root user temporarily.

The sudo command allows you to simulate a root login shell with this command:

```
sudo -i
```

```
abhishek@itsfoss:~$ sudo -i
[sudo] password for abhishek:
root@itsfoss:~# whoami
root
root@itsfoss:~#
```

You’ll notice that when you switch to root, the shell command prompt changes from $ (dollar sign) to # (pound sign). This makes me crack a (lame) joke that pound is stronger than dollar.

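Scripts often need the same check. Rather than parsing the prompt, the conventional test is on the effective user ID (a generic shell sketch, not from the original article):

```shell
# 'id -u' prints the effective user ID; for root it is always 0.
if [ "$(id -u)" -eq 0 ]; then
    echo "running as root"
else
    echo "running as regular user: $(whoami)"
fi
```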
_**Though I have shown you how to become the root user, I must warn you that you should avoid using the system as root. It’s discouraged for a reason, after all.**_

Another way to temporarily switch to the root user is by using the su command:

```
sudo su
```

If you try to use the su command without sudo, you’ll encounter a ‘su: Authentication failure’ error.

You can go back to being the normal user by using the exit command.

```
exit
```

### How to enable root user in Ubuntu?

By now you know that the root user is locked by default in Ubuntu-based distributions.

Linux gives you the freedom to do whatever you want with your system. Unlocking the root user is one of those freedoms.

If, for some reason, you decide to enable the root user, you can do so by setting up a password for it:

```
sudo passwd root
```

Again, this is not recommended, and I won’t encourage you to do that on your desktop. If you forget it, you won’t be able to [change the root password in Ubuntu][11] again.

You can lock the root user again by removing the password:

```
sudo passwd -dl root
```

**In the end…**

I hope you have a slightly better understanding of the root concept now. If you still have some confusion or questions about it, please let me know in the comments. I’ll try to answer your questions and might update the article as well.

--------------------------------------------------------------------------------

via: https://itsfoss.com/root-user-ubuntu/

Author: [Abhishek Prakash][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: tmp.IrHYJBAqVn#what-is-root
[2]: tmp.IrHYJBAqVn#run-command-as-root
[3]: tmp.IrHYJBAqVn#become-root
[4]: tmp.IrHYJBAqVn#enable-root
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/root_user_ubuntu.png?ssl=1
[6]: http://www.linfo.org/root.html
[7]: https://ubuntu.com/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/sudo_sandwich.png?ssl=1
[9]: https://itsfoss.com/update-ubuntu/
[10]: https://itsfoss.com/install-ubuntu/
[11]: https://itsfoss.com/how-to-hack-ubuntu-password/
@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why everyone is talking about WebAssembly)
[#]: via: (https://opensource.com/article/20/1/webassembly)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)

Why everyone is talking about WebAssembly
======

Learn more about the newest way to run any code in a web browser.

![Digital creative of a browser on the internet][1]

If you haven’t heard of [WebAssembly][2] yet, then you will soon. It’s one of the industry’s best-kept secrets, but it’s everywhere. It’s supported by all the major browsers, and it’s coming to the server side, too. It’s fast. It’s being used for gaming. It’s an open standard from the World Wide Web Consortium (W3C), the main international standards organization for the web. It’s platform-neutral and can run on Linux, Macs, and Windows.
"Wow," you may be saying, "this sounds like something I should learn to code in!" You’d be right, but you’d be wrong too; you don’t code in WebAssembly. Let’s take some time to learn about the technology that’s often fondly abbreviated to "Wasm."

### Where did it come from?

Going back about ten years, there was a growing recognition that the widely-used JavaScript wasn’t fast enough for many purposes. JavaScript was undoubtedly successful and convenient. It ran in any browser and enabled the type of dynamic web pages that we take for granted today. But it was a high-level language and wasn’t designed with compute-intensive workloads in mind.

However, although engineers responsible for the leading web browsers were generally in agreement about the performance problem, they weren’t aligned on what to do about it. Two camps emerged. Google began its Native Client project and, later, its Portable Native Client variation, to focus on allowing games and other software written in C/C++ to run in a secure compartment within Chrome. Mozilla, meanwhile, won the backing of Microsoft for asm.js, an approach that updated the browser so it can run a low-level subset of JavaScript instructions very quickly (another project enabled the conversion of C/C++ code into these instructions).

With neither camp gaining widespread adoption, the various parties agreed to join forces in 2015 around a new standard called WebAssembly that built on the basic approach taken by asm.js. [As CNET’s Stephen Shankland wrote at the time][3], "On today’s Web, the browser’s JavaScript translates those instructions into machine code. But with WebAssembly, the programmer does a lot of the work earlier in the process, producing a program that’s in between the two states. That frees the browser from a lot of the hard work of creating the machine code, but it also fulfills the promise of the Web—that software will run on any device with a browser regardless of the underlying hardware details."

In 2017, Mozilla declared it to be a minimum viable product and brought it out of preview. All the main browsers had adopted it by the end of that year. [In December 2019][4], the WebAssembly Working Group published the three WebAssembly specifications as W3C recommendations.

WebAssembly defines a portable binary code format for executable programs, a corresponding textual assembly language, and interfaces for facilitating interactions between such programs and their host environment. WebAssembly code runs within a low-level virtual machine that mimics the functionality of the many microprocessors upon which it can be run. Either through Just-In-Time (JIT) compilation or interpretation, the WebAssembly engine can perform at nearly the speed of code compiled for a native platform.

### Why the interest now?

Certainly, some of the recent interest in WebAssembly stems from that initial desire to run more compute-intensive code in browsers. Laptop users, in particular, are spending more and more of their time in a browser (or, in the case of Chromebooks, essentially all their time). This trend has created an urgency around removing barriers to running a broad range of applications within a browser. And one of those barriers is often some aspect of performance, which is what WebAssembly and its predecessors were originally conceived to address.

However, WebAssembly isn’t just for browsers. In 2019, [Mozilla announced a project called WASI][5] (WebAssembly System Interface) to standardize how WebAssembly code interacts with operating systems outside of a browser context. With the combination of browser support for WebAssembly and WASI, compiled binaries will be able to run both within and without browsers, across different devices and operating systems, at near-native speeds.

WebAssembly’s low overhead immediately makes it practical for use beyond browsers, but that’s arguably table stakes; there are obviously other ways to run applications that don’t introduce performance bottlenecks. Why use WebAssembly, specifically?

One important reason is its portability. Widely used compiled languages like C++ and Rust are probably the ones most associated with WebAssembly today. However, [a wide range of other languages][6] compile to or have their virtual machines in WebAssembly. Furthermore, while WebAssembly [assumes certain prerequisites][7] for its execution environments, it is designed to execute efficiently on a variety of operating systems and instruction set architectures. WebAssembly code can, therefore, be written using a wide range of languages and run on a wide range of operating systems and processor types.

Another WebAssembly advantage stems from the fact that code runs within a virtual machine. As a result, each WebAssembly module executes within a sandboxed environment, separated from the host runtime using fault isolation techniques. This implies, among other things, that applications execute in isolation from the rest of their host environment and can’t escape the sandbox without going through appropriate APIs.

### WebAssembly in action

What does all this mean in practice?

One example of WebAssembly in action is [Enarx][8].

Enarx is a project that provides hardware independence for securing applications using Trusted Execution Environments (TEE). Enarx lets you securely deliver an application compiled into WebAssembly all the way into a cloud provider and execute it remotely. As Red Hat security engineer [Nathaniel McCallum puts it][9], "The way that we do this is, we take your application as inputs, and we perform an attestation process with the remote hardware. We validate that the remote hardware is, in fact, the hardware that it claims to be, using cryptographic techniques. The end result of that is not only an increased level of trust in the hardware that we’re speaking to; it’s also a session key, which we can then use to deliver encrypted code and data into this environment that we have just asked for cryptographic attestation on."

Another example is OPA, the Open Policy Agent, which [announced][10] in November 2019 that you could [compile][11] their policy definition language, Rego, into WebAssembly. Rego lets you write logic to search and combine JSON/YAML data from different sources to ask questions like, "Is this API allowed?"

OPA has been used to policy-enable software including, but not limited to, Kubernetes. Simplifying policy using tools like OPA [is considered an important step][12] for properly securing Kubernetes deployments across a variety of different environments. WebAssembly’s portability and built-in security features are a good match for these tools.

Our last example is [Unity][13]. Remember, we mentioned at the beginning of the article that WebAssembly is used for gaming? Well, Unity, the cross-platform game engine, was an early adopter of WebAssembly, providing the first demo of Wasm running in browsers, and, since August 2018, [has used WebAssembly][14] as the output target for the Unity WebGL build target.
These are just a few of the ways WebAssembly has already begun to make an impact. Find out more and keep up to date with all things Wasm at <https://webassembly.org/>.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/webassembly

作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://opensource.com/article/19/8/webassembly-speed-code-reuse
[3]: https://www.cnet.com/news/the-secret-alliance-that-could-give-the-web-a-massive-speed-boost/
[4]: https://www.w3.org/blog/news/archives/8123
[5]: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/
[6]: https://github.com/appcypher/awesome-wasm-langs
[7]: https://webassembly.org/docs/portability/
[8]: https://enarx.io
[9]: https://enterprisersproject.com/article/2019/9/application-security-4-facts-confidential-computing-consortium
[10]: https://blog.openpolicyagent.org/tagged/webassembly
[11]: https://github.com/open-policy-agent/opa/tree/master/wasm
[12]: https://enterprisersproject.com/article/2019/11/kubernetes-reality-check-3-takeaways-kubecon
[13]: https://opensource.com/article/20/1/www.unity.com
[14]: https://blogs.unity3d.com/2018/08/15/webassembly-is-here/
@ -1,128 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How piwheels will save Raspberry Pi users time in 2020)
[#]: via: (https://opensource.com/article/20/1/piwheels)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)

piwheels 是如何在 2020 年节省树莓派用户的时间的
======

通过为树莓派提供预编译的 Python 包,piwheels 项目为用户节省了大量的时间和精力。

![rainbow colors on pinwheels in the sun][1]

piwheels 使用树莓派硬件,自动为 [PyPI][2](即 Python 包索引)上的所有项目构建 Python wheels(预编译的 Python 包),以确保其兼容性。这意味着,当树莓派用户想要使用 **pip** 安装一个 Python 库时,他们会得到一个现成的编译版本,并保证可以在树莓派上良好地工作。这使得树莓派用户更容易上手并开始他们的项目。

![Piwheels logo][3]

当我在 2018 年 10 月写 [piwheels:为树莓派提供快速的 Python 包安装][4]时,piwheels 项目才进入第一年,但已经证明了它能为树莓派用户节省大量时间和精力。如今项目进入了第二年,它在为树莓派提供预编译的 Python 包这条路上走得更远了。

![Raspberry Pi 4][5]

### 它是怎么工作的

树莓派的主要操作系统 [Raspbian][6] 预先配置好了使用 piwheels,所以用户不需要做任何特殊的设置就可以使用它。

配置文件(位于 **/etc/pip.conf**)告诉 pip 把 [piwheels.org][7] 用作 _附加索引_,因此 pip 会先查看 PyPI,再查看 piwheels。piwheels 网站托管在一台树莓派 3 上,该项目构建的所有 wheels 都存放在这台树莓派上。它每月提供 100 多万个软件包——对于一台 35 美元的电脑来说还不错!
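这种 _附加索引_ 配置的格式大致如下。下面的示例在临时目录中写出一份同样内容的文件来演示(示例路径仅为演示用途;在 Raspbian 上实际生效的是 `/etc/pip.conf`,无需手动创建):

```shell
# 演示 pip 附加索引配置的写法(写入临时目录,不改动系统文件)
mkdir -p /tmp/pip-demo
cat > /tmp/pip-demo/pip.conf <<'EOF'
[global]
extra-index-url=https://www.piwheels.org/simple
EOF
# 查看生成的配置内容
cat /tmp/pip-demo/pip.conf
```

有了这一配置,`pip` 仍会优先使用 PyPI,只是在 PyPI 上没有合适的 wheel 时再从 piwheels 获取预编译版本。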
除了主站之外,piwheels 项目还使用另外七个树莓派来构建软件包。其中一些运行 Raspbian Jessie,为 Python 3.4 构建 wheels;一些运行 Raspbian Stretch,为 Python 3.5 构建;还有一些运行 Raspbian Buster,为 Python 3.7 构建。该项目目前不支持其他 Python 版本。此外还有一台“正式的”服务器——一台运行 Postgres 数据库的虚拟机。由于树莓派 3 只有 1GB 内存,(非常大的)数据库不能在其上很好地运行,所以我们把它移到了虚拟机上。带 4GB 内存的树莓派 4 可能是合适的,所以我们将来可能会用到它。

这些树莓派都在“派云”中的 IPv6 网络上——这是一项由总部位于剑桥的托管公司 [Mythic Beasts][8] 提供的卓越服务。

![Mythic Beasts hosting service][9]

### 下载和统计趋势

每次下载 piwheels 文件时,都会被记录在数据库中。这提供了哪些软件包最受欢迎、人们在使用什么 Python 版本和操作系统的统计信息。我们没有太多来自用户代理的信息,但是因为树莓派 1/Zero 的架构显示为 “armv6”,树莓派 2/3/4 显示为 “armv7”,所以我们可以将它们区分开来。

截至 2019 年 12 月中旬,从 piwheels 下载的软件包已超过 1400 万个,仅 2019 年就有近 900 万个。

自项目开始以来最受欢迎的 10 个软件包是:

  1. [pycparser][10] (821,060 次下载)
  2. [PyYAML][11] (366,979)
  3. [numpy][12] (354,531)
  4. [cffi][13] (336,982)
  5. [MarkupSafe][14] (318,878)
  6. [future][15] (282,349)
  7. [aiohttp][16] (277,046)
  8. [cryptography][17] (276,167)
  9. [home-assistant-frontend][18] (266,667)
  10. [multidict][19] (256,185)
请注意,许多纯 Python 包,如 [urllib3][20],在 PyPI 上本身就以 wheels 的形式提供;因为它们是跨平台兼容的,而 PyPI 优先,所以通常不会从 piwheels 下载。

随着时间的推移,我们也看到了各个 Python 版本使用情况的趋势。下图显示了 Raspbian Buster 发布时,使用的版本从 Python 3.5 快速升级到 Python 3.7 的过程:

![Data from piwheels on Python versions used over time][21]

你可以在这些[统计博客文章][22]中看到更多的统计趋势。

### 节省的时间

每个软件包的构建都被记录在数据库中,每次下载也会被存储。将下载数据与构建耗时交叉对照,可以算出节省了多少时间。一个例子是 numpy——最新版本大约需要 11 分钟来构建。

迄今为止,piwheels 项目已经为用户节省了总计超过 165 年的构建时间。按照目前的使用率,piwheels 项目每天可以节省 200 多天的构建时间。

除了节省构建时间,拥有预编译的 wheels 也意味着人们不必安装各种开发工具来构建软件包。一些软件包需要其他 apt 软件包来访问共享库。弄清楚你需要哪些依赖可能会很麻烦,所以我们也让这一步变得容易了。首先,我们弄清楚了这个过程,并[在博客上记录了下来][23]。然后,我们将这个逻辑添加到构建过程中,这样当构建一个 wheel 时,它的依赖关系会被自动计算并添加到该软件包的项目页面中:

![numpy dependencies][24]

### piwheels 的下一步是什么?

今年,我们推出了项目页面(例如 [numpy][25]),这是一种非常有用的方式,可以让人们以人类可读的方式查找项目信息。它们还使人们更容易报告问题,例如 piwheels 中缺少某个项目,或者他们下载的软件包有问题。

2020 年初,我们计划对 piwheels 项目进行一些升级,以启用新的 JSON API,这样你就可以自动检查哪些版本可用、查找项目的依赖关系,等等。

下一次 Debian/Raspbian 升级要到 2021 年年中才会进行,所以在那之前我们不会开始为任何新的 Python 版本构建 wheels。

你可以在这个项目的[博客][26]上读到更多关于 piwheels 的信息,我将在 2020 年初在那里发表一篇 2019 年的回顾文章。你也可以在 Twitter 上关注 [@piwheels][27],在那里你可以看到每日和每月的统计数据,以及达成的里程碑。

当然,piwheels 是一个开源项目,你可以在 [GitHub][28] 上看到整个项目的源代码。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/piwheels

作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rainbow-pinwheel-piwheel-diversity-inclusion.png?itok=di41Wd3V (rainbow colors on pinwheels in the sun)
[2]: https://pypi.org/
[3]: https://opensource.com/sites/default/files/uploads/piwheels.png (Piwheels logo)
[4]: https://opensource.com/article/18/10/piwheels-python-raspberrypi
[5]: https://opensource.com/sites/default/files/uploads/raspberry-pi-4_0.jpg (Raspberry Pi 4)
[6]: https://www.raspberrypi.org/downloads/raspbian/
[7]: http://piwheels.org
[8]: https://www.mythic-beasts.com/order/rpi
[9]: https://opensource.com/sites/default/files/uploads/pi-cloud.png (Mythic Beasts hosting service)
[10]: https://www.piwheels.org/project/pycparser
[11]: https://www.piwheels.org/project/PyYAML
[12]: https://www.piwheels.org/project/numpy
[13]: https://www.piwheels.org/project/cffi
[14]: https://www.piwheels.org/project/MarkupSafe
[15]: https://www.piwheels.org/project/future
[16]: https://www.piwheels.org/project/aiohttp
[17]: https://www.piwheels.org/project/cryptography
[18]: https://www.piwheels.org/project/home-assistant-frontend
[19]: https://www.piwheels.org/project/multidict
[20]: https://piwheels.org/project/urllib3/
[21]: https://opensource.com/sites/default/files/uploads/pyvers2019.png (Data from piwheels on Python versions used over time)
[22]: https://blog.piwheels.org/piwheels-stats-for-2019/
[23]: https://blog.piwheels.org/how-to-work-out-the-missing-dependencies-for-a-python-package/
[24]: https://opensource.com/sites/default/files/uploads/numpy-deps.png (numpy dependencies)
[25]: https://www.piwheels.org/project/numpy/
[26]: https://blog.piwheels.org/
[27]: https://twitter.com/piwheels
[28]: https://github.com/piwheels/
@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Sync files across multiple devices with Syncthing)
[#]: via: (https://opensource.com/article/20/1/sync-files-syncthing)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)

使用 Syncthing 在多个设备间同步文件
======

在我们 2020 年的 20 个使用开源提升生产力的系列文章中,首先了解如何使用 Syncthing 同步文件。

![Files in a folder][1]

去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。

### 使用 Syncthing 同步文件

配置新机器很麻烦。我们都有需要在机器之间复制的“标准设置”。多年来,我使用了很多方法来使它们在计算机之间同步。在过去(这会告诉你我年纪有多大了),曾经是软盘、然后是 Zip 磁盘、U 盘、SCP、Rsync、Dropbox、ownCloud,你想到的都试过。但这些似乎对我都不够好。

然后我偶然发现了 [Syncthing][2]。

![syncthing console][3]

Syncthing 是一个轻量级的点对点文件同步系统。你不需要为服务付费,也不需要第三方服务器,而且速度很快。以我的经验,比文件同步中的许多“大牌”要快得多。

Syncthing 可在 Linux、MacOS、Windows 和多种 BSD 中使用。还有一个 Android 应用(但尚无官方 iOS 版本)。以上所有平台都有方便的图形化前端(尽管我不会在这里介绍)。在 Linux 上,大多数发行版都有可用的软件包,因此安装非常简单。

![Installing Syncthing on Ubuntu][4]

首次启动 Syncthing 时,它将启动 Web 浏览器以配置守护程序。第一台计算机上没有太多要做的,但是这是一个很好的机会来介绍一下用户界面(UI)。最重要的是在右上方的 **Actions** 菜单下的 “System ID”。

![Machine ID][5]

设置第一台计算机后,请在第二台计算机上重复安装。在 UI 中,右下方将显示一个按钮,名为 **Add Remote Device**。单击按钮,你将会看到一个要求输入**设备 ID 和设备名**的框。从第一台计算机上复制并粘贴**设备 ID**,然后单击 **Save**。

你应该会在第一台上看到一个请求添加第二台的弹出窗口。接受后,新机器将显示在第一台机器的右下角。与第二台计算机共享默认目录。单击 **Default Folder**,然后单击 **Edit** 按钮。弹出窗口的顶部有四个链接。单击 **Sharing**,然后选择第二台计算机。单击 **Save**,然后查看第二台计算机。你会看到一个接受共享目录的提示。接受后,它将开始在两台计算机之间同步文件。

![Sharing a directory in Syncthing][6]

测试一下:从一台计算机上复制文件到默认目录(**/your/home/Share**)。它应该很快会在另一台上出现。

你可以根据需要添加任意数量的目录,这非常方便。如你在第一张图中所看到的,我有一个用于保存配置的 **myconfigs** 文件夹。当我买了一台新机器时,我只需安装 Syncthing;而如果我在一台机器上调整了配置,我不必逐台更新,它会自动同步到所有机器。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/sync-files-syncthing

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://syncthing.net/
[3]: https://opensource.com/sites/default/files/uploads/productivity_1-1.png (syncthing console)
[4]: https://opensource.com/sites/default/files/uploads/productivity_1-2.png (Installing Syncthing on Ubuntu)
[5]: https://opensource.com/sites/default/files/uploads/productivity_1-3.png (Machine ID)
[6]: https://opensource.com/sites/default/files/uploads/productivity_1-4.png (Sharing a directory in Syncthing)
@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use Stow for configuration management of multiple machines)
[#]: via: (https://opensource.com/article/20/1/configuration-management-stow)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)

使用 Stow 管理多台机器的配置
======

在我们 2020 年的 20 个使用开源提升生产力的系列文章中,了解如何使用 Stow 跨机器管理配置。

![A person programming][1]

去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。

### 使用 Stow 管理符号链接

昨天,我解释了如何使用 [Syncthing][2] 在多台计算机上保持文件同步。但是,这只是我用来保持配置一致性的工具之一。还有另一个看似简单的工具 [Stow][3]。

![Stow help screen][4]

Stow 管理符号链接。默认情况下,它会把目录中的内容链接到其上一级目录。它也有设置源目录和目标目录的选项,但我通常不使用它们。

正如我在 Syncthing 的[文章][5]中提到的,我使用 Syncthing 来保持 **myconfigs** 目录在我所有的计算机上一致。**myconfigs** 目录下面有多个子目录,每个子目录包含我经常使用的某个应用的配置文件。

![myconfigs directory][6]

在每台计算机上,我进入 **myconfigs** 目录,并运行 **stow -S <目录名称>** 以将目录中的文件符号链接到我的家目录。例如,在 **vim** 目录下,我有 **.vimrc** 文件和 **.vim** 目录。在每台机器上,我运行 **stow -S vim** 来创建符号链接 **~/.vimrc** 和 **~/.vim**。当我在一台计算机上更改 Vim 配置时,它会应用到我的所有机器上。
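stow 创建符号链接的效果可以用 `ln -s` 手工模拟出来,以帮助理解它实际做了什么(以下目录与文件名均为临时演示用的假设,不需要安装 stow):

```shell
# 模拟 “cd ~/myconfigs && stow -S vim” 的效果:
# stow 默认把包目录(vim/)中的条目链接到 stow 目录的上一级目录
mkdir -p /tmp/stow-demo/myconfigs/vim
echo 'set number' > /tmp/stow-demo/myconfigs/vim/.vimrc
# 在目标目录(这里用 /tmp/stow-demo 代替家目录)创建相对符号链接
ln -sfn myconfigs/vim/.vimrc /tmp/stow-demo/.vimrc
# ls -l 会显示 .vimrc 是指向 myconfigs/vim/.vimrc 的符号链接
ls -l /tmp/stow-demo/.vimrc
```

相应地,取消链接(对应 **stow -D**)只会删除这些符号链接本身,包目录里的原始文件不受影响。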
然而,有时候,我需要一些特定于某台机器的配置,这就是为什么我有如 **msmtp-personal** 和 **msmtp-elastic**(我的雇主)这样的目录。由于我的 **msmtp** SMTP 客户端需要知道要中继到的电子邮件服务器,而每个服务器都有不同的设置和凭据,我会使用 **-D** 标志取消链接其中一个,再链接另外一个。

![Unstow one, stow the other][7]

有时我要给某个配置添加文件。为此,有一个 **-R** 选项来“重新链接”。例如,我喜欢在图形化 Vim 中使用一种与控制台不同的特定字体。除了标准的 **.vimrc** 文件,**.gvimrc** 文件能让我设置特定于图形化版本的选项。当我第一次设置它时,我把 **~/.gvimrc** 移动到 **~/myconfigs/vim** 中,然后运行 **stow -R vim**,它会取消链接并重新链接该目录中的所有内容。

Stow 让我可以用一条简单的命令在多种配置之间切换,并且,结合 Syncthing,我可以确保无论我身在何处、在哪台机器上进行更改,我都有我喜欢的工具的设置。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/configuration-management-stow

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb (A person programming)
[2]: https://syncthing.net/
[3]: https://www.gnu.org/software/stow/
[4]: https://opensource.com/sites/default/files/uploads/productivity_2-1.png (Stow help screen)
[5]: https://opensource.com/article/20/1/20-productivity-tools-syncthing
[6]: https://opensource.com/sites/default/files/uploads/productivity_2-2.png (myconfigs directory)
[7]: https://opensource.com/sites/default/files/uploads/productivity_2-3.png (Unstow one, stow the other)