Commit f749bbcfe3 by runningwater, 2019-09-20 19:15:43 +08:00
54 changed files with 6470 additions and 1937 deletions

View File

@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11354-1.html)
[#]: subject: (Firefox 69 available in Fedora)
[#]: via: (https://fedoramagazine.org/firefox-69-available-in-fedora/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
Firefox 69 available in Fedora
======
![][1]
When you install Fedora Workstation, you'll find the world-renowned Firefox browser included. The Mozilla Foundation underwrites work on Firefox, as well as other projects that promote an open, safe, and privacy-respecting Internet. Firefox already features a fast browsing engine and numerous privacy features.
A community of developers continues to improve and enhance Firefox. The latest version, Firefox 69, was released recently and you can get it for your stable Fedora system (30 and later). Read on for more details.
### New features in Firefox 69
The newest version of Firefox includes [Enhanced Tracking Protection][2] (ETP). When you use Firefox 69 with a new (or reset) settings profile, the browser makes it harder for sites to track your information or misuse your computer resources.
For instance, less scrupulous websites use scripts that cause your system to do lots of intense calculations to produce cryptocurrency, called [cryptomining][3]. Cryptomining happens without your knowledge or permission and is therefore a misuse of your system. The new standard setting in Firefox 69 prevents sites from this kind of abuse.
Firefox 69 also has additional settings to prevent sites from identifying or fingerprinting your browser for later use. These improvements give you additional protection from having your activities tracked online.
Another common annoyance is videos that start playing without warning. Video playback also uses extra CPU power, and you may not want this happening on your laptop without permission. Firefox already stops this from happening with the [Block Autoplay][4] feature. Firefox 69 additionally lets you stop videos from playing even if they start silently. This feature prevents unwanted sudden noise. It also solves more of the real problem: having your computer's resources used without permission.
There are numerous other new features in the new release. Read more about them in the [Firefox release notes][5].
### How to get the update
Firefox 69 is available in the stable Fedora 30 and pre-release Fedora 31 repositories, as well as Rawhide. The update is provided by Fedora's maintainers of the Firefox package. The maintainers also ensured an update to Mozilla's Network Security Services (the nss package). We appreciate the hard work of the Mozilla project and Firefox community in providing this new release.
If you're using Fedora 30 or later, use the *Software* tool on Fedora Workstation, or run the following command on any Fedora system:
```
$ sudo dnf --refresh upgrade firefox
```
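If you want to confirm which build you ended up with after the update, one quick check (assuming the RPM-packaged Firefox rather than a Flatpak or upstream Mozilla binary) is:
```
$ rpm -q firefox
```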
If you're on Fedora 29, please [help test the update][6] so it can become stable and easily available for all users.
Firefox may prompt you to upgrade your profile to use the new settings. To take advantage of the new features, you should do so.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/firefox-69-available-in-fedora/
Author: [Paul W. Frields][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/firefox-v69-816x345.jpg
[2]: https://blog.mozilla.org/blog/2019/09/03/todays-firefox-blocks-third-party-tracking-cookies-and-cryptomining-by-default/
[3]: https://www.webopedia.com/TERM/C/cryptocurrency-mining.html
[4]: https://support.mozilla.org/kb/block-autoplay
[5]: https://www.mozilla.org/en-US/firefox/69.0/releasenotes/
[6]: https://bodhi.fedoraproject.org/updates/FEDORA-2019-89ae5bb576

View File

@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11356-1.html)
[#]: subject: (Managing Ansible environments on MacOS with Conda)
[#]: via: (https://opensource.com/article/19/8/using-conda-ansible-administration-macos)
[#]: author: (James Farrell https://opensource.com/users/jamesf)
Managing Ansible environments on MacOS with Conda
=====
> Conda collects everything Ansible needs into a virtual environment and keeps it separate from your other projects.
![](https://img.linux.net.cn/data/attachment/album/201909/18/123838m1bcmke570kl6kzm.jpg)
If you are a Python developer using MacOS and involved with Ansible administration, you may want to use the Conda package manager to keep your Ansible work separate from the core OS and other local projects.
Ansible is based on Python. Conda is not required to make Ansible work on MacOS, but it does make managing Python versions and package dependencies easier. This allows you to use an upgraded Python version on MacOS and to keep the Python package dependencies separate between your system, Ansible, and your other programming projects.
There are other ways to install Ansible on MacOS. You could use [Homebrew][2], but if you are interested in Python development (or Ansible development), you might find that managing Ansible in an independent Python virtual environment reduces some confusion. I find it simpler: rather than trying to load Python versions and dependencies into the system or the `/usr/local` directory, Conda helps me collect everything Ansible needs into a single virtual environment and keep it completely separate from other projects.
This article focuses on using Conda to manage Ansible as a Python project, keeping it clean and separate from other projects. Read on to learn how to install Conda, create a new virtual environment, install Ansible, and test it.
### Prelude
Recently, I wanted to learn [Ansible][3], so I needed to find the best way to install it.
I am generally cautious about installing things on my daily-use workstation. I especially dislike applying manual updates to a vendor's default OS installation (a habit from my years as a Unix systems administrator). I really wanted to use Python 3.7, but MacOS ships the older Python 2.7, and I was not going to install any global Python packages that might interfere with the core MacOS system.
So, I started my Ansible work using a local Ubuntu 18.04 virtual machine. That provided real, safe isolation, but I soon found that managing it was tedious. So I set out to see how to get a flexible but isolated Ansible system on native MacOS.
Since Ansible is based on Python, Conda seemed to be the ideal solution.
### Installing Conda
Conda is open source software that provides convenient package and environment management features. It can help you manage multiple versions of Python, install package dependencies, perform upgrades, and maintain project isolation. If you are manually managing Python virtual environments, Conda will help streamline and manage your work. Browse the [Conda documentation][4] for all the details.
I chose the [Miniconda][5] Python 3.7 installation for my workstation because I wanted the latest Python version. Regardless of which version you select, you can always install new virtual environments with other versions of Python.
To install Conda, download the PKG format file, do the usual double-click, and select the "Install for me only" option. The install took about 158MB of space on my system.
After the installation, bring up a terminal to see what you have. You should see:
* a `miniconda3` directory in your home directory
* the shell prompt modified to prepend `(base)`
* the `.bash_profile` file updated with some Conda-specific settings
Now that the base is installed, you have your first Python virtual environment. Running the usual Python version check proves this, and your `PATH` will point to the new location:
```
(base) $ which python
/Users/jfarrell/miniconda3/bin/python
(base) $ python --version
Python 3.7.1
```
With Conda installed, the next step is to set up a virtual environment, then install Ansible and get it running.
### Create a virtual environment for Ansible
I wanted to keep Ansible separate from my other Python projects, so I created a new virtual environment and switched over to it:
```
(base) $ conda create --name ansible-env --clone base
(base) $ conda activate ansible-env
(ansible-env) $ conda env list
```
The first command clones the Conda base into a new virtual environment called `ansible-env`. The clone brings in the Python 3.7 version and a collection of default Python modules that you can add to, remove, or upgrade as needed.
The second command changes the shell context to the new environment. It sets the proper paths for Python and the modules it contains. Notice that your shell prompt changes after the `conda activate ansible-env` command.
The third command is not required; it lists which Python modules are installed, along with their versions and other data.
You can switch to another virtual environment with Conda's `activate` command at any time. This command brings you back to the base environment: `conda activate base`.
### Installing Ansible
There are various ways to install Ansible, but using Conda keeps the Ansible version and all of its required dependencies packaged in one place. Conda provides the flexibility both to keep everything separate and to add other new environments as needed (as I'll demonstrate later).
To install a relatively recent version of Ansible, use:
```
(base) $ conda activate ansible-env
(ansible-env) $ conda install -c conda-forge ansible
```
Since Ansible is not part of Conda's default channels, `-c` is used to search for and install it from an alternate channel. Ansible is now installed in the `ansible-env` virtual environment and is ready to use.
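If you ever need a specific release rather than whatever is current, Conda's version-constraint syntax works here too; the version below is only an example:
```
(ansible-env) $ conda install -c conda-forge ansible=2.8.1
```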
### Using Ansible
Now that you have an Ansible-equipped Conda virtual environment, you are ready to use it. First, make sure the node you want to control has your workstation's SSH key installed in the correct user account.
Bring up a new shell and run some basic Ansible commands:
```
(base) $ conda activate ansible-env
(ansible-env) $ ansible --version
ansible 2.8.1
config file = None
configured module search path = ['/Users/jfarrell/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jfarrell/miniconda3/envs/ansibleTest/lib/python3.7/site-packages/ansible
executable location = /Users/jfarrell/miniconda3/envs/ansibleTest/bin/ansible
python version = 3.7.1 (default, Dec 14 2018, 13:28:58) [Clang 4.0.1 (tags/RELEASE_401/final)]
(ansible-env) $ ansible all -m ping -u ansible
192.168.99.200 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
```
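For the `ansible all -m ping -u ansible` test above to find the node, Ansible needs an inventory entry for it. A minimal sketch, assuming the default inventory location and reusing the address from the output above:
```
$ sudo mkdir -p /etc/ansible
$ echo "192.168.99.200" | sudo tee -a /etc/ansible/hosts
```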
Now that Ansible is working, you can leave the console VM behind and use Ansible right from your native MacOS workstation.
### Cloning the Ansible environment for Ansible development
This part is completely optional; you only need it if you want additional virtual environments for modifying Ansible or for safely experimenting with questionable Python modules. You can clone your main Ansible environment into a development copy with:
```
(ansible-env) $ conda create --name ansible-dev --clone ansible-env
(ansible-env) $ conda activate ansible-dev
(ansible-dev) $
```
### Gotchas to look out for
Occasionally you may run into trouble with Conda. You can usually delete a bad environment with:
```
$ conda activate base
$ conda remove --name ansible-dev --all
```
If you get errors that you cannot resolve, you can usually delete an environment directly by finding it under `~/miniconda3/envs` and removing the entire directory. If the base environment becomes corrupted, you can remove the entire `~/miniconda3` directory and reinstall it from the PKG file. Just be sure to preserve `~/miniconda3/envs`, or use the Conda tools to export the environment configurations and recreate them later.
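As a sketch of that export-and-recreate approach (the file name here is my choice):
```
(ansible-env) $ conda env export > ansible-env.yml
(base) $ conda env create -f ansible-env.yml
```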
The `sshpass` program is not included on MacOS. You only need it if your Ansible work requires you to supply Ansible with an SSH login password. You can find the current [sshpass source code][6] on SourceForge.
Finally, the base list of Conda Python modules may lack some modules you need for your work. If you need to install one, the preferred command is `conda install package`, but `pip` can be used where needed, and Conda will recognize the modules it installs.
### Conclusion
Ansible is a powerful automation tool that is worth learning. Conda is a simple and effective tool for managing Python virtual environments.
Keeping software installations separate in your MacOS environment is a prudent way to maintain the stability and sanity of your daily work environment. Conda is especially helpful for upgrading your Python version, keeping Ansible separate from your other projects, and safely hacking on Ansible.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/using-conda-ansible-administration-macos
Author: [James Farrell][a]
Topic selection: [lujun9972][b]
Translator: [heguangzhi](https://github.com/heguangzhi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/jamesf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
[2]: https://brew.sh/
[3]: https://docs.ansible.com/?extIdCarryOver=true&sc_cid=701f2000001OH6uAAG
[4]: https://conda.io/projects/conda/en/latest/index.html
[5]: https://docs.conda.io/en/latest/miniconda.html
[6]: https://sourceforge.net/projects/sshpass/

View File

@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: (qfzy1233)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11359-1.html)
[#]: subject: (How to Change Themes in Linux Mint)
[#]: via: (https://itsfoss.com/install-themes-linux-mint/)
[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)
How to Change Themes in Linux Mint
======
![](https://img.linux.net.cn/data/attachment/album/201909/19/100317ixxp3y1l7lljl47a.jpg)
Using Linux Mint with the Cinnamon desktop environment has always been an excellent experience. It's one of the main reasons [why I love Linux Mint][1].
Ever since Mint's development team [started taking design more seriously][2], the "Themes" applet has become the main way to change new themes, icons, button styles, window borders, and mouse pointers, and of course you can also install new themes directly through it. Interested? Let's get started.
### How to change themes in Linux Mint
Search for themes in the menu and open the Themes applet.
![Theme Applet provides an easy way of installing and changing themes][3]
The applet has an "Add/Remove" button; pretty simple, right? Clicking it, you can see the themes from Cinnamon Spices (Cinnamon's official addon repository) ordered by popularity.
![Installing new themes in Linux Mint Cinnamon][4]
要安装主题你所要做的就是点击你喜欢的主题然后等待它下载。之后主题将在应用第一页的“Desktop”选项中显示可用。只需双击已安装的主题之一就可以开始使用它。
![Changing themes in Linux Mint Cinnamon][5]
Here is the default Linux Mint look:
![Linux Mint Default Theme][6]
And here it is after I changed the theme:
![Linux Mint with Carta Theme][7]
All the themes are also available on the Cinnamon Spices website, with more information and bigger screenshots so you can get a better idea of how your system will look.
- [Browse Cinnamon themes][8]
### Installing third-party themes in Linux Mint
> "I saw this brilliant theme on another site, but it isn't on the Cinnamon Spices website…"
Cinnamon Spices collects many excellent themes, but you will still find that a theme you saw elsewhere has not been included on the official Cinnamon Spices website.
In that case, you may wish there were another way, right? You might think there is (I mean… of course there is). First of all, we can find some cool themes on other websites.
I recommend going to Cinnamon Look and browsing the themes there. Download the ones you like.
- [Get more themes at Cinnamon Look][9]
After downloading your preferred theme, you will now have a compressed file with everything you need for the installation. Extract it and save it to `~/.themes`. Confused? The `~` stands for the path to your home directory: `/home/{YOURUSER}/.themes`.
So go to your home directory. Press `Ctrl+H` to [show hidden files in Linux][11]. If you don't see a `.themes` folder, create a new folder and name it `.themes`. Remember, the dot at the beginning of the folder name is important.
Copy the extracted theme folder from your Downloads directory to the `.themes` folder in your home directory.
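Done from a terminal, the same steps might look like this (the archive name is hypothetical):
```
$ mkdir -p ~/.themes
$ unzip ~/Downloads/my-theme.zip -d ~/.themes
```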
Finally, look for the installed theme in the applet mentioned above.
> Note
>
> Remember that the theme has to work with Cinnamon; even though Cinnamon is forked from GNOME, not every GNOME theme works with Cinnamon.
Changing themes is one part of customizing Cinnamon. You can also [change the look of Linux Mint by changing the icons][12].
I hope you now know how to change themes in Linux Mint. Go pick the one you like!
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-themes-linux-mint/
Author: [It's FOSS][a]
Topic selection: [lujun9972][b]
Translator: [qfzy1233](https://github.com/qfzy1233)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/tiny-features-linux-mint-cinnamon/
[2]: https://itsfoss.com/linux-mint-new-design/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-1.jpg?resize=800%2C625&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-2.jpg?resize=800%2C625&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-3.jpg?resize=800%2C450&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-mint-default-theme.jpg?resize=800%2C450&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-mint-carta-theme.jpg?resize=800%2C450&ssl=1
[8]: https://cinnamon-spices.linuxmint.com/themes
[9]: https://www.cinnamon-look.org/
[10]: https://itsfoss.com/failed-to-start-session-ubuntu-14-04/
[11]: https://itsfoss.com/hide-folders-and-show-hidden-files-in-ubuntu-beginner-trick/
[12]: https://itsfoss.com/install-icon-linux-mint/

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11362-1.html)
[#]: subject: (Bash Script to Send a Mail About New User Account Creation)
[#]: via: (https://www.2daygeek.com/linux-shell-script-to-monitor-user-creation-send-email/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
@ -10,37 +10,27 @@
Bash Script to Send a Mail About New User Account Creation
======
For some reason, you might need to keep track of new user creation details on Linux.
![](https://img.linux.net.cn/data/attachment/album/201909/20/093308a615tcuiopctvp5t.jpg)
Also, you might need to send the details by mail.
This might be part of an audit objective, or the security team might want to monitor it for tracking purposes.
For some reason, you might need to keep track of new user creation details on Linux. Also, you might need to send the details by mail. This might be part of an audit objective, or the security team might want to monitor it for tracking purposes.
We can do this in other ways, as we described in a previous article.
* **[Bash script to send a mail when a new user account is created in the system][1]**
* [Bash script to send a mail when a new user account is created in the system][1]
There are many open source monitoring tools available for Linux.
But I don't think they have a way to track the new user creation process and alert the administrator when it happens.
There are many open source monitoring tools available for Linux. But I don't think they have a way to track the new user creation process and alert the administrator when it happens.
So how can we achieve this?
We can write our own Bash script to achieve it.
We have written many useful shell scripts in the past. If you want to check them out, go to the link below.
* **[How to automate day-to-day activities using shell scripts?][2]**
We can write our own Bash script to achieve it. We have written many useful shell scripts in the past. If you want to check them out, go to the link below.
* [How to automate day-to-day activities using shell scripts?][2]
### What does this script do?
This backs up the "/etc/passwd" file twice a day (at the start and at the end of the day), which enables you to get the new user creation details for a given day.
This backs up the `/etc/passwd` file twice a day (at the start and at the end of the day), which enables you to get the new user creation details for a given day.
We need to add the following two cron jobs to copy the "/etc/passwd" file.
We need to add the following two cron jobs to copy the `/etc/passwd` file.
```
# crontab -e
@ -49,7 +39,7 @@ Linux 有许多开源监控工具可以使用。
59 23 * * * cp /etc/passwd /opt/scripts/passwd-end-$(date +"%Y-%m-%d")
```
It uses the "difference" command to detect differences between the files, and if any difference from yesterday is found, the script sends the new user details to the given email address.
It uses the `diff` command to detect differences between the files, and if any difference from yesterday is found, the script sends the new user details to the given email address.
We don't need to run this script frequently, because user creation doesn't happen often. However, we plan to run it once a day.
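As a sketch, a third cron entry in the style of the two above could handle that daily run (the time of day is arbitrary):
```
# crontab -e
0 9 * * * /opt/scripts/new-user-detail.sh
```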
@ -66,21 +56,21 @@ mv /opt/scripts/passwd-end-$(date --date='yesterday' '+%Y-%m-%d') /opt/scripts/p
ucount=$(diff /opt/scripts/passwd-start /opt/scripts/passwd-end | grep ">" | cut -d":" -f6 | cut -d"/" -f3 | wc -l)
if [ $ucount -gt 0 ]
then
SUBJECT="ATTENTION: New User Account is created on server : `date --date='yesterday' '+%b %e'`"
MESSAGE="/tmp/new-user-logs.txt"
TO="2daygeek@gmail.com"
echo "Hostname: `hostname`" >> $MESSAGE
echo -e "\n" >> $MESSAGE
echo "The New User Details are below." >> $MESSAGE
echo "+------------------------------+" >> $MESSAGE
diff /opt/scripts/passwd-start /opt/scripts/passwd-end | grep ">" | cut -d":" -f6 | cut -d"/" -f3 >> $MESSAGE
echo "+------------------------------+" >> $MESSAGE
mail -s "$SUBJECT" "$TO" < $MESSAGE
rm $MESSAGE
fi
SUBJECT="ATTENTION: New User Account is created on server : `date --date='yesterday' '+%b %e'`"
MESSAGE="/tmp/new-user-logs.txt"
TO="2daygeek@gmail.com"
echo "Hostname: `hostname`" >> $MESSAGE
echo -e "\n" >> $MESSAGE
echo "The New User Details are below." >> $MESSAGE
echo "+------------------------------+" >> $MESSAGE
diff /opt/scripts/passwd-start /opt/scripts/passwd-end | grep ">" | cut -d":" -f6 | cut -d"/" -f3 >> $MESSAGE
echo "+------------------------------+" >> $MESSAGE
mail -s "$SUBJECT" "$TO" < $MESSAGE
rm $MESSAGE
fi
```
Add executable permission to the "new-user-detail.sh" file.
Add executable permission to the `new-user-detail.sh` file.
```
$ chmod +x /opt/scripts/new-user-detail.sh
@ -116,7 +106,7 @@ via: https://www.2daygeek.com/linux-shell-script-to-monitor-user-creation-send-e
Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11364-1.html)
[#]: subject: (An introduction to Virtual Machine Manager)
[#]: via: (https://opensource.com/article/19/9/introduction-virtual-machine-manager)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
虚拟机管理器Virtual Machine Manager简介
======
> virt-manager 为 Linux 虚拟化提供了全方位的选择。
![](https://img.linux.net.cn/data/attachment/album/201909/20/113434dxbbp3ttmxbhmnnm.jpg)
In my [series of articles][2] about [GNOME Boxes][3], I explained how Linux users can quickly spin up virtual machines on their desktop. Boxes makes it easy to create a VM when you only need a simple configuration.
But if you need to configure more details in your virtual machine, you need a tool that provides full options for disks, network interface cards (NICs), and other hardware. That is where [Virtual Machine Manager][4] (virt-manager) comes in. If you don't see it in your application menu, you can install it from your package manager or the command line:
* On Fedora: `sudo dnf install virt-manager`
* On Ubuntu: `sudo apt install virt-manager`
Once installed, you can launch it from the application menu or by typing `virt-manager` on the command line.
![Virtual Machine Manager's main screen][5]
To demonstrate creating a virtual machine with virt-manager, I'll set up a Red Hat Enterprise Linux 8 virtual machine.
To begin, click "File" and then "New Virtual Machine". Virt-manager's developers have labeled each step (e.g., "Step 1 of 5") to keep things simple. Click "Local install media" and "Forward".
![Step 1 virtual machine creation][6]
On the next page, select the ISO file of the operating system you want to install. The RHEL 8 image is in my Downloads directory. Virt-manager automatically detects the operating system.
![Step 2 Choose the ISO File][7]
In step 3, you can specify the virtual machine's memory and CPUs. The defaults are 1,024MB of memory and one CPU.
![Step 3 Set CPU and Memory][8]
I want to give RHEL ample room to run, and the hardware I'm using is well equipped, so I'll increase them to 4,096MB and two CPUs, respectively.
The next step configures storage for the virtual machine; the default setting is a 10GB disk. (I'll keep this setting, but you can adjust it to your needs.) You can also choose an existing disk image or create one in a custom location.
![Step 4 Configure VM Storage][9]
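For comparison, roughly the same machine could be defined non-interactively with `virt-install`, the command-line companion to virt-manager; this is only a sketch, and the VM name, ISO path, and OS variant here are mine:
```
$ virt-install --name rhel8 --memory 4096 --vcpus 2 \
      --disk size=10 --cdrom ~/Downloads/rhel-8.iso --os-variant rhel8.0
```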
Step 5 is where you name the VM and click "Finish". This is the equivalent of creating a virtual machine, i.e., a Box in GNOME Boxes. Although it's technically the last step, you have several options (as the screenshot below shows). Since the advantage of virt-manager is the ability to customize a virtual machine, I'll check the "Customize configuration before install" box before clicking "Finish".
Because I chose to customize the configuration, virt-manager opens a page displaying a whole set of devices and settings. This is the fun part!
Here you have another chance to name the virtual machine. In the list on the left, you can view details on various aspects, such as CPU, memory, disks, controllers, and many other items. For example, I can click on "CPUs" to verify the change I made in step 3.
![Changing the CPU count][10]
I can also confirm the amount of memory I set.
When running a VM as a server, I usually disable or remove its sound card. To do so, select "Sound" and click "Remove", or right-click on "Sound" and choose "Remove Hardware".
You can also add hardware with the "Add Hardware" button at the bottom. This brings up the "Add New Virtual Hardware" screen, where you can add additional storage devices, memory, sound cards, and so on. It's like having access to a well-stocked (if virtual) computer hardware warehouse.
![The Add New Hardware screen][11]
Once you are happy with the VM's configuration, click "Begin Installation", and the system will boot and begin installing the specified operating system from the ISO.
![Begin installing the OS][12]
Once it completes, it reboots, and your new virtual machine is ready to use.
![Red Hat Enterprise Linux 8 running in VMM][13]
Virtual Machine Manager is a powerful tool for desktop Linux users. It is open source and an excellent alternative to proprietary, closed virtualization products.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/introduction-virtual-machine-manager
Author: [Alan Formy-Duval][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb (A person programming)
[2]: https://opensource.com/sitewide-search?search_api_views_fulltext=GNOME%20Box
[3]: https://wiki.gnome.org/Apps/Boxes
[4]: https://virt-manager.org/
[5]: https://opensource.com/sites/default/files/1-vmm_main_0.png (Virtual Machine Manager's main screen)
[6]: https://opensource.com/sites/default/files/2-vmm_step1_0.png (Step 1 virtual machine creation)
[7]: https://opensource.com/sites/default/files/3-vmm_step2.png (Step 2 Choose the ISO File)
[8]: https://opensource.com/sites/default/files/4-vmm_step3default.png (Step 3 Set CPU and Memory)
[9]: https://opensource.com/sites/default/files/6-vmm_step4.png (Step 4 Configure VM Storage)
[10]: https://opensource.com/sites/default/files/9-vmm_customizecpu.png (Changing the CPU count)
[11]: https://opensource.com/sites/default/files/11-vmm_addnewhardware.png (The Add New Hardware screen)
[12]: https://opensource.com/sites/default/files/12-vmm_rhelbegininstall.png
[13]: https://opensource.com/sites/default/files/13-vmm_rhelinstalled_0.png (Red Hat Enterprise Linux 8 running in VMM)

View File

@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11355-1.html)
[#]: subject: (Linux Plumbers, Appwrite, and more industry trends)
[#]: via: (https://opensource.com/article/19/9/conferences-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Linux Plumbers, Appwrite, and more industry trends
======
> A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my favorite articles from this update.
### Working on Linux's nuts and bolts at Linux Plumbers
- [Article link][2]
> The Kernel Maintainers Summit, Linux creator Linus Torvalds told me, is an invitation-only gathering of the top Linux kernel developers. But, while you might think it's about planning the Linux kernel's future, that's not the case. "The maintainer summit is really different because it doesn't even talk about technical issues." Instead, "It's all about the process of creating and maintaining the Linux kernel."
**The impact**: This is like the technical version of the Bilderberg meeting: you can hold all your flashy buzzword conferences, but the real decisions get made over here. Though I imagine there are probably fewer private jets involved. (LCTT note: look up Bilderberg if you're curious.)
### Microsoft hosts first Windows Subsystem for Linux conference
- [Article link][3]
> Hayden Barnes, founder of [Whitewater Foundry][4], a startup focusing on [Windows Subsystem for Linux (WSL)][5], [announced WSLconf 1][6], the first community conference for WSL. The event will be held on March 10-11, 2020 at Building 20 on the Microsoft HQ campus in Redmond, WA. The conference is still coming together, but we already know it will have presentations and workshops from [Pengwin (Whitewater's Linux for Windows)][7], Microsoft WSL, and [Canonical][8]'s [Ubuntu][9] on WSL developers.
**The impact**: Microsoft is nurturing the seeds of community growing up around its increasing adoption of and contribution to open source software. It's enough to bring a tear to my eye.
### Introducing Appwrite: An open source backend server for mobile and web developers
- [Article link][10]
> [Appwrite][11] is a new [open source][12], end-to-end backend server for frontend and mobile developers that allows you to build apps a lot faster. [Appwrite][13]'s goal is to abstract and simplify common development tasks behind REST APIs and tools, to help developers build advanced apps way faster.
>
> In this post I will briefly cover some of the main [Appwrite][14] services and explain their main features and how they are designed, to help you build your next project way faster than you would when writing all your backend APIs from scratch.
**The impact**: Software development is getting more and more accessible as more open source middleware gets easier to use. Appwrite claims to reduce development time and cost by 70%. Imagine what that would mean to a small mobile development agency or an individual developer. I'm curious about how they will monetize it.
### 'More than just IT': Open source technologist says collaborative culture is key to government transformation
- [Article link][15]
> AGL (agile government leadership) is providing a valuable support network for people who are helping government work better for the public. The organization is focused on things I am very passionate about: DevOps, digital transformation, open source, and similar topics that are top of mind for many government IT leaders. AGL provides me with a community to learn about what the best and brightest are doing today, and to share those learnings with my peers throughout the industry.
**The impact**: No matter your political persuasion, it is easy to be cynical about government. I found it refreshing to be reminded that government is made up of real people who are mostly doing their best to apply relevant technology to the public good. Especially when that technology is open source!
### How Bloomberg achieves close to 90-95% hardware utilization with Kubernetes
- [Article link][16]
> In 2016, Bloomberg adopted Kubernetes (when it was still in alpha) and has seen remarkable results ever since using the project's upstream code. "With Kubernetes, we're able to very efficiently use our hardware to the point where we can get close to 90 to 95% utilization rates," says Rybka. Autoscaling in Kubernetes allows the system to meet demands much faster. Furthermore, Kubernetes "offered us the ability to standardize our approach to how we build and manage services, which means that we can spend more time focused on actually working on the open source tools that we support," says Steven Bower, Data and Analytics Infrastructure Lead. "If we want to stand up a new cluster in another location in the world, it's really very straightforward to do that. Everything is all just code. Configuration is code."
**The impact**: Nothing cuts through the fog of marketing like utilization stats. One of the things I've heard about Kube is that when people have it running, they don't know what to do with it. Use cases like this give them (and you) something to aspire to.
*I hope you enjoyed this list of what stood out to me from last week; come back next Monday for more open source community, market, and industry trends.*
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/conferences-industry-trends
Author: [Tim Hildred][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.zdnet.com/article/working-on-linuxs-nuts-and-bolts-at-linux-plumbers/
[3]: https://www.zdnet.com/article/microsoft-hosts-first-windows-subsystem-for-linux-conference/
[4]: https://github.com/WhitewaterFoundry
[5]: https://docs.microsoft.com/en-us/windows/wsl/install-win10
[6]: https://www.linkedin.com/feed/update/urn:li:activity:6574754435518599168/
[7]: https://www.zdnet.com/article/pengwin-a-linux-specifically-for-windows-subsystem-for-linux/
[8]: https://canonical.com/
[9]: https://ubuntu.com/
[10]: https://medium.com/@eldadfux/introducing-appwrite-an-open-source-backend-server-for-mobile-web-developers-4be70731575d
[11]: https://appwrite.io
[12]: https://github.com/appwrite/appwrite
[13]: https://medium.com/@eldadfux/introducing-appwrite-an-open-source-backend-server-for-mobile-web-developers-4be70731575d?source=friends_link&sk=b6a2be384aafd1fa5b1b6ff12906082c
[14]: https://appwrite.io/
[15]: https://medium.com/agile-government-leadership/more-than-just-it-open-source-technologist-says-collaborative-culture-is-key-to-government-c46d1489f822
[16]: https://www.cncf.io/blog/2019/09/12/how-bloomberg-achieves-close-to-90-95-hardware-utilization-with-kubernetes/

View File

@ -0,0 +1,139 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11360-1.html)
[#]: subject: (How to Check Linux Mint Version Number & Codename)
[#]: via: (https://itsfoss.com/check-linux-mint-version/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
How to Check Linux Mint Version Number & Codename
======
Linux Mint releases a major version every two years (like Mint 19) and a minor version every six months or so (like Mint 19.1, 19.2, etc.). You can upgrade the Linux Mint version on your own, while the minor versions are updated automatically.
Among all these versions, you may wonder which one you are using. Knowing the Linux Mint version number helps you determine whether a particular piece of software is available for your system, or whether your system has reached end of life.
There are several reasons why you might need the Linux Mint version number, and there are several ways to get this information. Let me show you both the graphical and the command-line ways to get the Mint version information.
* [Check Linux Mint version information using the command line][1]
* [Check Linux Mint version information using the GUI (graphical user interface)][2]
### Ways to check the Linux Mint version number using the terminal
![][3]
I'll go over several ways you can check your Linux Mint version number and codename using very simple commands. You can open the terminal from the Menu, or by pressing `CTRL+ALT+T` (the default hotkey).
The last two commands in this article also output the Ubuntu release your current Linux Mint version is based on.
#### 1. /etc/issue
Starting with the simplest CLI method, you can print out the contents of `/etc/issue` to check your version number and codename:
```
$ cat /etc/issue
Linux Mint 19.2 Tina \n \l
```
#### 2. hostnamectl
![hostnamectl][4]
This single command (`hostnamectl`) prints almost the same information as "System Info". You can see your operating system (with the version number) as well as your kernel version.
#### 3. lsb_release
`lsb_release` is a very simple Linux utility to check basic information about your distribution:
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: LinuxMint
Description: Linux Mint 19.2 Tina
Release: 19.2
Codename: tina
```
**Note:** I used the `-a` flag to print all the parameters, but you can also use `-s` for the short form, `-d` for the description, and so on (check `man lsb_release` for all the options).
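For instance, the short description form prints a single line (output shown from the same system as above):
```
$ lsb_release -ds
Linux Mint 19.2 Tina
```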
#### 4. /etc/linuxmint/info
![/etc/linuxmint/info][5]
This is not a command but a file present on Linux Mint systems. Simply print its contents to the terminal with the `cat` command to see your version number and codename.
#### 5. /etc/os-release also gets you the Ubuntu codename
![/etc/os-release][7]
Linux Mint is based on Ubuntu, and each Linux Mint release is based on a different Ubuntu release. Knowing which Ubuntu version your Linux Mint release is based on helps in cases where you have to use the Ubuntu version number (for example, when you have to add a repository to [install the latest Virtual Box in Linux Mint][8]).
`os-release` is another file, similar to `info`, that shows you the codename of the Ubuntu release your Linux Mint is based on.
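A quick way to pull just the Ubuntu codename out of that file, assuming your Mint release includes the `UBUNTU_CODENAME` field (recent releases do):
```
$ grep UBUNTU_CODENAME /etc/os-release
UBUNTU_CODENAME=bionic
```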
#### 6. /etc/upstream-release/lsb-release gets you only the Ubuntu base info
If you only want to see basic information about the Ubuntu base, print out `/etc/upstream-release/lsb-release`:
```
$ cat /etc/upstream-release/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS"
```
Bonus tip: [you can use the `uname` command to check your Linux kernel version][9]:
```
$ uname -r
4.15.0-54-generic
```
**Note:** `-r` stands for release; you can check `man uname` for the other options.
### Check Linux Mint version information using the GUI
If the terminal and the command line are not your thing, you can use the graphical method. As you might expect, this one is pretty obvious.
Open the Menu (bottom-left corner), then go to Preferences > System Info:
![Linux Mint menu][10]
Alternatively, you can search for "System Info" in the menu:
![Menu Search System Info][11]
Here you can see both your operating system (including the version number) and the versions of your kernel and desktop environment:
![System Info][12]
### Wrapping up
I have covered a few different methods you can use to quickly check the version and codename of the Linux Mint release you are running (as well as the Ubuntu base and the kernel). I hope this beginner tutorial was helpful. Let us know in the comments which method is your favorite!
--------------------------------------------------------------------------------
via: https://itsfoss.com/check-linux-mint-version/
Author: [Sergiu][a]
Topic selection: [lujun9972][b]
Translator: [Morisun029](https://github.com/Morisun029)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: #terminal
[2]: #GUI
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/check-linux-mint-version.png?ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/hostnamectl.jpg?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/linuxmint_info.jpg?ssl=1
[6]: https://itsfoss.com/rid-google-chrome-icons-dock-elementary-os-freya/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/os_release.jpg?ssl=1
[8]: https://itsfoss.com/install-virtualbox-ubuntu/
[9]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux_mint_menu.jpg?ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/menu_search_system_info.jpg?ssl=1
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/system_info.png?ssl=1

View File

@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: (name1e5s)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11358-1.html)
[#]: subject: (Amid Epstein Controversy, Richard Stallman is Forced to Resign as FSF President)
[#]: via: (https://itsfoss.com/richard-stallman-controversy/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Richard Stallman Forced to Resign as FSF President
======
> Richard Stallman, the founder and president of the Free Software Foundation, has resigned as president and from the board of directors. The announcement follows a campaign by a handful of activists and media people to have Stallman removed over his remarks about a victim in the Epstein affair. Read on for more details.
![][1]
### Background of the Stallman affair
If you are not aware of the ins and outs of the affair, here are the details.
[Richard Stallman][2], 66, is a computer scientist working at [MIT][3]. He is best known for founding the [free software movement][4] in 1983. He also developed several programs under the GNU project, such as GCC and Emacs. Countless projects influenced by the free software movement chose the GPL open source license; Linux is one of the best known among them.
[Jeffrey Epstein][5] was an American billionaire and financial mogul. He was convicted as a sex offender, accused of providing sex trafficking services (including underage girls) to society's elite. Epstein committed suicide in his jail cell while awaiting trial.
[Marvin Lee Minsky][6] was a well-known computer scientist at MIT, where he founded the artificial intelligence laboratory. Minsky died in 2016 at the age of 88. After his death, a victim in the Epstein affair testified that, while still a minor, she had been "directed" to Epstein's private island and made to have sex with Minsky.
But what does any of this have to do with Richard Stallman? It started with an email Stallman sent to the mailing list of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) students and affiliates, concerning a protest over the Epstein donations. The email reads:
> The announcement of the Friday event does an injustice to Marvin Minsky:
>
> "deceased AI 'pioneer' Marvin Minsky (who is accused of assaulting one of Epstein's victims [2])"
>
> The injustice is in the word "assaulting". The term "sexual assault" is so vague that it exaggerates the seriousness of the accusation: claiming that someone did X but misleading people into thinking the person did Y, where Y is far worse than X.
>
> The accusation quoted above is clearly such an exaggeration. The report claims that Minsky had sex with one of Epstein's harem (see <https://www.theverge.com/2019/8/9/20798900/marvin-minsky-jeffrey-epstein-sex-trafficking-island-court-records-unsealed>). Let us assume this was true (I find no reason to disbelieve it).
>
> The word "assaulting" implies that he used some kind of violence. But the report says nothing of the sort; it only says they had sex.
>
> We can imagine many scenarios, but the most plausible one is that she presented herself to Marvin as entirely willing. Assuming she was being coerced by Epstein, he would have had every reason to tell her to conceal that from most people.
>
> From the various examples of accusation inflation, I conclude that it is absolutely wrong to use the term "sexual assault" in an accusation.
>
> Whatever conduct you wish to criticize, you should describe it with a specific term that avoids moral vagueness about the nature of the criticism.
### The call to "remove Stallman"
Epstein is a highly controversial topic in the USA. Stallman making such a reckless "intellectual statement" on a sensitive matter was never going to end well, and it didn't.
A robotics engineer received the forwarded email from her friend and started a [campaign to remove Stallman][7]. She didn't want a clarification or an apology; all she wanted was to have Stallman removed, even if that meant "burning MIT to the ground".
> Yes, at least Stallman has not been accused of raping anyone. But is that our highest standard? Is that the standard this highly prestigious institution holds itself to? If this is the standard MIT wants to defend and to stand for, the wretched place might as well be razed to the ground…
>
> Remove everyone, if we must, and let something better be built from the ruins.
>
> —— Salem, the robotics student who started the "Remove Stallman" campaign
Salem's callout was initially ignored by the mainstream media. But it was picked up by activists who fight the worship of elites and the gender bias within the software industry.
> [#epstein][8] [#MIT][9] Hi, reporters didn't get back to me and I got angry, so I wrote the story myself. So glad to be an MIT alum 🙃 <https://t.co/D4V5L5NzPA>
>
> — SZJG (@selamjie) [September 12, 2019][10]
.
> Are we also going to defend "brilliant assholes" who sexually abuse children with "maybe it was consensual"? <https://t.co/gSYPJ3WOfp>
>
> — Tracy Chou 👩🏻‍💻 (@triketora) [September 13, 2019][11]
.
> For years I have been tweeting about how disgusting Richard "RMS" Stallman is: the pedophilia, the misogyny, and the ableism.
>
> Inevitably, every time I do, some dude checks my data sources and says "that was years ago! He's changed!"
>
> Like hell he has. <https://t.co/ti2SrlKObp>
>
> — Sarah Mei (@sarahmei) [September 12, 2019][12]
Here is a thread started by Sage Sharp on how Stallman's behavior negatively affects people in tech, especially women:
> 👇 Let's talk about the impact Richard Stallman has on people in tech, especially women. [CW: rape, incest, ableism, sex trafficking]
>
> It is necessary for [@fsf][13] to permanently ban Richard Stallman from being the president of the Free Software Foundation board.
>
> — Sage Sharp (@\_sagesharp\_) [September 16, 2019][14]
Stallman has never exactly been a saint. He is blunt, tactless, and has been cracking sexist jokes for years. You can read about it [here][15] and [here][16].
The news was soon picked up by big media outlets such as [Vice][17], [The Daily Beast][18], and [Futurism][19]. They portrayed Stallman as a defender of Epstein. Amid the outcry, [the executive director of GNOME threatened to end the relationship between GNOME and the FSF][20].
In the end, Stallman resigned first from MIT, and now from the [Free Software Foundation][21] as well.
![][22]
### A dangerous precedent?
As we have just seen, it took only five days to drive a man out of an organization he founded and worked for over more than thirty years. And that happened even though Stallman was not involved in the sex trafficking scandal in any way.
Some of these "activists" have also [targeted Linus Torvalds, the creator of Linux][23], in the past. The management behind the Linux Foundation foresaw the growing trend of activism in the tech industry, so they put a [code of conduct for Linux kernel development][24] in place and [made Torvalds undergo training to improve his behavior][25]. If they had not taken corrective measures, Torvalds might have been denounced and taken down as well.
Ignoring the reckless behavior and sexism of tech celebrities is unacceptable, but publicly shaming and lynching anyone who happens to disagree with a popular opinion is not ethical either. I don't support Stallman or his past remarks, but I also cannot accept the way he was (forced to?) resign.
Techrights has some interesting commentary on this; you can read it [here][26] and [here][27].
*What do you think about it? Please share your views and opinions civilly. Abusive comments will not be published.*
--------------------------------------------------------------------------------
via: https://itsfoss.com/richard-stallman-controversy/
Author: [Abhishek Prakash][a]
Topic selection: [lujun9972][b]
Translator: [name1e5s](https://github.com/name1e5s)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/stallman-conroversy.png?w=800&ssl=1
[2]: https://en.wikipedia.org/wiki/Richard_Stallman
[3]: https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
[4]: https://en.wikipedia.org/wiki/Free_software_movement
[5]: https://en.wikipedia.org/wiki/Jeffrey_Epstein
[6]: https://en.wikipedia.org/wiki/Marvin_Minsky
[7]: https://medium.com/@selamie/remove-richard-stallman-fec6ec210794
[8]: https://twitter.com/hashtag/epstein?src=hash&ref_src=twsrc%5Etfw
[9]: https://twitter.com/hashtag/MIT?src=hash&ref_src=twsrc%5Etfw
[10]: https://twitter.com/selamjie/status/1172244207978897408?ref_src=twsrc%5Etfw
[11]: https://twitter.com/triketora/status/1172443389536555009?ref_src=twsrc%5Etfw
[12]: https://twitter.com/sarahmei/status/1172283772428906496?ref_src=twsrc%5Etfw
[13]: https://twitter.com/fsf?ref_src=twsrc%5Etfw
[14]: https://twitter.com/_sagesharp_/status/1173637138413318144?ref_src=twsrc%5Etfw
[15]: https://geekfeminism.wikia.org/wiki/Richard_Stallman
[16]: https://medium.com/@selamie/remove-richard-stallman-appendix-a-a7e41e784f88
[17]: https://www.vice.com/en_us/article/9ke3ke/famed-computer-scientist-richard-stallman-described-epstein-victims-as-entirely-willing
[18]: https://www.thedailybeast.com/famed-mit-computer-scientist-richard-stallman-defends-epstein-victims-were-entirely-willing
[19]: https://futurism.com/richard-stallman-epstein-scandal
[20]: https://blog.halon.org.uk/2019/09/gnome-foundation-relationship-gnu-fsf/
[21]: https://www.fsf.org/news/richard-m-stallman-resigns
[22]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/richard-stallman.png?resize=800%2C94&ssl=1
[23]: https://www.newyorker.com/science/elements/after-years-of-abusive-e-mails-the-creator-of-linux-steps-aside
[24]: https://itsfoss.com/linux-code-of-conduct/
[25]: https://itsfoss.com/torvalds-takes-a-break-from-linux/
[26]: http://techrights.org/2019/09/15/media-attention-has-been-shifted/
[27]: http://techrights.org/2019/09/16/stallman-removed/

View File

@ -1,63 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Firefox 69 available in Fedora)
[#]: via: (https://fedoramagazine.org/firefox-69-available-in-fedora/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
Firefox 69 available in Fedora
======
![][1]
When you install the Fedora Workstation, youll find the world-renowned Firefox browser included. The Mozilla Foundation underwrites work on Firefox, as well as other projects that promote an open, safe, and privacy respecting Internet. Firefox already features a fast browsing engine and numerous privacy features.
A community of developers continues to improve and enhance Firefox. The latest version, Firefox 69, was released recently and you can get it for your stable Fedora system (30 and later). Read on for more details.
### New features in Firefox 69
The newest version of Firefox includes [Enhanced Tracking Protection][2] (or ETP). When you use Firefox 69 with a new (or reset) settings profile, the browser makes it harder for sites to track your information or misuse your computer resources.
For instance, less scrupulous websites use scripts that cause your system to do lots of intense calculations to produce cryptocurrency results, called _[cryptomining][3]_. Cryptomining happens without your knowledge or permission and is therefore a misuse of your system. The new standard setting in Firefox 69 prevents sites from this kind of abuse.
Firefox 69 has additional settings to prevent sites from identifying or fingerprinting your browser for later use. These improvements give you additional protection from having your activities tracked online.
Another common annoyance is videos that start in your browser without warning. Video playback also uses extra CPU power and you may not want this happening on your laptop without permission. Firefox already stops this from happening using the [Block Autoplay][4] feature. But Firefox 69 also lets you stop videos from playing even if they start without sound. This feature prevents unwanted sudden noise. It also solves more of the real problem — having your computers power used without permission.
There are numerous other new features in the new release. Read more about them in the [Firefox release notes][5].
### How to get the update
Firefox 69 is available in the stable Fedora 30 and pre-release Fedora 31 repositories, as well as Rawhide. The update is provided by Fedoras maintainers of the Firefox package. The maintainers also ensured an update to Mozillas Network Security Services (the nss package). We appreciate the hard work of the Mozilla project and Firefox community in providing this new release.
If youre using Fedora 30 or later, use the _Software_ tool on Fedora Workstation, or run the following command on any Fedora system:
```
$ sudo dnf --refresh upgrade firefox
```
If youre on Fedora 29, [help test the update][6] for that release so it can become stable and easily available for all users.
Firefox may prompt you to upgrade your profile to use the new settings. To take advantage of new features, you should do this.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/firefox-69-available-in-fedora/
Author: [Paul W. Frields][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/firefox-v69-816x345.jpg
[2]: https://blog.mozilla.org/blog/2019/09/03/todays-firefox-blocks-third-party-tracking-cookies-and-cryptomining-by-default/
[3]: https://www.webopedia.com/TERM/C/cryptocurrency-mining.html
[4]: https://support.mozilla.org/kb/block-autoplay
[5]: https://www.mozilla.org/en-US/firefox/69.0/releasenotes/
[6]: https://bodhi.fedoraproject.org/updates/FEDORA-2019-89ae5bb576

View File

@ -1,57 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Dell EMC updates PowerMax storage systems)
[#]: via: (https://www.networkworld.com/article/3438325/dell-emc-updates-powermax-storage-systems.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Dell EMC updates PowerMax storage systems
======
Dell EMC's new PowerMax enterprise storage systems add support for Intel Optane drives and NVMe over Fabric.
Dell EMC has updated its PowerMax line of enterprise storage systems to offer Intels Optane persistent storage and NVMe-over-Fabric, both of which will give the PowerMax a big boost in performance.
Last year, Dell launched the PowerMax line with high-performance storage, specifically targeting industries that need very low latency and high resiliency, such as banking, healthcare, and cloud service providers.
The company claims the new PowerMax is the first to market with dual-port Intel Optane SSDs and the use of storage-class memory (SCM) as persistent storage. Optane is a new type of non-volatile storage that sits between SSDs and memory. It has the persistence of an SSD but almost the speed of DRAM. Optane storage also has a ridiculous price tag; for example, a 512 GB stick costs nearly $8,000.
The other big change is support for NVMe-oF, which allows SSDs to talk directly to each other via Fibre Channel rather than making multiple hops through the network. PowerMax already supports NVMe SSDs, but this update adds end-to-end NVMe support.
The coupling of NVMe and Intel Optane on dual port gives the new PowerMax systems up to 15 million IOPS, a 50% improvement over the previous generation released just one year ago, with up to 50% better response times and twice the bandwidth. Response time is under 100 microseconds.
In addition, the new Dell EMC PowerMax systems are validated for Dell Technologies Cloud, an architecture designed to bridge multi-cloud deployments. Dell offers connections between private clouds and Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.
PowerMax comes with a built-in machine learning engine for predictive analytics and pattern recognition to automatically place data on the correct media type, SCM or Flash, based on its I/O profile. PowerMax analyzes and forecasts 40 million data sets in real time, driving 6 billion decisions per day.
It also has several important software integrations. The first is VMwares vRealize Orchestrator (vRO) plug-in, which allows customers to develop end-to-end automation routines, including provisioning, data protection, and host operations.
Second, it has pre-built Red Hat Ansible modules to allow customers to create Playbooks for storage provisioning, snapshots, and data management workflows for consistent and automated operations. These modules are available on GitHub now.
Finally, there is a container storage interface (CSI) plugin that provisions and manages storage for workloads running on Kubernetes. The CSI plugin, available now on GitHub, extends PowerMax's performance and data services to a growing number of applications built on a micro-services-based architecture.
The new PowerMax systems and PowerBricks will be available Monday, Sept. 16.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438325/dell-emc-updates-powermax-storage-systems.html
Author: [Andy Patrizio][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3323580/mass-data-fragmentation-requires-a-storage-rethink.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

View File

@ -1,79 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Plumbers, Appwrite, and more industry trends)
[#]: via: (https://opensource.com/article/19/9/conferences-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Linux Plumbers, Appwrite, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Working on Linux's nuts and bolts at Linux Plumbers][2]
> The Kernel Maintainers Summit, Linux creator Linus Torvalds told me, is an invitation-only gathering of the top Linux kernel developers. But, while you might think it's about planning on the Linux kernel's future, that's not the case. "The maintainer summit is really different because it doesn't even talk about technical issues." Instead, "It's all about the process of creating and maintaining the Linux kernel."
**The impact**: This is like the technical version of the Bilderberg meeting: you can have your flashy buzzword conferences, but we'll be over here making the real decisions. Or so I imagine. Probably less private jets involved though.
## [Microsoft hosts first Windows Subsystem for Linux conference][3]
> Hayden Barnes, founder of [Whitewater Foundry][4], a startup focusing on [Windows Subsystem for Linux (WSL)][5] [announced WSLconf 1][6], the first community conference for WSL. This event will be held on March 10-11, 2020 at Building 20 on the Microsoft HQ campus in Redmond, WA. The conference is still coming together. But we already know it will have presentations and workshops from [Pengwin, Whitewater's Linux for Windows,][7] Microsoft WSL, and [Canonical][8]'s [Ubuntu][9] on WSL developers.
**The impact**: Microsoft is nurturing the seeds of community growing up around its increasing adoption of and contribution to open source software. It's enough to bring a tear to my eye.
## [Introducing Appwrite: An open source backend server for mobile and web developers][10]
> [Appwrite][11] is a new [open source][12], end to end backend server for frontend and mobile developers that allows you to build apps a lot faster. [Appwrite][13] goal is to abstract and simplify common development tasks behind REST APIs and tools, to help developers build advanced apps way faster.
>
> In this post I will shortly cover some of the main [Appwrite][14] services and explain about their main features and how they are designed to help you build your next project way faster than you would when writing all your backend APIs from scratch.
**The impact**: Software development is getting more and more accessible as more open source middleware gets easier to use. Appwrite claims to reduce the time and cost of development by 70%. Imagine what that would mean to a small mobile development agency or citizen developer. I'm curious about how they'll monetize this.
## ['More than just IT': Open source technologist says collaborative culture is key to government transformation][15]
> AGL (agile government leadership) is providing a valuable support network for people who are helping government work better for the public. The organization is focused on things that I am very passionate about — DevOps, digital transformation, open source, and similar topics that are top-of-mind for many government IT leaders. AGL provides me with a community to learn about what the best and brightest are doing today, and share those learnings with my peers throughout the industry.
**The impact**: It is easy to be cynical about the government no matter your political persuasion. I found it refreshing to have a reminder that the government is comprised of real people who are mostly doing their best to apply relevant technology to the public good. Especially when that technology is open source!
## [How Bloomberg achieves close to 90-95% hardware utilization with Kubernetes][16]
> In 2016, Bloomberg adopted Kubernetes—when it was still in alpha—and has seen remarkable results ever since using the projects upstream code. “With Kubernetes, were able to very efficiently use our hardware to the point where we can get close to 90 to 95% utilization rates,” says Rybka. Autoscaling in Kubernetes allows the system to meet demands much faster. Furthermore, Kubernetes “offered us the ability to standardize our approach to how we build and manage services, which means that we can spend more time focused on actually working on the open source tools that we support,” says Steven Bower, Data and Analytics Infrastructure Lead. “If we want to stand up a new cluster in another location in the world, its really very straightforward to do that. Everything is all just code. Configuration is code.”
**The impact**: Nothing cuts through the fog of marketing like utilization stats. One of the things that I've heard about Kube is that people don't know what to do with it when they have it running. Use cases like this give them (and you) something to aspire to.
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/conferences-industry-trends
Author: [Tim Hildred][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.zdnet.com/article/working-on-linuxs-nuts-and-bolts-at-linux-plumbers/
[3]: https://www.zdnet.com/article/microsoft-hosts-first-windows-subsystem-for-linux-conference/
[4]: https://github.com/WhitewaterFoundry
[5]: https://docs.microsoft.com/en-us/windows/wsl/install-win10
[6]: https://www.linkedin.com/feed/update/urn:li:activity:6574754435518599168/
[7]: https://www.zdnet.com/article/pengwin-a-linux-specifically-for-windows-subsystem-for-linux/
[8]: https://canonical.com/
[9]: https://ubuntu.com/
[10]: https://medium.com/@eldadfux/introducing-appwrite-an-open-source-backend-server-for-mobile-web-developers-4be70731575d
[11]: https://appwrite.io
[12]: https://github.com/appwrite/appwrite
[13]: https://medium.com/@eldadfux/introducing-appwrite-an-open-source-backend-server-for-mobile-web-developers-4be70731575d?source=friends_link&sk=b6a2be384aafd1fa5b1b6ff12906082c
[14]: https://appwrite.io/
[15]: https://medium.com/agile-government-leadership/more-than-just-it-open-source-technologist-says-collaborative-culture-is-key-to-government-c46d1489f822
[16]: https://www.cncf.io/blog/2019/09/12/how-bloomberg-achieves-close-to-90-95-hardware-utilization-with-kubernetes/

View File

@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Here Comes Oracle Autonomous Linux Worlds First Autonomous Operating System)
[#]: via: (https://opensourceforu.com/2019/09/here-comes-oracle-autonomous-linux-worlds-first-autonomous-operating-system/)
[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
Here Comes Oracle Autonomous Linux, the World's First Autonomous Operating System
======
* _**Oracle Autonomous Linux delivers automated patching, updates and tuning without human intervention.**_
* _**It can help IT companies improve reliability and protect their systems from cyberthreats.**_
* _**Oracle also introduces Oracle OS Management Service, which delivers control and visibility over systems.**_
![Oracle cloud][1]
Oracle today marked a major milestone in the company's autonomous strategy with the introduction of Oracle Autonomous Linux, the world's first autonomous operating system.
Oracle Autonomous Linux, along with the new Oracle OS Management Service, is the first and only autonomous operating environment that eliminates complexity and human error to deliver unprecedented cost savings, security and availability for customers, the company claims in a just-released statement.
Keeping systems patched and secure is one of the biggest ongoing challenges faced by IT today. With Oracle Autonomous Linux, the company says, customers can rely on autonomous capabilities to help ensure their systems are secure and highly available to help prevent cyberattacks.
“Oracle Autonomous Linux builds on Oracles proven history of delivering Linux with extreme performance, reliability and security to run the most demanding enterprise applications,” said Wim Coekaerts, senior vice president of operating systems and virtualization engineering, Oracle.
“Today we are taking the next step in our autonomous strategy with Oracle Autonomous Linux, providing a rich set of capabilities to help our customers significantly improve reliability and protect their systems from cyberthreats,” he added.
**Oracle OS Management Service**
Along with Oracle Autonomous Linux, Oracle introduced Oracle OS Management Service, a highly available Oracle Cloud Infrastructure component that delivers control and visibility over systems whether they run Autonomous Linux, Linux or Windows.
Combined with resource governance policies, OS Management Service, via the Oracle Cloud Infrastructure console or APIs, also enables users to automate capabilities that will execute common management tasks for Linux systems, including patch and package management, security and compliance reporting, and configuration management.
It can be further automated with other Oracle Cloud Infrastructure services like auto-scaling as workloads need to grow or shrink to meet elastic demand.
**Always Free Autonomous Database and Cloud Infrastructure**
Oracle Autonomous Linux, in conjunction with Oracle OS Management Service, uses advanced machine learning and autonomous capabilities to deliver unprecedented cost savings, security and availability and frees up critical IT resources to tackle more strategic initiatives.
They are included with Oracle Premier Support at no extra charge with Oracle Cloud Infrastructure compute services. Combined with Oracle Cloud Infrastructures other cost advantages, most Linux workload customers can expect to have 30-50 percent TCO savings versus both on-premise and other cloud vendors over five years.
“Adding autonomous capabilities to the operating system layer, with future plans to expand beyond infrastructure software, goes straight after the OpEx challenges nearly all customers face today,” said Al Gillen, Group VP, Software Development and Open Source, IDC.
“This capability effectively turns Oracle Linux into a service, freeing customers to focus their IT resources on application and user experience, where they can deliver true competitive differentiation,” he added.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/here-comes-oracle-autonomous-linux-worlds-first-autonomous-operating-system/
Author: [Longjam Dineshwori][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensourceforu.com/author/dineshwori-longjam/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2016/09/Oracle-cloud.jpg?resize=350%2C197&ssl=1

View File

@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Oracle Unleashes Worlds Fastest Database Machine Exadata X8M)
[#]: via: (https://opensourceforu.com/2019/09/oracle-unleashes-worlds-fastest-database-machine-exadata-x8m/)
[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
Oracle Unleashes World's Fastest Database Machine Exadata X8M
======
* _**Exadata X8M is the first database machine with integrated persistent memory and RoCE.**_
* _**Oracle also announced availability of the Oracle Zero Data Loss Recovery Appliance X8M (ZDLRA).**_
![][1]
Oracle has released its new Exadata Database Machine X8M with an aim to set a new bar in the database infrastructure market.
Exadata X8M combines Intel Optane DC persistent memory and 100 gigabit remote direct memory access (RDMA) over Converged Ethernet (RoCE) to remove storage bottlenecks and dramatically increase performance for the most demanding workloads such as Online Transaction Processing (OLTP), analytics, IoT, fraud detection and high frequency trading.
“With Exadata X8M, we deliver in-memory performance with all the benefits of shared storage for both OLTP and analytics,” said Juan Loaiza, executive vice president, mission-critical database technologies, Oracle.
“Reducing response times by an order of magnitude using direct database access to shared persistent memory accelerates every OLTP application and is a game changer for applications that need real-time access to large amounts of data such as fraud detection and personalized shopping,” the official added.
**Whats unique about it?**
Oracle Exadata X8M uses RDMA directly from the database to access persistent memory in smart storage servers, bypassing the entire OS, IO and network software stacks. This results in lower latency and higher throughput. Using RDMA to bypass software stacks also frees CPU resources on storage servers to execute more Smart Scan queries in support of analytics workloads.
**No More Storage Bottlenecks**
“High-performance OLTP applications require a demanding mixture of high Input/Output Operations Per Second (IOPS) with low latency. Direct database access to shared persistent memory increases peak performance to 16 million SQL read IOPS, 2.5X greater than the industry leading Exadata X8,” Oracle said in a statement.
Additionally, Exadata X8M dramatically reduces the latency of critical database IOs by enabling remote IO latencies below 19 microseconds, more than 10X faster than the Exadata X8. These ultra-low latencies are achieved even for workloads requiring millions of IOs per second.
**More Efficient: Better than AWS and Azure**
The company claimed that compared to the fastest Amazon RDS storage for Oracle, Exadata X8M delivers up to 50X lower latency, 200X more IOPS and 15X more capacity.
Compared to Azure SQL Database Service storage, it says, Exadata X8M delivers 100X lower latency, 150X more IOPS and 300X more capacity.
According to Oracle, a single rack Exadata X8M delivers up to 2X the OLTP read IOPS, 3X the throughput and 5X lower latency than shared storage systems with persistent memory, such as a single rack of Dell EMC PowerMax 8000.
“By simultaneously supporting faster OLTP queries and greater throughput for analytics workloads, Exadata X8M is the ideal platform on which to converge mixed-workload environments to decrease IT costs and complexity,” it said.
**Oracle Zero Data Loss Recovery Appliance X8**
Oracle today also announced availability of Oracle Zero Data Loss Recovery Appliance X8M (ZDLRA), which uses new 100Gb RoCE for high throughput internal data transfers between compute and storage servers.
Exadata and ZDLRA customers can now choose between RoCE or InfiniBand-based Engineered Systems for optimal flexibility in their architectural deployments.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/oracle-unleashes-worlds-fastest-database-machine-exadata-x8m/
作者:[Longjam Dineshwori][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dineshwori-longjam/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/02/Oracle-Cloud.jpg?resize=350%2C212&ssl=1


@ -0,0 +1,105 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why the Blockchain is Your Best Bet for Cyber Security)
[#]: via: (https://opensourceforu.com/2019/09/why-the-blockchain-is-your-best-bet-for-cyber-security/)
[#]: author: (Anna Aneena M G https://opensourceforu.com/author/anna-aneena/)
Why the Blockchain is Your Best Bet for Cyber Security
======
[![][1]][2]
_Blockchain is a tamper-proof, shared digital ledger that records the history of transactions between the peers in a peer-to-peer network. This article describes how blockchain technology can be used to protect data and the network from cyber-attacks._
Cyber security is a set of technologies, processes and controls designed to protect systems, devices, data, networks and programs from cyber-attacks. It secures data from threats such as theft or misuse, and also safeguards the system from viruses.
In today's world, cyber-attacks are major threats faced by every user. Most of us respond to advertisements on various websites on the Internet and, if asked questions or for personal details, answer without even thinking of the consequences. Sharing one's personal information is very risky, as one may lose everything one has. In 2016, the search engine Yahoo! faced a major attack and around one billion accounts were compromised. The attackers were able to get the user names, passwords, phone numbers, and security questions and answers of e-mail users. On September 7, 2017, Equifax, one of the largest consumer credit reporting agencies in the world, faced a massive cyber security threat. It is believed that attackers had unauthorised access to the agency's data from mid-May to July 2017. Around 145.5 million people were put at risk, as they had shared personal information like names, social security numbers, birthdays, addresses and driving license numbers with Equifax.
Many people use weak or default passwords for their personal accounts on some Internet sites, which is very risky as these can be cracked and their personal details compromised. This is even more risky if people reuse the same password across all their sites, just for convenience. If attackers crack that password, it can be used on every other site, and all their personal details, including their credit card and bank details, may be harvested. In this digital era, cyber-attacks are a matter of real concern. Cyber criminals are greatly increasing in number and are attempting to steal financial data, personally identifiable information (PII) as well as the identities of common Internet users.
Businesses, the government, and the private as well as public sectors are continuously fighting against such frauds, malicious bugs and so on. Even as the hackers are increasing their expertise, the ways to cope with their attacks are also improving very fast. One of these ways is the blockchain.
**Blockchain: A brief background**
Each block in a blockchain contains transaction data, a hash function and the hash of the previous block. Blockchain is managed by a peer-to-peer (P2P) network. On the network, no central authority exists and all blocks are distributed among all the users in the network. Everyone in the network is responsible for verifying the data that is shared, and for ensuring that no existing blocks are being altered and no false data is being added. Blockchain technology enables direct transactions between two individuals, without the involvement of a third party, and hence provides transparency. When a transaction happens, the transaction information is shared amongst everyone in the blockchain network. These transactions are individually time stamped. When these transactions are put together in a block, they are time stamped again as a whole block. Blockchain can be used to prevent cyber-attacks in three ways: by being a trusted system, by being immutable and by network consensus.
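To make the hash linking concrete, here is a minimal sketch in Python (the field names and transactions are invented for illustration; a real blockchain also includes digital signatures, a consensus puzzle and much more):

```
import hashlib
import json
import time

def block_hash(block):
    # Serialise the block deterministically (sorted keys) and hash it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(transactions, previous_hash):
    # Each block carries transaction data, a timestamp, and the hash
    # of the previous block: the link that forms the chain.
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }

# The genesis block has no predecessor, so a fixed placeholder is used.
chain = [new_block(["genesis"], "0" * 64)]
chain.append(new_block(["alice pays bob 5"], block_hash(chain[-1])))
chain.append(new_block(["bob pays carol 2"], block_hash(chain[-1])))
```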
A blockchain-based system runs on the concept of human trust. A blockchain network is built in such a way that it presumes any individual node could attack it at any time. The consensus protocol, like proof of work, ensures that even if this happens, the network completes its work as intended, regardless of human cheating or intervention. The blockchain allows one to secure stored data using various cryptographic properties such as digital signatures and hashing. As soon as data enters a block in the blockchain, it cannot be tampered with; this property is called immutability. If anyone tries to tamper with the blockchain database, the network consensus will recognise the fact and shut down the attempt.
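Continuing the sketch above, immutability can be checked mechanically: recomputing each block's hash exposes any edit to history, which is essentially what honest nodes do before accepting a chain:

```
def chain_is_valid(chain):
    # Recompute every predecessor's hash and compare it with the
    # value recorded in the block that follows it.
    for prev, current in zip(chain, chain[1:]):
        if current["previous_hash"] != block_hash(prev):
            return False
    return True

print(chain_is_valid(chain))   # True: the history is intact
chain[1]["transactions"] = ["alice pays bob 500"]   # tamper with history
print(chain_is_valid(chain))   # False: every honest node notices
```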
Blockchains are made up of nodes; these can be within one institution like a hospital, or can be all over the world on the computer of any citizen who wants to participate in the blockchain. For any decision to be made, the majority of the nodes need to come to a consensus. The blockchain has a democratic system instead of a central authoritarian figure. So if any one node is compromised due to malicious action, the rest of the nodes recognise the problem and do not execute the unacceptable activity. Though blockchain has a pretty incredible security feature, it is not used by everyone to store data.
![Figure 1: Applications of blockchain][3]
**Common use cases of blockchain in cyber security**
**Mitigating DDoS attacks:** A distributed denial-of-service (DDoS) attack involves multiple compromised computer systems that aim at a target and attack it, causing denial of service for users of the targeted resources. This causes the system to slow down and crash, denying service to legitimate users. Some forms of DDoS software cause huge problems. One among them is the Hide and Seek malware, which is able to act even after the system reboots and can hence crash the system over and over again. Currently, the difficulty in handling DDoS attacks is due to the existing DNS (Domain Name System). A solution to this is the implementation of blockchain technology. It would decentralise the DNS, distributing the data to a greater number of nodes and making it far harder for hackers to attack.
**More secure DNS:** For hackers, DNS is an easy target; by attacking it, services like Twitter, PayPal, etc, can be brought down. Adding the blockchain to the DNS would enhance security, because the one single target that can be compromised is removed.
**Advanced confidentiality and data integrity:** Initially, blockchain had no particular access controls. But as it evolved, more confidentiality and access controls were added, ensuring that data as a whole or in part was not accessible to any wrong person or organisation. Private keys are generally used for signing documents and other purposes. Since these keys can be tampered with, they need to be protected. Blockchain replaces such secret keys with transparency.
**Improved PKI:** PKI or Public Key Infrastructure is one of the most popular forms of public key cryptography and keeps messaging apps, websites, emails and other forms of communication secure. The main issue with this cryptography is that most PKI implementations depend on trusted third-party Certificate Authorities (CA), but these certificate authorities can easily be compromised by hackers to spoof user identities. When keys are published on the blockchain instead, the risk of false key generation is eliminated. Along with that, blockchain enables applications to verify the identity of the person you are communicating with. CertCoin is generally cited as the first implementation of blockchain-based PKI.
**The major roles of blockchain in cyber security**
**Eliminating the human factor from authentication:** Human intervention is eliminated from the process of authentication. With the help of blockchain technology, businesses are able to authenticate devices and users without the need for a password, which removes passwords as a potential attack vector.
**Decentralised storage:** Blockchain users' data can be maintained on their computers in their network. This ensures that the chain won't collapse. If someone other than the owner of a component of data (say, an attacker) attempts to destroy a block, the entire system checks each and every data block to identify the one that differs from the rest. If this block is identified or located by the system, it is recognised as false and is deleted from the chain.
**Traceability:** All the transactions that are added to a private or public blockchain are time stamped and signed digitally. This means that companies can trace every transaction back to a particular time period. And they can also locate the corresponding party on the blockchain through their public address.
**DDoS:** Blockchain transactions can easily be denied if the participating units are prevented from sending transactions. For example, a DDoS attack on an entity or a set of entities can cripple the entire attendant infrastructure and the blockchain organisation. Such attacks can introduce integrity risks to a blockchain.
**Blockchain for cyber security**
One interesting use case is applying the strong integrity assurance feature of blockchain technology to strengthen the cyber security of many other technologies. For example, to ensure the integrity of software downloads like firmware updates, patches, installers, etc, blockchain can be used in the same way that we make use of MD5 hashes today. The file we download, and the hash we compare it against, might both be compromised on a vendor website and altered without our knowledge. We can make the comparison with a much higher level of confidence against what is permanently recorded in the blockchain. The use of blockchain technologies has great security potential, particularly in the world of cyber-physical systems (CPS) such as IoT, industrial controls, vehicles, robotics, etc.
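As a rough sketch of that comparison step (the file name and the source of the expected digest are placeholders; in the scheme described above, the trusted digest would be read from an immutable blockchain record rather than from the same website that serves the file):

```
import hashlib

def sha256_of(path):
    # Stream the file in chunks so large downloads fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder: the expected digest would come from the blockchain record.
expected = "<digest published on the blockchain>"
if sha256_of("firmware-update.bin") == expected:
    print("Integrity verified: digests match")
else:
    print("WARNING: file does not match the published digest")
```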
Summarising this, for cyber-physical systems the integrity of data is the key concern while the confidentiality in many cases is almost irrelevant. This is the key difference between cyber security for cyber-physical systems and cyber security for traditional enterprise IT systems. Blockchain technology is just what the doctor ordered to address the key cyber-physical systems security concerns.
The key characteristics of a blockchain that establish trust are listed below.
* _**Identification and authentication:**_ Access is granted via cryptographic keys and access rules.
* _**Transaction rules:**_ At every node standard rules are applied to every transaction.
* _**Transaction concatenation:**_ Every transaction is linked to its previous transaction.
* _**Consensus mechanism:**_ To agree upon transaction integrity, all nodes solve mathematical puzzles.
* _**Distributed ledger:**_ There are standards for listing transactions on every node.
* _**Permissioned and unpermissioned:**_ Ability to participate in a blockchain can be open or pre-approved.
**Is blockchain secure?**
Blockchain stores data using sophisticated and innovative software rules that are extremely difficult for attackers to manipulate. The best example is Bitcoin. In Bitcoin's blockchain, the shared data is the history of every transaction made. Information is stored in multiple copies on a network of computers called nodes. Each time someone submits a transaction to the ledger, the nodes check whether the transaction is valid. A subset of the nodes then packages validated transactions into blocks and appends them to the existing chain.
The blockchain offers a higher level of security to every individual user. This is because it removes the need for easily compromised and weak online identities and passwords.
**How does a blockchain protect data?**
Instead of uploading data to a cloud server or storing it in a single location, a blockchain breaks everything into several small nodes. A blockchain protects data because:
* It is decentralised.
* It offers encryption and validation.
* It can be private or public.
* It is virtually impossible to hack.
* It offers quality assurance.
* It ensures fast, cheap and secure transfer of funds across the globe.
* It is well known for its traceability.
* Through it, transactions become more transparent.
**Cyber security is a priority, not an afterthought**
It seems like in the digital age, comfort and convenience have overtaken things like privacy and security in very unfortunate ways. The handing over of personal information to companies like Facebook is a personal choice; however, no one wants to see information leaked to third parties without consent. The blockchain is all about security. It has provided us a simple, effective and affordable way of ensuring that our cyber security needs are not only met, but also exceeded. We need to understand that the technologies we use to improve our lives can also be used to harm us. That is the reality of the era we are living in, one where most of our personal data is on the cloud, on our mobile device, or on our computer. Because of that, it is vital to look at online safety and cyber security as priorities and not afterthoughts. The blockchain can assist us in turning that thought into reality, and allow us to build a future where online threats are kept to a minimum.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/why-the-blockchain-is-your-best-bet-for-cyber-security/
作者:[Anna Aneena M G][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/anna-aneena/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1.png?resize=615%2C434&ssl=1 (1)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1.png?fit=615%2C434&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2.png?resize=350%2C219&ssl=1


@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Cloud Billing Can Benefit Your Business)
[#]: via: (https://opensourceforu.com/2019/09/how-cloud-billing-can-benefit-your-business/)
[#]: author: (Aashima Sharma https://opensourceforu.com/author/aashima-sharma/)
How Cloud Billing Can Benefit Your Business
======
[![][1]][2]
_The Digital Age has led to many changes in the way we do business, and also in the way that businesses are run on a day to day basis. Computers are now ubiquitous even in the smallest of businesses and are utilised for a wide variety of purposes. When it comes to making things easier, a particularly efficient use of computer technology and software is cloud billing. Let us explain in a little more detail what it's all about._
**What Is Cloud Billing?**
The idea of [_cloud billing_][3] is relatively simple. Cloud billing is designed to enable the user to perform a number of functions essential to the business, and it can be used in any business without taking up unnecessary space on computer servers. For example, you can use a cloud billing solution to ensure that invoices are sent at a certain time, subscriptions are efficiently managed, discounts are applied, and many other regular or one-off functions are carried out.
![Figure 1: Tridens Monetization][4]
**How Cloud Billing Platform Can Benefit Your Business**
Lets have a quick look at some of the major benefits of a cloud billing platform:
* Lower costs on IT services: a cloud billing system, like any cloud-based IT solution, does not need to take up any space on your server, nor does it require you to purchase software or hardware. You don't need to engage the services of an IT professional; it's all done via the service provider.
* Reduced operating costs: there is little in the way of operating costs when you use a cloud billing solution, as there is no extra equipment to maintain.
* Faster product to market: if you are rolling out new products or services on a regular basis, a cloud billing solution means you can cut out a lot of time that would normally be taken up by the process.
* Grow with the business: if in the future you need to expand or add to your cloud billing solution, you can, and without any additional equipment or software. You pay for what you need and use, and no more.
* Quick start: once you decide you want to use cloud billing, there is very little to do before you are ready to go. No software to install, no computer servers or network; it's all in the cloud.
There are many more benefits to engaging the services of a cloud billing solutions provider, and finding the right one, a service provider you can trust and with whom you can build a reliable working relationship, is the most important part of the deal.
**Where to Get the Best Cloud Billing Platform?**
The market for cloud-based billing services is hotly contested, and there are many potential options for your business. You can try [_Tridens Monetization_][3] or any of the open source billing software solutions that may be suitable for your business.
Where the best cloud billing solutions come to the fore is in providing real-time capability and flexibility for various areas of industry and commerce. They are ideal, for example, for use in media and communications, in education and utilities, and in the healthcare industry, as well as many others. No matter the size, type or area of your business, a cloud-based billing platform offers not just a cost-saving opportunity; it will also help accelerate growth and engender brand loyalty thanks to more efficient service, and more.
Use a cloud billing solution for recurring billing purposes, for invoicing and revenue management, or for producing a range of reports and real-time analysis of your business performance, and all without the need for expensive hardware and software, or costly in-house IT experts. The best such solutions can be connected to many payment gateways, can handle your business tax requirements, and will reduce the workload on your team, so that each can dedicate their time to what they do best.
**Summary**
Put simply, your business needs a cloud billing platform if you want to grow and improve efficiency, for both you and your clients, without the need for great expenditure and restructuring. We recommend you check it out further, and talk to the experts in more detail about your cloud billing requirements.
**By: Alivia Mallan**
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/how-cloud-billing-can-benefit-your-business/
作者:[Aashima Sharma][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/aashima-sharma/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/01/Young-man-with-the-head-in-the-clouds-thinking_15259762_xl.jpg?resize=579%2C606&ssl=1 (Young man with the head in the clouds thinking_15259762_xl)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/01/Young-man-with-the-head-in-the-clouds-thinking_15259762_xl.jpg?fit=579%2C606&ssl=1
[3]: https://tridenstechnology.com/monetization/
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/diagram-monetization-infinite-v2-1.png?resize=350%2C196&ssl=1


@ -0,0 +1,120 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 steps to developing psychological safety)
[#]: via: (https://opensource.com/open-organization/19/9/psychological-safety-leadership-behaviors)
[#]: author: (Kathleen Hayes https://opensource.com/users/khayes4dayshttps://opensource.com/users/khayes4dayshttps://opensource.com/users/mdoyle)
3 steps to developing psychological safety
======
The mindsets, behaviors, and communication patterns necessary for
establishing psychological safety in our organizations may not be our
defaults, but they are teachable and observable.
![Brain map][1]
Psychological safety is a belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes. And it's critical for high-performing teams in open organizations.
Part one of this series introduced the concept of [psychological safety][2]. In this companion article, I'll recount my personal journey toward understanding psychological safety—and explain the fundamental shifts in mindset, behavior, and communication that anyone hoping to create psychologically safe teams and environments will need to make.
### Mindset: Become the learner
I have participated in a number of corporate cultures that fostered a "gotcha" mindset—one in which people were poised to pounce when something didn't go according to plan. Dropping this deeply ingrained mindset was a requirement for achieving psychological safety, and doing _that_ required a fundamental shift in the way I lived my life.
_Guiding value: Become the learner, not the knower._
Life is a process; achieving perfection will never happen. Similarly, building an organization is a process; there is no "we've got it now" state. In most cases, we're traversing uncharted territory. Enormous uncertainty lies ahead. [We can't know what will happen][3], which is why it was important for me to become the learner in work, just as in life.
On my first day as a new marketing leader, a team member was describing their collaboration with engineering and told me about "the F-You board." If a new program was rolled out and engineers complained that it missed the mark, was wrong, or was downright ridiculous, someone placed a tally on the F-You whiteboard. When they'd accumulated ten, the team would go out drinking.
There is a lot to unpack in this dynamic. For our purposes, however, let's focus on a few actionable steps that helped me reframe the "gotcha" mentality into a learning mindset.
First, I shaped marketing programs and campaigns as experiments, using the word "experiment" with intention not just _within_ the marketing department but _across_ the entire organization. Corporate-wide communications about upcoming rollouts concluded with, "If you see any glaring omissions or areas for refinement, please let me know," inviting engineers to surface blind spots and bring forward solutions—rather than point fingers after the work had concluded.
Next, to stimulate the learning process in myself and others, I began fostering a "[try, learn, modify][4]" mindset, in which setbacks are not viewed as doomsday failures but as opportunities for clarification and improvement. To recover quickly when something doesn't go according to plan, I would ask four key questions:
* What did we set out to do?
* What happened?
* What did we learn?
* How quickly can we improve upon it?
It's nearly impossible for every project to be a home run. Setbacks will occur. As the ground-breaking research on psychological safety revealed, [learning happens through vulnerable conversations][5]. When engaged in psychologically safe environments, we can use these moments to cultivate more learning and less defensiveness.
### Behavior: Model curiosity
One way we can help our team drop their defensiveness is by _modeling curiosity_.
As the "knower," I was adept at command and control. Quite often this meant playing the devil's advocate and shaming others into submitting to my point of view.
In a meeting early in my career as a vice president, a colleague was rolling out a new program and asked each executive to share feedback. I was surprised as each person around the table gave a thumbs up. I had reservations about the direction and was concerned that no one else could see the issues that seemed so readily apparent to me. Rather than asking for clarification and [stimulating a productive dialog][6], I simply quipped, "Am I the only one that hasn't [sipped the purple Kool-Aid][7]?" Not my finest moment.
As I look back, this behavior was fueled by a mixture of fear and overconfidence, a hostile combination resulting in a hostile psychological attitude. I wasn't curious because I was too busy being right, being the knower. By becoming the learner, I let a genuine interest in understanding others' perspectives come to the forefront. This helped me more deeply understand a fundamental fact about the human condition.
_Guiding value: Situations are rarely, if ever, crystal clear._
The process of understanding is dynamic. We are constantly moving along a continuum from clear to unclear and back again. For large teams, this swing is more pronounced as each member brings a unique perspective to bear on an issue. And rightly so. There are seven billion people on this planet; it's nearly impossible for everyone to see a situation the same way.
Recalibrating this attitude—the devil's advocate attitude of "I disagree" to the learner's space and behavior of "help me see what you see"—took practice. One thing that worked for me was intentionally using the phrase "I have a different perspective to share" when offering my opinion and, when clarifying, saying, "That is not consistent with my understanding. Can you tell me more about your perspective?" These tactics helped me move from my default of knowing and convincing. I also asked a trusted team member to privately point out when something or someone had triggered my old default. Over time, my self-awareness matured and, with practice, the intentional tactics evolved into a learned behavior.
I feel compelled to share that without the right mindset these tactics would have been cheap communication gimmicks. Only after shifting my perspective from that of the knower to that of the learner did I open a genuine desire to understand another's perspective. This allowed me to develop the capacity to model curiosity and open the space for my team members—and me—to explore ideas with safety, vulnerability, and respect.
### Communication: Deliver productive feedback
Psychological safety does not imply a cozy situation in which people are necessarily close friends, nor does it suggest an absence of pressure or problems. When problems inevitably arise, we must hold ourselves and others accountable and deliver feedback without tiptoeing around the truth, or playing the blame game. However, giving productive feedback is [a skill most leaders have never learned][8].
_Guiding value: Clear is kind; unclear is unkind._
When problems arise during experiments in marketing, I am finding team communication to be incredibly productive when using that _try, learn, modify_ approach and modeling curiosity. One-on-one conversations about real deficits, however, have proven more difficult.
I found so many creative reasons to delay or avoid these conversations. In a fast-paced startup, one of my favorites was, "They've only been in this role for a short while. Give them more time to get up to speed." That was an unfortunate approach, especially when coupled later with vague direction, like, "I need you to deliver more, more quickly." Because I was unable to clearly communicate what was expected, team members were not clear on what needed to be improved. This stall tactic and belittling innuendo masquerading as feedback was leading to more shame and blame than learning and growth.
In becoming the learner, the guiding value of "clear is kind, unclear is unkind," crystalized for me when studying _Dare to Lead_. In her work, [Brené Brown explains][9]:
* Feeding people half-truths to make them feel better, which is almost always about making ourselves feel more comfortable, is unkind.
* Not getting clear with people about your expectations because it feels too hard, yet holding them accountable or blaming them for not delivering, is unkind.
Below are three actionable tips that are helping me deliver more clear, productive feedback.
**Get specific.** Point to the specific lack in proficiency that needs to be addressed. Tie your comments to a shared vision of how this impacts career progression. When giving feedback on behavior, stay away from character and separate person from process.
**Allow people to have feelings.** Rather than rushing in, give them space to feel. Learn how to hold the discomfort.
**Think carefully about how you want to show up**. Work through how your conversation may unfold and where you might trigger blaming behaviors. Knowing puts you in a safer mindset for having difficult conversations.
Teaching team members, reassessing skill gaps, reassigning them (or even letting them go) can become more empathetic processes when "clear is kind" is top of mind for leaders.
### Closing
The mindsets, behaviors, and communication patterns necessary for establishing psychological safety in our organizations may not be our defaults, but they are teachable and observable. Stay curious, ask questions, and deepen your understanding of others' perspectives. Do the difficult work of holding yourself and others accountable for showing up in a way that's aligned with cultivating a culture where your creativity—and your team members—thrive.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/9/psychological-safety-leadership-behaviors
作者:[Kathleen Hayes][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/khayes4dayshttps://opensource.com/users/khayes4dayshttps://opensource.com/users/mdoyle
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_art-mindmap-520.png?itok=qQVBAoVw (Brain map)
[2]: https://opensource.com/open-organization/19/3/introduction-psychological-safety
[3]: https://opensource.com/open-organization/19/5/planning-future-unknowable
[4]: https://opensource.com/open-organization/18/3/try-learn-modify
[5]: https://www.youtube.com/watch?v=LhoLuui9gX8
[6]: https://opensource.com/open-organization/19/5/productive-arguments
[7]: https://opensource.com/open-organization/15/7/open-organizations-kool-aid
[8]: https://opensource.com/open-organization/19/4/be-open-with-difficult-feedback
[9]: https://brenebrown.com/articles/2018/10/15/clear-is-kind-unclear-is-unkind/


@ -0,0 +1,138 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Ansible brought peace to my home)
[#]: via: (https://opensource.com/article/19/9/ansible-documentation-kids-laptops)
[#]: author: (James Farrell https://opensource.com/users/jamesfhttps://opensource.com/users/jlozadadhttps://opensource.com/users/jason-bakerhttps://opensource.com/users/aseem-sharmahttps://opensource.com/users/marcobravo)
How Ansible brought peace to my home
======
Configuring his young daughters' computers with Ansible made it simple
for this dad to manage the family's computers.
![Coffee and laptop][1]
A few months ago, I read Marco Bravo's article [_How to use Ansible to document procedures_][2] on Opensource.com. I will admit, I didn't quite get it at the time. I was not actively using [Ansible][3], and I remember thinking it looked like more work than it was worth. But I had an open mind and decided to spend time looking deeper into Ansible.
I soon found an excuse to embark on my first real Ansible adventure: repurposing old laptops like in [_How to make an old computer useful again_][4]. I've always liked playing with old computers, and the prospect of automating something with modern methods piqued my interest.
### The task
Earlier this year, I gave my seven-year-old daughter a repurposed Dell Mini 9 running some flavor of Ubuntu. At first, my six-year-old daughter didn't care much about it, but as the music played and she discovered the fun programs, her interest set in.
I realized I would need to build another one for her soon. And any parent with small children close in age can likely identify with my dilemma. If both children don't get identical things, conflicts will arise. Similar toys, similar clothes, similar shoes … sometimes the color, shape, and blinking lights must be identical. I am sure they would notice any difference in laptop configuration, and it would become a point of contention. Therefore, I needed these laptops to have identical functionality.
Also, with small children in the mix, I suspected I would be rebuilding these things a few times. Failures, accidents, upgrades, corruptions … this threatened to become a time sink.
Since two young girls sharing one Dell Mini 9 was not really a workable solution, I grabbed a Dell D620 from my pile of old hardware, upgraded the RAM, put in an inexpensive SSD, and started to cook up a repeatable process to build the children's computer configuration.
If you think about it, this task seems ideal for a configuration management system. I needed something to document what I was doing so it could be easily repeatable.
### Ansible to the rescue
I didn't try to set up a full-on pre-boot execution environment (PXE) to support an occasional laptop install. I wanted to teach my children to do some of the installation work for me (a different kind of automation, ha!).
I decided to start from a minimal OS install and eventually broke down my Ansible approach into three parts: bootstrap, account setup, and software installation. I could have put everything into one giant script, but separating these functions allowed me to mix and match them for other projects and refine them individually over time. Ansible's YAML file readability helped keep things clear as I refined my systems.
For this laptop experiment, I decided to use Debian 32-bit as my starting point, as it seemed to work best on my older hardware. The bootstrap YAML script is intended to take a bare-minimal OS install and bring it up to some standard. It relies on a non-root account to be available over SSH and little else. Since a minimal OS install usually contains very little that is useful to Ansible, I use the following to hit one host and prompt me to log in with privilege escalation:
```
$ ansible-playbook bootstrap.yml -i '192.168.0.100,' -u jfarrell -Kk
```
The script makes use of Ansible's [raw][5] module to set some base requirements. It ensures Python is available, upgrades the OS, sets up an Ansible control account, transfers SSH keys, and configures sudo privilege escalation. When bootstrap completes, everything should be in place to have this node fully participate in my larger Ansible inventory. I've found that bootstrapping bare-minimum OS installs is nuanced (if there is interest, I'll write another article on this topic).
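The article doesn't reproduce bootstrap.yml itself, so the following is only a sketch of the approach it describes, built around the raw module; every task body here is my assumption, not the author's actual file, and the SSH key transfer step is omitted for brevity:

```
---
- name: Bootstrap a bare-minimum Debian install
  hosts: all
  gather_facts: false
  become: yes
  tasks:
    - name: Ensure Python is present so regular modules can run later
      raw: test -e /usr/bin/python3 || (apt-get update && apt-get install -y python3)

    - name: Bring the base OS up to date
      raw: apt-get update && apt-get -y dist-upgrade

    - name: Create the Ansible control account
      raw: id ansible || useradd -m -s /bin/bash ansible

    - name: Give the control account passwordless sudo
      raw: echo 'ansible ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/ansible
```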
The account YAML setup script is used to set up (or reset) user accounts for each family member. This keeps user IDs (UIDs) and group IDs (GIDs) consistent across the small number of machines we have, and it can be used to fix locked accounts when needed. Yes, I know I could have set up Network Information Service or LDAP authentication, but the number of accounts I have is very small, and I prefer to keep these systems very simple. Here is an excerpt I found especially useful for this:
```
---
- name: Set user accounts
  hosts: all
  gather_facts: false
  become: yes
  vars_prompt:
    - name: passwd
      prompt: "Enter the desired ansible password:"
      private: yes
  tasks:
  - name: Add child 1 account
    user:
      state: present
      name: child1
      password: "{{ passwd | password_hash('sha512') }}"
      comment: Child One
      uid: 888
      group: users
      shell: /bin/bash
      generate_ssh_key: yes
      ssh_key_bits: 2048
      update_password: always
      create_home: yes
```
The **vars_prompt** section prompts me for a password, which is passed through a Jinja2 transformation to produce the desired password hash. This means I don't need to hardcode passwords into the YAML file and can run it to change passwords as needed.
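If you want to preview what that filter produces before running the playbook, a quick ad-hoc call against localhost works (a sketch; adjust the quoting to suit your shell):

```
$ ansible localhost -m debug -a "msg={{ 'MySecret' | password_hash('sha512') }}"
```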
The software installation YAML file is still evolving. It includes a base set of utilities for the sysadmin and then the stuff my users need. This mostly consists of ensuring that the same graphical user interface (GUI) interface and all the same programs, games, and media files are installed on each machine. Here is a small excerpt of the software for my young children:
```
  - name: Install kids software
    apt:
      name: "{{ packages }}"
      state: present
    vars:
      packages:
      - lxde
      - childsplay
      - tuxpaint
      - tuxtype
      - pysycache
      - pysiogame
      - lmemory
      - bouncy
```
I created these three Ansible scripts using a virtual machine. When they were perfect, I tested them on the D620. Then converting the Mini 9 was a snap; I simply loaded the same minimal Debian install then ran the bootstrap, accounts, and software configurations. Both systems then functioned identically.
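Concretely, that conversion amounts to three playbook runs along these lines (only bootstrap.yml and its flags appear earlier in the article; the other file names, the inventory and the address are my guesses):

```
$ ansible-playbook bootstrap.yml -i '192.168.0.101,' -u jfarrell -Kk
$ ansible-playbook accounts.yml -i inventory -l kids_laptops
$ ansible-playbook software.yml -i inventory -l kids_laptops
```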
For a while, both sisters enjoyed their respective computers, comparing usage and exploring software features.
### The moment of truth
A few weeks later came the inevitable. My older daughter finally came to the conclusion that her pink Dell Mini 9 was underpowered. Her sister's D620 had superior power and screen real estate. YouTube was the new rage, and the Mini 9 could not keep up. As you can guess, the poor Mini 9 fell into disuse; she wanted a new machine, and sharing her younger sister's would not do.
I had another D620 in my pile. I replaced the BIOS battery, gave it a new SSD, and upgraded the RAM. Another perfect example of breathing new life into old hardware.
I pulled my Ansible scripts from source control, and everything I needed was right there: bootstrap, account setup, and software. By this time, I had forgotten a lot of the specific software installation information. But details like account UIDs and all the packages to install were all clearly documented and ready for use. While I surely could have figured it out by looking at my other machines, there was no need to spend the time! Ansible had it all clearly laid out in YAML.
Not only was the YAML documentation valuable, but Ansible's automation made short work of the new install. The minimal Debian OS install from USB stick took about 15 minutes. The subsequent shape up of the system using Ansible for end-user deployment only took another nine minutes. End-user acceptance testing was successful, and a new era of computing calmness was brought to my family (other parents will understand!).
### Conclusion
Taking the time to learn and practice Ansible with this exercise showed me the true value of its automation and documentation abilities. Spending a few hours figuring out the specifics for the first example saves time whenever I need to provision or fix a machine. The YAML is clear, easy to read, and—thanks to Ansible's idempotency—easy to test and refine over time. When I have new ideas or my children have new requests, using Ansible to control a local virtual machine for testing is a valuable time-saving tool.
Doing sysadmin tasks in your free time can be fun. Spending the time to automate and document your work pays rewards in the future; instead of needing to investigate and relearn a bunch of things you've already solved, Ansible keeps your work documented and ready to apply so you can move onto other, newer fun things!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/ansible-documentation-kids-laptops
作者:[James Farrell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jamesfhttps://opensource.com/users/jlozadadhttps://opensource.com/users/jason-bakerhttps://opensource.com/users/aseem-sharmahttps://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
[2]: https://opensource.com/article/19/4/ansible-procedures
[3]: https://www.ansible.com/
[4]: https://opensource.com/article/19/7/how-make-old-computer-useful-again
[5]: https://docs.ansible.com/ansible/2.3/raw_module.html


@ -0,0 +1,107 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Election fraud: Is there an open source solution?)
[#]: via: (https://opensource.com/article/19/9/voting-fraud-open-source-solution)
[#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyashttps://opensource.com/users/mbrownhttps://opensource.com/users/luis-ibanezhttps://opensource.com/users/jhibbets)
Election fraud: Is there an open source solution?
======
The Trust The Vote project is developing open source technology to help
keep elections honest.
![Team checklist][1]
Can open source technology help keep our elections honest? With its [Trust The Vote Project][2], the [Open Source Election Technology (OSET) Institute][3] is working on making that a reality for elections in the United States and around the world.
The project is developing an open, adaptable, flexible, full-featured, and innovative elections technology platform called [ElectOS][4]. It will support all aspects of elections administration and voting, including creating, marking, casting, and counting ballots and managing all back-office functions. The software is freely available under an Open Source Initiative (OSI)-recognized public license for adoption, adaptation, and deployment by anyone, including elections jurisdictions directly or, more commonly, commercial vendors or systems integrators.
With elections coming under more and more scrutiny due to vulnerable designs, aging machinery, hacking, foreign influence, and human incompetence, the OSET Institute is working on technology that will help ensure that every vote is counted as it was cast.
### Mississippi churning
The Mississippi Republican gubernatorial primary in August 2019 was called into question when a voting machine malfunctioned, denying a vote for candidate Bill Waller as the machine changed it to his opponent, Tate Reeves. The incident was [caught on camera][5].
"Around 40 states have voting machines that are a decade old or older," J. Alex Halderman, a computer science professor at the University of Michigan, tells [Politifact][6]. "There are reports of machines misbehaving in this manner after almost every election." That's rather alarming. Can open source be a solution to this problem?
The OSET Institute was founded in January 2007 by venture capital advisers who are alumni of Apple, Mozilla, Netscape, and Sun Microsystems. It is a non-partisan, non-profit 501(c)3 organization that researches and designs public technology—open source technology—for use in the US and around the world.
"For a long time, there have been systemic problems—market malformation and dysfunction—in the election industry. Due to market dynamics and industry performance issues, there has been no commercial incentive to make the investment in the necessary research and development to produce the kind of high-assurance, mission-critical systems for verifiable and accurate elections," says Gregory Miller, co-founder and COO of the OSET Institute.
Reflecting back to the 2000 presidential election between Vice President Al Gore and Texas Governor George W. Bush, Miller says: "Nobody was thinking about digital attacks during the ['chadfest' of Florida][7] and even as recently as 2007." Miller adds that one of the most important aspects of public technology is that it must be entirely trustworthy.
### Essential election technologies
Most voting systems in use are based around proprietary, black-box hardware and software built on 1990s technology—Windows 2000 or older. Some newer machines are running Windows 7—which is scheduled to lose maintenance and support in January 2020.
Miller says there are two crucial parts of the election process: the election administration systems and the voting systems.
* Election administration systems in the back office are responsible for elections setup, administration, and operation, especially casting and counting ballots; voter registration; ballot design, layout, and distribution; poll-book configuration; ballot-marking device configuration; and election results reporting and analytics.
* Voting systems are simply the ballot marking, casting, and counting components.
The most important element of the system is the bridge between the voting systems in polling places and the back-office administration systems. This behind-the-scenes process aggregates vote tallies into tabulations to determine the results.
The key device—and arguably the Achilles Heel of the entire ecosystem—is the election management system (EMS), which is the connection between election administration and the voting systems. The EMS machine is a back-office element but also a component of the voting system. Because the EMS software typically resides on a desktop machine used for a variety of government functions that serve citizens, it is the element most vulnerable to attacks.
Despite the vote-changing problem in the Mississippi primary, Miller warns, "the vulnerability of attack is not to the voting machinery in the polling place but to the tabulation component of the back-office EMS or other vital configuration tools, including the configuration of poll books (those stacks of papers, binders, or … apps that check a voter in to cast a ballot). As a result, attackers need only to find a highly contentious swing state precinct to be disruptive."
### Code causes change
Because voting security vulnerabilities are largely software-based, "code causes change," Miller declares. But, there are barriers to getting this done in time for the next US presidential election in November 2020. Foremost is the fact that there is little time left for OSET's team of 12 full-time people to finish the core voting platform and create separate EMS and voting system components, all based on a non-black box, non-proprietary model that uses public, open source technology with off-the-shelf hardware.
However, there is a lot more to do in addition to developing code for off-the-shelf hardware. OSET is developing new open data standards with NIST and the Election Assistance Commission (EAC). A new component-based certification process must be invented, and new design guidelines must be issued.
Meanwhile, service contracts that last for decades or more are protecting legacy vendors and making migration to new systems a challenge. Miller explains, "there are three primary vendors that control 85% of voting systems in the US and 70% globally; with the barriers to entry that exist, it will take a finished system to display the opportunity for change."
Getting there with open code is a process too. Miller says, "there is a very closely managed code-commit process, and the work faces far more scrutiny than an open source operating system, web server, or [content management system]. So, we are implementing a dual-sandbox model where one allows for wide-ranging, free-wheeling development and contributions. The other is the controlled environment that must pass security muster with the federal government in order for states to feel confident that the actual code for production systems is verifiable, accurate, secure, and transparent. We'll use a process for [passing] code across a review membrane that enables work in the less secure environment to be committed to the production environment. It's a process still in design."
The licenses are a bit complex: for governments or vendors that have regulatory issues with procuring and deploying commercial systems comprised of open source software, commercial hardware, and professional services, OSET offers the OSET Public License (OPL), an OSI-approved open source license based on the Mozilla Public License. For other research, development, and non-commercial use, or commercial use where certain procurement regulations are not an issue, the code is available under the GPL 2.0 license.
Of OSET's activities, Miller says, "85% is code development—a democracy software foundry. Another 10% is cybersecurity advisory for election administrators, and the final 5% of OSET's activity is public policy work (we are informers, not influencers in legislation). As a 501(c)3, we can not—and do not—perform lobbying, but a portion of our mission is education. So, our work remains steadfastly nonpartisan and philanthropically funded by grant-making organizations and the public."
Explains Miller: "We have a fairly straightforward charter: to build a trustworthy codebase that can exist on an inherently untrustworthy hardware base. Certain types of election administration apps and services can live in the cloud, while the voting system—a marriage of that software layer and hardware, federally certified to be deployed—must be built up from a hardened Linux kernel. So, a portion of our R&D lies in the arenas of trusted boot with hardware attestation and other important security architectures such as computer-assisted code randomization and so forth."
### Work with the code
There are two levels of access for people who want to work with the OSET Institute's Trust the Vote code. One is to contact the project to request access to certain code to advance development efforts; all legitimate requests will be honored, but the code is not yet accessible in the public domain. The other access point is to the extensive work that the OSET Institute has done for third-party voter registration systems, such as Rock The Vote. That source code is publicly available [online][8].
One component of this is [RockyOVR][9] for online voter registration services (it is operated by the OSET Institute and Rock The Vote with support from AWS). Another is [Grommet][10], an Android-native mobile app used by voter registration drives. [Siggy][11] and [VoteReady][12] are prototypes for new voter services under development that will be announced soon.
The OSET Institute is continually on the lookout for talented volunteers to advance the project. Top needs include system architecture and engineering and software development for both cloud-based and offline applications. These are not entry-level projects or positions, and in some cases, they require advanced skills in BIOS and firmware development; operating system internals; device drivers; and more.
All developers at the OSET Institute start as volunteers, and the best are moved into contract and staff positions, as project funding allows, in a meritocratic process. The Institute is based in Palo Alto, Calif., but operations are distributed and virtual with offices and centers of development excellence in Arlington, Va.; Boston; Mountain View, Calif.; Portland, Ore.; San Francisco; and the University of Edinburgh, Scotland.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/voting-fraud-open-source-solution
作者:[Jeff Macharyas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jeffmacharyashttps://opensource.com/users/mbrownhttps://opensource.com/users/luis-ibanezhttps://opensource.com/users/jhibbets
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://trustthevote.org/
[3]: https://www.osetfoundation.org/
[4]: https://bit.ly/EOSt1
[5]: https://twitter.com/STaylorRayburn/status/1166347828152680449
[6]: https://www.politifact.com/truth-o-meter/article/2019/aug/29/viral-video-voting-machine-malfunction-mississippi/
[7]: https://en.wikipedia.org/wiki/2000_United_States_presidential_election_in_Florida
[8]: https://github.com/TrustTheVote-Project
[9]: https://github.com/TrustTheVote-Project/Rocky-OVR
[10]: https://github.com/TrustTheVote-Project/Grommet
[11]: https://github.com/TrustTheVote-Project/Siggy
[12]: https://github.com/TrustTheVote-Project/VoteReady


@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Microsoft brings IBM iron to Azure for on-premises migrations)
[#]: via: (https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Microsoft brings IBM iron to Azure for on-premises migrations
======
Once again Microsoft shows it has shed its not-invented-here attitude to support customers.
When Microsoft launched Azure as a cloud-based version of its Windows Server operating system, it didn't make it exclusively Windows. It also included Linux support, and in just a few years, the [number of Linux instances now outnumbers Windows instances][1].
It's nice to see Microsoft finally shed that not-invented-here attitude that was so toxic for so long, but the company's latest move is really surprising.
Microsoft has partnered with a company called Skytap to offer IBM Power9 instances on its Azure cloud service to run Power-based systems inside of the Azure cloud, which will be offered as Azure virtual machines (VM) along with the Xeon and Epyc server instances that it already offers.
**Also read: [How to make hybrid cloud work][2]**
Skytap is an interesting company. Founded by three University of Washington professors, it specializes in cloud migrations of older on-premises hardware, such as IBM System i or SPARC. It has a data center in its hometown of Seattle, with IBM hardware running IBM's PowerVM hypervisor, plus some co-locations in IBM data centers in the U.S. and England.
Its motto is to migrate fast, then modernize at your own pace. So, its focus is on helping legacy systems migrate to the cloud and then modernize the apps, which is what the alliance with Microsoft appears to be aimed at. Azure will provide enterprises with a platform to enhance the value of traditional applications without the major expense of rewriting for a new platform.
Skytap is providing a preview of what's possible when lifting and extending a legacy IBM i application using DB2 on Skytap and augmenting it with Azure IoT Hub. The application seamlessly spans old and new architectures, demonstrating there is no need to completely rewrite rock-solid IBM i applications to benefit from modern cloud capabilities.
### Migrating to Azure cloud
Under the deal, Microsoft will take Power S922 servers from IBM and deploy them in an undisclosed Azure region. These machines can run the PowerVM hypervisor, which supports legacy IBM operating systems, as well as Linux.
"Migrating to the cloud by first replacing older technologies is time consuming and risky," said Brad Schick, CEO of Skytap, in a statement. "Skytaps goal has always been to provide businesses with a path to get these systems into the cloud with little change and less risk. Working with Microsoft, we will bring Skytaps native support for a wide range of legacy applications to Microsoft Azure, including those dependent on IBM i, AIX, and Linux on Power. This will give businesses the ability to extend the life of traditional systems and increase their value by modernizing with Azure services."
As Power-based applications are modernized, Skytap will then bring in DevOps CI/CD toolchains to accelerate software delivery. After moving to Skytap on Azure, customers will be able to integrate Azure DevOps, in addition to CI/CD toolchains for Power, such as Eradani and UrbanCode.
These sound like first steps, which means there will be more to come, especially in terms of the app migration. If it's only in one Azure region, it sounds like they are testing and finding their legs with this project and will likely expand later this year or next.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.openwall.com/lists/oss-security/2019/06/27/7
[2]: https://www.networkworld.com/article/3119362/hybrid-cloud/how-to-make-hybrid-cloud-work.html#tk.nww-fsb
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,143 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Protocols That Help Things to Communicate Over the Internet)
[#]: via: (https://opensourceforu.com/2019/09/the-protocols-that-help-things-to-communicate-over-the-internet/)
[#]: author: (Sapna Panchal https://opensourceforu.com/author/sapna-panchal/)
The Protocols That Help Things to Communicate Over the Internet
======
[![][1]][2]
_The Internet of Things is a system of connected, interrelated objects. These objects transmit data to servers for processing and, in turn, receive messages from the servers. These messages are sent and received using different protocols. This article discusses some of the protocols related to the IoT._
The Internet of Things (IoT) is beginning to pervade more and more aspects of our lives, and it is used by everyone, everywhere. Connected things use the Internet to collect information, send information back, or both. The IoT is an architecture built from a combination of available technologies, and it helps to make our daily lives more pleasant and convenient.
![Figure 1: IoT architecture][3]
![Figure 2: Messaging Queuing Telemetry protocol][4]
**IoT architecture**
Basically, IoT architecture has four components. In this article, we will explore each component to understand the architecture better.
  * **Sensors:** These are present everywhere. They help to collect data from any location and then share it with the IoT gateway. For example, sensors measure the temperature at different locations, which helps to gauge the weather conditions, and this information is passed on to the IoT gateway. This is a basic example of how the IoT operates.
* **IoT gateway:** Once the information is collected from the sensors, it is passed on to the gateway. The gateway is a mediator between sensor nodes and the World Wide Web. So basically, it processes the data that is collected from sensor nodes and then transmits this to the Internet infrastructure.
* **Cloud server:** Once data is transmitted through the gateway, it is stored and processed in the cloud server.
* **Mobile app:** Using a mobile application, the user can view and access the data processed in the cloud server.
This is the basic idea of the IoT and its architecture, along with the components. We now move on to the basic ideas behind different IoT protocols.
**IoT protocols**
As mentioned earlier, connected things use the Internet to collect information, send information back, or both. This is the fundamental basis of the IoT. To send that information, we need a protocol: a set of rules used to transmit data between electronic devices.
Essentially, we have two types of IoT protocols — the IoT network protocols and the IoT data protocols. This article discusses the IoT data protocols.
![Figure 3: Advance Message Queuing Protocol][5]
![Figure 4: CoAP][6]
**MQTT**
The Message Queuing Telemetry Transport (MQTT) protocol was primarily designed for low-bandwidth networks, but is very popular today as an IoT protocol. It is a lightweight messaging protocol used to exchange data between clients and the server.
This protocol has many advantages:
* It is small in size and has low power usage.
* It is a lightweight protocol.
  * It requires very little network bandwidth.
  * It works well for real-time communication.
Considering all the above reasons, MQTT is an excellent choice as an IoT data protocol.
**How MQTT works:** MQTT is based on a client-server relationship. The server manages the requests that come from different clients and sends the required information to clients. MQTT is based on two operations.
i) _Publish:_ When the client sends data to the MQTT broker, this operation is known as Publish.
ii) _Subscribe:_ When the client receives data from the broker, this operation is known as Subscribe.
The MQTT broker is the mediator that handles these operations, primarily taking messages and delivering them to the application or client.
Let's look at the example of a temperature sensor device, which sends readings to the MQTT broker; the information is then delivered to desktop or mobile applications. As stated earlier, Publish means sending readings to the MQTT broker, and Subscribe means delivering the information to the desktop/mobile application.
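To make the two operations concrete, here is a minimal, self-contained sketch using the community Go client `github.com/eclipse/paho.mqtt.golang`. The broker address, topic, and client ID are illustrative assumptions rather than part of any particular deployment:
```
package main

import (
	"fmt"
	"log"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Connect to a (hypothetical) local broker such as Mosquitto.
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://localhost:1883").
		SetClientID("temp-sensor-demo")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	// Subscribe: the broker delivers every message published on this topic.
	topic := "home/livingroom/temperature"
	if token := client.Subscribe(topic, 0, func(_ mqtt.Client, m mqtt.Message) {
		fmt.Printf("received %s on %s\n", m.Payload(), m.Topic())
	}); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	// Publish: send a sensor reading to the broker.
	if token := client.Publish(topic, 0, false, "21.5"); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	time.Sleep(time.Second) // give the subscription callback a moment to run
	client.Disconnect(250)
}
```
Note that the publisher and the subscriber never talk to each other directly; the broker decouples them, which is part of what makes MQTT suitable for intermittently connected devices.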
**AMQP**
The Advanced Message Queuing Protocol (AMQP) is a peer-to-peer protocol, where one peer plays the role of the client application and the other peer plays the role of the delivery service or broker. It is a combination of components that route and store messages within the delivery service or broker.
The benefits of AMQP are:
  * It helps to deliver messages without any getting lost.
  * It helps to guarantee one-time-only, secured delivery.
* It provides a secure connection.
* It always supports acknowledgments for message delivery or failure.
**How AMQP works and its architecture:** The AMQP architecture is made up of the following parts.
_**Exchange:**_ Messages that come from the publisher are accepted by the exchange, which routes them to the message queue.
_**Message queue:**_ This is the combination of multiple queues and is used for processing the messages.
_**Binding:**_ This maintains the connection between the exchange and the message queue.
The combination of Exchange and the message queues is known as the broker or AMQP broker.
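As a rough illustration of these parts, the sketch below uses the community Go client `github.com/streadway/amqp` against a hypothetical local RabbitMQ broker. Declaring a queue on the default exchange implicitly creates a binding whose routing key is the queue name; the connection URL and queue name are illustrative assumptions:
```
package main

import (
	"fmt"
	"log"

	"github.com/streadway/amqp"
)

func main() {
	// Connect to a (hypothetical) local RabbitMQ broker.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Declaring a queue on the default exchange also creates its binding.
	q, err := ch.QueueDeclare("readings", false, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Publish: the exchange routes the message to the bound queue.
	err = ch.Publish("", q.Name, false, false, amqp.Publishing{
		ContentType: "text/plain",
		Body:        []byte("21.5"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Consume: receive messages delivered from the queue.
	msgs, err := ch.Consume(q.Name, "", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string((<-msgs).Body))
}
```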
![Figure 5: Constrained Application Protocol architecture][7]
**Constrained Application Protocol (CoAP)**
This was initially used as a machine-to-machine (M2M) protocol, and later began to be used as an IoT protocol. It is a Web transfer protocol that is used with constrained nodes and constrained networks. CoAP uses the RESTful architecture, just like the HTTP protocol.
The advantages CoAP offers are:
* It works as a REST model for small devices.
  * As it is similar to HTTP, it's easy for developers to work with.
* It is a one-to-one protocol for transferring information between the client and server, directly.
* It is very simple to parse.
**How CoAP works and its architecture:** From Figure 4, we can understand that CoAP is the combination of Request/Response and Message. We can also say it has two layers: Request/Response and Message.
Figure 5 shows that the CoAP architecture is based on the client-server relationship, where…
* The client sends requests to the server.
* The server receives requests from the client and responds to them.
**Extensible Messaging and Presence Protocol (XMPP)**
This protocol is used to exchange messages in real time. It is used not only to communicate with others, but also to get information on the status of a user (away, offline, active). It is widely used in real-world applications, such as WhatsApp.
The Extensible Messaging and Presence Protocol should be used because:
* It is free, open and easy to understand. Hence, it is very popular.
  * It has secure authentication, and is extensible and flexible.
![Figure 6: Extensible Messaging and Presence Protocol][8]
**How XMPP works and its architecture:** In the XMPP architecture, each client has a unique name associated with it and communicates with other clients via the XMPP server. An XMPP client can belong either to the same domain as another client or to a different one.
In Figure 6, the XMPP clients belong to the same domain: one XMPP client sends the information to the XMPP server, which translates it and conveys the information to the other client.
Basically, this protocol is the backbone that provides universal connectivity between different endpoint protocols.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/the-protocols-that-help-things-to-communicate-over-the-internet/
作者:[Sapna Panchal][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/sapna-panchal/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Internet-of-things-illustration.jpg?resize=696%2C439&ssl=1 (Internet of things illustration)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Internet-of-things-illustration.jpg?fit=1125%2C710&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-1-IoT-architecture.jpg?resize=350%2C133&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-2-Messaging-Queuing-Telemetry-Transmit-protocol.jpg?resize=350%2C206&ssl=1
[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-3-Advance-Message-Queuing-Protocol.jpg?resize=350%2C160&ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-4-CoAP.jpg?resize=350%2C84&ssl=1
[7]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-5-Constrained-Application-Protocol-architecture.jpg?resize=350%2C224&ssl=1
[8]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-6-Extensible-Messaging-and-Presence-Protocol.jpg?resize=350%2C46&ssl=1

View File

@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The community-led renaissance of open source)
[#]: via: (https://opensource.com/article/19/9/community-led-renaissance)
[#]: author: (Donald Fischer https://opensource.com/users/dff)
The community-led renaissance of open source
======
Moving beyond the scarcity mindset of "open core."
![shapes of people symbols][1]
With few commercial participants, early free software and open source communities were, by definition, community-led. Software was designed and created organically by communities of users in response to their needs and inspiration. The results, to a degree nobody predicted, were often magical.
However, there were some missing pieces that prevented this magic from being unleashed at greater scale. As professional developers in larger organizations began to depend on open source software for key functions, they started looking for the same kinds of commercial support they were used to having for the proprietary software they purchased from companies like Microsoft, Oracle, and SAP. Someone to be accountable for fixing a new security vulnerability in a timely manner, or providing basic intellectual property assurances such as license verification and indemnification, or just delivering the everyday maintenance necessary to keep the software in good working order.
First-generation open source businesses like Red Hat emerged to respond to these needs. They combined the best of both worlds: the flexibility and control of raw open source with the commercial support that enterprises depend on. These new open source businesses found their opportunity by _adding_ the missing—but necessary—commercial services to community-led open source projects. These services would be costly for organizations to provide on their own and potentially even more costly to do without. One early leader of that era, Cygnus Solutions, even adopted the counter-intuitive tagline "Making free software affordable."
But back then, it was always overwhelmingly clear: The commercial vendors were in service of the community, filling in around the edges to enable commercial applications. The community was the star, and the companies were the supporting cast.
### The dark ages of open core
With the success of the original commercial open source companies like Red Hat, investment dollars flowed toward startups looking to harness the newfound commercial power of open source.
Unfortunately, by and large, this generation of open source companies drew the wrong lessons from first-generation players like Red Hat.
Witnessing the powerful combination of community-created technology with vendor-provided commercial capabilities, this new breed of company concluded that the vendors were the stars of the show, not the community.
This marked a turning point for open source.
In this vendor-centric view of the world, it was imagined that a single organization could generate the insights and set the roadmap for open source technologies. This drove a pervasive new belief that open source communities primarily represented a capital-efficient marketing channel rather than a new form of internet-enabled co-creation.
These companies approached open source with a scarcity mindset. Instead of investing in community-led projects to unlock the potential of the crowd, they created vendor-dominated projects, released demo versions under open source licenses, and directed the bulk of their resources to companion proprietary technology that they withheld as paid-only, closed-source products. By reverting to some of [the worst aspects of traditional proprietary software][2]—like uncertain licensing terms, unclear support horizons, and unknowable cost—these businesses crowded out the best aspects of open source.
As so often happens, this misreading of the open source model took on a new life when it was assigned an innocent-sounding brand name: "open core."
The open core dog chased its tail into an escalating flurry of blog posts, pitch decks, and even dedicated open core conferences. In its worst moments, leading players in this movement even [sought to redefine the very meaning of the words open source][3].
In the worldview of open core, the vendors are at the center of the universe, and open source users are just a commodity to be exploited.
### A community-led renaissance to restore balance
While business interests whipped themselves into a frenzy around open core, the community of creators at the heart of open source just kept on building. While a handful of high-profile companies occupied the industry headlines, thousands of individual creators and teams kept on building software, one pull request at a time.
It added up. Today, the modern application development platform isn't from a single vendor, or even a collection of vendors. It's the union of thousands of discrete open source packages—implemented in languages like JavaScript, Python, PHP, Ruby, Java, Go, .NET, Rust, R, and many others. Each element built for its own purpose, but together creating a beautiful tapestry that has become the foundation of all modern applications.
In some cases, the creators behind these projects are assisted by organizations that arose naturally from the communities, like Ruby Together, the Apache Software Foundation, and the Python Software Foundation. But by and large, these creators are self-supported, making time in the margins of their day jobs and central pursuits to collaborate on the software that makes their work possible while collectively building a huge commons of open source software that's available for any individual or organization to use.
But now, there's an emerging way for open source maintainers to participate in the value they create, which isn't about withholding value, but instead is about [creating _additional_ value][4].
In a revival and expansion of the principles that drove the first generation of community-led open source commercial players, creators are now coming together in a new form of collaboration. Rather than withholding software under a different license, they're partnering with each other to provide [the same kinds of professional assurances][5] that companies such as Red Hat discovered were necessary back in the day, but for the thousands of discrete components that make up the modern development platform.
Today's generation of entrepreneurial open source creators is leaving behind the scarcity mindset that bore open core and its brethren. Instead, they're advancing an optimistic, additive, and still practical model that [adds missing commercial value][6] on top of raw open source.
And by emulating first-generation open source companies, these creators are rediscovering a wide-open opportunity for value creation that benefits everyone. As commercial organizations engage with managed open source services sourced directly from the creators themselves, there's an immediate clarity in the alignment of interests between producers and consumers.
The result? The end of the scarcity-mindset dark ages of open core, and a renaissance of technology driven by a new class of thriving, independent, full-time open source creators.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/community-led-renaissance
作者:[Donald Fischer][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dff
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/Open%20Pharma.png?itok=GP7zqNZE (shapes of people symbols)
[2]: https://blog.tidelift.com/the-closed-source-sustainability-crisis
[3]: https://opensource.com/article/19/4/fauxpen-source-bad-business
[4]: https://www.techrepublic.com/article/the-key-to-open-source-sustainability-is-good-old-fashioned-self-interest/
[5]: https://tidelift.com/subscription/video/what-is-managed-open-source
[6]: https://blog.tidelift.com/what-is-managed-open-source

View File

@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Schema)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-schema/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Schema
======
New post on building a messenger app. You already know this kind of app: they allow you to have conversations with your friends. [Facebook Messenger][1], [WhatsApp][2] and [Skype][3] are a few examples. Though those apps also let you send pictures, stream video, record audio, chat with large groups of people, etc., we'll try to keep it simple and just send text messages between two users.
We'll use [CockroachDB][4] as the SQL database, [Go][5] as the backend language, and JavaScript to make a web app.
In this first post, we'll work through the database design.
```
CREATE TABLE users (
id SERIAL NOT NULL PRIMARY KEY,
username STRING NOT NULL UNIQUE,
avatar_url STRING,
github_id INT NOT NULL UNIQUE
);
```
Of course, this app requires users. We will go with social login. I selected just [GitHub][6], so we keep a reference to the GitHub user ID there.
```
CREATE TABLE conversations (
id SERIAL NOT NULL PRIMARY KEY,
last_message_id INT,
INDEX (last_message_id DESC)
);
```
Each conversation references its last message. Every time we insert a new message, we'll go and update this field. (I'll add the foreign key constraint below.)
You could say that we could group the messages and derive the last one that way, but that would add much more complexity to the queries (a rough sketch of that alternative is shown below).
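For comparison, here is a sketch of what that aggregate alternative could look like, assuming PostgreSQL-style `DISTINCT ON` support (which recent CockroachDB versions provide); it is only meant to show the query complexity we avoid by denormalizing:
```
-- Hypothetical alternative (not used): derive each conversation's last
-- message on the fly instead of storing last_message_id.
SELECT DISTINCT ON (conversation_id) id, content, created_at
FROM messages
ORDER BY conversation_id, created_at DESC;
```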
```
CREATE TABLE participants (
user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
messages_read_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (user_id, conversation_id)
);
```
Even though I said conversations will be between just two users, we'll go with a design that allows adding multiple participants to a conversation. That's why we have a participants table between conversations and users.
To know whether the user has unread messages, we have the `messages_read_at` field. Every time the user reads a conversation, we update this value, so we can compare it with the `created_at` field of the conversation's last message.
```
CREATE TABLE messages (
id SERIAL NOT NULL PRIMARY KEY,
content STRING NOT NULL,
user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
INDEX(created_at DESC)
);
```
Last but not least is the messages table; it saves a reference to the user who created it and the conversation it belongs to. It has an index on `created_at` too, to sort messages.
```
ALTER TABLE conversations
ADD CONSTRAINT fk_last_message_id_ref_messages
FOREIGN KEY (last_message_id) REFERENCES messages ON DELETE SET NULL;
```
And yep, that's the foreign key constraint I mentioned.
These four tables will do the trick. You can save those queries to a file and pipe them to the Cockroach CLI. First, start a new node:
```
cockroach start --insecure --host 127.0.0.1
```
Then create the database and tables:
```
cockroach sql --insecure -e "CREATE DATABASE messenger"
cat schema.sql | cockroach sql --insecure -d messenger
```
* * *
That's it. In the next part we'll do the login. Wait for it.
[Source Code][7]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-schema/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://www.messenger.com/
[2]: https://www.whatsapp.com/
[3]: https://www.skype.com/
[4]: https://www.cockroachlabs.com/
[5]: https://golang.org/
[6]: https://github.com/
[7]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,448 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: OAuth)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-oauth/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: OAuth
======
[Previous part: Schema][1].
In this post we start the backend by adding social login.
This is how it works: the user clicks on a link that redirects him to the GitHub authorization page. The user grants access to his info and gets redirected back, logged in. The next time he tries to log in, he won't be asked to grant permission; it is remembered, so the login flow is as fast as a single click.
Internally, the story is more complex, though. First we need to register a new [OAuth app on GitHub][2].
The important part is the callback URL. Set it to `http://localhost:3000/api/oauth/github/callback`. In development we are on localhost, so when you ship the app to production, register a new app with the correct callback URL.
This will give you a client ID and a secret key. Don't share them with anyone 👀
With that out of the way, let's start writing some code. Create a `main.go` file:
```
package main
import (
"context"
"database/sql"
"encoding/json"
"fmt"
"log"
"net/http"
"net/url"
"os"
"strconv"
"strings"
"time"
"github.com/gorilla/securecookie"
"github.com/joho/godotenv"
"github.com/knq/jwt"
_ "github.com/lib/pq"
gonanoid "github.com/matoous/go-nanoid"
"github.com/matryer/way"
"golang.org/x/oauth2"
"golang.org/x/oauth2/github"
)
var origin *url.URL
var db *sql.DB
var githubOAuthConfig *oauth2.Config
var cookieSigner *securecookie.SecureCookie
var jwtSigner jwt.Signer
func main() {
godotenv.Load()
port := intEnv("PORT", 3000)
originString := env("ORIGIN", fmt.Sprintf("http://localhost:%d/", port))
databaseURL := env("DATABASE_URL", "postgresql://root@127.0.0.1:26257/messenger?sslmode=disable")
githubClientID := os.Getenv("GITHUB_CLIENT_ID")
githubClientSecret := os.Getenv("GITHUB_CLIENT_SECRET")
hashKey := env("HASH_KEY", "secret")
jwtKey := env("JWT_KEY", "secret")
var err error
if origin, err = url.Parse(originString); err != nil || !origin.IsAbs() {
log.Fatal("invalid origin")
return
}
if i, err := strconv.Atoi(origin.Port()); err == nil {
port = i
}
if githubClientID == "" || githubClientSecret == "" {
log.Fatalf("remember to set both $GITHUB_CLIENT_ID and $GITHUB_CLIENT_SECRET")
return
}
if db, err = sql.Open("postgres", databaseURL); err != nil {
log.Fatalf("could not open database connection: %v\n", err)
return
}
defer db.Close()
if err = db.Ping(); err != nil {
log.Fatalf("could not ping to db: %v\n", err)
return
}
githubRedirectURL := *origin
githubRedirectURL.Path = "/api/oauth/github/callback"
githubOAuthConfig = &oauth2.Config{
ClientID: githubClientID,
ClientSecret: githubClientSecret,
Endpoint: github.Endpoint,
RedirectURL: githubRedirectURL.String(),
Scopes: []string{"read:user"},
}
cookieSigner = securecookie.New([]byte(hashKey), nil).MaxAge(0)
jwtSigner, err = jwt.HS256.New([]byte(jwtKey))
if err != nil {
log.Fatalf("could not create JWT signer: %v\n", err)
return
}
router := way.NewRouter()
router.HandleFunc("GET", "/api/oauth/github", githubOAuthStart)
router.HandleFunc("GET", "/api/oauth/github/callback", githubOAuthCallback)
router.HandleFunc("GET", "/api/auth_user", guard(getAuthUser))
log.Printf("accepting connections on port %d\n", port)
log.Printf("starting server at %s\n", origin.String())
addr := fmt.Sprintf(":%d", port)
if err = http.ListenAndServe(addr, router); err != nil {
log.Fatalf("could not start server: %v\n", err)
}
}
func env(key, fallbackValue string) string {
v, ok := os.LookupEnv(key)
if !ok {
return fallbackValue
}
return v
}
func intEnv(key string, fallbackValue int) int {
v, ok := os.LookupEnv(key)
if !ok {
return fallbackValue
}
i, err := strconv.Atoi(v)
if err != nil {
return fallbackValue
}
return i
}
```
Install dependencies:
```
go get -u github.com/gorilla/securecookie
go get -u github.com/joho/godotenv
go get -u github.com/knq/jwt
go get -u github.com/lib/pq
go get -u github.com/matoous/go-nanoid
go get -u github.com/matryer/way
go get -u golang.org/x/oauth2
```
We use a `.env` file to save secret keys and other configurations. Create it with at least this content:
```
GITHUB_CLIENT_ID=your_github_client_id
GITHUB_CLIENT_SECRET=your_github_client_secret
```
The other environment variables we use are:
* `PORT`: The port in which the server runs. Defaults to `3000`.
* `ORIGIN`: Your domain. Defaults to `http://localhost:3000/`. The port can also be extracted from this.
* `DATABASE_URL`: The Cockroach address. Defaults to `postgresql://root@127.0.0.1:26257/messenger?sslmode=disable`.
  * `HASH_KEY`: Key to sign cookies. Yeah, we'll use signed cookies for security.
* `JWT_KEY`: Key to sign JSON web tokens.
Because they have default values, you don't need to write them in the `.env` file.
After reading the configuration and connecting to the database, we create an OAuth config. We use the origin to build the callback URL (the same one we registered on the GitHub page), and we set the scope to “read:user”. This gives us permission to read the public user info; that's because we just need the username and avatar. Then we initialize the cookie and JWT signers, define some endpoints, and start the server.
Before implementing those HTTP handlers, let's write a couple of functions to send HTTP responses.
```
func respond(w http.ResponseWriter, v interface{}, statusCode int) {
b, err := json.Marshal(v)
if err != nil {
respondError(w, fmt.Errorf("could not marshal response: %v", err))
return
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(statusCode)
w.Write(b)
}
func respondError(w http.ResponseWriter, err error) {
log.Println(err)
http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
}
```
The first one sends JSON, and the second one logs the error to the console and returns a `500 Internal Server Error`.
### OAuth Start
So, the user clicks on a link that says “Access with GitHub”… That link points to this endpoint, `/api/oauth/github`, which will redirect the user to GitHub.
```
func githubOAuthStart(w http.ResponseWriter, r *http.Request) {
state, err := gonanoid.Nanoid()
if err != nil {
respondError(w, fmt.Errorf("could not generate state: %v", err))
return
}
stateCookieValue, err := cookieSigner.Encode("state", state)
if err != nil {
respondError(w, fmt.Errorf("could not encode state cookie: %v", err))
return
}
http.SetCookie(w, &http.Cookie{
Name: "state",
Value: stateCookieValue,
Path: "/api/oauth/github",
HttpOnly: true,
})
http.Redirect(w, r, githubOAuthConfig.AuthCodeURL(state), http.StatusTemporaryRedirect)
}
```
OAuth2 uses a mechanism to prevent CSRF attacks, so it requires a “state”. We use nanoid to create a random string and use that as the state. We save it in a cookie too.
### OAuth Callback
Once the user grants access to his info on the GitHub page, he will be redirected to this endpoint. The URL will come with the state and a code in the query string: `/api/oauth/github/callback?state=&code=`
```
const jwtLifetime = time.Hour * 24 * 14
type GithubUser struct {
ID int `json:"id"`
Login string `json:"login"`
AvatarURL *string `json:"avatar_url,omitempty"`
}
type User struct {
ID string `json:"id"`
Username string `json:"username"`
AvatarURL *string `json:"avatarUrl"`
}
func githubOAuthCallback(w http.ResponseWriter, r *http.Request) {
stateCookie, err := r.Cookie("state")
if err != nil {
http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
return
}
http.SetCookie(w, &http.Cookie{
Name: "state",
Value: "",
MaxAge: -1,
HttpOnly: true,
})
var state string
if err = cookieSigner.Decode("state", stateCookie.Value, &state); err != nil {
http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
return
}
q := r.URL.Query()
if state != q.Get("state") {
http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
return
}
ctx := r.Context()
t, err := githubOAuthConfig.Exchange(ctx, q.Get("code"))
if err != nil {
respondError(w, fmt.Errorf("could not fetch github token: %v", err))
return
}
client := githubOAuthConfig.Client(ctx, t)
resp, err := client.Get("https://api.github.com/user")
if err != nil {
respondError(w, fmt.Errorf("could not fetch github user: %v", err))
return
}
var githubUser GithubUser
if err = json.NewDecoder(resp.Body).Decode(&githubUser); err != nil {
respondError(w, fmt.Errorf("could not decode github user: %v", err))
return
}
defer resp.Body.Close()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
respondError(w, fmt.Errorf("could not begin tx: %v", err))
return
}
var user User
if err = tx.QueryRowContext(ctx, `
SELECT id, username, avatar_url FROM users WHERE github_id = $1
`, githubUser.ID).Scan(&user.ID, &user.Username, &user.AvatarURL); err == sql.ErrNoRows {
if err = tx.QueryRowContext(ctx, `
INSERT INTO users (username, avatar_url, github_id) VALUES ($1, $2, $3)
RETURNING id
`, githubUser.Login, githubUser.AvatarURL, githubUser.ID).Scan(&user.ID); err != nil {
respondError(w, fmt.Errorf("could not insert user: %v", err))
return
}
user.Username = githubUser.Login
user.AvatarURL = githubUser.AvatarURL
} else if err != nil {
respondError(w, fmt.Errorf("could not query user by github ID: %v", err))
return
}
if err = tx.Commit(); err != nil {
respondError(w, fmt.Errorf("could not commit to finish github oauth: %v", err))
return
}
exp := time.Now().Add(jwtLifetime)
token, err := jwtSigner.Encode(jwt.Claims{
Subject: user.ID,
Expiration: json.Number(strconv.FormatInt(exp.Unix(), 10)),
})
if err != nil {
respondError(w, fmt.Errorf("could not create token: %v", err))
return
}
expiresAt, _ := exp.MarshalText()
data := make(url.Values)
data.Set("token", string(token))
data.Set("expires_at", string(expiresAt))
http.Redirect(w, r, "/callback?"+data.Encode(), http.StatusTemporaryRedirect)
}
```
First we try to decode the cookie with the state we saved before, and compare it with the state that comes in the query string. In case they don't match, we return a `418 I'm a teapot` error.
Then we exchange the code for a token. This token is used to create an HTTP client that makes requests to the GitHub API. So we do a GET request to `https://api.github.com/user`. This endpoint gives us the current authenticated user's info in JSON format. We decode it to get the user ID, login (username) and avatar URL.
Then we try to find a user with that GitHub ID in the database. If none is found, we create one using that data.
Then, with the found or newly created user, we issue a JSON web token with the user ID as the Subject and redirect to the frontend with the token, alongside the expiration date, in the query string.
The web app will be for another post, but the URL you are being redirected to is `/callback?token=&expires_at=`. There we'll have some JavaScript to extract the token and expiration date from the URL, and do a GET request to `/api/auth_user` with the token in the `Authorization` header in the form `Bearer token_here`, to get the authenticated user and save it to localStorage.
### Guard Middleware
To get the current authenticated user, we use a middleware. That's because in future posts we'll have more endpoints that require authentication, and a middleware allows us to share that functionality.
```
type ContextKey struct {
Name string
}
var keyAuthUserID = ContextKey{"auth_user_id"}
func guard(handler http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
var token string
if a := r.Header.Get("Authorization"); strings.HasPrefix(a, "Bearer ") {
token = a[7:]
} else if t := r.URL.Query().Get("token"); t != "" {
token = t
} else {
http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
return
}
var claims jwt.Claims
if err := jwtSigner.Decode([]byte(token), &claims); err != nil {
http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
return
}
ctx := r.Context()
ctx = context.WithValue(ctx, keyAuthUserID, claims.Subject)
handler(w, r.WithContext(ctx))
}
}
```
First we try to read the token from the `Authorization` header or from a `token` parameter in the URL query string. If none is found, we return a `401 Unauthorized` error. Then we decode the claims in the token and use the Subject as the current authenticated user ID.
Now, we can wrap any `http.HandlerFunc` that needs authentication with this middleware, and we'll have the authenticated user ID in the context.
```
var guarded = guard(func(w http.ResponseWriter, r *http.Request) {
authUserID := r.Context().Value(keyAuthUserID).(string)
_ = authUserID // the handler can now act on behalf of this user
})
```
### Get Authenticated User
```
func getAuthUser(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
var user User
if err := db.QueryRowContext(ctx, `
SELECT username, avatar_url FROM users WHERE id = $1
`, authUserID).Scan(&user.Username, &user.AvatarURL); err == sql.ErrNoRows {
http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
return
} else if err != nil {
respondError(w, fmt.Errorf("could not query auth user: %v", err))
return
}
user.ID = authUserID
respond(w, user, http.StatusOK)
}
```
We use the guard middleware to get the current authenticated user ID and query the database.
* * *
That covers the OAuth process on the backend. In the next part, we'll see how to start conversations with other users.
[Source Code][3]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://github.com/settings/applications/new
[3]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,351 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Conversations)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-conversations/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Conversations
======
This post is the 3rd in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
In our messenger app, messages are grouped into conversations between two participants. You start a conversation by providing the user you want to chat with; the conversation is created (if it doesn't exist already) and you can start sending messages to it.
On the frontend, we're interested in showing a list of the latest conversations, where each one shows its last message and the name and avatar of the other participant.
In this post, we'll code the endpoints to start a conversation, list the latest ones, and find a single one.
Inside the `main()` function, add these routes.
```
router.HandleFunc("POST", "/api/conversations", requireJSON(guard(createConversation)))
router.HandleFunc("GET", "/api/conversations", guard(getConversations))
router.HandleFunc("GET", "/api/conversations/:conversationID", guard(getConversation))
```
These three endpoints require authentication, so we use the `guard()` middleware. There is also a new middleware that checks that the request content type is JSON.
### Require JSON Middleware
```
func requireJSON(handler http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if ct := r.Header.Get("Content-Type"); !strings.HasPrefix(ct, "application/json") {
http.Error(w, "Content type of application/json required", http.StatusUnsupportedMediaType)
return
}
handler(w, r)
}
}
```
If the request isn't JSON, it responds with a `415 Unsupported Media Type` error.
### Create Conversation
```
type Conversation struct {
ID string `json:"id"`
OtherParticipant *User `json:"otherParticipant"`
LastMessage *Message `json:"lastMessage"`
HasUnreadMessages bool `json:"hasUnreadMessages"`
}
```
So, a conversation holds a reference to the other participant and to the last message. It also has a bool field that tells whether there are unread messages.
```
type Message struct {
ID string `json:"id"`
Content string `json:"content"`
UserID string `json:"-"`
ConversationID string `json:"conversationID,omitempty"`
CreatedAt time.Time `json:"createdAt"`
Mine bool `json:"mine"`
ReceiverID string `json:"-"`
}
```
Messages are for the next post, but I define the struct now since we are using it. Most of the fields are the same as in the database table. We have `Mine` to tell whether the message is owned by the current authenticated user, and `ReceiverID` will be used to filter messages once we add realtime capabilities.
Let's write the HTTP handler then. It's quite long, but don't be scared.
```
func createConversation(w http.ResponseWriter, r *http.Request) {
var input struct {
Username string `json:"username"`
}
defer r.Body.Close()
if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
input.Username = strings.TrimSpace(input.Username)
if input.Username == "" {
respond(w, Errors{map[string]string{
"username": "Username required",
}}, http.StatusUnprocessableEntity)
return
}
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
tx, err := db.BeginTx(ctx, nil)
if err != nil {
respondError(w, fmt.Errorf("could not begin tx: %v", err))
return
}
defer tx.Rollback()
var otherParticipant User
if err := tx.QueryRowContext(ctx, `
SELECT id, avatar_url FROM users WHERE username = $1
`, input.Username).Scan(
&otherParticipant.ID,
&otherParticipant.AvatarURL,
); err == sql.ErrNoRows {
http.Error(w, "User not found", http.StatusNotFound)
return
} else if err != nil {
respondError(w, fmt.Errorf("could not query other participant: %v", err))
return
}
otherParticipant.Username = input.Username
if otherParticipant.ID == authUserID {
http.Error(w, "Try start a conversation with someone else", http.StatusForbidden)
return
}
var conversationID string
if err := tx.QueryRowContext(ctx, `
SELECT conversation_id FROM participants WHERE user_id = $1
INTERSECT
SELECT conversation_id FROM participants WHERE user_id = $2
`, authUserID, otherParticipant.ID).Scan(&conversationID); err != nil && err != sql.ErrNoRows {
respondError(w, fmt.Errorf("could not query common conversation id: %v", err))
return
} else if err == nil {
http.Redirect(w, r, "/api/conversations/"+conversationID, http.StatusFound)
return
}
var conversation Conversation
if err = tx.QueryRowContext(ctx, `
INSERT INTO conversations DEFAULT VALUES
RETURNING id
`).Scan(&conversation.ID); err != nil {
respondError(w, fmt.Errorf("could not insert conversation: %v", err))
return
}
if _, err = tx.ExecContext(ctx, `
INSERT INTO participants (user_id, conversation_id) VALUES
($1, $2),
($3, $2)
`, authUserID, conversation.ID, otherParticipant.ID); err != nil {
respondError(w, fmt.Errorf("could not insert participants: %v", err))
return
}
if err = tx.Commit(); err != nil {
respondError(w, fmt.Errorf("could not commit tx to create conversation: %v", err))
return
}
conversation.OtherParticipant = &otherParticipant
respond(w, conversation, http.StatusCreated)
}
```
For this endpoint you do a POST request to `/api/conversations` with a JSON body containing the username of the user you want to chat with.
So first it decodes the request body into a struct with the username. Then it validates that the username is not empty.
```
type Errors struct {
Errors map[string]string `json:"errors"`
}
```
This is the `Errors` struct. It's just a map. If you enter an empty username, you get this JSON with a `422 Unprocessable Entity` error.
```
{
"errors": {
"username": "Username required"
}
}
```
Then, we begin an SQL transaction. We only received a username, but we need the actual user ID. So the first part of the transaction is to query for the ID and avatar of that user (the other participant). If the user is not found, we respond with a `404 Not Found` error. Also, if the user happens to be the same as the current authenticated user, we respond with `403 Forbidden`; there should be two different users, not the same one.
Then, we try to find a conversation those two users have in common. We use `INTERSECT` for that. If there is one, we redirect to that conversation at `/api/conversations/{conversationID}` and return.
If no common conversation was found, we continue by creating a new one and adding the two participants. Finally, we `COMMIT` the transaction and respond with the newly created conversation.
### Get Conversations
This endpoint, `/api/conversations`, gets all the conversations of the current authenticated user.
```
func getConversations(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
rows, err := db.QueryContext(ctx, `
SELECT
conversations.id,
auth_user.messages_read_at < messages.created_at AS has_unread_messages,
messages.id,
messages.content,
messages.created_at,
messages.user_id = $1 AS mine,
other_users.id,
other_users.username,
other_users.avatar_url
FROM conversations
INNER JOIN messages ON conversations.last_message_id = messages.id
INNER JOIN participants other_participants
ON other_participants.conversation_id = conversations.id
AND other_participants.user_id != $1
INNER JOIN users other_users ON other_participants.user_id = other_users.id
INNER JOIN participants auth_user
ON auth_user.conversation_id = conversations.id
AND auth_user.user_id = $1
ORDER BY messages.created_at DESC
`, authUserID)
if err != nil {
respondError(w, fmt.Errorf("could not query conversations: %v", err))
return
}
defer rows.Close()
conversations := make([]Conversation, 0)
for rows.Next() {
var conversation Conversation
var lastMessage Message
var otherParticipant User
if err = rows.Scan(
&conversation.ID,
&conversation.HasUnreadMessages,
&lastMessage.ID,
&lastMessage.Content,
&lastMessage.CreatedAt,
&lastMessage.Mine,
&otherParticipant.ID,
&otherParticipant.Username,
&otherParticipant.AvatarURL,
); err != nil {
respondError(w, fmt.Errorf("could not scan conversation: %v", err))
return
}
conversation.LastMessage = &lastMessage
conversation.OtherParticipant = &otherParticipant
conversations = append(conversations, conversation)
}
if err = rows.Err(); err != nil {
respondError(w, fmt.Errorf("could not iterate over conversations: %v", err))
return
}
respond(w, conversations, http.StatusOK)
}
```
This handler just queries the database. It queries the conversations table with some joins… First, to the messages table to get the last message. Then to the participants, with a condition for a participant whose ID is not that of the current authenticated user; this is the other participant. Then it joins the users table to get that participant's username and avatar. And finally it joins the participants again, but with the opposite condition, so this row is the current authenticated user. We compare `messages_read_at` with the message's `created_at` to know whether the conversation has unread messages, and we use the message's `user_id` to check whether it is “mine” or not.
Note that this query assumes a conversation has just two users; it only works for that scenario. Also, if you want to show a count of the unread messages, this design isn't good. I think you could add an `unread_messages_count` `INT` field on the `participants` table, increment it each time a new message is created, and reset it when the user reads them, as sketched below.
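The statements below sketch that hypothetical counter; the column and the two updates are an assumption on top of the schema from part 1, not something the app actually uses:
```
-- Hypothetical counter column (not in the original schema).
ALTER TABLE participants ADD COLUMN unread_messages_count INT NOT NULL DEFAULT 0;

-- On each new message, increment the counter for the other participants…
UPDATE participants SET unread_messages_count = unread_messages_count + 1
WHERE conversation_id = $1 AND user_id != $2;

-- …and reset it when a participant reads the conversation.
UPDATE participants SET unread_messages_count = 0
WHERE conversation_id = $1 AND user_id = $2;
```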
Back in the handler, it iterates over the rows, scans each one to build a slice of conversations, and responds with them at the end.
### Get Conversation
This endpoint, `/api/conversations/{conversationID}`, responds with a single conversation given its ID.
```
func getConversation(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
conversationID := way.Param(ctx, "conversationID")
var conversation Conversation
var otherParticipant User
if err := db.QueryRowContext(ctx, `
SELECT
IFNULL(auth_user.messages_read_at < messages.created_at, false) AS has_unread_messages,
other_users.id,
other_users.username,
other_users.avatar_url
FROM conversations
LEFT JOIN messages ON conversations.last_message_id = messages.id
INNER JOIN participants other_participants
ON other_participants.conversation_id = conversations.id
AND other_participants.user_id != $1
INNER JOIN users other_users ON other_participants.user_id = other_users.id
INNER JOIN participants auth_user
ON auth_user.conversation_id = conversations.id
AND auth_user.user_id = $1
WHERE conversations.id = $2
`, authUserID, conversationID).Scan(
&conversation.HasUnreadMessages,
&otherParticipant.ID,
&otherParticipant.Username,
&otherParticipant.AvatarURL,
); err == sql.ErrNoRows {
http.Error(w, "Conversation not found", http.StatusNotFound)
return
} else if err != nil {
respondError(w, fmt.Errorf("could not query conversation: %v", err))
return
}
conversation.ID = conversationID
conversation.OtherParticipant = &otherParticipant
respond(w, conversation, http.StatusOK)
}
```
The query is quite similar. We're not interested in showing the last message, so we omit those fields, but we still need the message to know whether the conversation has unread messages. This time we do a `LEFT JOIN` instead of an `INNER JOIN` because `last_message_id` is nullable; otherwise we wouldn't get any rows. We use `IFNULL` in the `has_unread_messages` comparison for the same reason. Lastly, we filter by ID.
If the query returns no rows, we respond with a `404 Not Found` error; otherwise, we respond `200 OK` with the found conversation.
* * *
Yeah, that concludes the conversation endpoints.
Wait for the next post, where we'll create and list messages 👋
[Source Code][3]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,315 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Messages)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-messages/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Messages
======
This post is the 4th in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
In this post we'll code the endpoints to create a message and list them, and also an endpoint to update the last time the participant read messages. Start by adding these routes in the `main()` function.
```
router.HandleFunc("POST", "/api/conversations/:conversationID/messages", requireJSON(guard(createMessage)))
router.HandleFunc("GET", "/api/conversations/:conversationID/messages", guard(getMessages))
router.HandleFunc("POST", "/api/conversations/:conversationID/read_messages", guard(readMessages))
```
Messages go into conversations, so the endpoints include the conversation ID.
### Create Message
This endpoint handles POST requests to `/api/conversations/{conversationID}/messages` with a JSON body containing just the message content, and returns the newly created message. It has two side effects: it updates the conversation's `last_message_id` and the participant's `messages_read_at`.
```
func createMessage(w http.ResponseWriter, r *http.Request) {
var input struct {
Content string `json:"content"`
}
defer r.Body.Close()
if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
errs := make(map[string]string)
input.Content = removeSpaces(input.Content)
if input.Content == "" {
errs["content"] = "Message content required"
} else if len([]rune(input.Content)) > 480 {
errs["content"] = "Message too long. 480 max"
}
if len(errs) != 0 {
respond(w, Errors{errs}, http.StatusUnprocessableEntity)
return
}
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
conversationID := way.Param(ctx, "conversationID")
tx, err := db.BeginTx(ctx, nil)
if err != nil {
respondError(w, fmt.Errorf("could not begin tx: %v", err))
return
}
defer tx.Rollback()
isParticipant, err := queryParticipantExistance(ctx, tx, authUserID, conversationID)
if err != nil {
respondError(w, fmt.Errorf("could not query participant existance: %v", err))
return
}
if !isParticipant {
http.Error(w, "Conversation not found", http.StatusNotFound)
return
}
var message Message
if err := tx.QueryRowContext(ctx, `
INSERT INTO messages (content, user_id, conversation_id) VALUES
($1, $2, $3)
RETURNING id, created_at
`, input.Content, authUserID, conversationID).Scan(
&message.ID,
&message.CreatedAt,
); err != nil {
respondError(w, fmt.Errorf("could not insert message: %v", err))
return
}
if _, err := tx.ExecContext(ctx, `
UPDATE conversations SET last_message_id = $1
WHERE id = $2
`, message.ID, conversationID); err != nil {
respondError(w, fmt.Errorf("could not update conversation last message ID: %v", err))
return
}
if err = tx.Commit(); err != nil {
respondError(w, fmt.Errorf("could not commit tx to create a message: %v", err))
return
}
go func() {
if err = updateMessagesReadAt(nil, authUserID, conversationID); err != nil {
log.Printf("could not update messages read at: %v\n", err)
}
}()
message.Content = input.Content
message.UserID = authUserID
message.ConversationID = conversationID
// TODO: notify about new message.
message.Mine = true
respond(w, message, http.StatusCreated)
}
```
First, it decodes the request body into a struct with the message content. Then, it validates that the content is not empty and has no more than 480 characters.
```
var rxSpaces = regexp.MustCompile("\\s+")
func removeSpaces(s string) string {
if s == "" {
return s
}
lines := make([]string, 0)
for _, line := range strings.Split(s, "\n") {
line = rxSpaces.ReplaceAllLiteralString(line, " ")
line = strings.TrimSpace(line)
if line != "" {
lines = append(lines, line)
}
}
return strings.Join(lines, "\n")
}
```
This is the function that removes extra whitespace. It iterates over each line, collapses runs of consecutive whitespace into a single space, trims the line, and joins the non-empty lines back together.
After the validation, it starts an SQL transaction. First, it queries for the participant's existence in the conversation.
```
func queryParticipantExistance(ctx context.Context, tx *sql.Tx, userID, conversationID string) (bool, error) {
if ctx == nil {
ctx = context.Background()
}
var exists bool
if err := tx.QueryRowContext(ctx, `SELECT EXISTS (
SELECT 1 FROM participants
WHERE user_id = $1 AND conversation_id = $2
)`, userID, conversationID).Scan(&exists); err != nil {
return false, err
}
return exists, nil
}
```
I extracted it into a function because it's reused later.
If the user isn't a participant of the conversation, we return a `404 Not Found` error.
Then, it inserts the message and updates the conversation's `last_message_id`. From this point on, `last_message_id` cannot be `NULL`, because we don't allow removing messages.
Then it commits the transaction and updates the participant's `messages_read_at` in a goroutine.
```
func updateMessagesReadAt(ctx context.Context, userID, conversationID string) error {
if ctx == nil {
ctx = context.Background()
}
if _, err := db.ExecContext(ctx, `
UPDATE participants SET messages_read_at = now()
WHERE user_id = $1 AND conversation_id = $2
`, userID, conversationID); err != nil {
return err
}
return nil
}
```
Before responding with the new message, we must notify about it. This is for the realtime part we'll code in the next post, so I left a comment there.
### Get Messages
This endpoint handles GET requests to `/api/conversations/{conversationID}/messages`. It responds with a JSON array of all the messages in the conversation. It also has the same side effect of updating the participant's `messages_read_at`.
```
func getMessages(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
conversationID := way.Param(ctx, "conversationID")
tx, err := db.BeginTx(ctx, &sql.TxOptions{ReadOnly: true})
if err != nil {
respondError(w, fmt.Errorf("could not begin tx: %v", err))
return
}
defer tx.Rollback()
isParticipant, err := queryParticipantExistance(ctx, tx, authUserID, conversationID)
if err != nil {
respondError(w, fmt.Errorf("could not query participant existance: %v", err))
return
}
if !isParticipant {
http.Error(w, "Conversation not found", http.StatusNotFound)
return
}
rows, err := tx.QueryContext(ctx, `
SELECT
id,
content,
created_at,
user_id = $1 AS mine
FROM messages
WHERE messages.conversation_id = $2
ORDER BY messages.created_at DESC
`, authUserID, conversationID)
if err != nil {
respondError(w, fmt.Errorf("could not query messages: %v", err))
return
}
defer rows.Close()
messages := make([]Message, 0)
for rows.Next() {
var message Message
if err = rows.Scan(
&message.ID,
&message.Content,
&message.CreatedAt,
&message.Mine,
); err != nil {
respondError(w, fmt.Errorf("could not scan message: %v", err))
return
}
messages = append(messages, message)
}
if err = rows.Err(); err != nil {
respondError(w, fmt.Errorf("could not iterate over messages: %v", err))
return
}
if err = tx.Commit(); err != nil {
respondError(w, fmt.Errorf("could not commit tx to get messages: %v", err))
return
}
go func() {
if err = updateMessagesReadAt(nil, authUserID, conversationID); err != nil {
log.Printf("could not update messages read at: %v\n", err)
}
}()
respond(w, messages, http.StatusOK)
}
```
First, it begins an SQL transaction in read-only mode, checks for the participant's existence, and queries all the messages. For each message, we compare against the current authenticated user ID to know whether the user owns the message (`mine`). Then it commits the transaction, updates the participant's `messages_read_at` in a goroutine, and responds with the messages.
### Read Messages
This endpoint handles POST requests to `/api/conversations/{conversationID}/read_messages`, with no request or response body. In the frontend, we'll make this request each time a new message arrives in the realtime stream.
```
func readMessages(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
conversationID := way.Param(ctx, "conversationID")
if err := updateMessagesReadAt(ctx, authUserID, conversationID); err != nil {
respondError(w, fmt.Errorf("could not update messages read at: %v", err))
return
}
w.WriteHeader(http.StatusNoContent)
}
```
It uses the same function we've been using to update the participant's `messages_read_at`.
* * *
That concludes this part. Realtime messages are the only thing left in the backend. Wait for it in the next post.
[Source Code][4]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-messages/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,175 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Realtime Messages)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Realtime Messages
======
This post is the 5th in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
* [Part 4: Messages][4]
For realtime messages we'll use [Server-Sent Events][5]. This is an open connection over which we can stream data. We'll have an endpoint where the user subscribes to all the messages sent to them.
### Message Clients
Before the HTTP part, let's code a map to keep track of all the clients listening for messages. Initialize it globally like so:
```
type MessageClient struct {
Messages chan Message
UserID string
}
var messageClients sync.Map
```
### New Message Created
Remember that in the [last post][4], when we created the message, we left a “TODO” comment. There we'll dispatch a goroutine with this function.
```
go messageCreated(message)
```
Insert that line just where we left the comment.
```
func messageCreated(message Message) error {
if err := db.QueryRow(`
SELECT user_id FROM participants
WHERE user_id != $1 and conversation_id = $2
`, message.UserID, message.ConversationID).
Scan(&message.ReceiverID); err != nil {
return err
}
go broadcastMessage(message)
return nil
}
func broadcastMessage(message Message) {
messageClients.Range(func(key, _ interface{}) bool {
client := key.(*MessageClient)
if client.UserID == message.ReceiverID {
client.Messages <- message
}
return true
})
}
```
The function queries for the recipient ID (the other participant's ID; since a conversation has exactly two participants, the query returns a single row) and then sends the message to every connected client belonging to that recipient.
### Subscribe to Messages
Let's go to the `main()` function and add this route:
```
router.HandleFunc("GET", "/api/messages", guard(subscribeToMessages))
```
This endpoint handles GET requests on `/api/messages`. The request should be an [EventSource][6] connection. It responds with an event stream in which the data is JSON formatted.
```
func subscribeToMessages(w http.ResponseWriter, r *http.Request) {
if a := r.Header.Get("Accept"); !strings.Contains(a, "text/event-stream") {
http.Error(w, "This endpoint requires an EventSource connection", http.StatusNotAcceptable)
return
}
f, ok := w.(http.Flusher)
if !ok {
respondError(w, errors.New("streaming unsupported"))
return
}
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
h := w.Header()
h.Set("Cache-Control", "no-cache")
h.Set("Connection", "keep-alive")
h.Set("Content-Type", "text/event-stream")
messages := make(chan Message)
defer close(messages)
client := &MessageClient{Messages: messages, UserID: authUserID}
messageClients.Store(client, nil)
defer messageClients.Delete(client)
for {
select {
case <-ctx.Done():
return
case message := <-messages:
if b, err := json.Marshal(message); err != nil {
log.Printf("could not marshall message: %v\n", err)
fmt.Fprintf(w, "event: error\ndata: %v\n\n", err)
} else {
fmt.Fprintf(w, "data: %s\n\n", b)
}
f.Flush()
}
}
}
```
First it checks for the correct request headers and verifies that the server supports streaming. Then we create a channel of messages, build a client with it, and store the client in the clients map. Each time a new message is created, it goes into this channel, so we can read from it with a `for-select` loop.
Server-Sent Events uses this format to send data:
```
data: some data here\n\n
```
We are sending it in JSON format:
```
data: {"foo":"bar"}\n\n
```
We are using `fmt.Fprintf()` to write to the response writer in this format, flushing the data in each iteration of the loop.
This loops until the connection is closed, which we detect through the request context. We deferred closing the channel and deleting the client, so when the loop ends, the channel is closed and the client won't receive any more messages.
As a side note, the JavaScript API to work with Server-Sent Events (EventSource) doesn't support setting custom headers 😒, so we cannot send `Authorization: Bearer <token>`. That's why the `guard()` middleware also reads the token from the URL query string.
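The `guard()` middleware itself was written back in the OAuth post and isn't repeated here, but the query-string fallback might look roughly like this (a sketch under assumed names; the actual token verification is omitted):
```
// Hypothetical sketch of the token lookup inside guard(), not the original code.
token := r.URL.Query().Get("token")
if token == "" {
	if a := r.Header.Get("Authorization"); strings.HasPrefix(a, "Bearer ") {
		token = a[len("Bearer "):]
	}
}
if token == "" {
	http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
	return
}
// ...then decode/verify the JWT and store the authenticated user ID in the context.
```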
* * *
That concludes the realtime messages. I'd like to say that's everything in the backend, but to code the frontend I'll add one more endpoint for logging in: one that will be just for development.
[Source Code][7]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
[5]: https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events
[6]: https://developer.mozilla.org/en-US/docs/Web/API/EventSource
[7]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,145 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Development Login)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-dev-login/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Development Login
======
This post is the 6th in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
* [Part 4: Messages][4]
* [Part 5: Realtime Messages][5]
We already implemented login through GitHub, but if we want to play around with the app, we need a couple of users to test it. In this post we'll add an endpoint to log in as any user just by giving a username. This endpoint will be just for development.
Start by adding this route in the `main()` function.
```
router.HandleFunc("POST", "/api/login", requireJSON(login))
```
### Login
This function handles POST requests to `/api/login` with a JSON body containing just a username, and returns the authenticated user, a token, and its expiration date in JSON format.
```
func login(w http.ResponseWriter, r *http.Request) {
if origin.Hostname() != "localhost" {
http.NotFound(w, r)
return
}
var input struct {
Username string `json:"username"`
}
if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
defer r.Body.Close()
var user User
if err := db.QueryRowContext(r.Context(), `
SELECT id, avatar_url
FROM users
WHERE username = $1
`, input.Username).Scan(
&user.ID,
&user.AvatarURL,
); err == sql.ErrNoRows {
http.Error(w, "User not found", http.StatusNotFound)
return
} else if err != nil {
respondError(w, fmt.Errorf("could not query user: %v", err))
return
}
user.Username = input.Username
exp := time.Now().Add(jwtLifetime)
token, err := issueToken(user.ID, exp)
if err != nil {
respondError(w, fmt.Errorf("could not create token: %v", err))
return
}
respond(w, map[string]interface{}{
"authUser": user,
"token": token,
"expiresAt": exp,
}, http.StatusOK)
}
```
First it checks that we are on localhost, otherwise it responds with `404 Not Found`. Then it decodes the body, skipping validation since this is just for development. Next, it queries the database for a user with the given username; if none is found, it returns `404 Not Found`. Finally, it issues a new JSON web token using the user ID as the subject.
```
func issueToken(subject string, exp time.Time) (string, error) {
token, err := jwtSigner.Encode(jwt.Claims{
Subject: subject,
Expiration: json.Number(strconv.FormatInt(exp.Unix(), 10)),
})
if err != nil {
return "", err
}
return string(token), nil
}
```
The function does the same thing we did [previously][2]; I just extracted it to reuse the code.
After creating the token, it responds with the user, token and expiration date.
### Seed Users
Now you can seed the database with some users to play with.
```
INSERT INTO users (id, username) VALUES
(1, 'john'),
(2, 'jane');
```
You can save it to a file and pipe it to the Cockroach CLI.
```
cat seed_users.sql | cockroach sql --insecure -d messenger
```
* * *
That's it. Once you deploy the code to production under your own domain, this login function won't be available.
This post concludes the backend.
[Source Code][6]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
[6]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,459 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Access Page)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-access-page/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Access Page
======
This post is the 7th in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
* [Part 4: Messages][4]
* [Part 5: Realtime Messages][5]
* [Part 6: Development Login][6]
Now that we're done with the backend, let's move to the frontend. I'll go with a single-page application.
Let's start by creating a file `static/index.html` with the following content.
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Messenger</title>
<link rel="shortcut icon" href="data:,">
<link rel="stylesheet" href="/styles.css">
<script src="/main.js" type="module"></script>
</head>
<body></body>
</html>
```
This HTML file must be served for every URL, and JavaScript will take care of rendering the correct page.
So let's go to `main.go` for a moment and add the following route inside the `main()` function:
```
router.Handle("GET", "/...", http.FileServer(SPAFileSystem{http.Dir("static")}))
type SPAFileSystem struct {
fs http.FileSystem
}
func (spa SPAFileSystem) Open(name string) (http.File, error) {
f, err := spa.fs.Open(name)
if err != nil {
return spa.fs.Open("index.html")
}
return f, nil
}
```
We use a custom file system so that, instead of returning `404 Not Found` for unknown URLs, it serves `index.html`.
### Router
In the `index.html` we loaded two files: `styles.css` and `main.js`. I leave styling to your taste.
Let's move to `main.js`. Create a `static/main.js` file with the following content:
```
import { guard } from './auth.js'
import Router from './router.js'
let currentPage
const disconnect = new CustomEvent('disconnect')
const router = new Router()
router.handle('/', guard(view('home'), view('access')))
router.handle('/callback', view('callback'))
router.handle(/^\/conversations\/([^\/]+)$/, guard(view('conversation'), view('access')))
router.handle(/^\//, view('not-found'))
router.install(async result => {
document.body.innerHTML = ''
if (currentPage instanceof Node) {
currentPage.dispatchEvent(disconnect)
}
currentPage = await result
if (currentPage instanceof Node) {
document.body.appendChild(currentPage)
}
})
function view(pageName) {
return (...args) => import(`/pages/${pageName}-page.js`)
.then(m => m.default(...args))
}
```
If you're a follower of this blog, you already know how this works. That router is the one shown [here][7]. Just download it from [@nicolasparada/router][8] and save it to `static/router.js`.
We registered four routes. At the root `/` we show the home or access page depending on whether the user is authenticated. At `/callback` we show the callback page. At `/conversations/{conversationID}` we show the conversation or access page, again depending on authentication, and for every other URL we show a not-found page.
We tell the router to render the result to the document body and dispatch a `disconnect` event to each page before leaving.
We have each page in a different file and we import them with the new dynamic `import()`.
### Auth
`guard()` is a function that, given two functions, executes the first one if the user is authenticated, or the second one if not. It comes from `auth.js`, so let's create a `static/auth.js` file with the following content:
```
export function isAuthenticated() {
const token = localStorage.getItem('token')
const expiresAtItem = localStorage.getItem('expires_at')
if (token === null || expiresAtItem === null) {
return false
}
const expiresAt = new Date(expiresAtItem)
if (isNaN(expiresAt.valueOf()) || expiresAt <= new Date()) {
return false
}
return true
}
export function guard(fn1, fn2) {
return (...args) => isAuthenticated()
? fn1(...args)
: fn2(...args)
}
export function getAuthUser() {
if (!isAuthenticated()) {
return null
}
const authUser = localStorage.getItem('auth_user')
if (authUser === null) {
return null
}
try {
return JSON.parse(authUser)
} catch (_) {
return null
}
}
```
`isAuthenticated()` checks for `token` and `expires_at` from localStorage to tell if the user is authenticated. `getAuthUser()` gets the authenticated user from localStorage.
When we log in, we'll save all this data to localStorage, so it will make sense then.
### Access Page
![access page screenshot][9]
Let's start with the access page. Create a file `static/pages/access-page.js` with the following content:
```
const template = document.createElement('template')
template.innerHTML = `
<h1>Messenger</h1>
<a href="/api/oauth/github" onclick="event.stopPropagation()">Access with GitHub</a>
`
export default function accessPage() {
return template.content
}
```
Because the router intercepts all the link clicks to do its navigation, we must prevent the event propagation for this link in particular.
Clicking on that link redirects us to the backend, then to GitHub, then back to the backend, and then to the frontend again: to the callback page.
### Callback Page
Create the file `static/pages/callback-page.js` with the following content:
```
import http from '../http.js'
import { navigate } from '../router.js'
export default async function callbackPage() {
const url = new URL(location.toString())
const token = url.searchParams.get('token')
const expiresAt = url.searchParams.get('expires_at')
try {
if (token === null || expiresAt === null) {
throw new Error('Invalid URL')
}
const authUser = await getAuthUser(token)
localStorage.setItem('auth_user', JSON.stringify(authUser))
localStorage.setItem('token', token)
localStorage.setItem('expires_at', expiresAt)
} catch (err) {
alert(err.message)
} finally {
navigate('/', true)
}
}
function getAuthUser(token) {
return http.get('/api/auth_user', { authorization: `Bearer ${token}` })
}
```
The callback page doesn't render anything. It's an async function that does a GET request to `/api/auth_user` using the token from the URL query string and saves all the data to localStorage. Then it redirects to `/`.
### HTTP
There is an HTTP module. Create a `static/http.js` file with the following content:
```
import { isAuthenticated } from './auth.js'
async function handleResponse(res) {
const body = await res.clone().json().catch(() => res.text())
if (res.status === 401) {
localStorage.removeItem('auth_user')
localStorage.removeItem('token')
localStorage.removeItem('expires_at')
}
if (!res.ok) {
const message = typeof body === 'object' && body !== null && 'message' in body
? body.message
: typeof body === 'string' && body !== ''
? body
: res.statusText
throw Object.assign(new Error(message), {
url: res.url,
statusCode: res.status,
statusText: res.statusText,
headers: res.headers,
body,
})
}
return body
}
function getAuthHeader() {
return isAuthenticated()
? { authorization: `Bearer ${localStorage.getItem('token')}` }
: {}
}
export default {
get(url, headers) {
return fetch(url, {
headers: Object.assign(getAuthHeader(), headers),
}).then(handleResponse)
},
post(url, body, headers) {
const init = {
method: 'POST',
headers: getAuthHeader(),
}
if (typeof body === 'object' && body !== null) {
init.body = JSON.stringify(body)
init.headers['content-type'] = 'application/json; charset=utf-8'
}
Object.assign(init.headers, headers)
return fetch(url, init).then(handleResponse)
},
subscribe(url, callback) {
const urlWithToken = new URL(url, location.origin)
if (isAuthenticated()) {
urlWithToken.searchParams.set('token', localStorage.getItem('token'))
}
const eventSource = new EventSource(urlWithToken.toString())
eventSource.onmessage = ev => {
let data
try {
data = JSON.parse(ev.data)
} catch (err) {
console.error('could not parse message data as JSON:', err)
return
}
callback(data)
}
const unsubscribe = () => {
eventSource.close()
}
return unsubscribe
},
}
```
This module is a wrapper around the [fetch][10] and [EventSource][11] APIs. The most important part is that it adds the JSON web token to the requests.
### Home Page
![home page screenshot][12]
So, when the user logs in, the home page will be shown. Create a `static/pages/home-page.js` file with the following content:
```
import { getAuthUser } from '../auth.js'
import { avatar } from '../shared.js'
export default function homePage() {
const authUser = getAuthUser()
const template = document.createElement('template')
template.innerHTML = `
<div>
<div>
${avatar(authUser)}
<span>${authUser.username}</span>
</div>
<button id="logout-button">Logout</button>
</div>
<!-- conversation form here -->
<!-- conversation list here -->
`
const page = template.content
page.getElementById('logout-button').onclick = onLogoutClick
return page
}
function onLogoutClick() {
localStorage.clear()
location.reload()
}
```
For this post, this is the only content we render on the home page. We show the current authenticated user and a logout button.
When the user clicks logout, we clear everything in localStorage and reload the page.
### Avatar
That `avatar()` function shows the user's avatar. Because it's used in more than one place, I moved it to a `shared.js` file. Create the file `static/shared.js` with the following content:
```
export function avatar(user) {
return user.avatarUrl === null
? `<figure class="avatar" data-initial="${user.username[0]}"></figure>`
: `<img class="avatar" src="${user.avatarUrl}" alt="${user.username}'s avatar">`
}
```
We use a small figure with the user's initial in case the avatar URL is null.
You can show the initial with a little CSS using the `attr()` function.
```
.avatar[data-initial]::after {
content: attr(data-initial);
}
```
### Development Login
![access page with login form screenshot][13]
In the previous post we coded a login for development. Let's add a form for that to the access page. Go to `static/pages/access-page.js` and modify it a little.
```
import http from '../http.js'
const template = document.createElement('template')
template.innerHTML = `
<h1>Messenger</h1>
<form id="login-form">
<input type="text" placeholder="Username" required>
<button>Login</button>
</form>
<a href="/api/oauth/github" onclick="event.stopPropagation()">Access with GitHub</a>
`
export default function accessPage() {
const page = template.content.cloneNode(true)
page.getElementById('login-form').onsubmit = onLoginSubmit
return page
}
async function onLoginSubmit(ev) {
ev.preventDefault()
const form = ev.currentTarget
const input = form.querySelector('input')
const submitButton = form.querySelector('button')
input.disabled = true
submitButton.disabled = true
try {
const payload = await login(input.value)
input.value = ''
localStorage.setItem('auth_user', JSON.stringify(payload.authUser))
localStorage.setItem('token', payload.token)
localStorage.setItem('expires_at', payload.expiresAt)
location.reload()
} catch (err) {
alert(err.message)
setTimeout(() => {
input.focus()
}, 0)
} finally {
input.disabled = false
submitButton.disabled = false
}
}
function login(username) {
return http.post('/api/login', { username })
}
```
I added a login form. When the user submits it, we do a POST request to `/api/login` with the username, save all the data to localStorage, and reload the page.
Remember to remove this form once you are done with the frontend.
* * *
That's all for this post. In the next one, we'll continue with the home page, adding a form to start conversations and a list with the latest ones.
[Source Code][14]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-access-page/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
[6]: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
[7]: https://nicolasparada.netlify.com/posts/js-router/
[8]: https://unpkg.com/@nicolasparada/router
[9]: https://nicolasparada.netlify.com/img/go-messenger-access-page/access-page.png
[10]: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
[11]: https://developer.mozilla.org/en-US/docs/Web/API/EventSource
[12]: https://nicolasparada.netlify.com/img/go-messenger-access-page/home-page.png
[13]: https://nicolasparada.netlify.com/img/go-messenger-access-page/access-page-v2.png
[14]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,255 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Home Page)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-home-page/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Home Page
======
This post is the 8th in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
* [Part 4: Messages][4]
* [Part 5: Realtime Messages][5]
* [Part 6: Development Login][6]
* [Part 7: Access Page][7]
Continuing with the frontend, let's finish the home page in this post. We'll add a form to start conversations and a list with the latest ones.
### Conversation Form
![conversation form screenshot][8]
In the `static/pages/home-page.js` file add some markup in the HTML view.
```
<form id="conversation-form">
<input type="search" placeholder="Start conversation with..." required>
</form>
```
Add that form just below the section in which we displayed the auth user and logout button.
```
page.getElementById('conversation-form').onsubmit = onConversationSubmit
```
Now we can listen to the “submit” event to create the conversation.
```
import http from '../http.js'
import { navigate } from '../router.js'
async function onConversationSubmit(ev) {
ev.preventDefault()
const form = ev.currentTarget
const input = form.querySelector('input')
input.disabled = true
try {
const conversation = await createConversation(input.value)
input.value = ''
navigate('/conversations/' + conversation.id)
} catch (err) {
if (err.statusCode === 422) {
input.setCustomValidity(err.body.errors.username)
} else {
alert(err.message)
}
setTimeout(() => {
input.focus()
}, 0)
} finally {
input.disabled = false
}
}
function createConversation(username) {
return http.post('/api/conversations', { username })
}
```
On submit, we do a POST request to `/api/conversations` with the username and redirect to the conversation page (which we'll build in the next post).
### Conversation List
![conversation list screenshot][9]
In the same file, we are going to make the `homePage()` function async to load the conversations first.
```
export default async function homePage() {
const conversations = await getConversations().catch(err => {
console.error(err)
return []
})
/*...*/
}
function getConversations() {
return http.get('/api/conversations')
}
```
Then, add a list in the markup to render conversations there.
```
<ol id="conversations"></ol>
```
Add it just below the current markup.
```
const conversationsOList = page.getElementById('conversations')
for (const conversation of conversations) {
conversationsOList.appendChild(renderConversation(conversation))
}
```
So we can append each conversation to the list.
```
import { avatar, escapeHTML } from '../shared.js'
function renderConversation(conversation) {
const messageContent = escapeHTML(conversation.lastMessage.content)
const messageDate = new Date(conversation.lastMessage.createdAt).toLocaleString()
const li = document.createElement('li')
li.dataset['id'] = conversation.id
if (conversation.hasUnreadMessages) {
li.classList.add('has-unread-messages')
}
li.innerHTML = `
<a href="/conversations/${conversation.id}">
<div>
${avatar(conversation.otherParticipant)}
<span>${conversation.otherParticipant.username}</span>
</div>
<div>
<p>${messageContent}</p>
<time>${messageDate}</time>
</div>
</a>
`
return li
}
```
Each conversation item contains a link to the conversation page and displays the other participant's info and a preview of the last message. Also, you can use `.hasUnreadMessages` to add a class to the item and do some styling with CSS: maybe a bolder font or an accent color.
Note that we're escaping the message content. That function comes from `static/shared.js`:
```
export function escapeHTML(str) {
return str
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;')
.replace(/'/g, '&#039;')
}
```
That prevents the user's message from being rendered as HTML. If the user happens to write something like:
```
<script>alert('lololo')</script>
```
It would be very annoying, because that script would be executed 😅
So yeah, always remember to escape content from untrusted sources.
### Messages Subscription
Last but not least, I want to subscribe to the message stream here.
```
const unsubscribe = subscribeToMessages(onMessageArrive)
page.addEventListener('disconnect', unsubscribe)
```
Add that line in the `homePage()` function.
```
function subscribeToMessages(cb) {
return http.subscribe('/api/messages', cb)
}
```
The `subscribe()` function returns a function that, once called, closes the underlying connection. That's why I passed it to the “disconnect” event: when the user leaves the page, the event stream is closed.
```
async function onMessageArrive(message) {
const conversationLI = document.querySelector(`li[data-id="${message.conversationID}"]`)
if (conversationLI !== null) {
conversationLI.classList.add('has-unread-messages')
conversationLI.querySelector('a > div > p').textContent = message.content
conversationLI.querySelector('a > div > time').textContent = new Date(message.createdAt).toLocaleString()
return
}
let conversation
try {
conversation = await getConversation(message.conversationID)
conversation.lastMessage = message
} catch (err) {
console.error(err)
return
}
const conversationsOList = document.getElementById('conversations')
if (conversationsOList === null) {
return
}
conversationsOList.insertAdjacentElement('afterbegin', renderConversation(conversation))
}
function getConversation(id) {
return http.get('/api/conversations/' + id)
}
```
Every time a new message arrives, we query the DOM for the conversation item. If it's found, we add the `has-unread-messages` class to it and update the preview. If it's not found, it means the message is from a conversation created just now, so we do a GET request to `/api/conversations/{conversationID}` to fetch the conversation in which the message was created and prepend it to the conversation list.
* * *
That covers the home page 😊
In the next post, we'll code the conversation page.
[Source Code][10]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-home-page/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
[6]: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
[7]: https://nicolasparada.netlify.com/posts/go-messenger-access-page/
[8]: https://nicolasparada.netlify.com/img/go-messenger-home-page/conversation-form.png
[9]: https://nicolasparada.netlify.com/img/go-messenger-home-page/conversation-list.png
[10]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,269 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Conversation Page)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-conversation-page/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Conversation Page
======
This post is the 9th and last in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
* [Part 4: Messages][4]
* [Part 5: Realtime Messages][5]
* [Part 6: Development Login][6]
* [Part 7: Access Page][7]
* [Part 8: Home Page][8]
In this post we'll code the conversation page. This page is the chat between the two users. At the top we'll show info about the other participant, below it a list of the latest messages, and a message form at the bottom.
### Chat heading
![chat heading screenshot][9]
Let's start by creating the file `static/pages/conversation-page.js` with the following content:
```
import http from '../http.js'
import { navigate } from '../router.js'
import { avatar, escapeHTML } from '../shared.js'
export default async function conversationPage(conversationID) {
let conversation
try {
conversation = await getConversation(conversationID)
} catch (err) {
alert(err.message)
navigate('/', true)
return
}
const template = document.createElement('template')
template.innerHTML = `
<div>
<a href="/">← Back</a>
${avatar(conversation.otherParticipant)}
<span>${conversation.otherParticipant.username}</span>
</div>
<!-- message list here -->
<!-- message form here -->
`
const page = template.content
return page
}
function getConversation(id) {
return http.get('/api/conversations/' + id)
}
```
This page receives the conversation ID the router extracted from the URL.
First it does a GET request to `/api/conversations/{conversationID}` to get info about the conversation. In case of error, we show it and redirect back to `/`. Then we render info about the other participant.
### Message List
![message list screenshot][10]
We'll fetch the latest messages too, so we can display them.
```
let conversation, messages
try {
[conversation, messages] = await Promise.all([
getConversation(conversationID),
getMessages(conversationID),
])
}
```
Update the `conversationPage()` function to fetch the messages too. We use `Promise.all()` to do both requests at the same time.
```
function getMessages(conversationID) {
return http.get(`/api/conversations/${conversationID}/messages`)
}
```
A GET request to `/api/conversations/{conversationID}/messages` gets the latest messages of the conversation.
```
<ol id="messages"></ol>
```
Now, add that list to the markup.
```
const messagesOList = page.getElementById('messages')
for (const message of messages.reverse()) {
messagesOList.appendChild(renderMessage(message))
}
```
So we can append messages to the list. The API returns them newest first, so we reverse the array to display them in chronological order.
```
function renderMessage(message) {
const messageContent = escapeHTML(message.content)
const messageDate = new Date(message.createdAt).toLocaleString()
const li = document.createElement('li')
if (message.mine) {
li.classList.add('owned')
}
li.innerHTML = `
<p>${messageContent}</p>
<time>${messageDate}</time>
`
return li
}
```
Each message item displays the message content along with its timestamp. Using `.mine`, we append a different class to the item, so you can, for example, align the user's own messages to the right.
### Message Form
![message form screenshot][11]
```
<form id="message-form">
<input type="text" placeholder="Type something" maxlength="480" required>
<button>Send</button>
</form>
```
Add that form to the current markup.
```
page.getElementById('message-form').onsubmit = messageSubmitter(conversationID)
```
Attach an event listener to the “submit” event.
```
function messageSubmitter(conversationID) {
return async ev => {
ev.preventDefault()
const form = ev.currentTarget
const input = form.querySelector('input')
const submitButton = form.querySelector('button')
input.disabled = true
submitButton.disabled = true
try {
const message = await createMessage(input.value, conversationID)
input.value = ''
const messagesOList = document.getElementById('messages')
if (messagesOList === null) {
return
}
messagesOList.appendChild(renderMessage(message))
} catch (err) {
if (err.statusCode === 422) {
input.setCustomValidity(err.body.errors.content)
} else {
alert(err.message)
}
} finally {
input.disabled = false
submitButton.disabled = false
setTimeout(() => {
input.focus()
}, 0)
}
}
}
function createMessage(content, conversationID) {
return http.post(`/api/conversations/${conversationID}/messages`, { content })
}
```
We make use of [partial application][12] to have the conversation ID in the “submit” event handler. It takes the message content from the input and does a POST request to `/api/conversations/{conversationID}/messages` with it. Then it appends the newly created message to the list.
### Messages Subscription
To make it realtime, we'll subscribe to the message stream on this page too.
```
page.addEventListener('disconnect', subscribeToMessages(messageArriver(conversationID)))
```
Add that line in the `conversationPage()` function.
```
function subscribeToMessages(cb) {
return http.subscribe('/api/messages', cb)
}
function messageArriver(conversationID) {
return message => {
if (message.conversationID !== conversationID) {
return
}
const messagesOList = document.getElementById('messages')
if (messagesOList === null) {
return
}
messagesOList.appendChild(renderMessage(message))
readMessages(message.conversationID)
}
}
function readMessages(conversationID) {
return http.post(`/api/conversations/${conversationID}/read_messages`)
}
```
We also make use of partial application to have the conversation ID here.
When a new message arrives, we first check whether it's from this conversation. If it is, we append a message item to the list and do a POST request to `/api/conversations/{conversationID}/read_messages` to update the last time the participant read messages.
* * *
That concludes this series. The messenger app is now functional.
~~I'll add pagination on the conversation and message lists, plus user search, before sharing the source code. I'll update once it's ready, along with a hosted demo 👨‍💻~~
[Source Code][13] • [Demo][14]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-conversation-page/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
[6]: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
[7]: https://nicolasparada.netlify.com/posts/go-messenger-access-page/
[8]: https://nicolasparada.netlify.com/posts/go-messenger-home-page/
[9]: https://nicolasparada.netlify.com/img/go-messenger-conversation-page/heading.png
[10]: https://nicolasparada.netlify.com/img/go-messenger-conversation-page/list.png
[11]: https://nicolasparada.netlify.com/img/go-messenger-conversation-page/form.png
[12]: https://en.wikipedia.org/wiki/Partial_application
[13]: https://github.com/nicolasparada/go-messenger-demo
[14]: https://go-messenger-demo.herokuapp.com/

View File

@ -1,173 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (2018: Year in review)
[#]: via: (https://jvns.ca/blog/2018/12/23/2018--year-in-review/)
[#]: author: (Julia Evans https://jvns.ca/)
2018: Year in review
======
I wrote these in [2015][1] and [2016][2] and [2017][3] and it's always interesting to look back at them, so here's a summary of what went on in my side projects in 2018.
### ruby profiler!
At the beginning of this year I wrote [rbspy][4] (docs: <https://rbspy.github.io/>). It inspired a Python version called [py-spy][5] and a PHP profiler called [phpspy][6], both of which are excellent. I think py-spy in particular is [probably _better_][7] than rbspy which makes me really happy.
Writing a program that does something innovative (`top` for your Ruby program's functions!) and inspiring other people to make amazing new tools is something I'm really proud of.
### started a side business!
A very surprising thing that happened in 2018 is that I started a business! This is the website: <https://wizardzines.com/>, and I sell programming zines.
It's been astonishingly successful (it definitely made me enough money that I could have lived on just the revenue from the business this year), and I'm really grateful to everyone who's supported that work. I hope the zines have helped you. I always thought that it was impossible to make anywhere near as much money teaching people useful things as I can as a software developer, and now I think that's not true. I don't think that I'd _want_ to make that switch (I like working as a programmer!), but now I actually think that if I was serious about it and was interested in working on my business skills, I could probably make it work.
I don't really know what's next, but I plan to write at least one zine next year. I learned a few things about business this year, mainly from:
* [stephanie hurlburt's twitter][8]
* [amy hoy][9]
* the book [growing a business by paul hawken][10]
* seeing what joel hooks is doing with [egghead.io][11]
* a little from [indie hackers][12]
I used to think that sales / marketing had to be gross, but reading some of these business books made me think that it's actually possible to run a business by being honest & just building good things.
### work!
this is mostly about side projects, but a few things about work:
* I still have the same manager ([jay][13]). He's been really great to work with. The [help! i have a manager!][14] zine is secretly largely things I learned from working with him.
* my team made some big networking infrastructure changes and it went pretty well. I learned a lot about proxies/TLS and a little bit about C++.
* I mentored another intern, and the intern I mentored last year joined us full time!
When I go back to work I'm going to switch to working on something COMPLETELY DIFFERENT (writing code that sends messages to banks!) for 3 months. It's a lot closer to the company's core business, and I think it'll be neat to learn more about how financial infrastructure works.
I struggled a bit with understanding/defining my job this year. I wrote [What's a senior engineer's job?][15] about that, but I have not yet reached enlightenment.
### talks!
I gave 4 talks in 2018:
* [So you want to be a wizard][16] at StarCon
* [Building a Ruby profiler][17] at the Recurse Center's localhost series
* [Build Impossible Programs][18] in May at Deconstruct.
* [High Reliability Infrastructure Migrations][19] at Kubecon. I'm pretty happy about this talk because I've wanted to give a good talk about what I do at work for a long time and I think I finally succeeded. Previously when I gave talks about my work I think I fell into the trap of just describing what we do (“we do X Y Z” … “okay, so what?“). With this one, I think I was able to actually say things that were useful to other people.
In past years I've mostly given talks which can be summarized as “here are some cool tools” and “here is how to learn hard things”. This year I changed focus to giving talks about the actual work I do: there were two talks about building a Ruby profiler, and one about what I do at work (I spend a lot of time on infrastructure migrations!)
I'm not sure whether I'll give any talks in 2019. I travelled more than I wanted to in 2018, and to stay sane I ended up having to cancel, on relatively short notice, a talk I was planning to give, which wasn't good.
### podcasts!
I also experimented a bit with a new format: the podcast! These were basically all really fun! They don't take that long (about 2 hours total?).
* [Software Engineering Daily][20], on rbspy and how to use a profiler
* [FLOSS weekly][21], again about rbspy. They told me I'm the guest that asked _them_ the most questions, which I took as a compliment :)
* [CodeNewbie][22] on computer networking & how the Internet works
* [Hanselminutes with Scott Hanselman][23] on writing zines / teaching / learning
* [egghead.io][24], on making zines & running a business
what I learned about doing podcasts:
* It's really important to give the hosts a list of good questions to ask, and to be prepared to give good answers to those questions! I'm not a super polished podcast guest.
* you need a good microphone. At least one of these people told me I actually couldn't be on their podcast unless I had a good enough microphone, so I bought a [medium fancy microphone][25]. It wasn't too expensive and it's nice to have a better quality microphone! Maybe I will use it more to record audio/video at some point!
### !!Con
I co-organized [!!Con][26] for the 4th time; I ran sponsorships. It's always such a delight and the speakers are so great.
!!Con is expanding [to the west coast in 2019][27]. I'm not directly involved with that, but it's going to be amazing.
### blog posts!
I apparently wrote 54 blog posts in 2018. A couple of my favourites are [What's a senior engineer's job?][15], [How to teach yourself hard things][28], and [batch editing files with ed][29].
There were basically 4 themes in blogging for 2018:
* progress on the rbspy project while I was working on it ([this category][30])
* computer networking / infrastructure engineering (basically all I did at work this year was networking, though I didn't write about it as much as I might have)
* musings about zines / business / developer education, for instance [why sell zines?][31] and [who pays to educate developers?][32]
* a few of the usual “how do you learn things” / “how do you succeed at your job” posts as I figure things out about that, for instance [working remotely, 4 years in][33]
### a tiny inclusion project: a guide to performance reviews
[Last year][3], in addition to my actual job, I did a couple of projects at work towards helping make sure the performance/promotion process works well for folks: I collaborated with the amazing [karla][34] on the idea of a “brag document”, and redid our engineering levels.
This year, in the same vein, I wrote a document called the “Unofficial guide to the performance reviews”. A lot of folks said it helped them, but probably it's too early to celebrate. I think explaining to folks how the performance review process actually works and how to approach it is really valuable, and I might try to publish a more general version here at some point.
I like that I work at a place where it's possible/encouraged to do projects like this. I spend a relatively small amount of time on them (maybe I spent 15 hours on this one?) but it feels good to be able to make tiny steps towards building a better workplace from time to time. It's really hard to judge the results though!
### conclusions?
some things that worked in 2018:
* setting [boundaries][15] around what my job is
* doing open source work while being paid for it
* starting a side business
* doing small inclusion projects at work
* writing zines is very time consuming but I feel happy about the time I spent on that
* blogging is always great
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/12/23/2018--year-in-review/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/blog/2015/12/26/2015-year-in-review/
[2]: https://jvns.ca/blog/2016/12/21/2016--year-in-review/
[3]: https://jvns.ca/blog/2017/12/31/2017--year-in-review/
[4]: https://github.com/rbspy/rbspy
[5]: https://github.com/benfred/py-spy
[6]: https://github.com/adsr/phpspy/
[7]: https://jvns.ca/blog/2018/09/08/an-awesome-new-python-profiler--py-spy-/
[8]: https://twitter.com/sehurlburt
[9]: https://stackingthebricks.com/
[10]: https://www.amazon.com/Growing-Business-Paul-Hawken/dp/0671671642
[11]: https://egghead.io/
[12]: https://www.indiehackers.com/
[13]: https://twitter.com/jshirley
[14]: https://wizardzines.com/zines/manager/
[15]: https://jvns.ca/blog/senior-engineer/
[16]: https://www.youtube.com/watch?v=FBMC9bm-KuU
[17]: https://jvns.ca/blog/2018/04/16/rbspy-talk/
[18]: https://www.deconstructconf.com/2018/julia-evans-build-impossible-programs
[19]: https://www.youtube.com/watch?v=obB2IvCv-K0
[20]: https://softwareengineeringdaily.com/2018/06/05/profilers-with-julia-evans/
[21]: https://twit.tv/shows/floss-weekly/episodes/487
[22]: https://www.codenewbie.org/podcast/how-does-the-internet-work
[23]: https://hanselminutes.com/643/learning-how-to-be-a-wizard-programmer-with-julia-evans
[24]: https://player.fm/series/eggheadio-developer-chats-1728019/exploring-concepts-and-teaching-using-focused-zines-with-julia-evans
[25]: https://www.amazon.com/gp/product/B000EOPQ7E/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B000EOPQ7E&linkCode=as2&tag=diabeticbooks&linkId=ZBZBIVR4EB7V6JFL
[26]: http://bangbangcon.com
[27]: http://bangbangcon.com/west/
[28]: https://jvns.ca/blog/2018/09/01/learning-skills-you-can-practice/
[29]: https://jvns.ca/blog/2018/05/11/batch-editing-files-with-ed/
[30]: https://jvns.ca/categories/ruby-profiler/
[31]: https://jvns.ca/blog/2018/09/23/why-sell-zines/
[32]: https://jvns.ca/blog/2018/09/01/who-pays-to-educate-developers-/
[33]: https://jvns.ca/blog/2018/02/18/working-remotely--4-years-in/
[34]: https://karla.io/

View File

@ -1,235 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create an online store with this Java-based framework)
[#]: via: (https://opensource.com/article/19/1/scipio-erp)
[#]: author: (Paul Piper https://opensource.com/users/madppiper)
Create an online store with this Java-based framework
======
Scipio ERP comes with a large range of applications and functionality.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0)
So you want to sell products or services online, but either can't find fitting software or think customization would be too costly? [Scipio ERP][1] may be just what you are looking for.
Scipio ERP is a Java-based open source e-commerce framework that comes with a large range of applications and functionality. The project was forked from [Apache OFBiz][2] in 2014 with a clear focus on better customization and a more modern appeal. The e-commerce component is quite extensive and works in a multi-store setup, internationally, and with a wide range of product configurations, and it's also compatible with modern HTML frameworks. The software also provides standard applications for many other business cases, such as accounting, warehouse management, or sales force automation. It's all highly standardized and therefore easy to customize, which is great if you are looking for more than a virtual cart.
The system makes it very easy to keep up with modern web standards, too. All screens are constructed using the system's "[templating toolkit][3]," an easy-to-learn macro set that separates HTML from all applications. Because of it, every application is already standardized to the core. Sounds confusing? It really isn't—it all looks a lot like HTML, but you write a lot less of it.
### Initial setup
Before you get started, make sure you have Java 1.8 (or greater) SDK and a Git client installed. Got it? Great! Next, check out the master branch from GitHub:
```
git clone https://github.com/ilscipio/scipio-erp.git
cd scipio-erp
git checkout master
```
To set up the system, simply run **./install.sh** and select either option from the command line. Throughout development, it is best to stick to an **installation for development** (Option 1), which will also install a range of demo data. For professional installations, you can modify the initial config data ("seed data") so it will automatically set up the company and catalog data for you. By default, the system will run with an internal database, but it [can also be configured][4] with a wide range of relational databases such as PostgreSQL and MariaDB.
![Setup wizard][6]
Follow the setup wizard to complete your initial configuration.
Start the system with **./start.sh** and head over to **<https://localhost:8443/setup/>** to complete the configuration. If you installed with demo data, you can log in with username **admin** and password **scipio**. During the setup wizard, you can set up a company profile, accounting, a warehouse, your product catalog, your online store, and additional user profiles. Keep the website entries on the product store configuration screen for now. The system allows you to run multiple webstores with different underlying code; unless you want to do that, it is easiest to stick to the defaults.
Congratulations, you just installed Scipio ERP! Play around with the screens for a minute or two to get a feel for the functionality.
### Shortcuts
Before you jump into the customization, here are a few handy commands that will help you along the way:
* Create a shop-override: **./ant create-component-shop-override**
* Create a new component: **./ant create-component**
* Create a new theme component: **./ant create-theme**
* Create admin user: **./ant create-admin-user-login**
* Various other utility functions: **./ant -p**
* Utility to install & update add-ons: **./git-addons help**
Also, make a mental note of the following locations:
* Scripts to run Scipio as a service: **/tools/scripts/**
* Log output directory: **/runtime/logs**
* Admin application: **<https://localhost:8443/admin/>**
* E-commerce application: **<https://localhost:8443/shop/>**
Last, Scipio ERP structures all code in the following five major directories:
* Framework: framework-related sources, the application server, generic screens, and configurations
* Applications: core applications
* Addons: third-party extensions
* Themes: modifies the look and feel
* Hot-deploy: your own components
Aside from a few configurations, you will be working within the hot-deploy and themes directories.
### Webstore customizations
To really make the system your own, start thinking about [components][7]. Components are a modular approach to override, extend, and add to the system. Think of components as self-contained web modules that capture information on databases ([entity][8]), functions ([services][9]), screens ([views][10]), [events and actions][11], and web applications. Thanks to components, you can add your own code while remaining compatible with the original sources.
Run **./ant create-component-shop-override** and follow the steps to create your webstore component. A new directory will be created inside of the hot-deploy directory, which extends and overrides the original e-commerce application.
![component directory structure][13]
A typical component directory structure.
Your component will have the following directory structure:
* config: configurations
* data: seed data
* entitydef: database table definitions
* script: Groovy script location
* servicedef: service definitions
* src: Java classes
* webapp: your web application
* widget: screen definitions
Additionally, the **ivy.xml** file allows you to add Maven libraries to the build process and the **ofbiz-component.xml** file defines the overall component and web application structure. Apart from the obvious, you will also find a **controller.xml** file inside the web apps' **WEB-INF** directory. This allows you to define request entries and connect them to events and screens. For screens alone, you can also use the built-in CMS functionality, but stick to the core mechanics first. Familiarize yourself with **/applications/shop/** before introducing changes.
#### Adding custom screens
Remember the [templating toolkit][3]? You will find it used on every screen. Think of it as a set of easy-to-learn macros that structure all content. Here's an example:
```
<@section title="Title">
    <@heading id="slider">Slider</@heading>
    <@row>
        <@cell columns=6>
            <@slider id="" class="" controls=true indicator=true>
                <@slide link="#" image="https://placehold.it/800x300">Just some content…</@slide>
                <@slide title="This is a title" link="#" image="https://placehold.it/800x300"></@slide>
            </@slider>
        </@cell>
        <@cell columns=6>Second column</@cell>
    </@row>
</@section>
```
Not too difficult, right? Meanwhile, themes contain the HTML definitions and styles. This hands the power over to your front-end developers, who can define the output of each macro and otherwise stick to their own build tools for development.
Let's give it a quick try. First, define a request on your own webstore. You will modify the code for this. A built-in CMS is also available at **<https://localhost:8443/cms/>** , which allows you to create new templates and screens in a much more efficient way. It is fully compatible with the templating toolkit and comes with example templates that can be adopted to your preferences. But since we are trying to understand the system here, let's go with the more complicated way first.
Open the **[controller.xml][14]** file inside of your shop's webapp directory. The controller keeps track of request events and performs actions accordingly. The following will create a new request under **/shop/test**:
```
<!-- Request Mappings -->
<request-map uri="test">
    <security https="true" auth="false"/>
    <response name="success" type="view" value="test"/>
</request-map>
```
You can define multiple responses and, if you want, you could use an event or a service call inside the request to determine which response you may want to use. I opted for a response of type "view." A view is a rendered response; other types are request-redirects, forwards, and the like. The system comes with various renderers and allows you to determine the output later; to do so, add the following:
```
<!-- View Mappings -->
<view-map name="test" type="screen" page="component://mycomponent/widget/CommonScreens.xml#test"/>
```
Replace **mycomponent** with your own component name. Then you can define your very first screen by adding the following inside the **<screens>** tags within the **widget/CommonScreens.xml** file:
```
<screen name="test">
        <section>
            <actions>
            </actions>
            <widgets>
                <decorator-screen name="CommonShopAppDecorator" location="component://shop/widget/CommonScreens.xml">
                    <decorator-section name="body">
                        <platform-specific><html><html-template location="component://mycomponent/webapp/mycomponent/test/test.ftl"/></html></platform-specific>
                    </decorator-section>
                </decorator-screen>
            </widgets>
        </section>
    </screen>
```
Screens are actually quite modular and consist of multiple elements ([widgets, actions, and decorators][15]). For the sake of simplicity, leave this as it is for now, and complete the new webpage by adding your very first templating toolkit file. For that, create a new **webapp/mycomponent/test/test.ftl** file and add the following:
```
<@alert type="info">Success!</@alert>
```
![Custom screen][17]
A custom screen.
Open **<https://localhost:8443/shop/control/test/>** and marvel at your own accomplishments.
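If you prefer the terminal, you can also verify the new request from there. Here is a quick sketch using curl; the **-k** flag skips certificate validation, since the development server runs with a self-signed certificate:

```
curl -k https://localhost:8443/shop/control/test
```

If everything is wired up correctly, the response contains the rendered HTML of your new screen.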
#### Custom themes
Modify the look and feel of the shop by creating your very own theme. All themes can be found as components inside of the themes folder. Run **./ant create-theme** to add your own.
![theme component layout][19]
A typical theme component layout.
Here's a list of the most important directories and files:
* Theme configuration: **data/*ThemeData.xml**
* Theme-specific wrapping HTML: **includes/*.ftl**
* Templating Toolkit HTML definition: **includes/themeTemplate.ftl**
* CSS class definition: **includes/themeStyles.ftl**
* CSS framework: **webapp/theme-title/***
Take a quick look at the Metro theme in the toolkit; it uses the Foundation CSS framework and makes use of all the things above. Afterwards, set up your own theme inside your newly constructed **webapp/theme-title** directory and start developing. The Foundation-shop theme is a very simple shop-specific theme implementation that you can use as a basis for your own work.
Voila! You have set up your own online store and are ready to customize!
![Finished Scipio ERP shop][21]
A finished shop based on Scipio ERP.
### What's next?
Scipio ERP is a powerful framework that simplifies the development of complex e-commerce applications. For a more complete understanding, check out the project [documentation][7], try the [online demo][22], or [join the community][23].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/1/scipio-erp
作者:[Paul Piper][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/madppiper
[b]: https://github.com/lujun9972
[1]: https://www.scipioerp.com
[2]: https://ofbiz.apache.org/
[3]: https://www.scipioerp.com/community/developer/freemarker-macros/
[4]: https://www.scipioerp.com/community/developer/installation-configuration/configuration/#database-configuration
[6]: https://opensource.com/sites/default/files/uploads/setup_step5_sm.jpg (Setup wizard)
[7]: https://www.scipioerp.com/community/developer/architecture/components/
[8]: https://www.scipioerp.com/community/developer/entities/
[9]: https://www.scipioerp.com/community/developer/services/
[10]: https://www.scipioerp.com/community/developer/views-requests/
[11]: https://www.scipioerp.com/community/developer/events-actions/
[13]: https://opensource.com/sites/default/files/uploads/component_structure.jpg (component directory structure)
[14]: https://www.scipioerp.com/community/developer/views-requests/request-controller/
[15]: https://www.scipioerp.com/community/developer/views-requests/screen-widgets-decorators/
[17]: https://opensource.com/sites/default/files/uploads/success_screen_sm.jpg (Custom screen)
[19]: https://opensource.com/sites/default/files/uploads/theme_structure.jpg (theme component layout)
[21]: https://opensource.com/sites/default/files/uploads/finished_shop_1_sm.jpg (Finished Scipio ERP shop)
[22]: https://www.scipioerp.com/demo/
[23]: https://forum.scipioerp.com/

View File

@ -1,144 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (!!Con 2019: submit a talk!)
[#]: via: (https://jvns.ca/blog/2019/02/16/--con-2019--submit-a-talk-/)
[#]: author: (Julia Evans https://jvns.ca/)
!!Con 2019: submit a talk!
======
As some of you might know, for the last 5 years I've been one of the organizers for a conference called [!!Con][1]. This year it's going to be held on **May 11-12 in NYC**.
The submission deadline is **Sunday, March 3** and you can [submit a talk here][2].
(we also expanded to the west coast this year: [!!Con West][3] is next week!! I'm not on the !!Con West team since I live on the east coast but they're doing amazing work, I have a ticket, and I'm so excited for there to be more !!Con in the world)
### !!Con is about the joy, excitement, and surprise of computing
Computers are AMAZING. You can make programs that seem like magic, computer science has all kind of fun and surprising tidbits, there are all kinds of ways to make really cool art with computers, the systems that we use every day (like DNS!) are often super fascinating, and sometimes our computers do REALLY STRANGE THINGS and it's very fun to figure out why.
!!Con is about getting together for 2 days to share what we all love about computing. The only rule of !!Con talks is that the talk has to have an exclamation mark in the title :)
We originally considered calling !!Con ExclamationMarkCon but that was too unwieldy so we went with !!Con :).
### !!Con is inclusive
The other big thing about !!Con is that we think computing should include everyone. To make !!Con a space where everyone can participate, we
  * have open captioning for all talks (so that people who can't hear well can read the text of the talk as it's happening). This turns out to be great for LOTS of people: if you just weren't paying attention for a second, you can look at the live transcript to see what you missed!
  * pay our speakers & pay for speaker travel
* have a code of conduct (of course)
* use the RC [social rules][4]
* make sure our washrooms work for people of all genders
  * let people specify on their badges if they don't want photos taken of them
* do a lot of active outreach to make sure our set of speakers is diverse
### past !!Con talks
I think maybe the easiest way to explain !!Con if you haven't been is through the talk titles! Here are a few arbitrarily chosen talks from past !!Cons:
* [Four Fake Filesystems!][5]
  * [Islamic Geometry: Hankin's Polygons in Contact Algorithm!!!][6]
  * [Don't know about you, but I'm feeling like SHA-2!: Checksumming with Taylor Swift][7]
* [MissingNo., my favourite Pokémon!][8]
* [Music! Programming! Arduino! (Or: Building Electronic Musical Interfaces to Create Awesome)][9]
* [How I Code and Use a Computer at 1,000 WPM!!][10]
* [The emoji that Killed Chrome!!][11]
* [We built a map to aggregate real-time flood data in under two days!][12]
* [PUSH THE BUTTON! 🔴 Designing a fun game where the only input is a BIG RED BUTTON! 🔴 !!!][13]
* [Serious programming with jq?! A practical and purely functional programming language!][14]
* [I wrote to a dead address in a deleted PDF and now I know where all the airplanes are!!][15]
* [Making Mushrooms Glow!][16]
* [HDR Photography in Microsoft Excel?!][17]
* [DHCP: ITS MOSTLY YELLING!!][18]
* [Lossy text compression, for some reason?!][19]
* [Plants are Recursive!!: Using L-Systems to Generate Realistic Weeds][20]
If you want to see more (or get an idea of what !!Con talk descriptions usually look like), here's every past year of the conference:
* 2018: [talk descriptions][21] and [recordings][22]
* 2017: [talk descriptions][23] and [recordings][24]
* 2016: [talk descriptions][25] and [recordings][26]
* 2015: [talk descriptions][27] and [recordings][28]
* 2014: [talk descriptions][29] and [recordings][30]
### this year you can also submit a play / song / performance!
One difference from previous !!Cons is that if you want to submit a non-talk-talk to !!Con this year (like a play!), you can! I'm very excited to see what people come up with. For more of that see [Expanding the !!Con aesthetic][31].
### all talks are reviewed anonymously
One big choice that we've made is to review all talks anonymously. This means that we'll review your talk the same way whether you've never given a talk before or if you're an internationally recognized public speaker. I love this because many of our best talks are from first time speakers or people who I'd never heard of before, and I think anonymous review makes it easier to find great people who aren't well known.
### writing a good outline is important
We can't rely on someone's reputation to determine if they'll give a good talk, but we do need a way to see that people have a plan for how to present their material in an engaging way. So we ask everyone to give a somewhat detailed outline explaining how they'll spend their 10 minutes. Some people do it minute-by-minute and some people just say "I'll explain X, then Y, then Z, then W".
Lindsey Kuper wrote some good advice about writing a clear !!Con outline, including some examples of really good outlines, [which you can see here][32].
### We're looking for sponsors
!!Con is pay-what-you-can (if you can't afford a $300 conference ticket, we're the conference for you!). Because of that, we rely on our incredible sponsors (companies who want to build an inclusive future for tech with us!) to help make up the difference so that we can pay our speakers for their amazing work, pay for speaker travel, have open captioning, and everything else that makes !!Con the amazing conference it is.
If you love !!Con, a huge way you can help support the conference is to ask your company to sponsor us! Here's our [sponsorship page][33] and you can email me at [[email protected]][34] if you're interested.
### hope to see you there ❤
I've met so many fantastic people through !!Con, and it brings me a lot of joy every year. The thing that makes !!Con great is all the amazing people who come to share what they're excited about every year, and I hope you'll be one of them.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/02/16/--con-2019--submit-a-talk-/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: http://bangbangcon.com
[2]: http://bangbangcon.com/give-a-talk.html
[3]: http://bangbangcon.com/west/
[4]: https://www.recurse.com/social-rules
[5]: https://youtube.com/watch?v=pfHpDDXJQVg
[6]: https://youtube.com/watch?v=ld4gpQnaziU
[7]: https://youtube.com/watch?v=1QgamEwwPro
[8]: https://youtube.com/watch?v=yX7tDROZUt8
[9]: https://youtube.com/watch?v=67Y-wH0FJFg
[10]: https://youtube.com/watch?v=G1r55efei5c
[11]: https://youtube.com/watch?v=UE-fJjMasec
[12]: https://youtube.com/watch?v=hfatYo2J8gY
[13]: https://youtube.com/watch?v=KqEc2Ek4GzA
[14]: https://youtube.com/watch?v=PS_9pyIASvQ
[15]: https://youtube.com/watch?v=FhVob_sRqQk
[16]: https://youtube.com/watch?v=T75FvUDirNM
[17]: https://youtube.com/watch?v=bkQJdaGGVM8
[18]: https://youtube.com/watch?v=enRY9jd0IJw
[19]: https://youtube.com/watch?v=meovx9OqWJc
[20]: https://youtube.com/watch?v=0eXg4B1feOY
[21]: http://bangbangcon.com/2018/speakers.html
[22]: http://bangbangcon.com/2018/recordings.html
[23]: http://bangbangcon.com/2017/speakers.html
[24]: http://bangbangcon.com/2017/recordings.html
[25]: http://bangbangcon.com/2016/speakers.html
[26]: http://bangbangcon.com/2016/recordings.html
[27]: http://bangbangcon.com/2015/speakers.html
[28]: http://bangbangcon.com/2015/recordings.html
[29]: http://bangbangcon.com/2014/speakers.html
[30]: http://bangbangcon.com/2014/recordings.html
[31]: https://organicdonut.com/2019/01/expanding-the-con-aesthetic/
[32]: http://composition.al/blog/2017/06/30/how-to-write-a-timeline-for-a-bangbangcon-talk-proposal/
[33]: http://bangbangcon.com/sponsors
[34]: https://jvns.ca/cdn-cgi/l/email-protection

View File

@ -1,105 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Linux kernel: Top 5 innovations)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-top-5-innovations)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
The Linux kernel: Top 5 innovations
======
Want to know what the actual (not buzzword) innovations are when it comes to the Linux kernel? Read on.
![Penguin with green background][1]
The word _innovation_ gets bandied about in the tech industry almost as much as _revolution_, so it can be difficult to differentiate hyperbole from something that's actually exciting. The Linux kernel has been called innovative, but then again it's also been called the biggest hack in modern computing, a monolith in a micro world.
Setting aside marketing and modeling, Linux is arguably the most popular kernel of the open source world, and it's introduced some real game-changers over its nearly 30-year life span.
### Cgroups (2.6.24)
Back in 2007, Paul Menage and Rohit Seth got the esoteric [_control groups_ (cgroups)][2] feature added to the kernel (the current implementation of cgroups is a rewrite by Tejun Heo). This new technology was initially used as a way to ensure, essentially, quality of service for a specific set of tasks.
For example, you could create a control group definition (cgroup) for all tasks associated with your web server, another cgroup for routine backups, and yet another for general operating system requirements. You could then control a percentage of resources for each cgroup, such that your OS and web server gets the bulk of system resources while your backup processes have access to whatever is left.
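As a rough sketch of what that looks like through today's cgroup-v2 filesystem interface, the following might carve out a CPU share for such a group; the group name **webserver** is made up, and this assumes the cpu controller is enabled and that you have root privileges:

```
# Create a new cgroup (hypothetical name).
sudo mkdir /sys/fs/cgroup/webserver

# Allow the group at most 50ms of CPU time per 100ms period, i.e., half a core.
echo "50000 100000" | sudo tee /sys/fs/cgroup/webserver/cpu.max

# Move the current shell into the group; its children inherit the limit.
echo $$ | sudo tee /sys/fs/cgroup/webserver/cgroup.procs
```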
What cgroups has become most famous for, though, is its role as the technology driving the cloud today: containers. In fact, cgroups were originally named [process containers][3]. It was no great surprise when they were adopted by projects like [LXC][4], [CoreOS][5], and Docker.
The floodgates being opened, the term _containers_ justly became synonymous with Linux, and the concept of microservice-style cloud-based "apps" quickly became the norm. These days, it's hard to get away from cgroups; they're so prevalent. Every large-scale infrastructure (and probably your laptop, if you run Linux) takes advantage of cgroups in a meaningful way, making your computing experience more manageable and more flexible than ever.
For example, you might already have installed [Flathub][6] or [Flatpak][7] on your computer, or maybe you've started using [Kubernetes][8] and/or [OpenShift][9] at work. Regardless, if the term "containers" is still hazy for you, you can gain a hands-on understanding of containers from [Behind the scenes with Linux containers][10].
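You can also see this for yourself: every process on a modern Linux system already lives inside a cgroup. For example:

```
# Show which cgroup(s) the current shell belongs to.
cat /proc/self/cgroup

# On systemd-based systems, print the whole cgroup hierarchy as a tree.
systemd-cgls
```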
### LKMM (4.17)
In 2018, the hard work of Jade Alglave, Alan Stern, Andrea Parri, Luc Maranget, Paul McKenney, and several others, got merged into the mainline Linux kernel to provide formal memory models. The Linux Kernel Memory [Consistency] Model (LKMM) subsystem is a set of tools describing the Linux memory coherency model, as well as producing _litmus tests_ (**klitmus**, specifically) for testing.
As systems grow more complex in physical design (more CPU cores, larger caches and more RAM, and so on), it becomes harder for them to know which address space is required by which CPU, and when. For example, if CPU0 needs to write data to a shared variable in memory, and CPU1 needs to read that value, then CPU0 must write before CPU1 attempts to read. Similarly, if values are written in one order to memory, then there's an expectation that they are also read in that same order, regardless of which CPU or CPUs are doing the reading.
Even on a single CPU, memory management requires a specific task order. A simple action such as **x = y** requires a CPU to load the value of **y** from memory, and then store that value in **x**. Placing the value stored in **y** into the **x** variable cannot occur _before_ the CPU has read the value from memory. There are also address dependencies: **x[n] = 6** requires that **n** is loaded before the CPU can store the value of six.
LKMM helps identify and trace these memory patterns in code. It does this in part with a tool called **herd**, which defines the constraints imposed by a memory model (in the form of logical axioms), and then enumerates all possible outcomes consistent with these constraints.
### Low-latency patch (2.6.38)
Long ago, in the days before 2011, if you wanted to do "serious" [multimedia work on Linux][11], you had to obtain a low-latency kernel. This mostly applied to [audio recording][12] while adding lots of real-time effects (such as singing into a microphone and adding reverb, and hearing your voice in your headset with no noticeable delay). There were distributions, such as [Ubuntu Studio][13], that reliably provided such a kernel, so in practice it wasn't much of a hurdle, just a significant caveat when choosing your distribution as an artist.
However, if you weren't using Ubuntu Studio, or you had some need to update your kernel before your distribution got around to it, you had to go to the rt-patches web page, download the kernel patches, apply them to your kernel source code, compile, and install manually.
And then, with the release of kernel version 2.6.38, this process was all over. The Linux kernel suddenly, as if by magic, had low-latency code (according to benchmarks, latency decreased by a factor of 10, at least) built-in by default. No more downloading patches, no more compiling. Everything just worked, and all because of a small 200-line patch implemented by Mike Galbraith.
For open source multimedia artists the world over, it was a game-changer. Things got so good from 2011 on that in 2016, I challenged myself to [build a Digital Audio Workstation (DAW) on a Raspberry Pi v1 (model B)][14] and found that it worked surprisingly well.
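If you are curious whether your running kernel was built with these preemption options, you can usually check the config file your distribution installs alongside it; the path below is a common convention, but it varies by distribution:

```
grep PREEMPT /boot/config-"$(uname -r)"
```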
### RCU (2.5)
RCU, or Read-Copy-Update, is a synchronization mechanism that allows multiple processor threads to read from shared memory. It does this by deferring updates, while also marking the data as updated, to ensure that the data's consumers read the latest version. Effectively, this means that reads happen concurrently with updates.
The typical RCU cycle is a little like this:
1. Remove pointers to data to prevent other readers from referencing it.
2. Wait for readers to complete their critical processes.
3. Reclaim the memory space.
Dividing the update stage into removal and reclamation phases means the updater performs the removal immediately while deferring reclamation until all active readers are complete (either by blocking them or by registering a callback to be invoked upon completion).
While the concept of read-copy-update was not invented for the Linux kernel, its implementation in Linux is a defining example of the technology.
### Collaboration (0.01)
The final answer to the question of what the Linux kernel innovated will always be, above all else, collaboration. Call it good timing, call it technical superiority, call it hackability, or just call it open source, but the Linux kernel and the many projects that it enabled are a glowing example of collaboration and cooperation.
And it goes well beyond just the kernel. People from all walks of life have contributed to open source, arguably _because_ of the Linux kernel. Linux was, and remains to this day, a major force of [Free Software][15], inspiring users to bring their code, art, ideas, or just themselves, to a global, productive, and diverse community of humans.
### What's your favorite innovation?
This list is biased toward my own interests: containers, non-uniform memory access (NUMA), and multimedia. I've surely left your favorite kernel innovation off the list. Tell me about it in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/linux-kernel-top-5-innovations
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://en.wikipedia.org/wiki/Cgroups
[3]: https://lkml.org/lkml/2006/10/20/251
[4]: https://linuxcontainers.org
[5]: https://coreos.com/
[6]: http://flathub.org
[7]: http://flatpak.org
[8]: http://kubernetes.io
[9]: https://www.redhat.com/sysadmin/learn-openshift-minishift
[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
[11]: http://slackermedia.info
[12]: https://opensource.com/article/17/6/qtractor-audio
[13]: http://ubuntustudio.org
[14]: https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker
[15]: http://fsf.org

View File

@ -1,103 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (qfzy1233)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Change Themes in Linux Mint)
[#]: via: (https://itsfoss.com/install-themes-linux-mint/)
[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)
How to Change Themes in Linux Mint
======
Using Linux Mint is, from the start, a unique experience thanks to its main desktop environment: Cinnamon. This is one of the main [reasons why I love Linux Mint][1].
Since Mint's dev team [started to take design more seriously][2], the "Themes" applet became an important way not only to choose new themes, icons, buttons, window borders and mouse pointers, but also to install new themes directly from it. Interested? Let's jump into it.
### How to change themes in Linux Mint
Search for themes in the Menu and open the Themes applet.
![Theme Applet provides an easy way of installing and changing themes][3]
In the applet there's an "Add/Remove" button. Pretty simple, huh? Clicking on it, you can browse the themes from Cinnamon Spices (Cinnamon's official addons repository), ordered by popularity.
![Installing new themes in Linux Mint Cinnamon][4]
To install one, all you need to do is click on your preferred theme and wait for it to download. After that, the theme will be available in the "Desktop" option on the first page of the applet. Just double click on one of the installed themes to start using it.
![Changing themes in Linux Mint Cinnamon][5]
Heres the default Linux Mint look:
![Linux Mint Default Theme][6]
And heres after I change the theme:
![Linux Mint with Carta Theme][7]
All the themes are also available on the Cinnamon Spices site, with more information and bigger screenshots, so you can take a better look at how your system will look.
[Browse Cinnamon Themes][8]
### Installing third party themes in Linux Mint
_“I saw this amazing theme on another site and it is not available at Cinnamon Spices…”_
Cinnamon Spices has a good collection of themes, but you'll still find that the theme you saw somewhere else is not available on the official Cinnamon website.
Well, it would be nice if there was another way, huh? You might imagine that there is (I mean… obviously there is). So, first things first: there are other websites where you and I can find new cool themes.
I recommend going to Cinnamon Look and browsing the themes there. If you like something, download it.
[Get more themes at Cinnamon Look][9]
After the preferred theme is downloaded, you will have a compressed file with everything you need for the installation. Extract it and save it to ~/.themes. Confused? The "~" in the file path stands for your home folder: /home/{YOURUSER}/.themes.
So go to your Home directory. Press Ctrl+H to [show hidden files in Linux][11]. If you don't see a .themes folder, create a new one and name it .themes. Remember that the dot at the beginning of the folder name is important.
Copy the extracted theme folder from your Downloads directory to the .themes folder in your Home.
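If you prefer the terminal, the same steps might look like this; the archive name is a placeholder for whatever file you actually downloaded:

```
# Create the hidden themes folder if it does not exist yet.
mkdir -p ~/.themes

# Extract the downloaded theme into it (use unzip instead for .zip archives).
tar -xf ~/Downloads/some-theme.tar.xz -C ~/.themes
```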
After that, look for the installed theme in the applet mentioned above.
Note
Remember that themes must be made to work on Cinnamon: even though Cinnamon is a fork of GNOME, not all themes made for GNOME work on Cinnamon.
Changing the theme is just one part of Cinnamon customization. You can also [change the looks of Linux Mint by changing the icons][12].
I hope you now know how to change themes in Linux Mint. Which theme are you going to use?
### João Gondim
Linux enthusiast from Brasil.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-themes-linux-mint/
作者:[It's FOSS Community][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/tiny-features-linux-mint-cinnamon/
[2]: https://itsfoss.com/linux-mint-new-design/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-1.jpg?resize=800%2C625&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-2.jpg?resize=800%2C625&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-3.jpg?resize=800%2C450&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-mint-default-theme.jpg?resize=800%2C450&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-mint-carta-theme.jpg?resize=800%2C450&ssl=1
[8]: https://cinnamon-spices.linuxmint.com/themes
[9]: https://www.cinnamon-look.org/
[11]: https://itsfoss.com/hide-folders-and-show-hidden-files-in-ubuntu-beginner-trick/
[12]: https://itsfoss.com/install-icon-linux-mint/

View File

@ -1,69 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to put an HTML page on the internet)
[#]: via: (https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/)
[#]: author: (Julia Evans https://jvns.ca/)
How to put an HTML page on the internet
======
One thing I love about the internet is that it's SO EASY to put static HTML websites on the internet. Someone asked me today how to do it, so I thought I'd write down how really quickly!
### just an HTML page
All of my sites are just static HTML and CSS. My web design skills are relatively minimal (<https://wizardzines.com> is the most complicated site I've developed on my own), so keeping all my internet sites relatively simple means that I have some hope of being able to make changes / fix things without spending a billion hours on it.
So we're going to take as minimal of an approach as possible in this blog post: just one HTML page.
### the HTML page
The website we're going to put on the internet is just one file, called `index.html`. You can find it at <https://github.com/jvns/website-example>, which is a Github repository with exactly one file in it.
The HTML file has some CSS in it to make it look a little less boring, which is partly copied from <https://example.com>.
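If you'd rather start from scratch than copy mine, a minimal page of your own could be as small as this; it's just a sketch, not the exact file in the repo:

```
cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>my website</title>
  <style>
    body { max-width: 600px; margin: 40px auto; font-family: sans-serif; }
  </style>
</head>
<body>
  <h1>hello, internet!</h1>
  <p>this whole site is one HTML file.</p>
</body>
</html>
EOF
```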
### how to put the HTML page on the internet
Here are the steps:
1. sign up for a [Neocities][1] account
2. copy the index.html into the index.html in your neocities site
3. done
The index.html page above is on the internet at [julia-example-website.neocities.org][2]; if you view source, you'll see that it's the same HTML as in the github repo.
I think this is probably the simplest way to put an HTML page on the internet (and it's a throwback to Geocities, which is how I made my first website in 2003) :). I also like that Neocities (like [glitch][3], which I also love) is about experimentation and learning and having fun.
### other options
This is definitely not the only easy way: Github pages and Gitlab pages and Netlify will all automatically publish a site when you push to a Git repository, and they're all very easy to use (just connect them to your github repository and you're done). I personally use the Git repository approach because not having things in Git makes me nervous: I like to know what changes to my website I'm actually pushing. But I think if you just want to put an HTML site on the internet for the first time and play around with HTML/CSS, Neocities is a really nice way to do it.
If you want to actually use your website for a Real Thing and not just to play around, you probably want to buy a domain and link it to your website so that you can change hosting providers in the future, but that is a bit less simple.
### this is a good possible jumping off point for learning HTML
If you are a person who is comfortable editing files in a Git repository but wants to practice HTML/CSS, I think this is a fun way to put a website on the internet and play around! I really like the simplicity of it: there's literally just one file, so there's no fancy extra magic to get in the way of understanding what's going on.
There are also a bunch of ways to complicate/extend this; this blog, for example, is actually generated with [Hugo][4], which generates a bunch of HTML files that then go on the internet. But it's always nice to start with the basics.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://neocities.org/
[2]: https://julia-example-website.neocities.org/
[3]: https://glitch.com
[4]: https://gohugo.io/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (qfzy1233)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,102 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An introduction to Virtual Machine Manager)
[#]: via: (https://opensource.com/article/19/9/introduction-virtual-machine-manager)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
An introduction to Virtual Machine Manager
======
Virt-manager provides a full range of options for spinning up virtual machines on Linux.
![A person programming][1]
In my [series][2] about [GNOME Boxes][3], I explained how Linux users can quickly spin up virtual machines on their desktop without much fuss. Boxes is ideal for creating virtual machines in a pinch when a simple configuration is all you need.
But if you need to configure more detail in your virtual machine, you need a tool that provides a full range of options for disks, network interface cards (NICs), and other hardware. This is where [Virtual Machine Manager][4] (virt-manager) comes in. If you don't see it in your applications menu, you can install it from your package manager or via the command line:
* On Fedora: **sudo dnf install virt-manager**
* On Ubuntu: **sudo apt install virt-manager**
Once it's installed, you can launch it from its application menu icon or from the command line by entering **virt-manager**.
![Virtual Machine Manager's main screen][5]
To demonstrate how to create a virtual machine using virt-manager, I'll go through the steps to set one up for Red Hat Enterprise Linux 8.
To start, click **File** then **New Virtual Machine**. Virt-manager's developers have thoughtfully titled each step of the process (e.g., Step 1 of 5) to make it easy. Click **Local install media** and **Forward**.
![Step 1 virtual machine creation][6]
On the next screen, browse to select the ISO file for the operating system you want to install. (My RHEL 8 image is located in my Downloads directory.) Virt-manager automatically detects the operating system.
![Step 2 Choose the ISO File][7]
In Step 3, you can specify the virtual machine's memory and CPU. The defaults are 1,024MB memory and one CPU.
![Step 3 Set CPU and Memory][8]
I want to give RHEL ample room to run—and the hardware I'm using can accommodate it—so I'll increase them (respectively) to 4,096MB and two CPUs.
The next step configures storage for the virtual machine; the default setting is a 10GB disk image. (I'll keep this setting, but you can adjust it for your needs.) You can also choose an existing disk image or create one in a custom location.
![Step 4 Configure VM Storage][9]
Step 5 is the place to name your virtual machine and click Finish. This is equivalent to creating a virtual machine or a Box in GNOME Boxes. While it's technically the last step, you have several options (as you can see in the screenshot below). Since the advantage of virt-manager is the ability to customize a virtual machine, I'll check the box labeled **Customize configuration before install** before I click **Finish**.
Since I chose to customize the configuration, virt-manager opens a screen displaying a bunch of devices and settings. This is the fun part!
Here you have another chance to name the virtual machine. In the list on the left, you can view details on various aspects, such as CPU, memory, disks, controllers, and many other items. For example, I can click on **CPUs** to verify the change I made in Step 3.
![Changing the CPU count][10]
I can also confirm the amount of memory I set.
When installing a VM to run as a server, I usually disable or remove its sound capability. To do so, select **Sound** and click **Remove** or right-click on **Sound** and choose **Remove Hardware**.
You can also add hardware with the **Add Hardware** button at the bottom. This brings up the **Add New Virtual Hardware** screen where you can add additional storage devices, memory, sound, etc. It's like having access to a very well-stocked (if virtual) computer hardware warehouse.
![The Add New Hardware screen][11]
Once you are happy with your VM configuration, click **Begin Installation**, and the system will boot and begin installing your specified operating system from the ISO.
![Begin installing the OS][12]
Once it completes, it reboots, and your new VM is ready for use.
![Red Hat Enterprise Linux 8 running in VMM][13]
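If you ever want to script this instead of clicking through the wizard, the same libvirt back end can be driven from the command line with **virt-install**, which is usually packaged alongside virt-manager. Here is a rough sketch of the VM configured above; the ISO path is an assumption:

```
virt-install \
  --name rhel8 \
  --memory 4096 \
  --vcpus 2 \
  --disk size=10 \
  --cdrom ~/Downloads/rhel-8.0-x86_64-dvd.iso
```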
Virtual Machine Manager is a powerful tool for desktop Linux users. It is open source and an excellent alternative to proprietary and closed virtualization products.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/introduction-virtual-machine-manager
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb (A person programming)
[2]: https://opensource.com/sitewide-search?search_api_views_fulltext=GNOME%20Box
[3]: https://wiki.gnome.org/Apps/Boxes
[4]: https://virt-manager.org/
[5]: https://opensource.com/sites/default/files/1-vmm_main_0.png (Virtual Machine Manager's main screen)
[6]: https://opensource.com/sites/default/files/2-vmm_step1_0.png (Step 1 virtual machine creation)
[7]: https://opensource.com/sites/default/files/3-vmm_step2.png (Step 2 Choose the ISO File)
[8]: https://opensource.com/sites/default/files/4-vmm_step3default.png (Step 3 Set CPU and Memory)
[9]: https://opensource.com/sites/default/files/6-vmm_step4.png (Step 4 Configure VM Storage)
[10]: https://opensource.com/sites/default/files/9-vmm_customizecpu.png (Changing the CPU count)
[11]: https://opensource.com/sites/default/files/11-vmm_addnewhardware.png (The Add New Hardware screen)
[12]: https://opensource.com/sites/default/files/12-vmm_rhelbegininstall.png
[13]: https://opensource.com/sites/default/files/13-vmm_rhelinstalled_0.png (Red Hat Enterprise Linux 8 running in VMM)

View File

@ -1,352 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Find and Replace a String in File Using the sed Command in Linux)
[#]: via: (https://www.2daygeek.com/linux-sed-to-find-and-replace-string-in-files/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How to Find and Replace a String in File Using the sed Command in Linux
======
When you are working on text files, you may need to find and replace a string in a file.
This can be done with either the sed command or the awk command in Linux, but sed is the tool most commonly used to replace text in a file.
In this tutorial, we will show you how to do it using the sed command.
### What is sed Command
The sed command stands for stream editor. It is used to perform basic text manipulation in Linux, and can search, find, modify, insert, or delete text in files.
It also supports complex regular-expression pattern matching.
It can be used for the following purposes:
* To find and replace matches with a given format.
* To find and replace specific lines that match a given format.
* To find and replace the entire line that matches the given format.
* To search and replace two different patterns simultaneously.
The sixteen examples listed in this article will help you master the sed command.
**`Note:`** Since this is a demonstration article, we use the sed command without the `-i` option, which only prints the modified contents of the file to the Linux terminal without touching the source file.
But if you want to actually change the source file in a real environment, use the `-i` option with the sed command, as shown below.
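For example, to make a global replacement permanent while keeping a backup of the original file, you might run:

```
# Edits the file in place and saves the original as sed-test.txt.bak
sed -i.bak 's/unix/linux/g' sed-test.txt
```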
The common syntax for sed to replace a string:
```
sed -i 's/Search_String/Replacement_String/g' Input_File
```
First, we need to understand the sed syntax. Here are the details:
* `sed:` Its a Linux command.
  * `-i:` By default, sed prints the results to the standard output. When you add this option, sed edits the file in place. If you add a suffix (for example, `-i.bak`), a backup of the original file will be created.
* `s:` The s is the substitute command.
* `Search_String:` To search a given string or regular expression.
* `Replacement_String:` The replacement string.
  * `g:` Global replacement flag. By default, the sed command replaces only the first occurrence of the pattern in each line and won't replace any other occurrences in the line. All occurrences are replaced when this flag is provided.
  * `/:` Delimiter character.
* `Input_File:` The filename that you want to perform the action.
Let us look at some examples of commonly used sed commands to search and replace text in files.
We have created the below file for demonstration purposes.
```
# cat sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 1) How to Find and Replace the First Occurrence of the Pattern on a Line
The below sed command replaces the word **unix** with **linux** in the file. This only changes the first instance of the pattern on each line.
```
# sed 's/unix/linux/' sed-test.txt
1 Unix linux unix 23
2 linux Linux 34
3 linuxlinux UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 2) How to Find and Replace the “Nth” Occurrence of the Pattern on a Line
Use the /1,/2,../n flags to replace the corresponding occurrence of a pattern in a line.
The below sed command replaces the second instance of the “unix” pattern with “linux” in a line.
```
# sed 's/unix/linux/2' sed-test.txt
1 Unix unix linux 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 3) How to Search and Replace all Instances of the Pattern in a Line
The below sed command replaces all instances of the "unix" pattern with "linux" in the line, because "g" means a global replacement.
```
# sed 's/unix/linux/g' sed-test.txt
1 Unix linux linux 23
2 linux Linux 34
3 linuxlinux UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 4) How to Find and Replace All Occurrences of the Pattern in a Line, Starting from the "Nth" Occurrence
The below sed command replaces all occurrences of the pattern in a line, starting from the "Nth" occurrence.
```
# sed 's/unix/linux/2g' sed-test.txt
1 Unix unix linux 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 5) How to Search and Replace the Pattern on a Specific Line Number
You can replace the string on a specific line number. The below sed command replaces the pattern "unix" with "linux" only on the 3rd line.
```
# sed '3 s/unix/linux/' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxlinux UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 6) How to Find and Replace Pattern in a Range of Lines
You can specify the range of line numbers to replace the string.
The below sed command replaces the "unix" pattern with "linux" on lines 1 through 3.
```
# sed '1,3 s/unix/linux/' sed-test.txt
1 Unix linux unix 23
2 linux Linux 34
3 linuxlinux UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 7) How to Find and Change the Pattern in the Last Line
The below sed command replaces the matching string only in the last line: the "Linux" pattern is replaced with "Unix" only on the last line.
```
# sed '$ s/Linux/Unix/' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Unix is free and opensource operating system
```
### 8) How to Find and Replace Only the Whole Matching Word in a Line
As you might have noticed, the substring "linuxunix" was replaced with "linuxlinux" in the 6th example. If you want to replace only whole matching words, use the word-boundary expression "\b" on both ends of the search string.
```
# sed '1,3 s/\bunix\b/linux/' sed-test.txt
1 Unix linux unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 9) How to Search and Replace the Pattern Case-Insensitively
Everyone knows that Linux is case sensitive. To make the pattern match case-insensitive, use the I flag.
```
# sed 's/unix/linux/gI' sed-test.txt
1 linux linux linux 23
2 linux Linux 34
3 linuxlinux linuxLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 10) How to Find and Replace a String that Contains the Delimiter Character
When the string you search for or replace with contains the delimiter character, you need to use the backslash "\" to escape the slashes.
In this example, we are going to replace "/bin/bash" with "/usr/bin/fish".
```
# sed 's/\/bin\/bash/\/usr\/bin\/fish/g' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /usr/bin/fish CentOS Linux OS
Linux is free and opensource operating system
```
The above sed command works as expected, but it looks bad. To simplify this, most people use a different delimiter, such as the vertical bar "|", and I advise you to go with it.
```
# sed 's|/bin/bash|/usr/bin/fish/|g' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /usr/bin/fish/ CentOS Linux OS
Linux is free and opensource operating system
```
### 11) How to Find and Replace Digits with a Given Pattern
Similarly, digits can be replaced with a pattern. The below sed command replaces all digits matching "[0-9]" with the string "number".
```
# sed 's/[0-9]/number/g' sed-test.txt
number Unix unix unix numbernumber
number linux Linux numbernumber
number linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 12) How to Find and Replace only two Digit Numbers with Pattern
If you want to replace only two-digit numbers with the pattern, use the sed command below.
```
# sed 's/\b[0-9]\{2\}\b/number/g' sed-test.txt
1 Unix unix unix number
2 linux Linux number
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 13) How to Print only Replaced Lines with the sed Command
If you want to display only the changed lines, use the below sed command.
  * p It prints the replaced line twice on the terminal.
  * -n It suppresses the duplicate rows generated by the "p" flag.
```
# sed -n 's/Unix/Linux/p' sed-test.txt
1 Linux unix unix 23
3 linuxunix LinuxLinux
```
### 14) How to Run Multiple sed Commands at Once
The following sed command detects and replaces two different patterns simultaneously.
The below sed command searches for the "linuxunix" and "CentOS" patterns, replacing them with "LINUXUNIX" and "RHEL8" respectively, in a single pass.
```
# sed -e 's/linuxunix/LINUXUNIX/g' -e 's/CentOS/RHEL8/g' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 LINUXUNIX UnixLinux
linux /bin/bash RHEL8 Linux OS
Linux is free and opensource operating system
```
The following sed command searches for two different patterns and replaces both with a single string in one pass.
The below sed command searches for the "linuxunix" and "CentOS" patterns, replacing both with "Fedora30".
```
# sed -e 's/\(linuxunix\|CentOS\)/Fedora30/g' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 Fedora30 UnixLinux
linux /bin/bash Fedora30 Linux OS
Linux is free and opensource operating system
```
### 15) How to Find and Replace the Entire Line if the Given Pattern Matches
If the pattern matches, you can use the sed command to replace the entire line with the new line. This can be done using the "c" command.
```
# sed '/OS/ c New Line' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
New Line
Linux is free and opensource operating system
```
### 16) How to Search and Replace Strings Only on Lines that Match a Pattern
You can specify a pattern for the sed command to match on a line. When the pattern matches, the sed command searches for the string to be replaced.
The below sed command first looks for lines that have the “OS” pattern, then replaces the word “Linux” with “ArchLinux”.
```
# sed '/OS/ s/Linux/ArchLinux/' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS ArchLinux OS
Linux is free and opensource operating system
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/linux-sed-to-find-and-replace-string-in-files/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972

View File

@ -1,63 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Taking a year to explain computer things)
[#]: via: (https://jvns.ca/blog/2019/09/13/a-year-explaining-computer-things/)
[#]: author: (Julia Evans https://jvns.ca/)
Taking a year to explain computer things
======
I've been working on explaining computer things I'm learning on this blog for 6 years. I wrote one of my first posts, [what does a shell even do?][1] on Sept 30, 2013. Since then, I've written 11 zines, 370,000 words on this blog, and given 20 or so talks. So it seems like I like explaining things a lot.
### tl;dr: I'm going to work on explaining computer things for a year
Here's the exciting news: I left my job a month ago and my plan is to spend the next year working on explaining computer things!
As for why I'm doing this: I was talking through some reasons with my friend Mat last night and he said "well, sometimes there are things you just feel compelled to do". I think that's all there is to it :)
### what does “explain computer things” mean?
I'm planning to:
  1. write some more zines (maybe I can write 10 zines in a year? we'll see! I want to tackle both general-interest and slightly more niche topics, we'll see what happens).
  2. work on some more interactive ways to learn things. I learn things best by trying things out and breaking them, so I want to see if I can facilitate that a little bit for other people. I started a project around this in May which has been on the backburner for a bit but which I'm excited about. Hopefully I'll release it soon and then you can try it out and tell me what you think!
I say "a year" because I think I have at least a year's worth of ideas and I can't predict how I'll feel after doing this for a year.
### how: run a business
I started a corporation almost exactly a year ago, and I'm planning to keep running my explaining-things efforts as a business. This business has been making more than I made in my first programming job (that is, definitely enough money to live on!), which has been really surprising and great (thank you!).
some parameters of the business:
  * I'm not planning to hire employees or anything, it'll just be me and some (awesome) freelancers. The biggest change I have in mind is that I'm hoping to find a freelance editor to help me with editing.
  * I also don't have any specific plans for world domination or to work 80-hour weeks. I'm just going to make zines & things that explain computer concepts and sell them on the internet, like I've been doing.
* No commissions or consulting work, just building ideas I have
It's been pretty interesting to learn more about running a small business and so far I like it more than I thought I would. (except for taxes, which I like exactly as much as I thought I would)
### that's all!
I'm excited to keep making explanations of computer things and to have more time to do it. This blog might change a bit away from "here's what I'm learning at work these days" and towards "here are attempts at explaining things that I mostly already know". It'll be different! We'll see how it goes!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/09/13/a-year-explaining-computer-things/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/blog/2013/09/30/hacker-school-day-2-what-does-a-shell-even-do/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,366 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Emacs Series Exploring ts.el)
[#]: via: (https://opensourceforu.com/2019/09/the-emacs-series-exploring-ts-el/)
[#]: author: (Shakthi Kannan https://opensourceforu.com/author/shakthi-kannan/)
The Emacs Series Exploring ts.el
======
[![][1]][2]
_In this article, the author reviews the ts.el date and time library for Emacs. Written by Adam Porter, ts.el is still in the development phase and has been released under the GNU General Public License v3.0._
The ts.el package uses intuitive names for date and time functions. It internally uses UNIX timestamps and depends on both the dash and s Emacs libraries. The parts of the date are computed lazily and also cached for performance. The source code is available at _<https://github.com/alphapapa/ts.el>_. In this article, we will explore the API functions available from the ts.el library.
**Installation**
The package does not have a tagged release yet; hence, you should download it from <https://github.com/alphapapa/ts.el/blob/master/ts.el> and add it to your Emacs load path to use it. You should also have the dash and s libraries installed and loaded in your Emacs environment. You can then load the library using the following command:
```
(require 'ts)
```
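As a concrete sketch of the download step, you might fetch the library with Git and keep it somewhere on your load path; the destination directory below is just a convention, not a requirement:

```
# Clone ts.el into a local site-lisp directory (hypothetical location).
git clone https://github.com/alphapapa/ts.el ~/.emacs.d/site-lisp/ts.el
```

You would then add that directory to the load-path in your Emacs init file.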
**Usage**
Let us explore the various functions available to retrieve parts of the date from the ts.el library. When the examples were executed, the date was Friday, July 5, 2019. The ts-dow function can be used to obtain the day of the week, as shown below:
```
(ts-dow (ts-now))
5
```
_ts-now_ is a Lisp construct that returns a timestamp set to the current time. It is defined in ts.el as follows:
```
(defsubst ts-now ()
  "Return `ts' struct set to now."
  (make-ts :unix (float-time)))
```
The day of the week starts from Monday (1) and hence Friday has the value of 5. An abbreviated form of the day can be fetched using the _ts-day-abbr_ function. In the following example, Friday is shortened to "Fri".
```
(ts-day-abbr (ts-now))
"Fri"
```
The day of the week in full form can be obtained using the _ts-day-name_ function, as shown below:
```
(ts-day-name (ts-now))
"Friday"
```
The twelve months from January to December are numbered from 1 to 12 respectively. Hence, for the month of July, the index number is 7. This numeric value for the month can be retrieved using the ts-month API. For example:
```
(ts-month (ts-now))
7
```
If you want a three-character abbreviation for the months name, you can use the ts-month-abbr function as shown below:
```
(ts-month-abbr (ts-now))
"Jul"
```
The _ts-month-name_ function can be used to obtain the full name of the month. For example:
```
(ts-month-name (ts-now))
"July"
```
The day of the week starts from Monday and has an index 1, while Sunday has an index 7. If you need the numeric value for the day of the week, use the ts-day function as indicated below:
```
(ts-day (ts-now))
5
```
The _ts-year_ API returns the year. In our example, it is 2019 as shown below:
```
(ts-year (ts-now))
2019
```
The hour, minute and seconds can be retrieved using the _ts-hour, ts-minute_ and _ts-second_ functions, respectively. Examples of these functions are given below:
```
(ts-hour (ts-now))
18
(ts-minute (ts-now))
19
(ts-second (ts-now))
5
```
The UNIX timestamps are in UTC, by default. The _ts-tz-offset_ function returns the offset from UTC. The Indian Standard Time (IST) is five-and-a-half-hours ahead of UTC and hence this function returns +0530 as shown below:
```
(ts-tz-offset (ts-now))
"+0530"
```
The _ts-tz-abbr_ API returns an abbreviated form of the time zone. In our case, IST is returned for the Indian Standard Time.
```
(ts-tz-abbr (ts-now))
"IST"
```
The _ts-adjustf_ function applies the time adjustments passed to the timestamp and the _ts-format_ function formats the timestamp as a string. A couple of examples are given below:
```
(let ((ts (ts-now)))
  (ts-adjustf ts 'day 1)
  (ts-format nil ts))
"2019-07-06 18:23:24 +0530"

(let ((ts (ts-now)))
  (ts-adjustf ts 'year 1 'month 3 'day 5)
  (ts-format nil ts))
"2020-10-10 18:24:07 +0530"
```
You can use the _ts-dec_ function to decrement the timestamp. For example:
```
(ts-day-name (ts-dec 'day 1 (ts-now)))
"Thursday"
```
The threading macro syntax can also be used with the ts-dec function as shown below:
```
(->> (ts-now) (ts-dec 'day 2) ts-day-name)
"Wednesday"
```
The UNIX epoch is the number of seconds that have elapsed since January 1, 1970 (midnight UTC/GMT). The ts-unix function returns an epoch UNIX timestamp as illustrated below:
```
(ts-unix (ts-adjust 'day -2 (ts-now)))
1562158551.0 ;; Wednesday, July 3, 2019 6:25:51 PM GMT+05:30
```
An hour has 3600 seconds and a day has 86400 seconds. You can compare epoch timestamps as shown in the following example:
```
(/ (- (ts-unix (ts-now))
      (ts-unix (ts-adjust 'day -4 (ts-now))))
   86400)
4
```
The _ts-difference_ function returns the difference between two timestamps, while the _ts-human-duration_ function returns the property list (_plist_) values of years, days, hours, minutes and seconds. For example:
```
(ts-human-duration
 (ts-difference (ts-now)
                (ts-dec 'day 3 (ts-now))))
(:years 0 :days 3 :hours 0 :minutes 0 :seconds 0)
```
A number of aliases are available for the hour, minute, second, year, month and day format string constructors. A few examples are given below:
```
(ts-hour (ts-now))
18
(ts-H (ts-now))
18
(ts-minute (ts-now))
46
(ts-min (ts-now))
46
(ts-M (ts-now))
46
(ts-second (ts-now))
16
(ts-sec (ts-now))
16
(ts-S (ts-now))
16
(ts-year (ts-now))
2019
(ts-Y (ts-now))
2019
(ts-month (ts-now))
7
(ts-m (ts-now))
7
(ts-day (ts-now))
5
(ts-d (ts-now))
5
```
You can parse a string into a timestamp object using the ts-parse function. For example:
```
(ts-format nil (ts-parse "Fri Dec 6 2019 18:48:00"))
"2019-12-06 18:48:00 +0530"
```
You can also format the difference between two timestamps in a human readable format as shown in the following example:
```
(ts-human-format-duration
 (ts-difference (ts-now)
                (ts-adjust 'day -1 'hour -3 'minute -2 'second -4 (ts-now))))
"1 days, 3 hours, 2 minutes, 4 seconds"
```
The timestamp comparator operations are also defined in ts.el. The _ts<_ function compares if one epoch UNIX timestamp is less than the other. Its definition is as follows:
```
(defun ts< (a b)
  "Return non-nil if timestamp A is less than timestamp B."
  (< (ts-unix a) (ts-unix b)))
```
In the example given below, the current timestamp is not less than the previous day and hence it returns nil.
```
(ts< (ts-now) (ts-adjust 'day -1 (ts-now)))
nil
```
Similarly, we have other comparator functions like ts>, ts=, ts>= and ts<=. A few examples of these function use cases are given below:
```
(ts> (ts-now) (ts-adjust 'day -1 (ts-now)))
t
(ts= (ts-now) (ts-now))
nil
(ts>= (ts-now) (ts-adjust 'day -1 (ts-now)))
t
(ts<= (ts-now) (ts-adjust 'day -2 (ts-now)))
nil
```
**Benchmarking**
A few performance tests can be conducted to compare the Emacs internal time values versus the UNIX timestamps. The benchmarking tests can be executed by including the bench-multi macro and bench-multi-process-results function available from _<https://github.com/alphapapa/emacs-package-dev-handbook>_ in your Emacs environment.
You will also need to load the dash-functional library to use the -on function.
```
(require 'dash-functional)
```
The following tests have been executed on an Intel(R) Core(TM) i7-3740QM CPU at 2.70GHz with eight cores, 16GB RAM and running Ubuntu 18.04 LTS.
**Formatting**
The first benchmarking exercise is to compare the formatting of the UNIX timestamp and the Emacs internal time. The Emacs Lisp code to run the test is shown below:
```
(let ((format "%Y-%m-%d %H:%M:%S"))
  (bench-multi :times 100000
    :forms (("Unix timestamp" (format-time-string format 1544311232))
            ("Internal time" (format-time-string format '(23564 20962 864324 108000))))))
```
The output appears as an s-expression:
```
(("Form" "x faster than next" "Total runtime" "# of GCs" "Total GC runtime")
 hline
 ("Internal time" "1.11" "2.626460" 13 "0.838733")
 ("Unix timestamp" "slowest" "2.921408" 13 "0.920814"))
```
The abbreviation GC refers to garbage collection. A tabular representation of the above results is given below:
[![][3]][4]
We observe that formatting the internal time is slightly faster.
**Getting the current time**
The functions to obtain the current time can be compared using the following test:
```
(bench-multi :times 100000
  :forms (("Unix timestamp" (float-time))
          ("Internal time" (current-time))))
```
The results are shown below:
[![][5]][6]
We observe that using the Unix timestamp is faster.
**Parsing**
The third benchmarking exercise is to compare parsing functions on a date timestamp string. The corresponding test code is given below:
```
(let* ((s "Wed 10 Jul 2019"))
  (bench-multi :times 100000
    :forms (("ts-parse" (ts-parse s))
            ("ts-parse ts-unix" (ts-unix (ts-parse s))))))
```
The _ts-parse_ call alone is slightly faster than the combined _ts-parse_ plus _ts-unix_ call, as seen in the results:
[![][7]][8]
**A new timestamp versus blanking fields**
The last performance comparison is between creating a new timestamp and blanking the fields. The relevant test code is as follows:
```
(let* ((a (ts-now)))
  (bench-multi :times 100000
    :ensure-equal t
    :forms (("New" (let ((ts (copy-ts a)))
                     (setq ts (ts-fill ts))
                     (make-ts :unix (ts-unix ts))))
            ("Blanking" (let ((ts (copy-ts a)))
                          (setq ts (ts-fill ts))
                          (ts-reset ts))))))
```
The output of the benchmarking exercise is given below:
[![][9]][10]
We observe that creating a new timestamp is slightly faster than blanking the fields.
You are encouraged to read the ts.el README and notes.org from the GitHub repository _<https://github.com/alphapapa/ts.el>_ for more information.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/the-emacs-series-exploring-ts-el/
作者:[Shakthi Kannan][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/shakthi-kannan/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/GPL-emacs-1.jpg?resize=696%2C435&ssl=1 (GPL emacs)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/GPL-emacs-1.jpg?fit=800%2C500&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1-1.png?resize=350%2C151&ssl=1
[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1-1.png?ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2-1.png?resize=350%2C191&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2-1.png?ssl=1
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/3.png?resize=350%2C144&ssl=1
[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/3.png?ssl=1
[9]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/4.png?resize=350%2C149&ssl=1
[10]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/4.png?ssl=1

View File

@ -0,0 +1,232 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Zsh)
[#]: via: (https://opensource.com/article/19/9/getting-started-zsh)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Getting started with Zsh
======
Improve your shell game by upgrading from Bash to Z-shell.
![bash logo on green background][1]
Z-shell (or Zsh) is an interactive Bourne-like POSIX shell known for its abundance of innovative features. Z-Shell users often cite its many conveniences and credit it for increased efficiency and extensive customization.
If you're relatively new to Linux or Unix but experienced enough to have opened a terminal and run a few commands, you have probably used the Bash shell. Bash is arguably the definitive free software shell, partly because of its progressive features and partly because it ships as the default shell on most of the popular Linux and Unix operating systems. However, the more you use a shell, the more you start to find small things that might be better for the way you want to use it. If there's one thing open source is famous for, it's _choice_. Many people choose to "graduate" from Bash to Z.
### What is Zsh?
A shell is just an interface to your operating system. An interactive shell allows you to type in commands through what is called _standard input_, or **stdin**, and get output through _standard output_ and _standard error_, or **stdout** and **stderr**. There are many shells, including Bash, Csh, Ksh, Tcsh, Dash, and Zsh. Each has features based on what its programmers thought would be best for a shell. Whether those features are good or bad is up to you, the end user.
Zsh has features like interactive Tab completion, automated file searching, regex integration, advanced shorthand for defining command scope, and a rich theme engine. These features are included in an otherwise familiar Bourne-like shell environment, meaning that if you already know and love Bash, you'll find Zsh familiar—except with more features. You might think of it as a kind of Bash++.
### Installing Zsh
Install Zsh with your package manager.
On Fedora, RHEL, and CentOS:
```
$ sudo dnf install zsh
```
On Ubuntu and Debian:
```
$ sudo apt install zsh
```
On MacOS, you can install it using MacPorts:
```
$ sudo port install zsh
```
Or with Homebrew:
```
$ brew install zsh
```
It's possible to run Zsh on Windows, but only on top of a Linux or Linux-like layer such as [Windows Subsystem for Linux][2] (WSL) or [Cygwin][3]. That installation is out of scope for this article, so refer to Microsoft documentation.
### Setting up Zsh
Zsh is not a terminal emulator; it's a shell that runs inside a terminal emulator. So, to launch Zsh, you must first launch a terminal window such as GNOME Terminal, Konsole, Terminal, iTerm2, rxvt, or another terminal of your preference. Then you can launch Zsh by typing:
```
$ zsh
```
The first time you launch Zsh, you're asked to choose some configuration options. These can all be changed later, so press **1** to continue.
```
This is the Z Shell configuration function for new users, zsh-newuser-install.
(q)  Quit and do nothing.
(0)  Exit, creating the file ~/.zshrc
(1)  Continue to the main menu.
```
There are four categories of preferences, so just start at the top.
1. The first category lets you choose how many commands are retained in your shell history file. By default, it's set to 1,000 lines.
2. Zsh completion is one of its most exciting features. To keep things simple, consider activating it with its default options until you get used to how it works. Press **1** for default options, **2** to set options manually.
3. Choose Emacs or Vi key bindings. Bash uses Emacs bindings, so you may be used to that already.
4. Finally, you can learn about (and set or unset) some of Zsh's subtle features. For instance, you can stop using the **cd** command by allowing Zsh to initiate a directory change when you provide a non-executable path with no command. To activate one of these extra options, type the option number and enter **s** to _set_ it. Try turning on all options to get the full Zsh experience. You can unset them later by editing **~/.zshrc**.
To complete configuration, press **0**.
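When you exit the tool, your choices are written to **~/.zshrc**. As a rough sketch (the exact lines depend on the answers you gave, so treat these values as examples), the generated file contains settings along these lines:

```
HISTFILE=~/.histfile   # where command history is saved
HISTSIZE=1000          # lines of history kept in memory
SAVEHIST=1000          # lines of history written to HISTFILE
bindkey -e             # Emacs-style key bindings
setopt autocd          # a bare directory path acts like a cd command
```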
### Using Zsh
At first, Zsh feels a lot like using Bash, which is unmistakably one of its many features. There are serious differences between, for instance, Bash and Tcsh, so being able to switch between Bash and Zsh is a convenience that makes Zsh easy to try and easy to use at home if you have to use Bash at work or on your server.
#### Change directory with Zsh
It's the small differences that make Zsh nice. First, try changing the directory to your Documents folder _without the **cd** command_. It seems too good to be true; but if you enter a directory path with no further instruction, Zsh changes to that directory:
```
% Documents
% pwd
/home/seth/Documents
```
That renders an error in Bash or any other normal shell. But Zsh is far from normal, and this is just the beginning.
#### Search with Zsh
When you want to find a file using a normal shell, you probably resort to the **find** or **locate** command. At the very least, you may have used **ls -R** for a recursive listing of a set of directories. Zsh has a built-in feature allowing it to find a file in the current or any other subdirectory.
For instance, assume you have two files called **foo.txt**. One is located in your current directory, and the other is in a subdirectory called **foo**. In a Bash shell, you can list the file in the current directory with:
```
$ ls
foo.txt
```
and you can list the other one by stating the subdirectory's path explicitly:
```
$ ls foo
foo.txt
```
To list both, you must use the **-R** switch, maybe combined with **grep**:
```
$ ls -R | grep foo.txt
foo.txt
foo.txt
```
But in Zsh, you can use the `**` shorthand:
```
% ls **/foo.txt
foo.txt
foo.txt
```
And you can use this syntax with any command, not just with **ls**. Imagine your increased efficiency when moving specific file types from one collection of directories to a single location, or concatenating snippets of text into a file, or grepping through logs.
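For example (the file names here are hypothetical), the same recursive glob works with **mv** or **grep**:

```
% mv **/*.JPG ~/Pictures/
% grep -i error **/*.log
```

The first command gathers photos from any depth of the directory tree; the second searches every log file beneath the current directory.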
### Using Zsh Tab completion
Tab completion is a power-user feature in Bash and some other shells, and it took the Unix world by storm when it became commonplace. No longer did Unix users have to resort to wildcards when typing long and tedious paths (such as `/h*/s*h/V*/SCS/sc*/comp*/t*/a*/*9/04/LS*boat*v*`, which is a lot easier than typing `/home/seth/Videos/SCS/scenes/composite/takes/approved/109/04/LS_boat-port-cargo-mover.mkv`). Instead, they could just press the Tab key when they entered enough of a unique string. For example, if you know there's only one directory starting with an **h** at the root level of your system, you might type **/h** and then hit Tab. It's fast, it's simple, it's efficient. It also confirms a path exists; if Tab doesn't complete anything, you know you're looking in the wrong place or you mistyped part of the path.
However, if you have many directories that share five or more of the same first letters, Tab staunchly refuses to complete. While in most modern terminals it will (at least) reveal the files blocking it from guessing what you mean, it usually takes two Tab presses to reveal them; therefore, Tab completion often becomes such an interplay of letters and Tabs across your keyboard that you feel like you're training for a piano recital.
Zsh solves this minor annoyance by cycling through possible completions. If you type **ls ~/D** and press Tab, Zsh completes your command with **Documents** first; if you press Tab again, it offers **Downloads**, and so on until you find the one you want.
### Wildcards in Zsh
Wildcards behave differently in Zsh than what Bash users are used to. First of all, they can be modified. For example, if you want to list all folders in your current directory, you can use a modified wildcard:
```
% ls
dir0   dir1   dir2   file0   file1
% ls *(/)
dir0   dir1   dir2
```
In this example, the **(/)** qualifies the results of the wildcard so Zsh will display only directories. To list just the files, use **(.)**. To list symlinks, use **(@)**. To list executable files, use **(*)**.
```
% ls ~/bin/*(*)
fop  exify  tt
```
Zsh isn't aware of file types only. It can also list according to modification time, using the same wildcard modifier convention. For example, if you want to find a file that was modified within the past eight hours, use the **mh** modifier (for **modified** and **hours**) and the negative integer of hours:
```
% ls ~/Documents/*(mh-8)
cal.org   game.org   home.org
```
To find a file modified more than (for instance) two days ago, the modifiers change to **md** (for **modified** and **day**) with a positive integer:
```
% ls ~/Documents/*(md+2)
holiday.org
```
There's a lot more you can do with wildcard modifiers and qualifiers, so read the [Zsh man page][4] for full details.
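Qualifiers can also be combined. As a quick sketch using qualifiers documented in that man page, the following lists only the most recently modified regular file: **(.)** restricts the glob to plain files, **om** orders matches by modification time, and **[1]** keeps just the first match:

```
% ls -l *(.om[1])
```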
#### The wildcard side effect
To use wildcards the way you would use them in Bash, sometimes they must be escaped in Zsh. For instance, if you're copying some files to your server in Bash, you might use a wildcard like this:
```
$ scp IMG_*.JPG seth@example.com:~/www/ph*/*19/09/14
```
That works in Bash, but Zsh returns an error because it tries to expand the wildcards on the remote side before issuing the **scp** command. To avoid this, you must escape the remote wildcards:
```
% scp IMG_*.JPG seth@example.com:~/www/ph\*/\*19/09/14
```
It's these types of little exceptions that can frustrate you when you're switching to a new shell. There aren't many when using Zsh (there are probably more when switching back to Bash after experiencing Zsh) but when they happen, remain calm and be explicit. Rarely will you go wrong to adhere strictly to POSIX—but if that fails, look up the problem to solve it and move on. [Hyperpolyglot.org][5] has proven invaluable to many users stuck on one shell at work and another at home.
In my next Zsh article, I'll show you how to install themes and plugins to make your Z-Shell even Z-ier.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/getting-started-zsh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://devblogs.microsoft.com/commandline/category/bash-on-ubuntu-on-windows/
[3]: https://www.cygwin.com/
[4]: https://linux.die.net/man/1/zsh
[5]: http://hyperpolyglot.org/unix-shells

View File

@ -1,145 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Check Linux Mint Version Number & Codename)
[#]: via: (https://itsfoss.com/check-linux-mint-version/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
How to Check Linux Mint Version Number & Codename
======
Linux Mint has a major release (like Mint 19) every two years and minor releases (like Mint 19.1, 19.2 etc.) every six months or so. You can upgrade your Linux Mint version on your own, or it may get automatically updated for the minor releases.
With all these releases, you may wonder which Linux Mint version you are using. Knowing the version number is also helpful in determining whether a particular piece of software is available for your system or if your system has reached end of life.
There could be a number of reasons why you might require the Linux Mint version number and there are various ways you can obtain this information. Let me show you both graphical and command line ways to get the Mint release information.
* [Check Linux Mint version using command line][1]
* [Check Linux Mint version information using GUI][2]
### Ways to check Linux Mint version number using terminal
![][3]
Ill go over several ways you can check your Linux Mint version number and codename using very simple commands. You can open up a **terminal** from the **Menu** or by pressing **CTRL+ALT+T** (default hotkey).
The **last two entries** in this list also output the **Ubuntu release** your current Linux Mint version is based on.
#### 1\. /etc/issue
Starting out with the simplest CLI method, you can print out the contents of **/etc/issue** to check your **Version Number** and **Codename**:
```
$ cat /etc/issue
Linux Mint 19.2 Tina \n \l
```
#### 2\. hostnamectl
![hostnamectl][4]
This single command (**hostnamectl**) prints almost the same information as that found in **System Info**. You can see your **Operating System** (with **version number**), as well as your **kernel version**.
#### 3\. lsb_release
**lsb_release** is a very simple Linux utility to check basic information about your distribution:
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: LinuxMint
Description: Linux Mint 19.2 Tina
Release: 19.2
Codename: tina
```
**Note:** _I used the **-a** tag to print all parameters, but you can also use **-s** for short form, **-d** for description, etc. (check **man lsb_release** for all tags)._
#### 4\. /etc/linuxmint/info
![/etc/linuxmint/info][5]
This isnt a command, but rather a file present on any Linux Mint install. Simply use the **cat** command to print its contents to your terminal and see your **Release Number** and **Codename**.
#### 5\. Use /etc/os-release to get Ubuntu codename as well
![/etc/os-release][7]
Linux Mint is based on Ubuntu. Each Linux Mint release is based on a different Ubuntu release. Knowing which Ubuntu version your Linux Mint release is based on is helpful in cases where youll have to use the Ubuntu codename while adding a repository, like when you need to [install the latest VirtualBox in Linux Mint][8].
**os-release** is yet another file similar to **info**, showing you the codename for the **Ubuntu** release your Linux Mint is based on.
#### 6\. Use /etc/upstream-release/lsb-release to get only Ubuntu base info
If you only want to see information about the **Ubuntu** base, output **/etc/upstream-release/lsb-release**:
```
$ cat /etc/upstream-release/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS"
```
Bonus Tip: [You can check the Linux kernel version][9] with the **uname** command:
```
$ uname -r
4.15.0-54-generic
```
**Note:** _**-r** stands for **release**; however, you can check the other flags with **man uname**._
### Check Linux Mint version information using GUI
If you are not comfortable with the terminal and commands, you can use the graphical method. As you would expect, this one is pretty straightforward.
Open up the **Menu** (bottom-left corner) and then go to **Preferences &gt; System Info**:
![Linux Mint Menu][10]
Alternatively, in the Menu you can search for **System Info**:
![Menu Search System Info][11]
Here you can see both your operating system (including version number), your kernel and the version number of your DE:
![System Info][12]
**Wrapping Up**
I have covered some different ways you can quickly check the version and name (as well as the Ubuntu base and kernel) of the Linux Mint release you are running. I hope you found this beginner tutorial helpful. Let us know in the comments which one is your favorite method!
--------------------------------------------------------------------------------
via: https://itsfoss.com/check-linux-mint-version/
作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: tmp.pL5Hg3N6Qt#terminal
[2]: tmp.pL5Hg3N6Qt#GUI
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/check-linux-mint-version.png?ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/hostnamectl.jpg?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/linuxmint_info.jpg?ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/os_release.jpg?ssl=1
[8]: https://itsfoss.com/install-virtualbox-ubuntu/
[9]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux_mint_menu.jpg?ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/menu_search_system_info.jpg?ssl=1
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/system_info.png?ssl=1

View File

@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Talking to machines: Lisp and the origins of AI)
[#]: via: (https://opensource.com/article/19/9/command-line-heroes-lisp)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
Talking to machines: Lisp and the origins of AI
======
The Command Line Heroes podcast explores the invention of Lisp and the
rise of thinking computers powered by open source software.
![Listen to the Command Line Heroes Podcast][1]
Artificial intelligence (AI) is all the rage today, and its massive impact on the world is still to come, says the [Association for the Advancement of Artificial Intelligence][2] (AAAI). According to an article on [Nanalyze][3]:
> "The vast majority of nearly 2,000 experts polled by the Pew Research Center in 2014 said they anticipate robotics and artificial intelligence will permeate wide segments of daily life by 2025. A 2015 study covering 17 countries found that artificial intelligence and related technologies added an estimated 0.4 percentage point on average to those countries' annual GDP growth between 1993 and 2007, accounting for just over one-tenth of those countries' overall GDP growth during that time."
However, this is the second time AI has garnered so much attention. When was AI first popular, and what does that have to do with the obscure-but-often-loved programming language Lisp?
The second-to-last podcast of [Command Line Heroes][4]' third season dives into these topics and leaves us thinking about open source at the core of AI.
### Before the term AI
Thinking machines have been a curiosity for centuries, long before they could be realized. In the 1800s, computer science pioneers Charles Babbage and Ada Lovelace imagined an analytical engine capable of predictions far beyond human skills, such as correctly selecting the winning horse in a race.
In the 1940s and '50s, Alan Turing defined what it would look like for intelligent machines to emulate human intelligence; that's what we now call the Turing Test. In his 1950 [research paper][5], Turing's "imitation game" set out to convince someone they were communicating with a human in another room when, in reality, it was a machine.
While these theories inspired imaginative debate, they became less theoretical as computer hardware began providing enough power to begin experimenting.
### Why Lisp is at the heart of AI theory
John McCarthy, the person to coin the term "artificial intelligence," is also the person who reinvented how we program to create thinking machines. His reimagined approach was codified into the Lisp programming language. As [Paul Graham][6] wrote:
> "In 1960, [John McCarthy][7] published a remarkable paper in which he did for programming something like what Euclid did for geometry. He showed how, given a handful of simple operators and a notation for functions, you can build a whole programming language. He called this language Lisp, for 'List Processing,' because one of his key ideas was to use a simple data structure called a list for both code and data.
>
> "It's worth understanding what McCarthy discovered, not just as a landmark in the history of computers, but as a model for what programming is tending to become in our own time. It seems to me that there have been two really clean, consistent models of programming so far: the C model and the Lisp model. These two seem points of high ground, with swampy lowlands between them. As computers have grown more powerful, the new languages being developed have been [moving steadily][8] toward the Lisp model. A popular recipe for new programming languages in the past 20 years has been to take the C model of computing and add to it, piecemeal, parts taken from the Lisp model, like runtime typing and garbage collection."
I remember when I first wrote Lisp for a computer science class. After wrapping my head around its seemingly infinite number of parentheses, I uncovered a beautiful pattern of thought: Can I think through what I want this software to do?
![The elegance of Lisp programming is timeless][9]
That sounds silly: computers process what we code them to do, but there's something about recursion that made me think in a wildly different light. It's exciting to learn that 15 years ago, I may have been tapping into the big-picture changes McCarthy was describing.
### Why the slowdown in AI?
By the mid-to-late 1960s, McCarthy's work made way for a new field of research, where AI, machine learning (ML), and deep learning all became possibilities. And Lisp became the accepted standard in this emerging field. It's said that in 1968, McCarthy made a wager with David Levy, a Scottish chess master, that in 10 years a computer would be able to beat Levy in a chess match. Why did it take nearly 30 years to get to the famous [Deep Blue vs. Garry Kasparov][10] match?
Command Line Heroes explores one theory: that for-profit investment in AI pulled essential talent from academia, where they were advancing the science, and pushed them onto a different path. Whether or not this was the reason, the world of AI fell into a "winter," where the people pursuing it were considered unrealistic.
This AI winter lasted for quite some time. In 2005, The [_New York Times_ reported][11] that AI had become so stigmatized that "some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."
### Where is AI now?
Fast forward to today, when talking about AI or ML is a fast pass to getting people's attention—but that attention isn't always positive. Many are concerned that AI will remove millions of jobs from the world. Others say it will [create][12] millions more jobs than are lost.
The verdict is still out. [McKinsey's research][13] on the job loss vs. job gain debate is fascinating. When you take into account growing world consumption, aging populations, "marketization" of previously unpaid domestic work, and other factors, you find that the answer depends on your outlook.
One thing is for sure: AI will be a significant part of our lives, and it will have much wider implications than other areas of tech. For this reason (among others), examining the [misconceptions around ethics and bias in AI][14] is essential.
### Open source and AI
McCarthy had a dream that machines could have common sense. His AI goals included open source from the very beginning; this is visualized on Red Hat's beautifully animated webpage on the [origins of AI and its open source roots][15].
[![Origins of AI and open source screenshot][16]][15]
If we are to achieve the goals of McCarthy, Turing, or other AI pioneers, I believe it will be because of the open source community behind the technology. Part of the reason AI's popularity bounced back is because of open source: languages, frameworks, and the datasets we analyze are increasingly open. Here are a handful of things to explore:
* [Learn enough Python and R][17] to be part of this future
* [Explore Python libraries][18] that will bulk up your skills
* Understand how [AI and ML are related][19]
* Explore [free and open datasets][20]
* Use modern implementations of Lisp, [available under open source licenses][21]
It's possible that early AI explored the right ideas in the wrong decade. World-class computers back then weren't even as powerful as today's cellphones, and each one was shared by dozens of individuals. Today, many of us own multiple supercomputers and carry them with us all the time. For this reason, among others, the future of AI is strong and its highest achievements are yet to come.
_Command Line Heroes has covered programming languages for all of Season 3. [Subscribe so that you don't miss the last episode of the season][4], and I would love to hear your thoughts in the comments below._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/command-line-heroes-lisp
作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_hereoes_ep7_blog-header-292x521.png?itok=lI4DXvq2 (Listen to the Command Line Heroes Podcast)
[2]: http://aaai.org/
[3]: https://www.nanalyze.com/2016/11/artificial-intelligence-definition/
[4]: https://www.redhat.com/en/command-line-heroes
[5]: https://www.csee.umbc.edu/courses/471/papers/turing.pdf
[6]: http://www.paulgraham.com/rootsoflisp.html
[7]: http://www-formal.stanford.edu/jmc/index.html
[8]: http://www.paulgraham.com/diff.html
[9]: https://opensource.com/sites/default/files/uploads/lisp_cycles.png (The elegance of Lisp programming is timeless)
[10]: https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov
[11]: https://www.nytimes.com/2005/10/14/technology/behind-artificial-intelligence-a-squadron-of-bright-real-people.html
[12]: https://singularityhub.com/2019/01/01/ai-will-create-millions-more-jobs-than-it-will-destroy-heres-how/
[13]: https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages
[14]: https://opensource.com/article/19/8/4-misconceptions-ethics-and-bias-ai
[15]: https://www.redhat.com/en/open-source-stories/ai-revolutionaries/origins-ai-open-source
[16]: https://opensource.com/sites/default/files/uploads/origins_aiopensource.png (Origins of AI and open source screenshot)
[17]: https://opensource.com/article/19/5/learn-python-r-data-science
[18]: https://opensource.com/article/18/5/top-8-open-source-ai-technologies-machine-learning
[19]: https://opensource.com/tags/ai-and-machine-learning
[20]: https://opensource.com/article/19/2/learn-data-science-ai
[21]: https://www.cliki.net/Common+Lisp+implementation

View File

@ -0,0 +1,328 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Whats Good About TensorFlow 2.0?)
[#]: via: (https://opensourceforu.com/2019/09/whats-good-about-tensorflow-2-0/)
[#]: author: (Siva Rama Krishna Reddy B https://opensourceforu.com/author/siva-krishna/)
Whats Good About TensorFlow 2.0?
======
[![][1]][2]
_Version 2.0 of TensorFlow is focused on simplicity and ease of use. It has been strengthened with updates like eager execution and intuitive higher level APIs accompanied by flexible model building. It is platform agnostic, and makes APIs more consistent, while removing those that are redundant._
Machine learning and artificial intelligence are experiencing a revolution these days, primarily due to three major factors. The first is the increased computing power available within small form factors such as GPUs, NPUs and TPUs. The second is the breakthrough in machine learning algorithms; state-of-the-art algorithms, and hence models, are available that infer faster. Finally, huge amounts of labelled data, which are essential for deep learning models to perform better, are now available.
TensorFlow is an open source AI framework from Google which arms researchers and developers with the right tools to build novel models. It was made open source in 2015 and, in the past few years, has evolved with various enhancements covering operator support, programming languages, hardware support, data sets, official models, and distributed training and deployment strategies.
TensorFlow 2.0 was released recently at the TensorFlow Developer Summit. It has major changes across the stack, some of which will be discussed from the developers point of view.
TensorFlow 2.0 is primarily focused on the ease-of-use, power and scalability aspects. Ease is ensured in terms of simplified APIs, Keras being the main high level API interface; eager execution is available by default. Version 2.0 is powerful in the sense of being flexible and running much faster than earlier, with more optimisation. Finally, it is more scalable since it can be deployed on high-end distributed environments as well as on small edge devices.
This new release streamlines the various components involved, from data preparation all the way up to deployment on various targets. High-speed data processing pipelines are offered by tf.data, high level APIs are offered by tf.keras, and there are simplified APIs to access various distribution strategies on targets like the CPU, GPU and TPU. TensorFlow 2.0 offers a unique packaging format called SavedModel that can be deployed over the cloud through TensorFlow Serving. Models can be deployed to edge devices through TensorFlow Lite, and to Web applications through the newly introduced TensorFlow.js; various other language bindings are also available.
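As a brief illustrative sketch (the data here is synthetic, not from the article), a tf.data pipeline in 2.0 can be built with a few chained calls and iterated directly, thanks to eager execution:

```
import tensorflow as tf

# Build a simple input pipeline: shuffle and batch a synthetic dataset.
dataset = (tf.data.Dataset.from_tensor_slices(list(range(10)))
           .shuffle(buffer_size=10)
           .batch(2))

# With eager execution on by default, the pipeline is directly iterable.
for batch in dataset:
    print(batch.numpy())
```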
![Figure 1: The evolution of TensorFlow][3]
TensorFlow.js was announced at the developer summit with off-the-shelf pretrained models for the browser, node, desktop and mobile native applications. The inclusion of Swift was also announced. Looking at some of the performance improvements since last year, the latest release claims a training speedup of 1.8x on NVIDIA Tesla V100, a 1.6x training speedup on Google Cloud TPUv2 and a 3.3x inference speedup on Intel Skylake.
**Upgrade to 2.0**
The new release offers a utility _tf_upgrade_v2_ to convert a 1.x Python application script to a 2.0 compatible script. It does most of the job in converting the 1.x deprecated API to a newer compatibility API. An example of the same can be seen below:
```
test-pc:~$ cat test-infer-v1.py

# Tensorflow imports
import tensorflow as tf

save_path = 'checkpoints/dev'

with tf.gfile.FastGFile("./trained-graph.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=tf.get_default_graph()) as sess:
    input_data = sess.graph.get_tensor_by_name("DecodeJPGInput:0")
    output_data = sess.graph.get_tensor_by_name("final_result:0")

    image = 'elephant-299.jpg'
    if not tf.gfile.Exists(image):
        tf.logging.fatal('File does not exist %s', image)
    image_data = tf.gfile.FastGFile(image, 'rb').read()

    result = sess.run(output_data, {'DecodeJPGInput:0': image_data})
    print(result)

test-pc:~$ tf_upgrade_v2 --infile test-infer-v1.py --outfile test-infer-v2.py

INFO line 5:5: Renamed 'tf.gfile.FastGFile' to 'tf.compat.v1.gfile.FastGFile'
INFO line 6:16: Renamed 'tf.GraphDef' to 'tf.compat.v1.GraphDef'
INFO line 10:9: Renamed 'tf.Session' to 'tf.compat.v1.Session'
INFO line 10:26: Renamed 'tf.get_default_graph' to 'tf.compat.v1.get_default_graph'
INFO line 15:15: Renamed 'tf.gfile.Exists' to 'tf.io.gfile.exists'
INFO line 16:12: Renamed 'tf.logging.fatal' to 'tf.compat.v1.logging.fatal'
INFO line 17:21: Renamed 'tf.gfile.FastGFile' to 'tf.compat.v1.gfile.FastGFile'

TensorFlow 2.0 Upgrade Script
-----------------------------
Converted 1 files
Detected 0 issues that require attention
-------------------------------------------------------------
Make sure to read the detailed log 'report.txt'

test-pc:~$ cat test-infer-v2.py

# Tensorflow imports
import tensorflow as tf

save_path = 'checkpoints/dev'

with tf.compat.v1.gfile.FastGFile("./trained-graph.pb", 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph()) as sess:
    input_data = sess.graph.get_tensor_by_name("DecodeJPGInput:0")
    output_data = sess.graph.get_tensor_by_name("final_result:0")

    image = 'elephant-299.jpg'
    if not tf.io.gfile.exists(image):
        tf.compat.v1.logging.fatal('File does not exist %s', image)
    image_data = tf.compat.v1.gfile.FastGFile(image, 'rb').read()

    result = sess.run(output_data, {'DecodeJPGInput:0': image_data})
    print(result)
```
As we can see here, the _tf_upgrade_v2_ utility converts all the deprecated APIs to compatible v1 APIs, to make them work with 2.0.
**Eager execution:** Eager execution allows real-time evaluation of Tensors without calling _session.run_. A major advantage with eager execution is that we can print the Tensor values any time for debugging.
With TensorFlow 1.x, the code is:
```
test-pc:~$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> print(tf.__version__)
1.14.0
>>> tf.add(2,3)
<tf.Tensor 'Add:0' shape=() dtype=int32>
```
TensorFlow 2.0, on the other hand, evaluates the result that we call the API:
```
test-pc:~$python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> print(tf.__version__)
2.0.0-beta1
>>> tf.add(2,3)
<tf.Tensor: id=2, shape=(), dtype=int32, numpy=5>
```
In v1.x, the resulting Tensor doesnt display the value and we need to execute the graph under a session to get the value, but in v2.0 the values are implicitly computed and available for debugging.
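Under eager execution, a concrete value can also be pulled out of a Tensor at any point; a trivial sketch:

```
>>> t = tf.add(2, 3)
>>> t.numpy()  # the concrete value is available immediately
5
>>> print(t)   # printing also shows the value
tf.Tensor(5, shape=(), dtype=int32)
```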
**Keras**
Keras (_tf.keras_) is now the official high level API. It has been enhanced with many compatible low level APIs. The redundancy across Keras and TensorFlow is removed, and most of the APIs are now available with Keras. The low level operators are still accessible through tf.raw_ops.
We can now save the Keras model directly as a Tensorflow SavedModel, as shown below:
```
# Save the Keras model to a SavedModel (the path is illustrative)
saved_model_path = tf.keras.experimental.export_saved_model(model, '/path/to/model')

# Load the SavedModel
new_model = tf.keras.experimental.load_from_saved_model(saved_model_path)

# new_model is now a Keras Model object.
new_model.summary()
```
Earlier, APIs related to various layers, optimisers, metrics and loss functions were distributed across Keras and native TensorFlow. Latest enhancements unify them as _tf.keras.optimizer.*, tf.keras.metrics.*, tf.keras.losses.* and tf.keras.layers.*._
The RNN layers are now much simpler compared to v1.x.
With TensorFlow 1.x, the commands given are:
```
if tf.test.is_gpu_available():
    model.add(tf.keras.layers.CuDNNLSTM(32))
else:
    model.add(tf.keras.layers.LSTM(32))
```
With TensorFlow 2.0, the commands given are:
```
# This will use the CuDNN kernel when the GPU is available.
model.add(tf.keras.layers.LSTM(32))
```
TensorBoard integration is now a simple call back, as shown below:
```
tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
model.fit(
    x_train, y_train, epochs=5,
    validation_data=[x_test, y_test],
    callbacks=[tb_callback])
```
With this simple callback addition, TensorBoard is available in the browser to inspect all the statistics in real time.
Keras offers unified distribution strategies, and a few lines of code can enable the required strategy as shown below:
```
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(64, input_shape=[10]),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')])

    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
```
As shown above, the model definition under the desired scope is all we need to apply the desired strategy. Very soon, there will be support for multi-node synchronous and TPU strategy, and later, for parameter server strategy.
![Figure 2: Coral products with edge TPU][4]
**TensorFlow function**
Function is a major upgrade that impacts the way we write TensorFlow applications. The new version introduces tf.function, which simplifies the applications and makes it very close to writing a normal Python application.
A sample _tf.function_ definition is shown in the code snippet below. Here, the _tf.function_ declaration turns a user-defined function into a TensorFlow operator, and all optimisation is applied automatically. The function is also faster than eager execution. APIs like _tf.control_dependencies_, _tf.global_variable_initializer_, _tf.cond_ and _tf.while_loop_ are no longer needed with _tf.function_. User-defined functions are polymorphic by default, i.e., we may pass mixed-type tensors.
```
test-pc:~$ cat tf-test.py
import tensorflow as tf
print(tf.__version__)

@tf.function
def add(a, b):
    return (a + b)

print(add(tf.ones([2,2]), tf.ones([2,2])))
test-pc:~$ python3 tf-test.py
2.0.0-beta1
tf.Tensor(
[[2. 2.]
[2. 2.]], shape=(2, 2), dtype=float32)
```
Here is another example demonstrating automatic control flow and AutoGraph in action. AutoGraph automatically converts Python conditionals and while loops into TensorFlow operators.
```
test-pc:~$ cat tf-test-control.py
import tensorflow as tf
print(tf.__version__)

@tf.function
def f(x):
    while tf.reduce_sum(x) > 1:
        x = tf.tanh(x)
    return x

print(f(tf.random.uniform([10])))
test-pc:~$ python3 tf-test-control.py
2.0.0-beta1
tf.Tensor(
[0.10785562 0.11102211 0.11347286 0.11239681 0.03989326 0.10335539
0.11030331 0.1135259 0.11357211 0.07324989], shape=(10,), dtype=float32)
```
We can see Autograph in action with the following API over the function.
```
print(tf.autograph.to_code(f)) # f is the function name
```
**TensorFlow Lite**
The latest advancements in edge devices add neural network accelerators. Google has released EdgeTPU, Intel has the edge inference platform Movidius, Huawei mobile devices have the Kirin based NPU, Qualcomm has come up with NPE SDK to accelerate on the Snapdragon chipsets using Hexagon power and, recently, Samsung released Exynos 9 with NPU. An edge device optimised framework is necessary to support these hardware ecosystems.
Unlike TensorFlow, which is widely used in high power-consuming server infrastructure, edge devices are challenging in terms of reduced computing power, limited memory and battery constraints. TensorFlow Lite is aimed at bringing in TensorFlow models directly onto the edge with minimal effort. The TF Lite model format is different from TensorFlow. A TF Lite converter is available to convert a TensorFlow SavedBundle to a TF Lite model.
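As a minimal sketch (the SavedModel path below is hypothetical), the conversion in 2.0 looks like this:

```
import tensorflow as tf

# Convert a SavedModel directory to a TF Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model('/path/to/saved_model')
tflite_model = converter.convert()

# Write the converted model to disk for deployment on an edge device.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```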
Though TensorFlow Lite is evolving, there are limitations too, such as in the number of operations supported, and the unsupported semantics like control-flows and RNNs. In its early days, TF Lite used a TOCO converter and there were a few challenges for the developer community. A brand new 2.0 converter is planned to be released soon. There are claims that using TF Lite results in huge improvements across the CPU, GPU and TPU.
TF Lite introduces delegates to accelerate parts of the graph on an accelerator. We may choose a specific delegate for a specific sub-graph, if needed.
```
#include "tensorflow/lite/delegates/gpu/metal_delegate.h"

// Initialize interpreter with GPU delegate
std::unique_ptr<Interpreter> interpreter;
InterpreterBuilder(*model, resolver)(&interpreter);
auto* delegate = NewGpuDelegate(nullptr); // default config
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) return false;

// Run inference
while (true) {
    WriteToInputTensor(interpreter->typed_input_tensor<float>(0));
    if (interpreter->Invoke() != kTfLiteOk) return false;
    ReadFromOutputTensor(interpreter->typed_output_tensor<float>(0));
}

// Clean up
interpreter = nullptr;
DeleteGpuDelegate(delegate);
```
As shown above, we can choose GPUDelegate and modify the graph with the respective kernels at runtime. TF Lite is going to support the Android NNAPI delegate, in order to support all the hardware that is supported by NNAPI. For edge devices, CPU optimisation is also important, as not all edge devices are equipped with accelerators; hence, there is a plan to support further optimisations for ARM and x86.
Optimisations based on quantisation and pruning are evolving to reduce the size and processing demands of models. Quantisation generally can reduce model size by 4x (i.e., 32-bit to 8-bit). Models with more convolution layers may get faster by 10 to 50 per cent on the CPU. Fully connected and RNN layers may speed up operation by 3x.
TF Lite now supports post-training quantisation, which greatly reduces the size along with the compute demands. TensorFlow 2.0 offers simplified APIs to build models with quantisation and pruning optimisations.
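A minimal sketch of post-training quantisation with the converter (again, the model path is hypothetical):

```
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('/path/to/saved_model')
# Ask the converter to apply its default optimisations, which include
# post-training quantisation of the weights.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
```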
A normal dense layer without quantisation looks like what follows:
```
tf.keras.layers.Dense(512, activation='relu')
```
Whereas a quantised dense layer looks like whats shown below:
```
quantize.Quantize(tf.keras.layers.Dense(512, activation='relu'))
```
Pruning is a technique used to drop connections that are ineffective. In general, dense layers contain lots of connections which dont influence the output. Such connections can be dropped by making the weight zero. Tensors with lots of zeros may be represented as sparse and can be compressed. Also, the number of operations in a sparse tensor is less.
Building a layer with _prune_ is as simple as using the following command:
```
prune.Prune(tf.keras.layers.Dense(512, activation='relu'))
```
In the pipeline are Keras-based quantised training and Keras-based connection pruning. These optimisations may push TF Lite further ahead of other frameworks.
**Coral**
Coral is a new platform for creating products with on-device ML acceleration. The first product here features Googles Edge TPU in SBC and USB form factors. TensorFlow Lite is officially supported on this platform, with the salient features being very fast inference speed, privacy and no reliance on network connection.
More details related to hardware specifications, pricing, and a getting started guide can be found at _<https://coral.withgoogle.com>_.
With these advances as well as a wider ecosystem, its very evident that TensorFlow may become the leading framework for artificial intelligence and machine learning, similar to how Android evolved in the mobile world.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/whats-good-about-tensorflow-2-0/
作者:[Siva Rama Krishna Reddy B][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/siva-krishna/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2018/09/ML-with-tensorflow.jpg?resize=696%2C328&ssl=1 (ML with tensorflow)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2018/09/ML-with-tensorflow.jpg?fit=1200%2C565&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-1-The-evolution-of-TensorFlow.jpg?resize=350%2C117&ssl=1
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-2-Coral-products-with-edge-TPU.jpg?resize=350%2C198&ssl=1

View File

@ -0,0 +1,210 @@
[#]: collector: (lujun9972)
[#]: translator: (amwps290 )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Adding themes and plugins to Zsh)
[#]: via: (https://opensource.com/article/19/9/adding-plugins-zsh)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Adding themes and plugins to Zsh
======
Expand Z-shell's capabilities with themes and plugins installed with Oh
My Zsh.
![Someone wearing a hardhat and carrying code ][1]
In my [previous article][2], I explained how to get started with [Z-shell][2] (Zsh). For some users, the most exciting thing about Zsh is its ability to adopt new themes. It's so easy to theme Zsh both because of the active community designing visuals for the shell and also because of the [Oh My Zsh][3] project, which makes it trivial to install them.
Theming is one of those changes you notice immediately, so if you don't feel like you changed shells when you installed Zsh, you'll definitely feel it once you've adopted one of the 100+ themes bundled with Oh My Zsh. There's a lot more to Oh My Zsh than just pretty themes, though; there are also hundreds of plugins that add features to your Z-shell environment.
### Installing Oh My Zsh
The [ohmyz.sh][3] website encourages you to install the framework by running a script over the internet from your computer. While the Oh My Zsh project is almost certainly trustworthy, it's generally ill-advised to blindly run scripts on your system. If you want to run the install script, you can download it, read it, and run it after you're satisfied you understand what it's doing.
If you download the script and read it, you may notice that installation is only a three-step process:
#### 1\. Clone oh-my-zsh
First, clone the oh-my-zsh repository into a directory called **~/.oh-my-zsh**:
```
% git clone http://github.com/robbyrussell/oh-my-zsh ~/.oh-my-zsh
```
#### 2\. Switch the config file
Next, back up your existing **.zshrc** file and move the default one from the oh-my-zsh install into its place. You can do this in one command using the **-b** (backup) option for **mv**, as long as your version of the **mv** command includes that option:
```
% mv -b \
~/.oh-my-zsh/templates/zshrc.zsh-template \
~/.zshrc
```
#### 3\. Edit the config
By default, Oh My Zsh's configuration is pretty bland, so you might want to reintegrate your custom **~/.zshrc** into the **.oh-my-zsh** config. To do that, append your old config to the end of the new one using the [cat command][4]:
```
% cat ~/.zshrc~ >> ~/.zshrc
```
To see the default configuration and learn about some of the options it provides, open **~/.zshrc** in your favorite text editor. The file is well-commented, so it's a great way to get a good idea of what's possible.
For instance, you can change the location of your **.oh-my-zsh** directory. At installation, it resides at the base of your home directory, but modern Linux convention, as defined by the [Free Desktop][5] specification, is to place directories that extend the functionality of applications in the **~/.local/share** directory. You can change it in **~/.zshrc** by editing the line:
```
# Path to your oh-my-zsh installation.
export ZSH=$HOME/.local/share/oh-my-zsh
```
then moving the directory to that location:
```
% mv ~/.oh-my-zsh \
$HOME/.local/share/oh-my-zsh
```
If you're using MacOS, the specification is less clear, but arguably the most appropriate place for the directory is **$HOME/Library/Application\ Support**.
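If you go that route, the corresponding **~/.zshrc** line and move command might look like the sketch below; the exact directory name is a matter of taste rather than an Oh My Zsh requirement, and the quotes matter because the path contains a space:
```
# in ~/.zshrc (quotes are required because of the space in the path):
export ZSH="$HOME/Library/Application Support/oh-my-zsh"

# then move the directory to match:
mv ~/.oh-my-zsh "$HOME/Library/Application Support/oh-my-zsh"
```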
### Relaunching Zsh
After editing the config, you have to relaunch your shell. Before you do that, make sure you've finished any in-progress config changes; for instance, don't change the path of **.oh-my-zsh** then forget to move the directory to its new location. If you don't want to relaunch your shell, you can **source** the config file, just as you can with Bash:
```
% source ~/.zshrc
➜  .oh-my-zsh git:(master) ✗
```
You can ignore any warnings about missing update files; they will be resolved upon relaunch.
### Changing your theme
Installing Oh My Zsh sets your Z-shell theme to **robbyrussell**, a theme by the project's maintainer. This theme's changes are minimal, mostly involving the color of your prompt.
To view all the available themes, list the contents of the **.oh-my-zsh** theme directory:
```
➜  .oh-my-zsh git:(master) ✗ ls \
~/.local/share/oh-my-zsh/themes
3den.zsh-theme
adben.zsh-theme
af-magic.zsh-theme
afowler.zsh-theme
agnoster.zsh-theme
[...]
```
To see screenshots of themes before trying them, visit the Oh My Zsh [wiki][6]. For even more themes, visit the [External themes][7] wiki page.
Most themes are simple to set up and use. Just change the value of the theme name in **.zshrc** and reload the config:
```
➜ ~ sed -i \
's/_THEME=\"robbyrussell\"/_THEME=\"linuxonly\"/g' \
~/.zshrc
➜ ~ source ~/.zshrc
seth@darkstar:pts/0->/home/skenlon (0) ➜
```
Other themes require extra configuration. For example, to use the **agnoster** theme, you must first install the Powerline font. This is an open source font, and it's probably in your software repository if you're running Linux. Install it with:
```
➜ ~ sudo dnf install powerline-fonts
```
Set your theme in the config:
```
➜ ~ sed -i \
's/_THEME=\"linuxonly\"/_THEME=\"agnoster\"/g' \
~/.zshrc
```
and then relaunch (a simple **source** won't work). Upon relaunch, you will see the new theme:
![agnoster theme][8]
### Installing plugins
Over 200 plugins ship with Oh My Zsh, and you can see them by looking in **.oh-my-zsh/plugins**. Each plugin directory has a README file explaining what the plugin does.
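For example, you can skim a plugin's documentation without leaving your terminal; the path below assumes the relocated install from earlier in this article:
```
% less ~/.local/share/oh-my-zsh/plugins/dnf/README.md
```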
Some plugins are relatively simple. For instance, the **dnf**, **ubuntu**, **brew**, and **macports** plugins are collections of aliases to simplify interactions with the DNF, Apt, Homebrew, and MacPorts package managers.
Others are more complex. The **git** plugin, active by default, detects when you're working in a [Git repository][9] and updates your shell prompt so that it lists the current branch and even indicates whether there are unmerged changes.
To activate a plugin, add it to the plugin setting in **~/.zshrc**. For example, to add the **dnf** and **pass** plugins, open **~/.zshrc** in your favorite text editor:
```
plugins=(git dnf pass)
```
Save your changes and reload your Zsh session:
```
% source ~/.zshrc
```
The plugins are now active. You can test the **dnf** plugin by using one of the aliases it provides:
```
% dnfs fop
====== Name Exactly Matched: fop ======
fop.noarch : XSL-driven print formatter
```
Different plugins do different things, so you may want to install only one or two at a time to help you learn the new capabilities of your shell.
#### Cheating
Some Oh My Zsh plugins are pretty generic. If you look at a plugin that claims to be a Z-shell plugin and the code is also compatible with Bash, then you can use it in your Bash shell. Some plugins require Z-shell-specific functions, so this won't work with all of them. But you can load plugins like **dnf**, **ubuntu**, **[firewalld][10]**, and others into a Bash shell by using **source** to load the plugin of your choice. For example:
```
if [ -d $HOME/.local/share/oh-my-zsh/plugins ]; then
        source $HOME/.local/share/oh-my-zsh/plugins/dnf/dnf.plugin.zsh
fi
```
### To Z or not to Z
Z-shell is a powerful shell both for its built-in features and the plugins contributed by its passionate community. Whether you use it as your primary shell or just as a shell you visit on weekends or holidays, you owe it to yourself to try it out.
What are your favorite Z-shell themes and plugins? Tell us in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/adding-plugins-zsh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code )
[2]: https://opensource.com/article/19/9/getting-started-zsh
[3]: https://ohmyz.sh/
[4]: https://opensource.com/article/19/2/getting-started-cat-command
[5]: http://freedesktop.org
[6]: https://github.com/robbyrussell/oh-my-zsh/wiki/Themes
[7]: https://github.com/robbyrussell/oh-my-zsh/wiki/External-themes
[8]: https://opensource.com/sites/default/files/uploads/zsh-agnoster.jpg (agnoster theme)
[9]: https://opensource.com/resources/what-is-git
[10]: https://opensource.com/article/19/7/make-linux-stronger-firewalls

View File

@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to remove carriage returns from text files on Linux)
[#]: via: (https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to remove carriage returns from text files on Linux
======
When carriage returns (also referred to as Ctrl+M's) get on your nerves, don't fret. There are several easy ways to show them the door.
[Kim Siever][1]
Carriage returns go back a long way, as far back as typewriters on which a mechanism or a lever swung the carriage that held a sheet of paper to the right so that suddenly letters were being typed on the left again. They have persevered in text files on Windows, but were never used on Linux systems. This incompatibility sometimes causes problems when you're trying to process files on Linux that were created on Windows, but it's an issue that is very easily resolved.
The carriage return character, also referred to as **Ctrl+M**, would show up as an octal 15 if you were looking at the file with an **od** (octal dump) command. The characters **CRLF** are often used to represent the carriage return and linefeed sequence that ends lines on Windows text files. Those who like to gaze at octal dumps will spot the **\r \n**. Linux text files, by comparison, end with just linefeeds.
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
Here's a sample of **od** output with the lines containing the **CRLF** characters in both octal and character form highlighted.
```
$ od -bc testfile.txt
0000000 124 150 151 163 040 151 163 040 141 040 164 145 163 164 040 146
T h i s i s a t e s t f
0000020 151 154 145 040 146 162 157 155 040 127 151 156 144 157 167 163
i l e f r o m W i n d o w s
0000040 056 015 012 111 164 047 163 040 144 151 146 146 145 162 145 156 <==
. \r \n I t ' s d i f f e r e n <==
0000060 164 040 164 150 141 156 040 141 040 125 156 151 170 040 164 145
t t h a n a U n i x t e
0000100 170 164 040 146 151 154 145 015 012 167 157 165 154 144 040 142 <==
x t f i l e \r \n w o u l d b <==
```
While these characters don't represent a huge problem, they can sometimes interfere when you want to parse the text files in some way and don't want to have to code around their presence or absence.
### 3 ways to remove carriage return characters from text files
Fortunately, there are several ways to easily remove carriage return characters. Here are three options:
#### dos2unix
You might need to go through the trouble of installing it, but **dos2unix** is probably the easiest way to turn Windows text files into Unix/Linux text files. One command with one argument, and you're done. No second file name is required. The file will be changed in place.
```
$ dos2unix testfile.txt
dos2unix: converting file testfile.txt to Unix format...
```
You should see the file length decrease, depending on how many lines it contains. A file with 100 lines would likely shrink by 99 characters, since only the last line will not end with the **CRLF** characters.
Before:
```
-rw-rw-r-- 1 shs shs 121 Sep 14 19:11 testfile.txt
```
After:
```
-rw-rw-r-- 1 shs shs 118 Sep 14 19:12 testfile.txt
```
If you need to convert a large collection of files, don't fix them one at a time. Instead, put them all in a directory by themselves and run a command like this:
```
$ find . -type f -exec dos2unix {} \;
```
In this command, we use find to locate regular files and then run the **dos2unix** command to convert them one at a time. The {} in the command is replaced by the filename. You should be sitting in the directory with the files when you run it. This command could damage other types of files, such as those that contain octal 15 characters in some context other than a text file (e.g., bytes in an image file).
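If you would rather not risk touching other file types at all, you can narrow the search to a specific extension. For example, assuming the Windows files you care about all end in **.txt**:
```
$ find . -type f -name "*.txt" -exec dos2unix {} \;
```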
#### sed
You can also use **sed**, the stream editor, to remove carriage returns. You will, however, have to supply a second file name. Here's an example:
```
$ sed -e "s/^M//" before.txt > after.txt
```
One important thing to note is that you DON'T type what that command appears to be. You must enter **^M** by typing **Ctrl+V** followed by **Ctrl+M**. The “s” is the substitute command. The slashes separate the text we're looking for (the Ctrl+M) and the text (nothing in this case) that we're replacing it with.
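If your **sed** is GNU sed (the default on Linux), there is a variation that sidesteps the **Ctrl+V Ctrl+M** typing entirely, since GNU sed understands the **\r** escape and can edit the file in place:
```
$ sed -i 's/\r$//' before.txt
```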
#### vi
You can even remove carriage return (**Ctrl+M**) characters with **vi**, although this assumes you're not running through hundreds of files and are maybe making some other changes, as well. You would type “**:**” to go to the command line and then type the string shown below. As with **sed**, the **^M** portion of this command requires typing **Ctrl+V** to get the **^** and then **Ctrl+M** to insert the **M**. The **%s** is a substitute operation, the slashes again separate the characters we want to remove and the text (nothing) we want to replace it with. The “**g**” (global) means to do this on every line in the file.
```
:%s/^M//g
```
#### Wrap-up
The **dos2unix** command is probably the easiest to remember and most reliable way to remove carriage returns from text files. Other options are a little trickier to use, but they provide the same basic function.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/kmsiever/5895380540/in/photolist-9YXnf5-cNmpxq-2KEvib-rfecPZ-9snnkJ-2KAcDR-dTxzKW-6WdgaG-6H5i46-2KzTZX-7cnSw7-e3bUdi-a9meh9-Zm3pD-xiFhs-9Hz6YM-ar4DEx-4PXAhw-9wR4jC-cihLcs-asRFJc-9ueXvG-aoWwHq-atwL3T-ai89xS-dgnntH-5en8Te-dMUDd9-aSQVn-dyZqij-cg4SeS-abygkg-f2umXt-Xk129E-4YAeNn-abB6Hb-9313Wk-f9Tot-92Yfva-2KA7Sv-awSCtG-2KDPzb-eoPN6w-FE9oi-5VhaNf-eoQgx7-eoQogA-9ZWoYU-7dTGdG-5B1aSS
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,235 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create an online store with this Java-based framework)
[#]: via: (https://opensource.com/article/19/1/scipio-erp)
[#]: author: (Paul Piper https://opensource.com/users/madppiper)
使用这个 Java 框架创建一个在线商店
======
Scipio ERP 具有广泛的应用程序和功能。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0)
所以,你想在网上销售产品或服务,但要么找不到合适的软件,要么认为定制成本太高? [Scipio ERP][1] 也许正是你想要的。
Scipio ERP 是一个基于 Java 的开放源码电子商务框架,具有广泛的应用程序和功能。这个项目在 2014 年从 [Apache OFBiz][2] fork 而来,侧重于更好的定制和更现代的吸引力。这个电子商务组件应用非常广泛,可以在多商店安装中工作,同时完成国际化,并具有广泛的产品配置,而且它还兼容现代 HTML 框架。该软件还为许多其他业务案例提供标准应用程序,例如会计,仓库管理或销售人员自动化。它都是高度标准化的,因此易于定制,如果您想要的不仅仅是一个虚拟购物车,这是非常棒的。
该系统也使得跟上现代 web 标准变得非常容易。所有界面都是使用系统的“[模板工具包][3]”构建的,这是一个易于学习的宏集,可以将 HTML 与所有应用程序分开。正因为如此,每个应用程序都已经标准化到核心。听起来令人困惑?它真的不是——它看起来很像 HTML但你写的内容少了很多。
### 初始安装
在您开始之前,请确保您已经安装了 Java 1.8(或更高版本)的 SDK 以及一个 Git 客户端。完成了?太棒了!接下来,切换到 Github 上的主分支:
```
git clone https://github.com/ilscipio/scipio-erp.git
cd scipio-erp
git checkout master
```
要安装系统,只需要运行 **./install.sh** 并从命令行中选择任一选项。在开发过程中,最好一直使用 **installation for development** (选项 1它还将安装一系列演示数据。对于专业安装您可以修改初始配置数据“种子数据”以便自动为您设置公司和目录数据。默认情况下系统将使用内部数据库运行但是它[也可以配置][4]使用各种关系数据库,比如 PostgreSQL 和 MariaDB 等。
![安装向导][6]
按照安装向导完成初始配置。
通过命令 **./start.sh** 启动系统,然后打开链接 **<https://localhost:8443/setup/>** 完成配置。如果您安装了演示数据,您可以使用用户名 **admin** 和密码 **scipio** 进行登录。在安装向导中,您可以设置公司简介、会计、仓库、产品目录、在线商店和额外的用户配置信息。暂时跳过产品商店配置界面上的网站实体配置。系统允许您使用不同的底层代码运行多个在线商店;除非您想这样做,否则一直选择默认值是最简单的。
祝贺您,您刚刚安装了 Scipio ERP在界面上操作一两分钟感受一下它的功能。
### 捷径
在您进入自定义之前,这里有一些方便的命令可以帮助您:
* 创建一个 shop-override**./ant create-component-shop-override**
* 创建一个新组件:**./ant create-component**
* 创建一个新主题组件:**./ant create-theme**
* 创建管理员用户:**./ant create-admin-user-login**
* 各种其他实用功能:**./ant -p**
* 用于安装和更新插件的实用程序:**./git-addons help**
另外,请记下以下位置:
* 将 Scipio 作为服务运行的脚本:**/tools/scripts/**
* 日志输出目录:**/runtime/logs**
* 管理应用程序:**<https://localhost:8443/admin/>**
* 电子商务应用程序:**<https://localhost:8443/shop/>**
最后Scipio ERP 在以下五个主要目录中构建了所有代码:
* Framework: 框架相关的源,应用程序服务器,通用界面和配置
* Applications: 核心应用程序
* Addons: 第三方扩展
* Themes: 修改界面外观
* Hot-deploy: 您自己的组件
除了一些配置,您将在 hot-deploy 和 themes 目录中工作。
### 在线商店定制
要真正使系统成为您自己的系统,请开始考虑使用[组件][7]。组件是一种模块化方法,可以覆盖、扩展和添加到系统中。您可以将组件视为独立的 Web 模块,它捕获了有关数据库([实体][8])、功能([服务][9])、界面([视图][10])、[事件和操作][11]以及 Web 应用程序的信息。由于组件功能,您可以添加自己的代码,同时保持与原始源兼容。
运行命令 **./ant create-component-shop-override** 并按照步骤创建您的在线商店组件。该操作将会在 hot-deploy 目录内创建一个新目录,该目录将扩展并覆盖原始的电子商务应用程序。
![组件目录结构][13]
一个典型的组件目录结构。
您的组件将具有以下目录结构:
* config: 配置
* data: 种子数据
* entitydef: 数据库表定义
* script: Groovy 脚本的位置
* servicedef: 服务定义
* src: Java 类
* webapp: 您的 web 应用程序
* widget: 界面定义
此外,**ivy.xml** 文件允许您将 Maven 库添加到构建过程中,**ofbiz-component.xml** 文件定义整个组件和 Web 应用程序结构。除了一些在当前目录所能够看到的,您还可以在 Web 应用程序的 **WEB-INF** 目录中找到 **controller.xml** 文件。这允许您定义请求实体并将它们连接到事件和界面。仅对于界面来说,您还可以使用内置的 CMS 功能,但优先要坚持使用核心机制。在引入更改之前,请熟悉**/applications/shop/**。
#### 添加自定义界面
还记得[模板工具包][3]吗?您会发现它在每个界面都有使用到。您可以将其视为一组易于学习的宏,它用来构建所有内容。下面是一个例子:
```
<@section title="Title">
    <@heading id="slider">Slider</@heading>
    <@row>
        <@cell columns=6>
            <@slider id="" class="" controls=true indicator=true>
                <@slide link="#" image="https://placehold.it/800x300">Just some content…</@slide>
                <@slide title="This is a title" link="#" image="https://placehold.it/800x300"></@slide>
            </@slider>
        </@cell>
        <@cell columns=6>Second column</@cell>
    </@row>
</@section>
```
不是很难,对吧?同时,主题包含 HTML 定义和样式。这将权力交给您的前端开发人员,他们可以定义每个宏的输出,并坚持使用自己的构建工具进行开发。
我们快点试试吧。首先,在您自己的在线商店上定义一个请求。您将修改此代码。一个内置的 CMS 系统也可以通过 **<https://localhost:8443/cms/>** 进行访问,它允许您以更有效的方式创建新模板和界面。它与模板工具包完全兼容,并附带可根据您的喜好采用的示例模板。但是既然我们试图在这里理解系统,那么首先让我们采用更复杂的方法。
打开您商店 webapp 目录中的 **[controller.xml][14]** 文件。Controller 跟踪请求事件并相应地执行操作。下面的操作将会在 **/shop/test** 下创建一个新的请求:
```
<!-- Request Mappings -->
<request-map uri="test">
     <security https="true" auth="false"/>
      <response name="success" type="view" value="test"/>
</request-map>
```
您可以定义多个响应,如果需要,可以在请求中使用事件或服务调用来确定您可能要使用的响应。我选择了“视图”类型的响应。视图是渲染的响应; 其他类型是请求重定向,转发等。系统附带各种渲染器,可让您稍后确定输出; 为此,请添加以下内容:
```
<!-- View Mappings -->
<view-map name="test" type="screen" page="component://mycomponent/widget/CommonScreens.xml#test"/>
```
用您自己的组件名称替换 **mycomponent**。然后,您可以通过在 **widget/CommonScreens.xml** 文件的标签内添加以下内容来定义您的第一个界面:
```
<screen name="test">
        <section>
            <actions>
            </actions>
            <widgets>
                <decorator-screen name="CommonShopAppDecorator" location="component://shop/widget/CommonScreens.xml">
                    <decorator-section name="body">
                        <platform-specific><html><html-template location="component://mycomponent/webapp/mycomponent/test/test.ftl"/></html></platform-specific>
                    </decorator-section>
                </decorator-screen>
            </widgets>
        </section>
    </screen>
```
商店界面实际上非常模块化,由多个元素组成([小部件,动作和装饰器][15])。为简单起见,请暂时保留原样,并通过添加第一个模板工具包文件来完成新网页。为此,创建一个新的 **webapp/mycomponent/test/test.ftl** 文件并添加以下内容:
```
<@alert type="info">Success!</@alert>
```
![自定义的界面][17]
一个自定义的界面。
打开 **<https://localhost:8443/shop/control/test/>** 并惊叹于你自己的成就。
#### 自定义主题
通过创建自己的主题来修改商店的界面外观。所有主题都可以作为组件在 themes 文件夹中找到。运行命令 **./ant create-theme** 来创建您自己的主题。
![主题组件布局][19]
一个典型的主题组件布局。
以下是最重要的目录和文件列表:
* 主题配置:**data/\*ThemeData.xml**
* 特定主题封装的 HTML**includes/\*.ftl**
* 模板工具包 HTML 定义:**includes/themeTemplate.ftl**
* CSS 类定义:**includes/themeStyles.ftl**
* CSS 框架: **webapp/theme-title/**
快速浏览工具包中的 Metro 主题;它使用 Foundation CSS 框架,并且充分利用了这个框架。然后,在新构建的 **webapp/theme-title** 目录中设置自己的主题并开始开发。Foundation-shop 主题是一个非常简单的特定于商店的主题实现,您可以将其用作您自己工作的基础。
瞧!您已经建立了自己的在线商店,准备个性化定制吧!
![搭建完成的 Scipio ERP 在线商店][21]
一个搭建完成的基于 Scipio ERP 的在线商店。
### 接下来是什么?
Scipio ERP 是一个功能强大的框架,可简化复杂的电子商务应用程序的开发。要获得更完整的理解,请查看项目[文档][7],尝试[在线演示][22],或者[加入社区][23]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/1/scipio-erp
作者:[Paul Piper][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/madppiper
[b]: https://github.com/lujun9972
[1]: https://www.scipioerp.com
[2]: https://ofbiz.apache.org/
[3]: https://www.scipioerp.com/community/developer/freemarker-macros/
[4]: https://www.scipioerp.com/community/developer/installation-configuration/configuration/#database-configuration
[5]: /file/419711
[6]: https://opensource.com/sites/default/files/uploads/setup_step5_sm.jpg (Setup wizard)
[7]: https://www.scipioerp.com/community/developer/architecture/components/
[8]: https://www.scipioerp.com/community/developer/entities/
[9]: https://www.scipioerp.com/community/developer/services/
[10]: https://www.scipioerp.com/community/developer/views-requests/
[11]: https://www.scipioerp.com/community/developer/events-actions/
[12]: /file/419716
[13]: https://opensource.com/sites/default/files/uploads/component_structure.jpg (component directory structure)
[14]: https://www.scipioerp.com/community/developer/views-requests/request-controller/
[15]: https://www.scipioerp.com/community/developer/views-requests/screen-widgets-decorators/
[16]: /file/419721
[17]: https://opensource.com/sites/default/files/uploads/success_screen_sm.jpg (Custom screen)
[18]: /file/419726
[19]: https://opensource.com/sites/default/files/uploads/theme_structure.jpg (theme component layout)
[20]: /file/419731
[21]: https://opensource.com/sites/default/files/uploads/finished_shop_1_sm.jpg (Finished Scipio ERP shop)
[22]: https://www.scipioerp.com/demo/
[23]: https://forum.scipioerp.com/

View File

@ -0,0 +1,108 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Linux kernel: Top 5 innovations)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-top-5-innovations)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Linux 内核:五大创新
======
想知道 Linux 内核中哪些是真正的创新(而不是那些赶时髦的东西)吗?请继续阅读。
![绿色背景的企鹅][1]
_创新_ 这个词在科技行业的使用几乎和 _革命_ 一样泛滥所以很难区分哪些是夸张、哪些是真正令人振奋的东西。Linux 内核被称为创新的,但它也被称为现代计算中最大的 “hack”一个微观世界中的庞然大物。
撇开市场和模式不谈Linux 可以说是开源世界中最受欢迎的内核,它在近 30 年的生命周期中引入了一些真正改变游戏规则的东西。
### Cgroups (2.6.24)
早在 2007 年Paul Menage 和 Rohit Seth 就在内核中添加了深奥的[_控制组_cgroups][2]功能cgroups 的当前实现是由 Tejun Heo 重写的)。这种新技术最初被用作一种方法,本质上是为了确保一组特定任务的服务质量。
例如,您可以为与您的 Web 服务相关联的所有任务创建一个控制组cgroup为常规备份创建另一个 cgroup再为一般操作系统需求创建另一个 cgroup。然后您可以控制每个组的资源百分比这样您的操作系统和 Web 服务就可以获得大部分系统资源,而您的备份进程可以访问剩余的资源。
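下面是一个最小的示意(假设系统使用 cgroup v1 并挂载了 cpu 控制器;具体路径和控制器布局因发行版而异,其中的组名和 PID 只是演示用的假设值):
```
# 为 Web 服务创建一个 cgroup给它较高的 CPU 份额
sudo mkdir /sys/fs/cgroup/cpu/websvc
echo 1024 | sudo tee /sys/fs/cgroup/cpu/websvc/cpu.shares

# 为备份任务创建另一个 cgroup给它较低的 CPU 份额
sudo mkdir /sys/fs/cgroup/cpu/backup
echo 128 | sudo tee /sys/fs/cgroup/cpu/backup/cpu.shares

# 将某个进程(假设其 PID 为 1234加入备份组
echo 1234 | sudo tee /sys/fs/cgroup/cpu/backup/tasks
```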
然而cgroups 如今最著名的是它作为驱动云技术的角色容器。事实上cgroups 最初被命名为[进程容器][3]。当它们被 [LXC][4]、[CoreOS][5] 和 Docker 等项目采用时,这并不令人惊讶。
就像闸门打开后一样,“_容器_”一词很快成为了 Linux 的同义词,微服务风格的基于云的“应用”概念也迅速成为了规范。如今cgroups 无处不在,我们已经很难脱离它们。每一个大规模的基础设施(如果你运行 Linux 的话,可能还有你的笔记本电脑)都以一种有意思的方式使用了 cgroups这使你的计算体验比以往任何时候都更加易于管理和灵活。
例如,您可能已经在电脑上安装了 [Flathub][6] 或 [Flatpak][7],或者您已经在工作中使用 [Kubernetes][8] 和/或 [OpenShift][9]。不管怎样,如果“容器”这个术语对你来说仍然模糊不清,你可以在《[Linux 容器背后的应用场景][10]》一文中获得对容器的实际理解。
### LKMM (4.17)
2018 年Jade Alglave、Alan Stern、Andrea Parri、Luc Maranget、Paul McKenney 以及其他几个人辛勤工作的成果被合并到主线 Linux 内核中以提供正式的内存模型。Linux 内核内存一致性模型LKMM子系统是一套描述 Linux 内存一致性模型的工具,同时也产生测试用例。
随着系统在物理设计上变得越来越复杂更多的中央处理器内核、更大的高速缓存和内存等它们就越难知道哪个中央处理器需要哪个地址空间以及何时需要。例如如果CPU0 需要将数据写入内存中的共享变量,并且 CPU1 需要读取该值,那么 CPU0 必须在 CPU1 尝试读取之前写入。类似地,如果值是以某种顺序写入内存的,那么期望它们也以同样的顺序被读取,而不管哪个或哪些 CPU 正在读取。
即使在单个处理器上,内存管理也需要特定的顺序。像 **x = y** 这样的简单操作需要处理器从内存中加载 **y** 的值,然后将该值存储在 **x** 中。在处理器从内存中读取值之前,是不能将存储在 **y** 中的值放入 **x** 变量的。此外还有地址依赖:**x[n] = 6** 要求在处理器能够存储值 6 之前,先加载 **n**。
LKMM 帮助识别和跟踪代码中的这些内存模式。这部分是通过一个名为 **herd** 的工具来实现的,该工具(以逻辑公式的形式)定义了内存模型施加的约束,然后列举出与这些约束相一致的所有可能结果。
### 低延迟补丁 (2.6.38)
很久以前在 2011 年之前的日子里),如果你想[在 Linux 上进行多媒体工作][11],你必须得到一个低延迟内核。这主要适用于[录音][12]时添加许多实时效果的场景(如对着麦克风唱歌、添加混响,并在耳机中无延迟地听到自己的声音)。有些发行版,如 [Ubuntu Studio][13],可靠地提供了这样一个内核,所以这实际上算不上什么障碍,只是艺术家在选择发行版时的一个重要考量。
然而,如果你没有使用 Ubuntu Studio或者你需要在发行版跟进之前更新你的内核你就必须访问 rt-patches 网页,下载内核补丁,将它们应用到你的内核源代码,编译,然后手动安装。
随着内核版本 2.6.38 的发布这个过程结束了。默认情况下Linux 内核突然像变魔术一样内置了低延迟代码(根据基准测试,延迟至少降低了 10 倍)。不再需要下载补丁,不用编译。一切都很顺利,这都要归功于 Mike Galbraith 编写的一个 200 行的小补丁。
对于全世界的开源多媒体艺术家来说,这是一个改变游戏规则的时刻。从 2011 年开始,情况变得如此美好,以至于在 2016 年,我向自己发起了一个挑战:[在树莓派 v1B 型上搭建一个数字音频工作站DAW][14],结果发现它运行得出奇地好。
### RCU (2.5)
RCU即读-拷贝-更新read-copy-update是计算机科学中定义的一个系统它允许多个处理器线程从共享内存中读取数据。它通过推迟更新来做到这一点但同时会将数据标记为已更新以确保读取方得到的是最新的内容。实际上这意味着读取与更新可以同时发生。
典型的 RCU 循环有点像这样:
1. 删除指向数据的指针,以防止其他读操作引用它。
2. 等待读操作完成其关键处理。
3. 回收内存空间。
将更新阶段划分为删除和回收阶段意味着更新程序会立即执行删除,同时推迟回收直到所有活动读取完成(通过阻止它们或注册一个回调以便在完成时调用)。
虽然读-拷贝-更新的概念不是为 Linux 内核发明的,但它在 Linux 中的实现是该技术的一个定义性的例子。
### 合作 (0.01)
对于 Linux 内核创新的问题,最重要的是协作,最终答案也是。称之为好时机,称之为技术优势,称之为黑客能力,或者仅仅称之为开源,但 Linux 内核及其支持的许多项目是协作与合作的光辉范例。
它远远超出了内核范畴。各行各业的人都对开源做出了贡献,可以说是因为 Linux 内核。Linux 曾经是,现在仍然是 [自由软件][15]的主要力量,激励人们把他们的代码、艺术、想法或者仅仅是他们自己带到一个全球化的、有生产力的、多样化的人类社区中。
### 你最喜欢的创新是什么?
这个列表偏向于我自己的兴趣容器、非统一内存访问NUMA和多媒体。你最喜欢的内核创新很可能没有出现在这个列表中。请在评论中告诉我
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/linux-kernel-top-5-innovations
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://en.wikipedia.org/wiki/Cgroups
[3]: https://lkml.org/lkml/2006/10/20/251
[4]: https://linuxcontainers.org
[5]: https://coreos.com/
[6]: http://flathub.org
[7]: http://flatpak.org
[8]: http://kubernetes.io
[9]: https://www.redhat.com/sysadmin/learn-openshift-minishift
[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
[11]: http://slackermedia.info
[12]: https://opensource.com/article/17/6/qtractor-audio
[13]: http://ubuntustudio.org
[14]: https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker
[15]: http://fsf.org

View File

@ -1,172 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing Ansible environments on MacOS with Conda)
[#]: via: (https://opensource.com/article/19/8/using-conda-ansible-administration-macos)
[#]: author: (James Farrell https://opensource.com/users/jamesf)
使用 Conda 管理 MacOS 上的 Ansible 环境
=====
Conda 将 Ansible 所需的一切都收集到虚拟环境中并将其与其他项目分开。
![CICD with gears][1]
如果您是一名使用 MacOS 并参与 Ansible 管理的 Python 开发人员,您可能希望使用 Conda 包管理器将 Ansible 的工作内容与核心操作系统和其他本地项目分开。
Ansible 基于 Python的。让 Ansible 在 MacOS 上工作 Conda 并不是必须要的,但是它确实让您管理 Python 版本和包依赖变得更加容易。这允许您在 MacOS 上使用升级的 Python 版本并在您的系统中、Ansible 和其他编程项目之间保持 Python 包的依赖性是相互独立的。
在 MacOS 上安装 Ansible 还有其他方法。您可以使用[Homebrew][2],但是如果您对 Python 开发(或 Ansible 开发)感兴趣,您可能会发现在一个独立 Python 虚拟环境中管理 Ansible 可以减少一些混乱。我觉得这更简单;与其试图将 Python 版本和依赖项加载到系统或在 **/usr/local** 目录中 ,还不如使用 Conda 帮助我将 Ansible 所需的一切都收集到一个虚拟环境中,并将其与其他项目完全分开。
本文着重于使用 Conda 作为 Python 项目来管理 Ansible ,以保持它的干净并与其他项目分开。请继续阅读,并了解如何安装 Conda、创建新的虚拟环境、安装 Ansible 并对其进行测试。
### 序幕
最近,我想学习[Ansible][3],所以我需要找到安装它的最佳方法。
我通常对在我的日常工作站上安装东西很谨慎。我尤其不喜欢对供应商的默认操作系统安装应用手动更新(这是我多年作为 Unix 系统管理的首选)。我真的很想使用 Python 3.7,但是 MacOS 包是旧的2.7,我不会安装任何可能干扰核心 MacOS 系统的全局 Python 包。
所以,我使用本地 Ubuntu 18.04 虚拟机上开始了我的 Ansible 工作。这提供了真正意义上的的安全隔离,但我很快发现管理它是非常乏味的。所以我着手研究如何在本机 MacOS 上获得一个灵活但独立的 Ansible 系统。
由于 Ansible 基于 PythonConda 似乎是理想的解决方案。
### 安装 Conda
Conda 是一个开源软件,它提供方便的包和环境管理功能。它可以帮助您管理多个版本的 Python 、安装软件包依赖关系、执行升级和维护项目隔离。如果您手动管理 Python 虚拟环境Conda 将有助于简化和管理您的工作。浏览 [Conda 文档][4]可以了解更多细节。
我选择了 [Miniconda][5] Python 3.7 安装在我的工作站中,因为我想要最新的 Python 版本。无论选择哪个版本,您都可以使用其他版本的 Python 安装新的虚拟环境。
要安装 Conda请下载 PKG 格式的文件,进行通常的双击,并选择 “Install for me only” 选项。安装在我的系统上占用了大约158兆的空间。
安装完成后,调出一个终端来查看您有什么了。您应该看到:
* 一个 **miniconda3** 目录在您的 **home** 目录中
* shell 提示符被修改为 "(base)"
* **.bash_profile** 文件被 Conda-specific 设置内容更新
现在已经安装了基础,您就有了第一个 Python 虚拟环境。运行 Python 版本检查可以证明这一点,您的 PATH 将指向新的位置:
```
(base) $ which python
/Users/jfarrell/miniconda3/bin/python
(base) $ python --version
Python 3.7.1
```
现在安装了 Conda ,下一步是建立一个虚拟环境,然后安装 Ansible 并运行。
### 为 Ansible 创建虚拟环境
我想将 Ansible 与我的其他 Python 项目分开,所以我创建了一个新的虚拟环境并切换到它:
```
(base) $ conda create --name ansible-env --clone base
(base) $ conda activate ansible-env
(ansible-env) $ conda env list
```
第一个命令将 Conda 库克隆到一个名为 **ansible-env** 的新虚拟环境中。克隆引入了 Python 3.7 版本和一系列默认的 Python 模块,您可以根据需要添加、删除或升级这些模块。
第二个命令将 shell 上下文更改为这个新的环境。它为 Python 及其包含的模块设置了正确的路径。请注意,在 **conda activate ansible-env** 命令后,您的 shell 提示符会发生变化。
第三个命令不是必须的;它列出了安装了哪些 Python 模块及其版本和其他数据。
您可以随时使用 Conda 的 **activate** 命令切换到另一个虚拟环境。这将带您回到基本的: **conda base**
### 安装 Ansible
安装 Ansible 有多种方法,但是使用 Conda 可以将 Ansible 版本和所有需要的依赖项打包在一个地方。Conda 提供了灵活的,既可以将所有内容分开,又可以根据需要添加其他新环境(我将在后面演示)。
要安装 Ansible 的相对较新版本,请使用:
```
(base) $ conda activate ansible-env
(ansible-env) $ conda install -c conda-forge ansible
```
由于 Ansible 不是 Conda 默认的一部分,因此**-c**用于从备用通道搜索和安装。Ansible 现已安装到**ansible-env**虚拟环境中,可以使用了。
### 使用 Ansible
既然您已经安装了 Conda 虚拟环境,就可以使用它了。首先,确保要控制的节点已将工作站的 SSH 密钥安装到正确的用户帐户。
调出一个新的 shell 并运行一些基本的Ansible命令:
```
(base) $ conda activate ansible-env
(ansible-env) $ ansible --version
ansible 2.8.1
  config file = None
  configured module search path = ['/Users/jfarrell/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/jfarrell/miniconda3/envs/ansibleTest/lib/python3.7/site-packages/ansible
  executable location = /Users/jfarrell/miniconda3/envs/ansibleTest/bin/ansible
  python version = 3.7.1 (default, Dec 14 2018, 13:28:58) [Clang 4.0.1 (tags/RELEASE_401/final)]
(ansible-env) $ ansible all -m ping -u ansible
192.168.99.200 | SUCCESS =&gt; {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```
现在 Ansible 正在工作了,您可以在控制台中抽身,并从您的 MacOS 工作站中使用它们。
### 克隆新的 Ansible 进行 Ansible 开发
这部分完全是可选的;只有当您想要额外的虚拟环境来修改 Ansible 或者安全地使用有问题的 Python 模块时,才需要它。您可以通过以下方式将主 Ansible 环境克隆到开发副本中:
```
(ansible-env) $ conda create --name ansible-dev --clone ansible-env
(ansible-env) $ conda activate ansible-dev
(ansible-dev) $
```
### 需要注意的问题
偶尔您可能遇到使用 Conda 的麻烦。您通常可以通过以下方式删除不良环境:
```
$ conda activate base
$ conda remove --name ansible-dev --all
```
如果出现无法解决的错误,通常可以通过在 **~/miniconda3/envs** 中找到环境并删除整个目录来直接删除环境。如果基础损坏了,您可以删除整个 **~/miniconda3**,然后从 PKG 文件中重新安装。只要确保保留 **~/miniconda3/envs** ,或使用 Conda 工具导出环境配置并在以后重新创建即可。
MacOS 上不包括 **sshpass** 程序。只有当您的 Ansible 工作要求您向 Ansible 提供SSH登录密码时才需要它。您可以在 SourceForge 上找到当前的[sshpass source][6]。
最后,基础 Conda Python 模块列表可能缺少您工作所需的一些 Python 模块。如果您需要安装一个模块,**conda install &lt;package&gt;** 命令是首选的,但是 **pip** 可以在需要的地方使用Conda会识别安装模块。
### 结论
Ansible 是一个强大的自动化工具值得我们去学习。Conda是一个简单有效的 Python 虚拟环境管理工具。
在您的 MacOS 环境中保持软件安装分离是保持日常工作环境的稳定性和健全性的谨慎方法。Conda 尤其有助于升级您的Python 版本,将 Ansible 从其他项目中分离出来,并安全地使用 Ansible。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/using-conda-ansible-administration-macos
作者:[James Farrell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jamesf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
[2]: https://brew.sh/
[3]: https://docs.ansible.com/?extIdCarryOver=true&sc_cid=701f2000001OH6uAAG
[4]: https://conda.io/projects/conda/en/latest/index.html
[5]: https://docs.conda.io/en/latest/miniconda.html
[6]: https://sourceforge.net/projects/sshpass/

View File

@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to put an HTML page on the internet)
[#]: via: (https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/)
[#]: author: (Julia Evans https://jvns.ca/)
如何在互联网放置 HTML 页面
======
我喜欢互联网的一点是在互联网放置静态页面是如此简单。今天有人问我该怎么做,所以我想我会快速地写下来!
### 只是一个 HTML 页面
我的所有网站都只是静态 HTML 和 CSS。我的网页设计技巧相对不高<https://wizardzines.com>是我自己开发的最复杂的网站),因此保持我所有的网站相对简单意味着我可以做一些改变/修复,而不会花费大量时间。
因此,我们将在此文章中采用尽可能简单的方式 - 只需一个 HTML 页面。
### HTML 页面
我们要放在互联网上的网站只是一个名为 `index.html` 的文件。你可以在 <https://github.com/jvns/website-example> 找到它,它是一个 Github 仓库,其中只包含一个文件。
HTML 文件中包含一些 CSS使其看起来不那么无聊部分复制自 <https://example.com>)。
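如果你想先在本地准备好这个文件,下面是一个最小的示意(文件内容纯属演示用的假设,并非上文仓库中的实际内容):
```
# 生成一个最小的 index.html
cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>我的网站</title>
  </head>
  <body>
    <h1>你好,互联网!</h1>
  </body>
</html>
EOF
```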
### 如何将 HTML 页面放在互联网上
有以下几步:
1. 注册 [Neocities][1] 帐户
2. 将 index.html 复制到你自己 neocities 站点的 index.html 中
3. 完成
上面的 index.html 页面位于 [julia-example-website.neocities.org][2],如果你查看源代码,你将看到它与 GitHub 仓库中的 HTML 相同。
我认为这可能是将 HTML 页面放到互联网上的最简单的方法(这让人回想起 Geocities那是我在 2003 年制作第一个网站的方式):)。我也喜欢 Neocities就像我同样喜欢的 [glitch][3]),因为它适合实验、学习,并且充满乐趣。
### 其他选择
这绝不是唯一简单的方式:在你推送 Git 仓库时GitHub Pages、GitLab Pages 以及 Netlify 都会自动发布站点,而且它们都非常易于使用(只需将它们连接到你的 GitHub 仓库即可)。我个人使用 Git 仓库的方式,因为不把东西放进 Git 会让我感到紧张,我想知道我实际推送的页面发生了什么更改。但我想,如果你第一次只是想把用 HTML/CSS 制作的站点放到互联网上,那么 Neocities 是一个非常好的方法。
如果你不只是玩,而是要将网站用于真实用途,那么你或许会需要买一个域名,以便你将来可以更改托管服务提供商,但这有点不那么简单。
### 这是学习 HTML 的一个很好的起点
如果你熟悉在 Git 中编辑文件,同时想练习 HTML/CSS 的话,我认为将它放在网站中是一个有趣的方式!我真的很喜欢它的简单性 - 实际上这只有一个文件,所以没有其他花哨的东西需要去理解。
还有很多方法可以复杂化/扩展它,比如这个博客实际上是用 [Hugo][4] 生成的,它生成了一堆 HTML 文件并放在网络中,但从基础开始总是不错的。
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://neocities.org/
[2]: https://julia-example-website.neocities.org/
[3]: https://glitch.com
[4]: https://gohugo.io/

View File

@ -7,31 +7,29 @@
[#]: via: (https://fedoramagazine.org/how-to-set-up-a-tftp-server-on-fedora/)
[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/)
How to set up a TFTP server on Fedora
如何在 Fedora 上建立一个 TFTP 服务器
======
![][1]
**TFTP**, or Trivial File Transfer Protocol, allows users to transfer files between systems using the [UDP protocol][2]. By default, it uses UDP port 69. The TFTP protocol is extensively used to support remote booting of diskless devices. So, setting up a TFTP server on your own local network can be an interesting way to do [Fedora installations][3], or other diskless operations.
**TFTP** 即简单文件传输协议,允许用户通过 [UDP 协议][2]在系统之间传输文件。默认情况下,该协议使用 UDP 的 69 号端口。TFTP 协议广泛用于支持无盘设备的远程启动。因此,在你的本地网络建立一个 TFTP 服务器会是一件有趣的事情,这样你就可以进行 [Fedora 的安装][3]和其他无盘设备的操作。
TFTP can only read and write files to or from a remote system. It doesnt have the capability to list files or make any changes on the remote server. There are also no provisions for user authentication. Because of security implications and the lack of advanced features, TFTP is generally only used on a local area network (LAN).
TFTP 只能从远端系统读取文件,或者向远端系统写入文件。它没有列出远端服务器上文件的能力,也不能对远端服务器做出任何其他更改,同样没有提供用户身份验证。由于安全隐患和缺乏高级功能TFTP 通常仅用于局域网LAN
### TFTP server installation
### 安装 TFTP 服务器
The first thing you will need to do is install the TFTP client and server packages:
首先你要做的事就是安装 TFTP 客户端和 TFTP 服务器:
```
dnf install tftp-server tftp -y
```
This creates a _tftp_ service and socket file for [systemd][4] under _/usr/lib/systemd/system_.
上述的这条命令会为 [systemd][4] 在 _/usr/lib/systemd/system_ 目录下创建 _tftp.service__tftp.socket_ 文件。
```
/usr/lib/systemd/system/tftp.service
/usr/lib/systemd/system/tftp.socket
```
Next, copy and rename these files to _/etc/systemd/system_:
接下来,将这两个文件复制到 _/etc/systemd/system_ 目录下,并重新命名。
```
cp /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service
@ -39,9 +37,9 @@ cp /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service
cp /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket
```
### Making local changes
### 修改文件
You need to edit these files from the new location after youve copied and renamed them, to add some additional parameters. Here is what the _tftp-server.service_ file initially looks like:
当你复制和重命名这些文件后,你需要在新位置编辑它们,以添加一些额外的参数。下面是 _tftp-server.service_ 最初的样子:
```
[Unit]
@ -57,40 +55,36 @@ StandardInput=socket
Also=tftp.socket
```
Make the following changes to the _[Unit]_ section:
_[Unit]_ 部分添加如下内容:
```
Requires=tftp-server.socket
```
Make the following changes to the _ExecStart_ line:
修改 _ExecStart_ 行:
```
ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
```
Here are what the options mean:
下面是这些选项的意思:
* The _**-c**_ option allows new files to be created.
* The _**-p**_ option is used to have no additional permissions checks performed above the normal system-provided access controls.
* The _**-s**_ option is recommended for security as well as compatibility with some boot ROMs which cannot be easily made to include a directory name in its request.
* _**-c**_ 选项允许创建新的文件。
* _**-p**_ 选项用于指定除了正常的系统权限检查之外,不再执行额外的权限检查。
* 建议使用 _**-s**_ 选项,以确保安全性以及与某些引导 ROM 的兼容性(这些引导 ROM 不容易在其请求中包含目录名)。
传输文件时,默认的上传和下载位置是 _/var/lib/tftpboot_。
The default upload/download location for transferring the files is _/var/lib/tftpboot_.
Next, make the following changes to the _[Install]_ section:
下一步,修改 _[Install]_ 部分的内容:
```
[Install]
WantedBy=multi-user.target
Also=tftp-server.socket
```
Dont forget to save your changes!
Here is the completed _/etc/systemd/system/tftp-server.service_ file:
不要忘记保存你的修改。
下面是 _/etc/systemd/system/tftp-server.service_ 文件的完整内容:
```
[Unit]
Description=Tftp Server
@ -106,42 +100,41 @@ WantedBy=multi-user.target
Also=tftp-server.socket
```
### Starting the TFTP server
### 启动 TFTP 服务器
Reload the systemd daemon:
重新加载 systemd 守护进程:
```
systemctl daemon-reload
```
Now start and enable the server:
现在启动并启用该服务器:
```
systemctl enable --now tftp-server
```
To change the permissions of the TFTP server to allow upload and download functionality, use this command. Note TFTP is an inherently insecure protocol, so this may not be advised on a network you share with other people.
要更改 TFTP 服务器的权限,使其支持上传和下载功能,请使用此命令。注意TFTP 是一种天生不安全的协议,因此不建议你在与其他人共享的网络上这样做。
```
chmod 777 /var/lib/tftpboot
```
Configure your firewall to allow TFTP traffic:
配置防火墙,允许 TFTP 流量通过:
```
firewall-cmd --add-service=tftp --perm
firewall-cmd --reload
```
### Client Configuration
### 客户端配置
Install the TFTP client:
安装 TFTP 客户端:
```
yum install tftp -y
```
Run the _tftp_ command to connect to the TFTP server. Here is an example that enables the verbose option:
运行 _tftp_ 命令连接服务器。下面是一个启用详细信息选项的例子:
```
[client@thinclient:~ ]$ tftp 192.168.1.164
@ -154,7 +147,7 @@ tftp> quit
[client@thinclient:~ ]$
```
Remember, TFTP does not have the ability to list file names. So youll need to know the file name before running the _get_ command to download any files.
记住TFTP 没有列出服务器上文件的能力,因此在使用 _get_ 命令下载任何文件之前,你需要知道文件的具体名称。
* * *
@ -166,7 +159,7 @@ via: https://fedoramagazine.org/how-to-set-up-a-tftp-server-on-fedora/
作者:[Curt Warfield][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[amwps290](https://github.com/amwps290)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,358 @@
[#]: collector: (lujun9972)
[#]: translator: (asche910)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Find and Replace a String in File Using the sed Command in Linux)
[#]: via: (https://www.2daygeek.com/linux-sed-to-find-and-replace-string-in-files/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
如何在 Linux 中使用 sed 命令查找和替换文件中的字符串
======
当你在处理文本文件时,很可能需要查找和替换文件中的字符串。
sed 命令主要用于替换文件中的文本。
在 Linux 中,这可以通过使用 sed 命令和 awk 命令来完成。
在本教程中,我们将告诉你如何使用 sed 命令做到这一点,之后再讨论与 awk 命令相关的内容。
### sed 命令是什么
sed 命令表示流编辑器Stream Editor用来在 Linux 上执行基本的文本操作。它可以执行各种功能,如搜索、查找、修改、插入或删除文件。
此外,它也可以执行复杂的正则表达式匹配。
它可用于以下目的:
* 查找和替换匹配给定模式的内容。
* 在指定行查找和替换匹配给定模式的内容。
* 在所有行查找和替换匹配给定模式的内容。
* 搜索并同时替换两种不同的模式。
本文列出的 16 个例子可以帮助你掌握 sed 命令。
如果要使用 sed 命令删除文件中的行,请参考下面的文章。
**注意:** 由于这是一篇演示文章,我们使用不带 `-i` 选项的 sed 命令,它只会在 Linux 终端中打印结果,而不会真正修改文件。
但是,在实际环境中,如果你想修改源文件,请使用带 `-i` 选项的 sed 命令。
sed 命令替换字符串的常见语法:
```
sed -i 's/Search_String/Replacement_String/g' Input_File
```
要做到这一点,我们首先需要了解 sed 的语法。相关细节如下:
* `sed:` 这是一个 Linux 命令。
* `-i:` 这是 sed 命令的一个选项它有什么作用呢默认情况下sed 将结果打印到标准输出。当你使用这个选项时sed 会就地修改文件。当你添加一个后缀(比如 `-i.bak`)时,就会创建原始文件的备份(参见列表后的示例)。
* `s:` 字母 s 是替换命令。
* `Search_String:` 要搜索的给定字符串或正则表达式。
* `Replacement_String:` 用来替换的字符串。
* `g:` 全局替换标志。默认情况下sed 命令只替换每一行中第一次出现的模式pattern而不会替换行中的其他匹配结果。但提供了该替换标志时所有匹配都将被替换。
* `/:` 分界符。
* `Input_File:` 要执行操作的文件名。
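例如,下面的示意演示了带备份后缀的 `-i` 选项的效果(假设当前目录下有一个名为 sed-test.txt 的文件,与后文的演示文件相同):
```
# 就地替换,同时把原始内容备份为 sed-test.txt.bak
sed -i.bak 's/unix/linux/g' sed-test.txt
```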
让我们来看看使用 sed 命令在文件中搜索和转换文本的一些常用例子。
我们已经创建了下面的文件用于演示。
```
# cat sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 1) 如何查找和替换一行中“第一次”出现的模式
下面的 sed 命令用 **linux** 替换文件中的 **unix**。这只会改变每一行中模式的第一个实例。
```
# sed 's/unix/linux/' sed-test.txt
1 Unix linux unix 23
2 linux Linux 34
3 linuxlinux UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 2) 如何查找和替换每一行中“第 N 次”出现的模式
在行中使用 /1、/2、…、/n 这些标志来替换相应位置的匹配。
下面的 sed 命令用 “linux” 替换一行中 “unix” 模式的第二个实例。
```
# sed 's/unix/linux/2' sed-test.txt
1 Unix unix linux 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 3) 如何搜索和替换一行中模式的所有实例
下面的 sed 命令用 “linux” 替换一行中所有的 “unix” 实例,因为 “g” 表示全局替换标志。
```
# sed 's/unix/linux/g' sed-test.txt
1 Unix linux linux 23
2 linux Linux 34
3 linuxlinux UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 4) 如何替换一行中从“第 N 个”匹配开始的所有模式实例
下面的 sed 命令从一行中模式的“第 N 个”实例开始替换其后所有的实例在本例中为第 2 个)。
```
# sed 's/unix/linux/2g' sed-test.txt
1 Unix unix linux 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 5) 在特定的行号搜索和替换模式
你可以替换特定行号中的字符串。下面的 sed 命令仅在第 3 行用 “linux” 替换 “unix” 模式。
```
# sed '3 s/unix/linux/' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxlinux UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 6) 在特定的行号范围内搜索和替换模式
你可以指定行号的范围来替换字符串。
下面的 sed 命令在第 1 到 3 行间用 “linux” 替换 “unix” 模式。
```
# sed '1,3 s/unix/linux/' sed-test.txt
1 Unix linux unix 23
2 linux Linux 34
3 linuxlinux UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 7) 如何查找和修改最后一行的模式
下面的 sed 命令允许你只替换最后一行中匹配的字符串。
这条 sed 命令只在最后一行用 “Unix” 替换 “Linux” 模式。
```
# sed '$ s/Linux/Unix/' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Unix is free and opensource operating system
```
### 8) 如何在一行中只查找和替换完整的单词匹配
你可能已经注意到,在第 6 个例子中,子串 “linuxunix” 被替换成了 “linuxlinux”。如果你只想更改完整匹配的单词请在搜索字符串的两端使用单词边界符 “\b”。
```
# sed '1,3 s/\bunix\b/linux/' sed-test.txt
1 Unix linux unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 9) 如何以不区分大小写的方式搜索和替换模式
大家都知道Linux 是区分大小写的。要进行不区分大小写的模式匹配,请使用 I 标志。
```
# sed 's/unix/linux/gI' sed-test.txt
1 linux linux linux 23
2 linux Linux 34
3 linuxlinux linuxLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 10) 如何查找和替换包含分隔符的字符串
当你搜索和替换含有分隔符的字符串时,需要用反斜杠 “\” 对分隔符进行转义。
在这个例子中,我们将用 “/usr/bin/fish” 来替换 “/bin/bash”。
```
# sed 's/\/bin\/bash/\/usr\/bin\/fish/g' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /usr/bin/fish CentOS Linux OS
Linux is free and opensource operating system
```
上述 sed 命令按预期工作,但它看起来很糟糕。为了简化,大部分人会用竖线 “|” 作为分界符。所以,我建议你使用它。
```
# sed 's|/bin/bash|/usr/bin/fish/|g' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /usr/bin/fish/ CentOS Linux OS
Linux is free and opensource operating system
```
### 11) 如何查找和替换匹配给定模式的数字
类似地数字也可以按模式替换。下面的 sed 命令使用 “[0-9]” 模式,将所有数字替换为 “number”。
```
# sed 's/[0-9]/number/g' sed-test.txt
number Unix unix unix numbernumber
number linux Linux numbernumber
number linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 12) 如何只查找和替换两位数字
如果你只想按模式匹配并替换两位数字,请使用下面的 sed 命令。
```
# sed 's/\b[0-9]\{2\}\b/number/g' sed-test.txt
1 Unix unix unix number
2 linux Linux number
3 linuxunix UnixLinux
linux /bin/bash CentOS Linux OS
Linux is free and opensource operating system
```
### 13) 如何用 sed 命令只打印被替换的行
如果你只想显示被更改的行,请使用下面的 sed 命令。
* p它会把被替换的行在终端上输出两次。
* n它抑制由 “p” 标志所产生的重复行。
```
# sed -n 's/Unix/Linux/p' sed-test.txt
1 Linux unix unix 23
3 linuxunix LinuxLinux
```
### 14) 如何同时运行多个 sed 命令
以下 sed 命令可以同时检测和替换两个不同的模式。
下面的 sed 命令搜索 “linuxunix” 和 “CentOS” 模式,一次性将它们分别替换为 “LINUXUNIX” 和 “RHEL8”。
```
# sed -e 's/linuxunix/LINUXUNIX/g' -e 's/CentOS/RHEL8/g' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 LINUXUNIX UnixLinux
linux /bin/bash RHEL8 Linux OS
Linux is free and opensource operating system
```
下面的 sed 命令搜索两个不同的模式,并将它们替换为同一个字符串。
以下 sed 命令搜索 “linuxunix” 和 “CentOS” 模式,并用 “Fedora30” 替换它们。
```
# sed -e 's/\(linuxunix\|CentOS\)/Fedora30/g' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 Fedora30 UnixLinux
linux /bin/bash Fedora30 Linux OS
Linux is free and opensource operating system
```
### 15) 如果给定的模式匹配,如何查找和替换整行
如果模式匹配sed 命令可以用新的一行来替换整行。这可以通过使用 “c” 标志来完成。
```
# sed '/OS/ c New Line' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
New Line
Linux is free and opensource operating system
```
### 16) 如何在匹配模式的行中搜索和替换字符串
你可以为 sed 命令指定一个要匹配的行模式。在模式匹配的行中sed 命令会搜索要被替换的字符串。
下面的 sed 命令首先查找含有 “OS” 模式的行,然后用 “ArchLinux” 替换其中的单词 “Linux”。
```
# sed '/OS/ s/Linux/ArchLinux/' sed-test.txt
1 Unix unix unix 23
2 linux Linux 34
3 linuxunix UnixLinux
linux /bin/bash CentOS ArchLinux OS
Linux is free and opensource operating system
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/linux-sed-to-find-and-replace-string-in-files/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[Asche910](https://github.com/asche910)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972