Merge remote-tracking branch 'LCTT/master'

Xingyu Wang 2019-09-19 08:36:09 +08:00
commit 3d8f5c88b7
25 changed files with 4536 additions and 278 deletions

View File

@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11356-1.html)
[#]: subject: (Managing Ansible environments on MacOS with Conda)
[#]: via: (https://opensource.com/article/19/8/using-conda-ansible-administration-macos)
[#]: author: (James Farrell https://opensource.com/users/jamesf)
Managing Ansible environments on MacOS with Conda
=====
> Conda collects everything Ansible needs into a virtual environment and keeps it separate from your other projects.
![](https://img.linux.net.cn/data/attachment/album/201909/18/123838m1bcmke570kl6kzm.jpg)
If you are a Python developer using MacOS and involved with Ansible administration, you may want to use the Conda package manager to keep your Ansible work separate from the core OS and other local projects.
Ansible is based on Python. Conda is not required to make Ansible work on MacOS, but it does make managing Python versions and package dependencies easier. This lets you use an upgraded Python version on MacOS and keep Python package dependencies separate between your system, Ansible, and other programming projects.
There are other ways to install Ansible on MacOS. You could use [Homebrew][2], but if you are interested in Python development (or Ansible development), you may find that managing Ansible in an independent Python virtual environment reduces some confusion. I find it simpler; rather than trying to load Python versions and dependencies into the system or into `/usr/local`, Conda helps me gather everything Ansible needs into a virtual environment and keep it completely separate from other projects.
This article focuses on using Conda to manage Ansible as a Python project, keeping it clean and separate from other projects. Read on to learn how to install Conda, create a new virtual environment, install Ansible, and test it.
### Prelude
Recently, I wanted to learn [Ansible][3], so I needed to find the best way to install it.
I am generally cautious about installing things on my daily-use workstation. In particular, I dislike applying manual updates to the vendor's default OS installation (a habit formed over my years of Unix system administration). I really wanted to use Python 3.7, but the Python packaged with MacOS is the old 2.7, and I was not going to install any global Python packages that might interfere with the core MacOS system.
So, I started my Ansible work on a local Ubuntu 18.04 virtual machine. That provided real isolation and safety, but I soon found that managing it was quite tedious. So I set out to figure out how to get a flexible but isolated Ansible system on native MacOS.
Since Ansible is based on Python, Conda seemed to be the ideal solution.
### Installing Conda
Conda is open source software that provides convenient package- and environment-management features. It can help you manage multiple versions of Python, install package dependencies, perform upgrades, and keep projects isolated. If you manage Python virtual environments by hand, Conda will help streamline and manage your work. See the [Conda documentation][4] for all the details.
I chose the [Miniconda][5] Python 3.7 installation for my workstation because I wanted the latest Python version. Regardless of which version you select, you can always install new virtual environments with other versions of Python.
To install Conda, download the PKG format file, do the usual double-click, and select the "Install for me only" option. The install took about 158MB of space on my system.
After the installation, bring up a terminal to see what you have. You should see:
* a `miniconda3` directory in your home directory
* the shell prompt modified to prepend `(base)`
* `.bash_profile` updated with some Conda-specific settings
Now that the base is installed, you have your first Python virtual environment. Running the usual Python version check proves this, and your `PATH` will point to the new location:
```
(base) $ which python
/Users/jfarrell/miniconda3/bin/python
(base) $ python --version
Python 3.7.1
```
Now that Conda is installed, the next step is to set up a virtual environment, then get Ansible installed and running.
### Creating a virtual environment for Ansible
I wanted to keep Ansible separate from my other Python projects, so I created a new virtual environment and switched over to it:
```
(base) $ conda create --name ansible-env --clone base
(base) $ conda activate ansible-env
(ansible-env) $ conda env list
```
The first command clones the Conda base into a new virtual environment called `ansible-env`. The clone brings in the Python 3.7 version and a bunch of default Python modules that you can add to, remove, or upgrade as needed.
The second command changes the shell context to the new `ansible-env` environment. It sets the proper paths for Python and the modules it contains. Notice that your shell prompt changes after the `conda activate ansible-env` command.
The third command is not required; it lists what Python modules are installed, along with their version and other data.
You can switch to another virtual environment at any time with Conda's `activate` command. This will bring you back to the base environment: `conda activate base`.
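As a quick illustration, switching back and forth looks like this (the prompt changes each time):
```
(ansible-env) $ conda activate base
(base) $ conda activate ansible-env
(ansible-env) $
```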
### Installing Ansible
There are various ways to install Ansible, but using Conda keeps the Ansible version and all of the desired dependencies packaged in one place. Conda provides the flexibility both to keep everything separated and to add other new environments as needed (as I will demonstrate later).
To install a relatively recent version of Ansible, use:
```
(base) $ conda activate ansible-env
(ansible-env) $ conda install -c conda-forge ansible
```
Since Ansible is not part of Conda's default channels, the `-c` is used to search for and install from an alternate channel. Ansible is now installed in the `ansible-env` virtual environment and ready to use.
### Using Ansible
Now that you have installed a Conda virtual environment, you are ready to use it. First, make sure the nodes you want to control have your workstation's SSH key installed to the right user account.
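If that key is not in place yet, something like the following usually does it (a sketch only; the `ansible` user and node address mirror the example below and should be adjusted to your own hosts):
```
$ ssh-keygen -t rsa                      # skip if you already have a key pair
$ ssh-copy-id ansible@192.168.99.200
```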
Bring up a new shell and run some basic Ansible commands:
```
(base) $ conda activate ansible-env
(ansible-env) $ ansible --version
ansible 2.8.1
config file = None
configured module search path = ['/Users/jfarrell/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jfarrell/miniconda3/envs/ansibleTest/lib/python3.7/site-packages/ansible
executable location = /Users/jfarrell/miniconda3/envs/ansibleTest/bin/ansible
python version = 3.7.1 (default, Dec 14 2018, 13:28:58) [Clang 4.0.1 (tags/RELEASE_401/final)]
(ansible-env) $ ansible all -m ping -u ansible
192.168.99.200 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
```
Now that Ansible is working, you can set those other consoles aside and use Ansible right from your MacOS workstation.
### Cloning a new Ansible for Ansible development
This part is completely optional; you only need it if you want additional virtual environments to modify Ansible or to safely experiment with questionable Python modules. You can clone your main Ansible environment into a development copy with:
```
(ansible-env) $ conda create --name ansible-dev --clone ansible-env
(ansible-env) $ conda activate ansible-dev
(ansible-dev) $
```
### Gotchas to look out for
Occasionally you may get into trouble with Conda. You can usually delete a bad environment with:
```
$ conda activate base
$ conda remove --name ansible-dev --all
```
If you get errors that you cannot resolve, you can usually delete the environment directly by finding it under `~/miniconda3/envs` and removing the entire directory. If the base becomes corrupt, you can delete the entire `~/miniconda3` directory and reinstall it from the PKG file. Just be sure to preserve any environments you want to keep in `~/miniconda3/envs`, or use the Conda tools to export the environment configuration and recreate it later.
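If you go the export route, a sketch of the commands might look like this (the file name is arbitrary):
```
(base) $ conda env export --name ansible-env > ansible-env.yml
(base) $ conda env create --file ansible-env.yml
```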
The `sshpass` program is not included on MacOS. It is needed only if your Ansible work requires you to supply Ansible with an SSH login password. You can find the current [sshpass source][6] on SourceForge.
Finally, the base Conda Python module list may lack some modules you need for your work. If you need to install one, the preferred method is `conda install package`, but `pip` can be used where needed, and Conda will recognize the modules it installs.
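For example, adding a module to the Ansible environment might look like this (`netaddr` is just an illustrative package name):
```
(base) $ conda activate ansible-env
(ansible-env) $ conda install netaddr       # preferred: let Conda manage it
(ansible-env) $ pip install some-module     # fallback; "some-module" is a placeholder
```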
### Conclusion
Ansible is a powerful automation tool worth learning. Conda is a simple and effective Python virtual environment management tool.
Keeping software installations separated on your MacOS environment is a prudent approach to maintaining the stability and sanity of your daily work environment. Conda can be especially helpful for upgrading your Python version, keeping Ansible separate from your other projects, and safely hacking on Ansible.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/using-conda-ansible-administration-macos
Author: [James Farrell][a]
Selected by: [lujun9972][b]
Translator: [heguangzhi](https://github.com/heguangzhi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/jamesf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
[2]: https://brew.sh/
[3]: https://docs.ansible.com/?extIdCarryOver=true&sc_cid=701f2000001OH6uAAG
[4]: https://conda.io/projects/conda/en/latest/index.html
[5]: https://docs.conda.io/en/latest/miniconda.html
[6]: https://sourceforge.net/projects/sshpass/

View File

@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Here Comes Oracle Autonomous Linux Worlds First Autonomous Operating System)
[#]: via: (https://opensourceforu.com/2019/09/here-comes-oracle-autonomous-linux-worlds-first-autonomous-operating-system/)
[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
Here Comes Oracle Autonomous Linux - World's First Autonomous Operating System
======
* _**Oracle Autonomous Linux**_ _**delivers automated patching, updates and tuning without human intervention.**_
* _**It can help IT companies improve reliability and protect their systems from cyberthreats**_
* _**Oracle also introduces Oracle OS Management Service that delivers control and visibility over systems**_
![Oracle cloud][1]
Oracle today marked a major milestone in the company's autonomous strategy with the introduction of Oracle Autonomous Linux, the world's first autonomous operating system.
Oracle Autonomous Linux, along with the new Oracle OS Management Service, is the first and only autonomous operating environment that eliminates complexity and human error to deliver unprecedented cost savings, security and availability for customers, the company claims in a just-released statement.
Keeping systems patched and secure is one of the biggest ongoing challenges faced by IT today. With Oracle Autonomous Linux, the company says, customers can rely on autonomous capabilities to help ensure their systems are secure and highly available to help prevent cyberattacks.
“Oracle Autonomous Linux builds on Oracle's proven history of delivering Linux with extreme performance, reliability and security to run the most demanding enterprise applications,” said Wim Coekaerts, senior vice president of operating systems and virtualization engineering, Oracle.
“Today we are taking the next step in our autonomous strategy with Oracle Autonomous Linux, providing a rich set of capabilities to help our customers significantly improve reliability and protect their systems from cyberthreats,” he added.
**Oracle OS Management Service**
Along with Oracle Autonomous Linux, Oracle introduced Oracle OS Management Service, a highly available Oracle Cloud Infrastructure component that delivers control and visibility over systems whether they run Autonomous Linux, Linux or Windows.
Combined with resource governance policies, OS Management Service, via the Oracle Cloud Infrastructure console or APIs, also enables users to automate capabilities that will execute common management tasks for Linux systems, including patch and package management, security and compliance reporting, and configuration management.
It can be further automated with other Oracle Cloud Infrastructure services like auto-scaling as workloads need to grow or shrink to meet elastic demand.
**Always Free Autonomous Database and Cloud Infrastructure**
Oracle Autonomous Linux, in conjunction with Oracle OS Management Service, uses advanced machine learning and autonomous capabilities to deliver unprecedented cost savings, security and availability and frees up critical IT resources to tackle more strategic initiatives.
They are included with Oracle Premier Support at no extra charge with Oracle Cloud Infrastructure compute services. Combined with Oracle Cloud Infrastructure's other cost advantages, most Linux workload customers can expect 30-50 percent TCO savings versus both on-premises deployments and other cloud vendors over five years.
“Adding autonomous capabilities to the operating system layer, with future plans to expand beyond infrastructure software, goes straight after the OpEx challenges nearly all customers face today,” said Al Gillen, Group VP, Software Development and Open Source, IDC.
“This capability effectively turns Oracle Linux into a service, freeing customers to focus their IT resources on application and user experience, where they can deliver true competitive differentiation,” he added.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/here-comes-oracle-autonomous-linux-worlds-first-autonomous-operating-system/
Author: [Longjam Dineshwori][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensourceforu.com/author/dineshwori-longjam/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2016/09/Oracle-cloud.jpg?resize=350%2C197&ssl=1

View File

@ -0,0 +1,105 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why the Blockchain is Your Best Bet for Cyber Security)
[#]: via: (https://opensourceforu.com/2019/09/why-the-blockchain-is-your-best-bet-for-cyber-security/)
[#]: author: (Anna Aneena M G https://opensourceforu.com/author/anna-aneena/)
Why the Blockchain is Your Best Bet for Cyber Security
======
[![][1]][2]
_Blockchain is a tamper-proof, shared digital ledger that records the history of transactions between the peers in a peer-to-peer network. This article describes how blockchain technology can be used to protect data and the network from cyber-attacks._
Cyber security is a set of technologies, processes and controls designed to protect systems, devices, data, networks and programs from cyber-attacks. It secures data from threats such as theft or misuse, and also safeguards the system from viruses.
In today's world, cyber-attacks are major threats faced by each user. Most of us are responsive to the advertisements on various websites on the Internet, and if asked questions or for any personal details, respond without even thinking of the consequences. Sharing one's personal information is very risky, as one may lose whatever one has. In 2016, the search engine Yahoo! faced a major attack and around one billion accounts were compromised. The attackers were able to get the user names, passwords, phone numbers, and security questions and answers of e-mail users. On September 7, 2017, Equifax, one of the largest consumer credit recording agencies in the world, faced a massive cyber security threat. It is believed that someone gained unauthorised access to the data held by this agency, from mid-May to July 2017. Around 145.5 million people felt threatened by the news as they had shared personal information like names, social security numbers, birthdays, addresses and driving license numbers with Equifax.
Many people use weak or default passwords for their personal accounts on some Internet sites, which is very risky as these can be cracked, and their personal details can be compromised. This may be even more risky if people use default passwords for all their sites, just for convenience. If the attackers crack that password, then it can be used for all other sites, and all their personal details, including their credit card and bank details may be harvested. In this digital era, cyber-attacks are a matter of real concern. Cyber criminals are greatly increasing in number and are attempting to steal financial data, personal identifiable information (PII) as well as identities of the common Internet user.
Businesses, the government, and the private as well as public sectors are continuously fighting against such frauds, malicious bugs and so on. Even as the hackers are increasing their expertise, the ways to cope with their attacks are also improving very fast. One of these ways is the blockchain.
**Blockchain: A brief background**
Each block in a blockchain contains transaction data, a hash function and the hash of the previous block. Blockchain is managed by a peer-to-peer (P2P) network. On the network, no central authority exists and all blocks are distributed among all the users in the network. Everyone in the network is responsible for verifying the data that is shared, and ensuring that no existing blocks are being altered and no false data is being added. Blockchain technology enables direct transactions between two individuals, without the involvement of a third party, and hence provides transparency. When a transaction happens, the transaction information is shared amongst everyone in the blockchain network. These transactions are individually time stamped. When these transactions are put together in a block, they are time stamped again as a whole block. Blockchain can be used to prevent cyber-attacks in three ways: by being a trusted system, by being immutable and by network consensus.
A blockchain based system runs on the concept of human trust. A blockchain network is built in such a way that it presumes any individual node could attack it at any time. The consensus protocol, like proof of work, ensures that even if this happens, the network completes its work as intended, regardless of human cheating or intervention. The blockchain allows one to secure stored data using various cryptographic properties such as digital signatures and hashing. As soon as the data enters a block in the blockchain, it cannot be tampered with and this property is called immutability. If anyone tries to tamper with the blockchain database, then the network consensus will recognise the fact and shut down the attempt.
Blockchains are made up of nodes; these can be within one institution like a hospital, or can be all over the world on the computer of any citizen who wants to participate in the blockchain. For any decision to be made, the majority of the nodes need to come to a consensus. The blockchain has a democratic system instead of a central authoritarian figure. So if any one node is compromised due to malicious action, the rest of the nodes recognise the problem and do not execute the unacceptable activity. Though blockchain has a pretty incredible security feature, it is not used by everyone to store data.
![Figure 1: Applications of blockchain][3]
**Common use cases of blockchain in cyber security**
Mitigating DDoS attacks: A distributed denial-of-service attack is a cyber-attack; it involves multiple compromised computer systems that aim at a target and attack it, causing denial of service for users of the targeted resources. This causes the system to slow down and crash, hence denying services to legitimate users. There are some forms of DDoS software that are causing huge problems. One among them is Hide and Seek malware, which has the ability to act even after the system reboots and hence can cause the system to crash over and over again. Currently, the difficulty in handling DDoS attacks is due to the existing DNS (Domain Name System). A solution to this is the implementation of blockchain technology. It will decentralise the DNS, distributing the data to a greater number of nodes and making it impossible for the hackers to hack.
**More secure DNS:** For hackers, DNS is an easy target. Hence DNS service providers like Twitter, PayPal, etc, can be brought down. Adding the blockchain to the DNS will enhance the security, because that one single target which can be compromised is removed.
**Advanced confidentiality and data integrity:** Initially, blockchain had no particular access controls. But as it evolved, more confidentiality and access controls were added, ensuring that data as a whole or in part was not accessible to any wrong person or organisation. Private keys are generally used for signing documents and other purposes. Since these keys can be tampered with, they need to be protected. Blockchain replaces such secret keys with transparency.
**Improved PKI:** PKI or Public Key Infrastructure is one of the most popular forms of public key cryptography which keeps messaging apps, websites, emails and other forms of communications secure. The main issue with this cryptography is that most PKI implementations depend on trusted third-party Certificate Authorities (CA). But these certificate authorities can easily be compromised by hackers to spoof user identities. When keys are published on the blockchain, the risk of false key generation is eliminated. Along with that, blockchain enables applications to verify the identity of the person you are communicating with. CertCoin is the first implementation of blockchain-based PKI.
**The major roles of blockchain in cyber security**
**Eliminating the human factor from authentication:** Human intervention is eliminated from the process of authentication. With the help of blockchain technology, businesses are able to authenticate devices and users without the need for a password. Hence, blockchain avoids being a potential attack vector.
**Decentralised storage:** Blockchain users data can be maintained on their computers in their network. This ensures that the chain wont collapse. If someone other than the owner of a component of data (say, an attacker) attempts to destroy a block, the entire system checks each and every data block to identify the one that differs from the rest. If this block is identified or located by the system, it is recognised as false and is deleted from the chain.
**Traceability:** All the transactions that are added to a private or public blockchain are time stamped and signed digitally. This means that companies can trace every transaction back to a particular time period. And they can also locate the corresponding party on the blockchain through their public address.
**DDoS:** Blockchain transactions can be denied easily if the participating units are delayed from sending transactions. For example, the entire attendant infrastructure and the blockchain organisation can be crippled due to the DDoS attack on a set of entities or an entity. These kinds of attacks can introduce integrity risks to a blockchain.
**Blockchain for cyber security**
One interesting use case is applying the strong integrity assurance feature of blockchain technology to strengthen the cyber security of many other technologies. For example, to ensure the integrity of software downloads like firmware updates, patches, installers, etc, blockchain can be used in the same way that we make use of MD5 hashes today. The hash we compare a file download against might be compromised on a vendor's website and altered without our knowledge. With a blockchain, we can make that comparison against what is permanently recorded there, with a higher level of confidence. The use of blockchain technologies has great security potential, particularly in the world of cyber-physical systems (CPS) such as IoT, industrial controls, vehicles, robotics, etc.
Summarising this, for cyber-physical systems the integrity of data is the key concern while the confidentiality in many cases is almost irrelevant. This is the key difference between cyber security for cyber-physical systems and cyber security for traditional enterprise IT systems. Blockchain technology is just what the doctor ordered to address the key cyber-physical systems security concerns.
The key characteristics of a blockchain that establish trust are listed below.
* _**Identification and authentication:**_ Access is granted via cryptographic keys and access rules.
* _**Transaction rules:**_ At every node standard rules are applied to every transaction.
* _**Transaction concatenation:**_ Every transaction is linked to its previous transaction.
* _**Consensus mechanism:**_ In order to agree upon the transaction integrity, mathematical puzzles are solved for all nodes.
* _**Distributed ledger:**_ There are standards for listing transactions on every node.
* _**Permissioned and unpermissioned:**_ Ability to participate in a blockchain can be open or pre-approved.
**Is blockchain secure?**
Blockchain stores data using sophisticated and innovative software rules that are extremely difficult for attackers to manipulate. The best example is Bitcoin. In Bitcoin's blockchain, the shared data is the history of every transaction made. Information is stored in multiple copies on a network of computers called nodes. Each time someone submits a transaction to the ledger, the nodes check whether the transaction is valid. A subset of the nodes then packages valid transactions into blocks and adds them to the existing chain.
The blockchain offers a higher level of security to every individual user. This is because it removes the need for easily compromised and weak online identities and passwords.
**How does a blockchain protect data?**
Instead of uploading data to a cloud server or storing it in a single location, a blockchain breaks everything into several small nodes. A blockchain protects data because:
* It is decentralised.
* It offers encryption and validation.
* It can be private or public.
* It is virtually impossible to hack.
* It offers quality assurance.
* It ensures fast, cheap and secure transfer of funds across the globe.
* It is well known for its traceability.
* Through it, transactions become more transparent.
**Cyber security is a priority, not an afterthought**
It seems like in the digital age, comfort and convenience have overtaken things like privacy and security in very unfortunate ways. The handing over of personal information to companies like Facebook is a personal choice; however, no one wants to see information leaked to third parties without consent. The blockchain is all about security. It has provided us a simple, effective and affordable way of ensuring that our cyber security needs are not only met, but also exceeded. We need to understand that the technologies we use to improve our lives can also be used to harm us. That is the reality of the era we are living in, one where most of our personal data is on the cloud, on our mobile device, or on our computer. Because of that, it is vital to look at online safety and cyber security as priorities and not afterthoughts. The blockchain can assist us in turning that thought into reality, and allow us to build a future where online threats are kept to a minimum.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/why-the-blockchain-is-your-best-bet-for-cyber-security/
Author: [Anna Aneena M G][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensourceforu.com/author/anna-aneena/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1.png?resize=615%2C434&ssl=1 (1)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1.png?fit=615%2C434&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2.png?resize=350%2C219&ssl=1

View File

@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Cloud Billing Can Benefit Your Business)
[#]: via: (https://opensourceforu.com/2019/09/how-cloud-billing-can-benefit-your-business/)
[#]: author: (Aashima Sharma https://opensourceforu.com/author/aashima-sharma/)
How Cloud Billing Can Benefit Your Business
======
[![][1]][2]
_The Digital Age has led to many changes in the way we do business, and also in the way that businesses are run on a day-to-day basis. Computers are now ubiquitous even in the smallest of businesses and are utilised for a wide variety of purposes. When it comes to making things easier, a particularly efficient use of computer technology and software is cloud billing. Let us explain in a little more detail what it's all about._
**What Is Cloud Billing?**
The idea of [_cloud billing_][3] is relatively simple. Cloud billing is designed to enable the user to perform a number of functions essential to the business, and it can be used in any business without taking up unnecessary space on computer servers. For example, you can use a cloud billing solution to ensure that invoices are sent at a certain time, subscriptions are efficiently managed, discounts are applied, and to handle many more regular or one-off functions.
![Figure 1: Tridens Monetization][4]
**How Cloud Billing Platform Can Benefit Your Business**
Lets have a quick look at some of the major benefits of a cloud billing platform:
  * Lower costs on IT services: a cloud billing system, like any cloud-based IT solution, does not need to take up any space on your server. Nor does it require you to purchase software or hardware. You don't need to engage the services of an IT professional; it's all done via the service provider.
* Reduced operating costs: there is little in the way of operating costs when you use a cloud billing solution, as there is no extra equipment to maintain.
  * Faster product to market: if you are rolling out new products or services on a regular basis, a cloud billing solution means you can cut out a lot of time that would normally be taken up by the process.
* Grow with the business: if in the future you need to expand or add to your cloud billing solution, you can, and without any additional equipment or software. You pay for what you need and use, and no more.
  * Quick start: once you decide you want to use cloud billing, there is very little to do before you are ready to go. No software to install, no computer servers or network; it's all in the cloud.
There are many more benefits to engaging the services of a cloud billing solutions provider, and finding the right one, a service provider you can trust and with whom you can build a reliable working relationship, is the most important part of the deal.
**Where to Get the Best Cloud Billing Platform?**
The market for cloud-based billing services is hotly contested, and there are many potential options for your business. There are many cloud billing platforms available; you can try [_Tridens Monetization_][3] or any of the open source billing software solutions that may be suitable for your business.
Where the best cloud billing solutions come to the fore is in providing real-time capability and flexibility for various areas of industry and commerce. They are ideal, for example, for use in media and communications, in education and utilities, and in the healthcare industry as well as many others. No matter the size, type or area of your business, a cloud-based billing platform offers not just a cost-saving opportunity, but it will also help accelerate growth, engender brand loyalty thanks to more efficient service, and more.
Use a cloud billing solution for recurring billing purposes, for invoicing and revenue management, or for producing a range of reports and real-time analysis of your business performance, and all without the need for expensive hardware and software, or costly in-house IT experts. The best such solutions can be connected to many payment gateways, can handle your business tax requirements, and will reduce the workload on your team, so that each can dedicate their time to what they do best.
**Summary**
Put simply, your business needs a cloud billing platform if you want to grow and improve efficiency for both you and your clients without the need for great expenditure and restructuring. We recommend you check it out further, and talk to the experts in more detail about your cloud billing requirements.
**By: Alivia Mallan**
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/how-cloud-billing-can-benefit-your-business/
Author: [Aashima Sharma][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensourceforu.com/author/aashima-sharma/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/01/Young-man-with-the-head-in-the-clouds-thinking_15259762_xl.jpg?resize=579%2C606&ssl=1 (Young man with the head in the clouds thinking_15259762_xl)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/01/Young-man-with-the-head-in-the-clouds-thinking_15259762_xl.jpg?fit=579%2C606&ssl=1
[3]: https://tridenstechnology.com/monetization/
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/diagram-monetization-infinite-v2-1.png?resize=350%2C196&ssl=1

View File

@ -0,0 +1,120 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 steps to developing psychological safety)
[#]: via: (https://opensource.com/open-organization/19/9/psychological-safety-leadership-behaviors)
[#]: author: (Kathleen Hayes https://opensource.com/users/khayes4days)
3 steps to developing psychological safety
======
The mindsets, behaviors, and communication patterns necessary for
establishing psychological safety in our organizations may not be our
defaults, but they are teachable and observable.
![Brain map][1]
Psychological safety is a belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes. And it's critical for high-performing teams in open organizations.
Part one of this series introduced the concept of [psychological safety][2]. In this companion article, I'll recount my personal journey toward understanding psychological safety—and explain the fundamental shifts in mindset, behavior, and communication that anyone hoping to create psychologically safe teams and environments will need to make.
### Mindset: Become the learner
I have participated in a number of corporate cultures that fostered a "gotcha" mindset—one in which people were poised to pounce when something didn't go according to plan. Dropping this deeply ingrained mindset was a requirement for achieving psychological safety, and doing _that_ required a fundamental shift in the way I lived my life.
_Guiding value: Become the learner, not the knower._
Life is a process; achieving perfection will never happen. Similarly, building an organization is a process; there is no "we've got it now" state. In most cases, we're traversing uncharted territories. Enormous uncertainty lies ahead. [We can't know what will happen][3], which is why it was important for me to become the learner in work, just as in life.
On my first day as a new marketing leader, a team member was describing their collaboration with engineering and told me about "the F-You board." If a new program was rolled out and engineers complained that it missed the mark, was wrong, or was downright ridiculous, someone placed a tally on the F-You whiteboard. When they'd accumulated ten, the team would go out drinking.
There is a lot to unpack in this dynamic. For our purposes, however, let's focus on a few actionable steps that helped me reframe the "gotcha" mentality into a learning mindset.
First, I shaped marketing programs and campaigns as experiments, using the word "experiment" with intention not just _within_ the marketing department but _across_ the entire organization. Corporate-wide communications about upcoming rollouts concluded with, "If you see any glaring omissions or areas for refinement, please let me know," inviting engineers to surface blind spots and bring forward solutions—rather than point fingers after the work had concluded.
Next, to stimulate the learning process in myself and others, I began fostering a "[try, learn, modify][4]" mindset, in which setbacks are not viewed as doomsday failures but as opportunities for clarification and improvement. To recover quickly when something doesn't go according to plan, I would ask four key questions:
* What did we set out to do?
* What happened?
* What did we learn?
* How quickly can we improve upon it?
It's nearly impossible for every project to be a home run. Setbacks will occur. As the ground-breaking research on psychological safety revealed, [learning happens through vulnerable conversations][5]. When engaged in psychologically safe environments, we can use these moments to cultivate more learning and less defensiveness.
### Behavior: Model curiosity
One way we can help our team drop their defensiveness is by _modeling curiosity_.
As the "knower," I was adept at command and control. Quite often this meant playing the devil's advocate and shaming others into submitting to my point of view.
In a meeting early in my career as a vice president, a colleague was rolling out a new program and asked each executive to share feedback. I was surprised as each person around the table gave a thumbs up. I had reservations about the direction and was concerned that no one else could see the issues that seemed so readily apparent to me. Rather than asking for clarification and [stimulating a productive dialog][6], I simply quipped, "Am I the only one that hasn't [sipped the purple Kool-Aid][7]?" Not my finest moment.
As I look back, this behavior was fueled by a mixture of fear and overconfidence, a hostile combination resulting in a hostile psychological attitude. I wasn't curious because I was too busy being right, being the knower. By becoming the learner, I let a genuine interest in understanding others' perspectives come to the forefront. This helped me more deeply understand a fundamental fact about the human condition.
_Guiding value: Situations are rarely, if ever, crystal clear._
The process of understanding is dynamic. We are constantly moving along a continuum from clear to unclear and back again. For large teams, this swing is more pronounced as each member brings a unique perspective to bear on an issue. And rightly so. There are seven billion people on this planet; it's nearly impossible for everyone to see a situation the same way.
Recalibrating this attitude—the devil's advocate attitude of "I disagree" to the learner's space and behavior of "help me see what you see"—took practice. One thing that worked for me was intentionally using the phrase "I have a different perspective to share" when offering my opinion and, when clarifying, saying, "That is not consistent with my understanding. Can you tell me more about your perspective?" These tactics helped me move from my default of knowing and convincing. I also asked a trusted team member to privately point out when something or someone had triggered my old default. Over time, my self-awareness matured and, with practice, the intentional tactics evolved into a learned behavior.
I feel compelled to share that without the right mindset these tactics would have been cheap communication gimmicks. Only after shifting my perspective from that of the knower to that of the learner did I open a genuine desire to understand another's perspective. This allowed me to develop the capacity to model curiosity and open the space for my team members—and me—to explore ideas with safety, vulnerability, and respect.
### Communication: Deliver productive feedback
Psychological safety does not imply a cozy situation in which people are necessarily close friends, nor does it suggest an absence of pressure or problems. When problems inevitably arise, we must hold ourselves and others accountable and deliver feedback without tiptoeing around the truth, or playing the blame game. However, giving productive feedback is [a skill most leaders have never learned][8].
_Guiding value: Clear is kind; unclear is unkind._
When problems arise during experiments in marketing, I am finding team communication to be incredibly productive when using that _try, learn, modify_ approach and modeling curiosity. One-on-one conversations about real deficits, however, have proven more difficult.
I found so many creative reasons to delay or avoid these conversations. In a fast-paced startup, one of my favorites was, "They've only been in this role for a short while. Give them more time to get up to speed." That was an unfortunate approach, especially when coupled later with vague direction, like, "I need you to deliver more, more quickly." Because I was unable to clearly communicate what was expected, team members were not clear on what needed to be improved. This stall tactic and belittling innuendo masquerading as feedback was leading to more shame and blame than learning and growth.
In becoming the learner, the guiding value of "clear is kind, unclear is unkind," crystalized for me when studying _Dare to Lead_. In her work, [Brené Brown explains][9]:
* Feeding people half-truths to make them feel better, which is almost always about making ourselves feel more comfortable, is unkind.
* Not getting clear with people about your expectations because it feels too hard, yet holding them accountable or blaming them for not delivering, is unkind.
Below are three actionable tips that are helping me deliver more clear, productive feedback.
**Get specific.** Point to the specific lack in proficiency that needs to be addressed. Tie your comments to a shared vision of how this impacts career progression. When giving feedback on behavior, stay away from character and separate person from process.
**Allow people to have feelings.** Rather than rushing in, give them space to feel. Learn how to hold the discomfort.
**Think carefully about how you want to show up**. Work through how your conversation may unfold and where you might trigger blaming behaviors. Knowing puts you in a safer mindset for having difficult conversations.
Teaching team members, reassessing skill gaps, reassigning them (or even letting them go) can become more empathetic processes when "clear is kind" is top of mind for leaders.
### Closing
The mindsets, behaviors, and communication patterns necessary for establishing psychological safety in our organizations may not be our defaults, but they are teachable and observable. Stay curious, ask questions, and deepen your understanding of others' perspectives. Do the difficult work of holding yourself and others accountable for showing up in a way that's aligned with cultivating a culture where your creativity—and your team members—thrive.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/9/psychological-safety-leadership-behaviors
Author: [Kathleen Hayes][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/khayes4days
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_art-mindmap-520.png?itok=qQVBAoVw (Brain map)
[2]: https://opensource.com/open-organization/19/3/introduction-psychological-safety
[3]: https://opensource.com/open-organization/19/5/planning-future-unknowable
[4]: https://opensource.com/open-organization/18/3/try-learn-modify
[5]: https://www.youtube.com/watch?v=LhoLuui9gX8
[6]: https://opensource.com/open-organization/19/5/productive-arguments
[7]: https://opensource.com/open-organization/15/7/open-organizations-kool-aid
[8]: https://opensource.com/open-organization/19/4/be-open-with-difficult-feedback
[9]: https://brenebrown.com/articles/2018/10/15/clear-is-kind-unclear-is-unkind/

View File

@ -0,0 +1,138 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Ansible brought peace to my home)
[#]: via: (https://opensource.com/article/19/9/ansible-documentation-kids-laptops)
[#]: author: (James Farrell https://opensource.com/users/jamesf)
How Ansible brought peace to my home
======
Configuring his young daughters' computers with Ansible made it simple
for this dad to manage the family's computers.
![Coffee and laptop][1]
A few months ago, I read Marco Bravo's article [_How to use Ansible to document procedures_][2] on Opensource.com. I will admit, I didn't quite get it at the time. I was not actively using [Ansible][3], and I remember thinking it looked like more work than it was worth. But I had an open mind and decided to spend time looking deeper into Ansible.
I soon found an excuse to embark on my first real Ansible adventure: repurposing old laptops like in [_How to make an old computer useful again_][4]. I've always liked playing with old computers, and the prospect of automating something with modern methods piqued my interest.
### The task
Earlier this year, I gave my seven-year-old daughter a repurposed Dell Mini 9 running some flavor of Ubuntu. At first, my six-year-old daughter didn't care much about it, but as the music played and she discovered the fun programs, her interest set in.
I realized I would need to build another one for her soon. And any parent with small children close in age can likely identify with my dilemma. If both children don't get identical things, conflicts will arise. Similar toys, similar clothes, similar shoes … sometimes the color, shape, and blinking lights must be identical. I am sure they would notice any difference in laptop configuration, and it would become a point of contention. Therefore, I needed these laptops to have identical functionality.
Also, with small children in the mix, I suspected I would be rebuilding these things a few times. Failures, accidents, upgrades, corruptions … this threatened to become a time sink.
Since two young girls sharing one Dell Mini 9 was not really a workable solution, I grabbed a Dell D620 from my pile of old hardware, upgraded the RAM, put in an inexpensive SSD, and started to cook up a repeatable process to build the children's computer configuration.
If you think about it, this task seems ideal for a configuration management system. I needed something to document what I was doing so it could be easily repeatable.
### Ansible to the rescue
I didn't try to set up a full-on pre-boot execution environment (PXE) to support an occasional laptop install. I wanted to teach my children to do some of the installation work for me (a different kind of automation, ha!).
I decided to start from a minimal OS install and eventually broke down my Ansible approach into three parts: bootstrap, account setup, and software installation. I could have put everything into one giant script, but separating these functions allowed me to mix and match them for other projects and refine them individually over time. Ansible's YAML file readability helped keep things clear as I refined my systems.
For this laptop experiment, I decided to use Debian 32-bit as my starting point, as it seemed to work best on my older hardware. The bootstrap YAML script is intended to take a bare-minimal OS install and bring it up to some standard. It relies on a non-root account to be available over SSH and little else. Since a minimal OS install usually contains very little that is useful to Ansible, I use the following to hit one host and prompt me to log in with privilege escalation:
```
$ ansible-playbook bootstrap.yml -i '192.168.0.100,' -u jfarrell -Kk
```
The script makes use of Ansible's [raw][5] module to set some base requirements. It ensures Python is available, upgrades the OS, sets up an Ansible control account, transfers SSH keys, and configures sudo privilege escalation. When bootstrap completes, everything should be in place to have this node fully participate in my larger Ansible inventory. I've found that bootstrapping bare-minimum OS installs is nuanced (if there is interest, I'll write another article on this topic).
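The playbook itself isn't shown in the article, but a minimal sketch of the kind of `raw` task it describes (my own illustration, assuming a Debian target) might look like:
```
---
- name: Bootstrap a bare-minimum Debian install
  hosts: all
  gather_facts: false
  become: yes
  tasks:
    - name: Make sure Python exists so regular Ansible modules can run later
      raw: test -e /usr/bin/python || (apt-get -y update && apt-get -y install python)
      changed_when: false
```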
The account YAML setup script is used to set up (or reset) user accounts for each family member. This keeps user IDs (UIDs) and group IDs (GIDs) consistent across the small number of machines we have, and it can be used to fix locked accounts when needed. Yes, I know I could have set up Network Information Service or LDAP authentication, but the number of accounts I have is very small, and I prefer to keep these systems very simple. Here is an excerpt I found especially useful for this:
```
---
- name: Set user accounts
  hosts: all
  gather_facts: false
  become: yes
  vars_prompt:
    - name: passwd
      prompt: "Enter the desired ansible password:"
      private: yes
  tasks:
  - name: Add child 1 account
    user:
      state: present
      name: child1
      password: "{{ passwd | password_hash('sha512') }}"
      comment: Child One
      uid: 888
      group: users
      shell: /bin/bash
      generate_ssh_key: yes
      ssh_key_bits: 2048
      update_password: always
      create_home: yes
```
The **vars_prompt** section prompts me for a password, which is put to a Jinja2 transformation to produce the desired password hash. This means I don't need to hardcode passwords into the YAML file and can run it to change passwords as needed.
The software installation YAML file is still evolving. It includes a base set of utilities for the sysadmin and then the stuff my users need. This mostly consists of ensuring that the same graphical user interface (GUI) interface and all the same programs, games, and media files are installed on each machine. Here is a small excerpt of the software for my young children:
```
  - name: Install kids software
    apt:
      name: "{{ packages }}"
      state: present
    vars:
      packages:
        - lxde
        - childsplay
        - tuxpaint
        - tuxtype
        - pysycache
        - pysiogame
        - lmemory
        - bouncy
```
I created these three Ansible scripts using a virtual machine. When they were perfect, I tested them on the D620. Then converting the Mini 9 was a snap; I simply loaded the same minimal Debian install then ran the bootstrap, accounts, and software configurations. Both systems then functioned identically.
For a while, both sisters enjoyed their respective computers, comparing usage and exploring software features.
### The moment of truth
A few weeks later came the inevitable. My older daughter finally came to the conclusion that her pink Dell Mini 9 was underpowered. Her sister's D620 had superior power and screen real estate. YouTube was the new rage, and the Mini 9 could not keep up. As you can guess, the poor Mini 9 fell into disuse; she wanted a new machine, and sharing her younger sister's would not do.
I had another D620 in my pile. I replaced the BIOS battery, gave it a new SSD, and upgraded the RAM. Another perfect example of breathing new life into old hardware.
I pulled my Ansible scripts from source control, and everything I needed was right there: bootstrap, account setup, and software. By this time, I had forgotten a lot of the specific software installation information. But details like account UIDs and all the packages to install were all clearly documented and ready for use. While I surely could have figured it out by looking at my other machines, there was no need to spend the time! Ansible had it all clearly laid out in YAML.
Not only was the YAML documentation valuable, but Ansible's automation made short work of the new install. The minimal Debian OS install from USB stick took about 15 minutes. The subsequent shaping up of the system using Ansible for end-user deployment took only another nine minutes. End-user acceptance testing was successful, and a new era of computing calmness was brought to my family (other parents will understand!).
### Conclusion
Taking the time to learn and practice Ansible with this exercise showed me the true value of its automation and documentation abilities. Spending a few hours figuring out the specifics for the first example saves time whenever I need to provision or fix a machine. The YAML is clear, easy to read, and—thanks to Ansible's idempotency—easy to test and refine over time. When I have new ideas or my children have new requests, using Ansible to control a local virtual machine for testing is a valuable time-saving tool.
Doing sysadmin tasks in your free time can be fun. Spending the time to automate and document your work pays rewards in the future; instead of needing to investigate and relearn a bunch of things you've already solved, Ansible keeps your work documented and ready to apply so you can move onto other, newer fun things!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/ansible-documentation-kids-laptops
Author: [James Farrell][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/jamesf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
[2]: https://opensource.com/article/19/4/ansible-procedures
[3]: https://www.ansible.com/
[4]: https://opensource.com/article/19/7/how-make-old-computer-useful-again
[5]: https://docs.ansible.com/ansible/2.3/raw_module.html

View File

@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Microsoft brings IBM iron to Azure for on-premises migrations)
[#]: via: (https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Microsoft brings IBM iron to Azure for on-premises migrations
======
Once again Microsoft shows it has shed its not-invented-here attitude to support customers.
Microsoft / Just_Super / Getty Images
When Microsoft launched Azure as a cloud-based version of its Windows Server operating system, it didn't make it exclusively Windows. It also included Linux support, and in just a few years, the [number of Linux instances now outnumbers Windows instances][1].
It's nice to see Microsoft finally shed that not-invented-here attitude that was so toxic for so long, but the company's latest move is really surprising.
Microsoft has partnered with a company called Skytap to offer IBM Power9 instances on its Azure cloud service to run Power-based systems inside of the Azure cloud, which will be offered as Azure virtual machines (VM) along with the Xeon and Epyc server instances that it already offers.
**Also read: [How to make hybrid cloud work][2]**
Skytap is an interesting company. Founded by three University of Washington professors, it specializes in cloud migrations of older on-premises hardware, such as IBM System I or Sparc. It has a data center in its home town of Seattle, with IBM hardware running IBM's PowerVM hypervisor, plus some co-locations in IBM data centers in the U.S. and England.
Its motto is to migrate fast, then modernize at your own pace. So, its focus is on helping legacy systems migrate to the cloud and then modernize the apps, which is what the alliance with Microsoft appears to be aimed at. Azure will provide enterprises with a platform to enhance the value of traditional applications without the major expense of rewriting for a new platform.
Skytap is providing a preview of what's possible when lifting and extending a legacy IBM i application using DB2 on Skytap and augmenting it with Azure IoT Hub. The application seamlessly spans old and new architectures, demonstrating there is no need to completely rewrite rock-solid IBM i applications to benefit from modern cloud capabilities.
### Migrating to Azure cloud
Under the deal, Microsoft will take Power S922 servers from IBM and deploy them in an undisclosed Azure region. These machines can run the PowerVM hypervisor, which supports legacy IBM operating systems, as well as Linux.
"Migrating to the cloud by first replacing older technologies is time consuming and risky," said Brad Schick, CEO of Skytap, in a statement. "Skytaps goal has always been to provide businesses with a path to get these systems into the cloud with little change and less risk. Working with Microsoft, we will bring Skytaps native support for a wide range of legacy applications to Microsoft Azure, including those dependent on IBM i, AIX, and Linux on Power. This will give businesses the ability to extend the life of traditional systems and increase their value by modernizing with Azure services."
As Power-based applications are modernized, Skytap will then bring in DevOps CI/CD toolchains to accelerate software delivery. After moving to Skytap on Azure, customers will be able to integrate Azure DevOps, in addition to CI/CD toolchains for Power, such as Eradani and UrbanCode.
These sound like first steps, which means there will be more to come, especially in terms of the app migration. If it's only in one Azure region, it sounds like they are testing and finding their legs with this project and will likely expand later this year or next.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html
Author: [Andy Patrizio][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.openwall.com/lists/oss-security/2019/06/27/7
[2]: https://www.networkworld.com/article/3119362/hybrid-cloud/how-to-make-hybrid-cloud-work.html#tk.nww-fsb
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Schema)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-schema/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Schema
======
New post on building a messenger app. You already know this kind of app. They allow you to have conversations with your friends. [Facebook Messenger][1], [WhatsApp][2] and [Skype][3] are a few examples. Tho, these apps allows you to send pictures, stream video, record audio, chat with large groups of people, etc… Well try to keep it simple and just send text messages between two users.
We'll use [CockroachDB][4] as the SQL database, [Go][5] as the backend language, and JavaScript to make a web app.
In this first post, we're going over the database design.
```
CREATE TABLE users (
id SERIAL NOT NULL PRIMARY KEY,
username STRING NOT NULL UNIQUE,
avatar_url STRING,
github_id INT NOT NULL UNIQUE
);
```
Of course, this app requires users. We will go with social login. I selected just [GitHub][6], so we keep a reference to the GitHub user ID there.
```
CREATE TABLE conversations (
id SERIAL NOT NULL PRIMARY KEY,
last_message_id INT,
INDEX (last_message_id DESC)
);
```
Each conversation references the last message. Every time we insert a new message, we'll go and update this field. (I'll add the foreign key constraint below).
You could argue that we could group the messages by conversation and find the last one that way, but that would add much more complexity to the queries.
```
CREATE TABLE participants (
user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
messages_read_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (user_id, conversation_id)
);
```
Even though I said conversations will be between just two users, we'll go with a design that allows adding multiple participants to a conversation. That's why we have a participants table between conversations and users.
To know whether the user has unread messages we have the `messages_read_at` field. Every time the user reads messages in a conversation, we update this value, so we can compare it with the `created_at` field of the conversation's last message.
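As a minimal sketch of that comparison (it uses the `messages` table defined right below, and `$1` stands for the participant's user ID; the full query appears in a later part):

```
-- Minimal sketch, not the final query: a conversation has unread messages for
-- a participant when its last message was created after messages_read_at.
SELECT
  conversations.id,
  participants.messages_read_at < messages.created_at AS has_unread_messages
FROM conversations
INNER JOIN messages ON messages.id = conversations.last_message_id
INNER JOIN participants ON participants.conversation_id = conversations.id
WHERE participants.user_id = $1;
```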
```
CREATE TABLE messages (
id SERIAL NOT NULL PRIMARY KEY,
content STRING NOT NULL,
user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
INDEX(created_at DESC)
);
```
Last but not least is the messages table: it saves a reference to the user who created the message and the conversation it belongs to. It also has an index on `created_at` to sort messages.
```
ALTER TABLE conversations
ADD CONSTRAINT fk_last_message_id_ref_messages
FOREIGN KEY (last_message_id) REFERENCES messages ON DELETE SET NULL;
```
And yep, that's the foreign key constraint I mentioned above.
These four tables will do the trick. You can save those statements to a file (say, `schema.sql`) and pipe it to the Cockroach CLI. First start a new node:
```
cockroach start --insecure --host 127.0.0.1
```
Then create the database and tables:
```
cockroach sql --insecure -e "CREATE DATABASE messenger"
cat schema.sql | cockroach sql --insecure -d messenger
```
* * *
That's it. In the next part we'll do the login. Wait for it.
[Source Code][7]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-schema/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://www.messenger.com/
[2]: https://www.whatsapp.com/
[3]: https://www.skype.com/
[4]: https://www.cockroachlabs.com/
[5]: https://golang.org/
[6]: https://github.com/
[7]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,448 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: OAuth)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-oauth/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: OAuth
======
[Previous part: Schema][1].
In this post we start the backend by adding social login.
This is how it works: the user clicks on a link that redirects them to the GitHub authorization page. The user grants access to their info and gets redirected back, logged in. The next time they try to log in, they won't be asked to grant permission again; it is remembered, so the login flow is as fast as a single click.
Internally, the story is more complex, though. First we need to register a new [OAuth app on GitHub][2].
The important part is the callback URL. Set it to `http://localhost:3000/api/oauth/github/callback`. In development we are on localhost, so when you ship the app to production, register a new app with the correct callback URL.
This will give you a client id and a secret key. Don't share them with anyone 👀
With that out of the way, let's start writing some code. Create a `main.go` file:
```
package main
import (
"database/sql"
"fmt"
"log"
"net/http"
"net/url"
"os"
"strconv"
"github.com/gorilla/securecookie"
"github.com/joho/godotenv"
"github.com/knq/jwt"
_ "github.com/lib/pq"
"github.com/matryer/way"
"golang.org/x/oauth2"
"golang.org/x/oauth2/github"
)
var origin *url.URL
var db *sql.DB
var githubOAuthConfig *oauth2.Config
var cookieSigner *securecookie.SecureCookie
var jwtSigner jwt.Signer
func main() {
godotenv.Load()
port := intEnv("PORT", 3000)
originString := env("ORIGIN", fmt.Sprintf("http://localhost:%d/", port))
databaseURL := env("DATABASE_URL", "postgresql://root@127.0.0.1:26257/messenger?sslmode=disable")
githubClientID := os.Getenv("GITHUB_CLIENT_ID")
githubClientSecret := os.Getenv("GITHUB_CLIENT_SECRET")
hashKey := env("HASH_KEY", "secret")
jwtKey := env("JWT_KEY", "secret")
var err error
if origin, err = url.Parse(originString); err != nil || !origin.IsAbs() {
log.Fatal("invalid origin")
return
}
if i, err := strconv.Atoi(origin.Port()); err == nil {
port = i
}
if githubClientID == "" || githubClientSecret == "" {
log.Fatalf("remember to set both $GITHUB_CLIENT_ID and $GITHUB_CLIENT_SECRET")
return
}
if db, err = sql.Open("postgres", databaseURL); err != nil {
log.Fatalf("could not open database connection: %v\n", err)
return
}
defer db.Close()
if err = db.Ping(); err != nil {
log.Fatalf("could not ping to db: %v\n", err)
return
}
githubRedirectURL := *origin
githubRedirectURL.Path = "/api/oauth/github/callback"
githubOAuthConfig = &oauth2.Config{
ClientID: githubClientID,
ClientSecret: githubClientSecret,
Endpoint: github.Endpoint,
RedirectURL: githubRedirectURL.String(),
Scopes: []string{"read:user"},
}
cookieSigner = securecookie.New([]byte(hashKey), nil).MaxAge(0)
jwtSigner, err = jwt.HS256.New([]byte(jwtKey))
if err != nil {
log.Fatalf("could not create JWT signer: %v\n", err)
return
}
router := way.NewRouter()
router.HandleFunc("GET", "/api/oauth/github", githubOAuthStart)
router.HandleFunc("GET", "/api/oauth/github/callback", githubOAuthCallback)
router.HandleFunc("GET", "/api/auth_user", guard(getAuthUser))
log.Printf("accepting connections on port %d\n", port)
log.Printf("starting server at %s\n", origin.String())
addr := fmt.Sprintf(":%d", port)
if err = http.ListenAndServe(addr, router); err != nil {
log.Fatalf("could not start server: %v\n", err)
}
}
func env(key, fallbackValue string) string {
v, ok := os.LookupEnv(key)
if !ok {
return fallbackValue
}
return v
}
func intEnv(key string, fallbackValue int) int {
v, ok := os.LookupEnv(key)
if !ok {
return fallbackValue
}
i, err := strconv.Atoi(v)
if err != nil {
return fallbackValue
}
return i
}
```
Install dependencies:
```
go get -u github.com/gorilla/securecookie
go get -u github.com/joho/godotenv
go get -u github.com/knq/jwt
go get -u github.com/lib/pq
go get -u github.com/matoous/go-nanoid
go get -u github.com/matryer/way
go get -u golang.org/x/oauth2
```
We use a `.env` file to save secret keys and other configurations. Create it with at least this content:
```
GITHUB_CLIENT_ID=your_github_client_id
GITHUB_CLIENT_SECRET=your_github_client_secret
```
The other environment variables we use are:
* `PORT`: The port in which the server runs. Defaults to `3000`.
* `ORIGIN`: Your domain. Defaults to `http://localhost:3000/`. The port can also be extracted from this.
* `DATABASE_URL`: The Cockroach address. Defaults to `postgresql://root@127.0.0.1:26257/messenger?sslmode=disable`.
* `HASH_KEY`: Key to sign cookies. Yeah, we'll use signed cookies for security.
* `JWT_KEY`: Key to sign JSON web tokens.
Because they have default values, you don't need to write them in the `.env` file.
After reading the configuration and connecting to the database, we create an OAuth config. We use the origin to build the callback URL (the same one we registered on the GitHub page), and we set the scope to “read:user”, which gives us permission to read public user info; we just need the username and avatar. Then we initialize the cookie and JWT signers, define some endpoints, and start the server.
Before implementing those HTTP handlers, let's write a couple of functions to send HTTP responses.
```
func respond(w http.ResponseWriter, v interface{}, statusCode int) {
b, err := json.Marshal(v)
if err != nil {
respondError(w, fmt.Errorf("could not marshal response: %v", err))
return
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(statusCode)
w.Write(b)
}
func respondError(w http.ResponseWriter, err error) {
log.Println(err)
http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
}
```
The first one sends JSON, and the second one logs the error to the console and returns a `500 Internal Server Error`.
### OAuth Start
So, the user clicks on a link that says “Access with GitHub”… That link points to the endpoint `/api/oauth/github`, which will redirect the user to GitHub.
```
func githubOAuthStart(w http.ResponseWriter, r *http.Request) {
state, err := gonanoid.Nanoid()
if err != nil {
respondError(w, fmt.Errorf("could not generte state: %v", err))
return
}
stateCookieValue, err := cookieSigner.Encode("state", state)
if err != nil {
respondError(w, fmt.Errorf("could not encode state cookie: %v", err))
return
}
http.SetCookie(w, &http.Cookie{
Name: "state",
Value: stateCookieValue,
Path: "/api/oauth/github",
HttpOnly: true,
})
http.Redirect(w, r, githubOAuthConfig.AuthCodeURL(state), http.StatusTemporaryRedirect)
}
```
OAuth2 uses a mechanism to prevent CSRF attacks, so it requires a “state”. We use nanoid to create a random string and use that as the state. We also save it as a cookie.
### OAuth Callback
Once the user grants access to their info on the GitHub page, they will be redirected to this endpoint. The URL will come with the state and a code in the query string: `/api/oauth/github/callback?state=&code=`
```
const jwtLifetime = time.Hour * 24 * 14
type GithubUser struct {
ID int `json:"id"`
Login string `json:"login"`
AvatarURL *string `json:"avatar_url,omitempty"`
}
type User struct {
ID string `json:"id"`
Username string `json:"username"`
AvatarURL *string `json:"avatarUrl"`
}
func githubOAuthCallback(w http.ResponseWriter, r *http.Request) {
stateCookie, err := r.Cookie("state")
if err != nil {
http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
return
}
http.SetCookie(w, &http.Cookie{
Name: "state",
Value: "",
MaxAge: -1,
HttpOnly: true,
})
var state string
if err = cookieSigner.Decode("state", stateCookie.Value, &state); err != nil {
http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
return
}
q := r.URL.Query()
if state != q.Get("state") {
http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
return
}
ctx := r.Context()
t, err := githubOAuthConfig.Exchange(ctx, q.Get("code"))
if err != nil {
respondError(w, fmt.Errorf("could not fetch github token: %v", err))
return
}
client := githubOAuthConfig.Client(ctx, t)
resp, err := client.Get("https://api.github.com/user")
if err != nil {
respondError(w, fmt.Errorf("could not fetch github user: %v", err))
return
}
var githubUser GithubUser
if err = json.NewDecoder(resp.Body).Decode(&githubUser); err != nil {
respondError(w, fmt.Errorf("could not decode github user: %v", err))
return
}
defer resp.Body.Close()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
respondError(w, fmt.Errorf("could not begin tx: %v", err))
return
}
var user User
if err = tx.QueryRowContext(ctx, `
SELECT id, username, avatar_url FROM users WHERE github_id = $1
`, githubUser.ID).Scan(&user.ID, &user.Username, &user.AvatarURL); err == sql.ErrNoRows {
if err = tx.QueryRowContext(ctx, `
INSERT INTO users (username, avatar_url, github_id) VALUES ($1, $2, $3)
RETURNING id
`, githubUser.Login, githubUser.AvatarURL, githubUser.ID).Scan(&user.ID); err != nil {
respondError(w, fmt.Errorf("could not insert user: %v", err))
return
}
user.Username = githubUser.Login
user.AvatarURL = githubUser.AvatarURL
} else if err != nil {
respondError(w, fmt.Errorf("could not query user by github ID: %v", err))
return
}
if err = tx.Commit(); err != nil {
respondError(w, fmt.Errorf("could not commit to finish github oauth: %v", err))
return
}
exp := time.Now().Add(jwtLifetime)
token, err := jwtSigner.Encode(jwt.Claims{
Subject: user.ID,
Expiration: json.Number(strconv.FormatInt(exp.Unix(), 10)),
})
if err != nil {
respondError(w, fmt.Errorf("could not create token: %v", err))
return
}
expiresAt, _ := exp.MarshalText()
data := make(url.Values)
data.Set("token", string(token))
data.Set("expires_at", string(expiresAt))
http.Redirect(w, r, "/callback?"+data.Encode(), http.StatusTemporaryRedirect)
}
```
First we try to decode the cookie with the state we saved before and compare it with the state that comes in the query string. If they don't match, we return a `418 I'm a teapot` error.
Then we exchange the code for a token. This token is used to create an HTTP client that makes requests to the GitHub API. So we do a GET request to `https://api.github.com/user`. This endpoint gives us the current authenticated user's info in JSON format. We decode it to get the user ID, login (username) and avatar URL.
Then we try to find a user with that GitHub ID in the database. If none is found, we create one using that data.
Then, with that user, we issue a JSON web token with the user ID as Subject and redirect to the frontend with the token, alongside the expiration date, in the query string.
The web app will be for another post, but the URL you are redirected to is `/callback?token=&expires_at=`. There we'll have some JavaScript to extract the token and expiration date from the URL and do a GET request to `/api/auth_user`, with the token in the `Authorization` header in the form of `Bearer token_here`, to get the authenticated user and save it to localStorage.
### Guard Middleware
To get the current authenticated user we use a middleware. That's because in future posts we'll have more endpoints that require authentication, and a middleware allows us to share that functionality.
```
type ContextKey struct {
Name string
}
var keyAuthUserID = ContextKey{"auth_user_id"}
func guard(handler http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
var token string
if a := r.Header.Get("Authorization"); strings.HasPrefix(a, "Bearer ") {
token = a[7:]
} else if t := r.URL.Query().Get("token"); t != "" {
token = t
} else {
http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
return
}
var claims jwt.Claims
if err := jwtSigner.Decode([]byte(token), &claims); err != nil {
http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
return
}
ctx := r.Context()
ctx = context.WithValue(ctx, keyAuthUserID, claims.Subject)
handler(w, r.WithContext(ctx))
}
}
```
First we try to read the token from the `Authorization` header or from a `token` in the URL query string. If none is found, we return a `401 Unauthorized` error. Then we decode the claims in the token and use the Subject as the current authenticated user ID.
Now, we can wrap any `http.HandlerFunc` that needs authentication with this middleware, and we'll have the authenticated user ID in the context.
```
var guarded = guard(func(w http.ResponseWriter, r *http.Request) {
authUserID := r.Context().Value(keyAuthUserID).(string)
})
```
### Get Authenticated User
```
func getAuthUser(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
var user User
if err := db.QueryRowContext(ctx, `
SELECT username, avatar_url FROM users WHERE id = $1
`, authUserID).Scan(&user.Username, &user.AvatarURL); err == sql.ErrNoRows {
http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
return
} else if err != nil {
respondError(w, fmt.Errorf("could not query auth user: %v", err))
return
}
user.ID = authUserID
respond(w, user, http.StatusOK)
}
```
We use the guard middleware to get the current authenticated user id and do a query to the database.
* * *
That will cover the OAuth process on the backend. In the next part well see how to start conversations with other users.
[Source Code][3]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://github.com/settings/applications/new
[3]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,351 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Conversations)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-conversations/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Conversations
======
This post is the 3rd in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
In our messenger app, messages are grouped into conversations between two participants. You start a conversation by providing the user you want to chat with; the conversation is created (if it doesn't exist already) and you can start sending messages to it.
On the frontend we're interested in showing a list of the latest conversations. There we'll show the last message of each one along with the name and avatar of the other participant.
In this post, we'll code the endpoints to start a conversation, list the latest ones and find a single one.
Inside the `main()` function add these routes.
```
router.HandleFunc("POST", "/api/conversations", requireJSON(guard(createConversation)))
router.HandleFunc("GET", "/api/conversations", guard(getConversations))
router.HandleFunc("GET", "/api/conversations/:conversationID", guard(getConversation))
```
These three endpoints require authentication, so we use the `guard()` middleware. There is also a new middleware that checks that the request content type is JSON.
### Require JSON Middleware
```
func requireJSON(handler http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if ct := r.Header.Get("Content-Type"); !strings.HasPrefix(ct, "application/json") {
http.Error(w, "Content type of application/json required", http.StatusUnsupportedMediaType)
return
}
handler(w, r)
}
}
```
If the request isn't JSON, it responds with a `415 Unsupported Media Type` error.
### Create Conversation
```
type Conversation struct {
ID string `json:"id"`
OtherParticipant *User `json:"otherParticipant"`
LastMessage *Message `json:"lastMessage"`
HasUnreadMessages bool `json:"hasUnreadMessages"`
}
```
So, a conversation holds a reference to the other participant and the last message. It also has a bool field to tell whether it has unread messages.
```
type Message struct {
ID string `json:"id"`
Content string `json:"content"`
UserID string `json:"-"`
ConversationID string `json:"conversationID,omitempty"`
CreatedAt time.Time `json:"createdAt"`
Mine bool `json:"mine"`
ReceiverID string `json:"-"`
}
```
Messages are for the next post, but I define the struct now since we are using it. Most of the fields are the same as in the database table. We have `Mine` to tell whether the message is owned by the current authenticated user, and `ReceiverID` will be used to filter messages once we add realtime capabilities.
Let's write the HTTP handler then. It's quite long, but don't be scared.
```
func createConversation(w http.ResponseWriter, r *http.Request) {
var input struct {
Username string `json:"username"`
}
defer r.Body.Close()
if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
input.Username = strings.TrimSpace(input.Username)
if input.Username == "" {
respond(w, Errors{map[string]string{
"username": "Username required",
}}, http.StatusUnprocessableEntity)
return
}
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
tx, err := db.BeginTx(ctx, nil)
if err != nil {
respondError(w, fmt.Errorf("could not begin tx: %v", err))
return
}
defer tx.Rollback()
var otherParticipant User
if err := tx.QueryRowContext(ctx, `
SELECT id, avatar_url FROM users WHERE username = $1
`, input.Username).Scan(
&otherParticipant.ID,
&otherParticipant.AvatarURL,
); err == sql.ErrNoRows {
http.Error(w, "User not found", http.StatusNotFound)
return
} else if err != nil {
respondError(w, fmt.Errorf("could not query other participant: %v", err))
return
}
otherParticipant.Username = input.Username
if otherParticipant.ID == authUserID {
http.Error(w, "Try start a conversation with someone else", http.StatusForbidden)
return
}
var conversationID string
if err := tx.QueryRowContext(ctx, `
SELECT conversation_id FROM participants WHERE user_id = $1
INTERSECT
SELECT conversation_id FROM participants WHERE user_id = $2
`, authUserID, otherParticipant.ID).Scan(&conversationID); err != nil && err != sql.ErrNoRows {
respondError(w, fmt.Errorf("could not query common conversation id: %v", err))
return
} else if err == nil {
http.Redirect(w, r, "/api/conversations/"+conversationID, http.StatusFound)
return
}
var conversation Conversation
if err = tx.QueryRowContext(ctx, `
INSERT INTO conversations DEFAULT VALUES
RETURNING id
`).Scan(&conversation.ID); err != nil {
respondError(w, fmt.Errorf("could not insert conversation: %v", err))
return
}
if _, err = tx.ExecContext(ctx, `
INSERT INTO participants (user_id, conversation_id) VALUES
($1, $2),
($3, $2)
`, authUserID, conversation.ID, otherParticipant.ID); err != nil {
respondError(w, fmt.Errorf("could not insert participants: %v", err))
return
}
if err = tx.Commit(); err != nil {
respondError(w, fmt.Errorf("could not commit tx to create conversation: %v", err))
return
}
conversation.OtherParticipant = &otherParticipant
respond(w, conversation, http.StatusCreated)
}
```
For this endpoint you do a POST request to `/api/conversations` with a JSON body containing the username of the user you want to chat with.
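For example (hypothetical values; the token and username below are placeholders and assume the default port from the OAuth part), a request to this endpoint could look like this:

```
# Hypothetical example request; <token> is a placeholder for a real JWT.
curl -X POST http://localhost:3000/api/conversations \
  -H 'Authorization: Bearer <token>' \
  -H 'Content-Type: application/json' \
  -d '{"username": "jane"}'
```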
So first it decodes the request body into a struct with the username. Then it validates that the username is not empty.
```
type Errors struct {
Errors map[string]string `json:"errors"`
}
```
This is the `Errors` struct. It's just a map. If you enter an empty username, you get this JSON with a `422 Unprocessable Entity` error.
```
{
"errors": {
"username": "Username required"
}
}
```
Then, we begin an SQL transaction. We only received a username, but we need the actual user ID. So the first part of the transaction is to query for the ID and avatar of that user (the other participant). If the user is not found, we respond with a `404 Not Found` error. Also, if the user happens to be the same as the current authenticated user, we respond with `403 Forbidden`; there should be two different users, not the same one.
Then, we try to find a conversation those two users have in common. We use `INTERSECT` for that. If there is one, we redirect to that conversation (`/api/conversations/{conversationID}`) and return.
If no common conversation was found, we continue by creating a new one and adding the two participants. Finally, we `COMMIT` the transaction and respond with the newly created conversation.
### Get Conversations
This endpoint `/api/conversations` is to get all the conversations of the current authenticated user.
```
func getConversations(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
rows, err := db.QueryContext(ctx, `
SELECT
conversations.id,
auth_user.messages_read_at < messages.created_at AS has_unread_messages,
messages.id,
messages.content,
messages.created_at,
messages.user_id = $1 AS mine,
other_users.id,
other_users.username,
other_users.avatar_url
FROM conversations
INNER JOIN messages ON conversations.last_message_id = messages.id
INNER JOIN participants other_participants
ON other_participants.conversation_id = conversations.id
AND other_participants.user_id != $1
INNER JOIN users other_users ON other_participants.user_id = other_users.id
INNER JOIN participants auth_user
ON auth_user.conversation_id = conversations.id
AND auth_user.user_id = $1
ORDER BY messages.created_at DESC
`, authUserID)
if err != nil {
respondError(w, fmt.Errorf("could not query conversations: %v", err))
return
}
defer rows.Close()
conversations := make([]Conversation, 0)
for rows.Next() {
var conversation Conversation
var lastMessage Message
var otherParticipant User
if err = rows.Scan(
&conversation.ID,
&conversation.HasUnreadMessages,
&lastMessage.ID,
&lastMessage.Content,
&lastMessage.CreatedAt,
&lastMessage.Mine,
&otherParticipant.ID,
&otherParticipant.Username,
&otherParticipant.AvatarURL,
); err != nil {
respondError(w, fmt.Errorf("could not scan conversation: %v", err))
return
}
conversation.LastMessage = &lastMessage
conversation.OtherParticipant = &otherParticipant
conversations = append(conversations, conversation)
}
if err = rows.Err(); err != nil {
respondError(w, fmt.Errorf("could not iterate over conversations: %v", err))
return
}
respond(w, conversations, http.StatusOK)
}
```
This handler just does a query to the database. It queries the conversations table with some joins… First, to the messages table to get the last message. Then to the participants, with a condition that the participant's ID is not the current authenticated user's; this is the other participant. Then it joins the users table to get that participant's username and avatar. And finally it joins the participants again, but with the opposite condition, so this participant is the current authenticated user. We compare `messages_read_at` with the message `created_at` to know whether the conversation has unread messages, and we use the message `user_id` to check whether it is “mine” or not.
Note that this query assumes that a conversation has just two users; it only works for that scenario. Also, if you want to show a count of the unread messages, this design isn't good. I think you could add an `unread_messages_count` `INT` field on the `participants` table, increment it each time a new message is created, and reset it when the user reads them (see the sketch below).
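A rough sketch of that alternative design, assuming you maintain the counter from the application code (this is not part of the schema actually used in this series):

```
-- Hypothetical alternative: keep a per-participant unread counter.
ALTER TABLE participants ADD COLUMN unread_messages_count INT NOT NULL DEFAULT 0;

-- When a new message is created, bump the counter for every other participant:
UPDATE participants SET unread_messages_count = unread_messages_count + 1
WHERE conversation_id = $1 AND user_id != $2;

-- When a participant reads the conversation, reset their counter:
UPDATE participants SET unread_messages_count = 0
WHERE conversation_id = $1 AND user_id = $2;
```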
Then it iterates over the rows, scans each one into a slice of conversations, and responds with it at the end.
### Get Conversation
This endpoint, `/api/conversations/{conversationID}`, responds with a single conversation by its ID.
```
func getConversation(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
conversationID := way.Param(ctx, "conversationID")
var conversation Conversation
var otherParticipant User
if err := db.QueryRowContext(ctx, `
SELECT
IFNULL(auth_user.messages_read_at < messages.created_at, false) AS has_unread_messages,
other_users.id,
other_users.username,
other_users.avatar_url
FROM conversations
LEFT JOIN messages ON conversations.last_message_id = messages.id
INNER JOIN participants other_participants
ON other_participants.conversation_id = conversations.id
AND other_participants.user_id != $1
INNER JOIN users other_users ON other_participants.user_id = other_users.id
INNER JOIN participants auth_user
ON auth_user.conversation_id = conversations.id
AND auth_user.user_id = $1
WHERE conversations.id = $2
`, authUserID, conversationID).Scan(
&conversation.HasUnreadMessages,
&otherParticipant.ID,
&otherParticipant.Username,
&otherParticipant.AvatarURL,
); err == sql.ErrNoRows {
http.Error(w, "Conversation not found", http.StatusNotFound)
return
} else if err != nil {
respondError(w, fmt.Errorf("could not query conversation: %v", err))
return
}
conversation.ID = conversationID
conversation.OtherParticipant = &otherParticipant
respond(w, conversation, http.StatusOK)
}
```
The query is quite similar. We're not interested in showing the last message, so we omit those fields, but we still need the message to know whether the conversation has unread messages. This time we do a `LEFT JOIN` instead of an `INNER JOIN` because `last_message_id` is nullable; otherwise we wouldn't get any rows. We use an `IFNULL` in the `has_unread_messages` comparison for the same reason. Lastly, we filter by ID.
If the query returns no rows, we respond with a `404 Not Found` error, otherwise `200 OK` with the found conversation.
* * *
Yeah, that concludes the conversation endpoints.
Wait for the next post to create and list messages 👋
[Source Code][3]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,315 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Messages)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-messages/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Messages
======
This post is the 4th in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
In this post we'll code the endpoints to create a message and to list them, as well as an endpoint to update the last time the participant read messages. Start by adding these routes in the `main()` function.
```
router.HandleFunc("POST", "/api/conversations/:conversationID/messages", requireJSON(guard(createMessage)))
router.HandleFunc("GET", "/api/conversations/:conversationID/messages", guard(getMessages))
router.HandleFunc("POST", "/api/conversations/:conversationID/read_messages", guard(readMessages))
```
Messages go into conversations, so the endpoints include the conversation ID.
### Create Message
This endpoint handles POST requests to `/api/conversations/{conversationID}/messages` with a JSON body containing just the message content, and returns the newly created message. It has two side effects: it updates the conversation `last_message_id` and the participant `messages_read_at`.
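For example (hypothetical values; the conversation ID and token are placeholders), a request could look like this:

```
# Hypothetical example request; <conversationID> and <token> are placeholders.
curl -X POST http://localhost:3000/api/conversations/<conversationID>/messages \
  -H 'Authorization: Bearer <token>' \
  -H 'Content-Type: application/json' \
  -d '{"content": "Hello!"}'
```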
```
func createMessage(w http.ResponseWriter, r *http.Request) {
var input struct {
Content string `json:"content"`
}
defer r.Body.Close()
if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
errs := make(map[string]string)
input.Content = removeSpaces(input.Content)
if input.Content == "" {
errs["content"] = "Message content required"
} else if len([]rune(input.Content)) > 480 {
errs["content"] = "Message too long. 480 max"
}
if len(errs) != 0 {
respond(w, Errors{errs}, http.StatusUnprocessableEntity)
return
}
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
conversationID := way.Param(ctx, "conversationID")
tx, err := db.BeginTx(ctx, nil)
if err != nil {
respondError(w, fmt.Errorf("could not begin tx: %v", err))
return
}
defer tx.Rollback()
isParticipant, err := queryParticipantExistance(ctx, tx, authUserID, conversationID)
if err != nil {
respondError(w, fmt.Errorf("could not query participant existance: %v", err))
return
}
if !isParticipant {
http.Error(w, "Conversation not found", http.StatusNotFound)
return
}
var message Message
if err := tx.QueryRowContext(ctx, `
INSERT INTO messages (content, user_id, conversation_id) VALUES
($1, $2, $3)
RETURNING id, created_at
`, input.Content, authUserID, conversationID).Scan(
&message.ID,
&message.CreatedAt,
); err != nil {
respondError(w, fmt.Errorf("could not insert message: %v", err))
return
}
if _, err := tx.ExecContext(ctx, `
UPDATE conversations SET last_message_id = $1
WHERE id = $2
`, message.ID, conversationID); err != nil {
respondError(w, fmt.Errorf("could not update conversation last message ID: %v", err))
return
}
if err = tx.Commit(); err != nil {
respondError(w, fmt.Errorf("could not commit tx to create a message: %v", err))
return
}
go func() {
if err = updateMessagesReadAt(nil, authUserID, conversationID); err != nil {
log.Printf("could not update messages read at: %v\n", err)
}
}()
message.Content = input.Content
message.UserID = authUserID
message.ConversationID = conversationID
// TODO: notify about new message.
message.Mine = true
respond(w, message, http.StatusCreated)
}
```
First, it decodes the request body into a struct with the message content. Then, it validates that the content is not empty and has at most 480 characters.
```
var rxSpaces = regexp.MustCompile("\\s+")
func removeSpaces(s string) string {
if s == "" {
return s
}
lines := make([]string, 0)
for _, line := range strings.Split(s, "\n") {
line = rxSpaces.ReplaceAllLiteralString(line, " ")
line = strings.TrimSpace(line)
if line != "" {
lines = append(lines, line)
}
}
return strings.Join(lines, "\n")
}
```
This is the function to remove excess whitespace. It iterates over each line, collapses consecutive whitespace characters into a single space, trims the line, and joins the non-empty lines back together.
After the validation, it starts an SQL transaction. First, it queries for the participant's existence in the conversation.
```
func queryParticipantExistance(ctx context.Context, tx *sql.Tx, userID, conversationID string) (bool, error) {
if ctx == nil {
ctx = context.Background()
}
var exists bool
if err := tx.QueryRowContext(ctx, `SELECT EXISTS (
SELECT 1 FROM participants
WHERE user_id = $1 AND conversation_id = $2
)`, userID, conversationID).Scan(&exists); err != nil {
return false, err
}
return exists, nil
}
```
I extracted it into a function because it's reused later.
If the user isn't a participant of the conversation, we return a `404 Not Found` error.
Then, it inserts the message and updates the conversation `last_message_id`. From this point on, `last_message_id` cannot be `NULL` because we don't allow deleting messages.
Then it commits the transaction, and we update the participant `messages_read_at` in a goroutine.
```
func updateMessagesReadAt(ctx context.Context, userID, conversationID string) error {
if ctx == nil {
ctx = context.Background()
}
if _, err := db.ExecContext(ctx, `
UPDATE participants SET messages_read_at = now()
WHERE user_id = $1 AND conversation_id = $2
`, userID, conversationID); err != nil {
return err
}
return nil
}
```
Before responding with the new message, we must notify clients about it. This is for the realtime part we'll code in the next post, so I left a “TODO” comment there.
### Get Messages
This endpoint handles GET requests to `/api/conversations/{conversationID}/messages`. It responds with a JSON array of all the messages in the conversation. It also has the same side effect of updating the participant `messages_read_at`.
```
func getMessages(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
conversationID := way.Param(ctx, "conversationID")
tx, err := db.BeginTx(ctx, &sql.TxOptions{ReadOnly: true})
if err != nil {
respondError(w, fmt.Errorf("could not begin tx: %v", err))
return
}
defer tx.Rollback()
isParticipant, err := queryParticipantExistance(ctx, tx, authUserID, conversationID)
if err != nil {
respondError(w, fmt.Errorf("could not query participant existance: %v", err))
return
}
if !isParticipant {
http.Error(w, "Conversation not found", http.StatusNotFound)
return
}
rows, err := tx.QueryContext(ctx, `
SELECT
id,
content,
created_at,
user_id = $1 AS mine
FROM messages
WHERE messages.conversation_id = $2
ORDER BY messages.created_at DESC
`, authUserID, conversationID)
if err != nil {
respondError(w, fmt.Errorf("could not query messages: %v", err))
return
}
defer rows.Close()
messages := make([]Message, 0)
for rows.Next() {
var message Message
if err = rows.Scan(
&message.ID,
&message.Content,
&message.CreatedAt,
&message.Mine,
); err != nil {
respondError(w, fmt.Errorf("could not scan message: %v", err))
return
}
messages = append(messages, message)
}
if err = rows.Err(); err != nil {
respondError(w, fmt.Errorf("could not iterate over messages: %v", err))
return
}
if err = tx.Commit(); err != nil {
respondError(w, fmt.Errorf("could not commit tx to get messages: %v", err))
return
}
go func() {
if err = updateMessagesReadAt(nil, authUserID, conversationID); err != nil {
log.Printf("could not update messages read at: %v\n", err)
}
}()
respond(w, messages, http.StatusOK)
}
```
First, it begins an SQL transaction in read-only mode, checks for the participant's existence, and queries all the messages. In each message, we use the current authenticated user ID to know whether the user owns the message (`mine`). Then it commits the transaction, updates the participant `messages_read_at` in a goroutine, and responds with the messages.
### Read Messages
This endpoint handles POST requests to `/api/conversations/{conversationID}/read_messages`. It has no request or response body. In the frontend we'll make this request each time a new message arrives in the realtime stream.
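For completeness, a hypothetical request (the conversation ID and token are placeholders); a successful call returns `204 No Content`:

```
# Hypothetical example request; no JSON body is needed for this endpoint.
curl -X POST http://localhost:3000/api/conversations/<conversationID>/read_messages \
  -H 'Authorization: Bearer <token>'
```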
```
func readMessages(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
conversationID := way.Param(ctx, "conversationID")
if err := updateMessagesReadAt(ctx, authUserID, conversationID); err != nil {
respondError(w, fmt.Errorf("could not update messages read at: %v", err))
return
}
w.WriteHeader(http.StatusNoContent)
}
```
It uses the same function we've been using to update the participant `messages_read_at`.
* * *
That concludes it. Realtime messages are the only part left in the backend. Wait for it in the next post.
[Source Code][4]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-messages/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,175 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Realtime Messages)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Realtime Messages
======
This post is the 5th in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
* [Part 4: Messages][4]
For realtime messages we'll use [Server-Sent Events][5]. This is an open connection over which we can stream data. We'll have an endpoint where the user subscribes to all the messages sent to them.
### Message Clients
Before the HTTP part, let's code a map to hold all the clients listening for messages. Initialize it globally like so:
```
type MessageClient struct {
Messages chan Message
UserID string
}
var messageClients sync.Map
```
### New Message Created
Remember in the [last post][4], when we created the message, we left a “TODO” comment. There we'll dispatch a goroutine with this function.
```
go messageCreated(message)
```
Insert that line just where we left the comment.
```
func messageCreated(message Message) error {
if err := db.QueryRow(`
SELECT user_id FROM participants
WHERE user_id != $1 and conversation_id = $2
`, message.UserID, message.ConversationID).
Scan(&message.ReceiverID); err != nil {
return err
}
go broadcastMessage(message)
return nil
}
func broadcastMessage(message Message) {
messageClients.Range(func(key, _ interface{}) bool {
client := key.(*MessageClient)
if client.UserID == message.ReceiverID {
client.Messages <- message
}
return true
})
}
```
The function queries for the recipient ID (the other participant's ID) and sends the message to every connected client that belongs to that recipient.
### Subscribe to Messages
Let's go to the `main()` function and add this route:
```
router.HandleFunc("GET", "/api/messages", guard(subscribeToMessages))
```
This endpoint handles GET requests on `/api/messages`. The request should be an [EventSource][6] connection. It responds with an event stream in which the data is JSON formatted.
```
func subscribeToMessages(w http.ResponseWriter, r *http.Request) {
if a := r.Header.Get("Accept"); !strings.Contains(a, "text/event-stream") {
http.Error(w, "This endpoint requires an EventSource connection", http.StatusNotAcceptable)
return
}
f, ok := w.(http.Flusher)
if !ok {
respondError(w, errors.New("streaming unsupported"))
return
}
ctx := r.Context()
authUserID := ctx.Value(keyAuthUserID).(string)
h := w.Header()
h.Set("Cache-Control", "no-cache")
h.Set("Connection", "keep-alive")
h.Set("Content-Type", "text/event-stream")
messages := make(chan Message)
defer close(messages)
client := &MessageClient{Messages: messages, UserID: authUserID}
messageClients.Store(client, nil)
defer messageClients.Delete(client)
for {
select {
case <-ctx.Done():
return
case message := <-messages:
if b, err := json.Marshal(message); err != nil {
log.Printf("could not marshall message: %v\n", err)
fmt.Fprintf(w, "event: error\ndata: %v\n\n", err)
} else {
fmt.Fprintf(w, "data: %s\n\n", b)
}
f.Flush()
}
}
}
```
First it checks for the correct request headers and that the server supports streaming. We create a channel of messages, build a client with it, and store it in the clients map. Each time a new message is created, it goes into this channel, so we can read from it with a `for-select` loop.
Server-Sent Events uses this format to send data:
```
data: some data here\n\n
```
We are sending it in JSON format:
```
data: {"foo":"bar"}\n\n
```
We are using `fmt.Fprintf()` to write to the response writer in this format, flushing the data in each iteration of the loop.
This loops until the connection is closed, using the request context. We deferred closing the channel and deleting the client, so when the loop ends, the channel is closed and the client won't receive any more messages.
As an aside, the JavaScript API to work with Server-Sent Events (EventSource) doesn't support setting custom headers 😒 So we cannot set `Authorization: Bearer <token>`, and that's the reason why the `guard()` middleware also reads the token from the URL query string.
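To make that concrete, here is a minimal client-side sketch (an assumption of how the frontend might subscribe; the actual wrapper is coded in a later part):

```
// Minimal sketch: the token travels in the query string because EventSource
// cannot send an Authorization header.
const token = localStorage.getItem('token')
const eventSource = new EventSource(`/api/messages?token=${encodeURIComponent(token)}`)
eventSource.onmessage = ev => {
  const message = JSON.parse(ev.data)
  console.log('new message received:', message)
}
```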
* * *
That concludes the realtime messages. I'd like to say that's everything in the backend, but to code the frontend I'll add one more endpoint to log in. A login that will be just for development.
[Source Code][7]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
[5]: https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events
[6]: https://developer.mozilla.org/en-US/docs/Web/API/EventSource
[7]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,145 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Development Login)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-dev-login/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Development Login
======
This post is the 6th in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
* [Part 4: Messages][4]
* [Part 5: Realtime Messages][5]
We already implemented login through GitHub, but if we want to play around with the app, we need a couple of users to test it. In this post we'll add an endpoint to log in as any user just by giving a username. This endpoint will be just for development.
Start by adding this route in the `main()` function.
```
router.HandleFunc("POST", "/api/login", requireJSON(login))
```
### Login
This function handles POST requests to `/api/login` with a JSON body containing just a username, and returns the authenticated user, a token, and its expiration date in JSON format.
```
func login(w http.ResponseWriter, r *http.Request) {
if origin.Hostname() != "localhost" {
http.NotFound(w, r)
return
}
var input struct {
Username string `json:"username"`
}
if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
defer r.Body.Close()
var user User
if err := db.QueryRowContext(r.Context(), `
SELECT id, avatar_url
FROM users
WHERE username = $1
`, input.Username).Scan(
&user.ID,
&user.AvatarURL,
); err == sql.ErrNoRows {
http.Error(w, "User not found", http.StatusNotFound)
return
} else if err != nil {
respondError(w, fmt.Errorf("could not query user: %v", err))
return
}
user.Username = input.Username
exp := time.Now().Add(jwtLifetime)
token, err := issueToken(user.ID, exp)
if err != nil {
respondError(w, fmt.Errorf("could not create token: %v", err))
return
}
respond(w, map[string]interface{}{
"authUser": user,
"token": token,
"expiresAt": exp,
}, http.StatusOK)
}
```
First it checks that we are on localhost; otherwise it responds with `404 Not Found`. It decodes the body, skipping validation since this is just for development. Then it queries the database for a user with the given username; if none is found, it returns a `404 Not Found` error. Then it issues a new JSON web token using the user ID as Subject.
```
func issueToken(subject string, exp time.Time) (string, error) {
token, err := jwtSigner.Encode(jwt.Claims{
Subject: subject,
Expiration: json.Number(strconv.FormatInt(exp.Unix(), 10)),
})
if err != nil {
return "", err
}
return string(token), nil
}
```
The function does the same thing we did [previously][2]; I just extracted it to reuse the code.
After creating the token, it responds with the user, the token and its expiration date.
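For example, assuming the server runs locally and a user named “john” exists (we seed some users next), you could log in with:

```
# Hypothetical example; this endpoint only works on localhost.
# The response is JSON with "authUser", "token" and "expiresAt" fields.
curl -X POST http://localhost:3000/api/login \
  -H 'Content-Type: application/json' \
  -d '{"username": "john"}'
```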
### Seed Users
Now you can add some users to the database to play with.
```
INSERT INTO users (id, username) VALUES
(1, 'john'),
(2, 'jane');
```
You can save it to a file and pipe it to the Cockroach CLI.
```
cat seed_users.sql | cockroach sql --insecure -d messenger
```
* * *
That's it. Once you deploy the code to production and use your own domain, this login function won't be available.
This post concludes the backend.
[Source Code][6]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
[6]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,459 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Access Page)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-access-page/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Access Page
======
This post is the 7th in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
* [Part 4: Messages][4]
* [Part 5: Realtime Messages][5]
* [Part 6: Development Login][6]
Now that we're done with the backend, let's move to the frontend. I will go with a single-page application.
Let's start by creating a file `static/index.html` with the following content.
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Messenger</title>
<link rel="shortcut icon" href="data:,">
<link rel="stylesheet" href="/styles.css">
<script src="/main.js" type="module"></script>
</head>
<body></body>
</html>
```
This HTML file must be served for every URL, and JavaScript will take care of rendering the correct page.
So let's go to `main.go` for a moment and, in the `main()` function, add the following route:
```
router.Handle("GET", "/...", http.FileServer(SPAFileSystem{http.Dir("static")}))
type SPAFileSystem struct {
fs http.FileSystem
}
func (spa SPAFileSystem) Open(name string) (http.File, error) {
f, err := spa.fs.Open(name)
if err != nil {
return spa.fs.Open("index.html")
}
return f, nil
}
```
We use a custom file system so instead of returning `404 Not Found` for unknown URLs, it serves the `index.html`.
### Router
In the `index.html` we load two files: `styles.css` and `main.js`. I leave the styling to your taste.
Let's move to `main.js`. Create a `static/main.js` file with the following content:
```
import { guard } from './auth.js'
import Router from './router.js'
let currentPage
const disconnect = new CustomEvent('disconnect')
const router = new Router()
router.handle('/', guard(view('home'), view('access')))
router.handle('/callback', view('callback'))
router.handle(/^\/conversations\/([^\/]+)$/, guard(view('conversation'), view('access')))
router.handle(/^\//, view('not-found'))
router.install(async result => {
document.body.innerHTML = ''
if (currentPage instanceof Node) {
currentPage.dispatchEvent(disconnect)
}
currentPage = await result
if (currentPage instanceof Node) {
document.body.appendChild(currentPage)
}
})
function view(pageName) {
return (...args) => import(`/pages/${pageName}-page.js`)
.then(m => m.default(...args))
}
```
If you are a follower of this blog, you already know how this works. That router is the one shown [here][7]. Just download it from [@nicolasparada/router][8] and save it to `static/router.js`.
We registered four routes. At the root `/` we show the home or access page depending on whether the user is authenticated. At `/callback` we show the callback page. On `/conversations/{conversationID}` we show the conversation or access page, again depending on whether the user is authenticated, and for every other URL we show a not-found page.
We tell the router to render the result to the document body and dispatch a `disconnect` event to each page before leaving.
We have each page in a different file and we import them with the new dynamic `import()`.
### Auth
`guard()` is a function that, given two functions, executes the first one if the user is authenticated, or the second one if not. It comes from `auth.js`, so let's create a `static/auth.js` file with the following content:
```
export function isAuthenticated() {
const token = localStorage.getItem('token')
const expiresAtItem = localStorage.getItem('expires_at')
if (token === null || expiresAtItem === null) {
return false
}
const expiresAt = new Date(expiresAtItem)
if (isNaN(expiresAt.valueOf()) || expiresAt <= new Date()) {
return false
}
return true
}
export function guard(fn1, fn2) {
return (...args) => isAuthenticated()
? fn1(...args)
: fn2(...args)
}
export function getAuthUser() {
if (!isAuthenticated()) {
return null
}
const authUser = localStorage.getItem('auth_user')
if (authUser === null) {
return null
}
try {
return JSON.parse(authUser)
} catch (_) {
return null
}
}
```
`isAuthenticated()` checks for `token` and `expires_at` in localStorage to tell whether the user is authenticated. `getAuthUser()` gets the authenticated user from localStorage.
When we log in, we'll save all that data to localStorage, so this will make sense then.
### Access Page
![access page screenshot][9]
Let's start with the access page. Create a file `static/pages/access-page.js` with the following content:
```
const template = document.createElement('template')
template.innerHTML = `
<h1>Messenger</h1>
<a href="/api/oauth/github" onclick="event.stopPropagation()">Access with GitHub</a>
`
export default function accessPage() {
return template.content
}
```
Because the router intercepts all the link clicks to do its navigation, we must prevent the event propagation for this link in particular.
Clicking on that link will redirect us to the backend, then to GitHub, then to the backend and then to the frontend again; to the callback page.
### Callback Page
Create the file `static/pages/callback-page.js` with the following content:
```
import http from '../http.js'
import { navigate } from '../router.js'
export default async function callbackPage() {
const url = new URL(location.toString())
const token = url.searchParams.get('token')
const expiresAt = url.searchParams.get('expires_at')
try {
if (token === null || expiresAt === null) {
throw new Error('Invalid URL')
}
const authUser = await getAuthUser(token)
localStorage.setItem('auth_user', JSON.stringify(authUser))
localStorage.setItem('token', token)
localStorage.setItem('expires_at', expiresAt)
} catch (err) {
alert(err.message)
} finally {
navigate('/', true)
}
}
function getAuthUser(token) {
return http.get('/api/auth_user', { authorization: `Bearer ${token}` })
}
```
The callback page doesn't render anything. It's an async function that does a GET request to `/api/auth_user` using the token from the URL query string and saves all the data to localStorage. Then it redirects to `/`.
### HTTP
There is an HTTP module. Create a `static/http.js` file with the following content:
```
import { isAuthenticated } from './auth.js'
async function handleResponse(res) {
const body = await res.clone().json().catch(() => res.text())
if (res.status === 401) {
localStorage.removeItem('auth_user')
localStorage.removeItem('token')
localStorage.removeItem('expires_at')
}
if (!res.ok) {
const message = typeof body === 'object' && body !== null && 'message' in body
? body.message
: typeof body === 'string' && body !== ''
? body
: res.statusText
throw Object.assign(new Error(message), {
url: res.url,
statusCode: res.status,
statusText: res.statusText,
headers: res.headers,
body,
})
}
return body
}
function getAuthHeader() {
return isAuthenticated()
? { authorization: `Bearer ${localStorage.getItem('token')}` }
: {}
}
export default {
get(url, headers) {
return fetch(url, {
headers: Object.assign(getAuthHeader(), headers),
}).then(handleResponse)
},
post(url, body, headers) {
const init = {
method: 'POST',
headers: getAuthHeader(),
}
if (typeof body === 'object' && body !== null) {
init.body = JSON.stringify(body)
init.headers['content-type'] = 'application/json; charset=utf-8'
}
Object.assign(init.headers, headers)
return fetch(url, init).then(handleResponse)
},
subscribe(url, callback) {
const urlWithToken = new URL(url, location.origin)
if (isAuthenticated()) {
urlWithToken.searchParams.set('token', localStorage.getItem('token'))
}
const eventSource = new EventSource(urlWithToken.toString())
eventSource.onmessage = ev => {
let data
try {
data = JSON.parse(ev.data)
} catch (err) {
console.error('could not parse message data as JSON:', err)
return
}
callback(data)
}
const unsubscribe = () => {
eventSource.close()
}
return unsubscribe
},
}
```
This module is a wrapper around the [fetch][10] and [EventSource][11] APIs. The most important part is that it adds the JSON web token to the requests.
### Home Page
![home page screenshot][12]
So, when the user logs in, the home page will be shown. Create a `static/pages/home-page.js` file with the following content:
```
import { getAuthUser } from '../auth.js'
import { avatar } from '../shared.js'
export default function homePage() {
const authUser = getAuthUser()
const template = document.createElement('template')
template.innerHTML = `
<div>
<div>
${avatar(authUser)}
<span>${authUser.username}</span>
</div>
<button id="logout-button">Logout</button>
</div>
<!-- conversation form here -->
<!-- conversation list here -->
`
const page = template.content
page.getElementById('logout-button').onclick = onLogoutClick
return page
}
function onLogoutClick() {
localStorage.clear()
location.reload()
}
```
For this post, this is the only content we render on the home page. We show the current authenticated user and a logout button.
When the user clicks logout, we clear everything inside localStorage and reload the page.
### Avatar
That `avatar()` function shows the user's avatar. Because it's used in more than one place, I moved it to a `shared.js` file. Create the file `static/shared.js` with the following content:
```
export function avatar(user) {
return user.avatarUrl === null
? `<figure class="avatar" data-initial="${user.username[0]}"></figure>`
: `<img class="avatar" src="${user.avatarUrl}" alt="${user.username}'s avatar">`
}
```
We use a small figure with the user's initial in case the avatar URL is null.
You can show the initial with a little CSS using the `attr()` function.
```
.avatar[data-initial]::after {
content: attr(data-initial);
}
```
### Development Login
![access page with login form screenshot][13]
In the previous post we coded a login for development. Let's add a form for that to the access page. Go to `static/pages/access-page.js` and modify it a little.
```
import http from '../http.js'
const template = document.createElement('template')
template.innerHTML = `
<h1>Messenger</h1>
<form id="login-form">
<input type="text" placeholder="Username" required>
<button>Login</button>
</form>
<a href="/api/oauth/github" onclick="event.stopPropagation()">Access with GitHub</a>
`
export default function accessPage() {
const page = template.content.cloneNode(true)
page.getElementById('login-form').onsubmit = onLoginSubmit
return page
}
async function onLoginSubmit(ev) {
ev.preventDefault()
const form = ev.currentTarget
const input = form.querySelector('input')
const submitButton = form.querySelector('button')
input.disabled = true
submitButton.disabled = true
try {
const payload = await login(input.value)
input.value = ''
localStorage.setItem('auth_user', JSON.stringify(payload.authUser))
localStorage.setItem('token', payload.token)
localStorage.setItem('expires_at', payload.expiresAt)
location.reload()
} catch (err) {
alert(err.message)
setTimeout(() => {
input.focus()
}, 0)
} finally {
input.disabled = false
submitButton.disabled = false
}
}
function login(username) {
return http.post('/api/login', { username })
}
```
I added a login form. When the user submits it, we do a POST request to `/api/login` with the username, save all the data to localStorage, and reload the page.
Remember to remove this form once you are done with the frontend.
* * *
That's all for this post. In the next one, we'll continue with the home page, adding a form to start conversations and displaying a list with the latest ones.
[Source Code][14]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-access-page/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
[6]: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
[7]: https://nicolasparada.netlify.com/posts/js-router/
[8]: https://unpkg.com/@nicolasparada/router
[9]: https://nicolasparada.netlify.com/img/go-messenger-access-page/access-page.png
[10]: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
[11]: https://developer.mozilla.org/en-US/docs/Web/API/EventSource
[12]: https://nicolasparada.netlify.com/img/go-messenger-access-page/home-page.png
[13]: https://nicolasparada.netlify.com/img/go-messenger-access-page/access-page-v2.png
[14]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,255 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Home Page)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-home-page/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Home Page
======
This post is the 8th in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
* [Part 4: Messages][4]
* [Part 5: Realtime Messages][5]
* [Part 6: Development Login][6]
* [Part 7: Access Page][7]
Continuing the frontend, let's finish the home page in this post. We'll add a form to start conversations and a list with the latest ones.
### Conversation Form
![conversation form screenshot][8]
In the `static/pages/home-page.js` file, add some markup to the HTML view.
```
<form id="conversation-form">
<input type="search" placeholder="Start conversation with..." required>
</form>
```
Add that form just below the section in which we displayed the auth user and logout button.
```
page.getElementById('conversation-form').onsubmit = onConversationSubmit
```
Now we can listen to the “submit” event to create the conversation.
```
import http from '../http.js'
import { navigate } from '../router.js'
async function onConversationSubmit(ev) {
ev.preventDefault()
const form = ev.currentTarget
const input = form.querySelector('input')
input.disabled = true
try {
const conversation = await createConversation(input.value)
input.value = ''
navigate('/conversations/' + conversation.id)
} catch (err) {
if (err.statusCode === 422) {
input.setCustomValidity(err.body.errors.username)
} else {
alert(err.message)
}
setTimeout(() => {
input.focus()
}, 0)
} finally {
input.disabled = false
}
}
function createConversation(username) {
return http.post('/api/conversations', { username })
}
```
On submit, we do a POST request to `/api/conversations` with the username and redirect to the conversation page (which we'll build in the next post).
### Conversation List
![conversation list screenshot][9]
In the same file, we are going to make the `homePage()` function async to load the conversations first.
```
export default async function homePage() {
const conversations = await getConversations().catch(err => {
console.error(err)
return []
})
/*...*/
}
function getConversations() {
return http.get('/api/conversations')
}
```
Then, add a list in the markup to render conversations there.
```
<ol id="conversations"></ol>
```
Add it just below the current markup.
```
const conversationsOList = page.getElementById('conversations')
for (const conversation of conversations) {
conversationsOList.appendChild(renderConversation(conversation))
}
```
So we can append each conversation to the list.
```
import { avatar, escapeHTML } from '../shared.js'
function renderConversation(conversation) {
const messageContent = escapeHTML(conversation.lastMessage.content)
const messageDate = new Date(conversation.lastMessage.createdAt).toLocaleString()
const li = document.createElement('li')
li.dataset['id'] = conversation.id
if (conversation.hasUnreadMessages) {
li.classList.add('has-unread-messages')
}
li.innerHTML = `
<a href="/conversations/${conversation.id}">
<div>
${avatar(conversation.otherParticipant)}
<span>${conversation.otherParticipant.username}</span>
</div>
<div>
<p>${messageContent}</p>
<time>${messageDate}</time>
</div>
</a>
`
return li
}
```
Each conversation item contains a link to the conversation page and displays the other participant's info and a preview of the last message. Also, you can use `.hasUnreadMessages` to add a class to the item and do some styling with CSS, maybe a bolder font or an accent color.
Note that we're escaping the message content. That function comes from `static/shared.js`:
```
export function escapeHTML(str) {
return str
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;')
.replace(/'/g, '&#039;')
}
```
That prevents the message the user wrote from being rendered as HTML. If the user happens to write something like:
```
<script>alert('lololo')</script>
```
It would be very annoying because that script would be executed 😅
So yeah, always remember to escape content from untrusted sources.
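For instance, running the helper over that exact string turns the markup into inert text:
```
escapeHTML("<script>alert('lololo')</script>")
// → "&lt;script&gt;alert(&#039;lololo&#039;)&lt;/script&gt;"
```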
### Messages Subscription
Last but not least, I want to subscribe to the message stream here.
```
const unsubscribe = subscribeToMessages(onMessageArrive)
page.addEventListener('disconnect', unsubscribe)
```
Add that line in the `homePage()` function.
```
function subscribeToMessages(cb) {
return http.subscribe('/api/messages', cb)
}
```
The `subscribe()` function returns a function that, once called, closes the underlying connection. That's why I pass it to the “disconnect” event, so the event stream is closed when the user leaves the page.
```
async function onMessageArrive(message) {
const conversationLI = document.querySelector(`li[data-id="${message.conversationID}"]`)
if (conversationLI !== null) {
conversationLI.classList.add('has-unread-messages')
conversationLI.querySelector('a > div > p').textContent = message.content
conversationLI.querySelector('a > div > time').textContent = new Date(message.createdAt).toLocaleString()
return
}
let conversation
try {
conversation = await getConversation(message.conversationID)
conversation.lastMessage = message
} catch (err) {
console.error(err)
return
}
const conversationsOList = document.getElementById('conversations')
if (conversationsOList === null) {
return
}
conversationsOList.insertAdjacentElement('afterbegin', renderConversation(conversation))
}
function getConversation(id) {
return http.get('/api/conversations/' + id)
}
```
Every time a new message arrives, we go and query for the conversation item in the DOM. If found, we add the `has-unread-messages` class to the item, and update the view. If not found, it means the message is from a new conversation created just now. We go and do a GET request to `/api/conversations/{conversationID}` to get the conversation in which the message was created and prepend it to the conversation list.
* * *
That covers the home page 😊
On the next post well code the conversation page.
[Source Code][10]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-home-page/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
[6]: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
[7]: https://nicolasparada.netlify.com/posts/go-messenger-access-page/
[8]: https://nicolasparada.netlify.com/img/go-messenger-home-page/conversation-form.png
[9]: https://nicolasparada.netlify.com/img/go-messenger-home-page/conversation-list.png
[10]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -0,0 +1,269 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Conversation Page)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-conversation-page/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Conversation Page
======
This post is the 9th and last in a series:
* [Part 1: Schema][1]
* [Part 2: OAuth][2]
* [Part 3: Conversations][3]
* [Part 4: Messages][4]
* [Part 5: Realtime Messages][5]
* [Part 6: Development Login][6]
* [Part 7: Access Page][7]
* [Part 8: Home Page][8]
In this post we'll code the conversation page. This page is the chat between the two users. At the top we'll show info about the other participant, below it a list of the latest messages, and a message form at the bottom.
### Chat heading
![chat heading screenshot][9]
Let's start by creating the file `static/pages/conversation-page.js` with the following content:
```
import http from '../http.js'
import { navigate } from '../router.js'
import { avatar, escapeHTML } from '../shared.js'
export default async function conversationPage(conversationID) {
let conversation
try {
conversation = await getConversation(conversationID)
} catch (err) {
alert(err.message)
navigate('/', true)
return
}
const template = document.createElement('template')
template.innerHTML = `
<div>
<a href="/">← Back</a>
${avatar(conversation.otherParticipant)}
<span>${conversation.otherParticipant.username}</span>
</div>
<!-- message list here -->
<!-- message form here -->
`
const page = template.content
return page
}
function getConversation(id) {
return http.get('/api/conversations/' + id)
}
```
This page receives the conversation ID the router extracted from the URL.
First it does a GET request to `/api/conversations/{conversationID}` to get info about the conversation. In case of error, we show it and redirect back to `/`. Then we render info about the other participant.
### Conversation List
![chat heading screenshot][10]
We'll also fetch the latest messages to display them.
```
let conversation, messages
try {
[conversation, messages] = await Promise.all([
getConversation(conversationID),
getMessages(conversationID),
])
}
```
Update the `conversationPage()` function to fetch the messages too. We use `Promise.all()` to do both requests at the same time.
```
function getMessages(conversationID) {
return http.get(`/api/conversations/${conversationID}/messages`)
}
```
A GET request to `/api/conversations/{conversationID}/messages` gets the latest messages of the conversation.
```
<ol id="messages"></ol>
```
Now, add that list to the markup.
```
const messagesOList = page.getElementById('messages')
for (const message of messages.reverse()) {
messagesOList.appendChild(renderMessage(message))
}
```
So we can append messages to the list. We show them in reverse order.
```
function renderMessage(message) {
const messageContent = escapeHTML(message.content)
const messageDate = new Date(message.createdAt).toLocaleString()
const li = document.createElement('li')
if (message.mine) {
li.classList.add('owned')
}
li.innerHTML = `
<p>${messageContent}</p>
<time>${messageDate}</time>
`
return li
}
```
Each message item displays the message content itself with its timestamp. Using `.mine`, we can add a different class to the item so you can, for example, align your own messages to the right.
### Message Form
![chat heading screenshot][11]
```
<form id="message-form">
<input type="text" placeholder="Type something" maxlength="480" required>
<button>Send</button>
</form>
```
Add that form to the current markup.
```
page.getElementById('message-form').onsubmit = messageSubmitter(conversationID)
```
Attach an event listener to the “submit” event.
```
function messageSubmitter(conversationID) {
return async ev => {
ev.preventDefault()
const form = ev.currentTarget
const input = form.querySelector('input')
const submitButton = form.querySelector('button')
input.disabled = true
submitButton.disabled = true
try {
const message = await createMessage(input.value, conversationID)
input.value = ''
const messagesOList = document.getElementById('messages')
if (messagesOList === null) {
return
}
messagesOList.appendChild(renderMessage(message))
} catch (err) {
if (err.statusCode === 422) {
input.setCustomValidity(err.body.errors.content)
} else {
alert(err.message)
}
} finally {
input.disabled = false
submitButton.disabled = false
setTimeout(() => {
input.focus()
}, 0)
}
}
}
function createMessage(content, conversationID) {
return http.post(`/api/conversations/${conversationID}/messages`, { content })
}
```
We make use of [partial application][12] to have the conversation ID in the “submit” event handler. It takes the message content from the input and does a POST request to `/api/conversations/{conversationID}/messages` with it. Then it appends the newly created message to the list.
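If partial application is new to you, here is the idea in isolation; a generic sketch unrelated to the messenger code:
```
// greeter() takes its first argument now and returns a function
// that takes the remaining argument later (captured in a closure).
function greeter(greeting) {
  return name => `${greeting}, ${name}!`
}

const sayHi = greeter('Hi') // 'Hi' is fixed
console.log(sayHi('Ana'))   // → "Hi, Ana!"
console.log(sayHi('John'))  // → "Hi, John!"
```
`messageSubmitter(conversationID)` works the same way: it fixes the conversation ID and returns the actual event handler.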
### Messages Subscription
To make it realtime, we'll subscribe to the message stream on this page too.
```
page.addEventListener('disconnect', subscribeToMessages(messageArriver(conversationID)))
```
Add that line in the `conversationPage()` function.
```
function subscribeToMessages(cb) {
return http.subscribe('/api/messages', cb)
}
function messageArriver(conversationID) {
return message => {
if (message.conversationID !== conversationID) {
return
}
const messagesOList = document.getElementById('messages')
if (messagesOList === null) {
return
}
messagesOList.appendChild(renderMessage(message))
readMessages(message.conversationID)
}
}
function readMessages(conversationID) {
return http.post(`/api/conversations/${conversationID}/read_messages`)
}
```
We also make use of partial application to have the conversation ID here.
When a new message arrives, first we check if it's from this conversation. If it is, we append a message item to the list and do a POST request to `/api/conversations/{conversationID}/read_messages` to update the last time the participant read messages.
* * *
That concludes this series. The messenger app is now functional.
~~I'll add pagination on the conversation and message lists, and also user searching, before sharing the source code. I'll update once it's ready, along with a hosted demo 👨‍💻~~
[Source Code][13] • [Demo][14]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-conversation-page/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
[6]: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
[7]: https://nicolasparada.netlify.com/posts/go-messenger-access-page/
[8]: https://nicolasparada.netlify.com/posts/go-messenger-home-page/
[9]: https://nicolasparada.netlify.com/img/go-messenger-conversation-page/heading.png
[10]: https://nicolasparada.netlify.com/img/go-messenger-conversation-page/list.png
[11]: https://nicolasparada.netlify.com/img/go-messenger-conversation-page/form.png
[12]: https://en.wikipedia.org/wiki/Partial_application
[13]: https://github.com/nicolasparada/go-messenger-demo
[14]: https://go-messenger-demo.herokuapp.com/

View File

@ -1,105 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Linux kernel: Top 5 innovations)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-top-5-innovations)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/mhaydenhttps://opensource.com/users/mralexjuarez)
The Linux kernel: Top 5 innovations
======
Want to know what the actual (not buzzword) innovations are when it
comes to the Linux kernel? Read on.
![Penguin with green background][1]
The word _innovation_ gets bandied about in the tech industry almost as much as _revolution_, so it can be difficult to differentiate hyperbole from something thats actually exciting. The Linux kernel has been called innovative, but then again its also been called the biggest hack in modern computing, a monolith in a micro world.
Setting aside marketing and modeling, Linux is arguably the most popular kernel of the open source world, and its introduced some real game-changers over its nearly 30-year life span.
### Cgroups (2.6.24)
Back in 2007, Paul Menage and Rohit Seth got the esoteric [_control groups_ (cgroups)][2] feature added to the kernel (the current implementation of cgroups is a rewrite by Tejun Heo.) This new technology was initially used as a way to ensure, essentially, quality of service for a specific set of tasks.
For example, you could create a control group definition (cgroup) for all tasks associated with your web server, another cgroup for routine backups, and yet another for general operating system requirements. You could then control a percentage of resources for each cgroup, such that your OS and web server gets the bulk of system resources while your backup processes have access to whatever is left.
What cgroups has become most famous for, though, is its role as the technology driving the cloud today: containers. In fact, cgroups were originally named [process containers][3]. It was no great surprise when they were adopted by projects like [LXC][4], [CoreOS][5], and Docker.
The floodgates being opened, the term _containers_ justly became synonymous with Linux, and the concept of microservice-style cloud-based “apps” quickly became the norm. These days, its hard to get away from cgroups, theyre so prevalent. Every large-scale infrastructure (and probably your laptop, if you run Linux) takes advantage of cgroups in a meaningful way, making your computing experience more manageable and more flexible than ever.
For example, you might already have installed [Flathub][6] or [Flatpak][7] on your computer, or maybe youve started using [Kubernetes][8] and/or [OpenShift][9] at work. Regardless, if the term “containers” is still hazy for you, you can gain a hands-on understanding of containers from [Behind the scenes with Linux containers][10].
### LKMM (4.17)
In 2018, the hard work of Jade Alglave, Alan Stern, Andrea Parri, Luc Maranget, Paul McKenney, and several others, got merged into the mainline Linux kernel to provide formal memory models. The Linux Kernel Memory [Consistency] Model (LKMM) subsystem is a set of tools describing the Linux memory coherency model, as well as producing _litmus tests_ (**klitmus**, specifically) for testing.
As systems become more complex in physical design (more CPU cores added, larger caches and more RAM, and so on), it becomes harder for them to know which address space is required by which CPU, and when. For example, if CPU0 needs to write data to a shared variable in memory, and CPU1 needs to read that value, then CPU0 must write before CPU1 attempts to read. Similarly, if values are written in one order to memory, then there's an expectation that they are also read in that same order, regardless of which CPU or CPUs are doing the reading.
Even on a single CPU, memory management requires a specific task order. A simple action such as **x = y** requires a CPU to load the value of **y** from memory, and then store that value in **x**. Placing the value stored in **y** into the **x** variable cannot occur _before_ the CPU has read the value from memory. There are also address dependencies: **x[n] = 6** requires that **n** is loaded before the CPU can store the value of six.
LKMM helps identify and trace these memory patterns in code. It does this in part with a tool called **herd**, which defines the constraints imposed by a memory model (in the form of logical axioms), and then enumerates all possible outcomes consistent with these constraints.
### Low-latency patch (2.6.38)
Long ago, in the days before 2011, if you wanted to do "serious" [multimedia work on Linux][11], you had to obtain a low-latency kernel. This mostly applied to [audio recording][12] while adding lots of real-time effects (such as singing into a microphone and adding reverb, and hearing your voice in your headset with no noticeable delay). There were distributions, such as [Ubuntu Studio][13], that reliably provided such a kernel, so in practice it wasn't much of a hurdle, just a significant caveat when choosing your distribution as an artist.
However, if you werent using Ubuntu Studio, or you had some need to update your kernel before your distribution got around to it, you had to go to the rt-patches web page, download the kernel patches, apply them to your kernel source code, compile, and install manually.
And then, with the release of kernel version 2.6.38, this process was all over. The Linux kernel suddenly, as if by magic, had low-latency code (according to benchmarks, latency decreased by a factor of 10, at least) built-in by default. No more downloading patches, no more compiling. Everything just worked, and all because of a small 200-line patch implemented by Mike Galbraith.
For open source multimedia artists the world over, it was a game-changer. Things got so good from 2011 on that in 2016, I challenged myself to [build a Digital Audio Workstation (DAW) on a Raspberry Pi v1 (model B)][14] and found that it worked surprisingly well.
### RCU (2.5)
RCU, or Read-Copy-Update, is a system defined in computer science that allows multiple processor threads to read from shared memory. It does this by deferring updates, but also marking them as updated, to ensure that the datas consumers read the latest version. Effectively, this means that reads happen concurrently with updates.
The typical RCU cycle is a little like this:
1. Remove pointers to data to prevent other readers from referencing it.
2. Wait for readers to complete their critical processes.
3. Reclaim the memory space.
Dividing the update stage into removal and reclamation phases means the updater performs the removal immediately while deferring reclamation until all active readers are complete (either by blocking them or by registering a callback to be invoked upon completion).
While the concept of read-copy-update was not invented for the Linux kernel, its implementation in Linux is a defining example of the technology.
### Collaboration (0.01)
The final answer to the question of what the Linux kernel innovated will always be, above all else, collaboration. Call it good timing, call it technical superiority, call it hackability, or just call it open source, but the Linux kernel and the many projects that it enabled is a glowing example of collaboration and cooperation.
And it goes well beyond just the kernel. People from all walks of life have contributed to open source, arguably _because_ of the Linux kernel. Linux was, and remains to this day, a major force of [Free Software][15], inspiring users to bring their code, art, ideas, or just themselves, to a global, productive, and diverse community of humans.
### Whats your favorite innovation?
This list is biased toward my own interests: containers, non-uniform memory access (NUMA), and multimedia. Ive surely left your favorite kernel innovation off the list. Tell me about it in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/linux-kernel-top-5-innovations
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/mhaydenhttps://opensource.com/users/mralexjuarez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://en.wikipedia.org/wiki/Cgroups
[3]: https://lkml.org/lkml/2006/10/20/251
[4]: https://linuxcontainers.org
[5]: https://coreos.com/
[6]: http://flathub.org
[7]: http://flatpak.org
[8]: http://kubernetes.io
[9]: https://www.redhat.com/sysadmin/learn-openshift-minishift
[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
[11]: http://slackermedia.info
[12]: https://opensource.com/article/17/6/qtractor-audio
[13]: http://ubuntustudio.org
[14]: https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker
[15]: http://fsf.org

View File

@ -0,0 +1,366 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Emacs Series Exploring ts.el)
[#]: via: (https://opensourceforu.com/2019/09/the-emacs-series-exploring-ts-el/)
[#]: author: (Shakthi Kannan https://opensourceforu.com/author/shakthi-kannan/)
The Emacs Series Exploring ts.el
======
[![][1]][2]
_In this article, the author reviews the ts.el date and time library for Emacs. Written by Adam Porter, ts.el is still in the development phase and has been released under the GNU General Public License v3.0._
The ts.el package uses intuitive names for date and time functions. It internally uses UNIX timestamps and depends on both the dash and s Emacs libraries. The parts of the date are computed lazily and also cached for performance. The source code is available at _<https://github.com/alphapapa/ts.el>_. In this article, we will explore the API functions available from the ts.el library.
**Installation**
The package does not have a tagged release yet; hence, you should download it from <https://github.com/alphapapa/ts.el/blob/master/ts.el> and add it to your Emacs load path to use it. You should also have the dash and s libraries installed and loaded in your Emacs environment. You can then load the library using the following command:
```
(require 'ts)
```
**Usage**
Let us explore the various functions available to retrieve parts of the date from the ts.el library. When the examples were executed, the date was Friday July 5, 2019. The ts-dow function can be used to obtain the day of the week, as shown below:
```
(ts-dow (ts-now))
5
```
_ts-now_ is a Lisp construct that returns a ts struct set to the current time. It is defined in ts.el as follows:
```
(defsubst ts-now ()
  "Return `ts' struct set to now."
  (make-ts :unix (float-time)))
```
The day of the week starts from Monday (1) and hence Friday has the value of 5. An abbreviated form of the day can be fetched using the _ts-day-abbr_ function. In the following example, Friday is shortened to Fri.
```
(ts-day-abbr (ts-now))
"Fri"
```
The day of the week in full form can be obtained using the _ts-day-name_ function, as shown below:
```
(ts-day-name (ts-now))
"Friday"
```
The twelve months from January to December are numbered from 1 to 12 respectively. Hence, for the month of July, the index number is 7. This numeric value for the month can be retrieved using the ts-month API. For example:
```
(ts-month (ts-now))
7
```
If you want a three-character abbreviation for the months name, you can use the ts-month-abbr function as shown below:
```
(ts-month-abbr (ts-now))
"Jul"
```
The _ts-month-name_ function can be used to obtain the full name of the month. For example:
```
(ts-month-name (ts-now))
"July"
```
The _ts-day_ function returns the numeric day of the month. In our example the date was July 5, so it returns 5, as indicated below:
```
(ts-day (ts-now))
5
```
The _ts-year_ API returns the year. In our example, it is 2019 as shown below:
```
(ts-year (ts-now))
2019
```
The hour, minute and seconds can be retrieved using the _ts-hour, ts-minute_ and _ts-second_ functions, respectively. Examples of these functions are given below:
```
(ts-hour (ts-now))
18
(ts-minute (ts-now))
19
(ts-second (ts-now))
5
```
The UNIX timestamps are in UTC, by default. The _ts-tz-offset_ function returns the offset from UTC. Indian Standard Time (IST) is five and a half hours ahead of UTC, and hence this function returns +0530, as shown below:
```
(ts-tz-offset (ts-now))
"+0530"
```
The _ts-tz-abbr_ API returns an abbreviated form of the time zone. In our case, IST is returned for the Indian Standard Time.
```
(ts-tz-abbr (ts-now))
"IST"
```
The _ts-adjustf_ function applies the time adjustments passed to the timestamp and the _ts-format_ function formats the timestamp as a string. A couple of examples are given below:
```
(let ((ts (ts-now)))
(ts-adjustf ts day 1)
(ts-format nil ts))
"2019-07-06 18:23:24 +0530"
(let ((ts (ts-now)))
(ts-adjustf ts year 1 month 3 day 5)
(ts-format nil ts))
"2020-10-10 18:24:07 +0530"
```
You can use the _ts-dec_ function to decrement the timestamp. For example:
```
(ts-day-name (ts-dec day 1 (ts-now)))
"Thursday"
```
The threading macro syntax can also be used with the ts-dec function as shown below:
```
(->> (ts-now) (ts-dec day 2) ts-day-name)
"Wednesday"
```
A UNIX timestamp is the number of seconds that have elapsed since the UNIX epoch, January 1, 1970 (midnight UTC/GMT). The ts-unix function returns this epoch UNIX timestamp, as illustrated below:
```
(ts-unix (ts-adjust day -2 (ts-now)))
1562158551.0 ;; Wednesday, July 3, 2019 6:25:51 PM GMT+05:30
```
An hour has 3600 seconds and a day has 86400 seconds. You can compare epoch timestamps as shown in the following example:
```
(/ (- (ts-unix (ts-now))
(ts-unix (ts-adjust day -4 (ts-now))))
86400)
4
```
The _ts-difference_ function returns the difference between two timestamps, while the _ts-human-duration_ function returns the property list (_plist_) values of years, days, hours, minutes and seconds. For example:
```
(ts-human-duration
(ts-difference (ts-now)
(ts-dec day 3 (ts-now))))
(:years 0 :days 3 :hours 0 :minutes 0 :seconds 0)
```
A number of aliases are available for the hour, minute, second, year, month and day accessor functions. A few examples are given below:
```
(ts-hour (ts-now))
18
(ts-H (ts-now))
18
(ts-minute (ts-now))
46
(ts-min (ts-now))
46
(ts-M (ts-now))
46
(ts-second (ts-now))
16
(ts-sec (ts-now))
16
(ts-S (ts-now))
16
(ts-year (ts-now))
2019
(ts-Y (ts-now))
2019
(ts-month (ts-now))
7
(ts-m (ts-now))
7
(ts-day (ts-now))
5
(ts-d (ts-now))
5
```
You can parse a string into a timestamp object using the ts-parse function. For example:
```
(ts-format nil (ts-parse "Fri Dec 6 2019 18:48:00"))
"2019-12-06 18:48:00 +0530"
```
You can also format the difference between two timestamps in a human readable format as shown in the following example:
```
(ts-human-format-duration
(ts-difference (ts-now)
(ts-adjust day -1 hour -3 minute -2 second -4 (ts-now))))
"1 days, 3 hours, 2 minutes, 4 seconds"
```
The timestamp comparator operations are also defined in ts.el. The ts< function checks whether one epoch UNIX timestamp is less than another. Its definition is as follows:
```
(defun ts< (a b)
  "Return non-nil if timestamp A is less than timestamp B."
  (< (ts-unix a) (ts-unix b)))
```
In the example given below, the current timestamp is not less than the previous day and hence it returns nil.
```
(ts< (ts-now) (ts-adjust day -1 (ts-now)))
nil
```
Similarly, we have other comparator functions like ts>, ts=, ts>= and ts<=. A few examples of their use are given below:
```
(ts> (ts-now) (ts-adjust day -1 (ts-now)))
t
(ts= (ts-now) (ts-now))
nil
(ts>= (ts-now) (ts-adjust day -1 (ts-now)))
t
(ts<= (ts-now) (ts-adjust day -2 (ts-now)))
nil
```
**Benchmarking**
A few performance tests can be conducted to compare the Emacs internal time values versus the UNIX timestamps. The benchmarking tests can be executed by including the bench-multi macro and bench-multi-process-results function available from _<https://github.com/alphapapa/emacs-package-dev-handbook>_ in your Emacs environment.
You will also need to load the dash-functional library to use the -on function.
```
(require 'dash-functional)
```
The following tests have been executed on an Intel(R) Core(TM) i7-3740QM CPU at 2.70GHz with eight cores, 16GB RAM and running Ubuntu 18.04 LTS.
**Formatting**
The first benchmarking exercise is to compare the formatting of the UNIX timestamp and the Emacs internal time. The Emacs Lisp code to run the test is shown below:
```
(let ((format "%Y-%m-%d %H:%M:%S"))
  (bench-multi :times 100000
    :forms (("Unix timestamp" (format-time-string format 1544311232))
            ("Internal time" (format-time-string format '(23564 20962 864324 108000))))))
```
The output appears as an s-expression:
```
(("Form" "x faster than next" "Total runtime" "# of GCs" "Total GC runtime")
 hline
 ("Internal time" "1.11" "2.626460" 13 "0.838733")
 ("Unix timestamp" "slowest" "2.921408" 13 "0.920814"))
```
The abbreviation GC refers to garbage collection. A tabular representation of the above results is given below:
[![][3]][4]
We observe that formatting the internal time is slightly faster.
**Getting the current time**
The functions to obtain the current time can be compared using the following test:
```
(bench-multi :times 100000
:forms (("Unix timestamp" (float-time))
        ("Internal time" (current-time))))
```
The results are shown below:
[![][5]][6]
We observe that using the Unix timestamp is faster.
**Parsing**
The third benchmarking exercise is to compare parsing functions on a date timestamp string. The corresponding test code is given below:
```
(let* ((s "Wed 10 Jul 2019"))
  (bench-multi :times 100000
    :forms (("ts-parse" (ts-parse s))
            ("ts-parse ts-unix" (ts-unix (ts-parse s))))))
```
The _ts-parse_ function alone is slightly faster than combining it with _ts-unix_, as seen in the results:
[![][7]][8]
**A new timestamp versus blanking fields**
The last performance comparison is between creating a new timestamp and blanking the fields. The relevant test code is as follows:
```
(let* ((a (ts-now)))
(bench-multi :times 100000
:ensure-equal t
:forms (("New" (let ((ts (copy-ts a)))
(setq ts (ts-fill ts))
(make-ts :unix (ts-unix ts))))
("Blanking" (let ((ts (copy-ts a)))
(setq ts (ts-fill ts))
(ts-reset ts))))))
```
The output of the benchmarking exercise is given below:
[![][9]][10]
We observe that creating a new timestamp is slightly faster than blanking the fields.
You are encouraged to read the ts.el README and notes.org from the GitHub repository _<https://github.com/alphapapa/ts.el>_ for more information.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/the-emacs-series-exploring-ts-el/
作者:[Shakthi Kannan][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/shakthi-kannan/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/GPL-emacs-1.jpg?resize=696%2C435&ssl=1 (GPL emacs)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/GPL-emacs-1.jpg?fit=800%2C500&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1-1.png?resize=350%2C151&ssl=1
[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1-1.png?ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2-1.png?resize=350%2C191&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2-1.png?ssl=1
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/3.png?resize=350%2C144&ssl=1
[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/3.png?ssl=1
[9]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/4.png?resize=350%2C149&ssl=1
[10]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/4.png?ssl=1

View File

@ -0,0 +1,232 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Zsh)
[#]: via: (https://opensource.com/article/19/9/getting-started-zsh)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/falm)
Getting started with Zsh
======
Improve your shell game by upgrading from Bash to Z-shell.
![bash logo on green background][1]
Z-shell (or Zsh) is an interactive Bourne-like POSIX shell known for its abundance of innovative features. Z-Shell users often cite its many conveniences and credit it for increased efficiency and extensive customization.
If you're relatively new to Linux or Unix but experienced enough to have opened a terminal and run a few commands, you have probably used the Bash shell. Bash is arguably the definitive free software shell, partly because of its progressive features and partly because it ships as the default shell on most of the popular Linux and Unix operating systems. However, the more you use a shell, the more you start to find small things that might be better for the way you want to use it. If there's one thing open source is famous for, it's _choice_. Many people choose to "graduate" from Bash to Z.
### What is Zsh?
A shell is just an interface to your operating system. An interactive shell allows you to type in commands through what is called _standard input_, or **stdin**, and get output through _standard output_ and _standard error_, or **stdout** and **stderr**. There are many shells, including Bash, Csh, Ksh, Tcsh, Dash, and Zsh. Each has features based on what its programmers thought would be best for a shell. Whether those features are good or bad is up to you, the end user.
Zsh has features like interactive Tab completion, automated file searching, regex integration, advanced shorthand for defining command scope, and a rich theme engine. These features are included in an otherwise familiar Bourne-like shell environment, meaning that if you already know and love Bash, you'll find Zsh familiar—except with more features. You might think of it as a kind of Bash++.
### Installing Zsh
Install Zsh with your package manager.
On Fedora, RHEL, and CentOS:
```
$ sudo dnf install zsh
```
On Ubuntu and Debian:
```
$ sudo apt install zsh
```
On MacOS, you can install it using MacPorts:
```
$ sudo port install zsh
```
Or with Homebrew:
```
$ brew install zsh
```
It's possible to run Zsh on Windows, but only on top of a Linux or Linux-like layer such as [Windows Subsystem for Linux][2] (WSL) or [Cygwin][3]. That installation is out of scope for this article, so refer to Microsoft documentation.
### Setting up Zsh
Zsh is not a terminal emulator; it's a shell that runs inside a terminal emulator. So, to launch Zsh, you must first launch a terminal window such as GNOME Terminal, Konsole, Terminal, iTerm2, rxvt, or another terminal of your preference. Then you can launch Zsh by typing:
```
$ zsh
```
The first time you launch Zsh, you're asked to choose some configuration options. These can all be changed later, so press **1** to continue.
```
This is the Z Shell configuration function for new users, zsh-newuser-install.
(q)  Quit and do nothing.
(0)  Exit, creating the file ~/.zshrc
(1)  Continue to the main menu.
```
There are four categories of preferences, so just start at the top.
1. The first category lets you choose how many commands are retained in your shell history file. By default, it's set to 1,000 lines.
2. Zsh completion is one of its most exciting features. To keep things simple, consider activating it with its default options until you get used to how it works. Press **1** for default options, **2** to set options manually.
3. Choose Emacs or Vi key bindings. Bash uses Emacs bindings, so you may be used to that already.
4. Finally, you can learn about (and set or unset) some of Zsh's subtle features. For instance, you can stop using the **cd** command by allowing Zsh to initiate a directory change when you provide a non-executable path with no command. To activate one of these extra options, type the option number and enter **s** to _set_ it. Try turning on all options to get the full Zsh experience. You can unset them later by editing **~/.zshrc**.
To complete configuration, press **0**.
### Using Zsh
At first, Zsh feels a lot like using Bash, which is unmistakably one of its many features. There are serious differences between, for instance, Bash and Tcsh, so being able to switch between Bash and Zsh is a convenience that makes Zsh easy to try and easy to use at home if you have to use Bash at work or on your server.
#### Change directory with Zsh
It's the small differences that make Zsh nice. First, try changing the directory to your Documents folder _without the **cd** command_. It seems too good to be true; but if you enter a directory path with no further instruction, Zsh changes to that directory:
```
% Documents
% pwd
/home/seth/Documents
```
That renders an error in Bash or any other normal shell. But Zsh is far from normal, and this is just the beginning.
#### Search with Zsh
When you want to find a file using a normal shell, you probably resort to the **find** or **locate** command. At the very least, you may have used **ls -R** for a recursive listing of a set of directories. Zsh has a built-in feature allowing it to find a file in the current or any other subdirectory.
For instance, assume you have two files called **foo.txt**. One is located in your current directory, and the other is in a subdirectory called **foo**. In a Bash shell, you can list the file in the current directory with:
```
$ ls
foo.txt
```
and you can list the other one by stating the subdirectory's path explicitly:
```
$ ls foo
foo.txt
```
To list both, you must use the **-R** switch, maybe combined with **grep**:
```
$ ls -R | grep foo.txt
foo.txt
foo.txt
```
But in Zsh, you can use the `**` shorthand:
```
% ls **/foo.txt
foo.txt
foo.txt
```
And you can use this syntax with any command, not just with **ls**. Imagine your increased efficiency when moving specific file types from one collection of directories to a single location, or concatenating snippets of text into a file, or grepping through logs.
### Using Zsh Tab completion
Tab completion is a power-user feature in Bash and some other shells, and it took the Unix world by storm when it became commonplace. No longer did Unix users have to resort to wildcards when typing long and tedious paths (such as **/h*/s*h/V*/SCS/sc*/comp*/t*/a*/*9/04/LS*boat*v**, which is a lot easier than typing **/home/seth/Videos/SCS/scenes/composite/takes/approved/109/04/LS_boat-port-cargo-mover.mkv**). Instead, they could just press the Tab key when they entered enough of a unique string. For example, if you know there's only one directory starting with an **h** at the root level of your system, you might type **/h** and then hit Tab. It's fast, it's simple, it's efficient. It also confirms a path exists; if Tab doesn't complete anything, you know you're looking in the wrong place or you mistyped part of the path.
However, if you have many directories that share five or more of the same first letters, Tab staunchly refuses to complete. While in most modern terminals it will (at least) reveal the files blocking it from guessing what you mean, it usually takes two Tab presses to reveal them; therefore, Tab completion often becomes such an interplay of letters and Tabs across your keyboard that you feel like you're training for a piano recital.
Zsh solves this minor annoyance by cycling through possible completions. If you type **ls ~/D** and press Tab, Zsh completes your command with **Documents** first; if you press Tab again, it offers **Downloads**, and so on until you find the one you want.
### Wildcards in Zsh
Wildcards behave differently in Zsh than what Bash users are used to. First of all, they can be modified. For example, if you want to list all folders in your current directory, you can use a modified wildcard:
```
% ls
dir0   dir1   dir2   file0   file1
% ls *(/)
dir0   dir1   dir2
```
In this example, the **(/)** qualifies the results of the wildcard so Zsh will display only directories. To list just the files, use **(.)**. To list symlinks, use **(@)**. To list executable files, use **(*)**.
```
% ls ~/bin/*(*)
fop  exify  tt
```
Zsh isn't only aware of file types. It can also list according to modification time, using the same wildcard modifier convention. For example, if you want to find a file that was modified within the past eight hours, use the **mh** modifier (for **modified** and **hours**) and the negative integer of hours:
```
% ls ~/Documents/*(mh-8)
cal.org   game.org   home.org
```
To find a file modified more than (for instance) two days ago, the modifiers change to **md** (for **modified** and **day**) with a positive integer:
```
% ls ~/Documents/*(md+2)
holiday.org
```
There's a lot more you can do with wildcard modifiers and qualifiers, so read the [Zsh man page][4] for full details.
#### The wildcard side effect
To use wildcards the way you would use them in Bash, sometimes they must be escaped in Zsh. For instance, if you're copying some files to your server in Bash, you might use a wildcard like this:
```
$ scp IMG_*.JPG seth@example.com:~/www/ph*/*19/09/14
```
That works in Bash, but Zsh returns an error because it tries to expand the wildcards on the remote side before issuing the **scp** command. To avoid this, you must escape the remote wildcards:
```
% scp IMG_*.JPG seth@example.com:~/www/ph\*/\*19/09/14
```
It's these types of little exceptions that can frustrate you when you're switching to a new shell. There aren't many when using Zsh (there are probably more when switching back to Bash after experiencing Zsh) but when they happen, remain calm and be explicit. Rarely will you go wrong to adhere strictly to POSIX—but if that fails, look up the problem to solve it and move on. [Hyperpolyglot.org][5] has proven invaluable to many users stuck on one shell at work and another at home.
In my next Zsh article, I'll show you how to install themes and plugins to make your Z-Shell even Z-ier.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/getting-started-zsh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/falm
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://devblogs.microsoft.com/commandline/category/bash-on-ubuntu-on-windows/
[3]: https://www.cygwin.com/
[4]: https://linux.die.net/man/1/zsh
[5]: http://hyperpolyglot.org/unix-shells

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Talking to machines: Lisp and the origins of AI)
[#]: via: (https://opensource.com/article/19/9/command-line-heroes-lisp)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberg)
Talking to machines: Lisp and the origins of AI
======
The Command Line Heroes podcast explores the invention of Lisp and the
rise of thinking computers powered by open source software.
![Listen to the Command Line Heroes Podcast][1]
Artificial intelligence (AI) is all the rage today, and its massive impact on the world is still to come, says the [Association for the Advancement of Artificial Intelligence][2] (AAAI). According to an article on [Nanalyze][3]:
> "The vast majority of nearly 2,000 experts polled by the Pew Research Center in 2014 said they anticipate robotics and artificial intelligence will permeate wide segments of daily life by 2025. A 2015 study covering 17 countries found that artificial intelligence and related technologies added an estimated 0.4 percentage point on average to those countries' annual GDP growth between 1993 and 2007, accounting for just over one-tenth of those countries' overall GDP growth during that time."
However, this is the second time AI has garnered so much attention. When was AI first popular, and what does that have to do with the obscure-but-often-loved programming language Lisp?
The second-to-last podcast of [Command Line Heroes][4]' third season dives into these topics and leaves us thinking about open source at the core of AI.
### Before the term AI
Thinking machines have been a curiosity for centuries, long before they could be realized. In the 1800s, computer science pioneers Charles Babbage and Ada Lovelace imagined an analytical engine capable of predictions far beyond human skills, such as correctly selecting the winning horse in a race.
In the 1940s and '50s, Alan Turing defined what it would look like for intelligent machines to emulate human intelligence; that's what we now call the Turing Test. In his 1950 [research paper][5], Turing's "imitation game" set out to convince someone they were communicating with a human in another room when, in reality, it was a machine.
While these theories inspired imaginative debate, they became less theoretical as computer hardware began providing enough power to begin experimenting.
### Why Lisp is at the heart of AI theory
John McCarthy, the person to coin the term "artificial intelligence," is also the person who reinvented how we program to create thinking machines. His reimagined approach was codified into the Lisp programming language. As [Paul Graham][6] wrote:
> "In 1960, [John McCarthy][7] published a remarkable paper in which he did for programming something like what Euclid did for geometry. He showed how, given a handful of simple operators and a notation for functions, you can build a whole programming language. He called this language Lisp, for 'List Processing,' because one of his key ideas was to use a simple data structure called a list for both code and data.
>
> "It's worth understanding what McCarthy discovered, not just as a landmark in the history of computers, but as a model for what programming is tending to become in our own time. It seems to me that there have been two really clean, consistent models of programming so far: the C model and the Lisp model. These two seem points of high ground, with swampy lowlands between them. As computers have grown more powerful, the new languages being developed have been [moving steadily][8] toward the Lisp model. A popular recipe for new programming languages in the past 20 years has been to take the C model of computing and add to it, piecemeal, parts taken from the Lisp model, like runtime typing and garbage collection."
I remember when I first wrote Lisp for a computer science class. After wrapping my head around its seemingly infinite number of parentheses, I uncovered a beautiful pattern of thought: Can I think through what I want this software to do?
![The elegance of Lisp programming is timeless][9]
That sounds silly: computers process what we code them to do, but there's something about recursion that made me think in a wildly different light. It's exciting to learn that 15 years ago, I may have been tapping into the big-picture changes McCarthy was describing.
### Why the slowdown in AI?
By the mid-to-late 1960s, McCarthy's work made way for a new field of research, where AI, machine learning (ML), and deep learning all became possibilities. And Lisp became the accepted standard in this emerging field. It's said that in 1968, McCarthy made a wager with David Levy, a Scottish chess master, that in 10 years a computer would be able to beat Levy in a chess match. Why did it take nearly 30 years to get to the famous [Deep Blue vs. Garry Kasparov][10] match?
Command Line Heroes explores one theory: that for-profit investment in AI pulled essential talent from academia, where they were advancing the science, and pushed them onto a different path. Whether or not this was the reason, the world of AI fell into a "winter," where the people pursuing it were considered unrealistic.
This AI winter lasted for quite some time. In 2005, The [_New York Times_ reported][11] that AI had become so stigmatized that "some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."
### Where is AI now?
Fast forward to today, when talking about AI or ML is a fast pass to getting people's attention—but that attention isn't always positive. Many are concerned that AI will remove millions of jobs from the world. Others say it will [create][12] millions more jobs than are lost.
The verdict is still out. [McKinsey's research][13] on the job loss vs. job gain debate is fascinating. When you take into account growing world consumption, aging populations, "marketization" of previously unpaid domestic work, and other factors, you find that the answer depends on your outlook.
One thing is for sure: AI will be a significant part of our lives, and it will have much wider implications than other areas of tech. For this reason (among others), examining the [misconceptions around ethics and bias in AI][14] is essential.
### Open source and AI
McCarthy had a dream that machines could have common sense. His AI goals included open source from the very beginning; this is visualized on Red Hat's beautifully animated webpage on the [origins of AI and its open source roots][15].
[![Origins of AI and open source screenshot][16]][15]
If we are to achieve the goals of McCarthy, Turing, or other AI pioneers, I believe it will be because of the open source community behind the technology. Part of the reason AI's popularity bounced back is because of open source: languages, frameworks, and the datasets we analyze are increasingly open. Here are a handful of things to explore:
* [Learn enough Python and R][17] to be part of this future
* [Explore Python libraries][18] that will bulk up your skills
* Understand how [AI and ML are related][19]
* Explore [free and open datasets][20]
* Use modern implementations of Lisp, [available under open source licenses][21]
It's possible that early AI explored the right ideas in the wrong decade. World-class computers back then weren't even as powerful as today's cellphones, and each one was shared by dozens of individuals. Today, many of us own multiple supercomputers and carry them with us all the time. For this reason, among others, the future of AI is strong and its highest achievements are yet to come.
_Command Line Heroes has covered programming languages for all of Season 3. [Subscribe so that you don't miss the last episode of the season][4], and I would love to hear your thoughts in the comments below._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/command-line-heroes-lisp
作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_hereoes_ep7_blog-header-292x521.png?itok=lI4DXvq2 (Listen to the Command Line Heroes Podcast)
[2]: http://aaai.org/
[3]: https://www.nanalyze.com/2016/11/artificial-intelligence-definition/
[4]: https://www.redhat.com/en/command-line-heroes
[5]: https://www.csee.umbc.edu/courses/471/papers/turing.pdf
[6]: http://www.paulgraham.com/rootsoflisp.html
[7]: http://www-formal.stanford.edu/jmc/index.html
[8]: http://www.paulgraham.com/diff.html
[9]: https://opensource.com/sites/default/files/uploads/lisp_cycles.png (The elegance of Lisp programming is timeless)
[10]: https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov
[11]: https://www.nytimes.com/2005/10/14/technology/behind-artificial-intelligence-a-squadron-of-bright-real-people.html
[12]: https://singularityhub.com/2019/01/01/ai-will-create-millions-more-jobs-than-it-will-destroy-heres-how/
[13]: https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages
[14]: https://opensource.com/article/19/8/4-misconceptions-ethics-and-bias-ai
[15]: https://www.redhat.com/en/open-source-stories/ai-revolutionaries/origins-ai-open-source
[16]: https://opensource.com/sites/default/files/uploads/origins_aiopensource.png (Origins of AI and open source screenshot)
[17]: https://opensource.com/article/19/5/learn-python-r-data-science
[18]: https://opensource.com/article/18/5/top-8-open-source-ai-technologies-machine-learning
[19]: https://opensource.com/tags/ai-and-machine-learning
[20]: https://opensource.com/article/19/2/learn-data-science-ai
[21]: https://www.cliki.net/Common+Lisp+implementation

View File

@ -0,0 +1,328 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's Good About TensorFlow 2.0?)
[#]: via: (https://opensourceforu.com/2019/09/whats-good-about-tensorflow-2-0/)
[#]: author: (Siva Rama Krishna Reddy B https://opensourceforu.com/author/siva-krishna/)
What's Good About TensorFlow 2.0?
======
[![][1]][2]
_Version 2.0 of TensorFlow is focused on simplicity and ease of use. It has been strengthened with updates like eager execution and intuitive higher level APIs accompanied by flexible model building. It is platform agnostic, and makes APIs more consistent, while removing those that are redundant._
Machine learning and artificial intelligence are experiencing a revolution these days, primarily due to three major factors. The first is the increased computing power available within small form factors such as GPUs, NPUs and TPUs. The second is the breakthrough in machine learning algorithms: state-of-the-art algorithms, and hence models, are available that infer faster. Finally, deep learning models need huge amounts of labelled data to perform well, and that data is now available.
TensorFlow is an open source AI framework from Google which arms researchers and developers with the right tools to build novel models. It was made open source in 2015 and, in the past few years, has evolved with various enhancements covering operator support, programming languages, hardware support, data sets, official models, and distributed training and deployment strategies.
TensorFlow 2.0 was released recently at the TensorFlow Developer Summit. It has major changes across the stack, some of which will be discussed from the developer's point of view.
TensorFlow 2.0 is primarily focused on the ease-of-use, power and scalability aspects. Ease is ensured in terms of simplified APIs, Keras being the main high level API interface; eager execution is available by default. Version 2.0 is powerful in the sense of being flexible and running much faster than earlier, with more optimisation. Finally, it is more scalable since it can be deployed on high-end distributed environments as well as on small edge devices.
This new release streamlines the various components involved, from data preparation all the way up to deployment on various targets. High speed data processing pipelines are offered by tf.data, high level APIs are offered by tf.keras, and there are simplified APIs to access various distribution strategies on targets like the CPU, GPU and TPU. TensorFlow 2.0 offers a unique packaging format called SavedModel that can be deployed to the cloud through TensorFlow Serving. Models can be deployed to edge devices through TensorFlow Lite and to web applications through the newly introduced TensorFlow.js; various other language bindings are also available.
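As a rough sketch (not from the original article; the file names are hypothetical), a high speed input pipeline built with tf.data might look like this:

```
import tensorflow as tf

# Hypothetical TFRecord shards; replace with real training data.
file_names = ["train-00.tfrecord", "train-01.tfrecord"]

# A simple input pipeline: read, shuffle, batch and prefetch.
dataset = (tf.data.TFRecordDataset(file_names)
           .shuffle(buffer_size=1024)
           .batch(32)
           .prefetch(tf.data.experimental.AUTOTUNE))

# The dataset can then be passed straight to tf.keras, e.g. model.fit(dataset).
```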
![Figure 1: The evolution of TensorFlow][3]
TensorFlow.js was announced at the developer summit with off-the-shelf pretrained models for the browser, node, desktop and mobile native applications. The inclusion of Swift was also announced. Looking at some of the performance improvements since last year, the latest release claims a training speedup of 1.8x on NVIDIA Tesla V100, a 1.6x training speedup on Google Cloud TPUv2 and a 3.3x inference speedup on Intel Skylake.
**Upgrade to 2.0**
The new release offers a utility _tf_upgrade_v2_ to convert a 1.x Python application script to a 2.0 compatible script. It does most of the job in converting the 1.x deprecated API to a newer compatibility API. An example of the same can be seen below:
```
test-pc:~$ cat test-infer-v1.py
# Tensorflow imports
import tensorflow as tf

save_path = 'checkpoints/dev'
with tf.gfile.FastGFile("./trained-graph.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=tf.get_default_graph()) as sess:
    input_data = sess.graph.get_tensor_by_name("DecodeJPGInput:0")
    output_data = sess.graph.get_tensor_by_name("final_result:0")

    image = 'elephant-299.jpg'
    if not tf.gfile.Exists(image):
        tf.logging.fatal('File does not exist %s', image)
    image_data = tf.gfile.FastGFile(image, 'rb').read()

    result = sess.run(output_data, {'DecodeJPGInput:0': image_data})
print(result)
test-pc:~$ tf_upgrade_v2 --infile test-infer-v1.py --outfile test-infer-v2.py
INFO line 5:5: Renamed tf.gfile.FastGFile to tf.compat.v1.gfile.FastGFile
INFO line 6:16: Renamed tf.GraphDef to tf.compat.v1.GraphDef
INFO line 10:9: Renamed tf.Session to tf.compat.v1.Session
INFO line 10:26: Renamed tf.get_default_graph to tf.compat.v1.get_default_graph
INFO line 15:15: Renamed tf.gfile.Exists to tf.io.gfile.exists
INFO line 16:12: Renamed tf.logging.fatal to tf.compat.v1.logging.fatal
INFO line 17:21: Renamed tf.gfile.FastGFile to tf.compat.v1.gfile.FastGFile
TensorFlow 2.0 Upgrade Script
-----------------------------
Converted 1 files
Detected 0 issues that require attention
-------------------------------------------------------------
Make sure to read the detailed log 'report.txt'
test-pc:~$ cat test-infer-v2.py
# Tensorflow imports
import tensorflow as tf

save_path = 'checkpoints/dev'
with tf.compat.v1.gfile.FastGFile("./trained-graph.pb", 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph()) as sess:
    input_data = sess.graph.get_tensor_by_name("DecodeJPGInput:0")
    output_data = sess.graph.get_tensor_by_name("final_result:0")

    image = 'elephant-299.jpg'
    if not tf.io.gfile.exists(image):
        tf.compat.v1.logging.fatal('File does not exist %s', image)
    image_data = tf.compat.v1.gfile.FastGFile(image, 'rb').read()

    result = sess.run(output_data, {'DecodeJPGInput:0': image_data})
print(result)
```
As we can see here, the _tf_upgrade_v2_ utility converts all the deprecated APIs to compatible v1 APIs, to make them work with 2.0.
**Eager execution:** Eager execution allows real-time evaluation of Tensors without calling _session.run_. A major advantage with eager execution is that we can print the Tensor values any time for debugging.
With TensorFlow 1.x, the code is:
```
test-pc:~$python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> print(tf.__version__)
1.14.0
>>> tf.add(2,3)
<tf.Tensor 'Add:0' shape=() dtype=int32>
```
TensorFlow 2.0, on the other hand, evaluates the result as soon as we call the API:
```
test-pc:~$python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> print(tf.__version__)
2.0.0-beta1
>>> tf.add(2,3)
<tf.Tensor: id=2, shape=(), dtype=int32, numpy=5>
```
In v1.x, the resulting Tensor doesn't display the value and we need to execute the graph under a session to get the value, but in v2.0 the values are implicitly computed and available for debugging.
**Keras**
Keras (_tf.keras_) is now the official high level API. It has been enhanced with many compatible low level APIs. The redundancy across Keras and TensorFlow is removed, and most of the APIs are now available with Keras. The low level operators are still accessible through tf.raw_ops.
We can now save the Keras model directly as a Tensorflow SavedModel, as shown below:
```
# Save Model to SavedModel
saved_model_path = tf.keras.experimental.export_saved_model(model, '/path/to/model')
# Load the SavedModel
new_model = tf.keras.experimental.load_from_saved_model(saved_model_path)
# new_model is now keras Model object.
new_model.summary()
```
Earlier, APIs related to various layers, optimisers, metrics and loss functions were distributed across Keras and native TensorFlow. The latest enhancements unify them as _tf.keras.optimizers.*_, _tf.keras.metrics.*_, _tf.keras.losses.*_ and _tf.keras.layers.*_.
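As an illustrative sketch (layer sizes and hyper-parameters are arbitrary and not from the article), a small model built entirely from the unified namespaces looks like this:

```
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(10, activation='softmax')])

# Optimiser, loss and metric all come from the unified tf.keras namespaces.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=[tf.keras.metrics.CategoricalAccuracy()])
```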
The RNN layers are now much simpler compared to v1.x.
With TensorFlow 1.x, the commands given are:
```
if tf.test.is_gpu_available():
    model.add(tf.keras.layers.CuDNNLSTM(32))
else:
    model.add(tf.keras.layers.LSTM(32))
```
With TensorFlow 2.0, the commands given are:
```
# This will use Cudnn kernel when the GPU is available.
model.add(tf.keras.layers.LSTM(32))
```
TensorBoard integration is now a simple call back, as shown below:
```
tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
model.fit(
    x_train, y_train, epochs=5,
    validation_data=[x_test, y_test],
    callbacks=[tb_callback])
```
With this simple call back addition, TensorBoard is up on the browser to look for all the statistics in real-time.
Keras offers unified distribution strategies, and a few lines of code can enable the required strategy as shown below:
```
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(64, input_shape=[10]),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
```
As shown above, the model definition under the desired scope is all we need to apply the desired strategy. Very soon, there will be support for multi-node synchronous and TPU strategy, and later, for parameter server strategy.
![Figure 2: Coral products with edge TPU][4]
**TensorFlow function**
Function is a major upgrade that impacts the way we write TensorFlow applications. The new version introduces tf.function, which simplifies applications and brings them very close to a normal Python application.
A sample _tf.function_ definition looks like what's shown in the code snippet below. Here the _tf.function_ declaration lets the user define a function as a TensorFlow operator, and all optimisation is applied automatically. The function is also faster than eager execution. APIs like _tf.control_dependencies_, _tf.global_variables_initializer_, _tf.cond_ and _tf.while_loop_ are no longer needed with _tf.function_. The user defined functions are polymorphic by default, i.e., we may pass mixed type tensors.
```
test-pc:~$ cat tf-test.py
import tensorflow as tf
print(tf.__version__)
@tf.function
def add(a, b):
    return (a + b)
print(add(tf.ones([2,2]), tf.ones([2,2])))
test-pc:~$ python3 tf-test.py
2.0.0-beta1
tf.Tensor(
[[2. 2.]
[2. 2.]], shape=(2, 2), dtype=float32)
```
Here is another example that demonstrates automatic control flow and AutoGraph in action. AutoGraph automatically converts Python conditionals and while loops into TensorFlow operators.
```
test-pc:~$ cat tf-test-control.py
import tensorflow as tf
print(tf.__version__)
@tf.function
def f(x):
    while tf.reduce_sum(x) > 1:
        x = tf.tanh(x)
    return x
print(f(tf.random.uniform([10])))
test-pc:~$ python3 tf-test-control.py
2.0.0-beta1
tf.Tensor(
[0.10785562 0.11102211 0.11347286 0.11239681 0.03989326 0.10335539
0.11030331 0.1135259 0.11357211 0.07324989], shape=(10,), dtype=float32)
```
We can inspect the code AutoGraph generates by calling the following API on the function:
```
print(tf.autograph.to_code(f)) # f is the function name
```
**TensorFlow Lite**
The latest advancements in edge devices add neural network accelerators. Google has released EdgeTPU, Intel has the edge inference platform Movidius, Huawei mobile devices have the Kirin based NPU, Qualcomm has come up with NPE SDK to accelerate on the Snapdragon chipsets using Hexagon power and, recently, Samsung released Exynos 9 with NPU. An edge device optimised framework is necessary to support these hardware ecosystems.
Unlike TensorFlow, which is widely used in high power-consuming server infrastructure, edge devices are challenging in terms of reduced computing power, limited memory and battery constraints. TensorFlow Lite is aimed at bringing TensorFlow models directly onto the edge with minimal effort. The TF Lite model format is different from TensorFlow's; a TF Lite converter is available to convert a TensorFlow SavedModel to a TF Lite model.
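A minimal conversion sketch, assuming a SavedModel has already been exported to a hypothetical directory, could look like the following:

```
import tensorflow as tf

# Hypothetical path to a previously exported SavedModel.
saved_model_dir = "/path/to/saved_model"

# Convert the SavedModel to a TF Lite flatbuffer and write it to disk.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```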
Though TensorFlow Lite is evolving, there are limitations too, such as in the number of operations supported, and the unsupported semantics like control-flows and RNNs. In its early days, TF Lite used a TOCO converter and there were a few challenges for the developer community. A brand new 2.0 converter is planned to be released soon. There are claims that using TF Lite results in huge improvements across the CPU, GPU and TPU.
TF Lite introduces delegates to accelerate parts of the graph on an accelerator. We may choose a specific delegate for a specific sub-graph, if needed.
```
#import "tensorflow/lite/delegates/gpu/metal_delegate.h"
// Initialize interpreter with GPU delegate
std::unique_ptr<Interpreter> interpreter;
InterpreterBuilder(*model, resolver)(&interpreter);
auto* delegate = NewGpuDelegate(nullptr); // default config
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) return false;
// Run inference
while (true) {
WriteToInputTensor(interpreter->typed_input_tensor<float>(0));
if (interpreter->Invoke() != kTfLiteOk) return false;
ReadFromOutputTensor(interpreter->typed_output_tensor<float>(0));
}
// Clean up
interpreter = nullptr;
DeleteGpuDelegate(delegate);
```
As shown above, we can choose the GPU delegate and modify the graph with the respective kernels at runtime. TF Lite is going to support the Android NNAPI delegate, in order to support all the hardware that is supported by NNAPI. For edge devices, CPU optimisation is also important, as not all edge devices are equipped with accelerators; hence, there is a plan to support further optimisations for ARM and x86.
Optimisations based on quantisation and pruning are evolving to reduce the size and processing demands of models. Quantisation generally can reduce model size by 4x (i.e., 32-bit to 8-bit). Models with more convolution layers may get faster by 10 to 50 per cent on the CPU. Fully connected and RNN layers may speed up operation by 3x.
TF Lite now supports post-training quantisation, which greatly reduces model size along with compute demands. TensorFlow 2.0 offers simplified APIs to build models with quantisation and pruning optimisations.
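Building on the conversion sketch above (paths are again hypothetical), post-training quantisation can be requested with a single optimisation flag:

```
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("/path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable post-training quantisation
quantised_model = converter.convert()

with open("model_quantised.tflite", "wb") as f:
    f.write(quantised_model)
```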
A normal dense layer without quantisation looks like what follows:
```
tf.keras.layers.Dense(512, activation='relu')
```
Whereas a quantised dense layer looks like what's shown below:
```
quantize.Quantize(tf.keras.layers.Dense(512, activation='relu'))
```
Pruning is a technique used to drop connections that are ineffective. In general, dense layers contain lots of connections which don't influence the output. Such connections can be dropped by making the weight zero. Tensors with lots of zeros may be represented as sparse and can be compressed; a sparse tensor also requires fewer operations.
Building a layer with _prune_ is as simple as using the following command:
```
prune.Prune(tf.keras.layers.Dense(512, activation='relu'))
```
In the pipeline, there is Keras-based quantised training and Keras-based connection pruning. These optimisations may push TF Lite further ahead of competing frameworks.
**Coral**
Coral is a new platform for creating products with on-device ML acceleration. The first product here features Google's Edge TPU in SBC and USB form factors. TensorFlow Lite is officially supported on this platform, with the salient features being very fast inference speed, privacy and no reliance on a network connection.
More details related to hardware specifications, pricing, and a getting started guide can be found at _<https://coral.withgoogle.com>_.
With these advances as well as a wider ecosystem, it's very evident that TensorFlow may become the leading framework for artificial intelligence and machine learning, similar to how Android evolved in the mobile world.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/whats-good-about-tensorflow-2-0/
作者:[Siva Rama Krishna Reddy B][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/siva-krishna/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2018/09/ML-with-tensorflow.jpg?resize=696%2C328&ssl=1 (ML with tensorflow)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2018/09/ML-with-tensorflow.jpg?fit=1200%2C565&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-1-The-evolution-of-TensorFlow.jpg?resize=350%2C117&ssl=1
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-2-Coral-products-with-edge-TPU.jpg?resize=350%2C198&ssl=1

View File

@ -0,0 +1,142 @@
[#]: collector: (lujun9972)
[#]: translator: (name1e5s)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Amid Epstein Controversy, Richard Stallman is Forced to Resign as FSF President)
[#]: via: (https://itsfoss.com/richard-stallman-controversy/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Richard Stallman 被迫辞去 FSF 主席的职务
======
_**Richard Stallman自由软件基金会的创建者及主席已经辞去了他的职务自由软件基金会正在寻找下一任主席。此前因为 Stallman 对爱泼斯坦事件中受害者的观点,一小撮活动家以及媒体人发起了清除 Stallman 的运动,这份辞职声明就是在这些活动之后发布的。阅读全文以获得更多信息。**_
![][1]
### Stallman 事件的背景概述
如果您不知道这次事件发生的前因后果,请看本段的详细信息。
[Richard Stallman][2]66岁是就职于 [MIT][3] 的计算机科学家。他最著名的成就就是在 1983 年发起了[自由软件运动][4]。他也开发了 GNU 项目旗下的部分软件,比如 GCC 和 Emacs。受自由软件运动影响选择使用 GPL 开源协议的项目不计其数。Linux 是其中最出名的项目之一。
[Jeffrey Epstein][5],美国亿万富翁,金融大佬。他因涉嫌为社会上流精英提供性交易服务(其中有未成年少女)而被指控为性犯罪者。在候审期间,爱泼斯坦在监狱中自杀身亡。
[Marvin Lee Minsky][6]MIT 知名计算机科学家。他在 MIT 建立了人工智能实验室。2016 年88 岁的 Minsky 逝世。在 Minsky 逝世后,一位爱泼斯坦事件的受害者声称,她在未成年时曾被“诱导”到爱泼斯坦的私人岛屿,与 Minsky 发生性关系。
但是这些与 Richard Stallman 有什么关系?这要从 Stallman 发到 MIT 计算机科学与人工智能实验室CSAIL学生及附属机构成员邮件列表的一封邮件说起当时列表中正在就爱泼斯坦的捐款提出抗议。邮件全文翻译如下
> 周五事件的公告对 Marvin Minsky 来说是不公正的。
>
> “已故的人工智能 ’先锋‘ Marvin Minsky (被控告侵害了爱泼斯坦事件的受害者之一[2])”
>
> 不公正之处在于 “侵害assaulting” 这个用语。“性侵犯sexual assault” 这个用语非常的模糊,夸大了指控的严重性:宣称某人做了 X 但误导别人,让别人觉得这个人做了 YY 远远比 X 严重。
>
> 上面引用的指控显然就是夸大。报导声称 Minsky 与爱泼斯坦的女眷之一发生了性关系(详见 <https://www.theverge.com/2019/8/9/20798900/marvin-minsky-jeffrey-epstein-sex-trafficking-island-court-records-unsealed>)。我们姑且假设这是真的(我找不到理由不相信)。
>
> “侵害(assulting)” 这个词,意味着他使用了某种暴力。但那篇报道并没有提到这个,只说了他们发生了性关系。
>
> 我们可以想像很多种情况,但最合理的情况是,她在 Marvin 面前表现的像是完全自愿的。假设她是被爱泼斯坦强迫的,那爱泼斯坦有充足的理由让她对大多数人守口如瓶。
>
> 从各种夸大指控的事例中我总结出在指控时使用“性侵犯sexual assault”是绝对错误的。
>
> 无论你想要批判什么行为,你都应该使用特定的词汇来描述,以此避免批判本身天然的道德上的模糊性。
### “清除 Stallman” 的呼吁
爱泼斯坦在美国是颇具争议的话题。Stallman 对该敏感事件做出如此鲁莽的 “知识陈述” 不会有好结果,事实也是如此。
一位机器人学工程师从她的朋友那里收到了转发的邮件,并发起了一个[清除 Stallman 的活动][7]。她要的不是澄清或者道歉,她只想要清除 Stallman就算这意味着 “将 MIT 夷为平地” 也在所不惜。
> 是,至少 Stallman 没有的确被控强奸任何人。但这就是我们的最高标准吗?这所声望极高的学院坚持的标准就是这样的吗? 如果这是麻省理工学院想要捍卫的、想要代表的标准的话,还不如把这破学校夷为平地…
>
> 如果有必要的话,就把所有人都清除出去,之后从废墟中建立出更好的秩序。
>
> Salem发起“清除 Stallman“运动的机器人学专业学生
Salem 的大字报最初没有被主流媒体重视。但它还是被反对软件行业内精英崇拜以及性别偏见的积极分子发现了。
> [#epstein][8] [#MIT][9] 嗨 记者没有回复我我很生气就自己写了这么个故事。 作为 MIT 的校友我还真是高兴啊🙃 <https://t.co/D4V5L5NzPA>
>
> — SZJG (@selamjie) [September 12, 2019][10]
> 是不是对于性侵儿童的 “杰出混蛋” 我们也可以辩护说 “万一这是你情我愿的” <https://t.co/gSYPJ3WOfp>
>
> — Tracy Chou 👩🏻‍💻 (@triketora) [September 13, 2019][11]
> 多年来我就一直发推说 Richard "RMS" Stallman 这人有多恶心 —— 恋童癖、厌女症、还残障歧视
>
> 不可避免的是,每次我这样做,都会有老哥检查我的数据来源,然后说 “这都是几年前的事了!他现在变了!”
>
> 变个屁。 <https://t.co/ti2SrlKObp>
>
> — Sarah Mei (@sarahmei) [September 12, 2019][12]
下面是 Sage Sharp 开头的一篇关于 Stallman 的行为如何对科技人员产生负面影响的帖子:
> 👇大家说下 Richard Stallman 对科技从业者的影响吧,尤其是女性。 [例如: 强奸,乱伦,残障歧视,性交易]
>
> [@fsf][13] 有必要永久禁止 Richard Stallman 担任自由软件基金会董事会主席。
>
> — Sage Sharp (@_sagesharp_) [September 16, 2019][14]
Stallman 一直以来也不是一个圣人。他的粗暴无礼以及不合时宜、带有性别歧视的笑话多年来一直存在。你可以在[这里][15]和[这里][16]读到。
很快,这个消息就被 [The Vice][17]、[每日野兽][18]、[未来主义][19]等大媒体报道,他们把 Stallman 描绘成爱泼斯坦的捍卫者。在强烈的抗议声中,[GNOME 执行董事威胁要结束 GNOME 和 FSF 之间的关系][20]。
最后Stallman 先是从 MIT 辞职,现在又从 [自由软件基金会][21] 辞职。
![][22]
### 危险的特权?
我们见识到了,把一个人从他创建并为之工作了三十多年的组织中驱逐出去仅仅需要五天。这甚至还是在 Stallman 没有参与性交易丑闻的情况下。
其中一些 “活动家” 过去也曾[针对 Linux 的作者 Linus Torvalds][23]。 Linux 基金会背后的管理层预见到了科技行业激进主义的增长趋势,因此他们制定了[适用于 Linux 内核开发的行为准则][24]并[强制 Torvalds 接受培训以改善他的行为][25]。 如果他们没有采取纠正措施,可能 Torvalds 也已经被批倒批臭了。
忽视技术支持者的鲁莽行为和性别歧视是不可接受的,但是对于那些遇到不同意某种流行观点的人就贴大字报,施以私刑也是不道德的做法。我不支持 Stallman 和他过去的言论,但我也不能接受他以这种方式(被迫?)辞职。
Techrights 对此有一些有趣的评论,你可以在 [这里][26] 和 [这里][27] 看到。
_**您对此事有何看法? 请文明分享您的观点和意见。过激评论将不会公布。**_
--------------------------------------------------------------------------------
via: https://itsfoss.com/richard-stallman-controversy/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[name1e5s](https://github.com/name1e5s)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/stallman-conroversy.png?ssl=1
[2]: https://en.wikipedia.org/wiki/Richard_Stallman
[3]: https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
[4]: https://en.wikipedia.org/wiki/Free_software_movement
[5]: https://en.wikipedia.org/wiki/Jeffrey_Epstein
[6]: https://en.wikipedia.org/wiki/Marvin_Minsky
[7]: https://medium.com/@selamie/remove-richard-stallman-fec6ec210794
[8]: https://twitter.com/hashtag/epstein?src=hash&ref_src=twsrc%5Etfw
[9]: https://twitter.com/hashtag/MIT?src=hash&ref_src=twsrc%5Etfw
[10]: https://twitter.com/selamjie/status/1172244207978897408?ref_src=twsrc%5Etfw
[11]: https://twitter.com/triketora/status/1172443389536555009?ref_src=twsrc%5Etfw
[12]: https://twitter.com/sarahmei/status/1172283772428906496?ref_src=twsrc%5Etfw
[13]: https://twitter.com/fsf?ref_src=twsrc%5Etfw
[14]: https://twitter.com/_sagesharp_/status/1173637138413318144?ref_src=twsrc%5Etfw
[15]: https://geekfeminism.wikia.org/wiki/Richard_Stallman
[16]: https://medium.com/@selamie/remove-richard-stallman-appendix-a-a7e41e784f88
[17]: https://www.vice.com/en_us/article/9ke3ke/famed-computer-scientist-richard-stallman-described-epstein-victims-as-entirely-willing
[18]: https://www.thedailybeast.com/famed-mit-computer-scientist-richard-stallman-defends-epstein-victims-were-entirely-willing
[19]: https://futurism.com/richard-stallman-epstein-scandal
[20]: https://blog.halon.org.uk/2019/09/gnome-foundation-relationship-gnu-fsf/
[21]: https://www.fsf.org/news/richard-m-stallman-resigns
[22]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/richard-stallman.png?resize=800%2C94&ssl=1
[23]: https://www.newyorker.com/science/elements/after-years-of-abusive-e-mails-the-creator-of-linux-steps-aside
[24]: https://itsfoss.com/linux-code-of-conduct/
[25]: https://itsfoss.com/torvalds-takes-a-break-from-linux/
[26]: http://techrights.org/2019/09/15/media-attention-has-been-shifted/
[27]: http://techrights.org/2019/09/16/stallman-removed/

View File

@ -0,0 +1,108 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Linux kernel: Top 5 innovations)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-top-5-innovations)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Linux 内核:五大创新
======
想知道 Linux 内核中哪些才是真正的创新(而不是那种时髦的炒作)吗?请继续阅读。
![绿色背景的企鹅][1]
_创新_ 这个词在科技行业的传播几乎和 _革命_ 一样多所以很难区分那些夸张之词和真正令人振奋的东西。Linux 内核被称为是创新的,但它又被称为现代计算中最大的 hack一个微观世界中的庞然大物。
撇开市场和模式不谈Linux 可以说是开源世界中最受欢迎的内核它在近30年的生命周期中引入了一些真正的游戏改变者。
### Cgroups (2.6.24)
早在2007年Paul Menage 和 Rohit Seth 就在内核中添加了深奥的[_control groups_ (cgroups)][2]功能(cgroups 的当前实现是由 Tejun Heo 重写的。)这种新技术最初被用作一种方法,从本质上来说,是为了确保一组特定任务的服务质量。
例如,您可以为与您的 Web 服务相关联的所有任务创建一个控制组cgroup为常规备份创建另一个 cgroup再为一般操作系统需求创建另一个 cgroup。然后您可以控制每个组的资源百分比这样您的操作系统和 Web 服务就可以获得大部分系统资源,而您的备份进程可以访问剩余的资源。
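下面是一个极简的 Python 示意(并非本文原有内容),假设系统使用挂载在 /sys/fs/cgroup/cpu 的 cgroup v1其中的目录名 websvc 和进程号 12345 都是虚构的,用来演示如何为一组任务限制 CPU 配额:

```
from pathlib import Path

# 需要 root 权限;目录名和进程号均为假设的示例。
cg = Path("/sys/fs/cgroup/cpu/websvc")
cg.mkdir(exist_ok=True)                          # 创建名为 websvc 的 cgroup
(cg / "cpu.cfs_quota_us").write_text("80000")    # 默认周期 100000 微秒,即限制约 80% 的单核 CPU
(cg / "cgroup.procs").write_text("12345")        # 将 PID 12345 加入该组
```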
然而cgroups 最著名的是它作为今天驱动云技术的角色容器。事实上cgroups 最初被命名为[进程容器][3]。当它们被 [LXC][4]、[CoreOS][5] 和 Docker 等项目采用时,这并不奇怪。
就像闸门打开后一样,“ _容器_ ”一词几乎成为了 Linux 的同义词,微服务风格的基于云的“应用”概念很快成为了规范。如今,很难脱离 cgroups它们是如此普遍。每一个大规模的基础设施可能还有你的笔记本电脑如果你运行 Linux 的话)都以一种有意思的方式使用了 cgroups使你的计算体验比以往任何时候都更加易于管理和灵活。
例如,您可能已经在电脑上安装了[Flathub][6]或[Flatpak][7],或者您已经在工作中使用[Kubernetes][8]和/或[OpenShift][9]。不管怎样,如果“容器”这个术语对你来说仍然模糊不清,你可以在[ Linux 容器背后的应用场景][10] 获得对容器的实际理解。
### LKMM (4.17)
2018年Jade Alglave, Alan Stern, Andrea Parri, Luc Maranget, Paul McKenney, 和其他几个人的辛勤工作的成果被合并到主线 Linux 内核中以提供正式的内存模型。Linux 内核内存[一致性]模型(LKMM)子系统是一套描述Linux 内存一致性模型的工具,同时也产生测试用例。
随着系统在物理设计上变得越来越复杂(增加了更多的中央处理器内核,高速缓存和内存增加,等等),它们就越难知道哪个中央处理器需要哪个地址空间,以及何时需要。例如,如果 CPU0 需要将数据写入内存中的共享变量,并且 CPU1 需要读取该值,那么 CPU0 必须在 CPU1 尝试读取之前写入。类似地,如果值是以一种顺序写入内存的,那么期望它们也以同样的顺序被读取,而不管哪个或哪些 CPU 正在读取。
即使在单个处理器上,内存管理也需要特定的顺序。像 **x = y** 这样的简单操作需要处理器从内存中加载 **y** 的值,然后将该值存储在 **x** 中。在处理器从内存中读取值之前,是不能将存储在 **y** 中的值放入 **x** 变量的。此外还有地址依赖:**x[n] = 6** 要求在处理器能够存储值 6 之前先加载 **n**。
LKMM 帮助识别和跟踪代码中的这些内存模式。这部分是通过一个名为 **herd** 的工具来实现的,该工具定义了内存模型施加的约束(以逻辑公式的形式),然后列举了与这些约束一致性的所有可能的结果。
### 低延迟补丁 (2.6.38)
很久以前(在 2011 年之前的日子里),如果你想[在 Linux 上进行多媒体工作][11],您必须获得一个低延迟内核。这主要适用于[录音][12]时添加许多实时效果的场景(如对着麦克风唱歌并添加混响,以及在耳机中无延迟地听到您的声音)。有些发行版,如 [Ubuntu Studio][13],可靠地提供了这样一个内核,所以实际上这没有什么障碍,只是在艺术家选择发行版时的一个重要提醒。
然而,如果您没有使用 Ubuntu Studio ,或者您需要在分发之前更新您的内核,您必须跳转到 rt-patches 网页,下载内核补丁,将它们应用到您的内核源代码,编译,然后手动安装。
然后随着内核版本2.6.38的发布这个过程结束了。默认情况下Linux 内核突然像变魔术一样内置了低延迟代码(根据基准测试延迟至少降低了10倍)。不再下载补丁,不用编译。一切都很顺利,这都是因为 Mike Galbraith 编写了一个200行的小补丁。
对于全世界的开源多媒体艺术家来说这是一个游戏规则的改变。从2011年开始到2016年事情变得如此美好我向自己做了一个挑战要求[在树莓派v1(型号B)上建造一个数字音频工作站(DAW)][14],结果发现它运行得出奇地好。
### RCU (2.5)
RCU或称读-拷贝-更新,是计算机科学中定义的一个系统,它允许多个处理器线程从共享内存中读取数据。它通过推迟更新来做到这一点,但也将它们标记为已更新,以确保数据读取为最新内容。实际上,这意味着读取与更新同时发生。
典型的 RCU 循环有点像这样:
1. 删除指向数据的指针,以防止其他读操作引用它。
2. 等待读完成他们的关键处理。
3. 回收内存空间。
将更新阶段划分为删除和回收阶段意味着更新程序会立即执行删除,同时推迟回收直到所有活动读取完成(通过阻止它们或注册一个回调以便在完成时调用)。
虽然读-拷贝-更新的概念不是为 Linux 内核发明的,但它在 Linux 中的实现是该技术的一个定义性的例子。
### 合作 (0.01)
对于 Linux 内核创新了什么这个问题,最重要的、也是最终的答案就是协作。称之为好时机,称之为技术优势,称之为黑客能力,或者仅仅称之为开源,但 Linux 内核及其支持的许多项目是协作与合作的光辉范例。
它远远超出了内核范畴。各行各业的人都对开源做出了贡献,可以说是因为 Linux 内核。Linux 曾经是,现在仍然是 [自由软件][15]的主要力量,激励人们把他们的代码、艺术、想法或者仅仅是他们自己带到一个全球化的、有生产力的、多样化的人类社区中。
### 你最喜欢的创新是什么?
这个列表偏向于我自己的兴趣容器、非统一内存访问NUMA和多媒体。我很可能遗漏了你最喜欢的内核创新。在评论中告诉我
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/linux-kernel-top-5-innovations
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://en.wikipedia.org/wiki/Cgroups
[3]: https://lkml.org/lkml/2006/10/20/251
[4]: https://linuxcontainers.org
[5]: https://coreos.com/
[6]: http://flathub.org
[7]: http://flatpak.org
[8]: http://kubernetes.io
[9]: https://www.redhat.com/sysadmin/learn-openshift-minishift
[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
[11]: http://slackermedia.info
[12]: https://opensource.com/article/17/6/qtractor-audio
[13]: http://ubuntustudio.org
[14]: https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker
[15]: http://fsf.org

View File

@ -1,172 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing Ansible environments on MacOS with Conda)
[#]: via: (https://opensource.com/article/19/8/using-conda-ansible-administration-macos)
[#]: author: (James Farrell https://opensource.com/users/jamesf)
使用 Conda 管理 MacOS 上的 Ansible 环境
=====
Conda 将 Ansible 所需的一切都收集到虚拟环境中并将其与其他项目分开。
![CICD with gears][1]
如果您是一名使用 MacOS 并参与 Ansible 管理的 Python 开发人员,您可能希望使用 Conda 包管理器将 Ansible 的工作内容与核心操作系统和其他本地项目分开。
Ansible 基于 Python的。让 Ansible 在 MacOS 上工作 Conda 并不是必须要的,但是它确实让您管理 Python 版本和包依赖变得更加容易。这允许您在 MacOS 上使用升级的 Python 版本并在您的系统中、Ansible 和其他编程项目之间保持 Python 包的依赖性是相互独立的。
在 MacOS 上安装 Ansible 还有其他方法。您可以使用[Homebrew][2],但是如果您对 Python 开发(或 Ansible 开发)感兴趣,您可能会发现在一个独立 Python 虚拟环境中管理 Ansible 可以减少一些混乱。我觉得这更简单;与其试图将 Python 版本和依赖项加载到系统或在 **/usr/local** 目录中 ,还不如使用 Conda 帮助我将 Ansible 所需的一切都收集到一个虚拟环境中,并将其与其他项目完全分开。
本文着重于使用 Conda 作为 Python 项目来管理 Ansible ,以保持它的干净并与其他项目分开。请继续阅读,并了解如何安装 Conda、创建新的虚拟环境、安装 Ansible 并对其进行测试。
### 序幕
最近,我想学习[Ansible][3],所以我需要找到安装它的最佳方法。
我通常对在我的日常工作站上安装东西很谨慎。我尤其不喜欢对供应商的默认操作系统安装应用手动更新(这是我多年作为 Unix 系统管理的首选)。我真的很想使用 Python 3.7,但是 MacOS 包是旧的2.7,我不会安装任何可能干扰核心 MacOS 系统的全局 Python 包。
所以,我使用本地 Ubuntu 18.04 虚拟机上开始了我的 Ansible 工作。这提供了真正意义上的的安全隔离,但我很快发现管理它是非常乏味的。所以我着手研究如何在本机 MacOS 上获得一个灵活但独立的 Ansible 系统。
由于 Ansible 基于 PythonConda 似乎是理想的解决方案。
### 安装 Conda
Conda 是一个开源软件,它提供方便的包和环境管理功能。它可以帮助您管理多个版本的 Python 、安装软件包依赖关系、执行升级和维护项目隔离。如果您手动管理 Python 虚拟环境Conda 将有助于简化和管理您的工作。浏览 [Conda 文档][4]可以了解更多细节。
我选择了 [Miniconda][5] Python 3.7 安装在我的工作站中,因为我想要最新的 Python 版本。无论选择哪个版本,您都可以使用其他版本的 Python 安装新的虚拟环境。
要安装 Conda请下载 PKG 格式的文件,进行通常的双击,并选择 “Install for me only” 选项。安装在我的系统上占用了大约158兆的空间。
安装完成后,调出一个终端来查看您有什么了。您应该看到:
* 一个 **miniconda3** 目录在您的 **home** 目录中
* shell 提示符被修改为 "(base)"
* **.bash_profile** 文件被 Conda-specific 设置内容更新
现在已经安装了基础,您就有了第一个 Python 虚拟环境。运行 Python 版本检查可以证明这一点,您的 PATH 将指向新的位置:
```
(base) $ which python
/Users/jfarrell/miniconda3/bin/python
(base) $ python --version
Python 3.7.1
```
现在安装了 Conda ,下一步是建立一个虚拟环境,然后安装 Ansible 并运行。
### 为 Ansible 创建虚拟环境
我想将 Ansible 与我的其他 Python 项目分开,所以我创建了一个新的虚拟环境并切换到它:
```
(base) $ conda create --name ansible-env --clone base
(base) $ conda activate ansible-env
(ansible-env) $ conda env list
```
第一个命令将 Conda 库克隆到一个名为 **ansible-env** 的新虚拟环境中。克隆引入了 Python 3.7 版本和一系列默认的 Python 模块,您可以根据需要添加、删除或升级这些模块。
第二个命令将 shell 上下文更改为这个新的环境。它为 Python 及其包含的模块设置了正确的路径。请注意,在 **conda activate ansible-env** 命令后,您的 shell 提示符会发生变化。
第三个命令不是必须的;它列出了安装了哪些 Python 模块及其版本和其他数据。
您可以随时使用 Conda 的 **activate** 命令切换到另一个虚拟环境。这将带您回到基本的: **conda base**
### 安装 Ansible
安装 Ansible 有多种方法,但是使用 Conda 可以将 Ansible 版本和所有需要的依赖项打包在一个地方。Conda 提供了灵活的,既可以将所有内容分开,又可以根据需要添加其他新环境(我将在后面演示)。
要安装 Ansible 的相对较新版本,请使用:
```
(base) $ conda activate ansible-env
(ansible-env) $ conda install -c conda-forge ansible
```
由于 Ansible 不是 Conda 默认的一部分,因此**-c**用于从备用通道搜索和安装。Ansible 现已安装到**ansible-env**虚拟环境中,可以使用了。
### 使用 Ansible
既然您已经安装了 Conda 虚拟环境,就可以使用它了。首先,确保要控制的节点已将工作站的 SSH 密钥安装到正确的用户帐户。
调出一个新的 shell 并运行一些基本的Ansible命令:
```
(base) $ conda activate ansible-env
(ansible-env) $ ansible --version
ansible 2.8.1
  config file = None
  configured module search path = ['/Users/jfarrell/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/jfarrell/miniconda3/envs/ansibleTest/lib/python3.7/site-packages/ansible
  executable location = /Users/jfarrell/miniconda3/envs/ansibleTest/bin/ansible
  python version = 3.7.1 (default, Dec 14 2018, 13:28:58) [Clang 4.0.1 (tags/RELEASE_401/final)]
(ansible-env) $ ansible all -m ping -u ansible
192.168.99.200 | SUCCESS =&gt; {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```
现在 Ansible 正在工作了,您可以在控制台中抽身,并从您的 MacOS 工作站中使用它们。
### 克隆新的 Ansible 进行 Ansible 开发
这部分完全是可选的;只有当您想要额外的虚拟环境来修改 Ansible 或者安全地使用有问题的 Python 模块时,才需要它。您可以通过以下方式将主 Ansible 环境克隆到开发副本中:
```
(ansible-env) $ conda create --name ansible-dev --clone ansible-env
(ansible-env) $ conda activte ansible-dev
(ansible-dev) $
```
### 需要注意的问题
偶尔您可能遇到使用 Conda 的麻烦。您通常可以通过以下方式删除不良环境:
```
$ conda activate base
$ conda remove --name ansible-dev --all
```
如果出现无法解决的错误,通常可以通过在 **~/miniconda3/envs** 中找到环境并删除整个目录来直接删除环境。如果基础损坏了,您可以删除整个 **~/miniconda3**,然后从 PKG 文件中重新安装。只要确保保留 **~/miniconda3/envs** ,或使用 Conda 工具导出环境配置并在以后重新创建即可。
MacOS 上不包括 **sshpass** 程序。只有当您的 Ansible 工作要求您向 Ansible 提供SSH登录密码时才需要它。您可以在 SourceForge 上找到当前的[sshpass source][6]。
最后,基础 Conda Python 模块列表可能缺少您工作所需的一些 Python 模块。如果您需要安装一个模块,**conda install &lt;package&gt;** 命令是首选的,但是 **pip** 可以在需要的地方使用Conda会识别安装模块。
### 结论
Ansible 是一个强大的自动化工具值得我们去学习。Conda是一个简单有效的 Python 虚拟环境管理工具。
在您的 MacOS 环境中保持软件安装分离是保持日常工作环境的稳定性和健全性的谨慎方法。Conda 尤其有助于升级您的Python 版本,将 Ansible 从其他项目中分离出来,并安全地使用 Ansible。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/using-conda-ansible-administration-macos
作者:[James Farrell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jamesf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
[2]: https://brew.sh/
[3]: https://docs.ansible.com/?extIdCarryOver=true&sc_cid=701f2000001OH6uAAG
[4]: https://conda.io/projects/conda/en/latest/index.html
[5]: https://docs.conda.io/en/latest/miniconda.html
[6]: https://sourceforge.net/projects/sshpass/