Merge pull request #6 from LCTT/master

update
This commit is contained in:
wyxplus 2021-03-30 12:29:47 +08:00 committed by GitHub
commit d684fe94ea
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
50 changed files with 6065 additions and 1891 deletions

View File

@ -0,0 +1,299 @@
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13233-1.html)
[#]: subject: (Using Python to explore Google's Natural Language API)
[#]: via: (https://opensource.com/article/19/7/python-google-natural-language-api)
[#]: author: (JR Oakes https://opensource.com/users/jroakes)
利用 Python 探究 Google 的自然语言 API
======
> Google API 可以凸显出有关 Google 如何对网站进行分类的线索,以及如何调整内容以改进搜索结果的方法。
![](https://img.linux.net.cn/data/attachment/album/202103/24/232018q66pz2uc5uuq1p03.jpg)
作为一名技术性的搜索引擎优化人员,我一直在寻找以新颖的方式使用数据的方法,以更好地了解 Google 如何对网站进行排名。我最近研究了 Google 的 [自然语言 API][2] 能否更好地揭示 Google 是如何分类网站内容的。
尽管已有不少 [开源 NLP 工具][3],但我还是想探索谷歌的工具,因为它可能在其他产品(比如搜索)中使用了同样的技术。本文介绍了 Google 的自然语言 API并探究了常见的自然语言处理NLP任务以及如何使用它们来为网站内容的创建提供参考。
### 了解数据类型
首先,了解 Google 自然语言 API 返回的数据类型非常重要。
#### 实体
<ruby>实体<rt>Entities</rt></ruby>是可以与物理世界中的某些事物联系在一起的文本短语。<ruby>命名实体识别<rt>Named Entity Recognition</rt></ruby>NER是 NLP 的难点,因为工具通常需要查看关键字的完整上下文才能理解其用法。例如,<ruby>同形异义字<rt>homographs</rt></ruby>拼写相同,但是具有多种含义:句子中的 “lead” 可以指一种金属“铅”名词可以指使某人移动“牵领”动词还可能是剧本中的主要角色也是名词。Google 有 12 种不同类型的实体,还有第 13 个名为 “UNKNOWN”未知的统称类别。一些实体与维基百科的文章相关这表明 [知识图谱][4] 对数据的影响。每个实体都会返回一个<ruby>显著性<rt>salience</rt></ruby>分数,即其与所提供文本的整体相关性。
![实体][5]
#### 情感
<ruby>情感<rt>Sentiment</rt></ruby>,即对某事的看法或态度,是在文档和句子层面,以及对文档中发现的单个实体上进行衡量的。情感的<ruby>得分<rt>score</rt></ruby>范围从 -1.0(消极)到 1.0(积极)。<ruby>幅度<rt>magnitude</rt></ruby>代表情感的<ruby>非归一化<rt>non-normalized</rt></ruby>强度,范围是 0.0 到无穷大。
![情感][6]
#### 语法
<ruby>语法<rt>Syntax</rt></ruby>解析包含了大多数常见 NLP 库都具备的功能,例如 <ruby>[词形还原][7]<rt>lemmatization</rt></ruby>、<ruby>[词性标注][8]<rt>part-of-speech tagging</rt></ruby> 和 <ruby>[依存树解析][9]<rt>dependency-tree parsing</rt></ruby>。NLP 的主要工作就是帮助机器理解文本以及关键字之间的关系。语法解析是大多数语言处理或理解任务的基础部分。
![语法][10]
#### 分类
<ruby>分类<rt>Categories</rt></ruby>是将整个给定内容分配给特定行业或主题类别,其<ruby>置信度<rt>confidence</rt></ruby>得分从 0.0 到 1.0。这些分类似乎与其他 Google 工具使用的受众群体和网站类别相同,如 AdWords。
![分类][11]
### 提取数据
现在,我将提取一些示例数据进行处理。我使用 Google 的 [搜索控制台 API][12] 收集了一些搜索查询及其相应的网址。Google 搜索控制台是一个可以报告人们通过哪些搜索词在 Google Search 中找到网站页面的工具。这个 [开源的 Jupyter 笔记本][13] 可以让你提取有关网站的类似数据。在此示例中,我提取了一个网站(我没有提及名字)在 2019 年 1 月 1 日至 6 月 1 日期间产生的 Google 搜索控制台数据,并将其限制为至少获得一次点击(而不只是<ruby>曝光<rt>impressions</rt></ruby>)的查询。
该数据集包含 2969 个页面的信息,以及 7144 条使该网站的网页出现在 Google Search 结果中的查询。下表显示,绝大多数页面获得的点击很少,因为该网站侧重于所谓的长尾(更具体,通常更长)而不是短尾(非常笼统,搜索量更大)搜索查询。
![所有页面的点击次数柱状图][14]
为了减少数据集的大小并仅保留效果最好的页面,我将数据集限制为在此期间至少获得 20 次曝光的页面。这是精简后的数据集按页面点击次数的柱状图,其中包括 723 个页面:
![部分网页的点击次数柱状图][15]
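这一步的过滤可以用 pandas 来实现,下面是一个最小示意(其中的数据和列名 `page`、`impressions` 等都是虚构的,实际取决于你导出搜索控制台数据的方式):

```
import pandas as pd

# 虚构的搜索控制台导出数据
df = pd.DataFrame({
    'page':        ['/a', '/a', '/b', '/c'],
    'query':       ['q1', 'q2', 'q3', 'q4'],
    'clicks':      [3, 1, 0, 12],
    'impressions': [40, 15, 6, 90],
})

# 按页面聚合曝光与点击
pages = df.groupby('page').agg({'impressions': 'sum', 'clicks': 'sum'})

# 仅保留该时段内至少获得 20 次曝光的页面
top_pages = pages[pages['impressions'] >= 20]
print(top_pages)
```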
### 在 Python 中使用 Google 自然语言 API 库
为了测试这个 API可以在 Python 中创建一个利用 [google-cloud-language][16] 库的小脚本。以下代码基于 Python 3.5+。
首先,激活一个新的虚拟环境并安装库。用环境的唯一名称替换 `<your-env>`
```
virtualenv <your-env>
source <your-env>/bin/activate
pip install --upgrade google-cloud-language
pip install --upgrade requests
```
该脚本从 URL 提取 HTML并将 HTML 提供给自然语言 API它返回一个包含 `sentiment`、`entities` 和 `categories` 的字典,其中这些键的值都是列表。我使用 Jupyter 笔记本运行此代码,因为用同一个内核来注释和重试代码更加容易。
```
# Import needed libraries
import requests
import json

from google.cloud import language
from google.oauth2 import service_account
from google.cloud.language import enums
from google.cloud.language import types

# Build language API client (requires service account key)
client = language.LanguageServiceClient.from_service_account_json('services.json')

# Define functions
def pull_googlenlp(client, url, invalid_types=['OTHER'], **data):

    html = load_text_from_url(url, **data)

    if not html:
        return None

    document = types.Document(
        content=html,
        type=language.enums.Document.Type.HTML)

    features = {'extract_syntax': True,
                'extract_entities': True,
                'extract_document_sentiment': True,
                'extract_entity_sentiment': True,
                'classify_text': False
                }

    response = client.annotate_text(document=document, features=features)
    sentiment = response.document_sentiment
    entities = response.entities

    response = client.classify_text(document)
    categories = response.categories

    def get_type(type):
        # 把数字形式的实体类型枚举值转换为可读的类型名称
        return enums.Entity.Type(type).name

    result = {}
    result['sentiment'] = []
    result['entities'] = []
    result['categories'] = []

    if sentiment:
        result['sentiment'] = [{'magnitude': sentiment.magnitude, 'score': sentiment.score}]

    for entity in entities:
        if get_type(entity.type) not in invalid_types:
            result['entities'].append({'name': entity.name, 'type': get_type(entity.type), 'salience': entity.salience, 'wikipedia_url': entity.metadata.get('wikipedia_url', '-')})

    for category in categories:
        result['categories'].append({'name': category.name, 'confidence': category.confidence})

    return result


def load_text_from_url(url, **data):

    timeout = data.get('timeout', 20)

    try:
        print("Extracting text from: {}".format(url))
        response = requests.get(url, timeout=timeout)

        text = response.text
        status = response.status_code

        if status == 200 and len(text) > 0:
            return text

        return None
    except Exception as e:
        print('Problem with url: {0}.'.format(url))
        return None
```
要访问该 API请按照 Google 的 [快速入门说明][17] 在 Google Cloud 控制台中创建一个项目,启用该 API 并下载服务帐户密钥。之后,你应该会得到一个类似于以下内容的 JSON 文件:
![services.json 文件][18]
将其命名为 `services.json`,并放到项目文件夹中。
然后,你可以通过运行以下程序来提取任何 URL例如 Opensource.com的 API 数据:
```
url = "https://opensource.com/article/19/6/how-ssh-running-container"
pull_googlenlp(client,url)
```
如果设置正确,你将看到以下输出:
![拉取 API 数据的输出][19]
为了使入门更加容易,我创建了一个 [Jupyter 笔记本][20],你可以下载并使用它来测试提取网页的实体、类别和情感。我更喜欢使用 [JupyterLab][21],它是 Jupyter 笔记本的扩展,其中包括文件查看器和其他增强的用户体验功能。如果你不熟悉这些工具,我认为利用 [Anaconda][22] 是开始使用 Python 和 Jupyter 的最简单途径。它使安装和设置 Python 以及常用库变得非常容易,尤其是在 Windows 上。
### 处理数据
有了这些可以抓取给定页面的 HTML 并将其传递给自然语言 API 的函数,我就可以对这 723 个 URL 进行一些分析了。首先,我将通过查看所有页面中返回的顶级分类的数量,来了解与该网站相关的分类。
#### 分类
![来自示例站点的分类数据][23]
这似乎相当准确地代表了该特定站点的关键主题。接着,我选取了一条由某个表现最好的页面参与排名的查询,来比较同一查询在 Google 搜索结果中的其他排名页面。
* URL 1 | 顶级类别:/法律和政府/与法律相关的0.5099999904632568共 1 个类别。
* URL 2 | 未返回任何类别。
* URL 3 | 顶级类别:/互联网与电信/移动与无线0.6100000143051147共 1 个类别。
* URL 4 | 顶级类别:/计算机与电子产品/软件0.5799999833106995共 2 个类别。
* URL 5 | 顶级类别:/互联网与电信/移动与无线/移动应用程序和附件0.75共 1 个类别。
* URL 6 | 未返回任何类别。
* URL 7 | 顶级类别:/计算机与电子产品/软件/商业与生产力软件0.7099999785423279共 2 个类别。
* URL 8 | 顶级类别:/法律和政府/与法律相关的0.8999999761581421共 3 个类别。
* URL 9 | 顶级类别:/参考/一般参考/类型指南和模板0.6399999856948853共 1 个类别。
* URL 10 | 未返回任何类别。
上方括号中的数字表示 Google 对页面内容与该分类相关的置信度。对于相同分类,第八个结果比第一个结果具有更高的置信度,因此,这似乎不是定义排名相关性的灵丹妙药。此外,分类太宽泛导致无法满足特定搜索主题的需要。
通过排名查看平均置信度,这两个指标之间似乎没有相关性,至少对于此数据集而言如此:
![平均置信度排名分布图][24]
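按排名位置计算平均置信度并检验皮尔逊相关性,可以用类似下面的 scipy 示意代码来做(数据是虚构的):

```
import pandas as pd
from scipy.stats import pearsonr

# 虚构数据:每行是一个页面的最佳排名位置及其顶级分类的置信度
results = pd.DataFrame({
    'position':   [1, 2, 3, 5, 8, 9],
    'confidence': [0.51, 0.43, 0.61, 0.58, 0.90, 0.64],
})

# 按排名位置计算平均置信度
print(results.groupby('position')['confidence'].mean())

# 两个指标之间的皮尔逊相关性
r, p = pearsonr(results['position'], results['confidence'])
print("Pearson r = {:.5f}, p = {:.5f}".format(r, p))
```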
这两种方法更有意义的用途,是对网站进行规模化审查,以确保内容类别易于理解,并且样板或促销内容不会使你的页面偏离你的主要专业领域。想一想,如果你出售工业用品,但你的页面返回的主要分类却是 “Marketing市场营销。似乎没有一个强烈的迹象表明分类相关性与你的排名有什么关系至少在页面级别如此。
#### 情感
我不会在情感上花很多时间。在所有从 API 返回情感的页面中,得分都落在 0.1 和 0.2 这两个区间里,这几乎是中立的情感。根据直方图,很容易看出情感没有太大价值。对于新闻或舆论网站而言,测量特定页面的情感与其排名中位数之间的相关性将是一个更加有趣的指标。
![独特页面的情感柱状图][25]
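如果想自己绘制这样的直方图,可以参考下面的示意(`rows` 的结构沿用前文 `pull_googlenlp()` 的返回值,这里用两条虚构数据代替):

```
import pandas as pd
import matplotlib.pyplot as plt

# rows 是对多个页面调用 pull_googlenlp() 得到的结果列表(此处为虚构数据)
rows = [
    {'sentiment': [{'magnitude': 2.3, 'score': 0.1}]},
    {'sentiment': [{'magnitude': 1.1, 'score': 0.2}]},
]
scores = [r['sentiment'][0]['score'] for r in rows if r and r['sentiment']]

pd.Series(scores).plot.hist(bins=20, title='Sentiment score per page')
plt.show()
```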
#### 实体
在我看来,实体是 API 中最有趣的部分。这是在所有页面中按<ruby>显著性<rt>salience</rt></ruby>(或与页面的相关性)选出的顶级实体。请注意,对于相同的术语(销售清单)Google 会推断出不同的类型,而这可能是错误的。这是由这些术语出现在内容中的不同上下文引起的。
![示例网站的顶级实体][26]
然后,我分别查看了每个实体类型,看该实体的显著性与页面的最佳排名位置之间是否存在关联。对于每种类型,我取出匹配该类型的顶级实体的显著性(即该实体与页面的整体相关性),并按显著性降序排列。
有些实体类型在所有示例中返回的显著性为零,因此我从下面的图表中省略了这些结果。
![显著性与最佳排名位置的相关性][27]
“Consumer Good消费性商品” 实体类型具有最高的正相关性,<ruby>皮尔森相关度<rt>Pearson correlation</rt></ruby>为 0.15854,尽管由于较低编号的排名更好,所以 “Person” 实体的结果最好,相关度为 -0.15483。这是一个非常小的样本集,尤其是对于单个实体类型,我不能对数据做太多的判断。我没有发现任何具有强相关性的值,但是 “Person” 实体最有意义。网站通常都有关于其首席执行官和其他主要雇员的页面,这些页面很可能在这些查询的搜索结果方面做得好。
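按实体类型分组、计算显著性与最佳排名位置之间的皮尔逊相关度,大致可以这样写(数据与列名均为虚构的示意):

```
import pandas as pd
from scipy.stats import pearsonr

# 虚构数据:每行是某页面匹配到的顶级实体的类型、显著性,以及该页面的最佳排名
ents = pd.DataFrame({
    'type':      ['PERSON', 'PERSON', 'PERSON',
                  'CONSUMER_GOOD', 'CONSUMER_GOOD', 'CONSUMER_GOOD'],
    'salience':  [0.64, 0.22, 0.41, 0.51, 0.08, 0.33],
    'best_rank': [3, 12, 7, 5, 40, 18],
})

for etype, group in ents.groupby('type'):
    r, _ = pearsonr(group['salience'], group['best_rank'])
    print(etype, round(r, 5))
```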
接下来,从整体上查看该站点,根据实体名称和实体类型,出现了以下主题。
![基于实体名称和实体类型的主题][28]
我模糊了几个看起来过于具体的结果,以掩盖该网站的身份。从主题上讲,名称信息是审视你(或竞争对手)网站核心主题的一种好方法。这里仅基于示例网站有排名的网址来做(因为 Search Console 数据仅记录在 Google 中有展示的页面),而不是该网站所有可能的网址;但如果你使用像 [Ahrefs][29] 之类的工具提取一个网站的主要排名 URL结果会更有趣该工具会跟踪许多查询以及这些查询的 Google 搜索结果。
实体数据中另一个有趣的部分是,标记为 “CONSUMER_GOOD” 的实体往往 “看起来” 很像我在 “<ruby>知识结果<rt>Knowledge Results</rt></ruby>” 中看到的结果,也就是 Google 搜索结果页面右侧的那些内容。
![Google 搜索结果][30]
在我们的数据集中,具有三个或三个以上关键字的 “Consumer Good消费性商品” 实体名称中,有 5.8% 拥有与 Google 对该实体名称的搜索结果相同的知识结果。这意味着,如果你在 Google 中搜索这些术语或短语,右侧的框(例如,上面显示 Linux 的知识结果)将出现在搜索结果页面中。由于 Google 会 “挑选” 一个示例网页来代表该实体因此这是一个在搜索结果中识别出脱颖而出机会的好办法。同样有趣的是在这 5.8% 的显示了知识结果的名称中,没有一个实体从自然语言 API 返回了维基百科 URL。这很有趣值得进行额外的分析这对于传统的全球排名跟踪工具如 Ahrefs数据库中没有的更冷门的主题尤其有用。
如前所述,知识结果对于那些希望自己的内容在 Google 中被突出展示的网站所有者来说是非常重要的,因为它们在桌面搜索中会被高亮显示。据推测,它们也很可能与 Google [Discover][31] 的知识库主题保持一致,这是一款适用于 Android 和 iOS 的产品,它试图根据用户感兴趣但没有明确搜索的主题为用户呈现内容。
### 总结
本文介绍了 Google 的自然语言 API分享了一些代码并研究了此 API 对网站所有者可能有用的方式。关键要点是:
* 学习使用 Python 和 Jupyter 笔记本,可以为你的数据收集任务打开一扇大门,通向那些由聪明而有才华的人们建立的、不可思议的 API 和开源项目(如 Pandas 和 NumPy的世界。
* Python 允许我为了一个特定目的快速提取和测试有关 API 值的假设。
* 通过 Google 的分类 API 传递网站页面可能是一项很好的检查,以确保其内容分解成正确的主题分类。对于竞争对手的网站执行此操作还可以提供有关在何处进行调整或创建内容的指导。
* 对于示例网站Google 的情感评分似乎并不是一个有趣的指标,但是对于新闻或基于意见的网站,它可能是一个有趣的指标。
* Google 发现的实体从整体上提供了网站更细化的主题级别视图,并且像分类一样,在竞争性内容分析中使用会非常有趣。
* 实体可以帮助定义机会,使你的内容可以与搜索结果或 Google Discover 结果中的 Google 知识块保持一致。在我们的数据集中,字数较多的 “Consumer Goods消费商品” 实体有 5.8% 显示了这些知识结果;对于某些网站来说,有机会更好地优化这些实体在页面上的显著性分数,从而更有机会在 Google 搜索结果或 Google Discover 建议中占据这一重要的展示位置。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/python-google-natural-language-api
作者:[JR Oakes][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jroakes
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
[2]: https://cloud.google.com/natural-language/#natural-language-api-demo
[3]: https://opensource.com/article/19/3/natural-language-processing-tools
[4]: https://en.wikipedia.org/wiki/Knowledge_Graph
[5]: https://opensource.com/sites/default/files/uploads/entities.png (Entities)
[6]: https://opensource.com/sites/default/files/uploads/sentiment.png (Sentiment)
[7]: https://en.wikipedia.org/wiki/Lemmatisation
[8]: https://en.wikipedia.org/wiki/Part-of-speech_tagging
[9]: https://en.wikipedia.org/wiki/Parse_tree#Dependency-based_parse_trees
[10]: https://opensource.com/sites/default/files/uploads/syntax.png (Syntax)
[11]: https://opensource.com/sites/default/files/uploads/categories.png (Categories)
[12]: https://developers.google.com/webmaster-tools/
[13]: https://github.com/MLTSEO/MLTS/blob/master/Demos.ipynb
[14]: https://opensource.com/sites/default/files/uploads/histogram_1.png (Histogram of clicks for all pages)
[15]: https://opensource.com/sites/default/files/uploads/histogram_2.png (Histogram of clicks for subset of pages)
[16]: https://pypi.org/project/google-cloud-language/
[17]: https://cloud.google.com/natural-language/docs/quickstart
[18]: https://opensource.com/sites/default/files/uploads/json_file.png (services.json file)
[19]: https://opensource.com/sites/default/files/uploads/output.png (Output from pulling API data)
[20]: https://github.com/MLTSEO/MLTS/blob/master/Tutorials/Google_Language_API_Intro.ipynb
[21]: https://github.com/jupyterlab/jupyterlab
[22]: https://www.anaconda.com/distribution/
[23]: https://opensource.com/sites/default/files/uploads/categories_2.png (Categories data from example site)
[24]: https://opensource.com/sites/default/files/uploads/plot.png (Plot of average confidence by ranking position )
[25]: https://opensource.com/sites/default/files/uploads/histogram_3.png (Histogram of sentiment for unique pages)
[26]: https://opensource.com/sites/default/files/uploads/entities_2.png (Top entities for example site)
[27]: https://opensource.com/sites/default/files/uploads/salience_plots.png (Correlation between salience and best ranking position)
[28]: https://opensource.com/sites/default/files/uploads/themes.png (Themes based on entity name and entity type)
[29]: https://ahrefs.com/
[30]: https://opensource.com/sites/default/files/uploads/googleresults.png (Google search results)
[31]: https://www.blog.google/products/search/introducing-google-discover/

View File

@ -1,56 +1,46 @@
[#]: collector: (lujun9972)
[#]: translator: (cooljelly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13239-1.html)
[#]: subject: (Multicloud, security integration drive massive SD-WAN adoption)
[#]: via: (https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
多云融合和安全集成推动SD-WAN的大规模应用
多云融合和安全集成推动 SD-WAN 的大规模应用
======
2022 年 SD-WAN 市场 40% 的同比增长主要来自于包括 Cisco、VMWare、Juniper 和 Arista 在内的网络供应商和包括 AWS、Microsoft AzureGoogle Anthos 和 IBM RedHat 在内的服务提供商之间的紧密联系。
[Gratisography][1] [(CC0)][2]
越来越多的云应用,以及越来越完善的网络安全性,可视化特性和可管理性,正以惊人的速度推动企业软件定义广域网 ([SD-WAN][3]) 部署
> 2022 年 SD-WAN 市场 40% 的同比增长,主要来自于包括 Cisco、VMware、Juniper 和 Arista 在内的网络供应商,和包括 AWS、Microsoft Azure、Google Anthos 和 IBM RedHat 在内的服务提供商之间的紧密联系。
IDCInternational Data Corporation译者注公司的网络基础架构副总裁 Rohit Mehra 表示,根据 IDC 的研究过去一年中特别是软件和基础设施即服务SaaS 和 IaaS产品推动了 SD-WAN 的实施。
**阅读更多关于边缘计算的文章**
* [边缘计算和物联网如何重塑数据中心][4]
* [边缘计算的最佳实践][5]
* [边缘计算如何提高物联网的安全性][6]
![](https://img.linux.net.cn/data/attachment/album/202103/27/095154f0625f3k8455800x.jpg)
越来越多的云应用,以及越来越完善的网络安全性、可视化特性和可管理性,正以惊人的速度推动企业<ruby>软件定义广域网<rt>software-defined WAN</rt></ruby>[SD-WAN][3])的部署。
IDCLCTT 译注International Data Corporation公司的网络基础架构副总裁 Rohit Mehra 表示,根据 IDC 的研究过去一年中特别是软件和基础设施即服务SaaS 和 IaaS产品推动了 SD-WAN 的实施。
例如IDC 表示,根据其最近的客户调查结果,有 95% 的客户将在两年内使用 [SD-WAN][7] 技术,而 42% 的客户已经部署了它。IDC 还表示,到 2022 年SD-WAN 基础设施市场将达到 45 亿美元,此后将以每年 40% 的速度增长。
SD-WAN 的增长是一个广泛的趋势,很大程度上是由企业希望优化远程站点的云连接性的需求推动的。 Mehra 说。
SD-WAN 的增长是一个广泛的趋势,很大程度上是由企业希望优化远程站点的云连接性的需求推动的。 Mehra 说。
思科最近撰文称,多云网络的发展正在促使许多企业改组其网络,以更好地使用 SD-WAN 技术。SD-WAN 对于采用云服务的企业至关重要,它是园区网、分支机构、[物联网][8]、[数据中心][9] 和云之间的连接中间件。思科公司表示,根据调查,平均每个思科的企业客户有 30 个付费的 SaaS 应用程序,而他们实际使用的 SaaS 应用会更多——在某些情况下甚至超过 100 种。
这种趋势的部分原因是由网络供应商(例如 Cisco、VMware、Juniper、Arista 等)(这里的网络供应商指的是提供硬件或软件并可按需组网的厂商,译者注)与服务提供商(例如 Amazon AWS、Microsoft Azure、Google Anthos 和 IBM RedHat 等)建立的关系推动的。
这种趋势的部分原因是由网络供应商(例如 Cisco、VMware、Juniper、Arista 等)(LCTT 译注:这里的网络供应商指的是提供硬件或软件并可按需组网的厂商)与服务提供商(例如 Amazon AWS、Microsoft Azure、Google Anthos 和 IBM RedHat 等)建立的关系推动的。
去年 12 月AWS为其云产品发布了关键服务其中包括诸如 [AWS Transit Gateway][10] 等新集成技术的关键服务,这标志着 SD-WAN 与多云场景关系的日益重要。使用 AWS Transit Gateway 技术,客户可以将 AWS 中的 VPCVirtual Private Cloud 和其自有网络均连接到相同的网关。Aruba、Aviatrix Cisco、Citrix Systems、Silver Peak 和 Versa 已经宣布支持该技术,这将简化和增强这些公司的 SD-WAN 产品与 AWS 云服务的集成服务的性能和表现。
去年 12 月AWS 为其云产品发布了包括 [AWS Transit Gateway][10] 等新集成技术在内的关键服务,这标志着 SD-WAN 与多云场景的关系日益重要。使用 AWS Transit Gateway 技术,客户可以将 AWS 中的 VPC<ruby>虚拟私有云<rt>Virtual Private Cloud</rt></ruby>和其自有网络均连接到相同的网关。Aruba、Aviatrix、Cisco、Citrix Systems、Silver Peak 和 Versa 已经宣布支持该技术,这将简化和增强这些公司的 SD-WAN 产品与 AWS 云服务集成的性能和表现。
Mehra 说,展望未来,对云应用的友好兼容和完善的性能监控等增值功能将是 SD-WAN 部署的关键部分。
随着 SD-WAN 与云的关系不断发展SD-WAN 对集成安全功能的需求也在不断增长。
Mehra 说SD-WAN 产品集成安全性的方式比以往单独打包的广域网安全软件或服务要好得多。SD-WAN 是一个更加敏捷的安全环境。SD-WAN 公认的主要组成部分包括安全功能,数据分析功能和广域网优化功能等,其中安全功能则是下一代 SD-WAN 解决方案的首要需求。
Mehra 说SD-WAN 产品集成安全性的方式比以往单独打包的广域网安全软件或服务要好得多。SD-WAN 是一个更加敏捷的安全环境。SD-WAN 公认的主要组成部分包括安全功能、数据分析功能和广域网优化功能等,其中安全功能是下一代 SD-WAN 解决方案的首要需求。
Mehra 说,企业将越来越少地关注仅解决某个具体问题的 SD-WAN 解决方案,而将青睐于能够解决更广泛的网络管理和安全需求的 SD-WAN 平台。他们将寻找可以与他们的 IT 基础设施(包括企业数据中心网络、企业园区局域网、[公有云][12] 资源等)集成更紧密的 SD-WAN 平台。他说,企业将寻求无缝融合的安全服务,并希望有其他各种功能的支持,例如可视化数据分析和统一通信功能。
Mehra 说,企业将越来越少地关注仅解决某个具体问题的 SD-WAN 解决方案,而将青睐于能够解决更广泛的网络管理和安全需求的 SD-WAN 平台。他们将寻找可以与他们的 IT 基础设施(包括企业数据中心网络、企业园区局域网、[公有云][12] 资源等)集成更紧密的 SD-WAN 平台。他说,企业将寻求无缝融合的安全服务,并希望有其他各种功能的支持,例如可视化数据分析和统一通信功能。
“随着客户不断将其基础设施与软件集成在一起,他们可以做更多的事情,例如根据其局域网和广域网上的用户、设备或应用程序的需求,实现一致的管理和安全策略,并最终获得更好的整体使用体验。” Mehra 说。
一个新兴趋势是 SD-WAN 产品包需要支持 [SD-branch][13] 技术。Mehra 说,超过 70% 的 IDC 受调查客户希望在明年使用 SD-Branch。在最近几周[Juniper][14] 和 [Aruba][15] 公司已经优化了 SD-branch 产品,这一趋势预计将在今年持续下去。
SD-Branch 技术基于 SD-WAN 的概念和支持但更专注于满足分支机构中局域网的组网和管理需求。展望未来SD-Branch 如何与其他技术集成,例如数据分析,音视频,统一通信等,将成为该技术的主要驱动力。
加入 [Facebook][16] 和 [LinkedIn][17] 上的 Network World 社区,以评论您最关注的主题。
SD-Branch 技术建立在 SD-WAN 的概念和支持的基础上但更专注于满足分支机构中局域网的组网和管理需求。展望未来SD-Branch 如何与其他技术集成,例如数据分析、音视频、统一通信等,将成为该技术的主要驱动力。
--------------------------------------------------------------------------------
@ -59,13 +49,13 @@ via: https://www.networkworld.com/article/3527194/multicloud-security-integratio
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[cooljelly](https://github.com/cooljelly)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.pexels.com/photo/black-and-white-branches-tree-high-279/
[1]: https://images.idgesg.net/images/article/2018/07/branches_branching_trees_bare_black_and_white_by_gratisography_cc0_via_pexels_1200x800-100763250-large.jpg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wyxplus)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13242-1.html)
[#]: subject: (How to automate your cryptocurrency trades with Python)
[#]: via: (https://opensource.com/article/20/4/python-crypto-trading-bot)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
@ -11,11 +11,11 @@
如何使用 Python 来自动交易加密货币
======
在本教程中,教你如何设置和使用 Pythonic 来编程。它是一个图形化编程工具,用户可以很容易地使用现成的函数模块创建 Python 程序。
> 在本教程中,教你如何设置和使用 Pythonic 来编程。它是一个图形化编程工具,用户可以很容易地使用现成的函数模块创建 Python 程序。
![scientific calculator][1]
![](https://img.linux.net.cn/data/attachment/album/202103/28/093858qu0bh3w2sd3rh20s.jpg)
然而,不像纽约证券交易所这样的传统证券交易所一样,有一段固定的交易时间。对于加密货币而言,则是 7×24 小时交易,任何人都无法独自盯着市场。
然而,与纽约证券交易所这样有固定交易时间的传统证券交易所不同,加密货币是 7×24 小时交易的,这使得任何人都无法独自盯着市场。
在以前,我经常思考与加密货币交易相关的问题:
@ -24,21 +24,19 @@
- 为什么下单?
- 为什么不下单?
通常的解决手段是当在你做其他事情时,例如睡觉、与家人在一起或享受空闲时光,使用加密交易机器人代替你下单。虽然有很多商业解决方案可用,但是我选择开源的解决方案,因此我编写了加密交易机器人 [Pythonic][2]。 正如去年 [我写过的文章][3] 一样,“ Pythonic 是一种图形化编程工具,它让用户可以轻松使用现成的功能模块来创建Python应用程序。” 最初它是作为加密货币机器人使用,并具有可扩展的日志记录引擎以及经过精心测试的可重用部件,例如调度器和计时器。
通常的解决手段是使用加密交易机器人,当在你做其他事情时,例如睡觉、与家人在一起或享受空闲时光,代替你下单。虽然有很多商业解决方案可用,但是我选择开源的解决方案,因此我编写了加密交易机器人 [Pythonic][2]。 正如去年 [我写过的文章][3] 一样“Pythonic 是一种图形化编程工具,它让用户可以轻松使用现成的函数模块来创建 Python 应用程序。” 最初它是作为加密货币机器人使用,并具有可扩展的日志记录引擎以及经过精心测试的可重用部件,例如调度器和计时器。
### 开始
本教程将教你如何开始使用 Pythonic 进行自动交易。我选择 [币安][6]Binance<ruby>[币安][6]<rt>Binance</rt></ruby> 交易所的 [波场][4]Tron<ruby>[波场][4]<rt>Tron</rt></ruby>[比特币][3]Bitcoin<ruby>[比特币][3]<rt>Bitcoin</rt></ruby>
本教程将教你如何开始使用 Pythonic 进行自动交易。我选择 <ruby>[币安][6]<rt>Binance</rt></ruby> 交易所的 <ruby>[波场][4]<rt>Tron</rt></ruby><ruby>[比特币][3]<rt>Bitcoin</rt></ruby> 交易对为例。我之所以选择这个加密货币对,是因为它们彼此之间的波动性大,而不是出于个人喜好。
交易对为例。我之所以选择这些加密货币,是因为它们彼此之间的波动性大,而不是出于个人喜好。
机器人将根据 [指数移动平均][7] EMAs来做出决策。
机器人将根据 <ruby>[指数移动平均][7]<rt>exponential moving averages</rt></ruby> EMA来做出决策。
![TRX/BTC 1-hour candle chart][8]
TRX/BTC 1 小时 K 线图
*TRX/BTC 1 小时 K 线图*
EMA 指标通常是指加权移动平均线,可以对近期价格数据赋予更多权重。尽管移动平均线可能只是一个简单的指标,但我能熟练使用它
EMA 指标通常是一个加权的移动平均线,可以对近期价格数据赋予更多权重。尽管移动平均线可能只是一个简单的指标,但我对它很有经验
上图中的紫色线显示了 EMA-25 指标(这表示要考虑最近的 25 个值)。
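Pythonic 的技术分析元素会替你计算 EMA如果想自己验证这一指标也可以用 pandas 的 `ewm()` 方法得到同样的结果(下面的收盘价数据是虚构的示意):

```
import pandas as pd

# 虚构的收盘价数据,实际应来自币安 OHLC 查询返回的 DataFrame
df = pd.DataFrame({'close': [0.00001989, 0.00001997, 0.00002005, 0.00001992,
                             0.00002010, 0.00002021, 0.00002015, 0.00002030]})

# span=25 即“考虑最近的 25 个值”的指数移动平均
df['EMA-25'] = df['close'].ewm(span=25, adjust=False).mean()
print(df.tail())
```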
@ -48,20 +46,16 @@ EMA 指标通常是指加权移动平均线,可以对近期价格数据赋予
### 工具链
本教程将使用如下工具:
- 币安专业交易视图(已经有其他人做了数据可视化,所以不需要重复造轮子)
- Jupyter Notebook:用于数据科学任务
- Jupyter 笔记本:用于数据科学任务
- Pythonic作为整体框架
- PythonicDaemon :作为终端运行(仅适用于控制台和 Linux
### 数据挖掘
为了使加密货币交易机器人尽可能做出正确的决定,以可靠的方式获取资产的美国线([OHLC][9])数据是至关重要。你可以使用 Pythonic 的内置元素,还可以根据自己逻辑来对其进行扩展。
为了使加密货币交易机器人尽可能做出正确的决定,以可靠的方式获取资产的<ruby>美国线<rt>open-high-low-close chart</rt></ruby>[OHLC][9])数据是至关重要。你可以使用 Pythonic 的内置元素,还可以根据自己逻辑来对其进行扩展。
一般的工作流程:
@ -70,31 +64,28 @@ EMA 指标通常是指加权移动平均线,可以对近期价格数据赋予
3. 从文件中把 OHLC 数据加载到内存
4. 比较数据集并扩展更新数据集
这个工作流程可能显得有点小题大做,但是它能使程序更加健壮,即使遇到停机或断线,也能平稳运行。
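下面是一个脱离 Pythonic、按上述四步流程写成的最小示意并非 Pythonic 的原始实现;文件名沿用正文的 `TRXBTC_1h.bin``new_data` 代表最新一次查询返回的 DataFrame

```
import pathlib
import pandas as pd

def merge_ohlc(new_data, path=str(pathlib.Path.home() / 'TRXBTC_1h.bin')):
    """把新查询到的 OHLC 数据与磁盘上已有的数据合并并去重(示意)。"""
    try:
        df = pd.read_pickle(path)                    # 从文件中把 OHLC 数据加载到内存
        df = pd.concat([df, new_data])               # 扩展数据集
        df = df[~df.index.duplicated(keep='last')]   # 比较并去除重复行
    except FileNotFoundError:
        df = new_data                                # 文件不存在则新建数据集
    df.to_pickle(path)                               # 写回磁盘,断线重启后也能续上
    return df
```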
一开始,你需要 **币安 OHLC 查询**Binance OHLC Query<ruby>**币安 OHLC 查询**<rt>Binance OHLC Query</rt></ruby> 元素和一个 **基础操作**Basic Operation<ruby>**基础操作**<rt>Basic Operation</rt></ruby> 元素来执行你的代码。
一开始,你需要 <ruby>**币安 OHLC 查询**<rt>Binance OHLC Query</rt></ruby> 元素和一个 <ruby>**基础操作**<rt>Basic Operation</rt></ruby> 元素来执行你的代码。
![Data-mining workflow][10]
数据挖掘工作流程
*数据挖掘工作流程*
OHLC 查询设置为每隔一小时查询一次 **TRXBTC** 资产对(波场/比特币)。
![Configuration of the OHLC query element][11]
配置 OHLC 查询元素
*配置 OHLC 查询元素*
其中输出的元素是 [Pandas DataFrame][12]。你可以在 **基础操作** 元素中使用 **输入**input<ruby>**输入**<rt>input</rt></ruby> 变量来访问 DataFrame。其中将 Vim 设置为 **基础操作** 元素的默认代码编辑器。
其中输出的元素是 [Pandas DataFrame][12]。你可以在 **基础操作** 元素中使用 <ruby>**输入**<rt>input</rt></ruby> 变量来访问 DataFrame。其中将 Vim 设置为 **基础操作** 元素的默认代码编辑器。
![Basic Operation element set up to use Vim][13]
使用 Vim 编辑基础操作元素
*使用 Vim 编辑基础操作元素*
具体代码如下:
```
import pickle, pathlib, os
import pandas as pd
@ -121,38 +112,38 @@ if isinstance(input, pd.DataFrame):
output = df
```
首先,检查输入是否为 DataFrame 元素。然后在用户的家目录(**〜/ **)中查找名为 **TRXBTC_1h.bin** 的文件。如果存在,则将其打开,执行新代码段(**try** 部分中的代码),并删除重复项。如果文件不存在,则触发异常并执行 **except** 部分中的代码,创建一个新文件。
首先,检查输入是否为 DataFrame 元素。然后在用户的家目录(`~/`)中查找名为 `TRXBTC_1h.bin` 的文件。如果存在,则将其打开,执行新代码段(`try` 部分中的代码),并删除重复项。如果文件不存在,则触发异常并执行 `except` 部分中的代码,创建一个新文件。
只要启用了复选框 **日志输出**log output<ruby>**日志输出**<rt>log output</rt></ruby>,你就可以使用命令行工具 **tail** 查看日志记录:
只要启用了复选框 <ruby>**日志输出**<rt>log output</rt></ruby>,你就可以使用命令行工具 `tail` 查看日志记录:
```
`$ tail -f ~/Pythonic_2020/Feb/log_2020_02_19.txt`
$ tail -f ~/Pythonic_2020/Feb/log_2020_02_19.txt
```
出于开发目的,现在跳过与币安时间的同步和计划执行,这将在下面实现。
### 准备数据
下一步是在单独的 网格Grid<ruby>网格<rt>Grid</rt></ruby> 中处理评估逻辑。因此,你必须借助 **返回元素**Return element<ruby>**返回元素**<rt>Return element</rt></ruby> 将 DataFrame 从网格 1 传递到网格 2 的第一个元素。
下一步是在单独的 <ruby>网格<rt>Grid</rt></ruby> 中处理评估逻辑。因此,你必须借助<ruby>**返回元素**<rt>Return element</rt></ruby> 将 DataFrame 从网格 1 传递到网格 2 的第一个元素。
在网格 2 中,通过使 DataFrame 通过 **基础技术分析**Basic Technical Analysis<ruby>**基础技术分析**<rt>Basic Technical Analysis</rt></ruby> 元素,将 DataFrame 扩展包含 EMA 值的一列。
在网格 2 中,通过使 DataFrame 通过 <ruby>**基础技术分析**<rt>Basic Technical Analysis</rt></ruby> 元素,将 DataFrame 扩展包含 EMA 值的一列。
![Technical analysis workflow in Grid 2][14]
在网格 2 中技术分析工作流程
*在网格 2 中技术分析工作流程*
配置技术分析元素以计算 25 个值的 EMAs
配置技术分析元素以计算 25 个值的 EMA。
![Configuration of the technical analysis element][15]
配置技术分析元素
*配置技术分析元素*
当你运行整个程序并开启 **技术分析**Technical Analysis<ruby>**技术分析**<rt>Technical Analysis</rt></ruby> 元素的调试输出时,你将发现 EMA-25 列的值似乎都相同。
当你运行整个程序并开启 <ruby>**技术分析**<rt>Technical Analysis</rt></ruby> 元素的调试输出时,你将发现 EMA-25 列的值似乎都相同。
![Missing decimal places in output][16]
输出中精度不够
*输出中精度不够*
这是因为调试输出中的 EMA-25 值仅包含六位小数,即使输出保留了 8 个字节完整精度的浮点值。
@ -160,31 +151,31 @@ if isinstance(input, pd.DataFrame):
![Workflow in Grid 2][17]
网格 2 中的工作流程
*网格 2 中的工作流程*
使用 **基础操作** 元素,将 DataFrame 与添加的 EMA-25 列一起转储,以便可以将其加载到 Jupyter Notebook中;
使用 **基础操作** 元素,将 DataFrame 与添加的 EMA-25 列一起转储,以便可以将其加载到 Jupyter 笔记本中;
![Dump extended DataFrame to file][18]
将扩展后的 DataFrame 存储到文件中
*将扩展后的 DataFrame 存储到文件中*
### 评估策略
在 Juypter Notebook 中开发评估策略,让你可以更直接地访问代码。要加载 DataFrame你需要使用如下代码
在 Juypter 笔记本中开发评估策略,让你可以更直接地访问代码。要加载 DataFrame你需要使用如下代码
![Representation with all decimal places][19]
用全部小数位表示
*用全部小数位表示*
你可以使用 [**iloc**][20] 和列名来访问最新的 EMA-25 值,并且会保留所有小数位。
你可以使用 [iloc][20] 和列名来访问最新的 EMA-25 值,并且会保留所有小数位。
你已经知道如何来获得最新的数据。上面示例的最后一行仅显示该值。为了能将该值拷贝到不同的变量中,你必须使用如下图所示的 **.at** 方法方能成功。
你已经知道如何来获得最新的数据。上面示例的最后一行仅显示该值。为了能将该值拷贝到不同的变量中,你必须使用如下图所示的 `.at` 方法方能成功。
你也可以直接计算出你下一步所需的交易参数。
![Buy/sell decision][21]
买卖决策
*买卖决策*
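这两种取值方式的区别,可以用下面的小例子说明(其中的数据是虚构的):

```
import pandas as pd

df = pd.DataFrame({'close':  [0.00002010, 0.00002021],
                   'EMA-25': [0.00002001, 0.00002003]})

print(df['EMA-25'].iloc[-1])           # 用 iloc 查看最新的 EMA-25 值
ema = df.at[df.index[-1], 'EMA-25']    # 用 .at 把该值拷贝到变量中
print(ema)
```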
### 确定交易参数
@ -194,31 +185,31 @@ if isinstance(input, pd.DataFrame):
![Validation function][22]
回测功能
*回测功能*
在此示例中,**buy_factor** 和 **sell_factor** 是预先定义好的。因此,发散思维用直接计算出表现最佳的参数。
在此示例中,`buy_factor` 和 `sell_factor` 是预先定义好的。因此,不妨换个思路,直接暴力计算出表现最佳的参数。
![Nested for loops for determining the buy and sell factor][23]
嵌套的 _for_ 循环,用于确定购买和出售的参数
*嵌套的 for 循环,用于确定购买和出售的参数*
这要跑 81 个循环9×9在我的机器Core i7 267QM上花费了几分钟。
![System utilization while brute forcing][24]
在暴力运算时系统的利用率
*在暴力运算时系统的利用率*
在每个循环之后,它将 **buy_factor****sell_factor** 元组和生成的 **利润**profit<ruby>**利润**<rt>profit</rt></ruby> 元组追加到 **trading_factors** 列表中。按利润降序对列表进行排序。
在每个循环之后,它将 `buy_factor`、`sell_factor` 元组和生成的 `profit` 元组追加到 `trading_factors` 列表中。按利润降序对列表进行排序。
![Sort profit with related trading factors in descending order][25]
将利润与相关的交易参数按降序排序
*将利润与相关的交易参数按降序排序*
当你打印出列表时,你会看到 0.002 是最好的参数。
![Sorted list of trading factors and profit][26]
交易要素和收益的有序列表
*交易要素和收益的有序列表*
当我在 2020 年 3 月写下这篇文章时,价格的波动还不足以呈现出更理想的结果。我在 2 月份得到了更好的结果,但即使在那个时候,表现最好的交易参数也在 0.002 左右。
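这一步的暴力搜索大致相当于下面的示意代码(其中 `backtest()` 是一个假设的回测函数,用来代替文中的回测逻辑):

```
# backtest() 是假设的回测函数:输入买卖参数,返回回测得到的利润(这里用玩具实现代替)
def backtest(buy_factor, sell_factor):
    return -abs(buy_factor - 0.002) - abs(sell_factor - 0.002)

factors = [i / 1000 for i in range(1, 10)]      # 0.001 到 0.009共 9 个取值

trading_factors = []
for buy_factor in factors:
    for sell_factor in factors:                 # 9×9 = 81 个循环
        profit = backtest(buy_factor, sell_factor)
        trading_factors.append((buy_factor, sell_factor, profit))

# 按利润降序排序,表现最佳的参数排在最前
trading_factors.sort(key=lambda t: t[2], reverse=True)
print(trading_factors[0])
```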
@ -230,73 +221,73 @@ if isinstance(input, pd.DataFrame):
![Implemented evaluation logic][27]
实现评估策略
*实现评估策略*
如果输出 **1** 表示你应该购买,如果输出 **2** 则表示你应该卖出。 输出 **0** 表示现在无需操作。使用 **分支**Branch<ruby>**分支**<rt>Branch</rt></ruby> 元素来控制执行路径。
如果输出 `1` 表示你应该购买,如果输出 `2` 则表示你应该卖出。 输出 `0` 表示现在无需操作。使用 <ruby>**分支**<rt>Branch</rt></ruby> 元素来控制执行路径。
![Branch element: Grid 3 Position 2A][28]
Branch 元素:网格 32A 位置
*分支元素:网格 32A 位置*
因为 **0****-1** 的处理流程一样,所以你需要在最右边添加一个分支元素来判断你是否应该卖出。
因为 `0``-1` 的处理流程一样,所以你需要在最右边添加一个分支元素来判断你是否应该卖出。
![Branch element: Grid 3 Position 3B][29]
分支元素:网格 33B 位置
*分支元素:网格 33B 位置*
网格 3 应该现在如下图所示:
![Workflow on Grid 3][30]
网格 3 的工作流程
*网格 3 的工作流程*
### 下单
由于无需在一个周期中购买两次,因此必须在周期之间保留一个持久变量,以指示你是否已经购买。
你可以利用 **栈**Stack<ruby>**栈**<rt>Stack</rt></ruby> 元素来实现。顾名思义,栈元素表示可以用任何 Python 数据类型来放入的基于文件的栈。
你可以利用 <ruby>**栈**<rt>Stack</rt></ruby> 元素来实现。顾名思义,栈元素表示可以用任何 Python 数据类型来放入的基于文件的栈。
你需要定义栈仅包含一个布尔类型,该布尔类型决定是否购买了(**True**)或(**False**)。因此,你必须使用 **False** 来初始化栈。例如,你可以在网格 4 中简单地通过将 **False** 传递给栈来进行设置。![Forward a False-variable to the subsequent Stack element][31]
你需要定义栈仅包含一个布尔类型,该布尔类型决定是否购买了(`True`)或(`False`)。因此,你必须使用 `False` 来初始化栈。例如,你可以在网格 4 中简单地通过将 `False` 传递给栈来进行设置。
**False** 变量传输到后续的栈元素中
![Forward a False-variable to the subsequent Stack element][31]
*将 False 变量传输到后续的栈元素中*
在分支树后的栈实例可以进行如下配置:
![Configuration of the Stack element][32]
设置栈元素
*设置栈元素*
在栈元素设置中,将 **Do this with input** 设置成 **Nothing**。否则,布尔值将被 1 或 0 覆盖。
在栈元素设置中,将 <ruby>对输入的操作<rt>Do this with input</rt></ruby> 设置成 <ruby><rt>Nothing</rt></ruby>。否则,布尔值将被 `1``0` 覆盖。
该设置确保仅将一个值保存于栈中(**True** 或 **False**),并且只能读取一个值(为了清楚起见)。
该设置确保仅将一个值保存于栈中(`True` 或 `False`),并且只能读取一个值(为了清楚起见)。
在栈元素之后,你需要另外一个 **分支** 元素来判断栈的值,然后再放置 **币安订单**Binance Order<ruby>**币安订单**<rt>Binance Order</rt></ruby> 元素。
在栈元素之后,你需要另外一个 **分支** 元素来判断栈的值,然后再放置 <ruby>币安订单<rt>Binance Order</rt></ruby> 元素。
![Evaluate the variable from the stack][33]
判断栈中的变量
*判断栈中的变量*
将币安订单元素添加到分支元素的 **True** 路径。网格 3 上的工作流现在应如下所示:
将币安订单元素添加到分支元素的 `True` 路径。网格 3 上的工作流现在应如下所示:
![Workflow on Grid 3][34]
网格 3 的工作流程
*网格 3 的工作流程*
币安订单元素应如下配置:
![Configuration of the Binance Order element][35]
编辑币安订单元素
*编辑币安订单元素*
你可以在币安网站上的帐户设置中生成 API 和密钥。
![Creating an API key in Binance][36]
在币安账户设置中创建一个 API key
*在币安账户设置中创建一个 API 密钥*
在本文中每笔交易都是作为市价交易执行的交易量为10,000 TRX2020 年 3 月约为 150 美元)(出于教学的目的,我通过使用市价下单来演示整个过程。因此,我建议至少使用限价下单。)
在本文中,每笔交易都是作为市价交易执行的,交易量为 10,000 TRX2020 年 3 月约值 150 美元)。(出于教学目的,我使用市价下单来演示整个过程;因此,我建议你至少使用限价下单。)
如果未正确执行下单(例如,网络问题、资金不足或货币对不正确),则不会触发后续元素。因此,你可以假定如果触发了后续元素,则表示该订单已下达。
@ -304,7 +295,7 @@ Branch 元素:网格 32A 位置
![Output of a successfully placed sell order][37]
成功卖单的输出
*成功卖单的输出*
该行为使后续步骤更加简单:你可以始终假设只要成功输出,就表示订单成功。因此,你可以添加一个 **基础操作** 元素,该元素将简单地输出 `True`,并将此值放入栈中以表示是否下单。
@ -312,21 +303,21 @@ Branch 元素:网格 32A 位置
![Logging output of Binance Order element][38]
币安订单元素中的输出日志信息
*币安订单元素中的输出日志信息*
### 调度和同步
对于日程调度和同步,请在网格 1 中将整个工作流程置于 **币安调度器**Binance Scheduler<ruby>**币安调度器**<rt>Binance Scheduler</rt></ruby> 元素的前面。
对于日程调度和同步,请在网格 1 中将整个工作流程置于 <ruby>币安调度器<rt>Binance Scheduler</rt></ruby> 元素的前面。
![Binance Scheduler at Grid 1, Position 1A][39]
在网格 11A 位置的币安调度器
*在网格 11A 位置的币安调度器*
由于币安调度器元素只执行一次,因此请在网格 1 的末尾拆分执行路径,并通过将输出传递回币安调度器来强制让其重新同步。
![Grid 1: Split execution path][40]
网格 1拆分执行路径
*网格 1拆分执行路径*
5A 元素指向 网格 2 的 1A 元素,并且 5B 元素指向网格 1 的 1A 元素(币安调度器)。
@ -336,40 +327,34 @@ Branch 元素:网格 32A 位置
![PythonicDaemon console interface][41]
PythonicDaemon 控制台
PythonicDaemon 是基础程序的一部分。要使用它请保存完整的工作流程将其传输到远程运行的系统中例如通过安全拷贝协议Secure Copy<ruby>安全拷贝协议<rt>Secure Copy</rt></ruby> [SCP]),然后把工作流程文件作为参数来启动 PythonicDaemon
*PythonicDaemon 控制台*
PythonicDaemon 是基础程序的一部分。要使用它,请保存完整的工作流程,将其传输到远程运行的系统中(例如,通过<ruby>安全拷贝协议<rt>Secure Copy</rt></ruby> SCP然后把工作流程文件作为参数来启动 PythonicDaemon
```
`$ PythonicDaemon trading_bot_one`
$ PythonicDaemon trading_bot_one
```
为了能在系统启动时自启 PythonicDaemon可以将一个条目添加到 crontab 中:
```
`# crontab -e`
# crontab -e
```
![Crontab on Ubuntu Server][42]
在 Ubuntu 服务器上的 Crontab
*在 Ubuntu 服务器上的 Crontab*
### 下一步
正如我在一开始时所说的,本教程只是自动交易的入门。对交易机器人进行编程,大约需要 10% 的编程和 90% 的测试。当涉及到让你的机器人用真金白银交易时,你肯定会对编写的代码再三思考。因此,我建议你编码时要尽可能简单和易于理解。
如果你想自己继续开发交易机器人,接下来所需要做的事:
- 收益自动计算(希望你有正收益!)
- 计算你想买的价格
- 比较你的预订单(例如,订单是否完全成交?)
你可以从 [GitHub][2] 上获取完整代码。
--------------------------------------------------------------------------------
@ -379,7 +364,7 @@ via: https://opensource.com/article/20/4/python-crypto-trading-bot
作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[wyxplus](https://github.com/wyxplus)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,244 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13240-1.html)
[#]: subject: (Convert your Windows install into a VM on Linux)
[#]: via: (https://opensource.com/article/21/1/virtualbox-windows-linux)
[#]: author: (David Both https://opensource.com/users/dboth)
在 Linux 上将你的 Windows 系统转换为虚拟机
======
> 下面是我如何配置 VirtualBox 虚拟机以在我的 Linux 工作站上使用物理的 Windows 操作系统。
![](https://img.linux.net.cn/data/attachment/album/202103/27/105053kyd66r1cpr1s2vz2.jpg)
我经常使用 VirtualBox 创建虚拟机,来测试新版本的 Fedora、新的应用程序和很多管理工具比如 Ansible。我甚至使用 VirtualBox 测试过创建 Windows 访客主机。
我从来没有在我的任何一台个人电脑上使用 Windows 作为我的主要操作系统,甚至也没在虚拟机中执行过一些用 Linux 无法完成的冷门任务。不过,我确实为一个需要使用 Windows 下的财务程序的组织做志愿者。这个程序运行在办公室经理的电脑上,使用的是预装的 Windows 10 Pro。
这个财务应用程序并不特别,[一个更好的 Linux 程序][2] 可以很容易地取代它,但我发现许多会计和财务主管极不愿意做出改变,所以我还没能说服我们组织中的人迁移。
这一系列的情况,加上最近的安全恐慌,使得我非常希望将运行 Windows 的主机转换为 Fedora并在该主机上的虚拟机中运行 Windows 和会计程序。
重要的是要明白,我出于多种原因极度不喜欢 Windows。主要原因是我不愿意为了在新的虚拟机上安装它而再花钱购买一个 Windows 许可证Windows 10 Pro 大约需要 200 美元。此外Windows 10 在新系统上设置时或安装后需要提供相当多的信息,以至于如果微软的数据库被攻破,破解者就可以窃取一个人的身份。任何人都不应该为了注册软件而需要提供自己的姓名、电话号码和出生日期。
### 开始
这台实体电脑已经在主板上唯一可用的 m.2 插槽中安装了一个 240GB 的 NVMe m.2 的 SSD 存储设备。我决定在主机上安装一个新的 SATA SSD并将现有的带有 Windows 的 SSD 作为 Windows 虚拟机的存储设备。金士顿在其网站上对各种 SSD 设备、外形尺寸和接口做了很好的概述。
这种方法意味着我不需要重新安装 Windows 或任何现有的应用软件。这也意味着,在这台电脑上工作的办公室经理将使用 Linux 进行所有正常的活动,如电子邮件、访问 Web、使用 LibreOffice 创建文档和电子表格。这种方法增加了主机的安全性。唯一会使用 Windows 虚拟机的时间是运行会计程序。
### 先备份
在做其他事情之前,我创建了整个 NVMe 存储设备的备份 ISO 镜像。我在 500GB 外置 USB 存储盘上创建了一个分区,在其上创建了一个 ext4 文件系统,然后将该分区挂载到 `/mnt`。我使用 `dd` 命令来创建镜像。
我在主机中安装了新的 500GB SATA SSD并从<ruby>临场<rt>live</rt></ruby> USB 上安装了 Fedora 32 Xfce <ruby>偏好版<rt>spin</rt></ruby>。在安装后的初次重启时,在 GRUB2 引导菜单上Linux 和 Windows 操作系统都是可用的。此时,主机可以在 Linux 和 Windows 之间进行双启动。
### 在网上寻找帮助
现在我需要一些关于创建一个使用物理硬盘或 SSD 作为其存储设备的虚拟机的信息。我很快就在 VirtualBox 文档和互联网上发现了很多关于如何做到这一点的信息。虽然 VirtualBox 文档初步帮助了我,但它并不完整,遗漏了一些关键信息。我在互联网上找到的大多数其他信息也很不完整。
在我们的记者 Joshua Holm 的帮助下,我得以突破这些残缺的信息,并以一个可重复的流程来完成这项工作。
### 让它发挥作用
这个过程其实相当简单虽然需要一个玄妙的技巧才能实现。当我准备好这一步的时候Windows 和 Linux 操作系统已经到位了。
首先,我在 Linux 主机上安装了最新版本的 VirtualBox。VirtualBox 可以从许多发行版的软件仓库中安装,也可以直接从 Oracle VirtualBox 仓库中安装,或者从 VirtualBox 网站上下载所需的包文件并在本地安装。我选择下载 AMD64 版本,它实际上是一个安装程序而不是一个软件包。我使用这个版本来规避一个与这个特定项目无关的问题。
安装过程总是在 `/etc/group` 中创建一个 `vboxusers` 组。我把打算运行这个虚拟机的用户添加到 `/etc/group` 中的 `vboxusers``disk` 组。将相同的用户添加到 `disk` 组是很重要的,因为 VirtualBox 是以启动它的用户身份运行的,而且还需要直接访问 `/dev/sdx` 特殊设备文件才能在这种情况下工作。将用户添加到 `disk` 组可以提供这种级别的访问权限,否则他们就不会有这种权限。
然后,我创建了一个目录来存储虚拟机,并赋予它 `root.vboxusers` 的所有权和 `775` 的权限。我使用 `/vms` 作为该目录但可以是任何你想要的目录。默认情况下VirtualBox 会在创建虚拟机的用户的子目录中创建新的虚拟机,这将使得多个用户无法在不造成巨大安全漏洞的情况下共享对虚拟机的访问。将虚拟机目录放置在一个可访问的位置,就可以共享虚拟机了。
我以非 root 用户的身份启动 VirtualBox 管理器。然后,我使用 VirtualBox 的“<ruby>偏好<rt>Preferences</rt></ruby> => <ruby>一般<rt>General</rt></ruby>”菜单将“<ruby>默认机器文件夹<rt>Default Machine Folder</rt></ruby>”设置为 `/vms` 目录。
我创建的虚拟机没有虚拟磁盘。“<ruby>类型<rt>Type</rt></ruby>” 应该是 `Windows`,“<ruby>版本<rt>Version</rt></ruby>” 应该设置为 `Windows 10 64-bit`。为虚拟机设置一个合理的内存量只要虚拟机处于关闭状态以后都可以更改。在安装的“<ruby>硬盘<rt>Hard disk</rt></ruby>”页面,我选择了 “<ruby>不要添加虚拟硬盘<rt>Do not add a virtual hard disk</rt></ruby>”,点击“<ruby>创建<rt>Create</rt></ruby>”。新的虚拟机出现在 VirtualBox 管理器窗口中。这个过程也创建了 `/vms/Test1` 目录。
我使用“<ruby>高级<rt>Advanced</rt></ruby>”菜单在一个页面上设置了所有的配置,如图 1 所示。“<ruby>向导模式<rt>Guided Mode</rt></ruby>”可以获得相同的信息,但需要更多的点击,以通过一个窗口来进行每个配置项目。它确实提供了更多的帮助内容,但我并不需要。
![VirtualBox 对话框:创建新的虚拟机,但不添加硬盘][3]
*图 1创建一个新的虚拟机但不要添加硬盘。*
然后,我需要知道 Linux 给原始 Windows 硬盘分配了哪个设备。在终端会话中以 root 身份使用 `lshw` 命令来发现 Windows 磁盘的设备分配情况。在本例中,代表整个存储设备的设备是 `/dev/sdb`
```
# lshw -short -class disk,volume
H/W path           Device      Class          Description
=========================================================
/0/100/17/0        /dev/sda    disk           500GB CT500MX500SSD1
/0/100/17/0/1                  volume         2047MiB Windows FAT volume
/0/100/17/0/2      /dev/sda2   volume         4GiB EXT4 volume
/0/100/17/0/3      /dev/sda3   volume         459GiB LVM Physical Volume
/0/100/17/1        /dev/cdrom  disk           DVD+-RW DU-8A5LH
/0/100/17/0.0.0    /dev/sdb    disk           256GB TOSHIBA KSG60ZMV
/0/100/17/0.0.0/1  /dev/sdb1   volume         649MiB Windows FAT volume
/0/100/17/0.0.0/2  /dev/sdb2   volume         127MiB reserved partition
/0/100/17/0.0.0/3  /dev/sdb3   volume         236GiB Windows NTFS volume
/0/100/17/0.0.0/4  /dev/sdb4   volume         989MiB Windows NTFS volume
[root@office1 etc]#
```
VirtualBox 不需要把虚拟存储设备放在 `/vms/Test1` 目录中,而是需要有一种方法来识别要从其启动的物理硬盘。这种识别是通过创建一个 `*.vmdk` 文件来实现的,该文件指向将作为虚拟机存储设备的原始物理磁盘。作为非 root 用户,我创建了一个 vmdk 文件,指向整个 Windows 设备 `/dev/sdb`
```
$ VBoxManage internalcommands createrawvmdk -filename /vms/Test1/Test1.vmdk -rawdisk /dev/sdb
RAW host disk access VMDK file /vms/Test1/Test1.vmdk created successfully.
```
然后,我使用 VirtualBox 管理器 “<ruby>文件<rt>File</rt></ruby> => <ruby>虚拟介质管理器<rt>Virtual Media Manager</rt></ruby>” 对话框将 vmdk 磁盘添加到可用硬盘中。我点击了“<ruby>添加<rt>Add</rt></ruby>”,文件管理对话框中显示了默认的 `/vms` 位置。我选择了 `Test1` 目录,然后选择了 `Test1.vmdk` 文件。然后我点击“<ruby>打开<rt>Open</rt></ruby>”,`Test1.vmdk` 文件就显示在可用硬盘列表中。我选择了它,然后点击“<ruby>关闭<rt>Close</rt></ruby>”。
下一步就是将这个 vmdk 磁盘添加到我们的虚拟机的存储设备中。在 “Test1 VM” 的设置菜单中,我选择了 “<ruby>存储<rt>Storage</rt></ruby>”,并点击了添加硬盘的图标。这时打开了一个对话框,在一个名为“<ruby>未连接<rt>Not attached</rt></ruby>”的列表中显示了 `Test1.vmdk` 虚拟磁盘文件。我选择了这个文件,并点击了“<ruby>选择<rt>Choose</rt></ruby>”按钮。这个设备现在显示在连接到 “Test1 VM” 的存储设备列表中。这个虚拟机上唯一的其他存储设备是一个空的 CD/DVD-ROM 驱动器。
我点击了“<ruby>确定<rt>OK</rt></ruby>”,完成了将此设备添加到虚拟机中。
在新的虚拟机工作之前,还有一个项目需要配置。使用 VirtualBox 管理器设置对话框中的 “Test1 VM”我导航到 “<ruby>系统<rt>System</rt></ruby> => <ruby>主板<rt>Motherboard</rt></ruby>”页面,并在 “<ruby>启用 EFI<rt>Enable EFI</rt></ruby>”的方框中打上勾。如果你不这样做当你试图启动这个虚拟机时VirtualBox 会产生一个错误,说明它无法找到一个可启动的介质。
现在,虚拟机从原始的 Windows 10 硬盘驱动器启动。然而,我无法登录,因为我在这个系统上没有一个常规账户,而且我也无法获得 Windows 管理员账户的密码。
### 解锁驱动器
不,本节并不是要破解硬盘的加密,而是要绕过众多 Windows 管理员账户之一的密码,而这些账户是不属于组织中某个人的。
尽管我可以启动 Windows 虚拟机,但我无法登录,因为我在该主机上没有账户,而向人们索要密码是一种可怕的安全漏洞。尽管如此,我还是需要登录这个虚拟机来安装 “VirtualBox Guest Additions”它可以提供鼠标指针的无缝捕捉和释放允许我将虚拟机调整到大于 1024x768 的大小,并在未来进行正常的维护。
这是一个完美的用例Linux 的功能就是更改用户密码。尽管我是访问之前的管理员的账户来启动,但在这种情况下,他不再支持这个系统,我也无法辨别他的密码或他用来生成密码的模式。我就直接清除了上一个系统管理员的密码。
有一个非常不错的开源软件工具,专门用于这个任务。在 Linux 主机上,我安装了 `chntpw`,它的意思大概是:“更改 NT 的密码”。
```
# dnf -y install chntpw
```
我关闭了虚拟机的电源,然后将 `/dev/sdb3` 分区挂载到 `/mnt` 上。我确定 `/dev/sdb3` 是正确的分区,因为它是我在之前执行 `lshw` 命令的输出中看到的第一个大的 NTFS 分区。一定不要在虚拟机运行时挂载该分区,那样会导致虚拟机存储设备上的数据严重损坏。请注意,在其他主机上分区可能有所不同。
导航到 `/mnt/Windows/System32/config` 目录。如果当前工作目录PWD不在这里`chntpw` 实用程序就无法工作。请启动该程序。
```
# chntpw -i SAM
chntpw version 1.00 140201, (c) Petter N Hagen
Hive <SAM> name (from header): <\SystemRoot\System32\Config\SAM>
ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c <lh>
File size 131072 [20000] bytes, containing 11 pages (+ 1 headerpage)
Used for data: 367/44720 blocks/bytes, unused: 14/24560 blocks/bytes.
<>========<> chntpw Main Interactive Menu <>========<>
Loaded hives: <SAM>
1 - Edit user data and passwords
2 - List groups
- - -
9 - Registry editor, now with full write support!
q - Quit (you will be asked if there is something to save)
What to do? [1] ->
```
`chntpw` 命令使用 TUI文本用户界面它提供了一套菜单选项。当选择其中一个主要菜单项时通常会显示一个次要菜单。按照明确的菜单名称我首先选择了菜单项 `1`
```
What to do? [1] -> 1
===== chntpw Edit User Info & Passwords ====
| RID -|---------- Username ------------| Admin? |- Lock? --|
| 01f4 | Administrator | ADMIN | dis/lock |
| 03eb | john | ADMIN | dis/lock |
| 01f7 | DefaultAccount | | dis/lock |
| 01f5 | Guest | | dis/lock |
| 01f8 | WDAGUtilityAccount | | dis/lock |
Please enter user number (RID) or 0 to exit: [3e9]
```
接下来,我选择了我们的管理账户 `john`,在提示下输入 RID。这将显示用户的信息并提供额外的菜单项来管理账户。
```
Please enter user number (RID) or 0 to exit: [3e9] 03eb
================= USER EDIT ====================
RID : 1003 [03eb]
Username: john
fullname:
comment :
homedir :
00000221 = Users (which has 4 members)
00000220 = Administrators (which has 5 members)
Account bits: 0x0214 =
[ ] Disabled | [ ] Homedir req. | [ ] Passwd not req. |
[ ] Temp. duplicate | [X] Normal account | [ ] NMS account |
[ ] Domain trust ac | [ ] Wks trust act. | [ ] Srv trust act |
[X] Pwd don't expir | [ ] Auto lockout | [ ] (unknown 0x08) |
[ ] (unknown 0x10) | [ ] (unknown 0x20) | [ ] (unknown 0x40) |
Failed login count: 0, while max tries is: 0
Total login count: 47
- - - - User Edit Menu:
1 - Clear (blank) user password
2 - Unlock and enable user account [probably locked now]
3 - Promote user (make user an administrator)
4 - Add user to a group
5 - Remove user from a group
q - Quit editing user, back to user select
Select: [q] > 2
```
这时,我选择了菜单项 `2`,“<ruby>解锁并启用用户账户<rt>Unlock and enable user account</rt></ruby>”,这样就可以删除密码,使我可以不用密码登录。顺便说一下 —— 这就是自动登录。然后我退出了该程序。在继续之前,一定要先卸载 `/mnt`
我知道,我知道,但为什么不呢! 我已经绕过了这个硬盘和主机的安全问题,所以一点也不重要。这时,我确实登录了旧的管理账户,并为自己创建了一个新的账户,并设置了安全密码。然后,我以自己的身份登录,并删除了旧的管理账户,这样别人就无法使用了。
网上也有 Windows Administrator 账号的使用说明(上面列表中的 `01f4`)。如果它不是作为组织管理账户,我可以删除或更改该账户的密码。还要注意的是,这个过程也可以从目标主机上运行临场 USB 来执行。
### 重新激活 Windows
因此,我现在让 Windows SSD 作为虚拟机在我的 Fedora 主机上运行了。然而令人沮丧的是在运行了几个小时后Windows 显示了一条警告信息,表明我需要“激活 Windows”。
在看了许许多多的死胡同网页之后,我终于放弃了使用现有激活码重新激活的尝试,因为它似乎已经以某种方式被破坏了。最后,当我试图进入其中一个在线虚拟支持聊天会话时,虚拟的“获取帮助”应用程序显示我的 Windows 10 Pro 实例已经被激活。这怎么可能呢?它一直希望我激活它,然而当我尝试时,它说它已经被激活了。
### 或者不
当我在三天内花了好几个小时做研究和实验时,我决定回到原来的 SSD 启动到 Windows 中,以后再来处理这个问题。但后来 Windows —— 即使从原存储设备启动,也要求重新激活。
在微软支持网站上搜索也无济于事。在不得不与之前一样的自动支持大费周章之后,我拨打了提供的电话号码,却被自动响应系统告知,所有对 Windows 10 Pro 的支持都只能通过互联网提供。到现在,我已经晚了将近一天才让电脑运行起来并安装回办公室。
### 回到未来
我终于吸了一口气,购买了一份 Windows 10 Home大约 120 美元,并创建了一个带有虚拟存储设备的虚拟机,将其安装在上面。
我将大量的文档和电子表格文件复制到办公室经理的主目录中。我重新安装了一个我们需要的 Windows 程序,并与办公室经理验证了它可以工作,数据都在那里。
### 总结
因此,我的目标达到了,实际上晚了一天,花了 120 美元,但使用了一种更标准的方法。我仍在对权限进行一些调整,并恢复 Thunderbird 通讯录;我有一些 CSV 备份,但 `*.mab` 文件在 Windows 驱动器上包含的信息很少。我甚至用 Linux 的 `find` 命令在原始存储设备上定位了所有这类文件。
我走了很多弯路,每次都要自己重新开始。我遇到了一些与这个项目没有直接关系的问题,但却影响了我的工作。这些问题包括一些有趣的事情,比如把 Windows 分区挂载到我的 Linux 机器的 `/mnt` 上,得到的信息是该分区已经被 Windows 不正确地关闭(是的,在我的 Linux 主机上),并且它已经修复了不一致的地方。即使是 Windows 通过其所谓的“恢复”模式多次重启后也做不到这一点。
也许你从 `chntpw` 工具的输出数据中发现了一些线索。出于安全考虑,我删掉了主机上显示的其他一些用户账号,但我从这些信息中看到,所有的用户都是管理员。不用说,我也改了。我仍然对我遇到的糟糕的管理方式感到惊讶,但我想我不应该这样。
最后我被迫购买了一个许可证但这个许可证至少比原来的要便宜一些。我知道的一点是一旦我找到了所有必要的信息Linux 这一块就能完美地工作。问题是处理 Windows 激活的问题。你们中的一些人可能已经成功地让 Windows 重新激活了。如果是这样,我还是想知道你们是怎么做到的,所以请把你们的经验添加到评论中。
这是我不喜欢 Windows只在自己的系统上使用 Linux 的又一个原因。这也是我将组织中所有的计算机都转换为 Linux 的原因之一。只是需要时间和说服力。我们只剩下这一个会计程序了,我需要和财务主管一起找到一个适合她的程序。我明白这一点 —— 我喜欢自己的工具,我需要它们以一种最适合我的方式工作。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/virtualbox-windows-linux
作者:[David Both][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen)
[2]: https://opensource.com/article/20/7/godbledger
[3]: https://opensource.com/sites/default/files/virtualbox.png

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: (ShuyRoy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13229-1.html)
[#]: subject: (Get started with distributed tracing using Grafana Tempo)
[#]: via: (https://opensource.com/article/21/2/tempo-distributed-tracing)
[#]: author: (Annanay Agarwal https://opensource.com/users/annanayagarwal)
使用 Grafana Tempo 进行分布式跟踪
======
> Grafana Tempo 是一个新的开源、大容量分布式跟踪后端。
![](https://img.linux.net.cn/data/attachment/album/202103/23/221354lc1eiill7lln4lli.jpg)
Grafana 的 [Tempo][2] 是出自 Grafana 实验室的一个简单易用、大规模的、分布式的跟踪后端。Tempo 集成了 [Grafana][3]、[Prometheus][4] 以及 [Loki][5],并且它只需要对象存储即可运行,因此成本低廉,操作简单。
我从一开始就参与了这个开源项目,所以我将介绍一些关于 Tempo 的基础知识,并说明为什么云原生社区会注意到它。
### 分布式跟踪
想要收集对应用程序请求的遥测数据是很常见的。但是在如今的服务化世界中,单个应用通常被分割为多个微服务,可能运行在几个不同的节点上。
分布式跟踪是一种获得关于应用的性能细粒度信息的方式该应用程序可能由离散的服务组成。当请求到达一个应用时它提供了该请求的生命周期的统一视图。Tempo 的分布式跟踪可以用于单体应用或微服务应用,它提供 [请求范围的信息][6],使其成为可观察性的第三个支柱(另外两个是度量和日志)。
接下来是一个分布式跟踪系统生成应用程序甘特图的示例。它使用 Jaeger [HotROD][7] 的演示应用生成跟踪,并把它们存到 Grafana 云托管的 Tempo 上。这个图展示了按照服务和功能划分的请求处理时间。
![Gantt chart from Grafana Tempo][8]
### 减少索引的大小
在丰富且定义良好的数据模型中,跟踪包含大量信息。通常,跟踪后端有两种交互:使用元数据选择器(如服务名或者持续时间)筛选跟踪,以及筛选后的可视化跟踪。
为了加强搜索,大多数的开源分布式跟踪框架会对跟踪中的许多字段进行索引,包括服务名称、操作名称、标记和持续时间。这会导致索引很大,并迫使你使用 Elasticsearch 或者 [Cassandra][10] 这样的数据库。但是,这些很难管理,而且大规模运营成本很高,所以我在 Grafana 实验室的团队开始提出一个更好的解决方案。
在 Grafana 中,我们的待命调试工作流从使用指标报表开始(我们使用 [Cortex][11] 来存储我们应用中的指标,它是一个云原生基金会孵化的项目,用于扩展 Prometheus深入研究这个问题筛选有问题服务的日志我们将日志存储在 Loki 中,它就像 Prometheus 一样,只不过 Loki 是存日志的),然后查看跟踪给定的请求。我们意识到,我们过滤时所需的所有索引信息都可以在 Cortex 和 Loki 中找到。但是,我们需要一个强大的集成,以通过这些工具实现跟踪的可发现性,并需要一个很赞的存储,以根据跟踪 ID 进行键值查找。
这就是 [Grafana Tempo][12] 项目的开始。通过专注于根据给定的跟踪 ID 来检索跟踪,我们将 Tempo 设计为最小依赖、大容量、低成本的分布式跟踪后端。
### 操作简单,性价比高
Tempo 使用对象存储作为后端,这是它唯一的依赖。它既可以作为单一的二进制文件运行,也可以以微服务模式运行(请参考仓库中的 [例子][13],了解如何轻松上手)。使用对象存储还意味着你可以存储大量的应用程序跟踪,而无需任何采样。这可以确保你永远不会丢掉那百万分之一的出错或具有较高延迟的请求的跟踪。
### 与开源工具的强大集成
[Grafana 7.3 包括了 Tempo 数据源][14],这意味着你可以在 Grafana UI 中可视化来自 Tempo 的跟踪。而且,[Loki 2.0 的新查询特性][15] 使得在 Tempo 中发现跟踪更加简单。为了与 Prometheus 集成,该团队正在添加对<ruby>范例<rt>exemplar</rt></ruby>的支持,范例是可以添加到时间序列数据中的高基数元数据信息。度量存储后端不会对它们建立索引,但是你可以在 Grafana UI 中检索和显示它们。尽管范例可以存储各种元数据,但是在这个用例中,存储的是跟踪 ID以便与 Tempo 紧密集成。
这个例子展示了使用带有请求延迟直方图的范例,其中每个范例数据点都链接到 Tempo 中的一个跟踪。
![Using exemplars in Tempo][16]
### 元数据一致性
作为容器化应用运行的应用程序,其发出的遥测数据通常带有一些相关的元数据,比如集群 ID、命名空间、<ruby>吊舱<rt>pod</rt></ruby> IP 等。这些元数据用于按需查询信息已经很好,但如果能把其中包含的信息用在更有生产力的地方,那就更好了。
例如,你可以使用 [Grafana 云代理将跟踪信息导入 Tempo 中][17],该代理利用 Prometheus 服务发现机制轮询 Kubernetes API 以获取元数据信息,并将这些标记添加到应用程序发出的<ruby>跨度<rt>span</rt></ruby>数据中。由于这些元数据在 Loki 中也建立了索引,因此通过把元数据转换为 Loki 标签选择器,可以很容易地从跟踪跳转到查看给定服务的日志。
下面是一个一致性元数据的示例,它可用于在 Tempo 的跟踪中查看给定跨度的日志:
![][18]
### 云原生
Grafana Tempo 可以作为容器化应用运行,你可以在 Kubernetes、Mesos 等编排引擎上运行它。根据获取/查询路径上的工作负载各种服务可以水平伸缩。你还可以使用云原生的对象存储例如谷歌云存储、Amazon S3 或者 Azure Blob 存储。更多的信息,请阅读 Tempo 文档中的 [架构部分][19]。
### 试一试 Tempo
如果你也觉得这很有用,可以 [克隆 Tempo 仓库][20] 试一试。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/tempo-distributed-tracing
作者:[Annanay Agarwal][a]
选题:[lujun9972][b]
译者:[RiaXu](https://github.com/ShuyRoy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/annanayagarwal
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
[2]: https://grafana.com/oss/tempo/
[3]: http://grafana.com/oss/grafana
[4]: https://prometheus.io/
[5]: https://grafana.com/oss/loki/
[6]: https://peter.bourgon.org/blog/2017/02/21/metrics-tracing-and-logging.html
[7]: https://github.com/jaegertracing/jaeger/tree/master/examples/hotrod
[8]: https://opensource.com/sites/default/files/uploads/tempo_gantt.png (Gantt chart from Grafana Tempo)
[9]: https://creativecommons.org/licenses/by-sa/4.0/
[10]: https://opensource.com/article/19/8/how-set-apache-cassandra-cluster
[11]: https://cortexmetrics.io/
[12]: http://github.com/grafana/tempo
[13]: https://grafana.com/docs/tempo/latest/getting-started/example-demo-app/
[14]: https://grafana.com/blog/2020/10/29/grafana-7.3-released-support-for-the-grafana-tempo-tracing-system-new-color-palettes-live-updates-for-dashboard-viewers-and-more/
[15]: https://grafana.com/blog/2020/11/09/trace-discovery-in-grafana-tempo-using-prometheus-exemplars-loki-2.0-queries-and-more/
[16]: https://opensource.com/sites/default/files/uploads/tempo_exemplar.png (Using exemplars in Tempo)
[17]: https://grafana.com/blog/2020/11/17/tracing-with-the-grafana-cloud-agent-and-grafana-tempo/
[18]: https://lh5.googleusercontent.com/vNqk-ygBOLjKJnCbTbf2P5iyU5Wjv2joR7W-oD7myaP73Mx0KArBI2CTrEDVi04GQHXAXecTUXdkMqKRq8icnXFJ7yWUEpaswB1AOU4wfUuADpRV8pttVtXvTpVVv8_OfnDINgfN
[19]: https://grafana.com/docs/tempo/latest/architecture/architecture/
[20]: https://github.com/grafana/tempo

View File

@ -0,0 +1,150 @@
[#]: subject: (Learn Python dictionary values with Jupyter)
[#]: via: (https://opensource.com/article/21/3/dictionary-values-python)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
[#]: collector: (lujun9972)
[#]: translator: (DCOLIVERSUN)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13236-1.html)
用 Jupyter 学习 Python 字典
======
> 字典数据结构可以帮助你快速访问信息。
![](https://img.linux.net.cn/data/attachment/album/202103/26/094720i58u5qxx3l4qsssx.jpg)
字典是 Python 编程语言使用的数据结构。一个 Python 字典由多个键值对组成;每个键值对将键映射到其关联的值上。
例如,假设你是一名老师,想把学生姓名与成绩对应起来。你可以使用 Python 字典,将学生姓名映射到与其关联的成绩上。此时,键值对中的键是姓名,值是对应的成绩。
如果你想知道某个学生的考试成绩,你可以从字典中访问。这种快捷查询方式可以为你节省解析整个列表找到学生成绩的时间。
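用一个小例子对比这两种查询方式:

```
grades_list = [("Kelsey", 87), ("Finley", 92)]
grades_dict = {"Kelsey": 87, "Finley": 92}

# 列表需要逐项解析才能找到某个学生的成绩
score = next(s for name, s in grades_list if name == "Finley")

# 字典通过键直接访问,省去遍历
score = grades_dict["Finley"]
print(score)
```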
本文介绍了如何通过键访问对应的字典值。学习前,请确保你已经安装了 [Anaconda 包管理器][2]和 [Jupyter 笔记本][3]。
### 1、在 Jupyter 中打开一个新的笔记本
首先在 Web 浏览器中打开并运行 Jupyter。然后
1. 转到左上角的 “File”。
2. 选择 “New Notebook”点击 “Python 3”。
![新建 Jupyter 笔记本][4]
开始时,新建的笔记本是无标题的,你可以将其重命名为任何名称。我为我的笔记本取名为 “OpenSource.com Data Dictionary Tutorial”。
笔记本中标有行号的位置就是你写代码的区域,也是你输入的位置。
在 macOS 上,可以同时按 `Shift + Return` 键得到输出。在创建新的代码区域前,请确保完成上述动作;否则,你写的任何附加代码可能无法运行。
### 2、新建一个键值对
在字典中输入你希望访问的键与值。输入前,你需要在字典上下文中定义它们的含义:
```
empty_dictionary = {}
grades = {
    "Kelsey": 87,
    "Finley": 92
}
one_line = {"a": 1, "b": 2}
```
![定义字典键值对的代码][6]
这段代码让字典将特定键与其各自的值关联起来。字典按名称存储数据,从而可以更快地查询。
### 3、通过键访问字典值
现在你想查询指定的字典值;在上述例子中,字典值指特定学生的成绩。首先,点击 “Insert” 后选择 “Insert Cell Below”。
![在 Jupyter 插入新建单元格][7]
在新单元格中,定义字典中的键与值。
然后,告诉字典打印该值的键,找到需要的值。例如,查询名为 Kelsey 的学生的成绩:
```
# 访问字典中的数据
grades = {
    "Kelsey": 87,
    "Finley": 92
}
print(grades["Kelsey"])
87
```
![查询特定值的代码][8]
当你查询 Kelsey 的成绩(也就是你想要查询的值)时,如果你用的是 macOS只需要同时按 `Shift+Return` 键。
你会在单元格下方看到 Kelsey 的成绩。
### 4、更新已有的键
当把一位学生的错误成绩添加到字典时,你会怎么办?可以通过更新字典、存储新值来修正这类错误。
首先,选择你想更新的那个键。在上述例子中,假设你错误地输入了 Finley 的成绩,那么 Finley 就是你需要更新的键。
为了更新 Finley 的成绩,你需要在下方插入新的单元格,然后创建一个新的键值对。同时按 `Shift+Return` 键打印字典全部信息:
```
grades["Finley"] = 90
print(grades)
{'Kelsey': 87, 'Finley': 90}
```
![更新键的代码][9]
单元格下方输出带有 Finley 更新成绩的字典。
### 5、添加新键
假设你得到一位新学生的考试成绩。你可以用新键值对将那名学生的姓名与成绩补充到字典中。
插入新的单元格,以键值对形式添加新学生的姓名与成绩。当你完成这些后,同时按 `Shift+Return` 键打印字典全部信息:
```
grades["Alex"] = 88
print(grades)
{'Kelsey': 87, 'Finley': 90, 'Alex': 88}
```
![添加新键][10]
所有的键值对输出在单元格下方。
### 使用字典
请记住,键与值可以是任意数据类型,但它们很少是<ruby>[非基本类型][11]<rt>non-primitive types</rt></ruby>。此外,字典不会对其中的数据自动排序(在 Python 3.7 及以后的版本中,字典只保留键值对的插入顺序)。如果你需要有序的数据,最好使用 Python 列表,而非字典。
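例如,如果确实需要按成绩排序查看,可以临时把字典转换成排好序的列表(示意):

```
grades = {"Kelsey": 87, "Finley": 90, "Alex": 88}

# 按成绩从高到低临时排序,得到(姓名, 成绩)元组的列表
ranking = sorted(grades.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # [('Finley', 90), ('Alex', 88), ('Kelsey', 87)]
```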
如果你考虑使用字典,首先要确认你的数据结构是否是合适的,例如像电话簿的结构。如果不是,列表、元组、树或者其他数据结构可能是更好的选择。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/dictionary-values-python
作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[DCOLIVERSUN](https://github.com/DCOLIVERSUN)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd (Hands on a keyboard with a Python book )
[2]: https://docs.anaconda.com/anaconda/
[3]: https://opensource.com/article/18/3/getting-started-jupyter-notebooks
[4]: https://opensource.com/sites/default/files/uploads/new-jupyter-notebook.png (Create Jupyter notebook)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/define-keys-values.png (Code for defining key-value pairs in the dictionary)
[7]: https://opensource.com/sites/default/files/uploads/jupyter_insertcell.png (Inserting a new cell in Jupyter)
[8]: https://opensource.com/sites/default/files/uploads/lookforvalue.png (Code to look for a specific value)
[9]: https://opensource.com/sites/default/files/uploads/jupyter_updatekey.png (Code for updating a key)
[10]: https://opensource.com/sites/default/files/uploads/jupyter_addnewkey.png (Add a new key)
[11]: https://www.datacamp.com/community/tutorials/data-structures-python

View File

@ -0,0 +1,94 @@
[#]: subject: (6 things to know about using WebAssembly on Firefox)
[#]: via: (https://opensource.com/article/21/3/webassembly-firefox)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13230-1.html)
在 Firefox 上使用 WebAssembly 要了解的 6 件事
======
> 了解在 Firefox 上运行 WebAssembly 的机会和局限性。
![](https://img.linux.net.cn/data/attachment/album/202103/23/223901pi6tcg7ybsyxos7x.jpg)
WebAssembly 是一种可移植的执行格式由于它能够以近乎原生的速度在浏览器中执行应用而引起了人们的极大兴趣。WebAssembly 本质上有一些特殊的属性和局限性。但是,通过将其与其他技术结合,将出现全新的可能性,尤其是与浏览器中的游戏有关的可能性。
本文介绍了在 Firefox 上运行 WebAssembly 的概念、可能性和局限性。
### 沙盒
WebAssembly 有 [严格的安全策略][2]。 WebAssembly 中的程序或功能单元称为*模块*。每个模块实例都运行在自己的隔离内存空间中。因此即使同一个网页加载了多个模块它们也无法访问另一个模块的虚拟地址空间。设计上WebAssembly 还考虑了内存安全性和控制流完整性,这使得(几乎)确定性的执行成为可能。
### Web API
通过 JavaScript [Web API][3] 可以访问多种输入和输出设备。根据这个 [提案][4],将来可以不用绕道 JavaScript 来访问 Web API。C++ 程序员可以在 [Emscripten.org][5] 上找到有关访问 Web API 的信息。Rust 程序员可以使用 [wasm-bindgen][6] 库,其文档位于 [rustwasm.github.io][7]。
### 文件输入/输出
因为 WebAssembly 是在沙盒环境中执行的所以当它在浏览器中执行时它无法访问主机的文件系统。但是Emscripten 提供了虚拟文件系统形式的解决方案。
Emscripten 使在编译时将文件预加载到内存文件系统成为可能。然后可以像在普通文件系统上一样从 WebAssembly 应用中读取这些文件。这个 [教程][8] 提供了更多信息。
### 持久化数据
如果你需要在客户端存储持久化数据,那么必须通过 JavaScript Web API 来完成。请参考 Mozilla 开发者网络MDN关于 [浏览器存储限制和过期标准][9] 的文档,了解不同方法的详细信息。
### 内存管理
WebAssembly 模块作为 [堆栈机][10] 在线性内存上运行。这意味着堆内存分配等概念是没有的。然而,如果你在 C++ 中使用 `new` 或者在 Rust 中使用 `Box::new`,你会期望它会进行堆内存分配。将堆内存分配请求转换成 WebAssembly 的方式在很大程度上依赖于工具链。你可以在 Frank Rehberger 关于 [WebAssembly 和动态内存][11] 的文章中找到关于不同工具链如何处理堆内存分配的详细分析。
### 游戏!
与 [WebGL][12] 结合使用时WebAssembly 的执行速度很高,因此可以在浏览器中运行原生游戏。大型专有游戏引擎 [Unity][13] 和[虚幻 4][14] 展示了 WebGL 可以实现的功能。也有使用 WebAssembly 和 WebGL 接口的开源游戏引擎。这里有些例子:
* 自 2011 年 11 月起,[id Tech 4][15] 引擎(更常称之为 Doom 3 引擎)可在 [GitHub][16] 上以 GPL 许可的形式获得。此外,还有一个 [Doom 3 的 WebAssembly 移植版][17]。
* Urho3D 引擎提供了一些 [令人印象深刻的例子][18],它们可以在浏览器中运行。
* 如果你喜欢复古游戏,可以试试这个 [Game Boy 模拟器][19]。
* [Godot 引擎也能生成 WebAssembly][20]。我找不到演示,但 [Godot 编辑器][21] 已经被移植到 WebAssembly 上。
### 有关 WebAssembly 的更多信息
WebAssembly 是一项很有前途的技术我相信我们将来会越来越多地看到它。除了在浏览器中执行之外WebAssembly 还可以用作可移植的执行格式。[Wasmer][22] 容器主机使你可以在各种平台上执行 WebAssembly 代码。
如果你需要更多的演示、示例和教程,请看一下这个 [WebAssembly 主题集合][23]。Mozilla 的 [游戏和示例合集][24] 并非全是 WebAssembly但仍然值得一看。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/webassembly-firefox
作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
[2]: https://webassembly.org/docs/security/
[3]: https://developer.mozilla.org/en-US/docs/Web/API
[4]: https://github.com/WebAssembly/gc/blob/master/README.md
[5]: https://emscripten.org/docs/porting/connecting_cpp_and_javascript/Interacting-with-code.html
[6]: https://github.com/rustwasm/wasm-bindgen
[7]: https://rustwasm.github.io/wasm-bindgen/
[8]: https://emscripten.org/docs/api_reference/Filesystem-API.html
[9]: https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Browser_storage_limits_and_eviction_criteria
[10]: https://en.wikipedia.org/wiki/Stack_machine
[11]: https://frehberg.wordpress.com/webassembly-and-dynamic-memory/
[12]: https://en.wikipedia.org/wiki/WebGL
[13]: https://beta.unity3d.com/jonas/AngryBots/
[14]: https://www.youtube.com/watch?v=TwuIRcpeUWE
[15]: https://en.wikipedia.org/wiki/Id_Tech_4
[16]: https://github.com/id-Software/DOOM-3
[17]: https://wasm.continuation-labs.com/d3demo/
[18]: https://urho3d.github.io/samples/
[19]: https://vaporboy.net/
[20]: https://docs.godotengine.org/en/stable/development/compiling/compiling_for_web.html
[21]: https://godotengine.org/editor/latest/godot.tools.html
[22]: https://github.com/wasmerio/wasmer
[23]: https://github.com/mbasso/awesome-wasm
[24]: https://developer.mozilla.org/en-US/docs/Games/Examples

View File

@ -0,0 +1,155 @@
[#]: subject: (How to write 'Hello World' in WebAssembly)
[#]: via: (https://opensource.com/article/21/3/hello-world-webassembly)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13250-1.html)
如何在 WebAssembly 中写 “Hello World”
======
> 通过这个分步教程,开始用人类可读的文本编写 WebAssembly。
![](https://img.linux.net.cn/data/attachment/album/202103/30/095907r6ecev48dw0l9w44.jpg)
WebAssembly 是一种字节码格式,[几乎所有的浏览器][2] 都可以将它编译成其宿主操作系统的机器代码。除了 JavaScript 和 WebGL 之外WebAssembly 还满足了将应用移植到浏览器中以实现平台独立的需求。作为 C++ 和 Rust 的编译目标WebAssembly 使 Web 浏览器能够以接近原生的速度执行代码。
当谈论 WebAssembly 应用时,你必须区分三种状态:
1. **源码(如 C++ 或 Rust** 你有一个用兼容语言编写的应用,你想把它在浏览器中执行。
2. **WebAssembly 字节码:** 你选择 WebAssembly 字节码作为编译目标。最后,你得到一个 `.wasm` 文件。
3. **机器码opcode** 浏览器加载 `.wasm` 文件,并将其编译成主机系统的相应机器码。
WebAssembly 还有一种文本格式,用人类可读的文本表示二进制格式。为了简单起见,我将其称为 **WASM-text**。WASM-text 可以比作高级汇编语言。当然,你不会基于 WASM-text 来编写一个完整的应用,但了解它的底层工作原理是很好的(特别是对于调试和性能优化)。
本文将指导你在 WASM-text 中创建经典的 “Hello World” 程序。
### 创建 .wat 文件
WASM-text 文件通常以 `.wat` 结尾。第一步创建一个名为 `helloworld.wat` 的空文本文件,用你最喜欢的文本编辑器打开它,然后粘贴进去:
```
(module
    ;; 从 JavaScript 命名空间导入
    (import  "console"  "log" (func  $log (param  i32  i32))) ;; 导入 log 函数
    (import  "js"  "mem" (memory  1)) ;; 导入 1 页 内存64kb
   
    ;; 我们的模块的数据段
    (data (i32.const 0) "Hello World from WebAssembly!")
   
    ;; 函数声明:导出 helloWorld(),无参数
    (func (export  "helloWorld")
        i32.const 0  ;; 传递偏移 0 到 log
        i32.const 29  ;; 传递长度 29 到 log示例文本的字符串长度
        call  $log
        )
)
```
WASM-text 格式是基于 S 表达式的。为了实现交互JavaScript 函数用 `import` 语句导入WebAssembly 函数用 `export` 语句导出。在这个例子中,从 `console` 模块中导入了 `log` 函数,它需要两个类型为 `i32` 的参数作为输入;此外还导入了一页内存64KB用来存储字符串。
字符串将被写入偏移量为 `0` 的数据段。数据段是你的内存的<ruby>叠加投影<rt>overlay</rt></ruby>,内存是在 JavaScript 部分分配的。
函数用关键字 `func` 标记。当进入函数时,栈是空的。在调用另一个函数之前,函数参数会被压入栈中(这里是偏移量和长度)(见 `call $log`)。当一个函数返回一个 `f32` 类型时(例如),当离开函数时,一个 `f32` 变量必须保留在栈中(但在本例中不是这样)。
### 创建 .wasm 文件
WASM-text 和 WebAssembly 字节码是 1:1 对应的,这意味着你可以将 WASM-text 转换成字节码(反之亦然)。你已经有了 WASM-text现在将创建字节码。
转换可以通过 [WebAssembly Binary Toolkit][3]WABT来完成。从该链接克隆仓库并按照安装说明进行安装。
建立工具链后,打开控制台并输入以下内容,将 WASM-text 转换为字节码:
```
wat2wasm helloworld.wat -o helloworld.wasm
```
你也可以用以下方法将字节码转换为 WASM-text
```
wasm2wat helloworld.wasm -o helloworld_reverse.wat
```
一个从 `.wasm` 文件创建的 `.wat` 文件不包括任何函数或参数名称。默认情况下WebAssembly 用它们的索引来识别函数和参数。
### 编译 .wasm 文件
目前WebAssembly 只与 JavaScript 共存,所以你必须编写一个简短的脚本来加载和编译 `.wasm` 文件并进行函数调用。你还需要在 WebAssembly 模块中定义你要导入的函数。
创建一个空的文本文件,并将其命名为 `helloworld.html`,然后打开你喜欢的文本编辑器并粘贴进去:
```
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Simple template</title>
</head>
<body>
<script>
var memory = new WebAssembly.Memory({initial:1});
function consoleLogString(offset, length) {
var bytes = new Uint8Array(memory.buffer, offset, length);
var string = new TextDecoder('utf8').decode(bytes);
console.log(string);
};
var importObject = {
console: {
log: consoleLogString
},
js : {
mem: memory
}
};
WebAssembly.instantiateStreaming(fetch('helloworld.wasm'), importObject)
.then(obj => {
obj.instance.exports.helloWorld();
});
</script>
</body>
</html>
```
`WebAssembly.Memory(...)` 方法返回一个大小为 64KB 的内存页。函数 `consoleLogString` 根据长度和偏移量从该内存页读取一个字符串。这两个对象作为 `importObject` 的一部分传递给你的 WebAssembly 模块。
在你运行这个例子之前,你可能必须允许 Firefox 从这个目录中访问文件,在地址栏输入 `about:config`,并将 `privacy.file_unique_origin` 设置为 `true`
![Firefox setting][4]
> **注意:** 这样做会使你容易受到 [CVE-2019-11730][6] 安全问题的影响。
现在,在 Firefox 中打开 `helloworld.html`,按下 `Ctrl+K` 打开开发者控制台。
![Debugger output][7]
### 了解更多
这个 Hello World 的例子只是 MDN 的 [了解 WebAssembly 文本格式][8] 文档中的教程之一。如果你想了解更多关于 WebAssembly 的知识以及它的工作原理,可以看看这些文档。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/hello-world-webassembly
作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/helloworld_bread_lead.jpeg?itok=1r8Uu7gk (Hello World inked on bread)
[2]: https://developer.mozilla.org/en-US/docs/WebAssembly#browser_compatibility
[3]: https://github.com/webassembly/wabt
[4]: https://opensource.com/sites/default/files/uploads/firefox_setting.png (Firefox setting)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://www.mozilla.org/en-US/security/advisories/mfsa2019-21/#CVE-2019-11730
[7]: https://opensource.com/sites/default/files/uploads/debugger_output.png (Debugger output)
[8]: https://developer.mozilla.org/en-US/docs/WebAssembly/Understanding_the_text_format

View File

@ -3,41 +3,41 @@
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13234-1.html)
使用 gdu 进行更快的磁盘使用情况检查
======
![](https://img.linux.net.cn/data/attachment/album/202103/24/233818dkfvi4fviiysn8o9.jpg)
在 Linux 终端中有两种常用的 [检查磁盘使用情况的方法][1]`du` 命令和 `df` 命令。[du 命令更多的是用来检查目录的使用空间][2]`df` 命令则是提供文件系统级别的磁盘使用情况
还有更友好的 [用 GNOME “磁盘” 等图形工具在 Linux 中查看磁盘使用情况的方法][3]。如果局限于终端,你可以使用像 [ncdu][5] 这样的 [TUI][4] 工具,以一种图形化的方式获取磁盘使用信息。
### gdu: 在 Linux 终端中检查磁盘使用情况
[gdu][6] 就是这样一个用 Go 编写的工具(因此是 gdu 中的 “g”。gdu 开发者的 [基准测试][7] 表明,它的磁盘使用情况检查速度相当快,特别是在 SSD 上。事实上gdu 主要是针对 SSD 的,尽管它也可以在 HDD 上工作。
如果你在使用 `gdu` 命令时没有使用任何选项,它就会显示你当前所在目录的磁盘使用情况。
![][8]
由于它具有文本用户界面TUI你可以使用箭头浏览目录和磁盘。你也可以按文件名或大小对结果进行排序。
你可以用它做到:
* 向上箭头或 `k` 键将光标向上移动
* 向下箭头或 `j` 键将光标向下移动
* 回车选择目录/设备
* 左箭头或 `h` 键转到上级目录
* 使用 `d` 键删除所选文件或目录
* 使用 `n` 键按名称排序
* 使用 `s` 键按大小排序
* 使用 `c` 键按项目排序
你会注意到一些条目前的一些符号。这些符号有特定的意义。
![][9]
@ -47,8 +47,6 @@
* `H` 表示文件已经被计数(硬链接)。
* `e` 表示目录为空。
要查看所有挂载磁盘的磁盘利用率和可用空间,使用选项 `d`
```
@ -63,9 +61,9 @@ gdu -d
### 在 Linux 上安装 gdu
gdu 是通过 [AUR][11] 提供给 Arch 和 Manjaro 用户的。我想,作为一个 Arch 用户,你应该知道如何使用 AUR。
它包含在即将到来的 Ubuntu 21.04 的 universe 仓库中,但有可能你现在还没有使用它。这种情况下,你可以使用 Snap 安装它,这可能看起来有很多条 `snap` 命令:
```
snap install gdu-disk-usage-analyzer
@ -76,9 +74,9 @@ snap alias gdu-disk-usage-analyzer.gdu gdu
你也可以在其发布页面找到源代码:
- [下载 gdu 的源代码][12]
我更习惯于使用 `du``df` 命令,但我觉得一些 Linux 用户可能会喜欢 gdu。你是其中之一吗
--------------------------------------------------------------------------------
@ -87,7 +85,7 @@ via: https://itsfoss.com/gdu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,193 @@
[#]: subject: "Practice using the Linux grep command"
[#]: via: "https://opensource.com/article/21/3/grep-cheat-sheet"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "lxbwolf"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13247-1.html"
练习使用 Linux 的 grep 命令
======
> 来学习下搜索文件中内容的基本操作,然后下载我们的备忘录作为 grep 和正则表达式的快速参考指南。
![](https://img.linux.net.cn/data/attachment/album/202103/29/093323yn6ilqvg6z6iizcf.jpg)
`grep`<ruby>全局正则表达式打印<rt>Global Regular Expression Print</rt></ruby>)是由 Ken Thompson 早在 1974 年开发的基本 Unix 命令之一。在计算领域,它无处不在,通常被用作为动词(“搜索一个文件中的内容”)。如果你的谈话对象有极客精神,那么它也能在真实生活场景中使用。(例如,“我会 `grep` 我的内存条来回想起那些信息。”)简而言之,`grep` 是一种用特定的字符模式来搜索文件中内容的方式。如果你感觉这听起来像是文字处理器或文本编辑器的现代 Find 功能,那么你就已经在计算行业感受到了 `grep` 的影响。
`grep` 绝不是被现代技术抛弃的远古命令,它的强大体现在两个方面:
* `grep` 可以在终端操作数据流,因此你可以把它嵌入到复杂的处理中。你不仅可以在一个文本文件中*查找*文字,还可以提取文字后把它发给另一个命令。
* `grep` 使用正则表达式来提供灵活的搜索能力。
虽然需要一些练习,但学习 `grep` 命令还是很容易的。本文会介绍一些我认为 `grep` 最有用的功能。
- 下载我们免费的 [grep 备忘录][2]
### 安装 grep
Linux 默认安装了 `grep`
MacOS 默认安装了 BSD 版的 `grep`。BSD 版的 `grep` 跟 GNU 版有一点不一样,因此如果你想完全参照本文,那么请使用 [Homebrew][3] 或 [MacPorts][4] 安装 GNU 版的 `grep`
### 基础的 grep
所有版本的 `grep` 基础语法都一样。入参是匹配模式和你需要搜索的文件。它会把匹配到的每一行输出到你的终端。
```
$ grep gnu gpl-3.0.txt
along with this program. If not, see <http://www.gnu.org/licenses/>.
<http://www.gnu.org/licenses/>.
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
```
`grep` 命令默认大小写敏感,因此 “gnu”、“GNU”、“Gnu” 是三个不同的值。你可以使用 `--ignore-case` 选项来忽略大小写。
```
$ grep --ignore-case gnu gpl-3.0.txt
GNU GENERAL PUBLIC LICENSE
The GNU General Public License is a free, copyleft license for
the GNU General Public License is intended to guarantee your freedom to
GNU General Public License for most of our software; it applies also to
[...16 more results...]
<http://www.gnu.org/licenses/>.
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
```
你也可以通过 `--invert-match` 选项来输出所有没有匹配到的行:
```
$ grep --invert-match \
--ignore-case gnu gpl-3.0.txt
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
[...648 lines...]
Public License instead of this License. But first, please read
```
### 管道
能搜索文件中的文本内容是很有用的,但是 [POSIX][8] 的真正强大之处是可以通过“管道”来连接多条命令。我发现我使用 `grep` 最好的方式是把它与其他工具如 `cut`、`tr` 或 [curl][9] 联合使用。
假如现在有一个文件,文件中每一行是我想要下载的技术论文。我可以打开文件手动点击每一个链接,然后点击火狐浏览器的选项把每一个文件保存到我的硬盘,但是需要点击多次且耗费很长时间。而我还可以搜索文件中的链接,用 `--only-matching` 选项*只*打印出匹配到的字符串。
```
$ grep --only-matching http\:\/\/.*pdf example.html
http://example.com/linux_whitepaper.pdf
http://example.com/bsd_whitepaper.pdf
http://example.com/important_security_topic.pdf
```
输出是一系列的 URL每行一个。而这与 Bash 处理数据的方式完美契合,因此我不再把 URL 打印到终端,而是把它们通过管道传给 `curl`
```
$ grep --only-matching http\:\/\/.*pdf \
example.html | curl --remote-name
```
这条命令可以下载每一个文件,然后以各自的远程文件名命名保存在我的硬盘上。
这个例子中我的搜索模式可能很晦涩。那是因为它用的是正则表达式,一种在大量文本中进行模糊搜索时非常有用的“通配符”语言。
### 正则表达式
没有人会觉得<ruby>正则表达式<rt>regular expression</rt></ruby>(简称 “regex”)很简单。然而,我发现它的名声往往比它应得的要差。诚然,很多人在使用正则表达式时“过于炫耀聪明”,直到写出难以阅读、大而全、复杂得需要换行才好理解的表达式,但是你不必过度使用正则。这里简单介绍一下我使用正则表达式的方式。
首先,创建一个名为 `example.txt` 的文件,输入以下内容:
```
Albania
Algeria
Canada
0
1
3
11
```
最基础的元素是不起眼的 `.` 字符。它表示一个字符。
```
$ grep Can.da example.txt
Canada
```
模式 `Can.da` 能成功匹配到 `Canada` 是因为 `.` 字符表示任意*一个*字符。
可以使用下面这些符号来使 `.` 通配符表示多个字符:
* `?` 匹配前面的模式零次或一次
* `*` 匹配前面的模式零次或多次
* `+` 匹配前面的模式一次或多次
* `{4}` 匹配前面的模式 4 次(或是你在括号中写的其他次数)
了解了这些知识后,你可以用你认为有意思的所有模式来在 `example.txt` 中做练习。可能有些会成功,有些不会成功。重要的是你要去分析结果,这样你才会知道原因。
例如,下面的命令匹配不到任何国家:
```
$ grep A.a example.txt
```
因为 `.` 字符只能匹配一个字符,除非你增加匹配次数。使用 `*` 字符,告诉 `grep` 匹配一个字符零次或者必要的任意多次直到单词末尾。因为你知道你要处理的内容,因此在本例中*零次*是没有必要的。在这个列表中一定没有单个字母的国家。因此,你可以用 `+` 来匹配一个字符至少一次且任意多次直到单词末尾(注意:`+`、`?` 和 `{}` 属于扩展正则表达式语法,在 GNU `grep` 中需配合 `-E` 选项使用,或写成 `\+` 这样的转义形式):
```
$ grep -E 'A.+a' example.txt
Albania
Algeria
```
你可以使用方括号来提供一系列的字母:
```
$ grep -E '[A,C].+a' example.txt
Albania
Algeria
Canada
```
也可以用来匹配数字。结果可能会震惊你:
```
$ grep [1-9] example.txt
1
3
11
```
看到 11 出现在搜索数字 1 到 9 的结果中,你惊讶吗?
如果把 13 加到搜索列表中,会出现什么结果呢?
这些数字之所以会被匹配到,是因为它们包含 1而 1 在要匹配的数字中。
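如果只想匹配整行就是一个数字的行,可以用 `^` 和 `$` 把模式锚定到行首和行尾(原文没有给出,这里补充一个示例):

```
$ grep '^[1-9]$' example.txt
1
3
```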
你可以发现,正则表达式有时会令人费解,但是通过体验和练习,你可以熟练掌握它,用它来提高你搜索数据的能力。
### 下载备忘录
`grep` 命令还有很多文章中没有列出的选项。有用来更好地展示匹配结果、列出文件、列出匹配到的行号、通过打印匹配到的行周围的内容来显示上下文的选项,等等。如果你在学习 `grep`,或者你经常使用它并且需要查阅它的帮助页面来确认选项,那么你可以下载我们的备忘录。这个备忘录使用短选项(例如,使用 `-v`,而不是 `--invert-match`)来帮助你更好地熟悉 `grep`。它还有一部分正则表达式可以帮你记住用途最广的正则表达式代码。 [现在就下载 grep 备忘录!][2]
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/grep-cheat-sheet
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC "Hand putting a Linux file folder into a drawer"
[2]: https://opensource.com/downloads/grep-cheat-sheet
[3]: https://opensource.com/article/20/6/homebrew-mac
[4]: https://opensource.com/article/20/11/macports
[5]: http://www.gnu.org/licenses/\>
[6]: http://www.gnu.org/philosophy/why-not-lgpl.html\>
[7]: http://fsf.org/\>
[8]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[9]: https://opensource.com/downloads/curl-command-cheat-sheet

View File

@ -0,0 +1,143 @@
[#]: subject: (4 cool new projects to try in Copr for March 2021)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-march-2021/)
[#]: author: (Jakub Kadlčík https://fedoramagazine.org/author/frostyx/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13243-1.html)
COPR 仓库中 4 个很酷的新项目2021.03
======
![][1]
> COPR 是个人软件仓库的 [集合][2],这些软件不在 Fedora 中。这是因为某些软件不符合轻松打包的标准;或者它可能不符合其他 Fedora 标准尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也未经项目自身签名。但是,这是尝试新的或实验性软件的一种巧妙方式。
本文介绍了 COPR 中一些有趣的新项目。如果你第一次使用 COPR请参阅 [COPR 用户文档][3]。
### Ytfzf
[Ytfzf][5] 是一个简单的命令行工具,用于搜索和观看 YouTube 视频。它提供了围绕模糊查找程序 [fzf][6] 构建的快速直观的界面。它使用 [youtube-dl][7] 来下载选定的视频,并打开外部视频播放器来观看。因此,`ytfzf` 比使用浏览器观看 YouTube 的资源占用要少得多。它支持缩略图(通过 [ueberzug][8])、历史记录保存、将多个视频排队播放或下载下来以供以后使用、频道订阅以及其他方便的功能。多亏了像 [dmenu][9] 或 [rofi][10] 这样的工具,它甚至可以在终端之外使用。
![][11]
#### 安装说明
目前[仓库][13]为 Fedora 33 和 34 提供 Ytfzf。要安装它请使用以下命令
```
sudo dnf copr enable bhoman/ytfzf
sudo dnf install ytfzf
```
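安装后,把搜索关键词作为参数传给 `ytfzf` 即可开始搜索(用法示例,具体选项请参阅项目文档):

```
ytfzf fedora
```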
### Gemini 客户端
你有没有想过,如果万维网走的是一条完全不同的路线,不采用 CSS 和客户端脚本,你的互联网浏览体验会如何?[Gemini][15] 是 HTTPS 协议的现代替代品,尽管它并不打算取代 HTTPS 协议。[stenstorp/gemini][16] COPR 项目提供了各种客户端来浏览 Gemini _网站_,有 [Castor][17]、[Dragonstone][18]、[Kristall][19] 和 [Lagrange][20]。
[Gemini][21] 站点提供了一些使用该协议的主机列表。以下显示了使用 Castor 访问这个站点的情况:
![][22]
#### 安装说明
该 [仓库][16] 目前为 Fedora 32、33、34 和 Fedora Rawhide 提供 Gemini 客户端。EPEL 7 和 8以及 CentOS Stream 也可使用。要安装浏览器,请从这里显示的安装命令中选择:
```
sudo dnf copr enable stenstorp/gemini
sudo dnf install castor
sudo dnf install dragonstone
sudo dnf install kristall
sudo dnf install lagrange
```
### Ly
[Ly][25] 是一个 Linux 和 BSD 的轻量级登录管理器。它有一个类似于 ncurses 的基于文本的用户界面。理论上,它应该支持所有的 X 桌面环境和窗口管理器(其中很多都 [经过测试][26]。Ly 还提供了基本的 Wayland 支持Sway 也工作良好)。在配置文件的某个地方,藏有一个复活节彩蛋选项,可以在背景中启用著名的 [PSX DOOM fire][27] 动画,单凭这一点就值得一试。
![][28]
#### 安装说明
该 [仓库][30] 目前为 Fedora 32、33 和 Fedora Rawhide 提供 Ly。要安装它请使用以下命令
```
sudo dnf copr enable dhalucario/ly
sudo dnf install ly
```
在将 Ly 设置为系统登录界面之前,请在终端中运行 `ly` 命令以确保其正常工作。然后禁用当前的登录管理器,再启用 Ly
```
sudo systemctl disable gdm
sudo systemctl enable ly
```
最后,重启计算机,使其更改生效。
### AWS CLI v2
[AWS CLI v2][32] 基于社区反馈,带来了稳健而有条理的演进,而不是对原有客户端的大规模重新设计。它引入了配置凭证的新机制,现在允许用户从 AWS 控制台中生成的 `.csv` 文件导入凭证。它还提供了对 AWS SSO 的支持。其他主要改进是服务器端自动补全,以及交互式参数生成。一个新功能是交互式向导,它提供了更高层次的抽象,并结合多个 AWS API 调用来创建、更新或删除 AWS 资源。
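例如,可以用下面的命令导入从 AWS 控制台下载的凭证文件(示例命令,具体用法请以 AWS CLI v2 的文档为准):

```
aws configure import --csv file://credentials.csv
```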
![][33]
#### 安装说明
该 [仓库][35] 目前为 Fedora Linux 32、33、34 和 Fedora Rawhide 提供 AWS CLI v2。要安装它请使用以下命令
```
sudo dnf copr enable spot/aws-cli-2
sudo dnf install aws-cli-2
```
自然地,访问 AWS 账户凭证是必要的。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-march-2021/
作者:[Jakub Kadlčík][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/frostyx/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/10/4-copr-945x400-1-816x345.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html
[4]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#droidcam
[5]: https://github.com/pystardust/ytfzf
[6]: https://github.com/junegunn/fzf
[7]: http://ytdl-org.github.io/youtube-dl/
[8]: https://github.com/seebye/ueberzug
[9]: https://tools.suckless.org/dmenu/
[10]: https://github.com/davatorium/rofi
[11]: https://fedoramagazine.org/wp-content/uploads/2021/03/ytfzf.png
[12]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions
[13]: https://copr.fedorainfracloud.org/coprs/bhoman/ytfzf/
[14]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#gemini-clients
[15]: https://gemini.circumlunar.space/
[16]: https://copr.fedorainfracloud.org/coprs/stenstorp/gemini/
[17]: https://git.sr.ht/~julienxx/castor
[18]: https://gitlab.com/baschdel/dragonstone
[19]: https://kristall.random-projects.net/
[20]: https://github.com/skyjake/lagrange
[21]: https://gemini.circumlunar.space/servers/
[22]: https://fedoramagazine.org/wp-content/uploads/2021/03/gemini.png
[23]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-1
[24]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#ly
[25]: https://github.com/nullgemm/ly
[26]: https://github.com/nullgemm/ly#support
[27]: https://fabiensanglard.net/doom_fire_psx/index.html
[28]: https://fedoramagazine.org/wp-content/uploads/2021/03/ly.png
[29]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-2
[30]: https://copr.fedorainfracloud.org/coprs/dhalucario/ly/
[31]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#aws-cli-v2
[32]: https://aws.amazon.com/blogs/developer/aws-cli-v2-is-now-generally-available/
[33]: https://fedoramagazine.org/wp-content/uploads/2021/03/aws-cli-2.png
[34]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-3
[35]: https://copr.fedorainfracloud.org/coprs/spot/aws-cli-2/

View File

@ -0,0 +1,100 @@
[#]: subject: (Why I use exa instead of ls on Linux)
[#]: via: (https://opensource.com/article/21/3/replace-ls-exa)
[#]: author: (Sudeshna Sur https://opensource.com/users/sudeshna-sur)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13237-1.html)
为什么我在 Linux 上使用 exa 而不是 ls
======
> exa 是一个 Linux ls 命令的现代替代品。
![](https://img.linux.net.cn/data/attachment/album/202103/26/101726h008fn6tttn4g6gt.jpg)
我们生活在一个繁忙的世界里,当我们需要查找文件和数据时,使用 `ls` 命令可以节省时间和精力。但如果不经过大量调整,默认的 `ls` 输出并不十分舒心。既然有 `exa` 这样的替代方案,为什么还要花时间眯着眼睛看黑白文字呢?
[exa][2] 是一个常规 `ls` 命令的现代替代品,它让生活变得更轻松。这个工具是用 [Rust][3] 编写的,该语言以并行性和安全性而闻名。
### 安装 exa
要安装 `exa`,请运行:
```
$ dnf install exa
```
### 探索 exa 的功能
`exa` 改进了 `ls` 的文件列表,它提供了更多的功能和更好的默认值。它使用颜色来区分文件类型和元数据。它能识别符号链接、扩展属性和 Git。而且它体积小、速度快只有一个二进制文件。
#### 跟踪文件
你可以使用 `exa` 来跟踪某个 Git 仓库中新增的文件。
![Tracking Git files with exa][4]
#### 树形结构
这是 `exa` 的基本树形结构。`--level` 的值决定了列表的深度,这里设置为 2。如果你想列出更多的子目录和文件请增加 `--level` 的值。
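对应的命令大致如下(示例命令):

```
exa --tree --level=2
```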
![exa's default tree structure][6]
这个树包含了每个文件的很多元数据。
![Metadata in exa's tree structure][7]
#### 配色方案
默认情况下,`exa` 根据 [内置的配色方案][8] 来标识不同的文件类型。它不仅对文件和目录进行颜色编码,还对 `Cargo.toml`、`CMakeLists.txt`、`Gruntfile.coffee`、`Gruntfile.js`、`Makefile` 等多种文件类型进行颜色编码。
#### 扩展文件属性
当你使用 `exa` 探索 xattrs扩展的文件属性`--extended` 会显示所有的 xattrs。
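示例命令(`--extended` 通常与 `--long` 搭配使用):

```
exa --long --extended
```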
![xattrs in exa][9]
#### 符号链接
`exa` 能识别符号链接,也能指出实际的文件。
![symlinks in exa][10]
#### 递归
当你想递归地列出当前目录下的所有目录和文件时,`exa` 也能做到。
![recurse in exa][11]
### 总结
我相信 `exa` 是最简单、最容易上手的工具之一。它帮助我跟踪了很多 Git 和 Maven 文件。它的颜色编码让我更容易在多个子目录中进行搜索,它还能帮助我了解当前的 xattrs。
你是否已经用 `exa` 替换了 `ls`?请在评论中分享你的反馈。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/replace-ls-exa
作者:[Sudeshna Sur][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sudeshna-sur
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://the.exa.website/docs
[3]: https://opensource.com/tags/rust
[4]: https://opensource.com/sites/default/files/uploads/exa_trackingfiles.png (Tracking Git files with exa)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/exa_treestructure.png (exa's default tree structure)
[7]: https://opensource.com/sites/default/files/uploads/exa_metadata.png (Metadata in exa's tree structure)
[8]: https://the.exa.website/features/colours
[9]: https://opensource.com/sites/default/files/uploads/exa_xattrs.png (xattrs in exa)
[10]: https://opensource.com/sites/default/files/uploads/exa_symlinks.png (symlinks in exa)
[11]: https://opensource.com/sites/default/files/uploads/exa_recurse.png (recurse in exa)

View File

@ -0,0 +1,75 @@
[#]: subject: (3 new Java tools to try in 2021)
[#]: via: (https://opensource.com/article/21/3/enterprise-java-tools)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13249-1.html)
2021 年要尝试的 3 个新的 Java 工具
======
> 通过这三个工具和框架,为你的企业级 Java 应用和你的职业生涯提供助力。
![](https://img.linux.net.cn/data/attachment/album/202103/29/212649w9j5e05b0ppi9bew.jpg)
尽管在 Kubernetes 上广泛使用 [Python][2]、[Go][3] 和 [Node.js][4] 实现 [人工智能][5] 和机器学习应用以及 [无服务器函数][6],但 Java 技术仍然在开发企业应用中发挥着关键作用。根据 [开发者经济学][7] 的数据,在 2020 年第三季度,全球有 800 万名企业 Java 开发者。
虽然这门语言已经存在了超过 25 年,但 Java 世界中总是有新的趋势、工具和框架,可以为你的应用和你的职业生涯赋能。
绝大多数 Java 框架都是为具有动态行为的长时间运行的进程而设计的,这些动态行为用于运行可变的应用服务器,例如物理服务器和虚拟机。自从 Kubernetes 容器在 2014 年发布以来,情况已经发生了变化。在 Kubernetes 上使用 Java 应用的最大问题是通过减少内存占用、加快启动和响应时间以及减少文件大小来优化应用性能。
### 3 个值得考虑的新 Java 框架和工具
Java 开发人员也一直在寻找更简便的方法,将闪亮的新开源工具和项目集成到他们的 Java 应用和日常工作中。这极大地提高了开发效率,并激励更多的企业和个人开发者继续使用 Java 栈。
当试图满足上述企业 Java 生态系统的期望时,这三个新的 Java 框架和工具值得你关注。
#### 1、Quarkus
[Quarkus][8] 旨在以惊人的快速启动时间、超低的常驻内存集RSS和高密度内存利用率在 Kubernetes 等容器编排平台中开发云原生的微服务和无服务器应用。根据 JRebel 的 [第九届全球 Java 开发者生产力年度报告][9]Java 开发者对 Quarkus 的使用率从不到 1% 上升到 6%[Micronaut][10] 和 [Vert.x][11] 均从去年的 1% 左右分别增长到 4% 和 2%。
#### 2、Eclipse JKube
[Eclipse JKube][12] 使 Java 开发者能够使用 [Docker][13]、[Jib][14] 或 [Source-To-Image][15] 构建策略,基于云原生 Java 应用构建容器镜像。它还能在编译时生成 Kubernetes 和 OpenShift 清单,并改善开发人员对调试、观察和日志工具的体验。
#### 3、MicroProfile
[MicroProfile][16] 解决了与优化企业 Java 的微服务架构有关的最大问题而无需采用新的框架或重构整个应用。此外MicroProfile [规范][17](即 Health、Open Tracing、Open API、Fault Tolerance、Metrics、Config继续与 [Jakarta EE][18] 的实现保持一致。
### 总结
很难说哪个 Java 框架或工具是企业 Java 开发人员实现的最佳选择。只要 Java 栈还有改进的空间,并能加速企业业务的发展,我们就可以期待新的框架、工具和平台的出现,比如上面的三个。花点时间看看它们是否能在 2021 年改善你的企业 Java 应用。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/enterprise-java-tools
作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hot drink at the computer)
[2]: https://opensource.com/resources/python
[3]: https://opensource.com/article/18/11/learning-golang
[4]: https://opensource.com/article/18/7/node-js-interactive-cli
[5]: https://opensource.com/article/18/12/how-get-started-ai
[6]: https://opensource.com/article/19/4/enabling-serverless-kubernetes
[7]: https://developereconomics.com/
[8]: https://quarkus.io/
[9]: https://www.jrebel.com/resources/java-developer-productivity-report-2021
[10]: https://micronaut.io/
[11]: https://vertx.io/
[12]: https://www.eclipse.org/jkube/
[13]: https://opensource.com/resources/what-docker
[14]: https://github.com/GoogleContainerTools/jib
[15]: https://www.openshift.com/blog/create-s2i-builder-image
[16]: https://opensource.com/article/18/1/eclipse-microprofile
[17]: https://microprofile.io/
[18]: https://opensource.com/article/18/5/jakarta-ee

View File

@ -0,0 +1,222 @@
[#]: subject: (Get better at programming by learning how things work)
[#]: via: (https://jvns.ca/blog/learn-how-things-work/)
[#]: author: (Julia Evans https://jvns.ca/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Get better at programming by learning how things work
======
When we talk about getting better at programming, we often talk about testing, writing reusable code, design patterns, and readability.
All of those things are important. But in this blog post, I want to talk about a different way to get better at programming: learning how the systems youre using work! This is the main way I approach getting better at programming.
### examples of “how things work”
To explain what I mean by “how things work”, here are some different types of programming and examples of what you could learn about how they work.
Frontend JS:
* how the event loop works
* HTTP methods like GET and POST
* what the DOM is and what you can do with it
* the same-origin policy and CORS
CSS:
* how inline elements are rendered differently from block elements
* what the “default flow” is
* how flexbox works
* how CSS decides which selector to apply to which element (the “cascading” part of the cascading style sheets)
Systems programming:
* the difference between the stack and the heap
* how virtual memory works
* how numbers are represented in binary
* what a symbol table is
* how code from external libraries gets loaded (e.g. dynamic/static linking)
* Atomic instructions and how theyre different from mutexes
### you can use something without understanding how it works (and that can be ok!)
We work with a LOT of different systems, and its unreasonable to expect that every single person understands everything about all of them. For example, many people write programs that send email, and most of those people probably dont understand everything about how email works. Email is really complicated! Thats why we have abstractions.
But if youre working with something (like CSS, or HTTP, or goroutines, or email) more seriously and you dont really understand how it works, sometimes youll start to run into problems.
### your bugs will tell you when you need to improve your mental model
When Im programming and Im missing a key concept about how something works, it doesnt always show up in an obvious way. What will happen is:
* Ill have bugs in my programs because of an incorrect mental model
* Ill struggle to fix those bugs quickly and I wont be able to find the right questions to ask to diagnose them
* I feel really frustrated
I think its actually an important skill **just to be able to recognize that this is happening**: Ive slowly learned to recognize the feeling of “wait, Im really confused, I think theres something I dont understand about how this system works, what is it?”
Being a senior developer is less about knowing absolutely everything and more about quickly being able to recognize when you **dont** know something and learn it. Speaking of being a senior developer…
### even senior developers need to learn how their systems work
So far Ive never stopped learning how things work, because there are so many different types of systems we work with!
For example, I know a lot of the fundamentals of how C programs work and web programming (like the examples at the top of this post), but when it comes to graphics programming/OpenGL/GPUs, I know very few of the fundamental ideas. And sometimes Ill discover a new fact that Im missing about a system I thought I knew, like last year I [discovered][1] that I was missing a LOT of information about how CSS works.
It can feel bad to realise that you really dont understand how a system youve been using works when you have 10 years of experience (“ugh, shouldnt I know this already? Ive been using this for so long!“), but its normal! Theres a lot to know about computers and we are constantly inventing new things to know, so nobody can keep up with every single thing.
### how I go from “Im confused” to “ok, I get it!”
When I notice Im confused, I like to approach it like this:
1. Notice Im confused about a topic (“hey, when I write `await` in my Javascript program, what is actually happening?“)
2. Break down my confusion into specific factual questions, like “when theres an `await` and its waiting, how does it decide which part of my code runs next? Where is that information stored?”
3. Find out the answers to those questions (by writing a program, reading something on the internet, or asking someone)
4. Test my understanding by writing a program (“hey, thats why I was having that async bug! And I can fix it like this!“)
The last “test my understanding” step is really important. The whole point of understanding how computers work is to actually write code to make them do things!
I find that if I can use my newfound understanding to do something concrete like implement a new feature or fix a bug or even just write a test program that demonstrates how the thing works, it feels a LOT more real than if I just read about it. And then its much more likely that Ill be able to use it in practice later.
### just learning a few facts can help a lot
Learning how things work doesnt need to be a big huge thing. For example, I used to not really know how floating point numbers worked, and I felt nervous that something weird would happen that I didnt understand.
And then one day in 2013 I went to a talk by Stefan Karpinski explaining how floating point numbers worked (containing roughly the information in [this comic][2], but with more weird details). And now I feel totally confident using floating point numbers! I know what their basic limitations are, and when not to use them (to represent integers larger than 2^53). And I know what I _dont_ know I know its hard to write numerically stable linear algebra algorithms and I have no idea how to do that.
### connect new facts to information you already know
When learning a new fact, its easy to be able to recite a sentence like “ok, there are 8 bits in a byte”. Thats true, but so what? Whats harder (and much more useful!) is to be able to connect that information to what you already know about programming.
For example, lets take this “8 bits in a byte thing”. In your program you probably have strings, like “Hello”. You can already start asking lots of questions about this, like:
* How many bytes in memory are used to represent the string “Hello”? (its 5!)
* What bits exactly does the letter “H” correspond to? (the encoding for “Hello” is going to be using ASCII, so you can look it up in an ASCII table!)
* If you have a running program thats printing out the string “Hello”, can you go look at its memory and find out where those bytes are in its memory? How do you do that?
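For instance, you can answer the first two of these by asking the computer directly (a rough sketch using common shell tools; exact output formatting varies):

```
$ printf 'Hello' | wc -c
5
$ printf 'H' | xxd -b
00000000: 01001000                                              H
```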
The important thing here is to ask the questions and explore the connections that **youre** curious about maybe youre not so interested in how the strings are represented in memory, but you really want to know how many bytes a heart emoji is in Unicode! Or maybe you want to learn about how floating point numbers work!
I find that when I connect new facts to things Im already familiar with (like emoji or floating point numbers or strings), then the information sticks a lot better.
Next up, I want to talk about 2 ways to get information: asking a person yes/no questions, and asking the computer.
### how to get information: ask yes/no questions
When Im talking to someone who knows more about the concept than me, I find it helps to start by asking really simple questions, where the answer is just “yes” or “no”. Ive written about yes/no questions before in [how to ask good questions][3], but I love it a lot so lets talk about it again!
I do this because it forces me to articulate exactly what my current mental model _is_, and because I think yes/no questions are often easier for the person Im asking to answer.
For example, here are some different types of questions:
* Check if your current understanding is correct
* Example: “Is a pixel shader the same thing as a fragment shader?”
* How concepts youve heard of are related to each other
* Example: “Does shadertoy use OpenGL?”
* Example: “Do graphics cards know about triangles?”
* High-level questions about what the main purpose of something is
* Example: “Does mysql orchestrator proxy database queries?”
* Example: “Does OpenGL give you more control or less control over the graphics card than Vulkan?”
### yes/no questions put you in control
When I ask very open-ended questions like “how does X work?”, I find that it often goes wrong in one of 2 ways:
1. The person starts telling me a bunch of things that I already knew
2. The person starts telling me a bunch of things that I dont know, but which arent really what I was interested in understanding
Both of these are frustrating, but of course neither of these things are their fault! They cant know exactly what information I wanted about X, because I didnt tell them. But it still always feels bad to have to interrupt someone with “oh no, sorry, thats not what I wanted to know at all!”
I love yes/no questions because, even though theyre harder to formulate, Im WAY more likely to get the exact answers I want and less likely to waste the time of the person Im asking by having them explain a bunch of things that Im not interested in.
### asking yes/no questions isnt always easy
When Im asking someone questions to try to learn about something new, sometimes this happens:
**me:** so, just to check my understanding, it works like this, right?
**them:** actually, no, its <completely different thing>
**me (internally)**: (brief moment of panic)
**me:** ok, let me think for a minute about my next question
It never quite feels _good_ to learn that my mental model was totally wrong, even though its incredibly helpful information. Asking this kind of really specific question (even though its more effective!) puts you in a more vulnerable position than asking a broader question, because sometimes you have to reveal specific things that you were totally wrong about!
When this happens, I like to just say that Im going to take a minute to incorporate the new fact into my mental model and think about my next question.
Okay, thats the end of this digression into my love for yes/no questions :)
### how to get information: ask the computer
Sometimes when Im trying to answer a question I have, there wont be anybody to ask, and Ill Google it or search the documentation and wont find anything.
But the delightful thing about computers is that you can often get answers to questions about computers by… asking your computer!
Here are a few examples (from past blog posts) of questions Ive had and computer experiments I ran to answer them for myself:
* Are atomics faster or slower than mutexes? (blog post: [trying out mutexes and atomics][4])
* If I add a user to a group, will existing processes running as that user have the new group? (blog post: [How do groups work on Linux?][5])
* On Linux, if you have a server listening on 0.0.0.0 but you dont have any network interfaces, can you connect to that server? (blog post: [whats a network interface?][6])
* How is the data in a SQLite database actually organized on disk? (blog post: [How does SQLite work? Part 1: pages!][7])
### asking the computer is a skill
It definitely takes time to learn how to turn “Im confused about X” into specific questions, and then to turn that question into an experiment you can run on your computer to definitively answer it.
But its a really powerful tool to have! If youre not limited to just the things that you can Google / whats in the documentation / what the people around you know, then you can do a LOT more.
### be aware of what you still dont understand
Like I said earlier, the point here isnt to understand every single thing. But especially as you get more senior, its important to be aware of what you dont know! For example, here are five things I dont know (out of a VERY large list):
* How database transactions / isolation levels work
* How vertex shaders work (in graphics)
* How font rendering works
* How BGP / peering work
* How multiple inheritance works in Python
And I dont really need to know how those things work right now! But one day Im pretty sure Im going to need to know how database transactions work, and I know its something I can learn when that day comes :)
Someone who read this post asked me “how do you figure out what you dont know?” and I didnt have a good answer, so Id love to hear your thoughts!
Thanks to Haider Al-Mosawi, Ivan Savov, Jake Donham, John Hergenroeder, Kamal Marhubi, Matthew Parker, Matthieu Cneude, Ori Bernstein, Peter Lyons, Sebastian Gutierrez, Shae Matijs Erisson, Vaibhav Sagar, and Zell Liew for reading a draft of this.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/learn-how-things-work/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/blog/debugging-attitude-matters/
[2]: https://wizardzines.com/comics/floating-point/
[3]: https://jvns.ca/blog/good-questions/
[4]: https://jvns.ca/blog/2014/12/14/fun-with-threads/
[5]: https://jvns.ca/blog/2017/11/20/groups/
[6]: https://jvns.ca/blog/2017/09/03/network-interfaces/
[7]: https://jvns.ca/blog/2014/09/27/how-does-sqlite-work-part-1-pages/

View File

@ -0,0 +1,80 @@
[#]: subject: (Elevating open leaders by getting out of their way)
[#]: via: (https://opensource.com/open-organization/21/3/open-spaces-leadership-talent)
[#]: author: (Jos Groen https://opensource.com/users/jos-groen)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Elevating open leaders by getting out of their way
======
Your organization's leaders likely know the most effective and
innovative path forward. Are you giving them the space they need to get
you there?
![Leaders are catalysts][1]
Today, we're seeing the rapid rise of agile organizations capable of quickly and effectively bringing new ideas with large-scale impacts to market. These companies tend to have something in common: they have a clear core direction and young, energetic leaders—leaders who encourage their talented employees to develop their potential.
The way these organizations apply open principles to developing their internal talent—that is, how they facilitate and encourage talented employees to develop and advance in all layers of the organization—is a critical component of their sustainability and success. The organizations have achieved an important kind of "flow," through which talented employees can easily shift to the places in the organization where they can add the most value based on their talents, skills, and [intrinsic motivators.][2] Flow ensures fresh ideas and new impulses. After all, the best idea can originate anywhere in the organization—no matter where a particular employee may be located.
In this new series, I'll explore various dimensions of this open approach to organizational talent management. In this article, I explicitly focus on employees who demonstrate leadership talent. After all, we need leaders to create contexts based on open principles, leaders able to balance people and business in their organization.
### The elements of success
I see five crucial elements that determine the success of businesses today:
1. Talented leaders are engaged and empowered—given the space to develop, grow, and build experience under the guidance of mentors (leaders) in a safe environment. They can fail fast and learn fast.
2. Their organizations know how to quickly and decisively convert new ideas into valuable products, services, or solutions.
3. The dynamic between "top" and "bottom" managers and leaders in the organization is one of balance.
4. People are willing to let go of deeply held beliefs, processes, and behaviors. It's brave to work openly.
5. The organization has a clear core direction and strong identity based on the open principles.
All these elements of success are connected to employees' creativity and ingenuity.
### Open and safe working environment
Companies that traditionally base their services, governance, and strategic execution on hierarchy and the authority embedded in their systems, processes, and management structure rarely leave room for this kind of open talent development. In these systems, good ideas too often get "stuck" in bureaucracies, and authority to lead is primarily [based on tenure and seniority][3], not on talent. Moreover, traditionally minded board members and management don't always have an adequate eye for management talent. So there is the first challenge: we need leaders who keep a primary eye on leadership talent. That is the first step toward balancing management and leadership at the top. Empowering the most talented and passionate—rather than the more senior—makes traditionally minded boards uncomfortable, so leaders with potentially innovative ideas rarely get invited to participate in the "inner circle."
Fortunately, I see these organizations beginning to realize that they need to get moving before they lose their competitive edge.
The truth is that there is no "right" or "wrong" choice for organizing a business. The choices an organization makes are simply the choices that determine their overall speed, strength, and agility.
They're beginning to understand that they need to provide talented employees with [safe spaces for experimentation][4]—an open and safe work environment, one in which employees can experiment with new ideas, learn from their mistakes, and [find that place][5] in the organization [where they thrive][6].
But the truth is that there is no "right" or "wrong" choice for organizing a business. The choices an organization makes are simply the choices that determine their overall speed, strength, and agility. And more frequently, organizations are choosing open approaches to building their cultures and processes, because their talent thrives better in environments based on transparency and trust. Employees in these organizations have more perspective and are actively involved in the design and development of the organization itself. They keep their eyes and ears "open" for new ideas and approaches—so the organization benefits from empowering them.
### Hybrid thinking
As [I've said before][7]: the transition from a conventional organization to a more open one is never a guaranteed success. During this transformation, you'll encounter periods in which traditional and open practices operate side by side, even mixed and shuffled. These are an organization's _hybrid_ phase.
When your organization enters this hybrid phase, it needs to begin thinking about changing its approach to talent management. In addition to its _individual_ transformation, it will need to balance the needs and perspectives of senior managers and leaders alongside _other_ management layers, which are beginning to shift. In short, it must establish a new vision and strategy for the development of leadership talent.
The starting point here is to create a safe and stimulating environment where mentors and coaches support these future leaders in their growth. During this hybrid period, you will be searching for the balance between passion and performance in the organization—which means you'll need to let go of deeply rooted beliefs, processes, and behaviors. In my opinion, this means focusing on the _human_ elements present in your organization, its leadership, and its flows of talent, without losing sight of organizational performance. This "letting go" doesn't happen quickly or immediately, like pressing a button, nor is it one that you can entirely influence. But it is an exciting and comprehensive journey that you and your organization will embark on.
And that journey begins with you. Are you ready for it?
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/21/3/open-spaces-leadership-talent
作者:[Jos Groen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jos-groen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/leaderscatalysts.jpg?itok=f8CwHiKm (Leaders are catalysts)
[2]: https://opensource.com/open-organization/18/5/rethink-motivation-engagement
[3]: https://opensource.com/open-organization/16/8/how-make-meritocracy-work
[4]: https://opensource.com/open-organization/19/3/introduction-psychological-safety
[5]: https://opensource.com/open-organization/17/9/own-your-open-career
[6]: https://opensource.com/open-organization/17/12/drive-open-career-forward
[7]: https://opensource.com/open-organization/20/6/organization-everyone-deserves

View File

@ -0,0 +1,74 @@
[#]: subject: (Linux powers the internet, confirms EU commissioner)
[#]: via: (https://opensource.com/article/21/3/linux-powers-internet)
[#]: author: (James Lovegrove https://opensource.com/users/jlo)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Linux powers the internet, confirms EU commissioner
======
EU celebrates the importance of open source software at the annual EU
Open Source Policy Summit.
![Penguin driving a car with a yellow background][1]
In 20 years of EU digital policy in Brussels, I have seen growing awareness and recognition among policymakers in Europe of the importance of open source software (OSS). A recent keynote by EU internal market commissioner Thierry Breton at the annual [EU Open Source Policy Summit][2] in February provides another example—albeit with a sense of urgency and strategic opportunity that has been largely missing in the past.
Commissioner Breton did more than just recognize the "long list of [OSS] success stories." He also underscored OSS's critical role in accelerating Europe's €750 billion recovery and the goal to further "embed open source" into Europe's longer-term policy objectives in the public sector and other key industrial sectors.
In addition to the commissioner's celebration that "Linux is powering the internet," there was a policy-related call to action to expand the OSS value proposition to many other areas of digital sovereignty. Indeed, with only 2.5 years of EU Commission mandate remaining, there is a welcome sense of urgency. I see three possible reasons for this: 1. fresh facts and figures, 2. compelling policy commitments, and 3. game-changing investment opportunities for Europe.
### 1\. Fresh facts and figures
Commissioner Breton shared new facts and figures to better inform policymakers in Brussels and all European capitals. The EU's new [Open Source Study][3] reveals that the "economic impact of OSS is estimated to have been between €65 and €95 billion (2018 figures)" and an "increase of 10% [in code contributions] would generate in the future around additional €100 billion in EU GDP per year."
This EU report on OSS, the first since 2006, builds nicely on several other recent open source reports in Germany (from [Bitkom][4]) and France (from [CNLL/Syntec][5]), recent strategic IT analysis by the German federal government, and the [Berlin Declaration][6]'s December 2020 pledge for all EU member states to "implement common standards, modular architectures, and—when suitable—open source technologies in the development and deployment of cross-border digital solutions" by 2024, the end of current EU Commission's mandate.
### 2\. Compelling policy commitments
Commissioner Breton's growth and sovereignty questions seemed to hinge on the need to bolster existing open source adoption and collaboration—notably "how to embed open source into public administration to make them more efficient and resilient" and "how to create an enabling framework for the private sector to invest in open source."
I would encourage readers to review the various [panel discussions][7] from the Policy Summit that address many of the important enabling factors (e.g., establishing open source program offices [OSPOs], open standards, public sector sharing and reuse, etc.). These will be tackled over the coming months with deeper dives by OpenForum Europe and other European associations (e.g., Bitkom's Open Source Day on 16 September), thereby bringing policymaking and open source code and collaboration closer together.
### 3\. Game-changing investments
The European Parliament [recently approved][8] the final go-ahead for the €750 billion Next Generation European Union ([NGEU][9]) stimulus package. This game-changing investment is a once-in-a-generation opportunity to realize longstanding EU policy objectives while accelerating digital transformation in an open and sustainable fashion, as "each plan has to dedicate at least 37% of its budget to climate and at least 20% to digital actions."
During the summit, great insights into how Europe's public sector can further embrace open innovation in the context of these game-changing EU funds were shared by [OFE][10] and [Digital Europe][11] speakers from Germany, Italy, Portugal, Slovenia, FIWARE, and Red Hat. 2021 is fast becoming a critical year when this objective can be realized within the public sector and [industry][12].
### A call to action
Commissioner Breton's recognition of Linux is more than another political validation that "open source has won." It is a call to action to collaborate to accelerate European competitiveness and transformation and is a key to sovereignty (interoperability within services and portability of data and workloads) to reflect key European values through open source.
Commissioner Breton is working closely with the EU executive vice president for a digital age, Margrethe Vestager, to roll out a swathe of regulatory carrots and sticks for the digital sector. Indeed, in the words of the Commission President Ursula von der Leyen at the recent [Masters of Digital 2021][13] event, "this year we are rewriting the rule book for our digital internal market. I want companies to know that across the European Union, there will be one set of digital rules instead of this patchwork of national rules."
In another 10 years, we will all look back on the past year and ask ourselves this question: did we "waste a good crisis" to realize [Europe's digital decade][14]?
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/linux-powers-internet
作者:[James Lovegrove][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jlo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc (Penguin driving a car with a yellow background)
[2]: https://openforumeurope.org/event/policy-summit-2021/
[3]: https://ec.europa.eu/digital-single-market/en/news/study-and-survey-impact-open-source-software-and-hardware-eu-economy
[4]: https://www.bitkom.org/Presse/Presseinformation/Open-Source-deutschen-Wirtschaft-angekommen
[5]: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/technological-independence
[6]: https://www.bmi.bund.de/SharedDocs/downloads/EN/eu-presidency/gemeinsame-erklaerungen/berlin-declaration-digital-society.html
[7]: https://www.youtube.com/user/openforumeurope/videos
[8]: https://www.europarl.europa.eu/news/en/press-room/20210204IPR97105/parliament-gives-go-ahead-to-EU672-5-billion-recovery-and-resilience-facility
[9]: https://ec.europa.eu/info/strategy/recovery-plan-europe_en
[10]: https://www.youtube.com/watch?v=xU7cfhVk3_s&feature=emb_logo
[11]: https://www.youtube.com/watch?v=Jq3s6cdsA0I&feature=youtu.be
[12]: https://www.digitaleurope.org/wp/wp-content/uploads/2021/02/DIGITALEUROPE-recommendations-on-the-Update-to-the-EU-Industrial-Strategy_Industrial-Forum-questionnaire-comms.pdf
[13]: https://www.youtube.com/watch?v=EDzQI7q2YKc
[14]: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12900-Europe-s-digital-decade-2030-digital-targets

View File

@ -1,250 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Convert your Windows install into a VM on Linux)
[#]: via: (https://opensource.com/article/21/1/virtualbox-windows-linux)
[#]: author: (David Both https://opensource.com/users/dboth)
Convert your Windows install into a VM on Linux
======
Here's how I configured a VirtualBox VM to use a physical Windows drive
on my Linux workstation.
![Puzzle pieces coming together to form a computer screen][1]
I use VirtualBox frequently to create virtual machines for testing new versions of Fedora, new application programs, and lots of administrative tools like Ansible. I have even used VirtualBox to test the creation of a Windows guest host.
Never have I ever used Windows as my primary operating system on any of my personal computers or even in a VM to perform some obscure task that cannot be done with Linux. I do, however, volunteer for an organization that uses one financial program that requires Windows. This program runs on the office manager's computer on Windows 10 Pro, which came preinstalled.
This financial application is not special, and [a better Linux program][2] could easily replace it, but I've found that many accountants and treasurers are extremely reluctant to make changes, so I've not yet been able to convince those in our organization to migrate.
This set of circumstances, along with a recent security scare, made it highly desirable to convert the host running Windows to Fedora and to run Windows and the accounting program in a VM on that host.
It is important to understand that I have an extreme dislike for Windows for multiple reasons. The primary ones that apply to this case are that I would hate to pay for another Windows license Windows 10 Pro costs about $200 to install it on a new VM. Also, Windows 10 requires enough information when setting it up on a new system or after an installation to enable crackers to steal one's identity, should the Microsoft database be breached. No one should need to provide their name, phone number, and birth date in order to register software.
### Getting started
The physical computer already had a 240GB NVMe m.2 storage device installed in the only available m.2 slot on the motherboard. I decided to install a new SATA SSD in the host and use the existing SSD with Windows on it as the storage device for the Windows VM. Kingston has an excellent overview of various SSD devices, form factors, and interfaces on its web site.
That approach meant that I wouldn't need to do a completely new installation of Windows or any of the existing application software. It also meant that the office manager who works at this computer would use Linux for all normal activities such as email, web access, document and spreadsheet creation with LibreOffice. This approach increases the host's security profile. The only time that the Windows VM would be used is to run the accounting program.
### Back it up first
Before I did anything else, I created a backup ISO image of the entire NVMe storage device. I made a partition on a 500GB external USB storage drive, created an ext4 filesystem on it, and then mounted that partition on **/mnt**. I used the **dd** command to create the image.
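The command was something like the following (the device name and image path here are illustrative, not necessarily the exact ones I used):

```
# dd if=/dev/nvme0n1 of=/mnt/windows-backup.img bs=4M status=progress
```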
I installed the new 500GB SATA SSD in the host and installed the Fedora 32 Xfce spin on it from a Live USB. At the initial reboot after installation, both the Linux and Windows drives were available on the GRUB2 boot menu. At this point, the host could be dual-booted between Linux and Windows.
### Looking for help in all the internet places
Now I needed some information on creating a VM that uses a physical hard drive or SSD as its storage device. I quickly discovered a lot of information about how to do this in the VirtualBox documentation and the internet in general. Although the VirtualBox documentation helped me to get started, it is not complete, leaving out some critical information. Most of the other information I found on the internet is also quite incomplete.
With some critical help from one of our Opensource.com Correspondents, Joshua Holm, I was able to break through the cruft and make this work in a repeatable procedure.
### Making it work
This procedure is actually fairly simple, although one arcane hack is required to make it work. The Windows and Linux operating systems were already in place by the time I was ready for this step.
First, I installed the most recent version of VirtualBox on the Linux host. VirtualBox can be installed from many distributions' software repositories, directly from the Oracle VirtualBox repository, or by downloading the desired package file from the VirtualBox web site and installing locally. I chose to download the AMD64 version, which is actually an installer and not a package. I use this version to circumvent a problem that is not related to this particular project.
The installation procedure always creates a **vboxusers** group in **/etc/group**. I added the users intended to run this VM to the **vboxusers** and **disk** groups in **/etc/group**. It is important to add the same users to the **disk** group because VirtualBox runs as the user who launched it and also requires direct access to the **/dev/sdx** device special file to work in this scenario. Adding users to the **disk** group provides that level of access, which they would not otherwise have.
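For example, for a hypothetical user named officemgr:

```
# usermod -aG vboxusers,disk officemgr
```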
I then created a directory to store the VMs and gave it ownership of **root.vboxusers** and **775** permissions. I used **/vms** for the directory, but it could be anything you want. By default, VirtualBox creates new virtual machines in a subdirectory of the user creating the VM. That would make it impossible to share access to the VM among multiple users without creating a massive security vulnerability. Placing the VM directory in an accessible location allows sharing the VMs.
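Something like this accomplishes that (a sketch of the steps just described):

```
# mkdir /vms
# chown root:vboxusers /vms
# chmod 775 /vms
```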
I started the VirtualBox Manager as a non-root user. I then used the VirtualBox **Preferences ==&gt; General** menu to set the Default Machine Folder to the directory **/vms**.
I created the VM without a virtual disk. The **Type** should be **Windows**, and the **Version** should be set to **Windows 10 64-bit**. Set a reasonable amount of RAM for the VM, but this can be changed later so long as the VM is off. On the **Hard disk** page of the installation, I chose the "Do not add a virtual hard disk" and clicked on **Create**. The new VM appeared in the VirtualBox Manager window. This procedure also created the **/vms/Test1** directory.
I did this using the **Advanced** menu and performed all of the configurations on a single page, as seen in Figure 1. The **Guided Mode** obtains the same information but requires more clicks to go through a window for each configuration item. It does provide a little more in the way of help text, but I did not need that.
![VirtualBox dialog box to create a new virtual machine but do not add a hard disk][3]
opensource.com
Figure 1: Create a new virtual machine but do not add a hard disk.
Then I needed to know which device was assigned by Linux to the raw Windows drive. As root in a terminal session, use the **lshw** command to discover the device assignment for the Windows disk. In this case, the device that represents the entire storage device is **/dev/sdb**.
```
# lshw -short -class disk,volume
H/W path           Device      Class          Description
=========================================================
/0/100/17/0        /dev/sda    disk           500GB CT500MX500SSD1
/0/100/17/0/1                  volume         2047MiB Windows FAT volume
/0/100/17/0/2      /dev/sda2   volume         4GiB EXT4 volume
/0/100/17/0/3      /dev/sda3   volume         459GiB LVM Physical Volume
/0/100/17/1        /dev/cdrom  disk           DVD+-RW DU-8A5LH
/0/100/17/0.0.0    /dev/sdb    disk           256GB TOSHIBA KSG60ZMV
/0/100/17/0.0.0/1  /dev/sdb1   volume         649MiB Windows FAT volume
/0/100/17/0.0.0/2  /dev/sdb2   volume         127MiB reserved partition
/0/100/17/0.0.0/3  /dev/sdb3   volume         236GiB Windows NTFS volume
/0/100/17/0.0.0/4  /dev/sdb4   volume         989MiB Windows NTFS volume
[root@office1 etc]#
```
Instead of a virtual storage device located in the **/vms/Test1** directory, VirtualBox needs to have a way to identify the physical hard drive from which it is to boot. This identification is accomplished by creating a ***.vmdk** file, which points to the raw physical disk that will be used as the storage device for the VM. As a non-root user, I created a **vmdk** file that points to the entire Windows device, **/dev/sdb**.
```
$ VBoxManage internalcommands createrawvmdk -filename /vms/Test1/Test1.vmdk -rawdisk /dev/sdb
RAW host disk access VMDK file /vms/Test1/Test1.vmdk created successfully.
```
I then used the **VirtualBox Manager File ==> Virtual Media Manager** dialog to add the **vmdk** disk to the available hard disks. I clicked on **Add**, and the default **/vms** location was displayed in the file management dialog. I selected the **Test1** directory and then the **Test1.vmdk** file. I then clicked **Open**, and the **Test1.vmdk** file was displayed in the list of available hard drives. I selected it and clicked on **Close**.
The next step was to add this **vmdk** disk to the storage devices for our VM. In the settings menu for the **Test1 VM**, I selected **Storage** and clicked on the icon to add a hard disk. This opened a dialog that showed the **Test1.vmdk** virtual disk file in a list entitled **Not attached**. I selected this file and clicked on the **Choose** button. This device is now displayed in the list of storage devices connected to the **Test1 VM**. The only other storage device on this VM is an empty CD/DVD-ROM drive.
I clicked on **OK** to complete the addition of this device to the VM.
There was one more item to configure before the new VM would work. Using the **VirtualBox Manager Settings** dialog for the **Test1 VM**, I navigated to the **System ==> Motherboard** page and placed a check in the box for **Enable EFI**. If you do not do this, VirtualBox will generate an error stating that it cannot find a bootable medium when you attempt to boot this VM.
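Both of these last steps, attaching the **vmdk** file and enabling EFI, can also be scripted with **VBoxManage**; a sketch, assuming the VM does not yet have a SATA controller:

```
$ VBoxManage storagectl Test1 --name "SATA" --add sata
$ VBoxManage storageattach Test1 --storagectl "SATA" --port 0 --device 0 --type hdd --medium /vms/Test1/Test1.vmdk
$ VBoxManage modifyvm Test1 --firmware efi
```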
The virtual machine now boots from the raw Windows 10 hard drive. However, I could not log in because I did not have a regular account on this system, and I also did not have access to the password for the Windows administrator account.
### Unlocking the drive
No, this section is not about breaking the encryption of the hard drive. Rather, it is about bypassing the password for one of the many Windows administrator accounts, a password that no one at the organization had.
Even though I could boot the Windows VM, I could not log in because I had no account on that host and asking people for their passwords is a horrible security breach. Nevertheless, I needed to log in to the VM to install the **VirtualBox Guest Additions**, which would provide seamless capture and release of the mouse pointer, allow me to resize the VM to be larger than 1024x768, and perform normal maintenance in the future.
This is a perfect use case for the Linux capability to change user passwords. Even though I am starting with the previous administrator's account, he will no longer support this system, and I have no way to discern his password or the patterns he used to generate them. I will simply clear the password for the previous sysadmin's account.
There is a very nice open source software tool specifically for this task. On the Linux host, I installed **chntpw**, which probably stands for something like, "Change NT PassWord."
```
# dnf -y install chntpw
```
I powered off the VM and then mounted the **/dev/sdb3** partition on **/mnt**. I determined that **/dev/sdb3** is the correct partition because it is the first large NTFS partition I saw in the output from the **lshw** command I performed previously. Be sure not to mount the partition while the VM is running; that could cause significant corruption of the data on the VM storage device. Note that the correct partition might be different on other hosts.
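The mount itself is a single command, run as root while the VM is powered off:

```
# mount /dev/sdb3 /mnt
```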
Navigate to the **/mnt/Windows/System32/config** directory. The **chntpw** utility program does not work if that is not the present working directory (PWD). Start the program.
```
# chntpw -i SAM
chntpw version 1.00 140201, (c) Petter N Hagen
Hive <SAM> name (from header): <\SystemRoot\System32\Config\SAM>
ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c <lh>
File size 131072 [20000] bytes, containing 11 pages (+ 1 headerpage)
Used for data: 367/44720 blocks/bytes, unused: 14/24560 blocks/bytes.
<>========<> chntpw Main Interactive Menu <>========<>
Loaded hives: <SAM>
  1 - Edit user data and passwords
  2 - List groups
      - - -
  9 - Registry editor, now with full write support!
  q - Quit (you will be asked if there is something to save)
What to do? [1] ->
```
The **chntpw** command uses a TUI (Text User Interface), which provides a set of menu options. When one of the primary menu items is chosen, a secondary menu is usually displayed. The menu names are clear, so I first chose menu item **1**.
```
What to do? [1] -> 1
===== chntpw Edit User Info & Passwords ====
| RID -|---------- Username ------------| Admin? |- Lock? --|
| 01f4 | Administrator                  | ADMIN  | dis/lock |
| 03ec | john                           | ADMIN  | dis/lock |
| 01f7 | DefaultAccount                 |        | dis/lock |
| 01f5 | Guest                          |        | dis/lock |
| 01f8 | WDAGUtilityAccount             |        | dis/lock |
Please enter user number (RID) or 0 to exit: [3e9]
```
Next, I selected our admin account, **john**, by typing the RID at the prompt. This displays information about the user and offers additional menu items to manage the account.
```
Please enter user number (RID) or 0 to exit: [3e9] 03eb
================= USER EDIT ====================
RID     : 1003 [03eb]
Username: john
fullname:
comment :
homedir :
00000221 = Users (which has 4 members)
00000220 = Administrators (which has 5 members)
Account bits: 0x0214 =
[ ] Disabled        | [ ] Homedir req.    | [ ] Passwd not req. |
[ ] Temp. duplicate | [X] Normal account  | [ ] NMS account     |
[ ] Domain trust ac | [ ] Wks trust act.  | [ ] Srv trust act   |
[X] Pwd don't expir | [ ] Auto lockout    | [ ] (unknown 0x08)  |
[ ] (unknown 0x10)  | [ ] (unknown 0x20)  | [ ] (unknown 0x40)  |
Failed login count: 0, while max tries is: 0
Total  login count: 47
- - - - User Edit Menu:
 1 - Clear (blank) user password
 2 - Unlock and enable user account [probably locked now]
 3 - Promote user (make user an administrator)
 4 - Add user to a group
 5 - Remove user from a group
 q - Quit editing user, back to user select
Select: [q] > 2
```
At this point, I chose menu item **2**, "Unlock and enable user account," which deletes the password and enables me to log in without a password. By the way, this results in an automatic login. I then exited the program. Be sure to unmount **/mnt** before proceeding.
I know, I know, but why not! I have already bypassed security on this drive and host, so it matters not one iota. At this point, I did log in to the old administrative account and created a new account for myself with a secure password. I then logged in as myself and deleted the old admin account so that no one else could use it.
There are also instructions on the internet for using the Windows Administrator account (01f4 in the list above). I could have deleted or changed the password on that account had there not been an organizational admin account in place. Note also that this procedure can be performed from a live USB running on the target host.
### Reactivating Windows
So I now had the Windows SSD running as a VM on my Fedora host. However, in a frustrating turn of events, after running for a few hours, Windows displayed a warning message indicating that I needed to "Activate Windows."
After following many more dead-end web pages, I finally gave up on trying to reactivate using an existing code because it appeared to have been somehow destroyed. Finally, when attempting to follow one of the online virtual support chat sessions, the virtual "Get help" application indicated that my instance of Windows 10 Pro was already activated. How can this be the case? It kept wanting me to activate it, yet when I tried, it said it was already activated.
### Or not
By the time I had spent several hours over three days doing research and experimentation, I decided to go back to booting the original SSD into Windows and come back to this at a later date. But then Windows, even when booted from the original storage device, demanded to be reactivated.
Searching the Microsoft support site was unhelpful. After having to fuss with the same automated support as before, I called the phone number provided, only to be told by an automated response system that all support for Windows 10 Pro was provided only via the internet. By now, I was nearly a day late in getting the computer running and installed back at the office.
### Back to the future
I finally sucked it up, purchased a copy of Windows 10 Home for about $120 and created a VM with a virtual storage device on which to install it.
I copied a large number of document and spreadsheet files to the office manager's home directory. I reinstalled the one Windows program we need and verified with the office manager that it worked and the data was all there.
### Final thoughts
So my objective was met, literally a day late and about $120 short, but using a more standard approach. I am still making a few adjustments to permissions and restoring the Thunderbird address book; I have some CSV backups to work from, but the ***.mab** files on the Windows drive contain very little information. I even used the Linux **find** command to locate all of them on the original storage device.
I went down a number of rabbit holes and had to extract myself and start over each time. I ran into problems that were not directly related to this project but that affected my work on it. Those problems included interesting things like mounting the Windows partition on **/mnt** on my Linux box and getting a message that the partition had been improperly closed by Windows (yes, on my Linux host) and that it had fixed the inconsistency. Not even Windows could do that after multiple reboots through its so-called "recovery" mode.
Perhaps you noticed some clues in the output data from the **chntpw** utility. I cut out some of the other user accounts that were displayed on my host for security reasons, but I saw from that information that all of the users were admins. Needless to say, I changed that. I am still surprised by the poor administrative practices I encounter, but I guess I should not be.
In the end, I was forced to purchase a license, but one that was at least a bit less expensive than the original. One thing I know is that the Linux piece of this worked perfectly once I had found all the necessary information. The issue was dealing with Windows activation. Some of you may have been successful at getting Windows reactivated. If so, I would still like to know how you did it, so please add your experience to the comments.
This is yet another reason I dislike Windows and only ever use Linux on my own systems. It is also one of the reasons I am converting all of the organization's computers to Linux. It just takes time and convincing. We only have this one accounting program left, and I need to work with the treasurer to find one that works for her. I understand this; I like my own tools, and I need them to work in a way that is best for me.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/virtualbox-windows-linux
作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen)
[2]: https://opensource.com/article/20/7/godbledger
[3]: https://opensource.com/sites/default/files/virtualbox.png

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (cooljelly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -2,7 +2,7 @@
[#]: via: (https://opensource.com/article/21/3/android-raspberry-pi)
[#]: author: (Sudeshna Sur https://opensource.com/users/sudeshna-sur)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: ( RiaXu)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -131,7 +131,7 @@ via: https://opensource.com/article/21/3/android-raspberry-pi
作者:[Sudeshna Sur][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[译者ID](https://github.com/ShuyRoy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,167 +0,0 @@
[#]: subject: (Learn Python dictionary values with Jupyter)
[#]: via: (https://opensource.com/article/21/3/dictionary-values-python)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Learn Python dictionary values with Jupyter
======
Implementing data structures with dictionaries helps you access
information more quickly.
![Hands on a keyboard with a Python book ][1]
Dictionaries are one of the Python programming language's core data structures. A Python dictionary consists of several key-value pairs; each pair maps the key to its associated value.
For example, say you're a teacher who wants to match students' names to their grades. You could use a Python dictionary to map the keys (names) to their associated values (grades).
If you need to find a specific student's grade on an exam, you can access it from your dictionary. This lookup shortcut should save you time over parsing an entire list to find the student's grade.
This article shows you how to access dictionary values through each value's key. Before you begin the tutorial, make sure you have the [Anaconda package manager][2] and [Jupyter Notebook][3] installed on your machine.
### 1\. Open a new notebook in Jupyter
Begin by opening Jupyter and running it in a tab in your web browser. Then:
1. Go to **File** in the top-left corner.
2. Select **New Notebook**, then **Python 3**.
![Create Jupyter notebook][4]
(Lauren Maffeo, [CC BY-SA 4.0][5])
Your new notebook starts off untitled, but you can rename it anything you'd like. I named mine **OpenSource.com Data Dictionary Tutorial**.
The line number you see in your new Jupyter notebook is where you will write your code. (That is, your input.)
On macOS, you'll hit **Shift** then **Return** to receive your output. Make sure to do this before creating new line numbers; otherwise, any additional code you write might not run.
### 2\. Create a key-value pair
Write the keys and values you wish to access in your dictionary. To start, you'll need to define what they are in the context of your dictionary:
```
empty_dictionary = {}
grades = {
    "Kelsey": 87,
    "Finley": 92
}
one_line = {"a": 1, "b": 2}
```
![Code for defining key-value pairs in the dictionary][6]
(Lauren Maffeo, [CC BY-SA 4.0][5])
This allows the dictionary to associate specific keys with their respective values. Dictionaries store data by key, which allows fast lookup.
### 3\. Access a dictionary value by its key
Say you want to find a specific dictionary value; in this case, a specific student's grade. To start, hit **Insert** then **Insert Cell Below**.
![Inserting a new cell in Jupyter][7]
(Lauren Maffeo, [CC BY-SA 4.0][5])
In your new cell, define the keys and values in your dictionary.
Then, find the value you need by asking your dictionary for that key's value. For example, look up a specific student, Kelsey:
```
# Access data in a dictionary
grades = {
    "Kelsey": 87,
    "Finley": 92
}
print(grades["Kelsey"])
87
```
![Code to look for a specific value][8]
(Lauren Maffeo, [CC BY-SA 4.0][5])
Once you've asked for Kelsey's grade (that is, the value you're trying to find), hit **Shift** (if you're on macOS), then **Return**.
You see your desired value—Kelsey's grade—as an output below your cell.
### 4\. Update an existing key
What if you realize you added the wrong grade for a student to your dictionary? You can fix it by assigning a new value to the existing key.
To start, choose which key you want to update. In this case, say you entered Finley's grade incorrectly. That is the key you'll update in this example.
To update Finley's grade, insert a new cell below, then create a new key-value pair. Tell your cell to print the dictionary, then hit **Shift** and **Return**:
```
grades["Finley"] = 90
print(grades)
{'Kelsey': 87, 'Finley': 90}
```
![Code for updating a key][9]
(Lauren Maffeo, [CC BY-SA 4.0][5])
The updated dictionary, with Finley's new grade, appears as your output.
### 5\. Add a new key
Say you get a new student's grade for an exam. You can add that student's name and grade to your dictionary by adding a new key-value pair.
Insert a new cell below, then add the new student's name and grade as a key-value pair. Once you're done, tell your cell to print the dictionary, then hit **Shift** and **Return**:
```
grades["Alex"] = 88
print(grades)
{'Kelsey': 87, 'Finley': 90, 'Alex': 88}
```
![Add a new key][10]
(Lauren Maffeo, [CC BY-SA 4.0][5])
All key-value pairs should appear as output.
### Using dictionaries
Remember that values can be any data type, but keys must be hashable (immutable) types such as strings, numbers, or tuples, and in practice it's rare for either to be [non-primitive types][11]. Additionally, although dictionaries in Python 3.7 and later preserve insertion order, they are not positional sequences. If you need an ordered sequence of items that you can index by position, it's best to create a list in Python, not a dictionary.
If you're thinking of using a dictionary, first confirm if your data is structured the right way, i.e., like a phone book. If not, then using a list, tuple, tree, or other data structure might be the best option.
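As one last minimal example (using the names from this tutorial plus a hypothetical student, Morgan), the `get()` method is a safer way to look up a key that might not exist:

```
grades = {
    "Kelsey": 87,
    "Finley": 90,
    "Alex": 88
}

# Direct indexing raises an error for a missing key:
# grades["Morgan"]  ->  KeyError: 'Morgan'  (Morgan is a hypothetical student)

# get() returns None, or a default you provide, instead
print(grades.get("Morgan"))         # None
print(grades.get("Morgan", "N/A"))  # N/A
```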
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/dictionary-values-python
作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd (Hands on a keyboard with a Python book )
[2]: https://docs.anaconda.com/anaconda/
[3]: https://opensource.com/article/18/3/getting-started-jupyter-notebooks
[4]: https://opensource.com/sites/default/files/uploads/new-jupyter-notebook.png (Create Jupyter notebook)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/define-keys-values.png (Code for defining key-value pairs in the dictionary)
[7]: https://opensource.com/sites/default/files/uploads/jupyter_insertcell.png (Inserting a new cell in Jupyter)
[8]: https://opensource.com/sites/default/files/uploads/lookforvalue.png (Code to look for a specific value)
[9]: https://opensource.com/sites/default/files/uploads/jupyter_updatekey.png (Code for updating a key)
[10]: https://opensource.com/sites/default/files/uploads/jupyter_addnewkey.png (Add a new key)
[11]: https://www.datacamp.com/community/tutorials/data-structures-python

View File

@ -1,288 +0,0 @@
[#]: subject: (Visualize multi-threaded Python programs with an open source tool)
[#]: via: (https://opensource.com/article/21/3/python-viztracer)
[#]: author: (Tian Gao https://opensource.com/users/gaogaotiantian)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Visualize multi-threaded Python programs with an open source tool
======
VizTracer traces concurrent Python programs to help with logging,
debugging, and profiling.
![Colorful sound wave graph][1]
Concurrency is an essential part of modern programming, as we have multiple cores and many tasks that need to cooperate. However, it's harder to understand concurrent programs when they are not running sequentially. It's not as easy for engineers to identify bugs and performance issues in these programs as it is in a single-thread, single-task program.
With Python, you have multiple options for concurrency. The most common ones are probably multi-threaded with the threading module, multiprocess with the subprocess and multiprocessing modules, and the more recent async syntax with the asyncio module. Before [VizTracer][2], there was a lack of tools to analyze programs using these techniques.
VizTracer is a tool for tracing and visualizing Python programs, which is helpful for logging, debugging, and profiling. Even though it works well for single-thread, single-task programs, its utility in concurrent programs is what makes it unique.
### Try a simple task
Start with a simple practice task: Figure out whether the integers in an array are prime numbers and return a Boolean array. Here is a simple solution:
```
def is_prime(n):
    for i in range(2, n):
        if n % i == 0:
            return False
    return True
def get_prime_arr(arr):
    return [is_prime(elem) for elem in arr]
```
Try to run it normally, in a single thread, with VizTracer:
```
import random

if __name__ == "__main__":
    num_arr = [random.randint(100, 10000) for _ in range(6000)]
    get_prime_arr(num_arr)
```

```
viztracer my_program.py
```
![Running code in a single thread][3]
(Tian Gao, [CC BY-SA 4.0][4])
The call-stack report indicates it took about 140ms, with most of the time spent in `get_prime_arr`.
![call-stack report][5]
(Tian Gao, [CC BY-SA 4.0][4])
It's just doing the `is_prime` function over and over again on the elements in the array.
This is what you would expect, and it's not that interesting (if you know VizTracer).
### Try a multi-thread program
Try doing it with a multi-thread program:
```
import random
from threading import Thread

if __name__ == "__main__":
    num_arr = [random.randint(100, 10000) for i in range(2000)]
    thread1 = Thread(target=get_prime_arr, args=(num_arr,))
    thread2 = Thread(target=get_prime_arr, args=(num_arr,))
    thread3 = Thread(target=get_prime_arr, args=(num_arr,))
    thread1.start()
    thread2.start()
    thread3.start()
    thread1.join()
    thread2.join()
    thread3.join()
```
To match the single-thread program's workload, this uses a 2,000-element array for three threads, simulating a situation where three threads are sharing the task.
![Multi-thread program][6]
(Tian Gao, [CC BY-SA 4.0][4])
As you would expect if you are familiar with Python's Global Interpreter Lock (GIL), it won't get any faster. It took a little bit more than 140ms due to the overhead. However, you can observe the concurrency of multiple threads:
![Concurrency of multiple threads][7]
(Tian Gao, [CC BY-SA 4.0][4])
When one thread was working (executing multiple `is_prime` functions), the other one was frozen (one `is_prime` function); later, they switched. This is due to the GIL, and it is the reason Python does not have true multi-threading. It can achieve concurrency, but not parallelism.
### Try it with multiprocessing
To achieve parallelism, the way to go is the multiprocessing library. Here is another version with multiprocessing:
```
import random
from multiprocessing import Process

if __name__ == "__main__":
    num_arr = [random.randint(100, 10000) for _ in range(2000)]
   
    p1 = Process(target=get_prime_arr, args=(num_arr,))
    p2 = Process(target=get_prime_arr, args=(num_arr,))
    p3 = Process(target=get_prime_arr, args=(num_arr,))
    p1.start()
    p2.start()
    p3.start()
    p1.join()
    p2.join()
    p3.join()
```
To run it with VizTracer, you need an extra argument:
```
viztracer --log_multiprocess my_program.py
```
![Running with extra argument][8]
(Tian Gao, [CC BY-SA 4.0][4])
The whole program finished in a little more than 50ms, with the actual task finishing before the 50ms mark. The program's speed roughly tripled.
To compare it with the multi-thread version, here is the multiprocess version:
![Multi-process version][9]
(Tian Gao, [CC BY-SA 4.0][4])
Without GIL, multiple processes can achieve parallelism, which means multiple `is_prime` functions can execute in parallel.
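As an aside that goes beyond the original example, the same three-way parallel fan-out can be written more compactly with the standard library's `concurrent.futures`; this is a sketch reusing the same `is_prime` workload, with the chunking as an illustrative choice:

```
import random
from concurrent.futures import ProcessPoolExecutor

def is_prime(n):
    for i in range(2, n):
        if n % i == 0:
            return False
    return True

def get_prime_arr(arr):
    return [is_prime(elem) for elem in arr]

if __name__ == "__main__":
    num_arr = [random.randint(100, 10000) for _ in range(6000)]
    # Split into three 2,000-element chunks and check them in parallel
    chunks = [num_arr[i:i + 2000] for i in range(0, len(num_arr), 2000)]
    with ProcessPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(get_prime_arr, chunks))
```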
However, Python's multi-threading is far from useless. For I/O-intensive programs (as opposed to computation-intensive ones), threads can still improve performance. For example, you can fake an I/O-bound task with sleep:
```
import time

def io_task():
    time.sleep(0.01)
```
Try it in a single-thread, single-task program:
```
if __name__ == "__main__":
    for _ in range(3):
        io_task()
```
![I/O-bound single-thread, single-task program][10]
(Tian Gao, [CC BY-SA 4.0][4])
The full program took about 30ms; nothing special.
Now use multi-thread:
```
from threading import Thread

if __name__ == "__main__":
    thread1 = Thread(target=io_task)
    thread2 = Thread(target=io_task)
    thread3 = Thread(target=io_task)
    thread1.start()
    thread2.start()
    thread3.start()
    thread1.join()
    thread2.join()
    thread3.join()
```
![I/O-bound multi-thread program][11]
(Tian Gao, [CC BY-SA 4.0][4])
The program took 10ms, and it's clear how the three threads worked concurrently and improved the overall performance.
### Try it with asyncio
Python also offers another interesting feature called async programming. You can make an async version of this task:
```
import asyncio
async def io_task():
    await asyncio.sleep(0.01)
async def main():
    t1 = asyncio.create_task(io_task())
    t2 = asyncio.create_task(io_task())
    t3 = asyncio.create_task(io_task())
    await t1
    await t2
    await t3
if __name__ == "__main__":
    asyncio.run(main())
```
As asyncio is literally a single-thread scheduler with tasks, you can use VizTracer directly on it:
![VizTracer with asyncio][12]
(Tian Gao, [CC BY-SA 4.0][4])
It still took 10ms, but most of the functions displayed are the underlying structure, which is probably not what users are interested in. To solve this, you can use `--log_async` to separate the real task:
```
viztracer --log_async my_program.py
```
![Using --log_async to separate tasks][13]
(Tian Gao, [CC BY-SA 4.0][4])
Now the user tasks are much clearer. For most of the time, no tasks are running (because the only thing it does is sleep). Here's the interesting part:
![Graph of task creation and execution][14]
(Tian Gao, [CC BY-SA 4.0][4])
This shows when the tasks were created and executed. Task-1 was the `main()` co-routine and created other tasks. Tasks 2, 3, and 4 executed `io_task` and `sleep` then waited for the wake-up. As the graph shows, there is no overlap between tasks because it's a single-thread program, and VizTracer visualized it this way to make it more understandable.
To make it more interesting, add a `time.sleep` call in the task to block the async loop:
```
async def io_task():
    time.sleep(0.01)
    await asyncio.sleep(0.01)
```
![time.sleep call][15]
(Tian Gao, [CC BY-SA 4.0][4])
The program took much longer (40ms), and the tasks filled the blanks in the async scheduler.
This feature is very helpful for diagnosing behavior and performance issues in async programs.
### See what's happening with VizTracer
With VizTracer, you can see what's going on with your program on a timeline, rather than imagining it from complicated logs. This helps you understand your concurrent programs better.
VizTracer is open source, released under the Apache 2.0 license, and supports all common operating systems (Linux, macOS, and Windows). You can learn more about its features and access its source code in [VizTracer's GitHub repository][16].
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/python-viztracer
作者:[Tian Gao][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gaogaotiantian
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/colorful_sound_wave.png?itok=jlUJG0bM (Colorful sound wave graph)
[2]: https://readthedocs.org/projects/viztracer/
[3]: https://opensource.com/sites/default/files/uploads/viztracer_singlethreadtask.png (Running code in a single thread)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/viztracer_callstackreport.png (call-stack report)
[6]: https://opensource.com/sites/default/files/uploads/viztracer_multithread.png (Multi-thread program)
[7]: https://opensource.com/sites/default/files/uploads/viztracer_concurrency.png (Concurrency of multiple threads)
[8]: https://opensource.com/sites/default/files/uploads/viztracer_multithreadrun.png (Running with extra argument)
[9]: https://opensource.com/sites/default/files/uploads/viztracer_comparewithmultiprocess.png (Multi-process version)
[10]: https://opensource.com/sites/default/files/uploads/io-bound_singlethread.png (I/O-bound single-thread, single-task program)
[11]: https://opensource.com/sites/default/files/uploads/io-bound_multithread.png (I/O-bound multi-thread program)
[12]: https://opensource.com/sites/default/files/uploads/viztracer_asyncio.png (VizTracer with asyncio)
[13]: https://opensource.com/sites/default/files/uploads/log_async.png (Using --log_async to separate tasks)
[14]: https://opensource.com/sites/default/files/uploads/taskcreation.png (Graph of task creation and execution)
[15]: https://opensource.com/sites/default/files/uploads/time.sleep_call.png (time.sleep call)
[16]: https://github.com/gaogaotiantian/viztracer

View File

@ -1,95 +0,0 @@
[#]: subject: (6 things to know about using WebAssembly on Firefox)
[#]: via: (https://opensource.com/article/21/3/webassembly-firefox)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
6 things to know about using WebAssembly on Firefox
======
Get to know the opportunities and limitations of running WebAssembly on
Firefox.
![Business woman on laptop sitting in front of window][1]
WebAssembly is a portable execution format that has drawn a lot of interest due to its ability to execute applications in the browser at near-native speed. By its nature, WebAssembly has some special properties and limitations. However, by combining it with other technologies, completely new possibilities arise, especially related to gaming in the browser.
This article describes the concepts, possibilities, and limitations of running WebAssembly on Firefox.
### The sandbox
WebAssembly has a [strict security policy][2]. A program or functional unit in WebAssembly is called a _module_. Each module instance runs in its own isolated memory space. Therefore, one module cannot access another module's virtual address space, even if they are loaded on the same web page. By design, WebAssembly also provides memory safety and control-flow integrity, which enables (almost) deterministic execution.
### Web APIs
Access to many kinds of input and output devices is granted via JavaScript [Web APIs][3]. In the future, access to Web APIs will be available without the detour through JavaScript, according to this [proposal][4]. C++ programmers can find information about accessing the Web APIs on [Emscripten.org][5]. Rust programmers can use the [wasm-bindgen][6] library that is documented on [rustwasm.github.io][7].
### File input/output
Because WebAssembly is executed in a sandboxed environment, it cannot access the host's filesystem when it is executed in a browser. However, Emscripten offers a solution in the form of a virtual filesystem.
Emscripten makes it possible to preload files to the memory filesystem at compile time. Those files can then be read from within the WebAssembly application, just as you would on an ordinary filesystem. This [tutorial][8] offers more information.
### Persistent data
If you need to store persistent data on the client-side, it must be done over a JavaScript Web API. Refer to Mozilla Developer Network's documentation on [browser storage limits and eviction criteria][9] for more detailed information about the different approaches.
### Memory management
WebAssembly modules operate on linear memory as a [stack machine][10]. This means that concepts like heap memory allocations are not available. However, if you are using `new` in C++ or `Box::new` in Rust, you would expect it to result in a heap memory allocation. The way heap memory allocation requests are translated into WebAssembly relies heavily upon the toolchain. You can find a detailed analysis of how different toolchains deal with heap memory allocations in Frank Rehberger's post about [_WebAssembly and dynamic memory_][11].
### Games!
In combination with [WebGL][12], WebAssembly enables native gaming in the browser due to its high execution speed. The big proprietary game engines [Unity][13] and [Unreal Engine 4][14] show what is possible with WebGL. There are also open source game engines that use WebAssembly and the WebGL interface. Here are some examples:
* Since November 2011, the [id Tech 4][15] engine (better known as the Doom 3 engine) has been available under the GPL license on [GitHub][16]. There is also a [WebAssembly port of Doom 3][17].
* The Urho3D engine provides some [impressive examples][18] that can run in the browser.
* If you like retro games, try this [Game Boy emulator][19].
* The [Godot engine is also capable of producing WebAssembly][20]. I couldn't find a demo, but the [Godot editor][21] has been ported to WebAssembly.
### More about WebAssembly
WebAssembly is a promising technology that I believe we will see more frequently in the future. In addition to executing in the browser, WebAssembly can also be used as a portable execution format. The [Wasmer][22] container host enables you to execute WebAssembly code on various platforms.
If you want more demos, examples, and tutorials, take a look at this [extensive collection of WebAssembly topics][23]. Not exclusive to WebAssembly but still worth a look are Mozilla's [collection of games and demos][24].
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/webassembly-firefox
作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
[2]: https://webassembly.org/docs/security/
[3]: https://developer.mozilla.org/en-US/docs/Web/API
[4]: https://github.com/WebAssembly/gc/blob/master/README.md
[5]: https://emscripten.org/docs/porting/connecting_cpp_and_javascript/Interacting-with-code.html
[6]: https://github.com/rustwasm/wasm-bindgen
[7]: https://rustwasm.github.io/wasm-bindgen/
[8]: https://emscripten.org/docs/api_reference/Filesystem-API.html
[9]: https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Browser_storage_limits_and_eviction_criteria
[10]: https://en.wikipedia.org/wiki/Stack_machine
[11]: https://frehberg.wordpress.com/webassembly-and-dynamic-memory/
[12]: https://en.wikipedia.org/wiki/WebGL
[13]: https://beta.unity3d.com/jonas/AngryBots/
[14]: https://www.youtube.com/watch?v=TwuIRcpeUWE
[15]: https://en.wikipedia.org/wiki/Id_Tech_4
[16]: https://github.com/id-Software/DOOM-3
[17]: https://wasm.continuation-labs.com/d3demo/
[18]: https://urho3d.github.io/samples/
[19]: https://vaporboy.net/
[20]: https://docs.godotengine.org/en/stable/development/compiling/compiling_for_web.html
[21]: https://godotengine.org/editor/latest/godot.tools.html
[22]: https://github.com/wasmerio/wasmer
[23]: https://github.com/mbasso/awesome-wasm
[24]: https://developer.mozilla.org/en-US/docs/Games/Examples

View File

@ -1,165 +0,0 @@
[#]: subject: (How to write 'Hello World' in WebAssembly)
[#]: via: (https://opensource.com/article/21/3/hello-world-webassembly)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How to write 'Hello World' in WebAssembly
======
Get started writing WebAssembly in human-readable text with this
step-by-step tutorial.
![Hello World inked on bread][1]
WebAssembly is a bytecode format that [virtually every browser][2] can compile to its host system's machine code. Alongside JavaScript and WebGL, WebAssembly fulfills the demand for porting applications for platform-independent use in the web browser. As a compilation target for C++ and Rust, WebAssembly enables web browsers to execute code at near-native speed.
When you talk about a WebAssembly application, you must distinguish among three states:
1. **Source code (e.g., C++ or Rust):** You have an application written in a compatible language that you want to execute in the browser.
2. **WebAssembly bytecode:** You choose WebAssembly bytecode as your compilation target. As a result, you get a `.wasm` file.
3. **Machine code (opcode):** The browser loads the `.wasm` file and compiles it to the corresponding machine code of its host system.
WebAssembly also has a text format that represents the binary format in human-readable text. For the sake of simplicity, I will refer to this as **WASM-text**. WASM-text can be compared to high-level assembly language. Of course, you would not write a complete application based on WASM-text, but it's good to know how it works under the hood (especially for debugging and performance optimization).
This article will guide you through creating the classic _Hello World_ program in WASM-text.
### Creating the .wat file
WASM-text files usually end with `.wat`. Start from scratch by creating an empty text file named `helloworld.wat`, open it with your favorite text editor, and paste in:
```
(module
    ;; Imports from JavaScript namespace
    (import "console" "log" (func $log (param i32 i32))) ;; Import log function
    (import "js" "mem" (memory 1)) ;; Import 1 page of memory (64KB)

    ;; Data section of our module
    (data (i32.const 0) "Hello World from WebAssembly!")

    ;; Function declaration: Exported as helloWorld(), no arguments
    (func (export "helloWorld")
        i32.const 0  ;; pass offset 0 to log
        i32.const 29 ;; pass length 29 to log (strlen of sample text)
        call $log
    )
)
```
The WASM-text format is based upon S-expressions. To enable interaction, JavaScript functions are imported with the `import` statement, and WebAssembly functions are exported with the `export` statement. For this example, import the `log` function from the `console` module, which takes two parameters of type `i32` as input, and import one page of memory (64KB) to store the string.
The string will be written into the `data` section at offset `0`. The `data` section is an overlay of your memory, and the memory is allocated in the JavaScript part.
Functions are marked with the keyword `func`. The stack is empty when entering a function. Function parameters are pushed onto the stack (here offset and length) before another function is called (see `call $log`). When a function returns an `f32` type (for example), an `f32` variable must remain on the stack when leaving the function (but this is not the case in this example).
### Creating the .wasm file
The WASM-text and the WebAssembly bytecode have 1:1 correspondence. This means you can convert WASM-text into bytecode (and vice versa). You already have the WASM-text, and now you want to create the bytecode.
The conversion can be performed with the [WebAssembly Binary Toolkit][3] (WABT). Make a clone of the repository at that link and follow the installation instructions.
After you build the toolchain, convert WASM-text to bytecode by opening a console and entering:
```
wat2wasm helloworld.wat -o helloworld.wasm
```
You can also convert bytecode to WASM-text with:
```
wasm2wat helloworld.wasm -o helloworld_reverse.wat
```
A `.wat` file created from a `.wasm` file does not include any function or parameter names. By default, WebAssembly identifies functions and parameters by their index.
### Compiling the .wasm file
Currently, WebAssembly only coexists with JavaScript, so you have to write a short script to load and compile the `.wasm` file and do the function calls. You also need to define the functions you will import in your WebAssembly module.
Create an empty text file and name it `helloworld.html`, then open your favorite text editor and paste in:
```
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Simple template</title>
  </head>
  <body>
    <script>

      var memory = new WebAssembly.Memory({initial:1});
      function consoleLogString(offset, length) {
        var bytes = new Uint8Array(memory.buffer, offset, length);
        var string = new TextDecoder('utf8').decode(bytes);
        console.log(string);
      };
      var importObject = {
        console: {
          log: consoleLogString
        },
        js : {
          mem: memory
        }
      };

      WebAssembly.instantiateStreaming(fetch('helloworld.wasm'), importObject)
      .then(obj => {
        obj.instance.exports.helloWorld();
      });

    </script>
  </body>
</html>
```
The `WebAssembly.Memory(...)` method returns one page of memory that is 64KB in size. The function `consoleLogString` reads a string from that memory page based on the length and offset. Both objects are passed to your WebAssembly module as part of the `importObject`.
Before you can run this example, you may have to allow Firefox to access files from this directory by typing `about:config` in the address line and setting `privacy.file_unique_origin` to `false`:
![Firefox setting][4]
(Stephan Avenwedde, [CC BY-SA 4.0][5])
> **Caution:** This will make you vulnerable to the [CVE-2019-11730][6] security issue.
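An alternative that avoids weakening this setting is to serve the directory over a local web server instead of opening the file directly; for example, recent Python 3 releases ship a minimal built-in server (which should also send the `application/wasm` MIME type that `instantiateStreaming` expects):

```
$ python3 -m http.server 8000
```

Then browse to `http://localhost:8000/helloworld.html`.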
Now, open `helloworld.html` in Firefox and enter **Ctrl**+**Shift**+**K** to open the developer console.
![Debugger output][7]
(Stephan Avenwedde, [CC BY-SA 4.0][5])
### Learn more
This Hello World example is just one of the detailed tutorials in MDN's [Understanding WebAssembly text format][8] documentation. If you want to learn more about WebAssembly and how it works under the hood, take a look at these docs.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/hello-world-webassembly
作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/helloworld_bread_lead.jpeg?itok=1r8Uu7gk (Hello World inked on bread)
[2]: https://developer.mozilla.org/en-US/docs/WebAssembly#browser_compatibility
[3]: https://github.com/webassembly/wabt
[4]: https://opensource.com/sites/default/files/uploads/firefox_setting.png (Firefox setting)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://www.mozilla.org/en-US/security/advisories/mfsa2019-21/#CVE-2019-11730
[7]: https://opensource.com/sites/default/files/uploads/debugger_output.png (Debugger output)
[8]: https://developer.mozilla.org/en-US/docs/WebAssembly/Understanding_the_text_format

View File

@ -1,207 +0,0 @@
[#]: subject: (Practice using the Linux grep command)
[#]: via: (https://opensource.com/article/21/3/grep-cheat-sheet)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Practice using the Linux grep command
======
Learn the basics on searching for info in your files, then download our
cheat sheet for a quick reference guide to grep and regex.
![Hand putting a Linux file folder into a drawer][1]
One of the classic Unix commands, developed way back in 1974 by Ken Thompson, is the Global Regular Expression Print (grep) command. It's so ubiquitous in computing that it's frequently used as a verb ("grepping through a file") and, depending on how geeky your audience is, it fits nicely into real-world scenarios, too. (For example, "I'll have to grep my memory banks to recall that information.") In short, grep is a way to search through a file for a specific pattern of characters. If that sounds like the modern Find function available in any word processor or text editor, then you've already experienced grep's effects on the computing industry.
Far from just being a quaint old command that's been supplanted by modern technology, grep's true power lies in two aspects:
* Grep works in the terminal and operates on streams of data, so you can incorporate it into complex processes. You can not only _find_ a word in a text file; you can extract the word, send it to another command, and so on.
* Grep uses regular expression to provide a flexible search capability.
Learning the `grep` command is easy, although it does take some practice. This article introduces you to some of its features I find most useful.
**[Download our free [grep cheat sheet][2]]**
### Installing grep
If you're using Linux, you already have grep installed.
On macOS, you have the BSD version of grep. This differs slightly from the GNU version, so if you want to follow along exactly with this article, then install GNU grep from a project like [Homebrew][3] or [MacPorts][4].
### Basic grep
The basic grep syntax is always the same. You provide the `grep` command a pattern and a file you want it to search. In return, it prints each line to your terminal with a match.
```
$ grep gnu gpl-3.0.txt
    along with this program.  If not, see <http://www.gnu.org/licenses/>.
<http://www.gnu.org/licenses/>.
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
```
By default, the `grep` command is case-sensitive, so "gnu" is different from "GNU" or "Gnu." You can make it ignore capitalization with the `--ignore-case` option.
```
$ grep --ignore-case gnu gpl-3.0.txt
                    GNU GENERAL PUBLIC LICENSE
  The GNU General Public License is a free, copyleft license for
the GNU General Public License is intended to guarantee your freedom to
GNU General Public License for most of our software; it applies also to
[...16 more results...]
<http://www.gnu.org/licenses/>.
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
```
You can also make the `grep` command return all lines _without_ a match by using the `--invert-match` option:
```
$ grep --invert-match \
--ignore-case gnu gpl-3.0.txt
                      Version 3, 29 June 2007
 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
[...648 lines...]
Public License instead of this License.  But first, please read
```
### Pipes
It's useful to be able to find text in a file, but the true power of [POSIX][8] is its ability to chain commands together through "pipes." I find that my best use of grep is when it's combined with other tools, like cut, tr, or [curl][9].
For instance, assume I have a file that lists some technical papers I want to download. I could open the file and manually click on each link, and then click through Firefox options to save each file to my hard drive, but that's a lot of time and clicking. Instead, I could grep for the links in the file, printing _only_ the matching string by using the `--only-matching` option:
```
$ grep --only-matching http\:\/\/.*pdf example.html
http://example.com/linux_whitepaper.pdf
http://example.com/bsd_whitepaper.pdf
http://example.com/important_security_topic.pdf
```
The output is a list of URLs, each on one line. This is a natural fit for how Bash processes data, so instead of having the URLs printed to my terminal, I can pipe them into `xargs`, which hands each one to `curl`:
```
$ grep --only-matching http\:\/\/.*pdf \
example.html | xargs -n 1 curl --remote-name
```
This downloads each file, saving it according to its remote filename onto my hard drive.
My search pattern in this example may seem cryptic. That's because it uses regular expression, a kind of "wildcard" language that's particularly useful when searching broadly through lots of text.
### Regular expression
Nobody is under the illusion that regular expression ("regex" for short) is easy. However, I find it often has a worse reputation than it deserves. Admittedly, there's the potential for people to get a little _too clever_ with regex until it's so unreadable and so broad that it folds in on itself, but you don't have to overdo your regex. Here's a brief introduction to regex the way I use it.
First, create a file called `example.txt` and enter this text into it:
```
Albania
Algeria
Canada
0
1
3
11
```
The most basic element of regex is the humble `.` character. It represents a single character.
```
$ grep Can.da example.txt
Canada
```
The pattern `Can.da` successfully returned `Canada` because the `.` character represented any _one_ character.
The `.` wildcard can be modified to represent more than one character with these notations:
* `?` matches the preceding item zero or one time
* `*` matches the preceding item zero or more times
* `+` matches the preceding item one or more times
  * `{4}` matches the preceding item exactly four (or any number you enter in the braces) times
Armed with this knowledge, you can practice regex on `example.txt` all afternoon, seeing what interesting combinations you come up with. Some won't work; others will. The important thing is to analyze the results, so you understand why.
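To see the braces quantifier in action, try extended regex (the `-E` option); `{2}` matches exactly two consecutive digits, so only `11` qualifies:

```
$ grep -E '[0-9]{2}' example.txt
11
```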
For instance, this fails to return any country:
```
$ grep A.a example.txt
```
It fails because the `.` character can only ever match a single character unless you level it up. Using the `*` character, you can tell `grep` to match a single character zero or as many times as necessary until it reaches the end of the word. Because you know the list you're dealing with, you know that _zero times_ is useless in this instance. There are definitely no three-letter country names in this list. So instead, you can use `+` to match a single character at least once and then again as many times as necessary until the end of the word:
```
$ grep A.+a example.txt
Albania
Algeria
```
You can use square brackets to provide a list of letters:
```
$ grep [AC].+a example.txt
Albania
Algeria
Canada
```
This works for numbers, too. The results may surprise you:
```
$ grep [1-9] example.txt
1
3
11
```
Are you surprised to see 11 in a search for the digits 1 to 9? What happens if you add 13 to your list? These numbers are returned because grep matches any line containing at least one character from the bracketed set, and both 11 and 13 contain a 1.
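To restrict the match to lines consisting of exactly one digit, anchor the pattern with `^` (start of line) and `$` (end of line):

```
$ grep '^[1-9]$' example.txt
1
3
```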
As you can see, regex is something of a puzzle, but through experimentation and practice, you can get comfortable with it and use it to improve the way you grep through your data.
### Download the cheatsheet
The `grep` command has far more options than I demonstrated in this article. There are options to better format results, list files and line numbers containing matches, provide context for results by printing the lines surrounding a match, and much more. If you're learning grep, or you just find yourself using it often and resorting to searching through its `info` pages, you'll do yourself a favor by downloading our cheat sheet for it. The cheat sheet uses short options (`-v` instead of `--invert-matching`, for instance) as a way to get you familiar with common grep shorthand. It also contains a regex section to help you remember the most common regex codes. [Download the grep cheat sheet today!][2]
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/grep-cheat-sheet
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/downloads/grep-cheat-sheet
[3]: https://opensource.com/article/20/6/homebrew-mac
[4]: https://opensource.com/article/20/11/macports
[5]: http://www.gnu.org/licenses/\>
[6]: http://www.gnu.org/philosophy/why-not-lgpl.html\>
[7]: http://fsf.org/\>
[8]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[9]: https://opensource.com/downloads/curl-command-cheat-sheet

View File

@ -1,145 +0,0 @@
[#]: subject: (4 cool new projects to try in Copr for March 2021)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-march-2021/)
[#]: author: (Jakub Kadlčík https://fedoramagazine.org/author/frostyx/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
4 cool new projects to try in Copr for March 2021
======
![][1]
Copr is a [collection][2] of personal repositories for software that isn't carried in Fedora Linux. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open-source. Copr can offer these projects outside the Fedora set of packages. Software in Copr isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
This article presents a few new and interesting projects in Copr. If you're new to using Copr, see the [Copr User Documentation][3] for how to get started.
### Ytfzf
[Ytfzf][5] is a simple command-line tool for searching and watching YouTube videos. It provides a fast and intuitive interface built around fuzzy find utility [fzf][6]. It uses [youtube-dl][7] to download selected videos and opens an external video player to watch them. Because of this approach, _ytfzf_ is significantly less resource-heavy than a web browser with YouTube. It supports thumbnails (via [ueberzug][8]), history saving, queueing multiple videos or downloading them for later, channel subscriptions, and other handy features. Thanks to tools like [dmenu][9] or [rofi][10], it can even be used outside the terminal.
![][11]
#### [][12] Installation instructions
The [repo][13] currently provides Ytfzf for Fedora 33 and 34. To install it, use these commands:
```
sudo dnf copr enable bhoman/ytfzf
sudo dnf install ytfzf
```
### [][14] Gemini clients
Have you ever wondered what your internet browsing experience would be like if the World Wide Web had gone an entirely different route and hadn't adopted CSS and client-side scripting? [Gemini][15] is a modern alternative to the HTTPS protocol, although it doesn't intend to replace it. The [stenstorp/gemini][16] Copr project provides various clients for browsing Gemini _websites_, namely [Castor][17], [Dragonstone][18], [Kristall][19], and [Lagrange][20].
The [Gemini][21] site provides a list of some hosts that use this protocol. Using Castor to visit this site is shown here:
![][22]
#### [][23] Installation instructions
The [repo][16] currently provides Gemini clients for Fedora 32, 33, 34, and Fedora Rawhide, as well as for EPEL 7 and 8 and CentOS Stream. To install a browser, choose from the install commands shown here:
```
sudo dnf copr enable stenstorp/gemini
sudo dnf install castor
sudo dnf install dragonstone
sudo dnf install kristall
sudo dnf install lagrange
```
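After installing a client, point it at a Gemini page. A hedged sketch using Castor (passing the URL as a command-line argument is an assumption; the clients can also be launched from the desktop menu):
```
# open the Gemini project page in the Castor browser
castor gemini://gemini.circumlunar.space/
```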
### [][24] Ly
[Ly][25] is a lightweight login manager for Linux and BSD. It features an ncurses-like text-based user interface. Theoretically, it should support all X desktop environments and window managers (many of them [were tested][26]). Ly also provides basic Wayland support (Sway works very well). Somewhere in the configuration, there is an easter egg option to enable the famous [PSX DOOM fire][27] animation in the background, which, on its own, is worth checking out.
![][28]
#### [][29] Installation instructions
The [repo][30] currently provides Ly for Fedora 32, 33, and Fedora Rawhide. To install it, use these commands:
```
sudo dnf copr enable dhalucario/ly
sudo dnf install ly
```
Before setting up Ly to be your system login screen, run the _ly_ command in the terminal to make sure it works properly. Then proceed with disabling your current login manager and enabling Ly instead.
```
sudo systemctl disable gdm
sudo systemctl enable ly
```
Finally, restart your computer for the changes to take effect.
### [][31] AWS CLI v2
[AWS CLI v2][32] brings a steady and methodical evolution based on community feedback, rather than a massive redesign of the original client. It introduces new mechanisms for configuring credentials and now allows the user to import credentials from the _.csv_ files generated in the AWS Console. It also provides support for AWS SSO. Other big improvements are server-side auto-completion and interactive parameter generation. A fresh new feature is interactive wizards, which provide a higher level of abstraction and combine multiple AWS API calls to create, update, or delete AWS resources.
![][33]
#### [][34] Installation instructions
The [repo][35] currently provides AWS CLI v2 for Fedora Linux 32, 33, 34, and Fedora Rawhide. To install it, use these commands:
```
sudo dnf copr enable spot/aws-cli-2
sudo dnf install aws-cli-2
```
Naturally, access to an AWS account is necessary.
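As an example, the credential and interactivity features mentioned above can be tried like this (a sketch; the file name is hypothetical and subcommand availability may depend on the exact CLI version):
```
# import credentials from a CSV file generated in the AWS Console
aws configure import --csv file://credentials.csv

# set up AWS SSO interactively
aws configure sso

# start a session with server-side auto-completion
aws --cli-auto-prompt
```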
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-march-2021/
作者:[Jakub Kadlčík][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/frostyx/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/10/4-copr-945x400-1-816x345.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html
[4]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#droidcam
[5]: https://github.com/pystardust/ytfzf
[6]: https://github.com/junegunn/fzf
[7]: http://ytdl-org.github.io/youtube-dl/
[8]: https://github.com/seebye/ueberzug
[9]: https://tools.suckless.org/dmenu/
[10]: https://github.com/davatorium/rofi
[11]: https://fedoramagazine.org/wp-content/uploads/2021/03/ytfzf.png
[12]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions
[13]: https://copr.fedorainfracloud.org/coprs/bhoman/ytfzf/
[14]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#gemini-clients
[15]: https://gemini.circumlunar.space/
[16]: https://copr.fedorainfracloud.org/coprs/stenstorp/gemini/
[17]: https://git.sr.ht/~julienxx/castor
[18]: https://gitlab.com/baschdel/dragonstone
[19]: https://kristall.random-projects.net/
[20]: https://github.com/skyjake/lagrange
[21]: https://gemini.circumlunar.space/servers/
[22]: https://fedoramagazine.org/wp-content/uploads/2021/03/gemini.png
[23]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-1
[24]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#ly
[25]: https://github.com/nullgemm/ly
[26]: https://github.com/nullgemm/ly#support
[27]: https://fabiensanglard.net/doom_fire_psx/index.html
[28]: https://fedoramagazine.org/wp-content/uploads/2021/03/ly.png
[29]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-2
[30]: https://copr.fedorainfracloud.org/coprs/dhalucario/ly/
[31]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#aws-cli-v2
[32]: https://aws.amazon.com/blogs/developer/aws-cli-v2-is-now-generally-available/
[33]: https://fedoramagazine.org/wp-content/uploads/2021/03/aws-cli-2.png
[34]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-3
[35]: https://copr.fedorainfracloud.org/coprs/spot/aws-cli-2/

View File

@ -0,0 +1,104 @@
[#]: subject: (6 WordPress plugins for restaurants and retailers)
[#]: via: (https://opensource.com/article/21/3/wordpress-plugins-retail)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
6 WordPress plugins for restaurants and retailers
======
The end of the pandemic won't be the end of curbside pickup, delivery,
and other shopping conveniences, so set your website up for success with
these plugins.
![An open for business sign.][1]
The pandemic changed how many people prefer to do business—probably permanently. Restaurants and other local retail establishments can no longer rely on walk-in trade, as they always have. Online ordering of food and other items has become the norm and the expectation. It is unlikely consumers will turn their backs on the convenience of e-commerce once the pandemic is over.
WordPress is a great platform for getting your business' message out to consumers and ensuring you're meeting their e-commerce needs. And its ecosystem of plugins extends the platform to increase its usefulness to you and your customers.
The six open source plugins described below will help you create a WordPress site that meets your customers' preferences for online shopping, curbside pickup, and delivery, and build your brand and your customer base—now and post-pandemic.
### E-commerce
![WooCommerce][2]
WooCommerce (Don Watkins, [CC BY-SA 4.0][3])
[WooCommerce][4] says it is the most popular e-commerce plugin for the WordPress platform. Its website says: "Our core platform is free, flexible, and amplified by a global community. The freedom of open source means you retain full ownership of your store's content and data forever." The plugin, which is under active development, enables you to create enticing web storefronts. It was created by WordPress developer [Automattic][5] and is released under the GPLv3.
### Order, delivery, and pickup
![Curbside Pickup][6]
Curbside Pickup (Don Watkins, [CC BY-SA 4.0][3])
[Curbside Pickup][7] is a complete system to manage your curbside pickup experience. It's ideal for any restaurant, library, retailer, or other organization that offers curbside pickup for purchases. The plugin, which is licensed GPLv3, works with any theme that supports WooCommerce.
![Food Store][8]
[Food Store][9]
If you're looking for an online food delivery and pickup system, [Food Store][9] could meet your needs. It extends WordPress' core functions and capabilities to convert your brick-and-mortar restaurant into a food-ordering hub. The plugin, licensed under GPLv2, is under active development with over 1,000 installations.
![RestroPress][10]
[RestroPress][11]
[RestroPress][11] is another option to add a food-ordering system to your website. The GPLv2-licensed plugin has over 4,000 installations and supports payment through PayPal, Amazon, and cash on delivery.
![RestaurantPress][12]
[RestaurantPress][13]
If you want to post the menu for your restaurant, bar, or cafe online, try [RestaurantPress][13]. According to its website, the plugin, which is available under a GPLv2 license, "provides modern responsive menu templates that adapt to any devices." It has over 2,000 installations and integrates with WooCommerce.
### Communications
![Corona Virus \(COVID-19\) Banner & Live Data][14]
Corona Virus (COVID-19) Banner & Live Data (Don Watkins, [CC BY-SA 4.0][3])
You can keep your customers informed about COVID-19 policies with the [Corona Virus Banner & Live Data][15] plugin. It adds a simple banner with live coronavirus information to your website. It has over 6,000 active installations and is open source under GPLv2.
![MailPoet][16]
MailPoet (Don Watkins, [CC BY-SA 4.0][3])
As rules and restrictions change rapidly, an email newsletter is a great way to keep your customers informed. The [MailPoet][17] WordPress plugin makes it easy to manage and email information about new offerings, hours, and more. Through MailPoet, website visitors can subscribe to your newsletter, which you can create and send with WordPress. It has over 300,000 installations and is open source under GPLv2.
### Prepare for the post-pandemic era
Pandemic-driven lockdowns made online shopping, curbside pickup, and home delivery necessities, but these shopping trends are not going anywhere. As the pandemic subsides, restrictions will ease, and we will start shopping, dining, and doing business in person more. Still, consumers have come to appreciate the ease and convenience of e-commerce, even for small local restaurants and stores, and these plugins will help your WordPress site meet their needs.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/wordpress-plugins-retail
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_business_sign_store.jpg?itok=g4QibRqg (An open for business sign.)
[2]: https://opensource.com/sites/default/files/pictures/woocommerce.png (WooCommerce)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://wordpress.org/plugins/woocommerce/
[5]: https://automattic.com/
[6]: https://opensource.com/sites/default/files/pictures/curbsidepickup.png (Curbside Pickup)
[7]: https://wordpress.org/plugins/curbside-pickup/
[8]: https://opensource.com/sites/default/files/pictures/food-store.png (Food Store)
[9]: https://wordpress.org/plugins/food-store/
[10]: https://opensource.com/sites/default/files/pictures/restropress.png (RestroPress)
[11]: https://wordpress.org/plugins/restropress/
[12]: https://opensource.com/sites/default/files/pictures/restaurantpress.png (RestaurantPress)
[13]: https://wordpress.org/plugins/restaurantpress/
[14]: https://opensource.com/sites/default/files/pictures/covid19updatebanner.png (Corona Virus (COVID-19) Banner & Live Data)
[15]: https://wordpress.org/plugins/corona-virus-covid-19-banner/
[16]: https://opensource.com/sites/default/files/pictures/mailpoet1.png (MailPoet)
[17]: https://wordpress.org/plugins/mailpoet/

View File

@ -0,0 +1,144 @@
[#]: subject: (Productivity with Ulauncher)
[#]: via: (https://fedoramagazine.org/ulauncher-productivity/)
[#]: author: (Troy Curtis Jr https://fedoramagazine.org/author/troycurtisjr/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Productivity with Ulauncher
======
![Productivity with Ulauncher][1]
Photo by [Freddy Castro][2] on [Unsplash][3]
Application launchers are a category of productivity software that not everyone is familiar with, and yet most people use the basic concepts without realizing it. As the name implies, this software launches applications, but launchers also offer other capabilities.
Examples of dedicated Linux launchers include [dmenu][4], [Synapse][5], and [Albert][6]. On MacOS, some examples are [Quicksilver][7] and [Alfred][8]. Many modern desktops include basic versions as well. On Fedora Linux, the Gnome 3 [activities overview][9] uses search to open applications and more, while MacOS has the built-in launcher Spotlight.
While these applications have great feature sets, this article focuses on productivity with [Ulauncher][10].
### What is Ulauncher?
[Ulauncher][10] is a new application launcher written in Python, with the first Fedora package available in March 2020 for [Fedora Linux 32][11]. The core focuses on basic functionality with a nice [interface for extensions][12]. Like most application launchers, the key idea in Ulauncher is search. Search is a powerful productivity boost, especially for repetitive tasks.
Typical menu-driven interfaces work great for discovery when you aren't sure what options are available. However, when the same action needs to happen repeatedly, it is a real time sink to navigate into 3 nested sub-menus over and over again. On the other hand, [hotkeys][13] give immediate access to specific actions, but they can be difficult to remember, especially after exhausting all the obvious mnemonics. Is [_Control+C_][14] “copy”, or is it “cancel”? Search is a middle ground, giving a means to get to a specific command quickly while supporting discovery by typing only some remembered word or fragment. Exploring by search works especially well if tags and descriptions are available. Ulauncher supplies the search framework that extensions can use to build all manner of productivity-enhancing actions.
### Getting started
Getting the core functionality of Ulauncher on any Fedora OS is trivial; install using _[dnf][15]_:
```
sudo dnf install ulauncher
```
Once installed, use any standard desktop launching method to start Ulauncher for the first time. A basic dialog should pop up; if not, try launching it again to toggle the input box on. Click the gear icon on the right side to open the preferences dialog.
![Ulauncher input box][16]
A number of options are available, but the most important when starting out are _Launch at login_ and the hotkey. The default hotkey is _Control+space_, but it can be changed. Running in Wayland needs additional configuration for consistent operation; see the [Ulauncher wiki][17] for details. Users of “Focus on Hover” or “Sloppy Focus” should also enable the “Don't hide after losing mouse focus” option. Otherwise, Ulauncher disappears while typing in some cases.
### Ulauncher basics
The idea of any application launcher, like Ulauncher, is fast access at any time. Press the hotkey and the input box shows up on top of the current application. Type out and execute the desired command and the dialog hides until the next use. Unsurprisingly, the most basic operation is launching applications. This is similar to most modern desktop environments. Hit the hotkey to bring up the dialog and start typing, for example _te_, and a list of matches comes up. Keep typing to further refine the search, or navigate to the entry using the arrow keys. For even faster access, use _Alt+#_ to directly choose a result.
![Ulauncher dialog searching for keywords with “te”][18]
Ulauncher can also do quick calculations and navigate the file system. To calculate, hit the hotkey and type a math expression. The result list dynamically updates with the result, and hitting _Enter_ copies the value to the clipboard. Start file-system navigation by typing _/_ to start at the root directory or _~/_ to start in the home directory. Selecting a directory lists that directory's contents, and typing another argument filters the displayed list. Locate the right file by repeatedly descending directories. Selecting a file opens it, while _Alt+Enter_ opens the folder containing the file.
### Ulauncher shortcuts
The first bit of customization comes in the form of shortcuts. The _Shortcuts_ tab in the preferences dialog lists all the current shortcuts. Shortcuts can be direct commands, URL aliases, URLs with argument substitution, or small scripts. Basic shortcuts for Wikipedia, StackOverflow, and Google come pre-configured, but custom shortcuts are easy to add.
![Ulauncher shortcuts preferences tab][19]
For instance, to create a DuckDuckGo search shortcut, click _Add Shortcut_ in the _Shortcuts_ preferences tab and add the name and keyword _duck_ with the query _<https://duckduckgo.com/?q=%s>_. Any argument given to the _duck_ keyword replaces _%s_ in the query, and the URL opens in the default browser. Now, typing _duck fedora_ will bring up a DuckDuckGo search using the supplied terms, in this case _fedora_.
A more complex shortcut is a script to convert [UTC time][20] to local time. Once again click _Add Shortcut_ and this time use the keyword _utc_. In the _Query or Script_ text box, include the following script:
```
#!/bin/bash
tzdate=$(date -d "$1 UTC")
zenity --info --no-wrap --text="$tzdate"
```
This script takes the first argument (given as _$1_) and uses the standard [_date_][21] utility to convert a given UTC time into the computer's local timezone. Then [zenity][22] pops up a simple dialog with the result. To test this, open Ulauncher and type _utc 11:00_. While this is a good example showing what's possible with shortcuts, see the [ultz][23] extension for really converting time zones.
### Introducing extensions
While the built-in functionality is great, installing extensions really accelerates productivity with Ulauncher. Extensions can go far beyond what is possible with custom shortcuts, most obviously by providing suggestions as arguments are typed. Extensions are Python modules which use the [Ulauncher extension interface][12] and can either be personally-developed local code or shared with others using GitHub. A collection of community developed extensions is available at <https://ext.ulauncher.io/>. There are basic standalone extensions for quick conversions and dynamic interfaces to online resources such as dictionaries. Other extensions integrate with external applications, like password managers, browsers, and VPN providers. These effectively give external applications a Ulauncher interface. By keeping the core code small and relying on extensions to add advanced functionality, Ulauncher ensures that each user only installs the functionality they need.
![Ulauncher extension configuration][24]
Installing a new extension is easy, though it could be a more integrated experience. After finding an interesting extension, either on the Ulauncher extensions website or anywhere on GitHub, navigate to the _Extensions_ tab in the preferences window. Click _Add Extension_ and paste in the GitHub URL. This loads the extension and shows a preferences page for any available options. A nice hint is that while browsing the extensions website, clicking on the _Github star_ button opens the extension's GitHub page. Often this GitHub repository has more details about the extension than the summary provided on the community extensions website.
#### Firefox bookmarks search
One useful extension is [Ulauncher Firefox Bookmarks][25], which gives fuzzy search access to the current user's Firefox bookmarks. While this is similar to typing _*<search-term>_ in Firefox's omnibar, the difference is Ulauncher gives quick access to the bookmarks from anywhere, without needing to open Firefox first. Also, since this method uses search to locate bookmarks, no folder organization is really needed. This means pages can be “starred” quickly in Firefox and there is no need to hunt for an appropriate folder to put it in.
![Firefox Ulauncher extension searching for fedora][26]
#### Clipboard search
Using a clipboard manager is a productivity boost on its own. These managers maintain a history of clipboard contents, which makes it easy to retrieve earlier copied snippets. Knowing there is a history of copied data allows the user to copy text without concern of overwriting the current contents. Adding the [Ulauncher clipboard][27] extension gives quick access to the clipboard history with search capability, without having to remember another unique hotkey combination. The extension integrates with different clipboard managers: [GPaste][28], [clipster][29], or [CopyQ][30]. Invoking Ulauncher and typing the _c_ keyword brings up a list of recently copied snippets. Typing out an argument starts to narrow the list of options, eventually showing the sought-after text. Selecting an item copies it to the clipboard, ready to paste into another application.
![Ulauncher clipboard extension listing latest clipboard contents][31]
#### Google search
The last extension to highlight is [Google Search][32]. While a Google search shortcut is available as a default shortcut, using an extension allows for more dynamic behavior. With the extension, Google supplies suggestions as the search term is typed. The experience is similar to what is available on Google's homepage, or in the search box in Firefox. Again, the key benefit of using the extension for Google search is immediate access while doing anything else on the computer.
![Google search Ulauncher extension listing suggestions for fedora][33]
### Being productive
Productivity on a computer means customizing the environment for each particular usage. A little configuration streamlines common tasks. Dedicated hotkeys work really well for the most frequent actions, but it doesn't take long before it gets hard to remember them all. Using fuzzy search to find half-remembered keywords strikes a good balance between discoverability and direct access. The key to productivity with Ulauncher is identifying frequent actions and installing an extension, or adding a shortcut, to make them faster. Building a habit of searching in Ulauncher first means there is a quick and consistent interface ready to go, a keystroke away.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/ulauncher-productivity/
作者:[Troy Curtis Jr][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/troycurtisjr/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/ulauncher-816x345.jpg
[2]: https://unsplash.com/@readysetfreddy?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://tools.suckless.org/dmenu/
[5]: https://launchpad.net/synapse-project
[6]: https://github.com/albertlauncher/albert
[7]: https://qsapp.com/
[8]: https://www.alfredapp.com/
[9]: https://help.gnome.org/misc/release-notes/3.6/users-activities-overview.html.en
[10]: https://ulauncher.io/
[11]: https://fedoramagazine.org/announcing-fedora-32/
[12]: http://docs.ulauncher.io/en/latest/
[13]: https://en.wikipedia.org/wiki/Keyboard_shortcut
[14]: https://en.wikipedia.org/wiki/Control-C
[15]: https://fedoramagazine.org/managing-packages-fedora-dnf/
[16]: https://fedoramagazine.org/wp-content/uploads/2021/03/image.png
[17]: https://github.com/Ulauncher/Ulauncher/wiki/Hotkey-In-Wayland
[18]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-1.png
[19]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-2-1024x361.png
[20]: https://www.timeanddate.com/time/aboututc.html
[21]: https://man7.org/linux/man-pages/man1/date.1.html
[22]: https://help.gnome.org/users/zenity/stable/
[23]: https://github.com/Epholys/ultz
[24]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-6-1024x407.png
[25]: https://github.com/KuenzelIT/ulauncher-firefox-bookmarks
[26]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-3.png
[27]: https://github.com/friday/ulauncher-clipboard
[28]: https://github.com/Keruspe/GPaste
[29]: https://github.com/mrichar1/clipster
[30]: https://hluk.github.io/CopyQ/
[31]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-4.png
[32]: https://github.com/NastuzziSamy/ulauncher-google-search
[33]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-5.png

View File

@ -0,0 +1,91 @@
[#]: subject: (Meet Sleek: A Sleek Looking To-Do List Application)
[#]: via: (https://itsfoss.com/sleek-todo-app/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Meet Sleek: A Sleek Looking To-Do List Application
======
There are plenty of [to-do list applications available for Linux][1]. There is one more added to that list in the form of Sleek.
### Sleek to-do List app
Sleek is nothing extraordinary, except perhaps for its looks. It provides an Electron-based GUI for todo.txt.
![][2]
For those not aware, [Electron][3] is a framework that lets you use JavaScript, HTML, and CSS for building cross-platform desktop apps. It utilizes Chromium and Node.js for this purpose, and this is why some people don't like their desktop apps running a browser underneath.
[Todo.txt][4] is a text-based file format; if you follow its markup syntax, you can create a to-do list. There are tons of mobile, desktop, and CLI apps that use Todo.txt underneath.
Don't worry: you don't need to know the correct syntax for todo.txt. Since Sleek is a GUI tool, you can use its interface to create to-do lists without special effort.
The advantage of todo.txt is that you can copy or export your files and use them with any to-do list app that supports todo.txt. This portability lets you keep your data while moving between applications.
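For reference, a few lines in todo.txt syntax might look like this (a hypothetical list: `(A)` sets priority, `+` tags a project, `@` tags a context, and a leading `x` marks completion):
```
(A) 2021-03-15 Write the quarterly report +work @office
(B) Buy groceries @errands
x 2021-03-10 Renew domain name +admin
```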
### Experience with Sleek
![][5]
Sleek gives you the option to create a new todo.txt file or open an existing one. Once you create or open one, you can start adding items to the list.
Apart from the normal checklist, you can add tasks with a due date.
![][6]
While adding a due date, you can also set repetition for the task. I find it weird that you cannot create a recurring task without setting a due date on it. This is something the developer should try to fix in a future release of the application.
![][7]
You can mark a task as complete. You can also choose to hide or show completed tasks, with options to sort tasks based on priority.
Sleek is available in both dark and light themes. There is a dedicated option on the left sidebar to change themes. You can, of course, also change it from the settings.
![][8]
There is no built-in provision to sync your to-do list. As a workaround, you can save your todo.txt file in a location that is automatically synced with Nextcloud, Dropbox, or some other cloud service. This also opens the possibility of using it on mobile with some todo.txt mobile client. It's just a suggestion; I haven't tried it myself.
### Installing Sleek on Linux
Since Sleek is an Electron-based application, it is available for Windows as well as Linux.
For Linux, you can install it using Snap or Flatpak, whichever you prefer.
For Snap, use the following command:
```
sudo snap install sleek
```
If you have enabled Flatpak and added Flathub repository, you can install it using this command:
```
flatpak install flathub com.github.ransome1.sleek
```
As I said at the beginning of this article, Sleek is nothing extraordinary. If you prefer a modern-looking to-do list app with the option to import and export your task list, you may give this open source application a try.
--------------------------------------------------------------------------------
via: https://itsfoss.com/sleek-todo-app/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/to-do-list-apps-linux/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app.png?resize=800%2C630&ssl=1
[3]: https://www.electronjs.org/
[4]: http://todotxt.org/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-1.png?resize=800%2C521&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-due-tasks.png?resize=800%2C632&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-repeat-tasks.png?resize=800%2C632&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-light-theme.png?resize=800%2C521&ssl=1

View File

@ -0,0 +1,87 @@
[#]: subject: (WebAssembly Security, Now and in the Future)
[#]: via: (https://www.linux.com/news/webassembly-security-now-and-in-the-future/)
[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/webassembly-security-now-and-in-the-future/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
WebAssembly Security, Now and in the Future
======
_By Marco Fioretti_
**Introduction**
WebAssembly is, as we [explained recently][1], a binary format for software written in any language, designed to eventually run on any platform without changes. The first application of WebAssembly is inside web browsers, to make websites faster and more interactive. Plans to push WebAssembly beyond the Web, from servers of all sorts to the Internet of Things (IoT), create as many opportunities as security issues. This post is an introductory overview of those issues and of the WebAssembly security model.
**WebAssembly is like JavaScript**
Inside web browsers, WebAssembly modules are managed by the same Virtual Machine (VM) that executes JavaScript code. Therefore, WebAssembly may be used to do much of the same harm that is doable with JavaScript, just more efficiently and less visibly. Since JavaScript is plain text that the browser must compile and WebAssembly is a ready-to-run binary format, the latter runs faster and is also harder to scan (even by antivirus software) for malicious instructions.
This “code obfuscation” effect of WebAssembly has been already used, among other things, to pop up unwanted advertising or to open fake “tech support” windows that ask for sensitive data. Another trick is to automatically redirect browsers to “landing” pages that contain the really dangerous malware.
Finally, WebAssembly may be used, just like JavaScript, to “steal” processing power instead of data. In 2019, an [analysis of 150 different Wasm modules][2] found that about _32%_ of them were used for cryptocurrency mining.
**WebAssembly sandbox, and interfaces**
WebAssembly code runs enclosed in a [sandbox][3] managed by the VM, not by the operating system. This gives it no visibility of the host computer, or ways to interact directly with it. Access to system resources, be they files, hardware or internet connections, can only happen through the WebAssembly System Interface (WASI) provided by that VM.
WASI is different from most other application programming interfaces, with unique security characteristics that are truly driving the adoption of WASM in server and edge computing scenarios; it will be the topic of the next post. Here, it is enough to say that its security implications vary greatly when moving from the web to other environments. Modern web browsers are terribly complex pieces of software, but they rest on decades of experience and on daily tests by billions of people. Compared to browsers, servers or IoT devices are almost uncharted lands. The VMs for those platforms will require extensions of WASI and thus, in turn, will surely introduce new security challenges.
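To make WASI's capability-based access model concrete, here is a hedged sketch using the standalone wasmtime runtime (the choice of runtime and the file names are my assumptions, not from the post). By default a WASI module sees no host files; each directory must be granted explicitly:
```
# run a WASI module with no file-system access at all
wasmtime run app.wasm

# grant access to a single directory; everything else stays invisible
wasmtime run --dir=/tmp/data app.wasm
```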
**Memory and code management in WebAssembly**
Compared to normal compiled programs, WebAssembly applications have very restricted access to memory, and to themselves too. WebAssembly code cannot directly access functions or variables that are not yet called, jump to arbitrary addresses or execute data in memory as bytecode instructions.
Inside browsers, a Wasm module only gets one global array (“linear memory”) of contiguous bytes to play with. WebAssembly can directly read and write any location in that area, or request an increase in its size, but that's all. This linear memory is also separated from the areas that contain its actual code, execution stack, and of course the virtual machine that runs WebAssembly. For browsers, all these data structures are ordinary JavaScript objects, insulated from all the others using standard procedures.
**The result: good, but not perfect**
All these restrictions make it quite hard for a WebAssembly module to misbehave, but not impossible.
The sandboxed memory that makes it almost impossible for WebAssembly to touch what is _outside_ also makes it harder for the operating system to prevent bad things from happening _inside_. Traditional memory monitoring mechanisms like [“stack canaries”][4], which notice if some code tries to mess with objects that it should not touch, [cannot work there][5].
The fact that WebAssembly can only access its own linear memory, but directly, may also _facilitate_ the work of attackers. With those constraints, and access to the source code of a module, it is much easier to guess which memory locations could be overwritten to make the most damage. It also seems [possible][6] to corrupt local variables, because they stay in an unsupervised stack in the linear memory.
A 2020 paper on the [binary security of WebAssembly][5] noted that WebAssembly code can still overwrite string literals in supposedly constant memory. The same paper describes other ways in which WebAssembly may be less secure than when compiled to a native binary, on three different platforms (browsers, server-side applications on Node.js, and applications for stand-alone WebAssembly VMs), and is recommended as further reading on this topic.
In general, the idea that WebAssembly can only damage what's inside its own sandbox can be misleading. WebAssembly modules do the heavy work for the JavaScript code that calls them, exchanging variables every time. If they write into any of those variables code that can cause crashes or data leaks in the unsafe JavaScript that called them, those things _will_ happen.
**The road ahead**
Two emerging features of WebAssembly that will surely impact its security (how and how much, it's too early to tell) are [concurrency][7], and internal garbage collection.
Concurrency is what allows several WebAssembly modules to run in the same VM simultaneously. Today this is possible only through JavaScript [web workers][8], but better mechanisms are under development. Security-wise, they may bring in [“a lot of code… that did not previously need to be”][9], that is, more ways for things to go wrong.
A [native Garbage Collector][10] is needed to increase performance and security, but above all to use WebAssembly outside the well-tested JavaScript VMs of browsers, which collect all the garbage inside themselves anyway. Even this new code, of course, may become another entry point for bugs and attacks.
On the positive side, general strategies to make WebAssembly even safer than it is today also exist. Quoting again from [here][5], they include compiler improvements, _separate_ linear memories for stack, heap, and constant data, and avoiding compiling code written in “unsafe languages, such as C” into WebAssembly modules.
The post [WebAssembly Security, Now and in the Future][11] appeared first on [Linux Foundation Training][12].
--------------------------------------------------------------------------------
via: https://www.linux.com/news/webassembly-security-now-and-in-the-future/
作者:[Dan Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://training.linuxfoundation.org/announcements/webassembly-security-now-and-in-the-future/
[b]: https://github.com/lujun9972
[1]: https://training.linuxfoundation.org/announcements/an-introduction-to-webassembly/
[2]: https://www.sec.cs.tu-bs.de/pubs/2019a-dimva.pdf
[3]: https://webassembly.org/docs/security/
[4]: https://ctf101.org/binary-exploitation/stack-canaries/
[5]: https://www.usenix.org/system/files/sec20-lehmann.pdf
[6]: https://spectrum.ieee.org/tech-talk/telecom/security/more-worries-over-the-security-of-web-assembly
[7]: https://github.com/WebAssembly/threads
[8]: https://en.wikipedia.org/wiki/Web_worker
[9]: https://googleprojectzero.blogspot.com/2018/08/the-problems-and-promise-of-webassembly.html
[10]: https://github.com/WebAssembly/gc/blob/master/proposals/gc/Overview.md
[11]: https://training.linuxfoundation.org/announcements/webassembly-security-now-and-in-the-future/
[12]: https://training.linuxfoundation.org/

View File

@ -0,0 +1,466 @@
[#]: subject: (Build a to-do list app in React with hooks)
[#]: via: (https://opensource.com/article/21/3/react-app-hooks)
[#]: author: (Jaivardhan Kumar https://opensource.com/users/invinciblejai)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Build a to-do list app in React with hooks
======
Learn to build React apps using functional components and state
management.
![Team checklist and to dos][1]
React is one of the most popular and simple JavaScript libraries for building user interfaces (UIs) because it allows you to create reusable UI components.
Components in React are independent, reusable pieces of code that serve as building blocks for an application. React functional components are JavaScript functions that separate the presentation layer from the business logic. According to the [React docs][2], a simple, functional component can be written like:
```
function Welcome(props) {
  return <h1>Hello, {props.name}</h1>;
}
```
React functional components are stateless. Stateless components are declared as functions that have no state and return the same markup, given the same props. State is managed in components with hooks, which were introduced in React 16.8. They enable the management of state and the lifecycle of functional components. There are several built-in hooks, and you can also create custom hooks.
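Hooks are easiest to see in a tiny example. Here is a minimal counter using the built-in `useState` hook (a generic sketch of my own, not part of the to-do app built below):
```
import React from 'react';

const Counter: React.FC = () => {
  // useState returns the current value and an updater function
  const [count, setCount] = React.useState(0);
  return <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>;
};

export default Counter;
```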
This article explains how to build a simple to-do app in React using functional components and state management. The complete code for this app is available on [GitHub][3] and [CodeSandbox][4]. When you're finished with this tutorial, the app will look like this:
![React to-do list][5]
(Jaivardhan Kumar, [CC BY-SA 4.0][6])
### Prerequisites
* To build locally, you must have [Node.js][7] v10.16 or higher, [yarn][8] v1.20.0 or higher, and npm 5.6
* Basic knowledge of JavaScript
* Basic understanding of React would be a plus
### Create a React app
[Create React App][9] is an environment that allows you to start building a React app. For this tutorial, I used a TypeScript template to add static type definitions. [TypeScript][10] is an open source language that builds on JavaScript:
```
npx create-react-app todo-app-context-api --template typescript
```
[npx][11] is a package runner tool; alternatively, you can use [yarn][12]:
```
yarn create react-app todo-app-context-api --template typescript
```
After you execute this command, you can navigate to the directory and run the app:
```
cd todo-app-context-api
yarn start
```
You should see the starter app and the React logo, which is generated by the boilerplate code. Since you are building your own React app, you will be able to modify the logo and styles to meet your needs.
### Build the to-do app
The to-do app can:
* Add an item
* List items
* Mark items as completed
* Delete items
* Filter items based on status (e.g., completed, all, active)
![To-Do App architecture][13]
(Jaivardhan Kumar, [CC BY-SA 4.0][6])
#### The header component
Create a directory called **components** and add a file named **Header.tsx**:
```
mkdir components
cd components
vi Header.tsx
```
Header is a functional component that holds the heading:
```
const Header: React.FC = () => {
    return (
        <div className="header">
            <h1>
                Add TODO List!!
            </h1>
        </div>
    )
}
```
#### The AddTodo component
The **AddTodo** component contains a text box and a button. Clicking the button adds an item to the list.
Create a directory called **todo** under the **components** directory and add a file named **AddTodo.tsx**:
```
mkdir todo
cd todo
vi AddTodo.tsx
```
AddTodo is a functional component that accepts props. Props allow one-way passing of data, i.e., only from parent to child components:
```
const AddTodo: React.FC<AddTodoProps> = ({ todoItem, updateTodoItem, addTaskToList }) => {
    const submitHandler = (event: SyntheticEvent) => {
        event.preventDefault();
        addTaskToList();
    }
    return (
        <form className="addTodoContainer" onSubmit={submitHandler}>
            <div className="controlContainer">
                <input className="controlSpacing" style={{flex: 1}} type="text" value={todoItem?.text ?? ''} onChange={(ev) => updateTodoItem(ev.target.value)} placeholder="Enter task todo ..." />
                <input className="controlSpacing" style={{flex: 1}} type="submit" value="submit" />
            </div>
            <div>
                <label>
                    <span style={{ color: '#ccc', padding: '20px' }}>{todoItem?.text}</span>
                </label>
            </div>
        </form>
    )
}
```
You have created a functional React component called **AddTodo** that takes props provided by the parent function. This makes the component reusable. The props that need to be passed are:
* **todoItem:** An empty item state
* **updateToDoItem:** A helper function to send callbacks to the parent as the user types
* **addTaskToList:** A function to add an item to a to-do list
There are also some styling and HTML elements, like form, input, etc.
#### The TodoList component
The next component to create is the **TodoList**. It is responsible for listing the items in the to-do state and providing options to delete and mark items as complete.
**TodoList** will be a functional component:
```
const TodoList: React.FC = ({ listData, removeItem, toggleItemStatus }) => {
    return listData.length > 0 ? (
        <div className="todoListContainer">
            { listData.map((lData) => {
                return (
                    <ul key={lData.id}>
                        <li>
                            <div className="listItemContainer">
                                <input type="checkbox" style={{ padding: '10px', margin: '5px' }} onChange={() => toggleItemStatus(lData.id)} checked={lData.completed}/>
                                <span className="listItems" style={{ textDecoration: lData.completed ? 'line-through' : 'none', flex: 2 }}>{lData.text}</span>
                                <button type="button" className="listItems" onClick={() => removeItem(lData.id)}>Delete</button>
                            </div>
                        </li>
                    </ul>
                )
            })}
        </div>
    ) : (<span> No Todo list exist </span>)
}
```
The **TodoList** is also a reusable functional React component that accepts props from parent functions. The props that need to be passed are:
* **listData:** A list of to-do items with IDs, text, and completed properties
* **removeItem:** A helper function to delete an item from a to-do list
* **toggleItemStatus:** A function to toggle the task status from completed to not completed and vice versa
There are also some styling and HTML elements (like lists, input, etc.).
#### Footer component
**Footer** will be a functional component; create it in the **components** directory as follows:
```
cd ..
```

```
const Footer: React.FC = ({item = 0, storage, filterTodoList}) => {
    return (
        <div className="footer">
            <button type="button" style={{flex:1}} onClick={() => filterTodoList(ALL_FILTER)}>All Item</button>
            <button type="button" style={{flex:1}} onClick={() => filterTodoList(ACTIVE_FILTER)}>Active</button>
            <button type="button" style={{flex:1}} onClick={() => filterTodoList(COMPLETED_FILTER)}>Completed</button>
            <span style={{color: '#cecece', flex:4, textAlign: 'center'}}>{item} Items | Make use of {storage} to store data</span>
        </div>
    );
}
```
It accepts three props:
* **item:** Displays the number of items
* **storage:** Displays text
* **filterTodoList:** A function to filter tasks based on status (active, completed, all items)
### Todo component: Managing state with contextApi and useReducer
![Todo Component][14]
(Jaivardhan Kumar, [CC BY-SA 4.0][6])
Context provides a way to pass data through the component tree without having to pass props down manually at every level. **ContextApi** and **useReducer** can be used to manage state by sharing it across the entire React component tree without passing it as a prop to each component in the tree.
Now that you have the AddTodo, TodoList, and Footer components, you need to wire them.
Use the following built-in hooks to manage the components' state and lifecycle:
* **useState:** Returns the stateful value and updater function to update the state
* **useEffect:** Helps manage lifecycle in functional components and perform side effects
* **useContext:** Accepts a context object and returns current context value
* **useReducer:** Like useState, it returns the stateful value and updater function, but it is used instead of useState when you have complex state logic (e.g., multiple sub-values or if the new state depends on the previous one)
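Before wiring **useReducer** into the app, it can help to see the hook in isolation. Here is a minimal sketch (a hypothetical counter, not part of the to-do code):
```
import React from 'react';

// a reducer maps (state, action) to the next state, with no side effects
const counterReducer = (state: number, action: { type: 'increment' | 'reset' }) => {
  switch (action.type) {
    case 'increment': return state + 1;
    case 'reset': return 0;
  }
};

const Counter: React.FC = () => {
  const [count, dispatch] = React.useReducer(counterReducer, 0);
  return <button onClick={() => dispatch({ type: 'increment' })}>{count}</button>;
};
```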
First, use **contextApi** and **useReducer** hooks to manage the state. For separation of concerns, add a new directory under **components** called **contextApiComponents**:
```
mkdir contextApiComponents
cd contextApiComponents
```
Create **TodoContextApi.tsx**:
```
const defaultTodoItem: TodoItemProp = { id: Date.now(), text: '', completed: false };
const TodoContextApi: React.FC = () => {
    const { state: { todoList }, dispatch } = React.useContext(TodoContext);
    const [todoItem, setTodoItem] = React.useState(defaultTodoItem);
    const [todoListData, setTodoListData] = React.useState(todoList);
    React.useEffect(() => {
        setTodoListData(todoList);
    }, [todoList])
    const updateTodoItem = (text: string) => {
        setTodoItem({
            id: Date.now(),
            text,
            completed: false
        })
    }
    const addTaskToList = () => {
        dispatch({
            type: ADD_TODO_ACTION,
            payload: todoItem
        });
        setTodoItem(defaultTodoItem);
    }
    const removeItem = (id: number) => {
        dispatch({
            type: REMOVE_TODO_ACTION,
            payload: { id }
        })
    }
    const toggleItemStatus = (id: number) => {
        dispatch({
            type: UPDATE_TODO_ACTION,
            payload: { id }
        })
    }
    const filterTodoList = (type: string) => {
        const filteredList = FilterReducer(todoList, {type});
        setTodoListData(filteredList)
    }
    return (
        <>
            <AddTodo todoItem={todoItem} updateTodoItem={updateTodoItem} addTaskToList={addTaskToList} />
            <TodoList listData={todoListData} removeItem={removeItem} toggleItemStatus={toggleItemStatus} />
            <Footer item={todoListData.length} storage="Context API" filterTodoList={filterTodoList} />
        </>
    )
}
```
This component includes the **AddTodo**, **TodoList**, and **Footer** components and their respective helper and callback functions.
To manage the state, it uses **contextApi**, which provides the state and a dispatch method that, in turn, updates the state. It accepts a context object. (You will create the provider for the context, called **contextProvider**, next.)
```
const { state: { todoList }, dispatch } = React.useContext(TodoContext);
```
#### TodoProvider
Add **TodoProvider**, which creates the **context** and uses a **useReducer** hook. The **useReducer** hook takes a reducer function along with the initial values and returns the state and an updater function (dispatch).
* Create the context and export it. Exporting it will allow it to be used by any child component to get the current state using the hook **useContext**:
```
export const TodoContext = React.createContext({} as TodoContextProps);
```
* Create **ContextProvider** and export it:
```
const TodoProvider : React.FC = (props) => {
    const [state, dispatch] = React.useReducer(TodoReducer, {todoList: []});
    const value = {state, dispatch}
    return (
        <TodoContext.Provider value={value}>
            {props.children}
        </TodoContext.Provider>
    )
}
```
* Context data can be accessed by any React component in the hierarchy directly with the **useContext** hook if you wrap the parent component (e.g., **TodoContextApi**) or the app itself with the provider (e.g., **TodoProvider**):
```
<TodoProvider>
  <TodoContextApi />
</TodoProvider>
```
* In the **TodoContextApi** component, use the **useContext** hook to access the current context value:
```
const { state: { todoList }, dispatch } = React.useContext(TodoContext)
```
**TodoProvider.tsx:**
```
type TodoContextProps = {
    state : {todoList: TodoItemProp[]};
    dispatch: ({type, payload}: {type:string, payload: any}) => void;
}
export const TodoContext = React.createContext({} as TodoContextProps);
const TodoProvider : React.FC = (props) => {
    const [state, dispatch] = React.useReducer(TodoReducer, {todoList: []});
    const value = {state, dispatch}
    return (
        <TodoContext.Provider value={value}>
            {props.children}
        </TodoContext.Provider>
    )
}
```
#### Reducers
A reducer is a pure function with no side effects. This means that for the same input, the expected output will always be the same. This makes the reducer easier to test in isolation and helps manage state. **TodoReducer** and **FilterReducer** are used in the components **TodoProvider** and **TodoContextApi**.
Create a directory named **reducers** under **src** and create a file there named **TodoReducer.tsx**:
```
const TodoReducer = (state: StateProps = {todoList:[]}, action: ActionProps) => {
    switch(action.type) {
        case ADD_TODO_ACTION:
            return { todoList: [...state.todoList, action.payload]}
        case REMOVE_TODO_ACTION:
            return { todoList: state.todoList.length ? state.todoList.filter((d) => d.id !== action.payload.id) : []};
        case UPDATE_TODO_ACTION:
            return { todoList: state.todoList.length ? state.todoList.map((d) => {
                if(d.id === action.payload.id) d.completed = !d.completed;
                return d;
            }): []}
        default:
            return state;
    }
}
```
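Because the reducer is pure, you can exercise it without rendering any component. A minimal sketch of such a check, assuming the types and action constants defined above:
```
// the same state and action must always produce the same result
const input: StateProps = { todoList: [] };
const action: ActionProps = { type: ADD_TODO_ACTION, payload: { id: 1, text: 'test', completed: false } };

const first = TodoReducer(input, action);
const second = TodoReducer(input, action);
console.assert(JSON.stringify(first) === JSON.stringify(second), 'reducer should be deterministic');
```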
Create a **FilterReducer** to maintain the filter's state:
```
const FilterReducer = (state : TodoItemProp[] = [], action: ActionProps) => {
    switch(action.type) {
        case ALL_FILTER:
            return state;
        case ACTIVE_FILTER:
            return state.filter((d) => !d.completed);
        case COMPLETED_FILTER:
            return state.filter((d) => d.completed);
        default:
            return state;
    }
}
```
You have created all the required components. Next, you will add the **Header** and **TodoContextApi** components to the App, wrapping **TodoContextApi** with **TodoProvider** so that all children can access the context.
```
function App() {
  return (
    <div className="App">
      <Header />
      <TodoProvider>
        <TodoContextApi />
      </TodoProvider>
    </div>
  );
}
```
Ensure the App component is rendered in **index.tsx** within **ReactDOM.render**. [ReactDOM.render][15] takes two arguments: a React element and the ID of an HTML element. The React element gets rendered on the web page, and the **id** indicates which HTML element will be replaced by it:
```
ReactDOM.render(
  <App />,
  document.getElementById('root')
);
```
### Conclusion
You have learned how to build a functional app in React using hooks and state management. What will you do with it?
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/react-app-hooks
作者:[Jaivardhan Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/invinciblejai
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos)
[2]: https://reactjs.org/docs/components-and-props.html
[3]: https://github.com/invincibleJai/todo-app-context-api
[4]: https://codesandbox.io/s/reverent-edison-v8om5
[5]: https://opensource.com/sites/default/files/pictures/todocontextapi.gif (React to-do list)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://nodejs.org/en/download/
[8]: https://yarnpkg.com/getting-started/install
[9]: https://github.com/facebook/create-react-app
[10]: https://www.typescriptlang.org/
[11]: https://www.npmjs.com/package/npx
[12]: https://yarnpkg.com/
[13]: https://opensource.com/sites/default/files/uploads/to-doapp_architecture.png (To-Do App architecture)
[14]: https://opensource.com/sites/default/files/uploads/todocomponent_0.png (Todo Component)
[15]: https://reactjs.org/docs/react-dom.html#render

View File

@ -0,0 +1,202 @@
[#]: subject: (Read and write files with Bash)
[#]: via: (https://opensource.com/article/21/3/input-output-bash)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Read and write files with Bash
======
Learn the different ways Bash reads and writes data and when to use each
method.
![bash logo on green background][1]
When you're scripting with Bash, sometimes you need to read data from or write data to a file. Sometimes a file may contain configuration options, and other times the file is the data your user is creating with your application. Every language handles this task a little differently, and this article demonstrates how to handle data files with Bash and other [POSIX][2] shells.
### Install Bash
If you're on Linux, you probably already have Bash. If not, you can find it in your software repository.
On macOS, you can use the default terminal, either Bash or [Zsh][3], depending on the macOS version you're running.
On Windows, there are several ways to experience Bash, including Microsoft's officially supported [Windows Subsystem for Linux][4] (WSL).
Once you have Bash installed, open your favorite text editor and get ready to code.
### Reading a file with Bash
In addition to being [a shell][5], Bash is a scripting language. There are several ways to read data in Bash: You can create a sort of data stream and parse the output, or you can load data into memory. Both are valid methods of ingesting information, but each has pretty specific use cases.
#### Source a file in Bash
When you "source" a file in Bash, you cause Bash to read the contents of a file with the expectation that it contains valid data that Bash can fit into its established data model. You won't source data from any old file, but you can use this method to read configuration files and functions.
For instance, create a file called `example.sh` and enter this into it:
```
#!/bin/sh
greet opensource.com
echo "The meaning of life is $var"
```
Run the code to see it fail:
```
$ bash ./example.sh
./example.sh: line 3: greet: command not found
The meaning of life is
```
Bash doesn't have a command called `greet`, so it could not execute that line, and it has no record of a variable called `var`, so there is no known meaning of life. To fix this problem, create a file called `include.sh`:
```
greet() {
    echo "Hello ${1}"
}
var=42
```
Revise your `example.sh` script to include a `source` command:
```
#!/bin/sh
source include.sh
greet opensource.com
echo "The meaning of life is $var"
```
Run the script to see it work:
```
$ bash ./example.sh
Hello opensource.com
The meaning of life is 42
```
The `greet` command is brought into your shell environment because it is defined in the `include.sh` file, and it even recognizes the argument (`opensource.com` in this example). The variable `var` is set and imported, too.
#### Parse a file in Bash
The other way to get data "into" Bash is to parse it as a data stream. There are many ways to do this. You can use `grep` or `cat` or any command that takes data and pipes it to stdout. Alternately, you can use what is built into Bash: the redirect. Redirection on its own isn't very useful, so in this example, I also use the built-in `echo` command to print the results of the redirect:
```
#!/bin/sh
echo $( < include.sh )
```
Save this as `stream.sh` and run it to see the results:
```
$ bash ./stream.sh
greet() { echo "Hello ${1}" } var=42
$
```
Bash reads the file and echoes its contents back to your terminal (the unquoted command substitution is word-split, which is why everything lands on one line). Piping a file to an appropriate parser is a common way to read data with Bash. For instance, assume for a moment that `include.sh` is a configuration file with key and value pairs separated by an equals (`=`) sign. You could obtain values with `awk` or even `cut`:
```
#!/bin/sh
myVar=`grep var include.sh | cut -d'=' -f2`
echo $myVar
```
Try running the script:
```
$ bash ./stream.sh
42
```
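Since the text above also mentions `awk`, here is one possible awk version of the same extraction (a sketch, not from the original article):

```
#!/bin/sh
# split each line of include.sh on "=" and print the value for the "var" key
myVar=$(awk -F'=' '/^var=/ {print $2}' include.sh)
echo "$myVar"
```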
### Writing data to a file with Bash
Whether you're storing data your user created with your application or just metadata about what the user did in an application (for instance, game saves or recent songs played), there are many good reasons to store data for later use. In Bash, you can save data to files using common shell redirection.
For instance, to create a new file containing output, use a single redirect token:
```
#!/bin/sh
TZ=UTC
date > date.txt
```
Run the script a few times:
```
$ bash ./date.sh
$ cat date.txt
Tue Feb 23 22:25:06 UTC 2021
$ bash ./date.sh
$ cat date.txt
Tue Feb 23 22:25:12 UTC 2021
```
To append data, use the double redirect tokens:
```
#!/bin/sh
TZ=UTC
date >> date.txt
```
Run the script a few times:
```
$ bash ./date.sh
$ bash ./date.sh
$ bash ./date.sh
$ cat date.txt
Tue Feb 23 22:25:12 UTC 2021
Tue Feb 23 22:25:17 UTC 2021
Tue Feb 23 22:25:19 UTC 2021
Tue Feb 23 22:25:22 UTC 2021
```
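Redirection also combines nicely with here-documents when you need to write several lines at once. A minimal sketch (the file name `notes.txt` is an assumption):

```
#!/bin/sh
cat > notes.txt << EOF
First line
Second line
EOF
date >> notes.txt    # append a timestamp afterward
```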
### Bash for easy programming
Bash excels at being easy to learn because, with just a few basic concepts, you can build complex programs. For the full documentation, refer to the [excellent Bash documentation][6] on GNU.org.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/input-output-bash
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://opensource.com/article/19/9/getting-started-zsh
[4]: https://opensource.com/article/19/7/ways-get-started-linux#wsl
[5]: https://www.redhat.com/sysadmin/terminals-shells-consoles
[6]: http://gnu.org/software/bash

View File

@ -0,0 +1,194 @@
[#]: subject: (How to use the Linux sed command)
[#]: via: (https://opensource.com/article/21/3/sed-cheat-sheet)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How to use the Linux sed command
======
Learn basic sed usage then download our cheat sheet for a quick
reference to the Linux stream editor.
![Penguin with green background][1]
Few Unix commands are as famous as sed, [grep][2], and [awk][3]. They often get grouped together, possibly because they have strange names and are all powerful tools for parsing text. They also share some syntactical and logical similarities. And while they're all useful for parsing text, each has its specialties. This article examines the `sed` command, which is a _stream editor_.
I've written before about [sed][4], as well as its distant relative [ed][5]. To get comfortable with sed, it helps to have some familiarity with ed because that helps you get used to the idea of buffers. This article assumes that you're familiar with the very basics of sed, meaning you've at least run the classic `s/foo/bar/` style find-and-replace command.
**[Download our free [sed cheat sheet][6]]**
### Installing sed
If you're using Linux, BSD, or macOS, you already have GNU or BSD sed installed. These are unique reimplementations of the original `sed` command, and while they're similar, there are minor differences. This article has been tested on the Linux and NetBSD versions, so you can use whatever sed you find on your computer in this case, although with BSD sed you must use only the short options (`-n` instead of `--quiet`, for instance).
GNU sed is generally regarded to be the most feature-rich sed available, so you might want to try it whether or not you're running Linux. If you can't find GNU sed (often called gsed on non-Linux systems) in your ports tree, then you can [download its source code][7] from the GNU website. The nice thing about installing GNU sed is that you can use its extra functions but also constrain it to conform to the [POSIX][8] specifications of sed, should you require portability.
MacOS users can find GNU sed on [MacPorts][9] or [Homebrew][10].
On Windows, you can [install GNU sed][11] with [Chocolatey][12].
### Understanding pattern space and hold space
Sed works on exactly one line at a time. Because it has no visual display, it creates a _pattern space_, a space in memory containing the current line from the input stream (with any trailing newline character removed). Once you populate the pattern space, sed executes your instructions. When it reaches the end of the commands, sed prints the pattern space's contents to the output stream. The default output stream is **stdout**, but the output can be redirected to a file or even back into the same file using the `--in-place=.bak` option.
Then the cycle begins again with the next input line.
To provide a little flexibility as you scrub through files with sed, sed also provides a _hold space_ (sometimes also called a _hold buffer_), a space in sed's memory reserved for temporary data storage. You can think of hold space as a clipboard, and in fact, that's exactly what this article demonstrates: how to copy/cut and paste with sed.
First, create a sample text file with this text as its contents:
```
Line one
Line three
Line two
```
### Copying data to hold space
To place something in sed's hold space, use the `h` or `H` command. A lower-case `h` tells sed to overwrite the current contents of hold space, while a capital `H` tells it to append data to whatever's already in hold space.
Used on its own, there's not much to see:
```
$ sed --quiet -e '/three/ h' example.txt
$
```
The `--quiet` (`-n` for short) option suppresses all output except what sed has explicitly been told to print. In this case, sed selects any line containing the string `three` and copies it to hold space. I've not told sed to print anything, so no output is produced.
### Copying data from hold space
To get some insight into hold space, you can copy its contents from hold space and place it into pattern space with the `g` command. Watch what happens:
```
$ sed -n -e '/three/h' -e 'g;p' example.txt

Line three
Line three
```
The first blank line prints because the hold space is empty when it's first copied into pattern space.
The next two lines contain `Line three` because that's what's in hold space from line two onward.
This command uses two unique scripts (`-e`) purely to help with readability and organization. It can be useful to divide steps into individual scripts, but technically this command works just as well as one script statement:
```
$ sed -n -e '/three/h ; g ; p' example.txt

Line three
Line three
```
### Appending data to pattern space
The `G` command appends a newline character and the contents of the hold space to the pattern space.
```
$ sed -n -e '/three/h' -e 'G;p' example.txt
Line one

Line three
Line three
Line two
Line three
```
The first two lines of this output contain both the contents of the pattern space (`Line one`) and the empty hold space. The next two lines match the search text (`three`), so they contain both the pattern space and the hold space. The hold space doesn't change for the third pair of lines, so the pattern space (`Line two`) prints with the hold space (still `Line three`) trailing at the end.
### Doing cut and paste with sed
Now that you know how to juggle a string from pattern to hold space and back again, you can devise a sed script that copies, then deletes, and then pastes a line within a document. For example, the example file for this article has `Line three` out of order. Sed can fix that:
```
$ sed -n -e '/three/ h' -e '/three/ d' \
-e '/two/ G;p' example.txt
Line one
Line two
Line three
```
* The first script finds a line containing the string `three` and copies it from pattern space to hold space, replacing anything currently in hold space.
* The second script deletes any line containing the string `three`. This completes the equivalent of a _cut_ action in a word processor or text editor.
* The final script finds a line containing `two` and _appends_ the contents of hold space to pattern space and then prints the pattern space.
Job done.
### Scripting with sed
Once again, the use of separate script statements is purely for visual and mental simplicity. The cut-and-paste command works as one script:
```
$ sed -n -e '/three/ h ; /three/ d ; /two/ G ; p' example.txt
Line one
Line two
Line three
```
It can even be written as a dedicated script file:
```
#!/usr/bin/sed -nf
/three/h
/three/d
/two/ G
p
```
To run the script, mark it executable and try it on your sample file:
```
$ chmod +x myscript.sed
$ ./myscript.sed example.txt
Line one
Line two
Line three
```
Of course, the more predictable the text you need to parse, the easier it is to solve your problem with sed. It's usually not practical to invent "recipes" for sed actions (such as a copy and paste) because the condition to trigger the action is probably different from file to file. However, the more fluent you become with sed's commands, the easier it is to devise complex actions based on the input you need to parse.
The important things are recognizing distinct actions, understanding when sed moves to the next line, and predicting what the pattern and hold space can be expected to contain.
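If you want one more exercise in pattern-and-hold-space juggling, here's a classic sed idiom (widely circulated, not from this article) that reverses a file using the commands covered above: `1!G` appends hold space to every line except the first, `h` saves the growing result, and `$p` prints it at the last line:

```
$ sed -n '1!G;h;$p' example.txt
Line two
Line three
Line one
```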
### Download the cheat sheet
Sed is complex. It only has a dozen commands, yet its flexible syntax and raw power mean it's full of endless potential. I used to reference pages of clever one-liners in an attempt to get the most use out of sed, but it wasn't until I started inventing (and sometimes reinventing) my own solutions that I felt like I was starting to _actually_ learn sed. If you're looking for gentle reminders of commands and helpful tips on syntax, [download our sed cheat sheet][6], and start learning sed once and for all!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/sed-cheat-sheet
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://opensource.com/article/21/3/grep-cheat-sheet
[3]: https://opensource.com/article/20/9/awk-ebook
[4]: https://opensource.com/article/20/12/sed
[5]: https://opensource.com/article/20/12/gnu-ed
[6]: https://opensource.com/downloads/sed-cheat-sheet
[7]: http://www.gnu.org/software/sed/
[8]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[9]: https://opensource.com/article/20/11/macports
[10]: https://opensource.com/article/20/6/homebrew-mac
[11]: https://chocolatey.org/packages/sed
[12]: https://opensource.com/article/20/3/chocolatey

View File

@ -0,0 +1,268 @@
[#]: subject: (Identify Linux performance bottlenecks using open source tools)
[#]: via: (https://opensource.com/article/21/3/linux-performance-bottlenecks)
[#]: author: (Howard Fosdick https://opensource.com/users/howtech)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Identify Linux performance bottlenecks using open source tools
======
Not long ago, identifying hardware bottlenecks required deep expertise.
Today's open source GUI performance monitors make it pretty simple.
![Lightning in a bottle][1]
Computers are integrated systems that only perform as fast as their slowest hardware component. If one component is less capable than the others—if it falls behind and can't keep up—it can hold your entire system back. That's a _performance bottleneck_. Removing a serious bottleneck can make your system fly.
This article explains how to identify hardware bottlenecks in Linux systems. The techniques apply to both personal computers and servers. My emphasis is on PCs—I won't cover server-specific bottlenecks in areas such as LAN management or database systems. Those often involve specialized tools.
I also won't talk much about solutions. That's too big a topic for this article. Instead, I'll write a follow-up article with performance tweaks.
I'll use only open source graphical user interface (GUI) tools to get the job done. Most articles on Linux bottlenecking are pretty complicated. They use specialized commands and delve deep into arcane details.
The GUI tools that open source offers make identifying many bottlenecks simple. My goal is to give you a quick, easy approach that you can use anywhere.
### Where to start
A computer consists of six key hardware resources:
* Processors
* Memory
* Storage
* USB ports
* Internet connection
* Graphics processor
Should any one resource perform poorly, it can create a performance bottleneck. To identify a bottleneck, you must monitor these six resources.
Open source offers a plethora of tools to do the job. I'll use the [GNOME System Monitor][2]. Its output is easy to understand, and you can find it in most repositories.
Start it up and click on the **Resources** tab. You can identify many performance problems right off.
![System Monitor - Resources Panel ][3]
Fig. 1. System Monitor spots problems. (Howard Fosdick, [CC BY-SA 4.0][4])
The **Resources** panel displays three sections: **CPU History**, **Memory and Swap History**, and **Network History**. A quick glance tells you immediately whether your processors are swamped, or your computer is out of memory, or you're using up all your internet bandwidth.
I'll explore these problems below. For now, check the System Monitor first when your computer slows down. It instantly clues you in on the most common performance problems.
Now let's explore how to identify bottlenecks in specific areas.
### How to identify processor bottlenecks
To spot a bottleneck, you must first know what hardware you have. Open source offers several tools for this purpose. I like [HardInfo][5] because its screens are easy to read and it's widely popular.
Start up HardInfo. Its **Computer -> Summary** panel identifies your CPU and tells you about its cores, threads, and speeds. It also identifies your motherboard and other computer components.
![HardInfo Summary Panel][6]
Fig. 2. HardInfo shows hardware details. (Howard Fosdick, [CC BY-SA 4.0][4])
HardInfo reveals that this computer has one physical CPU chip. That chip contains two processors, or cores. Each core supports two threads, or logical processors. That's a total of four logical processors—exactly what System Monitor's CPU History section showed in Fig. 1.
A _processor bottleneck_ occurs when processors can't respond to requests for their time. They're already busy.
You can identify this when System Monitor shows logical processor utilization at over 80% or 90% for a sustained period. Here's an example where three of the four logical processors are swamped at 100% utilization. That's a bottleneck because it doesn't leave much CPU for any other work.
![System Monitor processor bottleneck][7]
Fig. 3. A processor bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
#### Which app is causing the problem?
You need to find out which program(s) is consuming all that CPU. Click on System Monitor's **Processes** tab. Then click on the **% CPU** header to sort the processes by how much CPU they're consuming. You'll see which apps are throttling your system.
![System Monitor Processes panel][8]
Fig. 4. Identifying the offending processes. (Howard Fosdick, [CC BY-SA 4.0][4])
The top three processes each consume 24% of the _total_ CPU resource. Since there are four logical processors, this means each consumes an entire processor. That's just as Fig. 3 shows.
The **Processes** panel identifies a program named **analytical_AI** as the culprit. You can right-click on it in the panel to see more details on its resource consumption, including memory use, the files it has open, its input/output details, and more.
If your login has administrator privileges, you can manage the process. You can change its priority and stop, continue, end, or kill it. So, you could immediately resolve your bottleneck here.
![System Monitor managing a process][9]
Fig. 5. Right-click on a process to manage it. (Howard Fosdick, [CC BY-SA 4.0][4])
How do you fix processing bottlenecks? Beyond managing the offending process in real time, you could prevent the bottleneck from happening. For example, you might substitute another app for the offender, work around it, change your behavior when using that app, schedule the app for off-hours, address an underlying memory issue, performance-tweak the app or your system software, or upgrade your hardware. That's too much to cover here, so I'll explore those options in my next article.
#### Common processor bottlenecks
You'll encounter several common bottlenecks when monitoring your CPUs with System Monitor.
Sometimes one logical processor is bottlenecked while all the others are at low utilization. This means you have an app that's not coded smartly enough to take advantage of more than one logical processor, and it's maxed out the one it's using. That app will take longer to finish than it would if it used more processors. On the other hand, at least it leaves your other processors free for other work and doesn't take over your computer.
You might also see a logical processor stuck forever at 100% utilization. Either it's very busy, or a process is hung. You can tell it's hung if the process never does any disk activity (as the System Monitor **Processes** panel will show).
Finally, you might notice that when all your processors are bottlenecked, your memory is fully utilized, too. Out-of-memory conditions sometimes cause processor bottlenecks. In this case, you want to solve the underlying memory problem, not the symptomatic CPU issue.
### How to identify memory bottlenecks
Given the large amount of memory in modern PCs, memory bottlenecks are much less common than they once were. Yet you can still run into them if you run memory-intensive programs, especially if you have a computer that doesn't contain much random access memory (RAM).
Linux [uses memory][10] both for programs and to cache disk data. The latter speeds up disk data access. Linux can reclaim that memory any time it needs it for program use.
The System Monitor's **Resources** panel displays your total memory and how much of it is used. In the **Processes** panel, you can see individual processes' memory use.
Here's the portion of the System Monitor **Resources** panel that tracks aggregate memory use:
![System Monitor memory bottleneck][11]
Fig. 6. A memory bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
To the right of Memory, you'll notice [Swap][12]. This is disk space Linux uses when it runs low on memory. It writes memory to disk to continue operations, effectively using swap as a slower extension to your RAM.
The two memory performance problems you'll want to look out for are:
  1. Memory appears largely used, and you see frequent or increasing activity on the swap space.
  2. Both memory and swap are largely used up.
Situation 1 means slower performance because swap is always slower than memory. Whether you consider it a performance problem depends on many factors (e.g., how active your swap space is, its speed, your expectations, etc.). My opinion is that anything more than token swap use is unacceptable for a modern personal computer.
Situation 2 is where both memory and swap are largely in use. This is a _memory bottleneck._ The computer becomes unresponsive. It could even fall into a state of _thrashing_, where it accomplishes little more than memory management.
Fig. 6 above shows an old computer with only 2GB of RAM. As memory use surpassed 80%, the system started writing to swap. Responsiveness declined. This screenshot shows over 90% memory use, and the computer is unusable.
The ultimate answer to memory problems is to either use less of it or buy more. I'll discuss solutions in my follow-up article.
### How to identify storage bottlenecks
Storage today comes in several varieties of solid-state and mechanical hard disks. Device interfaces include PCIe, SATA, Thunderbolt, and USB. Regardless of which type of storage you have, you use the same procedure to identify disk bottlenecks.
Start with System Monitor. Its **Processes** panel displays the input/output rates for individual processes. So you can quickly identify which processes are doing the most disk I/O.
But the tool doesn't show the _aggregate data transfer rate per disk._ You need to see the total load on a specific disk to determine if that disk is a storage bottleneck.
To do so, use the [atop][13] command. It's available in most Linux repositories.
Just type `atop` at the command-line prompt. The output below shows that device `sdb` is `busy 101%`. Clearly, it's reached its performance limit and is restricting how fast your system can get work done.
![atop disk bottleneck][14]
Fig. 7. The atop command identifies a disk bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
Notice that one of the CPUs is waiting on the disk to do its job 85% of the time (`cpu001 w 85%`). This is typical when a storage device becomes a bottleneck. In fact, many look first at CPU I/O waits to spot storage bottlenecks.
So, to easily identify a storage bottleneck, use the `atop` command. Then use the **Processes** panel on System Monitor to identify the individual processes that are causing the bottleneck.
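If you want to adjust how atop samples, the interval can be given as an argument (a sketch; key bindings and options can vary slightly between atop versions):

```
$ atop 5     # refresh every 5 seconds instead of the default interval
```

Once it's running, pressing `d` switches to a per-process disk view, which pairs well with the System Monitor follow-up described above.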
### How to identify USB port bottlenecks
Some people use their USB ports all day long. Yet, they never check if those ports are being used optimally. Whether you plug in an external disk, a memory stick, or something else, you'll want to verify that you're getting maximum performance from your USB-connected devices.
This chart shows why. Potential USB data transfer rates vary _enormously_.
![USB standards][15]
Fig. 8. USB speeds vary a lot. (Howard Fosdick, based on figures provided by [Tripplite][16] and [Wikipedia][17], [CC BY-SA 4.0][4])
HardInfo's **USB Devices** tab displays the USB standards your computer supports. Most computers offer more than one speed. How can you tell the speed of a specific port? Vendors color-code them, as shown in the chart. Or you can look in your computer's documentation.
To see the actual speeds you're getting, test by using the open source [GNOME Disks][18] program. Just start up GNOME Disks, select its **Benchmark Disk** feature, and run a benchmark. That tells you the maximum real speed you'll get for a port with the specific device plugged into it.
You may get different transfer speeds for a port, depending on which device you plug into it. Data rates depend on the particular combination of port and device.
For example, a device that could fly at 3.1 speed will use a 2.0 port—at 2.0 speed—if that's what you plug it into. (And it won't tell you it's operating at the slower speed!) Conversely, if you plug a USB 2.0 device into a 3.1 port, it will work, but at the 2.0 speed. So to get fast USB, you must ensure both the port and the device support it. GNOME Disks gives you the means to verify this.
To identify a USB processing bottleneck, use the same procedure you did for solid-state and hard disks. Run the `atop` command to spot a USB storage bottleneck. Then, use System Monitor to get the details on the offending process(es).
### How to identify internet bandwidth bottlenecks
The System Monitor **Resources** panel tells you in real time what internet connection speed you're experiencing (see Fig. 1).
There are [great Python tools out there][19] to test your maximum internet speed, but you can also test it on websites like [Speedtest][20], [Fast.com][21], and [Speakeasy][22]. For best results, close everything and run _only_ the speed test; turn off your VPN; run tests at different times of day; and compare the results from several testing sites.
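If you prefer testing from the command line, one widely packaged open source option is speedtest-cli (a sketch; the tool's availability and flags depend on your distribution, and it is among the Python tools linked above):

```
$ speedtest-cli --simple     # prints ping, download, and upload figures
```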
Then compare your results to the download and upload speeds that your vendor claims you're getting. That way, you can confirm you're getting the speeds you're paying for.
If you have a separate router, test with and without it. That can tell you if your router is a bottleneck. If you use WiFi, test with it and without it (by directly cabling your laptop to the modem). I've often seen people complain about their internet vendor when what they actually have is a WiFi bottleneck they could fix themselves.
If some program is consuming your entire internet connection, you want to know which one. Find it by using the `nethogs` command. It's available in most repositories.
The other day, my System Monitor suddenly showed my internet access spiking. I just typed `nethogs` in the command line, and it instantly identified the bandwidth consumer as a Clamav antivirus update.
![Nethogs][23]
Fig. 9. Nethogs identifies bandwidth consumers. (Howard Fosdick, [CC BY-SA 4.0][4])
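For reference, nethogs is typically run with root privileges, optionally naming an interface (a sketch; the interface name is an assumption for your system):

```
$ sudo nethogs            # monitor the default interface
$ sudo nethogs wlan0      # monitor a specific interface
```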
### How to identify graphics processing bottlenecks
If you plug your monitor into the motherboard in the back of your desktop computer, you're using _onboard graphics_. If you plug it into a card in the back, you have a dedicated graphics subsystem. Most call it a _video card_ or _graphics card._ For desktop computers, add-in cards are typically more powerful and more expensive than motherboard graphics. Laptops always use onboard graphics.
HardInfo's **PCI Devices** panel tells you about your graphics processing unit (GPU). It also displays the amount of dedicated video memory you have (look for the memory marked "prefetchable").
![Video Chipset Information][24]
Fig. 10. HardInfo provides graphics processing information. (Howard Fosdick, [CC BY-SA 4.0][4])
CPUs and GPUs work [very closely][25] together. To simplify, the CPU prepares frames for the GPU to render, then the GPU renders the frames.
A _GPU bottleneck_ occurs when your CPUs are waiting on a GPU that is 100% busy.
To identify this, you need to monitor CPU and GPU utilization rates. Open source monitors like [Conky][26] and [Glances][27] do this if their extensions work with your graphics chipset.
Take a look at this example from Conky. You can see that this system has a lot of available CPU. The GPU is only 25% busy. Imagine if that GPU number were instead near 100%. Then you'd know that the CPUs were waiting on the GPU, and you'd have a GPU bottleneck.
![Conky CPU and GPU monitoring][28]
Fig. 11. Conky displays CPU and GPU utilization. (Image courtesy of [AskUbuntu forum][29])
On some systems, you'll need a vendor-specific tool to monitor your GPU. They're all downloadable from GitHub and are described in this article on [GPU monitoring and diagnostic command-line tools][30].
### Summary
Computers consist of a collection of integrated hardware resources. Should any of them fall way behind the others in its workload, it creates a performance bottleneck. That can hold back your entire system. You need to be able to identify and correct bottlenecks to achieve optimal performance.
Not so long ago, identifying bottlenecks required deep expertise. Today's open source GUI performance monitors make it pretty simple.
In my next article, I'll discuss specific ways to improve your Linux PC's performance. Meanwhile, please share your own experiences in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/linux-performance-bottlenecks
作者:[Howard Fosdick][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/howtech
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_lightning.png?itok=wRzjWIlm (Lightning in a bottle)
[2]: https://wiki.gnome.org/Apps/SystemMonitor
[3]: https://opensource.com/sites/default/files/uploads/1_system_monitor_resources_panel.jpg (System Monitor - Resources Panel )
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://itsfoss.com/hardinfo/
[6]: https://opensource.com/sites/default/files/uploads/2_hardinfo_summary_panel.jpg (HardInfo Summary Panel)
[7]: https://opensource.com/sites/default/files/uploads/3_system_monitor_100_processor_utilization.jpg (System Monitor processor bottleneck)
[8]: https://opensource.com/sites/default/files/uploads/4_system_monitor_processes_panel.jpg (System Monitor Processes panel)
[9]: https://opensource.com/sites/default/files/uploads/5_system_monitor_manage_a_process.jpg (System Monitor managing a process)
[10]: https://www.networkworld.com/article/3394603/when-to-be-concerned-about-memory-levels-on-linux.html
[11]: https://opensource.com/sites/default/files/uploads/6_system_monitor_out_of_memory.jpg (System Monitor memory bottleneck)
[12]: https://opensource.com/article/18/9/swap-space-linux-systems
[13]: https://opensource.com/life/16/2/open-source-tools-system-monitoring
[14]: https://opensource.com/sites/default/files/uploads/7_atop_storage_bottleneck.jpg (atop disk bottleneck)
[15]: https://opensource.com/sites/default/files/uploads/8_usb_standards_speeds.jpg (USB standards)
[16]: https://www.samsung.com/us/computing/memory-storage/solid-state-drives/
[17]: https://en.wikipedia.org/wiki/USB
[18]: https://wiki.gnome.org/Apps/Disks
[19]: https://opensource.com/article/20/1/internet-speed-tests
[20]: https://www.speedtest.net/
[21]: https://fast.com/
[22]: https://www.speakeasy.net/speedtest/
[23]: https://opensource.com/sites/default/files/uploads/9_nethogs_bandwidth_consumers.jpg (Nethogs)
[24]: https://opensource.com/sites/default/files/uploads/10_hardinfo_video_card_information.jpg (Video Chipset Information)
[25]: https://www.wepc.com/tips/cpu-gpu-bottleneck/
[26]: https://itsfoss.com/conky-gui-ubuntu-1304/
[27]: https://opensource.com/article/19/11/monitoring-linux-glances
[28]: https://opensource.com/sites/default/files/uploads/11_conky_cpu_and_gup_monitoring.jpg (Conky CPU and GPU monitoring)
[29]: https://askubuntu.com/questions/387594/how-to-measure-gpu-usage
[30]: https://www.cyberciti.biz/open-source/command-line-hacks/linux-gpu-monitoring-and-diagnostic-commands/

View File

@ -0,0 +1,92 @@
[#]: subject: (Plausible: Privacy-Focused Google Analytics Alternative)
[#]: via: (https://itsfoss.com/plausible/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Plausible: Privacy-Focused Google Analytics Alternative
======
[Plausible][1] is a simple, privacy-friendly analytics tool. It helps you analyze the number of unique visitors, pageviews, bounce rate and visit duration.
If you have a website, you probably understand those terms. As a website owner, analytics helps you know whether your site is getting more visitors over time and where the traffic is coming from, and with some knowledge of these things, you can work on improving your website for more visits.
When it comes to website analytics, the one service that rules this domain is Google's free tool, Google Analytics. Just like Google is the de facto search engine, Google Analytics is the de facto analytics tool. But you don't have to live with it, especially if you cannot trust Big Tech with your and your site visitors' data.
Plausible gives you freedom from Google Analytics, and I am going to discuss this open source project in this article.
Please note that some technical terms in this article may be unfamiliar if you have never managed a website or dealt with analytics.
### Plausible for privacy friendly website analytics
The script Plausible uses for analytics is extremely lightweight, at less than 1 KB in size.
The focus is on preserving privacy, so you get valuable and actionable stats without compromising the privacy of your visitors. Plausible is one of the rare analytics tools that doesn't require a cookie banner or GDPR consent because it is already [GDPR-compliant][2] on the privacy front. That's super cool.
In terms of features, it doesn't have the same level of granularity and detail as Google Analytics. Plausible banks on simplicity. It shows a graph of your traffic stats for the past 30 days. You may also switch to a real-time view.
![][3]
You can also see where your traffic is coming from and which pages on your website get the most visits. The sources can also show UTM campaigns.
![][4]
You also have the option to enable GeoIP to get some insights about the geographical location of your website visitors. You can also check how many visitors use a desktop or mobile device to visit your website. There is also an option for operating system, and as you can see, [Linux Handbook][5] gets 48% of its visitors from Windows devices. Pretty strange, right?
![][6]
Clearly, the data provided is nowhere close to what Google Analytics can do, but that's intentional. Plausible intends to provide you with simple metrics.
### Using Plausible: Opt for paid managed hosting or self-host it on your server
There are two ways you can start using Plausible. The first is to sign up for its official managed hosting. You'll have to pay for the service, and this eventually helps the development of the Plausible project. There is a 30-day trial period, and it doesn't even require any payment information from your side.
The pricing starts at $6 per month for 10k monthly pageviews. Pricing increases with the number of pageviews. You can calculate the pricing on the Plausible website.
[Plausible Pricing][7]
You can try it for 30 days and see if you would like to pay the Plausible developers for the service and own your data.
If you think the pricing is not affordable, you can take advantage of the fact that Plausible is open source and deploy it yourself. If you are interested, read our [in-depth guide on self-hosting a Plausible instance with Docker][8].
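For the curious, the Docker-based self-hosting flow is roughly this (a sketch based on the upstream `plausible/hosting` repository; file names and steps may have changed, so follow the linked guide for the authoritative instructions):

```
$ git clone https://github.com/plausible/hosting
$ cd hosting
# set BASE_URL, SECRET_KEY_BASE, etc. in plausible-conf.env
$ docker-compose up -d
```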
At Its FOSS, we self-host Plausible. Our Plausible instance has three of our websites added.
![Plausble dashboard for Its FOSS websites][9]
If you maintain the website of an open source project and would like to use Plausible, you can contact us through our [High on Cloud project][10]. With High on Cloud, we help small businesses host and use open source software on their servers.
### Conclusion
If you are not super obsessed with data and just want a quick glance at how your website is performing, Plausible is a decent choice. I like it because it is lightweight and privacy-compliant. That's the main reason why I use it on Linux Handbook, our [ethical web portal for teaching Linux server related stuff][11].
Overall, I am pretty content with Plausible and recommend it to other website owners.
Do you run or manage a website as well? What tool do you use for the analytics or do you not care about that at all?
--------------------------------------------------------------------------------
via: https://itsfoss.com/plausible/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://plausible.io/
[2]: https://gdpr.eu/compliance/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-graph-lhb.png?resize=800%2C395&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-stats-lhb-2.png?resize=800%2C333&ssl=1
[5]: https://linuxhandbook.com/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-geo-ip-stats.png?resize=800%2C331&ssl=1
[7]: https://plausible.io/#pricing
[8]: https://linuxhandbook.com/plausible-deployment-guide/
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-analytics-for-itsfoss.png?resize=800%2C231&ssl=1
[10]: https://highoncloud.com/
[11]: https://linuxhandbook.com/about/#ethical-web-portal

View File

@ -0,0 +1,144 @@
[#]: subject: (10 open source tools for content creators)
[#]: via: (https://opensource.com/article/21/3/open-source-tools-web-design)
[#]: author: (Kristina Tuvikene https://opensource.com/users/hfkristina)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
10 open source tools for content creators
======
Check out these lesser-known web design tools for your next project.
![Painting art on a computer screen][1]
There are a lot of well-known open source applications used in web design, but there are also many great tools that are not as popular. I thought I'd challenge myself to find some obscure options on the chance I might find something useful.
Open source offers a wealth of options, so it's no surprise that I found 10 new applications that I now consider indispensable to my work.
### Bulma
![Bulma widgets][2]
[Bulma][3] is a modular and responsive CSS framework for designing interfaces that flow beautifully. Design work is hardest between the moment of inspiration and the time of initial implementation, and that's exactly the problem Bulma helps solve. It's a collection of useful front-end components that a designer can combine to create an engaging and polished interface. And the best part is that it requires no JavaScript. It's all done in CSS.
Included components include forms, columns, tabbed interfaces, pagination, breadcrumbs, buttons, notifications, and much more.
### Skeleton
![Skeleton][4]
(Kristina Tuvikene, [CC BY-SA 4.0][5])
[Skeleton][6] is a lightweight open source framework that gives you a simple grid, basic formats, and cross-browser support. It's a great alternative to bulky frameworks and lets you start coding your site with a minimal but highly functional foundation. There's a slight learning curve, as you do have to get familiar with its codebase, but after you've built one site with Skeleton, you've built a thousand, and it becomes second nature.
### The Noun Project
![The Noun Project][7]
(Kristina Tuvikene, [CC BY-SA 4.0][5])
[The Noun Project][8] is a collection of more than 3 million icons and images. You can use them on your site or as inspiration to create your own designs. I've found hundreds of useful icons on the site, and they're superbly easy to use. Because they're so basic, you can use them as-is for a nice, minimal look or bring them into your [favorite image editor][9] and customize them for your project.
### MyPaint
![MyPaint][10]
(Kristina Tuvikene, [CC BY-SA 4.0][5])
If you fancy creating your own icons or maybe some incidental art, then you should take a look at [MyPaint][11]. It is a lightweight painting tool that supports various graphic tablets, features dozens of amazing brush emulators and textures, and has a clean, minimal interface, so you can focus on creating your illustration.
### Glimpse
![Glimpse][12]
(Kristina Tuvikene, [CC BY-SA 4.0][5])
[Glimpse][13] is a cross-platform photo editor, a fork of [GIMP][14] that adds some nice features such as keyboard shortcuts similar to another popular (non-open) image editor. This is one of those must-have [applications for any graphic designer][15]. Glimpse doesn't have a macOS release yet, but Mac users may use GIMP in the meantime.
### LazPaint
![LaPaz][16]
(Kristina Tuvikene, [CC BY-SA 4.0][5])
[LazPaint][17] is a lightweight raster and vector graphics editor with multiple tools and filters. It's also available on multiple platforms and offers straightforward vector editing for quick and basic work.
### The League of Moveable Type
![League of Moveable Type][18]
(Kristina Tuvikene, [CC BY-SA 4.0][5])
My favorite open source font foundry, [The League of Moveable Type][19], offers expertly designed open source font faces. There's something suitable for every sort of project here.
### Shotcut
![Shotcut][20]
(Kristina Tuvikene, [CC BY-SA 4.0][5])
[Shotcut][21] is a non-linear video editor that supports multiple audio and video formats. It has an intuitive interface, undockable panels, and you can do some basic to advanced video editing using this open source tool.
### Draw.io
![Draw.io][22]
(Kristina Tuvikene, [CC BY-SA 4.0][5])
[Draw.io][23] is lightweight, dedicated software with a straightforward user interface for creating professional diagrams and flowcharts. You can run it online or [get it from GitHub][24] and install it locally.
### Bonus resource: Olive video editor
![Olive][25]
(©2021, [Olive][26])
[Olive video editor][27] is a work in progress, but it's considered a very strong contender for premium open source video editing software. It's something you should keep your eye on for sure.
### Add these to your collection
Web design is an exciting line of work, and there's always something unexpected to deal with or invent. There are many great open source options out there for the resourceful web designer, and you'll benefit from trying these out to see if they fit your style.
What open source web design tools do you use that I've missed? Please share your favorites in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/open-source-tools-web-design
作者:[Kristina Tuvikene][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hfkristina
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
[2]: https://opensource.com/sites/default/files/bulma.jpg (Bulma widgets)
[3]: https://bulma.io/
[4]: https://opensource.com/sites/default/files/uploads/skeleton.jpg (Skeleton)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: http://getskeleton.com/
[7]: https://opensource.com/sites/default/files/uploads/nounproject.jpg (The Noun Project)
[8]: https://thenounproject.com/
[9]: https://opensource.com/life/12/6/design-without-debt-five-tools-for-designers
[10]: https://opensource.com/sites/default/files/uploads/mypaint.jpg (MyPaint)
[11]: http://mypaint.org/
[12]: https://opensource.com/sites/default/files/uploads/glimpse.jpg (Glimpse)
[13]: https://glimpse-editor.github.io/
[14]: https://www.gimp.org/
[15]: https://websitesetup.org/web-design-software/
[16]: https://opensource.com/sites/default/files/uploads/lapaz.jpg (LaPaz)
[17]: https://lazpaint.github.io/
[18]: https://opensource.com/sites/default/files/uploads/league-of-moveable-type.jpg (League of Moveable Type)
[19]: https://www.theleagueofmoveabletype.com/
[20]: https://opensource.com/sites/default/files/uploads/shotcut.jpg (Shotcut)
[21]: https://shotcut.org/
[22]: https://opensource.com/sites/default/files/uploads/drawio.jpg (Draw.io)
[23]: http://www.draw.io/
[24]: https://github.com/jgraph/drawio
[25]: https://opensource.com/sites/default/files/uploads/olive.png (Olive)
[26]: https://olivevideoeditor.org/020.php
[27]: https://olivevideoeditor.org/

View File

@ -0,0 +1,144 @@
[#]: subject: (How to read and write files in C++)
[#]: via: (https://opensource.com/article/21/3/ccc-input-output)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How to read and write files in C++
======
If you know how to use I/O streams in C++, you can (in principle) handle
any kind of I/O device.
![Computer screen with files or windows open][1]
In C++, reading and writing to files can be done by using I/O streams in conjunction with the stream operators `>>` and `<<`. When reading or writing to files, those operators are applied to an instance of a class representing a file on the hard drive. This stream-based approach has a huge advantage: From a C++ perspective, it doesn't matter what you are reading or writing to, whether it's a file, a database, the console, or another PC you are connected to over the network. Therefore, knowing how to write files using stream operators can be transferred to other areas.
### I/O stream classes
The C++ standard library provides the class [ios_base][2]. This class acts as the base class for all I/O stream-compatible classes, such as [basic_ofstream][3] and [basic_ifstream][4]. This example will use the specialized types for reading/writing characters, `ifstream` and `ofstream`.
* `ofstream` means _output file stream_, and it can be accessed with the insertion operator, `<<`.
* `ifstream` means _input file stream_, and it can be accessed with the extraction operator, `>>`.
Both types are defined inside the header `<fstream>`.
A class that inherits from `ios_base` can be thought of as a data sink when writing to it or as a data source when reading from it, completely detached from the data itself. This object-oriented approach makes concepts such as [separation of concerns][5] and [dependency injection][6] easy to implement.
### A simple example
This example program is quite simple: It creates an `ofstream`, writes to it, creates an `ifstream`, and reads from it:
```
#include <iostream> // cout, cin, cerr etc...
#include <fstream>  // ifstream, ofstream
#include <string>

int main()
{
    std::string sFilename = "MyFile.txt";

    /******************************************
     *                                        *
     *                WRITING                 *
     *                                        *
     ******************************************/

    std::ofstream fileSink(sFilename); // Creates an output file stream

    if (!fileSink) {
        std::cerr << "Cannot open " << sFilename << std::endl;
        exit(-1);
    }

    /* std::endl will automatically append the correct EOL */
    fileSink << "Hello Open Source World!" << std::endl;

    /******************************************
     *                                        *
     *                READING                 *
     *                                        *
     ******************************************/

    std::ifstream fileSource(sFilename); // Creates an input file stream

    if (!fileSource) {
        std::cerr << "Cannot open " << sFilename << std::endl;
        exit(-1);
    }
    else {
        // Intermediate buffer
        std::string buffer;

        // By default, the >> operator reads word by word (up to whitespace)
        while (fileSource >> buffer)
        {
            std::cout << buffer << std::endl;
        }
    }

    exit(0);
}
```
This code is available on [GitHub][7]. When you compile and execute it, you should get the following output:
![Console screenshot][8]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
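If you want to try it yourself, a typical compile-and-run sequence on Linux might look like this (a sketch; the source file name `main.cpp` and the C++ standard flag are assumptions):

```
$ g++ -std=c++17 -o io_example main.cpp
$ ./io_example
Hello
Open
Source
World!
```

Each word lands on its own line because the reading loop extracts word by word and appends `std::endl` after each one.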
This is a simplified, beginner-friendly example. If you want to use this code in your own application, please note the following:
* The file streams are automatically closed at the end of the program. If the program keeps running after you're done with a file, you should close the stream manually by calling the `close()` method.
* These file stream classes inherit (over several levels) from [basic_ios][10], which overloads the `!` operator. This lets you implement a simple check if you can access the stream. On [cppreference.com][11], you can find an overview of when this check will (and won't) succeed, and you can implement further error handling.
* By default, `ifstream` stops at white space and skips it. To read line by line until you reach [EOF][12], use the `getline()` method.
* For reading and writing binary files, pass the `std::ios::binary` flag to the constructor; this prevents [EOL][13] characters from being appended to each line.
### Writing from the systems perspective
When writing files, the data is written to the system's in-memory write buffer. When the system receives the system call [sync][14], this buffer's contents are written to the hard drive. This mechanism is also the reason you shouldn't remove a USB stick without telling the system. Usually, _sync_ is called on a regular basis by a daemon. If you really want to be on the safe side, you can also call _sync_ manually:
```
#include <unistd.h> // needs to be included
sync();
```
### Summary
Reading and writing to files in C++ is not that complicated. Moreover, if you know how to deal with I/O streams, you also know (in principle) how to deal with any kind of I/O device. Libraries for various kinds of I/O devices let you use stream operators for easy access. This is why it is beneficial to know how I/O streams work.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/ccc-input-output
作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
[2]: https://en.cppreference.com/w/cpp/io/ios_base
[3]: https://en.cppreference.com/w/cpp/io/basic_ofstream
[4]: https://en.cppreference.com/w/cpp/io/basic_ifstream
[5]: https://en.wikipedia.org/wiki/Separation_of_concerns
[6]: https://en.wikipedia.org/wiki/Dependency_injection
[7]: https://github.com/hANSIc99/cpp_input_output
[8]: https://opensource.com/sites/default/files/uploads/c_console_screenshot.png (Console screenshot)
[9]: https://creativecommons.org/licenses/by-sa/4.0/
[10]: https://en.cppreference.com/w/cpp/io/basic_ios
[11]: https://en.cppreference.com/w/cpp/io/basic_ios/operator!
[12]: https://en.wikipedia.org/wiki/End-of-file
[13]: https://en.wikipedia.org/wiki/Newline
[14]: https://en.wikipedia.org/wiki/Sync_%28Unix%29

View File

@ -0,0 +1,108 @@
[#]: subject: (Network address translation part 3 – the conntrack event framework)
[#]: via: (https://fedoramagazine.org/conntrack-event-framework/)
[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Network address translation part 3 – the conntrack event framework
======
![][1]
This is the third post in a series about network address translation (NAT). The first article introduced [how to use the iptables/nftables packet tracing feature][2] to find the source of NAT-related connectivity problems. Part 2 [introduced the “conntrack” command][3]. This part gives an introduction to the “conntrack” event framework.
### Introduction
NAT configured via iptables or nftables builds on top of netfilter's connection tracking framework. conntrack's event facility allows real-time monitoring of incoming and outgoing flows. This event framework is useful for debugging or logging flow information, for instance with [ulog][4] and its IPFIX output plugin.
### Conntrack events
Run the following command to see a real-time conntrack event log:
```
# conntrack -E
NEW tcp 120 SYN_SENT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 [UNREPLIED] src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123
UPDATE tcp 60 SYN_RECV src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123
UPDATE tcp 432000 ESTABLISHED src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
UPDATE tcp 120 FIN_WAIT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
UPDATE tcp 30 LAST_ACK src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
UPDATE tcp 120 TIME_WAIT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
```
This prints a continuous stream of events:
* new connections
* removal of connections
  * changes in a connection's state
Hit _Ctrl+C_ to quit.
The conntrack tool offers a number of options to limit the output. For example, it's possible to show only DESTROY events. The NEW event is generated after the iptables/nftables rule set accepts the corresponding packet.
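Such a filtered invocation might look like this (a sketch; check `conntrack --help` on your system, as option spellings can vary between conntrack-tools versions):

```
# conntrack -E -e DESTROY
```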
### Conntrack expectations
Some legacy protocols require multiple connections to work, such as [FTP][5], [SIP][6] or [H.323][7]. To make these work in NAT environments, conntrack uses “connection tracking helpers”: kernel modules that can parse the specific higher-level protocol such as ftp.
The _nf_conntrack_ftp_ module parses the ftp command connection and extracts the TCP port number that will be used for the file transfer. The helper module then inserts an “expectation” that consists of the extracted port number and the address of the ftp client. When a new data connection arrives, conntrack searches the expectation table for a match. An incoming connection that matches such an entry is flagged RELATED rather than NEW. This allows you to craft iptables and nftables rulesets that reject incoming connection requests unless they were requested by an existing connection. If the original connection is subject to NAT, the related data connection will inherit this as well. This means that helpers can expose ports on internal hosts that are otherwise unreachable from the wider internet. The next section will explain this expectation mechanism in more detail.
### The expectation table
Use _conntrack -L expect_ to list all active expectations. In most cases this table appears to be empty, even if a helper module is active. This is because expectation table entries are short-lived. Use _conntrack -E expect_ to monitor the system for changes in the expectation table instead.
Use this to determine if a helper is working as intended or to log conntrack actions taken by the helper. Here is an example output of a file download via ftp:
```
# conntrack -E expect
NEW 300 proto=6 src=10.2.1.1 dst=10.8.4.12 sport=0 dport=46767 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=10.2.1.1 master-dst=10.8.4.12 sport=34526 dport=21 class=0 helper=ftp
DESTROY 299 proto=6 src=10.2.1.1 dst=10.8.4.12 sport=0 dport=46767 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=10.2.1.1 master-dst=10.8.4.12 sport=34526 dport=21 class=0 helper=ftp
```
The expectation entry describes the criteria that an incoming connection request must meet in order to be recognized as a RELATED connection. In this example, the connection may come from any port but must go to port 46767 (the port the ftp server expects to receive the DATA connection request on). Furthermore, the source and destination addresses must match the addresses of the ftp client and server.
Events also include the connection that created the expectation and the name of the protocol helper (ftp). The helper has full control over the expectation: it can request full matching (IP addresses of the incoming connection must match), it can restrict to a subnet or even allow the request to come from any address. Check the “mask-dst” and “mask-src” parameters to see what parts of the addresses need to match.
### Caveats
You can configure some helpers to allow wildcard expectations. Such wildcard expectations result in requests coming from an unrelated third-party host being flagged as RELATED. This can open internal servers to the wider internet (“NAT slipstreaming”).
This is the reason helper modules require explicit configuration from the nftables/iptables ruleset. See [this article][8] for more information about helpers and how to configure them. It includes a table that describes the various helpers and the types of expectations (such as wildcard forwarding) they can create. The nftables wiki has a [nft ftp example][9].
A nftables rule like ct state related ct helper “ftp” matches connections that were detected as a result of an expectation created by the ftp helper.
In iptables, use `-m conntrack --ctstate RELATED -m helper --helper ftp`. Always restrict helpers to only allow communication to and from the expected server addresses. This prevents accidental exposure of other, unrelated hosts.
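As a minimal sketch of what such a restricted ruleset could look like in nftables (assumptions: an `inet filter` table and an ftp server at the example address 10.8.4.12 — adjust both to your setup):

```
table inet filter {
    # declare the ftp helper so it can be assigned explicitly
    ct helper ftp-standard {
        type "ftp" protocol tcp
    }
    chain pre {
        type filter hook prerouting priority filter; policy accept;
        # enable the helper only for command connections to this server
        ip daddr 10.8.4.12 tcp dport 21 ct helper set "ftp-standard"
    }
    chain input {
        type filter hook input priority filter; policy drop;
        ct state established accept
        # allow new ftp command connections to the server
        tcp dport 21 accept
        # accept only data connections the ftp helper expected
        ct state related ct helper "ftp" accept
    }
}
```

Because the helper is assigned only for traffic to that one server, no other host can create expectations.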
### Summary
This article introduced the conntrack event facility and gave examples of how to inspect the expectation table. The next part of the series will describe low-level debug knobs of conntrack.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/conntrack-event-framework/
作者:[Florian Westphal][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/strlen/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/network-address-translation-part-3-816x345.jpg
[2]: https://fedoramagazine.org/network-address-translation-part-1-packet-tracing/
[3]: https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/
[4]: https://netfilter.org/projects/ulogd/index.html
[5]: https://en.wikipedia.org/wiki/File_Transfer_Protocol
[6]: https://en.wikipedia.org/wiki/Session_Initiation_Protocol
[7]: https://en.wikipedia.org/wiki/H.323
[8]: https://github.com/regit/secure-conntrack-helpers/blob/master/secure-conntrack-helpers.rst
[9]: https://wiki.nftables.org/wiki-nftables/index.php/Conntrack_helpers

View File

@ -0,0 +1,70 @@
[#]: subject: (Why you should care about service mesh)
[#]: via: (https://opensource.com/article/21/3/service-mesh)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Why you should care about service mesh
======
Service mesh provides benefits for development and operations in
microservices environments.
![Net catching 1s and 0s or data in the clouds][1]
Many developers wonder why they should care about [service mesh][2]. It's a question I'm asked often in my presentations at developer meetups, conferences, and hands-on workshops about microservices development with cloud-native architecture. My answer is always the same: "As long as you want to simplify your microservices architecture, it should be running on Kubernetes."
Concerning simplification, you probably also wonder why distributed microservices must be designed with such complexity to run on Kubernetes clusters. As this article explains, many developers solve the microservices architecture's complexity with service mesh and gain additional benefits by adopting service mesh in production.
### What is a service mesh?
A service mesh is a dedicated infrastructure layer for providing a transparent and code-independent (polyglot) way to eliminate nonfunctional microservices capabilities from the application code.
![Before and After Service Mesh][3]
(Daniel Oh, [CC BY-SA 4.0][4])
### Why service mesh matters to developers
When developers deploy microservices to the cloud, they have to address nonfunctional microservices capabilities to avoid cascading failures, regardless of business functionalities. Those capabilities typically can be represented in service discovery, logging, monitoring, resiliency, authentication, elasticity, and tracing. Developers must spend more time adding them to each microservice rather than developing actual business logic, which makes the microservices heavy and complex.
As organizations accelerate their move to the cloud, the service mesh can increase developer productivity. Instead of making the services responsible for dealing with those complexities and adding more code into each service to deal with cloud-native concerns, the Kubernetes + service mesh platform is responsible for providing those services to any application (existing or new, in any programming language or framework) running on the platform. Then the microservices can be lightweight and focus on their business logic rather than cloud-native complexities.
### Why service mesh matters to ops
This doesn't answer why ops teams need to care about the service mesh for operating cloud-native microservices on Kubernetes. It's because the ops teams have to ensure robust security, compliance, and observability as new cloud-native applications spread across large hybrid and multicloud Kubernetes environments.
The service mesh is composed of a control plane for managing proxies to route traffic and a data plane for injecting sidecars. The sidecars allow the ops teams to do things like adding third-party security tools and tracing traffic in all service communications to avoid security breaches or compliance issues. The service mesh also improves observation capabilities by visualizing tracing metrics on graphical dashboards.
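To make this concrete, here is a hedged sketch of how sidecar injection is typically switched on (assuming Istio as the mesh and a hypothetical `my-app` namespace): a single label tells the control plane to inject the Envoy sidecar proxy, the data-plane component, into every pod deployed in that namespace.

```
# Hypothetical namespace; the istio-injection label asks Istio's
# control plane to inject the Envoy sidecar into each new pod.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    istio-injection: enabled
```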
### How to get started with service mesh
Service mesh manages cloud-native capabilities more efficiently—for developers and operators and from application development to platform operation.
You might want to know where to get started adopting service mesh in alignment with your microservices applications and architecture. Luckily, there are many open source service mesh projects. Many cloud service providers also offer service mesh capabilities within their Kubernetes platforms.
![CNCF Service Mesh Landscape][5]
(Daniel Oh, [CC BY-SA 4.0][4])
You can find links to the most popular service mesh projects and services on the [CNCF Service Mesh Landscape][6] webpage.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/service-mesh
作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
[2]: https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh
[3]: https://opensource.com/sites/default/files/uploads/vm-vs-service-mesh.png (Before and After Service Mesh)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/service-mesh-providers.png (CNCF Service Mesh Landscape)
[6]: https://landscape.cncf.io/card-mode?category=service-mesh&grouping=category

View File

@ -0,0 +1,129 @@
[#]: subject: (My favorite open source tools to meet new friends)
[#]: via: (https://opensource.com/article/21/3/open-source-streaming)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
My favorite open source tools to meet new friends
======
Quarantine hasn't been all bad—it's allowed people to create fun online
communities that also help others.
![Two people chatting via a video conference app][1]
In March 2020, I joined the rest of the world in quarantine at home for two weeks. Then, two weeks turned into more. And more. It wasn't too hard on me at first. I had been working a remote job for a year already, and I'm sort of an introvert in some ways. Being at home was sort of "business as usual" for me, but I watched as it took its toll on others, including my wife.
### An unlikely lifeline
That spring, I found out a buddy and co-worker of mine was a Fairly Well-Known Streamer™ who had been doing a podcast for something ridiculous, like _15 years_. So, I popped into the podcast's Twitch channel, [2DorksTV][2]. What I found, I was not prepared for. My friend and his co-hosts perform their podcast _live_ on Twitch, like the cast of _Saturday Night Live_ or something! _**Live!**_ The hosts, Stephen, Ashley, and Jacob, joked and laughed, (sometimes) read news stories, and interacted with a vibrant community of followers—_live!_
I introduced myself in the chat, and Stephen looked into the camera and welcomed me, as though he were looking at and talking directly to me. I was surprised to find that there was a real back and forth. The community in the chat talked with the hosts and one another, and the hosts interacted with the chat.
It was a great time, and I laughed out loud for the first time in several months.
### Trying a new thing
Shortly after getting involved in the community, I thought I might try out streaming for myself. I didn't have a podcast or a co-host, but I really, _really_ like to play Dwarf Fortress, a video game that's not open source but is built for Linux. People stream themselves playing games, right? I had all the stuff I needed because I already worked remotely full time. Other folks were struggling to find a webcam in stock and a spot to work that wasn't a kitchen table, but I'd been set up for months.
When I looked into it more, I found that a free and open source video recording and streaming application named OBS Studio is one of the most popular ways to stream to Twitch and other platforms. Score one for open source!
[OBS worked][3] _right out of the box_ on my Fedora system, so there's not much to write about. And that's a good thing!
So, it wasn't because of the software that my first stream was…rough, to say the least. I didn't really know what I was doing, the quality wasn't that great, and I kept muting the mic to cough and forgetting to turn it back on. I think there were a grand total of zero viewers who saw that stream, and that's probably for the best.
The next day though, I shared what I'd done in chat, and everyone was amazingly supportive. I decided to try again. In the second stream, Stephen popped in and said hi, and I had the opportunity to be on the other side of the camera, talking to a friend in chat and really enjoying the interaction. Within a few more streams, more of the community started to hop on and chat and hang out and, despite having no idea what was going on (Dwarf Fortress is famously a bit dense), sticking around and interacting with me.
### The open source behind the stream
Eventually, I started to up my game. Not my Dwarf Fortress game, but my streaming game. My stream slowly became more polished and more frequent. I created my own official stream, called _It's Dwarf Fortress! …with Hammerdwarf!_
The entire production is powered by open source:
* [VLC Media Player][4] plays the intro and outro music.
* I use [GIMP][5] (GNU Image Manipulation Program) to make the logos and splash screens.
* [OBS Studio][6] handles the recording and streaming.
* Both GIMP and OBS are packaged with [Flatpak][7], a seriously cool next-generation packaging technology for Linux.
* I've recently started using [OpenShot][8] to edit recordings of my stream before uploading them to YouTube.
* Even the fonts I use are Open Font License fonts.
* All this, the game included, live on a Fedora Linux system.
### Coding out in the open
As I got further into streaming, I discovered, again through Stephen, that folks stream themselves programming. What?! But it's oddly satisfying, listening to someone calmly talk about what they're doing and why and hearing the quiet clicks of their keyboard. I've started keeping those kinds of things on in the background while I work, just for ambiance.
Eventually, I thought to myself, "Why not? I could do that too. I program things." I had plenty of side projects to work on, and maybe folks would come hang out with me while I work on them.
I created a new stream called _It's **not** Dwarf Fortress! …with Hammerdwarf!_ (Look—that's just how Dwarf Fortress-y I am.) I started up that stream and worked on a little side project, and—the very first time—a group of four or five folks from my previous job hopped in and hung out with me, despite it being the middle of their workday. Friends from the 2DorksTV Discord joined as well, and we had a nice big group of folks chatting and helping me troubleshoot code and regexes and missing whitespace. And then, some random folks I didn't know, folks looking around for a stream on Twitch, found it and jumped in as well!
### Sharing is what open source is about
Fast forward a few months, and I was talking (again) with Stephen. Over the months, we've discussed how folks represent themselves online and commiserated about feeling out of place at work, fighting to feel like we deserve to be there, to convince ourselves that we're good enough to be there. It's not just him or just me, I realize. I have this conversation with _so many people_. I told Stephen that I think it's because there is so little representation of _trying_. Everyone shares their success story on Twitter. They only ever _do_ or _don't_.
They never share themselves trying.
("Friggin Yoda, man," Stephen commented on the matter. You can see why he's got a successful podcast.)
Presentations at tech conferences are filled with complicated, difficult stories, but they're always success stories. The "internet famous" in our field, developer advocates and tech gurus, share amazing new things and present complicated demos, but all of them are backed by teams of people working with them that no one ever sees. Online, with tech specifically and honestly the rest of the world generally, you see only the finished sausage, not all the grind.
These are the things I think help people, and I realized that I need to be open about all of my processes. Projects I work on take me _forever_ to figure out. Code that I write _sucks_. I'm a senior software engineer/site reliability engineer for a large software company. I spend _hours and hours_ reading documentation, struggling to figure out how something works, and slowly, slowly incrementing on it. Even that first Dwarf Fortress stream needed a lot of help.
And this is normal!
Everyone does it, but we're so tuned into sharing our successes and hiding our failures that all we can compare our flawed selves to is other people's successes. We never see their failures, and we try to live up to a standard of illusion.
I even struggled to decide whether I should create a whole new channel for this thing I was trying to do. I spent all this time building a professional career image online—I couldn't show everyone how much of a Dwarf Dork I _really_ am! And once again, Stephen inspired me:
> "Hammerdwarf is you. And your coding stream was definitely a professional stream. The channel name didn't matter…Be authentic."
Professional Chris Collins and personal Hammerdwarf make up who I am. I have a wife and two dogs, I like space stuff, I get a headache every now and again, I write for Opensource.com and [EnableSysadmin][9], I speak at tech conferences, and sometimes, I have to take an afternoon off work to sit in the sun or lie awake at night because I miss my friends.
All that to say, my summer project, inspired by Stephen, Ashley, and Jacob and the community from 2DorksTV and powered by open source technology, is to fail publicly and to be real. To borrow a phrase from another excellent podcast: I am [failing out loud][10].
I've started a streaming program on Twitch called _Practically Programming_, dedicated to showing what it is like for me at work, working on real things and failing and struggling and needing help. I've been in tech for almost 20 years, and I still have to learn every day, and now I'm going to do so online where everyone can see me. Because it's important to show your failures and flaws as much as your successes, and it's important to see others fail and realize it's a normal part of life.
![Practically Programming logo][11]
(Chris Collins, [CC BY-SA 4.0][12])
That's what I did last summer.
And _Practically Programming_ is what I will be doing this spring and from now on. Please join me if you're interested, and please, if you fail at something or struggle with something, know that everyone else is doing so, too. As long as you keep trying and keep learning, it doesn't matter how many times you fail.
You got this!
* * *
_Practically Programming_ is on my [Hammerdwarf Twitch channel][13] on Tuesdays and Thursdays at 5pm Pacific time.
Dwarf Fortress is on almost any other time…
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/open-source-streaming
作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chat_video_conference_talk_team.png?itok=t2_7fEH0 (Two people chatting via a video conference app)
[2]: https://www.twitch.com/2dorkstv
[3]: https://opensource.com/article/20/4/open-source-live-stream
[4]: https://www.videolan.org/vlc/index.html
[5]: https://www.gimp.org/
[6]: https://obsproject.com/
[7]: https://opensource.com/article/21/2/linux-packaging
[8]: https://opensource.com/article/21/2/linux-python-video
[9]: http://redhat.com/sysadmin
[10]: https://open.spotify.com/show/1WcfOvSiD99zrVLFWlFHpo
[11]: https://opensource.com/sites/default/files/uploads/practically_programming_logo.png (Practically Programming logo)
[12]: https://creativecommons.org/licenses/by-sa/4.0/
[13]: https://www.twitch.tv/hammerdwarf

View File

@ -0,0 +1,97 @@
[#]: subject: (Manipulate data in files with Lua)
[#]: via: (https://opensource.com/article/21/3/lua-files)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Manipulate data in files with Lua
======
Understand how Lua handles reading and writing data.
![Person standing in front of a giant computer screen with numbers, data][1]
Some data is ephemeral, stored in RAM, and only significant while an application is running. But some data is meant to be persistent, stored on a hard drive for later use. When you program, whether you're working on a simple script or a complex suite of tools, it's common to need to read and write files. Sometimes a file may contain configuration options, and other times the file is the data that your user is creating with your application. Every language handles this task a little differently, and this article demonstrates how to handle data files with Lua.
### Installing Lua
If you're on Linux, you can install Lua from your distribution's software repository. On macOS, you can install Lua from [MacPorts][2] or [Homebrew][3]. On Windows, you can install Lua from [Chocolatey][4].
Once you have Lua installed, open your favorite text editor and get ready to code.
### Reading a file with Lua
Lua uses the `io` library for data input and output. The following example creates a function called `ingest` to read data from a file and then parses it with the `:read` function. When opening a file in Lua, there are several modes you can enable. Because I just need to read data from this file, I use the `r` (for "read") mode:
```
function ingest(file)
   local f = io.open(file, "r")
   local lines = f:read("*all")
   f:close()
   return(lines)
end
myfile=ingest("example.txt")
print(myfile)
```
In the code, notice that the variable `myfile` is created to trigger the `ingest` function, and therefore, it receives whatever that function returns. The `ingest` function returns the lines (from a variable intuitively called `lines`) of the file. When the contents of the `myfile` variable are printed in the final step, the lines of the file appear in the terminal.
If the file `example.txt` contains configuration options, then I would write some additional code to parse that data, probably using another Lua library depending on whether the configuration was stored as an INI file or YAML file or some other format. If the data were an SVG graphic, I'd write extra code to parse XML, probably using an SVG library for Lua. In other words, the data your code reads can be manipulated once it's loaded into memory, but all that's required to load it is the `io` library.
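If you need to process a file line by line instead of all at once, the same `io` library provides `io.lines`. Here is a minimal sketch (reusing the `example.txt` file from above):

```
-- Iterate over example.txt one line at a time,
-- printing each line with its line number.
local n = 0
for line in io.lines("example.txt") do
   n = n + 1
   print(n, line)
end
```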
### Writing data to a file with Lua
Whether you're storing data your user is creating with your application or just metadata about what the user is doing in an application (for instance, game saves or recent songs played), there are many good reasons to store data for later use. In Lua, this is achieved through the `io` library by opening a file, writing data into it, and closing the file:
```
function exgest(file)
   local f = io.open(file, "a")
   io.output(f)
   io.write("hello world\n")
   io.close(f)
end
exgest("example.txt")
```
Where the earlier example opened the file in `r` mode to read data, this time I use `a` (for "append") mode to write data to the end of the file. Because I'm writing plain text into a file, I added my own newline character (`\n`). Often, you're not writing raw text into a file, and you'll probably use an additional library to write a specific format instead. For instance, you might use an INI or YAML library to help write configuration files, an XML library to write XML, and so on.
### File modes
When opening files in Lua, there are some safeguards and parameters to define how a file should be handled. The default is `r`, which permits you to read data only:
* **r** for read only
* **w** to overwrite or create a new file if it doesn't already exist
* **r+** to read and overwrite
* **a** to append data to a file or make a new file if it doesn't already exist
* **a+** to read data, append data to a file, or make a new file if it doesn't already exist
There are a few others (`b` for binary formats, for instance), but those are the most common. For the full documentation, refer to the excellent Lua documentation on [Lua.org/manual][5].
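For instance, `a+` combines both directions: you can append new data and then read the whole file back through the same handle. A minimal sketch (again reusing `example.txt`):

```
-- Append a line in a+ mode, rewind with seek, and read the
-- full contents back through the same file handle.
local f = io.open("example.txt", "a+")
f:write("one more line\n")  -- writes always go to the end in a+ mode
f:seek("set")               -- move back to the start of the file
print(f:read("*all"))       -- read everything, including the new line
f:close()
```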
### Lua and files
Like other programming languages, Lua has plenty of library support to access a filesystem to read and write data. Because Lua has a consistent and simple syntax, it's easy to perform complex processing on data in files of any format. Try using Lua for your next software project, or as an API for your C or C++ project.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/lua-files
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://opensource.com/article/20/11/macports
[3]: https://opensource.com/article/20/6/homebrew-mac
[4]: https://opensource.com/article/20/3/chocolatey
[5]: http://lua.org/manual

View File

@ -0,0 +1,226 @@
[#]: subject: (Rapidly configure SD cards for your Raspberry Pi cluster)
[#]: via: (https://opensource.com/article/21/3/raspberry-pi-cluster)
[#]: author: (Gregor von Laszewski https://opensource.com/users/laszewski)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Rapidly configure SD cards for your Raspberry Pi cluster
======
Create multiple SD cards that are preconfigured to create Pi clusters
with Cloudmesh Pi Burner.
![Raspberries with pi symbol overlay][1]
There are many reasons people want to create [computer clusters][2] using the Raspberry Pi, including that they have full control over their platform, they're able to use an inexpensive, highly usable platform, and get the opportunity to learn about cluster computing in general.
There are different methods for setting up a cluster, such as headless, network booting, and booting from SD cards. Each method has advantages and disadvantages, but the latter method is most familiar to users who have worked with a single Pi. Most cluster setups involve many complex steps that require a significant amount of time because they are executed on an individual Pi. Even starting is non-trivial, as you need to set up a network to access them.
Despite improvements to the [Raspberry Pi Imager][3] and the availability of [PiBakery][4], the process is still too involved. So, at Cloudmesh, we asked:
> Is it possible to develop a tool that is specifically targeted to burn SD cards for Pis in a cluster one at a time so that the cards can be just plugged in and, with minimal effort, start a cluster that simply works?
In response, we developed a tool called **Cloudmesh Pi Burner** for SD Cards, and we present it within [Pi Planet][5]. No more spending hours upon hours to replicate the steps and learn complex DevOps tutorials; instead, you can get a cluster set up with just a few commands.
For this, we developed `cms burn`, which is a program that you can execute on a "manager" Pi or a Linux or macOS computer to burn cards for your cluster.
We set up a [comprehensive package][6] on GitHub that can be installed easily. You can read about it in detail in the [README][7]. There, you can also find detailed instructions on how to [burn directly][8] from a macOS or Linux computer.
### Getting started
This article explains how to create a cluster setup using five Raspberry Pi units (you need a minimum of two, but this method also works for larger numbers). To follow along, you must have five SD cards, one for each of the five Pi units. It's helpful to have a network switch (managed or unmanaged) with five Ethernet cables (one for each Pi).
#### Requirements
You need:
* 5 Raspberry Pi boards
* 5 SD cards
* 5 Ethernet cables
* A network switch (unmanaged or managed)
* WiFi access
* Monitor, mouse, keyboard (for desktop access on Pi)
  * An SD card slot for your computer or the manager Pi (preferably one that supports USB 3.0 speeds)
* If you're doing this on a Mac, you must install [XCode][9] and [Homebrew][10]
On Linux, the open source **ext4** filesystem is supported by default. However, Apple doesn't provide this capability for macOS, so you must purchase support separately. I use Paragon Software's **extFS** application. Like macOS itself, it is largely based upon open source software but is not itself open source.
At Cloudmesh, we maintain a list of [hardware parts][11] you need to consider when setting up a cluster.
### Network configuration
Figure 1 shows our network configuration. Of the five Raspberry Pi computers, one is dedicated as a _manager_ and four are _workers_. Using WiFi for the manager Pi allows you to set it up anywhere in your house or other location (other configurations are discussed in the README).
Our configuration uses an unmanaged network switch, where the manager and workers communicate locally with each other, and the manager provides internet access to the workers over a bridge that's configured for you.
![Pi cluster setup with bridge network][12]
Pi cluster setup with bridge network (©2021 [The Cloudmesh Projects][13])
### Set up the Cloudmesh burn application
To set up the Cloudmesh burn program, first [create a Python `venv`][14]:
```
$ python3 -m venv ~/ENV3
$ source ~/ENV3/bin/activate
```
Next, install the Cloudmesh cluster generation tools and start the burn process. You must adjust the path to your SD card, which differs depending on your system and what kind of SD card reader you're using. Here's an example:
```
(ENV3)$ pip install cloudmesh-pi-cluster
(ENV3)$ cms help
(ENV3)$ cms burn info
(ENV3)$ cms burn cluster \
    --device=/path/to/sdcard \
    --hostname=red,red01,red02,red03,red04 \
    --ssid=myssid -y
```
Fill out the passwords and plug in the SD cards as requested.
### Start your cluster and configure it
Plug the burned SD cards into the Pis and switch them on. Execute the `ssh` command to log into your manager—it's the one called `red` (worker nodes are identified by number):
```
(ENV3)$ ssh pi@red.local
```
This takes a while, as the filesystems on the SD cards need to be installed, and configurations such as Country, SSH, and WiFi need to be activated.
Once you are in the manager, install the Cloudmesh cluster software in it. (You could have done this automatically, but we decided to leave this part of the process up to you to give you maximum flexibility.)
```
pi@red:~ $ curl -Ls http://cloudmesh.github.io/get/pi \
    --output install.sh
pi@red:~ $ sh ./install.sh
```
After lots of log messages, you see:
```
#################################################
# Install Completed                             #
#################################################
Time to update and upgarde: 339 s
Time to install the venv:   22 s
Time to install cloudmesh:  185 s
Time for total install:     546 s
Time to install: 546 s
#################################################
Please activate with
    source ~/ENV3/bin/activate
```
Reboot:
```
pi@red:~ $ sudo reboot
```
### Start using your cluster
Log in to your manager Pi over SSH:
```
(ENV3)$ ssh pi@red.local
```
Once you're logged into your manager (in this example, `red.local`) on the network, execute a command to see if things are working. For example, you can use a temperature monitor to get the temperature from all Pi boards:
```
(ENV3) pi@red:~ $ cms pi temp red01,red02,red03,red04
pi temp red01,red02
+--------+--------+-------+----------------------------+
| host   |    cpu |   gpu | date                       |
|--------+--------+-------+----------------------------|
| red01  | 45.277 |  45.2 | 2021-02-23 22:13:11.788430 |
| red02  | 42.842 |  42.8 | 2021-02-23 22:13:11.941566 |
| red02  | 43.356 |  42.8 | 2021-02-23 22:13:11.961245 |
| red02  | 44.124 |  42.8 | 2021-02-23 22:13:11.981896 |
+--------+--------+-------+----------------------------+
```
### Access the workers
It's even more convenient to access the workers, so we designed a tunnel command that makes setup easy. Call it on the manager node, for example:
```
(ENV3) pi@red:~ $ cms host setup "red0[1-4]" user@laptop.local
```
This creates SSH keys on all workers, gathers SSH keys from all hosts, and scatters the public keys to the manager's and workers' authorized-keys files. It also makes the manager node a bridge for the worker nodes so they can have internet access. Now, on our laptop, we update our SSH config file with the following command:
```
(ENV3)$ cms host config proxy pi@red.local red0[1-4]
```
Now you can access the workers from your computer. Try it out with the temperature program:
```
(ENV3)$ cms pi temp "red,red0[1-4]"              
+-------+--------+-------+----------------------------+
| host  |    cpu |   gpu | date                       |
|-------+--------+-------+----------------------------|
| red   | 50.147 |  50.1 | 2021-02-18 21:10:05.942494 |
| red01 | 51.608 |  51.6 | 2021-02-18 21:10:06.153189 |
| red02 | 45.764 |  45.7 | 2021-02-18 21:10:06.163067 |
...
+-------+--------+-------+----------------------------+
```
### More information
Since this uses SSH keys to authenticate between the manager and the workers, you can log directly into the workers from the manager. You can find more details in the [README][7] and on [Pi Planet][5]. Other Cloudmesh components are discussed in the [Cloudmesh manual][15].
* * *
_This article is based on [Easy Raspberry Pi cluster setup with Cloudmesh from MacOS][13] and is republished with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/raspberry-pi-cluster
作者:[Gregor von Laszewski][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/laszewski
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2 (Raspberries with pi symbol overlay)
[2]: https://en.wikipedia.org/wiki/Computer_cluster
[3]: https://www.youtube.com/watch?v=J024soVgEeM
[4]: https://www.raspberrypi.org/blog/pibakery/
[5]: https://piplanet.org/
[6]: https://github.com/cloudmesh/cloudmesh-pi-burn
[7]: https://github.com/cloudmesh/cloudmesh-pi-burn/blob/main/README.md
[8]: https://github.com/cloudmesh/cloudmesh-pi-burn#71-quickstart-for-a-setup-of-a-cluster-from-macos-or-linux-with-no-burning-on-a-pi
[9]: https://opensource.com/article/20/8/iterm2-zsh
[10]: https://opensource.com/article/20/6/homebrew-mac
[11]: https://cloudmesh.github.io/pi/docs/hardware/parts/
[12]: https://opensource.com/sites/default/files/uploads/network-bridge.png (Pi cluster setup with bridge network)
[13]: https://cloudmesh.github.io/pi/tutorial/sdcard-burn-pi-headless/
[14]: https://opensource.com/article/20/10/venv-python
[15]: https://cloudmesh.github.io/cloudmesh-manual/

View File

@ -0,0 +1,500 @@
[#]: subject: (Setting up a VM on Fedora Server using Cloud Images and virt-install version 3)
[#]: via: (https://fedoramagazine.org/setting-up-a-vm-on-fedora-server-using-cloud-images-and-virt-install-version-3/)
[#]: author: (pboy https://fedoramagazine.org/author/pboy/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Setting up a VM on Fedora Server using Cloud Images and virt-install version 3
======
![][1]
Photo by [Max Kukurudziak][2] on [Unsplash][3]
Many servers use one or more virtual machines (VMs), e.g. to isolate a public service in the best possible way and to protect the host server from compromise. This article explores the possibilities of deploying [Fedora Cloud Base][4] images as a VM in an autonomous Fedora 33 Server Edition using version 3 of _virt-install_. This capability was introduced with Fedora 33 and the new _\--cloud-init_ option.
### Why use Cloud Images?
The standard virtualization tool for Fedora Server is _libvirt_. For a long time the only way to create a virtual Fedora Server instance was to create a _libvirt_ VM and run the standard Anaconda installation. Several tools exist to make this procedure as comfortable and fail-safe as possible, e.g. a [Cockpit module][5]. The process is pretty straight forward and every Fedora system administrator is used to it.
With the advent of cloud systems came cloud images. These are pre-built, ready-to-run virtual servers. Fedora provides specialized images for various cloud systems as well as the Fedora Cloud Base image, a generic optimized VM. The image is copied to the server and used by a virtual machine as its operational file system.
These images save the system administrator the time-consuming process of many individual passes through Anaconda. An installation merely requires the invocation of _virt-install_ with suitable parameters. It is a CLI tool, thus easily scriptable and reproducible. In a worst case emergency, a replacement VM can be set up quickly.
Fedora Cloud Base images are integrated into the Fedora QA Process. This prevents subtle inconsistencies that may lead to not-so-subtle problems during operation. For any system administrator concerned about security and reliability, this is an incredibly valuable advantage over _libvirt_ compatible VM images from third party vendors. Cloud images speed up the deployment process as well.
#### Implementation considerations
As usual, there is nothing for free. Cloud images use _cloud-init_ for an automatic initial configuration, which is otherwise done as part of Anaconda. The cloud system usually provides the necessary information. In the absence of cloud, the system administrator must provide a replacement.
Basically, there are two implementation options.
First, with relatively little additional effort, you can install [Vagrant and the Vagrant libvirt plugin][6]. If the server is also used for development work, Vagrant may already be in use and the additional effort is minimal. This option is then the optimal choice.
Second, you can use _virt-install_ directly. Until now, you had to create a cloud-init nocloud datasource ISO in [several additional steps][7]. _virt-install_ version 3, included since Fedora 33, eliminates these additional steps. The newly introduced _\--cloud-init_ option initially configures a VM from a cloud image without additional software and without detours. _virt-install_ takes on taming the rather complex cloud-init nocloud procedures.
There are two ways to make use of _virt-install_:
* quick and (not really) dirty: minimal Cloud-init configuration
This requires a little more post-installation effort and is suitable if you set up only a few VMs.
* elaborate cloud-init based configuration using simple configuration files
This requires more pre-installation work and is more effective if you have to set up multiple VMs.
#### Be certain you know what you are getting
There is no light without shadow. Cloud Base images (currently) do not provide an alternatively built but otherwise identical build of Fedora Server Edition. There are some subtle differences. For example:
* Fedora Server Edition uses xfs as its file system, Cloud Base Image still uses the older ext4.
* Fedora Server Edition now persists the network configuration completely and stringently in NetworkManager, Fedora Cloud Base image still uses the old ifcfg plugin.
* Other differences are conceptual. For example, Fedora Cloud image does not install a firewall by default.
* The use concept for the persistent storage is also different due to technical differences.
Overall, however, the functionality is so far identical and the advantages so noticeable that it is worthwhile and makes sense to use Fedora Cloud Base.
### A typical use case
Consider a use case that often applies to small and medium-sized organizations. The hardware is located in an off-premise housing center. Fedora Server is required with the most rigorous isolation possible from public access, e.g. ssh and key based authentication only. Any risk of compromise has to be minimized. Public services are offered in a VM to provide as much isolation as possible. The VM operates as a pure front end with minimal exposure of services. For example, only an Apache web server is installed. All data processing resides on an application server in a separate VM (or a container), e.g. JBoss or WildFly. The application server accesses a database that may run directly on the host hardware for performance reasons, but without any public access.
Regarding the infrastructure, at least some VMs as well as the host ssh or vpn process need access to the public network. They have to share the physical interface. At the same time, VMs and host need another internal network that enables protected communication. The application VM only connects to the internal network. And we need an internal DNS for the services to find each other.
### System requirements
You need a Fedora 33 Server Edition with _libvirt_ virtualization properly installed and working. The _libvirt_ network “default” with virbr0 provides the internal protected network and is active. Some external network device, usually a router, provides DHCP service for the external network. Every lab or production environment should meet these requirements.
For internal name resolution to work, you have to decide upon an internal domain name and extend the _libvirt_ network configuration. In this example the external name will be _example.com_, and the internal domain name will be _example.lan_. The Fedora server thus receives the name _host.example.com_ externally and internally _host.example.lan_ or just _host_ for short. The names of the VMs are _**app**_ and _**web**_, respectively. The two examples that follow will create these VMs.
#### Network preparations for the examples
Modify the configuration of the internal network similar to the example below (N.B. adjust your domain name accordingly! Leave mac address and UUID untouched!):
```
# virsh net-edit default
<network>
<name>default</name>
<uuid>aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee</uuid>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:xx:yy:zz'/>
<forward mode='nat'/>
<mtu size='8000'/>
<domain name='example.lan'/>
<dns forwardPlainNames='no'>
<forwarder domain='example.lan' />
<host ip='192.168.122.1'>
<hostname>host</hostname>
<hostname>host.example.lan</hostname>
</host>
</dns>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.200'/>
</dhcp>
</ip>
</network>
# virsh net-destroy default
# virsh net-start default
```
Do NOT add an external forwarder via a `<forwarder addr='xxx.yyy.zz.uu'/>` tag. It will break the VMs' split-DNS capability.
Due to a bug in the interaction of _systemd-resolved_ and _libvirt_, name resolution for the internal network currently does not work on the host without additional measures. The VMs are not affected. Hence, the host cannot resolve the names of the VMs, but conversely, the VMs can resolve each other and the host. The latter is sufficient here.
With everything set up correctly the following interfaces are active on the host:
```
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu ...
inet 127.0.0.1/8 scope host ...
inet6 ::1/128 scope host ...
2: enpNsM: <BROADCAST,MULTICAST, ...
inet xxx.yyy.zzz.uuu/24 brd xxx. ...
inet6 200x:xx:yyy:...
inet6 fe80::xxx:yyy:...
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu ...
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
...
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 8...
```
### Creating a Fedora Server virtual machine using Fedora Cloud Base Image
#### Preparations
First download a Fedora 33 Cloud Base Image file and store it in the directory _/var/lib/libvirt/boot_. By convention, this is the location from which images are installed.
```
# sudo wget https://download.fedoraproject.org/pub/fedora/linux/releases/33/Cloud/x86_64/images/Fedora-Cloud-Base-33-1.2.x86_64.qcow2 -O /var/lib/libvirt/boot/Fedora-Cloud-Base-33-1.2.x86_64.qcow2
# sudo wget https://getfedora.org/static/checksums/Fedora-Cloud-33-1.2-x86_64-CHECKSUM -O /var/lib/libvirt/boot/Fedora-Cloud-33-1.2-x86_64-CHECKSUM
# cd /var/lib/libvirt/boot
# sudo sha256sum --ignore-missing -c *-CHECKSUM
```
The *CHECKSUM file contains the values for all cloud images. The check should result in one _OK_.
For external connectivity of the VMs, the easiest way is to use MacVTap in the VM configuration. You don't need to set up a virtual bridge nor touch the critical configuration of the physical Ethernet interface of an off-premise server. Enable forwarding for both IPv4 and IPv6 (dual stack). _Libvirt_ takes care of IPv4. Nevertheless, it is advantageous to configure forwarding independent of _libvirt_.
Check the forwarding configuration:
```
# cat /proc/sys/net/ipv4/ip_forward
# cat /proc/sys/net/ipv6/conf/default/forwarding
```
In both cases, an output value of 1 is required. If necessary, activate forwarding temporarily until next reboot:
**[…]# echo 1 > /proc/sys/net/ipv4/ip_forward
[…]# echo 1 > /proc/sys/net/ipv6/conf/all/forwarding**
For a permanent setup create the following file:
```
# vim /etc/sysctl.d/50-enable-forwarding.conf
# local customizations
#
# enable forwarding for dual stack
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
```
With these preparations completed, the following two examples, creating the VMs _**app**_ and _**web**_, should work flawlessly.
#### Example 1: Quick &amp; (not really) dirty: Minimal cloud-init configuration
Installation for the _**app**_ VM begins by creating a copy of the downloaded image as a (fully installed) virtual disk in the directory _/var/lib/libvirt/images_. This is, by convention, the virtual disk pool. The _virt-install_ program performs the installation, and its parameters pass all the required information; there is no need for further intervention or preparation. The parameters first specify the usual general properties such as memory, CPU, and the (non-graphical) console for the server. The parameter _\--graphics none_ enforces a redirect to the terminal window. After booting, you get a VM terminal prompt and immediate access from the host. The parameter _\--import_ causes the install task to be skipped and the VM to boot from the first virtual disk specified by the _\--disk_ parameter. The **app** VM will connect only to the internal virtual network, thus only one network is specified by the _\--network_ parameter.
The only new parameter is _\--cloud-init_, without any further subparameters. This causes the generation and display of a root password, enabling a one-time login. cloud-init executes with sensible default settings. Finally, it is deactivated and not executed during subsequent boot processes.
The VM terminal appears when installation is complete. Note that the first root login password is displayed early in the process and is used for the initial login. This password is single use and must be replaced during the first login.
```
# sudo cp /var/lib/libvirt/boot/Fedora-Cloud-Base-33-1.2.x86_64.qcow2 \
/var/lib/libvirt/images/app.qcow2
# sudo virt-install --name app \
--memory 3074 --cpu host --vcpus 3 --graphics none \
--os-type linux --os-variant fedora33 --import \
--disk /var/lib/libvirt/images/app.qcow2,format=qcow2,bus=virtio \
--network bridge=virbr0,model=virtio \
--cloud-init
WARNING Defaulting to --cloud-init root-password-generate=yes,disable=yes
Installation startet …
Password for first root login is: OtMQshytI0E8xZGD
Installation will continue in 10 seconds (press Enter to skip)…Running text console command: …
Connected to Domain: app
Escape character is ^] (Ctrl + ])
[ 0.000000] Linux version 5.8.15-301.fc33.x86_64 (mockbuild@bkernel01.iad2.fedoraproject …
[ 29.271451] cloud-init[721]: Cloud-init v. 19.4 finished … Datasource DataSourceNoCloud …
[FAILED] Failed to start Execute cloud user/final scripts.
See 'systemctl status cloud-final.service' for details.
[ OK ] Reached target Cloud-init target.
Fedora 33 (Cloud Edition)
Kernel 5.8.15-301.fc33.x86_64 on an x86_64 (ttyS0)
localhost login:
```
The error message is unsightly but does not affect operation. (This might be the reason the cloud-init service remains enabled.) You may disable it manually or remove it altogether.
On the host you may check the network status:
```
# less /var/lib/libvirt/dnsmasq/virbr0.status
[
{
"ip-address": "192.168.122.109",
"mac-address": "52:54:00:57:35:3d",
"client-id": "01:52:54:00:57:35:3d",
"expiry-time": 1615665342
}
]
```
The output shows the VM got an internal IP, but no hostname, because one has not yet been set. That is the first post-installation task to perform.
##### Post-Installation Tasks
The initially displayed password enables _root_ login and forces the setting of a new one.
Of particular interest is the network connection. Verify using these commands:
```
# ping host
# ping host.example.lan
# ping host.example.com
# ping guardian.co.ik
```
Everything is working fine out of the box. Internal and external network access is working.
The only remaining task is to set the hostname:
```
# hostnamectl set-hostname app.example.lan
```
After rebooting, running _**less /var/lib/libvirt/dnsmasq/virbr0.status**_ on the host again will now list a hostname. This verifies that name resolution is working.
To complete the final application software installations, perform a system update and install a Tomcat application server for the functional demo.
```
# dnf -y update && dnf -y install tomcat && systemctl enable tomcat --now && reboot
```
When installation and reboot complete, exit and close the console using **Ctrl+]**.
By default, the VM is not started automatically during subsequent boots of the host. To override this, enable autostart of the **app** VM on the host:
```
# sudo virsh autostart app
```
#### Example 2: An easy way to an elaborate configuration
The **web** front-end VM is more complex, and there are several issues to deal with. There is a public-facing interface that requires the installation of a firewall. A quirk of the cloud-init process is that the internal interface is not configured persistently. Instead, it is set up anew each time the system boots. This makes it impossible to assign a firewall zone to this interface. The public interface also provides ssh access, so a key file is needed to secure the root login.
The virt-install cloud-init process is provisioned by two subparameters, meta-data and user-data, each referencing a configuration file. These files were previously buried in a special ISO image, which _virt-install_ now simulates. You are free to choose where to store these files, but it is best to be systematic; a subdirectory in the boot directory is a good choice. This example uses _/var/lib/libvirt/boot/cloud-init_.
The file referenced by the meta-data parameter contains information about the runtime environment; its name is _web-meta-data_ in this example. Here it contains just the mandatory parameter _instance-id_. The instance-id must be unique in a cloud environment, but can be chosen arbitrarily here, just as in a nocloud environment.
```
# sudo mkdir /var/lib/libvirt/boot/cloud-init
# sudo vim /var/lib/libvirt/boot/cloud-init/web-meta-data
instance-id: web-app
```
The file referenced by the user-data parameter holds the main configuration work. This example uses the name _web-user-data_. The first line must contain some kind of shebang, which cloud-init uses to determine the format of the following data. The formatting itself is YAML. The _web-user-data_ file defines several steps:
1. setting the hostname
 2. set up the user root with the public RSA key copied into the file, as well as the fallback account “hostmin” (or similar). The latter is enabled to log in by password and is assigned to the group wheel
3. set up a first-time password for both users for initial login which must be changed on first login
 4. install required additional packages, e.g. the firewall, fail2ban, postfix (needed by fail2ban), and the webserver
5. some packages need additional configuration files
6. the VM needs an update of all packages
7. several configuration commands are required
1. assign zone trusted to the interface eth1 (2nd position in the dbus path, so the order of the network parameters when calling _libvirt_ is crucial!) and rename it according to naming convention. The modification also persists to a configuration file (still in /etc/sysconfig/network-scripts/ )
2. start the firewall and add the web services
3. finally disable cloud-init
Completing the configuration files up front eliminates what would be a time-consuming process if done manually. This efficiency makes the use of cloud images attractive. The definition of _web-user-data_ follows:
```
# vim /var/lib/libvirt/boot/cloud-init/web-user-data
# cloud-config
# (1) setting hostname
preserve_hostname: False
hostname: web
fqdn: web.example.com
# (2) set up root and fallback account including rsa key copied into this file
users:
- name: root
ssh-authorized-keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQA…jSMt9rC4uKDPR8whgw==
- name: hostmin
groups: users,wheel
ssh_pwauth: True
ssh-authorized-keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAIAQDix...Mt9rC4uKDPR8whgw==
# (3) set up a first-time password for both accounts
chpasswd:
list: |
root:myPassword
hostmin:topSecret
expire: True
# (4) install additional required packages
packages:
- firewalld
- postfix
- fail2ban
- vim
- httpd
- mod_ssl
- letsencrypt
# (5) some packages need additional configuration files
write_files:
- path: /etc/fail2ban/jail.local
content: |
# /etc/fail2ban/jail.local
# Jail configuration local customization
# Adjust the default configuration's default values
[DEFAULT]
##ignoreip = /24 /32
bantime = 6600
backend = auto
# The main configuration file defines all services but
# deactivates them by default. Activate those needed
[sshd]
enabled = true
# detect password authentication failures
[apache-auth]
enabled = true
# detect spammer robots crawling email addresses
[apache-badbots]
enabled = true
# detect Apache overflow attempts
[apache-overflows]
enabled = true
- path: /etc/httpd/conf.d/vhost_default.conf
content: |
<VirtualHost *:80>
ServerAdmin root@localhost
DirectoryIndex index.jsp
DocumentRoot /var/www/html
<Directory "/var/www/html">
Options Indexes FollowSymLinks
AllowOverride none
# Allow open access:
Require all granted
</Directory>
ProxyPass / http://app:8080/
</VirtualHost>
# (6) perform a package upgrade
package_upgrade: true
# (7) several configuration commands are executed on first boot
runcmd:
# (a.) assign a zone to internal interface as well as some other adaptations.
# results in the writing of a configuration file
# IMPORTANT: internal interface have to be specified SECOND after external
- nmcli con mod path 2 con-name eth1 connection.zone trusted
- nmcli con mod path 2 con-name 'System eth1' ipv6.method disabled
- nmcli con up path 2
# (b.) activate and configure firewall and additional services
- systemctl enable firewalld --now
- firewall-cmd --permanent --add-service=http
- firewall-cmd --permanent --add-service=https
- firewall-cmd --reload
- systemctl enable fail2ban --now
# compensate for a SELinux port handling issue
- setsebool -P httpd_can_network_connect 1
- systemctl enable httpd --now
# (c.) finally disable cloud-init
- systemctl disable cloud-init
- reboot
# done
```
A detailed overview of the user-data configuration options is provided in the examples section of the [cloud-init project documentation][8].
After completing the configuration files, initiate the virt-install process. Adjust the values of CPU, memory, external network interface etc. as required.
```
# sudo virt-install --name web \
--memory 3072 --cpu host --vcpus 3 --graphics none \
--os-type linux --os-variant fedora33 --import \
--disk /var/lib/libvirt/images/web.qcow2,format=qcow2,bus=virtio,size=20 \
--network type=direct,source=enp1s0,source_mode=bridge,model=virtio \
--network bridge=virbr0,model=virtio \
--cloud-init meta-data=/var/lib/libvirt/boot/cloud-init/web-meta-data,user-data=/var/lib/libvirt/boot/cloud-init/web-user-data
```
If the network environment issues IP addresses based on MAC addresses via DHCP, add the MAC address to the first network configuration:
```
--network type=direct,source=enp1s0,source_mode=bridge,mac=52:54:00:93:97:46,model=virtio
```
Remember that the first three pairs in the MAC address must be the sequence 52:54:00 for KVM virtual machines.
Back on the host enable autostart of the VM:
```
# virsh autostart web
```
Everything is complete. Direct your desktop browser to your <http://example.com> domain and enjoy a look at the Tomcat webapps screen (after ignoring the warning about an insecure connection).
##### Configuring a static address
According to the specification, a static network connection is configured in meta-data. A configuration would look like this:
```
# vim /var/lib/libdir/boot/cloud-init/web-meta-data
instance-id: web-app
network-interfaces: |
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
gateway 192.168.1.254
```
_Cloud-init_ will create a configuration file accordingly, but there are two issues:
* The configuration file is created after a default initialization of the interface via dhcp and the interface is not reinitialized.
* The generated configuration file includes the setting _onboot=no_ so after a reboot there is no connection either.
There are several hints that this is a bug that has existed for a long time, so manual intervention is required.
It is probably easier and more efficient to do without the network specification in meta-data and instead adjust the default initialization manually in user-data. Perform the following before the configuration of the internal network:
```
# nmcli con mod path 1 ipv4.method static ipv4.addresses '192.168.158.240/24' ipv4.gateway '192.168.158.1' ipv4.dns '192.168.158.1'
# nmcli con mod path 1 ipv6.method static ipv6.addresses '2003:ca:7f06:2c00:5054:ff:fed6:5b27/64' ipv6.gateway 'fe80::1' ipv6.dns '2003:ca:7f06:2c00::add:9999'
# nmcli con up path 1
```
Doing this, the connection is immediately reset to the new specification and the configuration file is adjusted immediately. Remember to adjust the configuration values as needed.
Alternatively, the three statements can be made part of the user-data file and adapted or commented in or out as required. The corresponding part of the file would look like this:
```
...
# (7.) several configuration commands are executed on first boot
runcmd:
# If needed, convert interface eth0 as static
# comment in and modify as required
#- nmcli con mod path 1 ipv4.method static ipv4.addresses '<IPv4>/24' ipv4.gateway '<IPv4>' ipv4.dns '<IPv4>'
#- nmcli con mod path 1 ipv6.method static ipv6.addresses '<IPv6>/64' ipv6.gateway '<IPv6>' ipv6.dns '<IPv6>'
#- nmcli con up path 1
# (a) assign a zone to the internal interface as well as some other adaptations.
# results in the writing of a configuration file
# IMPORTANT: the internal interface has to be specified SECOND, after the external one
- nmcli con mod path 2 con-name eth1 connection.zone trusted
- ...
```
Again, adjust the &lt;IPv4&gt;, &lt;IPv6&gt;, etc. configuration values as needed!
Configuring the cloud-init process via virt-install version 3 is highly efficient and flexible. You may create a dedicated set of files for each VM, or you may keep one set of generic files and adjust them by commenting in and out as required. A combination of both can be used. You can quickly and easily change settings to test suitability for your purposes.
In summary, while the use of Fedora Cloud Base Images comes with some inconveniences and suffers from shortcomings in documentation, Fedora Cloud Base images and virt-install version 3 are a great combination for quickly and efficiently creating virtual machines for Fedora Server.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/setting-up-a-vm-on-fedora-server-using-cloud-images-and-virt-install-version-3/
作者:[pboy][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pboy/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/cloud_base_via_virt-install-816x345.jpg
[2]: https://unsplash.com/@maxkuk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/cloud-computing?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://alt.fedoraproject.org/cloud/
[5]: https://fedoramagazine.org/create-virtual-machines-with-cockpit-in-fedora/
[6]: https://fedoramagazine.org/vagrant-qemukvm-fedora-devops-sysadmin/
[7]: https://blog.christophersmart.com/2016/06/17/booting-fedora-24-cloud-image-with-kvm/
[8]: https://cloudinit.readthedocs.io/en/latest/topics/examples.html

View File

@ -0,0 +1,190 @@
[#]: subject: (Why I love using the IPython shell and Jupyter notebooks)
[#]: via: (https://opensource.com/article/21/3/ipython-shell-jupyter-notebooks)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Why I love using the IPython shell and Jupyter notebooks
======
Jupyter notebooks take the IPython shell to the next level.
![Computer laptop in space][1]
The Jupyter project started out as IPython and the IPython Notebook. It was originally a Python-specific interactive shell and notebook environment, which later branched out to become language-agnostic, supporting Julia, Python, and R—and potentially anything else.
![Jupyter][2]
(Ben Nuttall, [CC BY-SA 4.0][3])
IPython is a Python shell—similar to what you get when you type `python` or `python3` at the command line—but it's more clever and more helpful. If you've ever typed a multi-line command into the Python shell and wanted to repeat it, you'll understand the frustration of having to scroll through your history one line at a time. With IPython, you can scroll back through whole blocks at a time while still being able to navigate line-by-line and edit parts of those blocks.
![iPython][4]
(Ben Nuttall, [CC BY-SA 4.0][3])
It has autocompletion and provides context-aware suggestions:
![iPython offers suggestions][5]
(Ben Nuttall, [CC BY-SA 4.0][3])
It pretty-prints by default:
![iPython pretty prints][6]
(Ben Nuttall, [CC BY-SA 4.0][3])
It even allows you to run shell commands:
![IPython shell commands][7]
(Ben Nuttall, [CC BY-SA 4.0][3])
It also provides helpful features like adding `?` to an object as a shortcut for running `help()` without breaking your flow:
![IPython help][8]
(Ben Nuttall, [CC BY-SA 4.0][3])
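As a rough illustration, a session might look like this (output abbreviated; the exact text depends on your Python version):
```
In [1]: len?
Signature: len(obj, /)
Docstring: Return the number of items in a container.
Type:      builtin_function_or_method
```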
If you're using a virtual environment (see my post on [virtualenvwrapper][9]), install it with pip in the environment:
```
`pip install ipython`
```
To install it system-wide, you can use apt on Debian, Ubuntu, or Raspberry Pi:
```
`sudo apt install ipython3`
```
or with pip:
```
`sudo pip3 install ipython`
```
### Jupyter notebooks
Jupyter notebooks take the IPython shell to the next level. First of all, they're browser-based, not terminal-based. To get started, install `jupyter`.
If you're using a virtual environment, install it with pip in the environment:
```
`pip install jupyter`
```
To install it system-wide, you can use apt on Debian, Ubuntu, or Raspberry Pi:
```
`sudo apt install jupyter-notebook`
```
or with pip:
```
`sudo pip3 install jupyter`
```
Launch the notebook with:
```
`jupyter notebook`
```
This will open in your browser:
![Jupyter Notebook][10]
(Ben Nuttall, [CC BY-SA 4.0][3])
You can create a new Python 3 notebook using the **New** dropdown:
![Python 3 in Jupyter Notebook][11]
(Ben Nuttall, [CC BY-SA 4.0][3])
Now you can write and execute commands in the `In[ ]` fields. Use **Enter** for a newline within the block and **Shift+Enter** to execute:
![Executing commands in Jupyter][12]
(Ben Nuttall, [CC BY-SA 4.0][3])
You can edit and rerun blocks. You can reorder them, delete them, copy/paste, and so on. You can run blocks in any order—but be aware that any variables created will be in scope according to the time of execution, rather than the order they appear within the notebook. You can restart and clear output or restart and run all blocks from within the **Kernel** menu.
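Here is a contrived illustration of that scoping behavior; the `In` numbers show the order of execution, not the order the cells appear in the notebook:
```
In [2]: print(x)   # this cell sits first in the notebook...
42

In [1]: x = 42     # ...but this one, below it, was executed first, so x exists
```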
Using the `print` function will produce output every time. But if you have a single unassigned statement, or your last statement is unassigned, its value will be output anyway:
![Jupyter output][13]
(Ben Nuttall, [CC BY-SA 4.0][3])
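For example (a hypothetical session):
```
In [1]: x = 5       # an assignment produces no output

In [2]: x * 2       # a bare, unassigned expression is echoed
Out[2]: 10

In [3]: print(x)    # print always writes output but produces no Out value
5
```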
You can even refer to `In` and `Out` as indexable objects:
![Jupyter output][14]
(Ben Nuttall, [CC BY-SA 4.0][3])
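A hypothetical session showing both: `In` holds the source text of each cell, and `Out` holds the computed values:
```
In [1]: 2 ** 10
Out[1]: 1024

In [2]: In[1]        # the input of cell 1, as a string
Out[2]: '2 ** 10'

In [3]: Out[1] * 2   # the value produced by cell 1
Out[3]: 2048
```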
All the IPython features are available and are often presented a little nicer, too:
![Jupyter supports IPython features][15]
(Ben Nuttall, [CC BY-SA 4.0][3])
You can even do inline plots using [Matplotlib][16]:
![Graphing in Jupyter Notebook][17]
(Ben Nuttall, [CC BY-SA 4.0][3])
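A minimal sketch of such a cell, assuming Matplotlib is installed in the same environment (`pip install matplotlib`):
```
%matplotlib inline
import matplotlib.pyplot as plt

xs = list(range(10))
plt.plot(xs, [x * x for x in xs])  # a simple quadratic curve
plt.xlabel("x")
plt.ylabel("x squared")
plt.show()
```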
Finally, you can save your notebooks and include them in Git repositories, and if you push to GitHub, they will render as completed notebooks—outputs, graphs, and all (as in [this example][18]):
![Saving Notebook to GitHub][19]
(Ben Nuttall, [CC BY-SA 4.0][3])
* * *
_This article originally appeared on Ben Nuttall's [Tooling Tuesday blog][20] and is reused with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/ipython-shell-jupyter-notebooks
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
[2]: https://opensource.com/sites/default/files/uploads/jupyterpreview.png (Jupyter)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://opensource.com/sites/default/files/uploads/ipython-loop.png (iPython)
[5]: https://opensource.com/sites/default/files/uploads/ipython-suggest.png (iPython offers suggestions)
[6]: https://opensource.com/sites/default/files/uploads/ipython-pprint.png (iPython pretty prints)
[7]: https://opensource.com/sites/default/files/uploads/ipython-ls.png (IPython shell commands)
[8]: https://opensource.com/sites/default/files/uploads/ipython-help.png (IPython help)
[9]: https://opensource.com/article/21/2/python-virtualenvwrapper
[10]: https://opensource.com/sites/default/files/uploads/jupyter-notebook-1.png (Jupyter Notebook)
[11]: https://opensource.com/sites/default/files/uploads/jupyter-python-notebook.png (Python 3 in Jupyter Notebook)
[12]: https://opensource.com/sites/default/files/uploads/jupyter-loop.png (Executing commands in Jupyter)
[13]: https://opensource.com/sites/default/files/uploads/jupyter-cells.png (Jupyter output)
[14]: https://opensource.com/sites/default/files/uploads/jupyter-cells-2.png (Jupyter output)
[15]: https://opensource.com/sites/default/files/uploads/jupyter-help.png (Jupyter supports IPython features)
[16]: https://matplotlib.org/
[17]: https://opensource.com/sites/default/files/uploads/jupyter-graph.png (Graphing in Jupyter Notebook)
[18]: https://github.com/piwheels/stats/blob/master/2020.ipynb
[19]: https://opensource.com/sites/default/files/uploads/savenotebooks.png (Saving Notebook to GitHub)
[20]: https://tooling.bennuttall.com/the-ipython-shell-and-jupyter-notebooks/

View File

@ -0,0 +1,106 @@
[#]: subject: (NewsFlash: A Modern Open-Source Feed Reader With Feedly Support)
[#]: via: (https://itsfoss.com/newsflash-feedreader/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
NewsFlash: A Modern Open-Source Feed Reader With Feedly Support
======
Some may choose to believe that RSS readers are dead, but theyre here to stay, especially when you dont want a Big Tech algorithm deciding what you should read. With a feed reader, you can choose your own reading sources.
Ive recently come across a fantastic RSS reader: NewsFlash. It also supports adding feeds through web-based feed readers like [Feedly][1] and NewsBlur. Thats a big relief because if you are already using such a service, you dont have to import your feeds manually.
NewsFlash happens to be the spiritual successor to [FeedReader][2] with the original developer involved as well.
In case youre wondering, weve already covered a list of [Feed Reader apps for Linux][3] if youre looking for more options.
### NewsFlash: A Feed Reader To Complement Web-based RSS Reader Account
![][4]
It is important to note that NewsFlash isnt just tailored for web-based RSS feed accounts, you can choose to use local RSS feeds as well without needing to sync them on multiple devices.
However, it is specifically helpful if youre using any of the supported web-based feed readers.
Here, Ill be highlighting some of the features that it offers.
### Features of NewsFlash
![][5]
* Desktop Notifications support
* Fast search and filtering
* Supports tagging
* Useful keyboard shortcuts that can be later customized
* Local feeds
* Import/Export OPML files
* Easily discover various RSS Feeds using Feedlys library without needing to sign up for the service
* Custom Font Support
* Multiple themes supported (including a dark theme)
* Ability to enable/disable the Thumbnails
* Tweak the time for regular sync intervals
* Support for web-based Feed accounts like Feedly, Fever, NewsBlur, feedbin, Miniflux
In addition to the features mentioned, it also opens the reader view when you re-size the window, so thats a subtle addition.
![newsflash screenshot 1][6]
If you want to reset the account, you can easily do that as well which will delete all your local data as well. And, yes, you can manually clear the cache and set an expiry for user data to exist locally for all the feeds you follow.
### Installing NewsFlash in Linux
You do not get official packages for the various Linux distributions; official availability is limited to a [Flatpak][8].
For Arch users, you can find it available in [AUR][9].
Fortunately, the [Flatpak][10] package makes it easy for you to install it on any Linux distro you use. You can refer to our [Flatpak guide][11] for help.
In any case, you can refer to its [GitLab page][12] and compile it yourself.
### Closing Thoughts
Im currently using it as a local solution on my desktop, moving away from web-based services. You can simply export the OPML file to get the same feeds on any of your mobile feed applications; thats what Ive done.
The user interface is easy to use and provides a modern UX, if not the best. It offers all the essential features while remaining a simple-looking RSS reader.
What do you think about NewsFlash? Do you prefer using something else? Feel free to share your thoughts in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/newsflash-feedreader/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://feedly.com/
[2]: https://jangernert.github.io/FeedReader/
[3]: https://itsfoss.com/feed-reader-apps-linux/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash.jpg?resize=945%2C648&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash-screenshot.jpg?resize=800%2C533&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash-screenshot-1.jpg?resize=800%2C532&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/04/best-feed-reader-apps-linux.jpg?fit=800%2C450&ssl=1
[8]: https://flathub.org/apps/details/com.gitlab.newsflash
[9]: https://itsfoss.com/aur-arch-linux/
[10]: https://itsfoss.com/what-is-flatpak/
[11]: https://itsfoss.com/flatpak-guide/
[12]: https://gitlab.com/news-flash/news_flash_gtk

View File

@ -1,296 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Python to explore Google's Natural Language API)
[#]: via: (https://opensource.com/article/19/7/python-google-natural-language-api)
[#]: author: (JR Oakes https://opensource.com/users/jroakes)
利用 Python 探究 Google 的自然语言 API(应用程序接口)
======
Google API 可以提供有关 Google 如何对网站进行分类的线索以及调整内容以改进搜索结果的方法。
![计算机屏幕放大镜][1]
作为搜索引擎技术优化器,我一直在寻找以新颖的方式使用数据的方法,以更好地了解 Google 如何对网站进行排名。我最近研究了 Google 的 [自然语言 API][2] 是否能更好地了解 Google 如何分类网站内容。
尽管有 [开源 NLP 工具][3],要探索 Google 工具,假设在其他产品中使用相同技术的前提下,像 Search。本文介绍了 Google 的自然语言 API并探究了常见的自然语言处理NLP任务以及如何将其用于报告网站内容的创建。
### 了解数据类型
首先,了解 Google 自然语言 API 返回的数据类型非常重要。
#### 实体
实体是可以与物理世界中的某些事物联系在一起的文本短语。命名实体识别NER是 NLP 的难点,因为工具通常需要查看关键字的完整上下文才能理解其用法。例如,同形异义字拼写相同,但是具有多种含义。句子中的 "`lead`" 是指一种金属(名词),使某人移动(动词),还可能是剧本中的主要角色(也是名词)Google 有 12 种不同类型的实体,还有第 13 个全方位类别称为 "`UNKNOWN`(未知)"。一些实体与 Wikipedia(维基百科)的文章相关,表明 [知识图谱][4] 对数据的影响。每个实体都返回一个显著的与所提供文本的整体相关性的分数。
![实体][5]
#### 情感
情感,对某事的看法或态度,是判断文档和句子标准以及在文档中发现单个实体。情感的得分范围从 -1.0(负)到 1.0(正)。大小代表情感的非标准化强度;它的范围是 0.0 到无穷大。
![情感][6]
#### 语法
语法分析包含大多数常见的 NLP 活动,在更好的 `libraries`(库)中发现,例如 [lemmatization(词形演变)][7][part-of-speech tagging(词性标记)][8] 和 [dependency-tree parsing(依赖树分析)][9]。 NLP 主要处理帮助机器理解文本和关键字之间的关系。语法分析是大多数语言处理或理解任务的基础部分。
![语法][10]
#### 分类
分类是将整个给定内容分配给特定行业或主题类别,其置信度得分从 0.0 到 1.0。这些分类似乎与其他 Google 工具使用的受众群体和网站类别相同,像 AdWords。
![分类][11]
### 提取数据
现在,我将提取一些示例数据进行处理。我使用 Google 的 [搜索控制台 API][12] 收集了一些搜索查询及其相应的网址。Google Search Console 是一种工具,可报告人们使用 Google Search 查找网站页面术语。[开源的 Jupyter Notebook][13] 可提取有关网站的类似数据。在此示例中,我在网站(没命名)上提取 Google Search Console 生成于 2019 年 1 月 1 日至 6 月 1 日之间的数据,并将其限制为至少获得一次点击(而不只是印象)的查询。
该数据集包含 2969 页面和 7144 条显示了网页 Google Search 搜索结果查询的信息。下表显示,绝大多数页面获得的点击很少,因为该网站侧重于所谓的长尾(越特殊,通常更长)而不是短尾(非常普遍,搜索量更大)搜索查询。
![所有页面的点击次数柱状图][14]
为了减少数据集的大小并仅获得效果最好的页面,我将数据集限制为在此期间至少获得 20 次展示的页面。这是精炼数据集的按页点击的柱状图,其中包括 723 页:
![部分网页的点击次数柱状图][15]
在 Python 中使用 Google 自然语言 API 库
要测试 API在 Python 中创建一个利用 **[google-cloud-language(Google 云语言)][16]** 库的小脚本。以下代码基于 Python 3.5+。
首先,激活一个新的虚拟环境并安装库。用环境的唯一名称替换 **lt;your-envgt;** 。
```
virtualenv &lt;your-env&gt;
source &lt;your-env&gt;/bin/activate
pip install --upgrade google-cloud-language
pip install --upgrade requests
```
该脚本从 URL 提取 HTML并将 HTML 提供给自然语言 API。返回一个包含 **sentiment**, **entities**, 和 **categories** 的字典,其中这些键值都是列表。我使用 Jupyter Notebook 运行此代码,因为使用同一内核注释和重试代码更加容易。
```
# Import needed libraries
import requests
import json
from google.cloud import language
from google.oauth2 import service_account
from google.cloud.language import enums
from google.cloud.language import types
# Build language API client (requires service account key)
client = language.LanguageServiceClient.from_service_account_json('services.json')
# Define functions
def pull_googlenlp(client, url, invalid_types = ['OTHER'], **data):
   
        html = load_text_from_url(url, **data)
   
        if not html:
        return None
   
        document = types.Document(
        content=html,
        type=language.enums.Document.Type.HTML )
        features = {'extract_syntax': True,
                'extract_entities': True,
                'extract_document_sentiment': True,
                'extract_entity_sentiment': True,
                'classify_text': False
                }
   
        response = client.annotate_text(document=document, features=features)
        sentiment = response.document_sentiment
        entities = response.entities
   
        response = client.classify_text(document)
        categories = response.categories
         
        def get_type(type):
        return client.enums.Entity.Type(entity.type).name
   
        result = {}
   
        result['sentiment'] = []    
        result['entities'] = []
        result['categories'] = []
        if sentiment:
        result['sentiment'] = [{ 'magnitude': sentiment.magnitude, 'score':sentiment.score }]
         
        for entity in entities:
        if get_type(entity.type) not in invalid_types:
                result['entities'].append({'name': entity.name, 'type': get_type(entity.type), 'salience': entity.salience, 'wikipedia_url': entity.metadata.get('wikipedia_url', '-')  })
         
        for category in categories:
        result['categories'].append({'name':category.name, 'confidence': category.confidence})
         
         
        return result
def load_text_from_url(url, **data):
        timeout = data.get('timeout', 20)
   
        results = []
   
        try:
         
        print("Extracting text from: {}".format(url))
        response = requests.get(url, timeout=timeout)
        text = response.text
        status = response.status_code
        if status == 200 and len(text) &gt; 0:
                return text
         
        return None
         
        except Exception as e:
        print('Problem with url: {0}.'.format(url))
        return None
```
要访问该 API请按照 Google 的 [快速入门说明][17] 在Google Cloud Console 中创建一个项目,启用该 API 并下载服务帐户密钥。之后,您应该拥有一个类似于以下内容的 JSON 文件:
![`services.json` 文件][18]
命名为 **services.json** 上传到项目文件夹。
然后,您可以通过运行以下命令为任何 URL例如 Opensource.com拉取 API 数据:
```
url = "<https://opensource.com/article/19/6/how-ssh-running-container>"
pull_googlenlp(client,url)
```
如果设置正确,您将看到以下输出:
![拉取 API 数据的输出][19]
为了使入门更加容易,我创建了一个 [Jupyter Notebook][20],您可以下载并使用它来测试提取网页的实体,类别和情感。我更喜欢使用 [JupyterLab][21],它是 Jupyter Notebook 的扩展,其中包括文件查看器和其他增强的用户体验功能。如果您不熟悉这些工具,我认为利用 [Anaconda][22] 是开始使用 Python 和 Jupyter 的最简单途径。它使安装和设置 Python 以及公共库变得非常容易,尤其是在 Windows 上。
### 处理数据
使用这些可抓取给定页面的 HTML 并将其传递给 Natural Language API 的函数,我可以对 723 个 URL 进行一些分析。首先,我将通过查看所有页面中返回的顶级分类的数量来查看与网站相关的分类。
#### 分类
![来自示例站点的分类数据][23]
这似乎是该特定站点关键主题的相当准确的表示法。通过查看一个效果最好的页面进行排名的单个查询,我可以比较同一查询在 Google (搜索)结果中的其他排名页面。
* _URL 1 |顶级类别:/法律和政府/与法律相关的0.5099999904632568)共 1 个类别。_
* _未返回任何类别。_
* _URL 3 |顶级类别:/ Internetamp;电信/移动与无线0.6100000143051147)共 1 个类别。_
* _URL 4 |顶级类别:/计算机与电子产品/软件0.5799999833106995)共有 2 个类别。_
* _URL 5 |顶级类别:/ Internetamp;电信/移动与无线/移动应用程序和附件0.75)共有 1 个类别。_
* _未返回任何类别。_
* _URL 7 |顶级类别:/计算机与电子/软件/商业与生产力软件0.7099999785423279共2个类别。_
* _URL 8 |顶级类别:/法律和政府/与法律相关的0.8999999761581421)共 3 个类别。_
* _URL 9 |顶级类别:/参考/一般参考/类型指南和模板0.6399999856948853)共有 1 个类别。_
* _未返回任何类别。_
上方括号中的数字表示 Google 对页面内容与该类别相关的置信度。对于相同类别,第八个结果比第一个结果具有更高的置信度,因此,这似乎不是定义排名相关性的灵丹妙药。此外,类别太宽泛导致无法满足特定搜索主题的需要。
通过排名查看平均置信度,这两个指标之间似乎没有相关性,至少对于此数据集而言:
![平均置信度排名分布图][24]
这两种方法都可以对网站进行有规模检查以确保内容类别易于理解,并且样板或销售内容不会使您的页面与您的主要专业知识领域无关。想一想,如果您出售工业用品,但是您的页面返回 _Marketing(销售)_ 作为主要类别。似乎没有强烈的建议,即类别相关性至少在页面级别与您的排名有关系。
#### 情感
我不会在情感上花很多时间。在所有从 API 返回情感的页面中它们分为两个容器0.1 和 0.2,这几乎是中立的。根据柱状图,很容易看出情感没有太大价值。对于新闻或舆论网站而言,测量特定页面的情感到中值排名之间的相关性将是一个更加有趣的指标。
![独特页面的情感柱状图][25]
#### 实体
在我看来,实体是 API 中最有趣的部分。这是根据显着性或与页面的相关性通过所有页面选择顶级实体。请注意对于相同的术语销售清单Google 会推断出不同的类型,可能是错误的。这是由于这些术语出现在内容中的不同上下文中引起的。
![示例网站的顶级实体][26]
然后,我分别查看了每个实体类型,并一起查看了该实体的显着性与页面的最佳排名位置之间是否存在任何关联。对于每种类型,我将按匹配该类型突出性排序(降序)匹配顶级实体的突出性(与页面的整体相关性)。
在所有示例中,某些实体类型返回零突出性,因此我在下面的图表中省略了那些结果。
![突出性与最佳排名位置的相关性][27]
**Consumer Good(消费性商品)** 实体类型具有最高的正相关性,皮尔森相关性为 0.15854,尽管由于较低编号的排名更好,所以 **Person(皮尔森)** 实体的最佳结果具有 -0.15483 的相关性。这是一个非常小的样本集,尤其是对于单个实体类型,因此我不能处理太多数据。我没有发现任何具有强相关性的值,但是 **Person(皮尔森)** 实体最有意义。网站通常都有关于其首席执行官和其他主要雇员的页面,这些页面很可能在这些查询的搜索结果方面做得好。
继续,当从整体上看站点,以下主题表现出基于 **entity(实体)** **name(名称)****entity type(实体类型)**
![基于实体名称和实体类型的主题][28]
我使一些看起来过于特殊的掩饰网站身份的结果模糊不清。从主题上讲,名称信息是在您(或竞争对手)的网站上局部查看其核心主题的一种好方法。这样做仅基于示例网站的排名网址,而不是基于所有网站的可能网址(因为 Search Console(搜索引擎) 数据仅记录Google 中展示的页面),但是结果会很有趣,尤其是当您使用像 [Ahrefs][29] 之类的工具拉取主排名网址的网站时,该工具会跟踪许多查询以及这些查询的 Google 搜索结果。
实体数据中另一个有趣的部分是标记为 **CONSUMER_GOOD** 的实体倾向于 “查看” 像我已经在看到 “Knowledge Results(知识结果)”的结果,即页面右侧的 Google Search (搜索)结果。
![Google 搜索结果][30]
在我们的数据集中具有三个或三个以上关键字的 **Consumer Good(消费性商品)** 实体名称中,有 5.8 的 Knowledge Results与 Google 对该实体命名的结果相同。这意味着,如果您在 Google 中搜索术语或短语,则右侧的框(例如,上面显示 Linux 的 Knowledge Results将显示在搜索结果页面中。由于 Google 会 “挑选” 代表实体的示例网页因此这是一个很好的可以在搜索结果中识别出具有唯一特征的机会。同样有趣的是5.8 的在 Google 中显示这些 Knowledge Results 名称中,没有一个实体具有从 Natural Language API 返回 Wikipedia URL。这足够有趣可以保证额外的分析。这将是非常有用的尤其是对于更难懂的像 Ahrefs 传统的全球排名跟踪工具主题,在其数据库中则没有。
如前所述Knowledge Results(知识结果)对于希望在 Google 中使其其内容起作用的网站所有者而言非常重要,因为它们在桌面搜索中加强高亮显示。假设,它们也很可能与 Google [Discover][31] 的知识库主题保持一致,这是一款适用于 Android 和 iOS 的产品,旨在根据用户感兴趣但不明确搜索的主题向用户展示内容。
### 总结
本文介绍了 Google 的自然语言 API共享了一些代码并研究了此 API 对网站所有者可能有用的方式。关键要点是:
* 学习使用 Python 和 Jupyter Notebooks 可以开始您的数据收集任务进入令人难以置信的全球 API 和由令人难以置信的聪明和有才能的人构建的开源项目(如 Pandas 和 NumPy
* Python 允许我为了一个特定目的快速提取和测试有关 API 值的假设。
* 通过 Google 的分类 API 传递网站页面可能是一项很好的检查,以确保其内容分解成正确的主题类别。为竞争对手的网站执行此操作还可以提供有关在何处进行调整或创建内容的指导。
* 对于示例网站Google 的情感评分似乎并不是一个有趣的指标,但是对于新闻或基于意见的网站,它可能是一个有趣的指标。
* Google 的发现实体从整体上提供了网站的更细化的主题级别视图,并且像分类一样,在竞争性内容分析中使用将非常有趣。
* 实体可以帮助定义机会,使您的内容可以与搜索结果或 Google Discover 结果中的 Google Knowledge 块保持一致。我们将5.8%的结果设置为更长的(关键字计数)**Consumer Goods(消费商品)** 实体,显示这些结果,对于某些网站来说,可能有机会更好地优化这些实体的页面突出性分数,从而有更好的机会在 Google 搜索结果或 Google Discovers 建议中捕获此起重要作用的位置。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/python-google-natural-language-api
作者:[JR Oakes][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jroakes
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
[2]: https://cloud.google.com/natural-language/#natural-language-api-demo
[3]: https://opensource.com/article/19/3/natural-language-processing-tools
[4]: https://en.wikipedia.org/wiki/Knowledge_Graph
[5]: https://opensource.com/sites/default/files/uploads/entities.png (Entities)
[6]: https://opensource.com/sites/default/files/uploads/sentiment.png (Sentiment)
[7]: https://en.wikipedia.org/wiki/Lemmatisation
[8]: https://en.wikipedia.org/wiki/Part-of-speech_tagging
[9]: https://en.wikipedia.org/wiki/Parse_tree#Dependency-based_parse_trees
[10]: https://opensource.com/sites/default/files/uploads/syntax.png (Syntax)
[11]: https://opensource.com/sites/default/files/uploads/categories.png (Categories)
[12]: https://developers.google.com/webmaster-tools/
[13]: https://github.com/MLTSEO/MLTS/blob/master/Demos.ipynb
[14]: https://opensource.com/sites/default/files/uploads/histogram_1.png (Histogram of clicks for all pages)
[15]: https://opensource.com/sites/default/files/uploads/histogram_2.png (Histogram of clicks for subset of pages)
[16]: https://pypi.org/project/google-cloud-language/
[17]: https://cloud.google.com/natural-language/docs/quickstart
[18]: https://opensource.com/sites/default/files/uploads/json_file.png (services.json file)
[19]: https://opensource.com/sites/default/files/uploads/output.png (Output from pulling API data)
[20]: https://github.com/MLTSEO/MLTS/blob/master/Tutorials/Google_Language_API_Intro.ipynb
[21]: https://github.com/jupyterlab/jupyterlab
[22]: https://www.anaconda.com/distribution/
[23]: https://opensource.com/sites/default/files/uploads/categories_2.png (Categories data from example site)
[24]: https://opensource.com/sites/default/files/uploads/plot.png (Plot of average confidence by ranking position )
[25]: https://opensource.com/sites/default/files/uploads/histogram_3.png (Histogram of sentiment for unique pages)
[26]: https://opensource.com/sites/default/files/uploads/entities_2.png (Top entities for example site)
[27]: https://opensource.com/sites/default/files/uploads/salience_plots.png (Correlation between salience and best ranking position)
[28]: https://opensource.com/sites/default/files/uploads/themes.png (Themes based on entity name and entity type)
[29]: https://ahrefs.com/
[30]: https://opensource.com/sites/default/files/uploads/googleresults.png (Google search results)
[31]: https://www.blog.google/products/search/introducing-google-discover/

View File

@ -1,112 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (ShuyRoy )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with distributed tracing using Grafana Tempo)
[#]: via: (https://opensource.com/article/21/2/tempo-distributed-tracing)
[#]: author: (Annanay Agarwal https://opensource.com/users/annanayagarwal)
使用Grafana Tempo开始分布式跟踪
======
Grafana Tempo是一个新的开源、大容量分布式跟踪后端。
![Computer laptop in space][1]
Grafana的[Tempo][2]是出自Grafana实验室的一个简单易用、大规模集成、分布式的跟踪后端。Tempo集成了[Grafana][3]、[Prometheus][4]以及[Loki][5],并且它只需要对象存储进行操作,这使得它是合算的且易操作的。
我从一开始就参与了这个开源项目所以我将介绍一些关于Tempo的基础知识并说明为什么本地云社区会注意到它。
### 分布式跟踪
想要收集对应用程序请求的遥测数据是很常见的。但是在现在的服务器中,单个应用通常被分割为多个微服务,可能运行在几个不同的节点上。
分布式跟踪是一种获得关于应用的性能细粒度信息的方式该应用程序可能由离散的服务组成。当请求到达一个应用时它提供了请求生命周期的统一视图。Tempo的分布式跟踪可以用于单片或微服务应用它提供[请求范围的信息][6],使其成为可观察的第三个支柱(除了度量和日志)。
接下来是一个分布式跟踪系统生成应用程序甘特图的示例。它使用Jaeger [HotROD][7] 的演示应用生成跟踪并把他们存到Grafana云托管的Tempo上。这个图展示了按照服务和功能划分的请求处理时间。
![Gantt chart from Grafana Tempo][8]
(Annanay Agarwal, [CC BY-SA 4.0][9])
### 减少索引的大小
在丰富且定义良好的数据模型中,跟踪包含大量信息。通常,跟踪后端有两种交互:使用元数据选择器(如服务名或者持续时间)筛选跟踪,并在筛选后可视化跟踪。
为了加强查找大多数的开源分布式跟踪框架对跟踪中的许多字段进行索引包括服务名称、操作名称、标记和持续时间。这会导致索引很大并迫使您使用Elasticsearch或者[Cassandra][10]这样的数据库。但是这些很难管理而且大规模操作的成本高所以我在Grafana实验室的团队打算提出一个更好的解决方案。
在Grafana中我的待命调试工作流开始使用指标报表我们使用[Cortex][11]来存储我们应用中的指标它是一个云本地计算基金会孵化的项目用于扩展Prometheus深入研究这个问题筛选有问题服务的日志我们将日志存储在Loki中就像Prometheus一样只不过Loki是存日志的然后查看跟踪给定的请求。我们意识到我们过滤时所需的所有索引信息都可以在Cortex和Loki中找到。但是我们需要通过这些工具实现跟踪可发现的强大集成以及根据跟踪ID进行键值查找的免费存储。
这是[Grafana Tempo][12]项目的开始。通过关注给定跟踪ID的跟踪检索我们将Tempo设计为最小依赖、高容量、低成本的分布式跟踪后端。
### 容易操作和低成本
Tempo使用对象存储后端这是它唯一的依赖。它既可以被用于单二进制模式下也可以用于微服务模式请参考repo中的[例子][13],了解如何轻松开始)。使用对象存储也意味着你可以在不使用任何抽样的情况下存储应用的的大量跟踪。这可以确保你永远不会丢弃出错或延迟更高的百万分之一的请求。
### 与开源工具的强大集成
[Grafana 7.3包括了Tempo数据源][14]这意味着你可以在Grafana UI中可视化来自Tempo的跟踪。而且[Loki 2.0的新查询特性][15]使得Tempo中的跟踪更简单。为了与Prometheus集成该团队正在添加对范例的支持范例是可以添加到时间序列数据中的高基数元数据信息。度量存储后端不会对它们建立索引但是你可以在Grafana UI中检索和显示度量值。尽管exemplars可以存储各种元数据但是在这个用例中跟踪的ID被存储以便与Tempo强集成。
这个例子展示了使用带有请求延迟直方图的范例其中每个范例数据点都链接到Tempo中的一个跟踪。
![Using exemplars in Tempo][16]
(Annanay Agarwal, [CC BY-SA 4.0][9])
### 元数据一致性
作为容器化应用程序运行的应用发出的遥测数据通常具有一些相关的元数据。这可以包括集群ID、命名空间、pod IP等。这对于提供基于需求的信息是好的但是如果你可以利用包含在元数据的信息来进行一些高效的工作那就更好了。
 
例如,你可以使用[Grafana云代理将跟踪信息导入Tempo中][17]代理利用Prometheus服务发现机制轮询Kubernetes接口以查询元数据信息并且将这些标记添加到应用程序发出的跨域数据中。由于这些元数据也在Loki中也建立了索引所以通过元数据转换为Loki变迁选择器可以很容易地从跟踪跳转到查看给定服务的日志。
下面是一个一致元数据的示例它可用于Tempo跟踪中查看给定范围的日志。
### ![][18]
### 云本地
Grafana Tempo作为一个容器化的应用时可用的你可以在如Kubernetes、Mesos等任何编排引擎上运行它。根据获取/查询路径上的工作负载各种服务可以水平伸缩。你还可以使用云本地对象存储如谷歌云存储、Amazon S3或者Tempo Azure博客存储。更多的信息请阅读Tempo文档中的[架构部分][19]。
### 试一试Tempo
如果这对你和我们一样有用,可以[克隆Tempo仓库][20]试一试。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/tempo-distributed-tracing
作者:[Annanay Agarwal][a]
选题:[lujun9972][b]
译者:[RiaXu](https://github.com/ShuyRoy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/annanayagarwal
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
[2]: https://grafana.com/oss/tempo/
[3]: http://grafana.com/oss/grafana
[4]: https://prometheus.io/
[5]: https://grafana.com/oss/loki/
[6]: https://peter.bourgon.org/blog/2017/02/21/metrics-tracing-and-logging.html
[7]: https://github.com/jaegertracing/jaeger/tree/master/examples/hotrod
[8]: https://opensource.com/sites/default/files/uploads/tempo_gantt.png (Gantt chart from Grafana Tempo)
[9]: https://creativecommons.org/licenses/by-sa/4.0/
[10]: https://opensource.com/article/19/8/how-set-apache-cassandra-cluster
[11]: https://cortexmetrics.io/
[12]: http://github.com/grafana/tempo
[13]: https://grafana.com/docs/tempo/latest/getting-started/example-demo-app/
[14]: https://grafana.com/blog/2020/10/29/grafana-7.3-released-support-for-the-grafana-tempo-tracing-system-new-color-palettes-live-updates-for-dashboard-viewers-and-more/
[15]: https://grafana.com/blog/2020/11/09/trace-discovery-in-grafana-tempo-using-prometheus-exemplars-loki-2.0-queries-and-more/
[16]: https://opensource.com/sites/default/files/uploads/tempo_exemplar.png (Using exemplars in Tempo)
[17]: https://grafana.com/blog/2020/11/17/tracing-with-the-grafana-cloud-agent-and-grafana-tempo/
[18]: https://lh5.googleusercontent.com/vNqk-ygBOLjKJnCbTbf2P5iyU5Wjv2joR7W-oD7myaP73Mx0KArBI2CTrEDVi04GQHXAXecTUXdkMqKRq8icnXFJ7yWUEpaswB1AOU4wfUuADpRV8pttVtXvTpVVv8_OfnDINgfN
[19]: https://grafana.com/docs/tempo/latest/architecture/architecture/
[20]: https://github.com/grafana/tempo

View File

@ -0,0 +1,256 @@
[#]: subject: (Visualize multi-threaded Python programs with an open source tool)
[#]: via: (https://opensource.com/article/21/3/python-viztracer)
[#]: author: (Tian Gao https://opensource.com/users/gaogaotiantian)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
用一个开源工具实现多线程 Python 程序的可视化
======
> VizTracer 可以跟踪并发的 Python 程序,以帮助记录、调试和剖析。
![丰富多彩的声波图][1]
并发是现代编程中必不可少的一部分,因为我们有多个核心,有许多需要协作的任务。然而,当并发程序不按顺序运行时,就很难理解它们。对于工程师来说,在这些程序中发现 bug 和性能问题不像在单线程、单任务程序中那么容易。
在 Python 中,你有多种并发的选择。最常见的可能是用 `threading` 模块实现多线程,用 `subprocess` 和 `multiprocessing` 模块实现多进程,以及最近由 `asyncio` 模块提供的 `async` 语法。在 [VizTracer][2] 出现之前,一直缺乏能够分析使用这些技术的程序的工具。
VizTracer 是一个追踪和可视化 Python 程序的工具,对日志、调试和剖析很有帮助。尽管它对单线程、单任务程序很好用,但它在并发程序中的实用性是它的独特之处。
### 尝试一个简单的任务
从一个简单的练习任务开始:计算出一个数组中的整数是否是质数并返回一个布尔数组。下面是一个简单的解决方案:
```
def is_prime(n):
for i in range(2, n):
if n % i == 0:
return False
return True
def get_prime_arr(arr):
return [is_prime(elem) for elem in arr]
```
试着用 VizTracer 以单线程方式正常运行它:
```
import random

if __name__ == "__main__":
    num_arr = [random.randint(100, 10000) for _ in range(6000)]
    get_prime_arr(num_arr)
```
```
viztracer my_program.py
```
![Running code in a single thread][3]
调用堆栈报告显示,耗时约 140ms大部分时间花在 `get_prime_arr` 上。
![call-stack report][5]
这只是在数组中的元素上一遍又一遍地执行 `is_prime` 函数。
这是你所期望的,而且它并不有趣(如果你了解 VizTracer 的话)。
### 试试多线程程序
试着用多线程程序来做:
```
import random
from threading import Thread

if __name__ == "__main__":
    num_arr = [random.randint(100, 10000) for i in range(2000)]
    thread1 = Thread(target=get_prime_arr, args=(num_arr,))
    thread2 = Thread(target=get_prime_arr, args=(num_arr,))
    thread3 = Thread(target=get_prime_arr, args=(num_arr,))
    thread1.start()
    thread2.start()
    thread3.start()
    thread1.join()
    thread2.join()
    thread3.join()
```
为了配合单线程程序的工作负载,这就为三个线程使用了一个 2000 元素的数组,模拟了三个线程共享任务的情况。
![Multi-thread program][6]
如果你熟悉 Python 的全局解释器锁GIL就会想到它不会再快了。由于开销太大花了 140ms 多一点的时间。不过,你可以观察到多线程的并发性:
![Concurrency of multiple threads][7]
当一个线程在工作(执行多个 `is_prime` 函数)时,另一个线程被冻结了(一个 `is_prime` 函数);后来,它们进行了切换。这是由于 GIL 的原因,这也是 Python 没有真正的多线程的原因。它可以实现并发,但不能实现并行。
### 用多进程试试
要想实现并行,办法就是 `multiprocessing` 库。下面是另一个使用 `multiprocessing` 的版本:
```
import random
from multiprocessing import Process

if __name__ == "__main__":
    num_arr = [random.randint(100, 10000) for _ in range(2000)]

    p1 = Process(target=get_prime_arr, args=(num_arr,))
    p2 = Process(target=get_prime_arr, args=(num_arr,))
    p3 = Process(target=get_prime_arr, args=(num_arr,))
    p1.start()
    p2.start()
    p3.start()
    p1.join()
    p2.join()
    p3.join()
```
要使用 VizTracer 运行它,你需要一个额外的参数:
```
viztracer --log_multiprocess my_program.py
```
![Running with extra argument][8]
整个程序在 50ms 多一点的时间内完成,实际任务在 50ms 之前完成。程序的速度大概提高了三倍。
为了和多线程版本进行比较,这里是多进程版本:
![Multi-process version][9]
在没有 GIL 的情况下,多个进程可以实现并行,也就是多个 `is_prime` 函数可以并行执行。
不过Python 的多线程也不是一无是处:对于 I/O 密集型(而非计算密集型)的程序,它仍然能带来好处。例如,你可以用睡眠来伪造一个 I/O 密集型任务:
```
import time

def io_task():
    time.sleep(0.01)
```
在单线程、单任务程序中试试:
```
if __name__ == "__main__":
    for _ in range(3):
        io_task()
```
![I/O-bound single-thread, single-task program][10]
整个程序用了 30ms 左右,没什么特别的。
现在使用多线程:
```
if __name__ == "__main__":
    thread1 = Thread(target=io_task)
    thread2 = Thread(target=io_task)
    thread3 = Thread(target=io_task)
    thread1.start()
    thread2.start()
    thread3.start()
    thread1.join()
    thread2.join()
    thread3.join()
```
![I/O-bound multi-thread program][11]
程序耗时 10ms很明显三个线程是并发工作的这提高了整体性能。
### 用 asyncio 试试
Python 正在尝试引入另一个有趣的功能,叫做异步编程。你可以制作一个异步版的任务:
```
import asyncio
async def io_task():
    await asyncio.sleep(0.01)
async def main():
    t1 = asyncio.create_task(io_task())
    t2 = asyncio.create_task(io_task())
    t3 = asyncio.create_task(io_task())
    await t1
    await t2
    await t3
if __name__ == "__main__":
    asyncio.run(main())
```
由于 `asyncio` 实际上就是一个带有任务的单线程调度器,你可以直接对它使用 VizTracer
![VizTracer with asyncio][12]
依然花了 10ms但显示的大部分函数都是底层结构这可能不是用户感兴趣的。为了解决这个问题可以使用 `--log_async` 来分离真正的任务:
```
viztracer --log_async my_program.py
```
![Using --log_async to separate tasks][13]
现在,用户任务更加清晰了。在大部分时间里,没有任务在运行(因为它唯一做的事情就是睡觉)。有趣的部分是这里:
![Graph of task creation and execution][14]
这显示了任务的创建和执行时间。Task-1 是 `main()` 协程创建了其他任务。Task-2、Task-3、Task-4 执行 `io_task``sleep` 然后等待唤醒。如图所示因为是单线程程序所以任务之间没有重叠VizTracer 这样可视化是为了让它更容易理解。
为了让它更有趣,可以在任务中添加一个 `time.sleep` 的调用来阻止异步循环:
```
async def io_task():
    time.sleep(0.01)
    await asyncio.sleep(0.01)
```
![time.sleep call][15]
程序耗时更长40ms任务填补了异步调度器中的空白。
这个功能对于诊断异步程序的行为和性能问题非常有帮助。
### 用 VizTracer 看看发生了什么
通过 VizTracer你可以在时间轴上查看程序的进展情况而不是从复杂的日志中想象。这有助于你更好地理解你的并发程序。
VizTracer 是开源的,在 Apache 2.0 许可证下发布支持所有常见的操作系统Linux、macOS 和 Windows。你可以在 [VizTracer 的 GitHub 仓库][16]中了解更多关于它的功能和访问它的源代码。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/python-viztracer
作者:[Tian Gao][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gaogaotiantian
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/colorful_sound_wave.png?itok=jlUJG0bM (Colorful sound wave graph)
[2]: https://readthedocs.org/projects/viztracer/
[3]: https://opensource.com/sites/default/files/uploads/viztracer_singlethreadtask.png (Running code in a single thread)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/viztracer_callstackreport.png (call-stack report)
[6]: https://opensource.com/sites/default/files/uploads/viztracer_multithread.png (Multi-thread program)
[7]: https://opensource.com/sites/default/files/uploads/viztracer_concurrency.png (Concurrency of multiple threads)
[8]: https://opensource.com/sites/default/files/uploads/viztracer_multithreadrun.png (Running with extra argument)
[9]: https://opensource.com/sites/default/files/uploads/viztracer_comparewithmultiprocess.png (Multi-process version)
[10]: https://opensource.com/sites/default/files/uploads/io-bound_singlethread.png (I/O-bound single-thread, single-task program)
[11]: https://opensource.com/sites/default/files/uploads/io-bound_multithread.png (I/O-bound multi-thread program)
[12]: https://opensource.com/sites/default/files/uploads/viztracer_asyncio.png (VizTracer with asyncio)
[13]: https://opensource.com/sites/default/files/uploads/log_async.png (Using --log_async to separate tasks)
[14]: https://opensource.com/sites/default/files/uploads/taskcreation.png (Graph of task creation and execution)
[15]: https://opensource.com/sites/default/files/uploads/time.sleep_call.png (time.sleep call)
[16]: https://github.com/gaogaotiantian/viztracer

View File

@ -2,26 +2,26 @@
[#]: via: (https://theartofmachinery.com/2021/03/18/reverse_engineering_a_docker_image.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (DCOLIVERSUN)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Reverse Engineering a Docker Image
Docker 镜像逆向工程
======
This started with a consulting snafu: Government organisation A got government organisation B to develop a web application. Government organisation B subcontracted part of the work to somebody. Hosting and maintenance of the project was later contracted out to a private-sector company C. Company C discovered that the subcontracted somebody (who was long gone) had built a custom Docker image and made it a dependency of the build system, but without committing the original Dockerfile. That left company C with a contractual obligation to manage a Docker image they had no source code for. Company C calls me in once in a while to do various things, so doing something about this mystery meat Docker image became my job.
本文介绍的内容起因于一次咨询乌龙:政府组织 A 让政府组织 B 开发一个网络应用程序。政府组织 B 把部分工作外包给某个人。后来,项目的托管和维护被外包给一家私人公司 C。C 公司发现,之前那个承包人(早已不知去向)构建了一个自定义的 Docker 镜像,并使该镜像成为构建系统的依赖项,但没有提交原始的 Dockerfile。于是 C 公司背上了管理一个没有源代码的 Docker 镜像的合同义务。C 公司偶尔叫我进去做各种工作,所以处理这个来路不明的 Docker 镜像就成了我的工作。
Fortunately, the Docker image format is a lot more transparent than it could be. A little detective work is needed, but a lot can be figured out just by pulling apart an image file. As an example, heres a quick walkthrough of an image for [the Prettier code formatter][1].
幸运的是Docker 镜像格式远比它本可以的样子透明得多。虽然还需要做一些侦查工作,但只要拆解一个镜像文件,就能发现很多东西。作为例子,下面快速剖析一下 [Prettier 代码格式化工具][1]的镜像。
First lets get the Docker daemon to pull the image, then extract the image to a file:
首先,让 Docker <ruby>守护进程<rt>daemon</rt></ruby>拉取镜像,然后将镜像提取到文件中:
```
docker pull tmknom/prettier:2.0.5
docker save tmknom/prettier:2.0.5 > prettier.tar
```
Yes, the file is just an archive in the classic tarball format:
是的,该文件只是一个典型 tarball 格式的归档文件:
```
$ tar xvf prettier.tar
@ -42,7 +42,7 @@ manifest.json
repositories
```
As you can see, Docker uses hashes a lot for naming things. Lets have a look at the `manifest.json`. Its in hard-to-read compacted JSON, but the [`jq` JSON Swiss Army knife][2] can pretty print it for us:
如你所见Docker 在命名时大量使用<ruby>哈希<rt>hash</rt></ruby>。我们看看 `manifest.json`。它是难以阅读的压缩 JSON不过 JSON 瑞士军刀 [`jq`][2] 可以为我们漂亮地打印出来:
```
$ jq . manifest.json
@ -61,7 +61,7 @@ $ jq . manifest.json
]
```
Note that the three layers correspond to the three hash-named directories. Well look at them later. For now, lets look at the JSON file pointed to by the `Config` key. Its a little long, so Ill just dump the first bit here:
请注意,这三层对应三个以哈希命名的目录,我们稍后再看它们。现在,让我们看看 `Config` 键指向的 JSON 文件。这个文件有点长,所以我只在这里贴出开头部分:
```
$ jq . 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json | head -n 20
@ -87,9 +87,9 @@ $ jq . 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json | h
"Image": "sha256:93e72874b338c1e0734025e1d8ebe259d4f16265dc2840f88c4c754e1c01ba0a",
```
The most interesting part is the `history` list, which lists every single layer in the image. A Docker image is a stack of these layers. Almost every statement in a Dockerfile turns into a layer that describes the changes to the image made by that statement. If you have a `RUN script.sh` statement that creates `really_big_file` that you then delete with `RUN rm really_big_file`, you actually get two layers in the Docker image: one that contains `really_big_file`, and one that contains a `.wh.really_big_file` tombstone to cancel it out. The overall image file isnt any smaller. Thats why you often see Dockerfile statements chained together like `RUN script.sh && rm really_big_file` — it ensures all changes are coalesced into one layer.
最有意思的是 `history` 列表它列出了镜像中的每一层。Docker 镜像就是由这些层堆叠而成的。Dockerfile 中几乎每条命令都会变成一个层,描述该命令对镜像所做的更改。如果你用 `RUN script.sh` 命令创建了 `really_big_file`,然后用 `RUN rm really_big_file` 删除它Docker 镜像实际会生成两层:一层包含 `really_big_file`,另一层包含用来抵消它的 `.wh.really_big_file` <ruby>墓碑记录<rt>tombstone</rt></ruby>。整个镜像文件并不会因此变小。这就是为什么你会经常看到像 `RUN script.sh && rm really_big_file` 这样把 Dockerfile 命令链在一起的写法,因为这样能确保所有更改都合并到一层中。
Here are all the layers recorded in the Docker image. Notice that most layers dont change the filesystem image and are marked `"empty_layer": true`. Only three are non-empty, which matches up with what we saw before.
以下是 Docker 镜像中记录的所有层。注意,大多数层并不改变文件系统镜像,它们被标记为 `"empty_layer": true`。只有三层是非空的,这与我们之前看到的相符。
```
$ jq .history 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json
@ -162,11 +162,11 @@ m=${NODEJS_VERSION} && npm install -g prettier@${PRETTIER_VERSION} && np
]
```
Fantastic! All the statements are right there in the `created_by` fields, so we can almost reconstruct the Dockerfile just from this. Almost. The `ADD` statement at the very top doesnt actually give us the file we need to `ADD`. `COPY` statements are also going to be opaque. We also lose `FROM` statements because they expand out to all the layers inherited from the base Docker image.
太棒了!所有的命令都在 `created_by` 字段中我们几乎可以仅凭这些重建出 Dockerfile。只是“几乎”。最上面的 `ADD` 命令实际上没有给我们需要 `ADD` 的那个文件,`COPY` 命令同样是不透明的。我们还会丢失 `FROM` 命令,因为它们会展开成从基础 Docker 镜像继承的所有层。
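作为示意,下面这段 Python 小脚本(并非原文内容)演示了如何从上面的配置 JSON 的 `history` 列表中提取这些命令:
```
import json

# manifest.json 中 Config 键指向的配置文件
with open("88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json") as f:
    config = json.load(f)

# 逐层打印生成该层的 Dockerfile 命令(个别条目可能没有该字段)
for layer in config["history"]:
    print(layer.get("created_by", ""))
```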
We can group the layers by Dockerfile by looking at the timestamps. Most layer timestamps are under a minute apart, representing how long each layer took to build. However, the first two layers are from `2020-04-24`, and the rest of the layers are from `2020-04-29`. This would be because the first two layers are from a base Docker image. Ideally wed figure out a `FROM` statement that gets us that image, so that we have a maintainable Dockerfile.
我们可以通过查看<ruby>时间戳<rt>timestamp</rt></ruby>,按 Dockerfile 对层进行分组。大多数层的时间戳相差不到一分钟,代表每一层构建所需的时间。但是前两层的时间戳是 `2020-04-24`,其余的是 `2020-04-29`。这应该是因为前两层来自一个基础 Docker 镜像。理想情况下,我们能找出一个获得该镜像的 `FROM` 命令,这样我们就有了一个可维护的 Dockerfile。
The `manifest.json` says that the first non-empty layer is `a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar`. Lets take a look:
`manifest.json` 展示第一个非空层是 `a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar`。让我们看看它:
```
$ cd a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/
@ -183,7 +183,7 @@ bin/chmod
bin/chown
```
Okay, that looks like it might be an operating system base image, which is what youd expect from a typical Dockerfile. There are 488 entries in the tarball, and if you scroll through them, some interesting ones stand out:
看起来它可能是一个<ruby>操作系统<rt>operating system</rt></ruby>基础镜像,这也是你期望从典型 Dockerfile 中看到的。Tarball 中有 488 个条目,如果你浏览一下,就会发现一些有趣的条目:
```
...
@ -203,7 +203,7 @@ etc/conf.d/
...
```
Sure enough, its an [Alpine][3] image, which you might have guessed if you noticed that the other layers used an `apk` command to install packages. Lets extract the tarball and look around:
果不其然,这是一个 [Alpine][3] 镜像,如果你注意到其他层使用 `apk` 命令安装软件包,你可能已经猜到了。让我们解压 tarball 看看:
```
$ mkdir files
@ -215,9 +215,9 @@ $ cat etc/alpine-release
3.11.6
```
If you pull `alpine:3.11.6` and extract it, youll find that theres one non-empty layer inside it, and the `layer.tar` is identical to the `layer.tar` in the base layer of the Prettier image.
如果你拉取并解压 `alpine:3.11.6`,你会发现里面有一个非空层,其 `layer.tar` 与 Prettier 镜像基础层中的 `layer.tar` 完全相同。
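如果想验证这一点,可以用几行 Python 比较两个文件的哈希(示意性写法,其中的文件路径是假设的):
```
import hashlib

def sha256(path):
    # 分块计算文件的 SHA-256避免一次性读入整个 tarball
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# 假设的路径:分别是 alpine:3.11.6 和 Prettier 镜像基础层解包后的 layer.tar
print(sha256("alpine/layer.tar") == sha256("prettier/layer.tar"))
```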
Just for the heck of it, whats in the other two non-empty layers? The second layer is the main layer containing the Prettier installation. It has 528 entries, including Prettier, a bunch of dependencies and certificate updates:
出于兴趣,另外两个非空层是什么?第二层是包含 Prettier 安装包的主层。它有 528 个条目,包含 Prettier、一堆依赖项和证书更新
```
...
@ -257,14 +257,14 @@ usr/share/ca-certificates/mozilla/Actalis_Authentication_Root_CA.crt
...
```
The third layer is created by the `WORKDIR /work` statement, and it contains exactly one entry:
第三层由 `WORKDIR /work` 命令创建,它只包含一个条目:
```
$ tar tf 6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/layer.tar
work/
```
[The original Dockerfile is in the Prettier git repo.][4]
[原始 Dockerfile 在 Prettier 的 git repo 中][4]
--------------------------------------------------------------------------------
@ -272,7 +272,7 @@ via: https://theartofmachinery.com/2021/03/18/reverse_engineering_a_docker_image
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[DCOLIVERSUN](https://github.com/DCOLIVERSUN)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,300 @@
[#]: subject: (5 everyday sysadmin tasks to automate with Ansible)
[#]: via: (https://opensource.com/article/21/3/ansible-sysadmin)
[#]: author: (Mike Calizo https://opensource.com/users/mcalizo)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
用 Ansible 自动化系统管理员的 5 个日常任务
======
> 通过使用 Ansible 自动执行可重复的日常任务,提高工作效率并避免错误。
![Tips and gears turning][1]
如果你讨厌执行重复性的任务,那么我有一个提议给你,去学习 [Ansible][2]!
Ansible 是一个工具,它可以帮助你更轻松、更快速地完成日常任务,这样你就可以更有效地利用时间,比如学习重要的新技术。对于系统管理员来说,它是一个很好的工具,因为它可以帮助你实现标准化,并在日常活动中进行协作,包括:
1. 安装、配置和调配服务器和应用程序;
2. 定期更新和升级系统;
3. 监测、减轻和排除问题。
通常,许多这些基本的日常任务都需要手动步骤,而根据个人的技能的不同,可能会造成不一致并导致配置发生漂移。这在小规模的实施中可能是可以接受的,因为你管理一台服务器,并且知道自己在做什么。但当你管理数百或数千台服务器时会发生什么?
如果不小心,这些手动的、可重复的任务可能会因为人为的错误而造成延误和问题,而这些错误可能会影响你及你的组织的声誉。
这就是自动化的价值所在。而 [Ansible][3] 是自动化这些可重复的日常任务的完美工具。
自动化的一些原因是:
1. 你想要一个一致和稳定的环境。
2. 你想要促进标准化。
3. 你希望减少停机时间,减少严重事故案例,以便可以享受生活。
4. 你想喝杯啤酒,而不是排除故障问题!
本文提供了一些系统管理员可以使用 Ansible 自动化的日常任务的例子。我把本文中的剧本和角色放到了 GitHub 上的 [系统管理员任务仓库][4] 中,以方便你使用它们。
这些剧本的结构是这样的(我的注释前面有 `==>`)。
```
[root@homebase 6_sysadmin_tasks]# tree -L 2
.
├── ansible.cfg ==> 负责控制 Ansible 行为的配置文件
├── ansible.log
├── inventory
│ ├── group_vars
│ ├── hosts ==> 包含我的目标服务器列表的清单文件
│ └── host_vars
├── LICENSE
├── playbooks ==> 包含我们将在本文中使用的剧本的目录
│ ├── c_logs.yml
│ ├── c_stats.yml
│ ├── c_uptime.yml
│ ├── inventory
│ ├── r_cron.yml
│ ├── r_install.yml
│ └── r_script.yml
├── README.md
├── roles ==> 包含我们将在本文中使用的角色的目录
│ ├── check_logs
│ ├── check_stats
│ ├── check_uptime
│ ├── install_cron
│ ├── install_tool
│ └── run_scr
└── templates ==> 包含 jinja 模板的目录
├── cron_output.txt.j2
├── sar.txt.j2
└── scr_output.txt.j2
```
清单类似这样的:
```
[root@homebase 6_sysadmin_tasks]# cat inventory/hosts
[rhel8]
master ansible_ssh_host=192.168.1.12
workernode1 ansible_ssh_host=192.168.1.15
[rhel8:vars]
ansible_user=ansible ==> 请用你的 ansible 用户名更新它
```
这里有五个你可以用 Ansible 自动完成的日常系统管理任务。
### 1、检查服务器的正常运行时间
你需要确保你的服务器一直处于正常运行状态。机构会拥有企业监控工具来监控服务器和应用程序的正常运行时间,但自动监控工具时常会出现故障,你需要登录进去验证一台服务器的状态。手动验证每台服务器的正常运行时间需要花费大量的时间。你的服务器越多,你需要花费的时间就越长。但如果有了自动化,这种验证可以在几分钟内完成。
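在查看剧本之前,先看一个最小的本地示意脚本(并非上述仓库中的内容,仅适用于 Linux它展示的正是剧本要在每台主机上执行的那类检查读取 `/proc/uptime`
```
# /proc/uptime 的第一个字段是开机以来的秒数
with open("/proc/uptime") as f:
    seconds = float(f.read().split()[0])

print(f"up {seconds / 3600:.1f} hours")
```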
使用 [check_uptime][5] 角色和 `c_uptime.yml` 剧本:
```
[root@homebase 6_sysadmin_tasks]# ansible-playbook -i inventory/hosts playbooks/c_uptime.yml -k
SSH password:
PLAY [Check Uptime for Servers] ****************************************************************************************************************************************
TASK [check_uptime : Capture timestamp] *************************************************************************************************
.
截断...
.
PLAY RECAP *************************************************************************************************************************************************************
master : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
workernode1 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[root@homebase 6_sysadmin_tasks]#
```
剧本的输出是这样的:
```
[root@homebase 6_sysadmin_tasks]# cat /var/tmp/uptime-master-20210221004417.txt
-----------------------------------------------------
Uptime for master
-----------------------------------------------------
00:44:17 up 44 min, 2 users, load average: 0.01, 0.09, 0.09
-----------------------------------------------------
[root@homebase 6_sysadmin_tasks]# cat /var/tmp/uptime-workernode1-20210221184525.txt
-----------------------------------------------------
Uptime for workernode1
-----------------------------------------------------
18:45:26 up 44 min, 2 users, load average: 0.01, 0.01, 0.00
-----------------------------------------------------
```
使用 Ansible你可以用较少的努力以人类可读的格式获得多个服务器的状态[Jinja 模板][6] 允许你根据自己的需要调整输出。通过更多的自动化,你可以按计划运行,并通过电子邮件发送输出,以达到报告的目的。
### 2、配置额外的 cron 作业
你需要根据基础设施和应用需求定期更新服务器的计划作业。这似乎是一项微不足道的工作,但必须正确且持续地完成。想象一下,如果你对数百台生产服务器进行手动操作,这需要花费多少时间。如果做错了,就会影响生产应用程序,如果计划的作业重叠,就会导致应用程序停机或影响服务器性能。
使用 [install_cron][7] 角色和 `r_cron.yml` 剧本:
```
[root@homebase 6_sysadmin_tasks]# ansible-playbook -i inventory/hosts playbooks/r_cron.yml -k
SSH password:
PLAY [Install additional cron jobs for root] ***************************************************************************************************************************
.
截断...
.
PLAY RECAP *************************************************************************************************************************************************************
master : ok=10 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
workernode1 : ok=10 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
验证剧本的结果:
```
[root@homebase 6_sysadmin_tasks]# ansible -i inventory/hosts all -m shell -a "crontab -l" -k
SSH password:
master | CHANGED | rc=0 >>
1 2 3 4 5 /usr/bin/ls /tmp
#Ansible: Iotop Monitoring
0 5,2 * * * /usr/sbin/iotop -b -n 1 >> /var/tmp/iotop.log 2>> /var/tmp/iotop.err
workernode1 | CHANGED | rc=0 >>
1 2 3 4 5 /usr/bin/ls /tmp
#Ansible: Iotop Monitoring
0 5,2 * * * /usr/sbin/iotop -b -n 1 >> /var/tmp/iotop.log 2>> /var/tmp/iotop.err
```
使用 Ansible你可以以快速和一致的方式更新所有服务器上的 crontab 条目。你还可以使用一个简单的点对点 Ansible 命令来报告更新后的 crontab 的状态,以验证最近应用的变化。
### 3、收集服务器统计信息和 sar 报告
在常规的故障排除过程中,为了诊断服务器性能或应用程序问题,你需要收集<ruby>系统活动报告<rt>system activity reports</rt></ruby>sar和服务器统计信息。在大多数情况下服务器日志包含非常重要的信息开发人员或运维团队需要这些信息来帮助解决影响整个环境的具体问题。
安全团队在进行调查时要求非常严格,大多数时候,他们希望查看多个服务器的日志。你需要找到一种简单的方法来收集这些文档。如果你能把收集任务委托给他们,那就更好了。
通过 [check_stats][8] 角色和 `c_stats.yml` 剧本来完成这个任务:
```
$ ansible-playbook -i inventory/hosts playbooks/c_stats.yml
PLAY [Check Stats/sar for Servers] ***********************************************************************************************************************************
TASK [check_stats : Get current date time] ***************************************************************************************************************************
changed: [master]
changed: [workernode1]
.
截断...
.
PLAY RECAP ***********************************************************************************************************************************************************
master : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
workernode1 : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
输出看起来像这样:
```
$ cat /tmp/sar-workernode1-20210221214056.txt
-----------------------------------------------------
sar output for workernode1
-----------------------------------------------------
Linux 4.18.0-193.el8.x86_64 (node1) 21/02/21 _x86_64_ (2 CPU)
21:39:30 LINUX RESTART (2 CPU)
-----------------------------------------------------
```
### 4、收集服务器日志
除了收集服务器统计和 sars 信息,你还需要不时地收集日志,尤其是当你需要帮助调查问题时。
通过 [check_logs][9] 角色和 `c_logs.yml` 剧本来实现:
```
$ ansible-playbook -i inventory/hosts playbooks/c_logs.yml -k
SSH password:
PLAY [Check Logs for Servers] ****************************************************************************************************************************************
.
截断...
.
TASK [check_logs : Capture Timestamp] ********************************************************************************************************************************
changed: [master]
changed: [workernode1]
PLAY RECAP ***********************************************************************************************************************************************************
master : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
workernode1 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
为了确认输出,打开转储位置生成的文件。日志应该是这样的:
```
$ cat /tmp/logs-workernode1-20210221214758.txt | more
-----------------------------------------------------
Logs gathered: /var/log/messages for workernode1
-----------------------------------------------------
Feb 21 18:00:27 node1 kernel: Command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-4.18.0-193.el8.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto resume=/dev/mapper/rhel
-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet
Feb 21 18:00:27 node1 kernel: Disabled fast string operations
Feb 21 18:00:27 node1 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 21 18:00:27 node1 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 21 18:00:27 node1 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 21 18:00:27 node1 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 21 18:00:27 node1 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
```
### 5、安装或删除软件包和软件
你需要能够持续快速地在系统上安装和更新软件和软件包。缩短安装或更新软件包和软件所需的时间,可以避免服务器和应用程序不必要的停机时间。
通过 [install_tool][10] 角色和 `r_install.yml` 剧本来实现这一点:
```
$ ansible-playbook -i inventory/hosts playbooks/r_install.yml -k
SSH password:
PLAY [Install additional tools/packages] ***********************************************************************************
TASK [install_tool : Install specified tools in the role vars] *************************************************************
ok: [master] => (item=iotop)
ok: [workernode1] => (item=iotop)
ok: [workernode1] => (item=traceroute)
ok: [master] => (item=traceroute)
PLAY RECAP *****************************************************************************************************************
master : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
workernode1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
这个例子安装了在 vars 文件中定义的两个特定包和版本。使用 Ansible 自动化,你可以比手动安装更快地安装多个软件包或软件。你也可以使用 vars 文件来定义你要安装的软件包的版本。
```
$ cat roles/install_tool/vars/main.yml
---
# vars file for install_tool
ins_action: absent
package_list:
  - iotop-0.6-16.el8.noarch
  - traceroute
```
### 拥抱自动化
要成为一名有效率的系统管理员你需要接受自动化来鼓励团队内部的标准化和协作。Ansible 使你能够在更少的时间内做更多的事情,这样你就可以将时间花在更令人兴奋的项目上,而不是做重复的任务,如管理你的事件和问题管理流程。
有了更多的空闲时间,你可以学习更多的知识,让自己可以迎接下一个职业机会的到来。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/ansible-sysadmin
作者:[Mike Calizo][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mcalizo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning)
[2]: https://www.ansible.com/
[3]: https://opensource.com/tags/ansible
[4]: https://github.com/mikecali/6_sysadmin_tasks
[5]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/check_uptime
[6]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html
[7]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/install_cron
[8]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/check_stats
[9]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/check_logs
[10]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/install_tool

View File

@ -0,0 +1,76 @@
[#]: subject: (Affordable high-temperature 3D printers at home)
[#]: via: (https://opensource.com/article/21/3/desktop-3d-printer)
[#]: author: (Joshua Pearce https://opensource.com/users/jmpearce)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
在家就能用得起的高温 3D 打印机
======
有多实惠?低于 1000 美元。
![High-temperature 3D-printed mask][1]
3D 打印机从 20 世纪 80 年代就已经出现了,但直到 [RepRap][2] 项目开源后才得到大众的关注。RepRap 是 self-replicating rapid prototyper自我复制快速原型机的缩写它是一种基本上可以打印出自身零件的 3D 打印机。[2004 年][3]开源设计发布之后3D 打印机的成本从几十万美元降到了几百美元。
这些开源的桌面工具一直局限于 ABS乐高积木的材质等低性能、低温的热塑性塑料。市场上有几款高温打印机但其高昂的成本几万到几十万美元使大多数人无法获得。直到最近这个领域几乎没有竞争因为相关技术一直被一项专利US6722872B1锁定该专利于 2021 年 2 月 27 日[到期][4]。
随着这个路障的消除,我们即将看到高温、低成本、熔融纤维 3D 打印机的爆发。
价格低到什么程度?低于 1000 美元如何。
在疫情最严重的时候,我的团队赶紧发布了一个[开源高温 3D 打印机][5]的设计,用于制造可热灭菌的个人防护装备 PPE。该项目的想法是让人们能够[用高温材料打印 PPE][6](如口罩),并将它放入家用烤箱进行消毒。我们称我们的设备为 Cerberus它具有以下特点
1. 可达到 200℃ 的加热床。
2. 可达到 500℃ 的<ruby>热端<rt>hot end</rt></ruby>。
3. 带有 1kW 加热器芯的隔热加热室。
4. 加热室和热床均由市电(交流电)直接供电,以便快速升温。
你可以用现成的零件来构建这个项目,其中一些零件你可以打印,价格不到 1000 美元。它可以成功打印聚醚酮酮 PEKK 和聚醚酰亚胺PEI以商品名 Ultem 出售)。这两种材料都比现在低成本打印机能打印的任何材料强得多。
![PPE printer][7]
J.M.Pearce, [GNU Free Documentation License][8]
这款高温 3D 打印机按三个头来设计但我们发布的版本只有一个头。Cerberus 是以希腊神话中的三头冥界看门狗命名的。通常情况下,我们不会发布只有一个头的打印机,但疫情改变了我们的优先级。[开源社区团结起来][9],帮助缓解早期的物资短缺,许多桌面 3D 打印机都在产出有用的产品,以帮助保护人们免受 COVID 的侵害。
那另外两个头呢?
其他两个头用于高温熔融颗粒制造(例如,这个开源 [3D 打印机][10]的高温版本)和金属丝铺设(像[这个设计][11]那样),以制造开源热交换器。Cerberus 打印机未来的功能可能还包括自动喷嘴清洁器,以及在高温下打印连续纤维的方法。另外,你还可以在转台上安装任何你喜欢的东西来制造高端产品。
“把 3D 打印机装进一个盒子里、而把电子元件留在外面”的[专利][12]到期,为高温家用 3D 打印机铺平了道路,这将使这些设备以合理的成本从单纯的玩具变成工业工具。
已经有公司在 RepRap 传统的基础上将这些低成本系统推向市场例如1250 美元的 [Creality3D CR-5 Pro][13] 3D 打印机可以达到 300℃。Creality 销售最受欢迎的桌面 3D 打印机,并开放了部分设计。
然而,要打印超高端工程聚合物,这些打印机需要达到 350℃ 以上。开源计划已经可以帮助桌面 3D 打印机制造商与那些躲在专利背后、阻碍 3D 打印发展了 20 年的垄断公司展开竞争。预计低成本、高温桌面 3D 打印机的竞争将真正“升温”!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/desktop-3d-printer
作者:[Joshua Pearce][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jmpearce
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/3d_printer_mask.jpg?itok=5ePZghTW (High-temperature 3D-printed mask)
[2]: https://reprap.org/wiki/RepRap
[3]: https://reprap.org/wiki/Wealth_Without_Money
[4]: https://3dprintingindustry.com/news/stratasys-heated-build-chamber-for-3d-printer-patent-us6722872b1-set-to-expire-this-week-185012/
[5]: https://doi.org/10.1016/j.ohx.2020.e00130
[6]: https://www.appropedia.org/Open_Source_High-Temperature_Reprap_for_3-D_Printing_Heat-Sterilizable_PPE_and_Other_Applications
[7]: https://opensource.com/sites/default/files/uploads/ppe-hight3dp.png (PPE printer)
[8]: https://www.gnu.org/licenses/fdl-1.3.html
[9]: https://opensource.com/article/20/3/volunteer-covid19
[10]: https://www.liebertpub.com/doi/10.1089/3dp.2019.0195
[11]: https://www.appropedia.org/Open_Source_Multi-Head_3D_Printer_for_Polymer-Metal_Composite_Component_Manufacturing
[12]: https://www.academia.edu/17609790/A_Novel_Approach_to_Obviousness_An_Algorithm_for_Identifying_Prior_Art_Concerning_3-D_Printing_Materials
[13]: https://creality3d.shop/collections/cr-series/products/cr-5-pro-h-3d-printer