mirror of
https://github.com/LCTT/TranslateProject.git
synced 2024-12-29 21:41:00 +08:00
commit
74f808fbef
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13233-1.html)
[#]: subject: (Using Python to explore Google's Natural Language API)
[#]: via: (https://opensource.com/article/19/7/python-google-natural-language-api)
[#]: author: (JR Oakes https://opensource.com/users/jroakes)

利用 Python 探究 Google 的自然语言 API
======

> Google API 可以凸显出有关 Google 如何对网站进行分类的线索,以及如何调整内容以改进搜索结果的方法。

![](https://img.linux.net.cn/data/attachment/album/202103/24/232018q66pz2uc5uuq1p03.jpg)

作为一名技术性的搜索引擎优化人员,我一直在寻找以新颖的方式使用数据的方法,以更好地了解 Google 如何对网站进行排名。我最近研究了 Google 的 [自然语言 API][2] 能否更好地揭示 Google 是如何分类网站内容的。

尽管有 [开源 NLP 工具][3],但我想探索谷歌的工具,假定它在其他产品(比如搜索)中也使用了同样的技术。本文介绍了 Google 的自然语言 API,并探究了常见的自然语言处理(NLP)任务,以及如何用它们来为网站内容的创作提供参考。

### 了解数据类型

首先,了解 Google 自然语言 API 返回的数据类型非常重要。

#### 实体

<ruby>实体<rt>Entities</rt></ruby>是可以与物理世界中的某些事物联系在一起的文本短语。<ruby>命名实体识别<rt>Named Entity Recognition</rt></ruby>(NER)是 NLP 的难点,因为工具通常需要查看关键字的完整上下文才能理解其用法。例如,<ruby>同形异义字<rt>homographs</rt></ruby>拼写相同,但是具有多种含义。句子中的 “lead” 是指一种金属:“铅”(名词),使某人移动:“牵领”(动词),还可能是剧本中的主要角色(也是名词)?Google 有 12 种不同类型的实体,还有第 13 个名为 “UNKNOWN”(未知)的统称类别。一些实体与维基百科的文章相关,这表明 [知识图谱][4] 对数据的影响。每个实体都会返回一个显著性分数,即其与所提供文本的整体相关性。

![实体][5]

#### 情感

<ruby>情感<rt>Sentiment</rt></ruby>,即对某事的看法或态度,是在文档和句子层面,以及文档中发现的单个实体上进行衡量的。情感的<ruby>得分<rt>score</rt></ruby>范围从 -1.0(消极)到 1.0(积极)。<ruby>幅度<rt>magnitude</rt></ruby>代表情感的<ruby>非归一化<rt>non-normalized</rt></ruby>强度;它的范围是 0.0 到无穷大。

![情感][6]

#### 语法

<ruby>语法<rt>Syntax</rt></ruby>解析包含了大多数在较好的库中常见的 NLP 活动,例如 <ruby>[词形演变][7]<rt>lemmatization</rt></ruby>、<ruby>[词性标记][8]<rt>part-of-speech tagging</rt></ruby> 和 <ruby>[依赖树解析][9]<rt>dependency-tree parsing</rt></ruby>。NLP 的主要工作是帮助机器理解文本以及关键字之间的关系。语法解析是大多数语言处理或理解任务的基础部分。

![语法][10]

#### 分类

<ruby>分类<rt>Categories</rt></ruby>是将整个给定内容分配给特定行业或主题类别,其<ruby>置信度<rt>confidence</rt></ruby>得分从 0.0 到 1.0。这些分类似乎与其他 Google 工具使用的受众群体和网站类别相同,如 AdWords。

![分类][11]

### 提取数据

现在,我将提取一些示例数据进行处理。我使用 Google 的 [搜索控制台 API][12] 收集了一些搜索查询及其相应的网址。Google 搜索控制台是一个可以报告人们(通过 Google Search)用哪些搜索词找到网站页面的工具。这个 [开源的 Jupyter 笔记本][13] 可以让你提取有关网站的类似数据。在此示例中,我在一个网站(我没有提及名字)上提取了 2019 年 1 月 1 日至 6 月 1 日期间产生的 Google 搜索控制台数据,并将其限制为至少获得一次点击(而不只是<ruby>曝光<rt>impressions</rt></ruby>)的查询。

该数据集包含 2969 个页面和在 Google Search 的结果中显示了该网站网页的 7144 条查询的信息。下表显示,绝大多数页面获得的点击很少,因为该网站侧重于所谓的长尾(越特殊通常就更长尾)而不是短尾(非常笼统,搜索量更大)搜索查询。

![所有页面的点击次数柱状图][14]

为了减少数据集的大小并仅获得效果最好的页面,我将数据集限制为在此期间至少获得 20 次曝光的页面。这是精炼数据集的按页点击的柱状图,其中包括 723 个页面:

![部分网页的点击次数柱状图][15]
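
上面按曝光量精炼数据集的做法,可以用几行 Python 来概括。下面是一个最小示意(其中的字段名 `impressions`、`clicks` 和数据均为假设,并非 Search Console API 的确切返回结构):

```python
# 假设的 Search Console 汇总数据:每个页面的曝光量与点击量
pages = [
    {'url': '/post-a', 'impressions': 150, 'clicks': 12},
    {'url': '/post-b', 'impressions': 8, 'clicks': 1},
    {'url': '/post-c', 'impressions': 40, 'clicks': 3},
]

# 仅保留在统计期间至少获得 20 次曝光的页面
refined = [p for p in pages if p['impressions'] >= 20]

print(len(refined))                 # 精炼后的页面数
print([p['url'] for p in refined])
```

真实场景中,`pages` 会来自 Search Console API 的查询结果,随后即可像文中那样对精炼后的数据集绘制点击柱状图。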

### 在 Python 中使用 Google 自然语言 API 库

要测试 API,在 Python 中创建一个利用 [google-cloud-language][16] 库的小脚本。以下代码基于 Python 3.5+。

首先,激活一个新的虚拟环境并安装库。用环境的唯一名称替换 `<your-env>` 。

```
virtualenv <your-env>
source <your-env>/bin/activate
pip install --upgrade google-cloud-language
pip install --upgrade requests
```

该脚本从 URL 提取 HTML,并将 HTML 提供给自然语言 API。返回一个包含 `sentiment`、 `entities` 和 `categories` 的字典,其中这些键的值都是列表。我使用 Jupyter 笔记本运行此代码,因为使用同一内核注释和重试代码更加容易。

```
# Import needed libraries
import requests
import json

from google.cloud import language
from google.oauth2 import service_account
from google.cloud.language import enums
from google.cloud.language import types

# Build language API client (requires service account key)
client = language.LanguageServiceClient.from_service_account_json('services.json')

# Define functions
def pull_googlenlp(client, url, invalid_types = ['OTHER'], **data):

    html = load_text_from_url(url, **data)

    if not html:
        return None

    document = types.Document(
        content=html,
        type=enums.Document.Type.HTML )

    features = {'extract_syntax': True,
                'extract_entities': True,
                'extract_document_sentiment': True,
                'extract_entity_sentiment': True,
                'classify_text': False
                }

    response = client.annotate_text(document=document, features=features)
    sentiment = response.document_sentiment
    entities = response.entities

    response = client.classify_text(document=document)
    categories = response.categories

    def get_type(type_):
        # Map the numeric entity type to its readable name
        return enums.Entity.Type(type_).name

    result = {}

    result['sentiment'] = []
    result['entities'] = []
    result['categories'] = []

    if sentiment:
        result['sentiment'] = [{ 'magnitude': sentiment.magnitude, 'score': sentiment.score }]

    for entity in entities:
        if get_type(entity.type) not in invalid_types:
            result['entities'].append({'name': entity.name, 'type': get_type(entity.type), 'salience': entity.salience, 'wikipedia_url': entity.metadata.get('wikipedia_url', '-') })

    for category in categories:
        result['categories'].append({'name': category.name, 'confidence': category.confidence})

    return result


def load_text_from_url(url, **data):

    timeout = data.get('timeout', 20)

    try:
        print("Extracting text from: {}".format(url))
        response = requests.get(url, timeout=timeout)

        text = response.text
        status = response.status_code

        if status == 200 and len(text) > 0:
            return text

        return None

    except Exception as e:
        print('Problem with url: {0}.'.format(url))
        return None
```

要访问该 API,请按照 Google 的 [快速入门说明][17] 在 Google 云主控台中创建一个项目,启用该 API 并下载服务帐户密钥。之后,你应该拥有一个类似于以下内容的 JSON 文件:

![services.json 文件][18]

命名为 `services.json`,并上传到项目文件夹。

然后,你可以通过运行以下程序来提取任何 URL(例如 Opensource.com)的 API 数据:

```
url = "https://opensource.com/article/19/6/how-ssh-running-container"
pull_googlenlp(client,url)
```

如果设置正确,你将看到以下输出:

![拉取 API 数据的输出][19]

为了使入门更加容易,我创建了一个 [Jupyter 笔记本][20],你可以下载并使用它来测试提取网页的实体、类别和情感。我更喜欢使用 [JupyterLab][21],它是 Jupyter 笔记本的扩展,其中包括文件查看器和其他增强的用户体验功能。如果你不熟悉这些工具,我认为利用 [Anaconda][22] 是开始使用 Python 和 Jupyter 的最简单途径。它使安装和设置 Python 以及常用库变得非常容易,尤其是在 Windows 上。

### 处理数据

使用这些抓取给定页面 HTML 并将其传递给自然语言 API 的函数,我可以对 723 个 URL 进行一些分析。首先,我将通过查看所有页面中返回的顶级分类的数量来查看与网站相关的分类。

#### 分类

![来自示例站点的分类数据][23]

这似乎是该特定站点关键主题的相当准确的代表。通过查看某个效果最好的页面所排名的一条查询,我可以比较同一查询在 Google 搜索结果中的其他排名页面。

  * URL 1 | 顶级类别:/法律和政府/与法律相关的(0.5099999904632568)共 1 个类别。
  * 未返回任何类别。
  * URL 3 | 顶级类别:/互联网与电信/移动与无线(0.6100000143051147)共 1 个类别。
  * URL 4 | 顶级类别:/计算机与电子产品/软件(0.5799999833106995)共有 2 个类别。
  * URL 5 | 顶级类别:/互联网与电信/移动与无线/移动应用程序和附件(0.75)共有 1 个类别。
  * 未返回任何类别。
  * URL 7 | 顶级类别:/计算机与电子/软件/商业与生产力软件(0.7099999785423279)共 2 个类别。
  * URL 8 | 顶级类别:/法律和政府/与法律相关的(0.8999999761581421)共 3 个类别。
  * URL 9 | 顶级类别:/参考/一般参考/类型指南和模板(0.6399999856948853)共有 1 个类别。
  * 未返回任何类别。

上面括号中的数字表示 Google 对页面内容与该分类相关的置信度。对于相同分类,第八个结果比第一个结果具有更高的置信度,因此,这似乎不是定义排名相关性的灵丹妙药。此外,分类太宽泛,无法满足特定搜索主题的需要。

通过排名查看平均置信度,这两个指标之间似乎没有相关性,至少对于此数据集而言如此:

![平均置信度排名分布图][24]

用这种方法对网站进行规模化审查是有意义的,可以确保内容类别易于理解,并且样板或销售内容不会使你的页面偏离你的主要专业领域。想一想,如果你出售工业用品,但你的页面返回的主要分类却是 “Marketing(营销)”,会怎么样。似乎没有强烈的迹象表明,分类相关性与你的排名有什么关系,至少在页面级别如此。

#### 情感

我不会在情感上花很多时间。在所有从 API 返回情感的页面中,它们分为两个区间:0.1 和 0.2,这几乎是中立的情感。根据直方图,很容易看出情感没有太大价值。对于新闻或舆论网站而言,测量特定页面的情感与其排名中位数之间的相关性将是一个更加有趣的指标。

![独特页面的情感柱状图][25]

#### 实体

在我看来,实体是 API 中最有趣的部分。这是在所有页面中按<ruby>显著性<rt>salience</rt></ruby>(或与页面的相关性)选择的顶级实体。请注意,对于相同的术语(销售清单),Google 会推断出不同的类型,可能是错误的。这是由于这些术语出现在内容中的不同上下文中引起的。

![示例网站的顶级实体][26]

然后,我分别查看了每个实体类型,看看该实体的显著性与页面的最佳排名位置之间是否存在关联。对于每种类型,我匹配了该类型顶级实体的显著性(与页面的整体相关性),并按显著性降序排列。

有些实体类型在所有示例中返回的显著性为零,因此我从下面的图表中省略了这些结果。

![显著性与最佳排名位置的相关性][27]

“Consumer Good(消费性商品)” 实体类型具有最高的正相关性,<ruby>皮尔森相关度<rt>Pearson correlation</rt></ruby>为 0.15854,尽管由于较低编号的排名更好,所以 “Person” 实体的结果最好,相关度为 -0.15483。这是一个非常小的样本集,尤其是对于单个实体类型,我不能对数据做太多的判断。我没有发现任何具有强相关性的值,但是 “Person” 实体最有意义。网站通常都有关于其首席执行官和其他主要雇员的页面,这些页面很可能在这些查询的搜索结果方面做得好。
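
文中的皮尔森相关度可以不依赖第三方库直接算出。下面是一个纯 Python 的示意实现(`salience` 与 `best_rank` 为虚构数据,仅用于演示显著性越高、排名数字越小的负相关情形):

```python
import math

def pearson(xs, ys):
    # 皮尔森相关系数:协方差除以两个标准差的乘积
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 虚构示例:显著性(salience)与最佳排名位置
salience = [0.8, 0.6, 0.4, 0.3, 0.1]
best_rank = [2, 5, 4, 9, 10]
print(round(pearson(salience, best_rank), 3))
```

在真实分析中,把每种实体类型对应的显著性与排名序列代入 `pearson` 即可得到文中那样的相关度数值。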

接下来,从整体上查看该站点时,根据实体名称和实体类型,出现了以下主题:

![基于实体名称和实体类型的主题][28]

我模糊了几个看起来过于具体的结果,以掩盖网站的身份。从主题上讲,名称信息是在你(或竞争对手)的网站上就地查看其核心主题的一种好方法。这样做仅基于示例网站的排名网址,而不是基于所有网站的可能网址(因为 Search Console 数据仅记录 Google 中展示的页面),但是结果会很有趣,尤其是当你使用像 [Ahrefs][29] 之类的工具提取一个网站的主要排名 URL,该工具会跟踪许多查询以及这些查询的 Google 搜索结果。

实体数据中另一个有趣的部分是,标记为 “CONSUMER_GOOD” 的实体往往 “看起来” 很像我在 “<ruby>知识结果<rt>Knowledge Results</rt></ruby>” 中看到的结果,也就是 Google 搜索结果页面右侧的知识卡片。

![Google 搜索结果][30]

在我们的数据集中具有三个或三个以上关键字的 “Consumer Good(消费性商品)” 实体名称中,有 5.8% 的知识结果与 Google 对该实体命名的结果相同。这意味着,如果你在 Google 中搜索术语或短语,则右侧的框(例如,上面显示 Linux 的知识结果)将显示在搜索结果页面中。由于 Google 会 “挑选” 代表实体的示例网页,因此这是一个很好的机会,可以在搜索结果中识别出具有唯一特征的机会。同样有趣的是,在 Google 中显示知识结果的这 5.8% 的名称中,没有一个实体从自然语言 API 返回了维基百科 URL。这很有趣,值得进行额外的分析。这将是非常有用的,特别是对于传统的全球排名跟踪工具(如 Ahrefs)数据库中没有的更深奥的主题。

如前所述,知识结果对于那些希望自己的内容在 Google 中被收录的网站所有者来说是非常重要的,因为它们在桌面搜索中非常显眼。据推测,它们也很可能与 Google [Discover][31] 的知识库主题保持一致,这是一款适用于 Android 和 iOS 的产品,它试图根据用户感兴趣但没有明确搜索的主题为用户浮现内容。

### 总结

本文介绍了 Google 的自然语言 API,分享了一些代码,并研究了此 API 对网站所有者可能有用的方式。关键要点是:

  * 学习使用 Python 和 Jupyter 笔记本,可以为你的数据收集任务打开一个由令人难以置信的聪明和有才华的人建立的 API 和开源项目(如 Pandas 和 NumPy)的世界。
  * Python 允许我为了一个特定目的快速提取和测试有关 API 价值的假设。
  * 通过 Google 的分类 API 传递网站页面可能是一项很好的检查,以确保其内容分解成正确的主题分类。对于竞争对手的网站执行此操作还可以提供有关在何处进行调整或创建内容的指导。
  * 对于示例网站,Google 的情感评分似乎并不是一个有趣的指标,但是对于新闻或基于意见的网站,它可能是一个有趣的指标。
  * Google 发现的实体从整体上提供了更细化的网站主题级别视图,并且像分类一样,在竞争性内容分析中使用将非常有趣。
  * 实体可以帮助定义机会,使你的内容可以与搜索结果或 Google Discover 结果中的 Google 知识块保持一致。在我们的数据集中,更长(按字数计)的 “Consumer Goods(消费商品)” 实体中有 5.8% 显示了这些知识结果;对于某些网站来说,有机会更好地优化这些实体的页面显著性分数,从而更有机会在 Google 搜索结果或 Google Discover 建议中占据这个重要的展示位置。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/python-google-natural-language-api

作者:[JR Oakes][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jroakes
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
[2]: https://cloud.google.com/natural-language/#natural-language-api-demo
[3]: https://opensource.com/article/19/3/natural-language-processing-tools
[4]: https://en.wikipedia.org/wiki/Knowledge_Graph
[5]: https://opensource.com/sites/default/files/uploads/entities.png (Entities)
[6]: https://opensource.com/sites/default/files/uploads/sentiment.png (Sentiment)
[7]: https://en.wikipedia.org/wiki/Lemmatisation
[8]: https://en.wikipedia.org/wiki/Part-of-speech_tagging
[9]: https://en.wikipedia.org/wiki/Parse_tree#Dependency-based_parse_trees
[10]: https://opensource.com/sites/default/files/uploads/syntax.png (Syntax)
[11]: https://opensource.com/sites/default/files/uploads/categories.png (Categories)
[12]: https://developers.google.com/webmaster-tools/
[13]: https://github.com/MLTSEO/MLTS/blob/master/Demos.ipynb
[14]: https://opensource.com/sites/default/files/uploads/histogram_1.png (Histogram of clicks for all pages)
[15]: https://opensource.com/sites/default/files/uploads/histogram_2.png (Histogram of clicks for subset of pages)
[16]: https://pypi.org/project/google-cloud-language/
[17]: https://cloud.google.com/natural-language/docs/quickstart
[18]: https://opensource.com/sites/default/files/uploads/json_file.png (services.json file)
[19]: https://opensource.com/sites/default/files/uploads/output.png (Output from pulling API data)
[20]: https://github.com/MLTSEO/MLTS/blob/master/Tutorials/Google_Language_API_Intro.ipynb
[21]: https://github.com/jupyterlab/jupyterlab
[22]: https://www.anaconda.com/distribution/
[23]: https://opensource.com/sites/default/files/uploads/categories_2.png (Categories data from example site)
[24]: https://opensource.com/sites/default/files/uploads/plot.png (Plot of average confidence by ranking position )
[25]: https://opensource.com/sites/default/files/uploads/histogram_3.png (Histogram of sentiment for unique pages)
[26]: https://opensource.com/sites/default/files/uploads/entities_2.png (Top entities for example site)
[27]: https://opensource.com/sites/default/files/uploads/salience_plots.png (Correlation between salience and best ranking position)
[28]: https://opensource.com/sites/default/files/uploads/themes.png (Themes based on entity name and entity type)
[29]: https://ahrefs.com/
[30]: https://opensource.com/sites/default/files/uploads/googleresults.png (Google search results)
[31]: https://www.blog.google/products/search/introducing-google-discover/
|
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13206-1.html)
[#]: subject: (9 favorite open source tools for Node.js developers)
[#]: via: (https://opensource.com/article/20/1/open-source-tools-nodejs)
[#]: author: (Hiren Dhadhuk https://opensource.com/users/hirendhadhuk)

9 个 Node.js 开发人员最喜欢的开源工具
======

> 在众多可用于简化 Node.js 开发的工具中,以下 9 种是最佳选择。

![](https://img.linux.net.cn/data/attachment/album/202103/15/233658i99wxvzin13o5319.png)

我最近在 [StackOverflow][2] 上读到了一项调查,该调查称超过 49% 的开发人员在其项目中使用了 Node.js。这一结果对我来说并不意外。

作为一个狂热的技术使用者,我可以肯定地说 Node.js 的引入引领了软件开发的新时代。现在,它是软件开发最受欢迎的技术之一,仅次于 JavaScript。

### Node.js 是什么,为什么如此受欢迎?

Node.js 是一个跨平台的开源运行环境,用于在浏览器之外执行 JavaScript 代码。它也是建立在 Chrome 的 JavaScript 运行时之上的首选运行时环境,主要用于构建快速、可扩展和高效的网络应用程序。

我记得当时我们要花费几个小时来协调前端和后端开发人员,他们分别编写不同的脚本。当 Node.js 出现后,所有这些都改变了。我相信,促使开发人员采用这项技术的是它的双向效率。

使用 Node.js,你可以让你的代码同时运行在客户端和服务器端,从而加快了整个开发过程。Node.js 弥合了前端和后端开发之间的差距,并使开发过程更加高效。

### Node.js 工具浪潮

对于(包括我在内的)49% 的开发人员来说,Node.js 处于前端和后端开发金字塔的顶端。有大量的 [Node.js 用例][3] 帮助我和我的团队在截止日期之内交付复杂的项目。幸运的是,Node.js 的日益普及也产生了一系列开源项目和工具,以帮助开发人员使用该环境。

近来,对使用 Node.js 构建的项目的需求突然增加。有时,我发现管理这些项目,并同时保持交付高质量项目的步伐非常具有挑战性。因此,我决定使用为 Node.js 开发人员提供的许多开源工具中一些最高效的,使某些方面的开发自动化。

根据我在 Node.js 方面的丰富经验,我使用了许多工具,这些工具对整个开发过程都非常有帮助:从简化编码过程,到监测,再到内容管理。

为了帮助我的 Node.js 开发同道,我整理了这个列表,其中包括我最喜欢的 9 个简化 Node.js 开发的开源工具。

### Webpack

[Webpack][4] 是一个容易使用的 JavaScript <ruby>模块捆绑程序<rt>module bundler</rt></ruby>,用于简化前端开发。它会检测具有依赖的模块,并将其转换为描述模块的静态<ruby>素材<rt>asset</rt></ruby>。

可以通过软件包管理器 npm 或 Yarn 安装该工具。

利用 npm 命令安装如下:

```
npm install --save-dev webpack
```

利用 Yarn 命令安装如下:

```
yarn add webpack --dev
```

Webpack 可以创建在运行时异步加载的单个捆绑包或多个素材链,而不必单独加载各个素材。使用 Webpack 可以快速高效地打包这些素材并提供服务,从而改善用户整体体验,并减少开发人员在管理加载时间方面的困难。

### Strapi

[Strapi][5] 是一个开源的<ruby>无界面<rt>headless</rt></ruby>内容管理系统(CMS)。无界面 CMS 是一种基础软件,可以管理内容而无需预先构建好的前端。它是一个使用 RESTful API 函数的纯后端系统。

可以通过软件包管理器 Yarn 或 npx 安装 Strapi。

利用 Yarn 命令安装如下:

```
yarn create strapi-app my-project --quickstart
```

利用 npx 命令安装如下:

```
npx create-strapi-app my-project --quickstart
```

Strapi 的目标是在任何设备上以结构化的方式获取和交付内容。CMS 可以使你轻松管理应用程序的内容,并确保它们是动态的,可以在任何设备上访问。

它提供了许多功能,包括文件上传、内置的电子邮件系统、JSON Web Token(JWT)验证和自动生成文档。我发现它非常方便,因为它简化了整个 CMS,并为我提供了编辑、创建或删除所有类型内容的完全自主权。

另外,通过 Strapi 构建的内容结构非常灵活,因为你可以创建和重用内容组和可定制的 API。

### Broccoli

[Broccoli][6] 是一个功能强大的构建工具,运行在 [ES6][7] 模块上。构建工具是一种软件,可让你将应用程序或网站中的所有各种素材(例如图像、CSS、JavaScript 等)组合成一种可分发的格式。Broccoli 将自己称为 “雄心勃勃的应用程序的素材管道”。

使用 Broccoli,你需要一个项目目录。有了项目目录后,可以使用以下命令通过 npm 安装 Broccoli:

```
npm install --save-dev broccoli
npm install --global broccoli-cli
```

你也可以使用 Yarn 进行安装。

使用该工具的最佳 Node.js 版本是当前的长期支持(LTS)版本,它可以帮你避免更新和重新安装过程中的麻烦。安装过程完成后,可以在 `Brocfile.js` 文件中包含构建规范。

在 Broccoli 中,抽象单位是“树”,树将文件和子目录存储在特定子目录中。因此,在构建之前,你必须对你希望的构建结果有一个具体的想法。

最好的是,Broccoli 带有用于开发的内置服务器,可让你将素材托管在本地 HTTP 服务器上。Broccoli 非常适合流畅的重新构建,因为其简洁的架构和灵活的生态系统可提高重建和编译速度。Broccoli 可让你井井有条,以节省时间并在开发过程中最大限度地提高生产力。

### Danger

[Danger][8] 是一个非常方便的开源工具,用于简化你的<ruby>拉取请求<rt>pull request</rt></ruby>(PR)检查。正如 Danger 库描述所说,该工具可通过管理 PR 检查来帮助 “正规化” 你的代码审查系统。Danger 可以与你的 CI 集成在一起,帮助你加快审核过程。

将 Danger 与你的项目集成是一个简单的逐步过程:你只需要包括 Danger 模块,并为每个项目创建一个 Danger 文件。不过,你需要先创建一个 Danger 帐户(通过 GitHub 或 Bitbucket 很容易做到),并为开源软件项目设置访问令牌。

可以通过 npm 或 Yarn 安装 Danger。要使用 Yarn,请运行 `yarn add danger --dev`,将其作为开发依赖添加到 `package.json` 中。

将 Danger 添加到 CI 后,你可以:

  * 高亮显示重要的创建工件
  * 通过强制链接到 Trello 和 Jira 之类的工具来管理 sprint
  * 强制生成更新日志
  * 使用描述性标签
  * 以及更多

例如,你可以设计一个定义团队文化并为代码审查和 PR 检查设定特定规则的系统。根据 Danger 提供的元数据及其广泛的插件生态系统,可以解决常见的<ruby>议题<rt>issue</rt></ruby>。

### Snyk

网络安全是开发人员的主要关注点。[Snyk][9] 是修复开源组件中漏洞的最著名工具之一。它最初是一个用于修复 Node.js 项目漏洞的项目,并且已经演变为可以检测并修复 Ruby、Java、Python 和 Scala 应用程序中的漏洞。Snyk 主要分四个阶段运行:

  * 查找有漏洞的依赖
  * 修复特定漏洞
  * 通过 PR 检查预防安全风险
  * 持续监控应用程序

Snyk 可以集成在项目的任何阶段,包括编码、CI/CD 和报告。我发现这对于测试 Node.js 项目非常有帮助,可以在测试或构建 npm 软件包时检查是否存在安全风险。你还可以在 GitHub 中为你的应用程序运行 PR 检查,以使你的项目更安全。Snyk 还提供了一系列集成,可用于监控依赖关系并解决特定问题。

要在本地计算机上运行 Snyk,可以通过 npm 安装它:

```
npm install -g snyk
```

### Migrat

[Migrat][10] 是一款使用纯文本的数据迁移工具,非常易于使用。它可在各种软件堆栈和进程中工作,从而更加实用。你可以使用一行简单的命令安装 Migrat:

```
$ npm install -g migrat
```

Migrat 并不需要特别的数据库引擎。它支持多节点环境,因为迁移可以在一个全局节点上运行,也可以在每个服务器上运行一次。Migrat 之所以方便,是因为它便于向每个迁移传递上下文。

你可以定义每个迁移的用途(例如,数据库集、连接、日志接口等)。此外,为了避免随意迁移,即多个服务器在全局范围内进行迁移,Migrat 可以在进程运行时进行全局锁定,从而使其只能在全局范围内运行一次。它还附带了一系列用于 SQL 数据库、Slack、HipChat 和 Datadog 仪表盘的插件。你可以将实时迁移状况发送到这些平台中的任何一个。

### Clinic.js

[Clinic.js][11] 是一个用于 Node.js 项目的开源监视工具。它结合了 Doctor、Bubbleprof 和 Flame 三种不同的工具,帮助你监控、检测和解决 Node.js 的性能问题。

你可以通过运行以下命令从 npm 安装 Clinic.js:

```
$ npm install clinic
```

你可以根据要监视项目的某个方面以及要生成的报告,选择使用 Clinic.js 所包含的三个工具中的哪一个:

  * Doctor 通过注入探针来提供详细的指标,并就项目的总体运行状况提供建议。
  * Bubbleprof 非常适合分析,并使用 `async_hooks` 生成指标。
  * Flame 非常适合发现代码中的热路径和瓶颈。
### PM2

监视是后端开发过程中最重要的方面之一。[PM2][12] 是一款 Node.js 的进程管理工具,可帮助开发人员监视项目的多个方面,例如日志、延迟和速度。该工具与 Linux、macOS 和 Windows 兼容,并支持从 Node.js 8.X 开始的所有 Node.js 版本。

你可以使用以下命令通过 npm 安装 PM2:

```
$ npm install pm2 -g
```

如果尚未安装 Node.js,则可以使用以下命令安装:

```
wget -qO- https://getpm2.com/install.sh | bash
```

安装完成后,使用以下命令启动应用程序:

```
$ pm2 start app.js
```

关于 PM2 最好的一点是可以在集群模式下运行应用程序:为多个 CPU 内核同时各生成一个进程。这样可以轻松增强应用程序性能并最大程度地提高可靠性。PM2 也非常适合更新工作,因为你可以使用 “热重载” 选项更新应用程序并以零停机时间重新加载应用程序。总体而言,它是为 Node.js 应用程序简化进程管理的好工具。

### Electrode

[Electrode][13] 是 Walmart Labs 的一个开源应用程序平台。该平台可帮助你以结构化方式构建大规模通用的 React/Node.js 应用程序。

Electrode 应用程序生成器使你可以构建专注于代码的灵活内核,提供一些出色的模块以向应用程序添加复杂功能,并附带了广泛的工具来优化应用程序的 Node.js 包。

你可以使用 npm 安装 Electrode:

```
npm install -g electrode-ignite xclap-cli
```

安装完成后,你可以使用 Ignite 启动应用程序,并深入研究 Electrode 应用程序生成器。

### 你最喜欢哪一个?

这些只是不断增长的开源工具列表中的一小部分,在使用 Node.js 时,这些工具可以在不同阶段派上用场。你最喜欢使用哪些开源 Node.js 工具?请在评论中分享你的建议。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/open-source-tools-nodejs

作者:[Hiren Dhadhuk][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/hirendhadhuk
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_hardware_purple.png?itok=3NdVoYhl (Tools illustration)
[2]: https://insights.stackoverflow.com/survey/2019#technology-_-other-frameworks-libraries-and-tools
[3]: https://www.simform.com/nodejs-use-case/
[4]: https://webpack.js.org/
[5]: https://strapi.io/
[6]: https://broccoli.build/
[7]: https://en.wikipedia.org/wiki/ECMAScript#6th_Edition_-_ECMAScript_2015
[8]: https://danger.systems/
[9]: https://snyk.io/
[10]: https://github.com/naturalatlas/migrat
[11]: https://clinicjs.org/
[12]: https://pm2.keymetrics.io/
[13]: https://www.electrode.io/
|
[#]: collector: "lujun9972"
[#]: translator: "wyxplus"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13215-1.html"
[#]: subject: "Managing processes on Linux with kill and killall"
[#]: via: "https://opensource.com/article/20/1/linux-kill-killall"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"

在 Linux 上使用 kill 和 killall 命令来管理进程
======

> 了解如何使用 ps、kill 和 killall 命令来终止进程并回收系统资源。

![](https://img.linux.net.cn/data/attachment/album/202103/18/230625q6g65gz6ugdk8ygr.jpg)

在 Linux 中,每个程序和<ruby>守护程序<rt>daemon</rt></ruby>都是一个“<ruby>进程<rt>process</rt></ruby>”。大多数进程代表一个正在运行的程序。而另外一些程序可以派生出其他进程,比如说它会侦听某些事件的发生,然后对其做出响应。并且每个进程都需要一定的内存和处理能力。你运行的进程越多,所需的内存和 CPU 使用周期就越多。在老式电脑(例如我使用了 7 年的笔记本电脑)或轻量级计算机(例如树莓派)上,如果你关注过后台运行的进程,就能充分利用你的系统。

你可以使用 `ps` 命令来查看正在运行的进程。你通常会使用 `ps` 命令的参数来显示出更多的输出信息。我喜欢使用 `-e` 参数来查看每个正在运行的进程,以及 `-f` 参数来获得每个进程的全部细节。以下是一些例子:

```
$ ps
PID TTY TIME CMD
88000 pts/0 00:00:00 bash
88052 pts/0 00:00:00 ps
88053 pts/0 00:00:00 head
```

```
$ ps -e | head
PID TTY TIME CMD
1 ? 00:00:50 systemd
2 ? 00:00:00 kthreadd
3 ? 00:00:00 rcu_gp
4 ? 00:00:00 rcu_par_gp
6 ? 00:00:02 kworker/0:0H-events_highpri
9 ? 00:00:00 mm_percpu_wq
10 ? 00:00:01 ksoftirqd/0
11 ? 00:00:12 rcu_sched
12 ? 00:00:00 migration/0
```

```
$ ps -ef | head
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 13:51 ? 00:00:50 /usr/lib/systemd/systemd --switched-root --system --deserialize 36
root 2 0 0 13:51 ? 00:00:00 [kthreadd]
root 3 2 0 13:51 ? 00:00:00 [rcu_gp]
root 4 2 0 13:51 ? 00:00:00 [rcu_par_gp]
root 6 2 0 13:51 ? 00:00:02 [kworker/0:0H-kblockd]
root 9 2 0 13:51 ? 00:00:00 [mm_percpu_wq]
root 10 2 0 13:51 ? 00:00:01 [ksoftirqd/0]
root 11 2 0 13:51 ? 00:00:12 [rcu_sched]
root 12 2 0 13:51 ? 00:00:00 [migration/0]
```

最后的例子显示最多的细节。在每一行,`UID`(用户 ID)显示了该进程的所有者。`PID`(进程 ID)代表每个进程的数字 ID,而 `PPID`(父进程 ID)表示其父进程的数字 ID。在任何 Unix 系统中,进程从 1 开始编号,1 号进程是内核启动后运行的第一个进程。在这里,`systemd` 是第一个进程,它催生了 `kthreadd`,而 `kthreadd` 还创建了其他进程,包括 `rcu_gp`、`rcu_par_gp` 等一系列进程。

### 使用 kill 命令来管理进程

系统会处理大多数后台进程,所以你不需要操心这些进程。你只需要关注那些你所运行的应用创建的进程。虽然许多应用一次只运行一个进程(如音乐播放器、终端模拟器或游戏等),但其他应用则可能创建后台进程。其中一些应用可能当你退出后还在后台运行,以便下次你使用的时候能快速启动。

当我运行 Chromium(作为谷歌 Chrome 浏览器所基于的开源项目)时,进程管理便成了问题。Chromium 在我的笔记本电脑上运行非常吃力,并产生了许多额外的进程。现在我仅打开五个选项卡,就能看到这些 Chromium 进程:

```
$ ps -ef | fgrep chromium
jhall 66221 [...] /usr/lib64/chromium-browser/chromium-browser [...]
jhall 66230 [...] /usr/lib64/chromium-browser/chromium-browser [...]
[...]
jhall 66861 [...] /usr/lib64/chromium-browser/chromium-browser [...]
jhall 67329 65132 0 15:45 pts/0 00:00:00 grep -F chromium
```

我已经省略一些行,其中有 20 个 Chromium 进程和一个正在搜索 “chromium” 字符串的 `grep` 进程。

```
$ ps -ef | fgrep chromium | wc -l
21
```

但是在我退出 Chromium 之后,这些进程仍旧运行。如何关闭它们并回收这些进程占用的内存和 CPU 呢?

`kill` 命令能让你终止一个进程。在最简单的情况下,你告诉 `kill` 命令终止你想终止的进程的 PID。例如,要终止这些进程,我需要对 20 个 Chromium 进程 ID 都执行 `kill` 命令。一种方法是用一条命令行获取 Chromium 的 PID,再用另一条命令对该列表运行 `kill`:

```
$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}'
66221
66230
66239
66257
66262
66283
66284
66285
66324
66337
66360
66370
66386
66402
66503
66539
66595
66734
66848
66861
69702

$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}' > /tmp/pids
$ kill $(cat /tmp/pids)
```

最后两行是关键。第一个命令行为 Chromium 浏览器生成一个进程 ID 列表。第二个命令行针对该进程 ID 列表运行 `kill` 命令。

### 介绍 killall 命令

一次终止多个进程有个更简单的方法,那就是使用 `killall` 命令。你或许可以根据名称猜到,`killall` 会终止所有与该名字匹配的进程。这意味着我们可以使用此命令来停止所有流氓 Chromium 进程。这很简单:

```
$ killall /usr/lib64/chromium-browser/chromium-browser
```

但是要小心使用 `killall`。该命令能够终止与你所给出名称相匹配的所有进程。这就是为什么我喜欢先使用 `ps -ef` 命令来检查我正在运行的进程,然后针对要停止的命令的准确路径运行 `killall`。

你也可以使用 `-i` 或 `--interactive` 参数,来让 `killall` 在停止每个进程之前提示你。

`killall` 还支持使用 `-o` 或 `--older-than` 参数来查找比特定时间更早的进程。例如,如果你发现了一组已经运行了好几天的恶意进程,这将会很有帮助。又或是,你可以查找比特定时间更晚的进程,例如你最近启动的失控进程。使用 `-y` 或 `--younger-than` 参数来查找这些进程。

### 其他管理进程的方式

进程管理是系统维护重要的一部分。在我作为 Unix 和 Linux 系统管理员的早期职业生涯中,杀死流氓作业的能力是保持系统正常运行的关键。如今,你可能不太需要亲手终止 Linux 上的流氓进程,但是知道 `kill` 和 `killall` 能够在最终出现问题时为你提供帮助。

你也能寻找其他方式来管理进程。在我这个案例中,我并不需要在退出浏览器后,使用 `kill` 或 `killall` 来终止后台 Chromium 进程。在 Chromium 中有个简单设置就可以进行控制:

![Chromium background processes setting][2]

不过,始终关注系统上正在运行哪些进程,并且在需要的时候进行干预是一个明智之举。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/linux-kill-killall

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[wyxplus](https://github.com/wyxplus)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 "Penguin with green background"
[2]: https://opensource.com/sites/default/files/uploads/chromium-settings-continue-running.png "Chromium background processes setting"
|
@ -1,56 +1,46 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (cooljelly)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13239-1.html)
|
||||
[#]: subject: (Multicloud, security integration drive massive SD-WAN adoption)
|
||||
[#]: via: (https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
多云融合和安全集成推动SD-WAN的大规模应用
|
||||
多云融合和安全集成推动 SD-WAN 的大规模应用
|
||||
======
|
||||
2022 年 SD-WAN 市场 40% 的同比增长主要来自于包括 Cisco、VMWare、Juniper 和 Arista 在内的网络供应商和包括 AWS、Microsoft Azure,Google Anthos 和 IBM RedHat 在内的服务提供商之间的紧密联系。
|
||||
[Gratisography][1] [(CC0)][2]
|
||||
|
||||
越来越多的云应用,以及越来越完善的网络安全性,可视化特性和可管理性,正以惊人的速度推动企业软件定义广域网 ([SD-WAN][3]) 部署。
|
||||
> 2022 年 SD-WAN 市场 40% 的同比增长主要来自于包括 Cisco、VMWare、Juniper 和 Arista 在内的网络供应商和包括 AWS、Microsoft Azure,Google Anthos 和 IBM RedHat 在内的服务提供商之间的紧密联系。
|
||||
|
||||
IDC(International Data Corporation,译者注)公司的网络基础架构副总裁 Rohit Mehra 表示,根据 IDC 的研究,过去一年中,特别是软件和基础设施即服务(SaaS 和 IaaS)产品推动了 SD-WAN 的实施。
|
||||
|
||||
**阅读更多关于边缘计算的文章**
|
||||
|
||||
* [边缘计算和物联网如何重塑数据中心][4]
|
||||
* [边缘计算的最佳实践][5]
|
||||
* [边缘计算如何提高物联网的安全性][6]
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/27/095154f0625f3k8455800x.jpg)
|
||||
|
||||
越来越多的云应用,以及越来越完善的网络安全性、可视化特性和可管理性,正以惊人的速度推动企业<ruby>软件定义广域网<rt>software-defined WAN</rt></ruby>([SD-WAN][3])的部署。
|
||||
|
||||
IDC(LCTT 译注:International Data Corporation)公司的网络基础架构副总裁 Rohit Mehra 表示,根据 IDC 的研究,过去一年中,特别是软件和基础设施即服务(SaaS 和 IaaS)产品推动了 SD-WAN 的实施。
|
||||
|
||||
例如,IDC 表示,根据其最近的客户调查结果,有 95% 的客户将在两年内使用 [SD-WAN][7] 技术,而 42% 的客户已经部署了它。IDC 还表示,到 2022 年,SD-WAN 基础设施市场将达到 45 亿美元,此后每年将以每年 40% 的速度增长。
|
||||
|
||||
”SD-WAN 的增长是一个广泛的趋势,很大程度上是由企业希望优化远程站点的云连接性的需求推动的。“ Mehra 说。
|
||||
“SD-WAN 的增长是一个广泛的趋势,很大程度上是由企业希望优化远程站点的云连接性的需求推动的。” Mehra 说。
|
||||
|
||||
思科最近撰文称,多云网络的发展正在促使许多企业改组其网络,以更好地使用 SD-WAN 技术。SD-WAN 对于采用云服务的企业至关重要,它是园区网、分支机构、[物联网][8]、[数据中心][9] 和云之间的连接中间件。思科公司表示,根据调查,平均每个思科的企业客户有 30 个付费的 SaaS 应用程序,而他们实际使用的 SaaS 应用会更多——在某些情况下甚至超过 100 种。
|
||||
|
||||
这种趋势的部分原因是由网络供应商(例如 Cisco、VMware、Juniper、Arista 等)(这里的网络供应商指的是提供硬件或软件并可按需组网的厂商,译者注)与服务提供商(例如 Amazon AWS、Microsoft Azure、Google Anthos 和 IBM RedHat 等)建立的关系推动的。
|
||||
这种趋势的部分原因是由网络供应商(例如 Cisco、VMware、Juniper、Arista 等)(LCTT 译注:这里的网络供应商指的是提供硬件或软件并可按需组网的厂商)与服务提供商(例如 Amazon AWS、Microsoft Azure、Google Anthos 和 IBM RedHat 等)建立的关系推动的。
|
||||
|
||||
去年 12 月,AWS为其云产品发布了关键服务,其中包括诸如 [AWS Transit Gateway][10] 等新集成技术的关键服务,这标志着 SD-WAN 与多云场景关系的日益重要。使用 AWS Transit Gateway 技术,客户可以将 AWS 中的 VPC(Virtual Private Cloud) 和其自有网络均连接到相同的网关。Aruba、Aviatrix Cisco、Citrix Systems、Silver Peak 和 Versa 已经宣布支持该技术,这将简化和增强这些公司的 SD-WAN 产品与 AWS 云服务的集成服务的性能和表现。
|
||||
|
||||
[][11]
|
||||
去年 12 月,AWS 为其云产品发布了关键服务,其中包括诸如 [AWS Transit Gateway][10] 等新集成技术的关键服务,这标志着 SD-WAN 与多云场景关系的日益重要。使用 AWS Transit Gateway 技术,客户可以将 AWS 中的 VPC(<ruby>虚拟私有云<rt>Virtual Private Cloud</rt></ruby>)和其自有网络均连接到相同的网关。Aruba、Aviatrix Cisco、Citrix Systems、Silver Peak 和 Versa 已经宣布支持该技术,这将简化和增强这些公司的 SD-WAN 产品与 AWS 云服务的集成服务的性能和表现。

Mehra 说，展望未来，对云应用的友好兼容和完善的性能监控等增值功能将是 SD-WAN 部署的关键部分。

随着 SD-WAN 与云的关系不断发展，SD-WAN 对集成安全功能的需求也在不断增长。

Mehra 说，SD-WAN 产品集成安全性的方式比以往单独打包的广域网安全软件或服务要好得多。SD-WAN 是一个更加敏捷的安全环境。SD-WAN 公认的主要组成部分包括安全功能、数据分析功能和广域网优化功能等，其中安全功能则是下一代 SD-WAN 解决方案的首要需求。

Mehra 说，企业将越来越少地关注仅解决某个具体问题的 SD-WAN 解决方案，而将青睐于能够解决更广泛的网络管理和安全需求的 SD-WAN 平台。他们将寻找可以与他们的 IT 基础设施（包括企业数据中心网络、企业园区局域网、[公有云][12] 资源等）集成更紧密的 SD-WAN 平台。他说，企业将寻求无缝融合的安全服务，并希望有其他各种功能的支持，例如可视化、数据分析和统一通信功能。

“随着客户不断将其基础设施与软件集成在一起，他们可以做更多的事情，例如根据其局域网和广域网上的用户、设备或应用程序的需求，实现一致的管理和安全策略，并最终获得更好的整体使用体验。”Mehra 说。

一个新兴趋势是 SD-WAN 产品包需要支持 [SD-branch][13] 技术。Mehra 说，超过 70% 的 IDC 受调查客户希望在明年使用 SD-Branch。在最近几周，[Juniper][14] 和 [Aruba][15] 公司已经优化了 SD-branch 产品，这一趋势预计将在今年持续下去。

SD-Branch 技术建立在 SD-WAN 的概念和支持的基础上，但更专注于满足分支机构中局域网的组网和管理需求。展望未来，SD-Branch 如何与其他技术集成，例如数据分析、音视频、统一通信等，将成为该技术的主要驱动力。

加入 [Facebook][16] 和 [LinkedIn][17] 上的 Network World 社区，以评论您最关注的主题。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3527194/multicloud-security-integratio

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[cooljelly](https://github.com/cooljelly)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/branches_branching_trees_bare_black_and_white_by_gratisography_cc0_via_pexels_1200x800-100763250-large.jpg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13210-1.html)
[#]: subject: (Get started with Bash programming)
[#]: via: (https://opensource.com/article/20/4/bash-programming-guide)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

如何入门 Bash 编程
======

> 了解如何在 Bash 中编写定制程序以自动执行重复性操作任务。

![](https://img.linux.net.cn/data/attachment/album/202103/17/110745ctcuzcnt0dv0toi7.jpg)

Unix 最初的希望之一是，让计算机的日常用户能够微调其计算机，以适应其独特的工作风格。几十年来，人们对计算机定制的期望已经降低，许多用户认为他们的应用程序和网站的集合就是他们的 “定制环境”。原因之一是许多操作系统的组件并不开源，普通用户无法使用其源代码。

但是对于 Linux 用户而言，定制程序是可以实现的，因为整个系统都是围绕着可通过终端使用的命令来进行的。终端不仅是用于快速命令或深入排除故障的界面，也是一个脚本环境，可以通过为你处理日常任务来减少你的工作量。

### 如何学习编程

如果你以前从未进行过任何编程，可能会面临两个不同的挑战：一个是了解怎样编写代码，另一个是了解要编写什么代码。你可以学习 _语法_，但是如果你不知道 _语言_ 中有哪些可用的关键字，你将无法继续。在实践中，要同时开始学习这两个概念，因为没有关键字的堆砌就无法学习语法。因此，最初你要使用基本命令和基本编程结构来编写简单的任务。一旦熟悉了基础知识，就可以探索更多编程语言的内容，从而使你的程序能够做越来越重要的事情。

在 [Bash][2] 中，你使用的大多数 _关键字_ 是 Linux 命令。_语法_ 就是 Bash。如果你已经频繁地使用过了 Bash，则向 Bash 编程的过渡相对容易。但是，如果你不曾使用过 Bash，你会很高兴地了解到它是一种为清晰和简单而构建的简单语言。

### 交互设计

有时，学习编程时最难搞清楚的事情就是计算机可以为你做些什么。显然，如果一台计算机可以自己完成你要做的所有操作，那么你就不必再碰计算机了。但是现实是，人类很重要。找到你的计算机可以帮助你的事情的关键是注意到你一周内需要重复执行的任务。计算机特别擅长于重复的任务。

但是，为了能告知计算机为你做某事，你必须知道怎么做。这就是 Bash 擅长的领域：交互式编程。在终端中执行一个动作时，你也在学习如何编写脚本。

例如，我曾经负责将大量 PDF 书籍转换为低墨和友好打印的版本。一种方法是在 PDF 编辑器中打开 PDF，从数百张图像（页面背景和纹理都算作图像）中选择每张图像，删除它们，然后将其保存到新的 PDF 中。仅仅是一本书，这样就需要半天时间。

我的第一个想法是学习如何编写 PDF 编辑器脚本，但是经过数天的研究，我找不到可以编写脚本的 PDF 编辑应用程序（除了非常丑陋的鼠标自动化技巧）。因此，我将注意力转向了从终端内找出完成任务的方法。这让我有了几个新发现，包括 GhostScript，它是 PostScript（PDF 所基于的打印机语言）的开源版本。通过使用 GhostScript 处理了几天的任务，我确认这是解决我的问题的方法。

编写基本的脚本来运行命令，只不过是复制我用来从 PDF 中删除图像的命令和选项，并将其粘贴到文本文件中而已。将这个文件作为脚本运行，大概也会产生同样的结果。

### 向 Bash 脚本传参数

在终端中运行命令与在 Shell 脚本中运行命令之间的区别在于前者是交互式的。在终端中，你可以随时进行调整。例如，如果我刚刚处理了 `example_1.pdf`，准备处理下一个文档时，只需要更改命令中的文件名即可。

Shell 脚本不是交互式的。实际上，Shell _脚本_ 存在的唯一原因是让你不必亲自参与。这就是为什么命令（以及运行它们的 Shell 脚本）会接受参数的原因。

在 Shell 脚本中，有一些预定义的可以反映脚本启动方式的变量。初始变量是 `$0`，它代表了启动脚本的命令。下一个变量是 `$1`，它表示传递给 Shell 脚本的第一个 “参数”。例如，在命令 `echo hello world` 中，命令 `echo` 为 `$0`，关键字 `hello` 为 `$1`，而 `world` 是 `$2`。

在 Shell 中交互如下所示：

```
$ echo hello world
hello world
```

在非交互式 Shell 脚本中，你 _可以_ 以非常直观的方式执行相同的操作。将此文本输入文本文件并将其另存为 `hello.sh`：

```
echo hello world
```

执行这个脚本：

```
$ bash hello.sh
hello world
```

同样可以，但是并没有利用脚本可以接受输入这一优势。将 `hello.sh` 更改为：

```
echo $1
```

用引号将两个参数组合在一起来运行脚本：

```
$ bash hello.sh "hello bash"
hello bash
```

对于我的 PDF 瘦身项目，我真的需要这种非交互性，因为每个 PDF 都花了几分钟来压缩。但是通过创建一个接受我的输入的脚本，我可以一次将几个 PDF 文件全部提交给脚本。该脚本按顺序处理了每个文件，这可能需要半小时或稍长一点时间，但是我可以用这半小时来完成其他任务。

### 流程控制

创建那些本质上只是你手动执行任务时所用命令的逐条副本的 Bash 脚本，是完全可以接受的。但是，可以通过控制信息流的方式来使脚本更强大。管理脚本对数据响应的常用方法是：

* `if`/`then` 选择结构语句
* `for` 循环结构语句
* `while` 循环结构语句
* `case` 语句

计算机不是智能的，但是它们擅长比较和分析数据。如果你在脚本中构建一些数据分析，则脚本会变得更加智能。例如，基本的 `hello.sh` 脚本无论有没有给它输入，都会运行并显示：

```
$ bash hello.sh foo
foo
$ bash hello.sh

$
```

如果在没有接收到输入的情况下提供帮助消息，将会更加容易使用。如下是一个 `if`/`then` 语句，如果你只是以基本的方式使用过 Bash，则你可能不知道 Bash 中存在这样的语句。但是编程的一部分是学习语言，通过一些研究，你将了解 `if`/`then` 语句：

```
if [ "$1" = "" ]; then
    echo "syntax: $0 WORD"
    echo "If you provide more than one word, enclose them in quotes."
else
    echo "$1"
fi
```

运行新版本的 `hello.sh` 输出如下：

```
$ bash hello.sh
syntax: hello.sh WORD
If you provide more than one word, enclose them in quotes.
$ bash hello.sh "hello world"
hello world
```
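
上面列出的流程控制结构中，`for` 循环同样常用。下面是一个假设性的示例（函数名 `print_each` 是为演示而虚构的），它展示了如何遍历传给脚本的每个参数；加引号的 `"$@"` 会把每个参数作为一个整体展开，即使参数中含有空格：

```shell
#!/usr/bin/env bash

# print_each 模拟脚本主体：用 for 循环依次输出收到的每个参数。
# 加引号的 "$@" 能保留带空格的参数，不会把它们拆开。
print_each() {
    for word in "$@"; do
        echo "$word"
    done
}

print_each foo "hello bash"
```

若把函数体保存成脚本，运行 `bash 脚本名 foo "hello bash"` 会分两行输出 `foo` 和 `hello bash`。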

### 利用脚本工作

无论你是从 PDF 文件中查找要删除的图像，还是要管理混乱的下载文件夹，抑或要创建和提供 Kubernetes 镜像，学习编写 Bash 脚本都需要先使用 Bash，然后学习如何将这些脚本从仅仅是一个命令列表变成响应输入的东西。通常这是一个发现的过程：你一定会找到新的 Linux 命令来执行你从未想象过可以通过文本命令执行的任务，你会发现 Bash 的新功能，使你的脚本可以适应所有你希望它们运行的不同方式。

学习这些技巧的一种方法是阅读其他人的脚本。了解人们如何在其系统上将机械重复的命令自动化。看看你熟悉的部分，并寻找那些陌生事物的更多信息。

另一种方法是下载我们的 [Bash 编程入门][3] 电子书。它向你介绍了特定于 Bash 的编程概念，并且通过学习其中的结构，你可以开始构建自己的命令。当然，它是免费的，并根据 [创作共用许可证][4] 进行下载和分发授权，所以今天就来获取它吧。

- [下载我们介绍用 Bash 编程的电子书！][3]

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/4/bash-programming-guide

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://opensource.com/resources/what-bash
[3]: https://opensource.com/downloads/bash-programming-guide
[4]: https://opensource.com/article/20/1/what-creative-commons
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13200-1.html)
[#]: subject: (6 best practices for managing Git repos)
[#]: via: (https://opensource.com/article/20/7/git-repos-best-practices)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

6 个最佳的 Git 仓库管理实践
======

> 抵制在 Git 中添加一些会增加管理难度的东西的冲动；这里有替代方法。

![](https://img.linux.net.cn/data/attachment/album/202103/13/225927c3mvm5x275vano5m.jpg)

能够访问源代码，使得分析应用程序的安全性并保障其安全成为可能。但是，如果没有人真正看过代码，问题就不会被发现，即使人们主动地看代码，通常也要看很多东西。幸运的是，GitHub 拥有一个活跃的安全团队，最近，他们 [发现了已提交到多个 Git 仓库中的特洛伊木马病毒][2]，它们甚至骗过了仓库的所有者。尽管我们无法控制其他人如何管理自己的仓库，但我们可以从他们的错误中吸取教训。为此，本文回顾了将文件添加到自己的仓库中的一些最佳实践。

### 了解你的仓库

![Git 仓库终端][3]

这可以说是安全 Git 仓库的头号规则。作为项目维护者，无论是你自己创建的还是采用别人的，你的工作是了解自己仓库中的内容。你可能无法记住代码库中的每一个文件，但是你需要了解你所管理的内容的基本组成部分。如果在几十个合并后出现一个游离的文件，你会很容易地发现它，因为你不知道它的用途，你需要检查它来刷新你的记忆。发生这种情况时，请查看该文件，并确保准确了解为什么它是必要的。

### 禁止二进制大文件

![终端中 Git 的二进制检查命令][4]

Git 是为文本而生的，无论是用纯文本编写的 C 或 Python 代码，还是 Java 源码，亦或是 JSON、YAML、XML、Markdown、HTML 或类似的文本。Git 对于二进制文件则不太理想。

两者之间的区别是：

```
$ cat hello.txt
This is plain text.
It's readable by humans and machines alike.
Git knows how to version this.

$ git diff hello.txt
diff --git a/hello.txt b/hello.txt
index f227cc3..0d85b44 100644
--- a/hello.txt
+++ b/hello.txt
@@ -1,2 +1,3 @@
 This is plain text.
+It's readable by humans and machines alike.
 Git knows how to version this.
```

和

```
$ git diff pixel.png
diff --git a/pixel.png b/pixel.png
index 563235a..7aab7bc 100644
Binary files a/pixel.png and b/pixel.png differ

$ cat pixel.png
�PNG
▒
IHDR7n�$gAMA��
�abKGD݊�tIME�

-2R��
IDA�c`�!�3%tEXtdate:create2020-06-11T11:45:04+12:00��r.%tEXtdate:modify2020-06-11T11:45:04+12:00��ʒIEND�B`�
```

二进制文件中的数据不能像纯文本一样被解析。因此，如果二进制文件发生任何更改，就必须重写整个内容；对 Git 而言，一个版本与另一个版本之间的“差异”就是整个文件，这会快速增加仓库大小。

更糟糕的是，Git 仓库维护者无法合理地审计二进制数据。这违反了头号规则：应该对仓库的内容了如指掌。

除了常用的 [POSIX][5] 工具之外，你还可以使用 `git diff` 检测二进制文件。当你尝试使用 `--numstat` 选项来比较二进制文件时，Git 返回空结果：

```
$ git diff --numstat /dev/null pixel.png | tee
-       -       /dev/null => pixel.png
$ git diff --numstat /dev/null file.txt | tee
5788    0       /dev/null => file.txt
```

如果你正在考虑将二进制大文件（BLOB）提交到仓库，请停下来先思考一下。如果它是二进制文件，那它是由什么生成的？是否有充分的理由不在构建时生成它们，而是将它们提交到仓库？如果你认为提交二进制数据是有意义的，请确保在 `README` 文件或类似文件中指明二进制文件的位置、为什么是二进制文件的原因以及更新它们的协议是什么。必须谨慎对其更新，因为你每提交一个二进制大文件的变化，它的存储空间实际上都会加倍。

### 让第三方库留在第三方

第三方库也不例外。尽管它是开源的众多优点之一，你可以不受限制地重用和重新分发不是你编写的代码，但是有很多充分的理由不把第三方库存储在你自己的仓库中。首先，除非你自己检查了所有代码（以及将来的合并），否则你不能为第三方库完全担保。其次，当你将第三方库复制到你的 Git 仓库中时，会将焦点从真正的上游源代码中分离出来。从技术上讲，对库有信心的人只对该库的主副本有把握，而不是对随机仓库的副本有把握。如果你需要锁定特定版本的库，请给开发者提供项目所需版本的发布 URL，或者使用 [Git 子模块][6]。

### 抵制盲目的 git add

![终端中的 Git 手动添加命令][7]

如果你的项目已编译，请抵制住使用 `git add .` 的冲动（其中 `.` 是当前目录或特定文件夹的路径），因为这是一种添加任何新东西的简单方法。如果你不是手动编译项目，而是使用 IDE 为你管理项目，这一点尤其重要。用 IDE 管理项目时，跟踪添加到仓库中的内容会非常困难，因此仅添加你实际编写的内容非常重要，而不是添加项目文件夹中出现的任何新对象。

如果你使用了 `git add .`，请在推送之前检查暂存区里的内容。如果在运行 `make clean` 或等效命令后，执行 `git status` 时在项目文件夹中看到一个陌生的对象，请找出它的来源，以及为什么它仍然在项目的目录中。它可能是一种不会在编译期间重新生成的罕见构建工件，因此在提交前请三思。

### 使用 Git ignore

![终端中的 Git ignore 命令][8]

许多为程序员打造的便利也非常杂乱。任何项目的典型项目目录，无论是编程的，还是艺术的或其他的，到处都是隐藏的文件、元数据和遗留的工件。你可以尝试忽略这些对象，但是 `git status` 中的提示越多，你错过某件事的可能性就越大。

你可以通过维护一个良好的 `gitignore` 文件来为你过滤掉这种噪音。因为这是使用 Git 的用户的共同要求，所以有一些入门级的 `gitignore` 文件。[Github.com/github/gitignore][9] 提供了几个专门创建的 `gitignore` 文件，你可以下载这些文件并将其放置到自己的项目中，[Gitlab.com][10] 在几年前就将 `gitignore` 模板集成到了仓库创建工作流程中。使用这些模板来帮助你为项目创建适合的 `gitignore` 策略并遵守它。
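
作为上述模板的一个假设性缩影，下面的片段为一个 Python 项目生成一个最小的 `gitignore` 文件（条目取自常见模板，并非权威清单，应按你的项目调整）：

```shell
# 生成一个最小的 .gitignore（示例条目，非权威模板）
cat > .gitignore <<'EOF'
# Python 编译产物与缓存
__pycache__/
*.pyc
build/
dist/
# 编辑器与系统残留文件
.idea/
.DS_Store
EOF

cat .gitignore
```

有了它之后，`git status` 将不再把这些文件列为未跟踪对象，输出也就干净多了。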

### 查看合并请求

![Git 合并请求][11]

当你通过电子邮件收到一个合并/拉取请求或补丁文件时，不要只是为了确保它能正常工作而进行测试。你的工作是阅读进入代码库的新代码，并了解其是如何产生结果的。如果你不同意这个实现，或者更糟的是，你不理解这个实现，请向提交该实现的人发送消息，并要求其进行说明。质疑那些希望成为版本库永久成员的代码并不是一种社交失误，但如果你不知道你把什么合并到用户使用的代码中，那就是违反了你和用户之间的社交契约。

### Git 责任

社区致力于开源软件良好的安全性。不要鼓励你的仓库中不良的 Git 实践，也不要忽视你克隆的仓库中的安全威胁。Git 功能强大，但它仍然只是一个计算机程序，因此要以人为本，确保每个人的安全。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/git-repos-best-practices

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
[2]: https://securitylab.github.com/research/octopus-scanner-malware-open-source-supply-chain/
[3]: https://opensource.com/sites/default/files/uploads/git_repo.png (Git repository)
[4]: https://opensource.com/sites/default/files/uploads/git-binary-check.jpg (Git binary check)
[5]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[6]: https://git-scm.com/book/en/v2/Git-Tools-Submodules
[7]: https://opensource.com/sites/default/files/uploads/git-cola-manual-add.jpg (Git manual add)
[8]: https://opensource.com/sites/default/files/uploads/git-ignore.jpg (Git ignore)
[9]: https://github.com/github/gitignore
[10]: https://about.gitlab.com/releases/2016/05/22/gitlab-8-8-released
[11]: https://opensource.com/sites/default/files/uploads/git_merge_request.png (Git merge request)
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13212-1.html)
[#]: subject: (Improve your time management with Jupyter)
[#]: via: (https://opensource.com/article/20/9/calendar-jupyter)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

使用 Jupyter 改善你的时间管理
======

> 在 Jupyter 里使用 Python 来分析日历，以了解你是如何使用时间的。

![](https://img.linux.net.cn/data/attachment/album/202103/18/095530cxx6663ptypyzvmx.jpg)

[Python][2] 在探索数据方面具有令人难以置信的可扩展性。利用 [Pandas][3] 或 [Dask][4]，你可以将 [Jupyter][5] 扩展到大数据领域。但是小数据、个人资料、私人数据呢？

JupyterLab 和 Jupyter Notebook 为我提供了一个绝佳的环境，可以让我审视我的笔记本电脑生活。

我的探索是基于以下事实：我使用的几乎每个服务都有一个 Web API。我使用了诸多此类服务：待办事项列表、时间跟踪器、习惯跟踪器等。还有一个几乎每个人都会使用到：_日历_。相同的思路也可以应用于其他服务，但是日历具有一个很酷的功能：几乎所有 Web 日历都支持的开放标准 —— CalDAV。

### 在 Jupyter 中使用 Python 解析日历

大多数日历提供了导出为 CalDAV 格式的方法。你可能需要某种身份验证才能访问这些私有数据。按照你的服务说明进行操作即可。如何获得凭据取决于你的服务，但是最终，你应该能够将这些凭据存储在文件中。我将我的凭据存储在主目录下的一个名为 `.caldav` 的文件中：

```
import os
with open(os.path.expanduser("~/.caldav")) as fpin:
    username, password = fpin.read().split()
```

切勿将用户名和密码直接放在 Jupyter 的笔记本中！它们可能会很容易因 `git push` 的错误而导致泄漏。

下一步是使用方便的 PyPI [caldav][6] 库。我找到了我的电子邮件服务的 CalDAV 服务器（你的可能有所不同）：

```
import caldav
client = caldav.DAVClient(url="https://caldav.fastmail.com/dav/", username=username, password=password)
```

CalDAV 有一个称为 `principal`（<ruby>主体<rt>principal</rt></ruby>）的概念。它是什么并不重要，只要知道它是你用来访问日历的东西就行了：

```
principal = client.principal()
calendars = principal.calendars()
```

从字面上讲，日历就是关于时间的。访问事件之前，你需要确定一个时间范围。默认一星期就好：

```
from dateutil import tz
import datetime
now = datetime.datetime.now(tz.tzutc())
since = now - datetime.timedelta(days=7)
```

大多数人使用的日历不止一个，并且希望所有事件都在一起出现。`itertools.chain.from_iterable` 方法使这一过程变得简单：

```
import itertools

raw_events = list(
    itertools.chain.from_iterable(
        calendar.date_search(start=since, end=now, expand=True)
        for calendar in calendars
    )
)
```

以 API 返回的原始格式将所有事件读入内存，是一种重要的实践。这意味着在调整解析、分析和显示代码时，无需返回到 API 服务刷新数据。

但 “原始” 真的是原始，事件是以特定格式的字符串出现的：

```
print(raw_events[12].data)
```

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CyrusIMAP.org/Cyrus
 3.3.0-232-g4bdb081-fm-20200825.002-g4bdb081a//EN
BEGIN:VEVENT
DTEND:20200825T230000Z
DTSTAMP:20200825T181915Z
DTSTART:20200825T220000Z
SUMMARY:Busy
UID:
 1302728i-040000008200E00074C5B7101A82E00800000000D939773EA578D601000000000
 000000010000000CD71CC3393651B419E9458134FE840F5
END:VEVENT
END:VCALENDAR
```

幸运的是，PyPI 上的另一个辅助库 [vobject][7] 可以再次解围：

```
import io
import vobject

def parse_event(raw_event):
    data = raw_event.data
    parsed = vobject.readOne(io.StringIO(data))
    contents = parsed.vevent.contents
    return contents
```

```
parse_event(raw_events[12])
```

```
{'dtend': [<DTEND{}2020-08-25 23:00:00+00:00>],
 'dtstamp': [<DTSTAMP{}2020-08-25 18:19:15+00:00>],
 'dtstart': [<DTSTART{}2020-08-25 22:00:00+00:00>],
 'summary': [<SUMMARY{}Busy>],
 'uid': [<UID{}1302728i-040000008200E00074C5B7101A82E00800000000D939773EA578D601000000000000000010000000CD71CC3393651B419E9458134FE840F5>]}
```

好吧，至少好一点了。

仍有一些工作要做，将其转换为合理的 Python 对象。第一步是 _拥有_ 一个合理的 Python 对象。[attrs][8] 库提供了一个不错的开始：

```
from __future__ import annotations

import datetime
from typing import Any

import attr

@attr.s(auto_attribs=True, frozen=True)
class Event:
    start: datetime.datetime
    end: datetime.datetime
    timezone: Any
    summary: str
```

是时候编写转换代码了！

第一个抽象从解析后的字典中获取值，不需要所有的装饰：

```
def get_piece(contents, name):
    return contents[name][0].value

get_piece(_, "dtstart")
datetime.datetime(2020, 8, 25, 22, 0, tzinfo=tzutc())
```

日历事件总有一个“开始”，而结束时刻可能由一个“结束”时间或一个“持续时间”给出。一些谨慎的解析逻辑可以将两者协调为同一个 Python 对象：

```
def from_calendar_event_and_timezone(event, timezone):
    contents = parse_event(event)
    start = get_piece(contents, "dtstart")
    summary = get_piece(contents, "summary")
    try:
        end = get_piece(contents, "dtend")
    except KeyError:
        end = start + get_piece(contents, "duration")
    return Event(start=start, end=end, summary=summary, timezone=timezone)
```

将事件放在 _本地_ 时区而不是 UTC 中很有用，因此使用本地时区：

```
my_timezone = tz.gettz()
from_calendar_event_and_timezone(raw_events[12], my_timezone)
Event(start=datetime.datetime(2020, 8, 25, 22, 0, tzinfo=tzutc()), end=datetime.datetime(2020, 8, 25, 23, 0, tzinfo=tzutc()), timezone=tzfile('/etc/localtime'), summary='Busy')
```

既然事件是真实的 Python 对象，那么它们实际上应该具有附加信息。幸运的是，可以将方法添加到类中。

但是要弄清楚哪个事件发生在哪一天不是很直接。你需要在 _本地_ 时区中选择一天：

```
def day(self):
    offset = self.timezone.utcoffset(self.start)
    fixed = self.start + offset
    return fixed.date()

Event.day = property(day)
```

```
print(_.day)
2020-08-25
```

事件在内部始终是以“开始”/“结束”的方式表示的，但是持续时间是有用的属性。持续时间也可以添加到现有类中：

```
def duration(self):
    return self.end - self.start

Event.duration = property(duration)
```

```
print(_.duration)
1:00:00
```

现在是时候将所有事件转换为有用的 Python 对象了：

```
all_events = [from_calendar_event_and_timezone(raw_event, my_timezone)
              for raw_event in raw_events]
```

全天事件是一种特例，可能对分析生活没有多大用处。现在，你可以忽略它们：

```
# ignore all-day events
all_events = [event for event in all_events if not type(event.start) == datetime.date]
```

事件具有自然顺序 —— 知道哪个事件最先发生可能有助于分析：

```
all_events.sort(key=lambda ev: ev.start)
```

现在，事件已排序，可以将它们按天分组：

```
import collections
events_by_day = collections.defaultdict(list)
for event in all_events:
    events_by_day[event.day].append(event)
```

有了这些，你就有了作为 Python 对象的带有日期、持续时间和序列的日历事件。

### 用 Python 报告你的生活

现在是时候编写报告代码了！带有适当的标题、列表、重要内容以粗体显示等醒目的格式是很有意义的。

这就需要一些 HTML 和 HTML 模板。我喜欢使用 [Chameleon][9]：

```
template_content = """
<html><body>
<div tal:repeat="item items">
<h2 tal:content="item[0]">Day</h2>
<ul>
    <li tal:repeat="event item[1]"><span tal:replace="event">Thing</span></li>
</ul>
</div>
</body></html>"""
```

Chameleon 的一个很酷的功能是使用它的 `html` 方法渲染对象。我将以两种方式使用它：

* 摘要将以粗体显示
* 对于大多数活动，我都会删除摘要（因为这是我的个人信息）

```
def __html__(self):
    offset = my_timezone.utcoffset(self.start)
    fixed = self.start + offset
    start_str = str(fixed).split("+")[0]
    summary = self.summary
    if summary != "Busy":
        summary = "<REDACTED>"
    return f"<b>{summary[:30]}</b> -- {start_str} ({self.duration})"

Event.__html__ = __html__
```

为了简洁起见，这里只渲染报告中某一天的内容：

```
import chameleon
from IPython.display import HTML
template = chameleon.PageTemplate(template_content)
html = template(items=itertools.islice(events_by_day.items(), 3, 4))
HTML(html)
```

渲染后，它将看起来像这样：

**2020-08-25**

- **\<REDACTED>** -- 2020-08-25 08:30:00 (0:45:00)
- **\<REDACTED>** -- 2020-08-25 10:00:00 (1:00:00)
- **\<REDACTED>** -- 2020-08-25 11:30:00 (0:30:00)
- **\<REDACTED>** -- 2020-08-25 13:00:00 (0:25:00)
- Busy -- 2020-08-25 15:00:00 (1:00:00)
- **\<REDACTED>** -- 2020-08-25 15:00:00 (1:00:00)
- **\<REDACTED>** -- 2020-08-25 19:00:00 (1:00:00)
- **\<REDACTED>** -- 2020-08-25 19:00:12 (1:00:00)
### Python 和 Jupyter 的无穷选择

解析、分析并报告各种 Web 服务所拥有的数据，这只是你可以做的事情的冰山一角。

为什么不对你最喜欢的服务试试呢？

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/9/calendar-jupyter

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT (Calendar close up snapshot)
[2]: https://opensource.com/resources/python
[3]: https://pandas.pydata.org/
[4]: https://dask.org/
[5]: https://jupyter.org/
[6]: https://pypi.org/project/caldav/
[7]: https://pypi.org/project/vobject/
[8]: https://opensource.com/article/19/5/python-attrs
[9]: https://chameleon.readthedocs.io/en/latest/
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13209-1.html)
[#]: subject: (Turn your Raspberry Pi into a HiFi music system)
[#]: via: (https://opensource.com/article/21/1/raspberry-pi-hifi)
[#]: author: (Peter Czanik https://opensource.com/users/czanik)

把你的树莓派变成一个 HiFi 音乐系统
======

> 为你的朋友、家人、同事或其他任何拥有廉价发烧设备的人播放音乐。

![](https://img.linux.net.cn/data/attachment/album/202103/17/094819ad5vzy0kqwvlxeee.jpg)

在过去的 10 年里，我大部分时间都是远程工作，但当我走进办公室时，我坐在一个满是性格内向的同伴的房间里，他们很容易被环境噪音和谈话所干扰。我们发现，听音乐可以抑制办公室的噪音，让声音不那么扰人，用愉快的音乐提供一个愉快的工作环境。

起初，我们的一位同事带来了一些老式的有源电脑音箱，把它们连接到他的桌面电脑上，然后问我们想听什么。它可以工作，但音质不是很好，而且只有当他在办公室的时候才可以使用。接下来，我们又买了一对 Altec Lansing 音箱。音质有所改善，但没有什么灵活性。

不久之后，我们得到了一台通用 ARM 单板计算机（SBC），这意味着任何人都可以通过 Web 界面控制播放列表和音箱。但一块普通的 ARM 开发板意味着我们不能使用流行的音乐设备软件。由于非标准的内核，更新操作系统是一件很痛苦的事情，而且 Web 界面也经常出现故障。

当团队壮大并搬进更大的房间后，我们开始梦想着有更好的音箱和更容易处理软件和硬件组合的方法。

为了用一种相对便宜、灵活、音质好的方式解决我们的问题，我们用树莓派、音箱和开源软件开发了一个办公室 HiFi。

### HiFi 硬件

用一台专门的 PC 来播放背景音乐就有点过分了。它昂贵、嘈杂（除非是静音的，但那就更贵了），而且不环保。即使是最便宜的 ARM 板也能胜任这个工作，但从软件的角度来看，它们往往存在问题。树莓派还是比较便宜的，虽然不是标准的计算机，但在硬件和软件方面都有很好的支持。

接下来的问题是：用什么音箱。质量好的、有源的音箱很贵。无源音箱的成本较低，但需要一个功放，这需要为这套设备增加另一个盒子。它们还必须使用树莓派的音频输出；虽然可以工作，但并不是最好的，特别是当你已经在高质量的音箱和功放上投入资金的时候。

幸运的是，在数以千计的树莓派硬件扩展中，有内置数字模拟转换器（DAC）的功放。我们选择了 [HiFiBerry 的 Amp][2]。它在我们买来后不久就停产了（被采样率更好的 Amp+ 型号取代），但对于我们的目的来说，它已经足够好了。在开着空调的情况下，我想无论如何你也听不出 48kHz 或 192kHz 的 DAC 有什么不同。

音箱方面，我们选择了 [Audioengine P4][3]，是在某店家清仓大甩卖的时候买的，价格超低。它很容易让我们的办公室房间充满声音而不失真（声音还能传到房间之外，带着一些失真，隔壁的工程师往往不喜欢）。

### HiFi 软件

在我们旧的通用 ARM SBC 上，我们需要维护一个 Ubuntu，使用一个固定的、古老的、在软件包仓库之外的系统内核，这是有问题的。树莓派操作系统包括一个维护良好的内核包，使其成为一个稳定且易于更新的基础系统，但它仍然需要我们定期更新 Python 脚本来访问 Spotify 和 YouTube。对于我们的目的来说，维护成本有点过高。

幸运的是，使用树莓派作为基础意味着有许多现成的软件设备可用。

我们选择了 [Volumio][4]，这是一个将树莓派变成音乐播放设备的开源项目。安装是一个简单的*一步步完成*的过程。无需辛辛苦苦地安装和维护操作系统，也不用定期调试破损的 Python 代码，安装和升级是完全无痛的。配置 HiFiBerry 功放不需要编辑任何配置文件，你只需要从列表中选择即可。当然，习惯新的用户界面需要一定的时间，但稳定性和维护的便捷性让这个改变是值得的。

![Volumio interface][5]

### 播放音乐并体验

虽然大流行期间我们都在家里办公，不过我把办公室的 HiFi 安装在我的家庭办公室里，这意味着我可以自由支配它的运行。一个不断变化的用户界面对于一个团队来说会很痛苦，但对于一个有研发背景的人来说，自己玩一个设备，变化是很有趣的。

我不是一个程序员，但我有很强的 Linux 和 Unix 系统管理背景。这意味着，虽然我觉得修复坏掉的 Python 代码很烦人，但 Volumio 对我来说却足够完美、足够无聊（这是一个很好的“问题”）。幸运的是，在树莓派上播放音乐还有很多其他的可能性。

作为一个终端狂人（我甚至从终端窗口启动 LibreOffice），我主要使用 Music on Console（[MOC][6]）来播放我的网络存储（NAS）中的音乐。我有几百张 CD，都转换成了 [FLAC][7] 文件。而且我还从 [BandCamp][8] 或 [Society of Sound][9] 等渠道购买了许多数字专辑。

另一个选择是 [音乐播放器守护进程（MPD）][10]。把它运行在树莓派上，我可以通过网络使用 Linux 和 Android 的众多客户端之一与我的音乐进行远程交互。

### 音乐不停歇

正如你所看到的，创建一个廉价的 HiFi 系统在软件和硬件方面几乎是无限可能的。我们的解决方案只是众多解决方案中的一个，我希望它能启发你建立适合你环境的东西。

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/1/raspberry-pi-hifi

作者:[Peter Czanik][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/czanik
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hi-fi-stereo-vintage.png?itok=KYY3YQwE (HiFi vintage stereo)
[2]: https://www.hifiberry.com/products/amp/
[3]: https://audioengineusa.com/shop/passivespeakers/p4-passive-speakers/
[4]: https://volumio.org/
[5]: https://opensource.com/sites/default/files/uploads/volumeio.png (Volumio interface)
[6]: https://en.wikipedia.org/wiki/Music_on_Console
[7]: https://xiph.org/flac/
[8]: https://bandcamp.com/
[9]: https://realworldrecords.com/news/society-of-sound-statement/
[10]: https://www.musicpd.org/
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13240-1.html)
[#]: subject: (Convert your Windows install into a VM on Linux)
[#]: via: (https://opensource.com/article/21/1/virtualbox-windows-linux)
[#]: author: (David Both https://opensource.com/users/dboth)

在 Linux 上将你的 Windows 系统转换为虚拟机
======

> 下面是我如何配置 VirtualBox 虚拟机以在我的 Linux 工作站上使用物理的 Windows 操作系统。

![](https://img.linux.net.cn/data/attachment/album/202103/27/105053kyd66r1cpr1s2vz2.jpg)

我经常使用 VirtualBox 来创建虚拟机，以测试新版本的 Fedora、新的应用程序和很多管理工具，比如 Ansible。我甚至使用 VirtualBox 测试过创建一个 Windows 访客主机。

我从来没有在我的任何一台个人电脑上使用 Windows 作为我的主要操作系统，最多只是在虚拟机中执行一些用 Linux 无法完成的冷门任务。不过，我确实为一个需要使用 Windows 下的财务程序的组织做志愿者。这个程序运行在办公室经理的电脑上，使用的是预装的 Windows 10 Pro。

这个财务应用程序并不特别，[一个更好的 Linux 程序][2] 可以很容易地取代它，但我发现许多会计和财务主管极不愿意做出改变，所以我还没能说服我们组织中的人迁移。

这一系列的情况，加上最近的安全恐慌，使得我非常希望将运行 Windows 的主机转换为 Fedora，并在该主机上的虚拟机中运行 Windows 和会计程序。

重要的是要明白，我出于多种原因极度不喜欢 Windows。主要原因是，我不愿意为了在新的虚拟机上安装它而再花钱购买一个 Windows 许可证（Windows 10 Pro 大约需要 200 美元）。此外，Windows 10 在新系统上设置时或安装后需要足够多的信息，如果微软的数据库被攻破，破解者就可以窃取一个人的身份。任何人都不应该为了注册软件而需要提供自己的姓名、电话号码和出生日期。

### 开始

这台实体电脑已经在主板上唯一可用的 m.2 插槽中安装了一个 240GB 的 NVMe m.2 的 SSD 存储设备。我决定在主机上安装一个新的 SATA SSD，并将现有的带有 Windows 的 SSD 作为 Windows 虚拟机的存储设备。金士顿在其网站上对各种 SSD 设备、外形尺寸和接口做了很好的概述。

这种方法意味着我不需要重新安装 Windows 或任何现有的应用软件。这也意味着，在这台电脑上工作的办公室经理将使用 Linux 进行所有正常的活动，如电子邮件、访问 Web、使用 LibreOffice 创建文档和电子表格。这种方法增加了主机的安全性。唯一会使用 Windows 虚拟机的时间是运行会计程序。

### 先备份

在做其他事情之前，我创建了整个 NVMe 存储设备的备份 ISO 镜像。我在 500GB 外置 USB 存储盘上创建了一个分区，在其上创建了一个 ext4 文件系统，然后将该分区挂载到 `/mnt`。我使用 `dd` 命令来创建镜像。
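
`dd` 的用法大致如下。这是一个假设性的草图：真实场景中 `if=` 指向整个 NVMe 设备（例如 `/dev/nvme0n1`，具体设备名是我的假设）、`of=` 指向挂载在 `/mnt` 的外置盘上的镜像文件，并且需要 root 权限；为了能直接运行，这里用一个小文件演示同样的命令形式：

```shell
# 生成 1MiB 的演示“磁盘”文件，代替真实的 /dev/nvme0n1
dd if=/dev/zero of=disk-demo.bin bs=1024 count=1024 2>/dev/null

# 逐字节复制为镜像；bs=4M 减少系统调用次数，status=progress 显示进度。
# 真实命令形如：dd if=/dev/nvme0n1 of=/mnt/windows-backup.img bs=4M status=progress
dd if=disk-demo.bin of=backup-demo.img bs=4M status=progress
```

复制完成后，镜像与源设备逐字节相同，可以用 `cmp` 验证。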

我在主机中安装了新的 500GB SATA SSD，并从<ruby>临场<rt>live</rt></ruby> USB 上安装了 Fedora 32 Xfce <ruby>偏好版<rt>spin</rt></ruby>。在安装后的初次重启时，在 GRUB2 引导菜单上，Linux 和 Windows 操作系统都是可用的。此时，主机可以在 Linux 和 Windows 之间进行双启动。

### 在网上寻找帮助

现在我需要一些关于创建一个使用物理硬盘或 SSD 作为其存储设备的虚拟机的信息。我很快就在 VirtualBox 文档和互联网上发现了很多关于如何做到这一点的信息。虽然 VirtualBox 文档初步帮助了我，但它并不完整，遗漏了一些关键信息。我在互联网上找到的大多数其他信息也很不完整。

在我们的记者 Joshua Holm 的帮助下，我得以突破这些残缺的信息，并以一个可重复的流程来完成这项工作。

### 让它发挥作用

这个过程其实相当简单，虽然需要一个玄妙的技巧才能实现。当我准备好这一步的时候，Windows 和 Linux 操作系统已经就位了。

首先，我在 Linux 主机上安装了最新版本的 VirtualBox。VirtualBox 可以从许多发行版的软件仓库中安装，也可以直接从 Oracle VirtualBox 仓库中安装，或者从 VirtualBox 网站上下载所需的包文件并在本地安装。我选择下载 AMD64 版本，它实际上是一个安装程序而不是一个软件包。我使用这个版本来规避一个与这个特定项目无关的问题。

安装过程总是在 `/etc/group` 中创建一个 `vboxusers` 组。我把打算运行这个虚拟机的用户添加到 `/etc/group` 中的 `vboxusers` 和 `disk` 组。将相同的用户添加到 `disk` 组是很重要的，因为 VirtualBox 是以启动它的用户身份运行的，而且还需要直接访问 `/dev/sdx` 特殊设备文件才能在这种情况下工作。将用户添加到 `disk` 组可以提供这种级别的访问权限，否则他们就不会有这种权限。

然后，我创建了一个目录来存储虚拟机，并赋予它 `root.vboxusers` 的所有权和 `775` 的权限。我使用 `/vms` 作为该目录，但可以是任何你想要的目录。默认情况下，VirtualBox 会在创建虚拟机的用户的子目录中创建新的虚拟机，这将使得多个用户无法在不产生巨大安全漏洞的情况下共享对虚拟机的访问。将虚拟机目录放置在一个可访问的位置，就可以共享虚拟机。
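
上述两个步骤可以概括为下面的草图（用户名 `bob` 是虚构的；加组和 `chown` 需要 root 权限，因此以注释给出，目录部分用一个可写的演示路径代替 `/vms` 以便直接运行）：

```shell
# 把目标用户加入 vboxusers 和 disk 组（需要 root 权限，bob 为虚构用户名）：
#   usermod -aG vboxusers bob
#   usermod -aG disk bob

# 创建共享的虚拟机目录并设置 775 权限；实际路径是 /vms，
# 这里用当前目录下的演示路径以便无特权运行
VMS_DIR=./vms-demo
mkdir -p "$VMS_DIR"
chmod 775 "$VMS_DIR"
#   chown root:vboxusers /vms   # 同样需要 root 权限

ls -ld "$VMS_DIR"
```

注意 `usermod -aG` 中的 `-a` 是追加组成员身份；漏掉它会把用户从其他附加组中移除。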
|
||||
|
||||
我以非 root 用户的身份启动 VirtualBox 管理器。然后,我使用 VirtualBox 的“<ruby>偏好<rt>Preferences</rt></ruby> => <ruby>一般<rt>General</rt></ruby>”菜单将“<ruby>默认机器文件夹<rt>Default Machine Folder</rt></ruby>”设置为 `/vms` 目录。
|
||||
|
||||
我创建的虚拟机没有虚拟磁盘。“<ruby>类型<rt>Type<rt></ruby>” 应该是 `Windows`,“<ruby>版本<rt>Version</rt></ruby>”应该设置为 `Windows 10 64-bit`。为虚拟机设置一个合理的内存量,但只要虚拟机处于关闭状态,以后可以更改。在安装的“<ruby>硬盘<rt>Hard disk</rt></ruby>”页面,我选择了 “<ruby>不要添加虚拟硬盘<rt>Do not add a virtual hard disk</rt></ruby>”,点击“<ruby>创建<rt>Create</rt></ruby>”。新的虚拟机出现在VirtualBox 管理器窗口中。这个过程也创建了 `/vms/Test1` 目录。
|
||||
|
||||
我使用“<ruby>高级<rt>Advanced</rt></ruby>”菜单在一个页面上设置了所有的配置,如图 1 所示。“<ruby>向导模式<rt>Guided Mode</rt></ruby>”可以获得相同的信息,但需要更多的点击,以通过一个窗口来进行每个配置项目。它确实提供了更多的帮助内容,但我并不需要。
|
||||
|
||||
![VirtualBox 对话框:创建新的虚拟机,但不添加硬盘][3]
|
||||
|
||||
*图 1:创建一个新的虚拟机,但不要添加硬盘。*
|
||||
|
||||
然后,我需要知道 Linux 给原始 Windows 硬盘分配了哪个设备。在终端会话中以 root 身份使用 `lshw` 命令来发现 Windows 磁盘的设备分配情况。在本例中,代表整个存储设备的设备是 `/dev/sdb`。
|
||||
|
||||
```
|
||||
# lshw -short -class disk,volume
|
||||
H/W path Device Class Description
|
||||
=========================================================
|
||||
/0/100/17/0 /dev/sda disk 500GB CT500MX500SSD1
|
||||
/0/100/17/0/1 volume 2047MiB Windows FAT volume
|
||||
/0/100/17/0/2 /dev/sda2 volume 4GiB EXT4 volume
|
||||
/0/100/17/0/3 /dev/sda3 volume 459GiB LVM Physical Volume
|
||||
/0/100/17/1 /dev/cdrom disk DVD+-RW DU-8A5LH
|
||||
/0/100/17/0.0.0 /dev/sdb disk 256GB TOSHIBA KSG60ZMV
|
||||
/0/100/17/0.0.0/1 /dev/sdb1 volume 649MiB Windows FAT volume
|
||||
/0/100/17/0.0.0/2 /dev/sdb2 volume 127MiB reserved partition
|
||||
/0/100/17/0.0.0/3 /dev/sdb3 volume 236GiB Windows NTFS volume
|
||||
/0/100/17/0.0.0/4 /dev/sdb4 volume 989MiB Windows NTFS volume
|
||||
[root@office1 etc]#
|
||||
```
|
||||
|
||||
VirtualBox 不需要把虚拟存储设备放在 `/vms/Test1` 目录中,而是需要有一种方法来识别要从其启动的物理硬盘。这种识别是通过创建一个 `*.vmdk` 文件来实现的,该文件指向将作为虚拟机存储设备的原始物理磁盘。作为非 root 用户,我创建了一个 vmdk 文件,指向整个 Windows 设备 `/dev/sdb`。
|
||||
|
||||
```
|
||||
$ VBoxManage internalcommands createrawvmdk -filename /vms/Test1/Test1.vmdk -rawdisk /dev/sdb
|
||||
RAW host disk access VMDK file /vms/Test1/Test1.vmdk created successfully.
|
||||
```
|
||||
|
||||
然后,我使用 VirtualBox 管理器 “<ruby>文件<rt>File</rt></ruby> => <ruby>虚拟介质管理器<rt>Virtual Media Manager</rt></ruby>” 对话框将 vmdk 磁盘添加到可用硬盘中。我点击了“<ruby>添加<rt>Add</rt></ruby>”,文件管理对话框中显示了默认的 `/vms` 位置。我选择了 `Test1` 目录,然后选择了 `Test1.vmdk` 文件。然后我点击“<ruby>打开<rt>Open</rt></ruby>”,`Test1.vmdk` 文件就显示在可用硬盘列表中。我选择了它,然后点击“<ruby>关闭<rt>Close</rt></ruby>”。
|
||||
|
||||
下一步就是将这个 vmdk 磁盘添加到我们的虚拟机的存储设备中。在 “Test1 VM” 的设置菜单中,我选择了 “<ruby>存储<rt>Storage</rt></ruby>”,并点击了添加硬盘的图标。这时打开了一个对话框,在一个名为“<ruby>未连接<rt>Not attached</rt></ruby>”的列表中显示了 `Test1.vmdk` 虚拟磁盘文件。我选择了这个文件,并点击了“<ruby>选择<rt>Choose</rt></ruby>”按钮。这个设备现在显示在连接到 “Test1 VM” 的存储设备列表中。这个虚拟机上唯一的其他存储设备是一个空的 CD/DVD-ROM 驱动器。
|
||||
|
||||
我点击了“<ruby>确定<rt>OK</rt></ruby>”,完成了将此设备添加到虚拟机中。
|
||||
|
||||
在新的虚拟机能正常工作之前,还有一项设置需要配置。在 VirtualBox 管理器中打开 “Test1 VM” 的设置对话框,我导航到 “<ruby>系统<rt>System</rt></ruby> => <ruby>主板<rt>Motherboard</rt></ruby>”页面,并勾选了 “<ruby>启用 EFI<rt>Enable EFI</rt></ruby>”。如果你不这样做,当你试图启动这个虚拟机时,VirtualBox 会报错,说它无法找到可启动的介质。
|
||||
|
||||
现在,虚拟机从原始的 Windows 10 硬盘驱动器启动。然而,我无法登录,因为我在这个系统上没有一个常规账户,而且我也无法获得 Windows 管理员账户的密码。
|
||||
|
||||
### 解锁驱动器
|
||||
|
||||
不,本节并不是要破解硬盘的加密,而是要绕过众多 Windows 管理员账户之一的密码,而这些账户是不属于组织中某个人的。
|
||||
|
||||
尽管我可以启动 Windows 虚拟机,但我无法登录,因为我在该主机上没有账户,而向人们索要密码是一种可怕的安全漏洞。尽管如此,我还是需要登录这个虚拟机来安装 “VirtualBox Guest Additions”,它可以提供鼠标指针的无缝捕捉和释放,允许我将虚拟机调整到大于 1024x768 的大小,并在未来进行正常的维护。
|
||||
|
||||
这是一个使用 Linux 更改用户密码这一功能的完美案例。尽管我要访问的是前任管理员的账户,但在这种情况下,他已不再维护这个系统,我既无法得知他的密码,也猜不出他生成密码的套路。于是我直接清除了上一任系统管理员的密码。
|
||||
|
||||
有一个非常不错的开源软件工具,专门用于这个任务。在 Linux 主机上,我安装了 `chntpw`,它的意思大概是:“更改 NT 的密码”。
|
||||
|
||||
```
|
||||
# dnf -y install chntpw
|
||||
```
|
||||
|
||||
我关闭了虚拟机的电源,然后将 `/dev/sdb3` 分区挂载到 `/mnt` 上。我确定 `/dev/sdb3` 是正确的分区,因为它是我在之前执行 `lshw` 命令的输出中看到的第一个大的 NTFS 分区。一定不要在虚拟机运行时挂载该分区,那样会导致虚拟机存储设备上的数据严重损坏。请注意,在其他主机上分区可能有所不同。
|
||||
|
||||
导航到 `/mnt/Windows/System32/config` 目录。如果当前工作目录(PWD)不在这里,`chntpw` 实用程序就无法工作。请启动该程序。
|
||||
|
||||
```
|
||||
# chntpw -i SAM
|
||||
chntpw version 1.00 140201, (c) Petter N Hagen
|
||||
Hive <SAM> name (from header): <\SystemRoot\System32\Config\SAM>
|
||||
ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c <lh>
|
||||
File size 131072 [20000] bytes, containing 11 pages (+ 1 headerpage)
|
||||
Used for data: 367/44720 blocks/bytes, unused: 14/24560 blocks/bytes.
|
||||
|
||||
<>========<> chntpw Main Interactive Menu <>========<>
|
||||
|
||||
Loaded hives: <SAM>
|
||||
|
||||
1 - Edit user data and passwords
|
||||
2 - List groups
|
||||
- - -
|
||||
9 - Registry editor, now with full write support!
|
||||
q - Quit (you will be asked if there is something to save)
|
||||
|
||||
|
||||
What to do? [1] ->
|
||||
```
|
||||
|
||||
`chntpw` 命令使用 TUI(文本用户界面),它提供了一套菜单选项。当选择其中一个主要菜单项时,通常会显示一个次要菜单。按照明确的菜单名称,我首先选择了菜单项 `1`。
|
||||
|
||||
```
|
||||
What to do? [1] -> 1
|
||||
|
||||
===== chntpw Edit User Info & Passwords ====
|
||||
|
||||
| RID -|---------- Username ------------| Admin? |- Lock? --|
|
||||
| 01f4 | Administrator | ADMIN | dis/lock |
|
||||
| 03eb | john | ADMIN | dis/lock |
|
||||
| 01f7 | DefaultAccount | | dis/lock |
|
||||
| 01f5 | Guest | | dis/lock |
|
||||
| 01f8 | WDAGUtilityAccount | | dis/lock |
|
||||
|
||||
Please enter user number (RID) or 0 to exit: [3e9]
|
||||
```
|
||||
|
||||
接下来,我选择了我们的管理账户 `john`,在提示下输入 RID。这将显示用户的信息,并提供额外的菜单项来管理账户。
|
||||
|
||||
```
|
||||
Please enter user number (RID) or 0 to exit: [3e9] 03eb
|
||||
================= USER EDIT ====================
|
||||
|
||||
RID : 1003 [03eb]
|
||||
Username: john
|
||||
fullname:
|
||||
comment :
|
||||
homedir :
|
||||
|
||||
00000221 = Users (which has 4 members)
|
||||
00000220 = Administrators (which has 5 members)
|
||||
|
||||
Account bits: 0x0214 =
|
||||
[ ] Disabled | [ ] Homedir req. | [ ] Passwd not req. |
|
||||
[ ] Temp. duplicate | [X] Normal account | [ ] NMS account |
|
||||
[ ] Domain trust ac | [ ] Wks trust act. | [ ] Srv trust act |
|
||||
[X] Pwd don't expir | [ ] Auto lockout | [ ] (unknown 0x08) |
|
||||
[ ] (unknown 0x10) | [ ] (unknown 0x20) | [ ] (unknown 0x40) |
|
||||
|
||||
Failed login count: 0, while max tries is: 0
|
||||
Total login count: 47
|
||||
|
||||
- - - - User Edit Menu:
|
||||
1 - Clear (blank) user password
|
||||
2 - Unlock and enable user account [probably locked now]
|
||||
3 - Promote user (make user an administrator)
|
||||
4 - Add user to a group
|
||||
5 - Remove user from a group
|
||||
q - Quit editing user, back to user select
|
||||
Select: [q] > 2
|
||||
```
|
||||
|
||||
这时,我选择了菜单项 `2`,“<ruby>解锁并启用用户账户<rt>Unlock and enable user account</rt></ruby>”,这样就可以删除密码,使我可以不用密码登录。顺便说一下 —— 这就是自动登录。然后我退出了该程序。在继续之前,一定要先卸载 `/mnt`。
|
||||
|
||||
我知道,我知道,但为什么不呢! 我已经绕过了这个硬盘和主机的安全问题,所以一点也不重要。这时,我确实登录了旧的管理账户,并为自己创建了一个新的账户,并设置了安全密码。然后,我以自己的身份登录,并删除了旧的管理账户,这样别人就无法使用了。
|
||||
|
||||
网上也有 Windows Administrator 账号的使用说明(上面列表中的 `01f4`)。如果它不是作为组织管理账户,我可以删除或更改该账户的密码。还要注意的是,这个过程也可以从目标主机上运行临场 USB 来执行。
|
||||
|
||||
### 重新激活 Windows
|
||||
|
||||
因此,我现在让 Windows SSD 作为虚拟机在我的 Fedora 主机上运行了。然而,令人沮丧的是,在运行了几个小时后,Windows 显示了一条警告信息,表明我需要“激活 Windows”。
|
||||
|
||||
在看了许许多多的死胡同网页之后,我终于放弃了使用现有激活码重新激活的尝试,因为它似乎已经以某种方式被破坏了。最后,当我试图进入其中一个在线虚拟支持聊天会话时,虚拟的“获取帮助”应用程序显示我的 Windows 10 Pro 实例已经被激活。这怎么可能呢?它一直希望我激活它,然而当我尝试时,它说它已经被激活了。
|
||||
|
||||
### 或者不
|
||||
|
||||
当我在三天内花了好几个小时做研究和实验时,我决定回到原来的 SSD 启动到 Windows 中,以后再来处理这个问题。但后来 Windows —— 即使从原存储设备启动,也要求重新激活。
|
||||
|
||||
在微软支持网站上搜索也无济于事。在不得不与之前一样的自动支持大费周章之后,我拨打了提供的电话号码,却被自动响应系统告知,所有对 Windows 10 Pro 的支持都只能通过互联网提供。到现在,我已经晚了将近一天才让电脑运行起来并安装回办公室。
|
||||
|
||||
### 回到未来
|
||||
|
||||
我终于吸了一口气,购买了一份 Windows 10 Home,大约 120 美元,并创建了一个带有虚拟存储设备的虚拟机,将其安装在上面。
|
||||
|
||||
我将大量的文档和电子表格文件复制到办公室经理的主目录中。我重新安装了一个我们需要的 Windows 程序,并与办公室经理验证了它可以工作,数据都在那里。
|
||||
|
||||
### 总结
|
||||
|
||||
因此,我的目标达到了,实际上晚了一天,花了 120 美元,但使用了一种更标准的方法。我仍在对权限进行一些调整,并恢复 Thunderbird 通讯录;我有一些 CSV 备份,但 `*.mab` 文件在 Windows 驱动器上包含的信息很少。我甚至用 Linux 的 `find` 命令在原始存储设备上定位了所有这类文件。
|
||||
|
||||
我走了很多弯路,每次都要自己重新开始。我遇到了一些与这个项目没有直接关系的问题,但却影响了我的工作。这些问题包括一些有趣的事情,比如把 Windows 分区挂载到我的 Linux 机器的 `/mnt` 上,得到的信息是该分区已经被 Windows 不正确地关闭(是的,在我的 Linux 主机上),并且它已经修复了不一致的地方。即使是 Windows 通过其所谓的“恢复”模式多次重启后也做不到这一点。
|
||||
|
||||
也许你从 `chntpw` 工具的输出数据中发现了一些线索。出于安全考虑,我删掉了主机上显示的其他一些用户账号,但我从这些信息中看到,所有的用户都是管理员。不用说,我也改了。我仍然对我遇到的糟糕的管理方式感到惊讶,但我想我不应该这样。
|
||||
|
||||
最后,我被迫购买了一个许可证,但这个许可证至少比原来的要便宜一些。我知道的一点是,一旦我找到了所有必要的信息,Linux 这一块就能完美地工作。问题是处理 Windows 激活的问题。你们中的一些人可能已经成功地让 Windows 重新激活了。如果是这样,我还是想知道你们是怎么做到的,所以请把你们的经验添加到评论中。
|
||||
|
||||
这是我不喜欢 Windows,只在自己的系统上使用 Linux 的又一个原因。这也是我将组织中所有的计算机都转换为 Linux 的原因之一。只是需要时间和说服力。我们只剩下这一个会计程序了,我需要和财务主管一起找到一个适合她的程序。我明白这一点 —— 我喜欢自己的工具,我需要它们以一种最适合我的方式工作。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/1/virtualbox-windows-linux
|
||||
|
||||
作者:[David Both][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/dboth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen)
|
||||
[2]: https://opensource.com/article/20/7/godbledger
|
||||
[3]: https://opensource.com/sites/default/files/virtualbox.png
|
@ -0,0 +1,103 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (ShuyRoy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13229-1.html)
|
||||
[#]: subject: (Get started with distributed tracing using Grafana Tempo)
|
||||
[#]: via: (https://opensource.com/article/21/2/tempo-distributed-tracing)
|
||||
[#]: author: (Annanay Agarwal https://opensource.com/users/annanayagarwal)
|
||||
|
||||
使用 Grafana Tempo 进行分布式跟踪
|
||||
======
|
||||
|
||||
> Grafana Tempo 是一个新的开源、大容量分布式跟踪后端。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/23/221354lc1eiill7lln4lli.jpg)
|
||||
|
||||
Grafana 的 [Tempo][2] 是出自 Grafana 实验室的一个简单易用、大规模的、分布式的跟踪后端。Tempo 集成了 [Grafana][3]、[Prometheus][4] 以及 [Loki][5],并且它只需要对象存储进行操作,因此成本低廉,操作简单。
|
||||
|
||||
我从一开始就参与了这个开源项目,所以我将介绍一些关于 Tempo 的基础知识,并说明为什么云原生社区会注意到它。
|
||||
|
||||
### 分布式跟踪
|
||||
|
||||
想要收集对应用程序请求的遥测数据是很常见的。但是在如今的服务端架构中,单个应用通常被分割为多个微服务,可能运行在几个不同的节点上。
|
||||
|
||||
分布式跟踪是一种获得关于应用的性能细粒度信息的方式,该应用程序可能由离散的服务组成。当请求到达一个应用时,它提供了该请求的生命周期的统一视图。Tempo 的分布式跟踪可以用于单体应用或微服务应用,它提供 [请求范围的信息][6],使其成为可观察性的第三个支柱(另外两个是度量和日志)。
|
||||
|
||||
接下来是一个分布式跟踪系统生成应用程序甘特图的示例。它使用 Jaeger [HotROD][7] 的演示应用生成跟踪,并把它们存到 Grafana 云托管的 Tempo 上。这个图展示了按照服务和功能划分的请求处理时间。
|
||||
|
||||
![Gantt chart from Grafana Tempo][8]
|
||||
|
||||
### 减少索引的大小
|
||||
|
||||
跟踪以丰富且定义良好的数据模型承载了大量信息。通常,与跟踪后端有两种交互方式:使用元数据选择器(如服务名或者持续时间)筛选跟踪,以及对筛选出来的跟踪进行可视化。
|
||||
|
||||
为了加强搜索,大多数的开源分布式跟踪框架会对跟踪中的许多字段进行索引,包括服务名称、操作名称、标记和持续时间。这会导致索引很大,并迫使你使用 Elasticsearch 或者 [Cassandra][10] 这样的数据库。但是,这些很难管理,而且大规模运营成本很高,所以我在 Grafana 实验室的团队开始提出一个更好的解决方案。
|
||||
|
||||
在 Grafana 中,我们的待命调试工作流从使用指标报表开始(我们使用 [Cortex][11] 来存储我们应用中的指标,它是一个云原生基金会孵化的项目,用于扩展 Prometheus),深入研究这个问题,筛选有问题服务的日志(我们将日志存储在 Loki 中,它就像 Prometheus 一样,只不过 Loki 是存日志的),然后查看跟踪给定的请求。我们意识到,我们过滤时所需的所有索引信息都可以在 Cortex 和 Loki 中找到。但是,我们需要一个强大的集成,以通过这些工具实现跟踪的可发现性,并需要一个很赞的存储,以根据跟踪 ID 进行键值查找。
|
||||
|
||||
这就是 [Grafana Tempo][12] 项目的开始。通过专注于给定检索跟踪 ID 的跟踪,我们将 Tempo 设计为最小依赖性、大容量、低成本的分布式跟踪后端。
|
||||
|
||||
### 操作简单,性价比高
|
||||
|
||||
Tempo 使用对象存储后端,这是它唯一的依赖。它既可以以单一二进制文件的形式运行,也可以以微服务模式运行(请参考仓库中的 [例子][13],了解如何轻松上手)。使用对象存储还意味着你可以存储大量的应用程序跟踪数据,而无需任何采样。这可以确保你永远不会丢弃那百万分之一的出错或具有较高延迟的请求的跟踪。
|
||||
|
||||
### 与开源工具的强大集成
|
||||
|
||||
[Grafana 7.3 包括了 Tempo 数据源][14],这意味着你可以在 Grafana UI 中可视化来自 Tempo 的跟踪。而且,[Loki 2.0 的新查询特性][15] 使得发现 Tempo 中的跟踪更简单。为了与 Prometheus 集成,该团队正在添加对<ruby>范例<rt>exemplar</rt></ruby>的支持,范例是可以添加到时间序列数据中的高基数元数据信息。度量存储后端不会对它们建立索引,但是你可以在 Grafana UI 中检索和显示度量值。尽管范例可以存储各种元数据,但是在这个用例中,存储的是跟踪 ID,以便与 Tempo 紧密集成。
|
||||
|
||||
这个例子展示了使用带有请求延迟直方图的范例,其中每个范例数据点都链接到 Tempo 中的一个跟踪。
|
||||
|
||||
![Using exemplars in Tempo][16]
|
||||
|
||||
### 元数据一致性
|
||||
|
||||
作为容器化应用运行的应用程序所发出的遥测数据,通常带有一些相关的元数据,其中可以包括集群 ID、命名空间、吊舱 IP 等。这有助于按需提供信息,但如果你能把元数据中包含的信息用在更有用的地方,那就更好了。
|
||||
|
||||
例如,你可以使用 [Grafana 云代理将跟踪信息导入 Tempo 中][17],该代理利用 Prometheus 服务发现机制轮询 Kubernetes API 以获取元数据信息,并将这些标记添加到应用程序发出的<ruby>跨度<rt>span</rt></ruby>数据中。由于这些元数据在 Loki 中也建立了索引,所以通过将元数据转换为 Loki 标签选择器,可以很容易地从跟踪跳转到查看给定服务的日志。
|
||||
|
||||
下面是一个一致性元数据的示例,可以用它从 Tempo 跟踪中查看给定<ruby>跨度<rt>span</rt></ruby>的日志。
|
||||
|
||||
![][18]
|
||||
|
||||
### 云原生
|
||||
|
||||
Grafana Tempo 可以作为容器化应用运行,你可以在如 Kubernetes、Mesos 等编排引擎上运行它。根据获取/查询路径上的工作负载,各种服务可以水平伸缩。你还可以使用云原生的对象存储,如谷歌云存储、Amazon S3 或者 Azure <ruby>Blob 存储<rt>Blob Storage</rt></ruby>。更多的信息,请阅读 Tempo 文档中的 [架构部分][19]。
|
||||
|
||||
### 试一试 Tempo
|
||||
|
||||
如果这对你和我们一样有用,可以 [克隆 Tempo 仓库][20]试一试。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/tempo-distributed-tracing
|
||||
|
||||
作者:[Annanay Agarwal][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[RiaXu](https://github.com/ShuyRoy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/annanayagarwal
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
|
||||
[2]: https://grafana.com/oss/tempo/
|
||||
[3]: http://grafana.com/oss/grafana
|
||||
[4]: https://prometheus.io/
|
||||
[5]: https://grafana.com/oss/loki/
|
||||
[6]: https://peter.bourgon.org/blog/2017/02/21/metrics-tracing-and-logging.html
|
||||
[7]: https://github.com/jaegertracing/jaeger/tree/master/examples/hotrod
|
||||
[8]: https://opensource.com/sites/default/files/uploads/tempo_gantt.png (Gantt chart from Grafana Tempo)
|
||||
[9]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[10]: https://opensource.com/article/19/8/how-set-apache-cassandra-cluster
|
||||
[11]: https://cortexmetrics.io/
|
||||
[12]: http://github.com/grafana/tempo
|
||||
[13]: https://grafana.com/docs/tempo/latest/getting-started/example-demo-app/
|
||||
[14]: https://grafana.com/blog/2020/10/29/grafana-7.3-released-support-for-the-grafana-tempo-tracing-system-new-color-palettes-live-updates-for-dashboard-viewers-and-more/
|
||||
[15]: https://grafana.com/blog/2020/11/09/trace-discovery-in-grafana-tempo-using-prometheus-exemplars-loki-2.0-queries-and-more/
|
||||
[16]: https://opensource.com/sites/default/files/uploads/tempo_exemplar.png (Using exemplars in Tempo)
|
||||
[17]: https://grafana.com/blog/2020/11/17/tracing-with-the-grafana-cloud-agent-and-grafana-tempo/
|
||||
[18]: https://lh5.googleusercontent.com/vNqk-ygBOLjKJnCbTbf2P5iyU5Wjv2joR7W-oD7myaP73Mx0KArBI2CTrEDVi04GQHXAXecTUXdkMqKRq8icnXFJ7yWUEpaswB1AOU4wfUuADpRV8pttVtXvTpVVv8_OfnDINgfN
|
||||
[19]: https://grafana.com/docs/tempo/latest/architecture/architecture/
|
||||
[20]: https://github.com/grafana/tempo
|
161
published/20210224 Set your path in FreeDOS.md
Normal file
@ -0,0 +1,161 @@
|
||||
[#]: subject: (Set your path in FreeDOS)
|
||||
[#]: via: (https://opensource.com/article/21/2/path-freedos)
|
||||
[#]: author: (Kevin O'Brien https://opensource.com/users/ahuka)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13218-1.html)
|
||||
|
||||
在 FreeDOS 中设置你的路径
|
||||
======
|
||||
|
||||
> 学习 FreeDOS 路径的知识,如何设置它,并且如何使用它。
|
||||
|
||||
![查看职业生涯地图][1]
|
||||
|
||||
你在开源 [FreeDOS][2] 操作系统中所做的一切工作都是通过命令行完成的。命令行以一个 _提示符_ 开始,这相当于计算机在说:“我准备好了,请给我一些事情来做。”你可以配置你的提示符的外观,但是默认情况下,它是:
|
||||
|
||||
```
|
||||
C:\>
|
||||
```
|
||||
|
||||
从命令行中,你可以做两件事:运行一个内部命令,或运行一个程序。外部命令是以单独文件形式存放在你的 `FDOS` 目录中的程序,因此运行程序也包括运行外部命令。这也意味着你可以运行应用软件,用你的计算机做一些事情。你还可以运行一个批处理文件,在这种情况下,你实际做的是依次运行批处理文件中所列出的一系列命令或程序。
|
||||
|
||||
### 可执行应用程序文件
|
||||
|
||||
FreeDOS 可以运行三种类型的应用程序文件:
|
||||
|
||||
1. **COM** 是一个用机器语言写的,且小于 64 KB 的文件。
|
||||
2. **EXE** 也是一个用机器语言写的文件,但是它可以大于 64 KB 。此外,在 EXE 文件的开头部分有信息,用于告诉 DOS 系统该文件是什么类型的以及如何加载和运行。
|
||||
3. **BAT** 是一个使用文本编辑器以 ASCII 文本格式编写的 _批处理文件_ ,其中包含以批处理模式执行的 FreeDOS 命令。这意味着每个命令都会按顺序执行到文件的结尾。
|
||||
|
||||
如果你所输入的一个文件名称不能被 FreeDOS 识别为一个内部命令或一个程序,你将收到一个错误消息 “Bad command or filename” 。如果你看到这个错误,它意味着会是下面三种情况中的其中一种:
|
||||
|
||||
1. 由于某些原因,你所给予的名称是错误的。你可能拼错了文件名称,或者你可能正在使用错误的命令名称。检查名称和拼写,并再次尝试。
|
||||
2. 可能你正在尝试运行的程序并没有安装在计算机上。请确认它已经安装了。
|
||||
3. 文件确实存在,但是 FreeDOS 不知道在哪里可以找到它。
|
||||
|
||||
在清单上的最后一项就是这篇文章的主题,它被称为路径。如果你已经习惯于使用 Linux 或 Unix,你可能已经理解 [PATH 变量][3] 的概念。如果你是命令行的新手,那么路径是一个非常重要、值得你熟悉掌握的概念。
|
||||
|
||||
### 路径
|
||||
|
||||
当你输入一个可执行应用程序文件的名称时,FreeDOS 必须能找到它。FreeDOS 会在一个具体指定的位置层次结构中查找文件:
|
||||
|
||||
1. 首先,它查找当前驱动器的活动目录(称为 _工作目录_)。如果你正在目录 `C:\FDOS` 中,接着,你输入名称 `FOOBAR.EXE`,FreeDOS 将在 `C:\FDOS` 中查找带有这个名称的文件。你甚至不需要输入完整的名称。如果你输入 `FOOBAR` ,FreeDOS 将查找任何带有这个名称的可执行文件,不管它是 `FOOBAR.EXE`,`FOOBAR.COM`,或 `FOOBAR.BAT`。只要 FreeDOS 能找到一个匹配该名称的文件,它就会运行该可执行文件。
|
||||
2. 如果 FreeDOS 不能找到你所输入名称的文件,它将查询一个被称为 `PATH` 的东西。这是一个目录列表,每当 DOS 不能在当前活动目录中找到某个文件时,就会依次检查这个列表中的目录。
|
||||
|
||||
你可以随时使用 `path` 命令来查看你的计算机的路径。只需要在 FreeDOS 提示符中输入 `path` ,FreeDOS 就会返回你的路径设置:
|
||||
|
||||
```
|
||||
C:\>path
|
||||
PATH=C:\FDOS\BIN
|
||||
```
|
||||
|
||||
第一行是提示符和命令,第二行是计算机返回的东西。你可以看到 DOS 第一个查看的位置就是位于 `C` 驱动器上的 `FDOS\BIN`。如果你想更改你的路径,你可以输入一个 `path` 命令以及你想使用的新路径:
|
||||
|
||||
```
|
||||
C:\>path=C:\HOME\BIN;C:\FDOS\BIN
|
||||
```
|
||||
|
||||
在这个示例中,我把路径设置为我个人的 `BIN` 文件夹(我把它放在一个叫 `HOME` 的自定义目录中),然后才是 `FDOS\BIN`。现在,当你检查你的路径时:
|
||||
|
||||
```
|
||||
C:\>path
|
||||
PATH=C:\HOME\BIN;C:\FDOS\BIN
|
||||
```
|
||||
|
||||
路径设置是按所列目录的顺序处理的。
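这种按顺序查找的行为,可以用一个类 Unix shell 的小实验来直观说明(这只是类比,并非 FreeDOS 本身;目录和脚本名都是临时虚构的):

```shell
# 在两个目录里放同名脚本,PATH 中排在前面的目录获胜。
mkdir -p /tmp/pathdemo/a /tmp/pathdemo/b
printf '#!/bin/sh\necho from-a\n' > /tmp/pathdemo/a/hello
printf '#!/bin/sh\necho from-b\n' > /tmp/pathdemo/b/hello
chmod +x /tmp/pathdemo/a/hello /tmp/pathdemo/b/hello
(
  PATH="/tmp/pathdemo/a:/tmp/pathdemo/b:$PATH"
  hello    # 输出 from-a,因为 a 目录排在前面
)
```

FreeDOS 的 `PATH=C:\HOME\BIN;C:\FDOS\BIN` 遵循同样的规则,只是分隔符是 `;` 而不是 `:`。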
|
||||
|
||||
你可能会注意到有一些字符是小写的,有一些字符是大写的。你使用哪一种其实都无所谓。FreeDOS 是不区分大小写的,并且把所有的东西都当作大写字母对待。在内部,FreeDOS 使用的全是大写字母,这就是为什么你的命令的输出都是大写字母的原因。如果你以小写字母的形式输入命令和文件名称,一个转换器会自动把它们转换为大写字母,然后再执行。
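这种“内部全部转成大写”的归一化,可以用 POSIX shell 里等价的转换直观感受一下(纯属类比演示,并非 FreeDOS 代码):

```shell
# 模拟 FreeDOS 对输入做的大写归一化。
printf 'path=c:\\fdos\\bin\n' | tr '[:lower:]' '[:upper:]'
# 输出:PATH=C:\FDOS\BIN
```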
|
||||
|
||||
输入一个新的路径即可替换先前设置的路径。
|
||||
|
||||
### autoexec.bat 文件
|
||||
|
||||
你可能遇到的下一个问题的是 FreeDOS 默认使用的第一个路径来自何处。这与其它一些重要的设置一起定义在你的 `C` 驱动器的根目录下的 `AUTOEXEC.BAT` 文件中。这是一个批处理文件,它在你启动 FreeDOS 时会自动执行(由此得名)。你可以使用 FreeDOS 程序 `EDIT` 来编辑这个文件。为查看或编辑这个文件的内容,输入下面的命令:
|
||||
|
||||
```
|
||||
C:\>edit autoexec.bat
|
||||
```
|
||||
|
||||
这一行出现在顶部附近:
|
||||
|
||||
```
|
||||
SET PATH=%dosdir%\BIN
|
||||
```
|
||||
|
||||
这一行定义默认路径的值。
|
||||
|
||||
在你查看 `AUTOEXEC.BAT` 后,你可以通过依次按下面的按键来退出 EDIT 应用程序:
|
||||
|
||||
1. `Alt`
|
||||
2. `f`
|
||||
3. `x`
|
||||
|
||||
你也可以使用键盘快捷键 `Alt+X`。
|
||||
|
||||
### 使用完整的路径
|
||||
|
||||
如果你在你的路径中忘记包含 `C:\FDOS\BIN` ,那么你将不能快速访问存储在这里的任何应用程序,因为 FreeDOS 不知道从哪里找到它们。例如,假设我设置我的路径到我个人应用程序集合:
|
||||
|
||||
```
|
||||
C:\>path=C:\HOME\BIN
|
||||
```
|
||||
|
||||
内置在命令行中的应用程序仍然能正常工作:
|
||||
|
||||
```
|
||||
C:\cd HOME
|
||||
C:\HOME>dir
|
||||
ARTICLES
|
||||
BIN
|
||||
CHEATSHEETS
|
||||
GAMES
|
||||
DND
|
||||
```
|
||||
|
||||
不过,外部的命令将不能运行:
|
||||
|
||||
```
|
||||
C:HOME\ARTICLES>BZIP2 -c example.txt
|
||||
Bad command or filename - "BZIP2"
|
||||
```
|
||||
|
||||
通过提供命令的 _完整路径_,你总是可以执行系统上存在、但不在你的路径中的命令:
|
||||
|
||||
```
|
||||
C:HOME\ARTICLES>C:\FDOS\BIN\BZIP2 -c example.txt
|
||||
C:HOME\ARTICLES>DIR
|
||||
example.txb
|
||||
```
|
||||
|
||||
你可以使用同样的方法从外部介质或其它目录执行应用程序。
|
||||
|
||||
### FreeDOS 路径
|
||||
|
||||
通常情况下,你很可能希望在路径中保留 `C:\FDOS\BIN`,因为它包含所有随 FreeDOS 分发的默认应用程序。
|
||||
|
||||
除非你更改 `AUTOEXEC.BAT` 中的路径,否则将在重新启动后恢复默认路径。
|
||||
|
||||
现在,你知道了如何在 FreeDOS 中管理你的路径,就能够以最适合你的方式执行命令和维护你的工作环境了。
|
||||
|
||||
_致谢 [DOS 课程 5: 路径][4] (在 CC BY-SA 4.0 协议下发布) 为本文提供的一些信息。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/path-freedos
|
||||
|
||||
作者:[Kevin O'Brien][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ahuka
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey)
|
||||
[2]: https://www.freedos.org/
|
||||
[3]: https://opensource.com/article/17/6/set-path-linux
|
||||
[4]: https://www.ahuka.com/dos-lessons-for-self-study-purposes/dos-lesson-5-the-path/
|
59
published/20210225 4 new open source licenses.md
Normal file
@ -0,0 +1,59 @@
|
||||
[#]: subject: (4 new open source licenses)
|
||||
[#]: via: (https://opensource.com/article/21/2/osi-licenses-cal-cern-ohl)
|
||||
[#]: author: (Pam Chestek https://opensource.com/users/pchestek)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wyxplus)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13224-1.html)
|
||||
|
||||
四个新式开源许可证
|
||||
======
|
||||
|
||||
> 让我们来看看 OSI 最新批准的加密自治许可证和 CERN 开源硬件许可协议。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/21/221014mw8lhxox0kkjk04z.jpg)
|
||||
|
||||
作为 <ruby>[开源定义][2]<rt>Open Source Defintion</rt></ruby>(OSD)的管理者,<ruby>[开源促进会][3]<rt>Open Source Initiative</rt></ruby>(OSI)20 年来一直在批准“开源”许可证。这些许可证是开源软件生态系统的基础,可确保每个人都可以使用、改进和共享软件。当一个许可证获批为“开源”时,是因为 OSI 认为该许可证可以促进相互的协作和共享,从而使得每个参与开源生态的人获益。
|
||||
|
||||
在过去的 20 年里,世界发生了翻天覆地的变化。现如今,软件以新的、甚至是过去无法想象的方式在被使用。OSI 已经预见到,人们曾经熟知的开源许可证未必能满足如今的所有要求。因此,许可证维护者们加紧工作,为更广泛的用途提交了几个新的许可证。OSI 所面临的挑战是评估这些新的许可证概念是否会继续推动共享和合作,是否值得被称为“开源”许可证。最终,OSI 批准了一些用于特殊领域的新式许可证。
|
||||
|
||||
### 四个新式许可证
|
||||
|
||||
第一个是 <ruby>[加密自治许可证][4]<rt>Cryptographic Autonomy License</rt></ruby>(CAL)。该许可证是为分布式密码应用程序而设计的。此许可证所解决的问题是,现有的开源许可证无法保证开放性,因为如果没有义务也与其他对等体共享数据,那么一个对等体就有可能损害网络的运行。因此,除了是一个强有力的版权保护许可外,CAL 还包括向第三方提供独立使用和修改软件所需的权限和资料的义务,而不会让第三方有数据或功能的损失。
|
||||
|
||||
随着越来越多的人使用加密结构进行点对点共享,那么更多的开发人员发现自己需要诸如 CAL 之类的法律工具也就不足为奇了。 OSI 的两个邮件列表 License-Discuss 和 License-Review 上的社区,讨论了拟议的新开源许可证,并询问了有关此许可证的诸多问题。我们希望由此产生的许可证清晰易懂,并希望对其他开源从业者有所裨益。
|
||||
|
||||
接下来是欧洲核子研究组织(CERN)提交以供审议的 CERN <ruby>开放硬件许可证<rt>Open Hardware Licence</rt></ruby>(OHL)系列许可证。它包括三个许可证,主要用于开放硬件。这是一个与开源软件相似的开放领域,但有其自身的挑战和细微差别。硬件和软件之间的界线如今已变得相当模糊,因此分别应用单独的硬件和软件许可证变得越来越困难。欧洲核子研究组织(CERN)制定了一个可以同时保障硬件和软件自由的许可证。
|
||||
|
||||
OSI 可能在开始时就没考虑将开源硬件许可证添加到其开源许可证列表中,但是世界早已发生变革。因此,尽管 CERN 许可证中的措词涵盖了硬件术语,但它也符合 OSI 认可的所有开源软件许可证的条件。
|
||||
|
||||
CERN 开源硬件许可证包括一个 [宽松许可证][5]、一个 [弱互惠许可证][6] 和一个 [强互惠许可证][7]。最近,该许可证已被一个国际研究项目采用,该项目正在制造可用于 COVID-19 患者的简单、易于生产的呼吸机。
|
||||
|
||||
### 了解更多
|
||||
|
||||
CAL 和 CERN OHL 许可证是针对特殊用途的,并且 OSI 不建议把它们用于其它领域。但是 OSI 想知道这些许可证是否会按预期发展,从而有助于在较新的计算机领域中培育出健壮的开源生态。
|
||||
|
||||
可以从 OSI 获得关于 [许可证批准过程][8] 的更多信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/osi-licenses-cal-cern-ohl
|
||||
|
||||
作者:[Pam Chestek][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wyxplus](https://github.com/wyxplus)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/pchestek
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_lawdotgov3.png?itok=e4eFKe0l "Law books in a library"
|
||||
[2]: https://opensource.org/osd
|
||||
[3]: https://opensource.org/
|
||||
[4]: https://opensource.org/licenses/CAL-1.0
|
||||
[5]: https://opensource.org/CERN-OHL-P
|
||||
[6]: https://opensource.org/CERN-OHL-W
|
||||
[7]: https://opensource.org/CERN-OHL-S
|
||||
[8]: https://opensource.org/approval
|
@ -1,62 +1,61 @@
|
||||
[#]: subject: (An Introduction to WebAssembly)
|
||||
[#]: via: (https://www.linux.com/news/an-introduction-to-webassembly/)
|
||||
[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/an-introduction-to-webassembly/)
|
||||
[#]: author: (Marco Fioretti https://training.linuxfoundation.org/announcements/an-introduction-to-webassembly/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13197-1.html)
|
||||
|
||||
WebAssembly 介绍
|
||||
======
|
||||
|
||||
_Marco Fioretti 编写_
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/12/222938jww882da88oqzays.jpg)
|
||||
|
||||
## **到底什么是 WebAssembly?**
|
||||
### 到底什么是 WebAssembly?
|
||||
|
||||
[WebAssembly,也叫 Wasm][1],是一种 Web 优化的代码格式和 API(应用编程接口),它可以大大提高网站的性能和能力。WebAssembly 的 1.0 版本,于 2017 年发布,并于 2019 年成为 W3C 官方标准。
|
||||
[WebAssembly][1],也叫 Wasm,是一种为 Web 优化的代码格式和 API(应用编程接口),它可以大大提高网站的性能和能力。WebAssembly 的 1.0 版本于 2017 年发布,并于 2019 年成为 W3C 官方标准。
|
||||
|
||||
该标准得到了所有主流浏览器供应商的积极支持,原因显而易见:官方列出的[“浏览器内部”用例][2]中提到了,其中包括视频编辑、3D 游戏、虚拟和增强现实、p2p 服务和科学模拟。除了让浏览器的功能比J avaScript 强大得多,该标准甚至可以延长网站的寿命:例如,正是 WebAssembly 为[互联网档案馆的 Flash 动画和游戏][3]提供了持续的支持。
|
||||
该标准得到了所有主流浏览器供应商的积极支持,原因显而易见:官方列出的 [“浏览器内部”用例][2] 中提到的用途包括视频编辑、3D 游戏、虚拟和增强现实、p2p 服务和科学模拟。除了让浏览器的功能比 JavaScript 强大得多,该标准甚至可以延长网站的寿命:例如,正是 WebAssembly 为 [互联网档案馆的 Flash 动画和游戏][3] 提供了持续的支持。
|
||||
|
||||
不过,WebAssembly 并不只用于浏览器,目前它还被用于移动和基于边缘环境的 Cloudflare Workers 等产品中。
|
||||
|
||||
## **WebAssembly 如何工作**
|
||||
### WebAssembly 如何工作?
|
||||
|
||||
.wasm 格式的文件包含低级二进制指令(字节码),可由使用通用栈的虚拟机以“接近 CPU 原生速度”执行。这些代码被打包成模块,也就是可以被浏览器直接执行的对象。每个模块可以被一个网页多次实例化。模块内部定义的函数被列在一个专用数组中,或称为 Table,相应的数据被包含在另一个结构中,称为 arraybuffer。开发者可以通过 Javascript WebAssembly.memory() 的调用,为 .wasm 代码显式分配内存。
|
||||
.wasm 格式的文件包含低级二进制指令(字节码),可由使用通用栈的虚拟机以“接近 CPU 原生速度”执行。这些代码被打包成模块(可以被浏览器直接执行的对象),每个模块可以被一个网页多次实例化。模块内部定义的函数被列在一个专用数组中,或称为<ruby>表<rt>Table</rt></ruby>,相应的数据被包含在另一个结构中,称为 <ruby>缓存数组<rt>arraybuffer</rt></ruby>。开发者可以通过 Javascript `WebAssembly.memory()` 的调用,为 .wasm 代码显式分配内存。
|
||||
|
||||
.wasm 格式的纯文本版本可以大大简化学习和调试,同样也可以使用。然而,WebAssembly 并不是真的要供人直接使用。从技术上讲,.wasm 只是一个与浏览器兼容的**编译目标**:一种软件编译器可以自动翻译用高级编程语言编写的代码的格式。
|
||||
.wasm 格式也有纯文本版本,它可以大大简化学习和调试。然而,WebAssembly 并不是真的要供人直接使用。从技术上讲,.wasm 只是一个与浏览器兼容的**编译目标**:一种用高级编程语言编写的软件编译器可以自动翻译的代码格式。
|
||||
|
||||
这种选择正是使开发人员能够使用数十亿人熟悉的语言(C/C++、Python、Go、Rust 等)直接为用户界面进行编程的方式,但以前浏览器无法对其进行有效利用。更妙的是,程序员将得到这些,至少在理论上无需直接查看 WebAssembly 代码,也无需担心(因为目标是一个**虚拟**机)物理 CPU 将实际运行他们的代码。
|
||||
这种选择正是使开发人员能够使用数十亿人熟悉的语言(C/C++、Python、Go、Rust 等)直接为用户界面进行编程的方式,但以前浏览器无法对其进行有效利用。更妙的是,至少在理论上程序员可以利用它们,无需直接查看 WebAssembly 代码,也无需担心物理 CPU 实际运行他们的代码(因为目标是一个**虚拟**机)。
|
||||
|
||||
## **但是我们已经有了 JavaScript,我们真的需要 WebAssembly 吗?**
|
||||
### 但是我们已经有了 JavaScript,我们真的需要 WebAssembly 吗?
|
||||
|
||||
是的,有几个原因。首先,作为二进制指令,.wasm 文件比同等功能的 JavaScript 文件小得多,下载速度也快得多。最重要的是,Javascript 文件必须在浏览器将其转换为其内部虚拟机可用的字节码之前进行完全解析和验证。
|
||||
|
||||
而 .wasm 文件则可以一次性验证和编译,从而使“流式编译”成为可能:浏览器在开始**下载它们**的那一刻就可以开始编译和执行它们,就像串流电影一样。
|
||||
|
||||
这就是说,并不是所有可以想到的 WebAssembly 应用肯定会比由专业程序员手动优化的等效 JavaScript 应用更快或更小。例如,如果一些 .wasm 需要包含 JavaScript 不需要的库,这种情况可能会发生。
|
||||
这就是说,并不是所有可以想到的 WebAssembly 应用都肯定会比由专业程序员手动优化的等效 JavaScript 应用更快或更小。例如,如果一些 .wasm 需要包含 JavaScript 不需要的库,这种情况可能会发生。
|
||||
|
||||
## **WebAssembly 是否会让 JavaScript 过时?**
|
||||
### WebAssembly 是否会让 JavaScript 过时?
|
||||
|
||||
一句话:不会。当然暂时不会,至少在浏览器内不会。WebAssembly 模块仍然需要 JavaScript,因为在设计上它们不能访问文档对象模型 (DOM),也就是[主要用于修改网页的 API][4]。此外,.wasm 代码不能进行系统调用或读取浏览器的内存。WebAssembly 只能在沙箱中运行,一般来说,它能与外界的交互甚至比 JavaScript 更少,而且只能通过 JavaScript 接口进行。
|
||||
一句话:不会。暂时不会,至少在浏览器内不会。WebAssembly 模块仍然需要 JavaScript,因为在设计上它们不能访问文档对象模型 (DOM)—— [主要用于修改网页的 API][4]。此外,.wasm 代码不能进行系统调用或读取浏览器的内存。WebAssembly 只能在沙箱中运行,一般来说,它能与外界的交互甚至比 JavaScript 更少,而且只能通过 JavaScript 接口进行。
|
||||
|
||||
因此,至少在不久的将来 .wasm 模块将只是通过 JavaScript 提供那些如果用 JavaScript 语言编写会消耗更多带宽、内存或 CPU 时间的部分。
|
||||
|
||||
## **网络浏览器如何运行 WebAssembly**
|
||||
### Web 浏览器如何运行 WebAssembly?
|
||||
|
||||
一般来说,浏览器至少需要两块来处理动态应用:运行应用代码的虚拟机 (VM),以及可以同时修改浏览器行为和网页显示的 API。
|
||||
一般来说,浏览器至少需要两块来处理动态应用:运行应用代码的虚拟机(VM),以及可以同时修改浏览器行为和网页显示的 API。
|
||||
|
||||
现代浏览器内部的虚拟机通过以下方式同时支持 JavaScript 和 WebAssembly:
|
||||
|
||||
1. 浏览器下载一个用 HTML 标记语言编写的网页,然后进行渲染
|
||||
2. 如果该 HTML 调用 JavaScript 代码,浏览器的虚拟机就会执行该代码。但是...
|
||||
3. 如果 JavaScript 代码中包含了 WebAssembly 模块的实例,那么就按照上面的描述获取该实例,然后根据需要通过 JavaScript 的 WebAssembly API 来使用该实例
|
||||
4. 当 WebAssembly 代码产生的东西将修改 DOM 即“宿主”网页的结构,JavaScript 代码就会接收到,并继续进行实际的修改。
|
||||
4. 当 WebAssembly 代码产生的东西将修改 DOM(即“宿主”网页)的结构,JavaScript 代码就会接收到,并继续进行实际的修改。
|
||||
|
||||
### 我如何才能创建可用的 WebAssembly 代码?
|
||||
|
||||
## **我如何才能创建可用的 WebAssembly 代码?**
|
||||
|
||||
越来越多的编程语言社区支持直接编译到 Wasm,我们建议从 webassembly.org 的[入门指南][5]开始,这取决于你使用什么语言。请注意,并不是所有的编程语言都有相同水平的 Wasm 支持,因此你的工作量可能会有所不同。
|
||||
越来越多的编程语言社区支持直接编译到 Wasm,我们建议从 webassembly.org 的 [入门指南][5] 开始,这取决于你使用什么语言。请注意,并不是所有的编程语言都有相同水平的 Wasm 支持,因此你的工作量可能会有所不同。
|
||||
|
||||
我们计划在未来几个月内发布一系列文章,提供更多关于 WebAssembly 的信息。要自己开始使用它,你可以报名参加 Linux 基金会的免费 [WebAssembly 介绍][6]在线培训课程。
|
||||
|
||||
@ -69,7 +68,7 @@ via: https://www.linux.com/news/an-introduction-to-webassembly/
|
||||
作者:[Dan Brown][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -4,15 +4,15 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13203-1.html)
|
||||
|
||||
学习使用 GDB 调试代码
|
||||
======
|
||||
|
||||
> 使用 GNU 调试器来解决你的代码问题。
|
||||
|
||||
![在电脑屏幕上放大镜,发现代码中的错误][1]
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/14/210547k3q5lek8j9qspkks.jpg)
|
||||
|
||||
GNU 调试器常以它的命令 `gdb` 称呼它,它是一个交互式的控制台,可以帮助你浏览源代码、分析执行的内容,其本质上是对错误的应用程序中出现的问题进行逆向工程。
|
||||
|
||||
@ -88,7 +88,7 @@ Program received signal SIGSEGV, Segmentation fault.
|
||||
要充分利用 GDB,你需要将调试符号编译到你的可执行文件中。你可以用 GCC 中的 `-g` 选项来生成这个符号:
|
||||
|
||||
```
|
||||
$ g++ -o debuggy example.cpp
|
||||
$ g++ -g -o debuggy example.cpp
|
||||
$ ./debuggy
|
||||
Hello world.
|
||||
Segmentation fault
|
||||
@ -250,7 +250,7 @@ $4 = 02
|
||||
要查看其在内存中的地址:
|
||||
|
||||
```
|
||||
(gdb) print /o beta
|
||||
(gdb) print /o &beta
|
||||
$5 = 0x2
|
||||
```
|
||||
|
@ -0,0 +1,171 @@
|
||||
[#]: subject: (5 surprising things you can do with LibreOffice from the command line)
|
||||
[#]: via: (https://opensource.com/article/21/3/libreoffice-command-line)
|
||||
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13219-1.html)
|
||||
|
||||
5 个用命令行操作 LibreOffice 的技巧
|
||||
======
|
||||
|
||||
> 直接在命令行中对文件进行转换、打印、保护等操作。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/20/110200xjkkijnjixbyi4ui.jpg)
|
||||
|
||||
LibreOffice 拥有所有你想要的办公软件套件的生产力功能,使其成为微软 Office 或谷歌套件的流行的开源替代品。LibreOffice 的能力之一是可以从命令行操作。例如,Seth Kenlon 最近解释了如何使用 LibreOffice 用全局 [命令行选项将多个文件][2] 从 DOCX 转换为 EPUB。他的文章启发我分享一些其他 LibreOffice 命令行技巧和窍门。
|
||||
|
||||
在查看 LibreOffice 命令的一些隐藏功能之前,你需要了解如何使用应用选项。并不是所有的应用都接受选项(除了像 `--help` 选项这样的基本选项,它在大多数 Linux 应用中都可以使用)。
|
||||
|
||||
```
|
||||
$ libreoffice --help
|
||||
```
|
||||
|
||||
这将返回 LibreOffice 接受的其他选项的描述。有些应用没有太多选项,但 LibreOffice 有好几页有用的选项,所以有很多东西可以玩。
|
||||
|
||||
就是说,你可以在终端上使用 LibreOffice 进行以下五项有用的操作,让这个软件变得更加有用。
|
||||
|
||||
### 1、自定义你的启动选项
|
||||
|
||||
你可以修改你启动 LibreOffice 的方式。例如,如果你想只打开 LibreOffice 的文字处理器组件:
|
||||
|
||||
```
|
||||
$ libreoffice --writer # 启动文字处理器
|
||||
```
|
||||
|
||||
你可以类似地打开它的其他组件:
|
||||
|
||||
|
||||
```
|
||||
$ libreoffice --calc # 启动一个空的电子表格
|
||||
$ libreoffice --draw # 启动一个空的绘图文档
|
||||
$ libreoffice --web # 启动一个空的 HTML 文档
|
||||
```
|
||||
|
||||
你也可以从命令行访问特定的帮助文件:
|
||||
|
||||
```
|
||||
$ libreoffice --helpwriter
|
||||
```
|
||||
|
||||
![LibreOffice Writer help][3]
|
||||
|
||||
或者如果你需要电子表格应用方面的帮助:
|
||||
|
||||
```
|
||||
$ libreoffice --helpcalc
|
||||
```
|
||||
|
||||
你可以在不显示启动屏幕的情况下启动 LibreOffice:
|
||||
|
||||
```
|
||||
$ libreoffice --writer --nologo
|
||||
```
|
||||
|
||||
你甚至可以在你完成当前窗口的工作时,让它在后台最小化启动:
|
||||
|
||||
```
|
||||
$ libreoffice --writer --minimized
|
||||
```
|
||||
|
||||
### 2、以只读模式打开一个文件
|
||||
|
||||
你可以使用 `--view` 以只读模式打开文件,以防止意外地对重要文件进行修改和保存:
|
||||
|
||||
```
|
||||
$ libreoffice --view example.odt
|
||||
```
|
||||
|
||||
### 3、打开一个模板文档
|
||||
|
||||
你是否曾经创建过用作信头或发票表格的文档?LibreOffice 具有丰富的内置模板系统,但是你可以使用 `-n` 选项将任何文档作为模板:
|
||||
|
||||
```
|
||||
$ libreoffice --writer -n example.odt
|
||||
```
|
||||
|
||||
你的文档将在 LibreOffice 中打开,你可以对其进行修改,但保存时不会覆盖原始文件。
|
||||
|
||||
### 4、转换文档
|
||||
|
||||
当你需要做一个小任务,比如将一个文件转换为新的格式时,应用启动的时间可能与完成任务的时间一样长。解决办法是 `--headless` 选项,它可以在不启动图形用户界面的情况下执行 LibreOffice 进程。
|
||||
|
||||
例如,在 LibreOffice 中,将一个文档转换为 EPUB 是一个非常简单的任务,但使用 `libreoffice` 命令就更容易:
|
||||
|
||||
```
|
||||
$ libreoffice --headless --convert-to epub example.odt
|
||||
```
|
||||
|
||||
使用通配符意味着你可以一次转换几十个文档:
|
||||
|
||||
```
|
||||
$ libreoffice --headless --convert-to epub *.odt
|
||||
```
|
||||
|
||||
你可以将文件转换为多种格式,包括 PDF、HTML、DOC、DOCX、EPUB、纯文本等。
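把上面的批量转换思路包装成一个小函数会更方便。下面是一个示意性的 POSIX shell 草稿(假定系统中装有前文介绍的 `libreoffice` 命令):默认只回显将要执行的命令(干跑),设置 `RUN=1` 时才真正执行,方便先确认要处理的文件列表:

```
# 批量转换草稿:默认干跑,只回显将要执行的命令;RUN=1 时才真正调用 libreoffice
convert_all() {
  fmt=$1; shift
  for f in "$@"; do
    if [ "${RUN:-0}" = "1" ]; then
      libreoffice --headless --convert-to "$fmt" "$f"
    else
      echo "libreoffice --headless --convert-to $fmt $f"
    fi
  done
}

convert_all pdf report.odt notes.odt   # 干跑:只打印两条命令,不执行
```

确认无误后,用 `RUN=1 convert_all pdf *.odt` 正式转换即可。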
|
||||
|
||||
### 5、从终端打印
|
||||
|
||||
你可以从命令行打印 LibreOffice 文档,而无需打开应用:
|
||||
|
||||
```
|
||||
$ libreoffice --headless -p example.odt
|
||||
```
|
||||
|
||||
这个选项不需要打开 LibreOffice 就可以使用默认打印机打印,它只是将文档发送到你的打印机。
|
||||
|
||||
要打印一个目录中的所有文件:
|
||||
|
||||
```
|
||||
$ libreoffice -p *.odt
|
||||
```
|
||||
|
||||
(我不止一次执行了这个命令,然后用完了纸,所以在你开始之前,确保你的打印机里有足够的纸张。)
|
||||
|
||||
你也可以把文件输出成 PDF。通常这和使用 `--convert-to pdf` 选项没有什么区别,但这个选项很容易记住:
|
||||
|
||||
|
||||
```
|
||||
$ libreoffice --print-to-file example.odt --headless
|
||||
```
|
||||
|
||||
### 额外技巧:Flatpak 和命令选项
|
||||
|
||||
如果你是使用 [Flatpak][5] 安装的 LibreOffice,所有这些命令选项都可以使用,但你必须通过 Flatpak 传递。下面是一个例子:
|
||||
|
||||
```
|
||||
$ flatpak run org.libreoffice.LibreOffice --writer
|
||||
```
|
||||
|
||||
它比本地安装要麻烦得多,所以你可能会受到启发 [写一个 Bash 别名][6] 来使它更容易直接与 LibreOffice 交互。
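下面是一个示意性的包装函数草稿(函数比别名更方便转发参数;应用 ID `org.libreoffice.LibreOffice` 即上文示例中使用的那个)。把它加入 `~/.bashrc` 后,就可以像本地安装一样直接输入 `libreoffice` 了:

```
# 草稿:用同名的 shell 函数把参数原样转发给 Flatpak 版 LibreOffice
libreoffice() {
  flatpak run org.libreoffice.LibreOffice "$@"
}
# 之后即可照常使用,例如:libreoffice --writer --nologo
```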
|
||||
|
||||
### 令人惊讶的终端选项
|
||||
|
||||
通过查阅手册页面,了解如何从命令行扩展 LibreOffice 的功能:
|
||||
|
||||
```
|
||||
$ man libreoffice
|
||||
```
|
||||
|
||||
你是否知道 LibreOffice 具有如此丰富的命令行选项? 你是否发现了其他人似乎都不了解的其他选项? 请在评论中分享它们!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/libreoffice-command-line
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/don-watkins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/shortcut_command_function_editing_key.png?itok=a0sEc5vo (hot keys for shortcuts or features on computer keyboard)
|
||||
[2]: https://opensource.com/article/21/2/linux-workday
|
||||
[3]: https://opensource.com/sites/default/files/uploads/libreoffice-help.png (LibreOffice Writer help)
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://www.libreoffice.org/download/flatpak/
|
||||
[6]: https://opensource.com/article/19/7/bash-aliases
|
@ -0,0 +1,81 @@
|
||||
[#]: subject: (Track your family calendar with a Raspberry Pi and a low-power display)
|
||||
[#]: via: (https://opensource.com/article/21/3/family-calendar-raspberry-pi)
|
||||
[#]: author: (Javier Pena https://opensource.com/users/jpena)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wyxplus)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13222-1.html)
|
||||
|
||||
利用树莓派和低功耗显示器来跟踪你的家庭日程表
|
||||
======
|
||||
|
||||
> 通过利用开源工具和电子墨水屏,让每个人都清楚家庭的日程安排。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/21/091512dkbgb3vzgjrz2935.jpg)
|
||||
|
||||
有些家庭的日程安排很复杂:孩子们有上学活动和放学后的活动,你想要记住的重要事情,每个人都有多个约会等等。虽然你可以使用手机和应用程序来关注所有事情,但在家中放置一个大型低功耗显示器来显示家人的日程,不是更好吗?电子墨水日程表正好可以满足这个需求!
|
||||
|
||||
![E Ink calendar][2]
|
||||
|
||||
### 硬件
|
||||
|
||||
这个项目是作为假日项目开始的,因此我试着尽可能多地旧物利用。其中包括一台已经闲置了很久的树莓派 2。由于我没有电子墨水屏,因此我需要购买一个。幸运的是,我找到了一家供应商,该供应商为支持树莓派的屏幕提供了 [开源驱动程序和示例][4],该屏幕使用 [GPIO][5] 端口连接。
|
||||
|
||||
我的家人还想在不同的日程表之间切换,因此需要某种形式的输入。我没有添加 USB 键盘,而是选择了一种更简单的解决方案,购买了一个类似于 [这篇文章][6] 中所描述的 1x4 键盘。这使我可以将键盘连接到树莓派的某些 GPIO 端口上。
|
||||
|
||||
最后,我需要一个相框来容纳整个设置。虽然背面看起来有些凌乱,但它能完成工作。
|
||||
|
||||
![Calendar internals][7]
|
||||
|
||||
### 软件
|
||||
|
||||
我从 [一个类似的项目][8] 中获得了灵感,并开始为我的项目编写 Python 代码。我需要从两个地方获取数据:
|
||||
|
||||
* 天气信息:从 [OpenWeather API][9] 获取
|
||||
* 时间信息:我打算使用 [CalDav 标准][10] 连接到一个在我家服务器上运行的日程表
|
||||
|
||||
由于必须等待一些零件的送达,因此我使用了模块化的方法来进行输入和显示,这样我可以在没有硬件的情况下调试大多数代码。日程表应用程序需要驱动程序,于是我编写了一个 [Pygame][11] 驱动程序以便能在台式机上运行它。
|
||||
|
||||
编写代码最好的部分是能够重用现有的开源项目,所以访问不同的 API 很容易。我可以专注于设计用户界面,其中包括每个人的周历和每个人的日历,以及允许使用小键盘来选择日程。并且我花时间又添加了一些额外的功能,例如特殊日子的自定义屏幕保护程序。
|
||||
|
||||
![E Ink calendar screensaver][12]
|
||||
|
||||
最后的集成步骤是确保我的日程表应用程序在启动时运行,并且能够容错。我使用了一个基本的 [树莓派系统][13] 镜像,并将该应用程序配置为 systemd 服务,以便它在出现故障或系统重新启动后依旧能运行。
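这样的 systemd 单元文件大致如下(示意草稿:单元名、用户和路径都是假设的,需按你的实际安装位置调整):

```
# /etc/systemd/system/eink-calendar.service(示例路径)
[Unit]
Description=E-Ink family calendar
After=network-online.target

[Service]
User=pi
ExecStart=/usr/bin/python3 /home/pi/eink-calendar/main.py
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

然后用 `sudo systemctl enable --now eink-calendar.service` 启用即可。其中 `Restart=on-failure` 正是让程序"出现故障后依旧运行"的关键。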
|
||||
|
||||
做完所有工作,我把代码上传到了 [GitHub][14]。因此,如果你要创建类似的日历,可以随时查看并重构它!
|
||||
|
||||
### 结论
|
||||
|
||||
日程表已成为我们厨房中的日常工具。它可以帮助我们记住我们的日常活动,甚至我们的孩子在上学前,都可以使用它来查看日程的安排。
|
||||
|
||||
对我而言,这个项目让我感受到开源的力量。如果没有开源的驱动程序、库以及开放 API,我们依旧还在用纸和笔来安排日程。很疯狂,不是吗?
|
||||
|
||||
需要确保你的日程不冲突吗?学习如何使用这些免费的开源项目来做到这点。
|
||||
|
||||
------
|
||||
via: https://opensource.com/article/21/3/family-calendar-raspberry-pi
|
||||
|
||||
作者:[Javier Pena][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wyxplus](https://github.com/wyxplus)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jpena
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar-coffee.jpg?itok=9idm1917 "Calendar with coffee and breakfast"
|
||||
[2]: https://opensource.com/sites/default/files/uploads/calendar.jpg "E Ink calendar"
|
||||
[3]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[4]: https://github.com/waveshare/e-Paper
|
||||
[5]: https://opensource.com/article/19/3/gpio-pins-raspberry-pi
|
||||
[6]: https://www.instructables.com/1x4-Membrane-Keypad-w-Arduino/
|
||||
[7]: https://opensource.com/sites/default/files/uploads/calendar_internals.jpg "Calendar internals"
|
||||
[8]: https://github.com/zli117/EInk-Calendar
|
||||
[9]: https://openweathermap.org
|
||||
[10]: https://en.wikipedia.org/wiki/CalDAV
|
||||
[11]: https://github.com/pygame/pygame
|
||||
[12]: https://opensource.com/sites/default/files/uploads/calendar_screensaver.jpg "E Ink calendar screensaver"
|
||||
[13]: https://www.raspberrypi.org/software/
|
||||
[14]: https://github.com/javierpena/eink-calendar
|
@ -3,22 +3,22 @@
|
||||
[#]: author: (Kader Miyanyedi https://fedoramagazine.org/author/moonkat/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13202-1.html)
|
||||
|
||||
如何在 Fedora 上使用 Poetry 来管理你的 Python 项目?
|
||||
======
|
||||
|
||||
![Python & Poetry on Fedora][1]
|
||||
|
||||
Python 开发人员经常创建一个新的虚拟环境来分离项目依赖,然后用 _pip、pipenv_ 等工具来管理它们。Poetry 是一个简化 Python 中依赖管理和打包的工具。这篇文章将向你展示如何在 Fedora 上使用 Poetry 来管理你的 Python 项目。
|
||||
Python 开发人员经常创建一个新的虚拟环境来分离项目依赖,然后用 `pip`、`pipenv` 等工具来管理它们。Poetry 是一个简化 Python 中依赖管理和打包的工具。这篇文章将向你展示如何在 Fedora 上使用 Poetry 来管理你的 Python 项目。
|
||||
|
||||
与其他工具不同,Poetry 只使用一个配置文件来进行依赖管理、打包和发布。这消除了对不同文件的需求,如 _Pipfile、MANIFEST.in、setup.py_ 等。这也比使用多个工具更快。
|
||||
与其他工具不同,Poetry 只使用一个配置文件来进行依赖管理、打包和发布。这消除了对不同文件的需求,如 `Pipfile`、`MANIFEST.in`、`setup.py` 等。这也比使用多个工具更快。
|
||||
|
||||
下面详细介绍一下开始使用 Poetry 时使用的命令。
|
||||
|
||||
### **在 Fedora 上安装 Poetry**
|
||||
### 在 Fedora 上安装 Poetry
|
||||
|
||||
如果你已经使用 Fedora 32 或以上版本,你可以使用这个命令直接从命令行安装 Poetry:
|
||||
|
||||
@ -26,22 +26,20 @@ Python 开发人员经常创建一个新的虚拟环境来分离项目依赖,
|
||||
$ sudo dnf install poetry
|
||||
```
|
||||
|
||||
```
|
||||
编者注:在 Fedora Silverblue 或 CoreOS 上,Python 3.9.2 是核心提交的一部分,你可以用下面的命令安装 Poetry:
|
||||
|
||||
```
|
||||
|
||||
rpm-ostree install poetry
|
||||
|
||||
```
|
||||
|
||||
### 初始化一个项目
|
||||
|
||||
使用 _new_ 命令创建一个新项目。
|
||||
使用 `new` 命令创建一个新项目:
|
||||
|
||||
```
|
||||
$ poetry new poetry-project
|
||||
```
|
||||
|
||||
The structure of a project created with Poetry looks like this:
|
||||
用 Poetry 创建的项目结构是这样的:
|
||||
|
||||
```
|
||||
@ -54,7 +52,7 @@ The structure of a project created with Poetry looks like this:
|
||||
└── test_poetry_project.py
|
||||
```
|
||||
|
||||
Poetry 使用 _pyproject.toml_ 来管理项目的依赖。最初,这个文件看起来类似于这样:
|
||||
Poetry 使用 `pyproject.toml` 来管理项目的依赖。最初,这个文件看起来类似于这样:
|
||||
|
||||
```
|
||||
[tool.poetry]
|
||||
@ -81,10 +79,7 @@ build-backend = "poetry.masonry.api"
|
||||
* 第三部分包含开发依赖。
|
||||
* 第四部分描述的是符合 [PEP 517][2] 的构建系统。
|
||||
如果你已经有一个项目,或者创建了自己的项目文件夹,并且你想使用 Poetry,请在你的项目中运行 _init_ 命令。
|
||||
如果你已经有一个项目,或者创建了自己的项目文件夹,并且你想使用 Poetry,请在你的项目中运行 `init` 命令。
|
||||
|
||||
```
|
||||
$ poetry init
|
||||
@ -100,7 +95,7 @@ $ poetry init
|
||||
$ poetry shell
|
||||
```
|
||||
|
||||
Poetry 默认在 _/home/username/.cache/pypoetry_ 项目中创建虚拟环境。你可以通过编辑 poetry 配置来更改默认路径。使用下面的命令查看配置列表:
|
||||
Poetry 默认在 `/home/username/.cache/pypoetry` 项目中创建虚拟环境。你可以通过编辑 Poetry 配置来更改默认路径。使用下面的命令查看配置列表:
|
||||
|
||||
```
|
||||
$ poetry config --list
|
||||
@ -111,7 +106,7 @@ virtualenvs.in-project = true
|
||||
virtualenvs.path = "{cache-dir}/virtualenvs"
|
||||
```
|
||||
|
||||
修改 _virtualenvs.in-project_ 配置变量,在项目目录下创建一个虚拟环境。Poetry 命令是:
|
||||
修改 `virtualenvs.in-project` 配置变量,在项目目录下创建一个虚拟环境。Poetry 命令是:
|
||||
|
||||
```
|
||||
$ poetry config virtualenvs.in-project true
|
||||
@ -119,27 +114,27 @@ $ poetry config virtualenv.in-project true
|
||||
|
||||
### 添加依赖
|
||||
|
||||
使用 _poetry add_ 命令为项目安装一个依赖。
|
||||
使用 `poetry add` 命令为项目安装一个依赖:
|
||||
|
||||
```
|
||||
$ poetry add django
|
||||
```
|
||||
|
||||
你可以使用带有 _-dev_ 选项的 _add_ 命令来识别任何只用于开发环境的依赖。
|
||||
你可以使用带有 `--dev` 选项的 `add` 命令来识别任何只用于开发环境的依赖:
|
||||
|
||||
```
|
||||
$ poetry add black --dev
|
||||
```
|
||||
|
||||
**add** 命令会创建一个 _poetry.lock_ 文件,用来跟踪软件包的版本。如果 _poetry.lock_ 文件不存在,那么会安装 _pyproject.toml_ 中所有依赖项的最新版本。如果 _poetry.lock_ 存在,Poetry 会使用文件中列出的确切版本,以确保每个使用这个项目的人的软件包版本是一致的。
|
||||
`add` 命令会创建一个 `poetry.lock` 文件,用来跟踪软件包的版本。如果 `poetry.lock` 文件不存在,那么会安装 `pyproject.toml` 中所有依赖项的最新版本。如果 `poetry.lock` 存在,Poetry 会使用文件中列出的确切版本,以确保每个使用这个项目的人的软件包版本是一致的。
|
||||
|
||||
使用 poetry _install_ 命令来安装当前项目中的所有依赖。
|
||||
使用 `poetry install` 命令来安装当前项目中的所有依赖:
|
||||
|
||||
```
|
||||
$ poetry install
|
||||
```
|
||||
|
||||
通过使用 _no-dev_ 选项防止安装开发依赖。
|
||||
通过使用 `--no-dev` 选项防止安装开发依赖:
|
||||
|
||||
```
|
||||
$ poetry install --no-dev
|
||||
@ -147,7 +142,7 @@ $ poetry install --no-dev
|
||||
|
||||
### 列出软件包
|
||||
|
||||
_show_ 命令会列出所有可用的软件包。_tree_ 选项将以树状列出软件包。
|
||||
`show` 命令会列出所有可用的软件包。`--tree` 选项将以树状列出软件包:
|
||||
|
||||
```
|
||||
$ poetry show --tree
|
||||
@ -158,7 +153,7 @@ django 3.1.7 A high-level Python Web framework that encourages rapid development
|
||||
└── sqlparse >=0.2.2
|
||||
```
|
||||
|
||||
包含软件包名称,以列出特定软件包的详细信息。
|
||||
包含软件包名称,以列出特定软件包的详细信息:
|
||||
|
||||
```
|
||||
$ poetry show requests
|
||||
@ -174,7 +169,7 @@ dependencies
|
||||
- urllib3 >=1.21.1,<1.27
|
||||
```
|
||||
|
||||
最后,如果你想知道软件包的最新版本,你可以通过 _latest_ 选项。
|
||||
最后,如果你想知道软件包的最新版本,你可以通过 `--latest` 选项:
|
||||
|
||||
```
|
||||
$ poetry show --latest
|
||||
@ -194,7 +189,7 @@ via: https://fedoramagazine.org/how-to-use-poetry-to-manage-your-python-projects
|
||||
作者:[Kader Miyanyedi][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,150 @@
|
||||
[#]: subject: (Learn Python dictionary values with Jupyter)
|
||||
[#]: via: (https://opensource.com/article/21/3/dictionary-values-python)
|
||||
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (DCOLIVERSUN)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13236-1.html)
|
||||
|
||||
用 Jupyter 学习 Python 字典
|
||||
======
|
||||
|
||||
> 字典数据结构可以帮助你快速访问信息。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/26/094720i58u5qxx3l4qsssx.jpg)
|
||||
|
||||
字典是 Python 编程语言使用的数据结构。一个 Python 字典由多个键值对组成;每个键值对将键映射到其关联的值上。
|
||||
|
||||
例如你是一名老师,想把学生姓名与成绩对应起来。你可以使用 Python 字典,将学生姓名映射到他们关联的成绩上。此时,键值对中键是姓名,值是对应的成绩。
|
||||
|
||||
如果你想知道某个学生的考试成绩,你可以从字典中访问。这种快捷查询方式可以为你节省解析整个列表找到学生成绩的时间。
|
||||
|
||||
本文介绍了如何通过键访问对应的字典值。学习前,请确保你已经安装了 [Anaconda 包管理器][2]和 [Jupyter 笔记本][3]。
|
||||
|
||||
### 1、在 Jupyter 中打开一个新的笔记本
|
||||
|
||||
首先在 Web 浏览器中打开并运行 Jupyter。然后,
|
||||
|
||||
1. 转到左上角的 “File”。
|
||||
2. 选择 “New Notebook”,点击 “Python 3”。
|
||||
|
||||
![新建 Jupyter 笔记本][4]
|
||||
|
||||
开始时,新建的笔记本是无标题的,你可以将其重命名为任何名称。我为我的笔记本取名为 “OpenSource.com Data Dictionary Tutorial”。
|
||||
|
||||
笔记本中标有行号的位置就是你写代码的区域,也是你输入的位置。
|
||||
|
||||
在 macOS 上,可以同时按 `Shift + Return` 键得到输出。在创建新的代码区域前,请确保完成上述动作;否则,你写的任何附加代码可能无法运行。
|
||||
|
||||
### 2、新建一个键值对
|
||||
|
||||
在字典中输入你希望访问的键与值。输入前,你需要在字典上下文中定义它们的含义:
|
||||
|
||||
```
|
||||
empty_dictionary = {}
|
||||
grades = {
|
||||
"Kelsey": 87,
|
||||
"Finley": 92
|
||||
}
|
||||
|
||||
one_line = {"a": 1, "b": 2}
|
||||
```
|
||||
|
||||
![定义字典键值对的代码][6]
|
||||
|
||||
这段代码让字典将特定键与其各自的值关联起来。字典按名称存储数据,从而可以更快地查询。
|
||||
|
||||
### 3、通过键访问字典值
|
||||
|
||||
现在你想查询指定的字典值;在上述例子中,字典值指特定学生的成绩。首先,点击 “Insert” 后选择 “Insert Cell Below”。
|
||||
|
||||
![在 Jupyter 插入新建单元格][7]
|
||||
|
||||
在新单元格中,定义字典中的键与值。
|
||||
|
||||
然后,告诉字典打印该值的键,找到需要的值。例如,查询名为 Kelsey 的学生的成绩:
|
||||
|
||||
```
|
||||
# 访问字典中的数据
|
||||
grades = {
|
||||
"Kelsey": 87,
|
||||
"Finley": 92
|
||||
}
|
||||
|
||||
print(grades["Kelsey"])
|
||||
87
|
||||
```
|
||||
|
||||
![查询特定值的代码][8]
|
||||
|
||||
当你查询 Kelsey 的成绩(也就是你想要查询的值)时,如果你用的是 macOS,只需要同时按 `Shift+Return` 键。
|
||||
|
||||
你会在单元格下方看到 Kelsey 的成绩。
|
||||
|
||||
### 4、更新已有的键
|
||||
|
||||
当把一位学生的错误成绩添加到字典时,你会怎么办?可以通过更新字典、存储新值来修正这类错误。
|
||||
|
||||
首先,选择你想更新的那个键。在上述例子中,假设你错误地输入了 Finley 的成绩,那么 Finley 就是你需要更新的键。
|
||||
|
||||
为了更新 Finley 的成绩,你需要在下方插入新的单元格,然后创建一个新的键值对。同时按 `Shift+Return` 键打印字典全部信息:
|
||||
|
||||
```
|
||||
grades["Finley"] = 90
|
||||
print(grades)
|
||||
|
||||
{'Kelsey': 87, 'Finley': 90}
|
||||
```
|
||||
|
||||
![更新键的代码][9]
|
||||
|
||||
单元格下方输出带有 Finley 更新成绩的字典。
|
||||
|
||||
### 5、添加新键
|
||||
|
||||
假设你得到一位新学生的考试成绩。你可以用新键值对将那名学生的姓名与成绩补充到字典中。
|
||||
|
||||
插入新的单元格,以键值对形式添加新学生的姓名与成绩。当你完成这些后,同时按 `Shift+Return` 键打印字典全部信息:
|
||||
|
||||
```
|
||||
grades["Alex"] = 88
|
||||
print(grades)
|
||||
|
||||
{'Kelsey': 87, 'Finley': 90, 'Alex': 88}
|
||||
```
|
||||
|
||||
![添加新键][10]
|
||||
|
||||
所有的键值对输出在单元格下方。
|
||||
|
||||
### 使用字典
|
||||
|
||||
请记住,键与值可以是任意数据类型,但它们很少是<ruby>[非基本数据类型][11]<rt>non-primitive types</rt></ruby>。此外,字典不能以指定的顺序存储、组织里面的数据。如果你想要数据有序,最好使用 Python 列表,而非字典。
|
||||
|
||||
如果你考虑使用字典,首先要确认你的数据结构是否是合适的,例如像电话簿的结构。如果不是,列表、元组、树或者其他数据结构可能是更好的选择。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/dictionary-values-python
|
||||
|
||||
作者:[Lauren Maffeo][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[DCOLIVERSUN](https://github.com/DCOLIVERSUN)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/lmaffeo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd (Hands on a keyboard with a Python book )
|
||||
[2]: https://docs.anaconda.com/anaconda/
|
||||
[3]: https://opensource.com/article/18/3/getting-started-jupyter-notebooks
|
||||
[4]: https://opensource.com/sites/default/files/uploads/new-jupyter-notebook.png (Create Jupyter notebook)
|
||||
[5]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/define-keys-values.png (Code for defining key-value pairs in the dictionary)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/jupyter_insertcell.png (Inserting a new cell in Jupyter)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/lookforvalue.png (Code to look for a specific value)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/jupyter_updatekey.png (Code for updating a key)
|
||||
[10]: https://opensource.com/sites/default/files/uploads/jupyter_addnewkey.png (Add a new key)
|
||||
[11]: https://www.datacamp.com/community/tutorials/data-structures-python
|
@ -0,0 +1,101 @@
|
||||
[#]: subject: (Use gImageReader to Extract Text From Images and PDFs on Linux)
|
||||
[#]: via: (https://itsfoss.com/gimagereader-ocr/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13205-1.html)
|
||||
|
||||
在 Linux 上使用 gImageReader 从图像和 PDF 中提取文本
|
||||
======
|
||||
|
||||
> gImageReader 是一个 GUI 工具,用于在 Linux 中利用 Tesseract OCR 引擎从图像和 PDF 文件中提取文本。
|
||||
|
||||
[gImageReader][1] 是 [Tesseract 开源 OCR 引擎][2]的一个前端。Tesseract 最初是由 HP 公司开发的,然后在 2006 年开源。
|
||||
|
||||
基本上,OCR(光学字符识别)引擎可以让你从图片或文件(PDF)中扫描文本。默认情况下,它可以检测几种语言,还支持通过 Unicode 字符扫描。
|
||||
|
||||
然而,Tesseract 本身是一个没有任何 GUI 的命令行工具。因此,gImageReader 就来解决这点,它可以让任何用户使用它从图像和文件中提取文本。
|
||||
|
||||
让我重点介绍一些有关它的内容,同时说下我在测试期间的使用经验。
|
||||
|
||||
### gImageReader:一个跨平台的 Tesseract OCR 前端
|
||||
|
||||
![][3]
|
||||
|
||||
简单来说,当你需要从 PDF 文件或包含任意文本的图像中提取文字时,gImageReader 非常方便。
|
||||
|
||||
无论你是需要它来进行拼写检查还是翻译,它都应该对特定的用户群体有用。
|
||||
|
||||
以列表总结下功能,这里是你可以用它做的事情:
|
||||
|
||||
* 从磁盘、扫描设备、剪贴板和截图中添加 PDF 文档和图像
|
||||
* 能够旋转图像
|
||||
* 常用的图像控制,用于调整亮度、对比度和分辨率。
|
||||
* 直接通过应用扫描图像
|
||||
* 能够一次性处理多个图像或文件
|
||||
* 手动或自动识别区域定义
|
||||
* 识别纯文本或 [hOCR][4] 文档
|
||||
* 编辑器显示识别的文本
|
||||
  * 可对提取的文本进行拼写检查
|
||||
* 从 hOCR 文件转换/导出为 PDF 文件
|
||||
* 将提取的文本导出为 .txt 文件
|
||||
* 跨平台(Windows)
|
||||
|
||||
### 在 Linux 上安装 gImageReader
|
||||
|
||||
**注意**:你需要安装 Tesseract 语言包,才能从软件管理器中的图像/文件中进行检测。
|
||||
|
||||
![][5]
|
||||
|
||||
你可以在一些 Linux 发行版如 Fedora 和 Debian 的默认仓库中找到 gImageReader。
|
||||
|
||||
对于 Ubuntu,你需要添加一个 PPA,然后安装它。要做到这点,下面是你需要在终端中输入的内容:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:sandromani/gimagereader
|
||||
sudo apt update
|
||||
sudo apt install gimagereader
|
||||
```
|
||||
|
||||
你也可以从 openSUSE 的构建服务中找到它,Arch Linux 用户可在 [AUR][6] 中找到。
|
||||
|
||||
所有的仓库和包的链接都可以在他们的 [GitHub 页面][1]中找到。
|
||||
|
||||
### gImageReader 使用经验
|
||||
|
||||
当你需要从图像中提取文本时,gImageReader 是一个相当有用的工具。当你尝试从 PDF 文件中提取文本时,它的效果非常好。
|
||||
|
||||
对于从智能手机拍摄的图片,提取出来的文本与原文比较接近,但不够准确。如果改用扫描件,字符识别的效果可能会更好。
|
||||
|
||||
所以,你需要亲自尝试一下,看看它是否对你而言工作良好。我在 Linux Mint 20.1(基于 Ubuntu 20.04)上试过。
|
||||
|
||||
我只遇到了一个从设置中管理语言的问题,我没有得到一个快速的解决方案。如果你遇到此问题,那么可能需要对其进行故障排除,并进一步了解如何解决该问题。
|
||||
|
||||
![][7]
|
||||
|
||||
除此之外,它工作良好。
|
||||
|
||||
试试吧,让我知道它是如何为你服务的!如果你知道类似的东西(和更好的),请在下面的评论中告诉我。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/gimagereader-ocr/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/manisandro/gImageReader
|
||||
[2]: https://tesseract-ocr.github.io/
|
||||
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gImageReader.png?resize=800%2C456&ssl=1
|
||||
[4]: https://en.wikipedia.org/wiki/HOCR
|
||||
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/tesseract-language-pack.jpg?resize=800%2C620&ssl=1
|
||||
[6]: https://itsfoss.com/aur-arch-linux/
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gImageReader-1.jpg?resize=800%2C460&ssl=1
|
@ -3,27 +3,27 @@
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13199-1.html)
|
||||
|
||||
如何更新 openSUSE Linux 系统
|
||||
======
|
||||
|
||||
从我记事起,我就一直是 Ubuntu 的用户。我曾经转向过其他发行版,但最终还是不断地回到 Ubuntu。但最近,我开始使用 openSUSE 来尝试一些非 Debian 的东西。
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/13/110932nsq33tjit9933h2k.jpg)
|
||||
|
||||
随着我对 [openSUSE][1] 的不断探索,我不断发现 SUSE 中略有不同的东西,并打算在 It's FOSS 的教程中介绍它们。
|
||||
就我记忆所及,我一直是 Ubuntu 的用户。我曾经转向过其他发行版,但最终还是一次次回到 Ubuntu。但最近,我开始使用 openSUSE 来尝试一些非 Debian 的东西。
|
||||
|
||||
第一次,我写的是更新 openSUSE 系统。有两种方法可以做到:
|
||||
随着我对 [openSUSE][1] 的不断探索,我不断发现 SUSE 中略有不同的东西,并打算在教程中介绍它们。
|
||||
|
||||
第一篇我写的是更新 openSUSE 系统。有两种方法可以做到:
|
||||
|
||||
* 使用终端(适用于 openSUSE 桌面和服务器)
|
||||
* 使用图形工具(适用于 openSUSE 桌面)
|
||||
|
||||
|
||||
|
||||
### 通过命令行更新 openSUSE
|
||||
|
||||
更新 openSUSE 的最简单方法是使用 zypper 命令。它提供了补丁和更新管理的全部功能。它可以解决文件冲突和依赖性问题。更新也包括 Linux 内核。
|
||||
更新 openSUSE 的最简单方法是使用 `zypper` 命令。它提供了补丁和更新管理的全部功能。它可以解决文件冲突和依赖性问题。更新也包括 Linux 内核。
|
||||
|
||||
如果你正在使用 openSUSE Leap,请使用这个命令:
|
||||
|
||||
@ -31,9 +31,9 @@
|
||||
sudo zypper update
|
||||
```
|
||||
|
||||
你也可以用 `up` 代替 `update`,但我觉得 update 更容易记住。
|
||||
你也可以用 `up` 代替 `update`,但我觉得 `update` 更容易记住。
|
||||
|
||||
如果你正在使用 openSUSE Tumbleweed,请使用 `dist-upgrade` 或者 `dup`(简称)。Tumbleweed 是[滚动发行版][2],因此建议使用 dist-upgrade 选项。
|
||||
如果你正在使用 openSUSE Tumbleweed,请使用 `dist-upgrade` 或者 `dup`(简称)。Tumbleweed 是[滚动发行版][2],因此建议使用 `dist-upgrade` 选项。
|
||||
|
||||
```
|
||||
sudo zypper dist-upgrade
|
||||
@ -45,7 +45,7 @@ sudo zypper dist-upgrade
|
||||
|
||||
如果你的系统需要重启,你会得到通知。
|
||||
|
||||
如果你只是想刷新仓库(比如 sudo apt update),你可以使用这个命令:
|
||||
如果你只是想刷新仓库(像 `sudo apt update` 一样),你可以使用这个命令:
|
||||
|
||||
```
|
||||
sudo zypper refresh
|
||||
@ -59,7 +59,7 @@ sudo zypper list-updates
|
||||
|
||||
### 以图形方式更新 openSUSE
|
||||
|
||||
如果你使用 openSUSE 作为桌面,你将有额外的选择使用 GUI 工具来安装更新。这个工具可能会根据[你使用的桌面环境][4]而改变。
|
||||
如果你使用 openSUSE 作为桌面,你可以选择使用 GUI 工具来安装更新。这个工具可能会根据 [你使用的桌面环境][4] 而改变。
|
||||
|
||||
例如,KDE 有自己的软件中心,叫做 “Discover”。你可以用它来搜索和安装新的应用。你也可以用它来安装系统更新。
|
||||
|
||||
@ -82,7 +82,7 @@ sudo zypper addlock plasma5-pk-updates
|
||||
|
||||
![][8]
|
||||
|
||||
就是这些了。这是一篇简短的文章。在下一篇 SUSE 教程中,我将通过实例向大家展示一些常用的 zypper 命令。敬请期待。
|
||||
就是这些了。这是一篇简短的文章。在下一篇 SUSE 教程中,我将通过实例向大家展示一些常用的 `zypper` 命令。敬请期待。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -91,7 +91,7 @@ via: https://itsfoss.com/update-opensuse/
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,110 @@
|
||||
[#]: subject: (Understanding file names and directories in FreeDOS)
|
||||
[#]: via: (https://opensource.com/article/21/3/files-freedos)
|
||||
[#]: author: (Kevin O'Brien https://opensource.com/users/ahuka)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13208-1.html)
|
||||
|
||||
了解 FreeDOS 中的文件名和目录
|
||||
======
|
||||
|
||||
> 了解如何在 FreeDOS 中创建,编辑和命名文件。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/16/094544qanrpbnlmltilump.jpg)
|
||||
|
||||
开源操作系统 [FreeDOS][2] 是一个久经考验的项目,可帮助用户玩复古游戏、更新固件、运行过时但受欢迎的应用以及研究操作系统设计。FreeDOS 提供了有关个人计算历史的见解(因为它实现了 80 年代初的事实上的操作系统),但是它是在现代环境中进行的。在本文中,我将使用 FreeDOS 来解释文件名和扩展名是如何发展的。
|
||||
|
||||
### 了解文件名和 ASCII 文本
|
||||
|
||||
FreeDOS 文件名遵循所谓的 *8.3 惯例*。这意味着所有的 FreeDOS 文件名都有两个部分,分别包含最多八个和三个字符。第一部分通常被称为*文件名*(这可能会让人有点困惑,因为文件名和文件扩展名的组合也被称为文件名)。这一部分可以有一个到八个字符。之后是*扩展名*,可以有零到三个字符。这两部分之间用一个点隔开。
|
||||
|
||||
文件名可以使用任何字母或数字。键盘上的许多其他字符也是允许的,但不是所有的字符。这是因为许多其他字符在 FreeDOS 中被指定了特殊用途。一些可以出现在 FreeDOS 文件名中的字符有:
|
||||
|
||||
|
||||
```
|
||||
~ ! @ # $ % ^ & ( ) _ - { } `
|
||||
```
|
||||
|
||||
扩展 [ASCII][3] 字符集中也有一些字符可以使用,例如 `<60>`。
|
||||
|
||||
在 FreeDOS 中具有特殊意义的字符,因此不能用于文件名中,包括:
|
||||
|
||||
```
|
||||
* / + | \ = ? [ ] ; : " . < > ,
|
||||
```
|
||||
|
||||
另外,你不能在 FreeDOS 文件名中使用空格。FreeDOS 控制台 [使用空格将命令与其选项和参数分隔][4]。
|
||||
|
||||
FreeDOS 是*不区分大小写*的,所以不管你是使用大写字母还是小写字母都无所谓。所有的字母都会被转换为大写字母,所以无论你做什么,你的文件最终都会在名称中使用大写字母。
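上述 8.3 规则可以用一个小脚本来检验(示意草稿,在 Linux shell 下运行,字符类做了简化,FreeDOS 本身并不需要它):

```
# 检查一个名字是否符合 8.3 规则:1~8 个字符的主名,加上可选的"点 + 最多 3 个字符"扩展名
is_8dot3() {
  echo "$1" | grep -Eq '^[A-Za-z0-9~!@#$%^&()_{}`-]{1,8}(\.[A-Za-z0-9~!@#$%^&()_{}`-]{0,3})?$'
}

is_8dot3 "FOO.TXT"         && echo "FOO.TXT: ok"
is_8dot3 "NOTES.JAN"       && echo "NOTES.JAN: ok"
is_8dot3 "toolongname.txt" || echo "toolongname.txt: invalid"
```

注意,含空格的名字会被自动拒绝,因为空格不在允许的字符类里。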
|
||||
|
||||
#### 文件扩展名
|
||||
|
||||
FreeDOS 中的文件不需要有扩展名,但文件扩展名确实有一些用途。某些文件扩展名在 FreeDOS 中有内置的含义,例如:
|
||||
|
||||
* **EXE**:可执行文件
|
||||
* **COM**:命令文件
|
||||
* **SYS**:系统文件
|
||||
* **BAT**:批处理文件
|
||||
|
||||
特定的软件程序使用其他扩展名,或者你可以在创建文件时使用它们。这些扩展名没有绝对的文件关联,因此如果你使用 FreeDOS 的文字处理器,你的文件使用什么扩展名并不重要。如果你愿意,你可以发挥创意,将扩展名作为你的文件系统的一部分。例如,你可以用 `*.JAN`、`*.FEB`、`*.MAR`、`*.APR` 等等来命名你的备忘录。
|
||||
|
||||
### 编辑文件
|
||||
|
||||
FreeDOS 自带的 Edit 应用可以快速方便地进行文本编辑。它是一个简单的编辑器,沿屏幕顶部有一个菜单栏,可以方便地访问所有常用的功能(如复制、粘贴、保存等)。
|
||||
|
||||
![Editing in FreeDOS][5]
|
||||
|
||||
正如你所期望的那样,还有很多其他的文本编辑器可以使用,包括小巧但用途广泛的 [e3 编辑器][7]。你可以在 GitLab 上找到各种各样的 [FreeDOS 应用][8] 。
|
||||
|
||||
### 创建文件
|
||||
|
||||
你可以在 FreeDOS 中使用 `touch` 命令创建空文件。这个简单的工具可以更新文件的修改时间或创建一个新文件。
|
||||
|
||||
```
|
||||
C:\>touch foo.txt
|
||||
C:\>dir
|
||||
FOO TXT 0 01-12-2021 10:00a
|
||||
```
|
||||
|
||||
你也可以直接从 FreeDOS 控制台创建文件,而不需要使用 Edit 文本编辑器。首先,使用 `copy` 命令将控制台中的输入(简称 `con`)复制到一个新的文件对象中。用 `Ctrl+Z` 终止输入,然后按**回车**键:
|
||||
|
||||
```
|
||||
C:\>copy con test.txt
|
||||
con => test.txt
|
||||
This is a test file.
|
||||
^Z
|
||||
```
|
||||
|
||||
`Ctrl+Z` 字符在控制台中显示为 `^Z`。它并没有被复制到文件中,而是作为文件结束(EOF)的分隔符。换句话说,它告诉 FreeDOS 何时停止复制。这是一个很好的技巧,可以用来做快速的笔记或开始一个简单的文档,以便以后工作。
|
||||
|
||||
### 文件和 FreeDOS
|
||||
|
||||
FreeDOS 是开源的、免费的且 [易于安装][9]。探究 FreeDOS 如何处理文件,可以帮助你了解多年来计算的发展,不管你平时使用的是什么操作系统。启动 FreeDOS,开始探索现代复古计算吧!
|
||||
|
||||
_本文中的部分信息曾发表在 [DOS 课程 7:DOS 文件名;ASCII][10] 中(CC BY-SA 4.0)。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/files-freedos
|
||||
|
||||
作者:[Kevin O'Brien][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ahuka
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
|
||||
[2]: https://www.freedos.org/
|
||||
[3]: tmp.2sISc4Tp3G#ASCII
|
||||
[4]: https://opensource.com/article/21/2/set-your-path-freedos
|
||||
[5]: https://opensource.com/sites/default/files/uploads/freedos_2_files-edit.jpg (Editing in FreeDOS)
|
||||
[6]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[7]: https://opensource.com/article/20/12/e3-linux
|
||||
[8]: https://gitlab.com/FDOS/
|
||||
[9]: https://opensource.com/article/18/4/gentle-introduction-freedos
|
||||
[10]: https://www.ahuka.com/dos-lessons-for-self-study-purposes/dos-lesson-7-dos-filenames-ascii/
|
@ -0,0 +1,175 @@
|
||||
[#]: subject: (Linux Mint Cinnamon vs MATE vs Xfce: Which One Should You Use?)
|
||||
[#]: via: (https://itsfoss.com/linux-mint-cinnamon-mate-xfce/)
|
||||
[#]: author: (Dimitrios https://itsfoss.com/author/dimitrios/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13213-1.html)
|
||||
|
||||
Cinnamon vs MATE vs Xfce:你应该选择哪一个 Linux Mint 口味?
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/18/111916ljidnfwwsxec1fqf.jpg)
|
||||
|
||||
Linux Mint 无疑是 [最适合初学者的 Linux 发行版之一][1]。尤其是对于刚刚迈向 Linux 世界的 Windows 用户来说,更是如此。
|
||||
|
||||
2006 年以来(也就是 Linux Mint 首次发布的那一年),他们开发了一系列提高用户体验的 [工具][2]。此外,Linux Mint 是基于 Ubuntu 的,所以你有一个可以寻求帮助的庞大的用户社区。
|
||||
|
||||
我不打算讨论 Linux Mint 有多好。如果你已经下定决心 [安装Linux Mint][3],你可能会对它网站上的 [下载部分][4] 感到有些困惑。
|
||||
|
||||
它给了你三个选择:Cinnamon、MATE 和 Xfce。不知道该如何选择吗?我将在本文中帮你解决这个问题。
|
||||
|
||||
![][5]
|
||||
|
||||
如果你是个 Linux 的绝对新手,对上面的东西一无所知,我建议你了解一下 [什么是 Linux 桌面环境][6]。如果你能再多花点时间,请阅读这篇关于 [什么是 Linux,以及为什么有这么多看起来相似的 Linux 操作系统][7] 的优秀解释。
|
||||
|
||||
有了这些信息,你就可以了解各种 Linux Mint 版本之间的区别了。如果你不知道该选择哪一个,通过这篇文章,我将帮助你做出一个有意识的选择。
|
||||
|
||||
### 你应该选择哪个 Linux Mint 版本?
|
||||
|
||||
![][8]
|
||||
|
||||
简单来说,可供选择的有以下几种:
|
||||
|
||||
* **Cinnamon 桌面**:具有现代感的传统桌面。
|
||||
* **MATE 桌面**:类似 GNOME 2 时代的传统外观桌面。
|
||||
* **Xfce 桌面**:一个流行的轻量级桌面环境。
|
||||
|
||||
我们来逐一看看 Mint 的各个变种。
|
||||
|
||||
#### Linux Mint Cinnamon 版
|
||||
|
||||
Cinnamon 桌面是由 Linux Mint 团队开发的,显然它是 Linux Mint 的主力版本。
|
||||
|
||||
早在近十年前,当 GNOME 桌面选择了非常规的 GNOME 3 用户界面时,人们就开始了 Cinnamon 的开发,通过复刻 GNOME 2 的一些组件来保持桌面的传统外观。
|
||||
|
||||
很多 Linux 用户喜欢 Cinnamon,就是因为它有像 Windows 7 一样的界面。
|
||||
|
||||
![Linux Mint Cinnamon desktop][9]
|
||||
|
||||
##### 性能和响应能力
|
||||
|
||||
Cinnamon 桌面的性能比过去的版本有所提高,但如果没有固态硬盘,你会觉得有点迟钝。上一次我使用 Cinnamon 桌面是在 4.4.8 版,开机后的内存消耗在 750MB 左右。现在的 4.8.6 版有了很大的改进,开机后减少了 100MB 内存消耗。
|
||||
|
||||
为了获得最佳的用户体验,应该考虑双核 CPU,最低 4GB 内存。
|
||||
|
||||
![Linux Mint 20 Cinnamon idle system stats][10]
|
||||
|
||||
##### 优势
|
||||
|
||||
* 从 Windows 无缝切换
|
||||
* 赏心悦目
|
||||
* 高度 [可定制][11]
|
||||
|
||||
##### 劣势
|
||||
|
||||
* 如果你的系统只有 2GB 内存,可能还是不够理想
|
||||
|
||||
**附加建议**:如果你喜欢 Debian 而不是 Ubuntu,你可以选择 [Linux Mint Debian 版][12](LMDE)。LMDE 和带有 Cinnamon 桌面的 Debian 主要区别在于 LMDE 向其仓库提供最新的桌面环境。
|
||||
|
||||
#### Linux Mint MATE 版
|
||||
|
||||
[MATE 桌面环境][13] 也有类似的故事,它的目的是维护和支持 GNOME 2 的代码库和应用程序。它的外观和感觉与 GNOME 2 非常相似。
|
||||
|
||||
在我看来,到目前为止,MATE 桌面的最佳实现是 [Ubuntu MATE][14]。在 Linux Mint 中,你会得到一个定制版的 MATE 桌面,它符合 Cinnamon 美学,而不是传统的 GNOME 2 设定。
|
||||
|
||||
![Screenshot of Linux Mint MATE desktop][15]
|
||||
|
||||
##### 性能和响应能力
|
||||
|
||||
MATE 桌面以轻薄著称,这一点毋庸置疑。与 Cinnamon 桌面相比,其 CPU 的使用率始终保持在较低的水平,换言之,在笔记本电脑上会有更好的电池续航时间。
|
||||
|
||||
虽然感觉没有 Xfce 那么敏捷(在我看来),但不至于影响用户体验。内存消耗在 500MB 以下起步,这对于功能丰富的桌面环境来说是令人印象深刻的。
|
||||
|
||||
![Linux Mint 20 MATE idle system stats][16]
|
||||
|
||||
##### 优势
|
||||
|
||||
* 不影响 [功能][17] 的轻量级桌面
|
||||
* 足够的 [定制化][18] 可能性
|
||||
|
||||
##### 劣势
|
||||
|
||||
* 传统的外观可能会给你一种过时的感觉
|
||||
|
||||
#### Linux Mint Xfce 版
|
||||
|
||||
Xfce 项目始于 1996 年,受到了 UNIX 的 [通用桌面环境(CDE)][19] 的启发。Xfce 是 “[XForms][20] Common Environment” 的缩写,但由于它不再使用 XForms 工具箱,所以名字拼写为 “Xfce”。
|
||||
|
||||
它的目标是快速、轻量级和易于使用。Xfce 是许多流行的 Linux 发行版的主要桌面,如 [Manjaro][21] 和 [MX Linux][22]。
|
||||
|
||||
Linux Mint 提供了一个精致的 Xfce 桌面,但即使是黑暗主题也无法与 Cinnamon 桌面的美感相比。
|
||||
|
||||
![Linux Mint 20 Xfce desktop][23]
|
||||
|
||||
##### 性能和响应能力
|
||||
|
||||
Xfce 是 Linux Mint 提供的最精简的桌面环境。通过点击开始菜单、设置控制面板或探索底部面板,你会发现这是一个简单而又灵活的桌面环境。
|
||||
|
||||
尽管我认为极简主义是一个优点,但 Xfce 算不上养眼,给人的感觉比较传统。不过对于一些用户来说,经典的桌面环境正是他们的首选。
|
||||
|
||||
在第一次开机时,内存的使用情况与 MATE 桌面类似,但并不尽如人意。如果你的电脑没有配备 SSD,Xfce 桌面环境可以让你的系统复活。
|
||||
|
||||
![Linux Mint 20 Xfce idle system stats][24]
|
||||
|
||||
##### 优势
|
||||
|
||||
* 使用简单
|
||||
* 非常轻巧,适合老式硬件
|
||||
* 坚如磐石的稳定
|
||||
|
||||
##### 劣势
|
||||
|
||||
* 过时的外观
|
||||
* 与 Cinnamon 相比,可能没有那么多的定制化服务
|
||||
|
||||
### 总结
|
||||
|
||||
由于这三款桌面环境都是基于 GTK 工具包的,所以选择哪个纯属个人喜好。它们都很节约系统资源,对于 4GB 内存的适度系统来说,表现良好。Xfce 和 MATE 可以更低一些,支持低至 2GB 内存的系统。
|
||||
|
||||
Linux Mint 并不是唯一提供多种选择的发行版。Manjaro、Fedora 和 [Ubuntu 等发行版也有各种口味][25] 可供选择。
|
||||
|
||||
如果你还是无法下定决心,我建议先选择默认的 Cinnamon 版,并尝试 [在虚拟机中使用 Linux Mint][26]。看看你是否喜欢这个外观和感觉。如果不喜欢,你可以用同样的方式测试其他变体。如果你决定了这个版本,你可以继续 [在你的主系统上安装它][3]。
|
||||
|
||||
希望我的这篇文章能够帮助到你。如果你对这个话题还有疑问或建议,请在下方留言。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/linux-mint-cinnamon-mate-xfce/
|
||||
|
||||
作者:[Dimitrios][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/dimitrios/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/best-linux-beginners/
|
||||
[2]: https://linuxmint-developer-guide.readthedocs.io/en/latest/mint-tools.html#
|
||||
[3]: https://itsfoss.com/install-linux-mint/
|
||||
[4]: https://linuxmint.com/download.php
|
||||
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-version-options.png?resize=789%2C277&ssl=1
|
||||
[6]: https://itsfoss.com/what-is-desktop-environment/
|
||||
[7]: https://itsfoss.com/what-is-linux/
|
||||
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-variants.jpg?resize=800%2C450&ssl=1
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-20.1-cinnamon.jpg?resize=800%2C500&ssl=1
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-20-Cinnamon-ram-usage.png?resize=800%2C600&ssl=1
|
||||
[11]: https://itsfoss.com/customize-cinnamon-desktop/
|
||||
[12]: https://itsfoss.com/lmde-4-release/
|
||||
[13]: https://mate-desktop.org/
|
||||
[14]: https://itsfoss.com/ubuntu-mate-20-04-review/
|
||||
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/linux-mint-mate.jpg?resize=800%2C500&ssl=1
|
||||
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-20-MATE-ram-usage.png?resize=800%2C600&ssl=1
|
||||
[17]: https://mate-desktop.org/blog/2020-02-10-mate-1-24-released/
|
||||
[18]: https://itsfoss.com/ubuntu-mate-customization/
|
||||
[19]: https://en.wikipedia.org/wiki/Common_Desktop_Environment
|
||||
[20]: https://en.wikipedia.org/wiki/XForms_(toolkit)
|
||||
[21]: https://itsfoss.com/manjaro-linux-review/
|
||||
[22]: https://itsfoss.com/mx-linux-19/
|
||||
[23]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/linux-mint-xfce.jpg?resize=800%2C500&ssl=1
|
||||
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-20-Xfce-ram-usage.png?resize=800%2C600&ssl=1
|
||||
[25]: https://itsfoss.com/which-ubuntu-install/
|
||||
[26]: https://itsfoss.com/install-linux-mint-in-virtualbox/
|
[#]: subject: (Set up network parental controls on a Raspberry Pi)
[#]: via: (https://opensource.com/article/21/3/raspberry-pi-parental-control)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13216-1.html)

Set up parental controls for your home network on a Raspberry Pi
======

> Keep kids safe online with a minimal investment of time and money.

![Family learning and reading together at night in a room][1]

Parents are always looking for ways to protect their kids online—from malware, banner ads, pop-ups, activity-tracking scripts, and other concerns, to preventing them from playing games and watching YouTube when they should be doing homework. Many businesses use tools to regulate their employees' network security and activity, but the question is: how do you achieve this at home?

The short answer is a tiny, inexpensive Raspberry Pi computer that lets you set parental controls for your kids and your work at home. This article walks you through how easy it is to build your own parental-control-enabled home network with a Raspberry Pi.

### Install the hardware and software

For this project, you need a Raspberry Pi and a home network router. If you spend just five minutes browsing an online shopping site, you will find plenty of options. A [Raspberry Pi 4][2] and a [TP-Link router][3] are good choices for beginners.

Once you have your network device and Raspberry Pi, you need to install [Pi-hole][4] in a Linux container or on a supported operating system. There are several [installation methods][5], but an easy one is to run the following command on your Raspberry Pi:

```
curl -sSL https://install.pi-hole.net | bash
```

### Configure Pi-hole as your DNS server

Next, you need to configure the DHCP settings in both your router and Pi-hole:

1. Disable the DHCP server setting in your router
2. Enable the DHCP server in Pi-hole

Every device is different, so there is no way for me to tell you exactly what to click to adjust your settings. Generally, you access your home router through a web browser. Your router's address is sometimes printed on the bottom of the router, and it begins with 192.168 or 10.

In a browser, open your router's address and log in with your credentials. It is often a simple `admin` with a numeric password (sometimes this password is also printed on the router). If you don't know the login, call your provider and ask for the details.

In the graphical interface, look for the section about DHCP within your LAN, and deactivate the DHCP server. Your router's interface will almost certainly look different from mine, but here is an example from my setup. Uncheck **DHCP Server**:

![Disable DHCP][6]

Next, you must activate the DHCP server on the Pi-hole. If you don't, your devices won't be able to get online unless you manually assign IP addresses!

### Make your network family-friendly

Your setup is complete. Now your network devices (phones, tablets, laptops, and so on) will automatically find the DHCP server on the Raspberry Pi. Each device will then be assigned a dynamic IP address to access the internet.

Note: If your router supports setting a DNS server, you can also configure your DNS clients in the router. The clients will use Pi-hole as your DNS server.

To set up rules for which websites and activities your kids can access, open a browser to the Pi-hole admin page, `http://pi.hole/admin/`. On the dashboard, click "Whitelist" to add web pages your kids are allowed to visit. You can also add sites you don't want them to access (games, adult content, ads, shopping, and so on) to the "Blocklist".

![Pi-hole admin dashboard][8]

### What's next?

Now that you have parental controls set up on your Raspberry Pi, you can let your kids go online more safely while giving them access to approved entertainment options. This can also reduce your home network usage by cutting down household streaming. For more advanced usage, visit Pi-hole's [documentation][9] and [blog][10].

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/raspberry-pi-parental-control

Author: [Daniel Oh][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/family_learning_kids_night_reading.png?itok=6K7sJVb1 (Family learning and reading together at night in a room)
[2]: https://www.raspberrypi.org/products/
[3]: https://www.amazon.com/s?k=tp-link+router&crid=3QRLN3XRWHFTC&sprefix=TP-Link%2Caps%2C186&ref=nb_sb_ss_ts-doa-p_3_7
[4]: https://pi-hole.net/
[5]: https://github.com/pi-hole/pi-hole/#one-step-automated-install
[6]: https://opensource.com/sites/default/files/uploads/disabledhcp.jpg (Disable DHCP)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/sites/default/files/uploads/blocklist.png (Pi-hole admin dashboard)
[9]: https://docs.pi-hole.net/
[10]: https://pi-hole.net/blog/#page-content
[#]: subject: (6 things to know about using WebAssembly on Firefox)
[#]: via: (https://opensource.com/article/21/3/webassembly-firefox)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13230-1.html)

6 things to know about using WebAssembly on Firefox
======

> Learn about the opportunities and limitations of running WebAssembly on Firefox.

![](https://img.linux.net.cn/data/attachment/album/202103/23/223901pi6tcg7ybsyxos7x.jpg)

WebAssembly is a portable execution format that has drawn great interest because of its ability to execute applications in the browser at near-native speed. By its nature, WebAssembly has some special properties and limitations, but by combining it with other technologies, entirely new possibilities open up, especially for games in the browser.

This article describes the concepts, possibilities, and limitations of running WebAssembly on Firefox.

### The sandbox

WebAssembly has a [strict security policy][2]. A program or functional unit in WebAssembly is called a *module*. Each module instance runs in its own isolated memory space, so even if the same web page loads several modules, they cannot access each other's virtual address space. WebAssembly is also designed with memory safety and control-flow integrity in mind, which enables (almost) deterministic execution.

### Web APIs

A wide range of input and output devices is accessible through JavaScript's [Web APIs][3]. According to this [proposal][4], it will become possible in the future to access Web APIs without the detour through JavaScript. C++ programmers can find information about accessing Web APIs at [Emscripten.org][5]. Rust programmers can use the [wasm-bindgen][6] library, which is documented at [rustwasm.github.io][7].

### File input/output

Because WebAssembly executes in a sandboxed environment, it cannot access the host's filesystem when running in a browser. However, Emscripten offers a solution in the form of a virtual filesystem.

Emscripten makes it possible to preload files into the in-memory filesystem at compile time. These files can then be read from the WebAssembly application as if they were on an ordinary filesystem. This [tutorial][8] offers more information.

### Persistent data

If you need to store persistent data on the client side, it must be done through a JavaScript Web API. Refer to the Mozilla Developer Network (MDN) documentation on [browser storage limits and eviction criteria][9] for details on the different approaches.

### Memory management

WebAssembly modules run as a [stack machine][10] on linear memory, which means concepts like heap memory allocation simply don't exist. Yet if you use `new` in C++ or `Box::new` in Rust, you expect a heap allocation. How heap allocation requests are translated into WebAssembly depends heavily on the toolchain. You can find a detailed analysis of how different toolchains handle heap allocation in Frank Rehberger's article on [WebAssembly and dynamic memory][11].
### Games!

Combined with [WebGL][12], WebAssembly executes fast enough to run native games in the browser. The big proprietary game engines [Unity][13] and [Unreal Engine 4][14] have shown what can be done with WebGL. There are also open source game engines that use the WebAssembly and WebGL interfaces. Some examples:

* The [id Tech 4][15] engine (better known as the Doom 3 engine) has been available under a GPL license on [GitHub][16] since November 2011. There is also a [WebAssembly port of Doom 3][17].
* The Urho3D engine provides some [impressive examples][18] that run in the browser.
* If you like retro games, try this [Game Boy emulator][19].
* [The Godot engine can also produce WebAssembly][20]. I couldn't find a demo, but the [Godot editor][21] has been ported to WebAssembly.

### More about WebAssembly

WebAssembly is a promising technology, and I believe we will see it more and more in the future. Besides executing in the browser, WebAssembly can also serve as a portable execution format. The [Wasmer][22] container host lets you execute WebAssembly code on a wide range of platforms.

If you want more demos, examples, and tutorials, take a look at this [collection of WebAssembly topics][23]. Mozilla's [collection of games and examples][24] is not exclusively about WebAssembly but is still worth a look.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/webassembly-firefox

Author: [Stephan Avenwedde][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
[2]: https://webassembly.org/docs/security/
[3]: https://developer.mozilla.org/en-US/docs/Web/API
[4]: https://github.com/WebAssembly/gc/blob/master/README.md
[5]: https://emscripten.org/docs/porting/connecting_cpp_and_javascript/Interacting-with-code.html
[6]: https://github.com/rustwasm/wasm-bindgen
[7]: https://rustwasm.github.io/wasm-bindgen/
[8]: https://emscripten.org/docs/api_reference/Filesystem-API.html
[9]: https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Browser_storage_limits_and_eviction_criteria
[10]: https://en.wikipedia.org/wiki/Stack_machine
[11]: https://frehberg.wordpress.com/webassembly-and-dynamic-memory/
[12]: https://en.wikipedia.org/wiki/WebGL
[13]: https://beta.unity3d.com/jonas/AngryBots/
[14]: https://www.youtube.com/watch?v=TwuIRcpeUWE
[15]: https://en.wikipedia.org/wiki/Id_Tech_4
[16]: https://github.com/id-Software/DOOM-3
[17]: https://wasm.continuation-labs.com/d3demo/
[18]: https://urho3d.github.io/samples/
[19]: https://vaporboy.net/
[20]: https://docs.godotengine.org/en/stable/development/compiling/compiling_for_web.html
[21]: https://godotengine.org/editor/latest/godot.tools.html
[22]: https://github.com/wasmerio/wasmer
[23]: https://github.com/mbasso/awesome-wasm
[24]: https://developer.mozilla.org/en-US/docs/Games/Examples
[#]: subject: (Kooha is a Nascent Screen Recorder for GNOME With Wayland Support)
[#]: via: (https://itsfoss.com/kooha-screen-recorder/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13227-1.html)

Kooha: a nascent GNOME screen recorder with Wayland support
======

Linux lacks a [decent screen recorder that supports the Wayland display server][1].

If you use Wayland, the [built-in GNOME screen recorder][1] may be one of the few (and the only) tools that works. But that recorder has no visible interface and none of the features you would expect from standard screen-recording software.

Thankfully, a new application is under development that offers a bit more than the GNOME recorder, and it works properly on Wayland.

### Meet Kooha: a new screen recorder for the GNOME desktop

![][2]

[Kooha][3] is an application in the early stages of development that works on GNOME and is built with GTK and PyGObject. In fact, it uses the same backend as GNOME's built-in screen recorder.

Here are Kooha's features:

* Record the entire screen or a selected area
* Works on both Wayland and Xorg display servers
* Record audio from a microphone along with the video
* Option to include or omit the mouse pointer
* Add a 5- or 10-second delay before recording starts
* Record in WebM and MKV formats
* Change the default save location
* Supports a few keyboard shortcuts

### My experience with Kooha

![][4]

Its developer, Dave Patrick, contacted me, and since I desperately needed a good screen recorder, I immediately gave it a try.

Currently, [Kooha can only be installed via Flatpak][5]. I installed Flatpak, and when I tried to use it, it recorded nothing. I had a quick email exchange with Dave, who told me this was due to a [bug in the GNOME screen recorder in Ubuntu 20.10][6].

You can imagine my desperation for a Wayland-capable screen recorder: I [upgraded my Ubuntu to the 21.04 beta][7].

On 21.04, screen recording worked, but it still could not record audio from the microphone.

I noticed a few other things that didn't work as smoothly as I would like.

For example, while recording, the timer stays visible on the screen and is included in the recording. I wouldn't want that in a video tutorial, and I don't think you would like seeing it either.

![][8]

Another issue is multi-monitor support. There is no option to select a particular screen. I had two external monitors connected, and by default it recorded all three. The settings let you define a capture region, but dragging out a precise screen area is a time-consuming task.

It also lacks the options to set the frame rate or the encoding that [Kazam][9] and other legacy screen recorders provide.

### Installing Kooha on Linux (if you use GNOME)

Make sure Flatpak support is enabled on your Linux distribution. It currently works only with GNOME, so check which desktop environment you are using.

Use this command to add Flathub to your list of Flatpak repositories:

```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```

Then use this command to install it:

```
flatpak install flathub io.github.seadve.Kooha
```

You can run it from the menu or with this command:

```
flatpak run io.github.seadve.Kooha
```

### Conclusion

Kooha isn't perfect, but given the huge void in the Wayland space, I hope the developers work on fixing the issues and adding more features. This matters, considering [Ubuntu 21.04 is switching to Wayland by default][10] and some other popular distributions, like Fedora and openSUSE, already use Wayland by default.

--------------------------------------------------------------------------------

via: https://itsfoss.com/kooha-screen-recorder/

Author: [Abhishek Prakash][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-screen-recorder/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha-screen-recorder.png?resize=800%2C450&ssl=1
[3]: https://github.com/SeaDve/Kooha
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha.png?resize=797%2C364&ssl=1
[5]: https://flathub.org/apps/details/io.github.seadve.Kooha
[6]: https://bugs.launchpad.net/ubuntu/+source/gnome-shell/+bug/1901391
[7]: https://itsfoss.com/upgrade-ubuntu-beta/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha-recording.jpg?resize=800%2C636&ssl=1
[9]: https://itsfoss.com/kazam-screen-recorder/
[10]: https://news.itsfoss.com/ubuntu-21-04-wayland/
[#]: subject: (Use gdu for a Faster Disk Usage Checking in Linux Terminal)
[#]: via: (https://itsfoss.com/gdu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13234-1.html)

Use gdu for faster disk usage checking
======

![](https://img.linux.net.cn/data/attachment/album/202103/24/233818dkfvi4fviiysn8o9.jpg)

There are two common [ways to check disk usage][1] in the Linux terminal: the `du` command and the `df` command. The [du command is more for checking the space used by a directory][2], while `df` reports disk usage at the filesystem level.

There are also friendlier [ways to view disk usage in Linux with graphical tools such as GNOME Disks][3]. If you are confined to the terminal, you can use a [TUI][4] tool like [ncdu][5] to get the disk usage information in a pseudo-graphical way.

### gdu: check disk usage in the Linux terminal

[gdu][6] is one such tool, written in Go (hence the "g" in gdu). The gdu developer's [benchmarks][7] show that it checks disk usage remarkably fast, especially on SSDs. In fact, gdu primarily targets SSDs, though it works on HDDs as well.

If you run the `gdu` command without any options, it shows the disk usage of the directory you are currently in.

![][8]

Because it has a text user interface (TUI), you can navigate through directories and disks using the arrow keys. You can also sort the results by file name or size.

Here's what you can do with it:

* Arrow up or `k` moves the cursor up
* Arrow down or `j` moves the cursor down
* Enter selects a directory or device
* Arrow left or `h` goes to the parent directory
* `d` deletes the selected file or directory
* `n` sorts by name
* `s` sorts by size
* `c` sorts by item count

You will notice symbols in front of some entries. They have specific meanings.

![][9]

* `!` means an error occurred while reading the directory.
* `.` means an error occurred while reading a subdirectory; the size may be incorrect.
* `@` means the file is a symlink or a socket.
* `H` means the file was already counted (hard link).
* `e` means the directory is empty.

To see disk utilization and free space for all mounted disks, use the `d` option:

```
gdu -d
```

It shows all the details on one screen:

![][10]

Looks like a handy tool, right? Let's see how to install it on your Linux system.

### Installing gdu on Linux

gdu is available to Arch and Manjaro users via the [AUR][11]. As an Arch user, I presume you know how to use the AUR.

It is included in the universe repository of the upcoming Ubuntu 21.04, but chances are you are not using that yet. In that case, you can install it with Snap, though it may look like a lot of `snap` commands:

```
snap install gdu-disk-usage-analyzer
snap connect gdu-disk-usage-analyzer:mount-observe :mount-observe
snap connect gdu-disk-usage-analyzer:system-backup :system-backup
snap alias gdu-disk-usage-analyzer.gdu gdu
```

You can also find the source code on its releases page:

- [Download gdu's source code][12]

I am more used to the `du` and `df` commands, but I can see some Linux users liking gdu. Are you one of them?

--------------------------------------------------------------------------------

via: https://itsfoss.com/gdu/

Author: [Abhishek Prakash][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://linuxhandbook.com/df-command/
[2]: https://linuxhandbook.com/find-directory-size-du-command/
[3]: https://itsfoss.com/check-free-disk-space-linux/
[4]: https://itsfoss.com/gui-cli-tui/
[5]: https://dev.yorhel.nl/ncdu
[6]: https://github.com/dundee/gdu
[7]: https://github.com/dundee/gdu#benchmarks
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-disk-utilization.png?resize=800%2C471&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-entry-symbols.png?resize=800%2C302&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-disk-utilization-for-all-drives.png?resize=800%2C471&ssl=1
[11]: https://itsfoss.com/aur-arch-linux/
[12]: https://github.com/dundee/gdu/releases
[#]: subject: (Top 10 Terminal Emulators for Linux \(With Extra Features or Amazing Looks\))
[#]: via: (https://itsfoss.com/linux-terminal-emulators/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13221-1.html)

Top 10 terminal emulators for Linux
======

![](https://img.linux.net.cn/data/attachment/album/202103/21/073043q4j4o6hr33b595j4.jpg)

By default, every Linux distribution comes with a "terminal" application, or "terminal emulator" (the correct technical term), preinstalled. Of course, its look and feel varies depending on the desktop environment.

The thing about Linux is that you aren't limited to what your distribution provides—you can use an alternative application of your choice. Terminals are no exception. Several terminal emulators impress with unique features for a better user experience or better looks.

Here, I compile a list of interesting terminal applications you can try on your Linux distribution.

### Awesome Linux terminal emulators

This list is in no particular order. I'll start with some of the interesting ones and follow with some of the most popular emulators. I also highlight the key features of each terminal emulator mentioned, so you can choose the one you prefer.

#### 1. Terminator

![][1]

Key highlights:

* Multiple GNOME terminals in one window

[Terminator][2] is a fairly popular terminal emulator that is still being maintained (it moved from Launchpad to GitHub).

It essentially gives you multiple GNOME terminals in one window. With its help, you can easily group and regroup terminal windows. It may feel like using a tiling window manager, albeit with some limitations.

##### How to install Terminator?

For Ubuntu-based distros, just type the following command in the terminal:

```
sudo apt install terminator
```

You should find it in the default repositories of most Linux distributions. If you need installation help, visit its [GitHub page][3].

#### 2. Guake terminal

![][4]

Key highlights:

* Designed for quick access to the terminal on GNOME
* Works fast without needing many system resources
* Keyboard shortcut for access

[Guake][6] terminal was originally inspired by the FPS game Quake. Unlike some other terminal emulators, it works as an overlay on top of your other active windows.

All you have to do is summon the emulator with the shortcut key (`F12`), and it appears from the top. You can customize the emulator's width or position, but most users should be fine with the default settings.

It's not just a handy terminal emulator—it offers plenty of features, such as the ability to restore tabs, have multiple tabs, color-code each tab, and more. Check out my [separate article on Guake][5] to learn more.

##### How to install Guake terminal?

Guake is available in the default repositories of most Linux distributions; you can refer to its [official installation instructions][7].

If you're using a Debian-based distro, just type:

```
sudo apt install guake
```

#### 3. Tilix terminal

![][8]

Key highlights:

* Tiling feature
* Drag-and-drop support
* Drop-down Quake mode

[Tilix][10] terminal offers a drop-down experience similar to Guake—but it also lets you have multiple terminal windows in tiling mode.

This is particularly useful if your Linux distribution doesn't have tiling windows by default and you have a large screen: you can work on several terminal windows without switching between workspaces.

We've [covered it separately][9] before, if you want to know more about it.

##### How to install Tilix?

Tilix is available in the default repositories of most distributions. On an Ubuntu-based distro, just type:

```
sudo apt install tilix
```

#### 4. Hyper

![][13]

Key highlights:

* Terminal built on HTML/CSS/JS
* Based on Electron
* Cross-platform
* Extensive configuration options

[Hyper][15] is another interesting terminal emulator, built on web technologies. It doesn't provide a unique user experience, but it looks quite different and offers plenty of customization options.

It also supports installing themes and plugins to easily customize the terminal's appearance. Explore more about it on its [GitHub page][14].

##### How to install Hyper?

Hyper is not available in the default repositories. However, you can find .deb and .rpm packages to install from its [official website][16].

If you're new to this, read our articles for help [using deb files][17] and [using rpm files][18].

#### 5. Tilda

![][19]

Key highlights:

* Drop-down terminal
* Integrated search bar

[Tilda][20] is another GTK-based drop-down terminal emulator. Unlike some of the others, it provides an integrated search bar that you can toggle, and it lets you customize many things.

You can also set hotkeys for quick access or to perform certain actions. Functionally, it is quite impressive. However, visually I don't like the overlay behavior, and it doesn't support drag and drop either. You can still give it a try.

##### How to install Tilda?

For Ubuntu-based distros, you can simply type:

```
sudo apt install tilda
```

You can refer to its [GitHub page][20] for installation instructions on other distributions.

#### 6. eDEX-UI

![][21]

Key highlights:

* Sci-fi look
* Cross-platform
* Custom theme options
* Supports multiple terminal tabs

If you aren't specifically looking for a terminal emulator to help you get work done faster, [eDEX-UI][23] is one you should absolutely try.

It's a gorgeous terminal emulator for sci-fi fans and for users who simply want their terminal to look unique. In case you didn't know, it is heavily inspired by the movie *Tron: Legacy*.

Beyond the design and interface, it offers you a unique user experience overall that you will enjoy. It also lets you [customize the terminal][12]. It does require a considerable amount of system resources, if you plan to try it.

Take a look at our [dedicated article on eDEX-UI][22] for more information and installation steps.

##### How to install eDEX-UI?

You can find it in some repositories, including the [AUR][24]. In any case, you can grab a package (or an AppImage file) for your Linux distribution from its [GitHub releases section][25].

#### 7. Cool Retro Terminal

![][26]

Key highlights:

* Retro theme
* Animation/effect tweaks

[Cool Retro Terminal][27] is a unique terminal emulator that gives you the look of a vintage cathode-ray-tube monitor.

If you're looking for a terminal emulator with extra features, it may disappoint you. Impressively, though, it is quite light on resources and allows you to customize the colors, effects, and fonts.

##### How to install Cool Retro Terminal?

You can find installation instructions for all major Linux distributions on its [GitHub page][27]. For Ubuntu-based distros, you can type the following in the terminal:

```
sudo apt install cool-retro-term
```

#### 8. Alacritty

![][28]

Key highlights:

* Cross-platform
* Extensive options, with a focus on integration

[Alacritty][29] is an interesting open source, cross-platform terminal emulator. Even though it is considered to be in "beta", it still works.

It aims to offer extensive configuration options while keeping performance in mind. For example, features such as clicking URLs using the keyboard, copying text to the clipboard, and searching with a "Vi" mode may tempt you to try it.

Explore its [GitHub page][29] for more information.

##### How to install Alacritty?

The official GitHub page says Alacritty can be installed with package managers, but I couldn't find it in the default repository of Linux Mint 20.1 or in the [synaptic package manager][30].

If you want to try it, follow the [installation instructions][31] to set it up manually.

#### 9. Konsole

![][32]

Key highlights:

* KDE's terminal
* Lightweight and customizable

If you're not new to Linux, this one may need no introduction. [Konsole][33] is the default terminal emulator of the KDE desktop environment.

Not only that, it is also integrated into many KDE applications. Even if you use another desktop environment, you can give Konsole a try. It is a lightweight terminal emulator with a host of features.

You can have multiple tabs and multiple grouped windows, along with plenty of customization options to change the look and feel of the terminal emulator.

##### How to install Konsole?

For Ubuntu-based distros and most other distributions, you can install it from the default repositories. On a Debian-based distro, you just need to type the following in the terminal:

```
sudo apt install konsole
```

#### 10. GNOME terminal

![][34]

Key highlights:

* GNOME's terminal
* Simple yet customizable

If you use any Ubuntu-based GNOME distribution, it already comes built in. It may not be as customizable as Konsole (depending on what you're doing), but it lets you easily configure most of the important aspects of the terminal.

Overall, it offers a good user experience and an easy-to-use interface with essential functions.

If you're curious, I also have a tutorial on [customizing your GNOME terminal][12].

##### How to install GNOME terminal?

If you're not using the GNOME desktop but want to try it, you can easily install it via the default repositories.

For Debian-based distros, here's what you need to type in the terminal:

```
sudo apt install gnome-terminal
```

### Conclusion

There are several terminal emulators available. You can try anything you like if you're looking for a different user experience. However, if you're aiming for a stable and productive experience, you need to test the terminal emulators before you can rely on them.

For most users, the default terminal emulator should be good enough. But if you're looking for quick access (Quake mode), tiling features, or multiple windows in a single terminal, give the options above a try.

What's your favorite Linux terminal emulator? Did I miss listing one of your favorites? Feel free to share your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/linux-terminal-emulators/

Author: [Ankush Das][a]
Selected by: [lujun9972][b]
Translated by: [wxy](https://github.com/wxy)
Proofread by: [wxy](https://github.com/wxy)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/terminator-terminal.jpg?resize=800%2C436&ssl=1
[2]: https://gnome-terminator.org
[3]: https://github.com/gnome-terminator/terminator
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/guake-terminal-2.png?resize=800%2C432&ssl=1
[5]: https://itsfoss.com/guake-terminal/
[6]: https://github.com/Guake/guake
[7]: https://guake.readthedocs.io/en/latest/user/installing.html#system-wide-installation
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/tilix-screenshot.png?resize=800%2C460&ssl=1
[9]: https://itsfoss.com/tilix-terminal-emulator/
[10]: https://gnunn1.github.io/tilix-web/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/linux-terminal-customization.jpg?fit=800%2C450&ssl=1
[12]: https://itsfoss.com/customize-linux-terminal/
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/hyper-screenshot.png?resize=800%2C527&ssl=1
[14]: https://github.com/vercel/hyper
[15]: https://hyper.is/
[16]: https://hyper.is/#installation
[17]: https://itsfoss.com/install-deb-files-ubuntu/
[18]: https://itsfoss.com/install-rpm-files-fedora/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/tilda-terminal.jpg?resize=800%2C427&ssl=1
[20]: https://github.com/lanoxx/tilda
[21]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/edex-ui-screenshot.png?resize=800%2C450&ssl=1
[22]: https://itsfoss.com/edex-ui-sci-fi-terminal/
[23]: https://github.com/GitSquared/edex-ui
[24]: https://itsfoss.com/aur-arch-linux/
[25]: https://github.com/GitSquared/edex-ui/releases
[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2015/10/cool-retro-term-1.jpg?resize=799%2C450&ssl=1
[27]: https://github.com/Swordfish90/cool-retro-term
[28]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/alacritty-screenshot.png?resize=800%2C496&ssl=1
[29]: https://github.com/alacritty/alacritty
[30]: https://itsfoss.com/synaptic-package-manager/
[31]: https://github.com/alacritty/alacritty/blob/master/INSTALL.md#debianubuntu
[32]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/konsole-screenshot.png?resize=800%2C512&ssl=1
[33]: https://konsole.kde.org/
[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/default-terminal.jpg?resize=773%2C493&ssl=1
[#]: subject: (Why I use exa instead of ls on Linux)
[#]: via: (https://opensource.com/article/21/3/replace-ls-exa)
[#]: author: (Sudeshna Sur https://opensource.com/users/sudeshna-sur)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13237-1.html)

Why I use exa instead of ls on Linux
======

> exa is a modern replacement for the Linux ls command.

![](https://img.linux.net.cn/data/attachment/album/202103/26/101726h008fn6tttn4g6gt.jpg)

We live in a busy world, and using the `ls` command saves time and effort when we need to find files and data. But without a lot of tweaking, its default output isn't very soothing to the eyes. Why spend your time squinting at black-and-white text when an alternative like exa exists?

[exa][2] is a modern replacement for the venerable `ls` command that makes life easier. The tool is written in [Rust][3], a language known for parallelism and safety.

### Install exa

To install `exa`, run:

```
$ dnf install exa
```

### Explore exa's features

`exa` improves on the `ls` file listing with more features and better defaults. It uses color to distinguish file types and metadata. It recognizes symlinks, extended attributes, and Git. And it is small, fast, and just one single binary.

#### Tracking files

You can use `exa` to track files newly added to a Git repository.

![Tracking Git files with exa][4]

#### Tree structure

This is exa's basic tree structure. The value of `--level` determines the depth of the listing, which here is set to 2. If you want to list more subdirectories and files, increase the value of `--level`.

![exa's default tree structure][6]

The tree includes a lot of metadata for each file.

![Metadata in exa's tree structure][7]

#### Color scheme

By default, `exa` identifies different file types according to a [built-in color scheme][8]. It color-codes not only files and directories but also many file types such as `Cargo.toml`, `CMakeLists.txt`, `Gruntfile.coffee`, `Gruntfile.js`, `Makefile`, and more.

#### Extended file attributes

When you use `exa` to explore xattrs (extended file attributes), `--extended` displays them all.

![xattrs in exa][9]

#### Symlinks

`exa` recognizes symlinks and points to the actual file.

![symlinks in exa][10]

#### Recursion

When you want a listing of all the directories under the current one, `exa` can recurse into them.

![recurse in exa][11]

### Conclusion

I believe `exa` is one of the simplest, most adaptable tools. It helps me track a lot of Git and Maven files. Its color coding makes searching through multiple subdirectories easier, and it helps me understand the current xattrs.

Have you replaced `ls` with `exa`? Please share your feedback in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/replace-ls-exa

Author: [Sudeshna Sur][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/sudeshna-sur
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://the.exa.website/docs
[3]: https://opensource.com/tags/rust
[4]: https://opensource.com/sites/default/files/uploads/exa_trackingfiles.png (Tracking Git files with exa)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/exa_treestructure.png (exa's default tree structure)
[7]: https://opensource.com/sites/default/files/uploads/exa_metadata.png (Metadata in exa's tree structure)
[8]: https://the.exa.website/features/colours
[9]: https://opensource.com/sites/default/files/uploads/exa_xattrs.png (xattrs in exa)
[10]: https://opensource.com/sites/default/files/uploads/exa_symlinks.png (symlinks in exa)
[11]: https://opensource.com/sites/default/files/uploads/exa_recurse.png (recurse in exa)
[#]: subject: (Extending Looped Music for Fun, Relaxation and Productivity)
[#]: via: (https://theartofmachinery.com/2021/03/12/loopx.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Extending Looped Music for Fun, Relaxation and Productivity
======

Some work (like programming) takes a lot of concentration, and I use noise-cancelling headphones to help me work productively in silence. But for other work (like doing business paperwork), I prefer to have quiet music in the background to help me stay focussed. Quiet background music is good for meditation or dozing, too. If you can’t fall asleep or completely clear your mind, zoning out to some music is the next best thing.

The best music for that is simple and repetitive — something nice enough to listen to, but not distracting, and okay to tune out of when needed. Computer game music is like that, by design, so there’s plenty of good background music out there. The harder problem is finding samples that play for more than a few minutes.

So I made [`loopx`][1], a tool that takes a sample of music that loops a few times, and repeats the loop to make a long piece of music.

When you’re listening to the same music loop for a long time, even slight distortion becomes distracting. Making quality extended music audio out of real-world samples (and doing it fast enough) takes a bit of maths and computer science. About ten years ago I was doing digital signal processing (DSP) programming for industrial metering equipment, so this side project got me digging up some old theory again.

### The high-level plan

It would be easy if we could just play the original music sample on repeat. But, in practice, most files we’ll have won’t be perfectly trimmed to the right loop length. Some tracks will also have some kind of intro before the loop, but even if they don’t, they’ll usually have some fade in and out.

`loopx` needs to analyse the music file to find the music loop data, and then construct a longer version by copying and splicing together pieces of the original.
By the way, the examples in this post use [Beneath the Rabbit Holes][2] by Jason Lavallee from the soundtrack of the [FOSS platform game SuperTux][3]. I looped it a couple of times and added silence and fade in/out to the ends.
|
||||
|
||||
### Measuring the music loop length (or “period”)
|
||||
|
||||
If you don’t care about performance, estimating the period at which the music repeats itself is pretty straightforward. All you have to do is take two copies of the music side by side, and slide one copy along until you find an offset that makes the two copies match up again.
|
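
In Python, that brute-force slide-and-compare search might look something like this (a toy sketch to show the idea, not `loopx`’s actual implementation):

```python
def estimate_period(signal, min_period, max_period):
    """Brute-force period search: slide the signal against itself and
    return the offset where the overlapping parts match up best
    (smallest mean squared difference)."""
    best_offset, best_error = None, float("inf")
    for offset in range(min_period, max_period + 1):
        overlap = len(signal) - offset
        error = sum((signal[t] - signal[t + offset]) ** 2
                    for t in range(overlap)) / overlap
        if error < best_error:
            best_offset, best_error = offset, error
    return best_offset
```

The inner sum is the `O(N)` similarity check, and the outer loop over offsets is what makes the whole thing `O(N^2)` — fine for a toy signal, hopeless at 40k+ samples per second.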

![][4]

Now, if we could guarantee the music data would repeat exactly, there are many super-fast algorithms that could be used to help here (e.g., Rabin-Karp or suffix trees). However, even if we’re looking at computer-generated music, we can’t guarantee the loop will be exact for a variety of reasons like phase distortion (which will come up again later), dithering and sampling rate effects.

![Converting this greyscale image of a ball in a room to a black/white image demonstrates dithering. Simple thresholding turns the image into regions of solid black and regions of solid white, with all detail lost except near the threshold. Adding random noise before converting fuzzes the threshold, allowing more detail to come through. This example is extreme, but the same idea is behind dithering digital audio when approximating smooth analogue signals.][5]

By the way, Chris Montgomery (who developed Ogg Vorbis) made [an excellent presentation about the real-world issues (and non-issues) with digital audio][6]. There’s a light-hearted video that’s about 20 minutes and definitely worth watching if you have any interest in this stuff. Before that, he also did [an intro to the technical side of digital media][7] if you want to start from the beginning.

If exact matching isn’t an option, we need to find a best fit instead, using one of the many vector similarity algorithms. The problem is that any good similarity algorithm will look at all the vector data and be `O(N)` time at best. If we naïvely calculate that at every slide offset, finding the best fit will be `O(N^2)` time. With over 40k samples for every second of music (multiplied by the number of channels), these vectors are way too big for that approach to be fast enough.

Thankfully, we can do it in `O(N log N)` time using the Fourier transform if we choose to use autocorrelation to find the best fit. Autocorrelation means taking the dot product at every offset, and with some normalisation that’s a bit like using cosine similarity.
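
Here’s a rough pure-Python sketch of that trick (FFT the zero-padded signal, multiply by its conjugate to get the power spectrum, then inverse-FFT back — the Wiener–Khinchin approach). It’s only an illustration; `loopx` itself leans on FFTW for the heavy lifting:

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(x):
    """Inverse FFT via the conjugate trick."""
    n = len(x)
    y = fft([v.conjugate() for v in x])
    return [v.conjugate() / n for v in y]

def autocorrelation(signal):
    """Dot product of the signal with itself at every offset, in O(N log N)."""
    n = len(signal)
    # Zero-pad to at least twice the length (a power of two) so the
    # circular correlation from the FFT behaves like a linear one.
    size = 1
    while size < 2 * n:
        size *= 2
    padded = [complex(v) for v in signal] + [0j] * (size - n)
    spectrum = fft(padded)
    power = [v * v.conjugate() for v in spectrum]
    return [c.real for c in ifft(power)[:n]]
```

To turn this into a period estimate, divide each lag’s value by the overlap length (`n - lag`) for normalisation and take the biggest peak after lag 0 — just like the peak at 2m58.907s in the plot below.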

![Log energy plot of the autocorrelation of the Beneath the Rabbit Holes sample \(normalised by overlap length\). This represents the closeness of match when the music is compared to a time-shifted version of itself. Naturally, there's a peak at 0 minutes offset, but the next biggest peak is at 2m58.907s, which happens to be exactly the length of the original music loop. The smaller peaks reflect small-scale patterns, such as the music rhythm.][8]

### The Fourier transform?

The Fourier transform is pretty famous in some parts of STEM, but not others. It’s used a lot in `loopx`, so here are some quick notes for those in the second group.

There are a couple of ways to think about and use the Fourier transform. The first is the down-to-earth way: it’s an algorithm that takes a signal and analyses the different frequencies in it. If you take Beethoven’s Symphony No. 9 in D minor, Op 125, Ode to Joy, and put it through a Fourier transform, you’ll get a signal with peaks that correspond to notes in the scale of D minor. The Fourier transform is reversible, so it allows manipulating signals in terms of frequency, too.

The second way to think of Fourier transforms is stratospherically abstract: the Fourier transform is a mapping between two vector spaces, often called the time domain and the frequency domain. It’s not just individual vectors that have mirror versions in the other domain. Operations on vectors and differential equations over vectors and so on can all be transformed, too. Often the version in one domain is simpler than the version in the other, making the Fourier transform a useful theoretical tool. In this case, it turns out that autocorrelation is very simple in the frequency domain.

The Fourier transform is used both ways in `loopx`. Because Fourier transforms represent most of the number crunching, `loopx` uses [FFTW][9], a “do one thing really, really well” library for fast Fourier transform implementations.

### Dealing with phase distortion

I had some false starts implementing `loopx` because of a practical difference between industrial signal processing and sound engineering: psychoacoustics. Our ears are basically an array of sensors tuned to different frequencies. That’s it. Suppose you play two tones into your ears, with different phases (i.e., they’re shifted in time relative to each other). You literally can’t hear the difference because there’s no wiring between the ears and the brain carrying that information.

![][10]

Sure, if you play several frequencies at once, phase differences can interact in ways that are audible, but phase matters less overall. A sound engineer who has to make a choice between phase distortion and some other kind of distortion will tend to favour phase distortion because it’s less noticeable. Phase distortion is usually simple and consistent, but phase distortion from popular lossy compression standards like MP3 and Ogg Vorbis seems to be more complicated.

Basically, when you zoom right into the audio data, any algorithmic approach that’s sensitive to the precise timing of features is hopeless. Because audio files are designed for phase-insensitive ears, I had to make my algorithms phase-insensitive too to get any kind of robustness. That’s probably not news to anyone with real audio engineering experience, but it was a bit of an, “Oh,” moment for someone like me coming from metering equipment DSP.

I ended up using spectrograms a lot. They’re 2D heatmaps in which one axis represents time, and the other axis represents frequency. The example below shows how they make high-level music features much more recognisable, without having to deal with low-level issues like phase. (If you’re curious, you can see [a 7833x192 spectrogram of both channels of the whole track][11].)

![Spectrogram of the first 15s of Beneath the Rabbit Holes. Time advances to the right. Each vertical strip shows the signal strength by frequency at a given time window, with low notes at the bottom and high ones at the top. The bright strip at the bottom is the bass. The vertical streaks are percussion. The melody starts at about 10s, and appears as dots for notes.][12]

The Fourier transform does most of the work of getting frequency information out of music, but a bit more is needed to get a useful spectrogram. The Fourier transform works over the whole input, so instead of one transformation, we need to do transformations of overlapping windows running along the input. Each windowed transformation turns into a single frame of the spectrogram after a bit of postprocessing. The Fourier transform uses a linear frequency scale, which isn’t natural for music (every 8th white key on a piano has double the pitch), so frequencies get binned according to a Mel scale (designed to approximate human pitch perception). After that, the total energy for each frequency gets log-transformed (again, to match human perception). [This article describes the steps in detail][13] (ignore the final DCT step).
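
To make the windowing step concrete, here’s a toy spectrogram (naive DFT on Hann-windowed frames — a real implementation would use an FFT library) along with one common form of the Hz-to-Mel conversion. This is an illustration of the pipeline, not `loopx`’s actual code:

```python
import math

def hann(n):
    """Hann window, to soften the frame edges and reduce spectral leakage."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]

def spectrogram(signal, window=64, hop=32):
    """Magnitude spectrum of each overlapping windowed frame."""
    w = hann(window)
    frames = []
    for start in range(0, len(signal) - window + 1, hop):
        chunk = [signal[start + i] * w[i] for i in range(window)]
        frame = []
        for k in range(window // 2 + 1):  # one magnitude per frequency bin
            re = sum(c * math.cos(2 * math.pi * k * i / window)
                     for i, c in enumerate(chunk))
            im = sum(c * math.sin(2 * math.pi * k * i / window)
                     for i, c in enumerate(chunk))
            frame.append(math.hypot(re, im))
        frames.append(frame)
    return frames

def hz_to_mel(f):
    """Mel scale: roughly linear below 1kHz, logarithmic above."""
    return 2595.0 * math.log10(1.0 + f / 700.0)
```

A pure tone shows up as the same bright bin in every frame; the Mel binning and log-transform would then be applied to each frame before comparing them.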

### Finding the loop zone

Remember that the music sample will likely have some intro and outro? Before doing more processing, `loopx` needs to find the section of the music sample that actually loops (what’s called the “loop zone” in the code). It’s easy in principle: scan along the music sample and check if it matches up with the music one period ahead. The loop zone is assumed to be the longest stretch of music that matches (plus the one period at the end). Processing the spectrogram of the music, instead of the raw signal itself, turned out to be more robust.
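
The hysteresis idea can be sketched like this (the thresholds here are assumed to be given; `loopx` derives them from the low-error sections of the plot itself):

```python
def find_loop_zone(errors, low, high):
    """Longest run of frames whose period-shifted difference stays small.

    Schmitt-trigger hysteresis: enter a run when the error drops below
    `low`, but only leave it when the error rises above `high`, so a
    brief wobble between the two thresholds doesn't break the run.
    Returns (start, end) frame indices."""
    best = (0, 0)
    start, inside = 0, False
    for i, e in enumerate(errors):
        if not inside and e < low:
            inside, start = True, i
        elif inside and e > high:
            inside = False
            if i - start > best[1] - best[0]:
                best = (start, i)
    if inside and len(errors) - start > best[1] - best[0]:
        best = (start, len(errors))
    return best
```

With a single threshold, the mid-level wobble at frame 4 in the test below would split the run in two; with hysteresis it survives as one loop zone.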

![The difference between each spectrogram frame and the one that's a music period after in the Beneath the Rabbit Holes sample. The difference is high at the beginning and end because of the silence and fade in/out. The difference is low in the middle because of the music loop.][14]

A human can eyeball a plot like the one above and see where the intro and outro are. However, the error thresholds for “match” and “mismatch” vary depending on the sample quality and how accurate the original period estimate is, so finding a reliable computer algorithm is more complicated. There are statistical techniques for solving this problem (like Otsu’s method), but `loopx` just exploits the assumption that a loop zone exists, and figures out thresholds based on low-error sections of the plot. A variant of Schmitt triggering is used to get a good separation between the loop zone and the rest.

### Refining the period estimate

Autocorrelation is pretty good for estimating the period length, but a long intro or outro can pull the estimate either way. Knowing the loop zone lets us refine the estimate: any recognisable feature (like a chord change or drum beat) inside the loop zone will repeat one period before or after. If we find a pair of distinctive features, we can measure the difference to get an accurate estimate of the period.

`loopx` finds the strongest features in the music using a novelty curve — which is just the difference between one spectrogram frame and the next. Any change (a beat, a note, a change of key) will cause a spike in this curve, and the biggest spikes are taken as points of interest. Instead of trying to find the exact position of music features (which would be fragile), `loopx` just takes the region around a point of interest and its period-shifted pair, and uses cross-correlation to find the shift that makes them best match (just like the autocorrelation, but between two signals). For robustness, shifts are calculated for a bunch of points and the median is used to correct the period. The median is better than the average because each single-point correction estimate is either highly accurate alone or way off because something went wrong.

### Extending the music

The loop zone has the useful property that jumping back or forward a multiple of the music period keeps the music playing uninterrupted, as long as playback stays within the loop zone. This is the essence of how `loopx` extends music. To make a long output, `loopx` copies music data from the beginning until it hits the end of the loop zone. Then it jumps back as many periods as it can (staying inside the loop zone) and keeps repeating copies like that until it has output enough data. Then it just keeps copying to the end.

That sounds simple, but if you’ve ever tried it you’ll know there’s one more problem. Most music is made of smooth waves. If you just cut music up in arbitrary places and concatenate the pieces together, you get big jumps in the wave signal that turn into jarring popping sounds when played back as an analogue signal. When I’ve done this by hand, I’ve tried to minimise this distortion by making the curve as continuous as possible. For example, I might find a place in the first fragment of audio where the signal crosses the zero line going down, and I’ll try to match it up with a place in the second fragment that’s also crossing zero going down. That avoids a loud pop, but it’s not perfect.

An alternative that’s actually easier to implement in code is a minimum-error match. Suppose you’re splicing signal A to signal B, and you want to evaluate how good the splice is. You can take some signal near the splice point and compare it to what the signal would have been if signal A had kept playing. Simply subtracting and summing the squares gives a reasonable measure of quality. I also tried filtering the errors before squaring and summing because distortion below 20Hz and above 20kHz isn’t as bad as distortion inside normal human hearing range. This approach improved the splices a lot, but it wasn’t reliable at making them seamless. I don’t have super hearing ability, but the splices got jarring when listening to a long track with headphones in a quiet room.

Once again, the spectral approach was more robust. Calculating the spectrum around the splice and comparing it to the spectrum around the original signal is a useful way to measure splice quality. The pop sound of a broken audio signal appears as an obvious burst of noise across most of the spectrum. Even better, because the spectrum is designed to reflect human hearing, it also catches any other annoying effects, like a blip caused by a bad splice right on the edge of a drum beat. Anything that’s obvious to a human will be obvious in the spectrogram.
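
A minimal sketch of that spectral quality measure (again, an illustration of the idea rather than `loopx`’s actual code):

```python
import math

def magnitude_spectrum(chunk):
    """Naive DFT magnitudes of a chunk of audio."""
    n = len(chunk)
    spectrum = []
    for k in range(n // 2 + 1):
        re = sum(chunk[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(chunk[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        spectrum.append(math.hypot(re, im))
    return spectrum

def splice_badness(uninterrupted, spliced):
    """Spectral distance between the signal around a splice point and the
    signal the listener would have heard if the original had kept playing.
    A bad splice floods the spectrum with broadband noise, so the
    difference comes out large."""
    a = magnitude_spectrum(uninterrupted)
    b = magnitude_spectrum(spliced)
    return sum((x - y) ** 2 for x, y in zip(a, b))
```

A hard phase jump in the middle of a smooth sine scores badly even though both halves are, individually, perfectly smooth — which is exactly the kind of splice a sum-of-squared-differences check in the time domain can underestimate.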

![Examples of how splicing affects the local music spectrum. The signal plots on the left show the splice point and a few hundred audio samples either side. The spectra on the right are calculated from a few thousand samples either side of the splice point. The centre row shows the original, unspliced signal and its spectrum. The spectrum of the bad splice is flooded with noise and is obviously different from the original spectrum. The spectrum of the improved splice looks much more like the original. The audio signal already looks reasonably smooth in the time domain, but loopx is able to find even better splices by looking at the spectra.][15]

There are multiple splice points that need to be made seamless. The simple approach to optimising them is a greedy one: just process each splice point in order and take the best splice found locally. However, `loopx` also tries to maintain the music loop length as best as possible, which means each splice point will depend on the splicing decisions made earlier. That means later splices can be forced to be worse because of overeager decisions made earlier.

Now, I admit this might be getting into anal retentive territory, but I wasn’t totally happy with about 5% of the tracks I tested, and I wanted a tool that could reliably make music better than my hearing (assuming quality input data). So I switched to optimising the splices using Dijkstra’s algorithm. Normally Dijkstra is thought of as an algorithm for figuring out the shortest path from start to finish using available path segments. In this case, I’m finding the least distortion series of copies to get from an empty output audio file to one that’s the target length, using spliced segments of the input file. Abstractly, it’s the same problem. I also calculate cost a little differently. In normal path finding, the path cost is the sum of the segment costs. However, total distortion isn’t the best measure for `loopx`. I don’t care if Dijkstra’s algorithm can make an almost-perfect splice perfect if it means making an annoying splice worse. So, `loopx` finds the copy plan with the least worst-case distortion level. That’s no problem because Dijkstra’s algorithm works just as well finding min-max as it does finding min-sum (abstractly, it just needs paths to be evaluated in a way that’s a total ordering and never improves when another segment is added).
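
To show how small the min-sum to min-max change is, here’s a sketch of Dijkstra’s algorithm where a path’s cost is its worst edge instead of its total (the graph shape is made up for the demo):

```python
import heapq

def minimax_path_cost(graph, start, goal):
    """Dijkstra variant: the cost of a path is its worst edge, not the sum.
    `graph` maps node -> list of (neighbour, edge_cost) pairs."""
    best = {start: 0}
    queue = [(0, start)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph.get(node, []):
            c = max(cost, w)  # min-max: the only line that differs from min-sum
            if c < best.get(nxt, float("inf")):
                best[nxt] = c
                heapq.heappush(queue, (c, nxt))
    return None
```

In the test graph below, min-sum would take the direct edge (9 < 5 + 5), but min-max prefers the two-hop route because its worst splice is only 5.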

### Enjoying the music

It’s rare for any of my hobby programming projects to actually be useful at all to my everyday life away from computers, but I’ve already found multiple uses for background music generated by `loopx`. As usual, [the full source is available on GitLab][1].

--------------------------------------------------------------------------------

via: https://theartofmachinery.com/2021/03/12/loopx.html

作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://gitlab.com/sarneaud/loopx
[2]: https://github.com/SuperTux/supertux/blob/56efa801a59e7e32064b759145e296a2d3c11e44/data/music/forest/beneath_the_rabbit_hole.ogg
[3]: https://github.com/SuperTux/supertux
[4]: https://theartofmachinery.com/images/loopx/shifted.jpg
[5]: https://theartofmachinery.com/images/loopx/dither_demo.png
[6]: https://wiki.xiph.org/Videos/Digital_Show_and_Tell
[7]: https://wiki.xiph.org/Videos/A_Digital_Media_Primer_For_Geeks
[8]: https://theartofmachinery.com/images/loopx/autocorrelation.jpg
[9]: http://www.fftw.org/
[10]: https://theartofmachinery.com/images/loopx/phase_shift.svg
[11]: https://theartofmachinery.com/images/loopx/spectrogram.png
[12]: https://theartofmachinery.com/images/loopx/spectrogram_intro.png
[13]: https://www.practicalcryptography.com/miscellaneous/machine-learning/guide-mel-frequency-cepstral-coefficients-mfccs/
[14]: https://theartofmachinery.com/images/loopx/loop_zone_errors.png
[15]: https://theartofmachinery.com/images/loopx/splice.png

[#]: subject: (Get better at programming by learning how things work)
[#]: via: (https://jvns.ca/blog/learn-how-things-work/)
[#]: author: (Julia Evans https://jvns.ca/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Get better at programming by learning how things work
======

When we talk about getting better at programming, we often talk about testing, writing reusable code, design patterns, and readability.

All of those things are important. But in this blog post, I want to talk about a different way to get better at programming: learning how the systems you’re using work! This is the main way I approach getting better at programming.

### examples of “how things work”

To explain what I mean by “how things work”, here are some different types of programming and examples of what you could learn about how they work.

Frontend JS:

* how the event loop works
* HTTP methods like GET and POST
* what the DOM is and what you can do with it
* the same-origin policy and CORS

CSS:

* how inline elements are rendered differently from block elements
* what the “default flow” is
* how flexbox works
* how CSS decides which selector to apply to which element (the “cascading” part of the cascading style sheets)

Systems programming:

* the difference between the stack and the heap
* how virtual memory works
* how numbers are represented in binary
* what a symbol table is
* how code from external libraries gets loaded (e.g. dynamic/static linking)
* atomic instructions and how they’re different from mutexes

### you can use something without understanding how it works (and that can be ok!)

We work with a LOT of different systems, and it’s unreasonable to expect that every single person understands everything about all of them. For example, many people write programs that send email, and most of those people probably don’t understand everything about how email works. Email is really complicated! That’s why we have abstractions.

But if you’re working with something (like CSS, or HTTP, or goroutines, or email) more seriously and you don’t really understand how it works, sometimes you’ll start to run into problems.

### your bugs will tell you when you need to improve your mental model

When I’m programming and I’m missing a key concept about how something works, it doesn’t always show up in an obvious way. What will happen is:

* I’ll have bugs in my programs because of an incorrect mental model
* I’ll struggle to fix those bugs quickly and I won’t be able to find the right questions to ask to diagnose them
* I’ll feel really frustrated

I think it’s actually an important skill **just to be able to recognize that this is happening**: I’ve slowly learned to recognize the feeling of “wait, I’m really confused, I think there’s something I don’t understand about how this system works, what is it?”

Being a senior developer is less about knowing absolutely everything and more about quickly being able to recognize when you **don’t** know something and learn it. Speaking of being a senior developer…

### even senior developers need to learn how their systems work

So far I’ve never stopped learning how things work, because there are so many different types of systems we work with!

For example, I know a lot of the fundamentals of how C programs work and web programming (like the examples at the top of this post), but when it comes to graphics programming/OpenGL/GPUs, I know very few of the fundamental ideas. And sometimes I’ll discover a new fact that I’m missing about a system I thought I knew, like last year I [discovered][1] that I was missing a LOT of information about how CSS works.

It can feel bad to realise that you really don’t understand how a system you’ve been using works when you have 10 years of experience (“ugh, shouldn’t I know this already? I’ve been using this for so long!”), but it’s normal! There’s a lot to know about computers and we are constantly inventing new things to know, so nobody can keep up with every single thing.

### how I go from “I’m confused” to “ok, I get it!”

When I notice I’m confused, I like to approach it like this:

1. Notice I’m confused about a topic (“hey, when I write `await` in my Javascript program, what is actually happening?”)
2. Break down my confusion into specific factual questions, like “when there’s an `await` and it’s waiting, how does it decide which part of my code runs next? Where is that information stored?”
3. Find out the answers to those questions (by writing a program, reading something on the internet, or asking someone)
4. Test my understanding by writing a program (“hey, that’s why I was having that async bug! And I can fix it like this!”)

The last “test my understanding” step is really important. The whole point of understanding how computers work is to actually write code to make them do things!

I find that if I can use my newfound understanding to do something concrete like implement a new feature or fix a bug or even just write a test program that demonstrates how the thing works, it feels a LOT more real than if I just read about it. And then it’s much more likely that I’ll be able to use it in practice later.

### just learning a few facts can help a lot

Learning how things work doesn’t need to be a big huge thing. For example, I used to not really know how floating point numbers worked, and I felt nervous that something weird would happen that I didn’t understand.

And then one day in 2013 I went to a talk by Stefan Karpinski explaining how floating point numbers worked (containing roughly the information in [this comic][2], but with more weird details). And now I feel totally confident using floating point numbers! I know what their basic limitations are, and when not to use them (to represent integers larger than 2^53). And I know what I _don’t_ know – I know it’s hard to write numerically stable linear algebra algorithms and I have no idea how to do that.
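
That 2^53 limitation is easy to check for yourself by just asking Python (a tiny demonstration, in the spirit of the “test my understanding” step above):

```python
# Above 2**53, a 64-bit float's 52-bit mantissa can't represent every
# integer, so adding 1 gets rounded away:
big = 2.0 ** 53
assert big + 1 == big              # the 1 is lost to rounding
assert big + 2 != big              # but even integers are still exact
assert 2.0 ** 52 + 1 != 2.0 ** 52  # below the limit, every integer is exact
print("2**53 + 1 == 2**53 in floating point!")
```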

### connect new facts to information you already know

When learning a new fact, it’s easy to be able to recite a sentence like “ok, there are 8 bits in a byte”. That’s true, but so what? What’s harder (and much more useful!) is to be able to connect that information to what you already know about programming.

For example, let’s take this “8 bits in a byte” thing. In your program you probably have strings, like “Hello”. You can already start asking lots of questions about this, like:

* How many bytes in memory are used to represent the string “Hello”? (it’s 5!)
* What bits exactly does the letter “H” correspond to? (the encoding for “Hello” is going to be using ASCII, so you can look it up in an ASCII table!)
* If you have a running program that’s printing out the string “Hello”, can you go look at its memory and find out where those bytes are in its memory? How do you do that?
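
The first two questions above are another thing you can just ask the computer (here in Python, say):

```python
text = "Hello"
data = text.encode("ascii")

assert len(data) == 5                        # five bytes for five ASCII characters
assert data[0] == 0x48                       # "H" is 0x48 (72) in the ASCII table
assert format(data[0], "08b") == "01001000"  # the 8 bits that make up that byte
```

(The third question — poking at a running program’s memory — is a bigger experiment, e.g. with a debugger, but it’s answerable the same way.)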

The important thing here is to ask the questions and explore the connections that **you’re** curious about – maybe you’re not so interested in how the strings are represented in memory, but you really want to know how many bytes a heart emoji is in Unicode! Or maybe you want to learn about how floating point numbers work!

I find that when I connect new facts to things I’m already familiar with (like emoji or floating point numbers or strings), then the information sticks a lot better.

Next up, I want to talk about 2 ways to get information: asking a person yes/no questions, and asking the computer.

### how to get information: ask yes/no questions

When I’m talking to someone who knows more about the concept than me, I find it helps to start by asking really simple questions, where the answer is just “yes” or “no”. I’ve written about yes/no questions before in [how to ask good questions][3], but I love it a lot so let’s talk about it again!

I do this because it forces me to articulate exactly what my current mental model _is_, and because I think yes/no questions are often easier for the person I’m asking to answer.

For example, here are some different types of questions:

* Check if your current understanding is correct
  * Example: “Is a pixel shader the same thing as a fragment shader?”
* How concepts you’ve heard of are related to each other
  * Example: “Does shadertoy use OpenGL?”
  * Example: “Do graphics cards know about triangles?”
* High-level questions about what the main purpose of something is
  * Example: “Does mysql orchestrator proxy database queries?”
  * Example: “Does OpenGL give you more control or less control over the graphics card than Vulkan?”

### yes/no questions put you in control

When I ask very open-ended questions like “how does X work?”, I find that it often goes wrong in one of 2 ways:

1. The person starts telling me a bunch of things that I already knew
2. The person starts telling me a bunch of things that I don’t know, but which aren’t really what I was interested in understanding

Both of these are frustrating, but of course neither of them is their fault! They can’t know exactly what information I wanted about X, because I didn’t tell them. But it still always feels bad to have to interrupt someone with “oh no, sorry, that’s not what I wanted to know at all!”

I love yes/no questions because, even though they’re harder to formulate, I’m WAY more likely to get the exact answers I want and less likely to waste the time of the person I’m asking by having them explain a bunch of things that I’m not interested in.

### asking yes/no questions isn’t always easy

When I’m asking someone questions to try to learn about something new, sometimes this happens:

**me:** so, just to check my understanding, it works like this, right?
**them:** actually, no, it’s <completely different thing>
**me (internally)**: (brief moment of panic)
**me:** ok, let me think for a minute about my next question

It never quite feels _good_ to learn that my mental model was totally wrong, even though it’s incredibly helpful information. Asking this kind of really specific question (even though it’s more effective!) puts you in a more vulnerable position than asking a broader question, because sometimes you have to reveal specific things that you were totally wrong about!

When this happens, I like to just say that I’m going to take a minute to incorporate the new fact into my mental model and think about my next question.

Okay, that’s the end of this digression into my love for yes/no questions :)

### how to get information: ask the computer

Sometimes when I’m trying to answer a question I have, there won’t be anybody to ask, and I’ll Google it or search the documentation and won’t find anything.

But the delightful thing about computers is that you can often get answers to questions about computers by… asking your computer!

Here are a few examples (from past blog posts) of questions I’ve had and computer experiments I ran to answer them for myself:

* Are atomics faster or slower than mutexes? (blog post: [trying out mutexes and atomics][4])
* If I add a user to a group, will existing processes running as that user have the new group? (blog post: [How do groups work on Linux?][5])
* On Linux, if you have a server listening on 0.0.0.0 but you don’t have any network interfaces, can you connect to that server? (blog post: [what’s a network interface?][6])
* How is the data in a SQLite database actually organized on disk? (blog post: [How does SQLite work? Part 1: pages!][7])

### asking the computer is a skill

It definitely takes time to learn how to turn “I’m confused about X” into specific questions, and then to turn that question into an experiment you can run on your computer to definitively answer it.

But it’s a really powerful tool to have! If you’re not limited to just the things that you can Google / what’s in the documentation / what the people around you know, then you can do a LOT more.

### be aware of what you still don’t understand

Like I said earlier, the point here isn’t to understand every single thing. But especially as you get more senior, it’s important to be aware of what you don’t know! For example, here are five things I don’t know (out of a VERY large list):

* How database transactions / isolation levels work
* How vertex shaders work (in graphics)
* How font rendering works
* How BGP / peering work
* How multiple inheritance works in Python

And I don’t really need to know how those things work right now! But one day I’m pretty sure I’m going to need to know how database transactions work, and I know it’s something I can learn when that day comes :)

Someone who read this post asked me “how do you figure out what you don’t know?” and I didn’t have a good answer, so I’d love to hear your thoughts!

Thanks to Haider Al-Mosawi, Ivan Savov, Jake Donham, John Hergenroeder, Kamal Marhubi, Matthew Parker, Matthieu Cneude, Ori Bernstein, Peter Lyons, Sebastian Gutierrez, Shae Matijs Erisson, Vaibhav Sagar, and Zell Liew for reading a draft of this.

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/learn-how-things-work/

作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/blog/debugging-attitude-matters/
[2]: https://wizardzines.com/comics/floating-point/
|
||||
[3]: https://jvns.ca/blog/good-questions/
|
||||
[4]: https://jvns.ca/blog/2014/12/14/fun-with-threads/
|
||||
[5]: https://jvns.ca/blog/2017/11/20/groups/
|
||||
[6]: https://jvns.ca/blog/2017/09/03/network-interfaces/
|
||||
[7]: https://jvns.ca/blog/2014/09/27/how-does-sqlite-work-part-1-pages/
|
@ -0,0 +1,80 @@
|
||||
[#]: subject: (Elevating open leaders by getting out of their way)
|
||||
[#]: via: (https://opensource.com/open-organization/21/3/open-spaces-leadership-talent)
|
||||
[#]: author: (Jos Groen https://opensource.com/users/jos-groen)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Elevating open leaders by getting out of their way
|
||||
======
|
||||
Your organization's leaders likely know the most effective and
|
||||
innovative path forward. Are you giving them the space they need to get
|
||||
you there?
|
||||
![Leaders are catalysts][1]
|
||||
|
||||
Today, we're seeing the rapid rise of agile organizations capable of quickly and effectively adapting to new market ideas with large-scale impact. These companies tend to have something in common: a clear core direction and young, energetic leaders—leaders who encourage their talented employees to develop their potential.
|
||||
|
||||
The way these organizations apply open principles to developing their internal talent—that is, how they facilitate and encourage talented employees to develop and advance in all layers of the organization—is a critical component of their sustainability and success. These organizations have achieved an important kind of "flow," through which talented employees can easily shift to the places in the organization where they can add the most value based on their talents, skills, and [intrinsic motivators][2]. Flow ensures fresh ideas and new impulses. After all, the best idea can originate anywhere in the organization—no matter where a particular employee may be located.
|
||||
|
||||
In this new series, I'll explore various dimensions of this open approach to organizational talent management. In this article, I explicitly focus on employees who demonstrate leadership talent. After all, we need leaders to create contexts based on open principles, leaders able to balance people and business in their organization.
|
||||
|
||||
### The elements of success
|
||||
|
||||
I see five crucial elements that determine the success of businesses today:
|
||||
|
||||
1. Talented leaders are engaged and empowered—given the space to develop, grow, and build experience under the guidance of mentors (leaders) in a safe environment. They can fail fast and learn fast.
|
||||
2. Their organizations know how to quickly and decisively convert new ideas into valuable products, services, or solutions.
|
||||
3. The dynamic between "top" and "bottom" managers and leaders in the organization is one of balance.
|
||||
4. People are willing to let go of deeply held beliefs, processes, and behaviors. It's brave to work openly.
|
||||
5. The organization has a clear core direction and a strong identity based on open principles.
|
||||
|
||||
|
||||
|
||||
All these elements of success are connected to employees' creativity and ingenuity.
|
||||
|
||||
### Open and safe working environment
|
||||
|
||||
Companies that traditionally base their services, governance, and strategic execution on hierarchy and the authority embedded in their systems, processes, and management structure rarely leave room for this kind of open talent development. In these systems, good ideas too often get "stuck" in bureaucracies, and authority to lead is primarily [based on tenure and seniority][3], not on talent. Moreover, traditionally minded board members and managers don't always have an adequate eye for leadership talent. Herein lies the first challenge: we need leaders who keep a primary eye on leadership talent, which is the first step toward balancing management and leadership at the top. Empowering the most talented and passionate employees, rather than the most senior, makes traditional managers uncomfortable, so leaders with potentially innovative ideas rarely get invited to participate in the "inner circle."
|
||||
|
||||
Fortunately, I see these organizations beginning to realize that they need to get moving before they lose their competitive edge.
|
||||
|
||||
The truth is that there is no "right" or "wrong" choice for organizing a business. The choices an organization makes are simply the choices that determine their overall speed, strength, and agility.
|
||||
|
||||
They're beginning to understand that they need to provide talented employees with [safe spaces for experimentation][4]—an open and safe work environment, one in which employees can experiment with new ideas, learn from their mistakes, and [find that place][5] in the organization [where they thrive][6].
|
||||
|
||||
More and more frequently, organizations are choosing open approaches to building their cultures and processes, because their talent thrives better in environments based on transparency and trust. Employees in these organizations have more perspective and are actively involved in the design and development of the organization itself. They keep their eyes and ears "open" for new ideas and approaches—so the organization benefits from empowering them.
|
||||
|
||||
### Hybrid thinking
|
||||
|
||||
As [I've said before][7]: the transition from a conventional organization to a more open one is never a guaranteed success. During this transformation, you'll encounter periods in which traditional and open practices operate side by side, even mixed and shuffled. These are an organization's _hybrid_ phase.
|
||||
|
||||
When your organization enters this hybrid phase, it needs to begin thinking about changing its approach to talent management. In addition to its _individual_ transformation, it will need to balance the needs and perspectives of senior managers and leaders alongside _other_ management layers, which are beginning to shift. In short, it must establish a new vision and strategy for the development of leadership talent.
|
||||
|
||||
The starting point here is to create a safe and stimulating environment where mentors and coaches support these future leaders in their growth. During this hybrid period, you will be searching for the balance between passion and performance in the organization—which means you'll need to let go of deeply rooted beliefs, processes, and behaviors. In my opinion, this means focusing on the _human_ elements present in your organization, its leadership, and its flows of talent, without losing sight of organizational performance. This "letting go" doesn't happen quickly or immediately, like pressing a button, nor is it one that you can entirely influence. But it is an exciting and comprehensive journey that you and your organization will embark on.
|
||||
|
||||
And that journey begins with you. Are you ready for it?
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/21/3/open-spaces-leadership-talent
|
||||
|
||||
作者:[Jos Groen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jos-groen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/leaderscatalysts.jpg?itok=f8CwHiKm (Leaders are catalysts)
|
||||
[2]: https://opensource.com/open-organization/18/5/rethink-motivation-engagement
|
||||
[3]: https://opensource.com/open-organization/16/8/how-make-meritocracy-work
|
||||
[4]: https://opensource.com/open-organization/19/3/introduction-psychological-safety
|
||||
[5]: https://opensource.com/open-organization/17/9/own-your-open-career
|
||||
[6]: https://opensource.com/open-organization/17/12/drive-open-career-forward
|
||||
[7]: https://opensource.com/open-organization/20/6/organization-everyone-deserves
|
@ -0,0 +1,74 @@
|
||||
[#]: subject: (Linux powers the internet, confirms EU commissioner)
|
||||
[#]: via: (https://opensource.com/article/21/3/linux-powers-internet)
|
||||
[#]: author: (James Lovegrove https://opensource.com/users/jlo)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Linux powers the internet, confirms EU commissioner
|
||||
======
|
||||
EU celebrates the importance of open source software at the annual EU
|
||||
Open Source Policy Summit.
|
||||
![Penguin driving a car with a yellow background][1]
|
||||
|
||||
In 20 years of EU digital policy in Brussels, I have seen growing awareness and recognition among policymakers in Europe of the importance of open source software (OSS). A recent keynote by EU internal market commissioner Thierry Breton at the annual [EU Open Source Policy Summit][2] in February provides another example—albeit with a sense of urgency and strategic opportunity that has been largely missing in the past.
|
||||
|
||||
Commissioner Breton did more than just recognize the "long list of [OSS] success stories." He also underscored OSS's critical role in accelerating Europe's €750 billion recovery and the goal to further "embed open source" into Europe's longer-term policy objectives in the public sector and other key industrial sectors.
|
||||
|
||||
In addition to the commissioner's celebration that "Linux is powering the internet," there was a policy-related call to action to expand the OSS value proposition to many other areas of digital sovereignty. Indeed, with only 2.5 years of EU Commission mandate remaining, there is a welcome sense of urgency. I see three possible reasons for this: 1. fresh facts and figures, 2. compelling policy commitments, and 3. game-changing investment opportunities for Europe.
|
||||
|
||||
### 1. Fresh facts and figures
|
||||
|
||||
Commissioner Breton shared new facts and figures to better inform policymakers in Brussels and all European capitals. The EU's new [Open Source Study][3] reveals that the "economic impact of OSS is estimated to have been between €65 and €95 billion (2018 figures)" and an "increase of 10% [in code contributions] would generate in the future around additional €100 billion in EU GDP per year."
|
||||
|
||||
This EU report on OSS, the first since 2006, builds nicely on several other recent open source reports in Germany (from [Bitkom][4]) and France (from [CNLL/Syntec][5]), recent strategic IT analysis by the German federal government, and the [Berlin Declaration][6]'s December 2020 pledge for all EU member states to "implement common standards, modular architectures, and—when suitable—open source technologies in the development and deployment of cross-border digital solutions" by 2024, the end of current EU Commission's mandate.
|
||||
|
||||
### 2. Compelling policy commitments
|
||||
|
||||
Commissioner Breton's growth and sovereignty questions seemed to hinge on the need to bolster existing open source adoption and collaboration—notably "how to embed open source into public administration to make them more efficient and resilient" and "how to create an enabling framework for the private sector to invest in open source."
|
||||
|
||||
I would encourage readers to review the various [panel discussions][7] from the Policy Summit that address many of the important enabling factors (e.g., establishing open source program offices [OSPOs], open standards, public sector sharing and reuse, etc.). These will be tackled over the coming months with deeper dives by OpenForum Europe and other European associations (e.g., Bitkom's Open Source Day on 16 September), thereby bringing policymaking and open source code and collaboration closer together.
|
||||
|
||||
### 3. Game-changing investments
|
||||
|
||||
The European Parliament [recently approved][8] the final go-ahead for the €750 billion Next Generation European Union ([NGEU][9]) stimulus package. This game-changing investment is a once-in-a-generation opportunity to realize longstanding EU policy objectives while accelerating digital transformation in an open and sustainable fashion, as "each plan has to dedicate at least 37% of its budget to climate and at least 20% to digital actions."
|
||||
|
||||
During the summit, great insights into how Europe's public sector can further embrace open innovation in the context of these game-changing EU funds were shared by [OFE][10] and [Digital Europe][11] speakers from Germany, Italy, Portugal, Slovenia, FIWARE, and Red Hat. 2021 is fast becoming a critical year when this objective can be realized within the public sector and [industry][12].
|
||||
|
||||
### A call to action
|
||||
|
||||
Commissioner Breton's recognition of Linux is more than another political validation that "open source has won." It is a call to action to collaborate on accelerating European competitiveness and transformation, and to treat open source as a key to sovereignty (interoperability between services and portability of data and workloads) that reflects core European values.
|
||||
|
||||
Commissioner Breton is working closely with the EU executive vice president for a digital age, Margrethe Vestager, to roll out a swathe of regulatory carrots and sticks for the digital sector. Indeed, in the words of the Commission President Ursula von der Leyen at the recent [Masters of Digital 2021][13] event, "this year we are rewriting the rule book for our digital internal market. I want companies to know that across the European Union, there will be one set of digital rules instead of this patchwork of national rules."
|
||||
|
||||
In another 10 years, we will all look back on the past year and ask ourselves this question: did we "waste a good crisis" to realize [Europe's digital decade][14]?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/linux-powers-internet
|
||||
|
||||
作者:[James Lovegrove][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jlo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc (Penguin driving a car with a yellow background)
|
||||
[2]: https://openforumeurope.org/event/policy-summit-2021/
|
||||
[3]: https://ec.europa.eu/digital-single-market/en/news/study-and-survey-impact-open-source-software-and-hardware-eu-economy
|
||||
[4]: https://www.bitkom.org/Presse/Presseinformation/Open-Source-deutschen-Wirtschaft-angekommen
|
||||
[5]: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/technological-independence
|
||||
[6]: https://www.bmi.bund.de/SharedDocs/downloads/EN/eu-presidency/gemeinsame-erklaerungen/berlin-declaration-digital-society.html
|
||||
[7]: https://www.youtube.com/user/openforumeurope/videos
|
||||
[8]: https://www.europarl.europa.eu/news/en/press-room/20210204IPR97105/parliament-gives-go-ahead-to-EU672-5-billion-recovery-and-resilience-facility
|
||||
[9]: https://ec.europa.eu/info/strategy/recovery-plan-europe_en
|
||||
[10]: https://www.youtube.com/watch?v=xU7cfhVk3_s&feature=emb_logo
|
||||
[11]: https://www.youtube.com/watch?v=Jq3s6cdsA0I&feature=youtu.be
|
||||
[12]: https://www.digitaleurope.org/wp/wp-content/uploads/2021/02/DIGITALEUROPE-recommendations-on-the-Update-to-the-EU-Industrial-Strategy_Industrial-Forum-questionnaire-comms.pdf
|
||||
[13]: https://www.youtube.com/watch?v=EDzQI7q2YKc
|
||||
[14]: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12900-Europe-s-digital-decade-2030-digital-targets
|
@ -1,304 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Using Python to explore Google's Natural Language API)
|
||||
[#]: via: (https://opensource.com/article/19/7/python-google-natural-language-api)
|
||||
[#]: author: (JR Oakes https://opensource.com/users/jroakes)
|
||||
|
||||
Using Python to explore Google's Natural Language API
|
||||
======
|
||||
Google's API can surface clues to how Google is classifying your site
|
||||
and ways to tweak your content to improve search results.
|
||||
![magnifying glass on computer screen][1]
|
||||
|
||||
As a technical search engine optimizer, I am always looking for ways to use data in novel ways to better understand how Google ranks websites. I recently investigated whether Google's [Natural Language API][2] could better inform how Google may be classifying a site's content.
|
||||
|
||||
Although there are [open source NLP tools][3], I wanted to explore Google's tools under the assumption it might use the same tech in other products, like Search. This article introduces Google's Natural Language API and explores common natural language processing (NLP) tasks and how they might be used to inform website content creation.
|
||||
|
||||
### Understanding the data types
|
||||
|
||||
To begin, it is important to understand the types of data that Google's Natural Language API returns.
|
||||
|
||||
#### Entities
|
||||
|
||||
Entities are text phrases that can be tied back to something in the physical world. Named entity recognition (NER) is a difficult part of NLP because tools often need to look at the full context around words to understand their usage. For example, homographs are spelled the same but have multiple meanings. Does "lead" in a sentence refer to a metal (a noun), causing someone to move (a verb), or the main character in a play (also a noun)? Google has 12 distinct types of entities, as well as a 13th catch-all category called "UNKNOWN." Some of the entities tie back to Wikipedia articles, suggesting [Knowledge Graph][4] influence on the data. Each entity returns a salience score, which is its overall relevance to the supplied text.
|
||||
|
||||
![Entities][5]
|
||||
|
||||
#### Sentiment
|
||||
|
||||
Sentiment, a view of or attitude towards something, is measured at the document and sentence level and for individual entities discovered in the document. The score of the sentiment ranges from -1.0 (negative) to 1.0 (positive). The magnitude represents the non-normalized strength of emotion; it ranges between 0.0 and infinity.
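As a rough illustration of how these two numbers combine, a near-zero score with a large magnitude usually indicates mixed emotions that cancel out, rather than a truly neutral document. The thresholds in this small sketch are arbitrary assumptions of mine, not values defined by the API:

```python
# score: -1.0 (negative) to 1.0 (positive); magnitude: 0.0 to infinity
def describe_sentiment(score, magnitude, neutral_band=0.25):
    """Rough interpretation of the API's two sentiment numbers.
    The neutral_band and magnitude cutoffs are illustrative guesses."""
    if abs(score) < neutral_band:
        # A low score with a high magnitude suggests mixed emotions
        # canceling out, not a genuinely neutral document.
        return "mixed" if magnitude > 1.0 else "neutral"
    return "positive" if score > 0 else "negative"

print(describe_sentiment(0.8, 3.2))   # positive
print(describe_sentiment(-0.1, 4.0))  # mixed
print(describe_sentiment(0.0, 0.3))   # neutral
```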
|
||||
|
||||
![Sentiment][6]
|
||||
|
||||
#### Syntax
|
||||
|
||||
Syntax parsing contains most of the common NLP activities found in better libraries, like [lemmatization][7], [part-of-speech tagging][8], and [dependency-tree parsing][9]. NLP mainly deals with helping machines understand text and the relationship between words. Syntax parsing is a foundational part of most language-processing or understanding tasks.
|
||||
|
||||
![Syntax][10]
|
||||
|
||||
#### Categories
|
||||
|
||||
Categories assign the entire given content to a specific industry or topical category with a confidence score from 0.0 to 1.0. The categories appear to be the same audience and website categories used by other Google tools, like AdWords.
|
||||
|
||||
![Categories][11]
|
||||
|
||||
### Pulling some data
|
||||
|
||||
Now I'll pull some sample data to play around with. I gathered some search queries and their corresponding URLs using Google's [Search Console API][12]. Google Search Console is a tool that reports the terms people use to find a website's pages with Google Search. This [open source Jupyter notebook][13] allows you to pull similar data about your website. For this example, I pulled Google Search Console data on a website (which I won't name) generated between January 1 and June 1, 2019, and restricted it to queries that received at least one click (as opposed to just impressions).
|
||||
|
||||
This dataset contains information on 2,969 pages and 7,144 queries that displayed the website's pages in Google Search results. The table below shows that the vast majority of pages received very few clicks, as this site focuses on what is called long-tail (more specific and usually longer) as opposed to short-tail (very general, higher search volume) search queries.
|
||||
|
||||
![Histogram of clicks for all pages][14]
|
||||
|
||||
To reduce the dataset size and get only top-performing pages, I limited the dataset to pages that received at least 20 impressions over the period. This is the histogram of clicks by page for this refined dataset, which includes 723 pages:
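The filtering step can be sketched with pandas; the DataFrame below is a made-up stand-in for the Search Console export, with one row per (page, query) pair:

```python
import pandas as pd

# Hypothetical Search Console export: one row per (page, query) pair.
df = pd.DataFrame({
    "page": ["/a", "/a", "/b", "/c", "/c", "/c"],
    "query": ["q1", "q2", "q3", "q4", "q5", "q6"],
    "clicks": [3, 1, 0, 10, 2, 5],
    "impressions": [40, 12, 5, 100, 30, 25],
})

# Total clicks and impressions per page, then keep only pages
# that received at least 20 impressions over the period.
per_page = df.groupby("page")[["clicks", "impressions"]].sum()
top_pages = per_page[per_page["impressions"] >= 20]
print(top_pages)
```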
|
||||
|
||||
![Histogram of clicks for subset of pages][15]
|
||||
|
||||
### Using Google's Natural Language API library in Python
|
||||
|
||||
To test out the API, create a small script that leverages the **[google-cloud-language][16]** library in Python. The following code is Python 3.5+.
|
||||
|
||||
First, activate a new virtual environment and install the libraries. Replace **<your-env>** with a unique name for the environment.
|
||||
|
||||
|
||||
```
|
||||
virtualenv <your-env>
|
||||
source <your-env>/bin/activate
|
||||
pip install --upgrade google-cloud-language
|
||||
pip install --upgrade requests
|
||||
```
|
||||
|
||||
This script extracts HTML from a URL and feeds the HTML to the Natural Language API. It returns a dictionary of **sentiment**, **entities**, and **categories**, where the values for these keys are all lists. I used a Jupyter notebook to run this code because it makes it easier to annotate and retry code using the same kernel.
|
||||
|
||||
|
||||
```
# Import needed libraries
import requests

from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types

# Build language API client (requires a service account key file)
client = language.LanguageServiceClient.from_service_account_json('services.json')

# Define functions
def pull_googlenlp(client, url, invalid_types=['OTHER'], **data):

    html = load_text_from_url(url, **data)

    if not html:
        return None

    document = types.Document(
        content=html,
        type=enums.Document.Type.HTML)

    features = {'extract_syntax': True,
                'extract_entities': True,
                'extract_document_sentiment': True,
                'extract_entity_sentiment': True,
                'classify_text': False
                }

    # Annotate for sentiment and entities, then classify separately
    response = client.annotate_text(document=document, features=features)
    sentiment = response.document_sentiment
    entities = response.entities

    response = client.classify_text(document)
    categories = response.categories

    def get_type(entity_type):
        # Map the numeric entity type to its name, e.g. 'PERSON'
        return enums.Entity.Type(entity_type).name

    result = {'sentiment': [], 'entities': [], 'categories': []}

    if sentiment:
        result['sentiment'] = [{'magnitude': sentiment.magnitude, 'score': sentiment.score}]

    for entity in entities:
        if get_type(entity.type) not in invalid_types:
            result['entities'].append({'name': entity.name,
                                       'type': get_type(entity.type),
                                       'salience': entity.salience,
                                       'wikipedia_url': entity.metadata.get('wikipedia_url', '-')})

    for category in categories:
        result['categories'].append({'name': category.name, 'confidence': category.confidence})

    return result


def load_text_from_url(url, **data):

    timeout = data.get('timeout', 20)

    try:
        print("Extracting text from: {}".format(url))
        response = requests.get(url, timeout=timeout)

        if response.status_code == 200 and len(response.text) > 0:
            return response.text

        return None

    except Exception:
        print('Problem with url: {0}.'.format(url))
        return None
```
|
||||
|
||||
To access the API, follow Google's [quickstart instructions][17] to create a project in Google Cloud Console, enable the API, and download a service account key. Afterward, you should have a JSON file that looks similar to this:
|
||||
|
||||
![services.json file][18]
|
||||
|
||||
Upload it to your project folder with the name **services.json**.
|
||||
|
||||
Then you can pull the API data for any URL (such as Opensource.com) by running the following:
|
||||
|
||||
|
||||
```
|
||||
url = "https://opensource.com/article/19/6/how-ssh-running-container"
pull_googlenlp(client, url)
|
||||
```
|
||||
|
||||
If it's set up correctly, you should see this output:
|
||||
|
||||
![Output from pulling API data][19]
|
||||
|
||||
To make it easier to get started, I created a [Jupyter Notebook][20] that you can download and use to test extracting web pages' entities, categories, and sentiment. I prefer using [JupyterLab][21], which is an extension of Jupyter Notebooks that includes a file viewer and other enhanced user experience features. If you're new to these tools, I think [Anaconda][22] is the easiest way to get started using Python and Jupyter. It makes installing and setting up Python, as well as common libraries, very easy, especially on Windows.
|
||||
|
||||
### Playing with the data
|
||||
|
||||
With these functions that scrape the HTML of the given page and pass it to the Natural Language API, I can run some analysis across the 723 URLs. First, I'll look at the categories relevant to the site by looking at the count of returned top categories across all pages.
|
||||
|
||||
#### Categories
|
||||
|
||||
![Categories data from example site][23]
|
||||
|
||||
This seems to be a fairly accurate representation of the key themes of this particular site. Looking at a single query that one of the top-performing pages ranks for, I can compare the other ranking pages in Google's results for that same query.
|
||||
|
||||
* _URL 1 | Top Category: /Law & Government/Legal (0.5099999904632568) of 1 total categories._
|
||||
* _No categories returned._
|
||||
* _URL 3 | Top Category: /Internet & Telecom/Mobile & Wireless (0.6100000143051147) of 1 total categories._
|
||||
* _URL 4 | Top Category: /Computers & Electronics/Software (0.5799999833106995) of 2 total categories._
|
||||
* _URL 5 | Top Category: /Internet & Telecom/Mobile & Wireless/Mobile Apps & Add-Ons (0.75) of 1 total categories._
|
||||
* _No categories returned._
|
||||
* _URL 7 | Top Category: /Computers & Electronics/Software/Business & Productivity Software (0.7099999785423279) of 2 total categories._
|
||||
* _URL 8 | Top Category: /Law & Government/Legal (0.8999999761581421) of 3 total categories._
|
||||
* _URL 9 | Top Category: /Reference/General Reference/Forms Guides & Templates (0.6399999856948853) of 1 total categories._
|
||||
* _No categories returned._
|
||||
|
||||
|
||||
|
||||
The numbers in parentheses above represent Google's confidence that the content of the page is relevant for that category. The eighth result has much higher confidence than the first result for the same category, so this doesn't seem to be a magic bullet for defining relevance for ranking. Also, the categories are much too broad to make sense for a specific search topic.
|
||||
|
||||
Looking at average confidence by ranking position, there doesn't seem to be a correlation between these two metrics, at least for this dataset:
|
||||
|
||||
![Plot of average confidence by ranking position ][24]
|
||||
|
||||
Both of these approaches make sense to review for a website at scale to ensure the content categories seem appropriate, and boilerplate or sales content isn't moving your pages out of relevance for your main expertise area. Think if you sell industrial supplies, but your pages return _Marketing_ as the main category. There doesn't seem to be a strong suggestion that category relevancy has anything to do with how well you rank, at least at a page level.
|
||||
|
||||
#### Sentiment
|
||||
|
||||
I won't spend much time on sentiment. The pages that returned a sentiment from the API all fell into two bins, 0.1 and 0.2, which is almost neutral. Based on the histogram, it is easy to tell that sentiment doesn't provide much value here. It would be a much more interesting metric to run for a news or opinion site, to measure the correlation of sentiment to median rank for particular pages.
|
||||
|
||||
![Histogram of sentiment for unique pages][25]
|
||||
|
||||
#### Entities
|
||||
|
||||
Entities were the most interesting part of the API, in my opinion. This is a selection of the top entities, across all pages, by salience (or relevancy to the page). Notice that Google is inferring different types for the same terms (Bill of Sale), perhaps incorrectly. This is caused by the terms appearing in different contexts in the content.

![Top entities for example site][26]

Then I looked at each entity type, individually and all together, to see whether there was any correlation between the salience of the entity and the best-ranking position of the page. For each type, I used the salience (overall relevance to the page) of the top entity of that type, ordered by salience (descending).

Some of the entity types returned zero salience across all examples, so I omitted those results from the charts below.

![Correlation between salience and best ranking position][27]

The **Consumer Good** entity type had the highest positive correlation, with a Pearson correlation of 0.15854, although since lower-numbered rankings are better, the **Person** entity had the best result with a -0.15483 correlation. This is an extremely small sample set, especially for individual entity types, so I can't make too much of the data. I didn't find any value with a strong correlation, but the **Person** entity makes the most sense. Sites usually have pages about their chief executive and other key employees, and these pages are very likely to do well in search results for those queries.

Moving on, while looking at the site holistically, the following themes emerge based on **entity name** and **entity type**.

![Themes based on entity name and entity type][28]

I blurred a few results that seem too specific, to mask the site's identity. Thematically, the name information is a good way to look topically at your (or a competitor's) site to see its core themes. This was done based only on the example site's ranking URLs and not all the site's possible URLs (since Search Console data only reports on pages that received impressions in Google), but the results would be even more interesting if you were to pull a site's main ranking URLs from a tool like [Ahrefs][29], which tracks many, many queries and the Google results for those queries.

The other interesting piece in the entity data is that entities marked **CONSUMER_GOOD** tended to "look" like results I have seen in Knowledge Results, i.e., the Google Search results on the right-hand side of the page.

![Google search results][30]

Of the **Consumer Good** entity names from our dataset that had three or more words, 5.8% had the same Knowledge Results as Google's results for the entity name. This means that if you searched for the term or phrase in Google, the block on the right (e.g., the Knowledge Results showing Linux above) would display on the search result page. Since Google "picks" an exemplar webpage to represent the entity, it is a good opportunity to identify chances to be singularly featured in search results. Also of interest: of the 5.8% of names that displayed these Knowledge Results in Google, none of the entities had Wikipedia URLs returned from the Natural Language API. This is interesting enough to warrant additional analysis, and it would be especially useful for more esoteric topics that traditional global rank-tracking tools, like Ahrefs, don't have in their databases.

As mentioned, the Knowledge Results can be important to site owners who want to have their content featured in Google, as they are strongly highlighted on desktop search. They may also, hypothetically, line up with knowledge-base topics from Google [Discover][31], an offering for Android and iOS that attempts to surface content for users based on topics they are interested in but haven't explicitly searched for.

### Wrapping up

This article went over Google's Natural Language API, shared some code, and investigated ways this API may be useful for site owners. The key takeaways are:

  * Learning to use Python and Jupyter Notebooks opens your data-gathering tasks to a world of incredible APIs and open source projects (like Pandas and NumPy) built by incredibly smart and talented people.
  * Python allows me to quickly pull and test my hypothesis about the value of an API for a particular purpose.
  * Passing a website's pages through Google's categorization API may be a good check to ensure its content falls into the correct thematic categories. Doing this for competitors' sites may also offer guidance on where to tune up or create content.
  * Google's sentiment score didn't seem to be an interesting metric for the example site, but it may be for news or opinion-based sites.
  * Google's found entities gave a much more granular, topic-level view of the website holistically and, like categorization, would be very interesting to use in competitive content analysis.
  * Entities may help define opportunities where your content can line up with Google Knowledge blocks in search results or Google Discover results. With 5.8% of the longer (by word count) **Consumer Good** entities in our result set displaying these Knowledge Results, there may be opportunities for some sites to better optimize their pages' salience scores for these entities, to stand a better chance of capturing this featured placement in Google search results or Google Discover suggestions.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/python-google-natural-language-api

作者:[JR Oakes][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jroakes
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
[2]: https://cloud.google.com/natural-language/#natural-language-api-demo
[3]: https://opensource.com/article/19/3/natural-language-processing-tools
[4]: https://en.wikipedia.org/wiki/Knowledge_Graph
[5]: https://opensource.com/sites/default/files/uploads/entities.png (Entities)
[6]: https://opensource.com/sites/default/files/uploads/sentiment.png (Sentiment)
[7]: https://en.wikipedia.org/wiki/Lemmatisation
[8]: https://en.wikipedia.org/wiki/Part-of-speech_tagging
[9]: https://en.wikipedia.org/wiki/Parse_tree#Dependency-based_parse_trees
[10]: https://opensource.com/sites/default/files/uploads/syntax.png (Syntax)
[11]: https://opensource.com/sites/default/files/uploads/categories.png (Categories)
[12]: https://developers.google.com/webmaster-tools/
[13]: https://github.com/MLTSEO/MLTS/blob/master/Demos.ipynb
[14]: https://opensource.com/sites/default/files/uploads/histogram_1.png (Histogram of clicks for all pages)
[15]: https://opensource.com/sites/default/files/uploads/histogram_2.png (Histogram of clicks for subset of pages)
[16]: https://pypi.org/project/google-cloud-language/
[17]: https://cloud.google.com/natural-language/docs/quickstart
[18]: https://opensource.com/sites/default/files/uploads/json_file.png (services.json file)
[19]: https://opensource.com/sites/default/files/uploads/output.png (Output from pulling API data)
[20]: https://github.com/MLTSEO/MLTS/blob/master/Tutorials/Google_Language_API_Intro.ipynb
[21]: https://github.com/jupyterlab/jupyterlab
[22]: https://www.anaconda.com/distribution/
[23]: https://opensource.com/sites/default/files/uploads/categories_2.png (Categories data from example site)
[24]: https://opensource.com/sites/default/files/uploads/plot.png (Plot of average confidence by ranking position)
[25]: https://opensource.com/sites/default/files/uploads/histogram_3.png (Histogram of sentiment for unique pages)
[26]: https://opensource.com/sites/default/files/uploads/entities_2.png (Top entities for example site)
[27]: https://opensource.com/sites/default/files/uploads/salience_plots.png (Correlation between salience and best ranking position)
[28]: https://opensource.com/sites/default/files/uploads/themes.png (Themes based on entity name and entity type)
[29]: https://ahrefs.com/
[30]: https://opensource.com/sites/default/files/uploads/googleresults.png (Google search results)
[31]: https://www.blog.google/products/search/introducing-google-discover/
@ -1,249 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (9 favorite open source tools for Node.js developers)
[#]: via: (https://opensource.com/article/20/1/open-source-tools-nodejs)
[#]: author: (Hiren Dhadhuk https://opensource.com/users/hirendhadhuk)

9 favorite open source tools for Node.js developers
======
Of the wide range of tools available to simplify Node.js development,
here are the 9 best.
![Tools illustration][1]

I recently read a survey on [StackOverflow][2] that said more than 49% of developers use Node.js for their projects. This came as no surprise to me.

As an avid user of technology, I think it's safe to say that the introduction of Node.js led to a new era of software development. It is now one of the most preferred technologies for software development, right next to JavaScript.

### What is Node.js, and why is it so popular?

Node.js is a cross-platform, open source runtime environment for executing JavaScript code outside of the browser. It is built on Chrome's V8 JavaScript engine and is mainly used for building fast, scalable, and efficient network applications.

I remember when we used to sit for hours and hours coordinating between front-end and back-end developers who were writing different scripts for each side. All of this changed as soon as Node.js came into the picture. I believe that the one thing that drives developers towards this technology is its two-way efficiency.

With Node.js, you can run your code simultaneously on both the client and the server side, speeding up the whole process of development. Node.js bridges the gap between front-end and back-end development and makes the development process much more efficient.

### A wave of Node.js tools

For 49% of all developers (including me), Node.js is at the top of the pyramid when it comes to front-end and back-end development. There are tons of [Node.js use cases][3] that have helped me and my team deliver complex projects within our deadlines. Fortunately, Node.js' rising popularity has also produced a wave of open source projects and tools to help developers working with the environment.

Recently, there has been a sudden increase in demand for projects built with Node.js. Sometimes, I find it quite challenging to manage these projects and keep up the pace while delivering high-quality results. So I decided to automate certain aspects of development using some of the most efficient of the many open source tools available for Node.js developers.

In my extensive experience with Node.js, I've worked with a wide range of tools that have helped me with the overall development process—from streamlining the coding process to monitoring to content management.

To help my fellow Node.js developers, I compiled this list of 9 of my favorite open source tools for simplifying Node.js development.

### Webpack

[Webpack][4] is a handy JavaScript module bundler used to simplify front-end development. It detects modules with dependencies and transforms them into static assets that represent the modules.

You can install the tool through either the npm or Yarn package manager.

With npm:

```
npm install --save-dev webpack
```

With Yarn:

```
yarn add webpack --dev
```

Webpack creates single bundles or multiple chains of assets that can be loaded asynchronously at runtime. Each asset does not have to be loaded individually. Bundling and serving assets becomes quick and efficient with the Webpack tool, making the overall user experience better and reducing the developer's hassle in managing load time.
### Strapi

[Strapi][5] is an open source headless content management system (CMS). A headless CMS is basically software that lets you manage your content devoid of a prebuilt frontend. It is a backend-only system that functions using RESTful APIs.

You can install Strapi through Yarn or npx.

With Yarn:

```
yarn create strapi-app my-project --quickstart
```

With npx:

```
npx create-strapi-app my-project --quickstart
```

Strapi's goal is to fetch and deliver your content in a structured manner across any device. The CMS makes it easy to manage your applications' content and make sure it is dynamic and accessible across any device.

It provides a lot of features, including file upload, a built-in email system, JSON Web Token (JWT) authentication, and auto-generated documentation. I find it very convenient, as it simplifies the overall CMS and gives me full autonomy in editing, creating, or deleting all types of content.

In addition, the content structure built through Strapi is extremely flexible because you can create and reuse groups of content and customizable APIs.

### Broccoli

[Broccoli][6] is a powerful build tool that runs on an [ES6][7] module. Build tools are software that let you assemble all the different assets within your application or website, e.g., images, CSS, JavaScript, etc., into one distributable format. Broccoli brands itself as the "asset pipeline for ambitious applications."

You need a project directory to work with Broccoli. Once you have the project directory in place, you can install Broccoli with npm using:

```
npm install --save-dev broccoli
npm install --global broccoli-cli
```

You can also use Yarn for installation.

The current version of Node.js would be the best version for the tool, as it provides long-term support. It helps you avoid the hassle of updating and reinstalling as you go. Once the installation process is complete, you can include the build specification in your Brocfile.js.

In Broccoli, the unit of abstraction is a tree, which stores files and subdirectories within specific subdirectories. Therefore, before you build, you must have a specific idea of what you want your build to look like.

The best part about Broccoli is that it comes with a built-in server for development that lets you host your assets on a local HTTP server. Broccoli is great for streamlined rebuilds, as its concise architecture and flexible ecosystem boost rebuild and compilation speeds. Broccoli lets you get organized to save time and maximize productivity during development.

### Danger

[Danger][8] is a very handy open source tool for streamlining your pull request (PR) checks. As Danger's library description says, the tool helps you "formalize" your code review system by managing PR checks. Danger integrates with your CI and helps you speed up the review process.

Integrating Danger with your project is an easy step-by-step process—you just need to include the Danger module and create a Danger file for each project. However, it's more convenient to create a Danger account (easy to do through GitHub or Bitbucket), then set up access tokens for your open source software projects.

Danger can be installed via npm or Yarn. With Yarn, run `yarn add danger -D` to add it to your package.json as a dev dependency.

After you add Danger to your CI, you can:

  * Highlight build artifacts of importance
  * Manage sprints by enforcing links to tools like Trello and Jira
  * Enforce changelogs
  * Utilize descriptive labels
  * And much more

For example, you can design a system that defines the team culture and sets out specific rules for code review and PR checks. Common issues can be solved based on the metadata Danger provides along with its extensive plugin ecosystem.
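One such rule might warn when source files change without a changelog entry. The sketch below factors the rule into a plain function so it can run without Danger installed; in a real Dangerfile you would feed it `danger.git.modified_files` and report the strings via `warn()`. The `src/` and `CHANGELOG.md` paths are assumptions for illustration.

```javascript
// Hypothetical Dangerfile-style rule, written as a standalone function;
// the 'src/' and 'CHANGELOG.md' paths are placeholder assumptions.
function changelogWarnings(modifiedFiles) {
  const warnings = [];
  const touchedSource = modifiedFiles.some((f) => f.startsWith('src/'));
  const touchedChangelog = modifiedFiles.includes('CHANGELOG.md');
  if (touchedSource && !touchedChangelog) {
    warnings.push('Source changed but CHANGELOG.md was not updated.');
  }
  return warnings;
}

console.log(changelogWarnings(['src/index.js', 'README.md']));
```

Keeping the rule as a pure function also makes the review policy itself unit-testable, independent of CI.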
### Snyk

Cybersecurity is a major concern for developers. [Snyk][9] is one of the most well-known tools to fix vulnerabilities in open source components. It started as a project to fix vulnerabilities in Node.js projects and has evolved to detect and fix vulnerabilities in Ruby, Java, Python, and Scala apps as well. Snyk mainly runs in four stages:

  * Finding vulnerable dependencies
  * Fixing specific vulnerabilities
  * Preventing security risks with PR checks
  * Monitoring apps continuously

Snyk can be integrated with your project at any stage, including coding, CI/CD, and reporting. I find it extremely helpful for checking npm packages in Node.js projects for security risks, either on demand or at build time. You can also run PR checks for your applications in GitHub to make your projects more secure. Snyk also provides a range of integrations that you can use to monitor dependencies and fix specific problems.

To run Snyk on your machine locally, you can install it through npm:

```
npm install -g snyk
```

### Migrat

[Migrat][10] is an extremely easy-to-use data-migration tool that uses plain text. It works across a diverse range of stacks and processes, which makes it even more convenient. You can install Migrat with a single command:

```
$ npm install -g migrat
```

Migrat is not specific to a particular database engine. It supports multi-node environments, as migrations can run on one node globally or once per server. What makes Migrat convenient is that it facilitates passing context to each migration.

You can define what each migration is for (e.g., database sets, connections, logging interfaces, etc.). Moreover, to avoid haphazard migrations, where multiple servers run migrations globally, Migrat provides a global lock while the process is running so that it can run only once globally. It also comes with a range of plugins for SQL databases, Slack, HipChat, and the Datadog dashboard. You can send live migrations to any of these platforms.

### Clinic.js

[Clinic.js][11] is an open source monitoring tool for Node.js projects. It combines three different tools—Doctor, Bubbleprof, and Flame—that help you monitor, detect, and solve performance issues with Node.js.

You can install Clinic.js from npm by running this command:

```
$ npm install clinic
```

You can choose which of the three tools that comprise Clinic.js you want to use based on which aspect of your project you want to monitor and the report you want to generate:

  * Doctor provides detailed metrics by injecting probes and provides recommendations on the overall health of your project.
  * Bubbleprof is great for profiling and generates metrics using async_hooks.
  * Flame is great for uncovering hot paths and bottlenecks in your code.

### PM2

Monitoring is one of the most important aspects of any backend development process. [PM2][12] is a process management tool for Node.js that helps developers monitor multiple aspects of their projects, such as logs, delays, and speed. The tool is compatible with Linux, macOS, and Windows and supports all Node.js versions starting from Node.js 8.X.

You can install PM2 with npm using:

```
$ npm install pm2 -g
```

If you do not already have Node.js installed, you can use:

```
$ wget -qO- https://getpm2.com/install.sh | bash
```

Once it's installed, start the application with:

```
$ pm2 start app.js
```

The best part about PM2 is that it lets you run your apps in cluster mode. You can spawn a process for multiple CPU cores at a time. This makes it easy to enhance application performance and maximize reliability. PM2 is also great for updates, as you can update your apps and reload them with zero downtime using the "hot reload" option. Overall, it's a great tool to simplify process management for Node.js applications.
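A cluster-mode setup is often kept in a PM2 process file. This is a hedged sketch of an `ecosystem.config.js`; the app name and script path are placeholder assumptions, not values from the article.

```javascript
// Hypothetical ecosystem.config.js for PM2 cluster mode;
// 'my-app' and 'app.js' are placeholder values.
const config = {
  apps: [
    {
      name: 'my-app',
      script: 'app.js',
      instances: 'max',        // spawn one worker per CPU core
      exec_mode: 'cluster',    // enable PM2's cluster mode
    },
  ],
};

module.exports = config;
```

With a file like this in place, `pm2 start ecosystem.config.js` would launch the clustered workers.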
### Electrode

[Electrode][13] is an open source application platform from Walmart Labs. The platform helps you build large-scale, universal React/Node.js applications in a structured manner.

The Electrode app generator lets you build a flexible core focused on the code, provides some great modules to add complex features to the app, and comes with a wide range of tools to optimize your app's Node.js bundle.

Electrode can be installed using npm:

```
npm install -g electrode-ignite xclap-cli
```

Once the installation is finished, you can start the app using Ignite and dive right in with the Electrode app generator.

### Which are your favorite?

These are just a few of the always-growing list of open source tools that can come in handy at different stages when working with Node.js. Which are your go-to open source Node.js tools? Please share your recommendations in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/open-source-tools-nodejs

作者:[Hiren Dhadhuk][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/hirendhadhuk
[b]: https://github.com/lujun9972

[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_hardware_purple.png?itok=3NdVoYhl (Tools illustration)
[2]: https://insights.stackoverflow.com/survey/2019#technology-_-other-frameworks-libraries-and-tools
[3]: https://www.simform.com/nodejs-use-case/
[4]: https://webpack.js.org/
[5]: https://strapi.io/
[6]: https://broccoli.build/
[7]: https://en.wikipedia.org/wiki/ECMAScript#6th_Edition_-_ECMAScript_2015
[8]: https://danger.systems/
[9]: https://snyk.io/
[10]: https://github.com/naturalatlas/migrat
[11]: https://clinicjs.org/
[12]: https://pm2.keymetrics.io/
[13]: https://www.electrode.io/
@ -1,157 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing processes on Linux with kill and killall)
[#]: via: (https://opensource.com/article/20/1/linux-kill-killall)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)

Managing processes on Linux with kill and killall
======
Know how to terminate processes and reclaim system resources with the
ps, kill, and killall commands.
![Penguin with green background][1]

In Linux, every program and daemon is a "process." Most processes represent a single running program. Other programs can fork off other processes, such as processes to listen for certain things to happen and then respond to them. And each process requires a certain amount of memory and processing power. The more processes you have running, the more memory and CPU cycles you'll need. On older systems, like my seven-year-old laptop, or smaller computers, like the Raspberry Pi, you can get the most out of your system if you keep an eye on what processes you have running in the background.

You can get a list of running processes with the **ps** command. You'll usually want to give **ps** some options to show more information in its output. I like to use the **-e** option to see every process running on my system, and the **-f** option to get full details about each process. Here are some examples:

```
$ ps
    PID TTY          TIME CMD
  88000 pts/0    00:00:00 bash
  88052 pts/0    00:00:00 ps
  88053 pts/0    00:00:00 head
```

```
$ ps -e | head
    PID TTY          TIME CMD
      1 ?        00:00:50 systemd
      2 ?        00:00:00 kthreadd
      3 ?        00:00:00 rcu_gp
      4 ?        00:00:00 rcu_par_gp
      6 ?        00:00:02 kworker/0:0H-events_highpri
      9 ?        00:00:00 mm_percpu_wq
     10 ?        00:00:01 ksoftirqd/0
     11 ?        00:00:12 rcu_sched
     12 ?        00:00:00 migration/0
```

```
$ ps -ef | head
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 13:51 ?        00:00:50 /usr/lib/systemd/systemd --switched-root --system --deserialize 36
root           2       0  0 13:51 ?        00:00:00 [kthreadd]
root           3       2  0 13:51 ?        00:00:00 [rcu_gp]
root           4       2  0 13:51 ?        00:00:00 [rcu_par_gp]
root           6       2  0 13:51 ?        00:00:02 [kworker/0:0H-kblockd]
root           9       2  0 13:51 ?        00:00:00 [mm_percpu_wq]
root          10       2  0 13:51 ?        00:00:01 [ksoftirqd/0]
root          11       2  0 13:51 ?        00:00:12 [rcu_sched]
root          12       2  0 13:51 ?        00:00:00 [migration/0]
```

The last example shows the most detail. On each line, the UID (user ID) shows the user that owns the process. The PID (process ID) represents the numerical ID of each process, and PPID (parent process ID) shows the ID of the process that spawned this one. In any Unix system, processes count up from PID 1, the first process to run once the kernel starts up. Here, **systemd** is the first process, which spawned **kthreadd**. And **kthreadd** created other processes including **rcu_gp**, **rcu_par_gp**, and a bunch of other ones.

### Process management with the kill command

The system will take care of most background processes on its own, so you don't need to worry about them. You should only have to get involved in managing any processes that you create, usually by running applications. While many applications run one process at a time (think about your music player or terminal emulator or game), other applications might create background processes. Some of these might keep running when you exit the application so they can get back to work quickly the next time you start the application.

Process management is an issue when I run Chromium, the open source base for Google's Chrome browser. Chromium works my laptop pretty hard and fires off a lot of extra processes. Right now, I can see these Chromium processes running with only five tabs open:

```
$ ps -ef | fgrep chromium
jhall 66221 [...] /usr/lib64/chromium-browser/chromium-browser [...]
jhall 66230 [...] /usr/lib64/chromium-browser/chromium-browser [...]
[...]
jhall 66861 [...] /usr/lib64/chromium-browser/chromium-browser [...]
jhall 67329 65132 0 15:45 pts/0 00:00:00 grep -F chromium
```

I've omitted some lines, but there are 20 Chromium processes and one **grep** process that is searching for the string "chromium."

```
$ ps -ef | fgrep chromium | wc -l
21
```

But after I exit Chromium, those processes remain open. How do you shut them down and reclaim the memory and CPU that those processes are taking up?

The **kill** command lets you terminate a process. In the simplest case, you tell **kill** the PID of what you want to stop. For example, to terminate each of these processes, I would need to execute the **kill** command against each of the 20 Chromium process IDs. One way to do that is with a command line that gets the Chromium PIDs and another that runs **kill** against that list:

```
$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}'
66221
66230
66239
66257
66262
66283
66284
66285
66324
66337
66360
66370
66386
66402
66503
66539
66595
66734
66848
66861
69702

$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}' > /tmp/pids
$ kill $( cat /tmp/pids)
```

Those last two lines are the key. The first command line generates a list of process IDs for the Chromium browser. The second command line runs the **kill** command against that list of process IDs.
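The same idea works without the temporary file by using command substitution. The snippet below demonstrates only the PID-extraction step on a canned line of `ps` output, so it is safe to run; the sample PID is made up, and the combined one-liner in the comment is shown for reference only.

```shell
# Extract the PID column (field 2) from a sample line of `ps -ef` output.
# The line below is canned sample data, not live process information.
sample='jhall 66221 65132 0 15:40 pts/0 00:00:01 /usr/lib64/chromium-browser/chromium-browser'
echo "$sample" | awk '{print $2}'

# In practice, the two commands collapse into one (destructive, run with care):
#   kill $(ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}')
```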
### Introducing the killall command

A simpler way to stop a bunch of processes all at once is to use the **killall** command. As you might guess by the name, **killall** terminates all processes that match a name. That means we can use this command to stop all of our rogue Chromium processes. This is as simple as:

```
$ killall /usr/lib64/chromium-browser/chromium-browser
```

But be careful with **killall**. This command can terminate any process that matches what you give it. That's why I like to first use **ps -ef** to check my running processes, then run **killall** against the exact path to the command that I want to stop.

You might also want to use the **-i** or **\--interactive** option to ask **killall** to prompt you before it stops each process.

**killall** also supports options to select processes that are older than a specific time using the **-o** or **\--older-than** option. This can be helpful if you discover a set of rogue processes that have been running unattended for several days, for example. Or you can select processes that are younger than a specific time, such as runaway processes you recently started. Use the **-y** or **\--younger-than** option to select these processes.
### Other ways to manage processes

Process management can be an important part of system maintenance. In my early career as a Unix and Linux systems administrator, the ability to kill escaped jobs was a useful tool to keep systems running properly. You may not need to kill rogue processes in a modern Linux desktop, but knowing **kill** and **killall** can help you when things eventually go awry.

You can also look for other ways to manage processes. In my case, I didn't really need to use **kill** or **killall** to stop the background Chromium processes after I exited the browser. There's a simple setting in Chromium to control that:

![Chromium background processes setting][2]

Still, it's always a good idea to keep an eye on what processes are running on your system and know how to manage them when needed.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/linux-kill-killall

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972

[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://opensource.com/sites/default/files/uploads/chromium-settings-continue-running.png (Chromium background processes setting)
@ -1,157 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with Bash programming)
[#]: via: (https://opensource.com/article/20/4/bash-programming-guide)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Get started with Bash programming
======

Learn how to write custom programs in Bash to automate your repetitive tasks. Download our new eBook to get started.

![Command line prompt][1]
One of the original hopes for Unix was that it would empower everyday computer users to fine-tune their computers to match their unique working style. The expectations around computer customization have diminished over the decades, and many users consider their collection of apps and websites to be their "custom environment." One reason for that is that the components of many operating systems are not open, so their source code isn't available to normal users.

But for Linux users, custom programs are within reach because the entire system is based around commands available through the terminal. The terminal isn't just an interface for quick commands or in-depth troubleshooting; it's a scripting environment that can reduce your workload by taking care of mundane tasks for you.

### How to learn programming

If you've never done any programming before, it might help to think of it in terms of two different challenges: one is to understand how code is written, and the other is to understand what code to write. You can learn _syntax_—but you won't get far without knowing what words are available to you in the _language_. In practice, you start learning both concepts all at once because you can't learn syntax without words to arrange, so initially, you write simple tasks using basic commands and basic programming structures. Once you feel comfortable with the basics, you can explore more of the language so you can make your programs do more and more significant things.

In [Bash][2], most of the _words_ you use are Linux commands. The _syntax_ is Bash. If you already use Bash on a frequent basis, then the transition to Bash programming is relatively easy. But if you don't use Bash, you'll be pleased to learn that it's a simple language built for clarity and simplicity.

### Interactive design

Sometimes, the hardest thing to figure out when learning to program is what a computer can do for you. Obviously, if a computer on its own could do everything you do with it, then you wouldn't have to ever touch a computer again. But the reality is that humans are important. The key to finding something your computer can help you with is to take notice of tasks you repeatedly do throughout the week. Computers handle repetition particularly well.

But for you to be able to tell your computer to do something, you must know how to do it. This is an area Bash excels in: interactive programming. As you perform an action in the terminal, you are also learning how to script it.
For instance, I was once tasked with converting a large number of PDF books to versions that would be low-ink and printer-friendly. One way to do this is to open the PDF in a PDF editor, select each one of the hundreds of images—page backgrounds and textures counted as images—delete them, and then save it to a new PDF. Just one book would take half a day this way.

My first thought was to learn how to script a PDF editor, but after days of research, I could not find a PDF editing application that could be scripted (outside of very ugly mouse-automation hacks). So I turned my attention to finding out how to accomplish the task from within a terminal. This resulted in several new discoveries, including GhostScript, the open source implementation of PostScript (the printer language PDF is based on). By using GhostScript for the task for a few days, I confirmed that it was the solution to my problem.

Formulating a basic script to run the command was merely a matter of copying the command and options I used to remove images from a PDF and pasting them into a text file. Running the file as a script would, presumably, produce the same results.

### Passing arguments to a Bash script

The difference between running a command in a terminal and running a command in a shell script is that the former is interactive. In a terminal, you can adjust things as you go. For instance, if I just processed **example_1.pdf** and am ready to process the next document, to adapt my command, I only need to change the filename.

A shell script isn't interactive, though. In fact, the only reason a shell _script_ exists is so that you don't have to attend to it. This is why commands (and the shell scripts that run them) accept arguments.

In a shell script, there are a few predefined variables that reflect how a script starts. The initial variable is **$0**, and it represents the command issued to start the script. The next variable is **$1**, which represents the first "argument" passed to the shell script. For example, in the command **echo hello**, the command **echo** is **$0**, and the word **hello** is **$1**. In the command **echo hello world**, the command **echo** is **$0**, **hello** is **$1**, and **world** is **$2**.

In an interactive shell:
```
$ echo hello world
hello world
```

In a non-interactive shell script, you _could_ do the same thing in a very literal way. Type this text into a text file and save it as **hello.sh**:
```
echo hello world
```

Now run the script:
```
$ bash hello.sh
hello world
```

That works, but it doesn't take advantage of the fact that a script can take input. Change **hello.sh** to this:
```
echo $1
```

Run the script with two arguments grouped together as one with quotation marks:
```
$ bash hello.sh "hello bash"
hello bash
```

For my PDF reduction project, I had a real need for this kind of non-interactivity, because each PDF took several minutes to condense. But by creating a script that accepted input from me, I could feed the script several PDF files all at once. The script processed each one sequentially, which could take half an hour or more, but it was a half-hour I could use for other tasks.

### Flow control

It's perfectly acceptable to create Bash scripts that are, essentially, transcripts of the exact process you took to achieve the task you need repeated. However, scripts can be made more powerful by controlling how information flows through them. Common methods of managing a script's response to data are:

  * if/then
  * for loops
  * while loops
  * case statements
Computers aren't intelligent, but they are good at comparing and parsing data. Scripts can feel a lot more intelligent if you build some data analysis into them. For example, the basic **hello.sh** script runs whether or not there's anything to echo:

```
$ bash hello.sh foo
foo
$ bash hello.sh

$
```
It would be more user-friendly if it provided a help message when it receives no input. That's an if/then statement, and if you're using Bash in a basic way, you probably wouldn't know that such a statement existed in Bash. But part of programming is learning the language, and with a little research you'd learn about if/then statements:

```
if [ "$1" = "" ]; then
    echo "syntax: $0 WORD"
    echo "If you provide more than one word, enclose them in quotes."
else
    echo "$1"
fi
```

Running this new version of **hello.sh** results in:
```
$ bash hello.sh
syntax: hello.sh WORD
If you provide more than one word, enclose them in quotes.
$ bash hello.sh "hello world"
hello world
```
### Working your way through a script

Whether you're looking for something to remove images from PDF files, or something to manage your cluttered Downloads folder, or something to create and provision Kubernetes images, learning to script Bash is a matter of using Bash and then learning ways to take those scripts from just a list of commands to something that responds to input. It's usually a process of discovery: you're bound to find new Linux commands that perform tasks you never imagined could be performed with text commands, and you'll find new functions of Bash to make your scripts adaptable to all the different ways you want them to run.

One way to learn these tricks is to read other people's scripts. Get a feel for how people are automating rote commands on their systems. See what looks familiar to you, and look for more information about the things that are unfamiliar.

Another way is to download our [introduction to programming with Bash][3] eBook. It introduces you to programming concepts specific to Bash, and with the constructs you learn, you can start to build your own commands. And of course, it's free to download and licensed under a [Creative Commons][4] license, so grab your copy today.

### [Download our introduction to programming with Bash eBook!][3]
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/4/bash-programming-guide

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://opensource.com/resources/what-bash
[3]: https://opensource.com/downloads/bash-programming-guide
[4]: https://opensource.com/article/20/1/what-creative-commons
@ -1,424 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to automate your cryptocurrency trades with Python)
[#]: via: (https://opensource.com/article/20/4/python-crypto-trading-bot)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)

How to automate your cryptocurrency trades with Python
======

In this tutorial, learn how to set up and use Pythonic, a graphical programming tool that makes it easy for users to create Python applications using ready-made function modules.

![scientific calculator][1]
Unlike traditional stock exchanges like the New York Stock Exchange that have fixed trading hours, cryptocurrencies are traded 24/7, which makes it impossible for anyone to monitor the market on their own.

Often in the past, I had to deal with the following questions related to my crypto trading:

  * What happened overnight?
  * Why are there no log entries?
  * Why was this order placed?
  * Why was no order placed?

The usual solution is to use a crypto trading bot that places orders for you when you are doing other things, like sleeping, being with your family, or enjoying your spare time. There are a lot of commercial solutions available, but I wanted an open source option, so I created the crypto-trading bot [Pythonic][2]. As [I wrote][3] in an introductory article last year, "Pythonic is a graphical programming tool that makes it easy for users to create Python applications using ready-made function modules." It originated as a cryptocurrency bot and has an extensive logging engine and well-tested, reusable parts such as schedulers and timers.
### Getting started

This hands-on tutorial teaches you how to get started with Pythonic for automated trading. It uses the example of trading [Tron][4] against [Bitcoin][5] on the [Binance][6] exchange platform. I chose these coins because of their volatility against each other, rather than out of any personal preference.

The bot will make decisions based on [exponential moving averages][7] (EMAs).

![TRX/BTC 1-hour candle chart][8]

TRX/BTC 1-hour candle chart

The EMA indicator is, in general, a weighted moving average that gives more weight to recent price data. Although a moving average may be a simple indicator, I've had good experiences using it.

The purple line in the chart above shows an EMA-25 indicator (meaning the last 25 values were taken into account).

The bot monitors the pitch between the current EMA-25 value (t0) and the previous EMA-25 value (t-1). If the pitch exceeds a certain value, it signals rising prices, and the bot will place a buy order. If the pitch falls below a certain value, the bot will place a sell order.

The pitch will be the main indicator for making decisions about trading. For this tutorial, it will be called the _trade factor_.
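The pitch described above can be sketched in a few lines of pandas. This is a sketch under assumptions, not the article's implementation (which uses Pythonic's Technical Analysis element): the span-based `ewm` EMA and the relative-change pitch formula are my choices, and the prices are made up.

```python
import pandas as pd

# Hypothetical closing prices; in the bot, these come from the Binance OHLC query.
closes = pd.Series([0.00158, 0.00160, 0.00159, 0.00163, 0.00165, 0.00168])

# EMA over the last 25 values; pandas' ewm with span=25 is one common definition.
ema25 = closes.ewm(span=25, adjust=False).mean()

# The "pitch" between the current EMA-25 value (t0) and the previous one (t-1),
# expressed as a relative change -- the trade factor.
trade_factor = (ema25.iloc[-1] - ema25.iloc[-2]) / ema25.iloc[-2]
print(trade_factor)
```

On a rising price series like this one, the pitch comes out positive, which the bot would read as a buy signal.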

### Toolchain

The following tools are used in this tutorial:

  * Binance expert trading view (visualizing data has been done by many others, so there's no need to reinvent the wheel by doing it yourself)
  * Jupyter Notebook for data-science tasks
  * Pythonic, which is the overall framework
  * PythonicDaemon as the pure runtime (console- and Linux-only)

### Data mining

For a crypto trading bot to make good decisions, it's essential to get open-high-low-close ([OHLC][9]) data for your asset in a reliable way. You can use Pythonic's built-in elements and extend them with your own logic.

The general workflow is:

  1. Synchronize with Binance time
  2. Download OHLC data
  3. Load existing OHLC data from the file into memory
  4. Compare both datasets and extend the existing dataset with the newer rows

This workflow may be a bit overkill, but it makes this solution very robust against downtime and disconnections.

To begin, you need the **Binance OHLC Query** element and a **Basic Operation** element to execute your own code.

![Data-mining workflow][10]

Data-mining workflow

The OHLC query is set up to query the asset pair **TRXBTC** (Tron/Bitcoin) in one-hour intervals.

![Configuration of the OHLC query element][11]

Configuring the OHLC query element

The output of this element is a [Pandas DataFrame][12]. You can access the DataFrame with the **input** variable in the **Basic Operation** element. Here, the **Basic Operation** element is set up to use Vim as the default code editor.

![Basic Operation element set up to use Vim][13]

Basic Operation element set up to use Vim

Here is what the code looks like:
```
import pickle, pathlib, os
import pandas as pd

output = None

if isinstance(input, pd.DataFrame):
    file_name = 'TRXBTC_1h.bin'
    home_path = str(pathlib.Path.home())
    data_path = os.path.join(home_path, file_name)

    try:
        df = pickle.load(open(data_path, 'rb'))
        n_row_cnt = df.shape[0]
        df = pd.concat([df, input], ignore_index=True).drop_duplicates(['close_time'])
        df.reset_index(drop=True, inplace=True)
        n_new_rows = df.shape[0] - n_row_cnt
        log_txt = '{}: {} new rows written'.format(file_name, n_new_rows)
    except Exception as e:
        log_txt = 'File error - writing new one: {}'.format(e)
        df = input

    pickle.dump(df, open(data_path, "wb"))
    output = df
```

First, check whether the input is the DataFrame type. Then look inside the user's home directory (**~/**) for a file named **TRXBTC_1h.bin**. If it is present, then open it, concatenate new rows (the code in the **try** section), and drop overlapping duplicates. If the file doesn't exist, trigger an _exception_ and execute the code in the **except** section, creating a new file.

As long as the checkbox **log output** is enabled, you can follow the logging with the command-line tool **tail**:

```
$ tail -f ~/Pythonic_2020/Feb/log_2020_02_19.txt
```
For development purposes, skip the synchronization with Binance time and regular scheduling for now. This will be implemented below.

### Data preparation

The next step is to handle the evaluation logic in a separate grid; therefore, you have to pass the DataFrame from Grid 1 to the first element of Grid 2 with the help of the **Return element**.

In Grid 2, extend the DataFrame by a column that contains the EMA values by passing the DataFrame through a **Basic Technical Analysis** element.

![Technical analysis workflow in Grid 2][14]

Technical analysis workflow in Grid 2

Configure the technical analysis element to calculate the EMAs over a period of 25 values.

![Configuration of the technical analysis element][15]

Configuring the technical analysis element

When you run the whole setup and activate the debug output of the **Technical Analysis** element, you will notice that the values of the EMA-25 column all seem to be the same.

![Missing decimal places in output][16]

Decimal places are missing in the output

This is because the EMA-25 values in the debug output include just six decimal places, even though the output retains the full precision of an 8-byte float value.

For further processing, add a **Basic Operation** element:

![Workflow in Grid 2][17]

Workflow in Grid 2

With the **Basic Operation** element, dump the DataFrame with the additional EMA-25 column so that it can be loaded into a Jupyter Notebook:

![Dump extended DataFrame to file][18]

Dump extended DataFrame to file

### Evaluation logic

Developing the evaluation logic inside Jupyter Notebook enables you to access the code in a more direct way. To load the DataFrame, you need the following lines:

![Representation with all decimal places][19]

Representation with all decimal places

You can access the latest EMA-25 values by using [**iloc**][20] and the column name. This keeps all of the decimal places.

You already know how to get the latest value. The last line of the example above shows only the value. To copy the value to a separate variable, you have to access it with the **.at** method, as shown below.
You can also directly calculate the trade factor, which you will need in the next step.

![Buy/sell decision][21]

Buy/sell decision
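Since the notebook code above appears only as screenshots, here is a minimal sketch of the same steps. The column name `'EMA-25'`, the sample values, and the exact buy/sell comparison are assumptions; only the `iloc`/`.at` access pattern and the 0.009 threshold come from the article.

```python
import pandas as pd

# Hypothetical stand-in for the DataFrame dumped in Grid 2; the real one
# holds the OHLC columns plus the EMA-25 column.
df = pd.DataFrame({'close':  [0.00160, 0.00162, 0.00165],
                   'EMA-25': [0.00159, 0.00161, 0.00164]})

# Latest EMA-25 value via iloc (keeps all decimal places):
ema_t0 = df['EMA-25'].iloc[-1]

# Copy the previous value to a variable via .at (row label plus column name):
ema_t1 = df.at[df.index[-2], 'EMA-25']

# Trade factor (the pitch) and the buy/sell decision with the 0.009 threshold:
trade_factor = (ema_t0 - ema_t1) / ema_t1
buy = trade_factor > 0.009
sell = trade_factor < -0.009
print(trade_factor, buy, sell)
```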

### Determine the trading factor

As you can see in the code above, I chose 0.009 as the trade factor. But how do I know whether 0.009 is a good trading factor for decisions? Actually, this factor is really bad, so instead, you can brute-force the best-performing trade factor.

Assume that you will buy or sell based on the closing price.

![Validation function][22]

Validation function

In this example, **buy_factor** and **sell_factor** are predefined. So extend the logic to brute-force the best-performing values.

![Nested for loops for determining the buy and sell factor][23]

Nested _for_ loops for determining the buy and sell factor
This has 81 loops to process (9x9), which takes a couple of minutes on my machine (a Core i7 267QM).

![System utilization while brute forcing][24]

System utilization while brute-forcing

After each loop, it appends a tuple of **buy_factor**, **sell_factor**, and the resulting **profit** to the **trading_factors** list. Sort the list by profit in descending order.
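The validation function itself is shown only as a screenshot, so the following is a toy reconstruction of the idea rather than the author's code: a simple long-only backtest over closing prices, a 9×9 grid of candidate factors (81 combinations), and a descending sort by profit. The sample data and the backtest rules are my assumptions.

```python
import itertools

# Hypothetical EMA-25 pitch values and closing prices for the backtest.
pitches = [0.004, -0.003, 0.006, -0.005, 0.003, 0.007, -0.004]
closes = [0.00160, 0.00158, 0.00163, 0.00159, 0.00161, 0.00166, 0.00162]

def validate(buy_factor, sell_factor):
    """Toy stand-in for the article's validation function: buy when the
    pitch exceeds buy_factor, sell when it falls below -sell_factor,
    and return the accumulated profit."""
    profit, bought_at = 0.0, None
    for pitch, close in zip(pitches, closes):
        if bought_at is None and pitch > buy_factor:
            bought_at = close
        elif bought_at is not None and pitch < -sell_factor:
            profit += close - bought_at
            bought_at = None
    return profit

# 9x9 grid of candidate factors, as in the article (81 combinations).
candidates = [round(0.001 * n, 3) for n in range(1, 10)]
trading_factors = []
for buy_factor, sell_factor in itertools.product(candidates, candidates):
    trading_factors.append((buy_factor, sell_factor,
                            validate(buy_factor, sell_factor)))

# Sort by profit in descending order; the head holds the best-performing pair.
trading_factors.sort(key=lambda t: t[2], reverse=True)
print(trading_factors[0])
```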

![Sort profit with related trading factors in descending order][25]

Sort profit with related trading factors in descending order

When you print the list, you can see that 0.002 is the most promising factor.

![Sorted list of trading factors and profit][26]

Sorted list of trading factors and profit

When I wrote this in March 2020, the prices were not volatile enough to present more promising results. I got much better results in February, but even then, the best-performing trading factors were also around 0.002.
### Split the execution path

Start a new grid now to maintain clarity. Pass the DataFrame with the EMA-25 column from Grid 2 to element 0A of Grid 3 by using a **Return** element.

In Grid 3, add a **Basic Operation** element to execute the evaluation logic. Here is the code of that element:

![Implemented evaluation logic][27]

Implemented evaluation logic

The element outputs a **1** if you should buy or a **-1** if you should sell. An output of **0** means there's nothing to do right now. Use a **Branch** element to control the execution path.

![Branch element: Grid 3 Position 2A][28]

Branch element: Grid 3, Position 2A

Because both **0** and **-1** are processed the same way, you need an additional Branch element on the right-most execution path to decide whether or not you should sell.

![Branch element: Grid 3 Position 3B][29]

Branch element: Grid 3, Position 3B

Grid 3 should now look like this:

![Workflow on Grid 3][30]

Workflow on Grid 3

### Execute orders

Since you cannot buy twice, you must keep a persistent variable between the cycles that indicates whether you have already bought.

You can do this with a **Stack element**. The Stack element is, as the name suggests, a representation of a file-based stack that can be filled with any Python data type.

You need to define that the stack contains only one Boolean element, which determines if you bought (**True**) or not (**False**). As a consequence, you have to preset the stack with one **False**. You can set this up, for example, in Grid 4 by simply passing a **False** to the stack.
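Outside of Pythonic, the same single-value persistence can be sketched with a pickle file. This is not Pythonic's Stack API, just a minimal stand-in for the idea; the file name and helper names are mine.

```python
import pickle
import pathlib

FLAG_FILE = pathlib.Path('bought_flag.bin')  # hypothetical path

def preset(value=False):
    # The preset step: write one False before the first cycle.
    FLAG_FILE.write_bytes(pickle.dumps(value))

def read_flag():
    # Only one value is ever read, mirroring the one-element stack.
    return pickle.loads(FLAG_FILE.read_bytes())

def write_flag(value):
    # Overwrite the single stored value (True after a buy, False after a sell).
    FLAG_FILE.write_bytes(pickle.dumps(value))

preset()            # stack holds one False: nothing bought yet
write_flag(True)    # record a successful buy order
print(read_flag())
```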

![Forward a False-variable to the subsequent Stack element][31]

Forward a **False** variable to the subsequent Stack element

The Stack instances after the branch tree can be configured as follows:

![Configuration of the Stack element][32]

Configuring the Stack element

In the Stack element configuration, set **Do this with input** to **Nothing**. Otherwise, the Boolean value will be overwritten by a 1 or 0.

This configuration ensures that only one value is ever saved in the stack (**True** or **False**), and only one value can ever be read (for clarity).

Right after the Stack element, you need an additional **Branch** element to evaluate the stack value before you place the **Binance Order** elements.

![Evaluate the variable from the stack][33]

Evaluating the variable from the stack

Append the Binance Order element to the **True** path of the Branch element. The workflow on Grid 3 should now look like this:

![Workflow on Grid 3][34]

Workflow on Grid 3

The Binance Order element is configured as follows:

![Configuration of the Binance Order element][35]

Configuring the Binance Order element

You can generate the API and Secret keys on the Binance website under your account settings.

![Creating an API key in Binance][36]

Creating an API key in the Binance account settings

In this tutorial, every trade is executed as a market order with a volume of 10,000 TRX (roughly US$150 in March 2020). I use market orders only to demonstrate the overall process; for real trading, I recommend using at least a limit order.

The subsequent element is not triggered if the order was not executed properly (e.g., a connection issue, insufficient funds, or an incorrect currency pair). Therefore, you can assume that if the subsequent element is triggered, the order was placed.

Here is an example of output from a successful sell order for XMRBTC:

![Output of a successfully placed sell order][37]

Successful sell order output

This behavior makes the subsequent steps easier: you can always assume that as long as the output is proper, the order was placed. Therefore, you can append a **Basic Operation** element that simply sets the output to **True** and writes this value to the stack to indicate that the order was placed.

If something went wrong, you can find the details in the logging message (if logging is enabled).

![Logging output of Binance Order element][38]

Logging output from Binance Order element

### Schedule and sync

For regular scheduling and synchronization, prepend the entire workflow in Grid 1 with the **Binance Scheduler** element.

![Binance Scheduler at Grid 1, Position 1A][39]

Binance Scheduler at Grid 1, Position 1A
The Binance Scheduler element executes only once, so split the execution path on the end of Grid 1 and force it to re-synchronize itself by passing the output back to the Binance Scheduler element.

![Grid 1: Split execution path][40]

Grid 1: Split execution path

Element 5A points to Element 1A of Grid 2, and Element 5B points to Element 1A of Grid 1 (Binance Scheduler).

### Deploy

You can run the whole setup 24/7 on your local machine, or you could host it entirely on an inexpensive cloud system. For example, you can use a Linux/FreeBSD cloud system for about US$5 per month, but they usually don't provide a window system. If you want to take advantage of these low-cost clouds, you can use PythonicDaemon, which runs completely inside the terminal.

![PythonicDaemon console interface][41]

PythonicDaemon console

PythonicDaemon is part of the basic installation. To use it, save your complete workflow, transfer it to the remote running system (e.g., by Secure Copy [SCP]), and start PythonicDaemon with the workflow file as an argument:

```
$ PythonicDaemon trading_bot_one
```
To automatically start PythonicDaemon at system startup, you can add an entry to the crontab:

```
# crontab -e
```
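The crontab contents appear only in the screenshot below, but a typical entry uses cron's `@reboot` keyword; the paths here are assumptions:

```
@reboot /home/user/PythonicDaemon /home/user/trading_bot_one
```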

![Crontab on Ubuntu Server][42]

Crontab on Ubuntu Server

### Next steps

As I wrote at the beginning, this tutorial is just a starting point for automated trading. Programming trading bots is approximately 10% programming and 90% testing. When it comes to letting your bot trade with your money, you will definitely think thrice about the code you program. So I advise you to keep your code as simple and easy to understand as you can.

If you want to continue developing your trading bot on your own, the next things to set up are:

  * Automatic profit calculation (hopefully only positive!)
  * Calculation of the prices you want to buy for
  * Comparison with your order book (i.e., was the order filled completely?)

You can download the whole example on [GitHub][2].

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/4/python-crypto-trading-bot

作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calculator_money_currency_financial_tool.jpg?itok=2QMa1y8c (scientific calculator)
[2]: https://github.com/hANSIc99/Pythonic
[3]: https://opensource.com/article/19/5/graphically-programming-pythonic
[4]: https://tron.network/
[5]: https://bitcoin.org/en/
[6]: https://www.binance.com/
[7]: https://www.investopedia.com/terms/e/ema.asp
[8]: https://opensource.com/sites/default/files/uploads/1_ema-25.png (TRX/BTC 1-hour candle chart)
[9]: https://en.wikipedia.org/wiki/Open-high-low-close_chart
[10]: https://opensource.com/sites/default/files/uploads/2_data-mining-workflow.png (Data-mining workflow)
[11]: https://opensource.com/sites/default/files/uploads/3_ohlc-query.png (Configuration of the OHLC query element)
[12]: https://pandas.pydata.org/pandas-docs/stable/getting_started/dsintro.html#dataframe
[13]: https://opensource.com/sites/default/files/uploads/4_edit-basic-operation.png (Basic Operation element set up to use Vim)
[14]: https://opensource.com/sites/default/files/uploads/6_grid2-workflow.png (Technical analysis workflow in Grid 2)
[15]: https://opensource.com/sites/default/files/uploads/7_technical-analysis-config.png (Configuration of the technical analysis element)
[16]: https://opensource.com/sites/default/files/uploads/8_missing-decimals.png (Missing decimal places in output)
[17]: https://opensource.com/sites/default/files/uploads/9_basic-operation-element.png (Workflow in Grid 2)
[18]: https://opensource.com/sites/default/files/uploads/10_dump-extended-dataframe.png (Dump extended DataFrame to file)
[19]: https://opensource.com/sites/default/files/uploads/11_load-dataframe-decimals.png (Representation with all decimal places)
[20]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html
[21]: https://opensource.com/sites/default/files/uploads/12_trade-factor-decision.png (Buy/sell decision)
[22]: https://opensource.com/sites/default/files/uploads/13_validation-function.png (Validation function)
[23]: https://opensource.com/sites/default/files/uploads/14_brute-force-tf.png (Nested for loops for determining the buy and sell factor)
[24]: https://opensource.com/sites/default/files/uploads/15_system-utilization.png (System utilization while brute forcing)
[25]: https://opensource.com/sites/default/files/uploads/16_sort-profit.png (Sort profit with related trading factors in descending order)
[26]: https://opensource.com/sites/default/files/uploads/17_sorted-trading-factors.png (Sorted list of trading factors and profit)
[27]: https://opensource.com/sites/default/files/uploads/18_implemented-evaluation-logic.png (Implemented evaluation logic)
[28]: https://opensource.com/sites/default/files/uploads/19_output.png (Branch element: Grid 3 Position 2A)
[29]: https://opensource.com/sites/default/files/uploads/20_editbranch.png (Branch element: Grid 3 Position 3B)
[30]: https://opensource.com/sites/default/files/uploads/21_grid3-workflow.png (Workflow on Grid 3)
[31]: https://opensource.com/sites/default/files/uploads/22_pass-false-to-stack.png (Forward a False-variable to the subsequent Stack element)
[32]: https://opensource.com/sites/default/files/uploads/23_stack-config.png (Configuration of the Stack element)
[33]: https://opensource.com/sites/default/files/uploads/24_evaluate-stack-value.png (Evaluate the variable from the stack)
[34]: https://opensource.com/sites/default/files/uploads/25_grid3-workflow.png (Workflow on Grid 3)
[35]: https://opensource.com/sites/default/files/uploads/26_binance-order.png (Configuration of the Binance Order element)
[36]: https://opensource.com/sites/default/files/uploads/27_api-key-binance.png (Creating an API key in Binance)
[37]: https://opensource.com/sites/default/files/uploads/28_sell-order.png (Output of a successfully placed sell order)
[38]: https://opensource.com/sites/default/files/uploads/29_binance-order-output.png (Logging output of Binance Order element)
[39]: https://opensource.com/sites/default/files/uploads/30_binance-scheduler.png (Binance Scheduler at Grid 1, Position 1A)
[40]: https://opensource.com/sites/default/files/uploads/31_split-execution-path.png (Grid 1: Split execution path)
|
||||
[41]: https://opensource.com/sites/default/files/uploads/32_pythonic-daemon.png (PythonicDaemon console interface)
|
||||
[42]: https://opensource.com/sites/default/files/uploads/33_crontab.png (Crontab on Ubuntu Server)
|
@ -1,324 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Improve your time management with Jupyter)
|
||||
[#]: via: (https://opensource.com/article/20/9/calendar-jupyter)
|
||||
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
|
||||
|
||||
Improve your time management with Jupyter
|
||||
======
|
||||
Discover how you are spending time by parsing your calendar with Python in Jupyter.
|
||||
![Calendar close up snapshot][1]
|
||||
|
||||
[Python][2] has incredibly scalable options for exploring data. With [Pandas][3] or [Dask][4], you can scale [Jupyter][5] up to big data. But what about small data? Personal data? Private data?
|
||||
|
||||
JupyterLab and Jupyter Notebook provide a great environment to scrutinize my laptop-based life.
|
||||
|
||||
My exploration is powered by the fact that almost every service I use has a web application programming interface (API). I use many such services: a to-do list, a time tracker, a habit tracker, and more. But there is one that almost everyone uses: _a calendar_. The same ideas can be applied to other services, but calendars have one cool feature: `CalDAV`, an open standard that almost all web calendars support.
|
||||
|
||||
### Parsing your calendar with Python in Jupyter
|
||||
|
||||
Most calendars provide a way to export into the `CalDAV` format. You may need some authentication for accessing this private data. Following your service's instructions should do the trick. How you get the credentials depends on your service, but eventually, you should be able to store them in a file. I store mine in my root directory in a file called `.caldav`:
|
||||
|
||||
|
||||
```
import os

with open(os.path.expanduser("~/.caldav")) as fpin:
    username, password = fpin.read().split()
```
|
||||
|
||||
Never put usernames and passwords directly in notebooks! They could easily leak with a stray `git push`.
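
If you prefer not to keep a plaintext file around, environment variables are another option. This is a hedged sketch, not part of the original workflow; the `CALDAV_USERNAME` and `CALDAV_PASSWORD` variable names are my own invention:

```
import os


def load_credentials():
    # Prefer environment variables; fall back to the ~/.caldav file used above.
    username = os.environ.get("CALDAV_USERNAME")
    password = os.environ.get("CALDAV_PASSWORD")
    if username is None or password is None:
        with open(os.path.expanduser("~/.caldav")) as fpin:
            username, password = fpin.read().split()
    return username, password
```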
|
||||
|
||||
The next step is to use the convenient PyPI [caldav][6] library. I looked up the CalDAV server for my email service (yours may be different):
|
||||
|
||||
|
||||
```
import caldav

client = caldav.DAVClient(
    url="https://caldav.fastmail.com/dav/", username=username, password=password
)
```
|
||||
|
||||
CalDAV has a concept called the `principal`. It is not important to get into right now, except to know it's the thing you use to access the calendars:
|
||||
|
||||
|
||||
```
principal = client.principal()
calendars = principal.calendars()
```
|
||||
|
||||
Calendars are, literally, all about time. Before accessing events, you need to decide on a time range. One week should be a good default:
|
||||
|
||||
|
||||
```
import datetime

from dateutil import tz

now = datetime.datetime.now(tz.tzutc())
since = now - datetime.timedelta(days=7)
```
|
||||
|
||||
Most people use more than one calendar, and most people want all their events together. The `itertools.chain.from_iterable` function makes this straightforward:
|
||||
|
||||
|
||||
```
import itertools

raw_events = list(
    itertools.chain.from_iterable(
        calendar.date_search(start=since, end=now, expand=True)
        for calendar in calendars
    )
)
```
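
To see the one-level flattening that `chain.from_iterable` performs, here is a toy example with placeholder strings standing in for events:

```
import itertools

# Three calendars' event lists (placeholder strings standing in for events).
per_calendar = [["ev1", "ev2"], ["ev3"], []]

# chain.from_iterable flattens exactly one level of nesting.
merged = list(itertools.chain.from_iterable(per_calendar))
```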
|
||||
|
||||
Reading all the events into memory, in the API's raw, native format, is an important practice: when fine-tuning the parsing, analyzing, and displaying code, there is no need to go back to the API service to refresh the data.
|
||||
|
||||
But "raw" is not an understatement. The events come through as strings in a specific format:
|
||||
|
||||
|
||||
```
print(raw_events[12].data)

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CyrusIMAP.org/Cyrus
 3.3.0-232-g4bdb081-fm-20200825.002-g4bdb081a//EN
BEGIN:VEVENT
DTEND:20200825T230000Z
DTSTAMP:20200825T181915Z
DTSTART:20200825T220000Z
SUMMARY:Busy
UID:
 1302728i-040000008200E00074C5B7101A82E00800000000D939773EA578D601000000000
 000000010000000CD71CC3393651B419E9458134FE840F5
END:VEVENT
END:VCALENDAR
```
|
||||
|
||||
Luckily, PyPI comes to the rescue again with another helper library, [vobject][7]:
|
||||
|
||||
|
||||
```
import io

import vobject


def parse_event(raw_event):
    data = raw_event.data
    parsed = vobject.readOne(io.StringIO(data))
    contents = parsed.vevent.contents
    return contents
```

```
parse_event(raw_events[12])

{'dtend': [<DTEND{}2020-08-25 23:00:00+00:00>],
 'dtstamp': [<DTSTAMP{}2020-08-25 18:19:15+00:00>],
 'dtstart': [<DTSTART{}2020-08-25 22:00:00+00:00>],
 'summary': [<SUMMARY{}Busy>],
 'uid': [<UID{}1302728i-040000008200E00074C5B7101A82E00800000000D939773EA578D601000000000000000010000000CD71CC3393651B419E9458134FE840F5>]}
```
|
||||
|
||||
Well, at least it's a little better.
|
||||
|
||||
There is still some work to do to convert it to a reasonable Python object. The first step is to _have_ a reasonable Python object. The [attrs][8] library provides a nice start:
|
||||
|
||||
|
||||
```
from __future__ import annotations

from typing import Any

import attr


@attr.s(auto_attribs=True, frozen=True)
class Event:
    start: datetime.datetime
    end: datetime.datetime
    timezone: Any
    summary: str
```
|
||||
|
||||
Time to write the conversion code!
|
||||
|
||||
The first abstraction gets the value from the parsed dictionary without all the decorations:
|
||||
|
||||
|
||||
```
def get_piece(contents, name):
    return contents[name][0].value
```

```
get_piece(_, "dtstart")

datetime.datetime(2020, 8, 25, 22, 0, tzinfo=tzutc())
```
|
||||
|
||||
Calendar events always have a start, but they sometimes have an "end" and sometimes a "duration." Some careful parsing logic can harmonize both into the same Python objects:
|
||||
|
||||
|
||||
```
def from_calendar_event_and_timezone(event, timezone):
    contents = parse_event(event)
    start = get_piece(contents, "dtstart")
    summary = get_piece(contents, "summary")
    try:
        end = get_piece(contents, "dtend")
    except KeyError:
        end = start + get_piece(contents, "duration")
    return Event(start=start, end=end, summary=summary, timezone=timezone)
```
|
||||
|
||||
Since it is useful to have the events in your _local_ time zone rather than UTC, this uses the local timezone:
|
||||
|
||||
|
||||
```
my_timezone = tz.gettz()
from_calendar_event_and_timezone(raw_events[12], my_timezone)

Event(start=datetime.datetime(2020, 8, 25, 22, 0, tzinfo=tzutc()), end=datetime.datetime(2020, 8, 25, 23, 0, tzinfo=tzutc()), timezone=tzfile('/etc/localtime'), summary='Busy')
```
|
||||
|
||||
Now that the events are real Python objects, they really should have some additional information. Luckily, it is possible to add methods retroactively to classes.
|
||||
|
||||
But figuring out which _day_ an event happens on is not obvious. You need the day in the _local_ timezone:
|
||||
|
||||
|
||||
```
def day(self):
    offset = self.timezone.utcoffset(self.start)
    fixed = self.start + offset
    return fixed.date()

Event.day = property(day)
```

```
print(_.day)

2020-08-25
```
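
The offset arithmetic in `day()` above is equivalent, away from DST transitions, to `datetime`'s built-in `astimezone()`. A small sketch (not from the original article) to convince yourself, using a fixed-offset zone:

```
import datetime


def day_via_offset(start, timezone):
    # Mirrors the day() property above: shift by the UTC offset, take the date.
    return (start + timezone.utcoffset(start)).date()


def day_via_astimezone(start, timezone):
    # Equivalent result using datetime's built-in astimezone().
    return start.astimezone(timezone).date()
```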
|
||||
|
||||
Events are always represented internally as start/end, but knowing the duration is a useful property. Duration can also be added to the existing class:
|
||||
|
||||
|
||||
```
def duration(self):
    return self.end - self.start

Event.duration = property(duration)
```

```
print(_.duration)

1:00:00
```
|
||||
|
||||
Now it is time to convert all events into useful Python objects:
|
||||
|
||||
|
||||
```
all_events = [
    from_calendar_event_and_timezone(raw_event, my_timezone)
    for raw_event in raw_events
]
```
|
||||
|
||||
All-day events are a special case and probably less useful for analyzing life. For now, you can ignore them:
|
||||
|
||||
|
||||
```
# ignore all-day events
all_events = [event for event in all_events if not type(event.start) == datetime.date]
```
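
The exact `type()` comparison matters here: `datetime.datetime` is a subclass of `datetime.date`, so an `isinstance()` check would match every event, not just the all-day ones. A quick demonstration:

```
import datetime

timed = datetime.datetime(2020, 8, 25, 22, 0)   # a normal, timed event start
all_day = datetime.date(2020, 8, 25)            # an all-day event start

# datetime is a subclass of date, so isinstance() cannot tell them apart:
print(isinstance(timed, datetime.date))   # True for both kinds of value
# An exact type check does distinguish an all-day event's plain date:
print(type(timed) == datetime.date)       # False
print(type(all_day) == datetime.date)     # True
```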
|
||||
|
||||
Events have a natural order—knowing which one happened first is probably useful for analysis:
|
||||
|
||||
|
||||
```
all_events.sort(key=lambda ev: ev.start)
```
|
||||
|
||||
Now that the events are sorted, they can be broken into days:
|
||||
|
||||
|
||||
```
import collections

events_by_day = collections.defaultdict(list)
for event in all_events:
    events_by_day[event.day].append(event)
```
|
||||
|
||||
And with that, you have calendar events with dates, duration, and sequence as Python objects.
|
||||
|
||||
### Reporting on your life in Python
|
||||
|
||||
Now it is time to write reporting code! It is fun to have eye-popping formatting with proper headers, lists, important things in bold, etc.
|
||||
|
||||
This means HTML and some HTML templating. I like to use [Chameleon][9]:
|
||||
|
||||
|
||||
```
template_content = """
<html><body>
<div tal:repeat="item items">
<h2 tal:content="item[0]">Day</h2>
<ul>
    <li tal:repeat="event item[1]"><span tal:replace="event">Thing</span></li>
</ul>
</div>
</body></html>"""
```
|
||||
|
||||
One cool feature of Chameleon is that it will render objects using their `__html__` method. I will use it in two ways:
|
||||
|
||||
* The summary will be in **bold**
* For most events, I will remove the summary (since this is my personal information)
|
||||
|
||||
|
||||
|
||||
|
||||
```
def __html__(self):
    offset = my_timezone.utcoffset(self.start)
    fixed = self.start + offset
    start_str = str(fixed).split("+")[0]
    summary = self.summary
    if summary != "Busy":
        summary = "&lt;REDACTED&gt;"
    return f"<b>{summary[:30]}</b> -- {start_str} ({self.duration})"

Event.__html__ = __html__
```
|
||||
|
||||
In the interest of brevity, the report will be sliced into one day's worth.
|
||||
|
||||
|
||||
```
import chameleon
from IPython.display import HTML

template = chameleon.PageTemplate(template_content)
html = template(items=itertools.islice(events_by_day.items(), 3, 4))
HTML(html)
```
|
||||
|
||||
When rendered, it will look something like this:
|
||||
|
||||
#### 2020-08-25
|
||||
|
||||
* **<REDACTED>** -- 2020-08-25 08:30:00 (0:45:00)
* **<REDACTED>** -- 2020-08-25 10:00:00 (1:00:00)
* **<REDACTED>** -- 2020-08-25 11:30:00 (0:30:00)
* **<REDACTED>** -- 2020-08-25 13:00:00 (0:25:00)
* **Busy** -- 2020-08-25 15:00:00 (1:00:00)
* **<REDACTED>** -- 2020-08-25 15:00:00 (1:00:00)
* **<REDACTED>** -- 2020-08-25 19:00:00 (1:00:00)
* **<REDACTED>** -- 2020-08-25 19:00:12 (1:00:00)
|
||||
|
||||
|
||||
|
||||
### Endless options with Python and Jupyter
|
||||
|
||||
This only scratches the surface of what you can do by parsing, analyzing, and reporting on the data that various web services have on you.
|
||||
|
||||
Why not try it with your favorite service?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/9/calendar-jupyter
|
||||
|
||||
作者:[Moshe Zadka][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/moshez
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT (Calendar close up snapshot)
|
||||
[2]: https://opensource.com/resources/python
|
||||
[3]: https://pandas.pydata.org/
|
||||
[4]: https://dask.org/
|
||||
[5]: https://jupyter.org/
|
||||
[6]: https://pypi.org/project/caldav/
|
||||
[7]: https://pypi.org/project/vobject/
|
||||
[8]: https://opensource.com/article/19/5/python-attrs
|
||||
[9]: https://chameleon.readthedocs.io/en/latest/
|
@ -1,84 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Turn your Raspberry Pi into a HiFi music system)
|
||||
[#]: via: (https://opensource.com/article/21/1/raspberry-pi-hifi)
|
||||
[#]: author: (Peter Czanik https://opensource.com/users/czanik)
|
||||
|
||||
Turn your Raspberry Pi into a HiFi music system
|
||||
======
|
||||
Play music for your friends, family, co-workers, or anyone else with an
|
||||
inexpensive audiophile setup.
|
||||
![HiFi vintage stereo][1]
|
||||
|
||||
For the past 10 years, I've worked remotely most of the time, but when I go into the office, I sit in a room full of fellow introverts who are easily disturbed by ambient noise and talking. We discovered that listening to music can suppress office noise, make voices less distracting, and provide a pleasant working environment with enjoyable music.
|
||||
|
||||
Initially, one of our colleagues brought in some old powered computer speakers, connected them to his desktop, and asked us what we wanted to listen to. It did its job, but the sound quality wasn't great, and it only worked when he was in the office. Next, we bought a pair of Altec Lansing speakers. The sound quality improved, but flexibility did not.
|
||||
|
||||
Not much later, we got a generic Arm single-board computer (SBC). This meant anyone could control the playlist and the speakers over the network using a web interface. But a random Arm developer board meant we could not use popular music appliance software. Updating the operating system was a pain due to a non-standard kernel, and the web interface broke frequently.
|
||||
|
||||
When the team grew and moved into a larger room, we started dreaming about better speakers and an easier way to handle the software and hardware combo.
|
||||
|
||||
To solve our issue in a way that is relatively inexpensive, flexible, and has good sound quality, we developed an office HiFi with a Raspberry Pi, speakers, and open source software.
|
||||
|
||||
### HiFi hardware
|
||||
|
||||
Having a dedicated PC for background music is overkill. It's expensive, noisy (unless it's silent, but then it's even more expensive), and not environmentally friendly. Even the cheapest Arm boards are up to the job, but they're often problematic from the software point of view. The Raspberry Pi is still on the cheap end and, while not standards-compliant, is well-supported on the hardware and the software side.
|
||||
|
||||
The next question was: what speakers to use. Good-quality, powered speakers are expensive. Passive speakers cost less but need an amplifier, and that would add another box to the setup. They would also have to use the Pi's audio output; while it works, it's not exactly the best, especially when you're already spending money on quality speakers and an amplifier.
|
||||
|
||||
Luckily, among the thousands of Raspberry Pi hardware extensions are amplifiers with built-in digital-analog converters (DAC). We selected [HiFiBerry's Amp][2]. It was discontinued soon after we bought it (replaced by an Amp+ model with a better sample rate), but it's good enough for our purposes. With air conditioning on, I don't think you can hear the difference between a DAC capable of 48kHz or 192kHz anyway.
|
||||
|
||||
For speakers, we chose the [Audioengine P4][3], which we bought when a shop had a clearance sale with extra-low prices. It easily fills our office room with sound without distortion (and fills much more than our room with some distortion, but neighboring engineers tend to dislike that).
|
||||
|
||||
### HiFi software
|
||||
|
||||
Maintaining Ubuntu on our old generic Arm SBC, with a fixed, ancient kernel that lived outside the packaging system, was problematic. The Raspberry Pi OS includes a well-maintained kernel package, making it a stable and easily updated base system, but it still required us to regularly update a Python script to access Spotify and YouTube. That was a little too high-maintenance for our purposes.
|
||||
|
||||
Luckily, using the Raspberry Pi as a base means there are many ready-to-use software appliances available.
|
||||
|
||||
We settled on [Volumio][4], an open source project that turns a Pi into a music-playing appliance. Installation is a simple _next-next-finish_ process. Instead of painstakingly installing and maintaining an operating system and regularly debugging broken Python code, installation and upgrades are completely pain-free. Configuring the HiFiBerry amplifier doesn't require editing any configuration files; you can just select it from a list. Of course, getting used to a new user interface takes some time, but the stability and ease of maintenance made this change worthwhile.
|
||||
|
||||
![Volumio interface][5]
|
||||
|
||||
Screenshot courtesy of [Volumio][4] (© Michelangelo Guarise)
|
||||
|
||||
### Playing music and experimenting
|
||||
|
||||
While we're all working from home during the pandemic, the office HiFi is installed in my home office, which means I have free rein over what it runs. A constantly changing user interface would be a pain for a team, but for someone with an R&D background playing with a device alone, change is fun.
|
||||
|
||||
I'm not a programmer, but I have a strong Linux and Unix sysadmin background. That means that while I find fixing broken Python code tiresome, Volumio is just perfect enough to be boring for me (a great "problem" to have). Luckily, there are many other possibilities to play music on a Raspberry Pi.
|
||||
|
||||
As a terminal maniac (I even start LibreOffice from a terminal window), I mostly use Music on Console ([MOC][6]) to play music from my network-attached storage (NAS). I have hundreds of CDs, all turned into [FLAC][7] files. And I've also bought many digital albums from sources like [BandCamp][8] or [Society of Sound][9].
|
||||
|
||||
Another option is the [Music Player Daemon (MPD)][10]. With it running on the Raspberry Pi, I can interact with my music remotely over the network using any of the many clients available for Linux and Android.
|
||||
|
||||
### Can't stop the music
|
||||
|
||||
As you can see, the possibilities for creating an inexpensive HiFi system are almost endless on both the software and the hardware side. Our solution is just one of many, and I hope it inspires you to build something that fits your environment.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/1/raspberry-pi-hifi
|
||||
|
||||
作者:[Peter Czanik][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/czanik
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hi-fi-stereo-vintage.png?itok=KYY3YQwE (HiFi vintage stereo)
|
||||
[2]: https://www.hifiberry.com/products/amp/
|
||||
[3]: https://audioengineusa.com/shop/passivespeakers/p4-passive-speakers/
|
||||
[4]: https://volumio.org/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/volumeio.png (Volumio interface)
|
||||
[6]: https://en.wikipedia.org/wiki/Music_on_Console
|
||||
[7]: https://xiph.org/flac/
|
||||
[8]: https://bandcamp.com/
|
||||
[9]: https://realworldrecords.com/news/society-of-sound-statement/
|
||||
[10]: https://www.musicpd.org/
|
@ -1,250 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Convert your Windows install into a VM on Linux)
|
||||
[#]: via: (https://opensource.com/article/21/1/virtualbox-windows-linux)
|
||||
[#]: author: (David Both https://opensource.com/users/dboth)
|
||||
|
||||
Convert your Windows install into a VM on Linux
|
||||
======
|
||||
Here's how I configured a VirtualBox VM to use a physical Windows drive
|
||||
on my Linux workstation.
|
||||
![Puzzle pieces coming together to form a computer screen][1]
|
||||
|
||||
I use VirtualBox frequently to create virtual machines for testing new versions of Fedora, new application programs, and lots of administrative tools like Ansible. I have even used VirtualBox to test the creation of a Windows guest.
|
||||
|
||||
Never have I ever used Windows as my primary operating system on any of my personal computers or even in a VM to perform some obscure task that cannot be done with Linux. I do, however, volunteer for an organization that uses one financial program that requires Windows. This program runs on the office manager's computer on Windows 10 Pro, which came preinstalled.
|
||||
|
||||
This financial application is not special, and [a better Linux program][2] could easily replace it, but I've found that many accountants and treasurers are extremely reluctant to make changes, so I've not yet been able to convince those in our organization to migrate.
|
||||
|
||||
This set of circumstances, along with a recent security scare, made it highly desirable to convert the host running Windows to Fedora and to run Windows and the accounting program in a VM on that host.
|
||||
|
||||
It is important to understand that I have an extreme dislike for Windows for multiple reasons. The primary ones that apply to this case are that I would hate to pay for another Windows license – Windows 10 Pro costs about $200 – to install it on a new VM. Also, Windows 10 requires enough information when setting it up on a new system or after an installation to enable crackers to steal one's identity, should the Microsoft database be breached. No one should need to provide their name, phone number, and birth date in order to register software.
|
||||
|
||||
### Getting started
|
||||
|
||||
The physical computer already had a 240GB NVMe m.2 storage device installed in the only available m.2 slot on the motherboard. I decided to install a new SATA SSD in the host and use the existing SSD with Windows on it as the storage device for the Windows VM. Kingston has an excellent overview of various SSD devices, form factors, and interfaces on its web site.
|
||||
|
||||
That approach meant that I wouldn't need to do a completely new installation of Windows or any of the existing application software. It also meant that the office manager who works at this computer would use Linux for all normal activities such as email, web access, document and spreadsheet creation with LibreOffice. This approach increases the host's security profile. The only time that the Windows VM would be used is to run the accounting program.
|
||||
|
||||
### Back it up first
|
||||
|
||||
Before I did anything else, I created a backup ISO image of the entire NVMe storage device. I made a partition on a 500GB external USB storage drive, created an ext4 filesystem on it, and then mounted that partition on **/mnt**. I used the **dd** command to create the image.
|
||||
|
||||
I installed the new 500GB SATA SSD in the host and installed the Fedora 32 Xfce spin on it from a Live USB. At the initial reboot after installation, both the Linux and Windows drives were available on the GRUB2 boot menu. At this point, the host could be dual-booted between Linux and Windows.
|
||||
|
||||
### Looking for help in all the internet places
|
||||
|
||||
Now I needed some information on creating a VM that uses a physical hard drive or SSD as its storage device. I quickly discovered a lot of information about how to do this in the VirtualBox documentation and the internet in general. Although the VirtualBox documentation helped me to get started, it is not complete, leaving out some critical information. Most of the other information I found on the internet is also quite incomplete.
|
||||
|
||||
With some critical help from one of our Opensource.com Correspondents, Joshua Holm, I was able to break through the cruft and make this work in a repeatable procedure.
|
||||
|
||||
### Making it work
|
||||
|
||||
This procedure is actually fairly simple, although one arcane hack is required to make it work. The Windows and Linux operating systems were already in place by the time I was ready for this step.
|
||||
|
||||
First, I installed the most recent version of VirtualBox on the Linux host. VirtualBox can be installed from many distributions' software repositories, directly from the Oracle VirtualBox repository, or by downloading the desired package file from the VirtualBox web site and installing locally. I chose to download the AMD64 version, which is actually an installer and not a package. I use this version to circumvent a problem that is not related to this particular project.
|
||||
|
||||
The installation procedure always creates a **vboxusers** group in **/etc/group**. I added the users intended to run this VM to the **vboxusers** and **disk** groups in **/etc/group**. It is important to add the same users to the **disk** group because VirtualBox runs as the user who launched it and also requires direct access to the **/dev/sdx** device special file to work in this scenario. Adding users to the **disk** group provides that level of access, which they would not otherwise have.
|
||||
|
||||
I then created a directory to store the VMs and gave it ownership of **root.vboxusers** and **775** permissions. I used **/vms** for the directory, but it could be anything you want. By default, VirtualBox creates new virtual machines in a subdirectory of the user creating the VM. That would make it impossible to share access to the VM among multiple users without creating a massive security vulnerability. Placing the VM directory in an accessible location allows sharing the VMs.
|
||||
|
||||
I started the VirtualBox Manager as a non-root user. I then used the VirtualBox **Preferences ==> General** menu to set the Default Machine Folder to the directory **/vms**.
|
||||
|
||||
I created the VM without a virtual disk. The **Type** should be **Windows**, and the **Version** should be set to **Windows 10 64-bit**. Set a reasonable amount of RAM for the VM, but this can be changed later so long as the VM is off. On the **Hard disk** page of the installation, I chose the **Do not add a virtual hard disk** option and clicked on **Create**. The new VM appeared in the VirtualBox Manager window. This procedure also created the **/vms/Test1** directory.
|
||||
|
||||
I did this using the **Advanced** menu and performed all of the configurations on a single page, as seen in Figure 1. The **Guided Mode** obtains the same information but requires more clicks to go through a window for each configuration item. It does provide a little more in the way of help text, but I did not need that.
|
||||
|
||||
![VirtualBox dialog box to create a new virtual machine but do not add a hard disk][3]
|
||||
|
||||
opensource.com
|
||||
|
||||
Figure 1: Create a new virtual machine but do not add a hard disk.
|
||||
|
||||
Then I needed to know which device was assigned by Linux to the raw Windows drive. As root in a terminal session, use the **lshw** command to discover the device assignment for the Windows disk. In this case, the device that represents the entire storage device is **/dev/sdb**.
|
||||
|
||||
|
||||
```
# lshw -short -class disk,volume
H/W path             Device      Class   Description
=========================================================
/0/100/17/0          /dev/sda    disk    500GB CT500MX500SSD1
/0/100/17/0/1                    volume  2047MiB Windows FAT volume
/0/100/17/0/2        /dev/sda2   volume  4GiB EXT4 volume
/0/100/17/0/3        /dev/sda3   volume  459GiB LVM Physical Volume
/0/100/17/1          /dev/cdrom  disk    DVD+-RW DU-8A5LH
/0/100/17/0.0.0      /dev/sdb    disk    256GB TOSHIBA KSG60ZMV
/0/100/17/0.0.0/1    /dev/sdb1   volume  649MiB Windows FAT volume
/0/100/17/0.0.0/2    /dev/sdb2   volume  127MiB reserved partition
/0/100/17/0.0.0/3    /dev/sdb3   volume  236GiB Windows NTFS volume
/0/100/17/0.0.0/4    /dev/sdb4   volume  989MiB Windows NTFS volume
[root@office1 etc]#
```
|
||||
|
||||
Instead of a virtual storage device located in the **/vms/Test1** directory, VirtualBox needs to have a way to identify the physical hard drive from which it is to boot. This identification is accomplished by creating a `*.vmdk` file, which points to the raw physical disk that will be used as the storage device for the VM. As a non-root user, I created a **vmdk** file that points to the entire Windows device, **/dev/sdb**.
|
||||
|
||||
|
||||
```
$ VBoxManage internalcommands createrawvmdk -filename /vms/Test1/Test1.vmdk -rawdisk /dev/sdb
RAW host disk access VMDK file /vms/Test1/Test1.vmdk created successfully.
```
|
||||
|
||||
I then used the **VirtualBox Manager File ==> Virtual Media Manager** dialog to add the **vmdk** disk to the available hard disks. I clicked on **Add**, and the default **/vms** location was displayed in the file management dialog. I selected the **Test1** directory and then the **Test1.vmdk** file. I then clicked **Open**, and the **Test1.vmdk** file was displayed in the list of available hard drives. I selected it and clicked on **Close**.

The next step was to add this **vmdk** disk to the storage devices for our VM. In the settings menu for the **Test1 VM**, I selected **Storage** and clicked on the icon to add a hard disk. This opened a dialog that showed the **Test1.vmdk** virtual disk file in a list entitled **Not attached**. I selected this file and clicked on the **Choose** button. This device is now displayed in the list of storage devices connected to the **Test1 VM**. The only other storage device on this VM is an empty CD/DVD-ROM drive.

I clicked on **OK** to complete the addition of this device to the VM.

There was one more item to configure before the new VM would work. Using the **VirtualBox Manager Settings** dialog for the **Test1 VM**, I navigated to the **System ==> Motherboard** page and placed a check in the box for **Enable EFI**. If you do not do this, VirtualBox will generate an error stating that it cannot find a bootable medium when you attempt to boot this VM.

The virtual machine now boots from the raw Windows 10 hard drive. However, I could not log in because I did not have a regular account on this system, and I also did not have access to the password for the Windows administrator account.

### Unlocking the drive

No, this section is not about breaking the encryption of the hard drive. Rather, it is about bypassing the password for one of the many Windows administrator accounts, which no one at the organization had.

Even though I could boot the Windows VM, I could not log in because I had no account on that host, and asking people for their passwords is a horrible security breach. Nevertheless, I needed to log in to the VM to install the **VirtualBox Guest Additions**, which would provide seamless capture and release of the mouse pointer, allow me to resize the VM to be larger than 1024x768, and perform normal maintenance in the future.

This is a perfect use case for Linux's ability to change user passwords. Although I would be starting from the previous administrator's account, he would no longer be supporting this system, and I could not discern his password or the patterns he used to generate passwords. So I simply cleared the previous sysadmin's password.

There is a very nice open source software tool specifically for this task. On the Linux host, I installed **chntpw**, which probably stands for something like, "Change NT PassWord."

```
# dnf -y install chntpw
```

I powered off the VM and then mounted the **/dev/sdb3** partition on **/mnt**. I determined that **/dev/sdb3** is the correct partition because it is the first large NTFS partition I saw in the output from the **lshw** command I performed previously. Be sure not to mount the partition while the VM is running; that could cause significant corruption of the data on the VM storage device. Note that the correct partition might be different on other hosts.

Navigate to the **/mnt/Windows/System32/config** directory. The **chntpw** utility program does not work if that is not the present working directory (PWD). Start the program.

```
# chntpw -i SAM
chntpw version 1.00 140201, (c) Petter N Hagen
Hive <SAM> name (from header): <\SystemRoot\System32\Config\SAM>
ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c <lh>
File size 131072 [20000] bytes, containing 11 pages (+ 1 headerpage)
Used for data: 367/44720 blocks/bytes, unused: 14/24560 blocks/bytes.

<>========<> chntpw Main Interactive Menu <>========<>

Loaded hives: <SAM>

1 - Edit user data and passwords
2 - List groups
- - -
9 - Registry editor, now with full write support!
q - Quit (you will be asked if there is something to save)

What to do? [1] ->
```

The **chntpw** command uses a TUI (text user interface) that provides a set of menu options. When one of the primary menu items is chosen, a secondary menu is usually displayed. Guided by the clear menu names, I first chose menu item **1**.

```
What to do? [1] -> 1

===== chntpw Edit User Info & Passwords ====

| RID -|---------- Username ------------| Admin? |- Lock? --|
| 01f4 | Administrator                  | ADMIN  | dis/lock |
| 03ec | john                           | ADMIN  | dis/lock |
| 01f7 | DefaultAccount                 |        | dis/lock |
| 01f5 | Guest                          |        | dis/lock |
| 01f8 | WDAGUtilityAccount             |        | dis/lock |

Please enter user number (RID) or 0 to exit: [3e9]
```

Next, I selected our admin account, **john**, by typing the RID at the prompt. This displays information about the user and offers additional menu items to manage the account.
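
As a side note, the bracketed values in **chntpw**'s account table are hexadecimal RIDs, while Windows tools usually report the same identifier in decimal; the program's own output below shows `RID : 1003 [03eb]`. A quick illustrative check in Python (not part of the original procedure) confirms the two forms are the same number:

```python
# chntpw displays RIDs in zero-padded lowercase hex; Windows reports decimal.
def rid_to_hex(rid: int) -> str:
    """Format a decimal RID the way chntpw displays it: lowercase, zero-padded."""
    return format(rid, "04x")

print(int("03eb", 16))   # 1003
print(rid_to_hex(1003))  # 03eb
```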

```
Please enter user number (RID) or 0 to exit: [3e9] 03eb
================= USER EDIT ====================

RID     : 1003 [03eb]
Username: john
fullname:
comment :
homedir :

00000221 = Users (which has 4 members)
00000220 = Administrators (which has 5 members)

Account bits: 0x0214 =
[ ] Disabled        | [ ] Homedir req.    | [ ] Passwd not req. |
[ ] Temp. duplicate | [X] Normal account  | [ ] NMS account     |
[ ] Domain trust ac | [ ] Wks trust act.  | [ ] Srv trust act   |
[X] Pwd don't expir | [ ] Auto lockout    | [ ] (unknown 0x08)  |
[ ] (unknown 0x10)  | [ ] (unknown 0x20)  | [ ] (unknown 0x40)  |

Failed login count: 0, while max tries is: 0
Total login count: 47

- - - - User Edit Menu:
 1 - Clear (blank) user password
 2 - Unlock and enable user account [probably locked now]
 3 - Promote user (make user an administrator)
 4 - Add user to a group
 5 - Remove user from a group
 q - Quit editing user, back to user select
Select: [q] > 2
```

At this point, I chose menu item **2**, "Unlock and enable user account," which deletes the password and enables me to log in without a password. By the way – this is an automatic login. I then exited the program. Be sure to unmount **/mnt** before proceeding.

I know, I know, but why not! I have already bypassed security on this drive and host, so it matters not one iota. At this point, I did log in to the old administrative account and created a new account for myself with a secure password. I then logged in as myself and deleted the old admin account so that no one else could use it.

There are also instructions on the internet for using the Windows Administrator account (01f4 in the list above). I could have deleted or changed the password on that account had there not been an organizational admin account in place. Note also that this procedure can be performed from a live USB booted on the target host.

### Reactivating Windows

So I now had the Windows SSD running as a VM on my Fedora host. However, in a frustrating turn of events, after running for a few hours, Windows displayed a warning message indicating that I needed to "Activate Windows."

After chasing many more dead-end web pages, I gave up on trying to reactivate using an existing code because it appeared to have been somehow destroyed. Finally, when attempting to follow one of the online virtual support chat sessions, the virtual "Get help" application indicated that my instance of Windows 10 Pro was already activated. How can this be the case? It kept wanting me to activate it, yet when I tried, it said it was already activated.

### Or not

After spending several hours over three days on research and experimentation, I decided to go back to booting the original SSD into Windows and come back to this at a later date. But then Windows – even when booted from the original storage device – demanded to be reactivated.

Searching the Microsoft support site was unhelpful. After having to fuss with the same automated support as before, I called the phone number provided, only to be told by an automated response system that all support for Windows 10 Pro was provided only via the internet. By now, I was nearly a day late in getting the computer running and installed back at the office.

### Back to the future

I finally sucked it up, purchased a copy of Windows 10 Home – for about $120 – and created a VM with a virtual storage device on which to install it.

I copied a large number of document and spreadsheet files to the office manager's home directory. I reinstalled the one Windows program we need and verified with the office manager that it worked and the data was all there.

### Final thoughts

So my objective was met, literally a day late and about $120 short, but using a more standard approach. I am still making a few adjustments to permissions and restoring the Thunderbird address book; I have some CSV backups to work from, but the **\*.mab** files contain very little information on the Windows drive. I even used the Linux **find** command to locate all the ones on the original storage device.

I went down a number of rabbit holes and had to extract myself and start over each time. I ran into problems that were not directly related to this project, but that affected my work on it. Those problems included interesting things like mounting the Windows partition on **/mnt** on my Linux box and getting a message that the partition had been improperly closed by Windows (yes – on my Linux host) and that it had fixed the inconsistency. Not even Windows could do that after multiple reboots through its so-called "recovery" mode.

Perhaps you noticed some clues in the output data from the **chntpw** utility. I cut out some of the other user accounts that were displayed on my host for security reasons, but I saw from that information that all of the users were admins. Needless to say, I changed that. I am still surprised by the poor administrative practices I encounter, but I guess I should not be.

In the end, I was forced to purchase a license, but one that was at least a bit less expensive than the original. One thing I know is that the Linux piece of this worked perfectly once I had found all the necessary information. The issue was dealing with Windows activation. Some of you may have been successful at getting Windows reactivated. If so, I would still like to know how you did it, so please add your experience to the comments.

This is yet another reason I dislike Windows and only ever use Linux on my own systems. It is also one of the reasons I am converting all of the organization's computers to Linux. It just takes time and convincing. We only have this one accounting program left, and I need to work with the treasurer to find one that works for her. I understand this – I like my own tools, and I need them to work in a way that is best for me.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/1/virtualbox-windows-linux

作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen)
[2]: https://opensource.com/article/20/7/godbledger
[3]: https://opensource.com/sites/default/files/virtualbox.png

[#]: collector: (lujun9972)
[#]: translator: (ShuyRoy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with distributed tracing using Grafana Tempo)
[#]: via: (https://opensource.com/article/21/2/tempo-distributed-tracing)
[#]: author: (Annanay Agarwal https://opensource.com/users/annanayagarwal)

Get started with distributed tracing using Grafana Tempo
======

Grafana Tempo is a new open source, high-volume distributed tracing backend.

![Computer laptop in space][1]

Grafana's [Tempo][2] is an easy-to-use, high-scale, distributed tracing backend from Grafana Labs. Tempo has integrations with [Grafana][3], [Prometheus][4], and [Loki][5] and requires only object storage to operate, making it cost-efficient and easy to operate.

I've been involved with this open source project since its inception, so I'll go over some of the basics about Tempo and show why the cloud-native community has taken notice of it.

### Distributed tracing

It's common to want to gather telemetry on requests made to an application. But in the modern server world, a single application is regularly split across many microservices, potentially running on several different nodes.

Distributed tracing is a way to get fine-grained information about the performance of an application that may consist of discrete services. It provides a consolidated view of the request's lifecycle as it passes through an application. Tempo's distributed tracing can be used with monolithic or microservice applications, and it gives you [request-scoped information][6], making it the third pillar of observability (alongside metrics and logs).

The following is an example of a Gantt chart that distributed tracing systems can produce for applications. It uses the Jaeger [HotROD][7] demo application to generate traces and stores them in Grafana Cloud's hosted Tempo. This chart shows the processing time for the request, broken down by service and function.

![Gantt chart from Grafana Tempo][8]

(Annanay Agarwal, [CC BY-SA 4.0][9])
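
The data behind such a chart is simple: each unit of work is a span with a name, a start time, a duration, and a parent span. The toy sketch below (illustrative Python only; it is not Tempo's or Jaeger's actual data model) shows how nested spans capture a request that fans out to two downstream services:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Span:
    """One unit of work in a trace: what ran, when, and for how long."""
    name: str
    start: float
    duration: float = 0.0
    children: list = field(default_factory=list)

def start_span(name, parent=None):
    span = Span(name=name, start=time.monotonic())
    if parent is not None:
        parent.children.append(span)  # nesting is what produces the Gantt rows
    return span

def finish(span):
    span.duration = time.monotonic() - span.start

# A request that fans out to two downstream services:
root = start_span("HTTP GET /dispatch")
drivers = start_span("driver-service", parent=root)
finish(drivers)
eta = start_span("eta-service", parent=root)
finish(eta)
finish(root)

print([child.name for child in root.children])
```

Rendering such a tree on a time axis, one row per span, yields exactly the kind of chart shown above.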

### Reducing index size

Traces have a ton of information in a rich and well-defined data model. Usually, there are two interactions with a tracing backend: filtering for traces using metadata selectors like the service name or duration, and visualizing a trace once it's been filtered.

To enhance search, most open source distributed tracing frameworks index a number of fields from the trace, including the service name, operation name, tags, and duration. This results in a large index and pushes you to use a database like Elasticsearch or [Cassandra][10]. However, these can be tough to manage and costly to operate at scale, so my team at Grafana Labs set out to come up with a better solution.

At Grafana, our on-call debugging workflows start with drilling down into the problem using a metrics dashboard (we use [Cortex][11], a Cloud Native Computing Foundation incubating project for scaling Prometheus, to store metrics from our application), sifting through the logs for the problematic service (we store our logs in Loki, which is like Prometheus, but for logs), and then viewing traces for a given request. We realized that all the indexing information we need for the filtering step is available in Cortex and Loki. However, we needed a strong integration for trace discoverability through these tools and a complementary store for key-value lookup by trace ID.

This was the start of the [Grafana Tempo][12] project. By focusing on retrieving traces given a trace ID, we designed Tempo to be a minimal-dependency, high-volume, cost-effective distributed tracing backend.
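
Stripped to its essence, the retrieval model Tempo bets on is a plain key-value contract: store a trace's data under its trace ID and fetch it later by that ID alone, with no secondary index. A toy sketch of that contract (a Python dict standing in for an object storage bucket; not Tempo's real block format):

```python
class TraceStore:
    """Key-value trace lookup: the only query is 'give me trace X'."""

    def __init__(self):
        self._objects = {}  # stand-in for an object storage bucket

    def put(self, trace_id: str, spans: bytes) -> None:
        self._objects[trace_id] = spans

    def get(self, trace_id: str) -> bytes:
        # No filtering by service, tags, or duration here; discovery
        # is delegated to metrics (Cortex) and logs (Loki).
        return self._objects[trace_id]

store = TraceStore()
store.put("2f4e01", b'[{"span": "GET /dispatch"}]')
print(store.get("2f4e01"))
```

Because every operation is keyed by trace ID, the backend needs no query engine of its own, which is what lets object storage be the only dependency.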

### Easy to operate and cost-effective

Tempo uses an object storage backend, which is its only dependency. It can be used in either single-binary or microservices mode (check out the [examples][13] in the repo on how to get started easily). Using object storage also means you can store a high volume of traces from applications without any sampling. This ensures that you never throw away traces for those one-in-a-million requests that errored out or had higher latencies.

### Strong integration with open source tools

[Grafana 7.3 includes a Tempo data source][14], which means you can visualize traces from Tempo in the Grafana UI. Also, [Loki 2.0's new query features][15] make trace discovery in Tempo easy. And to integrate with Prometheus, the team is working on adding support for exemplars, which are high-cardinality metadata that you can add to time-series data. The metric storage backends do not index these, but you can retrieve and display them alongside the metric value in the Grafana UI. While exemplars can store various metadata, in this use case they store trace IDs so they can integrate tightly with Tempo.
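
Conceptually, an exemplar is just a sampled observation attached to a histogram bucket that carries a trace ID along with the measured value. A minimal sketch of the idea (hypothetical Python, not the Prometheus client library's actual API):

```python
from dataclasses import dataclass

@dataclass
class Exemplar:
    value: float    # the observed request latency, in seconds
    trace_id: str   # links this one observation to a trace in Tempo

class LatencyHistogram:
    """A latency histogram that keeps the most recent exemplar per bucket."""

    def __init__(self, buckets):
        self.buckets = sorted(buckets)
        self.counts = {bound: 0 for bound in self.buckets}
        self.exemplars = {}

    def observe(self, value, trace_id=None):
        for bound in self.buckets:
            if value <= bound:
                self.counts[bound] += 1
                if trace_id is not None:
                    self.exemplars[bound] = Exemplar(value, trace_id)
                break

hist = LatencyHistogram(buckets=[0.1, 0.5, 2.5])
hist.observe(0.3, trace_id="a1b2c3d4")
print(hist.counts[0.5], hist.exemplars[0.5].trace_id)
```

This is why the aggregated metric stays cheap to store while each plotted point can still deep-link to one concrete trace.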

This example shows using exemplars with a request latency histogram where each exemplar data point links to a trace in Tempo.

![Using exemplars in Tempo][16]

(Annanay Agarwal, [CC BY-SA 4.0][9])

### Consistent metadata

Telemetry data emitted by containerized applications generally has some metadata associated with it. This can include information like the cluster ID, namespace, pod IP, etc. This is great for providing on-demand information, but it's even better if you can use the information contained in metadata for something productive.

For instance, you can use the [Grafana Cloud Agent to ingest traces into Tempo][17]; the agent leverages the Prometheus service discovery mechanism to poll the Kubernetes API for metadata information and adds these as tags to spans emitted by the application. Since this metadata is also indexed in Loki, it makes it easy for you to jump from traces to view logs for a given service by translating metadata into Loki label selectors.

The following is an example of consistent metadata that can be used to view the logs for a given span in a trace in Tempo.

![][18]

### Cloud-native

Grafana Tempo is available as a containerized application, and you can run it on any orchestration engine, like Kubernetes or Mesos. The various services can be horizontally scaled depending on the workload on the ingest/query path. You can also use cloud-native object storage, such as Google Cloud Storage, Amazon S3, or Azure Blob Storage, with Tempo. For further information, read the [architecture section][19] in Tempo's documentation.

### Try Tempo

If this sounds like it might be as useful for you as it has been for us, [clone the Tempo repo][20] and give it a try.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/tempo-distributed-tracing

作者:[Annanay Agarwal][a]
选题:[lujun9972][b]
译者:[RiaXu](https://github.com/ShuyRoy)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/annanayagarwal
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
[2]: https://grafana.com/oss/tempo/
[3]: http://grafana.com/oss/grafana
[4]: https://prometheus.io/
[5]: https://grafana.com/oss/loki/
[6]: https://peter.bourgon.org/blog/2017/02/21/metrics-tracing-and-logging.html
[7]: https://github.com/jaegertracing/jaeger/tree/master/examples/hotrod
[8]: https://opensource.com/sites/default/files/uploads/tempo_gantt.png (Gantt chart from Grafana Tempo)
[9]: https://creativecommons.org/licenses/by-sa/4.0/
[10]: https://opensource.com/article/19/8/how-set-apache-cassandra-cluster
[11]: https://cortexmetrics.io/
[12]: http://github.com/grafana/tempo
[13]: https://grafana.com/docs/tempo/latest/getting-started/example-demo-app/
[14]: https://grafana.com/blog/2020/10/29/grafana-7.3-released-support-for-the-grafana-tempo-tracing-system-new-color-palettes-live-updates-for-dashboard-viewers-and-more/
[15]: https://grafana.com/blog/2020/11/09/trace-discovery-in-grafana-tempo-using-prometheus-exemplars-loki-2.0-queries-and-more/
[16]: https://opensource.com/sites/default/files/uploads/tempo_exemplar.png (Using exemplars in Tempo)
[17]: https://grafana.com/blog/2020/11/17/tracing-with-the-grafana-cloud-agent-and-grafana-tempo/
[18]: https://lh5.googleusercontent.com/vNqk-ygBOLjKJnCbTbf2P5iyU5Wjv2joR7W-oD7myaP73Mx0KArBI2CTrEDVi04GQHXAXecTUXdkMqKRq8icnXFJ7yWUEpaswB1AOU4wfUuADpRV8pttVtXvTpVVv8_OfnDINgfN
[19]: https://grafana.com/docs/tempo/latest/architecture/architecture/
[20]: https://github.com/grafana/tempo

[#]: subject: (Set your path in FreeDOS)
[#]: via: (https://opensource.com/article/21/2/path-freedos)
[#]: author: (Kevin O'Brien https://opensource.com/users/ahuka)
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Set your path in FreeDOS
======

Learn about your FreeDOS path, how to set it, and how to use it.

![Looking at a map for career journey][1]

Everything you do in the open source [FreeDOS][2] operating system is done from the command line. The command line begins with a _prompt_, which is the computer's way of saying, "I'm ready. Give me something to do." You can configure your prompt's appearance, but by default, it's:

```
C:\>
```

From the command line, you can do two things: Run an internal command or run a program. External commands are programs found in separate files in your `FDOS` directory, so running programs includes running external commands. It also means running the application software you use to do things with your computer. You can also run a batch file, but in that case, all you're doing is running a series of commands or programs that are listed in the batch file.

### Executable application files

FreeDOS can run three types of application files:

1. **COM** is a file in machine language less than 64KB in size.
2. **EXE** is a file in machine language that can be larger than 64KB. EXE files also have information at the beginning of the file telling DOS what type of file it is and how to load and run it.
3. **BAT** is a _batch file_ written with a text editor in ASCII text format containing FreeDOS commands that are executed in batch mode. This means each command is executed in sequence until the file ends.

If you enter an application name that FreeDOS does not recognize as either an internal command or a program, you get the error message _Bad command or filename_. If you see this error, it means one of three things:

1. The name you gave is incorrect for some reason. Possibly you misspelled the file name, or maybe you're using the wrong command name. Check the name and the spelling, and try again.
2. Maybe the program you are trying to run is not installed on the computer. Verify that it is installed.
3. The file does exist, but FreeDOS doesn't know where to find it.

The final item on this list is the subject of this article, and it's referred to as the `PATH`. If you're used to Linux or Unix, you may already understand the concept of [the PATH variable][3]. If you're new to the command line, the path is an important thing to get comfortable with.

### The path

When you enter the name of an executable application file, FreeDOS has to find it. FreeDOS looks for the file in a specific hierarchy of locations:

1. First, it looks in the active directory of the current drive (called the _working directory_). If you're in the directory `C:\FDOS` and you type in the name `FOOBAR.EXE`, FreeDOS looks in `C:\FDOS` for a file with that name. You don't even need to type in the entire name. If you type in `FOOBAR`, FreeDOS looks for any executable file with that name, whether it's `FOOBAR.EXE`, `FOOBAR.COM`, or `FOOBAR.BAT`. Should FreeDOS find a file matching that name, it runs it.
2. If FreeDOS does not find a file with the name you've entered, it consults something called the `PATH`. This is a list of directories that DOS has been instructed to check whenever it cannot find a file in the current active directory.
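
That lookup order is easy to model. The sketch below (illustrative Python; real FreeDOS also checks its internal command table first, and the extension ordering here is a simplification) walks the working directory and then each `PATH` entry, trying the executable extensions for a bare name:

```python
def find_executable(name, cwd_files, path_dirs):
    """Mimic the FreeDOS lookup: working directory first, then PATH.

    path_dirs maps each PATH directory, in order, to the file names it holds.
    A bare name is tried with each executable extension in turn.
    """
    if "." in name:
        candidates = [name.upper()]
    else:
        candidates = [name.upper() + ext for ext in (".COM", ".EXE", ".BAT")]
    search = [("CWD", cwd_files)] + list(path_dirs.items())
    for directory, files in search:
        for candidate in candidates:
            if candidate in files:
                return directory + "\\" + candidate
    return None  # FreeDOS would print: Bad command or filename

path = {"C:\\FDOS\\BIN": ["BZIP2.EXE", "EDIT.EXE"]}
print(find_executable("foobar", ["FOOBAR.BAT"], path))
print(find_executable("bzip2", [], path))
print(find_executable("nothere", [], path))
```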

You can see your computer's path at any time by using the `PATH` command. Just type `path` at the FreeDOS prompt, and FreeDOS returns your path setting:

```
C:\>path
PATH=C:\FDOS\BIN
```

The first line is the prompt and the command, and the second line is what the computer returned. You can see that the first place DOS looks is `FDOS\BIN`, which is located on the `C` drive. If you want to change your path, you can enter a path command and the new path you want to use:

```
C:\>path=C:\HOME\BIN;C:\FDOS\BIN
```

In this example, I set my path to my personal `BIN` folder, which I keep in a custom directory called `HOME`, and then to `FDOS\BIN`. Now when you check your path:

```
C:\>path
PATH=C:\HOME\BIN;C:\FDOS\BIN
```

The path setting is processed in the order that directories are listed.

You may notice that some characters are lowercase and some uppercase. It really doesn't matter which you use. FreeDOS is not case-sensitive and treats everything as uppercase. Internally, FreeDOS uses all uppercase letters, which is why you see the output from your commands in uppercase. If you type commands and file names in lowercase, a converter automatically converts them to uppercase, and they are executed.

Entering a new path replaces whatever the path was set to previously.

### The autoexec.bat file

The next question you might have is where that first path, the one FreeDOS uses by default, came from. That, along with several other important settings, is defined in the `AUTOEXEC.BAT` file located at the root of your `C` drive. This is a batch file that automatically executes (hence the name) when you start FreeDOS. You can edit this file with the FreeDOS program `EDIT`. To see or edit the contents of this file, enter the following command:

```
C:\>edit autoexec.bat
```

This line appears near the top:

```
SET PATH=%dosdir%\BIN
```

This line defines the value of the default path.
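
If you want a custom path to survive a reboot, edit that line rather than typing a `path` command at every startup. For example, this hypothetical edit prepends a personal `BIN` directory while keeping `%dosdir%\BIN` so the standard FreeDOS utilities stay reachable:

```
SET PATH=C:\HOME\BIN;%dosdir%\BIN
```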

After you look at `AUTOEXEC.BAT`, you can exit the EDIT application by pressing the following keys in order:

1. Alt
2. f
3. x

You can also use the keyboard shortcut **Alt**+**X**.

### Using the full path

If you forget to include `C:\FDOS\BIN` in your path, you won't have immediate access to any of the applications stored there because FreeDOS won't know where to find them. For instance, imagine I set my path to my personal collection of applications:

```
C:\>path=C:\HOME\BIN
```

Applications built into the command line still work:

```
C:\>cd HOME
C:\HOME>dir
ARTICLES
BIN
CHEATSHEETS
GAMES
DND
```

However, external commands fail:

```
C:\HOME\ARTICLES>BZIP2 -c example.txt
Bad command or filename - "BZIP2"
```

You can always execute a command that you know is on your system but not in your path by providing the _full path_ to the file:

```
C:\HOME\ARTICLES>C:\FDOS\BIN\BZIP2 -c example.txt
C:\HOME\ARTICLES>DIR
example.txb
```

You can execute applications from external media or other directories the same way.

### FreeDOS path

Generally, you probably want to keep `C:\FDOS\BIN` in your path because it contains all the default applications distributed with FreeDOS.

Unless you change the path in `AUTOEXEC.BAT`, the default path is restored after a reboot.

Now that you know how to manage your path in FreeDOS, you can execute commands and maintain your working environment in whatever way works best for you.

* * *

_Thanks to [DOS Lesson 5: The Path][4] (published under a CC BY-SA 4.0 license) for some of the information in this article._

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/path-freedos

作者:[Kevin O'Brien][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ahuka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey)
[2]: https://www.freedos.org/
[3]: https://opensource.com/article/17/6/set-path-linux
[4]: https://www.ahuka.com/dos-lessons-for-self-study-purposes/dos-lesson-5-the-path/

[#]: subject: (4 new open source licenses)
[#]: via: (https://opensource.com/article/21/2/osi-licenses-cal-cern-ohl)
[#]: author: (Pam Chestek https://opensource.com/users/pchestek)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

4 new open source licenses
======

Get to know the new OSI-approved Cryptographic Autonomy License and CERN open hardware licenses.

![Law books in a library][1]

As the steward of the [Open Source Definition][2], the [Open Source Initiative][3] has been designating licenses as "open source" for over 20 years. These licenses are the foundation of the open source software ecosystem, ensuring that everyone can use, improve, and share software. When a license is approved, it is because the OSI believes that the license fosters collaboration and sharing for the benefit of everyone who participates in the ecosystem.

The world has changed over the past 20 years, with software now used in new and even unimaginable ways. The OSI has seen that the familiar open source licenses are not always well-suited for these new situations. But license stewards have stepped up, submitting several new licenses for more expansive uses. The OSI was challenged to evaluate whether these new concepts in licensing would continue to advance sharing and collaboration and merit being referred to as "open source" licenses, ultimately approving some new special-purpose licenses.

### Four new licenses

First is the [Cryptographic Autonomy License][4]. This license is designed for distributed cryptographic applications. The challenge of this use case was that the existing open source licenses wouldn't assure openness, because it would be possible for one peer to impair the functioning of the network if there were no obligation to also share data with the other peers. So, in addition to being a strong copyleft license, the CAL also includes an obligation to provide third parties the permissions and materials needed to independently use and modify the software without that third party suffering a loss of data or capability.

As more and more uses arise for peer-to-peer sharing using a cryptographic structure, it wouldn't be surprising if more developers found themselves in need of a legal tool like the CAL. The community on License-Discuss and License-Review, OSI's two mailing lists where proposed new open source licenses are discussed, asked many questions about this license. We hope that the resulting license is clear and easy to understand and that other open source practitioners will find it useful.

Next, the European Organization for Nuclear Research, CERN, submitted the CERN Open Hardware Licence (OHL) family of licenses for consideration. All three of its licenses are primarily intended for open hardware, a field of open access that is similar to open source software but with its own challenges and nuances. The line between hardware and software has blurred considerably, so applying separate hardware and software licenses has become more and more difficult. CERN undertook crafting a license that would ensure freedom for both hardware and software.

The OSI probably would not have considered adding an open hardware license to its list of open source licenses back when it started, but the world has changed. So while the wording in the CERN licenses encompasses hardware concepts, it also meets all the qualifications to be approved by the OSI as an open source software license.
|
||||
|
||||
The suite of CERN Open Hardware licenses includes a [permissive license][5], a [weak reciprocal license][6], and a [strong reciprocal license][7]. Most recently, the license has been adopted by an international research project that is building simple, easily replicable ventilators to use with COVID-19 patients.
|
||||
|
||||
### Learn more
|
||||
|
||||
The CAL and CERN OHL licenses are special-purpose, and the OSI does not recommend their use outside the fields for which they were designed. But the OSI is eager to see whether these licenses will work as intended, fostering robust open ecosystems in these newer computing arenas.
|
||||
|
||||
More information on the [license approval process][8] is available from the OSI.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/osi-licenses-cal-cern-ohl
|
||||
|
||||
作者:[Pam Chestek][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/pchestek
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_lawdotgov3.png?itok=e4eFKe0l (Law books in a library)
|
||||
[2]: https://opensource.org/osd
|
||||
[3]: https://opensource.org/
|
||||
[4]: https://opensource.org/licenses/CAL-1.0
|
||||
[5]: https://opensource.org/CERN-OHL-P
|
||||
[6]: https://opensource.org/CERN-OHL-W
|
||||
[7]: https://opensource.org/CERN-OHL-S
|
||||
[8]: https://opensource.org/approval
|
@ -1,186 +0,0 @@
|
||||
[#]: subject: (5 surprising things you can do with LibreOffice from the command line)
|
||||
[#]: via: (https://opensource.com/article/21/3/libreoffice-command-line)
|
||||
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
5 surprising things you can do with LibreOffice from the command line
|
||||
======
|
||||
Convert, print, protect, and do more with your files directly from the
|
||||
command line.
|
||||
![hot keys for shortcuts or features on computer keyboard][1]
|
||||
|
||||
LibreOffice has all the productivity features you'd want from an office software suite, making it a popular open source alternative to Microsoft Office or Google Suite. One of LibreOffice's powers is the ability to operate from the command line. For example, Seth Kenlon recently explained how he uses a global [command-line option to convert multiple files][2] from DOCX to EPUB with LibreOffice. His article inspired me to share some other LibreOffice command-line tips and tricks.
|
||||
|
||||
Before we look at some hidden features of LibreOffice commands, you need to understand how to use options with applications. Not all applications accept options (aside from the basics like the `--help` option, which works in most Linux applications).
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --help`
|
||||
```
|
||||
|
||||
This returns descriptions of other options LibreOffice accepts. Some applications don't have many options, but LibreOffice has a few screens worth, so there's plenty to play with.
|
||||
|
||||
That said, here are five useful things you can do with LibreOffice at the terminal to make the software even more useful.
|
||||
|
||||
### 1\. Customize your launch options
|
||||
|
||||
You can modify how you launch LibreOffice. For instance, if you want to open just LibreOffice's word processor component:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --writer #starts the word processor`
|
||||
```
|
||||
|
||||
You can open its other components similarly:
|
||||
|
||||
|
||||
```
|
||||
$ libreoffice --calc #starts an empty Calc spreadsheet
|
||||
$ libreoffice --draw #starts an empty Draw document
|
||||
$ libreoffice --web #starts an empty HTML document
|
||||
```
|
||||
|
||||
You also can access specific help files from the command line:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --helpwriter`
|
||||
```
|
||||
|
||||
![LibreOffice Writer help][3]
|
||||
|
||||
(Don Watkins, [CC BY-SA 4.0][4])
|
||||
|
||||
Or if you need help with the spreadsheet application:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --helpcalc`
|
||||
```
|
||||
|
||||
You can start LibreOffice without the splash screen:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --writer --nologo`
|
||||
```
|
||||
|
||||
You can even have it launch minimized in the background while you finish working in your current window:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --writer --minimized`
|
||||
```
|
||||
|
||||
### 2\. Open a file in read-only mode
|
||||
|
||||
You can open files in read-only mode using `--view` to prevent accidentally making and saving changes to an important file:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --view example.odt`
|
||||
```
|
||||
|
||||
### 3\. Open a document as a template
|
||||
|
||||
Have you ever created a document to use as a letterhead or invoice form? LibreOffice has a rich built-in template system, but you can make any document a template with the `-n` option:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --writer -n example.odt`
|
||||
```
|
||||
|
||||
Your document will open in LibreOffice and you can make changes to it, but you won't overwrite the original file when you save it.
|
||||
|
||||
### 4\. Convert documents
|
||||
|
||||
When you need to do a small task like converting a file to a new format, it can take as long for the application to launch as it takes to do the task. The solution is the `--headless` option, which executes LibreOffice processes without launching the graphical user interface.
|
||||
|
||||
For example, converting a document to EPUB is a pretty simple task in LibreOffice—but it's even easier with the `libreoffice` command:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --headless --convert-to epub example.odt`
|
||||
```
|
||||
|
||||
Using wildcards means you can convert dozens of documents at once:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --headless --convert-to epub *.odt`
|
||||
```
|
||||
|
||||
You can convert files to several formats, including PDF, HTML, DOC, DOCX, EPUB, plain text, and many more.
|
||||
|
||||
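The headless conversion above is easy to script. Here's a minimal sketch (assuming `libreoffice` is on your `PATH`; the file names are placeholders) that builds the command as a list so you can inspect it before running:

```python
import subprocess

def convert_cmd(files, fmt="epub"):
    """Build a headless LibreOffice conversion command for a list of files."""
    return ["libreoffice", "--headless", "--convert-to", fmt, *files]

# Inspect the command that would run:
print(convert_cmd(["a.odt", "b.odt"], fmt="pdf"))

# To actually perform the conversion, uncomment:
# subprocess.run(convert_cmd(["example.odt"]), check=True)
```

Building the argument list separately from executing it makes the script easy to test and to dry-run.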
### 5\. Print from the terminal
|
||||
|
||||
You can print LibreOffice documents from the command line without opening the application:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --headless -p example.odt`
|
||||
```
|
||||
|
||||
This option prints to the default printer without opening LibreOffice; it just sends the document to your printer.
|
||||
|
||||
To print all the files in a directory:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice -p *.odt`
|
||||
```
|
||||
|
||||
(More than once, I've issued this command and then run out of paper, so make sure you have enough paper loaded in your printer before you start.)
|
||||
|
||||
You can also print files to PDF. There's usually no difference between this and using `--convert-to pdf`, but `--print-to-file` is easier to remember:
|
||||
|
||||
|
||||
```
|
||||
`$ libreoffice --print-to-file example.odt --headless`
|
||||
```
|
||||
|
||||
### Bonus: Flatpak and command options
|
||||
|
||||
If you installed LibreOffice as a [Flatpak][5], all of these command options work, but you have to pass them through Flatpak. Here's an example:
|
||||
|
||||
|
||||
```
|
||||
`$ flatpak run org.libreoffice.LibreOffice --writer`
|
||||
```
|
||||
|
||||
It's a lot more verbose than a local install, so you might be inspired to [write a Bash alias][6] to make it easier to interact with LibreOffice directly.
|
||||
|
||||
### Surprising terminal options
|
||||
|
||||
Find out how you can extend the power of LibreOffice from the command line by consulting the man pages:
|
||||
|
||||
|
||||
```
|
||||
`$ man libreoffice`
|
||||
```
|
||||
|
||||
Were you aware that LibreOffice had such a rich set of command-line options? Have you discovered other options that nobody else seems to know about? Share them in the comments!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/libreoffice-command-line
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/don-watkins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/shortcut_command_function_editing_key.png?itok=a0sEc5vo (hot keys for shortcuts or features on computer keyboard)
|
||||
[2]: https://opensource.com/article/21/2/linux-workday
|
||||
[3]: https://opensource.com/sites/default/files/uploads/libreoffice-help.png (LibreOffice Writer help)
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://www.libreoffice.org/download/flatpak/
|
||||
[6]: https://opensource.com/article/19/7/bash-aliases
|
@ -1,89 +0,0 @@
|
||||
[#]: subject: (Track your family calendar with a Raspberry Pi and a low-power display)
|
||||
[#]: via: (https://opensource.com/article/21/3/family-calendar-raspberry-pi)
|
||||
[#]: author: (Javier Pena https://opensource.com/users/jpena)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Track your family calendar with a Raspberry Pi and a low-power display
|
||||
======
|
||||
Help everyone keep up with your family's schedule using open source
|
||||
tools and an E Ink display.
|
||||
![Calendar with coffee and breakfast][1]
|
||||
|
||||
Some families have a complex schedule: the kids have school and afterschool activities, you have important events you want to remember, everyone has multiple appointments, and so forth. While you can keep track of everything using your cellphone and an app, wouldn't it be better to have a large, low-power display at home to show your family's calendar? Meet the E Ink calendar!
|
||||
|
||||
![E Ink calendar][2]
|
||||
|
||||
(Javier Pena, [CC BY-SA 4.0][3])
|
||||
|
||||
### The hardware
|
||||
|
||||
The calendar started as a holiday project, so I tried to reuse as much as I could. This included a Raspberry Pi 2 that had been unused for too long. I did not have an E Ink display, so I had to buy it. Fortunately, I found a vendor that provided [open source drivers and examples][4] for its Raspberry Pi-ready screen, which is connected using some [GPIO][5] ports.
|
||||
|
||||
My family also wanted to switch between different calendars, and that required some form of input. Instead of adding a USB keyboard, I opted for a simpler solution and bought a 1x4 matrix keypad, similar to the one described in [this article][6]. This allowed me to connect the keypad to some GPIO ports in the Raspberry Pi.
|
||||
|
||||
Finally, I needed a photo frame to house the whole setup. It looks a bit messy on the back, but it gets the job done.
|
||||
|
||||
![Calendar internals][7]
|
||||
|
||||
(Javier Pena, [CC BY-SA 4.0][3])
|
||||
|
||||
### The software
|
||||
|
||||
I took inspiration from a [similar project][8] and started writing the Python code for my project. I needed to get data from two areas:
|
||||
|
||||
* Weather data, which I got from the [OpenWeather API][9]
|
||||
* Calendar data; I decided to use the [CalDav standard][10], which lets me connect to a calendar running on my home server
|
||||
|
||||
|
||||
|
||||
Since I had to wait for some parts to arrive, I used a modular approach for the input and display so that I could debug most of the code without the hardware. The calendar application supports drivers, and I wrote a [Pygame][11] driver to run it on a desktop PC.
|
||||
|
||||
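That modular approach can be sketched as a small driver interface — a hypothetical illustration of the idea, not the project's actual code (see its GitHub repository for that):

```python
class Display:
    """Minimal driver interface: the calendar renders through any implementation."""
    def show(self, lines):
        raise NotImplementedError

class ConsoleDisplay(Display):
    """Debug driver: print to the terminal instead of the E Ink panel."""
    def __init__(self):
        self.last = None
    def show(self, lines):
        self.last = list(lines)
        print("\n".join(self.last))

def render_calendar(display, events):
    # The rendering logic only talks to the Display interface,
    # so it runs unchanged on a PC (Pygame/console) or on the E Ink screen.
    display.show(f"{t}  {title}" for t, title in events)

d = ConsoleDisplay()
render_calendar(d, [("08:00", "School"), ("17:30", "Soccer")])
```

Swapping in the real E Ink driver later only requires another `Display` subclass.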
The best part of writing the code was being able to reuse existing open source projects, so accessing the different APIs was easy. I could focus on the user interface—having per-person weekly and everyone daily calendars, allowing calendar selection using the keypad—and I had time to add some extra touches, like custom screen savers for special days.
|
||||
|
||||
![E Ink calendar screensaver][12]
|
||||
|
||||
(Javier Pena, [CC BY-SA 4.0][3])
|
||||
|
||||
The final integration step was making sure my calendar application would run on startup and be resilient to errors. I used a base [Raspberry Pi OS][13] image and installed the application as a systemd service so that it would survive failures and system restarts.
|
||||
|
||||
Once I finished everything, I uploaded the code [to GitHub][14]. So if you want to create a similar calendar, feel free to have a look and reuse it!
|
||||
|
||||
### The result
|
||||
|
||||
The calendar has become an everyday appliance in our kitchen. It helps us remember our daily activities, and even our kids use it to check their schedule before going to school.
|
||||
|
||||
On a personal note, the project helped me appreciate the _power of open_. Without open source drivers and libraries and open APIs, we would still be organizing our schedule with paper and a pen. Crazy, isn't it?
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/family-calendar-raspberry-pi
|
||||
|
||||
作者:[Javier Pena][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jpena
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar-coffee.jpg?itok=9idm1917 (Calendar with coffee and breakfast)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/calendar.jpg (E Ink calendar)
|
||||
[3]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[4]: https://github.com/waveshare/e-Paper
|
||||
[5]: https://opensource.com/article/19/3/gpio-pins-raspberry-pi
|
||||
[6]: https://www.instructables.com/1x4-Membrane-Keypad-w-Arduino/
|
||||
[7]: https://opensource.com/sites/default/files/uploads/calendar_internals.jpg (Calendar internals)
|
||||
[8]: https://github.com/zli117/EInk-Calendar
|
||||
[9]: https://openweathermap.org
|
||||
[10]: https://en.wikipedia.org/wiki/CalDAV
|
||||
[11]: https://github.com/pygame/pygame
|
||||
[12]: https://opensource.com/sites/default/files/uploads/calendar_screensaver.jpg (E Ink calendar screensaver)
|
||||
[13]: https://www.raspberrypi.org/software/
|
||||
[14]: https://github.com/javierpena/eink-calendar
|
@ -2,7 +2,7 @@
|
||||
[#]: via: (https://opensource.com/article/21/3/android-raspberry-pi)
|
||||
[#]: author: (Sudeshna Sur https://opensource.com/users/sudeshna-sur)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: ( RiaXu)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
@ -131,7 +131,7 @@ via: https://opensource.com/article/21/3/android-raspberry-pi
|
||||
|
||||
作者:[Sudeshna Sur][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[译者ID](https://github.com/ShuyRoy)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,167 +0,0 @@
|
||||
[#]: subject: (Learn Python dictionary values with Jupyter)
|
||||
[#]: via: (https://opensource.com/article/21/3/dictionary-values-python)
|
||||
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Learn Python dictionary values with Jupyter
|
||||
======
|
||||
Implementing data structures with dictionaries helps you access
|
||||
information more quickly.
|
||||
![Hands on a keyboard with a Python book ][1]
|
||||
|
||||
Dictionaries are the Python programming language's way of implementing data structures. A Python dictionary consists of several key-value pairs; each pair maps the key to its associated value.
|
||||
|
||||
For example, say you're a teacher who wants to match students' names to their grades. You could use a Python dictionary to map the keys (names) to their associated values (grades).
|
||||
|
||||
If you need to find a specific student's grade on an exam, you can access it from your dictionary. This lookup shortcut should save you time over parsing an entire list to find the student's grade.
|
||||
|
||||
This article shows you how to access dictionary values through each value's key. Before you begin the tutorial, make sure you have the [Anaconda package manager][2] and [Jupyter Notebook][3] installed on your machine.
|
||||
|
||||
### 1\. Open a new notebook in Jupyter
|
||||
|
||||
Begin by opening Jupyter and running it in a tab in your web browser. Then:
|
||||
|
||||
1. Go to **File** in the top-left corner.
|
||||
2. Select **New Notebook**, then **Python 3**.
|
||||
|
||||
|
||||
|
||||
![Create Jupyter notebook][4]
|
||||
|
||||
(Lauren Maffeo, [CC BY-SA 4.0][5])
|
||||
|
||||
Your new notebook starts off untitled, but you can rename it anything you'd like. I named mine **OpenSource.com Data Dictionary Tutorial**.
|
||||
|
||||
The line number you see in your new Jupyter notebook is where you will write your code. (That is, your input.)
|
||||
|
||||
On macOS, you'll hit **Shift** then **Return** to receive your output. Make sure to do this before creating new line numbers; otherwise, any additional code you write might not run.
|
||||
|
||||
### 2\. Create a key-value pair
|
||||
|
||||
Write the keys and values you wish to access in your dictionary. To start, you'll need to define what they are in the context of your dictionary:
|
||||
|
||||
|
||||
```
|
||||
empty_dictionary = {}
|
||||
grades = {
|
||||
"Kelsey": 87,
|
||||
"Finley": 92
|
||||
}
|
||||
|
||||
one_line = {"a": 1, "b": 2}
|
||||
```
|
||||
|
||||
![Code for defining key-value pairs in the dictionary][6]
|
||||
|
||||
(Lauren Maffeo, [CC BY-SA 4.0][5])
|
||||
|
||||
This allows the dictionary to associate specific keys with their respective values. Dictionaries store data by name, which allows faster lookup.
|
||||
|
||||
### 3\. Access a dictionary value by its key
|
||||
|
||||
Say you want to find a specific dictionary value; in this case, a specific student's grade. To start, hit **Insert** then **Insert Cell Below**.
|
||||
|
||||
![Inserting a new cell in Jupyter][7]
|
||||
|
||||
(Lauren Maffeo, [CC BY-SA 4.0][5])
|
||||
|
||||
In your new cell, define the keys and values in your dictionary.
|
||||
|
||||
Then, find the value you need by telling your dictionary to print that value's key. For example, look for a specific student's name—Kelsey:
|
||||
|
||||
|
||||
```
|
||||
# Access data in a dictionary
|
||||
grades = {
|
||||
"Kelsey": 87,
|
||||
"Finley": 92
|
||||
}
|
||||
|
||||
print(grades["Kelsey"])
|
||||
87
|
||||
```
|
||||
|
||||
![Code to look for a specific value][8]
|
||||
|
||||
(Lauren Maffeo, [CC BY-SA 4.0][5])
|
||||
|
||||
Once you've asked for Kelsey's grade (that is, the value you're trying to find), hit **Shift** (if you're on macOS), then **Return**.
|
||||
|
||||
You see your desired value—Kelsey's grade—as an output below your cell.
|
||||
|
||||
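Indexing with square brackets raises a `KeyError` if the name isn't in the dictionary. Beyond what this tutorial covers, Python's built-in `dict.get` returns a default instead — a handy safety net when a student might be missing:

```python
grades = {"Kelsey": 87, "Finley": 92}

print(grades["Kelsey"])       # 87
print(grades.get("Alex"))     # None: "Alex" is not a key yet
print(grades.get("Alex", 0))  # 0: fall back to a default grade
```

Use `[]` when a missing key is a bug you want to surface, and `get` when a missing key is expected.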
### 4\. Update an existing key
|
||||
|
||||
What if you realize you added the wrong grade for a student to your dictionary? You can fix it by updating your dictionary to store an additional value.
|
||||
|
||||
To start, choose which key you want to update. In this case, say you entered Finley's grade incorrectly. That is the key you'll update in this example.
|
||||
|
||||
To update Finley's grade, insert a new cell below, then create a new key-value pair. Tell your cell to print the dictionary, then hit **Shift** and **Return**:
|
||||
|
||||
|
||||
```
|
||||
grades["Finley"] = 90
|
||||
print(grades)
|
||||
|
||||
{'Kelsey': 87, 'Finley': 90}
|
||||
```
|
||||
|
||||
![Code for updating a key][9]
|
||||
|
||||
(Lauren Maffeo, [CC BY-SA 4.0][5])
|
||||
|
||||
The updated dictionary, with Finley's new grade, appears as your output.
|
||||
|
||||
### 5\. Add a new key
|
||||
|
||||
Say you get a new student's grade for an exam. You can add that student's name and grade to your dictionary by adding a new key-value pair.
|
||||
|
||||
Insert a new cell below, then add the new student's name and grade as a key-value pair. Once you're done, tell your cell to print the dictionary, then hit **Shift** and **Return**:
|
||||
|
||||
|
||||
```
|
||||
grades["Alex"] = 88
|
||||
print(grades)
|
||||
|
||||
{'Kelsey': 87, 'Finley': 90, 'Alex': 88}
|
||||
```
|
||||
|
||||
![Add a new key][10]
|
||||
|
||||
(Lauren Maffeo, [CC BY-SA 4.0][5])
|
||||
|
||||
All key-value pairs should appear as output.
|
||||
|
||||
### Using dictionaries
|
||||
|
||||
Remember that keys and values can be any data type, but it's rare for them to be [non-primitive types][11]. Also note that while dictionaries in Python 3.7 and later preserve insertion order, they aren't designed for positional access. If you need an ordered sequence of items that you index by position, it's best to create a list in Python, not a dictionary.
|
||||
|
||||
If you're thinking of using a dictionary, first confirm whether your data is structured the right way, i.e., like a phone book. If not, a list, tuple, tree, or another data structure might be the better option.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/dictionary-values-python
|
||||
|
||||
作者:[Lauren Maffeo][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/lmaffeo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd (Hands on a keyboard with a Python book )
|
||||
[2]: https://docs.anaconda.com/anaconda/
|
||||
[3]: https://opensource.com/article/18/3/getting-started-jupyter-notebooks
|
||||
[4]: https://opensource.com/sites/default/files/uploads/new-jupyter-notebook.png (Create Jupyter notebook)
|
||||
[5]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/define-keys-values.png (Code for defining key-value pairs in the dictionary)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/jupyter_insertcell.png (Inserting a new cell in Jupyter)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/lookforvalue.png (Code to look for a specific value)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/jupyter_updatekey.png (Code for updating a key)
|
||||
[10]: https://opensource.com/sites/default/files/uploads/jupyter_addnewkey.png (Add a new key)
|
||||
[11]: https://www.datacamp.com/community/tutorials/data-structures-python
|
@ -1,105 +0,0 @@
|
||||
[#]: subject: (Use gImageReader to Extract Text From Images and PDFs on Linux)
|
||||
[#]: via: (https://itsfoss.com/gimagereader-ocr/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Use gImageReader to Extract Text From Images and PDFs on Linux
|
||||
======
|
||||
|
||||
_Brief: gImageReader is a GUI tool to utilize tesseract OCR engine for extracting texts from images and PDF files in Linux._
|
||||
|
||||
[gImageReader][1] is a front-end for [Tesseract Open Source OCR Engine][2]. _Tesseract_ was originally developed at HP and then was open-sourced in 2006.
|
||||
|
||||
Basically, the OCR (Optical Character Recognition) engine lets you scan texts from a picture or a file (PDF). It can detect several languages by default and also supports scanning through Unicode characters.
|
||||
|
||||
However, Tesseract by itself is a command-line tool without a GUI. So gImageReader comes to the rescue, letting any user extract text from images and files.
|
||||
|
||||
Let me highlight a few things about it, along with my experience from the time I spent testing it.
|
||||
|
||||
### gImageReader: A Cross-Platform Front-End to Tesseract OCR
|
||||
|
||||
![][3]
|
||||
|
||||
To simplify things, gImageReader comes in handy to extract text from a PDF file or an image that contains any kind of text.
|
||||
|
||||
Whether you need it for spellcheck or translation, it should be useful for a specific group of users.
|
||||
|
||||
To sum up the features in a list, here’s what you can do with it:
|
||||
|
||||
* Add PDF documents and images from disk, scanning devices, clipboard and screenshots
|
||||
* Ability to rotate images
|
||||
* Common image controls to adjust brightness, contrast, and resolution
|
||||
* Scan images directly through the app
|
||||
* Ability to process multiple images or files in one go
|
||||
* Manual or automatic recognition area definition
|
||||
* Recognize to plain text or to [hOCR][4] documents
|
||||
* Editor to display the recognized text
|
||||
* Can spellcheck the text extracted
|
||||
* Convert/Export to PDF documents from hOCR document
|
||||
* Export extracted text as a .txt file
|
||||
  * Cross-platform (also available for Windows)
|
||||
|
||||
|
||||
|
||||
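Under the hood, these features drive the Tesseract CLI, whose basic invocation is `tesseract <image> <output-base> -l <lang>`. A small helper that builds an equivalent command (file names here are placeholders) — a sketch, not part of gImageReader itself:

```python
def tesseract_cmd(image, out_base="output", lang="eng"):
    """Build a tesseract invocation; it writes recognized text to out_base.txt."""
    return ["tesseract", image, out_base, "-l", lang]

# Pass the list to subprocess.run() to perform the OCR from a script.
print(tesseract_cmd("scan.png"))
```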
### Installing gImageReader on Linux
|
||||
|
||||
**Note**: _You need to explicitly install Tesseract language packs from your software manager to detect text in images/files._
|
||||
|
||||
![][5]
|
||||
|
||||
You can find gImageReader in the default repositories for some Linux distributions like Fedora and Debian.
|
||||
|
||||
For Ubuntu, you need to add a PPA and then install it. To do that, here’s what you need to type in the terminal:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:sandromani/gimagereader
|
||||
sudo apt update
|
||||
sudo apt install gimagereader
|
||||
```
|
||||
|
||||
You can also find it for openSUSE from its build service and [AUR][6] will be the place for Arch Linux users.
|
||||
|
||||
All the links to the repositories and the packages can be found in their [GitHub page][1].
|
||||
|
||||
[gImageReader][1]
|
||||
|
||||
### Experience with gImageReader
|
||||
|
||||
gImageReader is quite a useful tool for extracting text from images when you need it. It works great with PDF files.
|
||||
|
||||
For extracting text from a photo shot on a smartphone, the detection was close but a bit inaccurate. Recognition would likely be better with a properly scanned document.
|
||||
|
||||
So, you’ll have to try it for yourself to see how well it works for your use-case. I tried it on Linux Mint 20.1 (based on Ubuntu 20.04).
|
||||
|
||||
I did have an issue managing languages from the settings and couldn't find a quick solution. If you encounter the same issue, you'll need to troubleshoot it and explore how to fix it.
|
||||
|
||||
![][7]
|
||||
|
||||
Other than that, it worked just fine.
|
||||
|
||||
Do give it a try and let me know how it worked for you! If you know of something similar (and better), do let me know about it in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/gimagereader-ocr/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/manisandro/gImageReader
|
||||
[2]: https://tesseract-ocr.github.io/
|
||||
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gImageReader.png?resize=800%2C456&ssl=1
|
||||
[4]: https://en.wikipedia.org/wiki/HOCR
|
||||
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/tesseract-language-pack.jpg?resize=800%2C620&ssl=1
|
||||
[6]: https://itsfoss.com/aur-arch-linux/
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gImageReader-1.jpg?resize=800%2C460&ssl=1
|
@ -1,117 +0,0 @@
|
||||
[#]: subject: (Understanding file names and directories in FreeDOS)
[#]: via: (https://opensource.com/article/21/3/files-freedos)
[#]: author: (Kevin O'Brien https://opensource.com/users/ahuka)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Understanding file names and directories in FreeDOS
======

Learn how to create, edit, and name files in FreeDOS.

![Files in a folder][1]

The open source operating system [FreeDOS][2] is a tried-and-true project that helps users play retro games, update firmware, run outdated but beloved applications, and study operating system design. FreeDOS offers insights into the history of personal computing (because it implements the de facto operating system of the early '80s) but in a modern context. In this article, I'll use FreeDOS to explain how file names and extensions developed.

### Understanding file names and ASCII text

FreeDOS file names follow what is called the _8.3 convention_. This means that all FreeDOS file names have two parts that contain up to eight and three characters, respectively. The first part is often referred to as the _file name_ (which can be a little confusing because the combination of the file name and the file extension is also called a file name). This part can have anywhere from one to eight characters in it. This is followed by the _extension_, which can have from zero to three characters. These two parts are separated by a dot.

File names can use any letter of the alphabet or any numeral. Many of the other characters found on a keyboard are also allowed, but not all of them. That's because many of these other characters have been assigned a special use in FreeDOS. Some of the characters that can appear in a FreeDOS file name are:

```
~ ! @ # $ % ^ & ( ) _ - { } `
```

There are also characters in the extended [ASCII][3] set that can be used.

Characters with a special meaning in FreeDOS that, therefore, cannot be used in file names include:

```
* / + | \ = ? [ ] ; : " . < > ,
```

Also, you cannot use a space in a FreeDOS file name. The FreeDOS console [uses spaces to separate commands][4] from options and parameters.

FreeDOS is case _insensitive_, so it doesn't matter whether you use uppercase or lowercase letters. All letters are converted to uppercase, so your files end up with uppercase letters in the name, no matter what you do.
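The naming rules above are easy to capture in a short sketch. The following Python function is purely illustrative (FreeDOS itself has nothing to do with Python), and the helper name and character set are assumptions drawn from the lists in this article:

```python
# Characters this article lists as allowed in FreeDOS file names,
# besides letters and numerals.
VALID_EXTRA = set("~!@#$%^&()_-{}`")

def is_valid_83(name: str) -> bool:
    """Return True if `name` plausibly follows the FreeDOS 8.3 convention."""
    if " " in name or name.count(".") > 1:
        return False  # no spaces, and at most one dot separating the parts
    base, _, ext = name.partition(".")
    if not 1 <= len(base) <= 8 or len(ext) > 3:
        return False  # up to eight characters, then up to three
    return all(c.isalnum() or c in VALID_EXTRA for c in base + ext)

print(is_valid_83("FOO.TXT"))           # True
print(is_valid_83("LONGFILENAME.TXT"))  # False: first part exceeds 8 characters
print(is_valid_83("BAD NAME.TXT"))      # False: spaces are not allowed
```

Note that the sketch doesn't enforce uppercase, since FreeDOS converts names to uppercase on its own.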
#### File extensions

A file in FreeDOS isn't required to have an extension, but file extensions do have some uses. Certain file extensions have built-in meanings in FreeDOS, such as:

  * **EXE**: executable file
  * **COM**: command file
  * **SYS**: system file
  * **BAT**: batch file

Specific software programs use other extensions, or you can use them when you create a file. These extensions have no absolute file associations, so if you use a FreeDOS word processor, it doesn't matter what extension you use for your files. You could get creative and use extensions as part of your filing system if you want. For instance, you could name your memos using *.JAN, *.FEB, *.MAR, *.APR, and so on.
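That filing trick works because wildcards match on the extension. Here is a small Python sketch of the same idea, collecting "January" memos with a `*.JAN` glob; the directory and file names are invented for illustration:

```python
import tempfile
from pathlib import Path

# Create a throwaway directory with memo files named by month extension,
# then collect January's memos with a glob pattern.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    for name in ("BUDGET.JAN", "STAFF.JAN", "BUDGET.FEB"):
        (root / name).touch()
    january = sorted(p.name for p in root.glob("*.JAN"))

print(january)  # ['BUDGET.JAN', 'STAFF.JAN']
```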
### Editing files

FreeDOS comes with the Edit application for quick and easy text editing. It's a simple editor with a menu bar along the top of the screen for easy access to all the usual functions (such as copy, paste, save, and so on).

![Editing in FreeDOS][5]

(Kevin O'Brien, [CC BY-SA 4.0][6])

As you might expect, many other text editors are available, including the tiny but versatile [e3 editor][7]. You can find a good variety of [FreeDOS applications][8] on GitLab.

### Creating files

You can create empty files in FreeDOS using the `touch` command. This simple utility updates a file's modification time or creates a new file:

```
C:\>touch foo.txt
C:\>dir
FOO         TXT         0  01-12-2021 10:00a
```

You can also create a file directly from the FreeDOS console without using the Edit text editor. First, use the `copy` command to copy input in the console (`con` for short) into a new file object. Terminate input with **Ctrl**+**Z** followed by the **Return** or **Enter** key:

```
C:\>copy con test.txt
con => test.txt
This is a test file.
^Z
```

The **Ctrl**+**Z** character shows up in the console as `^Z`. It isn't copied to the file but serves as an End of File (EOF) delimiter. In other words, it tells FreeDOS when to stop copying. This is a neat trick for making quick notes or starting a simple document to work on later.
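For comparison, here is a rough Python sketch of the same idea: collect input lines until an end-of-input marker appears, and keep everything before it. The function name is invented, and the sketch uses a literal `"^Z"` string as the marker, whereas FreeDOS watches for the actual Ctrl+Z control character:

```python
def copy_con(lines, eof_marker="^Z"):
    """Collect lines until the EOF marker, like `copy con` in FreeDOS."""
    collected = []
    for line in lines:
        if line.strip() == eof_marker:
            break  # the marker itself is not copied to the file
        collected.append(line)
    return "".join(line + "\n" for line in collected)

text = copy_con(["This is a test file.", "^Z", "ignored after EOF"])
print(repr(text))  # 'This is a test file.\n'
```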
### Files and FreeDOS

FreeDOS is open source, free, and [easy to install][9]. Exploring how FreeDOS treats files can help you understand how computing has developed over the years, regardless of your usual operating system. Boot up FreeDOS and start exploring modern retro computing!

* * *

_Some of the information in this article was previously published in [DOS lesson 7: DOS filenames; ASCII][10] (CC BY-SA 4.0)._

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/files-freedos

作者:[Kevin O'Brien][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ahuka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://www.freedos.org/
[3]: tmp.2sISc4Tp3G#ASCII
[4]: https://opensource.com/article/21/2/set-your-path-freedos
[5]: https://opensource.com/sites/default/files/uploads/freedos_2_files-edit.jpg (Editing in FreeDOS)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/article/20/12/e3-linux
[8]: https://gitlab.com/FDOS/
[9]: https://opensource.com/article/18/4/gentle-introduction-freedos
[10]: https://www.ahuka.com/dos-lessons-for-self-study-purposes/dos-lesson-7-dos-filenames-ascii/
[#]: subject: (Linux Mint Cinnamon vs MATE vs Xfce: Which One Should You Use?)
[#]: via: (https://itsfoss.com/linux-mint-cinnamon-mate-xfce/)
[#]: author: (Dimitrios https://itsfoss.com/author/dimitrios/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Linux Mint Cinnamon vs MATE vs Xfce: Which One Should You Use?
======

Linux Mint is undoubtedly [one of the best Linux distributions for beginners][1]. This is especially true for Windows users taking their first steps into the Linux world.

Since 2006, the year Linux Mint made its first release, a selection of [tools][2] has been developed to enhance the user experience. Furthermore, Linux Mint is based on Ubuntu, so you have a large community of users to turn to for help.

I am not going to discuss how good Linux Mint is. If you have already made up your mind to [install Linux Mint][3], you have probably gotten a little confused by the [download section][4] on its website.

It gives you three options to choose from: Cinnamon, MATE, and Xfce. Confused? I'll help you with that in this article.

![][5]

If you are absolutely new to Linux and have no idea what the above things are, I recommend learning a bit about [what a desktop environment is in Linux][6]. And if you can spare a few more minutes, read this excellent explanation of [what Linux is and why there are so many Linux operating systems that look similar to each other][7].

With that information, you are ready to understand the difference between the various Linux Mint editions. If you are unsure which one to choose, this article will help you make an informed choice.
### Which Linux Mint version should you choose?

![][8]

Briefly, the available choices are the following:

  * **Cinnamon desktop:** A modern touch on a traditional desktop
  * **MATE desktop:** A traditional-looking desktop resembling the GNOME 2 era
  * **Xfce desktop:** A popular lightweight desktop environment

Let's have a look at the Mint variants one by one.

#### Linux Mint Cinnamon edition

The Cinnamon desktop is developed by the Linux Mint team, and it is clearly the flagship edition of Linux Mint.

Almost a decade back, when the GNOME desktop opted for the unconventional UI of GNOME 3, Cinnamon development started in order to keep the traditional look of the desktop by forking some components of GNOME 2.

Many Linux users like Cinnamon for its similarity to the Windows 7 interface.

![Linux Mint Cinnamon desktop][9]

##### Performance and responsiveness

The Cinnamon desktop's performance has improved over past releases, but without an SSD it can feel a bit sluggish. The last time I used the Cinnamon desktop, in version 4.4.8, RAM consumption right after boot was around 750 MB. The current version, 4.8.6, shows a huge improvement: consumption after boot is reduced by 100 MB.

To get the best user experience, consider a dual-core CPU with 4 GB of RAM as a minimum.

![Linux Mint 20 Cinnamon idle system stats][10]

##### Pros

  * Seamless switch from Windows
  * Pleasing aesthetics
  * Highly [customizable][11]

##### Cons

  * May still not be ideal if you have a system with 2 GB RAM

**Bonus Tip**: If you prefer Debian to Ubuntu, you have the option of [Linux Mint Debian Edition][12] (LMDE). The main difference between LMDE and Debian with the Cinnamon desktop is that LMDE ships the latest desktop environment in its repositories.
#### Linux Mint MATE edition

The [MATE desktop environment][13] shares a similar story, as it aims to maintain and support the GNOME 2 code base and applications. Its look and feel is very similar to GNOME 2.

In my opinion, the best implementation of the MATE desktop is by far [Ubuntu MATE][14]. In Linux Mint, you get a customized version of the MATE desktop, which is in line with Cinnamon's aesthetics rather than the traditional GNOME 2 setup.

![Screenshot of Linux Mint MATE desktop][15]

##### Performance and responsiveness

The MATE desktop has a reputation for being lightweight, and there is no doubt about that. Compared to the Cinnamon desktop, CPU usage always remains a bit lower, which can translate to better battery life on a laptop.

Although it doesn't feel as snappy as Xfce (in my opinion), it's not to an extent that compromises the user experience. RAM consumption starts under 500 MB, which is impressive for a feature-rich desktop environment.

![Linux Mint 20 MATE idle system stats][16]

##### Pros

  * Lightweight desktop without compromising on [features][17]
  * Enough [customization][18] potential

##### Cons

  * Traditional looks may give you a dated feel
#### Linux Mint Xfce edition

The Xfce project started in 1996, inspired by the [Common Desktop Environment][19] of UNIX. "XFCE" stood for "[XForms][20] Common Environment", but since it no longer uses the XForms toolkit, the name is now spelled "Xfce".

It aims to be fast, lightweight, and easy to use. Xfce is the flagship desktop of many popular Linux distributions, such as [Manjaro][21] and [MX Linux][22].

Linux Mint offers a polished Xfce desktop, but it can't match the beauty of the Cinnamon desktop, even with a dark theme.

![Linux Mint 20 Xfce desktop][23]

##### Performance and responsiveness

Xfce is the leanest desktop environment Linux Mint has to offer. Clicking the start menu, opening the settings control panel, or exploring the bottom panel, you will notice that this is a simple yet flexible desktop environment.

Although I find minimalism a positive attribute, Xfce is not eye candy; it leaves a more traditional taste. Still, for some users, a classic desktop environment is the one to go for.

At first boot, RAM usage is similar to the MATE desktop, but not quite as good. If your computer isn't equipped with an SSD, the Xfce desktop environment can revive your system.

![Linux Mint 20 Xfce idle system stats][24]

##### Pros

  * Simple to use
  * Very lightweight – suitable for older hardware
  * Rock-solid stable

##### Cons

  * Outdated look
  * May not offer as much customization as Cinnamon

#### Conclusion

Since all three desktop environments are based on the GTK toolkit, the choice is purely a matter of taste. All of them are easy on system resources and perform well on a modest system with 4 GB of RAM. Xfce and MATE can go a bit lower, supporting systems with as little as 2 GB of RAM.

Linux Mint is not the only distribution that provides multiple choices. Distros like Manjaro, Fedora, and [Ubuntu have various flavors][25] to choose from as well.

If you still cannot make up your mind, I'd say go with the default Cinnamon edition first and try to [use Linux Mint in VirtualBox][26]. See if you like the look and feel. If not, you can test the other variants in the same fashion. Once you decide on a version, you can go ahead and [install it on your main system][3].

I hope this article was able to help you. If you still have questions or suggestions on this topic, please leave a comment below.
--------------------------------------------------------------------------------

via: https://itsfoss.com/linux-mint-cinnamon-mate-xfce/

作者:[Dimitrios][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-beginners/
[2]: https://linuxmint-developer-guide.readthedocs.io/en/latest/mint-tools.html#
[3]: https://itsfoss.com/install-linux-mint/
[4]: https://linuxmint.com/download.php
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-version-options.png?resize=789%2C277&ssl=1
[6]: https://itsfoss.com/what-is-desktop-environment/
[7]: https://itsfoss.com/what-is-linux/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-variants.jpg?resize=800%2C450&ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-20.1-cinnamon.jpg?resize=800%2C500&ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-20-Cinnamon-ram-usage.png?resize=800%2C600&ssl=1
[11]: https://itsfoss.com/customize-cinnamon-desktop/
[12]: https://itsfoss.com/lmde-4-release/
[13]: https://mate-desktop.org/
[14]: https://itsfoss.com/ubuntu-mate-20-04-review/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/linux-mint-mate.jpg?resize=800%2C500&ssl=1
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-20-MATE-ram-usage.png?resize=800%2C600&ssl=1
[17]: https://mate-desktop.org/blog/2020-02-10-mate-1-24-released/
[18]: https://itsfoss.com/ubuntu-mate-customization/
[19]: https://en.wikipedia.org/wiki/Common_Desktop_Environment
[20]: https://en.wikipedia.org/wiki/XForms_(toolkit)
[21]: https://itsfoss.com/manjaro-linux-review/
[22]: https://itsfoss.com/mx-linux-19/
[23]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/linux-mint-xfce.jpg?resize=800%2C500&ssl=1
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-20-Xfce-ram-usage.png?resize=800%2C600&ssl=1
[25]: https://itsfoss.com/which-ubuntu-install/
[26]: https://itsfoss.com/install-linux-mint-in-virtualbox/
[#]: subject: (Set up network parental controls on a Raspberry Pi)
[#]: via: (https://opensource.com/article/21/3/raspberry-pi-parental-control)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Set up network parental controls on a Raspberry Pi
======

With minimal investment of time and money, you can keep your kids safe online.

![Family learning and reading together at night in a room][1]

Parents are always looking for ways to protect their kids online—from malware, banner ads, pop-ups, activity-tracking scripts, and other concerns—and to prevent them from playing games and watching YouTube when they should be doing their schoolwork. Many businesses use tools that regulate their employees' online safety and activities, but the question is how to make this happen at home.

The short answer is a tiny, inexpensive Raspberry Pi computer that enables you to set parental controls for your kids and your work at home. This article walks you through how easy it is to build your own parental-control-enabled home network with a Raspberry Pi.

### Install the hardware and software

For this project, you'll need a Raspberry Pi and a home network router. If you spend only five minutes exploring online shopping sites, you will find a lot of options. The [Raspberry Pi 4][2] and a [TP-Link router][3] are good options for beginners.

Once you have your network device and Pi, you need to install [Pi-hole][4] as a Linux container or on a supported operating system. There are several [ways to install it][5], but an easy way is to issue the following command on your Pi:

```
curl -sSL https://install.pi-hole.net | bash
```

### Configure Pi-hole as your DNS server

Next, you need to configure the DHCP settings in both your router and Pi-hole:

  1. Disable the DHCP server setting in your router
  2. Enable the DHCP server in Pi-hole

Every device is different, so there's no way for me to tell you exactly what you need to click on to adjust your settings. Generally, you access your home router through a web browser. Your router's address is sometimes printed on the bottom of the router, and it begins with either 192.168 or 10.

In your web browser, navigate to your router's address and log in with the credentials you received when you got your internet service. It's often as simple as `admin` with a numeric password (sometimes this password is also printed on the router). If you don't know the login, call your internet provider and ask for details.

In the graphical interface, look for a section within your LAN about DHCP, and deactivate the DHCP server. Your router's interface will almost certainly look different from mine, but this is an example of what I saw when setting it up. Uncheck **DHCP server**:

![Disable DHCP][6]

(Daniel Oh, [CC BY-SA 4.0][7])

Next, you _must_ activate the DHCP server on the Pi-hole. If you don't, none of your devices will be able to get online unless you manually assign IP addresses!

### Make your network family-friendly

You're all set. Now, your network devices (i.e., mobile phones, tablets, laptops, etc.) will automatically find the DHCP server on the Raspberry Pi. Each device will then be assigned a dynamic IP address to access the internet.

Note: If your router supports setting a DNS server, you can also configure the DNS clients in your router. The clients will then use the Pi-hole as their DNS server.

To set up rules for which sites and activities your kids can access, open a web browser to the Pi-hole admin page, `http://pi.hole/admin/`. On the dashboard, click on **Whitelist** to add web pages your kids are allowed to access. You can also add sites that your kids aren't allowed to access (e.g., gaming, adult, ads, shopping, etc.) to the **Blocklist**.
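To illustrate the idea behind the whitelist and blocklist, here is a toy Python sketch of a domain policy check where the whitelist always wins. This is a deliberate simplification invented for illustration — Pi-hole's real matching also supports regex and wildcard rules — and the domains and function name are made up:

```python
BLOCKLIST = {"ads.example.com", "tracker.example.net"}
WHITELIST = {"school.example.org"}

def is_allowed(domain: str) -> bool:
    """Toy policy check: the whitelist overrides the blocklist."""
    domain = domain.lower().rstrip(".")  # normalize case and trailing dot
    if domain in WHITELIST:
        return True
    return domain not in BLOCKLIST

print(is_allowed("school.example.org"))  # True: explicitly whitelisted
print(is_allowed("ads.example.com"))     # False: on the blocklist
```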
![Pi-hole admin dashboard][8]

(Daniel Oh, [CC BY-SA 4.0][7])

### What's next?

Now that you've set up your Raspberry Pi for parental control, you can keep your kids safer online while giving them access to approved entertainment options. This can also decrease your home internet usage by reducing how much your family streams. For more advanced usage, see Pi-hole's [documentation][9] and [blogs][10].

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/raspberry-pi-parental-control

作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/family_learning_kids_night_reading.png?itok=6K7sJVb1 (Family learning and reading together at night in a room)
[2]: https://www.raspberrypi.org/products/
[3]: https://www.amazon.com/s?k=tp-link+router&crid=3QRLN3XRWHFTC&sprefix=TP-Link%2Caps%2C186&ref=nb_sb_ss_ts-doa-p_3_7
[4]: https://pi-hole.net/
[5]: https://github.com/pi-hole/pi-hole/#one-step-automated-install
[6]: https://opensource.com/sites/default/files/uploads/disabledhcp.jpg (Disable DHCP)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/sites/default/files/uploads/blocklist.png (Pi-hole admin dashboard)
[9]: https://docs.pi-hole.net/
[10]: https://pi-hole.net/blog/#page-content
[#]: subject: (Build a router with mobile connectivity using Raspberry Pi)
[#]: via: (https://opensource.com/article/21/3/router-raspberry-pi)
[#]: author: (Lukas Janėnas https://opensource.com/users/lukasjan)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Build a router with mobile connectivity using Raspberry Pi
======

Use OpenWRT to get more control over your network's router.

![Mesh networking connected dots][1]

The Raspberry Pi is a small, single-board computer that, despite being the size of a credit card, is capable of doing a lot of things. In reality, this little computer can be almost anything you want it to be. You just need to open up your imagination.

Raspberry Pi enthusiasts have built many different projects, from simple programs to complex automation projects and solutions like weather stations or even smart-home devices. This article will show you how to turn your Raspberry Pi into a router with LTE mobile connectivity using the OpenWRT project.

### About OpenWRT and LTE

[OpenWRT][2] is an open source project that uses Linux to target embedded devices. It's been around for more than 15 years and has a large and active community.

There are many ways to use OpenWRT, but its main purpose is in routers. It provides a fully writable filesystem with package management, and because it is open source, you can see and modify the code and contribute to the ecosystem. If you would like to have more control over your router, this is the system you want to use.

Long-term evolution (LTE) is a standard for wireless broadband communication based on the GSM/EDGE and UMTS/HSPA technologies. The LTE modem I'm using is a USB device that can add 3G or 4G (LTE) cellular connectivity to a Raspberry Pi computer.

![Teltonika TRM240 modem][3]

(Lukas Janenas, [CC BY-SA 4.0][4])

### Prerequisites

For this project, you will need:

  * A Raspberry Pi with a power cable
  * A computer, preferably running Linux
  * A microSD card with at least 16GB
  * An Ethernet cable
  * An LTE modem (I am using a Teltonika [TRM240][5])
  * A SIM card for mobile connectivity

### Install OpenWRT

To get started, download the latest [Raspberry Pi-compatible release of OpenWRT][6]. On the OpenWRT site, you will see four images: two with **ext4** and two with **squashfs** filesystems. I use the **ext4** filesystem. You can download either the **factory** or **sysupgrade** image; both work great.

![OpenWRT image files][7]

(Lukas Janenas, [CC BY-SA 4.0][4])

Once you download the image, you need to extract it and install it on the SD card by [following these instructions][8]. It can take some time to install the firmware, so be patient. Once it's finished, there will be two partitions on your microSD card. One is used for the bootloader and the other one for the OpenWRT system.

### Boot up the system

To boot up your new system, insert the microSD card into the Raspberry Pi, connect the Pi to your router (or a switch) with an Ethernet cable, and power it on.

If you're experienced with the Raspberry Pi, you may be used to accessing it through a terminal over SSH, or just by connecting it to a monitor and keyboard. OpenWRT works a little differently. You interact with this software through a web browser, so you must be able to access your Pi over your network.

By default, the Raspberry Pi uses this IP address: 192.168.1.1. The computer you use to configure the Pi must be on the same subnet as the Pi. If your network doesn't use 192.168.1.x addresses, or if you're unsure, open **Settings** in GNOME, navigate to network settings, select **Manual**, and enter the following IP address and Netmask:

  * **IP address:** 192.168.1.15
  * **Netmask:** 255.255.255.0
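A quick way to sanity-check that two addresses share a subnet is Python's standard `ipaddress` module. This is just an illustrative check run on your computer, using the example addresses from the steps above:

```python
import ipaddress

# A /24 network corresponds to the 255.255.255.0 netmask above.
network = ipaddress.ip_network("192.168.1.0/24")

pi = ipaddress.ip_address("192.168.1.1")         # OpenWRT's default address
computer = ipaddress.ip_address("192.168.1.15")  # the manual address above

print(pi in network and computer in network)           # True: same subnet
print(ipaddress.ip_address("192.168.2.5") in network)  # False: different subnet
```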
![IP addresses][9]

(Lukas Janenas, [CC BY-SA 4.0][4])

Open a web browser on your computer and navigate to 192.168.1.1. This opens an authentication page so you can log in to your Pi.

![OpenWRT login page][10]

(Lukas Janenas, [CC BY-SA 4.0][4])

No password is required yet, so just click the **Login** button to continue.

### Configure network connection

The Raspberry Pi has only one Ethernet port, while normal routers have several: one for the WAN (wide area network) and the others for the LAN (local area network). You have two options:

  1. Use your Ethernet port for network connectivity
  2. Use WiFi for network connectivity

**To use Ethernet:**

Should you decide to use Ethernet, navigate to **Network → Interfaces**. On the configuration page, press the blue **Edit** button that is associated with the **LAN** interface.

![LAN interface][11]

(Lukas Janenas, [CC BY-SA 4.0][4])

A pop-up window should appear. In that window, enter an IP address that matches the subnet of the router to which you will connect the Raspberry Pi. Change the Netmask, if needed, and enter the IP address of the router the Raspberry Pi will connect to.

![Enter IP in the LAN interface][12]

(Lukas Janenas, [CC BY-SA 4.0][4])

Save this configuration and connect your Pi to the router over Ethernet. You can now reach the Raspberry Pi at this new IP address.

Be sure to set a password for your OpenWRT router before you put it into production use!

**To use WiFi:**

If you would like to connect the Raspberry Pi to the internet through WiFi, navigate to **Network → Wireless**. In the **Wireless** menu, press the blue **Scan** button to locate your home network.

![Scan the network][13]

(Lukas Janenas, [CC BY-SA 4.0][4])

In the pop-up window, find your WiFi network and connect to it. Don't forget to **Save and Apply** the configuration.

In the **Network → Interfaces** section, you should see a new interface.

![New interface][14]

(Lukas Janenas, [CC BY-SA 4.0][4])

Be sure to set a password for your OpenWRT router before you put it into production use!
### Install the necessary packages

By default, the router doesn't come with many packages. OpenWRT offers a package manager with a selection of packages you need to install. Navigate to **System → Software** and update your package manager by pressing the button labeled **Update lists…**.

![Updating packages][15]

(Lukas Janenas, [CC BY-SA 4.0][4])

You will see a lot of packages; you need to install these:

  * usb-modeswitch
  * kmod-mii
  * kmod-usb-net
  * kmod-usb-wdm
  * kmod-usb-serial
  * kmod-usb-serial-option
  * kmod-usb-serial-wwan (if it's not installed)

Additionally, [download this modemmanager package][16] and install it by pressing the button labeled **Upload Package…** in the pop-up window. Reboot the Raspberry Pi for the packages to take effect.

### Set up the mobile interface

After all those packages are installed, you can set up the mobile interface. Before connecting the modem to the Raspberry Pi, read the [modem instructions][17] to set it up. Then connect your mobile modem to the Raspberry Pi and wait a little while until the modem boots up.

Navigate to **Network → Interfaces**. At the bottom of the page, press the **Add new interface…** button. In the pop-up window, give your interface a name (e.g., **mobile**) and select **ModemManager** from the drop-down list.

![Add a new mobile interface][18]

(Lukas Janenas, [CC BY-SA 4.0][4])

Press the button labeled **Create Interface**. You should see a new pop-up window. This is the main window for configuring the interface. In this window, select your modem and enter any other required information, such as an Access Point Name (APN) or a PIN.

![Configuring the interface][19]

(Lukas Janenas, [CC BY-SA 4.0][4])

**Note:** If no modem devices appear in the list, try rebooting your Raspberry Pi or installing the kmod-usb-net-qmi-wwan package.

When you are done configuring your interface, press **Save** and then **Save and Apply**. Give the system some time to apply the changes. If everything went well, you should see something like this.

![Configured interface][20]

(Lukas Janenas, [CC BY-SA 4.0][4])

If you want to check your internet connection over this interface, you can use SSH to connect to your Raspberry Pi's shell. In the terminal, enter:

```
ssh root@192.168.1.1
```

The default IP address is 192.168.1.1; if you changed it, use that IP address to connect. Once connected, execute this command in the terminal:

```
ping -I ppp0 google.com
```

If everything is working, you should receive pings back from Google's servers.

![Terminal interface][21]

(Lukas Janenas, [CC BY-SA 4.0][4])

**ppp0** is the default name for the mobile interface you created. You can check your interfaces using **ifconfig**, which shows active interfaces only.
### Set up the firewall
|
||||
|
||||
To get the mobile interface working, you need to configure a firewall for the **mobile** interface and the **lan** interface to direct traffic to the correct interface.
|
||||
|
||||
Navigate to **Network → Firewall**. At the bottom of the page, you should see a section called **Zones**.
|
||||
|
||||
![Firewall zones][22]
|
||||
|
||||
(Lukas Janenas, [CC BY-SA 4.0][4])
|
||||
|
||||
The simplest way to configure the firewall is to adjust the **wan** zone. Press the **Edit** button and in the **Covered networks** option, select your **mobile** interface, and **Save and Apply** your configuration. If you don't want to use WiFi to connect to the internet, you can remove **wwan** from the **Covered networks** or disable the WiFi connection.
|
||||
|
||||
![Firewall zone settings][23]
|
||||
|
||||
(Lukas Janenas, [CC BY-SA 4.0][4])
|
||||
|
||||
If you want to set up individual zones for each interface, just create a new zone and assign the necessary interfaces. For example, you may want to have a mobile zone that covers the mobile interface and is used to forward LAN interface traffic through it. Press the **Add** button, then **Name** your zone, check the **Masquerading** check box, select **Covered Networks**, and choose which zones can forward their traffic.
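Behind the scenes, LuCI writes these zone settings to `/etc/config/firewall` in UCI syntax. The fragment below is a hedged sketch of what the resulting mobile zone and its forwarding rule might look like (the zone name `mobile` matches the example above; exact option names can vary slightly between OpenWRT releases):

```
config zone
	option name 'mobile'
	option input 'REJECT'
	option output 'ACCEPT'
	option forward 'REJECT'
	option masq '1'
	list network 'mobile'

config forwarding
	option src 'lan'
	option dest 'mobile'
```

The `masq` option enables NAT masquerading on the zone, and the `forwarding` section lets clients in the **lan** zone reach the internet through the mobile link.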
Then **Save and Apply** the changes. Now you have a new zone.

### Set up an Access Point

The last step is to configure a network with an Access Point for your devices to connect to the internet. To set up an Access Point, navigate to **Network → Wireless**. You will see a WiFi device interface, a disabled Access Point named **OpenWRT**, and a connection that is used to connect to the internet over WiFi (if you didn't disable or delete it earlier). On the disabled interface, press the **Edit** button, then **Enable** the interface.

![Enabling wireless network][25]

(Lukas Janenas, [CC BY-SA 4.0][4])

If you want, you can change the network name by editing the **ESSID** option. You can also select which network it will be associated with. By default, it will be associated with the **lan** interface.

![Configuring the interface][26]

(Lukas Janenas, [CC BY-SA 4.0][4])

To add a password for this interface, select the **Wireless Security** tab. In the tab, select **WPA2-PSK** encryption and enter the password for the interface in the **Key** option field.

![Setting a password][27]

(Lukas Janenas, [CC BY-SA 4.0][4])
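For reference, these Access Point settings land in `/etc/config/wireless`. A sketch of the relevant section, with a placeholder SSID and key (option names may differ slightly by OpenWRT version):

```
config wifi-iface 'default_radio0'
	option device 'radio0'
	option network 'lan'
	option mode 'ap'
	option ssid 'MyAccessPoint'
	option encryption 'psk2'
	option key 'YourStrongPassword'
```

Here `encryption 'psk2'` corresponds to the **WPA2-PSK** choice in the LuCI drop-down, and `network 'lan'` is the association described above.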
Then **Save and Apply** the configuration. If the configuration was set correctly, when scanning available Access Points with your device, you should see a new Access Point with the name you assigned.

### Additional packages

If you want, you can download additional packages for your router through the web interface. Just go to **System → Software** and install the package you want from the list or download it from the internet and upload it. If you don't see any packages in the list, press the **Update lists…** button.

You can also add other repositories that have packages that are good to use with OpenWRT. Packages and their web interfaces are installed separately. The packages that start with the prefix **luci-** are web interface packages.

![Packages with luci- prefix][28]

(Lukas Janenas, [CC BY-SA 4.0][4])

### Give it a try

This is what my Raspberry Pi router setup looks like.

![Raspberry Pi router][29]

(Lukas Janenas, [CC BY-SA 4.0][4])

It's not difficult to build a router from a Raspberry Pi. The downside is that a Raspberry Pi has only one Ethernet port. You can add more ports with a USB-to-Ethernet adapter. Don't forget to configure the new port in the router's web interface.

OpenWRT supports a large number of mobile modems, and you can configure the mobile interface for any of them with ModemManager, which is a universal tool to manage modems.

Have you used your Raspberry Pi as a router? Let us know how it went in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/router-raspberry-pi

作者:[Lukas Janėnas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/lukasjan
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3 (Mesh networking connected dots)
[2]: https://openwrt.org/
[3]: https://opensource.com/sites/default/files/uploads/lte_modem.png (Teltonika TRM240 modem)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://teltonika-networks.com/product/trm240/
[6]: https://downloads.openwrt.org/releases/19.07.7/targets/brcm2708/bcm2710/
[7]: https://opensource.com/sites/default/files/uploads/imagefiles.png (OpenWRT image files)
[8]: https://opensource.com/article/17/3/how-write-sd-cards-raspberry-pi
[9]: https://opensource.com/sites/default/files/uploads/ipaddresses.png (IP addresses)
[10]: https://opensource.com/sites/default/files/uploads/openwrt-login.png (OpenWRT login page)
[11]: https://opensource.com/sites/default/files/uploads/lan-interface.png (LAN interface)
[12]: https://opensource.com/sites/default/files/uploads/lan-interface-ip.png (Enter IP in the LAN interface)
[13]: https://opensource.com/sites/default/files/uploads/scannetwork.png (Scan the network)
[14]: https://opensource.com/sites/default/files/uploads/newinterface.png (New interface)
[15]: https://opensource.com/sites/default/files/uploads/updatesoftwarelist.png (Updating packages)
[16]: https://downloads.openwrt.org/releases/packages-21.02/aarch64_cortex-a53/luci/luci-proto-modemmanager_git-21.007.43644-ab7e45c_all.ipk
[17]: https://wiki.teltonika-networks.com/view/TRM240_SIM_Card
[18]: https://opensource.com/sites/default/files/uploads/addnewinterface.png (Add a new mobile interface)
[19]: https://opensource.com/sites/default/files/uploads/configureinterface.png (Configuring the interface)
[20]: https://opensource.com/sites/default/files/uploads/configuredinterface.png (Configured interface)
[21]: https://opensource.com/sites/default/files/uploads/terminal.png (Terminal interface)
[22]: https://opensource.com/sites/default/files/uploads/firewallzones.png (Firewall zones)
[23]: https://opensource.com/sites/default/files/uploads/firewallzonesettings.png (Firewall zone settings)
[24]: https://opensource.com/sites/default/files/uploads/firewallzonepriv.png (Firewall zone settings)
[25]: https://opensource.com/sites/default/files/uploads/enablewirelessnetwork.png (Enabling wireless network)
[26]: https://opensource.com/sites/default/files/uploads/interfaceconfig.png (Configuring the interface)
[27]: https://opensource.com/sites/default/files/uploads/interfacepassword.png (Setting a password)
[28]: https://opensource.com/sites/default/files/uploads/luci-packages.png (Packages with luci- prefix)
[29]: https://opensource.com/sites/default/files/uploads/raspberrypirouter.jpg (Raspberry Pi router)
[#]: subject: (Visualize multi-threaded Python programs with an open source tool)
[#]: via: (https://opensource.com/article/21/3/python-viztracer)
[#]: author: (Tian Gao https://opensource.com/users/gaogaotiantian)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Visualize multi-threaded Python programs with an open source tool
======
VizTracer traces concurrent Python programs to help with logging, debugging, and profiling.
![Colorful sound wave graph][1]

Concurrency is an essential part of modern programming, as we have multiple cores and many tasks that need to cooperate. However, it's harder to understand concurrent programs when they are not running sequentially. It's not as easy for engineers to identify bugs and performance issues in these programs as it is in a single-thread, single-task program.

With Python, you have multiple options for concurrency. The most common ones are probably multi-threading with the threading module, multiprocessing with the subprocess and multiprocessing modules, and the more recent async syntax with the asyncio module. Before [VizTracer][2], there was a lack of tools to analyze programs using these techniques.
VizTracer is a tool for tracing and visualizing Python programs, which is helpful for logging, debugging, and profiling. Even though it works well for single-thread, single-task programs, its utility in concurrent programs is what makes it unique.

### Try a simple task

Start with a simple practice task: Figure out whether the integers in an array are prime numbers and return a Boolean array. Here is a simple solution:

```
def is_prime(n):
    for i in range(2, n):
        if n % i == 0:
            return False
    return True

def get_prime_arr(arr):
    return [is_prime(elem) for elem in arr]
```

Try to run it normally, in a single thread, with VizTracer:

```
import random

if __name__ == "__main__":
    num_arr = [random.randint(100, 10000) for _ in range(6000)]
    get_prime_arr(num_arr)
```

```
viztracer my_program.py
```

![Running code in a single thread][3]

(Tian Gao, [CC BY-SA 4.0][4])

The call-stack report indicates it took about 140ms, with most of the time spent in `get_prime_arr`.

![call-stack report][5]

(Tian Gao, [CC BY-SA 4.0][4])

It's just doing the `is_prime` function over and over again on the elements in the array.

This is what you would expect, and it's not that interesting (if you know VizTracer).
### Try a multi-thread program

Try doing it with a multi-thread program:

```
import random
from threading import Thread

if __name__ == "__main__":
    num_arr = [random.randint(100, 10000) for i in range(2000)]
    thread1 = Thread(target=get_prime_arr, args=(num_arr,))
    thread2 = Thread(target=get_prime_arr, args=(num_arr,))
    thread3 = Thread(target=get_prime_arr, args=(num_arr,))

    thread1.start()
    thread2.start()
    thread3.start()

    thread1.join()
    thread2.join()
    thread3.join()
```

To match the single-thread program's workload, this uses a 2,000-element array for three threads, simulating a situation where three threads are sharing the task.

![Multi-thread program][6]

(Tian Gao, [CC BY-SA 4.0][4])

As you would expect if you are familiar with Python's Global Interpreter Lock (GIL), it won't get any faster. It took a little bit more than 140ms due to the overhead. However, you can observe the concurrency of multiple threads:

![Concurrency of multiple threads][7]

(Tian Gao, [CC BY-SA 4.0][4])

When one thread was working (executing multiple `is_prime` functions), the other one was frozen (one `is_prime` function); later, they switched. This is due to the GIL, and it is the reason Python does not have true multi-threading. It can achieve concurrency but not parallelism.
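To make the point concrete, here is a small, self-contained sketch (independent of VizTracer) that splits the same CPU-bound work across three threads. Because of the GIL, the threads take turns executing bytecode; they produce the same answer as a sequential pass, but you should not expect a parallel speedup:

```python
import random
from threading import Thread

def is_prime(n):
    for i in range(2, n):
        if n % i == 0:
            return False
    return True

def get_prime_arr(arr):
    return [is_prime(elem) for elem in arr]

random.seed(0)
num_arr = [random.randint(100, 10000) for _ in range(500)]

# Sequential run: one pass over the whole array.
sequential = get_prime_arr(num_arr)

# Threaded run: three threads each process one slice of the array.
# The GIL lets only one thread execute Python bytecode at a time,
# so this is concurrent but not parallel.
chunks = [num_arr[0:167], num_arr[167:334], num_arr[334:500]]
results = [None, None, None]

def worker(idx, chunk):
    results[idx] = get_prime_arr(chunk)

threads = [Thread(target=worker, args=(i, c)) for i, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()

threaded = results[0] + results[1] + results[2]
print(threaded == sequential)  # → True: same answer, no speedup expected
```

The slice boundaries and seed here are arbitrary; the point is only that the threaded and sequential results agree while the work remains effectively serialized.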
### Try it with multiprocessing

To achieve parallelism, the way to go is the multiprocessing library. Here is another version with multiprocessing:

```
import random
from multiprocessing import Process

if __name__ == "__main__":
    num_arr = [random.randint(100, 10000) for _ in range(2000)]

    p1 = Process(target=get_prime_arr, args=(num_arr,))
    p2 = Process(target=get_prime_arr, args=(num_arr,))
    p3 = Process(target=get_prime_arr, args=(num_arr,))

    p1.start()
    p2.start()
    p3.start()

    p1.join()
    p2.join()
    p3.join()
```

To run it with VizTracer, you need an extra argument:

```
viztracer --log_multiprocess my_program.py
```

![Running with extra argument][8]

(Tian Gao, [CC BY-SA 4.0][4])

The whole program finished in a little more than 50ms, with the actual task finishing before the 50ms mark. The program's speed roughly tripled.

To compare it with the multi-thread version, here is the multiprocess version:

![Multi-process version][9]

(Tian Gao, [CC BY-SA 4.0][4])

Without the GIL, multiple processes can achieve parallelism, which means multiple `is_prime` functions can execute in parallel.

However, Python's multi-threading is not useless. For programs that mix computation-intensive and I/O-intensive work, threads still help with the I/O-bound parts. You can fake an I/O-bound task with sleep:

```
import time

def io_task():
    time.sleep(0.01)
```

Try it in a single-thread, single-task program:

```
if __name__ == "__main__":
    for _ in range(3):
        io_task()
```

![I/O-bound single-thread, single-task program][10]

(Tian Gao, [CC BY-SA 4.0][4])

The full program took about 30ms; nothing special.

Now use multi-thread:

```
from threading import Thread

if __name__ == "__main__":
    thread1 = Thread(target=io_task)
    thread2 = Thread(target=io_task)
    thread3 = Thread(target=io_task)

    thread1.start()
    thread2.start()
    thread3.start()

    thread1.join()
    thread2.join()
    thread3.join()
```

![I/O-bound multi-thread program][11]

(Tian Gao, [CC BY-SA 4.0][4])

The program took 10ms, and it's clear how the three threads worked concurrently and improved the overall performance.
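The same effect is easy to reproduce outside VizTracer. A minimal timing sketch (wall-clock numbers are illustrative and will vary by machine) shows three sleeping threads finishing in roughly the time of one sleep rather than three:

```python
import time
from threading import Thread

def io_task():
    time.sleep(0.01)  # stand-in for waiting on a network or disk call

start = time.perf_counter()
threads = [Thread(target=io_task) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The three 10ms sleeps overlap, so the total is close to 10ms, not 30ms.
print(f"elapsed: {elapsed * 1000:.1f}ms")
```

Running the same three calls sequentially would take at least 30ms, which is the gap VizTracer makes visible on its timeline.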
### Try it with asyncio

Python has also introduced another interesting feature called async programming. You can make an async version of this task:

```
import asyncio

async def io_task():
    await asyncio.sleep(0.01)

async def main():
    t1 = asyncio.create_task(io_task())
    t2 = asyncio.create_task(io_task())
    t3 = asyncio.create_task(io_task())

    await t1
    await t2
    await t3

if __name__ == "__main__":
    asyncio.run(main())
```

As asyncio is literally a single-thread scheduler with tasks, you can use VizTracer directly on it:

![VizTracer with asyncio][12]

(Tian Gao, [CC BY-SA 4.0][4])

It still took 10ms, but most of the functions displayed are the underlying structure, which is probably not what users are interested in. To solve this, you can use `--log_async` to separate the real task:

```
viztracer --log_async my_program.py
```

![Using --log_async to separate tasks][13]

(Tian Gao, [CC BY-SA 4.0][4])

Now the user tasks are much clearer. For most of the time, no tasks are running (because the only thing it does is sleep). Here's the interesting part:

![Graph of task creation and execution][14]

(Tian Gao, [CC BY-SA 4.0][4])

This shows when the tasks were created and executed. Task-1 was the `main()` co-routine and created the other tasks. Tasks 2, 3, and 4 executed `io_task` and `sleep`, then waited for the wake-up. As the graph shows, there is no overlap between tasks because it's a single-thread program, and VizTracer visualized it this way to make it more understandable.

To make it more interesting, add a `time.sleep` call in the task to block the async loop:

```
import time

async def io_task():
    time.sleep(0.01)
    await asyncio.sleep(0.01)
```

![time.sleep call][15]

(Tian Gao, [CC BY-SA 4.0][4])

The program took much longer (40ms), and the tasks filled the blanks in the async scheduler.

This feature is very helpful for diagnosing behavior and performance issues in async programs.

### See what's happening with VizTracer

With VizTracer, you can see what's going on with your program on a timeline, rather than imagining it from complicated logs. This helps you understand your concurrent programs better.

VizTracer is open source, released under the Apache 2.0 license, and supports all common operating systems (Linux, macOS, and Windows). You can learn more about its features and access its source code in [VizTracer's GitHub repository][16].

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/python-viztracer

作者:[Tian Gao][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/gaogaotiantian
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/colorful_sound_wave.png?itok=jlUJG0bM (Colorful sound wave graph)
[2]: https://readthedocs.org/projects/viztracer/
[3]: https://opensource.com/sites/default/files/uploads/viztracer_singlethreadtask.png (Running code in a single thread)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/viztracer_callstackreport.png (call-stack report)
[6]: https://opensource.com/sites/default/files/uploads/viztracer_multithread.png (Multi-thread program)
[7]: https://opensource.com/sites/default/files/uploads/viztracer_concurrency.png (Concurrency of multiple threads)
[8]: https://opensource.com/sites/default/files/uploads/viztracer_multithreadrun.png (Running with extra argument)
[9]: https://opensource.com/sites/default/files/uploads/viztracer_comparewithmultiprocess.png (Multi-process version)
[10]: https://opensource.com/sites/default/files/uploads/io-bound_singlethread.png (I/O-bound single-thread, single-task program)
[11]: https://opensource.com/sites/default/files/uploads/io-bound_multithread.png (I/O-bound multi-thread program)
[12]: https://opensource.com/sites/default/files/uploads/viztracer_asyncio.png (VizTracer with asyncio)
[13]: https://opensource.com/sites/default/files/uploads/log_async.png (Using --log_async to separate tasks)
[14]: https://opensource.com/sites/default/files/uploads/taskcreation.png (Graph of task creation and execution)
[15]: https://opensource.com/sites/default/files/uploads/time.sleep_call.png (time.sleep call)
[16]: https://github.com/gaogaotiantian/viztracer
sources/tech/20210313 Build an open source theremin.md

[#]: subject: (Build an open source theremin)
[#]: via: (https://opensource.com/article/21/3/open-source-theremin)
[#]: author: (Gordon Haff https://opensource.com/users/ghaff)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Build an open source theremin
======
Create your own electronic musical instrument with Open.Theremin V3.
![radio communication signals][1]

Even if you haven't heard of a [theremin][2], you're probably familiar with the [eerie electronic sound][3] it makes from watching TV shows and movies like the 1951 science fiction classic _The Day the Earth Stood Still_. Theremins have also appeared in popular music, although often in the form of a theremin variant. For example, the "theremin" in the Beach Boys' "Good Vibrations" was actually an [electro-theremin][4], an instrument played with a slider invented by trombonist Paul Tanner and amateur inventor Bob Whitsell and designed to be easier to play.

Soviet physicist Leon Theremin invented the theremin in 1920. It was one of the first electronic instruments, and Theremin introduced it to the world through his concerts in Europe and the US in the late 1920s. He patented his invention in 1928 and sold the rights to RCA. However, in the wake of the 1929 stock market crash, RCA's expensive product flopped. Theremin returned to the Soviet Union under somewhat mysterious circumstances in the late 1930s. The instrument remained relatively unknown until Robert Moog, of synthesizer fame, became interested in theremins as a high school student in the 1950s and started writing articles and selling kits. RA Moog, the company he founded, remains the best-known maker of commercial theremins today.

### What does this have to do with open source?

In 2008, Swiss engineer Urs Gaudenz was at a festival put on by the Swiss Mechatronic Art Society, which describes itself as a collective of engineers, hackers, scientists, and artists who collaborate on creative uses of technology. The festival included a theremin exhibit, which introduced Gaudenz to the instrument.

At a subsequent event focused on bringing together music and technology, one of the organizers told Gaudenz that there were a lot of people who wanted to build theremins from kits. Some kits existed, but they often didn't work or play well. Gaudenz set off to build an open theremin that could be played in the same manner and use the same operating principles as a traditional theremin but with a modern electronic board and microcontroller.

The [Open.Theremin][5] project (currently in version 3) is completely open source, including the microcontroller code and the [hardware files][6], which include the schematics and printed circuit board (PCB) layout. The hardware and the instructions are under GPL v3, while the [control code][7] is under LGPL v3. Therefore, the project can be assembled completely from scratch. In practice, most people will probably work from the kit available from Gaudi.ch, so that's what I'll describe in this article. There's also a completely assembled version available.

### How does a theremin work?

Before getting into the details of the Open.Theremin V3 and its assembly and use, I'll talk at a high level about how traditional theremins work.

Theremins are highly unusual in that they're played without touching the instrument directly or indirectly. They're controlled by varying your distance and hand shape from [two antennas][8]: a horizontal volume loop antenna, typically on the left, and a vertical pitch antenna, typically on the right. Some theremins have a pitch antenna only—Robert Plant of Led Zeppelin played such a variant—and some, including the Open.Theremin, have additional knob controls. But hand movements associated with the volume and pitch antennas are the primary means of controlling the instrument.

I've been referring to the "antennas" because that's how everyone else refers to them. But they're not antennas in the usual sense of picking up radio waves. Each antenna acts as a plate in a capacitor. This brings us to the basic theremin operating principle: the heterodyne oscillator that mixes signals from a fixed and a variable oscillator.

Such a circuit can be implemented in various ways. The Open.Theremin uses a combination of an oscillating crystal for the fixed frequency and an LC (inductance-capacitance) oscillator tuned to a similar but different frequency for the variable oscillator. There's one circuit for volume and a second one (operating at a slightly different frequency to avoid interference) for pitch, as this functional block diagram shows.

![Theremin block diagram][9]

(Gaudi Labs, [GPL v3][10])
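The heterodyne principle is easy to sketch numerically. Using illustrative component values (not the actual Open.Theremin parts), the LC oscillator resonates at f = 1/(2π√(LC)), and the audible pitch is the difference (beat) between the variable and fixed oscillator frequencies:

```python
import math

def lc_frequency(inductance_h, capacitance_f):
    """Resonant frequency of an LC oscillator: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only -- not the actual Open.Theremin components.
L = 100e-6          # 100 uH inductor
C_base = 50e-12     # ~50 pF total circuit capacitance with the hand far away
f_fixed = lc_frequency(L, C_base)  # assume the fixed oscillator sits here too

# Moving a hand toward the pitch antenna adds a tiny extra capacitance,
# lowering the variable oscillator's frequency slightly.
C_hand = 0.05e-12   # ~0.05 pF added by the hand
f_variable = lc_frequency(L, C_base + C_hand)

# Mixing the two signals and low-pass filtering leaves the difference
# frequency, which lands in the audible range even though both
# oscillators run in the MHz range.
beat = abs(f_fixed - f_variable)
print(f"fixed: {f_fixed/1e6:.3f} MHz, beat: {beat:.0f} Hz")
```

A fraction-of-a-picofarad change against megahertz oscillators yields a shift of roughly a kilohertz, which is why such tiny hand movements are enough to sweep the instrument's whole pitch range.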
You play the theremin by moving or changing the shape of your hand relative to each antenna. This changes the capacitance of the LC circuit. These changes are, in turn, processed and turned into sound.
|
||||
|
||||
### Assembling the materials
|
||||
|
||||
But enough theory. For this tutorial, I'll assume you're using an Open.Theremin V3 kit. In that case, here's what you need:
|
||||
|
||||
* [Open.Theremin V3 kit][11]
|
||||
* Arduino Uno with mounting plate
|
||||
* Soldering iron and related materials (you'll want fairly fine solder; I used 0.02")
|
||||
* USB printer-type cable
|
||||
* Wire for grounding
|
||||
* Replacement antenna mounting hardware: Socket head M3-10 bolt, washer, wing nut (x2, optional)
|
||||
* Speaker or headphones (3.5mm jack)
|
||||
* Tripod with standard ¼" screw
|
||||
|
||||
|
||||
|
||||
The Open.Theremin is a shield for an Arduino, which is to say it's a modular circuit board that piggybacks on the Arduino microcontroller to extend its capabilities. In this case, the Arduino handles most of the important tasks for the theremin board, such as linearizing and filtering the audio and generating the instrument's sound using stored waveforms. The waveforms can be changed in the Arduino software. The Arduino's capabilities are an important part of enabling a wholly digital theremin with good sound quality without analog parts.
|
||||
|
||||
The Arduino is also open source. It grew out of a 2003 project at the Interaction Design Institute Ivrea in Ivrea, Italy.
|
||||
|
||||
### Building the hardware
|
||||
|
||||
There are [good instructions][12] for building the theremin hardware on the Gaudi.ch site, so I won't take you through every step. I'll focus on the project at a high level and share some knowledge that you may find helpful.
|
||||
|
||||
The PCB that comes with the kit already has the integrated circuits and discrete electronics surface-mounted on the board's backside, so you don't need to worry about those (other than not damaging them). What you do need to solder to the board are the pins to attach the shield to the Arduino, four potentiometers (pots), and a couple of surface-mount LEDs and a surface-mount button on the front side.
|
||||
|
||||
Before going further, I should note that this is probably an intermediate-level project. There's not a lot of soldering, but some of it is fairly detailed and in close proximity to other electronics. The surface-mount LEDs and button on the front side aren't hard to solder but do take a little technique (described in the instructions on the Gaudi.ch site). Just deliberately work your way through the soldering in the suggested order. You'll want good lighting and maybe a magnifier. Carefully check that no pins are shorting other pins.
|
||||
|
||||
Here is what the front of the hardware looks like:
|
||||
|
||||
![Open.Theremin front][13]
|
||||
|
||||
(Gordon Haff, [CC-BY-SA 4.0][14])
|
||||
|
||||
This shows the backside; the pins are the interface to the Arduino.
|
||||
|
||||
![Open.Theremin back][15]
|
||||
|
||||
(Gordon Haff, [CC-BY-SA 4.0][14])
|
||||
|
||||
I'll return to the hardware after setting up the Arduino and its software.
|
||||
|
||||
### Loading the software
|
||||
|
||||
The Arduino part of this project is straightforward if you've done anything with an Arduino and, really, even if you haven't.
|
||||
|
||||
* Install the [Arduino Desktop IDE][16]
|
||||
* Download the [Open.Theremin control software][7] and load it into the IDE
|
||||
* Attach the Arduino to your computer with a USB cable
|
||||
* Upload the software to the Arduino
|
||||
|
||||
|
||||
|
||||
It's possible to modify the Arduino's software, such as changing the stored waveforms, but I will not get into that in this article.
|
||||
|
||||
Power off the Arduino and carefully attach the shield. Make sure you line them up properly. (If you're uncertain, look at the Open.Theremin's [schematics][17], which show you which Arduino sockets aren't in use.)
|
||||
|
||||
Reconnect the USB. The red LED on the shield should come on. If it doesn't, something is wrong.
|
||||
|
||||
Use the Arduino Desktop IDE one more time to check out the calibration process, which, hopefully, will offer more confirmation that things are going according to plan. Here are the [detailed instructions][18].
|
||||
|
||||
What you're doing here is monitoring the calibration process. This isn't a real calibration because you haven't attached the antennas, and you'll have to recalibrate whenever you move the theremin. But this should give you an indication of whether the theremin is basically working.
|
||||
|
||||
Once you press the function button for about a second, the yellow LED should start to blink slowly, and the output from the Arduino's serial monitor should look something like the image below, which shows typical Open.Theremin calibration output. The main things that indicate a problem are frequency-tuning ranges that are either just zeros or that have a range that doesn't bound the set frequency.
|
||||
|
||||
![Open.Theremin calibration output][19]
|
||||
|
||||
(Gordon Haff, [CC-BY-SA 4.0][14])
|
||||
|
||||
### Completing the hardware

To finish the hardware, it's easiest to separate the Arduino from the shield. You'll probably want to screw some sort of mounting plate to the back of the Arduino to hold the self-adhesive tripod mount; attaching the tripod mount works much better on a plate than on the Arduino board itself. I also found that the mount's adhesive didn't hold very well, so I used stronger glue.

Next, attach the antennas. The loop antenna goes on the left. The pitch antenna goes on the right (the shorter leg connects to the shield). Attach the supplied banana plugs to the antennas. (Mating the two parts takes enough force that you'll want to do it before attaching the banana plugs to the board.)

I found it extremely frustrating to tighten the kit's hardware enough to keep the antennas from rotating. In fact, the volume antenna swung around and ended up grounding itself on some of the conductive printing on the PCB, which led to a bit of debugging. In any case, the hardware listed in the parts list at the top of this article made it much easier for me to attach the antennas.

Attach the tripod mount to a tripod or stand of some sort, connect the USB to a power source, plug the Open.Theremin into a speaker or headset, and you're ready to go.

Well, almost. You need to ground it. Plugging the theremin into a stereo may ground it, as may the USB connection powering it. If the person playing the instrument has a strong coupling to ground, that can be sufficient. But if these circumstances don't apply, you need to ground the theremin by running a wire from the ground pad on the board to something like a water pipe. You can also connect the ground pad to the player with an antistatic wrist strap or equivalent wire. This gives the player strong capacitive coupling directly with the theremin, [which works][20] as an alternative to grounding it.

At this point, recalibrate the theremin. You probably don't need to fiddle with the knobs at the start. Volume does what you'd expect. Pitch changes the "zero beat" point, i.e., where the theremin transitions from high pitched near the pitch antenna to silence near your body. Register is similar to what's called sensitivity on other theremins. Timbre selects among the different waveforms programmed into the Arduino.

There are many theremin videos online. It is _not_ an easy instrument to play well, but it is certainly fun to play with.

### The value of open

The open nature of the Open.Theremin project has enabled collaboration that would have been more difficult otherwise.

For example, Gaudenz received a great deal of feedback from people who play the theremin well, including [Swiss theremin player Coralie Ehinger][21]. Gaudenz says he doesn't really play the theremin himself, but the help he got from players enabled him to make the changes that turned Open.Theremin into a playable musical instrument.

Others contributed directly to the instrument design, especially the Arduino software code. Gaudenz credits [Thierry Frenkel][22] with improved volume control code. [Vincent Dhamelincourt][23] came up with the MIDI implementation. Gaudenz used circuit designs that others had created and shared, like the designs [for the oscillators][24] that are a central part of the Open.Theremin board.

Open.Theremin is a great example of how open source is valuable not only for the somewhat abstract reasons people often mention but also in concrete ways, such as improved collaboration and more effective design.
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/open-source-theremin

作者:[Gordon Haff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ghaff
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/sound-radio-noise-communication.png?itok=KMNn9QrZ (radio communication signals)
[2]: https://en.wikipedia.org/wiki/Theremin
[3]: https://www.youtube.com/watch?v=2tnJEqXSs24
[4]: https://en.wikipedia.org/wiki/Electro-Theremin
[5]: http://www.gaudi.ch/OpenTheremin/
[6]: https://github.com/GaudiLabs/OpenTheremin_Shield
[7]: https://github.com/GaudiLabs/OpenTheremin_V3
[8]: https://en.wikipedia.org/wiki/Theremin#/media/File:Etherwave_Theremin_Kit.jpg
[9]: https://opensource.com/sites/default/files/uploads/opentheremin_blockdiagram.png (Theremin block diagram)
[10]: https://www.gnu.org/licenses/gpl-3.0.en.html
[11]: https://gaudishop.ch/index.php/product-category/opentheremin/
[12]: https://www.gaudi.ch/OpenTheremin/images/stories/OpenTheremin/Instructions_OpenThereminV3.pdf
[13]: https://opensource.com/sites/default/files/uploads/opentheremin_front.jpg (Open.Theremin front)
[14]: https://creativecommons.org/licenses/by-sa/4.0/
[15]: https://opensource.com/sites/default/files/uploads/opentheremin_back.jpg (Open.Theremin back)
[16]: https://www.arduino.cc/en/software
[17]: https://www.gaudi.ch/OpenTheremin/index.php/opentheremin-v3/schematics
[18]: http://www.gaudi.ch/OpenTheremin/index.php/40-general/197-calibration-diagnostics
[19]: https://opensource.com/sites/default/files/uploads/opentheremin_calibration.png (Open.Theremin calibration output)
[20]: http://www.thereminworld.com/Forums/T/30525/grounding-and-alternatives-yes-a-repeat-performance--
[21]: https://youtu.be/8bxz01kN7Sw
[22]: https://theremin.tf/en/category/projects/open_theremin-projects/
[23]: https://www.gaudi.ch/OpenTheremin/index.php/opentheremin-v3/midi-implementation
[24]: http://www.gaudi.ch/OpenTheremin/index.php/home/sound-and-oscillators
sources/tech/20210313 My review of the Raspberry Pi 400.md

[#]: subject: (My review of the Raspberry Pi 400)
[#]: via: (https://opensource.com/article/21/3/raspberry-pi-400-review)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

My review of the Raspberry Pi 400
======

Raspberry Pi 400's support for videoconferencing is a benefit for homeschoolers seeking inexpensive computers.

![Raspberries with pi symbol overlay][1]

The [Raspberry Pi 400][2] promises to be a boon to the homeschool market. In addition to providing an easy-to-assemble workstation that comes loaded with free software, the Pi 400 also serves as a surprisingly effective videoconferencing platform. I ordered a Pi 400 from CanaKit late last year and was eager to explore this capability.

### Easy setup

After I unboxed my Pi 400, which came in a lovely package, setup was quick and easy.

![Raspberry Pi 400 box][3]

(Don Watkins, [CC BY-SA 4.0][4])

The Pi 400 reminds me of the old Commodore 64: the keyboard and the computer share a single case.

![Raspberry Pi 400 keyboard][5]

(Don Watkins, [CC BY-SA 4.0][4])

The matching keyboard and mouse make this little unit both aesthetically and ergonomically appealing.

Unlike earlier versions of the Raspberry Pi, there are not many parts to assemble. I connected the mouse, power supply, and micro HDMI cable to the back of the unit.

The ports on the back of the keyboard are where things get interesting.

![Raspberry Pi 400 ports][6]

(Don Watkins, [CC BY-SA 4.0][4])

From left to right, the ports are:

  * 40-pin GPIO
  * MicroSD slot: a microSD card serves as the main drive, and the unit ships with a card in the slot, ready for startup
  * Two micro HDMI ports
  * USB-C port for power
  * Two USB 3.0 ports and one USB 2.0 port for the mouse
  * Gigabit Ethernet port

The CPU is a Broadcom 64-bit quad-core ARMv8 running at 1.8GHz, clocked even faster than the Raspberry Pi 4's processor.

My unit came with 4GB RAM and a stock 16GB microSD card with Raspberry Pi OS installed and ready to boot up for the first time.

### Evaluating the software and user experience

The Raspberry Pi Foundation continually improves its software. Raspberry Pi OS has various wizards to make setup easier, including ones for keyboard layout, WiFi settings, and so on.

The software included on the microSD card was the August 2020 Raspberry Pi OS release. After initial startup and setup, I connected a Logitech C270 webcam (which I regularly use with my other Linux computers) to one of the USB 3.0 ports.

The operating system recognized the Logitech webcam, but I could not get its microphone to work with [Jitsi][7]. I solved this problem by updating to the latest [Raspberry Pi OS][8] release with Linux kernel version 5.4. This OS version includes many important features that I love, like an updated Chromium browser and PulseAudio, which solved my webcam audio woes. I can use open source videoconferencing sites, like Jitsi, and common proprietary ones, like Google Hangouts, for video calls, but Zoom did not work at all.

### Learning computing with the Pi

The icing on the cake is the Official Raspberry Pi Beginner's Guide, a 245-page book introducing you to your new computer. Packed with informative tutorials, this book hearkens back to the days when technology _provided documentation_! For the curious mind, this book is a vitally important key to the Pi, which is at its best when it serves as a gateway to open source computing.

And after you become enchanted with Linux and all that it offers by using the Pi, you'll have months of exploration ahead, thanks to Opensource.com's [many Raspberry Pi articles][9].

I paid US$135 for my Raspberry Pi 400 because I added an optional inline power switch and an extra 32GB microSD card. Without those additional components, the unit is US$100. It's a steal either way and sure to provide years of fun, fast, and educational computing.
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/raspberry-pi-400-review

作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2 (Raspberries with pi symbol overlay)
[2]: https://opensource.com/article/20/11/raspberry-pi-400
[3]: https://opensource.com/sites/default/files/uploads/pi400box.jpg (Raspberry Pi 400 box)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/pi400-keyboard.jpg (Raspberry Pi 400 keyboard)
[6]: https://opensource.com/sites/default/files/uploads/pi400-ports.jpg (Raspberry Pi 400 ports)
[7]: https://opensource.com/article/20/5/open-source-video-conferencing
[8]: https://www.raspberrypi.org/software/
[9]: https://opensource.com/tags/raspberry-pi

[#]: subject: (12 Raspberry Pi projects to try this year)
[#]: via: (https://opensource.com/articles/21/3/raspberry-pi-projects)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

12 Raspberry Pi projects to try this year
======

There are plenty of reasons to use your Raspberry Pi at home, work, and everywhere in between. Celebrate Pi Day by choosing one of these projects.

![Raspberry Pi 4 board][1]

Remember when the Raspberry Pi was just a really tiny hobbyist Linux computer? Well, to the surprise of no one, the Pi's power and scope have escalated quickly. Have you got a new Raspberry Pi, or an old one lying around needing something to do? If so, we have plenty of new project ideas, ranging from home automation to cross-platform coding, and even some new hardware to check out.

### Raspberry Pi at home

Although I started using the Raspberry Pi mostly for electronics projects, any spare Pi not attached to a breadboard quickly became a home server. As I decommission old units, I always look for a new reason to keep them working on something useful.

  * While it's fun to make LEDs blink with a Pi, after you've finished a few basic electronics projects, it might be time to give your Pi some serious responsibilities. Predictably, it turns out that a homemade smart thermostat is substantially smarter than those you buy off the shelf. Try out ThermOS and this tutorial to [build your own multizone thermostat with a Raspberry Pi][2].

  * Whether you have a child trying to focus on remote schoolwork or an adult trying to stay on task during work hours, being able to "turn off" parts of the internet can be an invaluable feature for your home network. [The Pi-hole project][3] grants you this ability by turning your Pi into your local DNS server, which allows you to block or re-route specific sites. There's a sizable community around Pi-hole, so there are existing lists of commonly blocked sites and several front ends to help you interact with Pi-hole right from your Android phone.

  * Some families have a complex schedule. Kids have school and afterschool activities, adults have important events to attend, anniversaries and birthdays to remember, appointments to keep, and so on. You can keep track of everything using your mobile phone, but this is the future! Shouldn't wall calendars be interactive by now?

    For me, nothing is more futuristic than paper that changes its ink. Of course, we have e-ink now, and the Pi can use an e-ink display as its screen. [Build a family calendar][4] with a Pi and an e-ink display for one of the lowest-powered yet most futuristic (or magical, if you prefer) calendaring systems possible.

  * There's something about the Raspberry Pi's minimal design and lack of a case that inspires you to want to build something with it. After you've built yourself a thermostat and a calendar, why not [replace your home router with a Raspberry Pi][5]? With the OpenWRT distribution, you can repurpose your Pi as a router, and with the right hardware, you can even add mobile connectivity.

### Monitoring your world with the Pi

For modern technology to be truly interactive, it has to have an awareness of its environment. For instance, a display that brightens or dims based on ambient light isn't possible without useful light-sensor data. Similarly, the actual _environment_ is really important to us humans, so it helps to have technology that can monitor it for us.

  * Gathering data from sensors is one of the foundations you need to understand before embarking on a home automation or Internet of Things project. The Pi can do serious computing tasks, but it's got to get its data from something. Sensors provide a Pi with data about the environment. [Learn more about the fine art of gathering data over sensors][6] so you'll be ready to monitor the physical world with your Pi.

  * Once you're gathering data, you need a way to process it. The open source monitoring tool Prometheus is famous for its ability to represent complex data inputs, so it's an ideal candidate to be your IoT (Internet of Things) aggregator. Get started now, and in no time you'll be monitoring, measuring, and crunching data with [Prometheus on a Pi][7].

  * While a Pi is inexpensive and small enough to be given a single task, it's still a surprisingly powerful computer. Whether you've got one Pi monitoring a dozen other Pi units on your IoT, or just one Pi tracking the temperature of your greenhouse, sometimes it's nice to be able to check in on the Pi itself to find out what its workload is like or where specific tasks might be optimized.

    Grafana is a great platform for monitoring servers, including a Raspberry Pi. [Prometheus and Grafana][8] work together to monitor all aspects of your hardware, providing a friendly dashboard so you can check in on performance and reliability at a glance.

  * You can download mobile apps to help you scan your home for WiFi signal strength, or you can [build your own on a Raspberry Pi using Go][9]. The latter sounds a lot more fun than the former, and because you're writing it yourself, there's a lot more customization you can do on a Pi-based solution.

### The Pi at work

I've run file shares and development servers on Pi units at work, and I've seen them at former workplaces doing all kinds of odd jobs. (I remember one that got hooked up to an espresso machine to count how many cups of coffee my department consumed each day, not for accounting purposes but for bragging rights.) Ask your IT department before bringing your Pi to work, of course, but look around and see what odd job a credit-card-sized computer might be able to do for you.

  * You could host a website on a Raspberry Pi from the very beginning of the Pi. But as the Pi has developed, it has gained more RAM and better processing power, so [a dynamic website with SQLite or Postgres and Python][10] is an entirely reasonable prospect.

  * Printers are infamously frustrating. Wouldn't it be nice to program [your very own print UI][11] using the amazing cross-platform framework TotalCross and a Pi? The less you have to struggle through screens of poorly designed and excessive options, the better. If you design it yourself, you can provide exactly the options your department needs, leaving the rest out of sight and out of mind.

  * Containers are the latest trend in computing, but before containers, there were FreeBSD jails. Jails are a great solution for running high-risk applications safely, but they can be complex to set up and maintain. However, if you install FreeBSD on your Pi, run [Bastille for jail management][12], and mix in the liberal use of jail templates, you'll find yourself using jails with the same ease you use containers on Linux.

  * The "problem" with having so many tech devices around your desk is that your attention tends to get split between screens. If you'd rather be able to relax and just stare at a single screen, then you might look into the Scrcpy project, a screen-copying application that [lets you access the screen of your mobile device on your Linux desktop or Pi][13]. I've tested Scrcpy on a Pi 3 and a Pi 4, and the performance has surprised me each time. I use Scrcpy often, but especially when I'm setting up an exciting new edge computing node on my Pi cluster, building my smart thermostat, my mobile router, or whatever else.

### Get a Pi

To be fair, not everyone has a Pi. If you haven't gotten hold of one yet, you might [take a look at the Pi 400][14], an ultra-portable Pi-in-a-keyboard computer. Evocative of the Commodore 64, this unique form factor is designed to make it easy for you to plug your keyboard (and the Pi inside of it) into the closest monitor and get started computing. It's fast, easy, convenient, and almost _painfully_ retro. If you don't own a Pi yet, this may well be the one to get.

What Pi projects are you working on for Pi Day? Tell us in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/articles/21/3/raspberry-pi-projects

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-4_lead.jpg?itok=2bkk43om (Raspberry Pi 4 board)
[2]: https://opensource.com/article/21/3/thermostat-raspberry-pi
[3]: https://opensource.com/article/21/3/raspberry-pi-parental-control
[4]: https://opensource.com/article/21/3/family-calendar-raspberry-pi
[5]: https://opensource.com/article/21/3/router-raspberry-pi
[6]: https://opensource.com/article/21/3/sensor-data-raspberry-pi
[7]: https://opensource.com/article/21/3/iot-measure-raspberry-pi
[8]: https://opensource.com/article/21/3/raspberry-pi-grafana-cloud
[9]: https://opensource.com/article/21/3/troubleshoot-wifi-go-raspberry-pi
[10]: https://opensource.com/article/21/3/web-hosting-raspberry-pi
[11]: https://opensource.com/article/21/3/raspberry-pi-totalcross
[12]: https://opensource.com/article/21/3/bastille-raspberry-pi
[13]: https://opensource.com/article/21/3/android-raspberry-pi
[14]: https://opensource.com/article/21/3/raspberry-pi-400-review

[#]: subject: (Learn how file input and output works in C)
[#]: via: (https://opensource.com/article/21/3/file-io-c)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: (wyxplus)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Learn how file input and output works in C
======

Understanding I/O can help you do things faster.

![4 manilla folders, yellow, green, purple, blue][1]

If you want to learn input and output in C, start by looking at the `stdio.h` include file. As you might guess from the name, that file defines all the standard ("std") input and output ("io") functions.

The first `stdio.h` function that most people learn is the `printf` function, which prints formatted output, or the `puts` function, which prints a simple string. Those are great functions for printing information to the user, but if you want to do more than that, you'll need to explore other functions.

You can learn about some of these functions and methods by writing a replica of a common Linux command. The `cp` command copies one file to another. If you look at the `cp` man page, you'll see that `cp` supports a broad set of command-line parameters and options. But in the simplest case, `cp` supports copying one file to another:

```
cp infile outfile
```

You can write your own version of this `cp` command in C by using only a few basic functions to _read_ and _write_ files.

### Reading and writing one character at a time

You can easily do input and output using the `fgetc` and `fputc` functions. These read and write data one character at a time. The usage is defined in `stdio.h` and is quite straightforward: `fgetc` reads (gets) a single character from a file, and `fputc` puts a single character into a file.

```
int [fgetc][2](FILE *stream);
int [fputc][3](int c, FILE *stream);
```

Writing the `cp` command requires accessing files. In C, you open a file using the `fopen` function, which takes two arguments: the _name_ of the file and the _mode_ you want to use. The mode is usually `r` to read from a file or `w` to write to a file. The mode supports other options too, but for this tutorial, just focus on reading and writing.

Copying one file to another then becomes a matter of opening the source and destination files, _reading one character at a time_ from the first file, and _writing that character_ to the second file. The `fgetc` function returns either the single character read from the input file or the _end of file_ (`EOF`) marker when the file is done. Once you've read `EOF`, you've finished copying, and you can close both files. That code looks like this:

```
do {
    ch = [fgetc][2](infile);
    if (ch != EOF) {
        [fputc][3](ch, outfile);
    }
} while (ch != EOF);
```

You can write your own `cp` program with this loop to read and write one character at a time by using the `fgetc` and `fputc` functions. The `cp.c` source code looks like this:
```
#include <stdio.h>

int
main(int argc, char **argv)
{
    FILE *infile;
    FILE *outfile;
    int ch;

    /* parse the command line */

    /* usage: cp infile outfile */

    if (argc != 3) {
        [fprintf][4](stderr, "Incorrect usage\n");
        [fprintf][4](stderr, "Usage: cp infile outfile\n");
        return 1;
    }

    /* open the input file */

    infile = [fopen][5](argv[1], "r");
    if (infile == NULL) {
        [fprintf][4](stderr, "Cannot open file for reading: %s\n", argv[1]);
        return 2;
    }

    /* open the output file */

    outfile = [fopen][5](argv[2], "w");
    if (outfile == NULL) {
        [fprintf][4](stderr, "Cannot open file for writing: %s\n", argv[2]);
        [fclose][6](infile);
        return 3;
    }

    /* copy one file to the other */

    /* use fgetc and fputc */

    do {
        ch = [fgetc][2](infile);
        if (ch != EOF) {
            [fputc][3](ch, outfile);
        }
    } while (ch != EOF);

    /* done */

    [fclose][6](infile);
    [fclose][6](outfile);

    return 0;
}
```

And you can compile that `cp.c` file into a full executable using the GNU Compiler Collection (GCC):

```
$ gcc -Wall -o cp cp.c
```

The `-o cp` option tells the compiler to save the compiled program into the `cp` program file. The `-Wall` option tells the compiler to turn on all warnings. If you don't see any warnings, that means everything worked correctly.

### Reading and writing blocks of data

Programming your own `cp` command by reading and writing data one character at a time does the job, but it's not very fast. You might not notice when copying "everyday" files like documents and text files, but you'll really notice the difference when copying large files or when copying files over a network. Working on one character at a time incurs significant overhead.

A better way to write this `cp` command is by reading a chunk of the input into memory (called a _buffer_), then writing that collection of data to the second file. This is much faster because the program can read more of the data at one time, which requires fewer "reads" from the file.

You can read a file into a variable by using the `fread` function. This function takes several arguments: the array or memory buffer to read data into (`ptr`), the size of the smallest thing you want to read (`size`), how many of those things you want to read (`nmemb`), and the file to read from (`stream`):

```
size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);
```

The different options provide quite a bit of flexibility for more advanced file input and output, such as reading and writing files with a certain data structure.

And you can write the buffer to another file using the `fwrite` function. This uses a similar set of options to the `fread` function: the array or memory buffer to read data from, the size of the smallest thing you need to write, how many of those things you need to write, and the file to write to.

```
size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);
```

In the case where the program reads a file into a buffer, then writes that buffer to another file, the array (`ptr`) can be an array of a fixed size. For example, you can use a `char` array called `buffer` that is 200 characters long.

With that assumption, you need to change the loop in your `cp` program to _read data from a file into a buffer_, then _write that buffer to another file_:

```
while (![feof][7](infile)) {
    buffer_length = [fread][8](buffer, sizeof(char), 200, infile);
    [fwrite][9](buffer, sizeof(char), buffer_length, outfile);
}
```

Here's the full source code to your updated `cp` program, which now uses a buffer to read and write data:
```
#include <stdio.h>

int
main(int argc, char **argv)
{
    FILE *infile;
    FILE *outfile;
    char buffer[200];
    size_t buffer_length;

    /* parse the command line */

    /* usage: cp infile outfile */

    if (argc != 3) {
        [fprintf][4](stderr, "Incorrect usage\n");
        [fprintf][4](stderr, "Usage: cp infile outfile\n");
        return 1;
    }

    /* open the input file */

    infile = [fopen][5](argv[1], "r");
    if (infile == NULL) {
        [fprintf][4](stderr, "Cannot open file for reading: %s\n", argv[1]);
        return 2;
    }

    /* open the output file */

    outfile = [fopen][5](argv[2], "w");
    if (outfile == NULL) {
        [fprintf][4](stderr, "Cannot open file for writing: %s\n", argv[2]);
        [fclose][6](infile);
        return 3;
    }

    /* copy one file to the other */

    /* use fread and fwrite */

    while (![feof][7](infile)) {
        buffer_length = [fread][8](buffer, sizeof(char), 200, infile);
        [fwrite][9](buffer, sizeof(char), buffer_length, outfile);
    }

    /* done */

    [fclose][6](infile);
    [fclose][6](outfile);

    return 0;
}
```

Since you want to compare this program to the other program, save this source code as `cp2.c`. You can compile the updated program using GCC:

```
$ gcc -Wall -o cp2 cp2.c
```

As before, the `-o cp2` option tells the compiler to save the compiled program into the `cp2` program file. The `-Wall` option tells the compiler to turn on all warnings. If you don't see any warnings, that means everything worked correctly.

### Yes, it really is faster

Reading and writing data using buffers is the better way to write this version of the `cp` program. Because it reads chunks of a file into memory at once, the program doesn't need to read data as often. You might not notice a difference between the two methods on smaller files, but you'll really see the difference if you need to copy something much larger or when copying data on slower media, like over a network connection.

I ran a runtime comparison using the Linux `time` command. This command runs another program, then tells you how long that program took to complete. For my test, I wanted to see the difference in time, so I copied a 628MB CD-ROM image file I had on my system.

I first copied the image file using the standard Linux `cp` command to see how long that takes. Running the Linux `cp` command first also primed Linux's built-in file cache, so caching effects wouldn't give either of my own programs a false performance boost. The test with Linux `cp` took much less than one second to run:
```
$ time cp FD13LIVE.iso tmpfile

real    0m0.040s
user    0m0.001s
sys     0m0.003s
```

Copying the same file using my own version of the `cp` command took significantly longer. Reading and writing one character at a time took almost five seconds to copy the file:
```
$ time ./cp FD13LIVE.iso tmpfile

real    0m4.823s
user    0m4.100s
sys     0m0.571s
```

Reading data from an input file into a buffer and then writing that buffer to an output file is much faster. Copying the file using this method took less than a second:
```
$ time ./cp2 FD13LIVE.iso tmpfile

real    0m0.944s
user    0m0.224s
sys     0m0.608s
```

My demonstration `cp` program used a buffer that was 200 characters long. I'm sure the program would run much faster if it read more of the file into memory at once. But even with a small 200-character buffer, you can already see the huge difference in performance.
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/file-io-c

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/file_system.jpg?itok=pzCrX1Kc (4 manilla folders, yellow, green, purple, blue)
[2]: http://www.opengroup.org/onlinepubs/009695399/functions/fgetc.html
[3]: http://www.opengroup.org/onlinepubs/009695399/functions/fputc.html
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/fclose.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/feof.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/fread.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/fwrite.html
[#]: subject: (Get started with edge computing by programming embedded systems)
[#]: via: (https://opensource.com/article/21/3/rtos-embedded-development)
[#]: author: (Alan Smithee https://opensource.com/users/alansmithee)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Get started with edge computing by programming embedded systems
======
The AT device package for controlling wireless modems is one of RTOS's most popular extensions.

![Looking at a map][1]

RTOS is an open source [operating system for embedded devices][2] developed by RT-Thread. It provides a standardized, friendly foundation for developers to program a variety of devices and includes a large number of useful libraries and toolkits to make the process easier.
Like Linux, RTOS uses a modular approach, which makes it easy to extend. Packages enable developers to use RTOS for any device they want to target. At over 62,000 downloads (at the time of this writing, at least), one of RTOS's most popular extensions is the AT device package, which includes porting files and sample code for different AT devices (i.e., modems).
### About AT commands

AT commands were originally a protocol for controlling old dial-up modems. As modem technology moved on to higher bandwidths, it remained useful to have a light and efficient protocol for device control, and major mobile phone manufacturers jointly developed a set of AT commands to control the GSM module on mobile phones.

Today, the AT protocol is still common in networked communication, and many devices, including WiFi, Bluetooth, and 4G modules, accept AT commands.
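To make the protocol's shape concrete, here is a minimal sketch (the class and method names are hypothetical helpers for illustration, not an RT-Thread or modem-vendor API): a command is a plain text line terminated by carriage return and line feed, and the device replies with a body followed by a result code such as `OK` or `ERROR`:

```java
// Hypothetical helpers illustrating the shape of an AT exchange.
public class AtCommand {
    // AT commands are plain text terminated by CR+LF.
    public static String frame(String command) {
        return command + "\r\n";
    }

    // A device's reply ends with a final result code, typically
    // "OK" on success or "ERROR" on failure.
    public static boolean isOk(String response) {
        return response.trim().endsWith("OK");
    }

    public static void main(String[] args) {
        String request = frame("AT+GMR");  // e.g., ask for the firmware version
        String reply = "AT version:1.1.0.0\r\nOK\r\n";  // a typical reply shape
        System.out.println(request.trim() + " -> "
                + (isOk(reply) ? "device answered OK" : "error"));
    }
}
```

The real work in an AT driver is sending these lines over a serial port and parsing the replies, which is exactly what the at_device package's porting files handle for each supported module.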
If you're creating purpose-built appliances for edge computing input, monitoring, or the Internet of Things (IoT), some of the AT devices supported by RTOS that you may encounter include the ESP8266, ESP32, M26, MC20, RW007, MW31, SIM800C, W60X, SIM76XX, A9/A9G, BC26, AIR720, ME3616, M6315, BC28, and EC200X.

RT-Thread contains the Socket Abstraction Layer (SAL) component, which implements the abstraction of various network protocols and interfaces and provides a standard set of [BSD socket][3] APIs to the upper level. The SAL then takes over the AT socket interface so that developers just need to consider the network interface provided by the network application layer.

This package implements the AT socket on devices (including the ones above), allowing communications through standard socket interfaces in the form of AT commands. The [RT-Thread programming guide][4] includes descriptions of the specific functions.

The at_device package is distributed under an LGPLv2.1 license, and it's easy to obtain by using the [RT-Thread Env tool][5]. This tool includes a configurator and a package manager, which configure the kernel and component functions and can be used to tailor the components and manage online packages. This enables developers to build systems as if they were assembling building blocks.

### Get the at_device package

To use AT devices with RTOS, you must enable the AT component library and AT socket functionality. This requires:

* RT_Thread 4.0.2+
* RT_Thread AT component 1.3.0+
* RT_Thread SAL component
* RT-Thread netdev component
The AT device package has been updated over multiple versions. Different versions require different configuration options, so they must match the corresponding system versions. Most of the currently available AT device package versions are:

* V1.2.0: For RT-Thread versions less than V3.1.3, AT component version equals V1.0.0
* V1.3.0: For RT-Thread versions less than V3.1.3, AT component version equals V1.1.0
* V1.4.0: For RT-Thread versions less than V3.1.3 or equal to V4.0.0, AT component version equals V1.2.0
* V1.5.0: For RT-Thread versions less than V3.1.3 or equal to V4.0.0, AT component version equals V1.2.0
* V1.6.0: For RT-Thread versions equal to V3.1.3 or V4.0.1, AT component version equals V1.2.0
* V2.0.0/V2.0.1: For RT-Thread versions higher than V4.0.1 or higher than 3.1.3, AT component version equals V1.3.0
* Latest version: For RT-Thread versions higher than V4.0.1 or higher than 3.1.3, AT component version equals V1.3.0

Getting the right version is mostly an automatic process done in menuconfig, which offers the best version of the at_device package based on your current system environment.

As mentioned, different versions require different configuration options. For instance, version 1.x supports enabling one AT device at a time:

```
RT-Thread online packages  --->
    IoT - internet of things  --->
        -*- AT DEVICE: RT-Thread AT component porting or samples for different device
        [ ]   Enable at device init by thread
              AT socket device modules (Not selected, please select)  --->
              Version (V1.6.0)  --->
```
The option to enable AT device init by thread dictates whether the configuration creates a separate thread to initialize the device network.

Version 2.x supports enabling multiple AT devices at the same time:

```
RT-Thread online packages  --->
    IoT - internet of things  --->
        -*- AT DEVICE: RT-Thread AT component porting or samples for different device
        [*]   Quectel M26/MC20  --->
        [*]     Enable initialize by thread
        [*]     Enable sample
        (-1)    Power pin
        (-1)    Power status pin
        (uart3) AT client device name
        (512)   The maximum length of receive line buffer
        [ ]   Quectel EC20  --->
        [ ]   Espressif ESP32  --->
        [*]   Espressif ESP8266  --->
        [*]     Enable initialize by thread
        [*]     Enable sample
        (realthread) WIFI ssid
        (12345678)   WIFI password
        (uart2)      AT client device name
        (512)        The maximum length of receive line buffer
        [ ]   Realthread RW007  --->
        [ ]   SIMCom SIM800C  --->
        [ ]   SIMCom SIM76XX  --->
        [ ]   Notion MW31  --->
        [ ]   WinnerMicro W60X  --->
        [ ]   AiThink A9/A9G  --->
        [ ]   Quectel BC26  --->
        [ ]   Luat air720  --->
        [ ]   GOSUNCN ME3616  --->
        [ ]   ChinaMobile M6315  --->
        [ ]   Quectel BC28  --->
        [ ]   Quectel ec200x  --->
              Version (latest)  --->
```
This version includes many other options, including one to enable sample code, which might be particularly useful to new developers or any developer using an unfamiliar device.

You can also choose which pin you want to use to supply power to your component, a pin to indicate the power state, the name of the serial device the sample device uses, and the maximum length of the data the sample device receives. On applicable devices, you can also set the SSID name and password.

In short, there is no shortage of control options:

* The V2.X.X version supports enabling multiple AT devices simultaneously, and the enabled device information can be viewed with the `ifconfig` command in the [finsh shell][6].
* The V2.X.X version requires the device to register before it's used; the registration can be done in the samples directory file or customized in the application layer.
* Pin options such as **Power pin** and **Power status pin** are configured according to the device's hardware connection. They can be configured as `-1` if the hardware power-on function is not used.
* One AT device should correspond to one serial port name, and the **AT client device name** for each device should be different.

### AT components configuration options

When the AT device package is selected and device support is enabled, client functionality for the AT component is selected by default. That means more options—this time for the AT component:
```
RT-Thread Components  --->
    Network  --->
        AT commands  --->
            [ ] Enable debug log output
            [ ] Enable AT commands server
            -*- Enable AT commands client
            (1)   The maximum number of supported clients
            -*- Enable BSD Socket API support by AT commnads
            [*] Enable CLI(Command-Line Interface) for AT commands
            [ ] Enable print RAW format AT command communication data
            (128) The maximum length of AT Commonds buffer
```

The configuration options related to the AT device package are:

* **The maximum number of supported clients**: Selecting multiple devices in the AT device package requires this option to be configured with the corresponding value.
* **Enable BSD Socket API support by AT commands**: This option is selected by default when you select the AT device package.
* **The maximum length of AT Commands buffer**: The maximum length of the data the AT commands can send.

### Anything is possible

When you start programming embedded systems, you quickly realize that you can create anything you can imagine. RTOS aims to help you get there, and its packages offer a head start. Interconnected devices are the expectation now. IoT technology on the [edge][7] must be able to communicate across various protocols, and the AT protocol is the key.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/rtos-embedded-development

作者:[Alan Smithee][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alansmithee
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map)
[2]: https://opensource.com/article/20/6/open-source-rtos
[3]: https://en.wikipedia.org/wiki/Berkeley_sockets
[4]: https://github.com/RT-Thread/rtthread-manual-doc/blob/master/at/at.md
[5]: https://www.rt-thread.io/download.html?download=Env
[6]: https://www.rt-thread.org/download/rttdoc_1_0_0/group__finsh.html
[7]: https://www.redhat.com/en/topics/edge-computing
[#]: subject: (My favorite open source project management tools)
[#]: via: (https://opensource.com/article/21/3/open-source-project-management)
[#]: author: (Frank Bergmann https://opensource.com/users/fraber)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

My favorite open source project management tools
======
If you're managing large and complex projects, try replacing Microsoft Project with an open source option.

![Kanban-style organization action][1]

Projects like building a satellite, developing a robot, or launching a new product are all expensive, involve different providers, and contain hard dependencies that must be tracked.

The approach to project management in the world of large projects is quite simple (in theory, at least). You create a project plan and split it into smaller pieces until you can reasonably assign costs, durations, resources, and dependencies to the various activities. Once the project plan is approved by the people in charge of the money, you use it to track the project's execution. Drawing all of the project's activities on a timeline produces a bar chart called a [Gantt chart][2].

Gantt charts have always been used in [waterfall project methodologies][3], but they can also be used with agile. For example, large projects may use a Gantt chart for a scrum sprint and ignore other details like user stories, thereby embedding agile phases. Other large projects may include multiple product releases (e.g., minimum viable product [MVP], second version, third version, etc.). In this case, the superstructure is kind of agile, with each phase planned as a Gantt chart to deal with budgets and complex dependencies.
### Project management tools

There are literally hundreds of tools available to manage large projects with Gantt charts, and Microsoft Project is probably the most popular. It is part of the Microsoft Office family, scales to hundreds of thousands of activities, and has an incredible number of features that support almost every conceivable way to manage a project schedule. With Project, it's not always clear which is more expensive: the software license or the training courses that teach you how to use the tool.

Another drawback is that Microsoft Project is a standalone desktop application, and only one person can update a schedule. You would need to buy licenses for Microsoft Project Server, Project for the web, or Microsoft Planner if you want multiple users to collaborate.

Fortunately, there are open source alternatives to these proprietary tools, including the applications in this article. All are open source and include a Gantt editor for scheduling hierarchical activities based on resources and dependencies. ProjectLibre, GanttProject, and TaskJuggler are desktop applications for a single project manager; ProjeQtOr and Redmine are web applications for project teams; and ]project-open[ is a web application for managing entire organizations.

I evaluated the tools based on a single user planning and tracking a single large project. My evaluation criteria include Gantt editor features; availability on Windows, Linux, and macOS; scalability; import/export; and reporting. (Full disclosure: I'm the founder of ]project-open[, and I've been active in several open source communities for many years. This list includes our product, so my views may be biased, but I tried to focus on each product's best features.)

### Redmine 4.1.0

![Redmine][4]

(Frank Bergmann, [CC BY-SA 4.0][5])

[Redmine][6] is a web-based project management tool with a focus on agile methodologies.

The standard installation includes a Gantt timeline view, but it lacks fundamental features like scheduling, drag-and-drop, indent and outdent, and resource assignments. You have to edit task properties individually to change the task tree's structure.

Redmine has Gantt editor plugins, but they are either outdated (e.g., [Plus Gantt][7]) or proprietary (e.g., [ANKO Gantt chart][8]). If you know of other open source Gantt editor plugins, please share them in the comments.

Redmine is written in Ruby on Rails and available for Windows, Linux, and macOS. The core is available under a GPLv2 license.

* **Best for:** IT teams working with agile methodologies
* **Unique selling proposition:** It's the original "upstream" parent project of OpenProject and EasyRedmine.

### ]project-open[ 5.1

![\]project-open\[][9]

(Frank Bergmann, [CC BY-SA 4.0][5])

[]project-open[][10] is a web-based project management system that takes the perspective of an entire organization, similar to an enterprise resource planning (ERP) system. It can also manage project portfolios, budgets, invoicing, sales, human resources, and other functional areas. Specific variants exist for professional services automation (PSA) for running a project company, project management office (PMO) for managing an enterprise's strategic projects, and enterprise project management (EPM) for managing a department's projects.

The ]po[ Gantt editor includes hierarchical tasks, dependencies, and scheduling based on planned work and assigned resources. It does not support resource calendars or non-human resources. The ]po[ system is quite complex, and the GUI might need a refresh.

]project-open[ is written in TCL and JavaScript and available for Windows and Linux. The ]po[ core is available under a GPLv2 license, with proprietary extensions available for large companies.

* **Best for:** Medium to large project organizations that need a lot of financial project reporting
* **Unique selling proposition:** ]po[ is an integrated system to run an entire project company or department.
### ProjectLibre 1.9.3

![ProjectLibre][11]

(Frank Bergmann, [CC BY-SA 4.0][5])

[ProjectLibre][12] is probably the closest you can get to Microsoft Project in the open source world. It is a desktop application that supports all the important project planning features, including resource calendars, baselines, and cost management. It also allows you to import and export schedules using MS-Project's file format.

ProjectLibre is perfectly suitable for planning and executing small or midsized projects. However, it's missing some of MS-Project's advanced features, and its GUI is not the prettiest.

ProjectLibre is written in Java, available for Windows, Linux, and macOS, and licensed under the open source Common Public Attribution License (CPAL). The ProjectLibre team is currently working on a web offering called ProjectLibre Cloud under a proprietary license.

* **Best for:** An individual project manager running small to midsized projects, or as a viewer for project members who don't have a full MS-Project license
* **Unique selling proposition:** It's the closest you can get to MS-Project with open source.

### GanttProject 2.8.11

![GanttProject][13]

(Frank Bergmann, [CC BY-SA 4.0][5])

[GanttProject][14] is similar to ProjectLibre as a desktop Gantt editor but with a more limited feature set. It doesn't support baselines or non-human resources, and the reporting functionality is more limited.

GanttProject is a desktop application written in Java and available for Windows, Linux, and macOS under the GPLv3 license.

* **Best for:** Simple Gantt charts or learning Gantt-based project management techniques
* **Unique selling proposition:** It supports program evaluation and review technique ([PERT][15]) charts and collaboration via WebDAV.
### TaskJuggler 3.7.1

![TaskJuggler][16]

(Frank Bergmann, [CC BY-SA 4.0][5])

[TaskJuggler][17] schedules multiple parallel projects in large organizations, focusing on automatically resolving resource assignment conflicts (i.e., resource leveling).

It is not an interactive Gantt editor but a command-line tool that works similarly to a compiler: it reads a list of tasks from a text file and produces a series of reports with the optimum start and end times for each task, depending on the assigned resources, dependencies, priorities, and many other parameters. It supports multiple projects, baselines, resource calendars, shifts, and time zones, and it has been designed to scale to enterprise scenarios with many projects and resources.

Writing a TaskJuggler input file with its specific syntax may be beyond the average project manager's capabilities. However, you can use ]project-open[ as a graphical frontend for TaskJuggler to generate input, including absences, task progress, and logged hours. When used this way, TaskJuggler becomes a powerful what-if scenario planner.

TaskJuggler is written in Ruby and available for Windows, Linux, and macOS under a GPLv2 license.

* **Best for:** Medium to large departments managed by a true nerd
* **Unique selling proposition:** It excels at automatic resource leveling.

### ProjeQtOr 9.0.4

![ProjeQtOr][18]

(Frank Bergmann, [CC BY-SA 4.0][5])

[ProjeQtOr][19] is a web-based project management application that's suitable for IT projects. In addition to projects, tickets, and activities, it supports risks, budgets, deliverables, and financial documents, integrating many aspects of project management into a single system.

ProjeQtOr provides a Gantt editor with a feature set similar to ProjectLibre's, including hierarchical tasks, dependencies, and scheduling based on planned work and assigned resources. However, it doesn't support in-place editing of values (e.g., task name, estimated time, etc.); users must change values in an entry form below the Gantt view and save them.

ProjeQtOr is written in PHP and available for Windows, Linux, and macOS under the Affero GPLv3 license.

* **Best for:** IT departments tracking a list of projects
* **Unique selling proposition:** It lets you store a wealth of information for every project, keeping everything in one place.
### Other tools

The following systems may be valid options for specific use cases but were excluded from the main list for various reasons.

![LibrePlan][20]

(Frank Bergmann, [CC BY-SA 4.0][5])

* [**LibrePlan**][21] is a web-based project management application focusing on Gantt charts. It would have figured prominently in the list above due to its feature set, but there is no installation available for recent Linux versions (CentOS 7 or 8). The authors say updated instructions will be available soon.
* [**dotProject**][22] is a web-based project management system written in PHP and available under the GPLv2.x license. It includes a Gantt timeline report, but there are no options to edit it, and dependencies don't work yet (they're "only partially functional").
* [**Leantime**][23] is a web-based project management system with a pretty GUI, written in PHP and available under the GPLv2 license. It includes a Gantt timeline for milestones but without dependencies.
* [**Orangescrum**][24] is a web-based project management tool. Gantt charts are available as a paid add-on or with a paid subscription.
* [**Talaia/OpenPPM**][25] is a web-based project portfolio management system. However, version 4.6.1 still says "Coming Soon: Interactive Gantt Charts."
* [**Odoo**][26] and [**OpenProject**][27] both restrict some important features to the paid enterprise edition.

In this review, I aimed to include all open source project management systems that include a Gantt editor with dependency scheduling. If I missed a project or misrepresented something, please let me know in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/open-source-project-management

作者:[Frank Bergmann][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/fraber
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kanban_trello_organize_teams_520.png?itok=ObNjCpxt (Kanban-style organization action)
[2]: https://en.wikipedia.org/wiki/Gantt_chart
[3]: https://opensource.com/article/20/3/agiles-vs-waterfall
[4]: https://opensource.com/sites/default/files/uploads/redmine.png (Redmine)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://www.redmine.org/
[7]: https://redmine.org/plugins/plus_gantt
[8]: https://www.redmine.org/plugins/anko_gantt_chart
[9]: https://opensource.com/sites/default/files/uploads/project-open.png (]project-open[)
[10]: https://www.project-open.com
[11]: https://opensource.com/sites/default/files/uploads/projectlibre.png (ProjectLibre)
[12]: http://www.projectlibre.org
[13]: https://opensource.com/sites/default/files/uploads/ganttproject.png (GanttProject)
[14]: https://www.ganttproject.biz
[15]: https://en.wikipedia.org/wiki/Program_evaluation_and_review_technique
[16]: https://opensource.com/sites/default/files/uploads/taskjuggler.png (TaskJuggler)
[17]: https://taskjuggler.org/
[18]: https://opensource.com/sites/default/files/uploads/projeqtor.png (ProjeQtOr)
[19]: https://www.projeqtor.org
[20]: https://opensource.com/sites/default/files/uploads/libreplan.png (LibrePlan)
[21]: https://www.libreplan.dev/
[22]: https://dotproject.net/
[23]: https://leantime.io
[24]: https://orangescrum.org/
[25]: http://en.talaia-openppm.com/
[26]: https://odoo.com
[27]: http://openproject.org
[#]: subject: (Programming 101: Input and output with Java)
[#]: via: (https://opensource.com/article/21/3/io-java)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Programming 101: Input and output with Java
======
Learn how Java handles reading and writing data.

![Coffee beans and a cup of coffee][1]

When you write a program, your application may need to read from and write to files stored on the user's computer. This is common when you want to load or store configuration options, you need to create log files, or your user wants to save work for later. Every language handles this task a little differently. This article demonstrates how to handle data files with Java.
### Installing Java

Regardless of your computer's platform, you can install Java from [AdoptOpenJDK][2]. This site offers safe and open source builds of Java. On Linux, you may also find AdoptOpenJDK builds in your software repository.

I recommend using the latest long-term support (LTS) release. The latest non-LTS release is best for developers looking to try the latest Java features, but it likely outpaces what most users have installed—either by default on their system or installed previously for some other Java application. Using the LTS release ensures you're up to date with what most users have installed.

Once you have Java installed, open your favorite text editor and get ready to code. You might also want to investigate an [integrated development environment for Java][3]. BlueJ is ideal for new programmers, while Eclipse and NetBeans are nice for intermediate and experienced coders.

### Reading a file with Java

Java uses the `File` library to load files.

This example creates a class called `Ingest` to read data from a file. When you open a file in Java, you create a `Scanner` object, which scans the file you provide, line by line. In fact, a `Scanner` is the same concept as a cursor in a text editor, and you can control that "cursor" for reading and writing with `Scanner` methods like `nextLine`:

```
import java.io.File;
import java.util.Scanner;
import java.io.FileNotFoundException;

public class Ingest {
    public static void main(String[] args) {

        try {
            File myFile = new File("example.txt");
            Scanner myScanner = new Scanner(myFile);
            while (myScanner.hasNextLine()) {
                String line = myScanner.nextLine();
                System.out.println(line);
            }
            myScanner.close();
        } catch (FileNotFoundException ex) {
            ex.printStackTrace();
        } //try
    } //main
} //class
```
This code creates the variable `myFile` under the assumption that a file named `example.txt` exists. If that file does not exist, Java "throws an exception" (meaning it found an error in what you attempted to do and says so), which is "caught" by the very specific `FileNotFoundException` library. The fact that there's a library specific to this exact error betrays how common the error is.

Next, the code creates a `Scanner` and loads the file into it. I call it `myScanner` to differentiate it from its generic class template. A `while` loop sends `myScanner` over the file, line by line, for as long as there _is_ a next line. That's what the `hasNextLine` method does: it detects whether there's any data after the "cursor." You can simulate this by opening a file in a text editor: your cursor starts at the very beginning of the file, and you can use the keyboard to scan through the file with the cursor until you run out of lines.

The `while` loop creates a variable `line` and assigns it the data of the current line. Then it prints the contents of `line` just to provide feedback. A more useful program would probably parse each line to extract whatever important data it contains.

At the end of the process, the `myScanner` object closes.
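For example, if each line of the file held a simple `key=value` pair, the parsing step inside the loop might look like this (a sketch with hypothetical names, assuming that file format):

```java
public class LineParser {
    // Split a "key=value" line into its two fields. Limiting split()
    // to two parts keeps any '=' characters inside the value intact.
    public static String[] parseLine(String line) {
        return line.split("=", 2);
    }

    public static void main(String[] args) {
        // Inside the Scanner loop, each line would be parsed like this:
        String[] fields = parseLine("greeting=Hello world");
        System.out.println(fields[0] + " -> " + fields[1]);
    }
}
```

The same pattern works for comma-separated fields or any other line-oriented format; only the delimiter changes.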
### Running the code

Save your code as `Ingest.java` (it's a Java convention to give classes an initial capital letter and to name the file to match). If you try to run this simple application, you will probably receive an error because there is no `example.txt` for the application to load yet:

```
$ java ./Ingest.java
java.io.FileNotFoundException:
example.txt (No such file or directory)
```

What a perfect opportunity to write a Java application that writes data to a file!
|
||||
|
||||
### Writing data to a file with Java
|
||||
|
||||
Whether you're storing data that your user creates with your application or just metadata about what a user did in an application (for instance, game saves or recent songs played), there are lots of good reasons to store data for later use. In Java, this is achieved through the `FileWriter` library, this time by opening a file, writing data into it, and then closing the file:
|
||||
|
||||
|
||||
```
|
||||
import java.io.FileWriter;
|
||||
import java.io.IOException;
|
||||
|
||||
public class Exgest {
|
||||
public static void main(String[] args) {
|
||||
try {
|
||||
FileWriter myFileWriter = new FileWriter("example.txt", true);
|
||||
myFileWriter.write("Hello world\n");
|
||||
myFileWriter.close();
|
||||
} catch (IOException ex) {
|
||||
System.out.println(ex);
|
||||
} // try
|
||||
} // main
|
||||
}
|
||||
```
|
||||
|
||||
The logic and flow of this class are similar to reading a file. Instead of a `Scanner`, it creates a `FileWriter` object with the name of a file. The `true` flag at the end of the `FileWriter` statement tells `FileWriter` to _append_ text to the end of the file. To overwrite a file's contents, remove the `true`:
|
||||
|
||||
|
||||
```
|
||||
FileWriter myFileWriter = new FileWriter("example.txt");
|
||||
```
|
||||
|
||||
Because I'm writing plain text into a file, I added my own newline character (`\n`) at the end of the data (`Hello world`) written into the file.
|
||||
|
||||
### Trying the code
|
||||
|
||||
Save this code as `Exgest.java`, following the Java convention of naming the file to match the class name.
|
||||
|
||||
Now that you have the means to create and read data with Java, you can try your new applications, in reverse order:
|
||||
|
||||
|
||||
```
|
||||
$ java ./Exgest.java
|
||||
$ java ./Ingest.java
|
||||
Hello world
|
||||
$
|
||||
```
|
||||
|
||||
Because it appends data to the end, you can repeat your application to write data as many times as you want to add more data to your file:
|
||||
|
||||
|
||||
```
|
||||
$ java ./Exgest.java
|
||||
$ java ./Exgest.java
|
||||
$ java ./Exgest.java
|
||||
$ java ./Ingest.java
|
||||
Hello world
|
||||
Hello world
|
||||
Hello world
|
||||
$
|
||||
```
|
||||
|
||||
### Java and data
|
||||
|
||||
You don't write raw text into a file very often; in the real world, you probably use an additional library to write a specific format instead. For instance, you might use an XML library to write complex data, an INI or YAML library to write configuration files, or any number of specialized libraries to write binary formats like images or audio.
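As one concrete example of a structured format, Java's built-in `java.util.Properties` class can round-trip simple key-value configuration. The `ConfigDemo` class and the `settings.properties` file name below are illustrative only:

```java
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Properties;

public class ConfigDemo {
    public static void main(String[] args) {
        Properties config = new Properties();
        config.setProperty("volume", "11");
        config.setProperty("theme", "dark");
        try {
            // Write the key-value pairs to a config file
            FileWriter writer = new FileWriter("settings.properties");
            config.store(writer, "application settings");
            writer.close();

            // Read them back to verify the round trip
            Properties loaded = new Properties();
            FileReader reader = new FileReader("settings.properties");
            loaded.load(reader);
            reader.close();
            System.out.println(loaded.getProperty("volume"));
        } catch (IOException ex) {
            System.out.println(ex);
        }
    }
}
```

This prints `11`, and `Properties` handles escaping and comments for you, which is exactly the kind of detail a format library takes off your hands.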
|
||||
|
||||
For full information, refer to the [OpenJDK documentation][10].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/io-java
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/java-coffee-mug.jpg?itok=Bj6rQo8r (Coffee beans and a cup of coffee)
|
||||
[2]: https://adoptopenjdk.net
|
||||
[3]: https://opensource.com/article/20/7/ide-java
|
||||
[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
|
||||
[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+file
|
||||
[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
|
||||
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+filenotfoundexception
|
||||
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+filewriter
|
||||
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+ioexception
|
||||
[10]: https://access.redhat.com/documentation/en-us/openjdk/11/
|
73
sources/tech/20210317 Track aircraft with a Raspberry Pi.md
Normal file
@ -0,0 +1,73 @@
|
||||
[#]: subject: (Track aircraft with a Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/21/3/tracking-flights-raspberry-pi)
|
||||
[#]: author: (Patrick Easters https://opensource.com/users/patrickeasters)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Track aircraft with a Raspberry Pi
|
||||
======
|
||||
Explore the open skies with a Raspberry Pi, an inexpensive radio, and open source software.
|
||||
![Airplane flying with a globe background][1]
|
||||
|
||||
I live near a major airport, and I frequently hear aircraft flying over my house. I also have a curious preschooler, and I find myself answering questions like, "What's that?" and "Where's that plane going?" often. While a quick internet search could answer these questions, I wanted to see if I could answer them myself.
|
||||
|
||||
With a Raspberry Pi, an inexpensive radio, and open source software, I can track aircraft as far as 200 miles from my house. Whether you're answering relentless questions from your kids or are just curious about what's in the sky above you, this is something you can try, too.
|
||||
|
||||
![Flight map][2]
|
||||
|
||||
(Patrick Easters, [CC BY-SA 4.0][3])
|
||||
|
||||
### The protocol behind it all
|
||||
|
||||
[ADS-B][4] is a technology that aircraft use worldwide to broadcast their location. Aircraft use position data gathered from GPS and periodically broadcast it along with speed and other telemetry so that other aircraft and ground stations can track their position.
|
||||
|
||||
Since this protocol is well-known and unencrypted, there are many solutions to receive and parse it, including many that are open source.
|
||||
|
||||
### Gathering the hardware
|
||||
|
||||
Pretty much any [Raspberry Pi][5] will work for this project. I've used an older Pi 1 Model B, but I'd recommend a Pi 3 or newer to ensure you can keep up with the stream of decoded ADS-B messages.
|
||||
|
||||
To receive the ADS-B signals, you need a software-defined radio. Thanks to ultra-cheap radio chips designed for TV tuners, there are quite a few cheap USB receivers to choose from. I use [FlightAware's ProStick Plus][6] because it has a built-in filter to weaken signals outside the 1090MHz band used for ADS-B. Filtering is important since strong signals, such as broadcast FM radio and television, can desensitize the receiver. Any receiver based on RTL-SDR should work.
|
||||
|
||||
You will also need an antenna for the receiver. The options are limitless here, ranging from the [more adventurous DIY options][7] to purchasing a [ready-made 1090MHz antenna][8]. Whichever route you choose, antenna placement matters most. ADS-B reception is line-of-sight, so you'll want your antenna to be as high as possible to extend your range. I have mine in my attic, but I got decent results from my house's upper floor.
|
||||
|
||||
### Visualizing your data with software
|
||||
|
||||
Now that your Pi is equipped to receive ADS-B signals, the real magic happens in the software. Two of the most commonly used open source software projects for ADS-B are [readsb][9] for decoding ADS-B messages and [tar1090][10] for visualization. Combining both provides an interactive map showing all the aircraft your Pi is tracking.
|
||||
|
||||
Both projects provide setup instructions, but using a prebuilt image like the [ADSBx Custom Pi Image][11] is the fastest way to get going. The ADSBx image even configures a Prometheus instance with custom metrics like aircraft count.
|
||||
|
||||
### Keep experimenting
|
||||
|
||||
If the novelty of tracking airplanes with your Raspberry Pi wears off, there are plenty of ways to keep experimenting. Try different antenna designs or find the best antenna placement to maximize the number of aircraft you see.
|
||||
|
||||
These are just a few of the ways to track aircraft with your Pi, and hopefully, this inspires you to try it out and learn a bit about the world of radio. Happy tracking!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/tracking-flights-raspberry-pi
|
||||
|
||||
作者:[Patrick Easters][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/patrickeasters
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plane_travel_world_international.png?itok=jG3sYPty (Airplane flying with a globe background)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/flightmap.png (Flight map)
|
||||
[3]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[4]: https://en.wikipedia.org/wiki/Automatic_Dependent_Surveillance%E2%80%93Broadcast
|
||||
[5]: https://www.raspberrypi.org/
|
||||
[6]: https://www.amazon.com/FlightAware-FA-PROSTICKPLUS-1-Receiver-Built-Filter/dp/B01M7REJJW
|
||||
[7]: http://www.radioforeveryone.com/p/easy-homemade-ads-b-antennas.html
|
||||
[8]: https://www.amazon.com/s?k=1090+antenna+sma&i=electronics&ref=nb_sb_noss_2
|
||||
[9]: https://github.com/wiedehopf/readsb
|
||||
[10]: https://github.com/wiedehopf/tar1090
|
||||
[11]: https://www.adsbexchange.com/how-to-feed/adsbx-custom-pi-image/
|
@ -0,0 +1,225 @@
|
||||
[#]: subject: (Get started with an open source customer data platform)
|
||||
[#]: via: (https://opensource.com/article/21/3/rudderstack-customer-data-platform)
|
||||
[#]: author: (Amey Varangaonkar https://opensource.com/users/ameypv)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Get started with an open source customer data platform
|
||||
======
|
||||
As an open source alternative to Segment, RudderStack collects and routes event stream (or clickstream) data and automatically builds your customer data lake on your data warehouse.
|
||||
![Person standing in front of a giant computer screen with numbers, data][1]
|
||||
|
||||
[RudderStack][2] is an open source, warehouse-first customer data pipeline. It collects and routes event stream (or clickstream) data and automatically builds your customer data lake on your data warehouse.
|
||||
|
||||
RudderStack is commonly known as the open source alternative to the customer data platform (CDP), [Segment][3]. It provides a more secure, flexible, and cost-effective solution in comparison. You get all the CDP functionality with added security and full ownership of your customer data.
|
||||
|
||||
Warehouse-first tools like RudderStack are architected to build functional data lakes in the user's data warehouse. The benefits are improved data control, increased flexibility in tool use, and (frequently) lower costs. Since it's open source, you can see how complicated processes—like building your identity graph—are done without relying on a vendor's black box.
|
||||
|
||||
### Getting the RudderStack workspace token
|
||||
|
||||
Before you get started, you will need the RudderStack workspace token from your RudderStack dashboard. To get it:
|
||||
|
||||
1. Go to the [RudderStack dashboard][4].
|
||||
|
||||
2. Log in using your credentials (or sign up for an account, if you don't already have one).
|
||||
|
||||
![RudderStack login screen][5]
|
||||
|
||||
(RudderStack, [CC BY-SA 4.0][6])
|
||||
|
||||
3. Once you've logged in, you should see the workspace token on your RudderStack dashboard.
|
||||
|
||||
![RudderStack workspace token][7]
|
||||
|
||||
(RudderStack, [CC BY-SA 4.0][6])

### Installing RudderStack
|
||||
|
||||
Setting up a RudderStack open source instance is straightforward. You have two installation options:
|
||||
|
||||
1. On your Kubernetes cluster, using RudderStack's Helm charts
|
||||
2. On your Docker container, using the `docker-compose` command
|
||||
|
||||
|
||||
|
||||
This tutorial explains how to use both options but assumes that you already have [Git installed on your system][8].
|
||||
|
||||
#### Deploying with Kubernetes
|
||||
|
||||
You can deploy RudderStack on your Kubernetes cluster using the [Helm][9] package manager.
|
||||
|
||||
_If you plan to use RudderStack in production, we strongly recommend using this method._ This is because the Docker images are updated with bug fixes more frequently than the GitHub repository (which follows a monthly release cycle).
|
||||
|
||||
Before you can deploy RudderStack on Kubernetes, make sure you have the following prerequisites in place:
|
||||
|
||||
* [Install and connect kubectl][10] to your Kubernetes cluster.
|
||||
* [Install Helm][11] on your system, either through the Helm installer scripts or its package manager.
|
||||
* Finally, get the workspace token from the RudderStack dashboard by following the steps in the [Getting the RudderStack workspace token][12] section.
|
||||
|
||||
|
||||
|
||||
Once you've completed all the prerequisites, deploy RudderStack on your default Kubernetes cluster:
|
||||
|
||||
1. Find the Helm chart required to deploy RudderStack in this [repo][13].
|
||||
2. Install the Helm chart with a release name of your choice (`my-release`, in this example) from the root directory of the repo in the previous step:

   ```
   $ helm install \
   my-release ./ --set \
   rudderWorkspaceToken="<your workspace token from RudderStack dashboard>"
   ```
|
||||
This deploys RudderStack on your default Kubernetes cluster configured with kubectl using the workspace token you obtained from the RudderStack dashboard.
|
||||
|
||||
For more details on the configurable parameters in the RudderStack Helm chart or updating the versions of the images used, consult the [documentation][14].
|
||||
|
||||
### Deploying with Docker
|
||||
|
||||
Docker is the easiest and fastest way to set up your open source RudderStack instance.
|
||||
|
||||
First, get the workspace token from the RudderStack dashboard by following the steps above.
|
||||
|
||||
Once you have the RudderStack workspace token:
|
||||
|
||||
1. Download the [**rudder-docker.yml**][15] docker-compose file required for the installation.
|
||||
2. Replace `<your_workspace_token>` in this file with your RudderStack workspace token.
|
||||
3. Set up RudderStack on your Docker container by running:

   ```
   docker-compose -f rudder-docker.yml up
   ```
|
||||
|
||||
|
||||
|
||||
Now RudderStack should be up and running on your Docker instance.
|
||||
|
||||
### Verifying the installation
|
||||
|
||||
You can verify your RudderStack installation by sending test events using the bundled shell script:
|
||||
|
||||
1. Clone the GitHub repository:

   ```
   git clone https://github.com/rudderlabs/rudder-server.git
   ```
|
||||
2. In this tutorial, you will verify RudderStack by sending test events to Google Analytics. Make sure you have a Google Analytics account and keep the tracking ID handy. Also, note that the Google Analytics account needs to have a `Web` property.
|
||||
|
||||
3. In the [RudderStack hosted control plane][4]:
|
||||
|
||||
* Add a source on the RudderStack dashboard by following the [Adding a source and destination in RudderStack][16] guide. You can use either of RudderStack's event stream software development kits (SDKs) for sending events from your app. This example sets up the [JavaScript SDK][17] as a source on the dashboard. **Note:** You aren't actually installing the RudderStack JavaScript SDK on your site in this step; you are just creating the source in RudderStack.
|
||||
|
||||
* Configure a Google Analytics destination on the RudderStack dashboard using the instructions in the guide mentioned previously. Use the Google Analytics tracking ID you kept from step 2 of this section:
|
||||
|
||||
![Google Analytics tracking ID][18]
|
||||
|
||||
(RudderStack, [CC BY-SA 4.0][6])
|
||||
|
||||
4. As mentioned before, RudderStack bundles a shell script that generates test events. Get the **Source write key** from the RudderStack dashboard:
|
||||
|
||||
![RudderStack source write key][19]
|
||||
|
||||
(RudderStack, [CC BY-SA 4.0][6])
|
||||
|
||||
5. Next, run:

   ```
   ./scripts/generate-event <YOUR_WRITE_KEY> https://hosted.rudderlabs.com/v1/batch
   ```
|
||||
|
||||
6. Finally, log into your Google Analytics account and verify that the events were delivered. In your Google Analytics account, navigate to **RealTime** -> **Events**. The RealTime view is important because some dashboards can take one to two days to refresh.
|
||||
|
||||
|
||||
|
||||
|
||||
### Optional: Setting up the open source control plane
|
||||
|
||||
RudderStack's core architecture contains two major components: the data plane and the control plane. The data plane, [rudder-server][20], delivers your event data, and the RudderStack hosted control plane manages the configuration of your sources and destinations.
|
||||
|
||||
However, if you want to manage the source and destination configurations locally, you can set an open source control plane in your environment using the RudderStack Config Generator. (You must have [Node.js][21] installed on your system to use it.)
|
||||
|
||||
Here are the steps to set up the control plane:
|
||||
|
||||
1. Install and set up RudderStack on the platform of your choice by following the instructions above.
|
||||
2. Run the following commands in this order:
|
||||
* `cd utils/config-gen`
|
||||
* `npm install`
|
||||
* `npm start`
|
||||
|
||||
|
||||
|
||||
You should now be able to access the open source control plane at `http://localhost:3000` by default. If your setup is successful, you will see the user interface.
|
||||
|
||||
![RudderStack open source control plane][22]
|
||||
|
||||
(RudderStack, [CC BY-SA 4.0][6])
|
||||
|
||||
To export the existing workspace configuration from the RudderStack-hosted control plane and have RudderStack use it, consult the [docs][23].
|
||||
|
||||
### RudderStack and open source
|
||||
|
||||
The core of RudderStack is in the [rudder-server][20] repository. It is open source, licensed under [AGPL-3.0][24]. A majority of the destination integrations live in the [rudder-transformer][25] repository. They are open source as well, licensed under the [MIT License][26]. The SDKs and instrumentation repositories, several tool and utility repositories, and even some [dbt][27] model repositories for use-cases like customer journey analysis and sessionization for the data residing in your data warehouse are open source, licensed under the MIT License, and available in the [GitHub repository][28].
|
||||
|
||||
You can use RudderStack's open source offering, rudder-server, on your platform of choice. There are setup guides for [Docker][29], [Kubernetes][30], [native installation][31], and [developer machines][32].
|
||||
|
||||
RudderStack open source offers:
|
||||
|
||||
1. RudderStack event stream
|
||||
2. 15+ SDKs and source integrations to ingest event data
|
||||
3. 80+ destination and warehouse integrations
|
||||
4. Slack community support
|
||||
|
||||
|
||||
|
||||
#### RudderStack Cloud
|
||||
|
||||
RudderStack also offers a managed option, [RudderStack Cloud][33]. It is fast, reliable, and highly scalable with a multi-node architecture and sophisticated error-handling mechanism. You can hit peak event volume without worrying about downtime, loss of events, or latency.
|
||||
|
||||
Explore our open source repos on [GitHub][28], subscribe to [our blog][34], and follow us on social media: [Twitter][35], [LinkedIn][36], [dev.to][37], [Medium][38], and [YouTube][39]!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/rudderstack-customer-data-platform
|
||||
|
||||
作者:[Amey Varangaonkar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ameypv
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
|
||||
[2]: https://rudderstack.com/
|
||||
[3]: https://segment.com/
|
||||
[4]: https://app.rudderstack.com/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/rudderstack_login.png (RudderStack login screen)
|
||||
[6]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[7]: https://opensource.com/sites/default/files/uploads/rudderstack_workspace-token.png (RudderStack workspace token)
|
||||
[8]: https://opensource.com/life/16/7/stumbling-git
|
||||
[9]: https://helm.sh/
|
||||
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
|
||||
[11]: https://helm.sh/docs/intro/install/
|
||||
[12]: #getting-the-rudderstack-workspace-token
|
||||
[13]: https://github.com/rudderlabs/rudderstack-helm
|
||||
[14]: https://docs.rudderstack.com/installing-and-setting-up-rudderstack/kubernetes
|
||||
[15]: https://raw.githubusercontent.com/rudderlabs/rudder-server/master/rudder-docker.yml
|
||||
[16]: https://docs.rudderstack.com/get-started/adding-source-and-destination-rudderstack
|
||||
[17]: https://docs.rudderstack.com/rudderstack-sdk-integration-guides/rudderstack-javascript-sdk
|
||||
[18]: https://opensource.com/sites/default/files/uploads/googleanalyticstrackingid.png (Google Analytics tracking ID)
|
||||
[19]: https://opensource.com/sites/default/files/uploads/rudderstack_sourcewritekey.png (RudderStack source write key)
|
||||
[20]: https://github.com/rudderlabs/rudder-server
|
||||
[21]: https://nodejs.org/en/download/
|
||||
[22]: https://opensource.com/sites/default/files/uploads/rudderstack_controlplane.png (RudderStack open source control plane)
|
||||
[23]: https://docs.rudderstack.com/how-to-guides/rudderstack-config-generator
|
||||
[24]: https://www.gnu.org/licenses/agpl-3.0-standalone.html
|
||||
[25]: https://github.com/rudderlabs/rudder-transformer
|
||||
[26]: https://opensource.org/licenses/MIT
|
||||
[27]: https://www.getdbt.com/
|
||||
[28]: https://github.com/rudderlabs
|
||||
[29]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/docker
|
||||
[30]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/kubernetes
|
||||
[31]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/native-installation
|
||||
[32]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/developer-machine-setup
|
||||
[33]: https://resources.rudderstack.com/rudderstack-cloud
|
||||
[34]: https://rudderstack.com/blog/
|
||||
[35]: https://twitter.com/RudderStack
|
||||
[36]: https://www.linkedin.com/company/rudderlabs/
|
||||
[37]: https://dev.to/rudderstack
|
||||
[38]: https://rudderstack.medium.com/
|
||||
[39]: https://www.youtube.com/channel/UCgV-B77bV_-LOmKYHw8jvBw
|
285
sources/tech/20210318 Reverse Engineering a Docker Image.md
Normal file
@ -0,0 +1,285 @@
|
||||
[#]: subject: (Reverse Engineering a Docker Image)
|
||||
[#]: via: (https://theartofmachinery.com/2021/03/18/reverse_engineering_a_docker_image.html)
|
||||
[#]: author: (Simon Arneaud https://theartofmachinery.com)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (DCOLIVERSUN)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Reverse Engineering a Docker Image
|
||||
======
|
||||
|
||||
This started with a consulting snafu: Government organisation A got government organisation B to develop a web application. Government organisation B subcontracted part of the work to somebody. Hosting and maintenance of the project was later contracted out to a private-sector company C. Company C discovered that the subcontracted somebody (who was long gone) had built a custom Docker image and made it a dependency of the build system, but without committing the original Dockerfile. That left company C with a contractual obligation to manage a Docker image they had no source code for. Company C calls me in once in a while to do various things, so doing something about this mystery meat Docker image became my job.
|
||||
|
||||
Fortunately, the Docker image format is a lot more transparent than it could be. A little detective work is needed, but a lot can be figured out just by pulling apart an image file. As an example, here’s a quick walkthrough of an image for [the Prettier code formatter][1].
|
||||
|
||||
First let’s get the Docker daemon to pull the image, then extract the image to a file:
|
||||
|
||||
```
|
||||
docker pull tmknom/prettier:2.0.5
|
||||
docker save tmknom/prettier:2.0.5 > prettier.tar
|
||||
```
|
||||
|
||||
Yes, the file is just an archive in the classic tarball format:
|
||||
|
||||
```
|
||||
$ tar xvf prettier.tar
|
||||
6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/
|
||||
6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/VERSION
|
||||
6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/json
|
||||
6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/layer.tar
|
||||
88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json
|
||||
a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/
|
||||
a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/VERSION
|
||||
a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/json
|
||||
a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar
|
||||
d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/
|
||||
d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/VERSION
|
||||
d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/json
|
||||
d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/layer.tar
|
||||
manifest.json
|
||||
repositories
|
||||
```
|
||||
|
||||
As you can see, Docker uses hashes a lot for naming things. Let’s have a look at the `manifest.json`. It’s in hard-to-read compacted JSON, but the [`jq` JSON Swiss Army knife][2] can pretty print it for us:
|
||||
|
||||
```
|
||||
$ jq . manifest.json
|
||||
[
|
||||
{
|
||||
"Config": "88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json",
|
||||
"RepoTags": [
|
||||
"tmknom/prettier:2.0.5"
|
||||
],
|
||||
"Layers": [
|
||||
"a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar",
|
||||
"d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/layer.tar",
|
||||
"6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/layer.tar"
|
||||
]
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
Note that the three layers correspond to the three hash-named directories. We’ll look at them later. For now, let’s look at the JSON file pointed to by the `Config` key. It’s a little long, so I’ll just dump the first bit here:
|
||||
|
||||
```
|
||||
$ jq . 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json | head -n 20
|
||||
{
|
||||
"architecture": "amd64",
|
||||
"config": {
|
||||
"Hostname": "",
|
||||
"Domainname": "",
|
||||
"User": "",
|
||||
"AttachStdin": false,
|
||||
"AttachStdout": false,
|
||||
"AttachStderr": false,
|
||||
"Tty": false,
|
||||
"OpenStdin": false,
|
||||
"StdinOnce": false,
|
||||
"Env": [
|
||||
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
|
||||
],
|
||||
"Cmd": [
|
||||
"--help"
|
||||
],
|
||||
"ArgsEscaped": true,
|
||||
"Image": "sha256:93e72874b338c1e0734025e1d8ebe259d4f16265dc2840f88c4c754e1c01ba0a",
|
||||
```
|
||||
|
||||
The most interesting part is the `history` list, which lists every single layer in the image. A Docker image is a stack of these layers. Almost every statement in a Dockerfile turns into a layer that describes the changes to the image made by that statement. If you have a `RUN script.sh` statement that creates `really_big_file` that you then delete with `RUN rm really_big_file`, you actually get two layers in the Docker image: one that contains `really_big_file`, and one that contains a `.wh.really_big_file` tombstone to cancel it out. The overall image file isn’t any smaller. That’s why you often see Dockerfile statements chained together like `RUN script.sh && rm really_big_file` — it ensures all changes are coalesced into one layer.
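As a sketch, the two approaches look like this in a hypothetical Dockerfile (the script and file names are placeholders):

```
# Two layers: really_big_file is stored in the first layer,
# then merely masked by a .wh. whiteout entry in the second
RUN script.sh
RUN rm really_big_file

# One layer: the file never survives into the image at all
RUN script.sh && rm really_big_file
```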
|
||||
|
||||
Here are all the layers recorded in the Docker image. Notice that most layers don’t change the filesystem image and are marked `"empty_layer": true`. Only three are non-empty, which matches up with what we saw before.
|
||||
|
||||
```
|
||||
$ jq .history 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json
|
||||
[
|
||||
{
|
||||
"created": "2020-04-24T01:05:03.608058404Z",
|
||||
"created_by": "/bin/sh -c #(nop) ADD file:b91adb67b670d3a6ff9463e48b7def903ed516be66fc4282d22c53e41512be49 in / "
|
||||
},
|
||||
{
|
||||
"created": "2020-04-24T01:05:03.92860976Z",
|
||||
"created_by": "/bin/sh -c #(nop) CMD [\"/bin/sh\"]",
|
||||
"empty_layer": true
|
||||
},
|
||||
{
|
||||
"created": "2020-04-29T06:34:06.617130538Z",
|
||||
"created_by": "/bin/sh -c #(nop) ARG BUILD_DATE",
|
||||
"empty_layer": true
|
||||
},
|
||||
{
|
||||
"created": "2020-04-29T06:34:07.020521808Z",
|
||||
"created_by": "/bin/sh -c #(nop) ARG VCS_REF",
|
||||
"empty_layer": true
|
||||
},
|
||||
{
|
||||
"created": "2020-04-29T06:34:07.36915054Z",
|
||||
"created_by": "/bin/sh -c #(nop) ARG VERSION",
|
||||
"empty_layer": true
|
||||
},
|
||||
{
|
||||
"created": "2020-04-29T06:34:07.708820086Z",
|
||||
"created_by": "/bin/sh -c #(nop) ARG REPO_NAME",
|
||||
"empty_layer": true
|
||||
},
|
||||
{
|
||||
"created": "2020-04-29T06:34:08.06429638Z",
|
||||
"created_by": "/bin/sh -c #(nop) LABEL org.label-schema.vendor=tmknom org.label-schema.name=tmknom/prettier org.label-schema.description=Prettier is an opinionated code formatter. org.label-schema.build-date=2020-04-29T06:34:01Z org.label-schema.version=2.0.5 org.label-schema.vcs-ref=35d2587 org.label-schema.vcs-url=https://github.com/tmknom/prettier org.label-schema.usage=https://github.com/tmknom/prettier/blob/master/README.md#usage org.label-schema.docker.cmd=docker run --rm -v $PWD:/work tmknom/prettier --parser=markdown --write '**/*.md' org.label-schema.schema-version=1.0",
|
||||
"empty_layer": true
|
||||
},
|
||||
{
|
||||
"created": "2020-04-29T06:34:08.511269907Z",
|
||||
"created_by": "/bin/sh -c #(nop) ARG NODEJS_VERSION=12.15.0-r1",
|
||||
"empty_layer": true
|
||||
},
|
||||
{
|
||||
"created": "2020-04-29T06:34:08.775876657Z",
|
||||
"created_by": "/bin/sh -c #(nop) ARG PRETTIER_VERSION",
|
||||
"empty_layer": true
|
||||
},
|
||||
{
|
||||
"created": "2020-04-29T06:34:26.399622951Z",
|
||||
"created_by": "|6 BUILD_DATE=2020-04-29T06:34:01Z NODEJS_VERSION=12.15.0-r1 PRETTIER_VERSION=2.0.5 REPO_NAME=tmknom/prettier VCS_REF=35d2587 VERSION=2.0.5 /bin/sh -c set -x && apk add --no-cache nodejs=${NODEJS_VERSION} nodejs-npm=${NODEJS_VERSION} && npm install -g prettier@${PRETTIER_VERSION} && npm cache clean --force && apk del nodejs-npm"
|
||||
},
|
||||
{
|
||||
"created": "2020-04-29T06:34:26.764034848Z",
|
||||
"created_by": "/bin/sh -c #(nop) WORKDIR /work"
|
||||
},
|
||||
{
|
||||
"created": "2020-04-29T06:34:27.092671047Z",
|
||||
"created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"/usr/bin/prettier\"]",
|
||||
"empty_layer": true
|
||||
},
|
||||
{
|
||||
"created": "2020-04-29T06:34:27.406606712Z",
|
||||
"created_by": "/bin/sh -c #(nop) CMD [\"--help\"]",
|
||||
"empty_layer": true
|
||||
}
|
||||
]
|
||||
```
Fantastic! All the statements are right there in the `created_by` fields, so we can almost reconstruct the Dockerfile just from this. Almost. The `ADD` statement at the very top doesn’t actually give us the file we need to `ADD`, and `COPY` statements are just as opaque. We also lose `FROM` statements because they expand out to all the layers inherited from the base Docker image.

We can group the layers by Dockerfile by looking at the timestamps. Most layer timestamps are under a minute apart, representing how long each layer took to build. However, the first two layers are from `2020-04-24`, while the rest of the layers are from `2020-04-29`. That’s because the first two layers come from a base Docker image. Ideally we’d figure out a `FROM` statement that gets us that image, so that we have a maintainable Dockerfile.
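The timestamp heuristic above is easy to automate. Here’s a minimal sketch in plain Python (the sample `created` values are hypothetical stand-ins for the real fields) that splits the layer list into groups wherever consecutive build times are more than a day apart:

```python
from datetime import datetime, timedelta

# Hypothetical "created" timestamps, in the image's layer order
created = [
    "2020-04-24T14:53:27.000000000Z",
    "2020-04-24T14:53:28.000000000Z",
    "2020-04-29T06:34:08.064296380Z",
    "2020-04-29T06:34:26.399622951Z",
]

def parse(ts):
    # Trim nanoseconds down to microseconds so datetime can parse the string
    return datetime.strptime(ts[:26], "%Y-%m-%dT%H:%M:%S.%f")

def group_layers(timestamps, gap=timedelta(days=1)):
    """Group consecutive layers; a big time gap suggests a base-image boundary."""
    groups = [[timestamps[0]]]
    for prev, cur in zip(timestamps, timestamps[1:]):
        if parse(cur) - parse(prev) > gap:
            groups.append([])  # big gap: probably layers from a different build
        groups[-1].append(cur)
    return groups

groups = group_layers(created)
print(len(groups))  # 2: the base-image layers vs. the Prettier layers
```

With the real config JSON you would feed in the `created` field of each history entry in order; the first group is then a good candidate for a `FROM` image.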
The `manifest.json` says that the first non-empty layer is `a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar`. Let’s take a look:

```
$ cd a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/
$ tar tf layer.tar | head
bin/
bin/arch
bin/ash
bin/base64
bin/bbconfig
bin/busybox
bin/cat
bin/chgrp
bin/chmod
bin/chown
```
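You can do the same listing programmatically with Python’s `tarfile` module. A small sketch (the demo builds a tiny in-memory tarball rather than opening the real `layer.tar`, whose path comes from `manifest.json`):

```python
import io
import tarfile

def list_layer(tar_bytes, limit=10):
    """Return the first `limit` entry names from a layer tarball."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        return [member.name for member in tar][:limit]

# Demo on a tiny in-memory tarball standing in for the real layer.tar
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name in ("bin/busybox", "etc/alpine-release", "etc/apk/arch"):
        tar.addfile(tarfile.TarInfo(name))  # empty placeholder entries

print(list_layer(buf.getvalue()))
```

For the real image you would read the bytes of `a9cc…dc97/layer.tar` and pass them to `list_layer`, which mirrors what `tar tf layer.tar | head` shows.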

Okay, that looks like it might be an operating system base image, which is what you’d expect from a typical Dockerfile. There are 488 entries in the tarball, and if you scroll through them, some interesting ones stand out:

```
...
dev/
etc/
etc/alpine-release
etc/apk/
etc/apk/arch
etc/apk/keys/
etc/apk/keys/alpine-devel@lists.alpinelinux.org-4a6a0840.rsa.pub
etc/apk/keys/alpine-devel@lists.alpinelinux.org-5243ef4b.rsa.pub
etc/apk/keys/alpine-devel@lists.alpinelinux.org-5261cecb.rsa.pub
etc/apk/protected_paths.d/
etc/apk/repositories
etc/apk/world
etc/conf.d/
...
```
Sure enough, it’s an [Alpine][3] image, which you might have guessed if you noticed that the other layers used an `apk` command to install packages. Let’s extract the tarball and look around:

```
$ mkdir files
$ cd files
$ tar xf ../layer.tar
$ ls
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
$ cat etc/alpine-release
3.11.6
```
If you pull `alpine:3.11.6` and extract it, you’ll find that there’s one non-empty layer inside it, and the `layer.tar` is identical to the `layer.tar` in the base layer of the Prettier image.
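Comparing two `layer.tar` files byte-for-byte is easiest with a checksum. A minimal sketch (the demo uses throwaway temp files; the real paths would be the two extracted `layer.tar` files):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large layer tarballs don't need to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with two identical throwaway files standing in for the two layer.tar files
tmp = Path(tempfile.mkdtemp())
a = tmp / "prettier-layer.tar"
b = tmp / "alpine-layer.tar"
a.write_bytes(b"same bytes")
b.write_bytes(b"same bytes")

print(sha256_file(a) == sha256_file(b))  # True
```

If the digests match, the layers are identical, which is what confirms the `FROM alpine:3.11.6` guess here.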

Just for the heck of it, what’s in the other two non-empty layers? The second layer is the main layer containing the Prettier installation. It has 528 entries, including Prettier, a bunch of dependencies, and certificate updates:

```
...
usr/lib/libuv.so.1
usr/lib/libuv.so.1.0.0
usr/lib/node_modules/
usr/lib/node_modules/prettier/
usr/lib/node_modules/prettier/LICENSE
usr/lib/node_modules/prettier/README.md
usr/lib/node_modules/prettier/bin-prettier.js
usr/lib/node_modules/prettier/doc.js
usr/lib/node_modules/prettier/index.js
usr/lib/node_modules/prettier/package.json
usr/lib/node_modules/prettier/parser-angular.js
usr/lib/node_modules/prettier/parser-babel.js
usr/lib/node_modules/prettier/parser-flow.js
usr/lib/node_modules/prettier/parser-glimmer.js
usr/lib/node_modules/prettier/parser-graphql.js
usr/lib/node_modules/prettier/parser-html.js
usr/lib/node_modules/prettier/parser-markdown.js
usr/lib/node_modules/prettier/parser-postcss.js
usr/lib/node_modules/prettier/parser-typescript.js
usr/lib/node_modules/prettier/parser-yaml.js
usr/lib/node_modules/prettier/standalone.js
usr/lib/node_modules/prettier/third-party.js
usr/local/
usr/local/share/
usr/local/share/ca-certificates/
usr/sbin/
usr/sbin/update-ca-certificates
usr/share/
usr/share/ca-certificates/
usr/share/ca-certificates/mozilla/
usr/share/ca-certificates/mozilla/ACCVRAIZ1.crt
usr/share/ca-certificates/mozilla/AC_RAIZ_FNMT-RCM.crt
usr/share/ca-certificates/mozilla/Actalis_Authentication_Root_CA.crt
...
```

The third layer is created by the `WORKDIR /work` statement, and it contains exactly one entry:

```
$ tar tf 6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/layer.tar
work/
```

[The original Dockerfile is in the Prettier git repo.][4]
--------------------------------------------------------------------------------

via: https://theartofmachinery.com/2021/03/18/reverse_engineering_a_docker_image.html

作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://github.com/tmknom/prettier
[2]: https://stedolan.github.io/jq/
[3]: https://www.alpinelinux.org/
[4]: https://github.com/tmknom/prettier/blob/35d2587ec052e880d73f73547f1ffc2b11e29597/Dockerfile
@ -0,0 +1,393 @@
[#]: subject: (Create a countdown clock with a Raspberry Pi)
[#]: via: (https://opensource.com/article/21/3/raspberry-pi-countdown-clock)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Create a countdown clock with a Raspberry Pi
======
Start counting down the days to your next holiday with a Raspberry Pi and an ePaper display.

![Alarm clocks with different time][1]

For 2021, [Pi Day][2] has come and gone, leaving fond memories and [plenty of Raspberry Pi projects][3] to try out. The days after any holiday can be hard when returning to work after high spirits and plenty of fun, and Pi Day is no exception. As we look into the face of the Ides of March, we can long for the joys of the previous, well, day. But fear no more, dear Pi Day celebrant! For today, we begin the long countdown to the next Pi Day!

OK, but seriously. I made a Pi Day countdown timer, and you can too!

A while back, I purchased a [Raspberry Pi Zero W][4] and recently used it to [figure out why my WiFi was so bad][5]. I was also intrigued by the idea of getting an ePaper display for the little Zero W. I didn't have a good use for one, but, dang it, it looked like fun! I purchased a little 2.13" [Waveshare display][6], which fit perfectly on top of the Raspberry Pi Zero W. It's easy to install: Just slip the display down onto the Raspberry Pi's GPIO headers and you're good to go.

I used [Raspberry Pi OS][7] for this project, and while it surely can be done with other operating systems, the `raspi-config` command, used below, is most easily available on Raspberry Pi OS.
### Set up the Raspberry Pi and the ePaper display

Setting up the Raspberry Pi to work with the ePaper display requires you to enable the Serial Peripheral Interface (SPI) in the Raspberry Pi software, install the BCM2835 C libraries (to access the GPIO functions for the Broadcom BCM 2835 chip on the Raspberry Pi), and install Python GPIO libraries to control the ePaper display. Finally, you need to install the Waveshare libraries for working with the 2.13" display using Python.

Here's a step-by-step walkthrough of how to do these tasks.

#### Enable SPI

The easiest way to enable SPI is with the Raspberry Pi `raspi-config` command. The SPI bus allows serial data communication to be used with devices—in this case, the ePaper display:

```
$ sudo raspi-config
```

From the menu that pops up, select **Interfacing Options** -> **SPI** -> **Yes** to enable the SPI interface, then reboot.
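After the reboot, you can confirm SPI is active by looking for the kernel's SPI device nodes. A small sketch (on a machine without SPI enabled, the list will simply be empty):

```python
from pathlib import Path

def spi_devices():
    """Return the SPI device nodes the kernel has exposed (e.g. /dev/spidev0.0)."""
    return sorted(str(p) for p in Path("/dev").glob("spidev*"))

print(spi_devices())  # e.g. ['/dev/spidev0.0', '/dev/spidev0.1'] on a Pi with SPI enabled
```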

#### Install BCM2835 libraries

As mentioned above, the BCM2835 libraries are software for the Broadcom BCM2835 chip on the Raspberry Pi, which allows access to the GPIO pins and the ability to use them to control devices.

As I'm writing this, the latest version of the Broadcom BCM 2835 libraries for the Raspberry Pi is v1.68. To install the libraries, you need to download the software tarball and build and install the software with `make`:

```
# Download the BCM2835 libraries and extract them
$ curl -sSL http://www.airspayce.com/mikem/bcm2835/bcm2835-1.68.tar.gz -o - | tar -xzf -

# Change directories into the extracted code
$ pushd bcm2835-1.68/

# Configure, build, check and install the BCM2835 libraries
$ sudo ./configure
$ sudo make check
$ sudo make install

# Return to the original directory
$ popd
```
#### Install required Python libraries

You also need some Python libraries to control the ePaper display from Python: the `RPi.GPIO` pip package. You also need the `python3-pil` package for drawing shapes. Apparently, the PIL package is all but dead, but there is an alternative, [Pillow][8]. I have not tested Pillow for this project, but it may work:

```
# Install the required Python libraries
$ sudo apt-get update
$ sudo apt-get install python3-pip python3-pil
$ sudo pip3 install RPi.GPIO
```

_Note: These instructions are for Python 3. You can find Python 2 instructions on Waveshare's website._
|
||||
#### Download Waveshare examples and Python libraries
|
||||
|
||||
Waveshare maintains a Git repository with Python and C libraries for working with its ePaper displays and some examples that show how to use them. For this countdown clock project, you will clone this repository and use the libraries for the 2.13" display:
|
||||
|
||||
|
||||
```
|
||||
# Clone the WaveShare e-Paper git repository
|
||||
$ git clone <https://github.com/waveshare/e-Paper.git>
|
||||
```
|
||||
|
||||
If you're using a different display or a product from another company, you'll need to use the appropriate software for your display.
|
||||
|
||||
Waveshare provides instructions for most of the above on its website:
|
||||
|
||||
* [WaveShare ePaper setup instructions][9]
|
||||
* [WaveShare ePaper libraries install instructions][10]
|
||||
|
||||
|
||||
|
||||
#### Get a fun font (optional)

You can display your timer however you want, but why not do it with a little style? Find a cool font to work with!

There are tons of [Open Font License][11] fonts available out there. I am particularly fond of Bangers. You've seen this if you've ever watched YouTube—it's used _all over_. It can be downloaded and dropped into your user's local shared fonts directory to make it available for any application, including this project:

```
# The "Bangers" font is an Open Font License licensed font by Vernon Adams (https://github.com/vernnobile) from Google Fonts
$ mkdir -p ~/.local/share/fonts
$ curl -sSL https://github.com/google/fonts/raw/master/ofl/bangers/Bangers-Regular.ttf -o ~/.local/share/fonts/Bangers-Regular.ttf
```

### Create a Pi Day countdown timer

Now that you have installed the software to work with the ePaper display and a fun font to use, you can build something cool with it: a timer to count down to the next Pi Day!

If you want, you can just grab the [countdown.py][12] Python file from this project's [GitHub repo][13] and skip to the end of this article.

For the curious, I'll break down that file, section by section.

#### Import some libraries

```
#!/usr/bin/python3
# -*- coding:utf-8 -*-
import logging
import os
import sys
import time

from datetime import datetime
from pathlib import Path
from PIL import Image, ImageDraw, ImageFont

logging.basicConfig(level=logging.INFO)

basedir = Path(__file__).parent
waveshare_base = basedir.joinpath('e-Paper', 'RaspberryPi_JetsonNano', 'python')
libdir = waveshare_base.joinpath('lib')
```

At the start, the Python script imports some standard libraries used later in the script. You also need to add `Image`, `ImageDraw`, and `ImageFont` from the PIL package, which you'll use to draw some simple geometric shapes. Finally, set some variables for the local `lib` directory that contains the Waveshare Python libraries for working with the 2.13" display, which you can use later to load the library from the local directory.

#### Font size helper function

The next part of the script has a helper function for setting the font size for your chosen font: Bangers-Regular.ttf. It takes an integer for the font size and returns an ImageFont object you can use with the display:

```
def set_font_size(font_size):
    logging.info("Loading font...")
    return ImageFont.truetype(f"{basedir.joinpath('Bangers-Regular.ttf').resolve()}", font_size)
```

#### Countdown logic

Next is a small function that calculates the meat of this project: how long it is until the next Pi Day. If it were, say, January, it would be relatively straightforward to count how many days are left, but you also need to consider whether Pi Day has already passed for the year (sadface), and if so, count how very, very many days are ahead until you can celebrate again:

```
def countdown(now):
    piday = datetime(now.year, 3, 14)

    # Add a year if we're past PiDay
    if piday < now:
        piday = datetime((now.year + 1), 3, 14)

    days = (piday - now).days

    logging.info(f"Days till piday: {days}")
    return days
```
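A quick standalone sanity check of this logic (re-declaring `countdown` here without the logging so the sketch can run anywhere):

```python
from datetime import datetime

def countdown(now):
    piday = datetime(now.year, 3, 14)
    if piday < now:
        # Pi Day already passed this year; target next year's
        piday = datetime(now.year + 1, 3, 14)
    return (piday - now).days

print(countdown(datetime(2021, 3, 13)))  # 1
print(countdown(datetime(2021, 3, 14)))  # 0  (it's Pi Day!)
print(countdown(datetime(2021, 3, 15)))  # 364
```

Note that on Pi Day itself the function returns 0, which is exactly what the `if days == 0` celebration branch in the display loop later relies on.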
#### The main function

Finally, you get to the main function, which initializes the display and begins writing data to it. In this case, you'll write a welcome message and then begin the countdown to the next Pi Day. But first, you need to load the Waveshare library:

```
def main():

    if os.path.exists(libdir):
        sys.path.append(f"{libdir}")
        from waveshare_epd import epd2in13_V2
    else:
        logging.fatal(f"not found: {libdir}")
        sys.exit(1)
```

The snippet above checks to make sure the library has been downloaded to a directory alongside the countdown script, and then it loads the `epd2in13_V2` library. If you're using a different display, you will need to use a different library. You can also write your own if you are so inclined. I found it kind of interesting to read the Python code that Waveshare provides with the display. It's considerably less complicated than I would have imagined it to be, if somewhat tedious.
The next bit of code creates an EPD (ePaper Display) object to interact with the display and initializes the hardware:

```
    logging.info("Starting...")
    try:
        # Create a display object
        epd = epd2in13_V2.EPD()

        # Initialize the display, and make sure it's clear
        # ePaper keeps its state unless updated!
        logging.info("Initialize and clear...")
        epd.init(epd.FULL_UPDATE)
        epd.Clear(0xFF)
```

An interesting aside about ePaper: It uses power only when it changes a pixel from white to black or vice-versa. This means when the power is removed from the device or the application stops for whatever reason, whatever was on the screen remains. That's great from a power-consumption perspective, but it also means you need to clear the display when starting up, or your script will just write over whatever is already on the screen. Hence, `epd.Clear(0xFF)` is used to clear the display when the script starts.

Next, create a "canvas" where you will draw the rest of your display output:

```
        # Create an image object
        # NOTE: The "epd.height" is the LONG side of the screen
        # NOTE: The "epd.width" is the SHORT side of the screen
        # Counter-intuitive...
        logging.info(f"Creating canvas - height: {epd.height}, width: {epd.width}")
        image = Image.new('1', (epd.height, epd.width), 255)  # 255: clear the frame
        draw = ImageDraw.Draw(image)
```

This matches the width and height of the display—but it is somewhat counterintuitive, in that the short side of the display is the width. I think of the long side as the width, so this is just something to note. Note that `epd.height` and `epd.width` are set by the Waveshare library to correspond to the device you're using.

#### Welcome message

Next, you'll start to draw something. This involves setting data on the "canvas" object you created above. This doesn't draw it to the ePaper display yet—you're just building the image you want right now. Create a little welcome message celebrating Pi Day, with an image of a piece of pie, drawn by yours truly just for this project:

![drawing of a piece of pie][14]

(Chris Collins, [CC BY-SA 4.0][15])

Cute, huh?

```
        logging.info("Set text...")
        bangers64 = set_font_size(64)
        draw.text((0, 30), 'PI DAY!', font = bangers64, fill = 0)

        logging.info("Set BMP...")
        bmp = Image.open(basedir.joinpath("img", "pie.bmp"))
        image.paste(bmp, (150, 2))
```

Finally, _finally_, you get to display the canvas you drew, and it's a little bit anti-climactic:

```
        logging.info("Display text and BMP")
        epd.display(epd.getbuffer(image))
```

That bit above updates the display to show the image you drew.

Next, prepare another image to display your countdown timer.

#### Pi Day countdown timer

First, create a new image object that you can use to draw the display. Also, set some new font sizes to use for the image:

```
        logging.info("Pi Day countdown; press CTRL-C to exit")
        piday_image = Image.new('1', (epd.height, epd.width), 255)
        piday_draw = ImageDraw.Draw(piday_image)

        # Set some more fonts
        bangers36 = set_font_size(36)
        bangers64 = set_font_size(64)
```

To display a ticker like a countdown, it's more efficient to update part of the image, changing the display for only what has changed in the data you want to draw. The next bit of code prepares the display to function this way:

```
        # Prep for updating display
        epd.displayPartBaseImage(epd.getbuffer(piday_image))
        epd.init(epd.PART_UPDATE)
```

Finally, you get to the timer bit, starting an infinite loop that checks how long it is until the next Pi Day and displays the countdown on the ePaper display. If it actually _is_ Pi Day, you can handle that with a little celebration message:

```
        while (True):
            days = countdown(datetime.now())
            unit = get_days_unit(days)

            # Clear the bottom half of the screen by drawing a rectangle filled with white
            piday_draw.rectangle((0, 50, 250, 122), fill = 255)

            # Draw the Header
            piday_draw.text((10, 10), "Days till Pi-day:", font = bangers36, fill = 0)

            if days == 0:
                # Draw the Pi Day celebration text!
                piday_draw.text((0, 50), "It's Pi Day!", font = bangers64, fill = 0)
            else:
                # Draw how many days until Pi Day
                piday_draw.text((70, 50), f"{str(days)} {unit}", font = bangers64, fill = 0)

            # Render the screen
            epd.displayPartial(epd.getbuffer(piday_image))
            time.sleep(5)
```

The last bit of the script does some error handling, including some code to catch keyboard interrupts so that you can stop the infinite loop with **Ctrl**+**C**, and a small function to print "day" or "days" depending on whether or not the output should be singular (for that one, single day each year when it's appropriate):

```
    except IOError as e:
        logging.info(e)

    except KeyboardInterrupt:
        logging.info("Exiting...")
        epd.init(epd.FULL_UPDATE)
        epd.Clear(0xFF)
        time.sleep(1)
        epd2in13_V2.epdconfig.module_exit()
        exit()

def get_days_unit(count):
    if count == 1:
        return "day"

    return "days"

if __name__ == "__main__":
    main()
```

And there you have it! A script to count down and display how many days are left until Pi Day! Here's an action shot on my Raspberry Pi (sped up by 86,400; I don't have nearly enough disk space to save a day-long video):

![Pi Day Countdown Timer In Action][16]

(Chris Collins, [CC BY-SA 4.0][15])

#### Install the systemd service (optional)

If you'd like the countdown display to run whenever the system is turned on, and without you having to be logged in and run the script, you can install the optional systemd unit as a [systemd user service][17].

Copy the [piday.service][18] file on GitHub to `${HOME}/.config/systemd/user`, first creating the directory if it doesn't exist. Then you can enable the service and start it:

```
$ mkdir -p ~/.config/systemd/user
$ cp piday.service ~/.config/systemd/user
$ systemctl --user enable piday.service
$ systemctl --user start piday.service

# Enable lingering, to create a user session at boot
# and allow services to run after logout
$ loginctl enable-linger $USER
```

The script will output to the systemd journal, and the output can be viewed with the `journalctl` command.

### It's beginning to look a lot like Pi Day!

And _there_ you have it! A Pi Day countdown timer, displayed on an ePaper display using a Raspberry Pi Zero W, and starting on system boot with a systemd unit file! Now there are just 350-something days until we can once again come together and celebrate the fantastic device that is the Raspberry Pi. And we can see exactly how many days at a glance with our tiny project.

But in truth, anyone can hold Pi Day in their hearts year-round, so enjoy creating some fun and educational projects with your own Raspberry Pi!
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/raspberry-pi-countdown-clock

作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://en.wikipedia.org/wiki/Pi_Day
[3]: https://opensource.com/tags/raspberry-pi
[4]: https://www.raspberrypi.org/products/raspberry-pi-zero-w/
[5]: https://opensource.com/article/21/3/troubleshoot-wifi-go-raspberry-pi
[6]: https://www.waveshare.com/product/displays/e-paper.htm
[7]: https://www.raspberrypi.org/software/operating-systems/
[8]: https://pypi.org/project/Pillow/
[9]: https://www.waveshare.com/wiki/2.13inch_e-Paper_HAT
[10]: https://www.waveshare.com/wiki/Libraries_Installation_for_RPi
[11]: https://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=OFL
[12]: https://github.com/clcollins/epaper-pi-ex/blob/main/countdown.py
[13]: https://github.com/clcollins/epaper-pi-ex/
[14]: https://opensource.com/sites/default/files/uploads/pie.png (drawing of a piece of pie)
[15]: https://creativecommons.org/licenses/by-sa/4.0/
[16]: https://opensource.com/sites/default/files/uploads/piday_countdown.gif (Pi Day Countdown Timer In Action)
[17]: https://wiki.archlinux.org/index.php/systemd/User
[18]: https://github.com/clcollins/epaper-pi-ex/blob/main/piday.service
213
sources/tech/20210319 Managing deb Content in Foreman.md
Normal file
@ -0,0 +1,213 @@
[#]: subject: (Managing deb Content in Foreman)
[#]: via: (https://opensource.com/article/21/3/linux-foreman)
[#]: author: (Maximilian Kolb https://opensource.com/users/kolb)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Managing deb Content in Foreman
======
Use Foreman to serve software packages and errata for certain Linux systems.

![Package wrapped with brown paper and red bow][1]

Foreman is a data center automation tool to deploy, configure, and patch hosts. It relies on Katello for content management, which in turn relies on Pulp to manage repositories. See [_Manage content using Pulp Debian_][2] for more information.

Pulp offers many plugins for different content types, including RPM packages, Ansible roles and collections, PyPI packages, and deb content. The latter is called the **pulp_deb** plugin.

### Content management in Foreman

The basic idea for providing content to hosts is to mirror repositories and provide content to hosts via either the Foreman server or attached Smart Proxies.

This tutorial is a step-by-step guide to adding deb content to Foreman and serving hosts running Debian 10. "Deb content" refers to software packages and errata for Debian-based Linux systems (e.g., Debian and Ubuntu). This article focuses on [Debian 10 Buster][3], but the instructions also work for [Ubuntu 20.04 Focal Fossa][4], unless noted otherwise.
### 1\. Create the operating system

#### 1.1. Create an architecture

Navigate to **Hosts > Architectures** and create a new architecture (if the architecture where you want to deploy Debian 10 hosts is missing). This tutorial assumes your hosts run on the x86_64 architecture, as Foreman does.

#### 1.2. Create installation media

Navigate to **Hosts > Installation Media** and create new Debian 10 installation media. Use the upstream repository URL <http://ftp.debian.org/debian/>.

Select the Debian operating system family for either Debian or Ubuntu.

Alternatively, you can also use a Debian mirror. However, content synced via Pulp does not work for two reasons: first, the `linux` and `initrd.gz` files are not in the expected locations; second, the `Release` file is not signed.

#### 1.3. Create an operating system

Navigate to **Hosts > Operating Systems** and create a new operating system called Debian 10. Use **10** as the major version and leave the minor version field blank. For Ubuntu, use **20.04** as the major version and leave the minor version field blank.

![Creating an operating system entry][5]

(Maximilian Kolb, [CC BY-SA 4.0][6])

Select the Debian operating system family for Debian or Ubuntu, and specify the release name (e.g., **Buster** for Debian 10 or **Stretch** for Debian 9). Select the default partition tables and provisioning templates, i.e., **Preseed default ***.

#### 1.4. Adapt default Preseed templates (optional)

Navigate to **Hosts > Partition Tables** and **Hosts > Provisioning Templates** and adapt the default **Preseed** templates if necessary. Note that you need to clone locked templates before editing them. Cloned templates will not receive updates with newer Foreman versions. All Debian-based systems use **Preseed** templates, which are included with Foreman by default.

#### 1.5. Associate the templates

Navigate to **Hosts > Provisioning Templates** and search for **Preseed**. Associate all desired provisioning templates with the operating system. Then, navigate to **Hosts > Operating Systems** and select **Debian 10** as the operating system. Select the **Templates** tab and associate any provisioning templates that you want.

### 2\. Synchronize content

#### 2.1. Create content credentials for Debian upstream repositories and Debian client

Navigate to **Content > Content Credentials** and add the required GPG public keys as content credentials for Foreman to verify the deb packages' authenticity. To obtain the necessary GPG public keys, verify the **Release** file and export the corresponding GPG public key as follows:

  * **Debian 10 main:**

    ```
    wget http://ftp.debian.org/debian/dists/buster/Release && wget http://ftp.debian.org/debian/dists/buster/Release.gpg
    gpg --verify Release.gpg Release
    gpg --keyserver keys.gnupg.net --recv-key 16E90B3FDF65EDE3AA7F323C04EE7237B7D453EC
    gpg --keyserver keys.gnupg.net --recv-key 0146DC6D4A0B2914BDED34DB648ACFD622F3D138
    gpg --keyserver keys.gnupg.net --recv-key 6D33866EDD8FFA41C0143AEDDCC9EFBF77E11517
    gpg --armor --export E0B11894F66AEC98 DC30D7C23CBBABEE DCC9EFBF77E11517 > debian_10_main.txt
    ```

  * **Debian 10 security:**

    ```
    wget http://security.debian.org/debian-security/dists/buster/updates/Release && wget http://security.debian.org/debian-security/dists/buster/updates/Release.gpg
    gpg --verify Release.gpg Release
    gpg --keyserver keys.gnupg.net --recv-key 379483D8B60160B155B372DDAA8E81B4331F7F50
    gpg --keyserver keys.gnupg.net --recv-key 5237CEEEF212F3D51C74ABE0112695A0E562B32A
    gpg --armor --export EDA0D2388AE22BA9 4DFAB270CAA96DFA > debian_10_security.txt
    ```

  * **Debian 10 updates:**

    ```
    wget http://ftp.debian.org/debian/dists/buster-updates/Release && wget http://ftp.debian.org/debian/dists/buster-updates/Release.gpg
    gpg --verify Release.gpg Release
    gpg --keyserver keys.gnupg.net --recv-key 16E90B3FDF65EDE3AA7F323C04EE7237B7D453EC
    gpg --keyserver keys.gnupg.net --recv-key 0146DC6D4A0B2914BDED34DB648ACFD622F3D138
    gpg --armor --export E0B11894F66AEC98 DC30D7C23CBBABEE > debian_10_updates.txt
    ```

  * **Debian 10 client:**

    ```
    wget --output-document=debian_10_client.txt https://apt.atix.de/atix_gpg.pub
    ```

You can select the respective ASCII-armored TXT files to upload to your Foreman instance.
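Before uploading, it is worth checking that each exported file really is an ASCII-armored public key block. A tiny sketch (the sample content below is illustrative, not a real key):

```python
def looks_like_armored_pubkey(text):
    """Heuristic check for an ASCII-armored OpenPGP public key block."""
    lines = [line.strip() for line in text.strip().splitlines()]
    return (bool(lines)
            and lines[0] == "-----BEGIN PGP PUBLIC KEY BLOCK-----"
            and lines[-1] == "-----END PGP PUBLIC KEY BLOCK-----")

sample = """-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFKjTU4BEAC...
-----END PGP PUBLIC KEY BLOCK-----"""

print(looks_like_armored_pubkey(sample))       # True
print(looks_like_armored_pubkey("not a key"))  # False
```

This only validates the armor framing; Foreman itself will reject files that are not valid GPG public keys.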
|
||||
|
||||
#### 2.2. Create products called Debian 10 and Debian 10 client
|
||||
|
||||
Navigate to **Content > Hosts** and create two new products.

#### 2.3. Create the necessary Debian 10 repositories

Navigate to **Content > Products** and select the **Debian 10** product. Create three **deb** repositories:

  * **Debian 10 main:**
    * URL: `http://ftp.debian.org/debian/`
    * Releases: `buster`
    * Component: `main`
    * Architecture: `amd64`

  * **Debian 10 security:**
    * URL: `http://deb.debian.org/debian-security/`
    * Releases: `buster/updates`
    * Component: `main`
    * Architecture: `amd64`

If you want, you can add a self-hosted errata service: `https://github.com/ATIX-AG/errata_server` and `https://github.com/ATIX-AG/errata_parser`

  * **Debian 10 updates:**
    * URL: `http://ftp.debian.org/debian/`
    * Releases: `buster-updates`
    * Component: `main`
    * Architecture: `amd64`

Select the content credentials that you created in step 2.1. Adjust the components and architecture as needed. Navigate to **Content > Products** and select the **Debian 10 client** product. Create a **deb** repository as follows:

  * **Debian 10 subscription-manager:**
    * URL: `https://apt.atix.de/Debian10/`
    * Releases: `stable`
    * Component: `main`
    * Architecture: `amd64`

Select the content credentials you created in step 2.1. The Debian 10 client contains the **subscription-manager** package, which runs on each content host to receive content from the Foreman Server or an attached Smart Proxy. Navigate to [apt.atix.de][7] for further instructions.

#### 2.4. Synchronize the repositories

If you want, you can create a sync plan to sync the **Debian 10** and **Debian 10 client** products periodically. To sync the products once, click the **Select Action > Sync Now** button on the **Products** page.

#### 2.5. Create content views

Navigate to **Content > Content Views** and create a content view called **Debian 10** comprising the Debian upstream repositories created in the **Debian 10** product, and publish a new version. Do the same for the **Debian 10 client** repository of the **Debian 10 client** product.

#### 2.6. Create a composite content view

Create a new composite content view called **Composite Debian 10** comprising the previously published **Debian 10** and **Debian 10 client** content views, and publish a new version. You may optionally add other content views of your choice (e.g., Puppet).

![Composite content view][8]

(Maximilian Kolb, [CC BY-SA 4.0][6])

#### 2.7. Create an activation key

Navigate to **Content > Activation Keys** and create a new activation key called **debian-10**:

  * Select the **Library** lifecycle environment and add the **Composite Debian 10** content view.
  * On the **Details** tab, assign the correct lifecycle environment and composite content view.
  * On the **Subscriptions** tab, assign the necessary subscriptions, i.e., the **Debian 10** and **Debian 10 client** products.

### 3. Deploy a host

#### 3.1. Enable provisioning via Port 8000

Connect to your Foreman instance via SSH and edit the following file:

```
/etc/foreman-proxy/settings.yml
```

Search for `:http_port: 8000` and make sure it is not commented out (i.e., the line does not start with a `#`).
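
If you prefer to make the change from the shell, the following is a hedged sketch that operates on a temporary copy of the file; point the `sed` command at `/etc/foreman-proxy/settings.yml` on your instance instead:

```shell
# Work on a temporary copy; target /etc/foreman-proxy/settings.yml for real use.
f=$(mktemp)
printf '#:http_port: 8000\n' > "$f"

# Strip a leading "#" (and any spaces after it) from the :http_port: line.
sed -i 's/^#[[:space:]]*\(:http_port: 8000\)/\1/' "$f"

# Confirm the line is active: prints 1 when exactly one uncommented match remains.
grep -c '^:http_port: 8000' "$f"
```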

#### 3.2. Create a host group

Navigate to **Configure > Host Groups** and create a new host group called **Debian 10**. Check out the Foreman documentation on [creating host groups][9], and make sure to select the correct entries on the **Operating System** and **Activation Keys** tabs.

#### 3.3. Create a new host

Navigate to **Hosts > Create Host** and either select the host group as described above or manually enter the identical information.

> Tip: Deploying hosts running Ubuntu 20.04 is even easier, as you can use its official installation media ISO image and do offline installations. Check out orcharhino's [Managing Ubuntu Systems Guide][10] for more information.

[ATIX][11] has developed several Foreman plugins, and is an integral part of the [Foreman open source ecosystem][12]. The community's feedback on our contributions is passed back to our customers, as we continuously strive to improve our downstream product, [orcharhino][13].

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/linux-foreman

作者:[Maximilian Kolb][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/kolb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brown-package-red-bow.jpg?itok=oxZYQzH- (Package wrapped with brown paper and red bow)
[2]: https://opensource.com/article/20/10/pulp-debian
[3]: https://wiki.debian.org/DebianBuster
[4]: https://releases.ubuntu.com/20.04/
[5]: https://opensource.com/sites/default/files/uploads/foreman-debian_content_deb_operating_system_entry.png (Creating an operating system entry)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://apt.atix.de/
[8]: https://opensource.com/sites/default/files/uploads/foreman-debian_content_deb_composite_content_view.png (Composite content view)
[9]: https://docs.theforeman.org/nightly/Managing_Hosts/index-foreman-el.html#creating-a-host-group
[10]: https://docs.orcharhino.com/or/docs/sources/usage_guides/managing_ubuntu_systems_guide.html#musg_deploy_hosts
[11]: https://atix.de/
[12]: https://theforeman.org/2020/10/atix-in-the-foreman-community.html
[13]: https://orcharhino.com/

[#]: subject: (5 everyday sysadmin tasks to automate with Ansible)
[#]: via: (https://opensource.com/article/21/3/ansible-sysadmin)
[#]: author: (Mike Calizo https://opensource.com/users/mcalizo)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

5 everyday sysadmin tasks to automate with Ansible
======

Get more efficient and avoid errors by automating repeatable daily tasks with Ansible.

![Tips and gears turning][1]

If you hate performing repetitive tasks, then I have a proposition for you. Learn [Ansible][2]!

Ansible is a tool that helps you complete your daily tasks more easily and quickly, so you can use your time in more effective ways, like learning new technology that matters. It's a great tool for sysadmins because it helps you achieve standardization and collaborate on daily activities, including:

  1. Installing, configuring, and provisioning servers and applications
  2. Updating and upgrading systems regularly
  3. Monitoring, mitigating, and troubleshooting issues

Typically, many of these essential daily tasks require manual steps that depend upon an individual's skills, creating inconsistencies and resulting in configuration drift. This might be OK in a small-scale implementation where you're managing one server and know what you are doing. But what happens when you are managing hundreds or thousands of servers?

If you are not careful, these manual, repeatable tasks can cause delays and issues because of human error, and those errors might impact you and your organization's reputation.

This is where the value of automation comes into the picture. And [Ansible][3] is a perfect tool for automating these repeatable daily tasks.

Some of the reasons to automate are:

  1. You want a consistent and stable environment.
  2. You want to foster standardization.
  3. You want less downtime and fewer severe incidents so you can enjoy your life.
  4. You want to have a beer instead of troubleshooting issues!

This article offers some examples of the daily tasks a sysadmin can automate using Ansible. I put the playbooks and roles from this article into a [sysadmin tasks repository][4] on GitHub to make it easier for you to use them.

These playbooks are structured like this (my notes are preceded with `==>`):
```
[root@homebase 6_sysadmin_tasks]# tree -L 2
.
├── ansible.cfg ==> the Ansible config file that controls how Ansible behaves
├── ansible.log
├── inventory
│   ├── group_vars
│   ├── hosts ==> the inventory file that contains the list of my target servers
│   └── host_vars
├── LICENSE
├── playbooks ==> the directory that contains the playbooks used in this article
│   ├── c_logs.yml
│   ├── c_stats.yml
│   ├── c_uptime.yml
│   ├── inventory
│   ├── r_cron.yml
│   ├── r_install.yml
│   └── r_script.yml
├── README.md
├── roles ==> the directory that contains the roles used in this article
│   ├── check_logs
│   ├── check_stats
│   ├── check_uptime
│   ├── install_cron
│   ├── install_tool
│   └── run_scr
└── templates ==> the directory that contains the Jinja templates
    ├── cron_output.txt.j2
    ├── sar.txt.j2
    └── scr_output.txt.j2
```

The inventory looks like this:

```
[root@homebase 6_sysadmin_tasks]# cat inventory/hosts
[rhel8]
master ansible_ssh_host=192.168.1.12
workernode1 ansible_ssh_host=192.168.1.15

[rhel8:vars]
ansible_user=ansible ==> Please update this with your preferred Ansible user
```

Here are five daily sysadmin tasks that you can automate with Ansible.

### 1. Check server uptime

You need to make sure your servers are up and running all the time. Organizations have enterprise monitoring tools to monitor server and application uptime, but from time to time, those automated tools fail, and you need to jump in and verify a server's status. It takes a lot of time to verify each server's uptime manually, and the more servers you have, the more time you have to spend. With automation, this verification can be done in minutes.

Use the [check_uptime][5] role and the `c_uptime.yml` playbook:

```
[root@homebase 6_sysadmin_tasks]# ansible-playbook -i inventory/hosts playbooks/c_uptime.yml -k
SSH password:
PLAY [Check Uptime for Servers] ****************************************************************************************************************************************
TASK [check_uptime : Capture timestamp] *************************************************************************************************
.
snip...
.
PLAY RECAP *************************************************************************************************************************************************************
master : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
workernode1 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[root@homebase 6_sysadmin_tasks]#
```

The playbook's output looks like this:

```
[root@homebase 6_sysadmin_tasks]# cat /var/tmp/uptime-master-20210221004417.txt
-----------------------------------------------------
Uptime for master
-----------------------------------------------------
00:44:17 up 44 min, 2 users, load average: 0.01, 0.09, 0.09
-----------------------------------------------------
[root@homebase 6_sysadmin_tasks]# cat /var/tmp/uptime-workernode1-20210221184525.txt
-----------------------------------------------------
Uptime for workernode1
-----------------------------------------------------
18:45:26 up 44 min, 2 users, load average: 0.01, 0.01, 0.00
-----------------------------------------------------
```

Using Ansible, you can get the status of multiple servers in a human-readable format with less effort, and the [Jinja template][6] allows you to adjust the output based on your needs. With more automation, you can run this on a schedule and send the output through email for reporting purposes.
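
A role like this boils down to only a few tasks. The following is a hypothetical sketch (task names and the template file name are illustrative, not copied from the repository) of how a check_uptime-style role could be written:

```yaml
# Hypothetical sketch of a check_uptime-style task list; names are illustrative.
- name: Capture timestamp
  command: date +%Y%m%d%H%M%S
  register: timestamp
  changed_when: false

- name: Capture uptime
  command: uptime
  register: uptime_output
  changed_when: false

- name: Render the report from a Jinja template
  template:
    src: uptime.txt.j2
    dest: "/var/tmp/uptime-{{ inventory_hostname }}-{{ timestamp.stdout }}.txt"
```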

### 2. Configure additional cron jobs

You need to update your servers' scheduled jobs regularly based on infrastructure and application requirements. This may seem like a menial job, but it has to be done correctly and consistently. Imagine the time it takes to do this manually on hundreds of production servers. If it is done wrong, it can impact production applications by causing downtime, or it can hurt server performance if scheduled jobs overlap.

Use the [install_cron][7] role and the `r_cron.yml` playbook:

```
[root@homebase 6_sysadmin_tasks]# ansible-playbook -i inventory/hosts playbooks/r_cron.yml -k
SSH password:
PLAY [Install additional cron jobs for root] ***************************************************************************************************************************
.
snip
.
PLAY RECAP *************************************************************************************************************************************************************
master : ok=10 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
workernode1 : ok=10 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

Verify the playbook's results:

```
[root@homebase 6_sysadmin_tasks]# ansible -i inventory/hosts all -m shell -a "crontab -l" -k
SSH password:
master | CHANGED | rc=0 >>
1 2 3 4 5 /usr/bin/ls /tmp
#Ansible: Iotop Monitoring
0 5,2 * * * /usr/sbin/iotop -b -n 1 >> /var/tmp/iotop.log 2>> /var/tmp/iotop.err
workernode1 | CHANGED | rc=0 >>
1 2 3 4 5 /usr/bin/ls /tmp
#Ansible: Iotop Monitoring
0 5,2 * * * /usr/sbin/iotop -b -n 1 >> /var/tmp/iotop.log 2>> /var/tmp/iotop.err
```

Using Ansible, you can update the crontab entry on all your servers in a fast and consistent way. You can also report the updated crontab's status using a simple ad-hoc Ansible command to verify the recently applied changes.
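
Entries like the "Iotop Monitoring" job above map naturally onto Ansible's `cron` module, which also writes the `#Ansible:` comment line visible in the crontab output. A minimal sketch, assuming the same schedule and command shown above:

```yaml
# Sketch of a cron-module task matching the "Iotop Monitoring" entry above.
# Ansible marks the managed entry with an "#Ansible: Iotop Monitoring" comment.
- name: Iotop Monitoring
  cron:
    name: "Iotop Monitoring"
    minute: "0"
    hour: "5,2"
    job: "/usr/sbin/iotop -b -n 1 >> /var/tmp/iotop.log 2>> /var/tmp/iotop.err"
```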

### 3. Gather server stats and sars

During routine troubleshooting, and to diagnose server performance or application issues, you need to gather system activity reports (sars) and server stats. In most scenarios, server logs contain very important information that developers or ops teams need to solve specific problems that affect the overall environment.

Security teams are very particular when conducting investigations, and most of the time, they want to look at logs for multiple servers. You need to find an easy way to collect this documentation. It's even better if you can delegate the collection task to them.

Do this with the [check_stats][8] role and the `c_stats.yml` playbook:

```
$ ansible-playbook -i inventory/hosts playbooks/c_stats.yml

PLAY [Check Stats/sar for Servers] ***********************************************************************************************************************************

TASK [check_stats : Get current date time] ***************************************************************************************************************************
changed: [master]
changed: [workernode1]
.
snip...
.
PLAY RECAP ***********************************************************************************************************************************************************
master : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
workernode1 : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

The output will look like this:

```
$ cat /tmp/sar-workernode1-20210221214056.txt
-----------------------------------------------------
sar output for workernode1
-----------------------------------------------------
Linux 4.18.0-193.el8.x86_64 (node1)     21/02/21        _x86_64_        (2 CPU)
21:39:30     LINUX RESTART      (2 CPU)
-----------------------------------------------------
```

### 4. Collect server logs

In addition to gathering server stats and sars information, you will also need to collect logs from time to time, especially if you need to help investigate issues.

Do this with the [check_logs][9] role and the `c_logs.yml` playbook:

```
$ ansible-playbook -i inventory/hosts playbooks/c_logs.yml -k
SSH password:

PLAY [Check Logs for Servers] ****************************************************************************************************************************************
.
snip
.
TASK [check_logs : Capture Timestamp] ********************************************************************************************************************************
changed: [master]
changed: [workernode1]
PLAY RECAP ***********************************************************************************************************************************************************
master : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
workernode1 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

To confirm the output, open the files generated in the dump location. The logs should look like this:

```
$ cat /tmp/logs-workernode1-20210221214758.txt | more
-----------------------------------------------------
Logs gathered: /var/log/messages for workernode1
-----------------------------------------------------

Feb 21 18:00:27 node1 kernel: Command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-4.18.0-193.el8.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet
Feb 21 18:00:27 node1 kernel: Disabled fast string operations
Feb 21 18:00:27 node1 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 21 18:00:27 node1 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 21 18:00:27 node1 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 21 18:00:27 node1 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 21 18:00:27 node1 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
```
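
A log-gathering role like this can be approximated with a couple of tasks. This hypothetical sketch (file names and variables are illustrative, not taken from the repository) reads a log on the managed host and writes a stamped dump file:

```yaml
# Hypothetical sketch of a log-gathering task list; names are illustrative.
- name: Capture Timestamp
  command: date +%Y%m%d%H%M%S
  register: timestamp
  changed_when: false

- name: Read the tail of the system log
  command: tail -n 200 /var/log/messages
  register: log_output
  changed_when: false

- name: Write the gathered log to the dump location
  copy:
    content: "{{ log_output.stdout }}"
    dest: "/tmp/logs-{{ inventory_hostname }}-{{ timestamp.stdout }}.txt"
```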

### 5. Install or remove packages and software

You need to be able to install and update software and packages on your systems consistently and rapidly. Reducing the time it takes to install or update packages and software avoids unnecessary downtime of servers and applications.

Do this with the [install_tool][10] role and the `r_install.yml` playbook:

```
$ ansible-playbook -i inventory/hosts playbooks/r_install.yml -k
SSH password:
PLAY [Install additional tools/packages] ***********************************************************************************

TASK [install_tool : Install specified tools in the role vars] *************************************************************
ok: [master] => (item=iotop)
ok: [workernode1] => (item=iotop)
ok: [workernode1] => (item=traceroute)
ok: [master] => (item=traceroute)

PLAY RECAP *****************************************************************************************************************
master : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
workernode1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

This example manages two specific packages and versions defined in a vars file. Using Ansible automation, you can install or remove multiple packages or software faster than doing it manually. You can also use the vars file to define the versions of the packages you want to manage (the `ins_action` variable presumably sets the desired package state, such as `present` or `absent`):

```
$ cat roles/install_tool/vars/main.yml
---
# vars file for install_tool
ins_action: absent
package_list:
- iotop-0.6-16.el8.noarch
- traceroute
```
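
A plausible implementation of the role's main task is a single loop over `package_list` using the `dnf` module. This is a hedged sketch; the actual task in the repository may differ:

```yaml
# Hypothetical sketch of install_tool's main task; assumes the vars file above.
- name: Install specified tools in the role vars
  dnf:
    name: "{{ item }}"
    state: "{{ ins_action }}"
  loop: "{{ package_list }}"
```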

### Embrace automation

To be an effective sysadmin, you need to embrace automation to encourage standardization and collaboration within your team. Ansible enables you to do more in less time so that you can spend your time on more exciting projects instead of doing repeatable tasks like managing your incident and problem management processes.

With more free time on your hands, you can learn more and make yourself available for the next career opportunity that comes your way.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/ansible-sysadmin

作者:[Mike Calizo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mcalizo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning)
[2]: https://www.ansible.com/
[3]: https://opensource.com/tags/ansible
[4]: https://github.com/mikecali/6_sysadmin_tasks
[5]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/check_uptime
[6]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html
[7]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/install_cron
[8]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/check_stats
[9]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/check_logs
[10]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/install_tool

[#]: subject: (6 WordPress plugins for restaurants and retailers)
[#]: via: (https://opensource.com/article/21/3/wordpress-plugins-retail)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

6 WordPress plugins for restaurants and retailers
======

The end of the pandemic won't be the end of curbside pickup, delivery, and other shopping conveniences, so set your website up for success with these plugins.

![An open for business sign.][1]

The pandemic changed how many people prefer to do business—probably permanently. Restaurants and other local retail establishments can no longer rely on walk-in trade, as they always have. Online ordering of food and other items has become the norm and the expectation. It is unlikely consumers will turn their backs on the convenience of e-commerce once the pandemic is over.

WordPress is a great platform for getting your business' message out to consumers and ensuring you're meeting their e-commerce needs. And its ecosystem of plugins extends the platform to increase its usefulness to you and your customers.

The six open source plugins described below will help you create a WordPress site that meets your customers' preferences for online shopping, curbside pickup, and delivery, and build your brand and your customer base—now and post-pandemic.

### E-commerce

![WooCommerce][2]

WooCommerce (Don Watkins, [CC BY-SA 4.0][3])

[WooCommerce][4] says it is the most popular e-commerce plugin for the WordPress platform. Its website says: "Our core platform is free, flexible, and amplified by a global community. The freedom of open source means you retain full ownership of your store's content and data forever." The plugin, which is under active development, enables you to create enticing web storefronts. It was created by WordPress developer [Automattic][5] and is released under the GPLv3.

### Order, delivery, and pickup

![Curbside Pickup][6]

Curbside Pickup (Don Watkins, [CC BY-SA 4.0][3])

[Curbside Pickup][7] is a complete system to manage your curbside pickup experience. It's ideal for any restaurant, library, retailer, or other organization that offers curbside pickup for purchases. The plugin, which is licensed GPLv3, works with any theme that supports WooCommerce.

![Food Store][8]

[Food Store][9]

If you're looking for an online food delivery and pickup system, [Food Store][9] could meet your needs. It extends WordPress' core functions and capabilities to convert your brick-and-mortar restaurant into a food-ordering hub. The plugin, licensed under GPLv2, is under active development with over 1,000 installations.

![RestroPress][10]

[RestroPress][11]

[RestroPress][11] is another option to add a food-ordering system to your website. The GPLv2-licensed plugin has over 4,000 installations and supports payment through PayPal, Amazon, and cash on delivery.

![RestaurantPress][12]

[RestaurantPress][13]

If you want to post the menu for your restaurant, bar, or cafe online, try [RestaurantPress][13]. According to its website, the plugin, which is available under a GPLv2 license, "provides modern responsive menu templates that adapt to any devices." It has over 2,000 installations and integrates with WooCommerce.

### Communications

![Corona Virus \(COVID-19\) Banner & Live Data][14]

Corona Virus (COVID-19) Banner & Live Data (Don Watkins, [CC BY-SA 4.0][3])

You can keep your customers informed about COVID-19 policies with the [Corona Virus Banner & Live Data][15] plugin. It adds a simple banner with live coronavirus information to your website. It has over 6,000 active installations and is open source under GPLv2.

![MailPoet][16]

MailPoet (Don Watkins, [CC BY-SA 4.0][3])

As rules and restrictions change rapidly, an email newsletter is a great way to keep your customers informed. The [MailPoet][17] WordPress plugin makes it easy to manage and email information about new offerings, hours, and more. Through MailPoet, website visitors can subscribe to your newsletter, which you can create and send with WordPress. It has over 300,000 installations and is open source under GPLv2.

### Prepare for the post-pandemic era

Pandemic-driven lockdowns made online shopping, curbside pickup, and home delivery necessities, but these shopping trends are not going anywhere. As the pandemic subsides, restrictions will ease, and we will start shopping, dining, and doing business in person more. Still, consumers have come to appreciate the ease and convenience of e-commerce, even for small local restaurants and stores, and these plugins will help your WordPress site meet their needs.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/wordpress-plugins-retail

作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_business_sign_store.jpg?itok=g4QibRqg (An open for business sign.)
[2]: https://opensource.com/sites/default/files/pictures/woocommerce.png (WooCommerce)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://wordpress.org/plugins/woocommerce/
[5]: https://automattic.com/
[6]: https://opensource.com/sites/default/files/pictures/curbsidepickup.png (Curbside Pickup)
[7]: https://wordpress.org/plugins/curbside-pickup/
[8]: https://opensource.com/sites/default/files/pictures/food-store.png (Food Store)
[9]: https://wordpress.org/plugins/food-store/
[10]: https://opensource.com/sites/default/files/pictures/restropress.png (RestroPress)
[11]: https://wordpress.org/plugins/restropress/
[12]: https://opensource.com/sites/default/files/pictures/restaurantpress.png (RestaurantPress)
[13]: https://wordpress.org/plugins/restaurantpress/
[14]: https://opensource.com/sites/default/files/pictures/covid19updatebanner.png (Corona Virus (COVID-19) Banner & Live Data)
[15]: https://wordpress.org/plugins/corona-virus-covid-19-banner/
[16]: https://opensource.com/sites/default/files/pictures/mailpoet1.png (MailPoet)
[17]: https://wordpress.org/plugins/mailpoet/

sources/tech/20210322 Productivity with Ulauncher.md

[#]: subject: (Productivity with Ulauncher)
[#]: via: (https://fedoramagazine.org/ulauncher-productivity/)
[#]: author: (Troy Curtis Jr https://fedoramagazine.org/author/troycurtisjr/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Productivity with Ulauncher
======

![Productivity with Ulauncher][1]

Photo by [Freddy Castro][2] on [Unsplash][3]

Application launchers are a category of productivity software that not everyone is familiar with, and yet most people use the basic concepts without realizing it. As the name implies, this software launches applications, but launchers offer other capabilities as well.

Examples of dedicated Linux launchers include [dmenu][4], [Synapse][5], and [Albert][6]. On MacOS, some examples are [Quicksilver][7] and [Alfred][8]. Many modern desktops include basic versions as well. On Fedora Linux, the GNOME 3 [activities overview][9] uses search to open applications and more, while MacOS has the built-in launcher Spotlight.

While these applications have great feature sets, this article focuses on productivity with [Ulauncher][10].
|
||||
|
||||
### What is Ulauncher?
|
||||
|
||||
[Ulauncher][10] is a new application launcher written in Python, with the first Fedora package available in March 2020 for [Fedora Linux 32][11]. The core focuses on basic functionality with a nice [interface for extensions][12]. Like most application launchers, the key idea in Ulauncher is search. Search is a powerful productivity boost, especially for repetitive tasks.
|
||||
|
||||
Typical menu-driven interfaces work great for discovery when you aren’t sure what options are available. However, when the same action needs to happen repeatedly, it is a real time sink to navigate into 3 nested sub-menus over and over again. On the other side, [hotkeys][13] give immediate access to specific actions, but can be difficult to remember. Especially after exhausting all the obvious mnemonics. Is [_Control+C_][14] “copy”, or is it “cancel”? Search is a middle ground giving a means to get to a specific command quickly, while supporting discovery by typing only some remembered word or fragment. Exploring by search works especially well if tags and descriptions are available. Ulauncher supplies the search framework that extensions can use to build all manner of productivity enhancing actions.
|
||||
|
||||
### Getting started

Getting the core functionality of Ulauncher on any Fedora OS is trivial; install using _[dnf][15]_:

```
sudo dnf install ulauncher
```
Once installed, use any standard desktop launching method to start Ulauncher for the first time. A basic dialog should pop up; if not, launch it again to toggle the input box on. Click the gear icon on the right side to open the preferences dialog.

![Ulauncher input box][16]

A number of options are available, but the most important when starting out are _Launch at login_ and the hotkey. The default hotkey is _Control+space_, but it can be changed. Running in Wayland needs additional configuration for consistent operation; see the [Ulauncher wiki][17] for details. Users of “Focus on Hover” or “Sloppy Focus” should also enable the “Don’t hide after losing mouse focus” option. Otherwise, Ulauncher disappears while typing in some cases.
### Ulauncher basics

The idea of any application launcher, like Ulauncher, is fast access at any time. Press the hotkey and the input box shows up on top of the current application. Type out and execute the desired command, and the dialog hides until the next use. Unsurprisingly, the most basic operation is launching applications, similar to most modern desktop environments. Hit the hotkey to bring up the dialog and start typing, for example _te_, and a list of matches comes up. Keep typing to further refine the search, or navigate to an entry using the arrow keys. For even faster access, use _Alt+#_ to directly choose a result.

![Ulauncher dialog searching for keywords with “te”][18]

Ulauncher can also do quick calculations and navigate the file system. To calculate, hit the hotkey and type a math expression. The result list dynamically updates with the result, and hitting _Enter_ copies the value to the clipboard. Start file-system navigation by typing _/_ to start at the root directory or _~/_ to start in the home directory. Selecting a directory lists that directory’s contents, and typing another argument filters the displayed list. Locate the right file by repeatedly descending directories. Selecting a file opens it, while _Alt+Enter_ opens the folder containing the file.
### Ulauncher shortcuts

The first bit of customization comes in the form of shortcuts. The _Shortcuts_ tab in the preferences dialog lists all the current shortcuts. Shortcuts can be direct commands, URL aliases, URLs with argument substitution, or small scripts. Basic shortcuts for Wikipedia, StackOverflow, and Google come pre-configured, but custom shortcuts are easy to add.

For instance, to create a DuckDuckGo search shortcut, click _Add Shortcut_ in the _Shortcuts_ preferences tab and add the name and keyword _duck_ with the query _<https://duckduckgo.com/?q=%s>_. Any argument given to the _duck_ keyword replaces _%s_ in the query, and the resulting URL opens in the default browser. Now, typing _duck fedora_ brings up a DuckDuckGo search using the supplied terms, in this case _fedora_.

![Ulauncher shortcuts preferences tab][19]
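The substitution a URL shortcut performs can be sketched in plain shell; the variable names here are illustrative, not Ulauncher internals:

```shell
# A URL shortcut substitutes the typed argument for every %s
# in the query template, then opens the result in the browser.
template='https://duckduckgo.com/?q=%s'
argument='fedora'
url="${template//%s/$argument}"
echo "$url"
```

Running this prints `https://duckduckgo.com/?q=fedora`, the same URL the launcher hands to the default browser.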
A more complex shortcut is a script to convert [UTC time][20] to local time. Once again, click _Add Shortcut_ and this time use the keyword _utc_. In the _Query or Script_ text box, include the following script:

```
#!/bin/bash
tzdate=$(date -d "$1 UTC")
zenity --info --no-wrap --text="$tzdate"
```
This script takes the first argument (given as _$1_) and uses the standard [_date_][21] utility to convert a given UTC time into the computer’s local timezone. Then [zenity][22] pops up a simple dialog with the result. To test this, open Ulauncher and type _utc 11:00_. While this is a good example of what’s possible with shortcuts, see the [ultz][23] extension for really converting time zones.
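The conversion logic can also be checked in a terminal without zenity; pinning the date and timezone makes the result reproducible (the January date and New York timezone below are arbitrary examples, and this assumes GNU _date_):

```shell
# GNU date converts a UTC timestamp into the timezone named in TZ.
# A fixed winter date avoids daylight-saving ambiguity.
local_time=$(TZ=America/New_York date -d "2021-01-15 11:00 UTC" '+%H:%M')
echo "$local_time"
```

In January, New York is on EST (UTC-5), so this prints `06:00`.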
### Introducing extensions

While the built-in functionality is great, installing extensions really accelerates productivity with Ulauncher. Extensions can go far beyond what is possible with custom shortcuts, most obviously by providing suggestions as arguments are typed. Extensions are Python modules which use the [Ulauncher extension interface][12] and can either be personally-developed local code or shared with others using GitHub. A collection of community-developed extensions is available at <https://ext.ulauncher.io/>. There are basic standalone extensions for quick conversions and dynamic interfaces to online resources such as dictionaries. Other extensions integrate with external applications, like password managers, browsers, and VPN providers, effectively giving those applications a Ulauncher interface. By keeping the core code small and relying on extensions to add advanced functionality, Ulauncher ensures that each user only installs the functionality they need.

![Ulauncher extension configuration][24]

Installing a new extension is easy, though it could be a more integrated experience. After finding an interesting extension, either on the Ulauncher extensions website or anywhere on GitHub, navigate to the _Extensions_ tab in the preferences window. Click _Add Extension_ and paste in the GitHub URL. This loads the extension and shows a preferences page for any available options. A nice hint: while browsing the extensions website, clicking the _Github star_ button opens the extension’s GitHub page, which often has more details about the extension than the summary on the community extensions website.
#### Firefox bookmarks search

One useful extension is [Ulauncher Firefox Bookmarks][25], which gives fuzzy-search access to the current user’s Firefox bookmarks. While this is similar to typing _*<search-term>_ in Firefox’s omnibar, the difference is that Ulauncher gives quick access to the bookmarks from anywhere, without needing to open Firefox first. Also, since this method uses search to locate bookmarks, no folder organization is really needed. This means pages can be “starred” quickly in Firefox, with no need to hunt for an appropriate folder to put them in.

![Firefox Ulauncher extension searching for fedora][26]
#### Clipboard search

Using a clipboard manager is a productivity boost on its own. These managers maintain a history of clipboard contents, which makes it easy to retrieve earlier copied snippets. Knowing there is a history of copied data allows the user to copy text without concern about overwriting the current contents. Adding the [Ulauncher clipboard][27] extension gives quick, searchable access to the clipboard history without having to remember another unique hotkey combination. The extension integrates with different clipboard managers: [GPaste][28], [clipster][29], or [CopyQ][30]. Invoking Ulauncher and typing the _c_ keyword brings up a list of recently copied snippets. Typing an argument starts to narrow the list of options, eventually showing the sought-after text. Selecting an item copies it to the clipboard, ready to paste into another application.

![Ulauncher clipboard extension listing latest clipboard contents][31]
#### Google search

The last extension to highlight is [Google Search][32]. While a Google search shortcut is available by default, using the extension allows for more dynamic behavior. With the extension, Google supplies suggestions as the search term is typed. The experience is similar to what is available on Google’s homepage or in the search box in Firefox. Again, the key benefit of using the extension for Google search is immediate access while doing anything else on the computer.

![Google search Ulauncher extension listing suggestions for fedora][33]
### Being productive

Productivity on a computer means customizing the environment for each particular usage, and a little configuration streamlines common tasks. Dedicated hotkeys work really well for the most frequent actions, but it doesn’t take long before it gets hard to remember them all. Using fuzzy search to find half-remembered keywords strikes a good balance between discoverability and direct access. The key to productivity with Ulauncher is identifying frequent actions and installing an extension, or adding a shortcut, to make them faster. Building a habit of searching in Ulauncher first means there is a quick and consistent interface ready to go, a keystroke away.

--------------------------------------------------------------------------------
via: https://fedoramagazine.org/ulauncher-productivity/

作者:[Troy Curtis Jr][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/troycurtisjr/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/ulauncher-816x345.jpg
[2]: https://unsplash.com/@readysetfreddy?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://tools.suckless.org/dmenu/
[5]: https://launchpad.net/synapse-project
[6]: https://github.com/albertlauncher/albert
[7]: https://qsapp.com/
[8]: https://www.alfredapp.com/
[9]: https://help.gnome.org/misc/release-notes/3.6/users-activities-overview.html.en
[10]: https://ulauncher.io/
[11]: https://fedoramagazine.org/announcing-fedora-32/
[12]: http://docs.ulauncher.io/en/latest/
[13]: https://en.wikipedia.org/wiki/Keyboard_shortcut
[14]: https://en.wikipedia.org/wiki/Control-C
[15]: https://fedoramagazine.org/managing-packages-fedora-dnf/
[16]: https://fedoramagazine.org/wp-content/uploads/2021/03/image.png
[17]: https://github.com/Ulauncher/Ulauncher/wiki/Hotkey-In-Wayland
[18]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-1.png
[19]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-2-1024x361.png
[20]: https://www.timeanddate.com/time/aboututc.html
[21]: https://man7.org/linux/man-pages/man1/date.1.html
[22]: https://help.gnome.org/users/zenity/stable/
[23]: https://github.com/Epholys/ultz
[24]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-6-1024x407.png
[25]: https://github.com/KuenzelIT/ulauncher-firefox-bookmarks
[26]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-3.png
[27]: https://github.com/friday/ulauncher-clipboard
[28]: https://github.com/Keruspe/GPaste
[29]: https://github.com/mrichar1/clipster
[30]: https://hluk.github.io/CopyQ/
[31]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-4.png
[32]: https://github.com/NastuzziSamy/ulauncher-google-search
[33]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-5.png
sources/tech/20210323 3 new Java tools to try in 2021.md
[#]: subject: (3 new Java tools to try in 2021)
[#]: via: (https://opensource.com/article/21/3/enterprise-java-tools)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

3 new Java tools to try in 2021
======
Empower your enterprise Java applications and your career with these three tools and frameworks.
![Person drinking a hot drink at the computer][1]
Despite the popularity of [Python][2], [Go][3], and [Node.js][4] for implementing [artificial intelligence][5] and machine learning applications and [serverless functions][6] on Kubernetes, Java technologies still play a key role in developing enterprise applications. According to [_Developer Economics_][7], in Q3 2020, there were 8 million enterprise Java developers worldwide.

Although the programming language has been around for more than 25 years, there are always new trends, tools, and frameworks in the Java world that can empower your applications and your career.

The vast majority of Java frameworks are designed for long-running processes with dynamic behaviors, running on mutable application servers such as physical servers and virtual machines. Things have changed since Kubernetes containers were unleashed in 2014. The biggest issue with Java applications on Kubernetes is optimizing application performance: decreasing memory footprint, speeding up start and response times, and reducing file size.

### 3 new Java frameworks and tools to consider

Java developers are also always looking for easier ways to integrate shiny new open source tools and projects into their Java applications and daily work. This significantly increases development productivity and motivates more enterprises and individual developers to keep using the Java stack.

When trying to meet the expectations listed above for the enterprise Java ecosystem, these three new Java frameworks and tools are worth your attention.
#### 1\. Quarkus

[Quarkus][8] is designed for developing cloud-native microservices and serverless applications with amazingly fast boot times, incredibly low resident set size (RSS) memory, and high-density memory utilization in container orchestration platforms like Kubernetes. According to JRebel's [9th annual global Java developer productivity report][9], usage of Quarkus by Java developers rose to 6% from less than 1%, while [Micronaut][10] and [Vert.x][11] grew to 4% and 2%, respectively, both up from roughly 1% the year before.

#### 2\. Eclipse JKube

[Eclipse JKube][12] enables Java developers to build container images for cloud-native Java applications using [Docker][13], [Jib][14], or [Source-To-Image][15] build strategies. It also generates Kubernetes and OpenShift manifests at compile time and improves the developer experience with debug, watch, and logging tools.
#### 3\. MicroProfile

[MicroProfile][16] solves the biggest problems related to optimizing enterprise Java for a microservices architecture without adopting new frameworks or refactoring entire applications. Furthermore, MicroProfile [specifications][17] (i.e., Health, Open Tracing, Open API, Fault Tolerance, Metrics, Config) continue to develop in alignment with the [Jakarta EE][18] implementation.

### Conclusion

It's hard to say which Java frameworks or tools are the best choices for enterprise Java developers. As long as there is room for improving the Java stack and accelerating enterprise business, we can expect new frameworks, tools, and platforms, like the three above, to become available. Spend some time looking at them to see if they can improve your enterprise Java applications in 2021.

--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/enterprise-java-tools

作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hot drink at the computer)
[2]: https://opensource.com/resources/python
[3]: https://opensource.com/article/18/11/learning-golang
[4]: https://opensource.com/article/18/7/node-js-interactive-cli
[5]: https://opensource.com/article/18/12/how-get-started-ai
[6]: https://opensource.com/article/19/4/enabling-serverless-kubernetes
[7]: https://developereconomics.com/
[8]: https://quarkus.io/
[9]: https://www.jrebel.com/resources/java-developer-productivity-report-2021
[10]: https://micronaut.io/
[11]: https://vertx.io/
[12]: https://www.eclipse.org/jkube/
[13]: https://opensource.com/resources/what-docker
[14]: https://github.com/GoogleContainerTools/jib
[15]: https://www.openshift.com/blog/create-s2i-builder-image
[16]: https://opensource.com/article/18/1/eclipse-microprofile
[17]: https://microprofile.io/
[18]: https://opensource.com/article/18/5/jakarta-ee
[#]: subject: (Affordable high-temperature 3D printers at home)
[#]: via: (https://opensource.com/article/21/3/desktop-3d-printer)
[#]: author: (Joshua Pearce https://opensource.com/users/jmpearce)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Affordable high-temperature 3D printers at home
======
How affordable? Under $1,000 USD
![High-temperature 3D-printed mask][1]
3D printers have been around since the 1980s, but they didn't gain popular attention until they became open source, thanks to the [RepRap][2] project. RepRap stands for self-replicating rapid prototyper; it's a 3D printer that can largely print itself. The open source plans, released [in 2004][3], led to 3D printer costs dropping from hundreds of thousands of dollars to a few hundred dollars.

These open source desktop tools have been limited to low-performance, low-temperature thermoplastics like ABS (think Lego blocks). There are several high-temperature printers on the market, but their high costs (tens to hundreds of thousands of dollars) make them inaccessible to most people. Until recently, they faced little competition because the technology was locked up by a patent (US6722872B1), which [expired][4] on February 27, 2021.

With this roadblock removed, we are about to see an explosion of high-temperature, low-cost, fused-filament 3D printers.

How low? How about under $1,000.
During the height of the pandemic, my team rushed to publish designs for an [open source high-temperature 3D printer][5] for manufacturing heat-sterilizable personal protective equipment (PPE). The project's idea is to enable people [to print PPE][6] (e.g., masks) with high-temperature materials and pop them in their home oven to sterilize them. We call our device the Cerberus, and it has the following features:

  1. 200°C capable heated bed
  2. 500°C capable hot end
  3. Isolated heated chamber with 1kW space heater core
  4. Mains (AC power) voltage chamber and bed heating for rapid start
You can build this project from readily available parts, some of which you can print, for under $1,000. It successfully prints polyetherketoneketone (PEKK) and polyetherimide (PEI, sold under the trade name Ultem). Both materials are much stronger than anything that can be printed today on low-cost printers.

![PPE printer][7]

(J.M.Pearce, [GNU Free Documentation License][8])

The high-temperature 3D printer was designed to have three heads, but we released it with only one. The Cerberus is named after Greek mythology's three-headed watchdog of the underworld. Normally we would not have released the printer with only one head, but the pandemic shifted our priorities. The [open source community rallied][9] to help solve supply deficits early on, and many desktop 3D printers were spitting out useful products to help protect people from COVID.
What about the other two heads?

The other two heads were intended for high-temperature fused-particle fabricators (e.g., the high-temperature version of this open source [3D printer hack][10]) and for laying in metal wire (like in [this design][11]) to build an open source heat exchanger. Other possible functionality for the Cerberus printer includes an automatic nozzle cleaner and a method to print continuous fibers at high temperatures. You can also mount anything you like on the turret to manufacture high-end products.

The expiration of the [obvious patent][12] for putting a box around a 3D printer while leaving the electronics on the outside paves the way for high-temperature home 3D printers, which will let these devices graduate from mere toys to industrial tools at reasonable cost.

Companies are already building on the RepRap tradition and bringing these low-cost systems to market (e.g., the $1,250 [Creality3D CR-5 Pro][13] 3D printer that can reach 300°C). Creality sells the most popular desktop 3D printer and has open sourced some of its designs.

To print super-high-end engineering polymers, however, these printers will need to get above 350°C. Open source plans are already available to help desktop 3D printer manufacturers start competing with the lumbering companies that have held back 3D printing for 20 years as they hid behind patents. Expect the competition for low-cost, high-temperature desktop 3D printers to really heat up!
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/desktop-3d-printer

作者:[Joshua Pearce][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jmpearce
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/3d_printer_mask.jpg?itok=5ePZghTW (High-temperature 3D-printed mask)
[2]: https://reprap.org/wiki/RepRap
[3]: https://reprap.org/wiki/Wealth_Without_Money
[4]: https://3dprintingindustry.com/news/stratasys-heated-build-chamber-for-3d-printer-patent-us6722872b1-set-to-expire-this-week-185012/
[5]: https://doi.org/10.1016/j.ohx.2020.e00130
[6]: https://www.appropedia.org/Open_Source_High-Temperature_Reprap_for_3-D_Printing_Heat-Sterilizable_PPE_and_Other_Applications
[7]: https://opensource.com/sites/default/files/uploads/ppe-hight3dp.png (PPE printer)
[8]: https://www.gnu.org/licenses/fdl-1.3.html
[9]: https://opensource.com/article/20/3/volunteer-covid19
[10]: https://www.liebertpub.com/doi/10.1089/3dp.2019.0195
[11]: https://www.appropedia.org/Open_Source_Multi-Head_3D_Printer_for_Polymer-Metal_Composite_Component_Manufacturing
[12]: https://www.academia.edu/17609790/A_Novel_Approach_to_Obviousness_An_Algorithm_for_Identifying_Prior_Art_Concerning_3-D_Printing_Materials
[13]: https://creality3d.shop/collections/cr-series/products/cr-5-pro-h-3d-printer
[#]: subject: (Meet Sleek: A Sleek Looking To-Do List Application)
[#]: via: (https://itsfoss.com/sleek-todo-app/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Meet Sleek: A Sleek Looking To-Do List Application
======
There are plenty of [to-do list applications available for Linux][1]. Sleek is one more addition to that list.

### Sleek to-do list app

Sleek is nothing extraordinary, except perhaps for its looks. It provides an Electron-based GUI for todo.txt.

![][2]

For those not aware, [Electron][3] is a framework that lets you use JavaScript, HTML, and CSS to build cross-platform desktop apps. It utilizes Chromium and Node.js for this purpose, which is why some people don’t like their desktop apps running a browser underneath.

[Todo.txt][4] is a plain-text file format: follow its markup syntax and you have a to-do list. There are tons of mobile, desktop, and CLI apps that use Todo.txt underneath.

Don’t worry, you don’t need to know the correct syntax for todo.txt. Since Sleek is a GUI tool, you can use its interface to create to-do lists without special effort.

The advantage of todo.txt is that you can copy or export your files and use them in any to-do list app that supports todo.txt. This gives you the portability to keep your data while moving between applications.
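For reference, a todo.txt file is just lines of text with a light inline syntax: an optional priority in parentheses, dates, `+project` and `@context` tags, and a leading `x` for completed tasks. A small, made-up example:

```
(A) 2021-03-22 Call the dentist @phone
(B) Write the monthly report +work due:2021-03-26
x 2021-03-20 2021-03-18 Renew domain name @computer
```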
### Experience with Sleek

![][5]

Sleek gives you the option to create a new todo.txt file or open an existing one. Once you create or open one, you can start adding items to the list.

Apart from the normal checklist, you can add tasks with a due date.

![][6]

While adding a due date, you can also set the task to repeat. I find it odd that you cannot create a recurring task without setting a due date. This is something the developer could fix in a future release of the application.

![][7]
You can mark a task as complete. You can also choose to hide or show completed tasks, with options to sort tasks by priority.

Sleek is available in both dark and light themes. There is a dedicated option on the left sidebar to change themes. You can, of course, also change it from the settings.

![][8]

There is no built-in provision to sync your to-do list. As a workaround, you can save your todo.txt file in a location that is automatically synced with Nextcloud, Dropbox, or some other cloud service. This also opens the possibility of using it on mobile with a todo.txt mobile client. It’s just a suggestion; I haven’t tried it myself.
### Installing Sleek on Linux

Since Sleek is an Electron-based application, it is available for Windows as well as Linux.

For Linux, you can install it using Snap or Flatpak, whichever you prefer.

For Snap, use the following command:

```
sudo snap install sleek
```

If you have enabled Flatpak and added the Flathub repository, you can install it using this command:

```
flatpak install flathub com.github.ransome1.sleek
```
As I said at the beginning of this article, Sleek is nothing extraordinary. If you prefer a modern-looking to-do list app with the option to import and export your task list, you may give this open source application a try.

--------------------------------------------------------------------------------
via: https://itsfoss.com/sleek-todo-app/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/to-do-list-apps-linux/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app.png?resize=800%2C630&ssl=1
[3]: https://www.electronjs.org/
[4]: http://todotxt.org/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-1.png?resize=800%2C521&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-due-tasks.png?resize=800%2C632&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-repeat-tasks.png?resize=800%2C632&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-light-theme.png?resize=800%2C521&ssl=1
[#]: subject: (WebAssembly Security, Now and in the Future)
[#]: via: (https://www.linux.com/news/webassembly-security-now-and-in-the-future/)
[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/webassembly-security-now-and-in-the-future/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

WebAssembly Security, Now and in the Future
======
_By Marco Fioretti_

**Introduction**

WebAssembly is, as we [explained recently][1], a binary format for software written in any language, designed to eventually run on any platform without changes. The first application of WebAssembly is inside web browsers, to make websites faster and more interactive. Plans to push WebAssembly beyond the Web, from servers of all sorts to the Internet of Things (IoT), create as many opportunities as security issues. This post is an introductory overview of those issues and of the WebAssembly security model.

**WebAssembly is like JavaScript**

Inside web browsers, WebAssembly modules are managed by the same Virtual Machine (VM) that executes JavaScript code. Therefore, WebAssembly may be used to do much of the same harm that is doable with JavaScript, just more efficiently and less visibly. Since JavaScript is plain text that the browser compiles, while WebAssembly is a ready-to-run binary format, the latter runs faster and is also harder to scan (even by antivirus software) for malicious instructions.

This “code obfuscation” effect of WebAssembly has already been used, among other things, to pop up unwanted advertising or to open fake “tech support” windows that ask for sensitive data. Another trick is to automatically redirect browsers to “landing” pages that contain the really dangerous malware.

Finally, WebAssembly may be used, just like JavaScript, to “steal” processing power instead of data. In 2019, an [analysis of 150 different Wasm modules][2] found that about _32%_ of them were used for cryptocurrency mining.
**WebAssembly sandbox, and interfaces**

WebAssembly code runs inside a [sandbox][3] managed by the VM, not by the operating system. This gives it no visibility of the host computer, nor any way to interact directly with it. Access to system resources, be they files, hardware, or internet connections, can only happen through the WebAssembly System Interface (WASI) provided by that VM.

WASI differs from most other application programming interfaces, with unique security characteristics that are truly driving the adoption of Wasm in server and edge-computing scenarios; it will be the topic of the next post. Here, it is enough to say that its security implications vary greatly when moving from the web to other environments. Modern web browsers are terribly complex pieces of software, but they rest on decades of experience and on daily testing by billions of people. Compared to browsers, servers and IoT devices are almost uncharted territory. The VMs for those platforms will require extensions of WASI and thus, in turn, will surely introduce new security challenges.
**Memory and code management in WebAssembly**
|
||||
|
||||
Compared to normal compiled programs, WebAssembly applications have very restricted access to memory, and to themselves too. WebAssembly code cannot directly access functions or variables that are not yet called, jump to arbitrary addresses or execute data in memory as bytecode instructions.
|
||||
|
||||
Inside browsers, a Wasm module only gets one, global array (“linear memory”) of contiguous bytes to play with. WebAssembly can directly read and write any location in that area, or request an increase in its size, but that’s all. This linear memory is also separated from the areas that contain its actual code, execution stack, and of course the virtual machine that runs WebAssembly. For browsers, all these data structures are ordinary JavaScript objects, insulated from all the others using standard procedures.

**The result: good, but not perfect**

All these restrictions make it quite hard for a WebAssembly module to misbehave, but not impossible.

The sandboxed memory that makes it almost impossible for WebAssembly to touch anything _outside_ also makes it harder for the operating system to prevent bad things from happening _inside_. Traditional memory monitoring mechanisms like [“stack canaries”][4], which notice if some code tries to mess with objects that it should not touch, [cannot work there][5].

The fact that WebAssembly can only access its own linear memory, but directly, may also _facilitate_ the work of attackers. With those constraints, and access to the source code of a module, it is much easier to guess which memory locations could be overwritten to do the most damage. It also seems [possible][6] to corrupt local variables, because they stay in an unsupervised stack in the linear memory.

A 2020 paper on the [binary security of WebAssembly][5] noted that WebAssembly code can still overwrite string literals in supposedly constant memory. The same paper describes other ways in which WebAssembly may be less secure than the same code compiled to a native binary, on three different platforms (browsers, server-side applications on Node.js, and applications for stand-alone WebAssembly VMs), and is recommended as further reading on this topic.

In general, the idea that WebAssembly can only damage what’s inside its own sandbox can be misleading. WebAssembly modules do the heavy work for the JavaScript code that calls them, exchanging variables every time. If they write into any of those variables code that may cause crashes or data leaks in the unsafe JavaScript that called them, those things _will_ happen.

**The road ahead**

Two emerging features of WebAssembly that will surely impact its security (how, and how much, it is too early to tell) are [concurrency][7] and internal garbage collection.

Concurrency is what allows several WebAssembly modules to run in the same VM simultaneously. Today this is possible only through JavaScript [web workers][8], but better mechanisms are under development. Security-wise, they may bring in [“a lot of code… that did not previously need to be”][9], that is, more ways for things to go wrong.

A [native garbage collector][10] is needed to increase performance and security, but above all to use WebAssembly outside the well-tested JavaScript VMs of browsers, which collect all the garbage inside themselves anyway. Even this new code, of course, may become another entry point for bugs and attacks.

On the positive side, general strategies to make WebAssembly even safer than it is today also exist. Quoting again from [here][5], they include compiler improvements, _separate_ linear memories for stack, heap and constant data, and avoiding compiling code written in “unsafe languages, such as C” to WebAssembly modules.

The post [WebAssembly Security, Now and in the Future][11] appeared first on [Linux Foundation – Training][12].

--------------------------------------------------------------------------------

via: https://www.linux.com/news/webassembly-security-now-and-in-the-future/
作者:[Dan Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://training.linuxfoundation.org/announcements/webassembly-security-now-and-in-the-future/
[b]: https://github.com/lujun9972
[1]: https://training.linuxfoundation.org/announcements/an-introduction-to-webassembly/
[2]: https://www.sec.cs.tu-bs.de/pubs/2019a-dimva.pdf
[3]: https://webassembly.org/docs/security/
[4]: https://ctf101.org/binary-exploitation/stack-canaries/
[5]: https://www.usenix.org/system/files/sec20-lehmann.pdf
[6]: https://spectrum.ieee.org/tech-talk/telecom/security/more-worries-over-the-security-of-web-assembly
[7]: https://github.com/WebAssembly/threads
[8]: https://en.wikipedia.org/wiki/Web_worker
[9]: https://googleprojectzero.blogspot.com/2018/08/the-problems-and-promise-of-webassembly.html
[10]: https://github.com/WebAssembly/gc/blob/master/proposals/gc/Overview.md
[11]: https://training.linuxfoundation.org/announcements/webassembly-security-now-and-in-the-future/
[12]: https://training.linuxfoundation.org/
@ -0,0 +1,466 @@

[#]: subject: (Build a to-do list app in React with hooks)
[#]: via: (https://opensource.com/article/21/3/react-app-hooks)
[#]: author: (Jaivardhan Kumar https://opensource.com/users/invinciblejai)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Build a to-do list app in React with hooks
======

Learn to build React apps using functional components and state management.

![Team checklist and to dos][1]

React is one of the most popular and simple JavaScript libraries for building user interfaces (UIs) because it allows you to create reusable UI components.

Components in React are independent, reusable pieces of code that serve as building blocks for an application. React functional components are JavaScript functions that separate the presentation layer from the business logic. According to the [React docs][2], a simple functional component can be written like:

```
function Welcome(props) {
  return <h1>Hello, {props.name}</h1>;
}
```

React functional components are stateless on their own: declared as plain functions, they return the same markup, given the same props. State is managed in functional components with hooks, which were introduced in React 16.8. Hooks enable the management of state and of the lifecycle of functional components. There are several built-in hooks, and you can also create custom hooks.

This article explains how to build a simple to-do app in React using functional components and state management. The complete code for this app is available on [GitHub][3] and [CodeSandbox][4]. When you're finished with this tutorial, the app will look like this:

![React to-do list][5]

(Jaivardhan Kumar, [CC BY-SA 4.0][6])

### Prerequisites

* To build locally, you must have [Node.js][7] v10.16 or higher, [yarn][8] v1.20.0 or higher, and npm v5.6 or higher
* Basic knowledge of JavaScript
* Basic understanding of React would be a plus

### Create a React app

[Create React App][9] is an environment that allows you to start building a React app. For this tutorial, I used a TypeScript template for adding static type definitions. [TypeScript][10] is an open source language that builds on JavaScript:

```
npx create-react-app todo-app-context-api --template typescript
```

[npx][11] is a package runner tool; alternatively, you can use [yarn][12]:

```
yarn create react-app todo-app-context-api --template typescript
```

After you execute this command, you can navigate to the directory and run the app:

```
cd todo-app-context-api
yarn start
```

You should see the starter app and the React logo, which is generated by the boilerplate code. Since you are building your own React app, you will be able to modify the logo and styles to meet your needs.

### Build the to-do app

The to-do app can:

* Add an item
* List items
* Mark items as completed
* Delete items
* Filter items based on status (e.g., completed, all, active)

![To-Do App architecture][13]

(Jaivardhan Kumar, [CC BY-SA 4.0][6])

#### The header component

Create a directory called **components** and add a file named **Header.tsx**:

```
mkdir components
cd components
vi Header.tsx
```

Header is a functional component that holds the heading:

```
const Header: React.FC = () => {
  return (
    <div className="header">
      <h1>
        Add TODO List!!
      </h1>
    </div>
  )
}
```

#### The AddTodo component

The **AddTodo** component contains a text box and a button. Clicking the button adds an item to the list.

Create a directory called **todo** under the **components** directory and add a file named **AddTodo.tsx**:

```
mkdir todo
cd todo
vi AddTodo.tsx
```

AddTodo is a functional component that accepts props. Props allow one-way passing of data, i.e., only from parent to child components:

```
const AddTodo: React.FC<AddTodoProps> = ({ todoItem, updateTodoItem, addTaskToList }) => {
  const submitHandler = (event: SyntheticEvent) => {
    event.preventDefault();
    addTaskToList();
  }
  return (
    <form className="addTodoContainer" onSubmit={submitHandler}>
      <div className="controlContainer">
        <input className="controlSpacing" style={{flex: 1}} type="text" value={todoItem?.text ?? ''} onChange={(ev) => updateTodoItem(ev.target.value)} placeholder="Enter task todo ..." />
        <input className="controlSpacing" style={{flex: 1}} type="submit" value="submit" />
      </div>
      <div>
        <label>
          <span style={{ color: '#ccc', padding: '20px' }}>{todoItem?.text}</span>
        </label>
      </div>
    </form>
  )
}
```

You have created a functional React component called **AddTodo** that takes props provided by the parent function. This makes the component reusable. The props that need to be passed are:

* **todoItem:** An empty item state
* **updateTodoItem:** A helper function to send callbacks to the parent as the user types
* **addTaskToList:** A function to add an item to a to-do list

There are also some styling and HTML elements, like form, input, etc.
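
The code above references an `AddTodoProps` type that is not shown here. The following is only a sketch of what that type could look like, inferred from how the props are used above; the names `TodoItemProp` and `AddTodoProps` are assumptions for illustration:

```typescript
// Hypothetical prop types for AddTodo, inferred from how the props
// are used above; the article itself does not show these definitions.
interface TodoItemProp {
  id: number;
  text: string;
  completed: boolean;
}

interface AddTodoProps {
  todoItem?: TodoItemProp;                 // current draft item (may be empty)
  updateTodoItem: (text: string) => void;  // called as the user types
  addTaskToList: () => void;               // called on form submit
}

// Quick structural check: build a value of each type.
const draft: TodoItemProp = { id: Date.now(), text: 'buy milk', completed: false };
const addTodoProps: AddTodoProps = {
  todoItem: draft,
  updateTodoItem: (text) => console.log('typing:', text),
  addTaskToList: () => console.log('added:', draft.text),
};
console.log(addTodoProps.todoItem?.text);
```

Typing the props this way lets the compiler catch a missing or misspelled prop at the call site instead of at runtime.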

#### The TodoList component

The next component to create is the **TodoList**. It is responsible for listing the items in the to-do state and providing options to delete and mark items as complete.

**TodoList** will be a functional component:

```
const TodoList: React.FC<TodoListProps> = ({ listData, removeItem, toggleItemStatus }) => {
  return listData.length > 0 ? (
    <div className="todoListContainer">
      { listData.map((lData) => {
        return (
          <ul key={lData.id}>
            <li>
              <div className="listItemContainer">
                <input type="checkbox" style={{ padding: '10px', margin: '5px' }} onChange={() => toggleItemStatus(lData.id)} checked={lData.completed}/>
                <span className="listItems" style={{ textDecoration: lData.completed ? 'line-through' : 'none', flex: 2 }}>{lData.text}</span>
                <button type="button" className="listItems" onClick={() => removeItem(lData.id)}>Delete</button>
              </div>
            </li>
          </ul>
        )
      })}
    </div>
  ) : (<span> No Todo list exist </span>)
}
```

The **TodoList** is also a reusable functional React component that accepts props from parent functions. The props that need to be passed are:

* **listData:** A list of to-do items with IDs, text, and completed properties
* **removeItem:** A helper function to delete an item from a to-do list
* **toggleItemStatus:** A function to toggle the task status from completed to not completed and vice versa

There are also some styling and HTML elements (like lists, input, etc.).

#### Footer component

**Footer** will be a functional component; create it in the **components** directory as follows:

```
cd ..
vi Footer.tsx
```

```
const Footer: React.FC<FooterProps> = ({item = 0, storage, filterTodoList}) => {
  return (
    <div className="footer">
      <button type="button" style={{flex:1}} onClick={() => filterTodoList(ALL_FILTER)}>All Item</button>
      <button type="button" style={{flex:1}} onClick={() => filterTodoList(ACTIVE_FILTER)}>Active</button>
      <button type="button" style={{flex:1}} onClick={() => filterTodoList(COMPLETED_FILTER)}>Completed</button>
      <span style={{color: '#cecece', flex:4, textAlign: 'center'}}>{item} Items | Make use of {storage} to store data</span>
    </div>
  );
}
```

It accepts three props:

* **item:** Displays the number of items
* **storage:** Displays text
* **filterTodoList:** A function to filter tasks based on status (active, completed, all items)
### Todo component: Managing state with contextApi and useReducer

![Todo Component][14]

(Jaivardhan Kumar, [CC BY-SA 4.0][6])

Context provides a way to pass data through the component tree without having to pass props down manually at every level. **ContextApi** and **useReducer** can be used to manage state by sharing it across the entire React component tree without passing it as a prop to each component in the tree.

Now that you have the AddTodo, TodoList, and Footer components, you need to wire them up.

Use the following built-in hooks to manage the components' state and lifecycle:

* **useState:** Returns the stateful value and an updater function to update the state
* **useEffect:** Helps manage the lifecycle in functional components and perform side effects
* **useContext:** Accepts a context object and returns the current context value
* **useReducer:** Like useState, it returns the stateful value and an updater function, but it is used instead of useState when you have complex state logic (e.g., multiple sub-values or when the new state depends on the previous one)

First, use the **contextApi** and **useReducer** hooks to manage the state. For separation of concerns, add a new directory under **components** called **contextApiComponents**:

```
mkdir contextApiComponents
cd contextApiComponents
```

Create **TodoContextApi.tsx**:

```
const defaultTodoItem: TodoItemProp = { id: Date.now(), text: '', completed: false };

const TodoContextApi: React.FC = () => {
  const { state: { todoList }, dispatch } = React.useContext(TodoContext);
  const [todoItem, setTodoItem] = React.useState(defaultTodoItem);
  const [todoListData, setTodoListData] = React.useState(todoList);

  React.useEffect(() => {
    setTodoListData(todoList);
  }, [todoList])

  const updateTodoItem = (text: string) => {
    setTodoItem({
      id: Date.now(),
      text,
      completed: false
    })
  }
  const addTaskToList = () => {
    dispatch({
      type: ADD_TODO_ACTION,
      payload: todoItem
    });
    setTodoItem(defaultTodoItem);
  }
  const removeItem = (id: number) => {
    dispatch({
      type: REMOVE_TODO_ACTION,
      payload: { id }
    })
  }
  const toggleItemStatus = (id: number) => {
    dispatch({
      type: UPDATE_TODO_ACTION,
      payload: { id }
    })
  }
  const filterTodoList = (type: string) => {
    const filteredList = FilterReducer(todoList, {type});
    setTodoListData(filteredList)
  }

  return (
    <>
      <AddTodo todoItem={todoItem} updateTodoItem={updateTodoItem} addTaskToList={addTaskToList} />
      <TodoList listData={todoListData} removeItem={removeItem} toggleItemStatus={toggleItemStatus} />
      <Footer item={todoListData.length} storage="Context API" filterTodoList={filterTodoList} />
    </>
  )
}
```

This component includes the **AddTodo**, **TodoList**, and **Footer** components and their respective helper and callback functions.

To manage the state, it uses **contextApi**, which provides state and dispatch methods that, in turn, update the state. It accepts a context object. (You will create the provider for the context, called **contextProvider**, next):

```
const { state: { todoList }, dispatch } = React.useContext(TodoContext);
```

#### TodoProvider

Add **TodoProvider**, which creates the **context** and uses a **useReducer** hook. The **useReducer** hook takes a reducer function along with the initial values and returns the state and updater function (dispatch):
* Create the context and export it. Exporting it will allow it to be used by any child component to get the current state using the **useContext** hook:

  ```
  export const TodoContext = React.createContext({} as TodoContextProps);
  ```

* Create the **ContextProvider** and export it:

  ```
  const TodoProvider : React.FC = (props) => {
    const [state, dispatch] = React.useReducer(TodoReducer, {todoList: []});
    const value = {state, dispatch}
    return (
      <TodoContext.Provider value={value}>
        {props.children}
      </TodoContext.Provider>
    )
  }
  ```

* Context data can be accessed by any React component in the hierarchy directly with the **useContext** hook if you wrap the parent component (e.g., **TodoContextApi**) or the app itself with the provider (e.g., **TodoProvider**):

  ```
  <TodoProvider>
    <TodoContextApi />
  </TodoProvider>
  ```

* In the **TodoContextApi** component, use the **useContext** hook to access the current context value:

  ```
  const { state: { todoList }, dispatch } = React.useContext(TodoContext)
  ```

**TodoProvider.tsx:**
```
type TodoContextProps = {
  state : {todoList: TodoItemProp[]};
  dispatch: ({type, payload}: {type:string, payload: any}) => void;
}

export const TodoContext = React.createContext({} as TodoContextProps);

const TodoProvider : React.FC = (props) => {
  const [state, dispatch] = React.useReducer(TodoReducer, {todoList: []});
  const value = {state, dispatch}
  return (
    <TodoContext.Provider value={value}>
      {props.children}
    </TodoContext.Provider>
  )
}
```

#### Reducers

A reducer is a pure function with no side effects: for the same input, it always produces the same output. This makes a reducer easy to test in isolation, and it helps with managing state. **TodoReducer** and **FilterReducer** are used in the **TodoProvider** and **TodoContextApi** components.
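
Because a reducer is just a function of state and action, you can exercise it without rendering a single React component. The snippet below is a standalone sketch (it inlines a simplified copy of the reducer and the action-type constants, so the names here are illustrative, not the article's exact file):

```typescript
// Standalone sketch: a simplified copy of a to-do reducer, exercised
// without React to show that a pure reducer is easy to test in isolation.
type TodoItem = { id: number; text: string; completed: boolean };
type State = { todoList: TodoItem[] };
type Action = { type: string; payload: any };

const ADD_TODO_ACTION = 'ADD_TODO';
const REMOVE_TODO_ACTION = 'REMOVE_TODO';

const TodoReducer = (state: State, action: Action): State => {
  switch (action.type) {
    case ADD_TODO_ACTION:
      // Return a new state object; never mutate the input.
      return { todoList: [...state.todoList, action.payload] };
    case REMOVE_TODO_ACTION:
      return { todoList: state.todoList.filter((d) => d.id !== action.payload.id) };
    default:
      return state;
  }
};

// Same input, same output: no component tree or DOM required.
const empty: State = { todoList: [] };
const afterAdd = TodoReducer(empty, {
  type: ADD_TODO_ACTION,
  payload: { id: 1, text: 'write tests', completed: false },
});
const afterRemove = TodoReducer(afterAdd, { type: REMOVE_TODO_ACTION, payload: { id: 1 } });

console.log(afterAdd.todoList.length);    // 1
console.log(afterRemove.todoList.length); // 0
```

Note that each case returns a new object rather than mutating `state`; that is what keeps the function pure and its results reproducible.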

Create a directory named **reducers** under **src** and create a file there named **TodoReducer.tsx**:

```
const TodoReducer = (state: StateProps = {todoList:[]}, action: ActionProps) => {
  switch(action.type) {
    case ADD_TODO_ACTION:
      return { todoList: [...state.todoList, action.payload]}
    case REMOVE_TODO_ACTION:
      return { todoList: state.todoList.length ? state.todoList.filter((d) => d.id !== action.payload.id) : []};
    case UPDATE_TODO_ACTION:
      return { todoList: state.todoList.map((d) => d.id === action.payload.id ? { ...d, completed: !d.completed } : d)}
    default:
      return state;
  }
}
```

Create a **FilterReducer** to maintain the filter's state:

```
const FilterReducer = (state : TodoItemProp[] = [], action: ActionProps) => {
  switch(action.type) {
    case ALL_FILTER:
      return state;
    case ACTIVE_FILTER:
      return state.filter((d) => !d.completed);
    case COMPLETED_FILTER:
      return state.filter((d) => d.completed);
    default:
      return state;
  }
}
```

You have created all the required components. Next, add the **Header** and **TodoContextApi** components to App, and wrap **TodoContextApi** with **TodoProvider** so that all children can access the context:

```
function App() {
  return (
    <div className="App">
      <Header />
      <TodoProvider>
        <TodoContextApi />
      </TodoProvider>
    </div>
  );
}
```

Ensure the App component is in **index.tsx** within **ReactDom.render**. [ReactDom.render][15] takes two arguments: a React element and the ID of an HTML element. The React element gets rendered on the web page, and the **id** indicates which HTML element will be replaced by the React element:

```
ReactDOM.render(
  <App />,
  document.getElementById('root')
);
```

### Conclusion

You have learned how to build a functional app in React using hooks and state management. What will you do with it?

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/react-app-hooks

作者:[Jaivardhan Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/invinciblejai
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos)
[2]: https://reactjs.org/docs/components-and-props.html
[3]: https://github.com/invincibleJai/todo-app-context-api
[4]: https://codesandbox.io/s/reverent-edison-v8om5
[5]: https://opensource.com/sites/default/files/pictures/todocontextapi.gif (React to-do list)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://nodejs.org/en/download/
[8]: https://yarnpkg.com/getting-started/install
[9]: https://github.com/facebook/create-react-app
[10]: https://www.typescriptlang.org/
[11]: https://www.npmjs.com/package/npx
[12]: https://yarnpkg.com/
[13]: https://opensource.com/sites/default/files/uploads/to-doapp_architecture.png (To-Do App architecture)
[14]: https://opensource.com/sites/default/files/uploads/todocomponent_0.png (Todo Component)
[15]: https://reactjs.org/docs/react-dom.html#render
202
sources/tech/20210324 Read and write files with Bash.md
Normal file
@ -0,0 +1,202 @@

[#]: subject: (Read and write files with Bash)
[#]: via: (https://opensource.com/article/21/3/input-output-bash)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Read and write files with Bash
======

Learn the different ways Bash reads and writes data and when to use each method.

![bash logo on green background][1]

When you're scripting with Bash, sometimes you need to read data from or write data to a file. Sometimes a file may contain configuration options, and other times the file is the data your user is creating with your application. Every language handles this task a little differently, and this article demonstrates how to handle data files with Bash and other [POSIX][2] shells.

### Install Bash

If you're on Linux, you probably already have Bash. If not, you can find it in your software repository.

On macOS, you can use the default terminal, either Bash or [Zsh][3], depending on the macOS version you're running.

On Windows, there are several ways to experience Bash, including Microsoft's officially supported [Windows Subsystem for Linux][4] (WSL).

Once you have Bash installed, open your favorite text editor and get ready to code.

### Reading a file with Bash

In addition to being [a shell][5], Bash is a scripting language. There are several ways to read data with Bash: You can create a sort of data stream and parse the output, or you can load data into memory. Both are valid methods of ingesting information, but each has pretty specific use cases.

#### Source a file in Bash

When you "source" a file in Bash, you cause Bash to read the contents of the file with the expectation that it contains valid data that Bash can fit into its established data model. You won't source data from any old file, but you can use this method to read configuration files and functions.

For instance, create a file called `example.sh` and enter this into it:

```
#!/bin/sh

greet opensource.com

echo "The meaning of life is $var"
```

Run the code to see it fail:

```
$ bash ./example.sh
./example.sh: line 3: greet: command not found
The meaning of life is
```

Bash doesn't have a command called `greet`, so it could not execute that line, and it has no record of a variable called `var`, so there is no known meaning of life. To fix this problem, create a file called `include.sh`:

```
greet() {
    echo "Hello ${1}"
}

var=42
```

Revise your `example.sh` script to include a `source` command:

```
#!/bin/sh

source include.sh

greet opensource.com

echo "The meaning of life is $var"
```

Run the script to see it work:

```
$ bash ./example.sh
Hello opensource.com
The meaning of life is 42
```

The `greet` command is brought into your shell environment because it is defined in the `include.sh` file, and it even recognizes the argument (`opensource.com` in this example). The variable `var` is set and imported, too.

#### Parse a file in Bash

The other way to get data "into" Bash is to parse it as a data stream. There are many ways to do this. You can use `grep` or `cat` or any command that takes data and pipes it to stdout. Alternatively, you can use what is built into Bash: the redirect. Redirection on its own isn't very useful, so in this example, I also use the built-in `echo` command to print the results of the redirect:

```
#!/bin/sh

echo $( < include.sh )
```

Save this as `stream.sh` and run it to see the results:

```
$ bash ./stream.sh
greet() { echo "Hello ${1}" } var=42
$
```

For each line in the `include.sh` file, Bash prints (or echoes) the line to your terminal. Piping it first to an appropriate parser is a common way to read data with Bash. For instance, assume for a moment that `include.sh` is a configuration file with key and value pairs separated by an equal (`=`) sign. You could obtain values with `awk` or even `cut`:

```
#!/bin/sh

myVar=`grep var include.sh | cut -d'=' -f2`

echo $myVar
```

Try running the script:

```
$ bash ./stream.sh
42
```
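
The text mentions `awk` as an alternative but only shows `cut`. Here is a sketch of the same extraction done with `awk` instead; it recreates a small `include.sh` first so the snippet can run on its own:

```shell
#!/bin/sh

# Recreate the include.sh used earlier so this snippet is self-contained.
cat > include.sh << 'EOF'
greet() {
    echo "Hello ${1}"
}

var=42
EOF

# awk splits each line on '=', then prints the value field for the "var" key.
myVar=$(awk -F'=' '/^var=/ {print $2}' include.sh)

echo "$myVar"
```

Running it prints `42`, just like the `cut` version; the difference is that `awk` does the matching and the field splitting in one process.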

### Writing data to a file with Bash

Whether you're storing data your user created with your application or just metadata about what the user did in an application (for instance, game saves or recent songs played), there are many good reasons to store data for later use. In Bash, you can save data to files using common shell redirection.

For instance, to create a new file containing output, use a single redirect token:

```
#!/bin/sh

TZ=UTC
date > date.txt
```

Run the script a few times:

```
$ bash ./date.sh
$ cat date.txt
Tue Feb 23 22:25:06 UTC 2021
$ bash ./date.sh
$ cat date.txt
Tue Feb 23 22:25:12 UTC 2021
```

To append data, use the double redirect token:

```
#!/bin/sh

TZ=UTC
date >> date.txt
```

Run the script a few times:

```
$ bash ./date.sh
$ bash ./date.sh
$ bash ./date.sh
$ cat date.txt
Tue Feb 23 22:25:12 UTC 2021
Tue Feb 23 22:25:17 UTC 2021
Tue Feb 23 22:25:19 UTC 2021
Tue Feb 23 22:25:22 UTC 2021
```

### Bash for easy programming

Bash excels at being easy to learn because, with just a few basic concepts, you can build complex programs. For the full documentation, refer to the [excellent Bash documentation][6] on GNU.org.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/input-output-bash

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://opensource.com/article/19/9/getting-started-zsh
[4]: https://opensource.com/article/19/7/ways-get-started-linux#wsl
[5]: https://www.redhat.com/sysadmin/terminals-shells-consoles
[6]: http://gnu.org/software/bash
sources/tech/20210325 How to use the Linux sed command.md
|
||||
[#]: subject: (How to use the Linux sed command)
|
||||
[#]: via: (https://opensource.com/article/21/3/sed-cheat-sheet)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
How to use the Linux sed command
|
||||
======
|
||||
Learn basic sed usage, then download our cheat sheet for a quick reference to the Linux stream editor.
|
||||
![Penguin with green background][1]
|
||||
|
||||
Few Unix commands are as famous as sed, [grep][2], and [awk][3]. They often get grouped together, possibly because they have strange names and are powerful tools for parsing text. They also share some syntactical and logical similarities. And while they're all useful for parsing text, each has its specialties. This article examines the `sed` command, which is a _stream editor_.
|
||||
|
||||
I've written before about [sed][4], as well as its distant relative [ed][5]. To get comfortable with sed, it helps to have some familiarity with ed because that helps you get used to the idea of buffers. This article assumes that you're familiar with the very basics of sed, meaning you've at least run the classic `s/foo/bar/` style find-and-replace command.
|
||||
|
||||
**[Download our free [sed cheat sheet][6]]**
|
||||
|
||||
### Installing sed
|
||||
|
||||
If you're using Linux, BSD, or macOS, you already have GNU or BSD sed installed. These are unique reimplementations of the original `sed` command, and while they're similar, there are minor differences. This article has been tested on the Linux and NetBSD versions, so you can use whatever sed you find on your computer in this case, although for BSD sed you must use only short options (`-n` instead of `--quiet`, for instance).
|
||||
|
||||
GNU sed is generally regarded to be the most feature-rich sed available, so you might want to try it whether or not you're running Linux. If you can't find GNU sed (often called gsed on non-Linux systems) in your ports tree, then you can [download its source code][7] from the GNU website. The nice thing about installing GNU sed is that you can use its extra functions but also constrain it to conform to the [POSIX][8] specifications of sed, should you require portability.
|
||||
|
||||
MacOS users can find GNU sed on [MacPorts][9] or [Homebrew][10].
|
||||
|
||||
On Windows, you can [install GNU sed][11] with [Chocolatey][12].
|
||||
|
||||
### Understanding pattern space and hold space
|
||||
|
||||
Sed works on exactly one line at a time. Because it has no visual display, it creates a _pattern space_, a space in memory containing the current line from the input stream (with any trailing newline character removed). Once you populate the pattern space, sed executes your instructions. When it reaches the end of the commands, sed prints the pattern space's contents to the output stream. The default output stream is **stdout**, but the output can be redirected to a file or even back into the same file using the `--in-place=.bak` option.
|
||||
|
||||
Then the cycle begins again with the next input line.
|
||||
|
||||
To provide a little flexibility as you scrub through files with sed, sed also provides a _hold space_ (sometimes also called a _hold buffer_), a space in sed's memory reserved for temporary data storage. You can think of hold space as a clipboard, and in fact, that's exactly what this article demonstrates: how to copy/cut and paste with sed.
|
||||
|
||||
First, create a sample text file with this text as its contents:
|
||||
|
||||
|
||||
```
|
||||
Line one
|
||||
Line three
|
||||
Line two
|
||||
```
|
||||
|
||||
### Copying data to hold space
|
||||
|
||||
To place something in sed's hold space, use the `h` or `H` command. A lower-case `h` tells sed to overwrite the current contents of hold space, while a capital `H` tells it to append data to whatever's already in hold space.
|
||||
|
||||
Used on its own, there's not much to see:
|
||||
|
||||
|
||||
```
|
||||
$ sed --quiet -e '/three/ h' example.txt
|
||||
$
|
||||
```
|
||||
|
||||
The `--quiet` (`-n` for short) option suppresses all output except what I explicitly request. In this case, sed selects any line containing the string `three` and copies it to hold space. I've not told sed to print anything, so no output is produced.
|
||||
|
||||
### Copying data from hold space
|
||||
|
||||
To get some insight into hold space, you can copy its contents from hold space and place it into pattern space with the `g` command. Watch what happens:
|
||||
|
||||
|
||||
```
|
||||
$ sed -n -e '/three/h' -e 'g;p' example.txt
|
||||
|
||||
Line three
|
||||
Line three
|
||||
```
|
||||
|
||||
The first blank line prints because the hold space is empty when it's first copied into pattern space.
|
||||
|
||||
The next two lines contain `Line three` because that's what's in hold space from line two onward.
|
||||
|
||||
This command uses two separate scripts (`-e`) purely to help with readability and organization. It can be useful to divide steps into individual scripts, but technically this command works just as well as one script statement:
|
||||
|
||||
|
||||
```
|
||||
$ sed -n -e '/three/h ; g ; p' example.txt
|
||||
|
||||
Line three
|
||||
Line three
|
||||
```
|
||||
|
||||
### Appending data to pattern space
|
||||
|
||||
The `G` command appends a newline character and the contents of the hold space to the pattern space.
|
||||
|
||||
|
||||
```
|
||||
$ sed -n -e '/three/h' -e 'G;p' example.txt
|
||||
Line one
|
||||
|
||||
Line three
|
||||
Line three
|
||||
Line two
|
||||
Line three
|
||||
```
|
||||
|
||||
The first two lines of this output contain both the contents of the pattern space (`Line one`) and the empty hold space. The next two lines match the search text (`three`), so they contain both the pattern space and the hold space. The hold space doesn't change for the third pair of lines, so the pattern space (`Line two`) prints with the hold space (still `Line three`) trailing at the end.
|
||||
|
||||
### Doing cut and paste with sed
|
||||
|
||||
Now that you know how to juggle a string from pattern to hold space and back again, you can devise a sed script that copies, then deletes, and then pastes a line within a document. For example, the example file for this article has `Line three` out of order. Sed can fix that:
|
||||
|
||||
|
||||
```
|
||||
$ sed -n -e '/three/ h' -e '/three/ d' \
|
||||
-e '/two/ G;p' example.txt
|
||||
Line one
|
||||
Line two
|
||||
Line three
|
||||
```
|
||||
|
||||
* The first script finds a line containing the string `three` and copies it from pattern space to hold space, replacing anything currently in hold space.
|
||||
* The second script deletes any line containing the string `three`. This completes the equivalent of a _cut_ action in a word processor or text editor.
|
||||
* The final script finds a line containing `two` and _appends_ the contents of hold space to pattern space and then prints the pattern space.
|
||||
|
||||
|
||||
|
||||
Job done.
|
||||
|
||||
### Scripting with sed
|
||||
|
||||
Once again, the use of separate script statements is purely for visual and mental simplicity. The cut-and-paste command works as one script:
|
||||
|
||||
|
||||
```
|
||||
$ sed -n -e '/three/ h ; /three/ d ; /two/ G ; p' example.txt
|
||||
Line one
|
||||
Line two
|
||||
Line three
|
||||
```
|
||||
|
||||
It can even be written as a dedicated script file:
|
||||
|
||||
|
||||
```
|
||||
#!/usr/bin/sed -nf
|
||||
|
||||
/three/h
|
||||
/three/d
|
||||
/two/ G
|
||||
p
|
||||
```
|
||||
|
||||
To run the script, mark it executable and try it on your sample file:
|
||||
|
||||
|
||||
```
|
||||
$ chmod +x myscript.sed
|
||||
$ ./myscript.sed example.txt
|
||||
Line one
|
||||
Line two
|
||||
Line three
|
||||
```
|
||||
|
||||
Of course, the more predictable the text you need to parse, the easier it is to solve your problem with sed. It's usually not practical to invent "recipes" for sed actions (such as a copy and paste) because the condition to trigger the action is probably different from file to file. However, the more fluent you become with sed's commands, the easier it is to devise complex actions based on the input you need to parse.
|
||||
|
||||
The important things are recognizing distinct actions, understanding when sed moves to the next line, and predicting what the pattern and hold space can be expected to contain.
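
The interplay between the two spaces is worth internalizing. A classic exercise, sketched here on the sample file from above, is reversing a file's lines (emulating `tac`) by accumulating every line in hold space:

```shell
# Reverse a file's lines using only the hold space:
#   1!G  on every line except the first, append hold space to pattern space
#   h    copy the (growing) pattern space back into hold space
#   $p   on the last line, print the accumulated pattern space
printf 'Line one\nLine two\nLine three\n' > example.txt
sed -n -e '1!G' -e 'h' -e '$p' example.txt
```

The output is the three lines in reverse order, because by the last line the pattern space holds the entire file, newest line first.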
|
||||
|
||||
### Download the cheat sheet
|
||||
|
||||
Sed is complex. It only has a dozen commands, yet its flexible syntax and raw power mean it's full of endless potential. I used to reference pages of clever one-liners in an attempt to get the most use out of sed, but it wasn't until I started inventing (and sometimes reinventing) my own solutions that I felt like I was starting to _actually_ learn sed. If you're looking for gentle reminders of commands and helpful tips on syntax, [download our sed cheat sheet][6], and start learning sed once and for all!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/sed-cheat-sheet
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
|
||||
[2]: https://opensource.com/article/21/3/grep-cheat-sheet
|
||||
[3]: https://opensource.com/article/20/9/awk-ebook
|
||||
[4]: https://opensource.com/article/20/12/sed
|
||||
[5]: https://opensource.com/article/20/12/gnu-ed
|
||||
[6]: https://opensource.com/downloads/sed-cheat-sheet
|
||||
[7]: http://www.gnu.org/software/sed/
|
||||
[8]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
[9]: https://opensource.com/article/20/11/macports
|
||||
[10]: https://opensource.com/article/20/6/homebrew-mac
|
||||
[11]: https://chocolatey.org/packages/sed
|
||||
[12]: https://opensource.com/article/20/3/chocolatey
|
|
||||
[#]: subject: (Identify Linux performance bottlenecks using open source tools)
|
||||
[#]: via: (https://opensource.com/article/21/3/linux-performance-bottlenecks)
|
||||
[#]: author: (Howard Fosdick https://opensource.com/users/howtech)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Identify Linux performance bottlenecks using open source tools
|
||||
======
|
||||
Not long ago, identifying hardware bottlenecks required deep expertise. Today's open source GUI performance monitors make it pretty simple.
|
||||
![Lightning in a bottle][1]
|
||||
|
||||
Computers are integrated systems that only perform as fast as their slowest hardware component. If one component is less capable than the others—if it falls behind and can't keep up—it can hold your entire system back. That's a _performance bottleneck_. Removing a serious bottleneck can make your system fly.
|
||||
|
||||
This article explains how to identify hardware bottlenecks in Linux systems. The techniques apply to both personal computers and servers. My emphasis is on PCs—I won't cover server-specific bottlenecks in areas such as LAN management or database systems. Those often involve specialized tools.
|
||||
|
||||
I also won't talk much about solutions. That's too big a topic for this article. Instead, I'll write a follow-up article with performance tweaks.
|
||||
|
||||
I'll use only open source graphical user interface (GUI) tools to get the job done. Most articles on Linux bottlenecking are pretty complicated. They use specialized commands and delve deep into arcane details.
|
||||
|
||||
The GUI tools that open source offers make identifying many bottlenecks simple. My goal is to give you a quick, easy approach that you can use anywhere.
|
||||
|
||||
### Where to start
|
||||
|
||||
A computer consists of six key hardware resources:
|
||||
|
||||
* Processors
|
||||
* Memory
|
||||
* Storage
|
||||
* USB ports
|
||||
* Internet connection
|
||||
* Graphics processor
|
||||
|
||||
|
||||
|
||||
Should any one resource perform poorly, it can create a performance bottleneck. To identify a bottleneck, you must monitor these six resources.
|
||||
|
||||
Open source offers a plethora of tools to do the job. I'll use the [GNOME System Monitor][2]. Its output is easy to understand, and you can find it in most repositories.
|
||||
|
||||
Start it up and click on the **Resources** tab. You can identify many performance problems right off.
|
||||
|
||||
![System Monitor - Resources Panel ][3]
|
||||
|
||||
Fig. 1. System Monitor spots problems. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
The **Resources** panel displays three sections: **CPU History**, **Memory and Swap History**, and **Network History**. A quick glance tells you immediately whether your processors are swamped, or your computer is out of memory, or you're using up all your internet bandwidth.
|
||||
|
||||
I'll explore these problems below. For now, check the System Monitor first when your computer slows down. It instantly clues you in on the most common performance problems.
|
||||
|
||||
Now let's explore how to identify bottlenecks in specific areas.
|
||||
|
||||
### How to identify processor bottlenecks
|
||||
|
||||
To spot a bottleneck, you must first know what hardware you have. Open source offers several tools for this purpose. I like [HardInfo][5] because its screens are easy to read and it's widely popular.
|
||||
|
||||
Start up HardInfo. Its **Computer -> Summary** panel identifies your CPU and tells you about its cores, threads, and speeds. It also identifies your motherboard and other computer components.
|
||||
|
||||
![HardInfo Summary Panel][6]
|
||||
|
||||
Fig. 2. HardInfo shows hardware details. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
HardInfo reveals that this computer has one physical CPU chip. That chip contains two processors, or cores. Each core supports two threads, or logical processors. That's a total of four logical processors—exactly what System Monitor's CPU History section showed in Fig. 1.
|
||||
|
||||
A _processor bottleneck_ occurs when processors can't respond to requests for their time. They're already busy.
|
||||
|
||||
You can identify this when System Monitor shows logical processor utilization at over 80% or 90% for a sustained period. Here's an example where three of the four logical processors are swamped at 100% utilization. That's a bottleneck because it doesn't leave much CPU for any other work.
|
||||
|
||||
![System Monitor processor bottleneck][7]
|
||||
|
||||
Fig. 3. A processor bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
#### Which app is causing the problem?
|
||||
|
||||
You need to find out which programs are consuming all that CPU. Click System Monitor's **Processes** tab, then click the **% CPU** header to sort the processes by how much CPU they're consuming. You'll see which apps are throttling your system.
|
||||
|
||||
![System Monitor Processes panel][8]
|
||||
|
||||
Fig. 4. Identifying the offending processes. (Howard Fosdick, [CC BY-SA 4.0][4])
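
If you'd rather check from a terminal, a rough command-line equivalent of sorting the **Processes** panel by CPU is the standard `ps` command (GNU procps options assumed):

```shell
# List the five processes consuming the most CPU, highest first
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 6
```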
|
||||
|
||||
The top three processes each consume 24% of the _total_ CPU resource. Since there are four logical processors, this means each consumes nearly an entire processor. That's just as Fig. 3 shows.
|
||||
|
||||
The **Processes** panel identifies a program named **analytical_AI** as the culprit. You can right-click on it in the panel to see more details on its resource consumption, including memory use, the files it has open, its input/output details, and more.
|
||||
|
||||
If your login has administrator privileges, you can manage the process. You can change its priority and stop, continue, end, or kill it. So, you could immediately resolve your bottleneck here.
|
||||
|
||||
![System Monitor managing a process][9]
|
||||
|
||||
Fig. 5. Right-click on a process to manage it. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
How do you fix processing bottlenecks? Beyond managing the offending process in real time, you could prevent the bottleneck from happening. For example, you might substitute another app for the offender, work around it, change your behavior when using that app, schedule the app for off-hours, address an underlying memory issue, performance-tweak the app or your system software, or upgrade your hardware. That's too much to cover here, so I'll explore those options in my next article.
|
||||
|
||||
#### Common processor bottlenecks
|
||||
|
||||
You'll encounter several common bottlenecks when monitoring your CPUs with System Monitor.
|
||||
|
||||
Sometimes one logical processor is bottlenecked while all the others are at low utilization. This means you have an app that's not coded smartly enough to take advantage of more than one logical processor, and it's maxed out the one it's using. That app will take longer to finish than it would if it used more processors. On the other hand, at least it leaves your other processors free for other work and doesn't take over your computer.
|
||||
|
||||
You might also see a logical processor stuck forever at 100% utilization. Either it's very busy, or a process is hung. The way to tell if it's hung is if the process never does any disk activity (as the System Monitor **Processes** panel will show).
|
||||
|
||||
Finally, you might notice that when all your processors are bottlenecked, your memory is fully utilized, too. Out-of-memory conditions sometimes cause processor bottlenecks. In this case, you want to solve the underlying memory problem, not the symptomatic CPU issue.
|
||||
|
||||
### How to identify memory bottlenecks
|
||||
|
||||
Given the large amount of memory in modern PCs, memory bottlenecks are much less common than they once were. Yet you can still run into them if you run memory-intensive programs, especially if you have a computer that doesn't contain much random access memory (RAM).
|
||||
|
||||
Linux [uses memory][10] both for programs and to cache disk data. The latter speeds up disk data access. Linux can reclaim that memory any time it needs it for program use.
|
||||
|
||||
The System Monitor's **Resources** panel displays your total memory and how much of it is used. In the **Processes** panel, you can see individual processes' memory use.
|
||||
|
||||
Here's the portion of the System Monitor **Resources** panel that tracks aggregate memory use:
|
||||
|
||||
![System Monitor memory bottleneck][11]
|
||||
|
||||
Fig. 6. A memory bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
To the right of Memory, you'll notice [Swap][12]. This is disk space Linux uses when it runs low on memory. It writes memory to disk to continue operations, effectively using swap as a slower extension to your RAM.
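
As a quick cross-check of what the **Resources** panel reports, the kernel exposes the same aggregate memory and swap figures in `/proc/meminfo`:

```shell
# Total and available memory, plus swap capacity and free swap
grep -E '^(MemTotal|MemAvailable|SwapTotal|SwapFree):' /proc/meminfo
```

A shrinking `SwapFree` relative to `SwapTotal` is the command-line signal of the swap activity described below.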
|
||||
|
||||
The two memory performance problems you'll want to look out for are:
|
||||
|
||||
> 1. Memory appears largely used, and you see frequent or increasing activity on the swap space.
|
||||
> 2. Both memory and swap are largely used up.
|
||||
>
|
||||
|
||||
|
||||
Situation 1 means slower performance because swap is always slower than memory. Whether you consider it a performance problem depends on many factors (e.g., how active your swap space is, its speed, your expectations, etc.). My opinion is that anything more than token swap use is unacceptable for a modern personal computer.
|
||||
|
||||
Situation 2 is where both memory and swap are largely in use. This is a _memory bottleneck._ The computer becomes unresponsive. It could even fall into a state of _thrashing_, where it accomplishes little more than memory management.
|
||||
|
||||
Fig. 6 above shows an old computer with only 2GB of RAM. As memory use surpassed 80%, the system started writing to swap. Responsiveness declined. This screenshot shows over 90% memory use, and the computer is unusable.
|
||||
|
||||
The ultimate answer to memory problems is to either use less of it or buy more. I'll discuss solutions in my follow-up article.
|
||||
|
||||
### How to identify storage bottlenecks
|
||||
|
||||
Storage today comes in several varieties of solid-state and mechanical hard disks. Device interfaces include PCIe, SATA, Thunderbolt, and USB. Regardless of which type of storage you have, you use the same procedure to identify disk bottlenecks.
|
||||
|
||||
Start with System Monitor. Its **Processes** panel displays the input/output rates for individual processes. So you can quickly identify which processes are doing the most disk I/O.
|
||||
|
||||
But the tool doesn't show the _aggregate data transfer rate per disk._ You need to see the total load on a specific disk to determine if that disk is a storage bottleneck.
|
||||
|
||||
To do so, use the [atop][13] command. It's available in most Linux repositories.
|
||||
|
||||
Just type `atop` at the command-line prompt. The output below shows that device `sdb` is `busy 101%`. Clearly, it's reached its performance limit and is restricting how fast your system can get work done.
|
||||
|
||||
![atop disk bottleneck][14]
|
||||
|
||||
Fig. 7. The atop command identifies a disk bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
Notice that one of the CPUs is waiting on the disk to do its job 85% of the time (`cpu001 w 85%`). This is typical when a storage device becomes a bottleneck. In fact, many look first at CPU I/O waits to spot storage bottlenecks.
|
||||
|
||||
So, to easily identify a storage bottleneck, use the `atop` command. Then use the **Processes** panel on System Monitor to identify the individual processes that are causing the bottleneck.
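
If `atop` isn't installed, the kernel's raw per-disk counters give an unpolished view of aggregate disk activity. This sketch assumes the standard Linux `/proc/diskstats` layout (field 3 is the device name, field 6 is sectors read, field 10 is sectors written):

```shell
# Print cumulative sectors read and written per block device
awk '{print $3, "sectors read:", $6, "sectors written:", $10}' /proc/diskstats
```

Sampling this twice a few seconds apart and comparing the counters shows which device is doing the most I/O.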
|
||||
|
||||
### How to identify USB port bottlenecks
|
||||
|
||||
Some people use their USB ports all day long. Yet, they never check if those ports are being used optimally. Whether you plug in an external disk, a memory stick, or something else, you'll want to verify that you're getting maximum performance from your USB-connected devices.
|
||||
|
||||
This chart shows why. Potential USB data transfer rates vary _enormously_.
|
||||
|
||||
![USB standards][15]
|
||||
|
||||
Fig. 8. USB speeds vary a lot. (Howard Fosdick, based on figures provided by [Tripplite][16] and [Wikipedia][17], [CC BY-SA 4.0][4])
|
||||
|
||||
HardInfo's **USB Devices** tab displays the USB standards your computer supports. Most computers offer more than one speed. How can you tell the speed of a specific port? Vendors color-code them, as shown in the chart. Or you can look in your computer's documentation.
|
||||
|
||||
To see the actual speeds you're getting, test by using the open source [GNOME Disks][18] program. Just start up GNOME Disks, select its **Benchmark Disk** feature, and run a benchmark. That tells you the maximum real speed you'll get for a port with the specific device plugged into it.
|
||||
|
||||
You may get different transfer speeds for a port, depending on which device you plug into it. Data rates depend on the particular combination of port and device.
|
||||
|
||||
For example, a device that could fly at 3.1 speed will use a 2.0 port—at 2.0 speed—if that's what you plug it into. (And it won't tell you it's operating at the slower speed!) Conversely, if you plug a USB 2.0 device into a 3.1 port, it will work, but at the 2.0 speed. So to get fast USB, you must ensure both the port and the device support it. GNOME Disks gives you the means to verify this.
|
||||
|
||||
To identify a USB processing bottleneck, use the same procedure you did for solid-state and hard disks. Run the `atop` command to spot a USB storage bottleneck. Then, use System Monitor to get the details on the offending process(es).
|
||||
|
||||
### How to identify internet bandwidth bottlenecks
|
||||
|
||||
The System Monitor **Resources** panel tells you in real time what internet connection speed you're experiencing (see Fig. 1).
|
||||
|
||||
There are [great Python tools out there][19] to test your maximum internet speed, but you can also test it on websites like [Speedtest][20], [Fast.com][21], and [Speakeasy][22]. For best results, close everything and run _only_ the speed test; turn off your VPN; run tests at different times of day; and compare the results from several testing sites.
|
||||
|
||||
Then compare your results to the download and upload speeds that your vendor claims you're getting. That way, you can confirm you're getting the speeds you're paying for.
|
||||
|
||||
If you have a separate router, test with and without it. That can tell you if your router is a bottleneck. If you use WiFi, test with it and without it (by directly cabling your laptop to the modem). I've often seen people complain about their internet vendor when what they actually have is a WiFi bottleneck they could fix themselves.
|
||||
|
||||
If some program is consuming your entire internet connection, you want to know which one. Find it by using the `nethogs` command. It's available in most repositories.
|
||||
|
||||
The other day, my System Monitor suddenly showed my internet access spiking. I just typed `nethogs` in the command line, and it instantly identified the bandwidth consumer as a Clamav antivirus update.
|
||||
|
||||
![Nethogs][23]
|
||||
|
||||
Fig. 9. Nethogs identifies bandwidth consumers. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
### How to identify graphics processing bottlenecks
|
||||
|
||||
If you plug your monitor into the motherboard in the back of your desktop computer, you're using _onboard graphics_. If you plug it into a card in the back, you have a dedicated graphics subsystem. Most call it a _video card_ or _graphics card._ For desktop computers, add-in cards are typically more powerful and more expensive than motherboard graphics. Laptops always use onboard graphics.
|
||||
|
||||
HardInfo's **PCI Devices** panel tells you about your graphics processing unit (GPU). It also displays the amount of dedicated video memory you have (look for the memory marked "prefetchable").
|
||||
|
||||
![Video Chipset Information][24]
|
||||
|
||||
Fig. 10. HardInfo provides graphics processing information. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
CPUs and GPUs work [very closely][25] together. To simplify, the CPU prepares frames for the GPU to render, then the GPU renders the frames.
|
||||
|
||||
A _GPU bottleneck_ occurs when your CPUs are waiting on a GPU that is 100% busy.
|
||||
|
||||
To identify this, you need to monitor CPU and GPU utilization rates. Open source monitors like [Conky][26] and [Glances][27] do this if their extensions work with your graphics chipset.
|
||||
|
||||
Take a look at this example from Conky. You can see that this system has a lot of available CPU. The GPU is only 25% busy. Imagine if that GPU number were instead near 100%. Then you'd know that the CPUs were waiting on the GPU, and you'd have a GPU bottleneck.
|
||||
|
||||
![Conky CPU and GPU monitoring][28]
|
||||
|
||||
Fig. 11. Conky displays CPU and GPU utilization. (Image courtesy of [AskUbuntu forum][29])
|
||||
|
||||
On some systems, you'll need a vendor-specific tool to monitor your GPU. They're all downloadable from GitHub and are described in this article on [GPU monitoring and diagnostic command-line tools][30].
|
||||
|
||||
### Summary
|
||||
|
||||
Computers consist of a collection of integrated hardware resources. Should any of them fall way behind the others in its workload, it creates a performance bottleneck. That can hold back your entire system. You need to be able to identify and correct bottlenecks to achieve optimal performance.
|
||||
|
||||
Not so long ago, identifying bottlenecks required deep expertise. Today's open source GUI performance monitors make it pretty simple.
|
||||
|
||||
In my next article, I'll discuss specific ways to improve your Linux PC's performance. Meanwhile, please share your own experiences in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/linux-performance-bottlenecks
|
||||
|
||||
作者:[Howard Fosdick][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/howtech
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_lightning.png?itok=wRzjWIlm (Lightning in a bottle)
|
||||
[2]: https://wiki.gnome.org/Apps/SystemMonitor
|
||||
[3]: https://opensource.com/sites/default/files/uploads/1_system_monitor_resources_panel.jpg (System Monitor - Resources Panel )
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://itsfoss.com/hardinfo/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/2_hardinfo_summary_panel.jpg (HardInfo Summary Panel)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/3_system_monitor_100_processor_utilization.jpg (System Monitor processor bottleneck)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/4_system_monitor_processes_panel.jpg (System Monitor Processes panel)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/5_system_monitor_manage_a_process.jpg (System Monitor managing a process)
|
||||
[10]: https://www.networkworld.com/article/3394603/when-to-be-concerned-about-memory-levels-on-linux.html
|
||||
[11]: https://opensource.com/sites/default/files/uploads/6_system_monitor_out_of_memory.jpg (System Monitor memory bottleneck)
|
||||
[12]: https://opensource.com/article/18/9/swap-space-linux-systems
|
||||
[13]: https://opensource.com/life/16/2/open-source-tools-system-monitoring
|
||||
[14]: https://opensource.com/sites/default/files/uploads/7_atop_storage_bottleneck.jpg (atop disk bottleneck)
|
||||
[15]: https://opensource.com/sites/default/files/uploads/8_usb_standards_speeds.jpg (USB standards)
|
||||
[16]: https://www.samsung.com/us/computing/memory-storage/solid-state-drives/
|
||||
[17]: https://en.wikipedia.org/wiki/USB
|
||||
[18]: https://wiki.gnome.org/Apps/Disks
|
||||
[19]: https://opensource.com/article/20/1/internet-speed-tests
|
||||
[20]: https://www.speedtest.net/
|
||||
[21]: https://fast.com/
|
||||
[22]: https://www.speakeasy.net/speedtest/
|
||||
[23]: https://opensource.com/sites/default/files/uploads/9_nethogs_bandwidth_consumers.jpg (Nethogs)
|
||||
[24]: https://opensource.com/sites/default/files/uploads/10_hardinfo_video_card_information.jpg (Video Chipset Information)
|
||||
[25]: https://www.wepc.com/tips/cpu-gpu-bottleneck/
|
||||
[26]: https://itsfoss.com/conky-gui-ubuntu-1304/
|
||||
[27]: https://opensource.com/article/19/11/monitoring-linux-glances
|
||||
[28]: https://opensource.com/sites/default/files/uploads/11_conky_cpu_and_gup_monitoring.jpg (Conky CPU and GPU monitoring)
|
||||
[29]: https://askubuntu.com/questions/387594/how-to-measure-gpu-usage
|
||||
[30]: https://www.cyberciti.biz/open-source/command-line-hacks/linux-gpu-monitoring-and-diagnostic-commands/
|
@ -0,0 +1,92 @@
|
||||
[#]: subject: (Plausible: Privacy-Focused Google Analytics Alternative)
|
||||
[#]: via: (https://itsfoss.com/plausible/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Plausible: Privacy-Focused Google Analytics Alternative
|
||||
======
|
||||
|
||||
[Plausible][1] is a simple, privacy-friendly analytics tool. It helps you analyze the number of unique visitors, pageviews, bounce rate and visit duration.
|
||||
|
||||
If you have a website, you probably understand those terms. As a website owner, they help you know whether your site is getting more visitors over time and where the traffic is coming from; with some knowledge of these things, you can work on improving your website for more visits.
|
||||
|
||||
When it comes to website analytics, the one service that rules this domain is Google's free tool, Google Analytics. Just like Google is the de facto search engine, Google Analytics is the de facto analytics tool. But you don't have to live with it, especially if you cannot trust Big Tech with your data and your site visitors' data.
|
||||
|
||||
Plausible gives you the freedom from Google Analytics and I am going to discuss this open source project in this article.
|
||||
|
||||
Please note that some technical terms in this article may be unfamiliar if you have never managed a website or bothered about analytics.
|
||||
|
||||
### Plausible for privacy friendly website analytics
|
||||
|
||||
The script used by Plausible for analytics is extremely lightweight, at less than 1 KB in size.
|
||||
|
||||
The focus is on preserving privacy, so you get valuable and actionable stats without compromising the privacy of your visitors. Plausible is one of the few analytics tools that don't require a cookie banner or GDPR consent notice because it is already [GDPR-compliant][2] on the privacy front. That's super cool.
|
||||
|
||||
In terms of features, it doesn't have the same level of granularity and detail as Google Analytics. Plausible banks on simplicity. It shows a graph of your traffic stats for the past 30 days. You may also switch to a real-time view.
|
||||
|
||||
![][3]
|
||||
|
||||
You can also see where your traffic is coming from and which pages on your website get the most visits. The sources can also show UTM campaigns.
|
||||
|
||||
![][4]
|
||||
|
||||
You also have the option to enable GeoIP to get some insights into the geographical location of your website visitors. You can also check how many visitors use a desktop or mobile device to visit your website. There is also a breakdown by operating system and, as you can see, [Linux Handbook][5] gets 48% of its visitors from Windows devices. Pretty strange, right?
|
||||
|
||||
![][6]
|
||||
|
||||
Clearly, the data provided is nowhere close to what Google Analytics can do, but that's intentional. Plausible intends to provide you with simple metrics.
|
||||
|
||||
### Using Plausible: Opt for paid managed hosting or self-host it on your server
|
||||
|
||||
There are two ways you can start using Plausible. One is to sign up for their official managed hosting. You'll have to pay for the service, which in turn helps the development of the Plausible project. They have a 30-day trial period, and it doesn't even require any payment information from your side.
|
||||
|
||||
The pricing starts at $6 per month for 10k monthly pageviews and increases with the number of pageviews. You can calculate the pricing on the Plausible website.
|
||||
|
||||
[Plausible Pricing][7]
|
||||
|
||||
You can try it for 30 days and see if you would like to pay the Plausible developers for the service and own your data.
|
||||
|
||||
If you think the pricing is not affordable, you can take advantage of the fact that Plausible is open source and deploy it yourself. If you are interested, read our [in-depth guide on self-hosting a Plausible instance with Docker][8].
|
||||
|
||||
At It’s FOSS, we self-host Plausible. Our Plausible instance has three of our websites added.
|
||||
|
||||
![Plausble dashboard for It’s FOSS websites][9]
|
||||
|
||||
If you maintain the website of an open source project and would like to use Plausible, you can contact us through our [High on Cloud project][10]. With High on Cloud, we help small businesses host and use open source software on their servers.
|
||||
|
||||
### Conclusion
|
||||
|
||||
If you are not super obsessed with data and just want a quick glance at how your website is performing, Plausible is a decent choice. I like it because it is lightweight and privacy compliant. That's the main reason why I use it on Linux Handbook, our [ethical web portal for teaching Linux server related stuff][11].
|
||||
|
||||
Overall, I am pretty content with Plausible and recommend it to other website owners.
|
||||
|
||||
Do you run or manage a website as well? What tool do you use for analytics, or do you not care about that at all?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/plausible/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://plausible.io/
|
||||
[2]: https://gdpr.eu/compliance/
|
||||
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-graph-lhb.png?resize=800%2C395&ssl=1
|
||||
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-stats-lhb-2.png?resize=800%2C333&ssl=1
|
||||
[5]: https://linuxhandbook.com/
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-geo-ip-stats.png?resize=800%2C331&ssl=1
|
||||
[7]: https://plausible.io/#pricing
|
||||
[8]: https://linuxhandbook.com/plausible-deployment-guide/
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-analytics-for-itsfoss.png?resize=800%2C231&ssl=1
|
||||
[10]: https://highoncloud.com/
|
||||
[11]: https://linuxhandbook.com/about/#ethical-web-portal
|
@ -0,0 +1,144 @@
|
||||
[#]: subject: (10 open source tools for content creators)
|
||||
[#]: via: (https://opensource.com/article/21/3/open-source-tools-web-design)
|
||||
[#]: author: (Kristina Tuvikene https://opensource.com/users/hfkristina)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
10 open source tools for content creators
|
||||
======
|
||||
Check out these lesser-known web design tools for your next project.
|
||||
![Painting art on a computer screen][1]
|
||||
|
||||
There are a lot of well-known open source applications used in web design, but there are also many great tools that are not as popular. I thought I'd challenge myself to find some obscure options on the chance I might find something useful.
|
||||
|
||||
Open source offers a wealth of options, so it's no surprise that I found 10 new applications that I now consider indispensable to my work.
|
||||
|
||||
### Bulma
|
||||
|
||||
![Bulma widgets][2]
|
||||
|
||||
[Bulma][3] is a modular and responsive CSS framework for designing interfaces that flow beautifully. Design work is hardest between the moment of inspiration and the time of initial implementation, and that's exactly the problem Bulma helps solve. It's a collection of useful front-end components that a designer can combine to create an engaging and polished interface. And the best part is that it requires no JavaScript. It's all done in CSS.
|
||||
|
||||
Its components include forms, columns, tabbed interfaces, pagination, breadcrumbs, buttons, notifications, and much more.
|
||||
|
||||
### Skeleton
|
||||
|
||||
![Skeleton][4]
|
||||
|
||||
(Kristina Tuvikene, [CC BY-SA 4.0][5])
|
||||
|
||||
[Skeleton][6] is a lightweight open source framework that gives you a simple grid, basic formats, and cross-browser support. It's a great alternative to bulky frameworks and lets you start coding your site with a minimal but highly functional foundation. There's a slight learning curve, as you do have to get familiar with its codebase, but after you've built one site with Skeleton, you've built a thousand, and it becomes second nature.
|
||||
|
||||
### The Noun Project
|
||||
|
||||
![The Noun Project][7]
|
||||
|
||||
(Kristina Tuvikene, [CC BY-SA 4.0][5])
|
||||
|
||||
[The Noun Project][8] is a collection of more than 3 million icons and images. You can use them on your site or as inspiration to create your own designs. I've found hundreds of useful icons on the site, and they're superbly easy to use. Because they're so basic, you can use them as-is for a nice, minimal look or bring them into your [favorite image editor][9] and customize them for your project.
|
||||
|
||||
### MyPaint
|
||||
|
||||
![MyPaint][10]
|
||||
|
||||
(Kristina Tuvikene, [CC BY-SA 4.0][5])
|
||||
|
||||
If you fancy creating your own icons or maybe some incidental art, then you should take a look at [MyPaint][11]. It is a lightweight painting tool that supports various graphic tablets, features dozens of amazing brush emulators and textures, and has a clean, minimal interface, so you can focus on creating your illustration.
|
||||
|
||||
### Glimpse
|
||||
|
||||
![Glimpse][12]
|
||||
|
||||
(Kristina Tuvikene, [CC BY-SA 4.0][5])
|
||||
|
||||
[Glimpse][13] is a cross-platform photo editor, a fork of [GIMP][14] that adds some nice features such as keyboard shortcuts similar to another popular (non-open) image editor. This is one of those must-have [applications for any graphic designer][15]. Glimpse doesn't have a macOS release yet, but Mac users can use GIMP in the meantime.
|
||||
|
||||
### LazPaint
|
||||
|
||||
![LaPaz][16]
|
||||
|
||||
(Kristina Tuvikene, [CC BY-SA 4.0][5])
|
||||
|
||||
[LazPaint][17] is a lightweight raster and vector graphics editor with multiple tools and filters. It's also available on multiple platforms and offers straightforward vector editing for quick and basic work.
|
||||
|
||||
### The League of Moveable Type
|
||||
|
||||
![League of Moveable Type][18]
|
||||
|
||||
(Kristina Tuvikene, [CC BY-SA 4.0][5])
|
||||
|
||||
My favorite open source font foundry, [The League of Moveable Type][19], offers expertly designed open source font faces. There's something suitable for every sort of project here.
|
||||
|
||||
### Shotcut
|
||||
|
||||
![Shotcut][20]
|
||||
|
||||
(Kristina Tuvikene, [CC BY-SA 4.0][5])
|
||||
|
||||
[Shotcut][21] is a non-linear video editor that supports multiple audio and video formats. It has an intuitive interface with undockable panels, and you can do everything from basic to advanced video editing with this open source tool.
|
||||
|
||||
### Draw.io
|
||||
|
||||
![Draw.io][22]
|
||||
|
||||
(Kristina Tuvikene, [CC BY-SA 4.0][5])
|
||||
|
||||
[Draw.io][23] is lightweight, dedicated software with a straightforward user interface for creating professional diagrams and flowcharts. You can run it online or [get it from GitHub][24] and install it locally.
|
||||
|
||||
### Bonus resource: Olive video editor
|
||||
|
||||
![Olive][25]
|
||||
|
||||
(©2021, [Olive][26])
|
||||
|
||||
[Olive video editor][27] is a work in progress but considered a very strong contender for premium open source video editing software. It's something you should keep your eye on for sure.
|
||||
|
||||
### Add these to your collection
|
||||
|
||||
Web design is an exciting line of work, and there's always something unexpected to deal with or invent. There are many great open source options out there for the resourceful web designer, and you'll benefit from trying these out to see if they fit your style.
|
||||
|
||||
What open source web design tools do you use that I've missed? Please share your favorites in the comments!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/open-source-tools-web-design
|
||||
|
||||
作者:[Kristina Tuvikene][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/hfkristina
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
|
||||
[2]: https://opensource.com/sites/default/files/bulma.jpg (Bulma widgets)
|
||||
[3]: https://bulma.io/
|
||||
[4]: https://opensource.com/sites/default/files/uploads/skeleton.jpg (Skeleton)
|
||||
[5]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[6]: http://getskeleton.com/
|
||||
[7]: https://opensource.com/sites/default/files/uploads/nounproject.jpg (The Noun Project)
|
||||
[8]: https://thenounproject.com/
|
||||
[9]: https://opensource.com/life/12/6/design-without-debt-five-tools-for-designers
|
||||
[10]: https://opensource.com/sites/default/files/uploads/mypaint.jpg (MyPaint)
|
||||
[11]: http://mypaint.org/
|
||||
[12]: https://opensource.com/sites/default/files/uploads/glimpse.jpg (Glimpse)
|
||||
[13]: https://glimpse-editor.github.io/
|
||||
[14]: https://www.gimp.org/
|
||||
[15]: https://websitesetup.org/web-design-software/
|
||||
[16]: https://opensource.com/sites/default/files/uploads/lapaz.jpg (LaPaz)
|
||||
[17]: https://lazpaint.github.io/
|
||||
[18]: https://opensource.com/sites/default/files/uploads/league-of-moveable-type.jpg (League of Moveable Type)
|
||||
[19]: https://www.theleagueofmoveabletype.com/
|
||||
[20]: https://opensource.com/sites/default/files/uploads/shotcut.jpg (Shotcut)
|
||||
[21]: https://shotcut.org/
|
||||
[22]: https://opensource.com/sites/default/files/uploads/drawio.jpg (Draw.io)
|
||||
[23]: http://www.draw.io/
|
||||
[24]: https://github.com/jgraph/drawio
|
||||
[25]: https://opensource.com/sites/default/files/uploads/olive.png (Olive)
|
||||
[26]: https://olivevideoeditor.org/020.php
|
||||
[27]: https://olivevideoeditor.org/
|
144
sources/tech/20210326 How to read and write files in C.md
Normal file
@ -0,0 +1,144 @@
|
||||
[#]: subject: (How to read and write files in C++)
|
||||
[#]: via: (https://opensource.com/article/21/3/ccc-input-output)
|
||||
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
How to read and write files in C++
|
||||
======
|
||||
If you know how to use I/O streams in C++, you can (in principle) handle
|
||||
any kind of I/O device.
|
||||
![Computer screen with files or windows open][1]
|
||||
|
||||
In C++, reading and writing to files can be done by using I/O streams in conjunction with the stream operators `>>` and `<<`. When reading or writing to files, those operators are applied to an instance of a class representing a file on the hard drive. This stream-based approach has a huge advantage: From a C++ perspective, it doesn't matter what you are reading from or writing to, whether it's a file, a database, the console, or another PC you are connected to over the network. Therefore, knowing how to write files using stream operators transfers to other areas.
|
||||
|
||||
### I/O stream classes
|
||||
|
||||
The C++ standard library provides the class [ios_base][2]. This class acts as the base class for all I/O stream-compatible classes, such as [basic_ofstream][3] and [basic_ifstream][4]. This example will use the specialized types for reading/writing characters, `ifstream` and `ofstream`.
|
||||
|
||||
* `ofstream` means _output file stream_, and it can be accessed with the insertion operator, `<<`.
|
||||
* `ifstream` means _input file stream_, and it can be accessed with the extraction operator, `>>`.
|
||||
|
||||
|
||||
|
||||
Both types are defined inside the header `<fstream>`.
|
||||
|
||||
A class that inherits from `ios_base` can be thought of as a data sink when writing to it or as a data source when reading from it, completely detached from the data itself. This object-oriented approach makes concepts such as [separation of concerns][5] and [dependency injection][6] easy to implement.
|
||||
|
||||
### A simple example
|
||||
|
||||
This example program is quite simple: It creates an `ofstream`, writes to it, creates an `ifstream`, and reads from it:
|
||||
|
||||
|
||||
```
|
||||
#include <iostream> // cout, cin, cerr etc...
|
||||
#include <fstream> // ifstream, ofstream
|
||||
#include <string>
|
||||
|
||||
int main()
|
||||
{
|
||||
std::string sFilename = "MyFile.txt";
|
||||
|
||||
/******************************************
|
||||
* *
|
||||
* WRITING *
|
||||
* *
|
||||
******************************************/
|
||||
|
||||
std::ofstream fileSink(sFilename); // Creates an output file stream
|
||||
|
||||
if (!fileSink) {
|
||||
std::cerr << "Cannot open " << sFilename << std::endl;
|
||||
exit(-1);
|
||||
}
|
||||
|
||||
/* std::endl will automatically append the correct EOL */
|
||||
fileSink << "Hello Open Source World!" << std::endl;
|
||||
|
||||
/******************************************
|
||||
* *
|
||||
* READING *
|
||||
* *
|
||||
******************************************/
|
||||
|
||||
std::ifstream fileSource(sFilename); // Creates an input file stream
|
||||
|
||||
if (!fileSource) {
|
||||
std::cerr << "Cannot open " << sFilename << std::endl;
|
||||
exit(-1);
|
||||
}
|
||||
else {
|
||||
// Intermediate buffer
|
||||
std::string buffer;
|
||||
|
||||
// By default, the >> operator reads word by word (up to whitespace)
|
||||
while (fileSource >> buffer)
|
||||
{
|
||||
std::cout << buffer << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
exit(0);
|
||||
}
|
||||
```
|
||||
|
||||
This code is available on [GitHub][7]. When you compile and execute it, you should get the following output:
|
||||
|
||||
![Console screenshot][8]
|
||||
|
||||
(Stephan Avenwedde, [CC BY-SA 4.0][9])
|
||||
|
||||
This is a simplified, beginner-friendly example. If you want to use this code in your own application, please note the following:
|
||||
|
||||
* The file streams are automatically closed at the end of the program. If your program continues running after you're done with a file, you should close the stream manually by calling the `close()` method.
|
||||
* These file stream classes inherit (over several levels) from [basic_ios][10], which overloads the `!` operator. This lets you implement a simple check of whether you can access the stream. On [cppreference.com][11], you can find an overview of when this check will (and won't) succeed, and you can implement further error handling.
|
||||
* By default, `ifstream` stops at whitespace and skips it. To read line by line until you reach [EOF][12], use the `getline(...)` method.
|
||||
* For reading and writing binary files, pass the `std::ios::binary` flag to the constructor. This prevents platform-specific [EOL][13] translation of the data.
|
||||
|
||||
|
||||
|
||||
### Writing from the system's perspective
|
||||
|
||||
When writing files, the data is written to the system's in-memory write buffer. When the [sync][14] system call is issued, the buffer's contents are written to the hard drive. This mechanism is also the reason you shouldn't remove a USB stick without telling the system. _sync_ is usually called on a regular basis by a daemon. If you really want to be on the safe side, you can also call _sync_ manually:
|
||||
|
||||
|
||||
```
|
||||
#include <unistd.h> // needs to be included
|
||||
|
||||
sync();
|
||||
```
|
||||
|
||||
### Summary
|
||||
|
||||
Reading and writing to files in C++ is not that complicated. Moreover, if you know how to deal with I/O streams, you also know (in principle) how to deal with any kind of I/O device. Libraries for various kinds of I/O devices let you use stream operators for easy access. This is why it is beneficial to know how I/O streams work.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/ccc-input-output
|
||||
|
||||
作者:[Stephan Avenwedde][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/hansic99
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
|
||||
[2]: https://en.cppreference.com/w/cpp/io/ios_base
|
||||
[3]: https://en.cppreference.com/w/cpp/io/basic_ofstream
|
||||
[4]: https://en.cppreference.com/w/cpp/io/basic_ifstream
|
||||
[5]: https://en.wikipedia.org/wiki/Separation_of_concerns
|
||||
[6]: https://en.wikipedia.org/wiki/Dependency_injection
|
||||
[7]: https://github.com/hANSIc99/cpp_input_output
|
||||
[8]: https://opensource.com/sites/default/files/uploads/c_console_screenshot.png (Console screenshot)
|
||||
[9]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[10]: https://en.cppreference.com/w/cpp/io/basic_ios
|
||||
[11]: https://en.cppreference.com/w/cpp/io/basic_ios/operator!
|
||||
[12]: https://en.wikipedia.org/wiki/End-of-file
|
||||
[13]: https://en.wikipedia.org/wiki/Newline
|
||||
[14]: https://en.wikipedia.org/wiki/Sync_%28Unix%29
|
@ -0,0 +1,108 @@
|
||||
[#]: subject: (Network address translation part 3 – the conntrack event framework)
|
||||
[#]: via: (https://fedoramagazine.org/conntrack-event-framework/)
|
||||
[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Network address translation part 3 – the conntrack event framework
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
This is the third post in a series about network address translation (NAT). The first article introduced [how to use the iptables/nftables packet tracing feature][2] to find the source of NAT-related connectivity problems. Part 2 [introduced the “conntrack” command][3]. This part gives an introduction to the “conntrack” event framework.
|
||||
|
||||
### Introduction
|
||||
|
||||
NAT configured via iptables or nftables builds on top of netfilter’s connection tracking framework. conntrack’s event facility allows real-time monitoring of incoming and outgoing flows. This event framework is useful for debugging or logging flow information, for instance with [ulog][4] and its IPFIX output plugin.
|
||||
|
||||
### Conntrack events
|
||||
|
||||
Run the following command to see a real-time conntrack event log:
|
||||
|
||||
```
|
||||
# conntrack -E
|
||||
NEW tcp 120 SYN_SENT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 [UNREPLIED] src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123
|
||||
UPDATE tcp 60 SYN_RECV src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123
|
||||
UPDATE tcp 432000 ESTABLISHED src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
|
||||
UPDATE tcp 120 FIN_WAIT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
|
||||
UPDATE tcp 30 LAST_ACK src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
|
||||
UPDATE tcp 120 TIME_WAIT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
|
||||
```
|
||||
|
||||
This prints a continuous stream of events:
|
||||
|
||||
* new connections
|
||||
* removal of connections
|
||||
* changes in a connection's state.
|
||||
|
||||
|
||||
|
||||
Hit _ctrl+c_ to quit.
|
||||
|
||||
The conntrack tool offers a number of options to limit the output. For example, it's possible to show only DESTROY events (`conntrack -E -e DESTROY`). The NEW event is generated after the iptables/nftables rule set accepts the corresponding packet.
|
||||
|
||||
### Conntrack expectations
|
||||
|
||||
Some legacy protocols require multiple connections to work, such as [FTP][5], [SIP][6] or [H.323][7]. To make these work in NAT environments, conntrack uses “connection tracking helpers”: kernel modules that can parse a specific higher-level protocol, such as FTP.
|
||||
|
||||
The _nf_conntrack_ftp_ module parses the ftp command connection and extracts the TCP port number that will be used for the file transfer. The helper module then inserts an “expectation” that consists of the extracted port number and the address of the ftp client. When a new data connection arrives, conntrack searches the expectation table for a match. An incoming connection that matches such an entry is flagged RELATED rather than NEW. This allows you to craft iptables and nftables rulesets that reject incoming connection requests unless they were requested by an existing connection. If the original connection is subject to NAT, the related data connection will inherit this as well. This means that helpers can expose ports on internal hosts that are otherwise unreachable from the wider internet. The next section will explain this expectation mechanism in more detail.
|
||||
|
||||
### The expectation table
|
||||
|
||||
Use _conntrack -L expect_ to list all active expectations. In most cases this table appears to be empty, even if a helper module is active. This is because expectation table entries are short-lived. Use _conntrack -E expect_ to monitor the system for changes in the expectation table instead.
|
||||
|
||||
Use this to determine if a helper is working as intended or to log conntrack actions taken by the helper. Here is an example output of a file download via ftp:
|
||||
```
# conntrack -E expect
NEW 300 proto=6 src=10.2.1.1 dst=10.8.4.12 sport=0 dport=46767 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=10.2.1.1 master-dst=10.8.4.12 sport=34526 dport=21 class=0 helper=ftp
DESTROY 299 proto=6 src=10.2.1.1 dst=10.8.4.12 sport=0 dport=46767 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=10.2.1.1 master-dst=10.8.4.12 sport=34526 dport=21 class=0 helper=ftp
```
|
||||
The expectation entry describes the criteria that an incoming connection request must meet in order to be recognized as a RELATED connection. In this example, the connection may come from any port but must go to port 46767 (the port the ftp server expects to receive the DATA connection request on). Furthermore, the source and destination addresses must match the addresses of the ftp client and server.
|
||||
|
||||
Events also include the connection that created the expectation and the name of the protocol helper (ftp). The helper has full control over the expectation: it can request full matching (IP addresses of the incoming connection must match), it can restrict to a subnet or even allow the request to come from any address. Check the “mask-dst” and “mask-src” parameters to see what parts of the addresses need to match.
|
||||
|
||||
### Caveats
|
||||
|
||||
You can configure some helpers to allow wildcard expectations. Such wildcard expectations result in requests coming from an unrelated third-party host being flagged as RELATED. This can open internal servers to the wider internet (“NAT slipstreaming”).
|
||||
|
||||
This is the reason helper modules require explicit configuration from the nftables/iptables ruleset. See [this article][8] for more information about helpers and how to configure them. It includes a table that describes the various helpers and the types of expectations (such as wildcard forwarding) they can create. The nftables wiki has a [nft ftp example][9].
|
||||
|
||||
An nftables rule like `ct state related ct helper "ftp"` matches connections that were detected as a result of an expectation created by the ftp helper.
|
||||
|
||||
In iptables, use “_-m conntrack --ctstate RELATED -m helper --helper ftp_”. Always restrict helpers to only allow communication to and from the expected server addresses. This prevents accidental exposure of other, unrelated hosts.
|
||||
|
||||
### Summary
|
||||
|
||||
This article introduced the conntrack event facility and gave examples of how to inspect the expectation table. The next part of the series will describe the low-level debug knobs of conntrack.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/conntrack-event-framework/
|
||||
|
||||
作者:[Florian Westphal][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/strlen/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/network-address-translation-part-3-816x345.jpg
|
||||
[2]: https://fedoramagazine.org/network-address-translation-part-1-packet-tracing/
|
||||
[3]: https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/
|
||||
[4]: https://netfilter.org/projects/ulogd/index.html
|
||||
[5]: https://en.wikipedia.org/wiki/File_Transfer_Protocol
|
||||
[6]: https://en.wikipedia.org/wiki/Session_Initiation_Protocol
|
||||
[7]: https://en.wikipedia.org/wiki/H.323
|
||||
[8]: https://github.com/regit/secure-conntrack-helpers/blob/master/secure-conntrack-helpers.rst
|
||||
[9]: https://wiki.nftables.org/wiki-nftables/index.php/Conntrack_helpers
|
@ -0,0 +1,70 @@
|
||||
[#]: subject: (Why you should care about service mesh)
|
||||
[#]: via: (https://opensource.com/article/21/3/service-mesh)
|
||||
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Why you should care about service mesh
|
||||
======
|
||||
Service mesh provides benefits for development and operations in
|
||||
microservices environments.
|
||||
![Net catching 1s and 0s or data in the clouds][1]
|
||||
|
||||
Many developers wonder why they should care about [service mesh][2]. It's a question I'm asked often in my presentations at developer meetups, conferences, and hands-on workshops about microservices development with cloud-native architecture. My answer is always the same: "As long as you want to simplify your microservices architecture, it should be running on Kubernetes."
|
||||
|
||||
Concerning simplification, you may also wonder why distributed microservices must be designed with such complexity to run on Kubernetes clusters. As this article explains, many developers solve the microservices architecture's complexity with a service mesh and gain additional benefits by adopting it in production.
|
||||
|
||||
### What is a service mesh?
|
||||
|
||||
A service mesh is a dedicated infrastructure layer for providing a transparent and code-independent (polyglot) way to eliminate nonfunctional microservices capabilities from the application code.
|
||||
|
||||
![Before and After Service Mesh][3]
|
||||
|
||||
(Daniel Oh, [CC BY-SA 4.0][4])
|
||||
|
||||
### Why service mesh matters to developers
|
||||
|
||||
When developers deploy microservices to the cloud, they have to address nonfunctional microservices capabilities to avoid cascading failures, regardless of business functionalities. Those capabilities typically can be represented in service discovery, logging, monitoring, resiliency, authentication, elasticity, and tracing. Developers must spend more time adding them to each microservice rather than developing actual business logic, which makes the microservices heavy and complex.
|
||||
|
||||
As organizations accelerate their move to the cloud, the service mesh can increase developer productivity. Instead of making the services responsible for dealing with those complexities and adding more code into each service to deal with cloud-native concerns, the Kubernetes + service mesh platform is responsible for providing those services to any application (existing or new, in any programming language or framework) running on the platform. Then the microservices can be lightweight and focus on their business logic rather than cloud-native complexities.
|
||||
|
||||
### Why service mesh matters to ops
|
||||
|
||||
This doesn't answer why ops teams need to care about the service mesh for operating cloud-native microservices on Kubernetes. It's because the ops teams have to ensure robust security, compliance, and observability for spreading new cloud-native applications across large hybrid and multi clouds on Kubernetes environments.
|
||||
|
||||
The service mesh is composed of a control plane for managing proxies to route traffic and a data plane for injecting sidecars. The sidecars allow the ops teams to do things like adding third-party security tools and tracing traffic in all service communications to avoid security breaches or compliance issues. The service mesh also improves observation capabilities by visualizing tracing metrics on graphical dashboards.
|
||||
|
||||
### How to get started with service mesh
|
||||
|
||||
Service mesh manages cloud-native capabilities more efficiently—for developers and operators and from application development to platform operation.
|
||||
|
||||
You might want to know where to get started adopting service mesh in alignment with your microservices applications and architecture. Luckily, there are many open source service mesh projects. Many cloud service providers also offer service mesh capabilities within their Kubernetes platforms.
|
||||
|
||||
![CNCF Service Mesh Landscape][5]
|
||||
|
||||
(Daniel Oh, [CC BY-SA 4.0][4])
|
||||
|
||||
You can find links to the most popular service mesh projects and services on the [CNCF Service Mesh Landscape][6] webpage.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/service-mesh
|
||||
|
||||
作者:[Daniel Oh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/daniel-oh
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
|
||||
[2]: https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh
|
||||
[3]: https://opensource.com/sites/default/files/uploads/vm-vs-service-mesh.png (Before and After Service Mesh)
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/service-mesh-providers.png (CNCF Service Mesh Landscape)
|
||||
[6]: https://landscape.cncf.io/card-mode?category=service-mesh&grouping=category
|
@ -0,0 +1,429 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wyxplus)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to automate your cryptocurrency trades with Python)
|
||||
[#]: via: (https://opensource.com/article/20/4/python-crypto-trading-bot)
|
||||
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
|
||||
|
||||
|
||||
如何使用 Python 来自动交易加密货币
|
||||
======
|
||||
|
||||
本教程会教你如何设置和使用 Pythonic 来编程。Pythonic 是一个图形化编程工具,用户可以很容易地用现成的函数模块创建 Python 程序。
|
||||
|
||||
![scientific calculator][1]
|
||||
|
||||
与纽约证券交易所这类有固定交易时间的传统证券交易所不同,加密货币是 7×24 小时交易的,任何人都无法独自盯着市场。
|
||||
|
||||
在以前,我经常思考与加密货币交易相关的问题:
|
||||
|
||||
- 一夜之间发生了什么?
|
||||
- 为什么没有日志记录?
|
||||
- 为什么下单?
|
||||
- 为什么不下单?
|
||||
|
||||
通常的解决手段是,当你在做其他事情时,例如睡觉、与家人在一起或享受空闲时光,用加密货币交易机器人代替你下单。虽然有很多商业解决方案可用,但我想要一个开源的解决方案,因此我编写了加密货币交易机器人 [Pythonic][2]。正如去年 [我写过的文章][3] 所说,“Pythonic 是一种图形化编程工具,它让用户可以轻松使用现成的功能模块来创建 Python 应用程序。”它最初就是作为加密货币交易机器人的应用场景而诞生的,具有可扩展的日志记录引擎,以及经过精心测试的可重用部件,例如调度器和计时器。
|
||||
|
||||
### 开始
|
||||
|
||||
本教程将教你如何开始使用 Pythonic 进行自动交易。我以 <ruby>[币安][6]<rt>Binance</rt></ruby> 交易所的 <ruby>[波场][4]<rt>Tron</rt></ruby> 与 <ruby>[比特币][5]<rt>Bitcoin</rt></ruby>
|
||||
|
||||
交易对为例。我之所以选择这些加密货币,是因为它们彼此之间的波动性大,而不是出于个人喜好。
|
||||
|
||||
机器人将根据 [指数移动平均][7] (EMAs)来做出决策。
|
||||
|
||||
![TRX/BTC 1-hour candle chart][8]
|
||||
|
||||
TRX/BTC 1 小时 K 线图
|
||||
|
||||
EMA 指标通常是指加权移动平均线,可以对近期价格数据赋予更多权重。尽管移动平均线可能只是一个简单的指标,但我能熟练使用它。
|
||||
|
||||
上图中的紫色线显示了 EMA-25 指标(这表示要考虑最近的 25 个值)。
|
||||
|
||||
机器人监视当前的 EMA-25 值(t0)和前一个 EMA-25 值(t-1)之间的差距。如果差值超过某个值,则表示价格上涨,机器人将下达购买订单。如果差值低于某个值,则机器人将下达卖单。
|
||||
|
||||
差值将是做出交易决策的主要指标。在本教程中,它称为交易参数。
|
||||
|
||||
### 工具链
|
||||
|
||||
|
||||
|
||||
在本教程中,将使用如下工具:
|
||||
|
||||
- 币安专业交易视图(已经有其他人做了数据可视化,所以不需要重复造轮子)
|
||||
- Jupyter Notebook:用于数据科学任务
|
||||
- Pythonic:作为整体框架
|
||||
- PythonicDaemon:作为终端运行(仅适用于控制台和 Linux)
|
||||
|
||||
|
||||
|
||||
### 数据挖掘
|
||||
|
||||
为了使加密货币交易机器人尽可能做出正确的决定,以可靠的方式获取资产的美国线([OHLC][9])数据是至关重要的。你可以使用 Pythonic 的内置元素,还可以根据自己的逻辑来对其进行扩展。
|
||||
|
||||
一般的工作流程:
|
||||
|
||||
1. 与币安时间同步
|
||||
2. 下载 OHLC 数据
|
||||
3. 从文件中把 OHLC 数据加载到内存
|
||||
4. 比较数据集并扩展更新数据集
|
||||
|
||||
|
||||
|
||||
这个工作流程可能有点夸张,但是它能使得程序更加健壮,甚至在停机和断开连接时,也能平稳运行。
|
||||
|
||||
一开始,你需要 <ruby>**币安 OHLC 查询**<rt>Binance OHLC Query</rt></ruby> 元素和一个 <ruby>**基础操作**<rt>Basic Operation</rt></ruby> 元素来执行你的代码。
|
||||
|
||||
![Data-mining workflow][10]
|
||||
|
||||
数据挖掘工作流程
|
||||
|
||||
OHLC 查询设置为每隔一小时查询一次 **TRXBTC** 资产对(波场/比特币)。
|
||||
|
||||
![Configuration of the OHLC query element][11]
|
||||
|
||||
配置 OHLC 查询元素
|
||||
|
||||
其中输出的元素是 [Pandas DataFrame][12]。你可以在 **基础操作** 元素中使用 <ruby>**输入**<rt>input</rt></ruby> 变量来访问该 DataFrame。这里将 Vim 设置为 **基础操作** 元素的默认代码编辑器。
|
||||
|
||||
![Basic Operation element set up to use Vim][13]
|
||||
|
||||
使用 Vim 编辑基础操作元素
|
||||
|
||||
具体代码如下:
|
||||
|
||||
|
||||
```
|
||||
import pickle, pathlib, os
|
||||
import pandas as pd
|
||||
|
||||
output = None
|
||||
|
||||
if isinstance(input, pd.DataFrame):
|
||||
file_name = 'TRXBTC_1h.bin'
|
||||
home_path = str(pathlib.Path.home())
|
||||
data_path = os.path.join(home_path, file_name)
|
||||
|
||||
try:
|
||||
df = pickle.load(open(data_path, 'rb'))
|
||||
n_row_cnt = df.shape[0]
|
||||
df = pd.concat([df,input], ignore_index=True).drop_duplicates(['close_time'])
|
||||
df.reset_index(drop=True, inplace=True)
|
||||
n_new_rows = df.shape[0] - n_row_cnt
|
||||
log_txt = '{}: {} new rows written'.format(file_name, n_new_rows)
|
||||
except Exception as e:
|
||||
log_txt = 'File error - writing new one: {}'.format(e)
|
||||
df = input
|
||||
|
||||
pickle.dump(df, open(data_path, "wb" ))
|
||||
output = df
|
||||
```
|
||||
|
||||
首先,检查输入是否为 DataFrame 类型。然后在用户的家目录(`~/`)中查找名为 **TRXBTC_1h.bin** 的文件。如果存在,则将其打开,执行新代码段(**try** 部分中的代码),并删除重复项。如果文件不存在,则触发异常并执行 **except** 部分中的代码,创建一个新文件。
|
||||
|
||||
只要启用了 <ruby>**日志输出**<rt>log output</rt></ruby> 复选框,你就可以使用命令行工具 **tail** 查看日志记录:
|
||||
|
||||
|
||||
```
|
||||
`$ tail -f ~/Pythonic_2020/Feb/log_2020_02_19.txt`
|
||||
```
|
||||
|
||||
出于开发目的,现在跳过与币安时间的同步和计划执行,这将在下面实现。
|
||||
|
||||
### 准备数据
|
||||
|
||||
下一步是在单独的<ruby>网格<rt>Grid</rt></ruby>中处理评估逻辑。因此,你必须借助 <ruby>**返回**<rt>Return</rt></ruby> 元素将 DataFrame 从网格 1 传递到网格 2 的第一个元素。
|
||||
|
||||
在网格 2 中,让 DataFrame 通过 <ruby>**基础技术分析**<rt>Basic Technical Analysis</rt></ruby> 元素,为 DataFrame 扩展出包含 EMA 值的一列。
|
||||
|
||||
![Technical analysis workflow in Grid 2][14]
|
||||
|
||||
在网格 2 中技术分析工作流程
|
||||
|
||||
配置技术分析元素以计算 25 个值的 EMAs。
|
||||
|
||||
![Configuration of the technical analysis element][15]
|
||||
|
||||
配置技术分析元素
|
||||
|
||||
当你运行整个程序并开启 <ruby>**技术分析**<rt>Technical Analysis</rt></ruby> 元素的调试输出时,你将发现 EMA-25 列的值似乎都相同。
|
||||
|
||||
![Missing decimal places in output][16]
|
||||
|
||||
输出中精度不够
|
||||
|
||||
这是因为调试输出中的 EMA-25 值只显示六位小数,即使其内部保留的是 8 字节完整精度的浮点值。
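|
||||
可以用一小段 Python 来演示这种“只显示六位小数”的现象(数值为任意示例,并非真实行情数据):

```python
value = 0.00000533214   # 内部是完整精度的 8 字节浮点数
print(f"{value:.6f}")   # 只保留六位小数时,显示为 0.000005
print(repr(value))      # 完整精度仍然存在:5.33214e-06
```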
|
||||
|
||||
为了能进行进一步处理,请添加 **基础操作** 元素:
|
||||
|
||||
![Workflow in Grid 2][17]
|
||||
|
||||
网格 2 中的工作流程
|
||||
|
||||
使用 **基础操作** 元素,将 DataFrame 与添加的 EMA-25 列一起转储,以便可以将其加载到 Jupyter Notebook 中:
|
||||
|
||||
![Dump extended DataFrame to file][18]
|
||||
|
||||
将扩展后的 DataFrame 存储到文件中
|
||||
|
||||
### 评估策略
|
||||
|
||||
在 Jupyter Notebook 中开发评估策略,让你可以更直接地访问代码。要加载 DataFrame,你需要使用如下代码:
|
||||
|
||||
![Representation with all decimal places][19]
|
||||
|
||||
用全部小数位表示
|
||||
|
||||
你可以使用 [**iloc**][20] 和列名来访问最新的 EMA-25 值,并且会保留所有小数位。
|
||||
|
||||
你已经知道如何来获得最新的数据。上面示例的最后一行仅显示该值。为了能将该值拷贝到不同的变量中,你必须使用如下图所示的 **.at** 方法方能成功。
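|
||||
下面的 Python 草图演示了这两种访问方式(其中 DataFrame 的内容为演示假设,代替从文件加载的真实数据):

```python
import pandas as pd

# 假设的 DataFrame,包含收盘价和 EMA-25 两列
df = pd.DataFrame({"close":  [0.00000532, 0.00000534],
                   "EMA-25": [0.00000531, 0.00000533]})

ema_iloc = df["EMA-25"].iloc[-1]          # 按位置访问最新的 EMA-25 值
ema_at = df.at[df.index[-1], "EMA-25"]    # 用 .at 把标量值拷贝到变量中

print(ema_iloc, ema_at)                   # 两种方式得到同一个值
```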
|
||||
|
||||
你也可以直接计算出你下一步所需的交易参数。
|
||||
|
||||
![Buy/sell decision][21]
|
||||
|
||||
买卖决策
|
||||
|
||||
### 确定交易参数
|
||||
|
||||
如上面代码所示,我选择 0.009 作为交易参数。但是我怎么知道 0.009 是一个好的交易参数呢?实际上,这个参数确实很糟糕,因此,你可以直接暴力计算出表现最佳的交易参数。
|
||||
|
||||
假设你将根据收盘价进行买卖。
|
||||
|
||||
![Validation function][22]
|
||||
|
||||
回测功能
|
||||
|
||||
在此示例中,**buy_factor** 和 **sell_factor** 是预先定义好的。因此,让我们扩展一下思路,直接暴力计算出表现最佳的参数。
|
||||
|
||||
![Nested for loops for determining the buy and sell factor][23]
|
||||
|
||||
嵌套的 _for_ 循环,用于确定购买和出售的参数
|
||||
|
||||
这要跑 81 个循环(9x9),在我的机器(Core i7 267QM)上花费了几分钟。
|
||||
|
||||
![System utilization while brute forcing][24]
|
||||
|
||||
在暴力运算时系统的利用率
|
||||
|
||||
在每个循环之后,它将由 **buy_factor**、**sell_factor** 以及生成的<ruby>**利润**<rt>profit</rt></ruby>组成的元组追加到 **trading_factors** 列表中,然后按利润降序对列表进行排序。
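|
||||
这种暴力搜索的思路可以用下面这个自包含的 Python 草图来说明。注意:其中的 `backtest` 回测函数只是极度简化的示意,并非文中截图里的实现;价格数据也是合成的正弦序列:

```python
import math

def backtest(closes, buy_factor, sell_factor, span=25):
    """极简回测:EMA 差值大于 buy_factor 时买入,小于 -sell_factor 时卖出。"""
    alpha = 2 / (span + 1)
    ema = closes[0]
    profit, coins, invested = 0.0, 0.0, False
    for price in closes[1:]:
        prev, ema = ema, ema + alpha * (price - ema)
        diff = ema - prev
        if diff > buy_factor and not invested:
            coins, invested = 1.0 / price, True   # 用 1 个单位的计价货币买入
        elif diff < -sell_factor and invested:
            profit += coins * price - 1.0         # 卖出并累计盈亏
            coins, invested = 0.0, False
    return profit

# 合成的收盘价:围绕 1.0 波动的正弦曲线
closes = [1.0 + 0.05 * math.sin(i / 3) for i in range(200)]

factors = [f / 1000 for f in range(1, 10)]        # 0.001 .. 0.009
trading_factors = []
for buy_factor in factors:                        # 9 × 9 = 81 个循环
    for sell_factor in factors:
        trading_factors.append(
            (buy_factor, sell_factor, backtest(closes, buy_factor, sell_factor)))

trading_factors.sort(key=lambda t: t[2], reverse=True)  # 按利润降序排序
print(trading_factors[0])                               # 表现最佳的参数组合
```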
|
||||
|
||||
![Sort profit with related trading factors in descending order][25]
|
||||
|
||||
将利润与相关的交易参数按降序排序
|
||||
|
||||
当你打印出列表时,你会看到 0.002 是最好的参数。
|
||||
|
||||
![Sorted list of trading factors and profit][26]
|
||||
|
||||
交易要素和收益的有序列表
|
||||
|
||||
当我在 2020 年 3 月写下这篇文章时,价格的波动还不足以呈现出更理想的结果。我在 2 月份得到了更好的结果,但即使在那个时候,表现最好的交易参数也在 0.002 左右。
|
||||
|
||||
### 分割执行路径
|
||||
|
||||
现在开始新建一个网格以保持逻辑清晰。使用 **返回** 元素将带有 EMA-25 列的 DataFrame 从网格 2 传递到网格 3 的 0A 元素。
|
||||
|
||||
在网格 3 中,添加 **基础操作** 元素以执行评估逻辑。这是该元素中的代码:
|
||||
|
||||
![Implemented evaluation logic][27]
|
||||
|
||||
实现评估策略
|
||||
|
||||
如果输出为 **1**,表示你应该买入;如果输出为 **-1**,则表示你应该卖出;输出 **0** 则表示现在无需操作。使用<ruby>**分支**<rt>Branch</rt></ruby>元素来控制执行路径。
|
||||
|
||||
![Branch element: Grid 3 Position 2A][28]
|
||||
|
||||
Branch 元素:网格 3,2A 位置
|
||||
|
||||
|
||||
|
||||
因为 **0** 和 **-1** 的处理流程一样,所以你需要在最右边添加一个分支元素来判断你是否应该卖出。
|
||||
|
||||
![Branch element: Grid 3 Position 3B][29]
|
||||
|
||||
分支元素:网格 3,3B 位置
|
||||
|
||||
网格 3 应该现在如下图所示:
|
||||
|
||||
![Workflow on Grid 3][30]
|
||||
|
||||
网格 3 的工作流程
|
||||
|
||||
### 下单
|
||||
|
||||
由于无需在一个周期中购买两次,因此必须在周期之间保留一个持久变量,以指示你是否已经购买。
|
||||
|
||||
你可以利用<ruby>**栈**<rt>Stack</rt></ruby>元素来实现。顾名思义,栈元素是一个基于文件的栈,可以存放任何 Python 数据类型。
|
||||
|
||||
你需要定义栈中只保存一个布尔值,该布尔值表示是已经买入(**True**)还是尚未买入(**False**)。因此,你必须用 **False** 来初始化栈。例如,你可以在网格 4 中简单地将 **False** 传递给栈来完成初始化。
|
||||
![Forward a False-variable to the subsequent Stack element][31]
|
||||
|
||||
将 **False** 变量传输到后续的栈元素中
|
||||
|
||||
在分支树后的栈实例可以进行如下配置:
|
||||
|
||||
![Configuration of the Stack element][32]
|
||||
|
||||
设置栈元素
|
||||
|
||||
在栈元素设置中,将 **Do this with input** 设置成 **Nothing**。否则,布尔值将被 1 或 0 覆盖。
|
||||
|
||||
该设置确保仅将一个值保存于栈中(**True** 或 **False**),并且只能读取一个值(为了清楚起见)。
|
||||
|
||||
在栈元素之后,你需要另外一个 **分支** 元素来判断栈的值,然后再放置<ruby>**币安订单**<rt>Binance Order</rt></ruby>元素。
|
||||
|
||||
![Evaluate the variable from the stack][33]
|
||||
|
||||
判断栈中的变量
|
||||
|
||||
将币安订单元素添加到分支元素的 **True** 路径。网格 3 上的工作流现在应如下所示:
|
||||
|
||||
![Workflow on Grid 3][34]
|
||||
|
||||
网格 3 的工作流程
|
||||
|
||||
币安订单元素应如下配置:
|
||||
|
||||
![Configuration of the Binance Order element][35]
|
||||
|
||||
编辑币安订单元素
|
||||
|
||||
你可以在币安网站上的帐户设置中生成 API 和密钥。
|
||||
|
||||
![Creating an API key in Binance][36]
|
||||
|
||||
在币安账户设置中创建一个 API key
|
||||
|
||||
在本文中,每笔交易都以市价单执行,交易量为 10,000 TRX(在 2020 年 3 月约值 150 美元)。(出于教学目的,我使用市价单来演示整个过程;在实际使用中,我建议至少使用限价单。)
|
||||
|
||||
如果未正确执行下单(例如,网络问题、资金不足或货币对不正确),则不会触发后续元素。因此,你可以假定如果触发了后续元素,则表示该订单已下达。
|
||||
|
||||
这是一个成功的 XMRBTC 卖单的输出示例:
|
||||
|
||||
![Output of a successfully placed sell order][37]
|
||||
|
||||
成功卖单的输出
|
||||
|
||||
这种行为使后续步骤更加简单:你可以始终假设,只要触发了输出,就表示订单下达成功。因此,你可以添加一个 **基础操作** 元素,该元素简单地输出 **True**,并将此值放入栈中以表示是否已下单。
|
||||
|
||||
如果出现错误的话,你可以在日志信息中查看具体细节(如果启用日志功能)。
|
||||
|
||||
![Logging output of Binance Order element][38]
|
||||
|
||||
币安订单元素中的输出日志信息
|
||||
|
||||
### 调度和同步
|
||||
|
||||
对于日程调度和同步,请在网格 1 中把<ruby>**币安调度器**<rt>Binance Scheduler</rt></ruby>元素放在整个工作流程的最前面。
|
||||
|
||||
![Binance Scheduler at Grid 1, Position 1A][39]
|
||||
|
||||
在网格 1,1A 位置的币安调度器
|
||||
|
||||
由于币安调度器元素只执行一次,因此请在网格 1 的末尾拆分执行路径,并通过将输出传递回币安调度器来强制让其重新同步。
|
||||
|
||||
![Grid 1: Split execution path][40]
|
||||
|
||||
网格 1:拆分执行路径
|
||||
|
||||
5A 元素指向 网格 2 的 1A 元素,并且 5B 元素指向网格 1 的 1A 元素(币安调度器)。
|
||||
|
||||
### 部署
|
||||
|
||||
你可以在本地计算机上全天候 7×24 小时运行整个程序,也可以将其完全托管在廉价的云系统上。例如,你可以使用 Linux/FreeBSD 云系统,每月约 5 美元,但通常不提供图形化界面。如果你想利用这些低成本的云,可以使用 PythonicDaemon,它能在终端中完全运行。
|
||||
|
||||
![PythonicDaemon console interface][41]
|
||||
|
||||
PythonicDaemon 控制台
|
||||
|
||||
PythonicDaemon 是基础程序的一部分。要使用它,请保存完整的工作流程,将其传输到远程运行的系统中(例如通过<ruby>安全拷贝协议<rt>Secure Copy</rt></ruby>(SCP)),然后把工作流程文件作为参数来启动 PythonicDaemon:
|
||||
|
||||
|
||||
```
|
||||
`$ PythonicDaemon trading_bot_one`
|
||||
```
|
||||
|
||||
为了能在系统启动时自启 PythonicDaemon,可以将一个条目添加到 crontab 中:
|
||||
|
||||
|
||||
```
|
||||
`# crontab -e`
|
||||
```
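|
||||
条目本身大致如下。`@reboot` 是 cron 的一个常见扩展,表示在系统启动时运行一次;具体的可执行文件路径和工作流程文件路径取决于你的安装位置,这里仅为假设的示例:

```
@reboot /usr/local/bin/PythonicDaemon /home/user/trading_bot_one
```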
|
||||
|
||||
![Crontab on Ubuntu Server][42]
|
||||
|
||||
在 Ubuntu 服务器上的 Crontab
|
||||
|
||||
### 下一步
|
||||
|
||||
正如我在一开始时所说的,本教程只是自动交易的入门。对交易机器人进行编程大约需要 10% 的编程和 90% 的测试。当涉及到让你的机器人用金钱交易时,你肯定会对编写的代码再三思考。因此,我建议你编码时要尽可能简单和易于理解。
|
||||
|
||||
|
||||
|
||||
如果你想自己继续开发交易机器人,接下来所需要做的事:
|
||||
|
||||
- 收益自动计算(希望你有正收益!)
|
||||
- 计算你想买的价格
|
||||
- 比较你的预订单(例如,订单是否填写完整?)
|
||||
|
||||
|
||||
|
||||
你可以从 [GitHub][2] 上获取完整代码。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/4/python-crypto-trading-bot
|
||||
|
||||
作者:[Stephan Avenwedde][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wyxplus](https://github.com/wyxplus)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/hansic99
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calculator_money_currency_financial_tool.jpg?itok=2QMa1y8c "scientific calculator"
|
||||
[2]: https://github.com/hANSIc99/Pythonic
|
||||
[3]: https://opensource.com/article/19/5/graphically-programming-pythonic
|
||||
[4]: https://tron.network/
|
||||
[5]: https://bitcoin.org/en/
|
||||
[6]: https://www.binance.com/
|
||||
[7]: https://www.investopedia.com/terms/e/ema.asp
|
||||
[8]: https://opensource.com/sites/default/files/uploads/1_ema-25.png "TRX/BTC 1-hour candle chart"
|
||||
[9]: https://en.wikipedia.org/wiki/Open-high-low-close_chart
|
||||
[10]: https://opensource.com/sites/default/files/uploads/2_data-mining-workflow.png "Data-mining workflow"
|
||||
[11]: https://opensource.com/sites/default/files/uploads/3_ohlc-query.png "Configuration of the OHLC query element"
|
||||
[12]: https://pandas.pydata.org/pandas-docs/stable/getting_started/dsintro.html#dataframe
|
||||
[13]: https://opensource.com/sites/default/files/uploads/4_edit-basic-operation.png "Basic Operation element set up to use Vim"
|
||||
[14]: https://opensource.com/sites/default/files/uploads/6_grid2-workflow.png "Technical analysis workflow in Grid 2"
|
||||
[15]: https://opensource.com/sites/default/files/uploads/7_technical-analysis-config.png "Configuration of the technical analysis element"
|
||||
[16]: https://opensource.com/sites/default/files/uploads/8_missing-decimals.png "Missing decimal places in output"
|
||||
[17]: https://opensource.com/sites/default/files/uploads/9_basic-operation-element.png "Workflow in Grid 2"
|
||||
[18]: https://opensource.com/sites/default/files/uploads/10_dump-extended-dataframe.png "Dump extended DataFrame to file"
|
||||
[19]: https://opensource.com/sites/default/files/uploads/11_load-dataframe-decimals.png "Representation with all decimal places"
|
||||
[20]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html
|
||||
[21]: https://opensource.com/sites/default/files/uploads/12_trade-factor-decision.png "Buy/sell decision"
|
||||
[22]: https://opensource.com/sites/default/files/uploads/13_validation-function.png "Validation function"
|
||||
[23]: https://opensource.com/sites/default/files/uploads/14_brute-force-tf.png "Nested for loops for determining the buy and sell factor"
|
||||
[24]: https://opensource.com/sites/default/files/uploads/15_system-utilization.png "System utilization while brute forcing"
|
||||
[25]: https://opensource.com/sites/default/files/uploads/16_sort-profit.png "Sort profit with related trading factors in descending order"
|
||||
[26]: https://opensource.com/sites/default/files/uploads/17_sorted-trading-factors.png "Sorted list of trading factors and profit"
|
||||
[27]: https://opensource.com/sites/default/files/uploads/18_implemented-evaluation-logic.png "Implemented evaluation logic"
|
||||
[28]: https://opensource.com/sites/default/files/uploads/19_output.png "Branch element: Grid 3 Position 2A"
|
||||
[29]: https://opensource.com/sites/default/files/uploads/20_editbranch.png "Branch element: Grid 3 Position 3B"
|
||||
[30]: https://opensource.com/sites/default/files/uploads/21_grid3-workflow.png "Workflow on Grid 3"
|
||||
[31]: https://opensource.com/sites/default/files/uploads/22_pass-false-to-stack.png "Forward a False-variable to the subsequent Stack element"
|
||||
[32]: https://opensource.com/sites/default/files/uploads/23_stack-config.png "Configuration of the Stack element"
|
||||
[33]: https://opensource.com/sites/default/files/uploads/24_evaluate-stack-value.png "Evaluate the variable from the stack"
|
||||
[34]: https://opensource.com/sites/default/files/uploads/25_grid3-workflow.png "Workflow on Grid 3"
|
||||
[35]: https://opensource.com/sites/default/files/uploads/26_binance-order.png "Configuration of the Binance Order element"
|
||||
[36]: https://opensource.com/sites/default/files/uploads/27_api-key-binance.png "Creating an API key in Binance"
|
||||
[37]: https://opensource.com/sites/default/files/uploads/28_sell-order.png "Output of a successfully placed sell order"
|
||||
[38]: https://opensource.com/sites/default/files/uploads/29_binance-order-output.png "Logging output of Binance Order element"
|
||||
[39]: https://opensource.com/sites/default/files/uploads/30_binance-scheduler.png "Binance Scheduler at Grid 1, Position 1A"
|
||||
[40]: https://opensource.com/sites/default/files/uploads/31_split-execution-path.png "Grid 1: Split execution path"
|
||||
[41]: https://opensource.com/sites/default/files/uploads/32_pythonic-daemon.png "PythonicDaemon console interface"
|
||||
[42]: https://opensource.com/sites/default/files/uploads/33_crontab.png "Crontab on Ubuntu Server"
|
@ -1,135 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (stevenzdg988)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (6 best practices for managing Git repos)
|
||||
[#]: via: (https://opensource.com/article/20/7/git-repos-best-practices)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
6 个最佳的 Git 仓库管理实践
|
||||
======
|
||||
管住向 Git 仓库中添加一切内容的冲动,否则仓库会变得难以管理;
|
||||
这里有一些替代做法。
|
||||
![在家中使用笔记本电脑工作][1]
|
||||
|
||||
能够访问源代码,使得分析和保障应用程序的安全成为可能。但是,如果没有人真正去看这些代码,问题就不会被发现;即使有人积极地看代码,通常也有太多东西要看。幸运的是,GitHub 拥有一个活跃的安全团队,最近,他们 [发现了已提交到多个 Git 存储库中的特洛伊木马病毒][2],这些木马甚至骗过了仓库的所有者。尽管我们无法控制其他人如何管理自己的存储库,但我们可以从他们的错误中吸取教训。为此,本文回顾了将文件添加到自己的存储库中时的一些最佳实践。
|
||||
|
||||
### 了解您的仓库
|
||||
|
||||
![Git 存储库终端][3]
|
||||
|
||||
了解自己的仓库,可以说是安全 Git 存储库的 Rule Zero(头号规则)。作为项目维护者,无论项目是您自己发起的还是接手自别人的,您的工作都是了解自己存储库中的内容。您也许无法背出代码库中每个文件的清单,但是您需要了解所管理内容的基本组成。这样,如果在几十个合并之后出现了一个来历不明的文件,您就能很容易地识别出它,因为您不知道它的用途,进而会去检查它以刷新记忆。发生这种情况时,请查看该文件,并确保准确了解为什么它是必要的。
|
||||
|
||||
### 禁止二进制大文件
|
||||
|
||||
![终端中 Git 的二进制检查命令][4]
|
||||
|
||||
Git 是为文本而生的,无论是用纯文本编写的 C、Python 或 Java 代码,还是 JSON、YAML、XML、Markdown、HTML 或类似的文本。Git 对于二进制文件则不太理想。
|
||||
|
||||
两者之间的区别是:
|
||||
|
||||
```
|
||||
$ cat hello.txt
|
||||
This is plain text.
|
||||
It's readable by humans and machines alike.
|
||||
Git knows how to version this.
|
||||
|
||||
$ git diff hello.txt
|
||||
diff --git a/hello.txt b/hello.txt
|
||||
index f227cc3..0d85b44 100644
|
||||
\--- a/hello.txt
|
||||
+++ b/hello.txt
|
||||
@@ -1,2 +1,3 @@
|
||||
This is plain text.
|
||||
+It's readable by humans and machines alike.
|
||||
Git knows how to version this.
|
||||
```
|
||||
|
||||
和
|
||||
|
||||
```
|
||||
$ git diff pixel.png
|
||||
diff --git a/pixel.png b/pixel.png
|
||||
index 563235a..7aab7bc 100644
|
||||
Binary files a/pixel.png and b/pixel.png differ
|
||||
|
||||
$ cat pixel.png
|
||||
<EFBFBD>PNG
|
||||
▒
|
||||
IHDR7n<EFBFBD>$gAMA<4D><41>
|
||||
<20>abKGD݊<44>tIME<4D>
|
||||
|
||||
-2R<32><52>
|
||||
IDA<EFBFBD>c`<60>!<21>3%tEXtdate:create2020-06-11T11:45:04+12:00<30><30>r.%tEXtdate:modify2020-06-11T11:45:04+12:00<30><30>ʒIEND<4E>B`<60>
|
||||
```
|
||||
|
||||
二进制文件中的数据无法像纯文本那样逐行解析,因此,只要二进制文件发生任何更改,就必须重写整个文件。一个版本与另一个版本之间的“差异”就是整个文件,仓库体积会因此快速增长。
|
||||
|
||||
更糟糕的是,Git 存储库维护者无法合理地审计二进制数据。这违反了 Rule Zero (头号规则):应该对存储库的内容了如指掌。
|
||||
|
||||
除了常用的 [POSIX(可移植性操作系统接口)][5] 工具之外,您还可以使用 `git diff` 检测二进制文件。当您尝试使用 `--numstat` 选项来比较二进制文件时,Git 返回空结果:
|
||||
|
||||
```
|
||||
$ git diff --numstat /dev/null pixel.png | tee
|
||||
\- - /dev/null => pixel.png
|
||||
$ git diff --numstat /dev/null file.txt | tee
|
||||
5788 0 /dev/null => list.txt
|
||||
```
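|
||||
如果想在脚本中自动识别二进制文件,也可以借用与 Git 类似的启发式方法:文件开头若包含 NUL 字节,就按二进制处理。下面是一个 Python 示意(文件名为演示假设):

```python
import tempfile
from pathlib import Path

def looks_binary(path, sniff: int = 8000) -> bool:
    """与 Git 类似的启发式:开头几 KB 中若含 NUL 字节则视为二进制文件。"""
    with open(path, "rb") as f:
        return b"\x00" in f.read(sniff)

# 演示:一个纯文本文件和一个含 NUL 字节的“图片”文件
tmp = Path(tempfile.gettempdir())
(tmp / "note.txt").write_text("This is plain text.\n")
(tmp / "pixel.bin").write_bytes(b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR")

print(looks_binary(tmp / "note.txt"))   # False
print(looks_binary(tmp / "pixel.bin"))  # True
```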
|
||||
|
||||
如果您正在考虑将二进制大文件提交到存储库,请先停下来想一想。如果它是二进制文件,那它是由什么生成的?是否有充分的理由不把它提交到仓库,而是在构建时生成它?如果您确定提交二进制数据是可行的,请确保在 README 或类似文件中说明二进制文件的位置、它们为什么是二进制的,以及更新它们的协议。更新必须谨慎执行,因为每向二进制大文件提交一次更改,它占用的存储空间实际上都会翻倍。
|
||||
|
||||
### 保留第三方库
|
||||
|
||||
第三方库也不例外。尽管“可以不受限制地重用和重新分发不是自己编写的代码”是开源的众多优点之一,但还是有很多充分的理由不把第三方库保存在自己的存储库中。首先,除非您自己检查了所有代码(以及将来的合并),否则您无法为这个第三方库作担保。其次,当您将第三方库复制到自己的 Git 存储库中时,会将焦点从真正的上游源代码分离出来。从技术上讲,只有主库的副本是可信的,而不是某个随机存储库里的副本。如果您需要锁定库的特定版本,请给开发人员提供项目所需版本的合理 URL,或者使用 [Git 子模块][6]。
|
||||
|
||||
### 抵制盲目的 `git add`
|
||||
|
||||
![Git 手动添加命令终端中][7]
|
||||
|
||||
如果您的项目需要编译,请不要把 `git add .`(其中 `.` 是当前目录或特定文件夹的路径)当作添加一切新内容的便捷方式。如果您不是手动编译项目,而是使用 IDE 为您管理项目,这一点尤其重要。用 IDE 管理项目时,跟踪添加到存储库中的内容可能非常困难,因此只添加您实际编写的内容就变得非常重要,而不是把项目文件夹中冒出来的任何新对象都加进去。
|
||||
|
||||
如果您确实使用了 `git add .`,请在推送之前检查暂存区里都有什么。如果在执行 `git status` 时,您在项目文件夹中看到了一个陌生的对象,请找出它的来源,以及为什么在运行了 `make clean` 或等效命令之后它仍然留在项目目录中。这样的对象很可能是个不会在编译期间重新生成的特殊构建产物,因此在提交它之前请三思。
|
||||
|
||||
### 使用 Git ignore
|
||||
|
||||
![终端中的 `Git ignore` 命令][8]
|
||||
|
||||
许多为程序员提供便利的工具也会制造大量杂乱。任何项目(无论是编程、艺术还是其他类型)的典型项目目录中,都充斥着隐藏文件、元数据和残留的产物。您可以尝试无视这些对象,但是 `git status` 的输出中杂音越多,您遗漏某些东西的可能性就越大。
|
||||
|
||||
您可以通过维护一个良好的 `gitignore` 文件来过滤掉这种噪音。因为这是 Git 用户的普遍需求,所以有一些现成的 `gitignore` 模板可用。[Github.com/github/gitignore][9] 提供了几个专门制作的 `gitignore` 文件,您可以下载这些文件并将其放到自己的项目中;几年前,[Gitlab.com][10] 还将 `gitignore` 模板集成到了存储库创建工作流程中。使用它们来帮助您为项目创建适合的 `gitignore` 策略,并坚持遵守它。
|
||||
|
||||
### 查看合并请求
|
||||
|
||||
![Git 合并请求][11]
|
||||
|
||||
当您通过电子邮件收到合并请求、拉取请求或补丁文件时,请不要只是测试它能否正常工作。您的工作是阅读进入代码库的新代码,并理解它是如何产生结果的。如果您不同意其实现方式,或者更糟糕的是,您无法理解其实现方式,请向提交者发送消息,要求其进行说明。质疑那些想要成为您存储库永久组成部分的代码并不失礼;但如果您不清楚合并进用户将要使用的代码中的是什么,那才是违背了您与用户之间的约定。
|
||||
|
||||
### Git 责任
|
||||
|
||||
整个社区都在为开源软件的良好安全性而努力。不要鼓励在您的存储库中使用不良的 Git 实践,也不要忽视您克隆的存储库中的安全威胁。Git 功能强大,但它仍然只是一个计算机程序,因此要以人为本,确保每个人的安全。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/7/git-repos-best-practices
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[stevenzdg988](https://github.com/stevenzdg988)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
|
||||
[2]: https://securitylab.github.com/research/octopus-scanner-malware-open-source-supply-chain/
|
||||
[3]: https://opensource.com/sites/default/files/uploads/git_repo.png (Git repository )
|
||||
[4]: https://opensource.com/sites/default/files/uploads/git-binary-check.jpg (Git binary check)
|
||||
[5]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
[6]: https://git-scm.com/book/en/v2/Git-Tools-Submodules
|
||||
[7]: https://opensource.com/sites/default/files/uploads/git-cola-manual-add.jpg (Git manual add)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/git-ignore.jpg (Git ignore)
|
||||
[9]: https://github.com/github/gitignore
|
||||
[10]: https://about.gitlab.com/releases/2016/05/22/gitlab-8-8-released
|
||||
[11]: https://opensource.com/sites/default/files/uploads/git_merge_request.png (Git merge request)
|
@ -0,0 +1,165 @@
|
||||
[#]: subject: (How to write 'Hello World' in WebAssembly)
|
||||
[#]: via: (https://opensource.com/article/21/3/hello-world-webassembly)
|
||||
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
如何在 WebAssembly 中写 “Hello World”?
|
||||
======
|
||||
通过这个分步教程,开始用人类可读的文本编写 WebAssembly。
|
||||
![Hello World inked on bread][1]
|
||||
|
||||
WebAssembly 是一种字节码格式,[几乎所有的浏览器][2]都可以将它编译成其主机系统的机器代码。除了 JavaScript 和 WebGL 之外,WebAssembly 还满足了将应用移植到浏览器中以实现平台独立的需求。作为 C++ 和 Rust 的编译目标,WebAssembly 使 Web 浏览器能够以接近原生的速度执行代码。
|
||||
|
||||
当你谈论 WebAssembly 应用时,你必须区分三种状态:
|
||||
|
||||
1. **源码(如 C++ 或 Rust):** 你有一个用兼容语言编写的应用,想让它在浏览器中运行。
|
||||
2. **WebAssembly 字节码:** 你选择 WebAssembly 字节码作为编译目标。最后,你得到一个 `.wasm` 文件。
|
||||
3. **机器码(opcode):** 浏览器加载 `.wasm` 文件,并将其编译成主机系统的相应机器码。
|
||||
|
||||
|
||||
|
||||
WebAssembly 还有一种文本格式,用人类可读的文本表示二进制格式。为了简单起见,我将其称为 **WASM-text**。WASM-text 可以比作高级汇编语言。当然,你不会基于 WASM-text 来编写一个完整的应用,但了解它的底层工作原理是很好的(特别是对于调试和性能优化)。
|
||||
|
||||
本文将指导你在 WASM-text 中创建经典的 _Hello World_ 程序。
|
||||
|
||||
### 创建 .wat 文件
|
||||
|
||||
WASM-text 文件通常以 `.wat` 结尾。第一步创建一个名为 `helloworld.wat` 的空文本文件,用你最喜欢的文本编辑器打开它,然后粘贴进去:
|
||||
|
||||
|
||||
|
||||
```
|
||||
(module
|
||||
;; Imports from JavaScript namespace
|
||||
(import "console" "log" (func $log (param i32 i32))) ;; Import log function
|
||||
(import "js" "mem" (memory 1)) ;; Import 1 page of memory (64kb)
|
||||
|
||||
;; Data section of our module
|
||||
(data (i32.const 0) "Hello World from WebAssembly!")
|
||||
|
||||
;; Function declaration: Exported as helloWorld(), no arguments
|
||||
(func (export "helloWorld")
|
||||
i32.const 0 ;; pass offset 0 to log
|
||||
i32.const 29 ;; pass length 29 to log (strlen of sample text)
|
||||
call $log
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
WASM-text 格式是基于 S 表达式的。为了实现交互,JavaScript 函数用 `import` 语句导入,WebAssembly 函数用 `export` 语句导出。在这个例子中,从 `console` 模块中导入 `log` 函数,它需要两个类型为 `i32` 的参数作为输入,以及一页内存(64KB)来存储字符串。
|
||||
|
||||
字符串将被写入 `data` 段中偏移量为 `0` 的位置。`data` 段是内存的一个覆盖区,而这块内存是在 JavaScript 部分分配的。
|
||||
|
||||
函数用关键字 `func` 标记。进入函数时,栈是空的。函数参数(这里是偏移量和长度)会在调用另一个函数之前被压入栈中(见 `call $log`)。例如,当一个函数的返回类型是 `f32` 时,在离开该函数时栈上必须留有一个 `f32` 值(但在本例中不是这样)。
|
||||
|
||||
### 创建 .wasm 文件
|
||||
|
||||
WASM-text 和 WebAssembly 字节码有 1:1 的对应关系,这意味着你可以将 WASM-text 转换成字节码(反之亦然)。你已经有了 WASM-text,现在将创建字节码。
|
||||
|
||||
转换可以通过 [WebAssembly Binary Toolkit][3](WABT)来完成。从链接克隆仓库,并按照安装说明进行安装。
|
||||
|
||||
建立工具链后,打开控制台并输入以下内容,将 WASM-text 转换为字节码:
|
||||
|
||||
|
||||
```
|
||||
`wat2wasm helloworld.wat -o helloworld.wasm`
|
||||
```
|
||||
|
||||
你也可以用以下方法将字节码转换为 WASM-text:
|
||||
|
||||
|
||||
```
|
||||
`wasm2wat helloworld.wasm -o helloworld_reverse.wat`
|
||||
```
|
||||
|
||||
一个从 `.wasm` 文件创建的 `.wat` 文件不包括任何函数或参数名称。默认情况下,WebAssembly 用它们的索引来识别函数和参数。
|
||||
|
||||
### 编译 .wasm 文件
|
||||
|
||||
目前,WebAssembly 只与 JavaScript 共存,所以你必须编写一个简短的脚本来加载和编译 `.wasm` 文件并进行函数调用。你还需要在 WebAssembly 模块中定义你要导入的函数。
|
||||
|
||||
创建一个空的文本文件,并将其命名为 `helloworld.html`,然后打开你喜欢的文本编辑器并粘贴进去:
|
||||
|
||||
|
||||
```
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<title>Simple template</title>
|
||||
</head>
|
||||
<body>
|
||||
<script>
|
||||
|
||||
var memory = new WebAssembly.Memory({initial:1});
|
||||
|
||||
function consoleLogString(offset, length) {
|
||||
var bytes = new Uint8Array(memory.buffer, offset, length);
|
||||
var string = new TextDecoder('utf8').decode(bytes);
|
||||
console.log(string);
|
||||
};
|
||||
|
||||
var importObject = {
|
||||
console: {
|
||||
log: consoleLogString
|
||||
},
|
||||
js : {
|
||||
mem: memory
|
||||
}
|
||||
};
|
||||
|
||||
WebAssembly.instantiateStreaming(fetch('helloworld.wasm'), importObject)
|
||||
.then(obj => {
|
||||
obj.instance.exports.helloWorld();
|
||||
});
|
||||
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
`WebAssembly.Memory(...)` 方法返回一个大小为 64KB 的内存页。函数 `consoleLogString` 根据长度和偏移量从该内存页读取一个字符串。这两个对象作为 `importObject` 的一部分传递给你的 WebAssembly 模块。
|
||||
|
||||
在你运行这个例子之前,你可能必须允许 Firefox 从这个目录中访问文件,在地址栏输入 `about:config`,并将 `privacy.file_unique_origin` 设置为 `true`:
|
||||
|
||||
![Firefox setting][4]
|
||||
|
||||
(Stephan Avenwedde, [CC BY-SA 4.0][5])
|
||||
|
||||
> **注意:** 这样做会使你容易受到 [CVE-2019-11730][6] 安全问题的影响。
|
||||
|
||||
现在,在 Firefox 中打开 `helloworld.html`,按下 **Ctrl**+**K** 打开开发者控制台。
|
||||
|
||||
![Debugger output][7]
|
||||
|
||||
(Stephan Avenwedde, [CC BY-SA 4.0][5])
|
||||
|
||||
### 了解更多
|
||||
|
||||
这个 Hello World 的例子只是 MDN 的[了解 WebAssembly 文本格式][8]文档中的教程之一。如果你想了解更多关于 WebAssembly 的知识以及它的工作原理,可以看看这些文档。
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/hello-world-webassembly

作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/helloworld_bread_lead.jpeg?itok=1r8Uu7gk (Hello World inked on bread)
[2]: https://developer.mozilla.org/en-US/docs/WebAssembly#browser_compatibility
[3]: https://github.com/webassembly/wabt
[4]: https://opensource.com/sites/default/files/uploads/firefox_setting.png (Firefox setting)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://www.mozilla.org/en-US/security/advisories/mfsa2019-21/#CVE-2019-11730
[7]: https://opensource.com/sites/default/files/uploads/debugger_output.png (Debugger output)
[8]: https://developer.mozilla.org/en-US/docs/WebAssembly/Understanding_the_text_format

[#]: subject: "Practice using the Linux grep command"
[#]: via: "https://opensource.com/article/21/3/grep-cheat-sheet"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "lxbwolf"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

练习使用 Linux 的 grep 命令
======

来学习下搜索文件中内容的基本操作,然后下载我们的备忘录,作为 grep 和正则表达式的快速参考指南。

![Hand putting a Linux file folder into a drawer][1]

grep(<ruby>全局正则表达式打印<rt>Global Regular Expression Print</rt></ruby>)是早在 1974 年由 Ken Thompson 开发的基本 Unix 命令之一。它在计算领域无处不在,通常被用作动词(“搜索一个文件中的内容”)。如果你的谈话对象有极客精神,它也会出现在真实的对话中。(例如,“我会 grep 一下我的记忆库,回想起那些信息。”)简而言之,grep 是一种用特定的字符模式来搜索文件内容的方式。如果你觉得这听起来像是文字处理器或文本编辑器里现代的 “查找” 功能,那么你已经感受到了 grep 对计算行业的影响。

grep 绝不是被现代技术抛弃的远古命令,它的强大体现在两个方面:

* grep 可以在终端操作数据流,因此你可以把它嵌入到复杂的处理过程中。你不仅可以在一个文本文件中*查找*文字,还可以把提取出的文字交给另一个命令。
* grep 使用正则表达式来提供灵活的搜索能力。

虽然需要一些练习,但学习 `grep` 命令还是很容易的。本文会介绍一些我认为 grep 最有用的功能。

**[下载我们免费的 [grep 备忘录][2]]**

### 安装 grep

Linux 默认安装了 grep。

macOS 默认安装的是 BSD 版的 grep。BSD 版的 grep 跟 GNU 版有一点不一样,因此如果你想完全参照本文,请使用 [Homebrew][3] 或 [MacPorts][4] 安装 GNU 版的 grep。

### 基础的 grep

所有版本的 grep 基础语法都一样:参数是匹配模式和你要搜索的文件,它会把匹配到的每一行输出到你的终端。

```
$ grep gnu gpl-3.0.txt
along with this program.  If not, see <http://www.gnu.org/licenses/>.
<http://www.gnu.org/licenses/>.
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
```

`grep` 命令默认区分大小写,因此 “gnu”、“GNU”、“Gnu” 是三个不同的值。你可以使用 `--ignore-case` 选项来忽略大小写。

```
$ grep --ignore-case gnu gpl-3.0.txt
GNU GENERAL PUBLIC LICENSE
The GNU General Public License is a free, copyleft license for
the GNU General Public License is intended to guarantee your freedom to
GNU General Public License for most of our software; it applies also to
[...16 more results...]
<http://www.gnu.org/licenses/>.
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
```

你也可以通过 `--invert-match` 选项来输出所有没有匹配到的行:

```
$ grep --invert-match \
--ignore-case gnu gpl-3.0.txt
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
[...648 lines...]
Public License instead of this License. But first, please read
```

### 管道

能搜索文件中的文本固然有用,但 [POSIX][8] 真正的强大之处在于可以通过“管道”来连接多条命令。我发现,使用 grep 的最佳方式是把它与其他工具(如 cut、tr 或 [curl][9])联合使用。

假设现在有一个文件,其中每一行是一篇我想要下载的技术论文。我可以打开文件,手动点击每一个链接,再在火狐的选项中把每一个文件保存到硬盘,但那样要点击很多次、耗费很长时间。我还可以用 `--only-matching` 选项搜索文件中的链接,*只*打印出匹配到的字符串:

```
$ grep --only-matching http\:\/\/.*pdf example.html
http://example.com/linux_whitepaper.pdf
http://example.com/bsd_whitepaper.pdf
http://example.com/important_security_topic.pdf
```

输出是一系列的 URL,每行一个。这与 Bash 处理数据的方式完美契合,因此我不再把 URL 打印到终端,而是通过管道把它们传给 `curl`:

```
$ grep --only-matching http\:\/\/.*pdf \
example.html | xargs curl --remote-name
```

这条命令可以下载每一个文件,并以各自远程的文件名保存在我的硬盘上。

这个例子中我的搜索模式可能很晦涩。那是因为它用的是正则表达式,一种在大量文本中进行模糊搜索时非常有用的“通配符”语言。

### 正则表达式

没有人会觉得正则表达式(简称 “regex”)很简单。然而,我发现它的坏名声往往言过其实。不可否认,很多人在使用正则表达式时“过于聪明”,写得晦涩到可读性很差,或者宽泛到匹配了不该匹配的内容,但你大可不必这样滥用正则。下面是我使用正则的简明教程。

首先,创建一个名为 `example.txt` 的文件,输入以下内容:

```
Albania
Algeria
Canada
0
1
3
11
```

最基础的元素是谦逊的 `.` 字符,它表示一个字符。

```
$ grep Can.da example.txt
Canada
```

模式 `Can.da` 能成功匹配到 `Canada`,是因为 `.` 可以匹配任意*一个*字符。

可以使用下面这些符号,让 `.` 通配符表示多个字符:

* `?` 匹配前面的模式零次或一次
* `*` 匹配前面的模式零次或多次
* `+` 匹配前面的模式一次或多次
* `{4}` 匹配前面的模式恰好 4 次(或者花括号中写的其他次数)

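上面这些量词可以用一个最小的例子快速验证。注意 `?`、`+`、`{n}` 属于扩展正则表达式(ERE)语法,所以这里用 `grep -E`;文件名 `quant.txt` 是为演示假设的:

```shell
# ?、+ 和 {n} 是 ERE 语法,需要 grep -E;quant.txt 是假设的演示文件
printf 'color\ncolour\ncolouur\n' > quant.txt

grep -E 'colou?r' quant.txt    # u 出现零次或一次:color、colour
grep -E 'colou+r' quant.txt    # u 出现一次或多次:colour、colouur
grep -E 'colou{2}r' quant.txt  # u 恰好出现两次:colouur
```

使用普通的 `grep`(基本正则表达式)时,同样的量词要写成 `\?`、`\+`、`\{2\}`。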
了解了这些知识后,你可以用你认为有意思的各种模式在 `example.txt` 上做练习。有些会成功,有些不会。重要的是分析结果,这样你才会知道原因。

例如,下面的命令匹配不到任何国家:

```
$ grep A.a example.txt
```

它匹配不到,是因为 `.` 只能匹配一个字符,除非你增加匹配次数。你可以用 `*` 告诉 `grep` 把前一个字符匹配零次或任意多次。不过对于这个列表,*零次*没有意义:这里肯定没有 “Aa” 这样两个字母的国家名。因此,可以用 `+` 把一个字符匹配一次或任意多次,直到单词末尾(`+` 属于扩展正则表达式语法,所以要加上 `-E` 选项):

```
$ grep -E A.+a example.txt
Albania
Algeria
```

你可以用方括号来提供一组字母:

```
$ grep -E [A,C].+a example.txt
Albania
Algeria
Canada
```

方括号也可以用来匹配数字。结果可能会让你吃惊:

```
$ grep [1-9] example.txt
1
3
11
```

看到 11 出现在搜索数字 1 到 9 的结果中,你惊讶吗?

如果把 13 加到搜索列表中,又会出现什么结果呢?

这些数字之所以会被匹配到,是因为它们包含 1,而 1 在要匹配的数字范围之内。

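如果只想匹配单独成行的个位数,可以用 `^` 和 `$` 把模式锚定到行首和行尾。下面是一个最小的验证示例(先重建与上文相同的 `example.txt`):

```shell
# 重建上文的 example.txt
printf 'Albania\nAlgeria\nCanada\n0\n1\n3\n11\n' > example.txt

# ^ 与 $ 把匹配锚定为整行,11 这样的多位数不再被匹配
grep '^[1-9]$' example.txt   # 输出:1 和 3
```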
你可以发现,正则表达式有时会令人费解,但是通过体验和练习,你可以熟练掌握它,用它来提高你搜索数据的能力。

### 下载备忘录

`grep` 命令还有很多本文没有列出的选项:更好地展示匹配结果、列出文件、列出匹配到的行号、打印匹配行周围的内容来显示上下文,等等。如果你正在学习 grep,或者你经常使用它、需要翻阅手册页来查选项,那么可以下载我们的备忘录。这份备忘录使用短选项(例如,用 `-v` 代替 `--invert-match`)来帮助你更好地熟悉 grep。它还有一个正则表达式部分,可以帮你记住用途最广的正则代码。[现在就下载 grep 备忘录!][2]

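备忘录里提到的行号和上下文选项,可以这样快速体验一下(`ctx.txt` 是为演示假设的文件):

```shell
# -n 显示匹配行的行号,-C N 打印匹配行前后各 N 行上下文
printf 'one\ntwo\nthree\n' > ctx.txt

grep -n two ctx.txt    # 输出:2:two
grep -C 1 two ctx.txt  # 输出 one、two、three 三行
```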
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/3/grep-cheat-sheet

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC "Hand putting a Linux file folder into a drawer"
[2]: https://opensource.com/downloads/grep-cheat-sheet
[3]: https://opensource.com/article/20/6/homebrew-mac
[4]: https://opensource.com/article/20/11/macports
[5]: http://www.gnu.org/licenses/
[6]: http://www.gnu.org/philosophy/why-not-lgpl.html
[7]: http://fsf.org/
[8]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[9]: https://opensource.com/downloads/curl-command-cheat-sheet

[#]: subject: (4 cool new projects to try in Copr for March 2021)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-march-2021/)
[#]: author: (Jakub Kadlčík https://fedoramagazine.org/author/frostyx/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

COPR 仓库中 4 个很酷的新项目(2021.03)
======

![][1]

COPR 是软件仓库的[集合][2],其中收录的软件不在 Fedora 中。这或是因为它不符合轻松打包的标准,或是因为它虽然自由而开源,却不符合 Fedora 的其他标准。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也未经项目签名。不过,这是一种尝试新软件或实验性软件的巧妙方式。

本文介绍了 COPR 中一些有趣的新项目。如果你是第一次使用 COPR,请参阅 [COPR 用户文档][3]。

### Ytfzf

[Ytfzf][5] 是一个简单的命令行工具,用于搜索和观看 YouTube 视频。它提供了围绕模糊查找器 [fzf][6] 构建的快速直观的界面。它使用 [youtube-dl][7] 下载选定的视频,并打开外部视频播放器来观看。因此,_ytfzf_ 比用浏览器观看 YouTube 的资源占用要少得多。它支持缩略图(通过 [ueberzug][8])、保存历史记录、将多个视频排队或下载以供以后使用、订阅频道以及其他方便的功能。借助 [dmenu][9] 或 [rofi][10] 这样的工具,它甚至可以在终端之外使用。

![][11]

#### 安装说明

目前该[仓库][13]为 Fedora 33 和 34 提供 Ytfzf。要安装它,请使用以下命令:

```
sudo dnf copr enable bhoman/ytfzf
sudo dnf install ytfzf
```

### Gemini 客户端

你有没有想过,如果万维网走的是一条完全不同的路线,不采用 CSS 和客户端脚本,你的浏览体验会是什么样?[Gemini][15] 是 HTTPS 协议的现代替代品,尽管它并不打算取代 HTTPS。[stenstorp/gemini][16] 这个 COPR 项目提供了多款用于浏览 Gemini _网站_ 的客户端:[Castor][17]、[Dragonstone][18]、[Kristall][19] 和 [Lagrange][20]。

[Gemini][21] 站点上列出了一些使用该协议的主机。下图是用 Castor 访问这个站点的情况:

![][22]

#### 安装说明

该[仓库][16]目前为 Fedora 32、33、34 和 Fedora Rawhide 提供这些 Gemini 客户端,EPEL 7、8 以及 CentOS Stream 上也可以使用。要安装浏览器,请从下面的安装命令中选择:

```
sudo dnf copr enable stenstorp/gemini

sudo dnf install castor
sudo dnf install dragonstone
sudo dnf install kristall
sudo dnf install lagrange
```

### Ly

[Ly][25] 是一个 Linux 和 BSD 下的轻量级登录管理器。它有一个类似于 ncurses 的基于文本的用户界面。理论上,它应该支持所有的 X 桌面环境和窗口管理器(其中很多都[经过测试][26])。Ly 还提供了基本的 Wayland 支持(Sway 也工作良好)。在配置中的某个地方,有一个彩蛋选项,可以在背景中启用著名的 [PSX DOOM fire][27] 动画,仅这一点就值得一试。

![][28]

#### 安装说明

该[仓库][30]目前为 Fedora 32、33 和 Fedora Rawhide 提供 Ly。要安装它,请使用以下命令:

```
sudo dnf copr enable dhalucario/ly
sudo dnf install ly
```

在将 Ly 设置为系统登录界面之前,请先在终端中运行 `ly` 命令以确保其正常工作。然后禁用当前的登录管理器,并启用 Ly:

```
sudo systemctl disable gdm
sudo systemctl enable ly
```

最后,重启计算机,使更改生效。

### AWS CLI v2

[AWS CLI v2][32] 是基于社区反馈进行的一次稳健而有条理的演进,而不是对原有客户端的大规模重新设计。它引入了配置凭证的新机制,现在允许用户从 AWS 控制台生成的 _.csv_ 文件导入凭证。它还提供了对 AWS SSO 的支持。其他大的改进包括服务端自动补全,以及交互式参数生成。还有一个新功能是交互式向导,它提供了更高层次的抽象,并结合多个 AWS API 调用来创建、更新或删除 AWS 资源。

![][33]

#### 安装说明

该[仓库][35]目前为 Fedora Linux 32、33、34 和 Fedora Rawhide 提供 AWS CLI v2。要安装它,请使用以下命令:

```
sudo dnf copr enable spot/aws-cli-2
sudo dnf install aws-cli-2
```

当然,你需要有可访问的 AWS 账户。

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-march-2021/

作者:[Jakub Kadlčík][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/frostyx/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/10/4-copr-945x400-1-816x345.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html
[4]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#droidcam
[5]: https://github.com/pystardust/ytfzf
[6]: https://github.com/junegunn/fzf
[7]: http://ytdl-org.github.io/youtube-dl/
[8]: https://github.com/seebye/ueberzug
[9]: https://tools.suckless.org/dmenu/
[10]: https://github.com/davatorium/rofi
[11]: https://fedoramagazine.org/wp-content/uploads/2021/03/ytfzf.png
[12]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions
[13]: https://copr.fedorainfracloud.org/coprs/bhoman/ytfzf/
[14]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#gemini-clients
[15]: https://gemini.circumlunar.space/
[16]: https://copr.fedorainfracloud.org/coprs/stenstorp/gemini/
[17]: https://git.sr.ht/~julienxx/castor
[18]: https://gitlab.com/baschdel/dragonstone
[19]: https://kristall.random-projects.net/
[20]: https://github.com/skyjake/lagrange
[21]: https://gemini.circumlunar.space/servers/
[22]: https://fedoramagazine.org/wp-content/uploads/2021/03/gemini.png
[23]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-1
[24]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#ly
[25]: https://github.com/nullgemm/ly
[26]: https://github.com/nullgemm/ly#support
[27]: https://fabiensanglard.net/doom_fire_psx/index.html
[28]: https://fedoramagazine.org/wp-content/uploads/2021/03/ly.png
[29]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-2
[30]: https://copr.fedorainfracloud.org/coprs/dhalucario/ly/
[31]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#aws-cli-v2
[32]: https://aws.amazon.com/blogs/developer/aws-cli-v2-is-now-generally-available/
[33]: https://fedoramagazine.org/wp-content/uploads/2021/03/aws-cli-2.png
[34]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-3
[35]: https://copr.fedorainfracloud.org/coprs/spot/aws-cli-2/