[#]: collector: (lujun9972)
|
||||
[#]: translator: (Morisun029)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11875-1.html)
|
||||
[#]: subject: (Top CI/CD resources to set you up for success)
|
||||
[#]: via: (https://opensource.com/article/19/12/cicd-resources)
|
||||
[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)
|
||||
|
||||
顶级 CI/CD 资源,助你成功
|
||||
======
|
||||
|
||||
> 随着企业期望实现无缝、灵活和可扩展的部署,持续集成和持续部署成为 2019 年的关键主题。
|
||||
|
||||
![Plumbing tubes in many directions][1]
|
||||
|
||||
对于 CI/CD 和 DevOps 来说,2019 年是非常棒的一年。Opensource.com 的作者们分享了他们在专注于无缝、灵活和可扩展部署的同时,是如何朝着敏捷和 Scrum 方向发展的。以下是我们在 2019 年发布的一些重要的 CI/CD 文章。
|
||||
|
||||
### 学习和提高你的 CI/CD 技能
|
||||
|
||||
我们最喜欢的一些文章集中在 CI/CD 的实操经验上,并涵盖了许多方面。通常以 [Jenkins][2] 管道开始,Bryant Son 的文章《[用 Jenkins 构建 CI/CD 管道][3]》将为你提供足够的经验,以开始构建你的第一个管道。Daniel Oh 在《[用 DevOps 管道进行自动验收测试][4]》一文中,提供了有关验收测试的重要信息,包括可用于自行测试的各种 CI/CD 应用程序。我写的《[安全扫描 DevOps 管道][5]》非常简短,其中简要介绍了如何使用 Jenkins 平台在管道中设置安全性。
|
||||
|
||||
### 交付工作流程
|
||||
|
||||
正如 Jithin Emmanuel 在《[Screwdriver:一个用于持续交付的可扩展构建平台][6]》中分享的,在学习如何使用和提高你的 CI/CD 技能方面,工作流程很重要,特别是当涉及到管道时。Emily Burns 在《[为什么 Spinnaker 对 CI/CD 很重要][7]》中解释了灵活地使用 CI/CD 工作流程准确构建所需内容的原因。Willy-Peter Schaub 还盛赞了为所有产品创建统一管道的想法,以便《[在一个 CI/CD 管道中一致地构建每个产品][8]》。这些文章将让你很好地了解在团队成员加入工作流程后会发生什么情况。
|
||||
|
||||
### CI/CD 如何影响企业
|
||||
|
||||
2019 年也是认识到 CI/CD 的业务影响以及它是如何影响日常运营的一年。Agnieszka Gancarczyk 分享了 Red Hat 《[小型 Scrum vs. 大型 Scrum][9]》的调查结果, 包括受访者对 Scrum、敏捷运动及对团队的影响的不同看法。Will Kelly 的《[持续部署如何影响整个组织][10]》,也提及了开放式沟通的重要性。Daniel Oh 也在《[DevOps 团队必备的 3 种指标仪表板][11]》中强调了指标和可观测性的重要性。最后是 Ann Marie Fred 的精彩文章《[不在生产环境中测试?要在生产环境中测试!][12]》详细说明了在验收测试前在生产环境中测试的重要性。
|
||||
|
||||
感谢许多贡献者在 2019 年与 Opensource.com 的读者分享了他们的见解,我期望在 2020 年从他们那里了解到更多有关 CI/CD 发展的信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/12/cicd-resources
|
||||
|
||||
作者:[Jessica Cherry][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Morisun029](https://github.com/Morisun029)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jrepka
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plumbing_pipes_tutorial_how_behind_scenes.png?itok=F2Z8OJV1 (Plumbing tubes in many directions)
|
||||
[2]: https://jenkins.io/
|
||||
[3]: https://linux.cn/article-11546-1.html
|
||||
[4]: https://opensource.com/article/19/4/devops-pipeline-acceptance-testing
|
||||
[5]: https://opensource.com/article/19/7/security-scanning-your-devops-pipeline
|
||||
[6]: https://opensource.com/article/19/3/screwdriver-cicd
|
||||
[7]: https://opensource.com/article/19/8/why-spinnaker-matters-cicd
|
||||
[8]: https://opensource.com/article/19/7/cicd-pipeline-rule-them-all
|
||||
[9]: https://opensource.com/article/19/3/small-scale-scrum-vs-large-scale-scrum
|
||||
[10]: https://opensource.com/article/19/7/organizational-impact-continuous-deployment
|
||||
[11]: https://linux.cn/article-11183-1.html
|
||||
[12]: https://opensource.com/article/19/5/dont-test-production
published/20200109 What-s HTTPS for secure computing.md
[#]: collector: (lujun9972)
|
||||
[#]: translator: (hopefully2333)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11877-1.html)
|
||||
[#]: subject: (What's HTTPS for secure computing?)
|
||||
[#]: via: (https://opensource.com/article/20/1/confidential-computing)
|
||||
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
|
||||
|
||||
用于安全计算的 HTTPS 是什么?
|
||||
======
|
||||
|
||||
> 在默认的情况下,网站的安全性还不足够。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202002/11/123552rqncn4c7474j44jq.jpg)
|
||||
|
||||
在过去的几年里,寻找一个只以 “http://...” 开头的网站变得越来越难,这是因为业界终于意识到,网络安全“是件事”,同时也是因为客户端和服务端之间建立和使用 https 连接变得更加容易了。类似的转变可能正以不同的方式发生在云计算、边缘计算、物联网、区块链、人工智能、机器学习等领域。长久以来,我们都知道应该对存储的静态数据和在网络中传输的数据进行加密,但是在使用和处理数据的时候对它进行加密是困难且昂贵的。可信计算(使用诸如<ruby>受信任的执行环境<rt>Trusted Execution Environments</rt></ruby>(TEE)这样的硬件功能来为数据和算法提供这种类型的保护)可以保护主机系统中的数据,以及易受攻击的环境中的数据。
|
||||
|
||||
关于 [TEEs][2],当然,还有我和 Nathaniel McCallum 共同创立的 [Enarx 项目][3],我已经写了几次文章(参见《[给每个人的 Enarx(一个任务)][4]》 和 《[Enarx 迈向多平台][5]》)。Enarx 使用 TEEs 来提供独立于平台和语言的部署平台,以此来让你能够安全地将敏感应用或者敏感组件(例如微服务)部署在你不信任的主机上。当然,Enarx 是完全开源的(顺便提一下,我们使用的是 Apache 2.0 许可证)。能够在你不信任的主机上运行工作负载,这是可信计算的承诺,它扩展了使用静态敏感数据和传输中数据的常规做法:
|
||||
|
||||
* **存储**:你要加密你的静态数据,因为你不完全信任你的基础存储架构。
|
||||
* **网络**:你要加密你正在传输中的数据,因为你不完全信任你的基础网络架构。
|
||||
* **计算**:你要加密你正在使用中的数据,因为你不完全信任你的基础计算架构。
|
||||
|
||||
关于信任,我有非常多的话想说,而且,上述说法里的“**完全**”一词是很重要的(在重读这篇文章的时候,我新加了这个词)。不论哪种情况,你都必须在一定程度上信任你的基础设施,无论它是传递你的数据包还是存储你的数据块。例如,对于计算基础架构,你必须要信任 CPU 和与之关联的固件,这是因为如果你不信任它们,你就无法真正地进行计算(现在有一些诸如<ruby>同态加密<rt>homomorphic encryption</rt></ruby>一类的技术正在开始提供一些可能性,但是它们依然有限,还不够成熟)。
|
||||
|
||||
考虑到已经发现的一些 CPU 安全性问题,人们有时自然会产生疑问:是否应该完全信任 CPU,以及它们在面对针对其所在主机的物理攻击时是否完全安全。
|
||||
|
||||
这两个问题的回答都是“不”,但是在考虑到大规模可用性和普遍推广的成本,这已经是我们当前拥有的最好的技术了。为了解决第二个问题,没有人去假装这项技术(或者任何的其他技术)是完全安全的:我们需要做的是思考我们的[威胁模型][6]并确定这个情况下的 TEEs 是否为我们的特殊需求提供了足够的安全防护。关于第一个问题,Enarx 采用的模型是在部署时就对你是否信任一个特定的 CPU 组做出决定。举个例子,如果供应商 Q 的 R 代芯片被发现有漏洞,可以很简单地说“我拒绝将我的工作内容部署到 Q 的 R 代芯片上去,但是仍然可以部署到 Q 的 S 型号、T 型号和 U 型号的芯片以及任何 P、M 和 N 供应商的任何芯片上去。”
|
||||
|
||||
我认为这里发生了三处改变,这些改变引起了人们现在对<ruby>机密计算<rt>confidential computing</rt></ruby>的兴趣和采用。
|
||||
|
||||
1. **硬件可用**:只是在过去的 6 到 12 个月里,支持 TEEs 的硬件才开始变得广泛可用,这会儿市场上的主要例子是 Intel 的 SGX 和 AMD 的 SEV。我们期望在未来可以看到支持 TEE 的硬件的其他例子。
|
||||
2. **行业就绪**:就像上云越来越多地被接受作为应用程序部署的模型,监管机构和立法机构也在提高各类组织保护其管理的数据的要求。组织开始呼吁在不受信任的主机运行敏感程序(或者是处理敏感数据的应用程序)的方法,更确切地说,是在无法完全信任且带有敏感数据的主机上运行的方法。这不足为奇:如果芯片制造商看不到这项技术的市场,他们就不会投太多的钱在这项技术上。Linux 基金会的[机密计算联盟(CCC)][7]的成立就是业界对如何寻找使用加密计算的通用模型并且鼓励开源项目使用这些技术感兴趣的案例。(红帽发起的 Enarx 是一个 CCC 项目。)
|
||||
3. **开放源码**:就像区块链一样,机密计算是使用开源绝对明智的技术之一。如果你要运行敏感程序,你需要去信任正在为你运行的程序。不仅仅是 CPU 和固件,同样还有在 TEE 内执行你的工作负载的框架。可以很好地说,“我不信任主机机器和它上面的软件栈,所以我打算使用 TEE,”但是如果你不够了解 TEE 软件环境,那你就是将一种软件不透明换成另外一种。TEEs 的开源支持将允许你或者社区(实际上是你与社区)以一种专有软件不可能实现的方式来检查和审计你所运行的程序。这就是为什么 CCC 位于 Linux 基金会旗下(这个基金会致力于开放式开发模型)并鼓励 TEE 相关的软件项目加入且成为开源项目(如果它们还没有成为开源)。
|
||||
|
||||
我认为,在过去的 15 到 20 年里,硬件可用、行业就绪和开放源码已成为推动技术改变的驱动力。区块链、人工智能、云计算、<ruby>大规模计算<rt>webscale computing</rt></ruby>、大数据和互联网商务都是这三个点同时发挥作用的例子,并且在业界带来了巨大的改变。
|
||||
|
||||
在一般情况下,安全是我们数十年来一直听到的一种承诺,但至今仍未兑现。老实说,我不确定它未来会不会实现。但是随着新技术的到来,特定用例的安全变得越来越实用和无处不在,并且在业内受到越来越多的期待。这样看来,机密计算似乎已准备好成为下一个重大变化 —— 而你,我亲爱的读者,可以一起加入到这场革命中来(毕竟它是开源的)。
|
||||
|
||||
这篇文章最初发布在 [Alice, Eve, and Bob][9] 博客上,经作者许可重新发布。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/confidential-computing
|
||||
|
||||
作者:[Mike Bursell][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[hopefully2333](https://github.com/hopefully2333)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mikecamel
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/secure_https_url_browser.jpg?itok=OaPuqBkG (Secure https browser)
|
||||
[2]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/
|
||||
[3]: https://enarx.io/
|
||||
[4]: https://aliceevebob.com/2019/08/20/enarx-for-everyone-a-quest/
|
||||
[5]: https://aliceevebob.com/2019/10/29/enarx-goes-multi-platform/
|
||||
[6]: https://aliceevebob.com/2018/02/20/there-are-no-absolutes-in-security/
|
||||
[7]: https://confidentialcomputing.io/
|
||||
[9]: https://aliceevebob.com/2019/12/03/confidential-computing-the-new-https/
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11876-1.html)
|
||||
[#]: subject: (Get your RSS feeds and podcasts in one place with this open source tool)
|
||||
[#]: via: (https://opensource.com/article/20/1/open-source-rss-feed-reader)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
|
||||
|
||||
使用此开源工具在一起收取你的 RSS 订阅源和播客
|
||||
======
|
||||
|
||||
> 在我们的 20 个使用开源提升生产力的系列的第十二篇文章中使用 Newsboat 收取你的新闻 RSS 源和播客。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202002/10/162526wv5jdl0m12sw10md.jpg)
|
||||
|
||||
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
|
||||
|
||||
### 使用 Newsboat 访问你的 RSS 源和播客
|
||||
|
||||
RSS 新闻源是了解各个网站最新消息的非常方便的方法。除了 Opensource.com,我还会关注 [SysAdvent][2] sysadmin 年度工具,还有一些我最喜欢的作者以及一些网络漫画。RSS 阅读器可以让我“批处理”阅读内容,因此,我每天不会在不同的网站上花费很多时间。
|
||||
|
||||
![Newsboat][3]
|
||||
|
||||
[Newsboat][4] 是一个基于终端的 RSS 订阅源阅读器,外观感觉很像电子邮件程序 [Mutt][5]。它使阅读新闻变得容易,并有许多不错的功能。
|
||||
|
||||
安装 Newsboat 非常容易,因为它包含在大多数发行版(以及 MacOS 上的 Homebrew)中。安装后,只需在 `~/.newsboat/urls` 中添加订阅源。如果你是从其他阅读器迁移而来,并有导出的 OPML 文件,那么可以使用以下方式导入:
|
||||
|
||||
```
|
||||
newsboat -i </path/to/my/feeds.opml>
|
||||
```
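下面是一个 `~/.newsboat/urls` 文件的简单示意(其中的订阅源地址仅作演示用,`example.com` 是假设的地址;标签为可选项):每行一个订阅源地址,后面可以跟上用引号括起的标签,便于在 Newsboat 中按标签过滤。

```
https://opensource.com/feed "tech"
https://sysadvent.blogspot.com/feeds/posts/default "sysadmin"
https://example.com/podcast/feed.xml "podcasts"
```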
|
||||
|
||||
添加订阅源后,Newsboat 的界面非常熟悉,特别是如果你使用过 Mutt。你可以使用箭头键上下滚动,使用 `r` 检查某个源中是否有新项目,使用 `R` 检查所有源中是否有新项目,按回车打开订阅源,并选择要阅读的文章。
|
||||
|
||||
![Newsboat article list][6]
|
||||
|
||||
|
||||
#### 播客
|
||||
|
||||
Newsboat 还通过 Podboat 提供了[播客支持][10],Podboat 是一个附带的应用,它可帮助下载和排队播客节目。在 Newsboat 中查看播客源时,按下 `e` 将节目添加到你的下载队列中。所有信息将保存在 `~/.newsboat` 目录中的队列文件中。Podboat 读取此队列并将节目下载到本地磁盘。你可以在 Podboat 的用户界面(外观和行为类似于 Newsboat)执行此操作,也可以使用 `podboat -a` 让 Podboat 下载所有内容。作为播客人和播客听众,我认为这*真的*很方便。
|
||||
|
||||
![Podboat][11]
|
||||
|
||||
via: https://opensource.com/article/20/1/open-source-rss-feed-reader
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11879-1.html)
|
||||
[#]: subject: (Use this open source tool to get your local weather forecast)
|
||||
[#]: via: (https://opensource.com/article/20/1/open-source-weather-forecast)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
|
||||
|
||||
使用这个开源工具获取本地天气预报
|
||||
======
|
||||
|
||||
> 在我们的 20 个使用开源提升生产力的系列的第十三篇文章中使用 wego 来了解出门前你是否要需要外套、雨伞或者防晒霜。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202002/11/140842a8qwomfeg9mwegg8.jpg)
|
||||
|
||||
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
|
||||
|
||||
### 使用 wego 了解天气
|
||||
|
||||
过去十年我对我的职业最满意的地方之一是大多数时候是远程工作。尽管现实情况是我很多时候是在家里办公,但我可以在世界上任何地方工作。缺点是,离家时我会根据天气做出一些决定。在我居住的地方,“晴朗”可以表示从“酷热”、“低于零度”到“一小时内会小雨”。能够了解实际情况和快速预测非常有用。
|
||||
|
||||
![Wego][2]
|
||||
|
||||
[Wego][3] 是用 Go 编写的程序,可以获取并显示你的当地天气。如果你愿意,它甚至可以用闪亮的 ASCII 艺术效果进行渲染。
|
||||
|
||||
要安装 `wego`,你需要确保在系统上安装了[Go][4]。之后,你可以使用 `go get` 命令获取最新版本。你可能还想将 `~/go/bin` 目录添加到路径中:
|
||||
|
||||
```
|
||||
go get -u github.com/schachmat/wego
|
||||
export PATH=~/go/bin:$PATH
|
||||
wego
|
||||
```
|
||||
|
||||
首次运行时,`wego` 会报告缺失 API 密钥。现在你需要决定一个后端。默认后端是 [Forecast.io][5],它是 [Dark Sky][6]的一部分。`wego` 还支持 [OpenWeatherMap][7] 和 [WorldWeatherOnline][8]。我更喜欢 OpenWeatherMap,因此我将在此向你展示如何设置。
|
||||
|
||||
你需要在 OpenWeatherMap 中[注册 API 密钥][9]。注册是免费的,尽管免费的 API 密钥限制了一天可以查询的数量,但这对于普通用户来说应该没问题。得到 API 密钥后,将它放到 `~/.wegorc` 文件中。现在可以填写你的位置、语言以及使用公制、英制(英国/美国)还是国际单位制(SI)。OpenWeatherMap 可通过名称、邮政编码、坐标和 ID 确定位置,这是我喜欢它的原因之一。
|
||||
|
||||
```
|
||||
# wego configuration for OWM
|
||||
owm-lang=en
|
||||
units=imperial
|
||||
```
|
||||
|
||||
现在,在命令行运行 `wego` 将显示接下来三天的当地天气。
|
||||
|
||||
`wego` 还可以输出 JSON 以便程序使用,还可显示 emoji。你可以使用 `-f` 参数或在 `.wegorc` 文件中指定前端。
|
||||
|
||||
![Wego at login][10]
|
||||
|
||||
如果你想在每次打开 shell 或登录主机时查看天气,只需将 wego 添加到 `~/.bashrc`(我这里是 `~/.zshrc`)即可。
|
||||
|
||||
[wttr.in][11] 项目是 wego 上的基于 Web 的封装。它提供了一些其他显示选项,并且可以在同名网站上看到。关于 wttr.in 的一件很酷的事情是,你可以使用 `curl` 获取一行天气信息。我有一个名为 `get_wttr` 的 shell 函数,用于获取当前简化的预报信息。
|
||||
|
||||
```
|
||||
get_wttr() {
|
||||
```

via: https://opensource.com/article/20/1/open-source-weather-forecast
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Open source vs. proprietary: What's the difference?)
|
||||
[#]: via: (https://opensource.com/article/20/2/open-source-vs-proprietary)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
Open source vs. proprietary: What's the difference?
|
||||
======
|
||||
Need four good reasons to tell your friends to use open source? Here's how to make your case.
|
||||
![Doodles of the word open][1]
|
||||
|
||||
There's a lot to be learned from open source projects. After all, managing hundreds of disparate, asynchronous commits and bugs doesn't happen by accident. Someone or something has to coordinate releases, and keep all the code and project roadmaps organized. It's a lot like life. You have lots of tasks demanding your attention, and you have to tend to each in turn. To ensure everything gets done before its deadline, you try to stay organized and focused.
|
||||
|
||||
Fortunately, there are [applications out there][2] designed to help with that sort of thing, and many apply just as well to real life as they do to software.
|
||||
|
||||
Here are some reasons for choosing [open tools][3] when improving personal or project-based organization.
|
||||
|
||||
### Data ownership
|
||||
|
||||
It's rarely profitable for proprietary tools to provide you with [data][4] dumps. Some products, usually after a long battle with their users (and sometimes a lawsuit), provide ways to extract your data from them. But the real issue isn't whether a company lets you extract data; it's the fact that the capability to get to your data isn't guaranteed in the first place. It's your data, and when it's literally what you do each day, it is, in a way, your life. Nobody should have primary access to that but you, so why should you have to petition a company for a copy?
|
||||
|
||||
Using an open source tool ensures you have priority access to your own activities. When you need a copy of something, you already have it. When you need to export it from one application to another, you have complete control of how the data is exchanged. If you need to export your schedule from a calendar into your kanban board, you can manipulate and process the data to fit. You don't have to wait for functionality to be added to the app, because you own the data, the database, and the app.
|
||||
|
||||
### Working for yourself
|
||||
|
||||
When you use open source tools, you often end up improving them, sometimes whether you know it or not. You may not (or you may!) download the source and hack on code, but you probably fall into a way of using the tool that works best for you. You optimize your interaction with the tool. The unique way you interact with your tooling creates a kind of meta-tool: you haven't changed the software, but you've adapted it and yourself in ways that the project author and a dozen other users never imagined. Everyone does this with whatever software they rely upon, and it's why sitting at someone else's computer to use a familiar software (or even just looking over someone's shoulder) often feels foreign, like you're using a different version of the application than you're used to.
|
||||
|
||||
When you do this with proprietary software, you're either contributing to someone else's marketplace for free, or you're adjusting your own behavior based on forces outside your own control. When you optimize an open source tool, both the software and the interaction belong to you.
|
||||
|
||||
### The right to not upgrade
|
||||
|
||||
Tools change. It's the way of things.
|
||||
|
||||
Change can be frustrating, but it can be crippling when a service changes so severely that it breaks your workflow. A proprietary service has and maintains every right to change its product, and you explicitly accept this by using the product. If your favorite accounting software or scheduling web app changes its interface or its output options, you usually have no recourse but to adapt or stop using the service. Proprietary services reserve the right to remove features, arbitrarily and without warning, and it's not uncommon for companies to start out with an open API and strong compatibility with open source, only to drop these conveniences once its customer base has reached critical mass.
|
||||
|
||||
Open source changes, too. Changes in open source can be frustrating, too, and it can even drive users to alternative open source solutions. The difference is that when open source changes, you still own the unchanged code base. More importantly, lots of other people do too, and if there's enough desire for it, the project can be forked. There are several famous examples of this, but admittedly there are just as many examples where the demand was _not_ great enough, and users essentially had to adapt.
|
||||
|
||||
Even so, users are never truly forced to do anything in open source. If you want to hack together an old version of your mission-critical service on an old distro running ancient libraries in a virtual machine, you can do that because you own the code. When a proprietary service changes, you have no choice but to follow.
|
||||
|
||||
With open source, you can choose to forge your own path when necessary or follow the developers when convenient.
|
||||
|
||||
### Open for collaboration
|
||||
|
||||
Proprietary services can affect others in ways you may not realize. Closed source tools are accidentally insidious. If you use a proprietary product to manage your schedule or your recipes or your library, or you use a proprietary font in your graphic design or website, then the moment you need to coordinate with someone else, you are essentially forcing them to sign up for the same proprietary service because proprietary services usually require accounts. Of course, the same is sometimes true for an open source solution, but it's not common for open source products to collect and sell user data the way proprietary vendors do, so the stakes aren't quite the same.
|
||||
|
||||
### Independence
|
||||
|
||||
Ultimately, the open source advantage is one of independence for you and for those you want to collaborate with. Not everyone uses open source, and even if everyone did, not everyone would use the exact same tool or the same assets, so there will always be some negotiation when sharing data. However, by keeping your data and projects open, you enable everyone (your future self included) to contribute.
|
||||
|
||||
What steps do you take to ensure your work is open and accessible? Tell us in the comments!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/open-source-vs-proprietary
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_doodles.png?itok=W_0DOMM4 (Doodles of the word open)
|
||||
[2]: https://opensource.com/article/20/1/open-source-productivity-tools
|
||||
[3]: https://opensource.com/tags/tools
|
||||
[4]: https://opensource.com/tags/analytics-and-metrics
|
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (5 firewall features IT pros should know about but probably don’t)
|
||||
[#]: via: (https://www.networkworld.com/article/3519854/4-firewall-features-it-pros-should-know-about-but-probably-dont.html)
|
||||
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
|
||||
|
||||
5 firewall features IT pros should know about but probably don’t
|
||||
======
|
||||
As a foundational network defense, firewalls continue to be enhanced with new features, so many that some important ones get overlooked.
|
||||
Natalya Burova / Getty Images
|
||||
|
||||
[Firewalls][1] continuously evolve to remain a staple of network security by incorporating functionality of standalone devices, embracing network-architecture changes, and integrating outside data sources to add intelligence to the decisions they make – a daunting wealth of possibilities that is difficult to keep track of.
|
||||
|
||||
Because of this richness of features, next-generation firewalls are difficult to master fully, and important capabilities sometimes can be, and in practice are, overlooked.
|
||||
|
||||
Here is a shortlist of new features IT pros should be aware of.
|
||||
|
||||
**[ Also see [What to consider when deploying a next generation firewall][2]. | Get regularly scheduled insights by [signing up for Network World newsletters][3]. ]**
|
||||
|
||||
### Also in this series:
|
||||
|
||||
* [Cybersecurity in 2020: From secure code to defense in depth][4] (CSO)
|
||||
* [More targeted, sophisticated and costly: Why ransomware might be your biggest threat][5] (CSO)
|
||||
* [How to bring security into agile development and CI/CD][6] (Infoworld)
|
||||
* [UEM to marry security – finally – after long courtship][7] (Computerworld)
|
||||
* [Security vs. innovation: IT's trickiest balancing act][8] (CIO)
|
||||
|
||||
|
||||
|
||||
## Network segmentation
|
||||
|
||||
Dividing a single physical network into multiple logical networks is known as [network segmentation][9]; each segment behaves as if it runs on its own physical network. Traffic from one segment can’t be seen by or passed to another segment.
|
||||
|
||||
This significantly reduces attack surfaces in the event of a breach. For example, a hospital could put all its medical devices into one segment and its patient records into another. Then, if hackers breach a heart pump that was not secured properly, that would not enable them to access private patient information.
|
||||
|
||||
It’s important to note that many of the connected things that make up the [internet of things][10] run older operating systems, are inherently insecure, and can act as a point of entry for attackers, so the growth of IoT and its distributed nature drives up the need for network segmentation.
|
||||
|
||||
## Policy optimization
|
||||
|
||||
Firewall policies and rules are the engine that make firewalls go. Most security professionals are terrified of removing older policies because they don’t know when they were put in place or why. As a result, rules keep getting added with no thought of reducing the overall number. Some enterprises say they have millions of firewall rules in place. The fact is, too many rules add complexity, can conflict with each other and are time consuming to manage and troubleshoot.
|
||||
|
||||
|
||||
Policy optimization migrates legacy security policy rules to application-based rules that permit or deny traffic based on what application is being used. This improves overall security by reducing the attack surface and also provides visibility to safely enable application access. Policy optimization identifies port-based rules so they can be converted to application-based whitelist rules or add applications from a port-based rule to an existing application-based rule without compromising application availability. It also identifies over-provisioned application-based rules. Policy optimization helps prioritize which port-based rules to migrate first, identify application-based rules that allow applications that aren’t being used, and analyze rule-usage characteristics such as hit count, which compares how often a particular rule is applied vs. how often all the rules are applied.
|
||||
|
||||
Converting port-based rules to application-based rules improves security posture because the organization can select the applications they want to whitelist and deny all other applications. That way unwanted and potentially malicious traffic is eliminated from the network.
|
||||
|
||||
## Credential-theft prevention
|
||||
|
||||
Historically, workers accessed corporate applications from company offices. Today they access legacy apps, SaaS apps and other cloud services from the office, home, airport and anywhere else they may be. This makes it much easier for threat actors to steal credentials. The Verizon [Data Breach Investigations Report][12] found that 81% of hacking-related breaches leveraged stolen and/or weak passwords.
|
||||
|
||||
Credential-theft prevention blocks employees from using corporate credentials on sites such as Facebook and Twitter. Even though they may be sanctioned applications, using corporate credentials to access them puts the business at risk.
|
||||
|
||||
Credential-theft prevention works by scanning username and password submissions to websites and comparing those submissions to lists of official corporate credentials. Businesses can choose which websites are allowed to receive corporate credentials and block submissions to other sites based on the URL category of the website.
|
||||
|
||||
When the firewall detects a user attempting to submit credentials to a site in a category that is restricted, it can display a block-response page that prevents the user from submitting credentials. Alternatively, it can present a continue page that warns users against submitting credentials to sites classified in certain URL categories, but still allows them to continue with the credential submission. Security professionals can customize these block pages to educate users against reusing corporate credentials, even on legitimate, non-phishing sites.
|
||||
|
||||
## DNS security
|
||||
|
||||
A combination of machine learning, analytics and automation can block attacks that leverage the [Domain Name System (DNS)][13]. In many enterprises, DNS servers are unsecured and completely wide open to attacks that redirect users to bad sites where they are phished and where data is stolen. Threat actors have a high degree of success with DNS-based attacks because security teams have very little visibility into how attackers use the service to maintain control of infected devices. There are some standalone DNS security services that are moderately effective but lack the volume of data to recognize all attacks.
|
||||
|
||||
When DNS security is integrated into firewalls, machine learning can analyze the massive amount of network data, making standalone analysis tools unnecessary. DNS security integrated into a firewall can predict and block malicious domains through automation and the real-time analysis that finds them. As the number of bad domains grows, machine learning can find them quickly and ensure they don’t become problems.
|
||||
|
||||
Integrated DNS security can also use machine-learning analytics to neutralize DNS tunneling, which smuggles data through firewalls by hiding it within DNS requests. DNS security can also find malware command-and-control servers. It builds on top of signature-based systems to identify advanced tunneling methods and automates the shutdown of DNS-tunneling attacks.
|
||||
|
||||
## Dynamic user groups
|
||||
|
||||
It’s possible to create policies that automate the remediation of anomalous worker activity. The basic premise is that users who share a role within a group should show similar network behavior. For example, if a worker is phished and strange apps are installed, this would stand out and could indicate a breach.
|
||||
|
||||
Historically, quarantining a group of users was highly time consuming because each member of the group had to be addressed and policies enforced individually. With dynamic user groups, when the firewall sees an anomaly it creates policies that counter the anomaly and pushes them out to the user group. The entire group is automatically updated without having to manually create and commit policies. So, for example, all the people in accounting would receive the same policy update automatically, at once, instead of manually, one at a time. Integration with the firewall enables the firewall to distribute the policies for the user group to all the other infrastructure that requires it, including other firewalls, log collectors or applications.
|
||||
|
||||
Firewalls have been and will continue to be the anchor of cyber security. They are the first line of defense and can thwart many attacks before they penetrate the enterprise network. Maximizing the value of firewalls means turning on many of the advanced features, some of which have been in firewalls for years but not turned on for a variety of reasons.
|
||||
|
||||
Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3519854/4-firewall-features-it-pros-should-know-about-but-probably-dont.html
|
||||
|
||||
作者:[Zeus Kerravala][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
|
||||
[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
|
||||
[3]: https://www.networkworld.com/newsletters/signup.html
|
||||
[4]: https://www.csoonline.com/article/3519913/cybersecurity-in-2020-from-secure-code-to-defense-in-depth.html
|
||||
[5]: https://www.csoonline.com/article/3518864/more-targeted-sophisticated-and-costly-why-ransomware-might-be-your-biggest-threat.html
|
||||
[6]: https://www.infoworld.com/article/3520969/how-to-bring-security-into-agile-development-and-cicd.html
|
||||
[7]: https://www.computerworld.com/article/3516136/uem-to-marry-security-finally-after-long-courtship.html
|
||||
[8]: https://www.cio.com/article/3521009/security-vs-innovation-its-trickiest-balancing-act.html
|
||||
[9]: https://www.networkworld.com/article/3016565/how-network-segmentation-provides-a-path-to-iot-security.html
|
||||
[10]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
|
||||
[11]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
|
||||
[12]: https://enterprise.verizon.com/resources/reports/dbir/
|
||||
[13]: https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html
|
||||
[14]: https://www.facebook.com/NetworkWorld/
|
||||
[15]: https://www.linkedin.com/company/network-world
|
[#]: collector: (lujun9972)
|
||||
[#]: translator: (chenmu-kk)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (12 open source tools for natural language processing)
|
||||
[#]: via: (https://opensource.com/article/19/3/natural-language-processing-tools)
|
||||
[#]: author: (Dan Barker https://opensource.com/users/barkerd427)
|
||||
|
||||
12 open source tools for natural language processing
|
||||
======
|
||||
|
||||
Take a look at a dozen options for your next NLP application.
|
||||
|
||||
![Chat bubbles][1]
|
||||
|
||||
Natural language processing (NLP), the technology that powers all the chatbots, voice assistants, predictive text, and other speech/text applications that permeate our lives, has evolved significantly in the last few years. There are a wide variety of open source NLP tools out there, so I decided to survey the landscape to help you plan your next voice- or text-based application.
|
||||
|
||||
For this review, I focused on tools that use languages I'm familiar with, even though I'm not familiar with all the tools. (I didn't find a great selection of tools in the languages I'm not familiar with anyway.) That said, I excluded tools in three languages I am familiar with, for various reasons.
|
||||
|
||||
The most obvious language I didn't include might be R, but most of the libraries I found hadn't been updated in over a year. That doesn't always mean they aren't being maintained well, but I think they should be getting updates more often to compete with other tools in the same space. I also chose languages and tools that are most likely to be used in production scenarios (rather than academia and research), and I have mostly used R as a research and discovery tool.
|
||||
|
||||
I was also surprised to see that the Scala libraries are fairly stagnant. It has been a couple of years since I last used Scala, when it was pretty popular. Most of the libraries haven't been updated since that time—or they've only had a few updates.
|
||||
|
||||
Finally, I excluded C++. This is mostly because it's been many years since I last wrote in C++, and the organizations I've worked in have not used C++ for NLP or any data science work.
|
||||
|
||||
### Python tools
|
||||
|
||||
#### Natural Language Toolkit (NLTK)
|
||||
|
||||
It would be easy to argue that [Natural Language Toolkit (NLTK)][2] is the most full-featured tool of the ones I surveyed. It implements pretty much any component of NLP you would need, like classification, tokenization, stemming, tagging, parsing, and semantic reasoning. And there's often more than one implementation for each, so you can choose the exact algorithm or methodology you'd like to use. It also supports many languages. However, it represents all data in the form of strings, which is fine for simple constructs but makes it hard to use some advanced functionality. The documentation is also quite dense, but there is a lot of it, as well as [a great book][3]. The library is also a bit slow compared to other tools. Overall, this is a great toolkit for experimentation, exploration, and applications that need a particular combination of algorithms.
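As a quick illustration of the kind of pipeline NLTK exposes, here is a minimal sketch that tokenizes and part-of-speech tags a sentence; it assumes NLTK is installed and that the standard `punkt` and `averaged_perceptron_tagger` data packages are available, and the example sentence is purely illustrative:

```
import nltk

# One-time downloads of the tokenizer and tagger models (standard NLTK data packages).
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "Natural language processing powers chatbots and voice assistants."

# Split the raw string into word tokens, then attach part-of-speech tags.
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)

print(tagged)  # e.g. [('Natural', 'JJ'), ('language', 'NN'), ...]
```

Note that everything flows through as plain strings and tuples, which is exactly the trade-off described above.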
|
||||
|
||||
#### SpaCy
|
||||
|
||||
[SpaCy][4] is probably the main competitor to NLTK. It is faster in most cases, but it only has a single implementation for each NLP component. Also, it represents everything as an object rather than a string, which simplifies the interface for building applications. This also helps it integrate with many other frameworks and data science tools, so you can do more once you have a better understanding of your text data. However, SpaCy doesn't support as many languages as NLTK. It does have a simple interface with a simplified set of choices and great documentation, as well as multiple neural models for various components of language processing and analysis. Overall, this is a great tool for new applications that need to be performant in production and don't require a specific algorithm.
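For comparison, a minimal SpaCy sketch looks like this; it assumes SpaCy is installed along with the small English model `en_core_web_sm`, and the sample sentence is only for illustration. Everything comes back as objects (`Doc`, `Token`, `Span`) rather than strings:

```
import spacy

# Load the small English pipeline (assumes it was fetched with:
#   python -m spacy download en_core_web_sm).
nlp = spacy.load("en_core_web_sm")

doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# Tokens carry their annotations as attributes instead of plain strings.
for token in doc:
    print(token.text, token.pos_, token.lemma_)

# Named entities are exposed as Span objects on the Doc.
for ent in doc.ents:
    print(ent.text, ent.label_)
```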
|
||||
|
||||
#### TextBlob
|
||||
|
||||
[TextBlob][5] is kind of an extension of NLTK. You can access many of NLTK's functions in a simplified manner through TextBlob, and TextBlob also includes functionality from the Pattern library. If you're just starting out, this might be a good tool to use while learning, and it can be used in production for applications that don't need to be overly performant. Overall, TextBlob is used all over the place and is great for smaller projects.
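A small sketch of the simplified interface TextBlob wraps around NLTK and Pattern, assuming the `textblob` package is installed and its corpora have been fetched with `python -m textblob.download_corpora` (the input text is illustrative):

```
from textblob import TextBlob

text = TextBlob("TextBlob makes common NLP tasks feel almost effortless. I love it!")

# Sentence splitting, tokenization, and noun-phrase extraction come for free.
print(text.sentences)
print(text.noun_phrases)

# Sentiment is a (polarity, subjectivity) tuple from the Pattern analyzer.
print(text.sentiment)
```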
|
||||
|
||||
#### Textacy
|
||||
|
||||
This tool may have the best name of any library I've ever used. Say "[Textacy][6]" a few times while emphasizing the "ex" and drawing out the "cy." Not only is it great to say, but it's also a great tool. It uses SpaCy for its core NLP functionality, but it handles a lot of the work before and after the processing. If you were planning to use SpaCy, you might as well use Textacy so you can easily bring in many types of data without having to write extra helper code.
|
||||
|
||||
#### PyTorch-NLP
|
||||
|
||||
[PyTorch-NLP][7] has been out for just a little over a year, but it has already gained a tremendous community. It is a great tool for rapid prototyping. It's also updated often with the latest research, and top companies and researchers have released many other tools to do all sorts of amazing processing, like image transformations. Overall, PyTorch is targeted at researchers, but it can also be used for prototypes and initial production workloads with the most advanced algorithms available. The libraries being created on top of it might also be worth looking into.
|
||||
|
||||
### Node tools
|
||||
|
||||
#### Retext
|
||||
|
||||
[Retext][8] is part of the [unified collective][9]. Unified is an interface that allows multiple tools and plugins to integrate and work together effectively. Retext is one of three syntaxes used by the unified tool; the others are Remark for markdown and Rehype for HTML. This is a very interesting idea, and I'm excited to see this community grow. Retext doesn't expose a lot of its underlying techniques, but instead uses plugins to achieve the results you might be aiming for with NLP. It's easy to do things like checking spelling, fixing typography, detecting sentiment, or making sure text is readable with simple plugins. Overall, this is an excellent tool and community if you just need to get something done without having to understand everything in the underlying process.
|
||||
|
||||
#### Compromise
|
||||
|
||||
[Compromise][10] certainly isn't the most sophisticated tool. If you're looking for the most advanced algorithms or the most complete system, this probably isn't the right tool for you. However, if you want a performant tool that has a wide breadth of features and can function on the client side, you should take a look at Compromise. Overall, its name is accurate in that the creators compromised on functionality and accuracy by focusing on a small package with much more specific functionality that benefits from the user understanding more of the context surrounding the usage.
|
||||
|
||||
#### Natural
|
||||
|
||||
[Natural][11] includes most functions you might expect in a general NLP library. It is mostly focused on English, but some other languages have been contributed, and the community is open to additional contributions. It supports tokenizing, stemming, classification, phonetics, term frequency–inverse document frequency, WordNet, string similarity, and some inflections. It might be most comparable to NLTK, in that it tries to include everything in one package, but it is easier to use and isn't necessarily focused around research. Overall, this is a pretty full library, but it is still in active development and may require additional knowledge of underlying implementations to be fully effective.
|
||||
|
||||
#### Nlp.js
|
||||
|
||||
[Nlp.js][12] is built on top of several other NLP libraries, including Franc and Brain.js. It provides a nice interface into many components of NLP, like classification, sentiment analysis, stemming, named entity recognition, and natural language generation. It also supports quite a few languages, which is helpful if you plan to work in something other than English. Overall, this is a great general tool with a simplified interface into several other great tools. This will likely take you a long way in your applications before you need something more powerful or more flexible.
|
||||
|
||||
### Java tools
|
||||
|
||||
#### OpenNLP
|
||||
|
||||
[OpenNLP][13] is hosted by the Apache Foundation, so it's easy to integrate it into other Apache projects, like Apache Flink, Apache NiFi, and Apache Spark. It is a general NLP tool that covers all the common processing components of NLP, and it can be used from the command line or within an application as a library. It also has wide support for multiple languages. Overall, OpenNLP is a powerful tool with a lot of features and ready for production workloads if you're using Java.
|
||||
|
||||
#### StanfordNLP
|
||||
|
||||
[Stanford CoreNLP][14] is a set of tools that provides statistical NLP, deep learning NLP, and rule-based NLP functionality. Many other programming language bindings have been created so this tool can be used outside of Java. It is a very powerful tool created by an elite research institution, but it may not be the best thing for production workloads. This tool is dual-licensed with a special license for commercial purposes. Overall, this is a great tool for research and experimentation, but it may incur additional costs in a production system. The Python implementation might also interest many readers more than the Java version. Also, one of the best Machine Learning courses is taught by a Stanford professor on Coursera. [Check it out][15] along with other great resources.
|
||||
|
||||
#### CogCompNLP
|
||||
|
||||
[CogCompNLP][16], developed by the University of Illinois, also has a Python library with similar functionality. It can be used to process text, either locally or on remote systems, which can remove a tremendous burden from your local device. It provides processing functions such as tokenization, part-of-speech tagging, chunking, named-entity tagging, lemmatization, dependency and constituency parsing, and semantic role labeling. Overall, this is a great tool for research, and it has a lot of components that you can explore. I'm not sure it's great for production workloads, but it's worth trying if you plan to use Java.
|
||||
|
||||
* * *
|
||||
|
||||
What are your favorite open source tools and libraries for NLP? Please share in the comments—especially if there's one I didn't include.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/natural-language-processing-tools
|
||||
|
||||
作者:[Dan Barker (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/barkerd427
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
|
||||
[2]: http://www.nltk.org/
|
||||
[3]: http://www.nltk.org/book_1ed/
|
||||
[4]: https://spacy.io/
|
||||
[5]: https://textblob.readthedocs.io/en/dev/
|
||||
[6]: https://readthedocs.org/projects/textacy/
|
||||
[7]: https://pytorchnlp.readthedocs.io/en/latest/
|
||||
[8]: https://www.npmjs.com/package/retext
|
||||
[9]: https://unified.js.org/
|
||||
[10]: https://www.npmjs.com/package/compromise
|
||||
[11]: https://www.npmjs.com/package/natural
|
||||
[12]: https://www.npmjs.com/package/node-nlp
|
||||
[13]: https://opennlp.apache.org/
|
||||
[14]: https://stanfordnlp.github.io/CoreNLP/
|
||||
[15]: https://opensource.com/article/19/2/learn-data-science-ai
|
||||
[16]: https://github.com/CogComp/cogcomp-nlp
|
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Manage multimedia files with Git)
|
||||
[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
Manage multimedia files with Git
|
||||
======
|
||||
Learn how to use Git to track large multimedia files in your projects in the final article in our series on little-known uses of Git.
|
||||
![video editing dashboard][1]
|
||||
|
||||
Git is very specifically designed for source code version control, so it's rarely embraced by projects and industries that don't primarily work in plaintext. However, the advantages of an asynchronous workflow are appealing, especially in the ever-growing number of industries that combine serious computing with seriously artistic ventures, including web design, visual effects, video games, publishing, currency design (yes, that's a real industry), education… the list goes on and on.
|
||||
|
||||
In this series leading up to Git's 14th anniversary, we've shared six little-known ways to use Git. In this final article, we'll look at software that brings the advantages of Git to managing multimedia files.
|
||||
|
||||
### The problem with managing multimedia files with Git
|
||||
|
||||
It seems to be common knowledge that Git doesn't work well with non-text files, but it never hurts to challenge assumptions. Here's an example of copying a photo file using Git:
|
||||
|
||||
|
||||
```
|
||||
$ du -hs
|
||||
108K .
|
||||
$ cp ~/photos/dandelion.tif .
|
||||
$ git add dandelion.tif
|
||||
$ git commit -m 'added a photo'
|
||||
[master (root-commit) fa6caa7] added a photo
|
||||
1 file changed, 0 insertions(+), 0 deletions(-)
|
||||
create mode 100644 dandelion.tif
|
||||
$ du -hs
|
||||
1.8M .
|
||||
```
|
||||
|
||||
Nothing unusual so far; adding a 1.8MB photo to a directory results in a directory 1.8MB in size. So, let's try removing the file:
|
||||
|
||||
|
||||
```
|
||||
$ git rm dandelion.tif
|
||||
$ git commit -m 'deleted a photo'
|
||||
$ du -hs
|
||||
828K .
|
||||
```
|
||||
|
||||
You can see the problem here: Removing a large file after it's been committed increases a repository's size roughly eight times its original, barren state (from 108K to 828K). You can perform tests to get a better average, but this simple demonstration is consistent with my experience. The cost of committing files that aren't text-based is minimal at first, but the longer a project stays active, the more changes people make to static content, and the more those fractions start to add up. When a Git repository becomes very large, the major cost is usually speed. The time to perform pulls and pushes goes from being how long it takes to take a sip of coffee to how long it takes to wonder if your computer got kicked off the network.
|
||||
|
||||
The reason static content causes Git to grow in size is that formats based on text allow Git to pull out just the parts that have changed. Raster images and music files make as much sense to Git as they would to you if you looked at the binary data contained in a .png or .wav file. So Git just takes all the data and makes a new copy of it, even if only one pixel changes from one photo to the next.
|
||||
|
||||
### Git-portal
|
||||
|
||||
In practice, many multimedia projects don't need or want to track the media's history. The media part of a project tends to have a different lifecycle than the text or code part of a project. Media assets generally progress in one direction: a picture starts as a pencil sketch, proceeds toward its destination as a digital painting, and, even if the text is rolled back to an earlier version, the art continues its forward progress. It's rare for media to be bound to a specific version of a project. The exceptions are usually graphics that reflect datasets—usually tables or graphs or charts—that can be done in text-based formats such as SVG.
|
||||
|
||||
So, on many projects that involve both media and text (whether it's narrative prose or code), Git is an acceptable solution to file management, as long as there's a playground outside the version control cycle for artists to play in.
|
||||
|
||||
![Graphic showing relationship between art assets and Git][2]
|
||||
|
||||
A simple way to enable that is [Git-portal][3], a Bash script armed with Git hooks that moves your asset files to a directory outside Git's purview and replaces them with symlinks. Git commits the symlinks (sometimes called aliases or shortcuts), which are trivially small, so all you commit are your text files and whatever symlinks represent your media assets. Because the replacement files are symlinks, your project continues to function as expected because your local machine follows the symlinks to their "real" counterparts. Git-portal maintains a project's directory structure when it swaps out a file with a symlink, so it's easy to reverse the process, should you decide that Git-portal isn't right for your project or you need to build a version of your project without symlinks (for distribution, for instance).
|
||||
|
||||
Git-portal also allows remote synchronization of assets over rsync, so you can set up a remote storage location as a centralized source of authority.
|
||||
|
||||
Git-portal is ideal for multimedia projects, including video game and tabletop game design, virtual reality projects with big 3D model renders and textures, [books][4] with graphics and .odt exports, collaborative [blog websites][5], music projects, and much more. It's not uncommon for an artist to perform versioning in their application—in the form of layers (in the graphics world) and tracks (in the music world)—so Git adds nothing to multimedia project files themselves. The power of Git is leveraged for other parts of artistic projects (prose and narrative, project management, subtitle files, credits, marketing copy, documentation, and so on), and the power of structured remote backups is leveraged by the artists.
|
||||
|
||||
#### Install Git-portal
|
||||
|
||||
There are RPM packages for Git-portal located at <https://klaatu.fedorapeople.org/git-portal>, which you can download and install.
|
||||
|
||||
Alternately, you can install Git-portal manually from its home on GitLab. It's just a Bash script and some Git hooks (which are also Bash scripts), but it requires a quick build process so that it knows where to install itself:
|
||||
|
||||
|
||||
```
|
||||
$ git clone https://gitlab.com/slackermedia/git-portal.git git-portal.clone
|
||||
$ cd git-portal.clone
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
#### Use Git-portal
|
||||
|
||||
Git-portal is used alongside Git. This means, as with all large-file extensions to Git, there are some added steps to remember. But you only need Git-portal when dealing with your media assets, so it's pretty easy to remember unless you've acclimated yourself to treating large files the same as text files (which is rare for Git users). There's one setup step you must do to use Git-portal in a project:
|
||||
|
||||
|
||||
```
|
||||
$ mkdir bigproject.git
|
||||
$ cd !$
|
||||
$ git init
|
||||
$ git-portal init
|
||||
```
|
||||
|
||||
Git-portal's **init** function creates a **_portal** directory in your Git repository and adds it to your .gitignore file.
|
||||
|
||||
Using Git-portal in a daily routine integrates smoothly with Git. A good example is a MIDI-based music project: the project files produced by the music workstation are text-based, but the MIDI files are binary data:
|
||||
|
||||
|
||||
```
|
||||
$ ls -1
|
||||
_portal
|
||||
song.1.qtr
|
||||
song.qtr
|
||||
song-Track_1-1.mid
|
||||
song-Track_1-3.mid
|
||||
song-Track_2-1.mid
|
||||
$ git add song*qtr
|
||||
$ git-portal song-Track*mid
|
||||
$ git add song-Track*mid
|
||||
```
|
||||
|
||||
If you look into the **_portal** directory, you'll find the original MIDI files. The files in their place are symlinks to **_portal** , which keeps the music workstation working as expected:
|
||||
|
||||
|
||||
```
|
||||
$ ls -lG
|
||||
[...] _portal/
|
||||
[...] song.1.qtr
|
||||
[...] song.qtr
|
||||
[...] song-Track_1-1.mid -> _portal/song-Track_1-1.mid*
|
||||
[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid*
|
||||
[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid*
|
||||
```
|
||||
|
||||
As with Git, you can also add a directory of files:
|
||||
|
||||
|
||||
```
|
||||
$ cp -r ~/synth-presets/yoshimi .
|
||||
$ git-portal add yoshimi
|
||||
Directories cannot go through the portal. Sending files instead.
|
||||
$ ls -lG _portal/yoshimi
|
||||
[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat*
|
||||
```
|
||||
|
||||
Removal works as expected, but when removing something in **_portal** , you should use **git-portal rm** instead of **git rm**. Using Git-portal ensures that the file is removed from **_portal** :
|
||||
|
||||
|
||||
```
|
||||
$ ls
|
||||
_portal/ song.qtr song-Track_1-3.mid@ yoshimi/
|
||||
song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@
|
||||
$ git-portal rm song-Track_1-3.mid
|
||||
rm 'song-Track_1-3.mid'
|
||||
$ ls _portal/
|
||||
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
|
||||
```
|
||||
|
||||
If you forget to use Git-portal, then you have to remove the portal file manually:
|
||||
|
||||
|
||||
```
|
||||
$ git-portal rm song-Track_1-1.mid
|
||||
rm 'song-Track_1-1.mid'
|
||||
$ ls _portal/
|
||||
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
|
||||
$ trash _portal/song-Track_1-1.mid
|
||||
```
|
||||
|
||||
Git-portal's only other function is to list all current symlinks and find any that may have become broken, which can sometimes happen if files move around in a project directory:
|
||||
|
||||
|
||||
```
|
||||
$ mkdir foo
|
||||
$ mv yoshimi foo
|
||||
$ git-portal status
|
||||
bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid
|
||||
bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat
|
||||
```
|
||||
|
||||
If you're using Git-portal for a personal project and maintaining your own backups, this is technically all you need to know about Git-portal. If you want to add in collaborators or you want Git-portal to manage backups the way (more or less) Git does, you can add a remote.
|
||||
|
||||
#### Add Git-portal remotes
|
||||
|
||||
Adding a remote location for Git-portal is done through Git's existing remote function. Git-portal implements Git hooks, scripts hidden in your repository's .git directory, to look at your remotes for any that begin with **_portal**. If it finds one, it attempts to **rsync** to the remote location and synchronize files. Git-portal performs this action anytime you do a Git push or a Git merge (or pull, which is really just a fetch and an automatic merge).
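To give a feel for how this works, here is a heavily simplified, hypothetical sketch of a pre-push hook in the same spirit. This is not Git-portal's actual hook (the real scripts are installed into .git/hooks for you), and the remote name and rsync flags are illustrative:

```
#!/bin/bash
# Simplified illustration only -- not the real Git-portal hook.
# If a remote named "_portal" is configured, mirror the local
# _portal directory to it with rsync before the push completes.
portal_remote=$(git remote get-url _portal 2>/dev/null)

if [ -n "$portal_remote" ]; then
    echo "Syncing _portal content to $portal_remote"
    rsync -a --delete _portal/ "$portal_remote/"
fi
```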
|
||||
|
||||
If you've only cloned Git repositories, then you may never have added a remote yourself. It's a standard Git procedure:
|
||||
|
||||
|
||||
```
$ git remote add origin git@gitdawg.com:seth/bigproject.git
$ git remote -v
origin  git@gitdawg.com:seth/bigproject.git (fetch)
origin  git@gitdawg.com:seth/bigproject.git (push)
```
|
||||
|
||||
The name **origin** is a popular convention for your main Git repository, so it makes sense to use it for your Git data. Your Git-portal data, however, is stored separately, so you must create a second remote to tell Git-portal where to push to and pull from. Depending on your Git host, you may need a separate server because gigabytes of media assets are unlikely to be accepted by a Git host with limited space. Or maybe you're on a server that permits you to access only your Git repository and not external storage directories:
|
||||
|
||||
|
||||
```
$ git remote add _portal seth@example.com:/home/seth/git/bigproject_portal
$ git remote -v
origin  git@gitdawg.com:seth/bigproject.git (fetch)
origin  git@gitdawg.com:seth/bigproject.git (push)
_portal seth@example.com:/home/seth/git/bigproject_portal (fetch)
_portal seth@example.com:/home/seth/git/bigproject_portal (push)
```
|
||||
|
||||
You may not want to give all of your users individual accounts on your server, and you don't have to. To provide access to the server hosting a repository's large file assets, you can run a Git frontend like **[Gitolite][8]**, or you can use **rrsync** (i.e., restricted rsync).
|
||||
|
||||
Now you can push your Git data to your remote Git repository and your Git-portal data to your remote portal:
|
||||
|
||||
|
||||
```
|
||||
$ git push origin HEAD
|
||||
master destination detected
|
||||
Syncing _portal content...
|
||||
sending incremental file list
|
||||
sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec
|
||||
total size is 60,358,015 speedup is 6,474.10
|
||||
Syncing _portal content to example.com:/home/seth/git/bigproject_portal
|
||||
```
|
||||
|
||||
If you have Git-portal installed and a **_portal** remote configured, your **_portal** directory will be synchronized, getting new content from the server and sending fresh content with every push. While you don't have to do a Git commit and push to sync with the server (a user could just use rsync directly), I find it useful to require commits for artistic changes. It integrates artists and their digital assets into the rest of the workflow, and it provides useful metadata about project progress and velocity.
|
||||
|
||||
### Other options
|
||||
|
||||
If Git-portal is too simple for you, there are other options for managing large files with Git. [Git Large File Storage][9] (LFS) is a fork of a defunct project called git-media and is maintained and supported by GitHub. It requires special commands (like **git lfs track** to protect large files from being tracked by Git) and requires the user to manage a .gitattributes file to update which files in the repository are tracked by LFS. It supports _only_ HTTP and HTTPS remotes for large files, so your LFS server must be configured so users can authenticate over HTTP rather than SSH or rsync.
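For comparison, a typical LFS workflow looks roughly like this (a generic sketch using the MIDI files from earlier as an example; the tracked patterns are up to you):

```
$ git lfs install
$ git lfs track "*.mid"
$ git add .gitattributes
$ git add song-Track_1-1.mid
$ git commit -m "Track MIDI files with Git LFS"
```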
|
||||
|
||||
A more flexible option than LFS is [git-annex][10], which you can learn more about in my article about [managing binary blobs in Git][11] (ignore the parts about the deprecated git-media, as its former flexibility doesn't apply to its successor, Git LFS). Git-annex is a flexible and elegant solution with a detailed system for adding, removing, and moving large files within a repository. Because it's flexible and powerful, there are lots of new commands and rules to learn, so take a look at its [documentation][12].
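If you want a quick taste of git-annex before reading the walkthrough, the basic flow is along these lines (a minimal sketch; the repository name and description are placeholders):

```
$ git init bigproject && cd bigproject
$ git annex init "my laptop"
$ git annex add song-Track_1-1.mid
$ git commit -m "Add MIDI track via git-annex"
```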
|
||||
|
||||
If, however, your needs are simple and you like a solution that utilizes existing technology to do simple and obvious tasks, Git-portal might be the tool for the job.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/manage-multimedia-files-git
|
||||
|
||||
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git)
|
||||
[3]: http://gitlab.com/slackermedia/git-portal.git
|
||||
[4]: https://www.apress.com/gp/book/9781484241691
|
||||
[5]: http://mixedsignals.ml
|
||||
[6]: mailto:git@gitdawg.com
|
||||
[7]: mailto:seth@example.com
|
||||
[8]: https://opensource.com/article/19/4/file-sharing-git
|
||||
[9]: https://git-lfs.github.com/
|
||||
[10]: https://git-annex.branchable.com/
|
||||
[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
|
||||
[12]: https://git-annex.branchable.com/walkthrough/
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (zhangxiangping)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
@ -264,7 +264,7 @@ via: https://opensource.com/article/19/7/python-google-natural-language-api
|
||||
|
||||
作者:[JR Oakes][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[zhangxiangping](https://github.com/zhangxiangping)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,96 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with GnuCash)
|
||||
[#]: via: (https://opensource.com/article/20/2/gnucash)
|
||||
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
|
||||
|
||||
Getting started with GnuCash
|
||||
======
|
||||
Manage your personal or small business accounting with GnuCash.
|
||||
![A dollar sign in a network][1]
|
||||
|
||||
For the past four years, I've been managing my personal finances with [GnuCash][2], and I'm quite satisfied with it. The open source (GPL v3) project has been growing and improving since its initial release in 1998, and the latest version, 3.8, released in December 2019, adds many improvements and bug fixes.
|
||||
|
||||
GnuCash is available for Windows, MacOS, and Linux. The application implements a double-entry bookkeeping system and can import a variety of popular open and proprietary file formats, including QIF, QFX, OFX, CSV, and more. This makes it easy to convert from other personal finance applications, including Quicken, which it was created to replicate.
|
||||
|
||||
With GnuCash, you can track personal finances as well as small business accounting and invoicing. It doesn't have an integrated payroll system; according to the documentation, you can track payroll expenses in GnuCash, but you have to calculate taxes and deductions outside the software.
|
||||
|
||||
### Installation
|
||||
|
||||
To install GnuCash on Linux:
|
||||
|
||||
* On Red Hat, CentOS, or Fedora: **$ sudo dnf install gnucash**
|
||||
* On Debian, Ubuntu, or Pop_OS: **$ sudo apt install gnucash**
|
||||
|
||||
|
||||
|
||||
You can also install it from [Flathub][3], which is what I used on my laptop running Elementary OS. (All the screenshots in this article are from that installation.)
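If you go the Flatpak route, the installation is a single command (this assumes Flatpak is installed and the Flathub remote is already configured on your system):

```
$ flatpak install flathub org.gnucash.GnuCash
```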
|
||||
|
||||
### Setup
|
||||
|
||||
After you install and launch the program, you will see a welcome screen that gives you the option to create a new set of accounts, import QIF files, or open a new user tutorial.
|
||||
|
||||
![GnuCash Welcome screen][4]
|
||||
|
||||
#### Personal accounts
|
||||
|
||||
If you choose the first option (as I did), GnuCash opens a screen to help you get up and running. It collects initial data and sets up your account preferences, such as your account types and names, business data (e.g., tax ID number), and preferred currency.
|
||||
|
||||
![GnuCash new account setup][5]
|
||||
|
||||
GnuCash supports personal bank accounts, business accounts, car loans, CD and money market accounts, childcare accounts, and more.
|
||||
|
||||
As an example, start by creating a simple checkbook. You can either enter your account's beginning balance or import existing account data in multiple formats.
|
||||
|
||||
![GnuCash import data][6]
|
||||
|
||||
#### Invoicing
|
||||
|
||||
GnuCash also supports small business functions, including customers, vendors, and invoicing. To create an invoice, enter the data in the **Business -> Invoice** section.
|
||||
|
||||
![GnuCash create invoice][7]
|
||||
|
||||
Then you can either print the invoice on paper or export it to a PDF and email it to your customer.
|
||||
|
||||
![GnuCash invoice][8]
|
||||
|
||||
### Get help
|
||||
|
||||
If you have questions, there's an excellent Help guide accessible from the far-right side of the menu bar.
|
||||
|
||||
![GnuCash help][9]
|
||||
|
||||
The project's website includes links to lots of helpful information, such as a great overview of GnuCash [features][10]. GnuCash also has [detailed documentation][11] available to download and read offline and a [wiki][12] with helpful information for users and developers.
|
||||
|
||||
You can find other files and documentation in the project's [GitHub][13] repository. The GnuCash project is volunteer-driven. If you would like to contribute, please check out [Getting involved][14] on the project's wiki.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/gnucash
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/don-watkins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0 (A dollar sign in a network)
|
||||
[2]: https://www.gnucash.org/
|
||||
[3]: https://flathub.org/apps/details/org.gnucash.GnuCash
|
||||
[4]: https://opensource.com/sites/default/files/images/gnucash_welcome.png (GnuCash Welcome screen)
|
||||
[5]: https://opensource.com/sites/default/files/uploads/gnucash_newaccountsetup.png (GnuCash new account setup)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/gnucash_importdata.png (GnuCash import data)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/gnucash_enter-invoice.png (GnuCash create invoice)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/gnucash_invoice.png (GnuCash invoice)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/gnucash_help.png (GnuCash help)
|
||||
[10]: https://www.gnucash.org/features.phtml
|
||||
[11]: https://www.gnucash.org/docs/v3/C/gnucash-help.pdf
|
||||
[12]: https://wiki.gnucash.org/wiki/GnuCash
|
||||
[13]: https://github.com/Gnucash
|
||||
[14]: https://wiki.gnucash.org/wiki/GnuCash#Getting_involved_in_the_GnuCash_project
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (Morisun029)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -0,0 +1,114 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Install All Essential Media Codecs in Ubuntu With This Single Command [Beginner’s Tip])
|
||||
[#]: via: (https://itsfoss.com/install-media-codecs-ubuntu/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
Install All Essential Media Codecs in Ubuntu With This Single Command [Beginner’s Tip]
|
||||
======
|
||||
|
||||
If you have just installed Ubuntu or one of the other [Ubuntu flavors][1] like Kubuntu or Lubuntu, you'll notice that your system doesn't play some audio or video files.
|
||||
|
||||
For video files, you can [install VLC on Ubuntu][2]. [VLC][3] is one of the [best video players for Linux][4] and can play almost any video file format. But you'll still have trouble with some audio files and the Flash player.
|
||||
|
||||
The good thing is that [Ubuntu][5] provides a single package to install all the essential media codecs: ubuntu-restricted-extras.
|
||||
|
||||
![][6]
|
||||
|
||||
### What is Ubuntu Restricted Extras?
|
||||
|
||||
The ubuntu-restricted-extras package consists of various essential pieces of software like the Flash plugin, [unrar][7], [gstreamer][8], MP4 codecs, codecs for the [Chromium browser in Ubuntu][9], and so on.
|
||||
|
||||
Since this software is not open source and some of it is encumbered by software patents, Ubuntu doesn't install it by default. You'll have to use the multiverse repository, the software repository Ubuntu created specifically to provide non-open-source software to its users.
|
||||
|
||||
Please read this article to [learn more about various Ubuntu repositories][10].
|
||||
|
||||
### How to install Ubuntu Restricted Extras?
|
||||
|
||||
I find it surprising that the software center doesn't list Ubuntu Restricted Extras. In any case, you can install the package using the command line, and it's very simple.
|
||||
|
||||
Open a terminal by searching for it in the menu or using the [terminal keyboard shortcut Ctrl+Alt+T][11].
|
||||
|
||||
Since the ubuntu-restricted-extras package is available in the multiverse repository, you should verify that the multiverse repository is enabled on your system:
|
||||
|
||||
```
|
||||
sudo add-apt-repository multiverse
|
||||
```
|
||||
|
||||
And then you can install it in Ubuntu default edition using this command:
|
||||
|
||||
```
|
||||
sudo apt install ubuntu-restricted-extras
|
||||
```
|
||||
|
||||
When you enter the command, you’ll be asked to enter your password. When _**you type the password, nothing is displayed on the screen**_. That’s normal. Type your password and press enter.
|
||||
|
||||
It will show a huge list of packages to be installed. Press enter to confirm your selection when it asks.
|
||||
|
||||
You’ll also encounter an [EULA][12] (End User License Agreement) screen like this:
|
||||
|
||||
![Press Tab key to select OK and press Enter key][13]
|
||||
|
||||
It could be overwhelming to navigate this screen but don’t worry. Just press tab and it will highlight the options. When the correct options are highlighted, press enter to confirm your selection.
|
||||
|
||||
![Press Tab key to highlight Yes and press Enter key][14]
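If you're installing from a script and want to skip the interactive EULA screen entirely, you can usually pre-accept it with debconf before running the install. This is a common approach, but the exact debconf question can vary between Ubuntu releases, so treat it as a starting point:

```
echo ttf-mscorefonts-installer msttcorefonts/accepted-mscorefonts-eula select true | sudo debconf-set-selections
sudo apt install -y ubuntu-restricted-extras
```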
|
||||
|
||||
Once the process finishes, you should be able to play MP3 and other media formats thanks to newly installed media codecs.
|
||||
|
||||
##### Installing restricted extra package on Kubuntu, Lubuntu, Xubuntu
|
||||
|
||||
Do keep in mind that Kubuntu, Lubuntu, and Xubuntu have this package available under their own respective names. They could have just used the same name, but unfortunately they don't.
|
||||
|
||||
On Kubuntu, use this command:
|
||||
|
||||
```
|
||||
sudo apt install kubuntu-restricted-extras
|
||||
```
|
||||
|
||||
On Lubuntu, use:
|
||||
|
||||
```
|
||||
sudo apt install lubuntu-restricted-extras
|
||||
```
|
||||
|
||||
On Xubuntu, you should use:
|
||||
|
||||
```
|
||||
sudo apt install xubuntu-restricted-extras
|
||||
```
|
||||
|
||||
I always recommend getting ubuntu-restricted-extras as one of the [essential things to do after installing Ubuntu][15]. It’s good to have a single command to install multiple codecs in Ubuntu.
|
||||
|
||||
I hope you like this quick tip in the Ubuntu beginner series. I’ll share more such tips in the future.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/install-media-codecs-ubuntu/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/which-ubuntu-install/
|
||||
[2]: https://itsfoss.com/install-latest-vlc/
|
||||
[3]: https://www.videolan.org/index.html
|
||||
[4]: https://itsfoss.com/video-players-linux/
|
||||
[5]: https://ubuntu.com/
|
||||
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/Media_Codecs_in_Ubuntu.png?ssl=1
|
||||
[7]: https://itsfoss.com/use-rar-ubuntu-linux/
|
||||
[8]: https://gstreamer.freedesktop.org/
|
||||
[9]: https://itsfoss.com/install-chromium-ubuntu/
|
||||
[10]: https://itsfoss.com/ubuntu-repositories/
|
||||
[11]: https://itsfoss.com/ubuntu-shortcuts/
|
||||
[12]: https://en.wikipedia.org/wiki/End-user_license_agreement
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/installing_ubuntu_restricted_extras.jpg?ssl=1
|
||||
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/installing_ubuntu_restricted_extras_1.jpg?ssl=1
|
||||
[15]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/
|
sources/tech/20200210 Music composition with Python and Linux.md
@ -0,0 +1,109 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Music composition with Python and Linux)
|
||||
[#]: via: (https://opensource.com/article/20/2/linux-open-source-music)
|
||||
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
|
||||
|
||||
Music composition with Python and Linux
|
||||
======
|
||||
A chat with Mr. MAGFest—Brendan Becker.
|
||||
![Wires plugged into a network switch][1]
|
||||
|
||||
I met Brendan Becker working in a computer store in 1999. We both enjoyed building custom computers and installing Linux on them. Brendan was always involved in several technology projects at once, ranging from game coding to music composition. Fast-forwarding a few years from the days of computer stores, he went on to write [pyDance][2], an open source implementation of multiple dancing games, and then became the CEO of music and gaming event [MAGFest][3]. Sometimes referred to as "Mr. MAGFest" because he was at the helm of the event, Brendan now uses the music pseudonym "[Inverse Phase][4]" as a composer of chiptunes—music predominantly made on 8-bit computers and game consoles.
|
||||
|
||||
I thought it would be interesting to interview him and ask some specifics about how he has benefited from Linux and open source software throughout his career.
|
||||
|
||||
![Inverse Phase performance photo][5]
|
||||
|
||||
Copyright Nickeledge, CC BY-SA 2.0.
|
||||
|
||||
### Alan Formy-Duval: How did you get started in computers and software?
|
||||
|
||||
Brendan Becker: There's been a computer in my household almost as far back as I can remember. My dad has fervently followed technology; he brought home a Compaq Portable when they first hit the market, and when he wasn't doing work on it, I would have access to it. Since I began reading at age two, using a computer became second nature to me—just read what it said on the disk, follow the instructions, and I could play games! Some of the time I would be playing with learning and education software, and we had a few disks full of games that I could play other times. I remember a single disk with a handful of free clones of popular titles. Eventually, my dad showed me that we could call other computers (BBS'ing at age 5!), and I saw where some of the games came from. One of the games I liked to play was written in BASIC, and all bets were off when I realized that I could simply modify the game by just reading a few things and re-typing them to make my game easier.
|
||||
|
||||
### Formy-Duval: This was the 1980s?
|
||||
|
||||
Becker: The Compaq Portable dropped in 1983 to give you a frame of reference. My dad had one of the first of that model.
|
||||
|
||||
### Formy-Duval: How did you get into Linux and open source software?
|
||||
|
||||
Becker: I was heavy into MODs and demoscene stuff in the early 90s, and I noticed that Walnut Creek ([cdrom.com][6]; now defunct) ran shop on FreeBSD. I was super curious about Unix and other operating systems in general, but didn't have much firsthand exposure, and thought it might be time to finally try something. DOOM had just released, and someone told me I might even be able to get it to run. Between that and being able to run cool internet servers, I started going down the rabbit hole. Someone saw me reading about FreeBSD and suggested I check out Linux, this new OS that was written from the ground up for x86, unlike BSD, which (they said) had some issues with compatibility. So, I joined #linuxhelp on undernet IRC and asked how to get started with Linux, pointing out that I had done a modicum of research (asking "what's the difference between Red Hat and Slackware?") and probing mainly about what would be easiest to use. The only person talking in the channel said that he was 13 years old and he could figure out Slackware, so I should not have an issue. A math teacher in my school gave me a hard disk, I downloaded the "A" disk sets and a boot disk, wrote it out, installed it, and didn't spend much time looking back.
|
||||
|
||||
### Formy-Duval: How'd you become known as Mr. MAGFest?
|
||||
|
||||
Becker: Well, this one is pretty easy. I became the acting head of MAGFest almost immediately after the first event. The former chairpeople were all going their separate ways, and I demanded the event not be canceled to the guy in charge. The solution was to run it myself, and that nickname became mine as I slowly molded the event into my own.
|
||||
|
||||
### Formy-Duval: I remember attending in those early days. How large did MAGFest eventually become?
|
||||
|
||||
Becker: The first MAGFest was 265 people. It is now a scary huge 20,000+ unique attendees.
|
||||
|
||||
### Formy-Duval: That is huge! Can you briefly describe the MAGFest convention?
|
||||
|
||||
Becker: One of my buddies, Hex, described it really well. He said, "It's like this video-game themed birthday party with all of your friends, but there happen to be a few thousand people there, and all of them can be your friends if you want, and then there are rock concerts." This was quickly adopted and shortened to "It's a four-day video game party with multiple video game rock concerts." Often the phrase "music and gaming festival" is all people need to get the idea.
|
||||
|
||||
### Formy-Duval: How did you make use of open source software in running MAGFest?
|
||||
|
||||
Becker: At the time I became the head of MAGFest, I had already written a game in Python, so I felt most comfortable also writing our registration system in Python. It was a pretty easy decision since there were no costs involved, and I already had the experience. Later on, our online registration system and rideshare interfaces were written in PHP/MySQL, and we used Kboard for our forums. Eventually, this evolved to us rolling our own registration system from scratch in Python, which we also use at the event, and running Drupal on the main website. At one point, I also wrote a system to manage the video room and challenge stations in Python. Oh, and we had a few game music listening stations that you could flip through tracks and liner notes of iconic game OSTs (Original Sound Tracks) and bands who played MAGFest.
|
||||
|
||||
### Formy-Duval: I understand that a few years ago you reduced your responsibilities at MAGFest to pursue new projects. What was your next endeavor?
|
||||
|
||||
Becker: I was always rather heavily into the game music scene and tried to bring as much of it to MAGFest as possible. As I became a part of those communities more and more, I wanted to participate. I wrote some medleys, covers, and arrangements of video game tunes using free, open source versions of the DOS and Windows demoscene tools that I had used before, which were also free but not necessarily open source. I released a few tracks in the first few years of running MAGFest, and then after some tough love and counseling from Jake Kaufman (also known as virt; Shovel Knight and Shantae are on his resume, among others), I switched gears to something I was much better at—chiptunes. Even though I had written PC speaker beeps and boops as a kid with my Compaq Portable and MOD files in the demoscene throughout the 90s, I released the first NES-spec track that I was truly proud to call my own in 2006. Several pop tributes and albums followed.
|
||||
|
||||
In 2010, I was approached by multiple individuals for game soundtrack work. Even though the soundtrack work didn't affect it much, I began to scale back some of my responsibilities with MAGFest more seriously, and in 2011, I decided to take a much larger step into the background. I would stay on the board as a consultant and help people learn what they needed to run their departments, but I was no longer at the helm. At the same time, my part-time job, which was paying the bills, laid off all of their workers, and I suddenly found myself with a lot of spare time. I began writing Pretty Eight Machine, a Nine Inch Nails tribute, which I spent over a year on, and between that and the game soundtrack work, I proved to myself that I could put food on the table (if only barely) with music, and this is what I wanted to do next.
|
||||
|
||||
|
||||
|
||||
![Inverse Phase CTM Tracker][7]
|
||||
|
||||
Copyright Inverse Phase, Used with permission.
|
||||
|
||||
### Formy-Duval: What is your workspace like in terms of hardware and software?
|
||||
|
||||
Becker: In my DOS/Windows days, I used mostly FastTracker 2. In Linux, I replaced that with SoundTracker (not Karsten Obarski's original, but a GTK rewrite; see [soundtracker.org][8]). These days, SoundTracker is in a state of flux—although I still need to try the new GTK3 version—but [MilkyTracker][9] is a good replacement when I can't use SoundTracker. Good old FastTracker 2 also runs in DOSBox, if I really need the original. This was when I started using Linux, however, so this is stuff I figured out 20-25 years ago.
|
||||
|
||||
Within the last ten years, I've gravitated away from sample-based music and towards chiptunes—music synthesized by old sound chips from 8- and 16-bit game systems and computers. There is a very good cross-platform tool called [Deflemask][10] to write music for a lot of these systems. A few of the systems I want to write music for aren't supported, though, and Deflemask is closed source, so I've begun building my own music composition environment from scratch with Python and [Pygame][11]. I maintain my code tree using Git and will control hardware synthesizer boards using open source [KiCad][12].
|
||||
|
||||
### Formy-Duval: What projects are you currently focused on?
|
||||
|
||||
Becker: I work on game soundtracks and music commissions off and on. While that's going on, I've also been working on starting an electronic entertainment museum called [Bloop][13]. We're doing a lot of cool things with archival and inventory, but perhaps the most exciting thing is that we've been building exhibits with Raspberry Pis. They're so versatile, and it's weird to think that, if I had tried to do this even ten years ago, I wouldn't have had small single-board computers to drive my exhibits; I probably would have bolted a laptop to the back of a flat-panel!
|
||||
|
||||
### Formy-Duval: There are a lot more game platforms coming to Linux now, such as Steam, Lutris, and Play-on-Linux. Do you think this trend will continue, and these are here to stay?
|
||||
|
||||
Becker: As someone who's been gaming on Linux for 25 years—in fact, I was brought to Linux _because_ of games—I think I find this question harder than most people would. I've been running Linux-native games for decades, and I've even had to eat my "either a Linux solution exists or can be written" words back in the day, but eventually, I did, and wrote a Linux game.
|
||||
|
||||
Real talk: Android's been out since 2008. If you've played a game on Android, you've played a game on Linux. Steam's been available for Linux for eight years. The Steambox/SteamOS was announced only a year after Steam. I don't hear a whole lot about Lutris or Play-on-Linux, but I know they exist and hope they succeed. I do see a pretty big following for GOG, and I think that's pretty neat. I see a lot of quality game ports coming out of people like Ryan Gordon (icculus) and Ethan Lee (flibitijibibo), and some companies even port in-house. Game engines like Unity and Unreal already support Linux. Valve has incorporated Proton into the Linux version of Steam for something like two years now, so now Linux users don't even have to search for Linux-native versions of their games.
|
||||
|
||||
I can say that I think most gamers expect and will continue to expect the level of support they're already receiving from the retail game market. Personally, I hope that level goes up and not down!
|
||||
|
||||
_Learn more about Brendan's work as [Inverse Phase][14]._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/linux-open-source-music
|
||||
|
||||
作者:[Alan Formy-Duval][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/alanfdoss
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other21x_cc.png?itok=JJJ5z6aB (Wires plugged into a network switch)
|
||||
[2]: http://icculus.org/pyddr/
|
||||
[3]: http://magfest.org/
|
||||
[4]: http://www.inversephase.com/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/inverse_phase_performance_bw.png (Inverse Phase performance photo)
|
||||
[6]: https://en.wikipedia.org/wiki/Walnut_Creek_CDROM
|
||||
[7]: https://opensource.com/sites/default/files/uploads/inversephase_ctm_tracker_screenshot.png (Inverse Phase CTM Tracker)
|
||||
[8]: http://soundtracker.org
|
||||
[9]: http://www.milkytracker.org
|
||||
[10]: http://www.deflemask.com
|
||||
[11]: http://www.pygame.org
|
||||
[12]: http://www.kicad-pcb.org
|
||||
[13]: http://bloopmuseum.com
|
||||
[14]: https://www.inversephase.com
|
@ -0,0 +1,118 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Playing Music on your Fedora Terminal with MPD and ncmpcpp)
|
||||
[#]: via: (https://fedoramagazine.org/playing-music-on-your-fedora-terminal-with-mpd-and-ncmpcpp/)
|
||||
[#]: author: (Carmine Zaccagnino https://fedoramagazine.org/author/carzacc/)
|
||||
|
||||
Playing Music on your Fedora Terminal with MPD and ncmpcpp
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
MPD, as the name implies, is a Music Playing Daemon. It can play music but, being a daemon, any piece of software can interface with it and play sounds, including some CLI clients.
|
||||
|
||||
One of them is called _ncmpcpp_, which is an improvement over the pre-existing _ncmpc_ tool. The name change doesn’t have much to do with the language they’re written in: they’re both C++, but _ncmpcpp_ is called that because it’s the _NCurses Music Playing Client_ _Plus Plus_.
|
||||
|
||||
### Installing MPD and ncmpcpp
|
||||
|
||||
The _ncmpcpp_ client can be installed from the official Fedora repositories with DNF directly with
|
||||
|
||||
```
|
||||
$ sudo dnf install ncmpcpp
|
||||
```
|
||||
|
||||
On the other hand, MPD has to be installed from the RPMFusion _free_ repositories, which you can enable, [as per the official installation instructions][2], by running
|
||||
|
||||
```
|
||||
$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
|
||||
```
|
||||
|
||||
and then you can install MPD by running
|
||||
|
||||
```
|
||||
$ sudo dnf install mpd
|
||||
```
|
||||
|
||||
### Configuring and Starting MPD
|
||||
|
||||
The most painless way to set up MPD is to run it as a regular user. The default is to run it as the dedicated _mpd_ user, but that causes all sorts of issues with permissions.
|
||||
|
||||
Before we can run it, we need to create a local config file that will allow it to run as a regular user.
|
||||
|
||||
To do that, create a subdirectory called _mpd_ in _~/.config_:
|
||||
|
||||
```
|
||||
$ mkdir ~/.config/mpd
|
||||
```
|
||||
|
||||
copy the default config file into this directory:
|
||||
|
||||
```
|
||||
$ cp /etc/mpd.conf ~/.config/mpd
|
||||
```
|
||||
|
||||
and then edit it with a text editor like _vim_, _nano_ or _gedit_:
|
||||
|
||||
```
|
||||
$ nano ~/.config/mpd/mpd.conf
|
||||
```
|
||||
|
||||
I recommend you read through all of it to check if there’s anything you need to do, but for most setups you can delete everything and just leave the following:
|
||||
|
||||
```
|
||||
db_file "~/.config/mpd/mpd.db"
|
||||
log_file "syslog"
|
||||
```
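Depending on your MPD version, it may or may not find your music through the XDG default directory. If it doesn't, add a music_directory line as well (assuming your collection lives in ~/Music; adjust the path to taste):

```
music_directory "~/Music"
db_file "~/.config/mpd/mpd.db"
log_file "syslog"
```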
|
||||
|
||||
At this point you should be able to just run
|
||||
|
||||
```
|
||||
$ mpd
|
||||
```
|
||||
|
||||
with no errors, which will start the MPD daemon in the background.
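Alternatively, if the Fedora package ships a systemd user unit for MPD (it usually does; check with _systemctl --user list-unit-files mpd.service_), you can let systemd start and supervise the daemon instead of launching it by hand:

```
$ systemctl --user enable --now mpd
```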
|
||||
|
||||
### Using ncmpcpp
|
||||
|
||||
Simply run
|
||||
|
||||
```
|
||||
$ ncmpcpp
|
||||
```
|
||||
|
||||
and you’ll see a ncurses-powered graphical user interface in your terminal.
|
||||
|
||||
Press _4_ and you should see your local music library; you can change the selection using the arrow keys and press _Enter_ to play a song.
|
||||
|
||||
Doing this multiple times will create a _playlist_, which allows you to move to the next track using the _>_ button (not the right arrow, the _>_ closing angle bracket character) and go back to the previous track with _<_. The + and – buttons increase and decrease volume. The _Q_ button quits ncmpcpp but it doesn’t stop the music. You can play and pause with _P_.
|
||||
|
||||
You can see the current playlist by pressing the _1_ button (this is the default view). From this view you can press _i_ to look at the information (tags) about the current song. You can change the tags of the currently playing (or paused) song by pressing _6_.
|
||||
|
||||
Pressing the \ button will add (or remove) an informative panel at the top of the view. In the top left, you should see something that looks like this:
|
||||
|
||||
```
|
||||
[------]
|
||||
```
|
||||
|
||||
Pressing the _r_, _z_, _y_, _R_, _x_ buttons will respectively toggle the _repeat_, _random_, _single_, _consume_ and _crossfade_ playback modes and will replace one of the _–_ characters in that little indicator to the initial of the selected mode.
|
||||
|
||||
Pressing the _F1_ button will display some help text, which contains a list of keybindings, so there’s no need to write a complete list here. So now go on, be geeky, and play all your music from your terminal!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/playing-music-on-your-fedora-terminal-with-mpd-and-ncmpcpp/
|
||||
|
||||
作者:[Carmine Zaccagnino][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/carzacc/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2020/02/play_music_mpd-816x346.png
|
||||
[2]: https://rpmfusion.org/Configuration
|
sources/tech/20200210 Scan Kubernetes for errors with KRAWL.md
@ -0,0 +1,222 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Scan Kubernetes for errors with KRAWL)
|
||||
[#]: via: (https://opensource.com/article/20/2/kubernetes-scanner)
|
||||
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakar)
|
||||
|
||||
Scan Kubernetes for errors with KRAWL
|
||||
======
|
||||
The KRAWL script identifies errors in Kubernetes pods and containers.
|
||||
![Ship captain sailing the Kubernetes seas][1]
|
||||
|
||||
When you're running containers with Kubernetes, you often find that they pile up. This is by design. It's one of the advantages of containers: they're cheap to start whenever a new one is needed. You can use a front-end like OpenShift or OKD to manage pods and containers. Those make it easy to visualize what you have set up, and have a rich set of commands for quick interactions.
|
||||
|
||||
If a platform to manage containers doesn't fit your requirements, though, you can also get that information using only a Kubernetes toolchain, but there are a lot of commands you need for a full overview of a complex environment. For that reason, I wrote [KRAWL][2], a simple script that scans pods and containers under the namespaces on Kubernetes clusters and displays the output of events, if any are found. It can also be used as Kubernetes plugin for the same purpose. It's a quick and easy way to get a lot of useful information.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
* You must have kubectl installed.
|
||||
* Your cluster's kubeconfig must be either in its default location ($HOME/.kube/config) or exported (KUBECONFIG=/path/to/kubeconfig).
|
||||
|
||||
|
||||
|
||||
### Usage
|
||||
|
||||
|
||||
```
|
||||
$ ./krawl
|
||||
```
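If your kubeconfig isn't in the default location, you can either export KUBECONFIG first or pass the path as the script's only argument (the script exports it for you, as handled near the bottom of the source); both paths below are placeholders:

```
$ export KUBECONFIG=/path/to/kubeconfig
$ ./krawl

$ ./krawl /path/to/kubeconfig
```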
|
||||
|
||||
![KRAWL script][3]
|
||||
|
||||
### The script
|
||||
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
# AUTHOR: Abhishek Tamrakar
|
||||
# EMAIL: [abhishek.tamrakar08@gmail.com][4]
|
||||
# LICENSE: Copyright (C) 2018 Abhishek Tamrakar
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# <http://www.apache.org/licenses/LICENSE-2.0>
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
##
|
||||
#define the variables
|
||||
KUBE_LOC=~/.kube/config
|
||||
#define variables
|
||||
KUBECTL=$(which kubectl)
|
||||
GET=$(which egrep)
|
||||
AWK=$(which awk)
|
||||
red=$(tput setaf 1)
|
||||
normal=$(tput sgr0)
|
||||
# define functions
|
||||
|
||||
# wrapper for printing info messages
|
||||
info()
|
||||
{
|
||||
printf '\n\e[34m%s\e[m: %s\n' "INFO" "$@"
|
||||
}
|
||||
|
||||
# cleanup when all done
|
||||
cleanup()
|
||||
{
|
||||
rm -f results.csv
|
||||
}
|
||||
|
||||
# just check if the command we are about to call is available
|
||||
checkcmd()
|
||||
{
|
||||
#check if command exists
|
||||
local cmd=$1
|
||||
if [ -z "${!cmd}" ]
|
||||
then
|
||||
printf '\n\e[31m%s\e[m: %s\n' "ERROR" "check if $1 is installed !!!"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
get_namespaces()
|
||||
{
|
||||
#get namespaces
|
||||
namespaces=( \
|
||||
$($KUBECTL get namespaces --ignore-not-found=true | \
|
||||
$AWK '/Active/ {print $1}' \
|
||||
ORS=" ") \
|
||||
)
|
||||
#exit if namespaces are not found
|
||||
if [ ${#namespaces[@]} -eq 0 ]
|
||||
then
|
||||
printf '\n\e[31m%s\e[m: %s\n' "ERROR" "No namespaces found!!"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
#get events for pods in errored state
|
||||
get_pod_events()
|
||||
{
|
||||
printf '\n'
|
||||
if [ ${#ERRORED[@]} -ne 0 ]
|
||||
then
|
||||
info "${#ERRORED[@]} errored pods found."
|
||||
for CULPRIT in ${ERRORED[@]}
|
||||
do
|
||||
info "POD: $CULPRIT"
|
||||
info
|
||||
$KUBECTL get events \
|
||||
--field-selector=involvedObject.name=$CULPRIT \
|
||||
-ocustom-columns=LASTSEEN:.lastTimestamp,REASON:.reason,MESSAGE:.message \
|
||||
--all-namespaces \
|
||||
--ignore-not-found=true
|
||||
done
|
||||
else
|
||||
info "0 pods with errored events found."
|
||||
fi
|
||||
}
|
||||
|
||||
#define the logic
|
||||
get_pod_errors()
|
||||
{
|
||||
printf "%s %s %s\n" "NAMESPACE,POD_NAME,CONTAINER_NAME,ERRORS" > results.csv
|
||||
printf "%s %s %s\n" "---------,--------,--------------,------" >> results.csv
|
||||
for NAMESPACE in ${namespaces[@]}
|
||||
do
|
||||
while IFS=' ' read -r POD CONTAINERS
|
||||
do
|
||||
for CONTAINER in ${CONTAINERS//,/ }
|
||||
do
|
||||
COUNT=$($KUBECTL logs --since=1h --tail=20 $POD -c $CONTAINER -n $NAMESPACE 2>/dev/null| \
|
||||
$GET -c '^error|Error|ERROR|Warn|WARN')
|
||||
if [ $COUNT -gt 0 ]
|
||||
then
|
||||
STATE=("${STATE[@]}" "$NAMESPACE,$POD,$CONTAINER,$COUNT")
|
||||
else
|
||||
#catch pods in errored state
|
||||
ERRORED=($($KUBECTL get pods -n $NAMESPACE --no-headers=true | \
|
||||
awk '!/Running/ {print $1}' ORS=" ") \
|
||||
)
|
||||
fi
|
||||
done
|
||||
done< <($KUBECTL get pods -n $NAMESPACE --ignore-not-found=true -o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name --no-headers=true)
|
||||
done
|
||||
printf "%s\n" ${STATE[@]:-None} >> results.csv
|
||||
STATE=()
|
||||
}
|
||||
#define usage for separate run
|
||||
usage()
|
||||
{
|
||||
cat << EOF
|
||||
|
||||
USAGE: "${0##*/} </path/to/kube-config>(optional)"
|
||||
|
||||
This program is a free software under the terms of Apache 2.0 License.
|
||||
COPYRIGHT (C) 2018 Abhishek Tamrakar
|
||||
|
||||
EOF
|
||||
exit 0
|
||||
}
|
||||
|
||||
#check if basic commands are found
|
||||
trap cleanup EXIT
|
||||
checkcmd KUBECTL
|
||||
#
|
||||
#set the ground
|
||||
if [ $# -lt 1 ]; then
|
||||
if [ ! -e ${KUBE_LOC} -a ! -s ${KUBE_LOC} ]
|
||||
then
|
||||
info "A readable kube config location is required!!"
|
||||
usage
|
||||
fi
|
||||
elif [ $# -eq 1 ]
|
||||
then
|
||||
export KUBECONFIG=$1
|
||||
elif [ $# -gt 1 ]
|
||||
then
|
||||
usage
|
||||
fi
|
||||
#play
|
||||
get_namespaces
|
||||
get_pod_errors
|
||||
|
||||
printf '\n%40s\n' 'KRAWL'
|
||||
printf '%s\n' '---------------------------------------------------------------------------------'
|
||||
printf '%s\n' ' Krawl is a command line utility to scan pods and prints name of errored pods '
|
||||
printf '%s\n\n' ' +and containers within. To use it as kubernetes plugin, please check their page '
|
||||
printf '%s\n' '================================================================================='
|
||||
|
||||
cat results.csv | sed 's/,/,|/g'| column -s ',' -t
|
||||
get_pod_events
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
_This was originally published as the README in [KRAWL's GitHub repository][2] and is reused with permission._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/kubernetes-scanner
|
||||
|
||||
作者:[Abhishek Tamrakar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/tamrakar
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
|
||||
[2]: https://github.com/abhiTamrakar/kube-plugins/tree/master/krawl
|
||||
[3]: https://opensource.com/sites/default/files/uploads/krawl_0.png (KRAWL script)
|
||||
[4]: mailto:abhishek.tamrakar08@gmail.com
|
@ -0,0 +1,100 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (HankChow)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Top hacks for the YaCy open source search engine)
|
||||
[#]: via: (https://opensource.com/article/20/2/yacy-search-engine-hacks)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
Top hacks for the YaCy open source search engine
|
||||
======
|
||||
Rather than adapting to someone else's vision, customize your search engine for the internet you want with YaCy.
|
||||
![Browser of things][1]
|
||||
|
||||
In my article about [getting started with YaCy][2], I explained how to install and start using the [YaCy][3] peer-to-peer search engine. One of the most exciting things about YaCy, however, is the fact that it's a local client. Each user owns and operates a node in a globally distributed search engine infrastructure, which means each user is in full control of how they navigate and experience the World Wide Web.
|
||||
|
||||
For instance, Google used to provide the URL google.com/linux as a shortcut to filter searches for Linux-related topics. It was a small feature that many people found useful, but [topical shortcuts were dropped][4] in 2011.
|
||||
|
||||
YaCy makes it possible to customize your search experience.
|
||||
|
||||
### Customize YaCy
|
||||
|
||||
Once you've installed YaCy, navigate to your search page at **localhost:8090**. To customize your search engine, click the **Administration** button in the top-right corner (it may be concealed in a menu icon on small screens).
|
||||
|
||||
The admin panel allows you to configure how YaCy uses your system resources and how it interacts with other YaCy clients.
|
||||
|
||||
![YaCy profile selector][5]
|
||||
|
||||
For instance, to configure an alternative port and set RAM and disk usage, use the **First steps** menu in the sidebar. To monitor YaCy activity, use the **Monitoring** panel. Most features are discoverable by clicking through the panels, but here are some of my favorites.
|
||||
|
||||
### Search appliance
|
||||
|
||||
Several companies have offered [intranet search appliances][6], but with YaCy, you can implement it for free. Whether you want to search through your own data or to implement a search system for local file shares at your business, you can choose to run YaCy as an internal indexer for files accessible over HTTP, FTP, and SMB (Samba). People in your local network can use your personalized instance of YaCy to find shared files, and none of the data is shared with users outside your network.
|
||||
|
||||
### Network configuration
|
||||
|
||||
YaCy favors isolation and privacy by default. You can adjust how you connect to the peer-to-peer network in the **Network Configuration** panel, which is revealed by clicking the link located at the top of the **Use Case & Account** configuration screen.
|
||||
|
||||
![YaCy network configuration][7]
|
||||
|
||||
### Crawl a site
|
||||
|
||||
Peer-to-peer indexing is user-driven. There's no mega-corporation initiating searches on every accessible page on the internet, so a site isn't indexed until someone deliberately crawls it with YaCy.
|
||||
|
||||
The YaCy client provides two ways for you to help crawl the web: you can perform a manual crawl, and you can make YaCy available for suggested crawls.
|
||||
|
||||
![YaCy advanced crawler][8]
|
||||
|
||||
#### Start a manual crawling job
|
||||
|
||||
A manual crawl is when you enter the URL of a site you want to index and start a YaCy crawl job. To do this, click the **Advanced Crawler** link in the **Production** sidebar. Enter one or more URLs, then scroll to the bottom of the page and enable the **Do remote indexing** option. This enables your client to broadcast the URLs it is indexing, so clients that have opted to accept requests can help you perform the crawl.
|
||||
|
||||
To start the crawl, click the **Start New Crawl Job** button at the bottom of the page. I use this method to index sites I use frequently or find useful.
|
||||
|
||||
Once the crawl job starts, YaCy indexes the URLs you enter and stores the index on your local machine. As long as you are running in senior mode (meaning your firewall permits incoming and outgoing traffic on port 8090), your index is available to YaCy users all over the globe.
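How you open that port depends on your firewall. On a distribution running firewalld, for example, it might look like this (8090 assumes you kept YaCy's default port; use ufw or your firewall of choice accordingly):

```
$ sudo firewall-cmd --permanent --add-port=8090/tcp
$ sudo firewall-cmd --reload
```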
|
||||
|
||||
#### Join in on a crawl
|
||||
|
||||
While some very dedicated YaCy senior users may crawl the internet compulsively, there are a _lot_ of sites out there in the world. It might seem impossible to match the resources of popular spiders and bots, but because YaCy has so many users, they can band together as a community to index more of the internet than any one user could do alone. If you activate YaCy to broadcast requests for site crawls, participating clients can work together to crawl sites you might not otherwise think to crawl manually.
|
||||
|
||||
To configure your client to accept jobs from others, click the **Advanced Crawler** link in the left sidebar menu. In the **Advanced Crawler** panel, click the **Remote Crawling** link under the **Network Harvesting** heading at the top of the page. Enable remote crawls by placing a tick in the checkbox next to the **Load** setting.
|
||||
|
||||
![YaCy remote crawling][9]
|
||||
|
||||
### YaCy monitoring and more
|
||||
|
||||
YaCy is a surprisingly robust search engine, providing you with the opportunity to theme and refine your experience in nearly any way you could want. You can monitor the activity of your YaCy client in the **Monitoring** panel, so you can get an idea of how many people are benefiting from the work of the YaCy community and also see what kind of activity it's generating for your computer and network.
|
||||
|
||||
![YaCy monitoring screen][10]
|
||||
|
||||
### Search engines make a difference
|
||||
|
||||
The more time you spend with the Administration screen, the more fun it becomes to ponder how the search engine you use can change your perspective. Your experience of the internet is shaped by the results you get back for even the simplest of queries. You might notice, in fact, how different one person's "internet" is from another person's when you talk to computer users from a different industry. For some people, the web is littered with ads and promoted searches and suffers from the tunnel vision of learned responses to queries. For instance, if someone consistently searches for answers about X, most commercial search engines will give weight to query responses that concern X. That's a useful feature on the one hand, but it occludes answers that require Y, even though that might be the better solution for a specific task.
|
||||
|
||||
As in real life, stepping outside a manufactured view of the world can be healthy and enlightening. Try YaCy, and see what you discover.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/yacy-search-engine-hacks
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things)
|
||||
[2]: https://opensource.com/article/20/2/open-source-search-engine
|
||||
[3]: https://yacy.net/
|
||||
[4]: https://www.linuxquestions.org/questions/linux-news-59/is-there-no-more-linux-google-884306/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/yacy-profiles.jpg (YaCy profile selector)
|
||||
[6]: https://en.wikipedia.org/wiki/Vivisimo
|
||||
[7]: https://opensource.com/sites/default/files/uploads/yacy-network-config.jpg (YaCy network configuration)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/yacy-advanced-crawler.jpg (YaCy advanced crawler)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/yacy-remote-crawl-accept.jpg (YaCy remote crawling)
|
||||
[10]: https://opensource.com/sites/default/files/uploads/yacy-monitor.jpg (YaCy monitoring screen)
|
@ -0,0 +1,104 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Dino is a Modern Looking Open Source XMPP Client)
|
||||
[#]: via: (https://itsfoss.com/dino-xmpp-client/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Dino is a Modern Looking Open Source XMPP Client
|
||||
======
|
||||
|
||||
_**Brief: Dino is a relatively new open-source XMPP client that tries to offer a good user experience while encouraging privacy-focused users to utilize XMPP for messaging.**_
|
||||
|
||||
### Dino: An Open Source XMPP Client
|
||||
|
||||
![][1]
|
||||
|
||||
[XMPP][2] (Extensible Messaging and Presence Protocol) is a decentralized networking model that facilitates instant messaging and collaboration. Decentralized means there is no central server that has access to your data; communication happens directly between the endpoints.
|
||||
|
||||
Some of us might call it “old school” tech, probably because XMPP clients usually have a poor user experience, or simply because it takes time to get used to (or to set up).
|
||||
|
||||
That’s when [Dino][3] comes to the rescue as a modern XMPP client to provide a clean and snappy user experience without compromising your privacy.
|
||||
|
||||
### The User Experience
|
||||
|
||||
![][4]
|
||||
|
||||
Dino does try to improve the user experience as an XMPP client but it is worth noting that the look and feel of it will depend on your Linux distribution to some extent. Your icon theme or the gnome theme might make it look better or worse for your personal experience.
|
||||
|
||||
Technically, the user interface is quite simple and easy to use. So, I suggest you take a look at some of the [best icon themes][5] and [GNOME themes][6] for Ubuntu to tweak the look of Dino.
|
||||
|
||||
### Features of Dino
|
||||
|
||||
![Dino Screenshot][7]
|
||||
|
||||
You can expect to use Dino as an alternative to Slack, [Signal][8] or [Wire][9] for your business or personal usage.
|
||||
|
||||
It offers all of the essential features you would need in a messaging application. Let's take a look at the things you can expect from it:
|
||||
|
||||
* Decentralized Communication
|
||||
  * Public XMPP servers supported if you cannot set up your own server
|
||||
  * UI similar to other popular messengers – so it's easy to use
|
||||
* Image & File sharing
|
||||
* Multiple accounts supported
|
||||
* Advanced message search
|
||||
* [OpenPGP][10] & [OMEMO][11] encryption supported
|
||||
* Lightweight native desktop application
|
||||
|
||||
|
||||
|
||||
### Installing Dino on Linux
|
||||
|
||||
You may or may not find it listed in your software center. Dino provides ready-to-use binaries for Debian (deb) and Fedora (rpm) based distributions.
|
||||
|
||||
**For Ubuntu:**
|
||||
|
||||
Dino is available in the universe repository on Ubuntu and you can install it using this command:
|
||||
|
||||
```
|
||||
sudo apt install dino-im
|
||||
```
|
||||
|
||||
Similarly, you can find packages for other Linux distributions on their [GitHub distribution packages page][12].
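If you are on one of those distributions, installation is usually a single package-manager command. The package names below are assumptions based on common community packaging and may differ from what the distribution packages page lists, so treat this only as a rough sketch:

```
# Hypothetical package names -- verify them on the GitHub distribution packages page first.
# Fedora (assuming the package is simply named "dino"):
sudo dnf install dino

# Arch Linux (assuming "dino" is available in the official repositories):
sudo pacman -S dino
```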
|
||||
|
||||
If you want the latest and greatest, you can also find both **.deb** and **.rpm** files for Dino (nightly builds) on [OpenSUSE’s software webpage][13].
|
||||
|
||||
In either case, head to their [GitHub page][14] or click on the link below to visit the official site.
|
||||
|
||||
[Download Dino][3]
|
||||
|
||||
**Wrapping Up**
|
||||
|
||||
It works quite well without any issues (at the time of writing, based on my quick testing). I’ll try exploring it further and hopefully cover more XMPP-centric articles to encourage users to use XMPP clients and servers for communication.
|
||||
|
||||
What do you think about Dino? Would you recommend another open-source XMPP client that’s potentially better than Dino? Let me know your thoughts in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/dino-xmpp-client/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-main.png?ssl=1
|
||||
[2]: https://xmpp.org/about/
|
||||
[3]: https://dino.im/
|
||||
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-xmpp-client.jpg?ssl=1
|
||||
[5]: https://itsfoss.com/best-icon-themes-ubuntu-16-04/
|
||||
[6]: https://itsfoss.com/best-gtk-themes/
|
||||
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-screenshot.png?ssl=1
|
||||
[8]: https://itsfoss.com/signal-messaging-app/
|
||||
[9]: https://itsfoss.com/wire-messaging-linux/
|
||||
[10]: https://www.openpgp.org/
|
||||
[11]: https://en.wikipedia.org/wiki/OMEMO
|
||||
[12]: https://github.com/dino/dino/wiki/Distribution-Packages
|
||||
[13]: https://software.opensuse.org/download.html?project=network:messaging:xmpp:dino&package=dino
|
||||
[14]: https://github.com/dino/dino
|
@ -0,0 +1,111 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (zhangxiangping)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (12 open source tools for natural language processing)
|
||||
[#]: via: (https://opensource.com/article/19/3/natural-language-processing-tools)
|
||||
[#]: author: (Dan Barker https://opensource.com/users/barkerd427)
|
||||
|
||||
12种自然语言处理的开源工具
|
||||
======
|
||||
|
||||
看看可以用在你自己NLP应用中的十几个工具吧。
|
||||
|
||||
![Chat bubbles][1]
|
||||
|
||||
在过去的几年里,自然语言处理(NLP)推动了聊天机器人、语音助手、文本预测等我们日常生活中常用的语音或文本应用技术的发展。目前有着各种各样的开源 NLP 工具,所以我决定调查一下当前开源的 NLP 工具,来帮助你制定开发下一个基于语音或文本的应用程序的计划。
|
||||
|
||||
我将从我所熟悉的编程语言出发来介绍这些工具,尽管我对这些工具本身并不都很熟悉(我没有去寻找那些我不熟悉的语言中的工具)。也就是说,出于各种原因,我还排除了三种我熟悉的语言中的工具。
|
||||
|
||||
R 语言没有被包含在内,因为我发现的大多数库都有一年多没有更新了。这并不一定意味着它们没有得到很好的维护,但我认为它们应该有更频繁的更新,才能和同一领域的其他工具竞争。我还选择了最有可能用在生产场景中的语言和工具(而不是学术界和研究中使用的),虽然我自己主要是把 R 当作研究和探索工具来用。
|
||||
|
||||
我发现 Scala 的很多库已经不再更新了。我上次使用 Scala 是在几年前,当时它非常流行,但是大多数库从那时起就没有再更新过,或者只有少数几个有更新。
|
||||
|
||||
最后,我排除了 C++。这主要是因为我所在的公司已经很久没有使用 C++ 来进行 NLP 或者任何数据科学的工作了。
|
||||
|
||||
### Python工具
|
||||
#### Natural Language Toolkit (NLTK)
|
||||
|
||||
[Natural Language Toolkit(NLTK)][2] 是我调研的所有工具中功能最完善的一个。它完整地实现了自然语言处理所需的大部分组件,比如分类、分词(tokenization)、词干提取、标注、句法解析和语义推理,而且每种功能都有多种不同的实现方式,你可以选择具体的算法和方法。它也支持多种语言。然而,它将所有的数据都表示为字符串的形式,对于一些简单的数据结构来说可能很方便,但是如果要使用一些高级的功能就可能有点困难。它的官方文档有点复杂,不过也有很多其他人编写的学习资料,比如[这本很棒的书][3]。和其他工具比起来,这个库的运行速度有点慢。但总的来说,这个工具包非常不错,适合用于需要组合特定算法的实验、探索和实际应用当中。
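如果想快速体验一下 NLTK,下面是一个最简单的安装和分词示例(假设你已经安装好 Python 3 和 pip;示例中下载的 `punkt` 分词模型和输出内容仅作演示,具体以你的环境为准):

```
# 安装 NLTK 并做一次最小化的分词测试
pip install nltk
python -c "import nltk; nltk.download('punkt'); print(nltk.word_tokenize('NLTK makes tokenization easy.'))"
```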
|
||||
|
||||
#### SpaCy
|
||||
|
||||
[SpaCy][4] 是 NLTK 的主要竞争者。在大多数情况下它都比 NLTK 的速度更快,但是对自然语言处理的各个功能组件,SpaCy 只提供单一的实现。SpaCy 把所有的东西都表示为对象而不是字符串,从而简化了构建应用的接口,也方便它与多种框架和数据科学工具集成,使得你更容易理解你的文本数据。然而,SpaCy 不像 NLTK 那样支持大量语言。它的接口简洁、可选项不多、文档很好,还为语言处理和分析的各个组件提供了多种神经网络模型。总的来说,对于不需要特定算法、要构建新的生产应用的场景,这是一个很不错的工具。
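下面是一个粗略的 spaCy 上手示意(假设已安装 Python 3 和 pip;`en_core_web_sm` 是其官方提供的小型英文模型,示例仅作演示):

```
# 安装 spaCy、下载英文小模型,并打印每个词的词性
pip install spacy
python -m spacy download en_core_web_sm
python -c "import spacy; nlp = spacy.load('en_core_web_sm'); print([(t.text, t.pos_) for t in nlp('SpaCy represents everything as objects.')])"
```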
|
||||
|
||||
#### TextBlob
|
||||
|
||||
[TextBlob][5] 是 NLTK 的一个扩展库。你可以通过 TextBlob 用一种更简单的方式来使用 NLTK 的功能,它还包含了 Pattern 库中的功能。如果你刚刚入门,这是一个很好的工具;它也可以用于那些对性能要求不太高的生产应用。总之,TextBlob 适用于任何场景,但对小型项目尤其合适。
|
||||
|
||||
#### Textacy
|
||||
|
||||
这个工具可能是我用过的名字最好听的。先读出 “[Textacy][6]” 中的 “ex”,再拖长 “cy” 的音。它不仅仅是名字好,它本身也是一个很不错的工具。它使用 SpaCy 作为自然语言处理的核心,但在处理流程的前后做了很多工作。如果你打算使用 SpaCy,不妨先用 Textacy,这样不用额外编写附加代码,你就可以处理多种不同类型的数据。
|
||||
|
||||
#### PyTorch-NLP
|
||||
|
||||
[PyTorch-NLP][7] 出现才短短一年,但它已经有一个庞大的社区了。它适用于快速原型开发,会随着最新的研究成果频繁更新,顶级公司和研究人员也发布了许多基于它的其他工具来完成各种新奇的处理任务,比如图像转换。PyTorch 的目标用户是研究人员,但它也能用于原型开发,或在早期的生产任务中应用最前沿的算法。基于它构建的各种库也值得研究。
|
||||
|
||||
### Node.js 工具
|
||||
|
||||
#### Retext
|
||||
|
||||
[Retext][8] 是 [unified 社区][9]的一部分。unified 是一个接口,能够把不同的工具和插件集成起来,让它们高效地协同工作。Retext 是 unified 工具集中三个核心工具之一,另外两个分别是用于 Markdown 编辑的 Remark 和用于 HTML 处理的 Rehype。这是一个非常有趣的想法,我很高兴看到这个社区的发展。Retext 没有暴露太多的底层技术,更多的是通过插件去完成你在 NLP 任务中想要做的事情。拼写检查、排版修正、情绪检测和可读性分析都可以用简单的插件来完成。总之,如果你不想了解底层处理技术、只想完成你的任务,这个工具和社区是一个不错的选择。
|
||||
|
||||
#### Compromise
|
||||
|
||||
如果你在寻找功能最全、系统最复杂的工具,[Compromise][10] 不是你的选择。然而,如果你想要一个性能好、适用面广、还能在客户端运行的工具,Compromise 值得一试。实际上,它的名字(“妥协”)非常贴切:作者专注于一个功能更具体的小软件包,在准确性和功能强大程度上做了妥协,换来的好处是它对自己将被使用的场景有更好的把握。
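下面是一个简单的 Compromise 用法示意(假设已安装 Node.js 和 npm;示例参考其文档的风格,具体 API 以官方文档为准):

```
# 安装 compromise 并把一句话中的动词改成过去时
npm install compromise
node -e "const nlp = require('compromise'); const doc = nlp('she sells seashells'); doc.verbs().toPastTense(); console.log(doc.text());"
```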
|
||||
|
||||
#### Natural
|
||||
|
||||
[Natural][11] 包含了一般自然语言处理库所具有的大多数功能。它主要处理英文文本,但也包括一些其他语言,它的社区还在支持更多的语言。它能够进行分词、词干提取、分类、语音学处理、词频-逆文档频率计算(TF-IDF)、WordNet、字符串相似度计算和一些词形变换。它和 NLTK 有得一比,因为它想把所有东西都包含在一个包里,使用方便,但可能不太适合专注的研究。总的来说,这是一个功能齐全的不错的库,目前仍在开发中,但可能需要对底层实现有更多的了解,才能用得更高效。
|
||||
|
||||
#### Nlp.js
|
||||
|
||||
[Nlp.js][12]是在其他几个NLP库上开发的,包括Franc和Brain.js。它提供了一个能很好支持NLP组件的接口,比如分类,情感分析,词干化,命名实体识别和自然语言生成。它也支持一些其他语言,在你处理除了英语之外的语言时也能提供一些帮助。总之,它是一个不错的通用工具,能够提供简单的接口去调用其他工具。在你需要更强大或更灵活的工具之前,这个工具可能会在你的应用程序中用上很长一段时间。
|
||||
|
||||
### Java工具
|
||||
#### OpenNLP
|
||||
|
||||
[OpenNLP][13] 由 Apache 基金会维护,因此可以很方便地集成到其他 Apache 项目中,比如 Apache Flink、Apache NiFi 和 Apache Spark。这是一个通用的 NLP 工具,涵盖了所有 NLP 组件中的通用功能,可以通过命令行使用,或者以包的形式导入到应用中。它也支持很多种语言。OpenNLP 是一个很高效的工具,包含了很多特性,如果你用 Java 进行生产开发的话,它是个很好的选择。
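作为参考,下面是一个粗略的 OpenNLP 命令行用法示意(假设你已经下载并解压了 OpenNLP 的发行版,并且下载了英文分词模型,这里的 `en-token.bin` 文件名只是举例):

```
# 用 OpenNLP 自带的命令行工具对一句话做分词
echo "OpenNLP integrates well with other Apache projects." | ./bin/opennlp TokenizerME en-token.bin
```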
|
||||
|
||||
#### StanfordNLP
|
||||
|
||||
[Stanford CoreNLP][14] 是一个工具集,提供了基于统计、基于深度学习和基于规则的 NLP 功能。这个工具也有许多其他编程语言的版本,所以可以脱离 Java 来使用。它是由顶尖研究机构创建的一个高效工具,但在生产环境中可能不是最好的选择。此工具采用双重许可,其中有一个可以用于商业目的的特殊许可。总之,在研究和实验中它是一个很棒的工具,但在生产系统中可能会带来一些额外的开销。比起 Java 版本来说,读者可能对它的 Python 版本更感兴趣。斯坦福教授在 Coursera 上开设了一些最好的机器学习课程,可以[点此][15]访问其他不错的资源。
|
||||
|
||||
#### CogCompNLP
|
||||
|
||||
[CogCompNLP][16] 是由伊利诺伊大学开发的一个工具,它还有一个功能相似的 Python 版本。它可以用于处理文本,既支持本地处理也支持远程处理,能够极大地减轻你本地设备的压力。它提供了很多处理功能,比如分词、词性分析、标注、断句、命名实体标注、词形还原、依存分析和语义角色标注。它是一个很好的研究工具,你可以自己探索它的不同功能。我不确定它是否适合生产环境,但如果你使用 Java 的话,它值得一试。
|
||||
|
||||
* * *
|
||||
|
||||
你最喜欢的开源的NLP工具和库是什么?请在评论区分享文中没有提到的工具。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/natural-language-processing-tools
|
||||
|
||||
作者:[Dan Barker (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[zxp](https://github.com/zhangxiangping)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/barkerd427
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
|
||||
[2]: http://www.nltk.org/
|
||||
[3]: http://www.nltk.org/book_1ed/
|
||||
[4]: https://spacy.io/
|
||||
[5]: https://textblob.readthedocs.io/en/dev/
|
||||
[6]: https://readthedocs.org/projects/textacy/
|
||||
[7]: https://pytorchnlp.readthedocs.io/en/latest/
|
||||
[8]: https://www.npmjs.com/package/retext
|
||||
[9]: https://unified.js.org/
|
||||
[10]: https://www.npmjs.com/package/compromise
|
||||
[11]: https://www.npmjs.com/package/natural
|
||||
[12]: https://www.npmjs.com/package/node-nlp
|
||||
[13]: https://opennlp.apache.org/
|
||||
[14]: https://stanfordnlp.github.io/CoreNLP/
|
||||
[15]: https://opensource.com/article/19/2/learn-data-science-ai
|
||||
[16]: https://github.com/CogComp/cogcomp-nlp
|
246
translated/tech/20190407 Manage multimedia files with Git.md
Normal file
@ -0,0 +1,246 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (svtter)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Manage multimedia files with Git)
|
||||
[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
|
||||
通过 Git 来管理多媒体文件
|
||||
======
|
||||
|
||||
学习如何使用 Git 来追踪项目中的大型多媒体文件,这是我们这个介绍 Git 鲜为人知用法的系列文章的最后一篇。
|
||||
|
||||
![video editing dashboard][1]
|
||||
|
||||
Git 是专为源代码版本控制设计的工具,因此它很少被那些不以软件为主的项目和行业采用。然而,异步工作流的优点是十分诱人的,尤其是在越来越多把大量计算工作和高度艺术性创作结合在一起的行业中,包括网页设计、视觉效果、视频游戏、出版、货币设计(是的,这是一个真实存在的行业)、教育等等,这个列表还可以一直列下去。
|
||||
|
||||
在这个迎接 Git 14 周年纪念日的系列文章中,我们分享了六种鲜为人知的 Git 使用方式。在这最后一篇文章中,我们将介绍那些把 Git 的优势带入多媒体文件管理的软件。
|
||||
|
||||
### Git 管理多媒体文件的问题
|
||||
|
||||
众所周知,Git 用于处理非文本文件不是很好,但是这并不妨碍我们进行尝试。下面是一个使用 Git 来复制照片文件的例子:
|
||||
|
||||
```
|
||||
$ du -hs
|
||||
108K .
|
||||
$ cp ~/photos/dandelion.tif .
|
||||
$ git add dandelion.tif
|
||||
$ git commit -m 'added a photo'
|
||||
[master (root-commit) fa6caa7] two photos
|
||||
1 file changed, 0 insertions(+), 0 deletions(-)
|
||||
create mode 100644 dandelion.tif
|
||||
$ du -hs
|
||||
1.8M .
|
||||
```
|
||||
|
||||
目前为止没有什么异常。增加一个 1.8MB 的照片到一个目录下,使得目录变成了 1.8 MB 的大小。所以下一步,我们尝试删除文件。
|
||||
|
||||
```
|
||||
$ git rm dandelion.tif
|
||||
$ git commit -m 'deleted a photo'
|
||||
$ du -hs
|
||||
828K .
|
||||
```
|
||||
|
||||
在这里我们可以看到问题:删除一个已经提交过的文件,仍然会使仓库的大小扩大到原来的 8 倍(从 108K 到 828K)。我们可以多测试几次来得到一个更好的平均值,但是这个简单的演示与我的经验是一致的。提交非文本文件,在一开始花费的空间比较小,但是一个项目活跃的时间越长,人们对静态内容的修改就越多,更多零碎的文件就会累积起来。当一个 Git 仓库变得越来越大,主要的成本往往是速度:拉取和推送所需要的时间,会从原本只够抿一口咖啡的工夫,变成长到让你怀疑自己的电脑是不是掉线了。
|
||||
|
||||
导致 Git 中静态内容的体积不断扩大的原因是什么呢?由文本构成的文件,允许 Git 只拉取那些被修改的部分;而光栅图和音乐文件对 Git 来说和文本不一样,它无法解析 .png 和 .wav 文件中的二进制数据。所以,哪怕一张图只修改了一个像素,Git 也只能获取全部数据,并为它创建一个新的完整副本。
|
||||
|
||||
### Git-portal
|
||||
|
||||
在实践中,许多多媒体项目不需要或者不想追踪媒体的历史记录。相对于文本或者代码的部分,项目的媒体部分一般有不同的生命周期。媒体资源一般沿着一个方向演进:一张图片从铅笔草稿开始,以数字绘画的形式抵达它的目的地。然后,尽管文本能够回滚到早期的版本,但艺术作品只会一直向前。项目中的媒体很少会被绑定到某个特定的版本。例外情况通常是那些反映数据集的图形,比如表格、图形或图表,它们通常可以用基于文本的格式(如 SVG)来完成。
|
||||
|
||||
所以,在许多同时包含文本(无论是叙事散文还是代码)和媒体的项目中,Git 是一个可以接受的文件管理解决方案,只要在版本控制循环之外给艺术家留一个可以自由发挥的“游乐场”即可。
|
||||
|
||||
![Graphic showing relationship between art assets and Git][2]
|
||||
|
||||
一个实现这种功能的简单方法是 [Git-portal][3],这是一个借助 Git 钩子(hooks)增强的 Bash 脚本,它能把静态资源文件从你的文件夹中移出 Git 的管理范围,用符号链接来取而代之。Git 提交的是这些链接文件(有时候称作快捷方式),它们非常小,所以你提交的只有文本文件和那些指代媒体文件的链接。由于替身文件是符号链接,你的项目仍会像预期那样工作,因为本地机器会顺着链接找到“真实”的文件。Git-portal 在 portal 文件夹中维护着项目的目录结构,所以如果你决定 Git-portal 不适合你的项目,或者需要构建一个不含符号链接的项目版本(例如用于分发),逆转这个过程也很简单。
|
||||
|
||||
Git-portal 也允许通过 rsync 来远程同步静态资源,所以用户可以设置一个远程存储位置,作为集中的、权威的数据源。
|
||||
|
||||
Git-portal 对于多媒体项目是一个理想的解决方案。类似的多媒体项目包括视频游戏、桌面游戏、需要大型 3D 模型渲染和纹理的虚拟现实项目、[带插图的书籍][4]以及 .odt 输出、协作型的[博客站点][5]、音乐项目,等等。艺术家们通常已经在自己的应用程序里以图层(在图形领域)和音轨(在音乐领域)的形式进行版本控制了,所以 Git 不会给多媒体项目文件本身增加什么;Git 的功能可以用于艺术项目的其他部分(例如散文和叙述、项目管理、字幕文件、致谢名单、营销文案、文档等),而结构化的远程备份能力则可以供艺术家使用。
|
||||
|
||||
#### 安装 Git-portal
|
||||
|
||||
Git-portal 的RPM 安装包位于 <https://klaatu.fedorapeople.org/git-portal>,可用于下载和安装。
|
||||
|
||||
此外,用户可以从 Git-portal 的 Gitlab 主页手动安装。这仅仅是一个 Bash 脚本以及一些 Git hooks(也是 Bash 脚本),但是需要一个快速的构建过程来让它知道安装的位置。
|
||||
|
||||
|
||||
```
|
||||
$ git clone https://gitlab.com/slackermedia/git-portal.git git-portal.clone
|
||||
$ cd git-portal.clone
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
#### 使用 Git-portal
|
||||
|
||||
Git-portal 需要与 Git 配合使用。和所有针对 Git 的大文件扩展方案一样,这意味着你需要记住一些额外的步骤。不过,你只在处理媒体资源的时候才需要用到 Git-portal,所以还是很容易记住的,除非你把大文件也当成文本文件来处理(这对 Git 用户来说很少见)。在项目中使用 Git-portal 之前,必须先执行一个设置步骤:
|
||||
|
||||
|
||||
```
|
||||
$ mkdir bigproject.git
|
||||
$ cd !$
|
||||
$ git init
|
||||
$ git-portal init
|
||||
```
|
||||
|
||||
Git-portal 的 **init** 功能会在 Git 仓库中创建一个 **_portal** 文件夹,并把它添加到 .gitignore 文件中。
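为了更直观一些,下面是一个推测性的示意,展示 init 之后仓库里大致会有什么(.gitignore 的具体内容以 Git-portal 实际生成的为准,这里只是演示):

```
$ ls -A
.git/  .gitignore  _portal/
$ cat .gitignore
_portal
```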
|
||||
|
||||
在平日里使用 Git-portal 和 Git 协同十分平滑。一个较好的例子是基于 MIDI 的音乐项目:音乐工作站产生的项目文件是基于文本的,但是 MIDI 文件是二进制数据:
|
||||
|
||||
|
||||
```
|
||||
$ ls -1
|
||||
_portal
|
||||
song.1.qtr
|
||||
song.qtr
|
||||
song-Track_1-1.mid
|
||||
song-Track_1-3.mid
|
||||
song-Track_2-1.mid
|
||||
$ git add song*qtr
|
||||
$ git-portal song-Track*mid
|
||||
$ git add song-Track*mid
|
||||
```
|
||||
|
||||
如果你查看一下 **_portal** 文件夹,你会发现那里有原始的MIDI文件。这些文件在原本的位置被替换成了指向 **_portal** 的链接文件,使得音乐工作站像预期一样运行。
|
||||
|
||||
|
||||
```
|
||||
$ ls -lG
|
||||
[...] _portal/
|
||||
[...] song.1.qtr
|
||||
[...] song.qtr
|
||||
[...] song-Track_1-1.mid -> _portal/song-Track_1-1.mid*
|
||||
[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid*
|
||||
[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid*
|
||||
```
|
||||
|
||||
与 Git 一样,你也可以添加一整个目录下的文件。
|
||||
|
||||
|
||||
```
|
||||
$ cp -r ~/synth-presets/yoshimi .
|
||||
$ git-portal add yoshimi
|
||||
Directories cannot go through the portal. Sending files instead.
|
||||
$ ls -lG _portal/yoshimi
|
||||
[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat*
|
||||
```
|
||||
|
||||
删除功能也像预期一样工作,但在删除 **_portal** 中的内容时,你应该使用 **git-portal rm** 而不是 **git rm**。使用 Git-portal 可以确保文件同时从 **_portal** 中被删除:
|
||||
|
||||
|
||||
```
|
||||
$ ls
|
||||
_portal/ song.qtr song-Track_1-3.mid@ yoshimi/
|
||||
song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@
|
||||
$ git-portal rm song-Track_1-3.mid
|
||||
rm 'song-Track_1-3.mid'
|
||||
$ ls _portal/
|
||||
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
|
||||
```
|
||||
|
||||
如果你忘记使用 Git-portal 而直接用了 **git rm**,那么你就需要手动去删除 portal 中的文件:
|
||||
|
||||
|
||||
```
|
||||
$ git rm song-Track_1-1.mid
|
||||
rm 'song-Track_1-1.mid'
|
||||
$ ls _portal/
|
||||
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
|
||||
$ trash _portal/song-Track_1-1.mid
|
||||
```
|
||||
|
||||
Git-portal 仅有的另一个功能,是列出当前所有的符号链接,并找出其中已经损坏的部分。如果项目文件夹中的文件被移动过,有时就会发生这种情况:
|
||||
|
||||
|
||||
```
|
||||
$ mkdir foo
|
||||
$ mv yoshimi foo
|
||||
$ git-portal status
|
||||
bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid
|
||||
bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat
|
||||
```
|
||||
|
||||
如果你只是在个人项目中使用 Git-portal 并且自己维护备份,那么以上就是你在技术层面需要了解的关于 Git-portal 的全部内容了。如果你想添加协作者,或者希望 Git-portal 像 Git 那样为你管理备份,你可以设置一个远程仓库。
|
||||
|
||||
#### 增加 Git-portal remotes
|
||||
|
||||
为 Git-portal 增加一个远程位置,是通过 Git 已有的功能来实现的。Git-portal 使用 Git 钩子(也就是藏在仓库 .git 文件夹中的脚本)来查找你的远程主机上是否存在以 **_portal** 开头的文件夹。如果找到了,它就会尝试使用 **rsync** 来与远程位置同步文件。Git-portal 会在用户执行 git push 以及 git merge 的时候(或者执行 git pull 的时候,它实际上就是一次 fetch 加自动合并)执行这项任务。
|
||||
|
||||
如果你以前只克隆过 Git 仓库,那么你可能从来没有自己添加过远程仓库。添加远程仓库是一个标准的 Git 操作:
|
||||
|
||||
```
|
||||
$ git remote add origin git@gitdawg.com:seth/bigproject.git
$ git remote -v
origin  git@gitdawg.com:seth/bigproject.git (fetch)
origin  git@gitdawg.com:seth/bigproject.git (push)
|
||||
```
|
||||
|
||||
用 **origin** 这个名字来指代你的主要 Git 仓库是一个流行的惯例,把它用于 Git 数据是合理的。然而,你的 Git-portal 数据是分开存储的,所以你必须创建第二个远程仓库,让 Git-portal 知道该往哪里 push、从哪里 pull。取决于你使用的 Git 主机,你可能需要一个单独的服务器,因为几个 GB 大小的媒体资源很可能会让有空间限制的 Git 主机无法承受;也可能你的服务器只允许你访问 Git 仓库,而不提供额外的存储文件夹:
|
||||
|
||||
```
|
||||
$ git remote add _portal seth@example.com:/home/seth/git/bigproject_portal
$ git remote -v
origin  git@gitdawg.com:seth/bigproject.git (fetch)
origin  git@gitdawg.com:seth/bigproject.git (push)
_portal  seth@example.com:/home/seth/git/bigproject_portal (fetch)
_portal  seth@example.com:/home/seth/git/bigproject_portal (push)
|
||||
```
|
||||
|
||||
你可能不想为所有人在你的服务器上开设个人账户,其实你也不需要这样做。为了提供对服务器上大文件资源仓库的访问权限,你可以运行一个 Git 前端,比如 **[Gitolite][8]**,或者使用 **rrsync**(受限的 rsync)。
|
||||
|
||||
现在你可以推送你的 Git 数据到你的远程 Git 仓库和你的 Git-portal 数据到你的远程 portal:
|
||||
|
||||
|
||||
```
|
||||
$ git push origin HEAD
|
||||
master destination detected
|
||||
Syncing _portal content...
|
||||
sending incremental file list
|
||||
sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec
|
||||
total size is 60,358,015 speedup is 6,474.10
|
||||
Syncing _portal content to example.com:/home/seth/git/bigproject_portal
|
||||
```
|
||||
|
||||
如果你安装了 Git-portal 并且配置了 **_portal** 远程仓库,你的 **_portal** 文件夹就会被同步,在每次 push 时把新内容发送到服务器,并在每次 pull 时从服务器获取新内容。虽然严格来说,你并不需要通过 Git 的 commit 和 push 来和服务器同步(用户也可以直接使用 rsync),但是我发现为艺术性内容的变更做提交是很有用的。这能把艺术家和他们的数字资产整合进工作流的其余部分,并提供关于项目进度和速度的有用元数据。
|
||||
|
||||
### 其他选项
|
||||
|
||||
如果 Git-portal 对你而言太过简单,还有一些其他用 Git 管理大型文件的选择。[Git Large File Storage][9](LFS)是一个已经停止维护的项目 git-media 的分支,由 GitHub 维护和支持。它需要专门的命令(例如用 **git lfs track** 来防止大型文件被 Git 直接追踪),并且需要用户维护一个 .gitattributes 文件,来标记仓库中哪些文件交由 LFS 处理。对于大文件而言,它 _只_ 支持 HTTP 和 HTTPS 远程主机,所以你的 LFS 服务器必须配置成让用户可以通过 HTTP 而不是 SSH 或 rsync 来进行鉴权。
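下面是 Git LFS 基本用法的一个简单示意(假设你已经安装了 git-lfs 包;文件名沿用前文的示例,仅作演示):

```
# 启用 Git LFS,并把 TIFF 文件交给 LFS 追踪
git lfs install
git lfs track "*.tif"
git add .gitattributes dandelion.tif
git commit -m "track TIFF files with Git LFS"
```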
|
||||
|
||||
另一个相对 LFS 更灵活的选项是 [git-annex][10]。你可以在我的文章《[managing binary blobs in Git][11]》中了解更多(可以忽略其中关于 git-media 的部分,这个项目已经废弃,它的继任者 Git LFS 也没有延续那些做法)。Git-annex 是一个灵活且优雅的解决方案,它拥有一套精细的机制来添加、删除和移动仓库中的大型文件。因为它灵活且强大,所以有很多新的命令和规则需要学习,建议看一下它的[文档][12]。
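作为对比,这里给出一个 git-annex 的极简用法示意(假设已经安装了 git-annex;具体命令和工作流请以它的官方文档为准):

```
# 在仓库中初始化 git-annex,并用它管理一个大文件
git init bigfiles && cd bigfiles
git annex init "my laptop"
cp ~/videos/raw-footage.mp4 .
git annex add raw-footage.mp4
git commit -m "add raw footage via git-annex"
```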
|
||||
|
||||
然而,如果你的需求很简单,你可能更喜欢一个把现有技术组合起来、只做简单而直观工作的解决方案,那么 Git-portal 可能是更合适的工具。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/manage-multimedia-files-git
|
||||
|
||||
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[svtter](https://github.com/svtter)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git)
|
||||
[3]: http://gitlab.com/slackermedia/git-portal.git
|
||||
[4]: https://www.apress.com/gp/book/9781484241691
|
||||
[5]: http://mixedsignals.ml
|
||||
[6]: mailto:git@gitdawg.com
|
||||
[7]: mailto:seth@example.com
|
||||
[8]: https://opensource.com/article/19/4/file-sharing-git
|
||||
[9]: https://git-lfs.github.com/
|
||||
[10]: https://git-annex.branchable.com/
|
||||
[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
|
||||
[12]: https://git-annex.branchable.com/walkthrough/
|
@ -1,75 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (hopefully2333)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What's HTTPS for secure computing?)
|
||||
[#]: via: (https://opensource.com/article/20/1/confidential-computing)
|
||||
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
|
||||
|
||||
用于安全计算的 HTTPS 是什么?
|
||||
======
|
||||
在默认的情况下,网站的安全性还不足够。
|
||||
![Secure https browser][1]
|
||||
|
||||
在过去的几年里,寻找一个只以 “http://…” 开头的网站变得越来越难,这是因为业界终于意识到网络安全“真的是一回事”,同时也是因为在客户端和服务端之间建立并使用 https 连接变得更加容易了。类似的转变可能正以不同的方式发生在云计算、边缘计算、物联网、区块链、人工智能、机器学习等领域。长久以来,我们都知道应该对存储中的静态数据和在网络中传输的数据进行加密,但是在使用和处理数据的时候对它进行加密是困难且昂贵的。机密计算——利用受信任的执行环境(TEE)之类的硬件功能,为使用中的数据和算法提供这种保护——可以保护托管系统上或易受攻击环境中的数据。
|
||||
|
||||
我已经写过几篇关于 TEE 的文章,当然还有我和 Nathaniel McCallum 共同创立的 Enarx 项目(入门可以参看《[Enarx for everyone (a quest)][4]》和《[Enarx goes multi-platform][5]》这两篇文章)。Enarx 使用 TEE 来提供独立于平台和语言的部署平台,让你能够安全地把敏感应用或者敏感组件(例如微服务)部署到你不信任的主机上。Enarx 当然是完全开源的(或许有人会感兴趣,我们使用的是 Apache 2.0 许可证)。能够在你不信任的主机上运行工作负载,这正是机密计算的承诺,它把对静态敏感数据和传输中数据的常规保护做法,扩展到了正在使用中的数据:
|
||||
|
||||
* **存储:** 你要加密你的静态数据,因为你不完全信任你的基础存储架构。
|
||||
* **网络:** 你要加密你正在传输中的数据,因为你不完全信任你的基础网络架构。
|
||||
* **计算:** 你要加密你正在使用中的数据,因为你不完全信任你的基础计算架构。
|
||||
|
||||
|
||||
|
||||
关于信任,我有非常多的话想说,而且上面这些说法里的“完全”一词很重要(在重读这篇文章的草稿时,我特意加上了这个词)。无论是传递你的数据包还是存储你的数据块,在任何情况下,你都必须在一定程度上信任你的底层基础设施。以计算基础设施为例,你必须信任 CPU 和与之关联的固件,因为如果你不信任它们,你就无法真正地进行计算(目前有一些诸如同态加密之类的技术开始提供另一些可能,但是它们依然很有限,还不够成熟)。
|
||||
|
||||
考虑到最近发现的一些安全问题,我们是否应该完全信任 CPU 这个问题也随之而来;同样值得问的还有:无论主机位于何处,这些硬件是否能完全抵御物理攻击。
|
||||
|
||||
这两个问题的答案都是“不”,但是考虑到大规模可用性和推广的成本,这已经是我们目前拥有的最好的技术了。对于第二个问题,没有人会假装这项技术(或者任何其他技术)是完全安全的:我们需要做的是思考我们的威胁模型,并确定在这种情况下 TEE 是否为我们的特定需求提供了足够的安全防护。对于第一个问题,Enarx 采用的模型是在部署时就决定你是否信任某一组特定的 CPU。举个例子,如果供应商 Q 的 R 代芯片被发现有漏洞,你可以很简单地说:“我拒绝把我的工作负载部署到 Q 的 R 代芯片上,但仍然可以部署到 Q 的 S、T 和 U 型号芯片,以及 P、M、N 供应商的任何芯片上去。”
|
||||
|
||||
我认为有三个变化促成了人们目前对机密计算的兴趣和采用。
|
||||
|
||||
  1. **硬件可用性:** 仅仅在过去的 6 到 12 个月里,支持 TEE 的硬件才开始变得广泛可用,目前市场上的主要例子是 Intel 的 SGX 和 AMD 的 SEV。我们期望未来能看到更多支持 TEE 的硬件。
|
||||
  2. **行业准备:** 就像上云越来越多地被接受为应用程序部署的模式一样,监管机构和立法机构也在提高对各类组织保护其所管理数据的要求。组织开始强烈呼吁一种方法,能在不受信任的主机上——或者更准确地说,在他们无法完全信任、而又承载着敏感数据的主机上——运行敏感程序(或者处理敏感数据的应用程序)。这并不令人意外:如果芯片制造商看不到这项技术的市场,他们就不会在上面投太多的钱。Linux 基金会的机密计算联盟(CCC)的成立,就是一个行业愿意寻找使用机密计算的通用模型、并鼓励开源项目使用这些技术的例子。
|
||||
  3. **开源:** 就像区块链一样,机密计算是那种“开源几乎是必要条件”的技术之一。如果你要运行敏感程序,你需要信任那些替你运行它的东西——不仅仅是 CPU 和固件,还包括在 TEE 内支撑你工作负载的框架。你可以说“我不信任主机机器和它上面的软件栈,所以我打算使用 TEE”,但是如果你不了解 TEE 里的软件环境,那你只是把一种不透明的软件换成了另一种。对 TEE 的开源支持可以让你或者社区——实际上是你和社区——以专有软件不可能实现的方式检查和审计你所运行的程序。这就是为什么 CCC 被放在致力于开放式开发模式的 Linux 基金会之下,并鼓励 TEE 相关的软件项目加入并成为开源项目(如果它们还不是开源的话)。
|
||||
|
||||
|
||||
|
||||
我认为,在过去的 15 到 20 年里,硬件可用性、行业准备和开源已经成为推动技术变革的三大驱动力。区块链、人工智能、云计算、互联网计算、大数据和互联网商务都是这三个因素同时发挥作用的例子,并且都给行业带来了巨大的改变。
|
||||
|
||||
一般意义上的安全,是我们数十年来一直听到的一种承诺,但至今仍未真正实现。老实说,我不确定它将来会不会实现。但是随着新技术的到来,针对特定用例的普遍安全变得越来越现实,业界对它的期待也越来越高。这样看来,机密计算似乎已经准备好成为下一个重大变化——而你,我亲爱的读者,可以一起加入这场革命(毕竟它是开源的)。
|
||||
|
||||
* * *
|
||||
|
||||
  1. Enarx 是由红帽发起的一个 CCC 项目。
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
这篇文章最初发布在 Alice, Eve, and Bob 上,经作者许可后重新发布。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/confidential-computing
|
||||
|
||||
作者:[Mike Bursell][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[hopefully2333](https://github.com/hopefully2333)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mikecamel
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/secure_https_url_browser.jpg?itok=OaPuqBkG (Secure https browser)
|
||||
[2]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/
|
||||
[3]: https://enarx.io/
|
||||
[4]: https://aliceevebob.com/2019/08/20/enarx-for-everyone-a-quest/
|
||||
[5]: https://aliceevebob.com/2019/10/29/enarx-goes-multi-platform/
|
||||
[6]: https://aliceevebob.com/2018/02/20/there-are-no-absolutes-in-security/
|
||||
[7]: https://confidentialcomputing.io/
|
||||
[8]: tmp.VEZpFGxsLv#1
|
||||
[9]: https://aliceevebob.com/2019/12/03/confidential-computing-the-new-https/
|
96
translated/tech/20200205 Getting started with GnuCash.md
Normal file
@ -0,0 +1,96 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with GnuCash)
|
||||
[#]: via: (https://opensource.com/article/20/2/gnucash)
|
||||
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
|
||||
|
||||
开始使用 GnuCash
|
||||
======
|
||||
使用 GnuCash 管理你的个人或小型企业会计。
|
||||
![A dollar sign in a network][1]
|
||||
|
||||
在过去的四年里,我一直在用 [GnuCash][2] 来管理我的个人财务,我对此非常满意。这个开源(GPL v3)项目自 1998 年首次发布以来一直在成长和改进,2019 年 12 月发布的最新版本 3.8 又增加了许多改进和 bug 修复。
|
||||
|
||||
GnuCash 可在 Windows、MacOS 和 Linux 中使用。它实现了复式记账系统,并可以导入各种流行的开放和专有文件格式,包括 QIF、QFX、OFX、CSV 等。这使得从其他财务应用(包括 Quicken)迁移过来很容易,而 GnuCash 当初正是为了取代它们而诞生的。
|
||||
|
||||
借助 GnuCash,你可以跟踪个人财务状况以及小型企业会计和开票。它没有一个集成的工资系统。根据文档,你可以在 GnuCash 中跟踪工资支出,但你必须在软件外部计算税金和扣减。
|
||||
|
||||
### 安装
|
||||
|
||||
要在 Linux 上安装 GnuCash:
|
||||
|
||||
* 在 Red Hat、CentOS 或 Fedora 中: **$ sudo dnf install gnucash**
|
||||
* 在 Debian、Ubuntu 或 Pop_OS 中: **$ sudo apt install gnucash**
|
||||
|
||||
|
||||
|
||||
你也可以从 [Flathub][3] 安装它,我就是这样在运行 Elementary OS 的笔记本上安装的(本文中的所有截图都来自这个安装)。
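如果你也想走 Flathub 这条路,下面是大致的安装命令(假设你已经安装好 flatpak 并添加了 Flathub 源;应用 ID 取自上面的 Flathub 链接):

```
# 通过 Flathub 安装 GnuCash
flatpak install flathub org.gnucash.GnuCash
```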
|
||||
|
||||
### 设置
|
||||
|
||||
安装并启动程序后,你将看到一个欢迎屏幕,该页面提供了创建新账户集、导入 QIF 文件或打开新用户教程的选项。
|
||||
|
||||
![GnuCash Welcome screen][4]
|
||||
|
||||
#### 个人账户
|
||||
|
||||
如果你选择第一个选项(我就是这么做的),GnuCash 会打开一个向导页面,一步步引导你完成设置。它会收集初始数据并设置账户首选项,例如账户类型和名称、商业数据(例如税号)和首选货币。
|
||||
|
||||
![GnuCash new account setup][5]
|
||||
|
||||
GnuCash 支持个人银行账户、商业账户、汽车贷款、定期存单(CD)和货币市场账户、儿童保育账户等。
|
||||
|
||||
例如,首先创建一个简单的支票簿。你可以输入账户的初始余额或以多种格式导入现有账户数据。
|
||||
|
||||
![GnuCash import data][6]
|
||||
|
||||
#### 开票
|
||||
|
||||
GnuCash 还支持小型企业功能,包括客户、供应商和开票。要创建发票,请在 **Business -> Invoice** 中输入数据。
|
||||
|
||||
![GnuCash create invoice][7]
|
||||
|
||||
然后,你可以将发票打印在纸上,也可以将其导出到 PDF 并通过电子邮件发送给你的客户。
|
||||
|
||||
![GnuCash invoice][8]
|
||||
|
||||
### 获取帮助
|
||||
|
||||
如果你有任何疑问,它提供了很出色的帮助文档,你可以在菜单栏的右侧找到入口来获取指导。
|
||||
|
||||
![GnuCash help][9]
|
||||
|
||||
项目的网站包含许多有用信息的链接,例如 GnuCash [功能][10]概述。GnuCash 还提供了[详细的文档][11],可供下载和离线阅读;它还有一个 [wiki][12],为用户和开发人员提供了有用的信息。
|
||||
|
||||
你可以在项目的 [GitHub][13] 仓库中找到其他文件和文档。GnuCash 项目由志愿者驱动。如果你想参与,请查看项目的 wiki 上的 [Getting involved][14] 部分。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/gnucash
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/don-watkins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0 (A dollar sign in a network)
|
||||
[2]: https://www.gnucash.org/
|
||||
[3]: https://flathub.org/apps/details/org.gnucash.GnuCash
|
||||
[4]: https://opensource.com/sites/default/files/images/gnucash_welcome.png (GnuCash Welcome screen)
|
||||
[5]: https://opensource.com/sites/default/files/uploads/gnucash_newaccountsetup.png (GnuCash new account setup)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/gnucash_importdata.png (GnuCash import data)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/gnucash_enter-invoice.png (GnuCash create invoice)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/gnucash_invoice.png (GnuCash invoice)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/gnucash_help.png (GnuCash help)
|
||||
[10]: https://www.gnucash.org/features.phtml
|
||||
[11]: https://www.gnucash.org/docs/v3/C/gnucash-help.pdf
|
||||
[12]: https://wiki.gnucash.org/wiki/GnuCash
|
||||
[13]: https://github.com/Gnucash
|
||||
[14]: https://wiki.gnucash.org/wiki/GnuCash#Getting_involved_in_the_GnuCash_project
|