Merge branch 'master' of https://github.com/LCTT/TranslateProject into translating

This commit is contained in:
geekpi 2020-09-03 16:12:29 +08:00
commit fbf76989ae
15 changed files with 1441 additions and 43 deletions

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12575-1.html)
[#]: subject: (SCP users migration guide to rsync)
[#]: via: (https://fedoramagazine.org/scp-users-migration-guide-to-rsync/)
[#]: author: (chasinglogic https://fedoramagazine.org/author/chasinglogic/)
@ -10,17 +10,17 @@
scp 用户的 rsync 迁移指南
======
![][1]
![](https://img.linux.net.cn/data/attachment/album/202009/03/102942u7rxf79a7rsr9txz.jpg)
在 [SSH 8.0 预发布公告][2]中,OpenSSH 项目表示,他们认为 scp 协议已经过时,不灵活,而且不容易修复,然后他们继而推荐使用 `sftp` 和 `rsync` 来进行文件传输。
在 [SSH 8.0 预发布公告][2]中,OpenSSH 项目表示,他们认为 scp 协议已经过时,不灵活,而且不容易修复,然后他们继而推荐使用 `sftp` 和 `rsync` 来进行文件传输。
然而,很多用户都是从小用着 `scp` 命令长大的,所以对 `rsync` 并不熟悉。此外,`rsync` 可以做的事情也远不止复制文件,这可能会给菜鸟们留下复杂和不透明的印象。尤其是,`scp` 命令的标志大体上可以直接对应到 `cp` 命令的标志,而 `rsync` 命令的标志却和它大相径庭。
然而,很多用户都是从小用着 `scp` 命令长大的,所以对 `rsync` 并不熟悉。此外,`rsync` 可以做的事情也远不止复制文件,这可能会给菜鸟们留下复杂和难以掌握的印象。尤其是,`scp` 命令的标志大体上可以直接对应到 `cp` 命令的标志,而 `rsync` 命令的标志却和它大相径庭。
本文将为熟悉 `scp` 的人提供一个介绍和过渡的指南。让我们跳进最常见的场景:复制文件和复制目录。
### 复制文件
对于复制单个文件而言,`scp` 和 `rsync` 命令实际上是等价的。比方说,你需要把 `foo.txt` 传到一个名为 `server` 的服务器上你的主目录下:
对于复制单个文件而言,`scp` 和 `rsync` 命令实际上是等价的。比方说,你需要把 `foo.txt` 传到你在名为 `server` 的服务器上的主目录下:
```
$ scp foo.txt me@server:/home/me/
@ -34,7 +34,7 @@ $ rsync foo.txt me@server:/home/me/
### 复制目录
对于复制目录,就确实有了很大的分歧,这也解释了为什么 `rsync` 会被认为比 `scp` 更复杂。如果你想把 `bar` 目录复制到 `server` 服务器上,除了指定 `ssh` 信息外,相应的 `scp` 命令和 `cp` 命令一模一样。
对于复制目录,就有了很大的分歧,这也解释了为什么 `rsync` 会被认为比 `scp` 更复杂。如果你想把 `bar` 目录复制到 `server` 服务器上,除了指定 `ssh` 信息外,相应的 `scp` 命令和 `cp` 命令一模一样。
```
$ scp -r bar/ me@server:/home/me/
@ -88,14 +88,14 @@ bar
1 directory, 2 files
```
请注意,`link.txt` 不再是一个符号链接,它现在是一个完整的 `foo.txt` 副本。如果你习惯于使用 `cp`,这可能会是令人惊讶的行为。如果你尝试使用 `cp -r` 复制 `bar` 目录,你会得到一个新的目录,里面的符号链接和 `bar` 的一样。现在如果我们尝试使用之前的 `rsync` 命令,我们会得到一个警告:
请注意,`link.txt` 不再是一个符号链接,它现在是一个 `foo.txt` 的完整副本。如果你习惯于使用 `cp`,这可能会是令人惊讶的行为。如果你尝试使用 `cp -r` 复制 `bar` 目录,你会得到一个新的目录,里面的符号链接和 `bar` 的一样。现在如果我们尝试使用之前的 `rsync` 命令,我们会得到一个警告:
```
$ rsync -r bar/ me@server:/home/me/
skipping non-regular file "bar/baz/link.txt"
```
`rsync` 警告我们它发现了一个非常规文件,并正在跳过它。因为你没有告诉它可以复制符号链接,所以它忽略了它们。`rsync` 在手册中有一节“符号链接”,解释了所有可能的行为选项。在我们的例子中,我们需要添加 `--links` 标志:
`rsync` 警告我们它发现了一个非常规文件,并正在跳过它。因为你没有告诉它可以复制符号链接,所以它忽略了它们。`rsync` 在手册中有一节“符号链接”,解释了所有可能的行为选项。在我们的例子中,我们需要添加 `--links` 标志:
```
$ rsync -r --links bar/ me@server:/home/me/
@ -112,7 +112,7 @@ bar/
1 directory, 2 files
```
为了省去一些打字工作,并利用更多的文件保护选项,在复制目录时可以使用 `--archive`(简称 `-a`)标志。该归档标志将做大多数人所期望的事情,因为它可以实现递归复制、符号链接复制和许多其他选项。
为了省去一些打字工作,并利用更多的文件保护选项,在复制目录时可以使用归档标志 `--archive`(简称 `-a`)。该归档标志将做大多数人所期望的事情,因为它可以实现递归复制、符号链接复制和许多其他选项。
```
$ rsync -a bar/ me@server:/home/me/
@ -157,7 +157,7 @@ Host server
#### 同步
顾名思义,`rsync` 可以做的不仅仅是复制数据。到目前为止,我们只演示了如何使用 `rsync` 复制文件。如果你想让 `rsync` 把目标目录变成源目录的样子,你可以在 `rsync` 中添加 `--delete` 标志。这个删除标志使得 `rsync` 将从源目录中复制不存在于目标目录中的文件,然后它将删除目标目录中不存在于源目录中的文件。结果就是目标目录和源目录完全一样。相比之下,`scp` 只会在目标目录下添加文件。
顾名思义,`rsync` 可以做的不仅仅是复制数据。到目前为止,我们只演示了如何使用 `rsync` 复制文件。如果你想让 `rsync` 把目标目录变成源目录的样子,你可以在 `rsync` 中添加删除标志 `--delete`。这个删除标志使得 `rsync` 将从源目录中复制不存在于目标目录中的文件,然后它将删除目标目录中不存在于源目录中的文件。结果就是目标目录和源目录完全一样。相比之下,`scp` 只会在目标目录下添加文件。
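下面是一个最小示例,沿用上文的 `bar` 目录和 `server` 服务器;先用 `--dry-run` 预览、确认无误后再实际执行会更稳妥:
```
# 预览同步操作:递归复制、保留符号链接与权限,并删除目标端多余的文件
$ rsync --archive --delete --dry-run bar/ me@server:/home/me/bar/

# 确认无误后正式执行
$ rsync --archive --delete bar/ me@server:/home/me/bar/
```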
### 结论

View File

@ -1,25 +1,26 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12576-1.html)
[#]: subject: (Use this command-line tool to find security flaws in your code)
[#]: via: (https://opensource.com/article/20/8/static-code-security-analysis)
[#]: author: (Ari Noman https://opensource.com/users/arinoman)
使用这个命令行工具来查找你代码中的安全漏洞
使用命令行工具 Graudit 来查找你代码中的安全漏洞
======
凭借广泛的语言支持,Graudit 可以让你在开发过程中审计你的代码安全。
![Code on a screen][1]
测试是软件开发生命周期 SDLC 的重要组成部分,它有几个阶段。今天,我想谈谈如何在代码中发现安全问题
> 凭借广泛的语言支持,Graudit 可以让你在开发过程中审计你的代码安全。
在开发软件的时候,你不能忽视安全问题。这就是为什么有一个术语叫 DevSecOps,他的基本职责是识别和解决应用中的安全漏洞。有一些用于检查 [OWASP漏洞][2] 的开源解决方案,它将通过创建源代码的威胁模型来得出结果。
![](https://img.linux.net.cn/data/attachment/album/202009/03/114037qhi2h282wghbp74n.jpg)
测试是软件开发生命周期(SDLC)的重要组成部分,它有几个阶段。今天,我想谈谈如何在代码中发现安全问题。
处理安全问题有不同的方法,如静态应用安全测试 SAST、动态应用安全测试 DAST、交互式应用安全测试 IAST、软件组成分析等
在开发软件的时候,你不能忽视安全问题。这就是为什么有一个术语叫 DevSecOps,它的基本职责是识别和解决应用中的安全漏洞。有一些用于检查 [OWASP 漏洞][2]的开源解决方案,它将通过创建源代码的威胁模型来得出结果。
静态应用安全测试在代码层面运行,通过发现已经编写的代码中的错误来分析应用。这种方法不需要运行代码,所以叫静态分析。
处理安全问题有不同的方法,如静态应用安全测试(SAST)、动态应用安全测试(DAST)、交互式应用安全测试(IAST)、软件组成分析等。
静态应用安全测试在代码层面运行,通过发现编写好的代码中的错误来分析应用。这种方法不需要运行代码,所以叫静态分析。
我将重点介绍静态代码分析,并使用一个开源工具进行实际体验。
@ -29,9 +30,9 @@
好的开源工具总是考虑到灵活性,它们应该能够在任何环境中使用,覆盖尽可能多的情况。这让开发人员更容易将该软件与他们现有的系统连接起来。
但是有的时候,你可能需要一个功能,而这个功能在你选择的工具中是不可用的。那么你就可以选择将代码分叉,在其上开发自己的功能,并在系统中使用。
但是有的时候,你可能需要一个功能,而这个功能在你选择的工具中是不可用的。那么你就可以选择复刻其代码,在其上开发自己的功能,并在你的系统中使用。
因为,大多数时候,开源软件是由一个社区驱动的,开发的速度往往对该工具的用户来说是一个加分项,因为他们会根据用户的反馈、问题或 bug 报告来迭代项目。
因为,大多数时候,开源软件是由社区驱动的,开发的速度往往是该工具的用户的加分项,因为他们会根据用户的反馈、问题或 bug 报告来迭代项目。
### 使用 Graudit 来确保你的代码安全
@ -39,25 +40,22 @@
在这里,我们将使用 [Graudit][3],它是一个简单的命令行工具,可以让我们找到代码库中的安全缺陷。它支持不同的语言,但有一个固定的签名集。
Graudit 使用的 grep 是 GNU 许可证下的工具,类似的静态代码分析工具还有 Rough Auditing Tool for SecurityRATS、Securitycompass Web Application Analysis ToolSWAAT、flawfinder 等。但的技术要求是最低的,并且非常灵活。不过,你可能还是有 Graudit 无法满足的要求。如果是这样,你可以看看这个[列表][4]的其他的选择。
Graudit 使用的 `grep` 是 GNU 许可证下的工具,类似的静态代码分析工具还有 Rough Auditing Tool for SecurityRATS、Securitycompass Web Application Analysis ToolSWAAT、flawfinder 等。但 Graudit 的技术要求是最低的,并且非常灵活。不过,你可能还是有 Graudit 无法满足的要求。如果是这样,你可以看看这个[列表][4]的其他的选择。
我们可以将这个工具安装在特定的项目下,或者全局命名空间中,或者在特定的用户下,或者任何我们喜欢的地方,它很灵活。我们先来克隆一下仓库。
```
`$ git clone https://github.com/wireghoul/graudit`
$ git clone https://github.com/wireghoul/graudit
```
现在,我们需要创建一个 Graudit 的符号链接,以便我们可以将其作为一个命令使用。
```
$ cd ~/bin && mkdir graudit
$ cd ~/bin && mkdir graudit
$ ln --symbolic ~/graudit/graudit ~/bin/graudit
```
在 .bashrc 中添加一个别名(或者你使用的任何 shell 的配置文件)。
`.bashrc` (或者你使用的任何 shell 的配置文件)中添加一个别名。
```
#------ .bashrc ------
@ -67,30 +65,27 @@ alias graudit="~/bin/graudit"
重新加载 shell
```
$ source ~/.bashrc # OR
$ source ~/.bashrc #
$ exec $SHELL
```
让我们通过运行这个来检查是否成功安装了这个工具。
```
`$ graudit -h`
$ graudit -h
```
如果你得到类似于这样的结果,那么就可以了。
![Graudit terminal screen showing help page][5]
图 1 Graudit 帮助页面
*图 1 Graudit 帮助页面*
我正在使用我现有的一个项目来测试这个工具。要运行该工具,我们需要传递相应语言的数据库。你会在 signatures 文件夹下找到这些数据库。
```
`$ graudit -d ~/graudit/signatures/js.db`
$ graudit -d ~/graudit/signatures/js.db
```
我在现有项目中的两个 JavaScript 文件上运行了它,你可以看到它在控制台中抛出了易受攻击的代码。
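如果想明确指定要扫描的目标,可以把文件或目录路径作为参数跟在签名库之后(下面的项目路径仅为示意):
```
# 用 JavaScript 签名库扫描指定目录(路径为假设)
$ graudit -d ~/graudit/signatures/js.db ~/projects/my-app/src/
```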
@ -103,9 +98,9 @@ $ exex $SHELL
### Graudit 的优点和缺点
Graudit 支持很多语言,这使其成为许多不同系统上的用户的理想选择。由于它的使用简单和广泛的语言支持,它可以与其他免费或付费工具相媲美。最重要的是,它们正在开发中,社区也支持其他用户。
Graudit 支持很多语言,这使其成为许多不同系统上的用户的理想选择。由于它的使用简单和语言支持广泛,它可以与其他免费或付费工具相媲美。最重要的是,它们正在开发中,社区也支持其他用户。
虽然这是一个方便的工具,但你可能会发现很难将某个特定的代码识别为”易受攻击“。也许开发者会在未来版本的工具中加入这个功能。但是,通过使用这样的工具来关注代码中的安全问题总是好的。
虽然这是一个方便的工具,但你可能会发现很难将某个特定的代码识别为“易受攻击”。也许开发者会在未来版本的工具中加入这个功能。但是,通过使用这样的工具来关注代码中的安全问题总是好的。
### 总结
@ -118,7 +113,7 @@ via: https://opensource.com/article/20/8/static-code-security-analysis
作者:[Ari Noman][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (leommxj)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Making Zephyr More Secure)
[#]: via: (https://www.linux.com/audience/developers/making-zephyr-more-secure/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
Making Zephyr More Secure
======
Zephyr is gaining momentum as more and more companies embrace this open source project for their embedded devices. However, security is becoming a huge concern for these connected devices. The NCC Group recently conducted an evaluation and security assessment of the project to help harden it against attacks. In this interview, Kate Stewart, Senior Director of Strategic Programs at the Linux Foundation, talks about the assessment and the evolution of the project.
Here is a quick transcript of the interview:
**Swapnil Bhartiya: The NCC group recently evaluated Linux for security. Can you talk about what was the outcome of that evaluation?**
Kate Stewart: We're very thankful for the NCC group for the work that they did and helping us to get Zephyr hardened further. In some senses when it had first hit us, it was like, “Okay, they're taking us seriously now. Awesome.” And the reason they're doing this is that their customers are asking for it. They've got people who are very interested in Zephyr so they decided to invest the time doing the research to see what they could find. And the fact that we're good enough to critique now is a nice positive for the project, no question.
Up till this point, we'd been getting some vulnerabilities that researchers had noticed in certain areas and had to tell us about. We'd issued CVEs so we had a process down, but suddenly being hit with the whole bulk of those like that was like, “Okay, time to up our game guys.” And so, what we've done is we found out we didn't have a good way of letting people who have products with Zephyr based on them know about our vulnerabilities. And what we wanted to be able to do is make it clear that if people have products and they have them out in the market and that they want to know if there's a vulnerability. We just added a new webpage so they know how to register, and they can let us know to contact them.
The challenge of embedded is you don't quite know where the software is. We've got a lot of people downloading Zephyr, we got a lot of people using Zephyr. We're seeing people upstreaming things all the time, but we don't know where the products are, it's all word of mouth to a large extent. There're no tracers or anything else, you don't want to do that in an embedded space on IoT; battery life is important. And so, it's pretty key for figuring out how do we let people who want to be notified know.
We'd registered as a CNA with Mitre several years ago now and we can assign CVE numbers in the project. But what we didn't have was a good way of reaching out to people beyond our membership under embargo so that we can give them time to remediate any issues that we're fixing. By changing our policies, it's gone from a 60-day embargo window to a 90-day embargo window. In the first 30 days, we're working internally to get the team to fix the issues and then we've got a 60-day window for our people who do products to basically remediate in the field if necessary. So, getting ourselves useful for product makers was one of the big focuses this year.
**Swapnil Bhartiya: Since Zephyr's LTS release was made last year, can you talk about the new releases, especially from the security perspective because I think the latest version is 2.3.0?**
Kate Stewart: Yeah, 2.3.0 and then we also have 1.14.2. and 1.14 is our LTS-1 as we say. And we've put an update out to it with the security fixes and a long-term stable like the Linux kernel has security fixes and bug fixes backported into it so that people can build products on it and keep it active over time without as much change in the interfaces and everything else that we're doing in the mainline development tree and what we've just done with the 2.3.
2.3 has a lot of new features in it and we've got all these vulnerabilities remediated. There's a lot more coming up down the road, so the community right now is working. We've adopted a new set of coding guidelines for the project and we will be working on that so we can get ourselves ready for going after safety certifications next year. So there's a lot of code in motion right now, but there's a lot of new features being added every day. It's great.
**Swapnil Bhartiya: I also want to talk a bit about the community side of it. Can you talk about how the community is growing new use cases?**
Kate Stewart: We've just added two new members into Zephyr. We've got Teenage Engineering has just joined us and Laird Connectivity has just joined us and it's really cool to start seeing these products coming out. There are some rather interesting technologies and products that are showing up and so I'm really looking forward to being able to have blog posts about them.
Laird Connectivity is basically a small device running Zephyr that you can use for monitoring distance without recording other information. So, in days of COVID, we need to start figuring out technology assists to help us keep the risk down. Laird Connectivity has devices for that.
So we're seeing a lot of innovation happening very quickly in Zephyr, and that's really Zephyr's strength: it's got a very solid code base and lets people add their innovation on top.
**Swapnil Bhartiya: What role do you think Zephyr is going to play in the post-COVID-19 world?**
Kate Stewart: Well, I think they offer us interesting opportunities. Some of the technologies that are being looked at for monitoring, for instance, we have distance monitoring, contact tracing and things like that. We can either do it very manually or we can start to take advantage of the technology infrastructures to do so. But people may not want to have a device effectively monitoring them all the time. They may just want to know exactly, position-wise, where they are. So that's potentially some degree of control over what's being sent into the tracing and tracking.
These sorts of technologies I think will be helping us improve things over time. I think there's a lot of knowledge that we're getting out of these and ways we can optimize the information, and the RTOS and the sensors are discrete functionality and are improving how we look at things.
**Swapnil Bhartiya: There are so many people who are using Zephyr, but since it is open source we are not even aware of them. How do you ensure that, whether someone is an official member of the project or not, their devices are secure if they are running Zephyr?**
Kate Stewart: We do a lot of testing with Zephyr, there's a tremendous amount of test infrastructure. There's the whole regression infrastructure. We work to various thresholds of quality levels and we've got a lot of expertise and have publicly documented all of our best practices. The security team is a top-notch group of people. I'm really so proud to be able to work with them. They do a really good job of caring about the issues as well as finding them, debugging them and making sure anything that comes up gets solved. So in that sense, there's a lot of really great people working on Zephyr and it makes it a really fun community to work with, no question. In fact, it's growing fast actually.
**Swapnil Bhartiya: Kate, thank you so much for taking your time out and talking to me today about these projects.**
--------------------------------------------------------------------------------
via: https://www.linux.com/audience/developers/making-zephyr-more-secure/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source Project For Earthquake Warning Systems)
[#]: via: (https://www.linux.com/featured/open-source-project-for-earthquake-warning-systems/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
Open Source Project For Earthquake Warning Systems
======
Earthquakes or the shaking doesn't kill people, buildings do. If we can get people out of buildings in time, we can save lives. Grillo has founded OpenEEW in partnership with IBM and the Linux Foundation to allow anyone to build their own earthquake early-warning system. Swapnil Bhartiya, the founder of TFiR, talked to the founder of Grillo on behalf of The Linux Foundation to learn more about the project.
Here is the transcript of the interview:
**Swapnil Bhartiya: If you look at these natural phenomena like earthquakes, there's no way to fight with nature. We have to learn to coexist with them. Early warnings are the best thing to do. And we have all these technologies: IoT and AI/ML. All those things are there, but we still don't know much about these phenomena. So, what I do want to understand is if you look at an earthquake, we'll see that in some countries the damage is much more than in some other places. What is the reason for that?**
Andres Meira: Earthquakes disproportionately affect countries that don't have great construction. And so, if you look at places like Mexico, the Caribbean, much of Latin America, Nepal, even some parts of India in the North and the Himalayas, you find that earthquakes can cause more damage than say in California or in Tokyo. The reason is it is buildings that ultimately kill people, not the shaking itself. So, if you can find a way to get people out of buildings before the shaking, that's really the solution here. There are many things that we don't know about earthquakes. It's obviously a whole field of study, but we can't tell you, for example, that an earthquake can happen in 10 years or five years. We can give you some probabilities, but not enough for you to act on.
What we can say is that an earthquake is happening right now. These technologies are all about reducing the latency so that when we know an earthquake is happening in milliseconds we can be telling people who will be affected by that event.
**Swapnil Bhartiya: What kind of work is going on to better understand earthquakes themselves?**
Andres Meira: I have a very narrow focus. I'm not a seismologist and I have a very narrow focus related to detecting earthquakes and alerting people. I think in the world of seismology, there are a lot of efforts to understand the tectonic movement, but I would say there are a few interesting things happening that I know of. For example, undersea cables. People in Chile and other places are looking at undersea telecommunications cables and the effects that any sort of seismic movement have on the signals. They can actually use that as a detection system. But when you talk about some of the really deep earthquakes, 60-100 miles beneath the surface, man has not yet created holes deep enough for us to place sensors. So we're very limited as to actually detecting earthquakes at a great depth. We have to wait for them to affect us near the surface.
**Swapnil Bhartiya: So then how do these earthquake early warning systems work? I want to understand from a couple of points: What does the device itself look like? What do those sensors look like? What does the software look like? And how do you kind of share data and interact with each other?**
Andres Meira: The sensors that we use, we've developed several iterations over the last couple of years and effectively, they are a small microcontroller, an accelerometer, this is the core component and some other components. What the device does is it records accelerations. So, it looks on the X, Y, and Z axes and just records accelerations from the ground, so we are very fussy about how we install our sensors. Anybody can install it in their home through this OpenEEW initiative that we're doing.
The sensors themselves record shaking accelerations and we send all of those accelerations in quite large messages using MQTT. We send them every second from every sensor and all of this data is collected in the cloud, and in real-time we run algorithms. We want to know that the shaking, which the accelerometer is getting, is not a passing truck. It's actually an earthquake.
So we've developed the algorithms that can tell those things apart. And of course, we wait for one or two sensors to confirm the same event so that we don't get any false positives because you can still get some errors. Once we have that confirmation in the cloud, we can send a message to all of the client devices. If you have an app, you will be receiving a message saying, there's an earthquake at this location, and your device will then be calculating how long it will take to reach it. Therefore, how much energy will be lost and therefore, what shaking you're going to be expecting very soon.
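As a rough sketch of what consuming such a per-second accelerometer stream might look like, the following uses the standard Mosquitto MQTT client; the broker address and topic layout here are hypothetical, not OpenEEW's actual endpoints:
```
# Subscribe to a hypothetical per-device acceleration topic and print each message.
# mosquitto_sub ships with the mosquitto-clients package.
$ mosquitto_sub -h broker.example.org -p 1883 -t 'openeew/device/+/accel' -v
# Each message would carry the last second of X/Y/Z acceleration samples,
# which the cloud-side detection algorithms then evaluate and cross-check
# against other sensors before any alert is sent.
```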
**Swapnil Bhartiya: Where are these devices installed?**
Andres Meira: They are installed at the moment in several countries: Mexico, Chile, Costa Rica, and Puerto Rico. We are very fussy about how people install them, and in fact, on the OpenEEW website, we have a guide for this. We really require that they're installed on the ground floor because the higher up you go, the different the frequencies of the building movement, which affects the recordings. We need it to be fixed to a solid structural element. So this could be a column or a reinforced wall, something which is rigid and it needs to be away from the noise. So it wouldn't be great if it's near a door that was constantly opening and closing. Although we can handle that to some extent. As long as you are within the parameters, and ideally we look for good internet connections, although we have cellular versions as well, then that's all we need.
The real name of the game here is quantity more than quality. If you can have a lot of sensors, it doesn't matter if one is out. It doesn't matter if the quality is down because we're waiting for confirmation from other ones and redundancy is how you achieve a stable network.
**Swapnil Bhartiya: What is the latency between the time when sensors detect an earthquake and the warning is sent out? Does it also mean that the further you are from the epicenter, the more time you will get to leave a building?**
Andres Meira: So the time that a user gets in terms of what we call the window of opportunity for them to actually act on the information is a variable and it depends on where the earthquake is relative to the user. So, I'll give you an example. Right now, I'm in Mexico City. If we are detecting an earthquake in Acapulco, then you might get 60 seconds of advance warning because an earthquake travels at more or less a fixed velocity, which is known, and so the distance and the velocity give you the time that you're going to be getting.
If that earthquake was in the South of Mexico in Oaxaca, we might get two minutes. Now, this is a variable. So of course, if you are in Istanbul, you might be very near the fault line or Kathmandu. You might be near the fault line. If the distance is less than what I just described, the time goes down. But even if you only have five seconds or 10 seconds, which might happen in the Bay area, for example, that's still okay. You can still ask children in a school to get underneath the furniture. You can still ask surgeons in a hospital to stop doing the surgery. There are many things you can do and there are also automated things. You can shut off elevators or turn off gas pipes. So any time is good, but the actual time itself is a variable.
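To make the distance-and-velocity relationship concrete, here is a back-of-the-envelope calculation with purely illustrative numbers (wave speed and detection latency vary in practice):
```
# Warning time is roughly: distance to the epicenter / shaking-wave speed - detection latency.
# Example: an event ~320 km away, waves at ~4 km/s, ~10 s spent detecting and alerting.
$ echo "scale=1; 320 / 4.0 - 10" | bc
70.0
```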
**Swapnil Bhartiya: The most interesting thing that you are doing is that you are also open sourcing some of these technologies. Talk about what components you have open sourced and why.**
Andres Meira: Open sourcing was a tough decision for us. It wasn't something we felt comfortable with initially because we spent several years developing these tools, and we're obviously very proud. I think that there came a point where we realized why are we doing this? Are we doing this to develop cool technologies to make some money or to save lives? All of us live in Mexico, all of us have seen the devastation of these things. We realized that open source was the only way to really accelerate what we're doing.
If we want to reach people in these countries that I've mentioned; if we really want people to work on our technology as well and make it better, which means better alert times, less false positives. If we want to really take this to the next level, then we can't do it on our own. It will take a long time and we may never get there.
So that was the idea for the open source. And then we thought about what we could do with open source. We identified three of our core technologies and by that I mean the sensors, the detection system, which lives in the cloud, but could also live on a Raspberry Pi, and then the way we alert people. The last part is really quite open. It depends on the context. It could be a radio station. It could be a mobile app, which we've got on the website, on the GitHub. It could be many things. Loudspeakers. So those three core components, we now have published in our repo, which is OpenEEW on GitHub. And from there, people can pick and choose.
It might be that some people are data scientists so they might go just for the data because we also publish over a terabyte of accelerometer data from our networks. So people might be developing new detection systems using machine learning, and we've got instructions for that and we would very much welcome it. Then we have something for the people who do front end development. So they might be helping us with the applications and then we also have something for the makers and the hardware guys. So they might be interested in working on the sensors and the firmware. There's really a whole suite of technologies that we published.
**Swapnil Bhartiya: There are other earthquake warning systems. How is OpenEEW different?**
Andres Meira: I would divide the other systems into two categories. I would look at the national systems. I would look at say the Japanese or the California and the West coast system called Shake Alert. Those are systems with significant public funding and have taken decades to develop. I would put those into one category, and in another category I would look at some applications that people have developed. My Shake or Skylert, or there's many of them.
If you look at the first category, I would say that the main difference is that we understand the limitations of those systems because an earthquake in Northern Mexico is going to affect California and vice versa. An earthquake in Guatemala is going to affect Mexico and vice versa. An earthquake in the Dominican Republic is going to affect Puerto Rico. The point is that earthquakes don't respect geography or political boundaries. And so we think national systems are limited, and so far they are limited by their borders. So, that was the first thing.
In terms of the technology, actually in many ways, the MEMS accelerometers that we use now are streets ahead of where we were a couple of years ago. And it really allows us to detect earthquakes hundreds of kilometers away. And actually, we can perform as well as these national systems. We've studied our system versus the Mexican national system called SASMEX, and more often than not, we are faster and more accurate. It's on our website. So there's no reason to say that our technology is worse. In fact, having cheaper sensors means you can have huge networks and these arrays are what make all the difference.
In terms of the private ones, the problem with those is that sometimes they don't have the investment to really do wide coverage. So open source is our strength there because we can rely on many people to add to the project.
**Swapnil Bhartiya: What kind of roadmap do you have for the project? How do you see the evolution of the project itself?**
Andres Meira: So this has been a new area for me; I've had to learn. The governance of OpenEEW as of today, like you mentioned, is now under the umbrella of the Linux Foundation. So this is now a Linux Foundation project and they have certain prerequisites. So we had to form a technical committee. This committee makes the steering decisions and creates the roadmap you mentioned. So, the roadmap is now published on the GitHub, and it's a work in progress, but effectively we're looking 12 months ahead and we've identified some areas that really need priority. Machine learning, as you mentioned, is definitely something that will be a huge change in this world because if we can detect earthquakes, potentially with just a single station with a much higher degree of certainty, then we can create networks that are less dense. So you can have something in Northern India and in Nepal, in Ecuador, with just a handful of sensors. So that's a real holy grail for us.
We also are asking on the roadmap for people to work with us in lots of other areas. In terms of the sensors themselves, we want to do more detection on the edge. We feel that edge computing with the sensors is obviously a much better solution than what we do now, which has a lot of cloud detection. But if we can move a lot of that work to the actual devices, then I think we're going to have much smarter networks and less telemetry, which opens up new connectivity options. So, the sensors as well are another area of priority on the roadmap.
**Swapnil Bhartiya: What kind of people would you like to get involved with and how can they get involved?**
Andres Meira: So as of today, we're formally announcing the initiative and I would really invite people to go to OpenEEW.com, where we've got a site that outlines some areas that people can get involved with. We've tried to consider what type of people would join the project. So you're going to get seismologists. We have seismologists from Harvard University and from other areas. They're most interested in the data from what we've seen so far. They're going to be looking at the data sets that we've offered and some of them are already looking at machine learning. So there's many things that they might be looking at. Of course, anyone involved with Python and machine learning, data scientists in general, might also do similar things. Ultimately, you can be agnostic about seismology. It shouldn't put you off because we've tried to abstract it away. We've got down to the point where this is really just data.
Then we've also identified the engineers and the makers, and we've tried to guide them towards the repos, like the sensory posts. We are asking them to help us with the firmware and the hardware. And then, for your more typical full stack or front end developer, we've got some other repos that deal with the actual applications. How does the user get the data? How does the user get the alerts? There's a lot of work we can be doing there as well.
So, different people might have different interests. Someone might just want to take it all. Maybe someone might want to start a network in the community, but isn't technical and that's fine. We have a Slack channel where people can join and people can say, “Hey, I'm in this part of the world and I'm looking for people to help me with the sensors. I can do this part.” Maybe an entrepreneur might want to join and look for the technical people.
So, we're just open to anybody who is keen on the mission, and they're welcome to join.
--------------------------------------------------------------------------------
via: https://www.linux.com/featured/open-source-project-for-earthquake-warning-systems/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 reasons Jamstack is changing web development)
[#]: via: (https://opensource.com/article/20/9/jamstack)
[#]: author: (Phil Hawksworth https://opensource.com/users/phil-hawksworth)
4 reasons Jamstack is changing web development
======
Jamstack allows web developers to move far beyond static sites without
the need for complex, expensive active-hosting infrastructure.
![spiderweb diagram][1]
The way we use and the way we build the web have evolved dramatically since its inception. Developers have seen the rise and fall of many architectural and development paradigms intended to satisfy more complex user experiences, support evolving device capabilities, and enable more effective development workflows.
In 2015, [Netlify][2] founders Matt Biilmann and Chris Bach coined the term "[Jamstack][3]" to describe the architectural model they were championing and that was gaining popularity. In reality, the foundations of this model have existed from the beginning of the web. But multiple factors led them to coin this new term to encapsulate the approach and to give developers and technical architects a better means to discuss it.
In this article, I'll look at those factors, Jamstack's attributes, why the term came into existence, and how it is changing how we approach web development.
### What is Jamstack?
All Jamstack sites share a core principle: They are a collection of static assets generated during a build or compilation process so that they can be served from a simplified web server or directly from a content delivery network (CDN).
Before the term "Jamstack" emerged, many people described these sites as "static." This describes how the first sites on the web were created (although CDNs would come later). But the term "static sites" does a poor job of capturing what is possible with the Jamstack due to the way tools, services, and browsers have evolved.
The simplest Jamstack site is a single HTML file served as a static file. For a long time, open source web servers efficiently hosted static assets this way. This has become a commodity, with companies including Amazon, Microsoft, and Google offering hosting services based on file serving rather than spending compute cycles generating a response for each request on-demand.
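As a minimal illustration, the following shell sketch creates such a single-file "site" and serves it as static assets from a local directory (the simple Python web server here just stands in for a CDN or file-serving host):
```
# Create a one-page static site and serve it locally.
$ mkdir -p site
$ cat > site/index.html <<'EOF'
<!DOCTYPE html>
<html>
  <head><title>Hello, Jamstack</title></head>
  <body><h1>Hello from a single static file</h1></body>
</html>
EOF
$ cd site && python3 -m http.server 8000   # then browse to http://localhost:8000
```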
But that's just a static site? Right?
Well, yes. But it's the thin end of the wedge. Jamstack builds upon this to deliver sites that confound the term "static" as a useful descriptor.
If you take things a stage further and introduce JavaScript into the equation, you can begin enhancing the user's experience. Modern browsers have increasingly capable JavaScript engines (like the open source [V8][4] from Google) and powerful browser [APIs][5] to enable services such as local-caching, location services, identity services, media access, and much more.
In many ways, the browser's JavaScript engine has replaced the runtime environment needed to perform dynamic operations in web experiences. Whereas a traditional technology stack such as [LAMP][6] requires configuration and maintenance of an operating system (Linux) and an active web server (Apache), these are not considerations with Jamstack. Files are served statically to clients (browsers), and if any computation is required, it can happen there rather than on the hosting infrastructure.
As Matt Biilmann describes it, "the runtime has moved up a level, to the browser."
Web experiences don't always require content to be dynamic or personalized, but they often do. Jamstack sites can provide this, thanks to the efficient use of JavaScript as well as a booming API economy. Many companies now provide content, tools, and services via APIs. These APIs enable even small project teams to inject previously unattainable, prohibitively expensive, and complex abilities into their Jamstack projects. For example, rather than needing to build identity, payment, and fulfillment services or to host, maintain, secure, and scale database capabilities, teams can source these functionalities from experts in those areas through APIs.
Businesses have emerged to provide these and many other services with all of the economies of scale and domain specializations needed to make them robust, efficient, and sustainable. The ability to consume these services via APIs decouples them from web applications' code, which is a very desirable thing.
Because these services took us beyond the old concept of static sites, a more descriptive term was needed.
### What's in a name?
The _JavaScript_ available in modern browsers, calling on _APIs_, and enriching the underlying site content delivered with _markup_ (HTML) are the J, A, and M in the Jamstack name. They identify properties that allow sites to move far beyond static sites without the need for complex and expensive active-hosting infrastructure.
But whether you decide to use all or just some of these attributes, the Jamstack's key principle is that the assets are created in advance to vastly improve hosting and development workflows. It's a shift from the higher-risk, just-in-time request-servicing method to a simplified, more predictable, prepared-in-advance approach.
As [Aaron Swartz][7] succinctly put it way back in 2002, "Bake, don't fry" your pages.
### 4 benefits of using Jamstack
Embracing this model of decoupling the generation (or rendering) of assets from the work of serving assets creates significant opportunities.
#### Lowering the cost of scaling
In a traditional stack, where views are generated for each incoming request, there is a correlation between the volume of traffic and the computation work done on the servers. This might reach all levels of the hosting stack, from load balancers and web servers to application servers and database servers. When these additional layers of infrastructure are provisioned to help manage the load, it adds cost and complexity to the environment and the work of operating the infrastructure and the site itself.
Jamstack sites flatten that stack. The work of generating the views happens only when the code or content changes, rather than for every request. Therefore, the site can be prepared in advance and hosted directly from a CDN without needing to perform expensive computations on the fly. Large pieces of infrastructure (and the work associated with them) disappear.
In short: Jamstack sites are optimized for scale by default.
#### Improving speed
Traditionally, to improve the hosting infrastructure's response time, those with the budget and the skills would add a CDN. Identifying assets that might be considered "static" and offloading serving those resources to a CDN could reduce the load on the web-serving infrastructure. Therefore, some requests could be served more rapidly from a CDN that is optimized for that task.
With a Jamstack site, the site is served entirely from the CDN. This avoids the complex logic of determining what must be served dynamically and what can be served from a CDN.
Every deployment becomes an operation to update the entire site on the CDN rather than across many pieces of infrastructure. This allows you to automate the process, which can increase your confidence in and decrease the cost and friction of deploying site updates.
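One common shape for that automation, sketched here with generic tooling (the build command, output directory, and bucket name are placeholders rather than anything prescribed by the article):
```
# Build the site into a static output directory, then publish everything in one step.
$ npm run build                                        # or hugo, jekyll build, eleventy, ...
$ aws s3 sync ./dist s3://example-site-bucket --delete # push the generated assets to the CDN origin
```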
#### Reducing security risks
Removing hosting infrastructure, especially servers that do computation based on the requests they receive, has a profound impact on a site's security profile. A Jamstack site has far fewer attack vectors than a traditional site since many servers are no longer needed. There is no server more secure than the one that does not exist.
The CDN infrastructure remains but is optimized to serve pre-generated assets. Because these are read-only operations, they have fewer opportunities for attack.
#### Supercharging the workflow
By removing so many moving parts from site hosting, you can vastly improve the workflows involved in developing and deploying them.
Jamstack sites support [version control from end to end][8] and commonly use Git and Git conventions to do everything from defining and provisioning new environments to executing a deployment. Deployments no longer need to change the state, resources, or configuration of multiple pieces of hosting infrastructure. And they can be tested locally, on staging environments, and in production environments with ease.
The approach also allows more comprehensive project encapsulation. A site's code repository can include everything needed to bootstrap a project, including defining the dependencies and operations involved in building the site. This simplifies onboarding developers and walking the path to production. (Here's [an example][9].)
### Jamstack for all
Web developers familiar with a wide variety of tools and frameworks are embracing the Jamstack. They are achieving new levels of productivity, ignited by the recognition that they can use many tools and languages that put more power and ability in their hands.
Open source libraries and frameworks with high levels of adoption and love from the development community are being used in combination and with third- and first-party APIs to produce incredibly capable solutions. And the low barriers of entry mean they are easier to explore, leading to higher levels of developer empowerment, effectiveness, and enthusiasm.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/jamstack
作者:[Phil Hawksworth][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/phil-hawksworth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-cms-build-howto-tutorial.png?itok=bRbCJt1U (spiderweb diagram)
[2]: https://www.netlify.com/jamstack?utm_source=opensource.com&utm_medium=jamstack-benefits-pnh&utm_campaign=devex
[3]: https://jamstack.org/?utm_source=opensource.com&utm_medium=jamstack-benefits-pnh&utm_campaign=devex
[4]: https://v8.dev/
[5]: https://www.redhat.com/en/topics/api/what-are-application-programming-interfaces
[6]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
[7]: http://www.aaronsw.com/weblog/000404
[8]: https://www.netlify.com/products/build/?utm_source=opensource.com&utm_medium=jamstack-benefits-pnh&utm_campaign=devex
[9]: https://github.com/philhawksworth/hello-trello

View File

@ -0,0 +1,137 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My dramatic journey to becoming an open source engineer)
[#]: via: (https://opensource.com/article/20/9/open-source-story)
[#]: author: (Anisha Swain https://opensource.com/users/anishaswain)
My dramatic journey to becoming an open source engineer
======
This is the story of how I grew from hating to loving open source
technology.
![Love and hate][1]
It's been five years and a heck of a journey from being a non-programmer to becoming an associate software engineer at Red Hat. It's a story worth telling—not because I have achieved a lot, but because of so much drama and so many pitfalls. So grab a cup of coffee, and I will share the unturned pages of my love story with technology.
People say love is as powerful as hate. And love stories that start with hate are often the most passionate ones. My love story with technology was just like that. I got into the world of programming in my freshman year of college. It was my most painful subject. Even though I have always been passionate about futuristic technologies, I didn't know how to move forward towards my passion.
Coming from an interest in instrumentation and electronics engineering, my distaste for programming increased even more with my first failed attempt at passing an interview for my college's technical society, as 80% of the questions were about programming. In my second attempt, I went in prepared, all set for my war with programming. And, luckily, I cleared it.
### Getting started
As the phrase goes, sometimes it takes a wrong turn to get you to the right place. At first, it did feel like a wrong turn. Then bit by bit, I learned something precious, not through learning, but by _listening_. I started my journey by Googling the technical terms I'd heard people utter, which is how I started growing. I came to learn about web design, and the first website I designed was the front page for my college hostel. I had never been so excited about a project before. That simple HTML/CSS website was nothing less than magic to me.
Have you heard of love at first sight? It was like that. At that moment, I fell in love with the work I was doing. I continued learning different web design frameworks (like [Bootstrap][2] and [Material Design][3]), libraries like [jQuery][4], and web development techniques like [Ajax][5].
### Gaining experience
In 2016, I received my first internship, working as a frontend developer. Even though it lasted only a short time, it made me realize that I can contribute code, and it filled me with even more confidence.
![Anisha Swain sitting by water][6]
(Anisha Swain, [CC BY-SA 4.0][7])
Gradually, I started learning server-side code, like [Node.js][8], the [Express][9] framework, [MongoDB][10], [MySQL][11], and more. My seniors helped make my road to understanding the client-server interaction fairly easy.
We heard about a hackathon organized by Tata Consultancy Services. We applied for it with an idea to add [gamification to cleaning][12] public places. (This is probably still the most engrossing project I have worked on to date.) And we were selected! After about a month of preparation, I was going to Mumbai, which was my first trip outside my state, Odisha. Today, when people ask me about young girls being banned from such opportunities, I answer, "tell me about it." I was the only girl on this trip and, yes, it was a little scary. But when I look back, I realize if I hadn't taken that calculated risk at that time, I probably wouldn't have received opportunities to travel in the future.
In time, I started learning Python, and the area that interested me the most was image processing. I started doing small projects on the [OpenCV][13] computer vision and machine learning library and later collaborated on making a prototype of a self-driving car—just a small one with basic features.
### My open source debut
Another turning point came when I learned about open source programs while participating in the Rails Girls Summer of Code (RGSoC) in 2017 with [Manaswini Das][14]. RGSoC is a global fellowship program for women and non-binary coders. Students receive three-month scholarships to work on existing open source projects and expand their skill sets.
I participated in a project named [HospitalRun][15]. It was an exciting—and honestly scary—experience for me. It was exciting because, for the first time, it felt like I was part of something meaningful, broader, and significant. A simple change I made would be visible to people all over the world. My name would be on the contributor list for a large community. It might sound like nothing, but at that time, it was like a wave of motivation. It was scary because the application was in [Ember.js][16], and learning Ember.js so quickly is an experience that can't be described. I will be ever grateful to my mentors, [Joel Worrall][17] and [Joel Glovier][18], for all the support they provided our group. Even though we didn't get selected for the program, this experience will always be a shining part of my story.
The best thing about working with technology is, it never felt like work. Neither then nor now. It was always like I was self-reflecting on a computer screen.
### Disappointment, then winning
The Summer Research Fellowships Program (SRFP), offered by the Indian Academy of Sciences (IAS), is a summer immersion experience that supplements research activities during the academic year. I wanted to work under this renowned research fellowship, and in 2017, I anxiously looked for my name among the awardees. Alas! My name wasn't there. And I was upset. Despite having the slimmest of chances to receive a fellowship, I had reviewed the profiles of all the professors who were working in the field of signal and image processing, shortlisted around 30 of them, and gone through their research papers. As I had some experience with OpenCV and image processing, I was somehow expecting to be selected.
However, I moved on and applied for RGSoC. I devoted two months to the application, but, again, my team wasn't selected. The semester was edging to an end, and I had no internships in hand. I was disheartened and clueless about what would happen next. I started applying for local internships, and I was completely upset.
But I was not aware that the IAS fellowship's second selection list had not yet been announced. On May 2, I got a message from one of my seniors with a link to the IAS second selection result list. And voila! I found my name. Anisha Swain. I couldn't believe my eyes. The credentials matched mine! I was selected to work on image processing.
The confirmation email said I was going to work in Delhi. But there was a problem with where I could live. The accommodation list was only for the people who were selected for the fellowship on the first list. I had a dilemma. My parents strictly banned me from going without proper accommodation. But when there is a will, there is a way, and I found I could stay on the Delhi University campus. In two months at Delhi, while doing research, having fun, and traveling, I experienced everything I could. I traveled to all the metro cities, and Delhi is the most beautiful city I have ever been to.
### When one door closes, another opens
In 2018, Google Summer of Code (GSoC) was just around the corner. GSoC is an annual international program where students are awarded stipends for completing a free and open source software coding project during the summer. Nothing was more prestigious to me at the time than getting into this program. Now I wonder why. I still see students going crazy over it, as if nothing else is left in life if they don't crack it. I was also upset at not being selected. Not once but twice. But as they say, "The journey is as important as the destination."
What matters more than getting selected is the learning you do during the process, as it will always stay with you. While applying to GSoC, I learned concept visualization with [D3.js][19] and [Three.js][20].
Even though I couldn't crack GSoC, my learning helped me land another internship, in 2019, at Mytrah Energy in Hyderabad. It was my first industrial experience, and I learned to do data visualization on a large scale. I dealt with data in JSON and CSV formats and created interactive charts with SVG and Canvas. The experience also helped me deal with my fear of getting into corporate life. It was a brief look into the life I was aspiring for.
### Walking into open source
Some of my friends selected for GSoC shared with me LinkedIn contact information for members of Red Hat's talent acquisition team. In 2018, I messaged them and sent my resume and personal description, but didn't receive a reply for more than a year.
But then I met them at the 2019 Grace Hopper Celebration India (GHCI) conference, Asia's largest gathering of women technologists. During the career fair, they asked for my resume, and, to my utter surprise, they remembered me from my LinkedIn messages a year before. Soon, I got an interview call. During my first interview, I lost my connection and the interview couldn't be completed, but they were kind enough to understand the situation and reschedule it. The next interview round took around three hours, and just a few hours later, I got a job offer by email. It is the best thing that has ever happened to me!
![An office desk][21]
(Anisha Swain, [CC BY-SA 4.0][7])
Today, I am an associate software engineer for Red Hat's Performance and Scale Engineering team. I work with [React][22] and design frameworks with [Ant Design][23] and [PatternFly][24]. I also deal with web technologies like [Elasticsearch][25], [GraphQL][26], and [Postgres][27]. I try to share my knowledge with others through conferences, meetups, and articles. None of this could have been possible without my "second family" from the [Zairza Cetb][28] club. This makes me realize the power of a community and the important role our surroundings play in our development.
### Rules for success
![Anisha Swain in snow][29]
(Anisha Swain, [CC BY-SA 4.0][7])
Through my journey, I have learned many things about getting where you want to be in life, including:
1. Hard work is as important as smart work.
2. Things will eventually be connected and fall into place if you have the desire to grow.
3. Work for 100% if you want to achieve 80%.
4. Stay hungry, stay foolish, and stay humble.
5. Always give back knowledge to the community.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/open-source-story
作者:[Anisha Swain][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/anishaswain
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/business_lovehate_0310.png?itok=2IBhQqIn (Love and hate)
[2]: https://getbootstrap.com/
[3]: https://material.io/design
[4]: https://jquery.com/
[5]: https://en.wikipedia.org/wiki/Ajax_%28programming%29
[6]: https://opensource.com/sites/default/files/uploads/anishaswain_seaside.jpg (Anisha Swain sitting by water)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://nodejs.org
[9]: https://expressjs.com/
[10]: https://www.mongodb.com/
[11]: https://www.mysql.com/
[12]: https://github.com/Anisha1234/Clean_Clan
[13]: https://opencv.org/
[14]: https://www.linkedin.com/in/manaswini-das/
[15]: https://hospitalrun.io/
[16]: https://emberjs.com/
[17]: https://www.linkedin.com/in/jworrall/
[18]: https://www.linkedin.com/in/jglovier/
[19]: https://d3js.org/
[20]: https://threejs.org/
[21]: https://opensource.com/sites/default/files/uploads/anishaswain_desk.jpg (An office desk)
[22]: https://reactjs.org/
[23]: https://ant.design/
[24]: https://www.patternfly.org/
[25]: https://www.elastic.co/elasticsearch/
[26]: https://graphql.org/
[27]: https://www.postgresql.org/
[28]: https://medium.com/u/a9c306e657d0?source=post_page-----aa2c4cb5d924----------------------
[29]: https://opensource.com/sites/default/files/uploads/anishaswain_snow.jpg (Anisha Swain in snow)

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (gxlct008)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,288 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create a mobile app with Flutter)
[#]: via: (https://opensource.com/article/20/9/mobile-app-flutter)
[#]: author: (Vitaly Kuprenko https://opensource.com/users/kooper)
Create a mobile app with Flutter
======
Start your journey toward cross-platform development with the popular
Flutter framework.
![A person looking at a phone][1]
[Flutter][2] is a popular project among mobile developers around the world. The framework has a massive, friendly community of enthusiasts, which continues to grow as Flutter helps programmers take their projects into the mobile space.
This tutorial is meant to help you start doing mobile development with Flutter. After reading it, you'll know how to quickly install and set up the framework to start coding for smartphones, tablets, and other platforms.
This how-to assumes you have [Android Studio][3] installed on your computer and some experience working with it.
### What is Flutter?
Flutter enables developers to build apps for several platforms, including:
* Android
* iOS
* Web (in beta)
* macOS (in development)
* Linux (in development)
Support for macOS and Linux is in early development, while web support is expected to be released soon. This means that you can try out its capabilities now (as I'll describe below).
### Install Flutter
I'm using Ubuntu 18.04, but the installation process is similar with other Linux distributions, such as Arch or Mint.
#### Install with snapd
To install Flutter on Ubuntu or similar distributions using [snapd][4], enter this in a terminal:
```
$ sudo snap install flutter --classic
$ sudo snap install flutter --classic
flutter 0+git.142868f from flutter Team/ installed
```
Then launch it using the `flutter` command. Upon the first launch, the framework downloads to your computer:
```
$ flutter
Initializing Flutter
Downloading https://storage.googleapis.com/flutter_infra[...]
```
Once the download is finished, you'll see a message telling you that Flutter is initialized:
![Flutter initialized][5]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
#### Install manually
If you don't have snapd or your distribution isn't Ubuntu, the installation process will be a little bit different. In that case, [download][7] the version of Flutter recommended for your operating system.
![Install Flutter manually][8]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Then extract it to your home directory.
Open the `.bashrc` file in your home directory (or `.zshrc` if you use the [Z shell][9]) in your favorite text editor. Because it's a hidden file, you must first enable showing hidden files in your file manager or open it from a terminal with:
```
$ gedit ~/.bashrc &
```
Add the following line to the end of the file:
```
export PATH="$PATH:$HOME/flutter/bin"
```
Save and close the file. Keep in mind that if you extracted Flutter somewhere other than your home directory, the [path to Flutter SDK][10] will be different.
Close your terminal and then open it again so that your new configuration loads. Alternatively, you can source the configuration with:
```
$ . ~/.bashrc
```
If you don't see an error, then everything is fine.
This installation method is a little bit harder than using the `snap` command, but it's pretty versatile and lets you install the framework on almost any distribution.
#### Check the installation
To check the result, enter the following in the terminal:
```
$ flutter doctor -v
```
You'll see information about installed components. Don't worry if you see errors. You haven't installed any IDE plugins for working with Flutter SDK yet.
![Checking Flutter installation with the doctor command][11]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
### Install IDE plugins
You should install plugins in your [integrated development environment (IDE)][12] to help it interface with the Flutter SDK, interact with devices, and build code.
The three main IDE tools that are commonly used for Flutter development are IntelliJ IDEA (Community Edition), Android Studio, and VS Code (or [VSCodium][13]). I'm using Android Studio in this tutorial, but the steps are similar to how they work on IntelliJ IDEA (Community Edition) since they're built on the same platform.
First, launch **Android Studio**. Open **Settings** and go to the **Plugins** pane, and select the **Marketplace** tab. Enter **Flutter** in the search line and click **Install**.
![Flutter plugins][14]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
You'll probably see an option to install the **Dart** plugin; agree to it. If you don't see the Dart option, then install it manually by repeating the steps above. I also recommend using the **Rainbow Brackets** plugin, which makes code navigation easier.
That's it! You've installed all the plugins you need. You can check by entering a familiar command in the terminal:
```
$ flutter doctor -v
```
![Checking Flutter plugins with the doctor command][15]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
### Build your "Hello World" application
To start a new project, create a Flutter project:
1. Select **New -> New Flutter project**.
![Creating a new Flutter plugin][16]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
2. In the window, choose the type of project you want. In this case, you need **Flutter Application**.
3. Name your project **hello_world**. Note that the project name can't contain spaces, so use an underscore instead. You may also need to specify the path to the SDK.
![Naming a new Flutter plugin][17]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
4. Enter the package name.
You've created a project! Now you can launch it on a device or by using an emulator.
![Device options in Flutter][18]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Select the device you want and press **Run**. In a moment, you will see the result.
![Flutter demo on mobile device][19]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Now you can start working on an [intermediate project][20].
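If you prefer the command line to the IDE wizard, the same kind of project can be created and launched with the Flutter CLI. This is just a quick sketch; `hello_world` matches the project name used above, and the device list will differ on your machine:
```
# Create a new Flutter application and enter its directory
$ flutter create hello_world
$ cd hello_world
# See which devices and emulators are available
$ flutter devices
# Build and run the app on the selected device
$ flutter run
```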
### Try Flutter for web
Before you install Flutter components for the web, you should know that Flutter's support for web apps is pretty raw at the moment. So it's not a good idea to use it for complicated projects yet.
Flutter for web is not active in the basic SDK by default. To switch it on, go to the beta channel. To do this, enter the following command in the terminal:
```
$ flutter channel beta
```
![flutter channel beta output][21]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Next, upgrade Flutter according to the beta branch by using the command:
```
$ flutter upgrade
```
![flutter upgrade output][22]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
To make Flutter for web work, enter:
```
$ flutter config --enable-web
```
Restart your IDE; this helps Android Studio pick up the new configuration and reload the list of devices. You should see several new devices:
![Flutter for web device options][23]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Selecting **Chrome** launches an app in the browser, while **Web Server** gives you the link to your web app, which you can open in any browser.
Still, it's not time to rush into development because your current project doesn't support the web. To improve it, open the terminal in the project's root and enter:
```
$ flutter create .
```
This command recreates the project, adding web support. The existing code won't be deleted.
Note that the tree has changed and now has a "web" directory:
![File tree with web directory][24]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Now you can get to work. Select **Chrome** and press **Run**. In a moment, you'll see the browser window with your app.
![Flutter web app demo][25]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Congratulations! You've just launched a project for the browser and can continue working with it as with any other website.
All of this comes from the same codebase because Flutter makes it possible to write code for both mobile platforms and the web with little to no changes.
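For reference, here is a rough sketch of the terminal equivalent, assuming web support is enabled as described above; run these from the project root:
```
# Run the app in Chrome
$ flutter run -d chrome
# Produce a release build of the web app under build/web/
$ flutter build web
```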
### Do more with Flutter
Flutter is a powerful tool for mobile development, and moreover, it's an important evolutionary step toward cross-platform development. Learn it, use it, and deliver your apps to all the platforms!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/mobile-app-flutter
作者:[Vitaly Kuprenko][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kooper
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
[2]: https://flutter.dev/
[3]: https://developer.android.com/studio
[4]: https://snapcraft.io/docs/getting-started
[5]: https://opensource.com/sites/default/files/uploads/flutter1_initialized.png (Flutter initialized)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://flutter.dev/docs/get-started/install/linux
[8]: https://opensource.com/sites/default/files/uploads/flutter2_manual-install.png (Install Flutter manually)
[9]: https://opensource.com/article/19/9/getting-started-zsh
[10]: https://opensource.com/article/17/6/set-path-linux
[11]: https://opensource.com/sites/default/files/uploads/flutter3_doctor.png (Checking Flutter installation with the doctor command)
[12]: https://www.redhat.com/en/topics/middleware/what-is-ide
[13]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
[14]: https://opensource.com/sites/default/files/uploads/flutter4_plugins.png (Flutter plugins)
[15]: https://opensource.com/sites/default/files/uploads/flutter5_plugincheck.png (Checking Flutter plugins with the doctor command)
[16]: https://opensource.com/sites/default/files/uploads/flutter6_newproject.png (Creating a new Flutter plugin)
[17]: https://opensource.com/sites/default/files/uploads/flutter7_projectname.png (Naming a new Flutter plugin)
[18]: https://opensource.com/sites/default/files/uploads/flutter8_launchflutter.png (Device options in Flutter)
[19]: https://opensource.com/sites/default/files/uploads/flutter9_demo.png (Flutter demo on mobile device)
[20]: https://opensource.com/article/18/6/flutter
[21]: https://opensource.com/sites/default/files/uploads/flutter10_beta.png (flutter channel beta output)
[22]: https://opensource.com/sites/default/files/uploads/flutter11_upgrade.png (flutter upgrade output)
[23]: https://opensource.com/sites/default/files/uploads/flutter12_new-devices.png (Flutter for web device options)
[24]: https://opensource.com/sites/default/files/uploads/flutter13_tree.png (File tree with web directory)
[25]: https://opensource.com/sites/default/files/uploads/flutter14_webapp.png (Flutter web app demo)

View File

@ -0,0 +1,179 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Developing an email alert system using a surveillance camera with Node-RED and TensorFlow.js)
[#]: via: (https://www.linux.com/news/developing-an-email-alert-system-using-a-surveillance-camera-with-node-red-and-tensorflow-js/)
[#]: author: (Linux.com Editorial Staff https://www.linux.com/author/linuxdotcom/)
Developing an email alert system using a surveillance camera with Node-RED and TensorFlow.js
======
## **Overview**
In a previous article, we introduced [a procedure for developing an image recognition flow using Node-RED and TensorFlow.js.][1] Now, let's apply what we learned there and develop an e-mail alert system that uses a surveillance camera together with image recognition. As shown in the following image, we will create a flow that automatically sends an email alert when a suspicious person is captured within a surveillance camera frame.
![][2]
## **Objective: Develop flow**
In this flow, the image of the surveillance camera is periodically acquired from the webserver, and the image is displayed under the **“Original image”** node in the lower left. After that, the image is recognized using the **TensorFlow.js** node. The recognition result and the image with recognition results are displayed under the **debug** tab and the **“image with annotation”** node, respectively.
![][3]
If a person is detected by image recognition, an alert mail with the image file attached will be sent using the **SendGrid** node. Since it is difficult to set up a real surveillance camera, we will use a sample [image from a surveillance camera in Kanagawa Prefecture, Japan][4], which is used to check the amount of water in a river.
We will explain the procedure for creating this flow in the following sections. For the Node-RED environment, use your local PC, a Raspberry Pi, or a cloud-based deployment.
## **Install the required nodes**
Click the hamburger menu on the top right of the Node-RED flow editor, go to **“Manage palette” -> “Palette” tab -> “Install”** tab, and install the following nodes.
* [node-red-contrib-tensorflow][5]: Image recognition node using TensorFlow.js
* [node-red-contrib-image-output][6]: Nodes that display images on the Flow Editor
* [node-red-contrib-sendgrid][7]: Nodes that send mail using SendGrid
## **Create a flow of acquiring image data**
First, create a flow that acquires the image binary data from the webserver. As in the flow below, place an inject node (the name will be changed to **“timestamp”** when placed in the workspace), **http request** node, and **image preview** node, and connect them with wires in the user interface.
![][8]
Then double-click the **http request** node to change the node property settings.
## **Adjust** _**http request**_ **node property settings**
 
Paste the URL of the surveillance camera image into the URL field on the property settings screen of the **http request** node. (In Google Chrome, when you right-click on the image and select **“Copy image address”** from the menu, the URL of the image is copied to the clipboard.) Also, select **“a binary buffer”** as the output format.
![][9]
## **Execute the flow to acquire image data**
Click the **Deploy** button at the top right of the flow editor, then click the button to the left of the **inject** node. The message is then sent from the **inject** node to the **http request** node through the wire, and the image is acquired from the web server that provides the image of the surveillance camera. After receiving the image data, a message containing the data in binary format is sent to the **image preview** node, and the image is displayed under the **image preview** node.
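If you want to confirm from a terminal that the camera URL really returns binary image data before wiring it into the flow, a quick check with `curl` and `file` is enough; the URL below is only a placeholder for the address you copied from the camera page:
```
# Placeholder URL -- replace it with the copied image address
$ curl -s -o snapshot.jpg "https://example.com/camera/snapshot.jpg"
# Expect output along the lines of "snapshot.jpg: JPEG image data"
$ file snapshot.jpg
```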
![][10]
 An image of the river taken by the surveillance camera is displayed in the lower right.
## **Create a flow for image recognition of the acquired image data**
Next, create a flow that analyzes what is in the acquired image. Place a **cocossd** node, a **debug** node (the name will be changed to **msg.payload** when you place it), and a second **image preview** node.
Then, connect the **output terminal** on the right side of the **http request** node to the **input terminal** on the left side of the **cocossd** node.
Next, use separate wires to connect the **output terminal** on the right side of the **cocossd** node to the **debug** node, and the same **output terminal** to the **input terminal** on the left side of the **image preview** node.
Through the wire, the binary data of the surveillance camera image is sent to the **cocossd** node, and after the image recognition is performed using **TensorFlow.js,** the object name is displayed in the **debug** node, and the image with the image recognition result is displayed in the **image preview** node. 
![][11]
The **cocossd** node is designed to store the object name in the variable **msg.payload**, and the binary data of the image with the annotation in the variable **msg.annotatedInput**. 
To make this flow work as intended, you need to double-click the **image preview** node used to display the image and change the node property settings.
## **Adjust** _**image preview**_ **node property settings**
By default, the **image preview** node displays the image data stored in the variable **msg.payload**. Here, change this default variable to **msg.annotatedInput**.
![][12]
## **Adjust** _**inject**_ **node property settings**
Since the flow is run regularly every minute, the **inject** node's properties need to be changed. In the **Repeat** pull-down menu, select **“interval”** and set **“1 minute”** as the time interval. Also, since we want to start the periodic run process immediately after pressing the **Deploy** button, select the checkbox on the left side of **“inject once after 0.1 seconds”.**
![][13]
## **Run the flow for image recognition**
The flow process will be run immediately after pressing the **Deploy** button. When the person (author) is shown on the surveillance camera, the image recognition result **“person”** is displayed in the debug tab on the right. Also, below the **image preview** node, you will see the image annotated with an orange square.
![][14]
## **Create a flow to send an email when a person is caught on the surveillance camera**
Finally, create a flow to send the annotated image by email when the object name in the image recognition result is **“person”**. As a subsequent node of the **cocossd** node, place a **switch** node that performs condition determination, a **change** node that assigns values, and a **sendgrid** node that sends an email, and connect each node with a wire.
![][15]
Then, change the property settings for each node, as detailed in the sections below.
## **Adjust the** _**switch**_ **node property settings**
Set the rule to execute the subsequent flow only if **msg.payload** contains the string **“person”**.
To set that rule, enter **“person”** in the comparison string for the condition **“==”** (on the right side of the **“az”** UX element in the property settings dialog for the switch node).
![][16]
## **Adjust the** _**change**_ **node property settings**
To attach the image with annotation to the email, substitute the image data stored in the variable **msg.annotatedInput** to the variable **msg.payload**. First, open the pull-down menu of **“az”** on the right side of the UX element of **“Target value”** and select **“msg.”**. Then enter **“annotatedInput”** in the text area on the right.
![][17]
If you forget to change to **“msg.”** in the pull-down menu that appears when you click **“az”,** the flow often does not work well, so check again to be sure that it is set to **“msg.”**.
## **Adjust the** _**sendgrid**_ **node property settings**
Set the API key from the [SendGrid management screen][18]. And then input the sender email address and recipient email address.
![][19]
Finally, to make it easier to see what each node is doing, open each nodes node properties, and set the appropriate name.
## **Validate the operation of the flow to send an email when the surveillance camera captures a person in frame**
When a person is captured in the image of the surveillance camera, the image recognition result is displayed in the debug tab, just as in the earlier confirmation flow, and an orange frame is drawn in the image under the **image preview** node labeled **“Image with annotation”**. You can see that the person is recognized correctly.
![][20]
After that, if the judgment process, the substitution process, and the email transmission process work as designed, you will receive an email on your smartphone with the annotated image file attached, as follows:
![][21]
## **Conclusion**
By using the flow created in this article, you can also build a simple security system for your own garden using a camera connected to a Raspberry Pi. At a larger scale, image recognition can also be run on image data acquired using network cameras that support protocols such as [ONVIF][22].
*About the author: Kazuhito Yokoi is an Engineer at Hitachi's OSS Solution Center, located in Yokohama, Japan.*
--------------------------------------------------------------------------------
via: https://www.linux.com/news/developing-an-email-alert-system-using-a-surveillance-camera-with-node-red-and-tensorflow-js/
作者:[Linux.com Editorial Staff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/linuxdotcom/
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/news/using-tensorflow-js-and-node-red-with-image-recognition-applications/
[2]: https://www.linux.com/wp-content/uploads/2020/09/tensor1.png
[3]: https://www.linux.com/wp-content/uploads/2020/09/tensor2.png
[4]: http://www.pref.kanagawa.jp/sys/suibou/web_general/suibou_joho/html/camera/past0/p20102_0_6.html
[5]: https://flows.nodered.org/node/node-red-contrib-tensorflow
[6]: https://flows.nodered.org/node/node-red-contrib-image-output
[7]: https://flows.nodered.org/node/node-red-contrib-sendgrid
[8]: https://www.linux.com/wp-content/uploads/2020/09/tensor3.png
[9]: https://www.linux.com/wp-content/uploads/2020/09/tensor4.png
[10]: https://www.linux.com/wp-content/uploads/2020/09/tensor5.png
[11]: https://www.linux.com/wp-content/uploads/2020/09/tensor6.png
[12]: https://www.linux.com/wp-content/uploads/2020/09/tensor7.png
[13]: https://www.linux.com/wp-content/uploads/2020/09/tensor8.png
[14]: https://www.linux.com/wp-content/uploads/2020/09/tensor9.png
[15]: https://www.linux.com/wp-content/uploads/2020/09/tensor10.png
[16]: https://www.linux.com/wp-content/uploads/2020/09/tensor11.png
[17]: https://www.linux.com/wp-content/uploads/2020/09/tensor12.png
[18]: https://sendgrid.com/
[19]: https://www.linux.com/wp-content/uploads/2020/09/tensor13.png
[20]: https://www.linux.com/wp-content/uploads/2020/09/tensor14.png
[21]: https://www.linux.com/wp-content/uploads/2020/09/tensor15.png
[22]: https://www.onvif.org/

View File

@ -0,0 +1,127 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Design a book cover with an open source alternative to InDesign)
[#]: via: (https://opensource.com/article/20/9/open-source-publishing-scribus)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
Design a book cover with an open source alternative to InDesign
======
Use the open source publishing software Scribus to create a cover for
your next self-published book.
![Stack of books for reading][1]
I recently finished writing a book about [C programming][2], which I self-published through [Lulu.com][3]. I've used Lulu for several book projects, and it's a great platform. Earlier this year, Lulu made changes that give authors greater control over creating their book covers. Previously, you just uploaded a pair of large-format images for the front and back book covers. Now, Lulu allows authors to upload a custom PDF exactly sized to your book's dimensions.
You can create the cover using [Scribus][4], the open source page layout program. Here's how I do it.
### Download a template
When you're entering your book project information on Lulu's website, eventually, you'll navigate to the **Design** tab. Under the **Design Your Cover** section on this page, you will find a handy **Download Template** button that provides a PDF template for your book cover.
![Lulu Design your Cover page][5]
(Jim Hall, [CC BY-SA 4.0][6])
Download this template, which gives you the information you need to create your own book cover in Scribus.
![Lulu's cover template][7]
(Jim Hall, [CC BY-SA 4.0][6])
The most important details are:
* Total document size (with bleed)
* Bleed area (from trim edge)
* Spine area
**Bleed** is a printing term that is important when preparing a **print-ready** file for a printer. It is different from a margin in a regular document. When you print a document, you set a page margin for the top, bottom, and sides. In most documents, the margin is usually around an inch.
But in print-ready files, the document size needs to be a little bigger than the finished book because book covers usually include colors or pictures that go all the way to the cover's edge. To create this design, you make the colors or images go beyond your margin, and the print shop trims off the excess to get the cover down to the exact size. Therefore, the **trim** is where the print shop cuts the cover exactly to size. The **bleed area** is the extra part the printer cuts off.
If you didn't have a bleed, the print shop would have a hard time printing the cover exactly to size. If the printer was off by only a little bit, your cover would end up with a tiny, white, unprinted border on one edge. Using a bleed and trim means your cover looks right every time.
### Set up your book cover document in Scribus
To create a new document in Scribus, start with the **New Document** dialog box where you define the document's dimensions. Click on the **Bleeds** tab and enter the bleed size the PDF template says to use. Lulu books usually use 0.125" bleeds on all edges.
For the total document dimension in Scribus, you can't just use the total document size on the PDF template. If you do, your Scribus document will have the wrong dimensions. Instead, you need to do a little math to get the right size.
Look at **Total Document Size (with bleed)** on the PDF template. This is the total size of the PDF that will be sent to the printer, and it includes the back cover, the book spine, and the front cover—including the bleeds. To enter the right dimensions in Scribus, you have to subtract the bleeds from all edges. For example, my latest book is Crown quarto size, which is 7.44" x 9.68" with a spine width of 0.411" after it's bound. With 0.125" bleeds, the **Total Document Size (with bleed)** is 15.541" x 9.93". So, my document size in Scribus is:
* Width: 15.541-(2 x 0.125)=15.291"
* Height: 9.93-(2 x 0.125)=9.68"
![Scribus document setup][8]
(Jim Hall, [CC BY-SA 4.0][6])
This sets up a new Scribus document that's the right size for my book cover. The new Scribus document dimensions should match exactly what is listed as the **Total Document Size (with bleed)** on the PDF template.
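If you prefer not to do that subtraction by hand, a tiny, optional shell helper with `bc` does the same arithmetic; the numbers below are the ones from my template, so substitute your own:
```
$ TOTAL_W=15.541; TOTAL_H=9.93; BLEED=0.125
$ echo "$TOTAL_W - 2*$BLEED" | bc    # document width in Scribus
15.291
$ echo "$TOTAL_H - 2*$BLEED" | bc    # document height in Scribus
9.680
```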
### Start with the spine
When I create a new book cover in Scribus, I like to start with the spine area. This helps me verify that I defined the document correctly in Scribus.
Use the **Rectangle** tool to draw a colored box on the document where the book's spine needs to go. You don't have to draw it exactly the right size and location; just get close and use the **Properties** to set the correct values. In the shape's **Properties**, select the upper-left base point, and enter the x,y position and dimensions where the spine needs to go. Again, you'll need to do a little math and use the dimensions on the PDF template as a reference.
![Empty Scribus document][9]
(Jim Hall, [CC BY-SA 4.0][6])
For example, my book's trim size is 7.44" x 9.68"; that's the size of the front and back covers after the printer trims it. My book's spine area is 0.411", and its bleed is 0.125". That means the correct upper-left x,y position for the book's spine is:
* X-Pos (bleed + trim width): 0.411+7.44=7.8510"
* Y-Pos (minus bleed): -0.125"
The rectangle's dimensions are the full height (including bleed) of my book cover and the spine width indicated in the PDF template:
* Width: 0.411"
* Height: 9.93"
Set the rectangle's **Fill** to your favorite color and the **Stroke** to **None** to hide the border. If you defined your Scribus document correctly, you should end up with a rectangle that stretches to the top and bottom edges of your book cover positioned in the center of the document.
![Book spine in Scribus][10]
(Jim Hall, [CC BY-SA 4.0][6])
If the rectangle doesn't fit the document exactly, you probably set the wrong dimensions when you created the Scribus document. Since you haven't put a lot of effort into the book cover yet, it's probably easiest to start over rather than trying to fix your mistakes.
### The rest is up to you
From there, you can create the rest of your book's cover. Always use the PDF template as a guide. The back cover is on the left, and the front cover is on the right.
I can manage a simple book cover, but I lack the artistic abilities to create a truly eye-catching design. After designing several of my own book covers, I've gained respect for those who can design a good cover. But if you just need to create a simple book cover, you can do it yourself with open source software.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/open-source-publishing-scribus
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_read_list_stack_study.png?itok=GZxb9OAv (Stack of books for reading)
[2]: https://opensource.com/article/20/8/c-programming-cheat-sheet
[3]: https://www.lulu.com/
[4]: https://www.scribus.net/
[5]: https://opensource.com/sites/default/files/uploads/lulu-download-template.jpg (Lulu Design your Cover page)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/uploads/lulu-pdf-template.jpg (Lulu's cover template)
[8]: https://opensource.com/sites/default/files/uploads/scribus-new-document.jpg (Scribus document setup)
[9]: https://opensource.com/sites/default/files/uploads/scribus-empty-document.jpg (Empty Scribus document)
[10]: https://opensource.com/sites/default/files/uploads/scribus-spine-rectangle.jpg (Book spine in Scribus)

View File

@ -0,0 +1,176 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open ports and route traffic through your firewall)
[#]: via: (https://opensource.com/article/20/9/firewall)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Open ports and route traffic through your firewall
======
Safely and securely give outside parties access to your network.
![Traffic lights at night][1]
Ideally, most local networks are protected from the outside world. If you've ever tried installing a service, such as a web server or a [Nextcloud][2] instance at home, then you probably know from first-hand experience that, while the service is easy to reach from inside the network, it's unreachable over the worldwide web.
There are both technical and security reasons for this, but sometimes you want to open access to something within a local network to the outside world. This means you need to be able to route traffic from the internet into your local network—correctly and safely. In this article, I'll explain how.
### Local and public IP addresses
The first thing you need to understand is the difference between a local internet protocol (IP) address and a public IP address. Currently, most of the world (still) uses an addressing system called IPv4, which famously has a limited pool of numbers available to assign to networked electronic devices. In fact, there are more networked devices in the world than there are IPv4 addresses, and yet IPv4 continues to function. This is possible because of local addresses.
All local networks in the world use the _same_ address pools. For instance, my home router's local IP address is 192.168.1.1. Your home router probably uses that very same number, yet when I navigate to 192.168.1.1, I reach _my_ router's login screen and not _your_ router's login screen. That's because your home router actually has two addresses: one public and one local, and the public one shields the local one from being detected by the internet, much less from being confused for someone else's 192.168.1.1.
![network of networks][3]
(Seth Kenlon, [CC BY-SA 4.0][4])
This, in fact, is why the internet is called the internet: it's a "web" of interconnected and otherwise self-contained networks. Each network, whether it's your workplace or your home or your school or a big data center or the "cloud" itself, is a collection of connected hosts that, in turn, communicate with a gateway (usually a router) that manages traffic from the internet and to the local network, as well as out of the local network to the internet.
This means that if you're trying to access a computer on a network that's not the network you're currently attached to, then knowing the local address of that computer does you no good. You need to know the _public_ address of the remote network's gateway. And that's not all. You also need permission to pass through that gateway into the remote network.
### Firewalls
Ideally, there are firewalls all around you, even now. You don't see them (hopefully), but they're there. As technology goes, firewalls have a fun name, but they're actually a little boring. A firewall is just a computer service (also called a "daemon"), a subsystem that runs in the background of most electronic devices. There are many daemons running on your computer, including the one listening for mouse or trackpad movements, for instance. A firewall is a daemon programmed to either accept or deny certain kinds of network traffic.
Firewalls are relatively small programs, so they are embedded in most modern devices. They're running on your mobile phone, on your router, and your computer. Firewalls are designed based on network protocols, and it's part of the specification of talking to other computers that a data packet sent over a network must announce specific pieces of information about itself (or be ignored). One thing that network data contains is a _port_ number, which is one of the primary things a firewall uses when accepting or denying traffic.
Websites, for instance, are hosted on web servers. When you want to view a website, your computer sends network data identifying itself as traffic destined for port 80 of the web host. The web server's firewall is programmed to accept incoming traffic destined for port 80, so it accepts your request (and the web server, in turn, sends you the web page in response). However, were you to send (whether by accident or by design) network data destined for port 22 of that web server, you'd likely be denied by the firewall (and possibly banned for some time).
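If you're curious what this looks like on your own machine, you can list the ports your computer is currently listening on. This is only an illustrative peek, using the `ss` utility that ships with iproute2 on most Linux systems:
```
# Show listening TCP sockets with numeric ports; an SSH daemon, for example,
# appears as a LISTEN entry on port 22
$ ss -tln
```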
This can be a strange concept to understand because, like IP addresses, ports and firewalls don't really "exist" in the physical world. These are concepts defined in software. You can't open your computer or your router to physically inspect network ports, and you can't look at a number printed on a chip to find your IP address, and you can't douse your firewall in water to put it out. But now that you know these concepts exist, you know the hurdles involved in getting from one computer in one network to another on a different network.
Now it's time to get around those blockades.
### Your IP address
I assume you have control over your own network, and you're trying to open your own firewalls and route your own traffic to permit outside traffic into your network. First, you need your local and public IP addresses.
To find your local IP address on Linux, you can use the `ip address` command:
```
$ ip addr show | grep "inet "
 inet 127.0.0.1/8 scope host lo
 inet 192.168.1.6/27 brd 192.168.1.31 scope [...]
```
In this example, my local IP address is 192.168.1.6. The other address (127.0.0.1) is a special "loopback" address that your computer uses to refer to itself from within itself.
To find your local IP address on macOS, you can use `ifconfig`:
```
$ ifconfig | grep "inet "
 inet 127.0.0.1 netmask 0xff000000
 inet 192.168.1.6 netmask 0xffffffe0 [...]
```
And on Windows, use `ipconfig`:
```
$ ipconfig
```
Get the public IP address of your router at [icanhazip.com][5]. On Linux, you can get this from a terminal with the [curl command][6]:
```
$ curl http://icanhazip.com
93.184.216.34
```
Keep these numbers handy for later.
### Directing traffic through a router
The first device that needs to be adjusted is the gateway device. This could be a big, physical server, or it could be a tiny router. Either way, the gateway is almost certainly performing network address translation (NAT), which is the process of accepting traffic and altering the destination IP address.
When you generate network traffic to view an external website, your computer must send that traffic to your local network's gateway because your computer has, essentially, no knowledge of the outside world. As far as your computer knows, the entire internet is just your network router, 192.168.1.1 (or whatever your router's address). So, your computer sends everything to your gateway. It's the gateway's job to look at the traffic and determine where it's _actually_ headed, and then forward that data on to the real internet. When the gateway receives a response, it forwards the incoming data back to your computer.
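You can check which gateway your computer sends that traffic to; on Linux, a quick way is the `ip route` command (the address and interface in the output are examples, yours will show your own router):
```
$ ip route show default
default via 192.168.1.1 dev [...]
```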
If your gateway is a router, then to expose your computer to the outside world, you must designate a port in your router to represent your computer. This configures your router to accept traffic to a specific port and direct all of that traffic straight to your computer. Depending on the brand of router you use, this process goes by a few different names, including port forwarding or virtual server or sometimes even firewall settings.
Every device is different, so there's no way for me to tell you exactly what you need to click on to adjust your settings. Generally, you access your home router through a web browser. Your router's address is sometimes printed on the bottom of the router, and it begins with either 192.168 or 10.
Navigate to your router's address and log in with the credentials you were provided when you got your internet service. It's often as simple as `admin` with a numeric password (sometimes, this password is printed on the router, too). If you don't know the login, call your internet provider and ask for details.
In the graphical interface, redirect incoming traffic for one port to a port (the same one is usually easiest) of your computer's local IP address. In this example, I redirect incoming traffic destined for port 22 (used for SSH connections) of my home router to my desktop PC.
![Example of a router configuration][7]
(Seth Kenlon, [CC BY-SA 4.0][4])
You can redirect any port you want. For instance, if you're hosting a website on a spare computer, you can redirect traffic destined for port 80 of your router to port 80 of your website host.
### Directing traffic through a server
If your gateway is a physical server, you can direct traffic using [firewall-cmd][8]. Using the _rich rule_ option, you can have your server listen for an incoming request at a specific address (your public IP) and specific port (in this example, I use 22, which is the port used for SSH), and then direct that traffic to an IP address and port in the local network (your computer's local address).
```
$ firewall-cmd --permanent --zone=public \
  --add-rich-rule 'rule family="ipv4" destination address="93.184.216.34" forward-port port=22 protocol=tcp to-port=22 to-addr=192.168.1.6'
```
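Depending on how the gateway is configured, you may also need masquerading (NAT) enabled in that zone, and permanent rules generally only take effect after a reload. A hedged sketch of those two follow-up commands:
```
$ firewall-cmd --permanent --zone=public --add-masquerade
$ firewall-cmd --reload
```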
### Set your firewall
Most devices have firewalls, so you might find that traffic can't get through to your local computer even after you've forwarded ports and traffic. It's possible that there's a firewall blocking traffic even within your local network. Firewalls are designed to make your computer secure, so resist the urge to deactivate your firewall entirely (except for troubleshooting). Instead, you can selectively allow traffic.
The process of modifying your personal firewall differs according to your operating system.
On Linux, there are many services already defined. View the ones available:
```
$ sudo firewall-cmd --get-services
amanda-client amanda-k5-client bacula bacula-client
bgp bitcoin bitcoin-rpc ceph cfengine condor-collector
ctdb dhcp dhcpv6 dhcpv6-client dns elasticsearch
freeipa-ldaps ftp [...] ssh steam-streaming svdrp [...]
```
If the service you're trying to allow is listed, you can add it to your firewall:
```
$ sudo firewall-cmd --add-service ssh --permanent
```
If your service isn't listed, you can add the port you want to open manually:
```
$ sudo firewall-cmd --add-port 22/tcp --permanent
```
Opening a port in your firewall is specific to your current _zone_. For more information about firewalls, firewall-cmd, and ports, refer to my article [_Make Linux stronger with firewalls_][8], and download our [Firewall cheatsheet][9] for quick reference.
This step is only about opening a port in your computer so that traffic destined for it on a specific port is accepted. You don't need to redirect traffic because you've already done that at your gateway.
### Make the connection
You've set up your gateway and your local network to route traffic for you. Now, when someone outside your network navigates to your public IP address, destined for a specific port, they'll be redirected to your computer on the same port. It's up to you to monitor and safeguard your network, so use your new knowledge with care. Too many open ports can look like invitations to bad actors and bots, so only open what you intend to use. And most of all, have fun!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/firewall
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/traffic-light-go.png?itok=nC_851ys (Traffic lights at night)
[2]: http://nextcloud.org
[3]: https://opensource.com/sites/default/files/uploads/network-of-networks.png (network of networks)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: http://icanhazip.com
[6]: https://opensource.com/article/20/5/curl-cheat-sheet
[7]: https://opensource.com/sites/default/files/uploads/port-mapping.png (Example of a router configuration)
[8]: https://opensource.com/article/19/7/make-linux-stronger-firewalls
[9]: https://opensource.com/article/20/2/firewall-cheat-sheet

View File

@ -0,0 +1,136 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Rclone Browser Enables You to Sync Data With Cloud Services in Linux Graphically)
[#]: via: (https://itsfoss.com/rclone-browser/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Rclone Browser Enables You to Sync Data With Cloud Services in Linux Graphically
======
_**Brief: Rclone Browser is an effective GUI program that makes it easy to manage and sync data on cloud storage using Rclone. Here, we take a look at what it offers and how it works.**_
If you want to use One Drive or [Google Drive on Linux][1] natively and effortlessly, you can opt for a premium GUI tool like [Insync][2] ([affiliate][3] link).
If you can put some effort in the terminal, you can use [Rclone][4] to sync with many [cloud storage services on Linux][5]. We have a detailed [guide on using Rclone for syncing with OneDrive in Linux][6].
[Rclone][4] is a pretty popular and useful command-line tool. A lot of power users will need to use Rclone for its features.
However, not everyone is comfortable using it from the terminal, even if it is useful enough.
So, in this article, I'll talk about an impressive GUI “Rclone Browser” that makes it easy to manage and sync your data on cloud storage using Rclone.
It is also worth noting that Rclone does offer an experimental web-based GUI — but we are going to focus on [Rclone Browser][7] here.
![][8]
### Rclone Browser: An Open-Source GUI for Rclone
Rclone Browser is a GUI that lets you browse, modify, upload/download, list files, and do a lot more stuff that you'd want to do when you want to make the most out of managing a remote storage location.
It offers a simple user interface and works just fine (as per my quick test). Let's take a detailed look at the features it offers and how to get started using it.
### Features of Rclone Browser
![][9]
It offers a lot of options and control to manage remote storage locations. You may find it feature-rich or overwhelming depending on your use-case. Here they are:
* Browse and modify rclone remote storage locations
* Encrypted cloud storage supported
* Custom location and encryption for configuration supported
* No extra configuration required. It will use the same rclone configuration files (if you have any).
* Simultaneous navigation of multiple locations in separate tabs
* List files hierarchically (by file name, size, and modified date)
* Rclone commands are executed asynchronously without the GUI freezing
* You get the ability to upload, download, create new folders, rename, delete files and folders
* Drag and drop support for dragging files while uploading
* Stream media files in a player like VLC
* Mount and unmount folders/cloud drives
* Ability to calculate size of folder, export list of files, and copy rclone commands to clipboard
* Supports portable mode
* Supports shared drives (if you're using Google Drive)
* Public link sharing option for remote storage services that offer it
* Ability to create tasks that you can easily save to run again or edit later
* Dark mode
* Cross-platform support (Windows, macOS, and Linux)
### Installing Rclone Browser on Linux
_You need to have rclone installed on your Linux distribution before you use Rclone Browser. Follow the [official installation instructions][10] to do that._
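For reference, at the time of writing the quick install documented there boils down to piping a script to the shell on most distributions; check the linked instructions for the current recommendation before running it:
```
$ curl https://rclone.org/install.sh | sudo bash
```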
You will find an AppImage file available for Rclone Browser from the [releases section][11] of its [GitHub page][7]. So, you shouldn't have an issue running it on any Linux distribution.
In case you didn't know about AppImage, I'll recommend going through our guide to [use AppImage on Linux][12].
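In short, an AppImage just needs to be made executable and launched; the filename below is a placeholder for whichever release you downloaded:
```
$ chmod +x rclone-browser-*.AppImage
$ ./rclone-browser-*.AppImage
```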
You can also choose to build it yourself. The instructions to do that are on the GitHub page.
[Rclone Browser][7]
### Getting Started With Rclone Browser
Here, I'll just share a few things that you should know to get started using Rclone Browser.
![][13]
If you have any existing remote locations configured with rclone in the terminal, they will automatically show up in the GUI. You can also hit the “**Refresh**” button to get the latest additions.
As shown in the screenshot above, when you click the “**Config**” button, it launches a terminal that lets you easily add a new remote or configure it as you want. Don't worry when the terminal pops up: Rclone Browser executes the commands to do all the necessary tasks; you just have to set up or edit a few things when needed. You don't need to execute any Rclone commands yourself.
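Under the hood, that terminal simply drives the regular rclone CLI, so it can help to recognize the equivalent commands; roughly (the remote name `gdrive` is just an example):
```
# Interactive wizard for adding or editing a remote
$ rclone config
# List the remotes rclone already knows about
$ rclone listremotes
# List the top-level directories of a remote named "gdrive"
$ rclone lsd gdrive:
```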
If you have some existing remotes, you can simply open them using the “**Open**” button and have the cloud storage accessible in a different tab as shown below.
![][14]
You can easily mount the cloud drive, upload/download files, get the details, share a public link for a folder (if supported), and directly stream media files as well.
If you want to copy, move, or sync data with a remote storage location, you can simply create a task to do it. Just to make sure that you have the right settings, you can perform a dry run or go ahead with running the task.
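For comparison, the command-line form of such a task looks roughly like this; the local folder and remote name are examples:
```
# Preview what the sync would do without changing anything
$ rclone sync ~/Documents gdrive:backup --dry-run
# Run it for real, with progress output
$ rclone sync ~/Documents gdrive:backup -P
```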
You can find all the running tasks under the “**Jobs**” section and you can cancel/stop them if needed.
![][15]
In addition to all the basic functionalities mentioned above, you can just head to **File -> Preferences** to change the rclone location, mount option, download folder, bandwidth settings, and proxy as well.
![][16]
To learn more about its usage and features, you might want to check out the [GitHub page][7] for all the technical information.
### Wrapping Up
Rclone Browser should definitely come in handy for every Linux user looking to use Rclone for its powerful features.
Have you tried it yet? Do you prefer using the GUI or the terminal for using rclone? Let me know your thoughts in the comments below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/rclone-browser/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/use-google-drive-linux/
[2]: https://itsfoss.com/recommends/insync/
[3]: https://itsfoss.com/affiliate-policy/
[4]: https://rclone.org/
[5]: https://itsfoss.com/cloud-services-linux/
[6]: https://itsfoss.com/use-onedrive-linux-rclone/
[7]: https://github.com/kapitainsky/RcloneBrowser
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/Cloud-sync.gif?resize=800%2C450&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/rclone-browser-screenshot.jpg?resize=800%2C618&ssl=1
[10]: https://rclone.org/install/
[11]: https://github.com/kapitainsky/RcloneBrowser/releases/tag/1.8.0
[12]: https://itsfoss.com/use-appimage-linux/
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/rclone-browser-howto.png?resize=800%2C412&ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/rclone-browser-drive.png?resize=800%2C505&ssl=1
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/rclone-browser-task.jpg?resize=800%2C493&ssl=1
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/rclone-browser-preferences.jpg?resize=800%2C590&ssl=1

View File

@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Soon Youll be Able to Convert Any Website into Desktop Application in Linux Mint)
[#]: via: (https://itsfoss.com/web-app-manager-linux-mint/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Soon You'll be Able to Convert Any Website into Desktop Application in Linux Mint
======
Imagine this situation. You are working on a certain topic and you have more than twenty tabs open in your web browser, mostly related to the work.
Some of these tabs are for YouTube or some other music streaming website you are listening to.
You finished the work on the topic and closed the browser. Your intent was to close all the work-related tabs, but that also closed the tabs you were using for listening to music or other activities.
Now you'll have to log in to those websites again and find the track you were listening to or whatever you were doing.
Frustrating, isn't it? Linux Mint understands your pain, and it has an upcoming project to help you out in such a scenario.
### Linux Mint's Web App Manager
![][1]
In a [recent post][2], the Linux Mint team revealed that it is working on a new tool called Web App Manager.
The Web App Manager tool will allow you to launch your favorite websites and have them run in their own window as if they were desktop applications.
While adding a website as a Web App, you can give it a custom name and icon. You can also give it a different category. This will help you search this app in the menu.
You may also specify which web browser you want the Web App to open in. There is also an option to enable or disable the navigation bar.
![Adding a Web App In Linux Mint][3]
Say, you add YouTube as a Web App:
![Web Apps In Linux Mint][4]
If you run this YouTube Web App, YouTube will now run in its own window and in a browser of your choice.
![YouTube Web App][5]
The Web App has most of the features you see in a regular desktop application. You can use it in Alt+Tab switcher:
![Web App in Alt Tab Switcher][6]
You can even pin the Web App to the panel/taskbar for quick access.
![YouTube Web App added to the panel][7]
The Web App Manager is in beta right now, but it is fairly stable to use. It is not translation-ready yet, which is why it has not been released to the public.
If you are using Linux Mint and want to try the Web App Manager, you can download the DEB file for the beta version of this app from the link below:
[Download Web App Manager (beta) for Linux Mint][8]
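Once downloaded, it installs like any other local .deb package; the filename matches the beta linked above:
```
$ sudo apt install ./webapp-manager_1.0.3_all.deb
```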
### Web apps are not new to desktop Linux
This is not something groundbreaking from Linux Mint. Web apps have been on the scene for almost a decade now.
If you remember, Ubuntu had added the web app feature to its Unity desktop in 2013-14.
The lightweight Linux distribution PeppermintOS has listed ICE (a tool for web apps) as its main feature since 2010. In fact, Linux Mint's Web App Manager is based on Peppermint OS's [ICE][9].
Personally, I like the web app feature. It has its uses.
What do you think of Web Apps in Linux Mint? Is it something you look forward to using? Do share your views in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/web-app-manager-linux-mint/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/Web-App-Manager-linux-mint.jpg?resize=800%2C450&ssl=1
[2]: https://blog.linuxmint.com/?p=3960
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/Add-web-app-in-Linux-Mint.png?resize=600%2C489&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/Web-Apps-in-Linux-Mint.png?resize=600%2C489&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/youtube-web-app-linux-mint.jpg?resize=800%2C611&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/web-app-alt-tab-switcher.jpg?resize=721%2C576&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/panel.jpg?resize=470%2C246&ssl=1
[8]: http://www.linuxmint.com/tmp/blog/3960/webapp-manager_1.0.3_all.deb
[9]: https://github.com/peppermintos/ice