Merge pull request #1 from LCTT/master

update from lctt
This commit is contained in:
zhangxiangping 2020-02-11 13:59:42 +08:00 committed by GitHub
commit 3a3063a84e
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
9 changed files with 952 additions and 322 deletions


@@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11877-1.html)
[#]: subject: (What's HTTPS for secure computing?)
[#]: via: (https://opensource.com/article/20/1/confidential-computing)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
What's HTTPS for secure computing?
======
> By default, website security is not yet sufficient.
![](https://img.linux.net.cn/data/attachment/album/202002/11/123552rqncn4c7474j44jq.jpg)
Over the past few years, it has become harder and harder to find a website whose address starts with just "http://...", because the industry has finally realized that web security "is a thing", and also because it has become much easier to establish and use HTTPS connections between clients and servers. A similar shift may be happening, in different ways, in cloud computing, edge computing, the internet of things, blockchain, artificial intelligence, machine learning, and other fields. We have long known that we should encrypt data at rest in storage and data in transit on the network, but encrypting data while it is being used and processed has been difficult and expensive. Trusted computing, which uses hardware features such as Trusted Execution Environments (TEEs) to provide this kind of protection for data and algorithms, can protect data on host systems or in vulnerable environments.
I have already written several times about [TEEs][2] and, of course, about the [Enarx project][3] that Nathaniel McCallum and I co-founded (see "[Enarx for everyone: a quest][4]" and "[Enarx goes multi-platform][5]"). Enarx uses TEEs to provide a platform- and language-independent deployment platform that lets you securely deploy sensitive applications or components (such as microservices) onto hosts you don't trust. Enarx is, of course, fully open source (we use the Apache 2.0 license, by the way). Being able to run workloads on hosts you don't trust is the promise of trusted computing, and it extends the usual practice of protecting sensitive data at rest and in transit:
* **Storage:** You encrypt your data at rest because you don't fully trust the underlying storage infrastructure.
* **Network:** You encrypt your data in transit because you don't fully trust the underlying network infrastructure.
* **Compute:** You encrypt your data in use because you don't fully trust the underlying compute infrastructure.
I have a great deal to say about trust, and the word "**fully**" in the statements above is important (I added it when rereading what I had written). In each case, you have to trust the underlying infrastructure to some degree, whether it is delivering your network packets or storing your blocks of data. For compute infrastructure, for example, you have to trust the CPU and the firmware associated with it, because if you don't trust them, you can't really perform any computing (technologies such as homomorphic encryption are beginning to offer some possibilities here, but they are still limited and not yet mature).
Given some of the CPU security issues that have been discovered, the question naturally arises whether CPUs should be fully trusted at all, and whether they are completely secure against physical attacks on the hosts in which they sit.
The answer to both questions is "no", but considering the costs of availability at scale and widespread deployment, this is the best technology we currently have. To address the second question: nobody pretends that this technology (or any other) is completely secure. What we need to do is consider our [threat model][6] and decide whether TEEs offer sufficient protection for our particular needs in that context. As for the first question, the model Enarx adopts is that you decide at deployment time whether you trust a particular set of CPUs. For example, if generation R of vendor Q's chips is found to be vulnerable, it is easy to say, "I refuse to deploy my workloads onto Q's generation R chips, but I am still happy to deploy onto Q's S, T, and U models, and onto any chips from vendors P, M, and N."
I think three changes have taken place that account for the current interest in and adoption of confidential computing:
1. **Hardware availability**: Only in the past six to twelve months has TEE-enabled hardware become widely available, with Intel's SGX and AMD's SEV the main examples on the market right now. We can expect to see other examples of TEE-enabled hardware in the future.
2. **Industry readiness**: Just as cloud is increasingly accepted as a model for application deployment, regulators and legislators are raising the requirements on organizations of all kinds to protect the data they manage. Organizations have begun to ask for ways to run sensitive applications (or applications that process sensitive data) on untrusted hosts, or, more precisely, on hosts that they cannot fully trust with sensitive data. This should come as no surprise: if chip makers saw no market for this technology, they would not invest much money in it. The formation of the Linux Foundation's [Confidential Computing Consortium (CCC)][7] is an example of the industry's interest in finding common models for using confidential computing and in encouraging open source projects to adopt these technologies. (Enarx, which Red Hat initiated, is a CCC project.)
3. **Open source**: Like blockchain, confidential computing is one of those technologies for which using open source is the absolutely sensible choice. If you are going to run sensitive applications, you need to trust what is running them for you: not just the CPU and firmware, but also the framework that executes your workload inside the TEE. It is all very well to say, "I don't trust the host machine and its software stack, so I am going to use a TEE", but if you don't have good insight into the TEE software environment, you are just swapping one kind of software opacity for another. Open source support for TEEs lets you, or the community (indeed, you together with the community), examine and audit what you are running in a way that is simply impossible for proprietary software. That is why the CCC sits within the Linux Foundation (a foundation committed to an open development model) and encourages TEE-related software projects to join and to become open source (if they are not already).
Hardware availability, industry readiness, and open source have, I think, been the drivers of technological change over the past 15 to 20 years. Blockchain, artificial intelligence, cloud computing, webscale computing, big data, and internet commerce are all examples of these three factors coming together at the same time and bringing enormous changes to the industry.
Security in general is a promise we have been hearing for decades, and it remains unfulfilled. Honestly, I am not sure it ever will be fulfilled. But as new technologies arrive, security for particular use cases becomes more practical and more pervasive, and the industry comes to expect it. Confidential computing looks ready to become the next big change, and you, dear reader, can join the revolution (it is open source, after all).
This article was originally published on [Alice, Eve, and Bob][9] and is republished here with the author's permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/confidential-computing
Author: [Mike Bursell][a]
Topic selection: [lujun9972][b]
Translator: [hopefully2333](https://github.com/hopefully2333)
Proofreader: [wxy](https://github.com/wxy)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/secure_https_url_browser.jpg?itok=OaPuqBkG (Secure https browser)
[2]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/
[3]: https://enarx.io/
[4]: https://aliceevebob.com/2019/08/20/enarx-for-everyone-a-quest/
[5]: https://aliceevebob.com/2019/10/29/enarx-goes-multi-platform/
[6]: https://aliceevebob.com/2018/02/20/there-are-no-absolutes-in-security/
[7]: https://confidentialcomputing.io/
[9]: https://aliceevebob.com/2019/12/03/confidential-computing-the-new-https/


@@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 firewall features IT pros should know about but probably don't)
[#]: via: (https://www.networkworld.com/article/3519854/4-firewall-features-it-pros-should-know-about-but-probably-dont.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
5 firewall features IT pros should know about but probably don't
======
As a foundational network defense, firewalls continue to be enhanced with new features, so many that some important ones can get overlooked.
Natalya Burova / Getty Images
[Firewalls][1] continuously evolve to remain a staple of network security by incorporating the functionality of standalone devices, embracing network-architecture changes, and integrating outside data sources to add intelligence to the decisions they make: a daunting wealth of possibilities that is difficult to keep track of.
Because of this richness of features, next-generation firewalls are difficult to master fully, and important capabilities sometimes can be, and in practice are, overlooked.
Here is a shortlist of new features IT pros should be aware of.
### Also in this series:
* [Cybersecurity in 2020: From secure code to defense in depth][4] (CSO)
* [More targeted, sophisticated and costly: Why ransomware might be your biggest threat][5] (CSO)
* [How to bring security into agile development and CI/CD][6] (Infoworld)
* [UEM to marry security finally after long courtship][7] (Computerworld)
* [Security vs. innovation: IT's trickiest balancing act][8] (CIO)
## Network segmentation
Dividing a single physical network into multiple logical networks is known as [network segmentation][9]. Each segment behaves as if it runs on its own physical network, and traffic from one segment can't be seen by or passed to another segment.
This significantly reduces attack surfaces in the event of a breach. For example, a hospital could put all its medical devices into one segment and its patient records into another. Then, if hackers breach a heart pump that was not secured properly, that would not enable them to access private patient information.
It's important to note that many of the connected things that make up the [internet of things][10] run older operating systems, are inherently insecure, and can act as points of entry for attackers, so the growth of IoT and its distributed nature drive up the need for network segmentation.
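You can get a feel for segmentation on a single Linux gateway with zone-based tooling; this is not the NGFW feature itself, just a minimal sketch using firewalld, where the zone name and subnet are illustrative assumptions:

```
# create a zone for medical devices and bind their subnet to it
$ sudo firewall-cmd --permanent --new-zone=medical
$ sudo firewall-cmd --permanent --zone=medical --add-source=10.10.20.0/24
# expose only the service those devices actually need
$ sudo firewall-cmd --permanent --zone=medical --add-service=https
$ sudo firewall-cmd --reload
```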
## Policy optimization
Firewall policies and rules are the engine that make firewalls go. Most security professionals are terrified of removing older policies because they don't know when they were put in place or why. As a result, rules keep getting added with no thought of reducing the overall number. Some enterprises say they have millions of firewall rules in place. The fact is, too many rules add complexity, can conflict with each other and are time consuming to manage and troubleshoot.
Policy optimization migrates legacy security policy rules to application-based rules that permit or deny traffic based on what application is being used. This improves overall security by reducing the attack surface and also provides visibility to safely enable application access. Policy optimization identifies port-based rules so they can be converted to application-based whitelist rules, or adds applications from a port-based rule to an existing application-based rule, without compromising application availability. It also identifies over-provisioned application-based rules. Policy optimization helps prioritize which port-based rules to migrate first, identify application-based rules that allow applications that aren't being used, and analyze rule-usage characteristics such as hit count, which compares how often a particular rule is applied vs. how often all the rules are applied.
Converting port-based rules to application-based rules improves security posture because the organization can select the applications they want to whitelist and deny all other applications. That way unwanted and potentially malicious traffic is eliminated from the network.
## Credential-theft prevention
Historically, workers accessed corporate applications from company offices. Today they access legacy apps, SaaS apps and other cloud services from the office, home, airport and anywhere else they may be. This makes it much easier for threat actors to steal credentials. The Verizon [Data Breach Investigations Report][12] found that 81% of hacking-related breaches leveraged stolen and/or weak passwords.
Credential-theft prevention blocks employees from using corporate credentials on sites such as Facebook and Twitter.  Even though they may be sanctioned applications, using corporate credentials to access them puts the business at risk.
Credential-theft prevention works by scanning username and password submissions to websites and comparing those submissions to lists of official corporate credentials. Businesses can choose which websites to allow corporate-credential submissions to, or block submissions based on the URL category of the website.
When the firewall detects a user attempting to submit credentials to a site in a category that is restricted, it can display a block-response page that prevents the user from submitting credentials. Alternatively, it can present a continue page that warns users against submitting credentials to sites classified in certain URL categories, but still allows them to continue with the credential submission. Security professionals can customize these block pages to educate users against reusing corporate credentials, even on legitimate, non-phishing sites.
## DNS security
A combination of machine learning, analytics and automation can block attacks that leverage the [Domain Name System (DNS)][13]. In many enterprises, DNS servers are unsecured and completely wide open to attacks that redirect users to bad sites where they are phished and where data is stolen. Threat actors have a high degree of success with DNS-based attacks because security teams have very little visibility into how attackers use the service to maintain control of infected devices. There are some standalone DNS security services that are moderately effective but lack the volume of data to recognize all attacks.
When DNS security is integrated into firewalls, machine learning can analyze the massive amount of network data, making standalone analysis tools unnecessary. DNS security integrated into a firewall can predict and block malicious domains through automation and the real-time analysis that finds them.  As the number of bad domains grows, machine learning can find them quickly and ensure they dont become problems.
Integrated DNS security can also use machine-learning analytics to neutralize DNS tunneling, which smuggles data through firewalls by hiding it within DNS requests. DNS security can also find malware command-and-control servers.  It builds on top of signature-based systems to identify advanced tunneling methods and automates the shutdown of DNS-tunneling attacks.
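This is not the machine-learning detection described above, but tunneled data has to ride inside query names, so one crude heuristic you can apply yourself is flagging unusually long DNS queries. A minimal sketch against a BIND-style query log follows; the log path, field layout, and 60-character threshold are assumptions you would tune for your environment:

```
# print the most frequent query names longer than 60 characters
$ awk '/query:/ { for (i = 1; i < NF; i++) if ($i == "query:") q = $(i + 1); if (length(q) > 60) print q }' \
    /var/log/named/query.log | sort | uniq -c | sort -rn | head
```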
## Dynamic user groups
It's possible to create policies that automate the remediation of anomalous worker activity. The basic premise is that users' roles within a group mean their network behaviors should be similar to each other's. For example, if a worker is phished and strange apps get installed, that would stand out and could indicate a breach.
Historically, quarantining a group of users was highly time consuming because each member of the group had to be addressed and policies enforced individually. With dynamic user groups, when the firewall sees an anomaly, it creates policies that counter the anomaly and pushes them out to the user group. The entire group is automatically updated without having to manually create and commit policies. So, for example, all the people in accounting would receive the same policy update automatically, at once, instead of manually, one at a time. Integration with the firewall also enables the firewall to distribute the user group's policies to all the other infrastructure that requires them, including other firewalls, log collectors, or applications.
Firewalls have been and will continue to be the anchor of cyber security. They are the first line of defense and can thwart many attacks before they penetrate the enterprise network.  Maximizing the value of firewalls means turning on many of the advanced features, some of which have been in firewalls for years but not turned on for a variety of reasons.
Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3519854/4-firewall-features-it-pros-should-know-about-but-probably-dont.html
Author: [Zeus Kerravala][a]
Topic selection: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.csoonline.com/article/3519913/cybersecurity-in-2020-from-secure-code-to-defense-in-depth.html
[5]: https://www.csoonline.com/article/3518864/more-targeted-sophisticated-and-costly-why-ransomware-might-be-your-biggest-threat.html
[6]: https://www.infoworld.com/article/3520969/how-to-bring-security-into-agile-development-and-cicd.html
[7]: https://www.computerworld.com/article/3516136/uem-to-marry-security-finally-after-long-courtship.html
[8]: https://www.cio.com/article/3521009/security-vs-innovation-its-trickiest-balancing-act.html
[9]: https://www.networkworld.com/article/3016565/how-network-segmentation-provides-a-path-to-iot-security.html
[10]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[11]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[12]: https://enterprise.verizon.com/resources/reports/dbir/
[13]: https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world


@@ -1,247 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (svtter)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage multimedia files with Git)
[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Manage multimedia files with Git
======
Learn how to use Git to track large multimedia files in your projects in
the final article in our series on little-known uses of Git.
![video editing dashboard][1]
Git is very specifically designed for source code version control, so it's rarely embraced by projects and industries that don't primarily work in plaintext. However, the advantages of an asynchronous workflow are appealing, especially in the ever-growing number of industries that combine serious computing with seriously artistic ventures, including web design, visual effects, video games, publishing, currency design (yes, that's a real industry), education… the list goes on and on.
In this series leading up to Git's 14th anniversary, we've shared six little-known ways to use Git. In this final article, we'll look at software that brings the advantages of Git to managing multimedia files.
### The problem with managing multimedia files with Git
It seems to be common knowledge that Git doesn't work well with non-text files, but it never hurts to challenge assumptions. Here's an example of copying a photo file using Git:
```
$ du -hs
108K .
$ cp ~/photos/dandelion.tif .
$ git add dandelion.tif
$ git commit -m 'added a photo'
[master (root-commit) fa6caa7] two photos
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 dandelion.tif
$ du -hs
1.8M .
```
Nothing unusual so far; adding a 1.8MB photo to a directory results in a directory 1.8MB in size. So, let's try removing the file:
```
$ git rm dandelion.tif
$ git commit -m 'deleted a photo'
$ du -hs
828K .
```
You can see the problem here: removing a large file after it's been committed leaves the repository roughly eight times the size of its original, barren state (from 108K to 828K). You can perform tests to get a better average, but this simple demonstration is consistent with my experience. The cost of committing files that aren't text-based is minimal at first, but the longer a project stays active, the more changes people make to static content, and the more those fractions start to add up. When a Git repository becomes very large, the major cost is usually speed. The time to perform pulls and pushes goes from being how long it takes to take a sip of coffee to how long it takes to wonder if your computer got kicked off the network.
The reason static content causes Git to grow in size is that formats based on text allow Git to pull out just the parts that have changed. Raster images and music files make as much sense to Git as they would to you if you looked at the binary data contained in a .png or .wav file. So Git just takes all the data and makes a new copy of it, even if only one pixel changes from one photo to the next.
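You can watch this happen with Git's own object accounting. The sketch below assumes ImageMagick's mogrify is available and uses a placeholder photo path; git count-objects reports the size of the object database:

```
$ git init demo && cd demo
$ cp ~/photos/dandelion.tif .
$ git add dandelion.tif && git commit -m 'first version'
$ git count-objects -vH                               # one full-size blob
$ mogrify -fill red -draw 'point 0,0' dandelion.tif   # change a single pixel
$ git add dandelion.tif && git commit -m 'second version'
$ git count-objects -vH                               # roughly double: a second full blob
```

Git does delta-compress objects when it repacks, but binary formats rarely delta well, so the savings stay small.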
### Git-portal
In practice, many multimedia projects don't need or want to track the media's history. The media part of a project tends to have a different lifecycle than the text or code part of a project. Media assets generally progress in one direction: a picture starts as a pencil sketch, proceeds toward its destination as a digital painting, and, even if the text is rolled back to an earlier version, the art continues its forward progress. It's rare for media to be bound to a specific version of a project. The exceptions are usually graphics that reflect datasets—usually tables or graphs or charts—that can be done in text-based formats such as SVG.
So, on many projects that involve both media and text (whether it's narrative prose or code), Git is an acceptable solution to file management, as long as there's a playground outside the version control cycle for artists to play in.
![Graphic showing relationship between art assets and Git][2]
A simple way to enable that is [Git-portal][3], a Bash script armed with Git hooks that moves your asset files to a directory outside Git's purview and replaces them with symlinks. Git commits the symlinks (sometimes called aliases or shortcuts), which are trivially small, so all you commit are your text files and whatever symlinks represent your media assets. Because the replacement files are symlinks, your project continues to function as expected because your local machine follows the symlinks to their "real" counterparts. Git-portal maintains a project's directory structure when it swaps out a file with a symlink, so it's easy to reverse the process, should you decide that Git-portal isn't right for your project or you need to build a version of your project without symlinks (for distribution, for instance).
Git-portal also allows remote synchronization of assets over rsync, so you can set up a remote storage location as a centralized source of authority.
Git-portal is ideal for multimedia projects, including video game and tabletop game design, virtual reality projects with big 3D model renders and textures, [books][4] with graphics and .odt exports, collaborative [blog websites][5], music projects, and much more. It's not uncommon for an artist to perform versioning in their application—in the form of layers (in the graphics world) and tracks (in the music world)—so Git adds nothing to multimedia project files themselves. The power of Git is leveraged for other parts of artistic projects (prose and narrative, project management, subtitle files, credits, marketing copy, documentation, and so on), and the power of structured remote backups is leveraged by the artists.
#### Install Git-portal
There are RPM packages for Git-portal located at <https://klaatu.fedorapeople.org/git-portal>, which you can download and install.
Alternately, you can install Git-portal manually from its home on GitLab. It's just a Bash script and some Git hooks (which are also Bash scripts), but it requires a quick build process so that it knows where to install itself:
```
$ git clone https://gitlab.com/slackermedia/git-portal.git git-portal.clone
$ cd git-portal.clone
$ ./configure
$ make
$ sudo make install
```
#### Use Git-portal
Git-portal is used alongside Git. This means, as with all large-file extensions to Git, there are some added steps to remember. But you only need Git-portal when dealing with your media assets, so it's pretty easy to remember unless you've acclimated yourself to treating large files the same as text files (which is rare for Git users). There's one setup step you must do to use Git-portal in a project:
```
$ mkdir bigproject.git
$ cd !$
$ git init
$ git-portal init
```
Git-portal's **init** function creates a **_portal** directory in your Git repository and adds it to your .gitignore file.
Using Git-portal in a daily routine integrates smoothly with Git. A good example is a MIDI-based music project: the project files produced by the music workstation are text-based, but the MIDI files are binary data:
```
$ ls -1
_portal
song.1.qtr
song.qtr
song-Track_1-1.mid
song-Track_1-3.mid
song-Track_2-1.mid
$ git add song*qtr
$ git-portal song-Track*mid
$ git add song-Track*mid
```
If you look into the **_portal** directory, you'll find the original MIDI files. The files in their place are symlinks to **_portal** , which keeps the music workstation working as expected:
```
$ ls -lG
[...] _portal/
[...] song.1.qtr
[...] song.qtr
[...] song-Track_1-1.mid -> _portal/song-Track_1-1.mid*
[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid*
[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid*
```
As with Git, you can also add a directory of files:
```
$ cp -r ~/synth-presets/yoshimi .
$ git-portal add yoshimi
Directories cannot go through the portal. Sending files instead.
$ ls -lG yoshimi
[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat*
```
Removal works as expected, but when removing something in **_portal** , you should use **git-portal rm** instead of **git rm**. Using Git-portal ensures that the file is removed from **_portal** :
```
$ ls
_portal/ song.qtr song-Track_1-3.mid@ yoshimi/
song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@
$ git-portal rm song-Track_1-3.mid
rm 'song-Track_1-3.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
```
If you forget to use Git-portal, then you have to remove the portal file manually:
```
$ git rm song-Track_1-1.mid
rm 'song-Track_1-1.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
$ trash _portal/song-Track_1-1.mid
```
Git-portal's only other function is to list all current symlinks and find any that may have become broken, which can sometimes happen if files move around in a project directory:
```
$ mkdir foo
$ mv yoshimi foo
$ git-portal status
bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid
bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat
```
If you're using Git-portal for a personal project and maintaining your own backups, this is technically all you need to know about Git-portal. If you want to add in collaborators, or you want Git-portal to manage backups the way (more or less) Git does, you can add a remote.
#### Add Git-portal remotes
Adding a remote location for Git-portal is done through Git's existing remote function. Git-portal implements Git hooks, scripts hidden in your repository's .git directory, to look at your remotes for any that begin with **_portal**. If it finds one, it attempts to **rsync** to the remote location and synchronize files. Git-portal performs this action anytime you do a Git push or a Git merge (or pull, which is really just a fetch and an automatic merge).
If you've only cloned Git repositories, then you may never have added a remote yourself. It's a standard Git procedure:
```
$ git remote add origin [git@gitdawg.com][6]:seth/bigproject.git
$ git remote -v
origin [git@gitdawg.com][6]:seth/bigproject.git (fetch)
origin [git@gitdawg.com][6]:seth/bigproject.git (push)
```
The name **origin** is a popular convention for your main Git repository, so it makes sense to use it for your Git data. Your Git-portal data, however, is stored separately, so you must create a second remote to tell Git-portal where to push to and pull from. Depending on your Git host, you may need a separate server because gigabytes of media assets are unlikely to be accepted by a Git host with limited space. Or maybe you're on a server that permits you to access only your Git repository and not external storage directories:
```
$ git remote add _portal [seth@example.com][7]:/home/seth/git/bigproject_portal
$ git remote -v
origin [git@gitdawg.com][6]:seth/bigproject.git (fetch)
origin [git@gitdawg.com][6]:seth/bigproject.git (push)
_portal [seth@example.com][7]:/home/seth/git/bigproject_portal (fetch)
_portal [seth@example.com][7]:/home/seth/git/bigproject_portal (push)
```
You may not want to give all of your users individual accounts on your server, and you don't have to. To provide access to the server hosting a repository's large file assets, you can run a Git frontend like **[Gitolite][8]** , or you can use **rrsync** (i.e., restricted rsync).
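As a sketch of the rrsync approach (the key, paths, and rrsync location are assumptions; the helper script ships with rsync, often under /usr/share/rsync/scripts/), you pin an SSH key to rsync-only access in the storage server's authorized_keys:

```
# ~/.ssh/authorized_keys on the storage server:
# this key may only run rsync, confined to the portal directory
command="/usr/bin/rrsync /home/seth/git/bigproject_portal",restrict ssh-ed25519 AAAAC3Nz... artist@example.com
```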
Now you can push your Git data to your remote Git repository and your Git-portal data to your remote portal:
```
$ git push origin HEAD
master destination detected
Syncing _portal content...
sending incremental file list
sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec
total size is 60,358,015 speedup is 6,474.10
Syncing _portal content to example.com:/home/seth/git/bigproject_portal
```
If you have Git-portal installed and a **_portal** remote configured, your **_portal** directory will be synchronized, getting new content from the server and sending fresh content with every push. While you don't have to do a Git commit and push to sync with the server (a user could just use rsync directly), I find it useful to require commits for artistic changes. It integrates artists and their digital assets into the rest of the workflow, and it provides useful metadata about project progress and velocity.
### Other options
If Git-portal is too simple for you, there are other options for managing large files with Git. [Git Large File Storage][9] (LFS) is a fork of a defunct project called git-media and is maintained and supported by GitHub. It requires special commands (like **git lfs track** to protect large files from being tracked by Git) and requires the user to manage a .gitattributes file to update which files in the repository are tracked by LFS. It supports _only_ HTTP and HTTPS remotes for large files, so your LFS server must be configured so users can authenticate over HTTP rather than SSH or rsync.
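For comparison, here is a minimal sketch of the equivalent MIDI workflow under Git LFS, assuming the extension is installed and your remote supports it:

```
$ git lfs install                # enable the LFS hooks in this repository
$ git lfs track "*.mid"          # route MIDI files through LFS
$ git add .gitattributes         # LFS records its tracking rules here
$ git add song-Track_1-1.mid
$ git commit -m 'add MIDI track via LFS'
```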
A more flexible option than LFS is [git-annex][10], which you can learn more about in my article about [managing binary blobs in Git][11] (ignore the parts about the deprecated git-media, as its former flexibility doesn't apply to its successor, Git LFS). Git-annex is a flexible and elegant solution with a detailed system for adding, removing, and moving large files within a repository. Because it's flexible and powerful, there are lots of new commands and rules to learn, so take a look at its [documentation][12].
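And a rough equivalent under git-annex, where the repository description and file name are placeholders:

```
$ git annex init "studio laptop"    # enable git-annex in this repository
$ git annex add song-Track_1-1.mid  # move the file into the annex, leaving a symlink
$ git commit -m 'annex the MIDI track'
$ git annex sync --content          # propagate metadata and file content to remotes
```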
If, however, your needs are simple and you like a solution that utilizes existing technology to do simple and obvious tasks, Git-portal might be the tool for the job.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/manage-multimedia-files-git
Author: [Seth Kenlon (Red Hat, Community Moderator)][a]
Topic selection: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git)
[3]: http://gitlab.com/slackermedia/git-portal.git
[4]: https://www.apress.com/gp/book/9781484241691
[5]: http://mixedsignals.ml
[6]: mailto:git@gitdawg.com
[7]: mailto:seth@example.com
[8]: https://opensource.com/article/19/4/file-sharing-git
[9]: https://git-lfs.github.com/
[10]: https://git-annex.branchable.com/
[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
[12]: https://git-annex.branchable.com/walkthrough/


@@ -0,0 +1,109 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Music composition with Python and Linux)
[#]: via: (https://opensource.com/article/20/2/linux-open-source-music)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
Music composition with Python and Linux
======
A chat with Mr. MAGFest—Brendan Becker.
![Wires plugged into a network switch][1]
I met Brendan Becker working in a computer store in 1999. We both enjoyed building custom computers and installing Linux on them. Brendan was always involved in several technology projects at once, ranging from game coding to music composition. Fast-forwarding a few years from the days of computer stores, he went on to write [pyDance][2], an open source implementation of multiple dancing games, and then became the CEO of music and gaming event [MAGFest][3]. Sometimes referred to as "Mr. MAGFest" because he was at the helm of the event, Brendan now uses the music pseudonym "[Inverse Phase][4]" as a composer of chiptunes—music predominantly made on 8-bit computers and game consoles.
I thought it would be interesting to interview him and ask some specifics about how he has benefited from Linux and open source software throughout his career.
![Inverse Phase performance photo][5]
Copyright Nickeledge, CC BY-SA 2.0.
### Alan Formy-Duval: How did you get started in computers and software?
Brendan Becker: There's been a computer in my household almost as far back as I can remember. My dad has fervently followed technology; he brought home a Compaq Portable when they first hit the market, and when he wasn't doing work on it, I would have access to it. Since I began reading at age two, using a computer became second nature to me—just read what it said on the disk, follow the instructions, and I could play games! Some of the time I would be playing with learning and education software, and we had a few disks full of games that I could play other times. I remember a single disk with a handful of free clones of popular titles. Eventually, my dad showed me that we could call other computers (BBS'ing at age 5!), and I saw where some of the games came from. One of the games I liked to play was written in BASIC, and all bets were off when I realized that I could simply modify the game by just reading a few things and re-typing them to make my game easier.
### Formy-Duval: This was the 1980s?
Becker: The Compaq Portable dropped in 1983 to give you a frame of reference. My dad had one of the first of that model.
### Formy-Duval: How did you get into Linux and open source software?
Becker: I was heavy into MODs and demoscene stuff in the early 90s, and I noticed that Walnut Creek ([cdrom.com][6]; now defunct) ran shop on FreeBSD. I was super curious about Unix and other operating systems in general, but didn't have much firsthand exposure, and thought it might be time to finally try something. DOOM had just released, and someone told me I might even be able to get it to run. Between that and being able to run cool internet servers, I started going down the rabbit hole. Someone saw me reading about FreeBSD and suggested I check out Linux, this new OS that was written from the ground up for x86, unlike BSD, which (they said) had some issues with compatibility. So, I joined #linuxhelp on undernet IRC and asked how to get started with Linux, pointing out that I had done a modicum of research (asking "what's the difference between Red Hat and Slackware?") and probing mainly about what would be easiest to use. The only person talking in the channel said that he was 13 years old and he could figure out Slackware, so I should not have an issue. A math teacher in my school gave me a hard disk, I downloaded the "A" disk sets and a boot disk, wrote it out, installed it, and didn't spend much time looking back.
### Formy-Duval: How'd you become known as Mr. MAGFest?
Becker: Well, this one is pretty easy. I became the acting head of MAGFest almost immediately after the first event. The former chairpeople were all going their separate ways, and I demanded of the guy in charge that the event not be canceled. The solution was to run it myself, and that nickname became mine as I slowly molded the event into my own.
### Formy-Duval: I remember attending in those early days. How large did MAGFest eventually become?
Becker: The first MAGFest was 265 people. It is now a scary huge 20,000+ unique attendees.
### Formy-Duval: That is huge! Can you briefly describe the MAGFest convention?
Becker: One of my buddies, Hex, described it really well. He said, "It's like this video-game themed birthday party with all of your friends, but there happen to be a few thousand people there, and all of them can be your friends if you want, and then there are rock concerts." This was quickly adopted and shortened to "It's a four-day video game party with multiple video game rock concerts." Often the phrase "music and gaming festival" is all people need to get the idea.
### Formy-Duval: How did you make use of open source software in running MAGFest?
Becker: At the time I became the head of MAGFest, I had already written a game in Python, so I felt most comfortable also writing our registration system in Python. It was a pretty easy decision since there were no costs involved, and I already had the experience. Later on, our online registration system and rideshare interfaces were written in PHP/MySQL, and we used Kboard for our forums. Eventually, this evolved to us rolling our own registration system from scratch in Python, which we also use at the event, and running Drupal on the main website. At one point, I also wrote a system to manage the video room and challenge stations in Python. Oh, and we had a few game music listening stations that you could flip through tracks and liner notes of iconic game OSTs (Original Sound Tracks) and bands who played MAGFest.
### Formy-Duval: I understand that a few years ago you reduced your responsibilities at MAGFest to pursue new projects. What was your next endeavor?
Becker: I was always rather heavily into the game music scene and tried to bring as much of it to MAGFest as possible. As I became a part of those communities more and more, I wanted to participate. I wrote some medleys, covers, and arrangements of video game tunes using free, open source versions of the DOS and Windows demoscene tools that I had used before, which were also free but not necessarily open source. I released a few tracks in the first few years of running MAGFest, and then after some tough love and counseling from Jake Kaufman (also known as virt; Shovel Knight and Shantae are on his resume, among others), I switched gears to something I was much better at—chiptunes. Even though I had written PC speaker beeps and boops as a kid with my Compaq Portable and MOD files in the demoscene throughout the 90s, I released the first NES-spec track that I was truly proud to call my own in 2006. Several pop tributes and albums followed.
In 2010, I was approached by multiple individuals for game soundtrack work. Even though the soundtrack work didn't affect it much, I began to scale back some of my responsibilities with MAGFest more seriously, and in 2011, I decided to take a much larger step into the background. I would stay on the board as a consultant and help people learn what they needed to run their departments, but I was no longer at the helm. At the same time, my part-time job, which was paying the bills, laid off all of their workers, and I suddenly found myself with a lot of spare time. I began writing Pretty Eight Machine, a Nine Inch Nails tribute, which I spent over a year on, and between that and the game soundtrack work, I proved to myself that I could put food on the table (if only barely) with music, and this is what I wanted to do next.
![Inverse Phase CTM Tracker][7]
Copyright Inverse Phase, Used with permission.
### Formy-Duval: What is your workspace like in terms of hardware and software?
Becker: In my DOS/Windows days, I used mostly FastTracker 2. In Linux, I replaced that with SoundTracker (not Karsten Obarski's original, but a GTK rewrite; see [soundtracker.org][8]). These days, SoundTracker is in a state of flux—although I still need to try the new GTK3 version—but [MilkyTracker][9] is a good replacement when I can't use SoundTracker. Good old FastTracker 2 also runs in DOSBox, if I really need the original. This was when I started using Linux, however, so this is stuff I figured out 20-25 years ago.
Within the last ten years, I've gravitated away from sample-based music and towards chiptunes—music synthesized by old sound chips from 8- and 16-bit game systems and computers. There is a very good cross-platform tool called [Deflemask][10] to write music for a lot of these systems. A few of the systems I want to write music for aren't supported, though, and Deflemask is closed source, so I've begun building my own music composition environment from scratch with Python and [Pygame][11]. I maintain my code tree using Git and will control hardware synthesizer boards using open source [KiCad][12].
### Formy-Duval: What projects are you currently focused on?
Becker: I work on game soundtracks and music commissions off and on. While that's going on, I've also been working on starting an electronic entertainment museum called [Bloop][13]. We're doing a lot of cool things with archival and inventory, but perhaps the most exciting thing is that we've been building exhibits with Raspberry Pis. They're so versatile, and it's weird to think that, if I had tried to do this even ten years ago, I wouldn't have had small single-board computers to drive my exhibits; I probably would have bolted a laptop to the back of a flat-panel!
### Formy-Duval: There are a lot more game platforms coming to Linux now, such as Steam, Lutris, and Play-on-Linux. Do you think this trend will continue, and these are here to stay?
Becker: As someone who's been gaming on Linux for 25 years—in fact, I was brought to Linux _because_ of games—I think I find this question harder than most people would. I've been running Linux-native games for decades, and I've even had to eat my "either a Linux solution exists or can be written" words back in the day, but eventually, I did, and wrote a Linux game.
Real talk: Android's been out since 2008. If you've played a game on Android, you've played a game on Linux. Steam's been available for Linux for eight years. The Steambox/SteamOS was announced only a year after Steam. I don't hear a whole lot about Lutris or Play-on-Linux, but I know they exist and hope they succeed. I do see a pretty big following for GOG, and I think that's pretty neat. I see a lot of quality game ports coming out of people like Ryan Gordon (icculus) and Ethan Lee (flibitijibibo), and some companies even port in-house. Game engines like Unity and Unreal already support Linux. Valve has incorporated Proton into the Linux version of Steam for something like two years now, so now Linux users don't even have to search for Linux-native versions of their games.
I can say that I think most gamers expect and will continue to expect the level of support they're already receiving from the retail game market. Personally, I hope that level goes up and not down!
_Learn more about Brendan's work as [Inverse Phase][14]._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/linux-open-source-music
Author: [Alan Formy-Duval][a]
Topic selection: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other21x_cc.png?itok=JJJ5z6aB (Wires plugged into a network switch)
[2]: http://icculus.org/pyddr/
[3]: http://magfest.org/
[4]: http://www.inversephase.com/
[5]: https://opensource.com/sites/default/files/uploads/inverse_phase_performance_bw.png (Inverse Phase performance photo)
[6]: https://en.wikipedia.org/wiki/Walnut_Creek_CDROM
[7]: https://opensource.com/sites/default/files/uploads/inversephase_ctm_tracker_screenshot.png (Inverse Phase CTM Tracker)
[8]: http://soundtracker.org
[9]: http://www.milkytracker.org
[10]: http://www.deflemask.com
[11]: http://www.pygame.org
[12]: http://www.kicad-pcb.org
[13]: http://bloopmuseum.com
[14]: https://www.inversephase.com


@@ -0,0 +1,222 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Scan Kubernetes for errors with KRAWL)
[#]: via: (https://opensource.com/article/20/2/kubernetes-scanner)
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakar)
Scan Kubernetes for errors with KRAWL
======
The KRAWL script identifies errors in Kubernetes pods and containers.
![Ship captain sailing the Kubernetes seas][1]
When you're running containers with Kubernetes, you often find that they pile up. This is by design. It's one of the advantages of containers: they're cheap to start whenever a new one is needed. You can use a front-end like OpenShift or OKD to manage pods and containers. Those make it easy to visualize what you have set up, and have a rich set of commands for quick interactions.
If a platform to manage containers doesn't fit your requirements, though, you can also get that information using only a Kubernetes toolchain, but there are a lot of commands you need for a full overview of a complex environment. For that reason, I wrote [KRAWL][2], a simple script that scans pods and containers under the namespaces on Kubernetes clusters and displays the output of events, if any are found. It can also be used as Kubernetes plugin for the same purpose. It's a quick and easy way to get a lot of useful information.
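Installing it as a plugin relies on kubectl's standard discovery mechanism: any executable named kubectl-&lt;name&gt; on your PATH becomes a kubectl subcommand. A minimal sketch, assuming you have saved the script below as krawl:

```
$ chmod +x krawl
$ sudo mv krawl /usr/local/bin/kubectl-krawl   # kubectl-<name> is picked up as a plugin
$ kubectl krawl                                # same scan, invoked as a plugin
```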
### Prerequisites
* You must have kubectl installed.
* Your cluster's kubeconfig must be either in its default location ($HOME/.kube/config) or exported (KUBECONFIG=/path/to/kubeconfig).
### Usage
```
$ ./krawl
```
![KRAWL script][3]
### The script
```
#!/bin/bash
# AUTHOR: Abhishek Tamrakar
# EMAIL: abhishek.tamrakar08@gmail.com
# LICENSE: Copyright (C) 2018 Abhishek Tamrakar
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.
##
#define the variables
KUBE_LOC=~/.kube/config
#define variables
KUBECTL=$(which kubectl)
GET=$(which egrep)
AWK=$(which awk)
red=$(tput setaf 1)
normal=$(tput sgr0)
# define functions
# wrapper for printing info messages
info()
{
  printf '\n\e[34m%s\e[m: %s\n' "INFO" "$@"
}
# cleanup when all done
cleanup()
{
  rm -f results.csv
}
# just check if the command we are about to call is available
checkcmd()
{
  #check if command exists
  local cmd=$1
  if [ -z "${!cmd}" ]
  then
    printf '\n\e[31m%s\e[m: %s\n' "ERROR"  "check if $1 is installed !!!"
    exit 1
  fi
}
get_namespaces()
{
  #get namespaces
  namespaces=( \
          $($KUBECTL get namespaces --ignore-not-found=true | \
          $AWK '/Active/ {print $1}' \
          ORS=" ") \
          )
#exit if namespaces are not found
if [ ${#namespaces[@]} -eq 0 ]
then
  printf '\n\e[31m%s\e[m: %s\n' "ERROR"  "No namespaces found!!"
  exit 1
fi
}
#get events for pods in errored state
get_pod_events()
{
  printf '\n'
  if [ ${#ERRORED[@]} -ne 0 ]
  then
      info "${#ERRORED[@]} errored pods found."
      for CULPRIT in ${ERRORED[@]}
      do
        info "POD: $CULPRIT"
        info
        $KUBECTL get events \
        --field-selector=involvedObject.name=$CULPRIT \
        -ocustom-columns=LASTSEEN:.lastTimestamp,REASON:.reason,MESSAGE:.message \
        --all-namespaces \
        --ignore-not-found=true
      done
  else
      info "0 pods with errored events found."
  fi
}
#define the logic
get_pod_errors()
{
  printf "%s %s %s\n" "NAMESPACE,POD_NAME,CONTAINER_NAME,ERRORS" &gt; results.csv
  printf "%s %s %s\n" "---------,--------,--------------,------" &gt;&gt; results.csv
  for NAMESPACE in ${namespaces[@]}
  do
    while IFS=' ' read -r POD CONTAINERS
    do
      for CONTAINER in ${CONTAINERS//,/ }
      do
        COUNT=$($KUBECTL logs --since=1h --tail=20 $POD -c $CONTAINER -n $NAMESPACE 2&gt;/dev/null| \
        $GET -c '^error|Error|ERROR|Warn|WARN')
        if [ $COUNT -gt 0 ]
        then
            STATE=("${STATE[@]}" "$NAMESPACE,$POD,$CONTAINER,$COUNT")
        else
        #catch pods in errored state
            ERRORED=($($KUBECTL get pods -n $NAMESPACE --no-headers=true | \
                awk '!/Running/ {print $1}' ORS=" ") \
                )
        fi
      done
    done&lt; &lt;($KUBECTL get pods -n $NAMESPACE --ignore-not-found=true -o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name --no-headers=true)
  done
  printf "%s\n" ${STATE[@]:-None} &gt;&gt; results.csv
  STATE=()
}
#define usage for seprate run
usage()
{
cat &lt;&lt; EOF
  USAGE: "${0##*/} &lt;/path/to/kube-config&gt;(optional)"
  This program is a free software under the terms of Apache 2.0 License.
  COPYRIGHT (C) 2018 Abhishek Tamrakar
EOF
exit 0
}
#check if basic commands are found
trap cleanup EXIT
checkcmd KUBECTL
#
#set the ground
if [ $# -lt 1 ]; then
  if [ ! -e ${KUBE_LOC} ] || [ ! -s ${KUBE_LOC} ]
  then
    info "A readable kube config location is required!!"
    usage
  fi
elif [ $# -eq 1 ]
then
  export KUBECONFIG=$1
elif [ $# -gt 1 ]
then
  usage
fi
#play
get_namespaces
get_pod_errors
printf '\n%40s\n' 'KRAWL'
printf '%s\n' '---------------------------------------------------------------------------------'
printf '%s\n' '  Krawl is a command line utility to scan pods and prints name of errored pods   '
printf '%s\n\n' ' +and containers within. To use it as kubernetes plugin, please check their page '
printf '%s\n' '================================================================================='
cat results.csv | sed 's/,/,|/g'| column -s ',' -t
get_pod_events
```
* * *
_This was originally published as the README in [KRAWL's GitHub repository][2] and is reused with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/kubernetes-scanner
Author: [Abhishek Tamrakar][a]
Topic selection: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/tamrakar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://github.com/abhiTamrakar/kube-plugins/tree/master/krawl
[3]: https://opensource.com/sites/default/files/uploads/krawl_0.png (KRAWL script)
[4]: mailto:abhishek.tamrakar08@gmail.com


@@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top hacks for the YaCy open source search engine)
[#]: via: (https://opensource.com/article/20/2/yacy-search-engine-hacks)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Top hacks for the YaCy open source search engine
======
Rather than adapting to someone else's vision, customize your search
engine for the internet you want with YaCy.
![Browser of things][1]
In my article about [getting started with YaCy][2], I explained how to install and start using the [YaCy][3] peer-to-peer search engine. One of the most exciting things about YaCy, however, is the fact that it's a local client. Each user owns and operates a node in a globally distributed search engine infrastructure, which means each user is in full control of how they navigate and experience the World Wide Web.
For instance, Google used to provide the URL google.com/linux as a shortcut to filter searches for Linux-related topics. It was a small feature that many people found useful, but [topical shortcuts were dropped][4] in 2011. 
YaCy makes it possible to customize your search experience.
### Customize YaCy
Once you've installed YaCy, navigate to your search page at **localhost:8090**. To customize your search engine, click the **Administration** button in the top-right corner (it may be concealed in a menu icon on small screens).
The admin panel allows you to configure how YaCy uses your system resources and how it interacts with other YaCy clients.
![YaCy profile selector][5]
For instance, to configure an alternative port and set RAM and disk usage, use the **First steps** menu in the sidebar. To monitor YaCy activity, use the **Monitoring** panel. Most features are discoverable by clicking through the panels, but here are some of my favorites.
### Search appliance
Several companies have offered [intranet search appliances][6], but with YaCy, you can implement it for free. Whether you want to search through your own data or to implement a search system for local file shares at your business, you can choose to run YaCy as an internal indexer for files accessible over HTTP, FTP, and SMB (Samba). People in your local network can use your personalized instance of YaCy to find shared files, and none of the data is shared with users outside your network.
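If you script against such an internal instance, YaCy also serves results over plain HTTP. A quick shell-level sanity check might look like the sketch below; it assumes the default port, YaCy's OpenSearch-style yacysearch.json endpoint, and jq for formatting, and the exact field names can vary between releases:

```
$ curl -s 'http://localhost:8090/yacysearch.json?query=linux&maximumRecords=5' \
    | jq '.channels[0].items[].link'
```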
### Network configuration
YaCy favors isolation and privacy by default. You can adjust how you connect to the peer-to-peer network in the **Network Configuration** panel, which is revealed by clicking the link located at the top of the **Use Case & Account** configuration screen.
![YaCy network configuration][7]
### Crawl a site
Peer-to-peer indexing is user-driven. There's no mega-corporation initiating searches on every accessible page on the internet, so a site isn't indexed until someone deliberately crawls it with YaCy.
The YaCy client provides two options to help you help crawl the web: you can perform a manual crawl, and you can make YaCy available for suggested crawls.
![YaCy advanced crawler][8]
#### Start a manual crawling job
A manual crawl is when you enter the URL of a site you want to index and start a YaCy crawl job. To do this, click the **Advanced Crawler** link in the **Production** sidebar. Enter one or more URLs, then scroll to the bottom of the page and enable the **Do remote indexing** option. This enables your client to broadcast the URLs it is indexing, so clients that have opted to accept requests can help you perform the crawl.
To start the crawl, click the **Start New Crawl Job** button at the bottom of the page. I use this method to index sites I use frequently or find useful.
Once the crawl job starts, YaCy indexes the URLs you enter and stores the index on your local machine. As long as you are running in senior mode (meaning your firewall permits incoming and outgoing traffic on port 8090), your index is available to YaCy users all over the globe.
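If your machine runs a host firewall, that port has to be reachable before peers can pull from your index; a minimal sketch assuming firewalld:

```
$ sudo firewall-cmd --permanent --add-port=8090/tcp
$ sudo firewall-cmd --reload
```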
#### Join in on a crawl
While some very dedicated YaCy senior users may crawl the internet compulsively, there are a _lot_ of sites out there in the world. It might seem impossible to match the resources of popular spiders and bots, but because YaCy has so many users, they can band together as a community to index more of the internet than any one user could do alone. If you activate YaCy to broadcast requests for site crawls, participating clients can work together to crawl sites you might not otherwise think to crawl manually.
To configure your client to accept jobs from others, click the **Advanced Crawler** link in the left sidebar menu. In the **Advanced Crawler** panel, click the **Remote Crawling** link under the **Network Harvesting** heading at the top of the page. Enable remote crawls by placing a tick in the checkbox next to the **Load** setting.
![YaCy remote crawling][9]
### YaCy monitoring and more
YaCy is a surprisingly robust search engine, providing you with the opportunity to theme and refine your experience in nearly any way you could want. You can monitor the activity of your YaCy client in the **Monitoring** panel, so you can get an idea of how many people are benefiting from the work of the YaCy community and also see what kind of activity it's generating for your computer and network.
![YaCy monitoring screen][10]
### Search engines make a difference
The more time you spend with the Administration screen, the more fun it becomes to ponder how the search engine you use can change your perspective. Your experience of the internet is shaped by the results you get back for even the simplest of queries. You might notice, in fact, how different one person's "internet" is from another person's when you talk to computer users from a different industry. For some people, the web is littered with ads and promoted searches and suffers from the tunnel vision of learned responses to queries. For instance, if someone consistently searches for answers about X, most commercial search engines will give weight to query responses that concern X. That's a useful feature on the one hand, but it occludes answers that require Y, even though that might be the better solution for a specific task.
As in real life, stepping outside a manufactured view of the world can be healthy and enlightening. Try YaCy, and see what you discover.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/yacy-search-engine-hacks
Author: [Seth Kenlon][a]
Topic selection: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things)
[2]: https://opensource.com/article/20/2/open-source-search-engine
[3]: https://yacy.net/
[4]: https://www.linuxquestions.org/questions/linux-news-59/is-there-no-more-linux-google-884306/
[5]: https://opensource.com/sites/default/files/uploads/yacy-profiles.jpg (YaCy profile selector)
[6]: https://en.wikipedia.org/wiki/Vivisimo
[7]: https://opensource.com/sites/default/files/uploads/yacy-network-config.jpg (YaCy network configuration)
[8]: https://opensource.com/sites/default/files/uploads/yacy-advanced-crawler.jpg (YaCy advanced crawler)
[9]: https://opensource.com/sites/default/files/uploads/yacy-remote-crawl-accept.jpg (YaCy remote crawling)
[10]: https://opensource.com/sites/default/files/uploads/yacy-monitor.jpg (YaCy monitoring screen)

View File

@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Dino is a Modern Looking Open Source XMPP Client)
[#]: via: (https://itsfoss.com/dino-xmpp-client/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Dino is a Modern Looking Open Source XMPP Client
======
_**Brief: Dino is a relatively new open-source XMPP client that tries to offer a good user experience while encouraging privacy-focused users to utilize XMPP for messaging.**_
### Dino: An Open Source XMPP Client
![][1]
[XMPP][2] (Extensible Messaging and Presence Protocol) is a decentralized network model that facilitates instant messaging and collaboration. Decentralized means there is no central server with access to your data; communication happens directly between the endpoints.
Some of us might call it “old school” tech, probably because XMPP clients usually offer a poor user experience, or simply because it takes time to get used to (or to set up).
That's when [Dino][3] comes to the rescue as a modern XMPP client that provides a clean, snappy user experience without compromising your privacy.
### The User Experience
![][4]
Dino does try to improve the user experience as an XMPP client, but it is worth noting that its look and feel will depend on your Linux distribution to some extent. Your icon theme or GNOME theme might make it look better or worse in your personal experience.
Technically, the user interface is quite simple and easy to use. So, I suggest you take a look at some of the [best icon themes][5] and [GNOME themes][6] for Ubuntu to tweak the look of Dino.
### Features of Dino
![Dino Screenshot][7]
You can expect to use Dino as an alternative to Slack, [Signal][8], or [Wire][9] for business or personal use.
It offers all the essential features you would need in a messaging application. Let's take a look at what you can expect from it:
* Decentralized Communication
* Support for public XMPP servers if you cannot set up your own server
* A UI similar to other popular messengers, so it's easy to use
* Image & file sharing
* Multiple accounts supported
* Advanced message search
* [OpenPGP][10] & [OMEMO][11] encryption supported
* Lightweight native desktop application
### Installing Dino on Linux
You may or may not find it listed in your software center. Dino does provide ready-to-use binaries for Debian (deb)- and Fedora (rpm)-based distributions.
**For Ubuntu:**
Dino is available in the universe repository on Ubuntu and you can install it using this command:
```
sudo apt install dino-im
```
Similarly, you can find packages for other Linux distributions on their [GitHub distribution packages page][12].
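On Fedora, for instance, installation should be just as quick (a sketch assuming the package is named dino in your distribution's repositories, as the packages page suggests):
```
sudo dnf install dino
```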
If you want the latest and greatest, you can also find both **.deb** and **.rpm** files (nightly builds) for Dino to install on your Linux distribution from [openSUSE's software webpage][13].
In either case, head to their [GitHub page][14] or click on the link below to visit the official site.
[Download Dino][3]
**Wrapping Up**
It works quite well without any issues (at the time of writing, after some quick testing). I'll try exploring more about it and will hopefully cover more XMPP-centric articles to encourage users to use XMPP clients and servers for communication.
What do you think about Dino? Would you recommend another open-source XMPP client that's potentially better than Dino? Let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/dino-xmpp-client/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-main.png?ssl=1
[2]: https://xmpp.org/about/
[3]: https://dino.im/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-xmpp-client.jpg?ssl=1
[5]: https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[6]: https://itsfoss.com/best-gtk-themes/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-screenshot.png?ssl=1
[8]: https://itsfoss.com/signal-messaging-app/
[9]: https://itsfoss.com/wire-messaging-linux/
[10]: https://www.openpgp.org/
[11]: https://en.wikipedia.org/wiki/OMEMO
[12]: https://github.com/dino/dino/wiki/Distribution-Packages
[13]: https://software.opensuse.org/download.html?project=network:messaging:xmpp:dino&package=dino
[14]: https://github.com/dino/dino

View File

@ -0,0 +1,246 @@
[#]: collector: (lujun9972)
[#]: translator: (svtter)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage multimedia files with Git)
[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Manage multimedia files with Git
======
Learn how to use Git to track large multimedia files in your projects, in the final article in our series on little-known uses of Git.
![video editing dashboard][1]
Git is a tool designed very specifically for source code version control, so it is rarely embraced by projects and industries that are not primarily text-based. Yet the advantages of an asynchronous workflow are tempting, especially in a growing number of industries that combine serious computing with seriously artistic ventures, including web design, visual effects, video games, publishing, currency design (yes, that is a real industry), education, and many more.
In this series leading up to Git's 14th anniversary, we have shared six little-known ways to use Git. To close the series, here is a look at software that brings the advantages of Git to managing multimedia files.
### The problem with managing multimedia files in Git
It is well known that Git does not work very well with non-text files, but that has never stopped anyone from trying. Here is an example of copying a photo file into Git:
```
$ du -hs
108K .
$ cp ~/photos/dandelion.tif .
$ git add dandelion.tif
$ git commit -m 'added a photo'
[master (root-commit) fa6caa7] added a photo
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 dandelion.tif
$ du -hs
1.8M .
```
Nothing unusual so far. Adding a 1.8MB photo to a directory makes the directory 1.8MB in size. So next, let's try deleting the file:
```
$ git rm dandelion.tif
$ git commit -m 'deleted a photo'
$ du -hs
828K .
```
Here we can see a problem: deleting a committed file still grows the repository to nearly eight times its original size (from 108K to 828K). We could run several tests to get a better average, but this simple demonstration is consistent with my expectation: committing non-text files costs little space at first, but the longer a project stays active, the more changes people make to static content, and the more fragments add up. When a Git repository grows large, the major cost tends to be speed: the time it takes to pull and push goes from the length of a sip of coffee to leaving you wondering whether your computer got kicked off the network.
What causes static content to keep inflating Git's size? Files made up of text allow Git to pull in only what has changed. Raster images and music files make no more sense to Git as text than they would to you if you looked at the binary data inside a .png or a .wav file. So Git simply takes all of the data and makes a new copy of it, even if only a single pixel of an image changed from one commit to the next.
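You can watch this happen for yourself. Here is a rough sketch of the experiment (sizes will vary with your Git version and with when Git packs its objects; dandelion.tif stands in for any large binary):
```
# compare how a text change and a binary change grow the repository
git init size-test && cd size-test
echo "draft one" > notes.txt
git add notes.txt && git commit -m 'text, first draft'
echo "draft two" > notes.txt
git commit -am 'text, second draft'
cp ~/photos/dandelion.tif photo.tif
git add photo.tif && git commit -m 'binary, first version'
# ...change a single pixel in an image editor, save, then:
git commit -am 'binary, second version'
git gc            # let Git compress whatever it can
du -hs .git       # the text history packs tiny; each binary revision stays large
```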
### Git-portal
在实践中许多多媒体项目不需要或者不想追踪媒体的历史记录。相对于文本后者代码的部分项目的媒体部分一般有一个不同的生命周期。媒体资源一般通过一个方向产生一张图片从铅笔草稿开始以数绘的形式抵达它的目的地。然后尽管文本能够回滚到早起的版本但是艺术只会一直向前。工程中的媒体很少被绑定到一个特定的版本。例外情况通常是反映数据集的图形通常是可以用基于文本的格式如SVG完成的表、图形或图表。
所以在许多同时包含文本无论是叙事散文还是代码和媒体的工程中Git 是一个用于文件管理的,可接受的解决方案,只要有一个在版本控制循环之外的游乐场来给艺术家游玩。
![Graphic showing relationship between art assets and Git][2]
A simple way to enable this is [Git-portal][3], a Bash script armed with Git hooks that moves static files out of Git's purview into a directory, replacing them with symlinks. Git commits the symlinks (sometimes called aliases or shortcuts), which are trivially small, so all that gets committed are the text files and the links that represent the media files. Because the stand-ins are links, the project continues to work as expected, because the local machine follows them to their "real" counterparts. Git-portal maintains the project structure when links change, so reversing the process is trivial should you decide that Git-portal is not right for your project, or should you need to build a version of the project without links (for distribution, for example).
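Conceptually, the trick is no more exotic than this (a simplified sketch of the mechanism, not Git-portal's actual code; painting.tif is a placeholder name):
```
# move the real file out of Git's sight and leave a symlink behind
mkdir -p _portal
mv painting.tif _portal/painting.tif
ln -s _portal/painting.tif painting.tif   # Git commits only this tiny link
```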
Git-portal also permits remote synchronization of static assets over rsync, so a user can set up a remote storage location as a centralized source of authority.
Git-portal is an ideal solution for multimedia projects, including video game and tabletop game projects, virtual reality projects with big 3D model renders and textures, [books with illustrations][4] and .odt exports, collaborative [blog websites][5], music projects, and much more. It is not uncommon for artists to perform version control in their applications, in the form of layers (in the graphics world) and tracks (in the music world), so Git adds nothing to the multimedia project files themselves. The power of Git is leveraged for the other parts of an artistic project (prose and narrative, project management, subtitle files, credits, marketing copy, documentation, and so on), and the power of structured remote backups is available to the artists.
#### Install Git-portal
RPM packages for Git-portal are available at <https://klaatu.fedorapeople.org/git-portal> for download and installation.
Alternatively, users can install Git-portal manually from its home on GitLab. It's just a Bash script and some Git hooks (which are also Bash scripts), but it requires a quick build process so that it knows where it has been installed:
```
$ git clone https://gitlab.com/slackermedia/git-portal.git git-portal.clone
$ cd git-portal.clone
$ ./configure
$ make
$ sudo make install
```
#### Use Git-portal
Git-portal is meant to be used alongside Git. This means, as with all large-file extensions to Git, that there are some extra steps to remember. You only need Git-portal when dealing with your media assets, though, so it's pretty easy to remember, unless you're used to treating large files the same way as text files (which is rare for Git users). There is one setup step you must take in a repository where you want to use Git-portal:
```
$ mkdir bigproject.git
$ cd !$
$ git init
$ git-portal init
```
Git-portal's **init** function creates a **_portal** directory in your Git repository and adds it to the .gitignore file.
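You can verify the result yourself (a sketch; the exact ignore pattern Git-portal writes may differ):
```
$ git-portal init
$ cat .gitignore
_portal
```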
Using Git-portal alongside Git in day-to-day work goes smoothly. A good example is a MIDI-based music project: the project files produced by the music workstation are text-based, but the MIDI files are binary data:
```
$ ls -1
_portal
song.1.qtr
song.qtr
song-Track_1-1.mid
song-Track_1-3.mid
song-Track_2-1.mid
$ git add song*qtr
$ git-portal song-Track*mid
$ git add song-Track*mid
```
If you look in the **_portal** directory, you'll find the original MIDI files. In their original locations they have been replaced by symlinks pointing into **_portal**, which lets the music workstation behave as expected:
```
$ ls -lG
[...] _portal/
[...] song.1.qtr
[...] song.qtr
[...] song-Track_1-1.mid -> _portal/song-Track_1-1.mid*
[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid*
[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid*
```
As with Git, you can also add a directory of files:
```
$ cp -r ~/synth-presets/yoshimi .
$ git-portal add yoshimi
Directories cannot go through the portal. Sending files instead.
$ ls -lG yoshimi
[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat*
```
Deletion works as expected, too, but when removing something that lives in **_portal**, you should use **git-portal rm** instead of **git rm**. Using Git-portal ensures that the file is also deleted from **_portal**:
```
$ ls
_portal/ song.qtr song-Track_1-3.mid@ yoshimi/
song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@
$ git-portal rm song-Track_1-3.mid
rm 'song-Track_1-3.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
```
If you forget to use Git-portal, then you have to delete the portal file manually:
```
$ git rm song-Track_1-1.mid
rm 'song-Track_1-1.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
$ trash _portal/song-Track_1-1.mid
```
Git-portal's only other function is to list all current symlinks and find any that have become broken, which can sometimes happen when files are moved around within the project directory:
```
$ mkdir foo
$ mv yoshimi foo
$ git-portal status
bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid
bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat
```
If you are using Git-portal for a personal project and maintaining your own backups, this is technically all you need to know about Git-portal. If you want to add a collaborator, or you want Git-portal to manage backups the way Git does, you can create a remote.
#### Add Git-portal remotes
Adding a remote location for Git-portal is done through Git's existing remote function. Git-portal implements Git hooks, scripts hidden in the repository's .git directory, to look for a directory starting with **_portal** on your remote host. If it finds one, it attempts to use **rsync** to synchronize files with the remote location. Git-portal performs this task when the user does a Git push and when a Git merge happens (or a pull, which is actually a fetch plus an automatic merge).
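The heart of such a hook is little more than an rsync call. A stripped-down illustration (this is not Git-portal's actual script) might look like:
```
#!/usr/bin/env bash
# simplified sketch of a sync hook, not Git-portal's real code
portal_remote=$(git remote get-url _portal 2>/dev/null)
if [ -n "$portal_remote" ]; then
    rsync -av _portal/ "$portal_remote/"   # send local media to the server
    rsync -av "$portal_remote/" _portal/   # fetch anything new from it
fi
```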
If you have only ever cloned Git repositories, you may never have added a remote yourself. It's a standard Git procedure:
```
$ git remote add origin git@gitdawg.com:seth/bigproject.git
$ git remote -v
origin  git@gitdawg.com:seth/bigproject.git (fetch)
origin  git@gitdawg.com:seth/bigproject.git (push)
```
The name **origin** is a popular convention for your primary Git repository, so it makes sense to use it for your Git data. Your Git-portal data, however, is stored separately, so you must create a second remote to tell Git-portal where to push to and pull from. Depending on your Git host, you may need a separate server, because gigabytes of media assets are unlikely to be accepted by a Git host with space restrictions, or maybe your server only permits access to your Git repository and not to an external storage directory:
```
$ git remote add _portal seth@example.com:/home/seth/git/bigproject_portal
$ git remote -v
origin  git@gitdawg.com:seth/bigproject.git (fetch)
origin  git@gitdawg.com:seth/bigproject.git (push)
_portal seth@example.com:/home/seth/git/bigproject_portal (fetch)
_portal seth@example.com:/home/seth/git/bigproject_portal (push)
```
You may not want to give everyone a private account on your server, and you don't need to. To provide permissions to the large-file assets in the repository on the server, you can run a Git frontend like **[Gitolite][8]**, or you can use **rrsync** (restricted rsync).
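With rrsync, a helper script that ships with rsync, a single line in the server's ~/.ssh/authorized_keys can confine a collaborator's key to the portal directory (the path to rrsync, the portal path, and the key are all illustrative):
```
command="/usr/bin/rrsync /home/seth/git/bigproject_portal",restrict ssh-ed25519 AAAA...key... artist@workstation
```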
Now you can push your Git data to your remote Git repository and your Git-portal data to your remote portal:
```
$ git push origin HEAD
master destination detected
Syncing _portal content...
sending incremental file list
sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec
total size is 60,358,015 speedup is 6,474.10
Syncing _portal content to example.com:/home/seth/git/bigproject_portal
```
If you have Git-portal installed and a **_portal** remote configured, your **_portal** directory will be synchronized, getting new content from the server and sending fresh content with every push. While you don't strictly have to do a Git commit and push to sync with the server (a user could just use rsync directly), I find it useful to commit artistic changes: it integrates artists and their digital assets into the rest of the workflow, and it provides useful metadata about project progress and velocity.
### Other options
If Git-portal is too simple for you, there are other options for managing large files with Git. [Git Large File Storage][9] (LFS) is a fork of a defunct project called git-media and is maintained and supported by GitHub. It requires special commands (like **git lfs track** to protect large files from being tracked by Git) and requires the user to maintain a .gitattributes file to specify which files in the repository are tracked by LFS. It supports _only_ HTTP and HTTPS hosts for large files, so your LFS server must be configured so users can authenticate over HTTP rather than SSH or rsync.
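If you go the LFS route, the day-to-day commands look something like this (a sketch assuming git-lfs is installed and your host supports it):
```
git lfs install                  # enable the LFS hooks once per repository
git lfs track "*.mid"            # record the pattern in .gitattributes
git add .gitattributes song-Track_1-1.mid
git commit -m 'track MIDI files with Git LFS'
```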
A more flexible option than LFS is [git-annex][10], which you can learn more about in my article about [managing binary blobs in Git][11] (ignore the sections about git-media, the now-defunct project that Git LFS succeeded without carrying everything over). Git-annex is a flexible and elegant solution with a detailed system for adding, removing, and moving large files within a repository. Because it's flexible and powerful, there are lots of new commands and rules to learn, so take a look at its [documentation][12].
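A first taste of git-annex might look like this (a minimal sketch; see its walkthrough for real workflows):
```
git annex init "studio workstation"   # describe this clone to the annex
git annex add song-Track_1-1.mid      # checksum the file and symlink it into the annex
git commit -m 'add MIDI track to the annex'
git annex get song-Track_1-1.mid      # in another clone, fetch the actual content
```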
然而如果你的需求很简单你可能更加喜欢整合已有技术来进行简单且明显任务的解决方案Git-portal 可能是对于工作而言比较合适的工具。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/manage-multimedia-files-git
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[svtter](https://github.com/svtter)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git)
[3]: http://gitlab.com/slackermedia/git-portal.git
[4]: https://www.apress.com/gp/book/9781484241691
[5]: http://mixedsignals.ml
[6]: mailto:git@gitdawg.com
[7]: mailto:seth@example.com
[8]: https://opensource.com/article/19/4/file-sharing-git
[9]: https://git-lfs.github.com/
[10]: https://git-annex.branchable.com/
[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
[12]: https://git-annex.branchable.com/walkthrough/

View File

@ -1,75 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's HTTPS for secure computing?)
[#]: via: (https://opensource.com/article/20/1/confidential-computing)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
用于安全计算的 HTTPS 是什么?
======
在默认的情况下,网站的安全性还不足够。
![Secure https browser][1]
在过去的几年里,寻找一个只以 "http://…" 开头的网站变得越来越难,这是因为行业终于意识到,网络安全是 “一件事”,同时也是因为客户端和服务端之间建立和使用 https 连接变得更加容易了。类似的转变可能正以不同的方式发生在云计算、边缘计算、物联网、区块链,人工智能、机器学习等领域。长久以来,我们都知道我们应该对存储的静态数据和在网络中传输的数据进行加密,但是在使用和处理数据的时候对它进行加密是困难且昂贵的。可信计算-使用例如受信任的执行环境TEEs这样的硬件功能来提供数据和算法这种类型的保护-保护主机系统中的或者易受攻击的环境中的数据。
我已经写了几次关于 TEEs 的文章,当然,还有我和 Nathaniel McCallum 共同创立的 Enarx 项目(要看 Enarx 项目的所有人(一个任务)和 Enarx 的多平台示例。Enarx 使用 TEEs 来提供独立于平台和语言的部署平台以此来让你能够安全地将敏感应用或者敏感组件例如微服务部署在你不信任的主机上。Enarx 当然是完全开源的(或许会有人感兴趣,我们使用的是 Apache 2.0 许可证)。能够在你不信任的主机上运行工作程序,这是可信计算的承诺,它为静态敏感数据和传输中数据的使用扩展了常规做法:
* **存储:** 你要加密你的静态数据,因为你不完全信任你的基础存储架构。
* **网络:** 你要加密你正在传输中的数据,因为你不完全信任你的基础网络架构。
* **计算:** 你要加密你正在使用中的数据,因为你不完全信任你的基础计算架构。
关于信任,我有非常多的话想说,而且,上述说法里的单词“完全”是很重要的(在重新读我写的这篇文章的时候,我新加了这个单词)。在所有的情况下,你必须在一定程度上信任你的基础设施,无论是传递你的数据包还是存储你的数据块,例如,对于计算基础架构,你必须要去信任 CPU 和与之关联的固件,这是因为如果你不信任他们,你就无法真正地进行计算(现在有一些诸如同态加密一类的技术,这些技术正在开始提供一些机会,但是它们依然是有限的,这些技术还不够成熟)。
你在考虑是否应该完全信任 cpu 时,问题有时也会随之一起到来,还会同时发现一些安全问题,并且是否无论主机在哪都应该完全免受物理攻击。
这两个问题的回答都是“不”,但是在考虑到大规模可用性和普遍推广开的成本,这已经是我们当前拥有的最好的技术了。为了解决第二个问题,没有人去假装这项技术(或者任何的其他技术)是完全安全的:我们需要做的是思考我们的威胁模型并确定这个情况下的 TEEs 是否为我们的特殊需求提供了足够的安全防护。关于第一个问题Enarx 采用的模型是在部署时就对你是否信任一个特定的 CPU 组做出决定。举个例子,如果供应商 Q 的 R 代芯片被发现有漏洞,可以很简单地说“我拒绝将我的工作内容部署到 Q 的 R 代芯片上去,但是仍然可以部署到 Q 的 S 型号T 型号和 U 型号的芯片以及任何 PM 和 N 供应商的任何芯片上去。”
我认为这里发生了三处改变,这些改变引起了人们现在对机密计算的兴趣和采用。
1. **硬件可用性:** 仅仅在过去的 6 到 12 个月里,支持 TEEs 的硬件才开始变得广泛可用,这会儿市场上的主要例子是 Intel 的 SGX 和 AMD 的 SEV。我们期望在未来可以看到支持 TEE 的硬件的其他例子。
2. **行业准备:** 就像上云越来越多地被接受作为应用程序部署的模型,监管机构和立法机构也在提高各类组织保护其管理的数据的要求。组织开始强烈呼吁在不受信任的主机-或者更确切地说在他们不能完全信任且带有敏感数据的主机上运行敏感程序或者是处理敏感数据的应用程序的方法。这应该是不足为奇的如果芯片制造商看不到这项技术的市场他们就不会投太多的钱在这项技术上。Linux 基金会的机密计算联盟CCC的成立就是一个行业如何有志于发现使用加密计算的公共模型并且鼓励开源项目使用这些技术的案例。
3. **开源:** 就像区块链一样,加密计算是使用起来绝对不费吹灰之力的开源技术之一。如果你要运行敏感程序,你需要去信任正在为你运行的程序。不仅仅是 CPU 和固件,同样还有在 TEE 内支持你工作扩展的框架。可以很好地说,“我不信任主机机器和它上面的软件栈,所以我打算使用 TEE”但是如果你不够了解 TEE 软件环境那你就是将一种不透明的软件换成另外一种。TEEs 的开源支持将允许你或者社区-实际上是你和社区-以一种专有软件不可能实现的方式来检查和审计你所运行的程序。这就是为什么 CCC 位于 Linux 基金会(这个基金会致力于开放式开发模型)并鼓励 TEE 相关的软件项目加入且成为开源项目(如果他们还没有成为开源)。
我认为,在过去的 15 到 20 年里,硬件可用性,行业准备和开源已成为推动技术改变的驱动力。区块链,人工智能,云计算,互联网计算,大数据和互联网商务都是这三个点同时发挥作用的例子,并且在行业内带来了巨大的改变。
在一般情况下,安全是我们这数十年来听到的一种承诺,并且其仍然未被实现。老实说,我不确定它未来会不会实现。但是随着新技术的到来,特定用例的无处不在的安全变得越来越实际,并且在业内受到越来越多的期待。这样看起来,机密计算似乎已准备好成为成为下一个重大变化-而你,我亲爱的读者,可以一起来加入到这场革命(毕竟它是开源的)。
* * *
1. Enarx是红帽发起的是一个 CCC 项目。
* * *
这篇文章最初是发布在 Alice, Eve, and Bob 上的,这是得到了作者许可的重发。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/confidential-computing
作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/secure_https_url_browser.jpg?itok=OaPuqBkG (Secure https browser)
[2]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/
[3]: https://enarx.io/
[4]: https://aliceevebob.com/2019/08/20/enarx-for-everyone-a-quest/
[5]: https://aliceevebob.com/2019/10/29/enarx-goes-multi-platform/
[6]: https://aliceevebob.com/2018/02/20/there-are-no-absolutes-in-security/
[7]: https://confidentialcomputing.io/
[8]: tmp.VEZpFGxsLv#1
[9]: https://aliceevebob.com/2019/12/03/confidential-computing-the-new-https/