Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-06-11 00:41:49 +08:00
commit f684909741
35 changed files with 4304 additions and 102 deletions


@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10957-1.html)
[#]: subject: (How we built a Linux desktop app with Electron)
[#]: via: (https://opensource.com/article/19/4/linux-desktop-electron)
[#]: author: (Nils Ganther https://opensource.com/users/nils-ganther)
How we built a Linux desktop app with Electron
======
> This is the story of building an open source email service that runs natively on the Linux desktop, with the help of the Electron framework.
![document sending](https://img.linux.net.cn/data/attachment/album/201906/10/123114abz0lvbllktkulx7.jpg)
[Tutanota][2] is a secure, open source email service that is usable in a browser and also has iOS and Android apps. Its client code is published under GPLv3, and the Android app is available on [F-Droid][3] so everyone can use a version that is completely independent of Google.
Because Tutanota focuses on open source and develops on Linux clients, we wanted to release a desktop app for Linux and other platforms. As a small team, we quickly ruled out building native apps for Linux, Windows, and macOS, and decided to build our app with [Electron][4].
Electron is the best fit for anyone who wants to ship visually consistent, cross-platform applications quickly, especially if you already have a web app that you want to free from the shackles of the browser API. Tutanota is exactly such a case.
Tutanota is based on [SystemJS][5] and [Mithril][6] and aims to offer simple, secure email communication for everybody. As such, it has to provide a lot of the standard features users expect from any email client.
Some of those features, such as basic push notifications, search for text and contacts, and support for two-factor authentication, are easy to offer in the browser thanks to modern APIs and standards. Other features, such as automatic backups or IMAP support without relying on our servers as a middleman, need less-restricted access to system resources, which is exactly what the Electron framework offers.
Although some criticize Electron as "just a basic wrapper," it has obvious benefits:
* Electron enables you to quickly build web apps for the Linux, Windows, and macOS desktops. In fact, most Linux desktop apps are built with Electron.
* Electron makes it easy to bring the desktop client to feature parity with the web app.
* Once the desktop app ships, you can use freed-up development capacity to add desktop-specific features that enhance usability and security.
* Last but not least, it's a great way to make the app feel native and integrate into the user's system while maintaining its identity.
### Meeting users' needs
Tutanota does not rely on big investor money; it is a community-driven project. We grow our team organically as more and more users upgrade from our free service to paid plans. Listening to what users want is not only important to us, it is essential to our success.
A desktop client was the [most-wanted feature][7] among Tutanota users, and we are proud that we can now offer free beta desktop clients to all of them. (We also implemented another highly requested feature, [search on encrypted data][8], but that is a topic for another time.)
We like giving users signed versions of Tutanota and supporting features that are impossible in the browser, such as push notifications via a background process. Now we plan to add more desktop-specific features, such as IMAP support (without relying on our servers to act as a proxy), automatic backups, and offline availability.
We chose Electron because its combination of Chromium and Node.js suited our small development team best, as it required only minimal changes to our web app. It was especially helpful that we could start out using the browser APIs for everything and gradually replace those components with more native versions as we went along. This approach was particularly handy for attachment downloads and notifications.
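As a rough sketch of that "browser API first, native later" idea (illustrative TypeScript only, not Tutanota's actual code; the `nativeBridge` object and the `isDesktop()` check are made up for this example):

```typescript
// Sketch: the app talks to a small Notifier interface, so the browser implementation
// can later be swapped for a native one without touching the rest of the code.
export interface Notifier {
  notify(title: string, body: string): void;
}

// Works everywhere: the plain HTML5 Notification API the web client already used.
const browserNotifier: Notifier = {
  notify: (title, body) => { new Notification(title, { body }); },
};

// Desktop-only replacement: forward the request to the Node.js side over IPC.
// 'nativeBridge' is a hypothetical object exposed by a preload script.
const desktopNotifier: Notifier = {
  notify: (title, body) => { (window as any).nativeBridge.send('show-notification', { title, body }); },
};

export const notifier: Notifier = isDesktop() ? desktopNotifier : browserNotifier;

function isDesktop(): boolean {
  return typeof (window as any).nativeBridge !== 'undefined';
}
```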
### Tuning security
We know some people are concerned about Electron's security, but we found Electron's options for fine-tuning access in a web app quite satisfying. Resources such as Electron's [security documentation][9] and Luca Carettoni's [Electron security checklist][10] help prevent catastrophic mishaps with untrusted content in a web app.
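For illustration, here is a minimal sketch of the kind of lockdown those resources recommend: a `BrowserWindow` with Node integration disabled and a preload script as the only bridge to native capabilities. The file names are placeholders, not Tutanota's setup.

```typescript
// main.ts: minimal sketch of a hardened BrowserWindow (illustrative only)
import { app, BrowserWindow } from 'electron';
import * as path from 'path';

app.whenReady().then(() => {
  const win = new BrowserWindow({
    width: 1200,
    height: 800,
    webPreferences: {
      nodeIntegration: false,   // web content gets no direct Node.js access
      contextIsolation: true,   // isolate the preload script from page scripts
      sandbox: true,            // run the renderer in the Chromium sandbox
      preload: path.join(__dirname, 'preload.js'), // expose only a reviewed bridge
    },
  });
  win.loadFile('index.html');   // load only local, bundled content
});
```

With a setup like this, the web app keeps running as before, while anything that needs system access has to go through explicitly exposed IPC calls.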
### Implementing specific features
The Tutanota web client was built from the start with a solid protocol for interprocess communication. We use web workers to keep the user interface (UI) responsive while encrypting data or making requests. This came in handy when we started implementing our mobile apps, which use the same protocol to communicate between the native part and the web view.
That is why, when we started building the desktop clients, a lot of the bindings for native push notifications, opening mailboxes, and working with the filesystem already existed, so only the native (Node.js) side had to be implemented.
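As a rough illustration of what the Node.js side of such a protocol can look like (the channel name, request shape, and `saveAttachment` operation are invented for this sketch; it uses modern Electron's `ipcMain.handle`):

```typescript
// Sketch of the native (Node.js) side answering requests from the web client over IPC.
import { ipcMain } from 'electron';
import { promises as fs } from 'fs';

ipcMain.handle('native-request', async (_event, request: { type: string; payload: any }) => {
  switch (request.type) {
    case 'saveAttachment':
      // The web side sends already-decrypted bytes; the native side owns filesystem access.
      await fs.writeFile(request.payload.path, Buffer.from(request.payload.data));
      return { ok: true };
    default:
      return { ok: false, error: `unknown request: ${request.type}` };
  }
});
```

On the web side, the same request objects that already travel between the UI and its web workers can simply be forwarded to this handler from a preload script via `ipcRenderer.invoke('native-request', ...)`.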
Another convenience is that our build process uses the [Babel transpiler][11], which allows us to write the entire codebase in modern ES6 JavaScript and mix and match functional modules between the different environments. This enabled us to quickly adapt the code for the Electron-based desktop apps. However, we did run into a few challenges.
### Overcoming challenges
Although Electron allows us to integrate with the different platforms' desktop environments quite easily, you should not underestimate the time investment! In the end, it was the little things that took up much more time than we expected, but they were also crucial to finishing the desktop client project.
Platform-specific code caused most of the obstacles:
* Window management and the tray, for example, are still handled in subtly different ways on the three platforms.
* Registering Tutanota as the default mail program and setting up autostart required diving into the Windows Registry while making sure to prompt the user for admin access in a [UAC][12]-compatible way.
* We needed to use Electron's API for shortcuts and menus to offer standard features such as copy, paste, undo, and redo (a minimal sketch follows this list).
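The sketch below shows the general approach (illustrative only): Electron's built-in menu "roles" supply the platform-standard behavior and keyboard shortcuts for these edit actions.

```typescript
// Sketch: standard edit shortcuts via Electron menu roles.
import { app, Menu } from 'electron';

app.whenReady().then(() => {
  const menu = Menu.buildFromTemplate([
    {
      label: 'Edit',
      submenu: [
        { role: 'undo' },   // each role maps to the platform's native behavior
        { role: 'redo' },
        { type: 'separator' },
        { role: 'cut' },
        { role: 'copy' },
        { role: 'paste' },
      ],
    },
  ]);
  Menu.setApplicationMenu(menu);  // also registers the usual Ctrl/Cmd shortcuts
});
```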
The process was complicated a bit by users' expectations of certain, sometimes not directly compatible, behaviors of applications on different platforms. Making the three versions feel native took some iteration and even some modest additions to the web app, such as offering a text search similar to the one in the browser.
### Summary
Our experience with Electron was largely positive, and we completed the project in less than four months. Despite some rather time-consuming features, we were surprised by how easily we could ship a beta version of the [Tutanota desktop client][13] for Linux. If you are interested, you can dive into the source code on [GitHub][14].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/linux-desktop-electron
Author: [Nils Ganther][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/nils-ganther
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ (document sending)
[2]: https://tutanota.com/
[3]: https://f-droid.org/en/packages/de.tutao.tutanota/
[4]: https://electronjs.org/
[5]: https://github.com/systemjs/systemjs
[6]: https://mithril.js.org/
[7]: https://tutanota.uservoice.com/forums/237921-general/filters/top?status_id=1177482
[8]: https://tutanota.com/blog/posts/first-search-encrypted-data/
[9]: https://electronjs.org/docs/tutorial/security
[10]: https://www.blackhat.com/docs/us-17/thursday/us-17-Carettoni-Electronegativity-A-Study-Of-Electron-Security-wp.pdf
[11]: https://babeljs.io/
[12]: https://en.wikipedia.org/wiki/User_Account_Control
[13]: https://tutanota.com/blog/posts/desktop-clients/
[14]: https://www.github.com/tutao/tutanota


@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco to buy IoT security, management firm Sentryo)
[#]: via: (https://www.networkworld.com/article/3400847/cisco-to-buy-iot-security-management-firm-sentryo.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco to buy IoT security, management firm Sentryo
======
Buying Sentryo will give Cisco support for anomaly and real-time threat detection for the industrial internet of things.
![IDG Worldwide][1]
Looking to expand its IoT security and management offerings, Cisco plans to acquire [Sentryo][2], a company based in France that offers anomaly detection and real-time threat detection for Industrial Internet of Things ([IIoT][3]) networks.
Founded in 2014, Sentryo's products include ICS CyberVision, an asset inventory, network monitoring, and threat intelligence platform, and CyberVision network edge sensors, which analyze network flows.
**More on IoT:**
* [What is the IoT? How the internet of things works][4]
  * [What is edge computing and how it's changing the network][5]
* [Most powerful Internet of Things companies][6]
* [10 Hot IoT startups to watch][7]
* [The 6 ways to make money in IoT][8]
* [What is digital twin technology? [and why it matters]][9]
* [Blockchain, service-centric networking key to IoT success][10]
* [Getting grounded in IoT networking and security][11]
* [Building IoT-ready networks must become a priority][12]
* [What is the Industrial IoT? [And why the stakes are so high]][13]
“We have incorporated Sentryo's edge sensor and our industrial networking hardware with Cisco's IOx application framework,” wrote Rob Salvagno, Cisco vice president of Corporate Development and Cisco Investments, in a [blog][14] about the proposed buy.
“We believe that connectivity is foundational to IoT projects and by unleashing the power of the network we can dramatically improve operational efficiencies and uncover new business opportunities. With the addition of Sentryo, Cisco can offer control systems engineers deeper visibility into assets to optimize, detect anomalies and secure their networks.”
Gartner [wrote][15] of Sentryo's system: “ICS CyberVision product provides visibility into its customers' OT networks in a way all OT users will understand, not just technical IT staff. With the increased focus of both hackers and regulators on industrial control systems, it is vital to have the right visibility of an organization's OT. Many OT networks not only are geographically dispersed, but also are complex and consist of hundreds of thousands of components.”
Sentryo's ICS CyberVision lets enterprises ensure continuity, resilience and safety of their industrial operations while preventing possible cyberattacks, said [Nandini Natarajan][16], industry analyst at Frost & Sullivan. "It automatically profiles assets and communication flows using a unique 'universal OT language' in the form of tags, which describe in plain text what each asset is doing. ICS CyberVision gives anyone immediate insights into an asset's role and behaviors; it offers many different analytic views leveraging artificial intelligence algorithms to let users deep-dive into the vast amount of data a typical industrial control system can generate. Sentryo makes it easy to see important or relevant information."
In addition, Sentryo's platform uses deep packet inspection (DPI) to extract information from communications among industrial assets, Natarajan said. This DPI engine is deployed through an edge-computing architecture that can run either on Sentryo sensor appliances or on network equipment that is already installed. Thus, Sentryo can embed visibility and cybersecurity features in the industrial network rather than deploying an out-of-band monitoring network, Natarajan said.
Sentryo's technology will broaden [Cisco's][18] overarching IoT plan. In January, it [launched][19] a family of switches, software, developer tools, and blueprints to meld IoT and industrial networking with [intent-based networking][20] (IBN) and classic IT security, monitoring, and application-development support.
The new platforms can be managed by Cisco's DNA Center and Cisco IoT Field Network Director, letting customers fuse their IoT and industrial-network control with their business IT world.
DNA Center is Cisco's central management tool for enterprise networks, featuring automation capabilities, assurance settings, fabric provisioning, and policy-based segmentation. It is also at the center of the company's IBN initiative, offering customers the ability to automatically implement network and policy changes on the fly and ensure data delivery. The IoT Field Network Director is software that manages multiservice networks of Cisco industrial and connected grid routers and endpoints.
Liz Centoni, senior vice president and general manager of Cisco's IoT business group, said the company expects the [Sentryo technology to help][21] IoT customers in a number of ways:
Network-enabled, passive DPI capabilities to discover IoT and OT assets, and establish communication patterns between devices and systems. Sentryo's sensor is natively deployable on Cisco's IOx framework and can be built into the industrial network these devices run on instead of adding additional hardware.
As device identification and communication patterns are created, Cisco will integrate this with DNA Center and Identity Services Engine (ISE) to allow customers to easily define segmentation policy. This integration will allow OT teams to leverage IT security teams' expertise to secure their environments without risk to the operational processes.
With these IoT devices lacking modern embedded software and security capabilities, segmentation will be the key technology to allow communication from operational assets to the rightful systems, and reduce the risk of cybersecurity incidents like those seen with [WannaCry][22] and [Norsk Hydro][23].
According to [Crunchbase][24], Sentryo has an estimated $3.5 million in annual revenue and competes most closely with Cymmetria, Team8, and Indegy. The acquisition is expected to close before the end of Cisco's Q1 of fiscal year 2020 -- October 26, 2019. Financial details of the acquisition were not disclosed.
Sentryo is Cisco's second acquisition this year. It bought Singularity for its network analytics technology in January. In 2018, Cisco bought six companies, including security software maker Duo.
Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400847/cisco-to-buy-iot-security-management-firm-sentryo.html
Author: [Michael Cooney][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/09/nwan_019_iiot-100771131-large.jpg
[2]: https://www.sentryo.net/
[3]: https://www.networkworld.com/article/3243928/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[4]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[5]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[6]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[7]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[8]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[9]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[10]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[11]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[12]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[13]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[14]: https://blogs.cisco.com/news/cisco-industrial-iot-news
[15]: https://www.globenewswire.com/news-release/2018/06/28/1531119/0/en/Sentryo-Named-a-Cool-Vendor-by-Gartner.html
[16]: https://www.linkedin.com/pulse/industrial-internet-things-iiot-decoded-nandini-natarajan/
[17]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[18]: https://www.cisco.com/c/dam/en_us/solutions/iot/ihs-report.pdf
[19]: https://www.networkworld.com/article/3336454/cisco-goes-after-industrial-iot.html
[20]: https://www.networkworld.com/article/3202699/what-is-intent-based-networking.html
[21]: https://blogs.cisco.com/news/securing-the-internet-of-things-cisco-announces-intent-to-acquire-sentryo
[22]: https://blogs.cisco.com/security/talos/wannacry
[23]: https://www.securityweek.com/norsk-hydro-may-have-lost-40m-first-week-after-cyberattack
[24]: https://www.crunchbase.com/organization/sentryo#section-web-traffic-by-similarweb
[25]: https://www.facebook.com/NetworkWorld/
[26]: https://www.linkedin.com/company/network-world


@ -0,0 +1,116 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Zorin OS Becomes Even More Awesome With Zorin 15 Release)
[#]: via: (https://itsfoss.com/zorin-os-15-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Zorin OS Becomes Even More Awesome With Zorin 15 Release
======
Zorin OS has always been known as one of the [beginner-focused Linux distros][1] out there. Yes, it may not be the most popular, but it sure is a good distribution, especially for Windows migrants.
A few years back, I remember a friend of mine kept insisting that I install [Zorin OS][2]. Personally, I didn't like the UI back then. But now that Zorin OS 15 is here, I have more reasons to install it as my primary OS.
Fret not; in this article, we'll talk about everything that you need to know.
### New Features in Zorin 15
Let's see the major changes in the latest release of Zorin. Zorin 15 is based on Ubuntu 18.04.2, and thus it brings performance improvements under the hood. Other than that, there are several UI (user interface) improvements.
#### Zorin Connect
![Zorin Connect][3]
Zorin OS 15's main highlight is Zorin Connect. If you have an Android device, you are in for a treat. Similar to [PushBullet][4], [Zorin Connect][5] integrates your phone with the desktop experience.
You get to sync your smartphone's notifications on your desktop while also being able to reply to them. Heck, you can also reply to SMS messages and view those conversations.
In addition to these, you get the following abilities:
* Share files and web links between devices
* Use your phone as a remote control for your computer
* Control media playback on your computer from your phone, and pause playback automatically when a phone call arrives
As mentioned in their [official announcement post][6], the data transmission will be on your local network and no data will be transmitted to the cloud. To access Zorin Connect, navigate your way through Zorin menu > System Tools > Zorin Connect.
[Get ZORIN CONNECT ON PLAY STORE][5]
#### New Desktop Theme (with dark mode!)
![Zorin Dark Mode][7]
I'm all in when someone mentions “dark mode” or “dark theme”. For me, this is the best thing that comes baked into Zorin OS 15.
Suggested read: [Necuno is a New Open Source Smartphone Running KDE][8]
It's so pleasing to my eyes when I enable dark mode on anything. Are you with me?
It's not just a dark theme; the UI is a lot cleaner and more intuitive, with subtle new animations. You can find all the settings in the built-in Zorin Appearance app.
#### Adaptive Background & Night Light
You get an option to let the background adapt to the brightness of the environment at every hour of the day. Also, you can turn on the night light mode if you don't want the blue light to strain your eyes.
#### To-do app
![Todo][9]
I always wanted this so that I don't have to use a separate service that offers a Linux client just to add my tasks. It's good to see a built-in app with integration support for Google Tasks and Todoist.
#### There's More?
Yes! Other major changes include support for Flatpak, a touch layout for convertible laptops, a do-not-disturb (DND) mode, and some redesigned apps (Settings, LibreOffice) to give you a better user experience.
If you want the detailed list of changes along with the minor improvements, you can follow the [announcement post][6]. If you are already a Zorin user, you would notice that they have refreshed their website with a new look as well.
### Download Zorin OS 15
**Note**: _Direct upgrades from Zorin OS 12 to 15, without needing to re-install the operating system, will be available later this year._
In case you didn't know, there are three versions of Zorin OS: Ultimate, Core, and Lite.
If you want to support the devs and the project while unlocking the full potential of Zorin OS, you should get the Ultimate edition for $39.
If you just want the essentials, the Core edition will do just fine (and you can download it for free). If you have an old computer, the Lite version is the one to go with.
[DOWNLOAD ZORIN OS 15][10]
**What do you think of Zorin 15?**
Suggested read: [Ubuntu 14.04 Codenamed Trusty Tahr][11]
I'm definitely going to give it a try as my primary OS, fingers crossed. What about you? What do you think about the latest release? Feel free to let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/zorin-os-15-release/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-beginners/
[2]: https://zorinos.com/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/zorin-connect.jpg?fit=800%2C473&ssl=1
[4]: https://www.pushbullet.com/
[5]: https://play.google.com/store/apps/details?id=com.zorinos.zorin_connect&hl=en_IN
[6]: https://zoringroup.com/blog/2019/06/05/zorin-os-15-is-here-faster-easier-more-connected/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/zorin-dark-mode.jpg?fit=722%2C800&ssl=1
[8]: https://itsfoss.com/necunos-linux-smartphone/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/Todo.jpg?fit=800%2C652&ssl=1
[10]: https://zorinos.com/download/
[11]: https://itsfoss.com/ubuntu-1404-codenamed-trusty-tahr/


@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Free and Open Source Trello Alternative OpenProject 9 Released)
[#]: via: (https://itsfoss.com/openproject-9-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Free and Open Source Trello Alternative OpenProject 9 Released
======
[OpenProject][1] is collaborative open source project management software. It's an alternative to proprietary solutions like [Trello][2] and [Jira][3].
You can use it for free if it's for personal use and you set it up (and host it) on your own server. This way, you control your data.
Of course, you get access to premium features and priority help if you are a [Cloud or Enterprise edition user][4].
The OpenProject 9 release emphasizes new board views, a new work package list view, and work package templates.
If you didn't know about this, you can give it a try. But if you are an existing user, you should know what's new before migrating to OpenProject 9.
### Whats New in OpenProject 9?
Here are some of the major changes in the latest release of OpenProject.
#### Scrum & Agile Boards
![][5]
For Cloud and Enterprise editions, there's a new [scrum][6] and [agile][7] board view. You also get to showcase your work in a [kanban-style][8] fashion, making it easier to support your agile and scrum teams.
The new board view makes it easy to know who's assigned to a task and to update its status in a jiffy. You also get different board view options, like the basic board, status board, and version boards.
#### Work Package templates
![Work Package Template][9]
You don't have to create everything from scratch for every unique work package. Instead, you can keep a template and use it whenever you need to create a new work package. That will save a lot of time.
#### New Work Package list view
![Work Package View][10]
In the work package list, there's a subtle new addition that lets you view the avatars of the people assigned to a specific work package.
#### Customizable work package view for my page
Your own page, which displays what you are working on (and the progress), shouldn't always be boring. So now you get the ability to customize it and even add a Gantt chart to visualize your work.
Suggested read: [Ubuntu 12.04 Has Reached End of Life][11]
**Wrapping Up**
For detailed instructions on migrating and installation, you should follow the [official announcement post][12] covering all the essential details for the users.
Also, we would love to know about your experience with OpenProject 9; let us know in the comments section below! If you use some other project management software, feel free to suggest it to us and the rest of your fellow It's FOSS readers.
--------------------------------------------------------------------------------
via: https://itsfoss.com/openproject-9-release/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.openproject.org/
[2]: https://trello.com/
[3]: https://www.atlassian.com/software/jira
[4]: https://www.openproject.org/pricing/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/open-project-9-scrum-agile.jpeg?fit=800%2C517&ssl=1
[6]: https://en.wikipedia.org/wiki/Scrum_(software_development)
[7]: https://en.wikipedia.org/wiki/Agile_software_development
[8]: https://en.wikipedia.org/wiki/Kanban
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/work-package-template.jpg?ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/work-package-view.jpg?fit=800%2C454&ssl=1
[11]: https://itsfoss.com/ubuntu-12-04-end-of-life/
[12]: https://www.openproject.org/openproject-9-new-scrum-agile-board-view/


@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An open source bionic leg, Python data pipeline, data breach detection, and more news)
[#]: via: (https://opensource.com/article/19/6/news-june-8)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
An open source bionic leg, Python data pipeline, data breach detection, and more news
======
Catch up on the biggest open source headlines from the past two weeks.
![][1]
In this edition of our open source news roundup, we take a look at an open source bionic leg, a new open source medical imaging organization, McKinsey's first open source release, and more!
### Using open source to advance bionics
A generation of people learned the term _bionics_ from the TV series **The Six Million Dollar Man** and **The Bionic Woman**. What was science fiction (although based on fact) is closer to becoming a reality thanks to a prosthetic leg [designed by the University of Michigan and the Shirley Ryan AbilityLab][2].
The leg, which incorporates a simple, low-cost modular design, is "intended to improve the quality of life of patients and accelerate scientific advances by offering a unified platform to fragmented research efforts across the field of bionics." It will, according to lead designer Elliot Rouse, "enable investigators to efficiently solve challenges associated with controlling bionic legs across a range of activities in the lab and out in the community."
You can learn more about the leg, and download designs, from the [Open Source Leg][3] website.
### McKinsey releases a Python library for building production-ready data pipelines
Consulting giant McKinsey and Co. recently released its [first open source tool][4]. Called Kedro, it's a Python library for creating machine learning and data pipelines.
Kedro makes "it easier to manage large workflows and ensuring a consistent quality of code throughout a project," said product manager Yetunde Dada. While it started as a proprietary tool, McKinsey open sourced Kedro so "clients can use it after we leave a project — it is our way of giving back," said engineer Nikolaos Tsaousis.
If you're interested in taking a peek, you can grab [Kedro's source code][5] off GitHub.
### New consortium to advance open source medical imaging
A group of experts and patient advocates have come together to form the [Open Source Imaging Consortium][6]. The consortium aims to "advance the diagnosis of idiopathic pulmonary fibrosis and other interstitial lung diseases with the help of digital imaging and machine learning."
According to the consortium's executive director, Elizabeth Estes, the project aims to "collectively speed diagnosis, aid prognosis, and ultimately allow doctors to treat patients more efficiently and effectively." To do that, they're assembling and sharing "15,000 anonymous image scans and clinical data from patients, which will serve as input data for machine learning programs to develop algorithms."
### Mozilla releases a simple-to-use way to see if you've been part of a data breach
Explaining security to the less software-savvy has always been a challenge, and monitoring your exposure to risk is difficult regardless of your skill level. Mozilla released [Firefox Monitor][7], with data provided by [Have I Been Pwned][8], as a straightforward way to see if any of your emails have been part of a major data breach. You can enter emails to search one by one, or sign up for their service to notify you in the future.
The site is also full of helpful tutorials on understanding how hackers work, what to do after a data breach, and how to create strong passwords. Be sure to bookmark this one for around the holidays when family members are asking for advice.
#### In other news
* [Want a Google-free Android? Send your phone to this guy][9]
* [CockroachDB releases with a non-OSI approved license, remains source available][10]
* [Infrastructure automation company Chef commits to Open Source][11]
  * [Russia's would-be Windows replacement gets a security upgrade][12]
* [Switch from Medium to your own blog in a few minutes with this code][13]
* [Open Source Initiative Announces New Partnership with Software Liberty Association Taiwan][14]
_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/news-june-8
Author: [Scott Nesbitt][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i
[2]: https://news.umich.edu/open-source-bionic-leg-first-of-its-kind-platform-aims-to-rapidly-advance-prosthetics/
[3]: https://opensourceleg.com/
[4]: https://www.information-age.com/kedro-mckinseys-open-source-software-tool-123482991/
[5]: https://github.com/quantumblacklabs/kedro
[6]: https://pulmonaryfibrosisnews.com/2019/05/31/international-open-source-imaging-consortium-osic-launched-to-advance-ipf-diagnosis/
[7]: https://monitor.firefox.com/
[8]: https://haveibeenpwned.com/
[9]: https://fossbytes.com/want-a-google-free-android-send-your-phone-to-this-guy/
[10]: https://www.cockroachlabs.com/blog/oss-relicensing-cockroachdb/
[11]: https://www.infoq.com/news/2019/05/chef-open-source/
[12]: https://www.nextgov.com/cybersecurity/2019/05/russias-would-be-windows-replacement-gets-security-upgrade/157330/
[13]: https://github.com/mathieudutour/medium-to-own-blog
[14]: https://opensource.org/node/994


@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HPE Synergy For Dummies)
[#]: via: (https://www.networkworld.com/article/3399618/hpe-synergy-for-dummies.html)
[#]: author: (HPE https://www.networkworld.com/author/Michael-Cooney/)
HPE Synergy For Dummies
======
![istock/venimo][1]
Business must move fast today to keep up with competitive forces. That means IT must provide an agile — anytime, anywhere, any workload — infrastructure that ensures growth, boosts productivity, enhances innovation, improves the customer experience, and reduces risk.
A composable infrastructure helps organizations achieve these important objectives that are difficult — if not impossible — to achieve via traditional means, such as the ability to do the following:
* Deploy quickly with simple flexing, scaling, and updating
* Run workloads anywhere — on physical servers, on virtual servers, or in containers
* Operate any workload upon which the business depends, without worrying about infrastructure resources or compatibility
* Ensure the infrastructure is able to provide the right service levels so the business can stay in business
In other words, IT must inherently become part of the fabric of products and services that are rapidly innovated at every company, with an anytime, anywhere, any workload infrastructure.
**The anytime paradigm**
For organizations that seek to embrace DevOps, collaboration is the cultural norm. Development and operations staff work side by side to support software across its entire life cycle, from initial idea to production support.
To provide DevOps groups — as well as other stakeholders — the IT infrastructure required at the rate at which it is demanded, enterprise IT must increase its speed, agility, and flexibility to enable anytime composition and recomposition of resources. Composable infrastructure enables this anytime paradigm.
**The anywhere ability**
Bare metal and virtualized workloads are just two application foundations that need to be supported in the modern data center. Today, containers are emerging as a compelling construct, providing significant benefits for certain kinds of workloads. Unfortunately, with traditional infrastructure approaches, IT needs to build out custom, unique infrastructure to support them, at least until an infrastructure is deployed that can seamlessly handle physical, virtual, and container-based workloads.
Each environment would need its own hardware and software and might even need its own staff members supporting it.
Composable infrastructure provides an environment that supports the ability to run physical, virtual, or containerized workloads.
**Support any workload**
Do you have a legacy on-premises application that you have to keep running? Do you have enterprise resource planning (ERP) software that currently powers your business but that will take ten years to phase out? At the same time, do you have an emerging DevOps philosophy under which you'd like to empower developers to dynamically create computing environments as a part of their development efforts?
All these things can be accomplished simultaneously on the right kind of infrastructure. Composable infrastructure enables any workload to operate as a part of the architecture.
**HPE Synergy**
HPE Synergy brings to life the architectural principles of composable infrastructure. It is a single, purpose-built platform that reduces operational complexity for workloads and increases operational velocity for applications and services.
Download a copy of the [HPE Synergy for Dummies eBook][2] to learn how to:
* Infuse the IT architecture with the ability to enable agility, flexibility, and speed
* Apply composable infrastructure concepts to support both traditional and cloud-native applications
* Deploy HPE Synergy infrastructure to revolutionize workload support in the data center
Also, you will find more information about HPE Synergy [here][3].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3399618/hpe-synergy-for-dummies.html
Author: [HPE][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/06/istock-1026657600-100798064-large.jpg
[2]: https://www.hpe.com/us/en/resources/integrated-systems/synergy-for-dummies.html
[3]: http://hpe.com/synergy


@ -0,0 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (True Hyperconvergence at Scale: HPE Simplivity With Composable Fabric)
[#]: via: (https://www.networkworld.com/article/3399619/true-hyperconvergence-at-scale-hpe-simplivity-with-composable-fabric.html)
[#]: author: (HPE https://www.networkworld.com/author/Michael-Cooney/)
True Hyperconvergence at Scale: HPE Simplivity With Composable Fabric
======
Many hyperconverged solutions only focus on software-defined storage. However, many networking functions and technologies can be consolidated for simplicity and scale in the data center. This video describes how HPE SimpliVity with Composable Fabric gives organizations the power to run any virtual machine anywhere, anytime. Read more about HPE SimpliVity [here][1].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3399619/true-hyperconvergence-at-scale-hpe-simplivity-with-composable-fabric.html
Author: [HPE][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://hpe.com/info/simplivity


@ -0,0 +1,102 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5G will augment Wi-Fi, not replace it)
[#]: via: (https://www.networkworld.com/article/3399978/5g-will-augment-wi-fi-not-replace-it.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
5G will augment Wi-Fi, not replace it
======
Jeff Lipton, vice president of strategy and corporate development at Aruba, adds a dose of reality to the 5G hype, discussing how it and Wi-Fi will work together and how to maximize the value of both.
![Thinkstock][1]
There's arguably no technology topic that's currently hotter than [5G][2]. It was a major theme of the most recent [Mobile World Congress][3] show and has reared its head at other events such as Enterprise Connect and almost every vendor event I attend.
Some vendors have positioned 5G as a panacea to all network problems and predict it will eradicate all other forms of networking. Views like that are obviously extreme, but I do believe that 5G will have an impact on the networking industry and is something that network engineers should be aware of.
To help bring some realism to the 5G hype, I recently interviewed Jeff Lipton, vice president of strategy and corporate development at Aruba, a Hewlett Packard Enterprise company, as I know HPE has been deeply involved in the evolution of both 5G and Wi-Fi.
**[ Also read: [The time of 5G is almost here][3] ]**
### Zeus Kerravala: 5G is being touted as the "next big thing." Do you see it that way?
**Jeff Lipton:** The next big thing is connecting "things" and generating actionable insights and context from those things. 5G is one of the technologies that serve this trend. Wi-Fi 6 is another — so are edge compute, Bluetooth Low Energy (BLE), artificial intelligence (AI) and machine learning (ML). These all are important, and they each have a place.
### Do you see 5G eclipsing Wi-Fi in the enterprise?
![Jeff Lipton, VP of strategy and corporate development, Aruba][4]
**Lipton:** No. 5G, like all cellular access, is appropriate if you need macro area coverage and high-speed handoffs. But it's not ideal for most enterprise applications, where you generally don't need these capabilities. From a performance standpoint, [Wi-Fi 6][5] and 5G are roughly equal on most metrics, including throughput, latency, reliability, and connection density. Where they aren't close is economics, where Wi-Fi is far better. I don't think many customers would be willing to trade Wi-Fi for 5G unless they need macro coverage or high-speed handoffs.
### Can Wi-Fi and 5G coexist? How would an enterprise use 5G and Wi-Fi together?
**Lipton:** Wi-Fi and 5G can and should be complementary. The 5G architecture decouples the cellular core and Radio Access Network (RAN). Consequently, Wi-Fi can be the enterprise radio front end and connect tightly with a 5G core. Since the economics of Wi-Fi — especially Wi-Fi 6 — are favorable and performance is extremely good, we envision many service providers using Wi-Fi as the radio front end for their 5G systems where it makes sense, as an alternative to Distributed Antenna (DAS) and small-cell systems.
### If a business were considering moving to 5G only, how would this be done and how practical is it?
**Lipton:** To use 5G for primary in-building access, a customer would need to upgrade their network and virtually all of their devices. 5G provides good coverage outdoors, but cellular signals can't reliably penetrate buildings. And this problem will become worse with 5G, which partially relies on higher-frequency radios. So service providers will need a way to provide indoor coverage. To provide this coverage, they propose deploying DAS or small-cell systems — paid for by the end customer. The customers would then connect their devices directly to these cellular systems and pay a service component for each device.
There are several problems with this approach. First, DAS and small-cell systems are significantly more expensive than Wi-Fi networks. And the cost doesn't stop with the network. Every device would need to have a 5G cellular modem, which costs tens of dollars wholesale and usually over a hundred dollars to an end user. Since few, if any, MacBooks, PCs, printers, or Apple TVs today have 5G modems, these devices would need to be upgraded. I don't believe many enterprises would be willing to pay this additional cost and upgrade most of their equipment for an unclear benefit.
### Are economics a factor in the 5G versus Wi-Fi debate?
**Lipton:** Economics is always a factor. Let's focus the conversation on in-building enterprise applications, since this is the use case some carriers intend to target with 5G. We've already mentioned that upgrading to 5G would require enterprises to deploy expensive DAS or small-cell systems for in-building coverage, upgrade virtually all of their equipment to contain 5G modems, and pay service contracts for each of these devices. It's also important to understand that 5G cellular networks and DAS systems operate over licensed spectrum, which is analogous to a private highway. Service providers paid billions of dollars for this spectrum, and this expense needs to be monetized and embedded in service costs. So, from both deployment and lifecycle perspectives, Wi-Fi economics are favorable to 5G.
### Are there any security implications of 5G versus Wi-Fi?
**Lipton:** Cellular technologies are perceived by some to be more secure than Wi-Fi, but that's not true. LTE is relatively secure, but it also has weak points. For example, LTE is vulnerable to a range of attacks, including data interception and device tracking, according to researchers at Purdue and the University of Iowa. 5G improves upon LTE security with multiple authentication methods and better key management.
Wi-Fi security isn't standing still either and continues to advance. Of course, Wi-Fi implementations that do not follow best practices, such as those without even basic password protection, are not optimal. But those configured with proper access controls and passwords are highly secure. With new standards — specifically, WPA3 and Enhanced Open — Wi-Fi network security has improved even further.
It's also important to keep in mind that enterprises have made enormous investments in security and compliance solutions tailored to their specific needs. With cellular networks, including 5G, enterprises lose the ability to deploy their chosen security and compliance solutions, as well as most visibility into traffic flows. While future versions of 5G will offer high levels of customization with a feature called network slicing, enterprises would still lose the level of security and compliance customization they currently need and have.
### Any parting thoughts to add to the discussion around 5G versus Wi-Fi?
**Lipton:** The debate around Wi-Fi versus 5G misses the point. They each have their place, and they are in many ways complementary. The Wi-Fi and 5G markets both will grow, driven by the need to connect and analyze a growing number of things. If a customer needs macro coverage or high-speed handoffs and can pay the additional cost for these capabilities, 5G makes sense.
5G also could be a fit for certain industrial use cases where customers require physical network segmentation. But for the vast majority of enterprise customers, Wi-Fi will continue to prove its value as a reliable, secure, and cost-effective wireless access technology, as it does today.
**More about 802.11ax (Wi-Fi 6):**
* [Why 802.11ax is the next big thing in wireless][7]
* [FAQ: 802.11ax Wi-Fi][8]
* [Wi-Fi 6 (802.11ax) is coming to a router near you][9]
* [Wi-Fi 6 with OFDMA opens a world of new wireless possibilities][10]
* [802.11ax preview: Access points and routers that support Wi-Fi 6 are on tap][11]
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3399978/5g-will-augment-wi-fi-not-replace-it.html
Author: [Zeus Kerravala][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/wireless_connection_speed_connectivity_bars_cell_tower_5g_by_thinkstock-100796921-large.jpg
[2]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[4]: https://images.idgesg.net/images/article/2019/06/headshot_jlipton_aruba-100798360-small.jpg
[5]: https://www.networkworld.com/article/3215907/why-80211ax-is-the-next-big-thing-in-wi-fi.html
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[7]: https://www.networkworld.com/article/3215907/mobile-wireless/why-80211ax-is-the-next-big-thing-in-wi-fi.html
[8]: https://%20https//www.networkworld.com/article/3048196/mobile-wireless/faq-802-11ax-wi-fi.html
[9]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
[10]: https://www.networkworld.com/article/3332018/wi-fi/wi-fi-6-with-ofdma-opens-a-world-of-new-wireless-possibilities.html
[11]: https://www.networkworld.com/article/3309439/mobile-wireless/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world


@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data center workloads become more complex despite promises to the contrary)
[#]: via: (https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Data center workloads become more complex despite promises to the contrary
======
The data center is shouldering a greater burden than ever, despite promises of ease and the cloud.
![gorodenkoff / Getty Images][1]
Data centers are becoming more complex and still run the majority of workloads despite the promises of simplicity of deployment through automation and hyperconverged infrastructure (HCI), not to mention how the cloud was supposed to take over workloads.
That's the finding of the Uptime Institute's latest [annual global data center survey][2] (registration required). The majority of IT loads still run in enterprise data centers even in the face of cloud adoption, putting pressure on administrators to manage workloads across a hybrid infrastructure.
**[ Learn [how server disaggregation can boost data center efficiency][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
With workloads like artificial intelligence (AI) and machine learning coming to the forefront, facilities face greater power and cooling challenges, since AI is extremely processor-intensive. That puts strain on data center administrators and power and cooling vendors alike to keep up with the growth in demand.
On top of it all, everyone is struggling to get enough staff with the right skills.
### Outages, staffing problems, lack of public cloud visibility among top concerns
Among the key findings of Uptime's report:
* The large, privately owned enterprise data center facility still forms the bedrock of corporate IT and is expected to be running half of all workloads in 2021.
* The staffing problem affecting most of the data center sector has only worsened. Sixty-one percent of respondents said they had difficulty retaining or recruiting staff, up from 55% a year earlier.
* Outages continue to cause significant problems for operators. Just over a third (34%) of all respondents had an outage or severe IT service degradation in the past year, while half (50%) had an outage or severe IT service degradation in the past three years.
* Ten percent of all respondents said their most recent significant outage cost more than $1 million.
* A lack of visibility, transparency, and accountability of public cloud services is a major concern for enterprises that have mission-critical applications. A fifth of operators surveyed said they would be more likely to put workloads in a public cloud if there were more visibility. Half of those using public cloud for mission-critical applications also said they do not have adequate visibility.
* Improvements in data center facility energy efficiency have flattened out and even deteriorated slightly in the past two years. The average PUE for 2019 is 1.67.
* Rack power density is rising after a long period of flat or minor increases, causing many to rethink cooling strategies.
  * Power loss was the single biggest cause of outages, accounting for one-third of outages. Sixty percent of respondents said their data center's outage could have been prevented with better management/processes or configuration.
Traditionally data centers are improving their reliability through "rigorous attention to power, infrastructure, connectivity and on-site IT replication," the Uptime report says. The solution, though, is pricy. Data center operators are getting distributed resiliency through active-active data centers where at least two active data centers replicate data to each other. Uptime found up to 40% of those surveyed were using this method.
The Uptime survey was conducted in March and April of this year, surveying 1,100 end users in more than 50 countries and dividing them into two groups: the IT managers, owners, and operators of data centers and the suppliers, designers, and consultants that service the industry.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html
Author: [Andy Patrizio][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/cso_cloud_computing_backups_it_engineer_data_center_server_racks_connections_by_gorodenkoff_gettyimages-943065400_3x2_2400x1600-100796535-large.jpg
[2]: https://uptimeinstitute.com/2019-data-center-industry-survey-results
[3]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world


@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Moving to the Cloud? SD-WAN Matters! Part 2)
[#]: via: (https://www.networkworld.com/article/3398488/moving-to-the-cloud-sd-wan-matters-part-2.html)
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
Moving to the Cloud? SD-WAN Matters! Part 2
======
![istock][1]
This is the second installment of the blog series exploring how enterprises can realize the full transformation promise of the cloud by shifting to a business first networking model powered by a business-driven [SD-WAN][2]. The first installment explored automating secure IPsec connectivity and intelligently steering traffic to cloud providers. We also framed the direct correlation between moving to the cloud and adopting an SD-WAN. In this blog, we will expand upon several additional challenges that can be addressed with a business-driven SD-WAN when embracing the cloud:
### Simplifying and automating security zone-based segmentation
Securing cloud-first branches requires a robust multi-level approach that addresses the following considerations:
* Restricting outside traffic coming into the branch to sessions exclusively initiated by internal users with a built-in stateful firewall, avoiding appliance sprawl and lowering operational costs; this is referred to as the app whitelist model
* Encrypting communications between end points within the SD-WAN fabric and between branch locations and public cloud instances
* Service chaining traffic to a cloud-hosted security service like [Zscaler][3] for Layer 7 inspection and analytics for internet-bound traffic
* Segmenting traffic spanning the branch, WAN and data center/cloud
* Centralizing policy orchestration and automation of zone-based firewall, VLAN and WAN overlays
A traditional device-centric WAN approach for security segmentation requires the time-consuming manual configuration of routers and/or firewalls on a device-by-device and site-by-site basis. This is not only complex and cumbersome, but it simply can't scale to hundreds or thousands of sites. Anusha Vaidyanathan, director of product management at Silver Peak, explains how to automate end-to-end zone-based segmentation, emphasizing the advantages of a business-driven approach in this [lightboard video][4].
### Delivering the Highest Quality of Experience to IT teams
The goal for enterprise IT is enabling business agility and increasing operational efficiency. The traditional router-centric WAN approach doesn't provide the best quality of experience for IT, as management and ongoing network operations are manual, time-consuming, device-centric, cumbersome, error-prone, and inefficient.
A business-driven SD-WAN such as the Silver Peak [Unity EdgeConnect™][5] unified SD-WAN edge platform centralizes the orchestration of business-driven policies. EdgeConnect automation, machine learning and open APIs easily integrate with third-party management tools and real-time visibility tools to deliver the highest quality of experience for IT, enabling them to reclaim nights and weekends. Manav Mishra, vice president of product management at Silver Peak, explains the latest Silver Peak innovations in this [lightboard video][6].
As enterprises become increasingly dependent on the cloud and embrace a multi-cloud strategy, they must address a number of new challenges:
* A centralized approach to securely embracing the cloud and the internet
* How to extend the on-premise data center to a public cloud and migrating workloads between private and public cloud, taking application portability into account
* Deliver consistent high application performance and availability to hosted applications whether they reside in the data center, private or public clouds or are delivered as SaaS services
* A proactive way to quickly resolve complex issues that span the data center and cloud as well as multiple WAN transport services by harnessing the power of advanced visibility and analytics tools
The business-driven EdgeConnect SD-WAN edge platform enables enterprise IT organizations to easily and consistently embrace the public cloud. Unified security and performance capabilities with automation deliver the highest quality of experience for both users and IT while lowering overall WAN expenditures.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3398488/moving-to-the-cloud-sd-wan-matters-part-2.html
Author: [Rami Rammaha][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Rami-Rammaha/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/istock-909772962-100797711-large.jpg
[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[3]: https://www.silver-peak.com/company/tech-partners/zscaler
[4]: https://www.silver-peak.com/resource-center/how-to-create-sd-wan-security-zones-in-edgeconnect
[5]: https://www.silver-peak.com/products/unity-edge-connect
[6]: https://www.silver-peak.com/resource-center/how-to-optimize-quality-of-experience-for-it-using-sd-wan


@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco will use AI/ML to boost intent-based networking)
[#]: via: (https://www.networkworld.com/article/3400382/cisco-will-use-aiml-to-boost-intent-based-networking.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco will use AI/ML to boost intent-based networking
======
Cisco explains how artificial intelligence and machine learning fit into a feedback loop that implements and maintains desired network conditions to optimize network performance for workloads using real-time data.
![xijian / Getty Images][1]
Artificial Intelligence and machine learning are expected to be some of the big topics at next weeks Cisco Live event and the company is already talking about how those technologies will help drive the next generation of [Intent-Based Networking][2].
“Artificial intelligence will change how we manage networks, and it's a change we need,” wrote John Apostolopoulos, Cisco CTO and vice president of Enterprise Networking, in a [blog][3] about how Cisco says these technologies impact the network.
**[ Now see [7 free network tools you must have][4]. ]**
AI is the next major step for networking capabilities, and while researchers have talked in the past about how great AI would be, now the compute power and algorithms exist to make it possible, Apostolopoulos told Network World.
To understand how AI and ML can boost IBN, Cisco says it's necessary to understand four key factors an IBN environment needs: infrastructure, translation, activation and assurance.
Infrastructure can be virtual or physical and include wireless access points, switches, routers, compute and storage. “To make the infrastructure do what we want, we use the translation function to convert the intent, or what we are trying to make the network accomplish, from a person or computer into the correct network and security policies. These policies then must be activated on the network,” Apostolopoulos said.
The activation step takes the network and security policies and couples them with a deep understanding of the network infrastructure that includes both real-time and historic data about its behavior. It then activates or automates the policies across all of the network infrastructure elements, ideally optimizing for performance, reliability and security, Apostolopoulos wrote.
Finally, assurance maintains a continuous validation-and-verification loop. IBN improves on translation and assurance to form a valuable feedback loop about what's going on in the network that wasn't available before.
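To make that loop concrete, here is a deliberately simplified Python sketch of how an intent flows through translation, activation, and assurance. The function bodies, policy fields, telemetry shape, and device interface are invented for illustration and are not Cisco's implementation.

```
# Simplified illustration of the intent-based networking feedback loop.
# Policy fields, telemetry shape, and the device interface are invented.
import time

def translate(intent):
    """Convert a business intent into a network/security policy (toy example)."""
    return {"intent": intent, "max_latency_ms": 150}

def activate(policy, devices):
    """Push the policy to every infrastructure element."""
    for device in devices:
        device.apply(policy)

def assure(policy, telemetry):
    """Compare observed behavior with the intent; return any deviations."""
    return [t for t in telemetry if t["latency_ms"] > policy["max_latency_ms"]]

def ibn_loop(intent, devices, get_telemetry):
    policy = translate(intent)
    activate(policy, devices)
    while True:  # continuous validation and verification
        issues = assure(policy, get_telemetry())
        if issues:
            # This is where ML/machine reasoning would diagnose the issues
            # and refine the policy before it is re-activated.
            activate(policy, devices)
        time.sleep(60)
```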
Apostolopoulos used the example of an international company that wanted to set up a world-wide video all-hands meeting. Everyone on the call had to have high-quality, low-latency video, and also needed the capability to send high-quality video into the call when it was time for Q&A.
“By applying machine learning and related machine reasoning, assurance can also sift through the massive amount of data related to such a global event to correctly identify if there are any problems arising. We can then get solutions to these issues and even automatically apply solutions more quickly and more reliably than before,” Apostolopoulos said.
In this case, assurance could identify that the use of WAN bandwidth to certain sites is increasing at a rate that will saturate the network paths and could proactively reroute some of the WAN flows through alternative paths to prevent congestion from occurring, Apostolopoulos wrote.
“In prior systems, this problem would typically only be recognized after the bandwidth bottleneck occurred and users experienced a drop in call quality or even lost their connection to the meeting. It would be challenging or impossible to identify the issue in real time, much less to fix it before it distracted from the experience of the meeting. Accurate and fast identification through ML and MR, coupled with intelligent automation through the feedback loop, is key to a successful outcome.”
Apostolopoulos said AI can accelerate the path from intent into translation and activation and then examine network and behavior data in the assurance step to make sure everything is working correctly. Activation uses the insights to drive more intelligent actions for improved performance, reliability and security, creating a cycle of network optimization.
So what might an implementation of this look like? Applications that run on Cisco's DNA Center may be the central component in an IBN environment. Introduced in 2017 as the heart of its IBN initiative, [Cisco DNA Center][5] features automation capabilities, assurance setting, fabric provisioning and policy-based segmentation for enterprise networks.
“DNA Center can bring together AI and ML in a unified manner,” Apostolopoulos said. “It can store data from across the network and then customers can do AI and ML on that data.”
Central to Cisco's push is being able to gather metadata about traffic as it passes without slowing the traffic, which is accomplished through the use of ASICs in its campus and data-center switches.
“We have designed our networking gear from the ASIC, OS and software levels to gather key data via our IBN architecture, which provides unified data collection and performs algorithmic analysis across the entire network (wired, wireless, LAN, WAN, datacenter),” Apostolopoulos said. “We have a massive collection of network data, including a database of problems and associated root causes, from being the world's top enterprise network vendor over the past 20-plus years. And we have been investing for many years to create innovative network-data analysis and ML, MR, and other AI techniques to identify and solve key problems.”
Machine learning and AI can then be applied to all that data to help network operators handle everything from policy setting and network control to security.
“I also want to stress that the feedback the IT user gets from the IBN system with AI is not overwhelming telemetry data,” Apostolopoulos said. Instead it is valuable and actionable insights at scale, derived from immense data and behavioral analytics using AI.
Managing and developing new AI/ML-based applications from enormous data sets beyond what Cisco already has is a key driver behind the company's Unified Computing System (UCS) server that was rolled out last September. While the new server, the UCS C480 ML, is powerful (it includes eight Nvidia Tesla V100-32G GPUs with 128GB of DDR4 RAM, 24 SATA hard drives and more), it is the ecosystem of vendors (Cloudera, Hortonworks and others) that will end up being more important.
[Earlier this year Cisco forecast][6] that [AI and ML][7] will significantly boost network management this year.
“In 2019, companies will start to adopt Artificial Intelligence, in particular Machine Learning, to analyze the telemetry coming off networks to see these patterns, in an attempt to get ahead of issues from performance optimization, to financial efficiency, to security,” said [Anand Oswal][8], senior vice president of engineering in Cisco's Enterprise Networking Business. The pattern-matching capabilities of ML will be used to spot anomalies in network behavior that might otherwise be missed, while also de-prioritizing alerts that otherwise nag network operators but that aren't critical, Oswal said.
“We will also start to use these tools to categorize and cluster device and user types, which can help us create profiles for use cases as well as spot outlier activities that could indicate security incursions,” he said.
The first application of AI in network management will be smarter alerts that simply report on activities that break normal patterns, but as the technology advances it will react to more situations autonomously. The idea is to give customers more information so they and the systems can make better network decisions. Workable tools should appear later in 2019, Oswal said.
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400382/cisco-will-use-aiml-to-boost-intent-based-networking.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/ai-vendor-relationship-management_bar-code_purple_artificial-intelligence_hand-on-virtual-screen-100795252-large.jpg
[2]: http://www.networkworld.com/cms/article/3202699
[3]: https://blogs.cisco.com/enterprise/improving-networks-with-ai
[4]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html
[5]: https://www.networkworld.com/article/3280988/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html
[6]: https://www.networkworld.com/article/3332027/cisco-touts-5-technologies-that-will-change-networking-in-2019.html
[7]: https://www.networkworld.com/article/3320978/data-center/network-operations-a-new-role-for-ai-and-ml.html
[8]: https://blogs.cisco.com/author/anandoswal
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloud adoption drives the evolution of application delivery controllers)
[#]: via: (https://www.networkworld.com/article/3400897/cloud-adoption-drives-the-evolution-of-application-delivery-controllers.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
Cloud adoption drives the evolution of application delivery controllers
======
Application delivery controllers (ADCs) are on the precipice of shifting from traditional hardware appliances to software form factors.
![Aramyan / Getty Images / Microsoft][1]
Migrating to a cloud computing model will obviously have an impact on the infrastructure thats deployed. This shift has already been seen in the areas of servers, storage, and networking, as those technologies have evolved to a “software-defined” model. And it appears that application delivery controllers (ADCs) are on the precipice of a similar shift.
In fact, a new ZK Research [study about cloud computing adoption and the impact on ADCs][2] found that, when looking at the deployment model, hardware appliances are the most widely deployed — with 55% having fully deployed or currently testing them and only 15% currently researching hardware. (Note: I am an employee of ZK Research.)
Juxtapose this with containerized ADCs, where only 34% have deployed or are testing but 24% are currently researching, and it shows that software in containers will outpace hardware for growth. Not surprisingly, software on bare metal and in virtual machines showed similar, although lower, "researching" numbers that support the thesis that the market is undergoing a shift from hardware to software.
**[ Read also: [How to make hybrid cloud work][3] ]**
The study, conducted in collaboration with Kemp Technologies, surveyed 203 respondents from the U.K. and U.S. The demographic split was done to understand regional differences. An equal number of mid and large size enterprises were looked at, with 44% being from over 5,000 employees and the other 56% from companies that have 300 to 5,000 people.
### Incumbency helps but isn't a fait accompli for future ADC purchases
The primary tenet of my research has always been that incumbents are threatened when markets transition, and this is something I wanted to investigate in the study. The survey asked whether buyers would consider an alternative as they evolve their applications from legacy (mode 1) to cloud-native (mode 2). The results offer a bit of good news and bad news for the incumbent providers. Only 8% said they would definitely select a new vendor, but 35% said they would not change. That means the other 57% will look at alternatives. This is sensible, as the requirements for cloud ADCs are different than ones that support traditional applications.
### IT pros want better automation capabilities
This begs the question as to what features ADC buyers want for a cloud environment versus traditional ones. The survey asked specifically what features would be most appealing in future purchases, and the top response was automation, followed by central management, application analytics, on-demand scaling (which is a form of automation), and visibility.
The desire to automate was a positive sign for the evolution of buyer mindset. Just a few years ago, the mere mention of automation would have sent IT pros into a panic. The reality is that IT cant operate effectively without automation, and technology professionals are starting to understand that.
The reason automation is needed is that manual changes are holding businesses back. The survey asked how the speed of ADC changes impacts the speed at which applications are rolled out, and a whopping 60% said it creates significant or minor delays. In an era of DevOps and continuous innovation, multiple minor delays create a drag on the business and can cause it to fall behind its more agile competitors.
![][4]
### ADC upgrades and service provisioning benefit most from automation
The survey also drilled down on specific ADC tasks to see where automation would have the most impact. Respondents were asked how long certain tasks took, answering in minutes, days, weeks, or months. Shockingly, there wasn't a single task where the majority said it could be done in minutes. The closest was adding DNS entries for new virtual IP addresses (VIPs), where 46% said they could do that in minutes.
Upgrading, provisioning new load balancers, and provisioning new VIPs took the longest. Looking ahead, this foreshadows big problems. As the data center gets more disaggregated and distributed, IT will deploy more software-based ADCs in more places. Taking days, weeks, or months to perform these functions will cause the organization to fall behind.
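As a rough sketch of what "minutes instead of weeks" could look like, the Python below strings together two of the tasks above: creating a new virtual service (VIP) on a software ADC and registering its DNS record. The REST endpoints, field names, and credentials are hypothetical and do not correspond to any specific vendor's API.

```
# Hypothetical automation of VIP provisioning plus its DNS entry.
# ADC/DNS endpoints and payload fields are invented for illustration.
import requests

def provision_vip(adc_api, dns_api, token, app, vip, backends):
    headers = {"Authorization": f"Bearer {token}"}

    # 1. Create the virtual service (VIP) on the software ADC
    requests.post(
        f"{adc_api}/virtual-services",
        json={"name": app, "address": vip, "port": 443, "pool": backends},
        headers=headers, timeout=30,
    ).raise_for_status()

    # 2. Add the DNS A record that points at the new VIP
    requests.post(
        f"{dns_api}/zones/example.com/records",
        json={"type": "A", "name": app, "content": vip},
        headers=headers, timeout=30,
    ).raise_for_status()

# provision_vip("https://adc.example.com/api", "https://dns.example.com/api",
#               "API-TOKEN", "shop", "203.0.113.10", ["10.0.0.11", "10.0.0.12"])
```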
The study clearly shows changes are in the air for the ADC market. For IT pros, I strongly recommend that as the environment shifts to the cloud, it's prudent to evaluate new vendors. By all means, see what your incumbent vendor has, but look at least at two others that offer software-based solutions. Also, there should be a focus on automating as much as possible, so a primary evaluation criterion for ADCs should be how easy it is to implement automation.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400897/cloud-adoption-drives-the-evolution-of-application-delivery-controllers.html
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/cw_microsoft_sharepoint_vs_onedrive_clouds_and_hands_by_aramyan_gettyimages-909772962_2400x1600-100796932-large.jpg
[2]: https://kemptechnologies.com/research-papers/adc-market-research-study-zeus-kerravala/?utm_source=zkresearch&utm_medium=referral&utm_campaign=zkresearch&utm_term=zkresearch&utm_content=zkresearch
[3]: https://www.networkworld.com/article/3119362/hybrid-cloud/how-to-make-hybrid-cloud-work.html#tk.nww-fsb
[4]: https://images.idgesg.net/images/article/2019/06/adc-survey-zk-research-100798593-large.jpg
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (For enterprise storage, persistent memory is here to stay)
[#]: via: (https://www.networkworld.com/article/3398988/for-enterprise-storage-persistent-memory-is-here-to-stay.html)
[#]: author: (John Edwards )
For enterprise storage, persistent memory is here to stay
======
Persistent memory, also known as storage class memory, has tantalized data center operators for many years. A new technology promises the key to success.
![Thinkstock][1]
It's hard to remember a time when semiconductor vendors haven't promised a fast, cost-effective and reliable persistent memory technology to anxious [data center][2] operators. Now, after many years of waiting and disappointment, technology may have finally caught up with the hype to make persistent memory a practical proposition.
High-capacity persistent memory, also known as storage class memory ([SCM][3]), is fast and directly addressable like dynamic random-access memory (DRAM), yet is able to retain stored data even after its power has been switched off—intentionally or unintentionally. The technology can be used in data centers to replace cheaper, yet far slower traditional persistent storage components, such as [hard disk drives][4] (HDD) and [solid-state drives][5] (SSD).
**Learn more about enterprise storage**
* [Why NVMe over Fabric matters][6]
* [What is hyperconvergence?][7]
* [How NVMe is changing enterprise storage][8]
* [Making the right hyperconvergence choice: HCI hardware or software?][9]
Persistent memory can also be used to replace DRAM itself in some situations without imposing a significant speed penalty. In this role, persistent memory can deliver crucial operational benefits, such as lightning-fast database-server restarts during maintenance, power emergencies and other expected and unanticipated reboot situations.
Many different types of strategic operational applications and databases, particularly those that require low latency, high durability and strong data consistency, can benefit from persistent memory. The technology also has the potential to accelerate virtual machine (VM) storage and deliver higher performance to multi-node, distributed-cloud applications.
In a sense, persistent memory marks a rebirth of core memory. "Computers in the 50s to 70s used magnetic core memory, which was direct access, non-volatile memory," says Doug Wong, a senior member of [Toshiba Memory America's][10] technical staff. "Magnetic core memory was displaced by SRAM and DRAM, which are both volatile semiconductor memories."
One of the first persistent memory devices to come to market is [Intel's Optane DC][11]. Other vendors that have released persistent memory products or are planning to do so include [Samsung][12], Toshiba America Memory and [SK Hynix][13].
### Persistent memory: performance + reliability
With persistent memory, data centers have a unique opportunity to gain faster performance and lower latency without enduring massive technology disruption. "It's faster than regular solid-state NAND flash-type storage, but you're also getting the benefit that it's persistent," says Greg Schulz, a senior advisory analyst at vendor-independent storage advisory firm [StorageIO.][14] "It's the best of both worlds."
Yet persistent memory offers adopters much more than speedy, reliable storage. In an ideal IT world, all of the data associated with an application would reside within DRAM to achieve maximum performance. "This is currently not practical due to limited DRAM and the fact that DRAM is volatile—data is lost when power fails," observes Scott Nelson, senior vice president and general manager of Toshiba Memory America's memory business unit.
Persistent memory transports compatible applications to an "always on" status, providing continuous access to large datasets through increased system memory capacity, says Kristie Mann, [Intel's][15] director of marketing for data center memory and storage. She notes that Optane DC can supply data centers with up to three times more system memory capacity (as much as 36TB), system restarts in seconds versus minutes, 36% more virtual machines per node, and up to eight times better performance on [Apache Spark][16], a widely used open-source distributed general-purpose cluster-computing framework.
System memory currently represents 60% of total platform costs, Mann says. She observes that Optane DC persistent memory provides significant customer value by delivering 1.2x performance/dollar on key customer workloads. "This value will dramatically change memory/storage economics and accelerate the data-centric era," she predicts.
### Where will persistent memory infiltrate enterprise storage?
Persistent memory is likely to first enter the IT mainstream with minimal fanfare, serving as a high-performance caching layer for high-performance SSDs. "This could be adopted relatively quickly," Nelson observes. Yet this intermediary role promises to be merely a stepping-stone to increasingly crucial applications.
Over the next few years, persistent technology will impact data centers serving enterprises across an array of sectors. "Anywhere time is money," Schulz says. "It could be financial services, but it could also be consumer-facing or sales-facing operations."
Persistent memory supercharges anything data-related that requires extreme speed at extreme scale, observes Andrew Gooding, vice president of engineering at [Aerospike][17], which delivered the first commercially available open database optimized for use with Intel Optane DC.
Machine learning is just one of many applications that stand to benefit from persistent memory. Gooding notes that ad tech firms, which rely on machine learning to understand consumers' reactions to online advertising campaigns, should find their work made much easier and more effective by persistent memory. "They're collecting information as users within an ad campaign browse the web," he says. "If they can read and write all that data quickly, they can then apply machine-learning algorithms and tailor specific ads for users in real time."
Meanwhile, as automakers become increasingly reliant on data insights, persistent memory promises to help them crunch numbers and refine sophisticated new technologies at breakneck speeds. "In the auto industry, manufacturers face massive data challenges in autonomous vehicles, where 20 exabytes of data needs to be processed in real time, and they're using self-training machine-learning algorithms to help with that," Gooding explains. "There are so many fields where huge amounts of data need to be processed quickly with machine-learning techniques—fraud detection, astronomy... the list goes on."
Intel, like other persistent memory vendors, expects cloud service providers to be eager adopters, targeting various types of in-memory database services. Google, for example, is applying persistent memory to big data workloads on non-relational databases from vendors such as Aerospike and [Redis Labs][18], Mann says.
High-performance computing (HPC) is yet another area where persistent memory promises to make a tremendous impact. [CERN][19], the European Organization for Nuclear Research, is using Intel's Optane DC to significantly reduce wait times for scientific computing. "The efficiency of their algorithms depends on ... persistent memory, and CERN considers it a major breakthrough that is necessary to the work they are doing," Mann observes.
### How to prepare storage infrastructure for persistent memory
Before jumping onto the persistent memory bandwagon, organizations need to carefully scrutinize their IT infrastructure to determine the precise locations of any existing data bottlenecks. This task will be primarily application-dependent, Wong notes. "If there is significant performance degradation due to delays associated with access to data stored in non-volatile storage—SSD or HDD—then an SCM tier will improve performance," he explains. Yet some applications will probably not benefit from persistent memory, such as compute-bound applications where CPU performance is the bottleneck.
Developers may need to reevaluate fundamental parts of their storage and application architectures, Gooding says. "They will need to know how to program with persistent memory," he notes. "How, for example, to make sure writes are flushed to the actual persistent memory device when necessary, as opposed to just sitting in the CPU cache."
To leverage all of persistent memory's potential benefits, significant changes may also be required in how code is designed. When moving applications from DRAM and flash to persistent memory, developers will need to consider, for instance, what happens when a program crashes and restarts. "Right now, if they write code that leaks memory, that leaked memory is recovered on restart," Gooding explains. With persistent memory, that isn't necessarily the case. "Developers need to make sure the code is designed to reconstruct a consistent state when a program restarts," he notes. "You may not realize how much your designs rely on the traditional combination of fast volatile DRAM and block storage, so it can be tricky to change your code designs for something completely new like persistent memory."
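To illustrate the flush-ordering concern described above, here is a minimal Python sketch that uses a memory-mapped file on a DAX-mounted persistent-memory filesystem as a stand-in for persistent memory. The path is an assumption, and production code would more likely use PMDK's libpmem in C, but the lesson is the same: an update is not durable until it has been explicitly flushed out of the CPU caches.

```
# Conceptual sketch only: a DAX-mapped file stands in for persistent memory.
# The path is an assumption; real code would typically use PMDK/libpmem.
import mmap
import os
import struct

PATH = "/mnt/pmem/counter.bin"  # assumes a DAX-mounted pmem filesystem

def bump_counter():
    fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        os.ftruncate(fd, 8)
        with mmap.mmap(fd, 8) as buf:
            value = struct.unpack_from("<Q", buf, 0)[0]
            struct.pack_into("<Q", buf, 0, value + 1)
            # Until this flush, the new value may only exist in CPU caches;
            # a crash here would lose it. Flushing makes it durable.
            buf.flush()
    finally:
        os.close(fd)
```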
Older versions of operating systems may also need to be updated to accommodate the new technology, although newer OSes are gradually becoming persistent memory aware, Schulz says. "In other words, if they detect that persistent memory is available, then they know how to utilize that either as a cache, or some other memory."
Hypervisors, such as [Hyper-V][20] and [VMware][21], now know how to leverage persistent memory to support productivity, performance and rapid restarts. By utilizing persistent memory along with the latest versions of VMware, a whole system can see an uplift in speed and also maximize the number of VMs to fit on a single host, says Ian McClarty, CEO and president of data center operator [PhoenixNAP Global IT Services][22]. "This is a great use case for companies who want to own less hardware or service providers who want to maximize hardware to virtual machine deployments."
Many key enterprise applications, particularly databases, are also becoming persistent memory aware. SQL Server and [SAP's][23] flagship [HANA][24] database management platform have both embraced persistent memory. "The SAP HANA platform is commonly used across multiple industries to process data and transactions, and then run advanced analytics ... to deliver real-time insights," Mann observes.
In terms of timing, enterprises and IT organizations should begin persistent memory planning immediately, Schulz recommends. "You should be talking with your vendors and understanding their roadmap, their plans, for not only supporting this technology, but also in what mode: as storage, as memory."
Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3398988/for-enterprise-storage-persistent-memory-is-here-to-stay.html
作者:[John Edwards][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/08/file_folder_storage_sharing_thinkstock_477492571_3x2-100732889-large.jpg
[2]: https://www.networkworld.com/article/3353637/the-data-center-is-being-reimagined-not-disappearing.html
[3]: https://www.networkworld.com/article/3026720/the-next-generation-of-storage-disruption-storage-class-memory.html
[4]: https://www.networkworld.com/article/2159948/hard-disk-drives-vs--solid-state-drives--are-ssds-finally-worth-the-money-.html
[5]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
[6]: https://www.networkworld.com/article/3273583/why-nvme-over-fabric-matters.html
[7]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence
[8]: https://www.networkworld.com/article/3280991/what-is-nvme-and-how-is-it-changing-enterprise-storage.html
[9]: https://www.networkworld.com/article/3318683/making-the-right-hyperconvergence-choice-hci-hardware-or-software
[10]: https://business.toshiba-memory.com/en-us/top.html
[11]: https://www.intel.com/content/www/us/en/architecture-and-technology/optane-dc-persistent-memory.html
[12]: https://www.samsung.com/semiconductor/
[13]: https://www.skhynix.com/eng/index.jsp
[14]: https://storageio.com/
[15]: https://www.intel.com/content/www/us/en/homepage.html
[16]: https://spark.apache.org/
[17]: https://www.aerospike.com/
[18]: https://redislabs.com/
[19]: https://home.cern/
[20]: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/
[21]: https://www.vmware.com/
[22]: https://phoenixnap.com/
[23]: https://www.sap.com/index.html
[24]: https://www.sap.com/products/hana.html
[25]: https://www.facebook.com/NetworkWorld/
[26]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,89 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Juniper: Security could help drive interest in SDN)
[#]: via: (https://www.networkworld.com/article/3400739/juniper-sdn-snapshot-finds-security-legacy-network-tech-impacts-core-network-changes.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Juniper: Security could help drive interest in SDN
======
Juniper finds that enterprise interest in software-defined networking (SDN) is influenced by other factors, including artificial intelligence (AI) and machine learning (ML).
![monsitj / Getty Images][1]
Security challenges and developing artificial intelligence/machine learning (AI/ML) technologies are among the key issues driving [software-defined networking][2] (SDN) implementations, according to a new Juniper survey of 500 IT decision makers.
And SDN interest abounds: 98% of the 500 said they were already using or considering an SDN implementation. Juniper said it had [Wakefield Research][3] poll IT decision makers of companies with 500 or more employees about their SDN strategies between May 7 and May 14, 2019.
**More about SD-WAN**
* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][4]
* [How to pick an off-site data-backup method][5]
* [SD-Branch: What it is and why youll need it][6]
  * [What are the options for securing SD-WAN?][7]
SDN includes technologies that separate the network control plane from the forwarding plane to enable more automated provisioning and policy-based management of network resources.
IDC estimates that the worldwide data-center SDN market will be worth more than $12 billion in 2022, recording a CAGR of 18.5% during the 2017-2022 period. The market generated revenue of nearly $5.15 billion in 2017, up more than 32.2% from 2016.
There are many ideas driving the development of SDN. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources from the data center to the campus or wide area network.
While the evolution of SDN is ongoing, Juniper's study pointed out an issue that was perhaps not unexpected: many users are still managing operations via the command line interface (CLI). CLI is the primary text-based user interface used for configuring, monitoring and maintaining most networked devices.
“If SDN is as attractive as it is, then why manage the network with the same legacy technology of the past?” said Michael Bushong, vice president of enterprise and cloud marketing at Juniper Networks. “If you deploy SDN and don't adjust the operational model, then it is difficult to reap all the benefits SDN can bring. It's the difference between managing devices individually, which you may have done in the past, and managing fleets of devices via SDN; it simplifies and reduces operational expenses.”
Juniper pointed to a [Gartner prediction][8] that stated “by 2020, only 30% of network operations teams will use the command line interface (CLI) as their primary interface, down from 85% at year-end 2016.” Gartner stated that poll results from a recent Gartner conference found some 71% still using CLI as the primary way to make network changes.
Gartner [wrote][9] in the past that CLI has remained the primary operational tool for mainstream network operations teams for easily the past 15-20 years but that “moving away from the CLI is a good thing for the networking industry, and while it won't disappear completely (advanced/nuanced troubleshooting for example), it will be supplanted as the main interface into networking infrastructure.”
Juniper's study found that 87% of businesses are still doing most or some of their network management at the device level.
What all of this shows is that customers are obviously interested in SDN but are still grappling with the best ways to get there, Bushong said.
The Juniper study also found users interested in SDN because of the potential for a security boost.
SDN enables a variety of security benefits. A customer can split up a network connection between an end user and the data center and have different security settings for the various types of network traffic. A network could have one public-facing, low-security segment that does not touch any sensitive information. Another segment could have much more fine-grained remote-access control, with software-based [firewall][10] and encryption policies on it that allow sensitive data to traverse it. SDN users can roll out security policies across the network, from the data center to the edge, much more rapidly than in traditional network environments.
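Expressed as data, that split might look like the hedged sketch below: two segments with different firewall and encryption attributes that a controller rolls out everywhere at once. The segment names, fields, and controller interface are invented for illustration; no particular SDN controller API is implied.

```
# Illustrative only: per-segment security attributes pushed from one place.
# Segment names, fields, and the controller interface are invented.
SEGMENTS = {
    "public-web": {
        "vlan": 100,
        "firewall": "allow 80,443 inbound; deny lateral movement",
        "encryption": None,  # carries no sensitive data
    },
    "payments": {
        "vlan": 200,
        "firewall": "deny all except app tier on 8443",
        "encryption": "ipsec",  # fine-grained remote-access control
    },
}

def rollout(controller, segments=SEGMENTS):
    """Apply every segment definition once; the controller propagates it
    from the data center to the edge."""
    for name, policy in segments.items():
        controller.apply_segment(name, policy)
```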
“Many enterprises see security—not speed—as the biggest consequence of not making this transition in the next five years, with nearly 40 percent identifying the inability to quickly address new threats as one of their main concerns,” wrote Manoj Leelanivas, chief product officer at Juniper Networks, in a blog about the survey.
“SDN is not often associated with greater security but this makes sense when we remember this is an operational transformation. In security, the challenge lies not in identifying threats or creating solutions, but in applying these solutions to a fragmented network. Streamlining complex security operations, touching many different departments and managing multiple security solutions, is where a software-defined approach can provide the answer,” Leelanivas stated.
Some of the other key findings from Juniper included:
  * **The future of AI** : The deployment of artificial intelligence is about changing the operational model, Bushong said. “The ability to more easily manage workflows over groups of devices and derive usable insights to help customers be more proactive rather than reactive is the direction we are moving. Everything will ultimately be AI-driven,” he said.
* **Automation** : While automation is often considered a threat, Juniper said its respondents see it positively within the context of SDN, with 38% reporting it will improve security and 25% that it will enhance their jobs by streamlining manual operations.
* **Flexibility** : Agility is the #1 benefit respondents considering SDN want to gain (48%), followed by improved reliability (43%) and greater simplicity (38%).
* **SD-WAN** : The majority, 54%, have rolled out or are in the process of rolling out SD-WAN, while an additional 34% have it under current consideration.
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400739/juniper-sdn-snapshot-finds-security-legacy-network-tech-impacts-core-network-changes.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/sdn_software-defined-network_architecture-100791938-large.jpg
[2]: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html
[3]: https://www.wakefieldresearch.com/
[4]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[5]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[6]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[7]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[8]: https://blogs.gartner.com/andrew-lerner/2018/01/04/checking-in-on-the-death-of-the-cli/
[9]: https://blogs.gartner.com/andrew-lerner/2016/11/22/predicting-the-death-of-the-cli/
[10]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Self-learning sensor chips wont need networks)
[#]: via: (https://www.networkworld.com/article/3400659/self-learning-sensor-chips-wont-need-networks.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Self-learning sensor chips wont need networks
======
Scientists working on new, machine-learning networks aim to embed everything needed for artificial intelligence (AI) onto a processor, eliminating the need to transfer data to the cloud or computers.
![Jiraroj Praditcharoenkul / Getty Images][1]
Tiny, intelligent microelectronics should be used to perform as much sensor processing as possible on-chip rather than wasting resources by sending often unneeded, duplicated raw data to the cloud or computers. So say scientists behind new machine-learning networks that aim to embed everything needed for artificial intelligence (AI) onto a processor.
“This opens the door for many new applications, starting from real-time evaluation of sensor data,” says [Fraunhofer Institute for Microelectronic Circuits and Systems][2] on its website. No delays sending unnecessary data onwards, along with speedy processing, means theoretically there is zero latency.
Plus, on-microprocessor self-learning means the embedded, or sensor, devices can self-calibrate. They can even be “completely reconfigured to perform a totally different task afterwards,” the institute says. “An embedded system with different tasks is possible.”
**[ Also read: [What is edge computing?][3] and [How edge networking and IoT will reshape data centers][4] ]**
Much internet of things (IoT) data sent through networks is redundant and wastes resources: a temperature reading taken every 10 minutes, say, when the ambient temperature hasn't changed, is one example. In fact, one only needs to know when the temperature has changed, and maybe then only when thresholds have been met.
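A minimal sketch of that idea in Python: keep the raw samples on the device and report only changes that cross a threshold. The threshold value and the send() callback are placeholders for whatever the real device would use.

```
# Report a temperature only when it moves past a threshold; everything else
# stays on the device. The threshold and send() callback are placeholders.
THRESHOLD_C = 0.5

def filter_readings(samples, send):
    last_sent = None
    for reading in samples:
        if last_sent is None or abs(reading - last_sent) >= THRESHOLD_C:
            send(reading)  # only meaningful changes leave the sensor
            last_sent = reading

# Example: filter_readings([21.0, 21.1, 21.0, 22.3, 22.4], print)
```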
### Neural network-on-sensor chip
The commercial German research organization says it's developing a specific RISC-V microprocessor with a special hardware accelerator designed for a [brain-copying, artificial neural network (ANN) it has developed][5]. The architecture could ultimately be suitable for the condition-monitoring or predictive sensors of the kind we will likely see more of in the industrial internet of things (IIoT).
Key to Fraunhofer IMS's [Artificial Intelligence for Embedded Systems (AIfES)][6] is that the self-learning takes place at chip level rather than in the cloud or on a computer, and that it is independent of “connectivity towards a cloud or a powerful and resource-hungry processing entity.” But it still offers a “full AI mechanism, like independent learning.”
It's “decentralized AI,” says Fraunhofer IMS. “It's not focused towards big-data processing.”
Indeed, with these kinds of systems, no connection is actually required for the raw data, just for the post-analytical results, if indeed needed. Swarming can even replace that. Swarming lets sensors talk to one another, sharing relevant information without even getting a host network involved.
“It is possible to build a network from small and adaptive systems that share tasks among themselves,” Fraunhofer IMS says.
Other benefits in decentralized neural networks include that they can be more secure than the cloud. Because all processing takes place on the microprocessor, “no sensitive data needs to be transferred,” Fraunhofer IMS explains.
### Other edge computing research
The Fraunhofer researchers aren't the only academics who believe entire networks become redundant with neuristor, brain-like AI chips. Binghamton University and Georgia Tech are working together on similar edge-oriented tech.
“The idea is we want to have these chips that can do all the functioning in the chip, rather than messages back and forth with some sort of large server,” Binghamton said on its website when [I wrote about the university's work last year][7].
One of the advantages of having no major communications link: not only do you not have to worry about internet resilience, you also save the energy that would be spent creating the link. Energy efficiency is an ambition in the sensor world — replacing batteries is time consuming, expensive, and sometimes, in the case of remote locations, extremely difficult.
Memory or storage for swaths of raw data awaiting transfer to be processed at a data center, or similar, doesn't have to be provided either — it's been processed at the source, so it can be discarded.
**More about edge networking:**
* [How edge networking and IoT will reshape data centers][4]
* [Edge computing best practices][8]
* [How edge computing can help secure the IoT][9]
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400659/self-learning-sensor-chips-wont-need-networks.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_automation_by_jiraroj_praditcharoenkul_gettyimages-902668940_2400x1600-100788458-large.jpg
[2]: https://www.ims.fraunhofer.de/en.html
[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[5]: https://www.ims.fraunhofer.de/en/Business_Units_and_Core_Competencies/Electronic_Assistance_Systems/News/AIfES-Artificial_Intelligence_for_Embedded_Systems.html
[6]: https://www.ims.fraunhofer.de/en/Business_Units_and_Core_Competencies/Electronic_Assistance_Systems/technologies/Artificial-Intelligence-for-Embedded-Systems-AIfES.html
[7]: https://www.networkworld.com/article/3326557/edge-chips-could-render-some-networks-useless.html
[8]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[9]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,53 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What to do when yesterdays technology wont meet todays support needs)
[#]: via: (https://www.networkworld.com/article/3399875/what-to-do-when-yesterday-s-technology-won-t-meet-today-s-support-needs.html)
[#]: author: (Anand Rajaram )
What to do when yesterday's technology won't meet today's support needs
======
![iStock][1]
You probably already know that end user technology is exploding and are feeling the effects of it in your support organization every day. Remember when IT sanctioned and standardized every hardware and software instance in the workplace? Those days are long gone. Today, it's the driving force of productivity that dictates what will or won't be used, and that can be hard on a support organization.
Whatever users need to do their jobs better, faster, more efficiently is what you are seeing come into the workplace. So naturally, that's what comes into your service desk too. Support organizations see all kinds of [devices, applications, systems, and equipment][2], and it's adding a great deal of complexity and demand to keep up with. In fact, four of the top five factors causing support ticket volumes to rise are attributed to new and current technology.
To keep up with the steady [rise of tickets][3] and stay out in front of this surge, support organizations need to take a good, hard look at the processes and technologies they use. Yesterday's methods won't cut it. The landscape is simply changing too fast. Supporting today's users and getting them back to work fast requires an expanding set of skills and tools.
So where do you start with a new technology project? Just because a technology is new or hyped doesn't mean it's right for your organization. It's important to understand your project goals and the experience you really want to create, and match your technology choices to those goals. But don't go it alone. Talk to your teams. Get intimately familiar with how your support organization works today. Understand your customers' needs at a deep level. And bring the right people to the table to cover:
* Business problem analysis: What existing business issue are stakeholders unhappy with?
* The impact of that problem: How does that issue justify making a change?
* Process automation analysis: What area(s) can technology help automate?
* Other solutions: Have you considered any other options besides technology?
With these questions answered, you're ready to entertain your technology options. Put together your “must-haves” in a requirements document and reach out to potential suppliers. During the initial information-gathering stage, assess whether the supplier understands your goals and how their technology helps you meet them. To narrow the field, compare solutions side by side against your goals. Select the top two or three for more in-depth product demos before moving into product evaluations. By the time you're ready for implementation, you have empirical, practical knowledge of how the solution will perform against your business goals.
The key takeaway is this: Technology for technology's sake is just technology. But technology that drives business value is a solution. If you want a solution that drives results for your organization and your customers, it's worth following a strategic selection process to match your goals with the best technology for the job.
For more insight, check out the [LogMeIn Rescue][4] and HDI webinar “[Technology and the Service Desk: Expanding Mission, Expanding Skills”][5].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3399875/what-to-do-when-yesterday-s-technology-won-t-meet-today-s-support-needs.html
作者:[Anand Rajaram][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/06/istock-1019006240-100798168-large.jpg
[2]: https://www.logmeinrescue.com/resources/datasheets/infographic-mobile-support-are-your-employees-getting-what-they-need?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
[3]: https://www.logmeinrescue.com/resources/analyst-reports/the-importance-of-remote-support-in-a-shift-left-world?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
[4]: https://www.logmeinrescue.com/?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
[5]: https://www.brighttalk.com/webcast/8855/312289?utm_source=LogMeIn7&utm_medium=brighttalk&utm_campaign=312289

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (luuming)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,101 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How we built a Linux desktop app with Electron)
[#]: via: (https://opensource.com/article/19/4/linux-desktop-electron)
[#]: author: (Nils Ganther https://opensource.com/users/nils-ganther)
How we built a Linux desktop app with Electron
======
A story of building an open source email service that runs natively on
Linux desktops, thanks to the Electron framework.
![document sending][1]
[Tutanota][2] is a secure, open source email service that's been available as an app for the browser, iOS, and Android. The client code is published under GPLv3 and the Android app is available on [F-Droid][3] to enable everyone to use a completely Google-free version.
Because Tutanota focuses on open source and develops on Linux clients, we wanted to release a desktop app for Linux and other platforms. Being a small team, we quickly ruled out building native apps for Linux, Windows, and MacOS and decided to adapt our app using [Electron][4].
Electron is the go-to choice for anyone who wants to ship visually consistent, cross-platform applications, fast—especially if there's already a web app that needs to be freed from the shackles of the browser API. Tutanota is exactly such a case.
Tutanota is based on [SystemJS][5] and [Mithril][6] and aims to offer simple, secure email communications for everybody. As such, it has to provide a lot of the standard features users expect from any email client.
Some of these features, like basic push notifications, search for text and contacts, and support for two-factor authentication are easy to offer in the browser thanks to modern APIs and standards. Other features (such as automatic backups or IMAP support without involving our servers) need less-restricted access to system resources, which is exactly what the Electron framework provides.
While some criticize Electron as "just a basic wrapper," it has obvious benefits:
* Electron enables you to adapt a web app quickly for Linux, Windows, and MacOS desktops. In fact, most Linux desktop apps are built with Electron.
* Electron enables you to easily bring the desktop client to feature parity with the web app.
* Once you've published the desktop app, you can use free development capacity to add desktop-specific features that enhance usability and security.
* And last but certainly not least, it's a great way to make the app feel native and integrated into the user's system while maintaining its identity.
### Meeting users' needs
At Tutanota, we do not rely on big investor money, rather we are a community-driven project. We grow our team organically based on the increasing number of users upgrading to our freemium service's paid plans. Listening to what users want is not only important to us, it is essential to our success.
Offering a desktop client was users' [most-wanted feature][7] in Tutanota, and we are proud that we can now offer free beta desktop clients to all of our users. (We also implemented another highly requested feature—[search on encrypted data][8]—but that's a topic for another time.)
We liked the idea of providing users with signed versions of Tutanota and enabling functions that are impossible in the browser, such as push notifications via a background process. Now we plan to add more desktop-specific features, such as IMAP support without depending on our servers to act as a proxy, automatic backups, and offline availability.
We chose Electron because its combination of Chromium and Node.js promised to be the best fit for our small development team, as it required only minimal changes to our web app. It was particularly helpful to use the browser APIs for everything as we got started, slowly replacing those components with more native versions as we progressed. This approach was especially handy with attachment downloads and notifications.
### Tuning security
We were aware that some people cite security problems with Electron, but we found Electron's options for fine-tuning access in the web app quite satisfactory. You can use resources like the Electron's [security documentation][9] and Luca Carettoni's [Electron Security Checklist][10] to help prevent catastrophic mishaps with untrusted content in your web app.
### Achieving feature parity
The Tutanota web client was built from the start with a solid protocol for interprocess communication. We utilize web workers to keep user interface (UI) rendering responsive while encrypting and requesting data. This came in handy when we started implementing our mobile apps, which use the same protocol to communicate between the native part and the web view.
That's why when we started building the desktop clients, a lot of bindings for things like native push notifications, opening mailboxes, and working with the filesystem were already there, so only the native (node) side had to be implemented.
Another convenience was our build process using the [Babel transpiler][11], which allows us to write the entire codebase in modern ES6 JavaScript and mix-and-match utility modules between the different environments. This enabled us to speedily adapt the code for the Electron-based desktop apps. However, we encountered some challenges.
### Overcoming challenges
While Electron allows us to integrate with the different platforms' desktop environments pretty easily, you can't underestimate the time investment to get things just right! In the end, it was these little things that took up much more time than we expected but were also crucial to finish the desktop client project.
The places where platform-specific code was necessary caused most of the friction:
* Window management and the tray, for example, are still handled in subtly different ways on the three platforms.
* Registering Tutanota as the default mail program and setting up autostart required diving into the Windows Registry while making sure to prompt the user for admin access in a [UAC][12]-compatible way.
* We needed to use Electron's API for shortcuts and menus to offer even standard features like copy, paste, undo, and redo.
This process was complicated a bit by users' expectations of certain, sometimes not directly compatible behavior of the apps on different platforms. Making the three versions feel native required some iteration and even some modest additions to the web app to offer a text search similar to the one in the browser.
### Wrapping up
Our experience with Electron was largely positive, and we completed the project in less than four months. Despite some rather time-consuming features, we were surprised about the ease with which we could ship a beta version of the [Tutanota desktop client for Linux][13]. If you're interested, you can dive into the source code on [GitHub][14].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/linux-desktop-electron
作者:[Nils Ganther][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/nils-ganther
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ (document sending)
[2]: https://tutanota.com/
[3]: https://f-droid.org/en/packages/de.tutao.tutanota/
[4]: https://electronjs.org/
[5]: https://github.com/systemjs/systemjs
[6]: https://mithril.js.org/
[7]: https://tutanota.uservoice.com/forums/237921-general/filters/top?status_id=1177482
[8]: https://tutanota.com/blog/posts/first-search-encrypted-data/
[9]: https://electronjs.org/docs/tutorial/security
[10]: https://www.blackhat.com/docs/us-17/thursday/us-17-Carettoni-Electronegativity-A-Study-Of-Electron-Security-wp.pdf
[11]: https://babeljs.io/
[12]: https://en.wikipedia.org/wiki/User_Account_Control
[13]: https://tutanota.com/blog/posts/desktop-clients/
[14]: https://www.github.com/tutao/tutanota

View File

@ -0,0 +1,140 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tweaking the look of Fedora Workstation with themes)
[#]: via: (https://fedoramagazine.org/tweaking-the-look-of-fedora-workstation-with-themes/)
[#]: author: (Ryan Lerch https://fedoramagazine.org/author/ryanlerch/)
Tweaking the look of Fedora Workstation with themes
======
![][1]
Changing the theme of a desktop environment is a common way to customize your daily experience with Fedora Workstation. This article discusses the 4 different types of visual themes you can change and how to change to a new theme. Additionally, this article will cover how to install new themes from both the Fedora repositories and 3rd party theme sources.
### Theme Types
When changing the theme of Fedora Workstation, there are 4 different themes that can be changed independently of each other. This allows a user to mix and match the theme types to customize their desktop in a multitude of combinations. The 4 theme types are the **Application** (GTK) theme, the **shell** theme, the **icon** theme, and the **cursor** theme.
#### Application (GTK) themes
As the name suggests, Application themes change the styling of the applications that are displayed on a user's desktop. Application themes control the style of the window borders and the window titlebar. They also control the style of the widgets in the windows — like dropdowns, text inputs, and buttons. One point to note is that an application theme does not change the icons that are displayed in an application — this is achieved using the icon theme.
![Two application windows with two different application themes. The default Adwaita theme on the left, the Adapta theme on the right.][2]
Application themes are also known as GTK themes, as GTK (**G**IMP **T**ool**k**it) is the underlying technology that is used to render the windows and user interface widgets in those windows on Fedora Workstation.
#### Shell Themes
Shell themes change the appearance of the GNOME Shell. The GNOME Shell is the technology that displays the top bar (and the associated widgets like drop downs), as well as the overview screen and the applications list it contains.
![Comparison of two Shell themes, with the Fedora Workstation default on top, and the Adapta shell theme on the bottom.][3]
#### Icon Themes
As the name suggests, icon themes change the icons used in the desktop. Changing the icon theme will change the icons displayed both in the Shell, and in applications.
![Comparison of two icon themes, with the Fedora 30 Workstation default Adwaita on the left, and the Yaru icon theme on the right][4]
One important item to note with icon themes is that not all icon themes provide customized icons for every application. Consequently, changing the icon theme will not change all the icons in the applications list in the overview.
![Comparison of two icon themes, with the Fedora 30 Workstation default Adwaita on the top, and the Yaru icon theme on the bottom][5]
#### Cursor Theme
The cursor theme allows a user to change how the mouse pointer is displayed. Most cursor themes change all the common cursors, including the pointer, drag handles and the loading cursor.
![Comparison of multiple cursors of two different cursor themes. Fedora 30 default is on the left, the Breeze Snow theme on the right.][6]
### Changing the themes
Changing themes on Fedora Workstation is a simple process. To change all 4 types of themes, use the **Tweaks** application. Tweaks is a tool used to change a range of different options in Fedora Workstation. It is not installed by default; install it using the Software application:
![][7]
Alternatively, install Tweaks from the command line with the command:
```
sudo dnf install gnome-tweak-tool
```
In addition to Tweaks, to change the Shell theme, the **User Themes** GNOME Shell Extension needs to be installed and enabled. [Check out this post for more details on installing extensions][8].
Next, launch Tweaks and switch to the Appearance pane. The Themes section in the Appearance pane lets you change each of the theme types. Simply choose a theme from the dropdown, and the new theme will apply automatically.
![][9]
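If you prefer the command line, the same settings that Tweaks changes can also be set with **gsettings**. Treat the following as a rough sketch: the theme names used here (Adapta, Yaru, Breeze_Snow) are only examples and must match themes you actually have installed, and the shell theme key only exists once the User Themes extension is enabled.

```
# Application (GTK) theme
gsettings set org.gnome.desktop.interface gtk-theme 'Adapta'

# Icon theme
gsettings set org.gnome.desktop.interface icon-theme 'Yaru'

# Cursor theme
gsettings set org.gnome.desktop.interface cursor-theme 'Breeze_Snow'

# Shell theme (requires the User Themes extension)
gsettings set org.gnome.shell.extensions.user-theme name 'Adapta'
```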
### Installing themes
Armed with the knowledge of the types of themes and how to change them, it is time to install some themes. Broadly speaking, there are two ways to install new themes on your Fedora Workstation — installing theme packages from the Fedora repositories, or manually installing a theme. One point to note when installing themes is that you may need to close and re-open the Tweaks application to make a newly installed theme appear in the dropdowns.
#### Installing from the Fedora repositories
The Fedora repositories contain a small selection of additional themes that, once installed, are available to be chosen in Tweaks. Theme packages are not available in the Software application and have to be searched for and installed via the command line. Most theme packages have a consistent naming structure, so listing available themes is pretty easy.
To find Application (GTK) themes use the command:
```
dnf search gtk | grep theme
```
To find Shell themes:
```
dnf search shell-theme
```
Icon themes:
```
dnf search icon-theme
```
Cursor themes:
```
dnf search cursor-theme
```
Once you have found a theme to install, install the theme using dnf. For example:
```
sudo dnf install numix-gtk-theme
```
#### Installing themes manually
For a wider range of themes, there are a plethora of places on the internet to find new themes to use on Fedora Workstation. Two popular places to find themes are [OpenDesktop][10] and [GNOMELook][11].
Typically, when downloading themes from these sites, the themes are encapsulated in an archive like a tar.gz or zip file. In most cases, to install these themes, simply extract the contents into the correct directory, and the theme will appear in Tweaks. Note, too, that themes can be installed either globally (which must be done using sudo), so all users on the system can use them, or just for the current user.
For Application (GTK) themes and GNOME Shell themes, extract the archive to the **.themes/** directory in your home directory. To install for all users, extract to **/usr/share/themes/**.
For Icon and Cursor themes, extract the archive to the **.icons/** directory in your home directory. To install for all users, extract to **/usr/share/icons/**.
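As a rough sketch of what that looks like in practice (the archive names mytheme.tar.gz and myicons.zip below are placeholders for whatever you downloaded):

```
# Application (GTK) or Shell theme, for the current user only
mkdir -p ~/.themes
tar -xzf mytheme.tar.gz -C ~/.themes/

# Icon or cursor theme, for the current user only
mkdir -p ~/.icons
unzip myicons.zip -d ~/.icons/

# The same archives, installed system-wide instead
sudo tar -xzf mytheme.tar.gz -C /usr/share/themes/
sudo unzip myicons.zip -d /usr/share/icons/
```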
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/tweaking-the-look-of-fedora-workstation-with-themes/
作者:[Ryan Lerch][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/ryanlerch/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/themes.png-816x345.jpg
[2]: https://fedoramagazine.org/wp-content/uploads/2019/06/application-theme-1024x514.jpg
[3]: https://fedoramagazine.org/wp-content/uploads/2019/06/overview-theme-1024x649.jpg
[4]: https://fedoramagazine.org/wp-content/uploads/2019/06/icon-theme-application-1024x441.jpg
[5]: https://fedoramagazine.org/wp-content/uploads/2019/06/overview-icons-1024x637.jpg
[6]: https://fedoramagazine.org/wp-content/uploads/2019/06/cursortheme-1024x467.jpg
[7]: https://fedoramagazine.org/wp-content/uploads/2019/06/tweaks-in-software-1024x725.png
[8]: https://fedoramagazine.org/install-extensions-via-software-application/
[9]: https://fedoramagazine.org/wp-content/uploads/2019/06/tweaks-choose-themes.png
[10]: https://www.opendesktop.org/
[11]: https://www.gnome-look.org/

View File

@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Examples of blameless culture outside of DevOps)
[#]: via: (https://opensource.com/article/19/6/everyday-blameless)
[#]: author: (Patrick Housley https://opensource.com/users/patrickhousley)
Examples of blameless culture outside of DevOps
======
Is blameless culture just a matter of postmortems and top-down change?
Or are there things individuals can do to promote it?
![people in different locations who are part of the same team][1]
A blameless culture is not a new concept in the technology industry. In fact, in 2012, [John Allspaw][2] wrote about how [Etsy uses blameless postmortems][3] to dive to the heart of problems when they arise. Other technology giants, like Google, have also worked hard to implement a blameless culture. But what is a blameless culture? Is it just a matter of postmortems? Does it take a culture change to make blameless a reality? And what about flagrant misconduct?
### Exploring blameless culture
In 2009, [Mike Rother][4] wrote an [award-winning book][5] on the culture of Toyota, in which he broke down how the automaker became so successful in the 20th century when most other car manufacturers were either stagnant or losing ground. Books on Toyota were nothing new, but how Mike approached Toyota's success was unique. Instead of focusing on the processes and procedures Toyota implements, he explains in exquisite detail the company's culture, including its focus on blameless failure and continuous improvement.
Mike explains that Toyota, in the face of failure, focuses on the system where the failure occurred instead of who is at fault. Furthermore, the company treats failure as a learning opportunity, not a chance to chastise the operator. This is the very definition of a blameless culture and one that the technology field can still learn much from.
### It's not a culture shift
It shouldn't take an executive initiative to attain blamelessness. It's not so much the company's culture that we need to change, but our attitudes towards fault and failure. Sure, the company's culture should change, but, even in a blameless culture, some people still have the undying urge to point fingers and call others out for their shortcomings.
I was once contracted to work with a company on developing and improving its digital footprint. This company employed its own developers, and, as you might imagine, there was tension at times. If a bug was found in production, the search began immediately for the person responsible. I think it's just human nature to want to find someone to blame. But there is a better way, and it will take practice.
### Blamelessness at the microscale
When I talk about implementing blamelessness, I'm not talking about doing it at the scale of companies and organizations. That's too large for most of us. Instead, focus your attention on the smallest scale: the code commit, review, and pull request. Focus on your actions and the actions of your peers and those you lead. You may find that you have the biggest impact in this area.
How often do you or one of your peers get a bug report, dig in to find out what is wrong, and stop once you determine who made the breaking change? Do you immediately assume that a pull request or code commit needs to be reverted? Do you contact that individual and tell them what they broke and which commit it was? If this is happening within your team, you're the furthest from blamelessness you could be. But it can be remedied.
Obviously, when you find a bug, you need to understand what broke, where, and who did it. But don't stop there. Attempt to fix the issue. The chances are high that patching the code will be a faster resolution than trying to figure out which code to back out. Too many times, I have seen people try to back out code only to find that they broke something else.
If you're not confident that you can fix the issue, politely ask the individual who made the breaking change to assist. Yes, assist! My mom always said, "you can catch more flies with honey than vinegar." You will typically get a more positive response if you ask people for help instead of pointing out what they broke.
Finally, once you have a fix, make sure to ask the individual who caused the bug to review your change. This isn't about rubbing it in their face. Remember that failure represents a learning opportunity, and the person who created the failure will learn if they have a chance to review the fix you created. Furthermore, that individual may have unique details and reasoning that suggests your change may fix the immediate issue but may not solve the original problem.
### Catch flagrant misconduct and abuse sooner
A blameless culture doesn't provide blanket protection if someone is knowingly attempting to do wrong. That also doesn't mean the system is not faulty. Remember how Toyota focuses on the system where failure occurs? If an individual can knowingly create havoc within the software they are working on, they should be held accountable—but so should the system.
When reviewing failure, no matter how small, always ask, "How could we have caught this sooner?" Chances are you could improve some part of your software development lifecycle (SDLC) to make failures less likely to happen. Maybe you need to add more tests. Or run your tests more often. Whatever the solution, remember that fixing the bug is only part of a complete fix.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/everyday-blameless
作者:[Patrick Housley][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/patrickhousley
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/connection_people_team_collaboration.png?itok=0_vQT8xV (people in different locations who are part of the same team)
[2]: https://twitter.com/allspaw
[3]: https://codeascraft.com/2012/05/22/blameless-postmortems/
[4]: http://www-personal.umich.edu/~mrother/Homepage.html
[5]: https://en.wikipedia.org/wiki/Toyota_Kata

View File

@ -0,0 +1,263 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Linux can help with your spelling)
[#]: via: (https://www.networkworld.com/article/3400942/how-linux-can-help-with-your-spelling.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How Linux can help with your spelling
======
Whether you're struggling with one elusive word or checking a report before you send it off to your boss, Linux can help with your spelling.
![Sandra Henry-Stocker][1]
Linux provides all sorts of tools for data analysis and automation, but it also helps with an issue that we all struggle with from time to time: spelling! Whether you're grappling with the spelling of a single word while you're writing your weekly report or you want a set of computerized "eyes" to find your typos before you submit a business proposal, maybe it's time to check out how it can help.
### look
One tool is **look**. If you know how a word begins, you can ask the look command to provide a list of words that start with those letters. Unless an alternate word source is provided, look uses **/usr/share/dict/words** to identify the words for you. This file, with its hundreds of thousands of words, will suffice for most of the English words that we routinely use, but it might not have some of the more obscure words that some of us in the computing field tend to use — such as zettabyte.
The look command's syntax is as easy as can be. Type "look word" and it will run through all the words in that words file and find matches for you.
```
$ look amelio
ameliorable
ameliorableness
ameliorant
ameliorate
ameliorated
ameliorates
ameliorating
amelioration
ameliorations
ameliorativ
ameliorative
amelioratively
ameliorator
amelioratory
```
If you happen upon a word that isn't included in the word list on the system, you'll simply get no output.
```
$ look zetta
$
```
Don't despair if you're not seeing what you were hoping for. You can add words to your words file or even reference an altogether different words list — either by finding one online or by creating one yourself. You don't even have to place an added word in the proper alphabetical location; just add it to the end of the file. You do need to do this as root, however. For example (and be careful with that **>>**!):
```
# echo "zettabyte" >> /usr/share/dict/words
```
Using a different list of words ("jargon" in this case) just requires adding the name of the file. Use a full path if the file is not the default.
```
$ look nybble /usr/share/dict/jargon
nybble
nybbles
```
The look command is also case-insensitive, so you don't have to concern yourself with whether the word you're looking for should be capitalized or not.
```
$ look zet
ZETA
Zeta
zeta
zetacism
Zetana
zetas
Zetes
zetetic
Zethar
Zethus
Zetland
Zetta
```
Of course, not all word lists are created equal. Some Linux distributions provide a _lot_ more words than others in their word files. Yours might have 100,000 words or many times that number.
On one of my Linux systems:
```
$ wc -l /usr/share/dict/words
102402 /usr/share/dict/words
```
On another:
```
$ wc -l /usr/share/dict/words
479828 /usr/share/dict/words
```
Remember that the look command works only with the beginnings of words, but there are other options if you don't want to start there.
### grep
Our dearly beloved **grep** command can pluck words from a word file as well as any tool. If you're looking for words that start or end with particular letters, grep is a natural. It can match words using beginnings, endings, or middle portions of words. Your system's word file will work with grep as easily as it does with look. The only drawback is that, unlike with look, you have to specify the file.
Using word beginnings with ^:
```
$ grep ^terra /usr/share/dict/words
terrace
terrace's
terraced
terraces
terracing
terrain
terrain's
terrains
terrapin
terrapin's
terrapins
terraria
terrarium
terrarium's
terrariums
```
Using word endings with $:
```
$ grep bytes$ /usr/share/dict/words
bytes
gigabytes
kilobytes
megabytes
terabytes
```
With grep, you do need to concern yourself with capitalization, but the command provides some options for that.
```
$ grep ^[Zz]et /usr/share/dict/words
Zeta
zeta
zetacism
Zetana
zetas
Zetes
zetetic
Zethar
Zethus
Zetland
Zetta
zettabyte
```
Setting up a symbolic link to the words file makes this kind of word search a little easier:
```
$ ln -s /usr/share/dict/words words
$ grep ^[Zz]et words
Zeta
zeta
zetacism
Zetana
zetas
Zetes
zetetic
Zethar
Zethus
Zetland
Zetta
zettabyte
```
### aspell
The aspell command takes a different approach. It provides a way to check the spelling in whatever file or text you provide to it. You can pipe text to it and have it tell you which words appear to be misspelled. If you're spelling all the words correctly, you'll see no output.
```
$ echo Did I mispell that? | aspell list
mispell
$ echo I can hardly wait to try out aspell | aspell list
aspell
$ echo Did I misspell anything? | aspell list
$
```
The "list" argument tells aspell to provide a list of misspelled words in the words that are sent through standard input.
You can also use aspell to locate and correct words in a text file. If it finds a misspelled word, it will offer you an opportunity to replace it from a list of similar (but correctly spelled) words, to accept the words and add them to your personal words list (~/.aspell.en.pws), to ignore the misspelling, or to abort the process altogether (leaving the file as it was before you started).
```
$ aspell -c mytext
```
Once aspell finds a word that's misspelled, it offers a list of choices like these for the incorrect "mispell":
```
1) mi spell 6) misplay
2) mi-spell 7) spell
3) misspell 8) misapply
4) Ispell 9) Aspell
5) misspells 0) dispel
i) Ignore I) Ignore all
r) Replace R) Replace all
a) Add l) Add Lower
b) Abort x) Exit
```
Note that the alternate words and spellings are numbered, while other options are represented by letter choices. You can choose one of the suggested spellings or opt to type a replacement. The "Abort" choice will leave the file intact even if you've already chosen replacements for some words. Words you elect to add will be inserted into a local file (e.g., ~/.aspell.en.pws).
### Alternate word lists
Tired of English? The aspell command can work with other languages if you add a word file for them. To add a dictionary for French on Debian systems, for example, you could do this:
```
$ sudo apt install aspell-fr
```
This new dictionary file would be installed as /usr/share/dict/French. To use it, you would simply need to tell aspell that you want to use the alternate word list:
```
$ aspell --lang=fr -c mytext
```
When using it, you might see something like this if aspell looks at the word "one":
```
1) once 6) orné
2) onde 7) ne
3) ondé 8) né
4) onze 9) on
5) orne 0) cône
i) Ignore I) Ignore all
r) Replace R) Replace all
a) Add l) Add Lower
b) Abort x) Exit
```
You can also get other language word lists from [GNU][3].
### Wrap-up
Even if you're a national spelling bee winner, you probably need a little help with spelling every now and then — if only to spot your typos. The aspell, look, and grep tools are ready to come to your rescue.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400942/how-linux-can-help-with-your-spelling.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/06/linux-spelling-100798596-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: ftp://ftp.gnu.org/gnu/aspell/dict/0index.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Kubernetes basics: Learn how to drive first)
[#]: via: (https://opensource.com/article/19/6/kubernetes-basics)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux)
Kubernetes basics: Learn how to drive first
======
Quit focusing on new projects and get focused on getting your Kubernetes
dump truck commercial driver's license.
![Truck steering wheel and dash][1]
In the first two articles in this series, I explained how Kubernetes is [like a dump truck][2] and that there are always [learning curves][3] to understanding elegant, professional tools like [Kubernetes][4] (and dump trucks, cranes, etc.). This article is about the next step: learning how to drive.
Recently, I saw a thread on Reddit about [essential Kubernetes projects][5]. People seem hungry to know the bare minimum they should learn to get started with Kubernetes. The "driving a dump truck analogy" helps frame the problem to stay on track. Someone in the thread mentioned that you shouldn't be running your own registry unless you have to, so people are already nibbling at this idea of driving Kubernetes instead of building it.
The API is Kubernetes' engine and transmission. Like a dump truck's steering wheel, clutch, gas, and brake pedal, the YAML or JSON files you use to build your applications are the primary interface to the machine. When you're first learning Kubernetes, this should be your primary focus. Get to know your controls. Don't get sidetracked by all the latest and greatest projects. Don't try out an experimental dump truck when you are just learning to drive. Instead, focus on the basics.
### Defined and actual states
First, Kubernetes works on the principles of defined state and actual state.
![Defined state and actual state][6]
Humans (developers/sysadmins/operators) specify the defined state using the YAML/JSON files they submit to the Kubernetes API. Then, Kubernetes uses a controller to analyze the difference between the new state defined in the YAML/JSON and the actual state in the cluster.
In the example above, the Replication Controller sees the difference between the three pods specified by the user, with one Pod running, and schedules two more. If you were to log into Kubernetes and manually kill one of the Pods, it would start another one to replace it—over and over and over. Kubernetes does not stop until the actual state matches the defined state. This is super powerful.
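As a minimal sketch of what such a defined state looks like, the YAML below asks for three replicas of a hypothetical web Pod (the hello-web name and the nginx image are placeholders, not part of the example above):

```
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-web          # hypothetical name
spec:
  replicas: 3              # the defined state: three Pods
  selector:
    app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: nginx       # placeholder image
        ports:
        - containerPort: 80
```

Submit it with **kubectl apply -f**, delete one of the resulting Pods, and watch the controller schedule a replacement to bring the actual state back to three.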
### Primitives
Next, you need to understand what primitives you can specify in Kubernetes.
![Kubernetes primitives][7]
It's more than just Pods; it's Deployments, Persistent Volume Claims, Services, routes, etc. With Kubernetes platform [OpenShift][8], you can add builds and BuildConfigs. It will take you a day or so to get decent with each of these primitives. Then you can dive in deeper as your use cases become more complex.
### Mapping developer-native to traditional IT environments
Finally, start thinking about how this maps to what you do in a traditional IT environment.
![Mapping developer-native to traditional IT environments][9]
The user has always been trying to solve a business problem, albeit a technical one. Historically, we have used things like playbooks to tie business logic to sets of IT systems with a single language. This has always been great for operations staff, but it gets hairier when you try to extend this to developers.
We have never been able to truly specify how a set of IT systems should behave and interact together, in a developer-native way, until Kubernetes. If you think about it, we are extending the ability to manage storage, network, and compute resources in a very portable and declarative way with the YAML/JSON files we write in Kubernetes, but they are always mapped back to "real" resources somewhere. We just don't have to worry about it in developer mode.
So, quit focusing on new projects in the Kubernetes ecosystem and get focused on driving it. In the next article, I will share some tools and workflows that help you drive Kubernetes.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/kubernetes-basics
作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/truck_steering_wheel_drive_car_kubernetes.jpg?itok=0TOzve80 (Truck steering wheel and dash)
[2]: https://opensource.com/article/19/6/kubernetes-dump-truck
[3]: https://opensource.com/article/19/6/kubernetes-learning-curve
[4]: https://opensource.com/resources/what-is-kubernetes
[5]: https://www.reddit.com/r/kubernetes/comments/bsoixc/what_are_the_essential_kubernetes_related/
[6]: https://opensource.com/sites/default/files/uploads/defined_state_-_actual_state.png (Defined state and actual state)
[7]: https://opensource.com/sites/default/files/uploads/new_primitives.png (Kubernetes primatives)
[8]: https://www.openshift.com/
[9]: https://opensource.com/sites/default/files/uploads/developer_native_experience_-_mapped_to_traditional.png (Mapping developer-native to traditional IT environments)

View File

@ -0,0 +1,152 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why hypothesis-driven development is key to DevOps)
[#]: via: (https://opensource.com/article/19/6/why-hypothesis-driven-development-devops)
[#]: author: (Brent Aaron Reed https://opensource.com/users/brentaaronreed/users/wpschaub)
Why hypothesis-driven development is key to DevOps
======
A hypothesis-driven development mindset harvests the core value of
feature flags: experimentation in production.
![gears and lightbulb to represent innovation][1]
The definition of DevOps offered by [Donovan Brown][2] is "_The union of **people**, **process**, and **products** to enable continuous delivery of **value** to our customers._" It accentuates the importance of continuous delivery of value. Let's discuss how experimentation is at the heart of modern development practices.
![][3]
### Reflecting on the past
Before we get into hypothesis-driven development, let's quickly review how we deliver value using waterfall, agile, deployment rings, and feature flags.
In the days of _**waterfall**_, we had predictable and process-driven delivery. However, we only delivered value towards the end of the development lifecycle, often failing late as the solution drifted from the original requirements, or our killer features were outdated by the time we finally shipped.
![][4]
Here, we have one release X and eight features, which are all deployed and exposed to the patiently waiting user. We are continuously delivering value—but with a typical release cadence of six months to two years, _the value of the features declines as the world continues to move on_. It worked well enough when there was time to plan and a lower expectation to react to more immediate needs.
The introduction of _**agile**_ allowed us to create and respond to change so we could continuously deliver working software, sense, learn, and respond.
![][5]
Now, we have three releases: X.1, X.2, and X.3. After the X.1 release, we improved feature 3 based on feedback and re-deployed it in release X.3. This is a simple example of delivering features more often, focused on working software, and responding to user feedback. _We are on the path of continuous delivery, focused on our key stakeholders: our users._
Using _**deployment rings**_ and/or _**feature flags**_, we can decouple release deployment and feature exposure, down to the individual user, to control the exposure—the blast radius—of features. We can conduct experiments; progressively expose, test, enable, and hide features; fine-tune releases, and continuously pivot on learnings and feedback.
When we add feature flags to the previous workflow, we can toggle features to be ON (enabled and exposed) or OFF (hidden).
![][6]
Here, feature flags for features 2, 4, and 8 are OFF, which results in the user being exposed to fewer of the features. All features have been deployed but are not exposed (yet). _We can fine-tune the features (value) of each release after deploying to production._
_**Ring-based deployment**_ limits the impact (blast) on users while we gradually deploy and evaluate one or more features through observation. Rings allow us to deploy features progressively and have multiple releases (v1, v1.1, and v1.2) running in parallel.
![Ring-based deployment][7]
Exposing features in the canary and early-adopter rings enables us to evaluate features without the risk of an all-or-nothing big-bang deployment.
_**Feature flags**_ decouple release deployment and feature exposure. You "flip the flag" to expose a new feature, perform an emergency rollback by resetting the flag, use rules to hide features, and allow users to toggle preview features.
![Toggling feature flags on/off][8]
When you combine deployment rings and feature flags, you can progressively deploy a release through rings and use feature flags to fine-tune the deployed release.
> See [deploying new releases: Feature flags or rings][9], [what's the cost of feature flags][10], and [breaking down walls between people, process, and products][11] for discussions on feature flags, deployment rings, and related topics.
### Adding hypothesis-driven development to the mix
_**Hypothesis-driven development**_ is based on a series of experiments to validate or disprove a hypothesis in a [complex problem domain][12] where we have unknown-unknowns. We want to find viable ideas or fail fast. Instead of developing a monolithic solution and performing a big-bang release, we iterate through hypotheses, evaluating how features perform and, most importantly, how and if customers use them.
> **Template:** _**We believe**_ {customer/business segment} _**wants**_ {product/feature/service} _**because**_ {value proposition}.
>
> **Example:** _**We believe**_ that users _**want**_ to be able to select different themes _**because**_ it will result in improved user satisfaction. We expect 50% or more users to select a non-default theme and to see a 5% increase in user engagement.
Every experiment must be based on a hypothesis, have a measurable conclusion, and contribute to feature and overall product learning. For each experiment, consider these steps:
* Observe your user
* Define a hypothesis and an experiment to assess the hypothesis
* Define clear success criteria (e.g., a 5% increase in user engagement)
* Run the experiment
* Evaluate the results and either accept or reject the hypothesis
* Repeat
Let's have another look at our sample release with eight hypothetical features.
![][13]
When we deploy each feature, we can observe user behavior and feedback, and prove or disprove the hypothesis that motivated the deployment. As you can see, the experiment fails for features 2 and 6, allowing us to fail-fast and remove them from the solution. _**We do not want to carry waste that is not delivering value or delighting our users!**_ The experiment for feature 3 is inconclusive, so we adapt the feature, repeat the experiment, and perform A/B testing in Release X.2. Based on observations, we identify the variant feature 3.2 as the winner and re-deploy in release X.3. _**We only expose the features that passed the experiment and satisfy the users.**_
### Hypothesis-driven development lights up progressive exposure
When we combine hypothesis-driven development with progressive exposure strategies, we can vertically slice our solution, incrementally delivering on our long-term vision. With each slice, we progressively expose experiments, enable features that delight our users and hide those that did not make the cut.
But there is more. When we embrace hypothesis-driven development, we can learn how technology works together, or not, and what our customers need and want. We also complement the test-driven development (TDD) principle. TDD encourages us to write the test first (hypothesis), then confirm our features are correct (experiment), and succeed or fail the test (evaluate). _**It is all about quality and delighting our users**, as outlined in principles 1, 3, and 7_ of the [Agile Manifesto][14]:
* Our highest priority is to satisfy the customers through early and continuous delivery of value.
* Deliver software often, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
* Working software is the primary measure of progress.
More importantly, we introduce a new mindset that breaks down the walls between development, business, and operations to view, design, develop, deliver, and observe our solution in an iterative series of experiments, adopting features based on scientific analysis, user behavior, and feedback in production. We can evolve our solutions in thin slices through observation and learning in production, a luxury that other engineering disciplines, such as aerospace or civil engineering, can only dream of.
The good news is that hypothesis-driven development supports the empirical process theory and its three pillars: **Transparency**, **Inspection**, and **Adaptation**.
![][15]
But there is more. Based on lean principles, we must pivot or persevere after we measure and inspect the feedback. Using feature toggles in conjunction with hypothesis-driven development, we get the best of both worlds, as well as the ability to use A/B testing to make decisions on feedback, such as likes/dislikes and value/waste.
### Remember:
Hypothesis-driven development:
* Is about a series of experiments to confirm or disprove a hypothesis. Identify value!
* Delivers a measurable conclusion and enables continued learning.
* Enables continuous feedback from the key stakeholder—the user—to understand the unknown-unknowns!
* Enables us to understand the evolving landscape into which we progressively expose value.
Progressive exposure:
* Is not an excuse to hide non-production-ready code. _**Always ship quality!**_
* Is about deploying a release of features through rings in production. _**Limit blast radius!**_
* Is about enabling or disabling features in production. _**Fine-tune release values!**_
* Relies on circuit breakers to protect the infrastructure from implications of progressive exposure. _**Observe, sense, act!**_
What have you learned about progressive exposure strategies and hypothesis-driven development? We look forward to your candid feedback.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/why-hypothesis-driven-development-devops
作者:[Brent Aaron Reed][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brentaaronreed/users/wpschaub
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation)
[2]: http://donovanbrown.com/post/what-is-devops
[3]: https://opensource.com/sites/default/files/hypo-1_copy.png
[4]: https://opensource.com/sites/default/files/uploads/hyp0-2-trans.png
[5]: https://opensource.com/sites/default/files/uploads/hypo-3-trans.png
[6]: https://opensource.com/sites/default/files/uploads/hypo-4_0.png
[7]: https://opensource.com/sites/default/files/uploads/hypo-6-trans.png
[8]: https://opensource.com/sites/default/files/uploads/hypo-7-trans.png
[9]: https://opensource.com/article/18/2/feature-flags-ring-deployment-model
[10]: https://opensource.com/article/18/7/does-progressive-exposure-really-come-cost
[11]: https://opensource.com/article/19/3/breaking-down-walls-between-people-process-and-products
[12]: https://en.wikipedia.org/wiki/Cynefin_framework
[13]: https://opensource.com/sites/default/files/uploads/hypo-5-trans.png
[14]: https://agilemanifesto.org/principles.html
[15]: https://opensource.com/sites/default/files/uploads/adapt-transparent-inspect.png

View File

@ -0,0 +1,219 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 tools to help you drive Kubernetes)
[#]: via: (https://opensource.com/article/19/6/tools-drive-kubernetes)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux/users/fatherlinux)
4 tools to help you drive Kubernetes
======
Learning to drive Kubernetes is more important than knowing how to build
it, and these tools will get you on the road fast.
![Tools in a workshop][1]
In the third article in this series, _[Kubernetes basics: Learn how to drive first][2]_ , I emphasized that you should learn to drive Kubernetes, not build it. I also explained that there is a minimum set of primitives that you have to learn to model an application in Kubernetes. I want to emphasize this point: the set of primitives that you _need_ to learn are the easiest set of primitives that you _can_ learn to achieve production-quality application deployments (i.e., high-availability [HA], multiple containers, multiple applications). Stated another way, learning the set of primitives built into Kubernetes is easier than learning clustering software, clustered file systems, load balancers, crazy Apache configs, crazy Nginx configs, routers, switches, firewalls, and storage backends—all the things you would need to model a simple HA application in a traditional IT environment (for virtual machines or bare metal).
In this fourth article, I'll share some tools that will help you learn to drive Kubernetes quickly.
### 1\. Katacoda
[Katacoda][3] is the easiest way to test-drive a Kubernetes cluster, hands-down. With one click and five seconds of time, you have a web-based terminal plumbed straight into a running Kubernetes cluster. It's magnificent for playing and learning. I even use it for demos and testing out new ideas. Katacoda provides a completely ephemeral environment that is recycled when you finish using it.
![OpenShift Playground][4]
[OpenShift playground][5]
![Kubernetes Playground][6]
[Kubernetes playground][7]
Katacoda provides ephemeral playgrounds and deeper lab environments. For example, the [Linux Container Internals Lab][3], which I have run for the last three or four years, is built in Katacoda.
Katacoda maintains a bunch of [Kubernetes and cloud tutorials][8] on its main site and collaborates with Red Hat to support a [dedicated learning portal for OpenShift][9]. Explore them both—they are excellent learning resources.
When you first learn to drive a dump truck, it's always best to watch how other people drive first.
### 2\. Podman generate kube
The **podman generate kube** command is a brilliant little subcommand that helps users naturally transition from a simple container engine running simple containers to a cluster use case running many containers (as I described in the [last article][2]). [Podman][10] does this by letting you start a few containers, then exporting the working Kube YAML, and firing them up in Kubernetes. Check this out (pssst, you can run it in this [Katacoda lab][3], which already has Podman and OpenShift).
First, notice the syntax to run a container is strikingly similar to Docker:
```
podman run -dtn two-pizza quay.io/fatherlinux/two-pizza
```
But this is something other container engines don't do:
```
podman generate kube -s two-pizza
```
The output:
```
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.3.1
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2019-06-07T08:08:12Z"
  labels:
    app: two-pizza
  name: two-pizza
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - bash -c 'while true; do /usr/bin/nc -l -p 3306 < /srv/hello.txt; done'
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: HOSTNAME
    - name: container
      value: oci
    image: quay.io/fatherlinux/two-pizza:latest
    name: two-pizza
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
    tty: true
    workingDir: /
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-06-07T08:08:12Z"
  labels:
    app: two-pizza
  name: two-pizza
spec:
  selector:
    app: two-pizza
  type: NodePort
status:
  loadBalancer: {}
```
You now have some working Kubernetes YAML, which you can use as a starting point for mucking around and learning, tweaking, etc. The **-s** flag created a service for you. [Brent Baude][11] is even working on new features like [adding Volumes/Persistent Volume Claims][12]. For a deeper dive, check out Brent's amazing work in his blog post "[Podman can now ease the transition to Kubernetes and CRI-O][13]."
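To close the loop, you can redirect the generated YAML to a file and import it into a cluster exactly as the comment at the top of the output suggests (the filename here is just an example):

```
podman generate kube -s two-pizza > two-pizza.yaml
kubectl create -f two-pizza.yaml
```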
### 3\. oc new-app
The **oc new-app** command is extremely powerful. It's OpenShift-specific, so it's not available in default Kubernetes, but it's really useful when you're starting to learn Kubernetes. Let's start with a quick command to create a fairly sophisticated application:
```
oc new-project example
oc new-app -f https://raw.githubusercontent.com/openshift/origin/master/examples/quickstarts/cakephp-mysql.json
```
With **oc new-app** , you can literally steal templates from the OpenShift developers and have a known, good starting point when developing primitives to describe your own applications. After you run the above command, your Kubernetes namespace (in OpenShift) will be populated by a bunch of new, defined resources.
```
oc get all
```
The output:
```
NAME READY STATUS RESTARTS AGE
pod/cakephp-mysql-example-1-build 0/1 Completed 0 4m
pod/cakephp-mysql-example-1-gz65l 1/1 Running 0 1m
pod/mysql-1-nkhqn 1/1 Running 0 4m
NAME DESIRED CURRENT READY AGE
replicationcontroller/cakephp-mysql-example-1 1 1 1 1m
replicationcontroller/mysql-1 1 1 1 4m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cakephp-mysql-example ClusterIP 172.30.234.135 <none> 8080/TCP 4m
service/mysql ClusterIP 172.30.13.195 <none> 3306/TCP 4m
NAME REVISION DESIRED CURRENT TRIGGERED BY
deploymentconfig.apps.openshift.io/cakephp-mysql-example 1 1 1 config,image(cakephp-mysql-example:latest)
deploymentconfig.apps.openshift.io/mysql 1 1 1 config,image(mysql:5.7)
NAME TYPE FROM LATEST
buildconfig.build.openshift.io/cakephp-mysql-example Source Git 1
NAME TYPE FROM STATUS STARTED DURATION
build.build.openshift.io/cakephp-mysql-example-1 Source Git@47a951e Complete 4 minutes ago 2m27s
NAME DOCKER REPO TAGS UPDATED
imagestream.image.openshift.io/cakephp-mysql-example docker-registry.default.svc:5000/example/cakephp-mysql-example latest About a minute ago
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/cakephp-mysql-example cakephp-mysql-example-example.2886795271-80-rhsummit1.environments.katacoda.com cakephp-mysql-example <all> None
```
The beauty of this is that you can delete Pods, watch the replication controllers recreate them, scale Pods up, and scale them down. You can play with the template and change it for other applications (which is what I did when I first started).
### 4\. Visual Studio Code
I saved one of my favorites for last. I use [vi][14] for most of my work, but I have never found a good syntax highlighter and code completion plugin for Kubernetes (if you have one, let me know). Instead, I have found that Microsoft's [VS Code][15] has a killer set of plugins that complete the creation of Kubernetes resources and provide boilerplate.
![VS Code plugins UI][16]
First, install the Kubernetes and YAML plugins shown in the image above.
![Autocomplete in VS Code][17]
Then, you can create a new YAML file from scratch and get auto-completion of Kubernetes resources. The example above shows a Service.
![VS Code autocomplete filling in boilerplate for an object][18]
When you use autocomplete and select the Service resource, it fills in some boilerplate for the object. This is magnificent when you are first learning to drive Kubernetes. You can build Pods, Services, Replication Controllers, Deployments, etc. This is a really nice feature when you are building these files from scratch or even modifying the files you create with **Podman generate kube**.
### Conclusion
These four tools (six if you count the two plugins) will help you learn to drive Kubernetes, instead of building or equipping it. In my final article in the series, I will talk about why Kubernetes is so exciting for running so many different workloads.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/tools-drive-kubernetes
作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_workshop_blue_mechanic.jpg?itok=4YXkCU-J (Tools in a workshop)
[2]: https://opensource.com/article/19/6/kubernetes-basics
[3]: https://learn.openshift.com/subsystems/container-internals-lab-2-0-part-1
[4]: https://opensource.com/sites/default/files/uploads/openshift-playground.png (OpenShift Playground)
[5]: https://learn.openshift.com/playgrounds/openshift311/
[6]: https://opensource.com/sites/default/files/uploads/kubernetes-playground.png (Kubernetes Playground)
[7]: https://katacoda.com/courses/kubernetes/playground
[8]: https://katacoda.com/learn
[9]: http://learn.openshift.com/
[10]: https://podman.io/
[11]: https://developers.redhat.com/blog/author/bbaude/
[12]: https://github.com/containers/libpod/issues/2303
[13]: https://developers.redhat.com/blog/2019/01/29/podman-kubernetes-yaml/
[14]: https://en.wikipedia.org/wiki/Vi
[15]: https://code.visualstudio.com/
[16]: https://opensource.com/sites/default/files/uploads/vscode_-_kubernetes_red_hat_-_plugins.png (VS Code plugins UI)
[17]: https://opensource.com/sites/default/files/uploads/vscode_-_kubernetes_service_-_autocomplete.png (Autocomplete in VS Code)
[18]: https://opensource.com/sites/default/files/uploads/vscode_-_kubernetes_service_-_boiler_plate.png (VS Code autocomplete filling in boilerplate for an object)

View File

@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 reasons to use Kubernetes)
[#]: via: (https://opensource.com/article/19/6/reasons-kubernetes)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
5 reasons to use Kubernetes
======
Kubernetes solves some of the most common problems development and
operations teams see every day.
![][1]
[Kubernetes][2] is the de facto open source container orchestration tool for enterprises. It provides application deployment, scaling, container management, and other capabilities, and it enables enterprises to optimize hardware resource utilization and increase production uptime through fault-tolerant functionality at speed. The project was initially developed by Google, which donated the project to the [Cloud-Native Computing Foundation][3]. In 2018, it became the first CNCF project to [graduate][4].
This is all well and good, but it doesn't explain why development and operations should invest their valuable time and effort in Kubernetes. The reason Kubernetes is so useful is that it helps dev and ops quickly solve the problems they struggle with every day.
Following are five ways Kubernetes' capabilities help dev and ops professionals address their most common problems.
### 1\. Vendor-agnostic
Many public cloud providers offer not only managed Kubernetes services but also many cloud products built on top of those services for on-premises application container orchestration. Being vendor-agnostic enables operators to design, build, and manage multi-cloud and hybrid cloud platforms easily and safely without risk of vendor lock-in. Kubernetes also eliminates the ops team's worries about a complex multi/hybrid cloud strategy.
### 2\. Service discovery
To develop microservices applications, Java developers must control service availability (in terms of whether the application is ready to serve a function) and ensure the service continues living, without any exceptions, in response to the client's requests. Kubernetes' [service discovery feature][5] means developers don't have to manage these things on their own anymore.
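As a small illustrative sketch (the payment name is hypothetical, not something from the article), a Service gives a set of Pods one stable, discoverable address:

```
apiVersion: v1
kind: Service
metadata:
  name: payment            # hypothetical microservice name
spec:
  selector:
    app: payment           # matches the labels on the backing Pods
  ports:
  - port: 8080
    targetPort: 8080
```

Other Pods in the cluster can then reach it simply as payment (or payment.<namespace>.svc.cluster.local), no matter how often the backing Pods come and go.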
### 3\. Invocation
How would your DevOps initiative deploy polyglot, cloud-native apps over thousands of virtual machines? Ideally, dev and ops could trigger deployments for bug fixes, function enhancements, new features, and security patches. Kubernetes' [deployment feature][6] automates this daily work. More importantly, it enables advanced deployment strategies, such as [blue-green and canary][7] deployments.
### 4\. Elasticity
Autoscaling is the key capability needed to handle massive workloads in cloud environments. By building a container platform, you can increase system reliability for end users. [Kubernetes Horizontal Pod Autoscaler][8] (HPA) allows a cluster to increase or decrease the number of applications (or Pods) to deal with peak traffic or performance spikes, reducing concerns about unexpected system outages.
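A minimal way to try this out, assuming a hypothetical Deployment named my-app that has CPU requests set and a metrics source available, is the kubectl autoscale command:

```
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
kubectl get hpa
```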
### 5\. Resilience
In a modern application architecture, failure-handling codes should be considered to control unexpected errors and recover from them quickly. But it takes a lot of time and effort for developers to simulate all the occasional errors. Kubernetes' [ReplicaSet][9] helps developers solve this problem by ensuring a specified number of Pods are kept alive continuously.
### Conclusion
Kubernetes enables enterprises to solve common dev and ops problems easily, quickly, and safely. It also provides other benefits, such as building a seamless multi/hybrid cloud strategy, saving infrastructure costs, and speeding time to market.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/reasons-kubernetes
作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://www.cncf.io/projects/
[4]: https://www.cncf.io/blog/2018/03/06/kubernetes-first-cncf-project-graduate/
[5]: https://kubernetes.io/docs/concepts/services-networking/service/
[6]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[7]: https://opensource.com/article/17/5/colorful-deployments
[8]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
[9]: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/

View File

@ -0,0 +1,608 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An Introduction to Kubernetes Secrets and ConfigMaps)
[#]: via: (https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
An Introduction to Kubernetes Secrets and ConfigMaps
======
Kubernetes Secrets and ConfigMaps separate the configuration of
individual container instances from the container image, reducing
overhead and adding flexibility.
![Kubernetes][1]
Kubernetes has two types of objects that can inject configuration data into a container when it starts up: Secrets and ConfigMaps. Secrets and ConfigMaps behave similarly in [Kubernetes][2], both in how they are created and because they can be exposed inside a container as mounted files or volumes or environment variables.
To explore Secrets and ConfigMaps, consider the following scenario:
> You're running the [official MariaDB container image][3] in Kubernetes and must do some configuration to get the container to run. The image requires an environment variable to be set for **MYSQL_ROOT_PASSWORD** , **MYSQL_ALLOW_EMPTY_PASSWORD** , or **MYSQL_RANDOM_ROOT_PASSWORD** to initialize the database. It also allows for extensions to the MySQL configuration file **my.cnf** by placing custom config files in **/etc/mysql/conf.d**.
You could build a custom image, setting the environment variables and copying the configuration files into it to create a bespoke container image. However, it is considered a best practice to create and use generic images and add configuration to the containers created from them, instead. This is a perfect use-case for ConfigMaps and Secrets. The **MYSQL_ROOT_PASSWORD** can be set in a Secret and added to the container as an environment variable, and the configuration files can be stored in a ConfigMap and mounted into the container as a file on startup.
Let's try it out!
### But first: A quick note about Kubectl
Make sure that your version of the **kubectl** client command is the same or newer than the Kubernetes cluster version in use.
An error along the lines of:
```
error: SchemaError(io.k8s.api.admissionregistration.v1beta1.ServiceReference): invalid object doesn't have additional properties
```
may mean the client version is too old and needs to be upgraded. The [Kubernetes Documentation for Installing Kubectl][4] has instructions for installing the latest client on various platforms.
If you're using Docker for Mac, it also installs its own version of **kubectl**, and that may be the issue. You can install a current client with **brew install**, replacing the symlink to the client shipped by Docker:
```
$ rm /usr/local/bin/kubectl
$ brew link --overwrite kubernetes-cli
```
The newer **kubectl** client should continue to work with Docker's Kubernetes version.
### Secrets
Secrets are a Kubernetes object intended for storing a small amount of sensitive data. It is worth noting that Secrets are stored base64-encoded within Kubernetes, so they are not wildly secure. Make sure to have appropriate [role-based access controls][5] (RBAC) to protect access to Secrets. Even so, extremely sensitive Secrets data should probably be stored using something like [HashiCorp Vault][6]. For the root password of a MariaDB database, however, base64 encoding is just fine.
#### Create a Secret manually
To create the Secret containing the **MYSQL_ROOT_PASSWORD** , choose a password and convert it to base64:
```
# The root password will be "KubernetesRocks!"
$ echo -n 'KubernetesRocks!' | base64
S3ViZXJuZXRlc1JvY2tzIQ==
```
Make a note of the encoded string. You need it to create the YAML file for the Secret:
```
apiVersion: v1
kind: Secret
metadata:
name: mariadb-root-password
type: Opaque
data:
password: S3ViZXJuZXRlc1JvY2tzIQ==
```
Save that file as **mysql-secret.yaml** and create the Secret in Kubernetes with the **kubectl apply** command:
```
$ kubectl apply -f mysql-secret.yaml
secret/mariadb-root-password created
```
#### View the newly created Secret
Now that you've created the Secret, use **kubectl describe** to see it:
```
$ kubectl describe secret mariadb-root-password
Name: mariadb-root-password
Namespace: secrets-and-configmaps
Labels: <none>
Annotations:
Type: Opaque
Data
====
password: 16 bytes
```
Note that the **Data** field contains the key you set in the YAML: **password**. The value assigned to that key is the password you created, but it is not shown in the output. Instead, the value's size is shown in its place, in this case, 16 bytes.
You can also use the **kubectl edit secret <secretname>** command to view and edit the Secret. If you edit the Secret, you'll see something like this:
```
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
password: S3ViZXJuZXRlc1JvY2tzIQ==
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"password":"S3ViZXJuZXRlc1JvY2tzIQ=="},"kind":"Secret","metadata":{"annotations":{},"name":"mariadb-root-password","namespace":"secrets-and-configmaps"},"type":"Opaque"}
creationTimestamp: 2019-05-29T12:06:09Z
name: mariadb-root-password
namespace: secrets-and-configmaps
resourceVersion: "85154772"
selfLink: /api/v1/namespaces/secrets-and-configmaps/secrets/mariadb-root-password
uid: 2542dadb-820a-11e9-ae24-005056a1db05
type: Opaque
```
Again, the **data** field with the **password** key is visible, and this time you can see the base64-encoded Secret.
#### Decode the Secret
Let's say you need to view the Secret in plain text, for example, to verify that the Secret was created with the correct content. You can do this by decoding it.
It is easy to decode the Secret by extracting the value and piping it to base64. In this case, you will use the output format **-o jsonpath=<path>** to extract only the Secret value using a JSONPath template.
```
# Returns the base64 encoded secret string
$ kubectl get secret mariadb-root-password -o jsonpath='{.data.password}'
S3ViZXJuZXRlc1JvY2tzIQ==
# Pipe it to `base64 --decode -` to decode:
$ kubectl get secret mariadb-root-password -o jsonpath='{.data.password}' | base64 --decode -
KubernetesRocks!
```
#### Another way to create Secrets
You can also create Secrets directly using the **kubectl create secret** command. The MariaDB image permits setting up a regular database user with a password by setting the **MYSQL_USER** and **MYSQL_PASSWORD** environment variables. A Secret can hold more than one key/value pair, so you can create a single Secret to hold both strings. As a bonus, by using **kubectl create secret** , you can let Kubernetes mess with base64 so that you don't have to.
```
$ kubectl create secret generic mariadb-user-creds \
    --from-literal=MYSQL_USER=kubeuser \
    --from-literal=MYSQL_PASSWORD=kube-still-rocks
secret/mariadb-user-creds created
```
Note the **--from-literal**, which sets the key name and the value all in one. You can pass as many **--from-literal** arguments as you need to create one or more key/value pairs in the Secret.
Validate that the username and password were created and stored correctly with the **kubectl get secrets** command:
```
# Get the username
$ kubectl get secret mariadb-user-creds -o jsonpath='{.data.MYSQL_USER}' | base64 --decode -
kubeuser
# Get the password
$ kubectl get secret mariadb-user-creds -o jsonpath='{.data.MYSQL_PASSWORD}' | base64 --decode -
kube-still-rocks
```
### ConfigMaps
ConfigMaps are similar to Secrets. They can be created and shared in the containers in the same ways. The only big difference between them is the base64-encoding obfuscation. ConfigMaps are intended for non-sensitive data—configuration data—like config files and environment variables and are a great way to create customized running services from generic container images.
#### Create a ConfigMap
ConfigMaps can be created in the same ways as Secrets. You can write a YAML representation of the ConfigMap manually and load it into Kubernetes, or you can use the **kubectl create configmap** command to create it from the command line. The following example creates a ConfigMap using the latter method but, instead of passing literal strings (as with **--from-literal=<key>=<string>** in the Secret above), it creates a ConfigMap from an existing file—a MySQL config intended for **/etc/mysql/conf.d** in the container. This config file overrides the **max_allowed_packet** setting that MariaDB sets to 16M by default.
First, create a file named **max_allowed_packet.cnf** with the following content:
```
[mysqld]
max_allowed_packet = 64M
```
This will override the default setting in the **my.cnf** file and set **max_allowed_packet** to 64M.
Once the file is created, you can create a ConfigMap named **mariadb-config** using the **kubectl create configmap** command that contains the file:
```
$ kubectl create configmap mariadb-config --from-file=max_allowed_packet.cnf
configmap/mariadb-config created
```
Just like Secrets, ConfigMaps store one or more key/value pairs in their Data hash of the object. By default, using **--from-file=<filename>** (as above) will store the contents of the file as the value, and the name of the file will be stored as the key. This is convenient from an organization viewpoint. However, the key name can be explicitly set, too. For example, if you used **--from-file=max-packet=max_allowed_packet.cnf** when you created the ConfigMap, the key would be **max-packet** rather than the file name. If you had multiple files to store in the ConfigMap, you could add each of them with an additional **--from-file=<filename>** argument.
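As a quick illustration, a command along these lines sets an explicit key name and bundles a second file into the same ConfigMap. Both the ConfigMap name and the second file, **slow_query.cnf**, are hypothetical examples and are not used elsewhere in this article:

```
# Store max_allowed_packet.cnf under the key "max-packet" and add a second,
# hypothetical config file to the same (example) ConfigMap
$ kubectl create configmap mariadb-config-example \
    --from-file=max-packet=max_allowed_packet.cnf \
    --from-file=slow_query.cnf
```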
#### View the new ConfigMap and read the data
As mentioned, ConfigMaps are not meant to store sensitive data, so the data is not encoded when the ConfigMap is created. This makes it easy to view and validate the data and edit it directly.
First, validate that the ConfigMap was, indeed, created:
```
$ kubectl get configmap mariadb-config
NAME DATA AGE
mariadb-config 1 9m
```
The contents of the ConfigMap can be viewed with the **kubectl describe** command. Note that the full contents of the file are visible and that the key name is, in fact, the file name, **max_allowed_packet.cnf**.
```
$ kubectl describe cm mariadb-config
Name: mariadb-config
Namespace: secrets-and-configmaps
Labels: <none>
Annotations: <none>
Data
====
max_allowed_packet.cnf:
----
[mysqld]
max_allowed_packet = 64M
Events: <none>
```
A ConfigMap can be edited live within Kubernetes with the **kubectl edit** command. Doing so will open a buffer with the default editor showing the contents of the ConfigMap as YAML. When changes are saved, they will immediately be live in Kubernetes. While not really the _best_ practice, it can be handy for testing things in development.
Say you want a **max_allowed_packet** value of 32M instead of the default 16M or the 64M in the **max_allowed_packet.cnf** file. Use **kubectl edit configmap mariadb-config** to edit the value:
```
$ kubectl edit configmap mariadb-config
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
max_allowed_packet.cnf: |
[mysqld]
max_allowed_packet = 32M
kind: ConfigMap
metadata:
creationTimestamp: 2019-05-30T12:02:22Z
name: mariadb-config
namespace: secrets-and-configmaps
resourceVersion: "85609912"
selfLink: /api/v1/namespaces/secrets-and-configmaps/configmaps/mariadb-config
uid: c83ccfae-82d2-11e9-832f-005056a1102f
```
After saving the change, verify the data has been updated:
```
# Note the '.' in max_allowed_packet.cnf needs to be escaped
$ kubectl get configmap mariadb-config -o "jsonpath={.data['max_allowed_packet\\.cnf']}"
[mysqld]
max_allowed_packet = 32M
```
### Using Secrets and ConfigMaps
Secrets and ConfigMaps can be mounted as environment variables or as files within a container. For the MariaDB container, you will need to mount the Secrets as environment variables and the ConfigMap as a file. First, though, you need to write a Deployment for MariaDB so that you have something to work with. Create a file named **mariadb-deployment.yaml** with the following:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mariadb
  name: mariadb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: docker.io/mariadb:10.4
          ports:
            - containerPort: 3306
              protocol: TCP
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mariadb-volume-1
      volumes:
        - emptyDir: {}
          name: mariadb-volume-1
```
This is a bare-bones Kubernetes Deployment of the official MariaDB 10.4 image from Docker Hub. Now, add your Secrets and ConfigMap.
#### Add the Secrets to the Deployment as environment variables
You have two Secrets that need to be added to the Deployment:
1. **mariadb-root-password** (with one key/value pair)
2. **mariadb-user-creds** (with two key/value pairs)
For the **mariadb-root-password** Secret, specify the Secret and the key you want by adding an **env** list/array to the container spec in the Deployment and setting the environment variable value to the value of the key in your Secret. In this case, the list contains only a single entry, for the variable **MYSQL_ROOT_PASSWORD**.
```
env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mariadb-root-password
        key: password
```
Note that the name of the object is the name of the environment variable that is added to the container. The **valueFrom** field defines **secretKeyRef** as the source from which the environment variable will be set; i.e., it will use the value from the **password** key in the **mariadb-root-password** Secret you set earlier.
Add this section to the definition for the **mariadb** container in the **mariadb-deployment.yaml** file. It should look something like this:
```
spec:
  containers:
    - name: mariadb
      image: docker.io/mariadb:10.4
      env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb-root-password
              key: password
      ports:
        - containerPort: 3306
          protocol: TCP
      volumeMounts:
        - mountPath: /var/lib/mysql
          name: mariadb-volume-1
```
In this way, you have explicitly set the variable to the value of a specific key from your Secret. This method can also be used with ConfigMaps by using **configMapKeyRef** instead of **secretKeyRef**.
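If you want to check the exact fields Kubernetes expects for that ConfigMap-based variant, **kubectl explain** can print the schema for it. This is an optional check rather than a step in the walkthrough:

```
# Show the fields of configMapKeyRef for container environment variables
$ kubectl explain deployment.spec.template.spec.containers.env.valueFrom.configMapKeyRef
```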
You can also set environment variables from _all_ key/value pairs in a Secret or ConfigMap to automatically use the key name as the environment variable name and the key's value as the environment variable's value. By using **envFrom** rather than **env** in the container spec, you can set the **MYSQL_USER** and **MYSQL_PASSWORD** from the **mariadb-user-creds** Secret you created earlier, all in one go:
```
envFrom:
  - secretRef:
      name: mariadb-user-creds
```
**envFrom** is a list of sources for Kubernetes to take environment variables. Use **secretRef** again, this time to specify **mariadb-user-creds** as the source of the environment variables. That's it! All the keys and values in the Secret will be added as environment variables in the container.
The container spec should now look like this:
```
spec:
  containers:
    - name: mariadb
      image: docker.io/mariadb:10.4
      env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb-root-password
              key: password
      envFrom:
        - secretRef:
            name: mariadb-user-creds
      ports:
        - containerPort: 3306
          protocol: TCP
      volumeMounts:
        - mountPath: /var/lib/mysql
          name: mariadb-volume-1
```
_Note:_ You could have just added the **mariadb-root-password** Secret to the **envFrom** list and let it be parsed as well, as long as the **password** key was named **MYSQL_ROOT_PASSWORD** instead. There is no way to manually specify the environment variable name with **envFrom** as with **env**.
#### Add the max_allowed_packet.cnf file to the Deployment as a volumeMount
As mentioned, both **env** and **envFrom** can be used to share ConfigMap key/value pairs with a container as well. However, in the case of the **mariadb-config** ConfigMap, your entire file is stored as the value to your key, and the file needs to exist in the container's filesystem for MariaDB to be able to use it. Luckily, both Secrets and ConfigMaps can be the source of Kubernetes "volumes" and mounted into the containers instead of using a filesystem or block device as the volume to be mounted.
The **mariadb-deployment.yaml** already has a volume and volumeMount specified, an **emptyDir** (effectively a temporary or ephemeral) volume mounted to **/var/lib/mysql** to store the MariaDB data:
```
<...>
volumeMounts:
  - mountPath: /var/lib/mysql
    name: mariadb-volume-1
<...>
volumes:
  - emptyDir: {}
    name: mariadb-volume-1
<...>
```
_Note:_ This is not a production configuration. When the Pod restarts, the data in the **emptyDir** volume is lost. This is primarily used for development or when the contents of the volume don't need to be persistent.
You can add your ConfigMap as a source by adding it to the volume list and then adding a volumeMount for it to the container definition:
```
<...>
volumeMounts:
  - mountPath: /var/lib/mysql
    name: mariadb-volume-1
  - mountPath: /etc/mysql/conf.d
    name: mariadb-config-volume
<...>
volumes:
  - emptyDir: {}
    name: mariadb-volume-1
  - configMap:
      name: mariadb-config
      items:
        - key: max_allowed_packet.cnf
          path: max_allowed_packet.cnf
    name: mariadb-config-volume
<...>
```
The **volumeMount** is pretty self-explanatory—create a volume mount for the **mariadb-config-volume** (specified in the **volumes** list below it) to the path **/etc/mysql/conf.d**.
Then, in the **volumes** list, **configMap** tells Kubernetes to use the **mariadb-config** ConfigMap, taking the contents of the key **max_allowed_packet.cnf** and mounting it to the path **max_allowed_packet.cnf**. The name of the volume is **mariadb-config-volume**, which was referenced in the **volumeMounts** above.
_Note:_ The **path** from the **configMap** is the name of a file that will contain the contents of the key's value. In this case, your key was a file name, too, but it doesn't have to be. Note also that **items** is a list, so multiple keys can be referenced and their values mounted as files. These files will all be created in the **mountPath** of the **volumeMount** specified above: **/etc/mysql/conf.d**.
### Create a MariaDB instance from the Deployment
At this point, you should have enough to create a MariaDB instance. You have two Secrets, one holding the **MYSQL_ROOT_PASSWORD** and another storing the **MYSQL_USER** and **MYSQL_PASSWORD** environment variables to be added to the container. You also have a ConfigMap holding the contents of a MySQL config file that overrides the **max_allowed_packet** value from its default setting.
You also have a **mariadb-deployment.yaml** file that describes a Kubernetes deployment of a Pod with a MariaDB container and adds the Secrets as environment variables and the ConfigMap as a volume-mounted file in the container. It should look like this:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mariadb
  name: mariadb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - image: docker.io/mariadb:10.4
          name: mariadb
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-root-password
                  key: password
          envFrom:
            - secretRef:
                name: mariadb-user-creds
          ports:
            - containerPort: 3306
              protocol: TCP
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mariadb-volume-1
            - mountPath: /etc/mysql/conf.d
              name: mariadb-config-volume
      volumes:
        - emptyDir: {}
          name: mariadb-volume-1
        - configMap:
            name: mariadb-config
            items:
              - key: max_allowed_packet.cnf
                path: max_allowed_packet.cnf
          name: mariadb-config-volume
```
#### Create the MariaDB instance
Create a new MariaDB instance from the YAML file with the **kubectl create** command:
```
$ kubectl create -f mariadb-deployment.yaml
deployment.apps/mariadb-deployment created
```
Once the deployment has been created, use the **kubectl get** command to view the running MariaDB pod:
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mariadb-deployment-5465c6655c-7jfqm 1/1 Running 0 3m
```
Make a note of the Pod name (in this example, it's **mariadb-deployment-5465c6655c-7jfqm** ). Note that the Pod name will differ from this example.
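Because the generated suffix changes whenever the Deployment creates a new Pod, it can be handy to look the name up by label instead of copying it by hand. This is an optional helper, not part of the original steps:

```
# Grab the current Pod name using the app=mariadb label from the Deployment
$ POD=$(kubectl get pods -l app=mariadb -o jsonpath='{.items[0].metadata.name}')
$ echo $POD
mariadb-deployment-5465c6655c-7jfqm
```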
#### Verify the instance is using the Secrets and ConfigMap
Use the **kubectl exec** command (with your Pod name) to validate that the Secrets and ConfigMaps are in use. For example, check that the environment variables are exposed in the container:
```
$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm env |grep MYSQL
MYSQL_PASSWORD=kube-still-rocks
MYSQL_USER=kubeuser
MYSQL_ROOT_PASSWORD=KubernetesRocks!
```
Success! All three environment variables—the one using the **env** setup to specify the Secret, and two using **envFrom** to mount all the values from the Secret—are available in the container for MariaDB to use.
Spot check that the **max_allowed_packet.cnf** file was created in **/etc/mysql/conf.d** and that it contains the expected content:
```
$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm ls /etc/mysql/conf.d
max_allowed_packet.cnf
$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm cat /etc/mysql/conf.d/max_allowed_packet.cnf
[mysqld]
max_allowed_packet = 32M
```
Finally, validate that MariaDB used the environment variable to set the root user password and read the **max_allowed_packet.cnf** file to set the **max_allowed_packet** configuration variable. Use the **kubectl exec** command again, this time to get a shell inside the running container and use it to run some **mysql** commands:
```
$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm /bin/sh
# Check that the root password was set correctly
$ mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e 'show databases;'
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
# Check that the max_allowed_packet.cnf was parsed
$ mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
+--------------------+----------+
| Variable_name | Value |
+--------------------+----------+
| max_allowed_packet | 33554432 |
+--------------------+----------+
```
### Advantages of Secrets and ConfigMaps
This exercise explained how to create Kubernetes Secrets and ConfigMaps and how to use those Secrets and ConfigMaps by adding them as environment variables or files inside of a running container instance. This makes it easy to keep the configuration of individual instances of containers separate from the container image. By separating the configuration data, overhead is reduced to maintaining only a single image for a specific type of instance while retaining the flexibility to create instances with a wide variety of configurations.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps
作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes.png?itok=PqDGb6W7 (Kubernetes)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://hub.docker.com/_/mariadb
[4]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
[5]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[6]: https://www.vaultproject.io/

View File

@ -0,0 +1,177 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 Easy Ways To Free Up Space (Remove Unwanted or Junk Files) on Ubuntu)
[#]: via: (https://www.2daygeek.com/linux-remove-delete-unwanted-junk-files-free-up-space-ubuntu-mint-debian/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
5 Easy Ways To Free Up Space (Remove Unwanted or Junk Files) on Ubuntu
======
Most of us perform this kind of cleanup whenever we are running out of disk space on a Linux system.
It should be done regularly, to make room for installing new applications and dealing with other files.
Housekeeping is one of the routine tasks of a Linux administrator, and it keeps disk utilization under the threshold.
There are several ways to clean up system space.
There is little need to clean up your system when you have terabytes of storage capacity.
But if you have limited space, freeing up disk space becomes a necessity.
In this article, I'll show you some of the easiest ways to clean up your Ubuntu system and reclaim space.
### How To Check Free Space On Ubuntu Systems?
Use **[df Command][1]** to check current disk utilization on your system.
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 975M 0 975M 0% /dev
tmpfs 200M 1.7M 198M 1% /run
/dev/sda1 30G 16G 13G 55% /
tmpfs 997M 0 997M 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 997M 0 997M 0% /sys/fs/cgroup
```
GUI users can use “Disk Usage Analyzer tool” to view current usage.
[![][2]![][2]][3]
### 1) Remove The Packages That Are No Longer Required
The following command removes dependency libraries and packages that are no longer required by the system.
These packages were installed automatically to satisfy the dependencies of an installed package.
It also removes old Linux kernels that were installed on the system.
It removes orphaned packages that are no longer needed, but it does not purge their configuration files.
```
$ sudo apt-get autoremove
[sudo] password for daygeek:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
apache2-bin apache2-data apache2-utils galera-3 libaio1 libapr1 libaprutil1
libaprutil1-dbd-sqlite3 libaprutil1-ldap libconfig-inifiles-perl libdbd-mysql-perl
libdbi-perl libjemalloc1 liblua5.2-0 libmysqlclient20 libopts25
libterm-readkey-perl mariadb-client-10.1 mariadb-client-core-10.1 mariadb-common
mariadb-server-10.1 mariadb-server-core-10.1 mysql-common sntp socat
0 upgraded, 0 newly installed, 25 to remove and 23 not upgraded.
After this operation, 189 MB disk space will be freed.
Do you want to continue? [Y/n]
```
To purge them (remove their configuration files as well), add the `--purge` option to the same command.
```
$ sudo apt-get autoremove --purge
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
apache2-bin* apache2-data* apache2-utils* galera-3* libaio1* libapr1* libaprutil1*
libaprutil1-dbd-sqlite3* libaprutil1-ldap* libconfig-inifiles-perl*
libdbd-mysql-perl* libdbi-perl* libjemalloc1* liblua5.2-0* libmysqlclient20*
libopts25* libterm-readkey-perl* mariadb-client-10.1* mariadb-client-core-10.1*
mariadb-common* mariadb-server-10.1* mariadb-server-core-10.1* mysql-common* sntp*
socat*
0 upgraded, 0 newly installed, 25 to remove and 23 not upgraded.
After this operation, 189 MB disk space will be freed.
Do you want to continue? [Y/n]
```
### 2) Empty The Trash Can
There is a good chance that a large amount of useless data is sitting in your trash can.
It takes up system space. Emptying it is one of the best ways to get some free space back.
To clean it up, simply use the file manager to empty your trash can.
[![][2]![][2]][4]
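If you prefer the terminal, the trash contents normally live under **~/.local/share/Trash** (the standard freedesktop.org location; the exact path may vary with your distribution and file manager), and you can clear them from there as well:

```
# Empty the trash from the command line; double-check the path before using rm -rf
$ rm -rf ~/.local/share/Trash/files/* ~/.local/share/Trash/info/*
```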
### 3) Clean up the APT cache
Ubuntu uses **[APT Command][5]** (Advanced Package Tool) for package management tasks such as installing, removing and searching for packages.
By default, every Linux operating system keeps a cache of downloaded and installed packages in its respective directory.
Ubuntu does the same: it keeps every update it downloads and installs in a cache on your disk.
Ubuntu keeps the cached DEB packages in the /var/cache/apt/archives directory.
Over time, this cache can quickly grow and take up a lot of space on your system.
Run the following command to check the current size of the APT cache.
```
$ sudo du -sh /var/cache/apt
147M /var/cache/apt
```
The autoclean command removes only obsolete DEB packages, i.e., those that can no longer be downloaded from the repositories. In other words, it cleans less than the clean command below.
```
$ sudo apt-get autoclean
```
The clean command removes all package files kept in the APT cache.
```
$ sudo apt-get clean
```
### 4) Uninstall the unused applications
Review the packages and games installed on your system and remove the ones you rarely use.
This can be easily done via “Ubuntu Software Center”.
[![][2]![][2]][6]
### 5) Clean up the thumbnail cache
The cache folder is a place where programs store data they may need again. It is kept for speed, but it is not essential, since it can be regenerated or downloaded again.
If it is really filling up your hard drive, you can delete its contents without worrying.
Run the following command to check the current size of the thumbnail cache.
```
$ du -sh ~/.cache/thumbnails/
412K /home/daygeek/.cache/thumbnails/
```
Run the following command to delete them permanently from your system.
```
$ rm -rf ~/.cache/thumbnails/*
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/linux-remove-delete-unwanted-junk-files-free-up-space-ubuntu-mint-debian/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/how-to-check-disk-space-usage-using-df-command/
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: https://www.2daygeek.com/wp-content/uploads/2019/06/remove-delete-Unwanted-Junk-Files-free-up-space-ubuntu-mint-debian-1.jpg
[4]: https://www.2daygeek.com/wp-content/uploads/2019/06/remove-delete-Unwanted-Junk-Files-free-up-space-ubuntu-mint-debian-2.jpg
[5]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[6]: https://www.2daygeek.com/wp-content/uploads/2019/06/remove-delete-Unwanted-Junk-Files-free-up-space-ubuntu-mint-debian-3.jpg

View File

@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 EOS.IO Is Building Infrastructure For Developing DApps [Part 13])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-eos-io-is-building-infrastructure-for-developing-dapps/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)
Blockchain 2.0 EOS.IO Is Building Infrastructure For Developing DApps [Part 13]
======
![Building infrastructure for Developing DApps][1]
When a blockchain startup makes over **$4 billion** through an ICO without having a product or service to show for it, that is newsworthy. It becomes clear at this point that people invested these billions on this project because it seemed to promise a lot. This post will seek to demystify this mystical and seemingly powerful platform.
**EOS.IO** is a [**blockchain**][2] platform that aims to develop standardized infrastructure, including an application protocol and operating system, for developing DApps ([**distributed applications**][3]). **Block.one**, the lead developer and investor in the project, envisions **EOS.IO as the world's first distributed operating system**, providing a development environment for decentralized applications. The system is meant to mirror a real computer by simulating hardware such as CPUs, GPUs, and even RAM, apart from the obvious storage solutions courtesy of the blockchain database system.
### Who is block.one?
Block.one is the company behind EOS.IO. They developed the platform from the ground up based on a white paper published on GitHub. Block.one also spearheaded the yearlong “continuous” ICO that eventually made them a whopping $4 billion. They have one of the best teams of backers and advisors any company in the blockchain space can hope for with partnerships from **Bitmain** , **Louis Bacon** and **Alan Howard** among others. Not to mention **Peter Thiel** being one of the lead investors in the company. The expectations from their purported platform the EOS.IO and their crypto VC fund **EOS.VC** are high indeed.
### What is EOS.IO?
It is difficult to arrive at a short description for EOS.IO. The platform aims to position itself as a worldwide computer with virtually simulated resources, creating a virtual environment for distributed applications to be built and run. The team behind EOS.IO aims to achieve the following, directly citing [**Ethereum**][4] as the competition:
* Increase transaction throughput to millions per second,
* Reduce transaction costs to insignificant sums or remove them altogether.
Though EOS.IO is not anywhere near solving any of these problems, the platform has a few capabilities that make it noteworthy as an alternative to Ethereum for DApp enthusiasts and developers.
1. **Scalability**: EOS.IO uses a different consensus algorithm for handling blocks called **DPoS**. We've described it briefly below. The DPoS system basically allows the network to handle far more requests at better speeds than Ethereum is capable of with its **PoW** algorithm. The claim is that because they'll be able to handle such massive throughput, they'll be able to offer transactions at insignificant or even zero charges if need be.
2. **Governance capabilities** : The consensus algorithm allows EOS.IO to dynamically understand malicious user (or node) behaviour to penalize or deactivate the user. The elected delegate feature of the delegated proof of stake system also ensures faster amendments to the rules that govern the network and its users.
3. **Parallel processing**: Touted as another major feature. This will basically allow programs on the EOS.IO blockchain to utilize multiple computers, processors, or even computing resources such as GPUs to process large chunks of data and blocks in parallel. This is not yet available in a rollout-ready form. (This, however, is not a unique feature of EOS.IO; [**Hyperledger Sawtooth**][5] and Burrow, for instance, support the same at the moment.)
4. **Self-sufficiency** : The system has a built-in grievance system along with well defined incentive and penal systems for providing feedback for acceptable and non-acceptable behaviour. This means that the platform has a governance system without actually having a central governing body.
All, or at least most, of the selling points of the system are based on the consensus algorithm it follows, DPoS. We explore it in more detail below.
### What is the delegated Proof of Stake (DPoS) consensus algorithm?
As far as blockchains are concerned, consensus algorithms are what give them the strength and the selling point they need. However, as a general rule of thumb, as the “openness” and immutability of the ledger increase, so does the computational power required to run it. For instance, if a blockchain intends to be secure against intrusion, safe and immutable with respect to data, while being accessible to a lot of users, it will use up a lot of computing power in creating and maintaining itself. Think of it as a number lock: a 6-digit PIN code is safer than a 3-digit PIN code, but the latter is easier and faster to open. Now consider a million of these locks with limited manpower to open them, and you get the scale at which blockchains operate and how much these consensus algorithms matter.
In fact, this is a major area where competing platforms differ from each other. Hyperledger Sawtooth uses a proof of elapsed time algorithm (PoET), while Ethereum uses a slightly modified proof of work (PoW) algorithm. Each of these has its own pros and cons, which we will cover in a detailed post later on. For now, note that EOS.IO uses a delegated proof of stake mechanism for attesting and validating blocks. This has the following implications for users.
Only one node can actively change the status of data written on the blockchain. In the case of a DPoS based system, this validator node is selected as part of a delegation by all the token holders of the blockchain. Every token holder gets to vote and have a say in who should be a validator. The weight the vote carries is usually proportional to the number of tokens the user carries. This is seen as a democratic way to ensure centralized accountability in terms of running the network. Furthermore, the validator is given additional monetary incentives to keep the network running smoothly and without friction. In case a validator or delegate member who is elected appears to be malicious, the system automatically votes out the said node member.
The DPoS system is efficient, as it requires fewer computing resources to cast a vote and select a leader to validate. Further, it incentivizes good behaviour and penalizes bad behaviour, leading to self-correction and maintenance of the blockchain. **The average transaction time for PoW vs DPoS is 10 minutes vs 10 seconds**. The downsides to this paradigm are centralized operations, weighted voting systems, lack of redundancy, and possible malicious behaviour from the validator.
To understand the difference between PoW and DPoS, imagine this: let's say your network has 100 participants, of which 10 are capable of handling validator duties, and you need to choose one of them. In PoW, you give each of them a tough problem to solve to find out who is the fastest and smartest; you give the validator position to the winner and reward them for it. In the DPoS system, the rest of the members vote for the person they think should hold the position, a simple choice based on past performance data. The node with the most votes wins, and if the winner tries to do something fishy, the same electorate votes them out for the next transaction.
### So does Ethereum lose out?
While EOS.IO has miles to go before it even steps into the same ballpark as Ethereum with respect to the market cap and user base, EOS.IO targets specific shortfalls with Ethereum and solves them. We conclude this post by summarising some findings based on a 2017 [**paper**][6] written and published by **Ian Grigg**.
1. The consensus algorithm used in the Ethereum (proof of work) platform uses far more computing resources and time to process transactions. This is true even for small block sizes. This limits its scaling potential and throughput. A meagre 15 transactions per second globally is no match for the over 2000 that payments network Visa manages. If the platform is to be adopted on a global scale based on a large scale roll out this is not acceptable.
2. The reliance on proprietary languages, tool kits and protocols (including **Solidity** for instance) limits developer capability. Interoperability between platforms is also severely hurt due to this fact.
3. This is rather subjective, however, the fact that Ethereum foundation refuses to acknowledge even the need for governance on the platform instead choosing to intervene on an ad-hoc manner when things turn sour on the network is not seen by many industry watchers as a sustainable model to be emulated in the future.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-eos-io-is-building-infrastructure-for-developing-dapps/
作者:[editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Developing-DApps-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[3]: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/
[4]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
[5]: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-sawtooth/
[6]: http://iang.org/papers/EOS_An_Introduction.pdf

View File

@ -0,0 +1,157 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Expand And Unexpand Commands Tutorial With Examples)
[#]: via: (https://www.ostechnix.com/expand-and-unexpand-commands-tutorial-with-examples/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Expand And Unexpand Commands Tutorial With Examples
======
![Expand And Unexpand Commands Explained][1]
This guide explains two Linux commands namely **Expand** and **Unexpand** with practical examples. For those wondering, the Expand and Unexpand commands are used to replace TAB characters in files with SPACE characters and vice versa. There is also a command called “Expand” in MS-DOS, which is used to expand a compressed file. But the Linux Expand command simply converts the tabs to spaces. These two commands are part of **GNU coreutils** and written by **David MacKenzie**.
For the demonstration purpose, I will be using a text file named “ostechnix.txt” throughout this guide. All commands given below are tested in Arch Linux.
### Expand command examples
Like I already mentioned, the Expand command replaces TAB characters in a file with SPACE characters.
Now, let us convert tabs to spaces in the ostechnix.txt file and write the result to standard output using command:
```
$ expand ostechnix.txt
```
If you don't want to display the result on standard output, just redirect it to another file like below.
```
$ expand ostechnix.txt>output.txt
```
We can also convert tabs to spaces, reading from standard input. To do so, just run “expand” command without mentioning the source file name:
```
$ expand
```
Just type the text and hit ENTER to convert tabs to spaces. Press **CTRL+C** to quit.
If you do not want to convert tabs after non blanks, use **-i** flag like below.
```
$ expand -i ostechnix.txt
```
We can also have tabs a certain number of characters apart, not 8 (the default value):
```
$ expand -t 5 ostechnix.txt
```
You can even mention multiple tab positions with comma separated like below.
```
$ expand -t 5,10,15 ostechnix.txt
```
Or,
```
$ expand -t "5 10 15" ostechnix.txt
```
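If you want to see what actually changed, **cat -A** is handy: it marks TAB characters as **^I** and line ends as **$**. A quick check with a small throwaway file might look like this (the sample file and its contents are only for illustration):

```
# Create a sample file containing tabs and compare it before and after expand
$ printf 'one\ttwo\tthree\n' > sample.txt
$ cat -A sample.txt
one^Itwo^Ithree$
$ expand sample.txt | cat -A
one     two     three$
```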
For more details, refer man pages.
```
$ man expand
```
### Unexpand Command Examples
As you may have already guessed, the **Unexpand** command will do the opposite of the Expand command, i.e., it will convert SPACE characters to TAB characters. Let me show you a few examples to learn how to use the Unexpand command.
To convert blanks (spaces, of course) in a file to tabs and write the output to stdout, do:
```
$ unexpand ostechnix.txt
```
If you want to write the output in a file instead of just displaying it to stdout, use this command:
```
$ unexpand ostechnix.txt>output.txt
```
Convert blanks to tabs, reading from standard input:
```
$ unexpand
```
By default, Unexpand command will only convert the initial blanks. If you want to convert all blanks, instead of just initial blanks, use **-a** flag:
```
$ unexpand -a ostechnix.txt
```
To convert only leading sequences of blanks (Please note that it overrides **-a** ):
```
$ unexpand --first-only ostechnix.txt
```
Have tabs a certain number of characters apart, not **8** (enables **-a** ):
```
$ unexpand -t 5 ostechnix.txt
```
Similarly, we can mention multiple tab positions with comma separated like below.
```
$ unexpand -t 5,10,15 ostechnix.txt
```
Or,
```
$ unexpand -t "5 10 15" ostechnix.txt
```
For more details, refer man pages.
```
$ man unexpand
```
* * *
**Suggested read:**
* [**The Fold Command Tutorial With Examples For Beginners**][2]
* * *
When you are working on a large number of files, the Expand and Unexpand commands can be very helpful for replacing unwanted TAB characters with SPACE characters and vice versa.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/expand-and-unexpand-commands-tutorial-with-examples/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Expand-And-Unexpand-Commands-720x340.png
[2]: https://www.ostechnix.com/fold-command-tutorial-examples-beginners/

View File

@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Graviton: A Minimalist Open Source Code Editor)
[#]: via: (https://itsfoss.com/graviton-code-editor/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Graviton: A Minimalist Open Source Code Editor
======
[Graviton][1] is a free and open source, cross-platform code editor in development. The sixteen-year-old developer, Marc Espin, emphasizes that it is a minimalist code editor. I am not sure about that, but it does have a clean user interface like other [modern code editors like Atom][2].
![Graviton Code Editor Interface][3]
The developer also calls it a lightweight code editor despite the fact that Graviton is based on [Electron][4].
Graviton comes with features you expect in any standard code editors like syntax highlighting, auto-completion etc. Since Graviton is still in the beta phase of development, more features will be added to it in the future releases.
![Graviton Code Editor with Syntax Highlighting][5]
### Features of the Graviton code editor
Some of the main highlights of Graviton features are:
* Syntax highlighting for a number of programming languages using [CodeMirrorJS][6]
* Autocomplete
* Support for plugins and themes.
* Available in English, Spanish and a few other European languages.
* Available for Linux, Windows and macOS.
I had a quick look at Graviton and it might not be as feature-rich as [VS Code][7] or [Brackets][8], but for some simple code editing, it's not a bad tool.
### Download and install Graviton
![Graviton Code Editor][9]
As mentioned earlier, Graviton is a cross-platform code editor available for Linux, Windows and macOS. It is still in beta, which means more features will be added in the future and you may encounter some bugs.
You can find the latest version of Graviton on its release page. Debian and [Ubuntu users can install it from the .deb file][10]. An [AppImage][11] is provided so that it can be used on other distributions. DMG and EXE files are also available for macOS and Windows respectively.
[Download Graviton][12]
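For the AppImage, the usual routine applies. The file name below is only an example of what the downloaded release might be called; use the name of the file you actually downloaded:

```
# Make the AppImage executable and launch it
$ chmod +x Graviton-*.AppImage
$ ./Graviton-*.AppImage
```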
If you are interested, you can find the source code of Graviton on its GitHub repository:
[Graviton Source Code on GitHub][13]
If you decided to use Graviton and find some issues, please open a bug report [here][14]. If you use GitHub, you may want to star the Graviton project. This boosts the morale of the developer as he would know that more users are appreciating his efforts.
**Suggested read:** [Get Windows Style Sticky Notes For Ubuntu with Indicator Stickynotes][15]
I believe you know [how to install a software from source code][16] if you are taking that path.
**In the end…**
Sometimes, simplicity itself becomes a feature, and Graviton's focus on being minimalist could help it carve out a niche for itself in the already crowded segment of code editors.
At Its FOSS, we try to highlight open source software. If you know some interesting open source software that you would like more people to know about, [do send us a note][17].
--------------------------------------------------------------------------------
via: https://itsfoss.com/graviton-code-editor/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://graviton.ml/
[2]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface.jpg?resize=800%2C571&ssl=1
[4]: https://electronjs.org/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface-2.jpg?resize=800%2C522&ssl=1
[6]: https://codemirror.net/
[7]: https://itsfoss.com/install-visual-studio-code-ubuntu/
[8]: https://itsfoss.com/install-brackets-ubuntu/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-800x473.jpg?resize=800%2C473&ssl=1
[10]: https://itsfoss.com/install-deb-files-ubuntu/
[11]: https://itsfoss.com/use-appimage-linux/
[12]: https://github.com/Graviton-Code-Editor/Graviton-App/releases
[13]: https://github.com/Graviton-Code-Editor/Graviton-App
[14]: https://github.com/Graviton-Code-Editor/Graviton-App/issues
[15]: https://itsfoss.com/indicator-stickynotes-windows-like-sticky-note-app-for-ubuntu/
[16]: https://itsfoss.com/install-software-from-source-code/
[17]: https://itsfoss.com/contact-us/

View File

@ -0,0 +1,298 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Screen Command Examples To Manage Multiple Terminal Sessions)
[#]: via: (https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Screen Command Examples To Manage Multiple Terminal Sessions
======
![Screen Command Examples To Manage Multiple Terminal Sessions][1]
**GNU Screen** is a terminal multiplexer (window manager). As the name says, Screen multiplexes the physical terminal between multiple interactive shells, so we can perform different tasks in each terminal session. All Screen sessions run their programs completely independently, so a program or process running inside a screen session will keep running even if the session is accidentally closed or disconnected. For instance, when [**upgrading Ubuntu**][2] server via SSH, the Screen command will keep the upgrade process running in case your SSH session is terminated for any reason.
GNU Screen allows us to easily create multiple screen sessions, switch between sessions, copy text between them, and attach to or detach from a session at any time. It is one of the important command-line tools every Linux admin should learn and use wherever necessary. In this brief guide, we will see the basic usage of the Screen command with examples in Linux.
### Installing GNU Screen
GNU Screen is available in the default repositories of most Linux operating systems.
To install GNU Screen on Arch Linux, run:
```
$ sudo pacman -S screen
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt-get install screen
```
On Fedora:
```
$ sudo dnf install screen
```
On RHEL, CentOS:
```
$ sudo yum install screen
```
On SUSE/openSUSE:
```
$ sudo zypper install screen
```
Let us go ahead and see some screen command examples.
### Screen Command Examples To Manage Multiple Terminal Sessions
The default prefix shortcut to all commands in Screen is **Ctrl+a**. You need to use this shortcut a lot when using Screen. So, just remember this keyboard shortcut.
##### Create new Screen session
Let us create a new Screen session and attach to it. To do so, type the following command in terminal:
```
screen
```
Now, run any program or process inside this session. The running process or program will keep running even if you're disconnected from this session.
##### Detach from Screen sessions
To detach from inside a screen session, press **Ctrl+a** and **d**. You don't have to press both key combinations at the same time. First press **Ctrl+a** and then press **d**. After detaching from a session, you will see output something like below.
```
[detached from 29149.pts-0.sk]
```
Here, **29149** is the **screen ID** and **pts-0.sk** is the name of the screen session. You can attach, detach and kill Screen sessions using either screen ID or name of the respective session.
##### Create a named session
You can also create a screen session with any custom name of your choice other than the default username like below.
```
screen -S ostechnix
```
The above command will create a new screen session with name **“xxxxx.ostechnix”** and attach to it immediately. To detach from the current session, press **Ctrl+a** followed by **d**.
Naming screen sessions can be helpful when you want to find which processes are running in which sessions. For example, when setting up a LAMP stack inside a session, you can simply name it like below.
```
screen -S lampstack
```
##### Create detached sessions
Sometimes, you might want to create a session, but don't want to attach to it automatically. In such cases, run the following command to create a detached session named **“senthil”** :
```
screen -S senthil -d -m
```
Or, shortly:
```
screen -dmS senthil
```
The above command will create a session called “senthil”, but won't attach to it.
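Detached sessions are also a convenient way to kick off a long-running command in a single step. The rsync job below is only an illustration of the pattern:

```
# Start a detached session named "backup" that runs a long rsync job
screen -dmS backup rsync -a /home/ /mnt/backup/
# Re-attach later to check on it
screen -r backup
```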
##### List Screen sessions
To list all running sessions (attached or detached), run:
```
screen -ls
```
Sample output:
```
There are screens on:
29700.senthil (Detached)
29415.ostechnix (Detached)
29149.pts-0.sk (Detached)
3 Sockets in /run/screens/S-sk.
```
As you can see, I have three running sessions and all are detached.
##### Attach to Screen sessions
If you want to attach to a session at any time, for example **29415.ostechnix** , simply run:
```
screen -r 29415.ostechnix
```
Or,
```
screen -r ostechnix
```
Or, just use the screen ID:
```
screen -r 29415
```
To verify if we are attached to the aforementioned session, simply list the open sessions and check.
```
screen -ls
```
Sample output:
```
There are screens on:
29700.senthil (Detached)
29415.ostechnix (Attached)
29149.pts-0.sk (Detached)
3 Sockets in /run/screens/S-sk.
```
As you see in the above output, we are currently attached to **29415.ostechnix** session. To exit from the current session, press ctrl+a, d.
##### Create nested sessions
When we run “screen” command, it will create a single session for us. We can, however, create nested sessions (a session inside a session).
First, create a new session or attach to an opened session. I am going to create a new session named “nested”.
```
screen -S nested
```
Now, press **Ctrl+a** and **c** inside the session to create another session. Just repeat this to create any number of nested Screen sessions. Each session will be assigned with a number. The number will start from **0**.
You can move to the next session by pressing **Ctrl+a n** and move to the previous one by pressing **Ctrl+a p**.
Here is the list of important Keyboard shortcuts to manage nested sessions.
* **Ctrl+a ”** List all sessions
* **Ctrl+a 0** Switch to session number 0
* **Ctrl+a n** Switch to next session
* **Ctrl+a p** Switch to the previous session
* **Ctrl+a S** Split current region horizontally into two regions
* **Ctrl+a |** Split current region vertically into two regions
* **Ctrl+a Q** Close all sessions except the current one
* **Ctrl+a X** Close the current session
* **Ctrl+a \** Kill all sessions and terminate Screen
* **Ctrl+a ?** Show keybindings. To quit this, press ENTER.
##### Lock sessions
Screen has an option to lock a screen session. To do so, press **Ctrl+a** and **x**. Enter your Linux password to lock the screen.
```
Screen used by sk <sk> on ubuntuserver.
Password:
```
##### Logging sessions
You might want to log everything when you're in a Screen session. To do so, just press **Ctrl+a** and **H**.
Alternatively, you can enable the logging when starting a new session using **-L** parameter.
```
screen -L
```
From now on, all activity inside the session will be recorded and stored in a file named **screenlog.x** in your $HOME directory. Here, **x** is a number.
You can view the contents of the log file using **cat** command or any text viewer applications.
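For example, assuming logging was enabled in the first window of the session, a quick look at the log could be:

```
# The numeric suffix corresponds to the window number inside the session
$ cat ~/screenlog.0
```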
![][3]
Log screen sessions
* * *
**Suggested read:**
* [**How To Record Everything You Do In Terminal**][4]
* * *
##### Kill Screen sessions
If a session is not required anymore, just kill it. To kill a detached session named “senthil”:
```
screen -r senthil -X quit
```
Or,
```
screen -X -S senthil quit
```
Or,
```
screen -X -S 29415 quit
```
If there are no open sessions, you will see the following output:
```
$ screen -ls
No Sockets found in /run/screens/S-sk.
```
For more details, refer man pages.
```
$ man screen
```
There is also a similar command line utility named “Tmux” which does the same job as GNU Screen. To know more about it, refer the following guide.
* [**Tmux Command Examples To Manage Multiple Terminal Sessions**][5]
**Resource:**
* [**GNU Screen home page**][6]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Screen-Command-Examples-720x340.jpg
[2]: https://www.ostechnix.com/how-to-upgrade-to-ubuntu-18-04-lts-desktop-and-server/
[3]: https://www.ostechnix.com/wp-content/uploads/2019/06/Log-screen-sessions.png
[4]: https://www.ostechnix.com/record-everything-terminal/
[5]: https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/
[6]: https://www.gnu.org/software/screen/

View File

@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Search Linux Applications On AppImage, Flathub And Snapcraft Platforms)
[#]: via: (https://www.ostechnix.com/search-linux-applications-on-appimage-flathub-and-snapcraft-platforms/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Search Linux Applications On AppImage, Flathub And Snapcraft Platforms
======
![Search Linux Applications On AppImage, Flathub And Snapcraft][1]
Linux is evolving day by day. In the past, developers had to build applications separately for different Linux distributions. Since there are several Linux variants, building apps for all distributions became a tedious and quite time-consuming task. Then some developers invented package converters and builders such as [**Checkinstall**][2], [**Debtap**][3] and [**Fpm**][4]. But they didn't completely solve the problem. All of these tools simply convert one package format to another. We still need to find and install the required dependencies the app needs to run.
Well, the time has changed. We have now universal Linux apps. Meaning we can install these applications on most Linux distributions. Be it Arch Linux, Debian, CentOS, Redhat, Ubuntu or any popular Linux distribution, the Universal apps will work just fine out of the box. These applications are packaged with all necessary libraries and dependencies in a single bundle. All we have to do is to download and run them on any Linux distributions of our choice. The popular universal app formats are **AppImages** , [**Flatpaks**][5] and [**Snaps**][6].
AppImages are created and maintained by **Simon Peter**. Many popular applications, like Gimp, Firefox and Krita, are available in this format directly on their download pages. Just download them, make them executable and run them in no time. You don't even need root permissions to run AppImages.
The developer of Flatpak is **Alexander Larsson** (a Red Hat employee). Flatpak apps are hosted in a central repository (store) called **"Flathub"**. If you're a developer, you are encouraged to build your apps in Flatpak format and distribute them to users via Flathub.
**Snaps** are created mainly for Ubuntu, by **Canonical**. However, developers of other Linux distributions have started contributing to the Snap packaging format, so Snaps work on other Linux distributions as well. Snaps can be downloaded either directly from an application's download page or from the **Snapcraft** store.
Many popular companies and developers have released their applications in AppImage, Flatpak and Snap formats. If you are ever looking for an app, just head over to the respective store, grab the application of your choice and run it regardless of the Linux distribution you use.
There is also a command-line universal app search tool called **"Chob"** that makes it easy to search for Linux applications on the AppImage, Flathub and Snapcraft platforms. It only searches for the given application and opens the official link in your default browser; it won't install anything. This guide will explain how to install Chob and use it to search for AppImages, Flatpaks and Snaps on Linux.
### Search Linux Applications On AppImage, Flathub And Snapcraft Platforms Using Chob
Download the latest Chob binary file from the [**releases page**][7]. As of writing this guide, the latest version was **0.3.5**.
```
$ wget https://github.com/MuhammedKpln/chob/releases/download/0.3.5/chob-linux
```
Make it executable:
```
$ chmod +x chob-linux
```
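Optionally, you can move the binary to a directory in your $PATH so it can be invoked simply as `chob`. This is just a common convention, not a step from the Chob documentation; the rest of this guide runs the binary from the current directory.
```
$ sudo mv chob-linux /usr/local/bin/chob
```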
Finally, search for the applications you want. For example, I am going to search for applications related to **Vim**.
```
$ ./chob-linux vim
```
Chob will search for the given application (and related ones) on the AppImage, Flathub and Snapcraft platforms and display the results.
![][8]
Search Linux applications Using Chob
Just choose the application you want by typing the appropriate number. The official link of the selected app will open in your default web browser, where you can read the details of the app.
![][9]
View Linux applications Details In Browser
For more details, have a look at the Chob official GitHub page given below.
**Resource:**
* [**Chob GitHub Repository**][10]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/search-linux-applications-on-appimage-flathub-and-snapcraft-platforms/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/chob-720x340.png
[2]: https://www.ostechnix.com/build-packages-source-using-checkinstall/
[3]: https://www.ostechnix.com/convert-deb-packages-arch-linux-packages/
[4]: https://www.ostechnix.com/build-linux-packages-multiple-platforms-easily/
[5]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/
[6]: https://www.ostechnix.com/introduction-ubuntus-snap-packages/
[7]: https://github.com/MuhammedKpln/chob/releases
[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Search-Linux-applications-Using-Chob.png
[9]: http://www.ostechnix.com/wp-content/uploads/2019/05/View-Linux-applications-Details.png
[10]: https://github.com/MuhammedKpln/chob

View File

@ -0,0 +1,296 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tmux Command Examples To Manage Multiple Terminal Sessions)
[#]: via: (https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Tmux Command Examples To Manage Multiple Terminal Sessions
======
![tmux command examples][1]
We've already learned to use [**GNU Screen**][2] to manage multiple Terminal sessions. Today, we will look at yet another well-known command-line utility named **"Tmux"** for managing Terminal sessions. Similar to GNU Screen, Tmux is a Terminal multiplexer that allows us to create a number of terminal sessions and run more than one program or process at the same time inside a single Terminal window. Tmux is a free, open source, cross-platform program that supports Linux, OpenBSD, FreeBSD, NetBSD and Mac OS X. In this guide, we will discuss the most commonly used Tmux commands in Linux.
### Installing Tmux in Linux
Tmux is available in the official repositories of most Linux distributions.
On Arch Linux and its variants, run the following command to install it.
```
$ sudo pacman -S tmux
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt-get install tmux
```
On Fedora:
```
$ sudo dnf install tmux
```
On RHEL and CentOS:
```
$ sudo yum install tmux
```
On SUSE/openSUSE:
```
$ sudo zypper install tmux
```
Well, we have just installed Tmux. Let us go ahead and see some examples to learn how to use Tmux.
### Tmux Command Examples To Manage Multiple Terminal Sessions
The default prefix shortcut to all commands in Tmux is **Ctrl+b**. Just remember this keyboard shortcut when using Tmux.
* * *
**Note:** The default prefix to all **Screen** commands is **Ctrl+a**.
* * *
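By the way, if you are used to Screen's **Ctrl+a** prefix, Tmux lets you remap its prefix in **~/.tmux.conf**. Here is a minimal, optional sketch; the rest of this guide assumes the default **Ctrl+b** prefix.
```
# ~/.tmux.conf - use Ctrl+a as the prefix instead of Ctrl+b
unbind C-b
set -g prefix C-a
bind C-a send-prefix
```
Reload the configuration with `tmux source-file ~/.tmux.conf` (or restart Tmux) for the change to take effect.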
##### Creating Tmux sessions
To create a new Tmux session and attach to it, run the following command from the Terminal:
```
tmux
```
Or,
```
tmux new
```
Once you are inside the Tmux session, you will see a **green bar at the bottom** as shown in the screenshot below.
![][3]
New Tmux session
This green bar makes it easy to verify whether you're inside a Tmux session or not.
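You can also check from the shell itself: Tmux sets the **TMUX** environment variable inside a session, so a quick test like the following tells you where you are.
```
$ [ -n "$TMUX" ] && echo "inside tmux" || echo "not inside tmux"
```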
##### Detaching from Tmux sessions
To detach from the current Tmux session, just press **Ctrl+b** and **d**. You don't need to press both keyboard shortcuts at the same time: first press "Ctrl+b", then press "d".
Once youre detached from a session, you will see an output something like below.
```
[detached (from session 0)]
```
##### Creating named sessions
If you use multiple sessions, you might get confused about which programs are running in which sessions. In such cases, you can just create named sessions. For example, if you wanted to perform some activities related to a web server in a session, just create the Tmux session with a custom name, for example **"webserver"** (or any name of your choice).
```
tmux new -s webserver
```
Here is the new named Tmux session.
![][4]
Tmux session with a custom name
As you can see in the above screenshot, the name of the Tmux session is **webserver**. This way you can easily identify which program is running in which session.
To detach, simply press **Ctrl+b** and **d**.
##### List Tmux sessions
To view the list of open Tmux sessions, run:
```
tmux ls
```
Sample output:
![][5]
List Tmux sessions
As you can see, I have two open Tmux sessions.
##### Creating detached sessions
Sometimes, you might want to simply create a session without attaching to it automatically.
To create a new detached session named **“ostechnix”** , run:
```
tmux new -s ostechnix -d
```
The above command will create a new Tmux session called “ostechnix”, but wont attach to it.
You can verify that the session was created using the "tmux ls" command:
![][6]
Create detached Tmux sessions
##### Attaching to Tmux sessions
You can attach to the last created session by running this command:
```
tmux attach
```
Or,
```
tmux a
```
If you want to attach to any specific named session, for example “ostechnix”, run:
```
tmux attach -t ostechnix
```
Or, for short:
```
tmux a -t ostechnix
```
##### Kill Tmux sessions
When you're done and no longer require a Tmux session, you can kill it at any time with the command:
```
tmux kill-session -t ostechnix
```
To kill the session you are currently attached to, press **Ctrl+b** and **x**, then hit "y" to confirm.
You can verify that the session is closed with the "tmux ls" command.
To kill the Tmux server along with all Tmux sessions, run:
```
tmux kill-server
```
Be careful! This will terminate all Tmux sessions without any warning, even if there are running jobs inside them.
When there are no running Tmux sessions, you will see the following output:
```
$ tmux ls
no server running on /tmp/tmux-1000/default
```
##### Split Tmux Session Windows
Tmux has an option to split a single Tmux session window into multiple smaller windows called **Tmux panes**. This way we can run different programs in each pane and interact with all of them simultaneously. Each pane can be resized, moved and closed without affecting the other panes. We can split a Tmux window either horizontally or vertically, or both at once.
**Split panes horizontally**
To split a pane horizontally, press **Ctrl+b** and **"** (double quotation mark).
![][7]
Split Tmux pane horizontally
Use the same key combination to split the panes further.
**Split panes vertically**
To split a pane vertically, press **Ctrl+b** and **%**.
![][8]
Split Tmux panes vertically
**Split panes horizontally and vertically**
We can also split a pane horizontally and vertically at the same time. Take a look at the following screenshot.
![][9]
Split Tmux panes
First, I did a horizontal split by pressing **Ctrl+b “** and then split the lower pane vertically by pressing **Ctrl+b %**.
As you can see in the above screenshot, I am running three different programs, one in each pane.
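If you prefer to create splits from the shell or a script, the same effect can be achieved with Tmux's **split-window** command. A brief sketch (note that the flag letters do not match the horizontal/vertical wording used above):
```
$ tmux split-window -v    # new pane opens below the current one
$ tmux split-window -h    # new pane opens to the right of the current one
```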
**Switch between panes**
To switch between panes, press **Ctrl+b** and **Arrow keys (Left, Right, Up, Down)**.
**Send commands to all panes**
In the previous example, we ran three different commands, one in each pane. However, it is also possible to send the same command to all panes at once.
To do so, press **Ctrl+b**, type the following command and hit ENTER:
```
:setw synchronize-panes
```
Now type any command in any pane. You will see that the same command is reflected in all panes.
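When you no longer want your input broadcast to every pane, you can switch synchronization off again from the same prompt:
```
:setw synchronize-panes off
```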
**Swap panes**
To swap (rotate) the panes, press **Ctrl+b** and **Ctrl+o**. Pressing **Ctrl+b** and **o** simply moves the cursor to the next pane.
**Show pane numbers**
Press **Ctrl+b** and **q** to show pane numbers.
**Kill panes**
To kill a pane, simply type **exit** and hit the ENTER key. Alternatively, press **Ctrl+b** and **x**. You will see a confirmation message; just press **"y"** to close the pane.
![][10]
Kill Tmux panes
At this stage, you should have a basic idea of Tmux and how to use it to manage multiple Terminal sessions. For more details, refer to the man pages.
```
$ man tmux
```
Both GNU Screen and Tmux utilities can be very helpful when managing servers remotely via SSH. Learn Screen and Tmux commands thoroughly to manage your remote servers like a pro.
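As a quick illustration of that workflow (the host and session names below are just placeholders):
```
$ ssh user@server          # connect to the remote server
$ tmux new -s work         # start a named session on the server
# ... the connection drops, or you detach with Ctrl+b d ...
$ ssh user@server
$ tmux attach -t work      # pick up exactly where you left off
```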
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-720x340.png
[2]: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/
[3]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-session.png
[4]: https://www.ostechnix.com/wp-content/uploads/2019/06/Named-Tmux-session.png
[5]: https://www.ostechnix.com/wp-content/uploads/2019/06/List-Tmux-sessions.png
[6]: https://www.ostechnix.com/wp-content/uploads/2019/06/Create-detached-sessions.png
[7]: https://www.ostechnix.com/wp-content/uploads/2019/06/Horizontal-split.png
[8]: https://www.ostechnix.com/wp-content/uploads/2019/06/Vertical-split.png
[9]: https://www.ostechnix.com/wp-content/uploads/2019/06/Split-Panes.png
[10]: https://www.ostechnix.com/wp-content/uploads/2019/06/Kill-panes.png

View File

@ -0,0 +1,80 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Try a new game on Free RPG Day)
[#]: via: (https://opensource.com/article/19/5/free-rpg-day)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/erez/users/seth)
Try a new game on Free RPG Day
======
Celebrate tabletop role-playing games and get free RPG materials at your
local game shop on June 15.
![plastic game pieces on a board][1]
Have you ever thought about trying Dungeons & Dragons but didn't know how to start? Did you play Traveller in your youth and have been thinking about returning to the hobby? Are you curious about role-playing games (RPGs) but not sure whether you want to play one? Are you completely new to the concept of tabletop gaming and have never heard of RPGs until now? It doesn't matter which of these profiles suits you, because [Free RPG Day][2] is for everyone!
The first Free RPG Day event happened in 2007 at hobby game stores all over the world. The idea was to bring new and exclusive RPG quickstart rules and adventures to both new and experienced gamers for $0. For one day, you could walk into your local game store and get a booklet containing simple, beginner-level rules for a tabletop RPG, which you could play with people there in the store or with friends back home. The booklet was yours to keep forever.
The event was such a smash hit that the tradition has continued ever since. This year, Free RPG Day is scheduled for Saturday, June 15.
### What's the catch?
Obviously, the idea behind Free RPG Day is to get you addicted to tabletop RPG gaming. Before you let instinctual cynicism kick in, consider that as addictions go, it's not too bad to fall in love with a game that encourages you to read books of rules and lore so you and your family and friends have an excuse to spend time together. Tabletop RPGs are a powerful, imaginative, and fun medium, and Free RPG Day is a great introduction.
![FreeRPG Day logo][3]
### Open gaming
Like so many other industries, the open source phenomenon has influenced tabletop gaming. Way back at the turn of the century, [Wizards of the Coast][4], purveyors of Magic: The Gathering and Dungeons & Dragons, decided to adopt open source methodology by developing the [Open Game License][5] (OGL). They used this license for editions 3 and 3.5 of the world's first RPG (Dungeons & Dragons). When they faltered years later for the 4th Edition, the publisher of _Dragon_ magazine forked the "code" of D&D 3.5, publishing its remix as the Pathfinder RPG, keeping innovation and a whole cottage industry of third-party game developers healthy. Recently, Wizards of the Coast returned to the OGL for D&D 5e.
The OGL allows developers to use, at the very least, a game's mechanics in a product of their own. You may or may not be allowed to use the names of custom monsters, weapons, kingdoms, or popular characters, but you can always use the rules and maths of an OGL game. In fact, the rules of an OGL game are often published for free as a [system reference document][6] (SRD) so, whether you purchase a copy of the rule book or not, you can learn how a game is played.
If you've never played a tabletop RPG before, it may seem strange that a game played with pen and paper can have a game engine, but computation is computation whether it's digital or analog. As a simplified example: suppose a game engine dictates that a player character has a number to represent its strength. When that player character fights a giant twice its strength, there's real tension when a player rolls dice to add to her character's strength-based attack. Without a very good roll, her strength won't match the giant's. Knowing this, a third-party or independent developer can design a monster for this game engine with an understanding of the effects that dice rolls can have on a player's ability score. This means they can base their math on the game engine's precedence. They can design a slew of monsters to slay, with meaningful abilities and skills in the context of the game engine, and they can advertise compatibility with that engine.
Furthermore, the OGL allows a publisher to define _product identity_ for their material. Product identity can be anything from the trade dress of the publication (graphical elements and layout), logos, terminology, lore, proper names, and so on. Anything defined as product identity may _not_ be reused without publisher consent. For example, suppose a publisher releases a book of weapons including a magical machete called Sigint, granting a +2 magical bonus to all of its attacks against zombies. This trait is explained by a story about how the machete was forged by a scientist with a latent zombie gene. However, the publication lists in section 1e of the OGL that all proper names of weapons are reserved as product identity. This means you can use the numbers (durability of the weapon, the damage it deals, the +2 magical bonus, and so on) and the lore associated with the machete (it was forged by a scientist with a latent zombie gene) in your own publication, but you cannot use the name of the weapon (Sigint).
The OGL is an extremely flexible license, so developers must read section 1e carefully. Some publishers reserve nothing but the layout of the publication itself, while others reserve everything but the numbers and the most generic of terms.
When the preeminent RPG franchise embraced open source, it sent waves through the industry that are still felt today. Third-party developers can create content for the 5e and Pathfinder systems. A whole website, [DungeonMastersGuild.com][7], featuring independent content for D&D 5e was created by Wizards of the Coast to promote independent publishing. Games like [Starfinder][8], [OpenD6][9], [Warrior, Rogue & Mage][10], [Swords & Wizardry][11], and many others have adopted the OGL. Other systems, like Brent Newhall's [Dungeon Delvers][12], [Fate][13], [Dungeon World][14], and many more are licensed under a [Creative Commons license][15].
### Get your RPG
Free RPG Day is a day for you to go to your local gaming store, play an RPG, and get materials for future RPG games you play with friends. Like a [Linux installfest][16] or [Software Freedom Day][17], the event is loosely defined. Each retailer may do Free RPG Day a little differently, each one running whatever game they choose. However, the free content donated by game publishers is the same each year. Obviously, the free stuff is subject to availability, but when you go to a Free RPG Day event, notice how many games are offered with an open license (if it's an OGL game, the OGL is printed in the back matter of the book). Any content for Pathfinder, Starfinder, and D&D is sure to have taken some advantage of the OGL. Content for many other systems use Creative Commons licenses. Some, like the resurrected [Dead Earth][18] RPG from the '90s, use the [GNU Free Documentation License][19].
There are plenty of gaming resources out there that are developed with open licenses. You may or may not need to care about the license of a game; after all, the license has no bearing upon whether you can play it with friends or not. But if you enjoy supporting [free culture][20] in more ways than just the software you run, try out a few OGL or Creative Commons games. If you're new to gaming entirely, try out a tabletop RPG at your local game store on Free RPG Day!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/free-rpg-day
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth/users/erez/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team-game-play-inclusive-diversity-collaboration.png?itok=8sUXV7W1 (plastic game pieces on a board)
[2]: https://www.freerpgday.com/
[3]: https://opensource.com/sites/default/files/uploads/freerpgday-logoblank.jpg (FreeRPG Day logo)
[4]: https://company.wizards.com/
[5]: http://www.opengamingfoundation.org/licenses.html
[6]: https://www.d20pfsrd.com/
[7]: https://www.dmsguild.com/
[8]: https://paizo.com/starfinder
[9]: https://ogc.rpglibrary.org/index.php?title=OpenD6
[10]: http://www.stargazergames.eu/games/warrior-rogue-mage/
[11]: https://froggodgames.com/frogs/product/swords-wizardry-complete-rulebook/
[12]: http://brentnewhall.com/games/doku.php?id=games:dungeon_delvers
[13]: http://www.faterpg.com/licensing/licensing-fate-cc-by/
[14]: http://dungeon-world.com/
[15]: https://creativecommons.org/
[16]: https://www.tldp.org/HOWTO/Installfest-HOWTO/introduction.html
[17]: https://www.softwarefreedomday.org/
[18]: https://mixedsignals.ml/games/blog/blog_dead-earth
[19]: https://www.gnu.org/licenses/fdl-1.3.en.html
[20]: https://opensource.com/article/18/1/creative-commons-real-world

View File

@ -0,0 +1,113 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Welcoming Blockchain 3.0)
[#]: via: (https://www.ostechnix.com/welcoming-blockchain-3-0/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Welcoming Blockchain 3.0
======
![Welcoming blockchain 3.0][1]
Image credit : <https://pixabay.com/illustrations/blockchain-network-business-3448502/>
The [**"Blockchain 2.0"**][2] series of posts discussed the evolution of blockchain technology since the advent of cryptocurrencies with Bitcoin in 2008. This post will explore the future of blockchains. Lovingly called **blockchain 3.0**, this new wave of DLT evolution aims to answer the issues blockchains currently face (each of which is summarized here). The next version of the tech standard will also bring new applications and use cases. At the end of the post we will look at a few examples of these principles applied today.
A few of the shortcomings of existing blockchain platforms are listed below, each followed by a proposed solution.
### Problem 1: Scalability[1]
This is seen as the first major hurdle to mainstream adoption. As previously discussed, a lot of limiting factors contribute to a blockchain's inability to process many transactions at the same time. Existing networks such as [**Ethereum**][3] are capable of a measly 10-15 transactions per second (TPS), whereas mainstream networks such as those employed by Visa are capable of more than 2000 TPS. **Scalability** is an issue that plagues all modern database systems. Improved consensus algorithms and better blockchain architecture designs are improving it, though, as we will see here.
**Solving scalability**
Implementing leaner and more efficient consensus algorithms has been proposed as a way to solve scalability without disturbing the primary structure of the blockchain. While most cryptocurrencies and blockchain platforms use resource-intensive PoW algorithms to generate blocks (for instance, Bitcoin and Ethereum), newer DPoS and PoET algorithms exist to solve this issue. DPoS and PoET algorithms (and more are in development) require fewer resources to maintain the blockchain and have shown throughputs of up to thousands of TPS, rivalling those of popular non-blockchain systems.
The second solution to scalability is altering the blockchain structure[1] and functionality altogether. We won't get into the finer details here, but alternative architectures such as the **Directed Acyclic Graph** (**DAG**) have been proposed to handle this issue. Essentially, the assumption behind this approach is that not all network nodes need a copy of the entire blockchain for the blockchain to work, or for the participants to reap the benefits of a DLT system. The system does not require transactions to be validated by all of the participants; it simply requires the transactions to happen in a common frame of reference and to be linked to each other.
The DAG[2] approach is implemented in the Bitcoin system through an implementation called the **Lightning Network**, and Ethereum implements the same idea with its **Sharding**[3] protocol. At its heart, a DAG implementation is not technically a blockchain. It's more like a tangled maze, but it still retains the peer-to-peer and distributed-database properties of a blockchain. We will explore DAG and Tangle networks in a separate post later.
### Problem 2: Interoperability[4][5]
**Interoperability**, also called cross-chain interaction, is basically different blockchains being able to talk to each other to exchange metrics and information. With so many platforms that it is hard to keep count of at the moment, and different companies coming up with proprietary systems for a myriad of applications, interoperability between platforms is key. For instance, at the moment, someone who owns digital identities on one platform cannot exploit features presented by other platforms, because the individual blockchains do not understand or know each other. Problems pertaining to a lack of credible verification, token exchange, etc. still persist. A global roll-out of [**smart contracts**][4] is also not viable without platforms being able to communicate with each other.
**Solving Interoperability**
There are protocols and platforms designed just for enabling interoperability. Such platforms implement atomic swap protocols and provide open stages for different blockchain systems to communicate and exchange information. An example would be **"0x (ZRX)"**, which is described later on.
### Problem 3: Governance[6]
Not a limitation in its own right, **governance** in a public blockchain needs to act as a community moral compass, where everyone's opinion on the operation of the blockchain is taken into account. Combined with scale, this presents a problem: either the protocols change far too frequently, or the protocols are changed at the whims of a "central" authority who holds the most tokens. This is not an issue that most public blockchains are working to avoid right now, since the scale they operate at and the nature of their operations don't require stricter supervision.
**Solving Governance issues**
The Tangle framework or the DAG mentioned above would almost eliminate the need for global (platform-wide) governance laws. Instead, a program can automatically oversee the transaction and user type and decide on the rules that need to be applied.
### Problem 4: Sustainability
**Sustainability** builds on the scalability issue again. Current blockchains and cryptocurrencies are notorious for not being sustainable in the long run, owing to the significant oversight that is still required and the amount of resources needed to keep the systems running. If you've read reports about how "mining cryptocurrencies" has not been so profitable lately, this is why. The amount of resources required to keep existing platforms running is simply not practical at a global scale with mainstream use.
**Solving non-sustainability**
From a resources or economics point of view, the answer to sustainability would be similar to the one for scalability. However, for the system to be implemented on a global scale, laws and regulations need to endorse it. This, however, depends on the governments of the world. Favourable moves from the American and European governments have renewed hopes in this aspect.
### Problem 5: User adoption[7]
Currently, a hindrance to widespread consumer adoption of blockchain-based applications is consumers' unfamiliarity with the platforms and the tech underneath them. The fact that most applications require some sort of tech and computing background to figure out how they work does not help either. The third wave of blockchain development seeks to lessen the gap between consumer knowledge and platform usability.
**Solving the user adoption issue**
The internet took a lot of time to become what it is now. A lot of work has been done over the years on developing a standardized internet technology stack that allows the web to function the way it does. Developers are working on user-facing front-end distributed applications that should act as a layer on top of existing web 3.0 technology, supported by blockchains and open protocols underneath. Such [**distributed applications**][5] will make the underlying technology more familiar to users, hence increasing mainstream adoption.
We've discussed the solutions to the above issues theoretically; now we proceed to show them being applied in the present scenario.
**[0x][6]** is a decentralized token exchange where users from different platforms can exchange tokens without the need for a central authority to vet them. Their breakthrough lies in how they've designed the system to record and vet the blocks only after transactions are settled, rather than in between (normally, the blocks preceding a transaction are also verified to establish context). This allows for a more liquid, faster exchange of digitized assets online.
**[Cardano][7]** was founded by one of the co-founders of Ethereum and boasts of being a truly "scientific" platform, with multiple reviews and strict protocols for the developed code and algorithms. Everything out of Cardano is assumed to be mathematically as optimized as possible. Its consensus algorithm, called **Ouroboros**, is a modified Proof of Stake algorithm. Cardano is developed in [**Haskell**][8], and the smart contract engine uses a derivative of Haskell called **Plutus**. Both are functional programming languages which guarantee secure transactions without compromising efficiency.
**EOS** has already been described in [**this post**][9].
**[COTI][10]** is a rather obscure architecture that entails no mining and next to zero power consumption in operation. It also stores assets in offline wallets localized on users' devices rather than on a pure peer-to-peer network. COTI also follows a DAG-based architecture and claims processing throughputs of up to 10000 TPS. The platform allows enterprises to build their own cryptocurrencies and digitized currency wallets without exploiting a blockchain.
**References:**
* [1] **A. P. Paper, K. Croman, C. Decker, I. Eyal, A. E. Gencer, and A. Juels, “On Scaling Decentralized Blockchains | SpringerLink,” 2018.**
* [2] [**Going Beyond Blockchain with Directed Acyclic Graphs (DAG)**][11]
* [3] [**Ethreum/wiki On sharding blockchains**][12]
* [4] [**Why is blockchain interoperability important**][13]
* [5] [**The Importance of Blockchain Interoperability**][14]
* [6] **R. Beck, C. Müller-Bloch, and J. L. King, “Governance in the Blockchain Economy: A Framework and Research Agenda,” J. Assoc. Inf. Syst., pp. 10201034, 2018.**
* [7] **J. M. Woodside, F. K. A. Jr, W. Giberson, F. K. J. Augustine, and W. Giberson, “Blockchain Technology Adoption Status and Strategies,” J. Int. Technol. Inf. Manag., vol. 26, no. 2, pp. 6593, 2017.**
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/welcoming-blockchain-3-0/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/blockchain-720x340.jpg
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
[4]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[5]: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/
[6]: https://0x.org/
[7]: https://www.cardano.org/en/home/
[8]: https://www.ostechnix.com/getting-started-haskell-programming-language/
[9]: https://www.ostechnix.com/blockchain-2-0-eos-io-is-building-infrastructure-for-developing-dapps/
[10]: https://coti.io/
[11]: https://cryptoslate.com/beyond-blockchain-directed-acylic-graphs-dag/
[12]: https://github.com/ethereum/wiki/wiki/Sharding-FAQ#introduction
[13]: https://www.capgemini.com/2019/02/can-the-interoperability-of-blockchains-change-the-world/
[14]: https://medium.com/wanchain-foundation/the-importance-of-blockchain-interoperability-b6a0bbd06d11