Merge remote-tracking branch 'LCTT/master'

Xingyu.Wang 2019-05-06 01:02:00 +08:00
commit 645307b889
43 changed files with 5599 additions and 30 deletions

View File

@@ -1,49 +1,46 @@
[#]: collector: (lujun9972)
[#]: translator: (warmfrog)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10817-1.html)
[#]: subject: (Installing Ubuntu MATE on a Raspberry Pi)
[#]: via: (https://itsfoss.com/ubuntu-mate-raspberry-pi/)
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
-Raspberry Pi 上安装 Ubuntu MATE
+树莓派上安装 Ubuntu MATE
=================================
-_**简介: 这篇快速指南告诉你如何在 Raspberry Pi 设备上安装 Ubuntu MATE。**_
+> 简介:这篇快速指南告诉你如何在树莓派设备上安装 Ubuntu MATE。
-[Raspberry Pi][1] 是目前最流行的单板机并且是制造商的首选。[Raspbian][2] 是基于 Debian 的 Pi 的官方操作系统。它是轻量级的,内置了教育工具和能在大部分场景下完成工作的工具。
-[安装 Raspbian][3] 安装同样简单,但是与 [Debian][4] 一起的问题是慢的升级周期和旧的软件包。
-在 Raspberry Pi 上运行 Ubuntu 给你带来一个更丰富的体验和最新的软件。当在你的 Pi 上运行 Ubuntu 时我们有几个选择。
-1. [Ubuntu MATE][5] Ubuntu MATE 是仅有的原生支持 Raspberry Pi 包含一个完整的桌面环境的分发版。
-2. [Ubuntu Server 18.04][6] \+ 手动安装一个桌面环境。
-3. 使用 [Ubuntu Pi Flavor Maker][7] 社区构建的镜像,_这些镜像只支持 Raspberry Pi 2B 和 3B 的变种_,并且**不能**更新到最新的 LTS 发布版。
+[树莓派][1] 是目前最流行的单板机,也是创客们的首选。[Raspbian][2] 是基于 Debian 的树莓派官方操作系统。它是轻量级的,内置了教育工具和能在大部分场景下完成工作的工具。
+[安装 Raspbian][3] 同样简单,但随 [Debian][4] 而来的问题是缓慢的升级周期和陈旧的软件包。
+在树莓派上运行 Ubuntu 可以给你带来更丰富的体验和最新的软件。在树莓派上运行 Ubuntu 时,你有几个选择:
+1. [Ubuntu MATE][5]:Ubuntu MATE 是仅有的原生支持树莓派且包含完整桌面环境的发行版。
+2. [Ubuntu Server 18.04][6] + 手动安装一个桌面环境。
+3. 使用 [Ubuntu Pi Flavor Maker][7] 社区构建的镜像,这些镜像只支持树莓派 2B 和 3B 的变种,并且**不能**更新到最新的 LTS 发布版。
第一个选择的安装过程最简单、最快,而第二个选择则让你可以自由选择要安装的桌面环境。我推荐从前两个中任选其一。
下面是一些磁盘镜像的下载链接。在这篇文章里我只会介绍 Ubuntu MATE 的安装。
-### 在 Raspberry Pi 上安装 Ubuntu MATE
+### 在树莓派上安装 Ubuntu MATE
去 Ubuntu MATE 的下载页面获取推荐的镜像。
![][8]
-试验 ARM64 版只应在你需要在 Raspberry Pi 服务器上运行像 MongoDB 这样的 64-bit 应用时使用。
+试验性的 ARM64 版本只应在你需要在树莓派服务器上运行像 MongoDB 这样的 64 位应用时使用。
-[ 下载为 Raspberry Pi 准备的 Ubuntu MATE][9]
+- [下载为树莓派准备的 Ubuntu MATE][9]
#### 第 1 步:设置 SD 卡
-镜像文件一旦下载完成后需要解压。你应该简单的右击来提取它。
+镜像文件下载完成后需要解压。你可以简单地右键点击来提取它。
-可替换地,下面命令做同样的事。
+也可以使用下面的命令做同样的事:
```
xz -d ubuntu-mate***.img.xz
```
@@ -51,7 +48,7 @@ xz -d ubuntu-mate***.img.xz
如果你在 Windows 上你可以使用 [7-zip][10] 替代。
-安装 **[Balena Etcher][11]**,我们将使用这个工具将镜像写入 SD 卡。确保你的 SD 卡有至少 8 GB 的容量。
+安装 [Balena Etcher][11],我们将使用这个工具将镜像写入 SD 卡。确保你的 SD 卡有至少 8 GB 的容量。
启动 Etcher选择镜像文件和 SD 卡。
@@ -59,21 +56,19 @@ xz -d ubuntu-mate***.img.xz
一旦进度完成 SD 卡就准备好了。
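如果你更习惯命令行,也可以不用 Etcher,直接用 `dd` 把镜像写入 SD 卡。下面是一个最小示例(注意,这里是假设性的写法:`ubuntu-mate.img` 和 `/dev/sdX` 都是占位符,请先用 `lsblk` 确认 SD 卡对应的设备名,写错设备会毁掉该盘上的数据):

```
# 找到 SD 卡对应的设备名(例如 /dev/sdb)
lsblk

# 把解压后的镜像写入整张 SD 卡(不是某个分区),并等待数据落盘
sudo dd if=ubuntu-mate.img of=/dev/sdX bs=4M status=progress conv=fsync
```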
-#### 第 2 步:设置 Raspberry Pi
+#### 第 2 步:设置树莓派
-你可能已经知道你需要一些外设才能使用 Raspberry Pi 例如 鼠标,键盘, HDMI 线等等。你同样可以[不用键盘和鼠标安装 Raspberry Pi][13] 但是这篇指南不是那样。
+你可能已经知道你需要一些外设才能使用树莓派,例如鼠标、键盘、HDMI 线等等。你同样可以[不用键盘和鼠标安装树莓派][13],但是这篇指南不是那样。
* 插入一个鼠标和一个键盘。
* 连接 HDMI 线缆。
* 插入 SD 卡 到 SD 卡槽。
-插入电源线给它供电。确保你有一个好的电源供应(5V,3A 至少)。一个不好的电源供应可能降低性能。
+插入电源线给它供电。确保你有一个好的电源供应(至少 5V、3A)。不好的电源供应可能会降低性能。
#### Ubuntu MATE 安装
-一旦你给 Raspberry Pi 供电,你将遇到非常熟悉的 Ubuntu 安装过程。在这里的安装过程相当直接。
+一旦你给树莓派供电,你将遇到非常熟悉的 Ubuntu 安装过程。在这里的安装过程相当直接。
![选择你的键盘布局][14]
@@ -83,7 +78,7 @@ xz -d ubuntu-mate***.img.xz
![添加用户名和密码][16]
-在设置了键盘布局,时区和用户凭证后,在几分钟后你将被带到登录界面。瞧!你快要完成了。
+在设置了键盘布局、时区和用户凭证后,几分钟后你将被带到登录界面。瞧!你快要完成了。
![][17]
@@ -98,9 +93,9 @@ sudo apt upgrade
![][19]
-一旦更新完成安装你就可以开始了。你可以根据你的需要继续安装 Raspberry Pi 平台的为 GPIO 和其他 I/O 准备的特定软件包。
+一旦更新安装完成,你就可以开始了。你可以根据需要,继续安装树莓派平台上针对 GPIO 和其他 I/O 的特定软件包。
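如果你打算从 GPIO 入手,下面是一个假设性的起点:原文并没有指定具体软件包,`python3-gpiozero` 这个包名仅作示例,请以你所用的 Ubuntu 仓库中实际提供的包为准。

```
# 安装 Python 的 GPIO 库(包名为示例,请以仓库实际提供的为准)
sudo apt install python3-gpiozero
```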
-是什么让你考虑在 Raspberry 上安装 Ubuntu?你对 Raspbian 的体验如何呢?在下方评论来让我知道。
+是什么让你考虑在树莓派上安装 Ubuntu?你对 Raspbian 的体验如何呢?请在下方评论让我知道。
--------------------------------------------------------------------------------
@@ -109,7 +104,7 @@ via: https://itsfoss.com/ubuntu-mate-raspberry-pi/
作者:[Chinmay][a]
选题:[lujun9972][b]
译者:[warmfrog](https://github.com/warmfrog)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Edge computing is in most industries' future)
[#]: via: (https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)
Edge computing is in most industries' future
======
Nearly every industry can take advantage of edge computing in the journey to speed digital transformation efforts
![iStock][1]
The growth of edge computing is about to take a huge leap. Right now, companies are generating about 10% of their data outside a traditional data center or cloud. But within the next six years, that will increase to 75%, [according to Gartner][2].
That's largely down to the need to process data emanating from devices, such as Internet of Things (IoT) sensors. Early adopters include:
* **Manufacturers:** Devices and sensors seem endemic to this industry, so it's no surprise to see the need to find faster processing methods for the data produced. A recent [_Automation World_][3] survey found that 43% of manufacturers have deployed edge projects. Most popular use cases have included production/manufacturing data analysis and equipment data analytics.
* **Retailers**: Like most industries deeply affected by the need to digitize operations, retailers are being forced to innovate their customer experiences. To that end, these organizations are “investing aggressively in compute power located closer to the buyer,” [writes Dave Johnson][4], executive vice president of the IT division at Schneider Electric. He cites examples such as augmented-reality mirrors in fitting rooms that offer different clothing options without the consumer having to try on the items, and beacon-based heat maps that show in-store traffic.
* **Healthcare organizations**: As healthcare costs continue to escalate, this industry is ripe for innovation that improves productivity and cost efficiencies. Management consulting firm [McKinsey & Co. has identified][5] at least 11 healthcare use cases that benefit patients, the facility, or both. Two examples: tracking mobile medical devices for nursing efficiency as well as optimization of equipment, and wearable devices that track user exercise and offer wellness advice.
While these are strong use cases, as the edge computing market grows, so too will the number of industries adopting it.
**Getting the edge on digital transformation**
Faster processing at the edge fits perfectly into the objectives and goals of digital transformation — improving efficiencies, productivity, speed to market, and the customer experience. Here are just a few of the potential applications and industries that will be changed by edge computing:
**Agriculture:** Farmers and organizations already use drones to transmit field and climate conditions to watering equipment. Other applications might include monitoring and location tracking of workers, livestock, and equipment to improve productivity, efficiencies, and costs.
**Energy**: There are multiple potential applications in this sector that could benefit both consumers and providers. For example, smart meters help homeowners better manage energy use while reducing grid operators' need for manual meter reading. Similarly, sensors on water pipes would detect leaks, while providing real-time consumption data.
**Financial services**: Banks are adopting interactive ATMs that quickly process data to provide better customer experiences. At the organizational level, transactional data can be more quickly analyzed for fraudulent activity.
**Logistics**: As consumers demand faster delivery of goods and services, logistics companies will need to transform mapping and routing capabilities to get real-time data, especially in terms of last-mile planning and tracking. That could involve street-, package-, and car-based sensors transmitting data for processing.
All industries have the potential for transformation, thanks to edge computing. But it will depend on how they address their computing infrastructure. Discover how to overcome any IT obstacles at [APC.com][6].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html#tk.rss_all
作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-1019389496-100794424-large.jpg
[2]: https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/
[3]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
[4]: https://blog.schneider-electric.com/datacenter/2018/07/10/why-brick-and-mortar-retail-quickly-establishing-leadership-edge-computing/
[5]: https://www.mckinsey.com/industries/high-tech/our-insights/new-demand-new-markets-what-edge-computing-means-for-hardware-companies
[6]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp

View File

@@ -0,0 +1,186 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open architecture and open source: The new wave for SD-WAN?)
[#]: via: (https://www.networkworld.com/article/3390151/open-architecture-and-open-source-the-new-wave-for-sd-wan.html#tk.rss_all)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
Open architecture and open source: The new wave for SD-WAN?
======
As networking continues to evolve, you certainly don't want to break out a forklift every time new technologies are introduced. Open architecture would allow you to replace the components of a system, and give you more flexibility to control your own networking destiny.
![opensource.com \(CC BY-SA 2.0\)][1]
I recently shared my thoughts about the [role of open source in networking][2]. I discussed two significant technological changes that we have witnessed. I call them waves, and these waves will redefine how we think about networking and security.
The first wave signifies that networking is moving to software so that it can run on commodity off-the-shelf hardware. The second wave is the use of open source technologies, thereby removing the barriers to entry for new product innovation and rapid market access. This is especially evident in the SD-WAN market rush.
Seemingly, we are beginning to see less investment in hardware unless there is a specific segment that needs to be resolved. But generally, software-based platforms are preferred as they bring many advantages. It is evident that there has been a technology shift. We have moved networking from hardware to software and this shift has positive effects for users, enterprises and service providers.
**[ Don't miss [customer reviews of top remote access tools][3] and see [the most powerful IoT companies][4]. | Get daily insights by [signing up for Network World newsletters][5]. ]**
### Performance (hardware vs software)
There has always been a misconception that the hardware-based platforms are faster due to the hardware acceleration that exists in the network interface controller (NIC). However, this is a mistaken belief. Nowadays, software platforms can reach similar performance levels as compared to hardware-based platforms.
Initially, people viewed hardware as a performance-based vehicle but today this does not hold true anymore. Even the bigger vendors are switching to software-based platforms. We are beginning to see this almost everywhere in networking.
### SD-WAN and open source
SD-WAN really took off quite rapidly due to the availability of open source. It enabled vendors to leverage all the available open source components and then create their solution on top. By and large, SD-WAN vendors used open source as the foundation of their solution and then added proprietary code over the baseline.
However, even when using various open source components, there is still a lot of work left for these vendors to build a complete SD-WAN solution, even to reach a baseline of centrally managed routers with flexible network architecture control, not to mention the complete feature set of SD-WAN.
The result of the work done by these vendors is still closed products, so the fact that they are using open source components in their products is merely a time-to-market advantage, not a big benefit to the end users (enterprises) or to service providers launching hosted services with these products. They are still limited in flexibility, and vendor diversity is only achieved through a multi-vendor strategy, which in practice means launching multiple silo services, each based on a different SD-WAN vendor, without real selection of the technologies that make up each of the SD-WAN services they launch.
I recently came across a company called [Flexiwan][6], whose goal is to fundamentally change this limitation of SD-WAN by offering a fully open source solution that, as they say, “includes integration points in the core of the system that allow for 3rd party logic to be integrated in an efficient way.” They call this an open architecture, which, in practical terms, means a service provider or enterprise can integrate its own application logic into the core of the SD-WAN router, or select best-of-breed sub-technologies or applications instead of having these dictated by the vendor. I believe there is the possibility of another wave of SD-WAN with a fully open source and open architecture approach.
This type of architecture brings many benefits to users, enterprises and service providers, especially when compared to the typical lock-in of bigger vendors, such as Cisco and VMware.
With an open source open architecture, it's easier to control the versions and gain more flexibility by using the software offered by different providers. It offers the ability to switch providers, not to mention being easier to install and upgrade the versions.
### SD-WAN, open source and open architecture
An SD-WAN solution that is open source with an open architecture provides a modular, decomposed type of SD-WAN. This enables selecting the elements that make up a solution.
For example, enterprises and service providers can select the best-of-breed technologies from independent vendors, such as deep packet inspection (DPI), security, wide area network (WAN) optimization, session border controller (SBC), VoIP and other traffic specific optimization logic.
Some SD-WAN vendors define an open architecture in such a way that they just have a set of APIs, for example, northbound APIs, to enable one to build management or do service chaining. This is one approach to an open architecture but, in fact, it's pretty limited, since it does not bring the full benefits that an open architecture should offer.
### Open source and the fear of hacking
However, when I think about an open source and open architecture for SD-WAN, the first thing that comes to mind is bad actors. What about the code? If it's open source, bad actors can find vulnerabilities, right?
The community is a powerful force and will fix any vulnerability. Also, with open source, the vendor who is responsible for the open source component will fix the vulnerability much faster than in a closed solution, where you are unaware of the vulnerability until a fix is released.
### The SD-WAN evolution
Before we go any further, let's examine the history of SD-WAN and its origins: how we used to connect from the wide area network (WAN) to other branches via private or public links.
SD-WAN offers the ability to connect your organization to a WAN. This could be connecting to the Internet or other branches, to optimally deliver applications with a good user-experience. Essentially, SD-WAN allows the organizations to design the architecture of their network dynamically by means of software.
### In the beginning, there was IPSec
It started with IPSec. Around two decades back, in 1995, the popular design was that of mesh architecture. As a result, we had a lot of point-to-point connections. Mesh architectures with IPSec VPNs are tiresome to manage, as there is a potential for hundreds of virtual private network (VPN) configurations (a full mesh of n sites needs n(n-1)/2 tunnels, so even 30 sites already means 435).
Authentically, IPSec started with two fundamental goals. The first was the tunneling protocol that would allow organizations to connect the users or other networks to their private network. This enabled the enterprises to connect to networks that they did not have a direct route to.
The second goal of IPSec was to encrypt packets at the network layer, thereby securing the data in motion. Let's face it: at that time, IPSec was terrible for complicated multi-site interconnectivity and high-availability designs. If left to its defaults, IPSec is best suited for static designs.
This was the reason why we had to step into the next era, where additional functionality was added to IPSec. For example, IPSec had issues supporting routing protocols that use multicast. To overcome this, IPSec over generic routing encapsulation (GRE) was introduced.
### The next era of SD-WAN
During the journey to 2008, one could argue that the next era of WAN connectivity was when additional features were added to IPSec. At this time IPSec became known as a “Swiss army knife.” It could do many things but not anything really well.
Back then, you could create multiple links, but it failed to select the traffic over these links other than by using simple routing. You needed to add a routing protocol. For advanced agile architectures, IPSec had to be incorporated with other higher-level protocols.
Features were then added based on measuring the quality. Link quality features were added to analyze any delay, drops and to select alternative links for your applications. We began to add tunnels, multi-links and to select the traffic based on the priority, not just based on the routing.
The most common way to tunnel was IPSec over GRE. You have the GRE tunnel that enables you to send any protocol end-to-end, using IPSec for the encryption. All this functionality was added to create dynamic tunnels over IPSec and to optimize the IPSec tunnels.
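To make that stack concrete, here is a minimal sketch of the GRE side on a Linux box using iproute2. All addresses are placeholders, and the IPSec layer that would actually encrypt the GRE traffic (for example, a policy matching IP protocol 47) is configured separately and omitted here:

```
# Create a GRE tunnel between two placeholder public endpoints
sudo ip tunnel add gre1 mode gre local 198.51.100.1 remote 203.0.113.2 ttl 255

# Give the tunnel an inside address and bring it up
sudo ip addr add 10.0.0.1/30 dev gre1
sudo ip link set gre1 up

# Routing protocols (including multicast hellos) can now run over gre1,
# which plain IPSec tunnel mode could not carry on its own
sudo ip route add 10.1.0.0/16 dev gre1
```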
This was a move in the right direction, but it was still complex. It was not centrally managed and was error-prone with complex configurations that were unable to manage large deployments. IPSec had far too many limitations, so in the mid-2000s early SD-WAN vendors started cropping up. Some of these vendors enabled the enterprises to aggregate many separate digital subscriber lines (DSL) links into one faster logical link. At the same time, others added time stamps and/or sequence numbers to packets to improve the network performance and security when running over best effort (internet) links.
International WAN connectivity was a popular focus since the cost delta between the Internet and private multiprotocol label switching (MPLS) was 10x+ different. Primarily, enterprises wanted the network performance and security of MPLS without having to pay a premium for it.
Most of these solutions sat in front of or behind a traditional router from companies like Cisco. Evidently, just like WAN optimization vendors, these were additional boxes/solutions that enterprises added to their networks.
### The next era of SD-WAN, circa 2012
It was somewhere in 2012 that we started to see the big rush to the SD-WAN market. Vendors such as Velocloud, Viptela and a lot of the other big players in the SD-WAN market kicked off with the objective of claiming some of the early SD-WAN success and going after the edge router market with a full feature router replacement and management simplicity.
Open source networking software and other open source components for managing the traffic enabled these early SD-WAN vendors to lay a foundation where a lot of the code base was open source. They would then “glue” it together and add their own additional features.
Around this time, Intel was driving data plane development kit (DPDK) and advanced encryption standard (AES) instruction set, which enabled that software to run on commodity hardware. The SD-WAN solutions were delivered as closed solutions where each solution used its own set of features. The features and technologies chosen for each vendor were different and not compatible with each other.
### The recent era of SD-WAN, circa 2017
A tipping point in 2017 was the gold rush for SD-WAN deployments. Everyone wanted to have SD-WAN as fast as possible.
The SD-WAN market has taken off, as seen by 50 vendors with competing, proprietary solutions and market growth curves with a CAGR of 100%. There is a trend of big vendors like Cisco, VMware and Oracle acquiring startups to compete in this new market.
As a result, Cisco, which is the traditional enterprise market leader in WAN routing solutions, felt threatened, since its IWAN solution, which had been around since 2008, was too complex (a 900-page configuration and operations manual). Besides, its simpler solution based on the Meraki acquisition was not feature-rich enough for large enterprises.
With its acquisition of Viptela, Cisco currently has about 13% of the market share, and for the first time in decades, it is not the market leader. The large cloud vendors, such as Google and Facebook, are utilizing their own technology for routing within their very large private networks.
At some point between 2012 and 2017, we witnessed the service providers adopting SD-WAN. This introduced the onset and movement of managed SD-WAN services. As a result, the service providers wanted to have SD-WAN on the menu for their customers. But there were many limitations in the SD-WAN market, as it was offered as a closed-box solution, giving the service providers limited control.
At this point surfaced an expectation of change, as service providers and enterprises looked for more control. Customers can get better functionality from a multi-vendor approach than from a single vendor.
### Don't forget DIY SD-WAN
Up to 60% of service providers and enterprises within the USA are now looking at DIY SD-WAN. A DIY SD-WAN solution is not one where many pieces of open source are taken and cast together into something; the focus is on a solution that is bought from a vendor but can be self-managed.
Today, the majority of the market is looking for managed solutions, while the upper tier that has the expertise wants to be equipped with more control options.
### SD-WAN vendors attempting everything
There is a trend that some vendors try to do everything with SD-WAN. As a result, whether you are an enterprise or a service provider, you are locked into a solution that is dictated by the SD-WAN vendor.
The SD-WAN vendors have made the supplier choice or developed what they think is relevant. Evidently, some vendors are using stacks and software development kits (SDKs) that they purchased, for example, for deep packet inspection (DPI).
Ultimately, you are locked into a specific solution that the vendor has chosen for you. If you are a service provider, you might disapprove of this limitation and if you are an enterprise with specific expertise, you might want to zoom in for more control.
### All-in-one security vendors
Many SD-WAN vendors promote themselves as security companies. But would you prefer to buy a security solution from an SD-WAN vendor or from an experienced vendor, such as Checkpoint?
Both enterprises and service providers want to have a choice, but with an integrated black-box security solution, you don't have a choice. The more you integrate and throw into the same box, the stronger the vendor lock-in and the weaker the flexibility.
Essentially, with this approach, you are going for the lowest common denominator instead of the highest. Ideally, the technology of the services that you deploy on your network requires expertise. One vendor cannot be an expert in everything.
An open architecture leaves room for experts in different areas to join together and add their own specialist functionality.
### Encrypted traffic
As a matter of fact, what is not encrypted today will be encrypted tomorrow. The vendor of the application can perform intelligent things that the SD-WAN vendor cannot, because it controls both sides of the application. Hence, if application vendors can put something inside the SD-WAN edge device, they can make smart decisions even if the traffic is encrypted.
But in the case of traditional SD-WANs, that requires cooperation with the content provider. With an open architecture, however, you can integrate anything, and nothing prevents the integration. A lot of traffic is encrypted, and it's harder to manage encrypted traffic; an open architecture would allow the content providers to manage that traffic more effectively.
### 2019 and beyond: what is an open architecture?
Cloud providers and enterprises have discovered that 90% of the user experience and security problems arise due to the network: between where the cloud provider resides and where the end-user consumes the application.
Therefore, both cloud providers and large enterprises with digital strategies are focusing on building their solutions based on open source stacks. Having a viable open source SD-WAN solution is the next step in the SD-WAN evolution, where it moves to involve the community in the solution. This is similar to what happened with containers and tools.
Now, since we're in 2019, are we going to witness a new era of SD-WAN? Are we moving to an open architecture with an open source SD-WAN solution? An open architecture should be the core of the SD-WAN infrastructure, where additional technologies are integrated inside the SD-WAN solution and not only as complementary VNFs. There are interfaces and native APIs that allow you to integrate logic into the router. This way, the router is able to intercept and act on the traffic.
So, if I'm a service provider and have my own application, I would want to write logic that can communicate with my application. Without an open architecture, service providers can't really offer differentiation or change the way SD-WAN makes decisions and interacts with their applications' traffic.
There is a list of various technologies that you need to be an expert in to be able to integrate. And each one of these technologies can be a company, for example, DPI, VoIP optimization, and network monitoring to name a few. An open architecture will allow you to pick and choose these various elements as per your requirements.
Networking is going through a lot of changes and it will continue to evolve with the passage of time. As a result, you wouldn't want something that forces you to break out a forklift each time new technologies are introduced. Primarily, open architecture allows you to replace the components of the system and add code or elements that handle specific traffic and applications.
### Open source
Open source gives you more flexibility to control your own destiny. It offers the ability to select your own services that you want to be applied to your system. It provides security in the sense that if something happens to the vendor or there is a vulnerability in the system, you know that you are backed by the community that can fix such misadventures.
From the perspective of the business model, it makes a more flexible and cost-effective system. Besides, with open source, the total cost of ownership will also be lower.
**This article is published as part of the IDG Contributor Network.[Want to Join?][7]**
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3390151/open-architecture-and-open-source-the-new-wave-for-sd-wan.html#tk.rss_all
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/03/6554314981_7f95641814_o-100714680-large.jpg
[2]: https://www.networkworld.com/article/3338143/the-role-of-open-source-in-networking.html
[3]: https://www.networkworld.com/article/3262145/lan-wan/customer-reviews-top-remote-access-tools.html#nww-fsb
[4]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
[5]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[6]: https://flexiwan.com/sd-wan-open-source/
[7]: /contributor-network/signup.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco: DNSpionage attack adds new tools, morphs tactics)
[#]: via: (https://www.networkworld.com/article/3390666/cisco-dnspionage-attack-adds-new-tools-morphs-tactics.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco: DNSpionage attack adds new tools, morphs tactics
======
Cisco's Talos security group says DNSpionage tools have been upgraded to be more stealthy
![Calvin Dexter / Getty Images][1]
The group behind the Domain Name System attacks known as DNSpionage have upped their dark actions with new tools and malware to focus their attacks and better hide their activities.
Cisco Talos security researchers, who discovered [DNSpionage][2] in November, this week warned of new exploits and capabilities of the nefarious campaign.
**More about DNS:**
* [DNS in the cloud: Why and why not][3]
* [DNS over HTTPS seeks to make internet use more private][4]
* [How to protect your infrastructure from DNS cache poisoning][5]
* [ICANN housecleaning revokes old DNS security key][6]
“The threat actor's ongoing development of DNSpionage malware shows that the attacker continues to find new ways to avoid detection. DNS tunneling is a popular method of exfiltration for some actors and recent examples of DNSpionage show that we must ensure DNS is monitored as closely as an organization's normal proxy or weblogs,” [Talos wrote][7]. “DNS is essentially the phonebook of the internet, and when it is tampered with, it becomes difficult for anyone to discern whether what they are seeing online is legitimate.”
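As a concrete starting point for the kind of DNS visibility Talos recommends, a passive capture of name-resolution traffic is often the first step. This is a minimal sketch on a Linux host with tcpdump, not a tool from the Talos report; unusually long or high-entropy query names are one rough hint of DNS tunneling:

```
# Watch DNS traffic (UDP and TCP port 53) on all interfaces,
# printing queries and responses without resolving addresses
sudo tcpdump -i any -nn port 53
```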
In Talos' initial report, researchers said a DNSpionage campaign targeted various businesses in the Middle East as well as United Arab Emirates government domains. It also utilized two malicious websites containing job postings that were used to compromise targets via crafted Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers.
In a separate DNSpionage campaign, the attackers used the same IP address to redirect the DNS of legitimate .gov and private company domains. During each DNS compromise, the actor carefully generated “Let's Encrypt” certificates for the redirected domains. These certificates provide X.509 certificates for [Transport Layer Security (TLS)][8] free of charge to the user, Talos said.
This week Cisco said DNSpionage actors have created a new remote administrative tool that supports HTTP and DNS communication with the attackers' command and control (C2).
“In our previous post concerning DNSpionage, we showed that the malware author used malicious macros embedded in a Microsoft Word document. In the new sample from Lebanon identified at the end of February, the attacker used an Excel document with a similar macro.”
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][9] ]**
Talos wrote: “The malware supports HTTP and DNS communication to the C2 server. The HTTP communication is hidden in the comments in the HTML code. This time, however, the C2 server mimics the GitHub platform instead of Wikipedia. While the DNS communication follows the same method we described in our previous article, the developer added some new features in this latest version and, this time, the actor removed the debug mode.”
Talos added that the domain used for the C2 campaign is “bizarre.”
“The previous version of DNSpionage attempted to use legitimate-looking domains in an attempt to remain undetected. However, this newer version uses the domain coldfart[.]com, which would be easier to spot than other APT campaigns which generally try to blend in with traffic more suitable to enterprise environments. The domain was also hosted in the U.S., which is unusual for any espionage-style attack.”
Talos researchers said they discovered that DNSpionage added a reconnaissance phase that ensures the payload is being dropped on specific targets rather than indiscriminately downloaded on every machine.
This level of attack also returns information about the workstation environment, including platform-specific information, the name of the domain and the local computer, and information concerning the operating system, Talos wrote. This information is key to helping the malware select only its intended victims and attempt to avoid researchers or sandboxes. Again, it shows the actor's improved abilities, as they now fingerprint the victim.
This new tactic indicates an improved level of sophistication and is likely in response to the significant amount of public interest in the campaign.
Talos noted that there have been several other public reports of DNSpionage attacks, and in January, the U.S. Department of Homeland Security issued an [alert][10] warning users about this threat activity.
“In addition to increased reports of threat activity, we have also discovered new evidence that the threat actors behind the DNSpionage campaign continue to change their tactics, likely in an attempt to improve the efficacy of their operations,” Talos stated.
In April, Cisco Talos identified an undocumented malware developed in .NET. On the analyzed samples, the malware author left two different internal names in plain text: "DropperBackdoor" and "Karkoff."
“The malware is lightweight compared to other malware due to its small size and allows remote code execution from the C2 server. There is no obfuscation and the code can be easily disassembled,” Talos wrote.
The Karkoff malware searches for two specific anti-virus platforms, Avira and Avast, and will work around them.
“The discovery of Karkoff also shows the actor is pivoting and is increasingly attempting to avoid detection while remaining very focused on the Middle Eastern region,” Talos wrote.
Talos distinguished DNSpionage from another DNS attack method, “[Sea Turtle][11]”, which it detailed this month. Sea Turtle involves state-sponsored attackers who are abusing DNS to target organizations and harvest credentials to gain access to sensitive networks and systems in a way that victims are unable to detect. This displays unique knowledge about how to manipulate DNS, Talos stated.
By obtaining control of victims' DNS, attackers can change or falsify any data victims receive from the Internet, and illicitly modify DNS name records to point users to actor-controlled servers; users visiting those sites would never know, Talos reported.
“While this incident is limited to targeting primarily national security organizations in the Middle East and North Africa, and we do not want to overstate the consequences of this specific campaign, we are concerned that the success of this operation will lead to actors more broadly attacking the global DNS system,” Talos stated about Sea Turtle.
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3390666/cisco-dnspionage-attack-adds-new-tools-morphs-tactics.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/cyber_attack_threat_danger_breach_hack_security_by_calvindexter_gettyimages-860363294_2400x800-100788395-large.jpg
[2]: https://blog.talosintelligence.com/2018/11/dnspionage-campaign-targets-middle-east.html
[3]: https://www.networkworld.com/article/3273891/hybrid-cloud/dns-in-the-cloud-why-and-why-not.html
[4]: https://www.networkworld.com/article/3322023/internet/dns-over-https-seeks-to-make-internet-use-more-private.html
[5]: https://www.networkworld.com/article/3298160/internet/how-to-protect-your-infrastructure-from-dns-cache-poisoning.html
[6]: https://www.networkworld.com/article/3331606/security/icann-housecleaning-revokes-old-dns-security-key.html
[7]: https://blog.talosintelligence.com/2019/04/dnspionage-brings-out-karkoff.html
[8]: https://www.networkworld.com/article/2303073/lan-wan-what-is-transport-layer-security-protocol.html
[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[10]: https://www.us-cert.gov/ncas/alerts/AA19-024A
[11]: https://www.networkworld.com/article/3389747/cisco-talos-details-exceptionally-dangerous-dns-hijacking-attack.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How data storage will shift to blockchain)
[#]: via: (https://www.networkworld.com/article/3390722/how-data-storage-will-shift-to-blockchain.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
How data storage will shift to blockchain
======
Move over, cloud and traditional in-house enterprise data center storage: distributed storage based on blockchain may be arriving imminently.
![Cybrain / Getty Images][1]
If you thought cloud storage was digging in its heels to become the go-to method for storing data, and at the same time grabbing share from own-server, in-house storage, you may be interested to hear that some think both are on the way out. Instead, organizations will use blockchain-based storage.
Decentralized blockchain-based file storage will be more secure, will make it harder to lose data, and will be cheaper than anything seen before, say organizations actively promoting the slant on encrypted, distributed technology.
**[ Read also:[Why blockchain (might be) coming to an IoT implementation near you][2] ]**
### Storing transactional data in a blockchain
Chinese company [FileStorm][3], which describes itself in marketing materials as the first [InterPlanetary File System][4] (IPFS) platform on blockchain, says the key to making it all work is to store only the transactional data in blockchain. The actual data files, such as large video files, are distributed in IPFS.
IPFS is a distributed, peer-to-peer file storage protocol. File parts come from multiple computers all at the same time, supposedly making the storage hardy. FileStorm adds blockchain on top of it for a form of transactional indexing.
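To make that division of labor concrete, here is a minimal sketch of the IPFS side using the standard IPFS command-line client; the blockchain indexing layer FileStorm describes sits on top and is not shown, and the file name is illustrative:

```
# One-time initialization of a local IPFS node
ipfs init

# Add a large file; IPFS chunks it and prints a content hash (CID)
ipfs add big-video.mp4

# Any peer can then fetch the file by that hash
ipfs cat <returned-CID> > big-video.mp4
```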
“Blockchain is designed to store transactions forever, and the data can never be altered, thus a trustworthy system is created,” says Raymond Fu, founder of FileStorm and chief product officer of MOAC, the underlying blockchain system used, in a video on the FileStorm website.
“The blocks are used to store only small transactional data,” he says. You can't store large files on it; those are distributed. Decentralized data storage platforms are needed alongside the decentralized blockchain, he says.
YottaChain, another blockchain storage start-up project, is coming at the whole thing from a slightly different angle. It claims its non-IPFS system is more secure partly because it performs deduplication after encryption.
“Data is 10,000 times more secure than [traditional] centralized storage,” it says on its [website][5]. Deduplication eliminates duplicated or redundant data.
### Disrupting data storage
“Blockchain will disrupt data storage,” [says BlockApps separately][6]. The blockchain backend platform provider says the advantages of this new generation of storage include that decentralizing data provides more security and privacy. That's in part because it's harder to hack than traditional centralized storage. And because the files are spread piecemeal among nodes, conceivably all over the world, it is impossible for even a participating node to view the contents of the complete file, it says.
Sharding, which is the term for the breaking apart and node-spreading of the actual data, is secured through keys. Markets can award token coins for mining, and coins can be spent to gain storage. Excess storage can even be sold. And cryptocurrencies have been started to “incentivize usage and to create a market for buying and selling decentralized storage,” BlockApps explains.
The final parts of this new storage mix are that lost files are minimized because data can be duplicated simply — the data sets, for example, can be stored multiple times for error correction — and costs are reduced due to efficiencies.
Square Tech (Shenzhen) Co., which makes blockchain file storage nodes, says in its marketing materials that it intends to build service centers globally to monitor its nodes in real time. Interestingly, another area the company has gotten involved in is the internet of things (IoT), and [it says][7] it wants “to unite the technical resources, capital, and human resources of the IoT industry and blockchain.” Perhaps we end up with a form of the internet of storage things?
“The entire cloud computing industry will be disrupted by blockchain technology in just a few short years,” says BlockApps. Dropbox and Amazon “may even become overpriced and obsolete if they do not find ways to integrate with the advances.”
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3390722/how-data-storage-will-shift-to-blockchain.html#tk.rss_all
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/chains_binary_data_blockchain_security_by_cybrain_gettyimages-926677890_2400x1600-100788435-large.jpg
[2]: https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html
[3]: http://filestorm.net/
[4]: https://ipfs.io/
[5]: https://www.yottachain.io/
[6]: https://blockapps.net/blockchain-disrupt-data-storage/
[7]: http://www.sikuaikeji.com/
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT roundup: VMware, Nokia beef up their IoT)
[#]: via: (https://www.networkworld.com/article/3390682/iot-roundup-vmware-nokia-beef-up-their-iot.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
IoT roundup: VMware, Nokia beef up their IoT
======
Everyone wants in on the ground floor of the internet of things, and companies including Nokia, VMware and Silicon Labs are sharpening their offerings in anticipation of further growth.
![Getty Images][1]
When attempting to understand the world of IoT, it's easy to get sidetracked by all the fascinating use cases: Automated oil and gas platforms! Connected pet feeders! Internet-enabled toilets! (Is “the Internet of Toilets” a thing yet?) But the most important IoT trend to follow may be the way that major tech vendors are vying to make large portions of the market their own.
VMware's play for a significant chunk of the IoT market is called Pulse IoT Center, and the company released version 2.0 of it this week. It follows the pattern set by other big companies getting into IoT: leveraging their existing technological strengths and applying them to the messier, more heterodox networking environment that IoT represents.
Unsurprisingly, given that it's VMware we're talking about, there's now a SaaS option, and the company was also eager to talk up that Pulse IoT Center 2.0 has simplified device-onboarding and centralized management features.
**More about edge networking**
* [How edge networking and IoT will reshape data centers][2]
* [Edge computing best practices][3]
* [How edge computing can help secure the IoT][4]
That might sound familiar, and for good reason: companies with any kind of background in network management, from HPE/Aruba to Amazon, have been pushing to promote their systems as the best framework for managing a complicated and often decentralized web of IoT devices from a single platform. By rolling features like software updates, onboarding and security into a single-pane-of-glass management console, those companies are hoping to be the organizational base for customers trying to implement IoT.
Whether they're successful or not remains to be seen. While major IT companies have been trying to capture market share by competing across multiple verticals, the operational orientation of the IoT also means that non-traditional tech vendors with expertise in particular fields (particularly industrial and automotive) are suddenly major competitors.
**Nokia spreads the IoT network wide**
As a giant carrier-equipment vendor, Nokia is an important company in the overall IoT landscape. While some types of enterprise-focused IoT are heavily localized, like connected factory floors or centrally managed office buildings, others are so geographically disparate that carrier networks are the only connectivity medium that makes sense.
The Finnish company earlier this month broadened its footprint in the IoT space, announcing that it had partnered with Nordic Telecom to create a wide-area network focused on enabling IoT and emergency services. The network, which Nokia is billing as the first mission-critical communications network, operates using LTE technology in the 410-430MHz band, a relatively low frequency that allows for better propagation and a wide effective range.
The idea is to provide a high-throughput, low-latency network option to any user on the network, whether it's an emergency services provider needing high-speed video communication or an energy or industrial company with a low-delay-tolerance application.
**Silicon Labs packs more onto IoT chips**
The heart of any IoT implementation remains the SoCs that make devices intelligent in the first place, and Silicon Labs announced that it's building more muscle into its IoT-focused product lineup.
The Austin-based chipmaker said that version 2 of its Wireless Gecko platform will pack more than double the wireless connectivity range of previous entries, which could seriously ease design requirements for companies planning out IoT deployments. The chipsets support Zigbee, Thread and Bluetooth mesh networking, and are designed for line-powered IoT devices, using Arm Cortex-M33 processors for relatively strong computing capacity and high energy efficiency.
Chipset advances aren't the type of thing that will pay off immediately in terms of making IoT devices more capable, but improvements like these make designing IoT endpoints for particular uses that much easier, and new SoCs will begin to filter into line-of-business equipment over time.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3390682/iot-roundup-vmware-nokia-beef-up-their-iot.html#tk.rss_all
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home5-100768494-large.jpg
[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (No, drone delivery still isn't ready for prime time)
[#]: via: (https://www.networkworld.com/article/3390677/drone-delivery-not-ready-for-prime-time.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
No, drone delivery still isn't ready for prime time
======
Despite incremental progress and limited regulatory approval in the U.S. and Australia, drone delivery still isn't a viable option in the vast majority of use cases.
![Sorry imKirk \(CC0\)][1]
April has been a big month for drone delivery. First, [Alphabet's Wing Aviation drones got approval from Australia's Civil Aviation Safety Authority (CASA)][2] for public deliveries in the country, and this week [Wing earned Air Carrier Certification from the U.S. Federal Aviation Administration][3]. These two regulatory wins got a lot of people very excited. Finally, the conventional wisdom exulted, drone delivery is actually becoming a reality.
Not so fast.
### Drone delivery still in pilot/testing mode
Despite some small-scale successes and the first signs of regulatory acceptance, drone delivery remains firmly in the carefully controlled pilot/test phase (and yes, I know drones don't carry pilots).
**[ Also read: [Coffee drone delivery: Ideas this bad could kill the internet of things][4] ]**
For example, despite getting FAA approval to begin commercial deliveries, Wing is still working up to beginning delivery trials in Virginia later this year to test the technology and gather information.
But what about that public approval from CASA for deliveries outside Canberra? That's [a real business][5] now, right?
Well, yes and no.
On the “yes” side, the Aussie approval reportedly came after 18 months of tests, 70,000 flights, and more than 3,000 actual deliveries of products from local coffee shops and pharmacies. So, at least some people somewhere in the world are actually getting stuff dropped at their doors by drones.
In the “no” column, however, goes the fact that the approval covers only about 100 suburban homes, though more are planned to be added “in the coming weeks and months.” More importantly, the approval includes strict limits on when and where the drones can go. No crossing main roads, no nighttime deliveries, and requirements to stay away from people. And drone-eligible customers need a safety briefing!
### Safety briefings required for customers
That still sounds like a small-scale test, not a full-scale commercial roll-out. And while I think drone-safety classes are probably a good idea (and the testing period apparently passed without any injuries), even the perceived _need_ for them is not a great advertisement for rapid expansion of drone deliveries.
Ironically, though, a bigger issue than protecting people from the drones, perhaps, is protecting the drones from people. Instructions to stay 2 to 5 meters away from folks will help, but as I've previously addressed, these things are already seen as attractive nuisances and vandalism targets. Further raising the stakes, many local residents were said to be annoyed at the noise created by the drones. Now imagine those contraptions buzzing right by you all loaded down with steaming hot coffee or delicious ice cream.
And even with all those caveats, no one is talking about the key factors in making drone deliveries a viable business: How much will those deliveries cost, and who will pay? For a while, the desire to explore the opportunity will drive investment, but that won't last forever. If drone deliveries aren't cost effective for businesses, they won't spread very far.
From the customer perspective, most drone delivery tests are not yet charging for the service. If and when they start carrying fees as well as purchases, the novelty factor will likely entice many shoppers to pony up to get their items airlifted directly to their homes. But that also won't last. Drone delivery will have to demonstrate that it's better — faster, cheaper, or more reliable — than the existing alternatives to find its niche.
### Drone deliveries are fast, commercial roll-out will be slow
Long term, I have no doubt that drone delivery will eventually become an accepted part of the transportation infrastructure. I don't necessarily buy into Wing's prediction of an AU $40 million drone delivery market in Australia coming to pass anytime soon, but real commercial operations seem inevitable.
It's just going to be more limited than many proponents claim, and it's likely to take a lot longer than expected to become mainstream. For example, despite ongoing testing, [Amazon has already missed Jeff Bezos' 2018 deadline to begin commercial drone deliveries][6], and we haven't heard much about [Walmart's drone delivery plans][7] lately. And while tests by a number of companies, in locations ranging from Iceland and Finland to the U.K. and the U.S., have created a lot of interest, they have not yet translated into widespread availability.
Apart from the issue of how much consumers really want their stuff delivered by an armada of drones (a [2016 U.S. Post Office study][8] found that 44% of respondents liked the idea, while 34% didn't — and 37% worried that drone deliveries might not be safe), a lot has to happen before that vision becomes reality.
At a minimum, successful implementations of full-scale commercial drone delivery will require better planning, better-thought-out business cases, more rugged and efficient drone technology, and significant advances in flight control and autonomous flight. Like drone deliveries themselves, all that stuff is coming; it just hasn't arrived yet.
**More about drones and the internet of things:**
* [Drone defense -- powered by IoT -- is now a thing][9]
* [Ideas this bad could kill the Internet of Things][4]
* [10 reasons Amazon's drone delivery plan still won't fly][10]
* [Amazons successful drone delivery test doesnt really prove anything][11]
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3390677/drone-delivery-not-ready-for-prime-time.html#tk.rss_all
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/drone_mountains_by_sorry_imkirk_cc0_via_unsplash_1200x800-100763763-large.jpg
[2]: https://medium.com/wing-aviation/wing-launches-commercial-air-delivery-service-in-canberra-5da134312474
[3]: https://medium.com/wing-aviation/wing-becomes-first-certified-air-carrier-for-drones-in-the-us-43401883f20b
[4]: https://www.networkworld.com/article/3301277/ideas-this-bad-could-kill-the-internet-of-things.html
[5]: https://wing.com/australia/canberra
[6]: https://www.businessinsider.com/jeff-bezos-predicted-amazon-would-be-making-drone-deliveries-by-2018-2018-12?r=US&IR=T
[7]: https://www.networkworld.com/article/2999828/walmart-delivery-drone-plans.html
[8]: https://www.uspsoig.gov/sites/default/files/document-library-files/2016/RARC_WP-17-001.pdf
[9]: https://www.networkworld.com/article/3309413/drone-defense-powered-by-iot-is-now-a-thing.html
[10]: https://www.networkworld.com/article/2900317/10-reasons-amazons-drone-delivery-plan-still-wont-fly.html
[11]: https://www.networkworld.com/article/3185478/amazons-successful-drone-delivery-test-doesnt-really-prove-anything.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,52 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Dell EMC and Cisco renew converged infrastructure alliance)
[#]: via: (https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Dell EMC and Cisco renew converged infrastructure alliance
======
Dell EMC and Cisco renewed their agreement to collaborate on converged infrastructure (CI) products for a few more years even though the momentum is elsewhere.
![Dell EMC][1]
Dell EMC and Cisco have renewed a collaboration on converged infrastructure (CI) products that has run for more than a decade, even as the momentum shifts elsewhere. The news was announced via a [blog post][2] by Pete Manca, senior vice president for solutions engineering at Dell EMC.
The deal is centered around Dell EMC's VxBlock product line, which originally started out in 2009 as a joint venture between EMC and Cisco called VCE (Virtual Computing Environment). EMC bought out Cisco's stake in the venture before Dell bought EMC.
The devices offered UCS servers and networking from Cisco, EMC storage, and VMware virtualization software in pre-configured, integrated bundles. VCE was retired in favor of new brands, VxBlock, VxRail, and VxRack. The lineup has been pared down to one device, the VxBlock 1000.
**[ Read also:[How to plan a software-defined data-center network][3] ]**
“The newly inked agreement entails continued company alignment across multiple organizations: executive, product development, marketing, and sales,” Manca wrote in the blog post. “This means we'll continue to share product roadmaps and collaborate on strategic development activities, with Cisco investing in a range of Dell EMC sales, marketing and training initiatives to support VxBlock 1000.”
Dell EMC cites IDC research showing that it holds a 48% market share in converged systems, nearly 1.5 times that of any other vendor. But IDC's April edition of the Converged Systems Tracker said the CI category is on the downswing. CI sales fell 6.4% year over year, while the market for hyperconverged infrastructure (HCI) grew 57.2% year over year.
For the unfamiliar, the primary difference between converged and hyperconverged infrastructure is that CI relies on hardware building blocks, while HCI is software-defined, is considered more flexible and scalable than CI, and operates more like a cloud system, with resources spun up and down as needed.
Despite this, Dell is committed to CI systems. Just last month it announced an update and expansion of the VxBlock 1000, including higher scalability, a broader choice of components, and the option to add new technologies. The update featured improved VMware vRealize and vSphere support, the option to consolidate high-value, mission-critical workloads with new storage and data protection options, and support for Cisco UCS fabric and servers.
For customers who prefer to build their own infrastructure solutions, Dell EMC introduced Ready Stack, a portfolio of validated designs with sizing, design, and deployment resources that offer VMware-based IaaS, vSphere on Dell EMC PowerEdge servers and Dell EMC Unity storage, and Microsoft Hyper-V on Dell EMC PowerEdge servers and Dell EMC Unity storage.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/dell-emc-vxblock-1000-100794721-large.jpg
[2]: https://blog.dellemc.com/en-us/dell-technologies-cisco-reaffirm-joint-commitment-converged-infrastructure/
[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco goes all in on WiFi 6)
[#]: via: (https://www.networkworld.com/article/3391919/cisco-goes-all-in-on-wifi-6.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco goes all in on WiFi 6
======
Cisco rolls out Catalyst and Meraki WiFi 6-based access points, Catalyst 9000 switch
![undefined / Getty Images][1]
Cisco has taken the wraps off a family of WiFi 6 access points, roaming technology and developer-community support, all in an effort to make wireless a solid enterprise equal of the wired world.
“Best-effort wireless for enterprise customers doesn't cut it any more. There's been a change in customer expectations that there will be an uninterrupted unplugged experience,” said Scott Harrell, senior vice president and general manager of enterprise networking at Cisco. “It is now a wireless-first world.”
**More about 802.11ax (Wi-Fi 6)**
* [Why 802.11ax is the next big thing in wireless][2]
* [FAQ: 802.11ax Wi-Fi][3]
* [Wi-Fi 6 (802.11ax) is coming to a router near you][4]
* [Wi-Fi 6 with OFDMA opens a world of new wireless possibilities][5]
* [802.11ax preview: Access points and routers that support Wi-Fi 6 are on tap][6]
Bringing a wireless-first enterprise world together is one of the drivers behind a new family of WiFi 6-based access points (AP) for Cisco's Catalyst and Meraki portfolios. WiFi 6 (802.11ax) is designed for high-density public or private environments. But it also will be beneficial in internet of things (IoT) deployments, and in offices that use bandwidth-hogging applications like videoconferencing.
The Cisco Catalyst 9100 family and Meraki [MR 45/55][7] WiFi 6 access points are built on Cisco silicon and communicate via pre-802.11ax protocols. The silicon in these access points now acts as a rich sensor, providing IT with real-time insight into what is going on in the wireless network, which enables faster reactions to problems and security concerns, Harrell said.
Aside from WiFi 6, the boxes include support for visibility and communications with Zigbee, BLE and Thread protocols. The Catalyst APs support uplink speeds of 2.5 Gbps, in addition to 100 Mbps and 1 Gbps. All speeds are supported on Category 5e cabling, an industry first, as well as on 10GBASE-T (IEEE 802.3bz) cabling, Cisco said.
Wireless traffic aggregates onto wired networks, so the wired network must also evolve. Technology like multi-gigabit Ethernet must be driven into the access layer, which in turn drives higher bandwidth needs at the aggregation and core layers, [Harrell said][8].
Handling this influx of wireless traffic was part of the reason Cisco also upgraded its iconic Catalyst 6000 with the [Catalyst 9600 this week][9]. The 9600 brings with it support for Cat 6000 features such as MPLS, virtual switching and IPv6, while adding or bolstering support for wireless networks as well as intent-based networking (IBN) and security segmentation. The 9600 helps fill out the company's revamped lineup, which includes the 9200 family of access switches, the 9500 aggregation switch and the 9800 wireless controller.
“WiFi doesn't exist in a vacuum; how it connects to the enterprise and the data center or the internet is key, and in Cisco's case that key is now the 9600, which has been built to handle the increased traffic,” said Lee Doyle, principal analyst with Doyle Research.
The new 9600 ties in with the recently [released Catalyst 9800][10], which features 40Gbps to 100Gbps performance depending on the model, hot-patching to simplify updates and eliminate update-related downtime, Encrypted Traffic Analytics (ETA), policy-based micro- and macro-segmentation, and Trustworthy solutions to detect malware on wired or wireless connected devices, Cisco said.
All Catalyst 9000 family members support other Cisco products such as [DNA Center][11], which controls automation capabilities, assurance settings, fabric provisioning and policy-based segmentation for enterprise wired and wireless networks.
The new APs are pre-standard, but other vendors, including Aruba and NetGear, are also selling pre-standard 802.11ax devices. Cisco getting into the market solidifies the validity of this strategy, said Brandon Butler, a senior research analyst with IDC.
Many experts [expect the standard][12] to be ratified late this year.
“We expect to see volume shipments of WiFi 6 products by early next year and it being the de facto WiFi standard by 2022,” Butler said.
On top of the APs and 9600 switch, Cisco extended its software development community [DevNet][13] to offer WiFi 6 learning labs, sandboxes and developer resources.
The Cisco Catalyst and Meraki access platforms are open and programmable all the way down to the chipset level, allowing applications to take advantage of network programmability, Cisco said.
Cisco also said its ongoing [OpenRoaming][14] project has added more partners, which now include Apple, Samsung, Boingo, Presidio and Intel. OpenRoaming, which is in beta, promises to let users move seamlessly between wireless networks and LTE without interruption.
Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3391919/cisco-goes-all-in-on-wifi-6.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/cisco_catalyst_wifi_coffee-cup_coffee-beans_-100794990-large.jpg
[2]: https://www.networkworld.com/article/3215907/mobile-wireless/why-80211ax-is-the-next-big-thing-in-wi-fi.html
[3]: https://www.networkworld.com/article/3048196/mobile-wireless/faq-802-11ax-wi-fi.html
[4]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
[5]: https://www.networkworld.com/article/3332018/wi-fi/wi-fi-6-with-ofdma-opens-a-world-of-new-wireless-possibilities.html
[6]: https://www.networkworld.com/article/3309439/mobile-wireless/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
[7]: https://meraki.cisco.com/lib/pdf/meraki_datasheet_MR55.pdf
[8]: https://blogs.cisco.com/news/unplugged-and-uninterrupted
[9]: https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html
[10]: https://www.networkworld.com/article/3321000/cisco-links-wireless-wired-worlds-with-new-catalyst-9000-switches.html
[11]: https://www.networkworld.com/article/3280988/cisco/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html
[12]: https://www.networkworld.com/article/3336263/is-jumping-ahead-to-wi-fi-6-the-right-move.html
[13]: https://developer.cisco.com/wireless/?utm_campaign=colaunch-wireless19&utm_source=pressrelease&utm_medium=ciscopress-wireless-main
[14]: https://www.cisco.com/c/en/us/solutions/enterprise-networks/802-11ax-solution/openroaming.html
[15]: https://www.facebook.com/NetworkWorld/
[16]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to shop for enterprise firewalls)
[#]: via: (https://www.networkworld.com/article/3390686/how-to-shop-for-enterprise-firewalls.html#tk.rss_all)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
How to shop for enterprise firewalls
======
Performance, form factors, and automation capabilities are key considerations when choosing a next-generation firewall (NGFW).
Firewalls have been around for years, but the technology keeps evolving as the threat landscape changes. Here are some tips about what to look for in a next-generation firewall ([NGFW][1]) that will satisfy business needs today and into the future.
### Don't trust firewall performance stats
Understanding how a NGFW performs requires more than looking at a vendor's specification or running a bit of traffic through it. Most [firewalls][2] will perform well when traffic loads are light. It's important to see how a firewall responds at scale, particularly when [encryption][3] is turned on. Roughly 80% of traffic is encrypted today, and the ability to maintain performance levels with high volumes of encrypted traffic is critical.
**Learn more about network security**
* [How to boost collaboration between network and security teams][4]
* [What is microsegmentation? How getting granular improves network security][5]
* [What to consider when deploying a next-generation firewall][1]
* [How firewalls fit into enterprise security][2]
Also, be sure to turn on all major functions including application and user identification, IPS, anti-malware, URL filtering and logging during testing to see how a firewall will hold up in a production setting. Firewall vendors often tout a single performance number that's achieved with core features turned off. Data from [ZK Research][6] shows many IT pros learn this lesson the hard way: 58% of security professionals polled said they were forced to turn off features to maintain performance.
Before committing to a vendor, be sure to run tests with as many different types of traffic as possible and with various types of applications. Important metrics to look at include application throughput, connections per second, maximum sessions for both IPv4 and [IPv6][7], and SSL performance.
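Before a formal proof of concept, open-source tools can generate a rough approximation of some of these loads. The sketch below assumes a test host on each side of the firewall and uses placeholder addresses; it is no substitute for a full traffic-mix test:

```
# Throughput: push 10 parallel TCP streams through the firewall for 60 seconds
# (192.0.2.10 is a placeholder server on the far side)
iperf3 -c 192.0.2.10 -P 10 -t 60

# Connection rate: 10,000 HTTPS requests, 100 concurrent, against a test
# web server behind the firewall (URL is illustrative)
ab -n 10000 -c 100 https://192.0.2.10/

# SSL performance: count TLS connections completed in 30 seconds
openssl s_time -connect 192.0.2.10:443 -time 30
```

Repeat the same runs with IPS, anti-malware and URL filtering enabled to see how much headroom disappears.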
### NGFW needs to fit into broader security platform
Is it better to have a best-of-breed strategy or go with a single vendor for security? The issue has been debated for years, but the fact is, neither approach works flawlessly. It's important to understand that best-of-breed everywhere doesn't ensure best-in-class security. In fact, the opposite is typically true; having too many vendors can lead to complexity that can't be managed, which puts a business at risk. A better approach is a security platform, an open architecture that third-party products can be plugged into.
Any NGFW must be able to plug into a platform so it can "see" everything from IoT endpoints to cloud traffic to end-user devices. Also, once the NGFW has aggregated the data, it should be able to perform analytics to provide insights. This will enable the NGFW to take action and enforce policies across the network.
### Multiple form factors, consistent security features
Firewalls used to be relegated to corporate data centers. Today, networks have opened up, and customers need a consistent feature set at every point in the network. NGFW vendors should have the following form factors available to optimize price/performance:
* Data center
* Internet edge
* Midsize branch office
* Small branch office
* Ruggedized for IoT environments
* Cloud delivered
* Virtual machines that can run in private and public clouds
Also, NGFW vendors should have a roadmap for a containerized form factor. This certainly isn't a trivial task. Most vendors won't have a [container][8]-ready product yet, but they should be able to speak to how they plan to address the problem.
### Single-pane-of-glass firewall management
Having a broad product line doesn't matter if products need to be managed individually. This makes it hard to keep policies and rules up to date and leads to inconsistencies in features and functions. A firewall vendor must have a single management tool that provides end-to-end visibility and enables the administrator to make a change and push it out across the network at once. Visibility must extend everywhere, including the cloud, [IoT][9] edge, operational technology (OT) environments, and branch offices. A single dashboard is also the right place to implement and maintain software-based segmentation instead of having to configure each device.
### Firewall automation capabilities
The goal of [automation][10] is to help remove many of the manual steps that create "human latency" in the security process. Almost all vendors tout some automation capabilities as a way of saving on headcount, but automation goes well beyond that.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3390686/how-to-shop-for-enterprise-firewalls.html#tk.rss_all
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236448/what-to-consider-when-deploying-a-next-generation-firewall.html
[2]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[3]: https://www.networkworld.com/article/3098354/enterprise-encryption-adoption-up-but-the-devils-in-the-details.html
[4]: https://www.networkworld.com/article/3328218/how-to-boost-collaboration-between-network-and-security-teams.html
[5]: https://www.networkworld.com/article/3247672/what-is-microsegmentation-how-getting-granular-improves-network-security.html
[6]: https://zkresearch.com/
[7]: https://www.networkworld.com/article/3254575/what-is-ipv6-and-why-aren-t-we-there-yet.html
[8]: https://www.networkworld.com/article/3159735/containers-what-are-containers.html
[9]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[10]: https://www.networkworld.com/article/3184389/automation-rolls-on-what-are-you-doing-about-it.html

View File

@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600)
[#]: via: (https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600
======
Cisco introduced the Catalyst 9600 switches, which let customers automate, set policy, provide security and gain assurance across wired and wireless networks.
![Martyn Williams][1]
Few events in the tech industry are truly transformative, but Cisco's replacement of its core Catalyst 6000 family could be one of those actions for customers and the company.
Introduced in 1999, [iterations of the Catalyst 6000][2] have nestled into the core of scores of enterprise networks, with the model 6500 becoming the company's largest selling box ever.
**Learn about edge networking**
* [How edge networking and IoT will reshape data centers][3]
* [Edge computing best practices][4]
* [How edge computing can help secure the IoT][5]
There is no question that migrating these customers to the new switch, the Catalyst 9600, which the company introduced today, will be of monumental importance to Cisco as it looks to revamp and continue to dominate large campus-core deployments. The first [Catalyst 9000][6], introduced in June 2017, is already the fastest-ramping product line in Cisco's history.
“There are at least tens of thousands of Cat 6000s running in campus cores all over the world,” said [Sachin Gupta][7], senior vice president for product management at Cisco. “It is the Swiss Army knife of switches in terms of features, and we have taken great care over two years developing feature parity and an easy migration path for those users to the Cat 9000.”
Indeed, the 9600 brings with it support for Cat 6000 features such as MPLS, virtual switching and IPv6, while adding or bolstering support for newer items such as intent-based networking (IBN), wireless networks and security segmentation. Strategically, the 9600 helps fill out the company's revamped lineup, which includes the 9200 family of access switches, the [9500][8] aggregation switch and the [9800 wireless controller][9].
Some of the nitty-gritty details about the 9600:
* It is a purpose-built 40 Gigabit and 100 Gigabit Ethernet line of modular switches targeted for the enterprise campus with a wired switching capacity of up to 25.6 Tbps, with up to 6.4 Tbps of bandwidth per slot.
* The switch supports granular port densities that fit diverse campus needs, including nonblocking 40 Gigabit and 100 Gigabit Ethernet Quad Small Form-Factor Pluggable (QSFP+, QSFP28) and 1, 10, and 25 GE Small Form-Factor Pluggable Plus (SFP, SFP+, SFP28).
* With the Cisco Catalyst 9600 Series Supervisor Engine 1, it can be configured to support up to 48 nonblocking 100 Gigabit Ethernet QSFP28 ports, up to 96 nonblocking 40 Gigabit Ethernet QSFP+ ports, or up to 192 nonblocking 25 Gigabit/10 Gigabit Ethernet SFP28/SFP+ ports.
* It supports advanced routing and infrastructure services (MPLS, Layer 2 and Layer 3 VPNs, Multicast VPN, and Network Address Translation).
* It supports Cisco Software-Defined Access capabilities (such as a host-tracking database, cross-domain connectivity, and VPN Routing and Forwarding [VRF]-aware Locator/ID Separation Protocol) and network system virtualization with Cisco StackWise virtual technology.
The 9600 series runs Cisco's IOS XE software, which now runs across all Catalyst 9000 family members. The software brings with it support for other key products such as Cisco's [DNA Center][10], which controls automation capabilities, assurance settings, fabric provisioning and policy-based segmentation for enterprise networks. What that means is that with one user interface, DNA Center, customers can automate, set policy, provide security and gain assurance across the entire wired and wireless network fabric, Gupta said.
“The 9600 is a big deal for Cisco and customers, as it brings together the campus core and lets users establish standard access and usage policies across their wired and wireless environments,” said Brandon Butler, a senior research analyst with IDC. “It was important that Cisco add a powerful switch to handle the increasing amounts of traffic wireless and cloud applications are bringing to the network.”
IOS XE brings with it automated device provisioning and a wide variety of automation features, including support for the network configuration protocols NETCONF and RESTCONF using YANG data models. The software offers near-real-time monitoring of the network, leading to quick detection and rectification of failures, Cisco says.
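As a hedged illustration of what that programmability looks like in practice, the sketch below queries an IOS XE device over RESTCONF; the address and credentials are placeholders, and the available YANG paths depend on the software image:

```
# Fetch the configured hostname from an IOS XE device via RESTCONF
# (192.0.2.1, admin and password are lab placeholders)
curl -k -u admin:password \
  -H "Accept: application/yang-data+json" \
  "https://192.0.2.1/restconf/data/Cisco-IOS-XE-native:native/hostname"
```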
The software also supports hot patching, which provides fixes for critical bugs and security vulnerabilities between regular maintenance releases. This support lets customers add patches without having to wait for the next maintenance release, Cisco says.
As with the rest of the Catalyst family, the 9600 is available via subscription-based licensing. Cisco says the [base licensing package][11] includes Network Advantage licensing options that are tied to the hardware. The base licensing packages cover switching fundamentals, management automation, troubleshooting, and advanced switching features. These base licenses are perpetual.
An add-on licensing package includes the Cisco DNA Premier and Cisco DNA Advantage options. The Cisco DNA add-on licenses are available as a subscription.
IDC's Butler noted that there are competitors such as Ruckus, Aruba and Extreme that offer switches capable of melding wired and wireless environments.
The new switch is built for the next two decades of networking, Gupta said. “If any of our competitors thought they could just go in and replace the Cat 6k, they were misguided.”
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/02/170227-mwc-02759-100710709-large.jpg
[2]: https://www.networkworld.com/article/2289826/133715-The-illustrious-history-of-Cisco-s-Catalyst-LAN-switches.html
[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[5]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[6]: https://www.networkworld.com/article/3256264/cisco-ceo-we-are-still-only-on-the-front-end-of-a-new-version-of-the-network.html
[7]: https://blogs.cisco.com/enterprise/looking-forward-catalyst-9600-switch-and-9100-access-point-meraki
[8]: https://www.networkworld.com/article/3202105/cisco-brings-intent-based-networking-to-the-end-to-end-network.html
[9]: https://www.networkworld.com/article/3321000/cisco-links-wireless-wired-worlds-with-new-catalyst-9000-switches.html
[10]: https://www.networkworld.com/article/3280988/cisco/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html
[11]: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9600/software/release/16-11/release_notes/ol-16-11-9600.html#id_67835
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Measuring the edge: Finding success with edge deployments)
[#]: via: (https://www.networkworld.com/article/3391570/measuring-the-edge-finding-success-with-edge-deployments.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)
Measuring the edge: Finding success with edge deployments
======
To make the most of edge computing investments, it's important to first understand objectives and expectations.
![iStock][1]
Edge computing deployments are well underway as companies seek to better process the wealth of data being generated, for example, by Internet of Things (IoT) devices.
So, what are the results? Plus, how can you ensure success with your own edge projects?
**Measurements of success**
The [use cases for edge computing deployments][2] vary widely, as do the business drivers and, ultimately, the benefits.
Whether they're seeking improved network or application performance, real-time data analytics, a better customer experience, or other efficiencies, enterprises are accomplishing their goals. Based on two surveys — one by [_Automation World_][3] and another by [Futurum Research][4] — respondents have reported:
* Decreased network downtime
* Increased productivity
* Increased profitability/reduced costs
* Improved business processes
Basically, success metrics can be bucketed into two categories: direct value propositions and cost reductions. In the former, companies are seeking results that measure revenue generation — such as improved digital experiences with customers and users. In the latter, metrics that prove the value of digitizing processes — like speed, quality, and efficacy — will demonstrate success with edge deployments.
**Goalposts for success with edge**
Edge computing deployments are underway. But before diving in, understand what's driving these projects.
According to Futurum Research, 29% of respondents are currently investing in edge infrastructure, and 62% expect to adopt within the year. For these companies, the driving force has been the business, which expects operational efficiencies from these investments. Beyond that, there's an expectation down the road to better align with IoT strategy.
That being the case, it's worth considering your business case before diving into edge. Ask: Are you seeking innovation and revenue generation amid digital transformation efforts? Or is your company looking for a low-risk, “test the waters” type of investment in edge?
Next up, what type of edge project makes sense for your environment? Edge data centers fall into three categories: local devices for a specific purpose (e.g., an appliance for security systems or a gateway for cloud-to-premises storage); small local data centers (typically one to 10 racks for storage and processing); and regional data centers (10+ racks for large office spaces).
Then, think about these best practices before talking with vendors:
* Management: Especially for unmanned edge data centers, remote management is critical. You'll need predictive alerts and a service contract that enables IT to be just as resilient as a regular data center.
* Security: Much of today's conversation has been around data security. That starts with physical protection. Too many data breaches — including theft and employee error — are caused by physical access to IT assets.
* Standardization: There is no need to reinvent the wheel when it comes to edge data center deployments. Using standard components makes it easier for the internal IT team to deploy, manage, and maintain.
Finally, consider the ecosystem. The end-to-end nature of edge requires not just technology integration, but also that all third parties work well together. A good ecosystem connects customers, partners, and vendors.
Get further information to kickstart your edge computing environment at [APC.com][5].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3391570/measuring-the-edge-finding-success-with-edge-deployments.html#tk.rss_all
作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-912928582-100795093-large.jpg
[2]: https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html
[3]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
[4]: https://futurumresearch.com/edge-computing-from-edge-to-enterprise/
[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp

View File

@ -0,0 +1,54 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Must-know Linux Commands)
[#]: via: (https://www.networkworld.com/article/3391029/must-know-linux-commands.html#tk.rss_all)
[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/)
Must-know Linux Commands
======
A compilation of essential commands for searching, monitoring and securing Linux systems - plus the Linux Command Line Cheat Sheet.
It takes some time working with Linux commands before you know which one you need for the task at hand, how to format it and what result to expect, but it's possible to speed up the process.
With that in mind, we've gathered together some of the essential Linux commands as explained by Network World's [Unix as a Second Language][1] blogger Sandra Henry-Stocker to give aspiring power users what they need to get started with Linux systems.
The breakdown starts with the rich options available for finding files on Linux: **find**, **locate**, **mlocate**, **which**, and **whereis**, to name a few. Some overlap, but some are more efficient than others or zoom in on a narrow set of results. There's even a command, **apropos**, to find the right command to do what you want to do. This section will demystify searches.
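A few hedged examples of those search commands (paths and search terms are illustrative):

```
# Walk a directory tree for log files modified in the last day
find /var/log -name "*.log" -mtime -1

# Query the prebuilt file-name database (much faster than find)
locate sshd_config

# Show which executable a command resolves to, and where related files live
which python3
whereis bash

# Not sure of the command name? Search man-page descriptions by keyword
apropos "list directory"
```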
Henry-Stocker's article on memory provides a wealth of options for discovering the availability of physical and virtual memory, and ways to have that information updated at intervals to constantly measure whether there's enough memory to go around. It shows how it's possible to tailor your requests so you get a concise presentation of the results you seek.
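A hedged sketch of that pattern (intervals and counts are arbitrary):

```
# One-shot summary of physical and swap memory, human-readable
free -h

# Refresh the same summary every five seconds to watch the trend
watch -n 5 free -h

# Report virtual-memory statistics three times, five seconds apart
vmstat 5 3
```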
Two remaining articles in this package show how to monitor activity on Linux servers and how to set up security parameters on these systems.
The first of these shows how to run the same command repetitively in order to get regular updates about any designated activity. It also covers a command that focuses on user processes and shows changes as they occur, and a command that examines the time that users are connected.
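Hedged examples of those monitoring ideas:

```
# Re-run a process listing every two seconds, sorted by CPU usage
watch -n 2 'ps aux --sort=-%cpu | head'

# Show who is logged in, from where, and what they are running
w

# Show recent login sessions and their durations
last | head
```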
The final article is a deep dive into commands that help keep Linux systems secure. It describes 22 of them that are essential for day-to-day admin work. They can restrict privileges to keep individuals from having more capabilities than their jobs call for and report on who's logged in, where from and how long they've been there.
Some of these commands can track recent logins for individuals, which can be useful in running down who made changes. Others find files with varying characteristics, such as having no owner, or find them by their contents. There are commands to control firewalls and to display routing tables; a few examples are sketched below.
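For instance (the username is a placeholder):

```
# Recent logins for one account, useful when tracing who made a change
last -n 5 jdoe

# Find files that belong to no known user or group
sudo find / -xdev \( -nouser -o -nogroup \)

# Inspect firewall rules and the kernel routing table
sudo iptables -L -n
ip route show
```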
As a bonus, our bundle of commands includes **The Linux Command-Line Cheat Sheet**, a concise summary of important commands that are useful every single day. It's suitable for printing out on two sides of a single sheet, laminating and keeping beside your keyboard.
Enjoy!
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3391029/must-know-linux-commands.html#tk.rss_all
作者:[Tim Greene][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Tim-Greene/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/blog/unix-as-a-second-language/?nsdr=true

View File

@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco issues critical security warning for Nexus data-center switches)
[#]: via: (https://www.networkworld.com/article/3392858/cisco-issues-critical-security-warning-for-nexus-data-center-switches.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco issues critical security warning for Nexus data-center switches
======
Cisco released 40 security advisories around Nexus switches, Firepower firewalls and more
![Thinkstock][1]
Cisco issued some 40 security advisories today, but only one of them was deemed “[critical][2]”: a vulnerability in the Cisco Nexus 9000 Series Application Centric Infrastructure (ACI) Mode data-center switch that could let an attacker secretly access system resources.
The exposure, which was given a Common Vulnerability Scoring System score of 9.8 out of 10, is described as a problem with secure shell (SSH) key management for the Cisco Nexus 9000 that lets a remote attacker connect to the affected system with the privileges of a root user, Cisco said.
**[ Read also:[How to plan a software-defined data-center network][3] ]**
“The vulnerability is due to the presence of a default SSH key pair that is present in all devices. An attacker could exploit this vulnerability by opening an SSH connection via IPv6 to a targeted device using the extracted key materials. This vulnerability is only exploitable over IPv6; IPv4 is not vulnerable," Cisco wrote.
This vulnerability affects Nexus 9000s if they are running a Cisco NX-OS software release prior to 14.1, and the company said there were no workarounds to address the problem.
However, Cisco has [released free software updates][4] that address the vulnerability.
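Before patching, a quick, hedged first check might confirm the running release and whether SSH is reachable over IPv6, the only exploitable path; the hostname below is a placeholder:

```
# Report the NX-OS release on a Nexus 9000 (nexus9k-01 is illustrative)
ssh admin@nexus9k-01 'show version | include NXOS:'

# Check whether the switch accepts SSH connections over IPv6
nc -6 -zv nexus9k-01 22
```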
The company also issued a “high” security warning advisory for the Nexus 9000 that involves an exploit that would let attackers execute arbitrary operating-system commands as root on an affected device. To succeed, an attacker would need valid administrator credentials for the device, Cisco said.
The vulnerability is due to overly broad system-file permissions, [Cisco wrote][5]. An attacker could exploit this vulnerability by authenticating to an affected device, creating a crafted command string and writing this crafted string to a specific file location.
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][6] ]**
Cisco has released software updates that address this vulnerability.
Two other vulnerabilities rated “high” also involved the Nexus 9000:
* [A vulnerability][7] in the background-operations functionality of Cisco Nexus 9000 software could allow an authenticated, local attacker to gain elevated privileges as root on an affected device. The vulnerability is due to insufficient validation of user-supplied files on an affected device. Cisco said an attacker could exploit this vulnerability by logging in to the CLI of the affected device and creating a crafted file in a specific directory on the filesystem.
* A [weakness][7] in the background-operations functionality of the switch software could allow an attacker to log in to the CLI of the affected device and create a crafted file in a specific directory on the filesystem. The vulnerability is due to insufficient validation of user-supplied files on an affected device, Cisco said.
Cisco has [released software][4] for these vulnerabilities as well.
Also part of these security alerts were a number of “high”-rated warnings about vulnerabilities in Cisco's Firepower firewall series.
For example, Cisco [wrote][8] that multiple vulnerabilities in the Server Message Block Protocol preprocessor detection engine for Cisco Firepower Threat Defense Software could allow an unauthenticated, adjacent or remote attacker to cause a denial-of-service (DoS) condition.
Yet [another vulnerability][9] in the internal packet-processing functionality of Cisco Firepower software for the Cisco Firepower 2100 Series could let an unauthenticated, remote attacker cause an affected device to stop processing traffic, resulting in a DoS situation, Cisco said.
[Software patches][4] are available for these vulnerabilities.
Other products, such as the Cisco [Adaptive Security Virtual Appliance][10] and [Web Security Appliance][11], had high-priority patches as well.
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3392858/cisco-issues-critical-security-warning-for-nexus-data-center-switches.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/02/lock_broken_unlocked_binary_code_security_circuits_protection_privacy_thinkstock_873916354-100750739-large.jpg
[2]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-nexus9k-sshkey
[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[4]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-nexus9k-rpe
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-aci-hw-clock-util
[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-frpwr-smb-snort
[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-frpwr-dos
[10]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-asa-ipsec-dos
[11]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-wsa-privesc
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Health care is still stitching together IoT systems)
[#]: via: (https://www.networkworld.com/article/3392818/health-care-is-still-stitching-together-iot-systems.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Health care is still stitching together IoT systems
======
The use of IoT technology in medicine is fraught with complications, but users are making it work.
_Government regulations, safety and technical integration are all serious issues facing the use of IoT in medicine, but professionals in the field say that medical IoT is moving forward despite the obstacles. A vendor, a doctor, and an IT pro all spoke to Network World about the work involved._
### Vendor: It's tough to gain acceptance
Josh Stein is the CEO and co-founder of Adheretech, a medical-IoT startup whose main product is a connected pill bottle. The idea is to help keep seriously ill patients current with their medications by monitoring whether they've taken correct dosages or not.
The bottle, which patients get for free (Adheretech's clients are hospitals and clinics), uses a cellular modem to call home to the company's servers and report on how much medication is left in the bottle, based on sensors that detect how many pills are touching the bottle's sides and measure its weight. There, the data is analyzed not just to determine whether patients are sticking to their doctor's prescription, but to help identify possible side effects and whether patients need additional help.
For example, a bottle that detects itself being moved to the bathroom too often might send up a flag that the patient is experiencing gastrointestinal side effects. The system can then contact patients or providers via phone or text to help them take the next steps.
The challenges to reach this point have been stiff, according to Stein. The company was founded in 2011 and spent the first four years of its existence simply designing and building its product.
“We had to go through many years of R&D to create a device that's replicable a million times over,” he said. “If you're a healthcare company, you have to deal with HIPAA, the FDA, and then there's lots of other things; like medication bottles have their whole own set of regulatory certifications.”
Beyond the simple fact of regulatory compliance, Stein said that there's resistance to this sort of new technology in the medical community.
“Healthcare is typically one of the last industries to adopt new technology,” he said.
### Doctor: Colleagues wonder if medical IoT plusses are worth the trouble
Dr. Rebecca Mishuris is the associate chief medical information officer at Boston Medical Center, a private non-profit hospital located in the South End. One of the institution's chief missions is to act as a safety net for the population of the area: 57% of BMC's patients come from under-served populations, and roughly a third don't speak English as a primary language. That, in itself, can be a problem for IoT because many devices are designed to be used by native English speakers.
BMC's adoption of IoT tech has taken place mostly at the individual-practice level (things like Bluetooth-enabled scales and diagnostic equipment for specific offices that want to use them), but there's no hospital-wide IoT initiative happening, according to Mishuris.
That's partially due to the fact that many practitioners aren't convinced that connected healthcare devices are worth the trouble to purchase, install and manage, she said. HIPAA compliance and BMC's own privacy regulations are a particular concern, given that many of the devices deal with patient-generated data.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3392818/health-care-is-still-stitching-together-iot-systems.html#tk.rss_all
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Vapor IO provides direct, high-speed connections from the edge to AWS)
[#]: via: (https://www.networkworld.com/article/3391922/vapor-io-provides-direct-high-speed-connections-from-the-edge-to-aws.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Vapor IO provides direct, high-speed connections from the edge to AWS
======
With a direct fiber line, latency between the edge and the cloud can be dramatically reduced.
![Vapor IO][1]
Edge computing startup Vapor IO now offers a direct connection between its edge containers and Amazon Web Services (AWS) via a high-speed fiber network link.
The company said that the connection between its Kinetic Edge containers and AWS will be provided by Crown Castle's Cloud Connect fiber network, which uses Amazon Direct Connect Services. This would help reduce network latency by essentially drawing a straight fiber line from Vapor IO's edge computing data centers to Amazon's cloud computing data centers.
“When combined with Crown Castle's high-speed Cloud Connect fiber, the Kinetic Edge lets AWS developers build applications that span the entire continuum from core to edge. By enabling new classes of applications at the edge, we make it possible for any AWS developer to unlock the next generation of real-time, innovative use cases,” wrote Matt Trifiro, chief marketing officer of Vapor IO, in a [blog post][2].
**[ Read also:[What is edge computing and how its changing the network][3] ]**
Vapor IO claims that the connection will lower latency by as much as 75%. “Connecting workloads and data at the Kinetic Edge with workloads and data in centralized AWS data centers makes it possible to build edge applications that leverage the full power of AWS,” wrote Trifiro.
Developers building applications at the Kinetic Edge will have access to the full suite of AWS cloud computing services, including Amazon Simple Storage Service (Amazon S3), Amazon Elastic Cloud Compute (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and Amazon Relational Database Service (Amazon RDS).
Crown Castle is the largest provider of shared communications infrastructure in the U.S., with 40,000 cell towers and 60,000 miles of fiber, offering 1Gbps to 10Gbps private fiber connectivity between the Kinetic Edge and AWS.
AWS Direct Connect is essentially a private connection between Amazon's AWS customers and the AWS data centers, so customers don't have to route their traffic over the public internet and compete with Netflix and YouTube, for example, for bandwidth.
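As a hedged illustration, a customer with configured AWS credentials could list existing Direct Connect links from the command line (the region is arbitrary):

```
# List the Direct Connect connections visible to this AWS account
aws directconnect describe-connections --region us-east-1
```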
### How edge computing works
The structure of [edge computing][3] is the reverse of the standard internet design. Rather than sending all the data up to central servers, as much processing as possible is done at the edge. This is to reduce the sheer volume of data coming upstream and thus reduce latency.
With things like smart cars, even if 95% of data is eliminated, the remaining 5% can still be a lot, so moving it fast is essential. Vapor IO said it will shuttle workloads to Amazon's US East and US West data centers, depending on location.
This shows how the edge is up-ending the traditional internet design and moving more computing outside the traditional data center, although a connection upstream is still important because it allows for rapid movement of necessary data from the edge to the cloud, where it can be stored or processed.
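A toy sketch of that edge-filtering pattern, with a made-up threshold, data source and ingest URL, just to make the idea concrete:

```
# Forward only anomalous sensor readings upstream; drop the routine bulk
THRESHOLD=90
tail -f /var/log/sensor-readings.log | while read -r reading; do
  if [ "$reading" -gt "$THRESHOLD" ]; then
    curl -s -X POST "https://cloud.example.com/ingest" -d "value=$reading"
  fi
done
```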
**More about edge networking:**
* [How edge networking and IoT will reshape data centers][4]
* [Edge computing best practices][5]
* [How edge computing can help secure the IoT][6]
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3391922/vapor-io-provides-direct-high-speed-connections-from-the-edge-to-aws.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/09/vapor-io-kinetic-edge-data-center-100771510-large.jpg
[2]: https://www.vapor.io/powering-amazon-web-services-at-the-kinetic-edge/
[3]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[5]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[6]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Yet another killer cloud quarter puts pressure on data centers)
[#]: via: (https://www.networkworld.com/article/3391465/another-strong-cloud-computing-quarter-puts-pressure-on-data-centers.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
Yet another killer cloud quarter puts pressure on data centers
======
Continued strong growth from Amazon Web Services, Microsoft Azure, and Google Cloud Platform signals even more enterprises are moving to the cloud.
![Getty Images][1]
You'd almost think I'd get tired of [writing this story over and over and over][2]… but the ongoing growth of cloud computing is too big a trend to ignore.
Critically, the impressive growth numbers of the three leading cloud infrastructure providers—Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform—don't occur in a vacuum. It's not just about new workloads being run in the cloud; it's also about more and more enterprises moving existing workloads to the cloud from on-premises data centers.
**[ Also read:[Is the cloud already killing the enterprise data center?][3] ]**
To put these trends in perspective, let's take a look at the results for all three vendors.
### AWS keeps on trucking
AWS remains by far the dominant player in the cloud infrastructure market, with a massive [$7.7 billion in quarterly sales][4] (an annual run rate of a whopping $30.8 billion). Even more remarkable, AWS somehow continues to grow revenue by almost 42% year over year. Oh, and that kind of growth is not unique to _this_ quarter; the unit has topped 40% revenue growth _every_ quarter since the beginning of 2017. (To be fair, the first quarter of 2018 saw an amazing 49% revenue growth.)
And unlike many fast-growing tech companies, that incredible expansion isn't being fueled by equally impressive losses. AWS earned a healthy $2.2 billion operating profit in the quarter, up 59% from the same period last year. One reason? [The company told analysts][5] it made big data center investments in 2016 and 2017, so it hasn't had to do so more recently (it expects to boost spending on data centers later this year). The company [reportedly][6] described AWS revenue growth as “lumpy,” but it seems to me that the numbers merely vary between huge and even bigger.
### Microsoft Azure grows even faster than AWS
Sure, 41% growth is good, but [Microsoft's quarterly Azure revenue][7] almost doubled that, jumping 73% year over year (fairly consistent with the previous—also stellar—quarter), helping the company exceed estimates for both sales and revenue and sparking a brief shining moment of a $1 trillion valuation for the company. Microsoft doesn't break out Azure's sales and revenue, but [the commercial cloud business, which includes Azure as well as other cloud businesses, grew 41% in the quarter to $9.6 billion][8].
It's impossible to tell exactly how big Azure is, but it appears to be growing faster than AWS, though off a much smaller base. While some analysts reportedly say Azure is growing faster than AWS was at a similar stage in its development, that may not bear much significance because the addressable cloud market is now far larger than it used to be.
According to [the New York Times][9], like AWS, Microsoft is also now reaping the benefits of heavy investments in new data centers around the world. And the Times credits Microsoft with “particular success” in [hybrid cloud installations][10], helping ease concerns among some slow-to-change enterprises about full-scale cloud adoption.
**[ Also read:[Why hybrid cloud will turn out to be a transition strategy][11] ]**
### Can Google Cloud Platform keep up?
Even as the [overall quarterly numbers for Alphabet][12]—Google's parent company—didn't meet analysts' revenue expectations (which sent the stock tumbling), Google Cloud Platform seems to have continued its strong growth. Alphabet doesn't break out its cloud unit, but sales in Alphabet's “Other Revenue” category—which includes cloud computing along with hardware—jumped 25% compared to the same period last year, hitting $5.4 billion.
More telling, perhaps, Alphabet Chief Financial Officer Ruth Porat [reportedly][13] told analysts that "Google Cloud Platform remains one of the fastest growing businesses in Alphabet." [Porat also mentioned][14] that hiring in the cloud unit was so aggressive that it drove a 20% jump in Alphabet's operating expenses!
### Companies keep going cloud
But the raw numbers tell only part of the story. All that growth means existing customers are spending more, but also that ever-increasing numbers of enterprises are abandoning the hassle and expense of running their own data centers in favor of buying what they need from the cloud.
**[ Also read:[Large enterprises abandon data centers for the cloud][15] ]**
The New York Times quotes Amy Hood, Microsoft's chief financial officer, explaining that, “You don't really get revenue growth unless you have a usage growth, so this is customers deploying and using Azure.” And the Times notes that Microsoft has signed big deals with companies such as [Walgreens Boots Alliance][16] that combined Azure with other Microsoft cloud-based services.
This growth is happening in existing markets and in new ones: for example, AWS just opened new regions in [Indonesia][17] and [Hong Kong][18].
**[ Now read:[After virtualization and cloud, what's left on premises?][19] ]**
Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3391465/another-strong-cloud-computing-quarter-puts-pressure-on-data-centers.html#tk.rss_all
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/cloud_comput_connect_blue-100787048-large.jpg
[2]: https://www.networkworld.com/article/3292935/cloud-computing-just-had-another-kick-ass-quarter.html
[3]: https://www.networkworld.com/article/3268384/is-the-cloud-already-killing-the-enterprise-data-center.html
[4]: https://www.latimes.com/business/la-fi-amazon-earnings-cloud-computing-aws-20190425-story.html
[5]: https://www.businessinsider.com/amazon-q1-2019-earnings-aws-advertising-retail-prime-2019-4
[6]: https://www.computerweekly.com/news/252462310/Amazon-cautions-against-reading-too-much-into-slowdown-in-AWS-revenue-growth-rate
[7]: https://www.microsoft.com/en-us/Investor/earnings/FY-2019-Q3/press-release-webcast
[8]: https://www.cnbc.com/2019/04/24/microsoft-q3-2019-earnings.html
[9]: https://www.nytimes.com/2019/04/24/technology/microsoft-earnings.html
[10]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html
[11]: https://www.networkworld.com/article/3238466/why-hybrid-cloud-will-turn-out-to-be-a-transition-strategy.html
[12]: https://abc.xyz/investor/static/pdf/2019Q1_alphabet_earnings_release.pdf?cache=8ac2b86
[13]: https://www.forbes.com/sites/jilliandonfro/2019/04/29/google-alphabet-q1-earnings-2019/#52f5c8c733be
[14]: https://www.youtube.com/watch?v=31_KHdse_0Y
[15]: https://www.networkworld.com/article/3240587/large-enterprises-abandon-data-centers-for-the-cloud.html
[16]: https://www.walgreensbootsalliance.com
[17]: https://www.businesswire.com/news/home/20190403005931/en/AWS-Open-New-Region-Indonesia
[18]: https://www.apnews.com/Business%20Wire/57eaf4cb603e46e6b05b634d9751699b
[19]: https://www.networkworld.com/article/3232626/virtualization/extreme-virtualization-impact-on-enterprises.html
[20]: https://www.facebook.com/NetworkWorld/
[21]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Revolutionary data compression technique could slash compute costs)
[#]: via: (https://www.networkworld.com/article/3392716/revolutionary-data-compression-technique-could-slash-compute-costs.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Revolutionary data compression technique could slash compute costs
======
A new form of data compression, called Zippads, will create faster computer programs that could drastically lower the costs of computing.
![Kevin Stanchfield \(CC BY 2.0\)][1]
There's a major problem with today's money-saving memory compression used for storing more data in less space. The issue is that computers store and run memory in predetermined, fixed-size blocks, yet many modern programs function and play out in variable chunks.
The way it's currently done is actually highly inefficient. That's because compressed programs, which use objects rather than evenly configured slabs of data, don't match the space used to store and run them, explain the scientists working on a revolutionary new compression system called Zippads.
The answer, they say—something that, if it works, would drastically reduce those inefficiencies, speed things up, and, importantly, reduce compute costs—is to compress the varied objects and not the cache lines, as is the case now. Cache lines are fixed-size blocks of memory that are transferred to memory cache.
**[ Read also:[How to deal with backup when you switch to hyperconverged infrastructure][2] ]**
“Objects, not cache lines, are the natural unit of compression,” write Po-An Tsai and Daniel Sanchez in their MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) [paper][3] (pdf).
They say object-based programs — of the kind in everyday use now, such as those written in Python — should be compressed based on their programmed object size, not on some fixed value created by traditional or even state-of-the-art caching methods.
Nor is the answer to recklessly abandon object-oriented programming just because it uses compression inefficiently. Instead, one must adapt compression to today's common object-based code.
The scientists claim their new system can increase the compression ratio 1.63 times and improve performance by 17%. It's the “first compressed memory hierarchy designed for object-based applications,” they say.
### The benefits of compression
Compression is a favored technique for making computers more efficient. The main advantage over simply adding more memory is that costs are lowered significantly—you don't need to keep adding physical main memory hardware because you're cramming more data into existing memory.
However, to date, hardware memory compression has been best suited to old-school large blocks of data, not the “random, fine-grained memory accesses,” the team explains. It's not great at accessing small pieces of data, such as words, for example.
### How the Zippads compression system works
In Zippads, as the new system is called, stored object hierarchy levels (called “pads”) are located on-chip and are directly accessed. The different levels (pads) have different speed grades, with newly referenced objects being placed in the fastest pad. As a pad fills up, it begins evicting older, less active objects, ultimately recycling the desirable fast space taken up by objects that are no longer being used. Cleverly, at the fast level the objects aren't even compressed, but as they prove their non-usefulness they get kicked down to compressed, slow-to-access, lower-importance pads—and are brought back up as necessary.
With Zippads, we would “see computers that can run much faster or can run many more apps at the same speeds,” an [MIT News][4] article says. “Each application consumes less memory, it runs faster, so a device can support more applications within its allotted memory.” Bandwidth is freed up, in other words.
“All computer systems would benefit from this,” Sanchez, a professor of computer science and electrical engineering, says in the article. “Programs become faster because they stop being bottlenecked by memory bandwidth.”
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3392716/revolutionary-data-compression-technique-could-slash-compute-costs.html#tk.rss_all
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/memory-100787327-large.jpg
[2]: https://www.networkworld.com/article/3389396/how-to-deal-with-backup-when-you-switch-to-hyperconverged-infrastructure.html
[3]: http://people.csail.mit.edu/poantsai/papers/2019.zippads.asplos.pdf
[4]: http://news.mit.edu/2019/hardware-data-compression-0416
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Epic Games Store is Now Available on Linux Thanks to Lutris)
[#]: via: (https://itsfoss.com/epic-games-lutris-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Epic Games Store is Now Available on Linux Thanks to Lutris
======
_**Brief: Open Source gaming platform Lutris now enables you to use Epic Games Store on Linux. We tried it on Ubuntu 19.04 and here's our experience with it.**_
[Gaming on Linux][1] just keeps getting better. Want to [play Windows games on Linux][2]? Steam's new [in-progress feature][3] enables you to do that.
Steam might be new in the field of Windows games on Linux, but Lutris has been doing it for years.
[Lutris][4] is an open source gaming platform for Linux that provides installers for game clients like Origin, Steam, the Blizzard.net app and so on. It utilizes Wine to run software that isn't natively supported on Linux.
Lutris has recently announced that you can now use Epic Games Store using Lutris.
### Lutris brings Epic Games to Linux
![Epic Games Store Lutris Linux][5]
[Epic Games Store][6] is a digital video game distribution platform like Steam. It only supports Windows and macOS for the moment.
The Lutris team worked hard to bring Epic Games Store to Linux via Lutris. Even though Im not a big fan of Epic Games Store, it was good to know about the support for Linux via Lutris:
> Good news! [@EpicGames][7] Store is now fully functional under Linux if you use Lutris to install it! No issues observed whatsoever. <https://t.co/cYmd7PcYdG>[@TimSweeneyEpic][8] will probably like this 😊 [pic.twitter.com/7mt9fXt7TH][9]
>
> — Lutris Gaming (@LutrisGaming) [April 17, 2019][10]
As an avid gamer and Linux user, I immediately jumped upon this news and installed Lutris to run Epic Games on it.
**Note:** _I used [Ubuntu 19.04][11] to test the Epic Games Store for Linux._
### Using Epic Games Store for Linux using Lutris
To install Epic Games Store on your Linux system, make sure that you have [Lutris][4] installed with its pre-requisites Wine and Python 3. So, first [install Wine on Ubuntu][12] or whichever Linux you are using and then [download Lutris from its website][13].
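For reference, assuming you are on Ubuntu, the Lutris team has historically provided a PPA for this; check the download page for current instructions, since these can change:

```
# add the Lutris PPA and install the client (assumes Ubuntu and an existing
# Wine installation, as described above)
sudo add-apt-repository ppa:lutris-team/lutris
sudo apt update
sudo apt install lutris
```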
#### Installing Epic Games Store
Once the installation of Lutris is successful, simply launch it.
While I tried this, I encountered an error (nothing happened when I tried to launch it using the GUI). However, when I typed in **lutris** on the terminal to launch it, I noticed an error that looked like this:
![][15]
Thanks to Abhishek, I learned that this is a common issue (you can check that on [GitHub][16]).
So, to fix it, all I had to do was type in a command in the terminal:
```
export LC_ALL=C
```
Just copy it and run it in your terminal if you face the same issue. Then, you will be able to open Lutris.
**Note:** _You'll have to enter this command every time you launch Lutris, so it is better to add it to your .bashrc or your list of environment variables._
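For example, assuming you use Bash as your shell, making the workaround permanent could look like this:

```
# append the workaround to your shell startup file (assumes Bash)
echo 'export LC_ALL=C' >> ~/.bashrc

# reload the file so it takes effect in the current session
source ~/.bashrc
```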
Once that is done, simply launch it and search for **Epic Games Store** as shown in the image below:
![Epic Games Store in Lutris][17]
Here, I have it installed already; in your case, you will get the option to “Install” it, and it will then automatically ask you to install the required packages it needs. You just have to proceed in order to successfully install it. That's it, no rocket science involved.
#### Playing a Game on Epic Games Store
![Epic Games Store][18]
Now that we have Epic Games store via Lutris on Linux, simply launch it and log in to your account to get started.
But, does it really work?
_Yes, the Epic Games Store does work._ **But, not all the games do.**
Well, I haven't tried everything, but I grabbed a free game (Transistor, a turn-based ARPG) to check if it works.
![Transistor Epic Games Store][19]
Unfortunately, it didn't. It says that it is “Running” when I launch it, but then nothing happens.
As of now, I'm not aware of any solution to this, so I'll try to keep you updated if I find a fix.
**Wrapping Up**
It's good to see the gaming scene improve on Linux thanks to solutions like Lutris. However, there's still a lot of work to be done.
Getting a game to run hassle-free on Linux is still a challenge. There can be issues like the one I encountered, or similar ones. But it's heading in the right direction, even if it still has issues.
What do you think of Epic Games Store on Linux via Lutris? Have you tried it yet? Let us know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/epic-games-lutris-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-gaming-guide/
[2]: https://itsfoss.com/steam-play/
[3]: https://itsfoss.com/steam-play-proton/
[4]: https://lutris.net/
[5]: https://itsfoss.com/wp-content/uploads/2019/04/epic-games-store-lutris-linux-800x450.png
[6]: https://www.epicgames.com/store/en-US/
[7]: https://twitter.com/EpicGames?ref_src=twsrc%5Etfw
[8]: https://twitter.com/TimSweeneyEpic?ref_src=twsrc%5Etfw
[9]: https://t.co/7mt9fXt7TH
[10]: https://twitter.com/LutrisGaming/status/1118552969816018948?ref_src=twsrc%5Etfw
[11]: https://itsfoss.com/ubuntu-19-04-release-features/
[12]: https://itsfoss.com/install-latest-wine/
[13]: https://lutris.net/downloads/
[14]: https://itsfoss.com/ubuntu-mate-entroware/
[15]: https://itsfoss.com/wp-content/uploads/2019/04/lutris-error.jpg
[16]: https://github.com/lutris/lutris/issues/660
[17]: https://itsfoss.com/wp-content/uploads/2019/04/lutris-epic-games-store-800x520.jpg
[18]: https://itsfoss.com/wp-content/uploads/2019/04/epic-games-store-800x450.jpg
[19]: https://itsfoss.com/wp-content/uploads/2019/04/transistor-game-epic-games-store-800x410.jpg
[20]: https://itsfoss.com/skpe-alpha-linux/

View File

@ -0,0 +1,260 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to identify same-content files on Linux)
[#]: via: (https://www.networkworld.com/article/3390204/how-to-identify-same-content-files-on-linux.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to identify same-content files on Linux
======
Copies of files sometimes represent a big waste of disk space and can cause confusion if you want to make updates. Here are six commands to help you identify these files.
![Vinoth Chandar \(CC BY 2.0\)][1]
In a recent post, we looked at [how to identify and locate files that are hard links][2] (i.e., that point to the same disk content and share inodes). In this post, we'll check out commands for finding files that have the same _content_, but are not otherwise connected.
Hard links are helpful because they allow files to exist in multiple places in the file system while not taking up any additional disk space. Copies of files, on the other hand, sometimes represent a big waste of disk space and run some risk of causing some confusion if you want to make updates. In this post, we're going to look at multiple ways to identify these files.
**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][3] ]**
### Comparing files with the diff command
Probably the easiest way to compare two files is to use the **diff** command. The output will show you the differences between the two files. The < and > signs indicate whether the extra lines are in the first (<) or second (>) file provided as arguments. In this example, the extra lines are in backup.html.
```
$ diff index.html backup.html
2438a2439,2441
> <pre>
> That's all there is to report.
> </pre>
```
If diff shows no output, that means the two files are the same.
```
$ diff home.html index.html
$
```
The only drawbacks to diff are that it can only compare two files at a time, and you have to identify the files to compare. Some commands we will look at in this post can find the duplicate files for you.
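Since **diff** exits with status 0 when the files match and 1 when they differ, you can also test for equality silently in a script. A minimal sketch (the file names are placeholders):

```
# -q suppresses the line-by-line output; the exit status tells us the result
if diff -q home.html index.html > /dev/null; then
    echo "files are identical"
else
    echo "files differ"
fi
```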
### Using checksums
The **cksum** (checksum) command computes checksums for files. A checksum is a mathematical reduction of the contents to a lengthy number (like 2819078353 in the output below; the second column is the file size in bytes). While not absolutely unique, the chance that files that are not identical in content would result in the same checksum is extremely small.
```
$ cksum *.html
2819078353 228029 backup.html
4073570409 227985 home.html
4073570409 227985 index.html
```
In the example above, you can see how the second and third files yield the same checksum and can be assumed to be identical.
### Using the find command
While the find command doesn't have an option for finding duplicate files, it can be used to search files by name or type and run the cksum command. For example:
```
$ find . -name "*.html" -exec cksum {} \;
4073570409 227985 ./home.html
2819078353 228029 ./backup.html
4073570409 227985 ./index.html
```
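Taking that one step further, a short pipeline can flag the duplicates for you: sort the checksums, then print every file whose checksum has already been seen. This is just a sketch (it keys on the checksum field alone), but it works for a quick scan:

```
# cksum prints "checksum size filename"; sort groups identical checksums,
# and awk prints each file whose checksum has appeared before
find . -name "*.html" -exec cksum {} + | sort -n | awk 'seen[$1]++'
```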
### Using the fslint command
The **fslint** command can be used to specifically find duplicate files. Note that we give it a starting location. The command can take quite some time to complete if it needs to run through a large number of files. Here's output from a very modest search. Note how it lists the duplicate files and also looks for other issues, such as empty directories and bad IDs.
```
$ fslint .
-----------------------------------file name lint
-------------------------------Invalid utf8 names
-----------------------------------file case lint
----------------------------------DUPlicate files <==
home.html
index.html
-----------------------------------Dangling links
--------------------redundant characters in links
------------------------------------suspect links
--------------------------------Empty Directories
./.gnupg
----------------------------------Temporary Files
----------------------duplicate/conflicting Names
------------------------------------------Bad ids
-------------------------Non Stripped executables
```
You may have to install **fslint** on your system. You will probably have to add it to your search path, as well:
```
$ export PATH=$PATH:/usr/share/fslint/fslint
```
### Using the rdfind command
The **rdfind** command will also look for duplicate (same content) files. The name stands for "redundant data find," and the command is able to determine, based on file dates, which files are the originals — which is helpful if you choose to delete the duplicates, as it will remove the newer files.
```
$ rdfind ~
Now scanning "/home/shark", found 12 files.
Now have 12 files in total.
Removed 1 files due to nonunique device and inode.
Total size is 699498 bytes or 683 KiB
Removed 9 files due to unique sizes from list.2 files left.
Now eliminating candidates based on first bytes:removed 0 files from list.2 files left.
Now eliminating candidates based on last bytes:removed 0 files from list.2 files left.
Now eliminating candidates based on sha1 checksum:removed 0 files from list.2 files left.
It seems like you have 2 files that are not unique
Totally, 223 KiB can be reduced.
Now making results file results.txt
```
You can also run this command in "dryrun" (i.e., only report the changes that might otherwise be made).
```
$ rdfind -dryrun true ~
(DRYRUN MODE) Now scanning "/home/shark", found 12 files.
(DRYRUN MODE) Now have 12 files in total.
(DRYRUN MODE) Removed 1 files due to nonunique device and inode.
(DRYRUN MODE) Total size is 699352 bytes or 683 KiB
Removed 9 files due to unique sizes from list.2 files left.
(DRYRUN MODE) Now eliminating candidates based on first bytes:removed 0 files from list.2 files left.
(DRYRUN MODE) Now eliminating candidates based on last bytes:removed 0 files from list.2 files left.
(DRYRUN MODE) Now eliminating candidates based on sha1 checksum:removed 0 files from list.2 files left.
(DRYRUN MODE) It seems like you have 2 files that are not unique
(DRYRUN MODE) Totally, 223 KiB can be reduced.
(DRYRUN MODE) Now making results file results.txt
```
The rdfind command also provides options for things such as ignoring empty files (-ignoreempty) and following symbolic links (-followsymlinks). Check out the man page for explanations.
```
-ignoreempty ignore empty files
-minsize ignore files smaller than specified size
-followsymlinks follow symbolic links
-removeidentinode remove files referring to identical inode
-checksum identify checksum type to be used
-deterministic determines how to sort files
-makesymlinks turn duplicate files into symbolic links
-makehardlinks replace duplicate files with hard links
-makeresultsfile create a results file in the current directory
-outputname provide name for results file
-deleteduplicates delete/unlink duplicate files
-sleep set sleep time between reading files (milliseconds)
-n, -dryrun display what would have been done, but don't do it
```
Note that the rdfind command offers an option to delete duplicate files with the **-deleteduplicates true** setting. Hopefully the command's modest problem with grammar won't irritate you. ;-)
```
$ rdfind -deleteduplicates true .
...
Deleted 1 files. <==
```
You will likely have to install the rdfind command on your system. It's probably a good idea to experiment with it to get comfortable with how it works.
### Using the fdupes command
The **fdupes** command also makes it easy to identify duplicate files and provides a large number of useful options — like **-r** for recursion. In its simplest form, it groups duplicate files together like this:
```
$ fdupes ~
/home/shs/UPGRADE
/home/shs/mytwin
/home/shs/lp.txt
/home/shs/lp.man
/home/shs/penguin.png
/home/shs/penguin0.png
/home/shs/hideme.png
```
Here's an example using recursion. Note that many of the duplicate files are important (users' .bashrc and .profile files) and should clearly not be deleted.
```
# fdupes -r /home
/home/shark/home.html
/home/shark/index.html
/home/dory/.bashrc
/home/eel/.bashrc
/home/nemo/.profile
/home/dory/.profile
/home/shark/.profile
/home/nemo/tryme
/home/shs/tryme
/home/shs/arrow.png
/home/shs/PNGs/arrow.png
/home/shs/11/files_11.zip
/home/shs/ERIC/file_11.zip
/home/shs/penguin0.jpg
/home/shs/PNGs/penguin.jpg
/home/shs/PNGs/penguin0.jpg
/home/shs/Sandra_rotated.png
/home/shs/PNGs/Sandra_rotated.png
```
The fdupes command's many options are listed below. Use the **fdupes -h** command, or read the man page for more details.
```
-r --recurse recurse
-R --recurse: recurse through specified directories
-s --symlinks follow symlinked directories
-H --hardlinks treat hard links as duplicates
-n --noempty ignore empty files
-f --omitfirst omit the first file in each set of matches
-A --nohidden ignore hidden files
-1 --sameline list matches on a single line
-S --size show size of duplicate files
-m --summarize summarize duplicate files information
-q --quiet hide progress indicator
-d --delete prompt user for files to preserve
-N --noprompt when used with --delete, preserve the first file in set
-I --immediate delete duplicates as they are encountered
-p --permissions don't consider files with different owner/group or
permission bits as duplicates
-o --order=WORD order files according to specification
-i --reverse reverse order while sorting
-v --version display fdupes version
-h --help displays help
```
The fdupes command is another one that you're likely to have to install and work with for a while to become familiar with its many options.
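If you mainly want a quick sense of how much space the duplicates are wasting, combining the recursion and summary options shown above does the trick:

```
# recurse through /home and print only a summary of the duplicates found
fdupes -r -m /home
```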
### Wrap-up
Linux systems provide a good selection of tools for locating and potentially removing duplicate files, along with options for where you want to run your search and what you want to do with duplicate files when you find them.
**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][4] ]**
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3390204/how-to-identify-same-content-files-on-linux.html#tk.rss_all
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/chairs-100794266-large.jpg
[2]: https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html
[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[4]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Debian has a New Project Leader)
[#]: via: (https://itsfoss.com/debian-project-leader-election/)
[#]: author: (Shirish https://itsfoss.com/author/shirish/)
Debian has a New Project Leader
======
As it does every year, the Debian Secretary announced a call for nominations for the post of Debian Project Leader (commonly known as DPL) in early March. Soon, five candidates shared their nominations. One of the DPL candidates backed out due to personal reasons, leaving us with [four candidates][1], as can be seen in the Nomination section of the Vote page.
### Sam Hartman, the new Debian Project Leader
![][2]
While I will not go into much detail, as Sam has already outlined his position on his [platform][3], it is good to see that most Debian developers recognize that it's no longer just technical excellence that needs to be looked at. I do hope he is able to create more teams, which would leave more time in the DPL's hands and mean less stress going forward.
As he has shared, he will also be looking into helping the other DPL candidates, all of whom presented initiatives to make Debian better.
Apart from this, there have been some excellent suggestions: for example, modernizing debian-installer, giving lists.debian.org a [Mailman 3][4] instance, modernizing Debian packaging, and many more.
While a year is probably too short a time for any of the deliverables that Debian people are thinking of, some sort of push or start should enable Debian to reach greater heights than today.
### A brief history of DPL elections
In the beginning, Debian was similar to many distributions which have a [BDFL][5], although from the very start Debian had a sort of rolling leadership. While I won't go through the whole history, from October 1998 an idea [germinated][6] to have a Debian Constitution.
After quite a bit of discussion between Debian users, contributors, developers, etc., the [Debian 1.0 Constitution][7] was released on December 2nd, 1998. One of the big changes was that it formalised the selection of the Debian Project Leader via elections.
From 1998 till 2019, 13 Debian Project Leaders have been elected, with Sam Hartman being the latest (2019).
Before Sam, [Chris Lamb][8] was DPL in 2017 and stood for re-election in 2018. One of the biggest changes in Chris's tenure was giving more impetus to outreach than ever before. This made it possible to have many more mini-DebConfs all around the world, thus increasing the number of Debian users and potential Debian Developers.
### Duties and Responsibilities of the Debian Project Leader
![][10]
Debian Project Leader (DPL) is a non-monetary position, which means that the DPL doesn't get a salary or any monetary benefits in the traditional sense, but it is a prestigious position.
Curious what a DPL does? Here are some of the duties, responsibilities, prestige and perks associated with this position.
#### Travelling
As the DPL is the public face of the project, she/he is supposed to travel to many places in the world to talk about Debian. While the travel may be a perk, it can be offset by the fact that the DPL is not paid for the time spent articulating Debian's position in various free software and other communities. Travel, language barriers, and the politics of free software are also some of the stress points that any DPL has to go through.
#### Communication
A DPL is expected to have excellent verbal and non-verbal communication skills, as she/he is expected to share Debian's vision of computing with technical and non-technical people. As she/he is also expected to weigh in on many a sensitive matter, the Project Leader has to make choices about which communications should be made public and which should be kept private.
#### Budgeting
Quite a bit of the time, the Debian Project Leader has to look into the finances along with the Secretary and make calls on various initiatives mooted by the larger community. The Project Leader has to ask questions and then make informed decisions about them.
#### Delegation
One of the important tasks of the DPL is to delegate different tasks to suitable people. Some sensitive delegations include ftp-master, ftp-assistant, list-managers, debian-mirror, debian-infrastructure and so on.
#### Influence
Last but not least, just as in any other election, the people who contest for DPL have a platform where they share their ideas about where they would like to see the Debian project heading and how they would go about doing it.
This is by no means an exhaustive list. I would suggest reading Lucas Nussbaum's [mail][11] in which he outlines some more responsibilities of a Debian Project Leader.
**In the end…**
I wish Sam Hartman all the luck. I look forward to seeing how Debian grows under his leadership.
I also hope that you learned a few non-technical things about Debian. If you are an [ardent Debian user][13], stuff like this makes you feel more involved with the Debian project. What do you say?
--------------------------------------------------------------------------------
via: https://itsfoss.com/debian-project-leader-election/
作者:[Shirish][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/shirish/
[b]: https://github.com/lujun9972
[1]: https://www.debian.org/vote/2019/vote_001
[2]: https://itsfoss.com/wp-content/uploads/2019/04/Debian-Project-Leader-election-800x450.png
[3]: https://www.debian.org/vote/2019/platforms/hartmans
[4]: http://docs.mailman3.org/en/latest/
[5]: https://en.wikipedia.org/wiki/Benevolent_dictator_for_life
[6]: https://lists.debian.org/debian-devel/1998/09/msg00506.html
[7]: https://www.debian.org/devel/constitution.1.0
[8]: https://www.debian.org/vote/2017/platforms/lamby
[9]: https://itsfoss.com/semicode-os-linux/
[10]: https://itsfoss.com/wp-content/uploads/2019/04/leadership-800x450.jpg
[11]: https://lists.debian.org/debian-vote/2019/03/msg00023.html
[12]: https://itsfoss.com/bodhi-linux-5/
[13]: https://itsfoss.com/reasons-why-i-love-debian/

View File

@ -0,0 +1,125 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NomadBSD, a BSD for the Road)
[#]: via: (https://itsfoss.com/nomadbsd/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
NomadBSD, a BSD for the Road
======
As regular It's FOSS readers should know, I like diving into the world of BSDs. Recently, I came across an interesting BSD that is designed to live on a thumb drive. Let's take a look at NomadBSD.
### What is NomadBSD?
![Nomadbsd Desktop][1]
[NomadBSD][2] is different from most available BSDs. NomadBSD is a live system based on FreeBSD. It comes with automatic hardware detection and an initial config tool. NomadBSD is designed to “be used as a desktop system that works out of the box, but can also be used for data recovery, for educational purposes, or to test FreeBSD's hardware compatibility.”
This German BSD comes with an [OpenBox][3]-based desktop with the Plank application dock. NomadBSD makes use of the [DSB project][4]. DSB stands for “Desktop Suite (for) (Free)BSD” and consists of a collection of programs designed to create a simple and working environment without needing a ton of dependencies to use one tool. DSB is created by [Marcel Kaiser][5], one of the lead devs of NomadBSD.
Just like the original BSD projects, you can contact the NomadBSD developers via a [mailing list][6].
#### Included Applications
NomadBSD comes with the following software installed:
* Thunar file manager
* Asunder CD ripper
* Bash 5.0
* Filezilla FTP client
* Firefox web browser
* Fish Command line
* Gimp
* Qpdfview
* Git
* Hexchat IRC client
* Leafpad text editor
* Midnight Commander file manager
* PaleMoon web browser
* PCManFM file manager
* Pidgin messaging client
* Transmission BitTorrent client
* Redshift
* Sakura terminal emulator
* Slim login manager
* Thunderbird email client
* VLC media player
* Plank application dock
* Z Shell
You can see a complete list of the pre-installed applications in the [MANIFEST file][8].
![Nomadbsd Openbox Menu][9]
#### Version 1.2 Released
NomadBSD recently released version 1.2 on April 21, 2019. This means that NomadBSD is now based on FreeBSD 12.0-p3. TRIM is now enabled by default. One of the biggest changes is that the initial command-line setup was replaced with a Qt graphical interface. They also added a Qt5 tool to install NomadBSD to your hard drive. A number of fixes were included to improve graphics support. They also added support for creating 32-bit images.
### Installing NomadBSD
Since NomadBSD is designed to be a live system, we will need to add the BSD to a USB drive. First, you will need to [download it][11]. There are several options to choose from: 64-bit, 32-bit, or 64-bit Mac.
You will need a USB drive with at least 4 GB of space. The system that you are installing to should have a 1.2 GHz processor and 1 GB of RAM to run NomadBSD comfortably. Both BIOS and UEFI are supported.
All of the images available for download are compressed as a `.lzma` file. So, once you have downloaded the file, you will need to extract the `.img` file. On Linux, you can use either of these commands: `lzma -d nomadbsd-x.y.z.img.lzma` or `xzcat nomadbsd-x.y.z.img.lzma > nomadbsd-x.y.z.img`. (Be sure to replace x.y.z with the correct file name you just downloaded.)
Before we proceed, we need to find out the id of your USB drive. (Hopefully, you have inserted it by now.) I use the `lsblk` command to find my USB drive, which in my case is `sdb`. To write the image file, use this command: `sudo dd if=nomadbsd-x.y.z.img of=/dev/sdb bs=1M conv=sync`. (Again, don't forget to correct the file name.) If you are uncomfortable using `dd`, you can use [Etcher][12]. If you have Windows, you will need to use [7-zip][13] to extract the image file and Etcher or [Rufus][14] to write the image to the USB drive.
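Put together, the whole sequence on Linux might look like this (a sketch assuming version 1.2 of the image and a USB drive at /dev/sdb; double-check the device name on your own system, since dd will overwrite whatever it is pointed at):

```
# decompress the image (adjust the version to match your download)
lzma -d nomadbsd-1.2.img.lzma

# identify your USB drive before writing anything
lsblk

# write the image to the drive (assumed here to be /dev/sdb)
sudo dd if=nomadbsd-1.2.img of=/dev/sdb bs=1M conv=sync
```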
When you boot from the USB drive, you will encounter a simple config tool. Once you answer the required questions, you will be greeted with a simple Openbox desktop.
### Thoughts on NomadBSD
I first discovered NomadBSD back in January when they released 1.2-RC1. At the time, I had been unable to install [Project Trident][15] on my laptop and was very frustrated with BSDs. I downloaded NomadBSD and tried it out. I initially ran into issues reaching the desktop, but RC2 fixed that issue. However, I was unable to get on the internet, even though I had an Ethernet cable plugged in. Luckily, I found the wifi manager in the menu and was able to connect to my wifi.
Overall, my experience with NomadBSD was pleasant. Once I figured out a few things, I was good to go. I hope that NomadBSD is the first of a new generation of BSDs that focus on mobility and ease of use. BSD has conquered the server world; it's about time it figured out how to be more user-friendly.
Have you ever used NomadBSD? What is your BSD? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][16].
--------------------------------------------------------------------------------
via: https://itsfoss.com/nomadbsd/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2019/04/NomadBSD-desktop-800x500.jpg
[2]: http://nomadbsd.org/
[3]: http://openbox.org/wiki/Main_Page
[4]: https://freeshell.de/%7Emk/projects/dsb.html
[5]: https://github.com/mrclksr
[6]: http://nomadbsd.org/contact.html
[7]: https://itsfoss.com/netflix-freebsd-cdn/
[8]: http://nomadbsd.org/download/nomadbsd-1.2.manifest
[9]: https://itsfoss.com/wp-content/uploads/2019/04/NomadBSD-Openbox-menu-800x500.jpg
[10]: https://itsfoss.com/why-use-bsd/
[11]: http://nomadbsd.org/download.html
[12]: https://www.balena.io/etcher/
[13]: https://www.7-zip.org/
[14]: https://rufus.ie/
[15]: https://itsfoss.com/project-trident-interview/
[16]: http://reddit.com/r/linuxusersgroup

View File

@ -0,0 +1,166 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Monitoring CPU and GPU Temperatures on Linux)
[#]: via: (https://itsfoss.com/monitor-cpu-gpu-temp-linux/)
[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)
Monitoring CPU and GPU Temperatures on Linux
======
_**Brief: This article discusses two simple ways of monitoring CPU and GPU temperatures from the Linux command line.**_
Because of **[Steam][1]** (including _[Steam Play][2]_, aka _Proton_) and other developments, **GNU/Linux** is becoming the gaming platform of choice for more and more computer users every day. A good number of users are also going for **GNU/Linux** when it comes to other resource-consuming computing tasks such as [video editing][3] or graphic design (_Kdenlive_ and _[Blender][4]_ are good examples of programs for these).
Whether you are one of those users or otherwise, you are bound to have wondered how hot your computer's CPU and GPU can get (even more so if you do overclocking). If that is the case, keep reading. We will be looking at a couple of very simple commands to monitor CPU and GPU temps.
My setup includes a [Slimbook Kymera][5] and two displays (a TV set and a PC monitor) which allows me to use one for playing games and the other to keep an eye on the temperatures. Also, since I use [Zorin OS][6] I will be focusing on **Ubuntu** and **Ubuntu** derivatives.
To monitor the behaviour of both CPU and GPU, we will be making use of the useful `watch` command to get dynamic readings at a set interval.
![][7]
### Monitoring CPU Temperature in Linux
For CPU temps, we will combine `watch` with the `sensors` command. An interesting article about a [GUI version of this tool][8] has already been covered on It's FOSS. However, we will use the terminal version here:
```
watch -n 2 sensors
```
`watch` guarantees that the readings will be updated every 2 seconds (and this value can — of course — be changed to whatever best fits your needs):
```
Every 2,0s: sensors
iwlwifi-virtual-0
Adapter: Virtual device
temp1: +39.0°C
acpitz-virtual-0
Adapter: Virtual device
temp1: +27.8°C (crit = +119.0°C)
temp2: +29.8°C (crit = +119.0°C)
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +37.0°C (high = +82.0°C, crit = +100.0°C)
Core 0: +35.0°C (high = +82.0°C, crit = +100.0°C)
Core 1: +35.0°C (high = +82.0°C, crit = +100.0°C)
Core 2: +33.0°C (high = +82.0°C, crit = +100.0°C)
Core 3: +36.0°C (high = +82.0°C, crit = +100.0°C)
Core 4: +37.0°C (high = +82.0°C, crit = +100.0°C)
Core 5: +35.0°C (high = +82.0°C, crit = +100.0°C)
```
Amongst other things, we get the following information:
* The CPU has six cores (Core 0 through Core 5), with the current highest temperature being 37.0ºC.
* Values higher than 82.0ºC are considered high.
* A value over 100.0ºC is deemed critical.
The values above lead us to the conclusion that the computer's workload is very light at the moment.
### Monitoring GPU Temperature in Linux
Let us turn to the graphics card now. I have never used an **AMD** dedicated graphics card, so I will be focusing on **Nvidia** ones. The first thing to do is download the appropriate, current driver through [additional drivers in Ubuntu][10].
On **Ubuntu** (and its forks such as **Zorin** or **Linux Mint**), going to _Software & Updates_ > _Additional Drivers_ and selecting the most recent one normally suffices. Additionally, you can add/enable the official PPA for graphics cards (either through the command line or via _Software & Updates_ > _Other Software_). After installing the driver you will have at your disposal the _Nvidia X Server_ GUI application along with the command line utility _nvidia-smi_ (Nvidia System Management Interface). So we will use `watch` and `nvidia-smi`:
```
watch -n 2 nvidia-smi
```
And — the same as for the CPU — we will get updated readings every two seconds:
```
Every 2,0s: nvidia-smi
Fri Apr 19 20:45:30 2019
+-----------------------------------------------------------------------------+
| Nvidia-SMI 418.56 Driver Version: 418.56 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 106... Off | 00000000:01:00.0 On | N/A |
| 0% 54C P8 10W / 120W | 433MiB / 6077MiB | 4% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1557 G /usr/lib/xorg/Xorg 190MiB |
| 0 1820 G /usr/bin/gnome-shell 174MiB |
| 0 7820 G ...equest-channel-token=303407235874180773 65MiB |
+-----------------------------------------------------------------------------+
```
The chart gives the following information about the graphics card:
* it is using Nvidia driver version 418.56.
* the current temperature of the card is 54.0ºC — with the fan at 0% of its capacity.
* the power consumption is very low: only 10W.
* out of 6 GB of vram (video random access memory), it is only using 433 MB.
* the used vram is being taken by three processes whose IDs are — respectively — 1557, 1820 and 7820.
Most of these facts/values show that — clearly — we are not playing any resource-consuming games or dealing with heavy workloads. Should we start playing a game, processing a video, or the like, the values would start to go up.
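If you only need the temperature reading itself rather than the whole table, `nvidia-smi` can also query individual fields. A minimal sketch, assuming a driver recent enough to support the query interface:

```
# print just the GPU temperature (in °C), refreshed every 2 seconds
watch -n 2 nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader
```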
#### Conclusion
Although there are GUI tools, I find these two commands very handy for checking on your hardware in real time.
What do you make of them? You can learn more about the utilities involved by reading their man pages.
Do you have other preferences? Share them with us in the comments ;).
Halof!!! (Have a lot of fun!!!).
![avatar][12]
### Alejandro Egea-Abellán
Its FOSS Community Contributor
I developed a liking for electronics, linguistics, herpetology and computers (particularly GNU/Linux and FOSS). I am LPIC-2 certified and currently work as a technical consultant and Moodle administrator in the Department for Lifelong Learning at the Ministry of Education in Murcia, Spain. I am a firm believer in lifelong learning, the sharing of knowledge and computer-user freedom.
--------------------------------------------------------------------------------
via: https://itsfoss.com/monitor-cpu-gpu-temp-linux/
作者:[It's FOSS Community][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-steam-ubuntu-linux/
[2]: https://itsfoss.com/steam-play-proton/
[3]: https://itsfoss.com/best-video-editing-software-linux/
[4]: https://www.blender.org/
[5]: https://slimbook.es/
[6]: https://zorinos.com/
[7]: https://itsfoss.com/wp-content/uploads/2019/04/monitor-cpu-gpu-temperature-linux-800x450.png
[8]: https://itsfoss.com/check-laptop-cpu-temperature-ubuntu/
[9]: https://itsfoss.com/best-command-line-games-linux/
[10]: https://itsfoss.com/install-additional-drivers-ubuntu/
[11]: https://itsfoss.com/review-googler-linux/
[12]: https://itsfoss.com/wp-content/uploads/2019/04/EGEA-ABELLAN-Alejandro.jpg

View File

@ -0,0 +1,116 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installing Budgie Desktop on Ubuntu [Quick Guide])
[#]: via: (https://itsfoss.com/install-budgie-ubuntu/)
[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
Installing Budgie Desktop on Ubuntu [Quick Guide]
======
_**Brief: Learn how to install Budgie desktop on Ubuntu in this step-by-step tutorial.**_
Among all the [various Ubuntu versions][1], [Ubuntu Budgie][2] is the most underrated one. It looks elegant and it's not heavy on resources.
Read this [Ubuntu Budgie review][3] or simply watch this video to see what Ubuntu Budgie 18.04 looks like.
[Subscribe to our YouTube channel for more Linux Videos][4]
If you like [Budgie desktop][5] but you are using some other version of Ubuntu, such as the default Ubuntu with the GNOME desktop, I have good news for you. You can install Budgie on your current Ubuntu system and switch between desktop environments.
In this post, I'm going to tell you exactly how to do that. But first, a little introduction to Budgie for those who are unaware of it.
The Budgie desktop environment is developed mainly by the [Solus Linux team][6]. It is designed with a focus on elegance and modern usage. Budgie is available for all major Linux distributions for users to try and experience this new desktop environment. Budgie is pretty mature by now and provides a great desktop experience.
Warning
Installing multiple desktops on the same system MAY result in conflicts, and you may see some issues like missing icons in the panel or multiple icons of the same program.
Then again, you may not see any issues at all. It's your call if you want to try a different desktop.
### Install Budgie on Ubuntu
This method is not tested on Linux Mint, so I recommend that you not follow this guide for Mint.
For those on Ubuntu, Budgie is now part of the Ubuntu repositories by default. Hence, we don't need to add any PPAs in order to get Budgie.
To install Budgie, simply run these commands in a terminal. We'll first make sure that the system is fully updated.
```
sudo apt update && sudo apt upgrade
sudo apt install ubuntu-budgie-desktop
```
When everything is done downloading, you will get a prompt to choose your display manager. Select lightdm to get the full Budgie experience.
![Select lightdm][7]
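If you happen to pick the wrong display manager here, you can bring the selection prompt back later with the standard Debian/Ubuntu reconfiguration tool:

```
# re-run the display manager selection dialog
sudo dpkg-reconfigure lightdm
```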
After the installation is complete, reboot your computer. You will then be greeted by the Budgie login screen. Enter your password to go to the home screen.
![Budgie Desktop Home][8]
### Switching to other desktop environments
![Budgie login screen][9]
You can click the Budgie icon next to your name to get options for login. From there you can select between the installed Desktop Environments (DEs). In my case, I see Budgie and the default Ubuntu (GNOME) DEs.
![Select your DE][10]
Hence whenever you feel like logging into GNOME, you can do so using this menu.
### How to Remove Budgie
If you dont like Budgie or just want to go back to your regular old Ubuntu, you can switch back to your regular desktop as described in the above section.
However, if you really want to remove Budgie and its components, you can run the following commands to get back to a clean slate.
_**Switch to some other desktop environment before using these commands:**_
```
sudo apt remove ubuntu-budgie-desktop ubuntu-budgie* lightdm
sudo apt autoremove
sudo apt install --reinstall gdm3
```
After running all the commands successfully, reboot your computer.
Now, you will be back to GNOME or whichever desktop environment you had.
**What do you think of Budgie?**
Budgie is one of the [best desktop environments for Linux][12]. Hope this short guide helped you install the awesome Budgie desktop on your Ubuntu system.
If you did install Budgie, what do you like about it the most? Let us know in the comments below. And as usual, any questions or suggestions are always welcome.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-budgie-ubuntu/
作者:[Atharva Lele][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/atharva/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/which-ubuntu-install/
[2]: https://ubuntubudgie.org/
[3]: https://itsfoss.com/ubuntu-budgie-18-review/
[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[5]: https://github.com/solus-project/budgie-desktop
[6]: https://getsol.us/home/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_select_dm.png?fit=800%2C559&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_homescreen.jpg?fit=800%2C500&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen.png?fit=800%2C403&ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen_select_de.png?fit=800%2C403&ssl=1
[11]: https://itsfoss.com/snapd-error-ubuntu/
[12]: https://itsfoss.com/best-linux-desktop-environments/

View File

@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip])
[#]: via: (https://itsfoss.com/turn-on-raspberry-pi/)
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip]
======
_**Brief: This quick tip teaches you how to turn on Raspberry Pi and how to shut it down properly afterwards.**_
The [Raspberry Pi][1] is one of the [most popular SBCs (single-board computers)][2]. If you are interested in this topic, I believe that you've finally got a Pi device. I also advise getting all the [additional Raspberry Pi accessories][3] to get started with your device.
You're ready to turn it on and start to tinker around with it. It has its own similarities and differences compared to traditional computers like desktops and laptops.
Today, let's go ahead and learn how to turn on and shut down a Raspberry Pi, as it doesn't really feature a power button of any sort.
For this article I'm using a Raspberry Pi 3B+, but it's the same for all the Raspberry Pi variants.
### Turn on Raspberry Pi
![Micro USB port for Power][7]
The micro USB port powers the Raspberry Pi; you turn it on by plugging the power cable into the micro USB port. But before you do that, you should make sure that you have done the following things.
* Preparing the micro SD card with Raspbian according to the official [guide][8] and inserting it into the micro SD card slot.
* Plugging in the HDMI cable, a USB keyboard and a mouse.
* Plugging in the Ethernet cable (optional).
Once you have done the above, plug in the power cable. This turns on the Raspberry Pi and the display will light up and load the Operating System.
### Shutting Down the Pi
Shutting down the Pi is pretty straightforward: click the menu button and choose shutdown.
![Turn off Raspberry Pi graphically][9]
Alternatively, you can use the [shutdown command][10] in the terminal:
```
sudo shutdown now
```
Once the shutdown process has started, **wait** till it completely finishes before you cut the power. Once the Pi shuts down, there is no real way to turn it back on without turning the power off and on again. You could use the GPIOs to turn on the Pi from the shutdown state, but it'll require additional modding.
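The shutdown command also accepts a time argument, which can come in handy; a couple of illustrative variants using its standard flags:

```
# halt in 5 minutes, giving logged-in users a warning
sudo shutdown -h +5

# cancel a pending scheduled shutdown
sudo shutdown -c
```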
_Note: Micro USB ports tend to be fragile, so turn the power off/on at the source instead of frequently unplugging and plugging into the micro USB port._
Well, that's about all you should know about turning on and shutting down the Pi. What do you plan to use it for? Let me know in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/turn-on-raspberry-pi/
作者:[Chinmay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/chinmay/
[b]: https://github.com/lujun9972
[1]: https://www.raspberrypi.org/
[2]: https://itsfoss.com/raspberry-pi-alternatives/
[3]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
[4]: https://www.amazon.com/CanaKit-Raspberry-Starter-Premium-Black/dp/B07BCC8PK7?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BCC8PK7&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) Starter Kit (32 GB EVO+ Edition, Premium Black Case))
[5]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime)
[6]: https://www.amazon.com/CanaKit-Raspberry-Premium-Clear-Supply/dp/B07BC7BMHY?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BC7BMHY&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) with Premium Clear Case and 2.5A Power Supply)
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-3-microusb.png?fit=800%2C532&ssl=1
[8]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/Raspbian-ui-menu.jpg?fit=800%2C492&ssl=1
[10]: https://linuxhandbook.com/linux-shutdown-command/

View File

@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Awesome Fedora 30 is Here! Check Out the New Features)
[#]: via: (https://itsfoss.com/fedora-30/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
The Awesome Fedora 30 is Here! Check Out the New Features
======
The latest and greatest release of Fedora is here. Fedora 30 brings some visual as well as performance improvements.
Fedora releases a new version every six months and each release is supported for thirteen months.
Before you decide to download or upgrade Fedora, let's first see what's new in Fedora 30.
### New Features in Fedora 30
![Fedora 30 Release][1]
Here's what's new in the latest release of Fedora.
#### GNOME 3.32 gives a brand new look, features and performance improvement
The latest release of GNOME brings a lot of visual improvements.
GNOME 3.32 has refreshed icons and UI, and it almost looks like a brand new version of GNOME.
![Gnome 3.32 icons | Image Credit][2]
GNOME 3.32 also brings several other features like fractional scaling, permission control for each application, and granular control over Night Light intensity, among many other changes.
GNOME 3.32 also brings some performance improvements. You'll see faster file and app searches and smoother scrolling.
#### Improved performance for DNF
Fedora 30 will see a faster [DNF][3] (the default package manager for Fedora) thanks to the [zchunk][4] compression algorithm.
The zchunk algorithm splits a file into independent chunks. This makes deltas cheap to handle: when you download a new version of a file, you only download the chunks that changed.
With zchunk, DNF will only download the difference between the metadata of the current version and the earlier versions.
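To see the principle at work, here is a minimal Python sketch of chunk-based delta downloading. To be clear, this is not zchunk's actual format or implementation (zchunk picks chunk boundaries far more cleverly); it only illustrates the idea of checksumming each chunk, comparing against what you already have, and fetching just the chunks that differ.
```
import hashlib

CHUNK_SIZE = 4096  # hypothetical fixed size, purely for illustration

def chunk_digests(data: bytes) -> list:
    """Checksum each fixed-size chunk of the data."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

def chunks_to_fetch(local: bytes, remote_digests: list) -> list:
    """Indexes of remote chunks that differ from what we already have."""
    local_digests = chunk_digests(local)
    return [
        i for i, digest in enumerate(remote_digests)
        if i >= len(local_digests) or local_digests[i] != digest
    ]

# The mirror publishes digests for the new metadata; the client fetches
# only the chunks that changed instead of the whole file.
old = b"repo metadata " * 1000
new = b"repo metadata " * 999 + b"one changed package entry "
print(chunks_to_fetch(old, chunk_digests(new)))  # -> [3], a single chunk
```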
#### Fedora 30 brings two new desktop environments into the fold
Fedora already offers several desktop environment choices. Fedora 30 extends the offering with [elementary OS][5]'s Pantheon desktop environment and Deepin Linux's [DeepinDE][6].
So now you can enjoy the looks and feel of elementary OS and Deepin Linux in Fedora. How cool is that!
#### Linux Kernel 5
Fedora 30 ships with Linux kernel 5.0.9, which brings improved hardware support and some performance improvements. You may check out the [features of Linux kernel 5.0 in this article][7].
Suggested read: [The Featureful Release of Nextcloud 14 Has Two New Security Features][8]
#### Updated software
You'll also get newer versions of software. Some of the major ones are:
* GCC 9.0.1
* [Bash Shell 5.0][9]
* GNU C Library 2.29
* Ruby 2.6
* Golang 1.12
* Mesa 19.0.2
* Vagrant 2.2
* JDK12
* PHP 7.3
* Fish 3.0
* Erlang 21
* Python 3.7.3
### Getting Fedora 30
If you are already using Fedora 29, you can upgrade to the latest release from your current install. You may follow this guide to learn [how to upgrade a Fedora version][10].
Fedora 29 users will still get updates for seven more months, so if you don't feel like upgrading, you may skip it for now. Fedora 28 users don't have that choice: Fedora 28 reaches end of life next month, which means there will be no more security or maintenance updates. Upgrading to a newer version is no longer optional.
You always have the option to download the Fedora 30 ISO and install it afresh. You can download Fedora from its official website. It's only available for 64-bit systems, and the ISO is 1.9 GB in size.
[Download Fedora 30 Workstation][11]
What do you think of Fedora 30? Are you planning to upgrade or at least try it out? Do share your thoughts in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/fedora-30/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2019/04/fedora-30-release-800x450.png
[2]: https://itsfoss.com/wp-content/uploads/2019/04/gnome-3-32-icons.png
[3]: https://fedoraproject.org/wiki/DNF?rd=Dnf
[4]: https://github.com/zchunk/zchunk
[5]: https://itsfoss.com/elementary-os-juno-features/
[6]: https://www.deepin.org/en/dde/
[7]: https://itsfoss.com/linux-kernel-5/
[8]: https://itsfoss.com/nextcloud-14-release/
[9]: https://itsfoss.com/bash-5-release/
[10]: https://itsfoss.com/upgrade-fedora-version/
[11]: https://getfedora.org/en/workstation/

View File

@ -0,0 +1,219 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Looking into Linux modules)
[#]: via: (https://www.networkworld.com/article/3391362/looking-into-linux-modules.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Looking into Linux modules
======
The lsmod command can tell you which kernel modules are currently loaded on your system, along with some interesting details about their use.
![Rob Oo \(CC BY 2.0\)][1]
### What are Linux modules?
Kernel modules are chunks of code that are loaded into and unloaded from the kernel as needed, extending the functionality of the kernel without requiring a reboot. In fact, unless users inquire about modules using commands like **lsmod**, they won't likely know that anything has changed.
One important thing to understand is that there are _lots_ of modules in use on your Linux system at all times, and that a lot of detail is available if you're tempted to dive in.
One of the prime uses of lsmod is to examine modules when a system isn't working properly. However, most of the time, modules load as needed and users don't need to be aware of how they are working.
**[ Also see: [Must-know Linux Commands][2] ]**
### Listing modules
The easiest way to list modules is with the **lsmod** command. While this command provides a lot of detail, its output is the most user-friendly of the module commands.
```
$ lsmod
Module Size Used by
snd_hda_codec_realtek 114688 1
snd_hda_codec_generic 77824 1 snd_hda_codec_realtek
ledtrig_audio 16384 2 snd_hda_codec_generic,snd_hda_codec_realtek
snd_hda_codec_hdmi 53248 1
snd_hda_intel 40960 2
snd_hda_codec 131072 4 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel
,snd_hda_codec_realtek
snd_hda_core 86016 5 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel
,snd_hda_codec,snd_hda_codec_realtek
snd_hwdep 20480 1 snd_hda_codec
snd_pcm 102400 4 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hda
_core
snd_seq_midi 20480 0
snd_seq_midi_event 16384 1 snd_seq_midi
dcdbas 20480 0
snd_rawmidi 36864 1 snd_seq_midi
snd_seq 69632 2 snd_seq_midi,snd_seq_midi_event
coretemp 20480 0
snd_seq_device 16384 3 snd_seq,snd_seq_midi,snd_rawmidi
snd_timer 36864 2 snd_seq,snd_pcm
kvm_intel 241664 0
kvm 626688 1 kvm_intel
radeon 1454080 10
irqbypass 16384 1 kvm
joydev 24576 0
input_leds 16384 0
ttm 102400 1 radeon
drm_kms_helper 180224 1 radeon
drm 475136 13 drm_kms_helper,radeon,ttm
snd 81920 15 snd_hda_codec_generic,snd_seq,snd_seq_device,snd_hda
_codec_hdmi,snd_hwdep,snd_hda_intel,snd_hda_codec,snd
_hda_codec_realtek,snd_timer,snd_pcm,snd_rawmidi
i2c_algo_bit 16384 1 radeon
fb_sys_fops 16384 1 drm_kms_helper
syscopyarea 16384 1 drm_kms_helper
serio_raw 20480 0
sysfillrect 16384 1 drm_kms_helper
sysimgblt 16384 1 drm_kms_helper
soundcore 16384 1 snd
mac_hid 16384 0
sch_fq_codel 20480 2
parport_pc 40960 0
ppdev 24576 0
lp 20480 0
parport 53248 3 parport_pc,lp,ppdev
ip_tables 28672 0
x_tables 40960 1 ip_tables
autofs4 45056 2
raid10 57344 0
raid456 155648 0
async_raid6_recov 24576 1 raid456
async_memcpy 20480 2 raid456,async_raid6_recov
async_pq 24576 2 raid456,async_raid6_recov
async_xor 20480 3 async_pq,raid456,async_raid6_recov
async_tx 20480 5 async_pq,async_memcpy,async_xor,raid456,async_raid6_re
cov
xor 24576 1 async_xor
raid6_pq 114688 3 async_pq,raid456,async_raid6_recov
libcrc32c 16384 1 raid456
raid1 45056 0
raid0 24576 0
multipath 20480 0
linear 20480 0
hid_generic 16384 0
psmouse 151552 0
i2c_i801 32768 0
pata_acpi 16384 0
lpc_ich 24576 0
usbhid 53248 0
hid 126976 2 usbhid,hid_generic
e1000e 245760 0
floppy 81920 0
```
In the output above:
* "Module" shows the name of each module
* "Size" shows the module size (not how much memory it is using)
* "Used by" shows each module's usage count and the referring modules
Clearly, that's a _lot_ of modules. The number of modules loaded will depend on your system and distribution and what's running. We can count them like this:
```
$ lsmod | wc -l
67
```
To see the number of modules available on the system (not just running), try this command:
```
$ modprobe -c | wc -l
41272
```
### Other commands for examining modules
Linux provides several commands for listing, loading and unloading, examining, and checking the status of modules.
* depmod -- generates modules.dep and map files
* insmod -- a simple program to insert a module into the Linux Kernel
* lsmod -- show the status of modules in the Linux Kernel
* modinfo -- show information about a Linux Kernel module
* modprobe -- add and remove modules from the Linux Kernel
* rmmod -- a simple program to remove a module from the Linux Kernel
### Listing modules that are built in
As mentioned above, the **lsmod** command is the most convenient command for listing modules. There are, however, other ways to examine them. The modules.builtin file lists all modules that are built into the kernel and is used by modprobe when trying to load one of these modules. Note that **$(uname -r)** in the commands below provides the name of the kernel release.
```
$ more /lib/modules/$(uname -r)/modules.builtin | head -10
kernel/arch/x86/crypto/crc32c-intel.ko
kernel/arch/x86/events/intel/intel-uncore.ko
kernel/arch/x86/platform/intel/iosf_mbi.ko
kernel/mm/zpool.ko
kernel/mm/zbud.ko
kernel/mm/zsmalloc.ko
kernel/fs/binfmt_script.ko
kernel/fs/mbcache.ko
kernel/fs/configfs/configfs.ko
kernel/fs/crypto/fscrypto.ko
```
You can get some additional detail on a module by using the **modinfo** command, though nothing that qualifies as an easy explanation of what service the module provides. The omitted details from the output below include a lengthy signature.
```
$ modinfo floppy | head -16
filename: /lib/modules/5.0.0-13-generic/kernel/drivers/block/floppy.ko
alias: block-major-2-*
license: GPL
author: Alain L. Knaff
srcversion: EBEAA26742DF61790588FD9
alias: acpi*:PNP0700:*
alias: pnp:dPNP0700*
depends:
retpoline: Y
intree: Y
name: floppy
vermagic: 5.0.0-13-generic SMP mod_unload
sig_id: PKCS#7
signer:
sig_key:
sig_hashalgo: md4
```
You can load or unload a module using the **modprobe** command. Using a command like the one below, you can locate the kernel object associated with a particular module:
```
$ find /lib/modules/$(uname -r) -name floppy*
/lib/modules/5.0.0-13-generic/kernel/drivers/block/floppy.ko
```
If you needed to load the module, you could use a command like this one:
```
$ sudo modprobe floppy
```
### Wrap-up
Clearly the loading and unloading of modules is a big deal. It makes Linux systems considerably more flexible and efficient than if they ran with a one-size-fits-all kernel. It also means you can make significant changes — including adding hardware — without rebooting.
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][3] ]**
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3391362/looking-into-linux-modules.html#tk.rss_all
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/modules-100794941-large.jpg
[2]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Crowdsourcing license compliance with ClearlyDefined)
[#]: via: (https://opensource.com/article/19/5/license-compliance-clearlydefined)
[#]: author: (Jeff McAffer https://opensource.com/users/jeffmcaffer)
Crowdsourcing license compliance with ClearlyDefined
======
Licensing is what holds open source together, and ClearlyDefined takes
the mystery out of projects' licenses, copyright, and source location.
![][1]
Open source use continues to skyrocket, not just in use cases and scenarios but also in volume. It is trivial for a developer to depend on 1,000 JavaScript packages from a single run of `npm install` or to have thousands of packages in a [Docker][2] image. At the same time, there is increased interest in ensuring license compliance.
Without the right license, you may not be able to legally use a software component in the way you intend, or you may have obligations that run counter to your business model. For instance, a JavaScript package could be marked as [MIT license][3], which allows commercial reuse, while one of its dependencies has a [copyleft license][4] that requires you to give your software away under the same license. Complying means finding the applicable license(s) and assessing and adhering to the terms, which is not too bad for individual components but can be daunting for large initiatives.
Fortunately, this open source challenge has an open source solution: [ClearlyDefined][5]. ClearlyDefined is a crowdsourced, open source, [Open Source Initiative][6] (OSI) effort to gather, curate, and upstream/normalize data about open source components, such as license, copyright, and source location. This data is the cornerstone of reducing the friction in open source license compliance.
The premise behind ClearlyDefined is simple: we are all struggling to find and understand key information related to the open source we use—whether it is finding the license, knowing who to attribute, or identifying the source that goes with a particular package. Rather than struggling independently, ClearlyDefined allows us to collaborate and share the compliance effort. Moreover, the ClearlyDefined community seeks to upstream any corrections so future releases are more clearly defined and make conventions more explicit to improve community understanding of project intent.
### How it works
![ClearlyDefined's harvest, curate, upstream process][7]
ClearlyDefined monitors the open source ecosystem and automatically harvests relevant data from open source components using a host of open source tools such as [ScanCode][8], [FOSSology][9], and [Licensee][10]. The results are summarized and aggregated to create a _definition_, which is then surfaced to users via an API and a UI. Each definition includes:
* Declared license of the component
* Licenses and copyrights discovered across all files
* Exact source code location to the commit level
* Release date
* List of embedded components
Coincidentally (well, not really), this is exactly the data you need to do license compliance.
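For a feel of the data, here is a small Python sketch that fetches one definition over the REST API. Treat the endpoint shape, the example coordinates, and the field names as assumptions drawn from the service's public documentation rather than a guaranteed contract; check [clearlydefined.io][5] for the authoritative API reference.
```
import json
import urllib.request

# Coordinates identify a component as type/provider/namespace/name/revision;
# "-" stands in for an empty namespace. These example coordinates are
# illustrative only.
coordinates = "npm/npmjs/-/lodash/4.17.11"
url = f"https://api.clearlydefined.io/definitions/{coordinates}"

with urllib.request.urlopen(url) as response:
    definition = json.load(response)

# The definition carries the compliance data listed above (assumed keys).
print(definition["licensed"].get("declared"))         # declared license
print(definition["described"].get("releaseDate"))     # release date
print(definition["described"].get("sourceLocation"))  # source location
```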
### Curating
Any given definition may have gaps or imperfections due to tool issues or the data being missing or incorrect at the origin. ClearlyDefined enables users to curate the results by refining the values and filling in the gaps. These contributions are reviewed and merged, as with any open source project. The result is an improved dataset for all to use.
### Getting ahead
To a certain degree, this process is still chasing the problem—analyzing and curating after the packages have already been published. To get ahead of the game, the ClearlyDefined community also feeds merged curations back to the originating projects as pull requests (e.g., adding a license file, clarifying a copyright). This increases the clarity of future releases and sets up a virtuous cycle.
### Adapting, not mandating
In doing the analysis, we've found quite a number of approaches to expressing license-related data. Different communities put LICENSE files in different places or have different practices around attribution. The ClearlyDefined philosophy is to discover these conventions and adapt to them rather than asking the communities to do something different. A side benefit of this is that implicit conventions can be made more explicit, improving clarity for all.
Related to this, ClearlyDefined is careful to not look too hard for this interesting data. If we have to be too smart and infer too much to find the data, then there's a good chance the origin is not all that clear. Instead, we prefer to work with the community to better understand and clarify the conventions being used. From there, we can update the tools accordingly and make it easier to be "clearly defined."
#### NOTICE files
As an added bonus for users, we set up an API and UI for generating NOTICE files, making it trivial for you to comply with the attribution requirements found in most open source licenses. You can give ClearlyDefined a list of components (e.g., _drag and drop an npm package-lock.json file on the UI_) and get back a fully formed NOTICE file rendered by one of several renderers (e.g., text, HTML, Handlebars.js template). This is a snap, given that we already have all the compliance data. Big shout out to the [OSS Attribution Builder project][11] for making a simple and pluggable NOTICE renderer we could just pop into the ClearlyDefined service.
### Getting involved
You can get involved with ClearlyDefined in several ways:
* Become an active user, contributing to your compliance workflow
* Review other people's curations using the interface
* Get involved in [the code][12] (Node and React)
* Ask and answer questions on [our mailing list][13] or [Discord channel][14]
* Contribute money to the OSI targeted to ClearlyDefined. We'll use that to fund development and curation.
We are excited to continue to grow our community of contributors so that licensing can become an understandable part of any team's open source adoption. For more information, check out [https://clearlydefined.io][15].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/license-compliance-clearlydefined
作者:[Jeff McAffer][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jeffmcaffer
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Crowdfunding_520x292_9597717_0612CM.png?itok=lxSKyFXU
[2]: https://opensource.com/resources/what-docker
[3]: /article/19/4/history-mit-license
[4]: /resources/what-is-copyleft
[5]: https://clearlydefined.io
[6]: https://opensource.org
[7]: https://opensource.com/sites/default/files/uploads/clearlydefined.png (ClearlyDefined's harvest, curate, upstream process)
[8]: https://github.com/nexB/scancode-toolkit
[9]: https://www.fossology.org/
[10]: https://github.com/licensee/licensee
[11]: https://github.com/amzn/oss-attribution-builder
[12]: https://github.com/clearlydefined
[13]: mailto:clearlydefined@googlegroups.com
[14]: https://clearlydefined.io/discord
[15]: https://clearlydefined.io/

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Format Python however you like with Black)
[#]: via: (https://opensource.com/article/19/5/python-black)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/moshez/users/moshez)
Format Python however you like with Black
======
Learn more about solving common Python problems in our series covering
seven PyPI libraries.
![OpenStack source code \(Python\) in VIM][1]
Python is one of the most [popular programming languages][2] in use today—and for good reasons: it's open source, it has a wide range of uses (such as web programming, business applications, games, scientific programming, and much more), and it has a vibrant and dedicated community supporting it. This community is the reason we have such a large, diverse range of software packages available in the [Python Package Index][3] (PyPI) to extend and improve Python and solve the inevitable glitches that crop up.
In this series, we'll look at seven PyPI libraries that can help you solve common Python problems. In the first article, we learned about [Cython][4]; today, we'll examine the **[Black][5]** code formatter.
### Black
Sometimes creativity can be a wonderful thing. Sometimes it is just a pain. I enjoy solving hard problems creatively, but I want my Python formatted as consistently as possible. Nobody has ever been impressed by code that uses "interesting" indentation.
But even worse than inconsistent formatting is a code review that consists of nothing but formatting nits. It is annoying to the reviewer—and even more annoying to the person whose code is reviewed. It's also infuriating when your linter tells you that your code is indented incorrectly, but gives no hint about the _correct_ amount of indentation.
Enter Black. Instead of telling you _what_ to do, Black is a good, industrious robot: it will fix your code for you.
To see how it works, feel free to write something beautifully inconsistent like:
```
def add(a, b): return a+b

def mult(a, b):
    return \
        a * b
```
Does Black complain? Goodness no, it just fixes it for you!
```
$ black math
reformatted math
All done! ✨ 🍰 ✨
1 file reformatted.
$ cat math
def add(a, b):
    return a + b


def mult(a, b):
    return a * b
```
Black does offer the option of failing instead of fixing, and even outputting a **diff**-style edit. These options are great in a continuous integration (CI) system that enforces running Black locally. In addition, if the **diff** output is logged to the CI output, you can directly paste it into **patch** in the rare case that you need to fix your output but cannot install Black locally.
```
$ black --check --diff math
--- math        2019-04-09 17:24:22.747815 +0000
+++ math        2019-04-09 17:26:04.269451 +0000
@@ -1,7 +1,7 @@
-def add(a, b): return a + b
+def add(a, b):
+    return a + b
 def mult(a, b):
-    return \
-        a * b
+    return a * b
would reformat math
All done! 💥 💔 💥
1 file would be reformatted.
$ echo $?
1
```
In the next article in this series, we'll look at **attrs**, a library that helps you write concise, correct code quickly.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/python-black
作者:[Moshe Zadka ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez/users/moshez/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_1.jpg?itok=lHQK5zpm (OpenStack source code (Python) in VIM)
[2]: https://opensource.com/article/18/5/numbers-python-community-trends
[3]: https://pypi.org/
[4]: https://opensource.com/article/19/4/7-python-problems-solved-cython
[5]: https://pypi.org/project/black/

View File

@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with Libki to manage public user computer access)
[#]: via: (https://opensource.com/article/19/5/libki-computer-access)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins/users/tony-thomas)
Get started with Libki to manage public user computer access
======
Libki is a cross-platform computer reservation and time management system.
![][1]
Libraries, schools, colleges, and other organizations that provide public computers need a good way to manage users' access—otherwise, there's no way to prevent some people from monopolizing the machines and ensure everyone has a fair amount of time. This is the problem that [Libki][2] was designed to solve.
Libki is an open source, cross-platform computer reservation and time management system for Windows and Linux PCs. It provides a web-based server and a web-based administration system that staff can use to manage computer access, including creating and deleting users, setting time limits on accounts, logging out and banning users, and setting access restrictions.
According to lead developer [Kyle Hall][3], Libki is mainly used for PC time control as an open source alternative to Envisionware's proprietary computer access control software. When users log into a Libki-managed computer, they get a block of time to use the computer; once that time is up, they are logged off. The default setting is 45 minutes, but that can easily be adjusted using the web-based administration system. Some organizations offer 24 hours of access before logging users off, and others use it to track usage without setting time limits.
Kyle is currently lead developer at [ByWater Solutions][4], which provides open source software solutions (including Libki) to libraries. He developed Libki early in his career when he was the IT tech at the [Meadville Public Library][5] in Pennsylvania. He was occasionally asked to cover the children's room during lunch breaks for other employees. The library used a paper sign-up sheet to manage access to the computers in the children's room, which meant constant supervision and checking to ensure equitable access for the people who came there.
Kyle said, "I found this system to be cumbersome and awkward, and I wanted to find a solution. That solution needed to be both FOSS and cross-platform. In the end, no existing software package suited our particular needs, and that is why I developed Libki."
Or, as Libki's website proclaims, "Libki was born of the need to avoid interacting with teenagers and now allows librarians to avoid interacting with teenagers around the world!"
### Easy to set up and use
I recently decided to try Libki in our local public library, where I frequently volunteer. I followed the [documentation][6] for the automatic installation, using Ubuntu 18.04 Server, and very quickly had it up and running.
I am planning to support Libki in our local library, but I wondered about libraries that don't have someone with IT experience or the ability to build and deploy a server. Kyle says, "ByWater Solutions can cloud-host a Libki server, which makes maintenance and management much simpler for everyone."
Kyle says ByWater is not planning to bundle Libki with its most popular offering, open source integrated library system (ILS) Koha, or any of the other [projects][7] it supports. "Libki and Koha are different [types of] software serving different needs, but they definitely work well together in a library setting. In fact, it was quite early on that I developed Libki's SIP2 integration so it could support single sign-on using Koha," he says.
### How you can contribute
Libki client is licensed under the GPLv3 and Libki server is licensed under the AGPLv3. Kyle says he would love Libki to have a more active and robust community, and the project is always looking for new people to join its [contributors][8]. If you would like to participate, visit [Libki's Community page][9] and join the mailing list.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/libki-computer-access
作者:[Don Watkins ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins/users/tony-thomas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6
[2]: https://libki.org/
[3]: https://www.linkedin.com/in/kylemhallinfo/
[4]: https://opensource.com/article/19/4/software-libraries
[5]: https://meadvillelibrary.org/
[6]: https://manual.libki.org/master/libki-manual.html#_automatic_installation
[7]: https://bywatersolutions.com/projects
[8]: https://github.com/Libki/libki-server/graphs/contributors
[9]: https://libki.org/community/

View File

@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The making of the Breaking the Code electronic book)
[#]: via: (https://opensource.com/article/19/5/code-book)
[#]: author: (Alicia Gibb https://opensource.com/users/aliciagibb/users/don-watkins)
The making of the Breaking the Code electronic book
======
Offering a safe space for middle school girls to learn technology speaks
volumes about who should be sitting around the tech table.
![Open hardware electronic book][1]
I like a good challenge. The [Open Source Stories team][2] came to me with a great one: Create a hardware project where each student could make her own piece that would come together as a larger whole. The students would be middle school girls. My job was to figure out the hardware and make this thing make sense.
After days of sketching out concepts, I was wandering through my local public library, and it dawned on me that the perfect piece of hardware where everyone could design their own part to create something whole is a book! The idea of a book using paper electronics was exciting, simple enough to be taught in a day, and fit the criteria of needing no special equipment, like soldering irons.
!["Breaking the Code" book cover][3]
I designed two parts to the electronics within the book. Half the circuits were developed with copper tape, LEDs, and DIY buttons, and half were developed with LilyPad Arduino microcontrollers, sensors, LEDs, and DIY buttons. Using the electronics in the book, the girls could make pages light up, buzz, or play music using various inputs such as button presses, page turns, or tilting the book.
!['Breaking the Code' interior pages][4]
We worked with young adult author [Lauren Sabel][5] to come up with the story, which features two girls who get locked in the basement of their school and have to solve puzzles to get out. Setting the scene in the basement gave us lots of opportunities to use lights! Along with the story, we received illustrations that the girls enhanced with electronics. The girls got creative, for example, using lights as the skeleton's eyes, not just for the obvious light bulb in the room.
Creating a curriculum that was flexible enough to empower each girl to build her own successfully functioning circuit was a vital piece of the user experience. We chose components so the circuit wouldn't need to be over-engineered. We also used breakout boards and LEDs with built-in resistors so that the circuits allowed flexibility and functioned with only basic knowledge of circuit design—without getting too muddled in the deep end.
!['Breaking the Code' interior pages][6]
The project curriculum gave girls the confidence and skills to understand electronics by building two circuits, in the process learning circuit layout, directional aspects, cause-and-effect through inputs and outputs, and how to identify various components. Controlling electrons by pushing them through a circuit feels a bit like you're controlling a tiny part of the universe. And seeing the girls' faces light up is like seeing a universe of opportunities open in front of them.
!['Breaking the Code' interior pages][7]
The girls were ecstatic to see their work as a completed book, taking pride in their pages and showing others what they had built.
![About 'Breaking the Code'][8]
Teaching them my little corner of the world for the day was a truly empowering experience for me. As a woman in tech, I think this is the right approach for companies trying to change the gender inequalities we see in tech. Offering a safe space to learn—with lots of people in the room who look like you as mentors—speaks volumes about who should be sitting around the tech table.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/code-book
作者:[Alicia Gibb][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/aliciagibb/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_book_electronics_hardware.jpg?itok=zb-zaiwz (Open hardware electronic book)
[2]: https://www.redhat.com/en/open-source-stories
[3]: https://opensource.com/sites/default/files/uploads/codebook_cover.jpg ("Breaking the Code" book cover)
[4]: https://opensource.com/sites/default/files/uploads/codebook_38-39.jpg ('Breaking the Code' interior pages)
[5]: https://www.amazon.com/Lauren-Sabel/e/B01M0FW223
[6]: https://opensource.com/sites/default/files/uploads/codebook_lightbulb.jpg ('Breaking the Code' interior pages)
[7]: https://opensource.com/sites/default/files/uploads/codebook_10-11.jpg ('Breaking the Code' interior pages)
[8]: https://opensource.com/sites/default/files/uploads/codebook_pg1.jpg (About 'Breaking the Code')

View File

@ -0,0 +1,735 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (API evolution the right way)
[#]: via: (https://opensource.com/article/19/5/api-evolution-right-way)
[#]: author: (A. Jesse https://opensource.com/users/emptysquare)
API evolution the right way
======
Ten covenants that responsible library authors keep with their users.
![Browser of things][1]
Imagine you are a creator deity, designing a body for a creature. In your benevolence, you wish for the creature to evolve over time: first, because it must respond to changes in its environment, and second, because your wisdom grows and you think of better designs for the beast. It shouldn't remain in the same body forever!
![Serpents][2]
The creature, however, might be relying on features of its present anatomy. You can't add wings or change its scales without warning. It needs an orderly process to adapt its lifestyle to its new body. How can you, as a responsible designer in charge of this creature's natural history, gently coax it toward ever greater improvements?
It's the same for responsible library maintainers. We keep our promises to the people who depend on our code: we release bugfixes and useful new features. We sometimes delete features if that's beneficial for the library's future. We continue to innovate, but we don't break the code of people who use our library. How can we fulfill all those goals at once?
### Add useful features
Your library shouldn't stay the same for eternity: you should add features that make your library better for your users. For example, if you have a Reptile class and it would be useful to have wings for flying, go for it.
```
class Reptile:
    @property
    def teeth(self):
        return 'sharp fangs'

    # If wings are useful, add them!
    @property
    def wings(self):
        return 'majestic wings'
```
But beware, features come with risk. Consider the following feature in the Python standard library, and see what went wrong with it.
```
bool(datetime.time(9, 30)) == True
bool(datetime.time(0, 0)) == False
```
This is peculiar: converting any time object to a boolean yields True, except for midnight. (Worse, the rules for timezone-aware times are even stranger.)
I've been writing Python for more than a decade but I didn't discover this rule until last week. What kind of bugs can this odd behavior cause in users' code?
Consider a calendar application with a function that creates events. If an event has an end time, the function requires it to also have a start time.
```
def create_event(day,
                 start_time=None,
                 end_time=None):
    if end_time and not start_time:
        raise ValueError("Can't pass end_time without start_time")

# The coven meets from midnight until 4am.
create_event(datetime.date.today(),
             datetime.time(0, 0),
             datetime.time(4, 0))
```
Unfortunately for witches, an event starting at midnight fails this validation. A careful programmer who knows about the quirk at midnight can write this function correctly, of course.
```
def create_event(day,
                 start_time=None,
                 end_time=None):
    if end_time is not None and start_time is None:
        raise ValueError("Can't pass end_time without start_time")
```
But this subtlety is worrisome. If a library creator wanted to make an API that bites users, a "feature" like the boolean conversion of midnight works nicely.
![Man being chased by an alligator][3]
The responsible creator's goal, however, is to make your library easy to use correctly.
This feature was written by Tim Peters when he first made the datetime module in 2002. Even founding Pythonistas like Tim make mistakes. [The quirk was removed][4], and all times are True now.
```
# Python 3.5 and later.
bool(datetime.time(9, 30)) == True
bool(datetime.time(0, 0)) == True
```
Programmers who didn't know about the oddity of midnight are saved from obscure bugs, but it makes me nervous to think about any code that relies on the weird old behavior and didn't notice the change. It would have been better if this bad feature were never implemented at all. This leads us to the first promise of any library maintainer:
#### First covenant: Avoid bad features
The most painful change to make is when you have to delete a feature. One way to avoid bad features is to add few features in general! Make no public method, class, function, or property without a good reason. Thus:
#### Second covenant: Minimize features
Features are like children: conceived in a moment of passion, they must be supported for years. Don't do anything silly just because you can. Don't add feathers to a snake!
![Serpents with and without feathers][5]
But of course, there are plenty of occasions when users need something from your library that it does not yet offer. How do you choose the right feature to give them? Here's another cautionary tale.
### A cautionary tale from asyncio
As you may know, when you call a coroutine function, it returns a coroutine object:
```
async def my_coroutine():
    pass

print(my_coroutine())
```
```
<coroutine object my_coroutine at 0x10bfcbac8>
```
Your code must "await" this object to run the coroutine. It's easy to forget this, so asyncio's developers wanted a "debug mode" that catches this mistake. Whenever a coroutine is destroyed without being awaited, the debug mode prints a warning with a traceback to the line where it was created.
When Yury Selivanov implemented the debug mode, he added as its foundation a "coroutine wrapper" feature. The wrapper is a function that takes in a coroutine and returns anything at all. Yury used it to install the warning logic on each coroutine, but someone else could use it to turn coroutines into the string "hi!"
```
import sys

def my_wrapper(coro):
    return 'hi!'

sys.set_coroutine_wrapper(my_wrapper)

async def my_coroutine():
    pass

print(my_coroutine())
```
```
hi!
```
That is one hell of a customization. It changes the very meaning of "async." Calling set_coroutine_wrapper once will globally and permanently change all coroutine functions. It is, [as Nathaniel Smith wrote][6], "a problematic API" that is prone to misuse and had to be removed. The asyncio developers could have avoided the pain of deleting the feature if they'd better shaped it to its purpose. Responsible creators must keep this in mind:
#### Third covenant: Keep features narrow
Luckily, Yury had the good judgment to mark this feature provisional, so asyncio users knew not to rely on it. Nathaniel was free to replace **set_coroutine_wrapper** with a narrower feature that only customized the traceback depth.
```
import sys
sys.set_coroutine_origin_tracking_depth(2)

async def my_coroutine():
    pass

print(my_coroutine())
```
```
<coroutine object my_coroutine at 0x10bfcbac8>
RuntimeWarning: 'my_coroutine' was never awaited
Coroutine created at (most recent call last)
  File "script.py", line 8, in <module>
    print(my_coroutine())
```
This is much better. There's no more global setting that can change coroutines' type, so asyncio users need not code as defensively. Deities should all be as farsighted as Yury.
#### Fourth covenant: Mark experimental features "provisional"
If you have merely a hunch that your creature wants horns and a quadruple-forked tongue, introduce the features but mark them "provisional."
![Serpent with horns][7]
You might discover that the horns are extraneous but the quadruple-forked tongue is useful after all. In the next release of your library, you can delete the former and mark the latter official.
### Deleting features
No matter how wisely we guide our creature's evolution, there may come a time when it's best to delete an official feature. For example, you might have created a lizard, and now you choose to delete its legs. Perhaps you want to transform this awkward creature into a sleek and modern python.
![Lizard transformed to snake][8]
There are two main reasons to delete features. First, you might discover a feature was a bad idea, through user feedback or your own growing wisdom. That was the case with the quirky behavior of midnight. Or, the feature might have been well-adapted to your library's environment at first, but the ecology changes. Perhaps another deity invents mammals. Your creature wants to squeeze into the mammals' little burrows and eat the tasty mammal filling, so it has to lose its legs.
![A mouse][9]
Similarly, the Python standard library deletes features in response to changes in the language itself. Consider asyncio's Lock. It has been awaitable ever since "await" was added as a keyword:
```
lock = asyncio.Lock()

async def critical_section():
    await lock
    try:
        print('holding lock')
    finally:
        lock.release()
```
But now, we can do "async with lock."
```
lock = asyncio.Lock()

async def critical_section():
    async with lock:
        print('holding lock')
```
The new style is much better! It's short and less prone to mistakes in a big function with other try-except blocks. Since "there should be one and preferably only one obvious way to do it," [the old syntax is deprecated in Python 3.7][10] and it will be banned soon.
It's inevitable that ecological change will have this effect on your code, too, so learn to delete features gently. Before you do so, consider the cost or benefit of deleting it. Responsible maintainers are reluctant to make their users change a large amount of their code or change their logic. (Remember how painful it was when Python 3 removed the "u" string prefix, before it was added back.) If the code changes are mechanical, however, like a simple search-and-replace, or if the feature is dangerous, it may be worth deleting.
#### Whether to delete a feature
![Balance scales][11]
Con | Pro
---|---
Code must change | Change is mechanical
Logic must change | Feature is dangerous
In the case of our hungry lizard, we decide to delete its legs so it can slither into a mouse's hole and eat it. How do we go about this? We could just delete the **walk** method, changing code from this:
```
class Reptile:
    def walk(self):
        print('step step step')
```
to this:
```
class Reptile:
    def slither(self):
        print('slide slide slide')
```
That's not a good idea; the creature is accustomed to walking! Or, in terms of a library, your users have code that relies on the existing method. When they upgrade to the latest version of your library, their code will break.
```
# User's code. Oops!
Reptile.walk()
```
Therefore, responsible creators make this promise:
#### Fifth covenant: Delete features gently
There are a few steps involved in deleting a feature gently. Starting with a lizard that walks with its legs, you first add the new method, "slither." Next, deprecate the old method.
```
import warnings

class Reptile:
    def walk(self):
        warnings.warn(
            "walk is deprecated, use slither",
            DeprecationWarning, stacklevel=2)
        print('step step step')

    def slither(self):
        print('slide slide slide')
```
The Python warnings module is quite powerful. By default it prints warnings to stderr, only once per code location, but you can silence warnings or turn them into exceptions, among other options.
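As a quick illustration of those options (my example, not the article's), each is one `warnings` call away:
```
import warnings

# Silence deprecation warnings entirely...
warnings.simplefilter('ignore', DeprecationWarning)

# ...or, because the most recently added filter wins, escalate them to
# exceptions so a test suite fails loudly on any deprecated call.
warnings.simplefilter('error', DeprecationWarning)

try:
    warnings.warn('walk is deprecated, use slither', DeprecationWarning)
except DeprecationWarning as exc:
    print(f'caught: {exc}')
```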
As soon as you add this warning to your library, PyCharm and other IDEs render the deprecated method with a strikethrough. Users know right away that the method is due for deletion.
`Reptile().walk()`
What happens when they run their code with the upgraded library?
```
$ python3 script.py
script.py:14: DeprecationWarning: walk is deprecated, use slither
  Reptile().walk()
step step step
```
By default, they see a warning on stderr, but the script succeeds and prints "step step step." The warning's traceback shows what line of the user's code must be fixed. (That's what the "stacklevel" argument does: it shows the call site that users need to change, not the line in your library where the warning is generated.) Notice that the error message is instructive: it describes what a library user must do to migrate to the new version.
Your users will want to test their code and prove they call no deprecated library methods. Warnings alone won't make unit tests fail, but exceptions will. Python has a command-line option to turn deprecation warnings into exceptions.
```
> python3 -Werror::DeprecationWarning script.py
Traceback (most recent call last):
  File "script.py", line 14, in <module>
    Reptile().walk()
  File "script.py", line 8, in walk
    DeprecationWarning, stacklevel=2)
DeprecationWarning: walk is deprecated, use slither
```
Now, "step step step" is not printed, because the script terminates with an error.
So, once you've released a version of your library that warns about the deprecated "walk" method, you can delete it safely in the next release. Right?
Consider what your library's users might have in their projects' requirements.
```
# User's requirements.txt has a dependency on the reptile package.
reptile
```
The next time they deploy their code, they'll install the latest version of your library. If they haven't yet handled all deprecations, then their code will break, because it still depends on "walk." You need to be gentler than this. There are three more promises you must keep to your users: maintain a changelog, choose a version scheme, and write an upgrade guide.
#### Sixth covenant: Maintain a changelog
Your library must have a changelog; its main purpose is to announce when a feature that your users rely on is deprecated or deleted.
#### Changes in Version 1.1
**New features**
* New function Reptile.slither()
**Deprecations**
* Reptile.walk() is deprecated and will be removed in version 2.0, use slither()
---
Responsible creators use version numbers to express how a library has changed so users can make informed decisions about upgrading. A "version scheme" is a language for communicating the pace of change.
#### Seventh covenant: Choose a version scheme
There are two schemes in widespread use, [semantic versioning][12] and time-based versioning. I recommend semantic versioning for nearly any library. The Python flavor thereof is defined in [PEP 440][13], and tools like **pip** understand semantic version numbers.
If you choose semantic versioning for your library, you can delete its legs gently with version numbers like:
> 1.0: First "stable" release, with walk()
> 1.1: Add slither(), deprecate walk()
> 2.0: Delete walk()
Your users should depend on a range of your library's versions, like so:
```
# User's requirements.txt.
reptile>=1,<2
```
This allows them to upgrade automatically within a major release, receiving bugfixes and potentially raising some deprecation warnings, but not upgrading to the _next_ major release and risking a change that breaks their code.
If you follow time-based versioning, your releases might be numbered thus:
> 2017.06.0: A release in June 2017
> 2018.11.0: Add slither(), deprecate walk()
> 2019.04.0: Delete walk()
And users can depend on your library like:
```
# User's requirements.txt for time-based version.
reptile==2018.11.*
```
This is terrific, but how do your users know your versioning scheme and how to test their code for deprecations? You have to advise them how to upgrade.
#### Eighth covenant: Write an upgrade guide
Here's how a responsible library creator might guide users:
#### Upgrading to 2.0
**Migrate from Deprecated APIs**
See the changelog for deprecated features.
**Enable Deprecation Warnings**
Upgrade to 1.1 and test your code with:
`python -Werror::DeprecationWarning`
Now it's safe to upgrade.
---
You must teach users how to handle deprecation warnings by showing them the command line options. Not all Python programmers know this—I certainly have to look up the syntax each time. And take note, you must _release_ a version that prints warnings from each deprecated API so users can test with that version before upgrading again. In this example, version 1.1 is the bridge release. It allows your users to rewrite their code incrementally, fixing each deprecation warning separately until they have entirely migrated to the latest API. They can test changes to their code and changes in your library, independently from each other, and isolate the cause of bugs.
If you chose semantic versioning, this transitional period lasts until the next major release, from 1.x to 2.0, or from 2.x to 3.0, and so on. The gentle way to delete a creature's legs is to give it at least one version in which to adjust its lifestyle. Don't remove the legs all at once!
![A skink][14]
Version numbers, deprecation warnings, the changelog, and the upgrade guide work together to gently evolve your library without breaking the covenant with your users. The [Twisted project's Compatibility Policy][15] explains this beautifully:
> "The First One's Always Free"
>
> Any application which runs without warnings may be upgraded one minor version of Twisted.
>
> In other words, any application which runs its tests without triggering any warnings from Twisted should be able to have its Twisted version upgraded at least once with no ill effects except the possible production of new warnings.
Now, we creator deities have gained the wisdom and power to add features by adding methods and to delete them gently. We can also add features by adding parameters, but this brings a new level of difficulty. Are you ready?
### Adding parameters
Imagine that you just gave your snake-like creature a pair of wings. Now you must allow it the choice whether to move by slithering or flying. Currently its "move" function takes one parameter.
```
# Your library code.
def move(direction):
    print(f'slither {direction}')

# A user's application.
move('north')
```
You want to add a "mode" parameter, but this breaks your users' code if they upgrade, because they pass only one argument.
```
# Your library code.
def move(direction, mode):
    assert mode in ('slither', 'fly')
    print(f'{mode} {direction}')

# A user's application. Error!
move('north')
```
A truly wise creator promises not to break users' code this way.
#### Ninth covenant: Add parameters compatibly
To keep this covenant, add each new parameter with a default value that preserves the original behavior.
```
# Your library code.
def move(direction, mode='slither'):
    assert mode in ('slither', 'fly')
    print(f'{mode} {direction}')

# A user's application.
move('north')
```
Over time, parameters are the natural history of your function's evolution. They're listed oldest first, each with a default value. Library users can pass keyword arguments to opt into specific new behaviors and accept the defaults for all others.
```
# Your library code.
def move(direction,
         mode='slither',
         turbo=False,
         extra_sinuous=False,
         hail_lyft=False):
    # ...

# A user's application.
move('north', extra_sinuous=True)
```
There is a danger, however, that a user might write code like this:
```
# A user's application, poorly-written.
move('north', 'slither', False, True)
```
What happens if, in the next major version of your library, you get rid of one of the parameters, like "turbo"?
```
# Your library code, next major version. "turbo" is deleted.
def move(direction,
         mode='slither',
         extra_sinuous=False,
         hail_lyft=False):
    # ...

# A user's application, poorly-written.
move('north', 'slither', False, True)
```
The user's code still compiles, and this is a bad thing. The code stopped moving extra-sinuously and started hailing a Lyft, which was not the intention. I trust that you can predict what I'll say next: Deleting a parameter requires several steps. First, of course, deprecate the "turbo" parameter. I like a technique like this one, which detects whether any user's code relies on this parameter.
```
# Your library code.
_turbo_default = object()

def move(direction,
         mode='slither',
         turbo=_turbo_default,
         extra_sinuous=False,
         hail_lyft=False):
    if turbo is not _turbo_default:
        warnings.warn(
            "'turbo' is deprecated",
            DeprecationWarning,
            stacklevel=2)
    else:
        # The old default.
        turbo = False
```
But your users might not notice the warning. Warnings are not very loud: they can be suppressed or lost in log files. Users might heedlessly upgrade to the next major version of your library, the version that deletes "turbo." Their code will run without error and silently do the wrong thing! As the Zen of Python says, "Errors should never pass silently." Indeed, reptiles hear poorly, so you must correct them very loudly when they make mistakes.
![Woman riding an alligator][16]
The best way to protect your users is with Python 3's star syntax, which requires callers to pass keyword arguments.
```
# Your library code.
# All arguments after "*" must be passed by keyword.
def move(direction,
         *,
         mode='slither',
         turbo=False,
         extra_sinuous=False,
         hail_lyft=False):
    # ...

# A user's application, poorly-written.
# Error! Can't use positional args, keyword args required.
move('north', 'slither', False, True)
```
With the star in place, this is the only syntax allowed:
```
# A user's application.
move('north', extra_sinuous=True)
```
Now when you delete "turbo," you can be certain any user code that relies on it will fail loudly. If your library also supports Python 2, there's no shame in that; you can simulate the star syntax thus ([credit to Brett Slatkin][17]):
```
# Your library code, Python 2 compatible.
def move(direction, **kwargs):
    mode = kwargs.pop('mode', 'slither')
    turbo = kwargs.pop('turbo', False)
    sinuous = kwargs.pop('extra_sinuous', False)
    lyft = kwargs.pop('hail_lyft', False)
    if kwargs:
        raise TypeError('Unexpected kwargs: %r'
                        % kwargs)
    # ...
```
Requiring keyword arguments is a wise choice, but it requires foresight. If you allow an argument to be passed positionally, you cannot convert it to keyword-only in a later release. So, add the star now. You can observe in the asyncio API that it uses the star pervasively in constructors, methods, and functions. Even though "Lock" only takes one optional parameter so far, the asyncio developers added the star right away. This is providential.
```
# In asyncio.
class Lock:
    def __init__(self, *, loop=None):
        # ...
```
Now we've gained the wisdom to change methods and parameters while keeping our covenant with users. The time has come to try the most challenging kind of evolution: changing behavior without changing either methods or parameters.
### Changing behavior
Let's say your creature is a rattlesnake, and you want to teach it a new behavior.
![Rattlesnake][18]
Sidewinding! The creature's body will appear the same, but its behavior will change. How can we prepare it for this step of its evolution?
![][19]
Image by HCA [[CC BY-SA 4.0][20]], [via Wikimedia Commons][21], modified by Opensource.com
A responsible creator can learn from the following example in the Python standard library, when behavior changed without a new function or parameters. Once upon a time, the os.stat function was introduced to get file statistics, like the creation time. At first, times were always integers.
```
>>> os.stat('file.txt').st_ctime
1540817862
```
One day, the core developers decided to use floats for os.stat times to give sub-second precision. But they worried that existing user code wasn't ready for the change. They created a setting in Python 2.3, "stat_float_times," that was false by default. A user could set it to True to opt into floating-point timestamps.
```
>>> # Python 2.3.
>>> os.stat_float_times(True)
>>> os.stat('file.txt').st_ctime
1540817862.598021
```
Starting in Python 2.5, float times became the default, so any new code written for 2.5 and later could ignore the setting and expect floats. Of course, you could set it to False to keep the old behavior or set it to True to ensure the new behavior in all Python versions, and prepare your code for the day when stat_float_times is deleted.
Ages passed. In Python 3.1, the setting was deprecated to prepare people for the distant future and finally, after its decades-long journey, [the setting was removed][22]. Float times are now the only option. It's a long road, but responsible deities are patient because we know this gradual process has a good chance of saving users from unexpected behavior changes.
#### Tenth covenant: Change behavior gradually
Here are the steps (a minimal sketch of the first step follows the list):
* Add a flag to opt into the new behavior, default False, warn if it's False
* Change default to True, deprecate flag entirely
* Remove the flag
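Here is what step one might look like for our sidewinding rattlesnake; the flag name and module shape are hypothetical, not from the original article:
```
import warnings

# Hypothetical module-level opt-in flag for the new behavior.
sidewinding = False

def move(direction):
    if sidewinding:
        print(f'sidewind {direction}')
    else:
        warnings.warn(
            "straight slithering is deprecated; "
            "set sidewinding=True to opt into the new behavior",
            DeprecationWarning, stacklevel=2)
        print(f'slither {direction}')

move('north')  # warns until the user opts in
```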
If you follow semantic versioning, the versions might be like so:
Library version | Library API | User code
---|---|---
1.0 | No flag | Expect old behavior
1.1 | Add flag, default False, warn if it's False | Set flag True, handle new behavior
2.0 | Change default to True, deprecate flag entirely | Handle new behavior
3.0 | Remove flag | Handle new behavior
You need _two_ major releases to complete the maneuver. If you had gone straight from "Add flag, default False, warn if it's False" to "Remove flag" without the intervening release, your users' code would be unable to upgrade. User code written correctly for 1.1, which sets the flag to True and handles the new behavior, must be able to upgrade to the next release with no ill effect except new warnings, but if the flag were deleted in the next release, that code would break. A responsible deity never violates the Twisted policy: "The First One's Always Free."
### The responsible creator
![Demeter][23]
Our 10 covenants belong loosely in three categories:
**Evolve cautiously**
1. Avoid bad features
2. Minimize features
3. Keep features narrow
4. Mark experimental features "provisional"
5. Delete features gently
**Record history rigorously**
1. Maintain a changelog
2. Choose a version scheme
3. Write an upgrade guide
**Change slowly and loudly**
1. Add parameters compatibly
2. Change behavior gradually
If you keep these covenants with your creature, you'll be a responsible creator deity. Your creature's body can evolve over time, forever improving and adapting to changes in its environment but without sudden changes the creature isn't prepared for. If you maintain a library, keep these promises to your users and you can innovate your library without breaking the code of the people who rely on you.
* * *
_This article originally appeared on [A. Jesse Jiryu Davis's blog][24] and is republished with permission._
Illustration credits:
* [The World's Progress, The Delphian Society, 1913][25]
* [Essay Towards a Natural History of Serpents, Charles Owen, 1742][26]
* [On the batrachia and reptilia of Costa Rica: With notes on the herpetology and ichthyology of Nicaragua and Peru, Edward Drinker Cope, 1875][27]
  * [Natural History, Richard Lydekker et al., 1897][28]
* [Mes Prisons, Silvio Pellico, 1843][29]
* [Tierfotoagentur / m.blue-shadow][30]
* [Los Angeles Public Library, 1930][31]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/api-evolution-right-way
作者:[A. Jesse][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/emptysquare
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things)
[2]: https://opensource.com/sites/default/files/uploads/praise-the-creator.jpg (Serpents)
[3]: https://opensource.com/sites/default/files/uploads/bite.jpg (Man being chased by an alligator)
[4]: https://bugs.python.org/issue13936
[5]: https://opensource.com/sites/default/files/uploads/feathers.jpg (Serpents with and without feathers)
[6]: https://bugs.python.org/issue32591
[7]: https://opensource.com/sites/default/files/uploads/horns.jpg (Serpent with horns)
[8]: https://opensource.com/sites/default/files/uploads/lizard-to-snake.jpg (Lizard transformed to snake)
[9]: https://opensource.com/sites/default/files/uploads/mammal.jpg (A mouse)
[10]: https://bugs.python.org/issue32253
[11]: https://opensource.com/sites/default/files/uploads/scale.jpg (Balance scales)
[12]: https://semver.org
[13]: https://www.python.org/dev/peps/pep-0440/
[14]: https://opensource.com/sites/default/files/uploads/skink.jpg (A skink)
[15]: https://twistedmatrix.com/documents/current/core/development/policy/compatibility-policy.html
[16]: https://opensource.com/sites/default/files/uploads/loudly.jpg (Woman riding an alligator)
[17]: http://www.informit.com/articles/article.aspx?p=2314818
[18]: https://opensource.com/sites/default/files/uploads/rattlesnake.jpg (Rattlesnake)
[19]: https://opensource.com/sites/default/files/articles/neonate_sidewinder_sidewinding_with_tracks_unlabeled.png
[20]: https://creativecommons.org/licenses/by-sa/4.0
[21]: https://commons.wikimedia.org/wiki/File:Neonate_sidewinder_sidewinding_with_tracks_unlabeled.jpg
[22]: https://bugs.python.org/issue31827
[23]: https://opensource.com/sites/default/files/uploads/demeter.jpg (Demeter)
[24]: https://emptysqua.re/blog/api-evolution-the-right-way/
[25]: https://www.gutenberg.org/files/42224/42224-h/42224-h.htm
[26]: https://publicdomainreview.org/product-att/artist/charles-owen/
[27]: https://archive.org/details/onbatrachiarepti00cope/page/n3
[28]: https://www.flickr.com/photos/internetarchivebookimages/20556001490
[29]: https://www.oldbookillustrations.com/illustrations/stationery/
[30]: https://www.alamy.com/mediacomp/ImageDetails.aspx?ref=D7Y61W
[31]: https://www.vintag.es/2013/06/riding-alligator-c-1930s.html

View File

@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Check your spelling at the command line with Ispell)
[#]: via: (https://opensource.com/article/19/5/spelling-command-line-ispell)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
Check your spelling at the command line with Ispell
======
Ispell helps you stamp out typos in plain text files written in more
than 50 languages.
![Command line prompt][1]
Good spelling is a skill. A skill that takes time to learn and to master. That said, there are people who never quite pick that skill up—I know a couple or three outstanding writers who can't spell to save their lives.
Even if you spell well, the occasional typo creeps in. That's especially true if you're quickly banging on your keyboard to meet a deadline. Regardless of your spelling chops, it's always a good idea to run what you've written through a spelling checker.
I do most of my writing in [plain text][2] and often use a command line spelling checker called [Aspell][3] to do the deed. Aspell isn't the only game in town. You might also want to check out the venerable [Ispell][4].
### Getting started
Ispell's been around, in various forms, since 1971. Don't let its age fool you. Ispell is still a peppy application that you can use effectively in the 21st century.
Before doing anything else, check whether or not Ispell is installed on your computer by cracking open a terminal window and typing **which ispell**. If it isn't installed, fire up your distribution's package manager and install Ispell from there.
Don't forget to install dictionaries for the languages you work in, too. My only language is English, so I just need to worry about grabbing the US and British English dictionaries. You're not limited to my mother (and only) tongue. Ispell has [dictionaries for over 50 languages][5].
![Installing Ispell dictionaries][6]
### Using Ispell
If you haven't guessed already, Ispell only works with text files. That includes ones marked up with HTML, LaTeX, and [nroff or troff][7]. More on this in a few moments.
To get to work, open a terminal window and navigate to the directory containing the file where you want to run a spelling check. Type **ispell** followed by the file's name and then press Enter.
![Checking spelling with Ispell][8]
Ispell highlights the first word it doesn't recognize. If the word is misspelled, Ispell usually offers one or more alternatives. Press **R** and then the number beside the correct choice. In the screen capture above, I'd press **R** and **0** to fix the error.
If, on the other hand, the word is correctly spelled, press **A** to move to the next misspelled word.
Keep doing that until you reach the end of the file. Ispell saves your changes, creates a backup of the file you just checked (with the extension _.bak_), and shuts down.
### A few other options
This example illustrates basic Ispell usage. The program has a [number of options][9], some of which you _might_ use and others you _might never_ use. Let's take a quick peek at some of the ones I regularly use.
A few paragraphs ago, I mentioned that Ispell works with certain markup languages. You need to tell it a file's format. When starting Ispell, add **-t** for a TeX or LaTeX file, **-H** for an HTML file, or **-n** for a groff or troff file. For example, if you enter **ispell -t myReport.tex**, Ispell ignores all markup.
If you don't want the backup file that Ispell creates after checking a file, add **-x** to the command line—for example, **ispell -x myFile.txt**.
What happens if Ispell runs into a word that's spelled correctly but isn't in its dictionary, like a proper name? You can add that word to a personal word list by pressing **I**. This saves the word to a file called _.ispell_default_ in the root of your _/home_ directory.
Those are the options I find most useful when working with Ispell, but check out [Ispell's man page][9] for descriptions of all its options.
Is Ispell any better or faster than Aspell or any other command line spelling checker? I have to say it's no worse than any of them, nor is it any slower. Ispell's not for everyone. It might not be for you. But it is good to have options, isn't it?
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/spelling-command-line-ispell
作者:[Scott Nesbitt ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://plaintextproject.online
[3]: https://opensource.com/article/18/2/how-check-spelling-linux-command-line-aspell
[4]: https://www.cs.hmc.edu/~geoff/ispell.html
[5]: https://www.cs.hmc.edu/~geoff/ispell-dictionaries.html
[6]: https://opensource.com/sites/default/files/uploads/ispell-install-dictionaries.png (Installing Ispell dictionaries)
[7]: https://opensource.com/article/18/2/how-format-academic-papers-linux-groff-me
[8]: https://opensource.com/sites/default/files/uploads/ispell-checking.png (Checking spelling with Ispell)
[9]: https://www.cs.hmc.edu/~geoff/ispell-man.html

View File

@ -0,0 +1,107 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Say goodbye to boilerplate in Python with attrs)
[#]: via: (https://opensource.com/article/19/5/python-attrs)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/moshez)
Say goodbye to boilerplate in Python with attrs
======
Learn more about solving common Python problems in our series covering
seven PyPI libraries.
![Programming at a browser, orange hands][1]
Python is one of the most [popular programming languages][2] in use today—and for good reasons: it's open source, it has a wide range of uses (such as web programming, business applications, games, scientific programming, and much more), and it has a vibrant and dedicated community supporting it. This community is the reason we have such a large, diverse range of software packages available in the [Python Package Index][3] (PyPI) to extend and improve Python and solve the inevitable glitches that crop up.
In this series, we'll look at seven PyPI libraries that can help you solve common Python problems. Today, we'll examine [**attrs**][4], a Python package that helps you write concise, correct code quickly.
### attrs
If you have been using Python for any length of time, you are probably used to writing code like:
```
class Book(object):
    def __init__(self, isbn, name, author):
        self.isbn = isbn
        self.name = name
        self.author = author
```
Then you write a **__repr__** function; otherwise, it would be hard to log instances of **Book**:
```
def __repr__(self):
return f"Book({self.isbn}, {self.name}, {self.author})"
```
Next, you write a nice docstring documenting the expected types. But you notice you forgot to add the **edition** and **published_year** attributes, so you have to modify them in five places.
What if you didn't have to?
```
import attr

@attr.s(auto_attribs=True)
class Book(object):
    isbn: str
    name: str
    author: str
    published_year: int
    edition: int
```
When you annotate the attributes with types using the new type annotation syntax, **attrs** detects the annotations and creates a class.
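As a quick usage sketch (the ISBN and title below are invented, and the class from above is repeated so the snippet is self-contained), **attrs** generates **__init__** and **__repr__** for you:
```
import attr

@attr.s(auto_attribs=True)
class Book(object):
    isbn: str
    name: str
    author: str
    published_year: int
    edition: int

# attrs wrote __init__ and __repr__ for us; the values here are made up.
book = Book("978-0-13-468599-1", "Example Book", "Jane Author", 2019, 1)
print(book)
# Book(isbn='978-0-13-468599-1', name='Example Book', author='Jane Author',
#      published_year=2019, edition=1)
```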
ISBNs have a specific format. What if we want to enforce that format?
```
import re

import attr

@attr.s(auto_attribs=True)
class Book(object):
    isbn: str = attr.ib()

    @isbn.validator
    def pattern_match(self, attribute, value):
        m = re.match(r"^(\d{3}-)\d{1,3}-\d{2,3}-\d{1,7}-\d$", value)
        if not m:
            raise ValueError("incorrect format for isbn", value)

    name: str
    author: str
    published_year: int
    edition: int
```
The **attrs** library also has great support for [immutability-style programming][5]. Changing the first line to **@attr.s(auto_attribs=True, frozen=True)** means that **Book** is now immutable: trying to modify an attribute will raise an exception. Instead, we can get a _new_ instance with the modification using **attr.evolve(old_book, published_year=old_book.published_year+1)**, for example, if we need to push publication forward by a year.
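Here is a minimal sketch of that frozen-plus-evolve workflow; the field values are invented:
```
import attr

@attr.s(auto_attribs=True, frozen=True)
class Book(object):
    isbn: str
    name: str
    author: str
    published_year: int
    edition: int

old_book = Book("978-0-13-468599-1", "Example Book", "Jane Author", 2019, 1)

# Mutation is blocked on a frozen class; uncommenting the next line
# would raise attr.exceptions.FrozenInstanceError.
# old_book.published_year = 2020

# evolve() returns a new instance with the changed field.
new_book = attr.evolve(old_book, published_year=old_book.published_year + 1)
print(new_book.published_year)  # 2020
```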
In the next article in this series, we'll look at **singledispatch**, a library that allows you to add methods to Python libraries retroactively.
#### Review the previous articles in this series
* [Cython][6]
* [Black][7]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/python-attrs
作者:[Moshe Zadka ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_keyboard_orange_hands.png?itok=G6tJ_64Y (Programming at a browser, orange hands)
[2]: https://opensource.com/article/18/5/numbers-python-community-trends
[3]: https://pypi.org/
[4]: https://pypi.org/project/attrs/
[5]: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures
[6]: https://opensource.com/article/19/4/7-python-problems-solved-cython
[7]: https://opensource.com/article/19/4/python-problems-solved-black

View File

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tutanota Launches New Encrypted Tool to Support Press Freedom)
[#]: via: (https://itsfoss.com/tutanota-secure-connect/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
Tutanota Launches New Encrypted Tool to Support Press Freedom
======
A secure email provider has announced the release of a new product designed to help whistleblowers get their information to the media. The tool is free for journalists.
### Tutanota helps you protect your privacy
![][1]
[Tutanota][2] is a German-based company that provides the “world's most secure email service, easy to use and private by design.” They offer end-to-end encryption for their [secure email service][3]. Recently, Tutanota announced a [desktop app for their email service][4].
They also make use of two-factor authentication and [open source the code][5] that they use.
While you can get an account for free, you don't have to worry about your information being sold or seeing ads. Tutanota makes money by charging for extra features and storage. They also offer solutions for non-profit organizations.
Tutanota has launched a new service to further help journalists, social activists and whistleblowers in communicating securely.
### Secure Connect: An encrypted form for websites
![][7]
Tutanota has released a new piece of software named Secure Connect. Secure Connect is “an open source encrypted contact form for news sites”. The goal of the project is to create a way so that “whistleblowers can get in touch with journalists securely”. Tutanota picked the right day because May 3rd is the [Day of Press Freedom][8].
According to Tutanota, Secure Connect is designed to be easily added to websites, but it can also work on any blog to ensure access by smaller news agencies. A whistleblower would access the Secure Connect app on a news site, preferably using Tor, and type in any information that they want to bring to light. The whistleblower would also be able to upload files. Once they submit the information, Secure Connect will assign a random address and password, “which lets the whistleblower re-access his sent message at a later stage and check for replies from the news site.”
![Secure Connect Encrypted Contact Form][9]
While Tutanota will be offering Secure Connect to journalists for free, they know that someone will have to foot the bill. They plan to pay for further development of the project by selling it to businesses, such as “lawyers, financial institutions, medical institutions, educational institutions, and the authorities”. Non-journalists would have to pay €24 per month.
You can see a demo of Secure Connect by clicking [here][10]. If you are a journalist interested in adding Secure Connect to your website or blog, you can contact them at [[email protected]][11]. Be sure to include a link to your website.
### Final Thoughts on Secure Connect
I have read repeatedly about whistleblowers whose identities were accidentally exposed, either by themselves or others. Tutanota's project looks like it would remove that possibility by making it impossible for others to discover their identity. It also gives both parties an easy way to exchange information without having to worry about encryption or PGP keys.
I understand that it's not the same as [Firefox Send][13], another encrypted file sharing program from Mozilla. The only question I have is: whose servers will the whistleblower's information be sitting on?
Do you think that Tutanota's Secure Connect will be a boon for whistleblowers and activists? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][14].
--------------------------------------------------------------------------------
via: https://itsfoss.com/tutanota-secure-connect/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2018/02/tutanota-featured-800x450.png
[2]: https://tutanota.com/
[3]: https://itsfoss.com/tutanota-review/
[4]: https://itsfoss.com/tutanota-desktop/
[5]: https://tutanota.com/blog/posts/open-source-email
[6]: https://itsfoss.com/librem-one/
[7]: https://itsfoss.com/wp-content/uploads/2019/05/secure-communication.jpg
[8]: https://en.wikipedia.org/wiki/World_Press_Freedom_Day
[9]: https://itsfoss.com/wp-content/uploads/2019/05/secure-connect-encrypted-contact-form.png
[10]: https://secureconnect.tutao.de/contactform/demo
[11]: /cdn-cgi/l/email-protection
[12]: https://itsfoss.com/privacy-search-engines/
[13]: https://itsfoss.com/firefox-send/
[14]: http://reddit.com/r/linuxusersgroup

View File

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Add methods retroactively in Python with singledispatch)
[#]: via: (https://opensource.com/article/19/5/python-singledispatch)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Add methods retroactively in Python with singledispatch
======
Learn more about solving common Python problems in our series covering
seven PyPI libraries.
![][1]
Python is one of the most [popular programming languages][2] in use today—and for good reasons: it's open source, it has a wide range of uses (such as web programming, business applications, games, scientific programming, and much more), and it has a vibrant and dedicated community supporting it. This community is the reason we have such a large, diverse range of software packages available in the [Python Package Index][3] (PyPI) to extend and improve Python and solve the inevitable glitches that crop up.
In this series, we'll look at seven PyPI libraries that can help you solve common Python problems. Today, we'll examine [**singledispatch**][4], a library that allows you to add methods to Python libraries retroactively.
### singledispatch
Imagine you have a "shapes" library with a **Circle** class, a **Square** class, etc.
A **Circle** has a **radius**, a **Square** has a **side**, and a **Rectangle** has **height** and **width**. Our library already exists; we do not want to change it.
However, we do want to add an **area** calculation to our library. If we didn't share this library with anyone else, we could just add an **area** method so we could call **shape.area()** and not worry about what the shape is.
While it is possible to reach into a class and add a method, this is a bad idea: nobody expects their class to grow new methods, and things might break in weird ways.
Instead, the **singledispatch** function in **functools** can come to our rescue.
```
from functools import singledispatch

@singledispatch
def get_area(shape):
    raise NotImplementedError("cannot calculate area for unknown shape",
                              shape)
```
The "base" implementation for the **get_area** function fails. This makes sure that if we get a new shape, we will fail cleanly instead of returning a nonsense result.
```
import math

@get_area.register(Square)
def _get_area_square(shape):
    return shape.side ** 2

@get_area.register(Circle)
def _get_area_circle(shape):
    return math.pi * (shape.radius ** 2)
```
One nice thing about doing things this way is that if someone writes a _new_ shape that is intended to play well with our code, they can implement **get_area** themselves.
```
import math

import attr

from area_calculator import get_area

@attr.s(auto_attribs=True, frozen=True)
class Ellipse:
    horizontal_axis: float
    vertical_axis: float

@get_area.register(Ellipse)
def _get_area_ellipse(shape):
    return math.pi * shape.horizontal_axis * shape.vertical_axis
```
_Calling_ **get_area** is straightforward.
```
print(get_area(shape))
```
This means we can change a function that has a long **if isinstance()/elif isinstance()** chain to work this way, without changing the interface. The next time you are tempted to check **if isinstance**, try using **singledispatch**!
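For contrast, here is a hypothetical sketch of the kind of chain **singledispatch** replaces, assuming the same **Square** and **Circle** classes as above; every new shape would force another edit to this one function:
```
import math

def get_area_chained(shape):
    # The "before" version: one branch per known shape.
    if isinstance(shape, Square):
        return shape.side ** 2
    elif isinstance(shape, Circle):
        return math.pi * (shape.radius ** 2)
    raise NotImplementedError("cannot calculate area for unknown shape",
                              shape)
```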
In the next article in this series, we'll look at **tox**, a tool for automating tests on Python code.
#### Review the previous articles in this series:
* [Cython][5]
* [Black][6]
* [attrs][7]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/python-singledispatch
作者:[Moshe Zadka ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV
[2]: https://opensource.com/article/18/5/numbers-python-community-trends
[3]: https://pypi.org/
[4]: https://pypi.org/project/singledispatch/
[5]: https://opensource.com/article/19/4/7-python-problems-solved-cython
[6]: https://opensource.com/article/19/4/python-problems-solved-black
[7]: https://opensource.com/article/19/4/python-problems-solved-attrs

View File

@ -0,0 +1,93 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (May the fourth be with you: How Star Wars (and Star Trek) inspired real life tech)
[#]: via: (https://opensource.com/article/19/5/may-the-fourth-star-wars-trek)
[#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyas)
May the fourth be with you: How Star Wars (and Star Trek) inspired real life tech
======
The technologies may have been fictional, but these two acclaimed sci-fi
series have inspired open source tech.
![Triangulum galaxy, NASA][1]
Conventional wisdom says you can either be a fan of _Star Trek_ or of _Star Wars_, but mixing the two is like mixing matter and anti-matter. I'm not sure that's true, but even if the laws of physics cannot be changed, these two acclaimed sci-fi series have influenced the open source universe and created their own open source multi-verses.
For example, fans have used the original _Star Trek_ as "source code" to create fan-made films, cartoons, and games. One of the more notable fan creations was the web series _Star Trek Continues_ , which faithfully adapted Gene Roddenberry's universe and redistributed it to the world.
"Eventually we realized that there is no more profound way in which people could express what _Star Trek_ has meant to them than by creating their own very personal _Star Trek_ things," [Roddenberry said][2]. However, due to copyright restrictions, this "open source" channel [has since been curtailed][3].
_Star Wars_ has a different approach to open sourcing its universe. [Jess Paguaga writes][4] on FanSided: "With a variety [of] fan film awards dating back to 2002, the _Star Wars_ brand has always supported and encouraged the creation of short films that help expand the universe of a galaxy far, far away."
But, _Star Wars_ is not without its own copyright prime directives. In one case, a Darth Vader film by a YouTuber called Star Wars Theory has drawn a copyright claim from Disney. The claim does not stop production of the film, but diverts monetary gains from it, [reports James Richards][5] on FanSided.
This could be one of the [Ferengi Rules of Acquisition][6], perhaps.
But if you can't watch your favorite fan film, you can still get your [_Star Wars_ fix right in the Linux terminal][7] by entering:
```
telnet towel.blinkenlights.nl
```
And _Star Trek_ fans can also interact with the Federation with the original text-based video game from 1971. While a high-school senior, Mike Mayfield ported the game from punch cards to HP BASIC. If you'd like to go old school and battle Klingons, the source code is available at the [Code Project][8].
### Real-life star tech
Both _Star Wars_ and _Star Trek_ have inspired real-life technologies. Although those technologies were fictional, many have become the practical, open technology we use today. Some of them inspired technologies that are still in development now.
In the early 1970s, Motorola engineer Martin Cooper was trying to beat AT&T at the car-phone game. He says he was watching Captain Kirk use a "communicator" on an episode of _Star Trek_ and had a eureka moment. His team went on to create the first portable cellular 800MHz phone prototype in 90 days.
In _Star Wars_ , scout stormtroopers of the Galactic Empire rode the Aratech 74-Z Speeder Bike, and a real-life counterpart is the [Aero-X][9] being developed by California's Aerofex.
Perhaps the most visible _Star Wars_ tech to enter our lives is droids. We first encountered R2-D2 back in the 1970s, but now we have droids vacuuming our carpets and mowing our lawns, from Roombas to the [Worx Landroid][10] lawnmower.
And, in _Star Wars_ , Princess Leia appeared to Obi-Wan Kenobi as a hologram, and in Star Trek: Voyager, the ship's chief medical officer was an interactive hologram that could diagnose and treat patients. The technology to bring characters like these to "life" is still a ways off, but there are some interesting open source developments that hint of things to come. [OpenHolo][11], "an open source library containing algorithms and software implementations for holograms in various fields," is one such project.
### Where's the beef?
> "She handled… real meat… touched it, and cut it?" —Keiko O'Brien, Star Trek: The Next Generation
In the _Star Trek_ universe, crew members get their meals by simply ordering a replicator to produce whatever food they desire. That could one day become a reality thanks to a concept created by two German students for an open source "meat-printer" they call the [Cultivator][12]. It would use bio-printing to produce something that appears to be meat; the user could even select its mineral and fat content. Perhaps with more collaboration and development, the Cultivator could become the replicator in tomorrow's kitchen!
### The 501st
Cosplayers, people from all walks of life who dress as their favorite characters, are the "open source embodiment" of their favorite universes. The [501st Legion][13] is an all-volunteer _Star Wars_ fan organization "formed for the express purpose of bringing together costume enthusiasts under a collective identity within which to operate," according to its charter.
Jon Stallard, a member of Garrison Tyranus, the Central Virginia chapter of the 501st Legion says, "Everybody wanted to be something else when they were a kid, right? Whether it was Neil Armstrong, Batman, or the Six Million Dollar Man. Every backyard playdate was some kind of make-believe. The 501st lets us participate in our fan communities while contributing to the community at large."
Are cosplayers really "open source characters"? Well, that depends. The copyright laws around cosplay and using unique props, costumes, and more are very complex, [writes Meredith Filak Rose][14] for _Public Knowledge_. "We're lucky to be living in a time where fandom generally enjoys a positive relationship with the creators whose work it admires," Rose concludes.
So, it is safe to say that stormtroopers, Ferengi, Vulcans, and Yoda are all here to stay for a long, long time, near, and far, far away.
Live long and prosper, you shall.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/may-the-fourth-star-wars-trek
作者:[Jeff Macharyas ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jeffmacharyas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/triangulum_galaxy_nasa_stars.jpg?itok=NdS19A7m
[2]: https://fanlore.org/wiki/Gene_Roddenberry#His_Views_Regarding_Fanworks
[3]: https://trekmovie.com/2016/06/23/cbs-and-paramount-release-fan-film-guidelines/
[4]: https://dorksideoftheforce.com/2019/01/17/star-wars-fan-films/
[5]: https://dorksideoftheforce.com/2019/01/16/disney-claims-copyright-star-wars-theory/
[6]: https://en.wikipedia.org/wiki/Rules_of_Acquisition
[7]: https://itsfoss.com/star-wars-linux/
[8]: https://www.codeproject.com/Articles/28228/Star-Trek-1971-Text-Game
[9]: https://www.livescience.com/58943-real-life-star-wars-technology.html
[10]: https://www.digitaltrends.com/cool-tech/best-robot-lawnmowers/
[11]: http://openholo.org/
[12]: https://www.pastemagazine.com/articles/2016/05/the-future-is-vegan-according-to-star-trek.html
[13]: https://www.501st.com/
[14]: https://www.publicknowledge.org/news-blog/blogs/copyright-and-cosplay-working-with-an-awkward-fit

View File

@ -0,0 +1,240 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using the force at the Linux command line)
[#]: via: (https://opensource.com/article/19/5/may-the-force-linux)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
Using the force at the Linux command line
======
Like the Jedi Force, -f is powerful, potentially destructive, and very
helpful when you know how to use it.
![Fireworks][1]
Sometime in recent history, sci-fi nerds began an annual celebration of everything [_Star Wars_ on May the 4th][2], a pun on the Jedi blessing, "May the Force be with you." Although most Linux users are probably not Jedi, they still have ways to use the force. Of course, the movie might not have been quite as exciting if Yoda simply told Luke to type **man X-Wing fighter** or **man force**. Or if he'd said, "RTFM" (Read the Force Manual, of course).
Many Linux commands have an **-f** option, which stands for, you guessed it, force! Sometimes when you execute a command, it fails or prompts you for additional input. This may be an effort to protect the files you are trying to change or inform the user that a device is busy or a file already exists.
If you don't want to be bothered by prompts or don't care about errors, use the force!
Be aware that using a command's force option to override these protections is, generally, destructive. Therefore, the user needs to pay close attention and be sure that they know what they are doing. Using the force can have consequences!
Following are four Linux commands with a force option and a brief description of how and why you might want to use it.
### cp
The **cp** command is short for copy—it's used to copy (or duplicate) a file or directory. The [man page][3] describes the force option for **cp** as:
```
-f, --force
if an existing destination file cannot be opened, remove it
and try again
```
This example is for when you are working with read-only files:
```
[alan@workstation ~]$ ls -l
total 8
-rw-rw---- 1 alan alan 13 May 1 12:24 Hoth
-r--r----- 1 alan alan 14 May 1 12:23 Naboo
[alan@workstation ~]$ cat Hoth Naboo
Icy Planet
Green Planet
```
If you want to copy a file called _Hoth_ to _Naboo_ , the **cp** command will not allow it since _Naboo_ is read-only:
```
[alan@workstation ~]$ cp Hoth Naboo
cp: cannot create regular file 'Naboo': Permission denied
```
But by using the force, **cp** will not prompt. The contents and permissions of _Hoth_ will immediately be copied to _Naboo_ :
```
[alan@workstation ~]$ cp -f Hoth Naboo
[alan@workstation ~]$ cat Hoth Naboo
Icy Planet
Icy Planet
[alan@workstation ~]$ ls -l
total 8
-rw-rw---- 1 alan alan 12 May 1 12:32 Hoth
-rw-rw---- 1 alan alan 12 May 1 12:38 Naboo
```
Oh no! I hope they have winter gear on Naboo.
### ln
The **ln** command is used to make links between files. The [man page][4] describes the force option for **ln** as:
```
-f, --force
remove existing destination files
```
Suppose Princess Leia is maintaining a Java application server and she has a directory where all Java versions are stored. Here is an example:
```
leia@workstation:/usr/lib/java$ ls -lt
total 28
lrwxrwxrwx 1 leia leia 12 Mar 5 2018 jdk -> jdk1.8.0_162
drwxr-xr-x 8 leia leia 4096 Mar 5 2018 jdk1.8.0_162
drwxr-xr-x 8 leia leia 4096 Aug 28 2017 jdk1.8.0_144
```
As you can see, there are several versions of the Java Development Kit (JDK) and a symbolic link pointing to the latest one. She uses a script with the following commands to install new JDK versions. However, it won't work without a force option or unless the root user runs it:
```
tar xvzmf jdk1.8.0_181.tar.gz -C jdk1.8.0_181/
ln -vs jdk1.8.0_181 jdk
```
The **tar** command will extract the .gz file to the specified directory, but the **ln** command will fail to upgrade the link because one already exists. The result will be that the link no longer points to the latest JDK:
```
leia@workstation:/usr/lib/java$ ln -vs jdk1.8.0_181 jdk
ln: failed to create symbolic link 'jdk/jdk1.8.0_181': File exists
leia@workstation:/usr/lib/java$ ls -lt
total 28
drwxr-x--- 2 leia leia 4096 May 1 15:44 jdk1.8.0_181
lrwxrwxrwx 1 leia leia 12 Mar 5 2018 jdk -> jdk1.8.0_162
drwxr-xr-x 8 leia leia 4096 Mar 5 2018 jdk1.8.0_162
drwxr-xr-x 8 leia leia 4096 Aug 28 2017 jdk1.8.0_144
```
She can force **ln** to update the link correctly by passing the force option and one other, **-n**. The **-n** is needed because the link points to a directory. Now, the link again points to the latest JDK:
```
leia@workstation:/usr/lib/java$ ln -vsnf jdk1.8.0_181 jdk
'jdk' -> 'jdk1.8.0_181'
leia@workstation:/usr/lib/java$ ls -lt
total 28
lrwxrwxrwx 1 leia leia 12 May 1 16:13 jdk -> jdk1.8.0_181
drwxr-x--- 2 leia leia 4096 May 1 15:44 jdk1.8.0_181
drwxr-xr-x 8 leia leia 4096 Mar 5 2018 jdk1.8.0_162
drwxr-xr-x 8 leia leia 4096 Aug 28 2017 jdk1.8.0_144
```
A Java application can be configured to find the JDK with the path **/usr/lib/java/jdk** instead of having to change it every time Java is updated.
### rm
The **rm** command is short for "remove" (which we often call delete, since some other operating systems have a **del** command for this action). The [man page][5] describes the force option for **rm** as:
```
-f, --force
ignore nonexistent files and arguments, never prompt
```
If you try to delete a read-only file, you will be prompted by **rm** :
```
[alan@workstation ~]$ ls -l
total 4
-r--r----- 1 alan alan 16 May 1 11:38 B-wing
[alan@workstation ~]$ rm B-wing
rm: remove write-protected regular file 'B-wing'?
```
You must type either **y** or **n** to answer the prompt and allow the **rm** command to proceed. If you use the force option, **rm** will not prompt you and will immediately delete the file:
```
[alan@workstation ~]$ rm -f B-wing
[alan@workstation ~]$ ls -l
total 0
[alan@workstation ~]$
```
The most common use of force with **rm** is to delete a directory. The **-r** (recursive) option tells **rm** to remove a directory. When combined with the force option, it will remove the directory and all its contents without prompting.
The **rm** command with certain options can be disastrous. Over the years, online forums have filled with jokes and horror stories of users completely wiping their systems. This notorious usage is **rm -rf \***. This will immediately delete all files and directories without any prompt wherever it is used.
### userdel
The **userdel** command is short for user delete, which will delete a user. The [man page][6] describes the force option for **userdel** as:
```
-f, --force
This option forces the removal of the user account, even if the
user is still logged in. It also forces userdel to remove the
user's home directory and mail spool, even if another user uses
the same home directory or if the mail spool is not owned by the
specified user. If USERGROUPS_ENAB is defined to yes in
/etc/login.defs and if a group exists with the same name as the
deleted user, then this group will be removed, even if it is
still the primary group of another user.
Note: This option is dangerous and may leave your system in an
inconsistent state.
```
When Obi-Wan reached the castle on Mustafar, he knew what had to be done. He had to delete Darth's user account—but Darth was still logged in.
```
[root@workstation ~]# ps -fu darth
UID PID PPID C STIME TTY TIME CMD
darth 7663 7655 0 13:28 pts/3 00:00:00 -bash
[root@workstation ~]# userdel darth
userdel: user darth is currently used by process 7663
```
Since Darth is currently logged in, Obi-Wan has to use the force option to **userdel**. This will delete the user account even though it's logged in.
```
[root@workstation ~]# userdel -f darth
userdel: user darth is currently used by process 7663
[root@workstation ~]# finger darth
finger: darth: no such user.
[root@workstation ~]# ps -fu darth
error: user name does not exist
```
As you can see, the **finger** and **ps** commands confirm the user Darth has been deleted.
### Using force in shell scripts
Many other commands have a force option. One place force is very useful is in shell scripts. Since we use scripts in cron jobs and other automated operations, avoiding any prompts is crucial, or else these automated processes will not complete.
I hope the four examples I shared above help you understand how certain circumstances may require the use of force. You should have a strong understanding of the force option when using it at the command line or in automation scripts. Its misuse can have devastating effects—sometimes across your infrastructure, and not only on a single machine.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/may-the-force-linux
作者:[Alan Formy-Duval ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fireworks_light_art_design.jpg?itok=hfx9i4By (Fireworks)
[2]: https://www.starwars.com/star-wars-day
[3]: http://man7.org/linux/man-pages/man1/cp.1.html
[4]: http://man7.org/linux/man-pages/man1/ln.1.html
[5]: http://man7.org/linux/man-pages/man1/rm.1.html
[6]: http://man7.org/linux/man-pages/man8/userdel.8.html

View File

@ -0,0 +1,261 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Duc A Collection Of Tools To Inspect And Visualize Disk Usage)
[#]: via: (https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Duc A Collection Of Tools To Inspect And Visualize Disk Usage
======
![Duc - A Collection Of Tools To Inspect And Visualize Disk Usage][1]
**Duc** is a collection of tools that can be used to index, inspect and visualize disk usage on Unix-like operating systems. Don't think of it as a simple CLI tool that merely displays a fancy graph of your disk usage. It is built to scale quite well on huge filesystems. Duc has been tested on systems that consisted of more than 500 million files and several petabytes of storage without any problems.
Duc is a fast and versatile tool. It stores your disk usage in an optimized database, so you can quickly find where your bytes are as soon as the index is completed. In addition, it comes with various user interfaces and back-ends to access the database and draw the graphs.
Here is the list of currently supported user interfaces (UI):
1. Command line interface (ls),
2. Ncurses console interface (ui),
3. X11 GUI (duc gui),
4. OpenGL GUI (duc gui).
List of supported database back-ends:
* Tokyocabinet,
* Leveldb,
* Sqlite3.
Duc uses **Tokyocabinet** as the default database backend.
### Install Duc
Duc is available in the default repositories of Debian and its derivatives such as Ubuntu. So installing Duc on DEB-based systems is a piece of cake.
```
$ sudo apt-get install duc
```
On other Linux distributions, you may need to manually compile and install Duc from source as shown below.
Download the latest duc source .tgz file from the [**releases**][2] page on GitHub. As of writing this guide, the latest version was **1.4.4**.
```
$ wget https://github.com/zevv/duc/releases/download/1.4.4/duc-1.4.4.tar.gz
```
Then run the following commands one by one to install Duc.
```
$ tar -xzf duc-1.4.4.tar.gz
$ cd duc-1.4.4
$ ./configure
$ make
$ sudo make install
```
### Duc Usage
The typical usage of duc is:
```
$ duc <subcommand> <options>
```
You can view the list of general options and sub-commands by running the following command:
```
$ duc help
```
You can also view the usage of a specific subcommand as shown below.
```
$ duc help <subcommand>
```
To view the extensive list of all commands and their options, simply run:
```
$ duc help --all
```
Let us now see some practical use cases of the duc utility.
### Create Index (database)
First of all, you need to create an index file (database) of your filesystem. To create an index file, use the “duc index” command.
For example, to create an index of your **/home** directory, simply run:
```
$ duc index /home
```
The above command will create an index of your /home/ directory and save it in the **$HOME/.duc.db** file. If you add new files/directories to the /home directory in the future, just re-run the above command at any time to rebuild the index.
### Query Index
Duc has various sub-commands to query and explore the index.
To view the list of available indexes, run:
```
$ duc info
```
**Sample output:**
```
Date Time Files Dirs Size Path
2019-04-09 15:45:55 3.5K 305 654.6M /home
```
As you see in the above output, I have already indexed the /home directory.
To list all files and directories in the current working directory, you can do:
```
$ duc ls
```
To list files/directories in a specific directory, for example **/home/sk/Downloads**, just pass the path as an argument, like below.
```
$ duc ls /home/sk/Downloads
```
Similarly, run the **“duc ui”** command to open an **ncurses**-based console user interface for exploring the file system usage, and run **“duc gui”** to start a **graphical (X11)** interface to explore the file system.
To learn more about a sub-command's usage, simply refer to the help section.
```
$ duc help ls
```
The above command will display the help section of “ls” subcommand.
### Visualize Disk Usage
In the previous section, we have seen how to list files and directories using duc subcommands. In addition, you can even show the file sizes in a fancy graph.
To show the graph of a given path, use “ls” subcommand like below.
```
$ duc ls -Fg /home/sk
```
Sample output:
![][3]
Visualize disk usage using “duc ls” command
As you see in the above output, the “ls” subcommand queries the duc database and lists the inclusive size of all files and directories of the given path, i.e., **/home/sk/** in this case.
Here, the **“-F”** option is used to append a file type indicator (one of */) to entries, and the **“-g”** option is used to draw a graph with the relative size of each entry.
Please note that if no path is given, the current working directory is explored.
You can use the **-R** option to view the disk usage in a [**tree**][4] structure.
```
$ duc ls -R /home/sk
```
![][5]
Visualize disk usage in tree structure
To query the duc database and open a **ncurses** based console user interface for exploring the disk usage of given path, use **“ui”** subcommand like below.
```
$ duc ui /home/sk
```
![][6]
Similarly, we use **“gui”** subcommand to query the duc database and start a **graphical (X11)** interface to explore the disk usage of the given path:
```
$ duc gui /home/sk
```
![][7]
As I mentioned earlier, we can learn more about a subcommand's usage like below.
```
$ duc help <subcommand-name>
```
I covered only the basic usage here. Refer to the man page for more details about the “duc” tool.
```
$ man duc
```
* * *
**Related read:**
* [**Filelight Visualize Disk Usage On Your Linux System**][8]
* [**Some Good Alternatives To du Command**][9]
* [**How To Check Disk Space Usage In Linux Using Ncdu**][10]
* [**Agedu Find Out Wasted Disk Space In Linux**][11]
* [**How To Find The Size Of A Directory In Linux**][12]
* [**The df Command Tutorial With Examples For Beginners**][13]
* * *
### Conclusion
Duc is a simple yet useful disk usage viewer. If you want to quickly and easily know which files/directories are eating up your disk space, Duc might be a good choice. What are you waiting for? Go get this tool already, scan your filesystem and get rid of unused files/directories.
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
**Resource:**
* [**Duc website**][14]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/duc-720x340.png
[2]: https://github.com/zevv/duc/releases
[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-1-1.png
[4]: https://www.ostechnix.com/view-directory-tree-structure-linux/
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-2.png
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-3.png
[7]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-4.png
[8]: https://www.ostechnix.com/filelight-visualize-disk-usage-on-your-linux-system/
[9]: https://www.ostechnix.com/some-good-alternatives-to-du-command/
[10]: https://www.ostechnix.com/check-disk-space-usage-linux-using-ncdu/
[11]: https://www.ostechnix.com/agedu-find-out-wasted-disk-space-in-linux/
[12]: https://www.ostechnix.com/find-size-directory-linux/
[13]: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/
[14]: https://duc.zevv.nl/

View File

@ -0,0 +1,209 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Create SSH Alias In Linux)
[#]: via: (https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
How To Create SSH Alias In Linux
======
![How To Create SSH Alias In Linux][1]
If you frequently access a lot of different remote systems via SSH, this trick will save you some time. You can create SSH aliases for the systems you access frequently. This way you don't need to remember all the different usernames, hostnames, SSH port numbers and IP addresses. Additionally, it avoids the need to repetitively type the same username/hostname, IP address and port number whenever you SSH into a Linux server.
### Create SSH Alias In Linux
Before I knew this trick, I usually connected to a remote system over SSH using one of the following ways.
Using IP address:
```
$ ssh 192.168.225.22
```
Or using port number, username and IP address:
```
$ ssh -p 22 sk@192.168.225.22
```
Or using port number, username and hostname:
```
$ ssh -p 22 sk@server.example.com
```
Here,
* **22** is the port number,
* **sk** is the username of the remote system,
* **192.168.225.22** is the IP of my remote system,
* **server.example.com** is the hostname of remote system.
I believe most newbie Linux users and/or admins would SSH into a remote system this way. However, if you SSH into multiple different systems, remembering all the hostnames/IP addresses and usernames is a bit difficult unless you write them down on paper or save them in a text file. No worries! This can be easily solved by creating an alias (or shortcut) for SSH connections.
We can create an alias for SSH commands in two ways.
##### Method 1 Using SSH Config File
This is my preferred way of creating aliases.
We can use the SSH default configuration file to create an SSH alias. To do so, edit the **~/.ssh/config** file (if this file doesn't exist, just create one):
```
$ vi ~/.ssh/config
```
Add all of your remote hosts details like below:
```
Host webserver
    HostName 192.168.225.22
    User sk

Host dns
    HostName server.example.com
    User root

Host dhcp
    HostName 192.168.225.25
    User ostechnix
    Port 2233
```
![][2]
Create SSH Alias In Linux Using SSH Config File
Replace the values of **Host**, **HostName**, **User** and **Port** with your own. Once you have added the details of all remote hosts, save and exit the file.
Now you can SSH into the systems with commands:
```
$ ssh webserver
$ ssh dns
$ ssh dhcp
```
It is as simple as that.
Have a look at the following screenshot.
![][3]
Access remote system using SSH alias
See? I only used the alias name (i.e., **webserver**) to access my remote system that has the IP address **192.168.225.22**.
Please note that this applies for current user only. If you want to make the aliases available for all users (system wide), add the above lines in **/etc/ssh/ssh_config** file.
You can also add plenty of other things in the SSH config file. For example, if you have [**configured SSH Key-based authentication**][4], mention the SSH keyfile location as below.
```
Host ubuntu
    HostName 192.168.225.50
    User senthil
    IdentityFile ~/.ssh/id_rsa_remotesystem
```
Make sure you have replaced the hostname, username and SSH keyfile path with your own.
Now connect to the remote server with command:
```
$ ssh ubuntu
```
This way you can add as many remote hosts as you want to access over SSH and quickly access them using their alias names.
##### Method 2 Using Bash aliases
This is a quick and dirty way to create SSH aliases for faster access. You can use the [**alias command**][5] to make this task much easier.
Open the **~/.bashrc** or **~/.bash_profile** file:
Add an alias for each SSH connection, one by one, like below.
```
alias webserver='ssh sk@192.168.225.22'
alias dns='ssh root@server.example.com'
alias dhcp='ssh ostechnix@192.168.225.25 -p 2233'
alias ubuntu='ssh senthil@192.168.225.50 -i ~/.ssh/id_rsa_remotesystem'
```
Again, make sure you have replaced the hostnames, usernames, port numbers and IP addresses with your own. Save the file and exit.
Then, apply the changes using command:
```
$ source ~/.bashrc
```
Or,
```
$ source ~/.bash_profile
```
In this method, you don't even need to use an “ssh alias-name” command. Instead, just use the alias name, like below.
```
$ webserver
$ dns
$ dhcp
$ ubuntu
```
![][6]
These two methods are very simple, yet useful and much more convenient for those who often SSH into multiple different systems. Use whichever of the aforementioned methods suits you to quickly access your remote Linux systems over SSH.
* * *
**Suggested read:**
* [**Allow Or Deny SSH Access To A Particular User Or Group In Linux**][7]
* [**How To SSH Into A Particular Directory On Linux**][8]
* [**How To Stop SSH Session From Disconnecting In Linux**][9]
* [**4 Ways To Keep A Command Running After You Log Out Of The SSH Session**][10]
* [**SSLH Share A Same Port For HTTPS And SSH**][11]
* * *
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/ssh-alias-720x340.png
[2]: http://www.ostechnix.com/wp-content/uploads/2019/04/Create-SSH-Alias-In-Linux.png
[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias.png
[4]: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
[5]: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias-1.png
[7]: https://www.ostechnix.com/allow-deny-ssh-access-particular-user-group-linux/
[8]: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
[9]: https://www.ostechnix.com/how-to-stop-ssh-session-from-disconnecting-in-linux/
[10]: https://www.ostechnix.com/4-ways-keep-command-running-log-ssh-session/
[11]: https://www.ostechnix.com/sslh-share-port-https-ssh/

View File

@ -0,0 +1,350 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Navigate Directories Faster In Linux)
[#]: via: (https://www.ostechnix.com/navigate-directories-faster-linux/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
How To Navigate Directories Faster In Linux
======
![Navigate Directories Faster In Linux][1]
Today we are going to learn some command line productivity hacks. As you already know, we use the “cd” command to move between a stack of directories in Unix-like operating systems. In this guide I am going to teach you how to navigate directories faster without having to use the “cd” command often. There could be many ways, but I only know the following five methods right now! I will keep updating this guide when I come across new methods or utilities to achieve this task in the days to come.
### Five Different Methods To Navigate Directories Faster In Linux
##### Method 1: Using “Pushd”, “Popd” And “Dirs” Commands
This is the method I use most frequently every day to navigate between a stack of directories. The “pushd”, “popd”, and “dirs” commands come pre-installed in most Linux distributions, so don't bother with installation. This trio of commands is quite useful when you're working in a deep directory structure or in scripts. For more details, check our guide in the link given below.
* **[How To Use Pushd, Popd And Dirs Commands For Faster CLI Navigation][2]**
##### Method 2: Using “bd” utility
The “bd” utility also helps you to quickly go back to a specific parent directory without having to repeatedly type “cd ../../..” in your Bash shell.
Bd is available in the [**Debian extra**][3] and [**Ubuntu universe**][4] repositories. So, you can install it using the “apt-get” package manager on Debian, Ubuntu and other DEB based systems as shown below:
```
$ sudo apt-get update
$ sudo apt-get install bd
```
For other distributions, you can install it as shown below.
```
$ sudo wget --no-check-certificate -O /usr/local/bin/bd https://raw.github.com/vigneshwaranr/bd/master/bd
$ sudo chmod +rx /usr/local/bin/bd
$ echo 'alias bd=". bd -si"' >> ~/.bashrc
$ source ~/.bashrc
```
To enable auto completion, run:
```
$ sudo wget -O /etc/bash_completion.d/bd https://raw.github.com/vigneshwaranr/bd/master/bash_completion.d/bd
$ source /etc/bash_completion.d/bd
```
The bd utility has now been installed. Let us see a few examples to understand how to quickly move through a stack of directories using this tool.
Create some directories.
```
$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10
```
The above command will create a hierarchy of directories. Let us check [**directory structure**][5] using command:
```
$ tree dir1/
dir1/
└── dir2
└── dir3
└── dir4
└── dir5
└── dir6
└── dir7
└── dir8
└── dir9
└── dir10
9 directories, 0 files
```
Alright, we now have 10 directories. Let us say youre currently in the 7th directory, i.e. dir7.
```
$ pwd
/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7
```
You want to move to dir3. Normally you would type:
```
$ cd /home/sk/dir1/dir2/dir3
```
Right? Yes! But it is not necessary! To go back to dir3, just type:
```
$ bd dir3
```
Now you will be in dir3.
![][6]
Navigate Directories Faster In Linux Using “bd” Utility
Easy, isnt it? It supports auto-completion, so you can just type the partial name of a directory and hit the Tab key to complete the full path.
To check the contents of a specific parent directory, you dont need to be inside that particular directory. Instead, just type:
```
$ ls `bd dir1`
```
The above command will display the contents of dir1 from your current working directory.
For more details, check out the following GitHub page.
* [**bd GitHub repository**][7]
##### Method 3: Using “Up” Shell script
“Up” is a shell script that allows you to move quickly to a parent directory. It works well on many popular shells such as Bash, Fish, and Zsh. Installation is absolutely easy too!
To install “Up” on **Bash**, run the following commands one by one:
```
$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
$ echo 'source ~/.config/up/up.sh' >> ~/.bashrc
```
The up script registers the “up” function and some completion functions via your “.bashrc” file.
Update the changes using command:
```
$ source ~/.bashrc
```
On **zsh** :
```
$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
$ echo 'source ~/.config/up/up.sh' >> ~/.zshrc
```
The up script registers the “up” function and some completion functions via your “.zshrc” file.
Update the changes using command:
```
$ source ~/.zshrc
```
On **fish** :
```
$ curl --create-dirs -o ~/.config/up/up.fish https://raw.githubusercontent.com/shannonmoeller/up/master/up.fish
$ source ~/.config/up/up.fish
```
The up script registers the “up” function and some completion functions via “funcsave”.
Now it is time to see some examples.
Let us create some directories.
```
$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10
```
Let us say youre in the 7th directory, i.e. dir7.
```
$ pwd
/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7
```
You want to move to dir3. Using the “cd” command, we can do this by typing the following:
```
$ cd /home/sk/dir1/dir2/dir3
```
But it is really easy to go back to dir3 using the “up” script:
```
$ up dir3
```
Thats it. Now you will be in dir3. To go one directory up, just type:
```
$ up 1
```
To go back two directories, type:
```
$ up 2
```
Its that simple. Did I type the full path? Nope. It also supports tab completion, so just type the partial directory name and hit the Tab key to complete the full path.
For more details, check out the GitHub page.
* [**Up GitHub Repository**][8]
Please be mindful that the “bd” and “up” tools can only help you go backward, i.e. to a parent directory of the current working directory. You cant move forward. If you want to switch to dir10 from dir5, you cant! Instead, you need to use the “cd” command, as shown below. These two utilities are meant for quickly moving you to a parent directory!
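For example, continuing the directory tree created earlier, moving forward from dir5 to dir10 still needs a plain relative “cd”:
```
$ pwd
/home/sk/dir1/dir2/dir3/dir4/dir5

$ cd dir6/dir7/dir8/dir9/dir10

$ pwd
/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10
```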
##### Method 4: Using “Shortcut” tool
This is yet another handy method to switch between different directories quickly and easily. It is somewhat similar to the [**alias**][9] command. In this method, we create shortcuts to frequently used directories and use the shortcut name to go to the respective directory without having to type the full path. If youre working in a deep directory structure, this method can save you a lot of time. You can learn how it works in the guide given below; a minimal sketch of the idea follows the link.
* [**Create Shortcuts To The Frequently Used Directories In Your Shell**][10]
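As a rough sketch of the idea (the linked guide may use a different mechanism, so treat these names as hypothetical examples), you can get a similar effect with plain Bash aliases:
```
# Add to ~/.bashrc, then reload it with: source ~/.bashrc
alias work='cd ~/projects/work'   # hypothetical frequently used directory
alias logs='cd /var/log'
```
After reloading the shell, typing “work” from anywhere takes you straight to ~/projects/work.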
##### Method 5: Using “CDPATH” Environment variable
This method doesnt require any installation. **CDPATH** is an environment variable. It is somewhat similar to the **PATH** variable, which contains many different paths concatenated using **:** (colon). The main difference between the PATH and CDPATH variables is that PATH is consulted by the shell when locating commands, whereas CDPATH is consulted only by the **cd** command when locating directories.
I have the following directory structure.
![][11]
Directory structure
As you see, there are four child directories under a parent directory named “ostechnix”.
Now add this parent directory to CDPATH using command:
```
$ export CDPATH=~/ostechnix
```
You can now instantly cd to the sub-directories of that parent directory (i.e. **~/ostechnix** in our case) from anywhere in the filesystem.
For instance, currently I am in **/var/mail/** location.
![][12]
To cd into **~/ostechnix/Linux/** directory, we dont have to use the full path of the directory as shown below:
```
$ cd ~/ostechnix/Linux
```
Instead, just mention the name of the sub-directory you want to switch to:
```
$ cd Linux
```
It will automatically cd to **~/ostechnix/Linux** directory instantly.
![][13]
As you can see in the above output, I didnt use “cd <full-path-of-subdir>”. Instead, I just used the “cd <subdir-name>” command. A minimal transcript of the same steps is shown below.
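Here is roughly what that session looks like in plain text. Note that when “cd” resolves a directory via CDPATH, Bash prints the absolute path of the new working directory:
```
$ pwd
/var/mail

$ cd Linux
/home/sk/ostechnix/Linux

$ pwd
/home/sk/ostechnix/Linux
```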
Please note that CDPATH only lets you quickly jump to the immediate child directories of the parent directories set in the CDPATH variable. It doesnt help much for navigating deeper stacks of directories (directories inside sub-directories).
To find the values of CDPATH variable, run:
```
$ echo $CDPATH
```
Sample output would be:
```
/home/sk/ostechnix
```
**Set multiple values to CDPATH**
Similar to PATH variable, we can also set multiple values (more than one directory) to CDPATH separated by colon (:).
```
$ export CDPATH=.:~/ostechnix:/etc:/var:/opt
```
**Make the changes persistent**
As you already know, the above export command will only keep the values of CDPATH for the current shell session. To set the values of CDPATH permanently, just add them to your **~/.bashrc** or **~/.bash_profile** file.
```
$ vi ~/.bash_profile
```
Add the values:
```
export CDPATH=.:~/ostechnix:/etc:/var:/opt
```
Hit the **ESC** key and type **:wq** to save and exit.
Apply the changes using command:
```
$ source ~/.bash_profile
```
**Clear CDPATH**
To clear the values of CDPATH, use **export CDPATH=””**. Or, simply delete the entire line from your **~/.bashrc** or **~/.bash_profile** file.
In this article, you have learned different ways to navigate a directory stack faster and more easily in Linux. As you can see, its not that difficult to browse a pile of directories quickly. Now stop typing “cd ../../..” endlessly and use these tools instead. If you know any other tool or method worth trying to navigate directories faster, feel free to let us know in the comment section below. I will review them and add them to this guide.
And, thats all for now. Hope this helps. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/navigate-directories-faster-linux/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2017/12/Navigate-Directories-Faster-In-Linux-720x340.png
[2]: https://www.ostechnix.com/use-pushd-popd-dirs-commands-faster-cli-navigation/
[3]: https://tracker.debian.org/pkg/bd
[4]: https://launchpad.net/ubuntu/+source/bd
[5]: https://www.ostechnix.com/view-directory-tree-structure-linux/
[6]: http://www.ostechnix.com/wp-content/uploads/2017/12/Navigate-Directories-Faster-1.png
[7]: https://github.com/vigneshwaranr/bd
[8]: https://github.com/shannonmoeller/up
[9]: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/
[10]: https://www.ostechnix.com/create-shortcuts-frequently-used-directories-shell/
[11]: http://www.ostechnix.com/wp-content/uploads/2018/12/tree-command-output.png
[12]: http://www.ostechnix.com/wp-content/uploads/2018/12/pwd-command.png
[13]: http://www.ostechnix.com/wp-content/uploads/2018/12/cdpath.png

View File

@ -0,0 +1,156 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Kindd A Graphical Frontend To dd Command)
[#]: via: (https://www.ostechnix.com/kindd-a-graphical-frontend-to-dd-command/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Kindd A Graphical Frontend To dd Command
======
![Kindd - A Graphical Frontend To dd Command][1]
A while ago we learned how to [**create a bootable ISO using the dd command**][2] in Unix-like systems. Please keep in mind that dd is one of the most dangerous and destructive commands. If youre not sure what you are actually doing, you might accidentally wipe your hard drive in minutes. The dd command just takes bytes from **if** and writes them to **of**. It doesnt care what its overwriting; it doesnt care if theres a partition table in the way, or a boot sector, or a home folder, or anything important. It will simply do what it is told to do. If youre a beginner, try to avoid using the dd command for such tasks. Thankfully, there is a simple GUI utility for the dd command. Say hello to **“Kindd”**, a graphical frontend to the dd command. It is a free, open source tool written in **Qt Quick**. It can be very helpful for beginners and for those who are not comfortable with the command line in general.
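For context, this is the kind of command Kindd wraps. The ISO name and target device below are placeholders; double-check the device name with “lsblk” first, because picking the wrong one will destroy its data:
```
$ lsblk                                            # identify the USB drive (say, /dev/sdX)
$ sudo dd if=ubuntu.iso of=/dev/sdX bs=4M status=progress
$ sync                                             # flush write buffers before unplugging
```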
The developer created this tool mainly to provide,
1. a modern, simple and safe graphical user interface for dd command,
2. a graphical way to easily create bootable device without having to use Terminal.
### Installing Kindd
Kindd is available in the [**AUR**][3]. So if youre an Arch user, install it using any AUR helper tool, for example [**Yay**][4].
To install the Git version, run:
```
$ yay -S kindd-git
```
To install the release version, run:
```
$ yay -S kindd
```
After installing, launch Kindd from the Menu or Application launcher.
For other distributions, you need to manually compile and install it from source as shown below.
Make sure you have installed the following prerequisites.
* git
* coreutils
* polkit
* qt5-base
* qt5-quickcontrols
* qt5-quickcontrols2
* qt5-graphicaleffects
Once all prerequisites are installed, git clone the Kindd repository:
```
git clone https://github.com/LinArcX/Kindd/
```
Go to the directory where you just cloned Kindd, then compile it:
```
cd Kindd
qmake
make
```
Finally run the following command to launch Kindd application:
```
./kindd
```
Kindd uses **pkexec** internally. The pkexec agent is installed by default in most desktop environments. But if you use **i3** (or maybe some other window manager), you should install **polkit-gnome** first, and then paste the following line into your i3 config file:
```
exec /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1 &
```
### Create bootable ISO using Kindd
To create a bootable USB from an ISO, plug in the USB drive. Then, launch Kindd either from the Menu or Terminal.
This is how the Kindd default interface looks:
![][5]
Kindd interface
As you can see, the Kindd interface is very simple and self-explanatory. There are just two sections, namely **List Devices**, which displays the list of available devices (hard disks and USB drives) on your system, and **Create Bootable .iso**. You will be in the “Create Bootable .iso” section by default.
Enter the block size in the first column, select the path of the ISO file in the second column and choose the correct device (USB drive path) in the third column. Click the **Convert/Copy** button to start writing the ISO to the USB drive.
![][6]
Once the process is completed, you will see a success message.
![][7]
Now, unplug the USB drive and boot your system from it to check if it really works.
If you dont know the actual device name (target path), just click on **List Devices** and check the USB drive name.
![][8]
* * *
**Related read:**
* [**Etcher A Beautiful App To Create Bootable SD Cards Or USB Drives**][9]
* [**Bootiso Lets You Safely Create Bootable USB Drive**][10]
* * *
Kindd is in its early stages of development, so there may be bugs. If you find any, please report them on its GitHub page, linked at the end of this guide.
And, thats all. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
**Resource:**
* [**Kindd GitHub Repository**][11]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/kindd-a-graphical-frontend-to-dd-command/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/kindd-720x340.png
[2]: https://www.ostechnix.com/how-to-create-bootable-usb-drive-using-dd-command/
[3]: https://aur.archlinux.org/packages/kindd-git/
[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-interface.png
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-1.png
[7]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-2.png
[8]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-3.png
[9]: https://www.ostechnix.com/etcher-beauitiful-app-create-bootable-sd-cards-usb-drives/
[10]: https://www.ostechnix.com/bootiso-lets-you-safely-create-bootable-usb-drive/
[11]: https://github.com/LinArcX/Kindd

View File

@ -0,0 +1,89 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ping Multiple Servers And Show The Output In Top-like Text UI)
[#]: via: (https://www.ostechnix.com/ping-multiple-servers-and-show-the-output-in-top-like-text-ui/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Ping Multiple Servers And Show The Output In Top-like Text UI
======
![Ping Multiple Servers And Show The Output In Top-like Text UI][1]
A while ago, we wrote about the [**“Fping”**][2] utility, which enables us to ping multiple hosts at once. Unlike the traditional **“ping”** utility, Fping doesnt wait for one hosts timeout. It uses a round-robin method, meaning it sends an ICMP echo request to one host, moves on to the next host, and finally displays which hosts are up or down, all in one pass. Today, we are going to discuss a similar utility named **“Pingtop”**. As the name says, it pings multiple servers at a time and shows the results in a top-like terminal UI. It is a free and open source command line program written in **Python**.
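As a quick refresher, a typical Fping run looks something like this (the host names are just examples, and the exact output may vary by version):
```
$ fping ostechnix.com google.com 192.168.1.5
ostechnix.com is alive
google.com is alive
192.168.1.5 is unreachable
```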
### Install Pingtop
Pingtop can be installed using Pip, a package manager for installing programs written in Python. Make sure you have installed Python 3.7.x and Pip on your Linux box.
To install Pip on Linux, refer to the following link.
* [**How To Manage Python Packages Using Pip**][3]
Once Pip is installed, run the following command to install Pingtop:
```
$ pip install pingtop
```
Now let us go ahead and ping multiple systems using Pingtop.
### Ping Multiple Servers And Show The Output In Top-like Terminal UI
To ping multiple hosts/systems, run:
```
$ pingtop ostechnix.com google.com facebook.com twitter.com
```
You will now see the results in a nice top-like terminal UI, as shown in the following output.
![][4]
Ping multiple servers using Pingtop
* * *
**Suggested read:**
* [**Some Alternatives To top Command line Utility You Might Want To Know**][5]
* * *
I personally couldnt find any use cases for the Pingtop utility at the moment, but I like the idea of showing the ping commands output in a text user interface. Give it a try and see if it helps.
And, thats all for now. More good stuff to come. Stay tuned!
Cheers!
**Resource:**
* [**Pingtop GitHub Repository**][6]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/ping-multiple-servers-and-show-the-output-in-top-like-text-ui/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/pingtop-720x340.png
[2]: https://www.ostechnix.com/ping-multiple-hosts-linux/
[3]: https://www.ostechnix.com/manage-python-packages-using-pip/
[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/pingtop-1.gif
[5]: https://www.ostechnix.com/some-alternatives-to-top-command-line-utility-you-might-want-to-know/
[6]: https://github.com/laixintao/pingtop