Mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-02-03 23:40:14 +08:00

Merge remote-tracking branch 'LCTT/master'
Commit: 9b2547ce06

published/20190823 The Linux kernel- Top 5 innovations.md (new file, 104 lines)

@@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11368-1.html)
[#]: subject: (The Linux kernel: Top 5 innovations)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-top-5-innovations)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
The Linux kernel: Top 5 innovations
======

> Want to know what the real (not the hyped) innovations of the Linux kernel are?

![](https://img.linux.net.cn/data/attachment/album/201909/21/093858no01oh78v111r3zt.jpg)

In the technology industry, the word *innovation* gets thrown around almost as freely as *revolution*, so it can be hard to separate the hyperbole from what is genuinely exciting. The Linux kernel has been called innovative, but it has also been called the greatest marvel of modern computing, a giant in a microscopic world.

Setting marketing and buzzwords aside, Linux is arguably the most popular kernel in the open source world, and over its nearly 30-year life it has introduced some real game changers.

### Cgroups (2.6.24)

Back in 2007, Paul Menage and Rohit Seth added the esoteric [control groups (cgroups)][2] feature to the kernel (the current implementation of cgroups was rewritten by Tejun Heo). This new technology was initially used, essentially, as a way to guarantee quality of service for a specific set of tasks.

For example, you could create a control group definition (cgroup) for all the tasks associated with your web server, another cgroup for routine backups, and yet another for general operating system requirements. You could then control the percentage of resources each group receives, so that your operating system and web server get the bulk of system resources while your backup process has access to whatever is left.
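To make that concrete, here is a minimal sketch (my illustration, not from the original article) of what such a policy can look like through the cgroup v2 file interface. It assumes a cgroup2 filesystem mounted at `/sys/fs/cgroup`, a group named `web` that was created beforehand (for example, with `mkdir`), the `cpu` controller enabled, and root privileges; the paths and values are examples only.

```c
/* Minimal sketch: cap a "web" cgroup at roughly 80% of one CPU and move the
 * current process into it, using the cgroup v2 file interface. Assumes
 * cgroup2 is mounted at /sys/fs/cgroup and /sys/fs/cgroup/web already exists. */
#include <stdio.h>
#include <unistd.h>

static int write_str(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");

    if (!f)
        return -1;
    fprintf(f, "%s\n", value);
    return fclose(f);
}

int main(void)
{
    char pid[32];

    /* allow 80000 us of CPU time per 100000 us period: ~80% of one core */
    write_str("/sys/fs/cgroup/web/cpu.max", "80000 100000");

    /* move this process (and its future children) into the group */
    snprintf(pid, sizeof(pid), "%d", getpid());
    write_str("/sys/fs/cgroup/web/cgroup.procs", pid);

    return 0;
}
```

Container runtimes such as Docker and LXC do essentially this kind of bookkeeping on your behalf, just across many more controllers (memory, I/O, PIDs, and so on).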
However, cgroups are so prominent today because of their role as the technology driving the cloud: containers. In fact, cgroups were originally named [process containers][3]. It was no great surprise when they were adopted by projects such as [LXC][4], [CoreOS][5], and Docker.

Once the floodgates opened, the word "container" practically became synonymous with Linux, and the concept of microservice-style, cloud-based "apps" quickly became the norm. Today, it is hard to get away from cgroups; they are that pervasive. Every large-scale infrastructure (and probably your laptop, if you run Linux) takes advantage of cgroups in a meaningful way, making your computing experience more manageable and flexible than ever.

For example, you may already have installed [Flathub][6] or [Flatpak][7] on your computer, or you may be using [Kubernetes][8] and/or [OpenShift][9] at work. Either way, if the term "container" is still vague to you, you can gain a hands-on understanding of containers from [behind the scenes with Linux containers][10].

### LKMM (4.17)

In 2018, the hard work of Jade Alglave, Alan Stern, Andrea Parri, Luc Maranget, Paul McKenney, and several others was merged into the mainline Linux kernel to provide a formal memory model. The Linux Kernel Memory [Consistency] Model (LKMM) subsystem is a set of tools that describe the Linux memory consistency model and also produce test cases (named klitmus, specifically) for testing.

As systems become more complex in physical design (more CPU cores added, caches and RAM growing, and so on), the harder it is for them to know which CPU needs which address space, and when it needs it. For example, if CPU0 needs to write data to a shared variable in memory, and CPU1 needs to read that value, then CPU0 must write before CPU1 attempts to read. Similarly, if values are written to memory in one order, there is an expectation that they are also read in that same order, regardless of which CPU or CPUs are doing the reading.

Even on a single processor, memory management requires a specific task order. A simple operation such as `x = y` requires the processor to load the value of `y` from memory and then store that value in `x`. Placing the value stored in `y` into the `x` variable cannot happen before the processor has read the value from memory. There are also address dependencies: `x[n] = 6` requires that `n` be loaded before the processor can store the value `6`.
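As an illustration (mine, not the article's), here is a small user-space C sketch of the message-passing pattern just described: one thread stores data and then sets a flag, and the other thread must not see the flag without also seeing the data. C11 release/acquire atomics play the role that `smp_store_release()` and `smp_load_acquire()` play inside the kernel, and this two-thread pattern is exactly the kind of scenario whose outcomes an LKMM litmus test enumerates.

```c
/* User-space sketch of the message-passing pattern described in the text. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static int data;            /* the shared payload                       */
static atomic_int ready;    /* the flag the consumer polls              */

static void *producer(void *arg)
{
    data = 42;                                               /* plain store  */
    atomic_store_explicit(&ready, 1, memory_order_release);  /* publish flag */
    return NULL;
}

static void *consumer(void *arg)
{
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                    /* spin until the flag is seen  */
    /* acquire pairs with release: data == 42 is guaranteed here */
    printf("data = %d\n", data);
    return NULL;
}

int main(void)
{
    pthread_t p, c;

    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

Compile it with `gcc -pthread`. Without the release/acquire pairing (for example, with relaxed ordering on the flag), the consumer could legally observe `ready == 1` and still read a stale `data`, which is precisely the class of outcome a memory model has to rule in or out.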
LKMM helps identify and trace these memory patterns in code. It does so in part with a tool called `herd`, which defines the constraints imposed by a memory model (in the form of logical formulas) and then enumerates all possible outcomes consistent with those constraints.

### The low-latency patch (2.6.38)

Long ago, in the days before 2011, if you wanted to [do multimedia work on Linux][11], you had to have a low-latency kernel. This mostly applied to [audio recording][12] with lots of real-time effects (such as singing into a microphone with reverb added, and hearing your voice in your headphones with no perceptible delay). There were distributions, such as [Ubuntu Studio][13], that reliably provided such a kernel, so in practice it was not much of an obstacle, just a significant consideration when an artist chose a distribution.

However, if you were not using Ubuntu Studio, or you needed to update your kernel before your distribution got around to it, you had to go to the rt-patches web page, download the kernel patches, apply them to your kernel source code, compile, and install manually.

And then, with the release of kernel version 2.6.38, that process was over. The Linux kernel suddenly, as if by magic, had low-latency code built in by default (and according to benchmarks, latency dropped by a factor of at least 10). No more downloading patches, no more compiling. Everything just worked, all because of a small 200-line patch written by Mike Galbraith.

For open source multimedia artists the world over, it was a game changer. Things got so good from 2011 onward that in 2016 I challenged myself to [build a digital audio workstation (DAW) on a Raspberry Pi v1 (model B)][14] and found that it worked surprisingly well.

### RCU (2.5)

RCU, or Read-Copy-Update, is a mechanism defined in computer science that allows multiple processor threads to read from shared memory. It does this by deferring updates while also marking them as updated, ensuring that readers see up-to-date content. In effect, this means that reads happen concurrently with updates.

A typical RCU cycle looks a little like this:

1. Remove pointers to the data to prevent new readers from referencing it.

2. Wait for readers to complete their critical sections.

3. Reclaim the memory.

Dividing the update phase into removal and reclamation phases means the updater performs the removal immediately while deferring reclamation until all active readers are done (either by blocking them or by registering a callback to be invoked when they finish).
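As a schematic illustration (mine, not the article's) of that cycle, here is roughly what the pattern looks like with the kernel's RCU primitives. It is a sketch meant to be read in a kernel context rather than compiled stand-alone, it assumes the caller of the updater already holds whatever lock serializes updates, and the `struct config` type and function names are invented for the example.

```c
/* Schematic sketch of the classic RCU read/update pattern (kernel context). */
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct config {
    int value;
};

static struct config __rcu *cur_config;

/* Reader: runs concurrently with updates and never blocks the updater. */
int read_value(void)
{
    struct config *c;
    int v;

    rcu_read_lock();
    c = rcu_dereference(cur_config);
    v = c ? c->value : -1;
    rcu_read_unlock();
    return v;
}

/* Updater: publish a new copy, then wait for readers before reclaiming. */
void update_value(int value)
{
    struct config *new = kmalloc(sizeof(*new), GFP_KERNEL);
    struct config *old;

    if (!new)
        return;
    new->value = value;
    old = rcu_dereference_protected(cur_config, 1); /* update-side access  */
    rcu_assign_pointer(cur_config, new);   /* 1. removal: unlink old copy  */
    synchronize_rcu();                     /* 2. wait for existing readers */
    kfree(old);                            /* 3. reclamation               */
}
```

Readers never block the updater: `rcu_read_lock()` only marks a critical section, `rcu_assign_pointer()` performs the removal (step 1), `synchronize_rcu()` waits for pre-existing readers to finish (step 2), and `kfree()` reclaims the old copy (step 3).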
While the concept of RCU was not invented for the Linux kernel, its implementation in Linux is a defining example of the technology.

### Collaboration (0.01)

The final answer to the question of what the Linux kernel innovated will always be collaboration. Call it good timing, call it technical superiority, call it hackability, or just call it open source, but the Linux kernel and the many projects it supports are a shining example of collaboration and cooperation.

And it goes well beyond the kernel. People from all walks of life have contributed to open source, arguably because of the Linux kernel. Linux was, and remains to this day, a major force of [free software][15], inspiring people to bring their code, their art, their ideas, or just themselves to a global, productive, and diverse human community.

### What's your favorite innovation?

This list is biased toward my own interests: containers, non-uniform memory access (NUMA), and multimedia. Your favorite kernel innovation is no doubt missing from it. Tell me about it in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/linux-kernel-top-5-innovations

Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translator: [heguangzhi](https://github.com/heguangzhi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://en.wikipedia.org/wiki/Cgroups
[3]: https://lkml.org/lkml/2006/10/20/251
[4]: https://linuxcontainers.org
[5]: https://coreos.com/
[6]: http://flathub.org
[7]: http://flatpak.org
[8]: http://kubernetes.io
[9]: https://www.redhat.com/sysadmin/learn-openshift-minishift
[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
[11]: http://slackermedia.info
[12]: https://opensource.com/article/17/6/qtractor-audio
[13]: http://ubuntustudio.org
[14]: https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker
[15]: http://fsf.org
@@ -0,0 +1,64 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Oracle Autonomous Linux: A Self Updating, Self Patching Linux Distribution for Cloud Computing)
[#]: via: (https://itsfoss.com/oracle-autonomous-linux/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

Oracle Autonomous Linux: A Self Updating, Self Patching Linux Distribution for Cloud Computing
======

Automation is the growing trend in the IT industry. The aim is to remove manual interference from repetitive tasks. Oracle has taken another step into the automation world by launching Oracle Autonomous Linux, which is surely going to benefit the IoT and cloud computing industry.

### Oracle Autonomous Linux: Less Human Intervention, More Automation

![][1]

On Monday, Larry Ellison, the legendary co-founder of Oracle, took the stage at the Oracle OpenWorld conference in San Francisco. [He announced][2] a new product: the world's first autonomous Linux. This is the second step in Oracle's march towards a second-generation cloud. The first step was the [Autonomous Database][3] released two years ago.

The biggest feature of Oracle Autonomous Linux is reduced maintenance costs. According to [Oracle's site][4], Autonomous Linux "uses advanced machine learning and autonomous capabilities to deliver unprecedented cost savings, security, and availability and frees up critical IT resources to tackle more strategic initiatives".

Autonomous Linux can install updates and patches without human interference. These automatic updates include patches for the "Linux kernel and key user space libraries". "This requires no downtime along with protection from both external attacks and malicious internal users." They can also take place while the system is running to reduce downtime. Autonomous Linux also handles scaling automatically to ensure that all computing needs are handled.

Ellison highlighted how the new autonomous OS would improve security. He mentioned in particular how the [Capital One data breach][5] occurred because of a bad configuration. He said, "One simple rule to prevent data theft: Put your data in an autonomous system. No human error, no data loss. That's the big difference between us and AWS."

Interestingly, Oracle is also aiming this new product at competing with IBM. Ellison said, "If you're paying IBM, you can stop." All Red Hat applications should be able to run on Autonomous Linux without modification. Interestingly, Oracle Linux is [built][6] from the sources of Red Hat Enterprise Linux.

It does not appear that Oracle Autonomous Linux will be available to anyone outside of the enterprise market.

### Thoughts on Oracle Autonomous Linux

Oracle is a big player in the cloud services market. This new Linux product will allow it to compete with IBM. It will be interesting to see how IBM responds, especially since it has a new influx of open-source smarts from Red Hat.

If you look at the numbers, things are not looking good for either IBM or Oracle. The majority of the cloud business is controlled by [Amazon Web Services, Microsoft Azure, and Google Cloud Platform][7]. IBM and Oracle are somewhere behind them. [IBM bought Red Hat][8] in an attempt to gain ground. This new Autonomous Cloud initiative is Oracle's move for dominance (or at least an attempt to gain a larger market share). It will be interesting to see how many companies buy into Oracle's system to become more secure in the wild west of the internet.

I have to mention this quickly: when I first read about the announcement, all I could think was "Well, we are one step closer to Skynet." If we let technology think for itself, we are just inviting an android apocalypse. If you'll excuse me, I'm going to buy some canned goods.

Are you interested in Oracle's new product? Do you think it will help them win the cloud wars? Let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][9].

--------------------------------------------------------------------------------

via: https://itsfoss.com/oracle-autonomous-linux/

Author: [John Paul][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/oracle-autonomous-linux.png?resize=800%2C450&ssl=1
[2]: https://www.zdnet.com/article/oracle-announces-oracle-autonomous-linux/
[3]: https://www.oracle.com/in/database/what-is-autonomous-database.html
[4]: https://www.oracle.com/corporate/pressrelease/oow19-oracle-autonomous-linux-091619.html
[5]: https://www.zdnet.com/article/100-million-americans-and-6-million-canadians-caught-up-in-capital-one-breach/
[6]: https://distrowatch.com/table.php?distribution=oracle
[7]: https://www.zdnet.com/article/top-cloud-providers-2019-aws-microsoft-azure-google-cloud-ibm-makes-hybrid-move-salesforce-dominates-saas/
[8]: https://itsfoss.com/ibm-red-hat-acquisition/
[9]: https://reddit.com/r/linuxusersgroup
@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (Morisun029)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )
@@ -0,0 +1,77 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Network for All Edges: Why SD-WAN, SDP, and the Application Edge Must Converge in the Cloud)
[#]: via: (https://www.networkworld.com/article/3440101/a-network-for-all-edges-why-sd-wan-sdp-and-the-application-edge-must-converge-in-the-cloud.html)
[#]: author: (Cato Networks https://www.networkworld.com/author/Matt-Conran/)

A Network for All Edges: Why SD-WAN, SDP, and the Application Edge Must Converge in the Cloud
======

Globalization, mobilization, the cloud, and now edge computing are complicating enterprise networking. Here's why and how you can best prepare yourself.

NicoElNino

The software-defined movement keeps marching on. [Software-defined WAN (SD-WAN)][1] is redefining the branch edge by displacing legacy technologies like MPLS, WAN optimizers, and routers. [Software-defined Perimeter (SDP)][2] is displacing whole-network access via mobile VPN with secure and optimized access from any device to specific applications in physical and cloud datacenters. These seem like unrelated developments, despite the "software-defined" buzz, because enterprise IT thinks about physical locations, mobile users, and applications separately. Each enterprise edge, location, person, or application is usually served by different technologies and often by different teams.

### Emerging Business Needs and Point Solutions, like SD-WAN and SDP, Make the Network Unmanageable

In recent years, though, managing networking and security got even more complicated due to the accelerating trends of globalization, mobilization, and cloudification.

Take global locations: Connecting and securing them creates a unique set of challenges. Network latency is introduced by distance, requiring predictable long-haul network connectivity. There is often less support from local IT, so the technology footprint at the location must be minimized. Yet security can't be compromised, so remote locations must still be protected as well as any other location.

Next, mobile users introduce their own set of challenges. Optimizing and securing application access for mobile users requires extending the network and security fabric to every user globally. Mobile VPNs are a very bad solution. Since the network is tied to key corporate locations, getting mobile traffic to a firewall at headquarters or to a VPN concentrator over the unpredictable public internet is a pain for road warriors and field workers. And doing so just so the traffic can be inspected on its way to the cloud creates the so-called "[Trombone Effect][3]" and makes performance even worse.

Finally, the move to cloud applications and cloud datacenters further increases complexity. Instead of optimizing the network for a single destination (the physical datacenter), we now need to optimize it for at least two (physical and cloud datacenters), and sometimes more if we include regional datacenter instances. As the application "edge" got fragmented, a new set of technologies was introduced. These include cloud access security brokers (CASB) and cloud optimization solutions like AWS Direct Connect and Microsoft Azure ExpressRoute. Recently, edge computing is becoming a new megatrend – placing the application itself near the user and introducing new technologies into the mix, such as AWS Outposts and Azure Stack.

### Making the Network Manageable Again with a Converged Cloud-Native Architecture

What is the remedy for this explosion in requirements and complexity? It seems enterprises are hard at work patching their networks with myriad point solutions to accommodate that shift in business requirements. There is an opportunity for forward-looking enterprises to transform their networks by holistically addressing **all** enterprise edges and distributing networking and security capabilities globally with a cloud-native network architecture.

Here [are several key attributes of the cloud-native network][4].

**The Cloud is the Network**

A cloud-native architecture is essential to serving all types of edges. Traditional appliance-centric designs are optimized for physical edges, not mobile or cloud ones. These legacy designs lock the networking and security capabilities into the physical location, making it difficult to serve other types of edges. This imbalance made sense when networks were mostly used to connect physical locations. We now need an [**edge-neutral** design that can serve any edge: location, user, or application.][5]

What is this edge neutrality? It means that we place as many networking and security capabilities as possible away from the edge itself – in the cloud. These include global route optimization, WAN and cloud acceleration, and network security capabilities such [as NGFW, IPS, IDS, anti-malware, and cloud security][6]. With a cloud-native architecture, we can distribute these capabilities globally across multiple points of presence to create a dynamic fabric that is within a short distance of any edge. This architecture delivers enterprise-grade optimization and security down to a location, application, user, or device.

**Built from Scratch as a Multitenant Cloud Service**

Cloud-native networks are built for the cloud from the ground up. In contrast, managed services that rely on hosted physical and virtual appliances can't benefit from a cloud platform. Simply put, appliances don't have any of the key attributes of cloud services. They are single-tenant entities, unlike cloud services, which are multi-tenant. They aren't elastic or scalable, so dynamic workloads are difficult to accommodate. And they need to be managed individually, one instance at a time. You can't build a cloud-native network by using appliance-based software.

**End-to-End Control**

The cloud-native network has edge-to-edge visibility and control. Traditionally, IT decoupled network services (the transports) from the network functions (routing, optimization, and security). In other cases, the full range of services and functions was bundled by a service provider. By running traffic between all edges and to the Internet through the cloud network, it is possible to dynamically adjust routing based on global network behavior. This is markedly different from trying to use edge solutions that at best have limited visibility into last-mile ISPs or rely on dated protocols like BGP, which are not aware of actual network conditions.

**Self-Healing by Design**

Cloud-native networks are [resilient by design][7]. We are all familiar with the resiliency of cloud services like Amazon Web Services, Facebook, and Google. We don't worry about infrastructure resiliency, as we expect that the service will be up and running, masking the state of underlying components. Compare this with typical HA configurations of appliances within and across locations, and what it takes to plan, configure, test, and run those environments.

### The Cloud-Native Network Is the Network for All Edges and All Functions

To summarize, the cloud-native network represents a transformation of the legacy IT architecture. Instead of silos, point solutions for emerging requirements like SD-WAN and SDP, and growing complexity, we must consider a network architecture that will serve the business into the future. By democratizing the network for all edges and delivering network and security functions through a cloud-first/thin-edge design, cloud-native networks are designed to rapidly evolve with the business even as new requirements—and new edges—emerge.

Cato Networks built the world's first cloud-native network using the global reach, self-service, and scalability of the cloud. To learn more about Cato Networks and the Cato Cloud, visit [here][8].

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3440101/a-network-for-all-edges-why-sd-wan-sdp-and-the-application-edge-must-converge-in-the-cloud.html

Author: [Cato Networks][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://www.catonetworks.com/sd-wan?utm_source=idg
[2]: https://www.catonetworks.com/glossary-use-cases/software-defined-perimeter-sdp/
[3]: https://www.catonetworks.com/news/is-your-network-suffering-from-the-trombone-effect/
[4]: https://www.catonetworks.com/cato-cloud/global-private-backbone-3/#Cloud-native_Software_for_Faster_Innovation_and_Lower_Costs
[5]: https://www.catonetworks.com/cato-cloud
[6]: https://www.catonetworks.com/cato-cloud/enterprise-grade-security-as-a-service-built-directly-into-the-network/
[7]: https://www.catonetworks.com/cato-cloud/global-private-backbone-3/#Self-healing_By_Design_for_24x7_Operation
[8]: http://www.networkworld.com/cms/article/catonetowrks.com/cato-cloud
@@ -0,0 +1,71 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux on the mainframe: Then and now)
[#]: via: (https://opensource.com/article/19/9/linux-mainframes-part-2)
[#]: author: (Elizabeth K. Joseph https://opensource.com/users/pleia2)

Linux on the mainframe: Then and now
======

It's been two decades since IBM got onboard with Linux on the mainframe. Here's what happened.

![Penguin driving a car with a yellow background][1]

Last week, I introduced you to the [mainframe's origins from a community perspective][2]. Let's continue our journey, picking up at the end of 1999, which is when IBM got onboard with Linux on the mainframe (IBM Z).

According to the [Linux on z Systems Wikipedia page][3]:

> "IBM published a collection of patches and additions to the Linux 2.2.13 kernel on December 18, 1999, to start today's mainline Linux on Z. Formal product announcements quickly followed in 2000."

These patches weren't part of the mainline Linux kernel yet, but they did get Linux running on z/VM (Virtual Machine for IBM Z) for anyone who was interested. Several efforts followed, including the first Linux distro—put together out of Marist College in Poughkeepsie, N.Y., and Think Blue Linux by Millenux in Germany. The first real commercial distribution came from SUSE on October 31, 2000; this is notable in SUSE history because the first edition of what is now known as SUSE Linux Enterprise Server (SLES) is that S/390 port. Drawing again from Wikipedia, the [SUSE Linux Enterprise page][4] explains:

> "SLES was developed based on SUSE Linux by a small team led by Josué Mejía and David Áreas as principal developer who was supported by Joachim Schröder. It was first released on October 31, 2000 as a version for IBM S/390 mainframe machines… In April 2001, the first SLES for x86 was released."

Red Hat quickly followed with support, and community-driven distributions, including Debian, Slackware, and Gentoo, followed as they gained access to mainframe hardware to complete their builds. Over the next decade, teams at IBM and individual distributions improved support, even getting to the point where a VM was no longer required and Linux could run on what is essentially "bare metal" alongside the traditional z/OS. With the release of Ubuntu 16.04 in 2016, Canonical also began official support for the platform.

In 2015, some of the biggest news in Linux mainframe history occurred: IBM began offering a Linux-only mainframe called LinuxONE. With z/OS and similar traditional configurations, this generation was released as the IBM z13; with Linux, these mainframes were branded Rockhopper and Emperor. These two machines came only with Integrated Facility for Linux (IFL) processors, meaning it wasn't even possible to run z/OS, only Linux. This investment from IBM in an entire product line for Linux was profound.

With the introduction of this machine, we also saw the first support for KVM on the mainframe. KVM can replace z/VM as the virtualization technology. This allows all the standard tooling around KVM to be used for managing virtual machines on the mainframe, including libvirt and OpenStack.

Also in 2015, The Linux Foundation announced the [Open Mainframe Project][5], both a community and a series of open source software projects geared specifically towards the mainframe. The flagship project, [Zowe][6], has gathered contributions from multiple companies in the mainframe ecosystem. While it is created for z/OS, Zowe has been a driving force behind the modernization of interactions with mainframes today. On the Linux on Z side, [ADE][7], announced in 2016, is used to detect "anomalous time slices and messages in Linux logs" so that they can be analyzed alongside other mainframe logs.

In 2017, the z14 was released, and the LinuxONE Rockhopper II and Emperor II were introduced. One of the truly revolutionary changes with this release was the size of the Rockhopper II: it's air-cooled and fits in the space of a 19" rack. No longer does a company need special space and consideration for this mainframe in its datacenter; it has standard connectors and fits in standard spaces. Then, on September 12, 2019, the z15 was launched alongside the LinuxONE III, and the really notable thing from an infrastructure perspective is the size. A considerable amount of effort was put into making it run happily alongside non-Z systems in the data center, so there is only a 19" version.

![LinuxONE Emperor III mainframe][8]

LinuxONE Emperor III mainframe | Used with permission, Copyright IBM

There are one-, two-, three-, or four-frame configurations, but they'll still fit in a standard datacenter spot. See inside a four-frame, water-cooled version.

![Inside the water-cooled LinuxONE III][9]

Inside the water-cooled LinuxONE III | Used with permission, Copyright IBM

As a long-time x86 Linux systems administrator new to the mainframe world, I'm excited to be a part of it at IBM and to introduce my fellow systems administrators and developers to the platform. Looking forward, I see a future where mainframes continue to be used in tandem with cloud and edge technologies to leverage the best of all worlds.

The modernization of the mainframe isn't stopping any time soon. The mainframe may have a long history, but it's not old.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/linux-mainframes-part-2

Author: [Elizabeth K. Joseph][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/pleia2
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc (Penguin driving a car with a yellow background)
[2]: https://opensource.com/article/19/9/linux-mainframes-part-1
[3]: https://en.wikipedia.org/wiki/Linux_on_z_Systems
[4]: https://en.wikipedia.org/wiki/SUSE_Linux_Enterprise
[5]: https://www.openmainframeproject.org/
[6]: https://www.zowe.org/
[7]: https://www.openmainframeproject.org/projects/anomaly-detection-engine-for-linux-logs-ade
[8]: https://opensource.com/sites/default/files/uploads/linuxone_iii_pair.jpg (LinuxONE Emperor III mainframe)
[9]: https://opensource.com/sites/default/files/uploads/water-cooled_rear.jpg (Inside the water-cooled LinuxONE III)
@@ -0,0 +1,83 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Space internet service closer to becoming reality)
[#]: via: (https://www.networkworld.com/article/3439140/space-internet-service-closer-to-becoming-reality.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Space internet service closer to becoming reality
======

OneWeb and SpaceX advance with their low-latency satellite service offerings. Test results show promise, and service is expected by 2020.

Getty Images

Test results from recent Low Earth Orbit internet satellite launches are starting to come in—and they're impressive.

OneWeb, which launched six Airbus satellites in February, says tests show [throughput speeds of over 400 megabits per second][1] and latency of 40 milliseconds.

Partnering with Intellian, developer of OneWeb user terminals, OneWeb streamed full high-definition video at 1080p resolution. The company tested for latency, speed, jitter, handover between satellites, and power control.

OneWeb said it achieved the following during its tests:

  * Low latency, with an average of 32 milliseconds
  * Seamless beam and satellite handovers
  * Accurate antenna pointing and tracking
  * Test speed rates of more than 400 Mbps

**Also read: [The hidden cause of slow internet and how to fix it][2]**

### Internet service for the Arctic

Arctic internet blackspots above the 60th parallel, such as Alaska, will be the first to benefit from OneWeb's partial constellation of Low Earth Orbit (LEO) broadband satellites, OneWeb says.

"Substantial services will start towards the end of 2020," the future ISP [says on its website][3], with "full 24-hour coverage being provided by early 2021."

Currently, 48% of the Arctic is without broadband coverage, according to figures OneWeb has published.

The Arctic-footprint service will provide "enough capacity to give fiber-like connectivity to hundreds of thousands of homes, planes, and boats, connecting millions across the Arctic," it says.

### SpaceX also in the space internet race

[SpaceX, too, is in the race to provide a new generation of internet-delivering satellites][4]. That constellation, like OneWeb's, is positioned in Low Earth Orbit, which has less latency than traditional satellite internet service because it's closer to Earth.

SpaceX says that through its offering, Starlink, it will be able to [provide service in the northern United States and Canada after six launches][5], and it is trying to make two to six launches by the end of 2019. The company expects to provide worldwide coverage after 24 launches. In May, it successfully placed the first batch of 60 satellites in orbit.

### SpaceX's plan to provide service sooner

Interestingly, though, a SpaceX filing made with the U.S. Federal Communications Commission (FCC) at the end of August ([discovered by][6] and subsequently [published (PDF) on Ars Technica's website][7]) seeks to modify its original FCC application because of results it discovered in its initial satellite deployment. SpaceX is now asking for permission to "re-space" previously authorized, yet unlaunched, satellites. The company says it can optimize its constellation better by spreading the satellites out more.

"This adjustment will accelerate coverage to southern states and U.S. territories, potentially expediting coverage to the southern continental United States by the end of the next hurricane season and reaching other U.S. territories by the following hurricane season," the document says.

Satellite internet is used extensively in disaster recovery. Should SpaceX's request be approved, it will speed up service deployment for the continental U.S. because fewer satellites will be needed.

Because we are currently in a hurricane season (Atlantic basin hurricane seasons last from June 1 to Nov. 30 each year), one can assume they are talking about services at the end of 2020 and the end of 2021, respectively.

Interestingly, too, the document reinforces the likelihood of SpaceX's intent to launch more internet-delivering satellites this year. "SpaceX currently expects to conduct several more Starlink launches before the end of 2019," the document says.

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3439140/space-internet-service-closer-to-becoming-reality.html

Author: [Patrick Nelson][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.oneweb.world/media-center/onewebs-satellites-deliver-real-time-hd-streaming-from-space
[2]: https://www.networkworld.com/article/3107744/internet/the-hidden-cause-of-slow-internet-and-how-to-fix-it.html
[3]: https://www.oneweb.world/media-center/oneweb-brings-fiber-like-internet-for-the-arctic-in-2020
[4]: https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html
[5]: https://www.starlink.com/
[6]: https://arstechnica.com/information-technology/2019/09/spacex-says-itll-deploy-satellite-broadband-across-us-faster-than-expected/
[7]: https://cdn.arstechnica.net/wp-content/uploads/2019/09/spacex-orbital-plane-filing.pdf
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world
@@ -0,0 +1,213 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Upgrade to Android Pie on Any Xiaomi Device with the Pixel Experience ROM)
[#]: via: (https://opensourceforu.com/2019/09/upgrade-to-android-pie-on-any-xiaomi-device-with-the-pixel-experience-rom/)
[#]: author: (Swapnil Vivek Kulkarni https://opensourceforu.com/author/swapnil-vivek/)

Upgrade to Android Pie on Any Xiaomi Device with the Pixel Experience ROM
======

[![][1]][2]

_If you enjoy a hands-on experience and you own a Redmi device, this article will convince you to upgrade to Android Pie with the Pixel Experience ROM. Even if you don't own a Redmi device, reading this article could help you to upgrade your own device._

Xiaomi is the market leader in mid-range Android devices. The Redmi Note 4 was one of the most shipped Android devices in 2017. It became popular because of its powerful hardware specifications, which make the phone super smooth. The price offered by Xiaomi at that time was much lower than phones from other companies with similar configurations.

The Note 4 runs on MIUI 10, which is based on Android Nougat 7.1. The latest version of Android to hit the market is Android Pie, and Android Q is likely to be launched in the coming months. All Note 4 users want the upgrade to the latest OS but, sadly, the company has no such plans and is not pushing security patch updates either.

In a recent announcement, the company declared that the Note 4 and earlier devices will not get a MIUI 11 update. But don't fret, because irrespective of this bad news, you can upgrade to the latest Android Pie using custom Android ROMs.

If you are one of those who want to enjoy the latest features and security updates of Android Pie without blowing your budget by buying a new Android device, then read this article carefully. Many of us with a Note 4 or other Mi device want to upgrade to the next-generation Android Pie. This article is written for the Redmi Note 4, but it is applicable to all Xiaomi devices that run on MIUI (Redmi 3, Redmi Note 3, Redmi Note 4, Redmi 4, Redmi Note 4A, Redmi Note 5, Redmi Note 5 Pro, Redmi 5, Redmi 5 Pro, Redmi 5A, and others in this series).
But before installing the latest Android ROM, let us go over some basic concepts that we really need to be clear about, regarding Android and custom ROMs.

![Figure 1: Bootloader][3]

**Things you should know before the actual installation of a custom ROM on any Android device**

**What is a custom ROM?**
The acronym ROM stands for Read Only Memory. The term is a bit confusing with respect to custom Android ROMs: here it refers to the firmware or software that is programmed into the device's firmware storage and interacts directly with the hardware.
Android is an open source project, so any developer can edit, modify and compile the code for a variety of devices. Custom ROMs are developed entirely by the community, which comprises people who are passionate about modding.

Android custom ROMs are available for smartphones, tablets, smart TVs, media players, smart watches, etc. When you buy a new Android device, it comes with company-installed firmware or an operating system that is controlled by the manufacturer and has limited functionality. Here are some benefits of switching over to a custom ROM.

**Performance:** Custom ROMs can give tremendous performance improvements. The device manufacturer locks the clock speed at an optimal level to balance heat and battery life, but custom ROMs do not have restrictions on clock speed.

**Battery life:** Company-installed firmware has lots of bloatware or OEM-installed apps that are always running in the background, consuming processor resources, which drains the battery.

**Updates:** It is very frustrating to wait for manufacturers to release updates. Custom ROMs are always updated, depending upon the active community behind the ROM.

Here is a list of some of the Android custom ROMs available for the Note 4 and corresponding Xiaomi devices:

  * Pixel Experience ROM
  * Resurrection Remix ROM
  * Lineage OS ROM
  * Dot OS ROM
  * crDroid ROM

![Figure 2: Could not unlock][4]

![Figure 3: Unlocked successfully][5]

**Why the preference for Pixel Experience?**
After a lot of research, I came to the conclusion that Pixel Experience is best suited to general user requirements, so I decided to go with it. As the name suggests, it is supposed to give you a Google Pixel-like experience on your device. This ROM comes with preloaded Google apps, so there's no need to flash them externally. Over-the-air (OTA) updates are provided by the community regularly. I have used this ROM for the past six months, and am getting monthly security patch updates along with other bug fixes and enhancements.
Pixel Experience is a lightweight and less customisable ROM, so it consumes less battery. The battery performance is outstanding and beyond expectations.

**Are there security concerns when a custom ROM is installed?**
It is not true that the installation of a custom ROM compromises the security of a phone or device. Behind every custom ROM there is a large community and thousands of users who test it.
For custom ROM installation, you don't need to root your device — it is 100 per cent safe and secure. If we keep this discussion specific to the Pixel Experience ROM, it is a pure vanilla Android ROM developed for Nexus and Pixel devices, and ported by developers and maintainers to specific Android devices.

Before installing a custom ROM, you need to unlock the bootloader. During the bootloader unlock process, you will see a warning from the vendor stating that your phone will be less secure after unlocking the bootloader. The reason for this is that an unlocked phone can be used to install a fresh ROM without any permission from the device manufacturer or the owner of the device. So, stolen or lost devices can be reused by flashing a ROM. In any case, there are a number of methods that can be used to unlock the bootloader unofficially and install a ROM without permission from the manufacturer.

Custom ROMs are more secure than stock ROMs because of the latest updates provided by the community. Device manufacturers are profit-making companies. They want their customers to upgrade their phones after two years, so they stop providing support and stop pushing software updates! Custom ROMs, on the other hand, are driven by non-profit communities. They run on community support and donations.

![Figure 4: Command adb][6]

**What does it mean to root an Android device, and is this really required before flashing a custom ROM?**
On a Windows machine, there is an administrator account that has all the privileges. Similarly, in Linux too, there is the concept of a root account. Android uses the Linux kernel, so all the OS internals are the same as in Linux.

Your Android phone uses Linux permissions and file system ownership. You are a user when you sign in, and you can do only certain things based on your user permissions. In all Android devices, the root user is hidden by the vendor to avoid misuse.

Rooting an Android phone means jail-breaking the phone to allow the user to dive deep into the device. I personally recommend that you do not root your device, because doing so is really not required to flash a custom ROM to the device.

Here are a few reasons why you should not root your device:

  1. Rooting can give your apps complete control of the system, and there is a chance of misuse of power.
  2. Google officially does not support rooted devices.
  3. Banking applications, BHIM, UPI, Google Pay, PhonePe and Paytm will not work on rooted devices.
  4. There is a myth that rooting of a phone is required to flash a custom ROM, but that is not true. You only need to unlock the bootloader to do so.

**What is a bootloader and why should you unlock it before flashing a custom ROM?**
A bootloader is the proprietary image responsible for bringing up the kernel on a device. It acts as a guard for the device, and is responsible for establishing trust between root and user. The bootloader may directly flash the OS to the partition, or we can use a custom recovery to do the same thing.

In this article, we will use the Team Win custom recovery to flash the operating system onto the device.

On a PC, the BIOS plays a role similar to a bootloader. Let's look at an example. When we install Linux alongside Windows on a laptop or PC, there is a bootloader called GRUB which allows the user to boot either Windows or Linux. The bootloader points to the OS partition in the file system. When the power button is pressed to start the phone, the bootloader initiates the process of booting the operating system installed in the file system.

Most bootloaders are locked by vendors to make sure the user sticks to the operating system specifically designed by the vendor for that particular device. With a locked bootloader, it is impossible to flash a custom ROM, and a wrong attempt may brick the device. Again, this is one of the security features provided by the vendor.

The bootloader can be unlocked in two ways — one is the official method provided by the vendor, and the other is the unofficial method, which may lead to a bricked device.

![Figure 5: Team Win Recovery Project][7]

**Installing Pixel Experience on a Xiaomi device**
The previous section covered concepts that we needed to be clear about before flashing a custom ROM. Now let's go through a step-by-step procedure to install a custom ROM on any Android device. We're working specifically on a Redmi device, and this is based on my own experience.

Here are some points to remember before unlocking the bootloader:

  * Take a backup of the phone on a PC/laptop (you are unlikely to lose any data, and this step is just a precaution).
  * Unlocking the bootloader voids the warranty.
  * Make sure that the zip file of the Android ROM is downloaded to the device's internal memory or SD card.
  * Make sure that the bootloader of the device is unlocked after the unlock process completes, because a wrong attempt may brick the device.

_**Steps to follow on a laptop/PC**_

  1. On your laptop/PC, navigate to _<https://en.miui.com/unlock/>_ and click on the Unlock Now button.
  2. Log in to the Mi account with the credentials you used to log into your device.
  3. Remember the credentials, since this is the most important step.
  4. As per the new Mi unlock bootloader method, you don't need permission from Mi. To download the Mi Unlock application, simply click on the button. The size is around 55MB.
  5. Go to where you downloaded the Mi Unlock application in Step 4, and double click on _miflash_unlock.exe_.
  6. Log in using the Mi account used in Step 2.
  7. Make sure that the device is properly connected to the PC using a USB cable (the status will be shown in the application).

_**Steps to follow on a mobile phone**_

  1. Go to _Settings->About phone_.
  2. Tap five times on the MIUI version; this will enable the _Developer options_ on your device.
  3. Go to _Settings->Additional Settings->Developer options_ and tap on _OEM unlocking_. Do not enable this yet; just tap the text to go inside.
  4. Enter your password/PIN (whichever is set on the device for confirmation) and enable the option.
  5. Go to _Settings->Additional Settings->Developer options_, then go to MIUI status and tap on Add account. This step adds the Mi account used to unlock the bootloader. Sometimes the account does not get added, in which case, restart the phone.
  6. Enable USB debugging from _Settings->Additional Settings->Developer options_.
  7. Switch off the phone and press the Power button and the 'volume down' key simultaneously.
  8. Connect the USB cable to the device and the laptop/PC.

_**After completing these steps on the mobile, follow these steps on your PC/laptop**_

  1. Click on the Unlock button.
  2. A dialogue box with the following message will appear: "Unlocking the phone will erase all phone data. Do you still want to continue to unlock the phone?" Click on Unlock anyway.
  3. If it did not unlock, you will see the message shown in Figure 2. There is a time specified after which you may try again. The time varies from 24 hours to 360 hours. In my case it was 360 hours, which is nothing but 15 days!
  4. After the specified period, carry out the same steps, and the bootloader will get unlocked and you will see the result shown in Figure 3.

**Installing Team Win Recovery Project (TWRP)**
Team Win Recovery Project (TWRP) is an open source custom recovery image for Android-based devices. It provides a touchscreen-enabled interface that allows users to install third-party firmware and back up the current system. It is installed on an Android device when flashing, installing or rooting Android devices.

  1. Download the Pixel Experience ROM for your device from the official website _<https://download.pixelexperience.org>_. In my case, the device is the Redmi Note 4 (mido); download and save the zip file in the phone memory.
  2. In a Web browser, navigate to the Android SDK platform tools website _<https://developer.android.com/studio/releases/platform-tools.html\#download>_. Under the download section you will find three links for your platform — Windows, Linux and Mac. Depending on your operating system, download the platform tools, which are just around 7MB.
  3. In a Web browser, navigate to _<https://twrp.me/Devices/>_ and search for your device. Remember that my device is the Redmi Note 4, and its name there is Xiaomi Redmi Note 4(x) (mido). Go to your device by simply clicking on the link. There is a section called Download links that you can click on. Choose the latest TWRP image and download it.
  4. Head to the _Downloads_ directory and extract the platform tools zip file downloaded in Step 2.
  5. Move the TWRP image file downloaded in Step 3 inside the platform tools folder.
  6. Connect your phone to the computer using a USB cable, and make sure that _USB debugging_ is ON.
  7. Open a command window and cd to the platform tools directory.
  8. Run the following commands at the command prompt:
    i. Run the command _adb devices_, and make sure that your device is listed (Figure 4).
    ii. Run the command _adb reboot bootloader_. It will take you to the bootloader.
    iii. Now type _fastboot devices_. Your device will be listed here.
    iv. Run the command _fastboot flash recovery twrp-image-file.img_.
    v. Run the command _fastboot boot twrp-image-file.img_.
    vi. Wait for a few moments and you will see the Team Win Recovery Project start on your device.

**Steps to install a custom ROM on a Xiaomi device**

**Installing the Pixel Experience ROM on a device**
You are now booted into TWRP. It is recommended that you take a backup. Press Backup, select the following options, and swipe right to back up:

  * System
  * Data
  * Vendor
  * Recovery
  * Boot
  * System image

  1. Next, wipe the existing stock ROM from your device. To do so, go to _Wipe->Advanced wipe options_, select the following options and wipe them:
    * Dalvik
    * System
    * Data
    * Cache
    * Vendor
  2. Come back to the _Install_ option and browse for the Pixel Experience zip file, select it and swipe to flash. It will take some time. Once it is completed, wipe the cache.
  3. Press the _Reboot_ button and choose _System_.

Pixel Experience will start on your device.
Congratulations, you have now successfully upgraded to Android Pie.

--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/09/upgrade-to-android-pie-on-any-xiaomi-device-with-the-pixel-experience-rom/

Author: [Swapnil Vivek Kulkarni][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensourceforu.com/author/swapnil-vivek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-12-15-58-26.png?resize=627%2C587&ssl=1 (Screenshot from 2019-09-12 15-58-26)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-12-15-58-26.png?fit=627%2C587&ssl=1
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Xiaomi1.png?resize=350%2C160&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Xiaomi2.png?resize=350%2C181&ssl=1
[5]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Xiaomi3.png?resize=350%2C171&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/4-1.png?resize=350%2C169&ssl=1
[7]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/5.png?resize=278%2C467&ssl=1
@ -0,0 +1,86 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Code it, ship it, own it with full-service ownership)
|
||||
[#]: via: (https://opensource.com/article/19/9/full-service-ownership)
|
||||
[#]: author: (Julie GundersonJustin Kearns https://opensource.com/users/juliegundhttps://opensource.com/users/juliegundhttps://opensource.com/users/juliegundhttps://opensource.com/users/kearnsjdhttps://opensource.com/users/ophir)
|
||||
|
||||
Code it, ship it, own it with full-service ownership
|
||||
======
|
||||
Making engineers responsible for their code and services in production
|
||||
offers multiple advantages—for the engineer as well as the code.
|
||||
![Gears above purple clouds][1]
|
||||
|
||||
Software teams seeking to provide better products and services must focus on faster release cycles. But running reliable systems at ever-increasing speeds presents a big challenge. Software teams can have both quality and speed by adjusting their policies around ongoing service ownership. While on-call plays a large part in this model, advancement in knowledge, more resilient code, increased collaboration, and better practices mean engineers don't have to wake up to a nightmare.
|
||||
|
||||
This four-part series will delve into the concepts of full-service ownership, psychological safety in transformation, the ethics of accountability, and the impact of ownership on the customer experience.
|
||||
|
||||
### What is full-service ownership?
|
||||
|
||||
![Code it, ship it, own it][2]
|
||||
|
||||
Full-service ownership is the philosophy that engineers are responsible for the code and services they create in production. Using the "code it, ship it, own it," mentality means embracing the [DevOps principle][3] of no longer throwing code over the wall to operations nor relying on the [site reliability engineering (SRE) team][4] to ensure the reliability of services in the wild. Instead:
|
||||
|
||||
> Accountability, reliability, and continuous improvement are the main objectives of full-service ownership.
|
||||
|
||||
Putting engineers on-call for what they create brings accountability directly into the hands of that engineer and team.
|
||||
|
||||
### Why accountability matters
|
||||
|
||||
Digital transformation has changed how people work and how consumers consume. There is an implicit expectation in consumers' minds that services will work. For example, when I try to make an online purchase (almost always through my mobile device), I expect a seamless, secure, and efficient experience. When I am interrupted because a page won't load or throws an error, I simply move on to another company that can fulfill my request. According to the [PagerDuty State of Digital Operations 2017 UK report][5], 86.6% of consumers will do the same thing.
|
||||
|
||||
![Amount of time consumers will wait for an unresponsive app][6]
|
||||
|
||||
Empowering engineers to work on the edge of the customer experience by owning the full lifecycle of their code and services gives companies a competitive advantage. As well as benefiting the company, full-service ownership benefits the engineer. Accountability ensures high-quality work and gives engineers a direct line of sight into how the code or service is performing and impacting the customers' day-to-day.
|
||||
|
||||
### Reliability beyond subject-matter experts
|
||||
|
||||
Services will go down; it's an inevitable facet of operating in the digital world. However, how long those services are down—and the impact the outages have on customers—will be mitigated by bringing the
|
||||
|
||||
subject matter expert (SME) or "owner" into the incident immediately. The SME is the engineer who created the code or service and has the intimate, technical knowledge to both respond to incidents and take corrective action to ensure their services experience fewer interruptions through continuous improvement. As the responsible party, the engineers are incented to automate, test, and create code that is as bulletproof as possible.
|
||||
|
||||
Also, teams that adopt full-service ownership increase their overall knowledge. Through practices that include on-call handoffs, code reviews, daily standups, and Failure Friday exercises, individual engineers develop greater expertise around the entire codebase. New skills include systems thinking, collaboration, and working in non-siloed environments. Teams and individuals build necessary redundancy in skills and knowledge by sharing information.
|
||||
|
||||
### Continuous improvement
|
||||
|
||||
As engineers strive to improve their product, code, and/or services continuously, a side-effect of full-service ownership is the refinement of services and alerting. Alerts that interrupt time outside regular work hours must be actionable. If team members are repeatedly interrupted with non-actionable alerts, there is an opportunity to improve the system by analyzing the data. Cleaning up the monitoring system is an investment of time; however, committing to actionable alerting will make on-call better for everyone on the team and reduce alert fatigue—which will free up mental energy to focus on future releases and automation.
|
||||
|
||||
Developers who write the code and define the alerts for that code are more likely to create actionable alerts. It will literally wake them up at night if they don't. Beyond actionable alerts, engineers are incented to produce the highest quality code, as better code equals fewer interruptions.
|
||||
|
||||
While on-call can interrupt your personal life, on-call is not meant to be "always-on." Rather, it's a shared team responsibility to ensure high-quality code. Instead of looking at full-service ownership as an on-call requirement, you can argue that it is building in time to go "off-call."
|
||||
|
||||
Imagine you are on the operations team triaging an incident; time is of the essence, and you need answers fast. Are you going to carefully run through a list of all members of the team responsible for that service? Or are you going to call the SME you know always answers the phone on a Sunday afternoon? Repeatedly calling the same one or two people places an undue burden on those individuals and creates a single point of failure that can lead to burnout. With that said, an on-call rotation serves multiple functions:
|
||||
|
||||
1. Engineers know that their code and services are being covered when they are off-call so they can fully relax.
|
||||
2. The burden of being the "go-to" SME is parceled out to the rest of the team on rotation.
|
||||
3. Services become more reliable.
|
||||
4. Team knowledge and skills increase through deeper understanding of the codebase.
|
||||
|
||||
|
||||
|
||||
By going beyond coding to shipping and owning, full-service ownership reduces the chaos associated with incidents by defining roles and responsibilities, removing unnecessary layers, and ultimately fostering a culture of empowerment and accountability. And, in the next article in this series, I'll share how full-service ownership can foster psychological safety.
|
||||
|
||||
What has your experience been? Has being on-call helped you to become a better engineer? Do you loathe the thought of picking up a "pager"? Let us know your thoughts in the comments below or tweet [@julie_gund][7].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/9/full-service-ownership
|
||||
|
||||
作者:[Julie Gunderson, Justin Kearns][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/juliegundhttps://opensource.com/users/juliegundhttps://opensource.com/users/juliegundhttps://opensource.com/users/kearnsjdhttps://opensource.com/users/ophir
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chaos_engineer_monster_scary_devops_gear_kubernetes.png?itok=GPYLvfVh (Gears above purple clouds)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/code_ship_own.png (Code it, ship it, own it)
|
||||
[3]: https://opensource.com/article/18/1/getting-devops
|
||||
[4]: https://opensource.com/article/18/10/sre-startup
|
||||
[5]: https://www.pagerduty.com/resources/reports/digital-operations-uk/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/unresponsiveapps.png (Amount of time consumers will wait for an unresponsive app)
|
||||
[7]: https://twitter.com/julie_gund
|
134
sources/talk/20190920 How to decommission a data center.md
Normal file
134
sources/talk/20190920 How to decommission a data center.md
Normal file
@ -0,0 +1,134 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to decommission a data center)
|
||||
[#]: via: (https://www.networkworld.com/article/3439917/how-to-decommission-a-data-center.html)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
How to decommission a data center
|
||||
======
|
||||
Decommissioning a data center is a lot more complicated than shutting down servers and switches. Here’s what you should keep in mind.
|
||||
3dSculptor / Getty Images
|
||||
|
||||
About the only thing harder than building a [data center][1] is dismantling one, because the potential for disruption of business is much greater when shutting down a data center than constructing one.
|
||||
|
||||
The recent [decommissioning of the Titan supercomputer][2] at the Oak Ridge National Laboratory (ORNL) reveals just how complicated the process can be. More than 40 people were involved with the project, including staff from ORNL, supercomputer manufacturer Cray, and external subcontractors. Electricians were required to safely shut down the 9 megawatt-capacity system, and Cray staff was on hand to disassemble and recycle Titan’s electronics and its metal components and cabinets. A separate crew handled the cooling system. In the end, 350 tons of equipment and 10,800 pounds of refrigerant were removed from the site.
|
||||
|
||||
**Read more data center stories**
|
||||
|
||||
* [NVMe over Fabrics creates data-center storage disruption][3]
|
||||
* [Data center workloads become more complex][4]
|
||||
* [What is data-center management as a service (DMaaS)?][5]
|
||||
* [Data center staff aging faster than equipment][6]
|
||||
* [Micro-modular data centers set to multiply][7]
|
||||
|
||||
|
||||
|
||||
While most enterprise IT pros aren’t likely to face decommissioning a computer the size of Titan, it is likely they’ll be involved with dismantling smaller-scale data centers given the trend for companies to [move away from on-premises data centers][8].
|
||||
|
||||
The pace of data center closure is going to accelerate over the next three or four years, according to Rick Villars, research vice president, datacenter and cloud, at [IDC][9]. "Every company we’ve spoken to is planning to close 10% to 50% of their data centers over the next four years, and in some cases even 100%. No matter who you talk to, they absolutely have on the agenda they want to close data centers," Villars says.
|
||||
|
||||
Successfully retiring a data center requires navigating many steps. Here’s how to get started.
|
||||
|
||||
### Inventory data-center assets
|
||||
|
||||
The first step is a complete inventory. However, given the preponderance of [zombie servers][10] in IT environments, it’s clear that a good number of IT departments don’t have a handle on data-center asset management.
|
||||
|
||||
"They need to know what they have. That’s the most basic. What equipment do you have? What apps live on what device? And what data lives on each device?” says Ralph Schwarzbach, who worked as a security and decommissioning expert with Verisign and Symantec before retiring.
|
||||
|
||||
All that information should be in a configuration management database (CMDB), which serves as a repository for configuration data pertaining to physical and virtual IT assets. A CMDB “is a popular tool, but having the tool and processes in place to maintain data accuracy are two distinct things," Schwarzbach says.
|
||||
|
||||
A CMDB is a necessity for asset inventory, but “any good CMDB is only as good as the data you put in it,” says Al DeRose, a senior IT director responsible for infrastructure design, implementation and management at a large media firm. “If your asset management department is very good at entering data, your CMDB is great. [In] my experience, smaller companies will do a better job of assets. Larger companies, because of the breadth of their space, aren’t so good at knowing what their assets are, but they are getting better.”
|
||||
|
||||
### Map dependencies among data-center resources
|
||||
|
||||
Preparation also includes mapping out dependencies in the data center. The older a data center is, the more dependencies you are likely to find.
|
||||
|
||||
It’s important to segment what’s in the data center so that you can move things in orderly phases and limit the risk of something going wrong, says Andrew Wertkin, chief strategy officer with [BlueCat Networks][11], a networking connectivity provider that helps companies migrate to the cloud. "Ask how can I break this into phases that are independent – meaning ‘I can’t move that app front-end because it depends on this database,’" Wertkin says.
|
||||
|
||||
The WAN is a good example. Connection points are often optimized, so when you start to disassemble it, you need to know who is getting what in terms of connections and optimized services so you don’t create SLA issues when you break the connection. Changing the IP addresses of well-known servers, even temporarily, also creates connection problems. The solution is to do it in steps, not all at once.
|
||||
|
||||
### Questions to ask decommissioning providers
|
||||
|
||||
Given the complexities and manpower needs of decommissioning a data center, it’s important to hire a professional who specializes in it.
|
||||
|
||||
Experience and track record are everything when it comes to selecting a vendor, says Mike Satter, vice president at [OceanTech][12], which provides data center decommissioning and IT asset disposition services. There are a lot of small companies that say they can decommission a data center and fail because they lack experience and credentials, he says. "I can't tell you how many times we’ve come into a mess where we had to clean up what someone else did. There were servers all over the floor, hardware everywhere," Satter says.
|
||||
|
||||
His advice? Ask a lot of questions.
|
||||
|
||||
"I love having a client who asks a lot of questions," Satter says. “Don’t be shy to ask for references,” he adds. “If you are going to have someone do work on your house, you look up their references. You better know who the contractor will be. Maybe 10% of the time have I had people actually look into their contractor.”
|
||||
|
||||
Among the processes you should ask about and conditions you should expect are:
|
||||
|
||||
* Have the vendor provide you with a detailed statement of work laying out how they will handle every aspect of the data center decommissioning project.
|
||||
* Ask the vendor to do a walkthrough with you, prior to the project, showing how they will execute each step.
|
||||
* Find out if the vendor outsources any aspect of data center decommissioning, including labor or data destruction.
|
||||
* Inquire about responsible recycling (see more below).
|
||||
* Ask for references for the last three data center decommissioning clients the vendor serviced.
|
||||
* Ask if the vendor will be able to recover value from your retired IT hardware. If so, find out how much and when you could expect to receive the compensation.
|
||||
* Ask how data destruction will be handled. If the solution is software based, find out the name of the software.
|
||||
* Learn about the vendor’s security protocols around data destruction.
|
||||
* Find out where the truck goes when it leaves with the gear.
|
||||
* Ask how hazardous materials will be disposed of.
|
||||
* Ask how metals and other components will be disposed of.
|
||||
|
||||
|
||||
|
||||
### Recycle electronics responsibly
|
||||
|
||||
As gear is cleared out of the data center, it’s important to make sure it’s disposed of safely, from both a security and environmental standpoint.
|
||||
|
||||
When it comes to electronics recycling, the key certification to look for is the [R2 Standard][13], Satter says. R2 – sometimes referred to as the responsible recycling certification – is a standard for electronics recyclers that requires certified companies to have a policy on managing used and end-of-life electronics equipment, components and materials for reuse, recovery and/or recycling.
|
||||
|
||||
But R2 does more than that; it offers a traceable chain of custody for all equipment, tracking who touched every piece and its ultimate fate. R2 certified providers “aren’t outsourced Craigslist tech people. These are people who do it every day," Satter says. "There are techniques to remove that gear. They have a group to do data security on site, and a compliance project manager to make sure compliance is met and the chain of custody is met."
|
||||
|
||||
And don’t be cheap, DeRose adds. "When I decommission a data center, I use a well-known company that does asset removal, asset destruction, chain of custody, provides certifications of destruction for hard drives, and proper disposal of toxic materials. All that needs to be very well documented not [only] for the environment’s protection but [also] for the company’s protection. You can’t wake up one morning and find your equipment was found dumped in a landfill or in a rainforest," DeRose says.
|
||||
|
||||
Documentation is critical when disposing of electronic waste, echoes Schwarzbach. "The process must capture and store info related to devices being decommissioned: What is the intent for the device, recycling or new service life? What data resides on it? Who owns the data? And [what is] the category of data?"
|
||||
|
||||
In the end, it isn't the liability of the disposal company if servers containing customer or medical information turn up at a used computer fair, it's the fault of the owners. "The creator of e-waste is ultimately liable for the e-waste," Schwarzbach says.
|
||||
|
||||
### Control who's coming into the data center
|
||||
|
||||
Shutting down a data center means one inevitability: You will have to bring in outside consultants to do the bulk of the work, as the ORNL example shows. Chances are, your typical data center doesn't let anywhere near 40 people inside during normal operations. But during decommissioning, you will have a lot of people going in and out, and this is not a step to be taken lightly.
|
||||
|
||||
"In a normal scenario, the number of people allowed in the data center is selected. Now, all of a sudden, you got a bunch of contractors coming in to pack and ship, and maybe there’s another 50 people with access to your data center. It’s a process and security nightmare if all these people have access to your boxes and requires a whole other level of vetting," Wertkin says. His solution: Log people in and out and use video cameras.
|
||||
|
||||
Any company hired to do a decommissioning project needs to clearly identify the people involved, DeRose says. "You need to know who your company is sending, and they need to show ID.” People are to be escorted in and out and never given a keycard. In addition, contractors should not be left to decommission any room on their own. There should always be someone on staff overseeing the process, DeRose says.
|
||||
|
||||
In short, the decommissioning process means lots of outside, non-staff being given access to your most sensitive systems, so vigilance is mandatory.
|
||||
|
||||
None of the steps involved in a data center decommissioning should be hands-off, even when it requires outside experts. For the security and integrity of your data, the IT staff must be fully involved at all times, even if it is just to watch others do their work. When millions of dollars (even depreciated) of server gear goes out the door in the hands of non-employees, your involvement is paramount.
|
||||
|
||||
Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3439917/how-to-decommission-a-data-center.html
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
|
||||
[2]: https://www.networkworld.com/article/3408176/the-titan-supercomputer-is-being-decommissioned-a-costly-time-consuming-project.html
|
||||
[3]: https://www.networkworld.com/article/3394296/nvme-over-fabrics-creates-data-center-storage-disruption.html
|
||||
[4]: https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html
|
||||
[5]: https://www.networkworld.com/article/3269265/data-center-management-what-does-dmaas-deliver-that-dcim-doesnt
|
||||
[6]: https://www.networkworld.com/article/3301883/data-center/data-center-staff-are-aging-faster-than-the-equipment.html
|
||||
[7]: https://www.networkworld.com/article/3238476/data-center/micro-modular-data-centers-set-to-multiply.html
|
||||
[8]: https://www.networkworld.com/article/3391465/another-strong-cloud-computing-quarter-puts-pressure-on-data-centers.html
|
||||
[9]: https://www.idc.com
|
||||
[10]: https://www.computerworld.com/article/3196355/a-third-of-virtual-servers-are-zombies.html
|
||||
[11]: https://www.bluecatnetworks.com/
|
||||
[12]: https://www.oceantech.com/services/data-center-decommissioning/
|
||||
[13]: https://sustainableelectronics.org/r2-standard
|
||||
[14]: https://www.facebook.com/NetworkWorld/
|
||||
[15]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,56 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The Richest Man in History Shares his Thoughts: Jeff Bezos Top 5 Tips for Success)
|
||||
[#]: via: (https://opensourceforu.com/2019/09/the-richest-man-in-history-shares-his-thoughts-jeff-bezos-top-5-tips-for-success/)
|
||||
[#]: author: (Aashima Sharma https://opensourceforu.com/author/aashima-sharma/)
|
||||
|
||||
The Richest Man in History Shares his Thoughts: Jeff Bezos Top 5 Tips for Success
|
||||
======
|
||||
|
||||
[![][1]][2]
|
||||
|
||||
_The story of Jeff Bezos and his immense success is not new, but it can still help the new generation of entrepreneurs and young people understand what it takes to become truly successful in their field. Of course, you cannot just repeat what Bezos did and get the same result, but you can surely listen to some of Jeff Bezos’ advice and start your own unique path to prosperity. Brad Stone, the author of the book The Everything Store: Jeff Bezos and the Age of Amazon, shares some of Bezos’ tips and ideas on what makes for a [successful business][3], so follow along:_
|
||||
|
||||
**1\. Gather the Best People in Your Team**
|
||||
Whether you are working on a team project in school or hiring people for your new business, you probably know how important it is to have reliable people on your team. Sharing some thoughts on Jeff Bezos’ success, we must also remember the Two Pizza rule he introduced. As Bezos believes, the perfect size of a team is one you can feed with two pizzas. Of course, there are more people than that working for Amazon, but they are all divided into smaller teams made up of highly professional people. So, if you strive for success, make sure that you are surrounded by the best people for the task at hand.
|
||||
|
||||
**2\. Learn From Mistakes**
|
||||
We all make them; mistakes are unfortunate but important for all of us, and the best thing you can do with a mistake is to learn from it. Let’s say you are writing an essay and it does not go as well as you thought it would. Your professor does not like it, and you get a low grade. Your choice is to either keep up with what you did before and fail again or to learn from it. Find a [_free essay sample_][4], go through it, make notes, use some assistance service, and craft a professional essay that will stun your professor. So, whenever you make a mistake and there’s nothing you can do to promptly fix it, make it your teacher; that’s one of the Amazon CEO’s tips we want you to remember.
|
||||
|
||||
**3\. Be Brave**
|
||||
While it might seem like an obvious tip, many students and young entrepreneurs get it wrong. If you are trying to do something new, let’s say start a business or just write a new essay, experimenting will be an integral part of your task. Experiments might fail, of course, but even if your experiment fails to deliver a desirable result, it is still going to be something new, something previously unseen. This is the best part of creating something new: you never know what you’ll end up with. And if you are brave, you are going to experiment on and on until one of your experiments brings you success and money. So, whether we are talking about writing essays or starting a new business, you must be brave and ready to face both success and failure.
|
||||
|
||||
**[![][5]][6]4\. Be Firm and Patient**
|
||||
Bravery alone is not enough, because even the bravest of us fail to achieve our desired goals. So, the trick here is to be patient and keep pushing until you make it. If you are brave enough to start a new business, or whatever new thing you are planning to do, then you must also be patient and firm enough to withstand a potential failure. Many try and give up after the very first time they fail, and only a few like Jeff Bezos keep pushing on until they reach their goals. Starting anew is always hard, especially if you have a couple of failures behind you; some people lose faith in themselves and stop trying, so you must be firm in your desire to achieve [_success in life_][7]. Whatever you do and whatever challenges you face, keep on chasing your goal until you catch it.
|
||||
|
||||
**5\. Think Big**
|
||||
This very phrase might sound like a cliche to some people, but if you start to comprehend what stands behind these words, it makes sense. When Bezos started his online retail service, his idea was not just to sell things but to become the best retailer in the world. He accomplished that task brilliantly, and all of his achievements were only possible because he thought big. Bezos did not want to merely run a business and make money; he wanted his company to be the best thing in the world. Over the years, he built a business empire that reflects the concept of thinking big at its best.
|
||||
|
||||
**Wrap Up**
|
||||
Of course, following this set of advice does not automatically make you a billionaire, but it can surely help you on your way to achieving even minor goals. Only a few of us will be able to become as successful as Jeff Bezos, but it might be you; why not? Just keep on pushing, be brave, learn from your mistakes, think big, and try to surround yourself with the best people. These are the top tips from the world’s richest man, so you might want to follow some of them in your daily life. Go ahead and strive for success, and maybe in a couple of years, someone will ask for your advice on how to reach success and become a billionaire.
|
||||
|
||||
**By: Jessica Vainer**
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensourceforu.com/2019/09/the-richest-man-in-history-shares-his-thoughts-jeff-bezos-top-5-tips-for-success/
|
||||
|
||||
作者:[Aashima Sharma][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensourceforu.com/author/aashima-sharma/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/jeff1.jpg?resize=696%2C470&ssl=1 (jeff1)
|
||||
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/jeff1.jpg?fit=1200%2C810&ssl=1
|
||||
[3]: https://www.investopedia.com/articles/pf/08/make-money-in-business.asp
|
||||
[4]: https://studymoose.com/
|
||||
[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/jeff2.jpg?resize=350%2C233&ssl=1
|
||||
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/jeff2.jpg?ssl=1
|
||||
[7]: https://www.success.com/10-tips-to-achieve-anything-you-want-in-life/
|
@ -1,3 +1,4 @@
|
||||
wenwensnow is translating
|
||||
Go on very small hardware (Part 1)
|
||||
============================================================
|
||||
|
||||
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (PsiACE)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -0,0 +1,162 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (An introduction to audio processing and machine learning using Python)
|
||||
[#]: via: (https://opensource.com/article/19/9/audio-processing-machine-learning-python)
|
||||
[#]: author: (Jyotika Singh https://opensource.com/users/jyotika-singhhttps://opensource.com/users/jroakeshttps://opensource.com/users/don-watkinshttps://opensource.com/users/clhermansenhttps://opensource.com/users/greg-p)
|
||||
|
||||
An introduction to audio processing and machine learning using Python
|
||||
======
|
||||
The pyAudioProcessing library classifies audio into different categories and genres.
|
||||
![abstract illustration with black background][1]
|
||||
|
||||
At a high level, any machine learning problem can be divided into three types of tasks: data tasks (data collection, data cleaning, and feature formation), training (building machine learning models using data features), and evaluation (assessing the model). Features, [defined][2] as "individual measurable propert[ies] or characteristic[s] of a phenomenon being observed," are very useful because they help a machine understand the data and classify it into categories or predict a value.
|
||||
|
||||
![Machine learning at a high level][3]
|
||||
|
||||
Different data types use very different processing techniques. Take the example of an image as a data type: it looks like one thing to the human eye, but a machine sees it differently after it is transformed into numerical features derived from the image's pixel values using different filters (depending on the application).
|
||||
|
||||
![Data types and feature formation in images][4]
|
||||
|
||||
[Word2vec][5] works great for processing bodies of text. It represents words as vectors of numbers, and the distance between two word vectors determines how similar the words are. If we try to apply Word2vec to numerical data, the results probably will not make sense.
|
||||
|
||||
![Word2vec for analyzing a corpus of text][6]
|
||||
|
||||
So, there are processing techniques specific to the audio data type that work well with audio.
|
||||
|
||||
### What are audio signals?
|
||||
|
||||
Audio signals are signals that vibrate in the audible frequency range. When someone talks, it generates air pressure signals; the ear takes in these air pressure differences and communicates with the brain. That's how the brain helps a person recognize that the signal is speech and understand what someone is saying.
|
||||
|
||||
There are a lot of MATLAB tools to perform audio processing, but not as many exist in Python. Before we get into some of the tools that can be used to process audio signals in Python, let's examine some of the features of audio that apply to audio processing and machine learning.
|
||||
|
||||
![Examples of audio terms to learn][7]
|
||||
|
||||
Some data features and transformations that are important in speech and audio processing are Mel-frequency cepstral coefficients ([MFCCs][8]), Gammatone-frequency cepstral coefficients (GFCCs), Linear-prediction cepstral coefficients (LPCCs), Bark-frequency cepstral coefficients (BFCCs), Power-normalized cepstral coefficients (PNCCs), spectrum, cepstrum, spectrogram, and more.
|
||||
|
||||
We can use some of these features directly and extract features from some others, like spectrum, to train a machine learning model.
|
||||
|
||||
### What are spectrum and cepstrum?
|
||||
|
||||
Spectrum and cepstrum are two particularly important features in audio processing.
|
||||
|
||||
![Spectrum and cepstrum][9]
|
||||
|
||||
Mathematically, a spectrum is the [Fourier transform][10] of a signal. A Fourier transform converts a time-domain signal to the frequency domain. In other words, a spectrum is the frequency domain representation of the input audio's time-domain signal.
|
||||
|
||||
A [cepstrum][11] is formed by taking the log magnitude of the spectrum followed by an inverse Fourier transform. This results in a signal that's neither in the frequency domain (because we took an inverse Fourier transform) nor in the time domain (because we took the log magnitude prior to the inverse Fourier transform). The domain of the resulting signal is called the quefrency.
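
To make both definitions concrete, here is a minimal sketch (not from the original article) that computes a spectrum and a cepstrum with NumPy and SciPy; the WAV file name is a placeholder, and a mono recording is assumed:

```
import numpy as np
from scipy.io import wavfile

rate, signal = wavfile.read("example.wav")        # placeholder mono WAV file
signal = signal.astype(np.float64)

# Spectrum: the Fourier transform of the time-domain signal.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)

# Cepstrum: log magnitude of the spectrum, followed by an inverse Fourier transform.
log_magnitude = np.log(np.abs(spectrum) + 1e-10)  # small epsilon avoids log(0)
cepstrum = np.fft.irfft(log_magnitude)

print(freqs[np.argmax(np.abs(spectrum))])         # dominant frequency in Hz
print(cepstrum[:10])                              # first few quefrency bins
```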
|
||||
|
||||
### What does this have to do with hearing?
|
||||
|
||||
The reason we care about the signal in the frequency domain relates to the biology of the ear. Many things must happen before we can process and interpret a sound. One happens in the cochlea, a fluid-filled part of the ear with thousands of tiny hairs that are connected to nerves. Some of the hairs are short, and some are relatively longer. The shorter hairs resonate with higher sound frequencies, and the longer hairs resonate with lower sound frequencies. Therefore, the ear is like a natural Fourier transform analyzer!
|
||||
|
||||
![How the ear works][12]
|
||||
|
||||
Another fact about human hearing is that as the sound frequency increases above 1kHz, our ears begin to get less selective to frequencies. This corresponds well with something called the Mel filter bank.
|
||||
|
||||
![MFCC][13]
|
||||
|
||||
Passing a spectrum through the Mel filter bank, followed by taking the log magnitude and a [discrete cosine transform][14] (DCT), produces the Mel cepstrum. The DCT extracts the signal's main information and peaks; it is also widely used in JPEG and MPEG compression. The peaks are the gist of the audio information. Typically, the first 13 coefficients extracted from the Mel cepstrum are called the MFCCs. These hold very useful information about audio and are often used to train machine learning models.
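
For illustration, the first 13 MFCCs can be extracted in a few lines with libROSA (one of the libraries listed at the end of this article); the file path below is a placeholder:

```
import librosa

y, sr = librosa.load("speech_sample.wav", sr=None)    # sr=None keeps the native sample rate
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, number of frames)

print(mfccs.shape)
print(mfccs.mean(axis=1))   # per-coefficient means make a compact feature vector
```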
|
||||
|
||||
Another filter inspired by human hearing is the Gammatone filter bank. This filter bank is used as a front-end simulation of the cochlea. Thus, it has many applications in speech processing because it aims to replicate how we hear.
|
||||
|
||||
![GFCC][15]
|
||||
|
||||
GFCCs are formed by passing the spectrum through a Gammatone filter bank, followed by loudness compression and DCT. The first (approximately) 22 features are called GFCCs. GFCCs have a number of applications in speech processing, such as speaker identification.
|
||||
|
||||
Other features useful in audio processing tasks (especially speech) include LPCC, BFCC, PNCC, and spectral features like spectral flux, entropy, roll off, centroid, spread, and energy entropy.
|
||||
|
||||
### Building a classifier
|
||||
|
||||
As a quick experiment, let's try building a classifier with spectral features and MFCC, GFCC, and a combination of MFCCs and GFCCs using an open source Python-based library called [pyAudioProcessing][16].
|
||||
|
||||
To start, we want pyAudioProcessing to classify audio into three categories: speech, music, or birds.
|
||||
|
||||
![Segmenting audio into speech, music, and birds][17]
|
||||
|
||||
Using a small dataset (50 samples for training per class) and without any fine-tuning, we can gauge the potential of this classification model to identify audio categories.
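
pyAudioProcessing handles this workflow end to end. As a rough sketch of the underlying idea (extract per-file features, then train a classifier), here is a generic version using libROSA and scikit-learn rather than pyAudioProcessing's own API, which may differ; the folder-per-class layout is hypothetical:

```
import glob
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    # Average the MFCCs over time to get one fixed-size vector per file.
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

X, labels = [], []
for label in ["speech", "music", "birds"]:            # hypothetical folders, one per class
    for path in glob.glob(f"data/{label}/*.wav"):
        X.append(mfcc_features(path))
        labels.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(labels), test_size=0.2, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```

Averaging the MFCCs over time throws away temporal detail, but it is enough for a first baseline of the kind described here.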
|
||||
|
||||
![MFCC of speech, music, and bird signals][18]
|
||||
|
||||
Next, let's try pyAudioProcessing on a music genre classification problem using the [GTZAN][19] audio dataset and audio features: MFCC and spectral features.
|
||||
|
||||
![Music genre classification][20]
|
||||
|
||||
Some genres do well while others have room for improvement. Some things that can be explored from this data include:
|
||||
|
||||
* Data quality check: Is more data needed?
|
||||
* Features around the beat and other aspects of music audio
|
||||
* Features other than audio, like transcription and text
|
||||
* Would a different classifier be better? There has been research on using neural networks to classify music genres.
|
||||
|
||||
|
||||
|
||||
Regardless of the results of this quick test, it is evident that these features get useful information out of the signal, a machine can work with them, and they form a good baseline to work with.
|
||||
|
||||
### Learn more
|
||||
|
||||
Here are some useful resources that can help in your journey with Python audio processing and machine learning:
|
||||
|
||||
* [pyAudioAnalysis][21]
|
||||
* [pyAudioProcessing][16]
|
||||
* [Power-normalized cepstral coefficients (PNCC) for robust speech recognition][22]
|
||||
* [LPCC features][23]
|
||||
* [Speech recognition using MFCC][24]
|
||||
* [Speech/music classification using block-based MFCC features][25]
|
||||
* [Musical genre classification of audio signals][26]
|
||||
* Libraries for reading audio in Python: [SciPy][27], [pydub][28], [libROSA][29], pyAudioAnalysis
|
||||
* Libraries for getting features: libROSA, pyAudioAnalysis (for MFCC); pyAudioProcessing (for MFCC and GFCC)
|
||||
* Basic machine learning models to use on audio: sklearn, hmmlearn, pyAudioAnalysis, pyAudioProcessing
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
_This article is based on Jyotika Singh's presentation "[Audio processing and ML using Python][30]" from PyBay 2019._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/9/audio-processing-machine-learning-python
|
||||
|
||||
作者:[Jyotika Singh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jyotika-singhhttps://opensource.com/users/jroakeshttps://opensource.com/users/don-watkinshttps://opensource.com/users/clhermansenhttps://opensource.com/users/greg-p
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/FeedbackLoop.png?itok=l7Sa9fHt (abstract illustration with black background)
|
||||
[2]: https://en.wikipedia.org/wiki/Feature_(machine_learning)
|
||||
[3]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_1.png (Machine learning at a high level)
|
||||
[4]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_1a.png (Data types and feature formation in images)
|
||||
[5]: https://en.wikipedia.org/wiki/Word2vec
|
||||
[6]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_2b.png (Word2vec for analyzing a corpus of text)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_4.png (Examples of audio terms to learn)
|
||||
[8]: https://en.wikipedia.org/wiki/Mel-frequency_cepstrum
|
||||
[9]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_5.png (Spectrum and cepstrum)
|
||||
[10]: https://en.wikipedia.org/wiki/Fourier_transform
|
||||
[11]: https://en.wikipedia.org/wiki/Cepstrum
|
||||
[12]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_6.png (How the ear works)
|
||||
[13]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_7.png (MFCC)
|
||||
[14]: https://en.wikipedia.org/wiki/Discrete_cosine_transform
|
||||
[15]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_8.png (GFCC)
|
||||
[16]: https://github.com/jsingh811/pyAudioProcessing
|
||||
[17]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_10.png (Segmenting audio into speech, music, and birds)
|
||||
[18]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_11.png (MFCC of speech, music, and bird signals)
|
||||
[19]: http://marsyas.info/downloads/datasets.html
|
||||
[20]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_12.png (Music genre classification)
|
||||
[21]: https://github.com/tyiannak/pyAudioAnalysis
|
||||
[22]: http://www.cs.cmu.edu/~robust/Papers/OnlinePNCC_V25.pdf
|
||||
[23]: https://link.springer.com/content/pdf/bbm%3A978-3-319-17163-0%2F1.pdf
|
||||
[24]: https://pdfs.semanticscholar.org/3439/454a00ef811b3a244f2b0ce770e80f7bc3b6.pdf
|
||||
[25]: https://pdfs.semanticscholar.org/031b/84fb7ae3fae3fe51a0a40aed4a0dcb55a8e3.pdf
|
||||
[26]: https://pdfs.semanticscholar.org/4ccb/0d37c69200dc63d1f757eafb36ef4853c178.pdf
|
||||
[27]: https://www.scipy.org/
|
||||
[28]: https://github.com/jiaaro/pydub
|
||||
[29]: https://librosa.github.io/librosa/
|
||||
[30]: https://pybay.com/speaker/jyotika-singh/
|
@ -0,0 +1,101 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Why it's time to embrace top-down cybersecurity practices)
|
||||
[#]: via: (https://opensource.com/article/19/9/cybersecurity-practices)
|
||||
[#]: author: (Matt ShealyAnderson Silva https://opensource.com/users/mshealyhttps://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/bexelbiehttps://opensource.com/users/mkalindepauleduhttps://opensource.com/users/alanfdoss)
|
||||
|
||||
Why it's time to embrace top-down cybersecurity practices
|
||||
======
|
||||
An open culture doesn't mean being light on security practices. Having executives on board with cybersecurity, including funding it adequately, is critical for protecting and securing company data.
|
||||
![Two different business organization charts][1]
|
||||
|
||||
Cybersecurity is no longer just the domain of the IT staff putting in firewalls and backing up servers. It takes a commitment from the top and a budget to match. The stakes are high when it comes to keeping your customers' information safe.
|
||||
|
||||
The average cost of a data breach in 2018 was $148 for each compromised record. That equals an average cost of [$3.86 million per breach][2]. Because it takes organizations more than six months—196 days on average—to detect breaches, a lot of remediation must happen after discovery.
|
||||
|
||||
With compliance regulations tightening in most industries and stricter security rules, such as the [General Data Protection Regulation][3] (GDPR), becoming law, breaches can lead to large fines as well as loss of reputation.
|
||||
|
||||
To build a cybersecurity solution from the top down, you need to build a solid foundation. This foundation should be viewed not as a technology problem but as a governance issue. Tech solutions will play a role, but it takes more than that—it starts with building a culture of safety.
|
||||
|
||||
### Build a cybersecurity culture
|
||||
|
||||
"A chain is no stronger than its weakest link," Thomas Reid wrote back in 1786. The message still applies when it comes to cybersecurity today. Your systems are only as secure as your least safety-conscious team member. One lapse, by one person, can compromise your data.
|
||||
|
||||
It's important to build a culture where all team members understand the importance of cybersecurity. Security is not just the IT department's job. It is everyone's responsibility.
|
||||
|
||||
Training is a continuous responsibility. When new team members are on-boarded, they need to be trained in security best practices. When team members leave, their access must be restricted immediately. As team members get comfortable in their positions, there should be [strong policies, procedures, and training][4] to keep them safety conscious.
|
||||
|
||||
### Maintain secure systems
|
||||
|
||||
Corporate policies and procedures will establish a secure baseline for your systems. It's important to maintain strict adherence as systems expand or evolve. Secure network design must match these policies.
|
||||
|
||||
A secure system will be able to filter all incoming traffic at the network perimeter. Only traffic required to support your organization should be allowed to get through this perimeter. Unfortunately, threats sometimes still get in.
|
||||
|
||||
Zero-day attacks are increasing in number, and more threat actors are exploiting known defects in software. In 2018, more than [three-quarters of successful endpoint attacks exploited zero-day flaws][5]. While it's difficult to guard against unknown threats, you can minimize your exposure by strictly applying updates and patches immediately when they're released.
|
||||
|
||||
### Manage user privileges
|
||||
|
||||
By limiting each individual user's access and privileges, companies can use micro-segmentation to minimize the potential damage done by an attack. If an attack does get through your secure perimeter, this will limit the number of areas the attacker has access to.
|
||||
|
||||
User access should be limited to only the privileges they need to do their jobs, especially when it comes to sensitive data. Most breaches start with email phishing. Unsuspecting employees click on a malicious link or are tricked into giving up their login credentials. The less access employees have, the less damage a hacker can do.
|
||||
|
||||
Identity and access management (IAM) systems can deploy single sign-on (SSO) to reduce the number of passwords users need to access systems by using an authentication token accepted by different apps. Multi-factor authentication practices combined with reducing privileges can lower risk to the entire system.
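
As a small illustration of the second factor, a time-based one-time password (TOTP) check with the pyotp library might look like the sketch below; the secret is generated on the spot here, whereas a real system would provision and store one per user:

```
import pyotp

secret = pyotp.random_base32()      # provisioned once and shared with the user's authenticator app
totp = pyotp.TOTP(secret)

print("Current code:", totp.now())  # what the authenticator app would display

user_supplied_code = totp.now()     # stand-in for the code the user types in
print("Accepted:", totp.verify(user_supplied_code))
```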
|
||||
|
||||
### Implement continuous monitoring
|
||||
|
||||
Your security needs [continuous monitoring across your enterprise][6] to detect and prevent intrusion. This includes servers, networks, Software-as-a-Service (SaaS), cloud services, mobile users, third-party applications, and much more. In reality, it is imperative that every entry point and connection is continuously monitored.
|
||||
|
||||
Your employees are working around the clock, especially if you are a global enterprise. They are working from home and working on the road. This means multiple devices, internet accesses, and servers, all of which need to be monitored.
|
||||
|
||||
Likewise, hackers are working continuously to find any flaw in your system that could lead to a possible cyberattack. Don't wait for your next IT audit to worry about finding the flaws; this should be a continual process and high priority.
|
||||
|
||||
### Conduct regular risk assessments
|
||||
|
||||
Even with continuous monitoring, chief information security officers (CISOs) and IT managers should regularly conduct risk assessments. New devices, hardware, third-party apps, and cloud services are being added all the time. It's easy to forget how all these individual pieces, added one at a time, all fit into the big picture.
|
||||
|
||||
The regularly scheduled, formal risk assessment should take an exhaustive look at infrastructure and access points. It should include penetration testing to identify potential threats.
|
||||
|
||||
Your risk assessment should also analyze backups and data-recovery planning in case a breach occurs. Don't just set up your security and hope it works. Have a plan for what you will do if access is breached, know who will be responsible for what, and establish an expected timeline to implement your plan.
|
||||
|
||||
### Pay attention to remote teams and BYOD users
|
||||
|
||||
More team members than ever work remotely. Whether they are working on the road, at a remote location, or from home, they pose a cybersecurity risk. They are connecting remotely, which can [leave channels open for intrusion or data interception][7].
|
||||
|
||||
Team members often mix company devices and personal devices almost seamlessly. The advent of BYOD (bring your own device) means company assets may also be vulnerable to apps and software installed on personal devices. While you can manage what's on company devices, when employees check their company email from their personal phone or connect to a company server from their personal laptop, you've increased your overall risk.
|
||||
|
||||
Personal devices and remote connections should always utilize a virtual private network (VPN). A VPN uses encrypted connections to the internet that create a private tunnel that masks the user's IP address. As Douglas Crawford, resident security expert at ProPrivacy.com, [explains][8], "Until the Edward Snowden revelations, people assumed that 128-bit encryption was in practice uncrackable through brute force. They believed it would be so for around another 100 years (taking Moore's Law into account). In theory, this still holds true. However, the scale of resources that the NSA seems willing to throw at cracking encryption has shaken many experts' faith in these predictions. Consequently, system administrators the world over are scrambling to upgrade cipher key lengths."
|
||||
|
||||
### A top-down cybersecurity strategy is essential
|
||||
|
||||
When it comes to cybersecurity, a top-down strategy is essential to providing adequate protection. Building a culture of cybersecurity throughout the organization, maintaining secure systems, and continuous monitoring are essential to safeguarding your systems and your data.
|
||||
|
||||
A top-down approach means your IT department is not solely focused on your company's tech stack while management is solely focused on the company mission and objectives. These are no longer siloed departments; they are interwoven and dependent on each other to ensure success.
|
||||
|
||||
Ultimately, success is defined as keeping your customer information safe and secure. Continuous monitoring and protection of sensitive information are critical to the success of the entire company. With top management on board with funding cybersecurity adequately, IT can ensure optimum security practices.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/9/cybersecurity-practices
|
||||
|
||||
作者:[Matt Shealy, Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mshealyhttps://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/bexelbiehttps://opensource.com/users/mkalindepauleduhttps://opensource.com/users/alanfdoss
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart2.png?itok=R_cnshU2 (Two different business organization charts)
|
||||
[2]: https://securityintelligence.com/ponemon-cost-of-a-data-breach-2018/
|
||||
[3]: https://ec.europa.eu/info/law/law-topic/data-protection_en
|
||||
[4]: https://us.norton.com/internetsecurity-how-to-cyber-security-best-practices-for-employees.html
|
||||
[5]: https://www.ponemon.org/news-2/82
|
||||
[6]: https://digitalguardian.com/blog/what-continuous-security-monitoring
|
||||
[7]: https://www.chamberofcommerce.com/business-advice/ransomeware-the-terrifying-threat-to-small-business
|
||||
[8]: https://proprivacy.com/guides/the-ultimate-privacy-guide
|
@ -0,0 +1,113 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Deep Learning Based Chatbots are Smarter)
|
||||
[#]: via: (https://opensourceforu.com/2019/09/deep-learning-based-chatbots-are-smarter/)
|
||||
[#]: author: (Dharmendra Patel https://opensourceforu.com/author/dharmendra-patel/)
|
||||
|
||||
Deep Learning Based Chatbots are Smarter
|
||||
======
|
||||
|
||||
[![][1]][2]
|
||||
|
||||
_Contemporary chatbots extensively use machine learning, natural language processing, artificial intelligence and deep learning. They are typically used in the customer service space for almost all domains. Chatbots based on deep learning are far better than traditional variants. Here’s why._
|
||||
|
||||
Chatbots are currently being used extensively to change customer behaviour. Usually, traditional artificial intelligence (AI) concepts are used in designing chatbots. However, modern applications generate such vast volumes of data that it becomes arduous to process this with traditional AI algorithms.
|
||||
Deep learning is a subset of AI and is the most suitable technique to process large quantities of data. Deep learning based systems learn from copious data points. Systems like chatbots are the right contenders for deep learning as they require abundant data points to train the system to reach precise levels of performance. The main purpose of chatbots is to offer the most appropriate reply to any question or message that it receives. The ideal response from a chatbot has multiple aspects to it, such as:
|
||||
|
||||
* It should be able to chat in a pragmatic manner
|
||||
* Respond to the caller’s query
|
||||
* Provide the corresponding, relevant information
|
||||
* Raise follow up questions like in a real conversation
|
||||
|
||||
|
||||
|
||||
Deep learning simulates the human mind for processing information. It works like the human brain by categorising a variety of information, and automatically discovers the features to be used to classify this information in a way that is perfect for chatbot systems.
|
||||
|
||||
![Figure 1: Steps for designing chatbots using deep learning][3]
|
||||
|
||||
**Steps for designing chatbots using deep learning**
|
||||
The goal when designing chatbots using deep learning is to automate the system as much as possible and lessen the need for human management. To achieve this, the chatbot must be able to fully replace the human experts, eradicating the need for client service representatives. Figure 1 depicts the steps for designing chatbots using deep learning.
|
||||
|
||||
The first step when designing a chatbot is to collect the existing interactions between clients and service representatives, in order to teach the machine the phrases that are important while interacting with customers. This is called ontology creation.
|
||||
|
||||
Data preparation, or data preprocessing, is the next step in designing the chatbot. It consists of several steps, such as tokenisation, stemming, and lemmatisation. This phase integrates grammar into machine understanding.
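
A small sketch of those preprocessing steps, using the NLTK library (the sample sentence is made up):

```
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("punkt", quiet=True)     # tokeniser models
nltk.download("wordnet", quiet=True)   # lemmatiser dictionary

sentence = "What about the placements offered this year?"
tokens = nltk.word_tokenize(sentence)

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

print(tokens)
print([stemmer.stem(t) for t in tokens])          # stems, e.g. "placements" -> "placement"
print([lemmatizer.lemmatize(t) for t in tokens])  # dictionary forms
```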
|
||||
|
||||
The third step involves deciding on the appropriate model for the chatbot. There are two prominent models: retrieval-based and generative. Retrieval-based models select replies from a repository of predefined responses, while generative models are more advanced and use deep learning concepts to generate new responses.
|
||||
|
||||
The next step is to decide on the appropriate technique to handle client interactions efficiently.
|
||||
Now you are ready to design and implement the chatbot. Use the appropriate programming language for the implementation. Once it is implemented successfully, test it to uncover any bugs or errors.
|
||||
|
||||
**Deep learning based models for chatbots**
|
||||
Generative models are based on deep learning. They are the smartest models for chatbots but are very complicated to build and operate. They give the best response for any query as they use semantic similarity, which identifies the terms that have common characteristics.
|
||||
|
||||
The Recurrent Neural Network (RNN) encoder-decoder is the ultimate generative model for chatbots and consists of two RNNs. As input, the encoder takes a sentence and processes it one word at a time. It translates the series of words into a feature vector of predetermined size. It keeps only significant words and removes the unnecessary ones. The encoder consists of a number of hidden layers in which one layer influences the other. The final hidden layer acts as a summary layer for the entire sentence.
|
||||
The decoder, on the other hand, generates another series, one word at a time. The decoder is influenced by the context and by previously generated words.
|
||||
|
||||
Generally, this model is best suited to fixed-length sequences; however, before training the model, padding is used to convert variable-length sequences into fixed-length ones. For example:
|
||||
|
||||
```
|
||||
Query : [P P P P P P “What” “About” “Placement” “ ?” ]
|
||||
// Assume that the fixed length is 10.P is Padding
|
||||
Response : [ SD “ It” “is” “Almost” “100%” END P P P P ]
|
||||
// SD means start decoding. END means response is over. P is Padding
|
||||
```
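
Under the same assumption of a fixed length of 10 after padding, a compact Keras sketch of the encoder-decoder described above might look like this; the vocabulary size and layer widths are placeholders, and training data is omitted:

```
from tensorflow.keras import layers, Model

vocab_size = 5000    # placeholder vocabulary size
embed_dim = 128
hidden_units = 256
max_len = 10         # fixed sequence length after padding

# Encoder: reads the padded query and summarises it into its final hidden state.
enc_inputs = layers.Input(shape=(max_len,), name="query_tokens")
enc_emb = layers.Embedding(vocab_size, embed_dim, mask_zero=True)(enc_inputs)
_, state_h, state_c = layers.LSTM(hidden_units, return_state=True)(enc_emb)

# Decoder: generates the response one word at a time, conditioned on the encoder
# summary and the previously generated words (teacher forcing during training).
dec_inputs = layers.Input(shape=(max_len,), name="response_tokens")
dec_emb = layers.Embedding(vocab_size, embed_dim, mask_zero=True)(dec_inputs)
dec_out = layers.LSTM(hidden_units, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
next_word_probs = layers.Dense(vocab_size, activation="softmax")(dec_out)

model = Model([enc_inputs, dec_inputs], next_word_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

At inference time the decoder would instead be run step by step, feeding each predicted word back in as the next input; that loop is omitted here for brevity.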
|
||||
|
||||
Word embedding is another important aspect of deep learning based chatbots. It captures the context of the word in the sentence, the semantic and syntactic similarities, as well as the relationship with other words. Word2Vec is a famous method to construct word embedding. There are two main techniques in Word2Vec and both are based on neural networks — continuous bag-of-words (CBOW) and continuous skip-gram.
|
||||
|
||||
The continuous bag-of-words method is generally used as a tool of feature generation. A sentence is first converted into a bag of words. After that, various measures are calculated to characterise the sentence.
|
||||
|
||||
Frequency is the main measure in CBOW, which provides better accuracy for frequent words. The skip-gram method does the reverse of the CBOW method: it tries to predict the source context words from the target word. It works well with smaller training data sets.
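
Both variants are available in the gensim library; in the toy sketch below (parameter names follow gensim 4.x, and the corpus is made up), sg=0 selects CBOW and sg=1 selects skip-gram:

```
from gensim.models import Word2Vec

sentences = [["chatbots", "answer", "customer", "questions"],
             ["deep", "learning", "chatbots", "answer", "better"]]

cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(cbow.wv.most_similar("chatbots", topn=2))   # nearest neighbours under CBOW
print(skipgram.wv["answer"][:5])                  # first few dimensions of a skip-gram vector
```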
|
||||
|
||||
The logic for chatbots that use deep learning is as follows (a code sketch follows these steps):
|
||||
_Step 1:_ Build the corpus vocabulary.
|
||||
_Step 2:_ Map a unique numeric identifier with each word.
|
||||
_Step 3:_ Padding is done to the context words to keep their size fixed.
|
||||
_Step 4:_ Pair each target word with its surrounding context words.
|
||||
_Step 5:_ Build the deep learning architecture for the CBOW model. This involves the following sequence:
|
||||
|
||||
* Input as context words
|
||||
* Initialised with random weights
|
||||
* Arrange the word embeddings
|
||||
* Create a dense softmax layer
|
||||
* Predict target word
|
||||
* Match with actual target word
|
||||
* Compute the loss
|
||||
* Perform back propagation to update embedding layer
|
||||
_Step 6:_ Train the model.
|
||||
_Step 7:_ Test the model.
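
Here is the promised sketch of those steps using Keras; the toy corpus, window size, and embedding dimension are placeholders:

```
import numpy as np
from tensorflow.keras import layers, Model

corpus = ["what about placement", "placement is almost guaranteed"]   # toy corpus

# Steps 1-2: build the vocabulary and map each word to a numeric id (0 is reserved for padding).
words = sorted({w for line in corpus for w in line.split()})
word_to_id = {w: i + 1 for i, w in enumerate(words)}
vocab_size = len(word_to_id) + 1

# Steps 3-4: build (context, target) pairs with a fixed window, padding short contexts.
window = 2
contexts, targets = [], []
for line in corpus:
    ids = [word_to_id[w] for w in line.split()]
    for i, target in enumerate(ids):
        context = ids[max(0, i - window):i] + ids[i + 1:i + 1 + window]
        context += [0] * (2 * window - len(context))     # pad to a fixed size
        contexts.append(context)
        targets.append(target)
contexts, targets = np.array(contexts), np.array(targets)

# Step 5: context ids -> embedding layer (random init) -> average -> dense softmax over the vocabulary.
inp = layers.Input(shape=(2 * window,))
emb = layers.Embedding(vocab_size, 50)(inp)
avg = layers.GlobalAveragePooling1D()(emb)
out = layers.Dense(vocab_size, activation="softmax")(avg)
model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Steps 6-7: train, then sanity-check one prediction.
model.fit(contexts, targets, epochs=50, verbose=0)
predicted_id = int(model.predict(contexts[:1], verbose=0).argmax())
print({v: k for k, v in word_to_id.items()}.get(predicted_id))
```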
|
||||
|
||||
|
||||
|
||||
![Figure 2: Encoder layers][4]
|
||||
|
||||
![Figure 3: Decoder functioning][5]
|
||||
|
||||
**Deep learning tools for chatbots**
|
||||
TensorFlow is a great tool that uses deep learning. It uses linear regression to achieve effective conversation. We first need to develop a TensorFlow model by using JSON to recognise patterns. The next step is loading this framework and contextualising the data. TensorFlow makes chatbots realistic and very effective.
|
||||
|
||||
Microsoft conversational AI tools are another important resource to design effective chatbots. These tools can be used to design, link, install and accomplish intelligent bots. The Microsoft Bot Builder software development kit (SDK) is ideal for the quick, free and easy development of chatbots with intelligence.
|
||||
|
||||
PyTorch is an excellent open source library based on Python for applications like chatbots. Its optim module implements various optimisation algorithms used to train neural networks, which are essential for designing efficient chatbots. It also provides the power of tensors and so offers much of the same functionality as TensorFlow.
|
||||
Chatbots are essential if organisations aim to deal with customers without any human intervention. As discussed, deep learning based chatbots are the better option compared to the traditional variants, as the former handle abundant data efficiently. And generative models for building chatbots are more appropriate in the modern context.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensourceforu.com/2019/09/deep-learning-based-chatbots-are-smarter/
|
||||
|
||||
作者:[Dharmendra Patel][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensourceforu.com/author/dharmendra-patel/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-20-16-23-04.png?resize=696%2C472&ssl=1 (Screenshot from 2019-09-20 16-23-04)
|
||||
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-20-16-23-04.png?fit=706%2C479&ssl=1
|
||||
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1DL.png?resize=350%2C248&ssl=1
|
||||
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2DL.png?resize=350%2C72&ssl=1
|
||||
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/3DL.png?resize=350%2C67&ssl=1
|
@ -0,0 +1,138 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Euler's Identity Really is a Miracle, Too)
|
||||
[#]: via: (https://theartofmachinery.com/2019/09/20/euler_formula_miracle.html)
|
||||
[#]: author: (Simon Arneaud https://theartofmachinery.com)
|
||||
|
||||
Euler's Identity Really is a Miracle, Too
|
||||
======
|
||||
|
||||
[A post about the exponential function being a miracle][1] did the rounds recently, and [the Hacker News comment thread][2] brought up some debate about the miracle of Euler’s famous identity:
|
||||
|
||||
[e^{\pi i} + 1 = 0]
|
||||
|
||||
A while back I used to make a living teaching this stuff to high school students and university undergrads. Let me give my personal take on what’s so special about Euler’s identity.
|
||||
|
||||
### Complex numbers are already a miracle
|
||||
|
||||
Let’s start with complex numbers.
|
||||
|
||||
The first introduction to complex numbers is usually something like, "We don't know what (\sqrt{- 1}) is, so let's try calling it (i)." As it turns out, it works. [It works unreasonably well.][3] To see what I mean, imagine we tried to do the same thing with (\frac{1}{0}). Well, let's just make up a value for it called, say, (v). Now consider this old teaser:
|
||||
|
||||
[\begin{aligned} x &= 2,\quad y = 2 \\ \therefore x &= y \\ \text{(multiply by } y\text{)}\ \therefore xy &= y^{2} \\ \text{(subtract } x^{2}\text{)}\ \therefore xy - x^{2} &= y^{2} - x^{2} \\ \text{(factorise)}\ \therefore x(y - x) &= (y + x)(y - x) \\ \text{(divide by the common factor)}\ \therefore x &= y + x \\ \text{(subtract } x\text{)}\ \therefore 0 &= y \\ \therefore 0 &= 2 \end{aligned}]
|
||||
|
||||
(If you’re not sure about the factorisation, try expanding it.) Obviously (0 \neq 2), so where does this “proof” go wrong? At the point it assumes dividing by the (y - x) factor obeys the normal rules of algebra — it doesn’t because (y - x = 0). We can’t just quietly add (v) to our number system and expect any of our existing maths to work with it. On the other hand, it turns out we _can_ (for example) write quadratic equations using (i) and treat them just like quadratic equations using real numbers (even solving them with the same old quadratic formula).
|
||||
|
||||
It gets better. As anyone who's studied complex numbers knows, after we take the plunge and say (\sqrt{- 1} = i), we don't need to invent new numbers for, e.g., (\sqrt{i}) (it's (\frac{\pm (1 + i)}{\sqrt{2}})). In fact, instead of going "[turtles all the way down][4]" naming new numbers, we discover that complex numbers actually fill more gaps in the real number system. In many ways, complex numbers work better than real numbers.
|
||||
|
||||
### (e^{\pi i}) isn’t just a made up thing
|
||||
|
||||
I’ve met a few engineers who think that (e^{\pi i} = - 1) and its generalisation (e^{\theta i} = \cos\theta + i\sin\theta) are just notation made up by mathematicians for conveniently modelling things like rotations. I think that’s a shame because Euler’s formula is a lot more surprising than just notation.
|
||||
|
||||
Let’s look at some ways to calculate (e^{x}) for real numbers. With a bit of calculus, you can figure out this Taylor series expansion around zero (also known as a Maclaurin series):
|
||||
|
||||
[\begin{aligned} e^{x} &= 1 + x + \frac{x^{2}}{2} + \frac{x^{3}}{2 \times 3} + \frac{x^{4}}{2 \times 3 \times 4} + \ldots \\ &= \sum\limits_{n = 0}^{\infty}\frac{x^{n}}{n!} \end{aligned}]
|
||||
|
||||
A neat thing about this series is that it’s easy to compare with [the series for sin and cos][5]. If you assume they work just as well for complex numbers as real numbers, it only takes simple algebra to show (e^{\theta i} = \cos\theta + i\sin\theta), so it’s the classic textbook proof.
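For readers who want that algebra spelled out, substituting (x = \theta i) into the series and grouping the real and imaginary terms gives:

[\begin{aligned} e^{\theta i} &= 1 + \theta i + \frac{(\theta i)^{2}}{2!} + \frac{(\theta i)^{3}}{3!} + \frac{(\theta i)^{4}}{4!} + \ldots \\ &= \left( 1 - \frac{\theta^{2}}{2!} + \frac{\theta^{4}}{4!} - \ldots \right) + i\left( \theta - \frac{\theta^{3}}{3!} + \frac{\theta^{5}}{5!} - \ldots \right) \\ &= \cos\theta + i\sin\theta \end{aligned}]

The even powers of (i) collapse into the cosine series and the odd powers into the sine series.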
|
||||
|
||||
Unfortunately, if you try evaluating the series on a computer, you hit numerical stability problems. Here’s another way to calculate (e^{x}):
|
||||
|
||||
[e^{x} = \lim\limits_{n\rightarrow\infty}\left( 1 + \frac{x}{n} \right)^{n}]
|
||||
|
||||
Or, translated naïvely into a stupid approximation algorithm in computer code [1][6]:
|
||||
|
||||
```
|
||||
import std.algorithm;
|
||||
import std.range;
|
||||
|
||||
double approxExp(double x, int n) pure
|
||||
{
|
||||
return (1 + x / n).repeat(n).reduce!"a * b";
|
||||
}
|
||||
```
|
||||
|
||||
Try plugging some numbers into this function, and you’ll see it calculates approximate values for (e^{x}) (though you might need `n` in the thousands to get good results).
|
||||
|
||||
Now for a little leap of faith: That function only uses addition, division and multiplication, which can all be defined and implemented for complex numbers without assuming Euler’s formula. So what if you replace `double` with [a complex number type][7], assume everything’s okay mathematically, and try plugging in some numbers like (3.141593i)? Try it for yourself. Somehow everything starts cancelling out as (n) gets bigger and (x) gets closer to (\pi i), and you get something closer and closer to (- 1 + 0i).
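If you would rather not set up a D toolchain, the same experiment takes a couple of lines with Python's built-in complex type; this is just an illustrative sketch of the limit above, not the article's code.

```
import cmath

def approx_exp(x, n):
    # only addition, division and multiplication, as in the limit definition
    return (1 + x / n) ** n

for n in (10, 1000, 100000):
    print(n, approx_exp(cmath.pi * 1j, n))
# the printed values creep towards -1+0j as n grows
```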
|
||||
|
||||
### (e) and (\pi) are miracles, too
|
||||
|
||||
Because mathematicians prefer to write these constants symbolically, it’s easy to forget what they really are. Imagine the real number line stretching from minus infinity to infinity. There’s one notch slightly below 3, and another notch just above 3, and for deeper reasons, these two notches are special and keep turning up in seemingly unrelated places in maths.
|
||||
|
||||
For example, take the series sum (\frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \ldots). It doesn’t converge, but the sum to (n) terms (called the Harmonic function, or (H(n))) approximates (\log_{e}n). If you square the terms, the series converges, but this time (\pi) appears instead of (e): (\frac{1}{1^{2}} + \frac{1}{2^{2}} + \frac{1}{3^{2}} + \ldots = \frac{\pi^{2}}{6}).
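A quick numeric check (illustrative Python, nothing more) shows both behaviours:

```
import math

n = 1_000_000
print(sum(1 / k for k in range(1, n + 1)), math.log(n))
# H(n) tracks log_e(n); their difference settles near 0.5772 (Euler's constant)

print(sum(1 / k**2 for k in range(1, n + 1)), math.pi**2 / 6)
# the squared series converges to pi^2/6
```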
|
||||
|
||||
Here’s some more context for why the ubiquity of (e) and (\pi) is special. “The ratio of a circle’s circumference to its diameter” and “the square root of 2” are both numbers that can’t be written down as exact decimals, but at least we can describe them well enough to _define_ them exactly. Imagine some immortal creature tried listing all the numbers that can be mathematically defined. The list could start with all numbers that can be defined in under 10 characters, then all the numbers that can be defined in 10-20 characters, and so on. Obviously, that list never ends, but every definable number will appear on it somewhere, at some finite position. That’s what Georg Cantor called countably infinite, and he went on to prove ([using a simple diagonalisation argument][8]) that the set of real numbers is somehow infinitely bigger than that. That means most real numbers aren’t even definable.
|
||||
|
||||
In other words, you could say maths with numbers is based on a sea of literally indescribable chaos. Thinking of it that way, it’s amazing that the five constants in Euler’s formula get us as far as they do.
|
||||
|
||||
### Yes, the exponential function is a miracle
|
||||
|
||||
I hinted that we can’t just assume that the Taylor series expansion for (e^{x}) works for complex numbers. Here are some examples that show what I mean. First, take the series expansion of (e^{- x^{2}}), the shape of the bell curve famous in statistics:
|
||||
|
||||
[e^{- x^{2}} = 1 - x^{2} + \frac{x^{4}}{2} - \frac{x^{6}}{3!} + \frac{x^{8}}{4!} - \ldots]
|
||||
|
||||
Of course, we can’t calculate the whole infinite sum, but we can approximate it by taking the first (n) terms. Here’s a plot of approximations taking successively more terms. We can see the bell shape after a few dozen terms, and the more terms we add, the better it gets:
|
||||
|
||||
![][9]
|
||||
|
||||
Okay, that’s a Taylor series doing what it’s supposed to. How about we try the same thing with another hump-shaped curve, (\frac{1}{1 + x^{2}})?
|
||||
|
||||
![][10]
|
||||
|
||||
This time it’s like there’s an invisible brick wall at (x = \pm 1). By adding more terms, we can get as close to perfect an approximation as we like, until (x) hits (\pm 1), then the approximation stops converging. The series just won’t work beyond that. But if Taylor expansion doesn’t always work for the whole real number line, can we take it for granted that the series for (e^{x}), (\sin x) and (\cos x) work for complex numbers?
|
||||
|
||||
To get some more insight, we can colour in the places in the complex plane where the Taylor series for (\frac{1}{1 + x^{2}}) converges. It turns out we get a perfect circle of radius 1 centred at 0:
|
||||
|
||||
![][11]
|
||||
|
||||
There are two special points on the plane: (i) and (- i). At these points, (\frac{1}{1 + x^{2}}) turns into a (\frac{1}{0}) singularity, and the series expansion simply can't work. It's as if the convergence region expands out from 0 until it hits these singularity points and gets stuck. The funny thing is, these singularities in the complex plane limit how far the Taylor series can work, even when we derive it using nothing but real analysis.
|
||||
|
||||
It turns out that (e^{x}), (\sin x) and (\cos x) don’t have any problematic points in the complex plane, and that’s why we can easily use Taylor series to explore them beyond real numbers.
|
||||
|
||||
This is yet another example of things making more sense when analysed with complex numbers, which only makes “real” numbers look like the odd ones out. Which raises another question: if [complex numbers are apparently fundamental to explaining the universe][12][2][13], why do we only experience real values? Obviously, the world would be a very different place if we could eat (i) slices of pizza, or if the flow of time had real and imaginary parts. But why the heck _not_?
|
||||
|
||||
### Provably true things can still be surprising
|
||||
|
||||
Of course, philosophy about the physical world aside, none of this is just luck. Maths is maths and there’s no alternative universe where things work differently. That’s because there are logical reasons why all this is true.
|
||||
|
||||
But I don’t think that makes it less special. Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic, and I don’t think it should lose all magic as soon as someone, somewhere is smart enough to figure out how to make it work. Likewise, I don’t think mathematical theory becomes less special just because someone figures out a proof. On the contrary, it’s thanks to people wondering about these miraculous patterns that we have the calculus and complex analysis needed to understand how it all works.
|
||||
|
||||
1. A less-stupid version uses squaring instead of naïve exponentiation: `return (1 + z / (1<<n)).recurrence!"a[n-1] * a[n-1]".take(n+1).reduce!"b"` [↩︎][14]
|
||||
|
||||
2. A classical physics example is the shape of a chain hanging from two poles (i.e., [a catenary][15]): it’s the shape of (\cos ix) [↩︎][16]
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://theartofmachinery.com/2019/09/20/euler_formula_miracle.html
|
||||
|
||||
作者:[Simon Arneaud][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://theartofmachinery.com
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://blog.plover.com/math/exponential.html
|
||||
[2]: https://news.ycombinator.com/item?id=20954275
|
||||
[3]: https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html
|
||||
[4]: https://en.wikipedia.org/wiki/Turtles_all_the_way_down
|
||||
[5]: https://en.wikipedia.org/wiki/Taylor_series#Trigonometric_functions
|
||||
[6]: tmp.03tyq5Ssty#fn:1
|
||||
[7]: https://dlang.org/phobos/std_complex.html
|
||||
[8]: https://www.coopertoons.com/education/diagonal/diagonalargument.html
|
||||
[9]: https://theartofmachinery.com/images/euler_formula_miracle/taylorbellcurve.svg
|
||||
[10]: https://theartofmachinery.com/images/euler_formula_miracle/taylorfailure.svg
|
||||
[11]: https://theartofmachinery.com/images/euler_formula_miracle/taylorconvergence.svg
|
||||
[12]: https://www.scottaaronson.com/blog/?p=4021
|
||||
[13]: tmp.03tyq5Ssty#fn:2
|
||||
[14]: tmp.03tyq5Ssty#fnref:1
|
||||
[15]: http://mathworld.wolfram.com/Catenary.html
|
||||
[16]: tmp.03tyq5Ssty#fnref:2
|
@ -0,0 +1,341 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Hone advanced Bash skills by building Minesweeper)
|
||||
[#]: via: (https://opensource.com/article/19/9/advanced-bash-building-minesweeper)
|
||||
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakarhttps://opensource.com/users/dnearyhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/marcobravo)
|
||||
|
||||
Hone advanced Bash skills by building Minesweeper
|
||||
======
|
||||
The nostalgia of classic games can be a great source for mastering
|
||||
programming. Deep dive into Bash with Minesweeper.
|
||||
![bash logo on green background][1]
|
||||
|
||||
I am no expert on teaching programming, but when I want to get better at something, I try to find a way to have fun with it. For example, when I wanted to get better at shell scripting, I decided to practice by programming a version of the [Minesweeper][2] game in Bash.
|
||||
|
||||
If you are an experienced Bash programmer and want to hone your skills while having fun, follow along to write your own version of Minesweeper in the terminal. The complete source code is found in this [GitHub repository][3].
|
||||
|
||||
### Getting ready
|
||||
|
||||
Before I started writing any code, I outlined the ingredients I needed to create my game:
|
||||
|
||||
1. Print a minefield
|
||||
2. Create the gameplay logic
|
||||
3. Create logic to determine the available minefield
|
||||
4. Keep count of available and discovered (extracted) mines
|
||||
5. Create the endgame logic
|
||||
|
||||
|
||||
|
||||
### Print a minefield
|
||||
|
||||
In Minesweeper, the game world is a 2D array (columns and rows) of concealed cells. Each cell may or may not contain an explosive mine. The player's objective is to reveal cells that contain no mine, and never to reveal a mine. The Bash version of the game uses a 10x10 matrix, implemented using simple Bash arrays.
|
||||
|
||||
First, I assign some random variables. These are the locations that mines could be placed on the board. By limiting the number of locations, it will be easy to build on top of this. The logic could be better, but I wanted to keep the game looking simple and a bit immature. (I wrote this for fun, but I would happily welcome your contributions to make it look better.)
|
||||
|
||||
The variables below are defaults, declared for random field placement. The variables a through g will be used later to calculate our extractable mines:
|
||||
|
||||
|
||||
```
|
||||
# variables
|
||||
score=0 # will be used to store the score of the game
|
||||
# variables below will be used to randomly get the extract-able cells/fields from our mine.
|
||||
a="1 10 -10 -1"
|
||||
b="-1 0 1"
|
||||
c="0 1"
|
||||
d="-1 0 1 -2 -3"
|
||||
e="1 2 20 21 10 0 -10 -20 -23 -2 -1"
|
||||
f="1 2 3 35 30 20 22 10 0 -10 -20 -25 -30 -35 -3 -2 -1"
|
||||
g="1 4 6 9 10 15 20 25 30 -30 -24 -11 -10 -9 -8 -7"
|
||||
#
|
||||
# declarations
|
||||
declare -a room # declare an array room, it will represent each cell/field of our mine.
|
||||
```
|
||||
|
||||
Next, I print my board with columns (0-9) and rows (a-j), forming a 10x10 matrix to serve as the minefield for the game. (M[10][10] is a 100-value array with indexes 0-99.) If you want to know more about Bash arrays, read [_You don't know Bash: An introduction to Bash arrays_][4].
|
||||
|
||||
Let's wrap this in a function called **plough**. We print the header first: two blank lines, the column headings, and a line to outline the top of the playing field:
|
||||
|
||||
|
||||
```
|
||||
printf '\n\n'
|
||||
printf '%s' " a b c d e f g h i j"
|
||||
printf '\n %s\n' "-----------------------------------------"
|
||||
```
|
||||
|
||||
Next, I establish a counter variable, called **r**, to keep track of how many horizontal rows have been populated. Note that we will use the same counter variable **r** as our array index later in the game code. In a [Bash **for** loop][5], using the **seq** command to increment from 0 to 9, I print a digit (**%d**) to represent the row number (**$row**, which is defined by **seq**):
|
||||
|
||||
|
||||
```
|
||||
r=0 # our counter
|
||||
for row in $(seq 0 9); do
|
||||
printf '%d ' "$row" # print the row numbers from 0-9
|
||||
```
|
||||
|
||||
Before we move ahead, let's check what we have made so far. We first printed the sequence **[a-j]** horizontally, and then we printed the row numbers in the range **[0-9]**. These two ranges will act as the user's input coordinates for locating the mine to extract.
|
||||
|
||||
Next, within each row there is a column intersection, so it's time to open a new **for** loop. This one manages each column, so it essentially generates each cell in the playing field. I have added some helper functions whose full definitions you can see in the source code. For each cell, we need something to make the field look like a mine, so we initialize the empty ones with a dot (.), using a custom function called [**is_null_field**][6]. We also need an array variable to store the value for each cell; we will use the predefined global array variable **[room][7]** along with an index [variable **r**][8]. As **r** increments, we iterate over the cells, dropping mines along the way.
|
||||
|
||||
|
||||
```
|
||||
for col in $(seq 0 9); do
|
||||
((r+=1)) # increment the counter as we move forward in column sequence
|
||||
is_null_field $r # assume a function which will check, if the field is empty, if so, initialize it with a dot(.)
|
||||
printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}" # finally print the separator, note that, the first value of ${room[$r]} will be '.', as it is just initialized.
|
||||
#close col loop
|
||||
done
|
||||
```
|
||||
|
||||
Finally, I keep the board well-defined by enclosing the bottom of each row with a line, and then close the row loop:
|
||||
|
||||
|
||||
```
|
||||
printf '%s\n' "|" # print the line end separator
|
||||
printf ' %s\n' "-----------------------------------------"
|
||||
# close row for loop
|
||||
done
|
||||
printf '\n\n'
|
||||
```
|
||||
|
||||
The full **plough** function looks like:
|
||||
|
||||
|
||||
```
|
||||
plough()
|
||||
{
|
||||
r=0
|
||||
printf '\n\n'
|
||||
printf '%s' " a b c d e f g h i j"
|
||||
printf '\n %s\n' "-----------------------------------------"
|
||||
for row in $(seq 0 9); do
|
||||
printf '%d ' "$row"
|
||||
for col in $(seq 0 9); do
|
||||
((r+=1))
|
||||
is_null_field $r
|
||||
printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}"
|
||||
done
|
||||
printf '%s\n' "|"
|
||||
printf ' %s\n' "-----------------------------------------"
|
||||
done
|
||||
printf '\n\n'
|
||||
}
|
||||
```
|
||||
|
||||
It took me some time to decide that I needed **is_null_field**, so let's take a closer look at what it does. We need a dependable state from the beginning of the game. That choice is arbitrary; it could have been a number or any character. I decided to assume everything was declared as a dot (.) because I believe it makes the gameboard look pretty. Here's what that looks like:
|
||||
|
||||
|
||||
```
|
||||
is_null_field()
|
||||
{
|
||||
local e=$1 # we used index 'r' for array room already, let's call it 'e'
|
||||
if [[ -z "${room[$e]}" ]];then
|
||||
room[$e]="." # this is where we put the dot(.) to initialize the cell/minefield, using the local index e passed in
|
||||
fi
|
||||
}
|
||||
```
|
||||
|
||||
Now that I have all the cells in our minefield initialized, I get a count of all available mines by declaring, and later calling, the simple function shown below:
|
||||
|
||||
|
||||
```
|
||||
get_free_fields()
|
||||
{
|
||||
free_fields=0 # initialize the variable
|
||||
for n in $(seq 1 ${#room[@]}); do
|
||||
if [[ "${room[$n]}" = "." ]]; then # check if the cells has initial value dot(.), then count it as a free field.
|
||||
((free_fields+=1))
|
||||
fi
|
||||
done
|
||||
}
|
||||
```
|
||||
|
||||
Here is the printed minefield, where **[a-j]** are columns and **[0-9]** are rows.
|
||||
|
||||
![Minefield][9]
|
||||
|
||||
### Create the logic to drive the player
|
||||
|
||||
The player logic reads an option from [stdin][10] as a coordinate to the mines and extracts the exact field on the minefield. It uses Bash's [parameter expansion][11] to extract the column and row inputs, then feeds the column letter to a switch that maps it to its equivalent integer position on the board; to understand this, see the values being assigned to the variable **o** in the case statement below. For instance, a player might enter **c3**, which Bash splits into two characters: **c** and **3**. For simplicity, I'm skipping over how invalid entry is handled.
|
||||
|
||||
|
||||
```
|
||||
colm=${opt:0:1} # get the first char, the alphabet
|
||||
ro=${opt:1:1} # get the second char, the digit
|
||||
case $colm in
|
||||
a ) o=1;; # finally, convert the alphabet to its equivalent integer notation.
|
||||
b ) o=2;;
|
||||
c ) o=3;;
|
||||
d ) o=4;;
|
||||
e ) o=5;;
|
||||
f ) o=6;;
|
||||
g ) o=7;;
|
||||
h ) o=8;;
|
||||
i ) o=9;;
|
||||
j ) o=10;;
|
||||
esac
|
||||
```
|
||||
|
||||
Then it calculates the exact index and assigns the index of the input coordinates to that field.
|
||||
|
||||
There is also a lot of use of the **shuf** command here. **shuf** is a [Linux utility][12] designed to provide a random permutation of its input, where the **-i** option denotes indexes or a range to shuffle and **-n** denotes the maximum number of values to return. Double parentheses allow for [mathematical evaluation][13] in Bash, and we will use them heavily here.
|
||||
|
||||
Let's assume our previous example received **c3** via stdin. Then **ro=3**, and the case statement above converted **c** to its equivalent integer, giving **o=3**. We put these into our formula to calculate the final index **i**:
|
||||
|
||||
|
||||
```
|
||||
i=$(((ro*10)+o)) # Follow BODMAS rule, to calculate final index.
|
||||
is_free_field $i $(shuf -i 0-5 -n 1) # call a custom function that checks if the final index value points to an empty/free cell/field.
|
||||
```
|
||||
|
||||
Walking through this math to understand how the final index '**i**' is calculated:
|
||||
|
||||
|
||||
```
|
||||
i=$(((ro*10)+o))
|
||||
i=$(((3*10)+3))=$((30+3))=33
|
||||
```
|
||||
|
||||
The final index value is 33. On our board, printed above, it points to the 33rd cell, which is in the 3rd row (starting from 0, otherwise the 4th) and the 3rd column (c).
|
||||
|
||||
### Create the logic to determine the available minefield
|
||||
|
||||
To extract a mine, after the coordinates are decoded and the index is found, the program checks whether that field is available. If it's not, the program displays a warning, and the player chooses another coordinate.
|
||||
|
||||
In this code, a cell is available if it contains a dot (**.**) character. Assuming it's available, the value in the cell is reset and the score is updated. If a cell is unavailable because it does not contain a dot, then a variable **not_allowed** is set. For brevity, I leave it to you to look at the source code of the game for the contents of [the warning statement][14] in the game logic.
|
||||
|
||||
|
||||
```
|
||||
is_free_field()
|
||||
{
|
||||
local f=$1
|
||||
local val=$2
|
||||
not_allowed=0
|
||||
if [[ "${room[$f]}" = "." ]]; then
|
||||
room[$f]=$val
|
||||
score=$((score+val))
|
||||
else
|
||||
not_allowed=1
|
||||
fi
|
||||
}
|
||||
```
|
||||
|
||||
![Extracting mines][15]
|
||||
|
||||
If the coordinate entered is available, the mine is discovered, as shown below. When **h6** is provided as input, some random values are populated on our minefield, and these values are added to the user's score after the mines are extracted.
|
||||
|
||||
![Extracting mines][16]
|
||||
|
||||
Now remember the variables we declared at the start, **a** through **g**; I will use them here to extract random mines, assigning one of their names to the variable **m** and reading its values with Bash indirection. So, depending on the input coordinates, the program picks a random set of additional numbers (**m**) to calculate the additional fields to be populated (as shown above), adding them to the original input coordinates, represented here by **i** (calculated above).
|
||||
|
||||
Please note that the character **X** in the code snippet below is our sole GAME-OVER trigger. We add it to the shuffle list so that it appears at random: thanks to the **shuf** command, it can turn up after any number of chances, or may never appear at all for a lucky winning user.
|
||||
|
||||
|
||||
```
|
||||
m=$(shuf -e a b c d e f g X -n 1) # add an extra char X to the shuffle, when m=X, its GAMEOVER
|
||||
if [[ "$m" != "X" ]]; then # X will be our explosive mine(GAME-OVER) trigger
|
||||
for limit in ${!m}; do # ${!m} expands to the value of the variable whose name is stored in m
|
||||
field=$(shuf -i 0-5 -n 1) # again get a random number and
|
||||
index=$((i+limit)) # add values of m to our index and calculate a new index till m reaches its last element.
|
||||
is_free_field $index $field
|
||||
done
|
||||
```
|
||||
|
||||
I want all revealed cells to be contiguous to the cell selected by the player.
|
||||
|
||||
![Extracting mines][17]
|
||||
|
||||
### Keep a count of available and extracted mines
|
||||
|
||||
The program needs to keep track of available cells in the minefield; otherwise, it keeps asking the player for input even after all the cells have been revealed. To implement this, I create a variable called **free_fields**, initially set to 0, and loop over the remaining available cells/fields in our minefield. If a cell contains a dot (**.**), the count of **free_fields** is incremented.
|
||||
|
||||
|
||||
```
|
||||
get_free_fields()
|
||||
{
|
||||
free_fields=0
|
||||
for n in $(seq 1 ${#room[@]}); do
|
||||
if [[ "${room[$n]}" = "." ]]; then
|
||||
((free_fields+=1))
|
||||
fi
|
||||
done
|
||||
}
|
||||
```
|
||||
|
||||
Wait, what if **free_fields=0**? That means our user has extracted all the mines. Please feel free to look at [the exact code][18] to understand this better.
|
||||
|
||||
|
||||
```
|
||||
if [[ $free_fields -eq 0 ]]; then # well that means you extracted all the mines.
|
||||
printf '\n\n\t%s: %s %d\n\n' "You Win" "you scored" "$score"
|
||||
exit 0
|
||||
fi
|
||||
```
|
||||
|
||||
### Create the logic for Gameover
|
||||
|
||||
For the Gameover situation, we print to the middle of the terminal using some [nifty logic][19]; I leave it to the reader to explore how it works.
|
||||
|
||||
|
||||
```
|
||||
if [[ "$m" = "X" ]]; then
|
||||
g=0 # to use it in parameter expansion
|
||||
room[$i]=X # override the index and print X
|
||||
for j in {42..49}; do # in the middle of the minefields,
|
||||
out="gameover"
|
||||
k=${out:$g:1} # print one alphabet in each cell
|
||||
room[$j]=${k^^}
|
||||
((g+=1))
|
||||
done
|
||||
fi
|
||||
```
|
||||
|
||||
Finally, we can print the two lines which are most awaited.
|
||||
|
||||
|
||||
```
|
||||
if [[ "$m" = "X" ]]; then
|
||||
printf '\n\n\t%s: %s %d\n' "GAMEOVER" "you scored" "$score"
|
||||
printf '\n\n\t%s\n\n' "You were just $free_fields mines away."
|
||||
exit 0
|
||||
fi
|
||||
```
|
||||
|
||||
![Minesweeper Gameover][20]
|
||||
|
||||
That's it, folks! If you want to know more, access the source code for this Minesweeper game and other games in Bash from my [GitHub repo][3]. I hope it gives you some inspiration to learn more Bash and to have fun while doing so.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/9/advanced-bash-building-minesweeper
|
||||
|
||||
作者:[Abhishek Tamrakar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/tamrakarhttps://opensource.com/users/dnearyhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/marcobravo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
|
||||
[2]: https://en.wikipedia.org/wiki/Minesweeper_(video_game)
|
||||
[3]: https://github.com/abhiTamrakar/playground/tree/master/bash_games
|
||||
[4]: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays
|
||||
[5]: https://opensource.com/article/19/6/how-write-loop-bash
|
||||
[6]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L114-L120
|
||||
[7]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L41
|
||||
[8]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L74
|
||||
[9]: https://opensource.com/sites/default/files/uploads/minefield.png (Minefield)
|
||||
[10]: https://en.wikipedia.org/wiki/Standard_streams#Standard_input_(stdin)
|
||||
[11]: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html
|
||||
[12]: https://linux.die.net/man/1/shuf
|
||||
[13]: https://www.tldp.org/LDP/abs/html/dblparens.html
|
||||
[14]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L143-L177
|
||||
[15]: https://opensource.com/sites/default/files/uploads/extractmines.png (Extracting mines)
|
||||
[16]: https://opensource.com/sites/default/files/uploads/extractmines2.png (Extracting mines)
|
||||
[17]: https://opensource.com/sites/default/files/uploads/extractmines3.png (Extracting mines)
|
||||
[18]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L91
|
||||
[19]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L131-L141
|
||||
[20]: https://opensource.com/sites/default/files/uploads/gameover.png (Minecraft Gameover)
|
447
sources/tech/20190920 How to compare strings in Java.md
Normal file
447
sources/tech/20190920 How to compare strings in Java.md
Normal file
@ -0,0 +1,447 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to compare strings in Java)
|
||||
[#]: via: (https://opensource.com/article/19/9/compare-strings-java)
|
||||
[#]: author: (Girish Managoli https://opensource.com/users/gammayhttps://opensource.com/users/sethhttps://opensource.com/users/clhermansenhttps://opensource.com/users/clhermansen)
|
||||
|
||||
How to compare strings in Java
|
||||
======
|
||||
There are six ways to compare strings in Java.
|
||||
![Javascript code close-up with neon graphic overlay][1]
|
||||
|
||||
String comparison is a fundamental operation in programming and is often covered in interviews. Strings are sequences of characters that are _immutable_, meaning they cannot be changed once created.
|
||||
|
||||
Java has a number of methods for comparing strings; this article will teach you the primary operation of how to compare strings in Java.
|
||||
|
||||
There are six options:
|
||||
|
||||
1. The == operator
|
||||
2. String equals
|
||||
3. String equalsIgnoreCase
|
||||
4. String compareTo
|
||||
5. String compareToIgnoreCase
|
||||
6. Objects equals
|
||||
|
||||
|
||||
|
||||
### The == operator
|
||||
|
||||
**==** is an operator that returns **true** if the two operands refer to the same object in memory and **false** if they don't. If two strings compared with **==** refer to the same string in memory, the return value is **true**; if not, it is **false**.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string2 = "YOURTEXT";
|
||||
|
||||
[System][3].out.println("Output: " + (string1 == string2));
|
||||
|
||||
Output: false
|
||||
```
|
||||
|
||||
The return value of **==** above is **false**, as "MYTEXT" and "YOURTEXT" refer to different memory.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string6 = "MYTEXT";
|
||||
|
||||
[System][3].out.println("Output: " + (string1 == string6));
|
||||
|
||||
Output: true
|
||||
```
|
||||
|
||||
In this case, the return value of **==** is **true**, as the compiler internally creates one memory location for both "MYTEXT" memories, and both variables refer to the same memory location.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string7 = string1;
|
||||
|
||||
[System][3].out.println("Output: " + (string1 == string7));
|
||||
|
||||
Output: true
|
||||
```
|
||||
|
||||
If you guessed right, you know string7 is initialized with the same memory location as string1 and therefore **==** is true.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string4 = new [String][2]("MYTEXT");
|
||||
|
||||
[System][3].out.println("Output: " + (string1 == string4));
|
||||
|
||||
Output: false
|
||||
```
|
||||
|
||||
In this case, the **new** operator creates a separate object in memory at runtime, so **==** is **false** even though the value is the same for string4 and string1.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string5 = new [String][2](string1);
|
||||
|
||||
[System][3].out.println("Output: " + (string1 == string5));
|
||||
|
||||
Output: false
|
||||
```
|
||||
|
||||
Here, string5 is a new string object initialized with string1; hence, **string1 == string5** is not true.
|
||||
|
||||
### String equals
|
||||
|
||||
The string class has a **String equals** method to compare two strings. String comparison with **equals** is case-sensitive. According to the [docs][4]:
|
||||
|
||||
|
||||
```
|
||||
/**
|
||||
* Compares this string to the specified object. The result is {@code
|
||||
* true} if and only if the argument is not {@code null} and is a {@code
|
||||
* String} object that represents the same sequence of characters as this
|
||||
* object.
|
||||
*
|
||||
* @param anObject
|
||||
* The object to compare this {@code String} against
|
||||
*
|
||||
* @return {@code true} if the given object represents a {@code String}
|
||||
* equivalent to this string, {@code false} otherwise
|
||||
*
|
||||
* @see #compareTo(String)
|
||||
* @see #equalsIgnoreCase(String)
|
||||
*/
|
||||
public boolean equals(Object anObject) { ... }
|
||||
```
|
||||
|
||||
Let's see a few examples:
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string2 = "YOURTEXT";
|
||||
|
||||
[System][3].out.println("Output: " + string1.equals(string2));
|
||||
|
||||
Output: false
|
||||
```
|
||||
|
||||
If the strings are not the same, the output of the **equals** method is obviously **false**.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string3 = "mytext";
|
||||
|
||||
[System][3].out.println("Output: " + string1.equals(string3));
|
||||
|
||||
Output: false
|
||||
```
|
||||
|
||||
These strings are the same in value but differ in case; hence, the output is **false**.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string4 = new [String][2]("MYTEXT");
|
||||
|
||||
[System][3].out.println("Output: " + string1.equals(string4));
|
||||
|
||||
Output: true
|
||||
|
||||
```

```
|
||||
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string5 = new [String][2](string1);
|
||||
|
||||
[System][3].out.println("Output: " + string1.equals(string5));
|
||||
|
||||
Output: true
|
||||
```
|
||||
|
||||
The examples in both these cases are **true**, as the two values are the same; unlike with **==**, both of these comparisons return **true** even though the objects occupy different memory locations.
|
||||
|
||||
The string object on which **equals** is called should obviously be a valid string object and non-null.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string8 = null;
|
||||
|
||||
[System][3].out.println("Output: " + string8.equals(string1));
|
||||
|
||||
[Exception][5] in thread _____ java.lang.[NullPointerException][6]
|
||||
```
|
||||
|
||||
The above is evidently not good code.
|
||||
|
||||
|
||||
```
|
||||
[System][3].out.println("Output: " + string1.equals(string8));
|
||||
|
||||
Output: false
|
||||
```
|
||||
|
||||
This is alright.
|
||||
|
||||
### String equalsIgnoreCase
|
||||
|
||||
The behavior of **equalsIgnoreCase** is identical to **equals** with one difference—the comparison is not case-sensitive. The [docs][4] say:
|
||||
|
||||
|
||||
```
|
||||
/**
|
||||
* Compares this {@code String} to another {@code String}, ignoring case
|
||||
* considerations. Two strings are considered equal ignoring case if they
|
||||
* are of the same length and corresponding characters in the two strings
|
||||
* are equal ignoring case.
|
||||
*
|
||||
* <p> Two characters {@code c1} and {@code c2} are considered the same
|
||||
* ignoring case if at least one of the following is true:
|
||||
* <ul>
|
||||
* <li> The two characters are the same (as compared by the
|
||||
* {@code ==} operator)
|
||||
* <li> Applying the method {@link
|
||||
* java.lang.Character#toUpperCase(char)} to each character
|
||||
* produces the same result
|
||||
* <li> Applying the method {@link
|
||||
* java.lang.Character#toLowerCase(char)} to each character
|
||||
* produces the same result
|
||||
* </ul>
|
||||
*
|
||||
* @param anotherString
|
||||
* The {@code String} to compare this {@code String} against
|
||||
*
|
||||
* @return {@code true} if the argument is not {@code null} and it
|
||||
* represents an equivalent {@code String} ignoring case; {@code
|
||||
* false} otherwise
|
||||
*
|
||||
* @see #equals(Object)
|
||||
*/
|
||||
public boolean equalsIgnoreCase(String anotherString) { ... }
|
||||
```
|
||||
|
||||
The second example under **equals** (above) is the only one that produces a different result with **equalsIgnoreCase**.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string3 = "mytext";
|
||||
|
||||
[System][3].out.println("Output: " + string1.equalsIgnoreCase(string3));
|
||||
|
||||
Output: true
|
||||
```
|
||||
|
||||
This returns **true** because the comparison is case-independent. All other examples under **equals** remain the same as they are for **equalsIgnoreCase**.
|
||||
|
||||
### String compareTo
|
||||
|
||||
The **compareTo** method compares two strings lexicographically (i.e., pertaining to alphabetical order) and case-sensitively and returns the lexicographical difference in the two strings. The [docs][4] describe lexicographical order computation as:
|
||||
|
||||
|
||||
```
|
||||
/**
|
||||
* Compares two strings lexicographically.
|
||||
* The comparison is based on the Unicode value of each character in
|
||||
* the strings. The character sequence represented by this
|
||||
* {@code String} object is compared lexicographically to the
|
||||
* character sequence represented by the argument string. The result is
|
||||
* a negative integer if this {@code String} object
|
||||
* lexicographically precedes the argument string. The result is a
|
||||
* positive integer if this {@code String} object lexicographically
|
||||
* follows the argument string. The result is zero if the strings
|
||||
* are equal; {@code compareTo} returns {@code 0} exactly when
|
||||
* the {@link #equals(Object)} method would return {@code true}.
|
||||
* <p>
|
||||
* This is the definition of lexicographic ordering. If two strings are
|
||||
* different, then either they have different characters at some index
|
||||
* that is a valid index for both strings, or their lengths are different,
|
||||
* or both. If they have different characters at one or more index
|
||||
* positions, let <i>k</i> be the smallest such index; then the string
|
||||
* whose character at position <i>k</i> has the smaller value, as
|
||||
* determined by using the &lt; operator, lexicographically precedes the
|
||||
* other string. In this case, {@code compareTo} returns the
|
||||
* difference of the two character values at position {@code k} in
|
||||
* the two string -- that is, the value:
|
||||
* <blockquote><pre>
|
||||
* this.charAt(k)-anotherString.charAt(k)
|
||||
* </pre></blockquote>
|
||||
* If there is no index position at which they differ, then the shorter
|
||||
* string lexicographically precedes the longer string. In this case,
|
||||
* {@code compareTo} returns the difference of the lengths of the
|
||||
* strings -- that is, the value:
|
||||
* <blockquote><pre>
|
||||
* this.length()-anotherString.length()
|
||||
* </pre></blockquote>
|
||||
*
|
||||
* @param anotherString the {@code String} to be compared.
|
||||
* @return the value {@code 0} if the argument string is equal to
|
||||
* this string; a value less than {@code 0} if this string
|
||||
* is lexicographically less than the string argument; and a
|
||||
* value greater than {@code 0} if this string is
|
||||
* lexicographically greater than the string argument.
|
||||
*/
|
||||
public int compareTo(String anotherString) { ... }
|
||||
```
|
||||
|
||||
Let's look at some examples.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "A";
|
||||
[String][2] string2 = "B";
|
||||
|
||||
[System][3].out.println("Output: " + string1.compareTo(string2));
|
||||
|
||||
Output: -1
|
||||
[System][3].out.println("Output: " + string2.compareTo(string1));
|
||||
|
||||
Output: 1
|
||||
|
||||
```

```
|
||||
|
||||
[String][2] string1 = "A";
|
||||
[String][2] string3 = "a";
|
||||
|
||||
[System][3].out.println("Output: " + string1.compareTo(string3));
|
||||
|
||||
Output: -32
|
||||
|
||||
[System][3].out.println("Output: " + string3.compareTo(string1));
|
||||
|
||||
Output: 32
|
||||
|
||||
```

```
|
||||
|
||||
[String][2] string1 = "A";
|
||||
[String][2] string6 = "A";
|
||||
|
||||
[System][3].out.println("Output: " + string1.compareTo(string6));
|
||||
|
||||
Output: 0
|
||||
|
||||
```

```
|
||||
|
||||
String string1 = "A";
|
||||
String string8 = null;
|
||||
|
||||
System.out.println("Output: " + string8.compareTo(string1));
|
||||
|
||||
Exception in thread ______ java.lang.NullPointerException
|
||||
at java.lang.String.compareTo(String.java:1155)
|
||||
|
||||
String string1 = "A";
|
||||
String string10 = "";
|
||||
|
||||
System.out.println("Output: " + string1.compareTo(string10));
|
||||
|
||||
Output: 1
|
||||
```
|
||||
|
||||
### String compareToIgnoreCase
|
||||
|
||||
The behavior of **compareToIgnoreCase** is identical to **compareTo** with one difference: the strings are compared without case consideration.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "A";
|
||||
[String][2] string3 = "a";
|
||||
|
||||
[System][3].out.println("Output: " + string1.compareToIgnoreCase(string3));
|
||||
|
||||
Output: 0
|
||||
```
|
||||
|
||||
### Objects equals
|
||||
|
||||
The **Objects equals** method invokes the overridden **String equals** method; its behavior is the same as in the **String equals** example above.
|
||||
|
||||
|
||||
```
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string2 = "YOURTEXT";
|
||||
|
||||
[System][3].out.println("Output: " + Objects.equals(string1, string2));
|
||||
|
||||
Output: false
|
||||
|
||||
```

```
|
||||
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string3 = "mytext";
|
||||
|
||||
[System][3].out.println("Output: " + Objects.equals(string1, string3));
|
||||
|
||||
Output: false
|
||||
|
||||
```

```
|
||||
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string6 = "MYTEXT";
|
||||
|
||||
[System][3].out.println("Output: " + Objects.equals(string1, string6));
|
||||
|
||||
Output: true
|
||||
|
||||
```

```
|
||||
|
||||
[String][2] string1 = "MYTEXT";
|
||||
[String][2] string8 = null;
|
||||
|
||||
[System][3].out.println("Output: " + Objects.equals(string1, string8));
|
||||
|
||||
Output: false
|
||||
|
||||
[System][3].out.println("Output: " + Objects.equals(string8, string1));
|
||||
|
||||
Output: false
|
||||
|
||||
```

```
|
||||
|
||||
[String][2] string8 = null;
|
||||
[String][2] string9 = null;
|
||||
|
||||
[System][3].out.println("Output: " + Objects.equals(string8, string9));
|
||||
|
||||
Output: true
|
||||
```
|
||||
|
||||
The advantage here is that the **Objects equals** method checks for null values (unlike **String equals**). The implementation of **Objects equals** is:
|
||||
|
||||
|
||||
```
|
||||
public static boolean equals([Object][7] a, [Object][7] b) {
|
||||
return (a == b) || (a != null && a.equals(b));
|
||||
}
|
||||
```
|
||||
|
||||
### Which method to use?
|
||||
|
||||
There are many methods to compare two strings. Which one should you use? As a common practice, use **String equals** for case-sensitive strings and **String equalsIgnoreCase** for case-insensitive comparisons. However, one caveat: take care of NPE (**NullPointerException**) if one or both strings are null.
|
||||
|
||||
The source code is available on [GitLab][8] and [GitHub][9].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/9/compare-strings-java
|
||||
|
||||
作者:[Girish Managoli][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/gammayhttps://opensource.com/users/sethhttps://opensource.com/users/clhermansenhttps://opensource.com/users/clhermansen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_javascript.jpg?itok=60evKmGl (Javascript code close-up with neon graphic overlay)
|
||||
[2]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
|
||||
[3]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
|
||||
[4]: http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/lang/String.java
|
||||
[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
|
||||
[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+nullpointerexception
|
||||
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+object
|
||||
[8]: https://gitlab.com/gammay/stringcomparison
|
||||
[9]: https://github.com/gammay/stringcompare
|
@ -0,0 +1,120 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Managing network interfaces and FirewallD in Cockpit)
|
||||
[#]: via: (https://fedoramagazine.org/managing-network-interfaces-and-firewalld-in-cockpit/)
|
||||
[#]: author: (Shaun Assam https://fedoramagazine.org/author/sassam/)
|
||||
|
||||
Managing network interfaces and FirewallD in Cockpit
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
In the [last article][2], we saw how Cockpit can manage storage devices. This article will focus on the networking functionalities within the UI. We’ll see how to manage the interfaces attached to the system in Cockpit. We’ll also look at the firewall and demonstrate how to assign a zone to an interface, and allow/deny services and ports.
|
||||
|
||||
To access these controls, verify the _cockpit-networkmanager_ and _cockpit-firewalld_ packages are installed.
|
||||
|
||||
To start, log into the Cockpit UI and select the **Networking** menu option. Consistent with the rest of the UI design, we see performance graphs at the top and a summary of the logs at the bottom of the page. Between them are the sections for managing the firewall and the interface(s).
|
||||
|
||||
![][3]
|
||||
|
||||
### Firewall
|
||||
|
||||
Cockpit’s firewall configuration page works with FirewallD and allows admins to quickly configure these settings. The page has options for assigning zones to specific interfaces, as well as a list of services configured to those zones.
|
||||
|
||||
#### Adding a zone
|
||||
|
||||
Let's start by assigning a zone to an available interface. First, click the **Add Zone** button. From here you can select a pre-configured or custom zone. Selecting one of the zones displays a brief description of that zone, as well as the services or ports allowed (opened) in that zone. Select the interface you want to assign the zone to. There is also the option to apply the rules to the **Entire Subnet**, or you can specify a **Range** of IP addresses. In the example below, we add the Internal zone to an available network card. The IP range can also be configured so the rule is only applied to the specified addresses.
|
||||
|
||||
![][4]
|
||||
|
||||
#### Adding and removing services/ports
|
||||
|
||||
To allow network access to services, or open ports, click the **Add Services** button. From here you can search (or filter) for a service, or manually enter the port(s) you would like to open. Selecting the **Custom Ports** option provides options to enter the port number or alias into the TCP and/or UDP fields. You can also provide an optional name to label the rule. In the example below, the Cockpit service/socket is added to the Internal zone. Once completed, click the **Add Services**, or **Add Ports**, button. Likewise, to remove the service click the red trashcan to the right, select the zone(s), and click **Remove service**.
|
||||
|
||||
For more information about using Cockpit to configure your system’s firewall, visit the [Cockpit project’s Github page][5].
|
||||
|
||||
![][6]
|
||||
|
||||
### Interfaces
|
||||
|
||||
The interfaces section displays both physical and virtual/logical NICs assigned to the system. From the main screen we see the name of the interface, the IP address, and activity stats of the NIC. Selecting an interface will display IP related information and options to manually configure them. You can also choose to have the network card inactive after a reboot by toggling the **Connect automatically** option. To enable, or disable, the network interface, click the toggle switch in the top right corner of the section.
|
||||
|
||||
![][7]
|
||||
|
||||
#### Bonding
|
||||
|
||||
Bonding network interfaces can help increase available bandwidth. It can also serve as a redundancy plan in the event that one of the NICs fails.
|
||||
|
||||
To start, click the **Add Bond** button located in the header of the Interfaces section. In the Bond Settings overlay, enter a name and select the interfaces you wish to bond in the list below. Next, select the **MAC Address** you would like to assign to the bond. Now select the **Mode**, or purpose, of the bond: Round Robin, Active Backup, Broadcast, &c. (the demo below shows a complete list of modes.)
|
||||
|
||||
Continue the configuration by selecting the **Primary** NIC, and a **Link Monitoring** option. You can also tweak the **Monitoring Interval**, and **Link Up Delay** and **Link Down Delay** options. To finish the configuration, click the **Apply** button. We’re taken back to the main screen, and the new bonded interface we just created is added to the list of interfaces.
|
||||
|
||||
From here we can configure the bond like any other interface. We can even delve deeper into the interface’s settings for the bond. As seen in the example below, selecting one of the interfaces in the bond’s settings page provides details pertaining to the interface link. There’s also an added option for changing the bond settings. To delete the bond, click the **Delete** button.
|
||||
|
||||
![][8]
|
||||
|
||||
#### Teaming
|
||||
|
||||
Teaming, like bonding, is another method used for link aggregation. For a comparison between bonding and teaming, refer to [this chart][9]. You can also find more information about teaming on the [Red Hat documentation site.][10]
|
||||
|
||||
As with creating a bond, click the **Add Team** button. The settings are similar in the sense you can give it a name, select the interfaces, link delay, and the mode or **Runner** as it’s referred to here. The options are similar to the ones available for bonding. By default the **Link Watch** option is set to Ethtool, but also has options for ARP Ping, and NSNA Ping.
|
||||
|
||||
Click the **Apply** button to complete the setup. It will also return you to the main networking screen. For further configuration, such as IP assignment and changing the runner, click the newly made team interface. As with bonding, you can click one of the interfaces in the link aggregation. Depending on the runner, you may have additional options for the Team Port. Click the **Delete** button from the screen to remove the team.
|
||||
|
||||
![][11]
|
||||
|
||||
#### Bridging
|
||||
|
||||
From the article, [Build a network bridge with Fedora][12]:
|
||||
|
||||
> “A bridge is a network connection that combines multiple network adapters.”
|
||||
|
||||
One excellent example for a bridge is combining the physical NIC with a virtual interface, like the one created and used for KVM virtualization. [Leif Madsen’s blog][13] has an excellent article on how to achieve this in the CLI. This can also be accomplished in Cockpit with just a few clicks. The example below will accomplish the first part of Leif’s blog using the web UI. We’ll bridge the enp9s0 interface with the virbr0 virtual interface.
|
||||
|
||||
Click the **Add Bridge** button to launch the settings box. Provide a name and select the interfaces you would like to bridge. To enable **Spanning Tree Protocol (STP)**, click the box to the right of the label. Click the **Apply** button to finalize the configuration.
|
||||
|
||||
As is consistent with teaming and bonding, selecting the bridge from the main screen will display the details of the interface. As seen in the example below, the physical device takes control and the virtual interface will adopt that device’s IP address.
|
||||
|
||||
Select the individual interface in the bridge’s detail screen for more options. And once again, click the **Delete** button to remove the bridge.
|
||||
|
||||
![][14]
|
||||
|
||||
#### Adding VLANs
|
||||
|
||||
Cockpit allows admins to create VLANs, or virtual networks, using any of the interfaces on the system. Click the **Add VLAN** button and select an interface. Furthermore, in the **Parent** drop-down list, assign the VLAN ID and, if you like, give it a new name. By default the name will be the same as the parent followed by a dot and the ID (for example, interface _enp11s0_ with VLAN ID _9_ will result in _enp11s0.9_). Click **Apply** to save the settings and return to the networking main screen. Click the VLAN interface for further configuration. As always, click the **Delete** button to remove the VLAN.
|
||||
|
||||
![][15]
|
||||
|
||||
As we can see, Cockpit can help admins with common network configurations when managing the system’s connectivity. In the next article, we’ll explore how Cockpit handles user management and peek into the 389 Directory Server add-on.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/managing-network-interfaces-and-firewalld-in-cockpit/
|
||||
|
||||
作者:[Shaun Assam][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/sassam/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-networking-816x345.jpg
|
||||
[2]: https://fedoramagazine.org/performing-storage-management-tasks-in-cockpit/
|
||||
[3]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-network-main-screen-1024x687.png
|
||||
[4]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-add-zone.gif
|
||||
[5]: https://github.com/cockpit-project/cockpit/wiki/Feature:-Firewall
|
||||
[6]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-add_remove-services.gif
|
||||
[7]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-interfaces-overview-1.gif
|
||||
[8]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-interface-bonding.gif
|
||||
[9]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-comparison_of_network_teaming_to_bonding
|
||||
[10]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_network_teaming
|
||||
[11]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-interface-teaming.gif
|
||||
[12]: https://fedoramagazine.org/build-network-bridge-fedora
|
||||
[13]: http://blog.leifmadsen.com/blog/2016/12/01/create-network-bridge-with-nmcli-for-libvirt/
|
||||
[14]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-interface-bridging.gif
|
||||
[15]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-interface-vlans.gif
|
@ -0,0 +1,142 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Remove (Delete) Symbolic Links in Linux)
|
||||
[#]: via: (https://www.2daygeek.com/remove-delete-symbolic-link-softlink-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How to Remove (Delete) Symbolic Links in Linux
|
||||
======
|
||||
|
||||
You may occasionally need to create or remove symbolic links on Linux.
|
||||
|
||||
If so, do you know how to do it?
|
||||
|
||||
Have you done it before? Do you know which commands to use?
|
||||
|
||||
If so, great. If not, don’t worry; this article walks you through it.
|
||||
|
||||
Symbolic links can be removed using either the rm or the unlink command.
|
||||
|
||||
### What is a Symbolic Link?
|
||||
|
||||
A symbolic link, also known as a symlink or soft link, is a special type of file that points to another file or directory in Linux.
|
||||
|
||||
It’s similar to a shortcut in Windows.
|
||||
|
||||
It can point to a file or a directory on the same or a different filesystem or partition.
|
||||
|
||||
Symbolic links are commonly used to link libraries, and to reference log files and folders on mounted NFS (Network File System) shares.
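For context, a symbolic link is created with the ln command’s -s option. A quick sketch follows; the paths are just examples, not values from this article.

```
# create a symbolic link named latest.log that points to an existing file
ln -s /var/log/app/2019-09-21.log latest.log

# create a symbolic link to a directory
ln -s /mnt/nfs/logs ~/remote-logs
```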
|
||||
|
||||
### What is the rm Command?
|
||||
|
||||
The **[rm command][1]** is used to remove files or directories. It can be very destructive, so be cautious every time you use it.
|
||||
|
||||
### What is the unlink Command?
|
||||
|
||||
The unlink command is used to remove a single specified file. It is already installed on most systems because it is part of GNU Coreutils.
|
||||
|
||||
### 1) How to Remove Symbolic Link Files Using the rm Command
|
||||
|
||||
The rm command is one of the most frequently used commands in Linux, and it can remove symbolic links as described below.
|
||||
|
||||
```
|
||||
# rm symlinkfile
|
||||
```
|
||||
|
||||
Always use the rm command with the “-i” switch so that you are prompted before each removal.
|
||||
|
||||
```
|
||||
# rm -i symlinkfile1
|
||||
rm: remove symbolic link ‘symlinkfile1’? y
|
||||
```
|
||||
|
||||
It also allows us to remove multiple symbolic links at once.
|
||||
|
||||
```
|
||||
# rm -i symlinkfile2 symlinkfile3
|
||||
|
||||
rm: remove symbolic link ‘symlinkfile2’? y
|
||||
rm: remove symbolic link ‘symlinkfile3’? y
|
||||
```
|
||||
|
||||
### 1a) How to Remove Symbolic Link Directories Using the rm Command
|
||||
|
||||
This is like removing a symbolic link file.
|
||||
|
||||
Use the command below to remove a symbolic link to a directory.
|
||||
|
||||
```
|
||||
# rm -i symlinkdir
|
||||
|
||||
rm: remove symbolic link ‘symlinkdir’? y
|
||||
```
|
||||
|
||||
Use the command below to remove multiple symbolic links to directories at once.
|
||||
|
||||
```
|
||||
# rm -i symlinkdir1 symlinkdir2
|
||||
|
||||
rm: remove symbolic link ‘symlinkdir1’? y
|
||||
rm: remove symbolic link ‘symlinkdir2’? y
|
||||
```
|
||||
|
||||
If you add a trailing slash (_**“/”**_) at the end, the symbolic link to the directory cannot be deleted. If you try, you get an error.
|
||||
|
||||
```
|
||||
# rm -i symlinkdir/
|
||||
|
||||
rm: cannot remove ‘symlinkdir/’: Is a directory
|
||||
```
|
||||
|
||||
You might be tempted to add the **“-r”** switch to work around this, but be careful: doing so deletes the contents of the target directory, while still not removing the symbolic link itself.
|
||||
|
||||
```
|
||||
# rm -ri symlinkdir/
|
||||
|
||||
rm: descend into directory ‘symlinkdir/’? y
|
||||
rm: remove regular file ‘symlinkdir/file4.txt’? y
|
||||
rm: remove directory ‘symlinkdir/’? y
|
||||
rm: cannot remove ‘symlinkdir/’: Not a directory
|
||||
```
|
||||
|
||||
### 2) How to Remove Symbolic Links Using the unlink Command
|
||||
|
||||
The unlink command deletes a given file. It accepts only a single file at a time.
|
||||
|
||||
To delete a symbolic link to a file:
|
||||
|
||||
```
|
||||
# unlink symlinkfile
|
||||
```
|
||||
|
||||
To delete a symbolic link to a directory:
|
||||
|
||||
```
|
||||
# unlink symlinkdir2
|
||||
```
|
||||
|
||||
If you append a trailing slash (_**“/”**_), you cannot remove the symbolic link to the directory using the unlink command either.
|
||||
|
||||
```
|
||||
# unlink symlinkdir3/
|
||||
|
||||
unlink: cannot unlink ‘symlinkdir3/’: Not a directory
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/remove-delete-symbolic-link-softlink-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/linux-remove-files-directories-folders-rm-command/
|
295
sources/tech/20190921 Top Open Source Video Players for Linux.md
Normal file
295
sources/tech/20190921 Top Open Source Video Players for Linux.md
Normal file
@ -0,0 +1,295 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Top Open Source Video Players for Linux)
|
||||
[#]: via: (https://itsfoss.com/video-players-linux/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Top Open Source Video Players for Linux
|
||||
======
|
||||
|
||||
_**Wondering which video player should you use on Linux? Here’s a list of top open source video players available for Linux distributions.**_
|
||||
|
||||
You can watch Hulu, Prime Video and/or [Netflix on Linux][1]. You can also [download videos from YouTube][2] and watch them later or if you are in a country where you cannot get Netflix and other streaming services, you may have to rely on torrent services like [Popcorn Time in Linux][3].
|
||||
|
||||
Watching movies, TV series, or other video content on computers is not an ‘ancient tradition’ yet. Usually, you go with the default video player that comes baked into your Linux distribution (which could be anything).
|
||||
|
||||
You won’t have an issue utilizing the default player – however, if you specifically want more open-source video player choices (or alternatives to the default one), you should keep reading.
|
||||
|
||||
### Best Linux video players
|
||||
|
||||
![][4]
|
||||
|
||||
I have included installation steps for Ubuntu, but that doesn’t make this a list of Ubuntu-only video players. These open source applications should be available in any Linux distribution you are using.
|
||||
|
||||
Installing the software
|
||||
|
||||
Another note for Ubuntu users: you should have the [universe repository enabled][5] in order to find and install these video players from the Software Center or by using the command line. I have mentioned the commands, but if you like, you can also install them from the Software Center.
|
||||
|
||||
_Please keep in mind that the list is in no particular order of ranking._
|
||||
|
||||
#### 1\. VLC Media Player
|
||||
|
||||
![][6]
|
||||
|
||||
Key Highlights:
|
||||
|
||||
* Built-in codecs
|
||||
* Customization options
|
||||
* Cross-platform
|
||||
* Every video file format supported
|
||||
* Extensions available for added functionalities
|
||||
|
||||
|
||||
|
||||
[VLC Media Player][7] is unquestionably the most popular open source video player. Not just limited to Linux – but it’s a must-have video player for every platform (including Windows).
|
||||
|
||||
It is quite a powerful video player, capable of handling a variety of file formats and codecs. You can customize its look using skins and enhance its functionality with extensions. Other features, such as [subtitle synchronization][8] and audio/video filters, exist as well.
|
||||
|
||||
[VLC Media Player][7]
|
||||
|
||||
#### How to install VLC?
|
||||
|
||||
You can easily [install VLC in Ubuntu][9] from the Software Center or download it from the [official website][7].
|
||||
|
||||
If you’re utilizing the terminal, you will have to separately install the components as per your requirements by following the [official resource][10]. To install the player, just type in:
|
||||
|
||||
```
|
||||
sudo apt install vlc
|
||||
```
|
||||
|
||||
#### 2\. MPlayer
|
||||
|
||||
![][11]
|
||||
|
||||
Key Highlights:
|
||||
|
||||
* Wide range of output drivers supported
|
||||
* Major file formats supported
|
||||
* Cross-platform
|
||||
* Command-line based
|
||||
|
||||
|
||||
|
||||
Yet another impressive open-source video player (technically, a video player engine). [MPlayer][12] may not offer you an intuitive user experience but it supports a wide range of output drivers and subtitle files.
|
||||
|
||||
Unlike the others, MPlayer does not offer a working GUI (it has one, but it doesn’t work as expected), so you will have to use the terminal to play a video. Even though this isn’t a popular choice, it works, and a couple of the video players listed below are inspired by (or based on) MPlayer but add a GUI.
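For example, typical terminal usage might look like this (the file names are placeholders):

```
# play a video file in fullscreen
mplayer -fs video.mp4

# load an external subtitle file alongside the video
mplayer -sub video.srt video.mp4
```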
|
||||
|
||||
[MPlayer][12]
|
||||
|
||||
#### How to install MPlayer?
|
||||
|
||||
We already have an article on [installing MPlayer on Ubuntu and other Linux distros][13]. If you’re interested to install this, you should check it out.
|
||||
|
||||
```
|
||||
sudo apt install mplayer mplayer-gui
|
||||
```
|
||||
|
||||
#### 3\. SMPlayer
|
||||
|
||||
![][14]
|
||||
|
||||
Key Highlights:
|
||||
|
||||
* Supports all major video formats
|
||||
* Built-in codecs
|
||||
* Cross-platform (Windows & Linux)
|
||||
* Play ad-free YouTube video
|
||||
* Opensubtitles integration
|
||||
* UI Customization available
|
||||
* Based on MPlayer
|
||||
|
||||
|
||||
|
||||
As mentioned, SMPlayer uses MPlayer as the playback engine. So, it supports a wide range of file formats. In addition to all the basic features, it also lets you play YouTube videos from within the video player (by getting rid of the annoying ads).
|
||||
|
||||
If you want to know about SMPlayer a bit more – we have a separate article here: [SMPlayer in Linux][15].
|
||||
|
||||
Similar to VLC, it also comes baked in with codecs, so you don’t have to worry about finding codecs and installing them to make it work unless there’s something specific you need.
|
||||
|
||||
[SMPlayer][16]
|
||||
|
||||
#### How to install SMPlayer?
|
||||
|
||||
SMPlayer should be available in your Software Center. However, if you want to utilize the terminal, type in this:
|
||||
|
||||
```
|
||||
sudo apt install smplayer
|
||||
```
|
||||
|
||||
#### 4\. MPV Player
|
||||
|
||||
![][17]
|
||||
|
||||
Key Highlights:
|
||||
|
||||
* Minimalist GUI
|
||||
* Video codecs built in
|
||||
* High-quality video output by video scaling
|
||||
* Cross-platform
|
||||
* YouTube Videos supported via CLI
|
||||
|
||||
|
||||
|
||||
If you are looking for a video player with a streamlined/minimal UI, this is for you. Similar to the above-mentioned video players, we also have a separate article on [MPV Player][18] with installation instructions (if you’re interested to know more about it).
|
||||
|
||||
Keeping that aside, it offers what you would expect from a standard video player. You can even try it on your Windows/Mac systems.
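As a rough illustration, basic mpv usage from the terminal might look like the following. The file name and video ID are placeholders, and YouTube playback assumes youtube-dl or yt-dlp is available in the PATH.

```
# play a local file
mpv video.mkv

# stream a YouTube video via the youtube-dl hook
mpv "https://www.youtube.com/watch?v=<VIDEO_ID>"
```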
|
||||
|
||||
[MPV Player][19]
|
||||
|
||||
#### How to install MPV Player?
|
||||
|
||||
You will find it listed in the Software Center or your package manager. Alternatively, you can download the required package for your distro from the [official download page][20].
|
||||
|
||||
If you’re on Ubuntu, you can type in this in the terminal:
|
||||
|
||||
```
|
||||
sudo apt install mpv
|
||||
```
|
||||
|
||||
#### 5\. Dragon Player
|
||||
|
||||
![][21]
|
||||
|
||||
Key Highlights:
|
||||
|
||||
* Simple UI
|
||||
* Tailored for KDE
|
||||
* Supports playing CDs and DVDs
|
||||
|
||||
|
||||
|
||||
This has been specifically tailored for KDE desktop users. It is a dead-simple video player with all the basic features needed. You shouldn’t expect anything fancy out of it – but it does support the major file formats.
|
||||
|
||||
[Dragon Player][22]
|
||||
|
||||
#### How to install Dragon Player?
|
||||
|
||||
You will find it listed in the official repositories. Alternatively, you can type in the following command to install it via the terminal:
|
||||
|
||||
```
|
||||
sudo apt install dragonplayer
|
||||
```
|
||||
|
||||
#### 6\. GNOME Videos
|
||||
|
||||
![Totem Video Player][23]
|
||||
|
||||
Key Highlights:
|
||||
|
||||
* A simple video player for GNOME Desktop
|
||||
* Plugins supported
|
||||
* Ability to sort/access separate video channels
|
||||
|
||||
|
||||
|
||||
The default video player for distros with the GNOME desktop environment (previously known as Totem). It supports all the major file formats and also lets you take a snapshot while playing a video. Similar to some of the others, it is a very simple and useful video player. You can try it out if you want.
|
||||
|
||||
[Gnome Videos][24]
|
||||
|
||||
#### How to install Totem (GNOME Videos)?
|
||||
|
||||
You can just type in “totem” to find the video player for GNOME listed in the software center. If not, you can also try utilizing the terminal with the following command:
|
||||
|
||||
```
|
||||
sudo apt install totem
|
||||
```
|
||||
|
||||
#### 7\. Deepin Movie
|
||||
|
||||
![][25]
|
||||
|
||||
If you are using [Deepin OS][26], you will find this as the default video player for the Deepin Desktop Environment. It features all the basic functionality that you would normally look for in a video player. You can try compiling the source to install it if you aren’t using Deepin.
|
||||
|
||||
[Deepin Movie][27]
|
||||
|
||||
#### How to install Deepin Movie?
|
||||
|
||||
You can find it in the Software Center, and the source code is available on [GitHub][28] if you prefer to compile it. Otherwise, type the following command in the terminal:
|
||||
|
||||
```
|
||||
sudo apt install deepin-movie
|
||||
```
|
||||
|
||||
#### 8\. Xine Multimedia Engine
|
||||
|
||||
![][29]
|
||||
|
||||
Key Highlights:
|
||||
|
||||
* Customization available
|
||||
* Subtitles supported
|
||||
* Major file formats supported
|
||||
* Streaming playback support
|
||||
|
||||
|
||||
|
||||
Xine is an interesting portable media player. You can either choose to utilize the GUI or call the xine library from other applications to make use of the features available.
|
||||
|
||||
It supports a wide range of file formats. You can customize the skin of the GUI, and it supports all kinds of subtitles (even from DVDs). In addition to this, you can take a snapshot while playing a video, which comes in handy.
|
||||
|
||||
[Xine Multimedia][30]
|
||||
|
||||
#### How to install Xine Multimedia?
|
||||
|
||||
You probably won’t find this in your Software Center. So, you can try typing this in your terminal to get it installed:
|
||||
|
||||
```
|
||||
sudo apt install xine-ui
|
||||
```
|
||||
|
||||
In addition to that, you can also check for available binary packages on their [official website][31].
|
||||
|
||||
**Wrapping Up**
|
||||
|
||||
We recommend trying these open source video players before anything else. In addition to all these, you can also try [Miro Player][32], which is no longer actively maintained but still works, so you can give it a try if nothing else works for you.
|
||||
|
||||
However, if you think we missed one of your favorite Linux video players that deserves a mention, let us know about it in the comments below!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/video-players-linux/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/watch-netflix-in-ubuntu-linux/
|
||||
[2]: https://itsfoss.com/download-youtube-linux/
|
||||
[3]: https://itsfoss.com/popcorn-time-ubuntu-linux/
|
||||
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/Video-Players-for-Linux.png?ssl=1
|
||||
[5]: https://itsfoss.com/ubuntu-repositories/
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/vlc-media-player.jpg?ssl=1
|
||||
[7]: https://www.videolan.org/vlc/
|
||||
[8]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/
|
||||
[9]: https://itsfoss.com/install-latest-vlc/
|
||||
[10]: https://wiki.videolan.org/Debian/#Debian
|
||||
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2015/10/mplayer-video.jpg?ssl=1
|
||||
[12]: http://www.mplayerhq.hu/design7/news.html
|
||||
[13]: https://itsfoss.com/mplayer/
|
||||
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/SMPlayer-coco.jpg?ssl=1
|
||||
[15]: https://itsfoss.com/smplayer/
|
||||
[16]: https://www.smplayer.info/en/info
|
||||
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/08/mpv-player-interface.png?ssl=1
|
||||
[18]: https://itsfoss.com/mpv-video-player/
|
||||
[19]: https://mpv.io/
|
||||
[20]: https://mpv.io/installation/
|
||||
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/dragon-player.jpg?ssl=1
|
||||
[22]: https://kde.org/applications/multimedia/org.kde.dragonplayer
|
||||
[23]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/totem-video-player.png?ssl=1
|
||||
[24]: https://wiki.gnome.org/Apps/Videos
|
||||
[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/deepin-movie.jpg?ssl=1
|
||||
[26]: https://www.deepin.org/en/
|
||||
[27]: https://www.deepin.org/en/original/deepin-movie/
|
||||
[28]: https://github.com/linuxdeepin/deepin-movie-reborn
|
||||
[29]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/xine-multilmedia.jpg?ssl=1
|
||||
[30]: https://www.xine-project.org/home
|
||||
[31]: https://www.xine-project.org/releases
|
||||
[32]: http://www.getmiro.com/
|
@ -1,108 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (heguangzhi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The Linux kernel: Top 5 innovations)
|
||||
[#]: via: (https://opensource.com/article/19/8/linux-kernel-top-5-innovations)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/mhaydenhttps://opensource.com/users/mralexjuarez)
|
||||
|
||||
The Linux kernel: Top 5 innovations
|
||||
======
|
||||
Linux 内核:五大创新
|
||||
======
|
||||
|
||||
想知道什么是真正的(不是那种时髦的)在 Linux 内核上的创新吗?请继续阅读。
|
||||
![绿色背景的企鹅][1]
|
||||
|
||||
_创新_ 这个词在科技行业的传播几乎和 _革命_ 一样多,所以很难区分那些夸张和真正令人振奋的东西。Linux 内核被称为创新的,但它又被称为现代计算中最大的黑客,一个微观世界中的庞然大物。
|
||||
|
||||
撇开市场和模式不谈,Linux 可以说是开源世界中最受欢迎的内核,它在近30年的生命周期中引入了一些真正的游戏改变者。
|
||||
|
||||
### Cgroups (2.6.24)
|
||||
|
||||
早在2007年,Paul Menage 和 Rohit Seth 就在内核中添加了深奥的[_control groups_ (cgroups)][2]功能(cgroups 的当前实现是由 Tejun Heo 重写的。)这种新技术最初被用作一种方法,从本质上来说,是为了确保一组特定任务的服务质量。
|
||||
|
||||
例如,您为与您的 WEB 服务相关联的所有任务创建一个控制组定义 ( cgroup ),为常规备份创建另一个 cgroup ,为一般操作系统需求创建另一个cgroup。然后,您可以控制每个组的资源百分比,这样您的操作系统和 WEB 服务就可以获得大部分系统资源,而您的备份进程可以访问剩余的资源。
|
||||
|
||||
然而,cgroups 最著名的是它作为今天驱动云技术的角色:容器。事实上,cgroups 最初被命名为[进程容器][3]。当它们被 [LXC][4],[CoreOS][5]和 Docker 等项目采用时,这并不奇怪。
|
||||
|
||||
就像闸门打开后一样,“ _容器_ ”一词恰好成为了 Linux 的同义词,微服务风格的基于云的“应用”概念很快成为了规范。如今,很难脱离 cgroups ,他们是如此普遍。每一个大规模的基础设施(可能还有你的笔记本电脑,如果你运行 Linux 的话)都以一种有意思的方式使用了 cgroups ,使你的计算体验比以往任何时候都更加易于管理和灵活。
|
||||
|
||||
例如,您可能已经在电脑上安装了[Flathub][6]或[Flatpak][7],或者您已经在工作中使用[Kubernetes][8]和/或[OpenShift][9]。不管怎样,如果“容器”这个术语对你来说仍然模糊不清,你可以在[ Linux 容器背后的应用场景][10] 获得对容器的实际理解。
|
||||
|
||||
### LKMM (4.17)
|
||||
|
||||
2018年,Jade Alglave, Alan Stern, Andrea Parri, Luc Maranget, Paul McKenney, 和其他几个人的辛勤工作的成果被合并到主线 Linux 内核中,以提供正式的内存模型。Linux 内核内存[一致性]模型(LKMM)子系统是一套描述Linux 内存一致性模型的工具,同时也产生测试用例。
|
||||
|
||||
|
||||
随着系统在物理设计上变得越来越复杂(增加了更多的中央处理器内核,高速缓存和内存增加,等等),它们就越难知道哪个中央处理器需要哪个地址空间,以及何时需要。例如,如果 CPU0 需要将数据写入内存中的共享变量,并且 CPU1 需要读取该值,那么 CPU0 必须在 CPU1 尝试读取之前写入。类似地,如果值是以一种顺序写入内存的,那么期望它们也以同样的顺序被读取,而不管哪个或哪些 CPU 正在读取。
|
||||
|
||||
即使在单个处理器上,内存管理也需要特定的顺序。像 **x = y** 这样的简单操作需要处理器从内存中加载 **y** 的值,然后将该值存储在 **x** 中。在处理器从内存中读取值之前,是不能将存储在 **y** 中的值放入 **x** 变量的。还有地址依赖:**x[n] = 6** 要求在处理器能够存储值6之前加载 **n** 。
|
||||
|
||||
LKMM 帮助识别和跟踪代码中的这些内存模式。这部分是通过一个名为 **herd** 的工具来实现的,该工具定义了内存模型施加的约束(以逻辑公式的形式),然后列举了与这些约束一致性的所有可能的结果。
|
||||
|
||||
### 低延迟补丁 (2.6.38)
|
||||
|
||||
|
||||
很久以前,在2011年之前的日子里,如果你想在 Linux进行 多媒体工作 [multimedia work on Linux][11] ,您必须获得一个低延迟内核。这主要适用于[录音/audio recording][12],同时添加了许多实时效果(如对着麦克风唱歌和添加混音,以及在耳机中无延迟地听到您的声音)。有些发行版,如[Ubuntu Studio][13],可靠地提供了这样一个内核,所以实际上这没有什么障碍,当艺术家选择发行版本时,只是作为一个重要提醒。
|
||||
|
||||
然而,如果您没有使用 Ubuntu Studio ,或者您需要在分发之前更新您的内核,您必须跳转到 rt-patches 网页,下载内核补丁,将它们应用到您的内核源代码,编译,然后手动安装。
|
||||
|
||||
然后,随着内核版本2.6.38的发布,这个过程结束了。默认情况下,Linux 内核突然像变魔术一样内置了低延迟代码(根据基准测试,延迟至少降低了10倍)。不再下载补丁,不用编译。一切都很顺利,这都是因为 Mike Galbraith 编写了一个200行的小补丁。
|
||||
|
||||
对于全世界的开源多媒体艺术家来说,这是一个游戏规则的改变。从2011年开始到2016年事情变得如此美好,我向自己做了一个挑战,要求[在树莓派v1(型号B)上建造一个数字音频工作站(DAW)][14],结果发现它运行得出奇地好。
|
||||
|
||||
### RCU (2.5)
|
||||
|
||||
RCU,或称读-拷贝-更新,是计算机科学中定义的一个系统,它允许多个处理器线程从共享内存中读取数据。它通过推迟更新来做到这一点,但也将它们标记为已更新,以确保数据读取为最新内容。实际上,这意味着读取与更新同时发生。
|
||||
|
||||
|
||||
典型的 RCU 循环有点像这样:
|
||||
|
||||
1. 删除指向数据的指针,以防止其他读操作引用它。
|
||||
2. 等待读完成他们的关键处理。
|
||||
3. 回收内存空间。
|
||||
|
||||
将更新阶段划分为删除和回收阶段意味着更新程序会立即执行删除,同时推迟回收直到所有活动读取完成(通过阻止它们或注册一个回调以便在完成时调用)。
|
||||
|
||||
虽然读-拷贝-更新的概念不是为 Linux 内核发明的,但它在 Linux 中的实现是该技术的一个定义性的例子。
|
||||
|
||||
### 合作 (0.01)
|
||||
|
||||
对于 Linux 内核创新的问题,最重要的是协作,最终答案也是。称之为好时机,称之为技术优势,称之为黑客能力,或者仅仅称之为开源,但 Linux 内核及其支持的许多项目是协作与合作的光辉范例。
|
||||
|
||||
它远远超出了内核范畴。各行各业的人都对开源做出了贡献,可以说是因为 Linux 内核。Linux 曾经是,现在仍然是 [自由软件][15]的主要力量,激励人们把他们的代码、艺术、想法或者仅仅是他们自己带到一个全球化的、有生产力的、多样化的人类社区中。
|
||||
|
||||
### 你最喜欢的创新是什么?
|
||||
|
||||
这个列表偏向于我自己的兴趣:容器、非统一内存访问(NUMA)和多媒体。我肯定把你最喜欢的内核创新从列表中去掉了。在评论中告诉我!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/linux-kernel-top-5-innovations
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[heguangzhi](https://github.com/heguangzhi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/sethhttps://opensource.com/users/mhaydenhttps://opensource.com/users/mralexjuarez
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
|
||||
[2]: https://en.wikipedia.org/wiki/Cgroups
|
||||
[3]: https://lkml.org/lkml/2006/10/20/251
|
||||
[4]: https://linuxcontainers.org
|
||||
[5]: https://coreos.com/
|
||||
[6]: http://flathub.org
|
||||
[7]: http://flatpak.org
|
||||
[8]: http://kubernetes.io
|
||||
[9]: https://www.redhat.com/sysadmin/learn-openshift-minishift
|
||||
[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
|
||||
[11]: http://slackermedia.info
|
||||
[12]: https://opensource.com/article/17/6/qtractor-audio
|
||||
[13]: http://ubuntustudio.org
|
||||
[14]: https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker
|
||||
[15]: http://fsf.org
|