mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-02-03 23:40:14 +08:00

Merge remote-tracking branch 'LCTT/master'
This commit is contained in: commit bc5cca0490
@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11497-1.html)
[#]: subject: (Kubernetes networking, OpenStack Train, and more industry trends)
[#]: via: (https://opensource.com/article/19/10/kubernetes-openstack-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

Weekly open source review: Kubernetes networking, OpenStack Train, and more industry trends
======

> A weekly look at open source community and industry trends.

![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

### The most exciting features in OpenStack Train

- [Article link][2]

> Given all the technology goodies that the Train release has to offer ([you can see the release highlights here][3]), you may be curious about the features that we at Red Hat believe will most benefit our telecommunications and enterprise customers and their use cases. Here is an overview of the features we are most excited about in this release.

**The impact**: OpenStack to me is like Shia LaBeouf: it reached peak hype a couple of years ago and then kept turning out good work. The Train release looks like yet another impressive drop of innovation.

### Building Kubernetes Operators in an Ansible-native way

- [Article link][4]

> Operators simplify management of complex applications on Kubernetes. They are usually written in Go and require expertise with the internals of Kubernetes. But there is an alternative with a lower barrier to entry. Ansible is a first-class citizen in the Operator SDK. Using Ansible frees up application engineers, maximizes the time available to automate and orchestrate your applications, and lets you do it across new and existing platforms with one simple language. Here we see how.

**The impact**: This is like finding out you can make pretty good ice cream with a blender and frozen bananas: Ansible (which is generally thought of as easy to pick up) lets you do some impressive Operator magic far more easily than you thought you could.

### Kubernetes networking: Behind the scenes

- [Article link][5]

> While there are very good resources around this topic (links [here][6]), I couldn't find a single example that connects all of the dots with the command outputs that network engineers love and hate, showing what is actually happening behind the scenes. So, I decided to curate this information from a number of different sources to hopefully help you better understand how things are tied together.

**The impact**: An accessible, well-written take on a complicated topic (with pictures). Guaranteed to make Kubernetes networking 10% less confusing.

### Securing the container supply chain

- [Article link][7]

> With the emergence of containers, Software as a Service, and Functions as a Service, the focus is on consuming existing services, functions, and container images in the race to provide new value. Scott McCarty, Principal Product Manager for Containers at [Red Hat][8], says that focus has both advantages and disadvantages. "It allows us to focus our energy on writing new application code that is specific to our needs, while shifting the concern for the underlying infrastructure to someone else," says McCarty. "Containers are in a sweet spot, providing enough control while offloading a lot of tedious infrastructure work." But containers can also create disadvantages related to security.

**The impact**: I sit amongst a group of about ten security people, and I can safely say that it takes a certain disposition to want to think about software security all day. When you stare into the abyss for long enough, it stares back into you. If you are a software developer who is not so disposed, please take Scott's advice and make sure your suppliers are thinking about security.

### Fedora at 15: Why Matthew Miller sees a bright future for the Linux distribution

- [Article link][9]

> In a wide-ranging interview with TechRepublic, Fedora project leader Matthew Miller discussed lessons learned from the past, popular adoption of and competing standards for software containers, potential changes coming to Fedora, and hot-button topics, including systemd.

**The impact**: What I like about the Fedora project is its clarity; the project knows what it stands for. People like Matt are why it has a bright future.

*I hope you enjoyed this list of what stood out to me from last week, and come back next Monday for more open source community, market, and industry trends.*

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/kubernetes-openstack-and-more-industry-trends

Author: [Tim Hildred][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.redhat.com/en/blog/look-most-exciting-features-openstack-train
[3]: https://releases.openstack.org/train/highlights.html
[4]: https://www.cncf.io/webinars/building-kubernetes-operators-in-an-ansible-native-way/
[5]: https://itnext.io/kubernetes-networking-behind-the-scenes-39a1ab1792bb
[6]: https://github.com/nleiva/kubernetes-networking-links
[7]: https://www.devprojournal.com/technology-trends/open-source/securing-the-container-supply-chain/
[8]: https://www.redhat.com/en
[9]: https://www.techrepublic.com/article/fedora-at-15-why-matthew-miller-sees-a-bright-future-for-the-linux-distribution/
@ -1,70 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Kubernetes networking, OpenStack Train, and more industry trends)
[#]: via: (https://opensource.com/article/19/10/kubernetes-openstack-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

Kubernetes networking, OpenStack Train, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

## [A look at the most exciting features in OpenStack Train][2]

> But given all the technology goodies ([you can see the release highlights here][3]) that the Train release has to offer, you may be curious about the features that we at Red Hat believe are among the top capabilities that will benefit our telecommunications and enterprise customers and their use cases. Here's an overview of the features we are most excited about in this release.

**The impact**: OpenStack to me is like Shia LaBeouf: it reached peak hype a couple of years ago and then continued turning out good work. The Train release looks like yet another pretty incredible drop of innovation.

## [Building Kubernetes Operators in an Ansible-native way][4]

> Operators simplify management of complex applications on Kubernetes. They are usually written in Go and require expertise with the internals of Kubernetes. But there's an alternative with a lower barrier to entry. Ansible is a first-class citizen in the Operator SDK. Using Ansible frees up application engineers, maximizes the time available to automate and orchestrate your applications, and lets you do it across new and existing platforms with one simple language. Here we see how.

**The impact**: This is like finding out you can make pretty good ice cream with a blender and frozen bananas: Ansible (which is generally thought of as being pretty simple to pick up) lets you do some pretty impressive Operator magic way easier than you thought you could.

## [Kubernetes networking: Behind the scenes][5]

> While there are very good resources around this topic (links [here][6]), I couldn't find a single example that connects all of the dots with the command outputs that network engineers love and hate, showing what is actually happening behind the scenes. So, I decided to curate this information from a number of different sources to hopefully help you better understand how things are tied together.

**The impact**: An accessible, well-written take on a complicated topic (with pictures). Guaranteed to make Kube networking 10% less confusing.

## [Securing the container supply chain][7]

> With the emergence of containers, Software as a Service and Functions as a Service, the focus is on consuming existing services, functions and container images in the race to provide new value. Scott McCarty, Principal Product Manager, Containers at [Red Hat][8], says that focus has both advantages and disadvantages. "It allows us to focus our energy on writing new application code that is specific to our needs, while shifting the concern for the underlying infrastructure to someone else," says McCarty. "Containers are in a sweet spot providing enough control, but offloading a lot of tedious infrastructure work." But containers can also create disadvantages related to security.

**The impact**: I sit amongst a group of ~10 security people, and can safely say that it takes a certain disposition to want to think about software security all day. When you stare into the abyss for long enough, it stares back into you. If you are a software developer who is not so disposed, please take Scott's advice and make sure your suppliers are.

## [Fedora at 15: Why Matthew Miller sees a bright future for the Linux distribution][9]

> In a wide-ranging interview with TechRepublic, Fedora project leader Matthew Miller discussed lessons learned from the past, popular adoption and competing standards for software containers, potential changes coming to Fedora, as well as hot-button topics, including systemd.

**The impact**: What I like about the Fedora project is its clarity; the project knows what it stands for. People like Matt are why.

## _I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/kubernetes-openstack-and-more-industry-trends

Author: [Tim Hildred][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.redhat.com/en/blog/look-most-exciting-features-openstack-train
[3]: https://releases.openstack.org/train/highlights.html
[4]: https://www.cncf.io/webinars/building-kubernetes-operators-in-an-ansible-native-way/
[5]: https://itnext.io/kubernetes-networking-behind-the-scenes-39a1ab1792bb
[6]: https://github.com/nleiva/kubernetes-networking-links
[7]: https://www.devprojournal.com/technology-trends/open-source/securing-the-container-supply-chain/
[8]: https://www.redhat.com/en
[9]: https://www.techrepublic.com/article/fedora-at-15-why-matthew-miller-sees-a-bright-future-for-the-linux-distribution/
@ -1,62 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use IoT devices to keep children safe?)
[#]: via: (https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/)
[#]: author: (Andrew Carroll https://opensourceforu.com/author/andrew-carroll/)

How to use IoT devices to keep children safe?
======

[![][1]][2]

_IoT (Internet of Things) devices are transforming our lives rapidly. These devices are everywhere, from our homes to industries. According to some estimates, there will be 10 billion IoT devices by 2020, and by 2025 that number will grow to 22 billion. IoT has found applications in a range of fields, including smart homes, industrial processes, agriculture, and even healthcare. With such a wide variety of applications, it is obvious why IoT has become one of the hot topics in recent years._

Several factors have contributed to the explosion of IoT devices across multiple disciplines, including the availability of low-cost processors and wireless connectivity. Moreover, open-source platforms have enabled the exchange of information that drives innovation in the field of IoT. Compared with conventional application development, IoT has grown exponentially because its resources are open source.

Before explaining how IoT can be used to protect children, a basic understanding of IoT technology is essential.

**What are IoT devices?**

IoT devices are those that can communicate with each other without the involvement of humans. Hence, many experts do not consider smartphones and computers to be IoT devices. Moreover, IoT devices must be able to gather data and communicate it to other devices or to the cloud for processing.

However, there are some fields where we still need to explore the potential of IoT. Children are vulnerable, which makes them an easy target for criminals and others who mean to harm them. Whether in the physical or the digital world, children are susceptible to crime. Since parents cannot be physically present to protect their children at all times, the need for monitoring tools is obvious.

In addition to wearable devices for children, there are plenty of parental monitoring applications, such as Xnspy, that monitor children in real time and provide live updates. These tools help ensure that the child is safe. While wearable devices ensure that the child is not in physical danger, parental monitoring apps ensure that the child is safe online.

As more children spend time on their smartphones, it is no surprise to see them becoming a primary target for frauds and scammers. Moreover, there is also a chance of children becoming targets of cyberbullying, because pedophilia, catfishing, and other crimes are prevalent on the internet.

Are these solutions enough? We need to find IoT solutions for ensuring our children's safety, both online and offline. How can we keep children secure in these times? We need to come up with new and innovative solutions that keep our children safe. The solutions provided by IoT can help keep our children safe in schools as well as at home.

**The potential of IoT**

The benefits offered by IoT devices are numerous. For one, parents can remotely monitor their children without being too overbearing. Thus, children have the space and freedom to become independent while having a safe environment in which to do so.

Moreover, parents do not have to worry about their children's safety: IoT devices can provide 24/7 updates about a child. Monitoring apps such as Xnspy go a step further in providing information regarding a child's smartphone activity. As IoT devices become more sophisticated, it is only a matter of time before we have devices with longer battery life. IoT devices such as location trackers can provide accurate details regarding a child's whereabouts, so parents do not have to worry.

While wearable devices are great to have, they are often not enough to ensure a child's safety. Hence, to provide a safe environment for children, we need other methods as well. Many incidents have shown that schools are just as susceptible to attacks as any other public place. Therefore, schools need to adopt safety measures that keep children and teachers safe. Here, IoT devices can be used to detect threats and take the necessary action to prevent an attack. A threat-detection system can include cameras. Once the system detects a threat, it can notify the authorities, including law enforcement agencies and hospitals. Devices such as smart locks can be used to lock down the school, including classrooms, to protect children. In addition, parents can be informed about their child's safety and receive immediate alerts about threats. This would require the implementation of wireless technology, such as Wi-Fi, and sensors. Thus, schools need to create a budget specifically for providing security in the classroom.

Smart homes have made it possible to turn off the lights with a clap, or by telling your home assistant to do so. Likewise, IoT devices can be used in a house to protect children. In a home, IoT devices such as cameras can give parents full visibility when looking after their children. When parents aren't in the house, cameras and other sensors can detect any suspicious activity that takes place. Other devices, such as smart locks connected to these sensors, can lock the doors, windows, and bedrooms to ensure that the kids are safe.

Likewise, there are plenty of IoT solutions that can be introduced to keep kids safe.

**Just as bad as they are good**

Sensors in IoT devices create an enormous amount of data, so the safety of that data is a crucial factor. Data gathered about a child falling into the wrong hands is a real risk, so precautions are required. Any data breached from your IoT devices can be used to determine behavior patterns, so one must invest in safe IoT solutions that do not breach user privacy.

IoT devices often connect to Wi-Fi to transmit data between devices. Unsecured networks that carry unencrypted data pose certain risks: such networks are easy to eavesdrop on, and hackers can use such network points to compromise the system. They can also introduce malware into the system, making it vulnerable. Moreover, cyberattacks on devices and on public networks, such as those in schools, can lead to data breaches and theft of private data. Hence, an overall plan for protecting the network and the IoT devices must be in effect when implementing an IoT solution for the protection of children.

The potential of IoT devices to protect children in schools and homes still awaits innovation. We need more effort to protect the networks that connect IoT devices. Moreover, the data generated by an IoT device can fall into the wrong hands, causing more trouble. So this is one area where IoT security is essential.

--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/

Author: [Andrew Carroll][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensourceforu.com/author/andrew-carroll/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?resize=696%2C507&ssl=1 (Visual Internet of things_EB May18)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?fit=900%2C656&ssl=1
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (PsiACE)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -339,7 +339,7 @@ via: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
Author: [Nicolás Parada][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Translator: [PsiACE](https://github.com/PsiACE)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
@ -1,161 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (wenwensnow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use GameHub to Manage All Your Linux Games in One Place)
[#]: via: (https://itsfoss.com/gamehub/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Use GameHub to Manage All Your Linux Games in One Place
======

How do you [play games on Linux][1]? Let me guess. Either you install games from the software center, or from Steam, GOG, Humble Bundle, and so on, right? But how do you plan to manage all your games from multiple launchers and clients? That sounds like a hassle to me – which is why I was delighted when I came across [GameHub][2].

GameHub is a desktop application for Linux distributions that lets you manage "All your games in one place". That sounds interesting, doesn't it? Let me share more details about it.

![][3]

### GameHub features to manage Linux games from different sources in one place

Let's look at the features that make GameHub one of the [essential Linux applications][4], especially for gamers.

#### Steam, GOG & Humble Bundle Support

![][5]

It supports Steam, [GOG][6], and [Humble Bundle][7] account integration. You can sign in to your account to view and manage your library from within GameHub.

For my usage, I have a lot of games on Steam and a couple on Humble Bundle. I can't speak for everyone – but it is safe to assume that these are the major platforms one would want to have.

#### Native Game Support

![][8]

There are several [websites where you can find and download Linux games][9]. You can also add native Linux games by downloading their installers or by adding the executable file.

Unfortunately, there's no easy way of discovering Linux games from within GameHub at the moment. So, you will have to download them separately and add them to GameHub as shown in the image above.

#### Emulator Support

With emulators, you can [play retro games on Linux][10]. As you can observe in the image above, you also get the ability to add emulators (and import emulated images).

You can see [RetroArch][11] listed already, but you can also add custom emulators as per your requirements.

#### User Interface

![Gamehub Appearance Option][12]

Of course, the user experience matters. Hence, it is important to take a look at its user interface and what it offers.

To me, it felt very easy to use, and the presence of a dark theme is a bonus.

#### Controller Support

If you are comfortable using a controller with your Linux system to play games – you can easily add it, and enable or disable it from the settings.

#### Multiple Data Providers

Because it fetches the information (or metadata) of your games, it needs a source for that. You can see all the sources listed in the image below.

![Data Providers Gamehub][13]

You don't have to do anything here – but if you are using a platform other than Steam, you can generate an [API key for IGDB][14].

I recommend doing that only if you see a prompt/notice within GameHub, or if some of your games lack descriptions/pictures/stats in GameHub.

#### Compatibility Layer

![][15]

Do you have a game that does not support Linux?

You do not have to worry. GameHub offers multiple compatibility layers, such as Wine/Proton, which you can use to get the game installed and make it playable.

We can't really be sure what will work for you – so you have to test that yourself. Nevertheless, it is an important feature that could come in handy for a lot of gamers.

### How Do You Manage Your Games in GameHub?

You get the option to add your Steam/GOG/Humble Bundle account right after you launch it.

For Steam, you need the Steam client installed on your Linux distro. Once you have it, you can easily link the games to GameHub.

![][16]

For GOG & Humble Bundle, you can directly sign in using your credentials to get your games organized in GameHub.

If you are adding an emulated image or a native installer, you can always do that by clicking the "**+**" button in the top-right corner of the window.

### How Do You Install Games?

For Steam games, it automatically launches the Steam client to download/install them (I wish this were possible without launching Steam!).

![][17]

But for GOG/Humble Bundle, you can directly start downloading and installing the games after signing in. If necessary, you can utilize the compatibility layer for non-native Linux games.

In either case, if you want to install an emulated game or a native game – just add the installer or import the emulated image. There's nothing more to it.

### GameHub: How do you install it?

![][18]

To start with, you can just search for it in your software center or app center. It is available in the **Pop!_Shop**, and it can be found in most official repositories.

If you don't find it there, you can always add the repository and install it via the terminal by typing these commands:

```
sudo add-apt-repository ppa:tkashkin/gamehub
sudo apt update
sudo apt install com.github.tkashkin.gamehub
```

In case you encounter an "**add-apt-repository command not found**" error, you can take a look at our article to help fix the [add-apt-repository not found error][19].

There are also AppImage and Flatpak versions available. You can find installation instructions for other Linux distros on its [official webpage][2].

Also, you have the option to download pre-release packages from its [GitHub page][20].

[GameHub][2]

**Wrapping Up**

GameHub is a pretty neat application as a unified library for all your games. The user interface is intuitive and so are the options.

Have you had the chance to test it out before? If yes, let us know your experience in the comments down below.

Also, feel free to tell us about some of your favorite tools/applications similar to this which you would want us to try.

--------------------------------------------------------------------------------

via: https://itsfoss.com/gamehub/

Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-gaming-guide/
[2]: https://tkashkin.tk/projects/gamehub/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-home-1.png?ssl=1
[4]: https://itsfoss.com/essential-linux-applications/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-platform-support.png?ssl=1
[6]: https://www.gog.com/
[7]: https://www.humblebundle.com/monthly?partner=itsfoss
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-native-installers.png?ssl=1
[9]: https://itsfoss.com/download-linux-games/
[10]: https://itsfoss.com/play-retro-games-linux/
[11]: https://www.retroarch.com/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-appearance.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/data-providers-gamehub.png?ssl=1
[14]: https://www.igdb.com/api
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-windows-game.png?fit=800%2C569&ssl=1
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-library.png?ssl=1
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-compatibility-layer.png?ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-install.jpg?ssl=1
[19]: https://itsfoss.com/add-apt-repository-command-not-found/
[20]: https://github.com/tkashkin/GameHub/releases
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wenwensnow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,210 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Configure Rsyslog Server in CentOS 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)

How to Configure Rsyslog Server in CentOS 8 / RHEL 8
======

**Rsyslog** is a free and open source logging utility that is present by default on **CentOS** 8 and **RHEL** 8 systems. It provides an easy and effective way of **centralizing logs** from client nodes to a single central server. Centralizing logs is beneficial in two ways. First, it simplifies log viewing: a systems administrator can view all the logs of remote servers from a central point without logging in to every client system, which helps greatly when several servers need to be monitored. Second, if a remote client crashes, you need not worry about losing its logs, because they are all saved on the **central rsyslog server**. Rsyslog has replaced syslog, which supported only the **UDP** protocol. It extends the basic syslog protocol with superior features such as support for both **UDP** and **TCP** when transporting logs, augmented filtering abilities, and flexible configuration options. That said, let's explore how to configure the Rsyslog server on CentOS 8 / RHEL 8 systems.
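As a side note on the syslog protocol mentioned above: every syslog message carries a priority (PRI) value, computed as facility × 8 + severity (per RFC 5424). This quick shell sketch, which is an illustrative addition rather than part of the original article, shows the arithmetic:

```shell
# Syslog PRI = facility * 8 + severity (RFC 5424).
# Example: facility 1 (user) with severity 6 (informational)
# gives PRI 14, which appears on the wire as the "<14>" message prefix.
facility=1
severity=6
pri=$(( facility * 8 + severity ))
printf '<%s>\n' "$pri"   # prints <14>
```

Knowing this encoding helps when reading raw packets captured between the client and the rsyslog server.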
[![configure-rsyslog-centos8-rhel8][1]][2]

### Prerequisites

We are going to use the following lab setup to test the centralized logging process:

* **Rsyslog server**: CentOS 8 Minimal, IP address: 10.128.0.47
* **Client system**: RHEL 8 Minimal, IP address: 10.128.0.48

With this setup, we will demonstrate how to set up the Rsyslog server and later configure the client system to ship logs to it for monitoring.

Let's get started!
### Configuring the Rsyslog Server on CentOS 8

By default, Rsyslog comes installed on CentOS 8 / RHEL 8 servers. To verify the status of Rsyslog, log in via SSH and issue the command:

```
$ systemctl status rsyslog
```

Sample Output

![rsyslog-service-status-centos8][1]

If rsyslog is not present for whatever reason, you can install it using the command:

```
$ sudo yum install rsyslog
```

Next, you need to modify a few settings in the Rsyslog configuration file. Open the configuration file:

```
$ sudo vim /etc/rsyslog.conf
```

Scroll down and uncomment the lines shown below to allow reception of logs via the UDP protocol:

```
module(load="imudp") # needs to be done just once
input(type="imudp" port="514")
```

![rsyslog-conf-centos8-rhel8][1]

Similarly, if you prefer to enable TCP rsyslog reception, uncomment these lines:

```
module(load="imtcp") # needs to be done just once
input(type="imtcp" port="514")
```

![rsyslog-conf-tcp-centos8-rhel8][1]

Save and exit the configuration file.
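Optionally — this is an addition beyond the steps in the original article — rsyslog can write each client's logs to its own file instead of mixing everything into `/var/log/messages`. A commonly used template (legacy rsyslog syntax; the directory path here is just an example) placed in `/etc/rsyslog.conf` after the input lines looks like this:

```
$template RemoteLogs,"/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteLogs
& stop
```

With this in place, messages from each client host land under a directory named after that host, which makes browsing a busy central server considerably easier.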
|
||||
|
||||
To receive the logs from the client system, we need to open Rsyslog default port 514 on the firewall. To achieve this, run
```
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```

If you enabled UDP reception instead, also open `514/udp` in the same way.

Next, reload the firewall to save the changes:

```
$ sudo firewall-cmd --reload
```

Sample Output

![firewall-ports-rsyslog-centos8][1]

Next, restart the Rsyslog service:

```
$ sudo systemctl restart rsyslog
```

To enable Rsyslog on boot, run the command below:

```
$ sudo systemctl enable rsyslog
```

To confirm that the Rsyslog server is listening on port 514, use the netstat command as follows (on minimal installs without the net-tools package, `ss -pnltu` gives the same information):
```
$ sudo netstat -pnltu
```

Sample Output

![netstat-rsyslog-port-centos8][1]

Perfect! We have successfully configured our Rsyslog server to receive logs from the client system.

To view log messages in real time, run the command:

```
$ sudo tail -f /var/log/messages
```
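By default, every message forwarded by a client is mixed into `/var/log/messages` alongside the server's own logs. If you would rather keep each client's logs in a separate file, rsyslog templates can do that. The fragment below is an illustrative sketch only (the template name `RemoteLogs` and the `/var/log/remote/` layout are assumptions, not part of this setup); it would go in `/etc/rsyslog.conf` on the server:

```
# Write messages arriving from remote hosts into per-host files
template(name="RemoteLogs" type="string" string="/var/log/remote/%HOSTNAME%.log")
if $fromhost-ip != "127.0.0.1" then {
    action(type="omfile" dynaFile="RemoteLogs")
    stop
}
```

Restart rsyslog after editing for a change like this to take effect.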
Let's now configure the client system.

### Configuring the Client System on RHEL 8

As on the Rsyslog server, log in and check whether the rsyslog daemon is running by issuing the command:

```
$ sudo systemctl status rsyslog
```

Sample Output

![client-rsyslog-service-rhel8][1]

Next, proceed to open the rsyslog configuration file:

```
$ sudo vim /etc/rsyslog.conf
```

At the end of the file, append one of the following lines, depending on the protocol you enabled on the server:
```
*.* @10.128.0.47:514 # Use @ for UDP protocol
*.* @@10.128.0.47:514 # Use @@ for TCP protocol
```

Save and exit the configuration file. Just like on the Rsyslog server, open port 514, the default Rsyslog port, on the firewall:

```
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```

Next, reload the firewall to save the changes:

```
$ sudo firewall-cmd --reload
```

Next, restart the rsyslog service:

```
$ sudo systemctl restart rsyslog
```

To enable Rsyslog on boot, run the following command:

```
$ sudo systemctl enable rsyslog
```

### Testing the Logging Operation

Having successfully set up and configured the Rsyslog server and the client system, it's time to verify that your configuration is working as intended.
On the client system, issue the command:

```
$ logger "Hello guys! This is our first log"
```

Now head over to the Rsyslog server and run the command below to check the log messages in real time:

```
$ sudo tail -f /var/log/messages
```

The message logged on the client system should appear in the Rsyslog server's log, confirming that the Rsyslog server is now receiving logs from the client system.

![centralize-logs-rsyslogs-centos8][1]

And that's it, guys! We have successfully set up the Rsyslog server to receive log messages from a client system.

Read Also: **[How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8][3]**
--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/

作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/configure-rsyslog-centos8-rhel8.jpg
[3]: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -370,7 +370,7 @@ via: https://opensource.com/article/19/10/initializing-arrays-java

作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -0,0 +1,206 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Best practices in test-driven development)
[#]: via: (https://opensource.com/article/19/10/test-driven-development-best-practices)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)

Best practices in test-driven development
======
Ensure you're producing very high-quality code by following these TDD best practices.
![magnifying glass on computer screen][1]

In my previous series on [test-driven development (TDD) and mutation testing][2], I demonstrated the benefits of relying on examples when building a solution. That begs the question: What does "relying on examples" mean?

In that series, I described one of my expectations when building a solution to determine whether it's daytime or nighttime. I provided an example of a specific hour of the day that I consider to fall in the daytime category. I created a **DateTime** variable named **dayHour** and gave it the specific value of **August 8, 2019, 7 hours, 0 minutes, 0 seconds**.

My logic (or way of reasoning) was: "When the system is notified that the time is exactly 7am on August 8, 2019, I expect that the system will perform the necessary calculations and return the value **Daylight**."

Armed with such a specific example, it was very easy to create a unit test (**Given7amReturnDaylight**). I then ran the tests and watched my unit test fail, which gave me the opportunity to work on fixing this early failure.
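To make that concrete: the original series implemented this in C#, but the shape of such a micro-test is the same in any language. Here is a minimal, illustrative Python sketch; the function name `day_or_night` and its daylight rule are assumptions for illustration, not the article's actual implementation:

```python
import unittest
from datetime import datetime

def day_or_night(moment: datetime) -> str:
    """Toy implementation assumed for illustration: 06:00-17:59 counts as daylight."""
    return "Daylight" if 6 <= moment.hour < 18 else "Nighttime"

class DaylightTests(unittest.TestCase):
    def test_given_7am_return_daylight(self):
        # The micro-example: August 8, 2019, 7:00:00 must yield Daylight.
        day_hour = datetime(2019, 8, 8, 7, 0, 0)
        self.assertEqual(day_or_night(day_hour), "Daylight")
```

Before `day_or_night` exists (or while it still returns the wrong value), this test fails; that failure is exactly the micro-failure the cycle starts from.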
### Iteration is the solution

One very important aspect of TDD (and, by proxy, of agile) is the fact that it is impossible to arrive at an acceptable solution unless you are iterating. TDD is a professional discipline based on the process of relentless iterating. It is very important to note that it mandates that each iteration must begin with a micro-failure. That micro-failure has only one purpose: to solicit immediate feedback. And that immediate feedback ensures we can rapidly close the gap between _wanting_ a solution and _getting_ a solution.

Iteration provides an opportunity to solicit immediate feedback by failing as early as possible. Because that failure is fast (i.e., it is a micro-failure), it is not alarming; even when we fail, we can remain calm, knowing that it will be easy to fix the failure. And the feedback from that failure will guide us toward fixing the failure.

Rinse, repeat, until we completely close the gap and deliver the solution that fully meets the expectation (but keep in mind that the expectation must also be a micro-expectation).

### Why micro?

This approach often feels very unambitious. In TDD (and in agile), it's best to pick a tiny, almost trivial challenge, and then do the TDD song-and-dance by failing first, then iterating until we solve that trivial challenge. People who are used to more substantial, beefy engineering and problem solving tend to feel that such an exercise is beneath their level of competence.

One of the cornerstones of agile philosophy relies on reducing the problem space to multiple, smallest-possible surface areas. As Robert C. Martin puts it:

> _"Agile is a small idea about the small problems of small programming teams doing small things"_

But how can making an unimpressive series of such pedestrian, minuscule, and almost insignificant micro-victories ever enable us to reach the big-scale solution?

Here is where sophisticated and elaborate systems thinking comes into play. When building a system, there's always the risk of ending up with a dreaded "monolith." A monolith is a system built on the principle of tight coupling. Any part of the monolith is highly dependent on many other parts of the same monolith. That arrangement makes the monolith very brittle, unreliable, and difficult to operate, maintain, troubleshoot, and fix.

The only way to avoid this trap is to minimize or, better yet, completely remove coupling. Instead of investing heroic efforts into building elaborate parts that will be assembled into a system, it is much better to take humble, baby steps toward building tiny, micro parts. These micro parts have very little capability on their own, and will, by virtue of such arrangement, not be dependent on other components. This will minimize and even remove any coupling.

The desired end game in building a useful, elaborate system is to compose it from a collection of generic, completely independent components. The more generic each component is, the more robust, resilient, and flexible the resulting system will be. Also, having a collection of generic components enables them to be repurposed to build brand new systems by reconfiguring those components.

Consider a toy castle made out of Lego blocks. If we pick almost any block from that castle and examine it in isolation, we won't be able to find anything on that block that specifies it is a Lego block meant for building a castle. The block itself is sufficiently generic, which makes it suitable for building other contraptions, such as toy cars, toy airplanes, toy boats, etc. That's the power of having generic components.

TDD is a proven discipline for delivering generic, independent, and autonomous components that can be safely used to assemble large, sophisticated systems expediently. As in agile, TDD is focused on micro-activities. And because agile is based on the fundamental principle known as "the Whole Team," the humble approach illustrated here is also important when specifying business examples. If the example used for building a component is not modest, it will be difficult to meet the expectations. Therefore, the expectations must be humble, which makes the resulting examples equally humble.

For instance, if a member of the Whole Team (a requester) provides the developer with an expectation and an example that reads:

> _"When processing an order, make sure to apply appropriate discount for orders made by loyal customers, or for orders over certain monetary value, or both."_

The developer should recognize that this example is too ambitious. That's not a humble expectation. It is not sufficiently micro, if you will. The developer should always strive to guide a requester in being more specific and micro-level when crafting examples. Paradoxically, the more specific the example, the more generic the resulting solution will be.

A much better, more effective expectation and example would be:

> _"Discount made for an order greater than $100.00 is $18.00."_

Or:

> _"Discount made for an order greater than $100.00 that was made by a customer who already placed three orders is $25.00."_

Such micro-examples make it easy to turn them into automated micro-expectations (read: unit tests). Such expectations will make us fail, and then we will pick ourselves up and iterate until we deliver the solution—a robust, generic component that knows how to calculate discounts based on the micro-examples supplied by the Whole Team.
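As a sketch of how those two micro-examples might translate into unit tests (Python here for brevity; the function name `calculate_discount` and its signature are assumptions, not the article's actual code):

```python
import unittest

def calculate_discount(order_total: float, prior_orders: int = 0) -> float:
    """Discount rules distilled directly from the two micro-examples."""
    if order_total <= 100.00:
        return 0.00
    # Three or more prior orders marks a loyal customer.
    return 25.00 if prior_orders >= 3 else 18.00

class DiscountTests(unittest.TestCase):
    def test_order_over_100_dollars_gets_18_dollar_discount(self):
        self.assertEqual(calculate_discount(150.00), 18.00)

    def test_loyal_customer_order_over_100_dollars_gets_25_dollar_discount(self):
        self.assertEqual(calculate_discount(150.00, prior_orders=3), 25.00)
```

Note that each test asserts exactly one expectation, one micro-example per test.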
### Writing quality unit tests

Merely writing unit tests without any concern about their quality is a fool's errand. Shoddily written unit tests will result in bloated, tightly coupled code. Such code is brittle, difficult to reason about, and often nearly impossible to fix.

We need to lay down some ground rules for writing quality unit tests. These ground rules will help us make swift progress in building robust, reliable solutions. The easiest way to do that is to introduce a mnemonic in the form of an acronym: **FIRST**, which says unit tests must be:

* **F** = Fast
* **I** = Independent
* **R** = Repeatable
* **S** = Self-validating
* **T** = Thorough

#### Fast

Since a unit test describes a micro-example, it should expect very simple processing from the implemented code. This means that each unit test should be very fast to run.

#### Independent

Since a unit test describes a micro-example, it should describe a very simple process that does not depend on any other unit test.

#### Repeatable

Since a unit test does not depend on any other unit test, it must be fully repeatable. What that means is that each time a certain unit test runs, it produces the same results as the previous time it ran. Neither the number of times the unit tests run nor the order in which they run should ever affect the expected output.

#### Self-validating

When unit tests run, the outcome of the testing should be instantly visible. Developers should not be expected to reach for some other source(s) of information to find out whether their unit tests failed or passed.

#### Thorough

Unit tests should describe all the expectations as defined in the micro-examples.

### Well-structured unit tests

Unit tests are code. And the same as any other code, unit tests need to be well-structured. It is unacceptable to deliver sloppy, messy unit tests. All the principles that apply to the rules governing clean implementation code apply with equal force to unit tests.

A time-tested and proven methodology for writing reliable quality code is based on the clean code principle known as **SOLID**. This acronym helps us remember five very important principles:

* **S** = Single responsibility principle
* **O** = Open–closed principle
* **L** = Liskov substitution principle
* **I** = Interface segregation principle
* **D** = Dependency inversion principle

#### Single responsibility principle

Each component must be responsible for performing only one operation. This principle is illustrated in this meme:

![Sign illustrating single-responsibility principle][3]

Pumping septic tanks is an operation that must be kept separate from filling swimming pools.

Applied to unit tests, this principle ensures that each unit test verifies one—and only one—expectation. From a technical standpoint, this means each unit test must have one and only one **Assert** statement.
#### Open–closed principle

This principle states that a component should be open for extensions, but closed for any modifications.

![Open-closed principle][4]

Applied to unit tests, this principle ensures that we will not implement a change to an existing unit test in that unit test. Instead, we must write a brand new unit test that will implement the changes.

#### Liskov substitution principle

This principle provides a guide for deciding which level of abstraction may be appropriate for the solution.

![Liskov substitution principle][5]

Applied to unit tests, this principle guides us to avoid tight coupling with dependencies that depend on the underlying computing environment (such as databases, disks, network, etc.).

#### Interface segregation principle

This principle reminds us not to bloat APIs. When subsystems need to collaborate to complete a task, they should communicate via interfaces. But those interfaces must not be bloated. If a new capability becomes necessary, don't add it to the already defined interface; instead, craft a brand new interface.

![Interface segregation principle][6]

Applied to unit tests, removing the bloat from interfaces helps us craft more specific unit tests, which, in turn, results in more generic components.

#### Dependency inversion principle

This principle states that we should control our dependencies, instead of dependencies controlling us. If there is a need to use another component's services, instead of being responsible for instantiating that component within the component we are building, it must instead be injected into our component.

![Dependency inversion principle][7]

Applied to unit tests, this principle helps separate the intention from the implementation. We must strive to inject only those dependencies that have been sufficiently abstracted. That approach is important for ensuring unit tests are not mixed with integration tests.
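A minimal sketch of that idea (Python, with invented names; the pattern is the point, not the API): the component receives its clock through the constructor, so a unit test can inject a fixed, fully abstracted stand-in instead of the real computing environment:

```python
from datetime import datetime

class SystemClock:
    """Production dependency: tied to the real computing environment."""
    def hour_now(self) -> int:
        return datetime.now().hour

class DaylightService:
    def __init__(self, clock):
        # The dependency is injected; the service never instantiates it itself.
        self._clock = clock

    def is_daylight(self) -> bool:
        return 6 <= self._clock.hour_now() < 18

class FixedClock:
    """Test double a unit test injects in place of SystemClock."""
    def __init__(self, hour: int):
        self._hour = hour

    def hour_now(self) -> int:
        return self._hour
```

A unit test builds `DaylightService(FixedClock(7))` and asserts on `is_daylight()`; wiring in `SystemClock` instead would turn the same check into an integration test.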
### Testing the tests

Finally, even if we manage to produce well-structured unit tests that fulfill the FIRST principles, it does not guarantee that we have delivered a solid solution. TDD best practices rely on the proper sequence of events when building components/services; we are always and invariably expected to provide a description of our expectations (supplied in the micro-examples). Only after those expectations are described in the unit test can we move on to writing the implementation code. However, two unwanted side effects can, and often do, happen while writing implementation code:

1. Implemented code enables the unit tests to pass, but it is written in a convoluted way, using unnecessarily complex logic
2. Implemented code gets tagged on AFTER the unit tests have been written

In the first case, even if all unit tests pass, mutation testing uncovers that some mutants have survived. As I explained in _[Mutation testing by example: Evolving from fragile TDD][8]_, that is an extremely undesirable situation because it means that the solution is unnecessarily complex and, therefore, unmaintainable.

In the second case, all unit tests are guaranteed to pass, but a potentially large portion of the codebase consists of implemented code that hasn't been described anywhere. This means we are dealing with mysterious code. In the best-case scenario, we could treat that mysterious code as deadwood and safely remove it. But more likely than not, removing this not-described, implemented code will cause some serious breakages. And such breakages indicate that our solution is not well engineered.

### Conclusion

TDD best practices stem from the time-tested methodology called [extreme programming][9] (XP for short). One of the cornerstones of XP is based on the **three C's**:

1. **Card:** A small card briefly specifies the intent (e.g., "Review customer request").
2. **Conversation:** The card becomes a ticket to conversation. The whole team gets together and talks about "Review customer request." What does that mean? Do we have enough information/knowledge to ship the "review customer request" functionality in this increment? If not, how do we further slice this card?
3. **Concrete confirmation examples:** This includes all the specific values plugged in (e.g., concrete names, numeric values, specific dates, whatever else is pertinent to the use case) plus all values expected as an output of the processing.

Starting from such micro-examples, we write unit tests. We watch unit tests fail, then make them pass. And while doing that, we observe and respect the best software engineering practices: the **FIRST** principles, the **SOLID** principles, and the mutation testing discipline (i.e., kill all surviving mutants).

This ensures that our components and services are delivered with solid quality built in. And what is the measure of that quality? Simple—**the cost of change**. If the delivered code is costly to change, it is of shoddy quality. Very high-quality code is structured so well that it is simple and inexpensive to change and, at the same time, does not incur any change-management risks.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/test-driven-development-best-practices

作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
[2]: https://opensource.com/users/alex-bunardzic
[3]: https://opensource.com/sites/default/files/uploads/single-responsibility.png (Sign illustrating single-responsibility principle)
[4]: https://opensource.com/sites/default/files/uploads/openclosed_cc.jpg (Open-closed principle)
[5]: https://opensource.com/sites/default/files/uploads/liskov_substitution_cc.jpg (Liskov substitution principle)
[6]: https://opensource.com/sites/default/files/uploads/interface_segregation_cc.jpg (Interface segregation principle)
[7]: https://opensource.com/sites/default/files/uploads/dependency_inversion_cc.jpg (Dependency inversion principle)
[8]: https://opensource.com/article/19/9/mutation-testing-example-definition
[9]: https://en.wikipedia.org/wiki/Extreme_programming
@ -0,0 +1,154 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building container images with the ansible-bender tool)
[#]: via: (https://opensource.com/article/19/10/building-container-images-ansible)
[#]: author: (Tomas Tomecek https://opensource.com/users/tomastomecek)

Building container images with the ansible-bender tool
======
Learn how to use Ansible to execute commands in a container.
![Blocks for building][1]

Containers and [Ansible][2] blend together so nicely—from management and orchestration to provisioning and building. In this article, we'll focus on the building part.

If you are familiar with Ansible, you know that you can write a series of tasks, and the **ansible-playbook** command will execute them for you. Did you know that you can also execute such commands in a container environment and get the same result as if you'd written a Dockerfile and run **podman build**?
Here is an example:

```
- name: Serve our file using httpd
  hosts: all
  tasks:
    - name: Install httpd
      package:
        name: httpd
        state: installed
    - name: Copy our file to httpd’s webroot
      copy:
        src: our-file.txt
        dest: /var/www/html/
```

You could execute this playbook locally on your web server or in a container, and it would work—as long as you remember to create the **our-file.txt** file first.

But something is missing. You need to start (and configure) httpd in order for your file to be served. This is a difference between container builds and infrastructure provisioning: When building an image, you just prepare the content; running the container is a different task. On the other hand, you can attach metadata to the container image that tells the command to run by default.

Here's where a tool would help. How about trying **ansible-bender**?

```
$ ansible-bender build the-playbook.yaml fedora:30 our-httpd
```

This command uses the ansible-bender tool to execute the playbook against a Fedora 30 container image and names the resulting container image **our-httpd**.

But when you run that container, it won't start httpd because it doesn't know how to do it. You can fix this by adding some metadata to the playbook:
```
- name: Serve our file using httpd
  hosts: all
  vars:
    ansible_bender:
      base_image: fedora:30
      target_image:
        name: our-httpd
        cmd: httpd -DFOREGROUND
  tasks:
    - name: Install httpd
      package:
        name: httpd
        state: installed
    - name: Listen on all network interfaces.
      lineinfile:
        path: /etc/httpd/conf/httpd.conf
        regexp: '^Listen '
        line: Listen 0.0.0.0:80
    - name: Copy our file to httpd’s webroot
      copy:
        src: our-file.txt
        dest: /var/www/html
```

Now you can build the image (from here on, please run all the commands as root—currently, Buildah and Podman won't create dedicated networks for rootless containers):
```
# ansible-bender build the-playbook.yaml
PLAY [Serve our file using httpd] ****************************************************

TASK [Gathering Facts] ***************************************************************
ok: [our-httpd-20191004-131941266141-cont]

TASK [Install httpd] *****************************************************************
loaded from cache: 'f053578ed2d47581307e9ba3f64f4b4da945579a082c6f99bd797635e62befd0'
skipping: [our-httpd-20191004-131941266141-cont]

TASK [Listen on all network interfaces.] *********************************************
changed: [our-httpd-20191004-131941266141-cont]

TASK [Copy our file to httpd’s webroot] **********************************************
changed: [our-httpd-20191004-131941266141-cont]

PLAY RECAP ***************************************************************************
our-httpd-20191004-131941266141-cont : ok=3 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

Getting image source signatures
Copying blob sha256:4650c04b851c62897e9c02c6041a0e3127f8253fafa3a09642552a8e77c044c8
Copying blob sha256:87b740bba596291af8e9d6d91e30a01d5eba9dd815b55895b8705a2acc3a825e
Copying blob sha256:82c21252bd87532e93e77498e3767ac2617aa9e578e32e4de09e87156b9189a0
Copying config sha256:44c6dc6dda1afe28892400c825de1c987c4641fd44fa5919a44cf0a94f58949f
Writing manifest to image destination
Storing signatures
44c6dc6dda1afe28892400c825de1c987c4641fd44fa5919a44cf0a94f58949f
Image 'our-httpd' was built successfully \o/
```

The image is built, and it's time to run the container:

```
# podman run our-httpd
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.88.2.106. Set the 'ServerName' directive globally to suppress this message
```

Is your file being served? First, find out the IP of your container:

```
# podman inspect -f '{{ .NetworkSettings.IPAddress }}' 7418570ba5a0
10.88.2.106
```

And now you can check:

```
$ curl http://10.88.2.106/our-file.txt
Ansible is ❤
```

What were the contents of your file?

This was just an introduction to building container images with Ansible. If you want to learn more about what ansible-bender can do, please check it out on [GitHub][3]. Happy building!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/building-container-images-ansible

作者:[Tomas Tomecek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/tomastomecek
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/blocks_building.png?itok=eMOT-ire (Blocks for building)
[2]: https://www.ansible.com/
[3]: https://github.com/ansible-community/ansible-bender
@ -0,0 +1,263 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to dual boot Windows 10 and Debian 10)
[#]: via: (https://www.linuxtechi.com/dual-boot-windows-10-debian-10/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)

How to dual boot Windows 10 and Debian 10
======

So, you finally made the bold decision to try out **Linux** after much convincing. However, you do not want to let go of your Windows 10 operating system yet, as you will still be needing it while you learn the ropes on Linux. Thankfully, you can easily have a dual boot setup that allows you to switch to either of the operating systems upon booting your system. In this guide, you will learn how to **dual boot Windows 10 alongside Debian 10**.

[![How-to-dual-boot-Windows-and-Debian10][1]][2]

### Prerequisites

Before you get started, ensure you have the following:

* A bootable USB or DVD of Debian 10
* A fast and stable internet connection (for installation updates & third-party applications)

Additionally, it is worth paying attention to how your system boots (UEFI or Legacy) and ensuring that both operating systems boot using the same boot mode.
### Step 1: Create a free partition on your hard drive

To start off, you need to create a free partition on your hard drive. This is the partition where Debian will be installed during the installation process. To achieve this, you will invoke the disk management utility as shown:

Press **Windows Key + R** to launch the Run dialogue. Next, type **diskmgmt.msc** and hit **ENTER**

[![Launch-Run-dialogue][1]][3]

This launches the **disk management** window displaying all the drives existing on your Windows system.

[![Disk-management][1]][4]

Next, you need to create free space for the Debian installation. To do this, you need to shrink a partition from one of the volumes and create a new unallocated partition. In this case, I will create a **30 GB** partition from Volume D.

To shrink a volume, right-click on it and select the ‘**shrink**’ option

[![Shrink-volume][1]][5]

In the pop-up dialogue, define the amount of space that you want to shrink. Remember, this will be the disk space on which Debian 10 will be installed. In my case, I selected **30000 MB (approximately 30 GB)**. Once done, click on ‘**Shrink**’.

[![Shrink-space][1]][6]

After the shrinking operation completes, you should have an unallocated partition as shown:

[![Unallocated-partition][1]][7]

Perfect! We are now good to go and ready to begin the installation process.
|
||||
|
||||
### Step 2: Begin the installation of Debian 10
With the free partition created, plug in your bootable USB drive or insert the DVD installation medium into your PC and reboot your system. Be sure to change the **boot order** in the **BIOS** setup by pressing the appropriate function key (usually **F9**, **F10** or **F12**, depending on the vendor). This is crucial so that the PC boots into your installation medium. Save the BIOS settings and reboot.

A new GRUB menu will be displayed as shown below. Click on '**Graphical install**'.

[![Graphical-Install-Debian10][1]][8]

In the next step, select your **preferred language** and click '**Continue**'.

[![Select-Language-Debian10][1]][9]

Next, select your **location** and click '**Continue**'. Based on this location, the time zone will automatically be selected for you. If you cannot find your location, scroll down and click on '**other**', then select it from the list.
[![Select-location-Debain10][1]][10]

Next, select your **keyboard** layout.

[![Configure-Keyboard-layout-Debain10][1]][11]

In the next step, specify your system's **hostname** and click '**Continue**'.

[![Set-hostname-Debian10][1]][12]

Next, specify the **domain name**. If you are not in a domain environment, simply click on the '**Continue**' button.

[![Set-domain-name-Debian10][1]][13]

In the next step, specify the **root password** as shown and click '**Continue**'.

[![Set-root-Password-Debian10][1]][14]

In the next step, specify the full name of the user for the account and click '**Continue**'.

[![Specify-fullname-user-debain10][1]][15]

Then set the account name by specifying the **username** associated with the account.

[![Specify-username-Debian10][1]][16]

Next, specify the user's password as shown and click '**Continue**'.

[![Specify-user-password-Debian10][1]][17]

Next, specify your **timezone**.

[![Configure-timezone-Debian10][1]][18]

At this point, you need to create partitions for your Debian 10 installation. If you are an inexperienced user, click on '**Use the largest continuous free space**' and click '**Continue**'.
[![Use-largest-continuous-free-space-debian10][1]][19]

However, if you are more knowledgeable about creating partitions, select the '**Manual**' option and click '**Continue**'.

[![Select-Manual-Debain10][1]][20]

Thereafter, select the partition labeled '**FREE SPACE**' and click '**Continue**'. Next, click on '**Create a new partition**'.

[![Create-new-partition-Debain10][1]][21]

In the next window, first define the size of the swap space. In my case, I specified **2 GB**. Click '**Continue**'.

[![Define-swap-space-debian10][1]][22]

Next, click on '**Primary**' on the next screen and click '**Continue**'.

[![Partition-Disks-Primary-Debain10][1]][23]

Select the partition to **start at the beginning** and click '**Continue**'.

[![Start-at-the-beginning-Debain10][1]][24]

Next, click on '**Ext4 journaling file system**' and click '**Continue**'.

[![Select-Ext4-Journaling-system-debain10][1]][25]

On the next window, select '**swap**' and click '**Continue**'.

[![Select-swap-debain10][1]][26]

Next, click on '**Done setting up the partition**' and click '**Continue**'.

[![Done-setting-partition-debian10][1]][27]

Back on the **Partition disks** page, click on '**FREE SPACE**' and click '**Continue**'.

[![Click-Free-space-Debain10][1]][28]

To make your life easy, select '**Automatically partition the free space**' and click '**Continue**'.

[![Automatically-partition-free-space-Debain10][1]][29]

Next, click on '**All files in one partition (recommended for new users)**'.

[![All-files-in-one-partition-debian10][1]][30]

Finally, click on '**Finish partitioning and write changes to disk**' and click '**Continue**'.
[![Finish-partitioning-write-changes-to-disk][1]][31]

Confirm that you want to write changes to disk and click '**Yes**'.

[![Write-changes-to-disk-Yes-Debian10][1]][32]

Thereafter, the installer will begin installing all the requisite software packages.

When asked if you want to scan another CD, select '**No**' and click '**Continue**'.

[![Scan-another-CD-No-Debain10][1]][33]

Next, select the country of the Debian archive mirror closest to you and click '**Continue**'.

[![Debian-archive-mirror-country][1]][34]

Next, select the **Debian mirror** that is most preferable to you and click '**Continue**'.

[![Select-Debian-archive-mirror][1]][35]

If you plan on using a proxy server, enter its details as shown below; otherwise, leave it blank and click '**Continue**'.

[![Enter-proxy-details-debian10][1]][36]

As the installation proceeds, you will be asked if you would like to participate in a **package usage survey**. You can select either option and click '**Continue**'. In my case, I selected '**No**'.

[![Participate-in-survey-debain10][1]][37]

Next, select the packages you need in the **software selection** window and click '**Continue**'.

[![Software-selection-debian10][1]][38]

The installer will continue installing the selected packages. At this point, you can take a coffee break while the installation goes on.

You will be prompted whether to install the GRUB **bootloader** to the **Master Boot Record (MBR)**. Click '**Yes**' and then '**Continue**'.

[![Install-grub-bootloader-debian10][1]][39]

Next, select the hard drive on which you want to install **GRUB** and click '**Continue**'.

[![Select-hard-drive-install-grub-Debian10][1]][40]

Finally, the installation will complete. Go ahead and click on the '**Continue**' button.

[![Installation-complete-reboot-debian10][1]][41]

You should now have a GRUB menu with both **Windows** and **Debian** listed. To boot into Debian, scroll to it and hit ENTER. Thereafter, you will be prompted with a login screen. Enter your details and hit ENTER.

[![Debian10-log-in][1]][42]

And voila! You now have a fresh copy of Debian 10 in a dual-boot setup with Windows 10.

[![Debian10-Buster-Details][1]][43]

--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/dual-boot-windows-10-debian-10/

作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/How-to-dual-boot-Windows-and-Debian10.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Launch-Run-dialogue.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Disk-management.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Shrink-volume.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Shrink-space.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Unallocated-partition.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Graphical-Install-Debian10.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Language-Debian10.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-location-Debain10.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Configure-Keyboard-layout-Debain10.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-hostname-Debian10.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-domain-name-Debian10.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-root-Password-Debian10.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-fullname-user-debain10.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-username-Debian10.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-user-password-Debian10.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Configure-timezone-Debian10.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Use-largest-continuous-free-space-debian10.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Manual-Debain10.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Create-new-partition-Debain10.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Define-swap-space-debian10.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Partition-Disks-Primary-Debain10.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Start-at-the-beginning-Debain10.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Ext4-Journaling-system-debain10.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-swap-debain10.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Done-setting-partition-debian10.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Click-Free-space-Debain10.jpg
[29]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Automatically-partition-free-space-Debain10.jpg
[30]: https://www.linuxtechi.com/wp-content/uploads/2019/10/All-files-in-one-partition-debian10.jpg
[31]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Finish-partitioning-write-changes-to-disk.jpg
[32]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Write-changes-to-disk-Yes-Debian10.jpg
[33]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Scan-another-CD-No-Debain10.jpg
[34]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian-archive-mirror-country.jpg
[35]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Debian-archive-mirror.jpg
[36]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Enter-proxy-details-debian10.jpg
[37]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Participate-in-survey-debain10.jpg
[38]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Software-selection-debian10.jpg
[39]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Install-grub-bootloader-debian10.jpg
[40]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-hard-drive-install-grub-Debian10.jpg
[41]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Installation-complete-reboot-debian10.jpg
[42]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian10-log-in.jpg
[43]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian10-Buster-Details.jpg
sources/tech/20191023 How to program with Bash- Loops.md (new file, 352 lines)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to program with Bash: Loops)
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-3)
[#]: author: (David Both https://opensource.com/users/dboth)

How to program with Bash: Loops
======

Learn how to use loops for performing iterative operations, in the final article in this three-part series on programming with Bash.

![arrows cycle symbol for failing faster][1]
Bash is a powerful programming language, one perfectly designed for use on the command line and in shell scripts. This three-part series, based on my [three-volume Linux self-study course][2], explores using Bash as a programming language on the command-line interface (CLI).

The [first article][3] in this series explored some simple command-line programming with Bash, including using variables and control operators. The [second article][4] looked into the types of file, string, numeric, and miscellaneous logical operators that provide execution-flow control logic and different types of shell expansions in Bash. This third (and final) article examines the use of loops for performing various types of iterative operations and ways to control those loops.

### Loops

Every programming language I have ever used has at least a couple of types of loop structures that provide various capabilities to perform repetitive operations. I use the **for** loop quite often, but I also find the **while** and **until** loops useful.

#### for loops

Bash's implementation of the **for** command is, in my opinion, a bit more flexible than most because it can handle non-numeric values; in contrast, for example, the standard C-language **for** loop can deal only with numeric values.
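That said, Bash does also offer a C-style numeric **for** loop via the double-parenthesis arithmetic syntax, which is handy when all you need is a counter; a quick sketch:

```shell
# C-style for loop in Bash: initialize, test, and increment inside (( ))
for ((i = 0; i < 5; i++)); do
    echo "Counter is $i"
done
```

This prints "Counter is 0" through "Counter is 4", one per line.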
The basic structure of the Bash version of the **for** command is simple:

```
for Var in list1 ; do list2 ; done
```

This translates to: "For each value in list1, set **$Var** to that value and then perform the program statements in list2 using that value; when all of the values in list1 have been used, it is finished, so exit the loop." The values in list1 can be a simple, explicit string of values, or they can be the result of a command substitution (described in the second article in the series). I use this construct frequently.
To try it, ensure that **~/testdir** is still the present working directory (PWD). Clean up the directory, then look at a trivial example of the **for** loop starting with an explicit list of values. This list is a mix of alphanumeric values—but do not forget that all variables are strings and can be treated as such.

```
[student@studentvm1 testdir]$ rm *
[student@studentvm1 testdir]$ for I in a b c d 1 2 3 4 ; do echo $I ; done
a
b
c
d
1
2
3
4
```

Here is a bit more useful version with a more meaningful variable name:

```
[student@studentvm1 testdir]$ for Dept in "Human Resources" Sales Finance "Information Technology" Engineering Administration Research ; do echo "Department $Dept" ; done
Department Human Resources
Department Sales
Department Finance
Department Information Technology
Department Engineering
Department Administration
Department Research
```

Make some directories (and show some progress information while doing so):

```
[student@studentvm1 testdir]$ for Dept in "Human Resources" Sales Finance "Information Technology" Engineering Administration Research ; do echo "Working on Department $Dept" ; mkdir "$Dept" ; done
Working on Department Human Resources
Working on Department Sales
Working on Department Finance
Working on Department Information Technology
Working on Department Engineering
Working on Department Administration
Working on Department Research
[student@studentvm1 testdir]$ ll
total 28
drwxrwxr-x 2 student student 4096 Apr  8 15:45 Administration
drwxrwxr-x 2 student student 4096 Apr  8 15:45 Engineering
drwxrwxr-x 2 student student 4096 Apr  8 15:45 Finance
drwxrwxr-x 2 student student 4096 Apr  8 15:45 'Human Resources'
drwxrwxr-x 2 student student 4096 Apr  8 15:45 'Information Technology'
drwxrwxr-x 2 student student 4096 Apr  8 15:45 Research
drwxrwxr-x 2 student student 4096 Apr  8 15:45 Sales
```
The **$Dept** variable must be enclosed in quotes in the **mkdir** statement; otherwise, two-part department names (such as "Information Technology") will be treated as two separate departments. That highlights a best practice I like to follow: all file and directory names should be a single word. Although most modern operating systems can deal with spaces in names, it takes extra work for sysadmins to ensure that those special cases are considered in scripts and CLI programs. (They almost certainly should be considered, even if they're annoying, because you never know what files you will have.)
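The word-splitting problem is easy to demonstrate for yourself. This small sketch, using a throwaway directory, shows what happens with and without the quotes:

```shell
# Demonstrate why "$Dept" must be quoted in mkdir
Demo=$(mktemp -d)            # throwaway directory for the experiment
cd "$Demo"
Dept="Information Technology"
mkdir $Dept                  # unquoted: word-split into TWO directories
mkdir "$Dept"                # quoted: ONE directory whose name contains a space
ls -1                        # lists Information, Technology, and 'Information Technology'
```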
So, delete everything in **~/testdir**—again—and do this one more time:

```
[student@studentvm1 testdir]$ rm -rf * ; ll
total 0
[student@studentvm1 testdir]$ for Dept in Human-Resources Sales Finance Information-Technology Engineering Administration Research ; do echo "Working on Department $Dept" ; mkdir "$Dept" ; done
Working on Department Human-Resources
Working on Department Sales
Working on Department Finance
Working on Department Information-Technology
Working on Department Engineering
Working on Department Administration
Working on Department Research
[student@studentvm1 testdir]$ ll
total 28
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Administration
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Engineering
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Finance
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Human-Resources
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Information-Technology
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Research
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Sales
```
Suppose someone asks for a list of all RPMs on a particular Linux computer and a short description of each. This happened to me when I worked for the State of North Carolina. Since open source was not "approved" for use by state agencies at that time, and I only used Linux on my desktop computer, the pointy-haired bosses (PHBs) needed a list of each piece of software that was installed on my computer so that they could "approve" an exception.

How would you approach that? Here is one way, starting with the knowledge that the **rpm -qa** command provides a complete description of an RPM, including the two items the PHBs want: the software name and a brief summary.

Build up to the final result one step at a time. First, list all RPMs:
```
[student@studentvm1 testdir]$ rpm -qa
perl-HTTP-Message-6.18-3.fc29.noarch
perl-IO-1.39-427.fc29.x86_64
perl-Math-Complex-1.59-429.fc29.noarch
lua-5.3.5-2.fc29.x86_64
java-11-openjdk-headless-11.0.ea.28-2.fc29.x86_64
util-linux-2.32.1-1.fc29.x86_64
libreport-fedora-2.9.7-1.fc29.x86_64
rpcbind-1.2.5-0.fc29.x86_64
libsss_sudo-2.0.0-5.fc29.x86_64
libfontenc-1.1.3-9.fc29.x86_64
<snip>
```

Add the **sort** and **uniq** commands to sort the list and print the unique ones (since it's possible that some RPMs with identical names are installed):

```
[student@studentvm1 testdir]$ rpm -qa | sort | uniq
a2ps-4.14-39.fc29.x86_64
aajohan-comfortaa-fonts-3.001-3.fc29.noarch
abattis-cantarell-fonts-0.111-1.fc29.noarch
abiword-3.0.2-13.fc29.x86_64
abrt-2.11.0-1.fc29.x86_64
abrt-addon-ccpp-2.11.0-1.fc29.x86_64
abrt-addon-coredump-helper-2.11.0-1.fc29.x86_64
abrt-addon-kerneloops-2.11.0-1.fc29.x86_64
abrt-addon-pstoreoops-2.11.0-1.fc29.x86_64
abrt-addon-vmcore-2.11.0-1.fc29.x86_64
<snip>
```
Since this gives the correct list of RPMs you want to look at, you can use this as the input list to a loop that will print all the details of each RPM:

```
[student@studentvm1 testdir]$ for RPM in `rpm -qa | sort | uniq` ; do rpm -qi $RPM ; done
```

This code produces way more data than you want. Note that the loop is complete. The next step is to extract only the information the PHBs requested. So, add an **egrep** command, which is used to select **^Name** or **^Summary**. The caret (**^**) anchors the match to the beginning of the line; thus, any line with Name or Summary at the beginning of the line is displayed.
```
[student@studentvm1 testdir]$ for RPM in `rpm -qa | sort | uniq` ; do rpm -qi $RPM ; done | egrep -i "^Name|^Summary"
Name        : a2ps
Summary     : Converts text and other types of files to PostScript
Name        : aajohan-comfortaa-fonts
Summary     : Modern style true type font
Name        : abattis-cantarell-fonts
Summary     : Humanist sans serif font
Name        : abiword
Summary     : Word processing program
Name        : abrt
Summary     : Automatic bug detection and reporting tool
<snip>
```
You can try **grep** instead of **egrep** in the command above, but it will not work: plain **grep** uses basic regular expressions, in which an unescaped **|** is not an alternation operator. You could also pipe the output of this command through the **less** filter to explore the results. The final command sequence looks like this:

```
[student@studentvm1 testdir]$ for RPM in `rpm -qa | sort | uniq` ; do rpm -qi $RPM ; done | egrep -i "^Name|^Summary" > RPM-summary.txt
```
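As an aside, **egrep** is simply shorthand for **grep -E** (extended regular expressions), and many distributions now mark **egrep** as deprecated; the same filter can be written with **grep -E**. A small self-contained sketch of the difference, using sample lines instead of real rpm output:

```shell
# With -E, the | in the pattern means alternation; plain grep would treat it literally
printf 'Name : abiword\nVersion : 3.0.2\nSummary : Word processing program\n' \
    | grep -E -i "^Name|^Summary"
```

Only the Name and Summary lines survive the filter; the Version line is dropped.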
This command-line program uses pipelines, redirection, and a **for** loop—all on a single line. It redirects the output of your little CLI program to a file that can be used in an email or as input for other purposes.

This process of building up the program one step at a time allows you to see the results of each step and ensure that the program is working as you expect and provides the desired results.

From this exercise, the PHBs received a list of over 1,900 separate RPM packages. I seriously doubt that anyone read that list. But I gave them exactly what they asked for, and I never heard another word from them about it.
### Other loops

There are two more types of loop structures available in Bash: the **while** and **until** structures, which are very similar to each other in both syntax and function. The basic syntax of these loop structures is simple:

```
while [ expression ] ; do list ; done
```

and

```
until [ expression ] ; do list ; done
```

The logic of the first reads: "While the expression evaluates as true, execute the list of program statements. When the expression evaluates as false, exit from the loop." And the second: "Until the expression evaluates as true, execute the list of program statements. When the expression evaluates as true, exit from the loop."
#### While loop

The **while** loop is used to execute a series of program statements while (so long as) the logical expression evaluates as true. Your PWD should still be **~/testdir**.

The simplest form of the **while** loop is one that runs forever. The following form uses the **true** statement to always generate a "true" return code. You could also use a simple "1"—and that would work just the same—but this illustrates the use of the **true** statement:

```
[student@studentvm1 testdir]$ X=0 ; while [ true ] ; do echo $X ; X=$((X+1)) ; done | head
0
1
2
3
4
5
6
7
8
9
[student@studentvm1 testdir]$
```
This CLI program should make more sense now that you have studied its parts. First, it sets **$X** to zero in case it has a value left over from a previous program or CLI command. Then, since the logical expression **[ true ]** always evaluates as true, the list of program instructions between **do** and **done** is executed forever—or until you press **Ctrl+C** or otherwise send a signal 2 to the program. Those instructions are an arithmetic expansion that prints the current value of **$X** and then increments it by one.

One of the tenets of [_The Linux Philosophy for Sysadmins_][5] is to strive for elegance, and one way to achieve elegance is simplicity. You can simplify this program by using the variable increment operator, **++**. In the first instance, the current value of the variable is printed, and then the variable is incremented. This is indicated by placing the **++** operator after the variable:
```
[student@studentvm1 ~]$ X=0 ; while [ true ] ; do echo $((X++)) ; done | head
0
1
2
3
4
5
6
7
8
9
```
Now delete **| head** from the end of the program and run it again.

In this version, the variable is incremented before its value is printed. This is specified by placing the **++** operator before the variable. Can you see the difference?

```
[student@studentvm1 ~]$ X=0 ; while [ true ] ; do echo $((++X)) ; done | head
1
2
3
4
5
6
7
8
9
```
You have reduced two statements into a single one that prints the value of the variable and increments that value. There is also a decrement operator, **\--**.

You need a method for stopping the loop at a specific number. To accomplish that, change the true expression to an actual numeric evaluation expression. Have the program loop to 5 and stop. In the example code below, you can see that **-le** is the logical numeric operator for "less than or equal to." This means: "So long as **$X** is less than or equal to 5, the loop will continue. When **$X** increments to 6, the loop terminates."
```
[student@studentvm1 ~]$ X=0 ; while [ $X -le 5 ] ; do echo $((X++)) ; done
0
1
2
3
4
5
[student@studentvm1 ~]$
```
#### Until loop

The **until** command is very much like the **while** command. The difference is that it will continue to loop until the logical expression evaluates as "true." Look at the simplest form of this construct:

```
[student@studentvm1 ~]$ X=0 ; until false ; do echo $((X++)) ; done | head
0
1
2
3
4
5
6
7
8
9
[student@studentvm1 ~]$
```
It uses a logical comparison to count to a specific value:

```
[student@studentvm1 ~]$ X=0 ; until [ $X -eq 5 ] ; do echo $((X++)) ; done
0
1
2
3
4
[student@studentvm1 ~]$ X=0 ; until [ $X -eq 5 ] ; do echo $((++X)) ; done
1
2
3
4
5
[student@studentvm1 ~]$
```
### Summary

This series has explored many powerful tools for building Bash command-line programs and shell scripts. But it has barely scratched the surface on the many interesting things you can do with Bash; the rest is up to you.

I have discovered that the best way to learn Bash programming is to do it. Find a simple project that requires multiple Bash commands and make a CLI program out of them. Sysadmins do many tasks that lend themselves to CLI programming, so I am sure that you will easily find tasks to automate.

Many years ago, despite being familiar with other shell languages and Perl, I made the decision to use Bash for all of my sysadmin automation tasks. I have discovered that—sometimes with a bit of searching—I have been able to use Bash to accomplish everything I need.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/programming-bash-part-3

作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
[2]: http://www.both.org/?page_id=1183
[3]: https://opensource.com/article/19/10/programming-bash-part-1
[4]: https://opensource.com/article/19/10/programming-bash-part-2
[5]: https://www.apress.com/us/book/9781484237298
sources/tech/20191023 Using SSH port forwarding on Fedora.md (new file, 106 lines)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using SSH port forwarding on Fedora)
[#]: via: (https://fedoramagazine.org/using-ssh-port-forwarding-on-fedora/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)

Using SSH port forwarding on Fedora
======

![][1]
You may already be familiar with using the _[ssh][2]_ [command][2] to access a remote system. The protocol behind _ssh_ allows terminal input and output to flow through a [secure channel][3]. But did you know that you can also use _ssh_ to send and receive other data securely? One way is to use _port forwarding_, which allows you to connect network ports securely while conducting your _ssh_ session. This article shows you how it works.

### About ports

A standard Linux system has a set of network ports already assigned, from 0 to 65535. Your system reserves ports up to 1023 for system use. On many systems you can't elect to use one of these low-numbered ports. Quite a few ports are commonly expected to run specific services. You can find these defined in your system's _/etc/services_ file.
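For example, you can look up the conventional port for a service directly from that file; a quick sketch (assuming a standard _/etc/services_):

```shell
# Show the well-known port assignments recorded for ssh in /etc/services
grep -w '^ssh' /etc/services
```

On a typical system this prints lines such as `ssh 22/tcp` and `ssh 22/udp`.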
You can think of a network port like a physical port or jack to which you can connect a cable. That port may connect to some sort of service on the system, like wiring behind that physical jack. An example is the Apache web server (also known as _httpd_). The web server usually claims port 80 on the host system for HTTP non-secure connections, and 443 for HTTPS secure connections.

When you connect to a remote system, such as with a web browser, you are also "wiring" your browser to a port on your host. This is usually a random high port number, such as 54001. The port on your host connects to the port on the remote host, such as 443 to reach its secure web server.

So why use port forwarding when you have so many ports available? Here are a couple of common cases in the life of a web developer.
|
||||
|
||||
### Local port forwarding
|
||||
|
||||
Imagine that you are doing web development on a remote system called _remote.example.com_. You usually reach this system via _ssh_ but it’s behind a firewall that allows very little additional access, and blocks most other ports. To try out your web app, it’s helpful to be able to use your web browser to point to the remote system. But you can’t reach it via the normal method of typing the URL in your browser, thanks to that pesky firewall.
|
||||
|
||||
Local forwarding allows you to tunnel a port available via the remote system through your _ssh_ connection. The port appears as a local port on your system (thus “local forwarding.”)
|
||||
|
||||
Let’s say your web app is running on port 8000 on the _remote.example.com_ box. To locally forward that system’s port 8000 to your system’s port 8000, use the _-L_ option with _ssh_ when you start your session:
|
||||
|
||||
```
|
||||
$ ssh -L 8000:localhost:8000 remote.example.com
|
||||
```
|
||||
|
||||
Wait, why did we use _localhost_ as the target for forwarding? It’s because from the perspective of _remote.example.com_, you’re asking the host to use its own port 8000. (Recall that any host usually can refer to itself as _localhost_ to connect to itself via a network connection.) That port now connects to your system’s port 8000. Once the _ssh_ session is ready, keep it open, and you can type _<http://localhost:8000>_ in your browser to see your web app. The traffic between systems now travels securely over an _ssh_ tunnel!
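That parenthetical about _localhost_ is easy to verify: the name is simply an alias for the loopback address in your resolver configuration (output varies slightly by system):

```shell
# Resolve the name "localhost" — it maps to the loopback address
# (127.0.0.1 for IPv4, or ::1 for IPv6).
getent hosts localhost
```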
If you have a sharp eye, you may have noticed something. What if we used a different hostname than _localhost_ for _remote.example.com_ to forward? If it can reach a port on another system on its network, it usually can forward that port just as easily. For example, say you wanted to reach a MariaDB or MySQL service on the _db.example.com_ box also on the remote network. This service typically runs on port 3306. So you could forward it with this command, even if you can’t _ssh_ to the actual _db.example.com_ host:

```
$ ssh -L 3306:db.example.com:3306 remote.example.com
```

Now you can run MariaDB commands against your _localhost_ and you’re actually using the _db.example.com_ box.

### Remote port forwarding

Remote forwarding lets you do things the opposite way. Imagine you’re designing a web app for a friend at the office, and want to show them your work. Unfortunately, though, you’re working in a coffee shop, and because of the network setup, they can’t reach your laptop via a network connection. However, you both use the _remote.example.com_ system at the office and you can still log in there. Your web app seems to be running well on port 5000 locally.

Remote port forwarding lets you tunnel a port from your local system through your _ssh_ connection, and make it available on the remote system. Just use the _-R_ option when you start your _ssh_ session:

```
$ ssh -R 6000:localhost:5000 remote.example.com
```

Now when your friend inside the corporate firewall runs their browser, they can point it at _<http://remote.example.com:6000>_ and see your work. And as in the local port forwarding example, the communications travel securely over your _ssh_ session.

By default the _sshd_ daemon running on a host is set so that **only** that host can connect to its remote forwarded ports. Let’s say your friend wanted to be able to let people on other _example.com_ corporate hosts see your work, and they weren’t on _remote.example.com_ itself. You’d need the owner of the _remote.example.com_ host to add **one** of these options to _/etc/ssh/sshd_config_ on that box:

```
GatewayPorts yes # OR
GatewayPorts clientspecified
```

The first option means remote forwarded ports are available on all the network interfaces on _remote.example.com_. The second means that the client who sets up the tunnel gets to choose the address. This option is set to **no** by default.

With this option, you as the _ssh_ client must still specify the interfaces on which the forwarded port on your side can be shared. Do this by adding a network specification before the local port. There are several ways to do this, including the following:

```
$ ssh -R *:6000:localhost:5000 remote.example.com # all networks
$ ssh -R 0.0.0.0:6000:localhost:5000 remote.example.com # all networks
$ ssh -R 192.168.1.15:6000:localhost:5000 remote.example.com # single network
$ ssh -R remote.example.com:6000:localhost:5000 remote.example.com # single network
```

### Other notes

Notice that the port numbers need not be the same on local and remote systems. In fact, at times you may not even be able to use the same port. For instance, normal users may not forward onto a system port in a default setup.
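On Linux you can see exactly where those restricted “system ports” end, since the kernel exposes the threshold (this is a side check; the file requires a reasonably recent kernel):

```shell
# Ports below this number require privileges (root, or a capability such
# as CAP_NET_BIND_SERVICE) to bind. The conventional default is 1024.
cat /proc/sys/net/ipv4/ip_unprivileged_port_start
```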
In addition, it’s possible to restrict forwarding on a host. This might be important to you if you need tighter security on a network-connected host. The _PermitOpen_ option for the _sshd_ daemon controls whether, and which, ports are available for TCP forwarding. The default setting is **any**, which allows all the examples above to work. To disallow any port forwarding, choose **none**, or choose only a specific **host:port** setting to permit. For more information, search for _PermitOpen_ in the manual page for _sshd_ daemon configuration:

```
$ man sshd_config
```

Finally, remember port forwarding only happens as long as the controlling _ssh_ session is open. If you need to keep the forwarding active for a long period, try running the session in the background using the _-N_ option. Make sure your console is locked to prevent tampering while you’re away from it.
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/using-ssh-port-forwarding-on-fedora/

作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/ssh-port-forwarding-816x345.jpg
[2]: https://en.wikipedia.org/wiki/Secure_Shell
[3]: https://fedoramagazine.org/open-source-ssh-clients/
@ -0,0 +1,116 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source CMS Ghost 3.0 Released with New features for Publishers)
[#]: via: (https://itsfoss.com/ghost-3-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Open Source CMS Ghost 3.0 Released with New features for Publishers
======

[Ghost][1] is a free and open source content management system (CMS). If you are not aware of the term, a CMS is software that lets you build a website focused primarily on creating content, without knowledge of HTML and other web-related technologies.

Ghost is in fact one of the [best open source CMS][2] options out there. Its main focus is on creating lightweight, fast-loading, and good-looking blogs.

It has a modern, intuitive editor with built-in SEO features. You also get native desktop apps (including for Linux) and mobile apps. If you like the terminal, you can also use the CLI tools it provides.

Let’s see what new features Ghost 3.0 brings.

### New Features in Ghost 3.0

![][3]

I’m usually intrigued by open source CMS solutions – so after reading the official announcement post, I went ahead and gave it a try by installing a new Ghost instance via a [Digital Ocean cloud server][4].
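If you’d rather poke at Ghost locally than spin up a cloud server, a minimal Docker Compose sketch works too (this assumes you have Docker; the official `ghost` image and its `url` variable are documented upstream, while the service name and port mapping here are just illustrative):

```yaml
version: "3"
services:
  ghost:
    image: ghost:3-alpine      # a 3.x tag, matching the release discussed here
    ports:
      - "2368:2368"            # Ghost listens on 2368 by default
    environment:
      url: http://localhost:2368
```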
I was really impressed with the improvements they’ve made to the features and the UI compared to the previous version.

Here, I shall list the key changes/additions worth mentioning.

#### Bookmark Cards

![][5]

In addition to all the subtle changes to the editor, it now lets you add a beautiful bookmark card by just entering the URL.

If you have used WordPress – you may have noticed that you need a plugin in order to add a card like that – so it is definitely a useful addition in Ghost 3.0.

#### Improved WordPress Migration Plugin

I haven’t tested this in particular, but they have updated their WordPress migration plugin to let you easily clone your posts (with images) to Ghost CMS.

Basically, with the plugin, you will be able to create an archive (with images) and import it into Ghost CMS.

#### Responsive Image Galleries & Images

To make the user experience better, they have also updated the image galleries (which are now responsive) to present your picture collection comfortably across all devices.

In addition, the images in posts/pages are now responsive as well.

#### Members & Subscriptions option

![Ghost Subscription Model][6]

Even though the feature is still in the beta phase, it lets you add members and a subscription model for your blog if you choose to make it a premium publication to sustain your business.

With this feature, you can make sure that your blog can only be accessed by the subscribed members, or choose to make it available to the public in addition to the subscription.

#### Stripe: Payment Integration

It supports the Stripe payment gateway by default to help you easily enable subscriptions (or any type of payment) with no additional fee charged by Ghost.

#### New App Integrations

![][7]

You can now integrate a variety of popular applications/services with your blog on Ghost 3.0. This could come in handy to automate a lot of things.

#### Default Theme Improvement

The default theme (design) that comes baked in has improved and now offers a dark mode as well.

You can always choose to create a custom theme as well (if the pre-built themes don’t fit your needs).

#### Other Minor Improvements

In addition to all the key highlights, the visual editor for creating posts/pages has improved as well (with some drag-and-drop capabilities).

I’m sure there are a lot of technical changes as well – which you can check out in their [changelog][8] if you’re interested.

### Ghost is gradually getting good traction

It’s not easy to make your mark in a world dominated by WordPress. But Ghost has gradually formed a dedicated community of publishers around it.

Not only that, their managed hosting service [Ghost Pro][9] now has customers like NASA, Mozilla, and DuckDuckGo.

In the last six years, Ghost has made $5 million in revenue from their Ghost Pro customers. Considering that they are a non-profit organization working on an open source solution, this is indeed an achievement.

This helps them remain independent by avoiding external funding from venture capitalists. The more customers they get for managed Ghost CMS hosting, the more funds go into the development of the free and open source CMS.

Overall, Ghost 3.0 is by far the best upgrade they’ve offered. I’m personally impressed with the features.

If you have websites of your own, what CMS do you use? Have you ever used Ghost? How’s your experience with it? Do share your thoughts in the comment section.
--------------------------------------------------------------------------------

via: https://itsfoss.com/ghost-3-release/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/recommends/ghost/
[2]: https://itsfoss.com/open-source-cms/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-3.jpg?ssl=1
[4]: https://itsfoss.com/recommends/digital-ocean/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-editor-screenshot.png?ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-subscription-model.jpg?resize=800%2C503&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-app-integration.jpg?ssl=1
[8]: https://ghost.org/faq/upgrades/
[9]: https://itsfoss.com/recommends/ghost-pro/
@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use IoT devices to keep children safe?)
[#]: via: (https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/)
[#]: author: (Andrew Carroll https://opensourceforu.com/author/andrew-carroll/)

如何使用物联网设备来确保儿童安全?
======

[![][1]][2]

IoT(物联网)设备正在迅速改变我们的生活。这些设备无处不在,从我们的家庭到其它行业。根据一些预测数据,到 2020 年,将会有 100 亿个 IoT 设备;到 2025 年,该数量将增长到 220 亿。目前,物联网已经在很多领域得到了应用,包括智能家居、工业生产过程、农业甚至医疗保健领域。伴随着如此广泛的应用,物联网显然已经成为近年来的热门话题之一。

多种因素促成了物联网设备在多个学科的爆炸式增长。这其中包括低成本处理器和无线连接的可用性,以及开源平台的信息交流对物联网领域创新的推动。与传统的应用程序开发相比,物联网设备的开发呈指数级增长,因为它的资源是开源的。

在解释如何使用物联网设备来保护儿童之前,必须对物联网技术有基本的了解。

**IoT 设备是什么?**

IoT 设备是指那些在没有人类参与的情况下彼此之间可以通信的设备。因此,许多专家并不将智能手机和计算机视为物联网设备。此外,物联网设备必须能够收集数据,并且能将收集到的数据传送到其他设备或云端进行处理。

然而,在某些领域中,我们需要探索物联网的潜力。儿童往往是脆弱的,他们很容易成为犯罪分子和其他蓄意伤害者的目标。无论在物理世界还是数字世界中,儿童都容易受到犯罪的侵害。因为父母不能始终亲自到场保护孩子,这就是为什么需要监视工具了。

除了适用于儿童的可穿戴设备外,还有许多父母监视应用程序,例如 Xnspy,可实时监控儿童并提供信息的实时更新。这些工具可确保儿童安全。可穿戴设备确保儿童身体上的安全,而家长监控应用可确保儿童的上网安全。

由于越来越多的孩子把时间花费在智能手机上,毫无意外地,他们也就成为诈骗分子的主要目标。此外,由于恋童癖、网络自夸和其他犯罪在网络上的盛行,儿童也有可能成为网络欺凌的目标。

这些解决方案够吗?我们需要找到物联网解决方案,以确保孩子们在网上和线下的安全。在当代,我们如何确保孩子的安全?我们需要提出创新的解决方案。物联网可以帮助保护孩子在学校和家里的安全。

**物联网的潜力**

物联网设备提供的好处很多。举例来说,父母可以远程监控自己的孩子,而又不会显得太霸道。因此,儿童在拥有安全环境的同时,也会有空间和自由让自己变得独立。

而且,父母也不必再为孩子的安全而担忧。物联网设备可以提供 7x24 小时的信息更新。像 Xnspy 之类的监视应用程序在提供有关孩子的智能手机活动信息方面更进了一步。随着物联网设备变得越来越复杂,拥有更长使用寿命的电池只是一个时间问题。诸如位置跟踪器之类的物联网设备可以提供有关孩子下落的准确详细信息,所以父母不必担心。

虽然可穿戴设备已经非常好了,但在确保儿童安全方面,这些通常还远远不够。因此,要为儿童提供安全的环境,我们还需要其他方法。许多事件表明,学校比其他任何公共场所都容易受到攻击。因此,学校需要采取安全措施,以确保儿童和教师的安全。在这一点上,物联网设备可用于检测潜在威胁并采取必要的措施来防止攻击。威胁检测系统包括摄像头。系统一旦检测到威胁,便可以通知当局,如一些执法机构和医院。智能锁等设备可用于封锁学校(包括教室),来保护儿童。除此之外,还可以告知父母其孩子的安全情况,并在出现威胁时立即收到警报。这将需要实施无线技术,例如 Wi-Fi 和传感器。因此,学校需要制定专门用于提供教室安全性的预算。

智能家居可以实现拍手关灯,也可以让你的家庭助手帮你关灯。同样,物联网设备也可用在屋内来保护儿童。在家里,物联网设备(例如摄像头)为父母在照顾孩子时提供 100% 的可见性。当父母不在家里时,可以使用摄像头和其他传感器检测是否发生了可疑活动。其他设备(例如连接到这些传感器的智能锁)可以锁门和窗,以确保孩子们的安全。

同样,可以引入许多物联网解决方案来确保孩子的安全。

**有多好就有多坏**

物联网设备中的传感器会创建大量数据。数据的安全性是至关重要的一个因素。收集的有关孩子的数据如果落入不法分子手中会存在危险。因此,需要采取预防措施。IoT 设备中泄露的任何数据都可用于确定行为模式。因此,必须投资提供不侵犯用户隐私的安全物联网解决方案。

IoT 设备通常连接到 Wi-Fi,用于设备之间传输数据。传输未加密数据的不安全网络会带来某些风险。这样的网络很容易被窃听。黑客可以利用此类网络入口来入侵系统。他们还可以将恶意软件引入系统,从而使系统变得脆弱、易受攻击。此外,对设备和公共网络(例如学校的网络)的网络攻击可能导致数据泄露和私有数据盗用。因此,在实施用于保护儿童的物联网解决方案时,必须有保护网络和物联网设备的总体计划。

物联网设备保护儿童在学校和家里的安全的潜力,还有待更多创新去挖掘。我们需要付出更多努力来保护连接 IoT 设备的网络安全。此外,物联网设备生成的数据可能落入不法分子手中,从而造成更多麻烦。因此,这是物联网安全至关重要的一个领域。
--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/

作者:[Andrew Carroll][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensourceforu.com/author/andrew-carroll/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?resize=696%2C507&ssl=1 (Visual Internet of things_EB May18)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?fit=900%2C656&ssl=1
@ -0,0 +1,161 @@
[#]: collector: (lujun9972)
[#]: translator: (wenwensnow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use GameHub to Manage All Your Linux Games in One Place)
[#]: via: (https://itsfoss.com/gamehub/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

用 GameHub 集中管理你 Linux 上的所有游戏
======

你在 Linux 上打算怎么[玩游戏呢][1]?让我猜猜,要不就是从软件中心直接安装,要不就选 Steam、GOG、Humble Bundle 等平台,对吧?但是,如果你有多个游戏启动器和客户端,又要如何管理呢?好吧,对我来说这简直令人头疼 —— 这也是我发现 [GameHub][2] 这个应用之后感到非常高兴的原因。

GameHub 是为 Linux 发行版设计的一个桌面应用,它能让你“集中管理你的所有游戏”。这听起来很有趣,是不是?下面让我来具体说明一下。

![][3]

### 集中管理不同平台 Linux 游戏的 GameHub 功能

让我们看看,对玩家来说,让 GameHub 成为一个[不可或缺的 Linux 应用][4]的功能,都有哪些。

#### Steam、GOG & Humble Bundle 支持

![][5]

它支持 Steam、[GOG][6] 和 [Humble Bundle][7] 账户整合。你可以登录你的 GameHub 账号,从而在库管理器中管理所有游戏。

对我来说,我在 Steam 上有很多游戏,Humble Bundle 上也有一些。我不能确保它支持所有平台,但可以确信的是,主流平台游戏是没有问题的。

#### 本地游戏支持

![][8]

有很多网站专门推荐 Linux 游戏,并[支持下载][9]。你可以通过下载安装包,或者添加可执行文件,从而管理本地游戏。

可惜的是,在 GameHub 内,无法在线搜索 Linux 游戏。如上图所示,你需要将各平台游戏分开下载,随后再添加到自己的 GameHub 账号中。

#### 模拟器支持

在模拟器方面,你可以玩 [Linux 上的复古游戏][10]。你可以添加模拟器(或导入模拟器镜像)。

你可以在 [RetroArch][11] 查看可添加的模拟器,但也能根据需求,添加自定义模拟器。

#### 用户界面

![GameHub 界面选项][12]

当然,用户体验很重要。因此,探究一下用户界面都有些什么,也很有必要。

我个人觉得,这一应用很容易使用,并且黑色主题是一个加分项。

#### 手柄支持

如果你习惯在 Linux 系统上用手柄玩游戏 —— 你可以轻松在设置里添加、启用或禁用它。

#### 多个数据提供商

因为它需要获取你的游戏信息(或元数据),也意味着它需要一个数据源。你可以看到下图列出的所有数据源。

![Data Providers Gamehub][13]

这里你什么也不用做 —— 但如果你使用的是 Steam 之外的其他平台,你需要为 [IGDB 生成一个 API 密钥][14]。

我建议只有在出现提示/通知,或有些游戏在 GameHub 上没有任何描述/图片/状态时,再这么做。

#### 兼容性选项

![][15]

你有不支持在 Linux 上运行的游戏吗?

不用担心,GameHub 上提供了多种兼容工具,如 Wine/Proton,你可以利用它们让游戏得以运行。

我们无法确定具体哪个兼容工具适用于你 —— 所以你需要自己亲自测试。然而,对许多游戏玩家来说,这的确是个很有用的功能。

### 如何在 GameHub 上管理你的游戏?

在启动程序后,你可以将自己的 Steam/GOG/Humble Bundle 账号添加进来。

对于 Steam,你需要在 Linux 发行版上安装 Steam 客户端。一旦安装完成,你可以轻松将账号中的游戏导入 GameHub。

![][16]

对于 GOG 和 Humble Bundle,登录后,就能直接在 GameHub 上管理游戏了。

如果你想添加模拟器或者本地安装文件,点击窗口右上角的 “**+**” 按钮进行添加。

### 如何安装游戏?

对于 Steam 游戏,它会自动启动 Steam 客户端,从而下载/安装游戏(我希望之后安装游戏时,可以不用启动 Steam!)

![][17]

但对于 GOG/Humble Bundle,登录后就能直接下载安装游戏。必要的话,对于那些不支持在 Linux 上运行的游戏,你可以使用兼容工具。

无论是模拟器游戏,还是本地游戏,只需添加安装包或导入模拟器镜像就可以了。这里没什么其他步骤要做。

### GameHub:如何安装它呢?

![][18]

首先,你可以直接在软件中心或者应用商店内搜索。它在 **Pop!_Shop** 分类下可见。所以,它在绝大多数官方源中都能找到。

如果你在这些地方都没有找到,你可以手动添加源,并从终端上安装它,你需要输入以下命令:

```
sudo add-apt-repository ppa:tkashkin/gamehub
sudo apt update
sudo apt install com.github.tkashkin.gamehub
```

如果你遇到了 “**add-apt-repository command not found**” 这个错误,你可以看看 [add-apt-repository not found error][19] 这篇文章,它能帮你解决这一问题。

这里还提供 AppImage 和 Flatpak 版本。在[官网][2]上,你可以找到针对其他 Linux 发行版的安装手册。

同时,你还可以从它的 [GitHub 页面][20]下载之前版本的安装包。

[GameHub][2]

**总结**

GameHub 是相当灵活的一个集中游戏管理应用,用户界面和选项设置也相当直观。

你之前是否使用过这一应用呢?如果有,请在评论里写下你的感受。

而且,如果你想尝试一些与此功能相似的工具/应用,请务必告诉我们。
--------------------------------------------------------------------------------

via: https://itsfoss.com/gamehub/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wenwensnow](https://github.com/wenwensnow)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-gaming-guide/
[2]: https://tkashkin.tk/projects/gamehub/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-home-1.png?ssl=1
[4]: https://itsfoss.com/essential-linux-applications/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-platform-support.png?ssl=1
[6]: https://www.gog.com/
[7]: https://www.humblebundle.com/monthly?partner=itsfoss
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-native-installers.png?ssl=1
[9]: https://itsfoss.com/download-linux-games/
[10]: https://itsfoss.com/play-retro-games-linux/
[11]: https://www.retroarch.com/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-appearance.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/data-providers-gamehub.png?ssl=1
[14]: https://www.igdb.com/api
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-windows-game.png?fit=800%2C569&ssl=1
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-library.png?ssl=1
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-compatibility-layer.png?ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-install.jpg?ssl=1
[19]: https://itsfoss.com/add-apt-repository-command-not-found/
[20]: https://github.com/tkashkin/GameHub/releases
@ -0,0 +1,207 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Configure Rsyslog Server in CentOS 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)

如何在 CentOS 8 / RHEL 8 中配置 Rsyslog 服务器
======

**Rsyslog** 是一个自由开源的日志记录程序,在 **CentOS** 8 和 **RHEL** 8 系统上默认安装。它提供了一种将日志从客户端节点集中到单个中央服务器的简单有效的方法。日志集中化有两个好处。首先,它简化了日志查看,因为系统管理员可以在一个中心节点查看远程服务器的所有日志,而无需登录每个客户端系统来检查日志。如果需要监视多台服务器,这将非常有用。其次,如果远程客户端崩溃,你不用担心丢失日志,因为所有日志都将保存在**中央 Rsyslog 服务器上**。Rsyslog 取代了仅支持 **UDP** 协议的 syslog。它以优异的功能扩展了基本的 syslog 协议,例如在传输日志时支持 **UDP** 和 **TCP** 协议、增强的过滤功能以及灵活的配置选项。让我们来探讨如何在 CentOS 8 / RHEL 8 系统中配置 Rsyslog 服务器。

[![configure-rsyslog-centos8-rhel8][1]][2]

### 预先条件

我们将搭建以下实验环境来测试集中式日志记录过程:

  * **Rsyslog 服务器**:CentOS 8 Minimal,IP 地址:10.128.0.47
  * **客户端系统**:RHEL 8 Minimal,IP 地址:10.128.0.48

通过上面的设置,我们将演示如何设置 Rsyslog 服务器,然后配置客户端系统以将日志发送到 Rsyslog 服务器进行监视。

让我们开始!

### 在 CentOS 8 上配置 Rsyslog 服务器

默认情况下,Rsyslog 已安装在 CentOS 8 / RHEL 8 服务器上。要验证 Rsyslog 的状态,请通过 SSH 登录并运行以下命令:

```
$ systemctl status rsyslog
```

示例输出:

![rsyslog-service-status-centos8][1]

如果由于某种原因 Rsyslog 不存在,那么可以使用以下命令进行安装:

```
$ sudo yum install rsyslog
```

接下来,你需要修改 Rsyslog 配置文件中的一些设置。打开配置文件:

```
$ sudo vim /etc/rsyslog.conf
```

滚动并取消注释下面的行,以允许通过 UDP 协议接收日志:

```
module(load="imudp") # needs to be done just once
input(type="imudp" port="514")
```

![rsyslog-conf-centos8-rhel8][1]

同样,如果你希望启用 TCP rsyslog 接收,请取消注释下面的行:

```
module(load="imtcp") # needs to be done just once
input(type="imtcp" port="514")
```

![rsyslog-conf-tcp-centos8-rhel8][1]

保存并退出配置文件。

要从客户端系统接收日志,我们需要在防火墙上打开 Rsyslog 默认端口 514。为此,请运行:

```
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```

接下来,重新加载防火墙以保存更改:

```
$ sudo firewall-cmd --reload
```

示例输出:

![firewall-ports-rsyslog-centos8][1]

接下来,重启 Rsyslog 服务器:

```
$ sudo systemctl restart rsyslog
```

要使 Rsyslog 开机自启,请运行以下命令:

```
$ sudo systemctl enable rsyslog
```

要确认 Rsyslog 服务器正在监听 514 端口,请使用 netstat 命令,如下所示:

```
$ sudo netstat -pnltu
```

示例输出:

![netstat-rsyslog-port-centos8][1]

完美!我们已经成功配置了 Rsyslog 服务器来从客户端系统接收日志。

要实时查看日志消息,请运行以下命令:

```
$ tail -f /var/log/messages
```

现在开始配置客户端系统。

### 在 RHEL 8 上配置客户端系统

与 Rsyslog 服务器一样,登录并通过以下命令检查 rsyslog 守护进程是否正在运行:

```
$ sudo systemctl status rsyslog
```

示例输出:

![client-rsyslog-service-rhel8][1]

接下来,打开 rsyslog 配置文件:

```
$ sudo vim /etc/rsyslog.conf
```

在文件末尾,添加以下行:

```
*.* @10.128.0.47:514 # Use @ for UDP protocol
*.* @@10.128.0.47:514 # Use @@ for TCP protocol
```
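顺便一提,选择器 `*.*` 表示转发所有设施(facility)、所有级别的日志。如果只想转发部分日志,可以把选择器写得更精确一些(以下是示意性的配置片段,IP 和端口沿用上文示例):

```
authpriv.*     @@10.128.0.47:514    # 仅通过 TCP 转发认证相关日志
kern.warning   @10.128.0.47:514     # 仅通过 UDP 转发内核 warning 及以上级别的日志
```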
保存并退出配置文件。就像 Rsyslog 服务器一样,在防火墙上打开默认的 Rsyslog 端口 514:

```
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```

接下来,重新加载防火墙以保存更改:

```
$ sudo firewall-cmd --reload
```

接下来,重启 rsyslog 服务:

```
$ sudo systemctl restart rsyslog
```

要使 rsyslog 开机自启,请运行以下命令:

```
$ sudo systemctl enable rsyslog
```

### 测试日志记录操作

在成功安装并配置 Rsyslog 服务器和客户端之后,就该验证你的配置是否按预期运行了。

在客户端系统上,运行以下命令:

```
# logger "Hello guys! This is our first log"
```

现在进入 Rsyslog 服务器并运行以下命令来实时查看日志消息:

```
# tail -f /var/log/messages
```

客户端系统上命令运行的输出显示在了 Rsyslog 服务器的日志中,这意味着 Rsyslog 服务器正在接收来自客户端系统的日志:

![centralize-logs-rsyslogs-centos8][1]

就是这些了!我们成功设置了 Rsyslog 服务器来接收来自客户端系统的日志信息。
--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/

作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/configure-rsyslog-centos8-rhel8.jpg
@ -7,28 +7,28 @@
[#]: via: (https://www.2daygeek.com/find-get-size-of-directory-folder-linux-disk-usage-du-command/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How to Get the Size of a Directory in Linux
如何获取 Linux 中的目录大小
======

You may have noticed that the size of a directory is showing only 4KB when you use the **[ls command][1]** to list the directory content in Linux.
你可能已经注意到,在 Linux 中使用 **[ls 命令][1]** 列出目录内容时,目录的大小仅显示 4KB。

Is this the right size? If not, what is it, and how to get a directory or folder size in Linux?
这个大小正确吗?如果不正确,那它代表什么,又该如何获取 Linux 中目录或文件夹的大小?

This is the default size, which is used to store the meta information of the directory on the disk.
这是默认的大小,用于在磁盘上存储目录的元信息。

There are some applications on Linux to **[get the actual size of a directory][2]**.
Linux 上有一些应用程序可以 **[获取目录的实际大小][2]**。

But the disk usage (du) command is widely used by the Linux administrator.
但是,磁盘使用率(du)命令已被 Linux 管理员广泛使用。

I will show you how to get folder size with various options.
我将向你展示如何使用各种选项获取文件夹大小。

### What’s du Command?
### 什么是 du 命令?

**[du command][3]** stands for `Disk Usage`. It’s a standard Unix program which used to estimate file space usage in present working directory.
**[du 命令][3]** 的意思是 <ruby>磁盘使用率<rt>Disk Usage</rt></ruby>。这是一个标准的 Unix 程序,用于估计当前工作目录中的文件空间使用情况。

It summarize disk usage recursively to get a directory and its sub-directory size.
它以递归方式汇总磁盘使用情况,以获取目录及其子目录的大小。

As I said, the directory size only shows 4KB when you use the ls command. See the below output.
如同我之前说的,使用 ls 命令时,目录大小仅显示 4KB。参见下面的输出。

```
$ ls -lh | grep ^d
@ -40,9 +40,9 @@ drwxr-xr-x 13 daygeek daygeek 4.0K Jan 6 2019 drive-mageshm
drwxr-xr-x 15 daygeek daygeek 4.0K Sep 29 21:32 Thanu_Photos
```

### 1) How to Check Only the Size of the Parent Directory on Linux
### 1) 在 Linux 上如何只获取父目录的大小

Use the below du command format to get the total size of a given directory. In this example, we are going to get the total size of the **“/home/daygeek/Documents”** directory.
使用以下 du 命令格式获取给定目录的总大小。在该示例中,我们将获取 **“/home/daygeek/Documents”** 目录的总大小。

```
$ du -hs /home/daygeek/Documents
@ -52,20 +52,19 @@ $ du -h --max-depth=0 /home/daygeek/Documents/
20G /home/daygeek/Documents
```

**Details**:
**详细说明**:

  * du – It is a command
  * h – Print sizes in human readable format (e.g., 1K 234M 2G)
  * s – Display only a total for each argument
  * –max-depth=N – Print levels of directory
  * du – 命令本身
  * h – 以人类可读的格式显示大小(例如 1K、234M、2G)
  * s – 仅显示每个参数的总计
  * –max-depth=N – 只显示到第 N 层目录

### 2) How to Get the Size of Each Directory on Linux
### 2) 在 Linux 上如何获取每个目录的大小

Use the below du command format to get the total size of each directory, including sub-directories.
使用以下 du 命令格式获取每个目录(包括子目录)的总大小。

In this example, we are going to get the total size of each **“/home/daygeek/Documents”** directory and its sub-directories.
在该示例中,我们将获取 **“/home/daygeek/Documents”** 目录及其每个子目录的总大小。

```
$ du -h /home/daygeek/Documents/ | sort -rh | head -20
@ -92,9 +91,9 @@ $ du -h /home/daygeek/Documents/ | sort -rh | head -20
150M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Nov-2016
```

### 3) How to Get a Summary of Each Directory on Linux
### 3) 在 Linux 上如何获取每个目录的摘要

Use the below du command format to get only the summary for each directory.
使用如下 du 命令格式仅获取每个目录的摘要。

```
$ du -hs /home/daygeek/Documents/* | sort -rh | head -10
@ -111,9 +110,9 @@ $ du -hs /home/daygeek/Documents/* | sort -rh | head -10
96K /home/daygeek/Documents/distro-info.xlsx
```

### 4) How to Display the Size of Each Directory and Exclude Sub-Directories on Linux
### 4) 在 Linux 上如何显示每个目录的大小且不含子目录

Use the below du command format to display the total size of each directory, excluding subdirectories.
使用如下 du 命令格式来显示每个目录的总大小,不包括子目录。

```
$ du -hS /home/daygeek/Documents/ | sort -rh | head -20
@ -140,9 +139,9 @@ $ du -hS /home/daygeek/Documents/ | sort -rh | head -20
90M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Dec-2017
```

### 5) How to Get Only the Size of First-Level Sub-Directories on Linux
### 5) 在 Linux 上如何仅获取一级子目录的大小

If you want to get the size of the first-level sub-directories, including their subdirectories, for a given directory on Linux, use the command format below.
如果要获取 Linux 上给定目录的一级子目录(包括其子目录)的大小,请使用以下命令格式。

```
$ du -h --max-depth=1 /home/daygeek/Documents/
@ -155,9 +154,9 @@ $ du -h --max-depth=1 /home/daygeek/Documents/
20G /home/daygeek/Documents/
```

### 6) How to Get Grand Total in the du Command Output
### 6) 如何在 du 命令输出中获得总计

If you want to get the grand total in the du command output, use the below du command format.
如果要在 du 命令输出中获得总计,请使用以下 du 命令格式。

```
$ du -hsc /home/daygeek/Documents/* | sort -rh | head -10
```