Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-08-23 03:58:46 +08:00
commit 0b056154fb
21 changed files with 2637 additions and 324 deletions

View File

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Serverless on Kubernetes, diverse automation, and more industry trends)
[#]: via: (https://opensource.com/article/19/8/serverless-kubernetes-and-more)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Serverless on Kubernetes, diverse automation, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [10 tips for creating robust serverless components][2]
> There are some repeated patterns that we have seen after creating 20+ serverless components. We recommend that you browse through the [available component repos on GitHub][3] and check which one is close to what you're building. Just open up the repo and check the code and see how everything fits together.
>
> All component code is open source, and we are striving to keep it clean, simple and easy to follow. After you look around you'll be able to understand how our core API works, how we interact with external APIs, and how we are reusing other components.
**The impact**: Serverless Inc is striving to take probably the most hyped architecture early on in the hype cycle and make it usable and practical today. For serverless to truly go mainstream, producing something useful has to be as easy for a developer as "Hello world!," and these components are a step in that direction.
## [Kubernetes workloads in the serverless era: Architecture, platforms, and trends][4]
> There are many fascinating elements of the Kubernetes architecture: the containers providing common packaging, runtime and resource isolation model within its foundation; the simple control loop mechanism that monitors the actual state of components and reconciles this with the desired state; the custom resource definitions. But the true enabler for extending Kubernetes to support diverse workloads is the concept of the pod.
>
> A pod provides two sets of guarantees. The deployment guarantee ensures that the containers of a pod are always placed on the same node. This behavior has some useful properties such as allowing containers to communicate synchronously or asynchronously over localhost, over inter-process communication ([IPC][5]), or using the local file system.
**The impact**: If developer adoption of serverless architectures is largely driven by how easily they can be productive working that way, business adoption will be driven by the ability to place this trend in the operational and business context. IT decision-makers need to see a holistic picture of how serverless adds value alongside their existing investments, and operators and architects need to envision how they'll keep it all up and running.
## [How developers can survive the Last Mile with CodeReady Workspaces][6]
> Inside each cloud provider, a host of tools can address CI/CD, testing, monitoring, backing up and recovery problems. Outside of those providers, the cloud native community has been hard at work cranking out new tooling from [Prometheus][7], [Knative][8], [Envoy][9] and [Fluentd][10], to [Kubernetes][11] itself and the expanding ecosystem of Kubernetes Operators.
>
> Within all of those projects, cloud-based services, and desktop utilities, there is one major gap, however: the last mile of software development is the IDE. And despite the wealth of development projects inside the community and Cloud Native Computing Foundation, it is indeed the Eclipse Foundation, as mentioned above, that has taken on this problem with a focus on the new cloud development landscape.
**The impact**: Increasingly complex development workflows and deployment patterns call for increasingly intelligent IDEs. While I'm sure it is possible to push a button and redeploy your microservices to a Kubernetes cluster from emacs (or vi, relax), Eclipse Che (and CodeReady Workspaces) are being built from the ground up with these types of cloud-native workflows in mind.
## [Automate security in increasingly complex hybrid environments][12]
> According to the [Information Security Forum][13]'s [Global Security Threat Outlook for 2019][14], one of the biggest IT trends to watch this year is the increasing sophistication of cybercrime and ransomware. And even as the volume of ransomware attacks is dropping, cybercriminals are finding new, more potent ways to be disruptive. An [article in TechRepublic][15] points to cryptojacking malware, which enables someone to hijack another's hardware without permission to mine cryptocurrency, as a growing threat for enterprise networks.
>
> To more effectively mitigate these risks, organizations could invest in automation as a component of their security plans. That's because it takes time to investigate and resolve issues, in addition to applying controlled remediations across bare metal, virtualized systems, and cloud environments -- both private and public -- all while documenting changes.
**The impact**: This one is really about our ability to trust that the network service providers that we rely upon to keep our phones and smart TVs full of stutter-free streaming HD content have what they need to protect the infrastructure that makes it all possible. I for one am rooting for you!
## [AnsibleFest 2019 session catalog][16]
> 85 Ansible automation sessions over 3 days in Atlanta, Georgia
**The impact**: What struck me is the range of things that can be automated with Ansible. Windows? Check. Multicloud? Check. Security? Check. The real question after those three days are over will be: Is there anything in IT that can't be automated with Ansible? Seriously, I'm asking, let me know.
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/serverless-kubernetes-and-more
Author: [Tim Hildred][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://serverless.com/blog/10-tips-creating-robust-serverless-components/
[3]: https://github.com/serverless-components/
[4]: https://www.infoq.com/articles/kubernetes-workloads-serverless-era/
[5]: https://opensource.com/article/19/4/interprocess-communication-linux-networking
[6]: https://thenewstack.io/how-developers-can-survive-the-last-mile-with-codeready-workspaces/
[7]: https://prometheus.io/
[8]: https://knative.dev/
[9]: https://www.envoyproxy.io/
[10]: https://www.fluentd.org/
[11]: https://kubernetes.io/
[12]: https://www.redhat.com/en/blog/automate-security-increasingly-complex-hybrid-environments
[13]: https://www.securityforum.org/
[14]: https://www.prnewswire.com/news-releases/information-security-forum-forecasts-2019-global-security-threat-outlook-300757408.html
[15]: https://www.techrepublic.com/article/top-4-security-threats-businesses-should-expect-in-2019/
[16]: https://agenda.fest.ansible.com/sessions

View File

@ -1,102 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (beamrolling)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to transition into a career as a DevOps engineer)
[#]: via: (https://opensource.com/article/19/7/how-transition-career-devops-engineer)
[#]: author: (Conor Delanbanque https://opensource.com/users/cdelanbanque)
How to transition into a career as a DevOps engineer
======
Whether you're a recent college graduate or a seasoned IT pro looking to
advance your career, these tips can help you get hired as a DevOps
engineer.
![technical resume for hiring new talent][1]
DevOps engineering is a hot career with many rewards. Whether you're looking for your first job after graduating or seeking an opportunity to reskill while leveraging your prior industry experience, this guide should help you take the right steps to become a [DevOps engineer][2].
### Immerse yourself
Begin by learning the fundamentals, practices, and methodologies of [DevOps][3]. Understand the "why" behind DevOps before jumping into the tools. A DevOps engineer's main goal is to increase speed and maintain or improve quality across the entire software development lifecycle (SDLC) to provide maximum business value. Read articles, watch YouTube videos, and go to local Meetup groups or conferences—become part of the welcoming DevOps community, where you'll learn from the mistakes and successes of those who came before you.
### Consider your background
If you have prior experience working in technology, such as a software developer, systems engineer, systems administrator, network operations engineer, or database administrator, you already have broad insights and useful experience for your future role as a DevOps engineer. If you're just starting your career after finishing your degree in computer science or any other STEM field, you have some of the basic stepping-stones you'll need in this transition.
The DevOps engineer role covers a broad spectrum of responsibilities. Following are the three ways enterprises are most likely to use them:
* **DevOps engineers with a dev bias** work in a software development role building applications. They leverage continuous integration/continuous delivery (CI/CD), shared repositories, cloud, and containers as part of their everyday work, but they are not necessarily responsible for building or implementing tooling. They understand infrastructure and, in a mature environment, will be able to push their own code into production.
* **DevOps engineers with an ops bias** could be compared to systems engineers or systems administrators. They understand software development but do not spend the core of their day building applications. Instead, they are more likely to be supporting software development teams to automate manual processes and increase efficiencies across human and technology systems. This could mean breaking down legacy code and using less cumbersome automation scripts to run the same commands, or it could mean installing, configuring, or maintaining infrastructure and tooling. They ensure the right tools are installed and available for any teams that need them. They also help to enable teams by teaching them how to leverage CI/CD and other DevOps practices.
* **Site reliability engineers (SRE)** are like software engineers that solve operations and infrastructure problems. SREs focus on creating scalable, highly available, and reliable software systems.
In the ideal world, DevOps engineers will understand all of these areas; this is typical at mature technology companies. However, DevOps roles at top-tier banks and many Fortune 500 companies usually have biases towards dev or ops.
### Technologies to learn
DevOps engineers need to know a wide spectrum of technologies to do their jobs effectively. Whatever your background, start with the fundamental technologies you'll need to use and understand as a DevOps engineer.
#### Operating systems
The operating system is where everything runs, and having fundamental knowledge is important. [Linux is the operating system][4] you'll most likely use daily, although some organizations use Windows. To get started, you can install Linux at home, where you'll be able to break as much as you want and learn along the way.
#### Scripting
Next, pick a language to learn for scripting purposes. There are many to choose from, including Python, Go, Java, Bash, PowerShell, Ruby, and C/C++. I suggest [starting with Python][5]; it's one of the most popular for a reason, as it's relatively easy to learn and interpret. Python is often written to follow the fundamentals of object-oriented programming (OOP) and can be used for web development, software development, and creating desktop GUI and business applications.
#### Cloud
After [Linux][4] and [Python][5], I think the next thing to study is cloud computing. Infrastructure is no longer left to the "operations guys," so you'll need some exposure to a cloud platform such as Amazon Web Services, Azure, or Google Cloud Platform. I'd start with AWS, as it has an extensive collection of free learning tools that can take you down any track from using AWS as a developer, to operations, and even business-facing components. In fact, you might become overwhelmed by how much is on offer. Consider starting with EC2, S3, and VPC, and see where you want to go from there.
#### Programming languages
If you come to DevOps with a passion for software development, keep on improving your programming skills. Some good and commonly used languages in DevOps are the same as you would for scripting: Python, Go, Java, Bash, PowerShell, Ruby, and C/C++. You should also become familiar with Jenkins and Git/GitHub, which you'll use frequently within the CI/CD process.
#### Containers
Finally, start learning about [containerizing code][6] using tools such as Docker and orchestration platforms such as Kubernetes. There are extensive learning resources available for free online, and most cities will have local Meetup groups where you can learn from experienced people in a friendly environment (with pizza and beer!).
#### What else?
If you have less experience in development, you can still [get involved in DevOps][3] by applying your passion for automation, increasing efficiency, collaborating with others, and improving your work. I would still suggest learning the tooling described above, but with less emphasis on the coding/scripting languages. It will be useful to learn about Infrastructure-as-a-Service, Platform-as-a-Service, cloud platforms, and Linux. You'll likely be setting up the tools and learning how to build systems that are resilient and fault-tolerant, leveraging them while writing code.
### Finding a DevOps job
The job search process will differ depending on whether you've been working in tech and are moving into DevOps or you're a recent graduate beginning your career.
#### If you're already working in technology
If you're transitioning from one tech field into a DevOps role, start by exploring opportunities within your current company. Can you reskill by working with another team? Try to shadow other team members, ask for advice, and acquire new skills without leaving your current job. If this isn't possible, you may need to move to another company. If you can learn some of the practices, tools, and technologies listed above, you'll be in a good position to demonstrate relevant knowledge during interviews. The key is to be honest and not set yourself up for failure. Most hiring managers understand that you don't know all the answers; if you can show what you've been learning and explain that you're open to learning more, you should have a good chance to land a DevOps job.
#### If you're starting your career
Apply to open opportunities at companies hiring junior DevOps engineers. Unfortunately, many companies say they're looking for more experience and recommend you re-apply when you've gained some. It's the typical, frustrating scenario of "we want more experience," but nobody seems willing to give you the first chance.
It's not all gloomy though; some companies focus on training and upskilling graduates directly out of the university. For example, [MThree][7], where I work, hires fresh graduates and trains them for eight weeks. When they complete training, participants have solid exposure to the entire SDLC and a good understanding of how it applies in a Fortune 500 environment. Graduates are hired as junior DevOps engineers with MThree's client companies—MThree pays their full-time salary and benefits for the first 18 to 24 months, after which they join the client as direct employees. This is a great way to bridge the gap from the university into a technology career.
### Summary
There are many ways to transition to become a DevOps engineer. It is a very rewarding career route that will likely keep you engaged and challenged—and increase your earning potential.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/how-transition-career-devops-engineer
Author: [Conor Delanbanque][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/cdelanbanque
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hiring_talent_resume_job_career.png?itok=Ci_ulYAH (technical resume for hiring new talent)
[2]: https://opensource.com/article/19/7/devops-vs-sysadmin
[3]: https://opensource.com/resources/devops
[4]: https://opensource.com/resources/linux
[5]: https://opensource.com/resources/python
[6]: https://opensource.com/article/18/8/sysadmins-guide-containers
[7]: https://www.mthreealumni.com/

View File

@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Breakthroughs bring a quantum Internet closer)
[#]: via: (https://www.networkworld.com/article/3432509/breakthroughs-bring-a-quantum-internet-closer.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Breakthroughs bring a quantum Internet closer
======
Universities around the world are making discoveries that advance technologies needed to underpin quantum computing.
![Getty Images][1]
Breakthroughs in the manipulation of light are making it more likely that we will, in due course, be seeing a significantly faster and more secure Internet. The adoption of optical circuits in chips, driven by [quantum technologies][2], for example, could be just around the corner.
Physicists at the Technical University of Munich (TUM) have just announced a dramatic leap forward in the methods used to accurately place light sources in atom-thin layers. That fine positioning has been one block in the movement towards quantum chips.
[See who's creating quantum computers][3]
“Previous circuits on chips rely on electrons as the information carriers,” [the school explains in a press release][4]. However, by using light instead, it's possible to send data at the faster speed of light, gain power-efficiencies and take advantage of quantum entanglement, where the data is positioned in multiple states in the circuit, all at the same time.
Roughly, quantum entanglement is highly secure because eavesdropping attempts can not only be spotted immediately anywhere along a circuit, due to the always-intertwined parts, but the keys can be automatically shut down at the same time, thus corrupting visibility for the hacker.
The school says its light-source-positioning technique, using a three-atom-thick layer of the semiconductor molybdenum disulfide (MoS2) as the initial material and then irradiating it with a helium ion beam, controls the positioning of the light source better, in a chip, than has been achieved before.
They say that the precision now opens the door to quantum sensor chips for smartphones, and also “new encryption technologies for data transmission.” Any smartphone sensor also has applications in IoT.
The TUM quantum-electronics breakthrough is just one announced in the last few weeks. Scientists at Osaka University say they've figured out a way to get information that's encoded in a laser-beam to translate to a spin state of an electron in a quantum dot. They explain, [in their release][5], that they solve an issue where entangled states can be extremely fragile, in other words, petering out and not lasting for the required length of transmission. Roughly, they explain that their invention allows electron spins in distant, terminus computers to interact better with the quantum-data-carrying light signals.
“The achievement represents a major step towards a quantum internet,” the university says.
“There are those who think all computers, and other electronics, will eventually be run on light and forms of photons, and that we will see a shift to all-light,” [I wrote earlier this year][6].
That movement is not slowing. Unrelated to the aforementioned quantum-based light developments, we're also seeing a light-based thrust that can be used in regular electronics too.
Engineers may soon be designing with small photon diodes (not traditional LEDs, which are also diodes) that would allow light to flow in one direction only, [says Stanford University in a press release][7]. They are using materials science and have figured out a way to trap light in nano-sized silicon. Diodes are basically a valve that stops electrical circuits running in reverse. Light-based diodes, for direction, haven't been available in small footprints, such as would be needed in smartphone-sized form factors, or IoT sensing, for example.
“One grand vision is to have an all-optical computer where electricity is replaced completely by light and photons drive all information processing,” Mark Lawrence of Stanford says. “The increased speed and bandwidth of light would enable faster solutions.”
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3432509/breakthroughs-bring-a-quantum-internet-closer.html
Author: [Patrick Nelson][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/08/3_nodes-and-wires_servers_hardware-100769198-large.jpg
[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[3]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html
[4]: https://www.tum.de/nc/en/about-tum/news/press-releases/details/35627/
[5]: https://resou.osaka-u.ac.jp/en/research/2019/20190717_1
[6]: https://www.networkworld.com/article/3338081/light-based-computers-to-be-5000-times-faster.html
[7]: https://news.stanford.edu/2019/07/24/developing-technologies-run-light/
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The cloud isn't killing open source software)
[#]: via: (https://opensource.com/article/19/8/open-source-licensing)
[#]: author: (Peter Zaitsev https://opensource.com/users/peter-zaitsev)
The cloud isn't killing open source software
======
How the cloud motivates open source businesses to evolve quickly.
![Globe up in the clouds][1]
Over the last few months, I participated in two keynote panels where people asked questions about open source licensing:
* Do we need to redefine what open source means in the age of the cloud?
* Are cloud vendors abusing open source?
* Will open source, as we know it, survive?
Last year was the most eventful in my memory for the usually very conservative open source licensing space:
* [Elastic][2] and [Confluent][3] introduced their own licenses for a portion of their stack.
* [Redis Labs][4] changed its license for some extensions by adding "Commons Clause," then changed the entire license a few months later.
* [MongoDB][5] famously proposed a new license called Server-Side Public License (SSPL) to the Open Source Initiative (OSI) for approval, only to [retract][6] the proposal before the OSI had an opportunity to reach a decision. Many in the open source community regarded SSPL as failing to meet the standards of open source licenses. As a result, MongoDB is under a license that can be described as "[source-available][7]" but not open source, given that it has not been approved by the OSI.
### Competition in the cloud
The most common reason given for software vendors making these changes is "foul play" by cloud vendors. The argument is that cloud vendors unfairly offer open source software "as a service," capturing large portions of the revenue, while the original software vendor continues to carry most of the development costs. Market rumors claim Amazon Web Services (AWS) makes more revenue from MySQL than Oracle, which owns the product.
So, who is claiming foul play is destroying the open source ecosystem? Typically, the loudest voices are venture-funded open source software companies. These companies require a very high growth rate to justify their hefty valuation, so it makes sense that they would prefer not to worry about additional competition.
But I reject this argument. If you have an open source license for your software, you need to accept the benefits and drawbacks that go along with it. Besides, you are likely to have a much faster and larger adoption rate partly because other businesses, large and small, can make money from your software. You need to accept and even expect competition from these businesses.
In simple terms, there will be a larger cake, but you will only get a slice of it. If you want a bigger slice of that cake, you can choose a proprietary license for all or some of your software (the latter is often called "open core"). Or, you can choose more or less permissive open source licensing. Choosing the right mix and adapting it as time goes by is critical for the success of businesses that produce open source software.
### Open source communities
But what about software users and the open source communities that surround these projects? These groups generally love to see their software available from cloud vendors, for example, database-as-a-service (DBaaS), as it makes the software much easier to access and gives users more choices than ever. This can have a very positive impact on the community. For example, the adoption of PostgreSQL, which was not easy to use, was dramatically boosted by its availability on Heroku and then as DBaaS on major cloud vendors.
Another criticism leveled at cloud vendors is that they do not support open source communities. This is partly due to their reluctance to share software code. They do, however, contribute significantly to the community by pushing the boundaries of usability, and more and more, we see examples of cloud vendors contributing code. AWS, which gets most of the criticism, has multiple [open source projects][8] and contributes to other projects. Amazon [contributed Encryption in Transit to Redis][9] and recently released [Open Distro for Elasticsearch][10], which provides open source equivalents for many features not available in the open source version of the Elastic platform.
### Open source now and in the future
So, while open source companies impacted by cloud vendors continue to argue that such competition can kill their business—and consequently kill open source projects—this argument is misguided. Competition is not new. Weaker companies that fail to adjust to these new business realities may fail. Other companies will thrive or be acquired by stronger players. This process generally leads to better products and more choice.
This is especially true for open source software, which, unlike proprietary software, cannot be wiped out by a company's failure. Once released, open source code is _always_ open (you can only change the license for new releases), so everyone can exercise the right to fork and continue development if there is demand.
So, I believe open source software is working exactly as intended.
Some businesses attempt to balance open and proprietary software licenses and are now changing to restrictive licenses. Time will tell whether this will protect them or result in their users seeking a more open alternative.
But, what about "source-available" licenses? This is a new category and another option for software vendors and users. However, it can be confusing. The source-available category is not well defined. Some people even refer to this software as open source, as you can browse the source code on GitHub. When source-available code is mixed in with truly open source components in the same product, it can be problematic. If issues arise, they could damage the reputation of the open source software and even expose the user to potential litigation. I hope that standardized source-available licenses will be developed and adopted by software vendors, as was the case with open source licenses.
At [Percona][11], we find ourselves in a unique position. We have spent years using the freedom of open source to develop better versions of existing software, with enhanced features, at no cost to our users. Percona Server for MySQL is as open as MySQL Community Edition but has many of the enhanced features available in MySQL Enterprise as well as additional benefits. This also applies to Percona Server for MongoDB. So, we compete with MongoDB and Oracle, while also being thankful for the amazing engineering work they are doing.
We also compete with DBaaS on other cloud vendors. DBaaS is a great choice for smaller companies that aren't worried about vendor lock-in. It offers superb value without huge costs and is a great choice for some customers. This rivalry is sometimes unpleasant, but it is ultimately fair, and the competition pushes us to be a better company.
In summary, there is no need to panic! The cloud is not going to kill open source software, but it should motivate open source software businesses to adjust and evolve their operations. It is clear that agility will be key, and businesses that can take advantage of new developments and adapt to changing market conditions will be more successful. The final result is likely to be more open software and also more non-open source software, all operating under a variety of licenses.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/open-source-licensing
Author: [Peter Zaitsev][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/peter-zaitsev
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-globe.png?itok=_drXt4Tn (Globe up in the clouds)
[2]: https://www.elastic.co/guide/en/elastic-stack-overview/current/license-management.html
[3]: https://www.confluent.io/blog/license-changes-confluent-platform
[4]: https://redislabs.com/blog/redis-labs-modules-license-changes/
[5]: https://www.mongodb.com/licensing/server-side-public-license
[6]: http://lists.opensource.org/pipermail/license-review_lists.opensource.org/2019-March/003989.html
[7]: https://en.wikipedia.org/wiki/Source-available_software
[8]: https://aws.amazon.com/opensource/
[9]: https://aws.amazon.com/blogs/opensource/open-sourcing-encryption-in-transit-redis/
[10]: https://aws.amazon.com/blogs/opensource/keeping-open-source-open-open-distro-for-elasticsearch/
[11]: https://www.percona.com/

View File

@ -1,222 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Copying files in Linux)
[#]: via: (https://opensource.com/article/19/8/copying-files-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Copying files in Linux
======
Learn multiple ways to copy files on Linux, and the advantages of each.
![Filing papers and documents][1]
Copying documents used to require a dedicated staff member in offices, and then a dedicated machine. Today, copying is a task computer users do without a second thought. Copying data on a computer is so trivial that copies are made without you realizing it, such as when dragging a file to an external drive.
The concept that digital entities are trivial to reproduce is pervasive, so most modern computerists don't think about the options available for duplicating their work. And yet, there are several different ways to copy a file on Linux. Each method has nuanced features that might benefit you, depending on what you need to get done.
Here are a number of ways to copy files on Linux, BSD, and Mac.
### Copying in the GUI
As with most operating systems, you can do all of your file management in the GUI, if that's the way you prefer to work.
#### Drag and drop
The most obvious way to copy a file is the way you're probably used to copying files on computers: drag and drop. On most Linux desktops, dragging and dropping from one local folder to another local folder _moves_ a file by default. You can change this behavior to a copy operation by holding down the **Ctrl** key after you start dragging the file.
Your cursor may show an indicator, such as a plus sign, to show that you are in copy mode:
![Copying a file.][2]
Note that if the file exists on a remote system, whether it's a web server or another computer on your own network that you access through a file-sharing protocol, the default action is often to copy, not move, the file.
#### Right-click
If you find dragging and dropping files around your desktop imprecise or clumsy, or doing so takes your hands away from your keyboard too much, you can usually copy a file using the right-click menu. This possibility depends on the file manager you use, but generally, a right-click produces a contextual menu containing common actions.
The contextual menu copy action stores the [file path][3] (where the file exists on your system) in your clipboard so you can then _paste_ the file somewhere else:
![Copying a file from the context menu.][4]
In this case, you're not actually copying the file's contents to your clipboard. Instead, you're copying the [file path][3]. When you paste, your file manager looks at the path in your clipboard and then runs a copy command, copying the file located at that path to the path you are pasting into.
### Copying on the command line
While the GUI is a generally familiar way to copy files, copying in a terminal can be more efficient.
#### cp
The obvious terminal-based equivalent to copying and pasting a file on the desktop is the **cp** command. This command copies files and directories and is relatively straightforward. It uses the familiar _source_ and _target_ (strictly in that order) syntax, so to copy a file called **example.txt** into your **Documents** directory:
```
$ cp example.txt ~/Documents
```
Just like when you drag and drop a file onto a folder, this action doesn't replace **Documents** with **example.txt**. Instead, **cp** detects that **Documents** is a folder, and places a copy of **example.txt** into it.
You can also, conveniently (and efficiently), rename the file as you copy it:
```
$ cp example.txt ~/Documents/example_copy.txt
```
That fact is important because it allows you to make a copy of a file in the same directory as the original:
```
$ cp example.txt example.txt
cp: 'example.txt' and 'example.txt' are the same file.
$ cp example.txt example_copy.txt
```
To copy a directory, you must use the **-r** option, which stands for **--recursive**. This option runs **cp** on the directory _inode_, and then on all files within the directory. Without the **-r** option, **cp** doesn't even recognize a directory as an object that can be copied:
```
$ cp notes/ notes-backup
cp: -r not specified; omitting directory 'notes/'
$ cp -r notes/ notes-backup
```
#### cat
The **cat** command is one of the most misunderstood commands, but only because it exemplifies the extreme flexibility of a [POSIX][5] system. Among everything else **cat** does (including its intended purpose of con_cat_enating files), it can also copy. For instance, with **cat** you can [create two copies from one file][6] with just a single command. You can't do that with **cp**.
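Here is one way to do it in a single pipeline (a sketch, not necessarily the approach used in the linked article; the file names are placeholders), with **tee** writing **cat**'s output to two destinations at once:
```
$ cat example.txt | tee example_copy1.txt example_copy2.txt > /dev/null
$ ls
example.txt  example_copy1.txt  example_copy2.txt
```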
The significance of using **cat** to copy a file is the way the system interprets the action. When you use **cp** to copy a file, the file's attributes are copied along with the file itself. That means that the file permissions of the duplicate are the same as the original:
```
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 25 23:57  foo.jpg
$ cp foo.jpg bar.jpg
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 29 13:37  bar.jpg
-rw-r--r--. 1 57368 Jul 25 23:57  foo.jpg
```
Using **cat** to read the contents of a file into another file, however, invokes a system call to create a new file. These new files are subject to your default **umask** settings. To learn more about `umask`, read Alex Juarez's article covering [umask][7] and permissions in general.
Run **umask** to get the current settings:
```
$ umask
0002
```
This setting means that new files created in this location are granted **664** (**rw-rw-r--**) permission because nothing is masked by the first digits of the **umask** setting (and the executable bit is not a default bit for file creation), and the write permission is blocked by the final digit.
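As a quick check (an illustrative listing, assuming the same **0002** umask shown above), a brand-new empty file picks up exactly those **664** permissions:
```
$ umask
0002
$ touch umask-demo.txt
$ ls -l -G -g umask-demo.txt
-rw-rw-r--. 1 0 Jul 29 13:40  umask-demo.txt
```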
When you copy with **cat**, you don't actually copy the file. You use **cat** to read the contents of the file, and then redirect the output into a new file:
```
$ cat foo.jpg > baz.jpg
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 29 13:37  bar.jpg
-rw-rw-r--. 1 57368 Jul 29 13:42  baz.jpg
-rw-r--r--. 1 57368 Jul 25 23:57  foo.jpg
```
As you can see, **cat** created a brand new file with the system's default umask applied.
In the end, when all you want to do is copy a file, the technicalities often don't matter. But sometimes you want to copy a file and end up with a default set of permissions, and with **cat** you can do it all in one command.
#### rsync
The **rsync** command is a versatile tool for copying files, with the notable ability to synchronize your source and destination. At its most simple, **rsync** can be used similarly to the **cp** command:
```
$ rsync example.txt example_copy.txt
$ ls
example.txt    example_copy.txt
```
The command's true power lies in its ability to _not_ copy when it's not necessary. If you use **rsync** to copy a file into a directory, but that file already exists in that directory, then **rsync** doesn't bother performing the copy operation. Locally, that fact doesn't necessarily mean much, but if you're copying gigabytes of data to a remote server, this feature makes a world of difference.
What does make a difference even locally, though, is the command's ability to differentiate files that share the same name but which contain different data. If you've ever found yourself faced with two copies of what is meant to be the same directory, then **rsync** can synchronize them into one directory containing the latest changes from each. This setup is a pretty common occurrence in industries that haven't yet discovered the magic of version control, and for backup solutions in which there is one source of truth to propagate.
You can emulate this situation intentionally by creating two folders, one called **example** and the other **example_dupe**:
```
$ mkdir example example_dupe
```
Create a file in the first folder:
```
$ echo "one" > example/foo.txt
```
Use **rsync** to synchronize the two directories. The most common options for this operation are **-a** (for _archive_, which ensures symlinks and other special files are preserved) and **-v** (for _verbose_, providing feedback to you on the command's progress):
```
$ rsync -av example/ example_dupe/
```
The directories now contain the same information:
```
$ cat example/foo.txt
one
$ cat example_dupe/foo.txt
one
```
If the file you are treating as the source diverges, then the target is updated to match:
```
$ echo "two" >> example/foo.txt
$ rsync -av example/  example_dupe/
$ cat example_dupe/foo.txt
one
two
```
Keep in mind that the **rsync** command is meant to copy data, not to act as a version control system. For instance, if a file in the destination somehow gets ahead of a file in the source, that file is still overwritten because **rsync** compares files for divergence and assumes that the destination is always meant to mirror the source:
```
$ echo "You will never see this note again" > example_dupe/foo.txt
$ rsync -av example/  example_dupe/
$ cat example_dupe/foo.txt
one
two
```
If there is no change, then no copy occurs.
The **rsync** command has many options not available in **cp**, such as the ability to set target permissions, exclude files, delete outdated files that don't appear in both directories, and much more. Use **rsync** as a powerful replacement for **cp**, or just as a useful supplement.
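For instance, here is a sketch combining a few of those options on the example directories from above: **--exclude** skips matching files, **--delete** removes files from the destination that no longer exist in the source, and **--chmod** sets the permissions applied to copied directories (**D**) and files (**F**):
```
$ rsync -av --exclude='*.tmp' --delete --chmod=D755,F644 example/ example_dupe/
```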
### Many ways to copy
There are many ways to achieve essentially the same outcome on a POSIX system, so it seems that open source's reputation for flexibility is well earned. Have I missed a useful way to copy data? Share your copy hacks in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/copying-files-linux
Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translator: [tomjlw](https://github.com/tomjlw)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
[2]: https://opensource.com/sites/default/files/uploads/copy-nautilus.jpg (Copying a file.)
[3]: https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them
[4]: https://opensource.com/sites/default/files/uploads/copy-files-menu.jpg (Copying a file from the context menu.)
[5]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[6]: https://opensource.com/article/19/2/getting-started-cat-command
[7]: https://opensource.com/article/19/7/linux-permissions-101

View File

@ -0,0 +1,353 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An introduction to bpftrace for Linux)
[#]: via: (https://opensource.com/article/19/8/introduction-bpftrace)
[#]: author: (Brendan Gregg https://opensource.com/users/brendang)
An introduction to bpftrace for Linux
======
New Linux tracer analyzes production performance problems and
troubleshoots software.
![Linux keys on the keyboard for a desktop computer][1]
Bpftrace is a new open source tracer for Linux for analyzing production performance problems and troubleshooting software. Its users and contributors include Netflix, Facebook, Red Hat, Shopify, and others, and it was created by [Alastair Robertson][2], a talented UK-based developer who has won various coding competitions.
Linux already has many performance tools, but they are often counter-based and have limited visibility. For example, [iostat(1)][3] or a monitoring agent may tell you your average disk latency, but not the distribution of this latency. Distributions can reveal multiple modes or outliers, either of which may be the real cause of your performance problems. [Bpftrace][4] is suited for this kind of analysis: decomposing metrics into distributions or per-event logs and creating new metrics for visibility into blind spots.
You can use bpftrace via one-liners or scripts, and it ships with many prewritten tools. Here is an example that traces the distribution of read latency for a given PID (30153 in this example) and shows it as a power-of-two histogram:
```
# bpftrace -e 'kprobe:vfs_read /pid == 30153/ { @start[tid] = nsecs; }
kretprobe:vfs_read /@start[tid]/ { @ns = hist(nsecs - @start[tid]); delete(@start[tid]); }'
Attaching 2 probes...
^C
@ns:
[256, 512)         10900 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                      |
[512, 1k)          18291 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[1k, 2k)            4998 |@@@@@@@@@@@@@@                                      |
[2k, 4k)              57 |                                                    |
[4k, 8k)             117 |                                                    |
[8k, 16k)             48 |                                                    |
[16k, 32k)           109 |                                                    |
[32k, 64k)             3 |                                                    |
```
This example instruments one event out of thousands available. If you have some weird performance problem, there's probably some bpftrace one-liner that can shed light on it. For large environments, this ability can help you save millions. For smaller environments, it can be of more use in helping to eliminate latency outliers.
I [previously][5] wrote about bpftrace vs. other tracers, including [BCC][6] (BPF Compiler Collection). BCC is great for canned complex tools and agents. Bpftrace is best for short scripts and ad hoc investigations. In this article, I'll summarize the bpftrace language, variable types, probes, and tools.
Bpftrace uses BPF (Berkeley Packet Filter), an in-kernel execution engine that processes a virtual instruction set. BPF has been extended (aka eBPF) in recent years for providing a safe way to extend kernel functionality. It also has become a hot topic in systems engineering, with at least 24 talks on BPF at the last [Linux Plumber's Conference][7]. BPF is in the Linux kernel, and bpftrace is the best way to get started using BPF for observability.
See the bpftrace [INSTALL][8] guide for how to install it, and get the latest version; [0.9.2][9] was just released. For Kubernetes clusters, there is also [kubectl-trace][10] for running it.
### Syntax
```
probe[,probe,...] /filter/ { action }
```
The probe specifies what events to instrument. The filter is optional and can filter down the events based on a boolean expression, and the action is the mini-program that runs.
Here's hello world:
```
# bpftrace -e 'BEGIN { printf("Hello eBPF!\n"); }'
```
The probe is **BEGIN**, a special probe that runs at the beginning of the program (like awk). There's no filter. The action is a **printf()** statement.
Now a real example:
```
# bpftrace -e 'kretprobe:sys_read /pid == 181/ { @bytes = hist(retval); }'
```
This uses a **kretprobe** to instrument the return of the **sys_read()** kernel function. If the PID is 181, a special map variable **@bytes** is populated with a log2 histogram function with the return value **retval** of **sys_read()**. This produces a histogram of the returned read size for PID 181. Is your app doing lots of one byte reads? Maybe that can be optimized.
### Probe types
These are libraries of related probes. The currently supported types are (more will be added):
Type | Description
---|---
**tracepoint** | Kernel static instrumentation points
**usdt** | User-level statically defined tracing
**kprobe** | Kernel dynamic function instrumentation
**kretprobe** | Kernel dynamic function return instrumentation
**uprobe** | User-level dynamic function instrumentation
**uretprobe** | User-level dynamic function return instrumentation
**software** | Kernel software-based events
**hardware** | Hardware counter-based instrumentation
**watchpoint** | Memory watchpoint events (in development)
**profile** | Timed sampling across all CPUs
**interval** | Timed reporting (from one CPU)
**BEGIN** | Start of bpftrace
**END** | End of bpftrace
Dynamic instrumentation (aka dynamic tracing) is the superpower that lets you trace any software function in a running binary without restarting it. This lets you get to the bottom of just about any problem. However, the functions it exposes are not considered a stable API, as they can change from one software version to another. Hence static instrumentation, where event points are hard-coded and become a stable API. When you write bpftrace programs, try to use the static types first, before the dynamic ones, so your programs are more stable.
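One practical way to follow that advice is to check whether a static tracepoint already covers the event you care about before reaching for a kprobe. The **-l** flag (also used in the one-liners table below) lists the probes matching a pattern; for example (a sketch, with the patterns as placeholders):
```
# bpftrace -l 'tracepoint:syscalls:sys_enter_*'
# bpftrace -l 'kprobe:vfs_*'
```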
### Variable types
Variable | Description
---|---
**@name** | global
**@name[key]** | hash
**@name[tid]** | thread-local
**$name** | scratch
Variables with an **@** prefix use BPF maps, which can behave like associative arrays. They can be populated in one of two ways:
* Variable assignment: **@name = x;**
* Function assignment: **@name = hist(x);**
Various map-populating functions are built in to provide quick ways to summarize data.
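For example (a minimal sketch; both one-liners print their maps when you end them with Ctrl-C), the first fills a map by plain assignment with the most recent read return value per thread, while the second uses **count()** to summarize **vfs_read()** calls per process name:
```
# bpftrace -e 'kretprobe:vfs_read { @last[tid] = retval; }'
# bpftrace -e 'kprobe:vfs_read { @reads[comm] = count(); }'
```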
### Built-in variables and functions
Here are some of the built-in variables and functions, but there are many more.
**Built-in variables:**
Variable | Description
---|---
**pid** | process ID
**comm** | Process or command name
**nsecs** | Current time in nanoseconds
**kstack** | Kernel stack trace
**ustack** | User-level stack trace
**arg0...argN** | Function arguments
**args** | Tracepoint arguments
**retval** | Function return value
**name** | Full probe name
**Built-in functions:**
Function | Description
---|---
**printf("...")** | Print formatted string
**time("...")** | Print formatted time
**system("...")** | Run shell command
**@ = count()** | Count events
**@ = hist(x)** | Power-of-2 histogram for x
**@ = lhist(x, min, max, step)** | Linear histogram for x
See the [reference guide][11] for details.
### One-liners tutorial
A great way to learn bpftrace is via one-liners, which I turned into a [one-liners tutorial][12] that covers the following:
Listing probes | **bpftrace -l 'tracepoint:syscalls:sys_enter_*'**
---|---
Hello world | **bpftrace -e 'BEGIN { printf("hello world\n") }'**
File opens | **bpftrace -e 'tracepoint:syscalls:sys_enter_open { printf("%s %s\n", comm, str(args->filename)) }'**
Syscall counts by process | **bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count() }'**
Distribution of read() bytes | **bpftrace -e 'tracepoint:syscalls:sys_exit_read /pid == 18644/ { @bytes = hist(args->ret) }'**
Kernel dynamic tracing of read() bytes | **bpftrace -e 'kretprobe:vfs_read { @bytes = lhist(retval, 0, 2000, 200) }'**
Timing read()s | **bpftrace -e 'kprobe:vfs_read { @start[tid] = nsecs } kretprobe:vfs_read /@start[tid]/ { @ns[comm] = hist(nsecs - @start[tid]); delete(@start[tid]) }'**
Count process-level events | **bpftrace -e 'tracepoint:sched:sched* { @[name] = count() } interval:s:5 { exit() }'**
Profile on-CPU kernel stacks | **bpftrace -e 'profile:hz:99 { @[stack] = count() }'**
Scheduler tracing | **bpftrace -e 'tracepoint:sched:sched_switch { @[stack] = count() }'**
Block I/O tracing | **bpftrace -e 'tracepoint:block:block_rq_issue { @ = hist(args->bytes); }'**
Kernel struct tracing (a script, not a one-liner) | Command: **bpftrace path.bt**, where the path.bt file is shown after this table
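Reassembled from that last row, the path.bt script looks like this as a standalone file (the **struct** keyword in the cast is added here, since bpftrace casts normally require it):
```
#include <linux/path.h>
#include <linux/dcache.h>

kprobe:vfs_open
{
        printf("open path: %s\n", str(((struct path *)arg0)->dentry->d_name.name));
}
```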
See the tutorial for an explanation of each.
### Provided tools
Apart from one-liners, bpftrace programs can be multi-line scripts. Bpftrace ships with 28 of them as tools:
![bpftrace/eBPF tools][13]
These can be found in the **[/tools][14]** directory:
```
tools# ls *.bt
bashreadline.bt  dcsnoop.bt         oomkill.bt    syncsnoop.bt   vfscount.bt
biolatency.bt    execsnoop.bt       opensnoop.bt  syscount.bt    vfsstat.bt
biosnoop.bt      gethostlatency.bt  pidpersec.bt  tcpaccept.bt   writeback.bt
bitesize.bt      killsnoop.bt       runqlat.bt    tcpconnect.bt  xfsdist.bt
capable.bt       loads.bt           runqlen.bt    tcpdrop.bt
cpuwalk.bt       mdflush.bt         statsnoop.bt  tcpretrans.bt
```
Apart from their use in diagnosing performance issues and general troubleshooting, they also provide another way to learn bpftrace. Here are some examples.
#### Source
Here's the code to **biolatency.bt**:
```
tools# cat -n biolatency.bt
     1  /*
     2   * biolatency.bt    Block I/O latency as a histogram.
     3   *                  For Linux, uses bpftrace, eBPF.
     4   *
     5   * This is a bpftrace version of the bcc tool of the same name.
     6   *
     7   * Copyright 2018 Netflix, Inc.
     8   * Licensed under the Apache License, Version 2.0 (the "License")
     9   *
    10   * 13-Sep-2018  Brendan Gregg   Created this.
    11   */
    12
    13  BEGIN
    14  {
    15          printf("Tracing block device I/O... Hit Ctrl-C to end.\n");
    16  }
    17
    18  kprobe:blk_account_io_start
    19  {
    20          @start[arg0] = nsecs;
    21  }
    22
    23  kprobe:blk_account_io_done
    24  /@start[arg0]/
    25
    26  {
    27          @usecs = hist((nsecs - @start[arg0]) / 1000);
    28          delete(@start[arg0]);
    29  }
    30 
    31  END
    32  {
    33          clear(@start);
    34  }
```
It's straightforward, easy to read, and short enough to include on a slide. This version uses kernel dynamic tracing to instrument the **blk_account_io_start()** and **blk_account_io_done()** functions, and it passes a timestamp between them keyed on **arg0**. **arg0** on **kprobe** is the first argument to that function, which is the **struct request** pointer, and its memory address is used as a unique identifier.
#### Example files
You can see screenshots and explanations of these tools in the [GitHub repo][14] as `*_example.txt` files. For [example][15]:
```
tools# more biolatency_example.txt
Demonstrations of biolatency, the Linux BPF/bpftrace version.
This traces block I/O, and shows latency as a power-of-2 histogram. For example:
# biolatency.bt
Attaching 3 probes...
Tracing block device I/O... Hit Ctrl-C to end.
^C
@usecs:
[256, 512)             2 |                                                    |
[512, 1K)             10 |@                                                   |
[1K, 2K)             426 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[2K, 4K)             230 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@                        |
[4K, 8K)               9 |@                                                   |
[8K, 16K)            128 |@@@@@@@@@@@@@@@                                     |
[16K, 32K)            68 |@@@@@@@@                                            |
[32K, 64K)             0 |                                                    |
[64K, 128K)            0 |                                                    |
[128K, 256K)          10 |@                                                   |
While tracing, this shows that 426 block I/O had a latency of between 1K and 2K
usecs (1024 and 2048 microseconds), which is between 1 and 2 milliseconds.
There are also two modes visible, one between 1 and 2 milliseconds, and another
between 8 and 16 milliseconds: this sounds like cache hits and cache misses.
There were also 10 I/O with latency 128 to 256 ms: outliers. Other tools and
instrumentation, like biosnoop.bt, can shed more light on those outliers.
[...]
```
Sometimes it can be most effective to switch straight to the example file when trying to understand these tools, since the output may be self-evident (by design!).
#### Man pages
There are also man pages for every tool in the GitHub repo under [/man/man8][16]. They include sections on the output fields and the tool's expected overhead.
```
# nroff -man man/man8/biolatency.8
biolatency(8)               System Manager's Manual              biolatency(8)
NAME
       biolatency.bt - Block I/O latency as a histogram. Uses bpftrace/eBPF.
SYNOPSIS
       biolatency.bt
DESCRIPTION
       This  tool  summarizes  time  (latency) spent in block device I/O (disk
       I/O) as a power-of-2 histogram. This  allows  the  distribution  to  be
       studied,  including  modes and outliers. There are often two modes, one
       for device cache hits and one for cache misses, which can be  shown  by
       this tool. Latency outliers will also be shown.
[...]
```
Writing all these man pages was the least fun part of developing these tools, and some took longer to write than the tool took to develop, but it's nice to see the final result.
### bpftrace vs. BCC
Since eBPF has been merging into the kernel, most effort has been placed on the [BCC][6] frontend, which provides a BPF library and Python, C++, and Lua interfaces for writing programs. I've developed a lot of [tools][17] in BCC/Python; it works great, although coding in BCC is verbose. If you're hacking away at a performance issue, bpftrace is better for your one-off custom queries. If you're writing a tool with many command-line options or an agent that uses Python libraries, you'll want to consider using BCC.
On the Netflix performance team, we use both: BCC for developing canned tools that others can easily use and for developing agents; and bpftrace for ad hoc analysis. The network engineering team has been using BCC to develop an agent for its needs. The security team is most interested in bpftrace for quick ad hoc instrumentation for detecting zero-day vulnerabilities. And I expect the developer teams will use both without knowing it, via the self-service GUIs we are building (Vector), and occasionally may SSH into an instance and run a canned tool or ad hoc bpftrace one-liner.
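To give a sense of what that ad hoc work looks like, here are two one-liners of the sort we might run straight from a shell prompt (both use standard tracepoints available on recent kernels; adjust to taste):
```
# Count system calls by process name until Ctrl-C
bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'

# Show which files are being opened, and by which process
bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }'
```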
### Learn more
* The [bpftrace][4] repository on GitHub
* The bpftrace [one-liners tutorial][12]
* The bpftrace [reference guide][11]
* The [BCC][6] repository for more complex BPF-based tools
I also have a book coming out this year that covers bpftrace: _[BPF Performance Tools: Linux System and Application Observability][18]_, to be published by Addison-Wesley; it contains many new bpftrace tools.
* * *
_Thanks to Alastair Robertson for creating bpftrace, and the bpftrace, BCC, and BPF communities for all the work over the past five years._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/introduction-bpftrace
作者:[Brendan Gregg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brendang
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://ajor.co.uk/bpftrace/
[3]: https://linux.die.net/man/1/iostat
[4]: https://github.com/iovisor/bpftrace
[5]: http://www.brendangregg.com/blog/2018-10-08/dtrace-for-linux-2018.html
[6]: https://github.com/iovisor/bcc
[7]: https://www.linuxplumbersconf.org/
[8]: https://github.com/iovisor/bpftrace/blob/master/INSTALL.md
[9]: https://github.com/iovisor/bpftrace/releases/tag/v0.9.2
[10]: https://github.com/iovisor/kubectl-trace
[11]: https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md
[12]: https://github.com/iovisor/bpftrace/blob/master/docs/tutorial_one_liners.md
[13]: https://opensource.com/sites/default/files/uploads/bpftrace_tools_early2019.png (bpftrace/eBPF tools)
[14]: https://github.com/iovisor/bpftrace/tree/master/tools
[15]: https://github.com/iovisor/bpftrace/blob/master/tools/biolatency_example.txt
[16]: https://github.com/iovisor/bcc/tree/master/man/man8
[17]: https://github.com/iovisor/bcc#tools
[18]: http://www.brendangregg.com/blog/2019-07-15/bpf-performance-tools-book.html

View File

@ -0,0 +1,189 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Moving files on Linux without mv)
[#]: via: (https://opensource.com/article/19/8/moving-files-linux-without-mv)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Moving files on Linux without mv
======
Sometimes the mv command isn't the best option when you need to move a
file. So how else do you do it?
![Hand putting a Linux file folder into a drawer][1]
The humble **mv** command is one of those useful tools you find on every POSIX box you encounter. Its job is clearly defined, and it does it well: Move a file from one place in a file system to another. But Linux is nothing if not flexible, and there are other options for moving files. Using different tools can provide small advantages that fit perfectly with a specific use case.
Before straying too far from **mv**, take a look at this command's default results. First, create a directory and generate some files with permissions set to 777:
```
$ mkdir example
$ touch example/{foo,bar,baz}
$ for i in example/*; do ls /bin > "${i}"; done
$ chmod 777 example/*
```
You probably don't think about it this way, but files exist as entries, called index nodes (commonly known as **inodes**), in a [filesystem][2]. You can see what inode a file occupies with the [ls command][3] and its **\--inode** option:
```
$ ls --inode example/foo
7476868 example/foo
```
As a test, move that file from the example directory to your current directory and then view the file's attributes:
```
$ mv example/foo .
$ ls -l -G -g --inode
7476868 -rwxrwxrwx. 1 29545 Aug  2 07:28 foo
```
As you can see, the original file—along with its existing permissions—has been "moved", but its inode has not changed.
That's the way the **mv** tool is programmed to move a file: Leave the inode unchanged (unless the file is being moved to a different filesystem), and preserve its ownership and permissions.
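If you want to see the cross-filesystem case for yourself, move the file onto a different file system; on many systems **/tmp** is a separate **tmpfs**, but any other mount point will do:
```
$ ls --inode foo
7476868 foo
$ mv foo /tmp/
$ ls --inode /tmp/foo    # reports a different inode number when /tmp is a separate file system
```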
Other tools provide different options.
### Copy and remove
On some systems, the move action is a true move action: Bits are removed from one point in the file system and reassigned to another. This behavior has largely fallen out of favor. Move actions are now either attribute reassignments (an inode now points to a different location in your file organization) or amalgamations of a copy action followed by a remove action.
The philosophical intent of this design is to ensure that, should a move fail, a file is not left in pieces.
The **cp** command, unlike **mv**, creates a brand new data object in your filesystem. It has a new inode location, and it is subject to your active umask. You can mimic a move using the **cp** and **rm** (or [trash][4] if you have it) commands:
```
$ cp example/foo .
$ ls -l -G -g --inode
7476869 -rwxrwxr-x. 29545 Aug  2 11:58 foo
$ trash example/foo
```
The new **foo** file in this example got 775 permissions because the location's umask excludes the write permission for others:
```
$ umask
0002
```
For more information about umask, read Alex Juarez's article about [file permissions][5].
### Cat and remove
Similar to a copy and remove, using the [cat][6] (or **tac**, for that matter) command assigns different permissions when your "moved" file is created. Assuming a fresh test environment with no **foo** in the current directory:
```
$ cat example/foo > foo
$ ls -l -G -g --inode
7476869 -rw-rw-r--. 29545 Aug 8 12:21 foo
$ trash example/foo
```
This time, a new file was created with no prior permissions set. The result is entirely subject to the umask setting, which blocks no permission bits for the user and group (the executable bit is not granted for new files regardless of umask) but blocks the write bit (value two) from others. The result is a file with 664 permissions.
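To see the same arithmetic in isolation: new regular files start from mode 666 (no execute bit), and the umask clears whatever bits it contains. A quick sketch, with illustrative output:
```
$ umask
0002
$ touch perm-test
$ ls -l perm-test
-rw-rw-r--. 1 seth users 0 Aug  8 12:30 perm-test
$ rm perm-test
```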
### Rsync
The **rsync** command is a robust multipurpose tool to send files between hosts and file system locations. This command has many options available to it, including the ability to make its destination mirror its source.
You can copy and then remove a file with **rsync** using the **\--remove-source-files** option, along with whatever other option you choose to perform the synchronization (a common, general-purpose one is **\--archive**):
```
$ rsync --archive --remove-source-files example/foo .
$ ls example
bar  baz
$ ls -lGgi
7476870 -rwxrwxrwx. 1 seth users 29545 Aug 8 12:23 foo
```
Here you can see that file permission and ownership was retained, the timestamp was updated, and the source file was removed.
**A word of warning:** Do not confuse this option with **\--delete**, which removes files from your _destination_ directory. Misusing **\--delete** can wipe out most of your data, and it's recommended that you avoid this option except in a test environment.
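One way to protect yourself while experimenting is to rehearse the operation first. The **\--dry-run** (or **-n**) flag, combined with **\--verbose**, reports what **rsync** would do without touching any files:
```
$ rsync --archive --verbose --dry-run \
  --remove-source-files example/foo .
```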
You can override some of these defaults, changing permission and modification settings:
```
$ rsync --chmod=666 --times \
  --remove-source-files example/foo .
$ ls example
bar  baz
$ ls -lGgi
7476871 -rw-rw-r--. 1 seth users 29545 Aug 8 12:55 foo
```
Here, the destination's umask is respected, so the **\--chmod=666** option results in a file with 664 permissions.
The benefits go beyond just permissions, though. The **rsync** command has [many][7] useful [options][8] (not the least of which is the **\--exclude** flag so you can exempt items from a large move operation) that make it a more robust tool than the simple **mv** command. For example, to exclude all backup files while moving a collection of files:
```
$ rsync --chmod=666 --times \
  --exclude '*~' \
  --remove-source-files example/foo .
```
### Set permissions with install
The **install** command is a copy command specifically geared toward developers and is mostly invoked as part of the install routine of software compiling. It's not well known among users (and I do often wonder why it got such an intuitive name, leaving mere acronyms and pet names for package managers), but **install** is actually a useful way to put files where you want them.
There are many options for the **install** command, including **\--backup** and **\--compare** (to avoid "updating" a newer copy of a file).
Unlike **cp** and **cat**, but exactly like **mv**, the **install** command can copy a file while preserving its timestamp:
```
$ install --preserve-timestamp example/foo .
$ ls -l -G -g --inode
7476869 -rwxr-xr-x. 1 29545 Aug  2 07:28 foo
$ trash example/foo
```
Here, the file was copied to a new inode, but its **mtime** did not change. The permissions, however, were set to the **install** default of **755**.
You can use **install** to set the files permissions, owner, and group:
```
$ install --preserve-timestamp \
  --owner=skenlon \
  --group=dialout \
  --mode=666 example/foo .
$ ls -li
7476869 -rw-rw-rw-. 1 skenlon dialout 29545 Aug  2 07:28 foo
$ trash example/foo
```
### Move, copy, and remove
Files contain data, and the really important files contain _your_ data. Learning to manage them wisely is important, and now you have the toolkit to ensure that your data is handled in exactly the way you want.
Do you have a different way of managing your data? Tell us your ideas in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/moving-files-linux-without-mv
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/article/18/11/partition-format-drive-linux#what-is-a-filesystem
[3]: https://opensource.com/article/19/7/master-ls-command
[4]: https://gitlab.com/trashy
[5]: https://opensource.com/article/19/8/linux-permissions-101#umask
[6]: https://opensource.com/article/19/2/getting-started-cat-command
[7]: https://opensource.com/article/19/5/advanced-rsync
[8]: https://opensource.com/article/17/1/rsync-backup-linux

View File

@ -0,0 +1,102 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A brief introduction to learning agility)
[#]: via: (https://opensource.com/open-organization/19/8/introduction-learning-agility)
[#]: author: (Jen Kelchner https://opensource.com/users/jenkelchner)
A brief introduction to learning agility
======
The ability to learn and adapt quickly isn't something our hiring
algorithms typically identify. But by ignoring it, we're overlooking
insightful and innovative job candidates.
![Teacher or learner?][1]
I think everyone can agree that the workplace has changed dramatically in the last decade—or is in the process of changing, depending on where you're currently working. The landscape has evolved. Distributed leadership, project-based work models, and cross-functional solution building are commonplace. In essence, the world is going [open][2].
And yet our talent acquisition strategies, development models, and internal systems have shifted little (if at all) to meet the demands these shifts in our external work have created.
In this three-part series, let's take a look at what is perhaps the game-changing key to acquisition, retention, engagement, innovation, problem-solving, and leadership in this emerging future: learning agility. We'll discuss not only what learning agility _is_, but how your organization's leaders can create space for agile learning on teams and in departments.
### Algorithmed out of opportunities
For the last decade, I've freelanced as an independent consultant. Occasionally, when the stress of entrepreneurial, project-based work gets heavy, I search out full-time positions. As I'm sure you know, job searching requires hours of research—and often concludes in dead-ends. On a rare occasion, you find a great fit (the culture looks right and you have every skill the role could need and more!), except for one small thing: a specific educational degree.
Sticking with these outdated practices puts us in danger of overlooking amazing candidates capable of accelerating innovation and becoming amazing leaders in our organizations.
More times than I can count, I've gotten "algorithmed out" of even an initial conversation about a new position. What do I mean by that exactly?
If your specific degree—or, in my case, lack thereof—doesn't match the one listed, the algorithmically driven job portal spits your application back out. I've received a "no thank you" email within thirty seconds of hitting submit.
So why is calling this out so important?
Hiring practices have changed very little in both closed _and_ open organizations. Sticking with these outdated practices puts us in danger of overlooking amazing candidates capable of accelerating innovation and becoming amazing leaders in our organizations.
Developing more inclusive and open hiring processes will require work. For starters, it'll require focus on a key competency so often overlooked as part of more traditional, "closed" processes: Learning agility.
### Just another buzzword or key performance indicator?
While "learning agility" [is not a new term][3], it's one that organizations clearly still need help taking into account. Even in open organizations, we tend to overlook this element by focusing too rigidly on a candidate's degree history or _current role_ when we should be taking a more holistic view of the individual.
One crucial element of [adaptability][4] is learning agility. It is the capacity for adapting to situations and applying knowledge from prior experience—even when you don't know what to do. In short, it's a willingness to learn from all your experiences and then apply that knowledge to tackle new challenges in new situations.
Every experience we encounter in life can teach us something if we pay attention to it. All of these experiences are educational and useful in organizational life. In fact, as Colin Willis notes in his recent article on [informal learning][5], 70% to 80% of all job-related knowledge isn't learned in formal training programs. And yet we're conditioned to think that _only what you were paid to do in a formal role_ or _the degree you once earned_ speaks to your potential value or fit for a particular role.
Likewise, in extensive research conducted over years, Korn Ferry has shown that learning agility is also a [predictor of long-term performance][6] and leadership potential. In an [article on leadership][7], Korn Ferry notes that "individuals exhibiting high levels of learning agility can adapt quickly in unfamiliar situations and even thrive amid chaos." [Chaos][8]—there's a word I think we would all use to describe the world we live in today.
Every experience we encounter in life can teach us something if we pay attention to it.
Why do organizations continue to overlook this critical skill ([too few U.S. companies consider candidates without college degrees][9]), even though it's a proven component of success in a volatile, complex, ambiguous world?
And as adaptability and collaboration—[two key open principles][2]—sit at the top of the list of job [skills needed in 2019][10], perhaps talent acquisition conversations should stop focusing on _how to measure adaptability_ and shift to _sourcing learning agile people_ so problems can get solved faster.
### Learning agility has dimensions
A key to unlocking our adaptability during rapid change is learning agility. Agile people are great at integrating information from their experiences and then using that information to navigate unfamiliar situations. This complex set of skills allows us to draw patterns from one context and apply them to another context.
So when you're looking for an agile person to join your team, what exactly are you looking for?
Start with getting to know someone _beyond_ a resume, because learning-agile people have more lessons, more tools, and more solutions in their history that can be valuable when your organization is facing new challenges.
Next, understand the [five dimensions of learning agility][11], according to Korn Ferry's research.
**Mental Agility:** This looks like _thinking critically to decipher complex problems and expanding possibilities by seeing new connections_.
**People Agility:** This looks like _understanding and relating to other people to empower collective performance_.
**Change Agility**: This looks like _experimentation, being curious, and effectively dealing with uncertainty_.
**Results Agility:** This looks like _delivering results in first-time situations by inspiring teams and exhibiting a presence that builds confidence in themselves and others_.
**Self-Awareness:** This looks like _the ability to reflect on oneself, knowing oneself well, and understanding how one's behaviors impact others._
While finding someone with all these traits may seem like sourcing a unicorn, you'll find learning agility is more common than you think. In fact, your organization is likely _already_ full of agile people, but your culture and systems don't support agile learning.
In the next part of this series, we'll explore how you can tap into this crucial skill and create space for agile learning every day. Until then, do what you can to become more aware of the lessons you encounter _today_ that will help you solve problems _tomorrow_.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/8/introduction-learning-agility
作者:[Jen Kelchner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jenkelchner
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G (Teacher or learner?)
[2]: https://opensource.com/open-organization/resources/open-org-definition
[3]: https://www.researchgate.net/publication/321438460_Learning_agility_Its_evolution_as_a_psychological_construct_and_its_empirical_relationship_to_leader_success
[4]: https://opensource.com/open-organization/resources/open-org-maturity-model
[5]: https://opensource.com/open-organization/19/7/informal-learning-adaptability
[6]: https://cmo.cm/2TDofV4
[7]: https://focus.kornferry.com/leadership-and-talent/learning-agility-a-highly-prized-quality-in-todays-marketplace/
[8]: https://opensource.com/open-organization/19/6/innovation-delusion
[9]: https://www.cnbc.com/2018/08/16/15-companies-that-no-longer-require-employees-to-have-a-college-degree.html
[10]: https://www.weforum.org/agenda/2019/01/the-hard-and-soft-skills-to-futureproof-your-career-according-to-linkedin/
[11]: https://www.forbes.com/sites/kevincashman/2013/04/03/the-five-dimensions-of-learning-agile-leaders/#7b003b737457

View File

@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A guided tour of Linux file system types)
[#]: via: (https://www.networkworld.com/article/3432990/a-guided-tour-of-linux-file-system-types.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
A guided tour of Linux file system types
======
Linux file systems have evolved over the years, and here's a look at file system types
![Andreas Lehner / Flickr \(CC BY 2.0\)][1]
While it may not be obvious to the casual user, Linux file systems have evolved significantly over the last decade or so to make them more resistant to corruption and performance problems.
Most Linux systems today use a file system type called **ext4**. The “ext” part stands for “extended” and the 4 indicates that this is the 4th generation of this file system type. Features added over time include the ability to provide increasingly larger file systems (currently as large as 1,000,000 TiB) and much larger files (up to 16 TiB), more resistance to system crashes, and less fragmentation (scattering single files as chunks in multiple locations), which improves performance.
The **ext4** file system type also came with other improvements to performance, scalability and capacity. Metadata and journal checksums were implemented for reliability. Timestamps now track changes down to nanoseconds for better file time-stamping (e.g., file creation and last updates). And, with two additional bits in the timestamp field, the year 2038 problem (when the digitally stored date/time fields will roll over from maximum to zero) has been put off for more than 400 years (to 2446).
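You can see those nanosecond-resolution timestamps for yourself with the **stat** command; the file and the exact output below are only illustrative:
```
$ stat -c '%y' /etc/hostname
2019-08-20 09:15:02.123456789 -0400
```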
### File system types
To determine the type of file system on a Linux system, use the **df** command. The **-T** option in the command shown below provides the file system type. The **-h** option makes the disk sizes “human-readable”; in other words, it adjusts the reported units (such as M and G) in a way that makes the most sense to the people reading them.
```
$ df -hT | head -10
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 2.9G 0 2.9G 0% /dev
tmpfs tmpfs 596M 1.5M 595M 1% /run
/dev/sda1 ext4 110G 50G 55G 48% /
/dev/sdb2 ext4 457G 642M 434G 1% /apps
tmpfs tmpfs 3.0G 0 3.0G 0% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
/dev/loop0 squashfs 89M 89M 0 100% /snap/core/7270
/dev/loop2 squashfs 142M 142M 0 100% /snap/hexchat/42
```
Notice that the **/** (root) and **/apps** file systems are both **ext4** file systems, while **/dev** is a **devtmpfs** file system (one with automated device nodes populated by the kernel). Some of the other file systems shown are **tmpfs**, temporary file systems that reside in memory and/or swap partitions, and **squashfs**, read-only compressed file systems used for snap packages.
There's also the **proc** file system, which stores information on running processes.
```
$ df -T /proc
Filesystem Type 1K-blocks Used Available Use% Mounted on
proc proc 0 0 0 - /proc
```
There are a number of other file system types that you might encounter as you're moving around the overall file system. When you've moved into a directory, for example, and want to ask about the related file system, you can run a command like this:
```
$ cd /dev/mqueue; df -T .
Filesystem Type 1K-blocks Used Available Use% Mounted on
mqueue mqueue 0 0 0 - /dev/mqueue
$ cd /sys; df -T .
Filesystem Type 1K-blocks Used Available Use% Mounted on
sysfs sysfs 0 0 0 - /sys
$ cd /sys/kernel/security; df -T .
Filesystem Type 1K-blocks Used Available Use% Mounted on
securityfs securityfs 0 0 0 - /sys/kernel/security
```
As with other Linux commands, the . in these commands refers to the current location in the overall file system.
These and other unique file-system types provide some special functions. For example, securityfs provides file system support for security modules.
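If you're curious how many distinct file system types are in play on a given system, here is one quick way to tally them (a sketch using standard tools; the counts will vary from system to system):
```
$ df -T | awk 'NR>1 {print $2}' | sort | uniq -c | sort -rn
```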
Linux file systems need to be resistant to corruption, have the ability to survive system crashes, and provide fast, reliable performance. The improvements provided by the generations of **ext** file systems and the new generation of purpose-specific file system types have made Linux systems easier to manage and more reliable.
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3432990/a-guided-tour-of-linux-file-system-types.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/guided-tour-on-the-flaker_people-in-horse-drawn-carriage_germany-by-andreas-lehner-flickr-100808681-large.jpg
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A project manager's guide to Ansible)
[#]: via: (https://opensource.com/article/19/8/project-managers-guide-ansible)
[#]: author: (Rich Butkevic https://opensource.com/users/rich-butkevic)
A project manager's guide to Ansible
======
Ansible is best known for IT automation, but it can streamline
operations across the entire organization.
![Coding on a computer][1]
From application deployment to provisioning, [Ansible][2] is a powerful open source tool for automating routine IT tasks. It can help an organization's IT run smoothly, with core IT processes networked and maintained. Ansible is an advanced IT orchestration solution, and it can be deployed even over a large, complex network infrastructure.
### Project management applications for Ansible
The Ansible platform can improve an entire business' operations by streamlining the company's infrastructure. Apart from directly contributing to the efficiency of the IT team, Ansible also contributes to the productivity and efficiency of development teams.
As such, it can be used for a number of project management applications:
* Ansible Tower helps teams manage the entirety of their application lifecycle. It can take applications from development into production, giving teams more control over applications being deployed.
  * Ansible Playbook enables teams to keep their applications deployed and properly configured. A playbook can be easily written in Ansible's simple markup language (see the sketch after this list).
* Defined automated security policies allow Ansible to detect and remediate security issues automatically. By automating security policies, the company can improve its security substantially and without increasing its administrative burden.
* Automation and systemization of processes reduce project risk by improving the precision of [project and task estimation][3].
* Ansible can update applications, reducing the time the team needs to manage and maintain its systems. Keeping applications updated can be a constant time sink, and failing to update applications reduces overall security and productivity.
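As a concrete illustration of the playbook point above, here is a minimal sketch. The **webservers** host group and the choice of nginx are hypothetical; the **package** and **service** modules used are standard Ansible modules:
```
---
# web.yml: keep nginx installed and running on the hosts in the "webservers" group
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present

    - name: Ensure nginx is running and starts at boot
      service:
        name: nginx
        state: started
        enabled: true
```
Running `ansible-playbook web.yml` applies it; adding `--check` performs a dry run that reports what would change without changing anything.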
### Ansible's core benefits
There are many IT solutions for automating tasks or managing IT infrastructure. But Ansible is so popular because of its advantages over other IT automation solutions:
1. Ansible is free. As an open source solution, you don't have to pay for Ansible. Many commercial products require per-seat licensing or annual licensing subscriptions, which can add up.
2. Ansible doesn't require an agent. It can be installed server-side only, requiring less interaction from end users. Other solutions require both server-side and endpoint installations, which takes a significant amount of time to manage. Not only do end users have to install these solutions on their own devices, but they also need to keep them updated and patched. Ansible doesn't require this type of maintenance.
3. Ansible is easy to install and manage out of the box. It can be quickly installed, configured, and customized, so organizations can begin reaping its benefits in managing and monitoring IT solutions immediately.
4. Ansible is flexible and can automate and control many types of IT tasks. The Ansible Playbook makes it easy to quickly code new tasks in a human-readable scripting language. Many other automation solutions require in-depth knowledge of programming languages, possibly even learning a proprietary programming language.
  5. Ansible has an active community, with nearly 3,000 people contributing to the project. The robust open source community provides pre-programmed solutions and answers for more niche problems. Ansible's community ensures that it is stable, reliable, and constantly growing.
6. Ansible is versatile and can be used in virtually any IT environment. Since it is both reliable and scalable, it is suitable for rapidly growing network environments.
### Ansible makes IT automation easier
Ansible is an out-of-the-box, open source automation solution that can schedule tasks and manage configurations over complex networks. Although it's intuitive and easy to use, it's also very robust; it has its own scripting language that can be used to program more complex functionality.
As an open source tool, Ansible is cost-effective and well-supported. The Ansible community is large and active, providing solutions for most common use cases and providing support as needed. Companies working towards IT automation can begin with an Ansible deployment and save a significant amount of money and time compared to commercial solutions.
For project managers, it's important to know that deploying Ansible will improve the effectiveness of a company's IT. Employees will spend less time trying to troubleshoot their own configuration, deployment, and provisioning. Ansible is designed to be a straightforward, reliable way to automate a network's IT tasks.
Further, development teams can use the Ansible Tower to track applications from development to production. Ansible Tower includes everything from role-based access to graphical inventory management and enables teams to remain on the same page even with complex tasks.
Ansible has a number of fantastic use cases and provides substantial productivity gains for both internal teams and the IT infrastructure as a whole. It's free, easy to use, and robust. By automating IT with Ansible, project managers will find that their teams can work more effectively without the burden of having to manage their own IT—and that IT works more smoothly overall.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/project-managers-guide-ansible
作者:[Rich Butkevic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rich-butkevic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://www.ansible.com/
[3]: https://www.projecttimes.com/articles/avoiding-the-planning-fallacy-improving-your-project-estimates.html

View File

@ -0,0 +1,307 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Go compiler intrinsics)
[#]: via: (https://dave.cheney.net/2019/08/20/go-compiler-intrinsics)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
Go compiler intrinsics
======
Go allows authors to write functions in assembly if required. The Go side of such a function is a _stub_, or _forward_, declaration: a function declared without a body.
```
package asm
// Add returns the sum of a and b.
func Add(a int64, b int64) int64
```
Here we're declaring `Add`, a function which takes two `int64`s and returns their sum. `Add` is a normal Go function declaration, except it is missing the function body.
If we were to try to compile this package, the compiler, justifiably, complains:
```
% go build
examples/asm
./decl.go:4:6: missing function body
```
To satisfy the compiler we must supply a body for `Add` via assembly, which we do by adding a `.s` file in the same package.
```
TEXT ·Add(SB), $0-24
	MOVQ a+0(FP), AX
	ADDQ b+8(FP), AX
	MOVQ AX, ret+16(FP)
	RET
```
Now we can build, test, and use our `Add` function just like normal Go code. But there's a problem: assembly functions cannot be inlined.
This has long been a complaint by Go developers who want to use assembly either for performance or to access operations which are not exposed in the language. Some examples would be vector instructions, atomic instructions, and so on. Without the ability to inline assembly, functions written this way carry a relatively large call overhead.
```
var Result int64

func BenchmarkAddNative(b *testing.B) {
	var r int64
	for i := 0; i < b.N; i++ {
		r = int64(i) + int64(i)
	}
	Result = r
}

func BenchmarkAddAsm(b *testing.B) {
	var r int64
	for i := 0; i < b.N; i++ {
		r = Add(int64(i), int64(i))
	}
	Result = r
}
```
```
BenchmarkAddNative-8    1000000000               0.300 ns/op
BenchmarkAddAsm-8       606165915                1.93 ns/op
```
Over the years there have been various proposals for an inline assembly syntax similar to gcc's `asm(...)` directive. None have been accepted by the Go team. Instead, Go has added _intrinsic functions_[1][1].
An intrinsic function is Go code written in regular Go. These functions are known to the Go compiler, which contains replacements that it can substitute during compilation. As of Go 1.13, the packages the compiler knows about are:
* `math/bits`
* `sync/atomic`
The functions in these packages have baroque signatures, but this lets the compiler transparently replace the function call with comparable native instructions if your architecture supports a more efficient way of performing the operation.
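To make that concrete, here is a small, self-contained sketch; nothing in the source is architecture-specific, and the comments describe what the compiler typically does on amd64:
```
package main

import (
	"fmt"
	"math/bits"
	"sync/atomic"
)

func main() {
	// On amd64 CPUs with POPCNT support, the compiler typically lowers this
	// call to a single POPCNT instruction instead of a function call.
	fmt.Println(bits.OnesCount64(0xdeadbeef))

	// Likewise, this is typically compiled to a LOCK XADDQ rather than a call.
	var n uint64
	atomic.AddUint64(&n, 1)
	fmt.Println(n)
}
```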
For the remainder of this post we'll study two different ways the Go compiler produces more efficient code using intrinsics.
### Ones count
Population count, the number of `1` bits in a word, is an important cryptographic and compression primitive. Because this is an important operation, most modern CPUs provide a native hardware implementation.
The `math/bits` package exposes support for this operation via the `OnesCount` series of functions. The various `OnesCount` functions are recognised by the compiler and, depending on the CPU architecture and the version of Go, will be replaced with the native hardware instruction.
To see how effective this can be, let's compare three different ones count implementations. The first is Kernighan's algorithm[2][2].
```
func kernighan(x uint64) int {
	var count int
	for ; x > 0; x &= (x - 1) {
		count++
	}
	return count
}
```
This algorithm has a maximum loop count of the number of bits set; the more bits set, the more loops it will take.
The second algorithm is taken from Hacker's Delight via [issue 14813][3].
```
func hackersdelight(x uint64) int {
	const m1 = 0x5555555555555555
	const m2 = 0x3333333333333333
	const m4 = 0x0f0f0f0f0f0f0f0f
	const h01 = 0x0101010101010101

	x -= (x >> 1) & m1
	x = (x & m2) + ((x >> 2) & m2)
	x = (x + (x >> 4)) & m4
	return int((x * h01) >> 56)
}
```
Lots of clever bit twiddling allows this version to run in constant time and optimises very well if the input is a constant (the whole thing optimises away if the compiler can figure out the answer at compile time).
Let's benchmark these implementations against `math/bits.OnesCount64`.
```
var Result int

func BenchmarkKernighan(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		r = kernighan(uint64(i))
	}
	Result = r
}

func BenchmarkPopcnt(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		r = hackersdelight(uint64(i))
	}
	Result = r
}

func BenchmarkMathBitsOnesCount64(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		r = bits.OnesCount64(uint64(i))
	}
	Result = r
}
```
To keep it fair, we're feeding each function under test the same input: a sequence of integers from zero to `b.N`. This is fairer to Kernighan's method, as its runtime increases with the number of one bits in the input argument.[3][4]
```
BenchmarkKernighan-8                    100000000               11.2 ns/op
BenchmarkPopcnt-8                       618312062                2.02 ns/op
BenchmarkMathBitsOnesCount64-8          1000000000               0.565 ns/op
```
The winner by nearly 4x is `math/bits.OnesCount64`, but is this really using a hardware instruction, or is the compiler just doing a better job at optimising this code? Let's check the assembly:
```
% go test -c
% go tool objdump -s MathBitsOnesCount popcnt-intrinsic.test
TEXT examples/popcnt-intrinsic.BenchmarkMathBitsOnesCount64(SB) /examples/popcnt-intrinsic/popcnt_test.go
  popcnt_test.go:45     0x10f8610               65488b0c2530000000      MOVQ GS:0x30, CX
  popcnt_test.go:45     0x10f8619               483b6110                CMPQ 0x10(CX), SP
  popcnt_test.go:45     0x10f861d               7668                    JBE 0x10f8687
  popcnt_test.go:45     0x10f861f               4883ec20                SUBQ $0x20, SP
  popcnt_test.go:45     0x10f8623               48896c2418              MOVQ BP, 0x18(SP)
  popcnt_test.go:45     0x10f8628               488d6c2418              LEAQ 0x18(SP), BP
  popcnt_test.go:47     0x10f862d               488b442428              MOVQ 0x28(SP), AX
  popcnt_test.go:47     0x10f8632               31c9                    XORL CX, CX
  popcnt_test.go:47     0x10f8634               31d2                    XORL DX, DX
  popcnt_test.go:47     0x10f8636               eb03                    JMP 0x10f863b
  popcnt_test.go:47     0x10f8638               48ffc1                  INCQ CX
  popcnt_test.go:47     0x10f863b               48398808010000          CMPQ CX, 0x108(AX)
  popcnt_test.go:47     0x10f8642               7e32                    JLE 0x10f8676
  popcnt_test.go:48     0x10f8644               803d29d5150000          CMPB $0x0, runtime.x86HasPOPCNT(SB)
  popcnt_test.go:48     0x10f864b               740a                    JE 0x10f8657
  popcnt_test.go:48     0x10f864d               4831d2                  XORQ DX, DX
  popcnt_test.go:48     0x10f8650               f3480fb8d1              POPCNT CX, DX // math/bits.OnesCount64
  popcnt_test.go:48     0x10f8655               ebe1                    JMP 0x10f8638
  popcnt_test.go:47     0x10f8657               48894c2410              MOVQ CX, 0x10(SP)
  popcnt_test.go:48     0x10f865c               48890c24                MOVQ CX, 0(SP)
  popcnt_test.go:48     0x10f8660               e87b28f8ff              CALL math/bits.OnesCount64(SB)
  popcnt_test.go:48     0x10f8665               488b542408              MOVQ 0x8(SP), DX
  popcnt_test.go:47     0x10f866a               488b442428              MOVQ 0x28(SP), AX
  popcnt_test.go:47     0x10f866f               488b4c2410              MOVQ 0x10(SP), CX
  popcnt_test.go:48     0x10f8674               ebc2                    JMP 0x10f8638
  popcnt_test.go:50     0x10f8676               48891563d51500          MOVQ DX, examples/popcnt-intrinsic.Result(SB)
  popcnt_test.go:51     0x10f867d               488b6c2418              MOVQ 0x18(SP), BP
  popcnt_test.go:51     0x10f8682               4883c420                ADDQ $0x20, SP
  popcnt_test.go:51     0x10f8686               c3                      RET
  popcnt_test.go:45     0x10f8687               e884eef5ff              CALL runtime.morestack_noctxt(SB)
  popcnt_test.go:45     0x10f868c               eb82                    JMP examples/popcnt-intrinsic.BenchmarkMathBitsOnesCount64(SB)
  :-1                   0x10f868e               cc                      INT $0x3
  :-1                   0x10f868f               cc                      INT $0x3
```
There's quite a bit going on here, but the key takeaway is on line 48 (taken from the source code of the `_test.go` file): the program is using the x86 `POPCNT` instruction, as we hoped. This turns out to be faster than bit twiddling.
Of interest is the comparison two instructions prior to the `POPCNT`,
```
CMPB $0x0, runtime.x86HasPOPCNT(SB)
```
As not all Intel CPUs support `POPCNT`, the Go runtime records at startup whether the CPU has the necessary support and stores the result in `runtime.x86HasPOPCNT`. Each time through the benchmark loop, the program checks _does the CPU have POPCNT support_ before it issues the `POPCNT` request.
The value of `runtime.x86HasPOPCNT` isn't expected to change during the life of the program's execution, so the result of the check should be highly predictable, making the check relatively cheap.
### Atomic counter
As well as generating more efficient code, intrinsic functions are just regular Go code, and the rules of inlining (including mid-stack inlining) apply equally to them.
Here's an example of an atomic counter type. It's got methods on types, method calls several layers deep, multiple packages, etc.
```
import (
	"sync/atomic"
)

type counter uint64

func (c *counter) get() uint64 {
	return atomic.LoadUint64((*uint64)(c))
}

func (c *counter) inc() uint64 {
	return atomic.AddUint64((*uint64)(c), 1)
}

func (c *counter) reset() uint64 {
	return atomic.SwapUint64((*uint64)(c), 0)
}

var c counter

func f() uint64 {
	c.inc()
	c.get()
	return c.reset()
}
```
You'd be forgiven for thinking this would have a lot of overhead. However, because of the interaction between inlining and compiler intrinsics, this code collapses down to efficient native code on most platforms.
```
TEXT main.f(SB) examples/counter/counter.go
  counter.go:23         0x10512e0               90                      NOPL
  counter.go:29         0x10512e1               b801000000              MOVL $0x1, AX
  counter.go:13         0x10512e6               488d0d0bca0800          LEAQ main.c(SB), CX
  counter.go:13         0x10512ed               f0480fc101              LOCK XADDQ AX, 0(CX) // c.inc
  counter.go:24         0x10512f2               90                      NOPL
  counter.go:10         0x10512f3               488b05fec90800          MOVQ main.c(SB), AX // c.get
  counter.go:25         0x10512fa               90                      NOPL
  counter.go:16         0x10512fb               31c0                    XORL AX, AX
  counter.go:16         0x10512fd               488701                  XCHGQ AX, 0(CX) // c.reset
  counter.go:16         0x1051300               c3                      RET
```
By way of explanation: the first operation, `counter.go:13`, is `c.inc`, a `LOCK`ed `XADDQ`, which on x86 is an atomic increment. The second, `counter.go:10`, is `c.get`, which on x86, due to its strong memory consistency model, is a regular load from memory. The final operation, `counter.go:16`, `c.reset`, is an atomic exchange of the address in `CX` with `AX`, which was zeroed on the previous line. This puts the value in `AX`, zero, into the address stored in `CX`. The value previously stored at `(CX)` is discarded.
### Conclusion
Intrinsics are a neat solution that gives Go programmers access to low-level architectural operations without having to extend the specification of the language. If an architecture doesn't have a specific `sync/atomic` primitive (like some ARM variants), or a `math/bits` operation, then the compiler transparently falls back to the operation written in pure Go.
1. This may not be their official name, however the word is in common use inside the compiler and its tests[][5]
2. The C Programming Language 2nd Ed, 1998[][6]
3. As extra credit homework, try passing `0xdeadbeefdeadbeef` to each function under test and observe the results.[][7]
#### Related posts:
1. [Notes on exploring the compiler flags in the Go compiler suite][8]
2. [Padding is hard][9]
3. [Should methods be declared on T or *T][10]
4. [Wednesday pop quiz: spot the race][11]
--------------------------------------------------------------------------------
via: https://dave.cheney.net/2019/08/20/go-compiler-intrinsics
作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: tmp.OyARdRB2s8#easy-footnote-bottom-1-3803 (This may not be their official name, however the word is in common use inside the compiler and its tests)
[2]: tmp.OyARdRB2s8#easy-footnote-bottom-2-3803 (The C Programming Language 2nd Ed, 1998)
[3]: https://github.com/golang/go/issues/14813
[4]: tmp.OyARdRB2s8#easy-footnote-bottom-3-3803 (As extra credit homework, try passing <code>0xdeadbeefdeadbeef</code> to each function under test and observe the results.)
[5]: tmp.OyARdRB2s8#easy-footnote-1-3803
[6]: tmp.OyARdRB2s8#easy-footnote-2-3803
[7]: tmp.OyARdRB2s8#easy-footnote-3-3803
[8]: https://dave.cheney.net/2012/10/07/notes-on-exploring-the-compiler-flags-in-the-go-compiler-suite (Notes on exploring the compiler flags in the Go compiler suite)
[9]: https://dave.cheney.net/2015/10/09/padding-is-hard (Padding is hard)
[10]: https://dave.cheney.net/2016/03/19/should-methods-be-declared-on-t-or-t (Should methods be declared on T or *T)
[11]: https://dave.cheney.net/2015/11/18/wednesday-pop-quiz-spot-the-race (Wednesday pop quiz: spot the race)

View File

@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The infrastructure is code: A story of COBOL and Go)
[#]: via: (https://opensource.com/article/19/8/command-line-heroes-cobol-golang)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
The infrastructure is code: A story of COBOL and Go
======
COBOL remains the dominant language of mainframes. What can Go learn
from its history to dominate the cloud?
![Listen to the Command Line Heroes Podcast][1]
Old challenges are new again. In [this week's Command Line Heroes podcast][2] (Season 3, Episode 5), that thought comes with a twist of programming languages and platforms.
### COBOL dominates the mainframe
One of the most brilliant minds in all of computer science is [Grace Murray Hopper][3]. Every time we don't have to write in binary to talk to computers, I recommend saying out loud: "Thank you, Rear Admiral Grace Murray Hopper." Try it next time, for she is the one who invented the first compiler (the software that translates programming code to machine language).
Hopper was essential to the invention and adoption of high-level programming languages, the first of which was COBOL. She helped create the **CO**mmon **B**usiness-**O**riented **L**anguage (COBOL for short) in 1959. As Ritika Trikha put it on [HackerRank][4]:
> "Grace Hopper, the mother of COBOL, helped champion the creation of this brand-new programming language that aimed to function across all business systems, saving an immense amount of time and money. Hopper was also the first to believe that programming languages should read just like English instead of computer jargon. Hence why COBOL's syntax is so wordy. But it helped humanize the computing process for businesses during an era when computing was intensive and prevalent only in research facilities."
In the early 1960s, mainframes were a wild new architecture for sharing powerful amounts of computation. And in the era of mainframe computing, COBOL dominated the landscape.
### COBOL in today's world
But what about today? With the decline of mainframes and the rise of newer and more innovative languages designed for the web and cloud, where does COBOL sit?
As last week's episode of Command Line Heroes mentioned, in the late 1990s, [Perl][5] (as well as JavaScript and C++) was outpacing COBOL. And, as Perl's creator, [Larry Wall stated then][6]: "COBOL is no big deal these days since demand for COBOL seems to be trailing off, for some strange reason."
Fast forward to 2019, and COBOL has far from "trailed off." As David Cassel wrote on [The New Stack][7] in 2017:
> "About 95% of ATM swipes use COBOL code, Reuters [reported in April][8], and the 58-year-old language even powers 80% of in-person transactions. In fact, Reuters calculates that there's still 220 billion lines of COBOL code currently being used in production today, and that every day, COBOL systems handle $3 trillion in commerce."
Given its continued significance in the business world, knowing COBOL can be a great career move. Top COBOL programmers can expect to [make six figures][9] due to the limited number of people who specialize in the language.
### Go dominates in the cloud, for now
That story of COBOL's early dominance rings a bell for me. If we survey the most influential projects of this cloud computing era, you'd be hard-pressed to miss Go sitting at the top of the pack. Kubernetes and much of its related technology—from Etcd to Prometheus—are written in Go. As [RedMonk explored][10] back in 2014:
> "Go's rapidly closing in on 1% of total commits and half a percent of projects and contributors. While the trend is obviously interesting, at first glance, numbers well under one percent look inconsequential relative to overall adoption. To provide some context, however, each of the most popular languages on Ohloh (C, C++, Java, JavaScript) only constitute ~10% of commits and ~5% of projects and contributors. **That means Go, a seemingly very minor player, is already used nearly one-tenth as much in FOSS as the most popular languages in existence**."
In two of my previous jobs, my team (re)wrote infrastructure software in Go to be part of this monumental wave. Influential projects continue to live in the space that Go can fill, as [Uday Hiwarale explained][11] well in 2018:
> "Things that make Go a great language [are] its simple concurrency model, its package-based code management, and its non-strict (type inference) typing system. Go does not support out-of-the box object-oriented programming experience, but [its] support structures (structs) …, with the help of methods and pointers, can help us achieve the same [outcomes]."
It looks to me like Go could be following in COBOL's footsteps, but questions remain about where it's going. In June 2019, [RedMonk ranked][12] Go in 16th place, with a future that could lead either direction.
### What can Go learn from COBOL?
If Go were to see into its future, would it look like COBOL's, with such staying power?
The stories told this season by Command Line Heroes illustrate how languages are born, how communities form around them, how they rise in popularity and standardize, and how some slowly decline. What can we learn about the lifespan of programming languages? Do they have a similar arc? Or do they differ?
I think this podcast is well worth [subscribing so that you don't miss a single one][2]. I would love to hear your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/command-line-heroes-cobol-golang
作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command-line-heroes-520x292.png?itok=s_F6YEoS (Listen to the Command Line Heroes Podcast)
[2]: https://www.redhat.com/en/command-line-heroes
[3]: https://www.biography.com/scientist/grace-hopper
[4]: https://blog.hackerrank.com/the-inevitable-return-of-cobol/
[5]: https://opensource.com/article/19/8/command-line-heroes-perl
[6]: http://www.wall.org/~larry/onion3/talk.html
[7]: https://thenewstack.io/cobol-everywhere-will-maintain/
[8]: http://fingfx.thomsonreuters.com/gfx/rngs/USA-BANKS-COBOL/010040KH18J/index.html
[9]: https://www.laserfiche.com/ecmblog/looking-job-hows-your-cobol/
[10]: https://redmonk.com/dberkholz/2014/03/18/go-the-emerging-language-of-cloud-infrastructure/
[11]: https://medium.com/rungo/introduction-to-go-programming-language-golang-89d16ca72bbf
[12]: https://redmonk.com/sogrady/2019/07/18/language-rankings-6-19/

View File

@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 notable open source 3D printers)
[#]: via: (https://opensource.com/article/19/8/3D-printers)
[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg)
5 notable open source 3D printers
======
A roundup of the latest notable open source 3D printers.
![Open source 3D printed coin][1]
Open source hardware and 3D printers go together like, well, open source hardware and 3D printers. Not only are 3D printers used to create all sorts of open source hardware—there are also a huge number of 3D printers that have been [certified as open source][2] by the [Open Source Hardware Association][3]. That fact means that they are freely available to improve and build upon.
There are plenty of open source 3D printers out there, with more being certified on a regular basis. Here's a look at some of the latest.
### BigFDM
![The BigFDM 3D printer by Daniele Ingrassia.][4]
_The BigFDM 3D printer by Daniele Ingrassia._
German-designed and UAE-built, the [BigFDM][5] is both the second-newest and the biggest certified open source 3D printer here. It was certified on July 14 and has a massive 800x800x900mm printing area, making it possibly big enough to print a full replica of many of the other printers on this list.
### Creatable 3D
![The Creatable 3D printer by Creatable Labs.][6]
_The Creatable 3D printer by Creatable Labs._
Certified on July 30, the [Creatable 3D][7] is the most recently certified printer on this list. It is the only [delta-style][8] 3D printer of the bunch, a design that makes it faster. It is also the first piece of certified open source hardware from South Korea, sporting the certification UID SK000001.
### Ender CR-10
![The Ender CR-10 3D printer by Creality3d.][9]
_The Ender CR-10 3D printer by Creality3d._
[Ender's CR-10][10] is a well-known 3D printer that has been certified as open source. That means this Chinese 3D printer is fully documented and licensed to allow others to build upon it. Ender also certified its [Ender 3][11] printer as open source hardware.
### LulzBot TAZ Workhorse
![The LulzBot TAZ Workhorse by Aleph Objects.][12]
_The LulzBot TAZ Workhorse by Aleph Objects._
Colorado-based Aleph Objects—creators of the LulzBot line of 3D printers—is the most prolific certifier of open source 3D printers and printer components. Their [TAZ Workhorse][13] was just certified in June, making it the latest in a long line of printers and printer elements that LulzBot has certified as open source hardware. If you are in the market for a hot end, extruder, board, or pretty much any other 3D printer component, and want to make sure that it is certified open source hardware, you will likely find something from Aleph Objects in their [certification directory][2].
### Nautilus
![The Nautilus 3D printer by Hydra Research.][14]
_The Nautilus 3D printer by Hydra Research._
Hydra Researchs [Nautilus][15] was just certified on July 10, making it the third-most recently certified printer of the bunch. It features removable build plates and a fully enclosed build area and hails from Oregon.
### IC3D open source filament
![IC3D open source 3D printer filament.][16]
_The IC3D Open Source Filament._
What will you put in your open source 3D printer? Open source 3D printing filament, of course. Ohios IC3D certified a full line of open source 3D printing filament for all of your open source 3D printing needs, including their:
* [ABS 3D Printing Filament][17]
* [PETG 3D Printing Filament][18]
* [PLA 3D Printing Filament][19]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/3D-printers
作者:[Michael Weinberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mweinberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_source_print_resize.jpg?itok=v6z2FtLS (Open source 3D printed coin)
[2]: https://certification.oshwa.org/list.html
[3]: https://www.oshwa.org/
[4]: https://opensource.com/sites/default/files/uploads/bigfdm.png (The BigFDM 3D printer by Daniele Ingrassia.)
[5]: https://certification.oshwa.org/de000013.html
[6]: https://opensource.com/sites/default/files/uploads/creatable_3d.png (The Creatable 3D printer by Creatable Labs.)
[7]: https://certification.oshwa.org/kr000001.html
[8]: https://www.youtube.com/watch?v=BTU6UGm15Zc
[9]: https://opensource.com/sites/default/files/uploads/ender_cr-10.png (The Ender CR-10 3D printer by Creality3d.)
[10]: https://certification.oshwa.org/cn000005.html
[11]: https://certification.oshwa.org/cn000003.html
[12]: https://opensource.com/sites/default/files/uploads/lulzbot_taz_workhorse.png (The LulzBot TAZ Workhorse by Aleph Objects.)
[13]: https://certification.oshwa.org/us000161.html
[14]: https://opensource.com/sites/default/files/uploads/hydra_research_nautilus.png (The Nautilus 3D printer by Hydra Research.)
[15]: https://certification.oshwa.org/us000166.html
[16]: https://opensource.com/sites/default/files/uploads/ic3d_open_source_filament.png (The IC3D Open Source Filament.)
[17]: https://certification.oshwa.org/us000066.html
[18]: https://certification.oshwa.org/us000131.html
[19]: https://certification.oshwa.org/us000130.html

View File

@ -0,0 +1,189 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Build a distributed NoSQL database with Apache Cassandra)
[#]: via: (https://opensource.com/article/19/8/how-set-apache-cassandra-cluster)
[#]: author: (James Farrell https://opensource.com/users/jamesfhttps://opensource.com/users/ben-bromhead)
Build a distributed NoSQL database with Apache Cassandra
======
Set up a basic three-node Cassandra cluster from scratch with some extra
bits for replication and future expansion.
![Woman programming][1]
Recently, I got a rush request to get a three-node [Apache Cassandra][2] cluster with a replication factor of two working for a development job. I had little idea what that meant but needed to figure it out quickly—a typical day in a sysadmin's job.
Here's how to set up a basic three-node Cassandra cluster from scratch with some extra bits for replication and future node expansion.
### Basic nodes needed
To start, you need some basic Linux machines. For a production install, you would likely put physical machines into racks, data centers, and diverse locations. For development, you just need something suitably sized for the scale of your development. I used three CentOS 7 virtual machines on VMware that have 20GB thin provisioned disks, two processors, and 4GB of RAM. These three machines are called: CS1 (192.168.0.110), CS2 (192.168.0.120), and CS3 (192.168.0.130).
First, do a minimal install of CentOS 7 as an operating system on each machine. To run this in production with CentOS, consider [tweaking][3] your [firewalld][4] and [SELinux][5]. Since this cluster would be used just for initial development, I turned them off.
The only other requirement is an OpenJDK 1.8 installation, which is available from the CentOS repository.
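For reference, a minimal sketch of that install (assuming the stock CentOS 7 repositories; the exact package name on your system may differ) looks like this:
```
# yum install -y java-1.8.0-openjdk
# java -version
```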
### Installation
Create a **cass** user account on each machine. To ensure no variation between nodes, force the same UID on each install:
```
$ useradd --create-home \
    --uid 1099 cass
$ passwd cass
```
[Download][6] the current version of Apache Cassandra (3.11.4 as I'm writing this). Extract the Cassandra archive in the **cass** home directory like this:
```
$ tar zfvx apache-cassandra-3.11.4-bin.tar.gz
```
The complete software is contained in **~cass/apache-cassandra-3.11.4**. For a quick development trial, this is fine. The data files are there, and the **conf/** directory has the important bits needed to tune these nodes into a real cluster.
### Configuration
Out of the box, Cassandra runs as a localhost one-node cluster. That is convenient for a quick look, but the goal here is a real cluster that external clients can access and that provides the option to add additional nodes when development and tests need to broaden. The two configuration files to look at are **conf/cassandra.yaml** and **conf/cassandra-rackdc.properties**.
First, edit **conf/cassandra.yaml** to set the cluster name, network, and remote procedure call (RPC) interfaces; define peers; and change the strategy for routing requests and replication.
Edit **conf/cassandra.yaml** on each of the cluster nodes.
Change the cluster name to be the same on each node: 
```
cluster_name: 'DevClust'
```
Change the following two entries to match the primary IP address of the node you are working on:
```
listen_address: 192.168.0.110
rpc_address:  192.168.0.110
```
Find the **seed_provider** entry and look for the **\- seeds:** configuration line. Edit each node to include all your nodes:
```
        - seeds: "192.168.0.110, 192.168.0.120, 192.168.0.130"
```
This enables the local Cassandra instance to see all its peers (including itself).
Look for the **endpoint_snitch** setting and change it to:
```
endpoint_snitch: GossipingPropertyFileSnitch
```
The **endpoint_snitch** setting enables flexibility later on if new nodes need to be joined. The Cassandra documentation indicates that **GossipingPropertyFileSnitch** is the preferred setting for production use; it is also necessary to set the replication strategy that will be presented below.
Save and close the **cassandra.yaml** file.
Open the **conf/cassandra-rackdc.properties** file and change the default values for **dc=** and **rack=**. They can be anything that is unique and does not conflict with other local installs. For production, you would put more thought into how to organize your racks and data centers. For this example, I used generic names like:
```
dc=NJDC
rack=rack001
```
### Start the cluster
On each node, log into the account where Cassandra is installed (**cass** in this example), enter **cd apache-cassandra-3.11.4/bin**, and run **./cassandra**. A long list of messages will print to the terminal, and the Java process will run in the background.
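Condensed into commands on one node (a sketch assuming you are root and the install layout from above), that amounts to roughly:
```
# su - cass
$ cd apache-cassandra-3.11.4/bin
$ ./cassandra
```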
### Confirm the cluster
While logged into the Cassandra user account, go to the **bin** directory and run **$ ./nodetool status**. If everything went well, you should see something like:
```
$ ./nodetool status
INFO  [main] 2019-08-04 15:14:18,361 Gossiper.java:1715 - No gossip backlog; proceeding
Datacenter: NJDC
================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens       Owns (effective)  Host ID                               Rack
UN  192.168.0.110  195.26 KiB  256          69.2%             0abc7ad5-6409-4fe3-a4e5-c0a31bd73349  rack001
UN  192.168.0.120  195.18 KiB  256          63.0%             b7ae87e5-1eab-4eb9-bcf7-4d07e4d5bd71  rack001
UN  192.168.0.130  117.96 KiB  256          67.8%             b36bb943-8ba1-4f2e-a5f9-de1a54f8d703  rack001
```
This means the cluster sees all the nodes and prints some interesting information.
Note that if **cassandra.yaml** uses the default **endpoint_snitch: SimpleSnitch**, the **nodetool** command above indicates the default locations as **Datacenter: datacenter1** and the racks as **rack1**. In the example output above, the **cassandra-rackdc.properties** values are evident.
### Run some CQL
This is where the replication factor setting comes in.
Create a keyspace with a replication factor of two. From any one of the cluster nodes, go to the **bin** directory and run **./cqlsh 192.168.0.130** (substitute the appropriate cluster node IP address). You can see the default administrative keyspaces with the following:
```
cqlsh> SELECT * FROM system_schema.keyspaces;
 keyspace_name      | durable_writes | replication
--------------------+----------------+-------------------------------------------------------------------------------------
        system_auth |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '1'}
      system_schema |           True |                             {'class': 'org.apache.cassandra.locator.LocalStrategy'}
 system_distributed |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
             system |           True |                             {'class': 'org.apache.cassandra.locator.LocalStrategy'}
      system_traces |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '2'}
```
Create a new keyspace with replication factor two, insert some rows, then recall some data:
```
cqlsh> CREATE KEYSPACE TestSpace WITH replication = {'class': 'NetworkTopologyStrategy', 'NJDC' : 2};
cqlsh> select * from system_schema.keyspaces where keyspace_name='testspace';
 keyspace_name | durable_writes | replication
---------------+----------------+--------------------------------------------------------------------------------
     testspace |           True | {'NJDC': '2', 'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy'}
cqlsh> use testspace;
cqlsh:testspace> create table users ( userid int PRIMARY KEY, email text, name text );
cqlsh:testspace> insert into users (userid, email, name) VALUES (1, 'jd@somedomain.com', 'John Doe');
cqlsh:testspace> select * from users;
 userid | email             | name
--------+-------------------+----------
      1 | jd@somedomain.com | John Doe
```
Now you have a basic three-node Cassandra cluster running and ready for some development and testing work. The CQL syntax is similar to standard SQL, as you can see from the familiar commands to create a table, insert, and query data.
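If you are curious where those two replicas actually landed, **nodetool** can report the endpoints that own a given key. The addresses below are only illustrative; which nodes are returned depends on your token assignments:
```
$ ./nodetool getendpoints testspace users 1
192.168.0.120
192.168.0.130
```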
### Conclusion
Apache Cassandra seems like an interesting NoSQL clustered database, and I'm looking forward to diving deeper into its use. This simple setup only scratches the surface of the options available. I hope this three-node primer helps you get started with it, too.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/how-set-apache-cassandra-cluster
作者:[James Farrell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jamesfhttps://opensource.com/users/ben-bromhead
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: http://cassandra.apache.org/
[3]: https://opensource.com/article/19/7/make-linux-stronger-firewalls
[4]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[5]: https://opensource.com/business/13/11/selinux-policy-guide
[6]: https://cassandra.apache.org/download/
[7]: mailto:jd@somedomain.com

View File

@ -0,0 +1,117 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting Started with Go on Fedora)
[#]: via: (https://fedoramagazine.org/getting-started-with-go-on-fedora/)
[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)
Getting Started with Go on Fedora
======
![][1]
The [Go][2] programming language was first publicly announced in 2009, and since then it has become widely adopted. In particular, Go has become a reference point in the world of cloud infrastructure, with big projects like [Kubernetes][3], [OpenShift][4], and [Terraform][5], for example.
Some of the main reasons for Go's increasing popularity are its performance, the ease of writing fast concurrent applications, the simplicity of the language, and fast compilation times. So let's see how to get started with Go on Fedora.
### Install Go in Fedora
Fedora provides an easy way to install the Go programming language via the official repository.
```
$ sudo dnf install -y golang
$ go version
go version go1.12.7 linux/amd64
```
Now that Go is installed, let's write a simple program, compile it, and execute it.
### First program in Go
Let's write the traditional “Hello, World!” program in Go. First, create a _main.go_ file and type or copy the following.
```
package main
import "fmt"
func main() {
fmt.Println("Hello, World!")
}
```
Running this program is quite simple.
```
$ go run main.go
Hello, World!
```
This will build a binary from main.go in a temporary directory, execute the binary, and then delete the temporary directory. This command is great for quickly running the program during development, and it also highlights the speed of Go compilation.
Building an executable of the program is as simple as running it.
```
$ go build main.go
$ ./main
Hello, World!
```
### Using Go modules
Go 1.11 and 1.12 introduced preliminary support for modules. Modules are a solution for managing application dependencies. This solution is based on two files, _go.mod_ and _go.sum_, used to explicitly define the versions of the dependencies.
To show how to use modules, let's add a dependency to the hello world program.
Before changing the code, the module needs to be initialized.
```
$ go mod init helloworld
go: creating new go.mod: module helloworld
$ ls
go.mod main main.go
```
Next, modify the main.go file as follows.
```
package main
import "github.com/fatih/color"
func main () {
color.Blue("Hello, World!")
}
```
In the modified main.go, instead of using the standard library “_fmt_” to print “Hello, World!”, the application uses an external library that makes it easy to print text in color.
Let's run this version of the application.
```
$ go run main.go
Hello, World!
```
Now that the application depends on the _github.com/fatih/color_ library, Go needs to download all the dependencies before compiling it. The list of dependencies is then added to _go.mod_, and the exact versions and checksums of these dependencies are recorded in _go.sum_.
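After that run, _go.mod_ contains something along these lines; the version shown here is only illustrative, and the color library may also pull in a few `// indirect` entries on your machine:
```
$ cat go.mod
module helloworld

go 1.12

require github.com/fatih/color v1.7.0
```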
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/getting-started-with-go-on-fedora/
作者:[Clément Verna][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/cverna/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/go-article-816x345.jpg
[2]: https://golang.org/
[3]: https://kubernetes.io/
[4]: https://www.openshift.com/
[5]: https://www.terraform.io/

View File

@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Policy Agent: Cloud-native security and compliance)
[#]: via: (https://opensource.com/article/19/8/open-policy-agent)
[#]: author: (Tim Hinrichs https://opensource.com/users/thinrich)
Open Policy Agent: Cloud-native security and compliance
======
A look at three use cases where organizations used Open Policy Agent to
reliably automate cloud-based access policy control.
![clouds in the sky with blue pattern][1]
Every product or service has a unique way of handling policy and authorization: who-can-do-what and what-can-do-what. In the cloud-native world, authorization and policy are more complex than ever before. As the cloud-native ecosystem evolves, theres a growing need for DevOps and DevSecOps teams to identify and address security and compliance issues earlier in development and deployment cycles. Businesses need to release software on the order of minutes (instead of months). For this to happen, those security and compliance policies—which in the past were written in PDFs or email—need to be checked and enforced by machines. That way, every few minutes when software goes out the door, its obeying all of the necessary policies.
This problem was at the top of our minds when Teemu Koponen, Torin Sandall, and I founded the [Open Policy Agent project (OPA)][2] as a practical solution for the critical security and policy challenges of the cloud-native ecosystem. As the list of OPAs successful integrations grows—thanks to active involvement by the open source community—the time is right to re-introduce OPA and offer a look at how it addresses business and policy pain points in varied contexts.
### What is OPA?
OPA is a general-purpose policy engine that makes policy a first-class citizen within the cloud-native ecosystem, putting it on par with servers, networks, and storage. Its uses range from authorization and admission control to data filtering. The community uses OPA for Kubernetes admission control across all major cloud providers, as well as on on-premises deployments, along with HTTP API authorization, remote access policy, and data filtering. Since OPAs RESTful APIs use JSON over HTTP, OPA can be integrated with any programming language, making it extremely flexible across services.
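As a sketch of that integration surface, a policy decision can be requested with a single HTTP call against OPA's data API. The policy path (**httpapi/authz**) and the input fields here are hypothetical, and the example assumes an OPA server is already running locally with a matching Rego policy loaded:
```
$ curl -s -X POST http://localhost:8181/v1/data/httpapi/authz/allow \
    -d '{"input": {"user": "alice", "method": "GET", "path": ["finance", "salary", "alice"]}}'
{"result": true}
```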
OPA gives policy its own lifecycle and toolsets, so policy can be managed separately from the underlying systems that the policy applies to. Launched in 2016, OPA provides local enforcement for the sake of higher availability, better performance, greater flexibility, and more expressiveness than hard-coded service logic or ad-hoc domain-specific languages. With dedicated tooling for new users and experienced practitioners, combined with many integrations to third-party systems, OPA empowers administrators with unified, flexible, and granular policy control across their entire software stack. OPA also provides policy guardrails around Kubernetes admission control, HTTP API authorization, entitlement management, remote access, and data filtering. In 2018, we donated OPA to the [Cloud Native Computing Foundation][3], a vendor-neutral home, and since then it has graduated from the sandbox to the incubating stage.
### What can OPA do in the real world?
In short, OPA provides unified, context-aware policy controls for cloud-native environments. OPA policy is context-aware, meaning that the administrator can make policy decisions based on what is happening in the real world, such as:
* Is there currently an outage?
  * Is there a new vulnerability that's been released?
* Who are the people on call right now?
Its policies are flexible enough to accommodate arbitrary context. OPA has been proven in production in some of the largest cloud-native deployments in the world—from global financial firms with trillions under management to technology giants and household names—but it is also in use at emerging startups and regional healthcare organizations.
Beyond our own direct experiences, and thanks to the open source communitys innovations, OPA continues to mature and solve varied and evolving customer authorization and policy problems, such as Kubernetes admission control, microservice authorization, and entitlements management for both end-user and employee-facing applications. Were thrilled by both the depth and breadth of innovative use cases unfolding in front of our eyes. To better articulate some of the real-world problems OPA is solving, we looked across OPAs business-critical deployments in the user community to provide the following examples.
#### Provide regulatory compliance that role-based access control (RBAC) can't.
This lesson came to us through a global bank with trillions in assets. Their problem: A breach that occurred because a third-party broker had too much access. The banks relationship with the public was under significant stress, and it was also penalized with nearly $100 million in fines.
How did such a breach happen? In short, due to the complexity of trying to map decades of role-based access control (RBAC) onto every sprawling monolithic app. With literally millions of roles across thousands of internal and external applications, the bank's situation was—not unlike most large, established corporations—impossible to manage or troubleshoot. What started out as a best practice (RBAC) could no longer scale. Static roles, based on business logic, cannot be tested. They can't be deployed inline. They can't be validated like today's modern code can. Simply put, RBAC alone cannot manage access at cloud scale.
OPA facilitated a solution: Rearchitect and simplify application access with a local context-based authorization thats automated, tested, audited, and scalable. There are both technology and business benefits to this approach. The main technology benefit is that the authorization policy (rules that establish what a given user can do) is built, tested, and deployed as part of continuous integration and continuous delivery (CI/CD). Every decision is tied directly to microservices and apps for auditing and validation, and all access is based not on role, but on the current context.
Instead of creating thousands of roles to cover every permutation of what's allowed, a simple policy can determine whether or not the user should have access, and to a very fine degree. This greatly simplified policy, since context drives access decisions. Versioning and backtesting aren't required, since every time a new policy is needed the entire policy set is re-created, eliminating nested issues and legacy role sprawl. The local-only policy also eliminates the presence of conflicting rules/roles across repositories.
The major business benefit is that compliance became easier through the separation of duties (with security teams—not developers—writing policy) and by providing clear, testable visibility into access policy across applications. This process accelerated development since AppDev teams were freed from having to code Authz or policy directly into applications, and central RBAC repositories no longer need to be updated, maintained, and made available.
#### Provide regulatory compliance and safety by default.
Another large bank, with nearly 20,000 employees, was in the untenable scenario of managing policy with spreadsheets. This situation may sound comical, but it's far more common than you might think. Access is often "managed" via best effort and tribal knowledge. Teams document access policy in PDFs, on wikis, or in spreadsheets. They then rely on well-intentioned developers to read, understand, and remember access rules and guidelines. The bank had business reasons to move from monolithic apps to Kubernetes (K8s)—primarily improving differentiation and time to market—but its legacy compliance solutions weren't compatible with K8s.
The bank knew that while it was a financial institution, it was _really_ a software organization. Rather than relying on human memory and best effort, the staff started thinking of policy with a GitOps mindset (pull requests, comments, and peer review to get to consensus and commitment). OPA became the single source of truth behind what was (or wasnt) allowed with policy, implementing a true policy-as-code solution where effort was removed from the equation entirely, thanks to automation.
The K8s platform that the bank created was compliant by default, as it executed company regulatory policies exactly, every time. With OPA, the bank could build, deploy, and version its regulatory policy through an agile process, ensuring that all users, teams, and services were always obeying policy. The infrastructure is now compliant because compliance is literally built into the infrastructure.
#### Streamline and strengthen institutional knowledge.
A major telecommunications company had an education problem that was sapping time and money. Its pain points: It had created and maintained its own admission control (AC) service; had a slow, costly, HR-heavy support model that couldnt scale as its developer base grew; and it had a hammer-like enforcement model that wasnt efficient, slowing time to market.
OPA was deployed to replace the custom AC, thereby saving resources. The guardrails OPA provided allowed management to discover and deploy key policies that they developed from world events (and problems) that they wanted to eliminate moving forward.
Management has now become accustomed to using policy-as-code and is able to hone in on the specific policies that developers trip over most. The primary benefit for this company was in the person-hours saved by not having to talk to the same developers about the same problems over and over again, and by being able to educate about and enforce policies automatically. The insights from these efforts allow the company to target education (not enforcement) to the teams that need it, proactively focusing on providing help to struggling teams.
### Learn More about OPA
To learn how to use OPA to help with your authorization and policy, or to learn how to contribute, check out the [Open Policy Agent on GitHub][4] or the tutorials on different use cases at the [OPA homepage][2].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/open-policy-agent
作者:[Tim Hinrichs][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thinrich
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e (clouds in the sky with blue pattern)
[2]: https://www.openpolicyagent.org/
[3]: https://www.cncf.io/
[4]: https://github.com/open-policy-agent/opa

View File

@ -0,0 +1,141 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (11 Essential Keyboard Shortcuts Google Chrome/Chromium Users Should Know)
[#]: via: (https://itsfoss.com/google-chrome-shortcuts/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
11 Essential Keyboard Shortcuts Google Chrome/Chromium Users Should Know
======
_**Brief: Master these Google Chrome keyboard shortcuts for a better, smoother and more productive web browsing experience. Downloadable cheatsheet is also included.**_
Google Chrome is the [most popular web browser][1], and there is no denying it. Its open source version, [Chromium][2], is also gaining popularity, and some Linux distributions now include it as the default web browser.
If you use it on the desktop a lot, you can improve your browsing experience by using Google Chrome keyboard shortcuts. No need to reach for your mouse and spend time finding your way around. Just master these shortcuts and you'll save some time and be more productive.
I am using the term Google Chrome but these shortcuts are equally applicable to the Chromium browser.
### 11 Cool Chrome Keyboard shortcuts you should be using
If you are a pro, you might know a few of these Chrome shortcuts already, but chances are that you may still find some hidden gems here. Let's see.
**Keyboard Shortcuts** | **Action**
---|---
Ctrl+T | Open a new tab
Ctrl+N | Open a new window
Ctrl+Shift+N | Open incognito window
Ctrl+W | Close current tab
Ctrl+Shift+T | Reopen last closed tab
Ctrl+Shift+W | Close the window
Ctrl+Tab and Ctrl+Shift+Tab | Switch to right or left tab
Ctrl+L | Go to search/address bar
Ctrl+D | Bookmark the website
Ctrl+H | Access browsing history
Ctrl+J | Access downloads history
Shift+Esc | Open Chrome task manager
You can [download this list of useful Chrome keyboard shortcut for quick reference][3].
#### 1\. Open a new tab with Ctrl+T
Need to open a new tab? Just press the Ctrl and T keys together and you'll have a new tab open.
#### 2\. Open a new window with Ctrl+N
Too many tabs opened already? Time to open a fresh new window. Use Ctrl and N keys to open a new browser window.
#### 3\. Go incognito with Ctrl+Shift+N
Checking flight or hotel prices online? Going incognito might help. Open an incognito window in Chrome with Ctrl+Shift+N.
#### 4\. Close a tab with Ctrl+W
Close the current tab with the Ctrl and W keys. No need to take the mouse to the top and look for the x button.
#### 5\. Accidentally closed a tab? Reopen it with Ctrl+Shift+T
This is my favorite Google Chrome shortcut. No more “oh crap” moments when you close a tab you didn't mean to. Use Ctrl+Shift+T and it will reopen the last closed tab. Keep hitting this key combination and it will keep bringing back previously closed tabs.
#### 6\. Close the entire browser window with Ctrl+Shift+W
Done with your work? Time to close the entire browser window with all its tabs. Use the keys Ctrl+Shift+W and the browser window will disappear like it never existed.
#### 7\. Switch between tabs with Ctrl+Tab
Too many tabs open? You can move to the tab on the right with Ctrl+Tab. Want to move left? Use Ctrl+Shift+Tab. Press these keys repeatedly and you can move between all the open tabs in the current browser window.
You can also use Ctrl+1 through Ctrl+8 to jump to one of the first eight tabs, and Ctrl+9 to jump to the last tab; these shortcuts don't cover any other tabs beyond the eighth.
#### 8\. Go to the search/address bar with Ctrl+L
Want to type a new URL or search for something quickly? Use Ctrl+L and it will highlight the address bar at the top.
#### 9\. Bookmark the current website with Ctrl+D
Found something interesting? Save it in your bookmarks with the Ctrl+D key combination.
#### 10\. Go back in history with Ctrl+H
You can open up your browser history with the Ctrl+H keys. Search through the history if you are looking for a page you visited some time ago, or delete entries that you don't want to be seen anymore.
#### 11\. See your downloads with Ctrl+J
Pressing the Ctrl+J keys in Chrome will take you to the Downloads page. This page shows all the download actions you have performed.
#### Bonus shortcut: Open Chrome task manager with Shift+Esc
Many people don't even know that there is a task manager in the Chrome browser. Chrome is infamous for eating up your system's RAM, and when you have plenty of tabs open, finding the culprit is not easy.
With Chrome task manager, you can see all the open tabs and their system utilization stats. You can also see various hidden processes such as Chrome extensions and other services.
![Google Chrome Task Manager][6]
I have included the table above for quick reference.
### Download Chrome shortcut cheatsheet
I know that mastering keyboard shortcuts depends on habit, and you can build that habit by using them again and again. To help you with this task, I have created a Google Chrome keyboard shortcut cheatsheet.
You can download the image below in PDF form, print it, and put it on your desk. This way you can practice the shortcuts all the time.
![Google Chrome Keyboard Shortcuts Cheat Sheet][7]
[Download Chrome Shortcut Cheatsheet][8]
If you are interested in mastering shortcuts, you may also have a look at [Ubuntu keyboard shortcuts][9].
By the way, what's your favorite Chrome shortcut?
--------------------------------------------------------------------------------
via: https://itsfoss.com/google-chrome-shortcuts/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[2]: https://www.chromium.org/Home
[3]: tmp.3qZNXSy2FC#download-cheatsheet
[4]: https://itsfoss.com/command-line-text-editors-linux/
[5]: https://itsfoss.com/rid-google-chrome-icons-dock-elementary-os-freya/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/google-chrome-task-manager.png?resize=800%2C300&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/google-chrome-keyboard-shortcuts-cheat-sheet.png?ssl=1
[8]: https://drive.google.com/open?id=1lZ4JgRuFbXrnEXoDQqOt7PQH6femIe3t
[9]: https://itsfoss.com/ubuntu-shortcuts/

View File

@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Raspberry Pi Based Open Source Tablet is in Making and its Called CutiePi)
[#]: via: (https://itsfoss.com/cutiepi-open-source-tab/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
A Raspberry Pi Based Open Source Tablet is in Making and its Called CutiePi
======
CutiePi is an 8-inch open source tablet built on top of the Raspberry Pi. For now, it is just a working prototype, which the makers announced on the [Raspberry Pi forums][1].
In this article, youll get to know more details on the specifications, price, and availability of CutiePi.
They have made the tablet using a custom-designed Compute Module 3 (CM3) carrier board. The [official website][2] mentions the purpose of a custom CM3 carrier board as:
> A custom CM3/CM3+ carrier board designed for portable use, with enhanced power management and Li-Po battery level monitoring features; works with selected HDMI or MIPI DSI displays.
So, this is what makes the tablet thin enough while remaining portable.
### CutiePi Specifications
![CutiePi Board][3]
I was surprised to learn that it rocks an 8-inch IPS LCD display, which is a good thing for starters. However, you won't be getting a full-HD screen, because the resolution is 1280×800, as mentioned officially.
It is also planned to ship with a 4800 mAh Li-Po battery (the prototype had a 5000 mAh battery). Well, for a tablet, that isn't bad at all.
Connectivity options include support for Wi-Fi and Bluetooth 4.0. In addition, a USB Type-A port, 6 GPIO pins, and a microSD card slot are present.
![CutiePi Specifications][4]
The hardware is officially compatible with [Raspbian OS][5] and the user interface is built with [Qt][6] for a fast and intuitive user experience. Also, along with the in-built apps, it is expected to support Raspbian PIXEL apps via XWayland.
### CutiePi Source Code
You can estimate the pricing of this tablet by analyzing the bill of materials. CutiePi follows a 100% open source hardware design for this project. So, if you are curious, you can check out their GitHub page for more information on the hardware design.
[CutiePi on GitHub][7]
### CutiePi Pricing, Release Date &amp; Availability
CutiePi plans to work on [DVT][8] batch PCBs in August (this month). And, they target to launch the final product by the end of 2019.
Officially, they expect to launch it at around $150-$250. This is just an approximate range and should be taken with a pinch of salt.
Obviously, the price will be a major factor in order to make it a success even though the product itself sounds promising.
**Wrapping Up**
CutiePi is not the first project to use a [single board computer like Raspberry Pi][9] to make a tablet. We have the upcoming [PineTab][10] which is based on Pine64 single board computer. Pine also has a laptop called [Pinebook][11] based on the same.
Judging by the prototype it is indeed a product that we can expect to work. However, the pre-installed apps and the apps that it will support may turn the tide. Also, considering the price estimate it sounds promising.
What do you think about it? Let us know your thoughts in the comments below or just play this interactive poll.
--------------------------------------------------------------------------------
via: https://itsfoss.com/cutiepi-open-source-tab/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.raspberrypi.org/forums/viewtopic.php?t=247380
[2]: https://cutiepi.io/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/cutiepi-board.png?ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/cutiepi-specifications.jpg?ssl=1
[5]: https://itsfoss.com/raspberry-pi-os-desktop/
[6]: https://en.wikipedia.org/wiki/Qt_%28software%29
[7]: https://github.com/cutiepi-io/cutiepi-board
[8]: https://en.wikipedia.org/wiki/Engineering_validation_test#Design_verification_test
[9]: https://itsfoss.com/raspberry-pi-alternatives/
[10]: https://www.pine64.org/pinetab/
[11]: https://itsfoss.com/pinebook-pro/

View File

@ -0,0 +1,166 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Four Ways to Check How Long a Process Has Been Running in Linux)
[#]: via: (https://www.2daygeek.com/how-to-check-how-long-a-process-has-been-running-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Four Ways to Check How Long a Process Has Been Running in Linux
======
Sometimes you may want to figure out how long a process has been running in Linux.
Yes, that is possible, and it can be done with the help of the ps command.
ps can show the given process's uptime in the form [[DD-]hh:]mm:ss, in seconds, or as the exact start date and time.
There are multiple options available in the ps command to check this.
Each option produces different output, which can be used for a different purpose.
```
# top -b -s -n 1 | grep httpd
16337 root 20 0 228272 5160 3272 S 0.0 0.1 1:02.27 httpd
30442 apache 20 0 240520 3132 1232 S 0.0 0.1 0:00.00 httpd
30443 apache 20 0 240520 3132 1232 S 0.0 0.1 0:00.00 httpd
30444 apache 20 0 240520 3132 1232 S 0.0 0.1 0:00.00 httpd
30445 apache 20 0 240520 3132 1232 S 0.0 0.1 0:00.00 httpd
30446 apache 20 0 240520 3132 1232 S 0.0 0.1 0:00.00 httpd
```
**Make a note:** You may think the same details can be found in the **[top command output][1]**. No, top shows you the total CPU time the task has used since it started, but it doesn't include the elapsed time, so don't confuse the two.
### Whats ps Command?
ps stands for process status, and it displays information about the active/running processes on the system.
It provides a snapshot of the current processes along with detailed information such as the username, user ID, CPU usage, memory usage, process start date and time, command name, etc.
  * **etime:** elapsed time since the process was started, in the form [[DD-]hh:]mm:ss.
  * **etimes:** elapsed time since the process was started, in seconds.
To do so, you first need to **[find out the PID of the process][2]**; you can easily identify it by using the pidof command.
```
# pidof httpd
30446 30445 30444 30443 30442 16337
```
### Method-1: Using etime Option
Use the ps command with the **etime** option to get the detailed elapsed time.
```
# ps -p 16337 -o etime
ELAPSED
13-13:13:26
```
As per the above output, the httpd process has been running on our server for 13 days, 13 hours, 13 minutes, and 26 seconds.
### Method-2: Using Process Name Instead of Process ID (PID)
If you want to use the process name instead of the PID, use the following command.
```
# ps -eo pid,etime,cmd | grep httpd | grep -v grep
16337 13-13:13:39 /usr/sbin/httpd -DFOREGROUND
30442 1-02:59:50 /usr/sbin/httpd -DFOREGROUND
30443 1-02:59:49 /usr/sbin/httpd -DFOREGROUND
30444 1-02:59:49 /usr/sbin/httpd -DFOREGROUND
30445 1-02:59:49 /usr/sbin/httpd -DFOREGROUND
30446 1-02:59:49 /usr/sbin/httpd -DFOREGROUND
```
### Method-3: Using etimes Option
The following command will show you the elapsed time in seconds.
```
# ps -p 16337 -o etimes
ELAPSED
1170810
```
It shows the output in seconds, so you may need to convert it as per your requirements.
```
+---------------------+-------------------------+
| Human-Readable time | Seconds |
+---------------------+-------------------------+
| 1 hour | 3600 seconds |
| 1 day | 86400 seconds |
+---------------------+-------------------------+
```
If you would like to know how many hours the process has been running, then use a **[Linux command-line calculator][3]**.
```
# bc -l
1170810/3600
325.22500000000000000000
```
If you would like to know how many days the process has been running, then use the following format.
```
# bc -l
1170810/86400
13.55104166666666666666
```
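If you prefer a single command, a small pipeline can do the conversion in one step. This is just a sketch combining **ps** and **awk**; adjust the divisor (3600 for hours, 86400 for days) as needed:
```
# ps -p 16337 -o etimes= | awk '{printf "%.2f days\n", $1/86400}'
13.55 days
```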
The above commands don't show you the exact start date of the process; if you want to know that, you can use the following command. As per the output below, the httpd process has been running since **Aug 05**.
```
# ps -ef | grep httpd
root 16337 1 0 Aug05 ? 00:01:02 /usr/sbin/httpd -DFOREGROUND
root 24999 24902 0 06:34 pts/0 00:00:00 grep --color=auto httpd
apache 30442 16337 0 Aug18 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
apache 30443 16337 0 Aug18 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
apache 30444 16337 0 Aug18 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
apache 30445 16337 0 Aug18 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
apache 30446 16337 0 Aug18 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
```
### Method-4: Using proc filesystem (procfs)
However, the above command doesn't show you the exact start time of the process, so use the following format to check that. As per the output below, the httpd process has been running since **Aug 05 at 17:20**.
The proc filesystem (procfs) is a special filesystem in Unix-like operating systems that presents information about processes and other system information.
It's sometimes referred to as a process information pseudo-filesystem. It doesn't contain real files, but rather runtime system information (e.g., system memory, mounted devices, hardware configuration, etc.).
```
# ls -ld /proc/16337
dr-xr-xr-x. 9 root root 0 Aug 5 17:20 /proc/16337/
```
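For completeness, **ps** itself can also print a full start timestamp via the **lstart** format specifier; the timestamp below is illustrative and should match what procfs reports:
```
# ps -p 16337 -o lstart=
Mon Aug  5 17:20:01 2019
```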
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-check-how-long-a-process-has-been-running-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/understanding-linux-top-command-output-usage/
[2]: https://www.2daygeek.com/9-methods-to-check-find-the-process-id-pid-ppid-of-a-running-program-in-linux/
[3]: https://www.2daygeek.com/linux-command-line-calculator-bc-calc-qalc-gcalccmd/

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: (beamrolling)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to transition into a career as a DevOps engineer)
[#]: via: (https://opensource.com/article/19/7/how-transition-career-devops-engineer)
[#]: author: (Conor Delanbanque https://opensource.com/users/cdelanbanquehttps://opensource.com/users/daniel-ohhttps://opensource.com/users/herontheclihttps://opensource.com/users/marcobravohttps://opensource.com/users/cdelanbanque)
如何转职为 DevOps 工程师
======
无论你是刚毕业的大学生,还是想在职业中寻求进步的经验丰富的 IT 专家,这些提示都可以帮你成为 DevOps 工程师。
![technical resume for hiring new talent][1]
DevOps 工程是一个备受称赞的热门职业。不管你是刚毕业正在找第一份工作,还是在利用之前的行业经验的同时寻求学习新技能的机会,本指南都能帮你通过正确的步骤成为[ DevOps 工程师][2].
### 让自己沉浸其中
首先学习 [DevOps][3] 的基本原理、实践以及方法。在使用工具之前,先了解 DevOps 背后的“为什么”。DevOps 工程师的主要目标是在整个软件开发生命周期(SDLC)中提高速度并保持或提高质量,以提供最大的业务价值。阅读文章、观看 YouTube 视频、参加当地小组聚会或者会议 — 成为热情的 DevOps 社区中的一员,在那里你将从先行者的错误和成功中学习。
### 考虑你的背景
如果你有从事技术工作的经历,例如软件开发人员、系统工程师、系统管理员、网络运营工程师或者数据库管理员,那么你已经拥有了广泛的见解和有用的经验,它们可以帮助你在未来成为 DevOps 工程师。如果你在完成计算机科学或任何其他 STEM(译者注:STEM 是科学 Science、技术 Technology、工程 Engineering 和数学 Math 四个学科的首字母缩略字)领域的学业后刚开始职业生涯,那么你将拥有在这个过渡期间需要的一些基本踏脚石。
DevOps 工程师的角色涵盖了广泛的职责。以下是企业最有可能使用他们的三种方向:
  * **偏向于开发(Dev)的 DevOps 工程师**在构建应用中扮演软件开发的角色。他们日常工作的一部分是利用持续集成/持续交付(CI/CD)、共享仓库、云和容器,但他们不一定负责构建或实施工具。他们了解基础架构,并且在成熟的环境中能将自己的代码推向生产环境。
* **偏向于运维技术Ops的 DevOps 工程师**可以与系统工程师或系统管理员进行比较。他们了解软件的开发,但并不会把一天的重心放在构建应用上。相反,他们更有可能支持软件开发团队将手动流程自动化的过程,并提高人员和技术系统的效率。这可能意味着分解遗留代码并用较少繁琐的自动化脚本来运行相同的命令,或者可能意味着安装,配置或维护基础结构和工具。他们确保为任何有需要的团队安装可使用的工具。他们也会通过教授如何利用 CI / CD 和其他 DevOps 实践来帮助团队。
  * **网站可靠性工程师(SRE)**就像解决运维和基础设施问题的软件工程师。SRE 专注于创建可扩展、高度可用且可靠的软件系统。
在理想的世界中,DevOps 工程师将了解以上所有领域;这在成熟的科技公司中很常见。然而,顶级银行和许多财富 500 强企业的 DevOps 职位通常会偏向开发(Dev)或运营(Ops)。
### 要学习的技术
DevOps 工程师需要了解各种技术才能有效完成工作。无论你的背景如何,请从作为 DevOps 工程师使用和理解的基础技术入手。
#### 操作系统
操作系统是所有东西运行的地方,拥有相关的基础知识十分重要。 [Linux ][4]是你最有可能每天使用的操作系统,尽管有的组织会使用 Windows 操作系统。要开始使用,你可以在家中安装 Linux在那里你可以随心所欲地打破并学习。
#### 脚本
接下来,选择一门语言来学习脚本。有很多语言可供选择,包括 Python、Go、Java、Bash、PowerShell、Ruby 和 C/C++。我建议[从 Python 开始][5],因为它相对容易学习和解释,是最受欢迎的语言之一。Python 通常遵循面向对象编程(OOP)的基础来编写,可用于 Web 开发、软件开发以及创建桌面 GUI 和业务应用程序。
#### 云
学习了 [Linux][4] 和 [Python][5] 之后,我认为下一个该学习的是云计算。基础设施不再只是“运维小哥”的事情了,因此你需要接触云平台,例如 AWS、Azure 或者谷歌云平台。我会从 AWS 开始,因为它有大量免费学习工具,可以帮助你降低作为开发人员、运维人员,甚至面向业务的组件的任何障碍。事实上,你可能会被它提供的东西所淹没。考虑从 EC2、S3 和 VPC 开始,然后看看你想从其中学到什么。
#### 编程语言
如果你带着对软件开发的热情来到 DevOps,请继续提高你的编程技能。DevOps 中的一些优秀和常用语言与脚本语言相同:Python、Go、Java、Bash、PowerShell、Ruby 和 C/C++。你还应该熟悉 Jenkins 和 Git/GitHub,你将会在 CI/CD 过程中经常用到它们。
#### 容器
最后,使用 Docker 和编排平台(如 Kubernetes)等工具开始学习[容器化][6]。网上有大量的免费学习资源,大多数城市都有本地的线下小组,你可以在友好的环境中向有经验的人学习(还有披萨和啤酒哦!)。
#### 其他的呢?
如果你缺乏开发经验,你依然可以通过对自动化的热情,效率的提高,与他人协作以及改进自己的工作[参与到 DevOps 中][3]来。我仍然建议学习上述工具,但重点不要放在编程 / 脚本语言上。了解基础架构即服务,平台即服务,云平台和 Linux 会非常有用。你可能正在设置工具并学习如何构建有弹性和容错性的系统,并在写代码时利用它们。
### 找一份 DevOps 的工作
求职过程会有所不同,具体取决于你是否一直从事技术工作,并且正在进入 DevOps 领域,或者你是刚开始职业生涯的毕业生。
#### 如果你已经从事技术工作
如果你正在从一个技术领域转入 DevOps 角色,首先尝试在你当前的公司寻找机会。你可以和其他的团队一起工作吗?尝试影响其他团队成员,寻求建议,并在不离开当前工作的情况下获得新技能。如果做不到这一点,你可能需要换另一家公司。如果你能从上面列出的一些实践,工具和技术中学习,你将能在面试时展示相关知识中占据有利位置。关键是要诚实,不要让自己陷入失败中。大多数招聘主管都了解你不知道所有的答案;如果你能展示你所学到的东西,并解释你愿意学习更多,你应该有机会获得 DevOps 的工作。
#### 如果你刚开始职业生涯
申请雇用初级 DevOps 工程师的公司的开放机会。不幸的是,许多公司表示他们希望寻找更富有经验的人,并建议你在获得经验后再申请该职位。这是“我们需要经验丰富的人”的典型,令人沮丧的场景,并且似乎没人愿意给你第一次机会。
然而,并不是所有求职经历都那么令人沮丧;一些公司专注于培训和提升刚从大学毕业的学生。例如,我工作的 [MThree][7] 聘请来应届毕业生并且对其进行了 8 周的培训。当完成培训后,参与者们可以充分了解到整个 SDLC并很好地了解它在财富 500 强公司相关环境中的应用。毕业生被聘为 MThree 的客户公司的初级 DevOps 工程师 — MThree 在前 18 - 24 个月内支付全职工资和福利,之后他们将作为直接雇员加入客户。这是弥合从大学到技术职业的间隙的好方法。
### 总结
转换成 DevOps 工程师的方法有很多种。这是一条非常有益的职业路线,可能会让你保持繁忙和挑战 — 并增加你的收入潜力。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/how-transition-career-devops-engineer
作者:[Conor Delanbanque][a]
选题:[lujun9972][b]
译者:[beamrolling](https://github.com/beamrolling)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/cdelanbanquehttps://opensource.com/users/daniel-ohhttps://opensource.com/users/herontheclihttps://opensource.com/users/marcobravohttps://opensource.com/users/cdelanbanque
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hiring_talent_resume_job_career.png?itok=Ci_ulYAH (technical resume for hiring new talent)
[2]: https://opensource.com/article/19/7/devops-vs-sysadmin
[3]: https://opensource.com/resources/devops
[4]: https://opensource.com/resources/linux
[5]: https://opensource.com/resources/python
[6]: https://opensource.com/article/18/8/sysadmins-guide-containers
[7]: https://www.mthreealumni.com/

View File

@ -0,0 +1,219 @@
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Copying files in Linux)
[#]: via: (https://opensource.com/article/19/8/copying-files-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/scottnesbitthttps://opensource.com/users/greg-p)
在 Linux 中复制文档
======
了解在 Linux 中多种复制文档的方式以及各自的优点
![归档文件][1]
在办公室里复制文档过去需要专门的员工与机器。如今,复制是电脑用户无需多加思考的任务。在电脑里复制数据是如此微不足道的事以致于你还没有意识到复制就发生了,例如当拖动文档到外部硬盘的时候。
数字实体复制起来十分简单已是一个不争的事实,以致于大部分现代电脑用户未去考虑其它复制他们工作的可选方式。尽管如此,在 Linux 中复制文档仍有几种不同的方式。每种方法取决于你的目的不同都有其独到之处。
以下是一系列在 LinuxBSD 及 Mac 上复制文件的方式。
### 在 GUI 中复制
如大多数操作系统一样,如果你想的话,你可以完全用 GUI 来管理文件。
### 拖拽放下
复制文件最直观的方式可能就是你平时在电脑中习惯的方式:拖拽并放下。在大多数 Linux 桌面上,把文件从一个本地文件夹拖拽放下到另一个本地文件夹,默认是_移动_文件。你可以在开始拖拽文件后按住 **Ctrl** 键,将该操作改为复制。
你的鼠标指针可能会有一个指示,例如一个加号以显示你在复制模式。
![复制一个文件。][2]
注意,如果文件位于远程系统上,无论它是一个网页服务器,还是你自己网络中通过文件共享协议访问的另一台电脑,默认动作通常是复制而不是移动文件。
### 右击
如果你觉得在你的桌面拖拽文档不够精准或者臃肿,或者这么做让你的手离开键盘太多,你可以经常使用右键菜单复制文件。这取决于你所用的文件管理器,但通常来说,右键产生的相关菜单会包括常见的操作。
相关菜单的复制动作将你的[文件路径][3](文件在系统的位置)保存在你的剪切板中,这样你可以将你的文件 _粘贴_ 到别处:
![从相关菜单复制文件][4]
在这种情况下,你并没有将文件的内容复制到你的剪贴板上。取而代之的是你复制了[文件路径][3]。当你粘贴时,你的文件管理器查看剪贴板上的路径并执行复制命令,将相应路径上的文件粘贴到你准备拷贝到的路径。
### 用命令行复制
虽然 GUI 通常是相对熟悉的拷贝文件方式,用终端拷贝却更有效率。
#### cp
在终端里,最显而易见的复制方式就是 **cp** 命令。这个命令可以拷贝文件和目录,用起来也相对直接。它使用熟悉的 _来源_ 和 _目的_(必须以这样的顺序)句法,因此拷贝一个叫 **example.txt** 的文件到你的 **Documents** 目录就像这样:
```
$ cp example.txt ~/Documents
```
就像当你拖拽文件放在文件夹里一样,这个动作并不将 **Documents** 替换为 **example.txt**。取而代之的是,**cp** 察觉到 **Documents** 是一个文件夹,就将 **example.txt** 的复件放进去。
你同样可以(便捷有效地)重命名你拷贝的文档:
```
$ cp example.txt ~/Documents/example_copy.txt
```
这很重要,因为它使得你可以在与原文件相同的目录中生成一个复件:
```
$ cp example.txt example.txt
cp: 'example.txt' and 'example.txt' are the same file.
$ cp example.txt example_copy.txt
```
要复制一个目录,你必须使用 **-r** 选项(代表**递归**)。这个选项对目录的 _inode_ 运行 **cp** 命令,然后作用到该目录下的所有文件。没有 **-r** 选项,**cp** 不会将目录当成一个可复制的对象:
```
$ cp notes/ notes-backup
cp: -r not specified; omitting directory 'notes/'
$ cp -r notes/ notes-backup
```
#### cat
**cat** 命令是最易被误解的命令,但只是因为它表现了 [POSIX][5] 系统的极致灵活性。在 **cat** 可以做到的所有事情中,也包括拷贝。例如说使用 **cat** 你可以仅用一个命令就[从一个文件创建两个副本][6]。你用 **cp** 无法做到这一点。
使用 **cat** 复制文档的重要性在于系统解释该行为的方式。当你使用 **cp** 拷贝文件时,文件的属性跟着文件一起被拷贝。这代表复件的权限和原件一样。
```
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 25 23:57  foo.jpg
$ cp foo.jpg bar.jpg
-rw-r--r--. 1 57368 Jul 29 13:37  bar.jpg
-rw-r--r--. 1 57368 Jul 25 23:57  foo.jpg
```
然而用 **cat** 将一个文件的内容读取至另一个文件,会让系统创建一个新文件。这些新文件的权限取决于你的默认 **umask** 设置。想了解更多关于 `umask` 的内容,可以阅读 Alex Juarez 那篇涵盖 [umask][7] 以及权限概览的文章。
运行 **umask** 获取当前设置:
```
$ umask
0002
```
这个设置代表新创建的文件被给予 **664**(**rw-rw-r--**)权限,因为这个 **umask** 设置的前几位没有屏蔽任何权限(而且执行位不是文件创建时的默认位),其他用户的写入权限则被最后一位屏蔽。
当你使用 **cat** 拷贝时,你并没有真正拷贝文件。你使用 **cat** 读取文件内容并将输出重新导向一个新文件:
```
$ cat foo.jpg > baz.jpg
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 29 13:37  bar.jpg
-rw-rw-r--. 1 57368 Jul 29 13:42  baz.jpg
-rw-r--r--. 1 57368 Jul 25 23:57  foo.jpg
```
如你所见,**cat** 使用系统默认的 umask 创建了一个全新的文件。
最后,当你仅仅想复制一个文件时,这些差别无关紧要。但如果你想在拷贝文件的同时让副本使用默认权限,用一条 **cat** 命令就能完成。
#### rsync
**rsync** 命令以同步源和目的文件的能力而闻名,是一个拷贝文件的多才多艺的工具。最简单的用法下,**rsync** 可以像 **cp** 命令一样使用。
```
$ rsync example.txt example_copy.txt
$ ls
example.txt    example_copy.txt
```
这个命令真正的威力藏在它能够避免不必要拷贝的能力里。如果你使用 **rsync** 把文件拷贝到某个目录里,而该文件已经存在于那个目录中,那么 **rsync** 和普通的拷贝在本地并无二致。但如果你要从远程服务器拷贝海量数据,这个特性就完全不一样了。
甚至在本地中,真正不一样的地方在于它可以分辨具有相同名字但拥有不同数据的文件。如果你曾发现你面对着同一个目录里的两个相同副本时,**rsync** 可以将它们同步至一个包含每个最新修改的目录。这种配置在尚未发现版本控制威力的业界十分常见,同时也作为需要从同一个源拷贝的备选方案。
你可以通过创建两个文件夹有意识地模拟这种情况,一个叫做 **example** 另一个叫做 **example_dupe**
```
$ mkdir example example_dupe
```
在第一个文件夹里创建文件:
```
$ echo "one" &gt; example/foo.txt
```
用 **rsync** 同步这两个目录。最常用的选项是 **-a**(代表 _archive_,可以保留符号链接和其它特殊文件)和 **-v**(代表 _verbose_,向你提供当前命令的进度反馈):
```
$ rsync -av example/ example_dupe/
```
两个目录现在包含同样的信息:
```
$ cat example/foo.txt
one
$ cat example_dupe/foo.txt
one
```
如果作为源的文件发生改变,目的文件也会随之更新:
```
$ echo "two" &gt;&gt; example/foo.txt
$ rsync -av example/  example_dupe/
$ cat example_dupe/foo.txt
one
two
```
注意 **rsync** 命令是用来复制数据的,而不是充当版本管理系统的。例如,假设某个目的文件比源文件多了一些改动,那个文件仍将被覆盖,因为 **rsync** 会比较文件间的差异,并假设目的文件总是应当反映源文件:
```
$ echo "You will never see this note again" &gt; example_dupe/foo.txt
$ rsync -av example/  example_dupe/
$ cat example_dupe/foo.txt
one
two
```
如果没有改变,那么就不会有拷贝。
**rsync** 命令有许多 **cp** 没有的选项,例如保留目标权限、排除某些文件、删除目的目录中源目录里已不存在的过时文件等等。把 **rsync** 当作 **cp** 的强力替代或者有效补充吧。
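例如,下面是一个示意用法(假设沿用上文的 example 和 example_dupe 目录;`--delete` 会删除目的目录中源目录里不存在的文件,使用前请确认):
```
$ rsync -av --delete example/ example_dupe/
```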
### 许多拷贝的方式
在 POSIX 系统中,有许多能够达成同样目的的方式,可见开源的灵活性名副其实。我是否遗漏了哪种有效的复制数据方式?欢迎在评论区分享你的拷贝神技。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/copying-files-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[tomjlw](https://github.com/tomjlw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/scottnesbitthttps://opensource.com/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
[2]: https://opensource.com/sites/default/files/uploads/copy-nautilus.jpg (Copying a file.)
[3]: https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them
[4]: https://opensource.com/sites/default/files/uploads/copy-files-menu.jpg (Copying a file from the context menu.)
[5]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[6]: https://opensource.com/article/19/2/getting-started-cat-command
[7]: https://opensource.com/article/19/7/linux-permissions-101