Mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-03-21 02:10:11 +08:00)

Merge remote-tracking branch 'LCTT/master'

commit 5b74d94587
@@ -1,112 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How DevOps professionals can become security champions)
[#]: via: (https://opensource.com/article/19/9/devops-security-champions)
[#]: author: (Jessica Repka https://opensource.com/users/jrepka)

How DevOps professionals can become security champions
======
Breaking down silos and becoming a champion for security will help you,
your career, and your organization.
![A lock on the side of a building][1]

Security is a misunderstood element in DevOps. Some see it as outside of DevOps' purview, while others find it important (and overlooked) enough to recommend moving to [DevSecOps][2]. No matter your perspective on where it belongs, it's clear that security affects everyone.

Each year, the [statistics on hacking][3] become more alarming. For example, there's a hacker attack every 39 seconds, which can lead to stolen records, identities, and proprietary projects you're writing for your company. It can take months (and possibly forever) for your security team to discover the who, what, where, or when behind a hack.

What are operations professionals to do about these dire problems? I say it is time for us to become part of the solution by becoming security champions.

### Silos and turf wars

Over my years of working side-by-side with my local IT security (ITSEC) teams, I've noticed a great many things. A big one is that tension is very common between DevOps and security. This tension almost always stems from the security team's efforts to protect against vulnerabilities (e.g., by setting rules or disabling things) that interrupt DevOps' work and hinder their ability to deploy apps quickly.

You've seen it, I've seen it, and everyone you meet in the field has at least one story about it. A small set of grudges turns into a burned bridge that takes time to repair, or the groups begin a small turf war, and the resulting silos make achieving DevOps unlikely.

### Get a new perspective

To try to break down these silos and end the turf wars, I talk to at least one person on each security team to learn about the ins and outs of daily security operations in our organization. I started doing this out of general curiosity, but I've continued because it always gives me a valuable new perspective. For example, I've learned that for every deployment that's stopped by a failed security check, the ITSEC team is feverishly trying to patch 10 other problems it sees. Their brashness and quickness to react are due to the limited time they have to fix something before it becomes a large problem.

Consider the immense amount of knowledge it takes to find, analyze, and undo what has been done. Or to figure out what the DevOps team is doing—without background information—then replicate and test it. And to do all of this with a chronically understaffed security team.

This is the daily life of your security team, and your DevOps team is not seeing it. ITSEC's daily work can mean overtime hours and overwork to make sure that the company, its teams, and the proprietary work its teams are producing are secure.

### Ways to be a security champion

This is where being your own security champion can help. This means—for everything you work on—you must take a good, hard look at all the ways someone could log into it and what could be taken from it.

Help your security team help you. Introduce tools into your pipelines to integrate what you know will work with what they know will work. Start with small things, such as reading up on Common Vulnerabilities and Exposures (CVEs) and adding scanning functions to your [CI/CD][4] pipelines. For everything you build, there is an open source scanning tool, and adding small open source tools (such as the ones below) can go the extra mile in the long run. A sample pipeline step follows the tool lists below.

**Container scanning tools:**

  * [Anchore Engine][5]
  * [Clair][6]
  * [Vuls][7]
  * [OpenSCAP][8]

**Code scanning tools:**

  * [OWASP SonarQube][9]
  * [Find Security Bugs][10]
  * [Google Hacking Diggity Project][11]

**Kubernetes security tools:**

  * [Project Calico][12]
  * [Kube-hunter][13]
  * [NeuVector][14]
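As an illustration, here is a minimal sketch of what such a pipeline step might look like using Anchore Engine's **anchore-cli** client. The image reference is a placeholder, and the sketch assumes an Anchore Engine endpoint is already configured through the standard **ANCHORE_CLI_URL**, **ANCHORE_CLI_USER**, and **ANCHORE_CLI_PASS** environment variables:

```
#!/bin/sh
# Hypothetical CI step: analyze the image we just built and fail the
# pipeline if Anchore Engine reports a policy violation.
set -e

IMAGE="registry.example.com/myapp:latest"   # placeholder image reference

anchore-cli image add "$IMAGE"        # queue the image for analysis
anchore-cli image wait "$IMAGE"       # block until analysis completes
anchore-cli image vuln "$IMAGE" all   # list OS and non-OS package CVEs
anchore-cli evaluate check "$IMAGE"   # exits non-zero on policy failure
```

The same shape works for the other scanners: one step that analyzes the artifact, and one step that fails the build when the result crosses a threshold you and your security team have agreed on.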
### Keep your DevOps hat on

Learning about new technology and how to create new things with it is part of the job if you're in a DevOps-related role. Security is no different. Here's my list of ways to keep up to date on the security front while keeping your DevOps hat on.

  * Read one article each week about something related to security in whatever you're working on.
  * Look at the [CVE][15] website weekly to see what's new.
  * Try doing a hackathon. Some companies do this once a month; check out the [Beginner Hack 1.0][16] site if yours doesn't and you'd like to learn more.
  * Try to attend at least one security conference a year with a member of your security team to see things from their side.

### Be a champion for good

There are several reasons you should become your own security champion. The first and foremost is to further your knowledge and advance your career. The second is to help other teams, foster new relationships, and break down the silos that harm your organization. Creating friendships across your organization has multiple benefits, including setting a good example of bridging teams and encouraging people to work together. You will also foster knowledge sharing throughout the organization and give everyone a new outlook on security and greater internal cooperation.

Overall, being a security champion will lead you to be a champion for good across your organization.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/devops-security-champions

Author: [Jessica Repka][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/19/1/what-devsecops
[3]: https://hostingtribunal.com/blog/hacking-statistics/
[4]: https://opensource.com/article/18/8/what-cicd
[5]: https://github.com/anchore/anchore-engine
[6]: https://github.com/coreos/clair
[7]: https://vuls.io/
[8]: https://www.open-scap.org/
[9]: https://github.com/OWASP/sonarqube
[10]: https://find-sec-bugs.github.io/
[11]: https://resources.bishopfox.com/resources/tools/google-hacking-diggity/
[12]: https://www.projectcalico.org/
[13]: https://github.com/aquasecurity/kube-hunter
[14]: https://github.com/neuvector/neuvector-helm
[15]: https://cve.mitre.org/
[16]: https://www.hackerearth.com/challenges/hackathon/beginner-hack-10/
@@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data center liquid-cooling to gain momentum)
[#]: via: (https://www.networkworld.com/article/3446027/data-center-liquid-cooling-to-gain-momentum.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Data center liquid-cooling to gain momentum
======
The serious number-crunching demands of AI, IoT and big data - and the heat they generate - may mean air cooling is on its way out.

Concern over escalating energy costs is among the reasons liquid-cooling solutions could gain traction in the [data center][1].

Schneider Electric, a major energy-management specialist, this month announced fresh impetus for a collaboration conceived in 2014 with [liquid-cooling specialist Iceotope][2]. Now, [technology solutions company Avnet has been brought into that collaboration][3].

The three companies will develop chassis-level immersive liquid cooling for data centers, Schneider Electric says in a [press release][5]. Liquid-cooling systems submerge server components in a dielectric fluid, as opposed to air-cooled systems, which circulate cooled ambient air.
One reason for the shift: “Compute-intensive applications like AI and [IoT][7] are driving the need for better chip performance,” Kevin Brown, CTO and SVP of Innovation, Secure Power, Schneider Electric, is quoted as saying.

“Liquid Cooling [is] more efficient and less costly for power-dense applications,” the company explains. That’s in part because graphics processing units (GPUs), which are replacing some traditional processing, are gaining ground. GPUs are better suited to data-mining-type applications than traditional processors: they process workloads in parallel and are now used extensively in artificial intelligence compute environments and in processor-hungry analytics churning through big data.

“This makes traditional data-center air-cooled architectures impractical, or costly and less efficient than liquid-cooled approaches.” The reasons liquid cooling may become the new go-to cooling solution are also related to “space constraints, water usage restrictions and harsh IT environments,” [Schneider said in a white paper earlier this year][8]:

As chip density increases, the rack space required to hold the gear decreases, but the space needed for traditional air-cooling equipment keeps going up. So even as greater computing density shrinks the footprint of the equipment itself, the space required to air-cool it grows. The heat created by GPUs is so great that air cooling stops being practical.

Additionally, as edge data centers become more important, there’s an advantage to using IT that can be placed anywhere. “As the demand for IT deployments in urban areas, high rise buildings, and at the Edge increase, the need for placement in constrained locations will increase,” the paper says. In such scenarios, not requiring space for hot and cold aisles would be an advantage.

Liquid cooling would allow for silent operation, too; there aren’t any fans and pumps making disruptive noise.

Liquid cooling would also address restrictions on water usage that can affect the ability to use evaporative cooling and cooling towers to carry off heat generated by data centers. Direct-to-chip liquid-cooling systems of the kind the three companies want to concentrate on narrowly target the cooling at the server, not at the building level.

In harsh environments such as factories and [industrial IoT][9] deployments, heat and air quality can hinder air-cooling systems. Liquid-cooling systems can be self-contained in sealed units, protecting them from dust, for example.

Interestingly, as serious computer gamers will know, liquid cooling isn’t a new technology, [Wendy Torell points out in a Schneider blog post][10] pitching the technology. “It’s been around for decades and has historically focused on mainframes, high-performance computing (HPC), and gaming applications,” she explains. “Demand for IoT, artificial intelligence, machine learning, big data analytics, and edge applications is once again bringing it into the limelight.”

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3446027/data-center-liquid-cooling-to-gain-momentum.html

Author: [Patrick Nelson][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[2]: http://www.iceotope.com/about
[3]: https://www.avnet.com/wps/portal/us/about-avnet/overview/
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.prnewswire.com/news-releases/schneider-electric-announces-partnership-with-avnet-and-iceotope-to-develop-liquid-cooled-data-center-solutions-300929586.html
[6]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[7]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[8]: https://www.schneider-electric.us/en/download/search/liquid%20cooling/?langFilterDisabled=true
[9]: https://www.networkworld.com/article/3243928/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[10]: https://blog.se.com/datacenter/2019/07/11/not-just-about-chip-density-five-reasons-consider-liquid-cooling-data-center/
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world
@@ -0,0 +1,117 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Measuring the business value of open source communities)
[#]: via: (https://opensource.com/article/19/10/measuring-business-value-open-source)
[#]: author: (Jon Lawrence https://opensource.com/users/the3rdlaw)

Measuring the business value of open source communities
======
Corporate constituencies are interested in finding out the business
value of open source communities. Find out how to answer key questions
with the right metrics.
![Lots of people in a crowd.][1]

In _[Measuring the health of open source communities][2]_, I covered some of the key questions and metrics that we’ve explored as part of the [CHAOSS project][3] as they relate to project founders, maintainers, and contributors. In this article, we focus on open source corporate constituents (such as open source program offices, business risk and legal teams, human resources, and others) and end users.

Where the bulk of the metrics for core project teams are quantitative, for the remaining constituents our metrics must reflect a much broader range of interests and address many more qualitative measures. From a metrics-collection standpoint, much of the data gathering for qualitative measures is more manual and subjective, but it is nonetheless within the scope that CHAOSS hopes to address as the project matures.

While people on the business side of things do sometimes care about the metrics in use by the project itself, there are only two fundamental questions that corporate constituencies have. The first is about _value_: "Will this choice help our business make more money sooner?" The second is about _risk_: "Will this choice hurt our business’s chances of making money?"

Those questions can come in many different iterations across disciplines, from human resources to legal counsel and executive offices. But, at the end of the day, having answers that are based on data can make open source engagement more efficient, more effective, and less risky.

Once again, the information below is structured in a Goal-Question-Metric format:

  * Open source program offices (OSPOs)
    * As an OSPO leader, I care about prioritizing our resources toward healthy communities:
      * How [active][4] is the community?
        **Metric:** [Code development][5] - The number of commits and pull requests, review time for new code commits and pull requests, code reviews and merges, the number of accepted vs. rejected pull requests, and the frequency of new version releases.
        **Metric:** [Issue resolution][6] - The number of new issues, closed issues, the ratio of new vs. closed issues, and the average open time per issue. (A sketch of how to gather these counts follows this list.)
        **Metric:** Social - Social media mention counts, social media sentiment analysis, the activity of the community blog, and news releases (_future release_).
      * What is the [value][7] of our contributions to the project? (This is an area in active development.)
        **Metric:** Time value - Time saved training developers on new technologies, and time saved maintaining custom development once the improvements are upstreamed.
        **Metric:** Dollar value - How much would it have cost to maintain changes and custom solutions internally, versus contributing upstream and ensuring compatibility with future community releases?
      * What is the value of contributions to the project by other contributors and organizations?
        **Metric:** Time value - Time to market, new community-developed features released, and support for the project by the community versus the company.
        **Metric:** Dollar value - How much would it cost to internally rebuild the features provided by the community, and what is the opportunity cost of lagging behind innovations in open source projects?
      * Downstream value: How many other projects list our project as a dependency?
        **Metric:** The value of the ecosystem that has grown around a project.
      * How many forks of our project have there been?
        **Metric:** Are core developers more active in the mainline or in a fork?
        **Metric:** Are the forks contributing back to the mainline, or developing in new directions?
  * Engineering leadership
    * As an approving architect, I care most about good design patterns that introduce a minimum of technical debt.
      **Metric:** [Test Coverage][8] - What percentage of the code is tested?
      **Metric:** What percentage of the code undergoes code review?
      **Metric:** Does the project follow [Core Infrastructure Initiative (CII) Best Practices][9]?
    * As an engineering executive, I care most about minimizing time-to-market and bugs, and maximizing platform stability and reliability.
      **Metric:** The defect resolution velocity.
      **Metric:** The defect density.
      **Metric:** The feature development velocity.
    * I also want social proofs that give me a level of comfort.
      **Metric:** Sentiment analysis of social media related to the project.
      **Metric:** Count of white papers.
      **Metric:** Code stability - Project version numbers and the frequency of new releases.
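As a concrete illustration of the quantitative end of this list, here is a minimal sketch (not CHAOSS tooling) of how the "ratio of new vs. closed issues" could be pulled for a GitHub-hosted project using GitHub's search API. The repository name is a placeholder:

```
# Count issues opened vs. closed over the last 30 days for a repository.
REPO="owner/repo"                            # placeholder repository
SINCE=$(date -d "30 days ago" +%Y-%m-%d)     # GNU date

OPENED=$(curl -s -G "https://api.github.com/search/issues" \
  --data-urlencode "q=repo:$REPO is:issue created:>=$SINCE" | jq .total_count)
CLOSED=$(curl -s -G "https://api.github.com/search/issues" \
  --data-urlencode "q=repo:$REPO is:issue closed:>=$SINCE" | jq .total_count)

echo "opened: $OPENED, closed: $CLOSED"
```

Platforms such as [Augur][13] and [GrimoireLab][14], discussed below, automate this kind of collection across many more data sources.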
There is also the issue of legal counsel. This goal statement is: "As legal counsel, I care most about minimizing our company’s chances of getting sued." The question is: "What kind of license does the software have, and what obligations do we have under the license?"

The metrics involved here are:

  * **Metric:** [License Count][10] - How many different licenses are declared in a given project?
  * **Metric:** [License Declaration][11] - What kinds of licenses are declared in a given project?
  * **Metric:** [License Coverage][12] - How much of a given codebase is covered by the declared licenses?
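One way to put rough numbers behind these license questions, again outside of CHAOSS tooling, is the open source ScanCode toolkit. Treat this as a hedged sketch: the project path is a placeholder, and the output field names may differ across ScanCode versions:

```
# One-time setup, then detect licenses and write pretty-printed JSON.
$ pip install scancode-toolkit
$ scancode --license --json-pp licenses.json ./my-project

# How many distinct licenses are declared? (license count/declaration)
$ jq '[.files[].licenses[].spdx_license_key] | unique' licenses.json
```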
Lastly, there are further goals our project is considering to measure the impact of corporate open source policy as it relates to talent acquisition and retention. The goal for human resource managers is: "As an HR manager, I want to attract and retain the best talent I can." The questions and metrics are as follows:

  * What impact do our open source policies have on talent acquisition?
    **Metric:** Talent acquisition - Measure over time how many candidates report that it’s important to them that they get to work with open source technologies.
  * What impact do our open source policies have on talent retention?
    **Metric:** Talent retention - Measure how much employee churn can be reduced because people are able to work with or use open source technologies.
  * What is the impact on training when employees can learn from engaging in open source projects?
    **Metric:** Talent development - Measure over time the importance to employees of being able to use open source tech effectively.
  * How does allowing employees to work in a community outside of the company impact job satisfaction?
    **Metric:** Talent satisfaction - Measure over time the importance to employees of being able to contribute to open source tech.
    **Source:** Internal surveys.
    **Source:** Exit interviews. Did our policies around open source technologies influence your decision to leave at all?

### Wrapping up

These are still the early days of building a platform that brings together these disparate data sources. The CHAOSS core of [Augur][13] and [GrimoireLab][14] currently supports over two dozen sources, and I’m excited to see what lies ahead for this project.

As the CHAOSS frameworks mature, I’m optimistic that teams and projects that implement these types of measurement will be able to make better real-world decisions that result in healthier and more productive software development lifecycles.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/measuring-business-value-open-source

Author: [Jon Lawrence][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/the3rdlaw
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community_1.png?itok=rT7EdN2m (Lots of people in a crowd.)
[2]: https://opensource.com/article/19/8/measure-project
[3]: https://github.com/chaoss/
[4]: https://github.com/chaoss/wg-evolution/blob/master/focus_areas/community_growth.md
[5]: https://github.com/chaoss/wg-evolution#metrics
[6]: https://github.com/chaoss/wg-evolution/blob/master/focus_areas/issue_resolution.md
[7]: https://github.com/chaoss/wg-value
[8]: https://chaoss.community/metric-test-coverage/
[9]: https://github.com/coreinfrastructure/best-practices-badge
[10]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Count.md
[11]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Declared.md
[12]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Coverage.md
[13]: https://github.com/chaoss/augur
[14]: https://github.com/chaoss/grimoirelab
@@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Pennsylvania school district tackles network modernization)
[#]: via: (https://www.networkworld.com/article/3445976/pennsylvania-school-district-tackles-network-modernization.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)

Pennsylvania school district tackles network modernization
======
NASD upgrades its campus core to be the foundation for digital learning.

Success in business and education today starts with infrastructure modernization. In fact, my research has found that digitally forward organizations spend more than twice what their non-digital counterparts spend on evolving their IT infrastructure. However, most of the focus from IT has been on upgrading the application and compute infrastructure, with little thought given to a critical ingredient: the network. Organizations can only be as agile as the least agile component of their infrastructure, and for most companies, that’s the network.

### Manual processes plague network reliability

Legacy networks have outlived their useful life. The existing three-plus-tier architecture was designed for an era when network traffic was considered "best effort" - there was no way to guarantee performance or reserve bandwidth - and delivered non-mission-critical applications. Employees and educators ran applications locally, and the majority of critical data resided on workstations.

Today, everything has changed. Applications have moved to the cloud, workers are constantly on the go, and companies are connecting things to business networks at an unprecedented rate. One could argue that, for most organizations, the network is the business. Consider what’s happened in our personal lives: people stream content, communicate using video, shop online, and rely on the network for almost every aspect of their lives.
The same thing is happening to digital organizations. Companies today must support the needs of applications that are becoming increasingly dynamic and distributed. An unavailable or poorly performing network means the organization comes to a screeching halt.

Yet network engineering teams working with legacy networks can’t keep up with demands; the rigid, manual processes required to hand-code configurations are slow and error-prone. In fact, ZK Research found that the largest cause of downtime in legacy networks is human error.

Given the importance of the network, this kind of madness must stop. Businesses will never be able to harness the potential of digital transformation without modernizing the network.

What’s required is a network that is more dynamic and intelligent, one that simplifies operations via automation. This can lead to better control and faster error detection, diagnosis, and resolution. These buzzwords have been tossed around by many vendors and customers as the vision of where we are headed, yet it's been difficult to find actual customer deployments.

### NASD modernizes wired and wireless network to support digital curriculum

The Nazareth Area School District (NASD) recently went through a network modernization project.

The Eastern Pennsylvania school district, which has roughly 4,800 students, has a bold vision: to inspire students to be innovative, collaborative, and constructive members of the community who embrace the tenets of diversity, value, education, and honesty. NASD aims to accomplish its vision by helping students build a strong work ethic and sense of responsibility and by challenging them to be leaders and good global citizens.

To support its goals, NASD set out to completely revamp the way it teaches. The district embraced a number of modern technologies that would foster immersive learning and collaboration.

There's a heavy emphasis on science, technology, engineering, arts and mathematics (STEAM), which drives more focus on coding, robotics, and virtual and augmented reality. For example, teachers are using Google Expeditions VR Classroom kits to integrate VR into the classroom. In addition, NASD has converted many of its classrooms into “affinity rooms” where students can work together on projects in the areas of VR, AR, robotics, stop-motion photography, and other advanced technologies.

NASD understood that modernizing education requires a modernized network. If new tools and applications don’t perform as expected, the learning process suffers as students sit around waiting while network problems are solved. The district knew it needed to upgrade its network to one that was more intelligent, more reliable, and easier to diagnose.

NASD chose Aruba, a Hewlett Packard Enterprise company, to be its wired and wireless networking supplier.

In my opinion, the decision to upgrade the wired and wireless networks at the same time is a smart one. Many organizations put in a new Wi-Fi network only to find the wired backbone can’t support the traffic or doesn’t have the necessary reliability.

The high-availability switches run the new ArubaOS-CX operating system, designed for the digital transformation era. The network devices are configured through a centralized graphical interface rather than a command-line interface (CLI), and they have an onboard Network Analytics Engine to reduce the complexity of running the network.

NASD selected two Aruba 8320 switches to be the core of its network, to provide “utility-grade networking” that is always on and always available, much like power.

“By running two switches in tandem, we would gain a fully redundant network that made failovers, whether planned or unplanned, completely undetectable by our users,” said Mike Fahey, senior application and network administrator at NASD.

### Wanted: utility-grade Wi-Fi

Utility-grade Wi-Fi was a must for NASD, as almost all of the new learning tools connect via Wi-Fi only. The school system had been using two Wi-Fi vendors, neither of which performed well, and troubleshooting took long periods of time.

The Nazareth IT staff initially replaced the most problematic APs with Aruba APs. As this happened, Michael Uelses, director of IT, said the teachers noticed a marked difference in Wi-Fi performance. Now the entire school district has standardized on Aruba’s gigabit Wi-Fi and has expanded it to outdoor locations. This has enabled the school to extend its security strategy and new emergency-preparedness application to playgrounds, parking lots, and other outdoor areas that Wi-Fi previously did not reach.

Supporting gigabit Wi-Fi required upgrading the backbone network to 10 Gigabit, which the Aruba 8320 switches support. The switches can also be upgraded to higher speeds, up to 100 Gigabit, if the need arises. NASD is planning to expand its use of bandwidth-hungry apps such as VR to immerse students in subjects including biology and engineering. The option to upgrade the switches gives NASD confidence that it has made the right network choices for the future.

What NASD is doing should be a message to all schools. Digital tools are here to stay and can change the way students learn. Success with digital education requires a rock-solid wired and wireless network that delivers utility-like service that is always on, so students can always be learning.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3445976/pennsylvania-school-district-tackles-network-modernization.html

Author: [Zeus Kerravala][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world
@@ -1,3 +1,4 @@
luming translating
23 open source audio-visual production tools
======
@@ -1,195 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: How to leverage failure)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)

Mutation testing by example: How to leverage failure
======
Use planned failure to ensure your code meets expected outcomes and
follow along with the .NET xUnit.net testing framework.
![failure sign at a party, celebrating failure][1]

In my article _[Mutation testing is the evolution of TDD][2]_, I exposed the power of iteration to guarantee a solution when a measurable test is available. In that article, an iterative approach helped to determine how to implement code that calculates the square root of a given number.

I also demonstrated that the most effective method is to find a measurable goal or test, then start iterating with best guesses. The first guess at the correct answer will most likely fail, as expected, so the failed guess needs to be refined. The refined guess must be validated against the measurable goal or test. Based on the result, the guess is either validated or must be further refined.

In this model, the only way to learn how to reach the solution is to fail repeatedly. It sounds counterintuitive, but amazingly, it works.

Following in the footsteps of that analysis, this article examines the best way to use a DevOps approach when building a solution containing some dependencies. The first step is to write a test that can be expected to fail.

### The problem with dependencies is that you can't depend on them

The problem with dependencies, as Michael Nygard wittily expresses in _[Architecture without an end state][3]_, is a huge topic better left for another article. Here, you'll look into potential pitfalls that dependencies tend to bring to a project and how to leverage test-driven development (TDD) to avoid those pitfalls.

First, pose a real-life challenge, then see how it can be solved using TDD.

### Who let the cat out?

![Cat standing on a roof][4]

In Agile development environments, it's helpful to start building the solution by defining the desired outcomes. Typically, the desired outcomes are described in a [_user story_][5]:

> _Using my home automation system (HAS),
> I want to control when the cat can go outside,
> because I want to keep the cat safe overnight._

Now that you have a user story, you need to elaborate on it by providing some functional requirements (that is, by specifying the _acceptance criteria_). Start with the simplest of scenarios, described in pseudo-code:

> _Scenario #1: Disable cat trap door during nighttime_
>
> * Given that the clock detects that it is nighttime
> * When the clock notifies the HAS
> * Then HAS disables the Internet of Things (IoT)-capable cat trap door
>

### Decompose the system

The system you are building (the HAS) needs to be _decomposed_ (broken down into its dependencies) before you can start working on it. The first thing you must do is identify any dependencies (if you're lucky, your system has no dependencies, which would make it easy to build, but then it arguably wouldn't be a very useful system).

From the simple scenario above, you can see that the desired business outcome (automatically controlling a cat door) depends on detecting nighttime. This dependency hinges upon the clock. But the clock is not capable of determining whether it is daylight or nighttime. It's up to you to supply that logic.

Another dependency in the system you're building is the ability to automatically access the cat door and enable or disable it. That dependency most likely hinges upon an API provided by the IoT-capable cat door.

### Fail fast toward dependency management

To satisfy the first dependency, we will build the logic that determines whether the current time is daylight or nighttime. In the spirit of TDD, we will start with a small failure.

Refer to my [previous article][2] for detailed instructions on how to set up the development environment and scaffolds required for this exercise. We will be reusing the same .NET environment and relying on the [xUnit.net][6] framework.

Next, create a new project called HAS (for "home automation system") and create a file called **UnitTest1.cs**. In this file, write the first failing unit test. In this unit test, describe your expectations. For example, when the system runs, if the time is 7pm, then the component responsible for deciding whether it's daylight or nighttime returns the value "Nighttime."

Here is the unit test that describes that expectation:

```
using System;
using Xunit;
using app; // the namespace that will hold DayOrNightUtility (added so the test compiles)

namespace unittest
{
    public class UnitTest1
    {
        DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();

        [Fact]
        public void Given7pmReturnNighttime()
        {
            var expected = "Nighttime";
            var actual = dayOrNightUtility.GetDayOrNight();
            Assert.Equal(expected, actual);
        }
    }
}
```
By this point, you may be familiar with the shape and form of a unit test. A quick refresher: describe the expectation by giving the unit test a descriptive name, **Given7pmReturnNighttime** in this example. Then in the body of the unit test, a variable named **expected** is created and assigned the expected value (in this case, the value "Nighttime"). Following that, a variable named **actual** is assigned the actual value (available after the component or service processes the time of day).

Finally, it checks whether the expectation has been met by asserting that the expected and actual values are equal: **Assert.Equal(expected, actual)**.

You can also see in the above listing a component or service called **dayOrNightUtility**. This module is capable of receiving the message **GetDayOrNight** and is supposed to return a value of type **string**.

Again, in the spirit of TDD, the component or service being described hasn't been built yet (it is merely being described with the intention to prescribe it later). Building it is driven by the described expectations.

Create a new file in the **app** folder and give it the name **DayOrNightUtility.cs**. Add the following C# code to that file and save it:

```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight() {
            string dayOrNight = "Undetermined";
            return dayOrNight;
        }
    }
}
```

Now go to the command line, change directory to the **unittests** folder, and run the test:

```
$ dotnet test
[Xunit.net 00:00:02.33] unittest.UnitTest1.Given7pmReturnNighttime [FAIL]
Failed unittest.UnitTest1.Given7pmReturnNighttime
[...]
```

Congratulations, you have written your first failing unit test. The unit test expected **DayOrNightUtility** to return the string value "Nighttime" but instead received the string value "Undetermined."

### Fix the failing unit test

A quick and dirty way to fix the failing test is to replace the value "Undetermined" with the value "Nighttime" and save the change:

```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight() {
            string dayOrNight = "Nighttime";
            return dayOrNight;
        }
    }
}
```

Now when we run the test, it passes:

```
$ dotnet test
Starting test execution, please wait...

Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.6470 Seconds
```

However, hardcoding the values is basically cheating, so it's better to endow **DayOrNightUtility** with some intelligence. Modify the **GetDayOrNight** method to include some time-calculation logic:

```
public string GetDayOrNight() {
    string dayOrNight = "Daylight";
    // Note: a default-constructed DateTime is midnight (hour 0), not the
    // current system time; wiring in the real clock is one of the cases
    // still to come.
    DateTime time = new DateTime();
    if (time.Hour < 7) {
        dayOrNight = "Nighttime";
    }
    return dayOrNight;
}
```

The method now builds a **DateTime** value and checks whether its **Hour** is less than 7am. If it is, the logic transforms the **dayOrNight** string value from "Daylight" to "Nighttime." Because a default-constructed **DateTime** is always midnight, the **Hour** is 0 and the unit test now passes; reading the actual system time is one of the cases left to work through.

### The start of a test-driven solution

We now have the beginnings of a base-case unit test and a viable solution for our time dependency. There are more than a few cases still to work through.

In the next article, I'll demonstrate how to test for daylight hours and how to leverage failure along the way.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/mutation-testing-example-tdd

Author: [Alex Bunardzic][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)
[2]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[3]: https://www.infoq.com/presentations/Architecture-Without-an-End-State/
[4]: https://opensource.com/sites/default/files/uploads/cat.png (Cat standing on a roof)
[5]: https://www.agilealliance.org/glossary/user-stories
[6]: https://xunit.net/
@@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux sudo flaw can lead to unauthorized privileges)
[#]: via: (https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Linux sudo flaw can lead to unauthorized privileges
======
Exploiting a newly discovered sudo flaw in Linux can enable certain users to run commands as root despite restrictions against it.

A newly discovered and serious flaw in the [**sudo**][1] command can, if exploited, enable users to run commands as root in spite of the fact that the syntax of the **/etc/sudoers** file specifically disallows them from doing so.

Updating **sudo** to version 1.8.28 should address the problem, and Linux admins are encouraged to do so as soon as possible.

How the flaw might be exploited depends on the specific privileges granted in the **/etc/sudoers** file. A rule that allows a user to edit files as any user except root, for example, would actually allow that user to edit files as root as well. In this case, the flaw could lead to very serious problems.
For a user to exploit the flaw, the user needs to be assigned privileges in the **/etc/sudoers** file that allow them to run commands as some other users, and the flaw is limited to the command privileges that are assigned in this way.

This problem affects versions prior to 1.8.28. To check your sudo version, use this command:

```
$ sudo -V
Sudo version 1.8.27 <===
Sudoers policy plugin version 1.8.27
Sudoers file grammar version 46
Sudoers I/O plugin version 1.8.27
```

The vulnerability has been assigned [CVE-2019-14287][4] in the **Common Vulnerabilities and Exposures** database. The risk is that any user who has been given the ability to run even a single command as an arbitrary user may be able to escape the restrictions and run that command as root – even if the specified privilege is written to disallow running the command as root.

The lines below are meant to give the user "jdoe" the ability to edit files with **vi** as any user except root (**!root** means "not root") and the user "nemo" the right to run the **id** command as any user except root:

```
# affected entries on host "dragonfly"
jdoe dragonfly = (ALL, !root) /usr/bin/vi
nemo dragonfly = (ALL, !root) /usr/bin/id
```

However, given the flaw, either of these users would be able to circumvent the restriction and edit files or run the **id** command as root as well.

The flaw can be exploited by an attacker to run commands as root by specifying the user ID "-1" or "4294967295."
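For example, the widely published proof of concept for this CVE uses the "nemo" entry above; treat this as a sketch, since the exact output can vary by system:

```
# Run as user "nemo", whose sudoers entry forbids running id as root:
$ sudo -u#-1 id -u
0
```

Because **sudo** passes the -1 user ID to **setresuid()**, which treats -1 as "leave unchanged," the command keeps sudo's root privileges instead of switching to the requested user.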
A response of "0" (root's user ID) demonstrates that the command was run as root.

Joe Vennix from Apple Information Security both found and analyzed the problem.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html

Author: [Sandra Henry-Stocker][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236499/some-tricks-for-using-sudo.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14287
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
sources/tech/20191017 How to type emoji on Linux.md (new file, 146 lines)
@@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to type emoji on Linux)
[#]: via: (https://opensource.com/article/19/10/how-type-emoji-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

How to type emoji on Linux
======
The GNOME desktop makes it easy to use emoji in your communications.
![A cat under a keyboard.][1]

Emoji are those fanciful pictograms that snuck into the Unicode character space. They're all the rage online, and people use them for all kinds of surprising things, from signifying reactions on social media to serving as visual labels for important file names. There are many ways to enter Unicode characters on Linux, but the GNOME desktop makes it easy to find and type an emoji.

![Emoji in Emacs][2]

### Requirements

For this easy method, you must be running Linux with the [GNOME][3] desktop.

You must also have an emoji font installed. There are many to choose from, so do a search for _emoji_ using your favorite software installer application or package manager.

For example, on Fedora:

```
$ sudo dnf search emoji
emoji-picker.noarch : An emoji selection tool
unicode-emoji.noarch : Unicode Emoji Data Files
eosrei-emojione-fonts.noarch : A color emoji font
twitter-twemoji-fonts.noarch : Twitter Emoji for everyone
google-android-emoji-fonts.noarch : Android Emoji font released by Google
google-noto-emoji-fonts.noarch : Google “Noto Emoji” Black-and-White emoji font
google-noto-emoji-color-fonts.noarch : Google “Noto Color Emoji” colored emoji font
[...]
```

On Ubuntu or Debian, use **apt search** instead.

I'm using [Google Noto Color Emoji][4] in this article.

### Get set up

To get set up, launch GNOME's Settings application.

  1. In Settings, click the **Region & Language** category in the left column.
  2. Click the plus symbol (**+**) under the **Input Sources** heading to bring up the **Add an Input Source** panel.

![Add a new input source][5]

  3. In the **Add an Input Source** panel, click the hamburger menu at the bottom of the input list.

![Add an Input Source panel][6]

  4. Scroll to the bottom of the list and select **Other**.
  5. In the **Other** list, find **Other (Typing Booster)**. (You can type **boost** in the search field at the bottom to filter the list.)

![Find Other \(Typing Booster\) in inputs][7]

  6. Click the **Add** button in the top-right corner of the panel to add the input source to GNOME.

Once you've done that, you can close the Settings window.

#### Switch to Typing Booster

You now have a new icon in the top-right of your GNOME desktop. By default, it's set to the two-letter abbreviation of your language (**en** for English, **eo** for Esperanto, **es** for Español, and so on). If you press the **Super** key (the key with a Linux penguin, Windows logo, or Mac Command symbol) and the **Spacebar** together on your keyboard, you will switch input sources from your default source to the next one on your input list. In this example, you only have two input sources: your default language and Typing Booster.

Try pressing **Super**+**Spacebar** together and watch the input name and icon change.

#### Configure Typing Booster

With the Typing Booster input method active, click the input sources icon in the top-right of your screen, select **Unicode symbols and emoji predictions**, and set it to **On**.

![Set Unicode symbols and emoji predictions to On][8]

This makes Typing Booster dedicated to typing emoji, which isn't all Typing Booster is good for, but in the context of this article it's exactly what is needed.

### Type emoji

With Typing Booster still active, open a text editor like Gedit, a web browser, or anything that you know understands Unicode characters, and type "_thumbs up_." As you type, Typing Booster searches for matching emoji names.

![Typing Booster searching for emojis][9]

To leave emoji mode, press **Super**+**Spacebar** again, and your input source goes back to your default language.

### Switch the switcher

If the **Super**+**Spacebar** keyboard shortcut is not natural for you, then you can change it to a different combination. In GNOME Settings, navigate to **Devices** and select **Keyboard**.

In the top bar of the **Keyboard** window, search for **Input** to filter the list. Set **Switch to next input source** to a key combination of your choice.

![Changing keystroke combination in GNOME settings][10]

### Unicode input

The fact is, keyboards were designed for a 26-letter (or thereabouts) alphabet along with as many numerals and symbols. ASCII has more characters than you'll find on a typical keyboard, to say nothing of the millions of characters within Unicode. If you want to type Unicode characters into a modern Linux application but don't want to switch to Typing Booster, then you can use the Unicode input shortcut.

  1. With your default language active, open a text editor like Gedit, a web browser, or any application you know accepts Unicode.
  2. Press **Ctrl**+**Shift**+**U** on your keyboard to enter Unicode entry mode. Release the keys.
  3. You are now in Unicode entry mode, so type the hexadecimal code point of a Unicode symbol. For instance, try **1F44D** for a 👍 symbol, or **2620** for a ☠ symbol. To get the code point of a Unicode symbol, you can search the internet or refer to the [Unicode specification][11].

### Pragmatic emoji-ism

Emoji are fun and expressive. They can make your text unique to you. They can also be utilitarian. Because emoji are Unicode characters, they can be used anywhere a font can be used, and they can be used the same way any alphabetic character can be used. For instance, if you want to mark a series of files with a special symbol, you can add an emoji to the name, and you can filter by that emoji in Search.

![Labeling a file with emoji][12]

Use emoji all you want, because Linux is a Unicode-friendly environment, and it's getting friendlier with every release.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/how-type-emoji-linux

Author: [Seth Kenlon][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead_cat-keyboard.png?itok=fuNmiGV- (A cat under a keyboard.)
[2]: https://opensource.com/sites/default/files/uploads/emacs-emoji.jpg (Emoji in Emacs)
[3]: https://www.gnome.org/
[4]: https://www.google.com/get/noto/help/emoji/
[5]: https://opensource.com/sites/default/files/uploads/gnome-setting-region-add.png (Add a new input source)
[6]: https://opensource.com/sites/default/files/uploads/gnome-setting-input-list.png (Add an Input Source panel)
[7]: https://opensource.com/sites/default/files/uploads/gnome-setting-input-other-typing-booster.png (Find Other (Typing Booster) in inputs)
[8]: https://opensource.com/sites/default/files/uploads/emoji-input-on.jpg (Set Unicode symbols and emoji predictions to On)
[9]: https://opensource.com/sites/default/files/uploads/emoji-input.jpg (Typing Booster searching for emojis)
[10]: https://opensource.com/sites/default/files/uploads/gnome-setting-keyboard-switch-input.jpg (Changing keystroke combination in GNOME settings)
[11]: http://unicode.org/emoji/charts/full-emoji-list.html
[12]: https://opensource.com/sites/default/files/uploads/file-label.png (Labeling a file with emoji)
218
sources/tech/20191017 Intro to the Linux useradd command.md
Normal file
@ -0,0 +1,218 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Intro to the Linux useradd command)
|
||||
[#]: via: (https://opensource.com/article/19/10/linux-useradd-command)
|
||||
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
|
||||
|
||||
Intro to the Linux useradd command
|
||||
======
|
||||
Add users (and customize their accounts as needed) with the useradd
|
||||
command.
|
||||
![people in different locations who are part of the same team][1]
|
||||
|
||||
Adding a user is one of the most fundamental exercises on any computer system; this article focuses on how to do it on a Linux system.
|
||||
|
||||
Before getting started, I want to mention three fundamentals to keep in mind. First, like with most operating systems, Linux users need an account to be able to log in. This article specifically covers local accounts, not network accounts such as LDAP. Second, accounts have both a name (called a username) and a number (called a user ID). Third, users are typically placed into a group. Groups also have a name and group ID.
|
||||
|
||||
As you'd expect, Linux includes a command-line utility for adding users; it's called **useradd**. You may also find the command **adduser**. Many distributions have added this symbolic link to the **useradd** command as a matter of convenience.
|
||||
|
||||
|
||||
```
|
||||
$ file `which adduser`
|
||||
/usr/sbin/adduser: symbolic link to useradd
|
||||
```
|
||||
|
||||
Let's take a look at **useradd**.
|
||||
|
||||
> Note: The defaults described in this article reflect those in Red Hat Enterprise Linux 8.0. You may find subtle differences in these files and certain defaults on other Linux distributions or other Unix operating systems such as FreeBSD or Solaris.
|
||||
|
||||
### Default behavior
|
||||
|
||||
The basic usage of **useradd** is quite simple: A user can be added just by providing their username.
|
||||
|
||||
|
||||
```
|
||||
`$ sudo useradd sonny`
|
||||
```
|
||||
|
||||
In this example, the **useradd** command creates an account called _sonny_. A group with the same name is also created, and _sonny_ is placed in it to be used as the primary group. There are other parameters, such as language and shell, that are applied according to defaults and values set in the configuration files **/etc/default/useradd** and **/etc/login.defs**. This is generally sufficient for a single, personal system or a small, one-server business environment.
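You can inspect the current defaults at any time with the **-D** flag. Here is the output from a typical RHEL 8 system; your values may differ:

```
$ useradd -D
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes
```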
|
||||
|
||||
While the two files above govern the behavior of **useradd**, user information is stored in other files found in the **/etc** directory, which I will refer to throughout this article.
|
||||
|
||||
File | Description | Fields (bold—set by useradd)
|
||||
---|---|---
|
||||
passwd | Stores user account details | **username**:unused:**uid**:**gid**:**comment**:**homedir**:**shell**
|
||||
shadow | Stores user account security details | **username**:password:lastchange:minimum:maximum:warn:**inactive**:**expire**:unused
|
||||
group | Stores group details | **groupname**:unused:**gid**:**members**
|
||||
|
||||
### Customizable behavior
|
||||
|
||||
The command line allows customization for times when an administrator needs finer control, such as to specify a user's ID number.
|
||||
|
||||
#### User and group ID numbers
|
||||
|
||||
By default, **useradd** tries to use the same number for the user ID (UID) and primary group ID (GID), but there are no guarantees. Although it's not necessary for the UID and GID to match, it's easier for administrators to manage them when they do.
|
||||
|
||||
I have just the scenario to explain it. Suppose I add another account, this time for Timmy. Comparing the two users, _sonny_ and _timmy_, with the **getent** command shows that both users and their respective primary groups were created.
|
||||
|
||||
|
||||
```
|
||||
$ getent passwd sonny timmy
|
||||
sonny:x:1001:1002:Sonny:/home/sonny:/bin/bash
|
||||
timmy:x:1002:1003::/home/timmy:/bin/bash
|
||||
|
||||
$ getent group sonny timmy
|
||||
sonny:x:1002:
|
||||
timmy:x:1003:
|
||||
```
|
||||
|
||||
Unfortunately, neither user's UID matches their primary GID. This is because the default behavior is to assign the next available UID to the user and then to attempt to assign the same number to the primary group. However, if that number is already in use, the next available GID is assigned to the group. To explain what happened, I hypothesize that a group with GID 1001 already exists and enter a command to confirm.
|
||||
|
||||
|
||||
```
|
||||
$ getent group 1001
|
||||
book:x:1001:alan
|
||||
```
|
||||
|
||||
The group _book_ with the ID _1001_ has caused the GIDs to be off by one. This is an example where a system administrator would need to take more control of the user-creation process. To resolve this issue, I must first determine the next available user and group ID that will match. The commands **getent group** and **getent passwd** will be helpful in determining the next available number. This number can be passed with the **-u** argument.
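As a sketch of one way to do this (assuming regular accounts start at UID/GID 1000 and system accounts such as **nobody** sit above 60000), the highest IDs currently in use can be listed with:

```
$ getent passwd | awk -F: '$3 >= 1000 && $3 < 60000 {print $3}' | sort -n | tail -1
1002
$ getent group | awk -F: '$3 >= 1000 && $3 < 60000 {print $3}' | sort -n | tail -1
1003
```

Here 1004 is the first number free in both databases, which is why it is passed to **-u** below.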
|
||||
|
||||
|
||||
```
|
||||
$ sudo useradd -u 1004 bobby
|
||||
|
||||
$ getent passwd bobby; getent group bobby
|
||||
bobby:x:1004:1004::/home/bobby:/bin/bash
|
||||
bobby:x:1004:
|
||||
```
|
||||
|
||||
Another good reason to specify the ID is for users that will be accessing files on a remote system using the Network File System (NFS). NFS is easier to administer when all client and server systems have the same ID configured for a given user. I cover this in a bit more detail in my article on [using autofs to mount NFS shares][2].
|
||||
|
||||
### More customization
|
||||
|
||||
Very often, though, other account parameters need to be specified for a user. Here are brief examples of the most common customizations you may need to use.
|
||||
|
||||
#### Comment
|
||||
|
||||
The comment option is a plain-text field for providing a short description or other information using the **-c** argument.
|
||||
|
||||
|
||||
```
|
||||
$ sudo useradd -c "Bailey is cool" bailey
|
||||
$ getent passwd bailey
|
||||
bailey:x:1011:1011:Bailey is cool:/home/bailey:/bin/bash
|
||||
```
|
||||
|
||||
#### Groups
|
||||
|
||||
A user can be assigned one primary group and multiple secondary groups. The **-g** argument specifies the name or GID of the primary group. If it's not specified, **useradd** creates a primary group with the user's same name (as demonstrated above). The **-G** (uppercase) argument is used to pass a comma-separated list of groups that the user will be placed into; these are known as secondary groups.
|
||||
|
||||
|
||||
```
|
||||
$ sudo useradd -G tgroup,fgroup,libvirt milly
|
||||
$ id milly
|
||||
uid=1012(milly) gid=1012(milly) groups=1012(milly),981(libvirt),4000(fgroup),3000(tgroup)
|
||||
```
|
||||
|
||||
#### Home directory
|
||||
|
||||
The default behavior of **useradd** is to create the user's home directory in **/home**. However, different aspects of the home directory can be overridden with the following arguments. The **-b** argument sets a different base directory where user homes are placed, for example, **/home2** instead of the default **/home**.
|
||||
|
||||
|
||||
```
|
||||
$ sudo useradd -b /home2 vicky
|
||||
$ getent passwd vicky
|
||||
vicky:x:1013:1013::/home2/vicky:/bin/bash
|
||||
```
|
||||
|
||||
The **-d** argument lets you specify a home directory whose name differs from the username.
|
||||
|
||||
|
||||
```
|
||||
$ sudo useradd -d /home/ben jerry
|
||||
$ getent passwd jerry
|
||||
jerry:x:1014:1014::/home/ben:/bin/bash
|
||||
```
|
||||
|
||||
#### The skeleton directory
|
||||
|
||||
The **-k** argument instructs **useradd** to populate the new user's home directory with the files found in a skeleton directory such as the default **/etc/skel**. These are usually shell configuration files, but they can be anything that a system administrator would like to make available to all new users.
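As a brief sketch (the account name _felix_ is hypothetical, and the **-m** flag asks **useradd** to create the home directory so the skeleton files are actually copied):

```
$ ls -A /etc/skel
.bash_logout  .bash_profile  .bashrc
$ sudo useradd -m -k /etc/skel felix
$ sudo ls -A /home/felix
.bash_logout  .bash_profile  .bashrc
```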
|
||||
|
||||
#### Shell
|
||||
|
||||
The **-s** argument can be used to specify the shell; the default is used if nothing else is specified. For example, in the following, **bash** is defined as the default in the configuration file, but Wally has requested **zsh**.
|
||||
|
||||
|
||||
```
|
||||
$ grep SHELL /etc/default/useradd
|
||||
SHELL=/bin/bash
|
||||
|
||||
$ sudo useradd -s /usr/bin/zsh wally
|
||||
$ getent passwd wally
|
||||
wally:x:1004:1004::/home/wally:/usr/bin/zsh
|
||||
```
|
||||
|
||||
#### Security
|
||||
|
||||
Security is an essential part of user management, so there are several options available with the **useradd** command. A user account can be given an expiration date, in the form YYYY-MM-DD, using the **-e** argument.
|
||||
|
||||
|
||||
```
|
||||
$ sudo useradd -e 20191231 sammy
|
||||
$ sudo getent shadow sammy
|
||||
sammy:!!:18171:0:99999:7::20191231:
|
||||
```
|
||||
|
||||
An account can also be disabled automatically if the password expires. The **-f** argument will set the number of days after the password expires before the account is disabled. Zero is immediate.
|
||||
|
||||
|
||||
```
|
||||
$ sudo useradd -f 30 willy
|
||||
$ sudo getent shadow willy
|
||||
willy:!!:18171:0:99999:7:30::
|
||||
```
|
||||
|
||||
### A real-world example
|
||||
|
||||
In practice, several of these arguments may be used when creating a new user account. For example, if I need to create an account for Perry, I might use the following command:
|
||||
|
||||
|
||||
```
|
||||
$ sudo useradd -u 1020 -c "Perry Example" \
|
||||
-G tgroup -b /home2 \
|
||||
-s /usr/bin/zsh \
|
||||
-e 20201201 -f 5 perry
|
||||
```
|
||||
|
||||
Refer to the sections above to understand each option. Verify the results with:
|
||||
|
||||
|
||||
```
|
||||
$ getent passwd perry; getent group perry; getent shadow perry; id perry
|
||||
perry:x:1020:1020:Perry Example:/home2/perry:/usr/bin/zsh
|
||||
perry:x:1020:
|
||||
perry:!!:18171:0:99999:7:5:20201201:
|
||||
uid=1020(perry) gid=1020(perry) groups=1020(perry),3000(tgroup)
|
||||
```
|
||||
|
||||
### Some final advice
|
||||
|
||||
The **useradd** command is a "must-know" for any Unix (not just Linux) administrator. It is important to understand all of its options since user creation is something that you want to get right the first time. This means having a well-thought-out naming convention that includes a dedicated UID/GID range reserved for your users across your enterprise, not just on a single system—particularly when you're working in a growing organization.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/linux-useradd-command
|
||||
|
||||
作者:[Alan Formy-Duval][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/alanfdoss
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/connection_people_team_collaboration.png?itok=0_vQT8xV (people in different locations who are part of the same team)
|
||||
[2]: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares
|
132
sources/tech/20191017 Using multitail on Linux.md
Normal file
@ -0,0 +1,132 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Using multitail on Linux)
|
||||
[#]: via: (https://www.networkworld.com/article/3445228/using-multitail-on-linux.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
Using multitail on Linux
|
||||
======
|
||||
|
||||
[Glen Bowman][1] [(CC BY-SA 2.0)][2]
|
||||
|
||||
The **multitail** command can be very helpful whenever you want to watch activity on a number of files at the same time – especially log files. It works like a multi-windowed **tail -f** command. That is, it displays the bottoms of files and new lines as they are being added. While easy to use in general, **multitail** does provide some command-line and interactive options that you should be aware of before you start to use it routinely.
|
||||
|
||||
### Basic multitail-ing
|
||||
|
||||
The simplest use of **multitail** is to list the names of the files that you wish to watch on the command line. This command splits the screen horizontally (i.e., top and bottom), displaying the bottom of each of the files along with updates.
|
||||
|
||||
|
||||
|
||||
```
|
||||
$ multitail /var/log/syslog /var/log/dmesg
|
||||
```
|
||||
|
||||
The display will be split like this:
|
||||
|
||||
```
|
||||
+-----------------------+
|
||||
| |
|
||||
| |
|
||||
+-----------------------+
|
||||
| |
|
||||
| |
|
||||
+-----------------------+
|
||||
```
|
||||
|
||||
The lines displayed from each of the files would be followed by a single line per file that includes the assigned file number (starting with 00), the file name, the file size, and the date and time the most recent content was added. Each of the files will be allotted half the space available regardless of its size or activity. For example:
|
||||
|
||||
```
|
||||
content lines from my1.log
|
||||
more content
|
||||
more lines
|
||||
|
||||
00] my1.log 59KB - 2019/10/14 12:12:09
|
||||
content lines from my2.log
|
||||
more content
|
||||
more lines
|
||||
|
||||
01] my2.log 120KB - 2019/10/14 14:22:29
|
||||
```
|
||||
|
||||
Note that **multitail** will not complain if you ask it to display non-text files or files that you have no permission to view; you just won't see the contents.
|
||||
|
||||
You can also use wild cards to specify the files that you want to watch:
|
||||
|
||||
```
|
||||
$ multitail my*.log
|
||||
```
|
||||
|
||||
One thing to keep in mind is that **multitail** is going to split the screen evenly. If you specify too many files, you will see only a few lines from each, and only the first seven or so of the requested files will be shown at all unless you take extra steps to view the later files (see the scrolling option described below). The exact result depends on how many lines are available in your terminal window.
|
||||
|
||||
Press **q** to quit **multitail** and return to your normal screen view.
|
||||
|
||||
### Dividing the screen
|
||||
|
||||
**Multitail** will split your terminal window vertically (i.e., left and right) if you prefer. For this, use the **-s** option. If you specify three files, the right side of your screen will be divided horizontally as well. With four, you'll have four equal-sized windows.
|
||||
|
||||
```
|
||||
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
|
||||
| | | | | | | | |
|
||||
| | | | | | | | |
|
||||
| | | | +-----------+ +-----------+-----------+
|
||||
| | | | | | | | |
|
||||
| | | | | | | | |
|
||||
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
|
||||
2 files 3 files 4 files
|
||||
```
|
||||
|
||||
Use **multitail -s 3 file1 file2 file3** if you want to split the screen into three columns.
|
||||
|
||||
```
|
||||
+-------+-------+-------+
|
||||
| | | |
|
||||
| | | |
|
||||
| | | |
|
||||
| | | |
|
||||
| | | |
|
||||
+-------+-------+-------+
|
||||
3 files with -s 3
|
||||
```
|
||||
|
||||
### Scrolling
|
||||
|
||||
You can scroll up and down through displayed files, but you need to press **b** to bring up a selection menu and then use the up and down arrow keys to select the file you wish to scroll through. Then press the **enter** key. You can then scroll through the lines in an enlarged area, again using the up and down arrow keys. Press **q** when you're done to go back to the normal view.
|
||||
|
||||
### Getting Help
|
||||
|
||||
Pressing **h** in **multitail** will open a help menu describing some of the basic operations, though the man page provides quite a bit more information and is worth perusing if you want to learn even more about using this tool.
|
||||
|
||||
**Multitail** will not likely be installed on your system by default, but installing it with **apt-get** or **yum** should be easy, as shown below. The tool provides a lot of functionality, but with its character-based display, window borders will just be strings of **q**'s and **x**'s. It's very handy when you need to keep an eye on file updates.
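For example, depending on your distribution (on RHEL-family systems, the package may come from the EPEL repository):

```
$ sudo apt-get install multitail
$ sudo yum install multitail
```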
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3445228/using-multitail-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.flickr.com/photos/glenbowman/7992498919/in/photolist-dbgDtv-gHfRRz-5uRM4v-gHgFnz-6sPqTZ-5uaP7H-USFPqD-pbtRUe-fiKiYn-nmgWL2-pQNepR-q68p8d-dDsUxw-dbgFKG-nmgE6m-DHyqM-nCKA4L-2d7uFqH-Kbqzk-8EwKg-8Vy72g-2X3NSN-78Bv84-buKWXF-aeM4ok-yhweWf-4vwpyX-9hu8nq-9zCoti-v5nzP5-23fL48r-24y6pGS-JhWDof-6zF75k-24y6nHS-9hr19c-Gueh6G-Guei7u-GuegFy-24y6oX5-26qu5iX-wKrnMW-Gueikf-24y6oYh-27y4wwA-x4z19F-x57yP4-24BY6gc-24y6nPo-QGwbkf
|
||||
[2]: https://creativecommons.org/licenses/by-sa/2.0/legalcode
|
||||
[3]: https://www.networkworld.com/newsletters/signup.html
|
||||
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
|
||||
[5]: https://www.facebook.com/NetworkWorld/
|
||||
[6]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,210 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Configure Rsyslog Server in CentOS 8 / RHEL 8)
|
||||
[#]: via: (https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/)
|
||||
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
|
||||
|
||||
How to Configure Rsyslog Server in CentOS 8 / RHEL 8
|
||||
======
|
||||
|
||||
**Rsyslog** is a free and open source logging utility that exists by default on **CentOS** 8 and **RHEL** 8 systems. It provides an easy and effective way of **centralizing logs** from client nodes to a single central server. Centralizing logs is beneficial in two ways. First, it simplifies viewing logs: the systems administrator can view all the logs of remote servers from a central point without logging into every client system, which is a great help when several servers need to be monitored. Second, in the event that a remote client suffers a crash, you need not worry about losing its logs, because they will all be saved on the **central rsyslog server**. Rsyslog has replaced syslog, which supported only the **UDP** protocol. It extends the basic syslog protocol with superior features such as support for both **UDP** and **TCP** protocols in transporting logs, augmented filtering abilities, and flexible configuration options. That said, let's explore how to configure the Rsyslog server on CentOS 8 / RHEL 8 systems.
|
||||
|
||||
[![configure-rsyslog-centos8-rhel8][1]][2]
|
||||
|
||||
### Prerequisites
|
||||
|
||||
We are going to have the following lab setup to test the centralized logging process:
|
||||
|
||||
* **Rsyslog server** CentOS 8 Minimal IP address: 10.128.0.47
|
||||
* **Client system** RHEL 8 Minimal IP address: 10.128.0.48
|
||||
|
||||
|
||||
|
||||
From the setup above, we will demonstrate how you can set up the Rsyslog server and later configure the client system to ship logs to the Rsyslog server for monitoring.
|
||||
|
||||
Let’s get started!
|
||||
|
||||
### Configuring the Rsyslog Server on CentOS 8
|
||||
|
||||
By default, Rsyslog comes installed on CentOS 8 / RHEL 8 servers. To verify the status of Rsyslog, log in via SSH and issue the command:
|
||||
|
||||
```
|
||||
$ systemctl status rsyslog
|
||||
```
|
||||
|
||||
Sample Output
|
||||
|
||||
![rsyslog-service-status-centos8][1]
|
||||
|
||||
If rsyslog is not present for whatever reason, you can install it using the command:
|
||||
|
||||
```
|
||||
$ sudo yum install rsyslog
|
||||
```
|
||||
|
||||
Next, you need to modify a few settings in the Rsyslog configuration file. Open the configuration file.
|
||||
|
||||
```
|
||||
$ sudo vim /etc/rsyslog.conf
|
||||
```
|
||||
|
||||
Scroll down and uncomment the lines shown below to allow reception of logs via the UDP protocol:
|
||||
|
||||
```
|
||||
module(load="imudp") # needs to be done just once
|
||||
input(type="imudp" port="514")
|
||||
```
|
||||
|
||||
![rsyslog-conf-centos8-rhel8][1]
|
||||
|
||||
Similarly, if you prefer to enable TCP rsyslog reception, uncomment these lines:
|
||||
|
||||
```
|
||||
module(load="imtcp") # needs to be done just once
|
||||
input(type="imtcp" port="514")
|
||||
```
|
||||
|
||||
![rsyslog-conf-tcp-centos8-rhel8][1]
|
||||
|
||||
Save and exit the configuration file.
|
||||
|
||||
To receive logs from the client system, we need to open Rsyslog's default port, 514, on the firewall for the protocols enabled above. To achieve this, run:
|
||||
|
||||
```
|
||||
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
$ sudo firewall-cmd --add-port=514/udp --zone=public --permanent
|
||||
```
|
||||
|
||||
Next, reload the firewall to save the changes
|
||||
|
||||
```
|
||||
$ sudo firewall-cmd --reload
|
||||
```
|
||||
|
||||
Sample Output
|
||||
|
||||
![firewall-ports-rsyslog-centos8][1]
|
||||
|
||||
Next, restart Rsyslog server
|
||||
|
||||
```
|
||||
$ sudo systemctl restart rsyslog
|
||||
```
|
||||
|
||||
To enable Rsyslog at boot, run the following command:
|
||||
|
||||
```
|
||||
$ sudo systemctl enable rsyslog
|
||||
```
|
||||
|
||||
To confirm that the Rsyslog server is listening on port 514, use the netstat command as follows:
|
||||
|
||||
```
|
||||
$ sudo netstat -pnltu
|
||||
```
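If **netstat** is not available (minimal installations may lack the net-tools package), the **ss** command accepts the same flags:

```
$ sudo ss -pnltu
```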
|
||||
|
||||
Sample Output
|
||||
|
||||
![netstat-rsyslog-port-centos8][1]
|
||||
|
||||
Perfect! We have successfully configured our Rsyslog server to receive logs from the client system.
|
||||
|
||||
To view log messages in real-time run the command:
|
||||
|
||||
```
|
||||
$ tail -f /var/log/messages
|
||||
```
|
||||
|
||||
Let’s now configure the client system.
|
||||
|
||||
### Configuring the client system on RHEL 8
|
||||
|
||||
As on the Rsyslog server, log in and check whether the rsyslog daemon is running by issuing the command:
|
||||
|
||||
```
|
||||
$ sudo systemctl status rsyslog
|
||||
```
|
||||
|
||||
Sample Output
|
||||
|
||||
![client-rsyslog-service-rhel8][1]
|
||||
|
||||
Next, proceed to open the rsyslog configuration file
|
||||
|
||||
```
|
||||
$ sudo vim /etc/rsyslog.conf
|
||||
```
|
||||
|
||||
At the end of the file, append one of the following lines, depending on the protocol you chose: a single **@** sends logs over UDP, while **@@** sends them over TCP.
|
||||
|
||||
```
|
||||
*.* @10.128.0.47:514 # Use @ for UDP protocol
|
||||
*.* @@10.128.0.47:514 # Use @@ for TCP protocol
|
||||
```
|
||||
|
||||
Save and exit the configuration file. Just like on the Rsyslog server, open port 514, the default Rsyslog port, on the firewall:
|
||||
|
||||
```
|
||||
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
|
||||
```
|
||||
|
||||
Next, reload the firewall to save the changes
|
||||
|
||||
```
|
||||
$ sudo firewall-cmd --reload
|
||||
```
|
||||
|
||||
Next, restart the rsyslog service
|
||||
|
||||
```
|
||||
$ sudo systemctl restart rsyslog
|
||||
```
|
||||
|
||||
To enable Rsyslog at boot, run the following command:
|
||||
|
||||
```
|
||||
$ sudo systemctl enable rsyslog
|
||||
```
|
||||
|
||||
### Testing the logging operation
|
||||
|
||||
Having successfully set up and configured the Rsyslog server and client system, it's time to verify that your configuration is working as intended.
|
||||
|
||||
On the client system issue the command:
|
||||
|
||||
```
|
||||
# logger "Hello guys! This is our first log"
|
||||
```
|
||||
|
||||
Now head over to the Rsyslog server and run the command below to watch the log messages in real time:
|
||||
|
||||
```
|
||||
# tail -f /var/log/messages
|
||||
```
|
||||
|
||||
The message logged on the client system should appear in the Rsyslog server's log output, confirming that the Rsyslog server is now receiving logs from the client system.
|
||||
|
||||
![centralize-logs-rsyslogs-centos8][1]
|
||||
|
||||
And that's it, guys! We have successfully set up the Rsyslog server to receive log messages from a client system.
|
||||
|
||||
Read Also: **[How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8][3]**
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/
|
||||
|
||||
作者:[James Kiarie][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/james/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/configure-rsyslog-centos8-rhel8.jpg
|
||||
[3]: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/
|
@ -0,0 +1,516 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to use Protobuf for data interchange)
|
||||
[#]: via: (https://opensource.com/article/19/10/protobuf-data-interchange)
|
||||
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
|
||||
|
||||
How to use Protobuf for data interchange
|
||||
======
|
||||
Protobuf encoding increases efficiency when exchanging data between
|
||||
applications written in different languages and running on different
|
||||
platforms.
|
||||
![metrics and data shown on a computer screen][1]
|
||||
|
||||
Protocol buffers ([Protobufs][2]), like XML and JSON, allow applications, which may be written in different languages and running on different platforms, to exchange data. For example, a sending application written in Go could encode a Go-specific sales order in Protobuf, which a receiver written in Java then could decode to get a Java-specific representation of the received order. Here is a sketch of the architecture over a network connection:
|
||||
|
||||
|
||||
```
|
||||
`Go sales order--->Pbuf-encode--->network--->Pbuf-decode--->Java sales order`
|
||||
```
|
||||
|
||||
Protobuf encoding, in contrast to its XML and JSON counterparts, is binary rather than text, which can complicate debugging. However, as the code examples in this article confirm, the Protobuf encoding is significantly more efficient in size than either XML or JSON encoding.
|
||||
|
||||
Protobuf is efficient in another way. At the implementation level, Protobuf and other encoding systems serialize and deserialize structured data. Serialization transforms a language-specific data structure into a bytestream, and deserialization is the inverse operation that transforms a bytestream back into a language-specific data structure. Serialization and deserialization may become the bottleneck in data interchange because these operations are CPU-intensive. Efficient serialization and deserialization is another Protobuf design goal.
|
||||
|
||||
Recent encoding technologies, such as Protobuf and FlatBuffers, derive from the [DCE/RPC][3] (Distributed Computing Environment/Remote Procedure Call) initiative of the early 1990s. Like DCE/RPC, Protobuf contributes to both the [IDL][4] (interface definition language) and the encoding layer in data interchange.
|
||||
|
||||
This article will look at these two layers then provide code examples in Go and Java to flesh out Protobuf details and show that Protobuf is easy to use.
|
||||
|
||||
### Protobuf as an IDL and encoding layer
|
||||
|
||||
DCE/RPC, like Protobuf, is designed to be language- and platform-neutral. The appropriate libraries and utilities allow any language and platform to play in the DCE/RPC arena. Furthermore, the DCE/RPC architecture is elegant. An IDL document is the contract between the remote procedure on the one side and callers on the other side. Protobuf, too, centers on an IDL document.
|
||||
|
||||
An IDL document is text and, in DCE/RPC, uses basic C syntax along with syntactic extensions for metadata (square brackets) and a few new keywords such as **interface**. Here is an example:
|
||||
|
||||
|
||||
```
|
||||
[uuid (2d6ead46-05e3-11ca-7dd1-426909beabcd), version(1.0)]
|
||||
interface echo {
|
||||
const long int ECHO_SIZE = 512;
|
||||
void echo(
|
||||
[in] handle_t h,
|
||||
[in, string] idl_char from_client[ ],
|
||||
[out, string] idl_char from_service[ECHO_SIZE]
|
||||
);
|
||||
}
|
||||
```
|
||||
|
||||
This IDL document declares a procedure named **echo**, which takes three arguments: the **[in]** arguments of type **handle_t** (implementation pointer) and **idl_char** (array of ASCII characters) are passed to the remote procedure, whereas the **[out]** argument (also a string) is passed back from the procedure. In this example, the **echo** procedure does not explicitly return a value (the **void** to the left of **echo**) but could do so. A return value, together with one or more **[out]** arguments, allows the remote procedure to return arbitrarily many values. The next section introduces a Protobuf IDL, which differs in syntax but likewise serves as a contract in data interchange.
|
||||
|
||||
The IDL document, in both DCE/RPC and Protobuf, is the input to utilities that create the infrastructure code for exchanging data:
|
||||
|
||||
|
||||
```
|
||||
`IDL document--->DCE/PRC or Protobuf utilities--->support code for data interchange`
|
||||
```
|
||||
|
||||
As relatively straightforward text, the IDL is likewise human-readable documentation about the specifics of the data interchange—in particular, the number of data items exchanged and the data type of each item.
|
||||
|
||||
Protobuf can be used in a modern RPC system such as [gRPC][5], but Protobuf on its own provides only the IDL layer and the encoding layer for messages passed from a sender to a receiver. Protobuf encoding, like the DCE/RPC original, is binary but more efficient.
|
||||
|
||||
At present, XML and JSON encodings still dominate in data interchange through technologies such as web services, which make use of in-place infrastructure such as web servers, transport protocols (e.g., TCP, HTTP), and standard libraries and utilities for processing XML and JSON documents. Moreover, database systems of various flavors can store XML and JSON documents, and even legacy relational systems readily generate XML encodings of query results. Every general-purpose programming language now has libraries that support XML and JSON. What, then, recommends a return to a _binary_ encoding system such as Protobuf?
|
||||
|
||||
Consider the negative decimal value **-128**. In the 2's complement binary representation, which dominates across systems and languages, this value can be stored in a single 8-bit byte: 10000000. The text encoding of this integer value in XML or JSON requires multiple bytes. For example, UTF-8 encoding requires four bytes for the string, literally **-128**, which is one byte per character (in hex, the values are 0x2d, 0x31, 0x32, and 0x38). XML and JSON also add markup characters, such as angle brackets and braces, to the mix. Details about Protobuf encoding are forthcoming, but the point of interest now is a general one: Text encodings tend to be significantly less compact than binary ones.
|
||||
|
||||
### A code example in Go using Protobuf
|
||||
|
||||
My code examples focus on Protobuf rather than RPC. Here is an overview of the first example:
|
||||
|
||||
* The IDL file named _dataitem.proto_ defines a Protobuf **message** with six fields of different types: integer values with different ranges, floating-point values of a fixed size, and strings of two different lengths.
|
||||
* The Protobuf compiler uses the IDL file to generate a Go-specific version (and, later, a Java-specific version) of the Protobuf **message** together with supporting functions.
|
||||
* A Go app populates the native Go data structure with randomly generated values and then serializes the result to a local file. For comparison, XML and JSON encodings also are serialized to local files.
|
||||
* As a test, the Go application reconstructs an instance of its native data structure by deserializing the contents of the Protobuf file.
|
||||
* As a language-neutrality test, the Java application also deserializes the contents of the Protobuf file to get an instance of a native data structure.
|
||||
|
||||
|
||||
|
||||
This IDL file, two Go source files, and one Java source file are available as a ZIP file on [my website][6].
|
||||
|
||||
The all-important Protobuf IDL document is shown below. The document is stored in the file _dataitem.proto_, with the customary _.proto_ extension.
|
||||
|
||||
#### Example 1. Protobuf IDL document
|
||||
|
||||
|
||||
```
|
||||
syntax = "proto3";
|
||||
|
||||
package main;
|
||||
|
||||
message DataItem {
|
||||
int64 oddA = 1;
|
||||
int64 evenA = 2;
|
||||
int32 oddB = 3;
|
||||
int32 evenB = 4;
|
||||
float small = 5;
|
||||
float big = 6;
|
||||
string short = 7;
|
||||
string long = 8;
|
||||
}
|
||||
```
|
||||
|
||||
The IDL uses the current proto3 rather than the earlier proto2 syntax. The package name (in this case, **main**) is optional but customary; it is used to avoid name conflicts. The structured **message** contains eight fields, each of which has a Protobuf data type (e.g., **int64**, **string**), a name (e.g., **oddA**, **short**), and a numeric tag (aka key) after the equals sign **=**. The tags, which are 1 through 8 in this example, are unique integer identifiers that determine the order in which the fields are serialized.
|
||||
|
||||
Protobuf messages can be nested to arbitrary levels, and one message can be the field type in the other. Here's an example that uses the **DataItem** message as a field type:
|
||||
|
||||
|
||||
```
|
||||
message DataItems {
|
||||
repeated DataItem item = 1;
|
||||
}
|
||||
```
|
||||
|
||||
A single **DataItems** message consists of repeated (zero or more) **DataItem** messages.
|
||||
|
||||
Protobuf also supports enumerated types for clarity:
|
||||
|
||||
|
||||
```
|
||||
enum PartnershipStatus {
reserved "FREE", "CONSTRAINED", "OTHER";
UNSPECIFIED = 0;
}
|
||||
```
|
||||
|
||||
The **reserved** qualifier ensures that these three symbolic names cannot be reused in a later revision of the enum; numeric values can be reserved in the same way. (Note that a proto3 enum must still define a member equal to zero, such as the **UNSPECIFIED** member above.)
|
||||
|
||||
To generate a language-specific version of one or more declared Protobuf **message** structures, the IDL file containing these is passed to the _protoc_ compiler (available in the [Protobuf GitHub repository][7]). For the Go code, the supporting Protobuf library can be installed in the usual way (with **%** as the command-line prompt):
|
||||
|
||||
|
||||
```
|
||||
`% go get github.com/golang/protobuf/proto`
|
||||
```
|
||||
|
||||
The command to compile the Protobuf IDL file _dataitem.proto_ into Go source code is:
|
||||
|
||||
|
||||
```
|
||||
`% protoc --go_out=. dataitem.proto`
|
||||
```
|
||||
|
||||
The flag **--go_out** directs the compiler to generate Go source code; there are similar flags for other languages. The result, in this case, is a file named _dataitem.pb.go_, which is small enough that the essentials can be copied into a Go application. Here are the essentials from the generated code:
|
||||
|
||||
|
||||
```
|
||||
var _ = proto.Marshal
|
||||
|
||||
type DataItem struct {
|
||||
OddA int64 `protobuf:"varint,1,opt,name=oddA" json:"oddA,omitempty"`
|
||||
EvenA int64 `protobuf:"varint,2,opt,name=evenA" json:"evenA,omitempty"`
|
||||
OddB int32 `protobuf:"varint,3,opt,name=oddB" json:"oddB,omitempty"`
|
||||
EvenB int32 `protobuf:"varint,4,opt,name=evenB" json:"evenB,omitempty"`
|
||||
Small float32 `protobuf:"fixed32,5,opt,name=small" json:"small,omitempty"`
|
||||
Big float32 `protobuf:"fixed32,6,opt,name=big" json:"big,omitempty"`
|
||||
Short string `protobuf:"bytes,7,opt,name=short" json:"short,omitempty"`
|
||||
Long string `protobuf:"bytes,8,opt,name=long" json:"long,omitempty"`
|
||||
}
|
||||
|
||||
func (m *DataItem) Reset() { *m = DataItem{} }
|
||||
func (m *DataItem) String() string { return proto.CompactTextString(m) }
|
||||
func (*DataItem) ProtoMessage() {}
|
||||
func init() {}
|
||||
```
|
||||
|
||||
The compiler-generated code has a Go structure **DataItem**, which exports the Go fields—the names are now capitalized—that match the names declared in the Protobuf IDL. The structure fields have standard Go data types: **int32**, **int64**, **float32**, and **string**. At the end of each field line, as a string, is metadata that describes the Protobuf types, gives the numeric tags from the Protobuf IDL document, and provides information about JSON, which is discussed later.
|
||||
|
||||
There are also functions; the most important is **proto.Marshal** for serializing an instance of the **DataItem** structure into Protobuf format. The helper functions include **Reset**, which clears a **DataItem** structure, and **String**, which produces a one-line string representation of a **DataItem**.
|
||||
|
||||
The metadata that describes Protobuf encoding deserves a closer look before analyzing the Go program in more detail.
|
||||
|
||||
### Protobuf encoding
|
||||
|
||||
A Protobuf message is structured as a collection of key/value pairs, with the numeric tag as the key and the corresponding field as the value. The field names, such as **oddA** and **small**, are for human readability, but the _protoc_ compiler does use the field names in generating language-specific counterparts. For example, the **oddA** and **small** names in the Protobuf IDL become the fields **OddA** and **Small**, respectively, in the Go structure.
|
||||
|
||||
The keys and their values both get encoded, but with an important difference: some numeric values have a fixed-size encoding of 32 or 64 bits, whereas others (including the **message** tags) are _varint_ encoded, meaning the number of bytes depends on the integer's value. A varint carries seven payload bits per byte, so plain values 0 through 127 fit in one byte. A field key also packs a three-bit wire type alongside the tag, which is why tags 1 through 15 encode in a single byte, whereas tags 16 through 2047 require two bytes. The _varint_ encoding, similar in spirit (but not in detail) to UTF-8 encoding, favors small integer values over large ones. (For a detailed analysis, see the Protobuf [encoding guide][8].) The upshot is that a Protobuf **message** should have small integer values in fields, if possible, and as few keys as possible, but one key per field is unavoidable.
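To make those byte counts concrete, here is a minimal sketch in Go. It uses the standard library's **encoding/binary** package, which implements the same base-128 varint scheme for unsigned integers; this demonstrates plain varint values only, not the tag-plus-wire-type keys:

```
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, binary.MaxVarintLen64)
	// Small values fit in one byte; larger ones spill into more bytes.
	for _, v := range []uint64{1, 15, 127, 128, 2047, 16384} {
		n := binary.PutUvarint(buf, v) // returns the number of bytes written
		fmt.Printf("%5d encodes to %d byte(s): % x\n", v, n, buf[:n])
	}
}
```

Running it shows one byte apiece for 1, 15, and 127, two bytes for 128 and 2047, and three bytes for 16384.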
|
||||
|
||||
Table 1 below gives the gist of Protobuf encoding:
|
||||
|
||||
**Table 1. Protobuf data types**
|
||||
|
||||
Encoding | Sample types | Length
|
||||
---|---|---
|
||||
varint | int32, uint32, int64 | Variable length
|
||||
fixed | fixed32, float, double | Fixed 32-bit or 64-bit length
|
||||
byte sequence | string, bytes | Sequence length
|
||||
|
||||
Integer types that are not explicitly **fixed** are _varint_ encoded; hence, in a _varint_ type such as **uint32** (**u** for unsigned), the number 32 describes the integer's range (in this case, 0 to 2^32 - 1) rather than its bit size, which differs depending on the value. For fixed types such as **fixed32** or **double**, by contrast, the Protobuf encoding requires 32 and 64 bits, respectively. Strings in Protobuf are byte sequences; hence, the size of the field encoding is the length of the byte sequence.
|
||||
|
||||
Another efficiency deserves mention. Recall the earlier example in which a **DataItems** message consists of repeated **DataItem** instances:
|
||||
|
||||
|
||||
```
|
||||
message DataItems {
|
||||
repeated DataItem item = 1;
|
||||
}
|
||||
```
|
||||
|
||||
The **repeated** qualifier means that the collection of **DataItem** instances shares a single field tag, in this case, 1. (Strictly speaking, proto3 _packs_ repeated scalar numeric fields behind one key, while repeated message fields such as this one repeat the key per element; either way, only one tag number is consumed.) A **DataItems** message with a repeated **DataItem** field is thus more economical than a message with multiple, separately named **DataItem** fields, each of which would require a tag number of its own.
|
||||
|
||||
With this background in mind, let's return to the Go program.
|
||||
|
||||
### The dataItem program in detail
|
||||
|
||||
The _dataItem_ program creates a **DataItem** instance and populates the fields with randomly generated values of the appropriate types. Go has a **rand** package with functions for generating pseudo-random integer and floating-point values, and my **randString** function generates pseudo-random strings of specified lengths from a character set. The design goal is to have a **DataItem** instance with field values of different types and bit sizes. For example, the **OddA** and **EvenA** values are 64-bit non-negative integer values of odd and even parity, respectively; but the **OddB** and **EvenB** variants are 32 bits in size and hold small integer values between 0 and 2047. The random floating-point values are 32 bits in size, and the strings are 16 (**Short**) and 32 (**Long**) characters in length. Here is the code segment that populates the **DataItem** structure with random values:
|
||||
|
||||
|
||||
```
|
||||
// variable-length integers
|
||||
n1 := rand.Int63() // bigger integer
|
||||
if (n1 & 1) == 0 { n1++ } // ensure it's odd
|
||||
...
|
||||
n3 := rand.Int31() % UpperBound // smaller integer
|
||||
if (n3 & 1) == 0 { n3++ } // ensure it's odd
|
||||
|
||||
// fixed-length floats
|
||||
...
|
||||
t1 := rand.Float32()
|
||||
t2 := rand.Float32()
|
||||
...
|
||||
// strings
|
||||
str1 := randString(StrShort)
|
||||
str2 := randString(StrLong)
|
||||
|
||||
// the message
|
||||
dataItem := &DataItem {
|
||||
OddA: n1,
|
||||
EvenA: n2,
|
||||
OddB: n3,
|
||||
EvenB: n4,
|
||||
Big: f1,
|
||||
Small: f2,
|
||||
Short: str1,
|
||||
Long: str2,
|
||||
}
|
||||
```
|
||||
|
||||
Once created and populated with values, the **DataItem** instance is encoded in XML, JSON, and Protobuf, with each encoding written to a local file:
|
||||
|
||||
|
||||
```
|
||||
func encodeAndserialize(dataItem *DataItem) {
|
||||
bytes, _ := xml.MarshalIndent(dataItem, "", " ") // Xml to dataitem.xml
|
||||
ioutil.WriteFile(XmlFile, bytes, 0644) // 0644 is file access permissions
|
||||
|
||||
bytes, _ = json.MarshalIndent(dataItem, "", " ") // Json to dataitem.json
|
||||
ioutil.WriteFile(JsonFile, bytes, 0644)
|
||||
|
||||
bytes, _ = proto.Marshal(dataItem) // Protobuf to dataitem.pbuf
|
||||
ioutil.WriteFile(PbufFile, bytes, 0644)
|
||||
}
|
||||
```
|
||||
|
||||
The three serializing functions use the term _marshal_, which is roughly synonymous with _serialize_. As the code indicates, each of the three **Marshal** functions returns a byte slice, which then is written to a file. (Possible errors are ignored for simplicity.) On a sample run, the file sizes were:
|
||||
|
||||
|
||||
```
|
||||
dataitem.xml: 262 bytes
|
||||
dataitem.json: 212 bytes
|
||||
dataitem.pbuf: 88 bytes
|
||||
```
|
||||
|
||||
The Protobuf encoding is significantly smaller than the other two. The XML and JSON serializations could be reduced slightly in size by eliminating indentation characters, in this case, blanks and newlines.
|
||||
|
||||
Below is the _dataitem.json_ file resulting eventually from the **json.MarshalIndent** call, with added comments starting with **##**:
|
||||
|
||||
|
||||
```
|
||||
{
|
||||
"oddA": 4744002665212642479, ## 64-bit >= 0
|
||||
"evenA": 2395006495604861128, ## ditto
|
||||
"oddB": 57, ## 32-bit >= 0 but < 2048
|
||||
"evenB": 468, ## ditto
|
||||
"small": 0.7562016, ## 32-bit floating-point
|
||||
"big": 0.85202795, ## ditto
|
||||
"short": "ClH1oDaTtoX$HBN5", ## 16 random chars
|
||||
"long": "xId0rD3Cri%3Wt%^QjcFLJgyXBu9^DZI" ## 32 random chars
|
||||
}
|
||||
```
|
||||
|
||||
Although the serialized data goes into local files, the same approach would be used to write the data to the output stream of a network connection.
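As a minimal sketch of that approach (the address is hypothetical, and the **DataItem** type is the one generated earlier), the same marshaled bytes can be written to a TCP connection; this assumes the standard **net** package is imported alongside the Protobuf library:

```
func sendDataItem(dataItem *DataItem) error {
	conn, err := net.Dial("tcp", "localhost:8080") // hypothetical receiver address
	if err != nil {
		return err
	}
	defer conn.Close()

	bytes, err := proto.Marshal(dataItem) // same serialization as before
	if err != nil {
		return err
	}
	_, err = conn.Write(bytes) // the receiver deserializes with proto.Unmarshal
	return err
}
```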
|
||||
|
||||
### Testing serialization/deserialization
|
||||
|
||||
The Go program next runs an elementary test by deserializing the bytes, which were written earlier to the _dataitem.pbuf_ file, into a **DataItem** instance. Here is the code segment, with the error-checking parts removed:
|
||||
|
||||
|
||||
```
|
||||
filebytes, err := ioutil.ReadFile(PbufFile) // get the bytes from the file
|
||||
...
|
||||
testItem.Reset() // clear the DataItem structure
|
||||
err = proto.Unmarshal(filebytes, testItem) // deserialize into a DataItem instance
|
||||
```
|
||||
|
||||
The **proto.Unmarshal** function for deserializing Protobuf is the inverse of the **proto.Marshal** function. The original **DataItem** and the deserialized clone are printed to confirm an exact match:
|
||||
|
||||
|
||||
```
|
||||
Original:
|
||||
2041519981506242154 3041486079683013705 1192 1879
|
||||
0.572123 0.326855
|
||||
boPb#T0O8Xd&Ps5EnSZqDg4Qztvo7IIs 9vH66AiGSQgCDxk&
|
||||
|
||||
Deserialized:
|
||||
2041519981506242154 3041486079683013705 1192 1879
|
||||
0.572123 0.326855
|
||||
boPb#T0O8Xd&Ps5EnSZqDg4Qztvo7IIs 9vH66AiGSQgCDxk&
|
||||
```
|
||||
|
||||
### A Protobuf client in Java
|
||||
|
||||
The example in Java is to confirm Protobuf's language neutrality. The original IDL file could be used to generate the Java support code, which involves nested classes. To suppress warnings, however, a slight addition can be made. Here is the revision, which specifies a **DataMsg** as the name for the outer class, with the inner class automatically named **DataItem** after the Protobuf message:
|
||||
|
||||
|
||||
```
|
||||
syntax = "proto3";
|
||||
|
||||
package main;
|
||||
|
||||
option java_outer_classname = "DataMsg";
|
||||
|
||||
message DataItem {
|
||||
...
|
||||
```
|
||||
|
||||
With this change in place, the _protoc_ compilation is the same as before, except the desired output is now Java rather than Go:
|
||||
|
||||
|
||||
```
|
||||
`% protoc --java_out=. dataitem.proto`
|
||||
```
|
||||
|
||||
The resulting source file (in a subdirectory named _main_) is _DataMsg.java_ and about 1,120 lines in length: Java is not terse. Compiling and then running the Java code requires a JAR file with the library support for Protobuf. This file is available in the [Maven repository][9].
|
||||
|
||||
With the pieces in place, my test code is relatively short (and available in the ZIP file as _Main.java_):
|
||||
|
||||
|
||||
```
|
||||
package main;
|
||||
import java.io.FileInputStream;
|
||||
|
||||
public class Main {
|
||||
public static void main(String[] args) {
|
||||
String path = "dataitem.pbuf"; // from the Go program's serialization
|
||||
try {
|
||||
DataMsg.DataItem deserial =
|
||||
DataMsg.DataItem.newBuilder().mergeFrom(new FileInputStream(path)).build();
|
||||
|
||||
System.out.println(deserial.getOddA()); // 64-bit odd
|
||||
System.out.println(deserial.getLong()); // 32-character string
|
||||
}
|
||||
catch(Exception e) { System.err.println(e); }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Production-grade testing would be far more thorough, of course, but even this preliminary test confirms the language-neutrality of Protobuf: the _dataitem.pbuf_ file results from the Go program's serialization of a Go **DataItem**, and the bytes in this file are deserialized to produce a **DataItem** instance in Java. The output from the Java test is the same as that from the Go test.
|
||||
|
||||
### Wrapping up with the numPairs program
|
||||
|
||||
Let's end with an example that highlights Protobuf efficiency but also underscores the cost involved in any encoding technology. Consider this Protobuf IDL file:
|
||||
|
||||
|
||||
```
|
||||
syntax = "proto3";
|
||||
package main;
|
||||
|
||||
message NumPairs {
|
||||
repeated NumPair pair = 1;
|
||||
}
|
||||
|
||||
message NumPair {
|
||||
int32 odd = 1;
|
||||
int32 even = 2;
|
||||
}
|
||||
```
|
||||
|
||||
A **NumPair** message consists of two **int32** values together with an integer tag for each field. A **NumPairs** message is a sequence of embedded **NumPair** messages.
|
||||
|
||||
The _numPairs_ program in Go (below) creates 2 million **NumPair** instances, with each appended to the **NumPairs** message. This message can be serialized and deserialized in the usual way.
|
||||
|
||||
#### Example 2. The numPairs program
|
||||
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import (
|
||||
"math/rand"
|
||||
"time"
|
||||
"encoding/xml"
|
||||
"encoding/json"
|
||||
"io/ioutil"
|
||||
"github.com/golang/protobuf/proto"
|
||||
)
|
||||
|
||||
// protoc-generated code: start
|
||||
var _ = proto.Marshal
|
||||
type NumPairs struct {
|
||||
Pair []*NumPair `protobuf:"bytes,1,rep,name=pair" json:"pair,omitempty"`
|
||||
}
|
||||
|
||||
func (m *NumPairs) Reset() { *m = NumPairs{} }
|
||||
func (m *NumPairs) String() string { return proto.CompactTextString(m) }
|
||||
func (*NumPairs) ProtoMessage() {}
|
||||
func (m *NumPairs) GetPair() []*NumPair {
|
||||
if m != nil { return m.Pair }
|
||||
return nil
|
||||
}
|
||||
|
||||
type NumPair struct {
|
||||
Odd int32 `protobuf:"varint,1,opt,name=odd" json:"odd,omitempty"`
|
||||
Even int32 `protobuf:"varint,2,opt,name=even" json:"even,omitempty"`
|
||||
}
|
||||
|
||||
func (m *NumPair) Reset() { *m = NumPair{} }
|
||||
func (m *NumPair) String() string { return proto.CompactTextString(m) }
|
||||
func (*NumPair) ProtoMessage() {}
|
||||
func init() {}
|
||||
// protoc-generated code: finish
|
||||
|
||||
var numPairsStruct NumPairs
|
||||
var numPairs = &numPairsStruct
|
||||
|
||||
func encodeAndserialize() {
|
||||
// XML encoding
|
||||
filename := "./pairs.xml"
|
||||
bytes, _ := xml.MarshalIndent(numPairs, "", " ")
|
||||
ioutil.WriteFile(filename, bytes, 0644)
|
||||
|
||||
// JSON encoding
|
||||
filename = "./pairs.json"
|
||||
bytes, _ = json.MarshalIndent(numPairs, "", " ")
|
||||
ioutil.WriteFile(filename, bytes, 0644)
|
||||
|
||||
// ProtoBuf encoding
|
||||
filename = "./pairs.pbuf"
|
||||
bytes, _ = proto.Marshal(numPairs)
|
||||
ioutil.WriteFile(filename, bytes, 0644)
|
||||
}
|
||||
|
||||
const HowMany = 200 * 100 * 100 // two million
|
||||
|
||||
func main() {
|
||||
rand.Seed(time.Now().UnixNano())
|
||||
|
||||
// uncomment the modulus operations to get the more efficient version
|
||||
for i := 0; i < HowMany; i++ {
|
||||
n1 := rand.Int31() // % 2047
|
||||
if (n1 & 1) == 0 { n1++ } // ensure it's odd
|
||||
n2 := rand.Int31() // % 2047
|
||||
if (n2 & 1) == 1 { n2++ } // ensure it's even
|
||||
|
||||
next := &NumPair {
|
||||
Odd: n1,
|
||||
Even: n2,
|
||||
}
|
||||
numPairs.Pair = append(numPairs.Pair, next)
|
||||
}
|
||||
encodeAndserialize()
|
||||
}
|
||||
```
|
||||
|
||||
The randomly generated odd and even values in each **NumPair** range from zero to 2 billion and change. In terms of raw rather than encoded data, the integers generated in the Go program add up to 16MB: two integers per **NumPair** for a total of 4 million integers in all, and each value is four bytes in size.
|
||||
|
||||
For comparison, the table below has entries for the XML, JSON, and Protobuf encodings of the 2 million **NumPair** instances in the sample **NumsPairs** message. The raw data is included, as well. Because the _numPairs_ program generates random values, output differs across sample runs but is close to the sizes shown in the table.
|
||||
|
||||
**Table 2. Encoding overhead for 16MB of integers**
|
||||
|
||||
Encoding | File | Byte size | Pbuf/other ratio
|
||||
---|---|---|---
|
||||
None | pairs.raw | 16MB | 169%
|
||||
Protobuf | pairs.pbuf | 27MB | —
|
||||
JSON | pairs.json | 100MB | 27%
|
||||
XML | pairs.xml | 126MB | 21%
|
||||
|
||||
As expected, Protobuf shines next to XML and JSON. The Protobuf encoding is about a quarter of the JSON one and about a fifth of the XML one. But the raw data make clear that Protobuf incurs the overhead of encoding: the serialized Protobuf message is 11MB larger than the raw data. Any encoding, including Protobuf, involves structuring the data, which unavoidably adds bytes.
|
||||
|
||||
Each of the serialized 2 million **NumPair** instances involves _four_ integer values: one apiece for the **Even** and **Odd** fields in the Go structure, and one tag for each field in the Protobuf encoding. As raw rather than encoded data, this would come to 16 bytes per instance, and there are 2 million instances in the sample **NumPairs** message. But the Protobuf tags, like the **int32** values in the **NumPair** fields, use _varint_ encoding and, therefore, vary in byte length; in particular, small integer values (which include the tags, in this case) require fewer than four bytes to encode.
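A rough back-of-the-envelope check, assuming uniformly random 31-bit values: a value near 2^30 needs about five varint bytes, each of the two field keys takes one byte, and each embedded **NumPair** element adds roughly two more bytes for its own key and length prefix. That comes to about 13 to 14 bytes per pair, or roughly 27MB for 2 million pairs, which matches Table 2.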
|
||||
|
||||
If the _numPairs_ program is revised so that the two **NumPair** fields hold values less than 2048, which have encodings of either one or two bytes, then the Protobuf encoding drops from 27MB to 16MB—the very size of the raw data. The table below summarizes the new encoding sizes from a sample run.
|
||||
|
||||
**Table 3. Encoding with 16MB of integers < 2048**
|
||||
|
||||
Encoding | File | Byte size | Pbuf/other ratio
|
||||
---|---|---|---
|
||||
None | pairs.raw | 16MB | 100%
|
||||
Protobuf | pairs.pbuf | 16MB | —
|
||||
JSON | pairs.json | 77MB | 21%
|
||||
XML | pairs.xml | 103MB | 15%
|
||||
|
||||
In summary, the modified _numPairs_ program, with field values less than 2048, reduces the four-byte size for each integer value in the raw data. But the Protobuf encoding still requires tags, which add bytes to the Protobuf message. Protobuf encoding does have a cost in message size, but this cost can be reduced by the _varint_ factor if relatively small integer values, whether in fields or keys, are being encoded.
|
||||
|
||||
For moderately sized messages consisting of structured data with mixed types—and relatively small integer values—Protobuf has a clear advantage over options such as XML and JSON. In other cases, the data may not be suited for Protobuf encoding. For example, if two applications need to share a huge set of text records or large integer values, then compression rather than encoding technology may be the way to go.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/protobuf-data-interchange

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://developers.google.com/protocol-buffers/
[3]: https://en.wikipedia.org/wiki/DCE/RPC
[4]: https://en.wikipedia.org/wiki/Interface_description_language
[5]: https://grpc.io/
[6]: http://condor.depaul.edu/mkalin
[7]: https://github.com/protocolbuffers/protobuf
[8]: https://developers.google.com/protocol-buffers/docs/encoding
[9]: https://mvnrepository.com/artifact/com.google.protobuf/protobuf-java
sources/tech/20191018 Perceiving Python programming paradigms.md
@ -0,0 +1,122 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Perceiving Python programming paradigms)
[#]: via: (https://opensource.com/article/19/10/python-programming-paradigms)
[#]: author: (Jigyasa Grover https://opensource.com/users/jigyasa-grover)

Perceiving Python programming paradigms
======
Python supports imperative, functional, procedural, and object-oriented programming; here are tips on choosing the right one for a specific use case.
![A python with a package.][1]

Early each year, TIOBE announces its Programming Language of The Year. When its latest annual [TIOBE index][2] report came out, I was not at all surprised to see [Python again winning the title][3], which was based on capturing the most search-engine ranking points (especially on Google, Bing, Yahoo, Wikipedia, Amazon, YouTube, and Baidu) in 2018.

![Python data from TIOBE Index][4]

Adding weight to TIOBE's findings, earlier this year nearly 90,000 developers took Stack Overflow's annual [Developer Survey][5], the largest and most comprehensive survey of people who code around the world. The main takeaway from this year's results was:

> "Python, the fastest-growing major programming language, has risen in the ranks of programming languages in our survey yet again, edging out Java this year and standing as the second most loved language (behind Rust)."

Ever since I started programming and exploring different languages, I have watched admiration for Python soar. Since 2003, it has consistently ranked among the top 10 most popular programming languages. As TIOBE's report stated:

> "It is the most frequently taught first language at universities nowadays, it is number one in the statistical domain, number one in AI programming, number one in scripting and number one in writing system tests. Besides this, Python is also leading in web programming and scientific computing (just to name some other domains). In summary, Python is everywhere."

There are several reasons for Python's rapid rise, bloom, and dominance in multiple domains, including web development, scientific computing, testing, data science, machine learning, and more: its readable and maintainable code; extensive support for third-party integrations and libraries; modular, dynamic, and portable structure; flexible programming; ease of learning and support; user-friendly data structures; productivity and speed; and, most important, community support. The diverse application of Python is a result of these combined features, which give it an edge over other languages.

But in my opinion, the comparative simplicity of its syntax and the staggering flexibility it offers developers coming from many other languages take the cake. Very few languages can match Python's ability to conform to a developer's coding style rather than forcing them to code in a particular way. Python lets more advanced developers use whatever style they feel is best suited to solving a particular problem.

Working with Python, you are like a snake charmer: you can take advantage of Python's promise of a non-conforming environment, coding in the style best suited to a particular situation while keeping the code readable, testable, and coherent.

## Python programming paradigms

Python supports four main [programming paradigms][6]: imperative, functional, procedural, and object-oriented. Whether you agree that all four are valid or even useful, Python strives to make them all available and workable. Before diving into which paradigm is most suitable for specific use cases, it is a good time for a quick review of them.

### Imperative programming paradigm

The [imperative programming paradigm][7] uses the imperative mood of natural language to express directions: it executes commands step by step, like a series of verbal orders. Following the "how-to-solve" approach, it makes direct changes to the state of the program, so it is also called the stateful programming model. The imperative paradigm lets you quickly write very simple yet elegant code, and it is super-handy for tasks that involve data manipulation. Owing to its comparatively slower, strictly sequential execution strategy, however, it is a poor fit for complex or parallel computations.

[![Linus Torvalds quote][8]][9]

Consider this example task, where the goal is to take a list of characters and concatenate them into a string. Done in an imperative programming style, it might look something like this:

```
>>> sample_characters = ['p','y','t','h','o','n']
>>> sample_string = ''
>>> sample_string
''
>>> sample_string = sample_string + sample_characters[0]
>>> sample_string
'p'
>>> sample_string = sample_string + sample_characters[1]
>>> sample_string
'py'
>>> sample_string = sample_string + sample_characters[2]
>>> sample_string
'pyt'
>>> sample_string = sample_string + sample_characters[3]
>>> sample_string
'pyth'
>>> sample_string = sample_string + sample_characters[4]
>>> sample_string
'pytho'
>>> sample_string = sample_string + sample_characters[5]
>>> sample_string
'python'
>>>
```

Here, the variable **sample_string** acts like the program's state, changing after each command in the series executes, and it can be inspected at any point to track the program's progress. The same result can be achieved with a **for** loop (also considered imperative programming), in a shorter version of the above code:

```
>>> sample_characters = ['p','y','t','h','o','n']
>>> sample_string = ''
>>> sample_string
''
>>> for c in sample_characters:
...     sample_string = sample_string + c
...     print(sample_string)
...
p
py
pyt
pyth
pytho
python
>>>
```

### Functional programming paradigm

The [functional programming paradigm][10] treats program computation as the evaluation of mathematical functions based on [lambda calculus][11], a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It follows the "what-to-solve" approach—that is, it expresses logic without describing its control flow—hence it is also classified as a declarative programming model.

The functional programming paradigm promotes stateless functions, but it's important to note that Python's take on functional programming deviates from a purely functional implementation: Python is said to be an _impure_ functional language because it remains possible to maintain state and create side effects if you are not careful. That said, functional programming is handy for parallel processing and is super-efficient for tasks requiring recursion and concurrent execution.

```
>>> sample_characters = ['p','y','t','h','o','n']
>>> import functools
>>> sample_string = functools.reduce(lambda s,c: s + c, sample_characters)
>>> sample_string
'python'
>>>
```

The code above shows the functional way of concatenating the same list of characters into a string. Since the computation happens in a single line, there is no explicit way to obtain the program's state with **sample_string** and track its progress. The functional implementation is fascinating in that it reduces the code to a single job-doing line, the only extras being the **functools** module and the **reduce** function. The three names—**functools**, **reduce**, and **lambda**—are defined as follows:

  * **functools** is a module for higher-order functions: it provides functions that act on or return other functions. It encourages writing reusable code, since it makes it easy to replicate an existing function with some arguments already passed in and to create the new version of the function in a well-documented manner (see the **functools.partial** sketch after this list).
  * **reduce** is a function that applies another function of two arguments cumulatively to the items of a sequence, from left to right, reducing the sequence to a single value. For example:

```
>>> sample_list = [1,2,3,4,5]
>>> import functools
>>> sum = functools.reduce(lambda x,y: x + y, sample_list)
>>> sum
15
>>> ((((1+2)+3)+4)+5)
15
>>>
```

  * **lambda functions** are small, anonymized (i.e., nameless) functions that can take any number of arguments but spit out only one value. They are useful as arguments to higher-order functions such as **reduce**.
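
To make the "arguments already passed" idea concrete, here is a minimal sketch using **functools.partial**; the **power** function and its names are my own illustration, not from the article:

```
import functools

def power(base, exponent):
    """Raise base to the given exponent."""
    return base ** exponent

# Derive new, well-named functions from power() with exponent pre-filled.
square = functools.partial(power, exponent=2)
cube = functools.partial(power, exponent=3)

print(square(5))  # 25
print(cube(5))    # 125
```

Each derived function documents its intent in its name while reusing the original implementation.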
@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How DevOps professionals can become security champions)
[#]: via: (https://opensource.com/article/19/9/devops-security-champions)
[#]: author: (Jessica Repka https://opensource.com/users/jrepka)

How DevOps professionals can become security champions
======
Breaking down the silos and becoming a champion for security will help you, your career, and your organization.
![A lock on the side of a building][1]

Security is a misunderstood part of DevOps: some people consider it outside DevOps' scope, while others find it important (and overlooked) enough to recommend moving to [DevSecOps][2] instead. Whichever side you agree with, it is plain that cybersecurity affects every one of us.

Each year, the [statistics on hacking][3] grow more alarming. For example, a hacker attack occurs every 39 seconds, and it can lead to the theft of records, identities, and the proprietary projects you are writing for your company. It can take your security team months (and possibly forever) to discover who was behind a hack, what the goal was, where the attacker came from, and when the break-in happened.

What are operations professionals to do about these thorny problems? I say it is time for us to become security champions and make ourselves part of the solution.

### Silos and turf wars

Over the years of fighting side by side with my local IT security (ITSEC) teams, I have noticed a great many things. A big one is how common the tension between the security team and DevOps is. That tension almost always stems from the security team's efforts to protect systems and guard against vulnerabilities (for example, by setting access controls or disabling certain things), efforts that interrupt DevOps' work and block the rapid deployment of applications.

You have seen it, I have seen it, and everyone you meet in the field has at least one story about it. A small pile of grudges eventually burns down the bridge of trust: either it takes a long while to rebuild, or the two groups start a small turf war, and that outcome makes DevOps even harder to achieve.

### A new perspective

To break down these silos and end the turf wars, I picked at least one person on each security team to talk with, so I could learn the ins and outs of our organization's daily security operations. I started doing this out of curiosity, but I have kept at it because it always brings me valuable new perspectives. For example, I learned that for every deployment stopped over failed security, the security team was frantically trying to patch ten other problems it could see. Their brashness and quickness to react come from having only limited time to fix something before it becomes a big problem.

Consider the vast amount of knowledge it takes to find, identify, and undo what has been done; or to figure out what the DevOps team is doing (without any background information) and then replicate and test it. And all of this usually has to be done by a badly understaffed security team.

This is the daily life of your security team, and your DevOps team does not see it. ITSEC's everyday work means overtime and overwork to make sure that the company, its teams, and everyone working on those teams can operate securely.

### Ways to become a security champion

Here is how you can help your security team by becoming its champion. In short, it means that for everything you do, you must look carefully and thoroughly at every way someone else could log in to it and what they could gain from it.

Helping your security team is helping yourself. Add tools to your workflow that combine the work you know needs doing with the work they know needs doing. Start small, for example, by reading up on common vulnerabilities and exposures (CVEs) and adding a scanning module to your [CI/CD][4] pipeline. For everything you write, there is an open source scanning tool, and adding small open source tools like the ones listed below can make a project better in the long run:

**Container scanning tools:**

  * [Anchore Engine][5]
  * [Clair][6]
  * [Vuls][7]
  * [OpenSCAP][8]

**Code scanning tools:**

  * [OWASP SonarQube][9]
  * [Find Security Bugs][10]
  * [Google Hacking Diggity Project][11]

**Kubernetes security tools:**

  * [Project Calico][12]
  * [Kube-hunter][13]
  * [NeuVector][14]

### Keep your DevOps attitude

If your role has anything to do with DevOps, then learning new technologies and using them to build new things is part of the job. Security is no different. Here is my list of ways to stay current on security in the DevOps world:

  * Read one article each week about security in your line of work.
  * Check the [CVE][15] website weekly to see what new vulnerabilities have been published.
  * Try a hackathon. Some companies run one every month; if that is not enough and you want to learn more, visit the [Beginner Hack 1.0][16] site.
  * At least once a year, attend a security conference with members of your security team to see things from their perspective.

### Becoming a champion is about becoming better

Here are several reasons you should become your security champion. The first is to grow your knowledge and advance your career. The second is to help other teams, foster new relationships, and break down the silos that harm your organization. Building relationships across your organization has many benefits, including setting an example of how teams should communicate and encouraging people to work together. You will also be sharing knowledge throughout the organization and giving everyone a fresh opportunity to collaborate better, internally, on security.

Overall, becoming a security champion will make you a champion for your whole organization.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/devops-security-champions

作者:[Jessica Repka][a]
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/19/1/what-devsecops
[3]: https://hostingtribunal.com/blog/hacking-statistics/
[4]: https://opensource.com/article/18/8/what-cicd
[5]: https://github.com/anchore/anchore-engine
[6]: https://github.com/coreos/clair
[7]: https://vuls.io/
[8]: https://www.open-scap.org/
[9]: https://github.com/OWASP/sonarqube
[10]: https://find-sec-bugs.github.io/
[11]: https://resources.bishopfox.com/resources/tools/google-hacking-diggity/
[12]: https://www.projectcalico.org/
[13]: https://github.com/aquasecurity/kube-hunter
[14]: https://github.com/neuvector/neuvector-helm
[15]: https://cve.mitre.org/
[16]: https://www.hackerearth.com/challenges/hackathon/beginner-hack-10/
@ -0,0 +1,205 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: How to leverage failure)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)

Mutation testing by example: How to leverage failure
======
Use planned failure to ensure your code reaches the expected outcomes, and follow along with the .NET xUnit.net testing framework.
![failure sign at a party, celebrating failure][1]

In [Mutation testing is the evolution of TDD][2], I wrote about the power of iteration: in measurable tests, iteration guarantees that a solution to the problem will be found. In that article, we discussed how an iterative approach helped determine how to implement code that calculates the square root of a given number.

I also demonstrated that the most effective approach is to find a measurable goal or test and then start iterating with a best guess. The first guess, as expected, usually fails, so the failing code must be refined against the measurable goal or test. Based on the results of each run, the guessed value is validated or refined further. In this model, the only way to learn your way toward a solution is to fail repeatedly. It sounds counterintuitive, but it works.

Following that line of analysis, this article examines the best way to use DevOps when building a solution containing some dependencies. The first step is to write a test case that is expected to fail.

### The problem with dependencies is that you cannot depend on them

As Michael Nygard put it in _[Architecture without an end state][3]_, the problem of dependencies is a big topic best left for another article. Here, you will look at some of the potential problems dependencies can pose for a project and at how to leverage test-driven development (TDD) to avoid those pitfalls.

First, find a real-life challenge, then see how it can be resolved using TDD.

### Who let the cat out?

![Cat standing on a roof][4]

In agile development environments, it helps to begin building a solution by defining the desired outcome. Typically, the desired outcome is described in a [user story][5]:

> _I want to use my home automation system (HAS) to control when the cat can go outside, because I want to keep the cat safe overnight._

Now that you have a user story, you need to elaborate on it by providing some functional requirements (that is, by specifying the acceptance criteria). Start with the simplest scenario, described in pseudocode:

> Scenario #1: Disable the cat door at night
>
> * The clock detects that it is nighttime
> * The clock notifies the HAS
> * The HAS disables the Internet of Things (IoT)-capable cat door
>
### Decompose the system

Before you start building, you need to decompose the system you are building (the HAS) into its dependencies. The first thing you must do is identify any dependencies (if you are lucky, your system has none, which would make things easy, but such a system arguably would not be very useful).

From the simple scenario above, you can see that the desired business outcome (automatically controlling the cat door) depends on detecting nighttime. This dependency hinges on a clock, but a clock cannot distinguish day from night on its own; it is up to you to supply that logic.

Another dependency in the system you are building is the ability to reach the cat door automatically and to enable or disable it. This dependency most likely hinges on an API provided by the IoT-capable cat door.
### Fail fast toward managing dependencies

To satisfy the first dependency, we will build the logic that determines whether the current time is daytime or nighttime. In the spirit of TDD, we will start with a small failure.

Refer to my [previous article][2] for detailed instructions on setting up the development environment and the scaffolding required for this exercise. We will reuse the same .NET environment and the [xUnit.net][6] framework.

Next, create a new project called HAS (for "home automation system") and a file called **UnitTest1.cs**. In this file, write the first failing unit test. In this unit test, describe your expectation: for example, when the system runs and the time is 7pm, the component responsible for deciding whether it is day or night should return the value "Nighttime".

Here is the unit test that describes that expectation:
```
using System;
using Xunit;

namespace unittest
{
    public class UnitTest1
    {
        DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();

        [Fact]
        public void Given7pmReturnNighttime()
        {
            var expected = "Nighttime";
            var actual = dayOrNightUtility.GetDayOrNight();
            Assert.Equal(expected, actual);
        }
    }
}
```

By now, you are probably familiar with the shape of a unit test. A quick refresher: describe the expectation by giving the unit test a descriptive name, **Given7pmReturnNighttime** in this example. Then, in the body of the unit test, create a variable named **expected** and assign the expected value to it (here, "Nighttime"). After that, assign an actual value to the variable **actual** (available after the component or service processes the time of day).

Finally, check whether the expectation has been met by asserting that the expected and actual values are equal: **Assert.Equal(expected, actual)**.

You can also see, in the listing above, a component or service called **dayOrNightUtility**. This module is capable of receiving the message **GetDayOrNight** and of returning a value of type **string**.

Again, in the spirit of TDD, the component or service being described has not been built yet (it is described here only to spell out the expectations). Building it is driven by the described expectations.

Create a new file in the **app** folder and name it **DayOrNightUtility.cs**. Add the following C# code to that file and save it:
```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight() {
            string dayOrNight = "Undetermined";
            return dayOrNight;
        }
    }
}
```

Now go to the command line, change into the **unittests** folder, and run the tests. The run fails, with output like:

```
[Xunit.net 00:00:02.33] unittest.UnitTest1.Given7pmReturnNighttime [FAIL]
Failed unittest.UnitTest1.Given7pmReturnNighttime
[...]
```

Congratulations, you have your first failing unit test. The unit test expected the **DayOrNightUtility** method to return the string "Nighttime", but instead it returned "Undetermined".

### Fix the failing unit test

One quick and dirty way to fix the failing test is to replace the value "Undetermined" with the value "Nighttime" and save the change:
```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight() {
            string dayOrNight = "Nighttime";
            return dayOrNight;
        }
    }
}
```

Now when the tests run, they pass:

```
Starting test execution, please wait...

Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.6470 Seconds
```

Hardcoding the value is basically cheating, though; it is better to give the **DayOrNightUtility** method some intelligence. Modify the **GetDayOrNight** method to include some time-calculation logic:
```
public string GetDayOrNight() {
    string dayOrNight = "Daylight";
    DateTime time = new DateTime();
    if(time.Hour < 7) {
        dayOrNight = "Nighttime";
    }
    return dayOrNight;
}
```

The method now builds a **DateTime** value and compares its **Hour** against 7am. If the **Hour** is less than 7, the logic switches the **dayOrNight** string value from "Daylight" to "Nighttime", and the unit test now passes. (Note that a default-constructed **DateTime** is midnight of day one, so its **Hour** is 0; obtaining the actual system time is a refinement for the evolving solution.)

### The start of a test-driven solution

We have now begun with a basic failing unit test and arrived at a viable solution for our time dependency. More test cases remain to be worked through.

In the next article, I will demonstrate how to test for daylight hours and how to leverage failure throughout the process.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/mutation-testing-example-tdd

作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)
[2]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[3]: https://www.infoq.com/presentations/Architecture-Without-an-End-State/
[4]: https://opensource.com/sites/default/files/uploads/cat.png (Cat standing on a roof)
[5]: https://www.agilealliance.org/glossary/user-stories
[6]: https://xunit.net/