Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-09-17 10:39:57 +08:00
commit f98b0c0d6b
11 changed files with 1381 additions and 25 deletions

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Sandboxie's path to open source, update on the Pentagon's open source initiative, open source in Hollywood, and more)
[#]: via: (https://opensource.com/article/19/9/news-september-15)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
Sandboxie's path to open source, update on the Pentagon's open source initiative, open source in Hollywood, and more
======
Catch up on the biggest open source headlines from the past two weeks.
![Weekly news roundup with TV][1]
In this edition of our open source news roundup, we look at Sandboxie's path to open source, an update on the Pentagon's adoption of open source, open source in Hollywood, and more!
### Sandboxie becomes freeware on its way to open source
Sophos Group plc, a British security company, released a [free version of its popular Sandboxie tool][2], an isolated operating environment for running applications on Windows.
Sophos said that since Sandboxie isn't a core aspect of its business, the easier decision would've been to shut it down. But Sandboxie has [earned a reputation][3] for letting users run unknown software in a safe environment without risking their systems, so the team is putting in the additional work to release it as open source software. This intermediate phase of free-but-not-open-source appears to be related to the current system design, which requires an activation key:
> Sandboxie currently uses a license key to activate and grant access to premium features only available to paid customers (as opposed to those using a free version). We have modified the code and have released an updated free version that does not restrict any features. In other words, the new free license will have access to all the features previously only available to paid customers.
Citing the tool's community impact, senior leaders at Sophos announced that Sandboxie version 5.31.4, an unrestricted version of the program, will remain free until the tool is fully open sourced.
"The Sandboxie user base represents some of the most passionate, forward thinking, and knowledgeable members of the security community and we didnt want to let you down," [Sophos' blog post read][4]. "After thoughtful consideration we decided that the best way to keep Sandboxie going was to give it back to its users -- transitioning it to an open source tool."
### The Pentagon doesn't meet White House mandate for more open source software
In 2016, the White House mandated that each government agency had to open source at least 20 percent of its custom software within three years. There is an [interesting article][5] about this initiative from 2017 that laid out some of the excitement and challenges.
According to the Government Accountability Office, [the Pentagon's not even halfway there][6].
In an article for Nextgov, Jack Corrigan wrote that as of July 2019, the Pentagon had released just 10 percent of its code as open source. They've also not yet implemented other aspects of the White House mandate, including the directive to build an open source software policy and inventories of custom code.
According to the report, some government officials told the GAO that they worry about security risks of sharing code across government departments. They also admitted to not creating metrics that could measure their open source efforts' successes. The Pentagon's Chief Technology Officer cited the Pentagon's size as the reason for not implementing the White House's open source mandate. In a report published Tuesday, the GAO said, “Until [the Defense Department] fully implements its pilot program and establishes milestones for completing the OMB requirements, the department will not be positioned to take advantage of significant cost savings and efficiencies."
### A team of volunteers works to find and digitize copyright-free books
All books published in the U.S. before 1924 are [publicly owned and can be freely used/copied][7]. Books published in or after 1964 will stay under copyright for 95 years after their publication dates. But thanks to a copyright loophole, up to 75 percent of books published between 1923 and 1964 are free to read and copy. The time-consuming trick is confirming which books those are.
So, a group of libraries, volunteers, and archivists have united to learn which books are copyright-free, then digitize and upload them to the Internet. Since renewal records were already digitized, it's been easy to tell if books published between 1923 and 1964 had their copyrights renewed. But looking for a lack of copyright renewal is much harder since you're trying to prove a negative.
Participants include the New York Public Library, [which recently explained][8] why the time-consuming project is worthwhile. To help find more books faster, the NYPL converted many records to XML format. This makes it easier to automate the process of finding which books can be added to the public domain. 
### Hollywood's Academy Software Foundation gains new members
Microsoft and Apple announced plans to contribute at the premier membership level of the Academy Software Foundation (ASWF). They'll join [founding board members][9] including Netflix, Google Cloud, Disney Studios, and Sony Pictures.
The Academy Software Foundation launched in 2018 as a joint project of the [Academy of Motion Picture Arts and Sciences][10] and the [Linux Foundation][11].
> The mission of the Academy Software Foundation (ASWF) is to increase the quality and quantity of contributions to the content creation industrys open source software base; to provide a neutral forum to coordinate cross-project efforts; to provide a common build and test infrastructure; and to provide individuals and organizations a clear path to participation in advancing our open source ecosystem.
Within its first year, the Foundation built [OpenTimelineIO][12], an open source API and interchange format that helps studio teams collaborate across departments. OpenTimelineIO was formally accepted by [the Foundation's Technical Advisory Council][13] as its fifth hosted project last July. They now maintain it alongside [OpenColorIO][14], [OpenCue][15], [OpenEXR][16], and [OpenVDB][17].
#### In other news
* [Comcast puts open source networking software into production][18]
* [SD Times open source project of the week: Ballerina][19]
* [DOD struggles to implement open source pilots][20]
* [Kong open sources universal service mesh Kuma][21]
* [Eclipse unveils Jakarta EE 8][22]
_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/news-september-15
作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
[2]: https://www.sandboxie.com/DownloadSandboxie
[3]: https://betanews.com/2019/09/13/sandboxie-free-open-source/
[4]: https://community.sophos.com/products/sandboxie/f/forum/115109/major-sandboxie-news-sandboxie-is-now-a-free-tool-with-plans-to-transition-it-to-an-open-source-tool/414522
[5]: https://medium.com/@DefenseDigitalService/code-mil-an-open-source-initiative-at-the-pentagon-5ae4986b79bc
[6]: https://www.nextgov.com/analytics-data/2019/09/pentagon-needs-make-more-software-open-source-watchdog-says/159832/
[7]: https://www.vice.com/en_us/article/a3534j/libraries-and-archivists-are-scanning-and-uploading-books-that-are-secretly-in-the-public-domain
[8]: https://www.nypl.org/blog/2019/09/01/historical-copyright-records-transparency
[9]: https://variety.com/2019/digital/news/microsoft-apple-academy-software-foundation-1203334675/
[10]: https://www.oscars.org/
[11]: http://www.linuxfoundation.org/
[12]: https://github.com/PixarAnimationStudios/OpenTimelineIO
[13]: https://www.linuxfoundation.org/press-release/2019/07/opentimelineio-joins-aswf/
[14]: https://opencolorio.org/
[15]: https://www.opencue.io/
[16]: https://www.openexr.com/
[17]: https://www.openvdb.org/
[18]: https://www.fiercetelecom.com/operators/comcast-puts-open-source-networking-software-into-production
[19]: https://sdtimes.com/os/sd-times-open-source-project-of-the-week-ballerina/
[20]: https://www.fedscoop.com/open-source-software-dod-struggles/
[21]: https://sdtimes.com/micro/kong-open-sources-universal-service-mesh-kuma/
[22]: https://devclass.com/2019/09/11/hey-were-open-source-again-eclipse-unveils-jakarta-ee-8/

View File

@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Plumbers, Appwrite, and more industry trends)
[#]: via: (https://opensource.com/article/19/9/conferences-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Linux Plumbers, Appwrite, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Working on Linux's nuts and bolts at Linux Plumbers][2]
> The Kernel Maintainers Summit, Linux creator Linus Torvalds told me, is an invitation-only gathering of the top Linux kernel developers. But, while you might think it's about planning on the Linux kernel's future, that's not the case. "The maintainer summit is really different because it doesn't even talk about technical issues." Instead, "It's all about the process of creating and maintaining the Linux kernel."
**The impact**: This is like the technical version of the Bilderberg meeting: you can have your flashy buzzword conferences, but we'll be over here making the real decisions. Or so I imagine. Probably fewer private jets involved, though.
## [Microsoft hosts first Windows Subsystem for Linux conference][3]
> Hayden Barnes, founder of [Whitewater Foundry][4], a startup focusing on [Windows Subsystem for Linux (WSL)][5] [announced WSLconf 1][6], the first community conference for WSL. This event will be held on March 10-11, 2020 at Building 20 on the Microsoft HQ campus in Redmond, WA. The conference is still coming together. But we already know it will have presentations and workshops from [Pengwin, Whitewater's Linux for Windows,][7] Microsoft WSL, and [Canonical][8]'s [Ubuntu][9] on WSL developers.
**The impact**: Microsoft is nurturing the seeds of community growing up around its increasing adoption of and contribution to open source software. It's enough to bring a tear to my eye.
## [Introducing Appwrite: An open source backend server for mobile and web developers][10]
> [Appwrite][11] is a new [open source][12], end to end backend server for frontend and mobile developers that allows you to build apps a lot faster. [Appwrite][13]'s goal is to abstract and simplify common development tasks behind REST APIs and tools, to help developers build advanced apps way faster.
>
> In this post I will shortly cover some of the main [Appwrite][14] services and explain about their main features and how they are designed to help you build your next project way faster than you would when writing all your backend APIs from scratch.
**The impact**: Software development is getting more and more accessible as more open source middleware gets easier to use. Appwrite claims to reduce the time and cost of development by 70%. Imagine what that would mean to a small mobile development agency or citizen developer. I'm curious about how they'll monetize this.
## ['More than just IT': Open source technologist says collaborative culture is key to government transformation][15]
> AGL (agile government leadership) is providing a valuable support network for people who are helping government work better for the public. The organization is focused on things that I am very passionate about — DevOps, digital transformation, open source, and similar topics that are top-of-mind for many government IT leaders. AGL provides me with a community to learn about what the best and brightest are doing today, and share those learnings with my peers throughout the industry.
**The impact**: It is easy to be cynical about the government no matter your political persuasion. I found it refreshing to be reminded that the government is made up of real people who are mostly doing their best to apply relevant technology to the public good. Especially when that technology is open source!
## [How Bloomberg achieves close to 90-95% hardware utilization with Kubernetes][16]
> In 2016, Bloomberg adopted Kubernetes—when it was still in alpha—and has seen remarkable results ever since using the project's upstream code. “With Kubernetes, we're able to very efficiently use our hardware to the point where we can get close to 90 to 95% utilization rates,” says Rybka. Autoscaling in Kubernetes allows the system to meet demands much faster. Furthermore, Kubernetes “offered us the ability to standardize our approach to how we build and manage services, which means that we can spend more time focused on actually working on the open source tools that we support,” says Steven Bower, Data and Analytics Infrastructure Lead. “If we want to stand up a new cluster in another location in the world, it's really very straightforward to do that. Everything is all just code. Configuration is code.”
**The impact**: Nothing cuts through the fog of marketing like utilization stats. One of the things that I've heard about Kube is that people don't know what to do with it when they have it running. Use cases like this give them (and you) something to aspire to.
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/conferences-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.zdnet.com/article/working-on-linuxs-nuts-and-bolts-at-linux-plumbers/
[3]: https://www.zdnet.com/article/microsoft-hosts-first-windows-subsystem-for-linux-conference/
[4]: https://github.com/WhitewaterFoundry
[5]: https://docs.microsoft.com/en-us/windows/wsl/install-win10
[6]: https://www.linkedin.com/feed/update/urn:li:activity:6574754435518599168/
[7]: https://www.zdnet.com/article/pengwin-a-linux-specifically-for-windows-subsystem-for-linux/
[8]: https://canonical.com/
[9]: https://ubuntu.com/
[10]: https://medium.com/@eldadfux/introducing-appwrite-an-open-source-backend-server-for-mobile-web-developers-4be70731575d
[11]: https://appwrite.io
[12]: https://github.com/appwrite/appwrite
[13]: https://medium.com/@eldadfux/introducing-appwrite-an-open-source-backend-server-for-mobile-web-developers-4be70731575d?source=friends_link&sk=b6a2be384aafd1fa5b1b6ff12906082c
[14]: https://appwrite.io/
[15]: https://medium.com/agile-government-leadership/more-than-just-it-open-source-technologist-says-collaborative-culture-is-key-to-government-c46d1489f822
[16]: https://www.cncf.io/blog/2019/09/12/how-bloomberg-achieves-close-to-90-95-hardware-utilization-with-kubernetes/

View File

@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 strategies to simplify complex networks)
[#]: via: (https://www.networkworld.com/article/3438840/3-strategies-to-simplify-complex-networks.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
3 strategies to simplify complex networks
======
Innovations such as SD-WAN, Wi-Fi 6 and 5G have enabled networks to do more, but they've also made them complex. Software, machine learning, and automation will alleviate the problem.
Metamorworks / Getty Images
As the cloud era meets the demands of digital transformation, networks must change. For enterprises, that means networks must become simpler, said Juniper CEO Rami Rahim, speaking at the company's annual industry analyst conference last week.
The past five years have seen more innovation in networking than the previous 30. Things such as [SD-WAN][1], multi-cloud, [Wi-Fi 6][2], [5G][3], 400 Gig, and [edge computing][4] are on the near-term horizon for almost every company. While all of those technologies have enabled the network to do much more than ever before, they have also made it more complex.
![Juniper CEO Rami Rahim][5]
Juniper CEO Rami Rahim / Zeus Kerravala
Network engineers face the harsh reality that they are being tasked with working faster but also more accurately to cut down on unplanned downtime. Networks must become simpler to run, which actually requires more engineering from the vendor. Think of the iPhone. It's so simple, my dad can use it without calling me every hour. Making it easy requires a tremendous amount of innovation from Apple to mask the underlying complexity.
**[ Related: [What is 5G wireless? And how it will change networking as we know it][6] ]**
### How to simplify networks
Vendors can help make networks simpler by executing on the following:
  * **Simplicity through software.** The pendulum has swung way too far on the “hardware doesn't matter” theory. Of course it matters, particularly for networking, where tasks such as deep-packet inspection, routing, and other functions are still best done in hardware. However, control and management of the hardware should be done in software because it can act as an abstraction layer for the underlying features in the actual boxes. For Juniper, Contrail Cloud and its software-delivered SD-WAN provide the centralized software overlay for simplified operations.
  * **Machine learning-based operations.** Networks generate massive amounts of data that can be useful for operating the environment. The problem is that people can't analyze the data fast enough to understand what it means, but machines can. This is where network professionals must be willing to cede some control to the computers. The purpose of machine learning isn't to replace people, but to be a tool that lets them work smarter and faster. [Juniper acquired Mist Systems earlier this year][7] to provide machine learning-based operations to Wi-Fi, which is a great starting point because Wi-Fi troubleshooting is very difficult. Over time, I expect Mist's benefits to be brought to the entire enterprise portfolio.
  * **Vision of intent-based operations with purposeful automation.** The long-term goal of network operations is akin to a self-driving car, where the network runs and secures itself. However, as with a self-driving car, the technology isn't quite there yet. In the auto industry, there are many automation features, such as parallel park assist and lane-change alerts, that make drivers better. Similarly, network engineers can benefit by automating many of the mundane tasks associated with running a network, such as firmware upgrades, OS patching, and other things that need to be done but offer no strategic benefit.
### To ASIC or not to ASIC
As I mentioned, network hardware is still important. There's currently a debate in the network industry as to whether companies like Juniper should be spinning their own silicon or leveraging merchant silicon. I believe ASICs allow vendors to bring new features to market faster than waiting for the silicon vendors to bake them into their chips. ASICs also give the network equipment manufacturer better control over product roadmaps.
However, there is a wide range of silicon vendors that offer chips for a multitude of use cases that might be hard to replicate in custom chips. Also, some of the cloud providers know the specific feature set they are looking for and will dictate that they want something like a Barefoot Tofino-based switch. In this case, merchant silicon would provide a time-to-market advantage over custom. But both approaches are viable as long as the vendor has a clear roadmap and strategy for how to take advantage of hardware and software.
Historically, Juniper has done a great job using custom chips for competitive advantage, and I don't see that changing. Buyers should not shy away from one approach or the other. Rather, they should look at vendor roadmaps and choose the one that best meets their needs.
There's no shortage of innovation in networking today, but new features and functions without simplicity can wreak havoc on a network and make things worse. One of my general rules of thumb for IT projects is that the solution must be simpler than the original issue, and that's not often the case in networking. Simplifying the network through software, machine learning, and automation enables businesses to take advantage of the new features without the risk associated with complexity.
**[ Learn more about SDN: Find out [where SDN is going][8] and learn the [difference between SDN and NFV][9]. | Get regularly scheduled insights by [signing up for Network World newsletters][10]. ]**
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438840/3-strategies-to-simplify-complex-networks.html
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[2]: https://www.networkworld.com/article/3311921/wi-fi-6-is-coming-to-a-router-near-you.html
[3]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[4]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[5]: https://images.idgesg.net/images/article/2019/09/juniper-ceo-rami-rahim-100811037-orig.jpg
[6]: https://www.networkworld.com/article/3203489/lan-wan/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html
[7]: https://www.networkworld.com/article/3353042/juniper-grabs-mist-for-wireless-ai-cloud-service-delivery-technology.html
[8]: https://www.networkworld.com/article/3209131/lan-wan/what-sdn-is-and-where-its-going.html
[9]: https://www.networkworld.com/article/3206709/lan-wan/what-s-the-difference-between-sdn-and-nfv.html
[10]: https://www.networkworld.com/newsletters/signup.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,163 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Constraint programming by example)
[#]: via: (https://opensource.com/article/19/9/constraint-programming-example)
[#]: author: (Oleksii Tsvietnov https://opensource.com/users/oleksii-tsvietnov)
Constraint programming by example
======
Understand constraint programming with an example application that
converts a character's case and ASCII codes.
![Math formulas in green writing][1]
There are many different ways to solve problems in computing. You might "brute force" your way to a solution by calculating as many possibilities as you can, or you might take a procedural approach and carefully establish the known factors that influence the correct answer. In [constraint programming][2], a problem is viewed as a series of limitations on what could possibly be a valid solution. This paradigm can be applied to effectively solve a group of problems that can be translated to variables and constraints or represented as a mathematical equation. In this way, it is related to the Constraint Satisfaction Problem ([CSP][3]).
Constraint programming uses a declarative style: it describes a general model with certain properties. In contrast to the imperative style, it doesn't say _how_ to achieve something, but rather _what_ to achieve. Instead of defining a set of instructions with only one obvious way to compute values, constraint programming declares relationships between variables within constraints. A final model makes it possible to compute the values of variables regardless of direction or changes. Thus, any change in the value of one variable affects the whole system (i.e., all other variables), and to keep the defined constraints satisfied, the other values are recomputed.
As an example, let's take Pythagoras' theorem: **a² + b² = c²**. The _constraint_ is represented by this equation, which has three _variables_ (a, b, and c), and each has a _domain_ (non-negative). Using the imperative programming style, to compute any of the variables if we have the other two, we would need to create three different functions (because each variable is computed by a different equation):
* c = √(a² + b²)
* a = √(c² - b²)
* b = √(c² - a²)
These functions satisfy the main constraint, and to check domains, each function should validate the input. Moreover, at least one more function would be needed for choosing an appropriate function according to the provided variables. This is one of the possible solutions:
```
def pythagoras(*, a=None, b=None, c=None):
    ''' Computes a side of a right triangle '''
    # Validate
    if len([i for i in (a, b, c) if i is None or i <= 0]) != 1:
        raise SystemExit("ERROR: you must provide exactly two positive values")
    # Compute
    if a is None:
        return (c**2 - b**2)**0.5
    elif b is None:
        return (c**2 - a**2)**0.5
    else:
        return (a**2 + b**2)**0.5
```
To see how the constraint programming approach differs, I'll show an example of a "problem" with four variables and a constraint that is not represented by a straightforward mathematical equation. This is a converter that can change a character's case (lower-case to/from capital/upper-case) and return the ASCII code for each form. Hence, at any time, the converter is aware of all four values and reacts immediately to any changes. The idea for this example was fully inspired by John DeNero's [Fahrenheit-Celsius converter][4].
Here is a diagram of a constraint system:
![Constraint system model][5]
The represented "problem" is translated into a constraint system that consists of nodes (constraints) and connectors (variables). Connectors provide an interface for getting and setting values. They also check the variables' domains. When one value changes, that particular connector notifies all its connected nodes about the change. Nodes, in turn, satisfy constraints, calculate new values, and propagate them to other connectors across the system by "asking" them to set a new value. Propagation is done using a message-passing technique, which means connectors and nodes get messages (synchronously) and react accordingly. For instance, if the system gets the letter **A** on the "capital letter" connector, the other three connectors provide the appropriate results according to the constraints defined on the nodes: 97, a, and 65. It's not allowed to set a lower-case letter (e.g., b) on that connector because each connector has its own domain.
When all connectors are linked to nodes, which are defined by constraints, the system is fully set up and ready to get values on any of the four connectors. Once a value is set, the system automatically calculates and sets values on the rest of the connectors. There is no need to check which variable was set and which functions should be called, as is required in the imperative approach. That is relatively easy to achieve with a few variables but gets interesting with tens of variables or more.
### How it works
The full source code is available in my [GitHub repo][6]. I'll dig a little bit into the details to explain how the system is built.
First, define the connectors by giving them names and setting domains as a function of one argument:
```
import constraint_programming as cp
small_ascii = cp.connector('Small Ascii', lambda x: x >= 97 and x <= 122)
small_letter = cp.connector('Small Letter', lambda x: x >= 'a' and x <= 'z')
capital_ascii = cp.connector('Capital Ascii', lambda x: x >= 65 and x <= 90)
capital_letter = cp.connector('Capital Letter', lambda x: x >= 'A' and x <= 'Z')
```
Second, link these connectors to nodes. There are two types: _code_ (translates letters back and forth to ASCII codes) and _aA_ (translates small letters to capital and back):
```
code(small_letter, small_ascii)
code(capital_letter, capital_ascii)
aA(small_letter, capital_letter)
```
These two nodes differ in which functions should be called, but they are derived from a general constraint function:
```
def code(conn1, conn2):
    return cp.constraint(conn1, conn2, ord, chr)
def aA(conn1, conn2):
    return cp.constraint(conn1, conn2, str.upper, str.lower)
```
Each node has only two connectors. If there is an update on the first connector, then the first function is called to calculate the value of the other connector (variable). The same happens if the second connector's value changes. For example, if the _code_ node gets **A** on the **conn1** connector, then the function **ord** will be used to get its ASCII code. And, the other way around, if the _aA_ node gets **A** on the **conn2** connector, then it needs to use the **str.lower** function to get the correct small letter on **conn1**. Every node is responsible for computing new values and "sending" a message to the other connector that there is a new value to set. The message carries the name of the node that is asking to set a new value, along with the new value itself.
```
def set_value(src_constr, value):
    if (domain is not None) and (not domain(value)):
        raise ValueOutOfDomain(link, value)
    link['value'] = value
    for constraint in constraints:
        if constraint is not src_constr:
            constraint['update'](link)
```
When a connector receives the **set** message, it runs the **set_value** function to check a domain, sets a new value, and sends the "update" message to another node. It is just a notification that the value on that connector has changed.
```
def update(src_conn):
    if src_conn is conn1:
        conn2['set'](node, constr1(conn1['value']))
    else:
        conn1['set'](node, constr2(conn2['value']))
```
Then, the notified node requests this new value on the connector, computes a new value for another connector, and so on until the whole system changes. That's how the propagation works.
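To make the propagation concrete, here is a short usage sketch based only on the snippets above; the external calling convention (passing a plain source tag such as **'user'** to the **set** message and reading results through the **value** key) is my assumption, so the repository's exact interface may differ slightly:
```
# Hypothetical usage of the converter built above (calling details assumed).
# Setting a value on one connector should propagate to the other three.
capital_letter['set']('user', 'A')

print(small_letter['value'])    # expected: 'a'
print(small_ascii['value'])     # expected: 97
print(capital_ascii['value'])   # expected: 65
```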
But how does the message passing happen? It is implemented as accessing keys of dictionaries. Both functions (connector and constraint) return a _dispatch dictionary_. Such a dictionary contains _messages_ as keys and _closures_ as values. By accessing a key, let's say, **set**, a dictionary returns the function **set_value** (closure) that has access to all local names of the "connector" function.
```
# A dispatch dictionary
link = { 'name': name,
         'value': None,
         'connect': connect,
         'set': set_value,
         'constraints': get_constraints }
return link
```
Having a dictionary as a return value makes it possible to create multiple closures (functions) with access to the same local state to operate on. Then these closures are callable by using keys as a type of message.
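As a minimal, self-contained sketch of this pattern (an illustration of the closure-plus-dispatch-dictionary idea, not the repository's actual code), a connector-like function can return a dictionary of closures that all share the same local state:
```
def make_connector(name, domain=None):
    ''' A stripped-down connector: several closures share one local state '''
    state = {'name': name, 'value': None}

    def set_value(value):
        # Reject values outside the connector's domain, if one was given
        if domain is not None and not domain(value):
            raise ValueError('{!r} is outside the domain of {}'.format(value, name))
        state['value'] = value

    def get_value():
        return state['value']

    # The dispatch dictionary: message names mapped to closures
    return {'set': set_value, 'get': get_value}

conn = make_connector('Small Ascii', lambda x: 97 <= x <= 122)
conn['set'](98)
print(conn['get']())   # prints 98
```
Accessing **conn['set']** returns a closure, and calling it mutates the shared state, which is exactly how the **set** and **update** messages travel through the full system.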
### Why use constraint programming?
Constraint programming can give you a new perspective to difficult problems. It's not something you can use in every situation, but it may well open new opportunities for solutions in certain situations. If you find yourself up against an equation that seems difficult to reliably solve in code, try looking at it from a different angle. If the angle that seems to work best is constraint programming, you now have an example of how it can be implemented.
* * *
_This article was originally published on [Oleksii Tsvietnov's blog][7] and is reprinted with his permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/constraint-programming-example
作者:[Oleksii Tsvietnov][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/oleksii-tsvietnov
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_math_formulas.png?itok=B59mYTG3 (Math formulas in green writing)
[2]: https://en.wikipedia.org/wiki/Constraint_programming
[3]: https://vorakl.com/articles/csp/
[4]: https://composingprograms.com/pages/24-mutable-data.html#propagating-constraints
[5]: https://opensource.com/sites/default/files/uploads/constraint-system.png (Constraint system model)
[6]: https://github.com/vorakl/composingprograms.com/tree/master/char_converter
[7]: https://vorakl.com/articles/char-converter/

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Copying large files with Rsync, and some misconceptions)
[#]: via: (https://fedoramagazine.org/copying-large-files-with-rsync-and-some-misconceptions/)
[#]: author: (Daniel Leite de Abreu https://fedoramagazine.org/author/dabreu/)
Copying large files with Rsync, and some misconceptions
======
![][1]
There is a notion that a lot of people working in the IT industry often copy and paste from internet howtos. We all do it, and the copy-and-paste itself is not a problem. The problem is when we run things without understanding them.
Some years ago, a friend who used to work on my team needed to copy virtual machine templates from site A to site B. They could not understand why the file they copied was 10GB on site A but became 100GB on site B.
The friend believed that _rsync_ is a magic tool that should just “sync” the file as it is. However, what most of us forget is to understand what _rsync_ really is, how it is used, and, most importantly in my opinion, where it comes from. This article provides some further information about rsync, and an explanation of what happened in that story.
### About rsync
_rsync_ is a tool that was created by Andrew Tridgell and Paul Mackerras, who were motivated by the following problem:
Imagine you have two files, _file_A_ and _file_B_. You wish to update _file_B_ to be the same as _file_A_. The obvious method is to copy _file_A_ onto _file_B_.
Now imagine that the two files are on two different servers connected by a slow communications link, for example, a dial-up IP link. If _file_A_ is large, copying it onto _file_B_ will be slow, and sometimes not even possible. To make it more efficient, you could compress _file_A_ before sending it, but that would usually only gain a factor of 2 to 4.
Now assume that _file_A_ and _file_B_ are quite similar, and to speed things up, you take advantage of this similarity. A common method is to send just the differences between _file_A_ and _file_B_ down the link and then use that list of differences to reconstruct the file on the remote end.
The problem is that the normal methods for creating a set of differences between two files rely on being able to read both files. Thus, they require that both files are available beforehand at one end of the link. If they are not both available on the same machine, these algorithms cannot be used. (Once you have copied the file over, you don't need the differences.) This is the problem that _rsync_ addresses.
The _rsync_ algorithm efficiently computes which parts of a source file match parts of an existing destination file. Matching parts then do not need to be sent across the link; all that is needed is a reference to the part of the destination file. Only parts of the source file which are not matching need to be sent over.
The receiver can then construct a copy of the source file using the references to parts of the existing destination file and the original material.
Additionally, the data sent to the receiver can be compressed using any of a range of common compression algorithms for further speed improvements.
As we probably all know, the rsync algorithm addresses this problem in a lovely way.
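As a rough, deliberately simplified sketch of that idea (the real algorithm uses a rolling weak checksum plus a stronger hash, so matches can be found at any byte offset rather than only on block boundaries), the block-matching step could look like this:
```
import hashlib

BLOCK = 4096  # block size in bytes

def block_hashes(dest_data):
    ''' Hash each fixed-size block of the file the receiver already has '''
    return {hashlib.md5(dest_data[i:i + BLOCK]).hexdigest(): i
            for i in range(0, len(dest_data), BLOCK)}

def delta(source_data, dest_hashes):
    ''' Describe the source file as block references plus literal data '''
    ops = []
    for i in range(0, len(source_data), BLOCK):
        chunk = source_data[i:i + BLOCK]
        digest = hashlib.md5(chunk).hexdigest()
        if digest in dest_hashes:
            ops.append(('copy', dest_hashes[digest]))  # receiver already has this block
        else:
            ops.append(('data', chunk))                # must be sent over the link
    return ops

old = b'A' * 8192 + b'B' * 4096   # what the receiver has
new = b'A' * 8192 + b'C' * 4096   # what the sender has
for kind, payload in delta(new, block_hashes(old)):
    print(kind, payload if kind == 'copy' else '%d literal bytes' % len(payload))
```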
After this introduction to _rsync_, back to the story!
### Problem 1: Thin provisioning
There were two things that would help the friend understand what was going on.
The problem with the file getting significantly bigger on the other side was caused by Thin Provisioning (TP) being enabled on the source system, a method of optimizing the efficiency of available space in storage area networks (SAN) or network-attached storage (NAS).
The source file took up only 10GB because TP was enabled, but when it was transferred using _rsync_ without any additional configuration, the destination received the full 100GB. _rsync_ could not do the magic automatically; it had to be configured.
The flag that does this work is _-S_ or _\--sparse_, and it tells _rsync_ to handle sparse files efficiently. It will do what it says: it will only send the actual data, so source and destination will both end up with a 10GB file.
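If you want to experiment with this yourself, a quick way to create a large sparse test file (assuming a filesystem that supports sparse files, which most Linux filesystems do) is to seek far into an empty file and write a single byte; here is a small Python sketch:
```
# Create a sparse file with an apparent size of 10GB that uses almost no disk space.
# Compare 'ls -lh sparse_test.img' (apparent size) with 'du -h sparse_test.img'
# (real usage), then copy the file with and without rsync's -S flag.
size = 10 * 1024**3
with open('sparse_test.img', 'wb') as f:
    f.seek(size - 1)   # jump past the "hole"
    f.write(b'\0')     # one real byte at the very end
```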
### Problem 2: Updating files
The second problem appeared when sending over an updated file. The destination was now receiving just the 10GB, but the whole file (containing the virtual disk) was always transferred, even when just a single configuration file had changed on that virtual disk. In other words, only a small portion of the file changed.
The command used for this transfer was:
```
rsync -avS vmdk_file syncuser@host1:/destination
```
Again, understanding how _rsync_ works would help with this problem as well.
The above is the biggest misconception about rsync. Many of us think _rsync_ will simply send the delta updates of the files, and that it will automatically update only what needs to be updated. But this is not the default behaviour of _rsync_.
As the man page says, the default behaviour of _rsync_ is to create a new copy of the file in the destination and to move it into the right place when the transfer is completed.
To change this default behaviour of _rsync_, you have to set the following flags and then rsync will send only the deltas:
```
--inplace update destination files in-place
--partial keep partially transferred files
--append append data onto shorter files
--progress show progress during transfer
```
So the full command that would do exactly what the friend wanted is:
```
rsync -av --partial --inplace --append --progress vmdk_file syncuser@host1:/destination
```
Note that the sparse flag _-S_ had to be removed, for two reasons. The first is that you cannot use _sparse_ and _inplace_ together when sending a file over the wire. And second, once you have sent a file over with _sparse_, you can't update it with _inplace_ anymore. Note that versions of rsync older than 3.1.3 will reject the combination of _sparse_ and _inplace_.
So even though the friend ended up copying 100GB over the wire, that only had to happen once. All the following updates only copied the differences, making the copies extremely efficient.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/copying-large-files-with-rsync-and-some-misconceptions/
作者:[Daniel Leite de Abreu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/dabreu/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/rsync-816x345.jpg

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to freeze and lock your Linux system (and why you would want to))
[#]: via: (https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to freeze and lock your Linux system (and why you would want to)
======
What it means to freeze a terminal window and lock a screen -- and how to manage these activities on your Linux system.
Sandra Henry-Stocker
How you freeze and "thaw out" a screen on a Linux system depends a lot on what you mean by these terms. Sometimes “freezing a screen” might mean freezing a terminal window so that activity within that window comes to a halt. Sometimes it means locking your screen so that no one can walk up to your system when you're fetching another cup of coffee and type commands on your behalf.
In this post, we'll examine how you can use and control these actions.
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][1] ]**
### How to freeze a terminal window on Linux
You can freeze a terminal window on a Linux system by typing **Ctrl+S** (hold control key and press "s"). Think of the "s" as meaning "start the freeze". If you continue typing commands after doing this, you won't see the commands you type or the output you would expect to see. In fact, the commands will pile up in a queue and will be run only when you reverse the freeze by typing **Ctrl+Q**. Think of this as "quit the freeze".
One easy way to view how this works is to use the date command and then type **Ctrl+S**. Then type the date command again and wait a few minutes before typing **Ctrl+Q**. You'll see something like this:
```
$ date
Mon 16 Sep 2019 06:47:34 PM EDT
$ date
Mon 16 Sep 2019 06:49:49 PM EDT
```
The gap between the two times shown will indicate that the second date command wasn't run until you unfroze your window.
Terminal windows can be frozen and unfrozen whether you're sitting at the computer screen or running remotely using a tool such as PuTTY.
And here's a little trick that can come in handy. If you see that a terminal window appears to be inactive, one possibility is that you or someone else inadvertently typed **Ctrl+S**. In any case, it's not a bad idea to enter **Ctrl+Q**, just in case that resolves the problem.
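Incidentally, **Ctrl+S** and **Ctrl+Q** work because they are the terminal's software flow-control characters (XOFF and XON), which are honored only while the terminal's **IXON** flag is set. Here is a small Python sketch (run it in an interactive terminal) that reports whether that flag is currently set:
```
import sys
import termios

# The first element returned by tcgetattr() holds the input-mode flags;
# IXON is the bit that enables Ctrl+S/Ctrl+Q flow control.
iflag = termios.tcgetattr(sys.stdin.fileno())[0]
state = 'enabled' if iflag & termios.IXON else 'disabled'
print('XON/XOFF flow control (Ctrl+S / Ctrl+Q) is', state)
```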
### How to lock your screen
To lock your screen before you leave your desk, either **Ctrl+Alt+L** or **Super+L** (i.e., holding down the Windows key and pressing L) should work. Once your screen is locked, you will have to enter your password to log back in.
### Automatic screen locking on Linux systems
While best practice suggests that you lock your screen whenever you are about to leave your desk, Linux systems usually automatically lock after a period of no activity. The timing for "blanking" a screen (making it go dark) and actually locking the screen (requiring a login to use it again) depend on settings that you can set to your personal preferences.
To change how long it takes for your screen to go dark when using GNOME screensaver, open your settings window and select **Power** and then **Blank screen**. You can choose times between 1 and 15 minutes or never. To select how long after the blanking the screen locks, go to settings, select **Privacy** and then **Blank screen.** Settings should include 1, 2, 3, 5 and 30 minutes or one hour.
### How to lock your screen from the command line
If you are using Gnome screensaver, you can also lock the screen from the command line using this command:
```
gnome-screensaver-command -l
```
That's a lowercase L for "lock".
### How to check your lockscreen state
You can also use the gnome-screensaver command to check whether your screen is locked. With the **\--query** option, the command tells you whether the screen is currently locked (i.e., active). With the **\--time** option, it tells you how long the lock has been in effect. Here's a sample script:
```
#!/bin/bash
gnome-screensaver-command --query
gnome-screensaver-command --time
```
Running the script will show output like this:
```
$ ./check_lockscreen
The screensaver is active
The screensaver has been active for 1013 seconds.
```
#### Wrap-up
Freezing your terminal window is easy if you remember the proper control sequences. For screen locking, how well it works depends on the controls you put in place for yourself or whether you're comfortable working with the defaults.
**[ Also see: [Invaluable tips and tricks for troubleshooting Linux][2] ]**
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[2]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,170 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to start developing with .NET)
[#]: via: (https://opensource.com/article/19/9/getting-started-net)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How to start developing with .NET
======
Learn the basics to get up and running with the .NET development
platform.
![Coding on a computer][1]
The .NET framework was released in 2000 by Microsoft. An open source implementation of the platform, [Mono][2], was the center of controversy in the early 2000s because Microsoft held several patents for .NET technology and could have used those patents to end Mono implementations. Fortunately, in 2014, Microsoft declared that the .NET development platform would be open source under the MIT license from then on, and in 2016, Microsoft purchased Xamarin, the company that produces Mono.
Both .NET and Mono have grown into cross-platform programming environments for C#, F#, GTK#, Visual Basic, Vala, and more. Applications created with .NET and Mono have been delivered to Linux, BSD, Windows, MacOS, Android, and even some gaming consoles. You can use either .NET or Mono to develop .NET applications. Both are open source, and both have active and vibrant communities. This article focuses on getting started with Microsoft's implementation of the .NET environment.
### How to install .NET
The .NET downloads are divided into packages: one containing just a .NET runtime, and the other a .NET software development kit (SDK) containing the .NET Core and runtime. Depending on your platform, there may be several variants of even these packages, accounting for architecture and OS version. To start developing with .NET, you must [install the SDK][3]. This gives you the [dotnet][4] terminal or PowerShell command, which you can use to create and build projects.
#### Linux
To install .NET on Linux, first, add the Microsoft Linux software repository to your computer.
On Fedora:
```
$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ sudo wget -q -O /etc/yum.repos.d/microsoft-prod.repo https://packages.microsoft.com/config/fedora/27/prod.repo
```
On Ubuntu:
```
$ wget -q https://packages.microsoft.com/config/ubuntu/19.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
$ sudo dpkg -i packages-microsoft-prod.deb
```
Next, install the SDK using your package manager, replacing **<X.Y>** with the current version of the .NET release:
On Fedora:
```
$ sudo dnf install dotnet-sdk-<X.Y>
```
On Ubuntu:
```
$ sudo apt install apt-transport-https
$ sudo apt update
$ sudo apt install dotnet-sdk-<X.Y>
```
Once all the packages are downloaded and installed, confirm the installation by opening a terminal and typing:
```
$ dotnet --version
X.Y.Z
```
#### Windows
If you're on Microsoft Windows, you probably already have the .NET runtime installed. However, to develop .NET applications, you must also install the .NET Core SDK.
First, [download the installer][3]. To keep your options open, download .NET Core for cross-platform development (the .NET Framework is Windows-only). Once the **.exe** file is downloaded, double-click it to launch the installation wizard, and click through the two-step install process: accept the license and allow the install to proceed.
![Installing dotnet on Windows][5]
Afterward, open PowerShell from your Application menu in the lower-left corner. In PowerShell, type a test command:
```
PS C:\Users\osdc> dotnet
```
If you see information about a dotnet installation, .NET has been installed correctly.
#### MacOS
If you're on an Apple Mac, [download the Mac installer][3], which comes in the form of a **.pkg** package. Double-click the **.pkg** file and click through the installer. You may need to grant permission for the installer since the package is not from the App Store.
Once all packages are downloaded and installed, confirm the installation by opening a terminal and typing:
```
$ dotnet --version
X.Y.Z
```
### Hello .NET
A sample "hello world" application written in .NET is provided with the **dotnet** command. Or, more accurately, the command provides the sample application.
First, create a project directory and the required code infrastructure using the **dotnet** command with the **new** and **console** options to create a new console-only application. Use the **-o** option to specify a project name:
```
$ dotnet new console -o hellodotnet
```
This creates a directory called **hellodotnet** in your current directory. Change into your project directory and have a look around:
```
$ cd hellodotnet
$ dir
hellodotnet.csproj  obj  Program.cs
```
The file **Program.cs** is a minimal C# file containing a simple Hello World application. Open it in a text editor to view it. Microsoft's Visual Studio Code is a cross-platform, open source application built with dotnet in mind, and while it's not a bad text editor, it also collects a lot of data about its users (and grants itself permission to do so in the license applied to its binary distribution). If you want to try out Visual Studio Code, consider using [VSCodium][6], a distribution of Visual Studio Code that's built from the MIT-licensed source code _without_ the telemetry (read the [documentation][7] for options to disable other forms of tracking in even this build). Alternatively, just use your existing favorite text editor or IDE.
The boilerplate code in a new console application is:
```
using System;
namespace hellodotnet
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
```
To run the program, use the **dotnet run** command:
```
$ dotnet run
Hello World!
```
That's the basic workflow of .NET and the **dotnet** command. The full [C# guide for .NET][8] is available, and everything there is relevant to .NET. For examples of .NET in action, follow [Alex Bunardzic][9]'s mutation testing articles here on opensource.com.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/getting-started-net
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://www.monodevelop.com/
[3]: https://dotnet.microsoft.com/download
[4]: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet?tabs=netcore21
[5]: https://opensource.com/sites/default/files/uploads/dotnet-windows-install.jpg (Installing dotnet on Windows)
[6]: https://vscodium.com/
[7]: https://github.com/VSCodium/vscodium/blob/master/DOCS.md
[8]: https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/intro-to-csharp/
[9]: https://opensource.com/users/alex-bunardzic (View user profile.)

View File

@ -0,0 +1,417 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux commands to display your hardware information)
[#]: via: (https://opensource.com/article/19/9/linux-commands-hardware-information)
[#]: author: (Howard Fosdick https://opensource.com/users/howtech)
Linux commands to display your hardware information
======
Get the details on what's inside your computer from the command line.
![computer screen ][1]
There are many reasons you might need to find out details about your computer hardware. For example, if you need help fixing something and post a plea in an online forum, people will immediately ask you for specifics about your computer. Or, if you want to upgrade your computer, you'll need to know what you have and what you can have. You need to interrogate your computer to discover its specifications.
The easiest way to do that is with one of the standard Linux GUI programs:
* [i-nex][2] collects hardware information and displays it in a manner similar to the popular [CPU-Z][3] under Windows.
* [HardInfo][4] displays hardware specifics and even includes a set of eight popular benchmark programs you can run to gauge your system's performance.
* [KInfoCenter][5] and [Lshw][6] also display hardware details and are available in many software repositories.
Alternatively, you could open up the box and read the labels on the disks, memory, and other devices. Or you could enter the boot-time panels—the so-called UEFI or BIOS panels. Just hit [the proper program function key][7] during the boot process to access them. These two methods give you hardware details but omit software information.
Or, you could issue a Linux line command. Wait a minute… that sounds difficult. Why would you do this?
Sometimes it's easy to find a specific bit of information through a well-targeted line command. Perhaps you don't have a GUI program available or don't want to install one.
Probably the main reason to use line commands is for writing scripts. Whether you employ the Linux shell or another programming language, scripting typically requires coding line commands.
Many line commands for detecting hardware must be issued under root authority. So either switch to the root user ID, or issue the command under your regular user ID preceded by **sudo**:
```
sudo <the_line_command>
```
and respond to the prompt for the root password.
This article introduces many of the most useful line commands for system discovery. The quick reference chart at the end summarizes them.
### Hardware overview
There are several line commands that will give you a comprehensive overview of your computer's hardware.
The **inxi** command lists details about your system, CPU, graphics, audio, networking, drives, partitions, sensors, and more. Forum participants often ask for its output when they're trying to help others solve problems. It's a standard diagnostic for problem-solving:
```
`inxi -Fxz`
```
The **-F** flag means you'll get full output, **x** adds details, and **z** masks out personally identifying information like MAC and IP addresses.
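If **inxi** isn't installed, it is packaged for most major distributions; on a Debian- or Ubuntu-based system, something like this usually works (this is an assumption; package names and managers vary by distribution):
```
# Assumes a Debian/Ubuntu-based system; adjust for your package manager
sudo apt install inxi
```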
The **hwinfo** and **lshw** commands display much of the same information in different formats:
```
`hwinfo --short`
```
or
```
`lshw -short`
```
The long forms of these two commands spew out exhaustive—but hard to read—output:
```
`hwinfo`
```
or
```
`lshw`
```
### CPU details
You can learn everything about your CPU through line commands. View CPU details by issuing either the **lscpu** command or its close relative **lshw**:
```
`lscpu`
```
or
```
`lshw -C cpu`
```
In both cases, the last few lines of output list all the CPU's capabilities. Here you can find out whether your processor supports specific features.
With all these commands, you can reduce verbiage and narrow any answer down to a single detail by parsing the command output with the **grep** command. For example, to view only the CPU make and model:
```
`lshw -C cpu | grep -i product`
```
To view just the CPU's speed in megahertz:
```
`lscpu | grep -i mhz`
```
or its [BogoMips][8] power rating:
```
`lscpu | grep -i bogo`
```
The **-i** flag on the **grep** command makes the search case-insensitive, so it doesn't matter whether the output is upper or lower case.
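For example, here is a hedged sketch of how you might test for one specific capability in a script (the choice of the virtualization flags **vmx**/**svm** is just an illustration):
```
# Check whether the CPU advertises hardware virtualization
# (vmx = Intel VT-x, svm = AMD-V)
if lscpu | grep -iqE 'vmx|svm'; then
  echo "Hardware virtualization supported"
else
  echo "Hardware virtualization flag not found"
fi
```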
### Memory
Linux line commands enable you to gather all possible details about your computer's memory. You can even determine whether you can add extra memory to the computer without opening up the box.
To list each memory stick and its capacity, issue the **dmidecode** command:
```
`dmidecode -t memory | grep -i size`
```
For more specifics on system memory, including type, size, speed, and voltage of each RAM stick, try:
```
`lshw -short -C memory`
```
One thing you'll surely want to know is the maximum memory you can install on your computer:
```
`dmidecode -t memory | grep -i max`
```
Now find out whether there are any open slots to insert additional memory sticks. You can do this without opening your computer by issuing this command:
```
`lshw -short -C memory | grep -i empty`
```
A null response means all the memory slots are already in use.
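Putting these memory commands together, here is a small sketch (the report wording is mine; remember that **dmidecode** and **lshw** generally need root) that prints the maximum supported memory and whether any slots are free:
```
#!/bin/bash
# Sketch: summarize memory capacity and free slots (run with sudo/root)
echo "Maximum supported memory:"
dmidecode -t memory | grep -i max

if lshw -short -C memory | grep -iq empty; then
  echo "At least one memory slot appears to be empty."
else
  echo "No empty memory slots were reported."
fi
```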
Determining how much video memory you have requires a pair of commands. First, list all devices with the **lspci** command and limit the output displayed to the video device you're interested in:
```
`lspci | grep -i vga`
```
The output line that identifies the video controller will typically look something like this:
```
`00:02.0 VGA compatible controller: Intel Corporation 82Q35 Express Integrated Graphics Controller (rev 02)`
```
Now reissue the **lspci** command, referencing the video device number as the selected device:
```
`lspci -v -s 00:02.0`
```
The output line identified as _prefetchable_ is the amount of video RAM on your system:
```
...
Memory at f0100000 (32-bit, non-prefetchable) [size=512K]
I/O ports at 1230 [size=8]
Memory at e0000000 (32-bit, prefetchable) [size=256M]
Memory at f0000000 (32-bit, non-prefetchable) [size=1M]
...
```
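If you'd rather not copy the device number by hand, a possible one-liner (it assumes the first VGA-compatible controller that **lspci** reports is the one you care about) is:
```
# Show the memory ranges of the first VGA controller found;
# the line marked ", prefetchable" is typically the video RAM
lspci -v -s "$(lspci | awk '/VGA/ {print $1; exit}')" | grep ', prefetchable'
```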
Finally, to show current memory use in megabytes, issue:
```
`free -m`
```
This tells how much memory is free, how much is in use, the size of the swap area, and whether it's being used. For example, the output might look like this:
```
              total        used        free     shared    buff/cache   available
Mem:          11891        1326        8877      212        1687       10077
Swap:          1999           0        1999
```
The **top** command gives you more detail on memory use. It shows current overall memory and CPU use and also breaks it down by process ID, user ID, and the commands being run. It displays full-screen text output:
```
`top`
```
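On recent procps-ng builds of **top**, you can also sort by memory use right from the start (the **-o** flag is an assumption about your version; very old builds may not support it):
```
# Sort processes by resident memory use
top -o %MEM
```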
### Disks, filesystems, and devices
You can easily determine whatever you wish to know about disks, partitions, filesystems, and other devices.
To display a single line describing each disk device:
```
`lshw -short -C disk`
```
Get details on any specific SATA disk, such as its model and serial numbers, supported modes, sector count, and more with:
```
`hdparm -i /dev/sda`
```
Of course, you should replace **sda** with **sdb** or another device mnemonic if necessary.
To list all disks with all their defined partitions, along with the size of each, issue:
```
`lsblk`
```
For more detail, including the number of sectors, size, filesystem ID and type, and partition starting and ending sectors:
```
`fdisk -l`
```
To start up Linux, you need to identify mountable partitions to the [GRUB][9] bootloader. You can find this information with the **blkid** command. It lists each partition's unique identifier (UUID) and its filesystem type (e.g., ext3 or ext4):
```
`blkid`
```
To list the mounted filesystems, their mount points, and the space used and available for each (in megabytes):
```
`df -m`
```
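In a script, **df** output is easy to trim down. Here is a minimal sketch (the 90% threshold and the focus on the root filesystem are arbitrary choices of mine):
```
#!/bin/bash
# Warn when the root filesystem is more than 90% full
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$usage" -gt 90 ]; then
  echo "Warning: / is ${usage}% full"
fi
```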
Finally, you can list details for all USB and PCI buses and devices with these commands:
```
`lsusb`
```
or
```
`lspci`
```
### Network
Linux offers tons of networking line commands. Here are just a few.
To see hardware details about your network card, issue:
```
`lshw -C network`
```
Traditionally, the command to show network interfaces was **ifconfig**:
```
`ifconfig -a`
```
But many people now use:
```
`ip link show`
```
or
```
`netstat -i`
```
In reading the output, it helps to know common network abbreviations:
**Abbreviation** | **Meaning**
---|---
**lo** | Loopback interface
**eth0** or **enp*** | Ethernet interface
**wlan0** | Wireless interface
**ppp0** | Point-to-Point Protocol interface (used by a dial-up modem, PPTP VPN connection, or USB modem)
**vboxnet0** or **vmnet*** | Virtual machine interface
The asterisks in this table are wildcard characters, serving as placeholders for whatever series of characters appears from system to system.
To show your default gateway and routing tables, issue either of these commands:
```
`ip route | column -t`
```
or
```
`netstat -r`
```
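If a script needs just the default gateway address, one possible extraction (assuming the common `default via <address>` line that **ip route** prints) is:
```
# Print only the default gateway address
ip route | awk '/^default/ {print $3; exit}'
```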
### Software
Let's conclude with two commands that display low-level software details. For example, what if you want to know whether you have the latest firmware installed? This command shows the UEFI or BIOS date and version:
```
`dmidecode -t bios`
```
What is the kernel version, and is it 64-bit? And what is the network hostname? To find out, issue:
```
`uname -a`
```
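If you only need specific pieces of that information, **uname** also accepts narrower flags, for example:
```
uname -r   # kernel release only
uname -m   # hardware architecture (e.g., x86_64)
uname -n   # network hostname
```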
### Quick reference chart
This chart summarizes all the commands covered in this article:
Display info about all hardware | **inxi -Fxz** _--or--_ **hwinfo --short** _--or--_ **lshw -short**
---|---
Display all CPU info | **lscpu** _--or--_ **lshw -C cpu**
Show CPU features (e.g., PAE, SSE2) | **lshw -C cpu \| grep -i capabilities**
Report whether the CPU is 32- or 64-bit | **lshw -C cpu \| grep -i width**
Show current memory size and configuration | **dmidecode -t memory \| grep -i size** _--or--_ **lshw -short -C memory**
Show maximum memory for the hardware | **dmidecode -t memory \| grep -i max**
Determine whether memory slots are available (a null answer means no slots available) | **lshw -short -C memory \| grep -i empty**
Determine the amount of video memory | **lspci \| grep -i vga**, then reissue with the device number, for example **lspci -v -s 00:02.0**; the VRAM is the _prefetchable_ value
Show current memory use | **free -m** _--or--_ **top**
List the disk drives | **lshw -short -C disk**
Show detailed information about a specific disk drive | **hdparm -i /dev/sda** (replace **sda** if necessary)
List information about disks and partitions | **lsblk** (simple) _--or--_ **fdisk -l** (detailed)
List partition IDs (UUIDs) | **blkid**
List mounted filesystems, their mount points, and megabytes used and available for each | **df -m**
List USB devices | **lsusb**
List PCI devices | **lspci**
Show network card details | **lshw -C network**
Show network interfaces | **ifconfig -a** _--or--_ **ip link show** _--or--_ **netstat -i**
Display routing tables | **ip route \| column -t** _--or--_ **netstat -r**
Display UEFI/BIOS info | **dmidecode -t bios**
Show kernel version, network hostname, more | **uname -a**
Do you have a favorite command that I overlooked? Please add a comment and share it.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/linux-commands-hardware-information
作者:[Howard Fosdick][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/howtech
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK (computer screen )
[2]: http://sourceforge.net/projects/i-nex/
[3]: https://www.cpuid.com/softwares/cpu-z.html
[4]: http://sourceforge.net/projects/hardinfo.berlios/
[5]: https://userbase.kde.org/KInfoCenter
[6]: http://www.binarytides.com/linux-lshw-command/
[7]: http://www.disk-image.com/faq-bootmenu.htm
[8]: https://en.wikipedia.org/wiki/BogoMips
[9]: https://www.dedoimedo.com/computers/grub.html

View File

@ -0,0 +1,145 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Check Linux Mint Version Number & Codename)
[#]: via: (https://itsfoss.com/check-linux-mint-version/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
How to Check Linux Mint Version Number & Codename
======
Linux Mint has a major release (like Mint 19) every two years and minor releases (like Mint 19.1, 19.2, etc.) every six months or so. You can upgrade your Linux Mint version on your own, or it may get updated automatically for minor releases.
With all these releases, you may wonder which Linux Mint version you are using. Knowing the version number is also helpful in determining whether a particular piece of software is available for your system or whether your system has reached end of life.
There could be a number of reasons why you might need the Linux Mint version number, and there are various ways to obtain this information. Let me show you both the graphical and the command-line ways to get the Mint release information.
* [Check Linux Mint version using command line][1]
* [Check Linux Mint version information using GUI][2]
### Ways to check Linux Mint version number using terminal
![][3]
I'll go over several ways you can check your Linux Mint version number and codename using very simple commands. You can open up a **terminal** from the **Menu** or by pressing **CTRL+ALT+T** (default hotkey).
The **last two entries** in this list also output the **Ubuntu release** your current Linux Mint version is based on.
#### 1\. /etc/issue
Starting out with the simplest CLI method, you can print out the contents of **/etc/issue** to check your **Version Number** and **Codename**:
```
$ cat /etc/issue
Linux Mint 19.2 Tina \n \l
```
#### 2\. hostnamectl
![hostnamectl][4]
This single command (**hostnamectl**) prints almost the same information as that found in **System Info**. You can see your **Operating System** (with **version number**), as well as your **kernel version**.
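If you only want the distribution line from that output, a minimal sketch (the grep pattern assumes the usual "Operating System" label that **hostnamectl** prints):
```
# Show only the distribution line from hostnamectl output
hostnamectl | grep -i 'operating system'
```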
#### 3\. lsb_release
**lsb_release** is a very simple Linux utility to check basic information about your distribution:
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: LinuxMint
Description: Linux Mint 19.2 Tina
Release: 19.2
Codename: tina
```
**Note:** _I used the **-a** flag to print all parameters, but you can also use **-s** for short form, **-d** for description, etc. (check **man lsb_release** for all flags)._
#### 4\. /etc/linuxmint/info
![/etc/linuxmint/info][5]
This isn't a command, but rather a file present on any Linux Mint install. Simply use the **cat** command to print its contents to your terminal and see your **Release Number** and **Codename**.
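For example, a quick sketch (the **RELEASE** and **CODENAME** field names are what the file typically contains; treat that as an assumption):
```
cat /etc/linuxmint/info
# Or print just the release and codename fields:
grep -E '^(RELEASE|CODENAME)=' /etc/linuxmint/info
```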
#### 5\. Use /etc/os-release to get Ubuntu codename as well
![/etc/os-release][7]
Linux Mint is based on Ubuntu, and each Linux Mint release is based on a different Ubuntu release. Knowing which Ubuntu version your Linux Mint release is built on is helpful when you have to use the Ubuntu codename while adding a repository, such as when you need to [install the latest Virtual Box in Linux Mint][8].
**os-release** is yet another file similar to **info**, showing you the codename for the **Ubuntu** release your Linux Mint is based on.
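A quick way to read it from the terminal (the **UBUNTU_CODENAME** field appears on recent Mint releases, but treat that as an assumption):
```
cat /etc/os-release
# Or show only the Ubuntu base codename:
grep UBUNTU_CODENAME /etc/os-release
```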
#### 6\. Use /etc/upstream-release/lsb-release to get only Ubuntu base info
If you only want to see information about the **Ubuntu** base, output the contents of **/etc/upstream-release/lsb-release**:
```
$ cat /etc/upstream-release/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS"
```
Bonus Tip: [You can check just the Linux kernel version][9] with the **uname** command:
```
$ uname -r
4.15.0-54-generic
```
**Note:** _**-r** stands for **release**; you can check the other flags with **man uname**._
### Check Linux Mint version information using GUI
If you are not comfortable with the terminal and commands, you can use the graphical method. As you would expect, this one is pretty straightforward.
Open up the **Menu** (bottom-left corner) and then go to **Preferences &gt; System Info**:
![Linux Mint Menu][10]
Alternatively, in the Menu you can search for **System Info**:
![Menu Search System Info][11]
Here you can see your operating system (including the version number), your kernel, and the version number of your desktop environment (DE):
![System Info][12]
**Wrapping Up**
I have covered several ways to quickly check the version and codename (as well as the Ubuntu base and kernel) of the Linux Mint release you are running. I hope you found this beginner tutorial helpful. Let us know in the comments which method is your favorite!
--------------------------------------------------------------------------------
via: https://itsfoss.com/check-linux-mint-version/
作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: #terminal
[2]: #GUI
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/check-linux-mint-version.png?ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/hostnamectl.jpg?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/linuxmint_info.jpg?ssl=1
[6]: https://itsfoss.com/rid-google-chrome-icons-dock-elementary-os-freya/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/os_release.jpg?ssl=1
[8]: https://itsfoss.com/install-virtualbox-ubuntu/
[9]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux_mint_menu.jpg?ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/menu_search_system_info.jpg?ssl=1
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/system_info.png?ssl=1

View File

@ -7,40 +7,40 @@
[#]: via: (https://www.2daygeek.com/linux-shell-script-to-monitor-user-creation-send-email/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Bash Script to Send a Mail About New User Account Creation
用 Bash 脚本发送新用户帐户创建的邮件
======
For some purposes you may need to keep track of new user creation details on Linux.
出于某些原因,你可能需要跟踪 Linux 上的新用户创建信息。
Also, you may need to send the details by mail.
同时,你可能需要通过邮件发送详细信息。
This may be part of the audit objective or the security team may wish to monitor this for the tracking purposes.
这或许是审计目标的一部分,或者安全团队出于跟踪目的可能希望对此进行监控。
We can do this in other way, as we have already described in the previous article.
我们可以通过其他方式进行此操作,正如我们在上一篇文章中已经描述的那样。
* **[Bash script to send a mail when new user account is created in system][1]**
* **[在系统中创建新用户帐户时发送邮件的 Bash 脚本][1]**
There are many open source monitoring tools are available for Linux.
Linux 有许多开源监控工具可以使用。
But I dont think they have a way to track the new user creation process and alert the administrator when that happens.
但我不认为他们有办法跟踪新用户创建过程,并在发生时提醒管理员。
So how can we achieve this?
那么我们怎样才能做到这一点?
We can write our own Bash script to achieve this.
我们可以编写自己的 Bash 脚本来实现这一目标。
We have added many useful shell scripts in the past. If you want to check them out, go to the link below.
我们过去写过许多有用的 shell 脚本。如果你想了解,请进入下面的链接。
* **[How to automate day to day activities using shell scripts?][2]**
* **[如何使用 shell 脚本自动化日常活动?][2]**
### What does this script really do?
### 这个脚本做了什么?
This will take a backup of the “/etc/passwd” file twice a day (beginning of the day and end of the day), which will enable you to get new user creation details for the specified date.
这将每天两次(一天的开始和结束)备份 “/etc/passwd” 文件,这将使你能够获取指定日期的新用户创建详细信息。
We need to add the below two cronjobs to copy the “/etc/passwd” file.
我们需要添加以下两个 cron 任务来复制 “/etc/passwd” 文件。
```
# crontab -e
@ -49,13 +49,13 @@ We need to add the below two cronjobs to copy the “/etc/passwd” file.
59 23 * * * cp /etc/passwd /opt/scripts/passwd-end-$(date +"%Y-%m-%d")
```
It uses the “difference” command to detect the difference between files, and if any difference is found to yesterdays date, the script will send an email alert to the email id given with new user details.
它使用 “difference” 命令来检测文件之间的差异,如果发现与昨日有任何差异,脚本将向指定 email 发送新用户详细信息。
We cant run this script often because user creation is not happening frequently. However, we plan to run this script once a day.
我们不用经常运行此脚本,因为用户创建不经常发生。但是,我们计划每天运行一次此脚本。
Therefore, you can get a consolidated report on new user creation.
这样,你可以获得有关新用户创建的综合报告。
**Note:** We used our email id in the script for demonstration purpose. So we ask you to use your email id instead.
**注意:**我们在脚本中使用了我们的电子邮件地址进行演示。因此,我们要求你用自己的电子邮件地址。
```
# vi /opt/scripts/new-user-detail.sh
@ -80,13 +80,13 @@ rm $MESSAGE
fi
```
Set an executable permission to "new-user-detail.sh" file.
给 “new-user-detail.sh” 文件添加可执行权限。
```
$ chmod +x /opt/scripts/new-user-detail.sh
```
Finally add a cronjob to automate this. It runs daily at 7AM.
最后添加一个 cron 任务来自动执行此操作。它在每天早上 7 点运行。
```
# crontab -e
@ -94,9 +94,9 @@ Finally add a cronjob to automate this. It runs daily at 7AM.
0 7 * * * /bin/bash /opt/scripts/new-user.sh
```
**Note:** You will receive an email alert at 7AM every day, which is for yesterday's date details.
**注意:**你会在每天早上 7 点都会收到一封关于昨日详情的邮件提醒。
**Output:** The output will be the same as the one below.
**输出:**输出与下面的输出相同。
```
# cat /tmp/new-user-logs.txt
@ -115,7 +115,7 @@ via: https://www.2daygeek.com/linux-shell-script-to-monitor-user-creation-send-e
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出