mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-01-04 22:00:34 +08:00
Merge remote-tracking branch 'LCTT/master'
This commit is contained in: commit b0931ac35c
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10984-1.html)
[#]: subject: (Running LEDs in reverse could cool computers)
[#]: via: (https://www.networkworld.com/article/3386876/running-leds-in-reverse-could-cool-computers.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
@@ -47,7 +47,7 @@ via: https://www.networkworld.com/article/3386876/running-leds-in-reverse-could-
Author: [Patrick Nelson][a]
Curated by: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
@@ -0,0 +1,89 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 ways to make enterprise IoT cost effective)
[#]: via: (https://www.networkworld.com/article/3401082/6-ways-to-make-enterprise-iot-cost-effective.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

6 ways to make enterprise IoT cost effective
======

Rob Mesirow, a principal at PwC’s Connected Solutions unit, offers tips for successfully implementing internet of things (IoT) projects without breaking the bank.

![DavidLeshem / Getty][1]

There’s little question that the internet of things (IoT) holds enormous potential for the enterprise, in everything from asset tracking to compliance.

But enterprise uses of IoT technology are still evolving, and it’s not yet entirely clear which use cases and practices currently make economic and business sense. So, I was thrilled to trade emails recently with [Rob Mesirow][2], a principal at [PwC’s Connected Solutions][3] unit, about how to make enterprise IoT implementations as cost effective as possible.

“The IoT isn’t just about technology (hardware, sensors, software, networks, communications, the cloud, analytics, APIs),” Mesirow said, “though tech is obviously essential. It also includes ensuring cybersecurity, managing data governance, upskilling the workforce and creating a receptive workplace culture, building trust in the IoT, developing interoperability, and creating business partnerships and ecosystems—all part of a foundation that’s vital to a successful IoT implementation.”

**[ Also read: [Enterprise IoT: Companies want solutions in these 4 areas][4] ]**

Yes, that sounds complicated—and a lot of work for a still-hard-to-quantify return. Fortunately, though, Mesirow offered up some tips on how companies can make their IoT implementations as cost effective as possible.

### 1\. Don’t wait for better technology

Mesirow advised against waiting to implement IoT projects until you can deploy emerging technology such as [5G networks][5]. That makes sense, as long as your implementation doesn’t specifically require capabilities available only in the new technology.

### 2\. Start with the basics, and scale up as needed

“Companies need to start with the basics—building one app/task at a time—instead of jumping ahead with enterprise-wide implementations and ecosystems,” Mesirow said.

“There’s no need to start an IoT initiative by tackling a huge, expensive ecosystem. Instead, begin with one manageable use case, and build up and out from there. The IoT can inexpensively automate many everyday tasks to increase effectiveness, employee productivity, and revenue.”

After you pick the low-hanging fruit, it’s time to become more ambitious.

“After getting a few successful pilots established, businesses can then scale up as needed, building on the established foundation of business processes, people experience, and technology,” Mesirow said.

### 3\. Make dumb things smart

Of course, identifying the ripest low-hanging fruit isn’t always easy.

“Companies need to focus on making dumb things smart, deploying infrastructure that’s not going to break the bank, and providing enterprise customers the opportunity to experience what data intelligence can do for their business,” Mesirow said. “Once they do that, things will take off.”

### 4\. Leverage lower-cost networks

“One key to building an IoT inexpensively is to use low-power, low-cost networks (Low-Power Wide-Area Networks (LPWAN)) to provide IoT services, which reduces costs significantly,” Mesirow said.

Naturally, he mentioned that PwC has three separate platforms, with some 80 products hanging off them, which he said cost “a fraction of traditional IoT offerings, with security and privacy built in.”

Despite the product pitch, though, Mesirow is right to call out the efficiencies involved in using low-cost, low-power networks instead of more expensive existing cellular.

### 5\. Balance security vs. cost

Companies need to plan their IoT network with costs vs. security in mind, Mesirow said. “Open-source networks will be less expensive, but there may be security concerns,” he said.

That’s true, of course, but there may be security concerns in _any_ network, not just open-source solutions. Still, Mesirow’s overall point remains valid: Enterprises need to carefully consider all the trade-offs they’re making in their IoT efforts.

### 6\. Account for _all_ the value IoT provides

Finally, Mesirow pointed out that “much of the cost-effectiveness comes from the _value_ the IoT provides,” and it’s important to consider the return, not just the investment.

“For example,” Mesirow said, the IoT “increases productivity by enabling the remote monitoring and control of business operations. It saves on energy costs by automatically turning off lights and HVAC when spaces are vacant, and predictive maintenance alerts lead to fewer machine repairs. And geolocation can lead to personalized marketing to customer smartphones, which can increase sales to nearby stores.”

**[ Now read this: [5 reasons the IoT needs its own networks][6] ]**

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3401082/6-ways-to-make-enterprise-iot-cost-effective.html

Author: [Fredric Paul][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/money_financial_salary_growth_currency_by-davidleshem-100787975-large.jpg
[2]: https://twitter.com/robmesirow
[3]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html
[4]: https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html
[5]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[6]: https://www.networkworld.com/article/3284506/5-reasons-the-iot-needs-its-own-networks.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco launches a developer-community cert program)
[#]: via: (https://www.networkworld.com/article/3401524/cisco-launches-a-developer-community-cert-program.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco launches a developer-community cert program
======

Cisco has revamped some of its most critical certification and career-development programs in an effort to address the emerging software-oriented network environment.

![Getty Images][1]

SAN DIEGO – Cisco revamped some of its most critical certification and career-development tools in an effort to address the emerging software-oriented network environment.

Perhaps one of the biggest additions – rolled out here at the company’s Cisco Live customer event – is the new set of professional certifications for developers utilizing Cisco’s growing DevNet developer community.

**[ Also see [4 job skills that can boost networking salaries][2] and [20 hot jobs ambitious IT pros should shoot for][3]. ]**

The Cisco Certified DevNet Associate, Specialist and Professional certifications will cover software development for applications, automation, DevOps, cloud and IoT. They also target software developers and network engineers who are building the software proficiency to create applications and automated workflows for operational networks and infrastructure.

“This certification evolution is the next step to reflect the critical skills network engineers must have to be at the leading edge of network-enabled business disruption and delivering customer excellence,” said Mike Adams, vice president and general manager of Learning@Cisco. “To perform effectively in this new world, every IT professional needs skills that are broader, deeper and more agile than ever before. And they have to be comfortable working as a multidisciplinary team including infrastructure network engineers, DevOps and automation specialists, and software professionals.”

Other Cisco certification changes include:

  * Streamlined certifications to validate engineering professionals with Cisco Certified Network Associate (CCNA) and Cisco Specialist certifications as well as Cisco Certified Network Professional (CCNP) and Cisco Certified Internetwork Expert (CCIE) certifications in enterprise, data center, service provider, security and collaboration.
  * For more senior professionals, the CCNP will give learners a choice of five tracks, covering enterprise technologies including infrastructure and wireless, service provider, data center, security and collaboration. Candidates will be able to further specialize in a particular focus area within those technologies.
  * Cisco says it will eliminate prerequisites for certifications, meaning engineers can change career options without having to take a defined path.
  * Expansion of Cisco Networking Academy offerings to train entry-level network professionals and software developers. Courses prepare students to earn CCNA and Certified DevNet Associate certifications, equipping them for high-demand jobs in IT.

New network technologies such as intent-based networking, multi-domain networking, and programmability fundamentally change the capabilities of the network, giving network engineers the opportunity to architect solutions that utilize the programmable network in new and exciting ways, wrote Susie Wee, senior vice president and chief technology officer of DevNet.

“DevOps practices can be applied to the network, making the network more agile and enabling automation at scale. The new network provides more than just connectivity; it can now use policy and intent to securely connect applications, users, devices and data across multiple environments – from the data center and cloud, to the campus and branch, to the edge, and to the device,” Wee wrote.

**[ [Looking to upgrade your career in tech? This comprehensive online course teaches you how.][4] ]**

She also announced the DevNet Automation Exchange, a community that will offer shared code, best practices and technology tools for users, developers or channel partners interested in developing automation apps.

Wee said Cisco seeded the Automation Exchange with over 50 shared code repositories.

“It is becoming increasingly clear that network ops can be handled much more efficiently with automation, and offering the tools to develop better applications is crucial going forward,” said Zeus Kerravala, founder and principal analyst with ZK Research.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3401524/cisco-launches-a-developer-community-cert-program.html

Author: [Michael Cooney][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/01/run_digital-vanguard_business-executive-with-briefcase_career-growth-100786736-large.jpg
[2]: https://www.networkworld.com/article/3227832/lan-wan/4-job-skills-that-can-boost-networking-salaries.html
[3]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The carbon footprints of IT shops that train AI models are huge)
[#]: via: (https://www.networkworld.com/article/3401919/the-carbon-footprints-of-it-shops-that-train-ai-models-are-huge.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

The carbon footprints of IT shops that train AI models are huge
======

Artificial intelligence (AI) model training can generate five times more carbon dioxide than a car does in a lifetime, researchers at the University of Massachusetts, Amherst find.

![ipopba / Getty Images][1]

A new research paper from the University of Massachusetts, Amherst looked at the carbon dioxide (CO2) generated over the course of training several common large artificial intelligence (AI) models, and found that the process can generate nearly five times as much CO2 as an average American car does over its lifetime, including the process of making the car itself.

The [paper][2] specifically examined the model-training process for natural-language processing (NLP), which is how AI handles natural-language interactions. The study found that the training process can generate more than 626,000 pounds of carbon dioxide.

This is significant, since AI training is one IT process that has remained firmly on-premises rather than moving to the cloud. Very expensive equipment is needed, as are large volumes of data, so the cloud isn’t the right fit for most AI training, and the report notes this. Plus, IT shops want to keep that kind of IP in house. So, if you are experimenting with AI, that power bill is going to go up.

**[ Read also: [How to plan a software-defined data-center network][3] ]**

While the report used carbon dioxide as its measure, those emissions are still the product of electricity generation. Training involves the most powerful processors, typically Nvidia GPUs, which are not known for low power draw. And as the paper notes, “model training also incurs a substantial cost to the environment due to the energy required to power this hardware for weeks or months at a time.”

Training is the most processor-intensive portion of AI. It can take days, weeks, or even months for a model to “learn” what it needs to know: in this case, how to handle and process natural-language questions rather than the broken keyword phrases of a typical Google search. That means power-hungry Nvidia GPUs running at full utilization for the entire time.

The report said training one model with neural architecture search generated 626,155 pounds of CO2. By contrast, one passenger flying round trip between New York and San Francisco would generate 1,984 pounds of CO2, an average American would generate 11,023 pounds in one year, and a car would generate 126,000 pounds over the course of its lifetime.
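Those figures make the “nearly five times” claim easy to sanity-check with a little arithmetic (all CO2 numbers below are the report’s figures as cited in the article):

```python
# CO2 figures (pounds) cited in the article
model_training = 626_155     # one NLP model trained with neural architecture search
round_trip_flight = 1_984    # one passenger, New York <-> San Francisco
american_per_year = 11_023   # average American, one year
car_lifetime = 126_000       # average car over its lifetime, incl. manufacturing

print(round(model_training / car_lifetime, 1))    # ~5x a car's lifetime emissions
print(round(model_training / american_per_year))  # ~57 years of one person's emissions
print(round(model_training / round_trip_flight))  # ~316 round-trip NY-SF flights
```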
### How the researchers calculated the CO2 amounts

The researchers used four models in the NLP field that have been responsible for the biggest leaps in performance: Transformer, ELMo, BERT, and GPT-2. They trained all of the models on a single Nvidia Titan X GPU, with the exception of ELMo, which was trained on three Nvidia GTX 1080 Ti GPUs. Each model was trained for a maximum of one day.

**[ [Learn Java from beginning concepts to advanced design patterns in this comprehensive 12-part course!][4] ]**

They then used the number of training hours listed in each model’s original paper to calculate the total energy consumed over the complete training process. That number was converted into pounds of carbon dioxide equivalent based on the average energy mix in the U.S.
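That hours-to-CO2 conversion can be sketched as a short calculation. This is a simplified illustration, not the paper’s exact methodology, and the constants are assumptions rather than figures from the article: a datacenter power usage effectiveness (PUE) of 1.58 to account for cooling and other overhead, and roughly 0.954 pounds of CO2 per kWh for an average U.S. energy mix.

```python
def training_co2_lbs(avg_power_watts: float, hours: float,
                     pue: float = 1.58, lbs_co2_per_kwh: float = 0.954) -> float:
    """Rough CO2 estimate for a training run.

    Assumed constants (not from the article): PUE of 1.58 for
    datacenter overhead, and ~0.954 lbs CO2 per kWh, an average
    U.S. energy-mix figure.
    """
    kwh = (avg_power_watts / 1000) * hours * pue  # wall energy incl. overhead
    return kwh * lbs_co2_per_kwh

# Hypothetical example: eight GPUs drawing ~250 W each for 30 days
print(round(training_co2_lbs(avg_power_watts=8 * 250, hours=30 * 24)))
```

The key point the formula makes visible is that emissions scale linearly with both power draw and training time, which is why weeks-long runs on many GPUs dominate the totals.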
The big takeaway is that computational costs start out relatively modest but mushroom when additional tuning steps are used to increase the model’s final accuracy. A tuning process known as neural architecture search ([NAS][5]) is the worst offender because of the sheer amount of processing it requires. NAS is an algorithm that searches for the best neural network architecture. It is seriously advanced AI and requires the most processing time and power.

The researchers suggest it would be beneficial to directly compare different models to perform a cost-benefit (accuracy) analysis.

“To address this, when proposing a model that is meant to be re-trained for downstream use, such as re-training on a new domain or fine-tuning on a new task, authors should report training time and computational resources required, as well as model sensitivity to hyperparameters. This will enable direct comparison across models, allowing subsequent consumers of these models to accurately assess whether the required computational resources [are compatible with their setting],” the authors wrote.

They also say researchers who are cost-constrained should pool resources and avoid the cloud, as cloud compute time is more expensive. As an example, the paper notes that a GPU server with eight Nvidia 1080 Ti GPUs and supporting hardware is available for approximately $20,000. To develop the sample models used in the study, the hardware would cost about $145,000, plus electricity to run the models, roughly half the estimated cost of using on-demand cloud GPUs.

“Unlike money spent on cloud compute, however, that invested in centralized resources would continue to pay off as resources are shared across many projects. A government-funded academic compute cloud would provide equitable access to all researchers,” they wrote.

Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3401919/the-carbon-footprints-of-it-shops-that-train-ai-models-are-huge.html

Author: [Andy Patrizio][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/ai-vendor-relationship-management_artificial-intelligence_hand-on-virtual-screen-100795246-large.jpg
[2]: https://arxiv.org/abs/1906.02243
[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fjava
[5]: https://www.oreilly.com/ideas/what-is-neural-architecture-search
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world
@@ -0,0 +1,95 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco offers cloud-based security for SD-WAN resources)
[#]: via: (https://www.networkworld.com/article/3402079/cisco-offers-cloud-based-security-for-sd-wan-resources.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco offers cloud-based security for SD-WAN resources
======

Cisco adds support for its cloud-based security gateway Umbrella to SD-WAN software

![Thinkstock][1]

SAN DIEGO— As many companies look to [SD-WAN][2] technology to reduce costs, improve connectivity and streamline branch office access, one of the key requirements will be solid security technologies to protect corporate resources.

At its Cisco Live customer event here this week, the company took aim at that need by telling customers it added support for its cloud-based security gateway – known as Umbrella – to its SD-WAN software offerings.

**More about SD-WAN**

  * [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][3]
  * [How to pick an off-site data-backup method][4]
  * [SD-Branch: What it is and why you’ll need it][5]
  * [What are the options for security SD-WAN?][6]

At its most basic, SD-WAN lets companies aggregate a variety of network connections – including MPLS, 4G LTE and DSL – into a branch or network-edge location and provides management software that can turn up new sites, prioritize traffic and set security policies. SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized – for data, voice or video – and potentially save money in the process.

According to Cisco, Umbrella can provide the first line of defense against threats on the internet. By analyzing and learning from internet activity patterns, Umbrella automatically uncovers attacker infrastructure and proactively blocks requests to malicious destinations before a connection is even established — without adding latency for users. With Umbrella, customers can stop phishing and malware infections earlier, identify already infected devices faster and prevent data exfiltration, Cisco says.

Branch offices and roaming users are more vulnerable to attacks, and attackers are looking to exploit them, said Gee Rittenhouse, senior vice president and general manager of Cisco's Security Business Group. He pointed to Enterprise Strategy Group research that says 68 percent of branch offices and roaming users were the source of compromise in recent attacks. And as organizations move to more direct internet access, this becomes an even greater risk, Rittenhouse said.

“Scaling security at every location often means more appliances to ship and manage, more policies to separately maintain, which translates into more money and resources needed – but Umbrella offers an alternative to all that,” Rittenhouse said. “Umbrella provides simple deployment and management, and in a single cloud platform it unifies multiple layers of security, including DNS, secure web gateway, firewall and cloud-access security.”

“It also acts as your secure onramp to the internet by offering secure internet access and controlled SaaS usage across all locations and roaming users.”

Basically, users can set up Umbrella support via the SD-WAN dashboard vManage, and the system automatically creates a secure tunnel to the cloud. Once the SD-WAN traffic is pointed at the cloud, firewall and other security policies can be set. Customers can then see traffic and collect information about patterns, or set policies and respond to anomalies, Rittenhouse said.
Analysts said the Umbrella offering is another important security option offered by Cisco for SD-WAN customers.

“Since it is cloud-based, using Umbrella is a great option for customers with lots of branch or SD-WAN locations who don’t want or need to have a security gateway on premises,” said Rohit Mehra, vice president of Network Infrastructure at IDC. “One of the largest requirements for large customers going forward will be the need for all manner of security technologies for the SD-WAN environment, and Cisco has a big menu of offerings that can address those requirements.”

IDC says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40 percent yearly clip between now and then.

The Umbrella announcement is on top of other recent SD-WAN security enhancements the company has made. In May, [Cisco added support for Advanced Malware Protection (AMP) to its million-plus ISR/ASR edge routers][7] in an effort to reinforce branch- and core-network malware protection across the SD-WAN.

“Together with Cisco Talos [Cisco’s security-intelligence arm], AMP imbues your SD-WAN branch, core and campus locations with threat intelligence from millions of worldwide users, honeypots, sandboxes and extensive industry partnerships,” Cisco said.

In total, AMP identifies more than 1.1 million unique malware samples a day, and when AMP in the Cisco SD-WAN platform spots malicious behavior, it automatically blocks it, Cisco said.

Last year Cisco added its [Viptela SD-WAN technology to the IOS XE][8] version 16.9.1 software that runs its core ISR/ASR routers such as the ISR models 1000, 4000 and ASR 1000, in use by organizations worldwide. Cisco bought Viptela in 2017.

The release of Cisco IOS XE offered an instant upgrade path for creating cloud-controlled SD-WAN fabrics to connect distributed offices, people, devices and applications operating on the installed base, Cisco said. At the time, Cisco said that Cisco SD-WAN on edge routers builds a secure virtual IP fabric by combining routing, segmentation, security, policy and orchestration.

With the recent release of IOS-XE SD-WAN 16.11, Cisco has brought AMP and other enhancements to its SD-WAN.

AMP support is added to a menu of security features already included in Cisco's SD-WAN software, including support for URL filtering, Snort intrusion prevention, the ability to segment users across the WAN, and embedded platform security, including the Cisco Trust Anchor module.

The software also supports SD-WAN Cloud onRamp for CoLocation, which lets customers tie distributed multicloud applications back to a local branch office or local private data center. That way, a cloud-to-branch link would be shorter, faster and possibly more secure than tying cloud-based applications directly to the data center.

Also in May, [Cisco and Teridion][9] said they would team to deliver faster enterprise software-defined WAN services. The integration links Cisco Meraki MX Security/SD-WAN appliances and its Auto VPN technology, which lets users quickly bring up and configure secure sessions between branches and data centers, with Teridion’s cloud-based WAN service. Teridion’s service promises customers better performance and control over traffic running from remote offices over the public internet to the data center.

Teridion said the Meraki integration creates an IPSec connection from the Cisco Meraki MX to the Teridion edge. Customers create locations in the Teridion portal and apply the preconfigured Meraki template to them, or just upload a CSV file if they have a lot of locations. Then, from each Meraki MX, they can create a third-party IPSec tunnel to the Teridion edge IP addresses that are generated as part of the Teridion configuration, the company stated.

The combined Cisco Meraki and Teridion offering brings SD-WAN and security capabilities at the WAN edge that are tightly integrated with a WAN service delivered over cost-effective broadband or dedicated Internet access. Meraki’s MX family supports everything from SD-WAN and [Wi-Fi][10] features to next-generation [firewall][11] and intrusion prevention in a single package.

Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402079/cisco-offers-cloud-based-security-for-sd-wan-resources.html

Author: [Michael Cooney][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/10/cloud-security-ts-100622309-large.jpg
[2]: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html
[3]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[4]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[5]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[6]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[7]: https://www.networkworld.com/article/3394597/cisco-adds-amp-to-sd-wan-for-israsr-routers.html
[8]: https://www.networkworld.com/article/3296007/cisco-upgrade-enables-sd-wan-in-1m-israsr-routers.html
[9]: https://www.networkworld.com/article/3396628/cisco-ties-its-securitysd-wan-gear-with-teridions-cloud-wan-service.html
[10]: https://www.networkworld.com/article/3318119/what-to-expect-from-wi-fi-6-in-2019.html
[11]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world
@@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Dell and Cisco extend VxBlock integration with new features)
[#]: via: (https://www.networkworld.com/article/3402036/dell-and-cisco-extend-vxblock-integration-with-new-features.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Dell and Cisco extend VxBlock integration with new features
======

Dell EMC and Cisco took another step in their alliance, announcing plans to expand VxBlock 1000 integration across servers, networking, storage, and data protection.

![Dell EMC][1]

Just two months ago [Dell EMC and Cisco renewed their converged infrastructure][2] vows, and now the two have taken another step in the alliance. At this year’s [Cisco Live][3] event taking place in San Diego, the two announced plans to expand VxBlock 1000 integration across servers, networking, storage, and data protection.

This is done through support of NVMe over Fabrics (NVMe-oF), which allows enterprise SSDs to talk to each other directly through a high-speed fabric. NVMe is an important advance because SATA and PCI Express SSDs could never talk directly to other drives until NVMe came along.

To leverage NVMe-oF to its fullest extent, Dell EMC has unveiled new integrated Cisco compute (UCS) and storage (MDS) 32G options, extending PowerMax capabilities to deliver NVMe performance across the VxBlock stack.

**More news from Cisco Live 2019:**

  * [Cisco offers cloud-based security for SD-WAN resources][4]
  * [Cisco software to make networks smarter, safer, more manageable][5]
  * [Cisco launches a developer-community cert program][6]

Dell EMC said this will enhance the architecture, high-performance consistency, availability, and scalability of VxBlock, and provide its customers with high-performance, end-to-end support for mission-critical workloads that can deliver microsecond responses.
|
||||
|
||||
These new compute and storage options will be available to order sometime later this month.
|
||||
|
||||
### Other VxBlock news from Dell EMC
|
||||
|
||||
Dell EMC also announced it is extending its factory-integrated on-premise integrated protection solutions for VxBlock to hybrid and multi-cloud environments, such as Amazon Web Services (AWS). This update will offer to help protect VMware workloads and data via the company’s Data Domain Virtual Edition and Cloud Disaster Recovery software options. This will be available in July.
|
||||
|
||||
The company also plans to release VxBlock Central 2.0 software next month. VxBlock Central is designed to help customers simplify CI administration through converged awareness, automation, and analytics.
|
||||
|
||||
New to version 2.0 is modular licensing that matches workflow automation, advanced analytics, and life-cycle management/upgrade options to your needs.
|
||||
|
||||
VxBlock Central 2.0 has a variety of license features, including the following:
|
||||
|
||||
**Base** – Free with purchase of a VxBlock, the base license allows you to manage your system and improve compliance with inventory reporting and alerting. **Workflow Automation** – Provision infrastructure on-demand using engineered workflows through vRealize Orchestrator. New workflows available with this package include Cisco UCS server expansion with Unity and XtremIO storage arrays. **Advanced Analytics** – View capacity and KPIs to discover deeper actionable insights through vRealize Operations. **Lifecycle Management** (new, available later in 2019) – Apply “guided path” software upgrades to optimize system performance.
|
||||
|
||||
* Lifecycle Management includes a new multi-tenant, cloud-based database based on Cloud IQ that will collect and store the CI component inventory structured by the customer, extending the value and ease of use of the cloud-based analytics monitoring.
|
||||
* This feature extends the value and ease of use of the cloud-based analytics monitoring Cloud IQ already provides for individual Dell EMC storage arrays.
|
||||
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3402036/dell-and-cisco-extend-vxblock-integration-with-new-features.html
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/04/dell-emc-vxblock-1000-100794721-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html
|
||||
[3]: https://www.ciscolive.com/global/
|
||||
[4]: https://www.networkworld.com/article/3402079/cisco-offers-cloud-based-security-for-sd-wan-resources.html
|
||||
[5]: https://www.networkworld.com/article/3401523/cisco-software-to-make-networks-smarter-safer-more-manageable.html
|
||||
[6]: https://www.networkworld.com/article/3401524/cisco-launches-a-developer-community-cert-program.html
|
||||
[7]: https://www.facebook.com/NetworkWorld/
|
||||
[8]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,95 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (IoT security vs. privacy: Which is a bigger issue?)
|
||||
[#]: via: (https://www.networkworld.com/article/3401522/iot-security-vs-privacy-which-is-a-bigger-issue.html)
|
||||
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
|
||||
|
||||
IoT security vs. privacy: Which is a bigger issue?
|
||||
======
|
||||
When it comes to the internet of things (IoT), security has long been a key concern. But privacy issues could be an even bigger threat.
|
||||
![Ring][1]
|
||||
|
||||
If you follow the news surrounding the internet of things (IoT), you know that security issues have long been a key concern for IoT consumers, enterprises, and vendors. Those issues are very real, but I’m becoming increasingly convinced that related but fundamentally different _privacy_ vulnerabilities may well be an even bigger threat to the success of the IoT.
|
||||
|
||||
In June alone, we’ve seen a flood of IoT privacy issues inundate the news cycle, and observers are increasingly sounding the alarm that IoT users should be paying attention to what happens to the data collected by IoT devices.
|
||||
|
||||
**[ Also read:[It’s time for the IoT to 'optimize for trust'][2] and [A corporate guide to addressing IoT security][2] ]**
|
||||
|
||||
Predictably, most of the teeth-gnashing has come on the consumer side, but that doesn’t mean enterprises users are immune to the issue. One the one hand, just like consumers, companies are vulnerable to their proprietary information being improperly shared and misused. More immediately, companies may face backlash from their own customers if they are seen as not properly guarding the data they collect via the IoT. Too often, in fact, enterprises shoot themselves in the foot on privacy issues, with practices that range from tone-deaf to exploitative to downright illegal—leading almost [two-thirds (63%) of consumers to describe IoT data collection as “creepy,”][3] while more than half (53%) “distrust connected devices to protect their privacy and handle information in a responsible manner.”
|
||||
|
||||
### Ring becoming the poster child for IoT privacy issues
|
||||
|
||||
As a case in point, let’s look at the case of [Ring, the IoT doorbell company now owned by Amazon][4]. Ring is [reportedly working with police departments to build a video surveillance network in residential neighborhoods][5]. Police in more than 50 cities and towns across the country are apparently offering free or discounted Ring doorbells, and sometimes requiring the recipients to share footage for use in investigations. (While [Ring touts the security benefits][6] of working with law enforcement, it has asked police departments to end the practice of _requiring_ users to hand over footage, as it appears to violate the devices’ terms of service.)
|
||||
|
||||
Many privacy advocates are troubled by this degree of cooperation between police and Ring, but that’s only part of the problem. Last year, for example, [Ring workers in Ukraine reportedly watched customer feeds][7]. Amazingly, though, even that only scratches the surface of the privacy flaps surrounding Ring.
|
||||
|
||||
### Guilty by video?
|
||||
|
||||
According to [Motherboard][8], “Ring is using video captured by its doorbell cameras in Facebook advertisements that ask users to identify and call the cops on a woman whom local police say is a suspected thief.” While the police are apparently appreciative of the “additional eyes that may see this woman and recognize her,” the ad calls the woman a thief even though she has not been charged with a crime, much less convicted!
|
||||
|
||||
Ring may be today’s poster child for IoT privacy issues, but IoT privacy complaints are widespread. In many cases, it comes down to what IoT users—or others nearby—are getting in return for giving up their privacy. According to the [Guardian][9], for example, Google’s Sidewalk Labs smart city project is little more than “surveillance capitalism.” And while car owners may get a discount on auto insurance in return for sharing their driving data, that relationship is hardly set in stone. It may not be long before drivers have to give up their data just to get insurance at all.
|
||||
|
||||
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][10] ]**
|
||||
|
||||
And as the recent [data breach at the U.S. Customs and Border Protection][11] once again demonstrates, private data is “[a genie without a bottle][12].” No matter what legal or technical protections are put in place, the data may always be revealed or used in unforeseen ways. Heck, when you put it all together, it’s enough to make you wonder [whether doorbells really need to be smart][13] at all?
|
||||
|
||||
**Read more about IoT:**
|
||||
|
||||
* [Google’s biggest, craziest ‘moonshot’ yet][14]
|
||||
* [What is the IoT? How the internet of things works][15]
|
||||
* [What is edge computing and how it’s changing the network][16]
|
||||
* [Most powerful internet of things companies][17]
|
||||
* [10 Hot IoT startups to watch][18]
|
||||
* [The 6 ways to make money in IoT][19]
|
||||
* [What is digital twin technology? [and why it matters]][20]
|
||||
* [Blockchain, service-centric networking key to IoT success][21]
|
||||
* [Getting grounded in IoT networking and security][22]
|
||||
* [Building IoT-ready networks must become a priority][23]
|
||||
* [What is the Industrial IoT? [And why the stakes are so high]][24]
|
||||
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3401522/iot-security-vs-privacy-which-is-a-bigger-issue.html
|
||||
|
||||
作者:[Fredric Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Fredric-Paul/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/04/ringvideodoorbellpro-100794084-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
|
||||
[3]: https://www.cpomagazine.com/data-privacy/consumers-still-concerned-about-iot-security-and-privacy-issues/
|
||||
[4]: https://www.cnbc.com/2018/02/27/amazon-buys-ring-a-former-shark-tank-reject.html
|
||||
[5]: https://www.cnet.com/features/amazons-helping-police-build-a-surveillance-network-with-ring-doorbells/
|
||||
[6]: https://blog.ring.com/2019/02/14/how-rings-neighbors-creates-safer-more-connected-communities/
|
||||
[7]: https://www.theinformation.com/go/b7668a689a
|
||||
[8]: https://www.vice.com/en_us/article/pajm5z/amazon-home-surveillance-company-ring-law-enforcement-advertisements
|
||||
[9]: https://www.theguardian.com/cities/2019/jun/06/toronto-smart-city-google-project-privacy-concerns
|
||||
[10]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
|
||||
[11]: https://www.washingtonpost.com/technology/2019/06/10/us-customs-border-protection-says-photos-travelers-into-out-country-were-recently-taken-data-breach/?utm_term=.0f3a38aa40ca
|
||||
[12]: https://smartbear.com/blog/test-and-monitor/data-scientists-are-sexy-and-7-more-surprises-from/
|
||||
[13]: https://slate.com/tag/should-this-thing-be-smart
|
||||
[14]: https://www.networkworld.com/article/3058036/google-s-biggest-craziest-moonshot-yet.html
|
||||
[15]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
|
||||
[16]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
|
||||
[17]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
|
||||
[18]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
|
||||
[19]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
|
||||
[20]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
|
||||
[21]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
|
||||
[22]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
|
||||
[23]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
|
||||
[24]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
|
||||
[25]: https://www.facebook.com/NetworkWorld/
|
||||
[26]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,121 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Software Defined Perimeter (SDP): Creating a new network perimeter)
|
||||
[#]: via: (https://www.networkworld.com/article/3402258/software-defined-perimeter-sdp-creating-a-new-network-perimeter.html)
|
||||
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
|
||||
|
||||
Software Defined Perimeter (SDP): Creating a new network perimeter
|
||||
======
|
||||
Considering the way networks work today and the change in traffic patterns; both internal and to the cloud, this limits the effect of the fixed perimeter.
|
||||
![monsitj / Getty Images][1]
|
||||
|
||||
Networks were initially designed to create internal segments that were separated from the external world by using a fixed perimeter. The internal network was deemed trustworthy, whereas the external was considered hostile. However, this is still the foundation for most networking professionals even though a lot has changed since the inception of the design.
|
||||
|
||||
More often than not the fixed perimeter consists of a number of network and security appliances, thereby creating a service chained stack, resulting in appliance sprawl. Typically, the appliances that a user may need to pass to get to the internal LAN may vary. But generally, the stack would consist of global load balancers, external firewall, DDoS appliance, VPN concentrator, internal firewall and eventually LAN segments.
|
||||
|
||||
The perimeter approach based its design on visibility and accessibility. If an entity external to the network can’t see an internal resource, then access cannot be gained. As a result, external entities were blocked from coming in, yet internal entities were permitted to passage out. However, it worked only to a certain degree. Realistically, the fixed network perimeter will always be breachable; it's just a matter of time. Someone with enough skill will eventually get through.
|
||||
|
||||
**[ Related:[MPLS explained – What you need to know about multi-protocol label switching][2]**
|
||||
|
||||
### Environmental changes – the cloud and mobile workforce
|
||||
|
||||
Considering the way networks work today and the change in traffic patterns; both internal and to the cloud, this limits the effect of the fixed perimeter. Nowadays, we have a very fluid network perimeter with many points of entry.
|
||||
|
||||
Imagine a castle with a portcullis that was used to gain access. To gain entry into the portcullis was easy as we just needed to pass one guard. There was only one way in and one way out. But today, in this digital world, we have so many small doors and ways to enter, all of which need to be individually protected.
|
||||
|
||||
This boils down to the introduction of cloud-based application services and changing the location of the perimeter. Therefore, the existing networking equipment used for the perimeter is topologically ill-located. Nowadays, everything that is important is outside the perimeter, such as, remote access workers, SaaS, IaaS and PaaS-based applications.
|
||||
|
||||
Users require access to the resources in various cloud services regardless of where the resources are located, resulting in complex-to-control multi-cloud environments. Objectively, the users do not and should not care where the applications are located. They just require access to the application. Also, the increased use of mobile workforce that demands anytime and anywhere access from a variety of devices has challenged the enterprises to support this dynamic workforce.
|
||||
|
||||
There is also an increasing number of devices, such as, BYOD, on-site contractors, and partners that will continue to grow internal to the network. This ultimately leads to a world of over-connected networks.
|
||||
|
||||
### Over-connected networks
|
||||
|
||||
Over-connected networks result in complex configurations of network appliances. This results in large and complex policies without any context.
|
||||
|
||||
They provide a level of coarse-grained access to a variety of services where the IP address does not correspond to the actual user. Traditional appliances that use static configurations to limit the incoming and outgoing traffic are commonly based on information in the IP packet and the port number.
|
||||
|
||||
Essentially, there is no notion of policy and explanation of why a given source IP address is on the list. This approach fails to take into consideration any notion of trust and dynamically adjust access in relation to the device, users and application request events.
|
||||
|
||||
### Problems with IP addresses
|
||||
|
||||
Back in the early 1990s, RFC 1597 declared three IP ranges reserved for private use: 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. If an end host was configured with one of these addresses, it was considered more secure. However, this assumption of trust was shattered with the passage of time and it still haunts us today.
|
||||
|
||||
Network Address Translation (NAT) also changed things to a great extent. NAT allowed internal trusted hosts to communicate directly with the external untrusted hosts. However, since Transmission Control Protocol (TCP) is bidirectional, it allows the data to be injected by the external hosts while connecting back to the internal hosts.
|
||||
|
||||
Also, there is no contextual information regarding the IP addresses as the sole purpose revolved around connectivity. If you have the IP address of someone, you can connect to them. The authentication was handled higher up in the stack.
|
||||
|
||||
Not only do user’s IP addresses change regularly, but there’s also not a one-to-one correspondence between the users and IP addresses. Anyone can communicate from any IP address they please and also insert themselves between you and the trusted resource.
|
||||
|
||||
Have you ever heard of the 20-year old computer that responds to an internet control message protocol (ICMP) request, yet no one knows where it is? But this would not exist on a zero trust network as the network is dark until the administrator turns the lights on with a whitelist policy rule set. This is contrary to the legacy black policy rule set. You can find more information on zero trust in my course: [Zero Trust Networking: The Big Picture][3].
|
||||
|
||||
Therefore, we can’t just rely on the IP addresses and expect them to do much more other than connect. As a result, we have to move away from the IP addresses and network location as the proxy for access trust. The network location can longer be the driver of network access levels. It is not fully equipped to decide the trust of a device, user or application.
|
||||
|
||||
### Visibility – a major gap
|
||||
|
||||
When we analyze networking and its flaws, visibility is a major gap in today’s hybrid environments. By and large, enterprise networks are complex beasts. More than often networking pros do not have accurate data or insights into who or what is accessing the network resource.
|
||||
|
||||
I.T does not have the visibility in place to detect, for example, insecure devices, unauthorized users and potentially harmful connections that could propagate malware or perform data exfiltration.
|
||||
|
||||
Also, once you know how network elements connect, how do you ensure that they don’t reconnect through a broader definition of connectivity? For this, you need contextual visibility. You need full visibility into the network to see who, what, when, and how they are connecting with the device.
|
||||
|
||||
### What’s the workaround?
|
||||
|
||||
A new approach is needed that enables the application owners to protect the infrastructure located in a public or private cloud and on-premise data center. This new network architecture is known as [software-defined perimeter][4] (SDP). Back in 2013, Cloud Security Alliance (CSA) launched the SDP initiative, a project designed to develop the architecture for creating more robust networks.
|
||||
|
||||
The principles behind SDPs are not entirely new. Organizations within the DoD and Intelligence Communities (IC) have implemented a similar network architecture that is based on authentication and authorization prior to network access.
|
||||
|
||||
Typically, every internal resource is hidden behind an appliance. And a user must authenticate before visibility of the authorized services is made available and access is granted.
|
||||
|
||||
### Applying the zero trust framework
|
||||
|
||||
SDP is an extension to [zero trust][5] which removes the implicit trust from the network. The concept of SDP started with Google’s BeyondCorp, which is the general direction that the industry is heading to right now.
|
||||
|
||||
Google’s BeyondCorp puts forward the idea that the corporate network does not have any meaning. The trust regarding accessing an application is set by a static network perimeter containing a central appliance. This appliance permits the inbound and outbound access based on a very coarse policy.
|
||||
|
||||
However, access to the application should be based on other parameters such as who the user is, the judgment of the security stance of the device, followed by some continuous assessment of the session. Rationally, only then should access be permitted.
|
||||
|
||||
Let’s face it, the assumption that internal traffic can be trusted is flawed and zero trust assumes that all hosts internal to the network are internet facing, thereby hostile.
|
||||
|
||||
### What is software-defined perimeter (SDP)?
|
||||
|
||||
The SDP aims to deploy perimeter functionality for dynamically provisioned perimeters meant for clouds, hybrid environments, and on-premise data center infrastructures. There is often a dynamic tunnel that automatically gets created during the session. That is a one-to-one mapping between the requesting entity and the trusted resource. The important point to note here is that perimeters are formed not solely to obey a fixed location already design by the network team.
|
||||
|
||||
SDP relies on two major pillars and these are the authentication and authorization stages. SDPs require endpoints to authenticate and be authorized first before obtaining network access to the protected entities. Then, encrypted connections are created in real-time between the requesting systems and application infrastructure.
|
||||
|
||||
Authenticating and authorizing the users and their devices before even allowing a single packet to reach the target service, enforces what's known as least privilege at the network layer. Essentially, the concept of least privilege is for an entity to be granted only the minimum privileges that it needs to get its work done. Within a zero trust network, privilege is more dynamic than it would be in traditional networks since it uses many different attributes of activity to determine the trust score.
|
||||
|
||||
### The dark network
|
||||
|
||||
Connectivity is based on a need-to-know model. Under this model, no DNS information, internal IP addresses or visible ports of internal network infrastructure are transmitted. This is the reason why SDP assets are considered as “dark”. As a result, SDP isolates any concerns about the network and application. The applications and users are considered abstract, be it on-premise or in the cloud, which becomes irrelevant to the assigned policy.
|
||||
|
||||
Access is granted directly between the users and their devices to the application and resource, regardless of the underlying network infrastructure. There simply is no concept of inside and outside of the network. This ultimately removes the network location point as a position of advantage and also eliminates the excessive implicit trust that IP addresses offer.
|
||||
|
||||
**This article is published as part of the IDG Contributor Network.[Want to Join?][6]**
|
||||
|
||||
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3402258/software-defined-perimeter-sdp-creating-a-new-network-perimeter.html
|
||||
|
||||
作者:[Matt Conran][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Matt-Conran/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/03/sdn_software-defined-network_architecture-100791938-large.jpg
|
||||
[2]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
|
||||
[3]: http://pluralsight.com/courses/zero-trust-networking-big-picture
|
||||
[4]: https://network-insight.net/2018/09/software-defined-perimeter-zero-trust/
|
||||
[5]: https://network-insight.net/2018/10/zero-trust-networking-ztn-want-ghosted/
|
||||
[6]: /contributor-network/signup.html
|
||||
[7]: https://www.facebook.com/NetworkWorld/
|
||||
[8]: https://www.linkedin.com/company/network-world
|
83
sources/talk/20190612 When to use 5G, when to use Wi-Fi 6.md
Normal file
83
sources/talk/20190612 When to use 5G, when to use Wi-Fi 6.md
Normal file
@ -0,0 +1,83 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (When to use 5G, when to use Wi-Fi 6)
|
||||
[#]: via: (https://www.networkworld.com/article/3402316/when-to-use-5g-when-to-use-wi-fi-6.html)
|
||||
[#]: author: (Lee Doyle )
|
||||
|
||||
When to use 5G, when to use Wi-Fi 6
|
||||
======
|
||||
5G is a cellular service, and Wi-Fi 6 is a short-range wireless access technology, and each has attributes that make them useful in specific enterprise roles.
|
||||
![Thinkstock][1]
|
||||
|
||||
We have seen hype about whether [5G][2] cellular or [Wi-Fi 6][3] will win in the enterprise, but the reality is that the two are largely complementary with an overlap for some use cases, which will make for an interesting competitive environment through the early 2020s.
|
||||
|
||||
### The potential for 5G in enterprises
|
||||
|
||||
The promise of 5G for enterprise users is higher speed connectivity with lower latency. Cellular technology uses licensed spectrum which largely eliminates potential interference that may occur with unlicensed Wi-Fi spectrum. Like current 4G LTE technologies, 5G can be supplied by cellular wireless carriers or built as a private network .
|
||||
|
||||
The architecture for 5G requires many more radio access points and can suffer from poor or no connectivity indoors. So, the typical organization needs to assess its [current 4G and potential 5G service][4] for its PCs, routers and other devices. Deploying indoor microcells, repeaters and distributed antennas can help solve indoor 5G service issues. As with 4G, the best enterprise 5G use case is for truly mobile connectivity such as public safety vehicles and in non-carpeted environments like mining, oil and gas extraction, transportation, farming and some manufacturing.

In addition to broad mobility, 5G offers advantages in terms of authentication while roaming and speed of deployment, as might be needed to provide WAN connectivity to a pop-up office or retail site. 5G will have the capacity to offload traffic in cases of data congestion such as live video. As 5G standards mature, the technology will improve its options for low-power IoT connectivity.

5G will gradually roll out over the next four to five years, starting in large cities and specific geographies; 4G technology will remain prevalent for a number of years. Enterprise users will need new devices, dongles and routers to connect to 5G services. For example, Apple iPhones are not expected to support 5G until 2020, and IoT devices will need specific cellular compatibility to connect to 5G.

Doyle Research expects the 1Gbps and higher bandwidth promised by 5G will have a significant impact on the SD-WAN market. 4G LTE already enables cellular services to become a primary WAN link. 5G is likely to be cost competitive with or cheaper than many wired WAN options such as MPLS or the internet. 5G gives enterprise WAN managers more options to provide increased bandwidth to their branch sites and remote users – potentially displacing MPLS over time.

### The potential for Wi-Fi 6 in enterprises

Wi-Fi is nearly ubiquitous for connecting mobile laptops, tablets and other devices to enterprise networks. Wi-Fi 6 (802.11ax) is the latest version of Wi-Fi and brings the promise of increased speed, low latency, improved aggregate bandwidth and advanced traffic management. While it has some similarities with 5G (both are based on orthogonal frequency division multiple access), Wi-Fi 6 is less prone to interference, requires less power (which prolongs device battery life) and has improved spectral efficiency.

**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][5] ]**

As is typical for Wi-Fi, early [vendor-specific versions of Wi-Fi 6][6] are currently available from many manufacturers. The Wi-Fi Alliance plans for certification of Wi-Fi 6-standard gear in 2020. Most enterprises will upgrade to Wi-Fi 6 along standard access-point life cycles of three years or so unless they have specific performance/latency requirements that prompt an upgrade sooner.

Wi-Fi access points continue to be subject to interference, and it can be challenging to design and site APs to provide appropriate coverage. Enterprise LAN managers will continue to need vendor-supplied tools and partners to configure optimal Wi-Fi coverage for their organizations. Wi-Fi 6 solutions must be integrated with wired campus infrastructure. Wi-Fi suppliers need to do a better job of providing unified network management across wireless and wired solutions in the enterprise.

### Need for wired backhaul

For both technologies, wireless is combined with wired-network infrastructure to deliver high-speed communications end-to-end. In the enterprise, Wi-Fi is typically paired with wired Ethernet switches for campus and larger branches. Some devices are connected via cable to the switch, others via Wi-Fi – and laptops may use both methods. Wi-Fi access points are connected via Ethernet inside the enterprise and to the WAN or internet by fiber connections.

The architecture for 5G makes extensive use of fiber optics to connect the distributed radio access network back to the core of the 5G network. Fiber is typically required to provide the high bandwidth needed to connect 5G endpoints to SaaS-based applications, and to provide live video and high-speed internet access. Private 5G networks will also have to meet high-speed wired-connectivity requirements.

### Handoff issues

Enterprise IT managers need to be concerned with handoff challenges as phones switch between 5G and Wi-Fi 6. These issues can affect performance and user satisfaction. Several groups are working towards standards to promote better interoperability between Wi-Fi 6 and 5G. As the architectures of Wi-Fi 6 align with 5G, the experience of moving between cellular and Wi-Fi networks should become more seamless.

### 5G vs Wi-Fi 6 depends on locations, applications and devices

Wi-Fi 6 and 5G are competitive with each other for specific situations in the enterprise environment that depend on location, application and device type. IT managers should carefully evaluate their current and emerging connectivity requirements. Wi-Fi will continue to dominate indoor environments and cellular wins for broad outdoor coverage.
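
The selection criteria above can be summarized as a small decision helper. This is a rough sketch of the article's guidance, not a vendor tool; the function name and the category labels are my own, and real evaluations would weigh many more factors (cost, coverage surveys, device support).

```python
# Hypothetical helper that encodes the rule of thumb described above:
# truly mobile endpoints favor cellular, indoor fixed endpoints favor Wi-Fi,
# and broad outdoor coverage favors cellular.

def recommend_access(location: str, mobility: str) -> str:
    """Return a rough connectivity recommendation for an endpoint.

    location: "indoor" or "outdoor"
    mobility: "fixed", "nomadic" (moves between sites), or "mobile" (in motion)
    """
    if mobility == "mobile":
        # Vehicles and field equipment: cellular (4G today, 5G as it rolls out)
        return "cellular"
    if location == "indoor":
        # Wi-Fi continues to dominate indoor enterprise environments
        return "wi-fi"
    # Broad outdoor coverage is where cellular wins
    return "cellular"

print(recommend_access("indoor", "fixed"))    # wi-fi
print(recommend_access("outdoor", "mobile"))  # cellular
```

Overlap cases such as stadiums or large IoT deployments would land in the gray zone this simple rule cannot capture.
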

Some of the overlap cases occur in stadiums, hospitality and other large event spaces with many users competing for bandwidth. Government applications, including aspects of smart cities, can be applicable to both Wi-Fi and cellular. Health care facilities have many distributed medical devices and users that need connectivity. Large distributed manufacturing environments share similar characteristics. The emerging IoT deployments are perhaps the most interesting “competitive” environment, with many overlapping use cases.

### Recommendations for IT Leaders

While the wireless technologies enabling them are converging, Wi-Fi 6 and 5G are fundamentally distinct networks – both of which have their role in enterprise connectivity. Enterprise IT leaders should focus on how Wi-Fi and cellular can complement each other, with Wi-Fi continuing as the in-building technology to connect PCs and laptops, offload phone and tablet data, and for some IoT connectivity.

4G LTE moving to 5G will remain the truly mobile technology for phone and tablet connectivity, an option (via dongle) for PC connections, and increasingly popular for connecting some IoT devices. 5G WAN links will increasingly become standard as a backup for improved SD-WAN reliability and as primary links for remote offices.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402316/when-to-use-5g-when-to-use-wi-fi-6.html

作者:[Lee Doyle][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/07/wi-fi_wireless_communication_network_abstract_thinkstock_610127984_1200x800-100730107-large.jpg
[2]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[3]: https://www.networkworld.com/article/3215907/why-80211ax-is-the-next-big-thing-in-wi-fi.html
[4]: https://www.networkworld.com/article/3330603/5g-versus-4g-how-speed-latency-and-application-support-differ.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[6]: https://www.networkworld.com/article/3309439/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -0,0 +1,59 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data centers should sell spare UPS capacity to the grid)
[#]: via: (https://www.networkworld.com/article/3402039/data-centers-should-sell-spare-ups-capacity-to-the-grid.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Data centers should sell spare UPS capacity to the grid
======
Distributed Energy is gaining traction, providing an opportunity for data centers to sell excess power in data center UPS batteries to the grid.
![Getty Images][1]

The energy storage capacity in uninterruptible power supply (UPS) batteries, often languishing dormant in data centers, could provide new revenue streams for those data centers, says Eaton, a major electrical power management company.

Excess, grid-generated power, created during times of low demand, should be stored on the now-proliferating lithium-backup power systems strewn worldwide in data centers, Eaton says. Then, using an algorithm tied to grid demand, electricity should be withdrawn as necessary for grid use. It would then be slid back onto the backup batteries when not needed.

**[ Read also:[How server disaggregation can boost data center efficiency][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**

The concept is called Distributed Energy, and it has been gaining traction in part because electrical generation is changing: emerging green power sources, such as wind and solar, now used at the grid level have considerations that differ from the now-retiring fossil-fuel power generation. You can generate solar only in daylight, for example, yet much demand takes place on dark evenings.

Coal, gas, and oil deliveries have always been, to a great extent, pre-planned, just-in-time, and used for electrical generation in real time. Nowadays, though, fluctuations between supply, storage, and demand are kicking in. Electricity storage on the grid is required.

Eaton says that by piggy-backing on existing power banks, electricity distribution could be evened out better. The utilities would deliver power more efficiently, despite the peaks and troughs in demand, with the data center UPS, in effect, acting like a quasi-grid-power storage battery bank, or virtual power plant.

The objective of this UPS use case, called EnergyAware, is to regulate frequency in the grid. That’s related to the tolerances needed to make the grid work: the cycles per second, or hertz, inherent in electrical current can’t deviate too much. Abnormalities happen if there’s a sudden spike in demand but no power on hand to supply the surge.

### How the Distributed Energy concept works

The distributed energy resource (DER), which can be added to any existing lithium-ion battery bank, in any building, allows for the consumption of energy, or the distribution of it, based on a Frequency Regulation grid-demand algorithm. It charges or discharges the backup battery, connected to the grid, thus balancing the grid frequency.
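
The charge/discharge behavior described above can be sketched as a simple controller. This is an illustrative sketch only: the thresholds, the reserve policy and the names are assumptions for illustration, not Eaton's actual EnergyAware algorithm.

```python
# Illustrative frequency-regulation controller. A grid frequency below
# nominal means demand exceeds supply, so the battery discharges a
# micro-burst to the grid; above nominal, it absorbs excess generation
# by charging. A reserve fraction is kept untouched for UPS backup duty.
# All constants here are assumed values, not Eaton's.

NOMINAL_HZ = 60.0      # grid nominal frequency (50.0 in much of the world)
DEADBAND_HZ = 0.02     # ignore tiny deviations within tolerance
UPS_RESERVE = 0.40     # fraction of capacity always held for backup duty

def regulation_action(freq_hz: float, state_of_charge: float) -> str:
    """Decide whether the UPS battery should charge, discharge, or hold."""
    deviation = freq_hz - NOMINAL_HZ
    if abs(deviation) <= DEADBAND_HZ:
        return "hold"
    if deviation < 0 and state_of_charge > UPS_RESERVE:
        return "discharge"   # support the grid during a demand spike
    if deviation > 0 and state_of_charge < 1.0:
        return "charge"      # soak up excess generation
    return "hold"

print(regulation_action(59.95, 0.90))  # discharge
print(regulation_action(60.04, 0.90))  # charge
print(regulation_action(59.95, 0.30))  # hold (UPS reserve protected)
```

The key design point is the reserve check: the battery only trades the capacity the data center does not need for its primary backup role.
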

Often, not much power will need to be removed, just “micro-bursts of energy,” explains Sean James, director of Energy Research at Microsoft, in an Eaton-published promotional video. The Microsoft Innovation Center in Virginia has been working with Eaton on the project. Those bursts are enough to get the frequency tolerances back on track, but the UPS still functions as designed.

Eaton says data centers should start participating in energy markets. That could mean bidding, as a producer of power, to those who need to buy it—the electricity market, also known as the grid. Data centers could conceivably even switch on generators to operate the data halls if the price for their battery-stored power were particularly lucrative at certain times.

“A data center in the future wouldn’t just be a huge load on the grid,” James says. “In the future, you don’t have a data center or a power plant. It’s something in the middle. A data plant,” he says on the Eaton [website][4].

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402039/data-centers-should-sell-spare-ups-capacity-to-the-grid.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/business_continuity_server-100777720-large.jpg
[2]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.eaton.com/us/en-us/products/backup-power-ups-surge-it-power-distribution/backup-power-ups/dual-purpose-ups-technology.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,68 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Oracle updates Exadata at long last with AI and machine learning abilities)
[#]: via: (https://www.networkworld.com/article/3402559/oracle-updates-exadata-at-long-last-with-ai-and-machine-learning-abilities.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Oracle updates Exadata at long last with AI and machine learning abilities
======
Oracle is updating the Oracle Exadata Database Machine X8 server line to include artificial intelligence (AI) and machine learning capabilities, plus support for hybrid cloud.
![Magdalena Petrova][1]

After a rather [long period of silence][2], Oracle announced an update to its server line, the Oracle Exadata Database Machine X8, which features hardware and software enhancements that include artificial intelligence (AI) and machine learning capabilities, as well as support for hybrid cloud.

Oracle acquired a hardware business nine years ago with the purchase of Sun Microsystems. It steadily whittled down the offerings, getting out of the commodity hardware business in favor of high-end mission-critical hardware. Whereas the Exalogic line is more of a general-purpose appliance running Oracle’s own version of Linux, Exadata is a purpose-built database server, and this release brings some substantial upgrades.

The Exadata X8 comes with the latest Intel Xeon Scalable processors and PCIe NVMe flash technology to drive performance improvements: Oracle promises a 60% increase in I/O throughput for all-flash storage and a 25% increase in IOPS per storage server compared to Exadata X7. The X8 offers a 60% performance improvement over the previous generation for analytics, with up to 560GB per second throughput. It can scan a 1TB table in under two seconds.

**[ Also read:[What is quantum computing (and why enterprises should care)][3] ]**

The company also enhanced the storage server to offload Oracle Database processing, and the X8 features 60% more cores and 40% higher capacity disk drives than the X7.

But the real enhancements come on the software side. With Exadata X8, Oracle introduces new machine-learning capabilities, such as Automatic Indexing, which continuously learns and tunes the database as usage patterns change. The indexing technology originated with the Oracle Autonomous Database, the cloud-based software designed to automate management of Oracle databases.

And no, MySQL is not included in the stack. This is for Oracle databases only.

“We’re taking code from Autonomous Database and making it available on prem for our customers,” said Steve Zivanic, vice president for converged infrastructure at Oracle’s Cloud Business Group. “That enables companies, rather than doing manual indexing for various Oracle databases, to automate it with machine learning.”

In one test, Oracle took a 15-year-old NetSuite database with over 9,000 indexes built up over the lifespan of the database, and in 24 hours its AI indexer rebuilt the indexes with just 6,000, reducing storage space and greatly increasing performance of the database, since the number of indexes to search was smaller.

### Performance improvements with Exadata

Zivanic cited several examples of server consolidation done with Exadata but would not identify companies by name. He told of a large healthcare company that achieved a 10-fold performance improvement over IBM Power servers and consolidated 600 Power servers with 50 Exadata systems.

A financial services company replaced 4,000 Dell servers running Red Hat Linux and VMware with 100 Exadata systems running 6,000 production Oracle databases. Not only did it reduce its power footprint, but patching was down 99%. An unnamed retailer with 28 racks of hardware from five vendors went from installing 1,400 patches per year to 16 patches on four Exadata racks.

Because Oracle owns the entire stack, from hardware to OS to middleware and database, Exadata can roll all of its patch components – 640 in all – into a single bundle.

“The trend we’ve noticed is you see these [IT hardware] companies who try to maintain an erector set mentality,” said Zivanic. “And you have people saying why are we trying to build pods? Why don’t we buy finished goods and focus on our core competency rather than build erector sets?”

### Oracle Zero Data Loss Recovery Appliance X8 now available

Oracle also announced the availability of the Oracle Zero Data Loss Recovery Appliance X8, its database backup appliance, which offers up to 10 times faster data recovery of an Oracle Database than conventional data deduplication appliances while providing sub-second recoverability of all transactions.

The new Oracle Recovery Appliance X8 now features 30% larger capacity, nearly a petabyte in a single rack, for the same price, Oracle says.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402559/oracle-updates-exadata-at-long-last-with-ai-and-machine-learning-abilities.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/03/vid-still-79-of-82-100714308-large.jpg
[2]: https://www.networkworld.com/article/3317564/is-oracles-silence-on-its-on-premises-servers-cause-for-concern.html
[3]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@ -0,0 +1,71 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Report: Mirai tries to hook its tentacles into SD-WAN)
[#]: via: (https://www.networkworld.com/article/3403016/report-mirai-tries-to-hook-its-tentacles-into-sd-wan.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Report: Mirai tries to hook its tentacles into SD-WAN
======

Mirai – the software that has hijacked hundreds of thousands of internet-connected devices to launch massive DDoS attacks – now goes beyond recruiting just IoT products; it also includes code that seeks to exploit a vulnerability in corporate SD-WAN gear.

That specific equipment – VMware’s SDX line of SD-WAN appliances – now has an updated software version that fixes the vulnerability, but by targeting it Mirai’s authors show that they now look beyond enlisting security cameras and set-top boxes and seek out any vulnerable connected devices, including enterprise networking gear.

**More about SD-WAN**

  * [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][1]
  * [How to pick an off-site data-backup method][2]
  * [SD-Branch: What it is and why you’ll need it][3]
  * [What are the options for security SD-WAN?][4]

“I assume we’re going to see Mirai just collecting as many devices as it can,” said Jen Miller-Osborn, deputy director of threat research at Palo Alto Networks’ Unit 42, which recently issued [a report][5] about Mirai.

### Exploiting SD-WAN gear is new

While the exploit against the SD-WAN appliances was a departure for Mirai, it doesn’t represent a sea change in the way its authors are approaching their work, according to Miller-Osborn.

The idea, she said, is simply to add any devices to the botnet, regardless of what they are. The fact that SD-WAN devices were targeted is more about those particular devices having a vulnerability than anything to do with their SD-WAN capabilities.

### Responsible disclosure headed off execution of exploits

[The vulnerability][6] itself was discovered last year by independent researchers who responsibly disclosed it to VMware, which then fixed it in a later software version. But the means to exploit the weakness nevertheless is included in a recently discovered new variant of Mirai, according to the Unit 42 report.

The authors behind Mirai periodically update the software to add new targets to the list, according to Unit 42, and the botherders’ original tactic of simply targeting devices running default credentials has given way to a strategy that also exploits vulnerabilities in a wide range of different devices. The updated variant of the malicious software includes a total of eight new-to-Mirai exploits.

**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][7] ]**

The remediated version of the VMware SD-WAN is SD-WAN Edge 3.1.2. The vulnerability still affects SD-WAN Edge 3.1.1 and earlier, [according to a VMware security advisory][8]. After the Unit 42 report came out, VMware posted [a blog][9] saying it is conducting its own investigation into the matter.

Detecting whether a given SD-WAN implementation has been compromised depends heavily on the degree of monitoring in place on the network. Any products that give IT staff the ability to notice unusual traffic to or from an affected appliance could flag that activity. Otherwise, it could be difficult to tell if anything’s wrong, Miller-Osborn said. “You honestly might not notice it unless you start seeing a hit in performance or an outside actor notifies you about it.”
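
A baseline check of that kind can be sketched in a few lines. The appliance traffic figures, function name and threshold below are hypothetical illustrations; real deployments would rely on their monitoring platform's own anomaly detection rather than a hand-rolled script.

```python
# Minimal sketch of flagging traffic that deviates strongly from an
# appliance's historical baseline (a z-score test). Sample data and the
# 3-sigma threshold are assumed values for illustration.

from statistics import mean, stdev

def is_unusual(history_mbps: list, current_mbps: float,
               z_threshold: float = 3.0) -> bool:
    """Flag traffic that deviates strongly from the appliance's baseline."""
    mu = mean(history_mbps)
    sigma = stdev(history_mbps)
    if sigma == 0:
        return current_mbps != mu
    return abs(current_mbps - mu) / sigma > z_threshold

# Hourly outbound traffic samples (Mbps) for a hypothetical edge appliance
baseline = [42.0, 40.5, 43.2, 41.8, 39.9, 42.7, 41.1, 40.2]
print(is_unusual(baseline, 41.5))   # False: within the normal range
print(is_unusual(baseline, 160.0))  # True: worth investigating, e.g. possible DDoS participation
```

A sudden sustained jump in outbound traffic is exactly the symptom a DDoS-participating appliance would show, which is why the quote above stresses performance monitoring.
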

Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403016/report-mirai-tries-to-hook-its-tentacles-into-sd-wan.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[2]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[3]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[4]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[5]: https://unit42.paloaltonetworks.com/new-mirai-variant-adds-8-new-exploits-targets-additional-iot-devices/
[6]: https://www.exploit-db.com/exploits/44959
[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[8]: https://www.vmware.com/security/advisories/VMSA-2018-0011.html
[9]: https://blogs.vmware.com/security/2019/06/vmsa-2018-0011-revisited.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world
@ -0,0 +1,60 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Western Digital launches open-source zettabyte storage initiative)
[#]: via: (https://www.networkworld.com/article/3402318/western-digital-launches-open-source-zettabyte-storage-initiative.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Western Digital launches open-source zettabyte storage initiative
======
Western Digital's Zoned Storage initiative leverages new technology to create more efficient zettabyte-scale data storage for data centers by improving how data is organized when it is stored.
![monsitj / Getty Images][1]

Western Digital has announced a project called the Zoned Storage initiative that leverages new technology to create more efficient zettabyte-scale data storage for data centers by improving how data is organized when it is stored.

As part of this, the company also launched a [developer site][2] that will host open-source, standards-based tools and other resources.

The Zoned Storage architecture is designed for Western Digital hardware and its shingled magnetic recording (SMR) HDDs, which hold up to 15TB of data, as well as the emerging zoned namespaces (ZNS) standard for NVMe SSDs, designed to deliver better endurance and predictability.

**[ Now read:[What is quantum computing (and why enterprises should care)][3] ]**

This initiative is not being retrofitted for non-SMR drives or non-NVMe SSDs. Western Digital estimates that by 2023, half of all its HDD shipments are expected to be SMR. And that capacity will be needed, because IDC predicts data will be generated at a rate of 103 zettabytes a year by 2023.

With this project, Western Digital is targeting cloud and hyperscale providers and anyone building a large data center who has to manage a large amount of data, according to Eddie Ramirez, senior director of product marketing for Western Digital.

Western Digital is changing how data is written and stored, from traditional random 4K block writes to large blocks of sequential data, such as Big Data workloads and video streams, which are rapidly growing in size and use in the digital age.

“We are now looking at a one-size-fits-all architecture that leaves a lot of TCO [total cost of ownership] benefits on the table if you design for a single architecture,” Ramirez said. “We are looking at workloads that don’t rely on small block randomization of data but large block sequential write in nature.”

Because drives use 4K write blocks, that leads to overprovisioning of storage, especially with SSDs. This is true of consumer and enterprise SSDs alike. My 1TB SSD drive has only 930GB available. And that loss scales: an 8TB SSD has only 6.4TB available, according to Ramirez. SSDs also have to be built with DRAM for caching of small block random writes. You need about 1GB of DRAM per 1TB of NAND to act as a buffer, according to Ramirez.
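
Those figures can be turned into a quick worked calculation. The helper names are mine; the ratios come from the article (an 8TB SSD with 6.4TB usable implies roughly 80% usable capacity at the high end, plus about 1GB of DRAM per 1TB of NAND).

```python
# Worked arithmetic from the figures above. The 80% usable ratio is derived
# from the 8TB -> 6.4TB example; the consumer 1TB -> 930GB example implies
# a milder ~93%, so overprovisioning varies by drive class.

def usable_capacity_tb(raw_tb: float, usable_ratio: float = 0.8) -> float:
    """Usable SSD capacity after overprovisioning (8TB -> 6.4TB implies 80%)."""
    return raw_tb * usable_ratio

def dram_needed_gb(raw_nand_tb: float) -> float:
    """Rule of thumb cited by Ramirez: ~1GB of DRAM buffer per 1TB of NAND."""
    return raw_nand_tb * 1.0

print(usable_capacity_tb(8.0))  # 6.4 TB usable out of 8 TB raw
print(dram_needed_gb(8.0))      # 8.0 GB of DRAM for write caching
```
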

### The benefits of Zoned Storage

Zoned Storage allows for 15-20% more storage on an HDD than the traditional storage mechanism. It eliminates the overprovisioning of SSDs, so you get all the NAND flash the drive has, and you need far fewer DRAM chips on an SSD. Additionally, Western Digital promises you will need up to one-eighth as much DRAM to act as a cache in future SSD drives, lowering the cost.
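
The sequential-write constraint behind these savings can be modeled in a few lines. This is a toy illustration of how zoned devices behave; the zone size, class and method names are simplifications of the general SMR/ZNS model, not Western Digital's actual interface.

```python
# Toy model of a zone on an SMR HDD or ZNS SSD: writes may only land at
# the zone's write pointer (sequential appends), and space is reclaimed
# by resetting the whole zone rather than rewriting blocks in place.

class Zone:
    def __init__(self, size_blocks: int):
        self.size = size_blocks
        self.write_pointer = 0   # zones only accept sequential appends
        self.data = []

    def append(self, block: bytes) -> int:
        """Append a block at the write pointer; random writes are not allowed."""
        if self.write_pointer >= self.size:
            raise IOError("zone full: must reset (erase) before rewriting")
        self.data.append(block)
        self.write_pointer += 1
        return self.write_pointer - 1   # logical block address within the zone

    def reset(self) -> None:
        """Whole-zone reset is the only way to reclaim space."""
        self.data.clear()
        self.write_pointer = 0

zone = Zone(size_blocks=4)
for i in range(4):
    zone.append(f"block-{i}".encode())
print(zone.write_pointer)  # 4: the zone is now full
zone.reset()
print(zone.write_pointer)  # 0: ready for a fresh sequential write
```

Because the host writes sequentially and in large blocks, the device no longer needs the spare area and large DRAM buffers that random 4K writes demand, which is where the capacity and DRAM savings come from.
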

Ramirez also said quality of service will improve: not necessarily peak performance, but latency outliers will be managed better.

Western Digital has not disclosed what pricing, if any, is associated with the project. It plans to work with the open-source community, customers, and industry players to help accelerate application development around Zoned Storage through its website.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402318/western-digital-launches-open-source-zettabyte-storage-initiative.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/big_data_center_server_racks_storage_binary_analytics_by_monsitj_gettyimages-951389152_3x2-100787358-large.jpg
[2]: http://ZonedStorage.io
[3]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: (ninifly)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (LazyWolfLin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,94 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Graviton: A Minimalist Open Source Code Editor)
|
||||
[#]: via: (https://itsfoss.com/graviton-code-editor/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
Graviton: A Minimalist Open Source Code Editor
|
||||
======
|
||||
|
||||
[Graviton][1] is a free and open source, cross-platform code editor in development. The sixteen years old developer, Marc Espin, emphasizes that it is a ‘minimalist’ code editor. I am not sure about that but it does have a clean user interface like other [modern code editors like Atom][2].
|
||||
|
||||
![Graviton Code Editor Interface][3]
|
||||
|
||||
The developer also calls it a lightweight code editor despite the fact that Graviton is based on [Electron][4].
|
||||
|
||||
Graviton comes with features you expect in any standard code editors like syntax highlighting, auto-completion etc. Since Graviton is still in the beta phase of development, more features will be added to it in the future releases.
|
||||
|
||||
![Graviton Code Editor with Syntax Highlighting][5]
|
||||
|
||||
### Feature of Graviton code editor
|
||||
|
||||
Some of the main highlights of Graviton features are:
|
||||
|
||||
* Syntax highlighting for a number of programming languages using [CodeMirrorJS][6]
|
||||
* Autocomplete
|
||||
* Support for plugins and themes.
|
||||
* Available in English, Spanish and a few other European languages.
|
||||
* Available for Linux, Windows and macOS.
|
||||
|
||||
|
||||
|
||||
I had a quick look at Graviton and it might not be as feature-rich as [VS Code][7] or [Brackets][8], but for some simple code editing, it’s not a bad tool.
|
||||
|
||||
### Download and install Graviton
|
||||
|
||||
![Graviton Code Editor][9]
|
||||
|
||||
As mentioned earlier, Graviton is a cross-platform code editor available for Linux, Windows and macOS. It is still in beta stages which means that you more features will be added in future and you may encounter some bugs.
|
||||
|
||||
You can find the latest version of Graviton on its release page. Debian and [Ubuntu users can install it from .deb file][10]. [AppImage][11] has been provided so that it could be used in other distributions. DMG and EXE files are also available for macOS and Windows respectively.
|
||||
|
||||
[Download Graviton][12]
|
||||
|
||||
If you are interested, you can find the source code of Graviton on its GitHub repository:
|
||||
|
||||
[Graviton Source Code on GitHub][13]
|
||||
|
||||
If you decided to use Graviton and find some issues, please open a bug report [here][14]. If you use GitHub, you may want to star the Graviton project. This boosts the morale of the developer as he would know that more users are appreciating his efforts.
I believe you know [how to install software from source code][16] if you are taking that path.

**In the end…**

Sometimes, simplicity itself becomes a feature, and Graviton's focus on being minimalist could help it carve a niche for itself in the already crowded segment of code editors.

At It's FOSS, we try to highlight open source software. If you know some interesting open source software that you would like more people to know about, [do send us a note][17].

--------------------------------------------------------------------------------

via: https://itsfoss.com/graviton-code-editor/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://graviton.ml/
[2]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface.jpg?resize=800%2C571&ssl=1
[4]: https://electronjs.org/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface-2.jpg?resize=800%2C522&ssl=1
[6]: https://codemirror.net/
[7]: https://itsfoss.com/install-visual-studio-code-ubuntu/
[8]: https://itsfoss.com/install-brackets-ubuntu/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-800x473.jpg?resize=800%2C473&ssl=1
[10]: https://itsfoss.com/install-deb-files-ubuntu/
[11]: https://itsfoss.com/use-appimage-linux/
[12]: https://github.com/Graviton-Code-Editor/Graviton-App/releases
[13]: https://github.com/Graviton-Code-Editor/Graviton-App
[14]: https://github.com/Graviton-Code-Editor/Graviton-App/issues
[15]: https://itsfoss.com/indicator-stickynotes-windows-like-sticky-note-app-for-ubuntu/
[16]: https://itsfoss.com/install-software-from-source-code/
[17]: https://itsfoss.com/contact-us/
@ -0,0 +1,229 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Neofetch – Display Linux system Information In Terminal)
[#]: via: (https://www.ostechnix.com/neofetch-display-linux-systems-information/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

Neofetch – Display Linux system Information In Terminal
======

![Display Linux system information using Neofetch][1]

**Neofetch** is a simple, yet useful command line system information utility written in **Bash**. It gathers information about your system's software and hardware and displays the result in the Terminal. By default, the system information is displayed alongside your operating system's logo. However, you can further customize it to use an **ascii image** or any image of your choice instead of the OS logo. You can also configure which information Neofetch displays, and where and when it is displayed. Neofetch is mainly developed to be used in screenshots of your system information. It supports Linux, BSD, Mac OS X, iOS, and Windows operating systems. In this brief tutorial, let us see how to display Linux system information using Neofetch.

### Install Neofetch

Neofetch is available in the default repositories of most Linux distributions.

On Arch Linux and its variants, install it using the command:

```
$ sudo pacman -S neofetch
```

On Debian (Stretch / Sid):

```
$ sudo apt-get install neofetch
```

On Fedora 27:

```
$ sudo dnf install neofetch
```

On RHEL, CentOS:

Enable the EPEL repository:

```
# yum install epel-release
```

Fetch the Neofetch repository file:

```
# curl -o /etc/yum.repos.d/konimex-neofetch-epel-7.repo https://copr.fedorainfracloud.org/coprs/konimex/neofetch/repo/epel-7/konimex-neofetch-epel-7.repo
```

Then, install Neofetch:

```
# yum install neofetch
```

On Ubuntu 17.10 and newer versions:

```
$ sudo apt-get install neofetch
```

On Ubuntu 16.10 and lower versions:

```
$ sudo add-apt-repository ppa:dawidd0811/neofetch

$ sudo apt update

$ sudo apt install neofetch
```

On NixOS:

```
$ nix-env -i neofetch
```

### Display Linux system Information Using Neofetch

Neofetch is pretty easy and straightforward. Let us see some examples.

Open up your Terminal, and run the following command:

```
$ neofetch
```

**Sample output:**

![][2]

Display Linux system Information Using Neofetch

As you can see in the above output, Neofetch is displaying the following details of my Arch Linux system:

  * Name of the installed operating system,
  * Laptop model,
  * Kernel details,
  * System uptime,
  * Number of installed packages (by the default and other package managers),
  * Default shell,
  * Screen resolution,
  * Desktop environment,
  * Window manager,
  * Window manager's theme,
  * System theme,
  * System icons,
  * Default terminal,
  * CPU type,
  * GPU type,
  * Installed memory.

Neofetch has plenty of other options too. We will see some of them.

##### How to use custom images in Neofetch output?

By default, Neofetch will display your OS logo along with the system information. You can, of course, change the image as you wish.

In order to display images, your Linux system should have the following dependencies installed:

  1. **w3m-img** (required to display images; w3m-img is sometimes bundled with the **w3m** package),
  2. **ImageMagick** (required for thumbnail creation),
  3. A terminal that supports **\033[14t** or **xdotool** or **xwininfo + xprop** or **xwininfo + xdpyinfo**.

The w3m-img and ImageMagick packages are available in the default repositories of most Linux distributions, so you can install them using your distribution's default package manager.

For instance, run the following command to install w3m-img and ImageMagick on Debian, Ubuntu, Linux Mint:

```
$ sudo apt install w3m-img imagemagick
```

Here is the list of terminal emulators with **w3m-img** support:

  1. Gnome-terminal,
  2. Konsole,
  3. st,
  4. Terminator,
  5. Termite,
  6. URxvt,
  7. Xfce4-Terminal,
  8. Xterm

If you have the **kitty**, **Terminology** or **iTerm** terminal emulators on your system, you don't need to install w3m-img.

Now, run the following command to display your system's information with a custom image:

```
$ neofetch --w3m /home/sk/Pictures/image.png
```

Or,

```
$ neofetch --w3m --source /home/sk/Pictures/image.png
```

Sample output:

![][3]

Neofetch output with custom logo

Replace the image path in the above command with your own.

Alternatively, you can point to a directory that contains the images, like below:

```
$ neofetch --w3m <path-to-directory>
```

##### Configure Neofetch

When you run Neofetch for the first time, it will create a per-user configuration file at **$HOME/.config/neofetch/config.conf** by default. It also creates a system-wide Neofetch config file at **$HOME/.config/neofetch/config**. You can tweak this file to tell Neofetch which details should be displayed, removed and/or modified.

You can also keep this configuration file between versions, meaning you only need to customize it once to your liking and can reuse the same settings after upgrading to a newer version. You can even share this file with your friends and colleagues so they get the same settings as yours.
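As a hedged illustration of what such a tweak can look like (the option names below come from a typical **config.conf** and may differ between Neofetch versions), you can comment out lines in the `print_info()` function to hide details, or flip simple on/off options:

```shell
# Excerpt in the style of ~/.config/neofetch/config.conf (names may vary by version)
print_info() {
    info "OS" distro
    info "Kernel" kernel
    info "Uptime" uptime
    # info "Resolution" resolution   # commented out: this line is no longer shown
    info "Memory" memory
}

memory_percent="on"    # also show memory usage as a percentage
```

Since the config file is sourced by Bash, ordinary shell comments are enough to disable any line.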
To view Neofetch's help section, run:

```
$ neofetch --help
```

As far as I have tested it, Neofetch worked perfectly on my Arch Linux system, as expected. It is a nice, handy tool to easily and quickly print the details of your system in the Terminal.

* * *

**Related read:**

  * [**How to find Linux System details using inxi**][4]

* * *

**Resource:**

  * [**Neofetch on GitHub**][5]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/neofetch-display-linux-systems-information/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2016/06/neofetch-1-720x340.png
[2]: http://www.ostechnix.com/wp-content/uploads/2016/06/Neofetch-1.png
[3]: http://www.ostechnix.com/wp-content/uploads/2016/06/Neofetch-with-custom-logo.png
[4]: https://www.ostechnix.com/how-to-find-your-system-details-using-inxi/
[5]: https://github.com/dylanaraps/neofetch
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco software to make networks smarter, safer, more manageable)
[#]: via: (https://www.networkworld.com/article/3401523/cisco-software-to-make-networks-smarter-safer-more-manageable.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco software to make networks smarter, safer, more manageable
======
Cisco software announced at Cisco Live embraces AI to help customers set consistent network and security policies across their domains and improve intent-based networking.
![bigstock][1]

SAN DIEGO—Cisco injected a number of new technologies into its key networking control-point software that make it easier to stretch networking from the data center to the cloud while making the whole environment smarter and easier to manage.

At the company's annual Cisco Live customer event here, it rolled out software that lets customers more easily meld typically siloed domains across the enterprise and cloud to the wide area network. The software enables what Cisco calls multidomain integration, which lets customers set policies to apply uniform access controls to users, devices and applications regardless of where they connect to the network, the company said.

**More about SD-WAN**

  * [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][2]
  * [How to pick an off-site data-backup method][3]
  * [SD-Branch: What it is and why you'll need it][4]
  * [What are the options for security SD-WAN?][5]

The company also unveiled Cisco AI Network Analytics, a software package that uses [AI and machine learning techniques][6] to learn network traffic and security patterns, which can help customers spot and fix problems proactively across the enterprise.

All of the new software runs on Cisco's DNA Center platform, which is rapidly becoming an ever-more crucial component of the company's intent-based networking plans. DNA Center has been important since its introduction two years ago, as it features automation capabilities, assurance settings, fabric provisioning and policy-based segmentation for enterprise networks.

Beyond device management and configuration, Cisco DNA Center gives IT teams the ability to control access through policies using Software-Defined Access (SD-Access), automatically provision through Cisco DNA Automation, virtualize devices through Cisco Network Functions Virtualization (NFV), and lower security risks through segmentation and Encrypted Traffic Analysis. But experts say these software enhancements take it to a new level.

"You can call it the rise of DNA Center, and it's important because it lets customers manage and control their entire network from one place – similar to what VMware does with its vCenter," said Zeus Kerravala, founder and principal analyst with ZK Research. vCenter is VMware's centralized platform for controlling its vSphere virtualized environments.

"Cisco will likely roll more and more functionality into DNA Center in the future, making it stronger," Kerravala said.

**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][7] ]**

Together, the new software and DNA Center will help customers set consistent policies across their domains and collaborate with others for the benefit of the entire network. Customers can define a policy once, apply it everywhere, and monitor it systematically to ensure it is realizing its business intent, said Prashanth Shenoy, Cisco vice president of marketing for Enterprise Network and Mobility. It will help customers segment their networks to reduce congestion, improve security and compliance, and contain network problems, he said.

"In the campus, Cisco's SD-Access solution uses this technology to group users and devices within the segments it creates according to their access privileges. Similarly, Cisco ACI creates groups of similar applications in the data center," Shenoy said. "When integrated, SD-Access and ACI exchange their groupings and provide each other an awareness into their access policies. With this knowledge, each of the domains can map user groups with applications, jointly enforce policies, and block unauthorized access to applications."

In the Cisco world, this basically means that its central domain network controllers can now be unified, working together to let customers drive policies across domains.

Cisco also said that security capabilities can be spread across domains.

Cisco Advanced Malware Protection (AMP) prevents breaches, monitors malicious behavior, and detects and removes malware. Security constructs built into Cisco SD-WAN, and the recently announced SD-WAN onRamp for CoLocation, provide a full security stack that applies protection consistently from user to branch to clouds. Cisco Stealthwatch and Stealthwatch Cloud detect threats across the private network, public clouds, and in encrypted traffic.

Analysts said Cisco's latest efforts are an attempt to simplify what are fast becoming complex networks with tons of new devices and applications to support.

Cisco's initial efforts were product specific, but its latest announcements cross products and domains, said Lee Doyle, principal analyst with Doyle Research. "Cisco is making a strong push to make its networks easier to use, manage and program."

That same strategy is behind the new AI Analytics program.

"Trying to manually analyze and troubleshoot the traffic flowing through thousands of APs, switches and routers is a near impossible task, even for the most sophisticated NetOps team. In a wireless environment, onboarding and interference errors can crop up randomly and intermittently, making it even more difficult to determine probable causes," said Anand Oswal, senior vice president, engineering for Cisco's Enterprise Networking Business.

Cisco has been integrating AI/ML into many operational and security components, with Cisco DNA Center the focal point for insights and actions, Oswal wrote in a [blog][8] about the AI announcement. AI Network Analytics collects massive amounts of network data from Cisco DNA Centers at participating customer sites, encrypts and anonymizes the data to ensure privacy, and collates all of it into the Cisco Worldwide Data Platform. In this cloud, the aggregated data is analyzed with deep machine learning to reveal patterns and anomalies such as:

  * Highly personalized network baselines with multiple levels of granularity that define "normal" for a given network, site, building and SSID.
  * Sudden changes in onboarding times for Wi-Fi devices, by individual APs, floor, building, campus and branch.
  * Simultaneous connectivity failures with numerous clients at a specific location.
  * Changes in SaaS and Cloud application performance via SD-WAN direct internet connections or [Cloud OnRamps][9].
  * Pattern-matching capabilities of ML will be used to spot anomalies in network behavior that might otherwise be missed.

"The intelligence of its large base of customers can help Cisco to derive important insights about how users can better manage their networks and solve problems, and the power of ML/AI technology will continue to improve over time," Doyle said.

Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3401523/cisco-software-to-make-networks-smarter-safer-more-manageable.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/11/intelligentnetwork-100780636-large.jpg
[2]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[3]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[4]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[5]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[6]: https://www.networkworld.com/article/3400382/cisco-will-use-aiml-to-boost-intent-based-networking.html
[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
[8]: https://blogs.cisco.com/analytics-automation/cisco-ai-network-analytics-making-networks-smarter-simpler-and-more-secure
[9]: https://www.networkworld.com/article/3393232/cisco-boosts-sd-wan-with-multicloud-to-branch-access-system.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world
56
sources/tech/20190611 What is a Linux user.md
Normal file
@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is a Linux user?)
[#]: via: (https://opensource.com/article/19/6/what-linux-user)
[#]: author: (Anderson Silva https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth)

What is a Linux user?
======
The definition of who is a "Linux user" has grown to be a bigger tent, and it's a great change.
![][1]

> _Editor's note: this article was updated on Jun 11, 2019, at 1:15:19 PM to more accurately reflect the author's perspective on an open and inclusive community of practice in the Linux community._

In only two years, the Linux kernel will be 30 years old. Think about that! Where were you in 1991? Were you even born? I was 13! Between 1991 and 1993 a few Linux distributions were created, and at least three of them (Slackware, Debian, and Red Hat) provided the [backbone][2] the Linux movement was built on.

Getting a copy of a Linux distribution and installing and configuring it on a desktop or server was very different back then than it is today. It was hard! It was frustrating! It was an accomplishment if you got it running! We had to fight with incompatible hardware, configuration jumpers on devices, BIOS issues, and many other things. Even if the hardware was compatible, many times you still had to compile the kernel, modules, and drivers to get them to work on your system.

If you were around during those days, you are probably nodding your head. Some readers might even call them the "good old days," because choosing to use Linux meant you had to learn about operating systems, computer architecture, system administration, networking, and even programming, just to keep the OS functioning. I am not one of them though: Linux being a regular part of everyone's technology experience is one of the most amazing changes in our industry!

Almost 30 years later, Linux has gone far beyond the desktop and server. You will find Linux in automobiles, airplanes, appliances, smartphones… virtually everywhere! You can even purchase laptops, desktops, and servers with Linux preinstalled. If you consider cloud computing, where corporations and even individuals can deploy Linux virtual machines with the click of a button, it's clear how widespread the availability of Linux has become.

With all that in mind, my question for you is: **How do you define a "Linux user" today?**

If you buy your parent or grandparent a Linux laptop from System76 or Dell, log them into their social media and email, and tell them to click "update system" every so often, they are now a Linux user. If you did the same with a Windows or MacOS machine, they would be Windows or MacOS users. It's incredible to me that, unlike the '90s, Linux is now a place for anyone and everyone to compute.

In many ways, this is due to the web browser becoming the "killer app" on the desktop computer. Now, many users don't care what operating system they are using as long as they can get to their app or service.

How many people do you know who use their phone, desktop, or laptop regularly but can't manage files, directories, and drivers on their systems? How many can't install a binary that isn't attached to an "app store" of some sort? How about compiling an application from scratch?! For me, it's almost no one. That's the beauty of open source software maturing along with an ecosystem that cares about accessibility.

Today's Linux user is not required to know, study, or even look up information as the Linux user of the '90s or early 2000s did, and that's not a bad thing. The old imagery of Linux being exclusively for bearded men is long gone, and I say good riddance.

There will always be room for a Linux user who is interested, curious, _fascinated_ about computers, operating systems, and the idea of creating, using, and collaborating on free software. There is just as much room for creative open source contributors on Windows and MacOS these days as well. Today, being a Linux user is being anyone with a Linux system. And that's a wonderful thing.

### The change to what it means to be a Linux user

When I started with Linux, being a user meant knowing how the operating system functioned in every way, shape, and form. Linux has matured in a way that allows the definition of "Linux user" to encompass a much broader world of possibility and the people who inhabit it. It may be obvious to say, but it is important to say clearly: anyone who uses Linux is an equal Linux user.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/what-linux-user

作者:[Anderson Silva][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22
[2]: https://en.wikipedia.org/wiki/Linux_distribution#/media/File:Linux_Distribution_Timeline.svg
282
sources/tech/20190612 How to write a loop in Bash.md
Normal file
282
sources/tech/20190612 How to write a loop in Bash.md
Normal file
@ -0,0 +1,282 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to write a loop in Bash)
[#]: via: (https://opensource.com/article/19/6/how-write-loop-bash)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/goncasousa/users/howtopamm/users/howtopamm/users/seth/users/wavesailor/users/seth)

How to write a loop in Bash
======
Automatically perform a set of actions on multiple files with for loops and find commands.
![bash logo on green background][1]

A common reason people want to learn the Unix shell is to unlock the power of batch processing. If you want to perform some set of actions on many files, one of the ways to do that is by constructing a command that iterates over those files. In programming terminology, this is called _execution control,_ and one of the most common examples of it is the **for** loop.

A **for** loop is a recipe detailing what actions you want your computer to take _for_ each data object (such as a file) you specify.

### The classic for loop

An easy loop to try is one that analyzes a collection of files. This probably isn't a useful loop on its own, but it's a safe way to prove to yourself that you have the ability to handle each file in a directory individually. First, create a simple test environment by creating a directory and placing some copies of some files into it. Any file will do initially, but later examples require graphic files (such as JPEG, PNG, or similar). You can create the folder and copy files into it using a file manager or in the terminal:

```
$ mkdir example
$ cp ~/Pictures/vacation/*.{png,jpg} example
```

Change directory to your new folder, then list the files in it to confirm that your test environment is what you expect:

```
$ cd example
$ ls -1
cat.jpg
design_maori.png
otago.jpg
waterfall.png
```

The syntax to loop through each file individually in a loop is: create a variable (**f** for file, for example). Then define the data set you want the variable to cycle through. In this case, cycle through all files in the current directory using the `*` wildcard character (the `*` wildcard matches _everything_). Then terminate this introductory clause with a semicolon (**;**).

```
$ for f in * ;
```

Depending on your preference, you can choose to press **Return** here. The shell won't try to execute the loop until it is syntactically complete.

Next, define what you want to happen with each iteration of the loop. For simplicity, use the **file** command to get a little bit of data about each file, represented by the **f** variable (but prepended with a **$** to tell the shell to swap out the value of the variable for whatever the variable currently contains):

```
do file $f ;
```

Terminate the clause with another semicolon and close the loop:

```
done
```

Press **Return** to start the shell cycling through _everything_ in the current directory. The **for** loop assigns each file, one by one, to the variable **f** and runs your command:

```
$ for f in * ; do
> file $f ;
> done
cat.jpg: JPEG image data, EXIF standard 2.2
design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
otago.jpg: JPEG image data, EXIF standard 2.2
waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
```

You can also write it this way:

```
$ for f in *; do file $f; done
cat.jpg: JPEG image data, EXIF standard 2.2
design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
otago.jpg: JPEG image data, EXIF standard 2.2
waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
```

Both the multi-line and single-line formats are the same to your shell and produce the exact same results.
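One caveat the examples above gloss over: if a filename contains a space, an unquoted **$f** is split into separate words before the command runs. Quoting the variable as `"$f"` keeps each filename intact, so it's a good habit in every loop. A self-contained demonstration (using a scratch directory so it won't touch your files):

```shell
# Create a scratch directory containing a filename with a space in it
mkdir -p quoting-demo
touch "quoting-demo/my photo.jpg"

# Quoting "$f" passes the whole filename to each command as one argument
for f in quoting-demo/* ; do
    ls "$f" > /dev/null && echo "handled: $f"
done
# prints: handled: quoting-demo/my photo.jpg
```

Without the quotes, `ls` would receive `quoting-demo/my` and `photo.jpg` as two separate arguments and fail on both.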
|
||||
|
||||
### A practical example
|
||||
|
||||
Here's a practical example of how a loop can be useful for everyday computing. Assume you have a collection of vacation photos you want to send to friends. Your photo files are huge, making them too large to email and inconvenient to upload to your [photo-sharing service][2]. You want to create smaller web-versions of your photos, but you have 100 photos and don't want to spend the time reducing each photo, one by one.
|
||||
|
||||
First, install the **ImageMagick** command using your package manager on Linux, BSD, or Mac. For instance, on Fedora and RHEL:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo dnf install ImageMagick`
|
||||
```
|
||||
|
||||
On Ubuntu or Debian:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt install ImageMagick`
|
||||
```
|
||||
|
||||
On BSD, use **ports** or [pkgsrc][3]. On Mac, use [Homebrew][4] or [MacPorts][5].
|
||||
|
||||
Once you install ImageMagick, you have a set of new commands to operate on photos.
|
||||
|
||||
Create a destination directory for the files you're about to create:
|
||||
|
||||
|
||||
```
|
||||
`$ mkdir tmp`
|
||||
```
|
||||
|
||||
To reduce each photo to 33% of its original size, try this loop:
|
||||
|
||||
|
||||
```
|
||||
$ for f in * ; do convert $f -scale 33% tmp/$f ; done
|
||||
```
|
||||
|
||||
Then look in the **tmp** folder to see your scaled photos.
|
||||
|
||||
You can use any number of commands within a loop, so if you need to perform complex actions on a batch of files, you can place your whole workflow between the **do** and **done** statements of a **for** loop. For example, suppose you want to copy each processed photo straight to a shared photo directory on your web host and remove the photo file from your local system:
|
||||
|
||||
|
||||
```
|
||||
$ for f in * ; do
|
||||
convert $f -scale 33% tmp/$f
|
||||
scp -i seth_web tmp/$f seth@example.com:~/public_html
|
||||
trash tmp/$f ;
|
||||
done
|
||||
```
|
||||
|
||||
For each file processed by the **for** loop, your computer automatically runs three commands. This means if you process just 10 photos this way, you save yourself 30 commands and probably at least as many minutes.
|
||||
|
||||
### Limiting your loop
|
||||
|
||||
A loop doesn't always have to look at every file. You might want to process only the JPEG files in your example directory:
|
||||
|
||||
|
||||
```
|
||||
$ for f in *.jpg ; do convert $f -scale 33% tmp/$f ; done
|
||||
$ ls -m tmp
|
||||
cat.jpg, otago.jpg
|
||||
```
|
||||
|
||||
Or, instead of processing files, you may need to repeat an action a specific number of times. A **for** loop's variable is defined by whatever data you provide it, so you can create a loop that iterates over numbers instead of files:
|
||||
|
||||
|
||||
```
|
||||
$ for n in {0..4}; do echo $n ; done
|
||||
0
|
||||
1
|
||||
2
|
||||
3
|
||||
4
|
||||
```
|
||||
|
||||
### More looping
|
||||
|
||||
You now know enough to create your own loops. Until you're comfortable with looping, use them on _copies_ of the files you want to process and, as often as possible, use commands with built-in safeguards to prevent you from clobbering your data and making irreparable mistakes, like accidentally renaming an entire directory of files to the same name, each overwriting the other.
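For example, a cautious practice loop might look like this (the file names are invented for illustration; quoting **"$f"** protects file names containing spaces, and **cp -n** refuses to overwrite an existing destination):

```shell
# Practice safely on throwaway copies: quote the loop variable and
# use cp's no-clobber flag so nothing can be silently overwritten.
mkdir -p practice && cd practice
touch "red flower.jpg" "blue sky.jpg"   # sample files with spaces in the names
mkdir -p backup
for f in *.jpg ; do cp -n -- "$f" "backup/$f" ; done
ls backup
```

Because each destination name is derived from the source name, no two copies can collide, and the **-n** flag stops the loop from clobbering anything that already exists.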
|
||||
|
||||
For advanced **for** loop topics, read on.
|
||||
|
||||
### Not all shells are Bash
|
||||
|
||||
The **for** keyword is built into the Bash shell. Many similar shells use the same keyword and syntax, but some shells, like [tcsh][7], use a different keyword, like **foreach**, instead.
|
||||
|
||||
In tcsh, the syntax is similar in spirit but stricter than Bash's. In the following code sample, do not type the string **foreach?** in lines 2 and 3. It is a secondary prompt alerting you that you are still in the process of building your loop.
|
||||
|
||||
|
||||
```
|
||||
$ foreach f (*)
|
||||
foreach? file $f
|
||||
foreach? end
|
||||
cat.jpg: JPEG image data, EXIF standard 2.2
|
||||
design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
|
||||
otago.jpg: JPEG image data, EXIF standard 2.2
|
||||
waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
|
||||
```
|
||||
|
||||
In tcsh, both **foreach** and **end** must appear alone on separate lines, so you cannot create a **for** loop on one line as you can with Bash and similar shells.
|
||||
|
||||
### For loops with the find command
|
||||
|
||||
In theory, you could find a shell that doesn't provide a **for** loop function, or you may just prefer to use a different command with added features.
|
||||
|
||||
The **find** command is another way to implement the functionality of a **for** loop, as it offers several ways to define the scope of which files to include in your loop as well as options for [parallel][8] processing.
|
||||
|
||||
The **find** command is meant to help you find files on your hard drives. Its syntax is simple: you provide the path of the location you want to search, and **find** finds all files and directories:
|
||||
|
||||
|
||||
```
|
||||
$ find .
|
||||
.
|
||||
./cat.jpg
|
||||
./design_maori.png
|
||||
./otago.jpg
|
||||
./waterfall.png
|
||||
```
|
||||
|
||||
You can filter the search results by adding some portion of the name:
|
||||
|
||||
|
||||
```
|
||||
$ find . -name "*jpg"
|
||||
./cat.jpg
|
||||
./otago.jpg
|
||||
```
|
||||
|
||||
The great thing about **find** is that each file it finds can be fed into a loop using the **-exec** flag. For instance, to scale down only the PNG photos in your example directory:
|
||||
|
||||
|
||||
```
|
||||
$ find . -name "*png" -exec convert {} -scale 33% tmp/{} \;
|
||||
$ ls -m tmp
|
||||
design_maori.png, waterfall.png
|
||||
```
|
||||
|
||||
In the **-exec** clause, the brace characters **{}** stand in for whatever item **find** is processing (in other words, any file ending in PNG that has been located, one at a time). The **-exec** clause must be terminated with a semicolon, but Bash usually tries to use the semicolon for itself. You "escape" the semicolon with a backslash (**\;**) so that **find** knows to treat that semicolon as its terminating character.
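You can watch the terminator in action with a harmless command; this sketch just echoes each file name that **find** hands to **-exec**, one invocation per matched file (the sample files are created only for demonstration):

```shell
mkdir -p exec-demo && cd exec-demo
touch a.png b.png
# echo runs once per matched file; \; marks where the -exec command ends
find . -maxdepth 1 -name "*.png" -exec echo processing {} \;
```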
|
||||
|
||||
The **find** command is very good at what it does, and it can be too good sometimes. For instance, if you reuse it to find PNG files for another photo process, you will get a few errors:
|
||||
|
||||
|
||||
```
|
||||
$ find . -name "*png" -exec convert {} -flip -flop tmp/{} \;
|
||||
convert: unable to open image `tmp/./tmp/design_maori.png':
|
||||
No such file or directory @ error/blob.c/OpenBlob/2643.
|
||||
...
|
||||
```
|
||||
|
||||
It seems that **find** has located all the PNG files—not only the ones in your current directory (**.**) but also those that you processed before and placed in your **tmp** subdirectory. In some cases, you may want **find** to search the current directory plus all other directories within it (and all directories in _those_). It can be a powerful recursive processing tool, especially in complex file structures (like directories of music artists containing directories of albums filled with music files), but you can limit this with the **-maxdepth** option.
|
||||
|
||||
To find only PNG files in the current directory (excluding subdirectories):
|
||||
|
||||
|
||||
```
|
||||
$ find . -maxdepth 1 -name "*png"
|
||||
```
|
||||
|
||||
To find and process files in the current directory plus an additional level of subdirectories, increment the maximum depth by 1:
|
||||
|
||||
|
||||
```
|
||||
$ find . -maxdepth 2 -name "*png"
|
||||
```
|
||||
|
||||
Its default is to descend into all subdirectories.
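The depth behavior is easy to demonstrate with a throwaway directory tree (the names here are invented for illustration):

```shell
mkdir -p demo/albums
touch demo/cover.png demo/albums/track_art.png
find demo -name "*.png"              # descends by default: finds both files
find demo -maxdepth 1 -name "*.png"  # stays at the top level: finds only cover.png
```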
|
||||
|
||||
### Looping for fun and profit
|
||||
|
||||
The more you use loops, the more time and effort you save, and the bigger the tasks you can tackle. You're just one user, but with a well-thought-out loop, you can make your computer do the hard work.
|
||||
|
||||
You can and should treat looping like any other command, keeping it close at hand for when you need to repeat a single action or two on several files. However, it's also a legitimate gateway to serious programming, so if you have to accomplish a complex task on any number of files, take a moment out of your day to plan out your workflow. If you can achieve your goal on one file, then wrapping that repeatable process in a **for** loop is relatively simple, and the only "programming" required is an understanding of how variables work and enough organization to separate unprocessed from processed files. With a little practice, you can move from a Linux user to a Linux user who knows how to write a loop, so get out there and make your computer work for you!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/6/how-write-loop-bash
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth/users/goncasousa/users/howtopamm/users/howtopamm/users/seth/users/wavesailor/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
|
||||
[2]: http://nextcloud.com
|
||||
[3]: http://pkgsrc.org
|
||||
[4]: http://brew.sh
|
||||
[5]: https://www.macports.org
|
||||
[6]: mailto:seth@example.com
|
||||
[7]: https://en.wikipedia.org/wiki/Tcsh
|
||||
[8]: https://opensource.com/article/18/5/gnu-parallel
|
284
sources/tech/20190612 The bits and bytes of PKI.md
Normal file
@ -0,0 +1,284 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The bits and bytes of PKI)
|
||||
[#]: via: (https://opensource.com/article/19/6/bits-and-bytes-pki)
|
||||
[#]: author: (Alex Wood https://opensource.com/users/awood)
|
||||
|
||||
The bits and bytes of PKI
|
||||
======
|
||||
Take a look under the public key infrastructure's hood to get a better
|
||||
understanding of its format.
|
||||
![Computer keyboard typing][1]
|
||||
|
||||
In two previous articles— _[An introduction to cryptography and public key infrastructure][2]_ and _[How do private keys work in PKI and cryptography?][3]_ —I discussed cryptography and public key infrastructure (PKI) in a general way. I talked about how digital bundles called _certificates_ store public keys and identifying information. These bundles contain a lot of complexity, and it's useful to have a basic understanding of the format for when you need to look under the hood.
|
||||
|
||||
### Abstract art
|
||||
|
||||
Keys, certificate signing requests, certificates, and other PKI artifacts define themselves in a data description language called [Abstract Syntax Notation One][4] (ASN.1). ASN.1 defines a series of simple data types (integers, strings, dates, etc.) along with some structured types (sequences, sets). By using those types as building blocks, we can create surprisingly complex data formats.
|
||||
|
||||
ASN.1 contains plenty of pitfalls for the unwary, however. For example, it has two different ways of representing dates: GeneralizedTime ([ISO 8601][5] format) and UTCTime (which uses a two-digit year). Strings introduce even more confusion. We have IA5String for ASCII strings and UTF8String for Unicode strings. ASN.1 also defines several other string types, from the exotic [T61String][6] and [TeletexString][7] to the more innocuous sounding—but probably not what you wanted—PrintableString (only a small subset of ASCII) and UniversalString (encoded in [UTF-32][8]). If you're writing or reading ASN.1 data, I recommend referencing the [specification][9].
|
||||
|
||||
ASN.1 has another data type worth special mention: the object identifier (OID). OIDs are a series of integers. Commonly they are shown with periods delimiting them. Each integer represents a node in what is basically a "tree of things." For example, [1.3.6.1.4.1.2312][10] is the OID for my employer, Red Hat, where "1" is the node for the International Organization for Standardization (ISO), "3" is for ISO-identified organizations, "6" is for the US Department of Defense (which, for historical reasons, is the parent to the next node), "1" is for the internet, "4" is for private organizations, "1" is for enterprises, and finally "2312," which is Red Hat's own.
|
||||
|
||||
OIDs are also commonly used to identify specific algorithms in PKI objects. If you have a digital signature, it's not much use if you don't know what type of signature it is. The signature algorithm "sha256WithRSAEncryption" has the OID "1.2.840.113549.1.1.11," for example.
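If you have OpenSSL installed, you can watch it translate that OID into a name. The **-genstr** option of **asn1parse** builds a small DER object from a string and parses it back (a quick sketch; the exact output formatting may vary between OpenSSL versions):

```shell
# Encode the OID as a DER OBJECT, then let asn1parse translate it back.
openssl asn1parse -genstr "OID:1.2.840.113549.1.1.11"
```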
|
||||
|
||||
### ASN.1 at work
|
||||
|
||||
Suppose we own a factory that produces flying brooms, and we need to store some data about every broom. Our brooms have a model name, a serial number, and a series of inspections that have been made to ensure flight-worthiness. We could store this information using ASN.1 like so:
|
||||
|
||||
|
||||
```
|
||||
BroomInfo ::= SEQUENCE {
|
||||
model UTF8String,
|
||||
serialNumber INTEGER,
|
||||
inspections SEQUENCE OF InspectionInfo
|
||||
}
|
||||
|
||||
InspectionInfo ::= SEQUENCE {
|
||||
inspectorName UTF8String,
|
||||
inspectionDate GeneralizedTime
|
||||
}
|
||||
```
|
||||
|
||||
The example above defines the model name as a UTF8-encoded string, the serial number as an integer, and our inspections as a series of InspectionInfo items. Then we see that each InspectionInfo item comprises two pieces of data: the inspector's name and the time of the inspection.
|
||||
|
||||
An actual instance of BroomInfo data would look something like this in ASN.1's value assignment syntax:
|
||||
|
||||
|
||||
```
|
||||
broom BroomInfo ::= {
|
||||
model "Nimbus 2000",
|
||||
serialNumber 1066,
|
||||
inspections {
|
||||
{
|
||||
inspectorName "Harry",
|
||||
inspectionDate "201901011200Z"
|
||||
}
|
||||
{
|
||||
inspectorName "Hagrid",
|
||||
inspectionDate "201902011200Z"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Don't worry too much about the particulars of the syntax; for the average developer, having a basic grasp of how the pieces fit together is sufficient.
|
||||
|
||||
Now let's look at a real example from [RFC 8017][11] that I have abbreviated somewhat for clarity:
|
||||
|
||||
|
||||
```
|
||||
RSAPrivateKey ::= SEQUENCE {
|
||||
version Version,
|
||||
modulus INTEGER, -- n
|
||||
publicExponent INTEGER, -- e
|
||||
privateExponent INTEGER, -- d
|
||||
prime1 INTEGER, -- p
|
||||
prime2 INTEGER, -- q
|
||||
exponent1 INTEGER, -- d mod (p-1)
|
||||
exponent2 INTEGER, -- d mod (q-1)
|
||||
coefficient INTEGER, -- (inverse of q) mod p
|
||||
otherPrimeInfos OtherPrimeInfos OPTIONAL
|
||||
}
|
||||
|
||||
Version ::= INTEGER { two-prime(0), multi(1) }
|
||||
(CONSTRAINED BY
|
||||
{-- version must be multi if otherPrimeInfos present --})
|
||||
|
||||
OtherPrimeInfos ::= SEQUENCE SIZE(1..MAX) OF OtherPrimeInfo
|
||||
|
||||
OtherPrimeInfo ::= SEQUENCE {
|
||||
prime INTEGER, -- ri
|
||||
exponent INTEGER, -- di
|
||||
coefficient INTEGER -- ti
|
||||
}
|
||||
```
|
||||
|
||||
The ASN.1 above defines the PKCS #1 format used to store RSA keys. Looking at this, we can see the RSAPrivateKey sequence starts with a version type (either 0 or 1) followed by a bunch of integers and then an optional type called OtherPrimeInfos. The OtherPrimeInfos sequence contains one or more pieces of OtherPrimeInfo. And each OtherPrimeInfo is just a sequence of integers.
|
||||
|
||||
Let's look at an actual instance by asking OpenSSL to generate an RSA key and then piping it into [asn1parse][12], which will print it out in a more human-friendly format. (By the way, the **genrsa** command I'm using here has been superseded by **genpkey**; we'll see why a little later.)
|
||||
|
||||
|
||||
```
|
||||
% openssl genrsa 4096 2> /dev/null | openssl asn1parse
|
||||
0:d=0 hl=4 l=2344 cons: SEQUENCE
|
||||
4:d=1 hl=2 l= 1 prim: INTEGER :00
|
||||
7:d=1 hl=4 l= 513 prim: INTEGER :B80B0C2443...
|
||||
524:d=1 hl=2 l= 3 prim: INTEGER :010001
|
||||
529:d=1 hl=4 l= 512 prim: INTEGER :59C609C626...
|
||||
1045:d=1 hl=4 l= 257 prim: INTEGER :E8FC43002D...
|
||||
1306:d=1 hl=4 l= 257 prim: INTEGER :CA39222DD2...
|
||||
1567:d=1 hl=4 l= 256 prim: INTEGER :25F6CD181F...
|
||||
1827:d=1 hl=4 l= 256 prim: INTEGER :38CCE374CB...
|
||||
2087:d=1 hl=4 l= 257 prim: INTEGER :C80430E810...
|
||||
```
|
||||
|
||||
Recall that RSA uses a modulus, _n_; a public exponent, _e_; and a private exponent, _d_. Now let's look at the sequence. First, we see the version set to 0 for a two-prime RSA key (what **genrsa** generates), an integer for the modulus, _n_, and then 0x010001 for the public exponent, _e_. If we convert to decimal, we'll see our public exponent is 65537, a number [commonly][13] used as an RSA public exponent. Following the public exponent, we see the integer for the private exponent, _d_, and then some other integers that are used to speed up decryption and signing. Explaining how this optimization works is beyond the scope of this article, but if you like math, there's a [good video on the subject][14].
|
||||
|
||||
What about that other stuff on the left side of the output? What do "hl=4" and "l=513" mean? We'll cover that shortly.
|
||||
|
||||
### DERangement
|
||||
|
||||
We've seen the "abstract" part of Abstract Syntax Notation One, but how does this data get encoded and stored? For that, we turn to a binary format called Distinguished Encoding Rules (DER) defined in the [X.690][15] specification. DER is a stricter version of its parent, Basic Encoding Rules (BER), in that for any given data, there is only one way to encode it. If we're going to be digitally signing data, it makes things a lot easier if there is only one possible encoding that needs to be signed instead of dozens of functionally equivalent representations.
|
||||
|
||||
DER uses a [tag-length-value][16] (TLV) structure. The encoding of a piece of data begins with an identifier octet defining the data's type. ("Octet" is used rather than "byte" since the standard is very old and some early architectures didn't use 8 bits for a byte.) Next are the octets that encode the length of the data, and finally, there is the data. The data can be another TLV series. The left side of the **asn1parse** output makes a little more sense now. The first number indicates the absolute offset from the beginning. The "d=" tells us the depth of that item in the structure. The first line is a sequence, which we descend into on the next line (the depth _d_ goes from 0 to 1) whereupon **asn1parse** begins enumerating all the elements in that sequence. The "hl=" is the header length (the sum of the identifier and length octets), and the "l=" tells us the length of that particular piece of data.
|
||||
|
||||
How is header length determined? It's the sum of the identifier byte and the bytes encoding the length. In our example, the top sequence is 2344 octets long. If it were less than 128 octets, the length would be encoded in a single octet in the "short form": bit 8 would be a zero and bits 7 to 1 would hold the length value (2^7 - 1 = 127). A value of 2344 needs more space, so the "long" form is used. The first octet has bit 8 set to one, and bits 7 to 1 contain the length of the length. In our case, a value of 2344 can be encoded in two octets (0x0928). Combined with the first "length of the length" octet, we have three octets total. Add the one identifier octet, and that gives us our total header length of four.
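You can check that arithmetic yourself in a shell; this small sketch builds the long-form encoding for a content length of 2344:

```shell
# 2344 in hex is 0x0928, which needs two octets to store.
printf '%X\n' 2344
# The first length octet sets bit 8 and stores the count of length
# octets (2) in the low bits: 0x80 | 2 = 0x82. The full length encoding:
printf '0x%02X 0x%02X 0x%02X\n' $(( 0x80 | 2 )) 0x09 0x28
```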
|
||||
|
||||
As a side exercise, let's consider the largest value we could possibly encode. We've seen that we have up to 127 octets to encode a length. At 8 bits per octet, we have a total of 1008 bits to use, so we can hold a number equal to 2^1008 - 1. That would equate to a content length of roughly 2.743062×10^279 yottabytes, staggeringly more than the estimated 10^80 atoms in the observable universe. If you're interested in all the details, I recommend reading "[A Layman's Guide to a Subset of ASN.1, BER, and DER][17]."
|
||||
|
||||
What about "cons" and "prim"? Those indicate whether the value is encoded with "constructed" or "primitive" encoding. Primitive encoding is used for simple types like "INTEGER" or "BOOLEAN," while constructed encoding is used for structured types like "SEQUENCE" or "SET." The actual difference between the two encoding methods is whether bit 6 in the identifier octet is a zero or one. If it's a one, the parser knows that the content octets are also DER-encoded and it can descend.
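You can see bit 6 at work by masking two well-known identifier octets: the universal tag for SEQUENCE is 0x30, and for INTEGER it is 0x02 (a small illustrative sketch):

```shell
# Mask 0x20 isolates bit 6: non-zero means constructed, zero means primitive.
printf 'SEQUENCE: %d\n' $(( 0x30 & 0x20 ))
printf 'INTEGER:  %d\n' $(( 0x02 & 0x20 ))
```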
|
||||
|
||||
### PEM pals
|
||||
|
||||
While useful in a lot of cases, a binary format won't pass muster if we need to display the data as text. Before the [MIME][18] standard existed, attachment support was spotty. Commonly, if you wanted to attach data, you put it in the body of the email, and since SMTP only supported ASCII, that meant converting your binary data (like the DER of your public key, for example) into ASCII characters.
|
||||
|
||||
Thus, the PEM format emerged. PEM stands for "Privacy-Enhanced Mail" and was an early standard for transmitting and storing PKI data. The standard never caught on, but the format it defined for storage did. PEM-encoded objects are just DER objects that are [base64][19]-encoded and wrapped at 64 characters per line. To describe the type of object, a header and footer surround the base64 string. You'll see **-----BEGIN CERTIFICATE-----** or **-----BEGIN PRIVATE KEY-----**, for example.
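You can verify that relationship yourself: strip the header and footer, base64-decode the body, and the result is DER that **asn1parse** reads directly (a sketch that assumes **openssl** and a **base64** utility with a **-d** flag are available; the file names are arbitrary):

```shell
# Generate a throwaway key in PEM, then decode its body by hand.
openssl genpkey -algorithm RSA -out key.pem 2>/dev/null
grep -v -- "-----" key.pem | base64 -d > key.der
openssl asn1parse -inform DER -in key.der | head -n 2
```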
|
||||
|
||||
Often you'll see files with the ".pem" extension. I don't find this suffix useful. The file could contain a certificate, a key, a certificate signing request, or several other possibilities. Imagine going to a sushi restaurant and seeing a menu that described every item as "fish and rice"! Instead, I prefer more informative extensions like ".crt", ".key", and ".csr".
|
||||
|
||||
### The PKCS zoo
|
||||
|
||||
Earlier, I showed an example of a PKCS #1-formatted RSA key. As you might expect, formats for storing certificates and signing requests also exist in various IETF RFCs. For example, PKCS #8 can be used to store private keys for many different algorithms (including RSA!). Here's some of the ASN.1 from [RFC 5208][20] for PKCS #8. (RFC 5208 has been obsoleted by RFC 5958, but I feel that the ASN.1 in RFC 5208 is easier to understand.)
|
||||
|
||||
|
||||
```
|
||||
PrivateKeyInfo ::= SEQUENCE {
|
||||
version Version,
|
||||
privateKeyAlgorithm PrivateKeyAlgorithmIdentifier,
|
||||
privateKey PrivateKey,
|
||||
attributes [0] IMPLICIT Attributes OPTIONAL }
|
||||
|
||||
Version ::= INTEGER
|
||||
|
||||
PrivateKeyAlgorithmIdentifier ::= AlgorithmIdentifier
|
||||
|
||||
PrivateKey ::= OCTET STRING
|
||||
|
||||
Attributes ::= SET OF Attribute
|
||||
```
|
||||
|
||||
If you store your RSA private key in a PKCS #8, the PrivateKey element will actually be a DER-encoded PKCS #1! Let's prove it. Remember earlier when I used **genrsa** to generate a PKCS #1? OpenSSL can generate a PKCS #8 with the **genpkey** command, and you can specify RSA as the algorithm to use.
|
||||
|
||||
|
||||
```
|
||||
% openssl genpkey -algorithm RSA | openssl asn1parse
|
||||
0:d=0 hl=4 l= 629 cons: SEQUENCE
|
||||
4:d=1 hl=2 l= 1 prim: INTEGER :00
|
||||
7:d=1 hl=2 l= 13 cons: SEQUENCE
|
||||
9:d=2 hl=2 l= 9 prim: OBJECT :rsaEncryption
|
||||
20:d=2 hl=2 l= 0 prim: NULL
|
||||
22:d=1 hl=4 l= 607 prim: OCTET STRING [HEX DUMP]:3082025B...
|
||||
```
|
||||
|
||||
You may have spotted the "OBJECT" in the output and guessed that was related to OIDs. You'd be correct. The OID "1.2.840.113549.1.1.1" is assigned to RSA encryption. OpenSSL has a built-in list of common OIDs and translates them into a human-readable form for you.
|
||||
|
||||
|
||||
```
|
||||
% openssl genpkey -algorithm RSA | openssl asn1parse -strparse 22
|
||||
0:d=0 hl=4 l= 604 cons: SEQUENCE
|
||||
4:d=1 hl=2 l= 1 prim: INTEGER :00
|
||||
7:d=1 hl=3 l= 129 prim: INTEGER :CA6720E706...
|
||||
139:d=1 hl=2 l= 3 prim: INTEGER :010001
|
||||
144:d=1 hl=3 l= 128 prim: INTEGER :05D0BEBE44...
|
||||
275:d=1 hl=2 l= 65 prim: INTEGER :F215DC6B77...
|
||||
342:d=1 hl=2 l= 65 prim: INTEGER :D6095CED7E...
|
||||
409:d=1 hl=2 l= 64 prim: INTEGER :402C7562F3...
|
||||
475:d=1 hl=2 l= 64 prim: INTEGER :06D0097B2D...
|
||||
541:d=1 hl=2 l= 65 prim: INTEGER :AB266E8E51...
|
||||
```
|
||||
|
||||
In the second command, I've told **asn1parse** via the **-strparse** argument to move to octet 22 and begin parsing the content's octets there as an ASN.1 object. We can clearly see that the PKCS #8's PrivateKey looks just like the PKCS #1 that we examined earlier.
|
||||
|
||||
You should favor using the **genpkey** command. PKCS #8 has some features that PKCS #1 does not: PKCS #8 can store private keys for multiple different algorithms (PKCS #1 is RSA-specific), and it provides a mechanism to encrypt the private key using a passphrase and a symmetric cipher.
|
||||
|
||||
Encrypted PKCS #8 objects use a different ASN.1 syntax that I'm not going to dive into, but let's take a look at an actual example and see if anything stands out. Encrypting a private key with **genpkey** requires that you specify the symmetric encryption algorithm to use. I'll use AES-256-CBC for this example and a password of "hello" (the "pass:" prefix is the way of telling OpenSSL that the password is coming in from the command line).
|
||||
|
||||
|
||||
```
|
||||
% openssl genpkey -algorithm RSA -aes-256-cbc -pass pass:hello | openssl asn1parse
|
||||
0:d=0 hl=4 l= 733 cons: SEQUENCE
|
||||
4:d=1 hl=2 l= 87 cons: SEQUENCE
|
||||
6:d=2 hl=2 l= 9 prim: OBJECT :PBES2
|
||||
17:d=2 hl=2 l= 74 cons: SEQUENCE
|
||||
19:d=3 hl=2 l= 41 cons: SEQUENCE
|
||||
21:d=4 hl=2 l= 9 prim: OBJECT :PBKDF2
|
||||
32:d=4 hl=2 l= 28 cons: SEQUENCE
|
||||
34:d=5 hl=2 l= 8 prim: OCTET STRING [HEX DUMP]:17E6FE554E85810A
|
||||
44:d=5 hl=2 l= 2 prim: INTEGER :0800
|
||||
48:d=5 hl=2 l= 12 cons: SEQUENCE
|
||||
50:d=6 hl=2 l= 8 prim: OBJECT :hmacWithSHA256
|
||||
60:d=6 hl=2 l= 0 prim: NULL
|
||||
62:d=3 hl=2 l= 29 cons: SEQUENCE
|
||||
64:d=4 hl=2 l= 9 prim: OBJECT :aes-256-cbc
|
||||
75:d=4 hl=2 l= 16 prim: OCTET STRING [HEX DUMP]:91E9536C39...
|
||||
93:d=1 hl=4 l= 640 prim: OCTET STRING [HEX DUMP]:98007B264F...
|
||||
|
||||
% openssl genpkey -algorithm RSA -aes-256-cbc -pass pass:hello | head -n 1
|
||||
-----BEGIN ENCRYPTED PRIVATE KEY-----
|
||||
```
|
||||
|
||||
There are a couple of interesting items here. We see our encryption algorithm is recorded with an OID starting at octet 64. There's an OID for "PBES2" (Password-Based Encryption Scheme 2), which defines a standard process for encryption and decryption, and an OID for "PBKDF2" (Password-Based Key Derivation Function 2), which defines a standard process for creating encryption keys from passwords. Helpfully, OpenSSL uses the header "ENCRYPTED PRIVATE KEY" in the PEM output.
|
||||
|
||||
OpenSSL will let you encrypt a PKCS #1, but it's done in a non-standard way via a series of headers inserted into the PEM:
|
||||
|
||||
|
||||
```
|
||||
% openssl genrsa -aes256 -passout pass:hello 4096
|
||||
-----BEGIN RSA PRIVATE KEY-----
|
||||
Proc-Type: 4,ENCRYPTED
|
||||
DEK-Info: AES-256-CBC,5B2C64DC05B7C0471A278C76562FD776
|
||||
...
|
||||
```
|
||||
|
||||
### In conclusion
|
||||
|
||||
There's a final PKCS format you need to know about: [PKCS #12][21]. The PKCS #12 format allows for storing multiple objects all in one file. If you have a certificate and its corresponding key or a chain of certificates, you can store them together in one PKCS #12 file. Individual entries in the file can be protected with password-based encryption.
|
||||
|
||||
Beyond the PKCS formats, there are other storage methods such as the Java-specific JKS format and the NSS library from Mozilla, which uses file-based databases (SQLite or Berkeley DB, depending on the version). Luckily, the PKCS formats are a lingua franca that can serve as a start or reference if you need to deal with other formats.
|
||||
|
||||
If this all seems confusing, that's because it is. Unfortunately, the PKI ecosystem has a lot of sharp edges between tools that generate enigmatic error messages (looking at you, OpenSSL) and standards that have grown and evolved over the past 35 years. Having a basic understanding of how PKI objects are stored is critical if you're doing any application development that will be accessed over SSL/TLS.
|
||||
|
||||
I hope this article has shed a little light on the subject and might save you from spending fruitless hours in the PKI wilderness.
|
||||
|
||||
* * *
|
||||
|
||||
_The author would like to thank Hubert Kario for providing a technical review._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/6/bits-and-bytes-pki
|
||||
|
||||
作者:[Alex Wood][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/awood
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/keyboaord_enter_writing_documentation.jpg?itok=kKrnXc5h (Computer keyboard typing)
|
||||
[2]: https://opensource.com/article/18/5/cryptography-pki
|
||||
[3]: https://opensource.com/article/18/7/private-keys
|
||||
[4]: https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One
|
||||
[5]: https://en.wikipedia.org/wiki/ISO_8601
|
||||
[6]: https://en.wikipedia.org/wiki/ITU_T.61
|
||||
[7]: https://en.wikipedia.org/wiki/Teletex
|
||||
[8]: https://en.wikipedia.org/wiki/UTF-32
|
||||
[9]: https://www.itu.int/itu-t/recommendations/rec.aspx?rec=X.680
|
||||
[10]: https://www.alvestrand.no/objectid/1.3.6.1.4.1.2312.html
|
||||
[11]: https://tools.ietf.org/html/rfc8017
|
||||
[12]: https://linux.die.net/man/1/asn1parse
|
||||
[13]: https://www.johndcook.com/blog/2018/12/12/rsa-exponent/
|
||||
[14]: https://www.youtube.com/watch?v=NcPdiPrY_g8
|
||||
[15]: https://en.wikipedia.org/wiki/X.690
|
||||
[16]: https://en.wikipedia.org/wiki/Type-length-value
|
||||
[17]: http://luca.ntop.org/Teaching/Appunti/asn1.html
|
||||
[18]: https://www.theguardian.com/technology/2012/mar/26/ather-of-the-email-attachment
|
||||
[19]: https://en.wikipedia.org/wiki/Base64
|
||||
[20]: https://tools.ietf.org/html/rfc5208
|
||||
[21]: https://tools.ietf.org/html/rfc7292
|
97
sources/tech/20190612 Why use GraphQL.md
Normal file
@ -0,0 +1,97 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Why use GraphQL?)
|
||||
[#]: via: (https://opensource.com/article/19/6/why-use-graphql)
|
||||
[#]: author: (Zach Lendon https://opensource.com/users/zachlendon/users/goncasousa/users/patrickhousley)
|
||||
|
||||
Why use GraphQL?
|
||||
======
|
||||
Here's why GraphQL is gaining ground on standard REST API technology.
|
||||
![][1]
|
||||
|
||||
[GraphQL][2], as I wrote [previously][3], is a next-generation API technology that is transforming both how client applications communicate with backend systems and how backend systems are designed.
|
||||
|
||||
As a result of the support that began with the organization that founded it, Facebook, and continues with the backing of other technology giants such as GitHub, Twitter, and Airbnb, GraphQL's place as a linchpin technology for application systems seems secure, both now and long into the future.
|
||||
|
||||
### GraphQL's ascent
|
||||
|
||||
The rise in importance of mobile application performance and organizational agility has provided booster rockets for GraphQL's ascent to the top of modern enterprise architectures.
|
||||
|
||||
Given that [REST][4] is a wildly popular architectural style that already allows mechanisms for data interaction, what advantages does this new technology provide over [REST][4]? The ‘QL’ in GraphQL stands for query language, and that is a great place to start.
|
||||
|
||||
The ease with which different client applications within an organization can query only the data they need gives GraphQL an advantage over alternative REST approaches and delivers real-world application performance boosts. With traditional [REST][4] API endpoints, client applications interrogate a server resource and receive a response containing all the data that matches the request. If a successful response from a [REST][4] API endpoint returns 35 fields, the client application receives 35 fields, whether it needs them all or not.
|
||||
|
||||
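To make the contrast concrete, here is a small, hypothetical sketch in plain Python (not a real GraphQL library; the record and field names are invented for illustration): a REST-style endpoint hands back every field of a resource, while a GraphQL-style accessor returns only the fields the client names.

```python
# Hypothetical user record that a REST endpoint would return in full.
USER = {
    "id": 1, "name": "Ada", "email": "ada@example.com",
    "address": "...", "phone": "...", "created_at": "...",
}

def rest_get_user():
    """REST-style: the client gets every field, needed or not."""
    return dict(USER)

def graphql_get_user(selected_fields):
    """GraphQL-style: the client names exactly the fields it wants."""
    return {f: USER[f] for f in selected_fields}

full = rest_get_user()                    # all 6 fields over the wire
lean = graphql_get_user(["id", "name"])   # only the 2 the client asked for
```

On a slow mobile connection, the difference between `full` and `lean` is paid for on every single request.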
### Fetching problems

[REST][4] APIs traditionally provide no clean way for client applications to retrieve or update only the data they care about. This is often described as the “over-fetching” problem. With the prevalence of mobile applications in people’s day-to-day lives, the over-fetching problem has real-world consequences. Every request a mobile application needs to make, and every byte it has to send and receive, has an increasingly negative performance impact for end users. Users with slower data connections are particularly affected by suboptimal API design choices. Customers who experience poor performance using mobile applications are less likely to purchase products and use services. Inefficient API designs cost companies money.

“Over-fetching” isn’t alone; it has a partner in crime: “under-fetching.” Endpoints that, by default, return only a portion of the data a client actually needs force clients to make additional HTTP requests to satisfy their data needs. Because of the over- and under-fetching problems and their impact on client application performance, an API technology that facilitates efficient fetching has a chance to catch fire in the marketplace, and GraphQL has boldly jumped in and filled that void.

### REST's response

[REST][4] API designers, not willing to go down without a fight, have attempted to counter the mobile application performance problem through a mix of:

* “include” and “exclude” query parameters, allowing client applications to specify which fields they want through a potentially long query format.
* “Composite” services, which combine multiple endpoints in a way that allows client applications to be more efficient in the number of requests they make and the data they receive.

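As a rough illustration of the include/exclude pattern (the endpoint URL and field names here are hypothetical, and real APIs vary in how they spell these parameters), a client might assemble a filtered request URL like this:

```python
from urllib.parse import urlencode

def build_rest_url(base, include=None, exclude=None):
    """Build a REST URL with hypothetical include/exclude query parameters."""
    params = {}
    if include:
        # Deeper object graphs need nested dot notation, e.g. "author.name".
        params["include"] = ",".join(include)
    if exclude:
        params["exclude"] = ",".join(exclude)
    return f"{base}?{urlencode(params)}" if params else base

url = build_rest_url("https://api.example.com/posts",
                     include=["id", "title", "author.name"])
```

Even in this tiny example, the query string is already percent-encoded and hard to read by eye, which is exactly the debugging pain described below.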
While these patterns are a valiant attempt by the [REST][4] API community to address the challenges mobile clients face, they fall short in a few key regards, namely:

* Include and exclude query key/value pairs quickly get messy, in particular for deeper object graphs that require a nested dot-notation syntax (or similar) to target data to include and exclude. Additionally, debugging issues with the query string in this model often requires manually breaking up a URL.
* Server implementations for include and exclude queries are often custom, as there is no standard way for server-based applications to handle them, just as there is no standard way for include and exclude queries to be defined.
* The rise of composite services creates more tightly coupled back-end and front-end systems, requiring increasing coordination to deliver projects and turning once-agile projects back into waterfall ones. This coordination and coupling has the painful side effect of slowing organizational agility. Additionally, composite services are, by definition, not RESTful.

### GraphQL's genesis

For Facebook, GraphQL’s genesis was a response to pain felt and lessons learned from an HTML5-based version of its flagship mobile application back in 2011-2012. Understanding that improved performance was paramount, Facebook engineers realized they needed a new API design to ensure peak performance. Considering the [REST][4] limitations above, and the need to support the differing requirements of a number of API clients, one can begin to understand the early seeds of what led co-creators Lee Byron and Dan Schafer, Facebook employees at the time, to create what has become known as GraphQL.

With what is often a single GraphQL endpoint, client applications can use the GraphQL query language to reduce, often significantly, the number of network calls they need to make, and to ensure that they retrieve only the data they need. In many ways, this harkens back to earlier models of web programming, where client application code would directly query back-end systems; some might remember writing SQL queries with JSTL on JSPs 10-15 years ago, for example!

The biggest difference now is that, with GraphQL, we have a specification implemented across a variety of client and server languages and libraries. And because GraphQL is an API technology, we have decoupled the back-end and front-end application systems by introducing an intermediary GraphQL application layer that provides a mechanism to access organizational data in a manner that aligns with an organization’s business domain(s).

Beyond solving technical challenges experienced by software engineering teams, GraphQL has also been a boost to organizational agility, in particular in the enterprise. GraphQL-enabled gains in organizational agility are commonly attributable to the following:

* Rather than creating new endpoints when one or more new fields are needed by clients, GraphQL API designers and developers can include those fields in existing graph implementations, exposing new capabilities with less development effort and less change across application systems.
* By encouraging API design teams to focus more on defining their object graph and less on what client applications are delivering, front-end and back-end software teams can deliver solutions for customers increasingly independently of one another.

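The single-endpoint idea above can be sketched with a tiny, hypothetical resolver in plain Python (not a real GraphQL implementation; for simplicity the selection is passed as a nested dict rather than parsed from query-language text):

```python
# Invented sample data standing in for an organization's object graph.
DATA = {"user": {"id": 1, "name": "Ada",
                 "posts": [{"title": "Hello", "body": "...", "likes": 42}]}}

def resolve(value, selection):
    """Return only the fields named in `selection` (a nested dict).

    A sub-selection of None means "take the scalar value as-is";
    lists are resolved element by element.
    """
    if isinstance(value, list):
        return [resolve(item, selection) for item in value]
    return {field: resolve(value[field], sub) if sub else value[field]
            for field, sub in selection.items()}

# One endpoint, many shapes: each client asks for exactly what it needs.
result = resolve(DATA, {"user": {"name": None, "posts": {"title": None}}})
```

A second client could hit the same "endpoint" with a different selection and get a differently shaped response, with no new server route required.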
### Considerations before adoption

Despite GraphQL’s compelling benefits, GraphQL is not without its implementation challenges. A few examples include:

* Caching mechanisms around [REST][4] APIs are much more mature.
* The patterns used to build APIs with [REST][4] are much more well established.
* While engineers may be more attracted to newer technologies like GraphQL, the talent pool in the marketplace is much broader for building [REST][4]-based solutions than for GraphQL.

### Conclusion

By providing a boost to both performance and organizational agility, GraphQL has seen its adoption by companies skyrocket in the past few years. It does, however, have some maturing to do in comparison to the RESTful ecosystem of API design.

One of the great benefits of GraphQL is that it’s not designed as a wholesale replacement for alternative API solutions. Instead, GraphQL can be implemented to complement or enhance existing APIs. As a result, companies are encouraged to adopt GraphQL incrementally where it makes the most sense for them: where they find it has the greatest positive impact on application performance and organizational agility.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/why-use-graphql

作者:[Zach Lendon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/zachlendon/users/goncasousa/users/patrickhousley
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D
[2]: https://graphql.org/
[3]: https://opensource.com/article/19/6/what-is-graphql
[4]: https://en.wikipedia.org/wiki/Representational_state_transfer

@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Continuous integration testing for the Linux kernel)
[#]: via: (https://opensource.com/article/19/6/continuous-kernel-integration-linux)
[#]: author: (Major Hayden https://opensource.com/users/mhayden)

Continuous integration testing for the Linux kernel
======
How this team works to prevent bugs from being merged into the Linux kernel.
![Linux kernel source code \(C\) in Visual Studio Code][1]

With 14,000 changesets per release from over 1,700 different developers, it's clear that the Linux kernel moves quickly and brings plenty of complexity. Kernel bugs range from small annoyances to larger problems, such as system crashes and data loss.

As the call for continuous integration (CI) grows for more and more projects, the [Continuous Kernel Integration (CKI)][2] team forges ahead with a single mission: prevent bugs from being merged into the kernel.

### Linux testing problems

Many Linux distributions test the Linux kernel only when needed. This testing often occurs around release time, or when users find a bug.

Unrelated issues sometimes appear, and maintainers scramble to find which patch in a changeset full of tens of thousands of patches caused the new, unrelated bug. Diagnosing the bug may require specialized hardware, a series of triggers, and specialized knowledge of that portion of the kernel.

#### CI and Linux

Most modern software repositories have some sort of automated CI testing that tests commits before they find their way into the repository. This automated testing allows the maintainers to find software quality issues, along with most bugs, by reviewing the CI report. Simpler projects, such as a Python library, come with tons of tools to make this process easier.

Linux must be configured and compiled prior to any testing. Doing so takes time and compute resources. In addition, that kernel must boot in a virtual machine or on a bare-metal machine for testing. Getting access to certain system architectures requires additional expense or very slow emulation. From there, someone must identify a set of tests which trigger the bug or verify the fix.

#### How the CKI team works

The CKI team at Red Hat currently follows changes from several internal kernels, as well as upstream kernels such as the [stable kernel tree][3]. We watch for two critical events in each repository:

1. When maintainers merge pull requests or patches, and the resulting commits in the repository change.
2. When developers propose changes for merging via patchwork or the stable patch queue.

As these events occur, automation springs into action and [GitLab CI pipelines][4] begin the testing process. Once the pipeline runs [linting][5] scripts, merges any patches, and compiles the kernel for multiple architectures, the real testing begins. We compile kernels in under six minutes for four architectures and submit feedback to the stable mailing list, usually in two hours or less. Over 100,000 kernel tests run each month, and over 11,000 GitLab pipelines have completed (since January 2019).

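The CKI project's real pipeline definitions are not reproduced here, but a generic GitLab CI configuration with the same lint, merge, build, and test staging might look roughly like this (stage names, script paths, and the architecture matrix below are illustrative assumptions, not CKI's actual configuration):

```yaml
# .gitlab-ci.yml — illustrative sketch only, not the CKI project's pipeline.
stages:
  - lint
  - merge
  - build
  - test

lint:
  stage: lint
  script:
    - ./scripts/checkpatch.pl --git HEAD~1..HEAD   # style checks on new commits

merge:
  stage: merge
  script:
    - git am patches/*.patch                        # apply proposed patches

build:
  stage: build
  parallel:
    matrix:
      - ARCH: [x86_64, aarch64, ppc64le, s390x]
  script:
    - make ARCH=$ARCH defconfig
    - make ARCH=$ARCH -j"$(nproc)"

test:
  stage: test
  script:
    - ./run-test-suite.sh                           # boot the kernel, run tests
```

In a real setup, cross-compiled architectures would also need a `CROSS_COMPILE` toolchain prefix, and the test stage would hand the built kernel to boot-capable runners.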
Each kernel is booted on its native architecture, which includes:

* [aarch64][6]: 64-bit [ARM][7], such as the [Cavium (now Marvell) ThunderX][8].
* [ppc64/ppc64le][9]: Big- and little-endian [IBM POWER][10] systems.
* [s390x][11]: [IBM Zseries][12] mainframes.
* [x86_64][13]: [Intel][14] and [AMD][15] workstations, laptops, and servers.

Multiple tests run on these kernels, including the [Linux Test Project (LTP)][16], which contains a myriad of tests using a common test harness. My CKI team open-sourced over 44 tests, with more on the way.

### Get involved

The upstream kernel testing effort grows day by day. Many companies provide test output for various kernels, including [Google][17], Intel, [Linaro][18], and [Sony][19]. Each effort is focused on bringing value to the upstream kernel as well as to each company's customer base.

If you or your company want to join the effort, please come to the [Linux Plumbers Conference 2019][20] in Lisbon, Portugal. Join us at the Kernel CI hackfest during the two days after the conference, and drive the future of rapid kernel testing.

For more details, [review the slides][21] from my Texas Linux Fest 2019 talk.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/continuous-kernel-integration-linux

作者:[Major Hayden][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mhayden
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_kernel_clang_vscode.jpg?itok=fozZ4zrr (Linux kernel source code (C) in Visual Studio Code)
[2]: https://cki-project.org/
[3]: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
[4]: https://docs.gitlab.com/ee/ci/pipelines.html
[5]: https://en.wikipedia.org/wiki/Lint_(software)
[6]: https://en.wikipedia.org/wiki/ARM_architecture
[7]: https://www.arm.com/
[8]: https://www.marvell.com/server-processors/thunderx-arm-processors/
[9]: https://en.wikipedia.org/wiki/Ppc64
[10]: https://www.ibm.com/it-infrastructure/power
[11]: https://en.wikipedia.org/wiki/Linux_on_z_Systems
[12]: https://www.ibm.com/it-infrastructure/z
[13]: https://en.wikipedia.org/wiki/X86-64
[14]: https://www.intel.com/
[15]: https://www.amd.com/
[16]: https://github.com/linux-test-project/ltp
[17]: https://www.google.com/
[18]: https://www.linaro.org/
[19]: https://www.sony.com/
[20]: https://www.linuxplumbersconf.org/
[21]: https://docs.google.com/presentation/d/1T0JaRA0wtDU0aTWTyASwwy_ugtzjUcw_ZDmC5KFzw-A/edit?usp=sharing

@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IPython is still the heart of Jupyter Notebooks for Python developers)
[#]: via: (https://opensource.com/article/19/6/ipython-still-heart-jupyterlab)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg/users/marcobravo)

IPython is still the heart of Jupyter Notebooks for Python developers
======
Project Jupyter's origin in IPython remains significant for the magical development experience it provides.
![I love Free Software FSFE celebration][1]

I recently wrote about how I find Jupyter projects, especially JupyterLab, to be a [magical Python development experience][2]. In researching how the various projects are related to each other, I recapped how Jupyter began as a fork of IPython. As Project Jupyter's [The Big Split™ announcement][3] explained:

> "If anyone has been confused by what Jupyter is[1], it's the exact same code that lived in IPython, developed by the same people, just in a new home under a new name."

That [1] links to a footnote that further clarifies:

> "I saw 'Jupyter is like IPython, but language agnostic' immediately after the announcement, which is a great illustration of why the project needs to not have Python in the name anymore, since it was already language agnostic at the time."

The fact that Jupyter Notebook and IPython forked from the same source code made sense to me, but I got lost in the current state of the IPython project. Was it no longer needed after The Big Split™, or is it living on in a different way?

I was surprised to learn that IPython continues to add significant value for Pythonistas, and that it is an essential part of the Jupyter experience. Here's a portion of the Jupyter FAQ:

> **Are any languages pre-installed?**
>
> Yes, installing the Jupyter Notebook will also install the IPython kernel. This allows working on notebooks using the Python programming language.

I now understand that writing Python in JupyterLab (and Jupyter Notebook) relies on the continued development of IPython as its kernel. Not only that, IPython is the powerhouse default kernel, and it can act as a communication bus for other language kernels, according to [the documentation][4], saving a lot of time and development effort.

The question remains: what can I do with just IPython?

### What IPython does today

IPython provides both a powerful, interactive Python shell and a Jupyter kernel. After installing it, I can run **ipython** from any command line on its own and use it as a (much prettier than the default) Python shell:

```
$ ipython
Python 3.7.3 (default, Mar 27 2019, 09:23:15)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.4.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import numpy as np
In [2]: example = np.array([5, 20, 3, 4, 0, 2, 12])
In [3]: average = np.average(example)
In [4]: print(average)
6.571428571428571
```

That brings us to the more significant issue: IPython's functionality gives JupyterLab the ability to execute the code in every project, and it also provides support for a whole bunch of functionality that's playfully called _magic_ (thank you, Nicholas Reith, for mentioning this in a comment on my previous article).

### Getting magical, thanks to IPython

JupyterLab and other frontends using the IPython kernel can feel like your favorite IDE or terminal emulator environment. I'm a huge fan of how [dotfiles][5] give me the power to use shortcuts, and magic has some dotfile-like behavior as well. For example, check out **[%bookmark][6]**. I've mapped my default development folder, **~/Develop**, to a shortcut I can run at any time and hop right into it.

![Screenshot of commands from JupyterLab][7]

The use of **%bookmark** and **%cd**, alongside the **!** operator (which I introduced in the previous article), is powered by IPython. As the [documentation][8] states:

> To Jupyter users: Magics are specific to and provided by the IPython kernel. Whether Magics are available on a kernel is a decision that is made by the kernel developer on a per-kernel basis.

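A hypothetical IPython session (the folder names and output are illustrative) shows how these magics combine:

```
$ ipython
In [1]: %bookmark dev ~/Develop    # save the folder under the shortcut "dev"
In [2]: %cd -b dev                 # jump to the bookmarked folder
/home/user/Develop
In [3]: !ls                        # shell out with the ! operator
```

Because bookmarks persist across sessions, `%cd -b dev` keeps working the next time IPython starts.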
### Wrapping up

I, as a curious novice, was not quite sure if IPython remained relevant to the Jupyter ecosystem. Now that I realize it's the source of JupyterLab's powerful user experience, I have a new appreciation for IPython's continuing development. It's also the work of talented contributors who are part of cutting-edge research, so be sure to cite them if you use Jupyter projects in your academic papers. They make it easy with this [ready-made citation entry][9].

Be sure to keep IPython in mind when you're thinking about open source projects to contribute to, and check out the [latest release notes][10] for a full list of magical features.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/ipython-still-heart-jupyterlab

作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mbbroberg/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ilovefs_free_sticker_fsfe_heart.jpg?itok=gLJtaieq (I love Free Software FSFE celebration)
[2]: https://opensource.com/article/19/5/jupyterlab-python-developers-magic
[3]: https://blog.jupyter.org/the-big-split-9d7b88a031a7
[4]: https://jupyter-client.readthedocs.io/en/latest/kernels.html
[5]: https://en.wikipedia.org/wiki/Hidden_file_and_hidden_directory#Unix_and_Unix-like_environments
[6]: https://ipython.readthedocs.io/en/stable/interactive/magics.html?highlight=magic#magic-bookmark
[7]: https://opensource.com/sites/default/files/uploads/jupyterlab-commands-ipython.png (Screenshot of commands from JupyterLab)
[8]: https://ipython.readthedocs.io/en/stable/interactive/magics.html
[9]: https://ipython.org/citing.html
[10]: https://ipython.readthedocs.io/en/stable/whatsnew/index.html

@ -0,0 +1,93 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open hardware for musicians and music lovers: Headphone, amps, and more)
[#]: via: (https://opensource.com/article/19/6/hardware-music)
[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg)

Open hardware for musicians and music lovers: Headphone, amps, and more
======
From 3D-printed instruments to devices that pull sound out of the air, there are plenty of ways to create music with open hardware projects.
![][1]

The world is full of great [open source music players][2], but why stop at using open source just to _play_ music? You can also use open source hardware to make music. All of the instruments described in this article are [certified by the Open Source Hardware Association][3] (OSHWA). That means you are free to build upon them, remix them, or do anything else with them.

### Open source instruments

Instruments are always a good place to start when you want to make music. If your instrument choices lean towards the more traditional, the [F-F-Fiddle][4] may be the one for you.

![F-f-fiddle][5]

The F-F-Fiddle is a full-sized electric violin that you can make with a standard desktop 3D printer ([fused filament fabrication][6]—get it?). If you need to see it to believe it, here is a video of the F-F-Fiddle in action:

Mastered the fiddle and interested in something a bit more exotic? How about the [Open Theremin][7]?

![Open Theremin][8]

Like all theremins, Open Theremin lets you play music without touching the instrument. It is, of course, especially good at making [creepy space sounds][9] for your next sci-fi video or space-themed party.

The [Waft][10] operates similarly by allowing you to control sounds remotely. It uses [Lidar][11] to measure the distance of your hand from the sensor. Check it out:

Is the Waft a theremin? I'm not sure—theremin pedants should weigh in below in the comments.

If theremins are too well-known for you, [SIGNUM][12] may be just what you are looking for. In the words of its developers, SIGNUM "uncovers the encrypted codes of information and the language of man/machine communication" by turning invisible wireless communications into audible signals.

![SIGNUM][13]

Here it is in action:

### Inputs

Regardless of what instrument you use, you will need to plug it into something. If you want that something to be a Raspberry Pi, try the [AudioSense-Pi][14], which allows you to connect multiple inputs and outputs to your Pi at once.

![AudioSense-Pi][15]

### Synths

What about synthesizers? SparkFun's [SparkPunk Sound Kit][16] is a simple synth that gives you lots of room to play.

![SparkFun SparkPunk Sound Kit][17]

### Headphones

Making all this music is great, but you also need to think about how you will listen to it. Fortunately, [EQ-1 headphones][18] are open source and 3D-printable.

![EQ-1 headphones][19]

Are you making music with open source hardware? Let us know in the comments!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/hardware-music

作者:[Michael Weinberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mweinberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_musicinfinity.png?itok=7LkfjcS9
[2]: https://opensource.com/article/19/2/audio-players-linux
[3]: https://certification.oshwa.org/
[4]: https://certification.oshwa.org/us000010.html
[5]: https://opensource.com/sites/default/files/uploads/f-f-fiddle.png (F-f-fiddle)
[6]: https://en.wikipedia.org/wiki/Fused_filament_fabrication
[7]: https://certification.oshwa.org/ch000001.html
[8]: https://opensource.com/sites/default/files/uploads/open-theremin.png (Open Theremin)
[9]: https://youtu.be/p05ZSHRYXVA?t=771
[10]: https://certification.oshwa.org/uk000005.html
[11]: https://en.wikipedia.org/wiki/Lidar
[12]: https://certification.oshwa.org/es000003.html
[13]: https://opensource.com/sites/default/files/uploads/signum.png (SIGNUM)
[14]: https://certification.oshwa.org/in000007.html
[15]: https://opensource.com/sites/default/files/uploads/audiosense-pi.png (AudioSense-Pi)
[16]: https://certification.oshwa.org/us000016.html
[17]: https://opensource.com/sites/default/files/uploads/sparkpunksoundkit.png (SparkFun SparkPunk Sound Kit)
[18]: https://certification.oshwa.org/us000038.html
[19]: https://opensource.com/sites/default/files/uploads/eq-1-headphones.png (EQ-1 headphones)

@ -0,0 +1,125 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu Kylin: The Official Chinese Version of Ubuntu)
[#]: via: (https://itsfoss.com/ubuntu-kylin/)
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)

Ubuntu Kylin: The Official Chinese Version of Ubuntu
======

[_**Ubuntu has several official flavors**_][1] _**and Kylin is one of them. In this article, you’ll learn about Ubuntu Kylin: what it is, why it was created, and what features it offers.**_

Kylin was originally developed in 2001 by academics at the [National University of Defense Technology][2] in the People’s Republic of China. The name is derived from [Qilin][3], a beast from Chinese mythology.

The first versions of Kylin were based on [FreeBSD][4] and were intended for use by the Chinese military and other government organizations. Kylin 3.0 was based purely on the Linux kernel, and a version called [NeoKylin][5] was announced in December 2010.

In 2013, [Canonical][6] (parent company of Ubuntu) reached an agreement with the [Ministry of Industry and Information Technology][7] of the People’s Republic of China to co-create and release an Ubuntu-based OS with features targeted at the Chinese market.

![Ubuntu Kylin][8]

### What is Ubuntu Kylin?

Following the 2013 agreement mentioned above, Ubuntu Kylin is now the official Chinese version of Ubuntu. It is much more than just language localisation. In fact, it is determined to serve the Chinese market the same way Ubuntu serves the global market.

The first version of [Ubuntu Kylin][9] came with Ubuntu 13.04. Like Ubuntu, Kylin has both LTS (long-term support) and non-LTS versions.

Currently, Ubuntu Kylin 19.04 implements the [UKUI][10] desktop environment with a revised boot-up animation, log-in/screen-lock program, and OS theme. To offer a friendlier experience for users, it fixes bugs, adds a file-preview function and timed logout, and integrates the latest [WPS office suite][11] and [Sogou][12] input methods.

Kylin 4.0.2 is a community edition based on Ubuntu Kylin 16.04 LTS. It includes several third-party applications with long-term, stable support. It’s perfect for both server and desktop usage for daily office work, and it is available to [download][13]. The Kylin forums are active, providing a place to give feedback and troubleshoot problems.

#### UKUI: The desktop environment by Ubuntu Kylin

![Ubuntu Kylin 19.04 with UKUI Desktop][15]

[UKUI][16] is designed and developed by the Ubuntu Kylin team and has some great features and provisions:

* Windows-like interactive functions that bring a more friendly user experience. The setup wizard is user-friendly, so users can get started with Ubuntu Kylin quickly.
* A Control Center with new settings for themes and windows, plus updated components such as the Start Menu, taskbar, notification bar, file manager, window manager, and others.
* Available separately in both the Ubuntu and Debian repositories, providing a new desktop environment for users of Debian/Ubuntu distributions and derivatives worldwide.
* New login and lock-screen programs, which are more stable and offer many functions.
* A feedback program that makes it convenient to submit feedback and questions.

#### Kylin Software Center

![Kylin Software Center][17]

Kylin has a software center, similar to the Ubuntu Software Center, called the Ubuntu Kylin Software Center. It is part of the Ubuntu Kylin Software Store, which also includes the Ubuntu Kylin Developer Platform and the Ubuntu Kylin Repository, with a simple interface and powerful functionality. It supports both the Ubuntu and Ubuntu Kylin repositories and is especially convenient for quick installation of the China-specific software developed by the Ubuntu Kylin team!

#### Youker: A series of tools

Ubuntu Kylin also has a series of tools named Youker. Typing “Youker” in the Kylin start menu will bring up the Kylin assistant. If you press the “Windows” key on the keyboard, you get a response exactly like you would on Windows: it fires up the Kylin start menu.

![Kylin Assistant][18]

Other Kylin-branded applications include Kylin Video (player), Kylin Burner, Youker Weather, and Youker Fcitx, which support better office work and personal entertainment.

![Kylin Video][19]

#### Special focus on Chinese characters

In cooperation with Kingsoft, Ubuntu Kylin developers also work on Sogou Pinyin for Linux, Kuaipan for Linux, and Kingsoft WPS for Ubuntu Kylin, and address issues with smart pinyin, the cloud storage service, and office applications. [Pinyin][20] is a romanization system for Chinese characters. With it, the user types on an English keyboard, but Chinese characters are displayed on the screen.

#### Fun Fact: Ubuntu Kylin runs on Chinese supercomputers

![Tianhe-2 Supercomputer. Photo by O01326 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=45399546][22]
It’s already public knowledge that the [world’s top 500 fastest supercomputers run Linux][23]. The Chinese supercomputers [Tianhe-1][24] and [Tianhe-2][25] both use the 64-bit version of Kylin Linux, dedicated to high-performance [parallel computing][26] optimization, power management, and high-performance [virtual computing][27].
#### Summary

I hope you liked this introduction to the world of Ubuntu Kylin. You can get Ubuntu Kylin 19.04, or the community edition based on Ubuntu 16.04, from its [official website][28].
--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-kylin/

作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/which-ubuntu-install/
[2]: https://english.nudt.edu.cn
[3]: https://www.thoughtco.com/what-is-a-qilin-195005
[4]: https://itsfoss.com/freebsd-12-release/
[5]: https://thehackernews.com/2015/09/neokylin-china-linux-os.html
[6]: https://www.canonical.com/
[7]: http://english.gov.cn/state_council/2014/08/23/content_281474983035940.htm
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/Ubuntu-Kylin.jpeg?resize=800%2C450&ssl=1
[9]: http://www.ubuntukylin.com/
[10]: http://ukui.org
[11]: https://www.wps.com/
[12]: https://en.wikipedia.org/wiki/Sogou_Pinyin
[13]: http://www.ubuntukylin.com/downloads/show.php?lang=en&id=122
[14]: https://itsfoss.com/solve-ubuntu-error-failed-to-download-repository-information-check-your-internet-connection/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/ubuntu-Kylin-19-04-desktop.jpg?resize=800%2C450&ssl=1
[16]: http://www.ukui.org/
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-software-center.jpg?resize=800%2C496&ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-assistant.jpg?resize=800%2C535&ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-video.jpg?resize=800%2C533&ssl=1
[20]: https://en.wikipedia.org/wiki/Pinyin
[21]: https://itsfoss.com/remove-old-kernels-ubuntu/
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/tianhe-2.jpg?resize=800%2C600&ssl=1
[23]: https://itsfoss.com/linux-runs-top-supercomputers/
[24]: https://en.wikipedia.org/wiki/Tianhe-1
[25]: https://en.wikipedia.org/wiki/Tianhe-2
[26]: https://en.wikipedia.org/wiki/Parallel_computing
[27]: https://computer.howstuffworks.com/how-virtual-computing-works.htm
[28]: http://www.ubuntukylin.com
@ -0,0 +1,141 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A data-centric approach to patching systems with Ansible)
[#]: via: (https://opensource.com/article/19/6/patching-systems-ansible)
[#]: author: (Mark Phillips https://opensource.com/users/markp/users/markp)

A data-centric approach to patching systems with Ansible
======
Use data and variables in Ansible to control selective patching.
![metrics and data shown on a computer screen][1]
When you're patching Linux machines these days, I could forgive you for asking, "How hard can it be?" Sure, a **yum update -y** will sort it for you in a flash.

![Animation of updating Linux][2]

But for those of us working with more than a handful of machines, it's not that simple. Sometimes an update can create unintended consequences across many machines, and you're left wondering how to put things back the way they were. Or you might think, "Should I have applied the critical patch on its own and saved myself a lot of pain?"
Facing these sorts of challenges in the past led me to build a way to cherry-pick the updates needed and automate their application.
### A flexible idea

Here's an overview of the process:

![Overview of the Ansible patch process][3]
This system doesn't permit machines to have direct access to vendor patches. Instead, they're selectively subscribed to repositories. Repositories contain only the patches that are required, although I'd encourage you to give this careful consideration so you don't end up with a proliferation of repositories (another management overhead you'll not thank yourself for creating).

Now patching a machine comes down to 1) the repositories it's subscribed to and 2) getting the "thumbs up" to patch. By using variables to control both subscription and permission to patch, we don't need to tamper with the logic (the plays); we only need to alter the data.

Here is an [example Ansible role][4] that fulfills both requirements. It manages repository subscriptions and has a simple variable that controls running the patch command.
```
---
# tasks file for patching

- name: Include OS version specific differences
  include_vars: "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"

- name: Ensure Yum repositories are configured
  template:
    src: template.repo.j2
    dest: "/etc/yum.repos.d/{{ item.label }}.repo"
    owner: root
    group: root
    mode: 0644
  when: patching_repos is defined
  loop: "{{ patching_repos }}"
  notify: patching-clean-metadata

- meta: flush_handlers

- name: Ensure OS shipped yum repo configs are absent
  file:
    path: "/etc/yum.repos.d/{{ patching_default_repo_def }}"
    state: absent

# add flexibility of repos here
- name: Patch this host
  shell: 'yum update -y'
  args:
    warn: false
  when: patchme|bool
  register: result
  changed_when: "'No packages marked for update' not in result.stdout"
```
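The role above is driven entirely by data. As a sketch of what that data could look like (this is not taken from the article's repository; the `label` key matches what the template task expects as `item.label`, but the other variable values, file names, and URL here are invented for illustration), a group_vars file might contain:

```yaml
# group_vars/db.yml -- hypothetical data for the patching role
patching_default_repo_def: "CentOS-Base.repo"   # invented OS-shipped repo file name
patchme: false                                  # flip to true when the host is cleared to patch
patching_repos:
  - label: db-critical                          # rendered by the template task as item.label
    baseurl: "http://repo.example.com/db-critical"   # invented repository URL
```

Changing which hosts get which repositories, or which hosts are allowed to patch, then only means editing files like this one, never the plays themselves.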
### Scenarios

In our fictitious, large, globally dispersed environment (of four hosts), we have:

* Two web servers
* Two database servers
* An application comprising one of each server type
OK, so this number of machines isn't "enterprise-scale," but remove the counts and imagine the environment as multiple, tiered, geographically dispersed applications. We want to patch elements of the stack across server types, application stacks, geographies, or the whole estate.

![Example patch groups][5]
Using only changes to variables, can we achieve that flexibility? Sort of. Ansible's [default behavior][6] for hashes is to overwrite. In our example, the **patching_repos** variable for the **db1** and **web1** hosts is overwritten because of its later occurrence in our inventory. Hmm, a bit of a pickle. There are two ways to manage this:

1. Multiple inventory files
2. [Change the variable behavior][7]
I chose the first option because it maintains clarity. Once you start merging variables, it's hard to find where a hash appears and how it's put together. Sticking with the default behavior keeps things transparent, and it's the method I'd encourage you to adopt for your own sanity.
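Keeping separate inventory files can be as simple as scoping the same hosts under different group names. A hypothetical pair of INI-style inventories (the host and group names here mirror the article's example hosts, but the file layout is illustrative):

```ini
# inventory/db -- first file: scoped to the database tier
[db]
db1
db2

# inventory/app1 -- second file: groups one host of each type for the app stack
[app1]
db1
web1
```

Because each run uses one inventory (`ansible-playbook -i inventory/db …`), a host like db1 only ever picks up the group variables relevant to that run, and no hash merging is needed.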
### Get on with it then

Let's run the play, focusing only on the database servers.

Did you notice the final step— **Patch this host** —says **skipping**? That's because we didn't set [the controlling variable][8] to do the patching. What we have done is set up the repository subscriptions to be ready.

So let's run the play again, limiting it to the web servers and telling it to do the patching. I ran this example with verbose output so you can see the yum updates happening.

Patching an application stack requires another inventory file, as mentioned above. Let's rerun the play.

Patching hosts in the European geography is the same scenario as the application stack, so another inventory file is required.

Now that all the repository subscriptions are configured, let's just patch the whole estate. Note the **app1** and **emea** groups don't need the inventory here; they were only being used to separate the repository definition and setup. Now, **yum update -y** patches everything. If you didn't want to capture those repositories, they could be configured as **enabled=0**.
### Conclusion

The flexibility comes from how we group our hosts. Because of default hash behavior, we need to think about overlaps—the easiest way, to my mind at least, is with separate inventories.

With regard to repository setup, I'm sure you've already said to yourself, "Ah, but the cherry-picking isn't that simple!" There is additional overhead in this model to download patches, test that they work together, and bundle them with dependencies in a repository. With complementary tools, you could automate the process, and in a large-scale environment, you'd have to.

Part of me is drawn to just applying full patch sets as a simpler and easier way to go; skip the cherry-picking part and apply a full set of patches to a "standard build." I've seen this approach applied to both Unix and Windows estates with enforced quarterly updates.

I'd be interested in hearing about your experiences with patching regimes, and with the approach proposed here, in the comments below or [via Twitter][9].
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/patching-systems-ansible

作者:[Mark Phillips][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/markp/users/markp
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://opensource.com/sites/default/files/uploads/quick_update.gif (Animation of updating Linux)
[3]: https://opensource.com/sites/default/files/uploads/patch_process.png (Overview of the Ansible patch process)
[4]: https://github.com/phips/ansible-patching/blob/master/roles/patching/tasks/main.yml
[5]: https://opensource.com/sites/default/files/uploads/patch_groups.png (Example patch groups)
[6]: https://docs.ansible.com/ansible/2.3/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
[7]: https://docs.ansible.com/ansible/2.3/intro_configuration.html#sts=hash_behaviour
[8]: https://github.com/phips/ansible-patching/blob/master/roles/patching/defaults/main.yml#L4
[9]: https://twitter.com/thismarkp
@ -0,0 +1,171 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to send email from the Linux command line)
[#]: via: (https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

How to send email from the Linux command line
======
Linux offers several commands that allow you to send email from the command line. Here's a look at some that offer interesting options.
![Molnia/iStock][1]
There are several ways to send email from the Linux command line. Some are very simple; others are more complicated but offer some very useful features. The choice depends on what you want to do, whether that's getting a quick message off to a co-worker or sending a more complicated message with an attachment to a large group of people. Here's a look at some of the options:

### mail

The easiest way to send a simple message from the Linux command line is to use the **mail** command. Maybe you need to remind your boss that you're leaving a little early that day. You could use a command like this one:
```
$ echo "Reminder: Leaving at 4 PM today" | mail -s "early departure" myboss
```
Another option is to grab your message text from a file that contains the content you want to send:

```
$ mail -s "Reminder:Leaving early" myboss < reason4leaving
```
In both cases, the -s option allows you to provide a subject line for your message.
### sendmail

Using **sendmail**, you can send a quick message (with no subject) using a command like this (replacing "recip" with your intended recipient):

```
$ echo "leaving now" | sendmail recip
```
You can send just a subject line (with no message content) with a command like this:

```
$ echo "Subject: leaving now" | sendmail recip
```
You can also use sendmail on the command line to send a message complete with a subject line. However, when using this approach, you would add your subject line to the file you intend to send, as in this example file:

```
Subject: Requested lyrics
I would just like to say that, in my opinion, longer hair and other flamboyant
affectations of appearance are nothing more ...
```
Then you would send the file like this (where the lyrics file contains your subject line and text):

```
$ sendmail recip < lyrics
```

Sendmail can be quite verbose in its output. If you're desperately curious and want to see the interchange between the sending and receiving systems, add the -v (verbose) option:

```
$ sendmail -v recip@emailsite.com < lyrics
```
### mutt

An especially nice tool for command line emailing is the **mutt** command, though you will likely have to install it first. Mutt has a convenient advantage in that it can allow you to include attachments.

To use mutt to send a quick message:
```
$ echo "Please check last night's backups" | mutt -s "backup check" recip
```

To get content from a file:

```
$ mutt -s "Agenda" recip < agenda
```
To add an attachment with mutt, use the -a option. You can even add more than one – as shown in this command:

```
$ mutt -s "Agenda" recip -a agenda -a speakers < msg
```
In the command above, the "msg" file includes content for the email. If you don't have any additional content to provide, you can do this instead:

```
$ echo "" | mutt -s "Agenda" recip -a agenda -a speakers
```
Mutt also provides a way to send carbon copies (using the -c option) and blind carbon copies (using the -b option).

```
$ mutt -s "Minutes from last meeting" recip@somesite.com -c myboss < mins
```
### telnet

If you want to get deep into the details of sending email, you can use **telnet** to carry on the email exchange operation, but you'll need to, as they say, "learn the lingo." Mail servers expect a sequence of commands that include things like introducing yourself ( **EHLO** command), providing the email sender ( **MAIL FROM** command), specifying the email recipient ( **RCPT TO** command), and then adding the message ( **DATA** ) and ending the message with a "." as the only character on the line. Not every email server will respond to these requests. This approach is generally used only for troubleshooting.
```
$ telnet emailsite.org 25
Trying 192.168.0.12...
Connected to emailsite.
Escape character is '^]'.
220 localhost ESMTP Sendmail 8.15.2/8.15.2/Debian-12; Wed, 12 Jun 2019 16:32:13 -0400; (No UCE/UBE) logging access from: mysite(OK)-mysite [192.168.0.12]
EHLO mysite.org <== introduce yourself
250-localhost Hello mysite [127.0.0.1], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-EXPN
250-VERB
250-8BITMIME
250-SIZE
250-DSN
250-ETRN
250-AUTH DIGEST-MD5 CRAM-MD5
250-DELIVERBY
250 HELP
MAIL FROM: me@mysite.org <== specify sender
250 2.1.0 shs@mysite.org... Sender ok
RCPT TO: recip <== specify recipient
250 2.1.5 recip... Recipient ok
DATA <== start message
354 Enter mail, end with "." on a line by itself
This is a test message. Please deliver it for me.
. <== end message
250 2.0.0 x5CKWDds029287 Message accepted for delivery
quit <== end exchange
```
### Sending email to multiple recipients

If you want to send email from the Linux command line to a large group of recipients, you can always use a loop to make the job easier, as in this example using mutt:

```
$ for recip in `cat recips`
do
    mutt -s "Minutes from May meeting" $recip < May_minutes
done
```
### Wrap-up

There are quite a few ways to send email from the Linux command line. Some tools provide quite a few options.

Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/08/email_image_blue-100732096-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world
@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Learning by teaching, and speaking, in open source)
[#]: via: (https://opensource.com/article/19/6/conference-proposal-tips)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)

Learning by teaching, and speaking, in open source
======
Want to speak at an open source conference? Here are a few tips to get started.
![photo of microphone][1]
_"Everything good, everything magical happens between the months of June and August."_

When Jenny Han wrote these words, I doubt she had the open source community in mind. Yet, for our group of dispersed nomads, the summer brings a wave of conferences that allow us to connect in person.

From [OSCON][2] in Portland to [Drupal GovCon][3] in Bethesda, and [Open Source Summit North America][4] in San Diego, there’s no shortage of ways to match faces with Twitter avatars. After months of working on open source projects via Slack and Google Hangouts, the face time that these summer conferences offer is invaluable.

The knowledge attendees gain at open source conferences serves as the spark for new contributions. And speaking from experience, the best way to gain value from these conferences is for you to _speak_ at them.

But, does the thought of speaking give you chills? Hear me out before closing your browser.

Last August, I arrived at the Vancouver Convention Centre to give a lightning talk and speak on a panel at [Open Source Summit North America 2018][5]. It’s no exaggeration to say that this conference—and applying to speak at it—transformed my career. Nine months later, I’ve:

* Become a Community Moderator for Opensource.com
* Spoken at two additional open source conferences ([All Things Open][6] and [DrupalCon North America][7])
* Made my first GitHub pull request
* Taken "Intro to Python" and written my first lines of code in [React][8]
* Taken the first steps towards writing a book proposal

I don’t discount how much time, effort, and money are [involved in conference speaking][9]. Regardless, I can say with certainty that nothing else has grown my career so drastically. In the process, I met strangers who quickly became friends and unofficial mentors. Their feedback, advice, and connections have helped me grow in ways that I hadn’t envisioned this time last year.

Had I not boarded that flight to Canada, I would not be where I am today.

So, have I convinced you to take the first step? It’s easier than you think. If you want to [apply to speak at an open source conference][10] but are stuck on what to discuss, ask yourself this question: **What do I want to learn?**

You don’t have to be an expert on the topics that you pitch. You don’t have to know everything about JavaScript, [ML][11], or Linux to [write conference proposals][12] on these topics.

Here’s what you _do_ need: A willingness to do the work of teaching yourself these topics. And like any self-directed task, you’ll be most willing to do this work if you're invested in the subject.

As summer conference season draws closer, soak up all the knowledge you can. Then, ask yourself what you want to learn more about, and apply to speak about those subjects at fall/winter open source events.

After all, one of the most effective ways to learn is by [teaching a topic to someone else][13]. So, what will the open source community learn from you?
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/conference-proposal-tips

作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/microphone_speak.png?itok=wW6elbl5 (photo of microphone)
[2]: https://conferences.oreilly.com/oscon/oscon-or
[3]: https://www.drupalgovcon.org
[4]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2019/
[5]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/
[6]: https://allthingsopen.org
[7]: https://lab.getapp.com/bias-in-ai-drupalcon-debrief/
[8]: https://reactjs.org
[9]: https://twitter.com/venikunche/status/1130868572098572291
[10]: https://opensource.com/article/19/1/public-speaking-resolutions
[11]: https://en.wikipedia.org/wiki/ML_(programming_language)
[12]: https://dev.to/aspittel/public-speaking-as-a-developer-2ihj
[13]: https://opensource.com/article/19/5/learn-python-teaching
158
sources/tech/20190614 What is a Java constructor.md
Normal file
@ -0,0 +1,158 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is a Java constructor?)
[#]: via: (https://opensource.com/article/19/6/what-java-constructor)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/ashleykoree)

What is a Java constructor?
======
Constructors are powerful components of programming. Use them to unlock the full potential of Java.
![][1]
Java is (disputably) the undisputed heavyweight in open source, cross-platform programming. While there are many [great][2] [cross-platform][2] [frameworks][3], few are as unified and direct as [Java][4].

Of course, Java is also a pretty complex language with subtleties and conventions all its own. One of the most common questions about Java relates to **constructors**: What are they and what are they used for?

Put succinctly: a constructor is an action performed upon the creation of a new **object** in Java. When your Java application creates an instance of a class you have written, it checks for a constructor. If a constructor exists, Java runs the code in the constructor while creating the instance. That's a lot of technical terms crammed into a few sentences, but it becomes clearer when you see it in action, so make sure you have [Java installed][5] and get ready for a demo.

### Life without constructors

If you're writing Java code, you're already using constructors, even though you may not know it. All classes in Java have a constructor because even if you haven't created one, Java does it for you when the code is compiled. For the sake of demonstration, though, ignore the hidden constructor that Java provides (because a default constructor adds no extra features), and take a look at life without an explicit constructor.

Suppose you're writing a simple Java dice-roller application because you want to produce a pseudo-random number for a game.

First, you might create your dice class to represent a physical die. Knowing that you play a lot of [Dungeons and Dragons][6], you decide to create a 20-sided die. In this sample code, the variable **dice** is the integer 20, representing the maximum possible die roll (a 20-sided die cannot roll more than 20). The variable **roll** is a placeholder for what will eventually be a random number, and **rand** serves as the random seed.
```
import java.util.Random;

public class DiceRoller {
    private int dice = 20;
    private int roll;
    private Random rand = new Random();
```
Next, create a function in the **DiceRoller** class to execute the steps the computer must take to emulate a die roll: Take an integer from **rand** and assign it to the **roll** variable, add 1 to account for the fact that Java starts counting at 0 but a 20-sided die has no 0 value, then print the results.
```
    public void Roller() {
        roll = rand.nextInt(dice);
        roll += 1;
        System.out.println(roll);
    }
```
Finally, spawn an instance of the **DiceRoller** class and invoke its primary function, **Roller**:
```
    // main loop
    public static void main(String[] args) {
        System.out.printf("You rolled a ");

        DiceRoller App = new DiceRoller();
        App.Roller();
    }
}
```
As long as you have a Java development environment installed (such as [OpenJDK][10]), you can run your application from a terminal:

```
$ java dice.java
You rolled a 12
```
In this example, there is no explicit constructor. It's a perfectly valid and legal Java application, but it's a little limited. For instance, if you set your game of Dungeons and Dragons aside for the evening to play some Yahtzee, you would need 6-sided dice. In this simple example, it wouldn't be that much trouble to change the code, but that's not a realistic option in complex code. One way you could solve this problem is with a constructor.

### Constructors in action
The **DiceRoller** class in this example project represents a virtual dice factory: When it's called, it creates a virtual die that is then "rolled." However, by writing a custom constructor, you can make your Dice Roller application ask what kind of die you'd like to emulate.

Most of the code is the same, with the exception of a constructor accepting some number of sides. This number doesn't exist yet, but it will be created later.
```
import java.util.Random;

public class DiceRoller {
    private int dice;
    private int roll;
    private Random rand = new Random();

    // constructor
    public DiceRoller(int sides) {
        dice = sides;
    }
```
The function emulating a roll remains unchanged:
```
    public void Roller() {
        roll = rand.nextInt(dice);
        roll += 1;
        System.out.println(roll);
    }
```
The main block of code feeds whatever arguments you provide when running the application. Were this a complex application, you would parse the arguments carefully and check for unexpected results, but for this sample, the only precaution taken is converting the argument string to an integer type:
```
    public static void main(String[] args) {
        System.out.printf("You rolled a ");
        DiceRoller App = new DiceRoller( Integer.parseInt(args[0]) );
        App.Roller();
    }
}
```
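As noted above, a more complex application would validate its arguments instead of calling **Integer.parseInt** blindly. Here is a hypothetical sketch of what that validation could look like (the **SafeDice** class name and the **parseSides** helper are invented for illustration; they are not part of the article's code):

```java
// Hypothetical sketch: validate the command-line argument before using it.
public class SafeDice {

    // Returns a validated side count, or -1 if the input is not usable.
    static int parseSides(String arg) {
        try {
            int sides = Integer.parseInt(arg);
            return (sides >= 2) ? sides : -1; // a die needs at least 2 sides
        } catch (NumberFormatException e) {
            return -1; // not a number at all
        }
    }

    public static void main(String[] args) {
        if (args.length != 1 || parseSides(args[0]) < 0) {
            System.out.println("Usage: java SafeDice.java <sides>");
            return;
        }
        System.out.println("Rolling a " + parseSides(args[0]) + "-sided die");
    }
}
```

With a check like this, bad input (a word, a negative number, a missing argument) produces a usage message rather than a crash from an uncaught **NumberFormatException**.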
Launch the application and provide the number of sides you want your die to have:
```
$ java dice.java 20
You rolled a 10
$ java dice.java 6
You rolled a 2
$ java dice.java 100
You rolled a 44
```
The constructor has accepted your input, so when the class instance is created, it is created with the **sides** variable set to whatever number the user dictates.

Constructors are powerful components of programming. Practice using them to unlock the full potential of Java.
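Constructors can also be overloaded, so a class offers both a default and a parameterized way to create instances. The following is a hypothetical extension of the article's idea, not code from it (the **Dice** class name is invented; one constructor delegates to the other with **this(...)**):

```java
import java.util.Random;

// Hypothetical extension: overloaded constructors give callers a choice.
public class Dice {
    private int sides;
    private Random rand = new Random();

    // No-argument constructor: delegate to the parameterized one.
    public Dice() {
        this(20); // default to a 20-sided die
    }

    // Parameterized constructor: the caller picks the number of sides.
    public Dice(int sides) {
        this.sides = sides;
    }

    public int roll() {
        return rand.nextInt(sides) + 1; // 1..sides inclusive
    }

    public static void main(String[] args) {
        System.out.println("d20 roll: " + new Dice().roll());
        System.out.println("d6 roll: " + new Dice(6).roll());
    }
}
```

With this pattern, **new Dice()** keeps the original 20-sided behavior while **new Dice(6)** covers the Yahtzee case, without duplicating any initialization logic.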
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/what-java-constructor

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth/users/ashleykoree
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag
[2]: https://opensource.com/resources/python
[3]: https://opensource.com/article/17/4/pyqt-versus-wxpython
[4]: https://opensource.com/resources/java
[5]: https://openjdk.java.net/install/index.html
[6]: https://opensource.com/article/19/5/free-rpg-day
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+random
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[10]: https://openjdk.java.net/
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer
@ -0,0 +1,281 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Check Linux Package Version Before Installing It)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-check-linux-package-version-before-installing-it/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
How To Check Linux Package Version Before Installing It
|
||||
======
|
||||
|
||||
![Check Linux Package Version][1]
|
||||
|
||||
Most of you will know how to [**find the version of an installed package**][2] in Linux. But what would you do to find the version of a package that is not installed in the first place? No problem! This guide describes how to check a Linux package's version before installing it on Debian and its derivatives like Ubuntu. This small tip might be helpful for anyone wondering which version they would get before installing a package.
|
||||
|
||||
### Check Linux Package Version Before Installing It
|
||||
|
||||
On DEB-based systems, there are many ways to find a package's version even if it is not installed yet. Here are a few methods.
|
||||
|
||||
##### Method 1 – Using Apt
|
||||
|
||||
For a quick and dirty way to check a package's version, simply run:
|
||||
|
||||
```
|
||||
$ apt show <package-name>
|
||||
```
|
||||
|
||||
**Example:**
|
||||
|
||||
```
|
||||
$ apt show vim
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
Package: vim
|
||||
Version: 2:8.0.1453-1ubuntu1.1
|
||||
Priority: optional
|
||||
Section: editors
|
||||
Origin: Ubuntu
|
||||
Maintainer: Ubuntu Developers <[email protected]>
|
||||
Original-Maintainer: Debian Vim Maintainers <[email protected]>
|
||||
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
|
||||
Installed-Size: 2,852 kB
|
||||
Provides: editor
|
||||
Depends: vim-common (= 2:8.0.1453-1ubuntu1.1), vim-runtime (= 2:8.0.1453-1ubuntu1.1), libacl1 (>= 2.2.51-8), libc6 (>= 2.15), libgpm2 (>= 1.20.7), libpython3.6 (>= 3.6.5), libselinux1 (>= 1.32), libtinfo5 (>= 6)
|
||||
Suggests: ctags, vim-doc, vim-scripts
|
||||
Homepage: https://vim.sourceforge.io/
|
||||
Task: cloud-image, server
|
||||
Supported: 5y
|
||||
Download-Size: 1,152 kB
|
||||
APT-Sources: http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
|
||||
Description: Vi IMproved - enhanced vi editor
|
||||
Vim is an almost compatible version of the UNIX editor Vi.
|
||||
.
|
||||
Many new features have been added: multi level undo, syntax
|
||||
highlighting, command line history, on-line help, filename
|
||||
completion, block operations, folding, Unicode support, etc.
|
||||
.
|
||||
This package contains a version of vim compiled with a rather
|
||||
standard set of features. This package does not provide a GUI
|
||||
version of Vim. See the other vim-* packages if you need more
|
||||
(or less).
|
||||
|
||||
N: There is 1 additional record. Please use the '-a' switch to see it
|
||||
```
|
||||
|
||||
As you can see in the above output, the “apt show” command displays many important details of the package, such as:
|
||||
|
||||
1. package name,
|
||||
2. version,
|
||||
3. origin (from where the vim comes from),
|
||||
4. maintainer,
|
||||
5. home page of the package,
|
||||
6. dependencies,
|
||||
7. download size,
|
||||
8. description,
|
||||
9. and more.
|
||||
|
||||
|
||||
|
||||
So, the version of the Vim package available in the Ubuntu repositories is **8.0.1453**. This is the version I would get if I installed it on my Ubuntu system.
|
||||
|
||||
Alternatively, use the **“apt policy”** command if you prefer shorter output:
|
||||
|
||||
```
|
||||
$ apt policy vim
|
||||
vim:
|
||||
Installed: (none)
|
||||
Candidate: 2:8.0.1453-1ubuntu1.1
|
||||
Version table:
|
||||
2:8.0.1453-1ubuntu1.1 500
|
||||
500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
|
||||
500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages
|
||||
2:8.0.1453-1ubuntu1 500
|
||||
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
|
||||
```
|
||||
|
||||
Or even shorter:
|
||||
|
||||
```
|
||||
$ apt list vim
|
||||
Listing... Done
|
||||
vim/bionic-updates,bionic-security 2:8.0.1453-1ubuntu1.1 amd64
|
||||
N: There is 1 additional version. Please use the '-a' switch to see it
|
||||
```
|
||||
|
||||
**Apt** is the default package manager in recent Ubuntu versions, so this command alone is enough to find detailed information about a package. It doesn't matter whether the given package is installed or not; the command simply lists the package's version along with all the other details.
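If you want just the candidate version programmatically, the field can be pulled out of `apt policy`-style output with awk. The sketch below runs on a captured sample (copied from the output above) so it works anywhere; on a real Debian/Ubuntu system you would pipe `apt policy <package>` in instead:

```shell
# Extract the candidate version from `apt policy`-style output.
# A captured sample is used here so the snippet runs anywhere;
# on a real system, use: apt policy vim | awk '/Candidate:/ {print $2}'
sample='vim:
  Installed: (none)
  Candidate: 2:8.0.1453-1ubuntu1.1'

candidate=$(printf '%s\n' "$sample" | awk '/Candidate:/ {print $2}')
echo "$candidate"
```

This prints `2:8.0.1453-1ubuntu1.1`, which is handy in scripts that need to compare versions before deciding whether to install.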
|
||||
|
||||
##### Method 2 – Using Apt-get
|
||||
|
||||
To find a package's version without installing it, we can use the **apt-get** command with the **-s** option.
|
||||
|
||||
```
|
||||
$ apt-get -s install vim
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
NOTE: This is only a simulation!
|
||||
apt-get needs root privileges for real execution.
|
||||
Keep also in mind that locking is deactivated,
|
||||
so don't depend on the relevance to the real current situation!
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
Suggested packages:
|
||||
ctags vim-doc vim-scripts
|
||||
The following NEW packages will be installed:
|
||||
vim
|
||||
0 upgraded, 1 newly installed, 0 to remove and 45 not upgraded.
|
||||
Inst vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64])
|
||||
Conf vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64])
|
||||
```
|
||||
|
||||
Here, the -s option indicates **simulation**. As you can see in the output, it performs no action. Instead, it runs a simulation to let you know what would happen if you installed the Vim package.
|
||||
|
||||
You can substitute the “install” option with “upgrade” to see what will happen when you upgrade a package.
|
||||
|
||||
```
|
||||
$ apt-get -s upgrade vim
|
||||
```
|
||||
|
||||
##### Method 3 – Using Aptitude
|
||||
|
||||
**Aptitude** is an ncurses- and command-line-based front-end to the APT package manager in Debian and its derivatives.
|
||||
|
||||
To find the package version with Aptitude, simply run:
|
||||
|
||||
```
|
||||
$ aptitude versions vim
|
||||
p 2:8.0.1453-1ubuntu1 bionic 500
|
||||
p 2:8.0.1453-1ubuntu1.1 bionic-security,bionic-updates 500
|
||||
```
|
||||
|
||||
You can also use the simulation option (**-s**) to see what would happen if you installed or upgraded a package.
|
||||
|
||||
```
|
||||
$ aptitude -V -s install vim
|
||||
The following NEW packages will be installed:
|
||||
vim [2:8.0.1453-1ubuntu1.1]
|
||||
0 packages upgraded, 1 newly installed, 0 to remove and 45 not upgraded.
|
||||
Need to get 1,152 kB of archives. After unpacking 2,852 kB will be used.
|
||||
Would download/install/remove packages.
|
||||
```
|
||||
|
||||
Here, **-V** flag is used to display detailed information of the package version.
|
||||
|
||||
Similarly, just substitute “install” with “upgrade” to see what would happen if you upgraded a package.
|
||||
|
||||
```
|
||||
$ aptitude -V -s upgrade vim
|
||||
```
|
||||
|
||||
Another way to find a non-installed package's version using the Aptitude command is:
|
||||
|
||||
```
|
||||
$ aptitude search vim -F "%c %p %d %V"
|
||||
```
|
||||
|
||||
Here,
|
||||
|
||||
* **-F** is used to specify which format should be used to display the output,
|
||||
* **%c** – status of the given package (installed or not installed),
|
||||
* **%p** – name of the package,
|
||||
* **%d** – description of the package,
|
||||
* **%V** – version of the package.
|
||||
|
||||
|
||||
|
||||
This is helpful when you don't know the full package name. This command will list all packages that contain the given string (i.e., vim).
|
||||
|
||||
Here is the sample output of the above command:
|
||||
|
||||
```
|
||||
[...]
|
||||
p vim Vi IMproved - enhanced vi editor 2:8.0.1453-1ub
|
||||
p vim-tlib Some vim utility functions 1.23-1
|
||||
p vim-ultisnips snippet solution for Vim 3.1-3
|
||||
p vim-vimerl Erlang plugin for Vim 1.4.1+git20120
|
||||
p vim-vimerl-syntax Erlang syntax for Vim 1.4.1+git20120
|
||||
p vim-vimoutliner script for building an outline editor on top of Vim 0.3.4+pristine
|
||||
p vim-voom Vim two-pane outliner 5.2-1
|
||||
p vim-youcompleteme fast, as-you-type, fuzzy-search code completion engine for Vim 0+20161219+git
|
||||
```
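To make the meaning of the format specifiers concrete, the same column layout can be mimicked with plain printf (the values below are sample data, not live aptitude output):

```shell
# Mimic the `aptitude -F "%c %p %d %V"` column layout with printf,
# using sample values in place of live package data.
printf '%s %-18s %-36s %s\n' \
    p vim 'Vi IMproved - enhanced vi editor' '2:8.0.1453-1ubuntu1.1'
```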
|
||||
|
||||
##### Method 4 – Using Apt-cache
|
||||
|
||||
The **apt-cache** command is used to query the APT cache on Debian-based systems. It is useful for performing many operations on APT's package cache. One fine example is that we can [**list installed applications from a certain repository/ppa**][3].
|
||||
|
||||
Beyond installed applications, we can also find the version of a package even if it is not installed. For instance, the following command finds the version of the Vim package:
|
||||
|
||||
```
|
||||
$ apt-cache policy vim
|
||||
```
|
||||
|
||||
Sample output:
|
||||
|
||||
```
|
||||
vim:
|
||||
Installed: (none)
|
||||
Candidate: 2:8.0.1453-1ubuntu1.1
|
||||
Version table:
|
||||
2:8.0.1453-1ubuntu1.1 500
|
||||
500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
|
||||
500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages
|
||||
2:8.0.1453-1ubuntu1 500
|
||||
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
|
||||
```
|
||||
|
||||
As you can see in the above output, Vim is not installed. If you wanted to install it, you would get version **8.0.1453**. It also displays which repository the vim package comes from.
|
||||
|
||||
##### Method 5 – Using Apt-show-versions
|
||||
|
||||
The **apt-show-versions** command lists installed and available package versions on Debian and Debian-based systems. It also displays the list of all upgradeable packages. It is quite handy if you have a mixed stable/testing environment: for instance, if you have enabled both the stable and testing repositories, you can easily list the applications from testing and upgrade all packages in testing.
|
||||
|
||||
Apt-show-versions is not installed by default. You need to install it with the command:
|
||||
|
||||
```
|
||||
$ sudo apt-get install apt-show-versions
|
||||
```
|
||||
|
||||
Once installed, run the following command to find the version of a package, for example Vim:
|
||||
|
||||
```
|
||||
$ apt-show-versions -a vim
|
||||
vim:amd64 2:8.0.1453-1ubuntu1 bionic archive.ubuntu.com
|
||||
vim:amd64 2:8.0.1453-1ubuntu1.1 bionic-security security.ubuntu.com
|
||||
vim:amd64 2:8.0.1453-1ubuntu1.1 bionic-updates archive.ubuntu.com
|
||||
vim:amd64 not installed
|
||||
```
|
||||
|
||||
Here, the **-a** switch prints all available versions of the given package.
|
||||
|
||||
If the given package is already installed, you need not use the **-a** option. In that case, simply run:
|
||||
|
||||
```
|
||||
$ apt-show-versions vim
|
||||
```
|
||||
|
||||
And, that’s all. If you know any other methods, please share them in the comment section below. I will check and update this guide.
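As a closing sketch, the script below reports which of the version-lookup tools covered in this guide are actually installed on the current system, so you know which methods are open to you:

```shell
# Report which of the version-lookup tools from this guide are installed.
for tool in apt apt-get apt-cache aptitude apt-show-versions; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: available"
    else
        echo "$tool: not installed"
    fi
done
```

`command -v` is POSIX, so the loop runs on any Debian-family shell without extra dependencies.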
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-check-linux-package-version-before-installing-it/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Check-Linux-Package-Version-720x340.png
|
||||
[2]: https://www.ostechnix.com/find-package-version-linux/
|
||||
[3]: https://www.ostechnix.com/list-installed-packages-certain-repository-linux/
|
@ -0,0 +1,608 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Find Linux System Details Using inxi)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-find-your-system-details-using-inxi/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
How To Find Linux System Details Using inxi
|
||||
======
|
||||
|
||||
![find Linux system details using inxi][1]
|
||||
|
||||
**Inxi** is a free, open source, full-featured command-line system information tool. It shows system hardware, CPU, drivers, Xorg, desktop, kernel, GCC version(s), processes, RAM usage, and a wide variety of other useful information. Be it a hard disk or CPU, the motherboard, or the complete details of the entire system, inxi displays it accurately in seconds. Since it is a CLI tool, you can use it on both desktop and server editions. Inxi is available in the default repositories of most Linux distributions and some BSD systems.
|
||||
|
||||
### Install inxi
|
||||
|
||||
**On Arch Linux and derivatives:**
|
||||
|
||||
To install inxi on Arch Linux or its derivatives like Antergos and Manjaro Linux, run:
|
||||
|
||||
```
|
||||
$ sudo pacman -S inxi
|
||||
```
|
||||
|
||||
In case inxi is not available in the default repositories, try installing it from the AUR using any AUR helper program.
|
||||
|
||||
Using [**Yay**][2]:
|
||||
|
||||
```
|
||||
$ yay -S inxi
|
||||
```
|
||||
|
||||
**On Debian / Ubuntu and derivatives:**
|
||||
|
||||
```
|
||||
$ sudo apt-get install inxi
|
||||
```
|
||||
|
||||
**On Fedora / RHEL / CentOS / Scientific Linux:**
|
||||
|
||||
inxi is available in the Fedora default repositories. So, just run the following command to install it straight away.
|
||||
|
||||
```
|
||||
$ sudo dnf install inxi
|
||||
```
|
||||
|
||||
In RHEL and its clones like CentOS and Scientific Linux, you need to add the EPEL repository and then install inxi.
|
||||
|
||||
To install EPEL repository, just run:
|
||||
|
||||
```
|
||||
$ sudo yum install epel-release
|
||||
```
|
||||
|
||||
After installing EPEL repository, install inxi using command:
|
||||
|
||||
```
|
||||
$ sudo yum install inxi
|
||||
```
|
||||
|
||||
**On SUSE/openSUSE:**
|
||||
|
||||
```
|
||||
$ sudo zypper install inxi
|
||||
```
|
||||
|
||||
### Find Linux System Details Using inxi
|
||||
|
||||
inxi requires some additional programs to operate properly. They are normally installed along with inxi. However, if they are not installed automatically, you need to find and install them yourself.
|
||||
|
||||
To list all required programs, run:
|
||||
|
||||
```
|
||||
$ inxi --recommends
|
||||
```
|
||||
|
||||
If you see any missing programs, install them before you start using inxi.
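The same kind of presence check can be scripted by hand with `command -v`. The tool names below are purely illustrative; `inxi --recommends` reports the real dependency list:

```shell
# Check whether a few helper tools are present on the PATH.
# The tool list is illustrative; `inxi --recommends` gives the real one.
for tool in uname df lspci; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found: $tool"
    else
        echo "missing: $tool"
    fi
done
```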
|
||||
|
||||
Now, let us see how to use it to reveal Linux system details. inxi usage is pretty simple and straightforward.
|
||||
|
||||
Open up your Terminal and run the following command to print a short summary of CPU, memory, hard drive and kernel information:
|
||||
|
||||
```
|
||||
$ inxi
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
CPU: Dual Core Intel Core i3-2350M (-MT MCP-) speed/min/max: 798/800/2300 MHz
|
||||
Kernel: 5.1.2-arch1-1-ARCH x86_64 Up: 1h 31m Mem: 2800.5/7884.2 MiB (35.5%)
|
||||
Storage: 465.76 GiB (80.8% used) Procs: 163 Shell: bash 5.0.7 inxi: 3.0.34
|
||||
```
|
||||
|
||||
[![Find Linux System Details Using inxi][1]][3]
|
||||
|
||||
Find Linux System Details Using inxi
|
||||
|
||||
As you can see, Inxi displays the following details of my Arch Linux desktop:
|
||||
|
||||
1. CPU type,
|
||||
2. CPU speed,
|
||||
3. Kernel details,
|
||||
4. Uptime,
|
||||
5. Memory details (Total and used memory),
|
||||
6. Hard disk size along with current usage,
|
||||
7. Procs,
|
||||
8. Default shell details,
|
||||
9. Inxi version.
|
||||
|
||||
|
||||
|
||||
To display a full summary, use the **“-F”** switch as shown below.
|
||||
|
||||
```
|
||||
$ inxi -F
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
System: Host: sk Kernel: 5.1.2-arch1-1-ARCH x86_64 bits: 64 Desktop: Deepin 15.10.1 Distro: Arch Linux
|
||||
Machine: Type: Portable System: Dell product: Inspiron N5050 v: N/A serial: <root required>
|
||||
Mobo: Dell model: 01HXXJ v: A05 serial: <root required> BIOS: Dell v: A05 date: 08/03/2012
|
||||
Battery: ID-1: BAT0 charge: 39.0 Wh condition: 39.0/48.8 Wh (80%)
|
||||
CPU: Topology: Dual Core model: Intel Core i3-2350M bits: 64 type: MT MCP L2 cache: 3072 KiB
|
||||
Speed: 798 MHz min/max: 800/2300 MHz Core speeds (MHz): 1: 798 2: 798 3: 798 4: 798
|
||||
Graphics: Device-1: Intel 2nd Generation Core Processor Family Integrated Graphics driver: i915 v: kernel
|
||||
Display: x11 server: X.Org 1.20.4 driver: modesetting unloaded: vesa resolution: 1366x768~60Hz
|
||||
Message: Unable to show advanced data. Required tool glxinfo missing.
|
||||
Audio: Device-1: Intel 6 Series/C200 Series Family High Definition Audio driver: snd_hda_intel
|
||||
Sound Server: ALSA v: k5.1.2-arch1-1-ARCH
|
||||
Network: Device-1: Realtek RTL810xE PCI Express Fast Ethernet driver: r8169
|
||||
IF: enp5s0 state: down mac: 45:c8:gh:89:b6:45
|
||||
Device-2: Qualcomm Atheros AR9285 Wireless Network Adapter driver: ath9k
|
||||
IF: wlp9s0 state: up mac: c3:11:96:22:87:3g
|
||||
Device-3: Qualcomm Atheros AR3011 Bluetooth type: USB driver: btusb
|
||||
Drives: Local Storage: total: 465.76 GiB used: 376.31 GiB (80.8%)
|
||||
ID-1: /dev/sda vendor: Seagate model: ST9500325AS size: 465.76 GiB
|
||||
Partition: ID-1: / size: 456.26 GiB used: 376.25 GiB (82.5%) fs: ext4 dev: /dev/sda2
|
||||
ID-2: /boot size: 92.8 MiB used: 62.9 MiB (67.7%) fs: ext4 dev: /dev/sda1
|
||||
ID-3: swap-1 size: 2.00 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/sda3
|
||||
Sensors: System Temperatures: cpu: 58.0 C mobo: N/A
|
||||
Fan Speeds (RPM): cpu: 3445
|
||||
Info: Processes: 169 Uptime: 1h 38m Memory: 7.70 GiB used: 2.94 GiB (38.2%) Shell: bash inxi: 3.0.34
|
||||
```
|
||||
|
||||
When used on IRC, inxi automatically filters out your network device MAC address, WAN and LAN IPs, your /home username directory in partitions, and a few other items in order to maintain basic privacy and security. You can also trigger this filtering manually with the **-z** option, like below.
|
||||
|
||||
```
|
||||
$ inxi -Fz
|
||||
```
|
||||
|
||||
To override the IRC filter, use the **-Z** option.
|
||||
|
||||
```
|
||||
$ inxi -FZ
|
||||
```
|
||||
|
||||
This can be useful for debugging network connection issues online in a private chat, for example. Please be very careful while using the -Z option: it will display your MAC addresses. You shouldn't share results obtained with the -Z option in public forums.
|
||||
|
||||
##### Displaying device-specific details
|
||||
|
||||
When running inxi without any options, you will get basic details of your system, such as CPU, memory, kernel, uptime, hard disk, etc.
|
||||
|
||||
You can, of course, narrow down the result to show specific device details using various options. Inxi has numerous options (both uppercase and lowercase).
|
||||
|
||||
First, we will see example commands for all uppercase options in alphabetical order. Some commands may require root/sudo privileges to get actual data.
|
||||
|
||||
###### **Uppercase options**
|
||||
|
||||
**1\. Display Audio/Sound card details**
|
||||
|
||||
To show your audio/sound card information along with the sound card driver, use the **-A** option.
|
||||
|
||||
```
|
||||
$ inxi -A
|
||||
Audio: Device-1: Intel 6 Series/C200 Series Family High Definition Audio driver: snd_hda_intel
|
||||
Sound Server: ALSA v: k5.1.2-arch1-1-ARCH
|
||||
```
|
||||
|
||||
**2\. Display Battery details**
|
||||
|
||||
To show your system's battery details, including the current charge and condition, use the **-B** option.
|
||||
|
||||
```
|
||||
$ inxi -B
|
||||
Battery: ID-1: BAT0 charge: 39.0 Wh condition: 39.0/48.8 Wh (80%)
|
||||
```
|
||||
|
||||
**3\. Display CPU details**
|
||||
|
||||
To show complete CPU details, including the number of cores, CPU model, cache, clock speed, and min/max speed, use the **-C** option.
|
||||
|
||||
```
|
||||
$ inxi -C
|
||||
CPU: Topology: Dual Core model: Intel Core i3-2350M bits: 64 type: MT MCP L2 cache: 3072 KiB
|
||||
Speed: 798 MHz min/max: 800/2300 MHz Core speeds (MHz): 1: 798 2: 798 3: 798 4: 798
|
||||
```
|
||||
|
||||
**4\. Display hard disk details**
|
||||
|
||||
To show information about your hard drive, such as disk type, vendor, device ID, model, size, total disk space, and used percentage, use the **-D** option.
|
||||
|
||||
```
|
||||
$ inxi -D
|
||||
Drives: Local Storage: total: 465.76 GiB used: 376.31 GiB (80.8%)
|
||||
ID-1: /dev/sda vendor: Seagate model: ST9500325AS size: 465.76 GiB
|
||||
```
|
||||
|
||||
**5\. Display Graphics details**
|
||||
|
||||
To show details about the graphics card, including the driver, vendor, display server, and resolution, use the **-G** option.
|
||||
|
||||
```
|
||||
$ inxi -G
|
||||
Graphics: Device-1: Intel 2nd Generation Core Processor Family Integrated Graphics driver: i915 v: kernel
|
||||
Display: x11 server: X.Org 1.20.4 driver: modesetting unloaded: vesa resolution: 1366x768~60Hz
|
||||
Message: Unable to show advanced data. Required tool glxinfo missing.
|
||||
```
|
||||
|
||||
**6\. Display details about processes, uptime, memory, inxi version**
|
||||
|
||||
To show information such as the number of processes, total uptime, total and used memory, shell details, and the inxi version, use the **-I** option.
|
||||
|
||||
```
|
||||
$ inxi -I
|
||||
Info: Processes: 170 Uptime: 5h 47m Memory: 7.70 GiB used: 3.27 GiB (42.4%) Shell: bash inxi: 3.0.34
|
||||
```
|
||||
|
||||
**7\. Display Motherboard details**
|
||||
|
||||
To show details about your machine, such as the manufacturer, motherboard, and BIOS, use the **-M** option.
|
||||
|
||||
```
|
||||
$ inxi -M
|
||||
Machine: Type: Portable System: Dell product: Inspiron N5050 v: N/A serial: <root required>
|
||||
Mobo: Dell model: 034ygt v: A018 serial: <root required> BIOS: Dell v: A001 date: 09/04/2015
|
||||
```
|
||||
|
||||
**8\. Display network card details**
|
||||
|
||||
To show information about your network card, including the vendor, driver, and number of network interfaces, use the **-N** option.
|
||||
|
||||
```
|
||||
$ inxi -N
|
||||
Network: Device-1: Realtek RTL810xE PCI Express Fast Ethernet driver: r8169
|
||||
Device-2: Qualcomm Atheros AR9285 Wireless Network Adapter driver: ath9k
|
||||
Device-3: Qualcomm Atheros AR3011 Bluetooth type: USB driver: btusb
|
||||
```
|
||||
|
||||
If you want advanced details of the network cards, such as the MAC address, speed, and NIC state, use the **-n** option.
|
||||
|
||||
```
|
||||
$ inxi -n
|
||||
```
|
||||
|
||||
Please be careful when sharing these details on public forums.
|
||||
|
||||
**9\. Display Partition details**
|
||||
|
||||
To display basic partition information, use the **-P** option.
|
||||
|
||||
```
|
||||
$ inxi -P
|
||||
Partition: ID-1: / size: 456.26 GiB used: 376.25 GiB (82.5%) fs: ext4 dev: /dev/sda2
|
||||
ID-2: /boot size: 92.8 MiB used: 62.9 MiB (67.7%) fs: ext4 dev: /dev/sda1
|
||||
ID-3: swap-1 size: 2.00 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/sda3
|
||||
```
|
||||
|
||||
To show full partition information, including mount points, use the **-p** option.
|
||||
|
||||
```
|
||||
$ inxi -p
|
||||
```
|
||||
|
||||
**10\. Display RAID details**
|
||||
|
||||
To show RAID info, use the **-R** option.
|
||||
|
||||
```
|
||||
$ inxi -R
|
||||
```
|
||||
|
||||
**11\. Display system details**
|
||||
|
||||
To show Linux system information such as the hostname, kernel, desktop environment, and OS version, use the **-S** option.
|
||||
|
||||
```
|
||||
$ inxi -S
|
||||
System: Host: sk Kernel: 5.1.2-arch1-1-ARCH x86_64 bits: 64 Desktop: Deepin 15.10.1 Distro: Arch Linux
|
||||
```
|
||||
|
||||
**12\. Displaying weather details**
|
||||
|
||||
Inxi is not just for finding hardware details. It is useful for getting other information too.
|
||||
|
||||
For example, you can display the weather details of a given location. To do so, run inxi with the **-W** option as shown below.
|
||||
|
||||
```
|
||||
$ inxi -W 95623,us
|
||||
Weather: Temperature: 21.1 C (70 F) Conditions: Scattered clouds Current Time: Tue 11 Jun 2019 04:34:35 AM PDT
|
||||
Source: WeatherBit.io
|
||||
```
|
||||
|
||||
Please note that you should use only ASCII letters in city/state/country names to get valid results.
|
||||
|
||||
###### Lowercase options
|
||||
|
||||
**1\. Display basic system details**
|
||||
|
||||
To show only a basic summary of your system details, use the **-b** option.
|
||||
|
||||
```
|
||||
$ inxi -b
|
||||
```
|
||||
|
||||
Alternatively, you can use this command:
|
||||
|
||||
Both serve the same purpose.
|
||||
|
||||
```
|
||||
$ inxi -v 2
|
||||
```
|
||||
|
||||
**2\. Set color scheme**
|
||||
|
||||
We can set different color schemes for the inxi output using the **-c** option. You can set the color scheme number from **0** to **42**. If no scheme number is supplied, **0** is assumed.
|
||||
|
||||
Here is inxi output with and without **-c** option.
|
||||
|
||||
[![inxi output without color scheme][1]][4]
|
||||
|
||||
inxi output without color scheme
|
||||
|
||||
As you can see, when we run inxi with the -c 0 option, the color scheme is disabled. This is useful for turning off colored output when redirecting clean output, without escape codes, to a text file.
|
||||
|
||||
Similarly, we can use other color scheme values.
|
||||
|
||||
```
|
||||
$ inxi -c10
|
||||
|
||||
$ inxi -c42
|
||||
```
|
||||
|
||||
**3\. Display optical drive details**
|
||||
|
||||
We can show optical drive details along with local hard drive details using the **-d** option.
|
||||
|
||||
```
|
||||
$ inxi -d
|
||||
Drives: Local Storage: total: 465.76 GiB used: 376.31 GiB (80.8%)
|
||||
ID-1: /dev/sda vendor: Seagate model: ST9500325AS size: 465.76 GiB
|
||||
Optical-1: /dev/sr0 vendor: PLDS model: DVD+-RW DS-8A8SH dev-links: cdrom
|
||||
Features: speed: 24 multisession: yes audio: yes dvd: yes rw: cd-r,cd-rw,dvd-r,dvd-ram
|
||||
```
|
||||
|
||||
**4\. Display all CPU flags**
|
||||
|
||||
To show all CPU flags used, run:
|
||||
|
||||
```
|
||||
$ inxi -f
|
||||
```
|
||||
|
||||
**5\. Display IP details**
|
||||
|
||||
To show the WAN and local IP addresses along with network card details such as the device vendor, driver, MAC address, and state, use the **-i** option.
|
||||
|
||||
```
|
||||
$ inxi -i
|
||||
```
|
||||
|
||||
**6\. Display partition labels**
|
||||
|
||||
If you have set labels for the partitions, you can view them using the **-l** option.
|
||||
|
||||
```
|
||||
$ inxi -l
|
||||
```
|
||||
|
||||
You can also view the labels of all partitions along with their mount points using the command:
|
||||
|
||||
```
|
||||
$ inxi -pl
|
||||
```
|
||||
|
||||
**7\. Display Memory details**
|
||||
|
||||
We can display memory details, such as the total installed RAM, how much memory is used, the number of available DIMM slots, the maximum supported RAM, and how much RAM is currently installed in each slot, using the **-m** option.
|
||||
|
||||
```
|
||||
$ sudo inxi -m
|
||||
[sudo] password for sk:
|
||||
Memory: RAM: total: 7.70 GiB used: 2.26 GiB (29.3%)
|
||||
Array-1: capacity: 16 GiB slots: 2 EC: None
|
||||
Device-1: DIMM_A size: 4 GiB speed: 1067 MT/s
|
||||
Device-2: DIMM_B size: 4 GiB speed: 1067 MT/s
|
||||
```
|
||||
|
||||
**8\. Display unmounted partition details**
|
||||
|
||||
To show unmounted partition details, use the **-o** option.
|
||||
|
||||
```
|
||||
$ inxi -o
|
||||
```
|
||||
|
||||
If there are no unmounted partitions in your system, you will see output like below.
|
||||
|
||||
```
|
||||
Unmounted: Message: No unmounted partitions found.
|
||||
```
|
||||
|
||||
**9\. Display list of repositories**
|
||||
|
||||
To display the list of repositories in your system, use the **-r** option.
|
||||
|
||||
```
|
||||
$ inxi -r
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
Repos: Active apt sources in file: /etc/apt/sources.list
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial main restricted
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates main restricted
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial universe
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates universe
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial multiverse
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates multiverse
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial-backports main restricted universe multiverse
|
||||
deb http://security.ubuntu.com/ubuntu xenial-security main restricted
|
||||
deb http://security.ubuntu.com/ubuntu xenial-security universe
|
||||
deb http://security.ubuntu.com/ubuntu xenial-security multiverse
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
**Suggested read:**
|
||||
|
||||
* [**How To Find The List Of Installed Repositories From Commandline In Linux**][5]
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
**10\. Show system temperature, fan speed details**
|
||||
|
||||
Inxi is capable of finding motherboard/CPU/GPU temperatures and fan speeds.
|
||||
|
||||
```
|
||||
$ inxi -s
|
||||
Sensors: System Temperatures: cpu: 60.0 C mobo: N/A
|
||||
Fan Speeds (RPM): cpu: 3456
|
||||
```
|
||||
|
||||
Please note that inxi requires sensors to read system temperatures. Make sure **lm_sensors** is installed and correctly configured on your system. For more details about lm_sensors, check the following guide.
|
||||
|
||||
* [**How To View CPU Temperature On Linux**][6]
|
||||
|
||||
|
||||
|
||||
**11\. Display details about processes**
|
||||
|
||||
To show the top 5 processes consuming the most CPU and memory, simply run:
|
||||
|
||||
```
|
||||
$ inxi -t
|
||||
Processes: CPU top: 5
|
||||
1: cpu: 14.3% command: firefox pid: 15989
|
||||
2: cpu: 10.5% command: firefox pid: 13487
|
||||
3: cpu: 7.1% command: firefox pid: 15062
|
||||
4: cpu: 3.1% command: xorg pid: 13493
|
||||
5: cpu: 3.0% command: firefox pid: 14954
|
||||
System RAM: total: 7.70 GiB used: 2.99 GiB (38.8%)
|
||||
Memory top: 5
|
||||
1: mem: 1115.8 MiB (14.1%) command: firefox pid: 15989
|
||||
2: mem: 606.6 MiB (7.6%) command: firefox pid: 13487
|
||||
3: mem: 339.3 MiB (4.3%) command: firefox pid: 13630
|
||||
4: mem: 303.1 MiB (3.8%) command: firefox pid: 18617
|
||||
5: mem: 260.1 MiB (3.2%) command: firefox pid: 15062
|
||||
```
|
||||
|
||||
We can also sort this output by either CPU or memory usage.
|
||||
|
||||
For instance, to find which top 5 processes are consuming the most memory, use the following command:
|
||||
|
||||
```
|
||||
$ inxi -t m
|
||||
Processes: System RAM: total: 7.70 GiB used: 2.73 GiB (35.4%)
|
||||
Memory top: 5
|
||||
1: mem: 966.1 MiB (12.2%) command: firefox pid: 15989
|
||||
2: mem: 468.2 MiB (5.9%) command: firefox pid: 13487
|
||||
3: mem: 347.9 MiB (4.4%) command: firefox pid: 13708
|
||||
4: mem: 306.7 MiB (3.8%) command: firefox pid: 13630
|
||||
5: mem: 247.2 MiB (3.1%) command: firefox pid: 15062
|
||||
```

To sort the top 5 processes based on CPU usage, run:

```
$ inxi -t c
Processes: CPU top: 5
1: cpu: 14.9% command: firefox pid: 15989
2: cpu: 10.6% command: firefox pid: 13487
3: cpu: 7.0% command: firefox pid: 15062
4: cpu: 3.1% command: xorg pid: 13493
5: cpu: 2.9% command: firefox pid: 14954
```

By default, Inxi displays the top 5 processes. You can change that number, for example to 10, as shown below.

```
$ inxi -t cm10
Processes: CPU top: 10
1: cpu: 14.9% command: firefox pid: 15989
2: cpu: 10.6% command: firefox pid: 13487
3: cpu: 7.0% command: firefox pid: 15062
4: cpu: 3.1% command: xorg pid: 13493
5: cpu: 2.9% command: firefox pid: 14954
6: cpu: 2.8% command: firefox pid: 13630
7: cpu: 1.8% command: firefox pid: 18325
8: cpu: 1.4% command: firefox pid: 18617
9: cpu: 1.3% command: firefox pid: 13708
10: cpu: 0.8% command: firefox pid: 14427
System RAM: total: 7.70 GiB used: 2.92 GiB (37.9%)
Memory top: 10
1: mem: 1160.9 MiB (14.7%) command: firefox pid: 15989
2: mem: 475.1 MiB (6.0%) command: firefox pid: 13487
3: mem: 353.4 MiB (4.4%) command: firefox pid: 13708
4: mem: 308.0 MiB (3.9%) command: firefox pid: 13630
5: mem: 269.6 MiB (3.4%) command: firefox pid: 15062
6: mem: 249.3 MiB (3.1%) command: firefox pid: 14427
7: mem: 238.5 MiB (3.0%) command: firefox pid: 14954
8: mem: 208.2 MiB (2.6%) command: firefox pid: 18325
9: mem: 194.0 MiB (2.4%) command: firefox pid: 18617
10: mem: 143.6 MiB (1.8%) command: firefox pid: 23960
```

The above command displays the top 10 processes that consume the most CPU and memory.

To display only the top 10 processes based on memory usage, run:

```
$ inxi -t m10
```

**12\. Display partition UUID details**

To show partition UUIDs (**U**niversally **U**nique **Id**entifiers), use the **-u** option.

```
$ inxi -u
```

There are many more options yet to be covered, but these are enough to get almost all the details of your Linux box.

For more details and options, refer to the man page:

```
$ man inxi
```

* * *

**Related read:**

  * **[Neofetch – Display your Linux system’s information][7]**

* * *

The primary purpose of the Inxi tool is IRC or forum support. If you are asking for help on a forum or website and someone wants to know your system's specifications, just run inxi and copy/paste the output.
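For such support requests, a commonly used (though by no means mandatory) flag combination is `-Fxz`: full output with extra detail, and identifying data filtered out before posting publicly. A guarded sketch:

```shell
# -F full summary, -x extra detail, -z filter identifying data (IPs, serials).
# Guarded so the snippet degrades gracefully where inxi is absent.
if command -v inxi >/dev/null 2>&1; then
    inxi -Fxz
else
    echo "inxi is not installed; install it with your package manager first"
fi
```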

**Resources:**

  * [**Inxi GitHub Repository**][8]
  * [**Inxi home page**][9]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-find-your-system-details-using-inxi/

Author: [sk][a]
Topic selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux 中国](https://linux.cn/)

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[3]: http://www.ostechnix.com/wp-content/uploads/2016/08/Find-Linux-System-Details-Using-inxi.png
[4]: http://www.ostechnix.com/wp-content/uploads/2016/08/inxi-output-without-color-scheme.png
[5]: https://www.ostechnix.com/find-list-installed-repositories-commandline-linux/
[6]: https://www.ostechnix.com/view-cpu-temperature-linux/
[7]: http://www.ostechnix.com/neofetch-display-linux-systems-information/
[8]: https://github.com/smxi/inxi
[9]: http://smxi.org/docs/inxi.htm
@ -0,0 +1,89 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Graviton: A Minimalist Open Source Code Editor)
[#]: via: (https://itsfoss.com/graviton-code-editor/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Graviton: A Minimalist Open Source Code Editor
======

[Graviton][1] is a free, open source, cross-platform code editor under development. Its 16-year-old developer, Marc Espin, emphasizes that it is a "minimalist" code editor. I am not sure about that, but it does have a clean user interface, like other [modern code editors such as Atom][2].

![Graviton Code Editor Interface][3]

The developer also calls it a lightweight code editor, even though Graviton is based on [Electron][4].

Graviton has the features you would expect in any standard code editor, such as syntax highlighting and auto-completion. Since Graviton is still in beta, more features will be added in future releases.

![Graviton Code Editor with Syntax Highlighting][5]

### Features of the Graviton code editor

Some of Graviton's notable features are:

  * Syntax highlighting for a number of programming languages, using [CodeMirrorJS][6]
  * Auto-completion
  * Support for plugins and themes
  * Available in English, Spanish, and a few other European languages
  * Available for Linux, Windows, and macOS

I took a quick look at Graviton. It may not be as feature-rich as [VS Code][7] or [Brackets][8], but it is not a bad tool for some light code editing.

### Download and install Graviton

![Graviton Code Editor][9]

As mentioned above, Graviton is a cross-platform code editor available for Linux, Windows, and macOS. It is still in beta, which means more features will be added in the future and you may run into some bugs.

You can find the latest version of Graviton on its releases page. Debian and [Ubuntu users can install it from the .deb file][10]. An [AppImage][11] is provided so that it can be used on other distributions. DMG and EXE files are also available for macOS and Windows, respectively.

[Download Graviton][12]

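If you go the AppImage route, the usual two steps apply: mark the file executable, then run it. A sketch (the filename below is a placeholder; use the exact file you downloaded from the releases page):

```shell
# Typical AppImage workflow; "Graviton.AppImage" is a stand-in filename.
f="Graviton.AppImage"
touch "$f"        # stands in for the file downloaded from the releases page
chmod +x "$f"     # AppImages must be marked executable before they can run
test -x "$f" && echo "ready: ./$f"
# → ready: ./Graviton.AppImage
```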

If you are interested, you can find Graviton's source code in its GitHub repository:

[Graviton source code on GitHub][13]

If you decide to use Graviton and run into problems, please file a bug report [here][14]. If you use GitHub, you may want to star the Graviton project. It boosts the developer's morale to know that more users appreciate his efforts.

If you have read this far, I believe you already know [how to install software from source code][16].

**Final words**

Sometimes, simplicity itself becomes a feature, and Graviton's focus on minimalism may help it carve out a niche in the already crowded world of code editors.

And since it is FOSS, we try to highlight it as open source software. If you know of some interesting open source software that you would like more people to know about, [drop us a note][17].

--------------------------------------------------------------------------------

via: https://itsfoss.com/graviton-code-editor/

Author: [Abhishek Prakash][a]
Topic selected by: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux 中国](https://linux.cn/)

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://graviton.ml/
[2]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface.jpg?resize=800%2C571&ssl=1
[4]: https://electronjs.org/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface-2.jpg?resize=800%2C522&ssl=1
[6]: https://codemirror.net/
[7]: https://itsfoss.com/install-visual-studio-code-ubuntu/
[8]: https://itsfoss.com/install-brackets-ubuntu/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-800x473.jpg?resize=800%2C473&ssl=1
[10]: https://itsfoss.com/install-deb-files-ubuntu/
[11]: https://itsfoss.com/use-appimage-linux/
[12]: https://github.com/Graviton-Code-Editor/Graviton-App/releases
[13]: https://github.com/Graviton-Code-Editor/Graviton-App
[14]: https://github.com/Graviton-Code-Editor/Graviton-App/issues
[16]: https://itsfoss.com/install-software-from-source-code/
[17]: https://itsfoss.com/contact-us/