Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-05-02 12:47:50 +08:00
commit 0888830099
7 changed files with 744 additions and 104 deletions

[#]: collector: (lujun9972)
[#]: translator: (messon007)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The ins and outs of high-performance computing as a service)
[#]: via: (https://www.networkworld.com/article/3534725/the-ins-and-outs-of-high-performance-computing-as-a-service.html)
[#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/)
The ins and outs of high-performance computing as a service
======
HPC services can be a way to meet expanding supercomputing needs, but depending on the use case, they're not necessarily better than on-premises supercomputers.
Dell EMC
Electronics on missiles and military helicopters need to survive extreme conditions. Before any of that physical hardware can be deployed, defense contractor McCormick Stevenson Corp. simulates the real-world conditions it will endure, relying on finite element analysis software like Ansys, which requires significant computing power.
Then one day a few years ago, it unexpectedly ran up against its computing limits.
[10 of the world's fastest supercomputers][1]
"We had some jobs that would have overwhelmed the computers that we had in office," says Mike Krawczyk, principal engineer at McCormick Stevenson. "It did not make economic or schedule sense to buy a machine and install software." Instead, the company contracted with Rescale, which could sell them cycles on a supercomputer-class system for a tiny fraction of what they would've spent on new hardware.
McCormick Stevenson had become an early adopter in a market known as supercomputing as a service or high-performance computing (HPC) as a service, two terms that are closely related. HPC is the application of supercomputers to computationally complex problems, while supercomputers are the computers at the cutting edge of processing capacity, according to the National Institute for Computational Sciences.
Whatever it's called, these services are upending the traditional supercomputing market and bringing HPC power to customers who could never afford it before. But it's no panacea, and it's definitely not plug-and-play, at least not yet.
### HPC services in practice
From the end user's perspective, HPC as a service resembles the batch-processing model that dates back to the early mainframe era. "We create an Ansys batch file and send that up, and after it runs, we pull down the result files and import them locally here," Krawczyk says.
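The round trip Krawczyk describes can be sketched as a generic submit/poll/download loop. The service class, method names, and job fields below are invented stand-ins for illustration, not any vendor's actual SDK:

```python
import time


class FakeHPCService:
    """In-memory stand-in for a remote HPC-as-a-service endpoint (illustrative only)."""

    def __init__(self):
        self._jobs = {}

    def submit(self, job_id, input_deck):
        # In reality this would upload the batch file to the provider.
        self._jobs[job_id] = {"input": input_deck, "polls": 0}

    def status(self, job_id):
        # The fake service "finishes" after the first poll.
        job = self._jobs[job_id]
        job["polls"] += 1
        return "complete" if job["polls"] >= 1 else "running"

    def results(self, job_id):
        return f"results for {self._jobs[job_id]['input']}"


def run_batch_job(service, job_id, input_deck, poll_interval=0.0):
    """Submit, poll until complete, then pull down the result files."""
    service.submit(job_id, input_deck)
    while service.status(job_id) != "complete":
        time.sleep(poll_interval)
    return service.results(job_id)
```

The shape is classic batch processing: package the inputs, hand them off, wait, retrieve.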
Behind the scenes, cloud providers are running the supercomputing infrastructure in their own data centers, though that doesn't necessarily imply the sort of cutting-edge hardware you might be visualizing when you hear "supercomputer." As Dave Turek, Vice President of Technical Computing at IBM OpenPOWER, explains it, HPC services at their core are "a collection of servers that are strung together with an interconnect. You have the ability to invoke this virtual computing infrastructure that allows you to bring a lot of different servers to work together in a parallel construct to solve the problem when you present it."
Sounds simple in theory. But making it viable in practice required some chipping away at technical problems, according to Theo Lynn, Professor of Digital Business at Dublin City University. What differentiates ordinary computing from HPC is those interconnects: high-speed, low-latency, and expensive. They needed to be brought to the world of cloud infrastructure. Storage performance and data transport also needed to be brought up to a level at least in the same ballpark as on-prem HPC before HPC services could be viable.
But Lynn says that some of the innovations that have helped HPC services take off have been more institutional than technological. In particular, "we are now seeing more and more traditional HPC applications adopting cloud-friendly licensing models, a barrier to adoption in the past."
And the economics have also shifted the potential customer base, he says. "Cloud service providers have opened up the market more by targeting low-end HPC buyers who couldn't afford the capex associated with traditional HPC, opening up the market to new users. As the markets open up, the hyperscale economic model becomes more and more feasible, and costs start coming down."
### Avoid on-premises CAPEX
HPC services are attractive to private-sector customers in the same fields where traditional supercomputing has long held sway. These include sectors that rely heavily on complex mathematical modeling, including defense contractors like McCormick Stevenson, along with oil and gas companies, financial services firms, and biotech companies. Dublin City University's Lynn adds that loosely coupled workloads are a particularly good use case, which is why many early adopters used HPC services for 3D image rendering and related applications.
But when does it make sense to consider HPC services over on-premises HPC? For hhpberlin, a German company that simulates smoke propagation in and fire damage to structural components of buildings, the move came as it outgrew its current resources.
"For several years, we had run our own small cluster with up to 80 processor cores," says Susanne Kilian, hhpberlin's scientific head of numerical simulation. "With the rise in application complexity, however, this constellation has increasingly proven to be inadequate; the available capacity was not always sufficient to handle projects promptly."
But just spending money on a new cluster wasn't an ideal solution, she says: "In view of the size and administrative environment of our company, the necessity of constant maintenance of this cluster (regular software and hardware upgrades) turned out to be impractical. Plus, the number of required simulation projects is subject to significant fluctuations, such that the utilization of the cluster was not really predictable. Typically, phases with very intensive use alternate with phases with little to no use." By moving to an HPC service model, hhpberlin shed that excess capacity and the need to pay up front for upgrades.
IBM's Turek explains the calculus that different companies go through while assessing their needs. For a biosciences startup with 30 people, "you need computing, but you really can't afford to have 15% of your staff dedicated to it. It's just like you might also say you don't want to have on-staff legal representation, so you'll get that as a service as well." For a bigger company, though, it comes down to weighing the operational expense of an HPC service against the capital expense of buying an in-house supercomputer or HPC cluster.
So far, those are the same sorts of arguments you'd have over adopting any cloud service. But the opex vs. capex dilemma can be weighted toward the former by some of the specifics of the HPC market. Supercomputers aren't commodity hardware like storage or x86 servers; they're very expensive, and technological advances can swiftly render them obsolete. As McCormick Stevenson's Krawczyk puts it, "It's like buying a car: as soon as you drive off the lot it starts to depreciate." And for many companies, especially larger and less nimble ones, the process of buying a supercomputer can get hopelessly bogged down. "You're caught up in planning issues, building issues, construction issues, training issues, and then you have to execute an RFP," says IBM's Turek. "You have to work through the CIO. You have to work with your internal customers to make sure there's continuity of service. It's a very, very complex process and not something that a lot of institutions are really excellent at executing."
Once you choose to go down the services route for HPC, you'll find you get many of the advantages you expect from cloud services, particularly the ability to pay only for HPC power when you need it, which results in an efficient use of resources. Chirag Dekate, Senior Director and Analyst at Gartner, says bursty workloads, when you have short-term needs for high-performance computing, are a key use case driving adoption of HPC services.
"In the manufacturing industry, you tend to have a high peak of HPC activity around the product design stage," he says. "But once the product is designed, HPC resources are less utilized during the rest of the product-development cycle." In contrast, he says, "when you have large, long-running jobs, the economics of the cloud wear down."
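Dekate's point about utilization can be made concrete with a back-of-the-envelope breakeven model. All the dollar figures and the formula below are illustrative assumptions, not quoted prices:

```python
def on_prem_cost_per_used_hour(capex, opex_per_year, lifetime_years, utilization):
    """Effective cost of each hour the in-house cluster is actually busy."""
    total_cost = capex + opex_per_year * lifetime_years
    total_hours = lifetime_years * 365 * 24
    return total_cost / (total_hours * utilization)


def breakeven_utilization(capex, opex_per_year, lifetime_years, cloud_rate_per_hour):
    """Utilization above which owning beats renting, all else being equal."""
    total_cost = capex + opex_per_year * lifetime_years
    total_hours = lifetime_years * 365 * 24
    return total_cost / (total_hours * cloud_rate_per_hour)


# Hypothetical numbers: a $2M cluster, $200k/yr to run, 4-year life,
# vs. renting equivalent capacity at $150/hr.
u = breakeven_utilization(2_000_000, 200_000, 4, 150)  # ~0.53
```

Under these assumed figures the cluster pays off only above roughly 53% utilization, which is exactly why bursty, low-utilization workloads favor the cloud while large, long-running jobs favor owning.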
With clever system design, you can integrate those HPC-services bursts of activity with your own in-house conventional computing. Teresa Tung, managing director in Accenture Labs, gives an example: "Accessing HPC via APIs makes it seamless to mix with traditional computing. A traditional AI pipeline might have its training done on a high-end supercomputer at the stage when the model is being developed, but then the resulting trained model that runs predictions over and over would be deployed on other services in the cloud or even devices at the edge."
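A minimal sketch of the split Tung describes, with an ordinary least-squares fit standing in for the expensive training job; the function names and the toy model are assumptions for illustration:

```python
def train_on_hpc(points):
    """Least-squares fit of y = a*x + b; stands in for the burst-style
    training job that would run on rented HPC resources."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return (a, b)  # the trained "model": tiny and portable


def predict_at_edge(model, x):
    """Cheap inference that can run on ordinary cloud services or edge devices."""
    a, b = model
    return a * x + b
```

The asymmetry is the point: training touches all the data and burns the HPC cycles once; the resulting artifact is small enough to serve predictions anywhere.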
### It's not for all use cases
Use of HPC services lends itself to batch-processing and loosely-coupled use cases. That ties into a common HPC downside: data transfer issues. High-performance computing by its very nature often involves huge data sets, and sending all that information over the internet to a cloud service provider is no simple thing. "We have clients I talk to in the biotech industry who spend $10 million a month on just the data charges," says IBM's Turek.
And money isn't the only potential problem. Building a workflow that makes use of your data can challenge you to work around the long times required for data transfer. "When we had our own HPC cluster, local access to the simulation results already produced, and thus an interactive interim evaluation, was of course possible at any time," says hhpberlin's Kilian. "We're currently working on being able to access and evaluate the data produced in the cloud even more efficiently and interactively at any desired time of the simulation without the need to download large amounts of simulation data."
Mike Krawczyk cites another stumbling block: compliance issues. Any service a defense contractor uses needs to be compliant with the International Traffic in Arms Regulations (ITAR), and McCormick Stevenson went with Rescale in part because it was the only vendor they found that checked that box. While more do today, any company looking to use cloud services should be aware of the legal and data-protection issues involved in living on someone else's infrastructure, and the sensitive nature of many of HPC's use cases makes this doubly true for HPC as a service.
In addition, the IT governance that HPC services require goes beyond regulatory needs. For instance, you'll need to keep track of whether your software licenses permit cloud use, especially with specialized software packages written to run on an on-premises HPC cluster. And in general, you need to keep track of how you use HPC services, which can be a tempting resource, especially if you've transitioned from in-house systems where staff was used to having idle HPC capabilities available. For instance, Ron Gilpin, senior director and Azure Platform Services global lead at Avanade, suggests dialing back how many processing cores you use for tasks that aren't time sensitive. "If a job only needs to be completed in an hour instead of ten minutes," he says, "that might use 165 processors instead of 1,000, a savings of thousands of dollars."
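Gilpin's core-count tradeoff saves the most when the provider bills in coarse increments, which the sketch below assumes; the $1.00/core-hour rate and the hourly billing increment are hypothetical, purely to show the shape of the calculation:

```python
import math


def job_cost(cores, runtime_hours, price_per_core_hour, billing_increment=1.0):
    """Cost of a run when the provider bills whole increments
    (partial increments round up)."""
    billed = math.ceil(runtime_hours / billing_increment) * billing_increment
    return cores * billed * price_per_core_hour


# Hypothetical $1.00/core-hour rate with hourly billing:
rush = job_cost(1000, 10 / 60, 1.00)    # 1,000 cores finishing in 10 minutes
relaxed = job_cost(165, 1.0, 1.00)      # 165 cores finishing in an hour
```

With these assumptions the 10-minute run is billed for a full hour on all 1,000 cores, so the relaxed run costs a fraction as much for nearly the same core-hours of useful work.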
### A premium on HPC skills
One of the biggest barriers to HPC adoption has always been the unique in-house skills it requires, and HPC services don't magically make that barrier vanish. "Many CIOs have migrated a lot of their workloads into the cloud and they have seen cost savings and increased agility and efficiency, and believe that they can achieve similar results in HPC ecosystems," says Gartner's Dekate. "And a common misperception is that they can somehow optimize human resource cost by essentially moving away from system admins and hiring new cloud experts who can solve their HPC workloads."
"But HPC is not one of the main enterprise environments," he says. "You're dealing with high-end compute nodes interconnected with high-bandwidth, low-latency networking stacks, along with incredibly complicated application and middleware stacks. Even the filesystem layers in many cases are unique to HPC environments. Not having the right skills can be destabilizing."
But supercomputing skills are in increasingly short supply, something Dekate refers to as the workforce "greying," as a generation of developers has gone to splashy startups rather than academia or the more staid firms where HPC is in use. As a result, vendors of HPC services are doing what they can to bridge the gap. IBM's Turek says that many HPC vets will always want to roll their own exquisitely fine-tuned code and will need specialized debuggers and other tools to help them do that for the cloud. But even HPC newbies can make calls to code libraries built by vendors to exploit supercomputing's parallel processing. And third-party software providers sell turnkey software packages that abstract away much of HPC's complication.
Accenture's Tung says the sector needs to lean further into this in order to truly prosper. "HPCaaS has created dramatically impactful new capability, but what needs to happen is making this easy to apply for the data scientist, the enterprise architect, or the software developer," she says. "This includes easy-to-use APIs, documentation, and sample code. It includes user support to answer questions. It's not enough to provide an API; that API needs to be fit for purpose. For a data scientist this should likely be in Python and easily change out for the frameworks she is already using. The value comes from enabling these users, who ultimately will have their jobs improved through new efficiencies and performance, if only they can access the new capabilities." If vendors can pull that off, HPC services might truly bring supercomputing to the masses.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3534725/the-ins-and-outs-of-high-performance-computing-as-a-service.html
Author: [Josh Fruhlinger][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Josh-Fruhlinger/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[2]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Red Hat Summit 2020 virtual experience)
[#]: via: (https://www.networkworld.com/article/3541289/red-hat-summit-2020-virtual-experience.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Red Hat Summit 2020 virtual experience
======
[Virginiambe][1] [(CC BY-SA 3.0)][2]
In the last couple days, Red Hat was able to demonstrate that an online technical conference can succeed. The Summit, normally held in Boston or San Francisco, was held online thanks to the Covid-19 pandemic still gripping the world.
The fact that 80,000 people attended the online event warrants huge applause. By comparison, last year's in-person conference broke the record with only 8,900 attendees.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][3]
### Being “there”
The experience of attending the conference was in many ways what you would expect when attending a large conference in person. There were keynotes, general sessions and breakout sessions. There were many opportunities to ask questions. And it was often difficult but necessary to choose between parallel sessions. I attended both days and was very impressed.
I also enjoyed some nostalgia about how we've all arrived at the places we are today with respect to Linux. It was clear that many attendees were overwhelmed by the progress that has been made just since last year. Linux, and [RHEL][4] in particular, is becoming more innovative, more clever in the ways that it can detect and respond to problems, and yet in some important ways easier to manage because of the way the tools have evolved.
Announcements at the conference included Red Hat OpenShift 4.4, OpenShift virtualization and Red Hat Advanced Container Management for Kubernetes.
What was novel about attending a technical conference online was that we didn't have to leave our home or office and that we could review sessions that we missed by selecting them later from the session layout pages. In fact, the sessions are still online and may well be for the coming year. If you didn't participate in Red Hat Summit 2020, you can still sign up, and you can still watch the sessions at your convenience. Just go to the [summit site][5]. And did I mention that it's free?
### Catching up
Once you're signed up, you can click on Watch and Learn at the top of the page and choose General Sessions or Sessions and Labs. The presentations are now all labeled On Demand, though they once displayed upcoming time slots. The individuals presenting are excellent, and the material is exciting. Even if you're not working with Red Hat Enterprise Linux, you will learn a lot about Linux in general and how open source has evolved over the decades and is still evolving in important and critical ways.
Topics covered at the conference include OpenShift, open hybrid cloud, future technologies, robotics and automation, advances on the edge, and the power of open source. Red Hat Summit also includes joint sessions between Red Hat and technology collaborators such as Ford, Verizon, Intel, Microsoft, and Credit Suisse.
### What's next?
Watching the conference online at a time when I can't leave my home was informative, but also encouraging and comforting. Linux has been an important part of my life for decades. It felt good to be connected to the larger community and to sense the currents of progress through my desktop system.
There's no way to know at this point whether future Red Hat Summits or other Linux conferences will be held or made available online. But the fact that Red Hat Summit 2020 was available online, when so many of us are still huddled up at home wondering when our world will reopen, was a testament not just to great technology but to the deep-seated conviction that it is critical that we work together, and that open source can make that happen in ways that nothing else can.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3541289/red-hat-summit-2020-virtual-experience.html
Author: [Sandra Henry-Stocker][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://commons.wikimedia.org/wiki/File:Red_hat_with_bow2.JPG
[2]: https://creativecommons.org/licenses/by-sa/3.0/legalcode
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3540189/red-hat-enterprise-linux-82-hits-the-stage.html
[5]: https://www.redhat.com/summit
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux and Kubernetes: Serving The Common Goals of Enterprises)
[#]: via: (https://www.linux.com/articles/linux-and-kubernetes-serving-the-common-goals-of-enterprises/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
Linux and Kubernetes: Serving The Common Goals of Enterprises
======
[![][1]][2]
For [Stefanie Chiras,][3] VP & GM of the Red Hat Enterprise Linux (RHEL) Business Unit at [Red Hat][4], aspects such as security and resiliency have always been important. More so in the current situation, when everyone has gone fully remote and it's much harder to get people in front of the hardware to carry out updates, patching, and so on.
“As we look at our current situation, never has it been more important to have an operating system that is resilient and secure, and we're focused on that,” she said.
The recently released [Red Hat Enterprise Linux (RHEL) 8.2][5] addresses these challenges, making it easier for technology leaders to embrace the latest production-ready innovations swiftly while offering the security and resilience their IT teams need.
RHEL's embrace of a predictable six-month minor-release cycle has also helped customers plan upgrades more efficiently.
“There is value for customers in having the predictability of minor releases on a six-month cycle. Not knowing when they were coming caused disruptions for them. The launch of 8.2 is now the second time we have delivered on our commitment of having minor releases every six months,” said Stefanie Chiras.
In addition to offering security updates, the new version adds insights capabilities and forays into newer areas of innovation.
The upgrade dramatically expands the earlier Adviser capability. Additional functionality such as drift monitoring and CVE coverage allows much deeper granularity into how the infrastructure is running.
“It really amplifies the skills that are already present in ops and sysadmin teams, and this provides a Red Hat consultation, if you will, directly into the data center,” said Chiras.
As containers are increasingly being leveraged for digital transformation, RHEL 8.2 offers an updated application stream of Red Hat's container tools. It also has new, containerized versions of Buildah and Skopeo.
[Skopeo][6] is an open-source image copying tool, while Buildah is a tool for building Docker- and Kubernetes-compatible images easily and quickly.
RHEL also supports in-place upgrades in the new version. Customers can now upgrade in place directly from version 7 to version 8.2.
Chiras believes Linux has emerged as the go-to platform for innovations such as machine learning, deep learning, and artificial intelligence.
“Linux has now become the springboard of innovation,” she argued. “AI, machine learning, and deep learning are driving a real change in not just the software but also the hardware. In the context of these emerging technologies, it's all about making them consumable in an enterprise.”
“We're very focused on our ecosystem, making sure that we're working in the right upstream communities, with the right ISVs, and with the right hardware partners to make all of that magic come together,” Chiras said.
Towards this end, Red Hat has been partnering across multiple architectures for a long time, be it x86, Arm, Power, or mainframe with IBM Z. Its partnership with Nvidia pulls in capabilities such as FPGAs and GPUs.
**Synergizing Kubernetes and Linux**
Kubernetes is fast finding favor in enterprises. So how do Linux and Kubernetes serve the common goals of enterprises?
“Kubernetes is a new way to deploy Linux. We're very focused on providing operational consistency by leveraging our technology in RHEL and then bringing in that incredible capability of Kubernetes within our OpenShift product line,” Chiras said.
The deployment of Linux within a Kubernetes environment is much more complicated than a traditional deployment. Red Hat therefore made some key changes: it created Red Hat Enterprise Linux CoreOS, an optimized version of RHEL for the OpenShift experience.
“It's deployed as an immutable. It's tailored, narrow, and gets updated as part of your OpenShift update to provide a consistent user experience and comprehensive security.”
The launch of the Red Hat Universal Base Image (UBI) offers users greater security, reliability, and performance of official Red Hat container images where OCI-compliant Linux containers run.
“Kubernetes is a new way to deploy Linux. It really is a tight collaboration, but what we're really focused on is the customer experience. We want them to get easy updates with consistency and reliability, resilience, and security. We're pulling all of that together. With such advancements going on, it's a fascinating space to watch,” added Chiras.
--------------------------------------------------------------------------------
via: https://www.linux.com/articles/linux-and-kubernetes-serving-the-common-goals-of-enterprises/
Author: [Swapnil Bhartiya][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/wp-content/uploads/2019/12/computer-2930704_1280-1068x634.jpg (computer-2930704_1280)
[2]: https://www.linux.com/wp-content/uploads/2019/12/computer-2930704_1280.jpg
[3]: https://www.linkedin.com/in/stefanie-chiras-9022144/
[4]: https://www.redhat.com/en
[5]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/8.2_release_notes/index
[6]: https://github.com/containers/skopeo

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Transparent, open source alternative to Google Analytics)
[#]: via: (https://opensource.com/article/20/5/plausible-analytics)
[#]: author: (Marko Saric https://opensource.com/users/markosaric)
Transparent, open source alternative to Google Analytics
======
Plausible Analytics is a leaner, more transparent option, with the
essential data you need but without all the privacy baggage.
![Digital creative of a browser on the internet][1]
Google Analytics is the most popular website analytics tool. Millions of developers and creators turn to it to collect and analyze their website statistics.
More than 53% of all sites on the web track their visitors using Google Analytics, and [84%][2] of sites that use a known analytics script use Google Analytics.
Google Analytics has, for years, been one of the first tools I installed on a newly launched site. It is a powerful and useful analytics tool. Installing Google Analytics was a habit I didn't think much about until the introduction of the [GDPR][3] (General Data Protection Regulation) and other privacy regulations.
Using Google Analytics these days comes with several pitfalls, including the need for a privacy policy, the need for cookie banners, and the need for a GDPR consent prompt. All these may negatively impact the site loading time and visitor experience.
This has made me try to [de-Google-ify websites][4] that I work on, and it's made me start working on independent solutions that are open source and more privacy-friendly. This is where Plausible Analytics enters the story.
[Plausible Analytics][5] is an open source and lightweight alternative to Google Analytics. It doesn't use cookies and it doesn't collect any personal data, so you don't need to show any cookie banners or get GDPR or CCPA consent. Let's take a closer look.
### Main differences between Google Analytics and Plausible
Plausible Analytics is not designed to be a clone of Google Analytics. It is meant as a simple-to-use replacement and a privacy-friendly alternative. Here are the main differences between the two web analytics tools:
#### Open source vs. closed source
Google Analytics may be powerful and useful, but it is closed source. It is a proprietary tool run by one of the largest companies in the world, a company that is a key player in the ad-tech industry. There's simply no way of knowing what's going on behind the scenes. You have to put your trust in Google.
Plausible is a fully open source tool. You can read our code [on GitHub][6]. We're "open" in other ways, too, such as our [public roadmap][7], which is based around the feedback and features submitted by the members of our community.
#### Privacy of your website visitors
Google Analytics places [several cookies][8] on the devices of your visitors, and it tracks and collects a lot of data. This means that there are several requirements if you want to use Google Analytics and be compliant with the different regulations:
* You need to have a privacy policy about analytics
* You need to show a cookie banner
  * You need to obtain GDPR/CCPA consent
Plausible is made to be fully compliant with privacy regulations. No cookies are used, and no personal data is collected. This means you don't need to display a cookie banner, you don't need a privacy policy, and you don't need to ask for GDPR/CCPA consent when using Plausible.
#### Page weight and loading time
The recommended way of installing Google Analytics is to use Google Tag Manager. The Google Tag Manager script weighs 28 KB, and it downloads another JavaScript file, the Google Analytics tag, which adds a further 17.7 KB to your page size. That's 45.7 KB of page weight combined.
The Plausible script weighs only 1.4 KB. That's 33 times smaller than the Google Analytics Global Site Tag. Every KB matters when you want to keep your site fast to load.
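The arithmetic behind that comparison, using the article's own figures:

```python
tag_manager_kb = 28.0      # Google Tag Manager script
analytics_tag_kb = 17.7    # Google Analytics tag it downloads
google_total_kb = tag_manager_kb + analytics_tag_kb  # 45.7 KB combined

plausible_kb = 1.4
ratio = google_total_kb / plausible_kb  # ~32.6, i.e. roughly 33x smaller
```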
#### Accuracy of visitor stats
Google Analytics is being blocked by an increasing number of web users. It's blocked by those who use open source browsers such as [Firefox][9] and [Brave][10]. It's also blocked by those who use open source browser add-ons such as [uBlock Origin][11]. It's not uncommon to see 40% or more of the audience on a tech site blocking Google Analytics.
Plausible is a new player in this market, and since it's privacy-friendly by default, it doesn't see the same level of blocking.
#### Simple vs. complex web analytics
[Google Analytics is overkill][12] for many website owners. It's a complex tool that takes time to understand and requires training. Google Analytics presents hundreds of different reports and metrics for you to get insights from. Many users end up creating custom dashboards while ignoring all the rest.
Plausible cuts through all the noise that Google Analytics creates. It presents everything you need to know on one single page—all the most valuable metrics at a glance. You can get an overview of the most actionable insights about your website in one minute.
### A guided tour of Plausible Analytics
Plausible Analytics is not a full-blown replacement and a feature-by-feature reproduction of Google Analytics. It's not designed for all the different use cases of Google Analytics.
It's built with simplicity and speed in mind. There is no navigational menu. There are no additional sub-menus. There is no need to create custom reports. You get one simple and useful web analytics dashboard out of the box.
Rather than tracking every metric imaginable, many of which you will never find a use for, Plausible focuses on the essential website stats only. It is easy to use and understand, with no training or prior experience needed:
![Plausible analytics in action][13]
* Choose the time range that you want to analyze. The visitor numbers are automatically presented on an hourly, daily, or monthly graph. The default time frame is set at the last 30 days.
* See the number of unique visitors, total page views, and the bounce rate. These metrics include a percentage comparison to the previous time period, so you understand if the trends are going up or down.
* See all the referral sources of traffic and all the most visited pages on your site. Bounce rates of the individual referrals and pages are included too.
* See the list of countries your traffic is coming from. You can also see the device, browser, and operating system your visitors are using.
* Track events and goals to identify the number of converted visitors, the conversion rate, and the referral sites that send the best quality traffic.
Take a look at the [live demo][14] where you can follow the traffic to the Plausible website.
### Give Plausible Analytics a chance
With Plausible Analytics, you get all the important web analytics at a glance so you can focus on creating a better site without needing to annoy your visitors with all the different banners and prompts.
You can try Plausible Analytics on your site alongside Google Analytics. [Register today][15] to try it out, and see what you like and what you don't. Share your feedback with the community. This helps us learn and improve. We'd love to hear from you.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/plausible-analytics
Author: [Marko Saric][a]
Topic selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://opensource.com/users/markosaric
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://w3techs.com/technologies/details/ta-googleanalytics
[3]: https://gdpr-info.eu/
[4]: https://markosaric.com/degoogleify/
[5]: https://plausible.io/
[6]: https://github.com/plausible-insights/plausible
[7]: https://feedback.plausible.io/roadmap
[8]: https://developers.google.com/analytics/devguides/collection/analyticsjs/cookie-usage
[9]: https://www.mozilla.org/en-US/firefox/new/
[10]: https://brave.com/
[11]: https://github.com/gorhill/uBlock
[12]: https://plausible.io/vs-google-analytics
[13]: https://opensource.com/sites/default/files/plausible-analytics.png (Plausible analytics in action)
[14]: https://plausible.io/plausible.io
[15]: https://plausible.io/register

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using mergerfs to increase your virtual storage)
[#]: via: (https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/)
[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/)
Using mergerfs to increase your virtual storage
======
![][1]
What happens if you have multiple disks or partitions that you'd like to use for a media project, you don't want to lose any of your existing data, and you'd like to have everything located or mounted under one drive? That's where mergerfs can come to your rescue!
[mergerfs][2] is a union filesystem geared towards simplifying storage and management of files across numerous commodity storage devices.
You will need to grab the latest RPM from their GitHub releases page [here][3]. The releases for Fedora have _**fc**_ and the version number in the name. For example, here is the version for Fedora 31:
[mergerfs-2.29.0-1.fc31.x86_64.rpm][4]
### Installing and configuring mergerfs
Install the mergerfs package that you've downloaded using sudo:
```
$ sudo dnf install mergerfs-2.29.0-1.fc31.x86_64.rpm
```
You will now be able to mount multiple disks as one drive. This comes in handy if you have a media server and you'd like all of your media files to show up under one location. If you upload new files to your system, you can copy them to your mergerfs directory, and mergerfs will automatically copy them to whichever drive has enough free space available.
Here is an example to make it easier to understand:
```
$ df -hT | grep disk
/dev/sdb1 ext4 23M 386K 21M 2% /disk1
/dev/sdc1 ext4 44M 1.1M 40M 3% /disk2
$ ls -l /disk1/Videos/
total 1
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv
$ ls -l /disk2/Videos/
total 2
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv
-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv
```
In this example there are two disks mounted as _disk1_ and _disk2_. Both drives have a _**Videos**_ directory with existing files.
Now we're going to mount those drives using mergerfs to make them appear as one larger drive.
```
$ sudo mergerfs -o defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=1M /disk1:/disk2 /media
```
The mergerfs man page is quite extensive and complex, so we'll break down the options that were specified.
* _defaults_: Use the default settings unless otherwise specified.
* _allow_other_: Allows users other than the one who mounted the filesystem (including non-root users) to see it.
* _use_ino_: Causes mergerfs to supply file/directory inodes rather than letting libfuse do it. While not a default, it is recommended so that hard-linked files share the same inode value.
* _category.create=mfs_: Spreads new files across your drives based on available space.
* _moveonenospc=true_: If a write fails, a scan is done looking for the drive with the most free space, and the file is moved there.
* _minfreespace=1M_: The minimum free space a drive must have to be considered for new files.
* _disk1_: First hard drive.
* _disk2_: Second hard drive.
* _/media_: The directory folder where the drives are mounted.
Here is what it looks like:
```
$ df -hT | grep disk
/dev/sdb1 ext4 23M 386K 21M 2% /disk1
/dev/sdc1 ext4 44M 1.1M 40M 3% /disk2
$ df -hT | grep media
1:2 fuse.mergerfs 66M 1.4M 60M 3% /media
```
You can see that the mergerfs mount now shows a total capacity of 66M which is the combined total of the two hard drives.
Continuing with the example:
There is a 30 MB video called _Baby's second Xmas.mkv_. Let's copy it to the _/media_ folder, which is the mergerfs mount.
```
$ ls -lh "Baby's second Xmas.mkv"
-rw-rw-r--. 1 curt curt 30M Apr 20 08:45 Baby's second Xmas.mkv
$ cp "Baby's second Xmas.mkv" /media/Videos/
```
Here is the end result:
```
$ df -hT | grep disk
/dev/sdb1 ext4 23M 386K 21M 2% /disk1
/dev/sdc1 ext4 44M 31M 9.8M 76% /disk2
$ df -hT | grep media
1:2 fuse.mergerfs 66M 31M 30M 51% /media
```
You can see from the disk space utilization that mergerfs automatically copied the file to disk2 because disk1 did not have enough free space.
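The placement decision mergerfs just made can be illustrated in miniature. Below is a small Python sketch of the "most free space" (mfs) create policy described above; it is a hypothetical model for illustration only, not mergerfs's actual implementation:

```python
def pick_branch(free_bytes, min_free=1_000_000):
    """Sketch of the 'mfs' (most free space) create policy: among branches
    with at least min_free bytes available, pick the one with the most."""
    eligible = {path: free for path, free in free_bytes.items() if free >= min_free}
    if not eligible:
        # Mirrors the behavior described at the end of the article: if no
        # drive in the pool has enough free space, the copy fails.
        raise OSError("no branch has enough free space")
    return max(eligible, key=eligible.get)

# The state from the example above: disk1 has ~21M free, disk2 ~40M free,
# so a new 30M file lands on disk2.
print(pick_branch({"/disk1": 21_000_000, "/disk2": 40_000_000}))
```

This is why the 30 MB file ended up on disk2: disk1's 21M of free space was not enough.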
Here is a breakdown of all of the files:
```
$ ls -l /disk1/Videos/
total 1
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv
$ ls -l /disk2/Videos/
total 30003
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv
-rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv
-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv
$ ls -l /media/Videos/
total 30004
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv
-rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv
-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv
```
When you copy files to your mergerfs mount, it will always copy the files to the hard disk that has enough free space. If none of the drives in the pool have enough free space, then you won't be able to copy them.
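To make the pooled mount persist across reboots, you can also add a matching entry to _/etc/fstab_. This is a sketch based on mergerfs's documented fstab support, mirroring the options used above; adjust the paths and options to your own setup:

```
/disk1:/disk2  /media  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=1M  0  0
```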
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/
Author: [Curt Warfield][a]
Topic selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://fedoramagazine.org/author/rcurtiswarfield/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/04/mergerfs-816x346.png
[2]: https://github.com/trapexit/mergerfs
[3]: https://github.com/trapexit/mergerfs/releases
[4]: https://github.com/trapexit/mergerfs/releases/download/2.29.0/mergerfs-2.29.0-1.fc31.x86_64.rpm

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Pop OS 20.04 Review: Best Ubuntu-based Distribution Just Got Better)
[#]: via: (https://itsfoss.com/pop-os-20-04-review/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Pop OS 20.04 Review: Best Ubuntu-based Distribution Just Got Better
======
_**Brief: Pop OS 20.04 is an impressive Linux distribution based on Ubuntu. In this review, I go through the major new features and share my experience with the latest release.**_
Now that Ubuntu 20.04 LTS and its official flavours are here, it's time to take a look at one of the best Ubuntu-based distros, Pop!_OS 20.04 by [System76][1].
To be honest, Pop!_OS is my favorite Linux distro that I primarily use for everything I do.
Now that Pop!_OS 20.04 has finally arrived, it's time to take a look at what it offers and whether or not you should upgrade.
### Whats New In Pop!_OS 20.04 LTS?
![][2]
Visually, Pop!_OS 20.04 LTS isn't really very different from Pop!_OS 19.10. However, you can find several new features and improvements.
But, if you were using **Pop!_OS 18.04 LTS**, you have a lot of things to try.
With [GNOME 3.36][3] onboard along with some newly added features, Pop!_OS 20.04 is an exciting release.
Overall, to give you an overview here are some key highlights:
* Automatic Window Tiling
* New Application Switcher and Launcher
* Flatpak support added in Pop!_Shop
* GNOME 3.36
* Linux Kernel 5.4
* Improved hybrid graphics support
While this sounds fun, let us take a detailed look at what has changed and what the experience with Pop!_OS 20.04 is like so far.
### User Experience Improvements in Pop OS 20.04
Undoubtedly, a lot of Linux distros offer a pleasant user experience out of the box. Likewise, [Ubuntu 20.04 LTS has had top-notch improvements and features][4] as well.
And, when it comes to Pop!_OS by System76, they always try to go a mile further. The majority of the new features aim to improve the user experience by providing useful functionality.
Here, I'm going to take a look at some of the improvements that include [GNOME 3.36][3] and Pop!_OS-specific features.
#### Support For System Tray Icons
Finally! This may not be a big change, but Pop!_OS previously did not have support for system tray icons (or applet icons).
![][5]
With the 20.04 LTS release, it's here by default. No extension needed.
There may not be a whole lot of programs that depend on system tray icons, but it is still an important thing to have.
In my case, I wasn't able to use [ActivityWatch][6] on Pop!_OS 19.10, but now I can.
#### Automatic Window Tiling
![][7]
**Automatic Window Tiling** is something I always wanted to try but never invested the time to set up using a [tiling window manager][8] like [i3][9], not even with [Regolith Desktop][10].
With Pop!_OS 20.04, you don't need to do that anyway. The automatic window tiling feature comes baked in, without needing you to set it up.
It also features an option to **Show Active Hint**, i.e., it will highlight the active window to avoid confusion. And, you can also adjust the gap between the windows.
![][11]
You can see it in action in their official video:
[Subscribe to our YouTube channel for more Linux videos][12]
And, I must say that it is one of the biggest additions to Pop!_OS 20.04, one that could potentially help you multi-task more efficiently.
The feature comes in handy every time you use it, but to make the most of it, a display bigger than 21 inches (at least) is the way to go. For this reason, I'm really tempted to upgrade my monitor as well!
#### New Extensions App
![][13]
Pop!_OS comes baked in with some unique GNOME extensions. But, you don't need GNOME Tweaks to manage the extensions anymore.
The newly added **Extensions** app lets you configure and manage the extensions on Pop!_OS 20.04.
#### Improved Notification Center
![][14]
With the new GNOME 3.36 release, the notification center includes a revamped look. Here, I have the dark mode enabled.
#### New Application Switcher & Launcher
![][15]
You can still use **ALT+TAB** or **Super key + TAB** to go through the running applications.
But, that's time-consuming when you have a lot of things going on. So, on Pop!_OS 20.04, you get an application switcher and launcher which you can activate using **Super key + /**.
Once you get used to the keyboard shortcut, it will be a very convenient thing to have.
In addition to this, you may find numerous other subtle improvements visually with the icons/windows on Pop!_OS 20.04.
#### New Login Screen
Well, with GNOME 3.36, it's an obvious change. But, it does look good!
![][16]
### Flatpak Support on Pop!_Shop
Normally, Pop!_Shop is already quite useful, with a huge repository along with [Pop!_OS's own repositories][17].
Now, with Pop!_OS 20.04, you can choose to install either the Flatpak (via Flathub) or the Debian package of any available software on Pop!_Shop. Of course, only if a Flatpak package exists for the particular software.
You might want to check [how to use Flatpak on Linux][18] if you don't have Pop!_OS 20.04.
![][19]
Personally, I'm not a fan of Flatpak, but some applications like GIMP require you to install the Flatpak package to get the latest version. So, it is definitely a good thing to have support for Flatpak baked right into Pop!_Shop.
### Keyboard Shortcut Changes
This can be annoying if you're comfortable with the existing keyboard shortcuts on Pop!_OS 19.10 or older.
In any case, there are a few important keyboard shortcut changes that could potentially improve your experience. Here they are:
* Lock Screen: **Super + L** _changed to_ **Super + Escape**
* Move Workspace: **Super + Up/Down Arrow** _changed to_ **Super + CTRL + Up/Down Arrow**
* Close Window: **Super + W** _changed_ to **Super + Q**
* Toggle Maximize: **Super + Up Arrow** _changed to_ **Super + M**
### Linux Kernel 5.4
Similar to most of the other latest Linux distros, Pop!_OS 20.04 comes loaded with [Linux Kernel 5.4][20].
So, obviously, you can expect the [exFAT support][21] and an improved AMD graphics compatibility along with all the other features that come with it.
### Performance Improvements
Even though Pop!_OS doesn't pitch itself as a lightweight Linux distro, it is still a resource-efficient distro. And, with GNOME 3.36 onboard, it should be fast enough.
Considering that I've been using Pop!_OS as my primary distro for about a year, I've never had any performance issues. And, this is what the resource usage will probably look like (depending on your system configuration) after you install Pop!_OS 20.04.
![][22]
To give you an idea, my desktop configuration involves an i5-7400 processor, 16 GB RAM (2400 MHz), NVIDIA GTX 1050ti graphics card, and an SSD.
I'm not really a fan of system benchmarks because they don't really give you an idea of how a specific application or game would perform unless you try it.
You can try the [Phoronix Test Suite][23] to analyze how your system performs. But, Pop!_OS 20.04 LTS should be a snappy experience!
### Package Updates & Other Improvements
While every Ubuntu-based distro benefits from the [improvements in Ubuntu 20.04 LTS][4], there are some Pop!_OS-specific bug fixes and improvements as well.
In addition, some major apps/packages like **Firefox 75.0** have been updated to their latest versions.
As of now, there should be no critical bugs present; at least, none for me.
You can check out their [development progress on GitHub][24] for details of the issues they've already fixed during beta testing and the issues they will fix right after the release.
### Download & Support Pop!_OS 20.04
![][25]
With this release, System76 has finally added a subscription model (optional) to support Pop!_OS development.
You can download **Pop!_OS 20.04** for free, but if you want to support them, I'd suggest going for the subscription at just **$1/month**.
[Pop!_OS 20.04][26]
### My Thoughts on Pop OS 20.04
I must mention that I was rooting for a fresh new wallpaper with the latest 20.04 release. But, that's not a big deal.
With the window tiling feature, Flatpak support, and numerous other improvements, my experience with Pop!_OS 20.04 has been top-notch so far. Also, it's great to see that they are highlighting their focus on creative professionals with out-of-the-box support for some popular software.
![][27]
With all the good things about Ubuntu 20.04 plus some extra toppings by System76, I'm impressed!
_**Have you tried Pop!_OS 20.04 yet? Let me know your thoughts in the comments below.**_
--------------------------------------------------------------------------------
via: https://itsfoss.com/pop-os-20-04-review/
Author: [Ankush Das][a]
Topic selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://system76.com
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/pop_os_20_04_review.jpg?ssl=1
[3]: https://itsfoss.com/gnome-3-36-release/
[4]: https://itsfoss.com/ubuntu-20-04-release-features/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/system-tray-icons-pop-os.jpg?ssl=1
[6]: https://activitywatch.net/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-automatic-screen-tiling.png?ssl=1
[8]: https://en.wikipedia.org/wiki/Tiling_window_manager
[9]: https://i3wm.org/
[10]: https://itsfoss.com/regolith-linux-desktop/
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/tile-feature-options-popos.jpg?ssl=1
[12]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-extensions.jpg?ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/notification-center-pop-os.jpg?ssl=1
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-application-launcher.jpg?ssl=1
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-20-04-lock-screen.jpg?ssl=1
[17]: https://launchpad.net/~system76/+archive/ubuntu/pop
[18]: https://itsfoss.com/flatpak-guide/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-flatpak-deb.jpg?ssl=1
[20]: https://itsfoss.com/linux-kernel-5-4/
[21]: https://itsfoss.com/mount-exfat/
[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-20-04-performance.jpg?ssl=1
[23]: https://www.phoronix-test-suite.com/
[24]: https://github.com/orgs/pop-os/projects/13
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/support-pop-os.jpg?ssl=1
[26]: https://pop.system76.com/
[27]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/pop-os-stem-focus.jpg?ssl=1

[#]: collector: (lujun9972)
[#]: translator: (messon007)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The ins and outs of high-performance computing as a service)
[#]: via: (https://www.networkworld.com/article/3534725/the-ins-and-outs-of-high-performance-computing-as-a-service.html)
[#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/)
The ins and outs of high-performance computing as a service
======
HPC services can be a way to meet expanding supercomputing needs, but depending on the use case, they're not necessarily better than on-premises supercomputers.
Dell EMC
Electronics on missiles and military helicopters need to survive extreme conditions. Before any of that physical hardware can be deployed, defense contractor McCormick Stevenson Corp. simulates the real-world conditions it will endure, relying on finite element analysis software like Ansys, which requires significant computing power.
Then one day a few years ago, it unexpectedly ran up against its computing limits.
[10 of the world's fastest supercomputers][1]
"We had some jobs that would have overwhelmed the computers that we had in office," says Mike Krawczyk, principal engineer at McCormick Stevenson. "It did not make economic or schedule sense to buy a machine and install software." Instead, the company contracted with Rescale, which could sell them cycles on a supercomputer-class system for a tiny fraction of what they would've spent on new hardware.
McCormick Stevenson had become an early adopter in a market known as supercomputing as a service or high-performance computing (HPC) as a service, two terms that are closely related. HPC is the application of supercomputers to computationally complex problems, while supercomputers are those computers at the cutting edge of processing capacity, according to the National Institute for Computational Sciences.
Whatever it's called, these services are upending the traditional supercomputing market and bringing HPC power to customers who could never afford it before. But it's no panacea, and it's definitely not plug-and-play, at least not yet.
### HPC services in practice
From the end user's perspective, HPC as a service resembles the batch-processing model of the early mainframe era. "We create an Ansys batch file and send it up; it runs, and then we pull the results files down and import them locally," Krawczyk says.
Behind the HPC services, cloud providers run supercomputing infrastructure in their own data centers, though that doesn't necessarily mean you'll find cutting-edge hardware when you hear "supercomputer." As Dave Turek, IBM's vice president of OpenPOWER computing technology, explains, HPC services at their core are "a collection of servers that are interconnected. You can invoke this virtual computing infrastructure, which brings many different servers to work together in parallel to solve a problem when you present it."
The theory sounds simple enough. But making it viable in practice required solving some technical problems, says Theo Lynn, professor of digital business at Dublin City University. What distinguishes ordinary computing from HPC is the interconnects, which are high-speed, low-latency, and expensive, so those had to be brought into the world of cloud infrastructure. Storage performance and data transport also needed to be raised to at least the level of on-premises HPC before HPC services could be viable.
But Lynn says institutional innovations, more than technical ones, have helped HPC services take off. In particular, "we are now seeing more and more traditional HPC applications adopting cloud-friendly licensing models, which used to be a barrier to adoption."
Economics have also changed the potential customer base, he says. "Cloud service providers have opened up the market further by targeting low-end HPC buyers who couldn't afford the capital investment traditional HPC requires. As the market opens up, the hyperscale economic model becomes more and more feasible, and costs start to fall."
### Avoiding on-premises capital expenditures
HPC services appeal to private-sector customers in fields long dominated by traditional supercomputing: industries that rely heavily on complex mathematical models, including defense contractors like McCormick Stevenson, as well as oil and gas companies, financial services firms, and biotech companies. Dublin City University's Lynn adds that loosely coupled workloads are a particularly good use case, which is why many early adopters use HPC services for 3D image rendering and related applications.
But when does it make sense to consider HPC services over on-premises HPC? For hhpberlin, a German company that simulates the spread of smoke in buildings and fire damage to a building's structural components, the answer came when it outgrew its existing resources.
"For several years, we had been running our own small cluster with up to 80 processor cores," says Susanne Kilian, hhpberlin's scientific head of numerical simulation. "But with rising application complexity, this constellation has increasingly proven inadequate; the available capacity was not always enough to handle projects promptly."
"Simply spending money on a new cluster wasn't an ideal solution," she says. "Given the size of our company and its administrative environment, the obligation to continuously maintain such a cluster, with regular software and hardware upgrades, is simply impractical. Besides, the number of projects that need simulating fluctuates considerably, so the utilization of the cluster was not really predictable. Typically, phases with very intensive use alternate with phases of little or no use." By moving to an HPC service model, hhpberlin shed that excess capacity and the need to pay for upgrades.
IBM's Turek explains the calculus that different companies go through while evaluating their needs. For a biosciences startup with 30 people, "you need computing, but you really can't afford to have 15% of your staff dedicated to it. It's just like you might also say you don't want to have on-staff legal representation, so you'll get that as a service as well." For a bigger company, though, it comes down to weighing the operational expense of an HPC service against the capital expense of buying an in-house supercomputer or HPC cluster.
So far those are the same sorts of arguments you'd have over adopting any cloud service. But some features of the HPC market will tip the balance toward operational spending for some customers. Supercomputers aren't commodity hardware like storage or x86 servers; they're very expensive, and technological advances can swiftly render them obsolete. As McCormick Stevenson's Krawczyk puts it, "It's like buying a car: as soon as you drive off the lot it starts to depreciate." And for many companies, especially larger and less nimble ones, the process of buying a supercomputer can get hopelessly bogged down. "You're caught up in planning issues, building issues, construction issues, training issues, and then you have to execute an RFP," IBM's Turek says. "You have to get through the CIO. You have to work with internal customers to make sure there's continuity of service. It's a very, very complex process that not a lot of institutions are really excellent at executing."
Once you've chosen to go down the services route, you'll find you get many of the advantages you expect from cloud services, particularly the ability to pay only when the business needs the resources, which makes for efficient utilization. Chirag Dekate, senior director and analyst at Gartner, says bursty workloads, when you have short-term needs for high-performance computing, are a key use case driving adoption of HPC services.
"In manufacturing, you tend to have a high peak of HPC activity around the product design stage," he says. "But once the product is designed, HPC resources are less utilized through the rest of the product-development cycle." In contrast, he says, "when you have large, long-running jobs, the economics of the cloud wear thin."
With clever system design, you can integrate those bursts of HPC-service activity with your own in-house conventional computing. Teresa Tung, managing director at Accenture Labs, gives an example: "Accessing HPC via APIs makes it seamless to mix with traditional computing. A traditional AI pipeline might have its training done on a high-end supercomputer at the model-building stage, but the resulting trained model that runs predictions over and over would then be deployed on other services in the cloud, or even on devices at the edge."
### It's not for all use cases
HPC services lend themselves to batch-processing and loosely coupled scenarios. That ties into a common HPC downside: data transfer. High-performance computing by its nature often involves huge data sets, and sending all that information over the internet to a cloud service provider is no easy thing. "We have clients in the biotech industry who spend $10 million a month just on data charges," IBM's Turek says.
And money isn't the only potential problem. An established workflow that needs to use your data can be thrown off by the time it takes to move it. "When we had our own HPC cluster, we naturally had access at any time to the simulation results it had already produced, and could evaluate them interactively and ad hoc," hhpberlin's Kilian says. "We're currently working toward being able to access and evaluate the data produced in the cloud just as efficiently and interactively, at any point in a simulation, without having to download large amounts of simulation data."
Mike Krawczyk cites another stumbling block: compliance. Any service a defense contractor uses needs to be compliant with the International Traffic in Arms Regulations (ITAR), and McCormick Stevenson chose Rescale in part because it was the only compliant vendor it found. While more vendors comply today, any company looking at cloud services should be aware of the legal and data-protection issues involved in using someone else's infrastructure, and the sensitive nature of many HPC use cases makes this issue even more acute for HPC as a service.
In addition, HPC services demand IT governance that goes beyond regulatory compliance. For instance, you'll need to track whether your software licenses permit cloud use, especially for packages written specifically to run on an on-premises HPC cluster. And in general, you'll need to track how you use HPC services, which can be a tempting resource, especially when you've transitioned from in-house systems that staff were used to and there is suddenly spare HPC capacity available. For example, Ron Gilpin, senior director at Avanade and global lead for the Azure platform, suggests dialing back the number of processing cores you use for time-insensitive tasks. "If a job only needs to be completed in an hour rather than ten minutes, it can use 165 processors instead of 1,000, saving thousands of dollars," he says.
### Unique HPC skills
One of the biggest barriers to HPC adoption has always been the unique in-house skills it requires, and HPC services don't make that barrier magically disappear. "Many CIOs have migrated a lot of their workloads into the cloud and have seen cost savings and increased agility and efficiency, and believe that they can achieve similar results in HPC ecosystems," Gartner's Dekate says. "A common misperception is that they can somehow optimize human resource cost by essentially moving away from system administrators and hiring new cloud experts who can solve their HPC workloads."
"But HPC is not one of the mainstream enterprise environments," he says. "You're dealing with high-end compute nodes interconnected with high-bandwidth, low-latency networking, along with extremely complicated application and middleware stacks. In many cases, even the filesystem layer is unique to HPC environments. Not having the associated skills can be destabilizing."
But the supply of supercomputing skills is dwindling, in what Dekate calls a "graying" of the workforce, as a generation of developers head to flashy startups rather than academia or the more staid firms where HPC is in use. As a result, vendors of HPC services are doing what they can to bridge the gap. IBM's Turek says many HPC veterans will always want to run their own painstakingly tuned code and will need specialized debuggers and other tools to help them do that in the cloud. But even HPC newcomers can call code libraries built by vendors to exploit supercomputing's parallel processing. And third-party software providers sell turnkey packages that abstract away much of HPC's complexity.
Accenture's Tung says the sector needs to lean into this further in order to truly prosper. "HPC as a service has created dramatically impactful new capability, but what still needs to happen is making it easy for the data scientist, the enterprise architect, or the software developer to use," she says. "This includes easy-to-use APIs, documentation, and sample code. It includes user support to answer questions. It's not enough to provide an API; that API needs to be fit for purpose. For a data scientist, this should likely be offered in Python and easily swapped out for the frameworks she is already using. The value comes from enabling these users to finally benefit from the new capability through improved efficiency and performance when they put it to work." If vendors can pull that off, HPC services might truly bring supercomputing to the masses.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3534725/the-ins-and-outs-of-high-performance-computing-as-a-service.html
Author: [Josh Fruhlinger][a]
Topic selected by: [lujun9972][b]
Translator: [messon007](https://github.com/messon007)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://www.networkworld.com/author/Josh-Fruhlinger/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[2]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world