Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-11-01 22:22:58 +08:00
commit bfac1b73f0
13 changed files with 1636 additions and 419 deletions


@@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Watson IoT chief: AI can broaden IoT services)
[#]: via: (https://www.networkworld.com/article/3449243/watson-iot-chief-ai-can-broaden-iot-services.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Watson IoT chief: AI can broaden IoT services
======
IBM's Kareem Yusuf talks smart maintenance systems, workforce expertise and some IoT use cases you might not have thought of.
IBM thrives on the complicated, asset-intensive part of the enterprise [IoT][1] market, according to Kareem Yusuf, GM of the company's Watson IoT business unit. From helping seaports manage shipping traffic to keeping technical knowledge flowing within an organization, Yusuf said that the idea is to teach [artificial intelligence][2] to provide insights from the reams of data generated by such complex systems.
[Predictive maintenance][3] is probably the headliner in terms of use cases around asset-intensive IoT, and Yusuf said that it's a much more complicated task than many people might think. It isn't simply a matter of monitoring, say, pressure levels in a pipe somewhere and throwing an alert when they move outside of norms. It's about aggregating information on failure rates and asset planning, so that a company can have replacements and contingency plans ready for potential failures.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][4]
“It's less to do with Is that thing going to fail on that day? and more to do with: because I'm now leveraging all these technologies, I have more insights to make the decision to say, this is my more optimal work-management route,” he said. “And that's how I save money.”
For that to work, of course, AI has to be trained. Yusuf uses the example of a drone-based system to detect worrisome cracks in bridges, a process that usually involves sending technicians out to look at the bridge in person. Allowing AI to differentiate between serious and trivial damage means showing it reams of images of both types, and sourcing that kind of information isn't always straightforward.
“So when a client says they want that [service], often clients themselves will say, Here's some training data sets we'd like you to start with,” he said, noting that there are also open-source and government data sets available for some applications.
IBM itself collects a huge amount of data from its various AI implementations, and, with the explicit permission of its existing clients, uses some of that information to train new systems that do similar things.
“You get this kind of collaborative cohesion going on,” said Yusuf. “So when you think about, say, [machine-learning][5] models to help predict foot traffic for space planning and building usage … we can build that against data we have, because we already drive a lot of that kind of test data through our systems.”
Another non-traditional use case is for the design of something fantastically complicated, like an autonomous car. There are vast amounts of engineering requirements involved in such a process, governing the software, orchestration, hardware specs, regulatory compliance and more. A system with a particular strength in natural-language processing (NLP) could automatically understand what the various requirements actually mean and relate them to one another, detecting conflicts and impossibilities, said Yusuf.
“We've trained up Watson using discovery services and NLP to be able to tell you whether your requirements are clear,” he said. “It will find duplicates or conflicting requirements.”
Nor is it simply a matter of enabling AI-based IoT systems on the back end. Helping technicians do work is a critical part of IBM's strategy in the IoT sector, and the company has taken aim at the problem of knowledge transfer via mobility solutions.
Take, for example, a newer technician dispatched to repair an elevator or other complex piece of machinery. With a mobile assistant app on his or her smartphone, the tech can do more than simply reference error codes: an AI-driven system can cross-reference an error code against the history of a specific elevator, noting what, in the past, has tended to be the root of a given problem, and what needs to be done to fix it.
The key, said Yusuf, is to enable that kind of functionality without disrupting the standard workflow that's already in place.
“When we think about leveraging AI, it has to like seamlessly integrate into the [existing] way of working,” he said.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3449243/watson-iot-chief-ai-can-broaden-iot-services.html
Author: [Jon Gold][a]
Curated by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[2]: https://www.networkworld.com/article/3243925/artificial-intelligence-may-not-need-networks-at-all.html
[3]: https://www.networkworld.com/article/3340132/why-predictive-maintenance-hasn-t-taken-off-as-expected.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/3202701/the-inextricable-link-between-iot-and-machine-learning.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world


@@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Bird's Eye View of Big Data for Enterprises)
[#]: via: (https://opensourceforu.com/2019/10/a-birds-eye-view-of-big-data-for-enterprises/)
[#]: author: (Swapneel Mehta https://opensourceforu.com/author/swapneel-mehta/)
A Bird's Eye View of Big Data for Enterprises
======
[![][1]][2]
_Entrepreneurial decisions are made using data and business acumen. Big Data is today a tool that helps to maximise revenue and customer engagement. Open source tools like Hadoop, Apache Spark and Apache Storm are the popular choices when it comes to analysing Big Data. As the volume and variety of data in the world grows by the day, there is great scope for the discovery of trends as well as for innovation in data analysis and storage._
In the past five years, the spate of research focused on machine learning has resulted in a boom in the nature and quality of heterogeneous data sources that are being tapped by providers for their customers. Cheaper compute and widespread storage make it much easier to apply bulk data processing techniques and derive insights from existing and unexplored sources of rich user data, including logs and traces of activity whilst using software products. Business decision making and strategy have been primarily dictated by data and are usually supported by business acumen. But in recent times it has not been uncommon to see data providing conclusions seemingly in contrast with conventional business logic.
One could take the simple example of the baseball movie Moneyball, in which the protagonist defies all notions of popular wisdom by looking solely at performance statistics to evaluate player viability, eventually building a winning team of players, a team that would otherwise never have come together. The advantage of Big Data for enterprises, then, becomes a no-brainer for most corporate entities looking to maximise revenue and engagement. At the back-end, this is accomplished by popular combinations of existing tools specially designed for large scale, multi-purpose data analysis. Apache Hadoop and Spark are some of the most widespread open source tools used in this space in the industry. Concomitantly, it is easy to imagine that there are a number of software providers offering B2B services to corporate clients looking to outsource specific portions of their analytics. Therefore, there is a bustling market with customisable, proprietary technological solutions in this space as well.
Traditionally, Big Data refers to the large volumes of unstructured and heterogeneous data that is often subject to processing in order to provide insights and improve decision-making regarding critical business processes. The McKinsey Global Institute estimates that data volumes have been growing at 40 per cent per year and will grow 44x between the years 2009 and 2020. But there is more to Big Data than just its immense volume. The rate of data production is an important factor, given that smaller data streams generated at faster rates produce larger pools than their counterparts. Social media is a great example of how small networks can expand rapidly to become rich sources of information — up to massive, billion-node scales.
Structure in data is a highly variable attribute given that data is now extracted from across the entire spectrum of user activity. Conventional formats of storage, including relational databases, have been virtually replaced by massively unstructured data pools designed to be leveraged in manners unique to their respective use cases. In fact, there has been a huge body of work on data storage in order to leverage various write formats, compression algorithms, access methods and data structures to arrive at the best combination for improving productivity of the workflow reliant on that data. A variety of these combinations has emerged to set the industry standards in their respective verticals, with the benefits ranging from efficient storage to faster access.
Finally, we have the latent value in these data pools that remains to be exploited by the use of emerging trends in artificial intelligence and machine learning. Personalised advertising recommendations are a huge factor driving revenue for social media giants like Facebook and companies like Google that offer a suite of products and an ecosystem to use them. The well-known Silicon Valley giant started out as a search provider, but now controls a host of apps and most of the entry points for the data generated in the course of people using a variety of electronic devices across the world. Established financial institutions are now exploring the possibility of a portion of user data being put on an immutable public ledger to introduce a blockchain-like structure that can open the doors to innovation. The pace is picking up as product offerings improve in quality and expand in variety. Let's get a bird's eye view of this subject to understand where the market stands.
The idea behind building better frameworks is increasingly turning into a race to provide more add-on features and simplify workflows for the end user to engage with. This means the categories have many blurred lines because most products and tools present themselves as end-to-end platforms to manage Big Data analytics. However, we'll attempt to divide this broadly into a few categories and examine some providers in each of these.
**Big Data storage and processing**
Infrastructure is the key to building a reliable workflow when it comes to enterprise use cases. Earlier, relational databases were worthwhile to invest in for small and mid-sized firms. However, when the data starts pouring in, it is usually the scalability that is put to the test first. Building a flexible infrastructure comes at the cost of complexity. It is likely to have more moving parts that can cause failure in the short term. However, if done right (something that will not be easy, because it has to be tailored exactly to your company), it can result in life-changing improvements for both users and the engineers working with the said infrastructure to build and deliver state-of-the-art products.
There are many alternatives to SQL, with the NoSQL paradigm being adopted and modified for building different types of systems. Cassandra, MongoDB and CouchDB are some well-known alternatives. Most emerging options can be distinguished based on their disruption, which is aimed at the fundamental ACID properties of databases. To recall, a transaction in a database system must maintain atomicity, consistency, isolation, and durability (commonly known as the ACID properties) in order to ensure accuracy, completeness, and data integrity (from Tutorialspoint). For instance, CockroachDB, an open source offshoot of Google's Spanner database system, has gained traction due to its support for distributed operation. Redis and HBase offer a sort of hybrid storage solution, while Neo4j remains a flag bearer for graph structured databases. However, traditional areas aside, there are always new challenges on the horizon for building enterprise software.
![Figure 1: A crowded landscape to follow \(Source: Forbes\)][3]
Backups are one such area where startups have found viable disruption points to enter the market. Cloud backups for enterprise software are expensive, non-trivial procedures and offloading this work to proprietary software offers a lucrative business opportunity. Rubrik and Cohesity are two companies that originally started out in this space and evolved to offer added services atop their primary offerings. Clumio is a recent entrant, purportedly creating a data fabric that the promoters expect will serve as a foundational layer to run analytics on top of. It is interesting to follow recent developments in this burgeoning space as we see competitors enter the market and attempt to carve a niche for themselves with their product offerings.
**Big Data analytics in the cloud**
Apache Hadoop remains the popular choice for many organisations. However, many successors have emerged to offer a set of additional analytical capabilities: Apache Spark, commonly hailed as an improvement to the Hadoop ecosystem; Apache Storm, which offers real-time data processing capabilities; and Google's BigQuery, which is supposedly a full-fledged platform for Big Data analytics.
Typically, cloud providers such as Amazon Web Services and Google Cloud Platform tend to build in-house products leveraging these capabilities, or replicate them entirely and offer them as hosted services to businesses. This helps them provide enterprise offerings that are closely integrated within their respective cloud computing ecosystem. There has been some discussion about the moral consequences of replicating open source products to profit off closed source versions of the same, but there has been no consensus on the topic, nor any severe consequences suffered on account of this questionable approach to boost revenue.
Another hosted service offering a plethora of Big Data analytics tools is Cloudera, which has an established track record in the market. It has been making waves since its merger with Hortonworks earlier this year, giving it added fuel to compete with the giants in its bid to become the leading enterprise cloud provider in the market.
Overall, we've seen interesting developments in the Big Data storage and analysis domain, and as the volume and variety of data grow, so do the opportunities to innovate in the field.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/a-birds-eye-view-of-big-data-for-enterprises/
Author: [Swapneel Mehta][a]
Curated by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensourceforu.com/author/swapneel-mehta/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-1-Big-Data-analytics-and-processing-for-the-enterprise.jpg?resize=696%2C449&ssl=1 (Figure 1 Big Data analytics and processing for the enterprise)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-1-Big-Data-analytics-and-processing-for-the-enterprise.jpg?fit=900%2C580&ssl=1
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-2-A-crowded-landscape-to-follow.jpg?resize=350%2C254&ssl=1


@@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why you don't have to be afraid of Kubernetes)
[#]: via: (https://opensource.com/article/19/10/kubernetes-complex-business-problem)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)
Why you don't have to be afraid of Kubernetes
======
Kubernetes is absolutely the simplest, easiest way to meet the needs of
complex web applications.
![Digital creative of a browser on the internet][1]
It was fun to work at a large web property in the late 1990s and early 2000s. My experience takes me back to American Greetings Interactive, where on Valentine's Day, we had one of the top 10 sites on the internet (measured by web traffic). We delivered e-cards for [AmericanGreetings.com][2], [BlueMountain.com][3], and others, as well as providing e-cards for partners like MSN and AOL. Veterans of the organization fondly remember epic stories of doing great battle with other e-card sites like Hallmark. As an aside, I also ran large web properties for Holly Hobbie, Care Bears, and Strawberry Shortcake.
I remember like it was yesterday the first time we had a real problem. Normally, we had about 200Mbps of traffic coming in our front doors (routers, firewalls, and load balancers). But, suddenly, out of nowhere, the Multi Router Traffic Grapher (MRTG) graphs spiked to 2Gbps in a few minutes. I was running around, scrambling like crazy. I understood our entire technology stack, from the routers, switches, firewalls, and load balancers, to the Linux/Apache web servers, to our Python stack (a meta version of FastCGI), and the Network File System (NFS) servers. I knew where all of the config files were, I had access to all of the admin interfaces, and I was a seasoned, battle-hardened sysadmin with years of experience troubleshooting complex problems.
But, I couldn't figure out what was happening...
Five minutes feels like an eternity when you are frantically typing commands across a thousand Linux servers. I knew the site was going to go down any second because it's fairly easy to overwhelm a thousand-node cluster when it's divided up and compartmentalized into smaller clusters.
I quickly _ran_ over to my boss's desk and explained the situation. He barely looked up from his email, which frustrated me. He glanced up, smiled, and said, "Yeah, marketing probably ran an ad campaign. This happens sometimes." He told me to set a special flag in the application that would offload traffic to Akamai. I ran back to my desk, set the flag on a thousand web servers, and within minutes, the site was back to normal. Disaster averted.
I could share 50 more stories similar to this one, but the curious part of your mind is probably asking, "Where is this going?"
The point is, we had a business problem. Technical problems become business problems when they stop you from being able to do business. Stated another way, you can't handle customer transactions if your website isn't accessible.
So, what does all of this have to do with Kubernetes? Everything. The world has changed. Back in the late 1990s and early 2000s, only large web properties had large, web-scale problems. Now, with microservices and digital transformation, every business has a large, web-scale problem—likely multiple large, web-scale problems.
Your business needs to be able to manage a complex web-scale property with many different, often sophisticated services built by many different people. Your web properties need to handle traffic dynamically, and they need to be secure. These properties need to be API-driven at all layers, from the infrastructure to the application layer.
### Enter Kubernetes
Kubernetes isn't complex; your business problems are. When you want to run applications in production, there is a minimum level of complexity required to meet the performance (scaling, jitter, etc.) and security requirements. Things like high availability (HA), capacity requirements (N+1, N+2, N+100), and eventually consistent data technologies become a requirement. These are production requirements for every company that has digitally transformed, not just the large web properties like Google, Facebook, and Twitter.
In the old world I lived in at American Greetings, every time we onboarded a new service, it looked something like this. All of this was handled by the web operations team, and none of it was offloaded to other teams using ticket systems, etc. This was DevOps before there was DevOps:
1. Configure DNS (often internal service layers and external public-facing)
2. Configure load balancers (often internal services and public-facing)
3. Configure shared access to files (large NFS servers, clustered file systems, etc.)
4. Configure clustering software (databases, service layers, etc.)
5. Configure webserver cluster (could be 10 or 50 servers)
Most of this was automated with configuration management, but configuration was still complex because every one of these systems and services had different configuration files with completely different formats. We investigated tools like [Augeas][4] to simplify this but determined that it was an anti-pattern to try and normalize a bunch of different configuration files with a translator.
Today with Kubernetes, onboarding a new service essentially looks like:
1. Configure Kubernetes YAML/JSON.
2. Submit it to the Kubernetes API (**kubectl create -f service.yaml**).
Kubernetes vastly simplifies onboarding and management of services. The service owner, be it a sysadmin, developer, or architect, can create a YAML/JSON file in the Kubernetes format. With Kubernetes, every system and every user speaks the same language. All users can commit these files in the same Git repository, enabling GitOps.
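For illustration, a minimal Service manifest might look like the sketch below. The names, labels, and ports are hypothetical and would differ in a real deployment, but any standard Kubernetes Service follows this shape, and submitting it is the single **kubectl create -f service.yaml** step mentioned above:

```
# service.yaml -- hypothetical example; adjust names, labels, and ports for your app
apiVersion: v1
kind: Service
metadata:
  name: greetings-web          # hypothetical service name
  namespace: greetings         # namespacing is what makes later removal a one-liner
spec:
  selector:
    app: greetings-web         # pods carrying this label receive the traffic
  ports:
    - port: 80                 # port exposed inside the cluster
      targetPort: 8080         # port the application container listens on
```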
Moreover, deprecating and removing a service is possible. Historically, it was terrifying to remove DNS entries, load-balancer entries, web-server configurations, etc. because you would almost certainly break something. With Kubernetes, everything is namespaced, so an entire service can be removed with a single command. You can be much more confident that removing your service won't break the infrastructure environment, although you still need to make sure other applications don't use it (a downside with microservices and function-as-a-service [FaaS]).
### Building, managing, and using Kubernetes
Too many people focus on building and managing Kubernetes instead of using it (see [_Kubernetes is a_ _dump truck_][5]).
Building a simple Kubernetes environment on a single node isn't markedly more complex than installing a LAMP stack, yet we endlessly debate the build-versus-buy question. It's not Kubernetes that's hard; it's running applications at scale with high availability. Building a complex, highly available Kubernetes cluster is hard because building any cluster at this scale is hard. It takes planning and a lot of software. Building a simple dump truck isn't that complex, but building one that can carry [10 tons of dirt and handle pretty well at 200mph][6] is complex.
Managing Kubernetes can be complex because managing large, web-scale clusters can be complex. Sometimes it makes sense to manage this infrastructure; sometimes it doesn't. Since Kubernetes is a community-driven, open source project, it gives the industry the ability to manage it in many different ways. Vendors can sell hosted versions, while users can decide to manage it themselves if they need to. (But you should question whether you actually need to.)
Using Kubernetes is the easiest way to run a large-scale web property that has ever been invented. Kubernetes is democratizing the ability to run a set of large, complex web services—like Linux did with Web 1.0.
Since time and money is a zero-sum game, I recommend focusing on using Kubernetes. Spend your very limited time and money on [mastering Kubernetes primitives][7] or the best way to handle [liveness and readiness probes][8] (another example demonstrating that large, complex services are hard). Don't focus on building and managing Kubernetes. A lot of vendors can help you with that.
### Conclusion
I remember troubleshooting countless problems like the one I described at the beginning of this article—NFS in the Linux kernel at that time, our homegrown CFEngine, redirect problems that only surfaced on certain web servers, etc. There was no way a developer could help me troubleshoot any of these problems. In fact, there was no way a developer could even get into the system and help as a second set of eyes unless they had the skills of a senior sysadmin. There was no console with graphics or "observability"—observability was in my brain and the brains of the other sysadmins. Today, with Kubernetes, Prometheus, Grafana, and others, that's all changed.
The point is:
1. The world is different. All web applications are now large, distributed systems. As complex as AmericanGreetings.com was back in the day, the scaling and HA requirements of that site are now expected for every website.
2. Running large, distributed systems is hard. Period. This is the business requirement, not Kubernetes. Using a simpler orchestrator isn't the answer.
Kubernetes is absolutely the simplest, easiest way to meet the needs of complex web applications. This is the world we live in and where Kubernetes excels. You can debate whether you should build or manage Kubernetes yourself. There are plenty of vendors that can help you with building and managing it, but it's pretty difficult to deny that it's the easiest way to run complex web applications at scale.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/kubernetes-complex-business-problem
Author: [Scott McCarty][a]
Curated by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: http://AmericanGreetings.com
[3]: http://BlueMountain.com
[4]: http://augeas.net/
[5]: https://opensource.com/article/19/6/kubernetes-dump-truck
[6]: http://crunchtools.com/kubernetes-10-ton-dump-truck-handles-pretty-well-200-mph/
[7]: https://opensource.com/article/19/6/kubernetes-basics
[8]: https://srcco.de/posts/kubernetes-liveness-probes-are-dangerous.html


@@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Wireless noise protocol can extend IoT range)
[#]: via: (https://www.networkworld.com/article/3449819/wireless-noise-protocol-can-extend-iot-range.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Wireless noise protocol can extend IoT range
======
On-off noise power communication (ONPC) protocol creates a long-distance carrier of noise energy in Wi-Fi to ping IoT devices.
The effective range of [Wi-Fi][1], and other wireless communications used in [Internet of Things][2] networks could be increased significantly by adding wireless noise, say scientists.
This counter-intuitive solution could extend the range of an off-the-shelf Wi-Fi radio by 73 yards, a group led by Brigham Young University says. Wireless noise, a disturbance in the signal, is usually unwanted.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][3]
The remarkably simple concept sends wireless noise energy over the top of Wi-Fi data traffic in an additional, unrelated channel. That second channel, or carrier, albeit at a much lower data rate than the native Wi-Fi, travels further, and when encoded it can be used to ping a sensor, say, to find out whether the device is alive when the Wi-Fi link itself may have lost association through distance-caused, poor handshaking.
The independent, additional noise channel travels further than the native Wi-Fi. “It works beyond the range of Wi-Fi,” [the scientists say in their paper][5].
Applications could be found in hard-to-reach sensor locations where the sensor might still be usefully collecting data, just be offline on the network through an iffy Wi-Fi link. Ones-and-zeroes can be encoded in the add-on channel to switch sensors on and off too.
### How it works
The on-off noise power communication (ONPC) protocol, as it's called, works via a software hack on commodity Wi-Fi access points. Through software, part of the transmitter is converted to an RF power source, and then elements in the receiver are turned into a power-measuring device. Noise energy created by the power source is encoded, emitted and picked up by the measuring setup at the other end.
“If the access point [or] router hears this code, it says, OK, I know the sensor is still alive and trying to reach me, it's just out of range,” Neal Patwari of Washington University says in a Brigham Young University (BYU) [press release][6]. “It's basically sending one bit of information that says it's alive.”
The noise channel is much leaner than the Wi-Fi one, BYU explains. “While Wi-Fi requires speeds of at least one megabit per second to maintain a signal, ONPC can maintain a signal on as low as one bit per second—one millionth of the data speed required by Wi-Fi.” That's enough for IoT sensor housekeeping, conceivably. Additionally, “one bit of information is sufficient for many Wi-Fi enabled devices that simply need an on [and] off message,” the school says. It uses the example of an irrigation system.
Assuring uptime in hard-to-reach, dynamic environments, though, is where the school got the idea. Researchers found that they were continually deploying sensors for environmental IoT experiments in hard-to-reach spots.
The team uses the example of a sensor placed in a student's bedroom, where the occupant had put a laundry basket in front of the device, blocking the native Wi-Fi signal. The scientists then couldn't get a site appointment for some weeks due to the vagaries of the student's life, and during that crucial time they didn't know whether the trouble was the sensor or the link. ONPC would have allowed them to be reassured that data was still being collected and stored—or not—without the tricky-to-obtain site visit.
The researchers reckon cellular, [Bluetooth][7] and also [LoRa][8] could use ONPC, too. “We can send and receive data regardless of what Wi-Fi is doing; all we need is the ability to transmit energy and then receive noise measurements,” Phil Lundrigan of BYU says.
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3449819/wireless-noise-protocol-can-extend-iot-range.html
Author: [Patrick Nelson][a]
Curated by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3258807/what-is-802-11ax-wi-fi-and-what-will-it-mean-for-802-11ac.html
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://dl.acm.org/citation.cfm?id=3345436
[6]: https://news.byu.edu/byu-created-software-could-significantly-extend-wi-fi-range-for-smart-home-devices
[7]: https://www.networkworld.com/article/3434526/bluetooth-finds-a-role-in-the-industrial-internet-of-things.html
[8]: https://www.networkworld.com/article/3211390/lorawan-key-to-building-full-stack-production-iot-networks.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world


@@ -1,389 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Initializing arrays in Java)
[#]: via: (https://opensource.com/article/19/10/initializing-arrays-java)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
Initializing arrays in Java
======
Arrays are a helpful data type for managing collections of elements best
modeled in contiguous memory locations. Here's how to use them
effectively.
![Coffee beans and a cup of coffee][1]
People who have experience programming in languages like C or FORTRAN are familiar with the concept of arrays. They're basically a contiguous block of memory where each location is a certain type: integers, floating-point numbers, or what-have-you.
The situation in Java is similar, but with a few extra wrinkles.
### An example array
Let's make an array of 10 integers in Java:
```
int[] ia = new int[10];
```
What's going on in the above piece of code? From left to right:
1. The **int[]** to the extreme left declares the _type_ of the variable as an array (denoted by the **[]**) of **int**.
2. To the right is the _name_ of the variable, which in this case is **ia**.
3. Next, the **=** tells us that the variable defined on the left side is set to what's on the right side.
4. To the right of the **=** we see the word **new**, which in Java indicates that an object is being _initialized_, meaning that storage is allocated and its constructor is called ([see here for more information][2]).
5. Next, we see **int[10]**, which tells us that the specific object being initialized is an array of 10 integers.
Since Java is strongly-typed, the type of the variable **ia** must be compatible with the type of the expression on the right-hand side of the **=**.
### Initializing the example array
Let's put this simple array in a piece of code and try it out. Save the following in a file called **Test1.java**, use **javac** to compile it, and use **java** to run it (in the terminal of course):
```
import java.lang.*;
public class Test1 {
    public static void main(String[] args) {
        int[] ia = new int[10];                              // See note 1 below
        System.out.println("ia is " + ia.getClass());        // See note 2 below
        for (int i = 0; i < ia.length; i++)                  // See note 3 below
            System.out.println("ia[" + i + "] = " + ia[i]);  // See note 4 below
    }
}
```
Let's work through the most important bits.
1. Our declaration and initialization of the array of 10 integers, **ia**, is easy to spot.
2. In the line just following, we see the expression **ia.getClass()**. Thats right, **ia** is an _object_ belonging to a _class_, and this code will let us know which class that is.
3. In the next line following that, we see the start of the loop **for (int i = 0; i < ia.length; i++)**, which defines a loop index variable **i** that runs through a sequence from zero to one less than **ia.length**, which is an expression that tells us how many elements are defined in the array **ia**.
4. Next, the body of the loop prints out the values of each element of **ia**.
When this program is compiled and run, it produces the following results:
```
me@mydesktop:~/Java$ javac Test1.java
me@mydesktop:~/Java$ java Test1
ia is class [I
ia[0] = 0
ia[1] = 0
ia[2] = 0
ia[3] = 0
ia[4] = 0
ia[5] = 0
ia[6] = 0
ia[7] = 0
ia[8] = 0
ia[9] = 0
me@mydesktop:~/Java$
```
The string representation of the output of **ia.getClass()** is **[I**, which is shorthand for "array of integer." Similar to the C programming language, Java arrays begin with element zero and extend up to element **<array size> - 1**. We can see above that each of the elements of **ia** is set to zero (by the array constructor, it seems).
So, is that it? We declare the type, use the appropriate initializer, and we're done?
Well, no. There are many other ways to initialize an array in Java. 
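One of the most common alternatives is the initializer list, usable whenever the element values are known when the code is written. A quick sketch (**Arrays.toString()** is just a convenient way to print the contents):

```
import java.util.Arrays;

public class InitShorthand {
    public static void main(String[] args) {
        int[] primes = {2, 3, 5, 7, 11};               // initializer list sets size and contents
        int[] squares = new int[]{1, 4, 9, 16, 25};    // same idea, with an explicit new
        System.out.println(Arrays.toString(primes));   // [2, 3, 5, 7, 11]
        System.out.println(Arrays.toString(squares));  // [1, 4, 9, 16, 25]
    }
}
```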
### Why do I want to initialize an array, anyway?
The answer to this question, like that of all good questions, is "it depends." In this case, the answer depends on what we expect to do with the array once it is initialized.
In some cases, arrays emerge naturally as a type of accumulator. For example, suppose we are writing code for counting the number of calls received and made by a set of telephone extensions in a small office. There are eight extensions, numbered one through eight, plus the operator's extension, numbered zero. So we might declare two arrays:
```
int[] callsMade;
int[] callsReceived;
```
Then, whenever we start a new period of accumulating call statistics, we initialize each array as:
```
callsMade = new int[9];
callsReceived = new int[9];
```
At the end of each period of accumulating call statistics, we can print out the stats. In very rough terms, we might see:
```
import java.lang.*;
import java.io.*;
public class Test2 {
    public static void main(String[] args) {
        int[] callsMade;
        int[] callsReceived;
        // initialize call counters
        callsMade = new int[9];
        callsReceived = new int[9];
        // process calls...
        //   an extension makes a call: callsMade[ext]++
        //   an extension receives a call: callsReceived[ext]++
        // summarize call statistics
        [System][4].out.printf("%3s%25s%25s\n","ext"," calls made",
            "calls received");
        for (int ext = 0; ext < callsMade.length; ext++)
            [System][4].out.printf("%3d%25d%25d\n",ext,
                callsMade[ext],callsReceived[ext]);
    }
}
```
Which would produce output something like this:
```
me@mydesktop:~/Java$ javac Test2.java
me@mydesktop:~/Java$ java Test2
ext               calls made           calls received
  0                        0                        0
  1                        0                        0
  2                        0                        0
  3                        0                        0
  4                        0                        0
  5                        0                        0
  6                        0                        0
  7                        0                        0
  8                        0                        0
me@mydesktop:~/Java$
```
Not a very busy day in the call center.
In the above example of an accumulator, we see that the starting value of zero as set by the array initializer is satisfactory for our needs. But in other cases, this starting value may not be the right choice.
For example, in some kinds of geometric computations, we might need to initialize a two-dimensional array to the identity matrix (all zeros except for the ones along the main diagonal). We might choose to do this as:
```
 double[][] m = new double[3][3];
        for (int d = 0; d < 3; d++)
            m[d][d] = 1.0;
```
In this case, we rely on the array initializer **new double[3][3]** to set the array to zeros, and then use a loop to set the diagonal elements to ones. In this simple case, we might use a shortcut that Java provides:
```
 double[][] m = {
         {1.0, 0.0, 0.0},
         {0.0, 1.0, 0.0},
         {0.0, 0.0, 1.0}};
```
This type of visual structure is particularly appropriate in this sort of application, where it can be a useful double-check to see the actual layout of the array. But in the case where the number of rows and columns is only determined at run time, we might instead see something like this:
```
 int nrc;
 // some code determines the number of rows & columns = nrc
 double[][] m = new double[nrc][nrc];
 for (int d = 0; d < nrc; d++)
     m[d][d] = 1.0;
```
It's worth mentioning that a two-dimensional array in Java is actually an array of arrays, and there's nothing stopping the intrepid programmer from having each one of those second-level arrays be a different length. That is, something like this is completely legitimate:
```
int [][] differentLengthRows = {
     { 1, 2, 3, 4, 5},
     { 6, 7, 8, 9},
     {10,11,12},
     {13,14},
     {15}};
```
There are various linear algebra applications that involve irregularly-shaped matrices, where this type of structure could be applied (for more information see [this Wikipedia article][5] as a starting point). Beyond that, now that we understand that a two-dimensional array is actually an array of arrays, it shouldn't be too much of a surprise that:
```
differentLengthRows.length
```
tells us the number of rows in the two-dimensional array **differentLengthRows**, and:
```
differentLengthRows[i].length
```
tells us the number of columns in row **i** of **differentLengthRows**.
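To make that concrete, here is a small sketch that walks **differentLengthRows** as defined above and prints each row on its own line:

```
for (int i = 0; i < differentLengthRows.length; i++) {
    for (int j = 0; j < differentLengthRows[i].length; j++)
        System.out.print(differentLengthRows[i][j] + " ");
    System.out.println();    // the first row prints 5 numbers, the last prints 1
}
```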
### Taking the array further
Considering this idea of array size that is determined at run time, we see that arrays still require us to know that size before instantiating them. But what if we don't know the size until we've processed all of the data? Does that mean we have to process it once to figure out the size of the array, and then process it again? That could be hard to do, especially if we only get one chance to consume the data.
The [Java Collections Framework][6] solves this problem in a nice way. One of the things provided there is the class **ArrayList**, which is like an array but dynamically extensible. To demonstrate the workings of **ArrayList**, let's create one and initialize it to the first 20 [Fibonacci numbers][7]:
```
import java.lang.*;
import java.util.*;
public class Test3 {
       
        public static void main(String[] args) {
                ArrayList<Integer> fibos = new ArrayList<Integer>();
                fibos.add(0);
                fibos.add(1);
                for (int i = 2; i < 20; i++)
                        fibos.add(fibos.get(i-1) + fibos.get(i-2));
                for (int i = 0; i < fibos.size(); i++)
                        [System][4].out.println("fibonacci " + i +
                       " = " + fibos.get(i));
        }
}
```
Above, we see:
* The declaration and instantiation of an **ArrayList** that is used to store **Integer**s.
* The use of **add()** to append to the **ArrayList** instance.
* The use of **get()** to retrieve an element by index number.
* The use of **size()** to determine how many elements are already in the **ArrayList** instance.
Not shown is the **set()** method, which places a value at a given index number.
The output of this program is:
```
fibonacci 0 = 0
fibonacci 1 = 1
fibonacci 2 = 1
fibonacci 3 = 2
fibonacci 4 = 3
fibonacci 5 = 5
fibonacci 6 = 8
fibonacci 7 = 13
fibonacci 8 = 21
fibonacci 9 = 34
fibonacci 10 = 55
fibonacci 11 = 89
fibonacci 12 = 144
fibonacci 13 = 233
fibonacci 14 = 377
fibonacci 15 = 610
fibonacci 16 = 987
fibonacci 17 = 1597
fibonacci 18 = 2584
fibonacci 19 = 4181
```
**ArrayList** instances can also be initialized by other techniques. For example, an existing collection can be supplied to the **ArrayList** constructor, or the **List.of()** and **Arrays.asList()** methods can be used when the initial elements are known at compile time. I don't find myself using these options all that often since my primary use case for an **ArrayList** is when I only want to read the data once.
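As a brief sketch of those options (assuming Java 9 or later for **List.of()**), both calls copy the given elements into a new, still-growable **ArrayList**:

```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InitFromKnownElements {
    public static void main(String[] args) {
        ArrayList<Integer> fibos = new ArrayList<>(List.of(0, 1, 1, 2, 3, 5));
        ArrayList<String> colours = new ArrayList<>(Arrays.asList("red", "yellow", "purple"));
        fibos.add(8);                  // unlike the List.of() result, the copy can still grow
        System.out.println(fibos);     // [0, 1, 1, 2, 3, 5, 8]
        System.out.println(colours);   // [red, yellow, purple]
    }
}
```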
Moreover, an **ArrayList** instance can be converted to an array using its **toArray()** method, for those who prefer to work with an array once the data is loaded; or, returning to the current topic, once the **ArrayList** instance is initialized.
The Java Collections Framework provides another kind of array-like data structure called a **Map**. What I mean by "array-like" is that a **Map** defines a collection of objects whose values can be set or retrieved by a key, but unlike an array (or an **ArrayList**), this key need not be an integer; it could be a **String** or any other complex object.
For example, we can create a **Map** whose keys are **String**s and whose values are **Integer**s as follows:
```
Map<[String][3],Integer> stoi = new Map<[String][3],Integer>();
```
Then we can initialize this **Map** as follows:
```
stoi.set("one",1);
stoi.set("two",2);
stoi.set("three",3);
```
And so on. Later, when we want to know the numeric value of **"three"**, we can retrieve it as:
```
stoi.get("three");
```
In my world, a **Map** is useful for converting strings occurring in third-party datasets into coherent code values in my datasets. As a part of a [data transformation pipeline][8], I will often build a small standalone program to clean the data before processing it; for this, I will almost always use one or more **Map**s.
Worth mentioning is that it's quite possible, and sometimes reasonable, to have **ArrayLists** of **ArrayLists** and **Map**s of **Map**s. For example, let's assume we're looking at trees, and we're interested in accumulating the count of the number of trees by tree species and age range. Assuming that the age range definition is a set of string values ("young," "mid," "mature," and "old") and that the species are string values like "Douglas fir," "western red cedar," and so forth, then we might define a **Map** of **Map**s as:
```
Map<[String][3],Map<[String][3],Integer>> counter =
        new Map<[String][3],Map<[String][3],Integer>>();
```
One thing to watch out for here is that the above only creates storage for the _rows_ of **Map**s. So, our accumulation code might look like:
```
// assume at this point we have figured out the species
// and age range
if (!counter.containsKey(species))
        counter.put(species,new HashMap<String,Integer>());
if (!counter.get(species).containsKey(ageRange))
        counter.get(species).put(ageRange,0);
```
At which point, we can start accumulating as:
```
counter.get(species).put(ageRange,
        counter.get(species).get(ageRange) + 1);
```
Finally, it's worth mentioning that the (new in Java 8) Streams facility can also be used to initialize arrays, **ArrayList** instances, and **Map** instances. A nice discussion of this feature can be found [here][9] and [here][10].
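As a rough sketch of the idea (independent of the linked discussions; it uses only standard **java.util.stream** calls), a stream can be collected into an array, a **List**, or a **Map**:

```
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StreamInit {
    public static void main(String[] args) {
        // An array of the first ten squares
        int[] squares = IntStream.rangeClosed(1, 10).map(i -> i * i).toArray();

        // A List of labels built from a stream
        List<String> labels = IntStream.rangeClosed(1, 3)
                .mapToObj(i -> "row-" + i)
                .collect(Collectors.toList());

        // A Map from each label to its length
        Map<String, Integer> lengths = labels.stream()
                .collect(Collectors.toMap(Function.identity(), String::length));

        System.out.println(squares.length);  // 10
        System.out.println(labels);          // [row-1, row-2, row-3]
        System.out.println(lengths);         // {row-1=5, row-2=5, row-3=5}
    }
}
```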
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/initializing-arrays-java
Author: [Chris Hermansen][a]
Curated by: [lujun9972][b]
Translated by: [laingke](https://github.com/laingke)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/java-coffee-mug.jpg?itok=Bj6rQo8r (Coffee beans and a cup of coffee)
[2]: https://opensource.com/article/19/8/what-object-java
[3]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[5]: https://en.wikipedia.org/wiki/Irregular_matrix
[6]: https://en.wikipedia.org/wiki/Java_collections_framework
[7]: https://en.wikipedia.org/wiki/Fibonacci_number
[8]: https://towardsdatascience.com/data-science-for-startups-data-pipelines-786f6746a59a
[9]: https://stackoverflow.com/questions/36885371/lambda-expression-to-initialize-array
[10]: https://stackoverflow.com/questions/32868665/how-to-initialize-a-map-using-a-lambda


@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with awk, a powerful text-parsing tool)
[#]: via: (https://opensource.com/article/19/10/intro-awk)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Getting started with awk, a powerful text-parsing tool
======
Let's jump in and start using it.
![Woman programming][1]
Awk is a powerful text-parsing tool for Unix and Unix-like systems, but because it has programmed functions that you can use to perform common parsing tasks, it's also considered a programming language. You probably won't be developing your next GUI application with awk, and it likely won't take the place of your default scripting language, but it's a powerful utility for specific tasks.
What those tasks may be is surprisingly diverse. The best way to discover which of your problems might be best solved by awk is to learn awk; you'll be surprised at how awk can help you get more done but with a lot less effort.
Awk's basic syntax is:
```
awk [options] 'pattern {action}' file
```
To get started, create this sample file and save it as **colours.txt**:
```
name       color  amount
apple      red    4
banana     yellow 6
strawberry red    3
grape      purple 10
apple      green  8
plum       purple 2
kiwi       brown  4
potato     brown  9
pineapple  yellow 5
```
This data is separated into columns by one or more spaces. It's common for data that you are analyzing to be organized in some way. It may not always be columns separated by whitespace, or even a comma or semicolon, but especially in log files or data dumps, there's generally a predictable pattern. You can use patterns of data to help awk extract and process the data that you want to focus on.
### Printing a column
In awk, the **print** function displays whatever you specify. There are many predefined variables you can use, but some of the most common are integers designating columns in a text file. Try it out:
```
$ awk '{print $2;}' colours.txt
color
red
yellow
red
purple
green
purple
brown
brown
yellow
```
In this case, awk displays the second column, denoted by **$2**. This is relatively intuitive, so you can probably guess that **print $1** displays the first column, and **print $3** displays the third, and so on.
To display _all_ columns, use **$0**.
The number after the dollar sign (**$**) is an _expression_, so **$2** and **$(1+1)** mean the same thing.
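Both points are easy to check against the sample file; the first command below prints whole matching records, and the second is just another way of writing **print $2**:

```
$ awk '/yellow/ {print $0}' colours.txt
banana     yellow 6
pineapple  yellow 5
$ awk '{print $(1+1)}' colours.txt | head -3
color
red
yellow
```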
### Conditionally selecting columns
The example file you're using is very structured. It has a row that serves as a header, and the columns relate directly to one another. By defining _conditional_ requirements, you can qualify what you want awk to return when looking at this data. For instance, to view items in column 2 that match "yellow" and print the contents of column 1:
```
awk '$2=="yellow"{print $1}' file1.txt
banana
pineapple
```
Regular expressions work as well. This conditional looks at **$2** for approximate matches to the letter **p** followed by any number of (one or more) characters, which are in turn followed by the letter **p**:
```
$ awk '$2 ~ /p.+p/ {print $0}' colours.txt
grape   purple  10
plum    purple  2
```
Numbers are interpreted naturally by awk. For instance, to print any row with a third column containing an integer greater than 5:
```
awk '$3>5 {print $1, $2}' colours.txt
name    color
banana  yellow
grape   purple
apple   green
potato  brown
```
### Field separator
By default, awk uses whitespace as the field separator. Not all text files use whitespace to define fields, though. For example, create a file called **colours.csv** with this content:
```
name,color,amount
apple,red,4
banana,yellow,6
strawberry,red,3
grape,purple,10
apple,green,8
plum,purple,2
kiwi,brown,4
potato,brown,9
pineapple,yellow,5
```
Awk can treat the data in exactly the same way, as long as you specify which character it should use as the field separator in your command. Use the **\--field-separator** (or just **-F** for short) option to define the delimiter:
```
$ awk -F"," '$2=="yellow" {print $1}' file1.csv
banana
pineapple
```
### Saving output
Using output redirection, you can write your results to a file. For example:
```
`$ awk -F, '$3>5 {print $1, $2} colours.csv > output.txt`
```
This creates a file with the contents of your awk query.
You can also split a file into multiple files grouped by column data. For example, if you want to split colours.txt into multiple files according to what color appears in each row, you can cause awk to redirect _per query_ by including the redirection in your awk statement:
```
`$ awk '{print > $2".txt"}' colours.txt`
```
This produces files named **yellow.txt**, **red.txt**, and so on.
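On the sample data, for example, the two yellow rows land in the same file (a quick check, assuming the command above has just been run in the same directory):

```
$ cat yellow.txt
banana     yellow 6
pineapple  yellow 5
```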
In the next article, you'll learn more about fields, records, and some powerful awk variables.
* * *
This article is adapted from an episode of [Hacker Public Radio][2], a community technology podcast.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/intro-awk
Author: [Seth Kenlon][a]
Curated by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: http://hackerpublicradio.org/eps.php?id=2114


@@ -0,0 +1,163 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Test automation without assertions for web development)
[#]: via: (https://opensource.com/article/19/10/test-automation-without-assertions)
[#]: author: (Jeremias Roessler https://opensource.com/users/roesslerj)
Test automation without assertions for web development
======
Recheck-web promises the benefits of golden master-based testing without
the drawbacks.
![Coding on a computer][1]
Graphical user interface (GUI) test automation is broken. Regression testing is not testing; it's version control for a software's behavior. Here's my assertion: test automation _without_ _assertions_ works better!
In software development and test automation, an assertion is a means to check the result of a calculation, typically by comparing it to a singular expected value. While this is very well suited for unit-based test automation (i.e. testing the system from within), applying it to testing an interface (specifically the user interface) has proven to be problematic, as this post will explain.
The number of tools that work according to the [golden master][2] approach to testing, characterization testing, and approval testing—such as [Approval Tests][3], [Jest][4], or [recheck-web][5] ([retest][6])—is constantly increasing. This approach promises more robust tests with less effort (for both creation and maintenance) while testing more thoroughly.
The examples in this article are available on [GitHub][7].
### A basic Selenium test
Here's a simple example of a traditional test running against a web application's login page. Using [Selenium][8] as the testing framework, the code could look like this:
```
import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.remote.RemoteWebDriver;

public class MySeleniumTest {
        RemoteWebDriver driver;
        @Before
        public void setup() {
                driver =  new ChromeDriver();
        }
        @Test
        public void login() throws Exception {
                driver.get("<https://assets.retest.org/demos/app/demo-app.html>");
                driver.findElement(By.id("username")).sendKeys("Simon");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("sign-in")).click();
                assertEquals(driver.findElement(By.tagName("h4")).getText(), "Success!");
        }
        @After
        public void tearDown() throws InterruptedException {
                driver.quit();
        }
}
```
This is a very simple test. It opens a specific URL, then finds input fields by their invisible element IDs. It enters the user name and password, then clicks the login button.
As is currently best practice, this test then uses a unit-test library to check the correct outcome by means of an _assert_ statement.
In this example, the test determines whether the text "Success!" is displayed.
You can run the test a few times to verify success, but it's important to experience failure, as well. To create an error, change the HTML of the website being tested. You could, for instance, edit the CSS declaration:
```
`<link href="./files/main.css" rel="stylesheet">`
```
Changing or removing as much as a single character of the URL (e.g. change "main" to "min") changes the website to display as raw HTML without a layout.
![Website login form displayed as raw HTML][9]
This small change is definitely an error. However, when the test is executed, it shows no problem and still passes. Ignoring such a blatant error outright is clearly not what you would expect of your tests; after all, they should guard against you involuntarily breaking your website.
Now instead, change or remove the element IDs of the input fields. Since these IDs are invisible, this change doesn't have any impact on the website from a user's perspective. But when the test executes, it fails with a **NoSuchElementException**. This essentially means that this irrelevant change _broke the test_. Tests that ignore major changes but fail on invisible and hence irrelevant ones are the current standard in test automation. This is basically the _opposite_ of how a test should behave.
Now, take the original test and wrap the driver in a RecheckDriver:
```
`driver = new RecheckDriver( new ChromeDriver() );`
```
Then either replace the assertion with a call to **driver.capTest();** at the end of the test or register the JUnit 5 extension **@ExtendWith(RecheckExtension.class)**. If you remove the CSS from the website, the test fails, as it should:
![Failed test][10]
But if you change or remove the element IDs instead, the test still passes.
This surprising ability, coming from the "unbreakable" feature of recheck-web, is explained in detail below. This is how a test should behave: detect changes important to the user, and do not break on changes that are irrelevant to the user.
### How it works
The [recheck-web][5] project is a free, open source tool that operates on top of Selenium. It is golden master-based, which essentially means that it creates a copy of the rendered website the first time the test is executed, and subsequent runs of the test compare the current state against that copy (the golden master). This is how it can detect that the website has changed in unfavorable ways. It is also how it can still identify an element after its ID has changed: It simply peeks into the golden master (where the ID is still present) and finds the element there. Using additional properties like XPath, HTML name, and CSS classes, recheck-web identifies the element on the changed website and returns it to Selenium. The test can then interact with the element, just as before, and report the change.
![recheck-web's process][11]
#### Problems with golden master testing
Golden master testing, in general, has two essential drawbacks:
1. It is often difficult to ignore irrelevant changes. Many changes are not problematic (e.g., date and time changes, random IDs, etc.). For the same reason that Git features the **.gitignore** file, recheck-web features the **recheck.ignore** file. And its Git-like syntax makes it easy to specify which differences to ignore.
2. It is often cumbersome to maintain redundancy. Golden masters usually have quite an overlap. Often, the same change has to be approved multiple times, nullifying the efficiency gained during the fast test creation. For that, recheck comes complete with its own [command-line interface (CLI)][12] that takes care of this annoying task. The CLI (and the [commercial GUI][13]) lets users easily apply the same change to the same element in all instances or simply apply or ignore all changes at once.
The example above illustrates both drawbacks and their respective solutions: the changed ID was detected, but not reported because the ID attribute in the **recheck.ignore** file was specified to be ignored with **attribute=id**. Removing that rule makes the test fail, but it does not _break_ (the test still executes and reports the changed ID).
The example test uses the implicit checking mechanism, which automatically checks the result after every action. (If you prefer explicit checking, e.g. by calling **re.check**, that is entirely possible too.) Opening the URL, entering the user name, and entering the password are three actions performed on the same page, so three golden masters are created for that page, and the changed ID is thus reported three times. All three instances can be treated with a single call to **recheck commit --all tests.report** on the command line. Applying the change makes the recheck-web test fail because the ID is removed from the golden master. This calls for another neat feature of recheck-web: the **retestId**.
### Virtual constant IDs
The basic idea of the **retestId** is to introduce an additional attribute in the copy of the website. Since this attribute lives only in the website copy, not on the live site, it can never be affected by a change (unless the element is completely removed). This is called a _virtual constant ID_.
Now, this **retestId** can be referred to in the test. Simply replace the call to, for instance, **By.id("username")** with **By.retestId("username")**, and this problem is solved for good. This also addresses instances where elements are hard to reference because they have no ID to begin with.
### Filter mechanism
What would Git be without the **.gitignore** file? Filtering out irrelevant changes is one of the most important features of a version-control system. Traditional assertion-based testing ignores more than 99% of the changes. Instead, similar to Git without a **.gitignore** file, recheck-web reports any and all changes.
It's up to the user to ignore changes that aren't of interest. Recheck-web can be used for cross-browser testing, cross-device testing, deep visual regression testing, and functional regression testing, depending on what you do or do not ignore.
The filtering mechanism is as simple (based on the **.gitignore** file) as it is powerful. Single attributes can be filtered globally or for certain elements. Single elements—or even whole parts of the page—can be ignored. If this is not powerful enough, you can implement filter rules in JavaScript to, for example, ignore different URLs with the same base or position differences of less than five pixels.
A good starting point for understanding this is the [predefined filter files][14] that are distributed with recheck-web. Ignoring element positioning is usually a good idea. If you want to learn more about how to maintain your **recheck.ignore** file or create your own filters, see the [documentation][15].
### Summary
Recheck-web is one of the few golden master-based testing tools available; alternatives include Approval Tests and Jest.
Recheck-web provides the ability to quickly and easily create tests that are more complete and robust than traditional tests. Because it compares rendered websites (or parts of them) with each other, cross-browser testing, cross-platform testing, and other test scenarios can be realized. Also, this kind of testing is an enabling technology that will allow artificial intelligence to generate additional tests.
Recheck-web is free and open source, so please [try it out][5]. The company's business model is to offer additional services (e.g., storing golden masters and reports as well as an AI to generate tests) and to have a commercial GUI on top of the CLI for maintaining the golden masters.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/test-automation-without-assertions
作者:[Jeremias Roessler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/roesslerj
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://opensource.com/article/19/7/what-golden-image
[3]: https://approvaltests.com
[4]: https://jestjs.io/
[5]: https://github.com/retest/recheck-web
[6]: http://retest.de
[7]: https://github.com/retest/recheck-web-example
[8]: https://www.seleniumhq.org/
[9]: https://opensource.com/sites/default/files/uploads/webformerror.png (Website login form displayed as raw HTML)
[10]: https://opensource.com/sites/default/files/uploads/testfails.png (Failed test)
[11]: https://opensource.com/sites/default/files/uploads/recheck-web-process.png (recheck-web's process)
[12]: https://github.com/retest/recheck.cli
[13]: https://retest.de/review/
[14]: https://github.com/retest/recheck/tree/master/src/main/resources/filter/web
[15]: https://docs.retest.de/recheck/usage/filter

View File

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 Python tools for getting started with astronomy)
[#]: via: (https://opensource.com/article/19/10/python-astronomy-open-data)
[#]: author: (Gina Helfrich, Ph.D. https://opensource.com/users/ginahelfrich)
4 Python tools for getting started with astronomy
======
Explore the universe with NumPy, SciPy, Scikit-Image, and Astropy.
![Person looking up at the stars][1]
NumFOCUS is a nonprofit charity that supports amazing open source toolkits for scientific computing and data science. As part of the effort to connect Opensource.com readers with the NumFOCUS community, we are republishing some of the most popular articles from [our blog][2]. To learn more about our mission and programs, please visit [numfocus.org][3]. If you're interested in participating in the NumFOCUS community in person, check out a local [PyData event][4] happening near you.
* * *
### Astronomy with Python
Python is a great language for science, and specifically for astronomy. The various packages such as [NumPy][5], [SciPy][6], [Scikit-Image][7] and [Astropy][8] (to name but a few) are all a great testament to the suitability of Python for astronomy, and there are plenty of use cases. [NumPy, Astropy, and SciPy are NumFOCUS fiscally sponsored projects; Scikit-Image is an affiliated project.] Since leaving the field of astronomical research behind more than 10 years ago to start a second career as a software developer, I have always been interested in the evolution of these packages. Many of my former colleagues in astronomy used most if not all of these packages for their research work. I have since worked on implementing professional astronomy software packages for instruments for the Very Large Telescope (VLT) in Chile, for example.
It struck me recently that the Python packages have evolved to such an extent that it is now fairly easy for anyone to build [data reduction][9] scripts that can provide high-quality data products. Astronomical data is ubiquitous, and what is more, it is almost all publicly available—you just need to look for it.
For example, ESO, which runs the VLT, offers the data for download on their site. Head over to [www.eso.org/UserPortal][10] and create a user name for their portal. If you look for data from the instrument SPHERE you can download a full dataset for any of the nearby stars that have exoplanet or proto-stellar discs. It is a fantastic and exciting project for any Pythonista to reduce that data and make the planets or discs that are deeply hidden in the noise visible.
I encourage you to download the ESO or any other astronomy imaging dataset and go on that adventure. Here are a few tips:
1. Start off with a good dataset. Have a look at papers about nearby stars with discs or exoplanets and then search, for example: <http://archive.eso.org/wdb/wdb/eso/sphere/query>. Notice that some data on this site is marked as red and some as green. The red data is not publicly available yet — it will say under “release date” when it will be available.
2. Read something about the instrument you are using the data from. Try and get a basic understanding of how the data is obtained and what the standard data reduction should look like. All telescopes and instruments have publicly available documents about this.
3. You will need to consider the standard problems with astronomical data and correct for them:
  1. Data comes in FITS files. You will need **pyfits** or **astropy** (which contains pyfits) to read them into **NumPy** arrays (a short code sketch follows this list). In some cases the data comes as a cube, and you should use **numpy.median** along the z-axis to turn it into a 2-D array. For some SPHERE data you get two copies of the same piece of sky on the same image (each taken through a different filter), which you will need to extract using **indexing and slicing**.
  2. The master dark and bad-pixel map. All instruments will have specific “dark frames”: images taken with the shutter closed (no light at all). Use these to extract a mask of bad pixels with **NumPy masked arrays**. This bad-pixel mask is very important: you need to keep track of it as you process the data to get a clean combined image in the end. In some cases it also helps to subtract this master dark from all scientific raw images.
3. Instruments will typically also have a master flat frame. This is an image or series of images taken with a flat uniform light source. You will need to divide all scientific raw images by this (again, using numpy masked array makes this an easy division operation).
  4. For planet imaging, the fundamental technique for making planets visible against a bright star relies on using a coronagraph and a technique known as angular differential imaging. To that end, you need to identify the optical centre of the images. This is one of the trickiest steps and requires finding the artificial helper images embedded in the images using **skimage.feature.blob_dog**.
4. Be patient. It can take a while to understand the data format and how to handle it. Making some plots and histograms of the pixel data can help you to understand it. It is well worth it to be persistent! You will learn a lot about imaging data and processing.
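To make the first corrections above more concrete, here is a minimal sketch using **astropy** and **NumPy**. The file names are placeholders and the bad-pixel threshold is chosen purely for illustration; the documentation of the instrument you are working with describes how its calibration frames should really be used:
```
import numpy as np
from astropy.io import fits

# read the raw science cube and collapse it along the z-axis into a 2-D image
cube = fits.getdata("science_cube.fits")      # shape: (n_frames, ny, nx)
image = np.median(cube, axis=0)

# build a bad-pixel mask from the master dark (threshold is only an example)
dark = fits.getdata("master_dark.fits")
bad_pixels = dark > 5 * np.median(dark)

# subtract the dark and divide by the master flat, keeping bad pixels masked throughout
flat = fits.getdata("master_flat.fits")
dark_subtracted = np.ma.masked_array(image - dark, mask=bad_pixels)
calibrated = dark_subtracted / np.ma.masked_array(flat, mask=bad_pixels)
```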
Using the tools offered by NumPy, SciPy, Astropy, scikit-image and more in combination, with some patience and persistence, it is possible to analyse the vast amount of available astronomical data to produce some stunning results. And who knows, maybe you will be the first one to find a planet that was previously overlooked! Good luck!
_This article was originally published on the NumFOCUS blog and is republished with permission. It is based on [a talk][11] by [Ole Moeller-Nilsson][12], CTO at Pivigo. If you want to support NumFOCUS, you can donate [here][13] or find your local [PyData event][4] happening around the world._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/python-astronomy-open-data
作者:[Gina Helfrich, Ph.D.][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ginahelfrich
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/space_stars_cosmos_person.jpg?itok=XUtz_LyY (Person looking up at the stars)
[2]: https://numfocus.org/blog
[3]: https://numfocus.org
[4]: https://pydata.org/
[5]: http://numpy.scipy.org/
[6]: http://www.scipy.org/
[7]: http://scikit-image.org/
[8]: http://www.astropy.org/
[9]: https://en.wikipedia.org/wiki/Data_reduction
[10]: http://www.eso.org/UserPortal
[11]: https://www.slideshare.net/OleMoellerNilsson/pydata-lonon-finding-planets-with-python
[12]: https://twitter.com/olly_mn
[13]: https://numfocus.org/donate

View File

@ -0,0 +1,287 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Advance your awk skills with two easy tutorials)
[#]: via: (https://opensource.com/article/19/10/advanced-awk)
[#]: author: (Dave Neary https://opensource.com/users/dneary)
Advance your awk skills with two easy tutorials
======
Go beyond one-line awk scripts with mail merge and word counting.
![a checklist for a team][1]
Awk is one of the oldest tools in the Unix and Linux user's toolbox. Created in the 1970s by Alfred Aho, Peter Weinberger, and Brian Kernighan (the A, W, and K of the tool's name), awk was created for complex processing of text streams. It is a companion tool to sed, the stream editor, which is designed for line-by-line processing of text files. Awk allows more complex structured programs and is a complete programming language.
This article will explain how to use awk for more structured and complex tasks, including a simple mail merge application.
### Awk program structure
An awk script is made up of functional blocks surrounded by **{}** (curly brackets). There are two special function blocks, **BEGIN** and **END**, that execute before processing the first line of the input stream and after the last line is processed. In between, blocks have the format:
```
`pattern { action statements }`
```
Each block executes when the line in the input buffer matches the pattern. If no pattern is included, the function block executes on every line of the input stream.
Also, the following syntax can be used to define functions in awk that can be called from any block:
```
`function name(parameter list) { statements }`
```
This combination of pattern-matching blocks and functions allows the developer to structure awk programs for reuse and readability.
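As a tiny illustration of this structure, the following script (the file name used below is made up for this example) defines a function, calls it from a pattern-matching block, and prints a summary from the **END** block:
```
#!/usr/bin/awk -f
function double(x) { return 2 * x }

BEGIN { total = 0 }

# only lines consisting of a single integer match this pattern
/^[0-9]+$/ { total += double($1) }

END { print "Total of doubled values:", total }
```
Saved as **double.awk**, it could be run with **awk -f double.awk numbers.txt**.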
### How awk processes text streams
Awk reads text from its input file or stream one line at a time and uses a field separator to parse it into a number of fields. In awk terminology, the current buffer is a _record_. There are a number of special variables that affect how awk reads and processes a file:
* **FS** (field separator): By default, this is any whitespace (spaces or tabs)
* **RS** (record separator): By default, a newline (**\n**)
* **NF** (number of fields): When awk parses a line, this variable is set to the number of fields that have been parsed
* **$0:** The current record
* **$1, $2, $3, etc.:** The first, second, third, etc. field from the current record
* **NR** (number of records): The number of records that have been parsed so far by the awk script
There are many other variables that affect awk's behavior, but this is enough to start with.
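For example, assuming a plain text file called **notes.txt**, this one-liner prints the record number and the number of whitespace-separated fields for each line:
```
`awk '{ print "record " NR " contains " NF " fields" }' notes.txt`
```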
### Awk one-liners
For a tool so powerful, it's interesting that most of awk's usage is basic one-liners. Perhaps the most common awk program prints selected fields from an input line from a CSV file, a log file, etc. For example, the following one-liner prints a list of usernames from **/etc/passwd**:
```
`awk -F":" '{print $1 }' /etc/passwd`
```
As mentioned above, **$1** is the first field in the current record. The **-F** option sets the FS variable to the character **:**.
The field separator can also be set in a BEGIN function block:
```
`awk 'BEGIN { FS=":" } {print $1 }' /etc/passwd`
```
In the following example, every user whose shell is not **/sbin/nologin** can be printed by preceding the block with a pattern match:
```
`awk 'BEGIN { FS=":" } ! /\/sbin\/nologin/ {print $1 }' /etc/passwd`
```
### Advanced awk: Mail merge
Now that you have some of the basics, try delving deeper into awk with a more structured example: creating a mail merge.
A mail merge uses two files, one (called in this example **email_template.txt**) containing a template for an email you want to send:
```
From: Program committee <pc@event.org>
To: {firstname} {lastname} <{email}>
Subject: Your presentation proposal
Dear {firstname},
Thank you for your presentation proposal:
  {title}
We are pleased to inform you that your proposal has been successful! We
will contact you shortly with further information about the event
schedule.
Thank you,
The Program Committee
```
And the other is a CSV file (called **proposals.csv**) with the people you want to send the email to:
```
firstname,lastname,email,title
Harry,Potter,hpotter@hogwarts.edu,"Defeating your nemesis in 3 easy steps"
Jack,Reacher,reacher@covert.mil,"Hand-to-hand combat for beginners"
Mickey,Mouse,mmouse@disney.com,"Surviving public speaking with a squeaky voice"
Santa,Claus,sclaus@northpole.org,"Efficient list-making"
```
You want to read the CSV file, replace the relevant fields in the first file (skipping the first line), then write the result to a file called **acceptanceN.txt**, incrementing **N** for each line you parse.
Write the awk program in a file called **mail_merge.awk**. Statements are separated by **;** in awk scripts. The first task is to set the field separator variable and a couple of other variables the script needs. You also need to read and discard the first line in the CSV, or a file will be created starting with _Dear firstname_. To do this, use the special function **getline** and reset the record counter to 0 after reading it.
```
BEGIN {
  FS=",";
  template="email_template.txt";
  output="acceptance";
  getline;
  NR=0;
}
```
The main function is very straightforward: for each line processed, a variable is set for the various fields—**firstname**, **lastname**, **email**, and **title**. The template file is read line by line, and the function **sub** is used to substitute any occurrence of the special character sequences with the value of the relevant variable. Then the line, with any substitutions made, is output to the output file.
Since you are dealing with the template file and a different output file for each line, you need to clean up and close the file handles for these files before processing the next record.
```
{
        # Read relevant fields from input file
        firstname=$1;
        lastname=$2;
        email=$3;
        title=$4;
        # Set output filename
        outfile=(output NR ".txt");
        # Read a line from template, replace special fields, and
        # print result to output file
        while ( (getline ln < template) > 0 )
        {
                sub(/{firstname}/,firstname,ln);
                sub(/{lastname}/,lastname,ln);
                sub(/{email}/,email,ln);
                sub(/{title}/,title,ln);
                print(ln) &gt; outfile;
        }
        # Close template and output file in advance of next record
        close(outfile);
        close(template);
}
```
You're done! Run the script on the command line with:
```
`awk -f mail_merge.awk proposals.csv`
```
or
```
`awk -f mail_merge.awk < proposals.csv`
```
and you will find text files generated in the current directory.
### Advanced awk: Word frequency count
One of the most powerful features in awk is the associative array. In most programming languages, array entries are typically indexed by a number, but in awk, arrays are referenced by a key string. For example, you could store an entry from the **proposals.csv** file from the previous section in a single associative array, like this:
```
        proposer["firstname"]=$1;
        proposer["lastname"]=$2;
        proposer["email"]=$3;
        proposer["title"]=$4;
```
This makes text processing very easy. A simple program that uses this concept is a word frequency counter. You can parse a file, break each line into words (ignoring punctuation), increment the counter for each word in the line, then output the 20 words that occur most often in the text.
First, in a file called **wordcount.awk**, set the field separator to a regular expression that includes whitespace and punctuation:
```
BEGIN {
        # ignore 1 or more consecutive occurrences of the characters
        # in the character group below
        FS="[ .,:;()&lt;&gt;{}@!\"'\t]+";
}
```
Next, the main loop function will iterate over each field, ignoring any empty fields (which happens if there is punctuation at the end of a line), and increment the word count for the words in the line.
```
{
        for (i = 1; i <= NF; i++) {
                if ($i != "") {
                        words[$i]++;
                }
        }
}
```
Finally, after the text is processed, use the END function to print the contents of the array, then use awk's capability of piping output into a shell command to do a numerical sort and print the 20 most frequently occurring words:
```
END {
        sort_head = "sort -k2 -nr | head -n 20";
        for (word in words) {
                printf "%s\t%d\n", word, words[word] | sort_head;
        }
        close (sort_head);
}
```
Running this script on an earlier draft of this article produced this output:
```
[dneary@dhcp-49-32.bos.redhat.com]$ awk -f wordcount.awk < awk_article.txt
the     79
awk     41
a       39
and     33
of      32
in      27
to      26
is      25
line    23
for     23
will    22
file    21
we      16
We      15
with    12
which   12
by      12
this    11
output  11
function        11
```
### What's next?
If you want to learn more about awk programming, I strongly recommend the book [_Sed and awk_][8] by Dale Dougherty and Arnold Robbins.
One of the keys to progressing in awk programming is mastering "extended regular expressions." Awk offers several powerful additions to the sed [regular expression][9] syntax you may already be familiar with.
Another great resource for learning awk is the [GNU awk user guide][10]. It has a full reference for awk's built-in function library, as well as lots of examples of simple and complex awk scripts.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/advanced-awk
作者:[Dave Neary][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dneary
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk (a checklist for a team)
[2]: mailto:pc@event.org
[3]: mailto:hpotter@hogwarts.edu
[4]: mailto:reacher@covert.mil
[5]: mailto:mmouse@disney.com
[6]: mailto:sclaus@northpole.org
[7]: mailto:dneary@dhcp-49-32.bos.redhat.com
[8]: https://www.amazon.com/sed-awk-Dale-Dougherty/dp/1565922255/book
[9]: https://en.wikibooks.org/wiki/Regular_Expressions/POSIX-Extended_Regular_Expressions
[10]: https://www.gnu.org/software/gawk/manual/gawk.html

View File

@ -0,0 +1,236 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Looping your way through bash)
[#]: via: (https://www.networkworld.com/article/3449116/looping-your-way-through-bash.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Looping your way through bash
======
There are many ways to loop through data in a bash script and on the command line. Which way is best depends on what you're trying to do.
[Alan Levine / Flickr][1] [(CC BY 2.0)][2]
There are a lot of options for looping in bash whether on the command line or in a script. The choice depends on what you're trying to do.
You may want to loop indefinitely or quickly run through the days of the week. You might want to loop once for every file in a directory or for every account on a server. You might want to loop through every line in a file or have the number of loops be a choice when the script is run. Let's check out some of the options.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][3]
### Simple loops
Probably the simplest loop is a **for** loop like the one below. It loops as many times as there are pieces of text on the line. We could as easily loop through the words **cats are smart** as the numbers 1, 2, 3 and 4.
```
#!/bin/bash
for num in 1 2 3 4
do
echo $num
done
```
And, to prove it, here's a similar loop run on the command line:
```
$ for word in cats are smart
> do
> echo $word
> done
cats
are
smart
```
### for vs while
Bash provides both a **for** and a **while** looping command. In **while** loops, some condition is tested each time through the loop to determine whether the loop should continue. This example is practically the same as the one before in how it works, but imagine what a difference it would make if we wanted to loop 444 times instead of just 4.
```
#!/bin/bash
n=1
while [ $n -le 4 ]
do
echo $n
((n++))
done
```
### Looping through value ranges
If you want to loop through every letter of the alphabet or some more restricted range of letters, you can use syntax like this:
```
#!/bin/bash
for x in {a..z}
do
echo $x
done
```
If you used **{d..f}**, you would only loop three times.
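The same brace syntax works for numeric ranges, and bash 4 and later also accept a step value. Here's a quick sketch:
```
#!/bin/bash
for n in {1..10..2}   # prints 1 3 5 7 9; the "..2" step requires bash 4 or later
do
echo $n
done
```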
### Looping inside loops
There's also nothing stopping you from looping inside a loop. In this example, we're using a **for** loop inside a **while** loop.
```
#!/bin/bash
n=1
while [ $n -lt 6 ]
do
for l in {a..d}
do
echo $n$l
done
((n++))
done
```
The output would in this example include 1a, 1b, 1c, 1d, 2a and so on, ending at 5d. Note that **((n++))** is used to increment the value of $n so that **while** has a stopping point.
### Looping through variable data
If you want to loop through every account on the system, every file in a directory or some other kind of variable data, you can issue a command within your loop to generate the list of values to loop through. In this example, we loop through every account (actually every file) in **/home**  assuming, as we should expect, that there are no other files or directories in **/home**.
```
#!/bin/bash
for user in `ls /home`
do
echo $user
done
```
If the command were **date** instead of **ls /home**, we'd run through each of the 7 pieces of text in the output of the date command.
```
$ for word in `date`
> do
> echo $word
> done
Thu
31
Oct
2019
11:59:59
PM
EDT
```
### Looping by request
It's also very easy to allow the person running the script to determine how many times a loop should run. If you want to do this, however, you should test the response provided to be sure that it's numeric. This example shows three ways to do that.
```
#!/bin/bash
echo -n "How many times should I say hello? "
read ans
if [ "$ans" -eq "$ans" ]; then
echo ok1
fi
if [[ $ans = *[[:digit:]]* ]]; then
echo ok2
fi
if [[ "$ans" =~ ^[0-9]+$ ]]; then
echo ok3
fi
```
The first option shown above might look a little odd, but it works because the **-eq** test only works if the values being compared are numeric. If the test came down to asking whether **"f" -eq "f"**, it would fail. The second test uses the bash character class for digits. The third tests the variable to ensure that it contains only digits.
Of course, once you've selected how you prefer to test a user response to be sure that it's numeric, you need to follow through on the loop. In this next example, we'll print "hello" as many times as the user wants to see it. The **-le** operator does a "less than or equal" test.
```
#!/bin/bash
echo -n "How many times should I say hello? "
read ans
if [ "$ans" -eq "$ans" ]; then
n=1
while [ $n -le $ans ]
do
echo hello
((n++))
done
fi
```
### Looping through the lines in a file
If you want to loop through the contents of a file line by line (i.e., NOT word by word), you can use a loop like this one:
```
#!/bin/bash
echo -n "File> "
read file
n=0
while read line; do
((n++))
echo "$n: $line"
done < $file
```
The word "line" used in the above script is for clarity, but you could use any variable name. The **while read** and the redirection of the file content on the last line of the script is what provides the line-by-line reading.
### Looping forever
If you want to loop forever or until, well, someone gets tired of seeing the script's output and decides to kill it, you can simply use the **while true** syntax.
```
#!/bin/bash
while true
do
echo -n "Still running at "
date
sleep 10
done
```
The examples shown above are basically only (excuse the pun) "shells" for the kind of real work that you might need to do and are meant simply to provide the basic syntax for running undoubtedly far more useful commands.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3449116/looping-your-way-through-bash.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/cogdog/7778741378/in/photolist-cRo5NE-8HFUGG-e1kzG-4TFXrc-D3mM8-Lzx7h-LzGRB-fN3CY-LzwRo-8mWuUB-2jJ2j8-AABU8-eNrDET-eND7Nj-eND6Co-pNq3ZR-3bndB2-dNobDn-3brHfC-eNrSXv-4z4dNn-R1i2P5-eNDvyQ-agaw5-eND55q-4KQnc9-eXg6mo-eNscpF-eNryR6-dTGEqg-8uq9Wm-eND54j-eNrKD2-cynYp-eNrJsk-eNCSSj-e9uAD5-25xTWb-eNrJ3e-eNCW8s-7nKXtJ-5URF1j-8Y253Z-oaNVEQ-4AUK9b-6SJiLP-7GL54w-25yEqLa-fN3gL-dEgidW
[2]: https://creativecommons.org/licenses/by/2.0/legalcode
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -7,59 +7,59 @@
[#]: via: (https://www.linuxtechi.com/enable-epel-repo-centos8-rhel8-server/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Enable EPEL Repository on CentOS 8 and RHEL 8 Server
如何在 CentOS 8 和 RHEL 8 服务器上启用 EPEL 仓库
======
**EPEL** Stands for Extra Packages for Enterprise Linux, it is a free and opensource additional packages repository available for **CentOS** and **RHEL** servers. As the name suggests, EPEL repository provides extra and additional packages which are not available in the default package repositories of [CentOS 8][1] and [RHEL 8][2].
**EPEL** 代表 “Extra Packages for Enterprise Linux”它是一个免费的开源附加软件包仓库可用于 **CentOS****RHEL** 服务器。顾名思义EPEL 仓库提供了额外的软件包,它们在 [CentOS 8][1]和 [RHEL 8][2] 的默认软件包仓库中不可用。
In this article we will demonstrate how to enable and use epel repository on CentOS 8 and RHEL 8 Server.
在本文中,我们将演示如何在 CentOS 8 和 RHEL 8 服务器上启用和使用 epel 存储库。
[![EPEL-Repo-CentOS8-RHEL8][3]][4]
### Prerequisites of EPEL Repository
### EPEL 仓库的先决条件
* Minimal CentOS 8 and RHEL 8 Server
* Root or sudo admin privileges
* Internet Connection
* Minimal CentOS 8 和 RHEL 8 服务器
* root 或 sudo 管理员权限
* 网络连接
### Install and Enable EPEL Repository on RHEL 8.x Server
### 在 RHEL 8.x 服务器上安装并启用 EPEL 仓库
Login or ssh to your RHEL 8.x server and execute the following dnf command to install EPEL rpm package,
登录或 SSH 到你的 RHEL 8.x 服务器并执行以下 dnf 命令来安装 EPEL rpm 包,
```
[root@linuxtechi ~]# dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
```
Output of above command would be something like below,
上面命令的输出将如下所示,
![dnf-install-epel-repo-rehl8][3]
Once epel rpm package is installed successfully then it will automatically enable and configure its yum / dnf repository.  Run following dnf or yum command to verify whether EPEL repository is enabled or not,
epel rpm 包成功安装后,它将自动启用并配置其 yum/dnf 仓库。运行以下 dnf 或 yum 命令,以验证是否启用了 EPEL 仓库,
```
[root@linuxtechi ~]# dnf repolist epel
Or
或者
[root@linuxtechi ~]# dnf repolist epel -v
```
![epel-repolist-rhel8][3]
### Install and Enable EPEL Repository on CentOS 8.x Server
### 在 CentOS 8.x 服务器上安装并启用 EPEL 仓库
Login or ssh to your CentOS 8 server and execute following dnf or yum command to install **epel-release** rpm package. In CentOS 8 server, epel rpm package is available in its default package repository.
登录或 SSH 到你的 CentOS 8 服务器,并执行以下 dnf 或 yum 命令来安装 “**epel-release**” rpm 软件包。在 CentOS 8 服务器中epel rpm 在其默认软件包仓库中。
```
[root@linuxtechi ~]# dnf install epel-release -y
Or
或者
[root@linuxtechi ~]# yum install epel-release -y
```
Execute the following commands to verify the status of epel repository on CentOS 8 server,
执行以下命令来验证 CentOS 8 服务器上 epel 仓库的状态,
```
[root@linuxtechi ~]# dnf repolist epel
[root@linuxtechi ~]# dnf repolist epel
Last metadata expiration check: 0:00:03 ago on Sun 13 Oct 2019 04:18:05 AM BST.
repo id repo name status
*epel Extra Packages for Enterprise Linux 8 - x86_64 1,977
@ -82,11 +82,11 @@ Total packages: 1,977
[root@linuxtechi ~]#
```
Above commands output confirms that we have successfully enabled epel repo. Lets perform some basic operations on EPEL repo.
以上命令的输出说明我们已经成功启用了 epel 仓库。让我们在 EPEL 仓库上执行一些基本操作。
### List all available packages from epel repository
### 列出 epel 仓库中所有可用的软件包
If you want to list all the packages from epel repository then run the following dnf command,
如果要列出 epel 仓库中的所有的软件包,请运行以下 dnf 命令,
```
[root@linuxtechi ~]# dnf repository-packages epel list
@ -116,23 +116,23 @@ zvbi-fonts.noarch 0.2.35-9.el8 epel
[root@linuxtechi ~]#
```
### Search a package from epel repository
### 从 epel 仓库中搜索软件包
Lets assume if we want to search Zabbix package in epel repository, execute the following dnf command,
假设我们要搜索 epel 仓库中的 Zabbix 包,请执行以下 dnf 命令,
```
[root@linuxtechi ~]# dnf repository-packages epel list | grep -i zabbix
```
Output of above command would be something like below,
上面命令的输出类似下面这样,
![epel-repo-search-package-centos8][3]
### Install a package from epel repository
### 从 epel 仓库安装软件包
Lets assume we want to install htop package from epel repo, then issue the following dnf command,
假设我们要从 epel 仓库安装 htop 包,运行以下 dnf 命令,
Syntax:
语法:
# dnf --enablerepo="epel" install <pkg_name>
@ -140,9 +140,9 @@ Syntax:
[root@linuxtechi ~]# dnf --enablerepo="epel" install htop -y
```
**Note:** If we dont specify the “**enablerepo=epel**” in above command then it will look for htop package in all available package repositories.
**注意:**如果我们在上面的命令中未指定 “**--enablerepo=epel**”,那么它将在所有可用的软件包仓库中查找 htop 包。
Thats all from this article, I hope above steps helps you to enable and configure EPEL repository on CentOS 8 and RHEL 8 Server, please dont hesitate to share your comments and feedback in below comments section.
本文就是这些内容了,我希望上面的步骤能帮助你在 CentOS 8 和 RHEL 8 服务器上启用并配置 EPEL 仓库,请在下面的评论栏分享你的评论和反馈。
--------------------------------------------------------------------------------
@ -150,7 +150,7 @@ via: https://www.linuxtechi.com/enable-epel-repo-centos8-rhel8-server/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,378 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Initializing arrays in Java)
[#]: via: (https://opensource.com/article/19/10/initializing-arrays-java)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
Java 中初始化数组
======
数组是一种有用的数据类型,用于管理在连续内存位置中建模最好的集合元素。下面是如何有效地使用它们。
![Coffee beans and a cup of coffee][1]
有使用 C 或者 FORTRAN 语言编程经验的人会对数组的概念很熟悉。它们基本上是一个连续的内存块,其中每个位置都是某种数据类型:整型、浮点型或者诸如此类的数据类型。
Java 的情况与此类似,但是有一些额外的问题。
### 一个数组的示例
让我们在 Java 中创建一个长度为 10 的整型数组:
```
int[] ia = new int[10];
```
上面的代码片段会发生什么?从左到右依次是:
1. 最左边的 **int[]** 将数组变量的 _类型_ 声明为 **int** 数组(由 **[]** 表示)。
2. 它的右边是变量的名称,当前为 **ia**
3. 接下来,**=** 告诉我们,左侧定义的变量赋值为右侧的内容。
4. 在 **=** 的右侧,我们看到了 **new**,它在 Java 中表示一个对象正在 _被初始化_ 中,这意味着已为其分配存储空间并调用了其构造函数([请参见此处以获取更多信息][2])。
5. 然后,我们看到 **int[10]**,它告诉我们正在初始化的这个对象是包含 10 个整型的数组。
因为 Java 是强类型的,所以变量 **ia** 的类型必须跟 **=** 右侧表达式的类型兼容。
### 初始化示例数组
让我们把这个简单的数组放在一段代码中,并尝试运行一下。将以下内容保存到一个名为 **Test1.java** 的文件中,使用 **javac** 编译,使用 **java** 运行(当然是在终端中):
```
import java.lang.*;
public class Test1 {
public static void main(String[] args) {
int[] ia = new int[10]; // 见下文注 1
System.out.println("ia is " + ia.getClass()); // 见下文注 2
for (int i = 0; i < ia.length; i++) // 见下文注 3
System.out.println("ia[" + i + "] = " + ia[i]); // 见下文注 4
}
}
```
让我们来看看最重要的部分。
1. 我们很容易发现长度为 10 的整型数组,**ia** 的声明和初始化。
2. 在下面的行中,我们看到表达式 **ia.getClass()**。没错,**ia** 是属于一个 _类__对象_,这行代码将告诉我们是哪个类。
3. 在紧接的下一行中,我们看到了一个循环 **for (int i = 0; i < ia.length; i++)**,它定义了一个循环索引变量 **i**,该变量运行的序列从 0 到比 **ia.length** 小 1这个表达式告诉我们在数组 **ia** 中定义了多少个元素。
4. 接下来,循环体打印出 **ia** 的每个元素的值。
当这个程序被编译和运行时,它产生以下结果:
```
me@mydesktop:~/Java$ javac Test1.java
me@mydesktop:~/Java$ java Test1
ia is class [I
ia[0] = 0
ia[1] = 0
ia[2] = 0
ia[3] = 0
ia[4] = 0
ia[5] = 0
ia[6] = 0
ia[7] = 0
ia[8] = 0
ia[9] = 0
me@mydesktop:~/Java$
```
**ia.getClass()** 的输出的字符串表示形式是 **[I**,它是“整数数组”的简写。与 C 语言类似Java 数组以第 0 个元素开始,扩展到第 **<数组大小> - 1** 个元素。我们可以在上面看到数组 ia 的每个元素都设置为零(看来是数组构造函数)。
所以,就这些吗?声明类型,使用适当的初始化器,就完成了吗?
好吧,并没有。在 Java 中有许多其它方法来初始化数组。
### 为什么我要初始化一个数组,有其它方式吗?
像所有好的问题一样,这个问题的答案是“视情况而定”。在这种情况下,答案取决于初始化后我们希望对数组做什么。
在某些情况下,数组自然会作为一种累加器出现。例如,假设我们正在编程实现计算小型办公室中一组电话分机接收和拨打的电话数量。一共有 8 个分机,编号为 1 到 8加上话务员的分机编号为 0。 因此,我们可以声明两个数组:
```
int[] callsMade;
int[] callsReceived;
```
然后,每当我们开始一个新的累积呼叫统计数据的周期时,我们就将每个数组初始化为:
```
callsMade = new int[9];
callsReceived = new int[9];
```
在每个累积通话统计数据的最后阶段,我们可以打印出统计数据。粗略地说,我们可能会看到:
```
import java.lang.*;
import java.io.*;
public class Test2 {
public static void main(String[] args) {
int[] callsMade;
int[] callsReceived;
// 初始化呼叫计数器
callsMade = new int[9];
callsReceived = new int[9];
// 处理呼叫……
// 分机拨打电话callsMade[ext]++
// 分机接听电话callsReceived[ext]++
// 汇总通话统计
System.out.printf("%3s%25s%25s\n", "ext", " calls made",
"calls received");
for (int ext = 0; ext < callsMade.length; ext++) {
System.out.printf("%3d%25d%25d\n", ext,
callsMade[ext], callsReceived[ext]);
}
}
}
```
这会产生这样的输出:
```
me@mydesktop:~/Java$ javac Test2.java
me@mydesktop:~/Java$ java Test2
ext calls made calls received
0 0 0
1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0
me@mydesktop:~/Java$
```
呼叫中心不是很忙的一天。
在上面的累加器示例中,我们看到由数组初始化程序设置的零起始值可以满足我们的需求。但是在其它情况下,这个起始值可能不是正确的选择。
例如,在某些几何计算中,我们可能需要将二维数组初始化为单位矩阵(除沿主对角线的那些零以外的所有零)。我们可以选择这样做:
```
double[][] m = new double[3][3];
for (int d = 0; d < 3; d++) {
m[d][d] = 1.0;
}
```
在这种情况下,我们依靠数组初始化器 **new double[3][3]** 将数组设置为零,然后使用循环将对角元素设置为 1。 在这种简单情况下,我们可以使用 Java 提供的快捷方式:
```
double[][] m = {
{1.0, 0.0, 0.0},
{0.0, 1.0, 0.0},
{0.0, 0.0, 1.0}};
```
这种可视结构特别适用于这种应用程序,在这种应用程序中,可以通过双重检查查看数组的实际布局。但是在这种情况下,行数和列数只在运行时确定,我们可能会看到这样的东西:
```
int nrc;
// 一些代码确定行数和列数 = nrc
double[][] m = new double[nrc][nrc];
for (int d = 0; d < nrc; d++) {
m[d][d] = 1.0;
}
```
值得一提的是Java 中的二维数组实际上是数组的数组,没有什么能阻止无畏的程序员让这些第二级数组中的每个数组的长度都不同。也就是说,下面这样的事情是完全合法的:
```
int [][] differentLengthRows = {
{1, 2, 3, 4, 5},
{6, 7, 8, 9},
{10, 11, 12},
{13, 14},
{15}};
```
在涉及不规则形状矩阵的各种线性代数应用中,可以应用这种类型的结构(有关更多信息,请参见[此 Wikipedia 文章][5])。除此之外,既然我们了解到二维数组实际上是数组的数组,那么以下内容也就不足为奇了:
```
differentLengthRows.length
```
告诉我们二维数组 **differentLengthRows** 的行数,并且:
```
differentLengthRows[i].length
```
告诉我们 **differentLengthRows****i** 行的列数。
### 深入理解数组
考虑到在运行时确定数组大小的想法,我们看到数组在实例化之前仍需要我们知道该大小。但是,如果在处理完所有数据之前我们不知道大小怎么办?这是否意味着我们必须先处理一次以找出数组的大小,然后再次处理?这可能很难做到,尤其是如果我们只有一次机会使用数据时。
[Java 集合框架][6]很好地解决了这个问题。提供的其中一项是 **ArrayList** 类,它类似于数组,但可以动态扩展。为了演示 **ArrayList** 的工作原理,让我们创建一个 ArrayList 并将其初始化为前 20 个[斐波那契数字][7]
```
import java.lang.*;
import java.util.*;
public class Test3 {
public static void main(String[] args) {
ArrayList<Integer> fibos = new ArrayList<Integer>();
fibos.add(0);
fibos.add(1);
for (int i = 2; i < 20; i++) {
fibos.add(fibos.get(i - 1) + fibos.get(i - 2));
}
for (int i = 0; i < fibos.size(); i++) {
System.out.println("fibonacci " + i + " = " + fibos.get(i));
}
}
}
```
上面的代码中,我们看到:
* 用于存储多个 **Integer** 的 **ArrayList** 的声明和实例化。
* 使用 **add()** 附加到 **ArrayList** 实例。
* 使用 **get()** 通过索引号检索元素。
* 使用 **size()** 来确定 **ArrayList** 实例中已经有多少个元素。
没有显示 **set()** 方法,它的作用是将一个值放在给定的索引号上。
该程序的输出为:
```
fibonacci 0 = 0
fibonacci 1 = 1
fibonacci 2 = 1
fibonacci 3 = 2
fibonacci 4 = 3
fibonacci 5 = 5
fibonacci 6 = 8
fibonacci 7 = 13
fibonacci 8 = 21
fibonacci 9 = 34
fibonacci 10 = 55
fibonacci 11 = 89
fibonacci 12 = 144
fibonacci 13 = 233
fibonacci 14 = 377
fibonacci 15 = 610
fibonacci 16 = 987
fibonacci 17 = 1597
fibonacci 18 = 2584
fibonacci 19 = 4181
```
**ArrayList** 实例也可以通过其它方式初始化。例如,可以把一个数组传给 **ArrayList** 的构造器,或者在编译时就已经知道初始元素的情况下,使用 **List.of()** 和 **Arrays.asList()** 方法。我发现自己并不经常使用这些选项,因为我使用 **ArrayList** 的主要场景就是只想读取一次数据。
此外,对于那些喜欢在加载数据后使用数组的人,可以使用 **ArrayList****toArray()** 方法将其实例转换为数组;或者,在初始化 **ArrayList** 实例之后,返回到当前数组本身。
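下面是一个把这几种方式放在一起的小示例(类名 **Test4** 只是沿用上文的命名习惯,文件内容纯属演示):
```
import java.util.*;

public class Test4 {
    public static void main(String[] args) {
        // 用已有数组初始化 ArrayList
        Integer[] primes = {2, 3, 5, 7, 11};
        ArrayList<Integer> list = new ArrayList<>(Arrays.asList(primes));

        // 在编译期就知道初始元素时,也可以用 List.of()(Java 9+,返回不可变列表)
        List<Integer> fixed = List.of(2, 3, 5, 7, 11);

        // 读取完数据后,再用 toArray() 转回数组
        Integer[] back = list.toArray(new Integer[0]);

        System.out.println(list + " / " + fixed + " / " + Arrays.toString(back));
    }
}
```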
Java 集合框架提供了另一种类似数组的数据结构,称为 **Map**。我所说的“类似数组”是指 **Map** 定义了一个对象集合,它的值可以通过一个键来设置或检索,但与数组(或 **ArrayList**)不同,这个键不需要是整型数;它可以是 **String** 或任何其它复杂对象。
例如,我们可以创建一个 **Map**,其键为 **String**,其值为 **Integer** 类型,如下:
```
Map<String, Integer> stoi = new HashMap<String, Integer>();  // Map 是接口,这里用 HashMap 作为具体实现
```
然后我们可以对这个 **Map** 进行如下初始化:
```
stoi.set("one",1);
stoi.set("two",2);
stoi.set("three",3);
```
等类似操作。稍后,当我们想要知道 **"three"** 的数值时,我们可以通过下面的方式将其检索出来:
```
stoi.get("three");
```
在我的认知中,**Map** 对于将第三方数据集中出现的字符串转换为我的数据集中的一致代码值非常有用。作为[数据转换管道][8]的一部分,我经常会构建一个小型的独立程序,用作在处理数据之前清理数据;为此,我几乎总是会使用一个或多个 **Map**
值得一提的是,内部定义有 **ArrayList****ArrayLists****Map****Maps** 是很可能的有时也是合理的。例如假设我们在看树我们对按树种和年龄范围累积树的数目感兴趣。假设年龄范围定义是一组字符串值“young”、“mid”、“mature” 和 “old”物种是 “Douglas fir”、“western red cedar” 等字符串值,那么我们可以将这个 **Map** 中的 **Map** 定义为:
```
Map<String, Map<String, Integer>> counter = new HashMap<String, Map<String, Integer>>();  // 同样用 HashMap 作为 Map 接口的实现
```
这里需要注意的一件事是,以上内容仅为 **Map**_行_ 创建存储。 因此,我们的累加代码可能类似于:
```
// 假设我们已经知道了物种和年龄范围
if (!counter.containsKey(species)) {
counter.put(species, new HashMap<String, Integer>());
}
if (!counter.get(species).containsKey(ageRange)) {
counter.get(species).put(ageRange,0);
}
```
此时,我们可以开始累加:
```
counter.get(species).put(ageRange, counter.get(species).get(ageRange) + 1);
```
最后值得一提的是Java 8 中的新特性Streams 还可以用来初始化数组、**ArrayList** 实例和 **Map** 实例。关于此特性的详细讨论可以在[此处][9]和[此处][10]中找到。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/initializing-arrays-java
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/java-coffee-mug.jpg?itok=Bj6rQo8r (Coffee beans and a cup of coffee)
[2]: https://opensource.com/article/19/8/what-object-java
[3]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[5]: https://en.wikipedia.org/wiki/Irregular_matrix
[6]: https://en.wikipedia.org/wiki/Java_collections_framework
[7]: https://en.wikipedia.org/wiki/Fibonacci_number
[8]: https://towardsdatascience.com/data-science-for-startups-data-pipelines-786f6746a59a
[9]: https://stackoverflow.com/questions/36885371/lambda-expression-to-initialize-array
[10]: https://stackoverflow.com/questions/32868665/how-to-initialize-a-map-using-a-lambda