Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-09-03 23:00:01 +08:00
commit 34427aa9c0
8 changed files with 538 additions and 207 deletions

View File

@@ -51,7 +51,7 @@ $ git clone https://github.com/wireghoul/graudit
现在,我们需要创建一个 Graudit 的符号链接,以便我们可以将其作为一个命令使用。
```
$ mkdir -p ~/bin
$ ln --symbolic ~/graudit/graudit ~/bin/graudit
```

View File

@@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Making Zephyr More Secure)
[#]: via: (https://www.linux.com/audience/developers/making-zephyr-more-secure/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
Making Zephyr More Secure
======
Zephyr is gaining momentum, with more and more companies embracing this open source project for their embedded devices. However, security is becoming a huge concern for these connected devices. The NCC Group recently conducted an evaluation and security assessment of the project to help harden it against attacks. In this interview, Kate Stewart, Senior Director of Strategic Programs at the Linux Foundation, talks about the assessment and the evolution of the project.
Here is a quick transcript of the interview:
**Swapnil Bhartiya: The NCC Group recently evaluated Zephyr for security. Can you talk about the outcome of that evaluation?**
Kate Stewart: We're very thankful to the NCC Group for the work that they did in helping us get Zephyr hardened further. In some senses, when it first hit us, it was like, “Okay, they're taking us seriously now. Awesome.” And the reason they're doing this is that their customers are asking for it. They've got people who are very interested in Zephyr, so they decided to invest the time doing the research to see what they could find. And the fact that we're good enough to critique now is a nice positive for the project, no question.
Up till this point, we'd been getting some vulnerabilities that researchers had noticed in certain areas and had told us about. We'd issued CVEs, so we had a process down, but suddenly being hit with a whole bulk of them like that was like, “Okay, time to up our game, guys.” And so, we found out we didn't have a good way of letting people who have products based on Zephyr know about our vulnerabilities. What we wanted was to make it clear that people who have products out in the market can ask to be told when there's a vulnerability. We just added a new webpage so they know how to register, and they can let us know to contact them.
The challenge of embedded is that you don't quite know where the software is. We've got a lot of people downloading Zephyr, we've got a lot of people using Zephyr. We're seeing people upstreaming things all the time, but we don't know where the products are; it's all word of mouth to a large extent. There're no tracers or anything else; you don't want that in an embedded space on IoT, where battery life is important. And so, figuring out how to let the people who want to be notified know is pretty key.
We'd registered as a CNA with MITRE several years ago now, and we can assign CVE numbers in the project. But what we didn't have was a good way of reaching out to people beyond our membership under embargo, so that we could give them time to remediate any issues we were fixing. By changing our policies, we've gone from a 60-day embargo window to a 90-day embargo window: in the first 30 days, we work internally to get the team to fix the issues, and then we've got a 60-day window for the people who do products to remediate in the field if necessary. So, making ourselves useful for product makers was one of the big focuses this year.
**Swapnil Bhartiya: Since Zephyr's LTS release was made last year, can you talk about the new releases, especially from the security perspective, because I think the latest version is 2.3.0?**
Kate Stewart: Yeah, 2.3.0, and then we also have 1.14.2, and 1.14 is our LTS-1, as we say. We've put an update out to it with the security fixes. Like the Linux kernel, a long-term stable release has security fixes and bug fixes backported into it, so that people can build products on it and keep them active over time without as much change in the interfaces and everything else that we're doing in the mainline development tree and in what we've just done with 2.3.
2.3 has a lot of new features in it, and we've got all these vulnerabilities remediated. There's a lot more coming down the road, so the community is working on that right now. We've adopted a new set of coding guidelines for the project, and we will be working on that so we can get ourselves ready for going after safety certifications next year. So there's a lot of code in motion right now, but there's a lot of new features being added every day. It's great.
**Swapnil Bhartiya: I also want to talk a bit about the community side of it. Can you talk about how the community is growing and what new use cases are emerging?**
Kate Stewart: We've just added two new members to Zephyr: Teenage Engineering has just joined us, and Laird Connectivity has just joined us, and it's really cool to start seeing these products coming out. There are some rather interesting technologies and products showing up, so I'm really looking forward to being able to have blog posts about them.
Laird Connectivity, for example, makes a small device running Zephyr that you can use for monitoring distance without recording other information. So, in the days of COVID, we need to start figuring out technology assists that help us keep the risk down, and Laird Connectivity has devices for that.
So we're seeing a lot of innovation happening very quickly in Zephyr, and that's really Zephyr's strength: it's got a very solid code base and lets people add their innovation on top.
**Swapnil Bhartiya: What role do you think Zephyr is going to play in the post-COVID-19 world?**
Kate Stewart: Well, I think they offer us interesting opportunities. Some of the technologies that are being looked at for monitoring, for instance: we have distance monitoring, contact tracing, and things like that. We can either do it very manually, or we can start to take advantage of the technology infrastructures to do so. But people may not want a device effectively monitoring them all the time; they may just want to know exactly, position-wise, where they are. So that's potentially some degree of control over what's being sent into the tracing and tracking.
These sorts of technologies, I think, will be helping us improve things over time. I think there's a lot of knowledge that we're getting out of these, and ways we can optimize the information; the RTOS and the sensors are discrete pieces of functionality, and they are improving how we look at things.
**Swapnil Bhartiya: There are so many people using Zephyr, but since it is open source, we are not even aware of them. How do you ensure that, whether or not someone is an official member of the project, if they are running Zephyr their devices are secure?**
Kate Stewart: We do a lot of testing with Zephyr; there's a tremendous amount of test infrastructure. There's the whole regression infrastructure. We work to various thresholds of quality levels, and we've got a lot of expertise and have publicly documented all of our best practices. The security team is a top-notch group of people; I'm really so proud to be able to work with them. They do a really good job of caring about the issues as well as finding them, debugging them, and making sure anything that comes up gets solved. So in that sense, there's a lot of really great people working on Zephyr, and it makes it a really fun community to work with, no question. In fact, it's growing fast actually.
**Swapnil Bhartiya: Kate, thank you so much for taking the time to talk to me today about these projects.**
--------------------------------------------------------------------------------
via: https://www.linux.com/audience/developers/making-zephyr-more-secure/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972

View File

@@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source Project For Earthquake Warning Systems)
[#]: via: (https://www.linux.com/featured/open-source-project-for-earthquake-warning-systems/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
Open Source Project For Earthquake Warning Systems
======
Earthquakes, or rather the shaking they cause, don't kill people; buildings do. If we can get people out of buildings in time, we can save lives. Grillo has founded OpenEEW in partnership with IBM and the Linux Foundation to allow anyone to build their own earthquake early-warning system. Swapnil Bhartiya, the founder of TFiR, talked to the founder of Grillo on behalf of The Linux Foundation to learn more about the project.
Here is the transcript of the interview:
**Swapnil Bhartiya: If you look at natural phenomena like earthquakes, there's no way to fight nature. We have to learn to coexist with it. Early warnings are the best thing to do, and we have all these technologies, like IoT and AI/ML. All those things are there, but we still don't know much about these phenomena. So, what I want to understand is: if you look at earthquakes, in some countries the damage is much greater than in other places. What is the reason for that?**
Andres Meira: Earthquakes disproportionately affect countries that don't have great construction. And so, if you look at places like Mexico, the Caribbean, much of Latin America, Nepal, even some parts of India in the north and the Himalayas, you find that earthquakes can cause more damage than, say, in California or in Tokyo. The reason is that it is buildings that ultimately kill people, not the shaking itself. So, if you can find a way to get people out of buildings before the shaking, that's really the solution here. There are many things that we don't know about earthquakes. It's obviously a whole field of study, but we can't tell you, for example, that an earthquake will happen in 10 years or five years. We can give you some probabilities, but not enough for you to act on.
What we can say is that an earthquake is happening right now. These technologies are all about reducing the latency, so that when we know an earthquake is happening, within milliseconds we can be telling people who will be affected by that event.
**Swapnil Bhartiya: What kind of work is going on to better understand earthquakes themselves?**
Andres Meira: I have a very narrow focus. I'm not a seismologist; my focus is detecting earthquakes and alerting people. In the world of seismology, there are a lot of efforts to understand tectonic movement, but I would say there are a few interesting things happening that I know of. For example, undersea cables. People in Chile and other places are looking at undersea telecommunications cables and the effects that any sort of seismic movement has on the signals. They can actually use that as a detection system. But when you talk about some of the really deep earthquakes, 60-100 miles beneath the surface, man has not yet created holes deep enough for us to place sensors. So we're very limited as to actually detecting earthquakes at great depth. We have to wait for them to affect us near the surface.
**Swapnil Bhartiya: So then how do these earthquake early-warning systems work? I want to understand a couple of points: What does the device itself look like? What do those sensors look like? What does the software look like? And how do the devices share data and interact with each other?**
Andres Meira: We've developed several iterations of the sensors over the last couple of years, and effectively, they consist of a small microcontroller, an accelerometer (the core component), and some other components. What the device does is record accelerations: it looks at the X, Y, and Z axes and records accelerations from the ground, so we are very fussy about how we install our sensors. Anybody can install one in their home through this OpenEEW initiative that we're doing.
The sensors themselves record shaking accelerations, and we send all of those accelerations in quite large messages using MQTT. We send them every second from every sensor, and all of this data is collected in the cloud, where we run algorithms in real time. We want to know that the shaking the accelerometer is picking up is not a passing truck but actually an earthquake.
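To make the transport concrete, here is a minimal Node.js sketch of that once-per-second MQTT publish, using the `mqtt` package. The broker URL, topic name, and payload shape are illustrative assumptions, not OpenEEW's actual schema; the real firmware and message format live in the OpenEEW repos.
```
// Hypothetical sketch: publish one second of simulated X/Y/Z accelerations
// over MQTT, the way the sensors described above stream their readings.
const mqtt = require('mqtt');

// Stand-in for the accelerometer driver: 32 samples per axis per second.
function readAxisSamples() {
  return Array.from({ length: 32 }, () => Math.random() * 0.01);
}

const client = mqtt.connect('mqtt://broker.example.com'); // placeholder broker

client.on('connect', () => {
  setInterval(() => {
    const message = {
      device_id: 'sensor-001', // hypothetical identifier
      t: Date.now(),
      x: readAxisSamples(),
      y: readAxisSamples(),
      z: readAxisSamples(),
    };
    client.publish('openeew/traces', JSON.stringify(message)); // assumed topic
  }, 1000);
});
```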
So we've developed the algorithms that can tell those things apart. And of course, we wait for one or two sensors to confirm the same event so that we don't get any false positives, because you can still get some errors. Once we have that confirmation in the cloud, we can send a message to all of the client devices. If you have an app, you will receive a message saying there's an earthquake at this location, and your device will then calculate how long the shaking will take to reach it, therefore how much energy will be lost, and therefore what shaking you're going to be expecting very soon.
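The arithmetic behind that last step can be sketched in a few lines. The constant wave speed below is a rough assumption (strong shaking travels on the order of a few kilometers per second); the real client would also account for detection and messaging delays, depth, and attenuation.
```
// Rough sketch: given the distance to the detected earthquake, estimate
// how many seconds remain before strong shaking arrives. The constant
// wave speed is an assumption for illustration only.
const WAVE_SPEED_KM_PER_S = 3.5; // approximate S-wave speed in the crust

function secondsUntilShaking(distanceKm) {
  return distanceKm / WAVE_SPEED_KM_PER_S;
}

// Acapulco to Mexico City is roughly 300 km, which is consistent with the
// ~60 seconds of warning mentioned later in the interview once detection
// and messaging delays are subtracted.
console.log(secondsUntilShaking(300).toFixed(0)); // about 86 seconds
```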
**Swapnil Bhartiya: Where are these devices installed?**
Andres Meira: They are installed at the moment in several countries: Mexico, Chile, Costa Rica, and Puerto Rico. We are very fussy about how people install them, and in fact, on the OpenEEW website, we have a guide for this. We really require that they're installed on the ground floor, because the higher up you go, the more the frequencies of the building movement differ, which affects the recordings. We need the sensor to be fixed to a solid structural element; this could be a column or a reinforced wall, something rigid, and it needs to be away from noise. So it wouldn't be great if it's near a door that is constantly opening and closing, although we can handle that to some extent. As long as you are within the parameters, that's all we need; ideally we look for a good internet connection, although we have cellular versions as well.
The real name of the game here is quantity more than quality. If you can have a lot of sensors, it doesn't matter if one is out. It doesn't matter if the quality is down, because we're waiting for confirmation from other ones, and redundancy is how you achieve a stable network.
**Swapnil Bhartiya: What is the latency between the time when the sensors detect an earthquake and when the warning is sent out? Does it also mean that the further you are from the epicenter, the more time you will get to leave a building?**
Andres Meira: The time that a user gets, what we call the window of opportunity for them to actually act on the information, is variable, and it depends on where the earthquake is relative to the user. I'll give you an example. Right now, I'm in Mexico City. If we are detecting an earthquake in Acapulco, then you might get 60 seconds of advance warning, because an earthquake travels at more or less a fixed velocity, which is known, and so the distance and the velocity give you the time that you're going to get.
If that earthquake were in the south of Mexico, in Oaxaca, we might get two minutes. Now, this is variable. If you are in Istanbul or Kathmandu, you might be very near the fault line. If the distance is less than what I just described, the time goes down. But even if you only have five seconds or 10 seconds, which might happen in the Bay Area, for example, that's still okay. You can still ask children in a school to get underneath the furniture. You can still ask surgeons in a hospital to stop doing the surgery. There are many things you can do, and there are also automated things: you can shut off elevators or turn off gas pipes. So any amount of time is good, but the actual time itself is variable.
**Swapnil Bhartiya: The most interesting thing that you are doing is that you are also open sourcing some of these technologies. Talk about what components you have open sourced and why.**
Andres Meira: Open sourcing was a tough decision for us. It wasn't something we felt comfortable with initially, because we had spent several years developing these tools and were obviously very proud. I think there came a point where we realized: why are we doing this? Are we doing this to develop cool technologies, to make some money, or to save lives? All of us live in Mexico; all of us have seen the devastation of these things. We realized that open source was the only way to really accelerate what we're doing.
If we want to reach people in the countries that I've mentioned, and if we really want people to work on our technology as well and make it better, which means better alert times and fewer false positives; if we want to really take this to the next level, then we can't do it on our own. It would take a long time, and we might never get there.
So that was the idea behind the open source. And then we thought about what we could do with open source. We identified three of our core technologies, and by that I mean the sensors; the detection system, which lives in the cloud but could also live on a Raspberry Pi; and the way we alert people. The last part is really quite open and depends on the context: it could be a radio station, it could be a mobile app, which we've got on the website, on the GitHub, it could be loudspeakers, it could be many things. Those three core components we have now published in our repo, which is OpenEEW on GitHub, and from there, people can pick and choose.
It might be that some people are data scientists, so they might go just for the data, because we also publish over a terabyte of accelerometer data from our networks. People might be developing new detection systems using machine learning; we've got instructions for that, and we would very much welcome it. Then we have something for the people who do front-end development, so they might be helping us with the applications, and we also have something for the makers and the hardware guys, so they might be interested in working on the sensors and the firmware. There's really a whole suite of technologies that we've published.
**Swapnil Bhartiya: There are other earthquake warning systems. How is OpenEEW different?**
Andres Meira: I would divide the other systems into two categories. I would look at the national systems; say, the Japanese system, or the California and West Coast system called ShakeAlert. Those are systems with significant public funding that have taken decades to develop. I would put those into one category, and in the other category I would put some of the applications that people have developed: MyShake or SkyAlert, there are many of them.
If you look at the first category, I would say that the main difference is that we understand the limitations of those systems, because an earthquake in northern Mexico is going to affect California and vice versa. An earthquake in Guatemala is going to affect Mexico and vice versa. An earthquake in the Dominican Republic is going to affect Puerto Rico. The point is that earthquakes don't respect geography or political boundaries, and so we think national systems are limited; so far they are limited by their borders. So, that was the first thing.
In terms of the technology, actually in many ways, the MEMS accelerometers that we use now are streets ahead of where we were a couple of years ago, and they really allow us to detect earthquakes hundreds of kilometers away. And actually, we can perform as well as these national systems. We've studied our system versus the Mexican national system, called SASMEX, and more often than not, we are faster and more accurate; it's on our website. So there's no reason to say that our technology is worse. In fact, having cheaper sensors means you can have huge networks, and these arrays are what make all the difference.
In terms of the private ones, the problem with those is that they sometimes don't have the investment to really do wide coverage. So open source is our strength there, because we can rely on many people to add to the project.
**Swapnil Bhartiya: What kind of roadmap do you have for the project? How do you see the evolution of the project itself?**
Andres Meira: This has been a new area for me; I've had to learn. The governance of OpenEEW, as of today, like you mentioned, is now under the umbrella of the Linux Foundation. So this is now a Linux Foundation project, and they have certain prerequisites. We had to form a technical committee. This committee makes the steering decisions and creates the roadmap you mentioned. So, the roadmap is now published on GitHub, and it's a work in progress, but effectively we're looking 12 months ahead, and we've identified some areas that really need priority. Machine learning, as you mentioned, is definitely something that will be a huge change in this world, because if we can detect earthquakes, potentially with just a single station, with a much higher degree of certainty, then we can create networks that are less dense. So you could have something in northern India, in Nepal, or in Ecuador with just a handful of sensors. That's a real holy grail for us.
We are also asking on the roadmap for people to work with us in lots of other areas. In terms of the sensors themselves, we want to do more detection on the edge. We feel that edge computing with the sensors is obviously a much better solution than what we do now, which involves a lot of cloud detection. If we can move a lot of that work to the actual devices, then I think we're going to have much smarter networks and less telemetry, which opens up new connectivity options. So, the sensors are another priority area on the roadmap.
**Swapnil Bhartiya: What kind of people would you like to get involved with and how can they get involved?**
Andres Meira: As of today, we're formally announcing the initiative, and I would really invite people to go to OpenEEW.com, where we've got a site that outlines some areas people can get involved with. We've tried to consider what type of people would join the project. You're going to get seismologists: we have seismologists from Harvard University and from other areas. They're most interested in the data, from what we've seen so far. They're going to be looking at the data sets that we've offered, and some of them are already looking at machine learning, so there are many things that they might be looking at. Of course, anyone involved with Python and machine learning, and data scientists in general, might do similar things. Ultimately, you can be agnostic about seismology; it shouldn't put you off, because we've tried to abstract it away. We've got it down to the point where this is really just data.
Then we've also identified the engineers and the makers, and we've tried to guide them towards the repos, like the sensor repos. We are asking them to help us with the firmware and the hardware. And then, for your more typical full-stack or front-end developer, we've got some other repos that deal with the actual applications. How does the user get the data? How does the user get the alerts? There's a lot of work we can be doing there as well.
So, different people might have different interests. Someone might just want to take it all. Maybe someone wants to start a network in their community but isn't technical, and that's fine. We have a Slack channel that people can join, and people can say, “Hey, I'm in this part of the world, and I'm looking for people to help me with the sensors. I can do this part.” Maybe an entrepreneur might want to join and look for the technical people.
So, we're just open to anybody who is keen on the mission, and they're welcome to join.
--------------------------------------------------------------------------------
via: https://www.linux.com/featured/open-source-project-for-earthquake-warning-systems/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972

View File

@@ -1,114 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use GraphQL as an API gateway to monitor microservices)
[#]: via: (https://opensource.com/article/20/8/microservices-graphql)
[#]: author: (Rigin Oommen https://opensource.com/users/riginoommen)
Use GraphQL as an API gateway to monitor microservices
======
Use the monitoring feature of GraphQL to help you detect issues early, before a problem takes a critical microservice down.
![Net catching 1s and 0s or data in the clouds][1]
[Microservices][2] and [GraphQL][3] are a great combination, like bread and butter. They're both great on their own and even better together. Knowing the health of your microservices is important because they run important services—it would be foolish to wait until something critical breaks before diagnosing a problem. It doesn't take much effort to let GraphQL help you detect issues early.
![GraphQL in Microservices][4]
Routine health checks allow you to watch and test your services to get early notifications about problems before they affect your business, clients, or project. That's easy enough to say, but what does it really mean to do a health check?
Here are the factors I think about when designing a service checkup:
**Requirements for a server health check:**
1. I need to understand the availability status of my microservice.
2. I want to be able to manage the server load.
3. I want end-to-end (e2e) testing of my microservices.
4. I should be able to predict outages.
![Service health in microservices][5]
### Ways to do server health checks
Coming up with health checks can be tricky because, in theory, there's nearly an infinite number of things you could check for. I like to start small and run the most basic test: a ping test. This simply tests whether the server running the application is available. Then I ramp up my tests to evaluate specific concerns, thinking about the elements of my server that are most important. I think about the things that would be disastrous should they disappear suddenly.
1. **Ping check:** Ping is the simplest monitor type. It just checks that your application is online.
2. **Scripted browser:** Scripted browsers are more advanced; browser automation tools like [Selenium][6] enable you to implement custom monitoring rule sets.
3. **API tests:** API tests are used to monitor API endpoints. This is an advanced version of the ping check model, where you can define the monitoring plan based on the API responses.
### Health check with GraphQL
In a typical REST-based microservice, you need to build health check features from scratch. It's a time-intensive process, but it's not something you have to worry about with GraphQL.
According to its [website][7]:
> "GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools."
When you bootstrap a GraphQL microservice, you also get a provision to monitor the health of the microservice. This is something of a hidden gem.
As I mentioned above, you can perform API tests as well as ping checks with the GraphQL endpoint.
Apollo GraphQL Server provides a default endpoint that returns information about your microservices and server health. It's not very complex: it returns status code 200 if the server is running.
The default endpoint is `<server-host>/.well-known/apollo/server-health`.
![Health Check with GraphQL][8]
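A monitoring job can poll that endpoint and treat anything other than HTTP 200 as unhealthy. Here is a minimal Node.js probe as a sketch; the host and port are placeholders, and the built-in `fetch` requires Node 18 or later.
```
// Minimal probe for Apollo Server's default health endpoint.
const HEALTH_URL = 'http://localhost:4000/.well-known/apollo/server-health';

async function checkHealth() {
  try {
    const res = await fetch(HEALTH_URL);
    console.log(res.status === 200 ? 'healthy' : `unhealthy (HTTP ${res.status})`);
  } catch (err) {
    console.log(`unreachable: ${err.message}`);
  }
}

checkHealth();
```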
### Advanced health checks
In some cases, basic health checks may not be enough to ensure the integrity of a system. For example, tightly coupled systems require more business logic to ensure the health of the system.
Apollo GraphQL is efficient enough to manage this use case by declaring an `onHealthCheck` function while defining the server:
```
/* Defining the Apollo Server */
const apollo = new ApolloServer({
  playground: process.env.NODE_ENV !== 'production',
  typeDefs: gqlSchema,
  resolvers: resolver,
  onHealthCheck: () => {
    return new Promise((resolve, reject) => {
      // Replace the `true` in this conditional with more specific checks!
      if (true) {
        resolve();
      } else {
        reject();
      }
    });
  }
});
```
When you define an `onHealthCheck` method, it returns a promise that _resolves_ if the server is ready and _rejects_ if there is an error.
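In practice, the promise is where real dependency checks go. Here is one possible shape, with `pingDatabase()` as a hypothetical stand-in for whatever connectivity check your service actually needs:
```
/* A sketch of a more meaningful onHealthCheck */
async function pingDatabase() {
  return true; // replace with your real driver's ping or connectivity call
}

const apolloWithChecks = new ApolloServer({
  typeDefs: gqlSchema,
  resolvers: resolver,
  onHealthCheck: async () => {
    const ok = await pingDatabase().catch(() => false);
    if (!ok) {
      // Throwing rejects the promise, and the endpoint reports unhealthy.
      throw new Error('database unreachable');
    }
  }
});
```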
GraphQL makes monitoring APIs easier. In addition, using it for your server infrastructure makes things scalable. If you want to try adopting GraphQL as your new infrastructure definition, see my GitHub repo for [example code and configuration][9].
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/8/microservices-graphql
作者:[Rigin Oommen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/riginoommen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
[2]: https://opensource.com/resources/what-are-microservices
[3]: https://opensource.com/article/19/6/what-is-graphql
[4]: https://opensource.com/sites/default/files/uploads/graphql-microservices.png (GraphQL in Microservices)
[5]: https://opensource.com/sites/default/files/uploads/servicehealth.png (Service health in microservices)
[6]: https://www.selenium.dev/
[7]: https://graphql.org/
[8]: https://opensource.com/sites/default/files/uploads/healthcheck.png (Health Check with GraphQL)
[9]: https://github.com/riginoommen/example-graphql

View File

@@ -0,0 +1,179 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Developing an email alert system using a surveillance camera with Node-RED and TensorFlow.js)
[#]: via: (https://www.linux.com/news/developing-an-email-alert-system-using-a-surveillance-camera-with-node-red-and-tensorflow-js/)
[#]: author: (Linux.com Editorial Staff https://www.linux.com/author/linuxdotcom/)
Developing an email alert system using a surveillance camera with Node-RED and TensorFlow.js
======
## **Overview**
In a previous article, we introduced [a procedure for developing an image recognition flow using Node-RED and TensorFlow.js][1]. Now, let's apply what we learned there and develop an email alert system that uses a surveillance camera together with image recognition. As shown in the following image, we will create a flow that automatically sends an email alert when a suspicious person is captured within a surveillance camera frame.
![][2]
## **Objective: Develop flow**
In this flow, the image of the surveillance camera is periodically acquired from the webserver, and the image is displayed under the **“Original image”** node in the lower left. After that, the image is recognized using the **TensorFlow.js** node. The recognition result and the image with recognition results are displayed under the **debug** tab and the **“image with annotation”** node, respectively.
![][3]
If a person is detected by image recognition, an alert email with the image file attached will be sent using the **SendGrid** node. Since it is difficult to set up a real surveillance camera, we will use a sample [image from a surveillance camera in Kanagawa Prefecture of Japan][4], used to check the water level of a river.
We will explain the procedure for creating this flow in the following sections. For the Node-RED environment, use your local PC, a Raspberry Pi, or a cloud-based deployment.
## **Install the required nodes**
Click the hamburger menu on the top right of the Node-RED flow editor, go to **“Manage palette” -> “Palette” tab -> “Install”** tab, and install the following nodes.
* [node-red-contrib-tensorflow][5]: Image recognition node using TensorFlow.js
* [node-red-contrib-image-output][6]: Nodes that display images on the Flow Editor
* [node-red-contrib-sendgrid][7]: Nodes that send mail using SendGrid
## **Create a flow of acquiring image data**
First, create a flow that acquires the image binary data from the webserver. As in the flow below, place an inject node (the name will be changed to **“timestamp”** when placed in the workspace), **http request** node, and **image preview** node, and connect them with wires in the user interface.
![][8]
Then double-click the **http request** node to change the node property settings.
## **Adjust** _**http request**_ **node property settings**
 
Paste the URL of the surveillance camera image to the URL on the property setting screen of the **http request** node. (In Google Chrome, when you right-click on the image and select **“Copy image address”** from the menu, the URL of the image is copied to the clipboard.) Also, select **“a binary buffer”** as the output format.
![][9]
## **Execute the flow to acquire image data**
Click the **Deploy** button at the top right of the flow editor, then click the button to the left of the **inject** node. The message is sent from the **inject** node to the **http request** node through the wire, and the image is acquired from the web server that provides the surveillance camera image. After receiving the image data, a message containing the data in binary format is sent to the **image preview** node, and the image is displayed under the **image preview** node.
![][10]
An image of the river taken by the surveillance camera is displayed in the lower right.
## **Create a flow for image recognition of the acquired image data**
Next, create a flow that analyzes what is in the acquired image. Place a **cocossd** node, a **debug** node (the name will be changed to **msg.payload** when you place it), and a second **image preview** node.
Then, connect the **output terminal** on the right side of the **http request** node to the **input terminal** on the left side of the **cocossd** node.
Next, connect the **output terminal** on the right side of the **cocossd** node to the **debug** node, and also connect the same **output terminal** to the **input terminal** on the left side of the second **image preview** node, with the respective wires.
Through the wire, the binary data of the surveillance camera image is sent to the **cocossd** node, and after the image recognition is performed using **TensorFlow.js,** the object name is displayed in the **debug** node, and the image with the image recognition result is displayed in the **image preview** node. 
![][11]
The **cocossd** node is designed to store the object name in the variable **msg.payload**, and the binary data of the image with the annotation in the variable **msg.annotatedInput**. 
To make this flow work as intended, you need to double-click the **image preview** node used to display the image and change the node property settings.
## **Adjust** _**image preview**_ **node property settings**
By default, the **image preview** node displays the image data stored in the variable **msg.payload**. Here, change this default variable to **msg.annotatedInput**.
![][12]
## **Adjust** _**inject**_ **node property settings**
Since the flow is run regularly every minute, the **inject** node's properties need to be changed. In the **Repeat** pull-down menu, select **“interval”** and set **“1 minute”** as the time interval. Also, since we want to start the periodic run process immediately after pressing the **Deploy** button, select the checkbox on the left side of **“inject once after 0.1 seconds”**.
![][13]
## **Run the flow for image recognition**
The flow process will run immediately after pressing the **Deploy** button. When a person (the author) is shown on the surveillance camera, the image recognition result **“person”** is displayed in the debug tab on the right. Also, below the **image preview** node, you will see the image annotated with an orange square.
![][14]
## **Create a flow that sends an email when a person is caught by the surveillance camera**
Finally, create a flow to send the annotated image by email when the object name in the image recognition result is **“person”**. As a subsequent node of the **cocossd** node, place a **switch** node that performs condition determination, a **change** node that assigns values, and a **sendgrid** node that sends an email, and connect each node with a wire.
![][15]
Then, change the property settings for each node, as detailed in the sections below.
## **Adjust the** _**switch**_ **node property settings**
Set the rule to execute the subsequent flow only if **msg.payload** contains the string **“person”**.
To set that rule, enter **“person”** in the comparison string for the condition **“==”** (on the right side of the **“az”** UX element in the property settings dialog for the switch node).
![][16]
## **Adjust the** _**change**_ **node property settings**
To attach the annotated image to the email, assign the image data stored in the variable **msg.annotatedInput** to the variable **msg.payload**. First, open the pull-down menu of **“az”** on the right side of the UX element of **“Target value”** and select **“msg.”**. Then enter **“annotatedInput”** in the text area on the right.
![][17]
If you forget to change to **“msg.”** in the pull-down menu that appears when you click **“az”,** the flow often does not work well, so check again to be sure that it is set to **“msg.”**.
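As an aside, if you are more comfortable writing JavaScript than chaining nodes, a single **function** node could perform the same judgment and substitution as the **switch** and **change** nodes; the following is an equivalent sketch, not part of the original flow:
```
// Node-RED function node body: forward the annotated image to the
// sendgrid node only when the recognition result contains "person".
if (msg.payload && msg.payload.includes('person')) {
  msg.payload = msg.annotatedInput; // attach the annotated image
  return msg;
}
return null; // stop the flow for any other detected object
```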
## **Adjust the** _**sendgrid**_ **node property settings**
Set the API key from the [SendGrid management screen][18], and then input the sender email address and recipient email address.
![][19]
Finally, to make it easier to see what each node is doing, open each node's properties and set an appropriate name.
## **Validate the operation of the flow to send an email when the surveillance camera captures a person in frame**
When a person is captured in the image of the surveillance camera, the image recognition result is displayed in the debug tab, the same as in the flow confirmation earlier, and the orange frame is displayed in the image under the **image preview** node of **“Image with annotation”**. You can see that the person is recognized correctly.
![][20]
After that, if the judgment process, the substitution process, and the email transmission process work as designed, you will receive an email on your smartphone with the annotated image file attached, as follows:
![][21]
## **Conclusion**
By using the flow created in this article, you can also build a simple security system for your own garden using a camera connected to a Raspberry Pi. At a larger scale, image recognition can also be run on image data acquired using network cameras that support protocols such as [ONVIF][22].
*About the author: Kazuhito Yokoi is an Engineer at Hitachi's OSS Solution Center, located in Yokohama, Japan.*
--------------------------------------------------------------------------------
via: https://www.linux.com/news/developing-an-email-alert-system-using-a-surveillance-camera-with-node-red-and-tensorflow-js/
作者:[Linux.com Editorial Staff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/linuxdotcom/
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/news/using-tensorflow-js-and-node-red-with-image-recognition-applications/
[2]: https://www.linux.com/wp-content/uploads/2020/09/tensor1.png
[3]: https://www.linux.com/wp-content/uploads/2020/09/tensor2.png
[4]: http://www.pref.kanagawa.jp/sys/suibou/web_general/suibou_joho/html/camera/past0/p20102_0_6.html
[5]: https://flows.nodered.org/node/node-red-contrib-tensorflow
[6]: https://flows.nodered.org/node/node-red-contrib-image-output
[7]: https://flows.nodered.org/node/node-red-contrib-sendgrid
[8]: https://www.linux.com/wp-content/uploads/2020/09/tensor3.png
[9]: https://www.linux.com/wp-content/uploads/2020/09/tensor4.png
[10]: https://www.linux.com/wp-content/uploads/2020/09/tensor5.png
[11]: https://www.linux.com/wp-content/uploads/2020/09/tensor6.png
[12]: https://www.linux.com/wp-content/uploads/2020/09/tensor7.png
[13]: https://www.linux.com/wp-content/uploads/2020/09/tensor8.png
[14]: https://www.linux.com/wp-content/uploads/2020/09/tensor9.png
[15]: https://www.linux.com/wp-content/uploads/2020/09/tensor10.png
[16]: https://www.linux.com/wp-content/uploads/2020/09/tensor11.png
[17]: https://www.linux.com/wp-content/uploads/2020/09/tensor12.png
[18]: https://sendgrid.com/
[19]: https://www.linux.com/wp-content/uploads/2020/09/tensor13.png
[20]: https://www.linux.com/wp-content/uploads/2020/09/tensor14.png
[21]: https://www.linux.com/wp-content/uploads/2020/09/tensor15.png
[22]: https://www.onvif.org/

View File

@@ -1,92 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Soon You'll be Able to Convert Any Website into Desktop Application in Linux Mint)
[#]: via: (https://itsfoss.com/web-app-manager-linux-mint/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Soon You'll be Able to Convert Any Website into Desktop Application in Linux Mint
======
Imagine this situation. You are working on a certain topic and you have more than twenty tabs open in your web browser, mostly related to the work.
Some of these tabs are for YouTube or some other music streaming website you are listening to.
You finished the work on the topic and closed the browser. Your intent was to close all the work-related tabs, but it also closed the tabs that you were using for listening to music or some other activities.
Now you'll have to log in to those websites again and find the track you were listening to, or whatever you were doing.
Frustrating, isn't it? Linux Mint understands your pain, and they have an upcoming project to help you out in such scenarios.
### Linux Mint's Web App Manager
![][1]
In a [recent post][2], the Linux Mint team revealed that it is working on a new tool called Web App Manager.
The Web App Manager tool will allow you to launch your favorite websites and have them run in their own window as if they were desktop applications.
While adding a website as a Web App, you can give it a custom name and icon. You can also assign it to a different category, which will help you find the app in the menu.
You may also specify which web browser you want the Web App to be opened in. There is also an option for enabling/disabling the navigation bar.
![Adding a Web App In Linux Mint][3]
Say, you add YouTube as a Web App:
![Web Apps In Linux Mint][4]
If you run this YouTube Web App, YouTube will now run in its own window and in a browser of your choice.
![YouTube Web App][5]
The Web App has most of the features you see in a regular desktop application. You can use it in the Alt+Tab switcher:
![Web App in Alt Tab Switcher][6]
You can even pin the Web App to the panel/taskbar for quick access.
![YouTube Web App added to the panel][7]
The Web App Manager is in beta right now, but it is fairly stable to use. It is not translation-ready yet, which is why it has not been released to the public.
If you are using Linux Mint and want to try the Web App Manager, you can download the DEB file for the beta version of this app from the link below:
[Download Web App Manager (beta) for Linux Mint][8]
### Web apps are not new to desktop Linux
This is not something groundbreaking from Linux Mint. Web apps have been on the scene for almost a decade now.
If you remember, Ubuntu had added the web app feature to its Unity desktop in 2013-14.
The lightweight Linux distribution Peppermint OS has listed ICE (a tool for web apps) as a main feature since 2010. In fact, Linux Mint's Web App Manager is based on Peppermint OS's [ICE][9].
Personally, I like the web app feature. It has its uses.
What do you think of Web Apps in Linux Mint? Is it something you look forward to using? Do share your views in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/web-app-manager-linux-mint/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/Web-App-Manager-linux-mint.jpg?resize=800%2C450&ssl=1
[2]: https://blog.linuxmint.com/?p=3960
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/Add-web-app-in-Linux-Mint.png?resize=600%2C489&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/Web-Apps-in-Linux-Mint.png?resize=600%2C489&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/youtube-web-app-linux-mint.jpg?resize=800%2C611&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/web-app-alt-tab-switcher.jpg?resize=721%2C576&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/panel.jpg?resize=470%2C246&ssl=1
[8]: http://www.linuxmint.com/tmp/blog/3960/webapp-manager_1.0.3_all.deb
[9]: https://github.com/peppermintos/ice

View File

@@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use GraphQL as an API gateway to monitor microservices)
[#]: via: (https://opensource.com/article/20/8/microservices-graphql)
[#]: author: (Rigin Oommen https://opensource.com/users/riginoommen)
使用 GraphQL 作为 API 网关来监控微服务
======
在一个问题使一个关键的微服务瘫痪之前,使用 GraphQL 的监控功能,帮助你及早发现问题。
![Net catching 1s and 0s or data in the clouds][1]
[微服务][2]和 [GraphQL][3] 就像面包和黄油一样,是一个很好的组合。它们本身都很好,结合起来就更好了。了解你的微服务的健康状况是很重要的,因为它们运行着重要的服务。如果等到某个关键的服务崩溃了才诊断问题,那是很愚蠢的。让 GraphQL 帮助你及早发现问题并不需要花费太多精力。
![GraphQL in Microservices][4]
常规的健康检查可以让你观察和测试你的服务,在问题影响到你的业务、客户或项目之前,尽早得到通知。说起来很简单,但健康检查到底要做什么呢?
以下是我在设计服务检查时考虑的因素:
**服务器健康检查的要求:**
1. 我需要了解我的微服务的可用性状态。
2. 我希望能够管理服务器的负载。
3. 我希望对我的微服务进行端到端e2e测试。
4. 我应该能够预测中断。
![Service health in microservices][5]
### 做服务器健康检查的方法
进行健康检查可能比较棘手因为理论上你可以检查的东西几乎是无穷无尽的。我喜欢从小处着手运行最基本的测试ping 测试。这只是测试运行应用的服务器是否可用。然后,我加强测试以评估特定问题,思考服务器中最重要的元素。我想到那些如果突然消失的话将是灾难性的事情。
1. **Ping 检查:**Ping 是最简单的监控类型。它只是检查你的应用是否在线。
2. **脚本化浏览器:**脚本化浏览器比较高级。像 [Selenium][6] 这样的浏览器自动化工具可以让你实现自定义的监控规则集。
3. **API 测试:**API 测试用于监控 API 端点。这是 ping 检查模型的高级版本,你可以根据 API 响应来定义监控计划。
### 使用 GraphQL 进行健康检查
在一个典型的基于 REST 的微服务中,你需要从头开始构建健康检查功能。这是一个时间密集型的过程,但使用 GraphQL 就不用担心了。
根据它的[网站][7]
> “GraphQL 是一种用于 API 的查询语言也是一种用现有数据完成这些查询的运行时。GraphQL 为你的 API 中的数据提供了一个完整的、可理解的描述,让客户有能力精确地仅查询他们所需要的东西,让 API 更容易随着时间的推移而进化,并实现强大的开发者工具。”
当你启动一个 GraphQL 微服务时,你还可以获得监控微服务的运行状况的预置。这是一个隐藏的宝贝。
正如我上面提到的,你可以用 GraphQL 端点执行 API 测试以及 ping 检查。
Apollo GraphQL Server 提供了一个默认的端点,它可以返回有关你的微服务和服务器健康的信息。它不是很复杂:如果服务器正在运行,它就会返回状态码 200。
默认端点是 `<server-host>/.well-known/apollo/server-health`
![Health Check with GraphQL][8]
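监控任务可以轮询该端点,把任何非 200 的响应视为不健康。下面是一个极简的 Node.js 探测示例(仅为示意:主机和端口是占位符,内置的 `fetch` 需要 Node 18 或更高版本):
```
// Minimal probe for Apollo Server's default health endpoint.
const HEALTH_URL = 'http://localhost:4000/.well-known/apollo/server-health';

async function checkHealth() {
  try {
    const res = await fetch(HEALTH_URL);
    console.log(res.status === 200 ? 'healthy' : `unhealthy (HTTP ${res.status})`);
  } catch (err) {
    console.log(`unreachable: ${err.message}`);
  }
}

checkHealth();
```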
### 高级健康检查
在某些情况下,基本的健康检查可能不足以确保系统的完整性。例如,紧密耦合的系统需要更多的业务逻辑来确保系统的健康。
Apollo GraphQL 在定义服务器的同时,通过声明一个 `onHealthCheck` 函数来有效地管理这种情况。
```
/* Defining the Apollo Server */
const apollo = new ApolloServer({
  playground: process.env.NODE_ENV !== 'production',
  typeDefs: gqlSchema,
  resolvers: resolver,
  onHealthCheck: () => {
    return new Promise((resolve, reject) => {
      // Replace the `true` in this conditional with more specific checks!
      if (true) {
        resolve();
      } else {
        reject();
      }
    });
  }
});
```
当你定义一个 `onHealthCheck` 方法时,它返回一个 promise如果服务器准备好了它就会返回 _resolve_,如果有错误,它就会返回 _reject_
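实际使用中,这个 promise 正是放置真实依赖检查的地方。下面是一种可能的写法(其中 `pingDatabase()` 只是一个假设的占位函数,代表你的服务真正需要的连接检查):
```
/* A sketch of a more meaningful onHealthCheck */
async function pingDatabase() {
  return true; // replace with your real driver's ping or connectivity call
}

const apolloWithChecks = new ApolloServer({
  typeDefs: gqlSchema,
  resolvers: resolver,
  onHealthCheck: async () => {
    const ok = await pingDatabase().catch(() => false);
    if (!ok) {
      // Throwing rejects the promise, and the endpoint reports unhealthy.
      throw new Error('database unreachable');
    }
  }
});
```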
GraphQL 让监控 API 变得更容易。此外,在你的服务器基础架构中使用它可以使代码变得可扩展。如果你想尝试采用 GraphQL 作为你的新基础设施定义,请参见我的 GitHub 仓库中的[示例代码和配置][9]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/8/microservices-graphql
作者:[Rigin Oommen][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/riginoommen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
[2]: https://opensource.com/resources/what-are-microservices
[3]: https://opensource.com/article/19/6/what-is-graphql
[4]: https://opensource.com/sites/default/files/uploads/graphql-microservices.png (GraphQL in Microservices)
[5]: https://opensource.com/sites/default/files/uploads/servicehealth.png (Service health in microservices)
[6]: https://www.selenium.dev/
[7]: https://graphql.org/
[8]: https://opensource.com/sites/default/files/uploads/healthcheck.png (Health Check with GraphQL)
[9]: https://github.com/riginoommen/example-graphql

View File

@@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: (koolape)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Soon You'll be Able to Convert Any Website into Desktop Application in Linux Mint)
[#]: via: (https://itsfoss.com/web-app-manager-linux-mint/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
很快你就能在 Linux Mint 上将任何网站转化为桌面应用程序了
======
设想一下,你正忙于一项任务且需要在浏览器中打开超过 20 个页面,大多数页面都和工作有关。
还有一些是 YouTube 或其他音乐流媒体网站。
完成任务后需要关闭浏览器,但这会将包括工作相关和听音乐的网页等所有网页一起关掉。
然后你需要再次打开这些网页并登录账号以回到原来的进度。
这看起来令人沮丧对吧Linux Mint 理解你的烦恼,因此有了下面这个新项目帮助你应对这些问题。
![][1]
在[最近的文章][2]中Linux Mint 团队透露正在开发一个名叫「网页应用管理器」Web App Manager的新工具。
该工具让你能够像使用桌面程序那样以独立窗口运行你最喜爱的网页。
在将网页添加为网页应用程序的时候,你可以给这个程序取名字并添加图标。也可以将它添加到不同的分类,以便在菜单中搜索这个应用。
还可以指定用什么浏览器打开应用。也有启用/禁用导航栏的选项。
![在 Linux Mint 中添加网页应用程序][3]
例如,将 YouTube 添加为网页应用程序:
![Linux Mint 中的网页应用程序][4]
运行这个 YouTube 应用YouTube 将在你选择的浏览器中以独立窗口运行。
![YouTube 网页应用程序][5]
网页应用程序拥有常规桌面应用程序有的大多数功能特点,如使用 Alt+Tab 切换。
![Web App in Alt Tab Switcher][6]
甚至还能将应用固定到面板/任务栏方便打开。
![YouTube Web App added to the panel][7]
该管理器目前处于 beta 开发阶段,但使用起来已经相对比较稳定了。不过目前还没有面向大众发布,因为翻译工作还未完成。
如果你在使用 Linux Mint 并想尝试这个工具,可在下方下载 beta 版本的 deb 文件:
[下载 beta 版][8]
### 网页应用程序在桌面环境的 Linux 中不是什么新事物
网页应用程序不是由 Linux Mint 独创的,而是早在大约 10 年前就有了。
你可能还记得 Ubuntu 在 2013-2014 年向 Unity 桌面中加入了网页应用程序这项特性。
轻量级 Linux 发行版 PeppermintOS 自 2010 年起就将 ICE网页应用程序工具列为其主要特色之一。实际上Linux Mint 的网页应用程序管理器也是基于 [ICE][9] 的。
我个人喜欢网页应用程序这个特性,因为它确实有用。
你怎么看 Linux Mint 中的网页应用程序呢,这是你期待使用的吗?欢迎在下方评论。
--------------------------------------------------------------------------------
via: https://itsfoss.com/web-app-manager-linux-mint/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[koolape](https://github.com/koolape)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/Web-App-Manager-linux-mint.jpg?resize=800%2C450&ssl=1
[2]: https://blog.linuxmint.com/?p=3960
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/Add-web-app-in-Linux-Mint.png?resize=600%2C489&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/Web-Apps-in-Linux-Mint.png?resize=600%2C489&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/youtube-web-app-linux-mint.jpg?resize=800%2C611&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/web-app-alt-tab-switcher.jpg?resize=721%2C576&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/panel.jpg?resize=470%2C246&ssl=1
[8]: http://www.linuxmint.com/tmp/blog/3960/webapp-manager_1.0.3_all.deb
[9]: https://github.com/peppermintos/ice