Mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-01-25 23:11:02 +08:00
Merge remote-tracking branch 'LCTT/master'
Commit 3a286942e5
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11506-1.html)
[#]: subject: (How to use IoT devices to keep children safe?)
[#]: via: (https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/)
[#]: author: (Andrew Carroll https://opensourceforu.com/author/andrew-carroll/)

如何使用物联网设备来确保儿童安全?
======

![][1]
IoT (物联网)设备正在迅速改变我们的生活。这些设备无处不在,从我们的家庭到其它行业。根据一些预测数据,到 2020 年,将会有 100 亿个 IoT 设备。到 2025 年,该数量将增长到 220 亿。目前,物联网已经在很多领域得到了应用,包括智能家居、工业生产过程、农业甚至医疗保健领域。伴随着如此广泛的应用,物联网显然已经成为近年来的热门话题之一。

多种因素促成了物联网设备在多个领域的爆炸式增长,其中包括低成本处理器和无线连接的可用性,以及开源平台带来的信息交流对物联网领域创新的推动。由于大量资源是开源的,与传统的应用程序开发相比,物联网设备的开发呈指数级增长。

在解释如何使用物联网设备来保护儿童之前,必须对物联网技术有基本的了解。
### IoT 设备是什么?

IoT 设备是指那些在没有人类参与的情况下彼此之间可以通信的设备。因此,许多专家并不将智能手机和计算机视为物联网设备。此外,物联网设备必须能够收集数据并且能将收集到的数据传送到其他设备或云端进行处理。

然而,在某些领域中,我们需要探索物联网的潜力。儿童往往是脆弱的,他们很容易成为犯罪分子和其他蓄意伤害者的目标。无论在物理世界还是数字世界中,儿童都很容易面临犯罪的威胁。父母不可能始终亲自到场保护孩子,这正是需要监控工具的原因。

除了适用于儿童的可穿戴设备外,还有许多父母监控应用程序,例如 Xnspy,可实时监控儿童并提供信息的实时更新。这些工具可确保儿童安全。可穿戴设备确保儿童人身安全,而家长监控应用可确保儿童的上网安全。

由于越来越多的孩子把时间花在智能手机上,毫不意外,他们也成了诈骗分子的主要目标。此外,由于恋童癖、网络跟踪和其他犯罪在网络上的盛行,儿童也有可能成为网络欺凌的目标。

这些解决方案够吗?我们需要找到物联网解决方案,以确保孩子们在网上和线下的安全。在当代,我们如何确保孩子的安全?我们需要提出创新的解决方案。物联网可以帮助保护孩子在学校和家里的安全。
### 物联网的潜力

物联网设备提供的好处很多。举例来说,父母可以远程监控自己的孩子,而又不会显得太霸道。因此,儿童在拥有安全环境的同时也会有空间和自由让自己变得独立。

而且,父母也不必再为孩子的安全而担忧。物联网设备可以提供 7x24 小时的信息更新。像 Xnspy 之类的监控应用程序在提供有关孩子的智能手机活动信息方面更进了一步。随着物联网设备变得越来越复杂,拥有更长使用寿命的电池只是一个时间问题。诸如位置跟踪器之类的物联网设备可以提供有关孩子下落的准确详细信息,所以父母不必担心。

虽然可穿戴设备已经非常好了,但在确保儿童安全方面,这些通常还远远不够。因此,要为儿童提供安全的环境,我们还需要其他方法。许多事件表明,儿童在学校比其他任何公共场所都容易受到攻击。因此,学校需要采取安全措施,以确保儿童和教师的安全。在这一点上,物联网设备可用于检测潜在威胁并采取必要的措施来防止攻击。威胁检测系统包括摄像头。系统一旦检测到威胁,便可以通知当局,如一些执法机构和医院。智能锁等设备可用于封锁学校(包括教室),来保护儿童。除此之外,还可以告知父母其孩子的安全状况,并在出现威胁时立即收到警报。这将需要实施无线技术,例如 Wi-Fi 和传感器。因此,学校需要制定专门用于保障教室安全的预算。

智能家居可以实现拍手关灯,也可以让你的家庭助手帮你关灯。同样,物联网设备也可用在屋内来保护儿童。在家里,物联网设备(例如摄像头)为父母在照顾孩子时提供 100% 的可见性。当父母不在家里时,可以使用摄像头和其他传感器检测是否发生了可疑活动。其他设备(例如连接到这些传感器的智能锁)可以锁门和窗,以确保孩子们的安全。
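上面描述的"传感器检测到可疑活动就联动智能锁并通知家长"的逻辑,可以用一小段代码来示意。下面是一个假设性的极简 Python 草图(`SmartHome`、`on_sensor_event` 等名称均为本文虚构,并非任何真实设备的 API):

```python
# 假设性示例:模拟"离家模式下检测到可疑活动时,锁门并生成家长告警"的规则逻辑。
from dataclasses import dataclass, field

@dataclass
class SmartHome:
    away_mode: bool = False          # 父母是否不在家
    doors_locked: bool = False
    alerts: list = field(default_factory=list)

    def on_sensor_event(self, event: str) -> None:
        # 仅在"离家模式"下,才把可疑活动升级为告警并锁门
        if self.away_mode and event == "motion_detected":
            self.doors_locked = True
            self.alerts.append("可疑活动:已锁门并通知家长")

home = SmartHome(away_mode=True)
home.on_sensor_event("motion_detected")
print(home.doors_locked, home.alerts)
```

在真实部署中,这类规则通常运行在家庭网关或云端,并通过 MQTT 等协议与传感器和智能锁通信。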
同样,可以引入许多物联网解决方案来确保孩子的安全。

### 有多好就有多坏
物联网设备中的传感器会创建大量数据。数据的安全性是至关重要的一个因素。收集的有关孩子的数据如果落入不法分子手中会存在危险。因此,需要采取预防措施。IoT 设备中泄露的任何数据都可用于确定行为模式。因此,必须对不侵犯用户隐私的安全物联网解决方案投入资金。

IoT 设备通常连接到 Wi-Fi,用于设备之间传输数据。传输未加密数据的不安全网络会带来风险:这样的网络很容易被窃听,黑客可以利用这样的接入点入侵系统,还可以将恶意软件引入系统,使系统变得脆弱、易受攻击。此外,对设备和公共网络(例如学校的网络)的网络攻击可能导致数据泄露和私有数据被盗用。因此,在实施用于保护儿童的物联网解决方案时,必须有一套保护网络和物联网设备的总体方案。

在利用物联网设备保护儿童校内外安全方面,创新潜力尚待发掘。我们需要付出更多努力来保护物联网设备所连接的网络的安全。此外,物联网设备生成的数据可能落入不法分子手中,造成更多麻烦。因此,这是物联网安全至关重要的一个领域。
--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/

作者:[Andrew Carroll][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensourceforu.com/author/andrew-carroll/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?resize=696%2C507&ssl=1 (Visual Internet of things_EB May18)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 ways developers can have a say in what agile looks like)
[#]: via: (https://opensource.com/article/19/10/ways-developers-what-agile)
[#]: author: (Clement Verna https://opensource.com/users/cverna)

4 ways developers can have a say in what agile looks like
======
How agile is implemented—versus imposed—plays a big role in what developers gain from it.
![Person on top of a mountain, arm raise][1]
Agile has become the default way of developing software; sometimes, it seems like every organization is doing (or wants to do) agile. But, instead of trying to change their culture to become agile, many companies try to impose frameworks like scrum onto developers, looking for a magic recipe to increase productivity. This has unfortunately created some bad experiences and led developers to feel like agile is something they would rather avoid. This is a shame because, when it's done correctly, developers and their projects benefit from becoming involved in it. Here are four reasons why.

### Agile, back to the basics

The first way for developers to be unafraid of agile is to go back to its basics and remember what agile is really about. Many people see agile as a synonym for scrum, kanban, story points, or daily stand-ups. While these are important parts of the [agile umbrella][2], this perception takes people away from the original spirit of agile.

Going back to agile's origins means looking at the [Agile Manifesto][3], and what I believe is its most important part, the introduction:

> We are uncovering better ways of developing software by doing it and helping others do it.

I'm a believer in continuous improvement, and this sentence resonates with me. It emphasizes the importance of having a [growth mindset][4] while being a part of an agile team. In fact, I think this outlook is a solution to most of the problems a team may face when adopting agile.

Scrum is not working for your team? Right, let's discover a better way of organizing it. You are working in a distributed team across multiple timezones, and having a daily standup is not ideal? No problem, let's find a better way to communicate and share information.

Agile is all about flexibility and being able to adapt to change, so be open-minded and creative to discover better ways of collaborating and developing software.
### Agile metrics as a way to improve, not control

Indeed, agile is about adopting and embracing change. Metrics play an important part in this process, as they help the team determine if it is heading in the right direction. As an agile developer, you want metrics to provide the data your team needs to support its decisions, including whether it should change directions. This process of learning from facts and experience is known as empiricism, and it is well-illustrated by the three pillars of agile.

![Three pillars of agile][5]

Unfortunately, in most of the teams I've worked with, metrics were used by project management as an indicator of the team's performance, which caused people on the team to be afraid of implementing changes or to cut corners to meet expectations.

In order to avoid those outcomes, developers need to be in control of their team's metrics. They need to know exactly what is measured and, most importantly, why it's being measured. Once the team has a good understanding of those factors, it will be easier for them to try new practices and measure their impact.
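As a sketch of what "owning your metrics" can look like in practice, here is a minimal, hypothetical Python example (the issue records and field names are invented, not any real tracker's API) that computes a team's median cycle time directly from start/done timestamps:

```python
# Hypothetical issue records; real trackers expose similar timestamps via their APIs.
from datetime import datetime
from statistics import median

issues = [
    {"started": "2019-10-01", "done": "2019-10-04"},
    {"started": "2019-10-02", "done": "2019-10-10"},
    {"started": "2019-10-05", "done": "2019-10-07"},
]

def cycle_time_days(issue):
    """Days between work starting and work finishing on one issue."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(issue["done"], fmt)
            - datetime.strptime(issue["started"], fmt)).days

times = [cycle_time_days(i) for i in issues]
print(median(times))  # the median is robust to one unusually slow issue
```

Because the team computes the number itself, it knows exactly what is measured and why, and can watch the trend change as it experiments with new practices.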
Rather than using metrics to measure your team's performance, engage with management to find a better way to define what success means to your team.

### Developer power is in the team

As a member of an agile team, you have more power than you think to help build a team that has a great impact. The [Toyota Production System][6] recognized this long ago. Indeed, Toyota considered that employees, not processes, were the key to building great products.

This means that, even if a team uses the best process possible, if the people on the team are not comfortable working with each other, there is a high chance that the team will fail. As a developer, invest time to build trust inside your team and to understand what motivates its members.

If you are curious about how to do this, I recommend reading Alexis Monville's book [_Changing Your Team from the Inside_][7].

### Making developer work visible
A big part of any agile methodology is to make information and work visible; this is often referred to as an [information radiator][8]. In his book [_Team of Teams_][9], Gen. Stanley McChrystal explains how the US Army had to transform itself from an organization optimized for productivity into one optimized to adapt. What we learn from his book is that the world in which we live has changed. The problem of becoming more productive was mostly solved at the end of the 20th century, and the challenge that companies now face is how to adapt to a world in constant evolution.

![A lot of sticky notes on a whiteboard][10]

I particularly like Gen. McChrystal's explanation of how he created a powerful information radiator. When he took charge of the [Joint Special Operations Command][11], Gen. McChrystal began holding a daily call with his high commanders to discuss and plan future operations. He soon realized that this was not optimal and instead started running 90-minute briefings every morning for 7,000 people around the world. This allowed every task force to acquire the knowledge necessary to accomplish their missions and made them aware of other task forces' assignments and situations. Gen. McChrystal refers to this as "shared consciousness."

So, as a developer, how can you help build a shared consciousness in your team? Start by simply sharing what you are working on and/or plan to work on and get curious about what your colleagues are doing.

* * *

If you're using agile in your development organization, what do you think are its main benefits? And if you aren't using agile, what barriers are holding your team back? Please share your thoughts in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/ways-developers-what-agile

作者:[Clement Verna][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/cverna
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/developer_mountain_cloud_top_strong_win.jpg?itok=axK3EX-q (Person on top of a mountain, arm raise)
[2]: https://confluence.huit.harvard.edu/display/WGAgile/2014/07/01/The+Agile+Umbrella
[3]: https://agilemanifesto.org/
[4]: https://www.edglossary.org/growth-mindset/
[5]: https://opensource.com/sites/default/files/uploads/3pillarsofagile.png (Three pillars of agile)
[6]: https://en.wikipedia.org/wiki/Toyota_Production_System#Respect_for_people
[7]: https://leanpub.com/changing-your-team-from-the-inside#packages
[8]: https://www.agilealliance.org/glossary/information-radiators/
[9]: https://www.mcchrystalgroup.com/insights-2/teamofteams/
[10]: https://opensource.com/sites/default/files/uploads/stickynotes.jpg (A lot of sticky notes on a whiteboard)
[11]: https://en.wikipedia.org/wiki/Joint_Special_Operations_Command
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Gartner crystal ball: Looking beyond 2020 at the top IT-changing technologies)
[#]: via: (https://www.networkworld.com/article/3447759/gartner-looks-beyond-2020-to-foretell-the-top-it-changing-technologies.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Gartner crystal ball: Looking beyond 2020 at the top IT-changing technologies
======
Gartner’s top strategic predictions for 2020 and beyond are heavily weighted toward the human side of technology
[Thinkstock][1]
ORLANDO – Forecasting long-range IT technology trends is a little like herding cats – things can get a little crazy.

But Gartner analysts have specialized in looking forward, boasting an 80 percent accuracy rate over the years, Daryl Plummer, distinguished vice president and Gartner Fellow, told the IT crowd at this year’s [IT Symposium/XPO][2]. Some of those successful predictions have included the rise of automation, robotics, AI technology and other ongoing trends.

[Now see how AI can boost data-center availability and efficiency][3]

Like some of the [other predictions][4] Gartner has made at this event, this year’s package of predictions for 2020 and beyond is heavily weighted toward the human side of technology rather than technology itself.

“Beyond offering insights into some of the most critical areas of technology evolution, this year’s predictions help us move beyond thinking about mere notions of technology adoption and draw us more deeply into issues surrounding what it means to be human in the digital world,” Plummer said.

The list this year goes like this:
**By 2023, the number of people with disabilities employed will triple due to AI and emerging technologies, reducing barriers to access.**

Technology is going to make it easier for people with disabilities to connect to the business world. “People with disabilities constitute an untapped pool of critically skilled talent,” Plummer said.

“[Artificial intelligence (AI)][6], augmented reality (AR), virtual reality (VR) and other [emerging technologies][7] have made work more accessible for employees with disabilities. For example, select restaurants are starting to pilot AI robotics technology that enables paralyzed employees to control robotic waiters remotely. Organizations that actively employ people with disabilities will not only cultivate goodwill from their communities, but also see 89 percent higher retention rates, a 72 percent increase in employee productivity, and a 29 percent increase in profitability,” Plummer said.

**By 2024, AI identification of emotions will influence more than half of the online advertisements you see.**

Computer vision, which allows AI to identify and interpret physical environments, is one of the key technologies used for emotion recognition and has been ranked by Gartner as one of the most important technologies in the next three to five years. [Artificial emotional intelligence (AEI)][8] is the next frontier for AI development, Plummer said. Twenty-eight percent of marketers ranked AI and machine learning (ML) among the top three technologies that will drive future marketing impact, and 87 percent of marketing organizations are currently pursuing some level of personalization, according to Gartner. By 2022, 10 percent of personal devices will have emotion AI capabilities, Gartner predicted.

“AI makes it possible for both digital and physical experiences to become hyper personalized, beyond clicks and browsing history but actually on how customers _feel_ in a specific purchasing moment. With the promise to measure and engage consumers based on something once thought to be intangible, this area of ‘empathetic marketing’ holds tremendous value for both brands and consumers when used within the proper [privacy][9] boundaries,” said Plummer.

**Through 2023, 30% of IT organizations will extend BYOD policies with “bring your own enhancement” (BYOE) to address augmented humans in the workforce.**

The concept of augmented workers has gained traction in social media conversations in 2019 due to advancements in wearable technology. Wearables are driving workplace productivity and safety across most verticals, including automotive, oil and gas, retail and healthcare.

Wearables are only one example of physical augmentations available today, but humans will look to additional physical augmentations that will enhance their personal lives and help do their jobs. Gartner defines human augmentation as creating cognitive and physical improvements as an integral part of the human body. An example is using active control systems to create limb prosthetics with characteristics that can exceed the highest natural human performance.

“IT leaders certainly see these technologies as impactful, but it is the consumers’ desire to physically enhance themselves that will drive the adoption of these technologies first,” Plummer said. “Enterprises need to balance the control of these devices in their enterprises while also enabling users to use them for the benefit of the organization.”
**By 2025, 50% of people with a smartphone but without a bank account will use a mobile-accessible cryptocurrency account.**

Currently 30 percent of people have no bank account and 71 percent will subscribe to mobile services by 2025. Major online marketplaces and social media platforms will start supporting cryptocurrency payments by the end of next year. By 2022, Facebook, Uber, Airbnb, eBay, PayPal and other digital e-commerce companies will support over 750 million customers, Gartner predicts.

At least half the globe’s citizens who do not use a bank account will instead use these new mobile-enabled cryptocurrency account services offered by global digital platforms by 2025, Gartner said.

**By 2023, a self-regulating association for oversight of AI and machine-learning designers will be established in at least four of the G7 countries.**

By 2021, multiple incidents involving non-trivial AI-produced harm to hundreds or thousands of individuals can be expected, Gartner said. Public demand for protection from the consequences of malfunctioning algorithms will in turn produce pressure to assign legal liability for the harmful consequences of algorithm failure. The immediate impact of regulation of process will be to increase cycle times for AI and ML algorithm development and deployment. Enterprises can also expect to spend more for training and certification for practitioners and documentation of processes, as well as higher salaries for certified personnel.

“Regulation of products as complex as AI and ML algorithms is no easy task. Consequences of algorithm failures at scale that occur within major societal functions are becoming more visible. For instance, AI-related failures in autonomous vehicles and aircraft have already killed people and attracted widespread attention in recent months,” said Plummer.

**By 2023, 40% of professional workers will orchestrate their business application experiences and capabilities like they do their music streaming experience.**

The human desire to have a work environment that is similar to their personal environment continues to rise — one where they can assemble their own applications to meet job and personal requirements in a [self-service fashion][10]. The consumerization of technology and introduction of new applications have elevated the expectations of employees as to what is possible from their business applications. Gartner says through 2020, the top 10 enterprise-application vendors will expose over 90 percent of their application capabilities through APIs.

“Applications used to define our jobs. Nowadays, we are seeing organizations designing application experiences around the employee. For example, mobile and cloud technologies are freeing many workers from coming into an office and instead supporting a work-anywhere environment, outpacing traditional application business models,” Plummer said. “Similar to how humans customize their streaming experience, they can increasingly customize and engage with new application experiences.”

**By 2023, up to 30 percent of world news and video content will be authenticated as real by blockchain, countering deepfake technology.**

Fake news represents deliberate disinformation, such as propaganda that is presented to viewers as real news. Its rapid proliferation in recent years can be attributed to bot-controlled accounts on social media, attracting more viewers than authentic news and manipulating human intake of information, Plummer said. Fake content, exacerbated by AI, can pose an existential threat to an organization.

By 2021, at least 10 major news organizations will use [blockchain][11] to track and prove the authenticity of their published content to readers and consumers. Likewise, governments, technology giants and other entities are fighting back through industry groups and proposed regulations. “The IT organization must work with content-production teams to establish and track the origin of enterprise-generated content using blockchain technology,” Plummer said.
**On average, through 2021, digital transformation initiatives will take large traditional enterprises twice as long and cost twice as much as anticipated.**
Business leaders’ expectations for revenue growth are unlikely to be realized from digital optimization strategies, due to the cost of technology modernization and the unanticipated costs of simplifying operational interdependencies. Such operational complexity also impedes the pace of change along with the degree of innovation and adaptability required to operate as a digital business.

“In most traditional organizations, the gap between digital ambition and reality is large,” Plummer said. “We expect CIOs’ budget allocation for IT modernization to grow 7 percent year-over-year through 2021 to try to close that gap.”

**By 2023, individual activities will be tracked digitally by an “Internet of Behavior” to influence benefit and service eligibility for 40% of people worldwide.**

Through facial recognition, location tracking and big data, organizations are starting to monitor individual behavior and link that behavior to other digital actions, like buying a train ticket. The Internet of Things (IoT) – where physical things are directed to do a certain thing based on a set of observed operating parameters relative to a desired set of operating parameters — is now being extended to people, known as the Internet of Behavior (IoB). Through 2020 watch for examples of usage-based and behaviorally-based business models to expand into health insurance or financial services, Plummer said.

“With IoB, value judgements are applied to behavioral events to create a desired state of behavior,” Plummer said. “What level of tracking will we accept? Will it be hard to get life insurance if your Fitbit tracker doesn’t see 10,000 steps a day?”

“Over the long term, it is likely that almost everyone living in a modern society will be exposed to some form of IoB that melds with cultural and legal norms of our existing predigital societies,” Plummer said.
**By 2024, the World Health Organization will identify online shopping as an addictive disorder, as millions abuse digital commerce and encounter financial stress.**

Consumer spending via digital commerce platforms will continue to grow over 10 percent year-over-year through 2022. In addition, watch for an increased number of digital commerce orders predicted by, and initiated by, AI.

The ease of online shopping will cause financial stress for millions of people, as online retailers increasingly use AI and personalization to effectively target consumers and prompt them to spend income that they do not have. The resulting debt and personal bankruptcies will cause depression and other health concerns caused by stress, which is capturing the attention of the WHO.

“The side effects of technology that promote addictive behavior are not exclusive to consumers. CIOs must also consider the possibility of lost productivity among employees who put work aside for online shopping and other digital distractions. In addition, regulations in support of responsible online retail practices might force companies to provide warnings to prospective customers who are ready to make online purchases, similar to casinos or cigarette companies,” Plummer said.

Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3447759/gartner-looks-beyond-2020-to-foretell-the-top-it-changing-technologies.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: http://thinkstockphotos.com
[2]: https://www.networkworld.com/article/3447397/gartner-10-infrastructure-trends-you-need-to-know.html
[3]: https://www.networkworld.com/article/3274654/ai-boosts-data-center-availability-efficiency.html
[4]: https://www.networkworld.com/article/3447401/gartner-top-10-strategic-technology-trends-for-2020.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
[6]: https://www.gartner.com/en/newsroom/press-releases/2019-07-15-gartner-survey-reveals-leading-organizations-expect-t
[7]: https://www.gartner.com/en/newsroom/press-releases/2018-08-20-gartner-identifies-five-emerging-technology-trends-that-will-blur-the-lines-between-human-and-machine
[8]: https://www.gartner.com/smarterwithgartner/13-surprising-uses-for-emotion-ai-technology/
[9]: https://www.gartner.com/smarterwithgartner/how-to-balance-personalization-with-data-privacy/
[10]: https://www.gartner.com/en/newsroom/press-releases/2019-05-28-gartner-says-the-future-of-self-service-is-customer-l
[11]: https://www.gartner.com/smarterwithgartner/the-cios-guide-to-blockchain/
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My Linux Story: Why introduce people to the Raspberry Pi)
[#]: via: (https://opensource.com/article/19/10/new-linux-open-source-users)
[#]: author: (RolandBerberich https://opensource.com/users/rolandberberich)

My Linux Story: Why introduce people to the Raspberry Pi
======
Learn why I consider the Raspberry Pi one of our best opportunities to invite more people to the open source community.
![Team of people around the world][1]
My first steps into Linux happened around 2003 or 2004 when I was a student. The experiment lasted an hour or two. Being used to Windows, I was confused and quickly frustrated at having to learn the most basic stuff again.

By 2018, I was curious enough to try Ubuntu before settling on Fedora 29 on an unused laptop, and to get a Pi3B+ and Pi4, both currently running Raspbian. What changed? Well, first of all, Linux has certainly changed. Also, by that time I was not only curious but more patient than my younger self. Reflecting on this experience, I reckon that the patience to overcome the perceived usability gap is the key to Linux satisfaction. Just one year later, I can confidently say I am productive in both Windows and (my) Linux environments.

This experience has brought up two questions. First, why are more people not using Linux (or other open source software)? Second, what can the savvier among us do to improve these numbers? Of course, these questions assume the open source world has advantages over the more common alternatives, and that some of us would go to the ends of the Earth to convince the non-believers.

Believe it or not, this last issue is one of the problems. I am far from a Linux pro; I would rather describe myself as a "competent user" able to solve a few issues by myself. Admittedly, internet search engines are my friend, but step by step I accumulated the expertise and confidence to work outside the omnipresent Windows workspace.

On the other hand, how technophile is the standard user? Probably not at all. The internet is full of "have you switched it on" examples to illustrate the incompetence of users. Now, imagine someone suggests you are incompetent and then offers (unsolicited) advice on how to improve. How well would you take that, especially if you consider yourself "operational" (meaning that you have no problems at work or surfing the web)?

### Introduce them to the Raspberry Pi
Overcoming this initial barrier is crucial, and we cannot do so with a superiority complex. Personally, I consider the Raspberry Pi one of our best opportunities to invite more people to the open source community. The Raspberry Pi’s simplicity combined with its versatility and affordability could entice more people to get and use one.

I recently upgraded my Pi3B+ to the new Pi4B, and with the exception of my usual reference manager, this unit fully replaces my (Windows) desktop. My next step is to use a Pi3B+ as a media center and gaming console. The point is that if we want people to use open source software, we need to make it accessible for everyday tasks such as the above. Realizing it isn't that difficult will do more for user numbers than aloof superiority from open source advocates, or Linux clubs at university.

It is one thing to keep preaching the many advantages of open source, but a more convincing experience can only be a personal one. Obviously, people will realize the cost advantage of, say, a Pi4 running Linux over a standard supermarket Windows PC. And humans are curious. An affordable gadget where mistakes are easy to correct (clone your card, it is not hard) will entice more and more users to fiddle around and get first-hand IT knowledge. Maybe none of us will be an expert (I count myself among this crowd), but the least that will happen is wider use of open source software, with users realizing that it is a viable alternative.
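Cloning the card really is not hard. A sketch in shell: the first (commented-out) line shows the usual `dd` backup of an SD card on Linux, where `/dev/mmcblk0` is only an example device name (check yours with `lsblk` before running anything destructive); the rest demonstrates the same block-copy-and-verify idea safely on an ordinary file:

```shell
# Typical SD-card backup on Linux (device name is an example -- verify with lsblk first):
# sudo dd if=/dev/mmcblk0 of=pi-backup.img bs=4M status=progress

# The same copy-and-verify idea, demonstrated safely on an ordinary file:
printf 'raspbian' > card.img              # stand-in for the card's contents
dd if=card.img of=backup.img bs=1M 2>/dev/null
cmp card.img backup.img && echo "clone verified"
```

Restoring is the same command with `if` and `of` swapped, so a botched experiment costs only the time it takes to write the image back.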
With curiosity rampant, a Pi club at school or university could make younger workers competent in Linux. Some of these workers perhaps will bring their SD card to work, plug it into any Raspberry Pi provided, and start being productive. Imagine the potential savings in regards to IT. Imagine the flexibility of choosing any space in the office and having your own work environment with you.

Wider use of open source solutions will not only add flexibility. Since attacks mainly target Windows environments, your systems will be somewhat safer, and with more demand, more resources will pour into further development. Consequently, this trend will force proprietary software developers to up their game, which is also good for users, of course.

In summary, my point is to reflect as a community how we can improve our resource base by following my journey. We can only do so by starting early, accessibly, and affordably, and by showing that open source is a real alternative for any professional application on a daily basis.

There are lots of non-code ways to contribute to open source: Here are three alternatives.
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/new-linux-open-source-users
|
||||
|
||||
作者:[RolandBerberich][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rolandberberich
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_global_people_gis_location.png?itok=Rl2IKo12 (Team of people around the world)
|
@ -0,0 +1,124 @@
|
||||
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The evolution to Secure Access Service Edge (SASE) is being driven by necessity)
[#]: via: (https://www.networkworld.com/article/3448276/the-evolution-to-secure-access-service-edge-sase-is-being-driven-by-necessity.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)

The evolution to Secure Access Service Edge (SASE) is being driven by necessity
======

The users and devices are everywhere. As a result, secure access services also need to be everywhere.

MF3d / Getty Images
The WAN consists of network and security stacks, both of which have gone through several phases of evolution. Initially, we began with the router, then introduced WAN optimization, and then edge SD-WAN. From the security perspective, we have had a number of firewall generations leading to network security-as-a-service. Today, we have advanced to another stage, one better suited to the current environment: the convergence of network and security in the cloud.

For some, the network and security trends have been thought of in terms of silos. However, the new market category of secure access service edge (SASE) challenges this ideology and recommends a converged, cloud-delivered secure access service edge.

Gartner proposes that the future of the network and network security is in the cloud. This is similar to what [Cato Networks][1] has been offering for quite some time – the convergence of networking and security-as-a-service capabilities into a private, global cloud.

We all know that when anything new is introduced, there will be noise. It is therefore difficult to sift through the information, understand who is doing what, and decide whether SASE actually benefits your organization. That is the prime motive of this post. However, before we proceed, I have a question for you.

Will combining comprehensive WAN capabilities with comprehensive network security functions be the next evolution? In the following sections, I would like to discuss each of the previous network and security stages to help you answer that question. So, first, let’s begin with networking.

### The networking era

### The router

We started with the router at the WAN edge, configured with routing protocols. Routing protocols do not make decisions based on global information and are limited by routing-loop restrictions. This restricts the number of paths that application traffic can take.

For a redundant WAN design, we need complex BGP tuning of path attributes to load-balance between the border edges. Even then, those path attributes may not choose the best-performing path; by and large, the shortest path is not necessarily the best path.

**[ Now read [20 hot jobs ambitious IT pros should shoot for][3]. ]**

The WAN edge exhibited a rigid network topology that applications had to fit into. Security was provided by pushing traffic from one appliance to another. With the passage of time, we began to see the rise of real-time voice and video traffic, which are highly sensitive to latency and jitter. Hence, WAN optimization was a welcome feature.
### WAN optimization

Basic WAN optimization includes a range of TCP optimizations and basic in-line compression. Advanced WAN optimization includes deduplication, file-based caching, and protocol-specific optimizations. This helped manage latency-sensitive applications and applications where large amounts of data must be transferred across the WAN.
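As a rough illustration of why basic in-line compression pays off, repetitive application payloads (the file names here are invented for the demo) shrink dramatically under a generic compressor such as gzip:

```shell
# Repetitive payloads (think chatty application protocols) compress well,
# which is the effect basic in-line WAN compression exploits.
yes "GET /api/v1/status HTTP/1.1" | head -n 1000 > payload.txt
gzip -kf payload.txt
orig=$(wc -c < payload.txt)
comp=$(wc -c < payload.txt.gz)
echo "original: ${orig} bytes, compressed: ${comp} bytes"
```

Deduplication takes the same idea further by suppressing byte ranges the far end has already seen.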
However, deployment was complex. A WAN optimization physical appliance was needed at both ends of the connection and had to be used for all applications. At the time, it was an all-or-nothing approach; you couldn’t roll out WAN optimization per application. Besides, it had no effect on remote workers who were not located in the office.

Subsequently, SD-WAN started to appear in 2015. That year, I was consulting on an Azure migration and attempting to [create my own DIY SD-WAN][4] _[Disclaimer: the author works for Network Insight]_ with a protocol called Tina from Barracuda. Since I was facing some challenges, I welcomed the news of SD-WAN with open arms. For the first time, we had a decent, manageable level of abstraction in the WAN.

Deploying SD-WAN allows me to use all the available bandwidth. On the other hand, many WAN optimization techniques, such as data compression and deduplication, are no longer as useful.

But others, such as error correction and protocol and application acceleration, can still be useful and are widely used today. Regardless of how many links you bundle, you may still see latency and packet loss unless, of course, you privatize as much as possible.
### The security era

### Packet filters

Essentially, firewalls are classed into a number of generations. We started with the first-generation firewalls, which are just simple packet filters. These packet filters match on layer 2 to 4 headers. Since most of them do not match on the TCP SYN flag, it’s impossible to identify established sessions.

### Stateful devices

The second-generation firewalls are stateful devices. Stateful firewalls keep track of connection state, and return traffic is permitted if the state for that flow is in the connection table.
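As a concrete sketch of that behavior, a minimal Linux stateful ruleset in iptables syntax (an illustration under assumed defaults, not a production policy) permits inbound packets only when they match an entry in the connection table:

```
# Minimal stateful-firewall sketch (requires root; illustration only).
# Allow outbound traffic, creating connection-table state as it leaves:
iptables -A OUTPUT -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
# Permit inbound packets only if they belong to an existing flow:
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Everything else inbound is dropped (no state entry, no entry):
iptables -P INPUT DROP
```

A first-generation packet filter, by contrast, would have to open the return ports unconditionally because it has no connection table to consult.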
These stateful firewalls did not inspect at the application level. They could track the state of a session but could not go deeper into the application, for example, examining HTTP content and inspecting what users are doing.

### Next-generation firewalls

Just because a firewall is stateful doesn’t mean it can examine the application layer and determine what users are doing. Therefore, we moved to the third-generation firewalls.

These firewalls are often termed next-generation firewalls because they offer layer 7 inspection combined with other network device filtering functionalities. Examples include an application firewall using in-line deep packet inspection (DPI) or an intrusion prevention system (IPS).

Eventually, other niche devices started to emerge, called application-level firewalls. These devices are usually concerned only with HTTP traffic and are also known as web application firewalls (WAF). A WAF has functionality similar to a reverse web proxy, thereby terminating the HTTP session.

From my experience, when designing on-premises active/active firewalls with a redundant WAN, you must keep an eye on asymmetric traffic flows. If a firewall receives a packet for which it has no connection/state information, it will drop the packet.

Having an active/active design is complicated, whereas an active/passive design with an idle firewall is expensive. In any case, if you manage to piece together a redundant design, most firewall vendors will require you to manage security boxes instead of delivering policy-based security services.
### Network Security-as-a-Service

We then witnessed some major environmental changes. The introduction of the cloud and workload mobility changed the network and security paradigm completely. Workload fluidity and the movement of network state put pressure on traditional physical security devices.

Physical devices cannot follow workloads, and you can’t move a physical appliance around the network. There is also considerable operational overhead: we have to constantly maintain these devices, which becomes a race against time. For example, when a new patch is issued, there are test, stage, and deploy phases, all of which must be completed before the network is left vulnerable.

Network Security-as-a-Service was one solution to this problem. Network security functions, such as CASB, FWaaS, and cloud SWG, are now pushed to the cloud.

### Converging network and security

All the technologies described above have a time and a place. But these traditional network and network security architectures are becoming increasingly ineffective.

Now, we have more users, devices, applications, services, and data located outside of an enterprise than inside. Hence, with the emergence of edge and cloud-based services, we need a completely different type of architecture.

SASE proposes combining network-as-a-service capabilities (SD-WAN, WAN optimization, etc.) with Security-as-a-Service (SWG, CASB, FWaaS, etc.) to support dynamic secure access. It focuses extensively on the identity of the user and/or device, not the data center.

Policy can then be applied to the identity and context. Following this model inverts our thinking about network and security. To be fair, we have seen the adoption of some cloud-based services, including cloud-based SWG, content delivery networks (CDN), and WAFs. However, the overarching design stays the same – the data center is still the center of most enterprise networks and network security architectures. Yet the user/identity should be the new center of operations.

In the present era, we have dynamic secure access requirements. The users and devices are everywhere. As a result, secure access services need to be everywhere, distributed close to the systems and devices that require access. When pursuing a data-centric approach to cloud security, one must follow the data everywhere it goes.

**This article is published as part of the IDG Contributor Network. [Want to Join?][5]**

Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3448276/the-evolution-to-secure-access-service-edge-sase-is-being-driven-by-necessity.html

作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://www.catonetworks.com/blog/the-secure-access-service-edge-sase-as-described-in-gartners-hype-cycle-for-enterprise-networking-2019/
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[4]: https://network-insight.net/2015/07/azure-expressroute-cloud-ix-barracuda/
[5]: https://www.networkworld.com/contributor-network/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world
@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NICT successfully demos petabit-per-second network node)
[#]: via: (https://www.networkworld.com/article/3447857/nict-successfully-demos-petabit-per-second-network-node.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

NICT successfully demos petabit-per-second network node
======

One-petabit-per-second signals could send 8K resolution video to 10 million people simultaneously, researchers say. Japan’s national research agency says it has just successfully demoed a networked version of the technology.

Thinkstock

Petabit-class networks will support more than 100 times the capacity of existing networks, according to scientists who have just demonstrated an optical switching rig designed to handle the significant amounts of data that would pour through future petabit cables. One petabit is equal to a thousand terabits, or a million gigabits.
Researchers at the [National Institute of Information and Communications Technology][1] (NICT) in Japan routed signals with capacities ranging from 10 terabits per second to 1 petabit per second through their node. Those kinds of capacities, which could send 8K resolution video to 10 million people simultaneously, are going to be needed for future broadband video streaming and Internet of Things at scale, researchers believe. In-data-center applications and backhaul could also benefit.

“Petabit-class transmission requires petabit-class switching technologies to manage and reliably direct large amounts of data through complex networks,” NICT said in a [press release][2]. “Up to now, such technologies have been beyond reach, because the existing approaches are limited by complexity and/or performance.”

In this case, NICT used “large-scale” spatial optical switching with spatial-division multiplexing to build its node. Three types of multicore fiber, all with different capacities, were incorporated in order to represent different scenarios, such as metropolitan or regional networks. MEMS technology was incorporated, too. That’s equipment built on micro-electro-mechanical systems: a merging of micrometer-scale, nanoscale electronic devices with moving parts.

NICT says that in its testing, it was able not only to perform the one-petabit optical switching but also to run a redundant configuration at one petabit per second, to support network failures such as breaks in the fiber. It used 22-core fiber for both of those scenarios.

Additionally, NICT branched the one-petabit signals into other multicore optical fibers with miscellaneous capacities: 22-core fiber, 7-core fiber, and 3-mode fiber. Finally, running at a slower 10 terabits per second, it managed that lower-capacity signal within the capacious one-petabit-per-second network. NICT says that kind of application would be most suitable for regional networks, whereas the other scenarios apply best to metro networks.

Actual, straight petabit-class transmissions over fiber have been achieved before. In 2015, NICT was involved in the successful testing of a 2.15-petabit-per-second signal over a single 22-core fiber. Then it said, [in a press release][4], that it was making “progress to the practical realization of an over one petabit per second optical fiber.” (Typical [real-world limits][5] right now include 26.2 terabits, in an experiment, over a transatlantic cable, and an 800-gigabit fiber data-center solution Ciena is pitching.)

**More about SD-WAN**: [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][6] • [How to pick an off-site data-backup method][7] • [SD-Branch: What it is and why you’ll need it][8] • [What are the options for security SD-WAN?][9]

In 2018, NICT said in another [news release][10] that it had tested a petabit transmission over thinner 4-core, 3-mode fiber with a diameter of 0.16 mm (0.006 inches). There’s an advantage to getting the cladding diameter as small as possible: smaller-diameter fiber is less prone to mechanical stress damage, such as bending or pulling, NICT explains. It can also be connected less problematically if its diameter is similar to that of existing fiber cables already run.

“This is a major step forward towards practical petabit-class backbone networks,” NICT says of its current 22-core fiber, one-petabit-per-second switch-capacity experiments. These will end up being “backbone optical networks capable of supporting the increasing requirements of internet services,” it says.

Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3447857/nict-successfully-demos-petabit-per-second-network-node.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.nict.go.jp/en/about/index.html
[2]: https://www.nict.go.jp/en/press/2019/10/17-1.html
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.nict.go.jp/en/press/2015/10/13-1.html
[5]: https://www.networkworld.com/article/3374545/data-center-fiber-to-jump-to-800-gigabits-in-2019.html
[6]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[7]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[8]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[9]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html
[10]: https://www.nict.go.jp/en/press/2018/11/21-1.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world
@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why I made the switch from Mac to Linux)
[#]: via: (https://opensource.com/article/19/10/why-switch-mac-linux)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)

Why I made the switch from Mac to Linux
======

Thanks to a lot of open source developers, it's a lot easier to use Linux as your daily driver than ever before.

![Hands programming][1]
I have been a huge Mac fan and power user since I started in IT in 2004. But a few months ago—for several reasons—I made the commitment to shift to Linux as my daily driver. This isn't my first attempt at fully adopting Linux, but I'm finding it easier than ever. Here is what inspired me to switch.

### My first attempt at Linux on the desktop

I remember looking up at the projector, and it looking back at me. Neither of us understood why it wouldn't display. VGA cords were fully seated with no bent pins to be found. I tapped every key combination I could think of to signal my laptop that it was time to get over its stage fright.

I ran Linux in college as an experiment. My manager in the IT department was an advocate for the many flavors out there, and as I grew more confident in desktop support and writing scripts, I wanted to learn more about it. IT was far more interesting to me than my computer science degree program, which felt so abstract and theoretical—"who cares about binary search trees?" I thought—while our sysadmin team's work felt so tangible.

This story ends with me logging into a Windows workstation to get through my presentation for class, and it marks the end of my first attempt at Linux as my day-to-day OS. I admired its flexibility, but compatibility was lacking. I would occasionally write a script that SSHed into a box to run another script, but I stopped using Linux on a day-to-day basis.

### A fresh look at Linux compatibility

When I decided to give Linux another go a few months ago, I expected more of the same compatibility nightmare, but I couldn't have been more wrong.

Right after the installation process completed, I plugged in a USB-C hub to see what I'd gotten myself into. Everything worked immediately. The HDMI-connected extra-wide monitor popped up as a mirrored display to my laptop screen, and I easily adjusted it to be a second monitor. The USB-connected webcam, which is essential to my [work-from-home life][2], showed up as a video device with no trouble at all. Even my Mac charger, which was already plugged into the hub since I had been using a Mac, started to charge my very-not-Mac hardware.

My positive experience was probably related to some updates to USB-C support, which received some needed attention in 2018 to compete with other OS experiences. As [Phoronix explained][3]:

> "The USB Type-C interface offers an 'Alternate Mode' extension for non-USB signaling and the biggest user of this alternate mode in the specification is allowing DisplayPort support. Besides DP, another alternate mode is the Thunderbolt 3 support. The DisplayPort Alt Mode supports 4K and even 8Kx4K video output, including multi-channel audio.
>
> "While USB-C alternate modes and DisplayPort have been around for a while now and is common in the Windows space, the mainline Linux kernel hasn't supported this functionality. Fortunately, thanks to Intel, that is now changing."

Thinking beyond ports, a quick scroll through the [Linux on Laptops][4] hardware options shows a much more complete set of choices than I experienced in the early 2000s.

This has been a night-and-day difference from my first attempt at Linux adoption, and it's one I welcome with open arms.

### Breaking out of Apple's walled garden

Using Linux has added new friction to my daily workflow, and I love that it has.

My Mac workflow was seamless: hop on an iPad in the morning, write down some thoughts on what my day will look like, and start to read some articles in Safari; slide over to my iPhone to continue reading; then log into my MacBook, where years of fine-tuning have worked out how all these pieces connect. Keyboard shortcuts are built into my brain; user experiences are as they've mostly always been. It's wildly comfortable.

That comfort comes at a cost. I largely forgot how my environment functions, and I couldn't answer questions I wanted to answer. Did I customize some [PLIST files][5] to get that custom shortcut, or did I remember to check it into [my dotfiles][6]? How did I get so dependent on Safari and Chrome when Firefox has a much better mission? Or why, specifically, won't I use an Android-based phone instead of my i-things?

On that note, I've often thought about shifting to an Android-based phone, but I would lose the connection I have across all these devices and the little conveniences designed into the ecosystem. For instance, I wouldn't be able to type searches from my iPhone into the Apple TV or share a password via AirDrop with my other Apple-based friends. Those features are great benefits of homogeneous device environments, and it is remarkable engineering. That said, these conveniences come at the cost of feeling trapped by the ecosystem.

I love being curious about how devices work. I want to be able to explain the environmental configurations that make my systems fun or easy to use, but I also want to see what adding some friction does for my perspective. To paraphrase [Marcel Proust][7], "The real voyage of discovery consists not in seeking new lands but seeing with new eyes." My use of technology has been so convenient that I stopped being curious about how it all works. Linux gives me an opportunity to see with new eyes again.

### Inspired by you

All of the above is reason enough to explore Linux, but I have also been inspired by you. While all operating systems are welcome in the open source community, Opensource.com writers' and readers' joy for Linux is infectious. It inspired me to dive back in, and I'm enjoying the journey.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/why-switch-mac-linux

作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S (Hands programming)
[2]: https://opensource.com/article/19/8/rules-remote-work-sanity
[3]: https://www.phoronix.com/scan.php?page=news_item&px=Linux-USB-Type-C-Port-DP-Driver
[4]: https://www.linux-laptop.net/
[5]: https://fileinfo.com/extension/plist
[6]: https://opensource.com/article/19/3/move-your-dotfiles-version-control
[7]: https://www.age-of-the-sage.org/quotations/proust_having_seeing_with_new_eyes.html
@ -4,7 +4,7 @@
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to program with Bash: Logical operators and shell expansions)
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-2)
[#]: via: (https://opensource.com/article/19/10/programming-bash-logical-operators-shell-expansions)
[#]: author: (David Both https://opensource.com/users/dboth)

How to program with Bash: Logical operators and shell expansions
@ -482,7 +482,7 @@ The third article in this series will explore the use of loops for performing va

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/programming-bash-part-2
via: https://opensource.com/article/19/10/programming-bash-logical-operators-shell-expansions

作者:[David Both][a]
选题:[lujun9972][b]

@ -4,7 +4,7 @@
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to program with Bash: Loops)
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-3)
[#]: via: (https://opensource.com/article/19/10/programming-bash-loops)
[#]: author: (David Both https://opensource.com/users/dboth)

How to program with Bash: Loops
@ -334,7 +334,7 @@ Many years ago, despite being familiar with other shell languages and Perl, I ma

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/programming-bash-part-3
via: https://opensource.com/article/19/10/programming-bash-loops

作者:[David Both][a]
选题:[lujun9972][b]
@ -0,0 +1,250 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get sorted with sort at the command line)
[#]: via: (https://opensource.com/article/19/10/get-sorted-sort)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Get sorted with sort at the command line
======

Reorganize your data in a format that makes sense to you—right from the Linux, BSD, or Mac terminal—with the sort command.

![Coding on a computer][1]
If you've ever used a spreadsheet application, then you know that rows can be sorted by the contents of a column. For instance, if you have a list of expenses, you might want to sort them by date or by ascending price or by category, and so on. If you're comfortable using a terminal, you may not want to have to use a big office application just to sort text data. And that's exactly what the [**sort**][2] command is for.
### Installing

You don't need to install **sort** because it's invariably included on any [POSIX][3] system. On most Linux systems, the **sort** command is bundled in a collection of utilities from the GNU organization. On other POSIX systems, such as BSD and Mac, the default **sort** command is not from GNU, so some options may differ. I'll attempt to account for both GNU and BSD implementations in this article.
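If you are unsure which implementation you have, asking for a version string is a quick test (this assumes GNU sort accepts `--version`, which it does; many BSD-derived sorts do not):

```shell
# Print the first line of the version banner on GNU sort;
# fall back to a note when --version is not supported.
if sort --version >/dev/null 2>&1; then
    sort --version | head -n 1
else
    echo "non-GNU (likely BSD) sort"
fi
```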
### Sort lines alphabetically

The **sort** command, by default, looks at the first character of each line of a file and outputs each line in ascending alphabetic order. In the event that two characters on multiple lines are the same, it considers the next character. For example:

```
$ cat distro.list
Slackware
Fedora
Red Hat Enterprise Linux
Ubuntu
Arch
1337
Mint
Mageia
Debian
$ sort distro.list
1337
Arch
Debian
Fedora
Mageia
Mint
Red Hat Enterprise Linux
Slackware
Ubuntu
```

Using **sort** doesn't change the original file. Sort is a filter, so if you want to preserve your data in its sorted form, you must redirect the output using either **>** or **tee**:

```
$ sort distro.list | tee distro.sorted
1337
Arch
Debian
[...]
$ cat distro.sorted
1337
Arch
Debian
[...]
```
### Sort by column
|
||||
|
||||
Complex data sets sometimes need to be sorted by something other than the first letter of each line. Imagine, for instance, a list of animals and each one's species and genus, and each "field" (a "cell" in a spreadsheet) is defined by a predictable delimiter character. This is such a common data format for spreadsheet exports that the CSV (comma-separated values) file extension exists to identify such files (although a CSV file doesn't have to be comma-separated, nor does a delimited file have to use the CSV extension to be valid and usable). Consider this example data set:
|
||||
|
||||
|
||||
```
|
||||
Aptenodytes;forsteri;Miller,JF;1778;Emperor
|
||||
Pygoscelis;papua;Wagler;1832;Gentoo
|
||||
Eudyptula;minor;Bonaparte;1867;Little Blue
|
||||
Spheniscus;demersus;Brisson;1760;African
|
||||
Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
|
||||
Eudyptes;chrysocome;Viellot;1816;Southern Rockhopper
|
||||
Torvaldis;linux;Ewing,L;1996;Tux
|
||||
```
|
||||
|
||||
Given this sample data set, you can use the **\--field-separator** (use **-t** on BSD and Mac—or on GNU to reduce typing) option to set the delimiting character to a semicolon (because this example uses semicolons instead of commas, but it could use any character), and use the **\--key** (**-k** on BSD and Mac or on GNU to reduce typing) option to define which field to sort by. For example, to sort by the second field (starting at 1, not 0) of each line:
|
||||
|
||||
|
||||
```
|
||||
sort --field-separator=";" --key=2
|
||||
Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
|
||||
Eudyptes;chrysocome;Viellot;1816;Sothern Rockhopper
|
||||
Spheniscus;demersus;Brisson;1760;African
|
||||
Aptenodytes;forsteri;Miller,JF;1778;Emperor
|
||||
Torvaldis;linux;Ewing,L;1996;Tux
|
||||
Eudyptula;minor;Bonaparte;1867;Little Blue
|
||||
Pygoscelis;papua;Wagler;1832;Gentoo
|
||||
```

That's somewhat difficult to read, but Unix is famous for its _pipe_ method of constructing commands, so you can use the **column** command to "prettify" the output. Using GNU **column**:

```
$ sort --field-separator=";" \
    --key=2 penguins.list | \
    column --table --separator ";"
Megadyptes   antipodes   Milne-Edwards  1880  Yellow-eyed
Eudyptes     chrysocome  Viellot        1816  Southern Rockhopper
Spheniscus   demersus    Brisson        1760  African
Aptenodytes  forsteri    Miller,JF      1778  Emperor
Torvaldis    linux       Ewing,L        1996  Tux
Eudyptula    minor       Bonaparte      1867  Little Blue
Pygoscelis   papua       Wagler         1832  Gentoo
```

The equivalent options on BSD and Mac are slightly more cryptic to the new user, but shorter to type:

```
$ sort -t ";" \
    -k2 penguins.list | column -t -s ";"
Megadyptes   antipodes   Milne-Edwards  1880  Yellow-eyed
Eudyptes     chrysocome  Viellot        1816  Southern Rockhopper
Spheniscus   demersus    Brisson        1760  African
Aptenodytes  forsteri    Miller,JF      1778  Emperor
Torvaldis    linux       Ewing,L        1996  Tux
Eudyptula    minor       Bonaparte      1867  Little Blue
Pygoscelis   papua       Wagler         1832  Gentoo
```

The **key** definition doesn't have to be set to **2**, of course. Any existing field may be used as the sorting key.
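For instance, field four of the sample data holds a year, which compares more sensibly as a number than as text. A minimal sketch (recreating a few rows of the sample data under the assumed filename **penguins.list**) combines **\--key** with GNU **sort**'s **\--numeric-sort** (**-n**) option:

```shell
# Recreate a few rows of the sample data (the filename is an assumption).
cat > penguins.list << 'EOF'
Aptenodytes;forsteri;Miller,JF;1778;Emperor
Pygoscelis;papua;Wagler;1832;Gentoo
Spheniscus;demersus;Brisson;1760;African
EOF

# Sort by the fourth field (the year), comparing it numerically.
sort --field-separator=";" --key=4 --numeric-sort penguins.list
```

The oldest entry (1760) comes first; adding **-r** would put the newest first instead.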

### Reverse sort

You can reverse the order of a sorted list with the **\--reverse** option (**-r** on BSD and Mac, or on GNU for brevity):

```
$ sort --reverse alphabet.list
z
y
x
w
[...]
```

You can achieve the same result by piping the output of a normal sort through [tac][4].
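As a quick sanity check, the two approaches produce identical output. A small sketch with a throwaway file:

```shell
# Build a small unsorted file (the filename is just for illustration).
printf 'b\nd\na\nc\n' > letters.list

sort --reverse letters.list   # d, c, b, a
sort letters.list | tac       # identical output
```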

### Sorting by month (GNU only)

In a perfect world, everyone would write dates according to the ISO 8601 standard: year, month, day. It's a logical method of specifying a unique date, and it's easy for computers to understand. And yet quite often, humans use other means of identifying dates, including months with pretty arbitrary names.

Fortunately, the GNU **sort** command accounts for this and is able to sort correctly by month name. Use the **\--month-sort** (**-M**) option:

```
$ cat month.list
November
October
September
April
[...]
$ sort --month-sort month.list
January
February
March
April
May
[...]
November
December
```

Months may be identified by their full name or some portion of their names.
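For instance, the common three-letter abbreviations sort into calendar order just as well:

```shell
# --month-sort matches month names by their leading letters, so
# abbreviations work too.
printf 'Oct\nFeb\nDec\nJan\n' | sort --month-sort
```

This prints Jan, Feb, Oct, Dec.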

### Human-readable numeric sort (GNU only)

Another common point of confusion between humans and computers is groups of numbers. For instance, humans often write "1024 bytes" as "1KB" because it's easier and quicker for the human brain to parse "1KB" than "1024" (and it gets easier the larger the number becomes). To a computer, though, a string such as 9KB is larger than, for instance, 1MB (even though 9KB is only a fraction of a megabyte). The GNU **sort** command provides the **\--human-numeric-sort** (**-h**) option to help parse these values correctly.

```
$ cat sizes.list
2M
12MB
1k
9k
900
7000
$ sort --human-numeric-sort sizes.list
900
7000
1k
9k
2M
12MB
```

There are some inconsistencies. For example, 16,000 bytes is greater than 1KB, but **sort** fails to recognize that:

```
$ cat sizes0.list
2M
12MB
16000
1k
$ sort -h sizes0.list
16000
1k
2M
12MB
```

Logically, 16,000 should be written 16KB in this context, so GNU **sort** is not entirely to blame. As long as you are sure that your numbers are consistent, the **\--human-numeric-sort** option can help parse human-readable numbers in a computer-friendly way.
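One way to sidestep the ambiguity entirely is to normalize the sizes to plain byte counts before sorting. A sketch using GNU **numfmt** (also part of coreutils, so again GNU only):

```shell
# Convert IEC suffixes (K, M, G) to byte counts, then sort numerically.
# Plain numbers such as 16000 pass through unchanged.
printf '2M\n16000\n1K\n' | numfmt --from=iec | sort --numeric-sort
```

This yields 1024, 16000, 2097152, placing 16000 after 1K where it belongs.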

### Randomized sort (GNU only)

Sometimes utilities provide the option to do the opposite of what they're meant to do. In a way, it makes no sense for a **sort** command to have the ability to "sort" a file randomly. Then again, the workflow of the command makes it a convenient feature to have. You _could_ use a different command, like [**shuf**][5], or you could just add an option to the command you're using. Whether it's bloat or ingenious UX design, the GNU **sort** command provides the means to sort a file arbitrarily.

The purest form of arbitrary sorting is the **\--random-sort** or **-R** option (not to be confused with the **-r** option, which is short for **\--reverse**).

```
$ sort --random-sort alphabet.list
d
m
p
a
[...]
```

You can run a random sort multiple times on a file for different results each time.
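Note that a random sort shuffles only the order, never the contents; re-sorting the shuffled output always reproduces the ordinary sorted list, which makes a handy sanity check:

```shell
# A shuffle is a permutation: the same lines come back, in a new order.
printf 'a\nb\nc\nd\n' > letters.list
sort --random-sort letters.list | sort > resorted.list
sort letters.list > sorted.list
diff sorted.list resorted.list && echo "same lines"
```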

### Sorted

There are many more features available with the GNU and BSD **sort** commands, so spend some time getting to know the options. You'll be surprised at how flexible **sort** can be, especially when it's combined with other Unix utilities.
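As one example of such a combination, **sort** and **uniq** together make a quick frequency counter, since **uniq -c** only collapses adjacent duplicates and therefore needs sorted input:

```shell
# Count occurrences, then rank them: sort groups the duplicates,
# uniq -c counts each group, and sort -rn orders by the counts.
printf 'apple\npear\napple\nfig\napple\npear\n' | sort | uniq -c | sort -rn
```

The most frequent entry (apple, three times) lands on top.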

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/get-sorted-sort

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://en.wikipedia.org/wiki/Sort_(Unix)
[3]: https://en.wikipedia.org/wiki/POSIX
[4]: https://opensource.com/article/19/9/tac-command
[5]: https://www.gnu.org/software/coreutils/manual/html_node/shuf-invocation.html
@ -0,0 +1,132 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How I used the wget Linux command to recover lost images)
[#]: via: (https://opensource.com/article/19/10/how-community-saved-artwork-creative-commons)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

How I used the wget Linux command to recover lost images
======
The story of the rise and fall of the Open Clip Art Library and the
birth of FreeSVG.org, a new library of communal artwork.
![White shoes on top of an orange tribal pattern][1]

In 2004, the Open Clip Art Library (OCAL) was launched as a source of free illustrations for anyone to use, for any purpose, without requiring attribution or anything in return. This site was the open source world’s answer to the big stacks of clip art CDs on the shelf of every home office in the 1990s, and to the art dumps provided by the closed-source office and artistic software titles.

In the beginning, the clip art library consisted mostly of work by a few contributors, but in 2010 it went live with a brand new interactive website, allowing anyone to create and contribute clip art with a vector illustration application. The site immediately garnered contributions from around the globe, and from all manner of free software and free culture projects. A special importer for this library was even included in [Inkscape][2].

However, in early 2019, the website hosting the Open Clip Art Library went offline with no warning or explanation. Its community, which had grown to number in the thousands, assumed at first that this was a temporary glitch. The site remained offline, however, for over six months without any clear explanation of what had happened.

Rumors started to swell. The site was being updated ("There is years of technical debt to pay off," said site developer Jon Philips in an email). The site had fallen to rampant DDoS attacks, claimed a Twitter account. The maintainer had fallen prey to identity theft, another Twitter account claimed. Today, as of this writing, the site’s one and only remaining page declares that it is in "maintenance and protected mode," the meaning of which is unclear, except that users cannot access its content.

### Recovering the commons

Sites appear and disappear over the course of time, but the loss of the Open Clip Art Library was particularly surprising to its community because it was seen as a community project. Few community members understood that the site hosting the library had fallen into the hands of a single maintainer, so while the artwork in the library was owned by everyone due to its [Creative Commons 0 License][3], access to it was functionally owned by a single maintainer. And, because the site’s community kept in touch with one another through the site, that same maintainer effectively owned the community.

When the site failed, the community lost access to its artwork as well as each other. And without the site, there was no community.

Initially, everything on the site was blocked when it went down. After several months, though, users started recognizing that the site’s database was still online, which meant that a user could access an individual art file by entering its exact URL. In other words, you couldn’t navigate to the art file through clicking around a website, but if you already knew the address, then you could bring it up in your browser. Similarly, technical (or lazy) users realized it was also possible to "scrape" the site with an automated web browser like **wget**.

The **wget** Linux command is _technically_ a web browser, although it doesn’t let you browse interactively the way you do with Firefox. Instead, **wget** goes out onto the internet and retrieves a file or a collection of files and downloads them to your hard drive. You can then open those files in Firefox or a text editor, or whatever application is most appropriate, and view the content.

Usually, **wget** needs to know a specific file to fetch. If you’re on Linux or macOS with **wget** installed, you can try this process by downloading the index page for [example.com][4]:

```
$ wget example.com/index.html
[...]
$ tail index.html

<body><div>
<h1>Example Domain</h1>
<p>This domain is for illustrative examples in documents.
You may use this domain in examples without permission.</p>
<p><a href="http://www.iana.org/domains/example">More info</a></p>
</div></body></html>
```

To scrape the Open Clip Art Library, I used the **\--mirror** option, so that I could point **wget** to just the directory containing the artwork so it could download everything within that directory. This action resulted in four straight days (96 hours) of constant downloading, ending with an excess of 100,000 SVG files that had been contributed by over 5,000 community members. Unfortunately, the author of any file that did not have proper metadata was irrecoverable because this information was locked in inaccessible files in the database, but the CC0 license meant that this issue _technically_ didn’t matter (because no attribution is required with CC0 files).
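The article doesn't give the exact invocation, but a mirroring command along these lines is typical. The URL below is a placeholder, not the real OCAL address, and the command is echoed rather than executed so the sketch doesn't hammer a live server:

```shell
# Hypothetical sketch of a wget mirroring run. --mirror turns on recursion
# and timestamping; --no-parent keeps the crawl inside the target directory;
# --wait adds a polite pause between requests.
url="https://example.com/clipart/"   # placeholder starting directory
cmd="wget --mirror --no-parent --wait=1 $url"
echo "$cmd"
```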

A casual analysis of the downloaded files also revealed that nearly 45,000 of them were copies of the same single file (the site’s logo). This was caused by redirects pointing to the site's logo (for reasons unknown), and careful parsing could extract the original destination. Another 96 hours, and all clip art posted on OCAL up to its last day was recovered: **a total of about 156,000 images.**

SVG files tend to be small, but this is still an enormous amount of work that poses a few very real problems. First of all, several gigabytes of online storage would be needed so the artwork could be made available to its former community. Secondly, a means of searching the artwork would be necessary, because it’s just not realistic to browse through 55,000 files manually.

It became apparent that what the community really needed was a platform.

### Building a new platform

For some time, the site [Public Domain Vectors][6] had been publishing vector art that was in the public domain. While it remains a popular site, open source users often used it only as a secondary source of art because most of the files there were in the EPS and AI formats, both of which are associated with Adobe. Both file formats can generally be converted to SVG but at a loss of features.

When the Public Domain Vectors site’s maintainers (Vedran and Boris) heard about the loss of the Open Clip Art Library, they decided to create a site oriented toward the open source community. True to form, they chose the open source [Laravel][7] framework as the backend, which provided the site with an admin dashboard and user access. The framework, being robust and well-developed, also allowed them to respond quickly to bug reports and feature requests, and to upgrade the site as needed. The site they are building is called [FreeSVG.org][8], and is already a robust and thriving library of communal artwork.

Since then they have been uploading all of the clip art from the Open Clip Art Library, and they're even diligently tagging and categorizing the art as they go. As creators of Public Domain Vectors, they are also contributing their own images in SVG format. Their aim is to become the primary resource for SVG images with a CC0 license on the internet.

### Contributing

The maintainers of [FreeSVG.org][8] are aware that they have inherited significant stewardship. They are working to title and describe all images on the site so that users can easily find artwork, and will provide this file to the community once it is ready, believing strongly that the metadata about the art belongs to the people that create and use the art as much as the art itself does. They're also aware that unforeseen circumstances can arise, so they create regular backups of their site and content, and intend to make the most recent backup available to the public, should their site fail.

If you want to add to the Creative Commons content of [FreeSVG.org][9], then download [Inkscape][10] and start drawing. There’s plenty of public domain artwork out there in the world, like [historical advertisements][11], [tarot cards][12], and [storybooks][13] just waiting to be converted to SVG, so you can contribute even if you aren’t confident in your drawing skills. Visit the [FreeSVG forum][14] to connect with and support other contributors.

The concept of the _commons_ is important. [Creative Commons benefits everyone][15], whether you’re a student, teacher, librarian, small business owner, or CEO. If you don’t contribute directly, then you can always help promote it.

That’s a strength of free culture: It doesn’t just scale, it gets better when more people participate.

### Hard lessons learned

From the demise of the Open Clip Art Library to the rise of FreeSVG.org, the open culture community has learned several hard lessons. For posterity, here are the ones that I believe are most important.

#### Maintain your metadata

If you’re a content creator, help the archivists of the future and add metadata to your files. Most image, music, font, and video file formats can have EXIF data embedded into them, and others have metadata entry interfaces in the applications that create them. Be diligent in tagging your work with your name, website or public email, and license.

#### Make copies

Don’t assume that somebody else is doing backups. If you care about communal digital content, then back it up yourself, or else don’t count on having it available forever. The trope that _whatever’s uploaded to the internet is forever_ may be true, but that doesn’t mean it’s _available to you_ forever. If the Open Clip Art Library files hadn’t become secretly available again, it’s unlikely that anyone would have ever successfully uncovered all 55,000 images from random places on the web, or from personal stashes on people’s hard drives around the globe.

#### Create external channels

If a community is defined by a single website or physical location, then that community is as good as dissolved should it lose access to that space. If you’re a member of a community that’s driven by a single organization or site, you owe it to yourselves to share contact information with those you care about and to establish a channel for communication even when that site is not available.

For example, [Opensource.com][16] itself maintains mailing lists and other off-site channels for its authors and correspondents to communicate with one another, with or without the intervention or even existence of the website.

#### Free culture is worth working for

The internet is sometimes seen as a lazy person’s social club. You can log on when you want and turn it off when you’re tired, and you can wander into whatever social circle you want.

But in reality, free culture can be hard work. It’s not hard in the sense that it’s difficult to be a part of, but it’s something you have to work to maintain. If you ignore the community you’re in, then the community may wither and fade before you realize it.

Take a moment to look around you and identify what communities you’re a part of, and if nothing else, tell someone that you appreciate what they bring to your life. And just as importantly, keep in mind that you’re contributing to the lives of your communities, too.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/how-community-saved-artwork-creative-commons

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tribal_pattern_shoes.png?itok=e5dSf2hS (White shoes on top of an orange tribal pattern)
[2]: https://opensource.com/article/18/1/inkscape-absolute-beginners
[3]: https://creativecommons.org/share-your-work/public-domain/cc0/
[4]: http://example.com
[5]: http://www.iana.org/domains/example
[6]: http://publicdomainvectors.org
[7]: https://github.com/viralsolani/laravel-adminpanel
[8]: https://freesvg.org
[9]: http://freesvg.org
[10]: http://inkscape.org
[11]: https://freesvg.org/drinking-coffee-vector-drawing
[12]: https://freesvg.org/king-of-swords-tarot-card
[13]: https://freesvg.org/space-pioneers-135-scene-vector-image
[14]: http://forum.freesvg.org/
[15]: https://opensource.com/article/18/1/creative-commons-real-world
[16]: http://Opensource.com
@ -0,0 +1,452 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Understanding system calls on Linux with strace)
[#]: via: (https://opensource.com/article/19/10/strace)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)

Understanding system calls on Linux with strace
======
Trace the thin layer between user processes and the Linux kernel with
strace.
![Hand putting a Linux file folder into a drawer][1]

A system call is a programmatic way a program requests a service from the kernel, and **strace** is a powerful tool that allows you to trace the thin layer between user processes and the Linux kernel.

To understand how an operating system works, you first need to understand how system calls work. One of the main functions of an operating system is to provide abstractions to user programs.

An operating system can roughly be divided into two modes:

  * **Kernel mode:** A privileged and powerful mode used by the operating system kernel
  * **User mode:** Where most user applications run

Users mostly work with command-line utilities and graphical user interfaces (GUI) to do day-to-day tasks. System calls work silently in the background, interfacing with the kernel to get work done.

System calls are very similar to function calls, which means they accept and work on arguments and return values. The only difference is that system calls enter the kernel, while function calls do not. Switching from user space to kernel space is done using a special [trap][2] mechanism.

Most of this is hidden away from the user by system libraries (aka **glibc** on Linux systems). Even though system calls are generic in nature, the mechanics of issuing a system call are very much machine-dependent.

This article explores some practical examples by using some general commands and analyzing the system calls made by each command with **strace**. These examples use Red Hat Enterprise Linux, but the commands should work the same on other Linux distros:

```
[root@sandbox ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
[root@sandbox ~]#
[root@sandbox ~]# uname -r
3.10.0-1062.el7.x86_64
[root@sandbox ~]#
```

First, ensure that the required tools are installed on your system. You can verify whether **strace** is installed using the RPM command below; if it is, you can check the **strace** utility version number using the **-V** option:

```
[root@sandbox ~]# rpm -qa | grep -i strace
strace-4.12-9.el7.x86_64
[root@sandbox ~]#
[root@sandbox ~]# strace -V
strace -- version 4.12
[root@sandbox ~]#
```

If that doesn't work, install **strace** by running:

```
yum install strace
```

For the purpose of this example, create a test directory within **/tmp** and create two files using the **touch** command:

```
[root@sandbox ~]# cd /tmp/
[root@sandbox tmp]#
[root@sandbox tmp]# mkdir testdir
[root@sandbox tmp]#
[root@sandbox tmp]# touch testdir/file1
[root@sandbox tmp]# touch testdir/file2
[root@sandbox tmp]#
```

(I used the **/tmp** directory because everybody has access to it, but you can choose another directory if you prefer.)

Verify that the files were created using the **ls** command on the **testdir** directory:

```
[root@sandbox tmp]# ls testdir/
file1  file2
[root@sandbox tmp]#
```

You probably use the **ls** command every day without realizing system calls are at work underneath it. There is abstraction at play here; here's how this command works:

```
Command-line utility -> Invokes functions from system libraries (glibc) -> Invokes system calls
```

The **ls** command internally calls functions from system libraries (aka **glibc**) on Linux. These libraries invoke the system calls that do most of the work.

If you want to know which functions were called from the **glibc** library, use the **ltrace** command followed by the regular **ls testdir/** command:

```
ltrace ls testdir/
```

If **ltrace** is not installed, install it by entering:

```
yum install ltrace
```

A bunch of output will be dumped to the screen; don't worry about it—just follow along. Some of the important library functions from the output of the **ltrace** command that are relevant to this example include:

```
opendir("testdir/") = { 3 }
readdir({ 3 }) = { 101879119, "." }
readdir({ 3 }) = { 134, ".." }
readdir({ 3 }) = { 101879120, "file1" }
strlen("file1") = 5
memcpy(0x1665be0, "file1\0", 6) = 0x1665be0
readdir({ 3 }) = { 101879122, "file2" }
strlen("file2") = 5
memcpy(0x166dcb0, "file2\0", 6) = 0x166dcb0
readdir({ 3 }) = nil
closedir({ 3 })
```

By looking at the output above, you probably can understand what is happening. A directory called **testdir** is being opened by the **opendir** library function, followed by calls to the **readdir** function, which is reading the contents of the directory. At the end, there is a call to the **closedir** function, which closes the directory that was opened earlier. Ignore the other **strlen** and **memcpy** functions for now.

You can see which library functions are being called, but this article will focus on system calls that are invoked by the system library functions.

Similar to the above, to understand what system calls are invoked, just put **strace** before the **ls testdir** command, as shown below. Once again, a bunch of gibberish will be dumped to your screen, which you can follow along with here:

```
[root@sandbox tmp]# strace ls testdir/
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
brk(NULL) = 0x1f12000
<<< truncated strace output >>>
write(1, "file1  file2\n", 13file1  file2
) = 13
close(1) = 0
munmap(0x7fd002c8d000, 4096) = 0
close(2) = 0
exit_group(0) = ?
+++ exited with 0 +++
[root@sandbox tmp]#
```

The output on the screen after running the **strace** command was simply system calls made to run the **ls** command. Each system call serves a specific purpose for the operating system, and they can be broadly categorized into the following sections:

  * Process management system calls
  * File management system calls
  * Directory and filesystem management system calls
  * Other system calls

An easier way to analyze the information dumped onto your screen is to log the output to a file using **strace**'s handy **-o** flag. Add a suitable file name after the **-o** flag and run the command again:

```
[root@sandbox tmp]# strace -o trace.log ls testdir/
file1  file2
[root@sandbox tmp]#
```

This time, no output was dumped to the screen—the **ls** command worked as expected, showing the file names and logging all the output to the file **trace.log**. The file has almost 100 lines of content just for a simple **ls** command:

```
[root@sandbox tmp]# ls -l trace.log
-rw-r--r--. 1 root root 7809 Oct 12 13:52 trace.log
[root@sandbox tmp]#
[root@sandbox tmp]# wc -l trace.log
114 trace.log
[root@sandbox tmp]#
```

Take a look at the first line in the example's trace.log:

```
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
```

  * The first word of the line, **execve**, is the name of the system call being executed.
  * The text within the parentheses is the arguments provided to the system call.
  * The number after the **=** sign (which is **0** in this case) is the value returned by the **execve** system call.

The output doesn't seem too intimidating now, does it? And you can apply the same logic to understand other lines.
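Because every line follows the same name(arguments) = value shape, standard shell tools can pick a log line apart. A small sketch using parameter expansion:

```shell
# Dissect one strace log line: the system call name is everything before
# the first "(", and the return value is everything after the last "= ".
line='execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0'
name=${line%%\(*}     # strip from the first "(" onward -> execve
ret=${line##*= }      # strip through the last "= "     -> 0
echo "$name returned $ret"
```

This prints "execve returned 0".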
|
||||
|
||||
Now, narrow your focus to the single command that you invoked, i.e., **ls testdir**. You know the directory name used by the command **ls**, so why not **grep** for **testdir** within your **trace.log** file and see what you get? Look at each line of the results in detail:
|
||||
|
||||
|
||||
```
|
||||
[root@sandbox tmp]# grep testdir trace.log
|
||||
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
|
||||
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
|
||||
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
|
||||
[root@sandbox tmp]#
|
||||
```
|
||||
|
||||
Thinking back to the analysis of **execve** above, can you tell what this system call does?
|
||||
|
||||
|
||||
```
|
||||
`execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0`
|
||||
```
|
||||
|
||||
You don't need to memorize all the system calls or what they do, because you can refer to documentation when you need to. Man pages to the rescue! Ensure the following package is installed before running the **man** command:
```
[root@sandbox tmp]# rpm -qa | grep -i man-pages
man-pages-3.53-5.el7.noarch
[root@sandbox tmp]#
```

Remember that you need to add a **2** between the **man** command and the system call name. If you read **man**'s man page using **man man**, you can see that section 2 is reserved for system calls. Similarly, if you need information on library functions, you need to add a **3** between **man** and the library function name.

The following are the manual's section numbers and the types of pages they contain:
```
1. Executable programs or shell commands
2. System calls (functions provided by the kernel)
3. Library calls (functions within program libraries)
4. Special files (usually found in /dev)
```

Run the following **man** command with the system call name to see the documentation for that system call:
```
man 2 execve
```

As per the **execve** man page, this system call executes the program passed in its arguments (in this case, **ls**). Additional arguments can be passed along to that program, such as **testdir** in this example. Therefore, this system call just runs **ls** with **testdir** as the argument:
```
execve - execute program

DESCRIPTION
       execve() executes the program pointed to by filename
```
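You can watch this replace-the-process behavior from the shell, whose `exec` builtin invokes **execve** directly. A small sketch (not part of the article's trace session): because the process image is replaced rather than a child being spawned, the PID printed before and after `exec` is the same.

```shell
# The shell's 'exec' builtin calls execve(2) directly: the current process
# image is replaced in place, so the PID stays the same across the call.
sh -c 'echo "pid before exec: $$"; exec sh -c "echo \"pid after exec: \$\$\""'
```

Both lines print the same PID, because `exec` never forks; it only swaps a new program image into the existing process.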
The next system call, named **stat**, uses the **testdir** argument:

```
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
```

Use **man 2 stat** to access the documentation. **stat** is the system call that gets a file's status—remember that everything in Linux is a file, including a directory.
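If you want to poke at this without strace, the **stat(1)** command is a thin front end over this system call. A quick sketch (assuming GNU coreutils for the `--format` option; the `/tmp/stat_demo` path is purely illustrative):

```shell
# stat(1) reports the fields that the stat(2) system call returns;
# %F is the file type decoded from st_mode, %s is st_size in bytes.
mkdir -p /tmp/stat_demo/testdir
stat --format='type=%F size=%s' /tmp/stat_demo/testdir
```

The type is reported as `directory`, which is the same `S_IFDIR` information you saw packed into `st_mode` in the trace.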

Next, the **openat** system call opens **testdir**. Keep an eye on the **3** that is returned. This is a file descriptor, which will be used by later system calls:

```
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
```

So far, so good. Now, open the **trace.log** file and go to the line following the **openat** system call. You will see the **getdents** system call being invoked, which does most of what is required to execute the **ls testdir** command. Now, **grep getdents** from the **trace.log** file:
```
[root@sandbox tmp]# grep getdents trace.log
getdents(3, /* 4 entries */, 32768) = 112
getdents(3, /* 0 entries */, 32768) = 0
[root@sandbox tmp]#
```

The **getdents** man page describes it as **get directory entries**, which is what you want to do. Notice that the argument for **getdents** is **3**, which is the file descriptor from the **openat** system call above.

Now that you have the directory listing, you need a way to display it in your terminal. So, **grep** for another system call, **write**, which is used to write to the terminal, in the logs:
```
[root@sandbox tmp]# grep write trace.log
write(1, "file1  file2\n", 13) = 13
[root@sandbox tmp]#
```

In these arguments, you can see the file names that will be displayed: **file1** and **file2**. Regarding the first argument (**1**), remember in Linux that, when any process is run, three file descriptors are opened for it by default. Following are the default file descriptors:

* 0 - Standard input
* 1 - Standard out
* 2 - Standard error

So, the **write** system call is displaying **file1** and **file2** on the standard display, which is the terminal, identified by **1**.
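Those same descriptor numbers are what the shell's redirection syntax refers to, so you can confirm which stream a message uses by splitting them apart. A small sketch (the `/tmp/fd_demo` path and file names are illustrative):

```shell
# Send fd 1 (standard out) and fd 2 (standard error) to separate files
# to see which stream each message was written to.
mkdir -p /tmp/fd_demo/testdir
touch /tmp/fd_demo/testdir/file1 /tmp/fd_demo/testdir/file2
cd /tmp/fd_demo
ls testdir/ missing 1>out.txt 2>err.txt || true
cat out.txt   # the directory listing, written via write(1, ...)
cat err.txt   # the complaint about 'missing', written via write(2, ...)
```

Tracing that `ls` invocation with strace would show the listing going through `write(1, ...)` and the error through `write(2, ...)`.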

Now you know which system calls did most of the work for the **ls testdir/** command. But what about the other 100+ system calls in the **trace.log** file? The operating system has to do a lot of housekeeping to run a process, so a lot of what you see in the log file is process initialization and cleanup. Read the entire **trace.log** file and try to understand what is happening to make the **ls** command work.

Now that you know how to analyze system calls for a given command, you can use this knowledge for other commands to understand what system calls are being executed. **strace** provides a lot of useful command-line flags to make it easier for you, and some of them are described below.

By default, **strace** does not include all system call information. However, it has a handy **-v** (verbose) option that can provide additional information on each system call:
```
strace -v ls testdir
```

It is good practice to always use the **-f** option when running the **strace** command. It allows **strace** to trace any child processes created by the process currently being traced:
```
strace -f ls testdir
```

Say you just want the names of system calls, the number of times they ran, and the percentage of time spent in each system call. You can use the **-c** flag to get those statistics:
```
strace -c ls testdir/
```

Suppose you want to concentrate on a specific system call, such as the **open** system call, and ignore the rest. You can use the **-e** flag followed by the system call name:
```
[root@sandbox tmp]# strace -e open ls testdir
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libcap.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libacl.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpcre.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
file1  file2
+++ exited with 0 +++
[root@sandbox tmp]#
```

What if you want to concentrate on more than one system call? No worries, you can use the same **-e** command-line flag with a comma between the two system calls. For example, to see the **write** and **getdents** system calls:
```
[root@sandbox tmp]# strace -e write,getdents ls testdir
getdents(3, /* 4 entries */, 32768) = 112
getdents(3, /* 0 entries */, 32768) = 0
write(1, "file1  file2\n", 13file1  file2
) = 13
+++ exited with 0 +++
[root@sandbox tmp]#
```

The examples so far have traced explicitly run commands. But what about commands that have already been run and are in execution? What, for example, if you want to trace daemons that are just long-running processes? For this, **strace** provides a special **-p** flag to which you can provide a process ID.

Instead of running a **strace** on a daemon, take the example of a **cat** command, which usually displays the contents of a file if you give a file name as an argument. If no argument is given, the **cat** command simply waits at a terminal for the user to enter text. Once text is entered, it repeats the given text until a user presses Ctrl+C to exit.
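That echo behavior is easy to confirm by feeding **cat** input from a pipe instead of a terminal; each line is read from standard input and written straight back to standard output:

```shell
# With no file arguments, cat copies standard input to standard output,
# which is why interactive input appears to be "repeated".
printf 'x0x0\n' | cat
# prints: x0x0
```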

Run the **cat** command from one terminal; it will show you a prompt and simply wait there (remember **cat** is still running and has not exited):

```
[root@sandbox tmp]# cat
```

From another terminal, find the process identifier (PID) using the **ps** command:
```
[root@sandbox ~]# ps -ef | grep cat
root     22443 20164  0 14:19 pts/0    00:00:00 cat
root     22482 20300  0 14:20 pts/1    00:00:00 grep --color=auto cat
[root@sandbox ~]#
```
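When you started the background process yourself, the shell can also hand you the PID directly through the special parameter `$!`, which saves grepping through **ps** output. A sketch using `sleep` as a stand-in for a long-running process:

```shell
# $! holds the PID of the most recent background job; that value is
# exactly what you would pass to strace -p.
sleep 300 &
pid=$!
ps -p "$pid" -o comm=   # confirm what the PID belongs to
kill "$pid"             # clean up the demo process
```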

Now, run **strace** on the running process with the **-p** flag and the PID (which you found above using **ps**). After running **strace**, the output states what the process was attached to along with the PID number. Now, **strace** is tracing the system calls made by the **cat** command. The first system call you see is **read**, which is waiting for input from 0, or standard input, which is the terminal where the **cat** command ran:

```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0,
```

Now, move back to the terminal where you left the **cat** command running and enter some text. I entered **x0x0** for demo purposes. Notice how **cat** simply repeated what I entered; hence, **x0x0** appears twice. I input the first one, and the second one was the output repeated by the **cat** command:

```
[root@sandbox tmp]# cat
x0x0
x0x0
```

Move back to the terminal where **strace** was attached to the **cat** process. You now see additional system calls: the earlier **read** call has completed, reading **x0x0** from the terminal; a **write** call then wrote **x0x0** back to the terminal; and a new **read** is waiting to read from the terminal again. Note that Standard input (**0**) and Standard out (**1**) are both in the same terminal:
```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0, "x0x0\n", 65536) = 5
write(1, "x0x0\n", 5) = 5
read(0,
```

Imagine how helpful this is when running **strace** against daemons to see everything they do in the background. Kill the **cat** command by pressing Ctrl+C; this also ends your **strace** session, since the traced process is no longer running.

If you want to see a timestamp against all your system calls, simply use the **-t** option with **strace**:
```
[root@sandbox ~]# strace -t ls testdir/

14:24:47 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
14:24:47 brk(NULL) = 0x1f07000
14:24:47 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2530bc8000
14:24:47 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
14:24:47 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```

What if you want to know the time spent between system calls? **strace** has a handy **-r** flag that shows the time spent executing each system call. Pretty useful, isn't it?
```
[root@sandbox ~]# strace -r ls testdir/

0.000000 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
0.000368 brk(NULL) = 0x1966000
0.000073 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb6b1155000
0.000047 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
0.000119 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```

### Conclusion

The **strace** utility is very handy for understanding system calls on Linux. To learn about its other command-line flags, please refer to the man pages and online documentation.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/strace

作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://en.wikipedia.org/wiki/Trap_(computing)
@ -1,66 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use IoT devices to keep children safe?)
[#]: via: (https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/)
[#]: author: (Andrew Carroll https://opensourceforu.com/author/andrew-carroll/)

如何使用物联网设备来确保儿童安全?
======

[![][1]][2]

IoT (物联网)设备正在迅速改变我们的生活。这些设备无处不在,从我们的家庭到其它行业。根据一些预测数据,到2020年,将会有100亿个 IoT 设备。到2025年,该数量将增长到220亿。目前,物联网已经在很多领域得到了应用,包括智能家居,工业生产过程,农业甚至医疗保健领域。伴随着如此广泛的应用,物联网显然已经成为近年来的热门话题之一。
多种因素促成了物联网设备在多个学科的爆炸式增长。这其中包括低成本处理器和无线连接的的可用性, 以及开源平台的信息交流推动了物联网领域的创新。与传统的应用程序开发相比,物联网设备的开发成指数级增长,因为它的资源是开源的。
在解释如何使用物联网设备来保护儿童之前,必须对物联网技术有基本的了解。


**IOT 设备是什么?**
IOT 设备是指那些在没有人类参与的情况下彼此之间可以通信的设备。 因此,许多专家并不将智能手机和计算机视为物联网设备。 此外,物联网设备必须能够收集数据并且能将收集到的数据传送到其他设备或云端进行处理。

然而,在某些领域中,我们需要探索物联网的潜力。 儿童往往是脆弱的,他们很容易成为犯罪分子和其他蓄意伤害者的目标。 无论在物理世界还是数字世界中,儿童都很容易犯罪。 因为父母不能始终亲自到场保护孩子; 这就是为什么需要监视工具了。

除了适用于儿童的可穿戴设备外,还有许多父母监视应用程序,例如Xnspy,可实时监控儿童并提供信息的实时更新。 这些工具可确保儿童安全。 可穿戴设备确保儿童身体上的安全性,而家长监控应用可确保儿童的上网安全。

由于越来越多的孩子花费时间在智能手机上,毫无意外地,他们也就成为诈骗分子的主要目标。 此外,由于恋童癖,网络自夸和其他犯罪在网络上的盛行,儿童也有可能成为网络欺凌的目标。

这些解决方案够吗? 我们需要找到物联网解决方案,以确保孩子们在网上和线下的安全。 在当代,我们如何确保孩子的安全? 我们需要提出创新的解决方案。 物联网可以帮助保护孩子在学校和家里的安全。


**物联网的潜力**
物联网设备提供的好处很多。 举例来说,父母可以远程监控自己的孩子,而又不会显得太霸道。 因此,儿童在拥有安全环境的同时也会有空间和自由让自己变得独立。
而且,父母也不必在为孩子的安全而担忧。物联网设备可以提供7x24小时的信息更新。像 Xnspy 之类的监视应用程序在提供有关孩子的智能手机活动信息方面更进了一步。随着物联网设备变得越来越复杂,拥有更长使用寿命的电池只是一个时间问题。诸如位置跟踪器之类的物联网设备可以提供有关孩子下落的准确详细信息,所以父母不必担心。

虽然可穿戴设备已经非常好了,但在确保儿童安全方面,这些通常还远远不够。因此,要为儿童提供安全的环境,我们还需要其他方法。许多事件表明,学校比其他任何公共场所都容易受到攻击。因此,学校需要采取安全措施,以确保儿童和教师的安全。在这一点上,物联网设备可用于检测潜在威胁并采取必要的措施来防止攻击。威胁检测系统包括摄像头。系统一旦检测到威胁,便可以通知当局,如一些执法机构和医院。智能锁等设备可用于封锁学校(包括教室),来保护儿童。除此之外,还可以告知父母其孩子的安全,并立即收到有关威胁的警报。这将需要实施无线技术,例如 Wi-Fi 和传感器。因此,学校需要制定专门用于提供教室安全性的预算。

智能家居实现拍手关灯,也可以让你的家庭助手帮你关灯。 同样,物联网设备也可用在屋内来保护儿童。 在家里,物联网设备(例如摄像头)为父母在照顾孩子时提供100%的可见性。 当父母不在家里时,可以使用摄像头和其他传感器检测是否发生了可疑活动。 其他设备(例如连接到这些传感器的智能锁)可以锁门和窗,以确保孩子们的安全。

同样,可以引入许多物联网解决方案来确保孩子的安全。


**有多好就有多坏**
物联网设备中的传感器会创建大量数据。 数据的安全性是至关重要的一个因素。 收集的有关孩子的数据如果落入不法分子手中会存在危险。 因此,需要采取预防措施。 IoT 设备中泄露的任何数据都可用于确定行为模式。 因此,必须投资提供不违反用户隐私的安全物联网解决方案。

IoT 设备通常连接到 Wi-Fi,用于设备之间传输数据。未加密数据的不安全网络会带来某些风险。 这样的网络很容易被窃听。 黑客可以使用此类网点来入侵系统。 他们还可以将恶意软件引入系统,从而使系统变得脆弱,易受攻击。 此外,对设备和公共网络(例如学校的网络)的网络攻击可能导致数据泄露和私有数据盗用。 因此,在实施用于保护儿童的物联网解决方案时,保护网络和物联网设备的总体计划必须生效。

物联网设备保护儿童在学校和家里的安全的潜力尚未发现有什么创新。 我们需要付出更多努力来保护连接 IoT 设备的网络安全。 此外,物联网设备生成的数据可能落入不法分子手中,从而造成更多麻烦。 因此,这是物联网安全至关重要的一个领域。

--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/

作者:[Andrew Carroll][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensourceforu.com/author/andrew-carroll/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?resize=696%2C507&ssl=1 (Visual Internet of things_EB May18)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?fit=900%2C656&ssl=1